[petsc-dev] Implementing longer pipelines with VecDotBegin and VecDotEnd
Jed Brown
jedbrown at mcs.anl.gov
Thu Mar 23 22:54:05 CDT 2017
Barry Smith <bsmith at mcs.anl.gov> writes:
>> Meh,
>>
>> VecNormBegin(X,&request1x);
>> VecNormBegin(Y,&request1y);
>> VecNormEnd(X,request1x,&norm);
>> VecAXPY(Y,-1,X);
>> VecNormBegin(Y,&request2y);
>> VecNormEnd(Y,request2y,&norm2y);
>> VecNormEnd(Y,request1y,&norm1y);
>
> I don't understand what you are getting at here. You don't seem to understand my use case, where multiple inner products/norms share the same MPI communication (which was the original reason for VecNormBegin/End); see, for example, KSPSolve_CR.
>
> Are you somehow (incompetently) saying that the first two VecNorms
> share the same parallel communication (even though they have
> different request values), while the third norm has its own MPI
> communication?
Yeah, same as now. Every time you call *Begin() using a communicator,
you get a new request for something in that "batch". When the batch is
closed, either by an *End() or by PetscCommSplitReductionBegin(), any
future *Begin() calls go into a new batch. The old batch wouldn't be
collected until all of its requests have been *End()ed.
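Concretely, annotating the snippet above with the batch boundaries I
have in mind (this assumes the hypothetical request-returning
VecNormBegin()/VecNormEnd() from the proposal, not the current
signatures, which take a NormType and no request handle):
VecNormBegin(X,&request1x);       /* opens batch 1; request1x joins it          */
VecNormBegin(Y,&request1y);       /* request1y also joins batch 1               */
VecNormEnd(X,request1x,&norm);    /* first End closes batch 1 and performs the
                                     combined reduction for request1x/request1y */
VecAXPY(Y,-1,X);                  /* local work; batch 1 is already closed      */
VecNormBegin(Y,&request2y);       /* batch 1 is closed, so this opens batch 2   */
VecNormEnd(Y,request2y,&norm2y);  /* closes batch 2 and performs its reduction  */
VecNormEnd(Y,request1y,&norm1y);  /* completes the last request of batch 1, so
                                     the old batch can now be collected         */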
> Please explain how this works. Because an End was done, the next
> Begin somehow knows to create an entirely new reduction object that
> it tracks, while the old reduction is kept around (where?) to
> complete all the first-phase requests?
Yeah, I don't think it's hard to implement, but it requires some
refactoring of PetscSplitReduction.
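Roughly the kind of refactoring I have in mind is sketched below; the
names and layout are hypothetical, not the current PetscSplitReduction
struct:
#include <petscsys.h>   /* PetscInt, PetscScalar, PetscBool, MPI types */

/* Hypothetical per-batch state; every *Begin() appends a request to the
   current batch on its communicator. */
typedef struct _SplitReductionBatch {
  PetscInt                     numops;    /* requests added by *Begin()         */
  PetscInt                     numended;  /* requests completed by *End()       */
  PetscScalar                 *lvalues;   /* local contributions                */
  PetscScalar                 *gvalues;   /* reduced results                    */
  MPI_Request                  request;   /* handle if the reduction is async   */
  PetscBool                    closed;    /* set by the first *End() or by
                                             PetscCommSplitReductionBegin()     */
  struct _SplitReductionBatch *next;      /* older batches not yet collected    */
} SplitReductionBatch;

/* Hypothetical state attached to the communicator: *Begin() appends to
   'current'; closing moves it onto 'pending' and creates a fresh
   'current'; a pending batch is freed once numended == numops. */
typedef struct {
  SplitReductionBatch *current;
  SplitReductionBatch *pending;
} SplitReductionState;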
> I am ok with this model if it can be implemented.