[petsc-dev] Implementing longer pipelines with VecDotBegin and VecDotEnd

Barry Smith bsmith at mcs.anl.gov
Thu Mar 23 23:02:23 CDT 2017


> On Mar 23, 2017, at 10:54 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> 
> Barry Smith <bsmith at mcs.anl.gov> writes:
>>> Meh,
>>> 
>>> VecNormBegin(X,&request1x);      /* batch 1 */
>>> VecNormBegin(Y,&request1y);      /* batch 1 */
>>> VecNormEnd(X,request1x,&norm1x); /* closes batch 1 */
>>> VecAXPY(Y,-1,X);
>>> VecNormBegin(Y,&request2y);      /* batch 2 */
>>> VecNormEnd(Y,request2y,&norm2y); /* completes batch 2 */
>>> VecNormEnd(Y,request1y,&norm1y); /* completes batch 1's outstanding request */
>> 
>>   I don't understand what you are getting at here. You don't seem to
>>   understand my use case, where multiple inner products/norms share the
>>   same MPI communication (which was the original reason for
>>   VecNormBegin/End); see, for example, KSPSolve_CR.
>> 
>>    Are you saying that the first two VecNorms share the same parallel
>>    communication (even though they have different request values),
>>    while the third norm has its own MPI communication?
> 
> Yeah, same as now.  Every time you call *Begin() using a communicator,
> you get a new request for something in that "batch".  When the batch is
> closed, either by an *End() or PetscCommSplitReductionBegin(), any future
> *Begin() calls will go into a new batch.

   OK, so to support their use case they would need to call PetscCommSplitReductionBegin() explicitly?


   Start the first inner product.
   Do the mat-mult.
   Call PetscCommSplitReductionBegin() to indicate the first batch is done?
   Start the second inner product (it will be handled in a second batch).
   Do other work.
   Get the result of the first inner product.
   ....
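
   In today's C API that pattern would read roughly like the sketch
   below. This is hypothetical usage, not working code: the current
   implementation supports only one outstanding batch per communicator,
   so calling the second VecDotBegin() before the first VecDotEnd()
   relies on the proposed semantics (TwoBatchPipeline and the particular
   vectors are made up for illustration).

   #include <petscmat.h>

   PetscErrorCode TwoBatchPipeline(Mat A,Vec x,Vec y,Vec z)
   {
     PetscErrorCode ierr;
     PetscScalar    dot1,dot2;
     MPI_Comm       comm;

     PetscFunctionBegin;
     ierr = PetscObjectGetComm((PetscObject)x,&comm);CHKERRQ(ierr);
     ierr = VecDotBegin(x,y,&dot1);CHKERRQ(ierr);             /* first inner product: batch 1 */
     ierr = PetscCommSplitReductionBegin(comm);CHKERRQ(ierr); /* launch batch 1's reduction; batch 1 is closed */
     ierr = MatMult(A,x,z);CHKERRQ(ierr);                     /* overlap the reduction with the mat-mult */
     ierr = VecDotBegin(z,y,&dot2);CHKERRQ(ierr);             /* second inner product: goes into batch 2 */
     ierr = VecDotEnd(x,y,&dot1);CHKERRQ(ierr);               /* get the result of the first inner product */
     /* ... other work ... */
     ierr = VecDotEnd(z,y,&dot2);CHKERRQ(ierr);               /* complete batch 2 */
     PetscFunctionReturn(0);
   }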

   OK, I can live with this model. But my "hoist" model would, in some
cases, have fewer objects managed by the user, though it would require an
explicit PetscCommSplitReductionCreate().
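
   For comparison (reusing the names from the sketch above), the hoist
   model might look something like the following. This is entirely
   hypothetical: PetscCommSplitReductionCreate(),
   PetscCommSplitReductionDestroy(), and a reduction-object argument to
   VecDotBegin/End do not exist; the point is only that one user-managed
   object covers a whole batch instead of one returned request per
   Begin() call.

   PetscSplitReduction red;                             /* hypothetical user-visible object */

   ierr = PetscCommSplitReductionCreate(comm,&red);CHKERRQ(ierr); /* hypothetical */
   ierr = VecDotBegin(x,y,&dot1,red);CHKERRQ(ierr);     /* hypothetical extra argument: both */
   ierr = VecDotBegin(z,y,&dot2,red);CHKERRQ(ierr);     /* dots share red's MPI reduction    */
   /* ... overlap with computation ... */
   ierr = VecDotEnd(x,y,&dot1,red);CHKERRQ(ierr);
   ierr = VecDotEnd(z,y,&dot2,red);CHKERRQ(ierr);
   ierr = PetscCommSplitReductionDestroy(&red);CHKERRQ(ierr);     /* hypothetical */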

  Barry



>  The old batch wouldn't be
> collected until all of its requests have been *End()ed.
> 
>>    Please explain how this works. Because an End was done, the next
>>    Begin somehow knows to create an entirely new reduction object that
>>    it tracks, while the old reduction is kept around (where?) to
>>    complete all of the first batch's requests?
> 
> Yeah, I don't think it's hard to implement, but requires some
> refactoring of PetscSplitReduction.
> 
>>   I am ok with this model if it can be implemented.
> 
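
(Purely as a guess at the refactoring Jed has in mind, the bookkeeping
could keep a list of batches per communicator, with a batch living until
every one of its requests has been End()ed. None of the fields below are
real; today's PetscSplitReduction is a single per-communicator object.)

   /* Speculative sketch of per-communicator batch bookkeeping. */
   typedef struct _SplitReductionBatch SplitReductionBatch;
   struct _SplitReductionBatch {
     MPI_Request          request;      /* nonblocking allreduce for this batch */
     PetscScalar         *lvalues;      /* local values gathered by the Begin calls */
     PetscScalar         *gvalues;      /* global results once the reduction completes */
     PetscInt             numops;       /* requests issued in this batch */
     PetscInt             numcompleted; /* batch is freed when this reaches numops */
     PetscBool            closed;       /* set by an *End() or PetscCommSplitReductionBegin() */
     SplitReductionBatch *next;         /* older batches still waiting to be End()ed */
   };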



