[petsc-dev] Implementing longer pipelines with VecDotBegin and VecDotEnd

Wim Vanroose wim at vanroo.se
Thu Feb 15 10:34:33 CST 2018


Here is the figure with the high level communications.

On Thu, Feb 15, 2018 at 5:30 PM, Wim Vanroose <wim at vanroo.se> wrote:

> Dear All,
>
> We have a working prototype of pipe(l) CG in PETSc, in which dot products
> take multiple iterations to complete.  Because of the limitations of
> VecDotBegin we had to use MPI_Iallreduce and MPI_Wait directly.
> A high level overview of the communication is given in the figure.   The
> preprint of the paper is https://arxiv.org/abs/1801.04728
>
> How should we proceed?  Can we contribute this routine to KSP even though
> it uses raw MPI calls?  Or should we work with petsc-dev to see whether
> VecDotBegin and VecDotEnd can be redesigned to handle these cases, and
> then rewrite the prototype with the new calls?
>
> Can we talk about this at SIAM PP18?
>
> Wim Vanroose
>
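The pattern in question (a dot product whose global reduction is started in one iteration but only completed l iterations later) can be sketched with the plain MPI calls mentioned above. This is only an illustration of the communication pattern, not the actual pipe(l) CG prototype; the variable names and the placement of the overlapped work are assumptions:

```c
/* Sketch: split-phase dot product over a pipeline of depth l,
 * using MPI_Iallreduce + MPI_Wait directly (hypothetical code,
 * not the prototype from the paper). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  double      local_dot = 1.0;  /* stand-in for the local part of (r_i, r_i) */
  double      global_dot;
  MPI_Request req;

  MPI_Init(&argc, &argv);

  /* iteration i: start the reduction, but do not wait for it */
  MPI_Iallreduce(&local_dot, &global_dot, 1, MPI_DOUBLE, MPI_SUM,
                 MPI_COMM_WORLD, &req);

  /* iterations i+1 .. i+l-1: SpMVs and vector updates overlap
   * with the outstanding reduction here */

  /* iteration i+l: the global scalar is finally needed */
  MPI_Wait(&req, MPI_STATUS_IGNORE);
  printf("dot = %g\n", global_dot);

  MPI_Finalize();
  return 0;
}
```

By contrast, VecDotBegin/VecDotEnd expect the Begin and the matching End to bracket a single overlap window, which is presumably why a pipeline spanning multiple iterations currently falls back to these primitive MPI calls.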
-------------- next part --------------
A non-text attachment was scrubbed...
Name: schematic_communication.png
Type: image/png
Size: 123614 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-dev/attachments/20180215/5c6d1c51/attachment-0001.png>


More information about the petsc-dev mailing list