<div dir="ltr">Here is the figure with the high level communications. </div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Feb 15, 2018 at 5:30 PM, Wim Vanroose <span dir="ltr"><<a href="mailto:wim@vanroo.se" target="_blank">wim@vanroo.se</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Dear All, <div><br></div><div>We have a working prototype of pipe(l) CG in Petsc, where dot products are taking multiple iterations to complete. Due to the limitations of VecDotBegin we had to used MPI_WAIT and MPI_Iallreduce. </div><div>A high level overview of the communication is given in the figure. The preprint of the paper is <a href="https://arxiv.org/abs/1801.04728" target="_blank">https://arxiv.org/abs/1801.<wbr>04728</a></div><div><br></div><div>How should we proceed? Can we contribute this routine to KSP while it uses primitive MPI calls? </div><div>Or should we interact with petsc-dev to see if we can redesign VecDotBegin and VecDotEnd </div><div>to be able to handle these cases? And then rewrite the prototype with these new calls?</div><div><br></div><div>Can we talk about this at SIAM PP18?</div><div><br></div><div>Wim Vanroose <br></div><div><br></div><div><img><br></div><div><br></div><div><br></div><div class="gmail_extra"><br><div class="gmail_quote"><br></div></div></div>
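For context, a minimal sketch of the two patterns under discussion: PETSc's split-phase VecDotBegin/VecDotEnd versus a raw MPI_Iallreduce whose MPI_Request is kept alive and only completed several iterations later, as in the prototype. The helper names (local_dot, pipelined_dot_start, pipelined_dot_finish) are made up for illustration; this is not the actual pipe(l) CG code.

/* Sketch only: contrasts the split-phase dot product with a nonblocking
   reduction that stays outstanding across iterations. */
#include <petscvec.h>

/* Local contribution to y^H x, no communication. */
PetscErrorCode local_dot(Vec x,Vec y,PetscScalar *sum)
{
  PetscErrorCode    ierr;
  const PetscScalar *xa,*ya;
  PetscInt          i,n;

  ierr = VecGetLocalSize(x,&n);CHKERRQ(ierr);
  ierr = VecGetArrayRead(x,&xa);CHKERRQ(ierr);
  ierr = VecGetArrayRead(y,&ya);CHKERRQ(ierr);
  *sum = 0.0;
  for (i=0; i<n; i++) *sum += xa[i]*PetscConj(ya[i]);
  ierr = VecRestoreArrayRead(y,&ya);CHKERRQ(ierr);
  ierr = VecRestoreArrayRead(x,&xa);CHKERRQ(ierr);
  return 0;
}

/* Pattern A: split-phase reduction.  The matching VecDotEnd must be called
   before the next split-phase reduction is started, so the result cannot
   remain outstanding across several iterations. */
PetscErrorCode splitphase_dot(Vec x,Vec y,PetscScalar *gamma)
{
  PetscErrorCode ierr;

  ierr = VecDotBegin(x,y,gamma);CHKERRQ(ierr);
  /* ... local work overlapping this single reduction ... */
  ierr = VecDotEnd(x,y,gamma);CHKERRQ(ierr);
  return 0;
}

/* Pattern B: start a nonblocking reduction now; the buffers and the request
   must stay alive until the wait. */
PetscErrorCode pipelined_dot_start(Vec x,Vec y,PetscScalar *local,PetscScalar *global,MPI_Request *req)
{
  PetscErrorCode ierr;
  MPI_Comm       comm;

  ierr = PetscObjectGetComm((PetscObject)x,&comm);CHKERRQ(ierr);
  ierr = local_dot(x,y,local);CHKERRQ(ierr);
  ierr = MPI_Iallreduce(local,global,1,MPIU_SCALAR,MPIU_SUM,comm,req);CHKERRQ(ierr);
  return 0;
}

/* ... and complete it l iterations later, once the result is actually needed. */
PetscErrorCode pipelined_dot_finish(MPI_Request *req)
{
  PetscErrorCode ierr;

  ierr = MPI_Wait(req,MPI_STATUS_IGNORE);CHKERRQ(ierr);
  return 0;
}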