<div dir="ltr"><div class="gmail_quote"><div dir="ltr">On Thu, Jul 12, 2018 at 6:47 AM Lawrence Mitchell <<a href="mailto:lawrence.mitchell@imperial.ac.uk">lawrence.mitchell@imperial.ac.uk</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Dear petsc-dev,<br>
<br>
we're starting to explore (with Andreas cc'd) residual assembly on<br>
GPUs. The question naturally arises: how to do GlobalToLocal and<br>
LocalToGlobal.<br></blockquote><div><br></div><div>There is not a lot of Mem Band difference between a GPU and a Skylake, but I assume this is</div><div>to use hardware already purchased by some center.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
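For reference, the host-side pattern being targeted is the usual one; a minimal
sketch, assuming a DM-managed mesh with global/local Vecs named X, Xloc, F, Floc
(the names are illustrative; only the DM calls are existing API):

  PetscErrorCode ierr;
  /* Today: the halo exchange runs through host memory */
  ierr = DMGlobalToLocalBegin(dm, X, INSERT_VALUES, Xloc);CHKERRQ(ierr);
  ierr = DMGlobalToLocalEnd(dm, X, INSERT_VALUES, Xloc);CHKERRQ(ierr);
  /* ... local residual assembly producing Floc ... */
  ierr = DMLocalToGlobalBegin(dm, Floc, ADD_VALUES, F);CHKERRQ(ierr);
  ierr = DMLocalToGlobalEnd(dm, Floc, ADD_VALUES, F);CHKERRQ(ierr);

The goal is for exactly this pattern to work when the Vec data live on the device.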
> I have:
>
> A PetscSF describing the communication pattern.
>
> A Vec holding the data to communicate. This will have an up-to-date
> device pointer.
>
> I would like:
>
> PetscSFBcastBegin/End (and ReduceBegin/End, etc.) to (optionally) work
> with raw device pointers. I am led to believe that modern MPIs can plug
> directly into device memory, so I would like to avoid copying data to
> the host, doing the communication there, and then going back up to the
> device.
>
> Given that the window implementation (which just delegates to MPI for
> all the packing) is not considered ready for prime time (mostly due to
> MPI implementation bugs, I think), I think this means implementing a
> version of PetscSF_Basic that can handle the pack/unpack directly on
> the device, and then just hands off to MPI.

I think that is the case.
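Roughly, a device-capable PetscSF_Basic back end would run the pack on the GPU
and hand the contiguous buffer straight to a CUDA-aware MPI. A minimal sketch,
assuming a .cu translation unit and a CUDA-aware MPI; the kernel, the buffer
names, and PetscSFBcastBegin_BasicCUDA are hypothetical, only the PETSc types
and the MPI/CUDA calls are real:

  #include <petscsys.h>
  #include <mpi.h>

  /* Gather scattered root values into a contiguous send buffer, on the device */
  __global__ void PackKernel(PetscInt n, const PetscInt *idx,
                             const PetscScalar *rootdata, PetscScalar *sendbuf)
  {
    PetscInt i = blockIdx.x*blockDim.x + threadIdx.x;
    if (i < n) sendbuf[i] = rootdata[idx[i]];
  }

  /* ... inside a hypothetical PetscSFBcastBegin_BasicCUDA(), per neighbour;
     n, d_idx, d_rootdata, d_sendbuf, dest, tag, comm come from the SF's
     precomputed communication pattern ... */
  MPI_Request req;
  PackKernel<<<(n+255)/256,256>>>(n, d_idx, d_rootdata, d_sendbuf);
  cudaDeviceSynchronize();  /* buffer must be ready before MPI reads it */
  MPI_Isend(d_sendbuf, (PetscMPIInt)n, MPIU_SCALAR, dest, tag, comm, &req);  /* raw device pointer */

The unpack on the leaf side would be the mirror-image kernel, run after the
MPI_Waitall in the corresponding End routine.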
> The next thing is how to put a higher-level interface on top of this.
> What, if any, suggestions are there for doing something where the
> top-level API is agnostic to whether the data are on the host or the
> device?
>
> We had thought something like:
>
> - Make PetscSF handle device pointers (possibly with a new implementation?)
>
> - Make VecScatter use SF.

Yep, this is what I would do.
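Then GlobalToLocal would bottom out in something like the following sketch. The
Vec device accessors below already exist; an SF whose Bcast accepts the resulting
raw device pointers is the piece that would have to be written:

  const PetscScalar *rootdata;
  PetscScalar       *leafdata;
  PetscErrorCode     ierr;

  ierr = VecCUDAGetArrayRead(xglobal, &rootdata);CHKERRQ(ierr);   /* device pointer */
  ierr = VecCUDAGetArrayWrite(xlocal, &leafdata);CHKERRQ(ierr);   /* device pointer */
  ierr = PetscSFBcastBegin(sf, MPIU_SCALAR, rootdata, leafdata);CHKERRQ(ierr);
  ierr = PetscSFBcastEnd(sf, MPIU_SCALAR, rootdata, leafdata);CHKERRQ(ierr);
  ierr = VecCUDARestoreArrayRead(xglobal, &rootdata);CHKERRQ(ierr);
  ierr = VecCUDARestoreArrayWrite(xlocal, &leafdata);CHKERRQ(ierr);

LocalToGlobal is the same picture with PetscSFReduceBegin/End and MPIU_SUM.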
> Calling VecScatterBegin/End on a Vec with up-to-date device pointers
> just uses the SF directly.
>
> Have there been any thoughts about how you want to do multi-GPU
> interaction?

I don't think so, but Karl could reply if there has been.

How are you doing local assembly?

  Matt
> Cheers,
>
> Lawrence

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/