On Wed, Sep 12, 2018 at 5:13 PM Manuel Valera <mvalera-w@sdsu.edu> wrote:

> Hello guys,
>
> I am working on a multi-GPU cluster and I want to request 2 or more GPUs. How can I do that from PETSc? Evidently mpirun -n # requests MPI processes, but what if I want to use one MPI process with several GPUs instead?

We do not do that. You would run the same number of MPI processes as GPUs. Note that
you can have more than 1 MPI process on a processor.

  Matt

> Also, I understand the GPU handles the linear system solve, but what about the data management? Can I use DMs for things other than the linear solver on the GPUs?
>
> Thanks once more,
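For a concrete picture, here is a minimal sketch (not from the original exchange, and assuming a PETSc build configured --with-cuda) of a DMDA-based solve where the Vec and Mat implementations are left to the options database, so the same source runs on the CPU or on the CUDA back end:

/* Sketch only; error checking (CHKERRQ) omitted for brevity. */
#include <petscdmda.h>
#include <petscksp.h>

int main(int argc, char **argv)
{
  DM  da;
  Mat A;
  Vec x, b;
  KSP ksp;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* 2D structured grid; -dm_vec_type / -dm_mat_type are picked up here */
  DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
               DMDA_STENCIL_STAR, 128, 128, PETSC_DECIDE, PETSC_DECIDE,
               1, 1, NULL, NULL, &da);
  DMSetFromOptions(da);
  DMSetUp(da);

  DMCreateMatrix(da, &A);
  DMCreateGlobalVector(da, &x);
  VecDuplicate(x, &b);

  /* Placeholder assembly so the sketch runs end to end: A = I, b = 1.
     Replace with the assembly for your own operator and right-hand side. */
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
  MatShift(A, 1.0);
  VecSet(b, 1.0);

  /* The solve runs on the GPU when CUDA Vec/Mat types are selected */
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetFromOptions(ksp);
  KSPSolve(ksp, b, x);

  KSPDestroy(&ksp);
  VecDestroy(&x);
  VecDestroy(&b);
  MatDestroy(&A);
  DMDestroy(&da);
  PetscFinalize();
  return 0;
}

You would then launch one MPI rank per GPU and select the GPU types on the command line, e.g. something like

  mpiexec -n 2 ./sketch -dm_vec_type cuda -dm_mat_type aijcusparse -ksp_type cg

where the executable name "sketch" is only for illustration and the exact option and type names may differ across PETSc versions.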
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/