<div dir="ltr"><div class="gmail_quote"><div dir="ltr">On Thu, Dec 6, 2018 at 2:43 PM Weston, Brian Thomas via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div lang="EN-US" link="#0563C1" vlink="#954F72">
<div class="m_-3414104773733694472WordSection1">
<p class="MsoNormal"><span style="font-size:11.0pt">For parallel computations on the CPU, we allocate our own vectors and then give PETSc a point to our vectors using VecCreateMPIWithArray. For computations only on the GPU, we allocate vectors on the GPU and
we’d like to do the same thing and give PETSc a pointer to our vector. <u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt">1. Does VecCreateMPICUDAWithArray have the same functionality and will it only operate on the GPU?</span></p></div></div></blockquote><div><br></div><div>Karl will know.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div lang="EN-US" link="#0563C1" vlink="#954F72"><div class="m_-3414104773733694472WordSection1">
<p class="MsoNormal"><span style="font-size:11.0pt">2. Are there ways to ensure that our PETSc matrices and vectors only live on the GPU (as to avoid data transfer from CPU to GPU)?</span></p></div></div></blockquote><div><br></div><div>The coherence strategy does not activate unless you ask for the CPU version. It should do what you want if you only execute GPU operations.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div lang="EN-US" link="#0563C1" vlink="#954F72"><div class="m_-3414104773733694472WordSection1">
<p class="MsoNormal"><span style="font-size:11.0pt">3. Lastly, if in the future we move to AMG clusters, will these PETSc CUDA commands still work?</span></p></div></div></blockquote><div><br></div><div>AMD? The viennacl implementation will work on AMD. Everything looks the same, and you could try that out on your Nvidia hardware as well.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div lang="EN-US" link="#0563C1" vlink="#954F72"><div class="m_-3414104773733694472WordSection1">
<p class="MsoNormal"><span style="font-size:11.0pt">Thanks,<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt">Brian<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt"><u></u> <u></u></span></p>
-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/