<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Apr 9, 2018 at 4:09 PM, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="gmail-">On Mon, Apr 9, 2018 at 6:12 PM, Manuel Valera <span dir="ltr"><<a href="mailto:mvalera-w@sdsu.edu" target="_blank">mvalera-w@sdsu.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hello guys,<div><br></div><div>I've made advances in my CUDA acceleration project, as you remember i have a CFD model in need of better execution times. </div><div><br></div><div>So far i have been able to solve the pressure system in the GPU and the rest in serial, using PETSc only for this pressure solve, the library i got to work was ViennaCL. First question, do i still have to switch installations to use either CUDA library? this was a suggestion before, so in order to use CUSP instead of ViennaCL, for example, i currently have to change installations, is this still the case?</div></div></blockquote><div><br></div></span><div>I am not sure what that means exactly. However, you can build a PETSc with CUDA and ViennaCL support. The type of Vec/Mat is selected at runtime.</div></div></div></div></blockquote><div><br></div><div>Karl Rupp wrote in a previous email:</div><div><br></div><div><i><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline"> * Right now only one of {native CUDA, </span><span class="gmail-il" style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial">CUSP</span><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">, ViennaCL} can be activated at configure time. 
>> Now, I started working on a fully parallelized version of the model, which uses the DMs and DMDAs to distribute the arrays. If I try the same flags as before, I get an error saying "Currently only handles ViennaCL matrices" when trying to solve for pressure. I take it this is a feature that is not implemented yet? What options do I have to solve for pressure, or to assign a DMDA array update to be done specifically on a GPU device?
>
> If we can't see the error, we are just guessing. Please send the entire error message.

Got it, I will paste the error at the end of this email.

> Note, we only do linear algebra on the GPU, so none of the FormFunction/FormJacobian stuff for DMDA would be on the GPU.

Yes, we only use it for linear algebra, e.g. solving a linear system and updating an array with a problematic algorithm.

>> I was thinking of using VecScatterCreateToZero for a regular vector,
>
> Why do you want a serial vector?

Because it looks like ViennaCL doesn't handle arrays created with the DMDAVec routines; it was just an idea.

>> but then I would have to create a vector and copy the DMDAVec into it,
>
> I do not understand what it means to copy the DM into the Vec.

I meant copying a DMDAVec into a plain Vec object; the former is created with a DMDA object for its mapping across processors.

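Roughly what I had in mind is the following (a sketch only; our code is Fortran and uses DMDAVecGetArrayReadF90, so the C calls, names, and sizes here are just my illustration of the idea):

/* Gather a DMDA-managed global vector onto rank 0. A DMDA global vector
   is an ordinary Vec, so VecScatterCreateToZero applies to it directly. */
#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM             da;
  Vec            p, pseq;   /* distributed pressure and its copy on rank 0 */
  VecScatter     tozero;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = DMDACreate1d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, 128, 1, 1, NULL, &da);CHKERRQ(ierr);
  ierr = DMSetFromOptions(da);CHKERRQ(ierr);
  ierr = DMSetUp(da);CHKERRQ(ierr);
  ierr = DMCreateGlobalVector(da, &p);CHKERRQ(ierr);

  /* Create the scatter and the sequential target vector in one call,
     then move the data. */
  ierr = VecScatterCreateToZero(p, &tozero, &pseq);CHKERRQ(ierr);
  ierr = VecScatterBegin(tozero, p, pseq, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(tozero, p, pseq, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);

  ierr = VecScatterDestroy(&tozero);CHKERRQ(ierr);
  ierr = VecDestroy(&pseq);CHKERRQ(ierr);
  ierr = VecDestroy(&p);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Doing that copy (and the copy back) on every solve is what worries me.
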
> Thanks,
>
>    Matt
>
>> Is this accomplished with DMDAVecGetArrayReadF90 and then just a copy? Do you think this will generate too much overhead?
>>
>> Thanks so much for your input,
>>
>> Manuel

>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/

The error happens when trying to use KSPSolve() on a vector made with the DMDAVec routines; the matrix is created without any DMDA routines.

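The setup is roughly the following (again a C sketch of the structure, not our actual Fortran code; the matrix assembly, sizes, and the -pc_type saviennacl option are my reading of what we run and of the traceback below):

/* Vectors come from the DMDA, the matrix is a plain AIJ matrix assembled
   by hand, and the run uses -pc_type saviennacl. My reading of the
   traceback is that PCSetUp_SAVIENNACL rejects the matrix because it is
   not of a ViennaCL type. */
#include <petscdmda.h>
#include <petscksp.h>

int main(int argc, char **argv)
{
  DM             da;
  Vec            x, b;
  Mat            A;
  KSP            ksp;
  PetscInt       n = 128;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = DMDACreate1d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, n, 1, 1, NULL, &da);CHKERRQ(ierr);
  ierr = DMSetFromOptions(da);CHKERRQ(ierr);
  ierr = DMSetUp(da);CHKERRQ(ierr);
  ierr = DMCreateGlobalVector(da, &x);CHKERRQ(ierr);  /* DMDA-managed vectors */
  ierr = VecDuplicate(x, &b);CHKERRQ(ierr);

  /* Matrix created independently of the DMDA (so it keeps the default AIJ type). */
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);
  /* ... assemble the pressure operator here ... */
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* run with -pc_type saviennacl */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);     /* KSPSetUp -> PCSetUp_SAVIENNACL fails here */

  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
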
Error:

[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: No support for this operation for this object type
[0]PETSC ERROR: Currently only handles ViennaCL matrices
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.8.4-2418-gd9c423b GIT Date: 2018-04-02 11:59:41 +0200
[0]PETSC ERROR: ./gcmBEAM on a cuda named node50 by valera Mon Apr 9 16:24:26 2018
[0]PETSC ERROR: Configure options PETSC_ARCH=cuda --download-mpich --download-fblaslapack COPTFLAGS=-O2 CXXOPTFLAGS=-O2 FOPTFLAGS=-O2 --with-shared-libraries=1 --download-hypre --with-debugging=no --with-cuda=1 --CUDAFLAGS=-arch=sm_60 --download-hypre --download-viennacl --download-cusp
[0]PETSC ERROR: #1 PCSetUp_SAVIENNACL() line 47 in /home/valera/petsc/src/ksp/pc/impls/saviennaclcuda/saviennacl.cu
[0]PETSC ERROR: #2 PCSetUp() line 924 in /home/valera/petsc/src/ksp/pc/interface/precon.c
[0]PETSC ERROR: #3 KSPSetUp() line 381 in /home/valera/petsc/src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: No support for this operation for this object type
[0]PETSC ERROR: Currently only handles ViennaCL matrices
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.8.4-2418-gd9c423b GIT Date: 2018-04-02 11:59:41 +0200
[0]PETSC ERROR: ./gcmBEAM on a cuda named node50 by valera Mon Apr 9 16:24:26 2018
[0]PETSC ERROR: Configure options PETSC_ARCH=cuda --download-mpich --download-fblaslapack COPTFLAGS=-O2 CXXOPTFLAGS=-O2 FOPTFLAGS=-O2 --with-shared-libraries=1 --download-hypre --with-debugging=no --with-cuda=1 --CUDAFLAGS=-arch=sm_60 --download-hypre --download-viennacl --download-cusp
[0]PETSC ERROR: #4 PCSetUp_SAVIENNACL() line 47 in /home/valera/petsc/src/ksp/pc/impls/saviennaclcuda/saviennacl.cu
[0]PETSC ERROR: #5 PCSetUp() line 924 in /home/valera/petsc/src/ksp/pc/interface/precon.c
 Finished setting up matrix objects
 Exiting PrepareNetCDF
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Signal received
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.8.4-2418-gd9c423b GIT Date: 2018-04-02 11:59:41 +0200
[0]PETSC ERROR: ./gcmBEAM on a cuda named node50 by valera Mon Apr 9 16:24:26 2018
[0]PETSC ERROR: Configure options PETSC_ARCH=cuda --download-mpich --download-fblaslapack COPTFLAGS=-O2 CXXOPTFLAGS=-O2 FOPTFLAGS=-O2 --with-shared-libraries=1 --download-hypre --with-debugging=no --with-cuda=1 --CUDAFLAGS=-arch=sm_60 --download-hypre --download-viennacl --download-cusp
[0]PETSC ERROR: #6 User provided function() line 0 in unknown file
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=59