<div dir="ltr"><div dir="ltr">On Wed, Aug 6, 2025 at 11:53 AM howen via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>> wrote:</div><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div>Dear Sir,</div><div><br></div><div>I am introducing petsc into our fortran + openacc code, <a href="https://urldefense.us/v3/__https://gitlab.com/bsc_sod2d/sod2d_gitlab__;!!G_uCfscf7eWS!f0Y_9okAiFSBeUBOlNplMe6jOCPtdRHfhX2s_rz3N9kCK4OJGiGWGjFyRd3JOwojw7MeNEA-VJtgdn8wBbCmRmJTqdFC$" target="_blank">https://gitlab.com/bsc_sod2d/sod2d_gitlab</a>.</div><div><br></div><div>My final objective is to run AMG (Boomer from hypre) on the GPU.</div><div><br></div><div>For the moment I am performing test on the CPU only.</div><div><br></div><div>I run on Marenostrum-V. <a href="https://urldefense.us/v3/__https://www.bsc.es/supportkc/docs/MareNostrum5/intro/__;!!G_uCfscf7eWS!f0Y_9okAiFSBeUBOlNplMe6jOCPtdRHfhX2s_rz3N9kCK4OJGiGWGjFyRd3JOwojw7MeNEA-VJtgdn8wBbCmRraUJcUF$" target="_blank">https://www.bsc.es/supportkc/docs/MareNostrum5/intro/</a></div><div><br></div><div>I am compiling my code with NVHPC and the support team from BSC has compiled petsc + hypre for me.</div><div><br></div><div>In the configuration they used -with-cuda. </div><div><br></div><div>I have run petsc and it works correctly both in serial and parallel on the CPU. In my code I use call MatSetType(amat,MATAIJ,ierr).</div><div>I understand this is teh expected behaviour for Petsc. </div><div>That is, that even if one compiles with petsc cuda support one can run only on the GPU depending on what one sets in MatSetType.</div><div>Could you confirm that this is the expected behaviour? Instead it seems that when petsc+hypre is used one needs to </div><div>compile specific versions for CPU and GPU. </div><div><br></div><div>When trying to use hypre through petsc if one has compiled petsc using -with-cuda the run fails.</div><div>From what I have understood this in not the expected behavior. 
>
> When trying to use hypre through PETSc, if PETSc has been compiled with --with-cuda, the run fails.
> From what I have understood this is not the expected behavior. Could you confirm this?
>
> Depending on which options I give, the error is different.
>
> If I use
>
> -pc_type hypre
> -pc_hypre_type boomeramg
> -pc_hypre_boomeramg_coarsen_type hmis
> -pc_hypre_boomeramg_interp_type ext+i
> -pc_hypre_boomeramg_relax_type_all SOR/Jacobi
> -pc_hypre_boomeramg_relax_type_coarse SOR/Jacobi
> -pc_hypre_boomeramg_grid_sweeps_coarse 1
> -pc_hypre_boomeramg_strong_threshold 0.25
>
> I get
>
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and https://petsc.org/release/faq/
> [0]PETSC ERROR: or try https://docs.nvidia.com/cuda/cuda-memcheck/index.html on NVIDIA CUDA systems to find memory corruption errors
> [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------
> [0]PETSC ERROR: The line numbers in the error traceback are not always exact.
> [0]PETSC ERROR: #1 jac->setup()
> [0]PETSC ERROR: #2 PCSetUp_HYPRE() at /gpfs/apps/MN5/ACC/PETSC/SRC/petsc-v3.21.0_hypre-debug/src/ksp/pc/impls/hypre/hypre.c:422
> [0]PETSC ERROR: #3 PCSetUp() at /gpfs/apps/MN5/ACC/PETSC/SRC/petsc-v3.21.0_hypre-debug/src/ksp/pc/interface/precon.c:1079
> [0]PETSC ERROR: #4 KSPSetUp() at /gpfs/apps/MN5/ACC/PETSC/SRC/petsc-v3.21.0_hypre-debug/src/ksp/ksp/interface/itfunc.c:415
> --------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 0 in communicator MPI COMMUNICATOR 3 SPLIT FROM 0
> with errorcode 59.
>
> Which does not help much.
>
> Instead, if I use only
>
> -pc_type hypre
>
> I get
>
> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> [0]PETSC ERROR: Invalid argument
> [0]PETSC ERROR: HYPRE_MEMORY_DEVICE expects a device vector. You need to enable PETSc device support, for example, in some cases, -vec_type cuda
>
> This helped me realise that hypre was trying to use the GPU despite the settings in my code.

We try to give a good error message here (if you can send us the code that produces the SEGV, we will fix it to give an informative error instead). This is a limitation of hypre, namely that it runs _either_ on the CPU or on the GPU, not both at the same time.
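
Since your hypre was built with CUDA, BoomerAMG will run on the device, so the objects PETSc hands it must use device types as well. The second error message above already points at -vec_type cuda; pairing it with a device matrix type is the usual companion setting. A sketch of the option set (untested on your machine; ./sod2d stands in for your executable):

  ./sod2d -vec_type cuda -mat_type aijcusparse \
          -pc_type hypre -pc_hypre_type boomeramg

Alternatively, set the types in code (MATAIJCUSPARSE in MatSetType, VECCUDA in VecSetType), or call MatSetFromOptions()/VecSetFromOptions() so the command line decides.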
Hopefully a future release of hypre will lift this limitation.

  Thanks,

     Matt

> Both runs are successful when compiling without --with-cuda.
>
> Best,
<div dir="auto" style="color:rgb(0,0,0);letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><div style="color:rgb(0,0,0);letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px"><div style="color:rgb(0,0,0);letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px"><div style="color:rgb(0,0,0);letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px"><div style="color:rgb(0,0,0);letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px"><div>Herbert Owen<br>Senior Researcher, Dpt. Computer Applications in Science and Engineering<br>Barcelona Supercomputing Center (BSC-CNS)<br>Tel: +34 93 413 4038</div><div>Skype: herbert.owen<br><br><a href="https://urldefense.us/v3/__https://scholar.google.es/citations?user=qe5O2IYAAAAJ&hl=en__;!!G_uCfscf7eWS!f0Y_9okAiFSBeUBOlNplMe6jOCPtdRHfhX2s_rz3N9kCK4OJGiGWGjFyRd3JOwojw7MeNEA-VJtgdn8wBbCmRkKVuPTJ$" target="_blank">https://scholar.google.es/citations?user=qe5O2IYAAAAJ&hl=en</a></div><div><br></div></div><br></div><br></div><br></div><br></div><br><br>
</div>
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/