<html><head><meta http-equiv="Content-Type" content="text/html; charset=us-ascii"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><div class=""><br class=""></div> So the PETSc tests all run, including the test that uses a GPU.<div class=""><br class=""></div><div class=""> The hypre test is failing, and it is impossible to tell why from the output. </div><div class=""><br class=""></div><div class=""> You can run it manually: cd src/snes/tutorials</div><div class=""><br class=""></div><div class="">make ex19</div><div class="">mpiexec -n 1 ./ex19 -dm_vec_type cuda -dm_mat_type aijcusparse -da_refine 3 -snes_monitor_short -ksp_norm_type unpreconditioned -pc_type hypre -info > somefile</div><div class=""><br class=""></div><div class="">Then take a look at the output in somefile and send it to us. </div><div class=""><br class=""></div><div class=""> Barry</div><div class=""><br class=""></div><div class=""><br class=""><div><br class=""><blockquote type="cite" class=""><div class="">On Jul 14, 2022, at 12:32 PM, Juan Pablo de Lima Costa Salazar via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" class="">petsc-users@mcs.anl.gov</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><meta http-equiv="Content-Type" content="text/html; charset=UTF-8" class=""><div style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">Hello,<div class=""><br class=""></div><div class="">I was hoping to get help regarding a runtime error I am encountering <span style="caret-color: rgb(0, 0, 0);" class="">on a cluster node with 4 Tesla K40m GPUs</span> after configuring PETSc with the following command:</div><div class=""><br class=""></div><div class="">$./configure --force \</div><div class=""> --with-precision=double \</div><div class=""> --with-debugging=0 \</div><div class=""> --COPTFLAGS=-O3 \</div><div class=""> --CXXOPTFLAGS=-O3 \</div><div class=""> 
--FOPTFLAGS=-O3 \</div><div class=""> PETSC_ARCH=linux64GccDPInt32-spack \</div><div class=""> --download-fblaslapack \</div><div class=""> --download-openblas \</div><div class=""> --download-hypre \</div><div class=""> --download-hypre-configure-arguments=--enable-unified-memory \</div><div class=""> --with-mpi-dir=/opt/ohpc/pub/mpi/openmpi4-gnu9/4.0.4 \</div><div class=""> --with-cuda=1 \</div><div class=""> --download-suitesparse \</div><div class=""> --download-dir=downloads \</div><div class=""> --with-cudac=/opt/ohpc/admin/spack/0.15.0/opt/spack/linux-centos8-ivybridge/gcc-9.3.0/cuda-11.7.0-hel25vgwc7fixnvfl5ipvnh34fnskw3m/bin/nvcc \</div><div class=""> --with-packages-download-dir=downloads \</div><div class=""> --download-sowing=downloads/v1.1.26-p4.tar.gz \</div><div class=""> --with-cuda-arch=35</div><br class=""><div class="">When I run</div><div class=""><br class=""></div><div class="">$ make PETSC_DIR=/home/juan/OpenFOAM/juan-v2206/petsc-cuda PETSC_ARCH=linux64GccDPInt32-spack check</div><div class="">Running check examples to verify correct installation<br class="">Using PETSC_DIR=/home/juan/OpenFOAM/juan-v2206/petsc-cuda and PETSC_ARCH=linux64GccDPInt32-spack<br class="">C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process<br class="">C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes<br class="">3,5c3,15<br class="">< 1 SNES Function norm 4.12227e-06 <br class="">< 2 SNES Function norm 6.098e-11 <br class="">< Number of SNES iterations = 2<br class="">---<br class="">> CUDA ERROR (code = 101, invalid device ordinal) at memory.c:139<br class="">> CUDA ERROR (code = 101, invalid device ordinal) at memory.c:139<br class="">> --------------------------------------------------------------------------<br class="">> Primary job terminated normally, but 1 process returned<br class="">> a non-zero exit code. 
Per user-direction, the job has been aborted.<br class="">> --------------------------------------------------------------------------<br class="">> --------------------------------------------------------------------------<br class="">> mpiexec detected that one or more processes exited with non-zero status, thus causing<br class="">> the job to be terminated. The first process to do so was:<br class="">> <br class="">> Process name: [[52712,1],0]<br class="">> Exit code: 1<br class="">> --------------------------------------------------------------------------<br class="">/home/juan/OpenFOAM/juan-v2206/petsc-cuda/src/snes/tutorials<br class="">Possible problem with ex19 running with hypre, diffs above<br class="">=========================================<br class="">C/C++ example src/snes/tutorials/ex19 run successfully with cuda<br class="">C/C++ example src/snes/tutorials/ex19 run successfully with suitesparse<br class="">Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI process<br class="">Completed test examples<br class=""><br class=""></div><div style="caret-color: rgb(0, 0, 0);" class="">I have compiled the code on the head node (without GPUs) and on the compute node where there are 4 GPUs. 
</div><div style="caret-color: rgb(0, 0, 0);" class=""><br class=""></div><div class=""><font class=""><span style="caret-color: rgb(0, 0, 0);" class="">$nvidia-debugdump -l<br class="">Found 4 NVIDIA devices<br class=""><span class="Apple-tab-span" style="white-space: pre;"> </span>Device ID: 0<br class=""><span class="Apple-tab-span" style="white-space: pre;"> </span>Device name: Tesla K40m<br class=""><span class="Apple-tab-span" style="white-space: pre;"> </span>GPU internal ID: 0320717032250<br class=""><br class=""><span class="Apple-tab-span" style="white-space: pre;"> </span>Device ID: 1<br class=""><span class="Apple-tab-span" style="white-space: pre;"> </span>Device name: Tesla K40m<br class=""><span class="Apple-tab-span" style="white-space: pre;"> </span>GPU internal ID: 0320717031968<br class=""><br class=""><span class="Apple-tab-span" style="white-space: pre;"> </span>Device ID: 2<br class=""><span class="Apple-tab-span" style="white-space: pre;"> </span>Device name: Tesla K40m<br class=""><span class="Apple-tab-span" style="white-space: pre;"> </span>GPU internal ID: 0320717032246<br class=""><br class=""><span class="Apple-tab-span" style="white-space: pre;"> </span>Device ID: 3<br class=""><span class="Apple-tab-span" style="white-space: pre;"> </span>Device name: Tesla K40m<br class=""><span class="Apple-tab-span" style="white-space: pre;"> </span>GPU internal ID: 0320717032235</span></font></div><div class=""><font class=""><span style="caret-color: rgb(0, 0, 0);" class=""><br class=""></span></font></div><div class=""><font class=""><span style="caret-color: rgb(0, 0, 0);" class="">Attached are the log files from configure and make.</span></font></div><div class=""><font class=""><span style="caret-color: rgb(0, 0, 0);" class=""><br class=""></span></font></div><div class=""><font class="">Any pointers are highly appreciated. My intention is to use PETSc as a linear solver for OpenFOAM, leveraging the availability of GPUs at the same time. 
Currently I can run PETSc without GPU support. </font></div><div class=""><font class=""><br class=""></font></div><div class=""><font class="">Cheers,</font></div><div class=""><font class="">Juan S.</font></div><div class=""><font class=""><br class=""></font></div><div class=""></div></div><div style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><div class=""></div></div><div style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><div class=""></div><div class=""><font class=""><br class=""></font></div><div style="caret-color: rgb(0, 0, 0);" class=""><br class=""></div><div class=""><br class=""></div><div class=""><br class=""></div></div><span id="cid:8DC2CCDC-0FE5-4765-B588-199A913130BF"><configure.log.tar.gz></span><span id="cid:C86A7C79-FFAD-45ED-A9DE-4F61EEC7B01F"><make.log.tar.gz></span></div></blockquote></div><br class=""></div></body></html>