> [0]PETSC ERROR: PETSc is configured with GPU support, but your MPI is not GPU-aware. For better performance, please use a GPU-aware MPI.
> [0]PETSC ERROR: If you do not care, add option -use_gpu_aware_mpi 0. To not see the message again, add the option to your .petscrc, OR add it to the env var PETSC_OPTIONS.
> [0]PETSC ERROR: If you do care, for IBM Spectrum MPI on OLCF Summit, you may need jsrun --smpiargs=-gpu.
> [0]PETSC ERROR: For OpenMPI, you need to configure it --with-cuda (https://www.open-mpi.org/faq/?category=buildcuda)
> [0]PETSC ERROR: For MVAPICH2-GDR, you need to set MV2_USE_CUDA=1 (http://mvapich.cse.ohio-state.edu/userguide/gdr/)
> [0]PETSC ERROR: For Cray-MPICH, you need to set MPICH_RDMA_ENABLED_CUDA=1 (https://www.olcf.ornl.gov/tutorials/gpudirect-mpich-enabled-cuda/)

You seem to also be tripping the GPU-aware MPI checker. IIRC we discussed removing this at some point? I think Stefano mentioned we now do this check at configure time?
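In the meantime, the two remedies the message offers look roughly like this in practice (a sketch; the jsrun rank count and the executable name are just placeholders):

  # Skip the check; PETSc then stages GPU buffers through host memory for MPI:
  export PETSC_OPTIONS="-use_gpu_aware_mpi 0"   # or add the option to ~/.petscrc

  # Or actually enable CUDA-aware transport in IBM Spectrum MPI on Summit:
  jsrun -n 2 --smpiargs="-gpu" ./ex19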
<div dir="auto" style="caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration: none; word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><div dir="auto" style="caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration: none; word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><div>Best regards,<br class=""><br class="">Jacob Faibussowitsch<br class="">(Jacob Fai - booss - oh - vitch)<br class=""></div></div></div>
<div><br class=""><blockquote type="cite" class=""><div class="">On Nov 13, 2021, at 22:57, Junchao Zhang <<a href="mailto:junchao.zhang@gmail.com" class="">junchao.zhang@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><meta charset="UTF-8" class=""><div dir="ltr" style="caret-color: rgb(0, 0, 0); font-family: Menlo-Regular; font-size: 12px; font-style: normal; font-variant-caps: normal; font-weight: normal; letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration: none;" class=""><div dir="ltr" class=""><br class=""><br class=""></div><br class=""><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Nov 13, 2021 at 2:24 PM Mark Adams <<a href="mailto:mfadams@lbl.gov" class="">mfadams@lbl.gov</a>> wrote:<br class=""></div><blockquote class="gmail_quote" style="margin: 0px 0px 0px 0.8ex; border-left-width: 1px; border-left-style: solid; border-left-color: rgb(204, 204, 204); padding-left: 1ex;"><div dir="ltr" class=""><div class="">I have a user that wants CUDA + Hypre on Sumit and they want to use OpenMP in their code. I configured with openmp but without thread safety and got this error.</div><div class=""><br class=""></div><div class="">Maybe there is no need for us to do anything with omp in our configuration. Not sure.</div><div class=""><br class=""></div>15:08 main= summit:/gpfs/alpine/csc314/scratch/adams/petsc$ make PETSC_DIR=/gpfs/alpine/world-shared/geo127/petsc/arch-opt-gcc9.1.0-omp-cuda11.0.3 PETSC_ARCH="" check<br class="">Running check examples to verify correct installation<br class="">Using PETSC_DIR=/gpfs/alpine/world-shared/geo127/petsc/arch-opt-gcc9.1.0-omp-cuda11.0.3 and PETSC_ARCH=<br class="">C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process<br class="">Possible error running C/C++ src/snes/tutorials/ex19 with 2 MPI processes<br class="">See<span class="Apple-converted-space"> </span><a href="http://www.mcs.anl.gov/petsc/documentation/faq.html" target="_blank" class="">http://www.mcs.anl.gov/petsc/documentation/faq.html</a><br class="">[1] (280696) Warning: Could not find key lid0:0:2 in cache <=========================<br class="">[1] (280696) Warning: Could not find key qpn0:0:0:2 in cache <=========================<br class="">Unable to connect queue-pairs<br class="">[h37n08:280696] Error: common_pami.c:1094 - ompi_common_pami_init() 1: Unable to create 1 PAMI communication context(s) rc=1<br class=""></div></blockquote><div class="">I don't know what petsc's thread safety is. But this error seems to be in the environment. 
> --------------------------------------------------------------------------
> No components were able to be opened in the pml framework.
>
> This typically means that either no components of this type were
> installed, or none of the installed components can be loaded.
> Sometimes this means that shared libraries required by these
> components are unable to be found/loaded.
>
>   Host: h37n08
>   Framework: pml
> --------------------------------------------------------------------------
> [h37n08:280696] PML pami cannot be selected
> 1,5c1,16
> < lid velocity = 0.0016, prandtl # = 1., grashof # = 1.
> < 0 SNES Function norm 0.0406612
> < 1 SNES Function norm 4.12227e-06
> < 2 SNES Function norm 6.098e-11
> < Number of SNES iterations = 2
> ---
> > [1] (280721) Warning: Could not find key lid0:0:2 in cache <=========================
> > [1] (280721) Warning: Could not find key qpn0:0:0:2 in cache <=========================
> > Unable to connect queue-pairs
> > [h37n08:280721] Error: common_pami.c:1094 - ompi_common_pami_init() 1: Unable to create 1 PAMI communication context(s) rc=1
> > --------------------------------------------------------------------------
> > No components were able to be opened in the pml framework.
> >
> > This typically means that either no components of this type were
> > installed, or none of the installed components can be loaded.
> > Sometimes this means that shared libraries required by these
> > components are unable to be found/loaded.
> >
> >   Host: h37n08
> >   Framework: pml
> > --------------------------------------------------------------------------
> > [h37n08:280721] PML pami cannot be selected
> /gpfs/alpine/csc314/scratch/adams/petsc/src/snes/tutorials
> Possible problem with ex19 running with hypre, diffs above
> =========================================
> 2,15c2,15
> < 0 SNES Function norm 2.391552133017e-01
> <   0 KSP Residual norm 2.325621076120e-01
> <   1 KSP Residual norm 1.654206318674e-02
> <   2 KSP Residual norm 7.202836119880e-04
> <   3 KSP Residual norm 1.796861424199e-05
> <   4 KSP Residual norm 2.461332992052e-07
> < 1 SNES Function norm 6.826585648929e-05
> <   0 KSP Residual norm 2.347339172985e-05
> <   1 KSP Residual norm 8.356798075993e-07
> <   2 KSP Residual norm 1.844045309619e-08
> <   3 KSP Residual norm 5.336386977405e-10
> <   4 KSP Residual norm 2.662608472862e-11
> < 2 SNES Function norm 6.549682264799e-11
> < Number of SNES iterations = 2
> ---
> > [0]PETSC ERROR: PETSc is configured with GPU support, but your MPI is not GPU-aware. For better performance, please use a GPU-aware MPI.
> > [0]PETSC ERROR: If you do not care, add option -use_gpu_aware_mpi 0. To not see the message again, add the option to your .petscrc, OR add it to the env var PETSC_OPTIONS.
> > [0]PETSC ERROR: If you do care, for IBM Spectrum MPI on OLCF Summit, you may need jsrun --smpiargs=-gpu.
> > [0]PETSC ERROR: For OpenMPI, you need to configure it --with-cuda (https://www.open-mpi.org/faq/?category=buildcuda)
> > [0]PETSC ERROR: For MVAPICH2-GDR, you need to set MV2_USE_CUDA=1 (http://mvapich.cse.ohio-state.edu/userguide/gdr/)
> > [0]PETSC ERROR: For Cray-MPICH, you need to set MPICH_RDMA_ENABLED_CUDA=1 (https://www.olcf.ornl.gov/tutorials/gpudirect-mpich-enabled-cuda/)
> > --------------------------------------------------------------------------
> > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_SELF
> > with errorcode 76.
> >
> > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> > You may or may not see output from other processes, depending on
> > exactly when Open MPI kills them.
> > --------------------------------------------------------------------------
> /gpfs/alpine/csc314/scratch/adams/petsc/src/snes/tutorials
> Possible problem with ex19 running with cuda, diffs above
> =========================================
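If the GPU-aware MPI check is the only thing failing the cuda test, re-running the check with the option exported should confirm it (the make line is copied from the log above; the PAMI/queue-pair failure on 2 ranks looks like a separate environment issue):

  export PETSC_OPTIONS="-use_gpu_aware_mpi 0"
  make PETSC_DIR=/gpfs/alpine/world-shared/geo127/petsc/arch-opt-gcc9.1.0-omp-cuda11.0.3 PETSC_ARCH="" check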