<div dir="ltr">Also, the configure.log has<div><br></div><div> #define PETSC_HAVE_MPI_GPU_AWARE 1</div><br><div>which says PETSc thinks the GPU support is there.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div></div><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Tue, Sep 23, 2025 at 1:20 AM Satish Balay <<a href="mailto:balay.anl@fastmail.org">balay.anl@fastmail.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">orte-info output does suggest OpenMPI is built with cuda enabled.<br>

Are you able to run PETSc examples? What do you get for:

>>>>
balay@petsc-gpu-01:/scratch/balay/petsc/src/snes/tutorials$ make ex19
/scratch/balay/petsc/arch-linux-c-debug/bin/mpicc -fPIC -Wall
-Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch
-Wno-stringop-overflow -fstack-protector -fvisibility=hidden -g3 -O0
-I/scratch/balay/petsc/include
-I/scratch/balay/petsc/arch-linux-c-debug/include
-I/nfs/gce/projects/petsc/soft/u22.04/spack-2024-11-27-cuda/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/cuda-12.0.1-gy7foq57oi6wzltombtsdy5eqz5gkjgc/include
-Wl,-export-dynamic ex19.c
-Wl,-rpath,/scratch/balay/petsc/arch-linux-c-debug/lib
-L/scratch/balay/petsc/arch-linux-c-debug/lib
-Wl,-rpath,/nfs/gce/projects/petsc/soft/u22.04/spack-2024-11-27-cuda/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/cuda-12.0.1-gy7foq57oi6wzltombtsdy5eqz5gkjgc/lib64
-L/nfs/gce/projects/petsc/soft/u22.04/spack-2024-11-27-cuda/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/cuda-12.0.1-gy7foq57oi6wzltombtsdy5eqz5gkjgc/lib64
-L/nfs/gce/projects/petsc/soft/u22.04/spack-2024-11-27-cuda/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/cuda-12.0.1-gy7foq57oi6wzltombtsdy5eqz5gkjgc/lib64/stubs
-Wl,-rpath,/scratch/balay/petsc/arch-linux-c-debug/lib
-L/scratch/balay/petsc/arch-linux-c-debug/lib
-Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/11
-L/usr/lib/gcc/x86_64-linux-gnu/11 -lpetsc -llapack -lblas -lm -lcudart
-lnvToolsExt -lcufft -lcublas -lcusparse -lcusolver -lcurand -lcuda
-lX11 -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi
-lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -o ex19
balay@petsc-gpu-01:/scratch/balay/petsc/src/snes/tutorials$ ./ex19 -snes_monitor -dm_mat_type seqaijcusparse -dm_vec_type seqcuda -pc_type gamg -pc_gamg_esteig_ksp_max_it 10 -ksp_monitor -mg_levels_ksp_max_it 3
lid velocity = 0.0625, prandtl # = 1., grashof # = 1.
  0 SNES Function norm 2.391552133017e-01
    0 KSP Residual norm 2.013462697105e-01
    1 KSP Residual norm 5.027022294231e-02
    2 KSP Residual norm 7.248258907839e-03
    3 KSP Residual norm 8.590847505363e-04
    4 KSP Residual norm 1.511762118013e-05
    5 KSP Residual norm 1.410585959219e-06
  1 SNES Function norm 6.812362089434e-05
    0 KSP Residual norm 2.315252918142e-05
    1 KSP Residual norm 2.351994603807e-06
    2 KSP Residual norm 3.882072626158e-07
    3 KSP Residual norm 2.227447016095e-08
    4 KSP Residual norm 2.200353394658e-09
    5 KSP Residual norm 1.147903850265e-10
  2 SNES Function norm 3.411489611752e-10
Number of SNES iterations = 2
balay@petsc-gpu-01:/scratch/balay/petsc/src/snes/tutorials$
<<<<
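Note that the above is a single-rank run with the sequential types, so it mostly exercises the CUDA back-end rather than GPU-to-GPU MPI traffic. To actually stress the GPU-aware MPI paths you would want a multi-rank run with the parallel mat/vec types - a suggested (untested here) invocation:

mpiexec -n 2 ./ex19 -snes_monitor -dm_mat_type mpiaijcusparse -dm_vec_type mpicuda -pc_type gamg -pc_gamg_esteig_ksp_max_it 10 -ksp_monitor -mg_levels_ksp_max_it 3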

So what issue are you seeing with your code? And does it go away with the option "-use_gpu_aware_mpi 0"? For example:

>>>>
balay@petsc-gpu-01:/scratch/balay/petsc/src/snes/tutorials$ ./ex19 -snes_monitor -dm_mat_type seqaijcusparse -dm_vec_type seqcuda -pc_type gamg -pc_gamg_esteig_ksp_max_it 10 -ksp_monitor -mg_levels_ksp_max_it 3 -use_gpu_aware_mpi 0
lid velocity = 0.0625, prandtl # = 1., grashof # = 1.
  0 SNES Function norm 2.391552133017e-01
    0 KSP Residual norm 2.013462697105e-01
    1 KSP Residual norm 5.027022294231e-02
    2 KSP Residual norm 7.248258907839e-03
    3 KSP Residual norm 8.590847505363e-04
    4 KSP Residual norm 1.511762118013e-05
    5 KSP Residual norm 1.410585959219e-06
  1 SNES Function norm 6.812362089434e-05
    0 KSP Residual norm 2.315252918142e-05
    1 KSP Residual norm 2.351994603807e-06
    2 KSP Residual norm 3.882072626158e-07
    3 KSP Residual norm 2.227447016095e-08
    4 KSP Residual norm 2.200353394658e-09
    5 KSP Residual norm 1.147903850265e-10
  2 SNES Function norm 3.411489611752e-10
Number of SNES iterations = 2
balay@petsc-gpu-01:/scratch/balay/petsc/src/snes/tutorials$
<<<<
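If it is useful to verify CUDA-awareness at runtime from code, OpenMPI also ships an extension header for exactly this question. A minimal sketch follows - note that MPIX_CUDA_AWARE_SUPPORT and MPIX_Query_cuda_support() come from OpenMPI's mpi-ext.h and are not available in every MPI implementation:

/* checkcuda.c - report whether this OpenMPI build/run is CUDA-aware */
#include <stdio.h>
#include <mpi.h>
#if defined(OPEN_MPI)
#include <mpi-ext.h>   /* OpenMPI extensions, including the CUDA-awareness query */
#endif

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
#if defined(MPIX_CUDA_AWARE_SUPPORT) && MPIX_CUDA_AWARE_SUPPORT
  printf("compile-time CUDA support: yes\n");
#else
  printf("compile-time CUDA support: no (or cannot be determined)\n");
#endif
#if defined(MPIX_CUDA_AWARE_SUPPORT)
  /* runtime check: the library may still disable CUDA paths at run time */
  printf("run-time CUDA support: %s\n", MPIX_Query_cuda_support() ? "yes" : "no");
#endif
  MPI_Finalize();
  return 0;
}

Compile and run it with the same wrappers as your application, e.g. "mpicc checkcuda.c -o checkcuda && ./checkcuda".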

Satish

On Tue, 23 Sep 2025, 岳新海 wrote:

> I get:
> [mae_yuexh@login01 ~]$ orte-info |grep 'MCA btl'
> MCA btl: smcuda (MCA v2.1, API v3.1, Component v4.1.5)
> MCA btl: tcp (MCA v2.1, API v3.1, Component v4.1.5)
> MCA btl: self (MCA v2.1, API v3.1, Component v4.1.5)
> MCA btl: vader (MCA v2.1, API v3.1, Component v4.1.5)
>
> Xinhai
>
> 岳新海
> Southern University of Science and Technology / Graduate Student (Class of 2023)
> No. 1088 Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong
>
> ------------------ Original ------------------
> From: "Satish Balay" <balay.anl@fastmail.org>
> Date: Tue, Sep 23, 2025 03:25 AM
> To: "岳新海" <12332508@mail.sustech.edu.cn>
> Cc: "petsc-dev" <petsc-dev@mcs.anl.gov>
> Subject: Re: [petsc-dev] Question on PETSc + CUDA configuration with MPI on cluster
>
> What do you get for (with your openmpi install): orte-info |grep 'MCA btl'
>
> With a cuda-built openmpi I get:
> balay@petsc-gpu-01:/scratch/balay/petsc$ ./arch-linux-c-debug/bin/orte-info |grep 'MCA btl'
> MCA btl: smcuda (MCA v2.1, API v3.1, Component v4.1.6)
> MCA btl: openib (MCA v2.1, API v3.1, Component v4.1.6)
> MCA btl: self (MCA v2.1, API v3.1, Component v4.1.6)
> MCA btl: tcp (MCA v2.1, API v3.1, Component v4.1.6)
> MCA btl: vader (MCA v2.1, API v3.1, Component v4.1.6)
>
> And without cuda:
> balay@petsc-gpu-01:/scratch/balay/petsc.x$ ./arch-test/bin/orte-info | grep 'MCA btl'
> MCA btl: openib (MCA v2.1, API v3.1, Component v4.1.6)
> MCA btl: self (MCA v2.1, API v3.1, Component v4.1.6)
> MCA btl: tcp (MCA v2.1, API v3.1, Component v4.1.6)
> MCA btl: vader (MCA v2.1, API v3.1, Component v4.1.6)
>
> i.e. "smcuda" should be listed for a cuda-enabled openmpi.
>
> It's not clear if GPU-aware MPI makes a difference for all MPI implementations (or versions) - so it is good to verify. [It's a performance issue anyway - so it primarily matters when performing timing measurements.]
>
> Satish
>
> On Mon, 22 Sep 2025, 岳新海 wrote:
>
> > Dear PETSc Team,
> >
> > I am encountering an issue when running PETSc with CUDA support on a cluster. When I set the vector type to VECCUDA, PETSc reports that my MPI is not GPU-aware. However, the MPI library (OpenMPI 4.1.5) I used to configure PETSc was built with the --with-cuda option enabled.
> >
> > Here are some details:
> > PETSc version: 3.20.6
> > MPI: OpenMPI 4.1.5, configured with --with-cuda
> > GPU: RTX3090
> > CUDA version: 12.1
> > I have attached both my PETSc configure command and OpenMPI configure command for reference.
> >
> > My questions are:
> >
> > 1. Even though I enabled --with-cuda in OpenMPI, why does PETSc still report that MPI is not GPU-aware?
> >
> > 2. Are there additional steps or specific configuration flags required (either in OpenMPI or PETSc) to ensure GPU-aware MPI is correctly detected?
> >
> > Any guidance or suggestions would be greatly appreciated.
> >
> > Best regards,
> >
> > Xinhai Yue
> >
> > 岳新海
> > Southern University of Science and Technology / Graduate Student (Class of 2023)
> > No. 1088 Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/