[petsc-dev] Question on PETSc + CUDA configuration with MPI on cluster

Satish Balay balay.anl at fastmail.org
Mon Sep 22 14:25:47 CDT 2025


What do you get (with your openmpi install) for: orte-info | grep 'MCA btl'

With a cuda-built openmpi - I get:
balay at petsc-gpu-01:/scratch/balay/petsc$ ./arch-linux-c-debug/bin/orte-info |grep 'MCA btl'
                 MCA btl: smcuda (MCA v2.1, API v3.1, Component v4.1.6)
                 MCA btl: openib (MCA v2.1, API v3.1, Component v4.1.6)
                 MCA btl: self (MCA v2.1, API v3.1, Component v4.1.6)
                 MCA btl: tcp (MCA v2.1, API v3.1, Component v4.1.6)
                 MCA btl: vader (MCA v2.1, API v3.1, Component v4.1.6)

And without cuda:
balay at petsc-gpu-01:/scratch/balay/petsc.x$ ./arch-test/bin/orte-info  | grep 'MCA btl'
                 MCA btl: openib (MCA v2.1, API v3.1, Component v4.1.6)
                 MCA btl: self (MCA v2.1, API v3.1, Component v4.1.6)
                 MCA btl: tcp (MCA v2.1, API v3.1, Component v4.1.6)
                 MCA btl: vader (MCA v2.1, API v3.1, Component v4.1.6)

i.e "smcuda" should be listed for a cuda enabled openmpi.

It's not clear if GPU-aware MPI makes a difference for all MPI impls (or versions) - so it's good to verify. [It's a performance issue anyway - so primarily useful when performing timing measurements.]
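
You can also query this from code - openmpi exposes a compile-time macro and a run-time check via its MPI extensions header. A minimal sketch [assuming openmpi; MPIX_CUDA_AWARE_SUPPORT and MPIX_Query_cuda_support() come from mpi-ext.h]:

#include <stdio.h>
#include <mpi.h>
#if defined(OPEN_MPI)
#include <mpi-ext.h> /* openmpi extensions - defines MPIX_CUDA_AWARE_SUPPORT */
#endif

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
#if defined(MPIX_CUDA_AWARE_SUPPORT) && MPIX_CUDA_AWARE_SUPPORT
  printf("compile time: built with cuda support\n");
#else
  printf("compile time: no cuda support (or cannot determine)\n");
#endif
#if defined(MPIX_CUDA_AWARE_SUPPORT)
  /* the run-time answer can differ from the compile-time one */
  printf("run time: cuda support %s\n",
         MPIX_Query_cuda_support() ? "available" : "not available");
#endif
  MPI_Finalize();
  return 0;
}

And if the MPI really isn't gpu-aware, VECCUDA should still run correctly [just slower] by staging buffers through the host - i.e., with the PETSc option -use_gpu_aware_mpi 0.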

Satish

On Mon, 22 Sep 2025, 岳新海 wrote:

> Dear PETSc Team,
>  
> I am encountering an issue when running PETSc with CUDA support on a cluster. When I set the vector type to VECCUDA, PETSc reports that my MPI is not GPU-aware. However, the MPI library (OpenMPI 4.1.5) I used to configure PETSc was built with the --with-cuda option enabled.
> 
> Here are some details:
> PETSc version: 3.20.6
> MPI: OpenMPI 4.1.5, configured with --with-cuda
> GPU: RTX3090
> CUDA version: 12.1 
> I have attached both my PETSc configure command and OpenMPI configure command for reference.
> 
> My questions are:
> 
> 1. Even though I enabled --with-cuda in OpenMPI, why does PETSc still report that MPI is not GPU-aware?
> 
> 2. Are there additional steps or specific configuration flags required (either in OpenMPI or PETSc) to ensure GPU-aware MPI is correctly detected?
> 
> Any guidance or suggestions would be greatly appreciated.
> 
> Best regards,
> 
> Xinhai Yue
> 
> 岳新海 (Xinhai Yue)
> Southern University of Science and Technology / Graduate student, class of 2023
> No. 1088 Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong

