<div>I get:</div><div>[mae_yuexh@login01 ~]$ orte-info | grep 'MCA btl'</div><div><div> MCA btl: smcuda (MCA v2.1, API v3.1, Component v4.1.5)</div><div> MCA btl: tcp (MCA v2.1, API v3.1, Component v4.1.5)</div><div> MCA btl: self (MCA v2.1, API v3.1, Component v4.1.5)</div><div> MCA btl: vader (MCA v2.1, API v3.1, Component v4.1.5)</div></div><div><br></div><div><sign signid="99"><div><font>Xinhai<br></font></div><div style="font-size:14px;font-family:Verdana;color:#000;" class="signRealArea"><div><div class="c_detail" style="margin:10px 0 0 0;"><h4 class="name" style="margin:0;font-size:14px;font-weight:bold;line-height:28px;">Yue Xinhai (岳新海)</h4><p class="department" style="margin:0;line-height:22px;color:#a0a0a0;">Southern University of Science and Technology / Graduate Student, Class of 2023</p><p class="addr" style="margin:0;line-height:22px;color:#a0a0a0;">1088 Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong</p></div></div></sign></div><div> </div><div><includetail><div style="font:Verdana normal 14px;color:#000;"><div style="FONT-SIZE: 12px;FONT-FAMILY: Arial Narrow;padding:2px 0 2px 0;">------------------ Original ------------------</div><div style="FONT-SIZE: 12px;background:#efefef;padding:8px;"><div id="menu_sender"><b>From:</b> "Satish Balay" &lt;balay.anl@fastmail.org&gt;</div><div><b>Date:</b> Tue, Sep 23, 2025 03:25 AM</div><div><b>To:</b> "Yue Xinhai" &lt;12332508@mail.sustech.edu.cn&gt;</div><div><b>Cc:</b> "petsc-dev" &lt;petsc-dev@mcs.anl.gov&gt;</div><div><b>Subject:</b> Re: [petsc-dev] Question on PETSc + CUDA configuration with MPI on cluster</div></div><div> </div><div style="position:relative;"><div id="tmpcontent_res"></div>
<br>What do you get (with your OpenMPI install) for: orte-info | grep 'MCA btl'<br><br>With a CUDA-built OpenMPI, I get:<br>balay@petsc-gpu-01:/scratch/balay/petsc$ ./arch-linux-c-debug/bin/orte-info | grep 'MCA btl'<br> MCA btl: smcuda (MCA v2.1, API v3.1, Component v4.1.6)<br> MCA btl: openib (MCA v2.1, API v3.1, Component v4.1.6)<br> MCA btl: self (MCA v2.1, API v3.1, Component v4.1.6)<br> MCA btl: tcp (MCA v2.1, API v3.1, Component v4.1.6)<br> MCA btl: vader (MCA v2.1, API v3.1, Component v4.1.6)<br><br>And without CUDA:<br>balay@petsc-gpu-01:/scratch/balay/petsc.x$ ./arch-test/bin/orte-info | grep 'MCA btl'<br> MCA btl: openib (MCA v2.1, API v3.1, Component v4.1.6)<br> MCA btl: self (MCA v2.1, API v3.1, Component v4.1.6)<br> MCA btl: tcp (MCA v2.1, API v3.1, Component v4.1.6)<br> MCA btl: vader (MCA v2.1, API v3.1, Component v4.1.6)<br><br>I.e., "smcuda" should be listed for a CUDA-enabled OpenMPI.<br><br>It is not clear whether GPU-aware MPI makes a difference for all MPI implementations (or versions), so it is good to verify. [It is a performance issue anyway, so primarily relevant when taking timing measurements.]<br><br>Satish<br><br>On Mon, 22 Sep 2025, Yue Xinhai (岳新海) wrote:<br><br>> Dear PETSc Team,<br>> <br>> I am encountering an issue when running PETSc with CUDA support on a cluster. When I set the vector type to VECCUDA, PETSc reports that my MPI is not GPU-aware. 
However, the MPI library (OpenMPI 4.1.5) I used to configure PETSc was built with the --with-cuda option enabled.<br>> <br>> Here are some details:<br>> PETSc version: 3.20.6<br>> MPI: OpenMPI 4.1.5, configured with --with-cuda<br>> GPU: RTX 3090<br>> CUDA version: 12.1<br>> I have attached both my PETSc configure command and OpenMPI configure command for reference.<br>> <br>> My questions are:<br>> <br>> 1. Even though I enabled --with-cuda in OpenMPI, why does PETSc still report that MPI is not GPU-aware?<br>> <br>> 2. Are there additional steps or specific configuration flags required (either in OpenMPI or PETSc) to ensure GPU-aware MPI is correctly detected?<br>> <br>> Any guidance or suggestions would be greatly appreciated.<br>> <br>> Best regards,<br>> <br>> Xinhai Yue<br><br>
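[Editor's note] Beyond the orte-info check above, OpenMPI can be asked directly whether it was built with CUDA support, via ompi_info. The commands below are a diagnostic sketch, assuming the OpenMPI `bin` directory is on PATH; the `-use_gpu_aware_mpi 0` PETSc runtime option is a documented workaround, not a fix.

```shell
# Ask the OpenMPI build directly whether --with-cuda took effect;
# a CUDA-enabled build reports ...mpi_built_with_cuda_support:value:true
ompi_info --parsable --all | grep mpi_built_with_cuda_support:value

# Confirm which MPI binaries are actually found at runtime: a cluster-default
# (non-CUDA) OpenMPI earlier in PATH is a common cause of this symptom.
which orte-info mpirun mpicc

# Workaround only: tell PETSc not to use GPU-aware MPI (data is staged
# through the host, so correctness is unaffected, only performance), e.g.:
#   mpirun -n 2 ./app -vec_type cuda -use_gpu_aware_mpi 0
```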
</div></div><!--<![endif]--></includetail></div>
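[Editor's note] For completeness: OpenMPI also exposes a compile-time macro and a runtime query for CUDA awareness through its mpi-ext.h extension header. A minimal self-contained check, assuming an OpenMPI installation (the header and `MPIX_Query_cuda_support()` are OpenMPI extensions and may be absent in other MPI implementations):

```c
/* cuda_aware_check.c - report whether this OpenMPI is CUDA-aware.
 * Build and run with the mpicc from the CUDA-built OpenMPI, e.g.:
 *   mpicc cuda_aware_check.c -o cuda_aware_check && mpirun -n 1 ./cuda_aware_check
 */
#include <stdio.h>
#include <mpi.h>
#include <mpi-ext.h>  /* OpenMPI extension header; defines MPIX_CUDA_AWARE_SUPPORT */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
#if defined(MPIX_CUDA_AWARE_SUPPORT) && MPIX_CUDA_AWARE_SUPPORT
    /* Built with CUDA support; also check the runtime answer, since
     * support can still be disabled at run time. */
    printf("compile-time: CUDA-aware; runtime: %s\n",
           MPIX_Query_cuda_support() ? "yes" : "no");
#elif defined(MPIX_CUDA_AWARE_SUPPORT)
    printf("compile-time: NOT CUDA-aware\n");
#else
    printf("this MPI does not define MPIX_CUDA_AWARE_SUPPORT\n");
#endif
    MPI_Finalize();
    return 0;
}
```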