[petsc-users] errors with hypre with MPI and multiple GPUs on a node

Victor Eijkhout eijkhout at tacc.utexas.edu
Thu Feb 1 11:26:56 CST 2024


Only for mvapich2-gdr:

#!/bin/bash
# Usage: mpirun -n <num_proc> MV2_USE_AFFINITY=0 MV2_ENABLE_AFFINITY=0 ./launch ./bin

# Give each local rank its own GPU and a disjoint set of physical cores.
export CUDA_VISIBLE_DEVICES=$MV2_COMM_WORLD_LOCAL_RANK
case $MV2_COMM_WORLD_LOCAL_RANK in
        0) cpus=0-3 ;;
        1) cpus=64-67 ;;
        2) cpus=72-75 ;;
esac

exec numactl --physcpubind=$cpus "$@"
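For what it's worth, the same wrapper idea carries over to other MPI stacks by swapping the local-rank variable. A minimal sketch for Open MPI, assuming its OMPI_COMM_WORLD_LOCAL_RANK environment variable and the same core layout as the script above:

```shell
#!/bin/bash
# Sketch only: maps an Open MPI local rank to one GPU and a core range.
# The rank->core mapping below mirrors the mvapich2-gdr script above
# and is node-specific; adjust it to your machine's topology.

pin_cpus_for_rank() {
    case "$1" in
        0) echo 0-3   ;;
        1) echo 64-67 ;;
        2) echo 72-75 ;;
    esac
}

rank=${OMPI_COMM_WORLD_LOCAL_RANK:-0}   # Open MPI's local-rank variable
export CUDA_VISIBLE_DEVICES=$rank       # one GPU per local rank
cpus=$(pin_cpus_for_rank "$rank")

# Run the real binary pinned to its cores (only when a command is given).
if [ $# -gt 0 ]; then
    exec numactl --physcpubind="$cpus" "$@"
fi
```

Invoked the same way, e.g. `mpirun -n 3 ./launch ./bin`, with any affinity handling by the MPI library itself disabled.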

