[MPICH] Selecting which ethernet interface to use

Darius Buntinas buntinas at mcs.anl.gov
Wed Aug 22 21:41:56 CDT 2007


I don't know much about SLURM, but looking at the docs, I don't think it 
supports selecting the interface directly.

But if SLURM lets you set environment variables for each process, you 
can control which address other processes use to connect to a given 
process by setting the MPICH_INTERFACE_HOSTNAME environment variable 
for that process.

E.g., if the address for a node is bb01 and the address for the ib 
interface is bb01-ib, set MPICH_INTERFACE_HOSTNAME=bb01-ib for any 
processes on that node.
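One way to do that under a launcher like srun is a small wrapper script 
run on each node. This is just a sketch: the script name ib_wrap.sh and 
the assumption that every node's IB hostname is its short hostname plus 
a "-ib" suffix (as in the bb01/bb01-ib example above) are mine, not part 
of MPICH.

```shell
#!/bin/sh
# ib_wrap.sh -- hypothetical wrapper: derive this node's IB hostname by
# appending "-ib" to its short hostname (an assumed naming convention),
# export it for MPICH, then exec the real MPI binary.
MPICH_INTERFACE_HOSTNAME="$(hostname -s)-ib"
export MPICH_INTERFACE_HOSTNAME
exec "$@"
```

You would then launch with something like "srun -N num_nodes 
./ib_wrap.sh foo", so each task sets the variable for its own node 
before foo starts.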

Another way, if you can use mpd, would be to create a machinefile that 
looks like:

<hostname1>:<ncpus> ifhn=<ib-hostname1>
<hostname2>:<ncpus> ifhn=<ib-hostname2>
...

Where hostnameX is the name that mpd will use to start the process and 
connect to it, ncpus is the number of processes you want to start on 
that node, and ib-hostnameX is the address for the ib interface for that 
node.
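For instance, assuming two nodes named bb01 and bb02 with four CPUs 
each and IB interfaces named with a "-ib" suffix (the naming scheme 
from the example above), the machinefile could look like:

```
bb01:4 ifhn=bb01-ib
bb02:4 ifhn=bb02-ib
```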

Then, if your machinefile is called ./mf, do

   mpiexec -n num_nodes -machinefile ./mf foo

Hope that helps.

-d


On 08/22/2007 08:51 PM, Jeff Squyres wrote:
> Greetings.
> 
> Is there a way to have MPICH2 select which ethernet interface to use if 
> you are using SLURM to launch jobs?
> 
> I.e., I'm compiling and running my MPI application with:
> 
>     mpicc foo.c -L/path/to/slurm -lpmi -o foo
>     srun -N num_nodes foo
> 
> This seems to always choose eth0, but I'd much rather have it use ib0 
> (IP over InfiniBand).  Is there a way to make it use ib0?  I didn't see 
> anything about this in the user guide.
> 
> Any insight would be appreciated; thanks.
> 




More information about the mpich-discuss mailing list