[mpich-discuss] mpich2 and IPoIB

Fabio Motezuki kenji_japanese at ig.com.br
Wed Oct 29 20:35:55 CDT 2008


Returning to this: I compiled the 1.0.8rc1 source, and using mpd, the ifhn 
entries in the mpd.hosts file worked well, but only when specifying the IP 
address directly; interface name resolution did not work.
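
For reference, the mpd.hosts entries that worked looked like this (hostnames 
and addresses are illustrative, not the actual cluster values):

```
node01 ifhn=192.168.2.101
node02 ifhn=192.168.2.102
```

where each ifhn= value is the IPoIB address of the ib0 interface on that node.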

In 1.0.7 I tried to use ifhn in the machinefile for smpd execution, but it 
had no effect, even when using the IP address of the InfiniBand interface.
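
Based on the "-ifhn hostname" form Rajeev found by grepping the smpd 
directory, the machinefile entries I tried were of this shape (illustrative 
addresses):

```
node01 -ifhn 192.168.2.101
node02 -ifhn 192.168.2.102
```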


Fabio



Jayesh Krishna wrote:
>
>  Hi,
>   Yes, you can use the "-ifhn" option in the machinefile with smpd.
>   Let us know if you have any problems.
>
> Regards,
> Jayesh
>
> -----Original Message-----
> From: owner-mpich-discuss at mcs.anl.gov 
> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Rajeev Thakur
> Sent: Monday, October 27, 2008 2:22 PM
> To: mpich-discuss at mcs.anl.gov
> Subject: RE: [mpich-discuss] mpich2 and IPoIB
>
> Probably :-). From doing a grep in the smpd directory, it looks like it 
> might accept "-ifhn hostname".
>
> Rajeev
>
> > -----Original Message-----
> > From: owner-mpich-discuss at mcs.anl.gov
> > [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Fabio Motezuki
> > Sent: Monday, October 27, 2008 2:12 PM
> > To: mpich-discuss at mcs.anl.gov
> > Subject: Re: [mpich-discuss] mpich2 and IPoIB
> >
> > Thanks Rajeev,
> >
> > I'll try it. Is there a similar option for smpd?
> >
> > Fabio
> >
> >
> > Rajeev Thakur wrote:
> > > Try specifying the interface name using ifhn= in the mpd.hosts file as
> > > described in Sec 5.1.5 of the installation guide.
> > >
> > > http://www.mcs.anl.gov/research/projects/mpich2/documentation/files/mpich2-1.0.8-installguide.pdf
> > >
> > > Rajeev
> > >
> > >  
> > >> -----Original Message-----
> > >> From: owner-mpich-discuss at mcs.anl.gov
> > >> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Fabio Motezuki
> > >> Sent: Monday, October 27, 2008 5:35 AM
> > >> To: mpich-discuss at mcs.anl.gov
> > >> Subject: [mpich-discuss] mpich2 and IPoIB
> > >>
> > >> Hi all,
> > >>
> > >> I'm working on a cluster where each node is connected with two
> > >> networks:
> > >>
> > >> eth0 - gigabit ethernet
> > >> ib0 - IP over infiniband
> > >>
> > >> I would like to use the ib0 network for all MPI communications, but
> > >> when I start the cpi example, all communications go through eth0. Is
> > >> this the expected behavior?
> > >>
> > >> I'm launching the example program with "mpiexec -machinefile mf -n 16
> > >> ./cpi", where the file "mf" contains the IP addresses of the
> > >> InfiniBand cards.
> > >>
> > >> Fabio
> > >>
> > >>
> > >>    
> > >
> > >
> > >  
> >
> >
>



More information about the mpich-discuss mailing list