[mpich-discuss] MPI on a cluster of dual-CPU machines

Rajeev Thakur thakur at mcs.anl.gov
Sat Jul 18 09:05:10 CDT 2009


If all your changes are in the romio directory, you may just be able to
plunk your version of romio into 1.1.

Rajeev 

> -----Original Message-----
> From: mpich-discuss-bounces at mcs.anl.gov 
> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of 
> Christina Patrick
> Sent: Friday, July 17, 2009 7:07 PM
> To: mpich-discuss at mcs.anl.gov
> Subject: Re: [mpich-discuss] MPI on a cluster of dual-CPU machines
> 
> Hi,
> 
> Just one more question, though. You say that the latest 
> release detects this configuration automatically. I am using 
> mpich2-1.0.8. Does this version also detect the presence of 
> dual CPUs automatically? I have made quite a few code changes 
> to this version and don't really want to migrate at this 
> point, so I need to be sure.
> 
> Thanks and Regards,
> Christina.
> 
> On Fri, Jul 17, 2009 at 6:56 PM, Pavan Balaji <balaji at mcs.anl.gov> wrote:
> >
> >> What is the most efficient way of doing this ... which options 
> >> should I specify while configuring the software, and how do I 
> >> specify the options to mpdboot and mpiexec to use this feature of MPI?
> >
> > Just use the default configuration in the latest release of MPICH2; 
> > it'll automatically figure out that there are multiple CPUs on the 
> > same node. For mpdboot, there are different ways of specifying this:
> >
> > $ cat hostfile
> > host1
> > host2
> >
> > ... this means that the first process will be on host1, the second 
> > on host2, the third back on host1, the fourth on host2, and so on.
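> >
> > As a quick sanity check of that round-robin placement (assuming the 
> > mpd ring is already up via "mpdboot -f hostfile -n 2"), you can run 
> > hostname under mpiexec; each of the two hosts should appear twice:
> >
> >  $ mpiexec -n 4 hostname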
> >
> > Another way is:
> >
> > $ cat hostfile
> > host1:2
> > host2:2
> >
> > ... this means that the first two processes will be on host1, and 
> > the next two on host2.
> >
> > If you are using the second way, make sure you use the --ncpus 
> > option to mpdboot:
> >
> >  $ mpdboot -f hostfile -n 2 --ncpus=2
> >
> >  $ mpiexec -n 4 ./app
> >
> > Also remember that the "-n" option to mpdboot specifies the number 
> > of physical nodes, not the number of CPUs, while in mpiexec it 
> > refers to the number of processes.
> >
> > If you don't want to deal with all these things, an alternative 
> > approach is to bypass mpd and use the new Hydra process manager. 
> > You can just do:
> >
> > $ mpiexec.hydra -f hostfile -n 4 ./app
> >
> > ... you won't need mpdboot in this case.
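> >
> > Putting it together, a minimal Hydra-based run on your two dual-CPU 
> > machines might look like this (host1/host2 are placeholder names; 
> > Hydra also understands the host:N hostfile syntax):
> >
> >  $ cat hostfile
> >  host1:2
> >  host2:2
> >
> >  $ mpiexec.hydra -f hostfile -n 4 ./app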
> >
> > Hope that helps.
> >
> >  -- Pavan
> >
> > --
> > Pavan Balaji
> > http://www.mcs.anl.gov/~balaji
> >
> 


