[MPICH] mpirun vs. mpiexec

Steve Young chemadm at hamilton.edu
Tue Jun 5 13:34:20 CDT 2007


Sorry, I should have clarified better.... I'm wondering whether, if I
needed say 4 nodes, starting my own ring on just those 4 nodes would
perform better than using the ring that is running across all 38 nodes.
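
For instance, something like this in the PBS script is roughly what I
have in mind (just a sketch; mpd.hosts and NUM are placeholder names,
and $PBS_NODEFILE is what Torque hands the job):

    sort -u $PBS_NODEFILE > mpd.hosts   # one line per host, not per cpu
    NUM=`wc -l < mpd.hosts`
    mpdboot -n $NUM -f mpd.hosts        # ring on only this job's nodes
    mpiexec -n 8 sander.MPI -O ...
    mpdallexit                          # shut the ring down when done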

-Steve

On Tue, 2007-06-05 at 13:22 -0500, Anthony Chan wrote:
> 
> On Tue, 5 Jun 2007, Steve Young wrote:
> 
> > I was contemplating switching over to the setup where each user starts
> > their own ring. I have an older cluster that I am planning on testing
> > this setup with. I am curious to know if there are
> > advantages/disadvantages between the two setups. Would I notice anything
> > different in terms of performance?
> 
> I don't think you will see any difference in MPI performance whether mpds
> are started by root or users...
> 
> > Anyhow, for now I'm looking at the mpiexec that you and Garrick
> > suggested. Thanks a bunch for the quick replies!
> >
> > -Steve
> >
> >
> > On Tue, 2007-06-05 at 11:46 -0500, Anthony Chan wrote:
> > >
> > > On Tue, 5 Jun 2007, Steve Young wrote:
> > >
> > > > Hello,
> > > > 	I am trying to understand the difference between using mpirun
> > > > and using mpiexec.
> > > >
> > > > Currently, we have a cluster (x86_64 with 38 nodes - 4 CPUs per node)
> > > > that is set up with mpich2-1.0.5 and running a ring that is started
> > > > across all the nodes by root.
> > > >
> > > > We are also using PBS (torque-2.0.0p7) to manage the resources.
> > > >
> > >
> > > Have you tried starting the mpd ring as a regular user? i.e., doing
> > > mpdboot in your PBS script? If you don't want to start mpd within
> > > the PBS script, you may want to consider using Pete Wyckoff's mpiexec
> > > from osc.edu with mpich2+torque.
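> > >
> > > (My understanding is that the OSC mpiexec talks to PBS through its
> > > task manager interface and picks up the job's nodes by itself, so
> > > the launch line in the job script would be roughly just
> > >
> > >     mpiexec -n 8 sander.MPI -O ...
> > >
> > > with no mpd ring or machinefile to manage.)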
> > >
> > > > Our main problem is with the sander.MPI program from the Amber9
> > > > software, but I have been able to reproduce the same problem using
> > > > the simple bounce program.
> > > >
> > > >
> > > > Now, first, if I use mpirun, the program runs as expected:
> > > >
> > > > mpirun -np 8 sander.MPI -O......
> > > >
> > > > However, with mpirun the processes don't go to the nodes that PBS
> > > > allocates to the job. When I try to give mpirun the -machinefile
> > > > argument, mpirun complains; it doesn't appear to know about that
> > > > option.
> > >
> > > In mpich2, mpirun points to mpiexec.  Anyway, you should use the
> > > mpiexec that comes with the process manager.
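> > >
> > > With the mpd-based mpiexec, something along these lines (a sketch;
> > > it assumes an mpd is already running on each of the job's nodes)
> > > should put the processes on the PBS-assigned nodes:
> > >
> > >     mpiexec -machinefile $PBS_NODEFILE -n 8 sander.MPI -O ...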
> > >
> > > A.Chan
> > >
> >
> >
