[MPICH] running an MPI prog
Anthony Chan
chan at mcs.anl.gov
Fri Jul 27 13:20:11 CDT 2007
You probably used mpiexec from another implementation. Running a cpi
executable compiled with MPICH2 under OpenMPI's mpiexec would produce
exactly the output that you saw.
A.Chan
> ~/openmpi/install_123_gcc4/bin/mpiexec -n 3 cpi
Process 0 of 1 is on bblogin
pi is approximately 3.1415926544231341, Error is 0.0000000008333410
wall clock time = 0.000649
Process 0 of 1 is on bblogin
pi is approximately 3.1415926544231341, Error is 0.0000000008333410
wall clock time = 0.000609
Process 0 of 1 is on bblogin
pi is approximately 3.1415926544231341, Error is 0.0000000008333410
wall clock time = 0.000638
On Fri, 27 Jul 2007 Mark.Donohue at avtechgroup.com wrote:
> Hello Anthony. Thank you. It was brought to my attention that an MPI
> code must be run using exactly the same MPI implementation that it was
> compiled with.
> -Mark
>
> PS. If you have any input on what goes into the MPICH user's manual,
> this would be a good thing to add!
>
> -----Original Message-----
> From: Anthony Chan [mailto:chan at mcs.anl.gov]
> Sent: Friday, July 27, 2007 10:08 AM
> To: Mark Donohue
> Cc: mpich-discuss at mcs.anl.gov
> Subject: RE: [MPICH] running an MPI prog
>
>
> Your cpi output suggests the 3 cpi processes are not talking to one
> another. Here is my cpi output:
>
> > ..../mpiexec -n 3 cpi
> Process 2 of 3 is on bblogin
> Process 0 of 3 is on bblogin
> Process 1 of 3 is on bblogin
> pi is approximately 3.1415926544231318, Error is 0.0000000008333387
> wall clock time = 0.005751
>
> Try using mpdcheck to see if your network is set up correctly.
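>
> A minimal mpdcheck sequence looks roughly like this (see the MPICH2
> installer's guide for the full set of tests): run mpdcheck with no
> arguments on each host to look for local configuration problems, then
> run the -s/-c pair between two hosts to test connectivity, where the
> port number is the one printed by the -s step:
>
> Aero1% mpdcheck
> Aero1% mpdcheck -s
> Aero2% mpdcheck -c Aero1 <port printed by mpdcheck -s>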
>
> A.Chan
>
> On Fri, 27 Jul 2007 Mark.Donohue at avtechgroup.com wrote:
>
> > Hey Anthony. Thanks for the reply. Yeah this seems to run ok. Here's
> > what I get (I only had three machines in the ring at that time). Is
> > this what you would expect to see?
> >
> > Aero1% mpiexec -n 3 cpi
> > Process 0 of 1 on Aero2
> > pi is approximately 3.1415926544231341, Error is 0.0000000008333410
> > wall clock time = 0.000453
> > Process 0 of 1 on Aero1
> > pi is approximately 3.1415926544231341, Error is 0.0000000008333410
> > wall clock time = 0.000622
> > Process 0 of 1 on Aero3
> > pi is approximately 3.1415926544231341, Error is 0.0000000008333410
> > wall clock time = 0.000455
> >
> >
> > -----Original Message-----
> > From: Anthony Chan [mailto:chan at mcs.anl.gov]
> > Sent: Thursday, July 26, 2007 6:29 PM
> > To: Mark Donohue
> > Cc: mpich-discuss at mcs.anl.gov
> > Subject: Re: [MPICH] running an MPI prog
> >
> >
> > Can you run a simple MPI program like cpi in
> > <mpich2-build-dir>/examples ?
> >
> > cd <mpich2-build-dir>/examples
> > <mpich2-build-dir>/bin/mpiexec -n 4 cpi
> >
> > A.Chan
> >
> > On Thu, 26 Jul 2007 Mark.Donohue at avtechgroup.com wrote:
> >
> > > Hi y'all. Please help a simpleton with some basic MPICH knowledge.
> > > I am trying to run an MPI version of a CFD program that I use
> > > regularly. I installed MPICH2 on our Linux cluster, but when I try
> > > to run, I get the error at the bottom. I split my computational
> > > domain into two partitions, and the program complains that the
> > > number of partitions (2) doesn't equal the number of processors.
> > > Huh? Is there a way to fix this? I realize that this error is
> > > coming from my MPI code and not from MPICH itself, but I was hoping
> > > to gain some insight into why this is happening. For some reason I
> > > thought specifying "-n 2" meant that it would run on two
> > > processors. None of the documentation I've found has been able to
> > > answer this for me. Thanks a bunch!
> > > -Mark
> > >
> > >
> > >
> > > Aero1% mpiexec -l -machinefile myhosts -n 2 usm3d_52p.mpip4 Baseline_Elev
> > > 0: ** WARNING: fp stack is not empty
> > > 0: ** after call to etime in routine usm52p
> > > 0:
> > > 0: ** popping the fp stack and continuing...
> > > 1: ** WARNING: fp stack is not empty
> > > 1: ** after call to etime in routine usm52p
> > > 1:
> > > 1: ** popping the fp stack and continuing...
> > > 1: FORTRAN STOP
> > > 1: the MPI has been initialized on 1 processors
> > > 1: no. of partitions != no. of processors
> > > 0: FORTRAN STOP
> > > 0: the MPI has been initialized on 1 processors
> > > 0: no. of partitions != no. of processors
> > >
> > >
> >
> >
>
>