[MPICH] running an MPI prog

Rajeev Thakur thakur at mcs.anl.gov
Thu Jul 26 22:54:43 CDT 2007


Make sure you are compiling and running your program with mpif90 and mpiexec
from the same MPI implementation. Give the full path to mpiexec if needed.
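For example, you can check which binaries your shell actually picks up
(the install prefix below is illustrative; substitute your own):

  which mpif90
  which mpiexec
  /opt/mpich2/bin/mpiexec -n 2 ./a.out

If "which" reports mpif90 and mpiexec from two different installations,
use the full path to the bin directory of a single installation for both.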

Rajeev 

> -----Original Message-----
> From: owner-mpich-discuss at mcs.anl.gov 
> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Anthony Chan
> Sent: Thursday, July 26, 2007 7:29 PM
> To: Mark.Donohue at avtechgroup.com
> Cc: mpich-discuss at mcs.anl.gov
> Subject: Re: [MPICH] running an MPI prog
> 
> 
> Can you run a simple MPI program like cpi in
> <mpich2-build-dir>/examples?
> 
> cd <mpich2-build-dir>/examples
> <mpich2-build-dir>/bin/mpiexec -n 4 cpi
> 
> A.Chan
> 
> On Thu, 26 Jul 2007 Mark.Donohue at avtechgroup.com wrote:
> 
> > Hi y'all.  Please help a simpleton with some basic MPICH knowledge.
> > I am trying to run an MPI version of a CFD program that I use
> > regularly.  I installed MPICH2 on our Linux cluster, but when I try
> > to run, I get the error at the bottom.  I split my computational
> > domain into two partitions, and the program complains that the
> > number of partitions (2) doesn't equal the number of processors.
> > Huh?  Is there a way to fix this?  I realize that this error is
> > coming from my MPI code and not from MPICH itself, but I was hoping
> > to gain some insight into why this is happening.  For some reason I
> > thought specifying "-n 2" meant that it would begin running with
> > two processors.  All the documentation I've found hasn't been able
> > to answer this for me.  Thanks a bunch!
> > -Mark
> >
> >
> >
> > Aero1% mpiexec -l -machinefile myhosts -n 2 usm3d_52p.mpip4 Baseline_Elev
> > 0: ** WARNING: fp stack is not empty
> > 0: ** after call to etime in routine usm52p
> > 0:
> > 0: ** popping the fp stack and continuing...
> > 1: ** WARNING: fp stack is not empty
> > 1: ** after call to etime in routine usm52p
> > 1:
> > 1: ** popping the fp stack and continuing...
> > 1: FORTRAN STOP
> > 1:  the MPI has been initialized on            1  processors
> > 1:   no. of partitions != no. of processors
> > 0: FORTRAN STOP
> > 0:  the MPI has been initialized on            1  processors
> > 0:   no. of partitions != no. of processors
> >
> >
> 
> 
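For context, the "no. of partitions != no. of processors" check in Mark's
solver presumably compares the size of MPI_COMM_WORLD against the partition
count. Here is a minimal Fortran sketch of that kind of check (hypothetical
names; this is not the actual usm3d source):

  program check_size
    use mpi
    implicit none
    integer :: ierr, nprocs, rank
    integer, parameter :: npart = 2   ! partition count, hardcoded for illustration

    call MPI_Init(ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

    ! If the launcher and the MPI library the program was linked against
    ! do not match, each process comes up as its own MPI_COMM_WORLD of
    ! size 1, which is the symptom in the log above.
    if (nprocs /= npart) then
       print *, 'the MPI has been initialized on', nprocs, 'processors'
       print *, 'no. of partitions != no. of processors'
    end if

    call MPI_Finalize(ierr)
  end program check_size

If this program, built with the MPICH2 mpif90 and launched with the matching
mpiexec, reports nprocs = 2, the installation itself is fine and the mismatch
is in how the solver binary was built.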