[MPICH] running an MPI prog

Mark.Donohue at avtechgroup.com
Fri Jul 27 11:02:04 CDT 2007


Hey Anthony. Thanks for the reply.  Yes, this seems to run OK.  Here's
what I get (I only had three machines in the ring at the time).  Is
this what you would expect to see?

Aero1% mpiexec -n 3 cpi
Process 0 of 1 on Aero2
pi is approximately 3.1415926544231341, Error is 0.0000000008333410
wall clock time = 0.000453
Process 0 of 1 on Aero1
pi is approximately 3.1415926544231341, Error is 0.0000000008333410
wall clock time = 0.000622
Process 0 of 1 on Aero3
pi is approximately 3.1415926544231341, Error is 0.0000000008333410
wall clock time = 0.000455
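
(For reference, the "Process X of Y" line in cpi comes from a rank/size
report along the lines of the minimal sketch below.  This is not the exact
cpi source, just an illustration; with three processes launched as one job
one would normally expect "Process 0 of 3", "Process 1 of 3", and
"Process 2 of 3", so seeing "Process 0 of 1" three times suggests each
process started as its own single-process MPI job.)

    /* Minimal sketch of a rank/size report like the one cpi prints.
     * Not the actual cpi source; names here are illustrative only.  */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size, namelen;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank        */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total processes in the job */
        MPI_Get_processor_name(name, &namelen);

        /* "Process 0 of 1" means this process sees a communicator of size 1. */
        printf("Process %d of %d on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }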
 

-----Original Message-----
From: Anthony Chan [mailto:chan at mcs.anl.gov] 
Sent: Thursday, July 26, 2007 6:29 PM
To: Mark Donohue
Cc: mpich-discuss at mcs.anl.gov
Subject: Re: [MPICH] running an MPI prog


Can you run a simple MPI program, like cpi in <mpich2-build-dir>/examples?

cd <mpich2-build-dir>/examples
<mpich2-build-dir>/bin/mpiexec -n 4 cpi

A.Chan

On Thu, 26 Jul 2007 Mark.Donohue at avtechgroup.com wrote:

> Hi y'all.  Please help a simpleton with some basic MPICH knowledge.  I
> am trying to run an MPI version of a CFD program that I use regularly.
> I installed MPICH2 on our Linux cluster, but when I try to run, I get
> the error at the bottom.  I split my computational domain into two
> partitions, and the program complains that the number of partitions
> (2) doesn't equal the number of processors.  Huh?  Is there a way to
> fix this?  I realize that this error is coming from my MPI code and
> not from MPICH itself, but I was hoping to gain some insight into why
> this is happening.  For some reason I thought specifying "-n 2" meant
> that it would begin running with two processors.  None of the
> documentation I've found has been able to answer this for me.  Thanks a bunch!
> -Mark
>
>
>
> Aero1% mpiexec -l -machinefile myhosts -n 2 usm3d_52p.mpip4 
> Baseline_Elev
> 0: ** WARNING: fp stack is not empty
> 0: ** after call to etime in routine usm52p
> 0:
> 0: ** popping the fp stack and continuing...
> 1: ** WARNING: fp stack is not empty
> 1: ** after call to etime in routine usm52p
> 1:
> 1: ** popping the fp stack and continuing...
> 1: FORTRAN STOP
> 1:  the MPI has been initialized on            1  processors
> 1:   no. of partitions != no. of processors
> 0: FORTRAN STOP
> 0:  the MPI has been initialized on            1  processors
> 0:   no. of partitions != no. of processors
>
>
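
(The "initialized on 1 processors" lines in the quoted log above are the
telling part: a check like "no. of partitions != no. of processors" is
presumably just a comparison against MPI_Comm_size, as in the hypothetical
sketch below.  The real usm3d code is Fortran and is not shown here, so all
names and values are illustrative.  If each process comes up as a separate
single-process job, MPI_Comm_size returns 1 even though mpiexec was given
"-n 2", and the check fails on every process.)

    /* Hypothetical sketch of a partitions-vs-processors sanity check.
     * Illustrative only; not taken from the usm3d source.             */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int nprocs;
        int npartitions = 2;  /* would be read from the partition files in practice */

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        printf(" the MPI has been initialized on %d processors\n", nprocs);

        /* With "-n 2" this should see nprocs == 2; if the processes are not
         * launched as one job, nprocs is 1 and the 2-partition run stops.  */
        if (npartitions != nprocs) {
            printf("  no. of partitions != no. of processors\n");
            MPI_Finalize();
            return 1;
        }

        MPI_Finalize();
        return 0;
    }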



