A question about parallel computation

Satish Balay balay at mcs.anl.gov
Thu Jul 9 09:52:49 CDT 2009


You'll have to learn about the MPI you've installed.

If it's MPICH - how did you install it? Did you install it with PETSc,
or install MPICH separately?

Did you make sure it's installed with mpd? [This is the default if it's
installed separately. However, if you've installed it with PETSc, you
will need the additional option: --download-mpich-pm=mpd]

And then have you configured mpd correctly across all the nodes you'd
like to use?
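As a rough sketch of what that setup looks like (assuming the classic
MPICH mpd process manager; the hostnames file and node count below are
placeholders for your cluster):

```shell
# Configure PETSc to download MPICH with the mpd process manager
./configure --download-mpich --download-mpich-pm=mpd

# List the node hostnames, one per line, in ~/mpd.hosts, then boot
# the mpd ring across all 4 nodes and verify every node joined it
mpdboot -n 4 -f ~/mpd.hosts
mpdtrace    # should print all 4 hostnames
```

If mpdtrace lists only one host, jobs will pile onto that node no
matter what mpiexec options you pass.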

These are all MPI issues - you should figure them out before
attempting PETSc.
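One quick way to check the MPI setup independently of PETSc is a
minimal hello-world, compiled with the mpicc and launched with the
mpiexec from the same MPI installation - a sketch:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &len);

    /* With 4 nodes x 8 processors, "mpiexec -n 32 ./hello" should
       report 32 distinct ranks spread over 4 distinct hostnames. */
    printf("rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```

If every process prints "rank 0 of 1", the launcher does not match the
MPI the program was built against - the symptom described below.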

Satish

On Thu, 9 Jul 2009, Yin Feng wrote:

> Firstly, thanks for all your replies!
> 
> I changed the compiler to MPICH and ran a sample successfully, but the
> problem is still there.
> I ran my code on 4 nodes, each with 8 processors. The load I saw
> looked like:
>  NODE    LOAD
>     0      32
>     1       0
>     2       0
>     3       0
> 
> Normally, in that case, we should see:
>  NODE    LOAD
>     0       8
>     1       8
>     2       8
>     3       8
> 
> So, anyone got any idea about this?
> 
> Thank you in advance!
> 
> Sincerely,
> YIN
> 
> On Wed, Jul 8, 2009 at 4:15 PM, Xin Qian<chianshin at gmail.com> wrote:
> > You can first try running the standalone MPI samples that come with
> > OpenMPI, to make sure OpenMPI itself is running all right.
> >
> > Thanks,
> >
> > Xin Qian
> >
> > On Wed, Jul 8, 2009 at 4:48 PM, Yin Feng <yfeng1 at tigers.lsu.edu> wrote:
> >>
> >> I tried building PETSc with OpenMPI and used the mpirun provided by
> >> OpenMPI. But when I checked the load on each node, I found that the
> >> master node takes all the load and the others are just idle.
> >>
> >> Do you have any idea about this situation?
> >>
> >> Thanks in advance!
> >>
> >> Sincerely,
> >> YIN
> >>
> >> On Wed, Jul 8, 2009 at 1:26 PM, Satish Balay<balay at mcs.anl.gov> wrote:
> >> > Perhaps you are using the wrong mpiexec or mpirun. You'll have to use
> >> > the corresponding mpiexec from the MPI you've used to build PETSc.
> >> >
> >> > Or if the MPI has special instructions on usage, you should follow
> >> > them [for example: some clusters require extra options to mpiexec].
> >> >
> >> > Satish
> >> >
> >> > On Wed, 8 Jul 2009, Yin Feng wrote:
> >> >
> >> >> I am a beginner with PETSc.
> >> >> I tried PETSc example 5 (ex5) with 4 nodes.
> >> >> However, it seems every node is doing exactly the same thing and
> >> >> outputs the same results again and again. Is this a problem with
> >> >> PETSc or the MPI installation?
> >> >>
> >> >> Thank you in advance!
> >> >>
> >> >> Sincerely,
> >> >> YIN
> >> >>
> >> >
> >> >
> >
> >
> >
> > --
> > QIAN, Xin (http://pubpages.unh.edu/~xqian/)
> > xqian at unh.edu chianshin at gmail.com
> >
> 



More information about the petsc-users mailing list