[petsc-users] Regarding printing to standard output, and a possible mistake in the code comments in PetscBinaryWrite.m

Satish Balay balay at mcs.anl.gov
Mon Jan 17 19:35:12 CST 2011


On Mon, 17 Jan 2011, Matthew Knepley wrote:

> > (3) Should I run the mpd daemon before using mpiexec? The MPICH2 that
> > I had installed prior to my PETSc required me to type "mpd &"
> > before program execution.
> >
> > But it seems that with my PETSc mpiexec I don't need mpd. Should I still
> > type it? I am not sure whether this affects program performance.
> >
> 
> The new version of MPICH uses hydra, not mpd, to manage the startup.

Actually, with petsc-3.1 --download-mpich uses pm=gforker [not hydra],
which limits MPI to a single node [with fork used to launch all the MPI jobs].
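For illustration [the process count and executable name here are just
placeholders, not from the original message], with gforker every rank is
forked on the local machine, so a run like

  mpiexec -n 4 ./ex2

starts 4 processes on the one node; gforker has no mechanism for launching
on remote hosts.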

If you need something more specific from the MPI configuration, you should
build your own MPI appropriately and then configure PETSc with it, or
specify additional configure options alongside --download-mpich:

./configure --help | grep mpich
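For example [a sketch only; the exact option names can vary between PETSc
versions, so verify them against the grep output above], you could either
point PETSc at an MPI you built yourself, or pass the process-manager
choice through to the MPICH build:

  ./configure --with-mpi-dir=/path/to/your/mpi
  ./configure --download-mpich --download-mpich-pm=hydra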


With petsc-dev we default to '--with-pm=hydra --with-hydra-bss=fork,ssh',
so mpiexec will again default to fork [i.e. 1 node]. But one can use
'mpiexec -bootstrap ssh' to switch to multi-node hydra.
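For example [the hostfile name, process count, and executable are
placeholder assumptions], a multi-node run over ssh would look something
like:

  mpiexec -bootstrap ssh -f hostfile -n 8 ./ex2

where 'hostfile' lists the machines to launch on, one hostname per line.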

The PETSc defaults cater to easy software development [this default works
for most users]. Performance runs with multiple nodes are usually done on
clusters/high-performance machines, which usually have a tuned MPI
installed anyway.

Satish

