[mpich-discuss] MPIRUN

Nicolas Rosner nrosner at gmail.com
Thu Jan 12 10:59:30 CST 2012


Hi Bharat,

> I have installed MPICH2 in order to use it in gromacs.
>    mdrun_mpi  mdrun -s prefix_.tpr -multi 5 -replex 100

GROMACS user manual (A.5):
"For communications over multiple nodes on a network, there is usually
a program called mpirun with which you can start the parallel
processes. A typical command line looks like: mpirun -p
goofus,doofus,fred 10 mdrun_mpi -s topol -v."

It looks like mdrun_mpi is intended as a replacement for mdrun -- not
to be prepended before it, but to be used instead of it. You can't
have both on the same command line.

What you will need to prepend is mpirun before mdrun_mpi (as opposed
to mdrun_mpi before mdrun). To avoid confusion, you may prefer to call
it "mpiexec" instead of mpirun (both are symlinks pointing to the same
thing, anyway).
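
For example, something along these lines (assuming your input files
are named prefix_0.tpr through prefix_4.tpr, which is what -multi 5
expects given your -s prefix_.tpr):

   mpiexec -n 5 mdrun_mpi -s prefix_.tpr -multi 5 -replex 100

Here -n 5 tells MPICH2 to launch 5 MPI processes, one per simulated
system.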


> Fatal error: The number of nodes (1)
> is not a multiple of the number of simulations (5)

This is unrelated to MPICH2.  It's an application-level restriction:

"With  -multi,  multiple systems are simulated in parallel.  As many
input files are required as the number of  systems.   The  system
number  is appended  to  the  run  input  and  each  output filename
[...].  The number of nodes  per system  is  the total number of nodes
divided by the number of systems."

So when simulating K systems on N nodes, each system gets N/K nodes;
the restriction is that you must choose N and K so that N/K is a
whole number.
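
Concretely:

   N = 1,  K = 5  ->  1/5 is not whole           (your fatal error)
   N = 5,  K = 5  ->  5/5 = 1 node per system    (works)
   N = 10, K = 5  ->  10/5 = 2 nodes per system  (works)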


> Does it mean that the installation is not correct or MPI is not able to
> recognize all the 4 cores of my CPU.

No. I think it's just refusing to run because your choice of N and K
implied that N/K wouldn't be an integer. (Note that your "choice" of N
was probably N=1 so far, since you weren't using mpiexec to run the
application.)


> Process 0 of 4 is on BHARATPC
> Process 1 of 4 is on BHARATPC
> Process 2 of 4 is on BHARATPC
> Process 3 of 4 is on BHARATPC
> pi is approximately 3.1415926544231239, Error is 0.0000000008333307

Good. This means that your MPICH2 installation is working well, at
least within one machine. If you plan to use more than one, you'd also
have to test across machines, but your install is looking pretty
healthy from an MPI point of view.
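
If and when you do want to try that, and assuming a recent MPICH2
where Hydra is the default process manager, you can list your hosts in
a plain text file and hand it to mpiexec (the hostnames and file name
below are just placeholders, of course):

   echo machine1 >  hosts
   echo machine2 >> hosts
   mpiexec -f hosts -n 8 ./cpi

If the "Process i of N is on ..." lines then report both hostnames,
multi-node runs are working too.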


Hope this helps,

Nicolás

