[mpich-discuss] hydra_pmi_proxy using 100% CPU
Pavan Balaji
balaji at mcs.anl.gov
Thu Jan 20 12:53:45 CST 2011
On 01/20/2011 09:51 AM, Colin Hercus wrote:
> The 1GB is MPI messages; about the same is written to stdout.
1GB of stdout might keep the proxy and mpiexec busy for a long time
moving the data. Shouldn't you be considering MPI-I/O if you are
writing such large amounts?
> My PC for compile and (static) link was at 1.2.1 and I was running on a
> server with 1.3.1 when I first noticed this. I then upgraded to 1.3.1
> but it made no difference.
Yes, mpiexec is backward compatible with any version of MPICH2 (or any
other derivative of MPICH2). But it's best to always upgrade to the
latest version.
> When would mpiexec and hydra_pmi_proxy use CPU? Is it just related to
> stdout?
Mostly only for stdout/stderr/stdin, and when one MPI process needs to
know the address of another remote MPI process (typically when you send
the first message to it).
-- Pavan
--
Pavan Balaji
http://www.mcs.anl.gov/~balaji