[mpich-discuss] hydra_pmi_proxy using 100% CPU

Colin Hercus colin at novocraft.com
Thu Jan 20 18:34:20 CST 2011


Hi Pavan,

That's 1 GB of stdout from the master process only, in roughly an hour. Is that
really a lot? Writes are in 16 KB blocks; I also read about the same amount
using stdio routines. I'll take a look at MPI-I/O.
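
Something like this minimal, untested sketch is roughly what I have in mind
(the file name "results.out" and the fixed per-rank offsets are just
placeholders, not from my actual program): each rank writes its 16 KB blocks
straight to a shared file, so the data never passes through mpiexec or
hydra_pmi_proxy.

/* Sketch: each rank writes one 16 KB block to a shared file with MPI-I/O
 * instead of funnelling it through stdout.  File name and offsets are
 * hypothetical. */
#include <mpi.h>
#include <string.h>

#define BLOCK 16384                        /* 16 KB, the write size above */

int main(int argc, char **argv)
{
    int rank;
    char buf[BLOCK];
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    memset(buf, 'a' + (rank % 26), BLOCK);     /* stand-in for real output */

    MPI_File_open(MPI_COMM_WORLD, "results.out",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes at its own fixed offset, so no coordination needed. */
    MPI_File_write_at(fh, (MPI_Offset)rank * BLOCK, buf, BLOCK, MPI_CHAR,
                      MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}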

Colin

On Fri, Jan 21, 2011 at 2:53 AM, Pavan Balaji <balaji at mcs.anl.gov> wrote:

>
> On 01/20/2011 09:51 AM, Colin Hercus wrote:
>
>> The 1 GB is MPI messages; about the same amount is written to stdout.
>>
>
> 1GB of stdout might cause the proxy and mpiexec to be busy for a long time
> moving the data. Shouldn't you be considering MPI-I/O if you are writing
> such large amounts?
>
>
>> The PC I compile and (statically) link on was at 1.2.1, and I was running
>> on a server with 1.3.1 when I first noticed this. I then upgraded to
>> 1.3.1, but it made no difference.
>>
>
> Yes, mpiexec is backward compatible with any version of MPICH2 (or any
> other MPICH2 derivative). But it's best to always upgrade to the latest
> version.
>
>
>> When would mpiexec and hydra_pmi_proxy use CPU? Is it just related to
>> stdout?
>>
>
> Mostly only for forwarding stdout/stderr/stdin, plus when one MPI process
> needs to learn the address of another remote MPI process (typically when
> you send it the first message).
>
>
>  -- Pavan
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji
>
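
Following up on the point above that the proxy mostly burns CPU forwarding
stdout/stderr/stdin: if MPI-I/O turns out to be overkill, another rough
sketch (the "rank-%d.out" file-name pattern is just a placeholder) is to
reopen stdout on a per-rank file right after MPI_Init, so the output never
reaches hydra_pmi_proxy or mpiexec at all:

/* Sketch: send each rank's stdout to its own file so Hydra never has to
 * forward it.  The file-name pattern is hypothetical. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    char path[64];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    snprintf(path, sizeof(path), "rank-%d.out", rank);
    if (freopen(path, "w", stdout) == NULL)   /* later printf()s go to file */
        MPI_Abort(MPI_COMM_WORLD, 1);

    printf("rank %d: output no longer routed through hydra_pmi_proxy\n", rank);

    MPI_Finalize();
    return 0;
}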

