[mpich-discuss] hydra_pmi_proxy using 100% CPU
Colin Hercus
colin at novocraft.com
Fri Jan 21 00:42:36 CST 2011
Hi Pavan,
First, apologies for calling you Pravind in earlier emails. Pravind is a past
colleague and I was in automatic mode :)
I've been looking at MPI IO and I have a couple of questions:
1. Is it possible to use MPI IO to write to stdout? I couldn't see how to do
this.
2. If I don't use MPI IO and stick to writing to stdout with fwrite(), is it
better to (a) send from the slaves to the master and fwrite() in the master, or
(b) write to stdout directly in the slaves? (A rough sketch of option (a) is
below.)
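
Here is the sketch of option (a) I have in mind, just to make the question
concrete. The tag, the zero-length "all done" message and the demo payload are
my own arbitrary choices; the 16K buffer simply matches the block size I write
in:

#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define CHUNK   16384      /* 16K blocks, same size I write now   */
#define TAG_OUT 1          /* arbitrary tag for output traffic    */

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* master: the only rank that touches stdout */
        char buf[CHUNK];
        int finished = 0, len;
        MPI_Status st;
        while (finished < size - 1) {
            MPI_Recv(buf, CHUNK, MPI_CHAR, MPI_ANY_SOURCE, TAG_OUT,
                     MPI_COMM_WORLD, &st);
            MPI_Get_count(&st, MPI_CHAR, &len);
            if (len == 0)
                finished++;                 /* zero-length = slave is done */
            else
                fwrite(buf, 1, len, stdout);
        }
    } else {
        /* slave: ship output to the master instead of printing it */
        char line[CHUNK];
        snprintf(line, sizeof(line), "output from rank %d\n", rank);
        MPI_Send(line, (int)strlen(line), MPI_CHAR, 0, TAG_OUT, MPI_COMM_WORLD);
        MPI_Send(line, 0, MPI_CHAR, 0, TAG_OUT, MPI_COMM_WORLD);   /* done */
    }

    MPI_Finalize();
    return 0;
}
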
Thanks, Colin
On Fri, Jan 21, 2011 at 2:05 PM, Colin Hercus <colin at novocraft.com> wrote:
> Hi Pravind,
>
> Thought I'd try the MPI-IO you mentioned, so I went to my favourite MPI
> site https://computing.llnl.gov/tutorials/mpi/ and there was no mention of
> it. There are also no links or documentation on the MPICH2 home page, so
> that might explain why I missed it :)
>
> I've found some documentation and may give it a quick try, but I think I've
> also fixed my problem.
>
> I had a bash script to monitor the progress of the MPI job that went like:
>
> mpiexec .... >report.file &
> while ..
> do
>     echo `date` `wc -l report.file`
>     sleep 30
> done
>
> If I monitor file size rather than number of lines, using stat -c%s rather
> than wc -l, then I don't get the slowdowns and everything runs perfectly.
>
> It's interesting because the server has a lot of RAM and the file is in
> cache. When I try wc -l on the 2GB file (the maximum size it reaches) it
> takes about 1 second, and the monitoring loop sleeps for 30 seconds between
> each wc -l, but it obviously interferes with mpiexec and hydra_pmi_proxy.
>
> So I'm happy now, and thanks for your help. Just knowing that the mpiexec and
> hydra_pmi_proxy CPU usage was related to stdout I/O was enough of a clue to
> get me moving in the right direction.
>
> Thanks, Colin
>
>
>
> On Fri, Jan 21, 2011 at 8:34 AM, Colin Hercus <colin at novocraft.com> wrote:
>
>> Hi Pravind,
>>
>> That's 1GB of stdout from the master process only, in roughly an hour. Is
>> that really a lot? Writes are in 16K blocks. I also read about the same
>> amount using stdio routines. I'll take a look at MPI-I/O.
>>
>> Colin
>>
>>
>> On Fri, Jan 21, 2011 at 2:53 AM, Pavan Balaji <balaji at mcs.anl.gov> wrote:
>>
>>>
>>> On 01/20/2011 09:51 AM, Colin Hercus wrote:
>>>
>>>> The 1GB is MPI messages; about the same is written to stdout.
>>>>
>>>
>>> 1GB of stdout might cause the proxy and mpiexec to be busy for a long
>>> time moving the data. Shouldn't you be considering MPI-I/O if you are
>>> writing such large amounts?
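>>>
>>> Roughly, something like the following (just a sketch on my side; the
>>> filename, the 16K chunk size and the dummy payload are made up). Each rank
>>> writes its own block of a shared file directly, so no output is funnelled
>>> through mpiexec or the proxies:
>>>
>>> /* Sketch: each rank writes its own fixed-size block of a shared file
>>>  * with MPI-I/O; nothing passes through mpiexec or hydra_pmi_proxy.
>>>  * Filename, chunk size and payload are made up for illustration. */
>>> #include <mpi.h>
>>> #include <string.h>
>>>
>>> #define CHUNK 16384
>>>
>>> int main(int argc, char **argv)
>>> {
>>>     int rank;
>>>     char buf[CHUNK];
>>>     MPI_File fh;
>>>
>>>     MPI_Init(&argc, &argv);
>>>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>>     memset(buf, 'a' + rank % 26, CHUNK);    /* dummy payload */
>>>
>>>     /* collective open; every rank then writes at its own offset */
>>>     MPI_File_open(MPI_COMM_WORLD, "report.file",
>>>                   MPI_MODE_CREATE | MPI_MODE_WRONLY,
>>>                   MPI_INFO_NULL, &fh);
>>>     MPI_File_write_at(fh, (MPI_Offset)rank * CHUNK, buf, CHUNK,
>>>                       MPI_CHAR, MPI_STATUS_IGNORE);
>>>     MPI_File_close(&fh);
>>>
>>>     MPI_Finalize();
>>>     return 0;
>>> }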
>>>
>>>
>>>> My PC for compiling and (static) linking was at 1.2.1, and I was running
>>>> on a server with 1.3.1 when I first noticed this. I then upgraded to 1.3.1
>>>> but it made no difference.
>>>>
>>>
>>> Yes, mpiexec is backward compatible with any version of MPICH2 (or any
>>> other derivative of MPICH2). But it's best to always upgrade to the latest
>>> version.
>>>
>>>
>>>> When would mpiexec and hydra_pmi_proxy use CPU? Is it just related to
>>>> stdout?
>>>>
>>>
>>> Mostly only for stdout/stderr/stdin, plus when one MPI process needs to
>>> know the address of another remote MPI process (typically when you send it
>>> the first message).
>>>
>>>
>>> -- Pavan
>>>
>>> --
>>> Pavan Balaji
>>> http://www.mcs.anl.gov/~balaji
>>>
>>
>>
>