[MPICH] An idle communication process uses the same CPU as a computation process on multi-core chips
Yusong Wang
ywang25 at aps.anl.gov
Fri Sep 14 09:49:26 CDT 2007
On Fri, 2007-09-14 at 12:38 +0200, Reuti wrote:
> Hi,
>
> Am 14.09.2007 um 00:41 schrieb Yusong Wang:
>
> > I have a program which is implemented with a master/slave model, and
> > the master does very little computation. In my test, the master spent
> > most of its time waiting for the other processes to finish an
> > MPI_Gather communication (confirmed with jumpshot/MPE). In several
> > tests on different multi-core chips (dual-core, quad-core, 8-core), I
> > found the master uses the same amount of CPU as the slaves, which
> > should do all the computation.
>
> What do you mean in detail? That you have, let's say, the master
> process and 4 slaves running, and see a CPU usage of 500% on a machine
> with 8 cores?
>
> With this programming style, you need a specially configured
> machinefile if you use a queuing system, as otherwise one idling slot
> will be wasted on the master process.
Everything runs locally, so I don't need a machinefile. I just specify
the number of processes I want. I agree the programming style may not be
well suited for a small number of cores. But my point is whether the
communication should use this much CPU on a multi-core chip. If it does,
how can we take advantage of non-blocking communication to overlap
communication and computation on a multi-core chip?
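
For example, something along these lines (just a sketch, not my actual
code) is what I have in mind: the master replaces the blocking
MPI_Gather with non-blocking point-to-point receives, tests for
completion with MPI_Testall, and sleeps briefly between tests instead of
spinning. The WORK_TAG and the dummy computation are made up for
illustration, and whether the CPU usage actually drops will still depend
on how the channel polls for progress.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>     /* usleep() */

#define WORK_TAG 77     /* hypothetical tag for the slaves' results */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                        /* master */
        int nslaves = size - 1;
        double *results = malloc(nslaves * sizeof(double));
        MPI_Request *reqs = malloc(nslaves * sizeof(MPI_Request));
        int i, done = 0;

        /* Post one non-blocking receive per slave instead of MPI_Gather. */
        for (i = 0; i < nslaves; i++)
            MPI_Irecv(&results[i], 1, MPI_DOUBLE, i + 1, WORK_TAG,
                      MPI_COMM_WORLD, &reqs[i]);

        while (!done) {
            /* ... do the master's own (small) computation here ... */
            MPI_Testall(nslaves, reqs, &done, MPI_STATUSES_IGNORE);
            if (!done)
                usleep(1000);  /* yield the core instead of busy-waiting */
        }

        for (i = 0; i < nslaves; i++)
            printf("result from slave %d: %g\n", i + 1, results[i]);
        free(results);
        free(reqs);
    } else {                                /* slaves */
        double result = 2.0 * rank;         /* stand-in for real work */
        MPI_Send(&result, 1, MPI_DOUBLE, 0, WORK_TAG, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

The usleep only helps if the master really has nothing else to do; if it
has its own work to overlap, the MPI_Testall call inside the loop is
what drives progress on the outstanding receives.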
Yusong
> -- Reuti
>
>
> > There are only two exceptions where the master uses near 0%
> > CPU (one on Windows, one on Linux), which is what I expect. The tests
> > were done on both Fedora Linux and Windows with MPICH2 (shm/nemesis,
> > mpd/smpd). I don't know if it is a software/system issue or caused by
> > different hardware. I would think this is (at least) related to the
> > hardware, since with the same operating system I got different CPU
> > usage (near 0% or near 100%) for the master on different multi-core
> > nodes of our clusters.
> >
> > Are there any documents I can check out for this issue?
> >
> > Thanks,
> >
> > Yusong
>