[MPICH] MPI_Barrier on ch3:shm and sched_yield
Sudarshan Raghunathan
rdarshan at gmail.com
Mon Feb 26 17:58:11 CST 2007
Rajeev,
Thank you for the reply.
I was running four MPI processes and then started another computationally
intensive task (another MPI program on 2 processes, for example). The second
task gets around 50% of the CPU, but it is constantly competing with the
first MPI program, which isn't doing any real work beyond waiting at the
barrier.
I looked into a few options, all of which seem to have problems:
(a) put a sleep in the MPIDI_CH3I_Progress code so that processes waiting at
the barrier sleep for a while instead of spinning in a sched_yield loop and
competing with other processes that might actually need the CPU (first
sketch after this list). The problem with this approach is that it might
slow down portions of the application where a fast barrier _is_ required.
(b) use semaphores so that waiting processes block instead of spinning
(second sketch below). It looks like this would touch a lot of places in the
implementation, and again might cause performance problems where a fast
barrier is needed.
(c) use sockets. The sock device in MPICH seems to call poll with a timeout,
and it behaves much better than the shm device here (third sketch below). I'm
not sure how much of the existing MPICH code could be reused.
Any other saner or more feasible options, or advice on which of the three
above is simplest to implement, would be most appreciated.
Thanks again,
Sudarshan
On 26/02/07, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
>
> Sudarshan,
> How many MPI processes are you running on the 4-way SMP
> and how many processes of the other application? Does the other application
> get any CPU at all? sched_yield just gives up the current time slice for the
> process and moves it to the end of the run queue. It will get run again when
> its turn comes. In other words, the process doesn't go to sleep waiting for
> some event to happen; it keeps sharing the CPU with other processes.
>
> Rajeev
>
>