[mpich-discuss] MPICH v1.2.7p1 and SMP clusters

Gustavo Miranda Teixeira magusbr at gmail.com
Tue Jan 13 08:03:03 CST 2009


Hello everyone!

I've been experiencing some issues when using MPICH v1.2.7p1 on an SMP
cluster and thought maybe someone here could help me.

I have a small cluster of two dual-processor machines connected by gigabit
ethernet. Each processor is dual core, so there are 4 cores per machine and
8 cores in total. When I run an application with 4 processes spread across
both machines (2 processes on each one), I get significantly better
performance than when I run the same application with all 4 processes on a
single machine. Isn't that a bit curious? I know other people who have also
noticed this, but no one can explain to me why it happens. Googling didn't
help either. I originally thought it was a problem with my particular
application (a heart simulator which uses PETSc to solve some differential
equations), but some simple experiments showed that even a plain MPI_Send
inside a huge loop causes the same issue. Measuring cache hits and misses
showed it's not a memory contention problem. I also know that intra-node
communication in MPICH uses the loopback interface, but as far as I know a
message sent over loopback simply takes a shortcut to the input queue
instead of going out to the device, so there is no reason for the message
to take longer to reach the other processes. So I have no idea why MPICH is
slower within a single machine. Has anyone else noticed this too? Is there
a logical explanation for why it happens?

Thanks,
Gustavo Miranda Teixeira
