[mpich-discuss] processor/memory affinity on quad core systems

William Gropp gropp at mcs.anl.gov
Tue Jul 22 12:07:34 CDT 2008


I agree that we should add something to address the process affinity  
issue.  It might make sense to do this through the PMI v2 node  
attributes (since the processes may need to agree on who gets which  
processor).

Bill

On Jul 22, 2008, at 4:28 AM, Franco Catalano wrote:

> Hi,
> Is it possible to ensure processor/memory affinity on mpi jobs  
> launched
> with mpiexec (or mpirun)?
> I am using mpich2 1.0.7 with WRF on a 4 processor Opteron quad core  
> (16
> cores total) machine and I have observed a noticeable (more than 20%)
> variability in the time needed to compute a single time step. Looking
> at the output of top, I have noticed that the system moves
> processes across the 16 cores regardless of processor/memory  
> affinity. So,
> when processes are running on cores far from their memory, the time
> needed for the time advancement is longer.
> I know that, for example, OpenMPI provides a command line option for
> mpiexec (or mpirun) to ensure the affinity binding:
> --mca mpi_paffinity_alone 1
> I have tried this with WRF and it works.
> Is there a way to do this with mpich2?
> Otherwise, I think that it would be very useful to include such a
> capability in the next release.
> Thank you for any suggestion.
>
> Franco
>
> --
> ____________________________________________________
> Eng. Franco Catalano
> Ph.D. Student
>
> D.I.T.S.
> Department of Hydraulics, Transportation and Roads.
> Via Eudossiana 18, 00184 Rome
> University of Rome "La Sapienza".
> tel: +390644585218
>

William Gropp
Paul and Cynthia Saylor Professor of Computer Science
University of Illinois Urbana-Champaign



