[mpich-discuss] processor/memory affinity on quad core systems

chong tan chong_guan_tan at yahoo.com
Tue Jul 22 12:11:34 CDT 2008


No easy way with mpiexec, especially if you do mpiexec -n.  But this
should work:

mpiexec numactl --physcpubind N0 <1st of your procs> :
        numactl --physcpubind N1 <2nd of your procs> :
        <same for the rest>

Add --membind if you want (and you definitely want it for Opteron).
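
For a 4-socket quad-core Opteron like the one described below, a
concrete invocation might look like this sketch (the executable name
wrf.exe and the core/NUMA-node numbering are assumptions here; check
the actual layout with numactl --hardware):

mpiexec -n 1 numactl --physcpubind=0 --membind=0 ./wrf.exe : \
        -n 1 numactl --physcpubind=1 --membind=0 ./wrf.exe : \
        -n 1 numactl --physcpubind=4 --membind=1 ./wrf.exe : \
        <one "-n 1 numactl ..." clause per remaining rank>

Each colon-separated clause pins one MPI rank to a specific core and
binds its memory allocations to that core's local NUMA node.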
 
tan


--- On Tue, 7/22/08, Franco Catalano <franco.catalano at uniroma1.it> wrote:

From: Franco Catalano <franco.catalano at uniroma1.it>
Subject: [mpich-discuss] processor/memory affinity on quad core systems
To: mpich-discuss at mcs.anl.gov
Date: Tuesday, July 22, 2008, 2:28 AM

Hi,
Is it possible to ensure processor/memory affinity for MPI jobs launched
with mpiexec (or mpirun)?
I am using mpich2 1.0.7 with WRF on a 4-processor Opteron quad core (16
cores total) machine, and I have observed a noticeable (more than 20%)
variability in the time needed to compute a single time step. Looking at
the output of top, I have noticed that the system moves processes across
the 16 cores with no regard for processor/memory affinity. So, when
processes run on cores far from their memory, the time advancement takes
longer.
I know that, for example, OpenMPI provides a command line option for
mpiexec (or mpirun) to ensure affinity binding:
--mca mpi_paffinity_alone 1
I have tried this with WRF and it works.
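
A complete invocation with that option would look roughly like the line
below (the executable name wrf.exe and -np 16 are illustrative
placeholders, not taken from the original message):

mpirun --mca mpi_paffinity_alone 1 -np 16 ./wrf.exe
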
Is there a way to do this with mpich2?
Otherwise, I think it would be very useful to include such a capability
in the next release.
Thank you for any suggestion.

Franco

-- 
____________________________________________________
Eng. Franco Catalano
Ph.D. Student

D.I.T.S.
Department of Hydraulics, Transportation and Roads.
Via Eudossiana 18, 00184 Rome 
University of Rome "La Sapienza".
tel: +390644585218


      

