[mpich-discuss] Hydra framework (thru Grid Engine)

Bernard Chambon bernard.chambon at cc.in2p3.fr
Tue Dec 13 04:45:57 CST 2011


I'm replying to my own mail …

On 13 Dec 2011, at 09:20, Bernard Chambon wrote:

> The same code (advance_test), run with hydra thru Grid Engine:
>   mpiexec -rmk sge -iface eth2 -n $NSLOTS ./bin/advance_test
> 
> doesn't use the secondary interface, but the first one (eth0 - 1Gb/s, see ~118MB/s)
> >dstat -n -N eth0,eth2
>    --net/eth0- --net/eth2-
>    recv  send: recv  send
>      0     0 :   0     0 
>    479k  118M: 432B  140B
>    476k  118M:1438B  420B
>    478k  118M:   0     0 
> 
> So, where do I specify the equivalent of mpd's ifhn option? Should the -iface option be sufficient?


My failed situation was due to an older version of mpich2 having been used to compile my test code.
With the latest version (MPICH2 Version: 1.4.1p1), and after recompiling the code, I got the -iface option working fine, cool!
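
For the record, the sequence that works for me now looks roughly like this (the install path and source file name below are just examples from my setup, adjust to your own 1.4.1p1 install):

  # recompile the test against the 1.4.1p1 wrappers (path is an example)
  /usr/local/mpich2-1.4.1p1/bin/mpicc -O2 -o bin/advance_test advance_test.c

  # inside the Grid Engine job script: let hydra pick up the SGE allocation
  # and force traffic onto the secondary interface
  mpiexec -rmk sge -iface eth2 -n $NSLOTS ./bin/advance_test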

It's interesting to notice that I got (after 3 tries) different values depending on whether I use the gcc or icc (Intel) compiler.

With the Intel compiler, I almost reach the limit of the second interface (~1GB/s):
> mpicc -cc=/usr/local/intel/cce/10.1.022//bin/icc -O2 …

> dstat -n -N eth0,eth2
--net/eth0- --net/eth2-
 recv  send: recv  send
   0     0 :   0     0 
 735k   77k:5720k  928M
  70B   29k:2268k  956M
 292B   28k:3838k  946M
  70B   30k:2170k  905M
 134B   24k:5453k  876M


With the GNU compiler, I got around 700MB/s:

>mpicc -cc=/usr/bin/gcc  -O2 …

 >dstat -n -N eth0,eth2
--net/eth0- --net/eth2-
 recv  send: recv  send
   0     0 :   0     0 
1435B   22k:1499k  715M
 326B   23k:1420k  656M
 578B   22k:1429k  686M
 198B   21k:1403k  666M
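
In case it helps anybody, mpicc -show prints the underlying compile line without actually compiling, which is a quick way to check both which compiler and which MPICH2 install a wrapper is really using (the -cc= value below is just the one from my gcc build):

  # print the compile command the wrapper would run, without compiling
  mpicc -cc=/usr/bin/gcc -show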

(With an older release of mpich2, using the gcc compiler and an mpd ring, I reached the second interface limit (1GB/s).)
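
For anybody finding this in the archives: with the mpd ring the equivalent was the ifhn setting, e.g. per-host entries in mpd.hosts (the hostnames below are just examples), while with hydra the -iface option on the mpiexec line, as above, does the same job:

  # mpd way: alternate interface hostnames in mpd.hosts
  node01 ifhn=node01-eth2
  node02 ifhn=node02-eth2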

Thank you very much, hydra (+ Grid Engine) is nice!

Best regards

---------------
Bernard CHAMBON
IN2P3 / CNRS
04 72 69 42 18
