<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000099">
This is a problem with internode communication. <br>
Each node has 8 cores, so I ran the following tests with 4 and 8 MPI
tasks. Please note that this is a smaller problem size (N=30000) than
the earlier run (N=60000).<br>
<br>
<table border="1" cellpadding="4" cellspacing="0">
  <tbody>
    <tr>
      <th align="left">HPL (Gflops), N = 30000</th>
      <th>sock</th>
      <th>nemesis</th>
    </tr>
    <tr>
      <td>8 tasks on one node</td>
      <td align="right">4.70E+01</td>
      <td align="right"><b>4.74E+01</b></td>
    </tr>
    <tr>
      <td>4 tasks on one node</td>
      <td align="right">2.60E+01</td>
      <td align="right">2.61E+01</td>
    </tr>
    <tr>
      <td><b>8 tasks, 4 on each of 2 nodes (internode)</b></td>
      <td align="right">4.39E+01</td>
      <td align="right"><font color="#ff0000"><b>3.06E+01</b></font></td>
    </tr>
  </tbody>
</table>
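<br>
For reference, the three cases above were launched roughly as follows;
the machine files shown are only to illustrate the task placement, not
the exact files used:<br>
<pre>
# 8 tasks on one node (intranode only)
#   mf_one_node:   c2node2:8
mpiexec -machinefile ./mf_one_node -n 8 ./xhpl

# 4 tasks on one node
mpiexec -machinefile ./mf_one_node -n 4 ./xhpl

# 8 tasks, 4 on each of 2 nodes (exercises internode communication)
#   mf_two_nodes:  c2node2:4
#                  c2node3:4
mpiexec -machinefile ./mf_two_nodes -n 8 ./xhpl
</pre>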
<br>
Nemesis matches sock as long as all tasks stay on one node; the drop
(4.39E+01 Gflops with sock vs. 3.06E+01 with nemesis) shows up only in
the internode case. That case, 8 MPI tasks with 4 per node under
nemesis, is also the one with significant 'System CPU' in the images
below.<br>
<br>
<img src="cid:part1.09030306.02080805@ncsu.edu" alt=""> <img
src="cid:part2.04060509.04010909@ncsu.edu" alt=""><br>
<br>
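If it would help narrow this down further, I can also run a plain
point-to-point test between two nodes with each build, for example with
the OSU micro-benchmarks (osu_latency / osu_bw), assuming they build
cleanly against these MPICH2 installs:<br>
<pre>
# mf_p2p contains one slot on each of two nodes:
#   c2node2:1
#   c2node3:1
mpiexec -machinefile ./mf_p2p -n 2 ./osu_latency   # small-message latency
mpiexec -machinefile ./mf_p2p -n 2 ./osu_bw        # large-message bandwidth
</pre>
<br>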
Thanks,<br>
Sarat.<br>
<br>
Darius Buntinas wrote:
<blockquote cite="mid:494843EC.2050602@mcs.anl.gov" type="cite">
<pre wrap="">I'd like to see if the problem has to do with internode or intranode
communication.
Can you try running 10 processes on one node with sock and nemesis?
If nemesis is still doing worse, please try 5 processes on one node.
Thanks,
-d
On 12/16/2008 05:50 PM, Sarat Sreepathi wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Hello,
We got a new 10-node Opteron cluster in our research group. Each node
has two quad-core Opterons. I installed MPICH2-1.0.8 with PathScale (3.2)
compilers and three device configurations (nemesis, ssm, sock). I built
and tested using the Linpack (HPL) benchmark with the ACML 4.2 BLAS library
for the three device configurations.
I observed some unexpected results, as the 'nemesis' configuration gave
the worst performance. For the same problem parameters, the 'sock'
version was faster and the 'ssm' version hung. For further analysis, I
obtained screenshots from the Ganglia monitoring tool for the three
runs. As you can see from the attached screenshots, the
'nemesis' version consumes more 'system cpu' according to Ganglia.
The 'ssm' version fares slightly better, but it hangs towards the end.
I may be missing something trivial here, but can anyone account for this
discrepancy? Isn't the 'nemesis' or 'ssm' device recommended for this
cluster configuration? Your help is greatly appreciated.
Thanks,
Sarat.
_*Details:*_
HPL built with AMD ACML 4.2 BLAS libraries
HPL Output for a problem size N=60000
*nemesis - 1.653e+02 Gflops
ssm - hangs
sock - 2.029e+02 Gflops*
c2master:~ # mpich2version
MPICH2 Version: 1.0.8
MPICH2 Release date: Unknown, built on Fri Dec 12 16:31:15 EST 2008
MPICH2 Device: ch3:nemesis
MPICH2 configure: --with-device=ch3:nemesis --enable-f77
--enable-f90 --enable-cxx
--prefix=/usr/local/mpich2-1.0.8-pathscale-k8-nemesis
MPICH2 CC: pathcc -march=opteron -O3
MPICH2 CXX: pathCC -march=opteron -O3
MPICH2 F77: pathf90 -march=opteron -O3
MPICH2 F90: pathf90 -march=opteron -O3
and similar configuration using ch3:ssm and ch3:sock devices.
*> nohup mpiexec -machinefile ./mf -n 80 ./xhpl < /dev/null &*
*Machine file used:*
</pre>
<blockquote type="cite">
<pre wrap="">cat mf
</pre>
</blockquote>
<pre wrap="">c2node2:8
c2node3:8
c2node4:8
c2node5:8
c2node6:8
c2node7:8
c2node8:8
c2node9:8
c2node10:8
c2node11:8
c2master:~ # uname -a
Linux c2master 2.6.22.18-0.2-default #1 SMP 2008-06-09 13:53:20 +0200
x86_64 x86_64 x86_64 GNU/Linux
Processor: Quad-Core AMD Opteron(tm) Processor 2350 - 2 GHz
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sarat Sreepathi
Doctoral Student
Dept. of Computer Science
North Carolina State University
<a class="moz-txt-link-abbreviated" href="mailto:sarat_s@ncsu.edu">sarat_s@ncsu.edu</a> ~ (919)645-7775
<a class="moz-txt-link-freetext" href="http://www.sarats.com">http://www.sarats.com</a>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</pre>
</blockquote>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sarat Sreepathi
Doctoral Student
Dept. of Computer Science
North Carolina State University
<a class="moz-txt-link-abbreviated" href="mailto:sarat_s@ncsu.edu">sarat_s@ncsu.edu</a> ~ (919)645-7775
<a class="moz-txt-link-freetext" href="http://www.sarats.com">http://www.sarats.com</a>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</pre>
</body>
</html>