[mpich-discuss] Scalability of 'Intel Core 2 Duo' cluster

Wolfgang Betz betz.wolfgang at googlemail.com
Fri Mar 28 03:35:21 CDT 2008


Hi all,

I have nearly the same problem as reported in 'Scalability of Intel
quad core (Harpertown) cluster', on an 'Intel(R) Core(TM)2 CPU 6600 @
2.40GHz' cluster.

We use a program
(http://wissrech.ins.uni-bonn.de/research/projects/NaSt3DGP/index.htm)
which has excellent scalability - even on several hundred processors
on a supercomputer - but on our cluster something goes wrong. The
nodes are connected over a gigabit network - we have tested various
network interface cards as well as gigabit over copper cable and
fibre - but nothing helps.
Even with only 4 machines and 8 processors the processor load is only
somewhere between 50% and 80% ... (same with MPICH2 and LAM/MPI)
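
A minimal ping-pong test along these lines (just a sketch; buffer size
and repetition count are example values) should show whether the raw MPI
point-to-point bandwidth over the link is already the problem:

/* Minimal MPI ping-pong between ranks 0 and 1 on two different nodes.
 * Reports the effective point-to-point bandwidth; on gigabit this
 * should come out near 110 MB/s. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int reps = 100;
    const int nbytes = 1 << 20;     /* 1 MiB per message - example size */
    char *buf;
    double t0, t1;
    int rank, i;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = malloc(nbytes);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {
            MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("effective bandwidth: %.1f MB/s\n",
               2.0 * reps * nbytes / (t1 - t0) / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and started on two different nodes (roughly
'mpirun -np 2 -machinefile hosts ./pingpong'), a result far below
~110 MB/s would point at the network itself rather than the application.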

The current NICs are: Allied Telesys AT-2916SX/SC    (fibre)
 ... or: Intel(R) PRO/1000 GT          (copper)

The system:
Linux version 2.6.22-14-generic (buildd at terranova) (gcc version 4.1.3
20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)) #1 SMP Tue Feb 12
07:42:25 UTC 2008


I have collected some NIC statistics (see the attachment):
The colored columns are the statistics for MPICH1 (the number of
machines and processors is given in the first row).
The upper half of the table is the raw output of 'ethtool -S ...',
the lower half contains an evaluation.
The numerical simulation can be divided into two phases: first
'writing', and after that the actual calculation - both scale really
badly.
The last two colored columns are the most interesting ones: both were
done with 3 machines and 6 processors, but one column covers only the
'writing' and the other column the writing plus the actual calculation
(10 time steps).
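
To see where the time goes inside a run, the two phases can be
bracketed with MPI_Wtime(), roughly like this sketch (write_output()
and do_time_step() are only placeholders, not the actual NaSt3DGP
routine names):

/* Sketch: time the 'writing' phase and the time-stepping phase
 * separately; the two stubs stand in for the real solver routines. */
#include <mpi.h>
#include <stdio.h>

static void write_output(void) { /* ... write result files ... */ }
static void do_time_step(void)  { /* ... one solver step ...    */ }

int main(int argc, char **argv)
{
    double t0, t_write, t_calc;
    int step, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    write_output();                     /* the 'writing' phase            */
    t_write = MPI_Wtime() - t0;

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (step = 0; step < 10; step++)   /* 10 time steps, as in the table */
        do_time_step();
    t_calc = MPI_Wtime() - t0;

    if (rank == 0)
        printf("writing: %.2f s   calculation: %.2f s\n", t_write, t_calc);

    MPI_Finalize();
    return 0;
}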


Any ideas on how to improve scalability and get to 100% CPU load?


best,

Wolfgang
-------------- next part --------------
A non-text attachment was scrubbed...
Name: NIC_stat.pdf
Type: application/pdf
Size: 16811 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/mpich-discuss/attachments/20080328/df5d2f89/attachment.pdf>

