Experiences with MPICH2 and C++

philip lavers psl02 at uow.edu.au
Thu Jun 9 02:56:18 CDT 2005


Hello MPICH2 people,

The writer is self-taught in all aspects of computing, and relates the 
following a) in case it is helpful to other amateurs and b) in case 
others more versed in the field can lend advice.

The current cluster is one dual opteron platform and three single 
athlon64 platforms. The switch is a NetGear gigabit switch.

The best "bang for buck" probably comes from a 939-pin athlon64 3.2 or 
higher.

At one stage or another every machine has operated on Solaris 10 (x86), 
Mandrake Linux 64 10.1 and FreeBSD 64 5.3.

All these operating systems are impressive, but each has shortcomings - 
e.g. Linux did not see the SATA disk, FreeBSD does not see some 
motherboard network ports, Solaris did not see the ultra-wide SCSI, etc.

MPICH2 installed and worked fine on all OSs. Programming was in C++ 
compiled with gcc. SunStudio10 compilers were also tested and worked 
better than gcc under Solaris on any given machine.

The current project is simulating the behaviour of short magnetic 
vortices in 2D using a deterministic potential and a stochastic term. 
The force calculation and integration loop adapts well for 
parallelisation. A subtle point is that the stochastic routines (ran2() 
and gasdev() from Numerical Recipes in C++) need to be seeded separately 
(i.e. seed += myid;) to avoid unintended correlations. Both opteron processors 
can be induced to work simply by making sure that mpiexec -n N is such 
that at least two processes will be allocated to that machine (whose 
kernel was compiled with SMP).

The performance is fine, and encourages me to build more one-processor 
nodes, because for many vortices (say >200) scaling is steep and roughly 
linear.

Here are the questions and observations:

1. How do I get graphics with Xorg ?

By the way, the examples for Section 3.10 of the book "Using MPI", 
downloaded from the sources in Appendix D, do not have the graphics 
routines.

I got graphical output by going to an old 32 bit machine that had 
Mandrake 9.1 and XFree86, but my 64 bit machines with Xorg give error 
messages.

At the moment I am stuck with making plot files on process 0 and using 
gnuplot to show movies of the simulation. This seems clumsy.

 
2. Big puzzle:

I have gcc 3.4.2 on both FreeBSD and Linux.

The makefile says:
CFLAGS=-g -pg $(INCLUDES) -ffast-math  -O3 -march=athlon64

and everything is identical on the HD that has FreeBSD and the HD that 
has Mandrake 10.1.

I can test a single machine's performance with mpiexec -n 1 etc. on that 
machine.

For exactly the same simulation on exactly the same machine (athlon64 
3.2), with 50 vortices and 10000 time steps, the FreeBSD installation 
took 112 seconds.
When I loaded Linux it took 369 seconds.
On the opteron machine (march=opteron) the figures were FreeBSD 152 
seconds, Linux 555 seconds.

Linux was supposed to be fast??

For this reason I plan to standardise on FreeBSD for the remaining nodes 
when I can afford them, but I would welcome input from knowledgeable 
Linux users.
 
Philip Lavers



