[mpich-discuss] MPICH channel (ssm vs. sock)

林智仁 seiferlin at gmail.com
Sun Mar 29 22:21:19 CDT 2009


Hi all:

I have done a test with different channels.

I use 2 Windows XP machines (quad-core Intel Q6600) to make a cluster.

I start 4 MPI processes (2 on each machine).


Case 1: without the -channel option (this uses the default sock channel)

The elapsed time: 138 sec
The true consumed CPU time (obtained by Windows API): ~ 40 sec for all MPI
processes

From the result, I know that the difference between 138 sec and 40 sec
comes from the network data transfer, since the CPU is idle while the data
is transferred over the network.
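The elapsed-vs-CPU-time distinction can be seen with a minimal Python sketch (not MPICH code; time.sleep stands in for the CPU-idle blocking on network I/O):

```python
import time

start_wall = time.perf_counter()   # wall-clock (elapsed) time
start_cpu = time.process_time()    # CPU time actually consumed by this process

time.sleep(1.0)  # simulate waiting on network I/O; the CPU is idle here

wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu

# elapsed time is ~1 s, but consumed CPU time is near zero
print(f"elapsed: {wall:.2f} s, CPU: {cpu:.2f} s")
```

This is the same gap as the 138 sec elapsed vs. ~40 sec CPU in Case 1: the missing time is spent blocked, not computing.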


Case 2: Add -channel ssm right after mpiexec (mpiexec -channel ssm -pwdfile
pwd.txt .......)

The elapsed time: 167 sec
The true consumed CPU time (obtained by Windows API): ~ 160 sec for all MPI
processes

From Case 1, the CPU needs only 40 sec to do the job, but in Case 2 the CPU
needs about 4 times as much CPU time. WHY???

Is the result of my test weird or normal? If it's normal, then the ssm
channel has no benefit at all!

I have found the following statements in the changelog of MPICH:

Unlike the ssm channel which waits for new data to
arrive by continuously polling the system in a busy loop, the essm channel
waits by blocking on an operating system event object.


Maybe the problem is the "continuously polling the system in a busy loop".
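The two wait strategies the changelog describes can be sketched in Python (a hypothetical illustration, not MPICH code: threading.Event stands in for the OS event object, and a pure spin loop for the ssm-style busy poll):

```python
import threading
import time

def measure(waiter, signal):
    """Run `waiter` in the main thread while a helper thread fires
    `signal` after 0.5 s; return CPU seconds consumed while waiting."""
    helper = threading.Thread(target=lambda: (time.sleep(0.5), signal()))
    helper.start()
    cpu0 = time.process_time()
    waiter()
    cpu = time.process_time() - cpu0
    helper.join()
    return cpu

# Blocking wait: the waiting thread sleeps inside the OS until signaled.
ev = threading.Event()
cpu_block = measure(ev.wait, ev.set)

# Busy-poll wait: the waiting thread spins, burning a full core.
flag = [False]
def spin():
    while not flag[0]:
        pass
def raise_flag():
    flag[0] = True
cpu_spin = measure(spin, raise_flag)

# Both wait the same 0.5 s of wall time, but only the spin loop
# charges that time against the CPU.
print(f"blocking wait CPU: {cpu_block:.2f} s, busy-poll CPU: {cpu_spin:.2f} s")
```

The busy-poll version consumes roughly the whole wait as CPU time while the blocking version consumes almost none, which would explain CPU time ballooning from ~40 sec to ~160 sec under ssm even though no extra work is done.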



regards,

Seifer Lin