Hi,

The performance difference could depend on your MPI program. Are you using a publicly available benchmark, or is it your own code?

# Do you consistently see the performance difference? (How many times did you run your code? Did you take an average? Did you rule out the extreme cases? See the timing sketch below.)
# Can you send us your code?
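
For the averaging question above, here is a minimal sketch of what I mean (kernel_under_test() is a stand-in for your program's timed section; build against MPICH and launch with mpiexec as usual):

#include <mpi.h>
#include <stdio.h>

#define NRUNS 10

static void kernel_under_test(void) { /* placeholder for the real work */ }

int main(int argc, char **argv)
{
    double t[NRUNS], sum = 0.0, tmin, tmax;
    int i, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (i = 0; i < NRUNS; i++) {
        double t0;
        MPI_Barrier(MPI_COMM_WORLD);   /* start all ranks together */
        t0 = MPI_Wtime();
        kernel_under_test();
        MPI_Barrier(MPI_COMM_WORLD);   /* wait for the slowest rank */
        t[i] = MPI_Wtime() - t0;
    }
    tmin = tmax = t[0];
    for (i = 0; i < NRUNS; i++) {
        sum += t[i];
        if (t[i] < tmin) tmin = t[i];
        if (t[i] > tmax) tmax = t[i];
    }
    if (rank == 0)   /* average with the two extreme runs dropped */
        printf("avg %.3f sec (min %.3f, max %.3f)\n",
               (sum - tmin - tmax) / (NRUNS - 2), tmin, tmax);
    MPI_Finalize();
    return 0;
}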

The newer Nemesis channel will soon replace ssm. (Nemesis is available in the latest 1.1b1 release; however, we are still working on the performance of Nemesis on Windows.)

Regards,
Jayesh

----------------------------------------------------------------------
From: mpich-discuss-bounces@mcs.anl.gov [mailto:mpich-discuss-bounces@mcs.anl.gov] On Behalf Of Seifer Lin
Sent: Sunday, March 29, 2009 10:21 PM
To: mpich-discuss@mcs.anl.gov
Subject: [mpich-discuss] MPICH channel (ssm vs. sock)

Hi all:

I have done a test with different channels.

I use two Windows XP machines (quad-core Intel Q6600) to build a cluster.

I start 4 MPI processes (2 on each machine).

Case 1: without the -channel option (this uses the default sock channel)

The elapsed time: 138 sec
The true consumed CPU time (obtained via the Windows API): ~40 sec for all MPI processes

From the result, I know that the difference between 138 sec and 40 sec results from the network data transfer, since the CPU is idle while the data is transferred over the network.
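
For reference, the per-process CPU time can be read roughly like this (a minimal sketch; GetProcessTimes() is one Windows API that reports kernel + user time in 100-ns units, and this is an illustration rather than my exact code):

#include <windows.h>
#include <stdio.h>

/* Total CPU time (kernel + user) consumed so far by this process. */
static double process_cpu_seconds(void)
{
    FILETIME creation, exit_time, kernel, user;
    ULARGE_INTEGER k, u;

    if (!GetProcessTimes(GetCurrentProcess(),
                         &creation, &exit_time, &kernel, &user))
        return -1.0;                        /* query failed */

    k.LowPart = kernel.dwLowDateTime;  k.HighPart = kernel.dwHighDateTime;
    u.LowPart = user.dwLowDateTime;    u.HighPart = user.dwHighDateTime;

    return (double)(k.QuadPart + u.QuadPart) / 1.0e7;   /* 100 ns -> sec */
}

int main(void)
{
    /* ... the MPI workload would run here ... */
    printf("CPU time consumed: %.2f sec\n", process_cpu_seconds());
    return 0;
}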

Case 2: Add -channel ssm right after mpiexec (mpiexec -channel ssm -pwdfile pwd.txt .......)

The elapsed time: 167 sec
The true consumed CPU time (obtained via the Windows API): ~160 sec for all MPI processes

From Case 1, the CPU needs only 40 sec to do the job, but in Case 2 it needs about four times as much CPU time. Why?

Is the result of my test weird or normal? If it's normal, then the ssm channel has no benefit at all!
<DIV> </DIV>
<DIV>I have found the following statements in the changelog of
MPICH:</DIV>
<DIV> </DIV>
<DIV>Unlike the ssm channel which waits for new data to<BR>arrive by
continuously polling the system in a busy loop, the essm channel<BR>waits by
blocking on an operating system event object.</DIV>

Maybe the problem is the "continuously polling the system in a busy loop".

regards,

Seifer Lin