Hi all:

I have done a test with different channels.

I use two Windows XP machines (quad-core Intel Q6600) to form a cluster.

I start 4 MPI processes (2 on each machine).

Case 1: without the -channel option (this uses the default socket channel)

Elapsed time: 138 sec
Actual CPU time consumed (obtained via the Windows API): ~40 sec for all MPI processes

From this result, I know that the difference between 138 sec and 40 sec comes from network data transfer, since the CPU is idle while data is transferred over the network.
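For reference, the per-process CPU time can be read with GetProcessTimes; the snippet below is only a minimal sketch of that kind of measurement, not my exact measuring code.

#include <windows.h>
#include <stdio.h>

/* Return the total CPU time (kernel + user) consumed so far by the
   calling process. FILETIME values are in 100-nanosecond units. */
static double process_cpu_seconds(void)
{
    FILETIME creation, exit_time, kernel, user;
    ULARGE_INTEGER k, u;

    if (!GetProcessTimes(GetCurrentProcess(),
                         &creation, &exit_time, &kernel, &user))
        return -1.0;

    k.LowPart = kernel.dwLowDateTime;  k.HighPart = kernel.dwHighDateTime;
    u.LowPart = user.dwLowDateTime;    u.HighPart = user.dwHighDateTime;

    return (double)(k.QuadPart + u.QuadPart) / 1.0e7;  /* 100 ns -> sec */
}

int main(void)
{
    /* ... the MPI work would go here ... */
    printf("CPU time: %.2f sec\n", process_cpu_seconds());
    return 0;
}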
Case 2: add -channel ssm right after mpiexec (mpiexec -channel ssm -pwdfile pwd.txt .......)

Elapsed time: 167 sec
Actual CPU time consumed (obtained via the Windows API): ~160 sec for all MPI processes

From Case 1, the CPU needs only 40 sec to do the job, but in Case 2 it uses 4 times as much CPU time. WHY???

Is the result of my test weird or normal? If it's normal, then the ssm channel has no benefit at all!

I have found the following statement in the MPICH changelog:

"Unlike the ssm channel which waits for new data to arrive by continuously polling the system in a busy loop, the essm channel waits by blocking on an operating system event object."

Maybe the problem is the "continuously polling the system in a busy loop" part.
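If I understand that correctly, the difference in waiting style would look roughly like the sketch below (just an illustration of busy polling versus blocking on an event object, not the actual MPICH code):

#include <windows.h>

volatile LONG data_ready = 0;   /* set by another thread when data arrives   */
HANDLE data_event;              /* e.g. CreateEvent(NULL, FALSE, FALSE, NULL) */

/* Busy polling (what the changelog says the ssm channel does):
   the loop keeps a core 100% busy, so the time spent waiting for
   network data is charged to the process as consumed CPU time. */
void wait_busy(void)
{
    while (!data_ready)
        ;   /* spin */
}

/* Blocking wait (what the essm channel reportedly does):
   the thread sleeps inside the OS until the event is signaled,
   so it consumes essentially no CPU time while waiting. */
void wait_blocking(void)
{
    WaitForSingleObject(data_event, INFINITE);
}

If that is really what ssm does, then the extra ~120 sec of CPU time in Case 2 would mostly be spin-waiting rather than real computation.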
regards,

Seifer Lin