[MPICH] MPICH2 1.05 MPI_Send & MPI_Recv dropping packages randomly

chong tan chong_guan_tan at yahoo.com
Thu Jan 18 13:08:43 CST 2007


I will give that a try.  
tan



----- Original Message ----
From: Rajeev Thakur <thakur at mcs.anl.gov>
To: chong tan <chong_guan_tan at yahoo.com>
Cc: mpich-discuss at mcs.anl.gov
Sent: Thursday, January 18, 2007 9:43:37 AM
Subject: RE: [MPICH] MPICH2 1.05 MPI_Send & MPI_Recv dropping packages randomly


Can you try using the Nemesis channel? Configure with --with-device=ch3:nemesis. That will use shared memory within a node and TCP across nodes and should actually perform better than ssm.
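For example, taking the configure line from your earlier message and just swapping the device (a sketch; the prefix and flags are from your build, adjust as needed):

setenv CFLAGS "-m32 -O2"
setenv CC gcc
./configure -prefix=/u/cgtan/my_release_dir --with-device=ch3:nemesis --enable-fast |& tee configure.log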
 
Rajeev
 




From: owner-mpich-discuss at mcs.anl.gov [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of chong tan
Sent: Thursday, January 18, 2007 11:11 AM
To: William Gropp
Cc: mpich-discuss at mcs.anl.gov
Subject: Re: [MPICH] MPICH2 1.05 MPI_Send & MPI_Recv dropping packages randomly


All the messages are short, the shortest being 3 integers (32-bit), the longest 9 integers.
 
I can't send you the code per company policy.  There are about 3 million lines of C, C++ and Tcl.  MPI is used in an isolated part of the code.
 
I will try sock, although sock runs almost 11X slower on this particular machine.  On 1.0.4p1, the overhead with ssm was 50 sec, while sock's overhead was 520 sec on the failing test.
 
tan


 
----- Original Message ----
From: William Gropp <gropp at mcs.anl.gov>
To: chong tan <chong_guan_tan at yahoo.com>
Cc: mpich-discuss at mcs.anl.gov
Sent: Wednesday, January 17, 2007 6:37:19 PM
Subject: Re: [MPICH] MPICH2 1.05 MPI_Send & MPI_Recv dropping packages randomly

Can you send us the test case?  Does it fail with the ch3:sock device?  Are the messages short or long?   


Bill


On Jan 17, 2007, at 7:06 PM, chong tan wrote:


OS: Red Hat Enterprise 4, kernel 2.6.9-42.ELsmp
CPU: 4 dual-core Intel
 
The package was built with:
setenv CFLAGS "-m32 -O2"
setenv CC gcc
./configure -prefix=/u/cgtan/my_release_dir --with-device=ch3:ssm --enable-fast |& tee configure.log

-----
The test programs run 5 processes, one master and 4 slaves.  The master always receives from the slaves and then sends to all of them.  Randomly, an MPI_Send performed in the master will complete, but the corresponding MPI_Recv in the targeted slave does not complete, and the whole thing hangs.
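Schematically, the pattern looks like this (a minimal sketch, not the actual code; the real payloads vary from 3 to 9 ints, fixed at 9 here):

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, i;
    int buf[9];                 /* real messages carry 3-9 ints */
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    for (i = 0; i < 9; i++)
        buf[i] = rank;

    if (rank == 0) {
        /* master: recv one message from each slave, then send to all */
        for (i = 1; i < size; i++)
            MPI_Recv(buf, 9, MPI_INT, i, 0, MPI_COMM_WORLD, &status);
        for (i = 1; i < size; i++)
            MPI_Send(buf, 9, MPI_INT, i, 0, MPI_COMM_WORLD);
    } else {
        /* slave: send to master, then wait for the reply */
        MPI_Send(buf, 9, MPI_INT, 0, 0, MPI_COMM_WORLD);
        /* the hang: this recv sometimes never completes even though
           the master's matching MPI_Send has already returned */
        MPI_Recv(buf, 9, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
    }

    MPI_Finalize();
    return 0;
}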
 
I have a debugging mechanism that attaches a sequence id to every message sent.  The messages are dumped before and after each send and recv, and a message is also dumped for the pending recv.  The sequence ids trace OK all the way to the lost message.
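The tracing wrappers are conceptually like this (a sketch with hypothetical names, not the real helpers):

#include <mpi.h>
#include <stdio.h>
#include <string.h>

static int my_rank;            /* set from MPI_Comm_rank after MPI_Init */
static int next_seq = 0;       /* running per-sender sequence id */

/* Prepend a sequence id to the payload and dump it before and
   after the send. */
void traced_send(const int *payload, int n, int dest)
{
    int msg[10];               /* id + up to 9 ints of payload */
    msg[0] = next_seq++;
    memcpy(&msg[1], payload, n * sizeof(int));
    fprintf(stderr, "rank %d: pre-send  seq %d -> %d\n", my_rank, msg[0], dest);
    MPI_Send(msg, n + 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
    fprintf(stderr, "rank %d: post-send seq %d -> %d\n", my_rank, msg[0], dest);
}

/* Dump the pending recv, then the id actually received. */
void traced_recv(int *payload, int n, int src)
{
    int msg[10];
    MPI_Status status;
    fprintf(stderr, "rank %d: recv pending from %d\n", my_rank, src);
    MPI_Recv(msg, n + 1, MPI_INT, src, 0, MPI_COMM_WORLD, &status);
    fprintf(stderr, "rank %d: recv seq %d <- %d\n", my_rank, msg[0], src);
    memcpy(payload, &msg[1], n * sizeof(int));
}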
 
The same code works fine with 1.0.4p1.  It has been tested on test cases with more than 100 million send/recv sequences.  Any suggestions?
 
tan
 


