[MPICH] MPICH2 1.05 MPI_Send & MPI_Recv dropping packages randomly
chong tan
chong_guan_tan at yahoo.com
Thu Jan 18 13:56:00 CST 2007
Performance, measured as overhead (seconds):
nemesis : 3 (not precise)
ssm : 52 (1.0.4p1)
sock : 550
The performance is good. However, the mpd process creates a huge problem: the same 3 million lines of code, run without MPI (MPI not even called), run 2X slower. If I kill mpd, they regain their original performance.
I reported a similar problem of mpd slowing down other processes in Jul/Aug 2006.
tan
----- Original Message ----
From: Rajeev Thakur <thakur at mcs.anl.gov>
To: chong tan <chong_guan_tan at yahoo.com>
Cc: mpich-discuss at mcs.anl.gov
Sent: Thursday, January 18, 2007 11:38:14 AM
Subject: RE: [MPICH] MPICH2 1.05 MPI_Send & MPI_Recv dropping packages randomly
Cool. Why don't you use that for now until ssm is fixed? Also, we would like to know how it performs for you compared with ch3:sock and ch3:ssm.
Rajeev
From: chong tan [mailto:chong_guan_tan at yahoo.com]
Sent: Thursday, January 18, 2007 1:28 PM
To: Rajeev Thakur
Cc: mpich-discuss at mcs.anl.gov
Subject: Re: [MPICH] MPICH2 1.05 MPI_Send & MPI_Recv dropping packages randomly
The test also works with ch3:nemesis.
tan
----- Original Message ----
From: Rajeev Thakur <thakur at mcs.anl.gov>
To: chong tan <chong_guan_tan at yahoo.com>
Cc: mpich-discuss at mcs.anl.gov
Sent: Thursday, January 18, 2007 9:43:37 AM
Subject: RE: [MPICH] MPICH2 1.05 MPI_Send & MPI_Recv dropping packages randomly
Can you try using the Nemesis channel? Configure with --with-device=ch3:nemesis. That will use shared memory within a node and TCP across nodes and should actually perform better than ssm.
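For example, reusing the configure line from the original post with only the device changed (adjust the prefix to your own install path):
./configure -prefix=/u/cgtan/my_release_dir --with-device=ch3:nemesis --enable-fast |& tee configure.log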
Rajeev
From: owner-mpich-discuss at mcs.anl.gov [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of chong tan
Sent: Thursday, January 18, 2007 11:11 AM
To: William Gropp
Cc: mpich-discuss at mcs.anl.gov
Subject: Re: [MPICH] MPICH2 1.05 MPI_Send & MPI_Recv dropping packages randomly
All the messages are short, the shortest being 3 integers (32-bit), the longest 9 integers.
I can't send you the code per company policy. There are about 3 million lines of C, C++ and Tcl. MPI is used in an isolated part of the code.
I will try sock; sock runs almost 11X slower on this particular machine. On 1.0.4p1, the overhead with ssm was 50 sec and sock's overhead was 520 sec on the failed test.
tan
----- Original Message ----
From: William Gropp <gropp at mcs.anl.gov>
To: chong tan <chong_guan_tan at yahoo.com>
Cc: mpich-discuss at mcs.anl.gov
Sent: Wednesday, January 17, 2007 6:37:19 PM
Subject: Re: [MPICH] MPICH2 1.05 MPI_Send & MPI_Recv dropping packages randomly
Can you send us the test case? Does it fail with the ch3:sock device? Are the messages short or long?
Bill
On Jan 17, 2007, at 7:06 PM, chong tan wrote:
OS : RedHat Enterprise 4, 2.6.9-42.ELsmp
CPU : 4 dual-core Intel
The package was built with:
setenv CFLAGS "-m32 -O2"
setenv CC gcc
./configure -prefix=/u/cgtan/my_release_dir --with-device=ch3:ssm --enable-fast |& tee configure.log
-----
The test program runs 5 processes, one master and 4 slaves. The master always receives from the slaves and then sends to all of them. Randomly, an MPI_Send performed in the master will complete, but the corresponding MPI_Recv in the targeted slave does not complete, and the whole thing hangs.
I have a debugging mechanism that attaches a sequence id to every package sent. The packages are dumped before and after each send and recv, and a message is also dumped for the pending recv. The sequence ids trace OK all the way to the lost package.
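In case it helps, the exchange looks roughly like the sketch below. This is not the actual code (that is buried in the 3-million-line application); the message length, tag, iteration count, and the use of the first int as the sequence id are made up for illustration.

/* Minimal sketch of the pattern described above: rank 0 is the master,
 * ranks 1..size-1 are slaves.  Each message is a small array of ints
 * whose first element carries an illustrative sequence id. */
#include <mpi.h>
#include <stdio.h>

#define MSG_LEN  3         /* shortest message in the real code: 3 ints */
#define NUM_ITER 100000    /* the real runs go past 100 million exchanges */

int main(int argc, char **argv)
{
    int rank, size, iter, peer;
    int buf[MSG_LEN] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (iter = 0; iter < NUM_ITER; iter++) {
        if (rank == 0) {
            /* master: recv one message from every slave ... */
            for (peer = 1; peer < size; peer++)
                MPI_Recv(buf, MSG_LEN, MPI_INT, peer, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            /* ... then send one message back to every slave */
            buf[0] = iter;                /* sequence id for tracing */
            for (peer = 1; peer < size; peer++)
                MPI_Send(buf, MSG_LEN, MPI_INT, peer, 0, MPI_COMM_WORLD);
        } else {
            buf[0] = iter;                /* sequence id for tracing */
            MPI_Send(buf, MSG_LEN, MPI_INT, 0, 0, MPI_COMM_WORLD);
            /* in the real code, it is a recv like this one that randomly
             * never completes under ch3:ssm, even though the matching
             * MPI_Send in the master has completed */
            MPI_Recv(buf, MSG_LEN, MPI_INT, 0, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
    }

    if (rank == 0)
        printf("completed %d iterations\n", NUM_ITER);
    MPI_Finalize();
    return 0;
}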
The same code works fine with 1.0.4p1. It has been tested on test cases longer than 100 million send/recv sequences. Any suggestions?
tan