Your assessment of 20 usec is right; your guesses about the others were not. The master has to wait for the others to return, and then prepare the data to be exchanged. The data are mostly small: the smallest package is 16 bytes, and a few packages went up to about 8K in the test.

<DIV style="FONT-SIZE: 12pt; FONT-FAMILY: times new roman, new york, times, serif">I don't see performance problem in the app. It won't achieve the performance gain we are getting on SMP system if that is the case. However, this app is not a app for socket. In another word, socket is the problem, not the app, and I have no intention of using socket communication on SMP/SMT system like SUN's NIAGARA, that is why I ask for help at the first place.</DIV>
<DIV style="FONT-SIZE: 12pt; FONT-FAMILY: times new roman, new york, times, serif"> </DIV>
<DIV style="FONT-SIZE: 12pt; FONT-FAMILY: times new roman, new york, times, serif">tan</DIV>
<DIV style="FONT-SIZE: 12pt; FONT-FAMILY: times new roman, new york, times, serif"> </DIV>
<DIV style="FONT-SIZE: 12pt; FONT-FAMILY: times new roman, new york, times, serif"><BR> </DIV>
<DIV style="FONT-SIZE: 12pt; FONT-FAMILY: times new roman, new york, times, serif">----- Original Message ----<BR>From: Anthony Chan <chan@mcs.anl.gov><BR>To: chong tan <chong_guan_tan@yahoo.com><BR>Cc: mpich-discuss@mcs.anl.gov<BR>Sent: Thursday, April 26, 2007 12:47:28 PM<BR>Subject: Re: [MPICH] MPICH 105, SUN NIAGARA dead in the water<BR><BR>
You are saying that 1 process takes 15 hours. The 6-process job takes 10
times as long, 150 hours, and the master node does send/recv 26*10^9 times
during that time. The average send/recv time is therefore

    150*3600.0 sec / (26*10^9) ~ 20 usec.

The latency of TCP over Ethernet is a bit less than 10 usec. This means
your app either communicates with zero-byte messages and spends almost
half its time in communication, or communicates in small messages and does
no computation. Shared-memory communication is on the order of several
nanoseconds, so it is obvious that your app does better with a
shared-memory device. In any case, the numbers suggest your app may have a
performance problem. Hope this helps.

A.Chan

On Thu, 26 Apr 2007, chong tan wrote:

> yes and no. The same code achieved almost 4X performance improvement
> using nemesis on X86 running Linux, so comm cost is not as high as one
> would think. The 26 billion is the combined count of both sends and
> receives on the master, whose job is to sync all the slaves. The
> packages are small; packed or not, the overhead is there, and packing
> is not going to speed things up by more than 4X.
>
> Don't have numbers on what the sock comm cost would be on Linux. On
> Intel SMP, MPI nemesis can handle more than 1 million sends or recvs
> per second.
>
> tan
>
> ----- Original Message ----
> From: Anthony Chan <chan@mcs.anl.gov>
> To: chong tan <chong_guan_tan@yahoo.com>
> Cc: mpich-discuss@mcs.anl.gov
> Sent: Thursday, April 26, 2007 11:43:25 AM
> Subject: Re: [MPICH] MPICH 105, SUN NIAGARA dead in the water
>
> On Thu, 26 Apr 2007, chong tan wrote:
>
> > shm/ssm : dropped packages
> > nemesis : not supported
> > socket  : works, but very, very, very slow due to the amount of
> >           communication. In one test with 26 billion sends+receives,
> >           6 MPI processes were more than 10X slower than a single
> >           process (the single process takes 15 hours).
>
> This is off from the original topic, but your stated performance
> suggests your app's ratio of communication to computation may be too
> high to achieve good performance. Do your messages tend to be small and
> frequent? If so, is it possible to combine them using MPI_Pack? (I
> believe there are tools that can do that for you; see also the sketch
> after the quoted thread below.)
>
> A.Chan
>
> > any suggestion ?
> >
> > thanks
> >
> > tan
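
For anyone reading this in the archives: below is a minimal sketch of the
MPI_Pack combining that Anthony suggests in the quoted thread, assuming two
small fields merged into a single send. The field names, buffer size, and
tag are invented for the example; MPI_Pack, MPI_Unpack, and the MPI_PACKED
type are standard MPI.

/* Minimal sketch: combine two small fields into one MPI_Pack'd send.
 * The fields (step, value), the 64-byte buffer, and TAG_SYNC are
 * invented for the example.  Run with: mpiexec -n 2 ./pack          */
#include <mpi.h>

#define TAG_SYNC 3

int main(int argc, char **argv)
{
    int    rank, nprocs, pos, step = 0;
    double value = 0.0;
    char   buf[64];
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    if (nprocs >= 2) {
        if (rank == 0) {             /* pack two fields, one send    */
            pos = 0;
            MPI_Pack(&step,  1, MPI_INT,    buf, sizeof buf, &pos,
                     MPI_COMM_WORLD);
            MPI_Pack(&value, 1, MPI_DOUBLE, buf, sizeof buf, &pos,
                     MPI_COMM_WORLD);
            MPI_Send(buf, pos, MPI_PACKED, 1, TAG_SYNC,
                     MPI_COMM_WORLD);
        } else if (rank == 1) {      /* one recv, then unpack        */
            MPI_Recv(buf, sizeof buf, MPI_PACKED, 0, TAG_SYNC,
                     MPI_COMM_WORLD, &st);
            pos = 0;
            MPI_Unpack(buf, sizeof buf, &pos, &step,  1, MPI_INT,
                       MPI_COMM_WORLD);
            MPI_Unpack(buf, sizeof buf, &pos, &value, 1, MPI_DOUBLE,
                       MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}

Whether this helps depends on how many separate messages can actually be
merged per sync step; as tan notes above, if the overhead is per message,
packing buys at most the reduction in message count.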
<DIV style="FONT-SIZE: 12pt; FONT-FAMILY: times new roman, new york, times, serif"><BR></DIV></div><br>
<hr size=1>Ahhh...imagining that irresistible "new car" smell?<br> Check out
<a href="http://us.rd.yahoo.com/evt=48245/*http://autos.yahoo.com/new_cars.html;_ylc=X3oDMTE1YW1jcXJ2BF9TAzk3MTA3MDc2BHNlYwNtYWlsdGFncwRzbGsDbmV3LWNhcnM-">new cars at Yahoo! Autos.</a>
</body></html>