It is not surprising at all that shared memory is 20x faster than distributed memory using MPI, particularly on the machines you are using, which I assume are running GigE.
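As a rough, back-of-the-envelope illustration (typical figures, not measurements from your cluster): MPI over GigE has a per-message latency on the order of 30-100 microseconds and roughly 125 MB/s of bandwidth, whereas a shared-memory transfer costs about a microsecond. For 10,000 ints (40 KB), just scattering half the array to the second node and reducing the result back costs a few hundred microseconds of communication at minimum, which already dwarfs the 0.000168 s the entire job takes on one node. A single comparison per element is nowhere near enough computation to hide that cost.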
Jeff

On Fri, Nov 9, 2012 at 8:59 AM, Aman Madaan <madaan.amanmadaan@gmail.com> wrote:
Hello,

I am pretty new to MPI programming, and I have been doing some research on the performance of discarded ("dumped") computers in our college. Although my cluster gives the expected speedup for many embarrassingly parallel problems, the attached simple program for finding the maximum number in an array is unexpectedly slow.
The file of numbers is first read at the root, and the array is scattered to all the participating nodes. Each process computes its local maximum, and MPI_Reduce is used to obtain the global maximum.
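A minimal sketch of this approach (not the attached program itself; the file layout of one integer per line, the variable names, and the assumption that the element count divides evenly among the processes are simplifications) looks roughly like this:

/* Sketch: read n ints at the root, scatter equal slices, take a local
 * maximum on each rank, and combine with MPI_Reduce using MPI_MAX.
 * Assumes n is divisible by the number of ranks. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int n = atoi(argv[2]);      /* total number of elements          */
    int chunk = n / size;       /* elements per rank (assumed exact) */

    int *all = NULL;
    if (rank == 0) {
        all = malloc(n * sizeof(int));
        FILE *fp = fopen(argv[1], "r");
        for (int i = 0; i < n; i++)
            fscanf(fp, "%d", &all[i]);
        fclose(fp);
    }

    int *local = malloc(chunk * sizeof(int));
    double t0 = MPI_Wtime();

    /* Distribute equal slices of the array from the root to every rank. */
    MPI_Scatter(all, chunk, MPI_INT, local, chunk, MPI_INT, 0, MPI_COMM_WORLD);

    /* Each rank finds the maximum of its own slice. */
    int local_max = local[0];
    for (int i = 1; i < chunk; i++)
        if (local[i] > local_max)
            local_max = local[i];

    /* Combine the per-rank maxima into one global maximum at the root. */
    int global_max;
    MPI_Reduce(&local_max, &global_max, 1, MPI_INT, MPI_MAX, 0, MPI_COMM_WORLD);

    double t1 = MPI_Wtime();
    if (rank == 0) {
        printf("Maximum = %d\n", global_max);
        printf("wall clock time = %f\n", t1 - t0);
    }

    free(local);
    free(all);
    MPI_Finalize();
    return 0;
}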
The results of running the program are as follows:

Using 2 nodes:

axiom@node1:~/Programs/largest$ mpiexec -f f ./a.out numbers/num10000.txt 10000
Maximum = 3999213
wall clock time = 0.002819
Using a single node:

axiom@node1:~/Programs/largest$ mpiexec ./a.out numbers/num10000.txt 10000
Maximum = 3999213
wall clock time = 0.000168

..........

The situation remains the same even after increasing the number of elements.

I would appreciate your suggestions. Thanks a lot for your time.
With regards,

--
Aman Madaan
--
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond@alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond