[mpich-discuss] No gain for calculating maximum element of an array

Jeff Hammond jhammond at alcf.anl.gov
Fri Nov 9 09:07:28 CST 2012


It is not surprising at all that shared memory is 20x faster than
distributed memory using MPI, particularly on the machines you are using,
which I assume are running GigE.
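
As a rough back-of-the-envelope estimate (assuming ~50-100 microseconds of
latency per message on GigE): scanning 10,000 ints on one core takes on the
order of 10 microseconds, while the scatter and the reduction each cost at
least one network round trip, so the communication alone can easily exceed
the entire single-node run time.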

Jeff

On Fri, Nov 9, 2012 at 8:59 AM, Aman Madaan <madaan.amanmadaan at gmail.com> wrote:

> Hello
>
> I am pretty new to MPI programming, and I have been doing some research on
> the performance of discarded computers in our college.
> Although my cluster gives the expected gain for many embarrassingly
> parallel problems, the attached simple program for finding the maximum
> element of an array is unexpectedly slow.
>
> The file of numbers is first read at the root, and the array is scattered
> to all the participating nodes. Each process then computes its local
> maximum, and MPI_Reduce is used to obtain the global maximum.
>
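> In outline, the program does the following (a simplified sketch, with
> error handling omitted; it assumes the values are ints and that the
> element count divides evenly among the processes):
>
>   #include <stdio.h>
>   #include <stdlib.h>
>   #include <mpi.h>
>
>   int main(int argc, char **argv)
>   {
>       MPI_Init(&argc, &argv);
>       int rank, size;
>       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>       MPI_Comm_size(MPI_COMM_WORLD, &size);
>
>       int n = 0;
>       int *numbers = NULL;
>       if (rank == 0) {
>           /* root reads the element count and then the values */
>           FILE *fp = fopen(argv[1], "r");
>           fscanf(fp, "%d", &n);
>           numbers = malloc(n * sizeof(int));
>           for (int i = 0; i < n; i++)
>               fscanf(fp, "%d", &numbers[i]);
>           fclose(fp);
>       }
>
>       /* every process needs the count to size its local buffer */
>       MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
>       int local_n = n / size;   /* assumes size divides n evenly */
>       int *local = malloc(local_n * sizeof(int));
>
>       double t0 = MPI_Wtime();
>       MPI_Scatter(numbers, local_n, MPI_INT,
>                   local, local_n, MPI_INT, 0, MPI_COMM_WORLD);
>
>       /* each process scans its own chunk */
>       int local_max = local[0];
>       for (int i = 1; i < local_n; i++)
>           if (local[i] > local_max)
>               local_max = local[i];
>
>       /* combine the per-process maxima into the global maximum */
>       int global_max;
>       MPI_Reduce(&local_max, &global_max, 1, MPI_INT, MPI_MAX,
>                  0, MPI_COMM_WORLD);
>       double t1 = MPI_Wtime();
>
>       if (rank == 0) {
>           printf("Maximum = %d\n", global_max);
>           printf("wall clock time = %f\n", t1 - t0);
>       }
>
>       free(local);
>       free(numbers);
>       MPI_Finalize();
>       return 0;
>   }
>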
> The results of running the program are as follows:
>
> *Using 2 nodes:*
>
> axiom at node1:~/Programs/largest$ mpiexec -f f ./a.out numbers/num10000.txt
> 10000
>
> Maximum = 3999213
>
> wall clock time = 0.002819
>
> *Using a single node:*
>
> axiom at node1:~/Programs/largest$ mpiexec  ./a.out numbers/num10000.txt
> 10000
>
> Maximum = 3999213
>
> wall clock time = 0.000168
>
> ..........
>
> The situation remains the same even after increasing the number of elements.
>
> I would appreciate any suggestions.
>
> Thanks a lot for your time.
>
>
> With regards
>
>
> --
>
> Aman Madaan
>
>
>


-- 
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond