[mpich-discuss] No gain for finding maximum element in an array.

Aman Madaan madaan.amanmadaan at gmail.com
Fri Nov 9 13:31:09 CST 2012


Thanks a lot for the answers!

I have calculated the maximum element of an array of around 5*10^7
elements, but I still see no gain!
Is the code correct?
I have reattached the program file for your kind reference.
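
For anyone who would rather not open the attachment, here is a minimal
sketch of the approach the program takes (the root reads the numbers,
MPI_Scatter distributes equal chunks, each rank finds its local maximum,
and MPI_Reduce with MPI_MAX combines them). I am assuming here that the
element count is passed as a second argument and divides evenly among the
ranks; the attached file differs in details:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int n = atoi(argv[2]);      /* total element count (assumed argument) */
    int *data = NULL;
    if (rank == 0) {            /* root reads the whole file */
        data = malloc(n * sizeof(int));
        FILE *fp = fopen(argv[1], "r");
        for (int i = 0; i < n; i++)
            fscanf(fp, "%d", &data[i]);
        fclose(fp);
    }

    int chunk = n / size;       /* assumes size divides n evenly */
    int *local = malloc(chunk * sizeof(int));
    double t0 = MPI_Wtime();

    /* Scatter equal chunks of the array to every rank. */
    MPI_Scatter(data, chunk, MPI_INT, local, chunk, MPI_INT, 0, MPI_COMM_WORLD);

    /* Each rank computes its local maximum... */
    int local_max = local[0];
    for (int i = 1; i < chunk; i++)
        if (local[i] > local_max)
            local_max = local[i];

    /* ...and MPI_Reduce with MPI_MAX combines them at the root. */
    int global_max;
    MPI_Reduce(&local_max, &global_max, 1, MPI_INT, MPI_MAX, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Maximum = %d\nwall clock time = %f\n",
               global_max, MPI_Wtime() - t0);

    free(local);
    free(data);
    MPI_Finalize();
    return 0;
}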

I would like to request some more guidance on the results.

As I mentioned, the results I get for embarrassingly parallel
problems are not at all surprising.
The plots:
https://github.com/madaan/BVP-MPI-PROJECT/blob/master/PI/PI.png and
https://github.com/madaan/BVP-MPI-PROJECT/blob/master/SumNELe/Sigma%28N%29.png

The setup is as follows:

*Node configuration*
There are 3 similar nodes, each with an Intel(R) Pentium(R) 4 CPU at
2.60GHz, 512 MB of memory, and 512 KB of cache.

Another node has an Intel Core i3-330M at 2.13GHz with 3GB of memory; it
is termed *FOX* for reference. All the nodes run Ubuntu 12.04 LTS.

*Network setup:*
The nodes are interconnected via a fast Ethernet switch (10 MB/s) in an
essentially meshed configuration. All tests use the MPI/Pro library from
MPI Software Technology (MPI/Pro distribution version 1.6.3).
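
(For reference, the runs quoted below use a hostfile, the 'f' in
'mpiexec -f f'. Mine is simply one hostname per line, roughly as follows,
where the names are placeholders for the actual ones on our network:

node1
node2
node3
fox
)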


The aim of the project is to compare the computing power of a group of
discarded ("dumped") machines with a state-of-the-art (or so I say)
computer, the FOX mentioned above.

We call the project *Junk Computing*.

I request your thoughts, comments, and criticism.

I also request suggestions for some simple benchmarking programs that I
can write.
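
For instance, I am thinking of starting with the classic ping-pong test
between two nodes, to measure point-to-point latency and bandwidth. A
rough sketch of what I have in mind (untested so far; the message sizes
and repetition counts are arbitrary choices of mine):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Needs at least 2 ranks; any ranks beyond 0 and 1 just idle. */
    for (int bytes = 1; bytes <= (1 << 20); bytes *= 2) {
        char *buf = malloc(bytes);
        int reps = bytes < 4096 ? 1000 : 50;   /* fewer reps for big messages */
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {       /* rank 0 sends, then waits for the echo */
                MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) { /* rank 1 echoes the message back */
                MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double dt = MPI_Wtime() - t0;
        if (rank == 0)
            printf("%8d bytes: %10.1f us round trip, %8.2f MB/s\n",
                   bytes, 1e6 * dt / reps, 2.0 * bytes * reps / dt / 1e6);
        free(buf);
    }
    MPI_Finalize();
    return 0;
}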


Thanks for the precious time.

On Fri, Nov 9, 2012 at 11:30 PM, <mpich-discuss-request at mcs.anl.gov> wrote:

>
> Today's Topics:
>
>    1. Re: No gain for calculating maximum element of an array (Jeff Hammond)
>    2. Re: No gain for calculating maximum element of an array (Darius Buntinas)
>    3. Re: do not get mpif90 after building mpich2 in linux (Gus Correa)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 9 Nov 2012 09:07:28 -0600
> From: Jeff Hammond <jhammond at alcf.anl.gov>
> To: mpich-discuss at mcs.anl.gov
> Subject: Re: [mpich-discuss] No gain for calculating maximum element of an array
>
> It is not surprising at all that shared memory is 20x faster than
> distributed memory using MPI, particularly on the machines you are using,
> which I assume are running GigE.
>
> Jeff
>
> On Fri, Nov 9, 2012 at 8:59 AM, Aman Madaan <madaan.amanmadaan at gmail.com
> >wrote:
>
> > Hello
> >
> > I am pretty new to MPI programming and I was doing some research on
> > the performance of discarded computers in our college.
> > Although my cluster gives the expected gain for many embarrassingly
> > parallel problems, the attached simple program for calculating the
> > maximum number in an array is unexpectedly slow.
> >
> > The file of numbers is first read at the root and the array is scattered
> > to all the supporting nodes. Everyone calculates their local maximum and
> > MPI_Reduce is used to get the global maximum.
> >
> > The result of running the program are as follows :
> >
> > *Using 2 nodes : *
> >
> > axiom at node1:~/Programs/largest$ mpiexec -f f ./a.out numbers/num10000.txt
> > 10000
> >
> > Maximum = 3999213
> >
> > wall clock time = 0.002819
> >
> > *Using single node : *
> >
> > axiom at node1:~/Programs/largest$ mpiexec  ./a.out numbers/num10000.txt
> > 10000
> >
> > Maximum = 3999213
> >
> > wall clock time = 0.000168
> >
> > ..........
> >
> > The situation remains the same even after increasing the number of elements.
> >
> > I request your suggestions.
> >
> > Thanks a lot for your time.
> >
> >
> > With regards
> >
> >
> > --
> >
> > Aman Madaan
> >
> >
> >
>
>
> --
> Jeff Hammond
> Argonne Leadership Computing Facility
> University of Chicago Computation Institute
> jhammond at alcf.anl.gov / (630) 252-5381
> http://www.linkedin.com/in/jeffhammond
> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
>
> ------------------------------
>
> Message: 2
> Date: Fri, 9 Nov 2012 09:38:38 -0600
> From: Darius Buntinas <buntinas at mcs.anl.gov>
> To: mpich-discuss at mcs.anl.gov
> Subject: Re: [mpich-discuss] No gain for calculating maximum element of an array
>
>
> It looks like your problem size is way too small to see a significant
> difference.  Try increasing the problem size until the run time is 10 or so
> minutes.
>
> -d
>
>
> On Nov 9, 2012, at 8:59 AM, Aman Madaan wrote:
>
> > wall clock time = 0.000168
> >
> >
>
>
>
> ------------------------------
>
> Message: 3
> Date: Fri, 09 Nov 2012 12:59:29 -0500
> From: Gus Correa <gus at ldeo.columbia.edu>
> To: Mpich Discuss <mpich-discuss at mcs.anl.gov>
> Subject: Re: [mpich-discuss] do not get mpif90 after building mpich2 in linux
>
> On 11/09/2012 08:25 AM, Michael Seman wrote:
> > In reply to various posts, the Intel compiler is set up using the
> > ifortvars and iccvars scripts. I have compiled FFTW package without a
> > hitch. I did see in one of the release notes that there is a problem
> > with version 13 of Intel compilers, so I used version 11.1 which I
> > already had.
> > Well anyway, I noticed in the mpich2 config.log the following message:
> >
> > ld: /opt/intel/Compiler/11.1/080/bin/lib/for_main.o: No such file: No
> > such file or directory
> >
> > That path is obviously wrong - I haven't figured out yet where that gets
> > screwed up. Now if I create a lib subdirectory under bin and copy all
> > the *.o files from the lib/intel64 directory, and then try building
> > mpich2, I do get the mpif90 wrapper. I think that's like plugging fuel
> > leaks with chewing gum, so I need to locate the origin of the defective
> > ld path.
> Hi Michael
> Still sounds like a flaw in the compiler environment.
> 'printenv | grep LIB' may give some clue.
> Hope it helps,
> Gus Correa
>
>
> ------------------------------
>
>
> End of mpich-discuss Digest, Vol 50, Issue 12
> *********************************************
>



-- 

Aman Madaan
+91-9958725963
-------------- next part --------------
Attachment: DistributedLargest.c (text/x-csrc, 2268 bytes)
URL: <http://lists.mcs.anl.gov/pipermail/mpich-discuss/attachments/20121110/5bedcf13/attachment.c>

