[mpich-discuss] mpich-discuss Digest, Vol 11, Issue 9
Vineet Pratap (Vampaier)
pratap.vineet at gmail.com
Wed Aug 12 23:56:20 CDT 2009
Thanks to Dorian, my problem is now solved.
On Wed, Aug 12, 2009 at 10:30 PM, <mpich-discuss-request at mcs.anl.gov> wrote:
> Send mpich-discuss mailing list submissions to
> mpich-discuss at mcs.anl.gov
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
> or, via email, send a message with subject or body 'help' to
> mpich-discuss-request at mcs.anl.gov
>
> You can reach the person managing the list at
> mpich-discuss-owner at mcs.anl.gov
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of mpich-discuss digest..."
>
>
> Today's Topics:
>
> 1. broadcast and reduce mechanism (Gra zeus)
> 2. Re: mpich-discuss Digest, Vol 11, Issue 8
> (Vineet Pratap (Vampaier))
> 3. Re: mpich-discuss Digest, Vol 11, Issue 8 (Dorian Krause)
> 4. Re: broadcast and reduce mechanism (Dave Goodell)
> 5. Re: broadcast and reduce mechanism (Rajeev Thakur)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 11 Aug 2009 19:34:09 -0700 (PDT)
> From: Gra zeus <gra_zeus at yahoo.com>
> Subject: [mpich-discuss] broadcast and reduce mechanism
> To: mpich-discuss at mcs.anl.gov
> Message-ID: <346772.58251.qm at web34502.mail.mud.yahoo.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hello,
> Can anyone confirm, or point me to documents that explain, the
> communication mechanism of MPI_Bcast and MPI_Reduce?
> I searched on Google, and it said Bcast and Reduce can use a tree algorithm or
> sequential communication. However, I can't find any specification that
> indicates the communication mechanism of MPI_Bcast and MPI_Reduce in MPICH2.
> thx,
> gra
>
>
>
>
> ------------------------------
>
> Message: 2
> Date: Wed, 12 Aug 2009 15:23:05 +0530
> From: "Vineet Pratap (Vampaier)" <pratap.vineet at gmail.com>
> Subject: Re: [mpich-discuss] mpich-discuss Digest, Vol 11, Issue 8
> To: mpich-discuss at mcs.anl.gov
> Message-ID:
> <b99924940908120253i55a28d51s4f950667fa2da9b3 at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Please correct this code:
>
> #include<iostream>
> #include<vector>
> #include "mpi.h"
>
> using namespace std;
>
> int main(int argc, char *argv[])
> {
>     MPI::Status status;
>     MPI::Init();
>     int myrank = MPI::COMM_WORLD.Get_rank();
>     int numprocs = MPI::COMM_WORLD.Get_size();
>     vector<int> ourvector(2);
>     if(myrank == 0){
>         ourvector[0] = 98;
>         // cout << "The max number the vector can hold is : " << ourvector.max_size();
>         // cout << "\nourvector has : " << ourvector.capacity() << " elements in it";
>         ourvector.push_back(99);
>         // cout << "\nNow ourvector has : " << ourvector.size() << " elements in it";
>         // cout << "\nThe Value of the first vector element is : " << ourvector[0];
>         // cout << "\nThe Value of our second vector element is : " << ourvector.at(1) << endl;
>         MPI::COMM_WORLD.Send(&ourvector[0], 2, MPI::INT, 1, 1);
>         // MPI::COMM_WORLD.Send(&ourvector[1], 1, MPI::INT, 1, 1);
>     }
>     else{
>         ourvector.reserve(2);
>         MPI::COMM_WORLD.Recv(&ourvector[0], 2, MPI::INT, 0, 1);
>         // MPI::COMM_WORLD.Recv(&ourvector[1], 1, MPI::INT, 0, 1);
>         // ourvector.pop_back();
>         cout << "ourvector now has : " << ourvector.capacity() << " elements" << endl;
>         cout << "\nNow ourvector has 1st : " << ourvector[0];
>         cout << "\nNow ourvector has 2nd : " << ourvector[1] << endl;
>         // cout << "Our first element in ourvector is : " << ourvector.front() << endl;
>         ourvector.resize(9);
>         ourvector.at(8) = 99;
>         // cout << "Our last element in ourvector is : " << ourvector.back() << endl;
>         // cout << "ourvector now holds : " << ourvector.size() << " elements" << endl;
>     }
>     MPI::Finalize();
>     return 0;
> }
>
>
> Now my output is:
> $ mpirun -np 2 ./vec.out
> ourvector now has : 2 elements
>
> Now ourvector has 1st : 98
> Now ourvector has 2nd : 0
>
> I want output like:
> $ mpirun -np 2 ./vec.out
> ourvector now has : 2 elements
>
> Now ourvector has 1st : 98
> Now ourvector has 2nd : 99
>
> ------------------------------
>
> Message: 3
> Date: Wed, 12 Aug 2009 15:44:45 +0200
> From: Dorian Krause <ddkrause at uni-bonn.de>
> Subject: Re: [mpich-discuss] mpich-discuss Digest, Vol 11, Issue 8
> To: mpich-discuss at mcs.anl.gov
> Message-ID: <4A82C74D.1030502 at uni-bonn.de>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> If you create ourvector like
>
> vector<int> ourvector(2);
>
> and then push_back 99, the vector will look like
>
> { 98, 0, 99 }
>
> because the constructor has already value-initialized two elements.
> Try calling the constructor as
>
> vector<int> ourvector(1);
>
> Then you should (if I understand the STL correctly) end up with
>
> { 98, 99 }
>
> as you intend.
>
> Hope this helps ...
>
> Regards,
> Dorian
>
>
> Vineet Pratap (Vampaier) wrote:
> > [...]
>
>
>
> ------------------------------
>
> Message: 4
> Date: Wed, 12 Aug 2009 09:00:52 -0500
> From: Dave Goodell <goodell at mcs.anl.gov>
> Subject: Re: [mpich-discuss] broadcast and reduce mechanism
> To: mpich-discuss at mcs.anl.gov
> Message-ID: <79C6E563-D421-4EED-85D8-9A9D29278C06 at mcs.anl.gov>
> Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
>
> You can find some information in this paper:
> http://www.mcs.anl.gov/~thakur/papers/mpi-coll.pdf
>
> However, since that was written there have been a few changes, some
> major and some minor. One difference is that we now perform those
> collective operations hierarchically on SMP systems (intranode,
> internode, then intranode again). Also, additional algorithms might
> have been selected, and the cutoff points are almost certainly
> different. So the best way to figure out what's going on in there is
> to read the code.
>
> As far as I know we don't use sequential communication to implement
> any of our collective operations.
>
> -Dave
>
> On Aug 11, 2009, at 9:34 PM, Gra zeus wrote:
>
> > [...]
>
>
>
> ------------------------------
>
> Message: 5
> Date: Wed, 12 Aug 2009 10:59:57 -0500
> From: "Rajeev Thakur" <thakur at mcs.anl.gov>
> Subject: Re: [mpich-discuss] broadcast and reduce mechanism
> To: <mpich-discuss at mcs.anl.gov>
> Message-ID: <E1B19E5F762B4CDEA8B54F9141E58351 at mcs.anl.gov>
> Content-Type: text/plain; charset="us-ascii"
>
> The code is in the src/mpi/coll directory. See bcast.c and reduce.c.
>
> Rajeev
>
> > -----Original Message-----
> > From: mpich-discuss-bounces at mcs.anl.gov
> > [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Dave Goodell
> > Sent: Wednesday, August 12, 2009 9:01 AM
> > To: mpich-discuss at mcs.anl.gov
> > Subject: Re: [mpich-discuss] broadcast and reduce mechanism
> >
> > [...]
> >
>
>
>
> ------------------------------
>
> _______________________________________________
> mpich-discuss mailing list
> mpich-discuss at mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
>
>
> End of mpich-discuss Digest, Vol 11, Issue 9
> ********************************************
>
--
VINEET PRATAP
(09868366605)
&
(09995211212)