[MPICH] Changing the comm size at runtime
Rajeev Thakur
thakur at mcs.anl.gov
Thu Mar 15 16:31:34 CDT 2007
You should be able to use the singleton init feature of MPI-2, by which a
program not started with mpiexec can become an MPI program by calling
MPI_Init. No environment variables are needed. Two such programs can then
connect to each other with MPI_Comm_connect/MPI_Comm_accept. This should work
at least in MPICH2 on Unix; if it doesn't work on Windows, let us know.
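For example, a minimal sketch of the two sides in C (error checking omitted;
the port string printed by the accepting program has to be passed to the
connecting program out of band, e.g. on its command line; the file names are
just illustrative):

    /* server.c: started directly, NOT via mpiexec (singleton init) */
    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        char port_name[MPI_MAX_PORT_NAME];
        MPI_Comm intercomm;

        MPI_Init(&argc, &argv);              /* singleton init: size 1 */
        MPI_Open_port(MPI_INFO_NULL, port_name);
        printf("port: %s\n", port_name);     /* give this string to the client */
        fflush(stdout);
        MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm);
        /* ... communicate over intercomm ... */
        MPI_Comm_disconnect(&intercomm);
        MPI_Close_port(port_name);
        MPI_Finalize();
        return 0;
    }

    /* client.c: also started directly; argv[1] is the server's port string */
    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        MPI_Comm intercomm;

        MPI_Init(&argc, &argv);
        MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm);
        /* ... communicate over intercomm ... */
        MPI_Comm_disconnect(&intercomm);
        MPI_Finalize();
        return 0;
    }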
Rajeev
> -----Original Message-----
> From: Patrick Gräbel [mailto:pgraebel at stud.fh-dortmund.de]
> Sent: Thursday, March 15, 2007 4:23 PM
> To: Rajeev Thakur
> Cc: mpich-discuss at mcs.anl.gov
> Subject: Re: [MPICH] Changing the comm size at runtime
>
>
> Hey, looks like you are the author of the book "Using MPI-2" that I am
> just reading :) The book has a nice, easy-to-understand example. Now I
> can dynamically add/remove a slave to/from the master. Works fine.
>
> But I want to avoid using mpiexec, so I tried the hint "Debugging jobs
> by starting them manually" from the MPICH windev PDF: it is possible to
> set some environment variables to make an MPI program work directly,
> without mpiexec. This method works for an intracommunicator (e.g.,
> setting the size env var to 2 makes MPI::Init block until 2 ranks are
> available), but I have no idea how to get it to work with two separate
> MPI programs that use an intercommunicator: accept and connect block
> forever on both sides and never meet up (size set to 1, rank to 0 on
> both sides)...
>
> Greetings
> Patrick
>
> Rajeev Thakur wrote:
> >> You can use all MPI communication functions, including collectives.
> >> It will be the intercommunicator version of the collectives defined
> >> in MPI-2, as the communicator returned by a connect-accept is an
> >> intercommunicator.
> >
> > I should point out that you can also create an intracommunicator from
> > an intercommunicator by using MPI_Intercomm_merge and then use the
> > regular intracommunicator collectives if that's what you need.
> >
> > Rajeev
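For reference, a short sketch in C of the MPI_Intercomm_merge step quoted
above (assuming intercomm came from a connect/accept; the helper name and the
broadcast are just illustrative):

    #include "mpi.h"

    /* Merge an intercommunicator into an intracommunicator and use a
       regular collective on the result. Pass high=0 on one side and
       high=1 on the other to order the two groups in the merged comm. */
    void merge_and_bcast(MPI_Comm intercomm, int high)
    {
        MPI_Comm intracomm;
        int value = 42;

        MPI_Intercomm_merge(intercomm, high, &intracomm);
        MPI_Bcast(&value, 1, MPI_INT, 0, intracomm);  /* intracomm collective */
        MPI_Comm_free(&intracomm);
    }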