Code structuring - Communicator

Amit.Itagi at seagate.com
Tue May 27 12:10:25 CDT 2008



owner-petsc-users at mcs.anl.gov wrote on 05/27/2008 11:23:44 AM:

> On Tue, May 27, 2008 at 10:18 AM,  <Amit.Itagi at seagate.com> wrote:
> > Barry,
> >
> > I got part of what I was trying to do (sub-communicators etc.)
> > working. Now suppose I want to repeat a calculation with a different
> > input; I have two ways of doing it (based on what I have coded).
> >
> > 1)
> >
> > MPI_Init
> > Create a group using MPI_Comm_group
> > Create several sub-groups and sub-communicators using MPI_Group_incl
> > and MPI_Comm_create
> > Assign the sub-communicator to PETSC_COMM_WORLD
> > // Calculation 1
> > {
> > Do PetscInitialize
> > Perform the calculation
> > Do PetscFinalize
> > }
> > // Calculation 2
> > {
> > Do PetscInitialize
> > Perform the calculation
> > Do PetscFinalize
> > }
> > Do MPI_Finalize
> >
> > 2)
> >
> > MPI_Init
> > Create a group using MPI_Comm_group
> > Create several sub-groups and sub-communicators using MPI_Group_incl
> > and MPI_Comm_create
> > Assign the sub-communicator to PETSC_COMM_WORLD
> > Do PetscInitialize
> > // Calculation 1
> > {
> > Perform the calculation
> > }
> > // Calculation 2
> > {
> > Perform the calculation
> > }
> > Do PetscFinalize
> > Do MPI_Finalize
> >
> >
> > The first method crashes. I am trying to understand why. The
> > documentation
>
> What do you mean by "crashes", and on what line does it happen? You can
> use -start_in_debugger to get a stack trace. I do not completely
> understand your pseudocode; however, you should never call
> PetscInitialize()/PetscFinalize() more than once.

As Barry pointed out, the multiple calls to PetscInitialize are the likely
reason for my problem.
Thanks, Matt and Barry.
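
For reference, here is a minimal sketch of the working structure (option 2
above): split MPI_COMM_WORLD into sub-communicators, point PETSC_COMM_WORLD
at the sub-communicator before the single PetscInitialize call, and pair one
PetscFinalize with one MPI_Finalize. For brevity it uses MPI_Comm_split
rather than the MPI_Comm_group/MPI_Group_incl/MPI_Comm_create sequence from
the thread, and the group size of 4 is only illustrative:

    #include <petsc.h>

    int main(int argc, char **argv)
    {
      MPI_Comm subcomm;
      int      rank;

      MPI_Init(&argc, &argv);          /* user initializes MPI first ...   */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* Every 4 consecutive ranks share one sub-communicator */
      MPI_Comm_split(MPI_COMM_WORLD, rank / 4, rank, &subcomm);

      PETSC_COMM_WORLD = subcomm;      /* set BEFORE PetscInitialize       */
      PetscInitialize(&argc, &argv, NULL, NULL);   /* called exactly once  */

      /* Calculation 1: create PETSc objects, solve, destroy them */
      /* Calculation 2: same again, reusing the communicator      */

      PetscFinalize();                 /* ... so this leaves MPI running   */
      MPI_Comm_free(&subcomm);
      MPI_Finalize();
      return 0;
    }
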



>
>    Matt
>
> > says that PetscFinalize calls MPI_Finalize only if MPI_Init was not
> > called before PetscInitialize. In my case, is PetscFinalize destroying
> > the sub-communicators?
> >
> > Thanks
> >
> > Rgds,
> > Amit
> >
> >
> >
> >
> >
> > Barry Smith <bsmith at mcs.anl.gov> wrote on 05/09/2008 03:07 PM
> > to petsc-users at mcs.anl.gov:
> >
> >    There are many ways to do this; most of them involve using MPI to
> > construct sub-communicators for the various parallel sub-tasks. You
> > very likely want to keep PetscInitialize() at the very beginning of the
> > program; you would not write the calls in terms of PETSC_COMM_WORLD or
> > MPI_COMM_WORLD; rather, you would use the sub-communicators to create
> > the objects.
> >
> >    An alternative approach is to look at the manual pages for
> > PetscOpenMPMerge(), PetscOpenMPRun(), and PetscOpenMPNew() in
> > petsc-dev. These allow a simple master-worker model of parallelism with
> > PETSc, with a bunch of masters that can work together (instead of just
> > one master), where each master controls a bunch of workers. The code in
> > src/ksp/pc/impls/openmp uses this.
> >
> > Note that this "OpenMP" has NOTHING to do with the OpenMP standard.
> > Also, I don't really have any support for Fortran, so I hope you use
> > C/C++. Comments welcome. It sounds like this matches what you need.
> > It's pretty cool, but underdeveloped.
> >
> >    Barry
> >
> >
> >
> > On May 9, 2008, at 12:46 PM, Amit.Itagi at seagate.com wrote:
> >
> >>
> >> Hi,
> >>
> >> I have a question about the PETSc communicator. I have a PETSc program
> >> "foo" which essentially runs in parallel and gives me y = f(x1, x2, ...),
> >> where y is an output parameter and the xi's are input parameters.
> >> Suppose I want to run a parallel optimizer for the input parameters.
> >> I am looking at the following functionality: I submit the optimizer
> >> job on 16 processors (using "mpiexec -np 16 progName"). The optimizer
> >> should then submit 4 runs of "foo", each running in parallel on 4
> >> processors. "foo" will be written as a function and not as a main
> >> program in this case. How can I get this functionality using PETSc?
> >> Should PetscInitialize be called in the optimizer, or in each foo run?
> >> If PetscInitialize is called in the optimizer, is there a way to make
> >> the foo function run only on a subset of the 16 processors?
> >>
> >> Maybe I haven't done a good job of explaining my problem. Let me know
> >> if you need any clarifications.
> >>
> >> Thanks
> >>
> >> Rgds,
> >> Amit
> >>
> >
> >
> >
> >
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener
>
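
For completeness, a minimal sketch of Barry's first suggestion: a single
PetscInitialize at program start, with each solver's objects created on a
sub-communicator instead of PETSC_COMM_WORLD. The rank/4 grouping and the
vector size are illustrative only, and the calls use current PETSc
signatures:

    #include <petsc.h>

    int main(int argc, char **argv)
    {
      MPI_Comm    subcomm;
      PetscMPIInt rank;
      Vec         x;

      /* PetscInitialize on all ranks; it calls MPI_Init for us */
      PetscInitialize(&argc, &argv, NULL, NULL);
      MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

      /* 4 ranks per "foo" instance; each group gets its own communicator */
      MPI_Comm_split(PETSC_COMM_WORLD, rank / 4, rank, &subcomm);

      /* Objects live on the sub-communicator, not PETSC_COMM_WORLD, so
         each 4-rank group runs its own independent instance of foo() */
      VecCreate(subcomm, &x);
      VecSetSizes(x, PETSC_DECIDE, 100);
      VecSetFromOptions(x);
      /* ... foo() would build its Mat/KSP on subcomm the same way ... */

      VecDestroy(&x);
      MPI_Comm_free(&subcomm);
      PetscFinalize();   /* also finalizes MPI, since PETSc initialized it */
      return 0;
    }
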



