Code structuring - Communicator

Amit.Itagi at seagate.com
Mon May 12 08:27:50 CDT 2008


Thanks, Barry.

Rgds,
Amit



                                                                           
From: Barry Smith <bsmith at mcs.anl.gov>
Sent by: owner-petsc-users at mcs.anl.gov
To: petsc-users at mcs.anl.gov
Subject: Re: Code structuring - Communicator
Date: 05/09/2008 03:07 PM
Reply-to: petsc-users at mcs.anl.gov

    There are many ways to do this; most of them involve using MPI to
construct subcommunicators for the various parallel subtasks. You very
likely want to keep PetscInitialize() at the very beginning of the
program; you would not write the calls in terms of PETSC_COMM_WORLD or
MPI_COMM_WORLD, but would instead use the subcommunicators to create the
objects.
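
As a rough sketch of what that looks like for your 16-process case
(assuming groups of 4; foo() here is just a placeholder for your
function, and the call signatures follow current PETSc, so older
releases may differ slightly, e.g. VecDestroy()):

    /* Split 16 ranks into 4 subcommunicators of 4 ranks each, then
       create PETSc objects on the subcommunicator. */
    #include <petscksp.h>

    int main(int argc, char **argv)
    {
      MPI_Comm    subcomm;
      PetscMPIInt rank, color;
      Vec         x;

      PetscInitialize(&argc, &argv, NULL, NULL); /* once, at program start */
      MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

      color = rank / 4;  /* ranks 0-3 -> group 0, 4-7 -> group 1, ... */
      MPI_Comm_split(PETSC_COMM_WORLD, color, rank, &subcomm);

      /* Create objects on the subcommunicator, not PETSC_COMM_WORLD */
      VecCreateMPI(subcomm, PETSC_DECIDE, 100, &x);
      /* ... foo(subcomm, ...) would run here, once per group of 4 ... */

      VecDestroy(&x);
      MPI_Comm_free(&subcomm);
      PetscFinalize();
      return 0;
    }

Each group of 4 processes then runs its own independent instance of
foo(), while the optimizer coordinates across all 16.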

    An alternative approach is to look at the manual pages for
PetscOpenMPMerge(), PetscOpenMPRun(), and PetscOpenMPNew() in petsc-dev.
These allow a simple master-worker model of parallelism with PETSc, with
a group of masters that can work together (instead of just one master),
where each master controls a group of workers. The code in
src/ksp/pc/impls/openmp uses this approach.
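
The manual pages have the exact interfaces; as a rough sketch of the
underlying layout in plain MPI (this is just the general pattern, not
the actual PetscOpenMP API), rank 0 of each group acts as a master and
the masters share their own communicator:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
      MPI_Comm group, masters;
      int      rank, group_size = 4;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* Each block of group_size ranks forms one master+workers group. */
      MPI_Comm_split(MPI_COMM_WORLD, rank / group_size, rank, &group);

      /* Rank 0 of each group is a master; the masters get their own
         communicator so they can cooperate with each other. */
      MPI_Comm_split(MPI_COMM_WORLD,
                     (rank % group_size == 0) ? 0 : MPI_UNDEFINED,
                     rank, &masters);

      /* Masters coordinate over 'masters'; each master drives its
         workers over 'group' (broadcast tasks, gather results, ...). */

      if (masters != MPI_COMM_NULL) MPI_Comm_free(&masters);
      MPI_Comm_free(&group);
      MPI_Finalize();
      return 0;
    }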

Note that the OpenMP in these names has NOTHING to do with the OpenMP
standard. Also, I don't really have any support for Fortran; I hope you
use C/C++. Comments welcome. It sounds like this matches what you need.
It's pretty cool, but underdeveloped.

    Barry



On May 9, 2008, at 12:46 PM, Amit.Itagi at seagate.com wrote:

>
> Hi,
>
> I have a question about the PETSc communicator. I have a PETSc program
> "foo" which essentially runs in parallel and gives me y=f(x1,x2,...),
> where y is an output parameter and the xi's are input parameters.
> Suppose I want to run a parallel optimizer for the input parameters. I
> am looking for the following functionality. I submit the optimizer job
> on 16 processors (using "mpiexec -np 16 progName"). The optimizer
> should then submit 4 runs of "foo", each running in parallel on 4
> processors. "foo" will be written as a function and not as a main
> program in this case. How can I get this functionality using PETSc?
> Should PetscInitialize be called in the optimizer, or in each foo run?
> If PetscInitialize is called in the optimizer, is there a way to make
> the foo function run only on a subset of the 16 processors?
>
> Maybe I haven't done a good job of explaining my problem. Let me know
> if you need any clarifications.
>
> Thanks
>
> Rgds,
> Amit
>





