[petsc-users] PetscInitialize with MPI Groups

Lisandro Dalcin dalcinl at gmail.com
Mon May 17 19:57:20 CDT 2010


On 17 May 2010 20:34, Gaetan Kenway <kenway at utias.utoronto.ca> wrote:
> Hello
>
> I use PETSc in both Fortran and Python using the petsc4py bindings.  I
> currently have an issue with initializing PETSc when using MPI groups. I am
> using a code with two parts: an aero part and a structural part. I wish to
> only use PETSc on one of the processor groups, say the aero side. I've
> attached a simple python script that replicates the behavior I see.
> Basically, when you initialize PETSc on only a subset of MPI_COMM_WORLD,
> the program hangs. However, if the processors that are NOT being initialized
> with PETSc are waiting at an MPI_BARRIER, it appears to work.  Note: any combination
> of nProc_aero and nProc_struct that add up to 4, ( (1,3), (2,2) or (3,1) )
> give the same behavior.
>
> The test.py script as supplied should hang when run with
>
> mpirun -np 4 python test.py
>
> However, if line 37 is uncommented, it will work.
>
>
> This is very similar to my actual problem. After I take the communicator,
> comm, corresponding to the aero processors, I pass it to Fortran (using
> mpi4py) and then use:
>
> PETSC_COMM_WORLD = comm
> call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
>
> However, again, only if the struct processors have an MPI_BARRIER call which
> corresponds to the PetscInitialize call will the processes carry on as
> expected.  If the other processes exit before an MPI_BARRIER is called, the
> program simply hangs indefinitely.
>
> Currently, the workaround is to call an MPI_BARRIER on the other processors
> while the init is being called. However, I don't think this is correct.
>
> Any guidance would be greatly appreciated.
>
> Gaetan Kenway
> Ph.D Candidate
> University of Toronto Institute for Aerospace Studies
>

Basically, the comment line near the end:

if is_aero == True:
    from petsc4py import PETSc
    # This will call PETSc initialize on JUST the aero processors

is wrong. When you do "from petsc4py import PETSc", PETSc is actually
initialized on MPI_COMM_WORLD, because there is (currently) no way to
set PETSC_COMM_WORLD = comm from the Python side before the call to
PetscInitialize(). The program then likely hangs at a collective
MPI_Barrier(PETSC_COMM_WORLD /* == MPI_COMM_WORLD */).
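
For concreteness, here is a minimal sketch of the failing situation
(hypothetical names; your attached test.py is not reproduced here, and
the commented-out Barrier stands in for the "line 37" workaround you
describe):

from mpi4py import MPI

world = MPI.COMM_WORLD
nProc_aero = 2                   # any split adding up to 4 behaves the same
is_aero = world.rank < nProc_aero

# Split MPI_COMM_WORLD into an aero and a struct sub-communicator
comm = world.Split(color=0 if is_aero else 1, key=world.rank)

if is_aero:
    # The import runs PetscInitialize(), which is collective on
    # PETSC_COMM_WORLD == MPI_COMM_WORLD, so it waits for ALL ranks.
    from petsc4py import PETSc
    # ... aero-side work that would use `comm` ...
else:
    # world.Barrier()            # uncommenting this matches the collective
    pass                         # call inside PetscInitialize() and unblocks it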

Adding some support for this would be more or less trivial, for
example something like:

if is_aero:
    import petsc4py; petsc4py.init(comm=subcomm)
    from petsc4py import PETSc
else:
    pass

Would this work for you?
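
For completeness, a usage sketch of how that proposed interface could
look (assuming a comm= keyword accepted by petsc4py.init(), with the
same hypothetical split as above):

import sys
from mpi4py import MPI

world = MPI.COMM_WORLD
is_aero = world.rank < 2                  # hypothetical 2/2 split
subcomm = world.Split(color=0 if is_aero else 1, key=world.rank)

if is_aero:
    import petsc4py
    petsc4py.init(sys.argv, comm=subcomm)  # would set PETSC_COMM_WORLD = subcomm
    from petsc4py import PETSc
    # PETSc.COMM_WORLD would then refer to subcomm, so collective
    # calls involve only the aero ranks
else:
    pass                                   # struct ranks never touch PETSc

The struct ranks would then not need a matching MPI_Barrier, since
PetscInitialize() would only be collective on the sub-communicator.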

-- 
Lisandro Dalcin
---------------
CIMEC (INTEC/CONICET-UNL)
Predio CONICET-Santa Fe
Colectora RN 168 Km 472, Paraje El Pozo
Tel: +54-342-4511594 (ext 1011)
Tel/Fax: +54-342-4511169

