[petsc-users] PetscInitialize with MPI Groups
Gaetan Kenway
kenway at utias.utoronto.ca
Mon May 17 18:34:02 CDT 2010
Hello
I use PETSc both in Fortran and in Python via the petsc4py bindings.
I currently have an issue with initializing PETSc when using MPI groups.
I am using a code with two parts: an aero part and a structural part. I
wish to use PETSc on only one of the processor groups, say the aero
side. I've attached a simple Python script that replicates the behavior
I see. Basically, when you initialize PETSc on only a subset of
MPI_COMM_WORLD, the program hangs. However, if the processors that are
NOT being initialized with PETSc are sitting at an MPI_BARRIER, it
appears to work. Note: any combination of nProc_aero and nProc_struct
that adds up to 4 ((1,3), (2,2), or (3,1)) gives the same behavior.
The test.py script as supplied should hang when run with
mpirun -np 4 python test.py
However, if line 37 is uncommented, it will work.
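In outline, the script does something like the following (a simplified
sketch, not the attached file verbatim; in particular, the
petsc4py.init(comm=...) form assumes a petsc4py build whose init()
accepts a comm argument):

    from mpi4py import MPI
    import petsc4py

    world = MPI.COMM_WORLD
    rank = world.Get_rank()

    nProc_aero = 2                    # any split adding to 4 behaves the same
    is_aero = rank < nProc_aero

    # Split MPI_COMM_WORLD into aero and struct sub-communicators
    comm = world.Split(0 if is_aero else 1, rank)

    if is_aero:
        # Initialize PETSc only on the aero sub-communicator
        petsc4py.init(comm=comm)      # assumes init() takes a comm argument
        from petsc4py import PETSc
        # ... aero-side PETSc work would go here ...
    else:
        # world.Barrier()             # uncommenting this lets the init complete
        pass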
This is very similar to my actual problem. After I take the
communicator, comm, corresponding to the aero processors, I pass it to
Fortran (using mpi4py) and then use:
PETSC_COMM_WORLD = comm   ! comm is the aero sub-communicator passed from Python
call PetscInitialize(PETSC_NULL_CHARACTER, ierr)
However, again, the process only carries on as expected if the struct
processors have an MPI_BARRIER call that corresponds to the
PetscInitialize call. If the other processes exit before an MPI_BARRIER
is called, the program simply hangs indefinitely.
Currently, the workaround is to call an MPI_BARRIER on the other
processors while the init is being called. However, I don't think this
is correct.
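For concreteness, the hand-off on the Python side looks roughly like the
sketch below (the module and routine names aeromod and init_petsc are
placeholders for the real wrapped code; Comm.py2f() gives the integer
handle that Fortran expects):

    from mpi4py import MPI

    world = MPI.COMM_WORLD
    is_aero = world.Get_rank() < 2   # placeholder split
    comm = world.Split(0 if is_aero else 1)

    if is_aero:
        fcomm = comm.py2f()          # Fortran integer handle for comm
        import aeromod               # placeholder for the wrapped Fortran code
        aeromod.init_petsc(fcomm)    # sets PETSC_COMM_WORLD, calls PetscInitialize
    else:
        world.Barrier()              # the current workaround on the struct side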
Any guidance would be greatly appreciated.
Gaetan Kenway
Ph.D. Candidate
University of Toronto Institute for Aerospace Studies
-------------- next part --------------
A non-text attachment was scrubbed...
Name: test.py
Type: text/x-python
Size: 1113 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20100517/592b1103/attachment.py>