[petsc-users] petsc4py and MPI.COMM_SELF.Spawn

Rodrigo Felicio Rodrigo.Felicio at iongeo.com
Tue Feb 28 18:17:57 CST 2017


Dear All,

I am new to both PETSc (petsc4py) and MPI. I am prototyping an application that acts on data belonging to thousands of different spatial locations, so I decided to use mpi4py to parallelize my computations over that set of points. At each point, however, I have to solve a linear system of the type A x = b in the least-squares sense. Since the matrices A are sparse and large, I would like to use PETSc's LSQR solver to obtain x.

The trouble is that I cannot get mpi4py to work with petsc4py when using MPI.COMM_SELF.Spawn(), which I want to use so that the linear systems are solved in worker processes spawned from my main algorithm. I am not sure whether this is a problem with my PETSc/MPI installation, or whether there is some inherent incompatibility between mpi4py and petsc4py in spawned processes. I would be really thankful to anyone who could shed some light on this issue. For illustration, the child and master codes from the mpi4py spawning demo folder fail for me if I include the petsc4py initialization in either of them; they are reproduced below, followed by a minimal sketch of the per-point LSQR solve I have in mind. In fact, just adding the line "from petsc4py import PETSc" is enough to make the code hang and, upon keyboard termination, issue error messages of the type "spawned process group was unable to connect back to parent port".

Kind regards
Rodrigo

------------------------
CHILD CODE:
-------------------------

# including these petsc4py lines causes the code to hang
import sys
import petsc4py
petsc4py.init(sys.argv)
from petsc4py import PETSc

from mpi4py import MPI
from array import array
from array import array

# intercommunicator back to the spawning parent
master = MPI.Comm.Get_parent()
nprocs = master.Get_size()   # number of spawned workers (local group size)
myrank = master.Get_rank()   # this worker's rank among the spawned workers

n = array('i', [0])
master.Bcast([n, MPI.INT], root=0)   # receive n broadcast by the parent (rank 0 of the remote group)
n = n[0]

# midpoint-rule quadrature for pi = integral of 4/(1+x^2) over [0,1],
# with the sample points strided across the spawned workers
h = 1.0 / n
s = 0.0
for i in range(myrank+1, n+1, nprocs):
    x = h * (i - 0.5)
    s += 4.0 / (1.0 + x**2)
pi = s * h

# send the partial sums to be reduced at the parent (rank 0 of the remote group)
pi = array('d', [pi])
master.Reduce(sendbuf=[pi, MPI.DOUBLE],
              recvbuf=None,
              op=MPI.SUM, root=0)

master.Disconnect()


---------------------
MASTER CODE:
----------------------
from mpi4py import MPI
from array import array
from math import pi as PI
from sys import argv

cmd = 'cpi-worker-py.exe'
if len(argv) > 1: cmd = argv[1]
print("%s -> %s" % (argv[0], cmd))

worker = MPI.COMM_SELF.Spawn(cmd, None, 5)   # spawn 5 worker processes, each running cmd

n = array('i', [100])
worker.Bcast([n, MPI.INT], root=MPI.ROOT)   # send n to the workers (MPI.ROOT marks this process as the root on the parent side)

pi = array('d', [0.0])
# collect and sum the workers' partial results (MPI.ROOT marks this process as the receiving root)
worker.Reduce(sendbuf=None,
              recvbuf=[pi, MPI.DOUBLE],
              op=MPI.SUM, root=MPI.ROOT)
pi = pi[0]

worker.Disconnect()

print('pi: %.16f, error: %.16f' % (pi, abs(PI-pi)))
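
For context, this is roughly how I intend to call the LSQR solver at each spatial point once the spawning issue is sorted out. It is a minimal, self-contained sketch with a tiny made-up overdetermined system; the sizes, values, and variable names are illustrative only, and I am assuming the standard petsc4py KSP interface here.

----------------------
LSQR SKETCH (illustrative):
----------------------
import sys
import petsc4py
petsc4py.init(sys.argv)
from petsc4py import PETSc

# tiny overdetermined system: 4 observations, 2 unknowns (values are made up)
A = PETSc.Mat().createAIJ([4, 2], nnz=2, comm=PETSc.COMM_SELF)
coeffs = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]]
for i, row in enumerate(coeffs):
    for j, v in enumerate(row):
        A.setValue(i, j, v)
A.assemble()

b = PETSc.Vec().createSeq(4, comm=PETSc.COMM_SELF)   # observed data
x = PETSc.Vec().createSeq(2, comm=PETSc.COMM_SELF)   # least-squares solution
b.setValues(range(4), [6.0, 5.0, 7.0, 10.0])
b.assemble()

# LSQR handles the rectangular A directly; no preconditioner for simplicity
ksp = PETSc.KSP().create(comm=PETSc.COMM_SELF)
ksp.setType(PETSc.KSP.Type.LSQR)
ksp.getPC().setType(PETSc.PC.Type.NONE)
ksp.setOperators(A)
ksp.setFromOptions()
ksp.solve(b, x)
print(x.getArray())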




