[petsc-dev] Merging scatter operations from all intra-node processes

James Hawkes jh2g09 at soton.ac.uk
Thu Feb 5 06:14:36 CST 2015




I'm building a solver based on block 'chaotic relaxations', or totally asynchronous Jacobi. Currently I have a version using a hybrid MPI + OpenMP scheme in PETSc: one MPI process per node (or socket), with OpenMP within each node. OpenMP isn't used for performance reasons; the shared-by-default memory model simply suits this code better, since different threads work on the same data at the same time (one thread performs communication while the others perform relaxations, with no thread locking).
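
Roughly, the current hybrid scheme looks like the sketch below. It is only a sketch: exchange_halo() and relax_rows() are empty placeholders standing in for my real routines, not PETSc calls.

/* One MPI rank per node (or socket), OpenMP inside it.  Thread 0 keeps
 * refreshing the halo while the other threads relax continuously, with no
 * locking between them. */
#include <omp.h>

static void exchange_halo(double *x, int n)
{ /* MPI halo exchange for the whole rank would go here (placeholder) */ }

static void relax_rows(double *x, int n, int tid, int nthreads)
{ /* relax this thread's share of rows using whatever ghost values are
     currently visible (placeholder) */ }

void async_jacobi(double *x, int n, int nsweeps)
{
  #pragma omp parallel
  {
    int tid      = omp_get_thread_num();
    int nthreads = omp_get_num_threads();
    if (tid == 0) {
      /* dedicated communication thread (MPI_THREAD_FUNNELED is enough,
         since only the master thread touches MPI) */
      for (int s = 0; s < nsweeps; s++) exchange_halo(x, n);
    } else {
      /* worker threads: chaotic relaxation, no synchronisation with thread 0 */
      for (int s = 0; s < nsweeps; s++) relax_rows(x, n, tid, nthreads);
    }
  }
}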

I want to achieve the same thing using pure MPI, with shared-memory windows on each node. Having a solver that is only compatible with MPI+X applications is very limiting. For this, one process per node needs to be able to communicate the halo data for the entire node, whilst the other processes do their computational work.
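
In plain MPI-3 terms, the shared-memory part I have in mind looks something like the sketch below (sizes are placeholders): each rank exposes its local data through a window on a per-node sub-communicator, and the node master can get a direct pointer to every other rank's segment via MPI_Win_shared_query.

#include <mpi.h>

int main(int argc, char **argv)
{
  MPI_Comm nodecomm;
  MPI_Win  win;
  double  *myseg;
  MPI_Aint nlocal = 1000;   /* local vector length (placeholder) */
  int      noderank, nodesize;

  MPI_Init(&argc, &argv);

  /* one sub-communicator per shared-memory node */
  MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                      MPI_INFO_NULL, &nodecomm);
  MPI_Comm_rank(nodecomm, &noderank);
  MPI_Comm_size(nodecomm, &nodesize);

  /* every rank contributes its local data to one window on the node */
  MPI_Win_allocate_shared(nlocal*sizeof(double), sizeof(double),
                          MPI_INFO_NULL, nodecomm, &myseg, &win);

  if (noderank == 0) {
    /* the node master can read/write any slave's segment directly */
    for (int r = 1; r < nodesize; r++) {
      MPI_Aint sz;
      int      dispunit;
      double  *seg;
      MPI_Win_shared_query(win, r, &sz, &dispunit, &seg);
      /* ... pack seg into node-level communication here ... */
    }
  }

  MPI_Win_free(&win);
  MPI_Comm_free(&nodecomm);
  MPI_Finalize();
  return 0;
}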

My question is:

Given a sub-communicator across one node, is there a smart way, using existing PETSc functionality, to 'merge' the matrices/vectors from each slave process onto the master, so that rank 0 of each node is responsible for all inter-node communication?

My two ideas so far:

- Expose the Vecs and their VecScatters through a shared-memory window, and have each sub-communicator master perform the VecScatter for every slave on its node. Cons: by default the scatters will still try to communicate within the node (the master talking to itself), and the messages are not packed efficiently (multiple messages will be sent to the same remote processes).

- Grab the Vecs and Mats through a shared-memory window and pack them into a MATNEST/VECNEST on the sub-comm master, then use PETSc's existing functionality to create clean, packed scatter patterns for me. Cons: I'm not really sure - is there a lot of overhead? (A rough sketch of the kind of end result I'm after follows this list.)
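
To make the end goal a bit more concrete, below is a rough sketch of the kind of thing I'm hoping the node masters could end up doing: wrap the whole node's data in a single Vec on a masters-only communicator and let VecScatterCreate build one packed scatter per node. It is only a sketch, with error checking omitted: it relies on the default contiguous layout of MPI_Win_allocate_shared (so rank 0's base pointer covers the whole node's data), assumes PetscScalar is double, and the sizes and index sets are placeholders for real halo information.

#include <petscvec.h>

int main(int argc, char **argv)
{
  MPI_Comm     nodecomm, mastercomm = MPI_COMM_NULL;
  MPI_Win      win;
  PetscScalar *myseg;
  PetscInt     nlocal = 1000;              /* per-rank local length (placeholder) */
  int          noderank, nodesize;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* one sub-communicator per node, plus a masters-only communicator */
  MPI_Comm_split_type(PETSC_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                      MPI_INFO_NULL, &nodecomm);
  MPI_Comm_rank(nodecomm, &noderank);
  MPI_Comm_size(nodecomm, &nodesize);
  MPI_Comm_split(PETSC_COMM_WORLD, noderank == 0 ? 0 : MPI_UNDEFINED,
                 0, &mastercomm);

  /* each rank's local part lives in one contiguous shared window */
  MPI_Win_allocate_shared(nlocal*sizeof(PetscScalar), sizeof(PetscScalar),
                          MPI_INFO_NULL, nodecomm, &myseg, &win);

  if (noderank == 0) {
    /* with the default contiguous layout, rank 0's base pointer is the
       start of the whole node's data */
    Vec        vnode, vghost;
    IS         isfrom, isto;
    VecScatter scatter;
    PetscInt   nghost          = 2;        /* placeholder */
    PetscInt   ghost_global[2] = {0, 1};   /* placeholder halo indices */
    PetscInt   ghost_local[2]  = {0, 1};   /* placeholder local positions */

    VecCreateMPIWithArray(mastercomm, 1, nodesize*nlocal, PETSC_DECIDE,
                          myseg, &vnode);
    VecCreateSeq(PETSC_COMM_SELF, nghost, &vghost);
    ISCreateGeneral(PETSC_COMM_SELF, nghost, ghost_global, PETSC_COPY_VALUES, &isfrom);
    ISCreateGeneral(PETSC_COMM_SELF, nghost, ghost_local,  PETSC_COPY_VALUES, &isto);

    /* one packed scatter per node, built by PETSc's existing machinery */
    VecScatterCreate(vnode, isfrom, vghost, isto, &scatter);
    VecScatterBegin(scatter, vnode, vghost, INSERT_VALUES, SCATTER_FORWARD);
    VecScatterEnd(scatter, vnode, vghost, INSERT_VALUES, SCATTER_FORWARD);

    VecScatterDestroy(&scatter);
    ISDestroy(&isfrom); ISDestroy(&isto);
    VecDestroy(&vghost); VecDestroy(&vnode);
    MPI_Comm_free(&mastercomm);
  }

  MPI_Win_free(&win);
  MPI_Comm_free(&nodecomm);
  PetscFinalize();
  return 0;
}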

Many thanks.
