[petsc-dev] Merging scatter operations from all intra-node processes

Jed Brown jed at jedbrown.org
Fri Feb 6 10:51:45 CST 2015


James Hawkes <jh2g09 at soton.ac.uk> writes:

> I'm building a solver based on block 'chaotic relaxations', or totally
> asynchronous Jacobi. Currently I have a version using a hybrid MPI +
> OpenMP scheme in PETSc: one MPI process per node (or socket), with OpenMP
> within each node. OpenMP isn't used for performance; shared-by-default
> memory simply suits this code better, since different threads do
> different things to the same data at the same time (one thread performs
> communication, the others perform relaxations, with no thread locking).
>
> I want to achieve the same thing using pure MPI, with shared-memory
> windows on each node. Having a solver that is only compatible with
> MPI+X applications is very limiting. For this, one process needs to be
> able to communicate the halo data for the entire node, whilst the
> other processes do their computational work.

Having one process communicate for the entire node means serializing
buffer packing: the packing each rank would otherwise do for its own halo
now runs on a single core.  I consider this to be a pessimization.
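
For concreteness, the node-local shared window described above can be set
up with MPI-3 shared memory.  Below is a minimal, hypothetical sketch
(plain MPI, not any existing PETSc interface; sizes and the actual halo
exchange are placeholders) in which rank 0 of each node is the
communicating process, so the packing it does for the whole node is
exactly the serialized work mentioned above:

  /* Hypothetical sketch: a node-local shared window where rank 0 of each
     node does all the halo communication.  Sizes and the exchange itself
     are placeholders, not a real solver. */
  #include <mpi.h>

  int main(int argc, char **argv)
  {
    MPI_Comm nodecomm;
    MPI_Win  win;
    double  *local;            /* this rank's slice of the node-shared array */
    MPI_Aint nlocal = 1000;    /* illustrative per-rank size */
    int      noderank;

    MPI_Init(&argc, &argv);

    /* Group the ranks that can share memory (one communicator per node). */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &nodecomm);
    MPI_Comm_rank(nodecomm, &noderank);

    /* Every rank contributes a segment to one contiguous node-shared window. */
    MPI_Win_allocate_shared(nlocal * sizeof(double), sizeof(double),
                            MPI_INFO_NULL, nodecomm, &local, &win);

    if (noderank == 0) {
      /* The node leader would pack and exchange halo data for *all* ranks
         on the node here.  That packing is serialized on this one core,
         which is the pessimization discussed above. */
    } else {
      /* The other ranks relax on their segments, reading neighbours'
         segments directly through the shared window (no locking, chaotic
         updates). */
    }

    MPI_Win_free(&win);
    MPI_Comm_free(&nodecomm);
    MPI_Finalize();
    return 0;
  }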

If you want shared-window vectors and matrices with coalesced messages,
you'll need to implement a new VecScatter.  I'm interested in this for
some architectures, but I'm not deluding myself into thinking the
purpose is anything but message coalescing with log(P) depth.
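
To make the coalescing idea concrete, here is a hypothetical plain-MPI
sketch (not an existing VecScatter implementation; counts and layout are
illustrative): each node's ranks gather their outgoing halo values onto a
node leader, which would then send one combined buffer per remote node
instead of one message per rank.

  /* Hypothetical sketch of node-level message coalescing.  Counts and
     destinations are illustrative only. */
  #include <mpi.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
    MPI_Comm nodecomm, leadercomm;
    int      noderank, nodesize, worldrank;
    int      nsend = 4;                 /* illustrative per-rank halo count */
    double   mine[4] = {0, 1, 2, 3};    /* this rank's outgoing halo values */
    double  *coalesced = NULL;
    int     *counts = NULL, *displs = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &worldrank);

    /* One communicator per node, plus a communicator of the node leaders. */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &nodecomm);
    MPI_Comm_rank(nodecomm, &noderank);
    MPI_Comm_size(nodecomm, &nodesize);
    MPI_Comm_split(MPI_COMM_WORLD, noderank == 0 ? 0 : MPI_UNDEFINED,
                   worldrank, &leadercomm);

    if (noderank == 0) {
      counts    = malloc(nodesize * sizeof(int));
      displs    = malloc(nodesize * sizeof(int));
      coalesced = malloc(nodesize * nsend * sizeof(double));
      for (int i = 0; i < nodesize; i++) { counts[i] = nsend; displs[i] = i * nsend; }
    }

    /* Pack everyone's outgoing halo into one buffer on the node leader. */
    MPI_Gatherv(mine, nsend, MPI_DOUBLE,
                coalesced, counts, displs, MPI_DOUBLE, 0, nodecomm);

    if (noderank == 0) {
      /* The leader would now post one send/receive pair per remote *node*
         on leadercomm, rather than each rank messaging its own neighbours;
         that is the message coalescing referred to above. */
      free(counts); free(displs); free(coalesced);
      MPI_Comm_free(&leadercomm);
    }

    MPI_Comm_free(&nodecomm);
    MPI_Finalize();
    return 0;
  }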