[petsc-dev] programming model for PETSc
Jed Brown
jedbrown at mcs.anl.gov
Thu Nov 24 17:29:51 CST 2011
On Thu, Nov 24, 2011 at 17:09, Matthew Knepley <knepley at gmail.com> wrote:
> One key operation which has not yet been discussed is the "push forward"
> of a mapping as Dmitry put it. Here is a scenario:
> We understand a matching of mesh points between processes.
>
Can I assume that ghosting processes know (owner rank, offset) for each
point?
> In order to construct a ghost communication (VecScatter), I
> need to compose the mapping between mesh points and the mapping of mesh
> points to data.
>
Can the mapping of mesh points to data be known through an array
local_offset (of length num_owned_points) or, in the case of non-uniform
data sizes, through an array of (local_offset, size) pairs?
In that case, the fact that remote processes have (owner rank, offset)
means that I can broadcast (local_offset, size) with purely local setup
(this is the first primitive which can be implemented using MPI_Get()).
Now the ghosting procs know (owner rank, offset, size) for each point, so
again, with purely local setup, the scatter is defined (the forward scatter
is implemented with MPI_Get(); the reverse uses MPI_Accumulate(), which does a
specified reduction). Note that the actual communication has no
user-visible packing because it uses an MPI_Datatype.
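A minimal sketch of that one-sided broadcast, assuming the owners expose an
array of (local_offset, size) pairs indexed by owned point and the ghosting
ranks already know (owner rank, index on the owner) per ghost point; the
names here are illustrative, not PETSc API:

#include <mpi.h>

typedef struct { int local_offset; int size; } PointMeta;

/* Owners expose owned_meta[] in a window; each ghosting rank fetches the
   (local_offset, size) pair for every ghost point with MPI_Get().
   ghost_remote_index[i] is the ghost point's index in the owner's owned
   ordering (the "offset" above). The only setup is the collective window
   creation, so no two-sided handshaking or communication graph is needed. */
void FetchGhostMeta(MPI_Comm comm, PointMeta *owned_meta, int num_owned_points,
                    const int *ghost_owner, const int *ghost_remote_index,
                    int num_ghost_points, PointMeta *ghost_meta)
{
  MPI_Win win;
  MPI_Win_create(owned_meta, (MPI_Aint)num_owned_points*sizeof(PointMeta),
                 sizeof(PointMeta), MPI_INFO_NULL, comm, &win);
  MPI_Win_fence(0, win);
  for (int i = 0; i < num_ghost_points; i++) {
    /* Fetch two ints from slot ghost_remote_index[i] on the owner. */
    MPI_Get(&ghost_meta[i], 2, MPI_INT, ghost_owner[i],
            (MPI_Aint)ghost_remote_index[i], 2, MPI_INT, win);
  }
  MPI_Win_fence(0, win);
  MPI_Win_free(&win);
}

The reverse direction (adding ghost contributions back into owned storage)
would use MPI_Accumulate() with the desired MPI_Op over the same window.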
Alternatively, given (owner rank, offset, size), we can literally call
VecScatterCreate() after just an MPI_Scan() (which is logarithmic) and local
setup, but VecScatterCreate() does lots of unnecessary setup to build the
two-way representation.
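As a sketch, assuming each rank has already formed the global index of every
ghost point (the owner's starting global index, obtainable from an MPI_Exscan()
over local sizes and shipped along with the offset in the broadcast above,
plus the remote offset), the VecScatter construction is just the following
(error checking omitted, names illustrative):

#include <petscvec.h>

PetscErrorCode BuildGhostScatter(Vec owned, PetscInt n_ghost,
                                 const PetscInt ghost_global[],
                                 Vec *local_ghost, VecScatter *scatter)
{
  IS is_from, is_to;

  PetscFunctionBeginUser;
  /* Sequential vector that receives the ghost values, in local order. */
  VecCreateSeq(PETSC_COMM_SELF, n_ghost, local_ghost);
  /* Global locations in the distributed vector ... */
  ISCreateGeneral(PETSC_COMM_SELF, n_ghost, ghost_global,
                  PETSC_COPY_VALUES, &is_from);
  /* ... scattered into contiguous local slots. */
  ISCreateStride(PETSC_COMM_SELF, n_ghost, 0, 1, &is_to);
  VecScatterCreate(owned, is_from, *local_ghost, is_to, scatter);
  ISDestroy(&is_from);
  ISDestroy(&is_to);
  PetscFunctionReturn(0);
}

VecScatterBegin/End with SCATTER_FORWARD then updates the ghosts, and
SCATTER_REVERSE with ADD_VALUES accumulates back; the point above is that
VecScatterCreate() builds the full two-way representation even when only this
much is needed.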
Redistributing a mesh after partitioning is slightly more demanding. First,
senders are enumerated using a fetch-and-add initiated by the sending process;
this has the side effect of counting the number of nodes that will be in the
new partition and informs the senders of the offsets at which to deposit those
nodes. Then we broadcast the (rank, offset) of each node on the sender to the
receiver. Then we send connectivity using a non-uniform broadcast. Now, before
moving data, we can reorder locally, inform the sender of the new ordering,
and then move all the data.
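A sketch of the fetch-and-add step, using the MPI-3 MPI_Fetch_and_op() as a
stand-in for the atomic fetch-and-add primitive (names illustrative):

#include <mpi.h>

/* Each sender atomically adds the number of nodes headed to a destination
   into a counter owned by that destination; the returned (old) value is the
   offset at which to deposit those nodes. After the epoch, the counter on
   each rank holds the size of its new partition. */
void EnumerateSenders(MPI_Comm comm, int n_dest, const int dest_rank[],
                      const int dest_count[], int dest_offset[],
                      int *new_local_count)
{
  int     counter = 0;               /* nodes this rank will receive */
  MPI_Win win;

  MPI_Win_create(&counter, sizeof(int), sizeof(int), MPI_INFO_NULL, comm, &win);
  MPI_Win_fence(0, win);
  for (int d = 0; d < n_dest; d++) {
    MPI_Fetch_and_op(&dest_count[d], &dest_offset[d], MPI_INT,
                     dest_rank[d], 0, MPI_SUM, win);
  }
  MPI_Win_fence(0, win);
  *new_local_count = counter;
  MPI_Win_free(&win);
}

The subsequent (rank, offset) broadcast and the non-uniform connectivity
broadcast are then the same one-sided pattern as the ghost setup above.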
I think these are all simple loops and single calls to the communication
primitives.