[petsc-dev] programming model for PETSc

Matthew Knepley knepley at gmail.com
Thu Nov 24 17:45:00 CST 2011


On Thu, Nov 24, 2011 at 5:29 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:

> On Thu, Nov 24, 2011 at 17:09, Matthew Knepley <knepley at gmail.com> wrote:
>
>> One key operation which has not yet been discussed is the "push forward"
>> of a mapping as Dmitry put it. Here is a scenario:
>> We understand a matching of mesh points between processes.
>>
>
> Can I assume that ghosting processes know (owner rank, offset) for each
> point?
>

This is not my current model, but it is a possible one. I do not "ghost"
the layout, only the points. I cannot think of a reason you could not
distribute this information, but it would not remove the need for a "fuse"
function for the data that gets communicated; that is, you cannot directly
insert what you get from other processes.
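A minimal sketch of what such a "fuse" callback could look like, assuming
double-valued data; the interface and names here are hypothetical, not an
existing PETSc API:

    /* Hypothetical "fuse" callback: values arriving from other processes are
     * combined with the local value rather than inserted directly. */
    typedef void (*FuseFn)(void *local, const void *incoming, int n);

    static void fuse_add(void *local, const void *incoming, int n)
    {
      double       *l  = (double *)local;
      const double *in = (const double *)incoming;
      for (int i = 0; i < n; i++) l[i] += in[i];  /* e.g. sum contributions */
    }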


>
>
>> In order to construct a ghost communication (VecScatter), I
>> need to compose the mapping between mesh points and the mapping of mesh
>> points to data.
>>
>
> Can the mapping of mesh points to data be known through an array (of
> length num_owned_points) local_offset or, in the case of non-uniform data
> size, as the array (local_offset, size)?
>

Yes.


> In that case, the fact that remote processes have (owner rank, offset)
> means that I can broadcast (local_offset, size) with purely local setup
> (this is the first primitive which can be implemented using MPI_Get()).
>

Okay, I would really like this coded up. We can do a 1-D mesh of
Lagrangian elements just to show me what is going on.
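For concreteness, a minimal sketch of the one-sided layout broadcast
described above, assuming each ghosting process already knows
(owner rank, owner index) for its ghost points. The names PointLayout and
ghost_layout are illustrative only, not PETSc API:

    #include <mpi.h>

    typedef struct { int offset, size; } PointLayout;

    void ghost_layout(MPI_Comm comm,
                      int nowned, const PointLayout *owned, /* my points */
                      int nghost, const int *owner_rank,    /* per ghost */
                      const int *owner_index,               /* per ghost */
                      PointLayout *ghost)                   /* output    */
    {
      MPI_Win win;
      /* Expose my (offset,size) table; this is the only setup the owner does. */
      MPI_Win_create((void *)owned, (MPI_Aint)nowned * sizeof(PointLayout),
                     sizeof(PointLayout), MPI_INFO_NULL, comm, &win);
      MPI_Win_fence(0, win);
      for (int i = 0; i < nghost; i++) {
        /* Pull (offset,size) straight out of the owner's window. */
        MPI_Get(&ghost[i], sizeof(PointLayout), MPI_BYTE,
                owner_rank[i], (MPI_Aint)owner_index[i],
                sizeof(PointLayout), MPI_BYTE, win);
      }
      MPI_Win_fence(0, win);
      MPI_Win_free(&win);
    }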


> Now the ghosting procs know (owner rank, offset, size) for each point, so
> again, with purely local setup, the scatter is defined (the forward scatter
> is implemented with MPI_Get(), the reverse uses MPI_Accumulate(), which does
> a specified reduction). Note that the actual communication has no
> user-visible packing because it uses an MPI_Datatype.
>

I am still not sure how much this buys you since you had to communicate
that offset info somehow.
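For reference, a minimal sketch of the forward/reverse ghost exchange being
described, assuming a window created over the owner's data array with
displacement unit sizeof(double). A real implementation would fuse the
per-point calls into a single MPI_Datatype so there is no user-visible
packing; the helper name ghost_update is hypothetical, not a PETSc routine:

    #include <mpi.h>

    void ghost_update(MPI_Win win, int nghost,
                      const int *owner_rank,  /* per ghost point              */
                      const int *offset,      /* into the owner's data array  */
                      const int *size,        /* dofs on that point           */
                      double *ghostdata,      /* packed local ghost storage   */
                      int forward)            /* 1: owner->ghost, 0: add back */
    {
      int loc = 0;
      MPI_Win_fence(0, win);
      for (int i = 0; i < nghost; i++) {
        if (forward) {                        /* pull owner values            */
          MPI_Get(&ghostdata[loc], size[i], MPI_DOUBLE, owner_rank[i],
                  (MPI_Aint)offset[i], size[i], MPI_DOUBLE, win);
        } else {                              /* "fuse" by summing into owner */
          MPI_Accumulate(&ghostdata[loc], size[i], MPI_DOUBLE, owner_rank[i],
                         (MPI_Aint)offset[i], size[i], MPI_DOUBLE, MPI_SUM, win);
        }
        loc += size[i];
      }
      MPI_Win_fence(0, win);
    }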


> Alternatively, given (owner rank, offset, size), we can literally call
> VecScatterCreate() after just an MPI_Scan(), which is logarithmic, and
> local setup, but VecScatterCreate() does lots of unnecessary setup to build
> the two-way representation.
>
>
> Redistributing a mesh after partitioning is slightly more demanding.
> First, senders are enumerated using a fetch-and-add initiated by the
> sending process which has the side-effect of counting the number of nodes
> that will be in the new partition and informing the senders of the offsets
> at which to deposit those nodes. Then we broadcast the (rank, offset) of
> each node on the sender to the receiver. Then we send connectivity using a
> non-uniform broadcast. Now, before moving data, we can reorder locally,
> inform the sender of the new ordering, and then move all the data.
>

Great. This is exactly what I hate about this crap. It always seems
specially coded for the problem. I think we can use exactly the primitive
above to do your non-uniform broadcast step. Mesh topology is just another
function over the mesh points.
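A minimal sketch of the fetch-and-add enumeration step in the redistribution
description above, assuming a window over a single int counter (initialized
to zero) on every rank and the MPI-3 MPI_Fetch_and_op; after all senders have
reserved their ranges, the counter on each rank holds the number of incoming
points. The helper reserve_slots is hypothetical:

    #include <mpi.h>

    /* Reserve `count` slots on rank `dest`; returns the offset at which this
     * process should deposit its points. */
    int reserve_slots(MPI_Win counter_win, int dest, int count)
    {
      int offset;
      MPI_Win_lock(MPI_LOCK_SHARED, dest, 0, counter_win);
      MPI_Fetch_and_op(&count, &offset, MPI_INT, dest, 0, MPI_SUM, counter_win);
      MPI_Win_unlock(dest, counter_win);
      return offset;
    }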

    Matt


> I think these are all simple loops and single calls to the communication
> primitives.
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener