[petsc-users] creation of parallel dmplex from a partitioned mesh

Matthew Knepley knepley at gmail.com
Thu Aug 13 08:54:26 CDT 2020


On Thu, Aug 13, 2020 at 9:38 AM Cameron Smith <smithc11 at rpi.edu> wrote:

> Hello,
>
> We have a partitioned mesh from which we want to create a DMPlex that
> preserves our existing distribution of elements (i.e., the assignment of
> elements to processes) and of vertices 'interior' to a process (i.e.,
> vertices not on the inter-process boundary).
>
> We were trying to use DMPlexCreateFromCellListParallelPetsc() or
> DMPlexBuildFromCellListParallel() and found that the vertex ownership
> (roots in the returned Vertex SF) appears to be sequentially assigned to
> processes based on global vertex id.  In general, this will not match
> our mesh distribution.  As we understand it, to subsequently set vertex
> coordinates (or other vertex data) we would have to use a star
> forest (SF) communication API to send data to the correct process. Is
> that correct?
>
> Alternatively, if we create a dmplex object from the elements that exist
> on each process using DMCreateFromCellList(), and then create a SF from
> mesh vertices on inter-process boundaries (using the mapping from local
> to global vertex ids provided by our mesh library), could we then
> associate the dmplex objects with the SF?  Is it as simple as calling
> DMSetPointSF()?
>

Yes. If you have all the distribution information, this is the easiest
thing to do.
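
A rough sketch of that call sequence, assuming each rank already has its
local cell connectivity (in local vertex numbering) and local vertex
coordinates from your existing partition, and that the per-rank
DMPlexBuildFromCellList()/DMPlexBuildCoordinatesFromCellList() helpers are
available in your PETSc version. The variable names are placeholders, and
pointSF is the star forest described further down:

  DM             dm;
  PetscSF        pointSF;    /* built from your ownership data, see below */
  PetscErrorCode ierr;

  ierr = DMPlexCreate(PETSC_COMM_WORLD, &dm);CHKERRQ(ierr);
  ierr = DMSetDimension(dm, dim);CHKERRQ(ierr);
  /* local topology only: localCells[] uses this rank's local vertex numbers */
  ierr = DMPlexBuildFromCellList(dm, numLocalCells, numLocalVertices,
                                 numCorners, localCells);CHKERRQ(ierr);
  ierr = DMPlexBuildCoordinatesFromCellList(dm, spaceDim,
                                            localVertexCoords);CHKERRQ(ierr);
  /* stitch the per-rank pieces into a parallel Plex */
  ierr = DMSetPointSF(dm, pointSF);CHKERRQ(ierr);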


> If manually defining the PointSF is a way forward, we would like some
> help understanding its definition; i.e., which entities become roots and
> which become leaves.  In DMPlexBuildFromCellListParallel()
>

Short explanation of SF:

SF stands for Star-Forest. It is a star graph because you have a single
root that points to multiple leaves. It is a forest because you have
several of these stars. We use this construct in many places in PETSc, and
where it is used determines the semantics of the indices.

The DMPlex point SF is an SF in which root indices are "owned" mesh points
and leaf indices are "ghost" mesh
points. You can take any set of local Plexes and add an SF to make them a
parallel Plex.

The SF is constructed with one-sided data. Locally, each process specifies
two things:

  1) The root space: The set of indices [0, Nr) which refers to possible
roots on this process. For the pointSF, this is [0, Np) where Np is the
number of local mesh points.

  2) The leaves: Each leaf is a pair (local mesh point lp, remote mesh
point rp) which says that local mesh point lp is a "ghost" of remote point
rp. The remote point is given by (rank r, local mesh point rlp), where rlp
is the local mesh point number on process r.

With this, the Plex will automatically create all the other structures it
needs.
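
In code, a minimal sketch of building the pointSF from that information
(ghostLocal/ownerRank/ownerLocal are hypothetical names for the ownership
data coming out of your mesh library):

  /* Np            = number of local mesh points on this rank
     nGhost        = number of local points that are ghosts of remote points
     ghostLocal[i] = local point number lp of the i-th ghost
     ownerRank[i]  = rank r owning that point
     ownerLocal[i] = its local point number rlp on rank r                 */
  PetscSF        sf;
  PetscInt      *ilocal;
  PetscSFNode   *iremote;
  PetscInt       i;
  PetscErrorCode ierr;

  ierr = PetscSFCreate(PetscObjectComm((PetscObject) dm), &sf);CHKERRQ(ierr);
  ierr = PetscMalloc1(nGhost, &ilocal);CHKERRQ(ierr);
  ierr = PetscMalloc1(nGhost, &iremote);CHKERRQ(ierr);
  for (i = 0; i < nGhost; ++i) {
    ilocal[i]        = ghostLocal[i];  /* leaf: local "ghost" point lp    */
    iremote[i].rank  = ownerRank[i];   /* remote point: owning rank r ... */
    iremote[i].index = ownerLocal[i];  /* ... and its local number rlp    */
  }
  /* root space is [0, Np); the SF takes ownership of the two arrays */
  ierr = PetscSFSetGraph(sf, Np, nGhost, ilocal, PETSC_OWN_POINTER,
                         iremote, PETSC_OWN_POINTER);CHKERRQ(ierr);
  ierr = DMSetPointSF(dm, sf);CHKERRQ(ierr);
  ierr = PetscSFDestroy(&sf);CHKERRQ(ierr);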


>
> https://gitlab.com/petsc/petsc/-/blob/753428fdb0644bc4cb7be6429ce8776c05405d40/src/dm/impls/plex/plexcreate.c#L2875-2899
>
> the PointSF appears to contain roots for elements and vertices and
> leaves for owned vertices on the inter-process boundary.  Is that correct?
>

No, the leaves are ghost vertices. They point back to the owner.
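
For example, in a hypothetical two-rank case: a vertex on the
inter-process boundary owned by rank 0, where it is local point 7, and
appearing on rank 1 as local point 3, shows up in the pointSF as

  rank 0:  no leaf for this vertex (the owner only provides the root)
  rank 1:  one leaf with ilocal = 3 and iremote = (rank 0, index 7)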

  Thanks,

     Matt


> Thank-you,
> Cameron
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/