[petsc-dev] Redistribution of a DMPlex onto a new communicator

Matthew Knepley knepley at gmail.com
Wed Aug 7 07:52:48 CDT 2019


On Wed, Aug 7, 2019 at 7:13 AM Lawrence Mitchell via petsc-dev
<petsc-dev at mcs.anl.gov> wrote:

> Dear petsc-dev,
>
> I would like to run with a geometric multigrid hierarchy built out of
> DMPlex objects. On the coarse grids, I would like to reduce the size of the
> communicator being used. Since I build the hierarchy by regularly refining
> the coarse grid problem, this means I need a way of redistributing a DMPlex
> object from commA to commB.
>
> (I note that DMRefine has signature DMRefine(oldDM, newcomm, newDM), but
> as far as I can tell no concrete implementation of DMRefine has ever taken
> any notice of newcomm, and the newDM always ends up on the same
> communicator as the oldDM).
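>
> For illustration, a refinement step today looks roughly like this; whatever
> is passed for the comm argument, the refined DM comes back on the oldDM's
> communicator (sketch only, error checking omitted):
>
>   DM fineDM;
>
>   /* newcomm is accepted but, as far as I can tell, ignored: fineDM ends
>      up on PetscObjectComm((PetscObject) oldDM) */
>   DMRefine(oldDM, newcomm, &fineDM);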
>
> OK, so I think what I want to use is DMPlexDistribute. This does
> many-to-many redistribution of DMPlex objects, but has a rather constrained
> interface:
>
> DMPlexDistribute(oldDM, overlap, *migrationSF, *newDM)
>
> It works by taking the oldDM, assuming that the communicator of that thing
> defines the number of partitions I want, partitioning the mesh dual graph,
> and migrating onto a newDM, with the same number of processes as the oldDM.
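>
> For reference, a typical call with the existing interface looks roughly
> like this (sketch only, error checking omitted):
>
>   DM      dmDist = NULL;
>   PetscSF migrationSF = NULL;
>
>   /* Partition over the ranks of oldDM's communicator and migrate;
>      overlap = 0 gives a non-overlapping distribution */
>   DMPlexDistribute(oldDM, 0, &migrationSF, &dmDist);
>   if (dmDist) {               /* NULL if no distribution took place */
>     DMDestroy(&oldDM);
>     oldDM = dmDist;
>   }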
>
> So what are my options if I want to distribute from oldDM onto newDM where
> the communicators of the oldDM and the putative target newDM /don't/ match?
>
> Given my use case above, right now I can assume that
> MPI_Group_translate_ranks(oldcomm, newcomm) will work (although I can think
> of future use cases where it doesn't).
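>
> Something like the following is what I have in mind for that translation;
> a rank of oldcomm with no counterpart in newcomm would come back as
> MPI_UNDEFINED (sketch only, error checking omitted):
>
>   MPI_Group oldGroup, newGroup;
>   int       oldSize, i;
>   int      *oldRanks, *newRanks;
>
>   MPI_Comm_group(oldcomm, &oldGroup);
>   MPI_Comm_group(newcomm, &newGroup);
>   MPI_Comm_size(oldcomm, &oldSize);
>   PetscMalloc2(oldSize, &oldRanks, oldSize, &newRanks);
>   for (i = 0; i < oldSize; i++) oldRanks[i] = i;
>   MPI_Group_translate_ranks(oldGroup, oldSize, oldRanks, newGroup, newRanks);
>   /* newRanks[i] is the rank in newcomm of rank i in oldcomm */
>   PetscFree2(oldRanks, newRanks);
>   MPI_Group_free(&oldGroup);
>   MPI_Group_free(&newGroup);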
>
> In an ideal world, I think the interface for DMPlexDistribute should be
> something like:
>
> DMPlexDistribute(oldDM, overlap, newComm, *migrationSF, *newDM)
>
> where oldDM->comm and newComm are possibly disjoint sets of ranks, and the
> call is collective over the union of the groups of the two communicators.
>
> This would work by then asking the partitioner to construct a partition
> based on the size of newComm (rather than oldDM->comm) and doing all of the
> migration.
>
> I started to look at this, and swiftly threw my hands in the air, because
> the implicit assumption of the communicators matching is everywhere.
>
> So my stopgap solution (given that I know oldDM->comm \subset newComm)
> would be to take my oldDM and move it onto newComm before calling the
> existing DMPlexDistribute.
>
> Does this sound like a reasonable approach? Effectively I need to create
> "empty" DM objects on all of the participating ranks in newComm that are
> not in oldComm: what information do they need to contain to align with
> oldDM?
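>
> Concretely, I imagine something along these lines (a sketch only: the
> names are made up, onOldComm would come from the rank translation above,
> and the copy of the local topology is elided):
>
>   DM       dmOnNew, dmDist;
>   PetscSF  migrationSF;
>   PetscInt dim = 0, pStart = 0, pEnd = 0;
>
>   if (onOldComm) {            /* this rank belongs to oldDM->comm */
>     DMGetDimension(oldDM, &dim);
>     DMPlexGetChart(oldDM, &pStart, &pEnd);
>   }
>   MPI_Allreduce(MPI_IN_PLACE, &dim, 1, MPIU_INT, MPI_MAX, newComm);
>
>   DMCreate(newComm, &dmOnNew);
>   DMSetType(dmOnNew, DMPLEX);
>   DMSetDimension(dmOnNew, dim);
>   DMPlexSetChart(dmOnNew, pStart, pEnd);  /* empty chart on the added ranks */
>   /* ...copy cone sizes, cones, orientations, and coordinates from oldDM
>      via DMPlexSetConeSize / DMSetUp / DMPlexSetCone / DMPlexSymmetrize /
>      DMPlexStratify, omitted here... */
>
>   /* The existing many-to-many machinery can then rebalance over newComm */
>   DMPlexDistribute(dmOnNew, 0, &migrationSF, &dmDist);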
>
>
> Does anyone else just have code for doing this lying around? Am I
> completely barking up the wrong tree?
>

That is how you do it. We are solidifying this pattern, as you can see from
Junchao's new example for pushing a Vec onto a subcomm.

I think the right way to do this would be to implement the hooks in
PCTELESCOPE for DMPlex. Dave and I have talked about this; it should be
exactly the same work as you propose above, but it would allow you to use
the command line, do this recursively, interact nicely with the solvers,
etc. I can help.
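
To give a flavour of what that buys you: with those hooks in place, reducing
the communicator for the coarse solve becomes a command-line exercise. The
exact option prefixes depend on how it gets wired into PCMG, so take this
only as a sketch:

  -pc_type mg
  -mg_coarse_pc_type telescope
  -mg_coarse_pc_telescope_reduction_factor 4
  -mg_coarse_telescope_pc_type lu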

  Thanks,

    Matt


> Thanks,
>
> Lawrence



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/