[petsc-dev] Redistribution of a DMPlex onto a new communicator
Matthew Knepley
knepley at gmail.com
Mon Aug 19 07:53:00 CDT 2019
On Wed, Aug 7, 2019 at 11:32 AM Lawrence Mitchell <wence at gmx.li> wrote:
> On Wed, 7 Aug 2019 at 13:52, Matthew Knepley <knepley at gmail.com> wrote:
> >
> > On Wed, Aug 7, 2019 at 7:13 AM Lawrence Mitchell via petsc-dev <
> petsc-dev at mcs.anl.gov> wrote:
> >>
> >> Dear petsc-dev,
> >>
> >> I would like to run with a geometric multigrid hierarchy built out of
> DMPlex objects. On the coarse grids, I would like to reduce the size of the
> communicator being used. Since I build the hierarchy by regularly refining
> the coarse grid problem, this means I need a way of redistributing a DMPlex
> object from commA to commB.
> >>
> >> (I note that DMRefine has signature DMRefine(oldDM, newcomm, newDM),
> but as far as I can tell no concrete implementation of DMRefine has ever
> taken any notice of newcomm, and the newDM always ends up on the same
> communicator as the oldDM).
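> Concretely, even when a different communicator is passed, the refined DM
> comes back on the input DM's communicator; a minimal sketch with
> illustrative names:
>
>   DM             dmFine;
>   PetscErrorCode ierr;
>   ierr = DMRefine(dm, newcomm, &dmFine);CHKERRQ(ierr);
>   /* dmFine ends up on PetscObjectComm((PetscObject)dm); as noted above,
>      newcomm is in practice ignored by the Plex implementations */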
> >>
> >> OK, so I think what I want to use is DMPlexDistribute. This does
> many-to-many redistribution of DMPlex objects, but has a rather constrained
> interface:
> >>
> >> DMPlexDistribute(oldDM, overlap, *migrationSF, *newDM)
> >>
> >> It works by taking the oldDM, assuming that the communicator of that
> thing defines the number of partitions I want, partitioning the mesh dual
> graph, and migrating onto a newDM, with the same number of processes as the
> oldDM.
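> For concreteness, the existing call is used roughly like this (names are
> illustrative, error handling abbreviated):
>
>   PetscSF        migrationSF;
>   DM             dmDist;
>   PetscErrorCode ierr;
>   ierr = DMPlexDistribute(dm, 0, &migrationSF, &dmDist);CHKERRQ(ierr);
>   if (dmDist) { /* NULL when nothing was distributed */
>     ierr = DMDestroy(&dm);CHKERRQ(ierr);
>     dm   = dmDist;
>   }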
> >>
> >> So what are my options if I want to distribute from oldDM onto newDM
> where the communicators of the oldDM and the putative target newDM /don't/
> match?
> >>
> >> Given my use case above, right now I can assume that
> MPI_Group_translate_ranks(oldcomm, newcomm) will work (although I can think
> of future use cases where it doesn't).
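> (The MPI call itself works on groups; the translation I mean, executed on
> the ranks of oldcomm, is roughly
>
>   MPI_Group oldgroup, newgroup;
>   int       oldrank, newrank;
>   MPI_Comm_group(oldcomm, &oldgroup);
>   MPI_Comm_group(newcomm, &newgroup);
>   MPI_Comm_rank(oldcomm, &oldrank);
>   /* newrank comes back as MPI_UNDEFINED if this rank is not in newcomm */
>   MPI_Group_translate_ranks(oldgroup, 1, &oldrank, newgroup, &newrank);
>   MPI_Group_free(&oldgroup);
>   MPI_Group_free(&newgroup);
>
> with oldcomm and newcomm as above.)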
> >>
> >> In an ideal world, I think the interface for DMPlexDistribute should be
> something like:
> >>
> >> DMPlexDistribute(oldDM, overlap, newComm, *migrationSF, *newDM)
> >>
> >> where oldDM->comm and newComm are possibly disjoint sets of ranks, and
> the call is collective over the union of the groups of the two
> communicators.
> >>
> >> This would work by then asking the partitioner to construct a partition
> based on the size of newComm (rather than oldDM->comm) and doing all of the
> migration.
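> As a C prototype, that hypothetical interface might read something like:
>
>   /* Hypothetical, not an existing PETSc function: collective over the
>      union of the groups of oldDM's communicator and newComm; presumably
>      ranks of newComm that are not in oldDM's communicator would pass
>      oldDM == NULL. */
>   PetscErrorCode DMPlexDistribute(DM oldDM, PetscInt overlap, MPI_Comm newComm,
>                                   PetscSF *migrationSF, DM *newDM);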
> >>
> >> I started to look at this, and swiftly threw my hands in the air,
> because the implicit assumption of the communicators matching is everywhere.
> >>
> >> So my stopgap solution (given that I know oldDM->comm \subset
> newComm) is to take my oldDM and move it onto newComm before calling the
> existing DMPlexDistribute.
> >>
> >> Does this sound like a reasonable approach? Effectively I need to
> create "empty" DM objects on all of the participating ranks in newComm that
> are not in oldComm: what information do they need to contain to align with
> oldDM?
> >>
> >>
> >> Does anyone else just have code for doing this lying around? Am I
> completely barking up the wrong tree?
> >
> >
> > That is how you do it. We are solidifying this pattern, as you can see
> from Junchao's new example for pushing a Vec onto a subcomm.
>
> OK, so I think I am getting there. Presently I am abusing
> DMPlexCreateFromDAG to migrate a DM on oldComm onto newComm, but this
> is very fragile. I attach what I have right now. You have to run it
> with PTSCOTCH, because parmetis refuses to partition graphs with no
> vertices on a process: again, this would be avoided if the partitioning
> were done on the source communicator with the number of partitions given
> by a target communicator.
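> (The partitioner can be forced to PTSCOTCH either with
> -petscpartitioner_type ptscotch on the command line or, roughly, in code,
> with dm the DM to be distributed:
>
>   PetscPartitioner part;
>   PetscErrorCode   ierr;
>   ierr = DMPlexGetPartitioner(dm, &part);CHKERRQ(ierr);
>   ierr = PetscPartitionerSetType(part, PETSCPARTITIONERPTSCOTCH);CHKERRQ(ierr);
>
> before DMPlexDistribute is called.)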
>
Sorry I am just getting to this. It's a great example. I was thinking of
just pushing this stuff into the library, but I had the following thought:
what if we reused DMClone() to stick things on another Comm, since we do
not actually want a copy? The internals would then have to contend with a
NULL impl, which might be a lot of trouble. I was going to try it out in a
branch. It seems more elegant to me.
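Today the signature is DMClone(DM dm, DM *newdm) and the clone always comes
back on dm's communicator, so the idea would need something along these
lines (purely a sketch, not an existing interface):

  /* Hypothetical: place the clone on another communicator.  Ranks of
     newcomm that are not in dm's communicator would pass dm == NULL and
     get back a DM whose impl is NULL until it is filled in. */
  PetscErrorCode DMCloneToComm(DM dm, MPI_Comm newcomm, DM *newdm);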
Thanks,
Matt
> This example builds a simple DMPlex on a subcommunicator (built with
> processes from COMM_WORLD where rank % 3 == 0), and then refines it,
> and redistributes it onto COMM_WORLD.
>
> Running with:
>
> $ mpiexec -n 4 ./dmplex-redist -input_dm_view -refined_dm_view
> -copied_dm_view -redistributed_dm_view
>
> Builds a box mesh on 1 rank, redistributes it onto 2, refines that DM,
> then expands it to live on 4 ranks, and subsequently redistributes
> that.
>
> DM Object: 2 MPI processes
>   type: plex
> DM_0x84000007_0 in 3 dimensions:
>   0-cells: 27 0
>   1-cells: 54 0
>   2-cells: 36 0
>   3-cells: 8 0
> Labels:
>   depth: 4 strata with value/size (0 (27), 1 (54), 2 (36), 3 (8))
>   Face Sets: 6 strata with value/size (6 (4), 5 (4), 3 (4), 4 (4), 1 (4), 2 (4))
>   marker: 1 strata with value/size (1 (72))
> DM Object: 2 MPI processes
>   type: plex
> DM_0x84000007_1 in 3 dimensions:
>   0-cells: 75 75
>   1-cells: 170 170
>   2-cells: 128 128
>   3-cells: 32 32
> Labels:
>   marker: 1 strata with value/size (1 (192))
>   Face Sets: 5 strata with value/size (1 (36), 3 (18), 4 (18), 5 (18), 6 (18))
>   depth: 4 strata with value/size (0 (75), 1 (170), 2 (128), 3 (32))
> DM Object: 4 MPI processes
>   type: plex
> DM_0xc4000003_0 in 3 dimensions:
>   0-cells: 75 0 0 75
>   1-cells: 170 0 0 170
>   2-cells: 128 0 0 128
>   3-cells: 32 0 0 32
> Labels:
>   depth: 4 strata with value/size (0 (75), 1 (170), 2 (128), 3 (32))
> Field PetscContainer_0xc4000003_1:
>   adjacency FEM
> DM Object: Parallel Mesh 4 MPI processes
>   type: plex
> Parallel Mesh in 3 dimensions:
>   0-cells: 45 45 45 45
>   1-cells: 96 96 96 96
>   2-cells: 68 68 68 68
>   3-cells: 16 16 16 16
> Labels:
>   depth: 4 strata with value/size (0 (45), 1 (96), 2 (68), 3 (16))
> Field PetscContainer_0xc4000003_2:
>   adjacency FEM
>
> I would ideally like some help in figuring out a better way of
> "expanding" the DM onto the new communicator (rather than using
> DMPlexCreateFromDAG as I do now).
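> (For the ranks that are new to the communicator, the stopgap boils down
> to handing them an empty plex before calling DMPlexDistribute; a rough
> sketch, not the attached code, with the dimension hard-coded to 3 as in
> the example above:
>
>   DM             dmEmpty;
>   PetscInt       numPoints[4] = {0, 0, 0, 0}; /* no points in any stratum */
>   PetscInt       coneSize[1]  = {0};          /* dummies, never read */
>   PetscInt       cones[1]     = {0};
>   PetscInt       ornts[1]     = {0};
>   PetscScalar    coords[1]    = {0.0};
>   PetscErrorCode ierr;
>   ierr = DMCreate(newcomm, &dmEmpty);CHKERRQ(ierr);
>   ierr = DMSetType(dmEmpty, DMPLEX);CHKERRQ(ierr);
>   ierr = DMSetDimension(dmEmpty, 3);CHKERRQ(ierr);
>   ierr = DMPlexCreateFromDAG(dmEmpty, 3, numPoints, coneSize, cones, ornts, coords);CHKERRQ(ierr);
>
> while the ranks that already hold the mesh rebuild it point-for-point
> with the same call.)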
>
> I think one wants to think about the interface for these
> redistribution functions in general. It seems that they want to
> broadly match MPI_Intercomm_create. So something like:
>
> DMPlexDistribute(oldDM, peerCommunicator, newCommunicator, overlap,
> *migrationSF, *newDM)
>
> This is collective over peerCommunicator and returns a newDM on
> newCommunicator, and a migrationSF on peerCommunicator.
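> In C, the MPI analogue and the hypothetical Plex counterpart would sit
> side by side as
>
>   int MPI_Intercomm_create(MPI_Comm local_comm, int local_leader,
>                            MPI_Comm peer_comm, int remote_leader,
>                            int tag, MPI_Comm *newintercomm);
>
>   /* Hypothetical: collective over peerCommunicator; newDM lives on
>      newCommunicator, migrationSF on peerCommunicator. */
>   PetscErrorCode DMPlexDistribute(DM oldDM, MPI_Comm peerCommunicator,
>                                   MPI_Comm newCommunicator, PetscInt overlap,
>                                   PetscSF *migrationSF, DM *newDM);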
>
> Note that I do not think it is safe to assume that PETSC_COMM_WORLD is
> always suitable as peerCommunicator.
>
> > I think the right way to do this would be to implement the hooks in
> PCTELESCOPE for DMPlex. Dave and I have talked about this and
> > it should be exactly the same work as you propose above, but it would
> allow you to use the command line, do this recursively, interact nicely
> > with the solvers, etc. I can help.
>
> The telescope side of things now exists (cf. 8d9f7141f511) to some
> degree. But to do that, one needs the redistribution.
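> (Once the redistribution exists, the telescope piece would presumably be
> driven from the command line along the lines of
>
>   $ mpiexec -n 4 ./ex -pc_type telescope -pc_telescope_reduction_factor 2 \
>       -telescope_pc_type mg
>
> with ./ex a placeholder for the actual solver executable and the
> reduction factor and inner solver chosen per problem.)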
>
> Cheers,
>
> Lawrence
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/