[petsc-dev] Redistribution of a DMPlex onto a new communicator

Lawrence Mitchell wence at gmx.li
Wed Aug 7 06:13:14 CDT 2019


Dear petsc-dev,

I would like to run with a geometric multigrid hierarchy built out of DMPlex objects. On the coarse grids, I would like to reduce the size of the communicator being used. Since I build the hierarchy by regularly refining the coarse grid problem, this means I need a way of redistributing a DMPlex object from commA to commB.

(I note that DMRefine has signature DMRefine(oldDM, newcomm, newDM), but as far as I can tell no concrete implementation of DMRefine has ever taken any notice of newcomm, and the newDM always ends up on the same communicator as the oldDM).
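For concreteness, here is a minimal sketch of the refinement loop I have in mind (the helper and its name are mine, not PETSc API); whatever communicator I pass to DMRefine, every level comes back on the coarse DM's communicator:

#include <petscdmplex.h>

/* Sketch only: build nlevels DMs by regular refinement.  In practice the
   comm argument to DMRefine is ignored, so all levels live on the same
   communicator as the coarse DM. */
static PetscErrorCode BuildHierarchy(DM coarse, PetscInt nlevels, DM levels[])
{
  PetscInt       l;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PetscObjectReference((PetscObject) coarse);CHKERRQ(ierr);
  levels[0] = coarse;
  for (l = 1; l < nlevels; ++l) {
    ierr = DMRefine(levels[l-1], PetscObjectComm((PetscObject) levels[l-1]), &levels[l]);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}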

OK, so I think what I want to use is DMPlexDistribute. This does many-to-many redistribution of DMPlex objects, but has a rather constrained interface:

DMPlexDistribute(oldDM, overlap, *migrationSF, *newDM)

It works by taking the oldDM, assuming that its communicator defines the number of partitions I want, partitioning the mesh dual graph, and migrating onto a newDM with the same number of processes as the oldDM.
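In code, the existing usage looks roughly like this (a sketch, my wrapper name):

#include <petscdmplex.h>

/* Sketch of the existing interface: the target number of partitions is
   implicitly the size of oldDM's communicator. */
static PetscErrorCode DistributeOnSameComm(DM oldDM, PetscInt overlap, DM *newDM)
{
  PetscSF        migrationSF = NULL;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMPlexDistribute(oldDM, overlap, &migrationSF, newDM);CHKERRQ(ierr);
  /* *newDM comes back NULL if no redistribution happened (e.g. one rank) */
  ierr = PetscSFDestroy(&migrationSF);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}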

So what are my options if I want to distribute from oldDM onto newDM where the communicators of the oldDM and the putative target newDM /don't/ match?

Given my use case above, right now I can assume that MPI_Group_translate_ranks(oldcomm, newcomm) will work (although I can think of future use cases where it doesn't).
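That is, something like the following, translating via the two groups; newranks[i] would come back MPI_UNDEFINED whenever a rank of oldcomm had no counterpart in newcomm, which is the future case I am not handling:

#include <mpi.h>

/* Sketch: map each rank of oldcomm to the corresponding rank of newcomm. */
static int TranslateRanks(MPI_Comm oldcomm, MPI_Comm newcomm, int n, const int oldranks[], int newranks[])
{
  MPI_Group oldgroup, newgroup;

  MPI_Comm_group(oldcomm, &oldgroup);
  MPI_Comm_group(newcomm, &newgroup);
  MPI_Group_translate_ranks(oldgroup, n, oldranks, newgroup, newranks);
  MPI_Group_free(&oldgroup);
  MPI_Group_free(&newgroup);
  return 0;
}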

In an ideal world, I think the interface for DMPlexDistribute should be something like:

DMPlexDistribute(oldDM, overlap, newComm, *migrationSF, *newDM)

where oldDM->comm and newComm are possibly disjoint sets of ranks, and the call is collective over the union of the groups of the two communicators.

This would work by asking the partitioner to construct a partition based on the size of newComm (rather than of oldDM->comm), and then doing all of the migration.
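Spelled out as a prototype (purely hypothetical, none of this exists in PETSc today):

/* Hypothetical prototype, not PETSc API: collective over the union of the
   groups of PetscObjectComm((PetscObject) oldDM) and newComm.  The
   partitioner would be asked for size(newComm) parts, and the migration SF
   would map points from ranks of oldDM's comm to ranks of newComm. */
PetscErrorCode DMPlexDistribute(DM oldDM, PetscInt overlap, MPI_Comm newComm,
                                PetscSF *migrationSF, DM *newDM);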

I started to look at this, and swiftly threw my hands in the air, because the implicit assumption of the communicators matching is everywhere.

So my stopgap solution (given that I know oldDM->comm \subset newComm) would be to take my oldDM and move it onto newComm before calling the existing DMPlexDistribute.

Does this sound like a reasonable approach? Effectively I need to create "empty" DM objects on all of the participating ranks in newComm that are not in oldComm: what information do they need to contain to align with oldDM?
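For the ranks of newComm that are not in oldComm, I imagine something like the sketch below (under my subset assumption; since DMPlexCreate is collective over newComm, the ranks of oldComm would make the matching call and fill in their local chart, cones, coordinates, labels, etc. from oldDM):

#include <petscdmplex.h>

/* Sketch: the "empty" local piece on a rank of newComm that holds no part
   of oldDM.  Whether matching dimension, coordinate dimension and an empty
   chart is enough for the subsequent DMPlexDistribute to do the right thing
   is exactly what I am unsure about. */
static PetscErrorCode CreateEmptyLocalPiece(MPI_Comm newComm, PetscInt dim, PetscInt cdim, DM *dm)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMPlexCreate(newComm, dm);CHKERRQ(ierr);
  ierr = DMSetDimension(*dm, dim);CHKERRQ(ierr);
  ierr = DMSetCoordinateDim(*dm, cdim);CHKERRQ(ierr);
  ierr = DMPlexSetChart(*dm, 0, 0);CHKERRQ(ierr);   /* no local points */
  ierr = DMSetUp(*dm);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}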


Does anyone else just have code for doing this lying around? Am I completely barking up the wrong tree?

Thanks,

Lawrence
