[petsc-dev] Redistribution of a DMPlex onto a new communicator

Lawrence Mitchell wence at gmx.li
Mon Aug 19 08:42:33 CDT 2019


On Mon, 19 Aug 2019 at 13:53, Matthew Knepley <knepley at gmail.com> wrote:

[...]

>> OK, so I think I am getting there. Presently I am abusing
>> DMPlexCreateFromDAG to migrate a DM on oldComm onto newComm, but this
>> is very fragile. I attach what I have right now. You have to run it
>> with PTSCOTCH, because ParMETIS refuses to partition graphs with no
>> vertices on a process: again, this would be avoided if the
>> partitioning were done on the source communicator with the number of
>> partitions given by the target communicator.
>
>
> Sorry I am just getting to this. It's a great example. I was thinking of just pushing this stuff into the library, but
> I had the following thought: what if we reused DMClone() to stick things on another Comm, since we do
> not actually want a copy? The internals would then have to contend with a NULL impl, which might be a lot
> of trouble. I was going to try it out in a branch. It seems more elegant to me.
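
As an aside on the ParMETIS limitation mentioned above: one workaround is to select PTSCOTCH explicitly on the partitioner before distributing. A minimal sketch (assuming an existing DMPlex `dm`; PETSc's error-checking conventions of this era):

```c
PetscErrorCode   ierr;
PetscPartitioner part;
DM               dmDist = NULL;

/* PTSCOTCH tolerates ranks that contribute zero vertices to the dual
   graph, which is exactly the situation when redistributing onto a
   communicator where some ranks start empty. */
ierr = DMPlexGetPartitioner(dm, &part);CHKERRQ(ierr);
ierr = PetscPartitionerSetType(part, PETSCPARTITIONERPTSCOTCH);CHKERRQ(ierr);
ierr = DMPlexDistribute(dm, 0, NULL, &dmDist);CHKERRQ(ierr);
if (dmDist) {
  ierr = DMDestroy(&dm);CHKERRQ(ierr);
  dm   = dmDist;
}
```

The same selection is available from the command line via `-petscpartitioner_type ptscotch`.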

If you want to start from some slightly more debugged code, use the
branch wence/feature/dmplex-distribute-onto-comm.

I think if the internals of DMDistribute are going to be refactored to
contend with a NULL implementation, then we should go the whole hog
and disconnect the number of target partitions from the communicator
of the to-be-distributed DM.
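
Concretely, that decoupling might look like the following skeleton. Note that `DMPlexDistributeOntoComm` is a hypothetical name invented here for illustration, not an existing PETSc function, and the migration step is only outlined:

```c
/* Hypothetical sketch: partition a DM living on its own (source)
 * communicator into as many parts as targetComm has ranks, then
 * migrate the result onto targetComm. Not an existing PETSc API. */
PetscErrorCode DMPlexDistributeOntoComm(DM dm, MPI_Comm targetComm, DM *dmNew)
{
  PetscPartitioner part;
  PetscMPIInt      nparts;
  PetscErrorCode   ierr;

  PetscFunctionBegin;
  /* 1. The number of partitions comes from the *target* communicator... */
  ierr = MPI_Comm_size(targetComm, &nparts);CHKERRQ(ierr);
  /* 2. ...but the partitioning itself runs on the source communicator,
        so no rank ever has to hand an empty graph to the partitioner. */
  ierr = DMPlexGetPartitioner(dm, &part);CHKERRQ(ierr);
  ierr = PetscPartitionerSetType(part, PETSCPARTITIONERPTSCOTCH);CHKERRQ(ierr);
  /* 3. Compute the nparts-way partition, build a migration SF mapping
        source points to target ranks, and invoke the migration
        machinery (cf. DMPlexMigrate) to land the mesh on targetComm.
        Omitted here; this is the part the refactoring would supply. */
  PetscFunctionReturn(0);
}
```
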

Lawrence

