[petsc-users] Agglomeration for Multigrid on Unstructured Meshes

Lawrence Mitchell wence at gmx.li
Tue Jun 2 04:15:52 CDT 2020



> On 2 Jun 2020, at 09:54, Matthew Knepley <knepley at gmail.com> wrote:
> 
> I almost agree. I still think we do not change Distribute(), since it is really convenient, but we do check sizes on input as you say.

If we only want Distribute(), we have to change its interface a bit, because right now only one communicator is involved.

So one could go to

DMPlexDistribute(DM old, PetscInt overlap, MPI_Comm commNew, PetscSF *sf, DM *new)

and commNew may be MPI_COMM_NULL, meaning we pick it up from old->comm.
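
To make that concrete, usage might look something like the sketch below. This is written against the *proposed* signature, so it of course does not compile against the current DMPlexDistribute(), and RedistributeOnto is just an invented helper name:

#include <petscdmplex.h>

/* Sketch only: this calls the signature proposed above, not the current
   DMPlexDistribute(). */
static PetscErrorCode RedistributeOnto(DM old, PetscInt overlap, MPI_Comm commNew, DM *dmNew)
{
  PetscSF        migrationSF;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* commNew == MPI_COMM_NULL would mean "reuse old->comm", i.e. recover the
     current behaviour. */
  ierr = DMPlexDistribute(old, overlap, commNew, &migrationSF, dmNew);CHKERRQ(ierr);
  if (migrationSF) {ierr = PetscSFDestroy(&migrationSF);CHKERRQ(ierr);}
  PetscFunctionReturn(0);
}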

> We either,
> 
>   1) Copy the DM to a larger comm with empty slots on input
> 
> or
> 
>   2) Copy the DM to a smaller comm eliminating empty slots on output
> 
> depending on P_in < P_out, or the reverse.
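
For case (2), building the smaller communicator could be as simple as an MPI_Comm_split. A rough sketch (hasPoints is an invented flag here; it would come from the partitioner's output and say whether this rank keeps any points):

#include <petscsys.h>

static PetscErrorCode BuildShrunkComm(MPI_Comm commOld, PetscBool hasPoints, MPI_Comm *commNew)
{
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MPI_Comm_rank(commOld, &rank);CHKERRQ(ierr);
  /* Ranks keeping points get colour 0; empty ranks pass MPI_UNDEFINED and get
     MPI_COMM_NULL back, so they drop out of commNew entirely. */
  ierr = MPI_Comm_split(commOld, hasPoints ? 0 : MPI_UNDEFINED, rank, commNew);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}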


So we now also need to decide on the semantics of the migrationSF. I guess it lives on commNew, but is collective over commNew \cap commOld. I can't immediately see whether this breaks anything.
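
(By commNew \cap commOld I mean, concretely, something like the following group intersection; again only a sketch, and the commNew == MPI_COMM_NULL branch is exactly the awkward part:)

#include <petscsys.h>

static PetscErrorCode IntersectComms(MPI_Comm commOld, MPI_Comm commNew, MPI_Comm *commBoth)
{
  MPI_Group      grpOld, grpNew = MPI_GROUP_EMPTY, grpBoth;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MPI_Comm_group(commOld, &grpOld);CHKERRQ(ierr);
  /* Ranks that are not in commNew hold MPI_COMM_NULL and fall back to the empty group. */
  if (commNew != MPI_COMM_NULL) {ierr = MPI_Comm_group(commNew, &grpNew);CHKERRQ(ierr);}
  ierr = MPI_Group_intersection(grpOld, grpNew, &grpBoth);CHKERRQ(ierr);
  /* Collective over commOld; ranks outside the intersection get MPI_COMM_NULL back. */
  ierr = MPI_Comm_create(commOld, grpBoth, commBoth);CHKERRQ(ierr);
  ierr = MPI_Group_free(&grpOld);CHKERRQ(ierr);
  if (grpNew  != MPI_GROUP_EMPTY) {ierr = MPI_Group_free(&grpNew);CHKERRQ(ierr);}
  if (grpBoth != MPI_GROUP_EMPTY) {ierr = MPI_Group_free(&grpBoth);CHKERRQ(ierr);}
  PetscFunctionReturn(0);
}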

Lawrence



