[petsc-users] Agglomeration for Multigrid on Unstructured Meshes

Matthew Knepley knepley at gmail.com
Tue Jun 2 05:13:43 CDT 2020


On Tue, Jun 2, 2020 at 5:15 AM Lawrence Mitchell <wence at gmx.li> wrote:

> > On 2 Jun 2020, at 09:54, Matthew Knepley <knepley at gmail.com> wrote:
> >
> > I almost agree. I still think we do not change Distribute(), since it is
> > really convenient, but we do check sizes on input as you say.
>
> If we only want Distribute(), we have to change it a bit, because right
> now there's only one communicator involved.
>
> So one could go to
>
> DMPlexDistribute(DM old, PetscInt overlap, MPI_Comm commNew, PetscSF *sf,
> DM *new)
>
> and commNew may be MPI_COMM_NULL, meaning we pick it up from old->comm.
>

This seems reasonable.
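
For concreteness, here is a minimal sketch of a call against that proposed
signature. This is not the current DMPlexDistribute() interface, and
RedistributeSketch() is just an illustrative wrapper name:

#include <petscdmplex.h>

/* Hypothetical: uses the signature proposed above, not the current API.
   commNew == MPI_COMM_NULL would mean "redistribute over old->comm",
   i.e. today's behaviour. The overlap of 0 is arbitrary. */
static PetscErrorCode RedistributeSketch(DM olddm, MPI_Comm commNew, DM *newdm)
{
  PetscSF        migrationSF = NULL;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMPlexDistribute(olddm, 0, commNew, &migrationSF, newdm);CHKERRQ(ierr);
  /* The interesting cases are commNew smaller or larger than old->comm;
     their exact calling semantics are what is being discussed here. */
  if (migrationSF) {ierr = PetscSFDestroy(&migrationSF);CHKERRQ(ierr);}
  PetscFunctionReturn(0);
}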


> > We either,
> >
> >   1) Copy the DM to a larger comm with empty slots on input
> >
> > or
> >
> >   2) Copy the DM to a smaller comm eliminating empty slots on output
> >
> > depending on whether P_in < P_out or the reverse.
>
> So we now also need to decide on the semantics of the migrationSF. I guess
> it lives on commNew, but is collective over commNew ∩ commOld. I can't
> think through whether this breaks anything.
>

I don't think 0-size participants should break anything.
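
As a rough sketch of what a 0-size participant looks like from the caller's
side (using the existing DMPlexGetHeightStratum(); newdm, rank, and ierr are
placeholder names): a rank that ends up owning no cells still makes the
collective calls on the new DM's communicator, but otherwise nothing special
happens:

PetscInt cStart, cEnd;

ierr = DMPlexGetHeightStratum(newdm, 0, &cStart, &cEnd);CHKERRQ(ierr);
if (cEnd == cStart) {
  /* This rank owns no cells after redistribution: a 0-size participant. */
  ierr = PetscPrintf(PETSC_COMM_SELF, "[%d] no local cells\n", rank);CHKERRQ(ierr);
}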

   Matt


> Lawrence
>
-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/