[petsc-users] Agglomeration for Multigrid on Unstructured Meshes

Matthew Knepley knepley at gmail.com
Tue Jun 2 03:35:57 CDT 2020


On Tue, Jun 2, 2020 at 4:25 AM Lawrence Mitchell <wencel at gmail.com> wrote:

> Hi Dave,
>
> > On 2 Jun 2020, at 05:43, Dave May <dave.mayhem23 at gmail.com> wrote:
> >
> >
> >
> > On Tue 2. Jun 2020 at 03:30, Matthew Knepley <knepley at gmail.com> wrote:
> > On Mon, Jun 1, 2020 at 7:03 PM Danyang Su <danyang.su at gmail.com> wrote:
> > Thanks Jed for the quick response. Yes I am asking about the
> repartitioning of coarse grids in geometric multigrid for unstructured
> mesh. I am happy with AMG. Thanks for letting me know.
> >
> > All the pieces are there, we just have not had users asking for this,
> and it will take some work to put together.
> >
> > Matt - I created a branch for you and Lawrence last year which added
> > full support for PLEX within Telescope. This implementation was not a
> > fully automated agglomeration strategy - it utilized the partition
> > associated with the DM returned from DMGetCoarseDM. Hence the job of
> > building the distributed coarse hierarchy was left to the user.
> >
> > I’m pretty sure that code got merged into master as the branch also
> > contained several bug fixes for Telescope. Or am I mistaken?
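(For reference, the user-side setup described here amounts to something like
the sketch below. Only DMSetCoarseDM()/DMGetCoarseDM() are the relevant
existing PETSc calls; the helper name and the "coarse_" options prefix are
illustrative assumptions, not part of the thread.)

#include <petscdmplex.h>

/* Rough sketch: build the coarse Plex yourself, with whatever distribution
   you want, and attach it so that DMGetCoarseDM() returns it to the solver.
   The helper name and the "coarse_" prefix are illustrative. */
static PetscErrorCode AttachUserCoarseDM(DM dmFine)
{
  MPI_Comm comm;
  DM       dmCoarse;

  PetscFunctionBeginUser;
  PetscCall(PetscObjectGetComm((PetscObject)dmFine, &comm));
  /* Create the coarse mesh, e.g. configured via -coarse_dm_plex_* options;
     its partition is the one the preconditioner will reuse. */
  PetscCall(DMCreate(comm, &dmCoarse));
  PetscCall(DMSetType(dmCoarse, DMPLEX));
  PetscCall(DMSetOptionsPrefix(dmCoarse, "coarse_"));
  PetscCall(DMSetFromOptions(dmCoarse));
  /* Hand it to the fine DM; dmFine keeps its own reference. */
  PetscCall(DMSetCoarseDM(dmFine, dmCoarse));
  PetscCall(DMDestroy(&dmCoarse));
  PetscFunctionReturn(0);
}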
>
> I think you're right. I didn't manage to get the redistribution of the
> DMPlex object done last summer (it's bubbling up again).
>
> As I see it, for redistributed geometric multigrid on plexes, the missing
> piece is a function:
>
> DMPlexRedistributeOntoComm(DM old, MPI_Comm comm, DM *new)
>
> I went down a rabbit hole of trying to do this, since I actually think
> this should replace the current interface to DMPlexDistribute, which is
>
> DMPlexDistribute(DM old, PetscInt overlap, PetscSF *pointDistSF, DM *new)
>
> Where the new DM comes out on the same communicator as the old DM, just
> with a different partition.
>
> This has lots of follow-on consequences; for example, one can't easily
> load on P processes and then compute on Q.
>
> Unfortunately, collectiveness over MPI_Comm(old) is baked into the
> redistribution routines everywhere, and I didn't manage to finish things.
>

Yes, I remember thinking this out. I believe the conclusion was that
redistribution should happen on the large comm, with some fraction of
processes getting no cells. Then at the end we call one new function which
copies that DM onto the smaller comm.
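Concretely, the second step of that plan might look like the sketch below,
assuming the re-partitioning on the large comm has already happened.
DMPlexCopyOntoComm() is a placeholder name for the "one new function"; it
does not exist in PETSc, and the helper name is made up for illustration.

#include <petscdmplex.h>

/* Sketch: dmLarge has already been re-partitioned on its original
   communicator so that some ranks own no cells; build the smaller
   communicator from the ranks that do, then copy the DM onto it. */
static PetscErrorCode MoveOntoSmallerComm(DM dmLarge, DM *dmSmall)
{
  MPI_Comm    largeComm, smallComm;
  PetscMPIInt rank;
  PetscInt    cStart, cEnd;
  int         color;

  PetscFunctionBeginUser;
  PetscCall(PetscObjectGetComm((PetscObject)dmLarge, &largeComm));
  PetscCallMPI(MPI_Comm_rank(largeComm, &rank));

  /* Ranks that received no cells drop out of the smaller communicator. */
  PetscCall(DMPlexGetHeightStratum(dmLarge, 0, &cStart, &cEnd));
  color = (cEnd > cStart) ? 0 : MPI_UNDEFINED;
  PetscCallMPI(MPI_Comm_split(largeComm, color, rank, &smallComm));

  *dmSmall = NULL;
  if (smallComm != MPI_COMM_NULL) {
    /* Placeholder for the missing piece: copy the already-partitioned DM
       onto smallComm.  smallComm would be freed once no longer needed. */
    /* PetscCall(DMPlexCopyOntoComm(dmLarge, smallComm, dmSmall)); */
  }
  PetscFunctionReturn(0);
}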

   Matt


> Lawrence



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/