<div dir="ltr"><div dir="ltr">On Tue, Jun 2, 2020 at 4:25 AM Lawrence Mitchell <<a href="mailto:wencel@gmail.com">wencel@gmail.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi Dave,<br>
<br>
> On 2 Jun 2020, at 05:43, Dave May <<a href="mailto:dave.mayhem23@gmail.com" target="_blank">dave.mayhem23@gmail.com</a>> wrote:<br>
> <br>
> <br>
> <br>
> On Tue 2. Jun 2020 at 03:30, Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br>
> On Mon, Jun 1, 2020 at 7:03 PM Danyang Su <<a href="mailto:danyang.su@gmail.com" target="_blank">danyang.su@gmail.com</a>> wrote:<br>
> Thanks Jed for the quick response. Yes I am asking about the repartitioning of coarse grids in geometric multigrid for unstructured mesh. I am happy with AMG. Thanks for letting me know.<br>
> <br>
> All the pieces are there, we just have not had users asking for this, and it will take some work to put together.<br>
> <br>
> Matt - I created a branch for you and Lawrence last year which added full support for PLEX within Telescope. This implementation was not a fully automated algmoeration strategy - it utilized the partition associated with the DM returned from DMGetCoarseDM. Hence the job of building the distributed coarse hierarchy was let to the user.<br>
> <br>
> I’m pretty sure that code got merged into master as the branch also contained several bug mixes for Telescope. Or am I mistaken?<br>

I think you're right. I didn't manage to get the redistribution of the DMPlex object done last summer (it's bubbling up again).

As I see it, for redistributed geometric multigrid on plexes, the missing piece is a function:

DMPlexRedistributeOntoComm(DM old, MPI_Comm comm, DM *new)
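
For concreteness, here is a minimal sketch of how I would expect such a routine to be called when shrinking a coarse grid onto fewer ranks. DMPlexRedistributeOntoComm() is the proposed routine, not an existing PETSc function, and the MPI_Comm_split colouring is just one arbitrary way of choosing the reduced rank set:

    /* Sketch only: DMPlexRedistributeOntoComm() is the proposed routine above,
       not an existing PETSc function; everything else is existing API
       (error checking omitted). */
    static PetscErrorCode CoarsenOntoFewerRanks(DM dmCoarse, PetscMPIInt stride, DM *dmSmall)
    {
      MPI_Comm    comm = PetscObjectComm((PetscObject) dmCoarse), subcomm;
      PetscMPIInt rank;

      MPI_Comm_rank(comm, &rank);
      /* Keep every stride-th rank; excluded ranks get subcomm == MPI_COMM_NULL. */
      MPI_Comm_split(comm, rank % stride ? MPI_UNDEFINED : 0, rank, &subcomm);
      /* The proposed call: collective on comm, returning the mesh on subcomm
         (presumably with *dmSmall == NULL on the excluded ranks). */
      DMPlexRedistributeOntoComm(dmCoarse, subcomm, dmSmall);
      if (subcomm != MPI_COMM_NULL) MPI_Comm_free(&subcomm);
      return 0;
    }

The result could then be attached with DMSetCoarseDM() so that Telescope picks it up via DMGetCoarseDM(), as in Dave's branch.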

I went down a rabbit hole of trying to do this, since I actually think this should replace the current interface to DMPlexDistribute, which is

DMPlexDistribute(DM old, PetscInt overlap, PetscSF *pointDistSF, DM *new)

where the new DM comes out on the same communicator as the old DM, just with a different partition.
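
For reference, the usual calling pattern for that existing interface looks like this (dm is a plex that already lives on some communicator, and the overlap requested here is 0):

    DM      dmDist = NULL;
    PetscSF sf     = NULL;

    /* Repartition dm across the *same* communicator it already lives on. */
    DMPlexDistribute(dm, 0, &sf, &dmDist);
    if (dmDist) {              /* NULL means no distribution was performed */
      DMDestroy(&dm);
      dm = dmDist;
    }
    if (sf) PetscSFDestroy(&sf);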

This same-communicator design has lots of follow-on consequences; for example, one can't easily load on P processes and then compute on Q.

Unfortunately, collectiveness over MPI_Comm(old) is baked into the redistribution routines everywhere, and I didn't manage to finish things.

Yes, I remember thinking this out. I believe the conclusion was that redistribution should happen on the large comm, with some fraction of processes getting no cells. Then at the end we call one new function which copies that DM onto the smaller comm.

  Matt
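
A rough sketch of that two-step flow, assuming the coarse plex dmCoarse and a Q-rank subcomm already exist; the shell-partitioner setup is elided, and DMPlexCopyToComm() is a made-up name for the final "copy onto the smaller comm" routine that does not exist yet:

    PetscPartitioner part;
    DM               dmTail, dmSmall;

    /* Step 1 (existing machinery): repartition on the original communicator,
       arranging for only ranks 0..Q-1 to receive cells, e.g. via a shell
       partitioner whose partition leaves ranks Q..P-1 empty. */
    DMPlexGetPartitioner(dmCoarse, &part);
    PetscPartitionerSetType(part, PETSCPARTITIONERSHELL);
    /* ... PetscPartitionerShellSetPartition(part, ...) with such a partition ... */
    DMPlexDistribute(dmCoarse, 0, NULL, &dmTail);

    /* Step 2 (the missing routine): copy the resulting DM, unchanged, onto a
       communicator containing only the non-empty ranks. */
    DMPlexCopyToComm(dmTail, subcomm, &dmSmall);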
Lawrence

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/