<div><br></div><div><br><div class="gmail_quote"><div dir="ltr">On Tue 2. Jun 2020 at 03:30, Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr">On Mon, Jun 1, 2020 at 7:03 PM Danyang Su <<a href="mailto:danyang.su@gmail.com" target="_blank">danyang.su@gmail.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Thanks Jed for the quick response. Yes, I am asking about the repartitioning of coarse grids in geometric multigrid for unstructured meshes. I am happy with AMG. Thanks for letting me know.<br></blockquote><div><br></div><div>All the pieces are there, we just have not had users asking for this, and it will take some work to put together.</div></div></div></blockquote><div dir="auto"><br></div><div dir="auto">Matt - I created a branch for you and Lawrence last year which added full support for PLEX within Telescope. This implementation was not a fully automated agglomeration strategy: it used the partition associated with the DM returned from DMGetCoarseDM. Hence the job of building the distributed coarse hierarchy was left to the user.</div><div dir="auto"><br></div><div dir="auto">I’m pretty sure that code got merged into master as the branch also contained several bug fixes for Telescope. 
Or am I mistaken?</div><div dir="auto"><br></div><div dir="auto">Cheers</div><div dir="auto">Dave</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div></div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div></div></div><div dir="ltr"><div class="gmail_quote"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Danyang<br>
<br>
On 2020-06-01, 1:47 PM, "Jed Brown" <<a href="mailto:jed@jedbrown.org" target="_blank">jed@jedbrown.org</a>> wrote:<br>
<br>
I assume you're talking about repartitioning of coarse grids in<br>
geometric multigrid -- that hasn't been implemented.<br>
<br>
<a href="https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCTELESCOPE.html" rel="noreferrer" target="_blank">https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCTELESCOPE.html</a><br>
<br>
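[Editor's note: the manual page above indicates Telescope composes with other preconditioners through PETSc's options prefixes, so a plausible (untested) way to move only the coarsest level of a geometric hierarchy onto a reduced communicator would look roughly like the following; the level count and reduction factor are illustrative, and the option names are assumed from PETSc's standard prefix composition:]

```text
# Geometric multigrid with an illustrative number of levels
-pc_type mg
-pc_mg_levels 4
# Wrap the coarsest-level solve in Telescope, gathering onto fewer ranks
-mg_coarse_pc_type telescope
-mg_coarse_pc_telescope_reduction_factor 4
# Solver applied on the reduced communicator
-mg_coarse_telescope_pc_type lu
```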
But you can use an algebraic multigrid that does similar communicator<br>
reduction, and can be applied to the original global problem or just on<br>
the "coarse" problem of an initial geometric hierarchy.<br>
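[Editor's note: as a sketch of the two usages Jed describes, and assuming current PETSc option names, GAMG's built-in process reduction can be invoked along these lines; the equation limit is illustrative:]

```text
# (a) Algebraic multigrid on the full problem; GAMG reduces the number of
#     active ranks on coarse levels once a rank holds too few equations
-pc_type gamg
-pc_gamg_process_eq_limit 50

# (b) Geometric multigrid with AMG applied to its "coarse" problem
-pc_type mg
-mg_coarse_pc_type gamg
```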
<br>
Danyang Su <<a href="mailto:danyang.su@gmail.com" target="_blank">danyang.su@gmail.com</a>> writes:<br>
<br>
> Dear All,<br>
><br>
> <br>
><br>
> I recalled there was a presentation ‘Extreme-scale multigrid components with PETSc’ talking about agglomeration in parallel multigrid, with a future plan to extend support to unstructured meshes. Is this under development, or yet to be added? <br>
><br>
> <br>
><br>
> Thanks and regards,<br>
><br>
> <br>
><br>
> Danyang<br>
<br>
<br>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>
</blockquote></div></div>