<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Thu, Mar 22, 2018 at 11:45 AM, Saurabh Chawdhary <span dir="ltr"><<a href="mailto:schawdhary@anl.gov" target="_blank">schawdhary@anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Thank you Tobin. I was actually more interested in repartitioning after the mesh has been dynamically changed (say, after refinement in certain portions of the mesh).<br></blockquote><div><br></div><div>That kind of repartitioning in p4est is handled by just splitting the Morton order into equal pieces, not by a graph partitioner.</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On 03/22/2018 07:10 AM, Tobin Isaac wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
On March 21, 2018 11:26:35 AM MDT, Saurabh Chawdhary <<a href="mailto:schawdhary@anl.gov" target="_blank">schawdhary@anl.gov</a>> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hello team,<br>
<br>
I haven't used PETSc since DMFOREST was released, but I have a question<br>
regarding repartitioning of a DMFOREST mesh. How is the repartitioning of<br>
<br>
mesh over processors done after some mesh refinement is carried out? Is<br>
<br>
it done by calling a p4est function, or is the partitioning done in PETSc?<br>
<br>
I was using p4est (natively) a couple of years ago and I remember that<br>
when I tried to partition the grid I could only use the serial METIS<br>
and not ParMETIS with p4est (using the native function<br>
/p4est_connectivity_reorder/). So what I want to know is whether<br>
DMFOREST repartitioning is done in parallel or in serial?<br>
</blockquote>
You can use the following workflow:<br>
<br>
- Create or read in an unstructured hexahedral mesh as a DMPlex<br>
- Use ParMETIS to repartition that: there is a ParMETIS implementation of PetscPartitioner, which creates index sets and communication patterns for DMPlexDistribute.<br>
- Convert the DMPlex to DMP4est or DMP8est.<br>
<br>
This does not avoid the fundamental limitations of p4est: the distributed mesh will be redundantly serialized behind the scenes, and the coarse-mesh ordering derived from ParMETIS will be static for the life of the forest mesh.<br>
</blockquote>
I am not sure I understand how the distributed mesh is redundantly serialized in p4est. Do you mean that the partitioning is done serially?<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
Thank you.<br>
<br>
Saurabh<br>
</blockquote></blockquote>
<br>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.caam.rice.edu/~mk51/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div>
</div></div>
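<div>The DMPlex → ParMETIS → DMP8est workflow outlined in the quoted reply might look roughly like this in code. This is a minimal sketch, assuming a PETSc build configured with p4est and ParMETIS; error handling is abbreviated, and exact option names and the <code>PetscCall</code> macro vary across PETSc versions:</div>

```c
/* Sketch: unstructured hex mesh as DMPlex, ParMETIS partitioning,
   then conversion to a DMP8est forest. Requires PETSc with
   --download-p4est --download-parmetis. */
#include <petscdmplex.h>
#include <petscdmforest.h>

int main(int argc, char **argv)
{
  DM               plex, dist = NULL, forest;
  PetscPartitioner part;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* 1. Create (or read in) an unstructured hexahedral mesh as a DMPlex;
     e.g. run with -dm_plex_dim 3 -dm_plex_box_faces 4,4,4. */
  PetscCall(DMCreate(PETSC_COMM_WORLD, &plex));
  PetscCall(DMSetType(plex, DMPLEX));
  PetscCall(DMSetFromOptions(plex));

  /* 2. Select the ParMETIS implementation of PetscPartitioner and
     distribute: this builds the index sets and communication
     patterns used by DMPlexDistribute. */
  PetscCall(DMPlexGetPartitioner(plex, &part));
  PetscCall(PetscPartitionerSetType(part, PETSCPARTITIONERPARMETIS));
  PetscCall(DMPlexDistribute(plex, 0, NULL, &dist));
  if (dist) {
    PetscCall(DMDestroy(&plex));
    plex = dist;
  }

  /* 3. Convert the distributed DMPlex to a DMP8est forest; the
     coarse-mesh ordering is fixed for the life of the forest. */
  PetscCall(DMConvert(plex, DMP8EST, &forest));

  PetscCall(DMDestroy(&plex));
  PetscCall(DMDestroy(&forest));
  PetscCall(PetscFinalize());
  return 0;
}
```

<div>After the conversion, subsequent refinement-driven repartitioning happens inside p4est by splitting the Morton order, as noted at the top of the thread, not by calling ParMETIS again.</div>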