<div dir="ltr"><div dir="ltr">On Sun, Jun 13, 2021 at 7:48 PM Adrian Croucher <<a href="mailto:a.croucher@auckland.ac.nz">a.croucher@auckland.ac.nz</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<p>hi, thanks for the suggestions!<br>
</p>
<div>On 12/06/21 12:19 am, Matthew Knepley
wrote:<br>
</div>
<blockquote type="cite">
However, using overlap = 1 put in a bunch of new faces. We do not
care about the ones on the process boundary. They will be
<div dir="ltr">
<div class="gmail_quote">
<div>handled by the other process. We do care about faces
between two ghost cells, since they will be a false
positive. Luckily, these</div>
<div>are labeled by the "ghost" label.<br>
</div>
</div>
</div>
</blockquote>
>
> I think we *do* have to care about the faces on the process boundary
> (assuming by that you mean the faces with support size 1 on the outside
> edge of the partition ghost cells), because they can be ghosts of flux
> faces on other processes. If we ignore them they will have no dofs on
> the current process, but they could have dofs on another one. That
> inconsistency is the problem that causes the error when you create the
> global section.
>
> Also, I don't think we want to exclude faces between partition ghost
> cells. Those faces will definitely be ghosts of a flux face on another
> process. Again, if we exclude them the dofs will not be consistent
> across processes. We might not actually compute a flux on those faces
> locally, but there has to be a space for them anyway.
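
In PETSc terms, "a space for them" would mean giving every labeled flux face a dof in the local section, ghost copies included, so that the derived global section is consistent across processes. A minimal sketch of that idea, assuming a hypothetical "flux" label with value 1, one dof per face, and current PETSc error-checking macros (none of this is code from the thread):

#include <petscdmplex.h>

/* Hypothetical helper: one dof on every face carrying the (assumed) "flux"
   label, so ghost copies of remote flux faces also get section space. */
static PetscErrorCode SetupFluxSection(DM dm)
{
  PetscSection    s;
  IS              fluxIS;
  const PetscInt *faces;
  PetscInt        pStart, pEnd, nFlux = 0, i;

  PetscFunctionBegin;
  PetscCall(PetscSectionCreate(PetscObjectComm((PetscObject)dm), &s));
  PetscCall(DMPlexGetChart(dm, &pStart, &pEnd));
  PetscCall(PetscSectionSetChart(s, pStart, pEnd));
  PetscCall(DMGetStratumIS(dm, "flux", 1, &fluxIS));
  if (fluxIS) {
    PetscCall(ISGetLocalSize(fluxIS, &nFlux));
    PetscCall(ISGetIndices(fluxIS, &faces));
    for (i = 0; i < nFlux; i++) PetscCall(PetscSectionSetDof(s, faces[i], 1));
    PetscCall(ISRestoreIndices(fluxIS, &faces));
    PetscCall(ISDestroy(&fluxIS));
  }
  PetscCall(PetscSectionSetUp(s));
  PetscCall(DMSetLocalSection(dm, s)); /* the global section is then built from this */
  PetscCall(PetscSectionDestroy(&s));
  PetscFunctionReturn(PETSC_SUCCESS);
}
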
> I have found a simple algorithm now which seems to work in all my test
> cases, though I still wonder if there is a better way. The algorithm is:
>
> 1) label the global boundary faces before distribution (all faces with
> support size 1), using DMPlexMarkBoundaryFaces()
>
> 2) label any open boundary faces (on which Dirichlet BCs are applied) -
> I didn't mention these before, but they need to be included as flux
> faces
>
> 3) after distribution, loop over all faces on current process:
>
>    if face on open boundary: label face as a flux face
>    else:
>      if face not on global boundary: label face as a flux face
>
> Here the only test of support size is in step 1), which ensures that
> the dofs are always consistent between processes. Ultimately, the
> support size on the local process is not really relevant or reliable.
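
A sketch of those three steps, assuming hypothetical label names ("global boundary", "open", "flux"), overlap-1 distribution, and an "open" label populated elsewhere by the application; this is an illustration of the recipe, not code from the thread:

#include <petscdmplex.h>

/* Hypothetical sketch of the serial-then-parallel labelling recipe above. */
static PetscErrorCode LabelFluxFaces(DM dm, DM *dmDist)
{
  DM       d;
  DMLabel  gbLabel, openLabel, fluxLabel;
  PetscInt fStart, fEnd, f;

  PetscFunctionBegin;
  /* 1) before distribution: mark all faces with support size 1 */
  PetscCall(DMCreateLabel(dm, "global boundary"));
  PetscCall(DMGetLabel(dm, "global boundary", &gbLabel));
  PetscCall(DMPlexMarkBoundaryFaces(dm, 1, gbLabel));
  /* 2) the "open" (Dirichlet) label is assumed to be set up already */
  /* distribute with one layer of overlap; labels travel with the mesh */
  PetscCall(DMPlexDistribute(dm, 1, NULL, dmDist));
  d = *dmDist ? *dmDist : dm; /* dmDist is NULL on a single process */
  PetscCall(DMGetLabel(d, "global boundary", &gbLabel));
  PetscCall(DMGetLabel(d, "open", &openLabel));
  PetscCall(DMCreateLabel(d, "flux"));
  PetscCall(DMGetLabel(d, "flux", &fluxLabel));
  /* 3) after distribution: loop over all local faces */
  PetscCall(DMPlexGetHeightStratum(d, 1, &fStart, &fEnd));
  for (f = fStart; f < fEnd; f++) {
    PetscBool open = PETSC_FALSE, gb = PETSC_FALSE;

    if (openLabel) PetscCall(DMLabelHasPoint(openLabel, f, &open));
    PetscCall(DMLabelHasPoint(gbLabel, f, &gb));
    if (open || !gb) PetscCall(DMLabelSetValue(fluxLabel, f, 1));
  }
  PetscFunctionReturn(PETSC_SUCCESS);
}
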
> The main thing I don't like about this algorithm is that it relies on
> looping over all mesh faces in serial during step 1). I would rather
> not add to the serial part of the code, and would prefer it if it could
> all be done in parallel, after distribution. Maybe I'm worrying
> unnecessarily, and just getting the support size of each face is cheap
> enough that this won't be a significant issue?

I see that I have misunderstood something. I thought you wanted to put
dofs on only the faces that you compute. If everyone only puts dofs on
the computed faces, then I thought you would get a globally consistent
Section. However, now I see that you want unknowns on all local faces
that anyone computes, so that you can get those values from the other
process.

Okay, I think it is not so hard to get what you want in parallel. There
are only two kinds of faces with supportSize == 1:

  a) Faces on the global boundary

  b) Faces which are "shared"

It is the second set that is somewhat confusing, because PetscSF does
not have 2-sided information by default. However, it can make it. There
is a two-step check for "shared":

  1) Is the face in the PetscSF? Here you just check for it in the
     sorted "locals" array from PetscSFGetGraph().

  2) Is the face ghosted on another process? You can get this from
     PetscSFGetRootRanks().

I just wrote a small function to check for "shared" points (a sketch of
such a check is below). After that, I think you can just run:

  After distribution, loop over all faces on the current process:

    if face on open boundary, label face as flux face

    else:

      if face has supportSize != 1 or (supportSize == 1 && shared),
      label face as flux face

  Thanks,

     Matt
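
The small function itself is not shown in the thread; below is a guess at what such a shared-point check might look like. Here the two-sided information is queried with PetscSFGetLeafRanks(), which lists the local roots that other ranks ghost; the helper name and the closing loop (with the same assumed labels as earlier) are illustrative only:

#include <petscdmplex.h>

/* Hypothetical helper (not the function from this thread): is local point p
   "shared", i.e. either a ghost of a remote point (a leaf of the point SF)
   or a local point that some other rank ghosts (a referenced root)? */
static PetscErrorCode PointIsShared(DM dm, PetscInt p, PetscBool *shared)
{
  PetscSF            sf;
  const PetscInt    *ilocal, *ioffset, *irootloc;
  const PetscMPIInt *iranks;
  PetscInt           nroots, nleaves, niranks, loc = -1, i;

  PetscFunctionBegin;
  *shared = PETSC_FALSE;
  PetscCall(DMGetPointSF(dm, &sf));
  PetscCall(PetscSFGetGraph(sf, &nroots, &nleaves, &ilocal, NULL));
  if (nroots < 0) PetscFunctionReturn(PETSC_SUCCESS); /* no parallel graph */
  PetscCall(PetscSFSetUp(sf)); /* builds the two-sided information */
  /* 1) leaf check: p appears in the sorted local leaf array */
  if (ilocal) PetscCall(PetscFindInt(p, nleaves, ilocal, &loc));
  else if (p < nleaves) loc = p; /* NULL ilocal means leaves are 0..nleaves-1 */
  if (loc >= 0) {
    *shared = PETSC_TRUE;
    PetscFunctionReturn(PETSC_SUCCESS);
  }
  /* 2) root check: some other rank has a leaf pointing at p */
  PetscCall(PetscSFGetLeafRanks(sf, &niranks, &iranks, &ioffset, &irootloc));
  for (i = 0; i < (niranks > 0 ? ioffset[niranks] : 0); i++) {
    if (irootloc[i] == p) { *shared = PETSC_TRUE; break; }
  }
  PetscFunctionReturn(PETSC_SUCCESS);
}

With that, the final labelling loop might read:

  PetscCall(DMPlexGetHeightStratum(dm, 1, &fStart, &fEnd));
  for (f = fStart; f < fEnd; f++) {
    PetscInt  suppSize;
    PetscBool open = PETSC_FALSE, shared;

    if (openLabel) PetscCall(DMLabelHasPoint(openLabel, f, &open));
    PetscCall(DMPlexGetSupportSize(dm, f, &suppSize));
    PetscCall(PointIsShared(dm, f, &shared));
    if (open || suppSize != 1 || shared) PetscCall(DMLabelSetValue(fluxLabel, f, 1));
  }
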
> - Adrian
<pre cols="72">--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: <a href="mailto:a.croucher@auckland.ac.nz" target="_blank">a.croucher@auckland.ac.nz</a>
tel: +64 (0)9 923 4611</pre>

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/