[petsc-users] flux vector
Adrian Croucher
a.croucher at auckland.ac.nz
Sun Jun 13 18:47:57 CDT 2021
Hi, thanks for the suggestions!
On 12/06/21 12:19 am, Matthew Knepley wrote:
> However, using overlap = 1 put in a bunch of new faces. We do not care
> about the ones on the process boundary. They will be
> handled by the other process. We do care about faces between two ghost
> cells, since they will be a false positive. Luckily, these
> are labeled by the "ghost" label.
I think we *do* have to care about the faces on the process boundary
(assuming by that you mean the faces with support size 1 on the outside
edge of the partition ghost cells), because they can be ghosts of flux
faces on other processes. If we ignore them they will have no dofs on
the current process, but they could have dofs on another one. That
inconsistency is the problem that causes the error when you create the
global section.
Also, I don't think we want to exclude faces between partition ghost
cells. Those faces will definitely be ghosts of a flux face on another
process. Again, if we exclude them the dofs will not be consistent
across processes. We might not actually compute a flux on those faces
locally, but there has to be a space for them anyway.
I have found a simple algorithm now which seems to work in all my test
cases, though I still wonder if there is a better way. The algorithm is:
1) label the global boundary faces before distribution (all faces with
support size 1), using DMPlexMarkBoundaryFaces()
2) label any open boundary faces (on which Dirichlet BCs are applied) -
I didn't mention these before, but they need to be included as flux faces
3) after distribution, loop over all faces on the current process:
   if face is on open boundary: label face as a flux face
   else:
      if face is not on global boundary: label face as a flux face
Here the only test of support size is in step 1), which ensures that the
dofs are always consistent between processes. Ultimately, the support
size on the local process is not really relevant or reliable.
The main thing I don't like about this algorithm is that it relies on
looping over all mesh faces in serial during step 1). I would rather not
add to the serial part of the code and would prefer if it could all be
done in parallel, after distribution. Maybe I'm worrying unnecessarily,
and just getting the support size of each face is cheap enough that this
won't be a significant issue?
- Adrian
--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.croucher at auckland.ac.nz
tel: +64 (0)9 923 4611