[petsc-users] flux vector

Matthew Knepley knepley at gmail.com
Mon Jun 14 04:54:12 CDT 2021


On Sun, Jun 13, 2021 at 7:48 PM Adrian Croucher <a.croucher at auckland.ac.nz>
wrote:

> hi, thanks for the suggestions!
> On 12/06/21 12:19 am, Matthew Knepley wrote:
>
>> However, using overlap = 1 put in a bunch of new faces. We do not care
>> about the ones on the process boundary. They will be handled by the other
>> process. We do care about faces between two ghost cells, since they will
>> be a false positive. Luckily, these are labeled by the "ghost" label.
>
> I think we *do* have to care about the faces on the process boundary
> (assuming by that you mean the faces with support size 1 on the outside
> edge of the partition ghost cells), because they can be ghosts of flux
> faces on other processes. If we ignore them they will have no dofs on the
> current process, but they could have dofs on another one. That
> inconsistency is the problem that causes the error when you create the
> global section.
>
> Also, I don't think we want to exclude faces between partition ghost
> cells. Those faces will definitely be ghosts of a flux face on another
> process. Again, if we exclude them the dofs will not be consistent across
> processes. We might not actually compute a flux on those faces locally, but
> there has to be a space for them anyway.
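>
> To make that concrete, here is a minimal sketch (error checking omitted,
> and the label name "flux" is just a placeholder) of building the local
> Section over the labeled faces; the inconsistency shows up when the global
> section is created from it:
>
>   PetscSection s, gs;
>   DMLabel      flux;
>   PetscInt     fStart, fEnd, f, val;
>
>   DMGetLabel(dm, "flux", &flux);                 /* hypothetical label name */
>   DMPlexGetHeightStratum(dm, 1, &fStart, &fEnd); /* faces are the height-1 points */
>   PetscSectionCreate(PetscObjectComm((PetscObject) dm), &s);
>   PetscSectionSetChart(s, fStart, fEnd);
>   for (f = fStart; f < fEnd; ++f) {
>     DMLabelGetValue(flux, f, &val);              /* val == -1 if f is unlabeled */
>     if (val >= 0) PetscSectionSetDof(s, f, 1);   /* one flux dof per labeled face */
>   }
>   PetscSectionSetUp(s);
>   DMSetLocalSection(dm, s);
>   /* If a face has dofs on one process but not on the process that owns it,
>      creating the global section is where the error appears. */
>   DMGetGlobalSection(dm, &gs);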
>
> I have found a simple algorithm now which seems to work in all my test
> cases, though I still wonder if there is a better way. The algorithm is:
>
> 1) label the global boundary faces before distribution (all faces with
> support size 1), using DMPlexMarkBoundaryFaces()
>
> 2) label any open boundary faces (on which Dirichlet BCs are applied) - I
> didn't mention these before, but they need to be included as flux faces
>
> 3) after distribution, loop over all faces on current process:
>
>     if face on open boundary: label face as a flux face
>
>     else:
>
>       if face not on global boundary: label face as a flux face
>
> Here the only test of support size is in step 1), which ensures that the
> dofs are always consistent between processes. Ultimately, the support size
> on the local process is not really relevant or reliable.
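>
> As a sketch of steps 1)-3) (the label names "boundary", "open" and "flux"
> are placeholders, error checking is omitted, and labels are carried along
> automatically by DMPlexDistribute()):
>
>   DMLabel  boundary, open, flux;
>   PetscInt fStart, fEnd, f, bval, oval;
>
>   /* Step 1, before distribution: mark all faces with support size 1 */
>   DMCreateLabel(dm, "boundary");
>   DMGetLabel(dm, "boundary", &boundary);
>   DMPlexMarkBoundaryFaces(dm, 1, boundary);
>   /* Step 2: mark Dirichlet faces in "open" (application-specific, not shown) */
>
>   /* Step 3, after distribution: loop over the height-1 points (faces) */
>   DMCreateLabel(dm, "flux");
>   DMGetLabel(dm, "flux", &flux);
>   DMGetLabel(dm, "open", &open);
>   DMPlexGetHeightStratum(dm, 1, &fStart, &fEnd);
>   for (f = fStart; f < fEnd; ++f) {
>     DMLabelGetValue(open, f, &oval);       /* -1 if f is not in the label */
>     DMLabelGetValue(boundary, f, &bval);
>     /* flux face if on an open boundary, or not on the global boundary */
>     if (oval >= 0 || bval < 0) DMLabelSetValue(flux, f, 1);
>   }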
>
> The main thing I don't like about this algorithm is that it relies on
> looping over all mesh faces in serial during step 1). I would rather not
> add to the serial part of the code and would prefer if it could all be done
> in parallel, after distribution. Maybe I'm worrying unnecessarily, and just
> getting the support size of each face is cheap enough that this won't be a
> significant issue?
>
I see that I have misunderstood something. I thought you wanted to put dofs
only on the faces that you compute, and that if everyone put dofs only on
the computed faces, you would get a globally consistent Section. However,
now I see that you want unknowns on all local faces that anyone computes,
so that you can get those values from the other process.

Okay, I think it is not so hard to get what you want in parallel. There are
only two kinds of faces with supportSize == 1:

  a) Faces on the global boundary

  b) Faces which are "shared"

It is the second set that is somewhat confusing, because PetscSF does not
have 2-sided information by default. However, it can compute it.
There is a two-step check for "shared":

  1) Is the face in the PetscSF? Here you just check for it in the sorted
"locals" array from PetscSFGetGraph()

  2) Is the face ghosted on another process? You can get this from
PetscSFGetRootRanks().

I just wrote a small function to check for "shared" points (a sketch of the
idea follows the algorithm below). After that, I think you can just run:

  1) After distribution, loop over all faces on the current process:

     If face is on an open boundary, label it as a flux face

     else:

       if face has supportSize != 1, or supportSize == 1 and is shared,
label it as a flux face
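
Here is a sketch of what that could look like (the helper name IsFaceShared
is hypothetical, not the actual function; error checking is omitted; it uses
PetscSFComputeDegreeBegin/End() as one way to get the 2-sided root
information, and the "open" and "flux" labels from the sketch in Adrian's
algorithm above):

  static PetscErrorCode IsFaceShared(PetscSF sf, const PetscInt rootdegree[], PetscInt face, PetscBool *shared)
  {
    const PetscInt *locals;
    PetscInt        Nl, idx = -1;

    PetscFunctionBeginUser;
    *shared = PETSC_FALSE;
    /* 1) Face is a leaf of the point SF: it is ghosted here from another rank.
          For DMPlex point SFs the "locals" array is sorted, so binary search
          works; locals == NULL means the leaves are just the points 0..Nl-1 */
    PetscSFGetGraph(sf, NULL, &Nl, &locals, NULL);
    if (locals) PetscFindInt(face, Nl, locals, &idx);
    else if (face < Nl) idx = face;
    if (idx >= 0) {*shared = PETSC_TRUE; PetscFunctionReturn(0);}
    /* 2) Face is a root referenced by a leaf somewhere: it is ghosted on
          another rank */
    if (rootdegree[face] > 0) *shared = PETSC_TRUE;
    PetscFunctionReturn(0);
  }

and then the loop itself (dm, open, and flux as above):

  PetscSF         sf;
  const PetscInt *rootdegree;
  PetscInt        fStart, fEnd, f, supportSize, oval;
  PetscBool       shared;

  DMGetPointSF(dm, &sf);
  PetscSFComputeDegreeBegin(sf, &rootdegree);
  PetscSFComputeDegreeEnd(sf, &rootdegree);
  DMPlexGetHeightStratum(dm, 1, &fStart, &fEnd);
  for (f = fStart; f < fEnd; ++f) {
    DMLabelGetValue(open, f, &oval);
    DMPlexGetSupportSize(dm, f, &supportSize);
    IsFaceShared(sf, rootdegree, f, &shared);
    /* open boundary, interior (supportSize != 1), or shared boundary face */
    if (oval >= 0 || supportSize != 1 || shared) DMLabelSetValue(flux, f, 1);
  }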

  Thanks,

     Matt

> - Adrian
>
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> University of Auckland, New Zealand
> email: a.croucher at auckland.ac.nz
> tel: +64 (0)9 923 4611
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/