[petsc-dev] PetscSection

Jed Brown jedbrown at mcs.anl.gov
Sat Nov 10 11:04:57 CST 2012


On Sat, Nov 10, 2012 at 10:40 AM, Blaise A Bourdin <bourdin at lsu.edu> wrote:

> On Fri, Nov 9, 2012 at 10:20 PM, Blaise A Bourdin <bourdin at lsu.edu> wrote:
>
>> A DM does double duty by describing the geometry of the mesh and the
>> data layout associated with the finite element space. I liked the model
>> where the mesh geometry and the data layout on the mesh were split into
>> two objects, but I understand the convenience of having everything in the
>> DM, and DMClone works just fine. Since I may have to handle scalar,
>> vector, and 2nd- and 4th-order tensor fields on 2 different finite
>> element spaces in an assembly loop, I may end up dealing with as many as
>> 8 DMs. I stick all these DMs in the user context of each DM associated
>> with an unknown (a Vec on which I may have to call SNESSolve or TSSolve),
>> hoping that this is not creating some aliasing problem which, as a
>> Fortran programmer, I cannot possibly understand.
>>
>
>  ;-)
>
>
>  Actually, I am really not sure if passing a DM and a pointer to a user
> context containing this DM is legit or not...
>

Ah, okay. To confirm, you have a DM that you are solving for, and in its
user context, you have several other DMs, each with a Vec, describing the
"problem data" like coefficients, forcing terms, and internal
discontinuities? That is completely fine, and not "aliasing", but it does
not play well with *geometric* multigrid because coarse grids reference the
same application context. We have a system of hooks for managing such
resolution-dependent data, though only with a C interface so far. (We
needed this to get geometric multigrid and FAS to work with TS. Most
non-toy applications need it too.)
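
For concreteness, here is an untested sketch of what that hook-based
approach can look like, assuming the coefficient lives in a Vec attached
under the (made-up) name "coefficient" and, for simplicity, shares the
solution DM's layout:

#include <petscdm.h>

/* Called each time a coarse DM is created from 'fine' (by PCMG, FAS, ...).
   Builds a coarse-level coefficient Vec, attaches it to the coarse DM, and
   re-registers the hook so further coarsening is handled the same way. */
static PetscErrorCode CoarsenCoefficient(DM fine,DM coarse,void *ctx)
{
  Vec            kcoarse;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = DMCreateGlobalVector(coarse,&kcoarse);CHKERRQ(ierr);
  /* ... fill kcoarse by sampling/averaging the fine coefficient (omitted) ... */
  ierr = PetscObjectCompose((PetscObject)coarse,"coefficient",(PetscObject)kcoarse);CHKERRQ(ierr);
  ierr = VecDestroy(&kcoarse);CHKERRQ(ierr); /* the coarse DM holds a reference */
  ierr = DMCoarsenHookAdd(coarse,CoarsenCoefficient,NULL,ctx);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* On the finest DM, before SNESSolve()/TSSolve():
     PetscObjectCompose((PetscObject)dm,"coefficient",(PetscObject)kappa);
     DMCoarsenHookAdd(dm,CoarsenCoefficient,NULL,NULL);                     */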

I'm not sure if there is a way to make this easier. We have been using
PetscObjectCompose() to attach things to the DM on different levels. We
could have a slightly friendlier "user" interface for that.
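
For example (untested, and the key name "coefficient" is just made up), a
residual routine can recover the attached Vec from whatever DM the solver
hands it, instead of going through the shared user context:

#include <petscsnes.h>

static PetscErrorCode FormFunction(SNES snes,Vec X,Vec F,void *ctx)
{
  DM             dm;
  Vec            kappa;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = SNESGetDM(snes,&dm);CHKERRQ(ierr);
  ierr = PetscObjectQuery((PetscObject)dm,"coefficient",(PetscObject*)&kappa);CHKERRQ(ierr);
  if (!kappa) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_ARG_WRONGSTATE,"No coefficient attached to this DM");
  /* ... assemble F from X and kappa ... */
  PetscFunctionReturn(0);
}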

So keeping those things in the app context is just fine, but if you want to
use geometric multigrid, you'll have to take them out of the app context
and put them in a different structure attached to the DM that is not
transparently propagated under coarsening and refinement. If you think you
might do this eventually, I recommend commenting/organizing your app
context so that resolution-dependent stuff is easily identifiable.
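
Even something as small as this (hypothetical names) makes that later move
painless:

typedef struct {
  /* resolution-independent parameters: safe to share across levels */
  PetscReal E,nu;
  /* resolution-dependent data: must not be shared by coarse grids;
     candidates for moving onto the DM via PetscObjectCompose() + hooks */
  DM        dmCoeff;   /* layout of the coefficient field */
  Vec       coeff;     /* coefficient values on the fine mesh */
} AppCtx;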


> I don't mind that, but can't you have an index set describing the codim 0
> elements (maybe all of them) and another index set for the codim 1 elements
> on the features you care about? You can take their union (or concatenate)
> for your assembly loop if you like. Is there something wrong with this
> approach?
>
>
>   That's a very good point. In the end it doesn't really matter. As far as
> I remember, the main reason I ended up with my current scheme is that
> DMMesh did not play well with partially interpolated meshes. I don't know
> what the current status of DMComplex is.
>
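
(For reference, the union suggested above is essentially one call; an
untested sketch, assuming the two index sets are already built:)

#include <petscis.h>

/* Merge the codim-0 and codim-1 index sets and loop over the result for
   assembly; ISExpand() forms the union of the two lists. */
static PetscErrorCode AssembleOverFeatures(IS cells,IS bdfaces)
{
  IS              assembly;
  const PetscInt *points;
  PetscInt        n,i;
  PetscErrorCode  ierr;

  PetscFunctionBegin;
  ierr = ISExpand(cells,bdfaces,&assembly);CHKERRQ(ierr);
  ierr = ISGetLocalSize(assembly,&n);CHKERRQ(ierr);
  ierr = ISGetIndices(assembly,&points);CHKERRQ(ierr);
  for (i=0; i<n; i++) {
    /* ... assemble the contribution of point points[i] ... */
  }
  ierr = ISRestoreIndices(assembly,&points);CHKERRQ(ierr);
  ierr = ISDestroy(&assembly);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}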

Okay, I think it's important to eventually support partially interpolated
meshes so that low-order discretizations don't use a lot of memory. I see no
reason why there can't also be a direct cache for closure. For a P1 basis,
that amounts to a point range

[cells, boundary faces, vertices]

closure: [cells -> vertices, faces -> vertices]

So cell -> face need not be stored anywhere. Presumably there is a reason
why Matt didn't write it this way. Is it just uniformity of data structure?
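
To make the layout concrete, here is a toy picture (not DMComplex code) for
two triangles sharing an edge, i.e. 2 cells, 4 boundary faces, 4 vertices:

/* Point range: [cells 0..1 | boundary faces 2..5 | vertices 6..9].
   Only these two closure maps are stored; cell -> face appears nowhere. */
static const int cellClosure[2][3] = { /* cells -> vertices */
  {6,7,8},
  {7,9,8}
};
static const int faceClosure[4][2] = { /* boundary faces -> vertices */
  {6,7},{7,9},{9,8},{8,6}
};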