[petsc-dev] PetscSection

Matthew Knepley knepley at gmail.com
Sat Nov 10 15:12:07 CST 2012


On Sat, Nov 10, 2012 at 4:09 PM, Karl Rupp <rupp at mcs.anl.gov> wrote:
> On 11/10/2012 02:29 PM, Jed Brown wrote:
>>
>> On Sat, Nov 10, 2012 at 2:09 PM, Blaise A Bourdin <bourdin at lsu.edu> wrote:
>>
>>     Nope... and please, don't break that...
>>
>>     Here is an example of a mesh with 4 element blocks. In an analysis
>>     code, for each block, I would specify the element type, type of
>>     physics, coefficients, etc. in a PetscBag. That said, I may be doing
>>     it wrong...
>>
>>     block 10 is made of tris
>>     block 22 is made of quads
>>     blocks 100 and 102 are made of bars
>>
>>
>> I'm not suggesting any demotion of this ability. I'm trying to
>> understand the advantage of having a stratum of unsorted, mixed
>> dimension. If the blocks were sorted by dimension, you would still
>> access them using a label (which gives you an index set, perhaps
>> contiguous). You would still be able to do all the normal operations
>> with those points. If you want one loop over all elements of the
>> various types, you would union or concatenate the index sets of each
>> type. Am I missing something?
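
For concreteness, here is a minimal sketch of the label/index-set
pattern Jed describes above. It is only a sketch against the petsc-dev
API of the time: the routine names DMComplexGetStratumIS and
ISConcatenate, the header, and the "Cell Sets" label with Blaise's
block ids are assumptions, not a definitive recipe.

  #include <petscdmcomplex.h>

  /* Fetch the index sets for two blocks from a label and concatenate
     them so that a single loop can visit the cells of both types. */
  PetscErrorCode LoopOverBlocks(DM dm)
  {
    IS              blocks[2], allCells;
    const PetscInt *cells;
    PetscInt        n, i;
    PetscErrorCode  ierr;

    PetscFunctionBegin;
    ierr = DMComplexGetStratumIS(dm, "Cell Sets", 10, &blocks[0]);CHKERRQ(ierr); /* tris  */
    ierr = DMComplexGetStratumIS(dm, "Cell Sets", 22, &blocks[1]);CHKERRQ(ierr); /* quads */
    ierr = ISConcatenate(PETSC_COMM_SELF, 2, blocks, &allCells);CHKERRQ(ierr);
    ierr = ISGetLocalSize(allCells, &n);CHKERRQ(ierr);
    ierr = ISGetIndices(allCells, &cells);CHKERRQ(ierr);
    for (i = 0; i < n; ++i) {
      /* ... operate on cell point cells[i] ... */
    }
    ierr = ISRestoreIndices(allCells, &cells);CHKERRQ(ierr);
    ierr = ISDestroy(&allCells);CHKERRQ(ierr);
    ierr = ISDestroy(&blocks[0]);CHKERRQ(ierr);
    ierr = ISDestroy(&blocks[1]);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }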
>
>
> May I add something that is not specific to the current implementation in
> PETSc, yet still backs up the 'arrays of elements of the same type'
> approach:
>  a) When thinking of accelerators, having, say, all tets stored/referenced
> contiguously in one array and all hexahedra stored contiguously in another
> (separate) array makes it a lot easier to move all tets to a kernel working
> on tets, and to feed all hexes to a second kernel working on hexes.
>
>  b) In a strongly typed setting (my experience here is in C++), one can do
> pretty nice things by keeping the two arrays separate rather than trying to
> unify them in some base class. For simplicity, suppose you have a mesh and
> two containers holding the tets and hexes. A FEM assembly could then look
> like this:

Yes, I agree completely with batching of this kind, which is why my
integration functions look like they do. However, the topology
information is not really needed in these kernels. Instead you are
packing discretization information, like FEM coefficients, or geometric
information like Jacobians, all of which are controlled by PetscSection,
not by the DMComplex.
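
To make the PetscSection role concrete, here is a minimal sketch of a
section laying out packed per-cell coefficient storage. The helper
name, point range, and dof counts are invented for illustration, and
the header location is assumed:

  #include <petscdmcomplex.h>

  /* Sketch: a PetscSection maps each cell point to an (offset, ndof)
     pair in a packed coefficient array; sizes here are placeholders. */
  PetscErrorCode PackCoefficients(PetscInt cStart, PetscInt cEnd)
  {
    PetscSection   s;
    PetscScalar   *coeff;
    PetscInt       size, c, off, dof;
    PetscErrorCode ierr;

    PetscFunctionBegin;
    ierr = PetscSectionCreate(PETSC_COMM_SELF, &s);CHKERRQ(ierr);
    ierr = PetscSectionSetChart(s, cStart, cEnd);CHKERRQ(ierr);
    for (c = cStart; c < cEnd; ++c) {
      ierr = PetscSectionSetDof(s, c, 3);CHKERRQ(ierr); /* e.g. 3 coefficients per cell */
    }
    ierr = PetscSectionSetUp(s);CHKERRQ(ierr);
    ierr = PetscSectionGetStorageSize(s, &size);CHKERRQ(ierr);
    ierr = PetscMalloc(size * sizeof(PetscScalar), &coeff);CHKERRQ(ierr);
    for (c = cStart; c < cEnd; ++c) {
      ierr = PetscSectionGetDof(s, c, &dof);CHKERRQ(ierr);
      ierr = PetscSectionGetOffset(s, c, &off);CHKERRQ(ierr);
      /* fill coeff[off] .. coeff[off+dof-1] for this cell */
    }
    ierr = PetscFree(coeff);CHKERRQ(ierr);
    ierr = PetscSectionDestroy(&s);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }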

   Matt

>   for_each(mesh_cells.begin(), mesh_cells.end(), assembly_functor());
>
> where assembly_functor statically dispatches on the cell type:
>   struct assembly_functor {
>     void operator()(tetrahedron const & t) { ... }
>     void operator()(hexahedron  const & h) { ... }
>   };
> and for_each is implemented such that it internally expands to a for_each
> over each of the two containers for tets and hexes. Thus, instead of
> dispatching for each element over and over again, the dispatch is done once
> for the whole array.
>
> Just my 2 cents,
> Karli
>
>
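A minimal, self-contained sketch of the statically dispatched for_each
Karl describes, with a hypothetical mesh type holding one contiguous
container per cell type; all names here are invented for illustration:

  #include <vector>
  #include <algorithm>

  struct tetrahedron { /* vertices etc. */ };
  struct hexahedron  { /* vertices etc. */ };

  // One contiguous container per cell type, as in Karl's point (a).
  struct mesh {
    std::vector<tetrahedron> tets;
    std::vector<hexahedron>  hexes;
  };

  // Expands to one std::for_each per container: the functor overload
  // is resolved at compile time once per cell type, not per element.
  template <typename Functor>
  void for_each_cell(mesh const & m, Functor f) {
    std::for_each(m.tets.begin(),  m.tets.end(),  f);
    std::for_each(m.hexes.begin(), m.hexes.end(), f);
  }

  struct assembly_functor {
    void operator()(tetrahedron const & t) const { /* assemble tet */ }
    void operator()(hexahedron  const & h) const { /* assemble hex */ }
  };

  int main() {
    mesh m;
    for_each_cell(m, assembly_functor());
    return 0;
  }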



--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener


