[petsc-dev] PetscSection

Matthew Knepley knepley at gmail.com
Tue Sep 10 20:41:09 CDT 2013


On Tue, Sep 10, 2013 at 8:36 PM, Shri <abhyshr at mcs.anl.gov> wrote:

>
> On Sep 10, 2013, at 6:57 PM, Matthew Knepley wrote:
>
> On Tue, Sep 10, 2013 at 6:53 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
>>
>> On Sep 10, 2013, at 6:11 PM, Matthew Knepley <knepley at gmail.com> wrote:
>>
>> > On Tue, Sep 10, 2013 at 5:14 PM, Barry Smith <bsmith at mcs.anl.gov>
>> wrote:
>> >
>> >    We had this discussion last November. Now that DMPlex has further
>> matured and is likely to play a big role in some major applications, I'd
>> like to revisit the issue and see if PetscSection can be refactored to be
>> easier to understand, more useful, and more "compatible" with how DMPlex
>> is developing.
>> >
>> >     PetscSection seems to have two distinct roles:
>> >
>> >   Role 1   (I'll call this object XX)  Provides a way of indexing into
>> an array of "items" where different items can be of different sizes and
>> there can be spaces in the memory between items.
>> >
>> > ---------
>> >   struct _n_PetscUniformSection atlasLayout;  /* Layout for the atlas */
>> >   PetscInt                     *atlasDof;     /* Describes layout of storage, point --> # of values */
>> >   PetscInt                     *atlasOff;     /* Describes layout of storage, point --> offset into storage */
>> >   PetscInt                      maxDof;       /* Maximum dof on any point */
>> >   PetscBool                     setup;
>> >
>> > So if we have a regular C array of type int, double, etc., we do
>>  addr = array + p == &array[p] to get the address of the pth item (where p
>> is only valid between pStart and pEnd). With PetscSection one would instead
>> do XXGetOffset(xx,p,&offset);  addr = (type*) (array + offset); while
>> XXGetDof(xx,p,&dof); gives you the "size" of the memory at location addr.
>> >
>> >     For historical reasons the offsets are in terms of int* instead of
>> char*;  (this should be fixed)
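>> >
>> > For concreteness, a minimal sketch of this access pattern (with XX standing in for the proposed object and the getter names used above; error checking omitted):
>> >
>> >     PetscInt off, dof, i;
>> >     XXGetOffset(xx, p, &off);  /* where point p's chunk starts in array */
>> >     XXGetDof(xx, p, &dof);     /* how many values are stored at point p */
>> >     for (i = 0; i < dof; i++) {
>> >       /* work with array[off + i] */
>> >     }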
>> >
>> > I do not understand this. The offsets are integers, and I think they
>> should be.
>>
>>   I was unclear.  The offset value should be in bytes from the beginning
>> of the array; currently it is in ints from the beginning of the array. Ints
>> are just plain confusing: Shri has to use this strange size sizeof(his
>> struct)/sizeof(PetscInt) to indicate how big his chunk is.
>
>
> No, it has nothing to do with Int. Where did this come from? In fact, I
> use it all the time to mean Scalar offsets. It is
> interpreted by the creator of the Section. I told him to put in bytes for
> the sizes. Then you do not need any of that crap.
>
>
> After speaking with Barry today, I realized that I was abusing
> PetscSection and doing a roundabout indexing to access the desired data. I
> was using four sections to distribute data for four different structures. I
> understand now that I need to
> i) Create only one section to manage the parameters and use
> PetscSectionAddDof() to add the dofs (sizes of structs) for the different
> structs required for residual evaluation.
> ii) Allocate a chunk of memory equal to the size returned from
> PetscSectionGetStorageSize(). Note that the user can do (i) and (ii)
> directly while reading the circuit parameters file. I am doing this after
> the file has been read and the different structs have been created, hence
> the need for allocating the chunk of memory.
> iii) Loop over the edges, vertices and copy over the parameters using
> PetscSectionGetOffset().
> iv) Use DMPlexDistributeData() with this section and the chunk of memory.
>
> I tried to use MPI_BYTE as the MPI_Datatype in DMPlexDistributeData() but
> got the following error. Jed explained that it has something to do with how
> the data is packed and suggested, as a workaround, dividing the sizes of the
> structs in (ii) by sizeof(PetscInt) and using MPI_INT for
> DMPlexDistributeData(). A sketch of (i)-(iv) with that workaround follows.
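>
> (Assuming comm, dm, pointSF, the chart [pStart, pEnd), and a hypothetical EDGEDATA struct with one instance per point in edgedata[]; error checking omitted.)
>
>   PetscSection sec, newSec;
>   PetscInt     size, off, p;
>   PetscInt    *data, *newData;
>   PetscSectionCreate(comm, &sec);
>   PetscSectionSetChart(sec, pStart, pEnd);
>   for (p = pStart; p < pEnd; p++)           /* (i): sizes in units of PetscInt */
>     PetscSectionAddDof(sec, p, sizeof(EDGEDATA)/sizeof(PetscInt));
>   PetscSectionSetUp(sec);
>   PetscSectionGetStorageSize(sec, &size);   /* (ii): allocate the flat chunk */
>   PetscMalloc(size*sizeof(PetscInt), &data);
>   for (p = pStart; p < pEnd; p++) {         /* (iii): copy each struct into place */
>     PetscSectionGetOffset(sec, p, &off);
>     PetscMemcpy(&data[off], &edgedata[p - pStart], sizeof(EDGEDATA));
>   }
>   PetscSectionCreate(comm, &newSec);        /* (iv): redistribute as MPI_INT */
>   DMPlexDistributeData(dm, pointSF, sec, MPI_INT, data, newSec, (void**)&newData);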
>

I wish the guy who wrote PetscSF believed in types smaller than int. Just
because they are small does not mean they
cannot achieve great things. MPI_BYTE is the Little Type That Could.

   Matt


> [0]PETSC ERROR: --------------------- Error Message
> ------------------------------------
> [0]PETSC ERROR: No support for this operation for this object type!
> [0]PETSC ERROR: No support for type size not divisible by 4!
> [0]PETSC ERROR:
> ------------------------------------------------------------------------
> [0]PETSC ERROR: Petsc Development GIT revision:
> d45619dec29bfb59cf96225a84e0a74106da50ca  GIT Date: 2013-08-17 23:45:05
> -0500
> [1]PETSC ERROR: --------------------- Error Message
> ------------------------------------
> [1]PETSC ERROR: No support for this operation for this object type!
> [1]PETSC ERROR: No support for type size not divisible by 4!
> [1]PETSC ERROR:
> ------------------------------------------------------------------------
> [1]PETSC ERROR: Petsc Development GIT revision:
> d45619dec29bfb59cf96225a84e0a74106da50ca  GIT Date: 2013-08-17 23:45:05
> -0500
> [0]PETSC ERROR: See docs/changes/index.html for recent updates.
> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> [0]PETSC ERROR: See docs/index.html for manual pages.
> [0]PETSC ERROR:
> ------------------------------------------------------------------------
> [0]PETSC ERROR: ./PF2 on a debug-mode named Shrirangs-MacBook-Pro.local by
> Shri Tue Sep 10 20:22:44 2013
> [0]PETSC ERROR: Libraries linked from
> /Users/Shri/Documents/petsc/debug-mode/lib
> [0]PETSC ERROR: Configure run at Sat Aug 31 09:27:36 2013
> [0]PETSC ERROR: Configure options --download-chaco --download-metis
> --download-mpich --download-mumps --download-parmetis --download-scalapack
> --download-superlu_dist COPTFLAGS=-g CLFAGS=-g FLAGS=-g FOPTFLAGS=-g
> PETSC_ARCH=debug-mode
> [1]PETSC ERROR: See docs/changes/index.html for recent updates.
> [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> [1]PETSC ERROR: See docs/index.html for manual pages.
> [1]PETSC ERROR:
> ------------------------------------------------------------------------
> [0]PETSC ERROR:
> ------------------------------------------------------------------------
> [0]PETSC ERROR: PetscSFBasicPackTypeSetup() line 386 in
> /Users/Shri/Documents/petsc/src/vec/is/sf/impls/basic/sfbasic.c
> [0]PETSC ERROR: PetscSFBasicGetPack() line 509 in
> /Users/Shri/Documents/petsc/src/vec/is/sf/impls/basic/sfbasic.c
> [0]PETSC ERROR: [1]PETSC ERROR: ./PF2 on a debug-mode named
> Shrirangs-MacBook-Pro.local by Shri Tue Sep 10 20:22:44 2013
> [1]PETSC ERROR: Libraries linked from
> /Users/Shri/Documents/petsc/debug-mode/lib
> [1]PETSC ERROR: Configure run at Sat Aug 31 09:27:36 2013
> [1]PETSC ERROR: Configure options --download-chaco --download-metis
> --download-mpich --download-mumps --download-parmetis --download-scalapack
> --download-superlu_dist COPTFLAGS=-g CLFAGS=-g FLAGS=-g FOPTFLAGS=-g
> PETSC_ARCH=debug-mode
> [1]PETSC ERROR:
> ------------------------------------------------------------------------
> [1]PETSC ERROR: PetscSFBcastBegin_Basic() line 645 in
> /Users/Shri/Documents/petsc/src/vec/is/sf/impls/basic/sfbasic.c
> [0]PETSC ERROR: PetscSFBcastBegin() line 918 in
> /Users/Shri/Documents/petsc/src/vec/is/sf/interface/sf.c
> [0]PETSC ERROR: DMPlexDistributeData() line 2796 in
> /Users/Shri/Documents/petsc/src/dm/impls/plex/plex.c
> [0]PETSC ERROR: PetscSFBasicPackTypeSetup() line 386 in
> /Users/Shri/Documents/petsc/src/vec/is/sf/impls/basic/sfbasic.c
> [1]PETSC ERROR: PetscSFBasicGetPack() line 509 in
> /Users/Shri/Documents/petsc/src/vec/is/sf/impls/basic/sfbasic.c
> [1]PETSC ERROR: PetscSFBcastBegin_Basic() line 645 in
> /Users/Shri/Documents/petsc/src/vec/is/sf/impls/basic/sfbasic.c
> main() line 582 in pf2.c
> [1]PETSC ERROR: PetscSFBcastBegin() line 918 in
> /Users/Shri/Documents/petsc/src/vec/is/sf/interface/sf.c
> [1]PETSC ERROR: DMPlexDistributeData() line 2796 in
> /Users/Shri/Documents/petsc/src/dm/impls/plex/plex.c
> [1]PETSC ERROR: main() line 582 in pf2.c
> application called MPI_Abort(MPI_COMM_WORLD, 56) - process 0
> [cli_0]: aborting job:
> application called MPI_Abort(MPI_COMM_WORLD, 56) - process 0
> application called MPI_Abort(MPI_COMM_WORLD, 56) - process 1
> [cli_1]: aborting job:
> application called MPI_Abort(MPI_COMM_WORLD, 56) - process 1
>
>
> ===================================================================================
> =   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
> =   EXIT CODE: 56
> =   CLEANING UP REMAINING PROCESSES
> =   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
>
> ===================================================================================
>
> Btw, DMPlexDistributeData() is missing an extern declaration in
> petscdmplex.h.
>
>
>
>
>> >
>> >
>> >     This is a useful thing but in itself not a killer app.  What makes
>> it a killer app are
>> >
>> >  DMPlexDistributeField(DM dm, PetscSF pointSF, XX originalSection, Vec originalVec, XX newSection, Vec newVec)
>> >  DMPlexDistributeData(DM dm, PetscSF pointSF, XX originalSection, MPI_Datatype datatype, void *originalData, XX newSection, void **newData)
>> >
>> >     Note that these do not actually even use the DM (the code should be
>> fixed by moving it out of DMPlex into the "lower level" code). Note also
>> that these really create newSection internally (the code should be fixed by
>> making them take PetscSection *newSection).
>> >
>> > I can move them over to vsectionis.c. I do not agree that the signature
>> should change. I thought we were going to
>> > the VecLoad() paradigm where an empty object comes in.
>> >
>> >     What this means is that if I have a set of "points" distributed
>> across MPI processes, and for each point a "bunch of data" (in an "array"
>> offsetted by an XX), and I redistribute the points across MPI processes,
>> including adding ghost points, then I can redistribute all the "bunches of
>> data" (including those needed for ghost points) trivially, and conveniently
>> access them using the newSection.
>> >
>> > Yep, this should be very useful, an extension of the very useful
>> VecScatter.
>> >
>> >
>> >    Role 2 (I'll call this object CC)  Provides a container that tells
>> you which parts of "bunches of data" in an array or Vec are associated with
>> different "physical simulation fields" and are possibly constrained (for
>> example by boundary conditions). Uses XX to indicate each field etc.
>> >
>> >   --------
>> >   PetscSection                  bc;           /* Describes constraints, point --> # local dofs which are constrained */
>> >   PetscInt                     *bcIndices;    /* Local indices for constrained dofs */
>> >
>> >
>> >   PetscInt                      numFields;    /* The number of fields making up the degrees of freedom */
>> >   const char                  **fieldNames;   /* The field names */
>> >   PetscInt                     *numFieldComponents; /* The number of components in each field */
>> >   PetscSection                 *field;        /* A section describing the layout and constraints for each field */
>> >
>> >
>> >    Suggestion:
>> >
>> >         We split PetscSection into XX and CC, and reorganize and clean
>> them up.
>> >
>> > Okay. The reason I did it this way was that any FEM code is going to
>> need CC, and it needs to contain XX, so
>> > I just made XX the container. We can make CC the container and have the
>> split that you want. It makes sense.
>> >
>> >    Relationship of XX to IS.
>> >
>> >         An IS defines a subset of entries in a Vec, or possibly in an
>> array of type int. Generally individual entries in an IS do not have
>> particular meaning relative to other entries. That is, one does not iterate
>> over entries in an IS; one just uses the IS to get a new Vec and then does
>> whatever on that Vec.
>> >
>> >        An XX is something you often iterate over, for example
>> >
>> >     for (p = pStart; p < pEnd; ++p) {
>> >       PetscInt dof, off;
>> >       ierr = PetscSectionGetDof(mesh->supportSection, p, &dof);CHKERRQ(ierr);
>> >       ierr = PetscSectionGetOffset(mesh->supportSection, p, &off);CHKERRQ(ierr);
>> >       /* do something with the dof values starting at offset off */
>> >     }
>> >
>> > that is, you get a single entry of XX and do something with it (though
>> you can also get a bunch of entries and do something on the whole bunch).
>> >
>> > We do have these kinds of offset structures in PETSc (MatAIJ), but they
>> are hardcoded in.
>> >
>> >    Question:
>> >
>> >     Is XX a powerful and useful thing? Do we have the correct API for
>> it? Can it be used for other purposes?
>> >
>> > I think it's all yes.
>> >
>> >     Can it be made more powerful? For example, the "chunk" that it gives
>> you is always a contiguous set of bytes in the "array". Would we be better
>> off with a YY object that allowed them to be non-contiguous? Or should YY
>> be a higher level object implemented with XX?
>> >
>> > Let me take this statement apart. So for any point p, the Section gives
>> you back (offset, dof), so that you can use it to refer to
>> >
>> >    array[offset]...array[offset+dof-1]
>> >
>> > which is the "chunk" that Barry is talking about. It is contiguous by
>> design; that is how we achieve compression in the representation. I claim
>> you can get the YY object with two XX objects. The first XX object says how
>> many "points" in the second XX correspond to each chunk. Each one of the
>> subchunks in the second XX is contiguous, but together they form a
>> non-contiguous chunk for the first XX. In general, any non-contiguous thing
>> is represented by a set of contiguous things. This is not a good
>> representation if you mostly have scattered, scalar data. However, I think
>> we mostly have scattered blocks of data.
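>> >
>> > A sketch of that composition (outer/inner names and the scratch array assumed; the outer XX maps a point to its run of subchunk "points" in the inner XX, each of which is a contiguous (offset, dof) pair):
>> >
>> >     PetscInt nsub, suboff, off, dof, s;
>> >     XXGetDof(outer, p, &nsub);      /* number of contiguous pieces in p's chunk */
>> >     XXGetOffset(outer, p, &suboff); /* first piece, as a point of the inner XX */
>> >     for (s = 0; s < nsub; s++) {
>> >       XXGetOffset(inner, suboff + s, &off);
>> >       XXGetDof(inner, suboff + s, &dof);
>> >       /* work with array[off .. off+dof-1]; the pieces need not be adjacent */
>> >     }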
>>
>>    Is this second thing, built on two XXs, worthy of its own abstraction?
>> For example, when I say give me the coordinates of the nodes of a triangle
>> in a mesh?
>
>
> I am not sure I understand what you want here. I don't think this YY thing
> is all that useful since I can't see the generality.
>
>    Matt
>
>
>> >
>> >    Looking for a vigorous discussion and then code that I can better
>> understand.
>> >
>> > Me too, so "Jed is probably all wrong about this" :)
>> >
>> >    Matt
>> >
>> >
>> >    Barry
>> >
>> >
>> >
>> > On Nov 8, 2012, at 10:18 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>> >
>> > > This is inevitable, but may still not resolve anything. I'm sure Matt
>> will correct whatever below is incorrect. The main question is whether to
>> promote PetscSection to a PetscObject or to hide/replace it with something
>> else.
>> > >
>> > > So PetscSection is an unloved (by most) beastie that is sort of
>> intermediate between an IS and a DM. Its core functionality is to provide
>> indirect indexing with variable block sizes. It consists of two parts, the
>> first of which is less than a PetscLayout.
>> > >
>> > > struct _n_PetscUniformSection {
>> > >   MPI_Comm comm;
>> > >   PetscInt pStart, pEnd; /* The chart: all points are contained in [pStart, pEnd) */
>> > >   PetscInt numDof;       /* Describes layout of storage, point --> (constant # of values, (p - pStart)*constant # of values) */
>> > > };
>> > >
>> > > numDof is always 1, and PetscUniformSection is never referenced from a
>> *.c file, so this is really just a range of valid "point" values
>> [pStart...pEnd).
>> > >
>> > > struct _n_PetscSection {
>> > >   struct _n_PetscUniformSection atlasLayout;  /* Layout for the atlas */
>> > >   PetscInt                     *atlasDof;     /* Describes layout of storage, point --> # of values */
>> > >   PetscInt                     *atlasOff;     /* Describes layout of storage, point --> offset into storage */
>> > >
>> > > ^^ These two fields are the essential part. Each index in
>> [pStart,pEnd) is mapped to a block of size atlasDof[p-pStart] located at
>> atlasOff[p-pStart]. Matt composes these indirect maps for storing meshes
>> and other indexing tasks. This "unsorted offset mapping" is indeed used
>> frequently in PETSc, though outside of Matt's code, it's just raw arrays.
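>> > >
>> > > For example, with pStart = 0, atlasDof = {3, 1, 2}, and atlasOff = {0, 3, 4}, point 2 is the block array[4], array[5].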
>> > >
>> > > Users primarily encounter PetscSection when accessing unstructured
>> meshes, or any time the number of degrees of freedom or the locations of
>> each "point" (topological entity in the mesh) are not uniform. DMs do a
>> similar thing (provide more logical access to a Vec), but DMDAVecGetArray()
>> is of course only natural for a structured grid.
>> > >
>> > > At the user interface level, provided the user doesn't need low-level
>> access to topology, I think there are three possibilities.
>> > >
>> > > 1. To compute a residual at a cell, the user does
>> > >
>> > > VecGetArray(F,&f); // normal access to the Vec
>> > > DMComplexGetHeightStratum(dm,0,&cstart,&cend); // "height 0" is like codimension 0 with respect to the maximum topological dimension. Matt and I have a philosophical debate over whether users prefer to think in "strata" (which can skip dimensions, so "height 1" could be a face, an edge, or a vertex depending on what is stored in the mesh) or in "topological dimensions" (where codim=1 is a "face" for a 3D mesh and an edge for a 2D mesh).
>> > >
>> > > DMGetDefaultSection(dm,&section);
>> > > for (c=cstart; c<cend; c++) {
>> > >   PetscInt off;
>> > >   PetscSectionGetOffset(section,c,&off);
>> > >   f[off+0] = residual of first equation at this cell;
>> > >   f[off+1] = residual of second equation;
>> > >   // brushing under the rug how we access neighbor values
>> > > }
>> > > VecRestoreArray(F,&f); // release array access
>> > >
>> > > The above is basically the current model. Matt has
>> DMComplexGet/SetClosure(), but that interface carries a sleeper reference
>> to the Vec's array and he says he plans to remove it.
>> > >
>> > > 2. Add DMComplex-specific access routines so the user does not need
>> to see the PetscSection. Presumably this would be something like
>> > > DMComplexGetPointOffset(dm,c,&offset);       // offset into owned part of global vector?
>> > > DMComplexGetPointOffsetLocal(dm,c,&loffset); // offset into local vector
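>> > >
>> > > Usage would then mirror option 1 with the Section hidden (routine names as proposed above, hypothetical; f obtained from VecGetArray() as before):
>> > >
>> > > for (c=cstart; c<cend; c++) {
>> > >   PetscInt loff;
>> > >   DMComplexGetPointOffsetLocal(dm,c,&loff);
>> > >   f[loff+0] = residual of first equation at this cell;
>> > >   f[loff+1] = residual of second equation;
>> > > }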
>> > >
>> > > 3. Access cursors (another object).
>> > > DMComplexGetCursors(dm,Vec U,3,&uface,&ucellL,&ucellR);
>> > > // similar for F
>> > > DMComplexGetHeightStratum(dm,1,&fstart,&fend);
>> > > for (f=fstart; f<fend; f++) {
>> > >   const PetscInt *c;
>> > >   const PetscScalar *uL,*uR,*uF;
>> > >   DMComplexGetSupport(dm,f,&c); // left and right cells
>> > >   DMCursorGetValues(uface,f,&uF);  // only used for staggered/mixed discretization
>> > >   DMCursorGetValues(ucellL,c[0],&uL);
>> > >   DMCursorGetValues(ucellR,c[1],&uR);
>> > >   // Solve Riemann problem with "face" values uF[] (if using a staggered discretization), uL[], and uR[].
>> > >   DMCursorRestoreValues(uface,f,&uF); // ...
>> > >
>> > > The "advantage" of the cursors here is that DMComplex doesn't need to
>> hold the context manager itself (perhaps for multiple threads), the code
>> dealing with the number of concurrent accesses is outside the inner loop,
>> and the implementations of these accessors are trivial (offset). I think
>> it's still not compelling for the sketched discretization above, but
>> consider also FEM methods where we need to access all degrees of freedom on
>> the closure of an element. The following packs a buffer with the closure
>> (because it's not contiguous in memory).
>> > >
>> > >   DMCursorGetClosure(cell,c[0],&uc);
>> > >
>> > >
>> > > 4. Alternative suggestions?
>> > >
>> > >
>> > > I'm so sick of the name DMComplex. Let's just rename to DMPlex. It's
>> shorter and doesn't have a double-meaning. Or pretty much any other name
>> but "Complex".
>> > >
>> > > For completeness, the last bit of _n_PetscSection:
>> > >
>> > >   PetscInt                      maxDof;       /* Maximum dof on any
>> point */
>> > >   PetscSection                  bc;           /* Describes
>> constraints, point --> # local dofs which are constrained */
>> > >   PetscInt                     *bcIndices;    /* Local indices for
>> constrained dofs */
>> > >
>> > > These BC fields are for doing masked updates, such as
>> VecSetValuesSection() which ignores constrained dofs.
>> > >
>> > >   PetscInt                      refcnt;       /* Vecs obtained with
>> VecDuplicate() and from MatGetVecs() reuse map of input object */
>> > >   PetscBool                     setup;
>> > >
>> > >   PetscInt                      numFields;    /* The number of fields
>> making up the degrees of freedom */
>> > >   const char                  **fieldNames;   /* The field names */
>> > >   PetscInt                     *numFieldComponents; /* The number of
>> components in each field */
>> > >   PetscSection                 *field;        /* A section describing
>> the layout and constraints for each field */
>> > > };
>> > >
>> >
>> >
>> >
>> >


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

