[petsc-dev] DM field interfaces

Barry Smith bsmith at mcs.anl.gov
Sat Feb 25 15:27:24 CST 2012


   Let's back up a step before everyone gets totally confused (and worry less about the exact syntax to use; instead focus on concepts)

Proposal 1: Jed 

    // Conservative gas dynamics with fields ordered as [rho, rho*u, rho*v, rho*w, E]
    DMGetFieldSplitting(dm,"conservative-vector",&fscons); // creates if it doesn't exist
    DMFieldSplittingAddField(dm,fscons,"rho",{0});
    DMFieldSplittingAddField(dm,fscons,"momentum",{1,2,3});

    This seems to be premised on having an underlying "canonical" set of fields, of which one can define unions to make new fields. For example, with DMDA the underlying
    fields are simply the 0th, 1st, 2nd, etc. DOF at each node. For a simple staggered grid, canonical field 0 may be the pressure centers, 1 the x velocities, and 2 the y velocities.

Proposal 2: Barry

     Use IS's to define the fields, not unions of canonical fields. Reason: this is more general than Proposal 1.
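
     For concreteness, a minimal sketch of Proposal 2 using existing calls (the index list and split name are purely illustrative):

    /* Proposal 2: a field is defined by an IS and nothing else; the indices
       need not form a stride or a union of canonical fields */
    IS       isvel;
    PetscInt idx[] = {1,2,3,6,7,8};   /* whatever DOFs make up "velocity" */

    ISCreateGeneral(PETSC_COMM_WORLD,6,idx,PETSC_COPY_VALUES,&isvel);
    PCFieldSplitSetIS(pc,"velocity",isvel);   /* PCFIELDSPLIT consumes the IS directly */
    ISDestroy(&isvel);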

Proposal 3: Jed

    Observation: Using an IS does not give all desired generality, since the new field may be a linear combination of DOFs, or, yikes, a nonlinear transformation of DOFs.
    Thus we may need something like DMTransformVec() to do these transformations.


Comment:  Barry 
      
     I very much like the universality in PETSc of using an IS __ALWAYS__ to grab a set of indexed things, rather than another indexing technique like 0 to indicate the "first field"
     and 1 to indicate the "second field".
 
       1) Matt seems to like this too, since he very recently lobbied for simplifying PCFIELDSPLIT to ONLY use the IS for indicating parts and to remove
            the (somewhat redundant) PetscInt nfields; PetscInt *fields; which we will do.
   
        2) We currently have VecStrideXXXXX(vec,start,....) that index using the 0th field, 1st field, etc. This does not match the paradigm (see the sketch after this list).

        3) VecGetSubVector(Vec,IS,Vec*) uses an IS to pull out pieces, so it matches the paradigm. Way to go Jed!

        4) There may be other places that do not match the paradigm
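
 A quick illustration of the mismatch between 2) and 3), assuming a serial Vec X with block size 5 and selecting field 1:

    /* Stride style (item 2): the field is named by a bare integer offset;
       Xfield must already exist with the matching layout */
    VecStrideGather(X,1,Xfield,INSERT_VALUES);

    /* IS style (item 3): the same field expressed through an IS */
    PetscInt n;
    IS       isfield;
    Vec      Xsub;
    VecGetLocalSize(X,&n);
    ISCreateStride(PETSC_COMM_SELF,n/5,1,5,&isfield);   /* indices 1,6,11,... */
    VecGetSubVector(X,isfield,&Xsub);
    /* ... use Xsub ... */
    VecRestoreSubVector(X,isfield,&Xsub);
    ISDestroy(&isfield);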

 Because Jed's Proposal 1 violates the use-of-IS-for-indexing paradigm and is less general than Proposal 2, I'd like to drop Proposal 1; but it does bring up an issue. I think we need another IS implementation that is more "general" than stride for handling the sets of degrees of freedom associated with those canonical fields, so it is easy to access canonical fields that are not simply strided. For example ISCreateCanonical(MPI_Comm,PetscInt ncanonical,PetscInt *fields); we could even have "predefined" ones available for use such as IS_CANONICAL_(MPI_Comm,3,{1,2,3}). Thus we can preserve some of the simplicity of Proposal 1. Jed's (again not worrying about syntax)

DMFieldSplittingAddField(dm,fscons,"momentum",{1,2,3});  becomes

DMFieldSplittingAddField(dm,fscons,"momentum",IS_CANONICAL_(comm,3,{1,2,3}));

Note that for a Vec with a given block size the canonical fields are simply the stride fields, so IS_CANONICAL_(comm,1,{start}) is pretty much ISStride().
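
One possible sketch of the proposed ISCreateCanonical (hypothetical; the block size and block count are made explicit parameters here, since a standalone IS cannot deduce the Vec layout, and error checking is omitted):

    /* Hypothetical implementation: select canonical fields fields[0..nc-1]
       from each of nblocks consecutive blocks of size bs */
    PetscErrorCode ISCreateCanonical(MPI_Comm comm,PetscInt bs,PetscInt nblocks,PetscInt nc,const PetscInt fields[],IS *is)
    {
      PetscInt *idx,b,f;

      PetscMalloc1(nblocks*nc,&idx);
      for (b=0; b<nblocks; b++)
        for (f=0; f<nc; f++) idx[b*nc+f] = b*bs + fields[f];
      return ISCreateGeneral(comm,nblocks*nc,idx,PETSC_OWN_POINTER,is);
    }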

We could push this even further: if we assume the underlying object being indexed has "named" fields, we could still use those named fields to index via an IS if we had
an ISCreateNamed(comm,"Name") and helper IS_NAMED_(comm,"Name"), so one could do VecGetSubVector(vec,IS_NAMED_(comm,"T"),&subvec); etc.
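
How the named lookup gets backed is an implementation detail; purely as a sketch, the name-to-IS association could ride on the existing PetscObjectCompose()/PetscObjectQuery() mechanism (the "field_T" key is made up):

    /* Sketch: register a field IS under a name, then retrieve it by name */
    PetscObjectCompose((PetscObject)dm,"field_T",(PetscObject)isT);
    /* ... later ... */
    IS isfound;
    PetscObjectQuery((PetscObject)dm,"field_T",(PetscObject*)&isfound);
    VecGetSubVector(vec,isfound,&subvec);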

Ok, so this solves the original issue of indicating sets of indices for a DM, useful for standard use of block methods for matrices :-). It allows general ISs, canonical fields, and named fields all under the same interface.

----------------

Now to the hard stuff Jed dumped on us.

Proposal 3: Jed

    Observation: Using an IS does not give all desired generality, since the new field may be a linear combination of DOFs, or, yikes, a nonlinear transformation of DOFs.
    Thus we may need something like DMTransformVec() to do these transformations.


So this moves us beyond the traditional block linear algebra methods, but if we could solve it we'd have some very nice new powerful capabilities on our hands.
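
To make concrete what is and is not covered: for pointwise (non-mixed) transformations, Jed's observation below that an IS combined with a PF suffices can be sketched with existing objects (assuming pf has been set up to apply, say, a conservative-to-primitive change of variables, and Yfield has been created with the matching layout):

    /* Sketch: nonlinear pointwise change of variables on one field */
    Vec Xfield,Yfield;
    VecGetSubVector(X,isfield,&Xfield);   /* select the part via an IS */
    PFApplyVec(pf,Xfield,Yfield);         /* Yfield = f(Xfield), pointwise */
    VecRestoreSubVector(X,isfield,&Xfield);

Mixed/staggered spaces, where the transformation is defined through quadrature and projection, are exactly what this does not cover.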

Question 1: Do we need to solve this for the current generation of PCFieldSplit?
Question 2: Do we want to solve it in PCFieldSplit, or should that come later in a different PC, PCBasisChangeThingy or something?
Question 3: Can we go ahead with deciding on exact syntax etc. for Proposal 2 and implement it before we resolve the Proposal 3 issue?


When I see DMTransformVec() I get worried; we already have DMGetInterpolation/Restriction() that map between two DMs. Can we not somehow extend those instead of putting in more methods? (I know they return a Mat, which is a linear operator, and if I had listened to Matt many years ago everything in PETSc would be a nonlinear operator, so everything would be fine.) If at all possible I'd like to reuse/fix up those old interfaces to handle this need.
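
For the linear case the existing interface already composes: whatever DMGetInterpolation-like routine we settle on can return a MATSHELL that hides an arbitrary linear transformation. A sketch (ApplyTransform, userctx, and the sizes are hypothetical placeholders):

    /* Sketch: a shell Mat whose MatMult applies the change of basis.
       This honors the Mat contract only for *linear* transformations,
       which is precisely the limitation noted above. */
    Mat T;
    MatCreateShell(PETSC_COMM_WORLD,mloc,nloc,M,N,userctx,&T);
    MatShellSetOperation(T,MATOP_MULT,(void (*)(void))ApplyTransform);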

  Barry


  



On Feb 25, 2012, at 1:33 PM, Jed Brown wrote:

> On Sat, Feb 25, 2012 at 13:10, Dmitry Karpeev <karpeev at mcs.anl.gov> wrote:
> I wrote this before the thread moved to petsc-dev, so there is some overlap with the below.
> A given problem with the same "topology" -- a mesh and a total index layout -- might define several useful splits -- collections of fields. Each field might have further useful splits. I have useful examples from my interaction with Moose/libMesh, although these are rather general.  A field is now represented by an IS.  We could try to put nested split definitions into ISs (e.g., ISNest), but I think it's much more natural to keep all nestedness inside DMs -- we already have examples of this in the coarsening/refinement for MG and a pretty well understood interaction of this sort of DM hierarchy with a hierarchy of PCs.
> 
> Thus, I think it would be useful for a DM to return a split as a list of DMs, which can be further split, if necessary:
>        DMGetFieldDMs(DM dm, PetscInt *fieldcount, DM **fielddms);
> The field names can simply be the corresponding DM names, etc.  The fielddms can be lightweight, sharing the underlying
> topology.  That's up to the implementation to arrange, though.  This addresses nested splits.
> 
> There are two concepts: defining what the space for a given split _is_, and how to extract the part of the current state needed to get a vector living in that space. The IS is a rudimentary way to define what part is needed. (You can use VecGetSubVector(X,isfield,&Xfield) or other methods to reference that part of the state.) Unfortunately, it's not sufficient even for a linear change of basis. An IS combined with a PF is sufficient for non-mixed spaces. Mixed/staggered spaces need even more because the transformation is defined through quadrature and projection. Maybe the transformation should be entirely handled by the DM instead of giving the user objects that apply the transformation.
> 
> But is the interface going to be DMTransformVec(dm,X,Y,dmt,Yt) that transforms the perturbation Y to the state X into transformed variables according to dmt? Or should we be getting back an object that does this sort of transformation, so that we can amortize some setup coming from matching dmt relative to dm? (Note that we also need an inverse transformation.)
>  
> 
> We could use the same mechanism to implement alternative splits on the same topology -- these are different DMs sharing the topology internally.  Instead of pushing and popping splits, they can be retrieved by name as
>        DMGetSplitDM(DM dm, const char *splitname, DM *splitdm);
> Then calling DMGetFieldDMs on the splitdm retrieves the corresponding fields.
> Again, splitdm and dm can share common data structures, as arranged by the particular DM implementation.
> The advantage of this approach over push/pop is that we can simultaneously use different splits of the same topology.
> 
> It is rather easy to apply the same scheme and have DMs serve up (overlapping) domain decompositions useful for (G)ASM.
> I'm now implementing both of these schemes (fieldsplit and subdomains) on top of libMesh, but the code resides in the libMesh source tree to avoid a circular dependence: as designed right now libMesh depends on PETSc.  The API has to live in DM, though.
> 



