[petsc-dev] [petsc-users] Multigrid with defect correction

Barry Smith bsmith at mcs.anl.gov
Tue Feb 28 10:51:54 CST 2017


> On Feb 28, 2017, at 4:03 AM, Dave May <dave.mayhem23 at gmail.com> wrote:
> 
> 
> 
> On 26 February 2017 at 05:30, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> Barry Smith <bsmith at mcs.anl.gov> writes:
> 
> >> On Feb 25, 2017, at 7:35 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> >>
> >> Barry Smith <bsmith at mcs.anl.gov> writes:
> >>>     Do you think this is a reasonable approach or am I missing
> >>>     something fundamental? I am assuming generally for the "higher
> >>>     order" DM the Mat it returns is a MATSHELL or a new matrix class
> >>>     built on "tensor contractions" and that kind of nonsense. I don't
> >>>     want to do all the coding and then have it turn out that it is
> >>>     totally useless for CEED etc.
> >>
> >> Well, this is exactly what we do in pTatin.
> >
> >   So why didn't you fix it in PETSc at that time?
> 
> I didn't know it "didn't work" because what we were doing did work.  (I
> think we used PCMG interfaces directly; we were mixing unassembled and
> assembled representations, plus Galerkin on coarser levels.)
> 
> 
> I can confirm that Jed's recollection is correct.
> 
> The configuration used in pTatin was performed in the context of a non-linear problem. Essentially all the MG setup was conducted during my computeJacobian() method. The setup/config for the different operators was a little delicate, but the standard PCMG API permitted mixing matrix-free, assembled, and Galerkin coarse operators.

   Sure, the PCMG API has the flexibility; it is the DM usage in PCMG etc. that has only a single DM and no way to handle mat and pmat separately.

> For the Galerkin case, I performed the call to MatPtAP myself in my computeJacobian() function and would stuff the result into the appropriate KSP defining the smoother.

   PCMG can actually do this for you if you tell it to use Galerkin, so you shouldn't need to do it yourself (this works with both the "standard" PCMG interface and its use of DM).

   Barry

> 
>  
> 
> >> I would ask, why just two discretizations?  I've always thought a better
> >> interface would be for Mats (and any other operators/functions) to have
> >> optional approximations or supplementary data attached.  We can do this
> >> with PetscObjectCompose, but that's hard to work with in a structured
> >> way.  Anyway, I think I would rather just have the Amat with ability to
> >> attach one or more Pmats.
> >
> >   Understood. I don't like the API as "attach one or more Pmats to a
> >   Mat" but instead prefer to think of it as a linear operator that has
> >   one or more "levels of approximation"
> 
> Should they be "levels" (numbered?) or named approximations?
> 
> >   that can optionally be indicated; which one you wish to use is up
> >   to the algorithm using the linear operator. Changing this in PETSc
> >   would require some conceptual changes and refactoring. It would
> >   simplify things like SNESSetJacobian() because we would not need
> >   to track mat and pmat, but it would not be terribly difficult for
> >   the PETSc developers or users.
> >
> >   If we used this model, instead of adding a new DM to the interface
> >   do we keep the one DM but have a way to "attach" other "lower order"
> >   DMs to that one DM that can be accessed?
> 
> Presumably.  Is a DM a function space, an operator, or both?
> 
> > DMCreateMatrix(dm,&mat);
> > pmat = mat;
> > DMGetLowerDM(dm,&ldm);
> > if (ldm) {
> >   DMCreateMatrix(ldm,&pmat);
> > }
> > KSP/SNES/TSSetIJacobian(ts,mat,pmat,...);
> >
> > more generally DMGetLowerDM(dm,PetscOrderIndicator i,&ldm);  Similarly MatGetLowerMat(mat,PetscOrderIndicator i,&lmat)?
> >
> > ----- an aside
> >   BTW: this introduces a possible "missing" object in PETSc. We have
> >   as a fundamental object the Mat which is a linear operator, we do
> >   not have a fundamental object for a "general" operator, instead we
> >   accept user provided functions directly (SNESSetFunction(),
> >   TSSetIFunction() etc).  I've resisted the PetscOperator because I
> >   felt it was overly abstract (and thus confusing to most users) and
> >   is more difficult to implement cleanly since the operator may depend
> >   on more than one input vector (TSSetIFunction depends on u and udot;
> >   would we have a PetscOperator2 that had two input vectors?).
> 
> DMSNES and DMTS effectively provide this, it's just intimately coupled
> to the DM.  I advocated before for separating these because function
> spaces have meaning independent of operators defined on them, but that's
> more of a performance consideration that assumes DM implementations
> don't have cheap sharing of common data.
> 
> > I did try the PF object but it is very limited and rarely
> > used. Certainly I think for a wide class of users, especially Fortran,
> > SNESSetFunction() is understandable but
> >
> > PetscOperatorCreate(&operator)
> > PetscOperatorSetRawFunction(operator, function())
> > SNESSetOperator(snes,operator)
> >
> > becomes gibberish, and so we would lose many potential users. Of course we could introduce the PetscOperator and maintain the simplicity of the current interface by having things like SNESSetFunction() automatically handle the conversion to the "operator" interface, and have a new SNESSetOperator() for the general case. I hate having this kind of "wrapper" code because it usually bit-rots over time, and I don't like having multiple equivalent interfaces for the same effect (how do you document them? how does the user decide which one to use? I hate perl :-); there should be one way to do each thing).
> 
> SNESSetFunction calls DMSNESSetFunction.
> 
> > Or does one ever need "lower order operator evaluation"? Is lower
> > order linearization enough?
> 
> No, dual-order discretizations are possibly more important for nonlinear
> problems.
> 
> > Currently with mat and pmat we only support lower order linearization,
> > we don't for example allow passing a higher order and lower order
> > nonlinear function both to SNESSetFunction. If the lower order
> > operator is never needed, only the lower order linearization then
> > PetscOperator (which would allow managing several orders of general
> > operator) would not be needed so API for users is simpler. Or we could
> > introduce lower order linear operators now and not worry about lower
> > order function evaluations for years?  --- end of the aside
> 
> One common application of lower order discretizations is in the
> function provided to SNESSetNGS.
> 
> > I also don't disagree with Matt: should we just "complete" support for two levels of approximation by having two proper DMs for TS/KSP/SNES, and see how that goes, rather than jumping directly from the current two-level support (far from perfect, since users need to manually do ugly stuff because we only have one DM) to multilevel support? I wouldn't change the current interface to do this; instead I would just add TS/SNES/KSPSetDMs()
> 
> Yup.
> 
> > Imagine poor Ed's book if we suddenly change stuff like this in PETSc :-(  So the question is: what should/could we do now to improve flexibility without requiring Ed to start all over again? That is, how gradually do we need to introduce this kind of overhaul, and with how much backward compatibility?
> 
> I would say quite gradual and with at least a full release cycle of
> backward compatibility.  And although I think it is ultimately better
> for extensibility, I'm not 100% convinced it is worth the cost to users
> to overhaul.  One nice thing about the current interface is that it
> forces people to acknowledge that those two arguments could be
> different, so they may read or find examples.  If we switch so our
> operators have a basket of approximations, lots of users may never learn
> about that basket.
> 
