[petsc-dev] [petsc-users] Multigrid with defect correction
Barry Smith
bsmith at mcs.anl.gov
Mon Mar 6 09:45:28 CST 2017
> On Mar 6, 2017, at 5:38 AM, Matthew Knepley <knepley at gmail.com> wrote:
>
> On Sun, Mar 5, 2017 at 6:02 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> > On Mar 5, 2017, at 11:18 AM, Jed Brown <jed at jedbrown.org> wrote:
> >
> > Barry Smith <bsmith at mcs.anl.gov> writes:
> >
> >> I've looked at the code again and I'm afraid that just "adding a second DM" is not trivial. The DMKSP/SNES/TS stuff is all built around a single DM and has stuff like ->dmoriginal in it (do we have two dmoriginals, with all the complications that brings?).
> >>
> >> Perhaps the entire KSPSetComputeOperators() model was misguided and
> >> I should have stuck with the "you fill up all the needed
> >> information for the levels of PCMG" before you do the multigrid
> >> setup, instead of providing callbacks that can fill them up on
> >> the fly "as needed".
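To make the contrast concrete, here is a minimal sketch of the two models; ksp, pc, user, the user routine ComputeMatrix, and the per-level matrices Alevel[] are assumed to exist on the user side and are not PETSc API:

    /* Model 1 (current): register a callback; each level's operator is
       built on the fly "as needed" during the multigrid setup */
    ierr = KSPSetComputeOperators(ksp,ComputeMatrix,&user);CHKERRQ(ierr);

    /* Model 2 (fill up front): assemble every level's operator yourself and
       hand it to the corresponding smoother before the multigrid setup */
    PetscInt l,nlevels;
    ierr = PCMGGetLevels(pc,&nlevels);CHKERRQ(ierr);
    for (l=0; l<nlevels; l++) {
      KSP smoother;
      ierr = PCMGGetSmoother(pc,l,&smoother);CHKERRQ(ierr);
      ierr = KSPSetOperators(smoother,Alevel[l],Alevel[l]);CHKERRQ(ierr);
    }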
> >
> > How would this work when the PCMG is deeply nested and you want to
> > rediscretize on coarse grids?
>
> So, for example, a PCFIELDSPLIT with a PCMG on one of the fields.
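For concreteness, a sketch of setting up that nesting programmatically, assuming a fieldsplit PC that has already been set up; note the inner PCMG then needs its coarse operators from somewhere:

    KSP      *subksp;
    PC       subpc;
    PetscInt nsplits;
    ierr = PCFieldSplitGetSubKSP(pc,&nsplits,&subksp);CHKERRQ(ierr);
    ierr = KSPGetPC(subksp[0],&subpc);CHKERRQ(ierr);
    ierr = PCSetType(subpc,PCMG);CHKERRQ(ierr);   /* multigrid on field 0 only */
    ierr = PetscFree(subksp);CHKERRQ(ierr);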
>
> Good point, but I don't think anyone can actually do this now without having Jed hold our hand and write more obscure hook code. (And any design that requires the knowledge of a single guru is not a good design.)
>
> PCFieldSplitSetDefaults() does have the possibility of calling DMCreateFieldDecomposition() or DMCreateSubDM(), but, for example, neither DMCreateSubDM_DA() nor DMCreateFieldDecomposition_DA() knows anything about DMKSP, so the subDM that would be associated with the PCMG cannot rediscretize, since it won't know what functions to use; in addition, the subDM doesn't get the "field" of the "current solution" that needs to be linearized around. It looks like DMCreateFieldDecomposition() and/or DMCreateSubDM() need to have hooks added for them (there are currently subdomain, refine, and coarsen hooks, but none for "fields"). Even with the addition of the hooks there is no API for the user to provide a "linearize on a particular field" to SNES to be passed down through the hooks, so that has to be added. And what about linearization on a particular field of a particular field, for when PCFIELDSPLIT is used inside a PCFIELDSPLIT, etc.? How are we going to support that generally?
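To be explicit about the gap, here is a purely hypothetical "field" hook mirroring the shape of the existing DMCoarsenHookAdd()/DMSubDomainHookAdd(); nothing like this exists today:

    /* hypothetical API, does not exist: the first hook would run when
       DMCreateSubDM()/DMCreateFieldDecomposition() creates a sub-DM (so the
       DMKSP information could be propagated); the second would move the
       relevant piece of the current solution onto the field */
    PetscErrorCode DMFieldHookAdd(DM dm,
        PetscErrorCode (*fieldhook)(DM dm,DM subdm,void *ctx),
        PetscErrorCode (*restrictfieldhook)(DM dm,VecScatter toField,DM subdm,void *ctx),
        void *ctx);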
>
> Backing up to the case that is supported, PCMG, note that SNESSetUpMatrices() has the code
> {
>   KSP ksp;
>   ierr = SNESGetKSP(snes,&ksp);CHKERRQ(ierr);
>   ierr = KSPSetComputeOperators(ksp,KSPComputeOperators_SNES,snes);CHKERRQ(ierr);
>   ierr = DMCoarsenHookAdd(snes->dm,DMCoarsenHook_SNESVecSol,DMRestrictHook_SNESVecSol,snes);CHKERRQ(ierr);
> }
>
> so SNES essentially knows about PCMG already; true, it doesn't loop over the levels directly and compute the "current solution", that being handled by DMRestrictHook_SNESVecSol, but is it really so different? It is in the snes.c file. SNES knows about the PCMG pattern! Are we going to have to "teach" SNES the same kind of thing for the other, more complicated situations, like a PCFIELDSPLIT containing a PCMG, with tons of specific hooks for the different cases?
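For readers following along, a paraphrased one-level sketch of what that restrict hook accomplishes (the actual source also handles restriction between intermediate levels, so this is not the exact code):

    static PetscErrorCode DMRestrictHook_SNESVecSol(DM dmfine,Mat Restrict,Vec rscale,Mat Inject,DM dmcoarse,void *ctx)
    {
      SNES           snes = (SNES)ctx;
      Vec            Xfine,Xcoarse;
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = SNESGetSolution(snes,&Xfine);CHKERRQ(ierr);
      ierr = DMGetNamedGlobalVector(dmcoarse,"SNESVecSol",&Xcoarse);CHKERRQ(ierr);
      /* inject the current state downward; rediscretization on dmcoarse then
         linearizes around this vector */
      ierr = MatRestrict(Inject,Xfine,Xcoarse);CHKERRQ(ierr);
      ierr = DMRestoreNamedGlobalVector(dmcoarse,"SNESVecSol",&Xcoarse);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }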
>
> I find the use of the hooks and the DMGetDMKSPWrite() stuff already confusing, and yet we still only handle a very simple case with it. How can we manage the complexity once we use it more generally?
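For anyone who hasn't stared at it recently, the pattern in question is roughly the following (my summary, not the exact source):

    /* read access may return a DMKSP inherited from the original DM; write
       access is copy-on-write, so a coarsened DM doesn't clobber its
       parent's callbacks when its own are changed */
    DMKSP kdm;
    ierr = DMGetDMKSP(dm,&kdm);CHKERRQ(ierr);
    ierr = DMGetDMKSPWrite(dm,&kdm);CHKERRQ(ierr);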
>
>
> Perhaps the solution is to deprecate KSPSetOperators(), SNESSetFunction(), SNESSetJacobian(), TSSetIFunction(), etc., and instead directly use DMKSPSetComputeOperators() and friends; you need to remind me again why we need to have a DMKSP instead of just putting the function pointers into the DM. Would this vastly simplify things?
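In code the proposal reads roughly as follows, with ComputeMatrix the same assumed user callback as in the sketch above:

    ierr = DMKSPSetComputeOperators(dm,ComputeMatrix,&user);CHKERRQ(ierr);
    ierr = KSPSetDM(ksp,dm);CHKERRQ(ierr);
    ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);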
>
> The DMSNES and friends are the parts that are solver specific, since a given DM can be associated with several solvers.
No, that is not the reason they exist. Every DM actually contains its own DMKSP, DMSNES, and DMTS, so at any point in time a DM can have only one of each DMXXX. We had arguments over sharing a DM among multiple solvers, but I think that is a goofy optimization.
> That is why the resolution-dependent information can go there. You could have stuck all that info in the solver, I guess, but perhaps Jed has a good reason not to do that (maybe to make it easier to pass around, or maybe to be extensible to different DMs).
Hmm, since PC/KSP/SNES/TS know about DM, why don't we just stick that information in the solver? It seems simpler; there must be some reason.
>
> I am not against deprecation there, but we have to preserve an easy way for people to just feed their crap in, meaning no creating a DMSHELL or other setup.
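That easy way being, for example, nothing more than the classic:

    ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);   /* no DM, no DMSHELL, just a matrix */
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
    ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);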
>
> Matt
>
> Perhaps we need to revisit what a DM is? Looking at struct _p_DM is rather scary.
>
> >
> >> We could possibly throw out all the coarsen/refine hook stuff and
> >> the DMKSP construct.
> >
> > I don't think the coarsen or refine stuff is specific to KSP? How would
> > you intend to do coarse levels for FAS as a nonlinear preconditioner?
> >
> >> Essentially we'd only be requiring SNESComputeJacobian to know
> >> about the PCMG API to provide the coarser grid operators. The
> >> coarsen/refine hook stuff seems to be there only to hide the
> >> PCMG API from SNESComputeJacobian. A lot of complication for what
> >> real benefit besides our programming egos?
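A hypothetical sketch of that alternative, with the Jacobian computation reaching into the PCMG directly; Alevel[] again stands in for per-level matrices assembled by rediscretization:

    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
    ierr = PCMGGetLevels(pc,&nlevels);CHKERRQ(ierr);
    for (l=0; l<nlevels-1; l++) {   /* finest level is set as usual */
      KSP smoother;
      ierr = PCMGGetSmoother(pc,l,&smoother);CHKERRQ(ierr);
      ierr = KSPSetOperators(smoother,Alevel[l],Alevel[l]);CHKERRQ(ierr);
    }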
> >
> > Why just PCMG, not every possible nesting of PCMG inside other PCs?
>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener