[petsc-dev] vector inner products

Jed Brown jed at jedbrown.org
Thu Apr 12 16:26:07 CDT 2018


"Oxberry, Geoffrey Malcolm" <oxberry1 at llnl.gov> writes:

> Agreed; we find that the Hilbert-space inner product improves convergence a great deal in mesh-refinement studies with quasi-Newton methods in a discretize-then-optimize approach.
>
> The best example I can think of to argue against “hiding” the inner
> product inside a DM

I don't think of it as hiding, just associating.  Making a link from a
Vec to a Mat violates the usual dependency direction.  The Vec can
unwittingly carry a reference to a Mat, but the normal Vec operations
shouldn't be changed.  We do create this sort of dependency inversion
with DM, which changes the way VecView behaves, for example.

If DM isn't used, we would need to either create this extra association
for the Mat (gross) or build the plumbing to inform every user of the
Vec about the associated inner product.  But I think every use that
needs this special inner product is either already aware of the DM or
logically should be aware of it.
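
A minimal sketch of what such a DM-provided inner product amounts to,
assuming a mass matrix M is available (e.g. from DMCreateMassMatrix(),
mentioned below in the thread); the helper name is hypothetical:

  #include <petscmat.h>

  /* Sketch: the M-weighted inner product (u,v)_M = v^H (M u), with M a
     mass matrix.  Helper name and setup are illustrative only. */
  static PetscErrorCode DMWeightedDot(Mat M, Vec u, Vec v, PetscScalar *ip)
  {
    Vec            Mu;
    PetscErrorCode ierr;

    ierr = VecDuplicate(u, &Mu);CHKERRQ(ierr);
    ierr = MatMult(M, u, Mu);CHKERRQ(ierr);  /* Mu = M u        */
    ierr = VecDot(Mu, v, ip);CHKERRQ(ierr);  /* ip = v^H (M u)  */
    ierr = VecDestroy(&Mu);CHKERRQ(ierr);
    return 0;
  }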

> instead of a Mat is that it could be used to automatically scale the
> KKT systems solved in interior-point methods (e.g., IPOPT); poorly
> scaled problems sometimes arise in applications.  Admittedly, these
> inner products tend to be diagonal, so there may be a better
> interface or abstraction for this functionality.
>
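In the diagonal case, such an inner product reduces to a weighted dot
product.  A minimal sketch, assuming a Vec d holding positive diagonal
weights; the helper name is hypothetical:

  #include <petscvec.h>

  /* Sketch: (u,v)_D = v^H (d .* u) for a diagonal weight vector d. */
  static PetscErrorCode DiagWeightedDot(Vec d, Vec u, Vec v, PetscScalar *ip)
  {
    Vec            du;
    PetscErrorCode ierr;

    ierr = VecDuplicate(u, &du);CHKERRQ(ierr);
    ierr = VecPointwiseMult(du, d, u);CHKERRQ(ierr); /* du = d .* u      */
    ierr = VecDot(du, v, ip);CHKERRQ(ierr);          /* ip = v^H (d.*u)  */
    ierr = VecDestroy(&du);CHKERRQ(ierr);
    return 0;
  }
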
>> On Apr 12, 2018, at 13:28, Stefano Zampini <stefano.zampini at gmail.com> wrote:
>> 
>> The gradient norm is the one induced by the mass matrix of the DM associated with the control.
>> In principle, TaoGradientNorm() can be replaced by DMCreateMassMatrix() plus a solve with the mass matrix.
>> 
>> For PDE-constrained optimization, the “gradient norm” is crucial, since we consider optimization problems in Banach spaces.
>> We should keep supporting it, perhaps in a different form than it has now, but keep it.
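
A minimal sketch of that replacement, assuming M comes from
DMCreateMassMatrix() and that the gradient norm is the dual norm
||g||_{M^{-1}} = sqrt(g^T M^{-1} g); the helper name is hypothetical:

  #include <petscksp.h>

  /* Sketch: gradients live in the dual space, so the norm involves a
     solve with the mass matrix: compute r = M^{-1} g (the Riesz
     representative), then ||g||_{M^{-1}} = sqrt(g^H r). */
  static PetscErrorCode GradientNormDual(Mat M, Vec g, PetscReal *nrm)
  {
    KSP            ksp;
    Vec            r;
    PetscScalar    ip;
    PetscErrorCode ierr;

    ierr = VecDuplicate(g, &r);CHKERRQ(ierr);
    ierr = KSPCreate(PetscObjectComm((PetscObject)g), &ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, M, M);CHKERRQ(ierr);
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
    ierr = KSPSolve(ksp, g, r);CHKERRQ(ierr);  /* r = M^{-1} g */
    ierr = VecDot(r, g, &ip);CHKERRQ(ierr);    /* ip = g^H r   */
    *nrm = PetscSqrtReal(PetscRealPart(ip));
    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
    ierr = VecDestroy(&r);CHKERRQ(ierr);
    return 0;
  }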
>> 
>>> On Apr 12, 2018, at 11:21 PM, Jed Brown <jed at jedbrown.org> wrote:
>>> 
>>> Are you thinking about this PR again?
>>> 
>>> https://bitbucket.org/petsc/petsc/pull-requests/506
>>> 
>>> There's an issue here: Krylov methods operate in the discrete inner
>>> product, while some higher-level operations are of interest in
>>> (approximations of) continuous inner products (or norms).  The object in
>>> PETSc that endows continuous attributes (like a hierarchy, subdomains,
>>> fields) on discrete quantities is DM, so my first inclination is that
>>> any continuous interpretation of vectors, including inner products and
>>> norms, belongs in DM.
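
In symbols, with {phi_i} a finite-element basis and M the corresponding
mass matrix (standard notation, spelled out here for concreteness):

  (u, v)_{\ell^2} = v^T u                                 (discrete)
  (u_h, v_h)_{L^2} = \int_\Omega u_h v_h \, dx = v^T M u  (continuous)
  where M_{ij} = \int_\Omega \phi_i \phi_j \, dx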
>>> 
>>> "Munson, Todd" <tmunson at mcs.anl.gov> writes:
>>> 
>>>> There is a bit of code in TAO that allows the user to change the norm to 
>>>> a matrix norm.  This was introduced to get some mesh-independent 
>>>> behavior in one example (tao/examples/tutorials/ex3.c).  That 
>>>> norm, however, does not propagate down into the KSP methods
>>>> and is only used for testing convergence of the nonlinear
>>>> problem.
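
(Here "matrix norm" presumably means the weighted norm
||x||_M = \sqrt{x^T M x} for a symmetric positive-definite M.)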
>>>> 
>>>> A few questions then:  Is similar functionality needed in SNES?  Are 
>>>> TAO and SNES even the right place for this functionality?  Should 
>>>> it belong to the Vector class so that you can change the inner 
>>>> products and have all the KSP methods (hopefully) work 
>>>> correctly?
>>>> 
>>>> Note that this discussion brings us to the brink of supporting an 
>>>> optimize-then-discretize approach.  I am not convinced we should 
>>>> go down that rabbit hole.
>>>> 
>>>> Thanks, Todd.
>> 

