[petsc-dev] Discussion about time-dependent optimization moved from PR

Stefano Zampini stefano.zampini at gmail.com
Sun Oct 15 10:35:20 CDT 2017


If anyone wants to know what this discussion is about, see
https://bitbucket.org/petsc/petsc/pull-requests/766/support-for-pde-constrained-optimization

I'll try to summarize the interfaces here. Hong's code API is labeled with
H, mine with S.

Both methods support cost functions (i.e., objective functions given by a
time integral): H TSSetCostIntegrand(), S TSSetObjective().
With TSSetCostIntegrand you set a single function that computes the numcost
cost integrals; TSSetObjective instead appends to a list.
The prototypes for the function evaluation are similar, except that I also
carry over a vector which stores the current values of the parameters.
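To fix ideas on what a cost-integrand callback computes, here is a toy scalar analogue in plain Python (this is not either API; the ODE, quadrature rule, and sizes are invented for illustration): the cost integral is accumulated alongside the time stepper, the way a quadrature variable is integrated with the state.

```python
# Toy analogue: accumulate J(p) = integral_0^T u(t)^2 dt alongside the stepper.
# Problem: du/dt = -p*u, forward Euler, left-endpoint quadrature.
def forward_with_cost(p, u0=1.0, T=1.0, N=1000):
    h = T / N
    u, J = u0, 0.0
    for _ in range(N):
        J += h * u * u        # integrand r(u) = u^2 sampled at the left endpoint
        u += h * (-p * u)     # forward Euler step of the state
    return u, J

u, J = forward_with_cost(p=1.0)
# J tends to the exact value (1 - exp(-2))/2 ~ 0.4323 as N grows
```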

Point-form functionals (i.e., objective functions that are not integrated
over time but just sampled at a given time) drive the initialization of
the adjoint variables; they are not supported explicitly in Hong's
code. S: TSSetObjective()
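To make concrete what "drive the initialization of the adjoint variables" means, here is a toy scalar sketch in plain Python (not either API; the problem and numbers are invented): the sampled functional Phi(u(T)) = u(T)^2 supplies the terminal adjoint value, and the backward sweep turns it into a gradient.

```python
# Toy discrete adjoint for a point-form (terminal) functional Phi = u_N^2 on
# u_{k+1} = (1 - h*p)*u_k  (forward Euler for du/dt = -p*u).
def gradient_terminal_cost(p, u0=1.0, T=1.0, N=1000):
    h = T / N
    us = [u0]
    for _ in range(N):                  # forward sweep, storing the trajectory
        us.append(us[-1] * (1.0 - h * p))
    lam = 2.0 * us[-1]                  # adjoint initialized from dPhi/du(T)
    grad = 0.0
    for k in range(N - 1, -1, -1):      # backward (adjoint) sweep
        grad += lam * (-h * us[k])      # lambda_{k+1} * d(step)/dp
        lam *= (1.0 - h * p)            # lambda_k = (d step/du_k)^T lambda_{k+1}
    return us[-1] ** 2, grad
```

Since the sweep differentiates the discrete scheme exactly, a central finite difference on Phi(p) reproduces this gradient to rounding error.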

Both methods need the Jacobian of the DAE wrt the parameters: H
TSAdjointSetRHSJacobian(), S TSSetGradientDAE()
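Whichever entry point is used, the callback's job is the same: supply dF/dp so the adjoint sweep can accumulate sum_k lambda_{k+1}^T (dF/dp). A toy scalar sketch in plain Python (not either API; ODE and cost invented) for an integral cost:

```python
# Toy adjoint gradient of J(p) = sum_k h*u_k^2 for u_{k+1} = (1 - h*p)*u_k.
# The "Jacobian of the DAE wrt the parameters" here is d(step)/dp = -h*u_k.
def gradient_integral_cost(p, u0=1.0, T=1.0, N=1000):
    h = T / N
    us, J = [u0], 0.0
    for _ in range(N):
        J += h * us[-1] ** 2
        us.append(us[-1] * (1.0 - h * p))
    lam, grad = 0.0, 0.0                # pure integral cost: lambda_N = 0
    for k in range(N - 1, -1, -1):
        grad += lam * (-h * us[k])      # lambda_{k+1} * dF/dp contribution
        lam = 2.0 * h * us[k] + (1.0 - h * p) * lam  # integrand source + Jacobian^T
    return J, grad
```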

Initial-condition dependence on the parameters is computed implicitly in
Hong's code (and limited to linear dependence on all the variables); instead, I
have TSSetGradientIC, which is a general way to express initial-condition
dependence through an implicit function.
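A scalar sketch of the implicit initial-condition dependence, in plain Python (G, the ODE, and the cost are invented for illustration): with G(u0, p) = u0 - p^2 = 0, the implicit function theorem gives du0/dp = -G_u0^{-1} G_p = 2p, and the adjoint value that reaches t = 0 is chained through it.

```python
# Toy gradient with the initial condition defined implicitly by
# G(u0, p) = u0 - p**2 = 0, for Phi = u_N^2 on u_{k+1} = (1 - h*p)*u_k.
def gradient_with_implicit_ic(p, T=1.0, N=1000):
    h = T / N
    u0 = p * p                          # solve G(u0, p) = 0 for u0
    us = [u0]
    for _ in range(N):
        us.append(us[-1] * (1.0 - h * p))
    lam, grad = 2.0 * us[-1], 0.0       # adjoint seeded by dPhi/du(T)
    for k in range(N - 1, -1, -1):
        grad += lam * (-h * us[k])      # explicit parameter dependence
        lam *= (1.0 - h * p)            # after the loop, lam == dPhi/du0
    grad += lam * 2.0 * p               # chain rule: du0/dp = -G_u0^{-1} G_p = 2p
    return us[-1] ** 2, grad
```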

I'm not very familiar with the TSForward interface; Hong can elaborate
more. But my gut feeling is that its public API for cost functions
duplicates the one used by TSAdjoint. TLMTS (my TS that solves
the tangent linear model) reuses the callbacks set by TSSetGradientDAE and
TSSetGradientIC. Hong, do you also need to integrate some quadrature
variable in your TSForward code?
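The quadrature question is exactly where a forward-sensitivity run needs one; a toy scalar sketch in plain Python (not either code; du/dt = -p*u and the cost are invented) integrates the sensitivity s = du/dp alongside the state and accumulates dJ/dp as a quadrature variable.

```python
# Toy tangent linear model: propagate s = du/dp forward with the state and
# accumulate dJ/dp for J = sum_k h*u_k^2 as a quadrature variable.
def tlm_gradient(p, u0=1.0, T=1.0, N=1000):
    h = T / N
    u, s, grad = u0, 0.0, 0.0
    for _ in range(N):
        grad += 2.0 * h * u * s         # quadrature: (dr/du) * s
        s = (1.0 - h * p) * s - h * u   # TLM step: (dF/du)*s + dF/dp
        u *= (1.0 - h * p)              # state step
    return grad
```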



2017-10-15 18:14 GMT+03:00 Matthew Knepley <knepley at gmail.com>:

> Someone had to do it.
>
> I will not try to frame the entire discussion. Barry has already thrown
> down the "show me your interface" gauntlet. However, I want to emphasize
> one point that may have been lost in the prior discussion. Every example I
> have looked at so far is focused on the reduced space formulation of the
> optimization problem. However, I am interested in the full space
> formulation so that I can do multigrid on the entire optimal control
> problem. This is not a new idea, in particular Borzi does this in SIAM
> Review in 2009. I think we have a tremendous opportunity here since other
> codes cannot do this, it has the potential (I think) for much better
> globalization, and perhaps can be faster.
>
> So, when we come up with interface proposals, I think we should keep a
> full space solution method in mind.
>
>   Thanks,
>
>      Matt
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/~mk51/>
>



-- 
Stefano

