<div dir="ltr">If anyone wants to know what this discussion is about, see <a href="https://bitbucket.org/petsc/petsc/pull-requests/766/support-for-pde-constrained-optimization">https://bitbucket.org/petsc/petsc/pull-requests/766/support-for-pde-constrained-optimization</a><div><br></div><div>I'll try to summarize the interfaces here. Hong's code API is labeled with H, mine with S.</div><div><br><div>Both methods support cost functions (i.e. objective functions given by a time integral): H TSSetCostIntegrand(), S TSSetObjective()</div><div>With TSSetCostIntegrand you set a single function that computes numcost cost integrals; TSSetObjective instead appends to a list.</div><div>The prototype for the function evaluation is similar, except that I also carry over a vector that stores the current values of the parameters. </div><div><br></div><div>Point-form functionals (i.e., objective functions that are not integrated over time, but just sampled at a given time) drive the initialization of the adjoint variables; they are not supported explicitly in Hong's code. S: TSSetObjective()</div><div><br></div><div>Both methods need the Jacobian of the DAE wrt the parameters: H TSAdjointSetRHSJacobian(), S TSSetGradientDAE()</div></div><div><br></div><div>Initial condition dependence on the parameters is implicitly computed in Hong's code (limited to linear dependence on all the variables); instead, I have TSSetGradientIC, which is a general way to express initial condition dependence via an implicit function.</div><div><br></div><div>I'm not very familiar with the TSForward interface; Hong can elaborate more. But my gut feeling is that the public API for cost functions is a duplicate of the one used by TSAdjoint. TLMTS (my TS that solves the tangent linear model) reuses the callbacks set by TSSetGradientDAE and TSSetGradientIC. Hong, do you also need to integrate some quadrature variable in your TSForward code? 
</div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">2017-10-15 18:14 GMT+03:00 Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Someone had to do it.<div><br></div><div>I will not try to frame the entire discussion. Barry has already thrown down the "show me your interface" gauntlet. However, I want to emphasize one point that may have been lost in the prior discussion. Every example I have looked at so far is focused on the reduced space formulation of the optimization problem. However, I am interested in the full space formulation so that I can do multigrid on the entire optimal control problem. This is not a new idea, in particular Borzi does this in SIAM Review in 2009. I think we have a tremendous opportunity here since other codes cannot do this, it has the potential (I think) for much better globalization, and perhaps can be faster.</div><div><br></div><div>So, when we come up with interface proposals, I think we should keep a full space solution method in mind.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt<span class="HOEnZb"><font color="#888888"><br clear="all"><div><br></div>-- <br><div class="m_3105174786533026374gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.caam.rice.edu/~mk51/" target="_blank">https://www.cse.buffalo.edu/~<wbr>knepley/</a><br></div></div></div></div></div>
</font></span></div></div>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature" data-smartmail="gmail_signature">Stefano</div>
</div>