[petsc-users] Integrating TAO into SNES and TS
Barry Smith
bsmith at mcs.anl.gov
Tue Sep 1 18:26:07 CDT 2015
> On Sep 1, 2015, at 6:10 PM, Justin Chang <jychang48 at gmail.com> wrote:
>
> Let me back up a bit.
>
> Say I am solving -div[D(x)grad[c]] = 0 on a unit cube with a hole. The interior boundary has a Dirichlet BC c = 1 and the exterior boundary has a Dirichlet BC c = 0. If I solve the above in the strong sense (e.g., finite volume, finite difference, etc.), then -div[D(x)grad[c]] = 0 holds at every grid cell (hence local mass conservation).
>
> According to the discrete maximum principle and the prescribed BCs, my solution should satisfy 0 <= c <= 1. However, if D(x) is anisotropic, the KSP/SNES solver will return values of c less than 0 and/or greater than 1. Local mass balance still holds for all cells regardless.
>
> Now, if TAO is used and the Vec XL and XU of TaoSetVariableBounds are set to 0 and 1 respectively, I ensure that 0 <= c <= 1 even for anisotropy. However, our experiments have shown that the "corrected" regions, i.e., the regions where the DMP would otherwise have been violated, no longer satisfy local mass balance, i.e., div[D(x)grad[c]] != 0 at the element level. The unaffected regions, i.e., the regions where the DMP was satisfied regardless, are still locally conservative.
>
> We have shown through MATLAB's quadprog that we can achieve both the DMP and local mass balance if we apply the mixed form (using the least-squares finite element method) of the diffusion equation and subject the optimization solve to both the upper/lower bounds and the equality constraints (divergence of the flux = 0 in Omega). If no bounds/constraints are set, the objective function is equal to zero. With only bounds or only constraints, the objective function is equal to some non-zero value. With both bounds and constraints, the objective function is a slightly larger non-zero value. Basically, the more constraints I apply, the further away my overall solution may drift from the original discretization.
>
> I am curious to see if 1) something like this is possible with TAO
I don't really understand what you want to do, but if it is an optimization problem with constraints, at least in theory it can do it.
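For reference, a rough sketch of how a bound- plus equality-constrained problem could be posed through TAO's constrained interface. This is illustrative only: it assumes a TAO algorithm that accepts general equality constraints (e.g. TAOIPM), error checking is omitted, and every name other than the PETSc routines (FormObjGrad, FormHessian, FormEqualityConstraints, FormEqualityJacobian, Bmat, user) is hypothetical and assumed to be set up by the application.

  /* Sketch: bounds plus a linear equality constraint in TAO */
  Tao tao;
  Vec x, xl, xu, ce;                 /* solution, bounds, equality-constraint residual */

  TaoCreate(PETSC_COMM_WORLD, &tao);
  TaoSetType(tao, TAOIPM);           /* a method that handles general constraints */
  TaoSetInitialVector(tao, x);

  VecSet(xl, 0.0);                   /* the DMP bounds 0 <= c <= 1 */
  VecSet(xu, 1.0);
  TaoSetVariableBounds(tao, xl, xu);

  TaoSetObjectiveAndGradientRoutine(tao, FormObjGrad, &user);
  TaoSetHessianRoutine(tao, A, A, FormHessian, &user);

  /* equality constraints ce(x) = 0, e.g. the discrete div[flux] = 0 */
  TaoSetEqualityConstraintsRoutine(tao, ce, FormEqualityConstraints, &user);
  TaoSetJacobianEqualityRoutine(tao, Bmat, Bmat, FormEqualityJacobian, &user);

  TaoSolve(tao);
  TaoDestroy(&tao);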
> and 2) if I can achieve the SNESVI equivalent in the sense I have described.
I doubt it, because the very definition of VIs says that where the inequality constraints are active the equation is not satisfied (and it cannot be, because there is no solution that satisfies the equations AND satisfies the bounds).
Barry
> Hope this makes a little more sense.
>
> Thanks,
> Justin
>
> On Tue, Sep 1, 2015 at 4:29 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> I really cannot understand what you are saying.
>
> > On Sep 1, 2015, at 5:14 PM, Justin Chang <jychang48 at gmail.com> wrote:
> >
> > So in VI, the i-th component/cell/point of x is "corrected" if it is below lb_i or above ub_i. All other x_i that satisfy their respective bounds remain untouched. Hence only the corrected components will violate the original equation.
>
> I think what you say above is backwards. First of all, x_i would never be < lb_i or > ub_i since it is constrained to be between them. It is just that with a VI, IF x_i is at lb_i or ub_i then F_i(x) is allowed to be nonzero (that is, that equation is allowed to be violated).
>
> >
> > In optimization, every single x_i will be "shifted" such that it satisfies its bounds lb_i and ub_i. Hence it's possible that all components will violate the original equation.
>
> No. With constraints in optimization, they must all be satisfied for one to say that one has a solution.
>
> With VIs the F(x) is NOT a constraint and should not be thought of as a constraint; only the bounds are constraints. Or think of F(x) as a special kind of constraint (that doesn't exist with optimization) that is allowed to be violated in a very special way (when its x_i variable is at a bound).
>
> Barry
>
>
> >
> > Is that the general idea here?
> >
> > On Tue, Sep 1, 2015 at 12:48 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >
> > I think we are talking past each other.
> >
> > With bound-constrained VIs there is no optimization; there is an equation F(x) = 0 (where F may be linear or nonlinear) and constraints a <= x <= b. With VIs the equation F_i(x) = 0 is simply not satisfied if x_i is on a bound (that is, x_i = a_i or x_i = b_i).
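For concreteness, a minimal sketch of that bound-constrained VI setup in PETSc (see also src/snes/examples/tutorials/ex9.c). Error checking is omitted, and the vectors, matrices, and callbacks (FormFunction, FormJacobian, user) are assumed to exist; they are illustrative names, not from this thread.

  SNES snes;
  Vec  x, r, xl, xu;
  Mat  J;

  SNESCreate(PETSC_COMM_WORLD, &snes);
  SNESSetFunction(snes, r, FormFunction, &user);   /* the equation F(x) = 0 */
  SNESSetJacobian(snes, J, J, FormJacobian, &user);

  SNESSetType(snes, SNESVINEWTONRSLS);             /* reduced-space active-set VI solver */
  VecSet(xl, 0.0);                                 /* lower bound a */
  VecSet(xu, 1.0);                                 /* upper bound b */
  SNESVISetVariableBounds(snes, xl, xu);

  SNESSolve(snes, NULL, x);   /* F_i(x) = 0 except where x_i ends up on a bound */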
> >
> > With optimization, if you have equality constraints and inequality constraints, and satisfying an inequality constraint FORCES an equality constraint to not be satisfied, then the constraints are not compatible and the problem isn't properly posed.
> >
> > Barry
> >
> >
> > > On Sep 1, 2015, at 4:15 AM, Justin Chang <jychang48 at gmail.com> wrote:
> > >
> > > But if I add those linear equality constraint equations to my original problem, would they not be satisfied anyway? Say I add this to my weak form:
> > >
> > > Ax = b
> > >
> > > But once I subject x to some bound constraints, Ax != b. Unless I add some sort of penalty where extra weighting is given to this property...
> > >
> > > On Tue, Sep 1, 2015 at 3:02 AM, Matthew Knepley <knepley at gmail.com> wrote:
> > > On Tue, Sep 1, 2015 at 3:46 AM, Justin Chang <jychang48 at gmail.com> wrote:
> > > I would like to simultaneously enforce both the discrete maximum principle and local mass/species balance. Even if a locally conservative scheme like RT0 is used, as soon as these bound constraints are applied, I lose the mass balance.
> > >
> > > What I am saying is, can't you just add "linear equality constraints" as more equations?
> > >
> > > Thanks,
> > >
> > > Matt
> > >
> > > On Tue, Sep 1, 2015 at 2:33 AM, Matthew Knepley <knepley at gmail.com> wrote:
> > > On Tue, Sep 1, 2015 at 3:11 AM, Justin Chang <jychang48 at gmail.com> wrote:
> > > Barry,
> > >
> > > That's good to know thanks.
> > >
> > > On a related note, is it possible for VI to one day include linear equality constraints?
> > >
> > > How are these different from just using more equations?
> > >
> > > Thanks,
> > >
> > > Matt
> > >
> > > Thanks,
> > > Justin
> > >
> > > On Mon, Aug 31, 2015 at 7:13 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> > >
> > > > On Aug 31, 2015, at 7:36 PM, Justin Chang <jychang48 at gmail.com> wrote:
> > > >
> > > > Coming back to this,
> > > >
> > > > Say I now want to ensure the DMP for advection-diffusion equations. The linear operator is now asymmetric and non-self-adjoint (assuming I do something like SUPG or finite volume), meaning I cannot simply solve this problem without any manipulation (e.g. normalizing the equations) using TAO's optimization solvers. Does this statement also hold true for SNESVI?
> > >
> > > SNESVI doesn't care about symmetry, etc.
> > >
> > > >
> > > > Thanks,
> > > > Justin
> > > >
> > > > On Fri, Apr 3, 2015 at 7:38 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> > > >
> > > > > On Apr 3, 2015, at 7:35 PM, Justin Chang <jchang27 at uh.edu> wrote:
> > > > >
> > > > > I guess I will have to write my own code then :)
> > > > >
> > > > > I am not all that familiar with Variational Inequalities at the moment, but if my Jacobian is symmetric and positive definite and I only have lower and upper bounds, doesn't the problem simply reduce to a convex optimization? That is, would SNES act as if it were Tao?
> > > >
> > > > Yes, I think that is essentially correct.
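To spell out the equivalence (a sketch in the notation of this thread, with F(x) = A*x - b, A symmetric positive definite, and bounds l <= x <= u):

  min over l <= x <= u of  1/2 x^T*A*x - x^T*b

  optimality conditions, component by component:
    F_i(x)  = 0   if  l_i < x_i < u_i
    F_i(x) >= 0   if  x_i = l_i
    F_i(x) <= 0   if  x_i = u_i

These are exactly the complementarity conditions of the box-constrained VI, so for an SPD Jacobian with simple bounds the VI solve and the bound-constrained convex optimization solve the same problem.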
> > > >
> > > > Barry
> > > >
> > > > >
> > > > > On Fri, Apr 3, 2015 at 6:35 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> > > > >
> > > > > Justin,
> > > > >
> > > > > We haven't done anything with TS to handle variational inequalities. So you can write your own backward Euler (outside of TS) that solves each time-step problem either as 1) an optimization problem using Tao or 2) a variational inequality using SNES.
> > > > >
> > > > > More adventurously, you could look at the TSTHETA code in TS (which is a general form that includes Euler, Backward Euler and Crank-Nicolson) and see if you can add the constraints to the SNES problem that is solved; in theory this is straightforward, but it would require understanding the current code (which Jed, of course, overwrote :-). I think you should do this.
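A rough sketch of what that route could look like from the user side, assuming the bounds can simply be attached to the SNES that TS uses (which is exactly the part that would need checking inside the theta code). Error checking is omitted, and the callbacks and vectors (FormIFunction, FormIJacobian, J, xl, xu, u, user) are illustrative.

  TS   ts;
  SNES snes;

  TSCreate(PETSC_COMM_WORLD, &ts);
  TSSetType(ts, TSTHETA);
  TSThetaSetTheta(ts, 1.0);                   /* theta = 1 is backward Euler */
  TSSetIFunction(ts, NULL, FormIFunction, &user);
  TSSetIJacobian(ts, J, J, FormIJacobian, &user);

  TSGetSNES(ts, &snes);                       /* the SNES solved at every step */
  SNESSetType(snes, SNESVINEWTONRSLS);
  SNESVISetVariableBounds(snes, xl, xu);      /* constrain each stage solve */

  TSSolve(ts, u);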
> > > > >
> > > > > Barry
> > > > >
> > > > >
> > > > > > On Apr 3, 2015, at 12:31 PM, Justin Chang <jchang27 at uh.edu> wrote:
> > > > > >
> > > > > > I am solving the following anisotropic transient diffusion equation subject to lower bounds of 0:
> > > > > >
> > > > > > du/dt = div[D*grad[u]] + f
> > > > > >
> > > > > > Where the dispersion tensor D(x) is symmetric and positive definite. This formulation violates the discrete maximum principle, so one of the ways to ensure nonnegative concentrations is to employ convex optimization. I am following the procedures in Nakshatrala and Valocchi (2009) JCP and Nagarajan and Nakshatrala (2011) IJNMF.
> > > > > >
> > > > > > The Variational Inequality method gives what I want for my transient case, but what if I want to implement the Tao methodology in TS? That is, what TS functions do I need to set up steps a) through e) for each time step (also, the Jacobian remains the same for all time steps, so I would only compute it once)? Normally I would just call TSSolve() and let the libraries and functions do everything, but I would like to incorporate TaoSolve into every time step.
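Along the lines of Barry's earlier suggestion, one option is a hand-rolled backward Euler loop outside TS with one bound-constrained Tao solve per step. A rough sketch under stated assumptions: M, K, f, dt, nsteps, the user context, and the callbacks StepObjGrad/StepHessian (which use H and rhs held in the context) are all illustrative, and error checking is omitted.

  /* Backward Euler: (M/dt + K) u^{n+1} = (M/dt) u^n + f, recast at each step as
     min 1/2 u^T H u - u^T rhs  s.t.  u >= 0,  with H = M/dt + K fixed for all steps. */
  Tao      tao;
  Mat      H;                               /* H = M/dt + K, built once */
  Vec      u, rhs, ulow, uupp;
  PetscInt step;

  TaoCreate(PETSC_COMM_WORLD, &tao);
  TaoSetType(tao, TAOTRON);                 /* a bound-constrained Newton method */
  TaoSetInitialVector(tao, u);
  VecSet(ulow, 0.0);
  VecSet(uupp, PETSC_INFINITY);
  TaoSetVariableBounds(tao, ulow, uupp);    /* only u >= 0 is active */
  TaoSetObjectiveAndGradientRoutine(tao, StepObjGrad, &user);
  TaoSetHessianRoutine(tao, H, H, StepHessian, &user);   /* Hessian set only once */

  for (step = 0; step < nsteps; step++) {
    MatMult(M, u, rhs);                     /* rhs  = M u^n      */
    VecScale(rhs, 1.0/dt);                  /* rhs  = (M/dt) u^n */
    VecAXPY(rhs, 1.0, f);                   /* rhs += f          */
    TaoSolve(tao);                          /* warm-started from the previous u */
  }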
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > --
> > > > > > Justin Chang
> > > > > > PhD Candidate, Civil Engineering - Computational Sciences
> > > > > > University of Houston, Department of Civil and Environmental Engineering
> > > > > > Houston, TX 77004
> > > > > > (512) 963-3262
> > > > > >
> > > > > > On Thu, Apr 2, 2015 at 6:53 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> > > > > >
> > > > > > An alternative approach is for you to solve it as a (non)linear variational inequality. See src/snes/examples/tutorials/ex9.c
> > > > > >
> > > > > > How you should proceed depends on your long-term goal. What problem do you really want to solve? Is it really a linear time-dependent problem with 0 bounds on U? Can the problem always be represented as an optimization problem easily? What are and what will be the properties of K? For example, if K is positive definite then likely the bounds will remain true without explicitly providing the constraints.
> > > > > >
> > > > > > Barry
> > > > > >
> > > > > > > On Apr 2, 2015, at 6:39 PM, Justin Chang <jchang27 at uh.edu> wrote:
> > > > > > >
> > > > > > > Hi everyone,
> > > > > > >
> > > > > > > I have a two part question regarding the integration of the following optimization problem
> > > > > > >
> > > > > > > min 1/2 u^T*K*u + u^T*f
> > > > > > > S.T. u >= 0
> > > > > > >
> > > > > > > into SNES and TS
> > > > > > >
> > > > > > > 1) For SNES, assuming I am working with a linear FE equation, I have the following algorithm/steps for solving my problem
> > > > > > >
> > > > > > > a) Set an initial guess x
> > > > > > > b) Obtain residual r and jacobian A through functions SNESComputeFunction() and SNESComputeJacobian() respectively
> > > > > > > c) Form vector b = r - A*x
> > > > > > > d) Set Hessian equal to A, gradient to A*x, objective function value to 1/2*x^T*A*x + x^T*b, and variable (lower) bounds to a zero vector
> > > > > > > e) Call TaoSolve
> > > > > > >
> > > > > > > This works well at the moment, but my question is: is there a more "efficient" way of doing this? With my current setup, I am making a rather bold assumption that my problem would converge in one SNES iteration without the bound constraints and has no unexpected nonlinearities.
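For concreteness, a sketch of steps a) through e) as PETSc calls. Error checking is omitted; QPCtx, QPObjGrad, and QPHessian are illustrative names, and the gradient is taken as A*x + b, the gradient of the stated objective 1/2*x^T*A*x + x^T*b.

  typedef struct { Mat A; Vec b; } QPCtx;

  /* objective 1/2 x^T A x + x^T b and its gradient A x + b */
  PetscErrorCode QPObjGrad(Tao tao, Vec x, PetscReal *f, Vec g, void *ptr)
  {
    QPCtx       *qp = (QPCtx*)ptr;
    PetscScalar  xAx, xb;

    MatMult(qp->A, x, g);                  /* g = A x     */
    VecDot(x, g, &xAx);                    /* x^T A x     */
    VecDot(x, qp->b, &xb);                 /* x^T b       */
    *f = PetscRealPart(0.5*xAx + xb);
    VecAXPY(g, 1.0, qp->b);                /* g = A x + b */
    return 0;
  }

  PetscErrorCode QPHessian(Tao tao, Vec x, Mat H, Mat Hpre, void *ptr)
  { return 0; }                            /* H is already A and never changes */

  /* in the driver, after SNESSetFunction/SNESSetJacobian and TaoCreate
     (x, r, A, xl, xu are assumed to be created already): */
  QPCtx qp;

  /* a) initial guess x;  b) residual r and Jacobian A evaluated at x */
  SNESComputeFunction(snes, x, r);
  SNESComputeJacobian(snes, x, A, A);
  qp.A = A;
  VecDuplicate(x, &qp.b);

  /* c) b = r - A x */
  MatMult(A, x, qp.b);
  VecAYPX(qp.b, -1.0, r);                  /* b = r - A x */

  /* d) Hessian = A, objective/gradient as above, zero lower bound */
  TaoSetObjectiveAndGradientRoutine(tao, QPObjGrad, &qp);
  TaoSetHessianRoutine(tao, A, A, QPHessian, &qp);
  VecSet(xl, 0.0);
  VecSet(xu, PETSC_INFINITY);
  TaoSetVariableBounds(tao, xl, xu);
  TaoSetInitialVector(tao, x);

  /* e) solve the bound-constrained QP */
  TaoSolve(tao);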
> > > > > > >
> > > > > > > 2) How would I go about doing the above for time-stepping problems? At each time step, I want to solve a convex optimization subject to the lower bound constraint. I plan on using backward Euler, and my resulting Jacobian should still be compatible with the above optimization problem.
> > > > > > >
> > > > > > > Thanks,
> > > > > > >
> > > > > > > --
> > > > > > > Justin Chang
> > > > > > > PhD Candidate, Civil Engineering - Computational Sciences
> > > > > > > University of Houston, Department of Civil and Environmental Engineering
> > > > > > > Houston, TX 77004
> > > > > > > (512) 963-3262
> > > > > >
> > >
> > > --
> > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> > > -- Norbert Wiener
> > >
> >
> >
>
>