<div dir="ltr">Let me back up a bit.<br><br>Say I am solving -div[D(x)grad[c]] = 0 on a unit cube with a hole. The interior boundary has DirichletBC c = 1 and the exterior boundary has DirichletBC c = 0. If I solve the above in the strong sense (e.g., finite volume, finite difference, etc.), then -div[D(x)grad[c]] = 0 holds at every grid cell (hence local mass conservation).<div><br></div><div>According to the discrete maximum principle and the prescribed BCs, my solution <i>should</i> satisfy 0 <= c <= 1. However, if D(x) is anisotropic, the KSP/SNES solver will return c's less than 0 and/or greater than 1. Local mass balance still holds for all cells regardless.<br><br>Now, if TAO is used and the Vecs XL and XU of TaoSetVariableBounds are set to 0 and 1 respectively, I ensure that 0 <= c <= 1 even with anisotropy. However, our experiments have shown that the "corrected" regions, i.e., the regions where the DMP would otherwise have been violated, no longer satisfy local mass balance, i.e., div[D(x)grad[c]] != 0 at the element level. The unaffected regions, i.e., the regions where the DMP was satisfied anyway, are still locally conservative.</div><div><br></div><div>We have shown through MATLAB's <i>quadprog</i> that we can achieve both the DMP and local mass balance if we apply the mixed form (using the least-squares finite element method) of the diffusion equation and subject the optimization solve to both upper/lower bounds and equality constraints (namely, divergence of flux = 0 in Omega). If no bounds/constraints are set, the objective function equals zero. If only bounds or only constraints are set, the objective function takes some non-zero value. If both are set, the objective function takes a slightly larger non-zero value. 
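[Editor's note: the effect described above can be reproduced with a toy 2-cell system; the numbers below are made up for illustration and are not from any actual discretization. A positive off-diagonal entry, of the kind anisotropy produces, makes the unconstrained solve undershoot 0; a bound-constrained (VI-style) solve via projected Gauss-Seidel restores the bounds but leaves a nonzero per-cell residual exactly in the corrected cell.]

```python
import numpy as np

# Hypothetical 2-cell system mimicking an anisotropic discretization:
# the positive off-diagonal entry breaks the M-matrix property, so the
# unconstrained solve violates the discrete maximum principle (DMP).
A = np.array([[2.0, 0.8],
              [0.8, 2.0]])
b = np.array([1.0, 0.0])   # Dirichlet data: c = 1 on one side, c = 0 on the other

c = np.linalg.solve(A, b)
assert c[1] < 0.0          # undershoot: DMP violated

# Bound-constrained solve (0 <= c) via projected Gauss-Seidel --
# a minimal stand-in for what SNESVI/TAO do with variable bounds.
c_vi = np.zeros(2)
for _ in range(200):
    for i in range(2):
        c_vi[i] = (b[i] - A[i] @ c_vi + A[i, i] * c_vi[i]) / A[i, i]
        c_vi[i] = max(c_vi[i], 0.0)   # project onto the lower bound

r = A @ c_vi - b           # per-cell "mass balance" residual
# The residual is ~0 where the bound is inactive, and nonzero exactly
# where the solution was "corrected" to the bound.
print(c_vi)  # ~[0.5, 0.0]
print(r)     # ~[0.0, 0.4]
```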
Basically, the more constraints I apply, the further my overall solution may drift from the original discretization.</div><div><br></div><div>I am curious to see 1) whether something like this is possible with TAO, and 2) whether I can achieve the SNESVI equivalent in the sense I have described. Hope this makes a little more sense.<br><br></div><div>Thanks,<br>Justin</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Sep 1, 2015 at 4:29 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
I really cannot understand what you are saying.<br>
<span class=""><br>
> On Sep 1, 2015, at 5:14 PM, Justin Chang <<a href="mailto:jychang48@gmail.com">jychang48@gmail.com</a>> wrote:<br>
><br>
> So in VI, the i-th component/cell/point of x is "corrected" if it is below lb_i or above ub_i. All other x_i that satisfy their respective bounds remain untouched. Hence only the corrected components will violate the original equation.<br>
<br>
</span> I think what you say above is backwards. First of all, x_i would never be < lb_i or > ub_i, since it is constrained to be between them. It is just that with a VI, IF x_i is at lb_i or ub_i, then F_i(x) is allowed to be nonzero (that is, that equation is allowed to be violated).<br>
<span class=""><br>
><br>
> In optimization, every single x_i will be "shifted" such that every single x_i meets lb_i and ub_i. Hence it's possible all components will violate the original equation<br>
<br>
</span> No. With constraints in optimization, they must all be satisfied for one to say that one has a solution.<br>
<br>
With VIs the F(x) is NOT a constraint and should not be thought of as a constraint; only the bounds are constraints. Or think of F(x) as a special kind of constraint (that doesn't exist with optimization) that is allowed to be violated in a very special way (when its x_i variable is on a bound).<br>
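[Editor's note: the complementarity conditions described above can be written down as a small check. This is a sketch; `vi_residual` and the 2x2 example system are made up for illustration.]

```python
import numpy as np

def vi_residual(Fx, x, lb, ub, tol=1e-10):
    """Max violation of the box-constrained VI complementarity conditions:
    interior x_i       -> F_i(x) = 0 must hold;
    x_i at lower bound -> F_i(x) >= 0 is allowed;
    x_i at upper bound -> F_i(x) <= 0 is allowed."""
    bad = 0.0
    for i in range(len(x)):
        if lb[i] + tol < x[i] < ub[i] - tol:
            bad = max(bad, abs(Fx[i]))        # equation must be satisfied
        elif x[i] <= lb[i] + tol:
            bad = max(bad, -min(Fx[i], 0.0))  # only F_i < 0 is a violation
        else:
            bad = max(bad, max(Fx[i], 0.0))   # only F_i > 0 is a violation
    return bad

# Toy linear VI: F(x) = A x - b with bounds 0 <= x <= 1.
A = np.array([[2.0, 0.8], [0.8, 2.0]])
b = np.array([1.0, 0.0])
x = np.array([0.5, 0.0])   # x_2 sits on its lower bound
# F(x) = [0.0, 0.4]: the second equation is "violated", but in the
# allowed direction, so this x is a valid VI solution.
print(vi_residual(A @ x - b, x, np.zeros(2), np.ones(2)))  # 0.0
```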
<span class="HOEnZb"><font color="#888888"><br>
Barry<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
><br>
> Is that the general idea here?<br>
><br>
> On Tue, Sep 1, 2015 at 12:48 PM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> wrote:<br>
><br>
> I think we are talking past each other.<br>
><br>
> With bound constraint VI's there is no optimization; there is an equation F(x) = 0 (where F may be linear or nonlinear) and constraints a <= x <= b. With VIs the equation F_i(x) = 0 is simply not satisfied if x_i is on a bound (that is, x_i = a_i or x_i = b_i).<br>
><br>
> With optimization, if you have equality and inequality constraints, and satisfying an inequality constraint FORCES an equality constraint to not be satisfied, then the constraints are not compatible and the problem isn't properly posed.<br>
><br>
> Barry<br>
><br>
><br>
> > On Sep 1, 2015, at 4:15 AM, Justin Chang <<a href="mailto:jychang48@gmail.com">jychang48@gmail.com</a>> wrote:<br>
> ><br>
> > But if I add those linear equality constraint equations to my original problem, would they not be satisfied anyway? Say I add this to my weak form:<br>
> ><br>
> > Ax = b<br>
> ><br>
> > But once I subject x to some bound constraints, Ax != b. Unless I add some sort of penalty where extra weighting is given to this property...<br>
> ><br>
> > On Tue, Sep 1, 2015 at 3:02 AM, Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>> wrote:<br>
> > On Tue, Sep 1, 2015 at 3:46 AM, Justin Chang <<a href="mailto:jychang48@gmail.com">jychang48@gmail.com</a>> wrote:<br>
> > I would like to simultaneously enforce both the discrete maximum principle and local mass/species balance. Because even if a locally conservative scheme like RT0 is used, as soon as these bound constraints are applied, I lose the mass balance.<br>
> ><br>
> > What I am saying is, can't you just add "linear equality constraints" as more equations?<br>
> ><br>
> > Thanks,<br>
> ><br>
> > Matt<br>
> ><br>
> > On Tue, Sep 1, 2015 at 2:33 AM, Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>> wrote:<br>
> > On Tue, Sep 1, 2015 at 3:11 AM, Justin Chang <<a href="mailto:jychang48@gmail.com">jychang48@gmail.com</a>> wrote:<br>
> > Barry,<br>
> ><br>
> > That's good to know thanks.<br>
> ><br>
> > On a related note, is it possible for VI to one day include linear equality constraints?<br>
> ><br>
> > How are these different from just using more equations?<br>
> ><br>
> > Thanks,<br>
> ><br>
> > Matt<br>
> ><br>
> > Thanks,<br>
> > Justin<br>
> ><br>
> > On Mon, Aug 31, 2015 at 7:13 PM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> wrote:<br>
> ><br>
> > > On Aug 31, 2015, at 7:36 PM, Justin Chang <<a href="mailto:jychang48@gmail.com">jychang48@gmail.com</a>> wrote:<br>
> > ><br>
> > > Coming back to this,<br>
> > ><br>
> > > Say I now want to ensure the DMP for advection-diffusion equations. The linear operator is now asymmetric and non-self-adjoint (assuming I do something like SUPG or finite volume), meaning I cannot solve this problem with TAO's optimization solvers without some manipulation (e.g., normalizing the equations). Does this statement also hold true for SNESVI?<br>
> ><br>
> > SNESVI doesn't care about symmetry, etc.<br>
> ><br>
> > ><br>
> > > Thanks,<br>
> > > Justin<br>
> > ><br>
> > > On Fri, Apr 3, 2015 at 7:38 PM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> wrote:<br>
> > ><br>
> > > > On Apr 3, 2015, at 7:35 PM, Justin Chang <<a href="mailto:jchang27@uh.edu">jchang27@uh.edu</a>> wrote:<br>
> > > ><br>
> > > > I guess I will have to write my own code then :)<br>
> > > ><br>
> > > > I am not all that familiar with Variational Inequalities at the moment, but if my Jacobian is symmetric and positive definite and I only have lower and upper bounds, doesn't the problem simply reduce to that of a convex optimization? That is, would SNES act as if it were Tao?<br>
> > ><br>
> > > Yes, I think that is essentially correct.<br>
> > ><br>
> > > Barry<br>
> > ><br>
> > > ><br>
> > > > On Fri, Apr 3, 2015 at 6:35 PM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> wrote:<br>
> > > ><br>
> > > > Justin,<br>
> > > ><br>
> > > > We haven't done anything with TS to handle variational inequalities. So you can either write your own backward Euler (outside of TS) that solves each time-step problem either as 1) an optimization problem using Tao or 2) as a variational inequality using SNES.<br>
> > > ><br>
> > > > More adventurously, you could look at the TSTHETA code in TS (which is a general form that includes Euler, Backward Euler, and Crank-Nicolson) and see if you can add the constraints to the SNES problem that is solved; in theory this is straightforward but it would require understanding the current code (which Jed, of course, overwrote :-). I think you should do this.<br>
> > > ><br>
> > > > Barry<br>
> > > ><br>
> > > ><br>
> > > > > On Apr 3, 2015, at 12:31 PM, Justin Chang <<a href="mailto:jchang27@uh.edu">jchang27@uh.edu</a>> wrote:<br>
> > > > ><br>
> > > > > I am solving the following anisotropic transient diffusion equation subject to 0 bounds:<br>
> > > > ><br>
> > > > > du/dt = div[D*grad[u]] + f<br>
> > > > ><br>
> > > > > where the dispersion tensor D(x) is symmetric and positive definite. This formulation violates the discrete maximum principle, so one of the ways to ensure nonnegative concentrations is to employ convex optimization. I am following the procedures in Nakshatrala and Valocchi (2009) JCP and Nagarajan and Nakshatrala (2011) IJNMF.<br>
> > > > ><br>
> > > > > The Variational Inequality method gives what I want for my transient case, but what if I want to implement the Tao methodology in TS? That is, what TS functions do I need to set up steps a) through e) for each time step? (Also, the Jacobian remains the same for all time steps, so I would only compute it once.) Normally I would just call TSSolve() and let the libraries and functions do everything, but I would like to incorporate TaoSolve into every time step.<br>
> > > > ><br>
> > > > > Thanks,<br>
> > > > ><br>
> > > > > --<br>
> > > > > Justin Chang<br>
> > > > > PhD Candidate, Civil Engineering - Computational Sciences<br>
> > > > > University of Houston, Department of Civil and Environmental Engineering<br>
> > > > > Houston, TX 77004<br>
> > > > > (512) 963-3262<br>
> > > > ><br>
> > > > > On Thu, Apr 2, 2015 at 6:53 PM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> wrote:<br>
> > > > ><br>
> > > > > An alternative approach is for you to solve it as a (non)linear variational inequality. See src/snes/examples/tutorials/ex9.c<br>
> > > > ><br>
> > > > > How you should proceed depends on your long term goal. What problem do you really want to solve? Is it really a linear time dependent problem with 0 bounds on U? Can the problem always be represented as an optimization problem easily? What are and what will be the properties of K? For example, if K is positive definite, then likely the bounds will remain true without explicitly providing the constraints.<br>
> > > > ><br>
> > > > > Barry<br>
> > > > ><br>
> > > > > > On Apr 2, 2015, at 6:39 PM, Justin Chang <<a href="mailto:jchang27@uh.edu">jchang27@uh.edu</a>> wrote:<br>
> > > > > ><br>
> > > > > > Hi everyone,<br>
> > > > > ><br>
> > > > > > I have a two part question regarding the integration of the following optimization problem<br>
> > > > > ><br>
> > > > > > min 1/2 u^T*K*u + u^T*f<br>
> > > > > > S.T. u >= 0<br>
> > > > > ><br>
> > > > > > into SNES and TS<br>
> > > > > ><br>
> > > > > > 1) For SNES, assuming I am working with a linear FE equation, I have the following algorithm/steps for solving my problem<br>
> > > > > ><br>
> > > > > > a) Set an initial guess x<br>
> > > > > > b) Obtain residual r and jacobian A through functions SNESComputeFunction() and SNESComputeJacobian() respectively<br>
> > > > > > c) Form vector b = r - A*x<br>
> > > > > > d) Set Hessian equal to A, gradient to A*x, objective function value to 1/2*x^T*A*x + x^T*b, and variable (lower) bounds to a zero vector<br>
> > > > > > e) Call TaoSolve<br>
> > > > > ><br>
> > > > > > This works well at the moment, but my question is: is there a more "efficient" way of doing this? With my current setup, I am making the rather bold assumption that my problem would converge in one SNES iteration without the bound constraints and does not have any unexpected nonlinearities.<br>
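[Editor's note: the convex subproblem in step (d) — minimize 1/2 x^T A x + x^T b subject to zero lower bounds — can be sketched with projected gradient descent; the matrix A and vector b below are made-up SPD test data standing in for the assembled Jacobian and the vector formed in step (c), and the loop stands in for what TaoSolve would do with the variable bounds set.]

```python
import numpy as np

# Sketch of the bound-constrained QP from step (d):
#   min 1/2 x^T A x + x^T b   s.t.  x >= 0
# solved with projected gradient descent. A, b are made-up test data.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, -2.0])

x = np.zeros(2)
step = 0.2                                # must be < 2 / lambda_max(A)
for _ in range(500):
    grad = A @ x + b                      # gradient of the objective
    x = np.maximum(x - step * grad, 0.0)  # project onto x >= 0

grad = A @ x + b
print(x)     # ~[0.0, 0.6667]
print(grad)  # ~[1.6667, 0.0]: KKT holds (grad_i >= 0 wherever x_i = 0)
```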
> > > > > ><br>
> > > > > > 2) How would I go about doing the above for time-stepping problems? At each time step, I want to solve a convex optimization subject to the lower bounds constraint. I plan on using backward Euler, and my resulting Jacobian should still be compatible with the above optimization problem.<br>
> > > > > ><br>
> > > > > > Thanks,<br>
> > > > > ><br>
> > > > ><br>
> > > > ><br>
> > > > ><br>
> > > > ><br>
> > > ><br>
> > > ><br>
> > > ><br>
> > > ><br>
> > ><br>
> > ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> > --<br>
> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
> > -- Norbert Wiener<br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
><br>
><br>
<br>
</div></div></blockquote></div><br></div>