<div dir="ltr">So in VI, the i-th component/cell/point of x is "corrected" if it is below lb_i or above ub_i. All other x_i that satisfy their respective bounds remain untouched. Hence only the corrected components will violated the original equation<br><br>In optimization, every single x_i will be "shifted" such that every single x_i meets lb_i and ub_i. Hence it's possible all components will violate the original equation<div><br></div><div>Is that the general idea here?</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Sep 1, 2015 at 12:48 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>

On Tue, Sep 1, 2015 at 12:48 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:

I think we are talking past each other.

With bound-constrained VIs there is no optimization; there is an equation F(x) = 0 (where F may be linear or nonlinear) and constraints a <= x <= b. With VIs the equation F_i(x) = 0 is simply not satisfied if x_i is on a bound (that is, x_i = a_i or x_i = b_i).

With optimization you have an equality constraint and inequality constraints; if satisfying an inequality constraint FORCES an equality constraint to be violated, then the constraints are not compatible and the problem isn't properly posed.

Barry

> On Sep 1, 2015, at 4:15 AM, Justin Chang <jychang48@gmail.com> wrote:
>
> But if I add those linear equality constraint equations to my original problem, would they not be satisfied anyway? Say I add this to my weak form:
>
> Ax = b
>
> But once I subject x to some bounded constraints, Ax != b. Unless I add some sort of penalty where extra weighting is added to this property...
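>
> Roughly what I have in mind (NumPy/SciPy standing in for Tao, with a quadratic penalty on the equality constraint; K, f, A_eq, b_eq, and rho below are made-up placeholders, so this is only a sketch):
>
>     import numpy as np
>     from scipy.optimize import minimize
>
>     K = np.array([[4.0, -1.0], [-1.0, 4.0]])   # "stiffness" operator (placeholder)
>     f = np.array([1.0, -3.0])                  # forcing (placeholder)
>     A_eq = np.array([[1.0, 1.0]])              # linear equality, e.g. mass balance
>     b_eq = np.array([0.5])
>     rho = 1.0e3                                # penalty weight
>
>     def objective(u):
>         r = A_eq.dot(u) - b_eq
>         return 0.5 * u.dot(K).dot(u) + u.dot(f) + 0.5 * rho * r.dot(r)
>
>     def gradient(u):
>         return K.dot(u) + f + rho * A_eq.T.dot(A_eq.dot(u) - b_eq)
>
>     res = minimize(objective, np.zeros(2), jac=gradient,
>                    bounds=[(0.0, None)] * 2, method="L-BFGS-B")
>     print(res.x, A_eq.dot(res.x) - b_eq)   # equality only satisfied up to O(1/rho)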
>
> On Tue, Sep 1, 2015 at 3:02 AM, Matthew Knepley <knepley@gmail.com> wrote:
> On Tue, Sep 1, 2015 at 3:46 AM, Justin Chang <jychang48@gmail.com> wrote:
> I would like to simultaneously enforce both the discrete maximum principle and local mass/species balance, because even if a locally conservative scheme like RT0 is used, as soon as these bounded constraints are applied I lose the mass balance.
>
> What I am saying is, can't you just add "linear equality constraints" as more equations?
>
> Thanks,
>
> Matt
>
> On Tue, Sep 1, 2015 at 2:33 AM, Matthew Knepley <knepley@gmail.com> wrote:
> On Tue, Sep 1, 2015 at 3:11 AM, Justin Chang <jychang48@gmail.com> wrote:
> Barry,
>
> That's good to know, thanks.
>
> On a related note, is it possible for VI to one day include linear equality constraints?
>
> How are these different from just using more equations?
>
> Thanks,
>
> Matt
>
> Thanks,
> Justin
>
> On Mon, Aug 31, 2015 at 7:13 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
>
> > On Aug 31, 2015, at 7:36 PM, Justin Chang <jychang48@gmail.com> wrote:
> >
> > Coming back to this,
> >
> > Say I now want to ensure the DMP for advection-diffusion equations. The linear operator is now asymmetric and non-self-adjoint (assuming I do something like SUPG or finite volume), meaning I cannot simply solve this problem without some manipulation (e.g. normalizing the equations) using TAO's optimization solvers. Does this statement also hold true for SNESVI?
>
> SNESVI doesn't care about symmetry, etc.
>
> >
> > Thanks,
> > Justin
> >
> > On Fri, Apr 3, 2015 at 7:38 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
> >
> > > On Apr 3, 2015, at 7:35 PM, Justin Chang <jchang27@uh.edu> wrote:
> > >
> > > I guess I will have to write my own code then :)
> > >
> > > I am not all that familiar with variational inequalities at the moment, but if my Jacobian is symmetric and positive definite and I only have lower and upper bounds, doesn't the problem simply reduce to a convex optimization problem? That is, would SNES act as if it were Tao?
> >
> > Yes, I think that is essentially correct.
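> >
> > A quick sanity check of that equivalence in NumPy/SciPy terms (L-BFGS-B standing in for Tao/SNESVI, with a made-up 3x3 SPD matrix, so only a sketch):
> >
> >     import numpy as np
> >     from scipy.optimize import minimize
> >
> >     A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
> >     b = np.array([5.0, -5.0, 5.0])
> >     lb, ub = np.zeros(3), np.full(3, 10.0)
> >
> >     # Convex QP:  min 1/2 x^T A x - b^T x   subject to  lb <= x <= ub
> >     qp = minimize(lambda x: 0.5 * x.dot(A).dot(x) - b.dot(x),
> >                   np.zeros(3), jac=lambda x: A.dot(x) - b,
> >                   bounds=list(zip(lb, ub)), method="L-BFGS-B")
> >     x, F = qp.x, A.dot(qp.x) - b
> >
> >     # The minimizer satisfies the box-VI conditions for F(x) = A x - b:
> >     # F_i ~ 0 where lb_i < x_i < ub_i, F_i >= 0 where x_i = lb_i,
> >     # F_i <= 0 where x_i = ub_i.
> >     print(np.round(x, 4), np.round(F, 4))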
> >
> > Barry
> >
> > >
> > > On Fri, Apr 3, 2015 at 6:35 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
> > >
> > > Justin,
> > >
> > > We haven't done anything with TS to handle variational inequalities. So you can write your own backward Euler (outside of TS) that solves each time-step problem either as 1) an optimization problem using Tao or 2) a variational inequality using SNES.
> > >
> > > More adventurously, you could look at the TSTHETA code in TS (which is a general form that includes Euler, backward Euler, and Crank-Nicolson) and see if you can add the constraints to the SNES problem that is solved; in theory this is straightforward, but it would require understanding the current code (which Jed, of course, overwrote :-). I think you should do this.
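> > >
> > > Schematically, option 1) per time step might look something like this (NumPy/SciPy standing in for Tao, with made-up mass/stiffness matrices; backward Euler gives (M/dt + K) u_new = (M/dt) u_old + f at each step, so the Hessian M/dt + K is the same every step):
> > >
> > >     import numpy as np
> > >     from scipy.optimize import minimize
> > >
> > >     n, dt, nsteps = 4, 0.1, 10
> > >     M = np.eye(n)                                           # lumped mass (placeholder)
> > >     K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D diffusion stencil (placeholder)
> > >     f = np.array([1.0, -2.0, -2.0, 1.0])                    # forcing (placeholder)
> > >     u = np.zeros(n)                                         # initial condition
> > >
> > >     H = M / dt + K                                          # Hessian, same for every step
> > >     for step in range(nsteps):
> > >         g = -(M.dot(u) / dt + f)                            # per-step objective: 1/2 v'Hv + v'g
> > >         res = minimize(lambda v: 0.5 * v.dot(H).dot(v) + v.dot(g),
> > >                        u, jac=lambda v: H.dot(v) + g,
> > >                        bounds=[(0.0, None)] * n, method="L-BFGS-B")
> > >         u = res.x                                           # stays >= 0 by construction
> > >         print(step, np.round(u, 4))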
> > >
> > > Barry
> > >
> > > > On Apr 3, 2015, at 12:31 PM, Justin Chang <jchang27@uh.edu> wrote:
> > > >
> > > > I am solving the following anisotropic transient diffusion equation subject to 0 bounds:
> > > >
> > > > du/dt = div[D*grad[u]] + f
> > > >
> > > > where the dispersion tensor D(x) is symmetric and positive definite. This formulation violates the discrete maximum principle, so one of the ways to ensure nonnegative concentrations is to employ convex optimization. I am following the procedures in Nakshatrala and Valocchi (2009) JCP and Nagarajan and Nakshatrala (2011) IJNMF.
> > > >
> > > > The Variational Inequality method works and gives what I want for my transient case, but what if I want to implement the Tao methodology in TS? That is, what TS functions do I need to set up steps a) through e) for each time step (also, the Jacobian remains the same for all time steps, so I would only call this once)? Normally I would just call TSSolve() and let the libraries and functions do everything, but I would like to incorporate TaoSolve into every time step.
> > > >
> > > > Thanks,
> > > >
> > > > --
> > > > Justin Chang
> > > > PhD Candidate, Civil Engineering - Computational Sciences
> > > > University of Houston, Department of Civil and Environmental Engineering
> > > > Houston, TX 77004
> > > > (512) 963-3262
> > > >
> > > > On Thu, Apr 2, 2015 at 6:53 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
> > > >
> > > > An alternative approach is for you to solve it as a (non)linear variational inequality. See src/snes/examples/tutorials/ex9.c
> > > >
> > > > How you should proceed depends on your long-term goal. What problem do you really want to solve? Is it really a linear time-dependent problem with 0 bounds on U? Can the problem always be represented as an optimization problem easily? What are, and what will be, the properties of K? For example, if K is positive definite then likely the bounds will remain true without explicitly providing the constraints.
> > > >
> > > > Barry
> > > >
> > > > > On Apr 2, 2015, at 6:39 PM, Justin Chang <jchang27@uh.edu> wrote:
> > > > >
> > > > > Hi everyone,
> > > > >
> > > > > I have a two-part question regarding the integration of the following optimization problem
> > > > >
> > > > > min 1/2 u^T*K*u + u^T*f
> > > > > s.t. u >= 0
> > > > >
> > > > > into SNES and TS.
> > > > >
> > > > > 1) For SNES, assuming I am working with a linear FE equation, I have the following algorithm/steps for solving my problem:
> > > > >
> > > > > a) Set an initial guess x
> > > > > b) Obtain the residual r and Jacobian A through SNESComputeFunction() and SNESComputeJacobian(), respectively
> > > > > c) Form the vector b = r - A*x
> > > > > d) Set the Hessian equal to A, the gradient to A*x, the objective function value to 1/2*x^T*A*x + x^T*b, and the variable (lower) bounds to a zero vector
> > > > > e) Call TaoSolve
> > > > >
> > > > > This works well at the moment, but my question is: is there a more "efficient" way of doing this? With my current setup, I am making the rather bold assumption that my problem would converge in one SNES iteration without the bounded constraints and that it does not have any unexpected nonlinearities.
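> > > > >
> > > > > In NumPy/SciPy terms, steps c) through e) amount to something like the following (r and A would really come from SNESComputeFunction()/SNESComputeJacobian(); the values below are placeholders, and L-BFGS-B stands in for TaoSolve):
> > > > >
> > > > >     import numpy as np
> > > > >     from scipy.optimize import minimize
> > > > >
> > > > >     A = np.array([[4.0, -1.0], [-1.0, 4.0]])          # "Jacobian" (placeholder)
> > > > >     x0 = np.zeros(2)                                  # a) initial guess
> > > > >     r = np.array([-1.0, 2.0])                         # b) residual at x0 (placeholder)
> > > > >
> > > > >     b = r - A.dot(x0)                                 # c) b = r - A*x
> > > > >     obj = lambda v: 0.5 * v.dot(A).dot(v) + v.dot(b)  # d) objective
> > > > >     grad = lambda v: A.dot(v) + b                     #    its gradient (Hessian is A)
> > > > >     lb = [(0.0, None)] * 2                            #    zero lower bounds
> > > > >
> > > > >     res = minimize(obj, x0, jac=grad, bounds=lb, method="L-BFGS-B")  # e) "TaoSolve"
> > > > >     print(res.x)                                      # nonnegative by construction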
> > > > >
> > > > > 2) How would I go about doing the above for time-stepping problems? At each time step, I want to solve a convex optimization problem subject to the lower-bound constraint. I plan on using backward Euler, and my resulting Jacobian should still be compatible with the above optimization problem.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > --
> > > > > Justin Chang
> > > > > PhD Candidate, Civil Engineering - Computational Sciences
> > > > > University of Houston, Department of Civil and Environmental Engineering
> > > > > Houston, TX 77004
> > > > > (512) 963-3262
> > > >
> > > > --
> > > > Justin Chang
> > > > PhD Candidate, Civil Engineering - Computational Sciences
> > > > University of Houston, Department of Civil and Environmental Engineering
> > > > Houston, TX 77004
> > > > (512) 963-3262
> > >
> > > --
> > > Justin Chang
> > > PhD Candidate, Civil Engineering - Computational Sciences
> > > University of Houston, Department of Civil and Environmental Engineering
> > > Houston, TX 77004
> > > (512) 963-3262
> >
>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
>