I guess I will have to write my own code then :)

I am not all that familiar with Variational Inequalities at the moment, but if my Jacobian is symmetric and positive definite and I only have lower and upper bounds, doesn't the problem simply reduce to a convex optimization problem? That is, would SNES then act as if it were Tao?

On Fri, Apr 3, 2015 at 6:35 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
Justin,

   We haven't done anything with TS to handle variational inequalities. So you could write your own backward Euler (outside of TS) that solves each time-step problem either as 1) an optimization problem using Tao or 2) a variational inequality using SNES.

   More adventurously, you could look at the TSTHETA code in TS (which is a general form that includes Euler, Backward Euler, and Crank-Nicolson) and see if you can add the constraints to the SNES problem that is solved; in theory this is straightforward, but it would require understanding the current code (which Jed, of course, overwrote :-). I think you should do this.
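   Something like the following is what I have in mind (I have not tried it, so take it as a rough sketch; error checking omitted, and u is your solution vector created elsewhere):

       TS   ts;
       SNES snes;
       Vec  u,lb,ub;                          /* solution and bound vectors, same layout */

       TSCreate(PETSC_COMM_WORLD,&ts);
       TSSetType(ts,TSTHETA);                 /* or TSBEULER for backward Euler */
       /* ... set the IFunction/IJacobian, initial condition, time-step size, etc. ... */

       TSGetSNES(ts,&snes);                   /* the SNES that TS solves at each step */
       SNESSetType(snes,SNESVINEWTONRSLS);    /* reduced-space active-set VI solver */
       VecDuplicate(u,&lb);
       VecDuplicate(u,&ub);
       VecSet(lb,0.0);                        /* u >= 0 */
       VecSet(ub,PETSC_INFINITY);             /* no upper bound */
       SNESVISetVariableBounds(snes,lb,ub);

       TSSolve(ts,u);

   Whether the VI solver behaves correctly when driven from inside TS is exactly the part that would need checking against the current code.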
   Barry

> On Apr 3, 2015, at 12:31 PM, Justin Chang <jchang27@uh.edu> wrote:
>
> I am solving the following anisotropic transient diffusion equation subject to 0 bounds:
>
> du/dt = div[D*grad[u]] + f
>
> where the dispersion tensor D(x) is symmetric and positive definite. This formulation violates the discrete maximum principle, so one of the ways to ensure nonnegative concentrations is to employ convex optimization. I am following the procedures in Nakshatrala and Valocchi (2009) JCP and Nagarajan and Nakshatrala (2011) IJNMF.
>
> The Variational Inequality method gives what I want for my transient case, but what if I want to implement the Tao methodology in TS? That is, what TS functions do I need to set up steps a) through e) at each time step? (Also, the Jacobian remains the same for all time steps, so I would only need to compute it once.) Normally I would just call TSSolve() and let the library do everything, but I would like to incorporate TaoSolve into every time step.
>
> Thanks,
>
> --
> Justin Chang
> PhD Candidate, Civil Engineering - Computational Sciences
> University of Houston, Department of Civil and Environmental Engineering
> Houston, TX 77004
> (512) 963-3262
>
> On Thu, Apr 2, 2015 at 6:53 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
>
> An alternative approach is for you to solve it as a (non)linear variational inequality. See src/snes/examples/tutorials/ex9.c
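> The essential additions to an ordinary SNES code are just selecting a VI solver and providing the bounds, roughly like this (from memory, untested; lb and ub are vectors with the same layout as the solution u):
>
>    SNESSetType(snes,SNESVINEWTONRSLS);    /* active-set Newton for variational inequalities */
>    VecSet(lb,0.0);                        /* lower bound u >= 0 */
>    VecSet(ub,PETSC_INFINITY);             /* no upper bound */
>    SNESVISetVariableBounds(snes,lb,ub);
>    SNESSolve(snes,NULL,u);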
>
> How you should proceed depends on your long-term goal. What problem do you really want to solve? Is it really a linear time-dependent problem with 0 bounds on U? Can the problem always be represented as an optimization problem easily? What are, and what will be, the properties of K? For example, if K is positive definite then likely the bounds will remain true without explicitly providing the constraints.
>
> Barry
>
> > On Apr 2, 2015, at 6:39 PM, Justin Chang <jchang27@uh.edu> wrote:
> >
> > Hi everyone,
> >
> > I have a two-part question regarding the integration of the following optimization problem
> >
> > min 1/2 u^T*K*u + u^T*f
> > s.t. u >= 0
> >
> > into SNES and TS.
> >
> > 1) For SNES, assuming I am working with a linear FE equation, I have the following algorithm/steps for solving my problem (sketched in code below):
> >
> > a) Set an initial guess x
> > b) Obtain the residual r and Jacobian A through SNESComputeFunction() and SNESComputeJacobian() respectively
> > c) Form the vector b = r - A*x
> > d) Set the Hessian equal to A, the gradient to A*x, the objective function value to 1/2*x^T*A*x + x^T*b, and the variable (lower) bounds to a zero vector
> > e) Call TaoSolve()
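> >
> > In code, my current setup looks roughly like this (error checking omitted; FormObjGrad and FormHessian are my own callback names, and ctx holds the A and b from the steps above):
> >
> >    Tao tao;
> >    Vec x,lb,ub;                          /* solution and bound vectors, same layout as x */
> >    Mat A;                                /* Jacobian from step b) */
> >
> >    TaoCreate(PETSC_COMM_WORLD,&tao);
> >    TaoSetType(tao,TAOTRON);              /* a bound-constrained solver; TAOBLMVM also works */
> >    TaoSetInitialVector(tao,x);
> >    TaoSetObjectiveAndGradientRoutine(tao,FormObjGrad,&ctx);  /* 1/2 x^T A x + x^T b and its gradient */
> >    TaoSetHessianRoutine(tao,A,A,FormHessian,&ctx);           /* Hessian is just A */
> >    VecDuplicate(x,&lb);
> >    VecDuplicate(x,&ub);
> >    VecSet(lb,0.0);                       /* x >= 0 */
> >    VecSet(ub,PETSC_INFINITY);            /* no upper bound */
> >    TaoSetVariableBounds(tao,lb,ub);
> >    TaoSolve(tao);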
> >
> > This works well at the moment, but my question is: is there a more "efficient" way of doing this? With my current setup, I am making the rather bold assumptions that my problem would converge in one SNES iteration without the bound constraints and that it does not have any unexpected nonlinearities.
> >
> > 2) How would I go about doing the above for time-stepping problems? At each time step, I want to solve a convex optimization problem subject to the lower-bound constraint. I plan on using backward Euler, and my resulting Jacobian should still be compatible with the above optimization problem. Concretely, I imagine something like the loop sketched below.
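> >
> > (This reuses the Tao object set up above; M is the mass matrix, K the stiffness matrix, dt the step size, and c the per-step vector my objective/gradient context reads. Signs may differ depending on how f enters the weak form; error checking omitted.)
> >
> >    /* backward Euler: each step solves  min 1/2 u^T A u - u^T c,  u >= 0,
> >       with A = (1/dt) M + K fixed and c = (1/dt) M u_old + f updated every step */
> >    PetscInt step;
> >    Vec      u;
> >
> >    for (step = 0; step < nsteps; step++) {
> >      MatMult(M,u_old,c);                 /* c = M u_old        */
> >      VecScale(c,1.0/dt);                 /* c = (1/dt) M u_old */
> >      VecAXPY(c,1.0,f);                   /* c = c + f          */
> >      TaoSetInitialVector(tao,u_old);     /* warm start from the previous step   */
> >      TaoSolve(tao);                      /* solve the bound-constrained QP      */
> >      TaoGetSolutionVector(tao,&u);
> >      VecCopy(u,u_old);                   /* advance to the next step */
> >    }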
> >
> > Thanks,
> >
> > --
> > Justin Chang
> > PhD Candidate, Civil Engineering - Computational Sciences
> > University of Houston, Department of Civil and Environmental Engineering
> > Houston, TX 77004
> > (512) 963-3262
>
>
>
>
> --
> Justin Chang
> PhD Candidate, Civil Engineering - Computational Sciences
> University of Houston, Department of Civil and Environmental Engineering
> Houston, TX 77004
> (512) 963-3262

--
Justin Chang
PhD Candidate, Civil Engineering - Computational Sciences
University of Houston, Department of Civil and Environmental Engineering
Houston, TX 77004
(512) 963-3262