<br><br><div class="gmail_quote">On Wed, Aug 1, 2012 at 5:01 PM, Todd Munson <span dir="ltr"><<a href="mailto:tmunson@mcs.anl.gov" target="_blank">tmunson@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="im"><br>
On Aug 1, 2012, at 4:13 PM, Jed Brown wrote:<br>
<br>
> On Wed, Aug 1, 2012 at 11:10 AM, Todd Munson <<a href="mailto:tmunson@mcs.anl.gov">tmunson@mcs.anl.gov</a>> wrote:<br>
><br>
> Yes, I am all in favor of actually specifying a VI externally and then<br>
> internally formulating the appropriate augmented system. This is what<br>
> I do in other contexts (i.e. PATH for GAMS/AMPL). Some things you<br>
> probably want to do with such an interface include differentiating between<br>
> linear and nonlinear constraints -- you do need second derivatives<br>
> for nonlinear constraints and they should be convex.<br>
><br>
> Why not treat it as always nonlinear with the trivial second derivative in the linear case?<br>
<br>
</div>You can if you want. I would say that one benefit of splitting is that sometimes<br>
roundoff errors can be an issue and isolating the linear parts (which are just<br>
data) from the nonlinear parts may help. It depends on your Newton method<br>
though. Knowing that your constraints are linear also means that you<br>
do not need to worry about the Hessian of the Lagrangian. Adding and<br>
dropping linear constraints would also seem to be easier, since<br>
there is no penalty to be paid: the Jacobian does not need<br>
to be modified.<br>
<br>
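A tiny NumPy sketch of the point about linear constraints (all problem data here is made up for illustration): the Newton block in question is J_F(x) + sum_i lambda_i ∇²c_i(x), and for a linear constraint the Hessian term vanishes identically, so the block reduces to J_F alone.

```python
import numpy as np

def jf(x):
    # Jacobian of a sample residual F(x) = (x0**2 - x1, x0 + x1);
    # a general (nonsymmetric) matrix.
    return np.array([[2.0 * x[0], -1.0],
                     [1.0,         1.0]])

def hess_nonlinear(x):
    # c(x) = x0**2 + x1**2 - 1 has constant symmetric Hessian 2*I.
    return 2.0 * np.eye(2)

def hess_linear(x):
    # c(x) = a.x - b is linear, so its Hessian vanishes identically.
    return np.zeros((2, 2))

x = np.array([0.5, 0.25])
lam = 0.3   # Lagrange multiplier

# Nonlinear case: the block mixes a general and a symmetric matrix.
block_nl = jf(x) + lam * hess_nonlinear(x)
# Linear case: the Hessian term drops out and the block is just J_F,
# so adding or dropping the constraint never touches this block.
block_lin = jf(x) + lam * hess_linear(x)

assert np.allclose(block_lin, jf(x))
```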
One interesting thing is that for nonlinear constraints, your Jacobian<br>
matrix in the upper right-hand block is the sum of a general matrix<br>
(the Jacobian of F(x)) and a symmetric matrix (sum_i lambda_i nabla^2 c_i(x),<br>
where the lambda_i are the Lagrange multipliers). Is there a good matrix storage<br>
scheme for this combination?<br></blockquote><div>How does one precondition a system like that? Presumably, the user may at</div><div>best be able to provide a preconditioner for JF.</div><div><br></div><div>If JF responds well to multigrid, then it may be possible to use AMG-KKT</div>
<div>to coarsen the constraint Hessian, but how does MG perform for the sum</div><div>JF + Hc? There is also the problem of smoothing, although, if JF and Jc</div><div>are sparse, then DD may be an option with a direct solve on subdomains.</div>
<div><br></div><div>Anything more definitive on preconditioning these systems?</div><div><br></div><div>Dmitry.</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="im"><br>
> You can start getting very general (i.e. linear and nonlinear cone<br>
> constraints, which can appear in contact problems with friction),<br>
> but I would advocate for simplicity rather than generality.<br>
><br>
> I like simplicity, but I also like not having to change interfaces. I would rather write an interface for a more general case and even temporarily have an implementation that can't do the fully general thing, eventually perhaps doing something more general. We put a rather large amount of effort into making TS work with a convenient (solver-friendly) representation for DAEs, even though lots of methods cannot be used for DAEs. If the general case is not too complicated for the user, I'd rather do it that way and internally reformulate.<br>
<br>
</div>The problem with general cones is that they are not easy (you need both a definition<br>
of the cone and the polar cone) and the numerical methods are problematic (second-order<br>
cones, for example, are not differentiable at the origin). Here you require<br>
interior-point methods. So the cone-constraint algorithms themselves are<br>
more of a research issue.<br>
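To make the non-differentiability concrete, here is a minimal sketch (hypothetical helper names): the natural residual of the second-order cone K = {(t, x) : ||x||_2 <= t} involves ||x||_2, whose gradient x/||x|| has no limit as x approaches the origin.

```python
import numpy as np

def soc_residual(t, x):
    # t - ||x||_2 is nonnegative exactly when (t, x) lies in the cone.
    return t - np.linalg.norm(x)

def soc_residual_grad_x(x):
    # Gradient of ||x||_2 with respect to x; undefined at the origin.
    n = np.linalg.norm(x)
    if n == 0.0:
        raise ValueError("||x||_2 is not differentiable at the origin")
    return x / n

# On the boundary of the cone the residual is zero.
assert soc_residual(5.0, np.array([3.0, 4.0])) == 0.0

# Approaching the origin along different directions gives different
# "gradients", so no single derivative exists at x = 0.
g1 = soc_residual_grad_x(np.array([1e-9, 0.0]))
g2 = soc_residual_grad_x(np.array([0.0, 1e-9]))
assert not np.allclose(g1, g2)
```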
<div class="im"><br>
> The addition and deletion of nonlinear constraints in your example is fine.<br>
> For an active set method, the constraints can be implicitly defined and you<br>
> only need to bring in the new active nonlinear constraints and add them to<br>
> the constraint set for the linear algebra. You can also do this for other<br>
> methods. So the active nonlinear constraints can change from one Newton<br>
> iteration to the next. The only place where the inactive constraints<br>
> matter is in the evaluation of the residual for the line search and<br>
> termination tests.<br>
><br>
> Cool, I'm glad you're happy with the math. Now we need a suitable interface, which I think means something a fair amount different from what we have now. We need a constraint function, its Jacobian, and a Hessian, each of which can change size?<br>
<br>
</div>Yep. I leave it to you to figure this out though.<br>
<div class="im"><br>
> For the contact problems with friction, you typically do an explicit<br>
> method though. In this setting, you know the subset of the possible<br>
> active constraints at each time step and can keep them constant in<br>
> the VI solver. They only change as you step forward through time.<br>
> So you can optimize for this case and not have to worry about the<br>
> constraints changing within each Newton iteration.<br>
><br>
> I'm not sure how you want to design the variable constraint functionality,<br>
> but there is nothing wrong with it technically.<br>
><br>
> I can explain the semismooth method with slacks thing in a few lines.<br>
> The simple problem is:<br>
><br>
> 0 <= x perp f(x) >= 0<br>
><br>
> The compact semismooth formulation is:<br>
><br>
> Phi(x,f(x)) = 0<br>
><br>
> where Phi is the reformulation used.<br>
><br>
> The semismooth formulation with slacks is:<br>
><br>
> f(x) - s = 0<br>
> Phi(x,s) = 0<br>
<br>
> The linear algebra for the semismooth reformulation with slacks<br>
> is very similar to that for the compact formulation, but there<br>
> are subtle differences in the right hand side and the<br>
> subdifferential.<br>
><br>
> Do you mean that the slacks are eliminated again in the linear algebra? What are the "subtle differences"?<br>
<br>
<br>
</div>If you eliminate the slack variables in the slack formulation, then you get<br>
a system that looks like the semismooth system, but with a different right<br>
hand side. Apparently, the difference in the right hand side leads to<br>
fewer Newton iterations in general. Not a big deal.<br>
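The two formulations quoted above can be sketched with a standard choice of Phi, the Fischer-Burmeister function phi(a, b) = sqrt(a^2 + b^2) - a - b, which is zero exactly when a >= 0, b >= 0, and a*b = 0; the f below is a made-up one-dimensional example, not anything from the thread.

```python
import numpy as np

def phi(a, b):
    # Fischer-Burmeister NCP function: phi(a, b) = 0
    # iff a >= 0, b >= 0, and a * b = 0.
    return np.sqrt(a * a + b * b) - a - b

def f(x):
    # Sample affine map for a 1-D problem  0 <= x  perp  f(x) >= 0.
    return x - 1.0

# Compact semismooth formulation: solve Phi(x, f(x)) = 0 directly.
def compact_residual(x):
    return phi(x, f(x))

# Slack formulation: introduce s and solve the augmented system
#   f(x) - s = 0
#   Phi(x, s) = 0
def slack_residual(x, s):
    return np.array([f(x) - s, phi(x, s)])

# x = 1 (with s = f(1) = 0) solves both: x >= 0, f(x) >= 0, x*f(x) = 0.
assert abs(compact_residual(1.0)) < 1e-12
assert np.allclose(slack_residual(1.0, 0.0), 0.0)
```

Eliminating s from the second system recovers something shaped like the compact one, which is where the different right-hand side Todd mentions comes from.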
<br>
Cheers, Todd.<br>
<br>
</blockquote></div><br>