<div class="gmail_quote">On Tue, Jan 10, 2012 at 00:08, Geoffrey Irving <span dir="ltr"><<a href="mailto:irving@naml.us">irving@naml.us</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div id=":1i8">For now, I believe I can get away with a single linear iteration.<br></div></blockquote><div><br></div><div>Single linear iteration (e.g. one GMRES cycle) or single linear solve (e.g. one Newton step)?</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div id=":1i8">
Even if I need a few, the extra cost of the first linear solve is<br>
drastic. However, it seems you're right that this isn't due to<br>
preconditioner setup. The first solve takes over 50 times as long<br>
as the other solves:<br>
<br>
step 1<br>
dt = 0.00694444, time = 0<br>
cg icc converged: iterations = 4, rtol = 0.001, error = 9.56519e-05<br>
actual L2 residual = 1.10131e-05<br>
max speed = 0.00728987<br>
END step 1 0.6109 s<br></div></blockquote><div><br></div><div>How are you measuring this time? In -log_summary, I see 0.02 seconds in KSPSolve(). Maybe the time you see is because there are lots of page faults until you get the code loaded into memory?</div>
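<div>The warm-up hypothesis is easy to check by timing each solve separately and comparing the first measurement against the steady state. A minimal sketch of that measurement pattern — the <tt>solve</tt> function here is a hypothetical stand-in for the actual KSPSolve call, not PETSc code:</div>

```python
import time

def solve():
    # Hypothetical stand-in for KSPSolve: the first call pays a
    # one-time setup cost (page faults, lazy allocation, etc.).
    if not hasattr(solve, "warm"):
        solve.warm = True
        sum(i * i for i in range(200000))  # simulated one-time setup
    sum(i * i for i in range(20000))       # simulated per-solve work

times = []
for step in range(5):
    t0 = time.perf_counter()
    solve()
    times.append(time.perf_counter() - t0)

# times[0] includes the one-time cost; times[1:] reflect steady state.
```

<div>If the first entry dominates while the rest are flat, the overhead is one-time setup rather than per-solve work, which is consistent with the page-fault explanation.</div>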
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div id=":1i8">
step 2<br>
dt = 0.00694444, time = 0.00694444<br>
cg icc converged: iterations = 3, rtol = 0.001, error = 0.000258359<br>
actual L2 residual = 3.13442e-05<br>
max speed = 0.0148876<br>
END step 2 0.0089 s<br>
<br>
Note that this is a very small problem, but even if it took 100x the<br>
iterations, the first solve would still be significantly more expensive<br>
than the second. However, if I pretend the nonzero pattern changes<br>
every iteration, I only see a 20% performance hit overall, so<br>
something else is happening on the first iteration. Do you know what<br>
it is? The results of -log_summary are attached if it helps.<br>
<div class="im"><br>
> Note that you can also enforce the constraints using Lagrange multipliers.<br>
> If the effect of the Lagrange multipliers are local, then you can likely get<br>
> away with an Uzawa-type algorithm (perhaps combined with some form of<br>
> multigrid for the unconstrained system). If the contact constraints cause<br>
> long-range response, Uzawa-type methods may not converge as quickly, but<br>
> there are still lots of alternatives.<br>
<br>
</div>Lagrange multipliers are unfortunate since the system is otherwise<br>
definite. The effect of the constraints will in general be global,<br>
since they will often be the only force combating the net effect of<br>
gravity. In any case, if recomputing the preconditioner appears to be<br>
cheap, symbolic elimination is probably the way to go.</div></blockquote></div><br><div>Well, if the Schur complement in the space of Lagrange multipliers is very well conditioned (or is preconditionable) and you have a good preconditioner for the positive definite part, then the saddle point formulation is not a big deal. The best method will be problem dependent, but this part of the design space is relevant when setup is high relative to solves (e.g. algebraic multigrid).</div>
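<div>For reference, the Uzawa-type iteration mentioned above alternates a solve with the definite block and a gradient step on the multipliers. A toy sketch on a hypothetical 2x2 system (the matrices, constraint, and step size ω are made up for illustration, not the actual contact problem):</div>

```python
# Uzawa iteration for the saddle-point system
#   [ A  B^T ] [u]   [f]
#   [ B  0   ] [l] = [g]
# where A is SPD. Repeat: solve A u = f - B^T l, then l += omega*(B u - g).

def solve_spd(A, b):
    # Direct 2x2 solve standing in for the inner CG/ICC solve.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [( A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (-A[1][0] * b[0] + A[0][0] * b[1]) / det]

def uzawa(A, B, f, g, omega=0.5, iters=200):
    lam = [0.0] * len(g)
    for _ in range(iters):
        # rhs = f - B^T lam
        rhs = [f[i] - sum(B[k][i] * lam[k] for k in range(len(lam)))
               for i in range(len(f))]
        u = solve_spd(A, rhs)
        # Gradient ascent on the multipliers: lam += omega*(B u - g).
        lam = [lam[k] + omega * (sum(B[k][i] * u[i] for i in range(len(u))) - g[k])
               for k in range(len(lam))]
    return u, lam

A = [[2.0, 0.0], [0.0, 3.0]]   # hypothetical SPD block
B = [[1.0, 1.0]]               # single constraint: u0 + u1 = g
f = [1.0, 1.0]
g = [0.0]
u, lam = uzawa(A, B, f, g)
```

<div>Convergence of the outer loop is governed by the conditioning of the Schur complement B A<sup>-1</sup> B<sup>T</sup>, which is why long-range constraint response (a badly conditioned Schur complement) can make plain Uzawa slow.</div>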