On Wed, Dec 14, 2011 at 3:50 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:

On Dec 14, 2011, at 3:13 PM, Mark F. Adams wrote:

>
> On Dec 14, 2011, at 3:44 PM, Barry Smith wrote:
>
>>
>> On Dec 14, 2011, at 12:45 PM, Mark F. Adams wrote:
>>
>>> I have a nonlinear problem and am finding that the convergence test for the interior PETSc linear solver is apparently testing against the norm of the first linear solve (see appended).
>>
>> No, it is || r = b - Ax || / || b ||, with a B in there if left preconditioned.
>>
>> It has nothing to do with the "first linear solve". Each linear solve is self-contained and does not use anything from a previous linear solve to determine convergence of a later linear solve.
>>
>
> Matt's suggestion -ksp_converged_use_initial_residual_norm did the trick.
>
>>> This is clearly not what anyone would want,
>>
>> Why not?
>>
>
> OK, I should say I don't know why anyone would want this. If you pop a KSP into a nonlinear loop, which is obviously a very common thing to do, then you get behavior that is probably not what you want: an approximate Newton where the outer nonlinear tolerance is much smaller than the inner linear tolerance.

 I would think you would want exactly that. Why oversolve the linear system since it is "wrong" anyway?

Actually, with Newton the linear tolerances should really be set based on the progress of the Newton method; this is done via -snes_ksp_ew. I would like to make that the default, but this scheme is not robust and occasionally strange things might happen. I'd like it if someone could investigate using EW by default.
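For concreteness, a minimal sketch of what -snes_ksp_ew switches on programmatically; the helper name here is made up, and the SNES is assumed to have been created and given its function/Jacobian elsewhere:

#include <petscsnes.h>

/* Let Eisenstat-Walker pick the linear (KSP) relative tolerance from the
   progress of the Newton iteration instead of using a fixed rtol.
   Equivalent to running with -snes_ksp_ew. */
PetscErrorCode EnableEW(SNES snes)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = SNESKSPSetUseEW(snes, PETSC_TRUE);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
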
I spent a ton of time on EW (Eisenstat-Walker) for my thesis. You are right that it's just not robust, but version 2 is more robust than version 1 in my tests. EW is just two possible implementations of the Dembo-Eisenstat-Steihaug theory. I think Trond has some additional ones that are stuck down deep inside his bizarre optimization code. I would love to turn on version 2 by default, but we need a better nonlinear test set to see what happens. Maybe some fluids thing?
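A sketch of selecting version 2, assuming the SNESKSPSetParametersEW interface (and its -snes_ksp_ew_version option); every parameter other than the version is left at its default, and the helper name is illustrative only:

#include <petscsnes.h>

/* Turn on Eisenstat-Walker and request its second choice of forcing term.
   Same effect as -snes_ksp_ew -snes_ksp_ew_version 2 on the command line. */
PetscErrorCode UseEWVersion2(SNES snes)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = SNESKSPSetUseEW(snes, PETSC_TRUE);CHKERRQ(ierr);
  /* Arguments after the SNES: version, rtol_0, rtol_max, gamma, alpha,
     alpha2, threshold; PETSC_DEFAULT keeps each tuning parameter at its
     built-in value. */
  ierr = SNESKSPSetParametersEW(snes, 2, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT,
                                PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
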
 Matt

Barry

> Just my perspective.
>
> Mark
>
>> Barry
>>
>>
>>> is this the intended semantics?
>>
>>
>>> Is there perhaps a method to reset rtol?
>>>
>>> Mark
>>>
>>> Picard iteration 0 max(resid) = 781020
>>> 0 KSP Residual norm 2.499264167662e+07
>>> 1 KSP Residual norm 6.330915810095e+06
>>> 2 KSP Residual norm 1.856999435466e+06
>>> 3 KSP Residual norm 3.967325725053e+05
>>> 4 KSP Residual norm 9.037332775808e+04
>>> 5 KSP Residual norm 1.826960320400e+04
>>> Linear solve converged due to CONVERGED_RTOL iterations 5
>>> Picard iteration 1 max(resid) = 727577 ------- Rate = 1.07345
>>> 0 KSP Residual norm 6.885328126772e+06
>>> 1 KSP Residual norm 2.150319244570e+06
>>> 2 KSP Residual norm 2.297759417371e+05
>>> 3 KSP Residual norm 3.321819214451e+04
>>> 4 KSP Residual norm 4.155970071625e+03
>>> Linear solve converged due to CONVERGED_RTOL iterations 4
>>> Picard iteration 2 max(resid) = 486788 ------- Rate = 1.49465
>>> 0 KSP Residual norm 2.987417661953e+06
>>> 1 KSP Residual norm 5.046119985008e+05
>>> 2 KSP Residual norm 3.882757664605e+04
>>> 3 KSP Residual norm 4.106091678345e+03
>>> Linear solve converged due to CONVERGED_RTOL iterations 3
>>> Picard iteration 3 max(resid) = 137348 ------- Rate = 3.54418
>>> 0 KSP Residual norm 7.285359985607e+05
>>> 1 KSP Residual norm 9.132700545850e+04
>>> 2 KSP Residual norm 6.834608381922e+03
>>> Linear solve converged due to CONVERGED_RTOL iterations 2
>>> Picard iteration 4 max(resid) = 25082.7 ------- Rate = 5.47582
>>> 0 KSP Residual norm 1.207523847817e+05
>>> 1 KSP Residual norm 1.286174706329e+04
>>> Linear solve converged due to CONVERGED_RTOL iterations 1
>>> Picard iteration 5 max(resid) = 5723.21 ------- Rate = 4.38264
>>> 0 KSP Residual norm 2.211945822133e+04
>>> Linear solve converged due to CONVERGED_RTOL iterations 0
>>>
>>
>>
>

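On the questions quoted above about resetting rtol and about the -ksp_converged_use_initial_residual_norm behavior, a minimal sketch of the relevant calls, assuming a KSP that already lives inside the nonlinear loop; the helper name is made up, and in current PETSc the UIRNorm switch is KSPConvergedDefaultSetUIRNorm (older releases spelled it slightly differently):

#include <petscksp.h>

/* The default KSP test compares the residual norm (the preconditioned
   residual when left preconditioning is used) against rtol * || b ||. */
PetscErrorCode ConfigureInnerKSP(KSP ksp, PetscReal rtol)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* Reset the relative tolerance used by every subsequent KSPSolve(). */
  ierr = KSPSetTolerances(ksp, rtol, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT);CHKERRQ(ierr);
  /* Divide by the initial residual norm instead of || b ||; this is what
     -ksp_converged_use_initial_residual_norm turns on. */
  ierr = KSPConvergedDefaultSetUIRNorm(ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
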
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener