[petsc-users] how to stop SNES linesearch (l^2 minimization) from choosing obviously suboptimal lambda?

Matthew Knepley knepley at gmail.com
Thu Jan 26 07:57:42 CST 2017


On Thu, Jan 26, 2017 at 2:20 AM, Andrew McRae <A.T.T.McRae at bath.ac.uk>
wrote:

> Okay.  I discarded bt quite early since I have no reason to think the
> default step size (lambda = 1) is 'good', due to the partial Jacobian.  But
> I can try it again.
>
> cp sometimes behaves well, but at other times I've seen it do something
> crazy, like taking lambda = 2.5 on the first step.  Because of the
> Monge-Ampère convexity requirements, the linear system at the second
> step is then malformed and the solver dies.
>
> I also briefly tried nleqerr in the past and found it to take a huge
> number of iterations, but I can try that again.
>

Line search is not good at all for functions that wiggle on the scale of
your step size. You could try trust region, although I am not sure that is
better.
Lots of people use "annealing" for this kind of thing, but that is a lot of
work.
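
For reference, switching to trust region is a one-option change.  A
minimal petsc4py sketch (only the snes_type name is taken from the PETSc
options database; the surrounding setup is assumed):

    from petsc4py import PETSc

    opts = PETSc.Options()
    # Newton with a trust region instead of a line search
    opts["snes_type"] = "newtontr"

    snes = PETSc.SNES().create()
    snes.setFromOptions()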

  Thanks,

     Matt


> On 25 January 2017 at 19:57, Matthew Knepley <knepley at gmail.com> wrote:
>
>> On Wed, Jan 25, 2017 at 1:13 PM, Andrew McRae <A.T.T.McRae at bath.ac.uk>
>> wrote:
>>
>>> I have a nonlinear problem in which the line search procedure is making
>>> 'obviously wrong' choices for lambda.  My nonlinear solver options (going
>>> via petsc4py) include {"snes_linesearch_type": "l2",
>>> "snes_linesearch_max_it": 3}.
>>>
>>> After the residual has decreased monotonically by about 4 orders of
>>> magnitude, I get the following:
>>>
>>>  15 SNES Function norm 9.211230243067e-06
>>>       Line search: lambdas = [1., 0.5, 0.], fnorms = [3.13039e-05, 3.14838e-05, 9.21123e-06]
>>>       Line search: lambdas = [1.25615, 1.12808, 1.], fnorms = [3.14183e-05, 3.13437e-05, 3.13039e-05]
>>>       Line search: lambdas = [0.91881, 1.08748, 1.25615], fnorms = [3.12969e-05, 3.13273e-05, 3.14183e-05]
>>>       Line search terminated: lambda = 0.918811, fnorms = 3.12969e-05
>>>  16 SNES Function norm 3.129688997145e-05
>>>       Line search: lambdas = [1., 0.5, 0.], fnorms = [3.09357e-05, 1.58135e-05, 3.12969e-05]
>>>       Line search: lambdas = [0.503912, 0.751956, 1.], fnorms = [1.59287e-05, 2.33645e-05, 3.09357e-05]
>>>       Line search: lambdas = [0.0186202, 0.261266, 0.503912], fnorms = [3.07204e-05, 9.11e-06, 1.59287e-05]
>>>       Line search terminated: lambda = 0.342426, fnorms = 1.12885e-05
>>>  17 SNES Function norm 1.128846081676e-05
>>>       Line search: lambdas = [1., 0.5, 0.], fnorms = [3.09448e-05, 5.94789e-06, 1.12885e-05]
>>>       Line search: lambdas = [0.295379, 0.64769, 1.], fnorms = [8.09996e-06, 4.46782e-06, 3.09448e-05]
>>>       Line search: lambdas = [0.48789, 0.391635, 0.295379], fnorms = [6.07286e-06, 7.07842e-06, 8.09996e-06]
>>>       Line search terminated: lambda = 0.997854, fnorms = 3.09222e-05
>>>  18 SNES Function norm 3.092215965860e-05
>>>
>>> So, in iteration 16, the lambda chosen is 0.91..., even though we see
>>> that lambda close to 0 would give a smaller residual.  In iteration 18, we
>>> see that some lambda around 0.65 gives a far smaller residual (approx 4e-6)
>>> than the 0.997... value that gets used (which gives approx 3e-5).  The
>>> nonlinear iterations then get caught in some kind of cycle until my
>>> snes_max_it is reached [full log attached].
>>>
>>> I guess this is an artifact of (if I understand correctly) minimizing
>>> a polynomial fitted to the residual norms at the evaluated values of
>>> lambda?  But it's frustrating that it leads to 'obviously wrong'
>>> results!
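>>>
>>> As a toy illustration of that failure mode (standalone Python with
>>> numpy, not PETSc's actual l2 implementation): fitting a parabola
>>> through the three sampled (lambda, fnorm) pairs and taking its
>>> minimizer can land far from the best sampled lambda when the residual
>>> is not close to quadratic in lambda:
>>>
>>>     import numpy as np
>>>
>>>     # first sample triple from the iteration-18 line search above
>>>     lambdas = np.array([1.0, 0.5, 0.0])
>>>     fnorms = np.array([3.09448e-05, 5.94789e-06, 1.12885e-05])
>>>
>>>     # quadratic fit p(lam) = a*lam**2 + b*lam + c through the samples
>>>     a, b, c = np.polyfit(lambdas, fnorms, 2)
>>>
>>>     # minimizer of the fitted parabola (meaningful only when a > 0)
>>>     lam_star = -b / (2.0 * a)
>>>     print(lam_star)  # ~0.34, far from the best sampled lambda ~0.65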
>>>
>>
>> There might be better line searches for this problem. For example, 'bt'
>> should be more robust than 'l2', and 'cp' tries really hard to find a
>> minimum. The 'nleqerr' search is Deuflhard's method, which should also
>> be more robust. I would try them out to see if any of them does better.
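>>
>> A quick way to sweep those alternatives without touching the code is
>> through the options database.  A sketch (run_solve and the existing
>> snes object are hypothetical placeholders for the application's solve
>> loop):
>>
>>     from petsc4py import PETSc
>>
>>     opts = PETSc.Options()
>>     for ls_type in ("bt", "cp", "nleqerr"):
>>         opts["snes_linesearch_type"] = ls_type
>>         snes.setFromOptions()  # re-read options on the existing SNES
>>         run_solve(snes)        # hypothetical wrapper around snes.solve()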
>>
>>   Matt
>>
>>
>>> For background: this comes from an FE discretisation of a Monge-Ampère
>>> equation (several timesteps into a time-varying problem).  For reasons
>>> related to the Monge-Ampère convexity requirements, I use a partial
>>> Jacobian that omits a term from the linearisation of the residual, so
>>> the nonlinear convergence is not expected to be quadratic.
>>>
>>> Andrew
>>>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener