[petsc-users] SNES: approximating the Jacobian with computed residuals?

Peter Brune prbrune at gmail.com
Tue Apr 22 16:58:47 CDT 2014


I accidentally replied off-list; resending to the list.


On Tue, Apr 22, 2014 at 4:55 PM, Peter Brune <prbrune at gmail.com> wrote:

>
> On Tue, Apr 22, 2014 at 4:40 PM, Fischer, Greg A. <
> fischega at westinghouse.com> wrote:
>
>>
>> From: Peter Brune [mailto:prbrune at gmail.com]
>> Sent: Tuesday, April 22, 2014 12:44 PM
>> To: Fischer, Greg A.
>> Cc: petsc-users at mcs.anl.gov
>> Subject: Re: [petsc-users] SNES: approximating the Jacobian with
>> computed residuals?
>>
>>
>> On Tue, Apr 22, 2014 at 10:56 AM, Fischer, Greg A. <
>> fischega at westinghouse.com> wrote:
>>
>> From: Peter Brune [mailto:prbrune at gmail.com]
>> Sent: Tuesday, April 22, 2014 10:16 AM
>> To: Fischer, Greg A.
>> Cc: petsc-users at mcs.anl.gov
>> Subject: Re: [petsc-users] SNES: approximating the Jacobian with
>> computed residuals?
>>
>> On Tue, Apr 22, 2014 at 8:48 AM, Fischer, Greg A. <
>> fischega at westinghouse.com> wrote:
>>
>> Hello PETSc-users,
>>
>> I'm using the SNES component with the NGMRES method in my application.
>> I'm using a matrix-free context for the Jacobian and the
>> MatMFFDComputeJacobian() function in my FormJacobian routine. My
>> understanding is that this effectively approximates the Jacobian using the
>> equation at the bottom of Page 103 in the PETSc User's Manual. This works,
>> but the expense of computing two function evaluations in each SNES
>> iteration nearly wipes out the performance improvements over Picard
>> iteration.
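>>
>> For reference, my setup looks roughly like the following (a minimal
>> sketch with hypothetical names, written against the 3.5-style
>> SNESSetJacobian calling sequence; older releases pass a Mat* and a
>> MatStructure* instead):
>>
>>   #include <petscsnes.h>
>>
>>   /* FormJacobian callback that only refreshes the base point of the
>>      matrix-free Jacobian; J was created with MatCreateSNESMF(). */
>>   static PetscErrorCode FormJacobian(SNES snes, Vec x, Mat J, Mat P,
>>                                      void *ctx)
>>   {
>>     return MatMFFDComputeJacobian(snes, x, J, P, ctx);
>>   }
>>
>>   /* ... during setup, with snes already created ... */
>>   Mat J;
>>   MatCreateSNESMF(snes, &J);
>>   SNESSetJacobian(snes, J, J, FormJacobian, NULL);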
>>
>> Try -snes_type anderson.  It's less stable than NGMRES, but requires one
>> function evaluation per iteration.  The manual is out of date.  I guess
>> it's time to fix that.  It's interesting that the cost of matrix assembly
>> and a linear solve is around the same as that of a function evaluation.
>>  Output from -log_summary would help in the diagnosis.
>>
>> I tried the -snes_type anderson option, and it seems to be requiring even
>> more function evaluations than the Picard iterations. I've attached
>> -log_summary output. This seems strange, because I can use the NLKAIN code (
>> http://nlkain.sourceforge.net/) to fairly good effect, and I've read
>> that it's related to Anderson mixing. Would it be useful to adjust the
>> parameters?
>>
>> If I recall correctly, NLKAIN is yet another improvement on Anderson
>> Mixing.  Our NGMRES is what's in O/W and is built largely around being
>> nonlinearly preconditionable with something strong like FAS.  What is the
>> perceived difference in convergence? (What does -snes_monitor say?) Any
>> number of tolerances may differ between the two methods, and it's
>> hard to judge without knowing much, much more.  Seeing what happens when
>> one changes the parameters is of course important if you're looking at
>> performance.
>>
>> I'm not looking to apply a preconditioner, so it sounds like perhaps
>> NGMRES isn't a good choice for this application.
>>
>
> It all depends on the characteristics of your problem.
>
>
>>  I tried a different problem (one that is larger and more realistic),
>> and found the SNESAnderson performance to be substantially better. NLKAIN
>> still converges faster, though. (NLKAIN: ~1350 function calls;
>> SNESAnderson: ~1800 function calls; fixed-point: ~2550 function calls).
>> The -snes_monitor output shows steadily decreasing function norms.
>>
>
> Playing around with -snes_anderson_beta will in all likelihood yield some
> performance improvements; perhaps significant.  If I run
> snes/examples/tutorials/ex5 with -snes_anderson_beta 1 and
> -snes_anderson_beta 0.1, the difference is 304 vs. 103 iterations.  I may have
> to go look and see what NLKAIN does for damping; most Anderson Mixing
> implementations seem to do nothing complicated for this.  Anything adaptive
> would take more function evaluations.
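>
> For concreteness, those ex5 runs were along the lines of (reconstructed
> command, not verbatim):
>
>   ./ex5 -snes_type anderson -snes_anderson_beta 0.1 -snes_monitor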
>
>
>> The -snes_anderson_monitor option doesn't seem to produce any output.
>> I've tried passing "-snes_anderson_monitor" and "-snes_anderson_monitor
>> true" as options, but don't see any output analogous to
>> "-snes_ngmres_monitor". What's the correct way to pass that option?
>>
>>
> In the -snes_ngmres_monitor case, the monitor prints what happens with the
> choice between the candidate and the minimized solutions, and whether the
> restart conditions are in play.  In Anderson mixing, the first part doesn't
> apply, so -snes_anderson_monitor only prints anything if the method is
> restarting; the default is no restart, so nothing will show up.  Changing to
> periodic restart or difference restart (which takes more reductions) will
> yield other output.
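>
> As a sketch, something like the following should produce restart output
> (option names as reported by -help on my build; double-check on yours):
>
>   -snes_type anderson -snes_anderson_monitor -snes_anderson_restart_type periodic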
>
>
>>
>> By Picard, you mean simple fixed-point iteration, right?  What
>> constitutes a Picard iteration is a longstanding argument on this list and
>> therefore requires clarification, unfortunately. :)  This (without
>> linesearch) can be duplicated in PETSc with -snes_type nrichardson
>> -snes_linesearch_type basic.  For a typical problem one must damp this with
>> -snes_linesearch_damping <damping parameter>.  Picking the damping by hand
>> is what the linesearch is there to avoid, but the linesearch takes more
>> function evaluations.
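>>
>> That is, something like (executable name and damping value purely
>> illustrative):
>>
>>   ./myapp -snes_type nrichardson -snes_linesearch_type basic \
>>           -snes_linesearch_damping 0.5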
>>
>> Yes, I mean fixed-point iteration.
>>
>
> Cool.
>
>
>>
>>
>> I've also attached -log_summary output for NGMRES. Does anything jump out
>> as being amiss?
>>
>>       ##########################################################
>>       #                                                        #
>>       #                          WARNING!!!                    #
>>       #                                                        #
>>       #   This code was compiled with a debugging option,      #
>>       #   To get timing results run ./configure                #
>>       #   using --with-debugging=no, the performance will      #
>>       #   be generally two or three times faster.              #
>>       #                                                        #
>>       ##########################################################
>>
>> Timing comparisons aren't reasonable with debugging on.
>>
>> Yes, I understand. At this point, I'm just comparing numbers of function
>> evaluations.
>>
>
> OK.
>
> - Peter
>
>>
>> Based on my (limited) understanding of the Oosterlee/Washio SIAM paper
>> ("Krylov Subspace Acceleration of Nonlinear Multigrid..."), it seems
>> possible to approximate the Jacobian with a series of previously-computed
>> residuals (eq 2.14), rather than with additional function evaluations in
>> each iteration. Is this correct? If so, could someone point me to a
>> reference that demonstrates how to do this with PETSc?
>>
>> What indication do you have that the Jacobian is calculated at all in the
>> NGMRES method?  The two function evaluations are related to computing the
>> quantities labeled F(u_M) and F(u_A) in O/W.  We already use the Jacobian
>> approximation for the minimization problem (2.14).
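>>
>> Schematically (my paraphrase of O/W, not a quote from the paper): the
>> accelerated iterate and the least-squares problem behind (2.14) are
>>
>>   u_A = u_M + \sum_i \alpha_i (u_i - u_M),
>>
>>   \min_{\alpha} \| F(u_M) + \sum_i \alpha_i ( F(u_i) - F(u_M) ) \|_2,
>>
>> which uses only the stored residuals F(u_i) from previous iterates.
>> That is the sense in which the Jacobian action is approximated from
>> already-computed residuals, with no extra function evaluations.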
>>
>> - Peter
>>
>> Thanks for the clarification.
>>
>> -Greg
>>
>> Or, perhaps a better question to ask is: are there other ways of reducing
>> the computational burden associated with estimating the Jacobian?
>>
>> Thanks,
>> Greg