[petsc-users] Simultaneously compute Residual+Jacobian in SNES

Jed Brown jed at jedbrown.org
Sun Dec 11 23:36:13 CST 2016


Derek Gaston <friedmud at gmail.com> writes:

> A quick note: I'm not hugely invested in this idea... I'm just talking it
> out since I started it.  The issues might outweigh potential gains...
>
> On Sun, Dec 11, 2016 at 4:02 PM Matthew Knepley <knepley at gmail.com> wrote:
>
>> I consider this bad code management more than an analytical case for the
>> technique, but I can see the point.
>>
>
> Can you expand on that?  Do you believe automatic differentiation in
> general to be "bad code management"?

AD that prevents calling the non-AD function is bad AD.
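
(To be concrete, a toy sketch of the distinction -- not any particular
library's design: the kernel is written once, generic over the scalar
type, so the plain-double residual stays callable and AD is opt-in.)

  // Physics kernel written once, generic over the scalar type.
  template <typename Scalar>
  Scalar flux(const Scalar &u) { return u * u; }   // stand-in physics

  struct Dual { double v, d; };                    // toy forward-mode AD type
  Dual operator*(const Dual &a, const Dual &b) {
    return {a.v * b.v, a.d * b.v + a.v * b.d};
  }

  // The non-AD path remains a first-class function...
  double residual(double u) { return flux(u); }
  // ...and the derivative is available when you ask for it.
  Dual residual_and_derivative(double u) { return flux(Dual{u, 1.0}); }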

>>    - Extremely complex, heavy shape function evaluation (think super high
>> order with first and second derivatives needing to be computed)
>>
>> I honestly do not understand this one. Maybe I do not understand high
>> order since I never use it. If I want to compute an integral, I have
>> the basis functions tabulated. I understand that for high order, you use a
>> tensor product evaluation, but you still tabulate in 1D. What is
>> being recomputed here?
>>
>
> On an unstructured mesh you still have to compute the reference->physical
> map for each element

Yes.

> and map all of the gradients/second derivatives to physical space.

No, you can apply the transform at the quadrature points instead; that
is fewer flops and lets the evaluation of reference gradients vectorize
over elements.  libMesh is just written to do a matrix-matrix product so
that physical gradient matrices can be handed to users.  That is
convenient, but not optimal.
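
A minimal sketch of what I mean (names and data layouts are
illustrative, not libMesh's): compute the field's reference gradient at
each quadrature point, then apply the coordinate transform once per
point rather than once per basis function.

  enum { NQ = 8, NB = 27 };  /* quad points and basis fns per element (assumed) */

  /* Dref[q][i][d]: reference gradient of basis i at quadrature point q,
     tabulated once for the reference element. */
  void grad_u_physical(const double Dref[NQ][NB][3],
                       const double Jinv[NQ][3][3], /* inverse coordinate Jacobian */
                       const double ue[NB],         /* element coefficients */
                       double gradu[NQ][3])         /* physical gradient of u */
  {
    for (int q = 0; q < NQ; q++) {
      double gref[3] = {0, 0, 0};
      /* Reference gradient of the field: Dref is shared by every element,
         so this contraction vectorizes across elements. */
      for (int i = 0; i < NB; i++)
        for (int d = 0; d < 3; d++)
          gref[d] += Dref[q][i][d] * ue[i];
      /* One 3x3 apply of J^{-T} per quadrature point, not one per basis
         function as when building physical gradient matrices. */
      for (int d = 0; d < 3; d++)
        gradu[q][d] = Jinv[q][0][d]*gref[0] + Jinv[q][1][d]*gref[1]
                    + Jinv[q][2][d]*gref[2];
    }
  }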

Gradients of physical coordinates and inversion of the 3x3 coordinate
Jacobians at each quadrature point are a real cost, though there are a
lot of scenarios in which it is less expensive to store them than to
recompute them.
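
The recompute in question is one determinant plus a cofactor inverse per
point, so store-versus-recompute is a plain memory/flops tradeoff:
roughly ten doubles of storage against a few dozen flops per quadrature
point.  For concreteness (again just a sketch):

  /* Invert the 3x3 coordinate Jacobian at one quadrature point via
     cofactors; the determinant is needed anyway for the quadrature weight. */
  double invert3(const double J[3][3], double Jinv[3][3])
  {
    double det = J[0][0]*(J[1][1]*J[2][2] - J[1][2]*J[2][1])
               - J[0][1]*(J[1][0]*J[2][2] - J[1][2]*J[2][0])
               + J[0][2]*(J[1][0]*J[2][1] - J[1][1]*J[2][0]);
    double s = 1.0 / det;
    Jinv[0][0] =  (J[1][1]*J[2][2] - J[1][2]*J[2][1]) * s;
    Jinv[0][1] = -(J[0][1]*J[2][2] - J[0][2]*J[2][1]) * s;
    Jinv[0][2] =  (J[0][1]*J[1][2] - J[0][2]*J[1][1]) * s;
    Jinv[1][0] = -(J[1][0]*J[2][2] - J[1][2]*J[2][0]) * s;
    Jinv[1][1] =  (J[0][0]*J[2][2] - J[0][2]*J[2][0]) * s;
    Jinv[1][2] = -(J[0][0]*J[1][2] - J[0][2]*J[1][0]) * s;
    Jinv[2][0] =  (J[1][0]*J[2][1] - J[1][1]*J[2][0]) * s;
    Jinv[2][1] = -(J[0][0]*J[2][1] - J[0][1]*J[2][0]) * s;
    Jinv[2][2] =  (J[0][0]*J[1][1] - J[0][1]*J[1][0]) * s;
    return det;
  }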

>> Ah, so you are saying that the process of field evaluation at the
>> quadrature points is expensive because you have so many fields.
>> It feels very similar to the material case, but I cannot articulate why.
>
> It is similar: it's all about how much information you have to recompute at
> each quadrature point.  I was simply giving different scenarios for why you
> could end up with heavy calculations at each quadrature point that feed
> into both the Residual and Jacobian calculations.

Are all the fields in unique function spaces that need different
transforms or different quadratures?  If not, it seems like the presence
of many fields would already amortize the geometric overhead of visiting
an element.

When the fields are coupled through an expensive material model, I
realize that solving that model can be the dominant cost.  In the limit
of expensive materials, it is possible that the Jacobian and residual
have essentially the same cost.  I.e., the Jacobian can be updated for
free when you compute the residual.  Obviously you can just do that and
save 2x on residual/Jacobian evaluation.  Alternatively, you could cache
the effective material coefficient (and its gradient) at each quadrature
point during residual evaluation, thus avoiding a re-solve when building
the Jacobian.  I would recommend that unless you know that line searches
are rare.
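
Concretely, the caching variant could look like the following
(hypothetical names, not an existing PETSc or MOOSE API); the Jacobian
kernel reads the cache instead of re-solving the material model:

  typedef struct {
    double *coeff;   /* effective coefficient at each quadrature point */
    double *dcoeff;  /* its derivative w.r.t. the solution             */
  } MatCache;

  /* Stand-in for an expensive local material solve (hypothetical). */
  static void solve_material(double u, double *k, double *dk)
  {
    *k  = 1.0 + u * u;  /* toy coefficient k(u)  */
    *dk = 2.0 * u;      /* and its derivative    */
  }

  /* Residual kernel at quadrature point q for flux = k(u) grad u: pays
     for the material solve once and caches what the Jacobian will need. */
  void residual_qp(MatCache *mc, int q, double u, const double gu[3],
                   double flux[3])
  {
    double k, dk;
    solve_material(u, &k, &dk);
    mc->coeff[q]  = k;
    mc->dcoeff[q] = dk;
    for (int d = 0; d < 3; d++) flux[d] = k * gu[d];
  }

  /* Jacobian kernel at the same point: reads the cache.  This is valid
     for Newton with a line search because the Jacobian is requested at
     the last accepted state, where the residual was just evaluated; a
     rejected trial step only costs another residual. */
  void jacobian_qp(const MatCache *mc, int q, const double gu[3],
                   double dflux_du[3], double *dflux_dgu)
  {
    for (int d = 0; d < 3; d++)
      dflux_du[d] = mc->dcoeff[q] * gu[d];  /* chain rule through k(u)  */
    *dflux_dgu = mc->coeff[q];              /* d(flux)/d(grad u) = k I  */
  }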

It is far more common that the Jacobian is _much_ more expensive than
the residual, in which case the mere possibility of a line search (or of
converging) would justify deferring the Jacobian.  I think it's much
better to make residuals and Jacobians fast independently, then perhaps
make the residual do some cheap caching, and worry about second-guessing
Newton only as a last resort.  That said, I have no doubt that we could
demonstrate some benefit to using heuristics and a relative cost model
to sometimes compute residuals and Jacobians together.  It just isn't
that interesting and I think the gains are likely small and will
generate lots of bikeshedding about the heuristic.
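
For the record, the structure I'm advocating is just the standard SNES
one: register the residual and Jacobian independently and let SNES
decide when each is needed, with any quadrature-point cache living in
the shared user context.  Schematically (AppCtx and the assembly bodies
are placeholders):

  #include <petscsnes.h>

  typedef struct {
    void *qp_cache;  /* e.g. the MatCache from the sketch above */
  } AppCtx;

  static PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, void *ctx)
  {
    AppCtx *user = (AppCtx *)ctx;
    PetscFunctionBeginUser;
    /* assemble the residual at x into f; fill user->qp_cache as a side effect */
    PetscFunctionReturn(0);
  }

  static PetscErrorCode FormJacobian(SNES snes, Vec x, Mat J, Mat P, void *ctx)
  {
    AppCtx *user = (AppCtx *)ctx;
    PetscFunctionBeginUser;
    /* assemble into P (and J if different), reusing user->qp_cache */
    PetscFunctionReturn(0);
  }

  /* Registration: SNES skips FormJacobian entirely during line-search
     trial steps and after convergence. */
  static PetscErrorCode SetupSolver(SNES snes, Vec r, Mat J, AppCtx *user)
  {
    PetscErrorCode ierr;
    PetscFunctionBeginUser;
    ierr = SNESSetFunction(snes, r, FormFunction, user);CHKERRQ(ierr);
    ierr = SNESSetJacobian(snes, J, J, FormJacobian, user);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }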

