On Mon, Dec 12, 2016 at 12:36 AM Jed Brown <jed@jedbrown.org> wrote:

> > Can you expand on that? Do you believe automatic differentiation in
> > general to be "bad code management"?
>
> AD that prevents calling the non-AD function is bad AD.

That's not exactly the problem. Even if you can call both an AD and a
non-AD residual, you still end up doing the residual work twice when you
compute the residual and the Jacobian separately: the AD Jacobian
evaluation already produces the residual value as a byproduct.

It's not the end of the world... but it was something that prompted me to
ask the question.
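To make that concrete, here's a minimal forward-mode AD sketch (toy dual
numbers, not MOOSE's actual AD types): a single AD pass produces both R(u)
and dR/du, so asking for them separately redoes the residual work.

    #include <array>
    #include <cstdio>

    // A minimal forward-mode dual number: a value plus derivatives with
    // respect to two unknowns. (Hypothetical sketch, not a real AD library.)
    struct Dual {
      double val;
      std::array<double, 2> der;
      Dual(double v = 0.0) : val(v), der{{0.0, 0.0}} {}                 // constant
      Dual(double v, int i) : val(v), der{{0.0, 0.0}} { der[i] = 1.0; } // unknown i
    };

    Dual operator*(const Dual & a, const Dual & b) {
      Dual r(a.val * b.val);
      for (int i = 0; i < 2; ++i)
        r.der[i] = a.der[i] * b.val + a.val * b.der[i];
      return r;
    }

    Dual operator-(const Dual & a, const Dual & b) {
      Dual r(a.val - b.val);
      for (int i = 0; i < 2; ++i)
        r.der[i] = a.der[i] - b.der[i];
      return r;
    }

    // Toy residual R(u) = (u0*u1 - 2, u0 - u1), templated so the same code
    // runs with plain doubles (residual only) or Duals (residual + Jacobian).
    template <typename T>
    std::array<T, 2> residual(const std::array<T, 2> & u) {
      return {{u[0] * u[1] - T(2.0), u[0] - u[1]}};
    }

    int main() {
      // One AD pass yields both R(u) (the .val fields) and the
      // Jacobian dR/du (the .der fields).
      std::array<Dual, 2> u = {{Dual(2.0, 0), Dual(1.5, 1)}};
      auto r = residual(u);
      std::printf("R = (%g, %g)\n", r[0].val, r[1].val);
      std::printf("J = [%g %g; %g %g]\n",
                  r[0].der[0], r[0].der[1], r[1].der[0], r[1].der[1]);
      return 0;
    }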
> Are all the fields in unique function spaces that need different
> transforms or different quadratures? If not, it seems like the presence
> of many fields would already amortize the geometric overhead of visiting
> an element.

These were two separate examples. Expensive shape functions, by
themselves, could warrant computing the residual and Jacobian
simultaneously. Also: many variables, by themselves, could do the same.

> Alternatively, you could cache the effective material coefficient (and
> its gradient) at each quadrature point during residual evaluation, thus
> avoiding a re-solve when building the Jacobian.

I agree with this. We have some support for it in MOOSE now... and more
plans for better support in the future. It's a classic time/space
tradeoff.
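Just to spell out the pattern for the list, something like this (names
made up, not the actual MOOSE API):

    #include <cmath>
    #include <vector>

    // Per-quadrature-point cache of an effective material coefficient k(u)
    // and its derivative, filled during residual evaluation and reused
    // during Jacobian assembly. Hypothetical sketch.
    struct CoefficientCache {
      std::vector<double> k;      // k(u) at each quadrature point
      std::vector<double> dk_du;  // dk/du at each quadrature point
    };

    // Stand-in for something expensive, e.g. an inner material solve.
    double effective_coefficient(double u) { return 1.0 + std::tanh(u); }
    double effective_coefficient_deriv(double u) {
      const double t = std::tanh(u);
      return 1.0 - t * t;
    }

    void compute_residual(const std::vector<double> & u_qp,
                          CoefficientCache & cache,
                          std::vector<double> & res_qp) {
      const std::size_t nqp = u_qp.size();
      cache.k.resize(nqp);
      cache.dk_du.resize(nqp);
      res_qp.resize(nqp);
      for (std::size_t qp = 0; qp < nqp; ++qp) {
        cache.k[qp] = effective_coefficient(u_qp[qp]);  // pay once here...
        cache.dk_du[qp] = effective_coefficient_deriv(u_qp[qp]);
        res_qp[qp] = cache.k[qp] * u_qp[qp];            // toy residual k(u)*u
      }
    }

    void compute_jacobian(const std::vector<double> & u_qp,
                          const CoefficientCache & cache,
                          std::vector<double> & jac_qp) {
      const std::size_t nqp = u_qp.size();
      jac_qp.resize(nqp);
      for (std::size_t qp = 0; qp < nqp; ++qp) {
        // ...reuse here: d/du [k(u)*u] = k + u * dk/du, with no re-solve.
        jac_qp[qp] = cache.k[qp] + u_qp[qp] * cache.dk_du[qp];
      }
    }

    int main() {
      std::vector<double> u = {0.1, 0.2, 0.3}, r, j;
      CoefficientCache cache;
      compute_residual(u, cache, r);  // fills the cache
      compute_jacobian(u, cache, j);  // consumes it
      return 0;
    }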
> I would recommend that unless you know that line searches are rare.

BTW: Many (most?) of our most complex applications _disable_ line search.
Over the years we've found line search to be more of a hindrance than a
help, and we typically prefer some sort of "physics based" damped Newton.
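By "physics based" I mean something in this spirit (a toy sketch; real
apps pick problem-specific limits):

    #include <algorithm>
    #include <vector>

    // Toy "physics based" damping: instead of a line search on the residual
    // norm, shrink the Newton step just enough that the new iterate stays
    // physically admissible (here: a field that must stay positive, like a
    // density). Assumes the current iterate u is itself admissible.
    double physics_damping(const std::vector<double> & u,
                           const std::vector<double> & du,
                           double floor = 1e-8) {
      double alpha = 1.0;
      for (std::size_t i = 0; i < u.size(); ++i)
        if (u[i] + du[i] < floor)  // the full step would go unphysical here
          alpha = std::min(alpha, (floor - u[i]) / du[i]);
      return alpha;
    }

    int main() {
      std::vector<double> u = {1.0, 0.5}, du = {-0.2, -0.7};
      const double alpha = physics_damping(u, du);  // < 1: step would cross 0
      for (std::size_t i = 0; i < u.size(); ++i)
        u[i] += alpha * du[i];  // damped update instead of the full step
      return 0;
    }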
> It is far more common that the Jacobian is _much_ more expensive than
> the residual, in which case the mere possibility of a line search (or of
> converging) would justify deferring the Jacobian. I think it's much
> better to make residuals and Jacobians fast independently, then perhaps
> make the residual do some cheap caching, and worry about second-guessing
> Newton only as a last resort.

I think I agree. These are definitely "fringe" cases... for most
applications Jacobians are _way_ more expensive.
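To make that point concrete: each Newton step pays for exactly one
Jacobian, but a backtracking line search may evaluate the residual several
times per step, so cheap residual-only evaluation matters. A scalar toy
(f(u) = atan(u), where full Newton steps overshoot):

    #include <cmath>
    #include <cstdio>

    double f(double u) { return std::atan(u); }         // "residual"
    double fp(double u) { return 1.0 / (1.0 + u * u); } // "Jacobian"

    int main() {
      double u = 3.0;  // far enough out that undamped Newton diverges
      for (int it = 0; it < 30 && std::fabs(f(u)) > 1e-12; ++it) {
        const double du = -f(u) / fp(u);  // one Jacobian per step
        const double r0 = std::fabs(f(u));
        double alpha = 1.0;
        int evals = 0;
        for (;;) {
          ++evals;  // one residual-only evaluation per trial step
          if (std::fabs(f(u + alpha * du)) < r0 || alpha < 1e-4)
            break;
          alpha *= 0.5;  // backtrack using the residual alone
        }
        u += alpha * du;
        std::printf("it %2d: u = %-12g residual evals = %d\n", it, u, evals);
      }
      return 0;
    }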
> That said, I have no doubt that we could demonstrate some benefit to
> using heuristics and a relative cost model to sometimes compute
> residuals and Jacobians together. It just isn't that interesting and I
> think the gains are likely small and will generate lots of bikeshedding
> about the heuristic.

I agree here too. It could be done... but I think you've convinced me
that it's not worth the trouble :-)
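(For anyone who does want to try it anyway, the heuristic would presumably
look something like the following; every name and number below is made up:)

    #include <cstdio>

    // Hypothetical cost model for deciding whether to fuse residual and
    // Jacobian evaluation: fuse only when the fused evaluation is expected
    // to be cheaper than evaluating on demand.
    struct EvalCosts {
      double residual;  // measured cost of a residual-only evaluation
      double jacobian;  // measured cost of a Jacobian-only evaluation
      double fused;     // measured cost of computing both together
    };

    bool should_fuse(const EvalCosts & c, double prob_jacobian_needed) {
      // Deferred: always pay the residual; pay the Jacobian only when it
      // turns out to be needed (estimate the probability from history,
      // e.g. the fraction of recent residuals followed by a Jacobian).
      const double deferred = c.residual + prob_jacobian_needed * c.jacobian;
      return c.fused < deferred;
    }

    int main() {
      // AD-ish case: fused costs barely more than the Jacobian alone.
      const EvalCosts ad = {1.0, 5.0, 5.2};
      std::printf("fuse at p=0.9: %d\n", should_fuse(ad, 0.9));  // 1 (5.2 < 5.5)
      std::printf("fuse at p=0.5: %d\n", should_fuse(ad, 0.5));  // 0 (5.2 > 3.5)
      return 0;
    }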
Thanks for the discussion everyone!

Derek