<div dir="ltr">So I decided to look at the condition number of our matrix, running with `-pc_type svd -pc_svd_monitor` and it was atrocious, roughly on the order of 1e9. After doing some scaling we are down to a condition number of 1e3, and both MF and FD operators now converge, regardless of the differencing types chosen. I would say the problem was definitely on our end!</div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Dec 12, 2017 at 2:49 PM, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="">On Tue, Dec 12, 2017 at 3:19 PM, Alexander Lindsay <span dir="ltr"><<a href="mailto:alexlindsay239@gmail.com" target="_blank">alexlindsay239@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>I'm helping debug the finite strain models in the TensorMechanics module in MOOSE, so unfortunately I don't have a nice small PetSc code I can hand you guys :-(</div><div><br></div>Hmm, interesting, if I run with `-snes_mf_operator -snes_fd -mat_mffd_type ds`, I get DIVERGED_BREAKDOWN during the initial linear solve.</div></blockquote><div><br></div></span><div>So the MF operator always converges. The FD operator does not always converge, and factorization also can fail (DIVERGED_BREAKDOWN)</div><div>so it seems that the FD operator is incorrect. Usually we have bugs with coloring, but I do not think coloring is used by -snes_fd. What happens</div><div>if you get the coloring version by just deleting the FormJacobian pointer?</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div><div class="h5"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>If I run with `-snes_fd -mat_fd_type ds`, then the solve converges.</div><div><br></div><div>So summary:</div><div><br></div><div>- J = B = finite-differenced, differencing type = wp : Solve fails due to DIVERGED_LINE_SEARCH</div><div><br></div><div>- J = B = finite-differenced, differencing type = ds : Solve converges in 3 non-linear iterations</div><div><div> 0 Nonlinear |R| = 2.259203e-02</div><div> 0 Linear |R| = 2.259203e-02</div><div> 1 Linear |R| = 6.084393e-11</div><div> 1 Nonlinear |R| = 4.780691e-03</div><div> 0 Linear |R| = 4.780691e-03</div><div> 1 Linear |R| = 8.580132e-19</div><div> 2 Nonlinear |R| = 4.806625e-09</div><div> 0 Linear |R| = 4.806625e-09</div><div> 1 Linear |R| = 1.650725e-24</div><div> 3 Nonlinear |R| = 9.603678e-12</div></div><div><br></div><div>- J = matrix-free, B = finite-differenced, mat_mffd_type = mat_fd_type = wp: Solve converges in 2 non-linear iterations</div><div><div> 0 Nonlinear |R| = 2.259203e-02</div><div> 0 Linear |R| = 2.259203e-02</div><div> 1 Linear |R| = 2.258733e-02</div><div> 2 Linear |R| = 3.103342e-06</div><div> 3 Linear |R| = 6.779865e-12</div><div> 1 Nonlinear |R| = 7.497740e-06</div><div> 0 Linear |R| = 7.497740e-06</div><div> 1 Linear |R| = 8.265413e-12</div><div> 2 Nonlinear |R| = 7.993729e-12</div></div><div><br></div><div>- J = matrix-free, B = finite-differenced, mat_mffd_type = ds, mat_fd_type = wp: DIVERGED_BREAKDOWN in linear solve</div><div><br></div><div>- J = matrix-free, B = finite-differenced, 
mat_mffd_type = wp, mat_fd_type = ds: Solve converges in 2 non-linear iterations</div><div><div> 0 Nonlinear |R| = 2.259203e-02</div><div> 0 Linear |R| = 2.259203e-02</div><div> 1 Linear |R| = 4.635397e-03</div><div> 2 Linear |R| = 5.413676e-11</div><div> 1 Nonlinear |R| = 1.068626e-05</div><div> 0 Linear |R| = 1.068626e-05</div><div> 1 Linear |R| = 7.942385e-12</div><div> 2 Nonlinear |R| = 5.444448e-11</div></div><div><br></div><div>- J = matrix-free, B = finite-differenced, mat_mffd_type = mat_fd_type = ds: Solves converges in 3 non-linear iterations:</div><div><div> 0 Nonlinear |R| = 2.259203e-02</div><div> 0 Linear |R| = 2.259203e-02</div><div> 1 Linear |R| = 1.312921e-06</div><div> 2 Linear |R| = 7.714018e-09</div><div> 1 Nonlinear |R| = 4.780690e-03</div><div> 0 Linear |R| = 4.780690e-03</div><div> 1 Linear |R| = 7.773053e-09</div><div> 2 Nonlinear |R| = 1.226836e-08</div><div> 0 Linear |R| = 1.226836e-08</div><div> 1 Linear |R| = 1.546288e-14</div><div> 3 Nonlinear |R| = 1.295982e-10</div></div><div><br></div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Dec 12, 2017 at 12:33 PM, Smith, Barry F. <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span><br>
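Since the thread has no small standalone reproducer, here is a toy sketch of the kind of setup the summary above exercises. It is not the MOOSE/TensorMechanics code; the 2x2 residual and all names are invented purely for illustration. Only the residual is hand-coded, so the J/B combinations above come entirely from the command line (e.g. `-pc_type lu -snes_fd -mat_fd_type ds`, optionally with `-snes_mf_operator -mat_mffd_type wp`), and swapping in `-pc_type svd -pc_svd_monitor` gives the singular-value/condition-number report mentioned at the top of the thread.

```c
/* Toy sketch only (not the MOOSE/TensorMechanics code): a hand-coded residual
 * for an arbitrary 2x2 nonlinear system, with the Jacobian left to PETSc so
 * the J/B combinations above can be reproduced from the command line.
 * Serial toy: run on a single MPI rank. */
#include <petscsnes.h>

static PetscErrorCode FormFunction(SNES snes, Vec X, Vec F, void *ctx)
{
  const PetscScalar *x;
  PetscScalar       *f;
  PetscErrorCode     ierr;

  ierr = VecGetArrayRead(X, &x);CHKERRQ(ierr);
  ierr = VecGetArray(F, &f);CHKERRQ(ierr);
  f[0] = x[0]*x[0] + x[0]*x[1] - 3.0;   /* depends only on X, no hidden state */
  f[1] = x[0]*x[1] + x[1]*x[1] - 6.0;
  ierr = VecRestoreArray(F, &f);CHKERRQ(ierr);
  ierr = VecRestoreArrayRead(X, &x);CHKERRQ(ierr);
  return 0;
}

int main(int argc, char **argv)
{
  SNES           snes;
  Vec            x, r;
  Mat            B;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = SNESCreate(PETSC_COMM_WORLD, &snes);CHKERRQ(ierr);
  ierr = VecCreateSeq(PETSC_COMM_WORLD, 2, &x);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &r);CHKERRQ(ierr);
  ierr = SNESSetFunction(snes, r, FormFunction, NULL);CHKERRQ(ierr);

  /* B is the explicit matrix filled by brute-force finite differences
   * (SNESComputeJacobianDefault is the routine -snes_fd selects); with
   * -snes_mf_operator the Krylov operator J becomes matrix-free while this
   * B is still the matrix the preconditioner is built from. */
  ierr = MatCreateSeqDense(PETSC_COMM_WORLD, 2, 2, NULL, &B);CHKERRQ(ierr);
  ierr = SNESSetJacobian(snes, B, B, SNESComputeJacobianDefault, NULL);CHKERRQ(ierr);

  ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);  /* -snes_fd, -snes_mf_operator, -mat_*_type, ... */
  ierr = VecSet(x, 1.0);CHKERRQ(ierr);
  ierr = SNESSolve(snes, NULL, x);CHKERRQ(ierr);

  ierr = MatDestroy(&B);CHKERRQ(ierr);
  ierr = VecDestroy(&r);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = SNESDestroy(&snes);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```

Running such a toy with `-snes_monitor -ksp_monitor -pc_type lu` plus the differencing options above is one way to step through the same matrix of cases on a small problem.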
On Tue, Dec 12, 2017 at 12:33 PM, Smith, Barry F. <bsmith@mcs.anl.gov> wrote:
> On Dec 12, 2017, at 11:26 AM, Alexander Lindsay <alexlindsay239@gmail.com> wrote:
>
> OK, I'm going to go back on my original statement... the physics being run here is a subset of a much larger set of physics; for the current set, the hand-coded Jacobian actually appears to be quite good.
>
> With the hand-coded Jacobian and -pc_type lu, the convergence is perfect:
>
>  0 Nonlinear |R| = 2.259203e-02
>      0 Linear |R| = 2.259203e-02
>      1 Linear |R| = 1.129089e-10
>  1 Nonlinear |R| = 6.295583e-11
>
> So yeah, I guess at this point I'm just curious about the different behavior between `-snes_fd` and `-snes_fd -snes_mf_operator`.

   Now that you have provided the exact options you are using, yes, this is very unexpected behavior. Is there any chance you can send us the code that reproduces this?

   The code that does the differencing in -snes_fd is similar to the code that does the differencing for -snes_mf_operator, so normally one expects similar behavior, but there are a couple of options you can try. Run with -snes_mf_operator and -help | grep mat_mffd; this will show the options that control the differencing for the matrix-free operator. For -snes_fd you have the option -mat_fd_type wp or ds.
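For concreteness, here is a small illustrative sketch (made-up residual and sizes, not MOOSE's) of what the matrix-free side does under the hood: a MATMFFD matrix wraps a residual callback and approximates y = J(u)*a by differencing the residual, and the wp/ds choice selected by -mat_mffd_type (or MatMFFDSetType) is the formula used to pick the differencing parameter h.

```c
/* Illustrative sketch only: applying a MATMFFD matrix by hand.  This is the
 * kind of object -snes_mf_operator uses for J.  Serial toy: run on one rank. */
#include <petscsnes.h>

/* Residual callback in the form MatMFFDSetFunction() expects. */
static PetscErrorCode Residual(void *ctx, Vec X, Vec F)
{
  const PetscScalar *x;
  PetscScalar       *f;
  PetscErrorCode     ierr;

  ierr = VecGetArrayRead(X, &x);CHKERRQ(ierr);
  ierr = VecGetArray(F, &f);CHKERRQ(ierr);
  f[0] = x[0]*x[0] - 2.0;
  f[1] = x[0]*x[1] - 1.0;
  ierr = VecRestoreArray(F, &f);CHKERRQ(ierr);
  ierr = VecRestoreArrayRead(X, &x);CHKERRQ(ierr);
  return 0;
}

int main(int argc, char **argv)
{
  Mat            J;
  Vec            u, Fu, a, y;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = VecCreateSeq(PETSC_COMM_WORLD, 2, &u);CHKERRQ(ierr);
  ierr = VecDuplicate(u, &Fu);CHKERRQ(ierr);
  ierr = VecDuplicate(u, &a);CHKERRQ(ierr);
  ierr = VecDuplicate(u, &y);CHKERRQ(ierr);
  ierr = VecSet(u, 1.0);CHKERRQ(ierr);                 /* linearization point */
  ierr = VecSet(a, 1.0);CHKERRQ(ierr);                 /* direction to apply J to */
  ierr = Residual(NULL, u, Fu);CHKERRQ(ierr);          /* F(u) at the base point */

  ierr = MatCreateMFFD(PETSC_COMM_WORLD, 2, 2, 2, 2, &J);CHKERRQ(ierr);
  ierr = MatMFFDSetFunction(J, Residual, NULL);CHKERRQ(ierr);
  ierr = MatMFFDSetType(J, MATMFFD_WP);CHKERRQ(ierr);  /* or MATMFFD_DS */
  ierr = MatSetFromOptions(J);CHKERRQ(ierr);           /* honors -mat_mffd_type wp|ds */
  ierr = MatMFFDSetBase(J, u, Fu);CHKERRQ(ierr);       /* differencing happens about u */
  ierr = MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = MatMult(J, a, y);CHKERRQ(ierr);               /* y ~ (F(u + h a) - F(u)) / h */
  ierr = VecView(y, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);

  ierr = MatDestroy(&J);CHKERRQ(ierr);
  ierr = VecDestroy(&u);CHKERRQ(ierr);
  ierr = VecDestroy(&Fu);CHKERRQ(ierr);
  ierr = VecDestroy(&a);CHKERRQ(ierr);
  ierr = VecDestroy(&y);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```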
<div class="m_7923614171439796409gmail-m_5289783892134572291HOEnZb"><div class="m_7923614171439796409gmail-m_5289783892134572291h5"><br>
<br>
> Does the hand-coded result change your opinion, Matt, that the rules for FormFunction/Jacobian might be violated?
>
> I understand that a finite-difference Jacobian is only an approximation of the true Jacobian. However, in the absence of possible complications like Matt suggested, where an on-the-fly calculation might stand a better chance of capturing the behavior, I would expect both `-snes_mf_operator -snes_fd` and `-snes_fd` to suffer from the same approximations, right?
>
> On Tue, Dec 12, 2017 at 9:43 AM, Matthew Knepley <knepley@gmail.com> wrote:
> On Tue, Dec 12, 2017 at 11:30 AM, Alexander Lindsay <alexlindsay239@gmail.com> wrote:
> I'm not using any hand-coded Jacobians.
>
> This looks to me like the rules for FormFunction/Jacobian() are being broken. If the residual function
> depends on some third variable, and it changes between calls independent of the solution U, then
> the stored Jacobian could look wrong, but one done every time on the fly might converge.
>
>    Matt
>
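To make that rule concrete, here is a contrived sketch (not MOOSE code, names invented) of the kind of violation described above: the residual quietly depends on state that mutates between calls, so a Jacobian assembled once by finite differences no longer matches the residuals the line search evaluates later, while a matrix-free operator that re-differences the current residual at every multiply can still appear consistent.

```c
/* Contrived illustration of breaking the FormFunction contract: the residual
 * must depend only on X (and fixed data), but FormFunctionBad also depends on
 * 'hidden', which is updated as a side effect of each call. */
#include <petscsnes.h>

typedef struct {
  PetscReal hidden;  /* mutable state the residual should NOT depend on */
} AppCtx;

static PetscErrorCode FormFunctionBad(SNES snes, Vec X, Vec F, void *ptr)
{
  AppCtx            *ctx = (AppCtx *)ptr;
  const PetscScalar *x;
  PetscScalar       *f;
  PetscErrorCode     ierr;

  ierr = VecGetArrayRead(X, &x);CHKERRQ(ierr);
  ierr = VecGetArray(F, &f);CHKERRQ(ierr);
  f[0] = x[0]*x[0] - 2.0 + ctx->hidden;  /* depends on hidden state: wrong */
  ctx->hidden += 1.0e-3;                 /* side effect between calls: wrong */
  ierr = VecRestoreArray(F, &f);CHKERRQ(ierr);
  ierr = VecRestoreArrayRead(X, &x);CHKERRQ(ierr);
  return 0;
}

/* The fix: anything the residual needs must either be part of X or stay frozen
 * for the duration of the nonlinear solve (e.g. updated only between time
 * steps), so that F(X) is reproducible for a given X. */
static PetscErrorCode FormFunctionGood(SNES snes, Vec X, Vec F, void *ptr)
{
  const PetscScalar *x;
  PetscScalar       *f;
  PetscErrorCode     ierr;

  ierr = VecGetArrayRead(X, &x);CHKERRQ(ierr);
  ierr = VecGetArray(F, &f);CHKERRQ(ierr);
  f[0] = x[0]*x[0] - 2.0;                /* pure function of X */
  ierr = VecRestoreArray(F, &f);CHKERRQ(ierr);
  ierr = VecRestoreArrayRead(X, &x);CHKERRQ(ierr);
  return 0;
}
```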
> Case 1 options: -snes_fd -pc_type lu
>
>  0 Nonlinear |R| = 2.259203e-02
>      0 Linear |R| = 2.259203e-02
>      1 Linear |R| = 7.821248e-11
>  1 Nonlinear |R| = 2.258733e-02
>      0 Linear |R| = 2.258733e-02
>      1 Linear |R| = 5.277296e-11
>  2 Nonlinear |R| = 2.258733e-02
>      0 Linear |R| = 2.258733e-02
>      1 Linear |R| = 5.993971e-11
> Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations 2
>
> Case 2 options: -snes_fd -snes_mf_operator -pc_type lu
>
>  0 Nonlinear |R| = 2.259203e-02
>      0 Linear |R| = 2.259203e-02
>      1 Linear |R| = 2.258733e-02
>      2 Linear |R| = 3.103342e-06
>      3 Linear |R| = 6.779865e-12
>  1 Nonlinear |R| = 7.497740e-06
>      0 Linear |R| = 7.497740e-06
>      1 Linear |R| = 8.265413e-12
>  2 Nonlinear |R| = 7.993729e-12
> Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 2
>
> On Tue, Dec 12, 2017 at 9:12 AM, zakaryah . <zakaryah@gmail.com> wrote:
> When you say "Jacobians are bad" and "debugging the Jacobians", do you mean that the hand-coded Jacobian is wrong? In that case, why would you be surprised that the finite difference Jacobians, which are "correct" up to approximation error, perform better? Otherwise, what does "Jacobians are bad" mean - ill-conditioned? Singular? Not symmetric? Not positive definite?
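One concrete way to answer "is the hand-coded Jacobian wrong?" is to evaluate both the user Jacobian and the brute-force finite-difference one at the same state and compare them. Below is a hedged sketch of such a check, assuming a SNES whose function and hand-coded Jacobian routine are already set, and with Jhand/Jfd preallocated to the right sizes (dense is simplest for this kind of comparison); the helper name and its surroundings are hypothetical, not part of the thread.

```c
#include <petscsnes.h>

/* Hypothetical helper: report ||J_hand - J_fd||_F / ||J_hand||_F at state X.
 * Assumes SNESSetFunction() and SNESSetJacobian() (with the hand-coded
 * routine) have already been called. */
static PetscErrorCode CompareHandCodedJacobian(SNES snes, Vec X, Mat Jhand, Mat Jfd)
{
  PetscReal      diff, ref;
  PetscErrorCode ierr;

  ierr = SNESComputeJacobian(snes, X, Jhand, Jhand);CHKERRQ(ierr);          /* routine set with SNESSetJacobian() */
  ierr = SNESComputeJacobianDefault(snes, X, Jfd, Jfd, NULL);CHKERRQ(ierr); /* brute-force FD, as used by -snes_fd */
  ierr = MatAXPY(Jfd, -1.0, Jhand, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);/* Jfd <- Jfd - Jhand */
  ierr = MatNorm(Jfd, NORM_FROBENIUS, &diff);CHKERRQ(ierr);
  ierr = MatNorm(Jhand, NORM_FROBENIUS, &ref);CHKERRQ(ierr);
  ierr = PetscPrintf(PetscObjectComm((PetscObject)snes),
                     "||J_hand - J_fd|| / ||J_hand|| = %g\n",
                     (double)(ref > 0.0 ? diff / ref : diff));CHKERRQ(ierr);
  return 0;
}
```

Whether the answer turns out to be "wrong entries" or "ill-conditioned" points to different fixes (a coding bug versus scaling or formulation), which is exactly the distinction being asked about here.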
>
>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>