<div dir="ltr"><div dir="ltr">On Sat, Sep 2, 2023 at 6:58 PM David Knezevic <<a href="mailto:david.knezevic@akselos.com">david.knezevic@akselos.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hmm, it sounds like the convergence measure is bad. Maybe using a weighted norm would be better?</blockquote><div><br></div><div>That's a good thought, I'd like to look into that idea too. Could you please give me some guidance on how to use a weighted norm in the convergence test? (Or are there any examples of doing that in the example suite?)</div></div></blockquote><div><br></div><div>I guess the idea would be the following:</div><div><br></div><div> You have exponential terms in your equation. You get a relatively small residual, but unacceptable</div><div> error. This leads me to believe that some part of the residual is swamping other parts you care about.</div><div> This should be confirmed by plotting the residual for one of these 0-iterate cases. To prevent this</div><div> imbalance, you would scale the residual to give more weight to the underrepresented parts, probably</div><div> with an inverse of the exponential. This is just a more complicated form of the diagonal scaling</div><div> commonly used to give all fields the same scale.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Thanks,</div><div>David</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Sep 2, 2023 at 5:54 PM Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Sat, Sep 2, 2023 at 5:45 PM David Knezevic <<a href="mailto:david.knezevic@akselos.com" target="_blank">david.knezevic@akselos.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>OK, thanks, I'll look into the custom convergence test.</div><div><br></div><div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">I do not understand this comment. What do you mean by "inaccurate"? Since we do not have the true solution, we usually say "inaccurate" for large residual, but you already said that the residual is small. Why would you want to do another iterate?</blockquote></div><div dir="ltr"><br></div><div>I agree with your comments, but the specific case I'm considering is very numerically sensitive since it includes creep (which unfortunately has large exponential terms in it) which is the root cause of the issues I'm facing. Based on test cases with a known reference solution we're finding that we get inaccurate results due to steps with "zero iterations". We can fix this by tightening the tolerance but then we do an excessive number of iterations in other steps. 
  Thanks,

     Matt

> Thanks,
> David
>
> On Sat, Sep 2, 2023 at 5:54 PM Matthew Knepley <knepley@gmail.com> wrote:
>
>> On Sat, Sep 2, 2023 at 5:45 PM David Knezevic <david.knezevic@akselos.com> wrote:
>>
>>> OK, thanks, I'll look into the custom convergence test.
>>>
>>>> I do not understand this comment. What do you mean by "inaccurate"?
>>>> Since we do not have the true solution, we usually say "inaccurate"
>>>> for a large residual, but you already said that the residual is small.
>>>> Why would you want to do another iterate?
>>>
>>> I agree with your comments, but the specific case I'm considering is
>>> very numerically sensitive, since it includes creep (which unfortunately
>>> has large exponential terms in it), and that is the root cause of the
>>> issues I'm facing. Based on test cases with a known reference solution,
>>> we're finding that we get inaccurate results due to steps with "zero
>>> iterations". We can fix this by tightening the tolerance, but then we
>>> do an excessive number of iterations in other steps. So it seems to me
>>> that ensuring that we do at least one iteration will help here, so
>>> that's what I wanted to try.
>>
>> Hmm, it sounds like the convergence measure is bad. Maybe using a
>> weighted norm would be better?
>>
>>   Thanks,
>>
>>      Matt
>>
>>> Thanks again for your help.
>>>
>>> Best,
>>> David
>>>
>>> On Sat, Sep 2, 2023 at 3:23 PM Matthew Knepley <knepley@gmail.com> wrote:
>>>
>>>> On Sat, Sep 2, 2023 at 3:05 PM David Knezevic via petsc-users <petsc-users@mcs.anl.gov> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> I'm using the SNES solver for a plasticity model, and the issue I've
>>>>> run into is that in some time steps the solver terminates after
>>>>> "NL step 0" since the initial residual (based on the solution from
>>>>> the previous time step) is below the specified tolerance.
>>>>>
>>>>> I gather that "NL step 0" only checks the residual and doesn't
>>>>> actually do a Newton update, and hence it seems that this is leading
>>>>> to inaccurate results in some cases.
>>>>
>>>> I do not understand this comment. What do you mean by "inaccurate"?
>>>> Since we do not have the true solution, we usually say "inaccurate"
>>>> for a large residual, but you already said that the residual is small.
>>>> Why would you want to do another iterate?
>>>>
>>>>> I can of course specify a smaller convergence tolerance to avoid this
>>>>> issue, but I've found it difficult to find a smaller tolerance that
>>>>> works well in all cases (e.g. it leads to too many iterations or
>>>>> non-convergence). So instead, what I would like to do is ensure that
>>>>> the solver does at least one Newton iteration instead of terminating
>>>>> at "NL step 0". Is there a way to enforce this behavior, e.g. by
>>>>> skipping "NL step 0" or specifying a "minimum number of iterations"?
>>>>> I didn't see anything like this in the documentation, so I was
>>>>> wondering if there are any suggestions on how to proceed.
>>>>
>>>> The easiest way to do this is to write a custom convergence test that
>>>> looks like this:
>>>>
>>>>   PetscErrorCode SkipFirstIterationConverged(SNES snes, PetscInt it, PetscReal xnorm, PetscReal snorm, PetscReal fnorm, SNESConvergedReason *reason, void *dummy)
>>>>   {
>>>>     PetscFunctionBeginUser;
>>>>     /* At iteration 0, report "still iterating" so at least one Newton step is taken */
>>>>     if (!it) {
>>>>       *reason = SNES_CONVERGED_ITERATING;
>>>>       PetscFunctionReturn(PETSC_SUCCESS);
>>>>     }
>>>>     /* From iteration 1 on, fall back to the default convergence test */
>>>>     PetscCall(SNESConvergedDefault(snes, it, xnorm, snorm, fnorm, reason, dummy));
>>>>     PetscFunctionReturn(PETSC_SUCCESS);
>>>>   }
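>>>>
>>>> You would then register it before the solve, along these lines (here
>>>> the NULL context and destroy arguments are just the simplest choice):
>>>>
>>>>   PetscCall(SNESSetConvergenceTest(snes, SkipFirstIterationConverged, NULL, NULL));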
</blockquote></div><br clear="all"><div><br></div><span class="gmail_signature_prefix">-- </span><br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>
</blockquote></div><br clear="all"><div><br></div><span class="gmail_signature_prefix">-- </span><br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>
</blockquote></div><br clear="all"><div><br></div><span class="gmail_signature_prefix">-- </span><br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>