<div dir="ltr">Mark, Matthew and Barry,<div><br></div><div>Thank you all for the quick responses.</div><div><br></div><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div><div><font color="#351c75">Others might have a better idea, but you could run with '-info :ksp' and see if you see any messages like "Linear solver has created a not a number (NaN) as the residual norm, declaring divergence \n"</font></div></div><div><div><font color="#351c75">You could also run with -log_trace and see if it is using KSPConvergedDefault. I'm not sure if this is the method used given your parameters, but I think it is.</font></div></div></blockquote><div>Mark, I ran with both options. I didn't get any messages like "linear solver has created a not a number..." when using -info: ksp. When turning on -log_trace, I could verify that it is using KSPConvergedDefault but what does it mean exactly? When FGMRES converges with the true residual being NaN, I get the following message: [0] KSPConvergedDefault(): Linear solver has converged. Residual norm 8.897908325511e-05 is less than relative tolerance 1.000000000000e-08 times initial right hand side norm 1.466597558465e+04 at iteration 53. No information about NaN whatsoever.</div><div><br></div><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div><div><font color="#351c75">We check for NaN or Inf, for example, in KSPCheckDot(). if you have the KSP set to error (<a href="https://petsc.org/main/docs/manualpages/KSP/KSPSetErrorIfNotConverged.html" target="_blank">https://petsc.org/main/docs/manualpages/KSP/KSPSetErrorIfNotConverged.html</a>)</font></div></div><div><div><font color="#351c75">then we throw an error, but the return codes do not seem to be checked in your implementation. If not, then we set the flag for divergence.</font></div></div></blockquote><div>Matthew, I do not check the return code in this case because I don't want PETSc to stop if an error occurs during the solving step. I just want to know that it didn't converge and treat this error inside my code. The problem is that the flag for divergence is not always being set when FGMRES is not converging. I was just wondering why it was set during time step 921 and why not for time step 922 as well.</div><div><br></div><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div><div><font color="#351c75">Thanks for the complete report. It looks like we may be missing a check in our FGMRES implementation that allows the iteration to continue after a NaN/Inf. </font></div></div><div><div><font color="#351c75"><br></font></div></div><div><div><font color="#351c75"> I will explain how we handle the checking and then attach a patch that you can apply to see if it resolves the problem. Whenever our KSP solvers compute a norm we</font></div></div><div><div><font color="#351c75">check after that calculation to verify that the norm is not an Inf or Nan. This is an inexpensive global check across all MPI ranks because immediately after the norm computation all ranks that share the KSP have the same value. If the norm is a Inf or Nan we "short-circuit" the KSP solve and return immediately with an appropriate not converged code. A quick eye-ball inspection of the FGMRES code found a missing check. 
> Thanks for the complete report. It looks like we may be missing a check in our FGMRES implementation that allows the iteration to continue after a NaN/Inf.
>
> I will explain how we handle the checking and then attach a patch that you can apply to see if it resolves the problem. Whenever our KSP solvers compute a norm, we check after that calculation to verify that the norm is not an Inf or NaN. This is an inexpensive global check across all MPI ranks because, immediately after the norm computation, all ranks that share the KSP have the same value. If the norm is an Inf or NaN, we "short-circuit" the KSP solve and return immediately with an appropriate not-converged code. A quick eyeball inspection of the FGMRES code found a missing check.
>
> You can apply the attached patch file in the PETSC_DIR with
>
> patch -p1 < fgmres.patch
> make libs
>
> then rerun your code and see if it now handles the Inf/NaN correctly. If so, we'll patch our release branch with the fix.

Thank you for checking this, Barry. I applied the patch exactly as you instructed; however, the problem is still happening. Is there a way to check whether the patch was in fact applied? You can see the terminal information in the attached screenshot.
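In the meantime, my understanding of the kind of guard Barry describes, moved to the user side, would be roughly the following (my own sketch and naming, not part of PETSc or of the patch): recompute the true residual after KSPSolve and treat an Inf/NaN norm as a failure regardless of the reported converged reason.

#include <petscksp.h>

/* User-side sketch, in the spirit of the internal Inf/NaN norm checks
   Barry describes: recompute the true residual r = b - A x after the
   solve and flag an Inf/NaN norm as a failure even if the KSP reported
   convergence. Name and structure are only illustrative. */
static PetscErrorCode TrueResidualIsFinite(Mat A, Vec b, Vec x, PetscBool *finite)
{
  Vec            r;
  PetscReal      rnorm;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = VecDuplicate(b, &r);CHKERRQ(ierr);
  ierr = MatMult(A, x, r);CHKERRQ(ierr);        /* r = A x     */
  ierr = VecAYPX(r, -1.0, b);CHKERRQ(ierr);     /* r = b - A x */
  ierr = VecNorm(r, NORM_2, &rnorm);CHKERRQ(ierr);
  *finite = PetscIsInfOrNanReal(rnorm) ? PETSC_FALSE : PETSC_TRUE;
  ierr = VecDestroy(&r);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

This is only a workaround on my side, of course, while the missing check inside FGMRES is sorted out.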
Kind regards,

Giovane

On Fri, Feb 25, 2022 at 13:48, Barry Smith <bsmith@petsc.dev> wrote:

> Giovane,
>
> Thanks for the complete report. It looks like we may be missing a check in our FGMRES implementation that allows the iteration to continue after a NaN/Inf.
>
> I will explain how we handle the checking and then attach a patch that you can apply to see if it resolves the problem. Whenever our KSP solvers compute a norm, we check after that calculation to verify that the norm is not an Inf or NaN. This is an inexpensive global check across all MPI ranks because, immediately after the norm computation, all ranks that share the KSP have the same value. If the norm is an Inf or NaN, we "short-circuit" the KSP solve and return immediately with an appropriate not-converged code. A quick eyeball inspection of the FGMRES code found a missing check.
>
> You can apply the attached patch file in the PETSC_DIR with
>
> patch -p1 < fgmres.patch
> make libs
>
> then rerun your code and see if it now handles the Inf/NaN correctly. If so, we'll patch our release branch with the fix.
>
> Barry
>
>> On Feb 25, 2022, at 11:06 AM, Giovane Avancini via petsc-users <petsc-users@mcs.anl.gov> wrote:
>>
>> Dear PETSc users,
>>
>> I'm working on an in-house code that solves the Navier-Stokes equations in a Lagrangian fashion for free-surface flows. Because of the large distortions and pressure gradients, it is quite common to encounter issues with iterative solvers at some time steps, so I implemented a function that changes the solver type based on the flag KSPConvergedReason: if this flag is negative after a call to KSPSolve, I solve the same linear system again using a direct method.
>>
>> The problem is that, sometimes, KSP reports convergence even though the residual is NaN, and because of that I'm not able to detect the failure and change the solver, which leads to a solution vector equal to Inf and the code obviously ends up crashing. Is it normal to observe this kind of behaviour?
>>
>> Please find attached the log produced with the options -ksp_monitor_lg_residualnorm -ksp_log -ksp_view -ksp_monitor_true_residual -ksp_converged_reason, and the function that changes the solver. I'm currently using FGMRES with a BJACOBI preconditioner and LU on each block; the problem still happens with ILU, for example. We can see in the log file that for time step 921 the true residual is NaN, and within just one iteration the solver fails with the reason DIVERGED_PC_FAILED. I simply changed the solver to MUMPS and it converged for that time step. However, when solving time step 922, we can see that FGMRES converges while the true residual is NaN. Why is that possible? I would appreciate it if someone could clarify this issue for me.
>>
>> Kind regards,
>> Giovane
>>
>> --
>> Giovane Avancini
>> Doutorando em Engenharia de Estruturas - Escola de Engenharia de São Carlos, USP
>> PhD researcher in Structural Engineering - School of Engineering of São Carlos, USP
<span id="gmail-m_6259310096911551287gmail-m_1094472000501772005gmail-m_3468206820950415506gmail-m_-7933021170587843311cid:f_l02lsct61"><function.txt></span><span id="gmail-m_6259310096911551287gmail-m_1094472000501772005gmail-m_3468206820950415506gmail-m_-7933021170587843311cid:f_l02lscsb0"><log.txt></span></div></blockquote></div><br></div></blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><font size="1" face="tahoma, sans-serif">Giovane Avancini</font><div><font size="1" face="tahoma, sans-serif">Doutorando em Engenharia de Estruturas - Escola de Engenharia de São Carlos, USP</font></div><div><font size="1" face="tahoma, sans-serif"><br></font></div><div><font size="1" face="tahoma, sans-serif">PhD researcher in Structural Engineering - School of Engineering of São Carlos. USP</font></div></div></div></div></div>