<div dir="ltr"><div dir="ltr">On Thu, May 4, 2023 at 8:21 AM Mark Lohry <<a href="mailto:mlohry@gmail.com">mlohry@gmail.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Do they start very similarly and then slowly drift further apart? </blockquote><div><br></div><div>Yes, this. I take it this sounds familiar?<br></div><div><br></div><div>See these two examples with 20 fixed iterations pasted at the end. The difference for one solve is slight (the final SNES norm is identical to 5 digits), but in the context I'm using it in (repeated applications to solve a steady-state multigrid problem, though here just one level) the differences add up such that I might reach global convergence in 35 iterations or 38. It's not the end of the world, but I was expecting that with -np 1 these would be identical, and I'm not sure where the root cause would be.</div></div></div></div></blockquote><div><br></div><div>The initial KSP residual is different, so it's the PC. Please send the output of -snes_view. 
If your ASM is using a direct factorization, then it</div><div>could be randomness in whatever LU you are using.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div> 0 SNES Function norm 2.801842107848e+04 <br> 0 KSP Residual norm 4.045639499595e+01 <br> 1 KSP Residual norm 1.917999809040e+01 <br> 2 KSP Residual norm 1.616048521958e+01 <br></div><div>[...]</div><div> 19 KSP Residual norm 8.788043518111e-01 <br> 20 KSP Residual norm 6.570851270214e-01 <br> Linear solve converged due to CONVERGED_ITS iterations 20<br> 1 SNES Function norm 1.801309983345e+03 <br>Nonlinear solve converged due to CONVERGED_ITS iterations 1</div><div><br></div><div><br></div><div>Same system, identical initial 0 SNES norm; the 0 KSP norm is slightly different:<br></div><div></div><div><br></div><div> 0 SNES Function norm 2.801842107848e+04 <br> 0 KSP Residual norm 4.045639473002e+01 <br> 1 KSP Residual norm 1.917999883034e+01 <br> 2 KSP Residual norm 1.616048572016e+01 <br></div><div>[...]</div><div> 19 KSP Residual norm 8.788046348957e-01 <br> 20 KSP Residual norm 6.570859588610e-01 <br> Linear solve converged due to CONVERGED_ITS iterations 20<br> 1 SNES Function norm 1.801311320322e+03 <br>Nonlinear solve converged due to CONVERGED_ITS iterations 1</div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, May 3, 2023 at 11:05 PM Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div><br></div> Do they start very similarly and then slowly drift further apart? That is, for the first couple of KSP iterations they are almost identical, but then with each iteration they drift a bit further apart. 
Similarly for the SNES iterations: they start close, and then over more iterations and more solves they move further apart. Or do they suddenly jump to be very different? You can run with -snes_monitor -ksp_monitor to check. <br><div><br><blockquote type="cite"><div>On May 3, 2023, at 9:07 PM, Mark Lohry <<a href="mailto:mlohry@gmail.com" target="_blank">mlohry@gmail.com</a>> wrote:</div><br><div><div dir="auto"><div>This is on a single MPI rank. I haven't checked the coloring, was just guessing there. But the solutions/residuals are slightly different from run to run.</div><div dir="auto"><br></div><div dir="auto">Fair to say that for serial JFNK/asm ilu0/gmres we should expect bitwise identical results?</div><div dir="auto"><br></div><div dir="auto"><br><div class="gmail_quote" dir="auto"><div dir="ltr" class="gmail_attr">On Wed, May 3, 2023, 8:50 PM Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
No, the coloring should be identical every time. Do you see differences with 1 MPI rank (or much smaller ones)?<br>
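For intuition on how such drift can arise at all (this is generic floating-point behavior, not anything PETSc-specific): floating-point addition is not associative, so any change in the order of operations, e.g. a different update order inside a factorization or triangular solve, can flip the low-order bits of a result, and the iterative solve then amplifies those bits into a slowly growing difference in the residuals. A minimal sketch:

```python
# Floating-point addition is not associative: regrouping the same three
# numbers changes the rounding and hence the result.
a = (0.1 + 0.2) + 0.3   # rounds to 0.6000000000000001
b = 0.1 + (0.2 + 0.3)   # rounds to 0.6
print(a == b)           # False

# The same numbers summed in two different orders give different answers:
# adding 1.0 into 1e16 first loses it entirely (it is below one ulp of
# 1e16), while cancelling the large terms first preserves it.
left_to_right = (1e16 + 1.0) + -1e16   # 0.0
reordered     = (1e16 + -1e16) + 1.0   # 1.0
print(left_to_right, reordered)
```

An O(1 ulp) perturbation like this in a preconditioner application is exactly the kind of seed that grows into residuals agreeing to only 5-6 digits after 20 Krylov iterations.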
<br>
<br>
<br>
> On May 3, 2023, at 8:42 PM, Mark Lohry <<a href="mailto:mlohry@gmail.com" rel="noreferrer" target="_blank">mlohry@gmail.com</a>> wrote:<br>
> <br>
> I'm running multiple iterations of newtonls with an MFFD/JFNK nonlinear solver where I give it the sparsity. PC asm, KSP gmres, with SNESSetLagJacobian -2 (compute the Jacobian once and then keep it frozen).<br>
> <br>
> I'm seeing slight (&lt;1%) but nonzero differences in residuals from run to run. I'm wondering where randomness might enter here -- does the Jacobian coloring use a random seed?<br>
<br>
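The setup described above can be expressed as command-line options; this is only a rough sketch of the described configuration (the application name ./app is hypothetical, and the original run may set these programmatically instead):

```sh
# JFNK with an assembled preconditioning matrix (-snes_mf_operator),
# Jacobian built once and then frozen (-snes_lag_jacobian -2),
# GMRES preconditioned by ASM with ILU(0) subdomain solves.
mpiexec -n 1 ./app -snes_type newtonls -snes_mf_operator \
    -snes_lag_jacobian -2 -ksp_type gmres \
    -pc_type asm -sub_pc_type ilu \
    -snes_monitor -ksp_monitor -snes_view
```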
</blockquote></div></div></div>
</div></blockquote></div><br></div></blockquote></div>
</blockquote></div><br clear="all"><div><br></div><span class="gmail_signature_prefix">-- </span><br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>