<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Thu, May 22, 2014 at 1:01 PM, Jean-Arthur Louis Olive <span dir="ltr"><<a href="mailto:jaolive@mit.edu" target="_blank">jaolive@mit.edu</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word">Hi Matt,<div>our underlying problem is the mismatch between KSP ans SNES norms, even when solving a simple linear system, e.g.,</div>
<div class=""><div><br></div><div><span style="background-color:rgb(255,255,255)"> for (j=info->ys; j<info->ys+info->ym; j++) {</span><br style="background-color:rgb(255,255,255)"><span style="background-color:rgb(255,255,255)"> for (i=info->xs; i<info->xs+info->xm; i++) {</span><br style="background-color:rgb(255,255,255)">
<span style="background-color:rgb(255,255,255)"> f[j][i].P = x[j][i].P - 3;</span><br style="background-color:rgb(255,255,255)"><span style="background-color:rgb(255,255,255)"> f[j][i].vx= x[j][i].vx - 3*x[j][i].vy;</span><br style="background-color:rgb(255,255,255)">
<span style="background-color:rgb(255,255,255)"> f[j][i].vy= x[j][i].vy - 2;</span><br style="background-color:rgb(255,255,255)"><span style="background-color:rgb(255,255,255)"> f[j][i].T = x[j][i].T;</span><span style="background-color:rgb(255,255,255)"> </span><span style="background-color:rgb(255,255,255)"> </span><br>
<span style="background-color:rgb(255,255,255)"> }</span><br style="background-color:rgb(255,255,255)"><span style="background-color:rgb(255,255,255)"> }</span></div><div><span style="background-color:rgb(255,255,255)"><br>
</span></div></div><div><span style="background-color:rgb(255,255,255)">which should not have any conditioning issue. So I don’t think in this case it’s an accuracy problem- but something could be wrong with the FD estimation of our Jacobian (?)</span></div>
</div></blockquote><div><br></div><div>I think you are misinterpreting the output. As I said before, the FD Jacobian will only be accurate to about</div><div>1.0e-7 (which is what I see with my own code). Thus it will only match the SNES residual to this precision.</div>
<div>If you want an exact match, you need to code up the exact Jacobian.</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
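For the coupled test residual quoted above, a minimal sketch of what such an
exact Jacobian routine could look like (not from this thread; it assumes a 2d
DMDA with dof = 4 ordered as P, vx, vy, T, registration through
SNESSetJacobian(), and the Jacobian callback signature of recent PETSc
releases) is:

/* Minimal sketch: exact Jacobian for the coupled test residual
 *   f.P = P - 3, f.vx = vx - 3*vy, f.vy = vy - 2, f.T = T
 * Assumes a 2d DMDA with dof = 4, components ordered (P, vx, vy, T) = (0, 1, 2, 3),
 * registered with SNESSetJacobian(snes, J, J, FormTestJacobian, NULL).       */
#include <petscsnes.h>
#include <petscdmda.h>

PetscErrorCode FormTestJacobian(SNES snes, Vec X, Mat J, Mat Jpre, void *ctx)
{
  DM             da;
  DMDALocalInfo  info;
  MatStencil     row, col[2];
  PetscScalar    v[2];
  PetscInt       i, j;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = SNESGetDM(snes, &da);CHKERRQ(ierr);
  ierr = DMDAGetLocalInfo(da, &info);CHKERRQ(ierr);
  for (j = info.ys; j < info.ys + info.ym; j++) {
    for (i = info.xs; i < info.xs + info.xm; i++) {
      /* P row: d(P - 3)/dP = 1 */
      row.i = i; row.j = j; row.k = 0; row.c = 0;
      col[0] = row; v[0] = 1.0;
      ierr = MatSetValuesStencil(Jpre, 1, &row, 1, col, v, INSERT_VALUES);CHKERRQ(ierr);
      /* vx row: d(vx - 3*vy)/dvx = 1, d(vx - 3*vy)/dvy = -3 (the coupling term) */
      row.c = 1;
      col[0] = row;               v[0] =  1.0;
      col[1] = row; col[1].c = 2; v[1] = -3.0;
      ierr = MatSetValuesStencil(Jpre, 1, &row, 2, col, v, INSERT_VALUES);CHKERRQ(ierr);
      /* vy row: d(vy - 2)/dvy = 1 */
      row.c = 2; col[0] = row; v[0] = 1.0;
      ierr = MatSetValuesStencil(Jpre, 1, &row, 1, col, v, INSERT_VALUES);CHKERRQ(ierr);
      /* T row: dT/dT = 1 */
      row.c = 3; col[0] = row; v[0] = 1.0;
      ierr = MatSetValuesStencil(Jpre, 1, &row, 1, col, v, INSERT_VALUES);CHKERRQ(ierr);
    }
  }
  ierr = MatAssemblyBegin(Jpre, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(Jpre, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  if (J != Jpre) {
    ierr = MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}

With an exact Jacobian along these lines, the final KSP true residual norm
and the next SNES function norm should agree to machine precision rather
than to roughly 1.0e-7.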
<div style="word-wrap:break-word"><span class="HOEnZb"><font color="#888888"><div><span style="background-color:rgb(255,255,255)">Arthur</span></div></font></span><div><div class="h5"><div><br></div><div><br><div><div>On May 22, 2014, at 11:52 AM, Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:</div>
<br><blockquote type="cite"><div style="font-family:Helvetica;font-size:12px;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px">
<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Thu, May 22, 2014 at 12:47 PM, Jean-Arthur Louis Olive<span> </span><span dir="ltr"><<a href="mailto:jaolive@mit.edu" target="_blank">jaolive@mit.edu</a>></span><span> </span>wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Hi Barry,<br>sorry about the late reply-<br>We indeed use structured grids (DMDA 2d) - but do not ever provide a Jacobian for our non-linear stokes problem (instead just rely on petsc's FD approximation). I understand "snes_type test" is meant to compare petsc’s Jacobian with a user-provided analytical Jacobian.<br>
Are you saying we should provide an exact Jacobian for our simple linear test and see if there’s a problem with the approximate Jacobian?<br></blockquote><div><br></div><div>The Jacobian computed by PETSc uses a finite-difference approximation, and thus is only accurate to maybe 1.0e-7</div>
<div>depending on the conditioning of your system. Are you trying to compare things that are more precise than that? You</div><div>can provide an exact Jacobian to get machine accuracy.</div><div><br></div><div> Matt</div>
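[Note: the finite-difference Jacobian is formed with a differencing step of
roughly sqrt(machine epsilon), i.e. about 1.0e-8 in double precision, so
entry-wise errors on the order of 1.0e-8 to 1.0e-7 are expected even for a
well-conditioned system; that is presumably where the 1.0e-7 figure above
comes from.]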
>>> Thanks,
>>> Arthur & Eric
>>>
>>>> If you are using DMDA and either DMGetColoring or the SNESSetDM approach
>>>> and dof is 4 then we color each of the 4 variables per grid point with a
>>>> different color so coupling between variables within a grid point is not
>>>> a problem. This would not explain the problem you are seeing below.
>>>>
>>>> Run your code with -snes_type test and read the results and follow the
>>>> directions to debug your Jacobian.
>>>>
>>>>    Barry
>>>>
>>>> On May 13, 2014, at 1:20 PM, Jean-Arthur Louis Olive <jaolive@MIT.EDU> wrote:
>>>>> Hi all,
>>>>> we are using PETSc to solve the steady state Stokes equations with
>>>>> non-linear viscosities using finite differences. Recently we have
>>>>> realized that our true residual norm after the last KSP solve did not
>>>>> match the next SNES function norm when solving the linear Stokes
>>>>> equations.
>>>>>
>>>>> So to understand this better, we set up two extremely simple linear
>>>>> residuals, one with no coupling between variables (vx, vy, P and T),
>>>>> the other with one coupling term (shown below).
>>>>>
>>>>> RESIDUAL 1 (NO COUPLING):
>>>>> for (j=info->ys; j<info->ys+info->ym; j++) {
>>>>>   for (i=info->xs; i<info->xs+info->xm; i++) {
>>>>>     f[j][i].P  = x[j][i].P - 3000000;
>>>>>     f[j][i].vx = 2*x[j][i].vx;
>>>>>     f[j][i].vy = 3*x[j][i].vy - 2;
>>>>>     f[j][i].T  = x[j][i].T;
>>>>>   }
>>>>> }
>>>>>
>>>>> RESIDUAL 2 (ONE COUPLING TERM):
>>>>> for (j=info->ys; j<info->ys+info->ym; j++) {
>>>>>   for (i=info->xs; i<info->xs+info->xm; i++) {
>>>>>     f[j][i].P  = x[j][i].P - 3;
>>>>>     f[j][i].vx = x[j][i].vx - 3*x[j][i].vy;
>>>>>     f[j][i].vy = x[j][i].vy - 2;
>>>>>     f[j][i].T  = x[j][i].T;
>>>>>   }
>>>>> }
>>>>>
>>>>> and our default set of options is:
>>>>>
>>>>> OPTIONS: mpiexec -np $np ../Stokes -snes_max_it 4 -ksp_atol 2.0e+2
>>>>> -ksp_max_it 20 -ksp_rtol 9.0e-1 -ksp_type fgmres -snes_monitor
>>>>> -snes_converged_reason -snes_view -log_summary -options_left 1
>>>>> -ksp_monitor_true_residual -pc_type none -snes_linesearch_type cp
>>>>>
>>>>> With the uncoupled residual (Residual 1), we get matching KSP and SNES
>>>>> norms, highlighted below:
>>>>>
>>>>> Result from Solve - RESIDUAL 1
>>>>>  0 SNES Function norm 8.485281374240e+07
>>>>>    0 KSP unpreconditioned resid norm 8.485281374240e+07 true resid norm 8.485281374240e+07 ||r(i)||/||b|| 1.000000000000e+00
>>>>>    1 KSP unpreconditioned resid norm 1.131370849896e+02 true resid norm 1.131370849896e+02 ||r(i)||/||b|| 1.333333333330e-06
>>>>>  1 SNES Function norm 1.131370849896e+02
>>>>>    0 KSP unpreconditioned resid norm 1.131370849896e+02 true resid norm 1.131370849896e+02 ||r(i)||/||b|| 1.000000000000e+00
>>>>>  2 SNES Function norm 1.131370849896e+02
>>>>> Nonlinear solve converged due to CONVERGED_SNORM_RELATIVE iterations 2
>>>>>
>>>>> With the coupled residual (Residual 2), the norms do not match, see below:
>>>>>
>>>>> Result from Solve - RESIDUAL 2:
>>>>>  0 SNES Function norm 1.019803902719e+02
>>>>>    0 KSP unpreconditioned resid norm 1.019803902719e+02 true resid norm 1.019803902719e+02 ||r(i)||/||b|| 1.000000000000e+00
>>>>>    1 KSP unpreconditioned resid norm 8.741176309016e+01 true resid norm 8.741176309016e+01 ||r(i)||/||b|| 8.571428571429e-01
>>>>>  1 SNES Function norm 1.697056274848e+02
>>>>>    0 KSP unpreconditioned resid norm 1.697056274848e+02 true resid norm 1.697056274848e+02 ||r(i)||/||b|| 1.000000000000e+00
>>>>>    1 KSP unpreconditioned resid norm 5.828670868165e-12 true resid norm 5.777940247956e-12 ||r(i)||/||b|| 3.404683942184e-14
>>>>>  2 SNES Function norm 3.236770473841e-07
>>>>> Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 2
>>>>>
>>>>> Lastly, if we add -snes_fd to our options, the norms for Residual 2 get
>>>>> better - they match after the first iteration but not after the second.
>>>>>
>>>>> Result from Solve with -snes_fd - RESIDUAL 2
>>>>>  0 SNES Function norm 8.485281374240e+07
>>>>>    0 KSP unpreconditioned resid norm 8.485281374240e+07 true resid norm 8.485281374240e+07 ||r(i)||/||b|| 1.000000000000e+00
>>>>>    1 KSP unpreconditioned resid norm 2.039607805429e+02 true resid norm 2.039607805429e+02 ||r(i)||/||b|| 2.403700850300e-06
>>>>>  1 SNES Function norm 2.039607805429e+02
>>>>>    0 KSP unpreconditioned resid norm 2.039607805429e+02 true resid norm 2.039607805429e+02 ||r(i)||/||b|| 1.000000000000e+00
>>>>>    1 KSP unpreconditioned resid norm 2.529822128436e+01 true resid norm 2.529822128436e+01 ||r(i)||/||b|| 1.240347346045e-01
>>>>>  2 SNES Function norm 2.549509757105e+01 [SLIGHTLY DIFFERENT]
>>>>>    0 KSP unpreconditioned resid norm 2.549509757105e+01 true resid norm 2.549509757105e+01 ||r(i)||/||b|| 1.000000000000e+00
>>>>>  3 SNES Function norm 2.549509757105e+01
>>>>> Nonlinear solve converged due to CONVERGED_SNORM_RELATIVE iterations 3
>>>>>
>>>>> Does this mean that our Jacobian is not approximated properly by the
>>>>> default "coloring" method when it has off-diagonal terms?
>>>>>
>>>>> Thanks a lot,
>>>>> Arthur and Eric
--
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener