<div class="gmail_quote">On Fri, Nov 11, 2011 at 11:17, Mark F. Adams <span dir="ltr"><<a href="mailto:mark.adams@columbia.edu" target="_blank">mark.adams@columbia.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
All I recall is that I was confused because the residual apparently dropped 10 orders of magnitude in the first iteration. Barry explained the issue; I added the 'unpreconditioned' residual norm and it was fine. </blockquote>
<div><br></div><div>My recollection with that problem was that the initial guess "almost" satisfied the PDE in the interior, but did not satisfy the boundary conditions. The preconditioned residual gave the boundaries and interior roughly equal importance, so the initial preconditioned residual looked large due to the big disagreement near the boundaries. In unpreconditioned residuals, you never see the boundary equations in the norm, but they happen to be satisfied because the preconditioner tends to fix them. Or something like this; in any case, with -ksp_diagonal_scale, the residual drop was similar for preconditioned and unpreconditioned residuals.</div>
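<div><br></div><div>A toy sketch of that weighting effect (my own made-up numbers and a plain diagonal stand-in operator, not the actual problem): interior equations carry large coefficients (think 1/h^2), the boundary equation carries an O(1) coefficient, and the initial guess satisfies the interior but violates the boundary. The unpreconditioned norm barely notices the boundary mismatch; the Jacobi-preconditioned norm weights every equation equally, so the same mismatch dominates it:</div>

```python
import numpy as np

# Hypothetical illustration, not the original problem: a diagonal "operator"
# whose interior rows are scaled very differently from the boundary row.
n = 5
d = np.full(n, 1e8)        # interior equations, large coefficients
d[0] = 1.0                 # boundary equation on a completely different scale
A = np.diag(d)             # stand-in operator; Jacobi preconditioner M = diag(A)
b = np.ones(n)

x0 = b / d                 # exact solution in the interior...
x0[0] = 0.0                # ...but the boundary condition is violated

r = b - A @ x0             # unpreconditioned residual: nonzero only at the boundary
z = r / d                  # preconditioned residual M^{-1} r

# Relative to the corresponding norm of the right-hand side:
rel_unprec = np.linalg.norm(r) / np.linalg.norm(b)      # ~0.45: boundary barely matters
rel_prec = np.linalg.norm(z) / np.linalg.norm(b / d)    # ~1.0: boundary mismatch dominates
print(rel_unprec, rel_prec)
```

<div>So depending on which norm a Krylov method monitors, the same iterate can look half converged or not converged at all, which is consistent with the confusing 10-order drop above.</div>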
<div><br></div><div>I want methods to be scale-invariant, but I don't think it's practical to ask algorithms to behave the same way if you scale some equations very differently from others. That changes the conditioning of the operator and makes dot products with the operator meaningless. I think it's achievable to ask people to scale their equations properly if they want good performance. The question is which choice causes less confusion when people mess up. I'm not sold either way regarding preconditioned or unpreconditioned residuals.</div>
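<div><br></div><div>The conditioning point can be seen on an arbitrary 2x2 example (numbers are mine, purely illustrative): multiplying one equation by a large factor inflates the condition number by roughly that factor, and rescaling the rows back to comparable size, in the spirit of diagonal scaling, largely undoes the damage:</div>

```python
import numpy as np

# Hypothetical illustration: scaling one equation by 1e8 wrecks conditioning.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
S = np.diag([1e8, 1.0])        # scale the first equation by 1e8
B = S @ A

cond_A = np.linalg.cond(A)     # modest, ~15
cond_B = np.linalg.cond(B)     # inflated by roughly the scaling factor, ~2.5e8

# Row equilibration (divide each equation by its row norm) restores
# conditioning comparable to the original operator.
C = B / np.linalg.norm(B, axis=1, keepdims=True)
cond_C = np.linalg.cond(C)     # back to ~10
print(cond_A, cond_B, cond_C)
```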
</div>