<div class="gmail_quote">On Sat, Sep 17, 2011 at 00:28, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
> But this is actually never Newton: if F(x) = A(x) x - b, then J_F(x) = A(x) + J_A(x) x, so just using A(x_n) will never give you Newton. But yes, it is commonly done because it doesn't require the horrible computation of J_A(x).
I think Vijay intended to distinguish the A in the solve step from the A in the residual. Assume the iteration

x_{n+1} = x_n - J(x_n)^{-1} F(x_n)

and suppose F(x_n) can be written as A(x_n) x_n - b, where A(x_n) isolates some "frozen" coefficients. Usually everything is frozen except the variables involved in the highest-order derivatives.
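(As a concrete scalar instance of Barry's point above, assuming a(x) = 1 + x^2: F(x) = a(x) x - b has true Jacobian F_x(x) = a(x) + a'(x) x = 1 + 3 x^2, so freezing the coefficient and using only a(x_n) drops the 2 x_n^2 term.)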
Now we define J:

J = F_x(x_n), the true Jacobian. This is Newton.

J = A(x_n). This is "Picard with solve", the version most commonly used for PDEs. It is also the "Picard linearization" discussed in many textbooks, especially in CFD.
J = \lambda I. This is gradient descent, aka Richardson or "Matt's Picard". A small numerical comparison of the three choices is sketched below.
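Here is a minimal runnable sketch comparing the three choices on a scalar toy problem. The specific A(x), b, and \lambda below are made up for illustration, chosen so that the exact solution is x* = 1:

# Toy problem in the form F(x) = A(x) x - b, with A(x) = 1 + 0.1 x^2 and
# b = 1.1, so the exact solution is x* = 1 and the true Jacobian is
# F_x(x) = A(x) + A'(x) x = 1 + 0.3 x^2.

def A(x):
    return 1.0 + 0.1 * x * x

def F(x):
    return A(x) * x - 1.1

def Fx(x):
    return 1.0 + 0.3 * x * x

lam = 2.0  # ad hoc Richardson damping; needs |1 - F_x(x*)/lam| < 1 to converge

xn = xp = xr = 2.0  # same initial guess for all three iterations
print("step    Newton      Picard      Richardson")
for k in range(8):
    xn -= F(xn) / Fx(xn)   # J = F_x(x_n): Newton
    xp -= F(xp) / A(xp)    # J = A(x_n):   Picard with solve
    xr -= F(xr) / lam      # J = lam * I:  Richardson / gradient descent
    print("%4d  %10.3e  %10.3e  %10.3e" % (k, abs(F(xn)), abs(F(xp)), abs(F(xr))))
# Newton converges quadratically; the other two only linearly.

With these numbers the Picard contraction factor at the root is |1 - F_x(1)/A(1)| ~= 0.18, so the frozen-coefficient iteration still converges quickly despite not being Newton, which is consistent with Barry's point about why it is so commonly used.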