[petsc-dev] How do you get Richardson?
Jed Brown
jedbrown at mcs.anl.gov
Fri Sep 16 17:36:51 CDT 2011
On Sat, Sep 17, 2011 at 00:28, Barry Smith <bsmith at mcs.anl.gov> wrote:
> But this is actually never Newton: if F(x) = A(x) x - b, then J_F(x) =
> A(x) + J_A(x) x, so just using A(x_n) will never give you Newton. But yes,
> it is commonly done because it doesn't require the horrible computation of
> J_A(x).
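Spelling that out componentwise, with F_i(x) = \sum_k A_{ik}(x) x_k - b_i, the
product rule gives
\partial F_i / \partial x_j = A_{ij}(x) + \sum_k (\partial A_{ik} / \partial x_j)(x) x_k,
i.e. J_F(x) = A(x) + J_A(x) x, so freezing the coefficients in A simply drops
the second term.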
I think Vijay intended to distinguish the A in the solve step from the A in
the residual. Consider the iteration
x_{n+1} = x_n - J(x_n)^{-1} F(x_n)
Assume in this case that F(x_n) can be written as A(x_n) x_n - b, where
A(x_n) distinguishes some "frozen" coefficients. Usually these are all but
the variables involved in the highest-order derivatives.
Now we define J:
J = F_x(x_n), the true Jacobian. This is Newton.
J = A(x_n). This is "Picard with solve", the version most commonly used for
PDEs. This is also the "Picard linearization" that is discussed in many
textbooks, especially in CFD.
J = \lambda I. This is gradient descent, aka Richardson or "Matt's Picard".
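To make the three choices concrete, here is a minimal NumPy sketch (not PETSc;
the 2x2 matrix A(x), the right-hand side b, and the damping value lam are made
up purely for illustration) that feeds each linearization into the same
iteration x_{n+1} = x_n - J(x_n)^{-1} F(x_n):

import numpy as np

b = np.array([1.0, 1.0])

def A(x):                      # coefficient operator A(x), "frozen" at x_n
    return np.array([[2.0 + x[0]**2, -1.0],
                     [-1.0, 2.0 + x[1]**2]])

def F(x):                      # residual F(x) = A(x) x - b
    return A(x) @ x - b

def J_newton(x):               # true Jacobian A(x) + J_A(x) x; here J_A(x) x = diag(2 x_i^2)
    return A(x) + np.diag(2.0 * x**2)

def J_picard(x):               # "Picard with solve": reuse A(x_n) as the Jacobian
    return A(x)

def J_richardson(x, lam=5.0):  # Richardson / "Matt's Picard": J = lambda I
    return lam * np.eye(2)

def iterate(J, x0, tol=1e-10, maxit=200):
    x = x0.copy()
    for k in range(maxit):
        r = F(x)
        if np.linalg.norm(r) < tol:
            return x, k
        x = x - np.linalg.solve(J(x), r)   # x_{n+1} = x_n - J(x_n)^{-1} F(x_n)
    return x, maxit

x0 = np.array([0.5, 0.5])
for name, J in [("Newton", J_newton), ("Picard", J_picard), ("Richardson", J_richardson)]:
    x, its = iterate(J, x0)
    print(f"{name:10s}: {its:3d} iterations, x = {x}")

The three J functions differ only in what gets handed to the linear solve; how
they compare in practice depends on how expensive J_A(x) x is to form and how
strong the nonlinearity is.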