[petsc-dev] How do you get Richardson?

Barry Smith bsmith at mcs.anl.gov
Fri Sep 16 17:48:40 CDT 2011


On Sep 16, 2011, at 5:36 PM, Jed Brown wrote:

> On Sat, Sep 17, 2011 at 00:28, Barry Smith <bsmith at mcs.anl.gov> wrote:
>   But this is actually never Newton: if F(x) = A(x) x - b, then J_F(x) = A(x) + J_A(x) x, so just using A(x_n) will never give you Newton. But yes, this is commonly done because it avoids the horrible computation of J_A(x).
> 
> I think Vijay intended to distinguish the A in the solve step from the A in the residual. Assume the iteration
> 
> x_{n+1} = x_n - J(x_n)^{-1} F(x_n)
> 
> Assume in this case that F(x_n) can be written as A(x_n) x_n - b where A(x_n) distinguishes some "frozen" coefficients. Usually these are all but those variables involved in the highest order derivatives.
> 
> Now we define J:
> 
> J = F_x(x_n), the true Jacobian. This is Newton
> 
> J = A(x_n). This is "Picard with solve", the version most commonly used for PDEs. This is also the "Picard linearization" that is discussed in many textbooks, especially in CFD.

   Should we have support for this in SNES as a particular class, since it comes up fairly often (in certain literature)? Perhaps for short we could call it SNESPICARD :-)

   This was actually one of my largest concerns with the previous SNESPICARD: lots of people would think it implemented "Picard with solve".

    Barry

> 
> J = \lambda I. This is gradient descent, aka Richardson or "Matt's Picard".
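The three choices of J above can be compared on a toy problem. The sketch below is purely illustrative and is not PETSc code: the scalar residual F(x) = A(x) x - b with A(x) = 2 + x^2 and b = 3 (exact root x = 1), and the damping value lambda, are all made-up choices for demonstration.

```python
# Toy scalar problem in the form F(x) = A(x)*x - b discussed above,
# with A(x) = 2 + x**2 and b = 3; the exact root is x = 1.
# All names and constants here are illustrative, not from PETSc.

def A(x):
    """The 'frozen'-coefficient operator A(x)."""
    return 2.0 + x**2

def F(x):
    """Residual F(x) = A(x)*x - b."""
    return A(x) * x - 3.0

def Fprime(x):
    """True Jacobian F_x(x) = A(x) + A'(x)*x = (2 + x^2) + 2x*x."""
    return (2.0 + x**2) + 2.0 * x * x

def iterate(update, x=0.5, tol=1e-10, maxit=200):
    """Run x_{n+1} = update(x_n) until |F(x)| < tol."""
    for n in range(maxit):
        if abs(F(x)) < tol:
            return x, n
        x = update(x)
    return x, maxit

# J = F_x(x_n): Newton.
newton = lambda x: x - F(x) / Fprime(x)

# J = A(x_n): "Picard with solve", i.e. x_{n+1} = A(x_n)^{-1} b.
picard = lambda x: x - F(x) / A(x)

# J proportional to the identity: Richardson / gradient descent,
# x_{n+1} = x_n - lam * F(x_n), with an (assumed) damping lam = 0.1.
lam = 0.1
richardson = lambda x: x - lam * F(x)

for name, upd in [("Newton", newton), ("Picard", picard), ("Richardson", richardson)]:
    root, its = iterate(upd)
    print(f"{name:10s} -> x = {root:.10f} in {its} iterations")
```

On this example all three converge to the same root; Newton takes the fewest iterations, while the Picard and Richardson updates converge linearly, illustrating why the cheaper linearizations trade iteration count for avoiding J_A(x).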



