[petsc-users] fixed point iterations
Dominik Szczerba
dominik at itis.ethz.ch
Sun Nov 6 11:52:35 CST 2011
>>> I want to start small by porting a very simple code using fixed point
>>> iterations as follows: A(x)x = b(x) is approximated as A(x0)x = b(x0),
>>> then solved by KSP for x, then x0 is updated to x, then repeat until
>>> convergence.
>
> Run the usual "Newton" methods with A(x) in place of the true Jacobian.
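Concretely, the loop I have in mind looks roughly like this (a minimal
sketch only: AssembleA() and AssembleB() stand in for my assembly
routines, and object creation and error checking are omitted):

  /* Picard loop: solve A(x0) x = b(x0), then update x0 <- x */
  Vec       x, x0, b, dx;        /* created elsewhere */
  Mat       A;
  KSP       ksp;
  PetscInt  maxit = 50;
  PetscReal tol   = 1e-8, norm;
  for (PetscInt it = 0; it < maxit; ++it) {
    AssembleA(A, x0);            /* hypothetical: A = A(x0) */
    AssembleB(b, x0);            /* hypothetical: b = b(x0) */
    KSPSetOperators(ksp, A, A);  /* A is operator and preconditioner */
    KSPSolve(ksp, b, x);         /* solve A(x0) x = b(x0) for x */
    VecWAXPY(dx, -1.0, x0, x);   /* dx = x - x0 */
    VecNorm(dx, NORM_2, &norm);
    if (norm < tol) break;       /* converged */
    VecCopy(x, x0);              /* x0 <- x for the next sweep */
  }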
When I substitute A(x) into eq. 5.2, I get:
A(x) dx = -F(x)                (1)
A(x) dx = -A(x) x + b(x)       (2)
A(x) dx + A(x) x = b(x)        (3)
A(x) (x + dx) = b(x)           (4)
My questions:
* Will the procedure somehow optimally group the two A(x) terms into
one, as in (3)-(4)? This requires knowledge of the problem structure;
will it be handled efficiently?
* I am solving for x+dx, while eq. 5.3 solves for dx. Is this handled
correctly, and if so, how? Should I somehow disable the update myself?
Thanks a lot,
Dominik
> You can compute A(x) in the residual
> F(x) = A(x) x - b(x)
> and cache it in your user context, then pass it back when asked to compute
> the Jacobian.
> This runs your algorithm (often called Picard) in "defect correction mode",
> but once you write your equations this way, you can try Newton iteration
> using -snes_mf_operator.
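In code, the setup described above might look roughly like this (a
sketch only, using hypothetical AssembleA()/AssembleB() routines, the
current SNESSetFunction/SNESSetJacobian calling sequences, and no error
checking):

  typedef struct {
    Mat A;   /* A(x), assembled and cached by FormFunction */
    Vec b;   /* b(x), assembled and cached by FormFunction */
  } AppCtx;

  /* Residual F(x) = A(x) x - b(x); assembles and caches A(x) and b(x) */
  static PetscErrorCode FormFunction(SNES snes, Vec x, Vec F, void *ctx)
  {
    AppCtx *user = (AppCtx*)ctx;
    AssembleA(user->A, x);       /* hypothetical: A = A(x) */
    AssembleB(user->b, x);       /* hypothetical: b = b(x) */
    MatMult(user->A, x, F);      /* F = A(x) x */
    VecAXPY(F, -1.0, user->b);   /* F = A(x) x - b(x) */
    return 0;
  }

  /* The "Jacobian" is the A(x) cached by the last residual evaluation;
     J and P were set to user->A below, so there is nothing to recompute */
  static PetscErrorCode FormJacobian(SNES snes, Vec x, Mat J, Mat P, void *ctx)
  {
    return 0;
  }

  /* in main(), with r a work vector for the residual: */
  SNESSetFunction(snes, r, FormFunction, &user);
  SNESSetJacobian(snes, user.A, user.A, FormJacobian, &user);

Run as-is this reproduces the Picard/defect-correction iteration; adding
-snes_mf_operator on the command line applies the true Jacobian
matrix-free while still preconditioning with the cached A(x).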
>
>>>
>>> In the documentation chapter 5 I see all sorts of sophisticated
>>> Newton-type methods, requiring computation of the Jacobian. Is the
>>> simple method defined above still accessible somehow in PETSc, or can
>>> such a triviality only be done by hand? Which of the existing
>>> nonlinear solvers would be the closest match in both simplicity and
>>> robustness (even if slow)?
>>
>> You want -snes_type nrichardson. All you need is to define the residual.
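For example, assuming an executable ./app whose residual has been
registered with SNESSetFunction, that would be run as:

  ./app -snes_type nrichardson -snes_monitor -snes_converged_reason

where -snes_monitor and -snes_converged_reason just report the progress
of the nonlinear iteration.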
>
> Matt, were the 1000 emails we exchanged over this last month not enough to
> prevent you from spreading misinformation under a different name?