On Sun, Nov 6, 2011 at 5:52 PM, Dominik Szczerba <dominik@itis.ethz.ch> wrote:
>>>> I want to start small by porting a very simple code using fixed point
>>>> iterations as follows: A(x)x = b(x) is approximated as A(x0)x = b(x0),
>>>> then solved by KSP for x, then x0 is updated to x, then repeat until
>>>> convergence.
>>
>> Run the usual "Newton" methods with A(x) in place of the true Jacobian.
>
> When I substitute A(x) into eq. 5.2 I get:
>
>   A(x) dx = -F(x)              (1)
>   A(x) dx = -A(x) x + b(x)     (2)
>   A(x) dx + A(x) x = b(x)      (3)
>   A(x) (x + dx) = b(x)         (4)
>
> My questions:
>
> * Will the procedure somehow optimally group the two A(x) terms into
> one, as in (3)-(4)? This requires knowledge of the structure; will it
> be handled efficiently?

There is no grouping. You solve for dx and do a vector addition.
> * I am solving for x+dx, while eq. 5.3 solves for dx. Is this handled
> correctly, and how? Should I somehow disable the update myself?

Do not do any update yourself; just give the correct A at each iteration in
your FormJacobian routine.
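With the cached-matrix approach, that routine can be trivial. A minimal
sketch, assuming the petsc-3.2-era Jacobian calling sequence and an
application context AppCtx of your own (the struct and its fields are
placeholders, not PETSc API):

  /* Return the A(x) that FormFunction cached; nothing to recompute here */
  PetscErrorCode FormJacobian(SNES snes, Vec x, Mat *J, Mat *P,
                              MatStructure *flag, void *ctx)
  {
    AppCtx *user = (AppCtx*)ctx;
    *J = *P = user->A;            /* hand back the cached matrix */
    *flag = SAME_NONZERO_PATTERN;
    return 0;
  }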
   Matt
> Thanks a lot,
> Dominik
>
>> can compute A(x) in the residual
>> F(x) = A(x) x - b(x)
>> and cache it in your user context, then pass it back when asked to compute
>> the Jacobian.
>> This runs your algorithm (often called Picard) in "defect correction mode",
>> but once you write your equations this way, you can try Newton iteration
>> using -snes_mf_operator.
>>
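To make the caching concrete, here is a minimal sketch of the residual
routine, again assuming the petsc-3.2-era calling sequence; AppCtx and
AssembleOperator() stand in for your own context struct and assembly code
and are not PETSc API:

  typedef struct {
    Mat A;   /* cached A(x); also pass this Mat to SNESSetJacobian() */
    Vec b;   /* cached b(x)                                          */
  } AppCtx;

  /* F(x) = A(x) x - b(x); A(x) is rebuilt here and cached for FormJacobian */
  PetscErrorCode FormFunction(SNES snes, Vec x, Vec F, void *ctx)
  {
    AppCtx        *user = (AppCtx*)ctx;
    PetscErrorCode ierr;

    ierr = AssembleOperator(x, user->A, user->b);CHKERRQ(ierr); /* yours */
    ierr = MatMult(user->A, x, F);CHKERRQ(ierr);     /* F  = A(x) x */
    ierr = VecAXPY(F, -1.0, user->b);CHKERRQ(ierr);  /* F -= b(x)   */
    return 0;
  }

Once the residual is written this way, adding -snes_mf_operator applies the
true Jacobian matrix-free by finite differences while the preconditioner is
still built from the cached A(x).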
>>>>
>>>> In the documentation chapter 5 I see all sorts of sophisticated
>>>> Newton-type methods, requiring computation of the Jacobian. Is the
>>>> simple method defined above still accessible somehow in PETSc, or can
>>>> such a triviality only be done by hand? Which of the existing
>>>> nonlinear solvers would be the closest match in both simplicity and
>>>> robustness (even at the cost of slow performance)?
>>>
>>> You want -snes_type nrichardson. All you need is to define the residual.
>>
>> Matt, were the 1000 emails we exchanged over this last month not enough to
>> prevent you from spreading misinformation under a different name?
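For completeness, the nrichardson suggestion quoted above can also be
selected in code instead of on the command line. A one-line sketch,
assuming a PETSc build recent enough to provide this SNES type:

  /* Equivalent of the command-line option -snes_type nrichardson */
  ierr = SNESSetType(snes, SNESNRICHARDSON);CHKERRQ(ierr);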
--
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
   -- Norbert Wiener