[petsc-dev] KSP convergence test based on error on residual
Barry Smith
bsmith at mcs.anl.gov
Sun Jul 17 22:07:04 CDT 2011
Jed beat me to it,
For the symmetric positive definite case one just uses the definition of the largest eigenvalue
e^T e / r^T r  =  (r^T A^{-1} A^{-1} r) / (r^T r)  <=  \lambda_{max}(A^{-2})  =  1 / \lambda^2_{min}(A)
When I did my dissertation I always computed the absolute error reduction, using the minimum eigenvalue in the SPD case and the smallest singular value in the general case. When I started working with Bill, who only cared about the solution of a nonlinear problem, I lost the habit, but it really should be
incorporated in PETSc.
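A minimal numerical sanity check of the SPD bound above (a hypothetical hand-rolled 2x2 example, not PETSc code; the matrix, iterate, and helper `norm` are made up for illustration) might look like:

```python
import math

# Check |e| / |r| <= 1 / lambda_min(A) for an SPD matrix,
# which is the square root of the bound e^T e / r^T r <= 1 / lambda_min(A)^2.
A = [[2.0, 1.0], [1.0, 2.0]]      # SPD, eigenvalues 1 and 3, so lambda_min = 1
x_exact = [1.0, -1.0]
b = [A[0][0]*x_exact[0] + A[0][1]*x_exact[1],
     A[1][0]*x_exact[0] + A[1][1]*x_exact[1]]

x_approx = [1.01, -0.98]          # a perturbed "iterate"
e = [x_approx[i] - x_exact[i] for i in range(2)]                       # error
r = [b[i] - (A[i][0]*x_approx[0] + A[i][1]*x_approx[1]) for i in range(2)]  # residual

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

lam_min = 1.0
assert norm(e) <= norm(r) / lam_min + 1e-12
```

The point of the bound is exactly this: a convergence test on |r| plus a (lower) estimate of lambda_min gives a guaranteed bound on the error norm.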
Barry
On Jul 17, 2011, at 9:31 PM, Jed Brown wrote:
> On Sun, Jul 17, 2011 at 21:23, Mark F. Adams <mark.adams at columbia.edu> wrote:
> Humm, the only linear algebra proof that I know gives bounds on the error of the form
>
> | error |_2 <= Condition-number * | residual |_2,
>
> This looks like relative error.
>
>
> for SPD matrices of course. This is pessimistic but I'm not sure how you could get a bound on error with only the lowest eigenvalue ...
>
> Suppose you have
>
> | A x - b | < c
>
> Then there is some y such that
>
> A (x + y) - b = 0
>
> and for which
>
> |A y| < c
>
> Suppose s is the smallest singular value of A, thus 1/s is the largest singular value of A^{-1}. Then
>
> |y| = | A^{-1} A y | <= (1/s) |A y| < c/s.
>
> So you can bound the absolute error in the solution if you know the residual and the smallest singular value.
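Jed's argument for the general case can be checked the same way. Below is a hypothetical 2x2 nonsymmetric example (matrix and vector invented for illustration); for this particular matrix the smallest singular value works out exactly to sqrt(2) - 1:

```python
import math

# Check Jed's bound |y| <= (1/s) |A y|, where s is the smallest
# singular value of A. Here A is nonsymmetric; the singular values of A
# are sqrt(3 +/- 2*sqrt(2)), so s = sqrt(2) - 1 exactly.
A = [[1.0, 2.0], [0.0, 1.0]]
s = math.sqrt(2.0) - 1.0

y = [0.3, -0.7]                   # plays the role of the unknown error
Ay = [A[0][0]*y[0] + A[0][1]*y[1],
      A[1][0]*y[0] + A[1][1]*y[1]]  # plays the role of the residual

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

assert norm(y) <= norm(Ay) / s + 1e-12
```

So if an iteration stops with |A x - b| < c, the absolute error is below c/s, exactly as in the quoted argument.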