[petsc-users] How to determine a reasonable relative tolerance to iteratively solve a linear system of equations?

Matthew Knepley knepley at gmail.com
Wed Mar 15 01:09:59 CDT 2017


On Tue, Mar 14, 2017 at 5:42 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:

>
> > On Mar 14, 2017, at 5:32 PM, Fangbo Wang <fangbowa at buffalo.edu> wrote:
> >
> > Hi,
> >
> > I know this is not a problem specific to PETSc, but I have this doubt
> for a long time and want to ask the experts here.
> >
> > Suppose I have a very large linear system of equations with 1.5 million
> > unknowns. It is common to use the relative tolerance as a stopping
> > criterion.
> >
> > For a small linear system, I usually use 1e-6, or 1e-8, or 1e-10, etc.
> > But for a very large linear system, do I need to use a relative tolerance
> > much smaller than the one I use for small systems? (Theoretically, I think
> > the relative tolerance has nothing to do with the system size.)
> >
> > However, something very weird happens. I used 1e-7 as the relative
> > tolerance for my linear system with 1.5 million unknowns, using the
> > conjugate gradient method with a Jacobi preconditioner, and the solver
> > cannot converge to 1e-7 within 10,000 iterations. I can use a larger
> > tolerance, but then the solution is not good.
>
>    This is not particularly weird. Jacobi preconditioning can perform very
> poorly depending on the structure of your matrix.
>
>    So first you need a better preconditioner. Where does the matrix come
> from? This helps determine what preconditioner to use. For example, is it a
> pressure solve, a structural mechanics problem, a Stokes-like problem, or a
> fully implicit CFD problem?
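
As a practical aside: assuming the code calls KSPSetFromOptions(), the
solver, preconditioner, and tolerances can all be switched at runtime for
this kind of experiment, without recompiling. A typical set of options for
trying CG with algebraic multigrid and watching the true residual would
look roughly like

    -ksp_type cg -pc_type gamg -ksp_rtol 1e-7 -ksp_max_it 10000 \
      -ksp_monitor_true_residual -ksp_converged_reason

and -ksp_view reports what was actually used.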


Simple estimates can be useful for thinking about solvers:

  1) Let's say the conditioning of your problem looks like that of the
Laplacian, since I know what that is and since I believe elasticity does
look like this. The condition number then grows as h^{-2}:

          kappa = C h^{-2}

  2) Using CG to solve a system to a given relative tolerance takes about
sqrt(kappa) iterations (this is the standard CG convergence bound).

  3) I do not think Jacobi changes the asymptotics, just the constant.

  4) Overall, that means it will take

          C h^{-1}

       iterations to solve your system (a rough numerical illustration is
       sketched below). As you refine, the number of iterations goes up
       until you just cannot solve it anymore. This is exactly what is
       meant by a non-scalable solver, and as Barry points out, for this
       problem MG provides a scalable solver. MG is often hard to tune
       correctly, so we use something, namely the Krylov method, to handle
       the few problematic modes for which MG was not tuned correctly.
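
     To make the scaling concrete, here is a throwaway back-of-the-envelope
calculation. It just evaluates the estimates above for a few problem sizes;
the constant C = 1 and the assumption h ~ N^{-1/3} for a 3D mesh are made
up for illustration, not measured from your problem:

        /* Rough illustration of points 1)-4): kappa = C h^{-2} and about
           sqrt(kappa) CG iterations, with h ~ N^{-1/3} in 3D.
           The constant C = 1 is a placeholder. Build with: cc est.c -lm */
        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
          const double C   = 1.0;   /* unknown, problem-dependent constant */
          const double N[] = {1.0e4, 1.0e5, 1.5e6, 1.0e7};

          for (size_t i = 0; i < sizeof(N)/sizeof(N[0]); ++i) {
            double h     = pow(N[i], -1.0/3.0);  /* mesh size, 3D grid     */
            double kappa = C / (h*h);            /* kappa = C h^{-2}       */
            double iters = sqrt(kappa);          /* CG iters ~ sqrt(kappa) */
            printf("N = %8.0f  h = %7.5f  kappa ~ %9.0f  CG iters ~ %5.0f\n",
                   N[i], h, kappa, iters);
          }
          return 0;
        }

     For N = 1.5 million this already gives on the order of a hundred
iterations even with C = 1; a large constant from a weak preconditioner
like Jacobi easily pushes that into many thousands.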

    Thanks,

      Matt



>
>    Barry
>
> >
> > Any one have some  advices? Thank you very much!
> >
> > Best regards,
> >
> > Fangbo Wang
> >
> > --
> > Fangbo Wang, PhD student
> > Stochastic Geomechanics Research Group
> > Department of Civil, Structural and Environmental Engineering
> > University at Buffalo
> > Email: fangbowa at buffalo.edu
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

