[petsc-users] Convergence and improvement of ill conditioned like problem
Matthew Knepley
knepley at gmail.com
Thu Feb 6 12:13:26 CST 2014
On Thu, Feb 6, 2014 at 12:02 PM, Danyang Su <danyang.su at gmail.com> wrote:
> Hi All,
>
> I have come across an ill-conditioned problem. The matrix is a block
> matrix, and each block contains some zero entries. The preconditioned
> residual norm drops slowly, but the true residual norm drops quickly in
> the first few iterations. To improve performance, I would like to stop
> the iteration once the true residual norm meets the tolerance.
>
> Q1: Do I need to use KSPSetConvergenceTest for this case?
>
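[Editorial note: a minimal sketch of what KSPSetConvergenceTest usage could look like, stopping on the true (unpreconditioned) residual norm. This assumes a 2014-era PETSc API; the names MyTrueResidualTest, TrueResCtx, and the tolerance field rtol_true are illustrative, not from the original post.]

```c
#include <petscksp.h>

typedef struct {
  PetscReal rtol_true; /* relative tolerance on the true residual norm */
  PetscReal rnorm0;    /* true residual norm at iteration 0 */
} TrueResCtx;

/* Custom convergence test: the rnorm PETSc passes in may be the
   PRECONDITIONED norm, so we rebuild b - A x explicitly. */
static PetscErrorCode MyTrueResidualTest(KSP ksp, PetscInt it, PetscReal rnorm,
                                         KSPConvergedReason *reason, void *ctx)
{
  TrueResCtx     *c = (TrueResCtx *)ctx;
  Vec            resid;
  PetscReal      truenorm;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPBuildResidual(ksp, NULL, NULL, &resid);CHKERRQ(ierr);
  ierr = VecNorm(resid, NORM_2, &truenorm);CHKERRQ(ierr);
  ierr = VecDestroy(&resid);CHKERRQ(ierr);

  if (it == 0) c->rnorm0 = truenorm;
  *reason = KSP_CONVERGED_ITERATING;
  if (truenorm <= c->rtol_true * c->rnorm0) *reason = KSP_CONVERGED_RTOL;
  PetscFunctionReturn(0);
}

/* Registration, e.g. after KSPSetFromOptions():
     TrueResCtx ctx = {1.0e-8, 0.0};
     KSPSetConvergenceTest(ksp, MyTrueResidualTest, &ctx, NULL);          */
```

Note that for many KSP types the same effect can often be had without a custom test, by monitoring/converging on the unpreconditioned norm via KSPSetNormType(ksp, KSP_NORM_UNPRECONDITIONED) or by using right preconditioning (-ksp_pc_side right), where the residual the solver sees is the true residual.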
> I can use a direct solver for this problem, and the outer Newton
> iteration works fine, usually converging in 10 or fewer Newton
> iterations. But when I use the PETSc KSP solver, Newton usually needs
> more than 20 iterations, and the timestep cannot increase much because
> of the large Newton iteration count.
>
> Q2: Is it possible to increase the precision of the KSP solver for this
> problem?
>
-ksp_rtol 1.0e-9
Matt
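[Editorial note: the command-line option above can also be set in code; a sketch assuming a KSP object named ksp has already been created. PETSC_DEFAULT keeps the other tolerances at their defaults.]

```c
/* Equivalent of -ksp_rtol 1.0e-9: tighten the relative tolerance,
   leaving atol, dtol, and maxits unchanged. */
KSPSetTolerances(ksp, 1.0e-9, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT);
```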
> I have read Jed's comments at
> http://scicomp.stackexchange.com/questions/513/why-is-my-iterative-linear-solver-not-converging.
> I don't know what KSPSetNullSpace or MatNullSpaceRemove do and haven't
> tried to use them.
>
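[Editorial note: a hedged sketch of what attaching a null space looks like in the 2014-era API, assuming an existing KSP named ksp. Whether the operator actually has a null space (e.g. the constant vector, typical of pure-Neumann problems) must be determined from the problem itself; this is illustrative only.]

```c
/* Tell the solver the operator has a constant null space. */
MatNullSpace nsp;
MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL, &nsp);
KSPSetNullSpace(ksp, nsp); /* later PETSc versions use MatSetNullSpace() instead */
MatNullSpaceDestroy(&nsp);
/* MatNullSpaceRemove() projects the null-space component out of a vector,
   e.g. to make the right-hand side consistent before solving. */
```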
> Thanks and regards,
>
> Danyang
>
>
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener