[petsc-users] Convergence and improvement of ill conditioned like problem
Danyang Su
danyang.su at gmail.com
Thu Feb 6 12:02:55 CST 2014
Hi All,
I have come across an ill-conditioned problem. The matrix is a block
matrix, and each block contains some zero entries. The preconditioned
residual norm drops slowly, but the true residual norm drops quickly in
the first few iterations. So, to improve performance, I would like to
stop iterating once the true residual norm meets the tolerance.
Q1: Do I need to use KSPSetConvergenceTest for this case?
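I was thinking of something along these lines (just a sketch; the function name MyTrueResidualTest and the rtol context are names I made up, but KSPSetConvergenceTest, KSPBuildResidual and VecNorm are the PETSc calls I mean):

```c
#include <petscksp.h>

/* Sketch of a convergence test based on the TRUE residual norm
   ||b - A x|| instead of the preconditioned residual that the
   default test uses.  MyTrueResidualTest and the rtol context are
   hypothetical names of mine. */
static PetscErrorCode MyTrueResidualTest(KSP ksp, PetscInt it, PetscReal rnorm,
                                         KSPConvergedReason *reason, void *ctx)
{
  Vec            r;
  PetscReal      truenorm, rtol = *(PetscReal *)ctx;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* Assemble the unpreconditioned residual b - A x at this iteration */
  ierr = KSPBuildResidual(ksp, NULL, NULL, &r);CHKERRQ(ierr);
  ierr = VecNorm(r, NORM_2, &truenorm);CHKERRQ(ierr);
  ierr = VecDestroy(&r);CHKERRQ(ierr);
  *reason = (truenorm < rtol) ? KSP_CONVERGED_RTOL : KSP_CONVERGED_ITERATING;
  PetscFunctionReturn(0);
}

/* Registration, somewhere after KSPCreate():
   static PetscReal my_rtol = 1.0e-8;        (tolerance is my guess)
   ierr = KSPSetConvergenceTest(ksp, MyTrueResidualTest, &my_rtol, NULL);CHKERRQ(ierr);
*/
```

Or would it be enough to switch the norm the default test uses, e.g. KSPSetNormType(ksp, KSP_NORM_UNPRECONDITIONED) or -ksp_norm_type unpreconditioned, where the KSP type supports it?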
I can use a direct solver for this problem and the outer Newton
iteration works fine, usually converging in 10 or fewer Newton
iterations. But when I use a PETSc KSP solver, more than 20 Newton
iterations are usually needed, and the timestep cannot increase much
because of the large Newton iteration count.
Q2: Is it possible to increase the accuracy of the KSP solver for this
problem?
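For example, would simply tightening the tolerances help, along these lines (the specific values are just guesses on my part)?

```c
/* Sketch: tighter KSP tolerances so the linear solve does not limit
   Newton convergence.  The numbers here are my own guesses, not
   recommendations: rtol, abstol, dtol, maxits. */
ierr = KSPSetTolerances(ksp, 1.0e-12, 1.0e-50, PETSC_DEFAULT, PETSC_DEFAULT);CHKERRQ(ierr);
```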
I have read Jed's comments at
http://scicomp.stackexchange.com/questions/513/why-is-my-iterative-linear-solver-not-converging.
I don't know what KSPSetNullSpace or MatNullSpaceRemove do and haven't
tried using them.
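If they are relevant here, is the usage something like the following (a sketch only; I am assuming a constant null space, which may not match my operator at all)?

```c
/* Sketch: informing the KSP that constant vectors lie in the null
   space, as for a pure-Neumann problem.  Whether this applies to my
   matrix is exactly what I am unsure about. */
MatNullSpace nsp;
ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL, &nsp);CHKERRQ(ierr);
ierr = KSPSetNullSpace(ksp, nsp);CHKERRQ(ierr);
ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);
```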
Thanks and regards,
Danyang