[petsc-users] Convergence and improvement of ill conditioned like problem
danyang.su at gmail.com
Thu Feb 6 12:02:55 CST 2014
I have come across what appears to be an ill-conditioned problem. The
matrix is a block matrix with some zero entries in each block. The
preconditioned residual norm drops slowly, but the true residual norm
drops quickly in the first few iterations. To improve performance, I
would like to stop iterating once the true residual norm meets the
requirement.
Q1: Do I need to use KSPSetConvergenceTest for this case?
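To frame the question, here is a minimal sketch of what such a test might look like: a custom test installed with KSPSetConvergenceTest that computes the true residual via KSPBuildResidual. The function name MyTrueResidualTest, the 1e-8 tolerance, and the 1000-iteration cap are illustrative assumptions, not anything prescribed by PETSc:

```c
#include <petscksp.h>

/* Hypothetical convergence test: stop when the TRUE residual norm
 * (b - A x, not the preconditioned residual) falls below an assumed
 * tolerance. Names and thresholds here are illustrative only. */
static PetscErrorCode MyTrueResidualTest(KSP ksp, PetscInt it, PetscReal rnorm,
                                         KSPConvergedReason *reason, void *ctx)
{
  Vec            r;
  PetscReal      truenorm;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* Build the unpreconditioned residual r = b - A x */
  ierr = KSPBuildResidual(ksp, NULL, NULL, &r);CHKERRQ(ierr);
  ierr = VecNorm(r, NORM_2, &truenorm);CHKERRQ(ierr);
  ierr = VecDestroy(&r);CHKERRQ(ierr);

  *reason = KSP_CONVERGED_ITERATING;
  if (truenorm < 1e-8) *reason = KSP_CONVERGED_RTOL; /* assumed tolerance */
  else if (it > 1000)  *reason = KSP_DIVERGED_ITS;   /* assumed iteration cap */
  PetscFunctionReturn(0);
}

/* Installed once, after KSPCreate/KSPSetOperators:
 *   ierr = KSPSetConvergenceTest(ksp, MyTrueResidualTest, NULL, NULL);CHKERRQ(ierr);
 */
```

Alternatively, for Krylov methods that support it, -ksp_norm_type unpreconditioned (or right preconditioning via -ksp_pc_side right with GMRES) makes the built-in convergence test use the true residual norm, with no custom test needed.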
I can use a direct solver for this problem and the outer Newton
iteration works fine, usually converging in 10 or fewer Newton
iterations. But when I use the PETSc KSP solver, the Newton loop
usually needs more than 20 iterations, and the timestep cannot increase
much due to the large Newton iteration count.
Q2: Is it possible to increase the precision of the KSP solver for this
case?
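On tightening KSP accuracy: assuming the default tolerances are in effect, they can be reduced at run time; the values below are illustrative, not recommendations:

```
-ksp_rtol 1e-10 -ksp_atol 1e-12 -ksp_monitor_true_residual -ksp_converged_reason
```

-ksp_monitor_true_residual prints both the preconditioned and true residual norms each iteration, which makes the gap described above directly visible, and -ksp_converged_reason reports why the solve stopped.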
I have read the comments by Jed on the website.
I don't know what KSPSetNullSpace or MatNullSpaceRemove do and
haven't tried using them.
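For reference, a brief sketch of how those routines are used, assuming the operator really does have a null space (e.g. the constant vector, as in pure-Neumann problems). This uses the KSPSetNullSpace interface of the PETSc version current at the time of this post; later releases attach the null space to the matrix with MatSetNullSpace instead:

```c
/* Sketch: inform the solver that the constant vector is in the null
 * space of the operator, so Krylov iterations project it out.
 * Assumes ksp (and its operator A) have already been set up. */
MatNullSpace nsp;
ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE /* constants */,
                          0, NULL, &nsp);CHKERRQ(ierr);
ierr = KSPSetNullSpace(ksp, nsp);CHKERRQ(ierr); /* newer PETSc: MatSetNullSpace(A, nsp) */
ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);
```

MatNullSpaceRemove is the lower-level routine that projects a given vector against such a null space; KSPSetNullSpace arranges for this to happen automatically inside the solve. If the matrix here is merely ill-conditioned but nonsingular, these routines would not apply.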
Thanks and regards,