[petsc-users] PCLU diverges where PCILU converges on Dense Matrix
Ali Berk Kahraman
aliberkkahraman at yahoo.com
Sat Mar 10 05:22:59 CST 2018
Hello All,
I am trying to compute the finite difference coefficients for a given
irregular grid. For this I follow the approach described on the webpage
below, which amounts to solving a linear system.
http://web.media.mit.edu/~crtaylor/calculator.html
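For concreteness, the system I assemble (this is my reading of that page;
the names below are only illustrative) has A_ij = s_j^i for the stencil
offsets s_j, with d! on the right-hand side in the row corresponding to the
requested derivative order d:

#include <math.h>

/* Fill the 7x7 system A c = rhs whose solution c gives the weights of the
 * d-th derivative on the stencil offsets s[0..6]. This is my understanding
 * of the calculator page; purely illustrative. */
static void FillStencilSystem(const double s[7], int d,
                              double A[7][7], double rhs[7])
{
  double dfact = 1.0;
  int    i, j, k;
  for (k = 2; k <= d; k++) dfact *= k;                      /* d! */
  for (i = 0; i < 7; i++) {
    for (j = 0; j < 7; j++) A[i][j] = pow(s[j], (double)i); /* s_j^i */
    rhs[i] = (i == d) ? dfact : 0.0;
  }
}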
I solve a linear system with 7 unknowns and a 7x7 dense matrix to get the
finite difference coefficients. Since this code will be called many, many
times in my overall project, I need it to be as fast and as exact as
possible, so I use PCLU. I make sure there are no zero entries on the main
diagonal of the matrix by swapping rows where required. However, PCLU still
diverges, with the output at the end of this e-mail. It reports
"FACTOR_NUMERIC_ZEROPIVOT", even though, as written above, I make sure
there are no zero entries on the main diagonal. When I use PCILU instead,
it converges fine.
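Roughly, my solver setup looks like the sketch below (simplified; error
checking and the actual assembly of A are omitted, and the function and
variable names are only illustrative):

#include <petscksp.h>

/* Solve the 7x7 dense stencil system with GMRES + PCLU, as in the
 * -ksp_view output at the end of this e-mail (simplified sketch). */
PetscErrorCode SolveStencilSystem(PetscScalar A[7][7], PetscScalar rhs[7],
                                  PetscScalar coeffs[7])
{
  Mat            M;
  Vec            b, x;
  KSP            ksp;
  PC             pc;
  PetscInt       i, idx[7] = {0, 1, 2, 3, 4, 5, 6};
  PetscErrorCode ierr;

  /* 7x7 sequential AIJ matrix; every row is fully dense */
  ierr = MatCreateSeqAIJ(PETSC_COMM_SELF, 7, 7, 7, NULL, &M);CHKERRQ(ierr);
  for (i = 0; i < 7; i++) {
    ierr = MatSetValues(M, 1, &i, 7, idx, A[i], INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(M, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(M, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = VecCreateSeq(PETSC_COMM_SELF, 7, &b);CHKERRQ(ierr);
  ierr = VecDuplicate(b, &x);CHKERRQ(ierr);
  ierr = VecSetValues(b, 7, idx, rhs, INSERT_VALUES);CHKERRQ(ierr);
  ierr = VecAssemblyBegin(b);CHKERRQ(ierr);
  ierr = VecAssemblyEnd(b);CHKERRQ(ierr);

  /* GMRES preconditioned with LU; switching PCLU to PCILU here is the
   * change that makes it converge */
  ierr = KSPCreate(PETSC_COMM_SELF, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, M, M);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

  ierr = VecGetValues(x, 7, idx, coeffs);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = MatDestroy(&M);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  return 0;
}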
So my question is: is PCILU mathematically the same thing as PCLU when
applied to a small dense matrix? I need to know whether I get the exact
solution with PCILU, because my whole project will depend on the accuracy
of the finite differences.
Best Regards,
Ali Berk Kahraman
M.Sc. Student, Mechanical Engineering Dept.
Boğaziçi Uni., Istanbul, Turkey
Linear solve did not converge due to DIVERGED_PCSETUP_FAILED iterations 0
PCSETUP_FAILED due to FACTOR_NUMERIC_ZEROPIVOT
KSP Object: 1 MPI processes
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: lu
    out-of-place factorization
    tolerance for zero pivot 2.22045e-14
    matrix ordering: nd
    factor fill ratio given 5., needed 1.
      Factored matrix follows:
        Mat Object: 1 MPI processes
          type: seqaij
          rows=7, cols=7
          package used to perform factorization: petsc
          total: nonzeros=49, allocated nonzeros=49
          total number of mallocs used during MatSetValues calls =0
            using I-node routines: found 2 nodes, limit used is 5
  linear system matrix = precond matrix:
  Mat Object: 1 MPI processes
    type: seqaij
    rows=7, cols=7
    total: nonzeros=49, allocated nonzeros=49
    total number of mallocs used during MatSetValues calls =0
      using I-node routines: found 2 nodes, limit used is 5