Linear elasticity, which yields symmetric positive definite matrices. So I guess I could reformulate my question to: what is the solver/preconditioner combination that is "best" suited for this kind of problem? I tried Anton's suggestion and gave BCGS a shot, but although it does seem to work it converges very slowly. Using the gamg preconditioner blows up (log below, followed by a setup sketch):

[0]PCSetData_AGG bs=1 MM=9120
 KSP resid. tolerance target = 1.000E-10
 KSP initial residual |res0| = 1.443E-01
 KSP iter = 0: |res|/|res0| = 1.000E+00
 KSP iter = 1: |res|/|res0| = 4.861E-01
KSP Object: 6 MPI processes
  type: cg
  maximum iterations=10000
  tolerances: relative=1e-10, absolute=1e-50, divergence=10000
  left preconditioning
  using nonzero initial guess
  using PRECONDITIONED norm type for convergence test
PC Object: 6 MPI processes
  type: gamg
    MG: type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object: (mg_coarse_) 6 MPI processes
      type: gmres
        GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
        GMRES: happy breakdown tolerance 1e-30
      maximum iterations=1, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_coarse_) 6 MPI processes
      type: bjacobi
        block Jacobi: number of blocks = 6
        Local solve info for each block is in the following KSP and PC objects:
      [0] number of local blocks = 1, first local block number = 0
        [0] local block number 0
        KSP Object: (mg_coarse_sub_) 1 MPI processes
          type: preonly
          maximum iterations=10000, initial guess is zero
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
        PC Object: (mg_coarse_sub_) 1 MPI processes
          type: lu
            LU: out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            matrix ordering: nd
            factor fill ratio given 5, needed 4.41555
              Factored matrix follows:
                Matrix Object: 1 MPI processes
                  type: seqaij
                  rows=447, cols=447
                  package used to perform factorization: petsc
                  total: nonzeros=75113, allocated nonzeros=75113
                  total number of mallocs used during MatSetValues calls =0
                    not using I-node routines
          linear system matrix = precond matrix:
          Matrix Object: 1 MPI processes
            type: seqaij
            rows=447, cols=447
            total: nonzeros=17011, allocated nonzeros=17011
            total number of mallocs used during MatSetValues calls =0
              not using I-node routines
        - - - - - - - - - - - - - - - - - -
      [1] number of local blocks = 1, first local block number = 1
        [1] local block number 0
        KSP Object: (mg_coarse_sub_) 1 MPI processes
          type: preonly
          maximum iterations=10000, initial guess is zero
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
        PC Object: (mg_coarse_sub_) 1 MPI processes
          type: lu
            LU: out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            matrix ordering: nd
            factor fill ratio given 5, needed 0
              Factored matrix follows:
                Matrix Object: 1 MPI processes
                  type: seqaij
                  rows=0, cols=0
                  package used to perform factorization: petsc
                  total: nonzeros=1, allocated nonzeros=1
                  total number of mallocs used during MatSetValues calls =0
                    not using I-node routines
          linear system matrix = precond matrix:
          Matrix Object: 1 MPI processes
            type: seqaij
            rows=0, cols=0
            total: nonzeros=0, allocated nonzeros=0
            total number of mallocs used during MatSetValues calls =0
              not using I-node routines
        - - - - - - - - - - - - - - - - - -
      [2] number of local blocks = 1, first local block number = 2
        [2] local block number 0
        - - - - - - - - - - - - - - - - - -
      [3] number of local blocks = 1, first local block number = 3
        [3] local block number 0
        - - - - - - - - - - - - - - - - - -
      [4] number of local blocks = 1, first local block number = 4
        [4] local block number 0
        - - - - - - - - - - - - - - - - - -
      [5] number of local blocks = 1, first local block number = 5
        [5] local block number 0
        - - - - - - - - - - - - - - - - - -
      linear system matrix = precond matrix:
      Matrix Object: 6 MPI processes
        type: mpiaij
        rows=447, cols=447
        total: nonzeros=17011, allocated nonzeros=17011
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 6 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates: min = 0.0358458, max = 4.60675
      maximum iterations=2
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 6 MPI processes
      type: jacobi
      linear system matrix = precond matrix:
      Matrix Object: 6 MPI processes
        type: mpiaij
        rows=54711, cols=54711
        total: nonzeros=4086585, allocated nonzeros=4086585
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 3040 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Matrix Object: 6 MPI processes
    type: mpiaij
    rows=54711, cols=54711
    total: nonzeros=4086585, allocated nonzeros=4086585
    total number of mallocs used during MatSetValues calls =0
      using I-node (on process 0) routines: found 3040 nodes, limit used is 5
 Error in FEMesh_Mod::moveFEMeshPETSc() : KSP returned with error code = -8
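From what I can tell GAMG doesn't know anything about the structure of my elasticity problem: the log shows bs=1, i.e. block size 1, even though there are presumably 3 displacement dofs per node. Below is a minimal sketch of what I think I would have to add on my side before KSPSolve; this is only a guess, and A, ksp, nlocal and coords are placeholder names, not my actual variables:

/* Sketch only: configure CG + GAMG for 3D elasticity.  Assumes 3 dofs per
 * node and that this rank owns nlocal nodes with coordinates stored as
 * x0,y0,z0,x1,y1,z1,... in coords[].  All names are placeholders. */
#include <petscksp.h>

PetscErrorCode SetupElasticitySolver(KSP ksp, Mat A, PetscInt nlocal,
                                     PetscReal coords[])
{
  PC             pc;
  PetscErrorCode ierr;

  /* 3x3 nodal blocks; would have to be set before preallocation/assembly.
   * The -ksp_view output above reports bs=1, i.e. no block structure. */
  ierr = MatSetBlockSize(A, 3); CHKERRQ(ierr);

  ierr = KSPSetType(ksp, KSPCG); CHKERRQ(ierr);      /* operator is SPD */
  ierr = KSPGetPC(ksp, &pc); CHKERRQ(ierr);
  ierr = PCSetType(pc, PCGAMG); CHKERRQ(ierr);

  /* Nodal coordinates let GAMG build the rigid-body modes (near null
   * space) that smoothed aggregation needs for elasticity. */
  ierr = PCSetCoordinates(pc, 3, nlocal, coords); CHKERRQ(ierr);

  /* Pick up run-time options such as -pc_gamg_agg_nsmooths 1. */
  ierr = KSPSetFromOptions(ksp); CHKERRQ(ierr);
  return 0;
}

With something like that in place I assume -pc_gamg_agg_nsmooths 1, as suggested below, would be picked up by KSPSetFromOptions.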
--
 Hugo Gagnon
On 2013-04-21, at 10:58 AM, Jed Brown <jedbrown@mcs.anl.gov> wrote:

> Hugo Gagnon <opensource.petsc@user.fastmail.fm> writes:
>
>> Hi,
>>
>> I'm getting a KSP_DIVERGED_INDEFINITE_PC error using CG with ILU. I
>> tried increasing the number of levels of fill and also tried other
>> options described in
>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCILU.html
>> but without any luck. Are there some other preconditioner options
>> that might work?
>
> What kind of problem are you solving? How does this work?
>
>   -pc_type gamg -pc_gamg_agg_nsmooths 1
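P.S. In case I end up staying with an incomplete factorization rather than gamg: my understanding is that ILU can produce an indefinite preconditioner even for an SPD matrix, which is what triggers KSP_DIVERGED_INDEFINITE_PC with CG. A serial sketch of what I would try instead (icc, the symmetric variant, with a shift to keep the factors positive definite); "ksp" is again just a placeholder:

/* Sketch only: CG with a shifted incomplete Cholesky preconditioner.
 * Note: icc is sequential; on 6 processes it would be the sub-PC under
 * block Jacobi, e.g. -pc_type bjacobi -sub_pc_type icc
 * -sub_pc_factor_shift_type POSITIVE_DEFINITE. */
#include <petscksp.h>

PetscErrorCode UseShiftedICC(KSP ksp)
{
  PC             pc;
  PetscErrorCode ierr;

  ierr = KSPSetType(ksp, KSPCG); CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc); CHKERRQ(ierr);
  ierr = PCSetType(pc, PCICC); CHKERRQ(ierr);   /* symmetric counterpart of ILU */
  /* Shift the incomplete factorization so it stays positive definite,
   * giving CG an SPD preconditioner. */
  ierr = PCFactorSetShiftType(pc, MAT_SHIFT_POSITIVE_DEFINITE); CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp); CHKERRQ(ierr);
  return 0;
}

On the command line that would presumably be -pc_type icc -pc_factor_shift_type POSITIVE_DEFINITE in serial.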