[petsc-users] Increasing ILU robustness

Mark F. Adams mark.adams at columbia.edu
Sun Apr 21 15:20:54 CDT 2013


You need to set the matrix block size (3 or 6 in your case) for AMG; the "PCSetData_AGG bs=1" line in your output shows that GAMG is currently treating the system as a scalar problem.
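A minimal sketch of what that could look like, assuming a 3-D displacement formulation with three DOFs per node and an AIJ matrix A created by the application (A and ierr are illustrative names following the usual PETSc idiom, not taken from your code):

    /* Declare the matrix as logically blocked 3x3 (one block per node) so
       GAMG aggregates nodes rather than individual scalar DOFs.  The block
       size typically has to be set before preallocation/assembly. */
    ierr = MatSetBlockSize(A, 3);CHKERRQ(ierr);

If the matrix is created as BAIJ with block size 3, the block size should be picked up automatically.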

You also want to give ML and GAMG the (near) null space, i.e., the six rigid body modes in your case.  For convenience, GAMG lets you give it the nodal coordinates and it will construct the rigid body modes for you.  But it should work to some degree without the near null space (GAMG can figure out the three translational modes on its own).
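To make those two routes concrete, here is a hedged sketch in C, assuming a 3-D problem where ksp and A already exist; nloc, coords (the interleaved x,y,z coordinates of the locally owned nodes) and coord_vec (a Vec holding the same coordinates) are illustrative names for data the application would already have:

    PC             pc;
    MatNullSpace   nearnull;
    PetscErrorCode ierr;

    /* Route 1: pass the nodal coordinates to the preconditioner and let
       GAMG construct the rigid body modes internally. */
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetCoordinates(pc, 3, nloc, coords);CHKERRQ(ierr);

    /* Route 2: build the six rigid body modes explicitly from a coordinate
       Vec and attach them to the matrix as its near null space. */
    ierr = MatNullSpaceCreateRigidBody(coord_vec, &nearnull);CHKERRQ(ierr);
    ierr = MatSetNearNullSpace(A, nearnull);CHKERRQ(ierr);
    ierr = MatNullSpaceDestroy(&nearnull);CHKERRQ(ierr);

Either route alone is enough; the second attaches the information to the matrix itself, so any preconditioner that inspects the near null space can use it.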


On Apr 21, 2013, at 11:38 AM, Hugo Gagnon <opensource.petsc at user.fastmail.fm> wrote:

> Linear elasticity, which yields symmetric positive definite matrices.  So I guess I could reformulate my question as: what solver/preconditioner combination is "best" suited for this kind of problem?  I tried Anton's suggestion and gave BCGS a shot; although it does seem to work, it converges very slowly.  Using the gamg preconditioner blows up:
> 
> [0]PCSetData_AGG bs=1 MM=9120
>    KSP resid. tolerance target  =   1.000E-10
>    KSP initial residual |res0|  =   1.443E-01
>    KSP iter =    0: |res|/|res0| =  1.000E+00
>    KSP iter =    1: |res|/|res0| =  4.861E-01
> KSP Object: 6 MPI processes
>   type: cg
>   maximum iterations=10000
>   tolerances:  relative=1e-10, absolute=1e-50, divergence=10000
>   left preconditioning
>   using nonzero initial guess
>   using PRECONDITIONED norm type for convergence test
> PC Object: 6 MPI processes
>   type: gamg
>     MG: type is MULTIPLICATIVE, levels=2 cycles=v
>       Cycles per PCApply=1
>       Using Galerkin computed coarse grid matrices
>   Coarse grid solver -- level -------------------------------
>     KSP Object:    (mg_coarse_)     6 MPI processes
>       type: gmres
>         GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>         GMRES: happy breakdown tolerance 1e-30
>       maximum iterations=1, initial guess is zero
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>       left preconditioning
>       using NONE norm type for convergence test
>     PC Object:    (mg_coarse_)     6 MPI processes
>       type: bjacobi
>         block Jacobi: number of blocks = 6
>         Local solve info for each block is in the following KSP and PC objects:
>       [0] number of local blocks = 1, first local block number = 0
>         [0] local block number 0
>             KSP Object:        (mg_coarse_sub_)         1 MPI processes
>               type: preonly
>               maximum iterations=10000, initial guess is zero
>               tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>               left preconditioning
>               using NONE norm type for convergence test
>             PC Object:        (mg_coarse_sub_)         1 MPI processes
>               type: lu
>                 LU: out-of-place factorization
>                 tolerance for zero pivot 2.22045e-14
>                 matrix ordering: nd
>                 factor fill ratio given 5, needed 4.41555
>                   Factored matrix follows:
>                     Matrix Object:                 1 MPI processes
>                       type: seqaij
>                       rows=447, cols=447
>                       package used to perform factorization: petsc
>                       total: nonzeros=75113, allocated nonzeros=75113
>                       total number of mallocs used during MatSetValues calls =0
>                         not using I-node routines
>               linear system matrix = precond matrix:
>               Matrix Object:           1 MPI processes
>                 type: seqaij
>                 rows=447, cols=447
>                 total: nonzeros=17011, allocated nonzeros=17011
>                 total number of mallocs used during MatSetValues calls =0
>                   not using I-node routines
>         - - - - - - - - - - - - - - - - - -
>         [Ranks 1-5 each hold an empty local coarse block (seqaij, rows=0, cols=0),
>          solved with the same preonly KSP and LU PC as block 0 above.]
>       [1] number of local blocks = 1, first local block number = 1
>         [1] local block number 0
>         - - - - - - - - - - - - - - - - - -
>       [2] number of local blocks = 1, first local block number = 2
>         [2] local block number 0
>         - - - - - - - - - - - - - - - - - -
>       [3] number of local blocks = 1, first local block number = 3
>         [3] local block number 0
>         - - - - - - - - - - - - - - - - - -
>       [4] number of local blocks = 1, first local block number = 4
>         [4] local block number 0
>         - - - - - - - - - - - - - - - - - -
>       [5] number of local blocks = 1, first local block number = 5
>         [5] local block number 0
>         - - - - - - - - - - - - - - - - - -
>       linear system matrix = precond matrix:
>       Matrix Object:       6 MPI processes
>         type: mpiaij
>         rows=447, cols=447
>         total: nonzeros=17011, allocated nonzeros=17011
>         total number of mallocs used during MatSetValues calls =0
>           not using I-node (on process 0) routines
>   Down solver (pre-smoother) on level 1 -------------------------------
>     KSP Object:    (mg_levels_1_)     6 MPI processes
>       type: chebyshev
>         Chebyshev: eigenvalue estimates:  min = 0.0358458, max = 4.60675
>       maximum iterations=2
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>       left preconditioning
>       using nonzero initial guess
>       using NONE norm type for convergence test
>     PC Object:    (mg_levels_1_)     6 MPI processes
>       type: jacobi
>       linear system matrix = precond matrix:
>       Matrix Object:       6 MPI processes
>         type: mpiaij
>         rows=54711, cols=54711
>         total: nonzeros=4086585, allocated nonzeros=4086585
>         total number of mallocs used during MatSetValues calls =0
>           using I-node (on process 0) routines: found 3040 nodes, limit used is 5
>   Up solver (post-smoother) same as down solver (pre-smoother)
>   linear system matrix = precond matrix:
>   Matrix Object:   6 MPI processes
>     type: mpiaij
>     rows=54711, cols=54711
>     total: nonzeros=4086585, allocated nonzeros=4086585
>     total number of mallocs used during MatSetValues calls =0
>       using I-node (on process 0) routines: found 3040 nodes, limit used is 5
>  Error in FEMesh_Mod::moveFEMeshPETSc() : KSP returned with error code = -8
> 
> --
>   Hugo Gagnon
> 
> On 2013-04-21, at 10:58 AM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> 
>> Hugo Gagnon <opensource.petsc at user.fastmail.fm> writes:
>> 
>>> Hi,
>>> 
>>> I'm getting a KSP_DIVERGED_INDEFINITE_PC error using CG with ILU.  I
>>> tried increasing the number of levels of fill and also tried other
>>> options described in
>>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCILU.html
>>> but without any luck.  Are there some other preconditioner options
>>> that might work?
>> 
>> What kind of problem are you solving?  How does this work?
>> 
>>  -pc_type gamg -pc_gamg_agg_nsmooths 1
> 
