[petsc-users] KSP_CONVERGED_STEP_LENGTH

Barry Smith bsmith at mcs.anl.gov
Thu Sep 8 12:59:20 CDT 2016


   This is very odd. KSP_CONVERGED_STEP_LENGTH is a very specialized convergence reason and should never occur with GMRES; the default convergence test never produces it, so something must have replaced the convergence test (or memory is being corrupted).
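   As a quick diagnostic (not a fix for the underlying problem) you could try forcing the default convergence test back before KSPSolve(); a minimal sketch, assuming your KSP object is named ksp:

       {
         void          *defaultctx;
         PetscErrorCode ierr;
         /* restore PETSc's default convergence test; defaultctx is a hypothetical name */
         ierr = KSPConvergedDefaultCreate(&defaultctx);CHKERRQ(ierr);
         ierr = KSPSetConvergenceTest(ksp,KSPConvergedDefault,defaultctx,KSPConvergedDefaultDestroy);CHKERRQ(ierr);
       }

   If the solve then runs down to your rtol of 1e-8, you know the convergence test was being overwritten somewhere.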

   Can you run with valgrind to make sure there is no memory corruption? http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
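   For example, something like the following (./myprog stands in for your executable; -malloc off turns off PETSc's own memory tracking so valgrind reports the raw allocations, as described in the FAQ):

       mpiexec -n 1 valgrind --tool=memcheck -q ./myprog -malloc off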

   Is your code Fortran or C?

   Barry

> On Sep 8, 2016, at 10:38 AM, Harshad Sahasrabudhe <hsahasra at purdue.edu> wrote:
> 
> Hi,
> 
> I'm using GAMG + GMRES for my Poisson problem. The solver stops with KSP_CONVERGED_STEP_LENGTH at a residual norm of 9.773346857844e-02, which is much larger than I need (I need a relative tolerance of at least 1e-8). I can't figure out which tolerance to set to avoid the solver declaring convergence due to CONVERGED_STEP_LENGTH.
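> 
> For reference, a sketch of how I set the relative tolerance (ksp is my KSP object; the command-line equivalent would be -ksp_rtol 1e-8):
> 
>     /* rtol=1e-8; absolute, divergence, and max iteration tolerances left at defaults */
>     ierr = KSPSetTolerances(ksp,1.0e-8,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr);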
> 
> Any help is appreciated! Output of -ksp_view and -ksp_monitor:
> 
>     0 KSP Residual norm 3.121347818142e+00 
>     1 KSP Residual norm 9.773346857844e-02 
>   Linear solve converged due to CONVERGED_STEP_LENGTH iterations 1
> KSP Object: 1 MPI processes
>   type: gmres
>     GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>     GMRES: happy breakdown tolerance 1e-30
>   maximum iterations=10000, initial guess is zero
>   tolerances:  relative=1e-08, absolute=1e-50, divergence=10000
>   left preconditioning
>   using PRECONDITIONED norm type for convergence test
> PC Object: 1 MPI processes
>   type: gamg
>     MG: type is MULTIPLICATIVE, levels=2 cycles=v
>       Cycles per PCApply=1
>       Using Galerkin computed coarse grid matrices
>   Coarse grid solver -- level -------------------------------
>     KSP Object:    (mg_coarse_)     1 MPI processes
>       type: preonly
>       maximum iterations=1, initial guess is zero
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>       left preconditioning
>       using NONE norm type for convergence test
>     PC Object:    (mg_coarse_)     1 MPI processes
>       type: bjacobi
>         block Jacobi: number of blocks = 1
>         Local solve is same for all blocks, in the following KSP and PC objects:
>         KSP Object:        (mg_coarse_sub_)         1 MPI processes
>           type: preonly
>           maximum iterations=1, initial guess is zero
>           tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>           left preconditioning
>           using NONE norm type for convergence test
>         PC Object:        (mg_coarse_sub_)         1 MPI processes
>           type: lu
>             LU: out-of-place factorization
>             tolerance for zero pivot 2.22045e-14
>             using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
>             matrix ordering: nd
>             factor fill ratio given 5, needed 1.91048
>               Factored matrix follows:
>                 Mat Object:                 1 MPI processes
>                   type: seqaij
>                   rows=284, cols=284
>                   package used to perform factorization: petsc
>                   total: nonzeros=7726, allocated nonzeros=7726
>                   total number of mallocs used during MatSetValues calls =0
>                     using I-node routines: found 133 nodes, limit used is 5
>           linear system matrix = precond matrix:
>           Mat Object:           1 MPI processes
>             type: seqaij
>             rows=284, cols=284
>             total: nonzeros=4044, allocated nonzeros=4044
>             total number of mallocs used during MatSetValues calls =0
>               not using I-node routines
>       linear system matrix = precond matrix:
>       Mat Object:       1 MPI processes
>         type: seqaij
>         rows=284, cols=284
>         total: nonzeros=4044, allocated nonzeros=4044
>         total number of mallocs used during MatSetValues calls =0
>           not using I-node routines
>   Down solver (pre-smoother) on level 1 -------------------------------
>     KSP Object:    (mg_levels_1_)     1 MPI processes
>       type: chebyshev
>         Chebyshev: eigenvalue estimates:  min = 0.195339, max = 4.10212
>       maximum iterations=2
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>       left preconditioning
>       using nonzero initial guess
>       using NONE norm type for convergence test
>     PC Object:    (mg_levels_1_)     1 MPI processes
>       type: sor
>         SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
>       linear system matrix = precond matrix:
>       Mat Object:      ()       1 MPI processes
>         type: seqaij
>         rows=9036, cols=9036
>         total: nonzeros=192256, allocated nonzeros=192256
>         total number of mallocs used during MatSetValues calls =0
>           not using I-node routines
>   Up solver (post-smoother) same as down solver (pre-smoother)
>   linear system matrix = precond matrix:
>   Mat Object:  ()   1 MPI processes
>     type: seqaij
>     rows=9036, cols=9036
>     total: nonzeros=192256, allocated nonzeros=192256
>     total number of mallocs used during MatSetValues calls =0
>       not using I-node routines
> 
> Thanks,
> Harshad


