[petsc-users] strange convergence

Hoang Giang Bui hgbk2008 at gmail.com
Sun Apr 23 12:22:05 CDT 2017


Hello,

I encountered a strange convergence behavior that I am having trouble
understanding.

KSPSetFromOptions completed
  0 KSP preconditioned resid norm 1.106709687386e+31 true resid norm 9.015150491938e+06 ||r(i)||/||b|| 1.000000000000e+00
  1 KSP preconditioned resid norm 2.933141742664e+29 true resid norm 9.015152282123e+06 ||r(i)||/||b|| 1.000000198575e+00
  2 KSP preconditioned resid norm 9.686409637174e+16 true resid norm 9.015354521944e+06 ||r(i)||/||b|| 1.000022631902e+00
  3 KSP preconditioned resid norm 4.219243615809e+15 true resid norm 9.017157702420e+06 ||r(i)||/||b|| 1.000222648583e+00
.....
999 KSP preconditioned resid norm 3.043754298076e+12 true resid norm 9.015425041089e+06 ||r(i)||/||b|| 1.000030454195e+00
1000 KSP preconditioned resid norm 3.043000287819e+12 true resid norm 9.015424313455e+06 ||r(i)||/||b|| 1.000030373483e+00
Linear solve did not converge due to DIVERGED_ITS iterations 1000
KSP Object: 4 MPI processes
  type: gmres
    GMRES: restart=1000, using Modified Gram-Schmidt Orthogonalization
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=1000, initial guess is zero
  tolerances:  relative=1e-20, absolute=1e-09, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 4 MPI processes
  type: fieldsplit
    FieldSplit with MULTIPLICATIVE composition: total splits = 2
    Solver info for each split is in the following KSP objects:
    Split number 0 Defined by IS
    KSP Object:    (fieldsplit_u_)     4 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (fieldsplit_u_)     4 MPI processes
      type: hypre
        HYPRE BoomerAMG preconditioning
        HYPRE BoomerAMG: Cycle type V
        HYPRE BoomerAMG: Maximum number of levels 25
        HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1
        HYPRE BoomerAMG: Convergence tolerance PER hypre call 0
        HYPRE BoomerAMG: Threshold for strong coupling 0.6
        HYPRE BoomerAMG: Interpolation truncation factor 0
        HYPRE BoomerAMG: Interpolation: max elements per row 0
        HYPRE BoomerAMG: Number of levels of aggressive coarsening 0
        HYPRE BoomerAMG: Number of paths for aggressive coarsening 1
        HYPRE BoomerAMG: Maximum row sums 0.9
        HYPRE BoomerAMG: Sweeps down         1
        HYPRE BoomerAMG: Sweeps up           1
        HYPRE BoomerAMG: Sweeps on coarse    1
        HYPRE BoomerAMG: Relax down          symmetric-SOR/Jacobi
        HYPRE BoomerAMG: Relax up            symmetric-SOR/Jacobi
        HYPRE BoomerAMG: Relax on coarse     Gaussian-elimination
        HYPRE BoomerAMG: Relax weight  (all)      1
        HYPRE BoomerAMG: Outer relax weight (all) 1
        HYPRE BoomerAMG: Using CF-relaxation
        HYPRE BoomerAMG: Measure type        local
        HYPRE BoomerAMG: Coarsen type        PMIS
        HYPRE BoomerAMG: Interpolation type  classical
      linear system matrix = precond matrix:
      Mat Object:      (fieldsplit_u_)       4 MPI processes
        type: mpiaij
        rows=938910, cols=938910, bs=3
        total: nonzeros=8.60906e+07, allocated nonzeros=8.60906e+07
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 78749 nodes, limit used is 5
    Split number 1 Defined by IS
    KSP Object:    (fieldsplit_wp_)     4 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (fieldsplit_wp_)     4 MPI processes
      type: lu
        LU: out-of-place factorization
        tolerance for zero pivot 2.22045e-14
        matrix ordering: natural
        factor fill ratio given 0, needed 0
          Factored matrix follows:
            Mat Object:             4 MPI processes
              type: mpiaij
              rows=34141, cols=34141
              package used to perform factorization: pastix
              total: nonzeros=0, allocated nonzeros=0
              total number of mallocs used during MatSetValues calls =0
                PaStiX run parameters:
                  Matrix type :                      Symmetric
                  Level of printing (0,1,2):         0
                  Number of refinements iterations : 0
              Error :                        -nan
              Error :                        -nan
              Error :                        -nan
              Error :                        -nan
      linear system matrix = precond matrix:
      Mat Object:      (fieldsplit_wp_)       4 MPI processes
        type: mpiaij
        rows=34141, cols=34141
        total: nonzeros=485655, allocated nonzeros=485655
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  linear system matrix = precond matrix:
  Mat Object:   4 MPI processes
    type: mpiaij
    rows=973051, cols=973051
    total: nonzeros=9.90037e+07, allocated nonzeros=9.90037e+07
    total number of mallocs used during MatSetValues calls =0
      using I-node (on process 0) routines: found 78749 nodes, limit used is 5
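
For reference, the solver configuration shown above should correspond roughly to the following runtime options (reconstructed from the KSP view; exact option spellings can differ between PETSc versions, so please treat this as a sketch rather than the literal command line used):

    -ksp_type gmres -ksp_max_it 1000 -ksp_rtol 1e-20 -ksp_atol 1e-9
    -ksp_gmres_restart 1000 -ksp_gmres_modifiedgramschmidt
    -pc_type fieldsplit -pc_fieldsplit_type multiplicative
    -fieldsplit_u_ksp_type preonly
    -fieldsplit_u_pc_type hypre -fieldsplit_u_pc_hypre_type boomeramg
    -fieldsplit_u_pc_hypre_boomeramg_strong_threshold 0.6
    -fieldsplit_u_pc_hypre_boomeramg_coarsen_type PMIS
    -fieldsplit_wp_ksp_type preonly -fieldsplit_wp_pc_type lu
    -fieldsplit_wp_pc_factor_mat_solver_package pastix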

The convergence pattern hints that this system is somehow bad/singular,
but I don't understand why the preconditioned residual norm becomes so
enormously large. Does anyone have an idea?
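
One check I plan to try, to see whether the huge preconditioned norm is mainly an artifact of left preconditioning with the (apparently near-singular) wp block, is to switch the convergence test to the true residual via right preconditioning, watch the singular value estimates of the preconditioned operator, and monitor an iterative solve on the wp split. The option names below are taken from the PETSc documentation and are untested with this setup:

    -ksp_pc_side right -ksp_monitor_true_residual
    -ksp_monitor_singular_value
    -fieldsplit_wp_ksp_type gmres -fieldsplit_wp_ksp_monitor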

Best regards
Giang Bui