<div dir="ltr">Hello,
<div><br></div><div>I encountered a strange convergence behavior that I have trouble understanding:</div><div><br></div><div><div>KSPSetFromOptions completed</div><div> 0 KSP preconditioned resid norm 1.106709687386e+31 true resid norm 9.015150491938e+06 ||r(i)||/||b|| 1.000000000000e+00</div><div> 1 KSP preconditioned resid norm 2.933141742664e+29 true resid norm 9.015152282123e+06 ||r(i)||/||b|| 1.000000198575e+00</div><div> 2 KSP preconditioned resid norm 9.686409637174e+16 true resid norm 9.015354521944e+06 ||r(i)||/||b|| 1.000022631902e+00</div><div> 3 KSP preconditioned resid norm 4.219243615809e+15 true resid norm 9.017157702420e+06 ||r(i)||/||b|| 1.000222648583e+00</div></div><div>.....</div><div><div>999 KSP preconditioned resid norm 3.043754298076e+12 true resid norm 9.015425041089e+06 ||r(i)||/||b|| 1.000030454195e+00</div><div>1000 KSP preconditioned resid norm 3.043000287819e+12 true resid norm 9.015424313455e+06 ||r(i)||/||b|| 1.000030373483e+00</div></div><div><div>Linear solve did not converge due to DIVERGED_ITS iterations 1000</div><div>KSP Object: 4 MPI processes</div><div> type: gmres</div><div> GMRES: restart=1000, using Modified Gram-Schmidt Orthogonalization</div><div> GMRES: happy breakdown tolerance 1e-30</div><div> maximum iterations=1000, initial guess is zero</div><div> tolerances: relative=1e-20, absolute=1e-09, divergence=10000</div><div> left preconditioning</div><div> using PRECONDITIONED norm type for convergence test</div><div>PC Object: 4 MPI processes</div><div> type: fieldsplit</div><div> FieldSplit with MULTIPLICATIVE composition: total splits = 2</div><div> Solver info for each split is in the following KSP objects:</div><div> Split number 0 Defined by IS</div><div> KSP Object: (fieldsplit_u_) 4 MPI processes</div><div> type: preonly</div><div> maximum iterations=10000, initial guess is zero</div><div> tolerances: relative=1e-05, absolute=1e-50, divergence=10000</div></div><div><div> left preconditioning</div><div> using NONE 
norm type for convergence test</div><div> PC Object: (fieldsplit_u_) 4 MPI processes</div><div> type: hypre</div><div> HYPRE BoomerAMG preconditioning</div><div> HYPRE BoomerAMG: Cycle type V</div><div> HYPRE BoomerAMG: Maximum number of levels 25</div><div> HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1</div><div> HYPRE BoomerAMG: Convergence tolerance PER hypre call 0</div><div> HYPRE BoomerAMG: Threshold for strong coupling 0.6</div></div><div><div> HYPRE BoomerAMG: Interpolation truncation factor 0</div><div> HYPRE BoomerAMG: Interpolation: max elements per row 0</div><div> HYPRE BoomerAMG: Number of levels of aggressive coarsening 0</div><div> HYPRE BoomerAMG: Number of paths for aggressive coarsening 1</div><div> HYPRE BoomerAMG: Maximum row sums 0.9</div><div> HYPRE BoomerAMG: Sweeps down 1</div><div> HYPRE BoomerAMG: Sweeps up 1</div><div> HYPRE BoomerAMG: Sweeps on coarse 1</div><div> HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi</div><div> HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi</div><div> HYPRE BoomerAMG: Relax on coarse Gaussian-elimination</div><div> HYPRE BoomerAMG: Relax weight (all) 1</div><div> HYPRE BoomerAMG: Outer relax weight (all) 1</div><div> HYPRE BoomerAMG: Using CF-relaxation</div><div> HYPRE BoomerAMG: Measure type local</div><div> HYPRE BoomerAMG: Coarsen type PMIS</div><div> HYPRE BoomerAMG: Interpolation type classical</div><div> linear system matrix = precond matrix:</div><div> Mat Object: (fieldsplit_u_) 4 MPI processes</div><div> type: mpiaij</div><div> rows=938910, cols=938910, bs=3</div><div> total: nonzeros=8.60906e+07, allocated nonzeros=8.60906e+07</div><div> total number of mallocs used during MatSetValues calls =0</div><div> using I-node (on process 0) routines: found 78749 nodes, limit used is 5</div><div> Split number 1 Defined by IS</div><div> KSP Object: (fieldsplit_wp_) 4 MPI processes</div><div> type: preonly</div><div> maximum iterations=10000, initial guess is zero</div><div> tolerances: 
relative=1e-05, absolute=1e-50, divergence=10000</div><div> left preconditioning</div><div> using NONE norm type for convergence test</div></div><div><div> PC Object: (fieldsplit_wp_) 4 MPI processes</div><div> type: lu</div><div> LU: out-of-place factorization</div><div> tolerance for zero pivot 2.22045e-14</div><div> matrix ordering: natural</div><div> factor fill ratio given 0, needed 0</div><div> Factored matrix follows:</div><div> Mat Object: 4 MPI processes</div><div> type: mpiaij</div><div> rows=34141, cols=34141</div><div> package used to perform factorization: pastix</div><div> Error : -nan</div><div> Error : -nan</div><div> total: nonzeros=0, allocated nonzeros=0</div><div> Error : -nan</div><div> total number of mallocs used during MatSetValues calls =0</div><div> PaStiX run parameters:</div><div> Matrix type : Symmetric</div><div> Level of printing (0,1,2): 0</div></div><div><div> Number of refinements iterations : 0</div><div> Error : -nan</div><div> linear system matrix = precond matrix:</div><div> Mat Object: (fieldsplit_wp_) 4 MPI processes</div><div> type: mpiaij</div><div> rows=34141, cols=34141</div><div> total: nonzeros=485655, allocated nonzeros=485655</div><div> total number of mallocs used during MatSetValues calls =0</div><div> not using I-node (on process 0) routines</div><div> linear system matrix = precond matrix:</div><div> Mat Object: 4 MPI processes</div><div> type: mpiaij</div><div> rows=973051, cols=973051</div><div> total: nonzeros=9.90037e+07, allocated nonzeros=9.90037e+07</div><div> total number of mallocs used during MatSetValues calls =0</div><div> using I-node (on process 0) routines: found 78749 nodes, limit used is 5</div></div><div><br></div><div>The convergence pattern suggests that the system is somehow ill-conditioned or singular: the true residual stagnates at ||r||/||b|| ≈ 1 while the preconditioned residual starts at ~1e+31 and never reaches the tolerance. I don't understand why the preconditioned residual norm is so enormous. Does anyone have an idea what is going on?</div><div><br></div><div>Best regards</div><div>Giang Bui</div><div><br></div></div>
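P.S. For completeness, the configuration shown in the -ksp_view output above was set up with runtime options along the lines of the following (a rough sketch; some option names may differ between PETSc versions, and the BoomerAMG settings not listed here are at their defaults):

```shell
# Approximate PETSc runtime options matching the -ksp_view output above.
# Note: the two splits ("u" and "wp") are defined by index sets in the
# application code (PCFieldSplitSetIS), not on the command line.
-ksp_type gmres
-ksp_gmres_restart 1000
-ksp_gmres_modifiedgramschmidt
-ksp_max_it 1000
-ksp_rtol 1e-20
-ksp_atol 1e-9
-ksp_monitor_true_residual
-pc_type fieldsplit
-pc_fieldsplit_type multiplicative
-fieldsplit_u_ksp_type preonly
-fieldsplit_u_pc_type hypre
-fieldsplit_u_pc_hypre_type boomeramg
-fieldsplit_u_pc_hypre_boomeramg_strong_threshold 0.6
-fieldsplit_u_pc_hypre_boomeramg_coarsen_type PMIS
-fieldsplit_wp_ksp_type preonly
-fieldsplit_wp_pc_type lu
-fieldsplit_wp_pc_factor_mat_solver_package pastix
```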