[petsc-users] DIVERGED_PCSETUP_FAILED
Hong
hzhang at mcs.anl.gov
Wed Feb 10 21:00:20 CST 2016
Michele:
SuperLU_DIST LU is used for the coarse-grid PC, and it is likely producing a
zero pivot.
Run your code with '-info | grep pivot' to verify.
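For example, something like the following (the executable name is only a
placeholder for your application):

  ./your_app <your usual options> -info | grep -i pivot

Any zero-pivot report from the factorization should show up in that output.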
Hong
> Hi Matt,
>
> the ksp_view output was an attachment to my previous email.
> Here it is:
>
> KSP Object: 1 MPI processes
> type: cg
> maximum iterations=10000
> tolerances: relative=1e-08, absolute=1e-50, divergence=10000.
> left preconditioning
> using nonzero initial guess
> using UNPRECONDITIONED norm type for convergence test
> PC Object: 1 MPI processes
> type: mg
> MG: type is MULTIPLICATIVE, levels=4 cycles=v
> Cycles per PCApply=1
> Using Galerkin computed coarse grid matrices
> Coarse grid solver -- level -------------------------------
> KSP Object: (mg_coarse_) 1 MPI processes
> type: preonly
> maximum iterations=1, initial guess is zero
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (mg_coarse_) 1 MPI processes
> type: lu
> LU: out-of-place factorization
> tolerance for zero pivot 2.22045e-14
> using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
> matrix ordering: nd
> factor fill ratio given 0., needed 0.
> Factored matrix follows:
> Mat Object: 1 MPI processes
> type: seqaij
> rows=16, cols=16
> package used to perform factorization: superlu_dist
> total: nonzeros=0, allocated nonzeros=0
> total number of mallocs used during MatSetValues calls =0
> SuperLU_DIST run parameters:
> Process grid nprow 1 x npcol 1
> Equilibrate matrix TRUE
> Matrix input mode 0
> Replace tiny pivots FALSE
> Use iterative refinement FALSE
> Processors in row 1 col partition 1
> Row permutation LargeDiag
> Column permutation METIS_AT_PLUS_A
> Parallel symbolic factorization FALSE
> Repeated factorization SamePattern
> linear system matrix = precond matrix:
> Mat Object: 1 MPI processes
> type: seqaij
> rows=16, cols=16
> total: nonzeros=72, allocated nonzeros=72
> total number of mallocs used during MatSetValues calls =0
> not using I-node routines
> Down solver (pre-smoother) on level 1 -------------------------------
> KSP Object: (mg_levels_1_) 1 MPI processes
> type: richardson
> Richardson: damping factor=1.
> maximum iterations=2
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
> left preconditioning
> using nonzero initial guess
> using NONE norm type for convergence test
> PC Object: (mg_levels_1_) 1 MPI processes
> type: sor
> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
> linear system matrix = precond matrix:
> Mat Object: 1 MPI processes
> type: seqaij
> rows=64, cols=64
> total: nonzeros=304, allocated nonzeros=304
> total number of mallocs used during MatSetValues calls =0
> not using I-node routines
> Up solver (post-smoother) same as down solver (pre-smoother)
> Down solver (pre-smoother) on level 2 -------------------------------
> KSP Object: (mg_levels_2_) 1 MPI processes
> type: richardson
> Richardson: damping factor=1.
> maximum iterations=2
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
> left preconditioning
> using nonzero initial guess
> using NONE norm type for convergence test
> PC Object: (mg_levels_2_) 1 MPI processes
> type: sor
> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
> linear system matrix = precond matrix:
> Mat Object: 1 MPI processes
> type: seqaij
> rows=256, cols=256
> total: nonzeros=1248, allocated nonzeros=1248
> total number of mallocs used during MatSetValues calls =0
> not using I-node routines
> Up solver (post-smoother) same as down solver (pre-smoother)
> Down solver (pre-smoother) on level 3 -------------------------------
> KSP Object: (mg_levels_3_) 1 MPI processes
> type: richardson
> Richardson: damping factor=1.
> maximum iterations=2
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
> left preconditioning
> using nonzero initial guess
> using NONE norm type for convergence test
> PC Object: (mg_levels_3_) 1 MPI processes
> type: sor
> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
> linear system matrix = precond matrix:
> Mat Object: 1 MPI processes
> type: seqaij
> rows=1024, cols=1024
> total: nonzeros=5056, allocated nonzeros=5056
> total number of mallocs used during MatSetValues calls =0
> has attached null space
> not using I-node routines
> Up solver (post-smoother) same as down solver (pre-smoother)
> linear system matrix = precond matrix:
> Mat Object: 1 MPI processes
> type: seqaij
> rows=1024, cols=1024
> total: nonzeros=5056, allocated nonzeros=5056
> total number of mallocs used during MatSetValues calls =0
> has attached null space
> not using I-node routines
>
>
> Michele
>
> On Wed, 2016-02-10 at 19:37 -0600, Matthew Knepley wrote:
>
> On Wed, Feb 10, 2016 at 7:33 PM, Michele Rosso <mrosso at uci.edu> wrote:
>
> Hi,
>
> I encountered the following error while solving a symmetric positive
> definite system:
>
> Linear solve did not converge due to DIVERGED_PCSETUP_FAILED iterations 0
> PCSETUP_FAILED due to SUBPC_ERROR
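>
> (For reference, the failure can also be caught programmatically right after
> the solve; a minimal C-API sketch, assuming a KSP named ksp and that the C
> enum matches the reason string printed above -- not taken from my actual code:)
>
>   KSPConvergedReason reason;
>   PetscErrorCode     ierr;
>   ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
>   ierr = KSPGetConvergedReason(ksp, &reason);CHKERRQ(ierr);
>   if (reason == KSP_DIVERGED_PCSETUP_FAILED) {
>     /* the preconditioner (here the coarse-grid LU) failed during setup */
>     ierr = PetscPrintf(PETSC_COMM_WORLD, "PC setup failed, reason %d\n", (int)reason);CHKERRQ(ierr);
>   }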
>
> This error appears only if I use the optimized version of both PETSc and
> my code (compiler: gfortran, flags: -O3).
> It is weird since I am solving a time-dependent problem and everything,
> i.e. the results and the convergence rate, is as expected until the above
> error shows up. If I run both PETSc and my code in debug mode, everything
> goes smoothly until the end of the simulation.
> However, if I reduce the ksp_rtol, even the debug run fails, after running
> as expected for a while, because of a KSP_DIVERGED_INDEFINITE_PC.
> The options I am using are:
>
> -ksp_type cg
> -ksp_norm_type unpreconditioned
> -ksp_rtol 1e-8
> -ksp_lag_norm
> -ksp_initial_guess_nonzero yes
> -pc_type mg
> -pc_mg_galerkin
> -pc_mg_levels 4
> -mg_levels_ksp_type richardson
> -mg_coarse_ksp_constant_null_space
> -mg_coarse_pc_type lu
> -mg_coarse_pc_factor_mat_solver_package superlu_dist
> -options_left
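>
> (Roughly, that option set corresponds to the following C-API setup; this is
> only an illustrative sketch, not my actual code, which is in Fortran -- the
> multigrid details are still picked up from the command line:)
>
>   #include <petscksp.h>
>   /* inside a function, after assembling A, b, x; error checking omitted */
>   KSP ksp;
>   PC  pc;
>   KSPCreate(PETSC_COMM_WORLD, &ksp);
>   KSPSetOperators(ksp, A, A);
>   KSPSetType(ksp, KSPCG);
>   KSPSetNormType(ksp, KSP_NORM_UNPRECONDITIONED);
>   KSPSetInitialGuessNonzero(ksp, PETSC_TRUE);
>   KSPSetTolerances(ksp, 1.e-8, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT);
>   KSPGetPC(ksp, &pc);
>   PCSetType(pc, PCMG);
>   KSPSetFromOptions(ksp);  /* reads -pc_mg_galerkin, -pc_mg_levels 4, the mg_coarse_* options, etc. */
>   KSPSolve(ksp, b, x);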
>
> I attached a copy of the ksp_view output. I am currently using petsc-master
> (last updated yesterday).
> I would appreciate any suggestions on this matter.
>
>
>
> I suspect you have a nonlinear PC. Can you send the output of -ksp_view?
>
>
>
> Matt
>
>
>
> Thanks,
> Michele
>
> --
>
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener