[petsc-users] KSP changes for successive solver
Michele Rosso
mrosso at uci.edu
Thu Jul 16 16:53:08 CDT 2015
Hi,
I am performing a series of solves inside a loop. The matrix for each
solve changes, but not enough to justify a rebuild of the PC at each
solve.
Therefore I am using KSPSetReusePreconditioner to avoid rebuilding
unless necessary. The solver is CG + MG with a custom PC at the coarse
level.
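The loop looks roughly like this (a minimal sketch with error checking
omitted; A, b, x, nsteps and the routine that updates the matrix values
are placeholders for my actual code):

  KSP ksp;
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);                 /* same Mat used as operator and preconditioner */
  KSPSetFromOptions(ksp);                     /* picks up -ksp_type cg -pc_type mg ... */
  for (PetscInt step = 0; step < nsteps; step++) {
    UpdateMatrixValues(A, step);              /* placeholder: values change, sparsity pattern does not */
    KSPSetOperators(ksp, A, A);
    KSPSetReusePreconditioner(ksp, PETSC_TRUE);  /* PETSC_FALSE when the PC should be rebuilt */
    KSPSolve(ksp, b, x);
  }
  KSPDestroy(&ksp);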
If the KSP is not updated each time, everything works as it is supposed
to. When instead I allow the default PETSc behavior, i.e. updating the
PC every time the matrix changes, the coarse-level KSP, initially set to
PREONLY, is changed to GMRES after the first solve. I am not sure where
the problem lies (my PC or PETSc), so I would like to have your opinion
on this.
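(For completeness: the type change can also be checked directly after
each solve, e.g. with something like the sketch below, where ksp is the
outer CG solver; error checking omitted.)

  PC pc;
  KSP coarse;
  KSPType coarse_type;
  KSPGetPC(ksp, &pc);                         /* pc is the outer MG preconditioner */
  PCMGGetCoarseSolve(pc, &coarse);            /* KSP used on the coarsest MG level */
  KSPGetType(coarse, &coarse_type);
  PetscPrintf(PETSC_COMM_WORLD, "coarse KSP type: %s\n", coarse_type);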
I attached the ksp_view output for the two successive solves and the
options stack.
Thanks for your help,
Michele
-------------- next part --------------
=============== FIRST SOLVE =============================
Linear solve converged due to CONVERGED_RTOL iterations 15
KSP Object: 8 MPI processes
type: cg
maximum iterations=10000
tolerances: relative=1e-07, absolute=1e-50, divergence=10000
left preconditioning
has attached null space
using nonzero initial guess
using UNPRECONDITIONED norm type for convergence test
PC Object: 8 MPI processes
type: mg
MG: type is MULTIPLICATIVE, levels=3 cycles=v
Cycles per PCApply=1
Using Galerkin computed coarse grid matrices
Coarse grid solver -- level -------------------------------
KSP Object: (mg_coarse_) 8 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_) 8 MPI processes
type: dmdarepart
DMDARepart: parent comm size reduction factor = 2
DMDARepart: subcomm_size = 4
KSP Object: (mg_coarse_dmdarepart_) 4 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
has attached null space
using NONE norm type for convergence test
PC Object: (mg_coarse_dmdarepart_) 4 MPI processes
type: mg
MG: type is MULTIPLICATIVE, levels=2 cycles=v
Cycles per PCApply=1
Using Galerkin computed coarse grid matrices
Coarse grid solver -- level -------------------------------
KSP Object: (mg_coarse_dmdarepart_mg_coarse_) 4 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_dmdarepart_mg_coarse_) 4 MPI processes
type: lu
LU: out-of-place factorization
tolerance for zero pivot 2.22045e-14
matrix ordering: natural
factor fill ratio given 0, needed 0
Factored matrix follows:
Mat Object: 4 MPI processes
type: mpiaij
rows=128, cols=128
package used to perform factorization: superlu_dist
total: nonzeros=0, allocated nonzeros=0
total number of mallocs used during MatSetValues calls =0
SuperLU_DIST run parameters:
Process grid nprow 2 x npcol 2
Equilibrate matrix TRUE
Matrix input mode 1
Replace tiny pivots TRUE
Use iterative refinement FALSE
Processors in row 2 col partition 2
Row permutation LargeDiag
Column permutation METIS_AT_PLUS_A
Parallel symbolic factorization FALSE
Repeated factorization SamePattern_SameRowPerm
linear system matrix = precond matrix:
Mat Object: 4 MPI processes
type: mpiaij
rows=128, cols=128
total: nonzeros=736, allocated nonzeros=736
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Down solver (pre-smoother) on level 1 -------------------------------
KSP Object: (mg_coarse_dmdarepart_mg_levels_1_) 4 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_coarse_dmdarepart_mg_levels_1_) 4 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 4 MPI processes
type: mpiaij
rows=1024, cols=1024
total: nonzeros=6528, allocated nonzeros=6528
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Mat Object: 4 MPI processes
type: mpiaij
rows=1024, cols=1024
total: nonzeros=6528, allocated nonzeros=6528
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=1024, cols=1024
total: nonzeros=6528, allocated nonzeros=6528
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Down solver (pre-smoother) on level 1 -------------------------------
KSP Object: (mg_levels_1_) 8 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_1_) 8 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=8192, cols=8192
total: nonzeros=54784, allocated nonzeros=54784
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Up solver (post-smoother) same as down solver (pre-smoother)
Down solver (pre-smoother) on level 2 -------------------------------
KSP Object: (mg_levels_2_) 8 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
has attached null space
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_2_) 8 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=65536, cols=65536
total: nonzeros=448512, allocated nonzeros=448512
total number of mallocs used during MatSetValues calls =0
has attached null space
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=65536, cols=65536
total: nonzeros=448512, allocated nonzeros=448512
total number of mallocs used during MatSetValues calls =0
has attached null space
============================= SECOND SOLVE ================================
Linear solve converged due to CONVERGED_RTOL iterations 8
KSP Object: 8 MPI processes
type: cg
maximum iterations=10000
tolerances: relative=1e-07, absolute=1e-50, divergence=10000
left preconditioning
has attached null space
using nonzero initial guess
using UNPRECONDITIONED norm type for convergence test
PC Object: 8 MPI processes
type: mg
MG: type is MULTIPLICATIVE, levels=3 cycles=v
Cycles per PCApply=1
Using Galerkin computed coarse grid matrices
Coarse grid solver -- level -------------------------------
KSP Object: (mg_coarse_) 8 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_) 8 MPI processes
type: dmdarepart
DMDARepart: parent comm size reduction factor = 2
DMDARepart: subcomm_size = 4
KSP Object: (mg_coarse_dmdarepart_) 4 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
has attached null space
using NONE norm type for convergence test
PC Object: (mg_coarse_dmdarepart_) 4 MPI processes
type: mg
MG: type is MULTIPLICATIVE, levels=2 cycles=v
Cycles per PCApply=1
Using Galerkin computed coarse grid matrices
Coarse grid solver -- level -------------------------------
KSP Object: (mg_coarse_dmdarepart_mg_coarse_) 4 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_dmdarepart_mg_coarse_) 4 MPI processes
type: lu
LU: out-of-place factorization
tolerance for zero pivot 2.22045e-14
matrix ordering: natural
factor fill ratio given 0, needed 0
Factored matrix follows:
Mat Object: 4 MPI processes
type: mpiaij
rows=128, cols=128
package used to perform factorization: superlu_dist
total: nonzeros=0, allocated nonzeros=0
total number of mallocs used during MatSetValues calls =0
SuperLU_DIST run parameters:
Process grid nprow 2 x npcol 2
Equilibrate matrix TRUE
Matrix input mode 1
Replace tiny pivots TRUE
Use iterative refinement FALSE
Processors in row 2 col partition 2
Row permutation LargeDiag
Column permutation METIS_AT_PLUS_A
Parallel symbolic factorization FALSE
Repeated factorization SamePattern_SameRowPerm
linear system matrix = precond matrix:
Mat Object: 4 MPI processes
type: mpiaij
rows=128, cols=128
total: nonzeros=736, allocated nonzeros=736
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Down solver (pre-smoother) on level 1 -------------------------------
KSP Object: (mg_coarse_dmdarepart_mg_levels_1_) 4 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_coarse_dmdarepart_mg_levels_1_) 4 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 4 MPI processes
type: mpiaij
rows=1024, cols=1024
total: nonzeros=6528, allocated nonzeros=6528
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Mat Object: 4 MPI processes
type: mpiaij
rows=1024, cols=1024
total: nonzeros=6528, allocated nonzeros=6528
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=1024, cols=1024
total: nonzeros=6528, allocated nonzeros=6528
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Down solver (pre-smoother) on level 1 -------------------------------
KSP Object: (mg_levels_1_) 8 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_1_) 8 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=8192, cols=8192
total: nonzeros=54784, allocated nonzeros=54784
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Up solver (post-smoother) same as down solver (pre-smoother)
Down solver (pre-smoother) on level 2 -------------------------------
KSP Object: (mg_levels_2_) 8 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
has attached null space
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_2_) 8 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=65536, cols=65536
total: nonzeros=448512, allocated nonzeros=448512
total number of mallocs used during MatSetValues calls =0
has attached null space
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=65536, cols=65536
total: nonzeros=448512, allocated nonzeros=448512
total number of mallocs used during MatSetValues calls =0
has attached null space
#PETSc Option Table entries:
-ksp_converged_reason
-ksp_initial_guess_nonzero yes
-ksp_norm_type unpreconditioned
-ksp_rtol 1e-7
-ksp_type cg
-ksp_view
-max_tsteps 1
-mg_coarse_dmdarepart_ksp_constant_null_space
-mg_coarse_dmdarepart_ksp_type preonly
-mg_coarse_dmdarepart_mg_coarse_ksp_type preonly
-mg_coarse_dmdarepart_mg_coarse_pc_factor_mat_solver_package superlu_dist
-mg_coarse_dmdarepart_mg_coarse_pc_type lu
-mg_coarse_dmdarepart_mg_levels_ksp_type richardson
-mg_coarse_dmdarepart_pc_mg_galerkin
-mg_coarse_dmdarepart_pc_mg_levels 2
-mg_coarse_dmdarepart_pc_type mg
-mg_coarse_ksp_type preonly
-mg_coarse_pc_dmdarepart_factor 2
-mg_coarse_pc_type dmdarepart
-mg_levels_ksp_type richardson
-nout 8
-options_left
-pc_mg_galerkin
-pc_mg_levels 3
-pc_type mg
-ppe_max_niter 0
#End of PETSc Option Table entries
There are 2 unused database options. They are:
Option left: name:-max_tsteps value: 1
Option left: name:-nout value: 8
-------------- next part --------------
-ksp_type cg
-ksp_norm_type unpreconditioned
-ksp_rtol 1e-7
-options_left
-ksp_initial_guess_nonzero yes
-pc_type mg
-pc_mg_galerkin
-pc_mg_levels 3
-mg_levels_ksp_type richardson
# -mg_levels_ksp_max_it 1
-mg_coarse_ksp_type preonly
-mg_coarse_pc_type dmdarepart
-mg_coarse_pc_dmdarepart_factor 2
# Subcomm
-mg_coarse_dmdarepart_ksp_type preonly
-mg_coarse_dmdarepart_pc_type mg
-mg_coarse_dmdarepart_pc_mg_galerkin
-mg_coarse_dmdarepart_pc_mg_levels 2
-mg_coarse_dmdarepart_mg_levels_ksp_type richardson
-mg_coarse_dmdarepart_mg_coarse_ksp_type preonly
-mg_coarse_dmdarepart_mg_coarse_pc_type lu
-mg_coarse_dmdarepart_mg_coarse_pc_factor_mat_solver_package superlu_dist
-ksp_view
-options_left
-ksp_converged_reason