[petsc-users] KSP changes for successive solver
Michele Rosso
mrosso at uci.edu
Sat Jul 18 21:53:13 CDT 2015
Barry,
I tried the master branch of PETSc as you suggested, but unfortunately
the problem of the coarse-level KSP being reset to GMRES when using
"-mg_coarse_ksp_type preonly" is still there. I attached the output
of -ksp_view and the options file.
Am I missing something?
Thanks,
Michele
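
As a cross-check against the attached -ksp_view output, the coarse-level
KSP type can also be queried from code after a solve. A minimal sketch in C,
assuming "ksp" is the outer CG solver configured with -pc_type mg
(PCMGGetCoarseSolve, KSPGetType, and PetscPrintf are standard PETSc calls;
the function name is illustrative):

    #include <petscksp.h>

    /* Print the coarse-level KSP type; "ksp" is assumed to be the outer solver. */
    PetscErrorCode CheckCoarseKSPType(KSP ksp)
    {
      PC             pc;
      KSP            coarse;
      KSPType        ktype;
      PetscErrorCode ierr;

      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCMGGetCoarseSolve(pc, &coarse);CHKERRQ(ierr);
      ierr = KSPGetType(coarse, &ktype);CHKERRQ(ierr);
      /* With -mg_coarse_ksp_type preonly this should print "preonly";
         the problem reported here is that it becomes "gmres" after the PC rebuild. */
      ierr = PetscPrintf(PETSC_COMM_WORLD, "coarse KSP type: %s\n", ktype);CHKERRQ(ierr);
      return 0;
    }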
On Thu, 2015-07-16 at 17:30 -0500, Barry Smith wrote:
> Michele,
>
> This is a very annoying feature that has been fixed in master (http://www.mcs.anl.gov/petsc/developers/index.html). I would like to have changed it in maint, but Jed would have a shit-fit :-) since it changes behavior.
>
> Barry
>
> > On Jul 16, 2015, at 4:53 PM, Michele Rosso <mrosso at uci.edu> wrote:
> >
> > Hi,
> >
> > I am performing a series of solves inside a loop. The matrix for each solve changes, but not enough to justify a rebuild of the PC at each solve.
> > Therefore I am using KSPSetReusePreconditioner to avoid rebuilding unless necessary (a sketch of this loop follows after this message). The solver is CG + MG with a custom PC at the coarse level.
> > If the KSP is not updated each time, everything works as it is supposed to.
> > When instead I allow the default PETSc behavior, i.e. updating the PC every time the matrix changes, the coarse-level KSP, initially set to PREONLY, is changed into GMRES
> > after the first solve. I am not sure where the problem lies (my PC or PETSc), so I would like to have your opinion on this.
> > I attached the ksp_view output for the two successive solves and the options file.
> >
> > Thanks for your help,
> > Michele
> >
> >
> >
> > <ksp_view.txt><petsc_options.txt>
>
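
For context, here is a minimal sketch in C of the solve loop described in the
quoted message. Mat A and Vec b, x are assumed to be built by the application,
and UpdateMatrixEntries() is a hypothetical routine standing in for whatever
modifies the matrix values between solves:

    #include <petscksp.h>

    /* Hypothetical application routine that alters A's entries between solves. */
    extern PetscErrorCode UpdateMatrixEntries(Mat A);

    PetscErrorCode SolveLoop(Mat A, Vec b, Vec x, PetscInt nsteps)
    {
      KSP            ksp;
      PetscInt       step;
      PetscErrorCode ierr;

      ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
      ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);           /* picks up -ksp_type cg, -pc_type mg, ... */
      ierr = KSPSetReusePreconditioner(ksp, PETSC_TRUE);CHKERRQ(ierr);
      for (step = 0; step < nsteps; step++) {
        ierr = UpdateMatrixEntries(A);CHKERRQ(ierr);         /* matrix changes slightly each iteration */
        ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);            /* with reuse on, the PC is kept as is */
      }
      ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
      return 0;
    }

Without the KSPSetReusePreconditioner call, PETSc rebuilds the PC whenever the
operator changes, which is the code path that triggers the reported behavior.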
-------------- next part (ksp_view.txt) --------------
KSP Object: 8 MPI processes
  type: cg
  maximum iterations=10000
  tolerances: relative=1e-07, absolute=1e-50, divergence=10000
  left preconditioning
  using nonzero initial guess
  using UNPRECONDITIONED norm type for convergence test
PC Object: 8 MPI processes
  type: mg
    MG: type is MULTIPLICATIVE, levels=3 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object: (mg_coarse_) 8 MPI processes
      type: gmres
        GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
        GMRES: happy breakdown tolerance 1e-30
      maximum iterations=1, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_coarse_) 8 MPI processes
      type: gamg
        MG: type is MULTIPLICATIVE, levels=2 cycles=v
          Cycles per PCApply=1
          Using Galerkin computed coarse grid matrices
          GAMG specific options
            Threshold for dropping small values from graph 0
            AGG specific options
              Symmetric graph false
      Coarse grid solver -- level -------------------------------
        KSP Object: (mg_coarse_mg_coarse_) 8 MPI processes
          type: gmres
            GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            GMRES: happy breakdown tolerance 1e-30
          maximum iterations=1, initial guess is zero
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
        PC Object: (mg_coarse_mg_coarse_) 8 MPI processes
          type: bjacobi
            block Jacobi: number of blocks = 8
            Local solve is same for all blocks, in the following KSP and PC objects:
          KSP Object: (mg_coarse_mg_coarse_sub_) 1 MPI processes
            type: preonly
            maximum iterations=1, initial guess is zero
            tolerances: relative=1e-05, absolute=1e-50, divergence=10000
            left preconditioning
            using NONE norm type for convergence test
          PC Object: (mg_coarse_mg_coarse_sub_) 1 MPI processes
            type: lu
              LU: out-of-place factorization
              tolerance for zero pivot 2.22045e-14
              using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
              matrix ordering: nd
              factor fill ratio given 5, needed 1.07143
                Factored matrix follows:
                  Mat Object: 1 MPI processes
                    type: seqaij
                    rows=10, cols=10
                    package used to perform factorization: petsc
                    total: nonzeros=90, allocated nonzeros=90
                    total number of mallocs used during MatSetValues calls =0
                      using I-node routines: found 5 nodes, limit used is 5
            linear system matrix = precond matrix:
            Mat Object: 1 MPI processes
              type: seqaij
              rows=10, cols=10
              total: nonzeros=84, allocated nonzeros=84
              total number of mallocs used during MatSetValues calls =0
                not using I-node routines
          linear system matrix = precond matrix:
          Mat Object: 8 MPI processes
            type: mpiaij
            rows=10, cols=10
            total: nonzeros=84, allocated nonzeros=84
            total number of mallocs used during MatSetValues calls =0
              not using I-node (on process 0) routines
      Down solver (pre-smoother) on level 1 -------------------------------
        KSP Object: (mg_coarse_mg_levels_1_) 8 MPI processes
          type: chebyshev
            Chebyshev: eigenvalue estimates: min = 0.152971, max = 1.68268
            Chebyshev: eigenvalues estimated using gmres with translations [0 0.1; 0 1.1]
            KSP Object: (mg_coarse_mg_levels_1_esteig_) 8 MPI processes
              type: gmres
                GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
                GMRES: happy breakdown tolerance 1e-30
              maximum iterations=10
              tolerances: relative=1e-05, absolute=1e-50, divergence=10000
              left preconditioning
              using nonzero initial guess
              using NONE norm type for convergence test
          maximum iterations=2
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using nonzero initial guess
          using NONE norm type for convergence test
        PC Object: (mg_coarse_mg_levels_1_) 8 MPI processes
          type: sor
            SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
          linear system matrix = precond matrix:
          Mat Object: 8 MPI processes
            type: mpiaij
            rows=64, cols=64
            total: nonzeros=352, allocated nonzeros=352
            total number of mallocs used during MatSetValues calls =0
              not using I-node (on process 0) routines
      Up solver (post-smoother) same as down solver (pre-smoother)
      linear system matrix = precond matrix:
      Mat Object: 8 MPI processes
        type: mpiaij
        rows=64, cols=64
        total: nonzeros=352, allocated nonzeros=352
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 8 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 8 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object: 8 MPI processes
        type: mpiaij
        rows=512, cols=512
        total: nonzeros=3200, allocated nonzeros=3200
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object: (mg_levels_2_) 8 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object: (mg_levels_2_) 8 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object: 8 MPI processes
        type: mpiaij
        rows=4096, cols=4096
        total: nonzeros=27136, allocated nonzeros=27136
        total number of mallocs used during MatSetValues calls =0
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 8 MPI processes
    type: mpiaij
    rows=4096, cols=4096
    total: nonzeros=27136, allocated nonzeros=27136
    total number of mallocs used during MatSetValues calls =0
-------------- next part (petsc_options.txt) --------------
-ksp_type cg
-ksp_norm_type unpreconditioned
-ksp_rtol 1e-7
-options_left
-ksp_initial_guess_nonzero yes
-pc_type mg
-pc_mg_galerkin
-pc_mg_levels 3
-mg_levels_ksp_type richardson
#-mg_levels_ksp_max_it 1
-mg_coarse_ksp_type preonly
-mg_coarse_pc_type gamg
#-mg_coarse_mg_coarse_ksp_type preonly
#-mg_coarse_mg_coarse_ksp_type chebyshev
#-mg_coarse_mg_coarse_pc_type lu
-ksp_view
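
For completeness, an options file like the one above can be read at startup
either with the standard -options_file command-line option or through the
third argument of PetscInitialize. A minimal sketch in C, where the file name
petsc_options.txt refers to the attachment above and the body of main() stands
in for the application:

    #include <petscksp.h>

    int main(int argc, char **argv)
    {
      PetscErrorCode ierr;

      /* The third argument loads the options file before any objects are created,
         so KSPSetFromOptions will see -pc_type mg, -mg_coarse_ksp_type preonly, etc. */
      ierr = PetscInitialize(&argc, &argv, "petsc_options.txt", NULL);if (ierr) return ierr;
      /* ... build the matrix and vectors, then run the solve loop sketched earlier ... */
      ierr = PetscFinalize();
      return ierr;
    }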