[petsc-users] KSP changes for successive solver
Michele Rosso
mrosso at uci.edu
Thu Jul 16 19:18:57 CDT 2015
Barry,
thank you very much for the detailed answer. I tried what you suggested
and it works.
So far I have tried it on a small system, but the final goal is to use it
for very large runs. How does PCGAMG compare to PCMG as far as
performance and scalability are concerned?
Also, could you help me tune the GAMG part (my current setup is in
the attached ksp_view.txt file)?
I also tried to use superlu_dist for the LU decomposition on the
mg_coarse_mg_coarse_sub_ level:
-mg_coarse_mg_coarse_sub_pc_type lu
-mg_coarse_mg_coarse_sub_pc_factor_mat_solver_package superlu_dist
but I got an error:
****** Error in MC64A/AD. INFO(1) = -2
****** Error in MC64A/AD. INFO(1) = -2
****** Error in MC64A/AD. INFO(1) = -2
****** Error in MC64A/AD. INFO(1) = -2
****** Error in MC64A/AD. INFO(1) = -2
****** Error in MC64A/AD. INFO(1) = -2
****** Error in MC64A/AD. INFO(1) = -2
symbfact() error returns 0
symbfact() error returns 0
symbfact() error returns 0
symbfact() error returns 0
symbfact() error returns 0
symbfact() error returns 0
symbfact() error returns 0
Thank you,
Michele
On Thu, 2015-07-16 at 18:07 -0500, Barry Smith wrote:
> > On Jul 16, 2015, at 5:42 PM, Michele Rosso <mrosso at uci.edu> wrote:
> >
> > Barry,
> >
> > thanks for your reply. So if I want it fixed, I will have to use the master branch, correct?
>
> Yes, or edit mg.c and remove the offending lines of code (easy enough).
> >
> > On a side note, what I am trying to achieve is to be able to use as many levels of MG as I want, despite the limitation imposed by the local number of grid nodes.
>
> I assume you are talking about DMDA? There is no generic limitation in PETSc's multigrid; the restriction comes only from the way the DMDA code figures out the interpolation.
>
> > So far I am using a borrowed code that implements a PC that creates a sub-communicator and performs MG on it.
> > While reading the documentation I found out that PCMGSetLevels takes an optional array of communicators. How does this work?
>
> It doesn't work. It was an idea that never got pursued.
>
> > Can I simply define my matrix and RHS on the fine grid as I would do normally (I do not use kspsetoperators and kspsetrhs), and would KSP take care of it by using the correct communicator for each level?
>
> No.
>
> You can use the PCMG geometric multigrid with DMDA for as many levels as it works and then use PCGAMG as the coarse grid solver. PCGAMG automatically uses fewer processes for the coarse level matrices and vectors. You could do this all from the command line without writing code.
>
> For example, if your code uses a DMDA and calls KSPSetDM(), use -da_refine 3 -pc_type mg -pc_mg_galerkin -mg_coarse_pc_type gamg -ksp_view
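
A minimal sketch of such a DMDA-based driver (loosely patterned on PETSc's ksp tutorial ex45, not code from this thread): with KSPSetDM() attached, the command-line options above control the entire hierarchy. The ComputeRHS and ComputeMatrix callbacks below are illustrative placeholders for the application's own routines.

/* Sketch: DMDA Poisson-type solve driven from the command line, e.g.
 *   mpiexec -n 8 ./solve -da_refine 3 -pc_type mg -pc_mg_galerkin \
 *                        -mg_coarse_pc_type gamg -ksp_view            */
#include <petscksp.h>
#include <petscdmda.h>

/* Placeholder right-hand side: constant forcing */
static PetscErrorCode ComputeRHS(KSP ksp,Vec b,void *ctx)
{
  PetscErrorCode ierr;
  PetscFunctionBeginUser;
  ierr = VecSet(b,1.0);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* Placeholder operator: unscaled 7-point Laplacian, Dirichlet rows on the boundary */
static PetscErrorCode ComputeMatrix(KSP ksp,Mat J,Mat B,void *ctx)
{
  PetscErrorCode ierr;
  DM             da;
  PetscInt       i,j,k,mx,my,mz,xs,ys,zs,xm,ym,zm;
  PetscScalar    v[7];
  MatStencil     row,col[7];

  PetscFunctionBeginUser;
  ierr = KSPGetDM(ksp,&da);CHKERRQ(ierr);
  ierr = DMDAGetInfo(da,NULL,&mx,&my,&mz,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);CHKERRQ(ierr);
  ierr = DMDAGetCorners(da,&xs,&ys,&zs,&xm,&ym,&zm);CHKERRQ(ierr);
  for (k=zs; k<zs+zm; k++) {
    for (j=ys; j<ys+ym; j++) {
      for (i=xs; i<xs+xm; i++) {
        row.i = i; row.j = j; row.k = k;
        if (i==0 || j==0 || k==0 || i==mx-1 || j==my-1 || k==mz-1) {
          v[0] = 6.0;   /* Dirichlet boundary row */
          ierr = MatSetValuesStencil(B,1,&row,1,&row,v,INSERT_VALUES);CHKERRQ(ierr);
        } else {
          v[0] = -1.0; col[0].i = i;   col[0].j = j;   col[0].k = k-1;
          v[1] = -1.0; col[1].i = i;   col[1].j = j-1; col[1].k = k;
          v[2] = -1.0; col[2].i = i-1; col[2].j = j;   col[2].k = k;
          v[3] =  6.0; col[3].i = i;   col[3].j = j;   col[3].k = k;
          v[4] = -1.0; col[4].i = i+1; col[4].j = j;   col[4].k = k;
          v[5] = -1.0; col[5].i = i;   col[5].j = j+1; col[5].k = k;
          v[6] = -1.0; col[6].i = i;   col[6].j = j;   col[6].k = k+1;
          ierr = MatSetValuesStencil(B,1,&row,7,col,v,INSERT_VALUES);CHKERRQ(ierr);
        }
      }
    }
  }
  ierr = MatAssemblyBegin(B,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(B,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

int main(int argc,char **argv)
{
  DM             da;
  KSP            ksp;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;
  /* Coarsest grid; -da_refine adds the finer levels PCMG will use */
  ierr = DMDACreate3d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,
                      DMDA_STENCIL_STAR,9,9,9,PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,
                      1,1,NULL,NULL,NULL,&da);CHKERRQ(ierr);
  ierr = DMSetFromOptions(da);CHKERRQ(ierr);
  ierr = DMSetUp(da);CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetDM(ksp,da);CHKERRQ(ierr);                       /* PCMG builds interpolation from the DMDA */
  ierr = KSPSetComputeRHS(ksp,ComputeRHS,NULL);CHKERRQ(ierr);
  ierr = KSPSetComputeOperators(ksp,ComputeMatrix,NULL);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);                 /* picks up -pc_type mg, -mg_coarse_pc_type gamg, ... */
  ierr = KSPSolve(ksp,NULL,NULL);CHKERRQ(ierr);

  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}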
>
>
>
> Barry
>
>
> >
> > Thanks,
> > Michele
> >
> >
> >
> >
> > On Thu, 2015-07-16 at 17:30 -0500, Barry Smith wrote:
> >> Michele,
> >>
> >> This is a very annoying feature that has been fixed in master
> >> http://www.mcs.anl.gov/petsc/developers/index.html
> >> I would like to have changed it in maint but Jed would have a shit-fit :-) since it changes behavior.
> >>
> >> Barry
> >>
> >>
> >> > On Jul 16, 2015, at 4:53 PM, Michele Rosso <mrosso at uci.edu> wrote:
> >> >
> >> > Hi,
> >> >
> >> > I am performing a series of solves inside a loop. The matrix for each solve changes, but not enough to justify a rebuild of the PC at each solve.
> >> > Therefore I am using KSPSetReusePreconditioner to avoid rebuilding unless necessary. The solver is CG + MG with a custom PC at the coarse level.
> >> > If KSP is not updated each time, everything works as it is supposed to.
> >> > When instead I allow the default PETSc behavior, i.e. updating the PC every time the matrix changes, the coarse-level KSP, initially set to PREONLY, is changed into GMRES
> >> > after the first solve. I am not sure where the problem lies (my PC or PETSc), so I would like to have your opinion on this.
> >> > I attached the ksp_view for the two successive solves and the options stack.
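
A minimal sketch (not the actual code discussed in this thread) of the reuse pattern described above: the matrix changes a little at every step, KSPSetOperators() is called so the solver sees the new matrix, and KSPSetReusePreconditioner() keeps the existing PC instead of rebuilding it. UpdateMatrix() is a hypothetical stand-in for the application's own matrix update.

#include <petscksp.h>

/* Hypothetical placeholder: perturb the diagonal a little each step */
static PetscErrorCode UpdateMatrix(Mat A,PetscInt step)
{
  PetscErrorCode ierr;
  PetscFunctionBeginUser;
  ierr = MatShift(A,1.0e-6*(PetscReal)step);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

static PetscErrorCode SolveLoop(KSP ksp,Mat A,Vec b,Vec x,PetscInt nsteps)
{
  PetscErrorCode ierr;
  PetscInt       step;

  PetscFunctionBeginUser;
  for (step=0; step<nsteps; step++) {
    ierr = UpdateMatrix(A,step);CHKERRQ(ierr);                        /* small change to A */
    ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);                    /* operator has changed */
    ierr = KSPSetReusePreconditioner(ksp,PETSC_TRUE);CHKERRQ(ierr);   /* keep the current PC */
    /* pass PETSC_FALSE above for the occasional step where a rebuild is wanted */
    ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}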
> >> >
> >> > Thanks for your help,
> >> > Michel
> >> >
> >> >
> >> >
> >> > <ksp_view.txt><petsc_options.txt>
> >>
> >>
> >>
> >
>
-------------- next part --------------
Linear solve converged due to CONVERGED_RTOL iterations 13
KSP Object: 8 MPI processes
type: cg
maximum iterations=10000
tolerances: relative=1e-07, absolute=1e-50, divergence=10000
left preconditioning
has attached null space
using nonzero initial guess
using UNPRECONDITIONED norm type for convergence test
PC Object: 8 MPI processes
type: mg
MG: type is MULTIPLICATIVE, levels=3 cycles=v
Cycles per PCApply=1
Using Galerkin computed coarse grid matrices
Coarse grid solver -- level -------------------------------
KSP Object: (mg_coarse_) 8 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_) 8 MPI processes
type: gamg
MG: type is MULTIPLICATIVE, levels=2 cycles=v
Cycles per PCApply=1
Using Galerkin computed coarse grid matrices
Coarse grid solver -- level -------------------------------
KSP Object: (mg_coarse_mg_coarse_) 8 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_mg_coarse_) 8 MPI processes
type: bjacobi
block Jacobi: number of blocks = 8
Local solve is same for all blocks, in the following KSP and PC objects:
KSP Object: (mg_coarse_mg_coarse_sub_) 1 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_mg_coarse_sub_) 1 MPI processes
type: lu
LU: out-of-place factorization
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
matrix ordering: nd
factor fill ratio given 5, needed 2.9459
Factored matrix follows:
Mat Object: 1 MPI processes
type: seqaij
rows=119, cols=119
package used to perform factorization: petsc
total: nonzeros=7079, allocated nonzeros=7079
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=119, cols=119
total: nonzeros=2403, allocated nonzeros=2403
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=119, cols=119
total: nonzeros=2403, allocated nonzeros=2403
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Down solver (pre-smoother) on level 1 -------------------------------
KSP Object: (mg_coarse_mg_levels_1_) 8 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_coarse_mg_levels_1_) 8 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=1024, cols=1024
total: nonzeros=6528, allocated nonzeros=6528
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=1024, cols=1024
total: nonzeros=6528, allocated nonzeros=6528
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Down solver (pre-smoother) on level 1 -------------------------------
KSP Object: (mg_levels_1_) 8 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_1_) 8 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=8192, cols=8192
total: nonzeros=54784, allocated nonzeros=54784
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Up solver (post-smoother) same as down solver (pre-smoother)
Down solver (pre-smoother) on level 2 -------------------------------
KSP Object: (mg_levels_2_) 8 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
has attached null space
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_2_) 8 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=65536, cols=65536
total: nonzeros=448512, allocated nonzeros=448512
total number of mallocs used during MatSetValues calls =0
has attached null space
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=65536, cols=65536
total: nonzeros=448512, allocated nonzeros=448512
total number of mallocs used during MatSetValues calls =0
has attached null space
Linear solve converged due to CONVERGED_RTOL iterations 7
KSP Object: 8 MPI processes
type: cg
maximum iterations=10000
tolerances: relative=1e-07, absolute=1e-50, divergence=10000
left preconditioning
has attached null space
using nonzero initial guess
using UNPRECONDITIONED norm type for convergence test
PC Object: 8 MPI processes
type: mg
MG: type is MULTIPLICATIVE, levels=3 cycles=v
Cycles per PCApply=1
Using Galerkin computed coarse grid matrices
Coarse grid solver -- level -------------------------------
KSP Object: (mg_coarse_) 8 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_) 8 MPI processes
type: gamg
MG: type is MULTIPLICATIVE, levels=2 cycles=v
Cycles per PCApply=1
Using Galerkin computed coarse grid matrices
Coarse grid solver -- level -------------------------------
KSP Object: (mg_coarse_mg_coarse_) 8 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_mg_coarse_) 8 MPI processes
type: bjacobi
block Jacobi: number of blocks = 8
Local solve is same for all blocks, in the following KSP and PC objects:
KSP Object: (mg_coarse_mg_coarse_sub_) 1 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_mg_coarse_sub_) 1 MPI processes
type: lu
LU: out-of-place factorization
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
matrix ordering: nd
factor fill ratio given 5, needed 2.9459
Factored matrix follows:
Mat Object: 1 MPI processes
type: seqaij
rows=119, cols=119
package used to perform factorization: petsc
total: nonzeros=7079, allocated nonzeros=7079
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=119, cols=119
total: nonzeros=2403, allocated nonzeros=2403
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=119, cols=119
total: nonzeros=2403, allocated nonzeros=2403
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Down solver (pre-smoother) on level 1 -------------------------------
KSP Object: (mg_coarse_mg_levels_1_) 8 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_coarse_mg_levels_1_) 8 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=1024, cols=1024
total: nonzeros=6528, allocated nonzeros=6528
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=1024, cols=1024
total: nonzeros=6528, allocated nonzeros=6528
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Down solver (pre-smoother) on level 1 -------------------------------
KSP Object: (mg_levels_1_) 8 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_1_) 8 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=8192, cols=8192
total: nonzeros=54784, allocated nonzeros=54784
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Up solver (post-smoother) same as down solver (pre-smoother)
Down solver (pre-smoother) on level 2 -------------------------------
KSP Object: (mg_levels_2_) 8 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
has attached null space
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_2_) 8 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=65536, cols=65536
total: nonzeros=448512, allocated nonzeros=448512
total number of mallocs used during MatSetValues calls =0
has attached null space
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Mat Object: 8 MPI processes
type: mpiaij
rows=65536, cols=65536
total: nonzeros=448512, allocated nonzeros=448512
total number of mallocs used during MatSetValues calls =0
has attached null space
#PETSc Option Table entries:
-ksp_converged_reason
-ksp_initial_guess_nonzero yes
-ksp_norm_type unpreconditioned
-ksp_rtol 1e-7
-ksp_type cg
-ksp_view
-max_tsteps 1
-mg_coarse_ksp_type preonly
-mg_coarse_mg_levels_ksp_type richardson
-mg_coarse_pc_type gamg
-mg_levels_ksp_type richardson
-nout 1
-options_left
-pc_mg_galerkin
-pc_mg_levels 3
-pc_type mg
#End of PETSc Option Table entries
There are 2 unused database options. They are:
Option left: name:-max_tsteps value: 1
Option left: name:-nout value: 1