[petsc-users] Indefinite PC
Anush Krishnan
k.anush at gmail.com
Fri Nov 21 23:05:54 CST 2014
On 21 November 2014 18:23, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> > On Nov 21, 2014, at 5:09 PM, Anush Krishnan <k.anush at gmail.com> wrote:
> >
> > Hello petsc-users,
> >
> > I've been running some CFD simulations, and I ran into one case where
> the system (which is a coupled system for pressure and body forces) did not
> converge and the reason was KSP_DIVERGED_INDEFINITE_PC. The options I'm
> using are -pc_gamg_agg_nsmooths 1 -pc_type gamg -pc_gamg_type agg with a
> conjugate gradient solver.
>
> We've heard reports of this happening before. You should increase the
> number of smoothing steps for the multigrid or change to a "stronger"
> smoother.
>
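Thanks. For reference, my understanding is that with the sys2_ prefix shown
in the attached -ksp_view output that would look something like

    -sys2_mg_levels_ksp_max_it 4

to take more Chebyshev smoothing steps on each level, or

    -sys2_mg_levels_ksp_type richardson -sys2_mg_levels_pc_type sor

to switch to a Richardson/SOR smoother (please correct me if the option
names or the prefix are wrong).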
> What do you get with -ksp_view for the exact solver you are using?
>
I've attached a copy of the output with -ksp_view.
> Are you using PETSc's SNES? TS? Or just KSP?
>
Just KSP.
>
> >
> > I saw in example ksp/pc/examples/tutorials/ex2.c that I should use the
> flag -pc_factor_shift_positive_definite to avoid this.
>
> This flag is only for factorization-based preconditioners, not gamg, so
> you should not use it.
>
> > I have a few questions regarding this:
> > • What does it mean that the preconditioner is indefinite?
>
> It means that the preconditioner generated by GAMG has both negative
> and positive eigenvalues. CG cannot handle this; it requires that the
> preconditioner have all eigenvalues of the same sign.
>
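Understood. For anyone finding this thread later, a minimal sketch of how
this failure can be detected in code after the solve (assuming a KSP ksp
and vectors b, x that are already set up; this is just the standard
KSPGetConvergedReason check, nothing specific to my application):

#include <petscksp.h>

/* Sketch: run the solve and report the specific divergence reason seen in
   this thread.  ksp, b and x are assumed to be fully configured already. */
static PetscErrorCode SolveAndCheck(KSP ksp, Vec b, Vec x)
{
  KSPConvergedReason reason;
  PetscErrorCode     ierr;

  PetscFunctionBeginUser;
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPGetConvergedReason(ksp, &reason);CHKERRQ(ierr);
  if (reason == KSP_DIVERGED_INDEFINITE_PC) {
    ierr = PetscPrintf(PETSC_COMM_WORLD,
                       "CG stopped: preconditioner detected as indefinite\n");CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}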
> > What can cause this to happen? And what does the above flag do?
> > • The error occurs midway through the simulation. Is there any
> reason why this might be the case? The left-hand side matrix does not
> change during the simulation.
>
> Huh? If the matrix is not changing, then the preconditioner should not
> be changing, should not be rebuilt, and you should not see this message
> "midway through the simulation". Are you sure that the matrix is not
> changing??
>
Yes, I'm sure that the matrix is not changing. Only the right hand side
vector changes every time step.
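To be concrete, what I intend is the pattern below (an illustrative sketch
with made-up names, not my actual code, using the PETSc 3.5
KSPSetOperators calling sequence): since the operator is set only once,
GAMG should set up its hierarchy on the first solve and then reuse it.

#include <petscksp.h>

/* Illustrative sketch only: A is assembled once before the time loop, so
   the preconditioner should be built once and then reused; only the
   right-hand side b changes from step to step. */
static PetscErrorCode TimeLoop(KSP ksp, Mat A, Vec b, Vec x, PetscInt nsteps)
{
  PetscInt       step;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);  /* operator set once */
  for (step = 0; step < nsteps; step++) {
    /* ... update only the right-hand side b for this time step ... */
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);       /* PC reused after the first solve */
  }
  PetscFunctionReturn(0);
}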
I have heard that CG sometimes converges even if the matrix is not positive
definite - do you think that might be happening? Also, is there a way to
check if a given matrix is positive definite using PETSc? And is it
possible to generate an indefinite preconditioner from a matrix that is
symmetric positive definite (I remember seeing a thread about this
recently)?
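In case it helps, one rough check I could imagine (please tell me if this
is the wrong way to go about it) is to ask PETSc for eigenvalue estimates
of the preconditioned operator during a solve and look at their signs,
either with -ksp_compute_eigenvalues on the command line or roughly like:

#include <petscksp.h>

/* Sketch: collect eigenvalue estimates of the preconditioned operator from
   the CG/Lanczos process; nonpositive estimates would point at an
   indefinite operator or preconditioner.  These are estimates only, not a
   rigorous test of definiteness. */
static PetscErrorCode CheckSpectrum(KSP ksp, Vec b, Vec x)
{
  PetscReal      r[100], c[100];
  PetscInt       i, neig;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPSetComputeEigenvalues(ksp, PETSC_TRUE);CHKERRQ(ierr); /* before the solve */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPComputeEigenvalues(ksp, 100, r, c, &neig);CHKERRQ(ierr);
  for (i = 0; i < neig; i++) {
    if (r[i] <= 0.0) {
      ierr = PetscPrintf(PETSC_COMM_WORLD,
                         "nonpositive eigenvalue estimate %g\n", (double)r[i]);CHKERRQ(ierr);
    }
  }
  PetscFunctionReturn(0);
}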
>
>
> > • Do both -pc_factor_shift_positive_definite and
> -pc_factor_shift_type POSITIVE_DEFINITE do the same thing?
>
> Yes, they are from different versions of PETSc, but neither is for
> gamg.
>
> Barry
>
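Got it - so those shift options would only come into play with a
factorization-based preconditioner, e.g. something along the lines of

    -pc_type ilu -pc_factor_shift_type POSITIVE_DEFINITE

and are not what I want with -pc_type gamg. Thanks for clarifying.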
> > Thank you,
> >
> > Anush
> >
>
>
-------------- next part --------------
KSP Object:(sys2_) 32 MPI processes
  type: cg
  maximum iterations=20000
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000
  left preconditioning
  has attached null space
  using nonzero initial guess
  using PRECONDITIONED norm type for convergence test
PC Object:(sys2_) 32 MPI processes
  type: gamg
    MG: type is MULTIPLICATIVE, levels=4 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object: (sys2_mg_coarse_) 32 MPI processes
      type: preonly
      maximum iterations=1, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (sys2_mg_coarse_) 32 MPI processes
      type: bjacobi
        block Jacobi: number of blocks = 32
        Local solve is same for all blocks, in the following KSP and PC objects:
      KSP Object: (sys2_mg_coarse_sub_) 1 MPI processes
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (sys2_mg_coarse_sub_) 1 MPI processes
        type: lu
          LU: out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: nd
          factor fill ratio given 5, needed 4.07919
            Factored matrix follows:
              Mat Object: 1 MPI processes
                type: seqaij
                rows=465, cols=465
                package used to perform factorization: petsc
                total: nonzeros=103387, allocated nonzeros=103387
                total number of mallocs used during MatSetValues calls =0
                not using I-node routines
        linear system matrix = precond matrix:
        Mat Object: 1 MPI processes
          type: seqaij
          rows=465, cols=465
          total: nonzeros=25345, allocated nonzeros=25345
          total number of mallocs used during MatSetValues calls =0
          not using I-node routines
      linear system matrix = precond matrix:
      Mat Object: 32 MPI processes
        type: mpiaij
        rows=465, cols=465
        total: nonzeros=25345, allocated nonzeros=25345
        total number of mallocs used during MatSetValues calls =0
        not using I-node (on process 0) routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (sys2_mg_levels_1_) 32 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates: min = 0.0670626, max = 1.40831
      maximum iterations=2
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object: (sys2_mg_levels_1_) 32 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object: 32 MPI processes
        type: mpiaij
        rows=59990, cols=59990
        total: nonzeros=3.98622e+06, allocated nonzeros=3.98622e+06
        total number of mallocs used during MatSetValues calls =0
        not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object: (sys2_mg_levels_2_) 32 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates: min = 0.125337, max = 2.63208
      maximum iterations=2
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object: (sys2_mg_levels_2_) 32 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object: 32 MPI processes
        type: mpiaij
        rows=4204607, cols=4204607
        total: nonzeros=1.40085e+08, allocated nonzeros=1.40085e+08
        total number of mallocs used during MatSetValues calls =0
        not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object: (sys2_mg_levels_3_) 32 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates: min = 0.211442, max = 4.44027
      maximum iterations=2
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object: (sys2_mg_levels_3_) 32 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object: 32 MPI processes
        type: mpiaij
        rows=46923044, cols=46923044
        total: nonzeros=3.29933e+08, allocated nonzeros=3.29933e+08
        total number of mallocs used during MatSetValues calls =0
        not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 32 MPI processes
    type: mpiaij
    rows=46923044, cols=46923044
    total: nonzeros=3.29933e+08, allocated nonzeros=3.29933e+08
    total number of mallocs used during MatSetValues calls =0
    not using I-node (on process 0) routines