<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 TRANSITIONAL//EN">
<HTML>
<HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; CHARSET=UTF-8">
<META NAME="GENERATOR" CONTENT="GtkHTML/4.6.6">
</HEAD>
<BODY>
Hi Matt,<BR>
<BR>
The ksp_view output was an attachment to my previous email.<BR>
Here it is:<BR>
<BR>
<PRE>
KSP Object: 1 MPI processes
  type: cg
  maximum iterations=10000
  tolerances: relative=1e-08, absolute=1e-50, divergence=10000.
  left preconditioning
  using nonzero initial guess
  using UNPRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: mg
    MG: type is MULTIPLICATIVE, levels=4 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object: (mg_coarse_) 1 MPI processes
      type: preonly
      maximum iterations=1, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_coarse_) 1 MPI processes
      type: lu
        LU: out-of-place factorization
        tolerance for zero pivot 2.22045e-14
        using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
        matrix ordering: nd
        factor fill ratio given 0., needed 0.
          Factored matrix follows:
            Mat Object: 1 MPI processes
              type: seqaij
              rows=16, cols=16
              package used to perform factorization: superlu_dist
              total: nonzeros=0, allocated nonzeros=0
              total number of mallocs used during MatSetValues calls =0
                SuperLU_DIST run parameters:
                  Process grid nprow 1 x npcol 1
                  Equilibrate matrix TRUE
                  Matrix input mode 0
                  Replace tiny pivots FALSE
                  Use iterative refinement FALSE
                  Processors in row 1 col partition 1
                  Row permutation LargeDiag
                  Column permutation METIS_AT_PLUS_A
                  Parallel symbolic factorization FALSE
                  Repeated factorization SamePattern
      linear system matrix = precond matrix:
      Mat Object: 1 MPI processes
        type: seqaij
        rows=16, cols=16
        total: nonzeros=72, allocated nonzeros=72
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI processes
      type: richardson
        Richardson: damping factor=1.
      maximum iterations=2
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
      linear system matrix = precond matrix:
      Mat Object: 1 MPI processes
        type: seqaij
        rows=64, cols=64
        total: nonzeros=304, allocated nonzeros=304
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object: (mg_levels_2_) 1 MPI processes
      type: richardson
        Richardson: damping factor=1.
      maximum iterations=2
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object: (mg_levels_2_) 1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
      linear system matrix = precond matrix:
      Mat Object: 1 MPI processes
        type: seqaij
        rows=256, cols=256
        total: nonzeros=1248, allocated nonzeros=1248
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object: (mg_levels_3_) 1 MPI processes
      type: richardson
        Richardson: damping factor=1.
      maximum iterations=2
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object: (mg_levels_3_) 1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
      linear system matrix = precond matrix:
      Mat Object: 1 MPI processes
        type: seqaij
        rows=1024, cols=1024
        total: nonzeros=5056, allocated nonzeros=5056
        total number of mallocs used during MatSetValues calls =0
          has attached null space
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 1 MPI processes
    type: seqaij
    rows=1024, cols=1024
    total: nonzeros=5056, allocated nonzeros=5056
    total number of mallocs used during MatSetValues calls =0
      has attached null space
      not using I-node routines
</PRE>
<BR>
<BR>
Michele<BR>
<BR>
On Wed, 2016-02-10 at 19:37 -0600, Matthew Knepley wrote:
<BLOCKQUOTE TYPE=CITE>
On Wed, Feb 10, 2016 at 7:33 PM, Michele Rosso <<A HREF="mailto:mrosso@uci.edu">mrosso@uci.edu</A>> wrote:
</BLOCKQUOTE>
<BLOCKQUOTE TYPE=CITE>
<BLOCKQUOTE>
Hi,<BR>
<BR>
I encountered the following error while solving a symmetric positive definite system:<BR>
<BR>
<PRE>
Linear solve did not converge due to DIVERGED_PCSETUP_FAILED iterations 0
  PCSETUP_FAILED due to SUBPC_ERROR
</PRE>
<BR>
This error appears only if I use the optimized version of both PETSc and my code (compiler: gfortran, flags: -O3).<BR>
It is strange, since I am solving a time-dependent problem and everything, i.e. results and convergence rate, is as expected until the above error shows up. If I run both PETSc and my code in debug mode, everything goes smoothly until the end of the simulation.<BR>
However, if I reduce ksp_rtol, even the debug run fails with a KSP_DIVERGED_INDEFINITE_PC after running as expected for a while.<BR>
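<BR>
In case it helps the diagnosis, here is a minimal C sketch (a hypothetical helper, not my actual Fortran code; error checking omitted) of how these two failure modes can be distinguished after KSPSolve via KSPGetConvergedReason:<BR>
<PRE>
#include &lt;petscksp.h&gt;

/* Hypothetical helper: report why a KSPSolve stopped.  The reason
   names below are those used by petsc-master at the time of writing. */
static PetscErrorCode ReportConvergence(KSP ksp)
{
  KSPConvergedReason reason;

  KSPGetConvergedReason(ksp, &amp;reason);
  if (reason == KSP_DIVERGED_PCSETUP_FAILED) {
    PetscPrintf(PETSC_COMM_WORLD, "PC setup failed (e.g. SUBPC_ERROR)\n");
  } else if (reason == KSP_DIVERGED_INDEFINITE_PC) {
    PetscPrintf(PETSC_COMM_WORLD, "CG detected an indefinite preconditioner\n");
  } else {
    /* KSPConvergedReasons[] maps any reason to a readable string */
    PetscPrintf(PETSC_COMM_WORLD, "converged reason: %s\n",
                KSPConvergedReasons[reason]);
  }
  return 0;
}
</PRE>
<BR>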
The options I am using are the following (a sketch of the equivalent programmatic setup follows the list):<BR>
<BR>
<PRE>
-ksp_type cg
-ksp_norm_type unpreconditioned
-ksp_rtol 1e-8
-ksp_lag_norm
-ksp_initial_guess_nonzero yes
-pc_type mg
-pc_mg_galerkin
-pc_mg_levels 4
-mg_levels_ksp_type richardson
-mg_coarse_ksp_constant_null_space
-mg_coarse_pc_type lu
-mg_coarse_pc_factor_mat_solver_package superlu_dist
-options_left
</PRE>
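<BR>
For reference, a minimal C sketch of the equivalent programmatic setup (illustrative only, since my actual code is Fortran; error checking is omitted, and on older PETSc releases PetscOptionsInsertString takes just the string, without the leading NULL):<BR>
<PRE>
#include &lt;petscksp.h&gt;

int main(int argc, char **argv)
{
  KSP ksp;

  PetscInitialize(&amp;argc, &amp;argv, NULL, NULL);

  /* Same settings as the option list above; anything passed on the
     actual command line still overrides what is inserted here. */
  PetscOptionsInsertString(NULL,
    "-ksp_type cg -ksp_norm_type unpreconditioned -ksp_rtol 1e-8 "
    "-ksp_lag_norm -ksp_initial_guess_nonzero yes "
    "-pc_type mg -pc_mg_galerkin -pc_mg_levels 4 "
    "-mg_levels_ksp_type richardson "
    "-mg_coarse_ksp_constant_null_space "
    "-mg_coarse_pc_type lu "
    "-mg_coarse_pc_factor_mat_solver_package superlu_dist");

  KSPCreate(PETSC_COMM_WORLD, &amp;ksp);
  KSPSetFromOptions(ksp);   /* picks up everything inserted above */

  /* ... KSPSetOperators(ksp, A, A) and KSPSolve(ksp, b, x) would
     go here once the matrix and vectors are assembled ... */

  KSPView(ksp, PETSC_VIEWER_STDOUT_WORLD);  /* inspect the configuration */

  KSPDestroy(&amp;ksp);
  PetscFinalize();
  return 0;
}
</PRE>
The same configuration could also be built with explicit KSPSetType/PCSetType calls, but the string form keeps it identical to the command line.<BR>
<BR>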
<BR>
I attached a copy of the ksp_view output. I am currently using petsc-master (last updated yesterday).<BR>
I would appreciate any suggestion on this matter.<BR>
<BR>
</BLOCKQUOTE>
</BLOCKQUOTE>
<BLOCKQUOTE TYPE=CITE>
I suspect you have a nonlinear PC. Can you send the output of -ksp_view?
</BLOCKQUOTE>
<BLOCKQUOTE TYPE=CITE>
Matt
</BLOCKQUOTE>
<BLOCKQUOTE TYPE=CITE>
<BLOCKQUOTE>
Thanks,<BR>
Michele<BR>
<BR>
</BLOCKQUOTE>
</BLOCKQUOTE>
<BLOCKQUOTE TYPE=CITE>
--
</BLOCKQUOTE>
<BLOCKQUOTE TYPE=CITE>
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<BR>
-- Norbert Wiener
</BLOCKQUOTE>
<BR>
</BODY>
</HTML>