Michele:
SuperLU_DIST LU is used for the coarse-grid PC, which likely produces a zero pivot.
Run your code with '-info | grep pivot' to verify.

Hong
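(For concreteness, a minimal sketch of that check; ./mysolver stands in for the actual executable and whatever MPI launcher is normally used:

    # run with PETSc's -info logging and filter for pivot messages
    ./mysolver -info | grep pivot

Any zero-pivot messages from the coarse-grid factorization should show up in the filtered output.)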
Hi Matt,

the ksp_view output was an attachment to my previous email.
Here it is:
KSP Object: 1 MPI processes
  type: cg
  maximum iterations=10000
  tolerances: relative=1e-08, absolute=1e-50, divergence=10000.
  left preconditioning
  using nonzero initial guess
  using UNPRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: mg
    MG: type is MULTIPLICATIVE, levels=4 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object: (mg_coarse_) 1 MPI processes
      type: preonly
      maximum iterations=1, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_coarse_) 1 MPI processes
      type: lu
        LU: out-of-place factorization
        tolerance for zero pivot 2.22045e-14
        using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
        matrix ordering: nd
        factor fill ratio given 0., needed 0.
          Factored matrix follows:
            Mat Object: 1 MPI processes
              type: seqaij
              rows=16, cols=16
              package used to perform factorization: superlu_dist
              total: nonzeros=0, allocated nonzeros=0
              total number of mallocs used during MatSetValues calls =0
                SuperLU_DIST run parameters:
                  Process grid nprow 1 x npcol 1
                  Equilibrate matrix TRUE
                  Matrix input mode 0
                  Replace tiny pivots FALSE
                  Use iterative refinement FALSE
                  Processors in row 1 col partition 1
                  Row permutation LargeDiag
                  Column permutation METIS_AT_PLUS_A
                  Parallel symbolic factorization FALSE
                  Repeated factorization SamePattern
      linear system matrix = precond matrix:
      Mat Object: 1 MPI processes
        type: seqaij
        rows=16, cols=16
        total: nonzeros=72, allocated nonzeros=72
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI processes
      type: richardson
        Richardson: damping factor=1.
      maximum iterations=2
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
      linear system matrix = precond matrix:
      Mat Object: 1 MPI processes
        type: seqaij
        rows=64, cols=64
        total: nonzeros=304, allocated nonzeros=304
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object: (mg_levels_2_) 1 MPI processes
      type: richardson
        Richardson: damping factor=1.
      maximum iterations=2
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object: (mg_levels_2_) 1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
      linear system matrix = precond matrix:
      Mat Object: 1 MPI processes
        type: seqaij
        rows=256, cols=256
        total: nonzeros=1248, allocated nonzeros=1248
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object: (mg_levels_3_) 1 MPI processes
      type: richardson
        Richardson: damping factor=1.
      maximum iterations=2
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object: (mg_levels_3_) 1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
      linear system matrix = precond matrix:
      Mat Object: 1 MPI processes
        type: seqaij
        rows=1024, cols=1024
        total: nonzeros=5056, allocated nonzeros=5056
        total number of mallocs used during MatSetValues calls =0
          has attached null space
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 1 MPI processes
    type: seqaij
    rows=1024, cols=1024
    total: nonzeros=5056, allocated nonzeros=5056
    total number of mallocs used during MatSetValues calls =0
      has attached null space
      not using I-node routines

Michele

On Wed, 2016-02-10 at 19:37 -0600, Matthew Knepley wrote:
On Wed, Feb 10, 2016 at 7:33 PM, Michele Rosso <mrosso@uci.edu> wrote:
Hi,

I encountered the following error while solving a symmetric positive definite system:

Linear solve did not converge due to DIVERGED_PCSETUP_FAILED iterations 0
PCSETUP_FAILED due to SUBPC_ERROR

This error appears only if I use the optimized version of both PETSc and my code (compiler: gfortran, flags: -O3).
It is strange since I am solving a time-dependent problem and everything, i.e. results and convergence rate, is as expected until the above error shows up. If I run both PETSc and my code in debug mode, everything goes smoothly until the end of the simulation.
However, if I reduce the ksp_rtol, even the debug run fails, after running as expected for a while, because of KSP_DIVERGED_INDEFINITE_PC.
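(A minimal way to reproduce the debug/optimized comparison, assuming the standard PETSc makefile workflow; arch-debug, arch-opt, and mysolver are placeholder names:

    # rebuild and run against the debug and the optimized PETSc builds
    PETSC_ARCH=arch-debug make mysolver && ./mysolver -ksp_converged_reason
    PETSC_ARCH=arch-opt   make mysolver && ./mysolver -ksp_converged_reason

Here -ksp_converged_reason prints why the solve converged or diverged, e.g. DIVERGED_PCSETUP_FAILED.)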
The options I am using are:

-ksp_type cg
-ksp_norm_type unpreconditioned
-ksp_rtol 1e-8
-ksp_lag_norm
-ksp_initial_guess_nonzero yes
-pc_type mg
-pc_mg_galerkin
-pc_mg_levels 4
-mg_levels_ksp_type richardson
-mg_coarse_ksp_constant_null_space
-mg_coarse_pc_type lu
-mg_coarse_pc_factor_mat_solver_package superlu_dist
-options_left
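(For reference, a full invocation with these options plus extra convergence diagnostics might look roughly as follows; ./mysolver is a placeholder, while -ksp_converged_reason and -ksp_monitor_true_residual are standard PETSc monitoring options:

    ./mysolver -ksp_type cg -ksp_norm_type unpreconditioned -ksp_rtol 1e-8 \
        -ksp_lag_norm -ksp_initial_guess_nonzero yes \
        -pc_type mg -pc_mg_galerkin -pc_mg_levels 4 \
        -mg_levels_ksp_type richardson -mg_coarse_ksp_constant_null_space \
        -mg_coarse_pc_type lu -mg_coarse_pc_factor_mat_solver_package superlu_dist \
        -ksp_converged_reason -ksp_monitor_true_residual

The two monitoring options print the converged/diverged reason and the true residual history, which should pinpoint the step where the failure occurs.)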

I attached a copy of the ksp_view output. I am currently using petsc-master (last updated yesterday).
I would appreciate any suggestions on this matter.
I suspect you have a nonlinear PC. Can you send the output of -ksp_view?
Matt
Thanks,
Michele
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener