<div dir="ltr"><table border="0" cellpadding="0" cellspacing="0"><tbody><tr><td width="40"><br></td><td align="LEFT" valign="TOP"><br></td><td><table border="0" cellpadding="0" cellspacing="0"><tbody><tr><td width="40"><br>
</td><td align="LEFT" valign="TOP"><br>I'm getting an FPE in LAPACKgesvd of KSPComputeExtremeSingularValues_GMRES. I'm not sure where this routine is required during the algorithm as I'm using BiCG + richardson/SOR on all multigrid levels. Also, even though I've set mg_coarse_ksp_type richardson, KSPView is showing that preonly is used on the coarsest level. Am I misunderstanding something about the options I'm using?<br>
-pres_ksp_type preonly
-pres_pc_type redistribute
-pres_redistribute_ksp_type bcgsl
-pres_redistribute_pc_type gamg
-pres_redistribute_pc_gamg_threshold 0.01
-pres_redistribute_mg_levels_ksp_type richardson
-pres_redistribute_mg_levels_pc_type sor
-pres_redistribute_mg_coarse_ksp_type richardson
-pres_redistribute_mg_coarse_pc_type sor
-pres_redistribute_mg_coarse_pc_sor_its 4
-pres_redistribute_pc_gamg_agg_nsmooths 1
-pres_redistribute_pc_gamg_sym_graph true
-pres_redistribute_gamg_type agg
-pres_ksp_view
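In case it helps show what I expect those coarse-level options to do, here is a minimal sketch (not code I'm actually running, and the function name is just for illustration) of setting the coarsest-level solver programmatically instead of through the options database. It assumes the outer (pres_) KSP has already been set up so the GAMG hierarchy exists:

#include <petscksp.h>

/* Sketch only: walk PCREDISTRIBUTE -> inner KSP -> PCGAMG (which exposes the
   PCMG interface) to reach the coarsest-level KSP, force it to Richardson,
   and print it. Assumes KSPSetUp() has already been called on `outer`. */
PetscErrorCode SetCoarseRichardson(KSP outer)
{
  PC             pc, gamg;
  KSP            inner, coarse;
  PetscErrorCode ierr;

  ierr = KSPGetPC(outer, &pc);CHKERRQ(ierr);               /* -pres_pc_type redistribute */
  ierr = PCRedistributeGetKSP(pc, &inner);CHKERRQ(ierr);   /* (pres_redistribute_) bcgsl KSP */
  ierr = KSPGetPC(inner, &gamg);CHKERRQ(ierr);             /* -pres_redistribute_pc_type gamg */
  ierr = PCMGGetCoarseSolve(gamg, &coarse);CHKERRQ(ierr);  /* coarsest-level solver */
  ierr = KSPSetType(coarse, KSPRICHARDSON);CHKERRQ(ierr);  /* what I expect -pres_redistribute_mg_coarse_ksp_type richardson to give */
  ierr = KSPView(coarse, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  return 0;
}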
[8]PETSC ERROR: ------------------------------------------------------------------------
[8]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero
[8]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[8]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[8]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[40]PETSC ERROR: ------------------------------------------------------------------------
[40]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero
[40]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[40]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[40]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[40]PETSC ERROR: likely location of problem given in stack below
[40]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[40]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[40]PETSC ERROR:       INSTEAD the line number of the start of the function
[40]PETSC ERROR:       is given.
[40]PETSC ERROR: [40] LAPACKgesvd line 42 /Users/jmousel/SOFT/petsc/src/ksp/ksp/impls/gmres/gmreig.c
[40]PETSC ERROR: [40] KSPComputeExtremeSingularValues_GMRES line 24 /Users/jmousel/SOFT/petsc/src/ksp/ksp/impls/gmres/gmreig.c
[40]PETSC ERROR: [40] KSPComputeExtremeSingularValues line 51 /Users/jmousel/SOFT/petsc/src/ksp/ksp/interface/itfunc.c

KSP Object:proj_ksp(pres_) 96 MPI processes
  type: preonly
  maximum iterations=1000, initial guess is zero
  tolerances: relative=1e-50, absolute=0.01, divergence=10000
  left preconditioning
  using NONE norm type for convergence test
PC Object:(pres_) 96 MPI processes
  type: redistribute
    Number rows eliminated 7115519 Percentage rows eliminated 44.1051
  Redistribute preconditioner:
  KSP Object: (pres_redistribute_) 96 MPI processes
    type: bcgsl
      BCGSL: Ell = 2
      BCGSL: Delta = 0
    maximum iterations=1000, initial guess is zero
    tolerances: relative=1e-50, absolute=0.01, divergence=10000
    left preconditioning
    has attached null space
    using PRECONDITIONED norm type for convergence test
  PC Object: (pres_redistribute_) 96 MPI processes
    type: gamg
      MG: type is MULTIPLICATIVE, levels=4 cycles=v
        Cycles per PCApply=1
        Using Galerkin computed coarse grid matrices
    Coarse grid solver -- level -------------------------------
      KSP Object: (pres_redistribute_mg_coarse_) 96 MPI processes
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (pres_redistribute_mg_coarse_) 96 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 4, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object: 96 MPI processes
          type: mpiaij
          rows=465, cols=465
          total: nonzeros=84523, allocated nonzeros=84523
          total number of mallocs used during MatSetValues calls =0
            not using I-node (on process 0) routines
    Down solver (pre-smoother) on level 1 -------------------------------
      KSP Object: (pres_redistribute_mg_levels_1_) 96 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (pres_redistribute_mg_levels_1_) 96 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object: 96 MPI processes
          type: mpiaij
          rows=13199, cols=13199
          total: nonzeros=2.09436e+06, allocated nonzeros=2.09436e+06
          total number of mallocs used during MatSetValues calls =0
            not using I-node (on process 0) routines
      Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 2 -------------------------------
      KSP Object: (pres_redistribute_mg_levels_2_) 96 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (pres_redistribute_mg_levels_2_) 96 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object: 96 MPI processes
          type: mpiaij
          rows=568202, cols=568202
          total: nonzeros=2.33509e+07, allocated nonzeros=2.33509e+07
          total number of mallocs used during MatSetValues calls =0
            not using I-node (on process 0) routines
      Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 3 -------------------------------
      KSP Object: (pres_redistribute_mg_levels_3_) 96 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (pres_redistribute_mg_levels_3_) 96 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object: 96 MPI processes
          type: mpiaij
          rows=9017583, cols=9017583
          total: nonzeros=1.01192e+08, allocated nonzeros=1.01192e+08
          total number of mallocs used during MatSetValues calls =0
            not using I-node (on process 0) routines
      Up solver (post-smoother) same as down solver (pre-smoother)
    linear system matrix = precond matrix:
    Mat Object: 96 MPI processes
      type: mpiaij
      rows=9017583, cols=9017583
      total: nonzeros=1.01192e+08, allocated nonzeros=1.01192e+08
      total number of mallocs used during MatSetValues calls =0
        not using I-node (on process 0) routines
  linear system matrix = precond matrix:
  Mat Object: proj_A 96 MPI processes
    type: mpiaij
    rows=16133102, cols=16133102
    total: nonzeros=1.09807e+08, allocated nonzeros=1.57582e+08
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines

Residual norms for vel_ solve.