[petsc-dev] KSPComputeExtremeSingularValues_GMRES floating point exception

John Mousel john.mousel at gmail.com
Wed Jul 23 09:54:18 CDT 2014


I'm getting an FPE in LAPACKgesvd inside KSPComputeExtremeSingularValues_GMRES.
I'm not sure why this routine is needed by the algorithm, since I'm using
BiCGstab(l) with Richardson/SOR on all multigrid levels. Also, even though I've
set -pres_redistribute_mg_coarse_ksp_type richardson, KSPView shows that preonly
is used on the coarsest level. Am I misunderstanding something about the options
I'm using?
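For reference, a minimal sketch (mine, with placeholder names A/b/x and EstimateExtremeSingularValues) of how KSPComputeExtremeSingularValues is normally driven: the estimate only works on a GMRES (or CG) KSP and must be requested before KSPSolve. If I understand correctly, GAMG runs the same routine on an internal GMRES solve when it needs an eigenvalue/singular-value estimate during setup, which would explain how it gets reached even though the visible solvers are bcgsl and richardson/SOR.

#include <petscksp.h>

/* Usage sketch only, not taken from the PETSc sources: request singular-value
   estimates on a GMRES KSP and read them back after the solve.  The estimate
   ends in LAPACKgesvd on the stored Hessenberg matrix -- the routine in the trace. */
PetscErrorCode EstimateExtremeSingularValues(Mat A, Vec b, Vec x)
{
  KSP            ksp;
  PetscReal      smax, smin;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);                    /* only GMRES (or CG) supports the estimate */
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetComputeSingularValues(ksp, PETSC_TRUE);CHKERRQ(ierr); /* must be set before the solve */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPComputeExtremeSingularValues(ksp, &smax, &smin);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "sigma_max %g  sigma_min %g\n", (double)smax, (double)smin);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}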

-pres_ksp_type preonly -pres_pc_type redistribute
-pres_redistribute_ksp_type bcgsl -pres_redistribute_pc_type gamg
-pres_redistribute_pc_gamg_threshold 0.01
-pres_redistribute_mg_levels_ksp_type richardson
-pres_redistribute_mg_levels_pc_type sor
-pres_redistribute_mg_coarse_ksp_type richardson
-pres_redistribute_mg_coarse_pc_type sor
-pres_redistribute_mg_coarse_pc_sor_its 4
-pres_redistribute_pc_gamg_agg_nsmooths 1
-pres_redistribute_pc_gamg_sym_graph true -pres_redistribute_gamg_type agg
-pres_ksp_view
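As a sanity check I am considering forcing the coarse solver in code instead of through the options, roughly like the sketch below (untested; outer_ksp and ForceCoarseRichardsonSOR are placeholders for the already-configured pres_ solver):

#include <petscksp.h>

/* Workaround sketch, not tested on this case: reach the GAMG coarse-level KSP
   through PCREDISTRIBUTE and set Richardson + 4 SOR sweeps directly, bypassing
   the -pres_redistribute_mg_coarse_* options. */
PetscErrorCode ForceCoarseRichardsonSOR(KSP outer_ksp)
{
  PC             redist_pc, gamg_pc, coarse_pc;
  KSP            inner_ksp, coarse_ksp;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = KSPGetPC(outer_ksp, &redist_pc);CHKERRQ(ierr);       /* PCREDISTRIBUTE */
  ierr = PCRedistributeGetKSP(redist_pc, &inner_ksp);CHKERRQ(ierr);
  ierr = KSPGetPC(inner_ksp, &gamg_pc);CHKERRQ(ierr);         /* PCGAMG (a PCMG underneath) */
  ierr = PCSetUp(gamg_pc);CHKERRQ(ierr);                      /* levels must exist before querying them */
  ierr = PCMGGetCoarseSolve(gamg_pc, &coarse_ksp);CHKERRQ(ierr);
  ierr = KSPSetType(coarse_ksp, KSPRICHARDSON);CHKERRQ(ierr);
  ierr = KSPGetPC(coarse_ksp, &coarse_pc);CHKERRQ(ierr);
  ierr = PCSetType(coarse_pc, PCSOR);CHKERRQ(ierr);
  ierr = PCSORSetIterations(coarse_pc, 4, 1);CHKERRQ(ierr);   /* 4 SOR iterations, 1 local iteration */
  PetscFunctionReturn(0);
}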



[8]PETSC ERROR: ------------------------------------------------------------------------
[8]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception, probably divide by zero
[8]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[8]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[8]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[40]PETSC ERROR: ------------------------------------------------------------------------
[40]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception, probably divide by zero
[40]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[40]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[40]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[40]PETSC ERROR: likely location of problem given in stack below
[40]PETSC ERROR: ---------------------  Stack Frames ------------------------------------
[40]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[40]PETSC ERROR:       INSTEAD the line number of the start of the function is given.
[40]PETSC ERROR: [40] LAPACKgesvd line 42 /Users/jmousel/SOFT/petsc/src/ksp/ksp/impls/gmres/gmreig.c
[40]PETSC ERROR: [40] KSPComputeExtremeSingularValues_GMRES line 24 /Users/jmousel/SOFT/petsc/src/ksp/ksp/impls/gmres/gmreig.c
[40]PETSC ERROR: [40] KSPComputeExtremeSingularValues line 51 /Users/jmousel/SOFT/petsc/src/ksp/ksp/interface/itfunc.c
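In case it matters, here is a small sketch of how the trapping could be narrowed down around the solve (ksp/b/x are placeholders; this just uses PetscFPTrapPush/PetscFPTrapPop, similar in effect to running with -fp_trap):

#include <petscksp.h>

/* Sketch only: explicitly enable floating-point trapping around the solve so
   the FPE aborts at the offending instruction rather than only being reported
   by the signal handler. */
PetscErrorCode SolveWithFPTrap(KSP ksp, Vec b, Vec x)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscFPTrapPush(PETSC_FP_TRAP_ON);CHKERRQ(ierr);  /* trap divide-by-zero / invalid operations */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = PetscFPTrapPop();CHKERRQ(ierr);
  PetscFunctionReturn(0);
}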



KSP Object:proj_ksp(pres_) 96 MPI processes
  type: preonly
  maximum iterations=1000, initial guess is zero
  tolerances:  relative=1e-50, absolute=0.01, divergence=10000
  left preconditioning
  using NONE norm type for convergence test
PC Object:(pres_) 96 MPI processes
  type: redistribute
      Number rows eliminated 7115519 Percentage rows eliminated 44.1051
    Redistribute preconditioner:
  KSP Object:  (pres_redistribute_)   96 MPI processes
    type: bcgsl
      BCGSL: Ell = 2
      BCGSL: Delta = 0
    maximum iterations=1000, initial guess is zero
    tolerances:  relative=1e-50, absolute=0.01, divergence=10000
    left preconditioning
    has attached null space
    using PRECONDITIONED norm type for convergence test
  PC Object:  (pres_redistribute_)   96 MPI processes
    type: gamg
      MG: type is MULTIPLICATIVE, levels=4 cycles=v
        Cycles per PCApply=1
        Using Galerkin computed coarse grid matrices
    Coarse grid solver -- level -------------------------------
      KSP Object:      (pres_redistribute_mg_coarse_)       96 MPI processes
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object:      (pres_redistribute_mg_coarse_)       96 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 4, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object:         96 MPI processes
          type: mpiaij
          rows=465, cols=465
          total: nonzeros=84523, allocated nonzeros=84523
          total number of mallocs used during MatSetValues calls =0
            not using I-node (on process 0) routines
    Down solver (pre-smoother) on level 1 -------------------------------
      KSP Object:      (pres_redistribute_mg_levels_1_)       96 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object:      (pres_redistribute_mg_levels_1_)       96 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object:         96 MPI processes
          type: mpiaij
          rows=13199, cols=13199
          total: nonzeros=2.09436e+06, allocated nonzeros=2.09436e+06
          total number of mallocs used during MatSetValues calls =0
            not using I-node (on process 0) routines
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 2 -------------------------------
      KSP Object:      (pres_redistribute_mg_levels_2_)       96 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object:      (pres_redistribute_mg_levels_2_)       96 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object:         96 MPI processes
          type: mpiaij
          rows=568202, cols=568202
          total: nonzeros=2.33509e+07, allocated nonzeros=2.33509e+07
          total number of mallocs used during MatSetValues calls =0
            not using I-node (on process 0) routines
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 3 -------------------------------
      KSP Object:      (pres_redistribute_mg_levels_3_)       96 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object:      (pres_redistribute_mg_levels_3_)       96 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object:         96 MPI processes
          type: mpiaij
          rows=9017583, cols=9017583
          total: nonzeros=1.01192e+08, allocated nonzeros=1.01192e+08
          total number of mallocs used during MatSetValues calls =0
            not using I-node (on process 0) routines
    Up solver (post-smoother) same as down solver (pre-smoother)
    linear system matrix = precond matrix:
    Mat Object:     96 MPI processes
      type: mpiaij
      rows=9017583, cols=9017583
      total: nonzeros=1.01192e+08, allocated nonzeros=1.01192e+08
      total number of mallocs used during MatSetValues calls =0
        not using I-node (on process 0) routines
  linear system matrix = precond matrix:
  Mat Object:  proj_A   96 MPI processes
    type: mpiaij
    rows=16133102, cols=16133102
    total: nonzeros=1.09807e+08, allocated nonzeros=1.57582e+08
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines
  Residual norms for vel_ solve.