[petsc-users] GAMG and zero pivots follow up

David Knezevic david.knezevic at akselos.com
Wed Nov 11 11:57:45 CST 2015


On Wed, Nov 11, 2015 at 12:24 PM, David Knezevic <david.knezevic at akselos.com> wrote:

> On Tue, Nov 10, 2015 at 10:28 PM, David Knezevic <david.knezevic at akselos.com> wrote:
>
>> On Tue, Nov 10, 2015 at 10:24 PM, Matthew Knepley <knepley at gmail.com> wrote:
>>
>>> On Tue, Nov 10, 2015 at 9:21 PM, David Knezevic <david.knezevic at akselos.com> wrote:
>>>
>>>> On Tue, Nov 10, 2015 at 10:00 PM, Matthew Knepley <knepley at gmail.com> wrote:
>>>>
>>>>> On Tue, Nov 10, 2015 at 8:39 PM, David Knezevic <david.knezevic at akselos.com> wrote:
>>>>>
>>>>>> I'm looking into using GAMG, so I wanted to start with a simple 3D
>>>>>> elasticity problem. When I first tried this, I got the following "zero
>>>>>> pivot" error:
>>>>>>
>>>>>>
>>>>>> -----------------------------------------------------------------------
>>>>>>
>>>>>> [0]PETSC ERROR: Zero pivot in LU factorization: http://www.mcs.anl.gov/petsc/documentation/faq.html#zeropivot
>>>>>> [0]PETSC ERROR: Zero pivot, row 3
>>>>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
>>>>>> [0]PETSC ERROR: Petsc Release Version 3.6.1, Jul, 22, 2015
>>>>>> [0]PETSC ERROR: /home/dknez/akselos-dev/scrbe/build/bin/fe_solver-opt_real on a arch-linux2-c-opt named david-Lenovo by dknez Tue Nov 10 21:26:39 2015
>>>>>> [0]PETSC ERROR: Configure options --with-shared-libraries=1 --with-debugging=0 --download-suitesparse --download-parmetis --download-blacs --with-blas-lapack-dir=/opt/intel/system_studio_2015.2.050/mkl --CXXFLAGS=-Wl,--no-as-needed --download-scalapack --download-mumps --download-metis --download-superlu_dist --prefix=/home/dknez/software/libmesh_install/opt_real/petsc --download-hypre --download-ml
>>>>>> [0]PETSC ERROR: #1 PetscKernel_A_gets_inverse_A_5() line 48 in /home/dknez/software/petsc-3.6.1/src/mat/impls/baij/seq/dgefa5.c
>>>>>> [0]PETSC ERROR: #2 MatSOR_SeqAIJ_Inode() line 2808 in /home/dknez/software/petsc-3.6.1/src/mat/impls/aij/seq/inode.c
>>>>>> [0]PETSC ERROR: #3 MatSOR() line 3697 in /home/dknez/software/petsc-3.6.1/src/mat/interface/matrix.c
>>>>>> [0]PETSC ERROR: #4 PCApply_SOR() line 37 in /home/dknez/software/petsc-3.6.1/src/ksp/pc/impls/sor/sor.c
>>>>>> [0]PETSC ERROR: #5 PCApply() line 482 in /home/dknez/software/petsc-3.6.1/src/ksp/pc/interface/precon.c
>>>>>> [0]PETSC ERROR: #6 KSP_PCApply() line 242 in /home/dknez/software/petsc-3.6.1/include/petsc/private/kspimpl.h
>>>>>> [0]PETSC ERROR: #7 KSPInitialResidual() line 63 in /home/dknez/software/petsc-3.6.1/src/ksp/ksp/interface/itres.c
>>>>>> [0]PETSC ERROR: #8 KSPSolve_GMRES() line 235 in /home/dknez/software/petsc-3.6.1/src/ksp/ksp/impls/gmres/gmres.c
>>>>>> [0]PETSC ERROR: #9 KSPSolve() line 604 in /home/dknez/software/petsc-3.6.1/src/ksp/ksp/interface/itfunc.c
>>>>>> [0]PETSC ERROR: #10 KSPSolve_Chebyshev() line 381 in /home/dknez/software/petsc-3.6.1/src/ksp/ksp/impls/cheby/cheby.c
>>>>>> [0]PETSC ERROR: #11 KSPSolve() line 604 in /home/dknez/software/petsc-3.6.1/src/ksp/ksp/interface/itfunc.c
>>>>>> [0]PETSC ERROR: #12 PCMGMCycle_Private() line 19 in /home/dknez/software/petsc-3.6.1/src/ksp/pc/impls/mg/mg.c
>>>>>> [0]PETSC ERROR: #13 PCMGMCycle_Private() line 48 in /home/dknez/software/petsc-3.6.1/src/ksp/pc/impls/mg/mg.c
>>>>>> [0]PETSC ERROR: #14 PCApply_MG() line 338 in /home/dknez/software/petsc-3.6.1/src/ksp/pc/impls/mg/mg.c
>>>>>> [0]PETSC ERROR: #15 PCApply() line 482 in /home/dknez/software/petsc-3.6.1/src/ksp/pc/interface/precon.c
>>>>>> [0]PETSC ERROR: #16 KSP_PCApply() line 242 in /home/dknez/software/petsc-3.6.1/include/petsc/private/kspimpl.h
>>>>>> [0]PETSC ERROR: #17 KSPSolve_CG() line 139 in /home/dknez/software/petsc-3.6.1/src/ksp/ksp/impls/cg/cg.c
>>>>>> [0]PETSC ERROR: #18 KSPSolve() line 604 in /home/dknez/software/petsc-3.6.1/src/ksp/ksp/interface/itfunc.c
>>>>>>
>>>>>>
>>>>>> -----------------------------------------------------------------------
>>>>>>
>>>>>> I saw that there was a thread about this in September (subject: "gamg
>>>>>> and zero pivots"), and that the fix is to use "-mg_levels_pc_type jacobi".
>>>>>> When I do that, the solve succeeds (I've pasted the -ksp_view output at
>>>>>> the end of this email).
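>>>>>>
>>>>>> (For the archives: the workaround amounts to appending that option on
>>>>>> the command line, e.g.
>>>>>>
>>>>>>     ./fe_solver-opt_real -ksp_type cg -pc_type gamg -mg_levels_pc_type jacobi
>>>>>>
>>>>>> where the executable name and the other solver options here are
>>>>>> illustrative only.)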
>>>>>>
>>>>>> So I have two questions about this:
>>>>>>
>>>>>> 1. Is it surprising that I hit this issue for a 3D elasticity
>>>>>> problem? Note that the matrix assembly was done in libMesh; I can look
>>>>>> into the structure of the assembled matrix more carefully if needed.
>>>>>> Also, note that I can solve this problem with direct solvers just fine.
>>>>>>
>>>>>
>>>>> Yes, this seems like a bug, but it could be some strange BC thing I do
>>>>> not understand.
>>>>>
>>>>
>>>>
>>>> OK, I can look into the matrix in more detail. I agree that it should
>>>> have a non-zero diagonal, so I'll have a look at what's happening with that.
>>>>
>>>>
>>>>
>>>>
>>>>> Naively, the elastic element matrix has a nonzero diagonal. I see that
>>>>> you are doing an LU of size 5. That seems strange for 3D elasticity. Am I
>>>>> missing something? I would expect block size 3.
>>>>>
>>>>
>>>>
>>>> I'm not sure what is causing the LU of size 5. Is there a setting to
>>>> control that?
>>>>
>>>> Regarding the block size: I set the vector and matrix block size to 3 via
>>>> VecSetBlockSize and MatSetBlockSize. I also used MatNullSpaceCreateRigidBody
>>>> on a vector with block size of 3, and set the matrix's near nullspace
>>>> using that.
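>>>>
>>>> For concreteness, that setup is roughly the following (a minimal sketch;
>>>> error checking is omitted, and the names A and coords are placeholders
>>>> for the assembled matrix and the nodal-coordinate vector):
>>>>
>>>>     MatNullSpace nsp;
>>>>     VecSetBlockSize(coords, 3);   /* 3 displacement DOFs per node */
>>>>     MatSetBlockSize(A, 3);
>>>>     /* Build the rigid-body modes from the coordinates and attach them as
>>>>        the near nullspace that GAMG uses to construct its coarse spaces. */
>>>>     MatNullSpaceCreateRigidBody(coords, &nsp);
>>>>     MatSetNearNullSpace(A, nsp);
>>>>     MatNullSpaceDestroy(&nsp);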
>>>>
>>>
>>> Can you run this same example with -mat_no_inode? I think it may be a
>>> strange blocking that is causing this.
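>>>
>>> (That is, the same run with something like
>>>
>>>     ./fe_solver-opt_real -pc_type gamg -mat_no_inode
>>>
>>> i.e. -mat_no_inode appended and the other options unchanged.)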
>>>
>>
>>
>> That works. The -ksp_view output is below.
>>
>
>
> I just wanted to follow up on this. I had a more careful look at the
> matrix, and confirmed that there are no zero entries on the diagonal (as
> expected for elasticity). The matrix is from one of libMesh's example
> problems: a simple cantilever model using HEX8 elements.
>
> Do you have any further thoughts about what might cause the "strange
> blocking" that you referred to? If there's something non-standard that
> libMesh is doing with the blocks, I'd be interested to look into that. I
> can send over the matrix if that would be helpful.
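>
> (If it helps, one way to dump the matrix is to a PETSc binary file, roughly:
>
>     PetscViewer viewer;
>     PetscViewerBinaryOpen(PETSC_COMM_WORLD, "stiffness.dat", FILE_MODE_WRITE, &viewer);
>     MatView(A, viewer);
>     PetscViewerDestroy(&viewer);
>
> where "stiffness.dat" and A are placeholders for the output file and the
> assembled matrix.)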
>
> Thanks,
> David
>
>
P.S. I was previously calling VecSetBlockSize and MatSetBlockSize to set
the block size to 3. When I don't do that, I no longer need to pass
-mat_no_inode. I've pasted the -ksp_view output below. Does it look like
that's working OK?


----------------------------------------------------------

KSP Object: 1 MPI processes
  type: cg
  maximum iterations=5000
  tolerances:  relative=1e-12, absolute=1e-50, divergence=10000
  left preconditioning
  using nonzero initial guess
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: gamg
    MG: type is MULTIPLICATIVE, levels=6 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
      GAMG specific options
        Threshold for dropping small values from graph 0
        AGG specific options
          Symmetric graph false
  Coarse grid solver -- level -------------------------------
    KSP Object:    (mg_coarse_)     1 MPI processes
      type: gmres
        GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
        GMRES: happy breakdown tolerance 1e-30
      maximum iterations=1, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (mg_coarse_)     1 MPI processes
      type: bjacobi
        block Jacobi: number of blocks = 1
        Local solve is same for all blocks, in the following KSP and PC objects:
        KSP Object:        (mg_coarse_sub_)         1 MPI processes
          type: preonly
          maximum iterations=1, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
        PC Object:        (mg_coarse_sub_)         1 MPI processes
          type: lu
            LU: out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
            matrix ordering: nd
            factor fill ratio given 5, needed 1.03941
              Factored matrix follows:
                Mat Object:                 1 MPI processes
                  type: seqaij
                  rows=47, cols=47
                  package used to perform factorization: petsc
                  total: nonzeros=211, allocated nonzeros=211
                  total number of mallocs used during MatSetValues calls =0
                    not using I-node routines
          linear system matrix = precond matrix:
          Mat Object:           1 MPI processes
            type: seqaij
            rows=47, cols=47
            total: nonzeros=203, allocated nonzeros=203
            total number of mallocs used during MatSetValues calls =0
              not using I-node routines
      linear system matrix = precond matrix:
      Mat Object:       1 MPI processes
        type: seqaij
        rows=47, cols=47
        total: nonzeros=203, allocated nonzeros=203
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (mg_levels_1_)     1 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.0998481, max = 1.09833
        Chebyshev: eigenvalues estimated using gmres with translations  [0 0.1; 0 1.1]
        KSP Object:        (mg_levels_1_esteig_)         1 MPI processes
          type: gmres
            GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            GMRES: happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_1_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object:       1 MPI processes
        type: seqaij
        rows=67, cols=67
        total: nonzeros=373, allocated nonzeros=373
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object:    (mg_levels_2_)     1 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.0997389, max = 1.09713
        Chebyshev: eigenvalues estimated using gmres with translations  [0 0.1; 0 1.1]
        KSP Object:        (mg_levels_2_esteig_)         1 MPI processes
          type: gmres
            GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            GMRES: happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_2_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object:       1 MPI processes
        type: seqaij
        rows=129, cols=129
        total: nonzeros=1029, allocated nonzeros=1029
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object:    (mg_levels_3_)     1 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.0997179, max = 1.0969
        Chebyshev: eigenvalues estimated using gmres with translations  [0 0.1; 0 1.1]
        KSP Object:        (mg_levels_3_esteig_)         1 MPI processes
          type: gmres
            GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            GMRES: happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_3_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object:       1 MPI processes
        type: seqaij
        rows=372, cols=372
        total: nonzeros=4116, allocated nonzeros=4116
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 4 -------------------------------
    KSP Object:    (mg_levels_4_)     1 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.0995012, max = 1.09451
        Chebyshev: eigenvalues estimated using gmres with translations  [0 0.1; 0 1.1]
        KSP Object:        (mg_levels_4_esteig_)         1 MPI processes
          type: gmres
            GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            GMRES: happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_4_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object:       1 MPI processes
        type: seqaij
        rows=1816, cols=1816
        total: nonzeros=26636, allocated nonzeros=26636
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 5 -------------------------------
    KSP Object:    (mg_levels_5_)     1 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.0994721, max = 1.09419
        Chebyshev: eigenvalues estimated using gmres with translations  [0 0.1; 0 1.1]
        KSP Object:        (mg_levels_5_esteig_)         1 MPI processes
          type: gmres
            GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            GMRES: happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_5_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object:      ()       1 MPI processes
        type: seqaij
        rows=55473, cols=55473
        total: nonzeros=4.08484e+06, allocated nonzeros=4.08484e+06
        total number of mallocs used during MatSetValues calls =0
          has attached near null space
          using I-node routines: found 18491 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object:  ()   1 MPI processes
    type: seqaij
    rows=55473, cols=55473
    total: nonzeros=4.08484e+06, allocated nonzeros=4.08484e+06
    total number of mallocs used during MatSetValues calls =0
      has attached near null space
      using I-node routines: found 18491 nodes, limit used is 5