<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Wed, Nov 11, 2015 at 12:24 PM, David Knezevic <span dir="ltr"><<a href="mailto:david.knezevic@akselos.com" target="_blank">david.knezevic@akselos.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div><div class="h5"><div class="gmail_quote">On Tue, Nov 10, 2015 at 10:28 PM, David Knezevic <span dir="ltr"><<a href="mailto:david.knezevic@akselos.com" target="_blank">david.knezevic@akselos.com</a>></span> wrote:<br></div></div></div><div class="gmail_quote"><div><div class="h5"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><div>On Tue, Nov 10, 2015 at 10:24 PM, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><div>On Tue, Nov 10, 2015 at 9:21 PM, David Knezevic <span dir="ltr"><<a href="mailto:david.knezevic@akselos.com" target="_blank">david.knezevic@akselos.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Tue, Nov 10, 2015 at 10:00 PM, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><div>On Tue, Nov 10, 2015 at 8:39 PM, David Knezevic <span dir="ltr"><<a href="mailto:david.knezevic@akselos.com" target="_blank">david.knezevic@akselos.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div>I'm looking into using GAMG, so I wanted to start with a simple 3D elasticity problem. 
>>>>>> When I first tried this, I got the following "zero pivot" error:
>>>>>>
>>>>>> -----------------------------------------------------------------------
>>>>>>
>>>>>> [0]PETSC ERROR: Zero pivot in LU factorization: http://www.mcs.anl.gov/petsc/documentation/faq.html#zeropivot
>>>>>> [0]PETSC ERROR: Zero pivot, row 3
>>>>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
>>>>>> [0]PETSC ERROR: Petsc Release Version 3.6.1, Jul, 22, 2015
>>>>>> [0]PETSC ERROR: /home/dknez/akselos-dev/scrbe/build/bin/fe_solver-opt_real on a arch-linux2-c-opt named david-Lenovo by dknez Tue Nov 10 21:26:39 2015
>>>>>> [0]PETSC ERROR: Configure options --with-shared-libraries=1 --with-debugging=0 --download-suitesparse --download-parmetis --download-blacs --with-blas-lapack-dir=/opt/intel/system_studio_2015.2.050/mkl --CXXFLAGS=-Wl,--no-as-needed --download-scalapack --download-mumps --download-metis --download-superlu_dist --prefix=/home/dknez/software/libmesh_install/opt_real/petsc --download-hypre --download-ml
>>>>>> [0]PETSC ERROR: #1 PetscKernel_A_gets_inverse_A_5() line 48 in /home/dknez/software/petsc-3.6.1/src/mat/impls/baij/seq/dgefa5.c
>>>>>> [0]PETSC ERROR: #2 MatSOR_SeqAIJ_Inode() line 2808 in /home/dknez/software/petsc-3.6.1/src/mat/impls/aij/seq/inode.c
>>>>>> [0]PETSC ERROR: #3 MatSOR() line 3697 in /home/dknez/software/petsc-3.6.1/src/mat/interface/matrix.c
>>>>>> [0]PETSC ERROR: #4 PCApply_SOR() line 37 in /home/dknez/software/petsc-3.6.1/src/ksp/pc/impls/sor/sor.c
>>>>>> [0]PETSC ERROR: #5 PCApply() line 482 in /home/dknez/software/petsc-3.6.1/src/ksp/pc/interface/precon.c
>>>>>> [0]PETSC ERROR: #6 KSP_PCApply() line 242 in /home/dknez/software/petsc-3.6.1/include/petsc/private/kspimpl.h
>>>>>> [0]PETSC ERROR: #7 KSPInitialResidual() line 63 in /home/dknez/software/petsc-3.6.1/src/ksp/ksp/interface/itres.c
>>>>>> [0]PETSC ERROR: #8 KSPSolve_GMRES() line 235 in /home/dknez/software/petsc-3.6.1/src/ksp/ksp/impls/gmres/gmres.c
>>>>>> [0]PETSC ERROR: #9 KSPSolve() line 604 in /home/dknez/software/petsc-3.6.1/src/ksp/ksp/interface/itfunc.c
>>>>>> [0]PETSC ERROR: #10 KSPSolve_Chebyshev() line 381 in /home/dknez/software/petsc-3.6.1/src/ksp/ksp/impls/cheby/cheby.c
>>>>>> [0]PETSC ERROR: #11 KSPSolve() line 604 in /home/dknez/software/petsc-3.6.1/src/ksp/ksp/interface/itfunc.c
>>>>>> [0]PETSC ERROR: #12 PCMGMCycle_Private() line 19 in /home/dknez/software/petsc-3.6.1/src/ksp/pc/impls/mg/mg.c
>>>>>> [0]PETSC ERROR: #13 PCMGMCycle_Private() line 48 in /home/dknez/software/petsc-3.6.1/src/ksp/pc/impls/mg/mg.c
>>>>>> [0]PETSC ERROR: #14 PCApply_MG() line 338 in /home/dknez/software/petsc-3.6.1/src/ksp/pc/impls/mg/mg.c
>>>>>> [0]PETSC ERROR: #15 PCApply() line 482 in /home/dknez/software/petsc-3.6.1/src/ksp/pc/interface/precon.c
>>>>>> [0]PETSC ERROR: #16 KSP_PCApply() line 242 in /home/dknez/software/petsc-3.6.1/include/petsc/private/kspimpl.h
>>>>>> [0]PETSC ERROR: #17 KSPSolve_CG() line 139 in /home/dknez/software/petsc-3.6.1/src/ksp/ksp/impls/cg/cg.c
>>>>>> [0]PETSC ERROR: #18 KSPSolve() line 604 in /home/dknez/software/petsc-3.6.1/src/ksp/ksp/interface/itfunc.c
>>>>>>
>>>>>> -----------------------------------------------------------------------
>>>>>>
>>>>>> I saw that there was a thread about this in September (subject: "gamg
>>>>>> and zero pivots"), and that the fix is to use "-mg_levels_pc_type
>>>>>> jacobi". When I do that, the solve succeeds (I pasted the -ksp_view at
>>>>>> the end of this email).
>>>>>>
>>>>>> So I have two questions about this:
>>>>>>
>>>>>> 1. Is it surprising that I hit this issue for a 3D elasticity problem?
>>>>>> Note that the matrix assembly was done in libMesh; I can look into the
>>>>>> structure of the assembled matrix more carefully if needed. Also, note
>>>>>> that I can solve this problem with direct solvers just fine.
>>>>>
>>>>> Yes, this seems like a bug, but it could be some strange BC thing I do
>>>>> not understand.
>>>>
>>>> OK, I can look into the matrix in more detail. I agree that it should
>>>> have a non-zero diagonal, so I'll have a look at what's happening with
>>>> that.
>>>>
>>>>> Naively, the elastic element matrix has a nonzero diagonal. I see that
>>>>> you are doing LU of size 5. That seems strange for 3D elasticity. Am I
>>>>> missing something? I would expect block size 3.
>>>>
>>>> I'm not sure what is causing the LU of size 5. Is there a setting to
>>>> control that?
>>>>
>>>> Regarding the block size: I set the vector and matrix block size to 3
>>>> via VecSetBlockSize and MatSetBlockSize. I also used
>>>> MatNullSpaceCreateRigidBody on a vector with block size 3, and set the
>>>> matrix's near nullspace using that.
>>>
>>> Can you run this same example with -mat_no_inode? I think it may be a
>>> strange blocking that is causing this.
>>
>> That works. The -ksp_view output is below.
>
> I just wanted to follow up on this. I had a more careful look at the
> matrix, and confirmed that there are no zero entries on the diagonal (as
> expected for elasticity). The matrix is from one of libMesh's example
> problems: a simple cantilever model using HEX8 elements.
>
> Do you have any further thoughts about what might cause the "strange
> blocking" that you referred to? If there's something non-standard that
> libMesh is doing with the blocks, I'd be interested to look into that. I
> can send over the matrix if that would be helpful.
>
> Thanks,
> David
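
For completeness, the diagonal check mentioned above was along the lines of
the sketch below. This is only illustrative: CheckDiagonal is a made-up
helper name (not a libMesh or PETSc routine), it covers the sequential case,
and the PETSC_SMALL tolerance is an arbitrary choice.

#include <petscmat.h>

/* Report any (near-)zero diagonal entries of an assembled matrix.
   Sequential sketch: in parallel, the loop index would need to be
   offset by the start of the ownership range before printing. */
static PetscErrorCode CheckDiagonal(Mat A)
{
  Vec                diag;
  const PetscScalar *d;
  PetscInt           i, n;
  PetscErrorCode     ierr;

  PetscFunctionBegin;
  ierr = MatCreateVecs(A, &diag, NULL);CHKERRQ(ierr);
  ierr = MatGetDiagonal(A, diag);CHKERRQ(ierr);
  ierr = VecGetLocalSize(diag, &n);CHKERRQ(ierr);
  ierr = VecGetArrayRead(diag, &d);CHKERRQ(ierr);
  for (i = 0; i < n; i++) {
    if (PetscAbsScalar(d[i]) < PETSC_SMALL) {
      ierr = PetscPrintf(PETSC_COMM_SELF, "zero diagonal entry in row %D\n", i);CHKERRQ(ierr);
    }
  }
  ierr = VecRestoreArrayRead(diag, &d);CHKERRQ(ierr);
  ierr = VecDestroy(&diag);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Since MatGetDiagonal works on any assembled Mat, the same check applies
regardless of whether inodes or blocking are in use.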

P.S. I was previously calling VecSetBlockSize and MatSetBlockSize to set the
block size to 3. When I don't do that, I no longer need to pass
-mat_no_inode. I've pasted the -ksp_view output below. Does it look like
that's working OK?
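
For reference, the setup described earlier in the thread amounts to roughly
the following sketch. The function name and argument names are illustrative,
and the construction of the nodal coordinate vector from the libMesh mesh is
omitted; the two block-size calls marked below are the ones I have now
removed.

#include <petscmat.h>

/* Sketch: block size and rigid-body near-nullspace setup for 3D elasticity.
   "coords" holds the nodal coordinates, (x,y,z) interleaved per node. */
static PetscErrorCode SetupElasticityPC(Mat A, Vec x, Vec coords)
{
  MatNullSpace   nullsp;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = VecSetBlockSize(x, 3);CHKERRQ(ierr);       /* previously set; now removed */
  ierr = MatSetBlockSize(A, 3);CHKERRQ(ierr);       /* previously set; now removed */
  ierr = VecSetBlockSize(coords, 3);CHKERRQ(ierr);  /* 3 coordinates per node */
  ierr = MatNullSpaceCreateRigidBody(coords, &nullsp);CHKERRQ(ierr); /* 6 rigid-body modes in 3D */
  ierr = MatSetNearNullSpace(A, nullsp);CHKERRQ(ierr); /* used by GAMG when forming aggregates */
  ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}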

----------------------------------------------------------

KSP Object: 1 MPI processes
  type: cg
  maximum iterations=5000
  tolerances:  relative=1e-12, absolute=1e-50, divergence=10000
  left preconditioning
  using nonzero initial guess
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: gamg
    MG: type is MULTIPLICATIVE, levels=6 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
      GAMG specific options
        Threshold for dropping small values from graph 0
        AGG specific options
          Symmetric graph false
  Coarse grid solver -- level -------------------------------
    KSP Object:    (mg_coarse_)     1 MPI processes
      type: gmres
        GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
        GMRES: happy breakdown tolerance 1e-30
      maximum iterations=1, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (mg_coarse_)     1 MPI processes
      type: bjacobi
        block Jacobi: number of blocks = 1
        Local solve is same for all blocks, in the following KSP and PC objects:
        KSP Object:        (mg_coarse_sub_)         1 MPI processes
          type: preonly
          maximum iterations=1, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
        PC Object:        (mg_coarse_sub_)         1 MPI processes
          type: lu
            LU: out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
            matrix ordering: nd
            factor fill ratio given 5, needed 1.03941
              Factored matrix follows:
                Mat Object:                 1 MPI processes
                  type: seqaij
                  rows=47, cols=47
                  package used to perform factorization: petsc
                  total: nonzeros=211, allocated nonzeros=211
                  total number of mallocs used during MatSetValues calls =0
                    not using I-node routines
          linear system matrix = precond matrix:
          Mat Object:           1 MPI processes
            type: seqaij
            rows=47, cols=47
            total: nonzeros=203, allocated nonzeros=203
            total number of mallocs used during MatSetValues calls =0
              not using I-node routines
      linear system matrix = precond matrix:
      Mat Object:       1 MPI processes
        type: seqaij
        rows=47, cols=47
        total: nonzeros=203, allocated nonzeros=203
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (mg_levels_1_)     1 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.0998481, max = 1.09833
        Chebyshev: eigenvalues estimated using gmres with translations  [0 0.1; 0 1.1]
        KSP Object:        (mg_levels_1_esteig_)         1 MPI processes
          type: gmres
            GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            GMRES: happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_1_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object:       1 MPI processes
        type: seqaij
        rows=67, cols=67
        total: nonzeros=373, allocated nonzeros=373
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object:    (mg_levels_2_)     1 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.0997389, max = 1.09713
        Chebyshev: eigenvalues estimated using gmres with translations  [0 0.1; 0 1.1]
        KSP Object:        (mg_levels_2_esteig_)         1 MPI processes
          type: gmres
            GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            GMRES: happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_2_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object:       1 MPI processes
        type: seqaij
        rows=129, cols=129
        total: nonzeros=1029, allocated nonzeros=1029
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object:    (mg_levels_3_)     1 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.0997179, max = 1.0969
        Chebyshev: eigenvalues estimated using gmres with translations  [0 0.1; 0 1.1]
        KSP Object:        (mg_levels_3_esteig_)         1 MPI processes
          type: gmres
            GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            GMRES: happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_3_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object:       1 MPI processes
        type: seqaij
        rows=372, cols=372
        total: nonzeros=4116, allocated nonzeros=4116
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 4 -------------------------------
    KSP Object:    (mg_levels_4_)     1 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.0995012, max = 1.09451
        Chebyshev: eigenvalues estimated using gmres with translations  [0 0.1; 0 1.1]
        KSP Object:        (mg_levels_4_esteig_)         1 MPI processes
          type: gmres
            GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            GMRES: happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_4_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object:       1 MPI processes
        type: seqaij
        rows=1816, cols=1816
        total: nonzeros=26636, allocated nonzeros=26636
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 5 -------------------------------
    KSP Object:    (mg_levels_5_)     1 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.0994721, max = 1.09419
        Chebyshev: eigenvalues estimated using gmres with translations  [0 0.1; 0 1.1]
        KSP Object:        (mg_levels_5_esteig_)         1 MPI processes
          type: gmres
            GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            GMRES: happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_5_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object:      ()       1 MPI processes
        type: seqaij
        rows=55473, cols=55473
        total: nonzeros=4.08484e+06, allocated nonzeros=4.08484e+06
        total number of mallocs used during MatSetValues calls =0
          has attached near null space
          using I-node routines: found 18491 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object:  ()   1 MPI processes
    type: seqaij
    rows=55473, cols=55473
    total: nonzeros=4.08484e+06, allocated nonzeros=4.08484e+06
    total number of mallocs used during MatSetValues calls =0
      has attached near null space
      using I-node routines: found 18491 nodes, limit used is 5
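
For reference, the CG/GAMG solver reported by the -ksp_view above
corresponds roughly to the setup sketched below. This is only illustrative:
in our code the KSP is actually configured through libMesh and the options
database, so the function name and the explicit calls here stand in for
that; the tolerances and iteration limit are the values reported by
-ksp_view.

#include <petscksp.h>

/* Sketch: CG preconditioned by GAMG, matching the -ksp_view settings. */
static PetscErrorCode SolveWithGAMG(Mat A, Vec b, Vec x)
{
  KSP            ksp;
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPCG);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);
  /* rtol=1e-12, abstol=1e-50, dtol=10000, maxits=5000, as in the -ksp_view */
  ierr = KSPSetTolerances(ksp, 1e-12, 1e-50, 1e4, 5000);CHKERRQ(ierr);
  ierr = KSPSetInitialGuessNonzero(ksp, PETSC_TRUE);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr); /* so -mg_levels_* options still apply */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}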