<div dir="ltr">The eigen estimator is failing in GAMG.<div><br></div><div>* The coarsening method in GAMG changed in recent releases, a little bit, with "aggressive" or "square" coarsening (two MISs instead of MIS on A'A), but something else is going on here.</div><div>* Your fine grid looks good, N%3 == 0 and NNZ%9 == 0, the coarse grids seem to have lost the block size. N is not a factor of 3, or 6 with the null space. rows=45721, cols=45721. This is bad.</div><div>* the block size is in there on the fine grid: rows=2186610, cols=2186610, bs=3</div><div>* Try running with -info and grep on GAMG and send me that output. Something is very wrong here.</div><div><br></div><div>Thanks,</div><div>Mark</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Dec 13, 2022 at 10:38 PM Jed Brown <<a href="mailto:jed@jedbrown.org">jed@jedbrown.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Do you have slip/symmetry boundary conditions, where some components are constrained? In that case, there is no uniform block size and I think you'll need DMPlexCreateRigidBody() and MatSetNearNullSpace().<br>
<br>
The PCSetCoordinates() code won't work for non-constant block size.<br>
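A minimal sketch of that call sequence, assuming a DMPlex dm with the displacement as field 0 and an assembled system matrix A (the names here are placeholders, not your code):

  #include <petscdmplex.h>

  /* Sketch: attach the six 3D rigid-body modes (3 translations + 3 rotations)
     as a near null space, so GAMG can build its coarse spaces without
     PCSetCoordinates(). Call once, after the matrix is assembled. */
  static PetscErrorCode AttachRigidBodyModes(DM dm, Mat A)
  {
    MatNullSpace nearnull;

    PetscFunctionBeginUser;
    PetscCall(DMPlexCreateRigidBody(dm, 0, &nearnull)); /* field 0 = displacement */
    PetscCall(MatSetNearNullSpace(A, nearnull));        /* GAMG reads this during PCSetUp */
    PetscCall(MatNullSpaceDestroy(&nearnull));
    PetscFunctionReturn(0);
  }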

-pc_type gamg should work okay out of the box for elasticity. For hypre, I've had good luck with this options suite, which also runs on GPU.

-pc_type hypre -pc_hypre_boomeramg_coarsen_type pmis -pc_hypre_boomeramg_interp_type ext+i -pc_hypre_boomeramg_no_CF -pc_hypre_boomeramg_P_max 6 -pc_hypre_boomeramg_relax_type_down Chebyshev -pc_hypre_boomeramg_relax_type_up Chebyshev -pc_hypre_boomeramg_strong_threshold 0.5
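Note these options are written unprefixed; if your solver has an options prefix (the ksp_view_main output below uses Displacement_), each option needs that prefix, e.g. -Displacement_pc_type hypre, -Displacement_pc_hypre_boomeramg_coarsen_type pmis, and so on for the rest of the suite.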

Blaise Bourdin <bourdin@mcmaster.ca> writes:

> Hi,
>
> I am getting close to finishing the port of a code from petsc 3.3 / sieve to main / dmplex, but am
> now encountering difficulties.
> I am reasonably sure that the Jacobian and residual are correct. The codes handle boundary
> conditions differently (MatZeroRowsColumns vs dmplex constraints), so it is not trivial to compare
> them. Running with -snes_type ksponly -pc_type jacobi or hypre gives me the same results in
> roughly the same number of iterations.
>
> In my old code, gamg would work out of the box. When using petsc-main, -pc_type gamg
> -pc_gamg_type agg works for _some_ problems using P1-Lagrange elements, but never for
> P2-Lagrange. The typical error message is in gamg_agg.txt.
>
> When using -pc_gamg_type classical, a problem where the KSP would converge in 47 iterations in
> 3.3 now takes 1400. ksp_view_3.3.txt and ksp_view_main.txt show the output of -ksp_view
> for both versions. I don't notice anything obvious.
>
> Strangely, removing the call to PCSetCoordinates does not have any impact on the
> convergence.
>
> I am sure that I am missing something, or not passing the right options. What's a good
> starting point for 3D elasticity?
> Regards,
> Blaise
>
> —
> Canada Research Chair in Mathematical and Computational Aspects of Solid Mechanics (Tier 1)
> Professor, Department of Mathematics & Statistics
> Hamilton Hall room 409A, McMaster University
> 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada
> https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243
>
> [gamg_agg.txt]
> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> [0]PETSC ERROR: Petsc has generated inconsistent data
> [0]PETSC ERROR: Computed maximum singular value as zero
> [0]PETSC ERROR: WARNING! There are option(s) set that were not used! Could be the program crashed before they were used or a spelling mistake, etc!
> [0]PETSC ERROR: Option left: name:-displacement_ksp_converged_reason value: ascii source: file
> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.18.2-341-g16200351da0 GIT Date: 2022-12-12 23:42:20 +0000
> [0]PETSC ERROR: /home/bourdinb/Development/mef90/mef90-dmplex/bbserv-gcc11.2.1-mvapich2-2.3.7-O/bin/ThermoElasticity on a bbserv-gcc11.2.1-mvapich2-2.3.7-O named bb01 by bourdinb Tue Dec 13 17:02:19 2022
> [0]PETSC ERROR: Configure options --CFLAGS=-Wunused --FFLAGS="-ffree-line-length-none -fallow-argument-mismatch -Wunused" --COPTFLAGS="-O2 -march=znver2" --CXXOPTFLAGS="-O2 -march=znver2" --FOPTFLAGS="-O2 -march=znver2" --download-chaco=1 --download-exodusii=1 --download-fblaslapack=1 --download-hdf5=1 --download-hypre=1 --download-metis=1 --download-ml=1 --download-mumps=1 --download-netcdf=1 --download-p4est=1 --download-parmetis=1 --download-pnetcdf=1 --download-scalapack=1 --download-sowing=1 --download-sowing-cc=/opt/rh/devtoolset-9/root/usr/bin/gcc --download-sowing-cxx=/opt/rh/devtoolset-9/root/usr/bin/g++ --download-sowing-cpp=/opt/rh/devtoolset-9/root/usr/bin/cpp --download-sowing-cxxcpp=/opt/rh/devtoolset-9/root/usr/bin/cpp --download-superlu=1 --download-triangle=1 --download-yaml=1 --download-zlib=1 --with-debugging=0 --with-mpi-dir=/opt/HPC/mvapich2/2.3.7-gcc11.2.1 --with-pic --with-shared-libraries=1 --with-mpiexec=srun --with-x11=0
> [0]PETSC ERROR: #1 PCGAMGOptProlongator_AGG() at /1/HPC/petsc/main/src/ksp/pc/impls/gamg/agg.c:779
> [0]PETSC ERROR: #2 PCSetUp_GAMG() at /1/HPC/petsc/main/src/ksp/pc/impls/gamg/gamg.c:639
> [0]PETSC ERROR: #3 PCSetUp() at /1/HPC/petsc/main/src/ksp/pc/interface/precon.c:994
> [0]PETSC ERROR: #4 KSPSetUp() at /1/HPC/petsc/main/src/ksp/ksp/interface/itfunc.c:405
> [0]PETSC ERROR: #5 KSPSolve_Private() at /1/HPC/petsc/main/src/ksp/ksp/interface/itfunc.c:824
> [0]PETSC ERROR: #6 KSPSolve() at /1/HPC/petsc/main/src/ksp/ksp/interface/itfunc.c:1070
> [0]PETSC ERROR: #7 SNESSolve_KSPONLY() at /1/HPC/petsc/main/src/snes/impls/ksponly/ksponly.c:48
> [0]PETSC ERROR: #8 SNESSolve() at /1/HPC/petsc/main/src/snes/interface/snes.c:4693
> [0]PETSC ERROR: #9 /home/bourdinb/Development/mef90/mef90-dmplex/ThermoElasticity/ThermoElasticity.F90:228
>
> [ksp_view_3.3.txt]
> Linear solve converged due to CONVERGED_RTOL iterations 46
> KSP Object:(Disp_) 32 MPI processes
> type: cg
> maximum iterations=10000
> tolerances: relative=1e-05, absolute=1e-08, divergence=1e+10
> left preconditioning
> using nonzero initial guess
> using PRECONDITIONED norm type for convergence test
> PC Object:(Disp_) 32 MPI processes
> type: gamg
> MG: type is MULTIPLICATIVE, levels=4 cycles=v
> Cycles per PCApply=1
> Using Galerkin computed coarse grid matrices
> Coarse grid solver -- level -------------------------------
> KSP Object: (Disp_mg_coarse_) 32 MPI processes
> type: gmres
> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
> GMRES: happy breakdown tolerance 1e-30
> maximum iterations=1, initial guess is zero
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (Disp_mg_coarse_) 32 MPI processes
> type: bjacobi
> block Jacobi: number of blocks = 32
> Local solve info for each block is in the following KSP and PC objects:
> [0] number of local blocks = 1, first local block number = 0
> [0] local block number 0
> KSP Object: (Disp_mg_coarse_sub_) 1 MPI processes
> type: preonly
> maximum iterations=10000, initial guess is zero
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (Disp_mg_coarse_sub_) 1 MPI processes
> type: lu
> LU: out-of-place factorization
> tolerance for zero pivot 2.22045e-14
> matrix ordering: nd
> factor fill ratio given 5, needed 1.06061
> Factored matrix follows:
> Matrix Object: 1 MPI processes
> type: seqaij
> rows=54, cols=54, bs=6
> package used to perform factorization: petsc
> total: nonzeros=1260, allocated nonzeros=1260
> total number of mallocs used during MatSetValues calls =0
> using I-node routines: found 16 nodes, limit used is 5
> linear system matrix = precond matrix:
> Matrix Object: 1 MPI processes
> type: seqaij
> rows=54, cols=54, bs=6
> total: nonzeros=1188, allocated nonzeros=1188
> total number of mallocs used during MatSetValues calls =0
> using I-node routines: found 17 nodes, limit used is 5
> - - - - - - - - - - - - - - - - - -
> [blocks 1-31 omitted: the per-rank output is interleaved and repeats the same preonly/LU sub-solver, each with an empty local block (rows=0, cols=0, bs=6)]
> linear system matrix = precond matrix:
> Matrix Object: 32 MPI processes
> type: mpiaij
> rows=54, cols=54, bs=6
> total: nonzeros=1188, allocated nonzeros=1188
> total number of mallocs used during MatSetValues calls =0
> using I-node (on process 0) routines: found 17 nodes, limit used is 5
> Down solver (pre-smoother) on level 1 -------------------------------
> KSP Object: (Disp_mg_levels_1_) 32 MPI processes
> type: chebyshev
> Chebyshev: eigenvalue estimates: min = 0.101023, max = 2.13327
> maximum iterations=2
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using nonzero initial guess
> using NONE norm type for convergence test
> PC Object: (Disp_mg_levels_1_) 32 MPI processes
> type: jacobi
> linear system matrix = precond matrix:
> Matrix Object: 32 MPI processes
> type: mpiaij
> rows=1086, cols=1086, bs=6
> total: nonzeros=67356, allocated nonzeros=67356
> total number of mallocs used during MatSetValues calls =0
> using I-node (on process 0) routines: found 362 nodes, limit used is 5
> Up solver (post-smoother) same as down solver (pre-smoother)
> Down solver (pre-smoother) on level 2 -------------------------------
> KSP Object: (Disp_mg_levels_2_) 32 MPI processes
> type: chebyshev
> Chebyshev: eigenvalue estimates: min = 0.0996526, max = 2.29388
> maximum iterations=2
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using nonzero initial guess
> using NONE norm type for convergence test
> PC Object: (Disp_mg_levels_2_) 32 MPI processes
> type: jacobi
> linear system matrix = precond matrix:
> Matrix Object: 32 MPI processes
> type: mpiaij
> rows=23808, cols=23808, bs=6
> total: nonzeros=1976256, allocated nonzeros=1976256
> total number of mallocs used during MatSetValues calls =0
> using I-node (on process 0) routines: found 7936 nodes, limit used is 5
> Up solver (post-smoother) same as down solver (pre-smoother)
> Down solver (pre-smoother) on level 3 -------------------------------
> KSP Object: (Disp_mg_levels_3_) 32 MPI processes
> type: chebyshev
> Chebyshev: eigenvalue estimates: min = 0.165968, max = 2.13065
> maximum iterations=2
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using nonzero initial guess
> using NONE norm type for convergence test
> PC Object: (Disp_mg_levels_3_) 32 MPI processes
> type: jacobi
> linear system matrix = precond matrix:
> Matrix Object: (Disp_) 32 MPI processes
> type: mpiaij
> rows=291087, cols=291087
> total: nonzeros=12323691, allocated nonzeros=12336696
> total number of mallocs used during MatSetValues calls =0
> using I-node (on process 0) routines: found 3419 nodes, limit used is 5
> Up solver (post-smoother) same as down solver (pre-smoother)
> linear system matrix = precond matrix:
> Matrix Object: (Disp_) 32 MPI processes
> type: mpiaij
> rows=291087, cols=291087
> total: nonzeros=12323691, allocated nonzeros=12336696
> total number of mallocs used during MatSetValues calls =0
> using I-node (on process 0) routines: found 3419 nodes, limit used is 5
> SNESConvergedReason returned 5
>
> [ksp_view_main.txt]
> KSP Object: (Displacement_) 32 MPI processes
> type: cg
> maximum iterations=10000, nonzero initial guess
> tolerances: relative=1e-05, absolute=1e-08, divergence=1e+10
> left preconditioning
> using PRECONDITIONED norm type for convergence test
> PC Object: (Displacement_) 32 MPI processes
> type: gamg
> type is MULTIPLICATIVE, levels=4 cycles=v
> Cycles per PCApply=1
> Using externally compute Galerkin coarse grid matrices
> GAMG specific options
> Threshold for dropping small values in graph on each level = -1. -1. -1. -1.
> Threshold scaling factor for each level not specified = 1.
> Complexity: grid = 1.02128 operator = 1.05534
> Coarse grid solver -- level 0 -------------------------------
> KSP Object: (Displacement_mg_coarse_) 32 MPI processes
> type: preonly
> maximum iterations=10000, initial guess is zero
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (Displacement_mg_coarse_) 32 MPI processes
> type: bjacobi
> number of blocks = 32
> Local solver information for first block is in the following KSP and PC objects on rank 0:
> Use -Displacement_mg_coarse_ksp_view ::ascii_info_detail to display information for all blocks
> KSP Object: (Displacement_mg_coarse_sub_) 1 MPI process
> type: preonly
> maximum iterations=1, initial guess is zero
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (Displacement_mg_coarse_sub_) 1 MPI process
> type: lu
> out-of-place factorization
> tolerance for zero pivot 2.22045e-14
> using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
> matrix ordering: nd
> factor fill ratio given 5., needed 1.08081
> Factored matrix follows:
> Mat Object: (Displacement_mg_coarse_sub_) 1 MPI process
> type: seqaij
> rows=20, cols=20
> package used to perform factorization: petsc
> total: nonzeros=214, allocated nonzeros=214
> using I-node routines: found 8 nodes, limit used is 5
> linear system matrix = precond matrix:
> Mat Object: (Displacement_mg_coarse_sub_) 1 MPI process
> type: seqaij
> rows=20, cols=20
> total: nonzeros=198, allocated nonzeros=198
> total number of mallocs used during MatSetValues calls=0
> using I-node routines: found 13 nodes, limit used is 5
> linear system matrix = precond matrix:
> Mat Object: 32 MPI processes
> type: mpiaij
> rows=20, cols=20
> total: nonzeros=198, allocated nonzeros=198
> total number of mallocs used during MatSetValues calls=0
> using I-node (on process 0) routines: found 13 nodes, limit used is 5
> Down solver (pre-smoother) on level 1 -------------------------------
> KSP Object: (Displacement_mg_levels_1_) 32 MPI processes
> type: chebyshev
> eigenvalue targets used: min 0.81922, max 9.01143
> eigenvalues estimated via gmres: min 0.186278, max 8.1922
> eigenvalues estimated using gmres with transform: [0. 0.1; 0. 1.1]
> KSP Object: (Displacement_mg_levels_1_esteig_) 32 MPI processes
> type: gmres
> restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
> happy breakdown tolerance 1e-30
> maximum iterations=10, initial guess is zero
> tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
> left preconditioning
> using PRECONDITIONED norm type for convergence test
> estimating eigenvalues using noisy right hand side
> maximum iterations=2, nonzero initial guess
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (Displacement_mg_levels_1_) 32 MPI processes
> type: jacobi
> type DIAGONAL
> linear system matrix = precond matrix:
> Mat Object: 32 MPI processes
> type: mpiaij
> rows=799, cols=799
> total: nonzeros=83159, allocated nonzeros=83159
> total number of mallocs used during MatSetValues calls=0
> using I-node (on process 0) routines: found 23 nodes, limit used is 5
> Up solver (post-smoother) same as down solver (pre-smoother)
> Down solver (pre-smoother) on level 2 -------------------------------
> KSP Object: (Displacement_mg_levels_2_) 32 MPI processes
> type: chebyshev
> eigenvalue targets used: min 1.16291, max 12.792
> eigenvalues estimated via gmres: min 0.27961, max 11.6291
> eigenvalues estimated using gmres with transform: [0. 0.1; 0. 1.1]
> KSP Object: (Displacement_mg_levels_2_esteig_) 32 MPI processes
> type: gmres
> restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
> happy breakdown tolerance 1e-30
> maximum iterations=10, initial guess is zero
> tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
> left preconditioning
> using PRECONDITIONED norm type for convergence test
> estimating eigenvalues using noisy right hand side
> maximum iterations=2, nonzero initial guess
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (Displacement_mg_levels_2_) 32 MPI processes
> type: jacobi
> type DIAGONAL
> linear system matrix = precond matrix:
> Mat Object: 32 MPI processes
> type: mpiaij
> rows=45721, cols=45721
> total: nonzeros=9969661, allocated nonzeros=9969661
> total number of mallocs used during MatSetValues calls=0
> using nonscalable MatPtAP() implementation
> not using I-node (on process 0) routines
> Up solver (post-smoother) same as down solver (pre-smoother)
> Down solver (pre-smoother) on level 3 -------------------------------
> KSP Object: (Displacement_mg_levels_3_) 32 MPI processes
> type: chebyshev
> eigenvalue targets used: min 0.281318, max 3.0945
> eigenvalues estimated via gmres: min 0.0522027, max 2.81318
> eigenvalues estimated using gmres with transform: [0. 0.1; 0. 1.1]
> KSP Object: (Displacement_mg_levels_3_esteig_) 32 MPI processes
> type: gmres
> restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
> happy breakdown tolerance 1e-30
> maximum iterations=10, initial guess is zero
> tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
> left preconditioning
> using PRECONDITIONED norm type for convergence test
> estimating eigenvalues using noisy right hand side
> maximum iterations=2, nonzero initial guess
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (Displacement_mg_levels_3_) 32 MPI processes
> type: jacobi
> type DIAGONAL
> linear system matrix = precond matrix:
> Mat Object: (Displacement_) 32 MPI processes
> type: mpiaij
> rows=2186610, cols=2186610, bs=3
> total: nonzeros=181659996, allocated nonzeros=181659996
> total number of mallocs used during MatSetValues calls=0
> has attached near null space
> using I-node (on process 0) routines: found 21368 nodes, limit used is 5
> Up solver (post-smoother) same as down solver (pre-smoother)
> linear system matrix = precond matrix:
> Mat Object: (Displacement_) 32 MPI processes
> type: mpiaij
> rows=2186610, cols=2186610, bs=3
> total: nonzeros=181659996, allocated nonzeros=181659996
> total number of mallocs used during MatSetValues calls=0
> has attached near null space
> using I-node (on process 0) routines: found 21368 nodes, limit used is 5
> cell set 1 elastic energy: 9.32425E-02 work: 1.86485E-01 total: -9.32425E-02