[petsc-users] GAMG and linearized elasticity
Mark Adams
mfadams at lbl.gov
Wed Dec 14 10:07:23 CST 2022
On Wed, Dec 14, 2022 at 9:38 AM Blaise Bourdin <bourdin at mcmaster.ca> wrote:
> Hi Jed,
>
> Thanks for pointing us in the right direction.
> We were using MatNullSpaceCreateRigidBody, which does not know anything
> about the discretization, hence our issues with quadratic elements.
> DMPlexCreateRigidBody does not work out of the box for us since we do not
> use PetscFE at the moment, but we can easily build the near null space by
> hand.
>
Oh, MatNullSpaceCreateRigidBody should work because it takes the
coordinates. You just need to get the coordinates for all the
points/vertices.
Or you can build it by hand.
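For the first option, roughly something like this (an untested sketch in the
C API; it assumes you already have a Vec "coords" holding one (x,y,z) triple
per displacement node, including the P2 edge nodes, laid out like the
solution vector, and the helper name is made up):

#include <petscmat.h>

/* Attach the 6 rigid-body modes (3D) as a near null space on the operator A.
   "coords" must hold one (x,y,z) triple per displacement node, in the same
   ordering/layout as the solution vector, including the P2 edge nodes.      */
static PetscErrorCode AttachRigidBodyModes(Mat A, Vec coords)
{
  MatNullSpace nsp;

  PetscFunctionBeginUser;
  PetscCall(MatNullSpaceCreateRigidBody(coords, &nsp));
  PetscCall(MatSetNearNullSpace(A, nsp)); /* GAMG reads this when building the prolongator */
  PetscCall(MatNullSpaceDestroy(&nsp));
  PetscFunctionReturn(0);
}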
I don't know if DMPlexCreateRigidBody does the right thing. That would
take a little code and I'm not sure (Matt) did it (I kind of doubt it). It
should error out if not, but you don't use it anyway.
Did you call MatNullSpaceCreateRigidBody with a vector of coordinates that
only has the corner points? (In that case it should have thrown an error.)
>
> FWIW, removing the wrong null space brought the GAMG iteration count down
> to something more reasonable
>
Good. I'm not sure what happened, but MatNullSpaceCreateRigidBody should
work unless you have a non-standard element, and you can always test it by
calling MatMult on the RBMs and verifying that they form a null space, away
from the BCs.
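For example, something like this (a rough sketch, helper name made up; with
Dirichlet BCs the products will be nonzero in the constrained rows, so look
at the entries away from those rows rather than expecting the whole norm to
vanish):

#include <petscmat.h>

/* Apply the operator to each near-null-space vector and report ||A b_i||.
   Away from the Dirichlet rows these products should be (numerically) zero. */
static PetscErrorCode CheckRigidBodyModes(Mat A)
{
  MatNullSpace nsp;
  PetscBool    has_const;
  PetscInt     i, n;
  const Vec   *b;
  Vec          Ab;
  PetscReal    nrm;

  PetscFunctionBeginUser;
  PetscCall(MatGetNearNullSpace(A, &nsp));
  if (!nsp) PetscFunctionReturn(0); /* nothing attached yet */
  PetscCall(MatNullSpaceGetVecs(nsp, &has_const, &n, &b));
  PetscCall(MatCreateVecs(A, NULL, &Ab));
  for (i = 0; i < n; i++) {
    PetscCall(MatMult(A, b[i], Ab));
    PetscCall(VecNorm(Ab, NORM_2, &nrm));
    PetscCall(PetscPrintf(PETSC_COMM_WORLD, "|| A * rbm[%d] || = %g\n", (int)i, (double)nrm));
  }
  PetscCall(VecDestroy(&Ab));
  PetscFunctionReturn(0);
}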
>
> Thanks a million,
> Blaise
>
>
> On Dec 13, 2022, at 10:37 PM, Jed Brown <jed at jedbrown.org> wrote:
>
> Do you have slip/symmetry boundary conditions, where some components are
> constrained? In that case, there is no uniform block size and I think
> you'll need DMPlexCreateRigidBody() and MatSetNearNullSpace().
>
> The PCSetCoordinates() code won't work for non-constant block size.
>
> -pc_type gamg should work okay out of the box for elasticity. For hypre,
> I've had good luck with this options suite, which also runs on GPU.
>
> -pc_type hypre -pc_hypre_boomeramg_coarsen_type pmis
> -pc_hypre_boomeramg_interp_type ext+i -pc_hypre_boomeramg_no_CF
> -pc_hypre_boomeramg_P_max 6 -pc_hypre_boomeramg_relax_type_down Chebyshev
> -pc_hypre_boomeramg_relax_type_up Chebyshev
> -pc_hypre_boomeramg_strong_threshold 0.5
>
> Blaise Bourdin <bourdin at mcmaster.ca> writes:
>
> Hi,
>
> I am getting close to finishing the port of a code from petsc 3.3 / sieve
> to main / dmplex, but am now encountering difficulties.
> I am reasonably sure that the Jacobian and residual are correct. The codes
> handle boundary conditions differently (MatZeroRowsColumns vs dmplex
> constraints) so it is not trivial to compare them. Running with -snes_type
> ksponly and -pc_type jacobi or hypre gives me the same results in roughly
> the same number of iterations.
>
> In my old code, gamg would work out of the box. When using petsc-main,
> -pc_type gamg -pc_gamg_type agg works for _some_ problems using P1-Lagrange
> elements, but never for P2-Lagrange. The typical error message is in
> gamg_agg.txt.
>
> When using -pc_gamg_type classical, a problem where the KSP would converge
> in 47 iterations in 3.3 now takes 1400. ksp_view_3.3.txt and
> ksp_view_main.txt show the output of -ksp_view for both versions. I don't
> notice anything obvious.
>
> Strangely, removing the call to PCSetCoordinates does not have any impact
> on the
> convergence.
>
> I am sure that I am missing something, or not passing the right options.
> What’s a good
> starting point for 3D elasticity?
> Regards,
> Blaise
>
> —
> Canada Research Chair in Mathematical and Computational Aspects of Solid
> Mechanics
> (Tier 1)
> Professor, Department of Mathematics & Statistics
> Hamilton Hall room 409A, McMaster University
> 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada
> https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243
> [0]PETSC ERROR: --------------------- Error Message
> --------------------------------------------------------------
> [0]PETSC ERROR: Petsc has generated inconsistent data
> [0]PETSC ERROR: Computed maximum singular value as zero
> [0]PETSC ERROR: WARNING! There are option(s) set that were not used! Could
> be the program crashed before they were used or a spelling mistake, etc!
> [0]PETSC ERROR: Option left: name:-displacement_ksp_converged_reason
> value: ascii source: file
> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.18.2-341-g16200351da0
> GIT Date: 2022-12-12 23:42:20 +0000
> [0]PETSC ERROR:
> /home/bourdinb/Development/mef90/mef90-dmplex/bbserv-gcc11.2.1-mvapich2-2.3.7-O/bin/ThermoElasticity
> on a bbserv-gcc11.2.1-mvapich2-2.3.7-O named bb01 by bourdinb Tue Dec 13
> 17:02:19 2022
> [0]PETSC ERROR: Configure options --CFLAGS=-Wunused
> --FFLAGS="-ffree-line-length-none -fallow-argument-mismatch -Wunused"
> --COPTFLAGS="-O2 -march=znver2" --CXXOPTFLAGS="-O2 -march=znver2"
> --FOPTFLAGS="-O2 -march=znver2" --download-chaco=1 --download-exodusii=1
> --download-fblaslapack=1 --download-hdf5=1 --download-hypre=1
> --download-metis=1 --download-ml=1 --download-mumps=1 --download-netcdf=1
> --download-p4est=1 --download-parmetis=1 --download-pnetcdf=1
> --download-scalapack=1 --download-sowing=1
> --download-sowing-cc=/opt/rh/devtoolset-9/root/usr/bin/gcc
> --download-sowing-cxx=/opt/rh/devtoolset-9/root/usr/bin/g++
> --download-sowing-cpp=/opt/rh/devtoolset-9/root/usr/bin/cpp
> --download-sowing-cxxcpp=/opt/rh/devtoolset-9/root/usr/bin/cpp
> --download-superlu=1 --download-triangle=1 --download-yaml=1
> --download-zlib=1 --with-debugging=0
> --with-mpi-dir=/opt/HPC/mvapich2/2.3.7-gcc11.2.1 --with-pic
> --with-shared-libraries=1 --with-mpiexec=srun --with-x11=0
> [0]PETSC ERROR: #1 PCGAMGOptProlongator_AGG() at
> /1/HPC/petsc/main/src/ksp/pc/impls/gamg/agg.c:779
> [0]PETSC ERROR: #2 PCSetUp_GAMG() at
> /1/HPC/petsc/main/src/ksp/pc/impls/gamg/gamg.c:639
> [0]PETSC ERROR: #3 PCSetUp() at
> /1/HPC/petsc/main/src/ksp/pc/interface/precon.c:994
> [0]PETSC ERROR: #4 KSPSetUp() at
> /1/HPC/petsc/main/src/ksp/ksp/interface/itfunc.c:405
> [0]PETSC ERROR: #5 KSPSolve_Private() at
> /1/HPC/petsc/main/src/ksp/ksp/interface/itfunc.c:824
> [0]PETSC ERROR: #6 KSPSolve() at
> /1/HPC/petsc/main/src/ksp/ksp/interface/itfunc.c:1070
> [0]PETSC ERROR: #7 SNESSolve_KSPONLY() at
> /1/HPC/petsc/main/src/snes/impls/ksponly/ksponly.c:48
> [0]PETSC ERROR: #8 SNESSolve() at
> /1/HPC/petsc/main/src/snes/interface/snes.c:4693
> [0]PETSC ERROR: #9
> /home/bourdinb/Development/mef90/mef90-dmplex/ThermoElasticity/ThermoElasticity.F90:228
> Linear solve converged due to CONVERGED_RTOL iterations 46
> KSP Object:(Disp_) 32 MPI processes
> type: cg
> maximum iterations=10000
> tolerances: relative=1e-05, absolute=1e-08, divergence=1e+10
> left preconditioning
> using nonzero initial guess
> using PRECONDITIONED norm type for convergence test
> PC Object:(Disp_) 32 MPI processes
> type: gamg
> MG: type is MULTIPLICATIVE, levels=4 cycles=v
> Cycles per PCApply=1
> Using Galerkin computed coarse grid matrices
> Coarse grid solver -- level -------------------------------
> KSP Object: (Disp_mg_coarse_) 32 MPI processes
> type: gmres
> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
> GMRES: happy breakdown tolerance 1e-30
> maximum iterations=1, initial guess is zero
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (Disp_mg_coarse_) 32 MPI processes
> type: bjacobi
> block Jacobi: number of blocks = 32
> Local solve info for each block is in the following KSP and PC
> objects:
> [0] number of local blocks = 1, first local block number = 0
> [0] local block number 0
> KSP Object: (Disp_mg_coarse_sub_) 1 MPI processes
> type: preonly
> maximum iterations=10000, initial guess is zero
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (Disp_mg_coarse_sub_) 1 MPI processes
> type: lu
> LU: out-of-place factorization
> tolerance for zero pivot 2.22045e-14
> matrix ordering: nd
> factor fill ratio given 5, needed 1.06061
> Factored matrix follows:
> Matrix Object: 1 MPI processes
> type: seqaij
> rows=54, cols=54, bs=6
> package used to perform factorization: petsc
> total: nonzeros=1260, allocated nonzeros=1260
> total number of mallocs used during MatSetValues calls =0
> using I-node routines: found 16 nodes, limit used is 5
> linear system matrix = precond matrix:
> Matrix Object: 1 MPI processes
> type: seqaij
> rows=54, cols=54, bs=6
> total: nonzeros=1188, allocated nonzeros=1188
> total number of mallocs used during MatSetValues calls =0
> using I-node routines: found 17 nodes, limit used is 5
> - - - - - - - - - - - - - - - - - -
> [1] number of local blocks = 1, first local block number = 1
> [1] local block number 0
> KSP Object: (Disp_mg_coarse_sub_) 1 MPI processes
> type: preonly
> maximum iterations=10000, initial guess is zero
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (Disp_mg_coarse_sub_) 1 MPI processes
> type: lu
> LU: out-of-place factorization
> tolerance for zero pivot 2.22045e-14
> matrix ordering: nd
> factor fill ratio given 5, needed 0
> Factored matrix follows:
> Matrix Object: 1 MPI processes
> type: seqaij
> rows=0, cols=0, bs=6
> package used to perform factorization: petsc
> total: nonzeros=1, allocated nonzeros=1
> total number of mallocs used during MatSetValues calls =0
> not using I-node routines
> linear system matrix = precond matrix:
> Matrix Object: 1 MPI processes
> type: seqaij
> rows=0, cols=0, bs=6
> total: nonzeros=0, allocated nonzeros=0
> total number of mallocs used during MatSetValues calls =0
> not using I-node routines
> - - - - - - - - - - - - - - - - - -
> [2] number of local blocks = 1, first local block number = 2
> [2] local block number 0
> - - - - - - - - - - - - - - - - - -
> [3] number of local blocks = 1, first local block number = 3
> [3] local block number 0
> - - - - - - - - - - - - - - - - - -
> [4] number of local blocks = 1, first local block number = 4
> [4] local block number 0
> - - - - - - - - - - - - - - - - - -
> [5] number of local blocks = 1, first local block number = 5
> [5] local block number 0
> - - - - - - - - - - - - - - - - - -
> [6] number of local blocks = 1, first local block number = 6
> [6] local block number 0
> - - - - - - - - - - - - - - - - - -
> [7] number of local blocks = 1, first local block number = 7
> [7] local block number 0
> - - - - - - - - - - - - - - - - - -
> [8] number of local blocks = 1, first local block number = 8
> [8] local block number 0
> - - - - - - - - - - - - - - - - - -
> [9] number of local blocks = 1, first local block number = 9
> [9] local block number 0
> - - - - - - - - - - - - - - - - - -
> [10] number of local blocks = 1, first local block number = 10
> [10] local block number 0
> - - - - - - - - - - - - - - - - - -
> [11] number of local blocks = 1, first local block number = 11
> [11] local block number 0
> - - - - - - - - - - - - - - - - - -
> [12] number of local blocks = 1, first local block number = 12
> [12] local block number 0
> - - - - - - - - - - - - - - - - - -
> [13] number of local blocks = 1, first local block number = 13
> [13] local block number 0
> - - - - - - - - - - - - - - - - - -
> [14] number of local blocks = 1, first local block number = 14
> [14] local block number 0
> - - - - - - - - - - - - - - - - - -
> [15] number of local blocks = 1, first local block number = 15
> [15] local block number 0
> - - - - - - - - - - - - - - - - - -
> [16] number of local blocks = 1, first local block number = 16
> [16] local block number 0
> - - - - - - - - - - - - - - - - - -
> [17] number of local blocks = 1, first local block number = 17
> [17] local block number 0
> - - - - - - - - - - - - - - - - - -
> [18] number of local blocks = 1, first local block number = 18
> [18] local block number 0
> - - - - - - - - - - - - - - - - - -
> [19] number of local blocks = 1, first local block number = 19
> [19] local block number 0
> - - - - - - - - - - - - - - - - - -
> [20] number of local blocks = 1, first local block number = 20
> [20] local block number 0
> - - - - - - - - - - - - - - - - - -
> [21] number of local blocks = 1, first local block number = 21
> [21] local block number 0
> - - - - - - - - - - - - - - - - - -
> [22] number of local blocks = 1, first local block number = 22
> [22] local block number 0
> - - - - - - - - - - - - - - - - - -
> [23] number of local blocks = 1, first local block number = 23
> [23] local block number 0
> - - - - - - - - - - - - - - - - - -
> [24] number of local blocks = 1, first local block number = 24
> [24] local block number 0
> - - - - - - - - - - - - - - - - - -
> [25] number of local blocks = 1, first local block number = 25
> [25] local block number 0
> - - - - - - - - - - - - - - - - - -
> [26] number of local blocks = 1, first local block number = 26
> [26] local block number 0
> - - - - - - - - - - - - - - - - - -
> [27] number of local blocks = 1, first local block number = 27
> [27] local block number 0
> - - - - - - - - - - - - - - - - - -
> [28] number of local blocks = 1, first local block number = 28
> [28] local block number 0
> - - - - - - - - - - - - - - - - - -
> [29] number of local blocks = 1, first local block number = 29
> [29] local block number 0
> - - - - - - - - - - - - - - - - - -
> [30] number of local blocks = 1, first local block number = 30
> [30] local block number 0
> - - - - - - - - - - - - - - - - - -
> [31] number of local blocks = 1, first local block number = 31
> [31] local block number 0
> - - - - - - - - - - - - - - - - - -
> linear system matrix = precond matrix:
> Matrix Object: 32 MPI processes
> type: mpiaij
> rows=54, cols=54, bs=6
> total: nonzeros=1188, allocated nonzeros=1188
> total number of mallocs used during MatSetValues calls =0
> using I-node (on process 0) routines: found 17 nodes, limit used
> is 5
> Down solver (pre-smoother) on level 1 -------------------------------
> KSP Object: (Disp_mg_levels_1_) 32 MPI processes
> type: chebyshev
> Chebyshev: eigenvalue estimates: min = 0.101023, max = 2.13327
> maximum iterations=2
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using nonzero initial guess
> using NONE norm type for convergence test
> PC Object: (Disp_mg_levels_1_) 32 MPI processes
> type: jacobi
> linear system matrix = precond matrix:
> Matrix Object: 32 MPI processes
> type: mpiaij
> rows=1086, cols=1086, bs=6
> total: nonzeros=67356, allocated nonzeros=67356
> total number of mallocs used during MatSetValues calls =0
> using I-node (on process 0) routines: found 362 nodes, limit used
> is 5
> Up solver (post-smoother) same as down solver (pre-smoother)
> Down solver (pre-smoother) on level 2 -------------------------------
> KSP Object: (Disp_mg_levels_2_) 32 MPI processes
> type: chebyshev
> Chebyshev: eigenvalue estimates: min = 0.0996526, max = 2.29388
> maximum iterations=2
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using nonzero initial guess
> using NONE norm type for convergence test
> PC Object: (Disp_mg_levels_2_) 32 MPI processes
> type: jacobi
> linear system matrix = precond matrix:
> Matrix Object: 32 MPI processes
> type: mpiaij
> rows=23808, cols=23808, bs=6
> total: nonzeros=1976256, allocated nonzeros=1976256
> total number of mallocs used during MatSetValues calls =0
> using I-node (on process 0) routines: found 7936 nodes, limit
> used is 5
> Up solver (post-smoother) same as down solver (pre-smoother)
> Down solver (pre-smoother) on level 3 -------------------------------
> KSP Object: (Disp_mg_levels_3_) 32 MPI processes
> type: chebyshev
> Chebyshev: eigenvalue estimates: min = 0.165968, max = 2.13065
> maximum iterations=2
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> left preconditioning
> using nonzero initial guess
> using NONE norm type for convergence test
> PC Object: (Disp_mg_levels_3_) 32 MPI processes
> type: jacobi
> linear system matrix = precond matrix:
> Matrix Object: (Disp_) 32 MPI processes
> type: mpiaij
> rows=291087, cols=291087
> total: nonzeros=12323691, allocated nonzeros=12336696
> total number of mallocs used during MatSetValues calls =0
> using I-node (on process 0) routines: found 3419 nodes, limit
> used is 5
> Up solver (post-smoother) same as down solver (pre-smoother)
> linear system matrix = precond matrix:
> Matrix Object: (Disp_) 32 MPI processes
> type: mpiaij
> rows=291087, cols=291087
> total: nonzeros=12323691, allocated nonzeros=12336696
> total number of mallocs used during MatSetValues calls =0
> using I-node (on process 0) routines: found 3419 nodes, limit used is
> 5
> SNESConvergedReason returned 5
> KSP Object: (Displacement_) 32 MPI processes
> type: cg
> maximum iterations=10000, nonzero initial guess
> tolerances: relative=1e-05, absolute=1e-08, divergence=1e+10
> left preconditioning
> using PRECONDITIONED norm type for convergence test
> PC Object: (Displacement_) 32 MPI processes
> type: gamg
> type is MULTIPLICATIVE, levels=4 cycles=v
> Cycles per PCApply=1
> Using externally compute Galerkin coarse grid matrices
> GAMG specific options
> Threshold for dropping small values in graph on each level = -1.
> -1. -1. -1.
> Threshold scaling factor for each level not specified = 1.
> Complexity: grid = 1.02128 operator = 1.05534
> Coarse grid solver -- level 0 -------------------------------
> KSP Object: (Displacement_mg_coarse_) 32 MPI processes
> type: preonly
> maximum iterations=10000, initial guess is zero
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (Displacement_mg_coarse_) 32 MPI processes
> type: bjacobi
> number of blocks = 32
> Local solver information for first block is in the following KSP
> and PC objects on rank 0:
> Use -Displacement_mg_coarse_ksp_view ::ascii_info_detail to display
> information for all blocks
> KSP Object: (Displacement_mg_coarse_sub_) 1 MPI process
> type: preonly
> maximum iterations=1, initial guess is zero
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (Displacement_mg_coarse_sub_) 1 MPI process
> type: lu
> out-of-place factorization
> tolerance for zero pivot 2.22045e-14
> using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
> matrix ordering: nd
> factor fill ratio given 5., needed 1.08081
> Factored matrix follows:
> Mat Object: (Displacement_mg_coarse_sub_) 1 MPI process
> type: seqaij
> rows=20, cols=20
> package used to perform factorization: petsc
> total: nonzeros=214, allocated nonzeros=214
> using I-node routines: found 8 nodes, limit used is 5
> linear system matrix = precond matrix:
> Mat Object: (Displacement_mg_coarse_sub_) 1 MPI process
> type: seqaij
> rows=20, cols=20
> total: nonzeros=198, allocated nonzeros=198
> total number of mallocs used during MatSetValues calls=0
> using I-node routines: found 13 nodes, limit used is 5
> linear system matrix = precond matrix:
> Mat Object: 32 MPI processes
> type: mpiaij
> rows=20, cols=20
> total: nonzeros=198, allocated nonzeros=198
> total number of mallocs used during MatSetValues calls=0
> using I-node (on process 0) routines: found 13 nodes, limit used
> is 5
> Down solver (pre-smoother) on level 1 -------------------------------
> KSP Object: (Displacement_mg_levels_1_) 32 MPI processes
> type: chebyshev
> eigenvalue targets used: min 0.81922, max 9.01143
> eigenvalues estimated via gmres: min 0.186278, max 8.1922
> eigenvalues estimated using gmres with transform: [0. 0.1; 0. 1.1]
> KSP Object: (Displacement_mg_levels_1_esteig_) 32 MPI processes
> type: gmres
> restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
> happy breakdown tolerance 1e-30
> maximum iterations=10, initial guess is zero
> tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
> left preconditioning
> using PRECONDITIONED norm type for convergence test
> estimating eigenvalues using noisy right hand side
> maximum iterations=2, nonzero initial guess
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (Displacement_mg_levels_1_) 32 MPI processes
> type: jacobi
> type DIAGONAL
> linear system matrix = precond matrix:
> Mat Object: 32 MPI processes
> type: mpiaij
> rows=799, cols=799
> total: nonzeros=83159, allocated nonzeros=83159
> total number of mallocs used during MatSetValues calls=0
> using I-node (on process 0) routines: found 23 nodes, limit used
> is 5
> Up solver (post-smoother) same as down solver (pre-smoother)
> Down solver (pre-smoother) on level 2 -------------------------------
> KSP Object: (Displacement_mg_levels_2_) 32 MPI processes
> type: chebyshev
> eigenvalue targets used: min 1.16291, max 12.792
> eigenvalues estimated via gmres: min 0.27961, max 11.6291
> eigenvalues estimated using gmres with transform: [0. 0.1; 0. 1.1]
> KSP Object: (Displacement_mg_levels_2_esteig_) 32 MPI processes
> type: gmres
> restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
> happy breakdown tolerance 1e-30
> maximum iterations=10, initial guess is zero
> tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
> left preconditioning
> using PRECONDITIONED norm type for convergence test
> estimating eigenvalues using noisy right hand side
> maximum iterations=2, nonzero initial guess
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (Displacement_mg_levels_2_) 32 MPI processes
> type: jacobi
> type DIAGONAL
> linear system matrix = precond matrix:
> Mat Object: 32 MPI processes
> type: mpiaij
> rows=45721, cols=45721
> total: nonzeros=9969661, allocated nonzeros=9969661
> total number of mallocs used during MatSetValues calls=0
> using nonscalable MatPtAP() implementation
> not using I-node (on process 0) routines
> Up solver (post-smoother) same as down solver (pre-smoother)
> Down solver (pre-smoother) on level 3 -------------------------------
> KSP Object: (Displacement_mg_levels_3_) 32 MPI processes
> type: chebyshev
> eigenvalue targets used: min 0.281318, max 3.0945
> eigenvalues estimated via gmres: min 0.0522027, max 2.81318
> eigenvalues estimated using gmres with transform: [0. 0.1; 0. 1.1]
> KSP Object: (Displacement_mg_levels_3_esteig_) 32 MPI processes
> type: gmres
> restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
> happy breakdown tolerance 1e-30
> maximum iterations=10, initial guess is zero
> tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
> left preconditioning
> using PRECONDITIONED norm type for convergence test
> estimating eigenvalues using noisy right hand side
> maximum iterations=2, nonzero initial guess
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
> left preconditioning
> using NONE norm type for convergence test
> PC Object: (Displacement_mg_levels_3_) 32 MPI processes
> type: jacobi
> type DIAGONAL
> linear system matrix = precond matrix:
> Mat Object: (Displacement_) 32 MPI processes
> type: mpiaij
> rows=2186610, cols=2186610, bs=3
> total: nonzeros=181659996, allocated nonzeros=181659996
> total number of mallocs used during MatSetValues calls=0
> has attached near null space
> using I-node (on process 0) routines: found 21368 nodes, limit
> used is 5
> Up solver (post-smoother) same as down solver (pre-smoother)
> linear system matrix = precond matrix:
> Mat Object: (Displacement_) 32 MPI processes
> type: mpiaij
> rows=2186610, cols=2186610, bs=3
> total: nonzeros=181659996, allocated nonzeros=181659996
> total number of mallocs used during MatSetValues calls=0
> has attached near null space
> using I-node (on process 0) routines: found 21368 nodes, limit used
> is 5
> cell set 1 elastic energy: 9.32425E-02 work: 1.86485E-01 total:
> -9.32425E-02
>
>
> —
> Canada Research Chair in Mathematical and Computational Aspects of Solid
> Mechanics (Tier 1)
> Professor, Department of Mathematics & Statistics
> Hamilton Hall room 409A, McMaster University
> 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada
> https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243
>
>