KSP Object: 16 MPI processes
  type: cg
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object: 16 MPI processes
  type: gamg
    type is MULTIPLICATIVE, levels=6 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level =  0.  0.  0.  0.  0.  0.
        Threshold scaling factor for each level not specified = 0.
        AGG specific options
          Symmetric graph false
          Number of levels to square graph 1
          Number smoothing steps 1
        Complexity:    grid = 1.6588
  Coarse grid solver -- level -------------------------------
    KSP Object: (mg_coarse_) 16 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_coarse_) 16 MPI processes
      type: bjacobi
        number of blocks = 16
        Local solver information for first block is in the following KSP and PC objects on rank 0:
        Use -mg_coarse_ksp_view ::ascii_info_detail to display information for all blocks
      KSP Object: (mg_coarse_sub_) 1 MPI processes
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using DEFAULT norm type for convergence test
      PC Object: (mg_coarse_sub_) 1 MPI processes
        type: lu
          PC has not been set up so information may be incomplete
          out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: nd
          matrix solver type: (null)
          matrix not yet factored; no additional information available
        linear system matrix = precond matrix:
        Mat Object: (mg_coarse_sub_) 1 MPI processes
          type: seqaijcusparse
          rows=8, cols=8
          total: nonzeros=64, allocated nonzeros=64
          total number of mallocs used during MatSetValues calls=0
            not using I-node routines
      linear system matrix = precond matrix:
      Mat Object: 16 MPI processes
        type: mpiaijcusparse
        rows=8, cols=8
        total: nonzeros=64, allocated nonzeros=64
        total number of mallocs used during MatSetValues calls=0
          not using I-node (on process 0) routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 16 MPI processes
      type: chebyshev
        eigenvalue estimates used:  min = 0.157166, max = 1.72883
        eigenvalues estimate via cg min 0.584727, max 1.57166
        eigenvalues estimated using cg with translations  [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_1_esteig_) 16 MPI processes
          type: cg
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=2, nonzero initial guess
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 16 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 16 MPI processes
        type: mpiaijcusparse
        rows=368, cols=368
        total: nonzeros=51304, allocated nonzeros=51304
        total number of mallocs used during MatSetValues calls=0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object: (mg_levels_2_) 16 MPI processes
      type: chebyshev
        eigenvalue estimates used:  min = 0.155447, max = 1.70991
        eigenvalues estimate via cg min 0.0886846, max 1.55447
        eigenvalues estimated using cg with translations  [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_2_esteig_) 16 MPI processes
          type: cg
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=2, nonzero initial guess
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_2_) 16 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 16 MPI processes
        type: mpiaijcusparse
        rows=17989, cols=17989
        total: nonzeros=4028117, allocated nonzeros=4028117
        total number of mallocs used during MatSetValues calls=0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object: (mg_levels_3_) 16 MPI processes
      type: chebyshev
        eigenvalue estimates used:  min = 0.165376, max = 1.81914
        eigenvalues estimate via cg min 0.0451531, max 1.65376
        eigenvalues estimated using cg with translations  [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_3_esteig_) 16 MPI processes
          type: cg
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=2, nonzero initial guess
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_3_) 16 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 16 MPI processes
        type: mpiaijcusparse
        rows=643139, cols=643139
        total: nonzeros=88765895, allocated nonzeros=88765895
        total number of mallocs used during MatSetValues calls=0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 4 -------------------------------
    KSP Object: (mg_levels_4_) 16 MPI processes
      type: chebyshev
        eigenvalue estimates used:  min = 0.152016, max = 1.67217
        eigenvalues estimate via cg min 0.0374355, max 1.52016
        eigenvalues estimated using cg with translations  [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_4_esteig_) 16 MPI processes
          type: cg
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=2, nonzero initial guess
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_4_) 16 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 16 MPI processes
        type: mpiaijcusparse
        rows=6549734, cols=6549734
        total: nonzeros=215858426, allocated nonzeros=215858426
        total number of mallocs used during MatSetValues calls=0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 5 -------------------------------
    KSP Object: (mg_levels_5_) 16 MPI processes
      type: chebyshev
        eigenvalue estimates used:  min = 0.195762, max = 2.15339
        eigenvalues estimate via cg min 0.0424176, max 1.95762
        eigenvalues estimated using cg with translations  [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_5_esteig_) 16 MPI processes
          type: cg
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=2, nonzero initial guess
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_5_) 16 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 16 MPI processes
        type: mpiaijcusparse
        rows=67108864, cols=67108864
        total: nonzeros=468582400, allocated nonzeros=468582400
        total number of mallocs used during MatSetValues calls=0
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 16 MPI processes
    type: mpiaijcusparse
    rows=67108864, cols=67108864
    total: nonzeros=468582400, allocated nonzeros=468582400
    total number of mallocs used during MatSetValues calls=0
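The exact command line for this run is not recorded in the log above. For orientation, a configuration like the one shown (CG with the unpreconditioned norm, a 6-level smoothed-aggregation GAMG hierarchy with Chebyshev/Jacobi smoothers, block-Jacobi/LU on the 8-row coarse grid, and cuSPARSE matrices on every level) is normally requested purely through runtime options. An illustrative invocation, assuming the executable picks up its Mat/Vec types from a DM and the job is launched under SLURM, might look roughly like:

  srun -n 16 ./poisson3d -ksp_type cg -ksp_norm_type unpreconditioned \
       -pc_type gamg -pc_gamg_agg_nsmooths 1 \
       -mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi \
       -dm_mat_type aijcusparse -dm_vec_type cuda \
       -ksp_view -log_view

All of these are standard PETSc options; the launcher and the DM-based type options are assumptions about how poisson3d is set up, not something the log confirms.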
------------------------------------------------------------------ PETSc Performance Summary: ------------------------------------------------------------------

/global/homes/s/sajid/packages/aclatfd/3D/poisson3d on a named nid001081 with 16 processors, by sajid Wed Feb 9 19:52:03 2022
Using Petsc Development GIT revision: f351d5494b5462f62c419e00645ac2e477b88cae  GIT Date: 2022-02-08 15:08:19 +0000

                         Max       Max/Min     Avg       Total
Time (sec):           4.740e+01     1.000   4.740e+01
Objects:              6.780e+02     1.000   6.780e+02
Flop:                 6.628e+09     1.020   6.530e+09  1.045e+11
Flop/sec:             1.398e+08     1.020   1.378e+08  2.204e+09
MPI Messages:         1.515e+03     2.154   1.184e+03  1.894e+04
MPI Message Lengths:  1.489e+08     1.494   1.152e+05  2.182e+09
MPI Reductions:       8.620e+02     1.000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flop
                            and VecAXPY() for complex vectors of length N --> 8N flop

Summary of Stages:   ----- Time ------  ----- Flop ------  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total    Count   %Total     Avg         %Total    Count   %Total
 0:      Main Stage: 4.7396e+01 100.0%  1.0380e+11  99.4%  1.879e+04  99.2%  1.161e+05      100.0%  7.690e+02  89.2%
 1:    linear-solve: 4.9210e-03   0.0%  6.7109e+08   0.6%  1.500e+02   0.8%  4.000e+00        0.0%  7.500e+01   8.7%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
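The "linear-solve" stage above is a user-defined logging stage, split out from the default Main Stage by bracketing the solve with PetscLogStageRegister()/PetscLogStagePush()/PetscLogStagePop(), as the legend below notes. The following self-contained sketch is not the poisson3d source (which is not shown here); it only illustrates, on a small 1D Laplacian stand-in, how a solve is wrapped in such a stage so that -log_view reports it separately. PetscCall() assumes a recent PETSc; older releases would use ierr = ...; CHKERRQ(ierr); instead.

#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat           A;
  Vec           x, b;
  KSP           ksp;
  PetscLogStage stage;
  PetscInt      i, Istart, Iend, n = 1000;   /* illustrative problem size */

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* Assemble a small 1D Laplacian as a stand-in for the real Poisson operator */
  PetscCall(MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, n, n, 3, NULL, 1, NULL, &A));
  PetscCall(MatGetOwnershipRange(A, &Istart, &Iend));
  for (i = Istart; i < Iend; i++) {
    if (i > 0)     PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
    if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
    PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatCreateVecs(A, &x, &b));
  PetscCall(VecSet(b, 1.0));

  /* CG outer solver; preconditioner and tolerances can be overridden from the options database */
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetType(ksp, KSPCG));
  PetscCall(KSPSetFromOptions(ksp));

  /* Everything between Push and Pop is charged to the "linear-solve" stage in -log_view */
  PetscCall(PetscLogStageRegister("linear-solve", &stage));
  PetscCall(PetscLogStagePush(stage));
  PetscCall(KSPSolve(ksp, b, x));
  PetscCall(PetscLogStagePop());

  PetscCall(KSPDestroy(&ksp));
  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&b));
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}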
Phase summary info:
   Count: number of times phase was executed
   Time and Flop: Max - maximum over all processors
                  Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   AvgLen: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flop in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
   GPU Mflop/s: 10e-6 * (sum of flop on GPU over all processors)/(max GPU time over all processors)
   CpuToGpu Count: total number of CPU to GPU copies per processor
   CpuToGpu Size (Mbytes): 10e-6 * (total size of CPU to GPU copies per processor)
   GpuToCpu Count: total number of GPU to CPU copies per processor
   GpuToCpu Size (Mbytes): 10e-6 * (total size of GPU to CPU copies per processor)
   GPU %F: percent flops on GPU in this event
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ----  Total    GPU    - CpuToGpu -   - GpuToCpu - GPU
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   AvgLen  Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s Mflop/s Count   Size   Count   Size  %F
---------------------------------------------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided      75 1.0 6.3336e+00 7.9 0.00e+00 0.0 2.5e+03 4.0e+00 7.5e+01 10 0 13 0 9 10 0 13 0 10 0 0 0 0.00e+00 0 0.00e+00 0
BuildTwoSidedF     32 1.0 6.1561e+00 10.2 0.00e+00 0.0 4.3e+02 3.8e+05 3.2e+01 10 0 2 8 4 10 0 2 8 4 0 0 0 0.00e+00 0 0.00e+00 0
MatMult            100 1.0 1.7308e-01 1.7 1.88e+09 1.0 6.5e+03 5.8e+04 5.0e+00 0 28 34 17 1 0 29 35 17 1 171044 1452688 8 3.69e+02 0 0.00e+00 100
MatConvert         15 1.0 2.3006e-01 1.8 0.00e+00 0.0 5.9e+02 1.6e+04 5.0e+00 0 0 3 0 1 0 0 3 0 1 0 0 0 0.00e+00 6 1.54e+02 0
MatScale           15 1.0 3.8301e-01 1.2 1.37e+08 1.0 3.0e+02 6.2e+04 0.0e+00 1 2 2 1 0 1 2 2 1 0 5632 913853 20 1.88e+02 19 1.89e+02 14
MatAssemblyBegin   43 1.1 6.1775e+00 9.8 0.00e+00 0.0 4.3e+02 3.8e+05 1.7e+01 10 0 2 8 2 10 0 2 8 2 0 0 0 0.00e+00 0 0.00e+00 0
MatAssemblyEnd     43 1.1 2.1720e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 6.4e+01 4 0 0 0 7 4 0 0 0 8 0 0 0 0.00e+00 0 0.00e+00 0
MatCreateSubMat    4 1.0 2.8734e-02 1.0 0.00e+00 0.0 6.5e+01 5.2e+03 5.6e+01 0 0 0 0 6 0 0 0 0 7 0 0 0 0.00e+00 4 2.57e-02 0
MatCoarsen         5 1.0 6.1405e-01 1.1 0.00e+00 0.0 2.7e+03 3.5e+04 2.8e+01 1 0 14 4 3 1 0 14 4 4 0 0 0 0.00e+00 0 0.00e+00 0
MatView            8 1.1 1.1459e-01 406.7 0.00e+00 0.0 0.0e+00 0.0e+00 7.0e+00 0 0 0 0 1 0 0 0 0 1 0 0 0 0.00e+00 0 0.00e+00 0
MatAXPY            5 1.0 1.0937e+00 1.0 4.66e+06 1.0 0.0e+00 0.0e+00 5.0e+00 2 0 0 0 1 2 0 0 0 1 68 0 0 0.00e+00 9 1.51e+02 0
MatMatMultSym      5 1.0 9.9417e-01 1.0 9.88e+07 1.0 8.9e+02 4.2e+04 3.0e+01 2 1 5 2 3 2 1 5 2 4 1564 12943 44 6.45e+02 29 2.28e+02 100
MatMatMultNum      5 1.0 1.7716e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 3 0.00e+00 0 0.00e+00 0
MatPtAPSymbolic    5 1.0 2.1850e+00 1.0 1.25e+09 1.0 5.0e+03 2.1e+05 4.0e+01 5 19 27 49 5 5 19 27 49 5 8970 39867 49 1.19e+03 39 5.02e+02 100
MatPtAPNumeric     5 1.0 1.6424e-01 2.1 1.24e+09 1.0 3.2e+02 5.5e+05 0.0e+00 0 18 2 8 0 0 19 2 8 0 117323 1552311 29 9.62e+01 0 0.00e+00 100
MatTrnMatMultSym   1 1.0 1.1955e+01 1.0 0.00e+00 0.0 2.2e+02 9.6e+05 1.2e+01 25 0 1 10 1 25 0 1 10 2 0 0 0 0.00e+00 0 0.00e+00 0
MatGetLocalMat     6 1.0 4.7045e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 0 12 1.69e+02 5 9.39e+01 0
MatGetBrAoCol      10 1.0 1.2991e-01 1.3 0.00e+00 0.0 1.8e+03 1.3e+05 0.0e+00 0 0 9 10 0 0 0 9 10 0 0 0 0 0.00e+00 0 0.00e+00 0
MatCUSPARSCopyTo   74 1.1 2.0217e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 72 1.65e+03 0 0.00e+00 0
MatCUSPARSCopyFr   30 1.2 4.0094e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 28 4.56e+02 0
MatCUSPARSGenT     10 1.1 3.8009e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
MatSetPreallCOO    10 1.0 5.2247e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+01 1 0 0 0 3 1 0 0 0 4 0 0 48 1.12e+03 28 2.02e+02 0
MatSetValuesCOO    10 1.0 3.0277e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
DMCreateMat        1 1.0 4.2950e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 7.0e+00 9 0 0 0 1 9 0 0 0 1 0 0 0 0.00e+00 0 0.00e+00 0
SFSetGraph         60 1.0 2.2032e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
SFSetUp            43 1.0 5.3949e-01 1.2 0.00e+00 0.0 4.5e+03 7.0e+04 4.3e+01 1 0 24 15 5 1 0 24 15 6 0 0 0 0.00e+00 0 0.00e+00 0
SFBcastBegin       38 1.0 1.5162e-02 1.7 0.00e+00 0.0 2.3e+03 7.1e+04 0.0e+00 0 0 12 8 0 0 0 12 8 0 0 0 0 0.00e+00 0 0.00e+00 0
SFBcastEnd         38 1.0 7.1520e-02 5.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
SFReduceBegin      25 1.0 1.5348e-01 5.6 0.00e+00 0.0 1.6e+03 3.3e+05 0.0e+00 0 0 8 24 0 0 0 8 24 0 0 0 4 5.50e+00 0 0.00e+00 0
SFReduceEnd        25 1.0 2.3236e-01 3.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 4 5.50e+00 0 0.00e+00 0
SFFetchOpBegin     5 1.0 1.2860e-02 1.5 0.00e+00 0.0 3.2e+02 2.8e+05 0.0e+00 0 0 2 4 0 0 0 2 4 0 0 0 0 0.00e+00 0 0.00e+00 0
SFFetchOpEnd       5 1.0 6.2515e-02 2.7 0.00e+00 0.0 3.2e+02 2.8e+05 0.0e+00 0 0 2 4 0 0 0 2 4 0 0 0 0 0.00e+00 0 0.00e+00 0
SFPack             190 1.0 6.6891e-02 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 13 5.93e+00 0 0.00e+00 0
SFUnpack           195 1.0 5.8677e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 4 5.50e+00 0 0.00e+00 0
VecView            2 1.0 5.4783e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 0 0 0.00e+00 2 6.71e+01 0
VecMDot            50 1.0 8.8710e-02 8.4 5.13e+08 1.0 0.0e+00 0.0e+00 5.0e+01 0 8 0 0 6 0 8 0 0 7 92162 1278431 0 0.00e+00 0 0.00e+00 100
VecTDot            105 1.0 9.8136e-03 1.4 1.96e+08 1.0 0.0e+00 0.0e+00 1.0e+02 0 3 0 0 12 0 3 0 0 14 318072 828521 0 0.00e+00 0 0.00e+00 100
VecNorm            111 1.0 3.0824e-02 1.9 2.13e+08 1.0 0.0e+00 0.0e+00 1.1e+02 0 3 0 0 13 0 3 0 0 14 110444 565363 0 0.00e+00 0 0.00e+00 100
VecScale           59 1.0 3.5291e-03 1.2 6.80e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 307718 979539 1 3.36e+01 0 0.00e+00 100
VecCopy            16 1.0 3.0592e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
VecSet             195 1.0 5.9674e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
VecAXPY            105 1.0 3.2240e-03 1.1 1.96e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 3 0 0 0 0 3 0 0 0 968181 1444514 0 0.00e+00 0 0.00e+00 100
VecAYPX            45 1.0 1.9336e-03 1.1 8.39e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 691864 887016 0 0.00e+00 0 0.00e+00 100
VecMAXPY           55 1.0 6.6952e-03 1.0 6.06e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 9 0 0 0 0 9 0 0 0 1443058 1564332 0 0.00e+00 0 0.00e+00 100
VecAssemblyBegin   17 1.0 2.7902e-02 4.8 0.00e+00 0.0 0.0e+00 0.0e+00 1.5e+01 0 0 0 0 2 0 0 0 0 2 0 0 0 0.00e+00 0 0.00e+00 0
VecAssemblyEnd     17 1.0 7.7858e-05 2.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
VecPointwiseMult   110 1.0 8.3741e-03 1.0 1.03e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 0 2 0 0 0 195249 551509 15 1.11e+02 0 0.00e+00 100
VecScatterBegin    122 1.0 2.4110e-01 3.2 0.00e+00 0.0 8.6e+03 6.8e+04 1.3e+01 0 0 45 27 2 0 0 46 27 2 0 0 14 3.76e+01 0 0.00e+00 0
VecScatterEnd      122 1.0 1.2922e-01 5.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
VecSetRandom       5 1.0 1.8475e-03 4.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0
VecNormalize       55 1.0 1.6486e-02 1.8 1.54e+08 1.0 0.0e+00 0.0e+00 5.5e+01 0 2 0 0 6 0 2 0 0 7 148766 623719 0 0.00e+00 0 0.00e+00 100
VecCUDACopyTo      41 1.0 1.3105e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 41 3.31e+02 0 0.00e+00 0
VecCUDACopyFrom    27 1.0 9.6774e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00e+00 27 2.17e+02 0
KSPSetUp           12 1.0 3.6088e-01 1.0 1.56e+09 1.0 3.0e+03 6.2e+04 1.8e+02 1 24 16 8 21 1 24 16 8 24 68409 1148779 15 1.11e+02 5 3.72e+01 100
KSPSolve           1 1.0 8.7477e-04 3.4 8.39e+06 1.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 0 153433 651188 0 0.00e+00 0 0.00e+00 100
KSPGMRESOrthog     50 1.0 9.4482e-02 5.8 1.03e+09 1.0 0.0e+00 0.0e+00 5.0e+01 0 16 0 0 6 0 16 0 0 7 173059 1406643 0 0.00e+00 0 0.00e+00 100
PCGAMGGraph_AGG    5 1.0 7.3602e+00 1.0 9.88e+07 1.0 8.9e+02 3.1e+04 4.5e+01 15 1 5 1 5 15 1 5 1 6 211 0 14 7.45e+01 16 1.93e+02 0
PCGAMGCoarse_AGG   5 1.0 1.3900e+01 1.0 0.00e+00 0.0 3.4e+03 1.2e+05 5.1e+01 29 0 18 19 6 29 0 18 19 7 0 0 0 0.00e+00 0 0.00e+00 0
PCGAMGProl_AGG     5 1.0 6.9410e+00 1.0 0.00e+00 0.0 1.5e+03 4.7e+04 7.9e+01 15 0 8 3 9 15 0 8 3 10 0 0 0 0.00e+00 0 0.00e+00 0
PCGAMGPOpt_AGG     5 1.0 9.0988e+00 1.0 2.42e+09 1.0 4.4e+03 5.2e+04 1.8e+02 19 37 23 11 21 19 37 24 11 24 4208 253450 80 1.28e+03 57 6.04e+02 99
GAMG: createProl   5 1.0 3.7282e+01 1.0 2.51e+09 1.0 1.0e+04 7.3e+04 3.6e+02 79 38 54 34 42 79 38 55 34 47 1069 252905 95 1.35e+03 73 7.96e+02 95
  Graph            10 1.0 7.3375e+00 1.0 9.88e+07 1.0 8.9e+02 3.1e+04 4.5e+01 15 1 5 1 5 15 1 5 1 6 212 0 14 7.45e+01 16 1.93e+02 0
  MIS/Agg          5 1.0 6.1416e-01 1.1 0.00e+00 0.0 2.7e+03 3.5e+04 2.8e+01 1 0 14 4 3 1 0 14 4 4 0 0 0 0.00e+00 0 0.00e+00 0
  SA: col data     5 1.0 3.6241e-01 1.0 0.00e+00 0.0 1.2e+03 5.5e+04 3.4e+01 1 0 6 3 4 1 0 6 3 4 0 0 0 0.00e+00 0 0.00e+00 0
  SA: frmProl0     5 1.0 6.4859e+00 1.0 0.00e+00 0.0 3.5e+02 2.2e+04 2.5e+01 14 0 2 0 3 14 0 2 0 3 0 0 0 0.00e+00 0 0.00e+00 0
  SA: smooth       5 1.0 2.2838e+00 1.0 1.41e+08 1.0 8.9e+02 4.2e+04 4.5e+01 5 2 5 2 5 5 2 5 2 6 977 15251 63 8.33e+02 52 5.66e+02 83
GAMG: partLevel    5 1.0 2.3985e+00 1.0 2.49e+09 1.0 5.5e+03 2.3e+05 1.5e+02 5 37 29 57 17 5 37 29 57 19 16206 77164 78 1.28e+03 42 5.02e+02 100
  repartition      2 1.0 7.8242e-02 1.6 0.00e+00 0.0 1.7e+02 2.0e+03 1.1e+02 0 0 1 0 12 0 0 1 0 14 0 0 0 0.00e+00 4 2.57e-02 0
  Invert-Sort      2 1.0 8.0591e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01 0 0 0 0 1 0 0 0 0 2 0 0 0 0.00e+00 0 0.00e+00 0
  Move A           2 1.0 1.6814e-02 1.0 0.00e+00 0.0 6.5e+01 5.2e+03 3.0e+01 0 0 0 0 3 0 0 0 0 4 0 0 0 0.00e+00 3 2.57e-02 0
  Move P           2 1.0 1.5755e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 3.2e+01 0 0 0 0 4 0 0 0 0 4 0 0 0 0.00e+00 1 0.00e+00 0
PCGAMG Squ l00     1 1.0 1.1955e+01 1.0 0.00e+00 0.0 2.2e+02 9.6e+05 1.2e+01 25 0 1 10 1 25 0 1 10 2 0 0 0 0.00e+00 0 0.00e+00 0
PCGAMG Gal l00     1 1.0 1.1081e+00 1.0 8.95e+08 1.0 1.1e+03 3.8e+05 8.0e+00 2 14 6 20 1 2 14 6 20 1 12894 71396 17 8.82e+02 8 3.51e+02 100
PCGAMG Opt l00     1 1.0 7.1939e-01 1.0 5.86e+07 1.0 1.3e+02 1.7e+05 6.0e+00 2 1 1 1 1 2 1 1 1 1 1303 14497 9 5.12e+02 6 1.85e+02 100
PCGAMG Gal l01     1 1.0 8.4848e-01 1.0 1.11e+09 1.0 1.2e+03 5.2e+05 8.0e+00 2 16 6 29 1 2 17 7 29 1 20260 88669 17 3.60e+02 8 1.33e+02 100
PCGAMG Opt l01     1 1.0 1.7983e-01 1.0 2.76e+07 1.0 2.2e+02 4.5e+04 6.0e+00 0 0 1 0 1
0 0 1 0 1 2401 13862 9 1.16e+02 6 3.75e+01 100 PCGAMG Gal l02 1 1.0 2.6291e-01 1.1 4.65e+08 1.1 1.2e+03 1.3e+05 8.0e+00 1 7 6 7 1 1 7 7 7 1 26962 72279 17 3.89e+01 8 1.71e+01 100 PCGAMG Opt l02 1 1.0 6.4877e-02 1.0 1.21e+07 1.1 2.2e+02 1.6e+04 6.0e+00 0 0 1 0 1 0 0 1 0 1 2736 7960 9 1.62e+01 6 5.25e+00 100 PCGAMG Gal l03 1 1.0 9.3417e-02 1.4 2.09e+07 1.3 1.2e+03 1.2e+04 8.0e+00 0 0 7 1 1 0 0 7 1 1 3232 17075 15 1.38e+00 8 5.96e-01 100 PCGAMG Opt l03 1 1.0 2.0345e-02 1.0 5.38e+05 1.1 2.2e+02 2.2e+03 6.0e+00 0 0 1 0 1 0 0 1 0 1 396 1044 9 5.51e-01 6 1.97e-01 100 PCGAMG Gal l04 1 1.0 3.6403e-02 1.0 1.13e+05 0.0 5.3e+02 2.0e+02 8.0e+00 0 0 3 0 1 0 0 3 0 1 20 220 12 8.06e-03 7 2.23e-03 100 PCGAMG Opt l04 1 1.0 1.2400e-02 1.0 1.55e+04 0.0 1.1e+02 1.9e+02 6.0e+00 0 0 1 0 1 0 0 1 0 1 8 50 12 5.86e-03 5 1.98e-03 100 PCSetUp 1 1.0 3.9996e+01 1.0 6.56e+09 1.0 1.9e+04 1.2e+05 7.0e+02 84 99 99100 82 84100100100 92 2585 151984 188 2.74e+03 120 1.34e+03 98 --- Event Stage 1: linear-solve MatView 40 1.1 2.4770e-03 3.0 0.00e+00 0.0 0.0e+00 0.0e+00 3.5e+01 0 0 0 0 4 43 0 0 0 47 0 0 0 0.00e+00 0 0.00e+00 0 VecNorm 5 1.0 7.3149e-04 1.1 4.19e+07 1.0 0.0e+00 0.0e+00 5.0e+00 0 1 0 0 1 15100 0 0 7 917427 1180031 0 0.00e+00 0 0.00e+00 100 VecCopy 5 1.0 3.5791e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 7 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0 VecSet 5 1.0 1.9763e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 4 0 0 0 0 0 0 0 0.00e+00 0 0.00e+00 0 KSPSolve 5 1.0 1.3361e-03 1.0 4.19e+07 1.0 0.0e+00 0.0e+00 1.0e+01 0 1 0 0 1 27100 0 0 13 502270 661583 0 0.00e+00 0 0.00e+00 100 --------------------------------------------------------------------------------------------------------------------------------------------------------------- Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Distributed Mesh 7 7 36360 0. Matrix 158 158 4520924612 0. Matrix Coarsen 5 5 3120 0. Index Set 92 92 71642744 0. IS L to G Mapping 22 22 35058996 0. Star Forest Graph 69 69 81072 0. Discrete System 7 7 6720 0. Weak Form 7 7 4312 0. Vector 254 254 1575512248 0. Krylov Solver 18 18 176848 0. DMKSP interface 1 1 656 0. Preconditioner 18 18 17872 0. Viewer 5 4 3312 0. PetscRandom 10 10 6660 0. --- Event Stage 1: linear-solve Viewer 5 5 4200 0. 
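The separate "linear-solve" stage reported above is a user-defined logging stage; PETSc only creates the "Main Stage" on its own. As a minimal sketch of how such a stage is typically registered around the solve (the bare-bones driver below is an assumption for illustration, not the code that produced this log), one wraps the solve in PetscLogStageRegister/PetscLogStagePush/PetscLogStagePop:

#include <petscksp.h>

int main(int argc, char **argv)
{
  KSP            ksp;
  PetscLogStage  solve_stage;   /* hypothetical handle; name is an assumption */
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  /* ... KSPSetOperators(), KSPSetFromOptions(), etc. would go here ... */

  /* Register a named stage so that -log_view reports it separately,
     the way "linear-solve" shows up as Event Stage 1 above.          */
  ierr = PetscLogStageRegister("linear-solve", &solve_stage);CHKERRQ(ierr);
  ierr = PetscLogStagePush(solve_stage);CHKERRQ(ierr);
  /* ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);  -- b and x omitted in this sketch */
  ierr = PetscLogStagePop();CHKERRQ(ierr);

  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Everything executed outside the pushed stage is charged to stage 0, which is why the GAMG setup events dominate the Main Stage table while the stage-1 table contains only the solve-time events.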
========================================================================================================================
Average time to get PetscTime(): 3.31e-08
Average time for MPI_Barrier(): 1.43916e-05
Average time for zero size MPI_Send(): 7.78875e-06
#PETSc Option Table entries:
-dm_mat_type aijcusparse
-dm_vec_type cuda
-ksp_monitor
-ksp_norm_type unpreconditioned
-ksp_type cg
-ksp_view
-log_view
-mg_levels_esteig_ksp_type cg
-mg_levels_ksp_type chebyshev
-mg_levels_pc_type jacobi
-pc_gamg_agg_nsmooths 1
-pc_gamg_square_graph 1
-pc_gamg_threshold 0.0
-pc_gamg_threshold_scale 0.0
-pc_gamg_type agg
-pc_type gamg
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --prefix=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/petsc-main-mnj56kbexro3fipf6kheyttljzwss7fo --with-ssl=0 --download-c2html=0 --download-sowing=0 --download-hwloc=0 CFLAGS= FFLAGS= CXXFLAGS= --with-cc=/opt/cray/pe/mpich/8.1.12/ofi/gnu/9.1/bin/mpicc --with-cxx=/opt/cray/pe/mpich/8.1.12/ofi/gnu/9.1/bin/mpicxx --with-fc=/opt/cray/pe/mpich/8.1.12/ofi/gnu/9.1/bin/mpif90 --with-precision=double --with-scalar-type=real --with-shared-libraries=1 --with-debugging=0 --with-openmp=0 --with-64-bit-indices=0 COPTFLAGS= FOPTFLAGS= CXXOPTFLAGS= --with-blaslapack-lib=/opt/cray/pe/libsci/21.08.1.2/GNU/9.1/x86_64/lib/libsci_gnu.so --with-x=0 --with-clanguage=C --with-cuda=1 --with-cuda-dir=/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4 --with-hip=0 --with-metis=1 --with-metis-include=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/metis-5.1.0-lxe5bhakcmkcf7zuqcagulm7tihcav7q/include --with-metis-lib=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/metis-5.1.0-lxe5bhakcmkcf7zuqcagulm7tihcav7q/lib/libmetis.so --with-hypre=1 --with-hypre-include=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hypre-develop-3gtrobj6ky64qlq4jvi2qzou5mvisy4w/include --with-hypre-lib=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hypre-develop-3gtrobj6ky64qlq4jvi2qzou5mvisy4w/lib/libHYPRE.so --with-parmetis=1 --with-parmetis-include=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/parmetis-4.0.3-7xhbi6h22ni4fe35vxurnwmr6izbeb7b/include --with-parmetis-lib=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/parmetis-4.0.3-7xhbi6h22ni4fe35vxurnwmr6izbeb7b/lib/libparmetis.so --with-kokkos=1 --with-kokkos-dir=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/kokkos-3.5.00-65sqphcwz6lwtqectq6yswa6kt3654mb --with-kokkos-kernels=1 --with-kokkos-kernels-dir=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/kokkos-kernels-3.5.00-zwq3aedpbg7ywpmqiqxmn5nx4w6hdrx6 --with-superlu_dist=1 --with-superlu_dist-include=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/superlu-dist-develop-l5kc2sttvfqcjlejhgnvygfxwulrujga/include --with-superlu_dist-lib=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/superlu-dist-develop-l5kc2sttvfqcjlejhgnvygfxwulrujga/lib/libsuperlu_dist.so --with-ptscotch=0 --with-suitesparse=0 --with-hdf5=1 --with-hdf5-include=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hdf5-1.12.1-7pefaoio5q3hwnzggbnz7mpqw352gtsy/include
--with-hdf5-lib="/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hdf5-1.12.1-7pefaoio5q3hwnzggbnz7mpqw352gtsy/lib/libhdf5_hl.so /global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hdf5-1.12.1-7pefaoio5q3hwnzggbnz7mpqw352gtsy/lib/libhdf5.so" --with-zlib=1 --with-zlib-include=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/zlib-1.2.11-ekeupmdcqoimgroigtctln7tqkyh6pdm/include --with-zlib-lib=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/zlib-1.2.11-ekeupmdcqoimgroigtctln7tqkyh6pdm/lib/libz.so --with-mumps=0 --with-trilinos=0 --with-fftw=0 --with-valgrind=0 --with-gmp=0 --with-libpng=0 --with-giflib=0 --with-mpfr=0 --with-netcdf=0 --with-pnetcdf=0 --with-moab=0 --with-random123=0 --with-exodusii=0 --with-cgns=0 --with-memkind=0 --with-p4est=0 --with-saws=0 --with-yaml=0 --with-hwloc=0 --with-libjpeg=0 --with-scalapack=1 --with-scalapack-lib=/opt/cray/pe/libsci/21.08.1.2/GNU/9.1/x86_64/lib/libsci_gnu.so --with-strumpack=0 --with-mmg=0 --with-parmmg=0 --with-tetgen=0 --with-cuda-arch=80 ----------------------------------------- Libraries compiled on 2022-02-08 15:44:43 on login22 Machine characteristics: Linux-5.3.18-24.75_10.0.190-cray_shasta_c-x86_64-with-glibc2.26 Using PETSc directory: /global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/petsc-main-mnj56kbexro3fipf6kheyttljzwss7fo Using PETSc arch: ----------------------------------------- Using C compiler: /opt/cray/pe/mpich/8.1.12/ofi/gnu/9.1/bin/mpicc -fPIC Using Fortran compiler: /opt/cray/pe/mpich/8.1.12/ofi/gnu/9.1/bin/mpif90 -fPIC ----------------------------------------- Using include paths: -I/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/petsc-main-mnj56kbexro3fipf6kheyttljzwss7fo/include -I/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hypre-develop-3gtrobj6ky64qlq4jvi2qzou5mvisy4w/include -I/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/superlu-dist-develop-l5kc2sttvfqcjlejhgnvygfxwulrujga/include -I/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/kokkos-kernels-3.5.00-zwq3aedpbg7ywpmqiqxmn5nx4w6hdrx6/include -I/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/kokkos-3.5.00-65sqphcwz6lwtqectq6yswa6kt3654mb/include -I/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hdf5-1.12.1-7pefaoio5q3hwnzggbnz7mpqw352gtsy/include -I/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/parmetis-4.0.3-7xhbi6h22ni4fe35vxurnwmr6izbeb7b/include -I/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/metis-5.1.0-lxe5bhakcmkcf7zuqcagulm7tihcav7q/include -I/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/zlib-1.2.11-ekeupmdcqoimgroigtctln7tqkyh6pdm/include -I/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/include -I/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/math_libs/11.4/include ----------------------------------------- Using C linker: /opt/cray/pe/mpich/8.1.12/ofi/gnu/9.1/bin/mpicc Using Fortran linker: /opt/cray/pe/mpich/8.1.12/ofi/gnu/9.1/bin/mpif90 Using libraries: -Wl,-rpath,/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/petsc-main-mnj56kbexro3fipf6kheyttljzwss7fo/lib -L/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/petsc-main-mnj56kbexro3fipf6kheyttljzwss7fo/lib -lpetsc 
-Wl,-rpath,/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hypre-develop-3gtrobj6ky64qlq4jvi2qzou5mvisy4w/lib -L/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hypre-develop-3gtrobj6ky64qlq4jvi2qzou5mvisy4w/lib -Wl,-rpath,/opt/cray/pe/libsci/21.08.1.2/GNU/9.1/x86_64/lib -L/opt/cray/pe/libsci/21.08.1.2/GNU/9.1/x86_64/lib -Wl,-rpath,/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/superlu-dist-develop-l5kc2sttvfqcjlejhgnvygfxwulrujga/lib -L/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/superlu-dist-develop-l5kc2sttvfqcjlejhgnvygfxwulrujga/lib -Wl,-rpath,/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/kokkos-kernels-3.5.00-zwq3aedpbg7ywpmqiqxmn5nx4w6hdrx6/lib64 -L/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/kokkos-kernels-3.5.00-zwq3aedpbg7ywpmqiqxmn5nx4w6hdrx6/lib64 -Wl,-rpath,/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/kokkos-3.5.00-65sqphcwz6lwtqectq6yswa6kt3654mb/lib64 -L/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/kokkos-3.5.00-65sqphcwz6lwtqectq6yswa6kt3654mb/lib64 -Wl,-rpath,/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hdf5-1.12.1-7pefaoio5q3hwnzggbnz7mpqw352gtsy/lib -L/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hdf5-1.12.1-7pefaoio5q3hwnzggbnz7mpqw352gtsy/lib -Wl,-rpath,/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/parmetis-4.0.3-7xhbi6h22ni4fe35vxurnwmr6izbeb7b/lib -L/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/parmetis-4.0.3-7xhbi6h22ni4fe35vxurnwmr6izbeb7b/lib -Wl,-rpath,/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/metis-5.1.0-lxe5bhakcmkcf7zuqcagulm7tihcav7q/lib -L/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/metis-5.1.0-lxe5bhakcmkcf7zuqcagulm7tihcav7q/lib -Wl,-rpath,/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/zlib-1.2.11-ekeupmdcqoimgroigtctln7tqkyh6pdm/lib -L/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/zlib-1.2.11-ekeupmdcqoimgroigtctln7tqkyh6pdm/lib -Wl,-rpath,/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/lib64 -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/lib64 -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/lib64/stubs -Wl,-rpath,/opt/cray/pe/mpich/8.1.12/ofi/gnu/9.1/lib -L/opt/cray/pe/mpich/8.1.12/ofi/gnu/9.1/lib -Wl,-rpath,/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/lib64/stubs -Wl,-rpath,/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/nvvm/lib64 -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/nvvm/lib64 -Wl,-rpath,/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/extras/CUPTI/lib64 -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/extras/CUPTI/lib64 -Wl,-rpath,/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/extras/Debugger/lib64 -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/extras/Debugger/lib64 -Wl,-rpath,/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/math_libs/11.4/lib64 -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/math_libs/11.4/lib64 -Wl,-rpath,/opt/cray/pe/mpich/8.1.12/gtl/lib -L/opt/cray/pe/mpich/8.1.12/gtl/lib -Wl,-rpath,/opt/cray/pe/dsmml/0.2.2/dsmml/lib -L/opt/cray/pe/dsmml/0.2.2/dsmml/lib -Wl,-rpath,/opt/cray/xpmem/2.2.40-2.1_3.9__g3cf3325.shasta/lib64 -L/opt/cray/xpmem/2.2.40-2.1_3.9__g3cf3325.shasta/lib64 -Wl,-rpath,/opt/cray/pe/gcc/11.2.0/snos/lib/gcc/x86_64-suse-linux/11.2.0 -L/opt/cray/pe/gcc/11.2.0/snos/lib/gcc/x86_64-suse-linux/11.2.0 
-Wl,-rpath,/opt/cray/pe/gcc/11.2.0/snos/lib64 -L/opt/cray/pe/gcc/11.2.0/snos/lib64 -Wl,-rpath,/opt/cray/pe/gcc/11.2.0/snos/lib -L/opt/cray/pe/gcc/11.2.0/snos/lib -lHYPRE -lsci_gnu -lsuperlu_dist -lkokkoskernels -lkokkoscontainers -lkokkoscore -lsci_gnu -lhdf5_hl -lhdf5 -lparmetis -lmetis -lz -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -lcuda -lstdc++ -ldl -lmpifort_gnu_91 -lmpi_gnu_91 -lcuda -lmpi_gtl_cuda -lxpmem -lgfortran -lm -lcupti -lcudart -lsci_gnu_82_mpi -lsci_gnu_82 -ldsmml -lgfortran -lquadmath -lpthread -lm -lgcc_s -lquadmath -lstdc++ -ldl
-----------------------------------------
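For reference, most of the solver-defining entries in the option table above can also be set programmatically. The sketch below is an illustration under assumptions, not the actual driver: it hard-codes -ksp_type cg, -ksp_norm_type unpreconditioned, -pc_type gamg, -pc_gamg_type agg and -pc_gamg_agg_nsmooths 1, and leaves the remaining entries (the Chebyshev/Jacobi level smoothers, the aijcusparse/cuda matrix and vector types, the monitoring and logging flags) to the options database via KSPSetFromOptions(). The function name and the assumption that A, b, x already exist are hypothetical.

#include <petscksp.h>

/* Hedged sketch: programmatic equivalents of a few of the run-time options
   recorded in the option table above. A, b, x are assumed to exist already
   (e.g. created from the DM with the aijcusparse/cuda types requested by
   -dm_mat_type/-dm_vec_type).                                              */
PetscErrorCode solve_with_gamg(Mat A, Vec b, Vec x)
{
  KSP            ksp;
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPCreate(PetscObjectComm((PetscObject)A), &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPCG);CHKERRQ(ierr);                          /* -ksp_type cg */
  ierr = KSPSetNormType(ksp, KSP_NORM_UNPRECONDITIONED);CHKERRQ(ierr);  /* -ksp_norm_type unpreconditioned */
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);                           /* -pc_type gamg */
  ierr = PCGAMGSetType(pc, PCGAMGAGG);CHKERRQ(ierr);                    /* -pc_gamg_type agg */
  ierr = PCGAMGSetNSmooths(pc, 1);CHKERRQ(ierr);                        /* -pc_gamg_agg_nsmooths 1 */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* pick up -mg_levels_*, -ksp_monitor, -ksp_view, ... */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Keeping KSPSetFromOptions() as the last setup call preserves the ability to override any of the hard-coded choices from the command line, which is how the run recorded in this log was configured.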