KSP Object: 8 MPI processes
  type: cg
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object: 8 MPI processes
  type: gamg
    type is MULTIPLICATIVE, levels=6 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level = 0. 0. 0. 0. 0. 0.
        Threshold scaling factor for each level not specified = 0.
        AGG specific options
          Symmetric graph false
          Number of levels to square graph 1
          Number smoothing steps 1
        Complexity: grid = 1.65764
  Coarse grid solver -- level -------------------------------
    KSP Object: (mg_coarse_) 8 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_coarse_) 8 MPI processes
      type: bjacobi
        number of blocks = 8
        Local solver information for first block is in the following KSP and PC objects on rank 0:
        Use -mg_coarse_ksp_view ::ascii_info_detail to display information for all blocks
      KSP Object: (mg_coarse_sub_) 1 MPI processes
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using DEFAULT norm type for convergence test
      PC Object: (mg_coarse_sub_) 1 MPI processes
        type: lu
        PC has not been set up so information may be incomplete
          out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: nd
          matrix solver type: (null)
          matrix not yet factored; no additional information available
        linear system matrix = precond matrix:
        Mat Object: (mg_coarse_sub_) 1 MPI processes
          type: seqaijcusparse
          rows=5, cols=5
          total: nonzeros=25, allocated nonzeros=25
          total number of mallocs used during MatSetValues calls=0
            not using I-node routines
      linear system matrix = precond matrix:
      Mat Object: 8 MPI processes
        type: mpiaijcusparse
        rows=5, cols=5
        total: nonzeros=25, allocated nonzeros=25
        total number of mallocs used during MatSetValues calls=0
          not using I-node (on process 0) routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 8 MPI processes
      type: chebyshev
        eigenvalue estimates used: min = 0.171499, max = 1.88649
        eigenvalues estimate via cg min 0.471629, max 1.71499
        eigenvalues estimated using cg with translations [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_1_esteig_) 8 MPI processes
          type: cg
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 8 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 8 MPI processes
        type: mpiaijcusparse
        rows=349, cols=349
        total: nonzeros=44647, allocated nonzeros=44647
        total number of mallocs used during MatSetValues calls=0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object: (mg_levels_2_) 8 MPI processes
      type: chebyshev
        eigenvalue estimates used: min = 0.151491, max = 1.6664
        eigenvalues estimate via cg min 0.0672315, max 1.51491
        eigenvalues estimated using cg with translations [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_2_esteig_) 8 MPI processes
          type: cg
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_2_) 8 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 8 MPI processes
        type: mpiaijcusparse
        rows=17826, cols=17826
        total: nonzeros=3911024, allocated nonzeros=3911024
        total number of mallocs used during MatSetValues calls=0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object: (mg_levels_3_) 8 MPI processes
      type: chebyshev
        eigenvalue estimates used: min = 0.165296, max = 1.81825
        eigenvalues estimate via cg min 0.0491492, max 1.65296
        eigenvalues estimated using cg with translations [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_3_esteig_) 8 MPI processes
          type: cg
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_3_) 8 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 8 MPI processes
        type: mpiaijcusparse
        rows=644296, cols=644296
        total: nonzeros=88871282, allocated nonzeros=88871282
        total number of mallocs used during MatSetValues calls=0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 4 -------------------------------
    KSP Object: (mg_levels_4_) 8 MPI processes
      type: chebyshev
        eigenvalue estimates used: min = 0.151982, max = 1.6718
        eigenvalues estimate via cg min 0.0371365, max 1.51982
        eigenvalues estimated using cg with translations [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_4_esteig_) 8 MPI processes
          type: cg
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_4_) 8 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 8 MPI processes
        type: mpiaijcusparse
        rows=6537257, cols=6537257
        total: nonzeros=215331055, allocated nonzeros=215331055
        total number of mallocs used during MatSetValues calls=0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 5 -------------------------------
    KSP Object: (mg_levels_5_) 8 MPI processes
      type: chebyshev
        eigenvalue estimates used: min = 0.195767, max = 2.15344
        eigenvalues estimate via cg min 0.0425341, max 1.95767
        eigenvalues estimated using cg with translations [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_5_esteig_) 8 MPI processes
          type: cg
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_5_) 8 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 8 MPI processes
        type: mpiaijcusparse
        rows=67108864, cols=67108864
        total: nonzeros=468582400, allocated nonzeros=468582400
        total number of mallocs used during MatSetValues calls=0
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 8 MPI processes
    type: mpiaijcusparse
    rows=67108864, cols=67108864
    total: nonzeros=468582400, allocated nonzeros=468582400
    total number of mallocs used during MatSetValues calls=0
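The view above is what -ksp_view prints for this run: CG on 8 MPI ranks, preconditioned by smoothed-aggregation GAMG with 6 levels, Chebyshev/Jacobi smoothers on each level, a block-Jacobi/LU coarse solve on a 5x5 coarse grid, and all matrices stored as aijcusparse; the log prints the same view once per solve. The driver source itself is not part of this log, so the following is only a hypothetical, self-contained sketch of how a solver of this shape is typically set up with a recent PETSc (PetscCall assumed); the 1-D Laplacian is a stand-in for the 3-D Poisson operator, and all solver choices come from the command line.

/* Hypothetical sketch, not the poisson3d source: CG + GAMG driven by options. */
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat      A;
  Vec      x, b;
  KSP      ksp;
  PetscInt i, rstart, rend, n = 1000;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* Assemble a small stand-in operator; -mat_type aijcusparse moves it to the GPU. */
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
  PetscCall(MatSetFromOptions(A));
  PetscCall(MatSetUp(A));
  PetscCall(MatGetOwnershipRange(A, &rstart, &rend));
  for (i = rstart; i < rend; i++) {
    if (i > 0) PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
    if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
    PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatCreateVecs(A, &x, &b));
  PetscCall(VecSet(b, 1.0));

  /* CG Krylov method; -pc_type gamg, -mg_levels_* options and -ksp_view are
   * picked up here and produce a configuration report like the one above. */
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetType(ksp, KSPCG));
  PetscCall(KSPSetFromOptions(ksp));
  PetscCall(KSPSolve(ksp, b, x));

  PetscCall(KSPDestroy(&ksp));
  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&b));
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}

A run along the lines of "mpiexec -n 8 ./sketch -ksp_type cg -pc_type gamg -mg_levels_pc_type jacobi -mat_type aijcusparse -vec_type cuda -ksp_view -log_view" produces a solver view and performance summary of the same shape, although the number of levels, thresholds, and eigenvalue estimates will differ for a different operator.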
------------------------------------------------------------------ PETSc Performance Summary: -------------------------------------------------------------------

/global/homes/s/sajid/packages/aclatfd/3D/poisson3d on a named nid002253 with 8 processors, by sajid Wed Feb 9 19:28:17 2022
Using Petsc Development GIT revision: f351d5494b5462f62c419e00645ac2e477b88cae  GIT Date: 2022-02-08 15:08:19 +0000

                      Max       Max/Min     Avg       Total
Time (sec):           6.402e+01     1.000   6.402e+01
Objects:              6.440e+02     1.000   6.440e+02
Flop:                 1.317e+10     1.018   1.300e+10  1.040e+11
Flop/sec:             2.058e+08     1.018   2.031e+08  1.625e+09
MPI Messages:         8.355e+02     1.905   6.844e+02  5.475e+03
MPI Message Lengths:  1.969e+08     2.005   2.512e+05  1.375e+09
MPI Reductions:       8.010e+02     1.000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                          e.g., VecAXPY() for real vectors of length N --> 2N flop
                          and VecAXPY() for complex vectors of length N --> 8N flop

Summary of Stages:   ----- Time ------  ----- Flop ------  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total    Count   %Total     Avg         %Total    Count   %Total
 0:      Main Stage: 6.4011e+01 100.0%  1.0336e+11  99.4%  5.405e+03  98.7%  2.544e+05      100.0%  7.080e+02  88.4%
 1:    linear-solve: 5.3374e-03   0.0%  6.7109e+08   0.6%  7.000e+01   1.3%  4.000e+00        0.0%  7.500e+01   9.4%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flop: Max - maximum over all processors
                  Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   AvgLen: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flop in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
   GPU Mflop/s: 10e-6 * (sum of flop on GPU over all processors)/(max GPU time over all processors)
   CpuToGpu Count: total number of CPU to GPU copies per processor
   CpuToGpu Size (Mbytes): 10e-6 * (total size of CPU to GPU copies per processor)
   GpuToCpu Count: total number of GPU to CPU copies per processor
   GpuToCpu Size (Mbytes): 10e-6 * (total size of GPU to CPU copies per processor)
   GPU %F: percent flops on GPU in this event
------------------------------------------------------------------------------------------------------------------------
Event              Count     Time (sec)    Flop                        --- Global ---   --- Stage ----   Total    GPU    - CpuToGpu -  - GpuToCpu -  GPU
                   Max Ratio Max Ratio     Max Ratio Mess AvgLen Reduct %T %F %M %L %R  %T %F %M %L %R   Mflop/s  Mflop/s Count Size   Count Size    %F
---------------------------------------------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided       72 1.0 1.2600e+00 3.3 0.00e+00 0.0 6.9e+02 4.0e+00 7.2e+01  1  0 13  0  9   1  0 13  0 10      0       0   0 0.00e+00    0 0.00e+00  0
BuildTwoSidedF      30 1.0 5.4455e-01 5.3 0.00e+00 0.0 1.2e+02 9.0e+05 3.0e+01  1  0  2  8  4   1  0  2  8  4      0       0   0 0.00e+00    0 0.00e+00  0
MatMult            100 1.0 1.4770e-01 1.2 3.76e+09 1.0 2.0e+03 1.2e+05 5.0e+00  0 28 37 17  1   0 29 37 17  1 200290  886047   7 7.38e+02    0 0.00e+00 100
MatConvert          15 1.0 2.4938e-01 1.2 0.00e+00 0.0 1.8e+02 3.2e+04 5.0e+00  0  0  3  0  1   0  0  3  0  1      0       0   0 0.00e+00    8 3.08e+02  0
MatScale            15 1.0 6.2330e-01 1.1 2.73e+08 1.0 9.2e+01 1.3e+05 0.0e+00  1  2  2  1  0   1  2  2  1  0   3458  542648  20 3.76e+02   20 3.77e+02 14
MatAssemblyBegin    38 1.1 5.7772e-01 4.2 0.00e+00 0.0 1.2e+02 9.0e+05 1.5e+01  1  0  2  8  2   1  0  2  8  2      0       0   0 0.00e+00    0 0.00e+00  0
MatAssemblyEnd      38 1.1 3.9175e+00 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 5.6e+01  6  0  0  0  7   6  0  0  0  8      0       0   0 0.00e+00    0 0.00e+00  0
MatCreateSubMat      2 1.0 1.2913e-02 1.0 0.00e+00 0.0 2.0e+01 1.9e+01 2.8e+01  0  0  0  0  3   0  0  0  0  4      0       0   0 0.00e+00    1 2.50e-05  0
MatCoarsen           5 1.0 1.4885e+00 1.0 0.00e+00 0.0 7.3e+02 6.4e+04 2.0e+01  2  0 13  3  2   2  0 14  3  3      0       0   0 0.00e+00    0 0.00e+00  0
MatView              8 1.1 1.7513e-02 72.0 0.00e+00 0.0 0.0e+00 0.0e+00 7.0e+00 0  0  0  0  1   0  0  0  0  1      0       0   0 0.00e+00    0 0.00e+00  0
MatAXPY              5 1.0 2.1959e+00 1.0 9.32e+06 1.0 0.0e+00 0.0e+00 5.0e+00  3  0  0  0  1   3  0  0  0  1     34       0   0 0.00e+00   10 3.01e+02  0
MatMatMultSym        5 1.0 2.1141e+00 1.0 1.97e+08 1.0 2.8e+02 8.5e+04 3.0e+01  3  1  5  2  4   3  2  5  2  4    735    3386  44 1.28e+03   30 4.53e+02 100
MatMatMultNum        5 1.0 2.6103e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0      0       0   0 0.00e+00    0 0.00e+00  0
MatPtAPSymbolic      5 1.0 3.8823e+00 1.0 2.46e+09 1.0 1.3e+03 5.1e+05 4.0e+01  6 19 24 49  5   6 19 25 49  6   4980   18532  51 2.34e+03   39 9.88e+02 100
MatPtAPNumeric       5 1.0 7.9610e-02 1.2 2.44e+09 1.0 8.1e+01 1.4e+06 0.0e+00  0 18  1  8  0   0 18  1  8  0 240155  767015  28 1.78e+02    0 0.00e+00 100
MatTrnMatMultSym     1 1.0 1.9383e+01 1.0 0.00e+00 0.0 7.0e+01 1.9e+06 1.2e+01 30  0  1 10  1  30  0  1 10  2      0       0   0 0.00e+00    0 0.00e+00  0
MatGetLocalMat       6 1.0 1.0244e+00 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0      0       0  14 3.38e+02    5 1.88e+02  0
MatGetBrAoCol       10 1.0 1.9273e-01 1.2 0.00e+00 0.0 5.5e+02 2.6e+05 0.0e+00  0  0 10 10  0   0  0 10 10  0      0       0   0 0.00e+00    0 0.00e+00  0
MatCUSPARSCopyTo    68 1.0 3.5846e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0      0       0  67 3.29e+03    0 0.00e+00  0
MatCUSPARSCopyFr    30 1.1 8.1929e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0      0       0   0 0.00e+00   29 9.10e+02  0
MatCUSPARSGenT      10 1.0 6.1319e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0      0       0   0 0.00e+00    0 0.00e+00  0
MatSetPreallCOO     10 1.0 8.7517e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+01  1  0  0  0  4   1  0  0  0  4      0       0  49 2.22e+03   29 3.98e+02  0
MatSetValuesCOO     10 1.0 4.5516e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0      0       0   0 0.00e+00    0 0.00e+00  0
DMCreateMat          1 1.0 7.9086e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 7.0e+00 12  0  0  0  1  12  0  0  0  1      0       0   0 0.00e+00    0 0.00e+00  0
SFSetGraph          57 1.0 4.3028e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0      0       0   0 0.00e+00    0 0.00e+00  0
SFSetUp             42 1.0 9.3058e-01 1.6 0.00e+00 0.0 1.3e+03 1.6e+05 4.2e+01  1  0 23 15  5   1  0 23 15  6      0       0   0 0.00e+00    0 0.00e+00  0
SFBcastBegin        30 1.0 1.9898e-02 2.6 0.00e+00 0.0 6.3e+02 1.5e+05 0.0e+00  0  0 11  7  0   0  0 12  7  0      0       0   0 0.00e+00    0 0.00e+00  0
SFBcastEnd          30 1.0 1.8015e-01 10.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0  0  0  0  0   0  0  0  0  0      0       0   0 0.00e+00    0 0.00e+00  0
SFReduceBegin       25 1.0 6.8733e-02 33.0 0.00e+00 0.0 4.0e+02 8.3e+05 0.0e+00 0  0  7 24  0   0  0  7 24  0      0       0   4 6.01e+00    0 0.00e+00  0
SFReduceEnd         25 1.0 3.2174e-01 6.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0      0       0   5 7.01e+00    0 0.00e+00  0
SFFetchOpBegin       5 1.0 1.6278e-02 63.0 0.00e+00 0.0 8.1e+01 6.9e+05 0.0e+00 0  0  1  4  0   0  0  1  4  0      0       0   0 0.00e+00    0 0.00e+00  0
SFFetchOpEnd         5 1.0 5.5293e-02 1.8 0.00e+00 0.0 8.1e+01 6.9e+05 0.0e+00  0  0  1  4  0   0  0  1  4  0      0       0   0 0.00e+00    0 0.00e+00  0
SFPack             181 1.0 8.5428e-02 7.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0      0       0  13 6.55e+00    0 0.00e+00  0
SFUnpack           186 1.0 7.3383e-02 2.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0      0       0   5 7.01e+00    0 0.00e+00  0
VecView              2 1.0 6.1254e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0      0       0   0 0.00e+00    2 1.34e+02  0
VecMDot             50 1.0 4.5050e-02 2.2 1.03e+09 1.0 0.0e+00 0.0e+00 5.0e+01  0  8  0  0  6   0  8  0  0  7 181449  677644   0 0.00e+00    0 0.00e+00 100
VecTDot            105 1.0 7.8183e-03 1.1 3.91e+08 1.0 0.0e+00 0.0e+00 1.0e+02  0  3  0  0 13   0  3  0  0 15 399185  663113   0 0.00e+00    0 0.00e+00 100
VecNorm            111 1.0 4.3589e-02 4.4 4.27e+08 1.0 0.0e+00 0.0e+00 1.1e+02  0  3  0  0 14   0  3  0  0 16  78088  507760   0 0.00e+00    0 0.00e+00 100
VecScale            59 1.0 5.8454e-03 1.0 1.36e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 185757  564736   1 6.71e+01    0 0.00e+00 100
VecCopy             16 1.0 4.7311e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0      0       0   0 0.00e+00    0 0.00e+00  0
VecSet             194 1.0 7.1129e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0      0       0   0 0.00e+00    0 0.00e+00  0
VecAXPY            105 1.0 4.8041e-03 1.0 3.91e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  3  0  0  0   0  3  0  0  0 649651  821617   0 0.00e+00    0 0.00e+00 100
VecAYPX             45 1.0 3.0359e-03 1.0 1.68e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 440577  507326   0 0.00e+00    0 0.00e+00 100
VecMAXPY            55 1.0 1.1733e-02 1.0 1.21e+09 1.0 0.0e+00 0.0e+00 0.0e+00  0  9  0  0  0   0  9  0  0  0 823347  862212   0 0.00e+00    0 0.00e+00 100
VecAssemblyBegin    16 1.0 3.1758e-02 2.4 0.00e+00 0.0 0.0e+00 0.0e+00 1.5e+01  0  0  0  0  2   0  0  0  0  2      0       0   0 0.00e+00    0 0.00e+00  0
VecAssemblyEnd      16 1.0 6.2263e-05 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0      0       0   0 0.00e+00    0 0.00e+00  0
VecPointwiseMult   110 1.0 1.4313e-02 1.0 2.05e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   0  2  0  0  0 114219  348098  15 2.23e+02    0 0.00e+00 100
VecScatterBegin    121 1.0 1.9927e-01 3.2 0.00e+00 0.0 2.6e+03 1.4e+05 1.2e+01  0  0 47 27  1   0  0 48 27  2      0       0  14 7.49e+01    0 0.00e+00  0
VecScatterEnd      121 1.0 8.6786e-02 4.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0      0       0   0 0.00e+00    0 0.00e+00  0
VecSetRandom         5 1.0 1.2099e-03 7.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0      0       0   0 0.00e+00    0 0.00e+00  0
VecNormalize        55 1.0 2.5156e-02 3.6 3.08e+08 1.0 0.0e+00 0.0e+00 5.5e+01  0  2  0  0  7   0  2  0  0  8  97478  498356   0 0.00e+00    0 0.00e+00 100
VecCUDACopyTo       41 1.0 2.5548e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0      0       0  41 6.62e+02    0 0.00e+00  0
VecCUDACopyFrom     27 1.0 1.8575e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0      0       0   0 0.00e+00   27 4.33e+02  0
KSPSetUp            12 1.0 6.9346e-01 1.0 3.12e+09 1.0 9.2e+02 1.3e+05 1.8e+02  1 24 17  9 23   1 24 17  9 26  35582  749376  15 2.23e+02    5 7.43e+01 100
KSPSolve             1 1.0 3.7595e-04 1.0 1.68e+07 1.0 0.0e+00 0.0e+00 2.0e+00  0  0  0  0  0   0  0  0  0  0 357006  441088   0 0.00e+00    0 0.00e+00 100
KSPGMRESOrthog      50 1.0 5.5113e-02 1.8 2.05e+09 1.0 0.0e+00 0.0e+00 5.0e+01  0 16  0  0  6   0 16  0  0  7 296630  758906   0 0.00e+00    0 0.00e+00 100
PCGAMGGraph_AGG      5 1.0 1.3408e+01 1.0 1.97e+08 1.0 2.8e+02 6.4e+04 4.5e+01 21  1  5  1  6  21  2  5  1  6    116       0  15 1.49e+02   18 3.84e+02  0
PCGAMGCoarse_AGG     5 1.0 2.4127e+01 1.0 0.00e+00 0.0 9.3e+02 2.8e+05 4.3e+01 38  0 17 19  5  38  0 17 19  6      0       0   0 0.00e+00    0 0.00e+00  0
PCGAMGProl_AGG       5 1.0 3.1753e+00 1.0 0.00e+00 0.0 4.3e+02 1.0e+05 7.9e+01  5  0  8  3 10   5  0  8  3 11      0       0   0 0.00e+00    0 0.00e+00  0
PCGAMGPOpt_AGG       5 1.0 5.7744e+00 1.0 4.83e+09 1.0 1.4e+03 1.1e+05 1.8e+02  9 37 25 11 23   9 37 26 11 26   6628   74185  76 2.55e+03   59 1.20e+03 99
GAMG: createProl     5 1.0 4.6465e+01 1.0 5.02e+09 1.0 3.0e+03 1.5e+05 3.5e+02 73 38 55 34 44  73 39 56 34 50    857   74120  91 2.69e+03   77 1.59e+03 95
  Graph             10 1.0 1.3384e+01 1.0 1.97e+08 1.0 2.8e+02 6.4e+04 4.5e+01 21  1  5  1  6  21  2  5  1  6    116       0  15 1.49e+02   18 3.84e+02  0
  MIS/Agg            5 1.0 1.4887e+00 1.0 0.00e+00 0.0 7.3e+02 6.4e+04 2.0e+01  2  0 13  3  2   2  0 14  3  3      0       0   0 0.00e+00    0 0.00e+00  0
  SA: col data       5 1.0 7.0388e-01 1.0 0.00e+00 0.0 3.4e+02 1.2e+05 3.4e+01  1  0  6  3  4   1  0  6  3  5      0       0   0 0.00e+00    0 0.00e+00  0
  SA: frmProl0       5 1.0 2.2816e+00 1.0 0.00e+00 0.0 9.0e+01 5.4e+04 2.5e+01  4  0  2  0  3   4  0  2  0  4      0       0   0 0.00e+00    0 0.00e+00  0
  SA: smooth         5 1.0 4.6886e+00 1.0 2.82e+08 1.0 2.8e+02 8.5e+04 4.5e+01  7  2  5  2  6   7  2  5  2  6    476    4019  59 1.66e+03   54 1.13e+03 83
GAMG: partLevel      5 1.0 3.9906e+00 1.0 4.90e+09 1.0 1.5e+03 5.4e+05 9.3e+01  6 37 27 58 12   6 37 27 58 13   9636   35998  79 2.52e+03   40 9.88e+02 100
  repartition        1 1.0 2.8524e-02 1.0 0.00e+00 0.0 4.7e+01 1.1e+01 5.3e+01  0  0  1  0  7   0  0  1  0  7      0       0   0 0.00e+00    1 2.50e-05  0
  Invert-Sort        1 1.0 1.6403e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  1      0       0   0 0.00e+00    0 0.00e+00  0
  Move A             1 1.0 7.3216e-03 1.0 0.00e+00 0.0 2.0e+01 1.9e+01 1.5e+01  0  0  0  0  2   0  0  0  0  2      0       0   0 0.00e+00    1 2.50e-05  0
  Move P             1 1.0 6.6407e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.6e+01  0  0  0  0  2   0  0  0  0  2      0       0   0 0.00e+00    0 0.00e+00  0
PCGAMG Squ l00       1 1.0 1.9383e+01 1.0 0.00e+00 0.0 7.0e+01 1.9e+06 1.2e+01 30  0  1 10  1  30  0  1 10  2      0       0   0 0.00e+00    0 0.00e+00  0
PCGAMG Gal l00       1 1.0 2.2652e+00 1.0 1.79e+09 1.0 2.4e+02 1.2e+06 8.0e+00  3 14  4 20  1   3 14  4 20  1   6293   30320  17 1.74e+03    8 6.97e+02 100
PCGAMG Opt l00       1 1.0 1.6769e+00 1.0 1.17e+08 1.0 4.2e+01 3.5e+05 6.0e+00  3  1  1  1  1   3  1  1  1  1    559    2509   9 1.02e+03    6 3.69e+02 100
PCGAMG Gal l01       1 1.0 1.2428e+00 1.0 2.19e+09 1.0 2.4e+02 1.7e+06 8.0e+00  2 16  4 30  1   2 16  4 30  1  13719   41246  17 6.98e+02    8 2.59e+02 100
PCGAMG Opt l01       1 1.0 3.1079e-01 1.0 5.50e+07 1.0 4.2e+01 1.5e+05 6.0e+00  0  0  1  0  1   0  0  1  0  1   1386    9426   9 2.31e+02    6 7.41e+01 100
PCGAMG Gal l02       1 1.0 3.7143e-01 1.0 8.97e+08 1.1 2.4e+02 4.0e+05 8.0e+00  1  7  4  7  1   1  7  4  7  1  18510   41586  17 7.06e+01    8 3.14e+01 100
PCGAMG Opt l02       1 1.0 9.3935e-02 1.0 2.41e+07 1.1 4.2e+01 5.2e+04 6.0e+00  0  0  1  0  1   0  0  1  0  1   1892    5943   9 3.13e+01    6 1.00e+01 100
PCGAMG Gal l03       1 1.0 4.5541e-02 1.1 3.67e+07 1.3 2.4e+02 2.7e+04 8.0e+00  0  0  4  0  1   0  0  4  0  1   5972   13010  15 1.88e+00    8 9.46e-01 100
PCGAMG Opt l03       1 1.0 2.3099e-02 1.0 1.03e+06 1.1 4.2e+01 6.0e+03 6.0e+00  0  0  1  0  1   0  0  1  0  1    339     741   9 9.77e-01    6 3.41e-01 100
PCGAMG Gal l04       1 1.0 3.7959e-02 1.0 7.10e+04 2.9 4.6e+02 1.4e+02 8.0e+00  0  0  8  0  1   0  0  8  0  1     11     108  14 1.00e-02    7 2.80e-03 100
PCGAMG Opt l04       1 1.0 1.2923e-02 1.0 1.20e+04 1.2 1.1e+02 1.8e+02 6.0e+00  0  0  2  0  1   0  0  2  0  1      7      42   8 8.60e-03    6 2.84e-03 100
PCSetUp              1 1.0 5.0976e+01 1.0 1.30e+10 1.0 5.4e+03 2.6e+05 6.4e+02 80 99 98 100 80 80 100 100 100 91  2020   64035 185 5.43e+03  123 2.65e+03 98

--- Event Stage 1: linear-solve

MatView             40 1.1 2.0912e-03 2.7 0.00e+00 0.0 0.0e+00 0.0e+00 3.5e+01  0  0  0  0  4  34  0  0  0 47       0       0   0 0.00e+00    0 0.00e+00  0
VecNorm              5 1.0 1.3106e-03 1.0 8.39e+07 1.0 0.0e+00 0.0e+00 5.0e+00  0  1  0  0  1  24 100 0  0  7  512041  922352   0 0.00e+00    0 0.00e+00 100
VecCopy              5 1.0 6.1139e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0  11  0  0  0  0       0       0   0 0.00e+00    0 0.00e+00  0
VecSet               5 1.0 3.0530e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   5  0  0  0  0       0       0   0 0.00e+00    0 0.00e+00  0
KSPSolve             5 1.0 2.2451e-03 1.0 8.39e+07 1.0 0.0e+00 0.0e+00 1.0e+01  0  1  0  0  1  42 100 0  0 13  298917  438845   0 0.00e+00    0 0.00e+00 100
---------------------------------------------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

    Distributed Mesh     7              7        36360     0.
              Matrix   150            150   9035823784     0.
      Matrix Coarsen     5              5         3120     0.
           Index Set    76             76    140581556     0.
   IS L to G Mapping    22             22     68607276     0.
   Star Forest Graph    66             66        77472     0.
     Discrete System     7              7         6720     0.
           Weak Form     7              7         4312     0.
              Vector   247            247   3144505376     0.
       Krylov Solver    18             18       176848     0.
     DMKSP interface     1              1          656     0.
      Preconditioner    18             18        17872     0.
              Viewer     5              4         3312     0.
         PetscRandom    10             10         6660     0.

--- Event Stage 1: linear-solve

              Viewer     5              5         4200     0.
========================================================================================================================
Average time to get PetscTime(): 3.21e-08
Average time for MPI_Barrier(): 1.01996e-05
Average time for zero size MPI_Send(): 7.33537e-06
#PETSc Option Table entries:
-dm_mat_type aijcusparse
-dm_vec_type cuda
-ksp_monitor
-ksp_norm_type unpreconditioned
-ksp_type cg
-ksp_view
-log_view
-mg_levels_esteig_ksp_type cg
-mg_levels_ksp_type chebyshev
-mg_levels_pc_type jacobi
-pc_gamg_agg_nsmooths 1
-pc_gamg_square_graph 1
-pc_gamg_threshold 0.0
-pc_gamg_threshold_scale 0.0
-pc_gamg_type agg
-pc_type gamg
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --prefix=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/petsc-main-mnj56kbexro3fipf6kheyttljzwss7fo --with-ssl=0 --download-c2html=0 --download-sowing=0 --download-hwloc=0 CFLAGS= FFLAGS= CXXFLAGS= --with-cc=/opt/cray/pe/mpich/8.1.12/ofi/gnu/9.1/bin/mpicc --with-cxx=/opt/cray/pe/mpich/8.1.12/ofi/gnu/9.1/bin/mpicxx --with-fc=/opt/cray/pe/mpich/8.1.12/ofi/gnu/9.1/bin/mpif90 --with-precision=double --with-scalar-type=real --with-shared-libraries=1 --with-debugging=0 --with-openmp=0 --with-64-bit-indices=0 COPTFLAGS= FOPTFLAGS= CXXOPTFLAGS= --with-blaslapack-lib=/opt/cray/pe/libsci/21.08.1.2/GNU/9.1/x86_64/lib/libsci_gnu.so --with-x=0 --with-clanguage=C --with-cuda=1 --with-cuda-dir=/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4 --with-hip=0 --with-metis=1 --with-metis-include=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/metis-5.1.0-lxe5bhakcmkcf7zuqcagulm7tihcav7q/include --with-metis-lib=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/metis-5.1.0-lxe5bhakcmkcf7zuqcagulm7tihcav7q/lib/libmetis.so --with-hypre=1 --with-hypre-include=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hypre-develop-3gtrobj6ky64qlq4jvi2qzou5mvisy4w/include --with-hypre-lib=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hypre-develop-3gtrobj6ky64qlq4jvi2qzou5mvisy4w/lib/libHYPRE.so --with-parmetis=1 --with-parmetis-include=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/parmetis-4.0.3-7xhbi6h22ni4fe35vxurnwmr6izbeb7b/include --with-parmetis-lib=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/parmetis-4.0.3-7xhbi6h22ni4fe35vxurnwmr6izbeb7b/lib/libparmetis.so --with-kokkos=1 --with-kokkos-dir=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/kokkos-3.5.00-65sqphcwz6lwtqectq6yswa6kt3654mb --with-kokkos-kernels=1 --with-kokkos-kernels-dir=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/kokkos-kernels-3.5.00-zwq3aedpbg7ywpmqiqxmn5nx4w6hdrx6 --with-superlu_dist=1 --with-superlu_dist-include=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/superlu-dist-develop-l5kc2sttvfqcjlejhgnvygfxwulrujga/include --with-superlu_dist-lib=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/superlu-dist-develop-l5kc2sttvfqcjlejhgnvygfxwulrujga/lib/libsuperlu_dist.so --with-ptscotch=0 --with-suitesparse=0 --with-hdf5=1 --with-hdf5-include=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hdf5-1.12.1-7pefaoio5q3hwnzggbnz7mpqw352gtsy/include
--with-hdf5-lib="/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hdf5-1.12.1-7pefaoio5q3hwnzggbnz7mpqw352gtsy/lib/libhdf5_hl.so /global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hdf5-1.12.1-7pefaoio5q3hwnzggbnz7mpqw352gtsy/lib/libhdf5.so" --with-zlib=1 --with-zlib-include=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/zlib-1.2.11-ekeupmdcqoimgroigtctln7tqkyh6pdm/include --with-zlib-lib=/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/zlib-1.2.11-ekeupmdcqoimgroigtctln7tqkyh6pdm/lib/libz.so --with-mumps=0 --with-trilinos=0 --with-fftw=0 --with-valgrind=0 --with-gmp=0 --with-libpng=0 --with-giflib=0 --with-mpfr=0 --with-netcdf=0 --with-pnetcdf=0 --with-moab=0 --with-random123=0 --with-exodusii=0 --with-cgns=0 --with-memkind=0 --with-p4est=0 --with-saws=0 --with-yaml=0 --with-hwloc=0 --with-libjpeg=0 --with-scalapack=1 --with-scalapack-lib=/opt/cray/pe/libsci/21.08.1.2/GNU/9.1/x86_64/lib/libsci_gnu.so --with-strumpack=0 --with-mmg=0 --with-parmmg=0 --with-tetgen=0 --with-cuda-arch=80
-----------------------------------------
Libraries compiled on 2022-02-08 15:44:43 on login22
Machine characteristics: Linux-5.3.18-24.75_10.0.190-cray_shasta_c-x86_64-with-glibc2.26
Using PETSc directory: /global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/petsc-main-mnj56kbexro3fipf6kheyttljzwss7fo
Using PETSc arch:
-----------------------------------------
Using C compiler: /opt/cray/pe/mpich/8.1.12/ofi/gnu/9.1/bin/mpicc -fPIC
Using Fortran compiler: /opt/cray/pe/mpich/8.1.12/ofi/gnu/9.1/bin/mpif90 -fPIC
-----------------------------------------
Using include paths: -I/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/petsc-main-mnj56kbexro3fipf6kheyttljzwss7fo/include -I/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hypre-develop-3gtrobj6ky64qlq4jvi2qzou5mvisy4w/include -I/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/superlu-dist-develop-l5kc2sttvfqcjlejhgnvygfxwulrujga/include -I/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/kokkos-kernels-3.5.00-zwq3aedpbg7ywpmqiqxmn5nx4w6hdrx6/include -I/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/kokkos-3.5.00-65sqphcwz6lwtqectq6yswa6kt3654mb/include -I/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hdf5-1.12.1-7pefaoio5q3hwnzggbnz7mpqw352gtsy/include -I/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/parmetis-4.0.3-7xhbi6h22ni4fe35vxurnwmr6izbeb7b/include -I/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/metis-5.1.0-lxe5bhakcmkcf7zuqcagulm7tihcav7q/include -I/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/zlib-1.2.11-ekeupmdcqoimgroigtctln7tqkyh6pdm/include -I/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/include -I/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/math_libs/11.4/include
-----------------------------------------
Using C linker: /opt/cray/pe/mpich/8.1.12/ofi/gnu/9.1/bin/mpicc
Using Fortran linker: /opt/cray/pe/mpich/8.1.12/ofi/gnu/9.1/bin/mpif90
Using libraries: -Wl,-rpath,/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/petsc-main-mnj56kbexro3fipf6kheyttljzwss7fo/lib -L/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/petsc-main-mnj56kbexro3fipf6kheyttljzwss7fo/lib -lpetsc
-Wl,-rpath,/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hypre-develop-3gtrobj6ky64qlq4jvi2qzou5mvisy4w/lib -L/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hypre-develop-3gtrobj6ky64qlq4jvi2qzou5mvisy4w/lib -Wl,-rpath,/opt/cray/pe/libsci/21.08.1.2/GNU/9.1/x86_64/lib -L/opt/cray/pe/libsci/21.08.1.2/GNU/9.1/x86_64/lib -Wl,-rpath,/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/superlu-dist-develop-l5kc2sttvfqcjlejhgnvygfxwulrujga/lib -L/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/superlu-dist-develop-l5kc2sttvfqcjlejhgnvygfxwulrujga/lib -Wl,-rpath,/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/kokkos-kernels-3.5.00-zwq3aedpbg7ywpmqiqxmn5nx4w6hdrx6/lib64 -L/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/kokkos-kernels-3.5.00-zwq3aedpbg7ywpmqiqxmn5nx4w6hdrx6/lib64 -Wl,-rpath,/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/kokkos-3.5.00-65sqphcwz6lwtqectq6yswa6kt3654mb/lib64 -L/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/kokkos-3.5.00-65sqphcwz6lwtqectq6yswa6kt3654mb/lib64 -Wl,-rpath,/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hdf5-1.12.1-7pefaoio5q3hwnzggbnz7mpqw352gtsy/lib -L/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/hdf5-1.12.1-7pefaoio5q3hwnzggbnz7mpqw352gtsy/lib -Wl,-rpath,/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/parmetis-4.0.3-7xhbi6h22ni4fe35vxurnwmr6izbeb7b/lib -L/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/parmetis-4.0.3-7xhbi6h22ni4fe35vxurnwmr6izbeb7b/lib -Wl,-rpath,/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/metis-5.1.0-lxe5bhakcmkcf7zuqcagulm7tihcav7q/lib -L/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/metis-5.1.0-lxe5bhakcmkcf7zuqcagulm7tihcav7q/lib -Wl,-rpath,/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/zlib-1.2.11-ekeupmdcqoimgroigtctln7tqkyh6pdm/lib -L/global/u1/s/sajid/packages/spack/opt/spack/cray-sles15-zen3/gcc-11.2.0/zlib-1.2.11-ekeupmdcqoimgroigtctln7tqkyh6pdm/lib -Wl,-rpath,/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/lib64 -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/lib64 -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/lib64/stubs -Wl,-rpath,/opt/cray/pe/mpich/8.1.12/ofi/gnu/9.1/lib -L/opt/cray/pe/mpich/8.1.12/ofi/gnu/9.1/lib -Wl,-rpath,/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/lib64/stubs -Wl,-rpath,/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/nvvm/lib64 -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/nvvm/lib64 -Wl,-rpath,/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/extras/CUPTI/lib64 -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/extras/CUPTI/lib64 -Wl,-rpath,/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/extras/Debugger/lib64 -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/cuda/11.4/extras/Debugger/lib64 -Wl,-rpath,/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/math_libs/11.4/lib64 -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/math_libs/11.4/lib64 -Wl,-rpath,/opt/cray/pe/mpich/8.1.12/gtl/lib -L/opt/cray/pe/mpich/8.1.12/gtl/lib -Wl,-rpath,/opt/cray/pe/dsmml/0.2.2/dsmml/lib -L/opt/cray/pe/dsmml/0.2.2/dsmml/lib -Wl,-rpath,/opt/cray/xpmem/2.2.40-2.1_3.9__g3cf3325.shasta/lib64 -L/opt/cray/xpmem/2.2.40-2.1_3.9__g3cf3325.shasta/lib64 -Wl,-rpath,/opt/cray/pe/gcc/11.2.0/snos/lib/gcc/x86_64-suse-linux/11.2.0 -L/opt/cray/pe/gcc/11.2.0/snos/lib/gcc/x86_64-suse-linux/11.2.0 
-Wl,-rpath,/opt/cray/pe/gcc/11.2.0/snos/lib64 -L/opt/cray/pe/gcc/11.2.0/snos/lib64 -Wl,-rpath,/opt/cray/pe/gcc/11.2.0/snos/lib -L/opt/cray/pe/gcc/11.2.0/snos/lib -lHYPRE -lsci_gnu -lsuperlu_dist -lkokkoskernels -lkokkoscontainers -lkokkoscore -lsci_gnu -lhdf5_hl -lhdf5 -lparmetis -lmetis -lz -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -lcuda -lstdc++ -ldl -lmpifort_gnu_91 -lmpi_gnu_91 -lcuda -lmpi_gtl_cuda -lxpmem -lgfortran -lm -lcupti -lcudart -lsci_gnu_82_mpi -lsci_gnu_82 -ldsmml -lgfortran -lquadmath -lpthread -lm -lgcc_s -lquadmath -lstdc++ -ldl
-----------------------------------------
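
Note: the options in the option table above are the runtime arguments that produced this log. As an illustrative sketch only, the MPI launcher, process count, and executable name below are placeholders and are not taken from this log; the solver options themselves are copied from the option table:

    mpiexec -n <nprocs> ./<application> \
        -dm_mat_type aijcusparse -dm_vec_type cuda \
        -ksp_type cg -ksp_norm_type unpreconditioned -ksp_monitor -ksp_view \
        -pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1 -pc_gamg_square_graph 1 \
        -pc_gamg_threshold 0.0 -pc_gamg_threshold_scale 0.0 \
        -mg_levels_ksp_type chebyshev -mg_levels_esteig_ksp_type cg -mg_levels_pc_type jacobi \
        -log_view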