Solving a linear TS problem on 256 processors
mx : 16384, my: 16384, energy(in eV) : 1.500000e+04
0 TS dt 3.851e-06 time 0.
    0 KSP Residual norm 1.685968662328e+10
    1 KSP Residual norm 7.320610622864e+08
    2 KSP Residual norm 6.410815809781e+07
    3 KSP Residual norm 5.714873255756e+06
    4 KSP Residual norm 5.302113047785e+05
    5 KSP Residual norm 5.181591438499e+04
KSP Object: 256 MPI processes
  type: fgmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  right preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object: 256 MPI processes
  type: gamg
    type is MULTIPLICATIVE, levels=6 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level = -0.04 -0.04 -0.04 -0.04
        Threshold scaling factor for each level not specified = 1.
        Using parallel coarse grid solver (all coarse grid equations not put on one process)
        AGG specific options
          Symmetric graph false
          Number of levels to square graph 10
          Number smoothing steps 1
        Complexity: grid = 1.33047
  Coarse grid solver -- level -------------------------------
    KSP Object: (mg_coarse_) 256 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_coarse_) 256 MPI processes
      type: redundant
        First (color=0) of 256 PCs follows
      KSP Object: (mg_coarse_redundant_) 1 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_coarse_redundant_) 1 MPI processes
        type: lu
          out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: nd
          factor fill ratio given 5., needed 3.04503
            Factored matrix follows:
              Mat Object: 1 MPI processes
                type: seqaij
                rows=155, cols=155
                package used to perform factorization: petsc
                total: nonzeros=6897, allocated nonzeros=6897
                total number of mallocs used during MatSetValues calls=0
                  using I-node routines: found 117 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object: 1 MPI processes
          type: seqaij
          rows=155, cols=155
          total: nonzeros=2265, allocated nonzeros=2265
          total number of mallocs used during MatSetValues calls=0
            not using I-node routines
      linear system matrix = precond matrix:
      Mat Object: 256 MPI processes
        type: mpiaij
        rows=155, cols=155
        total: nonzeros=2265, allocated nonzeros=2265
        total number of mallocs used during MatSetValues calls=0
          not using I-node (on process 0) routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 256 MPI processes
      type: gmres
        restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
        happy breakdown tolerance 1e-30
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 256 MPI processes
      type: jacobi
      linear system matrix = precond matrix:
      Mat Object: 256 MPI processes
        type: mpiaij
        rows=3415, cols=3415
        total: nonzeros=52439, allocated nonzeros=52439
        total number of mallocs used during MatSetValues calls=0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object: (mg_levels_2_) 256 MPI processes
      type: gmres
        restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
        happy breakdown tolerance 1e-30
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_2_) 256 MPI processes
      type: jacobi
      linear system matrix = precond matrix:
      Mat Object: 256 MPI processes
        type: mpiaij
        rows=89922, cols=89922
        total: nonzeros=1434332, allocated nonzeros=1434332
        total number of mallocs used during MatSetValues calls=0
          using nonscalable MatPtAP() implementation
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object: (mg_levels_3_) 256 MPI processes
      type: gmres
        restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
        happy breakdown tolerance 1e-30
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_3_) 256 MPI processes
      type: jacobi
      linear system matrix = precond matrix:
      Mat Object: 256 MPI processes
        type: mpiaij
        rows=2183018, cols=2183018
        total: nonzeros=32605624, allocated nonzeros=32605624
        total number of mallocs used during MatSetValues calls=0
          using nonscalable MatPtAP() implementation
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 4 -------------------------------
    KSP Object: (mg_levels_4_) 256 MPI processes
      type: gmres
        restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
        happy breakdown tolerance 1e-30
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_4_) 256 MPI processes
      type: jacobi
      linear system matrix = precond matrix:
      Mat Object: 256 MPI processes
        type: mpiaij
        rows=37579152, cols=37579152
        total: nonzeros=409437190, allocated nonzeros=409437190
        total number of mallocs used during MatSetValues calls=0
          using scalable MatPtAP() implementation
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 5 -------------------------------
    KSP Object: (mg_levels_5_) 256 MPI processes
      type: gmres
        restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
        happy breakdown tolerance 1e-30
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_5_) 256 MPI processes
      type: jacobi
      linear system matrix = precond matrix:
      Mat Object: 256 MPI processes
        type: mpiaij
        rows=268435456, cols=268435456
        total: nonzeros=1342111744, allocated nonzeros=1342111744
        total number of mallocs used during MatSetValues calls=0
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 256 MPI processes
    type: mpiaij
    rows=268435456, cols=268435456
    total: nonzeros=1342111744, allocated nonzeros=1342111744
    total number of mallocs used during MatSetValues calls=0
1 TS dt 3.851e-06 time 3.851e-06
    0 KSP Residual norm 1.515303548572e+10
    1 KSP Residual norm 6.946858214429e+08
    2 KSP Residual norm 6.385565962442e+07
    3 KSP Residual norm 5.590092646255e+06
    4 KSP Residual norm 5.213449902279e+05
    5 KSP Residual norm 5.090376417926e+04
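(An arithmetic cross-check of the view above, reading "Complexity: grid" as the ratio of total nonzeros summed over all six levels to nonzeros on the finest level:
    2265 + 52439 + 1434332 + 32605624 + 409437190 + 1342111744 = 1785643594
    1785643594 / 1342111744 = 1.33047
so the coarse levels add roughly one third of the fine-grid matrix storage and of the multigrid work per V-cycle.)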
2 TS dt 3.851e-06 time 7.702e-06
    0 KSP Residual norm 1.376338309708e+10
    1 KSP Residual norm 6.700711345142e+08
    2 KSP Residual norm 6.276893779249e+07
    3 KSP Residual norm 5.464072083084e+06
    4 KSP Residual norm 5.172872055727e+05
    5 KSP Residual norm 5.058666765745e+04
3 TS dt 3.851e-06 time 1.1553e-05
    0 KSP Residual norm 1.262360249428e+10
    1 KSP Residual norm 6.349211968831e+08
    2 KSP Residual norm 5.922043471053e+07
    3 KSP Residual norm 5.197805501332e+06
    4 KSP Residual norm 4.947798225484e+05
    5 KSP Residual norm 4.855762422450e+04
4 TS dt 3.851e-06 time 1.5404e-05
    0 KSP Residual norm 1.169398846385e+10
    1 KSP Residual norm 6.387741607745e+08
    2 KSP Residual norm 5.770279163597e+07
    3 KSP Residual norm 5.091527333429e+06
    4 KSP Residual norm 4.877044430604e+05
    5 KSP Residual norm 4.808435030497e+04
5 TS dt 3.851e-06 time 1.9255e-05
    0 KSP Residual norm 1.093943349180e+10
    1 KSP Residual norm 6.465199750732e+08
    2 KSP Residual norm 5.632726921983e+07
    3 KSP Residual norm 4.995659342893e+06
    4 KSP Residual norm 4.748465461618e+05
    5 KSP Residual norm 4.668965984724e+04
6 TS dt 3.851e-06 time 2.3106e-05
    0 KSP Residual norm 1.029007291288e+10
    1 KSP Residual norm 6.301715448013e+08
    2 KSP Residual norm 5.249621137509e+07
    3 KSP Residual norm 4.729620864597e+06
    4 KSP Residual norm 4.442480151012e+05
    5 KSP Residual norm 4.380431850372e+04
7 TS dt 3.851e-06 time 2.6957e-05
    0 KSP Residual norm 9.717330053116e+09
    1 KSP Residual norm 6.030084399356e+08
    2 KSP Residual norm 4.998305142182e+07
    3 KSP Residual norm 4.558659909524e+06
    4 KSP Residual norm 4.265251250485e+05
    5 KSP Residual norm 4.222422327399e+04
8 TS dt 3.851e-06 time 3.0808e-05
TS Object: 256 MPI processes
  type: cn
  maximum steps=8
  maximum time=3.0808e-05
  total number of linear solver iterations=40
  total number of linear solve failures=0
  total number of rejected steps=0
  using relative error tolerance of 0.0001, using absolute error tolerance of 0.0001
  TSAdapt Object: 256 MPI processes
    type: none
  SNES Object: 256 MPI processes
    type: ksponly
    maximum iterations=50, maximum function evaluations=10000
    tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
    total number of linear solver iterations=5
    total number of function evaluations=1
    norm schedule ALWAYS
right preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: 256 MPI processes type: gamg type is MULTIPLICATIVE, levels=6 cycles=v Cycles per PCApply=1 Using externally compute Galerkin coarse grid matrices GAMG specific options Threshold for dropping small values in graph on each level = -0.04 -0.04 -0.04 -0.04 Threshold scaling factor for each level not specified = 1. Using parallel coarse grid solver (all coarse grid equations not put on one process) AGG specific options Symmetric graph false Number of levels to square graph 10 Number smoothing steps 1 Complexity: grid = 1.33047 Coarse grid solver -- level ------------------------------- KSP Object: (mg_coarse_) 256 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_) 256 MPI processes type: redundant First (color=0) of 256 PCs follows KSP Object: (mg_coarse_redundant_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_redundant_) 1 MPI processes type: lu out-of-place factorization tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: nd factor fill ratio given 5., needed 3.04503 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=155, cols=155 package used to perform factorization: petsc total: nonzeros=6897, allocated nonzeros=6897 total number of mallocs used during MatSetValues calls=0 using I-node routines: found 117 nodes, limit used is 5 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=155, cols=155 total: nonzeros=2265, allocated nonzeros=2265 total number of mallocs used during MatSetValues calls=0 not using I-node routines linear system matrix = precond matrix: Mat Object: 256 MPI processes type: mpiaij rows=155, cols=155 total: nonzeros=2265, allocated nonzeros=2265 total number of mallocs used during MatSetValues calls=0 using nonscalable MatPtAP() implementation not using I-node (on process 0) routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 256 MPI processes type: gmres restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement happy breakdown tolerance 1e-30 maximum iterations=2, nonzero initial guess tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using NONE norm type for convergence test PC Object: (mg_levels_1_) 256 MPI processes type: jacobi linear system matrix = precond matrix: Mat Object: 256 MPI processes type: mpiaij rows=3415, cols=3415 total: nonzeros=52439, allocated nonzeros=52439 total number of mallocs used during MatSetValues calls=0 using nonscalable MatPtAP() implementation not using I-node (on process 0) routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (mg_levels_2_) 256 MPI processes type: gmres restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement happy breakdown tolerance 1e-30 maximum iterations=2, nonzero initial guess tolerances: relative=1e-05, absolute=1e-50, divergence=10000. 
left preconditioning using NONE norm type for convergence test PC Object: (mg_levels_2_) 256 MPI processes type: jacobi linear system matrix = precond matrix: Mat Object: 256 MPI processes type: mpiaij rows=89922, cols=89922 total: nonzeros=1434332, allocated nonzeros=1434332 total number of mallocs used during MatSetValues calls=0 using nonscalable MatPtAP() implementation not using I-node (on process 0) routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (mg_levels_3_) 256 MPI processes type: gmres restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement happy breakdown tolerance 1e-30 maximum iterations=2, nonzero initial guess tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using NONE norm type for convergence test PC Object: (mg_levels_3_) 256 MPI processes type: jacobi linear system matrix = precond matrix: Mat Object: 256 MPI processes type: mpiaij rows=2183018, cols=2183018 total: nonzeros=32605624, allocated nonzeros=32605624 total number of mallocs used during MatSetValues calls=0 using nonscalable MatPtAP() implementation not using I-node (on process 0) routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 4 ------------------------------- KSP Object: (mg_levels_4_) 256 MPI processes type: gmres restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement happy breakdown tolerance 1e-30 maximum iterations=2, nonzero initial guess tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using NONE norm type for convergence test PC Object: (mg_levels_4_) 256 MPI processes type: jacobi linear system matrix = precond matrix: Mat Object: 256 MPI processes type: mpiaij rows=37579152, cols=37579152 total: nonzeros=409437190, allocated nonzeros=409437190 total number of mallocs used during MatSetValues calls=0 using scalable MatPtAP() implementation not using I-node (on process 0) routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 5 ------------------------------- KSP Object: (mg_levels_5_) 256 MPI processes type: gmres restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement happy breakdown tolerance 1e-30 maximum iterations=2, nonzero initial guess tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using NONE norm type for convergence test PC Object: (mg_levels_5_) 256 MPI processes type: jacobi linear system matrix = precond matrix: Mat Object: 256 MPI processes type: mpiaij rows=268435456, cols=268435456 total: nonzeros=1342111744, allocated nonzeros=1342111744 total number of mallocs used during MatSetValues calls=0 Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: 256 MPI processes type: mpiaij rows=268435456, cols=268435456 total: nonzeros=1342111744, allocated nonzeros=1342111744 total number of mallocs used during MatSetValues calls=0 ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. 
Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- ./ex_dmda on a named apsxrmd-0001 with 256 processors, by sajid Mon Mar 16 17:37:50 2020 Using Petsc Release Version 3.12.3, unknown Max Max/Min Avg Total Time (sec): 3.329e+02 1.000 3.328e+02 Objects: 7.860e+02 1.000 7.860e+02 Flop: 3.648e+10 1.001 3.648e+10 9.339e+12 Flop/sec: 1.096e+08 1.001 1.096e+08 2.806e+10 MPI Messages: 1.605e+04 4.029 9.985e+03 2.556e+06 MPI Message Lengths: 4.876e+07 2.025 4.534e+03 1.159e+10 MPI Reductions: 2.951e+03 1.000 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flop and VecAXPY() for complex vectors of length N --> 8N flop Summary of Stages: ----- Time ------ ----- Flop ------ --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total Count %Total Avg %Total Count %Total 0: Main Stage: 3.3273e+02 100.0% 9.3388e+12 100.0% 2.556e+06 100.0% 4.534e+03 100.0% 2.944e+03 99.8% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flop: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent AvgLen: average message length (bytes) Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). 
%T - percent time in this phase %F - percent flop in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flop --- Global --- --- Stage ---- Total Max Ratio Max Ratio Max Ratio Mess AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage BuildTwoSided 45 1.0 3.1316e-01 2.6 0.00e+00 0.0 2.5e+04 8.0e+00 0.0e+00 0 0 1 0 0 0 0 1 0 0 0 BuildTwoSidedF 106 1.0 5.9374e+00 2.0 0.00e+00 0.0 7.3e+04 1.1e+04 0.0e+00 1 0 3 7 0 1 0 3 7 0 0 DMCreateMat 1 1.0 6.6529e+00 1.0 0.00e+00 0.0 1.4e+03 5.5e+03 6.0e+00 2 0 0 0 0 2 0 0 0 0 0 SFSetGraph 45 1.0 6.3653e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 SFSetUp 45 1.0 6.4681e-01 1.5 0.00e+00 0.0 7.6e+04 1.4e+03 0.0e+00 0 0 3 1 0 0 0 3 1 0 0 SFBcastOpBegin 1671 1.0 4.1366e-01 2.6 0.00e+00 0.0 2.0e+06 4.4e+03 0.0e+00 0 0 79 77 0 0 0 79 77 0 0 SFBcastOpEnd 1671 1.0 6.4216e-01 3.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 SFReduceBegin 200 1.0 3.0439e-02 2.8 0.00e+00 0.0 2.1e+05 1.4e+03 0.0e+00 0 0 8 2 0 0 0 8 2 0 0 SFReduceEnd 200 1.0 6.3287e-0111.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecView 1 1.0 3.8907e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 12 0 0 0 0 12 0 0 0 0 0 VecMDot 890 1.0 1.0506e+01 1.3 3.85e+09 1.0 0.0e+00 0.0e+00 8.9e+02 3 11 0 0 30 3 11 0 0 30 93779 VecNorm 1303 1.0 5.2890e+00 1.3 2.82e+09 1.0 0.0e+00 0.0e+00 1.3e+03 1 8 0 0 44 1 8 0 0 44 136534 VecScale 1311 1.0 5.2244e+00 1.1 1.44e+09 1.0 0.0e+00 0.0e+00 0.0e+00 2 4 0 0 0 2 4 0 0 0 70755 VecCopy 437 1.0 2.4498e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecSet 1209 1.0 2.1904e+00 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecAXPY 629 1.0 3.4704e+00 1.1 1.37e+09 1.0 0.0e+00 0.0e+00 0.0e+00 1 4 0 0 0 1 4 0 0 0 100842 VecAYPX 216 1.0 1.4737e+00 1.3 2.60e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 45130 VecAXPBYCZ 8 1.0 2.7323e-01 2.1 1.01e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 94316 VecMAXPY 1303 1.0 1.2638e+01 1.1 5.82e+09 1.0 0.0e+00 0.0e+00 0.0e+00 4 16 0 0 0 4 16 0 0 0 117936 VecAssemblyBegin 33 1.0 7.6024e-01 4.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAssemblyEnd 33 1.0 5.2248e-0255.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecPointwiseMult 1255 1.0 1.0930e+01 1.1 1.21e+09 1.0 0.0e+00 0.0e+00 0.0e+00 3 3 0 0 0 3 3 0 0 0 28320 VecLoad 1 1.0 9.9491e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 3 0 0 0 0 0 VecScatterBegin 1828 1.0 4.5465e-01 2.0 0.00e+00 0.0 2.2e+06 4.2e+03 0.0e+00 0 0 86 79 0 0 0 86 79 0 0 VecScatterEnd 1828 1.0 1.0279e+00 3.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSetRandom 5 1.0 2.7230e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 1255 1.0 8.2105e+00 1.1 3.63e+09 1.0 0.0e+00 0.0e+00 1.3e+03 2 10 0 0 43 2 10 0 0 43 113095 MatMult 1306 1.0 7.2966e+01 1.0 1.49e+10 1.0 1.7e+06 4.9e+03 0.0e+00 21 41 66 70 0 21 41 66 70 0 52136 MatMultAdd 200 1.0 8.4009e+00 1.0 9.35e+08 1.0 2.1e+05 1.4e+03 0.0e+00 2 3 8 2 0 2 3 8 2 0 28499 MatMultTranspose 200 1.0 5.6989e+00 1.1 9.35e+08 1.0 2.1e+05 1.4e+03 0.0e+00 2 3 
8 2 0 2 3 8 2 0 42012 MatSolve 40 1.0 7.2412e-02 9.2 2.18e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 7715 MatLUFactorSym 2 1.0 3.8199e-0228.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatLUFactorNum 8 1.0 3.9006e-02 1.3 5.81e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 38114 MatCopy 6 1.0 4.1628e-04 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatConvert 7 1.0 6.6640e-01 1.7 0.00e+00 0.0 9.7e+03 1.5e+03 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatScale 30 1.0 1.6298e+00 1.5 3.94e+08 1.0 6.5e+03 4.5e+03 0.0e+00 0 1 0 0 0 0 1 0 0 0 61846 MatResidual 200 1.0 1.0894e+01 1.0 2.23e+09 1.0 2.6e+05 4.5e+03 0.0e+00 3 6 10 10 0 3 6 10 10 0 52451 MatAssemblyBegin 279 1.0 5.5863e+00 2.1 0.00e+00 0.0 7.3e+04 1.1e+04 0.0e+00 1 0 3 7 0 1 0 3 7 0 0 MatAssemblyEnd 279 1.0 5.0471e+00 1.1 4.38e+05 2.2 5.1e+04 1.1e+03 1.2e+02 1 0 2 0 4 1 0 2 0 4 20 MatGetRowIJ 2 1.0 8.8736e-02732.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatCreateSubMats 8 1.0 1.8844e-01 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 1.4e+01 0 0 0 0 0 0 0 0 0 0 0 MatCreateSubMat 4 1.0 3.3782e-01 1.0 0.00e+00 0.0 3.3e+03 3.5e+02 4.8e+01 0 0 0 0 2 0 0 0 0 2 0 MatGetOrdering 2 1.0 9.7793e-02133.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatCoarsen 5 1.0 1.6620e+00 1.0 0.00e+00 0.0 6.1e+04 2.7e+03 3.3e+01 0 0 2 1 1 0 0 2 1 1 0 MatZeroEntries 42 1.0 1.0265e-01 2.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatView 81 1.3 3.3003e-01 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 6.3e+01 0 0 0 0 2 0 0 0 0 2 0 MatAXPY 5 1.0 2.1628e+00 1.1 4.82e+06 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 570 MatTranspose 80 1.0 2.3021e+00 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 MatMatMult 5 1.0 4.4538e+00 1.1 5.58e+07 1.0 3.5e+04 2.6e+03 4.2e+01 1 0 1 1 1 1 0 1 1 1 3207 MatMatMultSym 5 1.0 2.6664e+00 1.0 0.00e+00 0.0 2.8e+04 2.2e+03 4.0e+01 1 0 1 1 1 1 0 1 1 1 0 MatMatMultNum 5 1.0 1.2941e+00 1.0 5.58e+07 1.0 6.5e+03 4.5e+03 0.0e+00 0 0 0 0 0 0 0 0 0 0 11039 MatPtAP 40 1.0 6.2440e+01 1.0 1.98e+09 1.0 1.9e+05 8.2e+03 1.1e+02 19 5 8 14 4 19 5 8 14 4 8133 MatPtAPSymbolic 10 1.0 1.4395e+01 1.0 0.00e+00 0.0 7.2e+04 4.5e+03 7.0e+01 4 0 3 3 2 4 0 3 3 2 0 MatPtAPNumeric 40 1.0 4.7911e+01 1.0 1.98e+09 1.0 1.2e+05 1.0e+04 4.0e+01 14 5 5 11 1 14 5 5 11 1 10599 MatTrnMatMult 5 1.0 2.4071e+01 1.0 3.69e+08 1.0 3.9e+04 1.5e+04 5.8e+01 7 1 2 5 2 7 1 2 5 2 3919 MatTrnMatMultSym 5 1.0 1.5486e+01 1.0 0.00e+00 0.0 2.8e+04 8.0e+03 5.0e+01 5 0 1 2 2 5 0 1 2 2 0 MatTrnMatMultNum 5 1.0 8.5882e+00 1.0 3.69e+08 1.0 1.1e+04 3.1e+04 8.0e+00 3 1 0 3 0 3 1 0 3 0 10983 MatRedundantMat 8 1.0 2.8105e-01 3.4 0.00e+00 0.0 0.0e+00 0.0e+00 1.4e+01 0 0 0 0 0 0 0 0 0 0 0 MatMPIConcateSeq 8 1.0 1.0566e-0164.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetLocalMat 58 1.0 3.6725e+00 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 MatGetBrAoCol 50 1.0 1.7231e+0010.4 0.00e+00 0.0 1.0e+05 7.8e+03 0.0e+00 0 0 4 7 0 0 0 4 7 0 0 TSStep 8 1.0 2.6639e+02 1.0 3.65e+10 1.0 2.6e+06 4.5e+03 2.9e+03 80100100100 98 80100100100 98 35057 TSFunctionEval 16 1.0 9.5915e+00 1.1 8.47e+08 1.0 1.5e+04 1.6e+04 0.0e+00 3 2 1 2 0 3 2 1 2 0 22612 TSJacobianEval 24 1.0 7.2019e+00 1.1 2.01e+08 1.0 0.0e+00 0.0e+00 0.0e+00 2 1 0 0 0 2 1 0 0 0 7156 SNESSolve 8 1.0 2.5736e+02 1.0 3.59e+10 1.0 2.5e+06 4.5e+03 2.9e+03 77 99100 99 98 77 99100 99 98 35745 SNESSetUp 1 1.0 4.1294e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 0 0 SNESFunctionEval 8 1.0 2.0196e+00 1.2 4.36e+08 1.0 7.7e+03 1.6e+04 0.0e+00 
1 1 0 1 0 1 1 0 1 0 55290 SNESJacobianEval 8 1.0 7.2020e+00 1.1 2.01e+08 1.0 0.0e+00 0.0e+00 0.0e+00 2 1 0 0 0 2 1 0 0 0 7156 KSPSetUp 69 1.0 8.1191e-01 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 2.6e+01 0 0 0 0 1 0 0 0 0 1 0 KSPSolve 8 1.0 2.4820e+02 1.0 3.52e+10 1.0 2.5e+06 4.5e+03 2.8e+03 74 96 99 98 94 74 96 99 98 95 36268 KSPGMRESOrthog 890 1.0 1.8700e+01 1.1 7.70e+09 1.0 0.0e+00 0.0e+00 8.9e+02 5 21 0 0 30 5 21 0 0 30 105377 PCGAMGGraph_AGG 5 1.0 2.6489e+00 1.1 5.58e+07 1.0 1.6e+04 2.7e+03 1.0e+01 1 0 1 0 0 1 0 1 0 0 5393 PCGAMGCoarse_AGG 5 1.0 2.9635e+01 1.0 3.69e+08 1.0 1.4e+05 7.3e+03 1.1e+02 9 1 5 9 4 9 1 5 9 4 3183 PCGAMGProl_AGG 5 1.0 9.3756e+00 1.0 0.00e+00 0.0 2.9e+04 4.6e+03 6.0e+01 3 0 1 1 2 3 0 1 1 2 0 PCGAMGPOpt_AGG 5 1.0 1.4200e+01 1.0 1.97e+09 1.0 1.0e+05 3.8e+03 1.9e+02 4 5 4 3 6 4 5 4 3 6 35543 GAMG: createProl 5 1.0 5.5923e+01 1.0 2.40e+09 1.0 2.8e+05 5.5e+03 3.7e+02 17 7 11 13 12 17 7 11 13 12 10967 Graph 10 1.0 2.6476e+00 1.1 5.58e+07 1.0 1.6e+04 2.7e+03 1.0e+01 1 0 1 0 0 1 0 1 0 0 5395 MIS/Agg 5 1.0 1.6628e+00 1.0 0.00e+00 0.0 6.1e+04 2.7e+03 3.3e+01 0 0 2 1 1 0 0 2 1 1 0 SA: col data 5 1.0 1.2102e+00 1.0 0.00e+00 0.0 1.6e+04 7.4e+03 2.0e+01 0 0 1 1 1 0 0 1 1 1 0 SA: frmProl0 5 1.0 7.8960e+00 1.0 0.00e+00 0.0 1.4e+04 1.4e+03 2.0e+01 2 0 1 0 1 2 0 1 0 1 0 SA: smooth 5 1.0 6.9252e+00 1.1 8.40e+07 1.0 3.5e+04 2.6e+03 5.2e+01 2 0 1 1 2 2 0 1 1 2 3105 GAMG: partLevel 5 1.0 1.3708e+01 1.0 2.43e+08 1.0 6.3e+04 4.2e+03 1.5e+02 4 1 2 2 5 4 1 2 2 5 4534 repartition 2 1.0 1.7902e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.4e+01 0 0 0 0 0 0 0 0 0 0 0 Invert-Sort 2 1.0 1.4781e-01 2.4 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00 0 0 0 0 0 0 0 0 0 0 0 Move A 2 1.0 2.6408e-01 1.2 0.00e+00 0.0 1.9e+03 5.9e+02 2.6e+01 0 0 0 0 1 0 0 0 0 1 0 Move P 2 1.0 1.2238e-01 1.0 0.00e+00 0.0 1.4e+03 4.2e+01 2.8e+01 0 0 0 0 1 0 0 0 0 1 0 PCSetUp 8 1.0 1.2202e+02 1.0 4.39e+09 1.0 4.9e+05 6.5e+03 6.3e+02 36 12 19 27 21 36 12 19 27 22 9201 PCApply 40 1.0 1.1077e+02 1.0 2.63e+10 1.0 2.0e+06 3.8e+03 2.0e+03 33 72 79 65 68 33 72 79 65 68 60662 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Distributed Mesh 3 2 9920 0. Index Set 105 105 8889920 0. IS L to G Mapping 3 2 7095488 0. Star Forest Graph 51 48 50336 0. Discrete System 3 2 2016 0. Vector 305 305 1246077136 0. Vec Scatter 40 39 33384 0. Viewer 13 12 10656 0. Matrix 214 214 1512647008 0. Matrix Coarsen 5 5 3420 0. TSAdapt 1 1 1448 0. TS 1 1 2456 0. DMTS 1 1 792 0. SNES 1 1 1532 0. DMSNES 3 3 2160 0. Krylov Solver 13 13 503280 0. DMKSP interface 1 1 704 0. Preconditioner 13 13 13704 0. PetscRandom 10 10 7100 0. 
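As a worked check of how the Mflop/s column in the event table above is computed (per the formula stated in the header: the flop summed over all processors divided by the maximum time, reported in Mflop/s), take the KSPSolve row:

    per-rank flop (max)     3.52e+10, ratio 1.0   ->  summed over 256 ranks, roughly 256 * 3.52e+10 = 9.01e+12 flop
    max time                2.4820e+02 s
    9.01e+12 / 2.4820e+02   = 3.63e+10 flop/s, i.e. about 36,300 Mflop/s

which agrees with the reported 36268 (the true sum of flop is slightly below 256 times the per-rank maximum, hence the small difference). Read together with the %T column, KSPSolve accounts for about 74% of the 333 s run, and, as read from the table, most of that goes to PCSetUp (about 122 s, 36%) and PCApply (about 111 s, 33%).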
======================================================================================================================== Average time to get PetscTime(): 8.10623e-07 Average time for MPI_Barrier(): 0.000174189 Average time for zero size MPI_Send(): 1.08946e-05 #PETSc Option Table entries: -ksp_monitor -ksp_rtol 1e-5 -ksp_type fgmres -ksp_view -log_view -mg_levels_ksp_type gmres -mg_levels_pc_type jacobi -pc_gamg_coarse_eq_limit 1000 -pc_gamg_reuse_interpolation true -pc_gamg_square_graph 10 -pc_gamg_threshold -0.04 -pc_gamg_type agg -pc_gamg_use_parallel_coarse_grid_solver -pc_mg_monitor -pc_type gamg -prop_steps 8 -ts_monitor -ts_type cn #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with 64 bit PetscInt Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 16 sizeof(PetscInt) 8 Configure options: --prefix=/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/petsc-develop-bbxeabwmh54bhgxtcvo6pxpltipre5ih --with-ssl=0 --download-c2html=0 --download-sowing=0 --download-hwloc=0 CFLAGS="-O3 -xMIC-AVX512 -gcc-name=/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-xyzezhjmbiebkjfoakso464rhfshlkyq/bin/gcc" FFLAGS="-O3 -xMIC-AVX512 -gcc-name=/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-xyzezhjmbiebkjfoakso464rhfshlkyq/bin/gfortran" CXXFLAGS="-O3 -xMIC-AVX512 -gcc-name=/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-xyzezhjmbiebkjfoakso464rhfshlkyq/bin/g++" --with-cc=/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mpi-2018.3.222-4aoglyb6zfpaydn6iuiz63wyj3soolv2/compilers_and_libraries_2018.3.222/linux/mpi/intel64/bin/mpiicc --with-cxx=/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mpi-2018.3.222-4aoglyb6zfpaydn6iuiz63wyj3soolv2/compilers_and_libraries_2018.3.222/linux/mpi/intel64/bin/mpiicpc --with-fc=/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mpi-2018.3.222-4aoglyb6zfpaydn6iuiz63wyj3soolv2/compilers_and_libraries_2018.3.222/linux/mpi/intel64/bin/mpiifort --FC_LINKER_FLAGS=-lintlc --with-precision=double --with-scalar-type=complex --with-shared-libraries=1 --with-debugging=0 --with-64-bit-indices=1 COPTFLAGS= FOPTFLAGS= CXXOPTFLAGS= --with-blaslapack-lib="/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-6ff5hhpjd6k3y4uvcg4mrthoqj3e3ok4/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64/libmkl_intel_lp64.so /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-6ff5hhpjd6k3y4uvcg4mrthoqj3e3ok4/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64/libmkl_sequential.so /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-6ff5hhpjd6k3y4uvcg4mrthoqj3e3ok4/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64/libmkl_core.so /lib64/libpthread.so /lib64/libm.so /lib64/libdl.so" --with-avx-512-kernels --with-memalign=64 --with-x=0 --with-clanguage=C --with-scalapack=0 --with-metis=1 --with-metis-dir=/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/metis-5.1.0-exfs6tdtvltgzpxuruhx7enxbxcgbqth --with-hdf5=1 
--with-hdf5-dir=/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/hdf5-1.10.5-xd7rbe4dwdpadwhvq2znfps5c3kmqjih --with-hypre=0 --with-parmetis=1 --with-parmetis-dir=/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/parmetis-4.0.3-pe2lmbtgsfifnh4gklb7p6vc53t2euvl --with-mumps=0 --with-trilinos=0 --with-fftw=1 --with-fftw-dir=/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/fftw-3.3.8-fjaf3bnsdnz543f4xw7ygxwwpkgakeub --with-cxx-dialect=C++11 --with-superlu_dist-include=/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/superlu-dist-develop-h3rx27nxr4duu6nzywdaynctji33g3gv/include --with-superlu_dist-lib=/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/superlu-dist-develop-h3rx27nxr4duu6nzywdaynctji33g3gv/lib/libsuperlu_dist.a --with-superlu_dist=1 --with-suitesparse=0 --with-zlib-include=/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/zlib-1.2.11-llcfo5bbeptvecfwv5erhgz4gitdo2c3/include --with-zlib-lib=/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/zlib-1.2.11-llcfo5bbeptvecfwv5erhgz4gitdo2c3/lib/libz.so --with-zlib=1 ----------------------------------------- Libraries compiled on 2020-01-06 16:18:17 on apsxrmd-0001 Machine characteristics: Linux-3.10.0-957.21.3.el7.x86_64-x86_64-with-centos-7.6.1810-Core Using PETSc directory: /blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/petsc-develop-bbxeabwmh54bhgxtcvo6pxpltipre5ih Using PETSc arch: ----------------------------------------- Using C compiler: /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mpi-2018.3.222-4aoglyb6zfpaydn6iuiz63wyj3soolv2/compilers_and_libraries_2018.3.222/linux/mpi/intel64/bin/mpiicc -O3 -xMIC-AVX512 -gcc-name=/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-xyzezhjmbiebkjfoakso464rhfshlkyq/bin/gcc -fPIC Using Fortran compiler: /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mpi-2018.3.222-4aoglyb6zfpaydn6iuiz63wyj3soolv2/compilers_and_libraries_2018.3.222/linux/mpi/intel64/bin/mpiifort -O3 -xMIC-AVX512 -gcc-name=/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-xyzezhjmbiebkjfoakso464rhfshlkyq/bin/gfortran -fPIC ----------------------------------------- Using include paths: -I/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/petsc-develop-bbxeabwmh54bhgxtcvo6pxpltipre5ih/include -I/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/superlu-dist-develop-h3rx27nxr4duu6nzywdaynctji33g3gv/include -I/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/fftw-3.3.8-fjaf3bnsdnz543f4xw7ygxwwpkgakeub/include -I/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/hdf5-1.10.5-xd7rbe4dwdpadwhvq2znfps5c3kmqjih/include -I/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/parmetis-4.0.3-pe2lmbtgsfifnh4gklb7p6vc53t2euvl/include -I/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/metis-5.1.0-exfs6tdtvltgzpxuruhx7enxbxcgbqth/include -I/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/zlib-1.2.11-llcfo5bbeptvecfwv5erhgz4gitdo2c3/include ----------------------------------------- Using C linker: 
/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mpi-2018.3.222-4aoglyb6zfpaydn6iuiz63wyj3soolv2/compilers_and_libraries_2018.3.222/linux/mpi/intel64/bin/mpiicc Using Fortran linker: /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mpi-2018.3.222-4aoglyb6zfpaydn6iuiz63wyj3soolv2/compilers_and_libraries_2018.3.222/linux/mpi/intel64/bin/mpiifort Using libraries: -Wl,-rpath,/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/petsc-develop-bbxeabwmh54bhgxtcvo6pxpltipre5ih/lib -L/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/petsc-develop-bbxeabwmh54bhgxtcvo6pxpltipre5ih/lib -lpetsc -Wl,-rpath,/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/superlu-dist-develop-h3rx27nxr4duu6nzywdaynctji33g3gv/lib -L/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/superlu-dist-develop-h3rx27nxr4duu6nzywdaynctji33g3gv/lib -Wl,-rpath,/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/fftw-3.3.8-fjaf3bnsdnz543f4xw7ygxwwpkgakeub/lib -L/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/fftw-3.3.8-fjaf3bnsdnz543f4xw7ygxwwpkgakeub/lib -Wl,-rpath,/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-6ff5hhpjd6k3y4uvcg4mrthoqj3e3ok4/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64 -L/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-6ff5hhpjd6k3y4uvcg4mrthoqj3e3ok4/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64 /lib64/libpthread.so /lib64/libm.so /lib64/libdl.so -Wl,-rpath,/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/hdf5-1.10.5-xd7rbe4dwdpadwhvq2znfps5c3kmqjih/lib -L/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/hdf5-1.10.5-xd7rbe4dwdpadwhvq2znfps5c3kmqjih/lib -Wl,-rpath,/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/parmetis-4.0.3-pe2lmbtgsfifnh4gklb7p6vc53t2euvl/lib -L/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/parmetis-4.0.3-pe2lmbtgsfifnh4gklb7p6vc53t2euvl/lib -Wl,-rpath,/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/metis-5.1.0-exfs6tdtvltgzpxuruhx7enxbxcgbqth/lib -L/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/metis-5.1.0-exfs6tdtvltgzpxuruhx7enxbxcgbqth/lib -Wl,-rpath,/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/zlib-1.2.11-llcfo5bbeptvecfwv5erhgz4gitdo2c3/lib -L/blues/gpfs/home/sajid/packages/spack/opt/spack/linux-centos7-mic_knl/intel-18.0.3/zlib-1.2.11-llcfo5bbeptvecfwv5erhgz4gitdo2c3/lib -Wl,-rpath,/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mpi-2018.3.222-4aoglyb6zfpaydn6iuiz63wyj3soolv2/compilers_and_libraries_2018.3.222/linux/mpi/intel64/lib/release_mt -L/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mpi-2018.3.222-4aoglyb6zfpaydn6iuiz63wyj3soolv2/compilers_and_libraries_2018.3.222/linux/mpi/intel64/lib/release_mt -Wl,-rpath,/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mpi-2018.3.222-4aoglyb6zfpaydn6iuiz63wyj3soolv2/compilers_and_libraries_2018.3.222/linux/mpi/intel64/lib 
-L/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mpi-2018.3.222-4aoglyb6zfpaydn6iuiz63wyj3soolv2/compilers_and_libraries_2018.3.222/linux/mpi/intel64/lib -Wl,-rpath,/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-18.0.3-d6gtsxst5w4bw3ko7qvtaweu23hv5y6b/compilers_and_libraries_2018.3.222/linux/tbb/lib/intel64/gcc4.7 -L/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-18.0.3-d6gtsxst5w4bw3ko7qvtaweu23hv5y6b/compilers_and_libraries_2018.3.222/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-18.0.3-d6gtsxst5w4bw3ko7qvtaweu23hv5y6b/lib -L/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-18.0.3-d6gtsxst5w4bw3ko7qvtaweu23hv5y6b/lib -Wl,-rpath,/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-18.0.3-d6gtsxst5w4bw3ko7qvtaweu23hv5y6b/compilers_and_libraries_2018.3.222/linux/compiler/lib/intel64_lin -L/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-18.0.3-d6gtsxst5w4bw3ko7qvtaweu23hv5y6b/compilers_and_libraries_2018.3.222/linux/compiler/lib/intel64_lin -Wl,-rpath,/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-xyzezhjmbiebkjfoakso464rhfshlkyq/lib/gcc/x86_64-pc-linux-gnu/7.3.0 -L/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-xyzezhjmbiebkjfoakso464rhfshlkyq/lib/gcc/x86_64-pc-linux-gnu/7.3.0 -Wl,-rpath,/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-xyzezhjmbiebkjfoakso464rhfshlkyq/lib64 -L/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-xyzezhjmbiebkjfoakso464rhfshlkyq/lib64 -Wl,-rpath,/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-xyzezhjmbiebkjfoakso464rhfshlkyq/lib -L/blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-xyzezhjmbiebkjfoakso464rhfshlkyq/lib -Wl,-rpath,/opt/intel/mpi-rt/2017.0.0/intel64/lib/release_mt -Wl,-rpath,/opt/intel/mpi-rt/2017.0.0/intel64/lib -lsuperlu_dist -lfftw3_mpi -lfftw3 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -lparmetis -lmetis -lz -lintlc -lstdc++ -ldl -lmpifort -lmpi -lmpigi -lrt -lpthread -lifport -lifcoremt_pic -limf -lsvml -lm -lipgo -lirc -lgcc_s -lirc_s -lquadmath -lstdc++ -ldl -----------------------------------------
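For reference, the option table recorded above fully determines the solver stack shown in the KSP/PC views (Crank-Nicolson time stepping, FGMRES with a right-applied GAMG preconditioner, GMRES/Jacobi level smoothers, and a redundant LU coarse solve). A representative launch line, assuming a generic MPI launcher such as mpiexec (the actual batch submission command is not part of this log), would be roughly:

    mpiexec -n 256 ./ex_dmda \
        -prop_steps 8 -ts_type cn -ts_monitor \
        -ksp_type fgmres -ksp_rtol 1e-5 -ksp_monitor -ksp_view \
        -pc_type gamg -pc_gamg_type agg -pc_gamg_threshold -0.04 \
        -pc_gamg_square_graph 10 -pc_gamg_coarse_eq_limit 1000 \
        -pc_gamg_reuse_interpolation true -pc_gamg_use_parallel_coarse_grid_solver \
        -pc_mg_monitor -mg_levels_ksp_type gmres -mg_levels_pc_type jacobi \
        -log_view

Here -prop_steps is presumably an application-level option of ex_dmda rather than a built-in PETSc option; the remaining flags are standard PETSc options taken verbatim from the option table above.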