Script started on Wed Dec 14 15:06:04 2011
alias: Too dangerous to alias that.
[madams-macbk-3:BISICLES/code/exec2D] markadams% ./driver2d.Darwin.g++-4.gfortran.DEBUG.PETSC.ex inputs.petsc
AmrIce::initGrids
Level 0: ((0,0) (31,31) (0,0))[0]
# AmrIce::initData
Warning: Ice temperature initialization is nonsense for now
Sum(rhs) for velocity solve = -1.99941e+16
Picard iteration 0 max(resid) = 781020
[0]solveprivate isdefined=0
  0 KSP Residual norm 2.499264167662e+07
  1 KSP Residual norm 6.330915810095e+06
  2 KSP Residual norm 1.856999435466e+06
  3 KSP Residual norm 3.967325725053e+05
  4 KSP Residual norm 9.037332775808e+04
  5 KSP Residual norm 1.826960320400e+04
Linear solve converged due to CONVERGED_RTOL iterations 5
KSP Object: 1 MPI processes
  type: gmres
    GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=100
  tolerances:  relative=0.001, absolute=1e-50, divergence=10000
  right preconditioning
  using nonzero initial guess
  using UNPRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: gamg
    MG: type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Not using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object:    (mg_coarse_)     1 MPI processes
      type: preonly
      maximum iterations=1, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (mg_coarse_)     1 MPI processes
      type: bjacobi
        block Jacobi: number of blocks = 1
        Local solve info for each block is in the following KSP and PC objects:
      [0] number of local blocks = 1, first local block number = 0
        [0] local block number 0
        KSP Object:        (mg_coarse_sub_)         1 MPI processes
          type: preonly
          maximum iterations=10000, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
        PC Object:        (mg_coarse_sub_)         1 MPI processes
          type: lu
            LU: out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            matrix ordering: nd
            factor fill ratio given 5, needed 3.26452
              Factored matrix follows:
                Matrix Object:                 1 MPI processes
                  type: seqaij
                  rows=306, cols=306
                  package used to perform factorization: petsc
                  total: nonzeros=45540, allocated nonzeros=45540
                  total number of mallocs used during MatSetValues calls =0
                    using I-node routines: found 97 nodes, limit used is 5
          linear system matrix = precond matrix:
          Matrix Object:           1 MPI processes
            type: seqaij
            rows=306, cols=306
            total: nonzeros=13950, allocated nonzeros=13950
            total number of mallocs used during MatSetValues calls =0
              using I-node routines: found 102 nodes, limit used is 5
        - - - - - - - - - - - - - - - - - -
      linear system matrix = precond matrix:
      Matrix Object:       1 MPI processes
        type: seqaij
        rows=306, cols=306
        total: nonzeros=13950, allocated nonzeros=13950
        total number of mallocs used during MatSetValues calls =0
          using I-node routines: found 102 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (mg_levels_1_)     1 MPI processes
      type: chebychev
        Chebychev: eigenvalue estimates:  min = 0.291234, max = 2.04663
      maximum iterations=1
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_1_)     1 MPI processes
      type: jacobi
      linear system matrix = precond matrix:
      Matrix Object:       1 MPI processes
        type: seqaij
        rows=2048, cols=2048
        total: nonzeros=26624, allocated nonzeros=47104
        total number of mallocs used during MatSetValues calls =2048
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Matrix Object:   1 MPI processes
    type: seqaij
    rows=2048, cols=2048
    total: nonzeros=26624, allocated nonzeros=47104
    total number of mallocs used during MatSetValues calls =2048
      not using I-node routines
Picard iteration 1 max(resid) = 727577 ------- Rate = 1.07345
[0]solveprivate isdefined=1
  0 KSP Residual norm 6.885328126772e+06
  1 KSP Residual norm 2.946109921809e+06
  2 KSP Residual norm 3.473083281802e+05
  3 KSP Residual norm 6.997669601579e+04
  4 KSP Residual norm 1.099474905893e+04
Linear solve converged due to CONVERGED_RTOL iterations 4
Picard iteration 2 max(resid) = 486568 ------- Rate = 1.49532
[0]solveprivate isdefined=1
  0 KSP Residual norm 2.987688431960e+06
  1 KSP Residual norm 9.667451163891e+05
  2 KSP Residual norm 1.001194753250e+05
  3 KSP Residual norm 1.844513588319e+04
Linear solve converged due to CONVERGED_RTOL iterations 3
Picard iteration 3 max(resid) = 137951 ------- Rate = 3.5271
[0]solveprivate isdefined=1
  0 KSP Residual norm 7.284421708198e+05
  1 KSP Residual norm 1.599989263137e+05
  2 KSP Residual norm 2.001197440787e+04
Linear solve converged due to CONVERGED_RTOL iterations 2
Picard iteration 4 max(resid) = 28505.1 ------- Rate = 4.83954
[0]solveprivate isdefined=1
  0 KSP Residual norm 1.225969407227e+05
  1 KSP Residual norm 2.161659983610e+04
Linear solve converged due to CONVERGED_RTOL iterations 1
Picard iteration 5 max(resid) = 9297.08 ------- Rate = 3.06602
[0]solveprivate isdefined=1
  0 KSP Residual norm 2.854103115959e+04
  1 KSP Residual norm 5.871254638950e+03
Linear solve converged due to CONVERGED_RTOL iterations 1
Picard iteration 6 max(resid) = 2064.26 ------- Rate = 4.50384
[0]solveprivate isdefined=1
  0 KSP Residual norm 5.987082000170e+03
Linear solve converged due to CONVERGED_RTOL iterations 0
Picard iteration 7 max(resid) = 2064.26 ------- Rate = 1
[0]solveprivate isdefined=1
  0 KSP Residual norm 5.987082000170e+03
Linear solve converged due to CONVERGED_RTOL iterations 0
Picard iteration 8 max(resid) = 2064.26 ------- Rate = 1
[0]solveprivate isdefined=1
  0 KSP Residual norm 5.987082000170e+03
Linear solve converged due to CONVERGED_RTOL iterations 0
Picard iteration 9 max(resid) = 2064.26 ------- Rate = 1
[0]solveprivate isdefined=1
  0 KSP Residual norm 5.987082000170e+03
Linear solve converged due to CONVERGED_RTOL iterations 0
Picard iteration 10 max(resid) = 2064.26 ------- Rate = 1
[0]solveprivate isdefined=1
  0 KSP Residual norm 5.987082000170e+03
Linear solve converged due to CONVERGED_RTOL iterations 0
system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 total: nonzeros=13950, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 102 nodes, limit used is 5 Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 1 MPI processes type: chebychev Chebychev: eigenvalue estimates: min = 0.291234, max = 2.04663 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_1_) 1 MPI processes type: jacobi linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2048, cols=2048 total: nonzeros=26624, allocated nonzeros=47104 total number of mallocs used during MatSetValues calls =2048 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2048, cols=2048 total: nonzeros=26624, allocated nonzeros=47104 total number of mallocs used during MatSetValues calls =2048 not using I-node routines Picard iteration 11 max(resid) = 2064.26 ------- Rate = 1 [0]solveprivate isdefined=1 0 KSP Residual norm 5.987082000170e+03 Linear solve converged due to CONVERGED_RTOL iterations 0 KSP Object: 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=100 tolerances: relative=0.001, absolute=1e-50, divergence=10000 right preconditioning using nonzero initial guess using UNPRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: gamg MG: type is MULTIPLICATIVE, levels=2 cycles=v Cycles per PCApply=1 Not using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: 
(mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_) 1 MPI processes type: bjacobi block Jacobi: number of blocks = 1 Local solve info for each block is in the following KSP and PC objects: [0] number of local blocks = 1, first local block number = 0 [0] local block number 0 KSP Object: (mg_coarse_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_sub_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5, needed 3.26452 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 package used to perform factorization: petsc total: nonzeros=45540, allocated nonzeros=45540 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 97 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 total: nonzeros=13950, allocated nonzeros=13950 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 102 nodes, limit used is 5 - - - - - - - - - - - - - - - - - - linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 total: nonzeros=13950, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 102 nodes, limit used is 5 Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 1 MPI processes type: chebychev Chebychev: eigenvalue estimates: min = 0.291234, max = 2.04663 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 
left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_1_) 1 MPI processes type: jacobi linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2048, cols=2048 total: nonzeros=26624, allocated nonzeros=47104 total number of mallocs used during MatSetValues calls =2048 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2048, cols=2048 total: nonzeros=26624, allocated nonzeros=47104 total number of mallocs used during MatSetValues calls =2048 not using I-node routines Picard iteration 12 max(resid) = 2064.26 ------- Rate = 1 [0]solveprivate isdefined=1 0 KSP Residual norm 5.987082000170e+03 Linear solve converged due to CONVERGED_RTOL iterations 0 KSP Object: 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=100 tolerances: relative=0.001, absolute=1e-50, divergence=10000 right preconditioning using nonzero initial guess using UNPRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: gamg MG: type is MULTIPLICATIVE, levels=2 cycles=v Cycles per PCApply=1 Not using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_) 1 MPI processes type: bjacobi block Jacobi: number of blocks = 1 Local solve info for each block is in the following KSP and PC objects: [0] number of local blocks = 1, first local block number = 0 [0] local block number 0 KSP Object: (mg_coarse_sub_) 1 MPI processes type: preonly maximum 
iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_sub_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5, needed 3.26452 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 package used to perform factorization: petsc total: nonzeros=45540, allocated nonzeros=45540 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 97 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 total: nonzeros=13950, allocated nonzeros=13950 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 102 nodes, limit used is 5 - - - - - - - - - - - - - - - - - - linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 total: nonzeros=13950, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 102 nodes, limit used is 5 Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 1 MPI processes type: chebychev Chebychev: eigenvalue estimates: min = 0.291234, max = 2.04663 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_1_) 1 MPI processes type: jacobi linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2048, cols=2048 total: nonzeros=26624, allocated nonzeros=47104 total number of mallocs used during MatSetValues calls =2048 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: 
seqaij rows=2048, cols=2048 total: nonzeros=26624, allocated nonzeros=47104 total number of mallocs used during MatSetValues calls =2048 not using I-node routines Picard iteration 13 max(resid) = 2064.26 ------- Rate = 1 [0]solveprivate isdefined=1 0 KSP Residual norm 5.987082000170e+03 Linear solve converged due to CONVERGED_RTOL iterations 0 KSP Object: 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=100 tolerances: relative=0.001, absolute=1e-50, divergence=10000 right preconditioning using nonzero initial guess using UNPRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: gamg MG: type is MULTIPLICATIVE, levels=2 cycles=v Cycles per PCApply=1 Not using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_) 1 MPI processes type: bjacobi block Jacobi: number of blocks = 1 Local solve info for each block is in the following KSP and PC objects: [0] number of local blocks = 1, first local block number = 0 [0] local block number 0 KSP Object: (mg_coarse_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_sub_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5, needed 3.26452 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 package used to perform factorization: petsc total: nonzeros=45540, allocated nonzeros=45540 
total number of mallocs used during MatSetValues calls =0 using I-node routines: found 97 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 total: nonzeros=13950, allocated nonzeros=13950 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 102 nodes, limit used is 5 - - - - - - - - - - - - - - - - - - linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 total: nonzeros=13950, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 102 nodes, limit used is 5 Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 1 MPI processes type: chebychev Chebychev: eigenvalue estimates: min = 0.291234, max = 2.04663 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_1_) 1 MPI processes type: jacobi linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2048, cols=2048 total: nonzeros=26624, allocated nonzeros=47104 total number of mallocs used during MatSetValues calls =2048 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2048, cols=2048 total: nonzeros=26624, allocated nonzeros=47104 total number of mallocs used during MatSetValues calls =2048 not using I-node routines Picard iteration 14 max(resid) = 2064.26 ------- Rate = 1 [0]solveprivate isdefined=1 0 KSP Residual norm 5.987082000170e+03 Linear solve converged due to CONVERGED_RTOL iterations 0 KSP Object: 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 
1e-30 maximum iterations=100 tolerances: relative=0.001, absolute=1e-50, divergence=10000 right preconditioning using nonzero initial guess using UNPRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: gamg MG: type is MULTIPLICATIVE, levels=2 cycles=v Cycles per PCApply=1 Not using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_) 1 MPI processes type: bjacobi block Jacobi: number of blocks = 1 Local solve info for each block is in the following KSP and PC objects: [0] number of local blocks = 1, first local block number = 0 [0] local block number 0 KSP Object: (mg_coarse_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_sub_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5, needed 3.26452 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 package used to perform factorization: petsc total: nonzeros=45540, allocated nonzeros=45540 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 97 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 total: nonzeros=13950, allocated nonzeros=13950 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 102 nodes, limit used is 5 - - - - - - - - - - - - - - - - - - linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 total: 
nonzeros=13950, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 102 nodes, limit used is 5 Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 1 MPI processes type: chebychev Chebychev: eigenvalue estimates: min = 0.291234, max = 2.04663 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_1_) 1 MPI processes type: jacobi linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2048, cols=2048 total: nonzeros=26624, allocated nonzeros=47104 total number of mallocs used during MatSetValues calls =2048 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2048, cols=2048 total: nonzeros=26624, allocated nonzeros=47104 total number of mallocs used during MatSetValues calls =2048 not using I-node routines Picard iteration 15 max(resid) = 2064.26 ------- Rate = 1 [0]solveprivate isdefined=1 0 KSP Residual norm 5.987082000170e+03 Linear solve converged due to CONVERGED_RTOL iterations 0 KSP Object: 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=100 tolerances: relative=0.001, absolute=1e-50, divergence=10000 right preconditioning using nonzero initial guess using UNPRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: gamg MG: type is MULTIPLICATIVE, levels=2 cycles=v Cycles per PCApply=1 Not using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: 
relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_) 1 MPI processes type: bjacobi block Jacobi: number of blocks = 1 Local solve info for each block is in the following KSP and PC objects: [0] number of local blocks = 1, first local block number = 0 [0] local block number 0 KSP Object: (mg_coarse_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_sub_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5, needed 3.26452 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 package used to perform factorization: petsc total: nonzeros=45540, allocated nonzeros=45540 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 97 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 total: nonzeros=13950, allocated nonzeros=13950 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 102 nodes, limit used is 5 - - - - - - - - - - - - - - - - - - linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 total: nonzeros=13950, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 102 nodes, limit used is 5 Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 1 MPI processes type: chebychev Chebychev: eigenvalue estimates: min = 0.291234, max = 2.04663 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC 
Object: (mg_levels_1_) 1 MPI processes type: jacobi linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2048, cols=2048 total: nonzeros=26624, allocated nonzeros=47104 total number of mallocs used during MatSetValues calls =2048 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2048, cols=2048 total: nonzeros=26624, allocated nonzeros=47104 total number of mallocs used during MatSetValues calls =2048 not using I-node routines Picard iteration 16 max(resid) = 2064.26 ------- Rate = 1 [0]solveprivate isdefined=1 0 KSP Residual norm 5.987082000170e+03 Linear solve converged due to CONVERGED_RTOL iterations 0 KSP Object: 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=100 tolerances: relative=0.001, absolute=1e-50, divergence=10000 right preconditioning using nonzero initial guess using UNPRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: gamg MG: type is MULTIPLICATIVE, levels=2 cycles=v Cycles per PCApply=1 Not using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_) 1 MPI processes type: bjacobi block Jacobi: number of blocks = 1 Local solve info for each block is in the following KSP and PC objects: [0] number of local blocks = 1, first local block number = 0 [0] local block number 0 KSP Object: (mg_coarse_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, 
divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_sub_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5, needed 3.26452 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 package used to perform factorization: petsc total: nonzeros=45540, allocated nonzeros=45540 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 97 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 total: nonzeros=13950, allocated nonzeros=13950 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 102 nodes, limit used is 5 - - - - - - - - - - - - - - - - - - linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 total: nonzeros=13950, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 102 nodes, limit used is 5 Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 1 MPI processes type: chebychev Chebychev: eigenvalue estimates: min = 0.291234, max = 2.04663 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_1_) 1 MPI processes type: jacobi linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2048, cols=2048 total: nonzeros=26624, allocated nonzeros=47104 total number of mallocs used during MatSetValues calls =2048 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2048, cols=2048 total: nonzeros=26624, allocated nonzeros=47104 total 
number of mallocs used during MatSetValues calls =2048 not using I-node routines Picard iteration 17 max(resid) = 2064.26 ------- Rate = 1 [0]solveprivate isdefined=1 0 KSP Residual norm 5.987082000170e+03 Linear solve converged due to CONVERGED_RTOL iterations 0 KSP Object: 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=100 tolerances: relative=0.001, absolute=1e-50, divergence=10000 right preconditioning using nonzero initial guess using UNPRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: gamg MG: type is MULTIPLICATIVE, levels=2 cycles=v Cycles per PCApply=1 Not using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_) 1 MPI processes type: bjacobi block Jacobi: number of blocks = 1 Local solve info for each block is in the following KSP and PC objects: [0] number of local blocks = 1, first local block number = 0 [0] local block number 0 KSP Object: (mg_coarse_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_sub_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5, needed 3.26452 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 package used to perform factorization: petsc total: nonzeros=45540, allocated nonzeros=45540 total number of mallocs used during MatSetValues calls =0 using I-node routines: 
found 97 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 total: nonzeros=13950, allocated nonzeros=13950 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 102 nodes, limit used is 5 - - - - - - - - - - - - - - - - - - linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=306, cols=306 total: nonzeros=13950, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 102 nodes, limit used is 5 Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 1 MPI processes type: chebychev Chebychev: eigenvalue estimates: min = 0.291234, max = 2.04663 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_1_) 1 MPI processes type: jacobi linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2048, cols=2048 total: nonzeros=26624, allocated nonzeros=47104 total number of mallocs used during MatSetValues calls =2048 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2048, cols=2048 total: nonzeros=26624, allocated nonzeros=47104 total number of mallocs used during MatSetValues calls =2048 not using I-node routines Picard iteration 18 max(resid) = 2064.26 ------- Rate = 1 [0]solveprivate isdefined=1 0 KSP Residual norm 5.987082000170e+03 Linear solve converged due to CONVERGED_RTOL iterations 0 KSP Object: 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=100 tolerances: relative=0.001, absolute=1e-50, 
[... -ksp_view output elided: the same KSP/PC description shown above is repeated verbatim after every Picard iteration ...]
Picard iteration 19 max(resid) = 2064.26 ------- Rate = 1
[0]solveprivate isdefined=1
  0 KSP Residual norm 5.987082000170e+03
Linear solve converged due to CONVERGED_RTOL iterations 0
[... -ksp_view output elided ...]
Picard iteration 20 max(resid) = 2064.26 ------- Rate = 1
[0]solveprivate isdefined=1
  0 KSP Residual norm 5.987082000170e+03
Linear solve converged due to CONVERGED_RTOL iterations 0
[... -ksp_view output elided ...]
Picard
iteration 21 max(resid) = 2064.26 ------- Rate = 1
Picard Solver reached max number of iterations
PicardSolver NOT CONVERGED -- final norm(resid) = 2064.26 after 21 iterations
AmrIce::writePlotFile
AmrIce::run -- max_time= 1e+07, max_step = 0
AmrIce::computeInitialDt
AmrIce::computeDt
AmrIce::writePlotFile
AmrIce::run finished

************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./driver2d.Darwin.g++-4.gfortran.DEBUG.PETSC.ex on a arch-maco named madams-macbk-3.local with 1 processor, by markadams Wed Dec 14 15:06:12 2011
Using Petsc Development HG revision: 291b1392c1ac25f3592838a7b2fe32dea6435abe  HG Date: Mon Dec 12 23:23:58 2011 -0600

                         Max       Max/Min        Avg      Total
Time (sec):           2.070e+00      1.00000   2.070e+00
Objects:              1.000e+02      1.00000   1.000e+02
Flops:                8.100e+07      1.00000   8.100e+07  8.100e+07
Flops/sec:            3.913e+07      1.00000   3.913e+07  3.913e+07
MPI Messages:         0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Reductions:       0.000e+00      0.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                          e.g., VecAXPY() for real vectors of length N --> 2N flops
                          and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 2.0697e+00 100.0%  8.0998e+07 100.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%

------------------------------------------------------------------------------------------------------------------------
See
the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %f - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %f %M %L %R  %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

KSPGMRESOrthog        16 1.0 1.3728e-03 1.0 2.95e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   215
KSPSetup              71 1.0 2.7323e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve              22 1.0 1.6148e-01 1.0 7.86e+07 1.0 0.0e+00 0.0e+00 0.0e+00  8 97  0  0  0   8 97  0  0  0   487
PCSetUp               27 1.0 1.6040e-01 1.0 6.93e+07 1.0 0.0e+00 0.0e+00 0.0e+00  8 85  0  0  0   8 85  0  0  0   432
PCSetUpOnBlocks       44 1.0 4.1191e-02 1.0 2.65e+07 1.0 0.0e+00 0.0e+00 0.0e+00  2 33  0  0  0   2 33  0  0  0   643
PCApply               32 1.0 5.6576e-02 1.0 3.36e+07 1.0 0.0e+00 0.0e+00 0.0e+00  3 41  0  0  0   3 41  0  0  0   594
GAMG: createProl       1 1.0 1.4889e-02 1.0 1.09e+06 1.0 0.0e+00 0.0e+00 0.0e+00  1  1  0  0  0   1  1  0  0  0    73
  Graph                1 1.0 9.2111e-03 1.0 1.39e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0    15
  G.Mat                1 1.0 3.9642e-03 1.0 1.84e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     5
  G.Filter             1 1.0 1.5249e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
  G.Square             1 1.0 3.3240e-03 1.0 1.20e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0    36
  MIS/Agg              1 1.0 1.4782e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
  SA: init             1 1.0 1.5402e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
  SA: smooth           1 1.0 4.0569e-03 1.0 9.47e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0   233
GAMG: partLevel        1 1.0 1.2989e-02 1.0 2.08e+06 1.0 0.0e+00 0.0e+00 0.0e+00  1  3  0  0  0   1  3  0  0  0   160
  PL repartition       1 1.0 1.6761e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatMult              135 1.0 1.1334e-02 1.0 6.91e+06 1.0 0.0e+00 0.0e+00 0.0e+00  1  9  0  0  0   1  9  0  0  0   610
MatMultAdd            22 1.0 1.7703e-03 1.0 7.86e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0   444
MatMultTranspose      22 1.0 1.9238e-03 1.0 7.86e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0   409
MatSolve              22 1.0 4.3290e-03 1.0 2.00e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   0  2  0  0  0   461
MatLUFactorSym         1 1.0 2.4090e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         6 1.0 3.3392e-02 1.0 2.45e+07 1.0 0.0e+00 0.0e+00 0.0e+00  2 30  0  0  0   2 30  0  0  0   734
MatConvert             1 1.0 8.3923e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               3 1.0 9.1076e-05 1.0 5.42e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   595
MatAssemblyBegin      81 1.0 4.6253e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd        81 1.0 5.4932e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRow           5732 1.0 7.6079e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRowIJ            1 1.0 5.1022e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice      23 1.0 5.7230e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering         1 1.0 1.6999e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatPartitioning        1 1.0 1.0967e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatZeroEntries        40 1.0 3.0868e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatView              105 1.0 6.8146e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 33  0  0  0  0  33  0  0  0  0     0
MatAXPY                1 1.0 1.6999e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatMatMult             1 1.0 2.2910e-03 1.0 1.78e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0    78
MatMatMultSym          1 1.0 1.7171e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatMatMultNum          1 1.0 5.7292e-04 1.0 1.78e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   310
MatPtAP               21 1.0 9.6037e-02 1.0 4.37e+07 1.0 0.0e+00 0.0e+00 0.0e+00  5 54  0  0  0   5 54  0  0  0   455
MatPtAPSymbolic        2 1.0 1.3147e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
MatPtAPNumeric        21 1.0 8.2839e-02 1.0 4.37e+07 1.0 0.0e+00 0.0e+00 0.0e+00  4 54  0  0  0   4 54  0  0  0   527
MatMatTrnMultSym       1 1.0 1.7130e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatMatTrnMultNum       1 1.0 1.6060e-03 1.0 1.20e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0    75
MatGetSymTrans         3 1.0 4.3297e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecMDot               16 1.0 1.1692e-03 1.0 1.47e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   126
VecTDot               20 1.0 6.1750e-05 1.0 8.19e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1326
VecNorm               58 1.0 2.3508e-04 1.0 2.38e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1010
VecScale              81 1.0 1.8597e-04 1.0 1.66e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   892
VecCopy              116 1.0 5.5861e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               183 1.0 5.9986e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY              135 1.0 4.8947e-04 1.0 5.53e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  1130
VecAYPX              141 1.0 7.9346e-04 1.0 3.97e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   501
VecMAXPY              22 1.0 2.0552e-04 1.0 2.13e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1036
VecAssemblyBegin      43 1.0 9.7752e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyEnd        43 1.0 1.1206e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecPointwiseMult      98 1.0 7.7629e-04 1.0 2.01e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   259
VecScatterBegin        1 1.0 7.1526e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSetRandom           1 1.0 8.9169e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          37 1.0 3.5024e-04 1.0 2.27e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   649
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions   Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     1              1          324     0
       Krylov Solver     5              5        37564     0
      Preconditioner     5              5         3092     0
              Viewer     1              0            0     0
              Matrix    20             20      1914696     0
 Matrix Partitioning     1              1          352     0
              Vector    48             48       635968     0
      Vector Scatter     1              1          372     0
           Index Set    17             17        28252     0
         PetscRandom     1              1          364     0
========================================================================================================================
Average time to get PetscTime(): 2.86102e-07
#PETSc Option Table entries:
-ksp_converged_reason
-ksp_max_it 100
-ksp_monitor
-ksp_norm_type unpreconditioned
-ksp_rtol 1.e-3
-ksp_type gmres
-ksp_view
-log_summary
-options_left
-pc_gamg_type sa
-pc_type gamg
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 4 sizeof(void*) 4 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure run at: Wed Dec 14 11:19:50 2011
Configure options: CXX="/sw/lib/gcc4.4/bin/g++-4 -malign-double" CC="/sw/lib/gcc4.4/bin/gcc-4 -malign-double" FC=/sw/lib/gcc4.4/bin/gfortran --with-x=0 --with-clanguage=c++ --with-debugging=0 --with-mpi=0
PETSC_ARCH=arch-macosx-gnu-seq
-----------------------------------------
Libraries compiled on Wed Dec 14 11:19:50 2011 on madams-macbk-3.local
Machine characteristics: Darwin-10.8.0-i386-64bit
Using PETSc directory: /Users/markadams/Codes/petsc-dev
Using PETSc arch: arch-macosx-gnu-seq
-----------------------------------------
Using C compiler: /sw/lib/gcc4.4/bin/g++-4 -malign-double -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /sw/lib/gcc4.4/bin/gfortran -Wall -Wno-unused-variable -O ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/Users/markadams/Codes/petsc-dev/arch-macosx-gnu-seq/include -I/Users/markadams/Codes/petsc-dev/include -I/Users/markadams/Codes/petsc-dev/include -I/Users/markadams/Codes/petsc-dev/arch-macosx-gnu-seq/include -I/Users/markadams/Codes/petsc-dev/include/mpiuni
-----------------------------------------
Using C linker: /sw/lib/gcc4.4/bin/g++-4 -malign-double
Using Fortran linker: /sw/lib/gcc4.4/bin/gfortran
Using libraries: -L/Users/markadams/Codes/petsc-dev/arch-macosx-gnu-seq/lib -L/Users/markadams/Codes/petsc-dev/arch-macosx-gnu-seq/lib -lpetsc -lpthread -llapack -lblas -L/sw/lib/gcc4.4/lib/gcc/i386-apple-darwin10.6.0/4.4.4 -L/sw/lib/gcc4.4/lib -ldl -lgcc_s.10.5 -lSystem -lgfortran -lstdc++ -lstdc++ -ldl -lgcc_s.10.5 -lSystem -ldl
-----------------------------------------
#PETSc Option Table entries:
-ksp_converged_reason
-ksp_max_it 100
-ksp_monitor
-ksp_norm_type unpreconditioned
-ksp_rtol 1.e-3
-ksp_type gmres
-ksp_view
-log_summary
-options_left
-pc_gamg_type sa
-pc_type gamg
#End of PETSc Option Table entries
There are no unused options.
[madams-macbk-3:BISICLES/code/exec2D] markadams% exit
exit

Script done on Wed Dec 14 15:06:14 2011