0 KSP Residual norm 4.681784098361e+07
1 KSP Residual norm 7.464546381559e+05
2 KSP Residual norm 1.408218797845e+05
3 KSP Residual norm 5.063837705363e+04
Linear solve converged due to CONVERGED_RTOL iterations 3
0 KSP Residual norm 5.063837705363e+04
1 KSP Residual norm 2.420269752237e+04
2 KSP Residual norm 6.363767832347e+03
3 KSP Residual norm 2.142984191217e+03
4 KSP Residual norm 7.251735399946e+02
5 KSP Residual norm 3.527217156307e+02
6 KSP Residual norm 1.944877001100e+02
7 KSP Residual norm 1.044085234481e+02
8 KSP Residual norm 3.546294065987e+01
Linear solve converged due to CONVERGED_RTOL iterations 8
0 SNES Function norm 3.762862047491e+07
0 KSP Residual norm 3.762862047491e+07
1 KSP Residual norm 4.240754361354e+06
2 KSP Residual norm 5.286856102391e+05
3 KSP Residual norm 1.477536232353e+05
4 KSP Residual norm 5.363419980321e+04
Linear solve converged due to CONVERGED_RTOL iterations 4
1 SNES Function norm 1.323050535160e+07
0 KSP Residual norm 1.323050535160e+07
1 KSP Residual norm 1.752504173017e+06
2 KSP Residual norm 5.080732077034e+05
3 KSP Residual norm 1.911126479517e+05
4 KSP Residual norm 9.070288456134e+04
5 KSP Residual norm 5.140361028415e+04
6 KSP Residual norm 2.996124263668e+04
7 KSP Residual norm 1.656051086167e+04
Linear solve converged due to CONVERGED_RTOL iterations 7
2 SNES Function norm 1.007481092312e+07
0 KSP Residual norm 1.007481092312e+07
1 KSP Residual norm 1.331323406300e+06
2 KSP Residual norm 2.659245599019e+05
3 KSP Residual norm 9.349633506217e+04
4 KSP Residual norm 4.228255668686e+04
5 KSP Residual norm 2.569998718420e+04
6 KSP Residual norm 1.807338552031e+04
Linear solve converged due to CONVERGED_RTOL iterations 6
3 SNES Function norm 6.154699799473e+06
0 KSP Residual norm 6.154699799473e+06
1 KSP Residual norm 8.918353630241e+05
2 KSP Residual norm 2.031769617840e+05
3 KSP Residual norm 6.564026379218e+04
4 KSP Residual norm 2.840705301090e+04
5 KSP Residual norm 1.744381694243e+04
6 KSP Residual norm 1.299475287305e+04
7 KSP Residual norm 9.162583748108e+03
Linear solve converged due to CONVERGED_RTOL iterations 7
4 SNES Function norm 2.835114636163e+06
0 KSP Residual norm 2.835114636163e+06
1 KSP Residual norm 3.622629593215e+05
2 KSP Residual norm 9.562827124200e+04
3 KSP Residual norm 3.553216478965e+04
4 KSP Residual norm 1.663416819313e+04
5 KSP Residual norm 9.815969665206e+03
6 KSP Residual norm 7.338951179519e+03
7 KSP Residual norm 5.784780806522e+03
8 KSP Residual norm 4.216334202588e+03
Linear solve converged due to CONVERGED_RTOL iterations 8
5 SNES Function norm 2.391810194836e+06
0 KSP Residual norm 2.391810194836e+06
1 KSP Residual norm 2.301218457541e+05
2 KSP Residual norm 4.596191738496e+04
3 KSP Residual norm 1.649162371642e+04
4 KSP Residual norm 7.014318574013e+03
5 KSP Residual norm 3.818057206415e+03
Linear solve converged due to CONVERGED_RTOL iterations 5
6 SNES Function norm 1.154583522706e+06
0 KSP Residual norm 1.154583522706e+06
1 KSP Residual norm 1.486289744776e+05
2 KSP Residual norm 3.482775170559e+04
3 KSP Residual norm 1.222534649902e+04
4 KSP Residual norm 5.131212514853e+03
5 KSP Residual norm 2.858124933052e+03
6 KSP Residual norm 2.131355121687e+03
Linear solve converged due to CONVERGED_RTOL iterations 6
7 SNES Function norm 5.149762271841e+05
0 KSP Residual norm 5.149762271841e+05
1 KSP Residual norm 5.707288395764e+04
2 KSP Residual norm 1.492681660129e+04
3 KSP Residual norm 6.071760019027e+03
4 KSP Residual norm 3.207501684067e+03
5 KSP Residual norm 2.187677418164e+03
6 KSP Residual norm 1.823646992939e+03
7 KSP Residual norm 1.599325319703e+03
8 KSP Residual norm 1.338763730202e+03
9 KSP Residual norm 1.008563055753e+03
Linear solve converged due to CONVERGED_RTOL iterations 9
8 SNES Function norm 2.835738730756e+05
0 KSP Residual norm 2.835738730756e+05
1 KSP Residual norm 2.020055310948e+04
2 KSP Residual norm 2.795109525498e+03
3 KSP Residual norm 1.310779723540e+03
4 KSP Residual norm 8.239240263628e+02
5 KSP Residual norm 6.858517845919e+02
6 KSP Residual norm 6.186574421775e+02
7 KSP Residual norm 5.377444541555e+02
Linear solve converged due to CONVERGED_RTOL iterations 7
9 SNES Function norm 4.895122815462e+04
0 KSP Residual norm 4.895122815462e+04
1 KSP Residual norm 3.145014325312e+03
2 KSP Residual norm 7.147339842336e+02
3 KSP Residual norm 4.981553785101e+02
4 KSP Residual norm 4.405152852331e+02
5 KSP Residual norm 4.002566335582e+02
6 KSP Residual norm 3.299494246162e+02
7 KSP Residual norm 2.203299658579e+02
8 KSP Residual norm 1.293352291992e+02
9 KSP Residual norm 8.202394702820e+01
Linear solve converged due to CONVERGED_RTOL iterations 9
10 SNES Function norm 8.667439117424e+03
0 KSP Residual norm 8.667439117424e+03
1 KSP Residual norm 4.545993216108e+02
2 KSP Residual norm 8.212841616324e+01
3 KSP Residual norm 6.018036862004e+01
4 KSP Residual norm 5.161095473021e+01
5 KSP Residual norm 4.575961271181e+01
6 KSP Residual norm 3.965772591335e+01
7 KSP Residual norm 3.314454849706e+01
8 KSP Residual norm 2.564993200530e+01
9 KSP Residual norm 1.765955332236e+01
10 KSP Residual norm 1.268224485640e+01
Linear solve converged due to CONVERGED_RTOL iterations 10
11 SNES Function norm 1.763150801162e+02
Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 11
SNES Object: 16 MPI processes
  type: newtonls
  maximum iterations=500, maximum function evaluations=10000
  tolerances: relative=0.0001, absolute=1e-50, solution=0.0001
  total number of linear solver iterations=78
  total number of function evaluations=659
  SNESLineSearch Object: 16 MPI processes
    type: bt
      interpolation: cubic
      alpha=1.000000e-04
    maxstep=1.000000e+08, minlambda=1.000000e-12
    tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08
    maximum iterations=40
  KSP Object: 16 MPI processes
    type: gmres
      GMRES: restart=60, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
      GMRES: happy breakdown tolerance 1e-30
    maximum iterations=500, initial guess is zero
    tolerances: relative=0.002, absolute=1e-50, divergence=10000
    right preconditioning
    using UNPRECONDITIONED norm type for convergence test
  PC Object: 16 MPI processes
    type: gamg
      MG: type is MULTIPLICATIVE, levels=5 cycles=v
        Cycles per PCApply=1
        Using Galerkin computed coarse grid matrices
    Coarse grid solver -- level -------------------------------
      KSP Object: (mg_coarse_) 16 MPI processes
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_coarse_) 16 MPI processes
        type: bjacobi
          block Jacobi: number of blocks = 16
          Local solve info for each block is in the following KSP and PC objects:
        [0] number of local blocks = 1, first local block number = 0
          [0] local block number 0
          KSP Object: (mg_coarse_sub_) 1 MPI processes
            type: preonly
            maximum iterations=10000, initial guess is zero
            tolerances: relative=1e-05, absolute=1e-50, divergence=10000
            left preconditioning
            using NONE norm type for convergence test
          PC Object: (mg_coarse_sub_) 1 MPI processes
            type: lu
              LU: out-of-place factorization
              tolerance for zero pivot 2.22045e-14
              using diagonal shift on blocks to prevent zero pivot
              matrix ordering: nd
              factor fill ratio given 5, needed 1.87421
                Factored matrix follows:
                  Matrix Object: 1 MPI processes
                    type: seqaij
                    rows=774, cols=774, bs=3
                    package used to perform factorization: petsc
                    total: nonzeros=380034, allocated nonzeros=380034
                    total number of mallocs used during MatSetValues calls =0
                      using I-node routines: found 205 nodes, limit used is 5
            linear system matrix = precond matrix:
            Matrix Object: 1 MPI processes
              type: seqaij
              rows=774, cols=774, bs=3
              total: nonzeros=202770, allocated nonzeros=202770
              total number of mallocs used during MatSetValues calls =0
                using I-node routines: found 258 nodes, limit used is 5
          - - - - - - - - - - - - - - - - - -
          (blocks [1] through [15] report the same preonly/LU sub-solver applied to empty local matrices: rows=0, cols=0, bs=3, factor fill ratio given 5, needed 0, factored nonzeros=1, system nonzeros=0, not using I-node routines)
        [1] number of local blocks = 1, first local block number = 1
          [1] local block number 0
          - - - - - - - - - - - - - - - - - -
        [2] number of local blocks = 1, first local block number = 2
          [2] local block number 0
          - - - - - - - - - - - - - - - - - -
        [3] number of local blocks = 1, first local block number = 3
          [3] local block number 0
          - - - - - - - - - - - - - - - - - -
        [4] number of local blocks = 1, first local block number = 4
          [4] local block number 0
          - - - - - - - - - - - - - - - - - -
        [5] number of local blocks = 1, first local block number = 5
          [5] local block number 0
          - - - - - - - - - - - - - - - - - -
        [6] number of local blocks = 1, first local block number = 6
          [6] local block number 0
          - - - - - - - - - - - - - - - - - -
        [7] number of local blocks = 1, first local block number = 7
          [7] local block number 0
          - - - - - - - - - - - - - - - - - -
        [8] number of local blocks = 1, first local block number = 8
          [8] local block number 0
          - - - - - - - - - - - - - - - - - -
        [9] number of local blocks = 1, first local block number = 9
          [9] local block number 0
          - - - - - - - - - - - - - - - - - -
        [10] number of local blocks = 1, first local block number = 10
          [10] local block number 0
          - - - - - - - - - - - - - - - - - -
        [11] number of local blocks = 1, first local block number = 11
          [11] local block number 0
          - - - - - - - - - - - - - - - - - -
        [12] number of local blocks = 1, first local block number = 12
          [12] local block number 0
          - - - - - - - - - - - - - - - - - -
        [13] number of local blocks = 1, first local block number = 13
          [13] local block number 0
          - - - - - - - - - - - - - - - - - -
        [14] number of local blocks = 1, first local block number = 14
          [14] local block number 0
          - - - - - - - - - - - - - - - - - -
        [15] number of local blocks = 1, first local block number = 15
          [15] local block number 0
          - - - - - - - - - - - - - - - - - -
        linear system matrix = precond matrix:
        Matrix Object: 16 MPI processes
          type: mpiaij
          rows=774, cols=774, bs=3
          total: nonzeros=202770, allocated nonzeros=202770
          total number of mallocs used during MatSetValues calls =0
            using I-node (on process 0) routines: found 258 nodes, limit used is 5
    Down solver (pre-smoother) on level 1 -------------------------------
      KSP Object: (mg_levels_1_) 16 MPI processes
        type: chebyshev
          Chebyshev: eigenvalue estimates: min = 0.127202, max = 1.33562
          Chebyshev: estimated using: [0 0.1; 0 1.05]
          KSP Object: (mg_levels_1_adapt_) 16 MPI processes
            type: gmres
              GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
              GMRES: happy breakdown tolerance 1e-30
            maximum iterations=10, initial guess is zero
            tolerances: relative=1e-05, absolute=1e-50, divergence=10000
            left preconditioning
            using NONE norm type for convergence test
          PC Object: (mg_levels_1_) 16 MPI processes
            type: sor
              SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
            linear system matrix = precond matrix:
            Matrix Object: 16 MPI processes
              type: mpiaij
              rows=3666, cols=3666, bs=3
              total: nonzeros=568206, allocated nonzeros=568206
              total number of mallocs used during MatSetValues calls =0
                using I-node (on process 0) routines: found 7 nodes, limit used is 5
        maximum iterations=2
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (mg_levels_1_) 16 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Matrix Object: 16 MPI processes
          type: mpiaij
          rows=3666, cols=3666, bs=3
          total: nonzeros=568206, allocated nonzeros=568206
          total number of mallocs used during MatSetValues calls =0
            using I-node (on process 0) routines: found 7 nodes, limit used is 5
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 2 -------------------------------
      KSP Object: (mg_levels_2_) 16 MPI processes
        type: chebyshev
          Chebyshev: eigenvalue estimates: min = 0.148001, max = 1.55401
          Chebyshev: estimated using: [0 0.1; 0 1.05]
          KSP Object: (mg_levels_2_adapt_) 16 MPI processes
            type: gmres
              GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
              GMRES: happy breakdown tolerance 1e-30
            maximum iterations=10, initial guess is zero
            tolerances: relative=1e-05, absolute=1e-50, divergence=10000
            left preconditioning
            using NONE norm type for convergence test
          PC Object: (mg_levels_2_) 16 MPI processes
            type: sor
              SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
            linear system matrix = precond matrix:
            Matrix Object: 16 MPI processes
              type: mpiaij
              rows=16731, cols=16731, bs=3
              total: nonzeros=1183095, allocated nonzeros=1183095
              total number of mallocs used during MatSetValues calls =0
                using I-node (on process 0) routines: found 51 nodes, limit used is 5
        maximum iterations=2
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (mg_levels_2_) 16 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Matrix Object: 16 MPI processes
          type: mpiaij
          rows=16731, cols=16731, bs=3
          total: nonzeros=1183095, allocated nonzeros=1183095
          total number of mallocs used during MatSetValues calls =0
            using I-node (on process 0) routines: found 51 nodes, limit used is 5
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 3 -------------------------------
      KSP Object: (mg_levels_3_) 16 MPI processes
        type: chebyshev
          Chebyshev: eigenvalue estimates: min = 0.146192, max = 1.53501
          Chebyshev: estimated using: [0 0.1; 0 1.05]
          KSP Object: (mg_levels_3_adapt_) 16 MPI processes
            type: gmres
              GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
              GMRES: happy breakdown tolerance 1e-30
            maximum iterations=10, initial guess is zero
            tolerances: relative=1e-05, absolute=1e-50, divergence=10000
            left preconditioning
            using NONE norm type for convergence test
          PC Object: (mg_levels_3_) 16 MPI processes
            type: sor
              SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
            linear system matrix = precond matrix:
            Matrix Object: 16 MPI processes
              type: mpiaij
              rows=133170, cols=133170, bs=3
              total: nonzeros=4998060, allocated nonzeros=4998060
              total number of mallocs used during MatSetValues calls =0
                using I-node (on process 0) routines: found 359 nodes, limit used is 5
        maximum iterations=2
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (mg_levels_3_) 16 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Matrix Object: 16 MPI processes
          type: mpiaij
          rows=133170, cols=133170, bs=3
          total: nonzeros=4998060, allocated nonzeros=4998060
          total number of mallocs used during MatSetValues calls =0
            using I-node (on process 0) routines: found 359 nodes, limit used is 5
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 4 -------------------------------
      KSP Object: (mg_levels_4_) 16 MPI processes
        type: chebyshev
          Chebyshev: eigenvalue estimates: min = 0.0236582, max = 0.248411
          Chebyshev: estimated using: [0 0.1; 0 1.05]
          KSP Object: (mg_levels_4_adapt_) 16 MPI processes
            type: gmres
              GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
              GMRES: happy breakdown tolerance 1e-30
            maximum iterations=10, initial guess is zero
            tolerances: relative=1e-05, absolute=1e-50, divergence=10000
            left preconditioning
            using NONE norm type for convergence test
          PC Object: (mg_levels_4_) 16 MPI processes
            type: sor
              SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
            linear system matrix followed by preconditioner matrix:
            Matrix Object: 16 MPI processes
              type: mffd
              rows=3276800, cols=3276800
                Matrix-free approximation:
                  err=1.49012e-08 (relative error in function evaluation)
                  Using wp compute h routine
                    Does not compute normU
            Matrix Object: 16 MPI processes
              type: mpiaij
              rows=3276800, cols=3276800, bs=2
              total: nonzeros=42598400, allocated nonzeros=62261240
              total number of mallocs used during MatSetValues calls =136
                not using I-node (on process 0) routines
        maximum iterations=2
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (mg_levels_4_) 16 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix followed by preconditioner matrix:
        Matrix Object: 16 MPI processes
          type: mffd
          rows=3276800, cols=3276800
            Matrix-free approximation:
              err=1.49012e-08 (relative error in function evaluation)
              Using wp compute h routine
                Does not compute normU
        Matrix Object: 16 MPI processes
          type: mpiaij
          rows=3276800, cols=3276800, bs=2
          total: nonzeros=42598400, allocated nonzeros=62261240
          total number of mallocs used during MatSetValues calls =136
            not using I-node (on process 0) routines
    Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix followed by preconditioner matrix:
  Matrix Object: 16 MPI processes
    type: mffd
    rows=3276800, cols=3276800
      Matrix-free approximation:
        err=1.49012e-08 (relative error in function evaluation)
        Using wp compute h routine
          Does not compute normU
  Matrix Object: 16 MPI processes
    type: mpiaij
    rows=3276800, cols=3276800, bs=2
    total: nonzeros=42598400, allocated nonzeros=62261240
    total number of mallocs used during MatSetValues calls =136
      not using I-node (on process 0) routines
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

../../driver2d.Linux.64.CC.ftn.OPTHIGH.MPI.PETSC.ex on a arch-xe6-opt64 named nid01753 with 16 processors, by Unknown Wed Mar 6 16:30:15 2013
Using Petsc Development HG revision: f37196d89aa62310230dff96ac6fce27c1d0da5e  HG Date: Mon Jan 28 15:07:26 2013 -0600

                         Max       Max/Min        Avg      Total
Time (sec):           7.033e+02      1.00019   7.032e+02
Objects:              1.516e+03      1.00265   1.512e+03
Flops:                1.936e+10      3.69293   1.080e+10  1.728e+11
Flops/sec:            2.754e+07      3.69291   1.535e+07  2.457e+08
MPI Messages:         2.360e+04     14.26904   1.202e+04  1.923e+05
MPI Message Lengths:  6.225e+07     15.60797   2.304e+03  4.431e+08
MPI Reductions:       6.918e+03      1.00058

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                          e.g., VecAXPY() for real vectors of length N --> 2N flops
                          and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 7.0320e+02 100.0%  1.7276e+11 100.0%  1.923e+05 100.0%  2.304e+03      100.0%  6.913e+03  99.9%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase
      %f - percent flops in this phase
      %M - percent messages in this phase
      %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event              Count      Time (sec)       Flops                               --- Global ---   --- Stage ---   Total
                   Max Ratio  Max      Ratio   Max      Ratio  Mess    Avg len Reduct  %T %f %M %L %R  %T %f %M %L %R  Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

MatMult MF           644 1.0 5.2740e+02 1.1     1.46e+09 2.0 0.0e+00 0.0e+00 2.5e+03 72 11  0  0 36  72 11  0  0 36    35
MatMult             3258 1.0 5.4141e+02 1.1     5.34e+09 3.7 1.3e+05 1.5e+03 2.5e+03 74 27 69 44 36  74 27 69 44 36    88
MatMultAdd           421 1.0 3.0954e+00 24.5    5.02e+08 0.0 1.3e+04 6.3e+02 0.0e+00  0  1  7  2  0   0  1  7  2  0   709
MatMultTranspose     421 1.0 4.1886e+00 29.7    5.02e+08 0.0 1.3e+04 6.3e+02 0.0e+00  0  1  7  2  0   0  1  7  2  0   524
MatSolve             102 0.0 8.4904e-02 0.0     6.91e+07 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   814
MatSOR              3120 1.0 6.6210e+01 2.2     7.46e+09 2.9 0.0e+00 0.0e+00 0.0e+00  7 43  0  0  0   7 43  0  0  0  1128
MatLUFactorSym         2 1.0 2.1518e-02 352.7   0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum        13 1.0 1.1084e+00 28351.3 1.25e+09 0.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  1132
MatScale              27 1.0 8.4768e-02 4.0     1.68e+07 7.1 5.6e+02 8.4e+02 0.0e+00  0  0  0  0  0   0  0  0  0  0  1289
MatAssemblyBegin     251 1.0 1.8837e+01 18.6    0.00e+00 0.0 3.4e+03 2.3e+03 3.1e+02  1  0  2  2  4   1  0  2  2  4     0
MatAssemblyEnd       251 1.0 1.9387e+00 1.3     0.00e+00 0.0 6.9e+03 4.5e+02 6.5e+02  0  0  4  1  9   0  0  4  1  9     0
MatGetRow        2702808 2.3 5.8520e-01 2.4     0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRowIJ            2 0.0 6.0489e-04 0.0     0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering         2 0.0 1.1452e-03 0.0     0.00e+00 0.0 0.0e+00 0.0e+00 5.0e-01  0  0  0  0  0   0  0  0  0  0     0
MatCoarsen             9 1.0 1.3600e-01 1.4     0.00e+00 0.0 1.6e+03 8.2e+02 1.8e+02  0  0  1  0  3   0  0  1  0  3     0
MatZeroEntries         1 1.0 1.5734e-02 3.3     0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatView               15 1.0 1.3499e-02 2.1     0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+01  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                9 1.0 4.2669e-02 18.8    0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatTranspose           9 1.0 4.0912e-01 1.1     0.00e+00 0.0 4.5e+03 1.6e+03 1.5e+02  0  0  2  2  2   0  0  2  2  2     0
MatMatMult             9 1.0 1.3397e+00 1.0     6.41e+07 0.0 3.0e+03 2.1e+03 2.2e+02  0  0  2  1  3   0  0  2  1  3   197
MatPtAP               18 1.0 9.1338e+00 1.0     1.10e+09 0.0 7.6e+03 1.4e+04 4.5e+02  1  3  4 25  7   1  3  4 25  7   500
MatPtAPNumeric        36 1.0 6.6597e+00 1.0     1.63e+09 0.0 3.4e+03 2.7e+04 7.2e+01  1  5  2 21  1   1  5  2 21  1  1195
MatTrnMatMult          9 1.0 1.0854e+00 1.0     3.13e+07 119.6 2.0e+03 1.8e+03 2.6e+02  0  0  1  1  4   0  0  1  1  4   124
MatGetLocalMat        90 1.0 7.3519e-01 7.0     0.00e+00 0.0 0.0e+00 0.0e+00 7.2e+01  0  0  0  0  1   0  0  0  0  1     0
MatGetBrAoCol         72 1.0 2.9224e-01 3.8     0.00e+00 0.0 7.9e+03 1.3e+04 5.4e+01  0  0  4 22  1   0  0  4 22  1     0
MatGetSymTrans        36 1.0 1.6877e-01 52.0    0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecDot                11 1.0 6.2570e-02 3.4     5.77e+06 2.0 0.0e+00 0.0e+00 1.1e+01  0  0  0  0  0   0  0  0  0  0  1152
VecMDot              719 1.0 1.1348e+01 5.2     6.90e+08 2.2 0.0e+00 0.0e+00 7.2e+02  1  5  0  0 10   1  5  0  0 10   716
VecNorm             1343 1.0 3.8128e+01 14.4    4.40e+08 2.1 0.0e+00 0.0e+00 1.3e+03  2  3  0  0 19   2  3  0  0 19   142
VecScale            3123 1.0 1.2591e+00 4.9     3.66e+08 2.1 0.0e+00 0.0e+00 0.0e+00  0  3  0  0  0   0  3  0  0  0  3506
VecCopy              545 1.0 3.4311e-01 2.7     0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet              2590 1.0 7.9994e-01 2.4     0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY             4749 1.0 4.2894e+00 3.2     1.19e+09 2.1 0.0e+00 0.0e+00 0.0e+00  0  8  0  0  0   0  8  0  0  0  3346
VecAYPX             3368 1.0 1.5943e+00 4.0     3.04e+08 2.3 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   0  2  0  0  0  2199
VecWAXPY             658 1.0 2.4284e+00 3.1     3.42e+08 2.0 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   0  2  0  0  0  1761
VecMAXPY             795 1.0 2.2949e+00 2.3     8.26e+08 2.2 0.0e+00 0.0e+00 0.0e+00  0  6  0  0  0   0  6  0  0  0  4246
VecAssemblyBegin     838 1.0 2.9640e+01 65.0    0.00e+00 0.0 0.0e+00 0.0e+00 2.5e+03  2  0  0  0 36   2  0  0  0 36     0
VecAssemblyEnd       838 1.0 3.3769e-03 1.6     0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecPointwiseMult      99 1.0 6.5623e-02 5.2     6.71e+06 2.3 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1159
VecScatterBegin     3638 1.0 7.3890e-02 6.6     0.00e+00 0.0 1.7e+05 1.3e+03 0.0e+00  0  0 87 49  0   0  0 87 49  0     0
VecScatterEnd       3638 1.0 1.9508e+01 15.0    0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
VecSetRandom           9 1.0 1.8497e-02 2.3     0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecReduceArith        23 1.0 1.9750e-02 4.2     1.21e+07 2.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  7632
VecReduceComm         12 1.0 6.0185e-01 634.0   0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  0   0  0  0  0  0     0
VecNormalize         795 1.0 3.6306e+00 5.4     2.23e+08 2.2 0.0e+00 0.0e+00 7.8e+02  0  2  0  0 11   0  2  0  0 11   726
KSPGMRESOrthog       719 1.0 1.2188e+01 3.0     1.38e+09 2.2 0.0e+00 0.0e+00 7.2e+02  1  9  0  0 10   1  9  0  0 10  1334
KSPSetUp             124 1.0 8.8735e-02 1.6     0.00e+00 0.0 0.0e+00 0.0e+00 2.2e+01  0  0  0  0  0   0  0  0  0  0     0
KSPSolve              13 1.0 6.0382e+02 1.0     1.93e+10 3.7 1.9e+05 2.3e+03 6.6e+03 86 100 100 100 96  86 100 100 100 96   285
PCSetUp               26 1.0 2.5703e+01 1.0     3.23e+09 22.6 3.7e+04 6.5e+03 2.9e+03  4 11 19 55 42   4 11 19 55 42   711
PCSetUpOnBlocks      102 1.0 1.1317e+00 1463.6  1.25e+09 0.0 0.0e+00 0.0e+00 1.0e+01  0  1  0  0  0   0  1  0  0  0  1109
PCApply              102 1.0 5.1618e+02 1.0     1.53e+10 3.3 1.5e+05 1.3e+03 3.3e+03 73 84 79 44 47  73 84 79 44 47   281
PCGAMGgraph_AGG        5 1.0 1.9570e+00 1.0     3.04e+06 2.6 4.2e+03 1.2e+03 2.6e+02  0  0  2  1  4   0  0  2  1  4    16
PCGAMGcoarse_AGG       5 1.0 6.1726e-01 1.0     1.04e+07 79.2 2.8e+03 9.7e+02 3.4e+02  0  0  1  1  5   0  0  1  1  5    80
PCGAMGProl_AGG         5 1.0 2.8930e-01 1.0     0.00e+00 0.0 2.4e+03 6.8e+02 3.2e+02  0  0  1  0  5   0  0  1  0  5     0
PCGAMGPOpt_AGG         5 1.0 1.7899e+00 1.0     2.59e+08 3.7 4.7e+03 1.9e+03 2.8e+02  0  1  2  2  4   0  1  2  2  4  1235
SNESSolve              1 1.0 6.0460e+02 1.0     1.52e+10 3.5 1.4e+05 2.1e+03 5.0e+03 86 82 71 66 73  86 82 71 66 73   233
SNESFunctionEval     659 1.0 5.0238e+02 1.0     0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+03 71  0  0  0 29  71  0  0  0 29     0
SNESJacobianEval      11 1.0 7.0255e+00 1.0     0.00e+00 0.0 0.0e+00 0.0e+00 6.6e+01  1  0  0  0  1   1  0  0  0  1     0
SNESLineSearch        11 1.0 1.9731e+01 1.0     6.24e+07 2.0 0.0e+00 0.0e+00 1.2e+02  3  0  0  0  2   3  0  0  0  2    40
PCGAMGgraph_AGG        4 1.0 1.8519e+00 1.0     2.67e+06 2.3 3.6e+03 1.2e+03 2.1e+02  0  0  2  1  3   0  0  2  1  3    17
PCGAMGcoarse_AGG       4 1.0 7.4655e-01 1.0     2.10e+07 159.9 2.5e+03 1.4e+03 2.8e+02  0  0  1  1  4   0  0  1  1  4   114
PCGAMGProl_AGG         4 1.0 2.5730e-01 1.1     0.00e+00 0.0 2.1e+03 8.1e+02 2.6e+02  0  0  1  0  4   0  0  1  0  4     0
PCGAMGPOpt_AGG         4 1.0 1.4530e+00 1.0     2.10e+08 3.0 3.9e+03 2.0e+03 2.2e+02  0  1  2  2  3   0  1  2  2  3  1429
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage

              MatMFFD     1              1          856     0
               Matrix   322            322    741571200     0
       Matrix Coarsen     9              9         6300     0
               Vector   809            809    281875864     0
       Vector Scatter    80             80        96320     0
            Index Set   195            195       359616     0
        Krylov Solver    35             35       923240     0
      DMKSP interface     1              1          712     0
       Preconditioner    35             35        35520     0
               Viewer     2              1          800     0
      Bipartite Graph    13             13        11912     0
          PetscRandom     9              9         6192     0
                 SNES     1              1         1456     0
       SNESLineSearch     1              1          920     0
               DMSNES     1              1          728     0
     Distributed Mesh     2              2         8912     0
========================================================================================================================
Average time to get PetscTime(): 2.02325e-07
Average time for MPI_Barrier(): 7.58908e-06
Average time for zero size MPI_Send(): 2.31545e-06
#PETSc Option Table entries:
-ksp_converged_reason
-ksp_converged_use_initial_residual_norm
-ksp_gmres_restart 60
-ksp_max_it 500
-ksp_monitor
-ksp_norm_type unpreconditioned
-ksp_rtol 2.e-3
-ksp_type gmres
-log_summary
-mg_levels_ksp_chebyshev_estimate_eigenvalues 0,0.1,0,1.05
-mg_levels_ksp_max_it 2
-mg_levels_ksp_type chebyshev
-mg_levels_pc_type sor
-pc_gamg_agg_nsmooths 1
-pc_gamg_reuse_interpolation true
-pc_gamg_sym_graph
-pc_gamg_threshold .05
-pc_gamg_type agg
-pc_gamg_verbose 0
-pc_hypre_boomeramg_grid_sweeps_coarse 4
-pc_hypre_type boomeramg
-pc_ml_EnergyMinimization 2
-pc_ml_PrintLevel 1
-pc_ml_Threshold 0.01
-pc_type gamg
-snes_converged_reason
-snes_max_funcs 10000
-snes_max_it 500
-snes_mf_operator
-snes_monitor
-snes_rtol 1.e-4
-snes_stol 1.e-4
-snes_view
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 8
Configure run at: Wed Jan 30 07:39:23 2013
Configure options: --COPTFLAGS="-O3 -ffast-math -funroll-loops" --CXXOPTFLAGS="-O3 -ffast-math -funroll-loops" --FOPTFLAGS="-O3 -ffast-math -funroll-loops" --download-parmetis --download-metis --download-hypre --with-cc=cc --with-clib-autodetect=0 --with-cxx=CC --with-cxxlib-autodetect=0 --with-debugging=0 --with-fc=ftn --with-fortranlib-autodetect=0 --with-mpiexec=/usr/common/acts/PETSc/3.1/bin/mpiexec.aprun --with-shared-libraries=0 --with-x=0 --with-64-bit-indices PETSC_ARCH=arch-xe6-opt64
-----------------------------------------
Libraries compiled on Wed Jan 30 07:39:23 2013 on hopper09
Machine characteristics: Linux-2.6.32.36-0.5-default-x86_64-with-SuSE-11-x86_64
Using PETSc directory: /global/homes/m/madams/petsc-dev
Using PETSc arch: arch-xe6-opt64
-----------------------------------------
Using C compiler: cc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 -ffast-math -funroll-loops ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: ftn -Wall -Wno-unused-variable -Wno-unused-dummy-argument -O3 -ffast-math -funroll-loops ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/global/homes/m/madams/petsc-dev/arch-xe6-opt64/include -I/global/homes/m/madams/petsc-dev/include -I/global/homes/m/madams/petsc-dev/include -I/global/homes/m/madams/petsc-dev/arch-xe6-opt64/include
-----------------------------------------
Using C linker: cc
Using Fortran linker: ftn
Using libraries: -Wl,-rpath,/global/homes/m/madams/petsc-dev/arch-xe6-opt64/lib -L/global/homes/m/madams/petsc-dev/arch-xe6-opt64/lib -lpetsc -Wl,-rpath,/global/homes/m/madams/petsc-dev/arch-xe6-opt64/lib -L/global/homes/m/madams/petsc-dev/arch-xe6-opt64/lib -lHYPRE -lparmetis -lmetis -lpthread -ldl
-----------------------------------------
Application 15484350 resources: utime ~11126s, stime ~16s, Rss ~866768, inblocks ~707380, outblocks ~83002
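
For reference, the run recorded above can be reconstructed from the header and the option table: 16 MPI ranks of the listed executable, launched through the aprun-based mpiexec given in the configure options. The sketch below is a reconstruction, not part of the log itself: the launcher and rank count are assumed from the "16 processors" header and the --with-mpiexec line, and the option list is abridged to the entries that are active with -pc_type gamg (the unused -pc_hypre_* and -pc_ml_* entries from the table are omitted).

  # Sketch only: launcher, paths, and working directory are assumptions, not recorded verbatim in the log.
  aprun -n 16 ../../driver2d.Linux.64.CC.ftn.OPTHIGH.MPI.PETSC.ex \
    -snes_mf_operator -snes_rtol 1.e-4 -snes_stol 1.e-4 -snes_max_it 500 -snes_max_funcs 10000 \
    -snes_monitor -snes_converged_reason -snes_view \
    -ksp_type gmres -ksp_gmres_restart 60 -ksp_rtol 2.e-3 -ksp_max_it 500 \
    -ksp_norm_type unpreconditioned -ksp_converged_use_initial_residual_norm \
    -ksp_monitor -ksp_converged_reason \
    -pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1 -pc_gamg_threshold .05 \
    -pc_gamg_sym_graph -pc_gamg_reuse_interpolation true -pc_gamg_verbose 0 \
    -mg_levels_ksp_type chebyshev -mg_levels_ksp_max_it 2 \
    -mg_levels_ksp_chebyshev_estimate_eigenvalues 0,0.1,0,1.05 -mg_levels_pc_type sor \
    -log_summary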