[petsc-users] Strange GAMG performance for mixed FE formulation

Justin Chang jychang48 at gmail.com
Wed Mar 2 19:15:48 CST 2016


Barry,

Attached are the log_summary output for each preconditioner.

Thanks,
Justin

On Wednesday, March 2, 2016, Barry Smith <bsmith at mcs.anl.gov> wrote:

>
>   Justin,
>
>    Do you have the -log_summary output for these runs?
>
> Barry
>
> > On Mar 2, 2016, at 4:28 PM, Justin Chang <jychang48 at gmail.com> wrote:
> >
> > Dear all,
> >
> > Using the firedrake project, I am solving this simple mixed poisson
> problem:
> >
> > mesh = UnitCubeMesh(40,40,40)
> > V = FunctionSpace(mesh,"RT",1)
> > Q = FunctionSpace(mesh,"DG",0)
> > W = V*Q
> >
> > v, p = TrialFunctions(W)
> > w, q = TestFunctions(W)
> >
> > f = Function(Q)
> >
> f.interpolate(Expression("12*pi*pi*sin(pi*x[0]*2)*sin(pi*x[1]*2)*sin(2*pi*x[2])"))
> >
> > a = dot(v,w)*dx - p*div(w)*dx + div(v)*q*dx
> > L = f*q*dx
> >
> > u = Function(W)
> > solve(a==L,u,solver_parameters={...})
> >
> > This problem has 1161600 degrees of freedom. The solver_parameters are:
> >
> > -ksp_type gmres
> > -pc_type fieldsplit
> > -pc_fieldsplit_type schur
> > -pc_fieldsplit_schur_fact_type upper
> > -pc_fieldsplit_schur_precondition selfp
> > -fieldsplit_0_ksp_type preonly
> > -fieldsplit_0_pc_type bjacobi
> > -fieldsplit_1_ksp_type preonly
> > -fieldsplit_1_pc_type hypre/ml/gamg
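> >
> > For what it's worth, these options are passed to solve() as a Python dict;
> > a minimal sketch of that dict is below (assuming the options above map
> > one-to-one onto dict keys, with the "solver_" prefix seen in the attached
> > logs coming from an options prefix rather than from the dict itself):
> >
> > params = {
> >     "ksp_type": "gmres",
> >     "ksp_rtol": 1e-7,                 # from the attached option table
> >     "pc_type": "fieldsplit",
> >     "pc_fieldsplit_type": "schur",
> >     "pc_fieldsplit_schur_fact_type": "upper",
> >     "pc_fieldsplit_schur_precondition": "selfp",
> >     "fieldsplit_0_ksp_type": "preonly",
> >     "fieldsplit_0_pc_type": "bjacobi",
> >     "fieldsplit_1_ksp_type": "preonly",
> >     "fieldsplit_1_pc_type": "gamg",   # swap in "hypre" or "ml" to compare
> > }
> > solve(a == L, u, solver_parameters=params)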
> >
> > For the last option, I compared the wall-clock timings for hypre, ml, and
> gamg. Here are the strong-scaling results (up to 64 cores, 8 cores per
> Intel Xeon E5-2670 node):
> >
> > hypre:
> > 1 core: 47.5 s, 12 solver iters
> > 2 cores: 34.1 s, 15 solver iters
> > 4 cores: 21.5 s, 15 solver iters
> > 8 cores: 16.6 s, 15 solver iters
> > 16 cores: 10.2 s, 15 solver iters
> > 24 cores: 7.66 s, 15 solver iters
> > 32 cores: 6.31 s, 15 solver iters
> > 40 cores: 5.68 s, 15 solver iters
> > 48 cores: 5.36 s, 16 solver iters
> > 56 cores: 5.12 s, 16 solver iters
> > 64 cores: 4.99 s, 16 solver iters
> >
> > ml:
> > 1 core: 4.44 s, 14 solver iters
> > 2 cores: 2.85 s, 16 solver iters
> > 4 cores: 1.6 s, 17 solver iters
> > 8 cores: 0.966 s, 17 solver iters
> > 16 cores: 0.585 s, 18 solver iters
> > 24 cores: 0.440 s, 18 solver iters
> > 32 cores: 0.375 s, 18 solver iters
> > 40 cores: 0.332 s, 18 solver iters
> > 48 cores: 0.307 s, 17 solver iters
> > 56 cores: 0.290 s, 18 solver iters
> > 64 cores: 0.281 s, 18 solver iters
> >
> > gamg:
> > 1 core: 613 s, 12 solver iters
> > 2 cores: 204 s, 15 solver iters
> > 4 cores: 77.1 s, 15 solver iters
> > 8 cores: 38.1 s, 15 solver iters
> > 16 cores: 15.9 s, 16 solver iters
> > 24 cores: 9.24 s, 16 solver iters
> > 32 cores: 5.92 s, 16 solver iters
> > 40 cores: 4.72 s, 16 solver iters
> > 48 cores: 3.89 s, 16 solver iters
> > 56 cores: 3.65 s, 16 solver iters
> > 64 cores: 3.46 s, 16 solver iters
> >
> > The performance difference between ML and HYPRE makes sense to me, but
> what I am really confused about is GAMG. It seems GAMG is very slow on a
> single core, but something internally causes it to speed up super-linearly
> as I increase the number of MPI processes. Shouldn't ML and GAMG have
> comparable performance? I am not sure which log outputs to give you, but
> for starters, below are a quick speedup calculation from the timings above
> and the -ksp_view for the single-core case with GAMG:
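> >
> > A back-of-the-envelope check of the super-linear speedup, using only the
> > 1-core and 64-core wall-clock times listed above (nothing beyond those
> > numbers is assumed):
> >
> > # speedup on 64 cores relative to 1 core; ideal would be 64x
> > times = {"hypre": (47.5, 4.99), "ml": (4.44, 0.281), "gamg": (613.0, 3.46)}
> > for pc, (t1, t64) in times.items():
> >     print("%s: %.0fx" % (pc, t1 / t64))
> > # hypre: ~10x, ml: ~16x, gamg: ~177x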
> >
> > KSP Object:(solver_) 1 MPI processes
> >   type: gmres
> >     GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
> >     GMRES: happy breakdown tolerance 1e-30
> >   maximum iterations=10000, initial guess is zero
> >   tolerances:  relative=1e-07, absolute=1e-50, divergence=10000.
> >   left preconditioning
> >   using PRECONDITIONED norm type for convergence test
> > PC Object:(solver_) 1 MPI processes
> >   type: fieldsplit
> >     FieldSplit with Schur preconditioner, factorization UPPER
> >     Preconditioner for the Schur complement formed from Sp, an assembled
> approximation to S, which uses (lumped, if requested) A00's diagonal's
> inverse
> >     Split info:
> >     Split number 0 Defined by IS
> >     Split number 1 Defined by IS
> >     KSP solver for A00 block
> >       KSP Object:      (solver_fieldsplit_0_)       1 MPI processes
> >         type: preonly
> >         maximum iterations=10000, initial guess is zero
> >         tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
> >         left preconditioning
> >         using NONE norm type for convergence test
> >       PC Object:      (solver_fieldsplit_0_)       1 MPI processes
> >         type: bjacobi
> >           block Jacobi: number of blocks = 1
> >           Local solve is same for all blocks, in the following KSP and
> PC objects:
> >           KSP Object:          (solver_fieldsplit_0_sub_)           1
> MPI processes
> >             type: preonly
> >             maximum iterations=10000, initial guess is zero
> >             tolerances:  relative=1e-05, absolute=1e-50,
> divergence=10000.
> >             left preconditioning
> >             using NONE norm type for convergence test
> >           PC Object:          (solver_fieldsplit_0_sub_)           1 MPI
> processes
> >             type: ilu
> >               ILU: out-of-place factorization
> >               0 levels of fill
> >               tolerance for zero pivot 2.22045e-14
> >               matrix ordering: natural
> >               factor fill ratio given 1., needed 1.
> >                 Factored matrix follows:
> >                   Mat Object:                   1 MPI processes
> >                     type: seqaij
> >                     rows=777600, cols=777600
> >                     package used to perform factorization: petsc
> >                     total: nonzeros=5385600, allocated nonzeros=5385600
> >                     total number of mallocs used during MatSetValues
> calls =0
> >                       not using I-node routines
> >             linear system matrix = precond matrix:
> >             Mat Object:            (solver_fieldsplit_0_)             1
> MPI processes
> >               type: seqaij
> >               rows=777600, cols=777600
> >               total: nonzeros=5385600, allocated nonzeros=5385600
> >               total number of mallocs used during MatSetValues calls =0
> >                 not using I-node routines
> >         linear system matrix = precond matrix:
> >         Mat Object:        (solver_fieldsplit_0_)         1 MPI processes
> >           type: seqaij
> >           rows=777600, cols=777600
> >           total: nonzeros=5385600, allocated nonzeros=5385600
> >           total number of mallocs used during MatSetValues calls =0
> >             not using I-node routines
> >     KSP solver for S = A11 - A10 inv(A00) A01
> >       KSP Object:      (solver_fieldsplit_1_)       1 MPI processes
> >         type: preonly
> >         maximum iterations=10000, initial guess is zero
> >         tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
> >         left preconditioning
> >         using NONE norm type for convergence test
> >       PC Object:      (solver_fieldsplit_1_)       1 MPI processes
> >         type: gamg
> >           MG: type is MULTIPLICATIVE, levels=5 cycles=v
> >             Cycles per PCApply=1
> >             Using Galerkin computed coarse grid matrices
> >             GAMG specific options
> >               Threshold for dropping small values from graph 0.
> >               AGG specific options
> >                 Symmetric graph false
> >         Coarse grid solver -- level -------------------------------
> >           KSP Object:          (solver_fieldsplit_1_mg_coarse_)
>  1 MPI processes
> >             type: preonly
> >             maximum iterations=1, initial guess is zero
> >             tolerances:  relative=1e-05, absolute=1e-50,
> divergence=10000.
> >             left preconditioning
> >             using NONE norm type for convergence test
> >           PC Object:          (solver_fieldsplit_1_mg_coarse_)
>  1 MPI processes
> >             type: bjacobi
> >               block Jacobi: number of blocks = 1
> >               Local solve is same for all blocks, in the following KSP
> and PC objects:
> >               KSP Object:
> (solver_fieldsplit_1_mg_coarse_sub_)               1 MPI processes
> >                 type: preonly
> >                 maximum iterations=1, initial guess is zero
> >                 tolerances:  relative=1e-05, absolute=1e-50,
> divergence=10000.
> >                 left preconditioning
> >                 using NONE norm type for convergence test
> >               PC Object:
> (solver_fieldsplit_1_mg_coarse_sub_)               1 MPI processes
> >                 type: lu
> >                   LU: out-of-place factorization
> >                   tolerance for zero pivot 2.22045e-14
> >                   using diagonal shift on blocks to prevent zero pivot
> [INBLOCKS]
> >                   matrix ordering: nd
> >                   factor fill ratio given 5., needed 1.
> >                     Factored matrix follows:
> >                       Mat Object:                       1 MPI processes
> >                         type: seqaij
> >                         rows=9, cols=9
> >                         package used to perform factorization: petsc
> >                         total: nonzeros=81, allocated nonzeros=81
> >                         total number of mallocs used during MatSetValues
> calls =0
> >                           using I-node routines: found 2 nodes, limit
> used is 5
> >                 linear system matrix = precond matrix:
> >                 Mat Object:                 1 MPI processes
> >                   type: seqaij
> >                   rows=9, cols=9
> >                   total: nonzeros=81, allocated nonzeros=81
> >                   total number of mallocs used during MatSetValues calls
> =0
> >                     using I-node routines: found 2 nodes, limit used is 5
> >             linear system matrix = precond matrix:
> >             Mat Object:             1 MPI processes
> >               type: seqaij
> >               rows=9, cols=9
> >               total: nonzeros=81, allocated nonzeros=81
> >               total number of mallocs used during MatSetValues calls =0
> >                 using I-node routines: found 2 nodes, limit used is 5
> >         Down solver (pre-smoother) on level 1
> -------------------------------
> >           KSP Object:          (solver_fieldsplit_1_mg_levels_1_)
>    1 MPI processes
> >             type: chebyshev
> >               Chebyshev: eigenvalue estimates:  min = 0.0999525, max =
> 1.09948
> >               Chebyshev: eigenvalues estimated using gmres with
> translations  [0. 0.1; 0. 1.1]
> >               KSP Object:
> (solver_fieldsplit_1_mg_levels_1_esteig_)               1 MPI processes
> >                 type: gmres
> >                   GMRES: restart=30, using Classical (unmodified)
> Gram-Schmidt Orthogonalization with no iterative refinement
> >                   GMRES: happy breakdown tolerance 1e-30
> >                 maximum iterations=10, initial guess is zero
> >                 tolerances:  relative=1e-12, absolute=1e-50,
> divergence=10000.
> >                 left preconditioning
> >                 using PRECONDITIONED norm type for convergence test
> >             maximum iterations=2
> >             tolerances:  relative=1e-05, absolute=1e-50,
> divergence=10000.
> >             left preconditioning
> >             using nonzero initial guess
> >             using NONE norm type for convergence test
> >           PC Object:          (solver_fieldsplit_1_mg_levels_1_)
>    1 MPI processes
> >             type: sor
> >               SOR: type = local_symmetric, iterations = 1, local
> iterations = 1, omega = 1.
> >             linear system matrix = precond matrix:
> >             Mat Object:             1 MPI processes
> >               type: seqaij
> >               rows=207, cols=207
> >               total: nonzeros=42849, allocated nonzeros=42849
> >               total number of mallocs used during MatSetValues calls =0
> >                 using I-node routines: found 42 nodes, limit used is 5
> >         Up solver (post-smoother) same as down solver (pre-smoother)
> >         Down solver (pre-smoother) on level 2
> -------------------------------
> >           KSP Object:          (solver_fieldsplit_1_mg_levels_2_)
>    1 MPI processes
> >             type: chebyshev
> >               Chebyshev: eigenvalue estimates:  min = 0.0996628, max =
> 1.09629
> >               Chebyshev: eigenvalues estimated using gmres with
> translations  [0. 0.1; 0. 1.1]
> >               KSP Object:
> (solver_fieldsplit_1_mg_levels_2_esteig_)               1 MPI processes
> >                 type: gmres
> >                   GMRES: restart=30, using Classical (unmodified)
> Gram-Schmidt Orthogonalization with no iterative refinement
> >                   GMRES: happy breakdown tolerance 1e-30
> >                 maximum iterations=10, initial guess is zero
> >                 tolerances:  relative=1e-12, absolute=1e-50,
> divergence=10000.
> >                 left preconditioning
> >                 using PRECONDITIONED norm type for convergence test
> >             maximum iterations=2
> >             tolerances:  relative=1e-05, absolute=1e-50,
> divergence=10000.
> >             left preconditioning
> >             using nonzero initial guess
> >             using NONE norm type for convergence test
> >           PC Object:          (solver_fieldsplit_1_mg_levels_2_)
>    1 MPI processes
> >             type: sor
> >               SOR: type = local_symmetric, iterations = 1, local
> iterations = 1, omega = 1.
> >             linear system matrix = precond matrix:
> >             Mat Object:             1 MPI processes
> >               type: seqaij
> >               rows=5373, cols=5373
> >               total: nonzeros=28852043, allocated nonzeros=28852043
> >               total number of mallocs used during MatSetValues calls =0
> >                 using I-node routines: found 1481 nodes, limit used is 5
> >         Up solver (post-smoother) same as down solver (pre-smoother)
> >         Down solver (pre-smoother) on level 3
> -------------------------------
> >           KSP Object:          (solver_fieldsplit_1_mg_levels_3_)
>    1 MPI processes
> >             type: chebyshev
> >               Chebyshev: eigenvalue estimates:  min = 0.0994294, max =
> 1.09372
> >               Chebyshev: eigenvalues estimated using gmres with
> translations  [0. 0.1; 0. 1.1]
> >               KSP Object:
> (solver_fieldsplit_1_mg_levels_3_esteig_)               1 MPI processes
> >                 type: gmres
> >                   GMRES: restart=30, using Classical (unmodified)
> Gram-Schmidt Orthogonalization with no iterative refinement
> >                   GMRES: happy breakdown tolerance 1e-30
> >                 maximum iterations=10, initial guess is zero
> >                 tolerances:  relative=1e-12, absolute=1e-50,
> divergence=10000.
> >                 left preconditioning
> >                 using PRECONDITIONED norm type for convergence test
> >             maximum iterations=2
> >             tolerances:  relative=1e-05, absolute=1e-50,
> divergence=10000.
> >             left preconditioning
> >             using nonzero initial guess
> >             using NONE norm type for convergence test
> >           PC Object:          (solver_fieldsplit_1_mg_levels_3_)
>    1 MPI processes
> >             type: sor
> >               SOR: type = local_symmetric, iterations = 1, local
> iterations = 1, omega = 1.
> >             linear system matrix = precond matrix:
> >             Mat Object:             1 MPI processes
> >               type: seqaij
> >               rows=52147, cols=52147
> >               total: nonzeros=38604909, allocated nonzeros=38604909
> >               total number of mallocs used during MatSetValues calls =2
> >                 not using I-node routines
> >         Up solver (post-smoother) same as down solver (pre-smoother)
> >         Down solver (pre-smoother) on level 4
> -------------------------------
> >           KSP Object:          (solver_fieldsplit_1_mg_levels_4_)
>    1 MPI processes
> >             type: chebyshev
> >               Chebyshev: eigenvalue estimates:  min = 0.158979, max =
> 1.74876
> >               Chebyshev: eigenvalues estimated using gmres with
> translations  [0. 0.1; 0. 1.1]
> >               KSP Object:
> (solver_fieldsplit_1_mg_levels_4_esteig_)               1 MPI processes
> >                 type: gmres
> >                   GMRES: restart=30, using Classical (unmodified)
> Gram-Schmidt Orthogonalization with no iterative refinement
> >                   GMRES: happy breakdown tolerance 1e-30
> >                 maximum iterations=10, initial guess is zero
> >                 tolerances:  relative=1e-12, absolute=1e-50,
> divergence=10000.
> >                 left preconditioning
> >                 using PRECONDITIONED norm type for convergence test
> >             maximum iterations=2
> >             tolerances:  relative=1e-05, absolute=1e-50,
> divergence=10000.
> >             left preconditioning
> >             using nonzero initial guess
> >             using NONE norm type for convergence test
> >           PC Object:          (solver_fieldsplit_1_mg_levels_4_)
>    1 MPI processes
> >             type: sor
> >               SOR: type = local_symmetric, iterations = 1, local
> iterations = 1, omega = 1.
> >             linear system matrix followed by preconditioner matrix:
> >             Mat Object:            (solver_fieldsplit_1_)             1
> MPI processes
> >               type: schurcomplement
> >               rows=384000, cols=384000
> >                 Schur complement A11 - A10 inv(A00) A01
> >                 A11
> >                   Mat Object:                  (solver_fieldsplit_1_)
>                1 MPI processes
> >                     type: seqaij
> >                     rows=384000, cols=384000
> >                     total: nonzeros=384000, allocated nonzeros=384000
> >                     total number of mallocs used during MatSetValues
> calls =0
> >                       not using I-node routines
> >                 A10
> >                   Mat Object:                   1 MPI processes
> >                     type: seqaij
> >                     rows=384000, cols=777600
> >                     total: nonzeros=1919999, allocated nonzeros=1919999
> >                     total number of mallocs used during MatSetValues
> calls =0
> >                       not using I-node routines
> >                 KSP of A00
> >                   KSP Object:                  (solver_fieldsplit_0_)
>                1 MPI processes
> >                     type: preonly
> >                     maximum iterations=10000, initial guess is zero
> >                     tolerances:  relative=1e-05, absolute=1e-50,
> divergence=10000.
> >                     left preconditioning
> >                     using NONE norm type for convergence test
> >                   PC Object:                  (solver_fieldsplit_0_)
>                1 MPI processes
> >                     type: bjacobi
> >                       block Jacobi: number of blocks = 1
> >                       Local solve is same for all blocks, in the
> following KSP and PC objects:
> >                       KSP Object:
> (solver_fieldsplit_0_sub_)                       1 MPI processes
> >                         type: preonly
> >                         maximum iterations=10000, initial guess is zero
> >                         tolerances:  relative=1e-05, absolute=1e-50,
> divergence=10000.
> >                         left preconditioning
> >                         using NONE norm type for convergence test
> >                       PC Object:
> (solver_fieldsplit_0_sub_)                       1 MPI processes
> >                         type: ilu
> >                           ILU: out-of-place factorization
> >                           0 levels of fill
> >                           tolerance for zero pivot 2.22045e-14
> >                           matrix ordering: natural
> >                           factor fill ratio given 1., needed 1.
> >                             Factored matrix follows:
> >                               Mat Object:
>  1 MPI processes
> >                                 type: seqaij
> >                                 rows=777600, cols=777600
> >                                 package used to perform factorization:
> petsc
> >                                 total: nonzeros=5385600, allocated
> nonzeros=5385600
> >                                 total number of mallocs used during
> MatSetValues calls =0
> >                                   not using I-node routines
> >                         linear system matrix = precond matrix:
> >                         Mat Object:
> (solver_fieldsplit_0_)                         1 MPI processes
> >                           type: seqaij
> >                           rows=777600, cols=777600
> >                           total: nonzeros=5385600, allocated
> nonzeros=5385600
> >                           total number of mallocs used during
> MatSetValues calls =0
> >                             not using I-node routines
> >                     linear system matrix = precond matrix:
> >                     Mat Object:
> (solver_fieldsplit_0_)                     1 MPI processes
> >                       type: seqaij
> >                       rows=777600, cols=777600
> >                       total: nonzeros=5385600, allocated nonzeros=5385600
> >                       total number of mallocs used during MatSetValues
> calls =0
> >                         not using I-node routines
> >                 A01
> >                   Mat Object:                   1 MPI processes
> >                     type: seqaij
> >                     rows=777600, cols=384000
> >                     total: nonzeros=1919999, allocated nonzeros=1919999
> >                     total number of mallocs used during MatSetValues
> calls =0
> >                       not using I-node routines
> >             Mat Object:             1 MPI processes
> >               type: seqaij
> >               rows=384000, cols=384000
> >               total: nonzeros=3416452, allocated nonzeros=3416452
> >               total number of mallocs used during MatSetValues calls =0
> >                 not using I-node routines
> >         Up solver (post-smoother) same as down solver (pre-smoother)
> >         linear system matrix followed by preconditioner matrix:
> >         Mat Object:        (solver_fieldsplit_1_)         1 MPI processes
> >           type: schurcomplement
> >           rows=384000, cols=384000
> >             Schur complement A11 - A10 inv(A00) A01
> >             A11
> >               Mat Object:              (solver_fieldsplit_1_)
>    1 MPI processes
> >                 type: seqaij
> >                 rows=384000, cols=384000
> >                 total: nonzeros=384000, allocated nonzeros=384000
> >                 total number of mallocs used during MatSetValues calls =0
> >                   not using I-node routines
> >             A10
> >               Mat Object:               1 MPI processes
> >                 type: seqaij
> >                 rows=384000, cols=777600
> >                 total: nonzeros=1919999, allocated nonzeros=1919999
> >                 total number of mallocs used during MatSetValues calls =0
> >                   not using I-node routines
> >             KSP of A00
> >               KSP Object:              (solver_fieldsplit_0_)
>    1 MPI processes
> >                 type: preonly
> >                 maximum iterations=10000, initial guess is zero
> >                 tolerances:  relative=1e-05, absolute=1e-50,
> divergence=10000.
> >                 left preconditioning
> >                 using NONE norm type for convergence test
> >               PC Object:              (solver_fieldsplit_0_)
>    1 MPI processes
> >                 type: bjacobi
> >                   block Jacobi: number of blocks = 1
> >                   Local solve is same for all blocks, in the following
> KSP and PC objects:
> >                   KSP Object:
> (solver_fieldsplit_0_sub_)                   1 MPI processes
> >                     type: preonly
> >                     maximum iterations=10000, initial guess is zero
> >                     tolerances:  relative=1e-05, absolute=1e-50,
> divergence=10000.
> >                     left preconditioning
> >                     using NONE norm type for convergence test
> >                   PC Object:
> (solver_fieldsplit_0_sub_)                   1 MPI processes
> >                     type: ilu
> >                       ILU: out-of-place factorization
> >                       0 levels of fill
> >                       tolerance for zero pivot 2.22045e-14
> >                       matrix ordering: natural
> >                       factor fill ratio given 1., needed 1.
> >                         Factored matrix follows:
> >                           Mat Object:                           1 MPI
> processes
> >                             type: seqaij
> >                             rows=777600, cols=777600
> >                             package used to perform factorization: petsc
> >                             total: nonzeros=5385600, allocated
> nonzeros=5385600
> >                             total number of mallocs used during
> MatSetValues calls =0
> >                               not using I-node routines
> >                     linear system matrix = precond matrix:
> >                     Mat Object:
> (solver_fieldsplit_0_)                     1 MPI processes
> >                       type: seqaij
> >                       rows=777600, cols=777600
> >                       total: nonzeros=5385600, allocated nonzeros=5385600
> >                       total number of mallocs used during MatSetValues
> calls =0
> >                         not using I-node routines
> >                 linear system matrix = precond matrix:
> >                 Mat Object:                (solver_fieldsplit_0_)
>          1 MPI processes
> >                   type: seqaij
> >                   rows=777600, cols=777600
> >                   total: nonzeros=5385600, allocated nonzeros=5385600
> >                   total number of mallocs used during MatSetValues calls
> =0
> >                     not using I-node routines
> >             A01
> >               Mat Object:               1 MPI processes
> >                 type: seqaij
> >                 rows=777600, cols=384000
> >                 total: nonzeros=1919999, allocated nonzeros=1919999
> >                 total number of mallocs used during MatSetValues calls =0
> >                   not using I-node routines
> >         Mat Object:         1 MPI processes
> >           type: seqaij
> >           rows=384000, cols=384000
> >           total: nonzeros=3416452, allocated nonzeros=3416452
> >           total number of mallocs used during MatSetValues calls =0
> >             not using I-node routines
> >   linear system matrix = precond matrix:
> >   Mat Object:   1 MPI processes
> >     type: nest
> >     rows=1161600, cols=1161600
> >       Matrix object:
> >         type=nest, rows=2, cols=2
> >         MatNest structure:
> >         (0,0) : prefix="solver_fieldsplit_0_", type=seqaij, rows=777600,
> cols=777600
> >         (0,1) : type=seqaij, rows=777600, cols=384000
> >         (1,0) : type=seqaij, rows=384000, cols=777600
> >         (1,1) : prefix="solver_fieldsplit_1_", type=seqaij, rows=384000,
> cols=384000
> >
> > Any insight as to what's happening? BTW, this firedrake/petsc-mapdes is
> from way back in October 2015 (yes, much has changed since, but
> reinstalling/updating firedrake and petsc on LANL's firewalled HPC machines
> is a big pain in the ass).
> >
> > Thanks,
> > Justin
>
>
-------------- next part --------------
=================
 gamg 40 1
=================
Discretization: RT
MPI processes 1: solving... 
((1161600, 1161600), (1161600, 1161600))
	Solver time: 6.176223e+02
	Solver iterations: 12
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 1 processor, by jychang48 Wed Mar  2 17:54:07 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           6.314e+02      1.00000   6.314e+02
Objects:              4.870e+02      1.00000   4.870e+02
Flops:                2.621e+11      1.00000   2.621e+11  2.621e+11
Flops/sec:            4.151e+08      1.00000   4.151e+08  4.151e+08
MPI Messages:         0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Reductions:       0.000e+00      0.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.3802e+01   2.2%  0.0000e+00   0.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0% 
 1:             FEM: 6.1762e+02  97.8%  2.6212e+11 100.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

VecSet                 8 1.0 1.5719e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin        2 1.0 2.5649e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DMPlexInterp           1 1.0 2.0957e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0  15  0  0  0  0     0
DMPlexStratify         4 1.0 5.1086e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   4  0  0  0  0     0
SFSetGraph             7 1.0 2.6373e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

VecMDot               90 1.0 9.1943e-02 1.0 2.78e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  3028
VecNorm               99 1.0 2.5040e-02 1.0 4.96e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1982
VecScale             187 1.0 4.3765e-02 1.0 6.37e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1456
VecCopy               61 1.0 1.4966e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               585 1.0 3.6946e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                9 1.0 2.4149e-03 1.0 4.09e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1694
VecAYPX              416 1.0 5.2025e-02 1.0 5.74e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1104
VecAXPBYCZ           208 1.0 3.7029e-02 1.0 1.15e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  3102
VecMAXPY              99 1.0 1.0790e-01 1.0 3.24e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  3002
VecPointwiseMult      44 1.0 7.9148e-03 1.0 4.86e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   614
VecScatterBegin       58 1.0 6.4158e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSetRandom           4 1.0 1.7824e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          99 1.0 4.0050e-02 1.0 7.45e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1859
MatMult              415 1.0 1.0716e+01 1.0 1.50e+10 1.0 0.0e+00 0.0e+00 0.0e+00  2  6  0  0  0   2  6  0  0  0  1398
MatMultAdd           175 1.0 6.3934e-01 1.0 8.98e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1404
MatMultTranspose      52 1.0 4.7943e-01 1.0 6.09e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1271
MatSolve             101 1.0 9.7955e-01 1.0 8.79e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   898
MatSOR               354 1.0 8.2436e+00 1.0 1.26e+10 1.0 0.0e+00 0.0e+00 0.0e+00  1  5  0  0  0   1  5  0  0  0  1531
MatLUFactorSym         1 1.0 1.5020e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 6.5916e-02 1.0 1.81e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   275
MatILUFactorSym        1 1.0 5.1070e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             5 1.0 5.2860e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale              14 1.0 2.0680e-01 1.0 1.94e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   938
MatResidual           52 1.0 1.2122e+00 1.0 1.84e+09 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  1521
MatAssemblyBegin      41 1.0 3.8147e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd        41 1.0 2.9745e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRow        2093181 1.0 1.3037e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRowIJ            2 1.0 1.0967e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        4 1.0 4.8364e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering         2 1.0 5.2271e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatCoarsen             4 1.0 2.7666e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                5 1.0 1.6762e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatMatMult             5 1.0 2.0927e+00 1.0 1.79e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0    85
MatMatMultSym          5 1.0 1.5816e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatMatMultNum          5 1.0 5.1102e-01 1.0 1.79e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   349
MatPtAP                4 1.0 5.8895e+02 1.0 2.31e+11 1.0 0.0e+00 0.0e+00 0.0e+00 93 88  0  0  0  95 88  0  0  0   392
MatPtAPSymbolic        4 1.0 3.6005e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 57  0  0  0  0  58  0  0  0  0     0
MatPtAPNumeric         4 1.0 2.2890e+02 1.0 2.31e+11 1.0 0.0e+00 0.0e+00 0.0e+00 36 88  0  0  0  37 88  0  0  0  1010
MatTrnMatMult          1 1.0 1.8922e-01 1.0 1.89e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   100
MatTrnMatMultSym       1 1.0 1.3319e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatTrnMatMultNum       1 1.0 5.6026e-02 1.0 1.89e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   337
MatGetSymTrans         5 1.0 2.6924e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
PCGAMGGraph_AGG        4 1.0 2.3435e+00 1.0 1.42e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0    61
PCGAMGCoarse_AGG       4 1.0 2.7229e-01 1.0 1.89e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0    69
PCGAMGProl_AGG         4 1.0 7.6182e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
PCGAMGPOpt_AGG         4 1.0 3.7438e+00 1.0 1.75e+09 1.0 0.0e+00 0.0e+00 0.0e+00  1  1  0  0  0   1  1  0  0  0   468
GAMG: createProl       4 1.0 6.4585e+00 1.0 1.91e+09 1.0 0.0e+00 0.0e+00 0.0e+00  1  1  0  0  0   1  1  0  0  0   296
  Graph                8 1.0 2.3078e+00 1.0 1.42e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0    61
  MIS/Agg              4 1.0 2.7763e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
  SA: col data         4 1.0 9.4485e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
  SA: frmProl0         4 1.0 7.2291e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
  SA: smooth           4 1.0 3.7438e+00 1.0 1.75e+09 1.0 0.0e+00 0.0e+00 0.0e+00  1  1  0  0  0   1  1  0  0  0   468
GAMG: partLevel        4 1.0 5.8895e+02 1.0 2.31e+11 1.0 0.0e+00 0.0e+00 0.0e+00 93 88  0  0  0  95 88  0  0  0   392
PCSetUp                5 1.0 5.9682e+02 1.0 2.33e+11 1.0 0.0e+00 0.0e+00 0.0e+00 95 89  0  0  0  97 89  0  0  0   390
PCSetUpOnBlocks      101 1.0 1.2241e-01 1.0 1.81e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   148
PCApply               13 1.0 6.1450e+02 1.0 2.61e+11 1.0 0.0e+00 0.0e+00 0.0e+00 97 99  0  0  0  99 99  0  0  0   424
KSPGMRESOrthog        90 1.0 1.8531e-01 1.0 5.57e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  3004
KSPSetUp              18 1.0 3.1746e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 6.1623e+02 1.0 2.61e+11 1.0 0.0e+00 0.0e+00 0.0e+00 98100  0  0  0 100100  0  0  0   424
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
         PetscRandom     0              1          646     0.
           Index Set    22             25     38642096     0.
             Section    26              8         5376     0.
              Vector    13            119    258907968     0.
      Vector Scatter     2              6         3984     0.
              Matrix     0             12    136713716     0.
      Preconditioner     0             11        11092     0.
       Krylov Solver     0             15       151752     0.
    Distributed Mesh    10              4        19256     0.
    GraphPartitioner     4              3         1836     0.
Star Forest Bipartite Graph    23             12         9696     0.
     Discrete System    10              4         3456     0.

--- Event Stage 1: FEM

         PetscRandom     1              0            0     0.
           Index Set    28             18        14128     0.
   IS L to G Mapping     4              0            0     0.
              Vector   254            139     93471128     0.
      Vector Scatter     6              0            0     0.
              Matrix    33             16   1768633272     0.
      Matrix Coarsen     4              4         2544     0.
      Preconditioner    20              9         8536     0.
       Krylov Solver    20              5       122312     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type gamg
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 gamg 40 1
=================
Discretization: RT
MPI processes 2: solving... 
((579051, 1161600), (579051, 1161600))
	Solver time: 2.062817e+02
	Solver iterations: 15
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 2 processors, by jychang48 Wed Mar  2 17:57:49 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           2.205e+02      1.00001   2.205e+02
Objects:              9.470e+02      1.01719   9.390e+02
Flops:                1.113e+11      1.14917   1.041e+11  2.082e+11
Flops/sec:            5.049e+08      1.14915   4.722e+08  9.444e+08
MPI Messages:         1.065e+03      1.06928   1.030e+03  2.061e+03
MPI Message Lengths:  6.096e+08      1.34148   5.163e+05  1.064e+09
MPI Reductions:       1.005e+03      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4211e+01   6.4%  0.0000e+00   0.0%  5.020e+02  24.4%  3.027e+05       58.6%  1.250e+02  12.4% 
 1:             FEM: 2.0628e+02  93.6%  2.0822e+11 100.0%  1.559e+03  75.6%  2.135e+05       41.4%  8.790e+02  87.5% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 8.1745e-01 9.9 0.00e+00 0.0 1.2e+02 4.0e+00 4.4e+01  0  0  6  0  4   3  0 24  0 35     0
VecScatterBegin        2 1.0 1.5371e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 3.0994e-06 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 1.6335e+00 1.1 0.00e+00 0.0 9.2e+01 5.5e+05 2.1e+01  1  0  4  5  2  11  0 18  8 17     0
Mesh Migration         2 1.0 1.6901e+00 1.0 0.00e+00 0.0 3.7e+02 1.4e+06 5.4e+01  1  0 18 49  5  12  0 75 83 43     0
DMPlexInterp           1 1.0 2.0401e+0041740.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   7  0  0  0  0     0
DMPlexDistribute       1 1.0 2.1610e+00 1.1 0.00e+00 0.0 1.7e+02 1.9e+06 2.5e+01  1  0  8 30  2  15  0 33 50 20     0
DMPlexDistCones        2 1.0 3.6283e-01 1.0 0.00e+00 0.0 5.4e+01 3.2e+06 4.0e+00  0  0  3 16  0   3  0 11 28  3     0
DMPlexDistLabels       2 1.0 8.6425e-01 1.0 0.00e+00 0.0 2.4e+02 1.2e+06 2.2e+01  0  0 12 28  2   6  0 48 47 18     0
DMPlexDistribOL        1 1.0 1.1811e+00 1.0 0.00e+00 0.0 3.1e+02 9.6e+05 5.0e+01  1  0 15 28  5   8  0 61 48 40     0
DMPlexDistField        3 1.0 4.4417e-02 1.1 0.00e+00 0.0 6.2e+01 3.5e+05 1.2e+01  0  0  3  2  1   0  0 12  3 10     0
DMPlexDistData         2 1.0 8.4055e-0126.5 0.00e+00 0.0 5.4e+01 4.0e+05 6.0e+00  0  0  3  2  1   3  0 11  3  5     0
DMPlexStratify         6 1.5 7.7436e-01 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   4  0  0  0  0     0
SFSetGraph            51 1.0 4.1998e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   3  0  0  0  0     0
SFBcastBegin          95 1.0 9.1913e-01 3.3 0.00e+00 0.0 4.8e+02 1.2e+06 4.1e+01  0  0 23 56  4   4  0 96 96 33     0
SFBcastEnd            95 1.0 3.4255e-01 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 3.0928e-03 1.2 0.00e+00 0.0 1.1e+01 1.3e+06 3.0e+00  0  0  1  1  0   0  0  2  2  2     0
SFReduceEnd            4 1.0 5.1863e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 3.0994e-05 6.2 0.00e+00 0.0 1.0e+00 4.2e+04 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 5.9104e-04 3.8 0.00e+00 0.0 1.0e+00 4.2e+04 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided         17 1.0 3.1600e-03 6.4 0.00e+00 0.0 1.0e+01 4.0e+00 1.7e+01  0  0  0  0  2   0  0  1  0  2     0
BuildTwoSidedF        12 1.0 1.7691e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecMDot               95 1.0 7.7544e-02 1.2 1.88e+08 1.0 0.0e+00 0.0e+00 9.5e+01  0  0  0  0  9   0  0  0  0 11  4849
VecNorm              104 1.0 1.8694e-02 1.1 2.84e+07 1.0 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 12  3028
VecScale             210 1.0 2.2894e-02 1.0 3.77e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  3289
VecCopy               73 1.0 7.2665e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               563 1.0 1.2248e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                9 1.0 1.0741e-03 1.0 2.05e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  3808
VecAYPX              512 1.0 3.0463e-02 1.0 3.54e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2321
VecAXPBYCZ           256 1.0 2.1654e-02 1.0 7.08e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  6529
VecMAXPY             104 1.0 6.4986e-02 1.0 2.15e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  6594
VecAssemblyBegin      13 1.0 2.4629e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecAssemblyEnd        13 1.0 2.4557e-05 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecPointwiseMult      44 1.0 3.5381e-03 1.0 2.43e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1374
VecScatterBegin      936 1.0 4.1796e-02 1.0 0.00e+00 0.0 1.3e+03 2.4e+04 0.0e+00  0  0 61  3  0   0  0 81  7  0     0
VecScatterEnd        936 1.0 4.6248e-01 3.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSetRandom           4 1.0 8.8727e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize         104 1.0 2.7498e-02 1.0 4.26e+07 1.0 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 12  3088
MatMult              495 1.0 6.5382e+00 1.0 9.15e+09 1.0 1.0e+03 2.8e+04 1.2e+02  3  9 49  3 12   3  9 64  6 14  2794
MatMultAdd           214 1.0 3.9983e-01 1.1 5.66e+08 1.1 1.7e+02 1.6e+04 0.0e+00  0  1  8  0  0   0  1 11  1  0  2751
MatMultTranspose      64 1.0 2.8789e-01 1.1 3.88e+08 1.1 1.1e+02 5.9e+03 0.0e+00  0  0  5  0  0   0  0  7  0  0  2580
MatSolve             122 1.2 6.5441e-01 1.0 5.30e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  1612
MatSOR               428 1.0 4.2707e+00 1.0 6.26e+09 1.0 0.0e+00 0.0e+00 0.0e+00  2  6  0  0  0   2  6  0  0  0  2925
MatLUFactorSym         1 1.0 1.5974e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 3.4417e-02 1.0 9.09e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   525
MatILUFactorSym        1 1.0 2.5373e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             5 1.0 2.6755e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale              14 1.0 1.0400e-01 1.0 9.99e+07 1.0 8.0e+00 2.8e+04 0.0e+00  0  0  0  0  0   0  0  1  0  0  1903
MatResidual           64 1.0 7.6395e-01 1.0 1.17e+09 1.0 1.3e+02 2.8e+04 0.0e+00  0  1  6  0  0   0  1  8  1  0  3061
MatAssemblyBegin      89 1.0 1.1247e+0138.9 0.00e+00 0.0 3.9e+01 5.9e+06 5.4e+01  3  0  2 22  5   3  0  3 52  6     0
MatAssemblyEnd        89 1.0 1.0761e+00 1.0 0.00e+00 0.0 8.0e+01 4.3e+03 2.0e+02  0  0  4  0 20   1  0  5  0 23     0
MatGetRow        1047432 1.0 1.0699e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
MatGetRowIJ            2 2.0 8.1062e-06 4.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        6 1.0 2.5043e-01 1.0 0.00e+00 0.0 7.0e+00 1.0e+02 4.0e+01  0  0  0  0  4   0  0  0  0  5     0
MatGetOrdering         2 2.0 2.4710e-03 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatCoarsen             4 1.0 1.5470e-02 1.1 0.00e+00 0.0 4.8e+01 8.1e+03 1.2e+01  0  0  2  0  1   0  0  3  0  1     0
MatZeroEntries         4 1.0 1.3583e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                5 1.0 1.2150e+00 1.0 0.00e+00 0.0 4.0e+00 6.7e+03 2.0e+01  1  0  0  0  2   1  0  0  0  2     0
MatMatMult             5 1.0 7.4322e+00 1.0 7.81e+07 1.0 5.6e+01 1.5e+04 8.0e+01  3  0  3  0  8   4  0  4  0  9    21
MatMatMultSym          5 1.0 7.1974e+00 1.0 0.00e+00 0.0 4.7e+01 1.3e+04 7.0e+01  3  0  2  0  7   3  0  3  0  8     0
MatMatMultNum          5 1.0 2.3470e-01 1.0 7.81e+07 1.0 9.0e+00 3.1e+04 1.0e+01  0  0  0  0  1   0  0  1  0  1   665
MatPtAP                4 1.0 1.8275e+02 1.0 9.39e+10 1.2 8.8e+01 4.3e+06 6.8e+01 83 83  4 36  7  89 83  6 86  8   949
MatPtAPSymbolic        4 1.0 1.1582e+02 1.0 0.00e+00 0.0 4.8e+01 3.1e+06 2.8e+01 53  0  2 14  3  56  0  3 34  3     0
MatPtAPNumeric         4 1.0 6.6931e+01 1.0 9.39e+10 1.2 4.0e+01 5.8e+06 4.0e+01 30 83  2 22  4  32 83  3 52  5  2592
MatTrnMatMult          1 1.0 2.4068e-01 1.0 9.46e+06 1.0 1.2e+01 4.3e+04 1.9e+01  0  0  1  0  2   0  0  1  0  2    79
MatTrnMatMultSym       1 1.0 1.5422e-01 1.0 0.00e+00 0.0 1.0e+01 2.4e+04 1.7e+01  0  0  0  0  2   0  0  1  0  2     0
MatTrnMatMultNum       1 1.0 8.6440e-02 1.0 9.46e+06 1.0 2.0e+00 1.4e+05 2.0e+00  0  0  0  0  0   0  0  0  0  0   219
MatGetLocalMat        16 1.0 1.1441e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol         14 1.0 7.4965e-02 1.0 0.00e+00 0.0 6.0e+01 1.5e+06 0.0e+00  0  0  3  9  0   0  0  4 21  0     0
PCGAMGGraph_AGG        4 1.0 1.5076e+00 1.0 7.32e+07 1.0 2.4e+01 1.1e+04 4.8e+01  1  0  1  0  5   1  0  2  0  5    97
PCGAMGCoarse_AGG       4 1.0 2.9894e-01 1.0 9.46e+06 1.0 7.0e+01 2.0e+04 3.9e+01  0  0  3  0  4   0  0  4  0  4    63
PCGAMGProl_AGG         4 1.0 5.3907e-02 1.0 0.00e+00 0.0 4.1e+01 8.8e+03 8.0e+01  0  0  2  0  8   0  0  3  0  9     0
PCGAMGPOpt_AGG         4 1.0 8.3099e+00 1.0 8.88e+08 1.0 1.3e+02 2.3e+04 1.9e+02  4  1  6  0 19   4  1  8  1 21   214
GAMG: createProl       4 1.0 1.0182e+01 1.0 9.71e+08 1.0 2.6e+02 1.9e+04 3.6e+02  5  1 13  0 35   5  1 17  1 40   191
  Graph                8 1.0 1.4890e+00 1.0 7.32e+07 1.0 2.4e+01 1.1e+04 4.8e+01  1  0  1  0  5   1  0  2  0  5    98
  MIS/Agg              4 1.0 1.5540e-02 1.0 0.00e+00 0.0 4.8e+01 8.1e+03 1.2e+01  0  0  2  0  1   0  0  3  0  1     0
  SA: col data         4 1.0 1.1244e-02 1.0 0.00e+00 0.0 1.6e+01 1.9e+04 2.4e+01  0  0  1  0  2   0  0  1  0  3     0
  SA: frmProl0         4 1.0 3.9084e-02 1.0 0.00e+00 0.0 2.5e+01 2.0e+03 4.0e+01  0  0  1  0  4   0  0  2  0  5     0
  SA: smooth           4 1.0 8.3098e+00 1.0 8.88e+08 1.0 1.3e+02 2.3e+04 1.9e+02  4  1  6  0 19   4  1  8  1 21   214
GAMG: partLevel        4 1.0 1.8298e+02 1.0 9.39e+10 1.2 9.8e+01 3.9e+06 1.2e+02 83 83  5 36 12  89 83  6 86 14   948
  repartition          1 1.0 2.5988e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  1     0
  Invert-Sort          1 1.0 1.8120e-05 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  0   0  0  0  0  0     0
  Move A               1 1.0 2.2859e-01 1.0 0.00e+00 0.0 5.0e+00 1.3e+02 1.8e+01  0  0  0  0  2   0  0  0  0  2     0
  Move P               1 1.0 1.9908e-04 1.0 0.00e+00 0.0 2.0e+00 2.2e+01 1.8e+01  0  0  0  0  2   0  0  0  0  2     0
PCSetUp                5 1.0 1.9420e+02 1.0 9.49e+10 1.2 3.8e+02 1.0e+06 5.6e+02 88 84 18 37 56  94 84 24 89 64   903
PCSetUpOnBlocks      122 1.0 6.2164e-02 1.0 9.09e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   291
PCApply               16 1.0 2.0400e+02 1.0 1.10e+11 1.2 1.4e+03 2.8e+05 5.9e+02 93 99 70 38 58  99 99 93 93 67  1012
KSPGMRESOrthog        95 1.0 1.3511e-01 1.1 3.77e+08 1.0 0.0e+00 0.0e+00 9.5e+01  0  0  0  0  9   0  0  0  0 11  5566
KSPSetUp              18 1.0 1.5130e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
KSPSolve               1 1.0 2.0527e+02 1.0 1.11e+11 1.1 1.5e+03 2.8e+05 8.0e+02 93100 74 40 80 100100 98 96 91  1010
SFSetGraph             4 1.0 1.7595e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          17 1.0 3.2637e-03 5.2 0.00e+00 0.0 5.4e+01 1.2e+04 5.0e+00  0  0  3  0  0   0  0  3  0  1     0
SFBcastEnd            17 1.0 8.5759e-04 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
         PetscRandom     0              1          646     0.
           Index Set    79             84     49128340     0.
   IS L to G Mapping     3              3     23945692     0.
             Section    70             53        35616     0.
              Vector    15            141    180695472     0.
      Vector Scatter     2             15     13913664     0.
              Matrix     0             59    790177488     0.
      Preconditioner     0             11        11020     0.
       Krylov Solver     0             15       151752     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

         PetscRandom     1              0            0     0.
           Index Set    88             76       217792     0.
   IS L to G Mapping     4              0            0     0.
              Vector   346            208     53912024     0.
      Vector Scatter    37             18        19768     0.
              Matrix   137             65   1266465532     0.
      Matrix Coarsen     4              4         2544     0.
      Preconditioner    21             10         8944     0.
       Krylov Solver    21              6       123480     0.
Star Forest Bipartite Graph     4              4         3456     0.
========================================================================================================================
Average time to get PetscTime(): 8.10623e-07
Average time for MPI_Barrier(): 8.10623e-07
Average time for zero size MPI_Send(): 2.98023e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type gamg
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
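
For reference, the same options database could be populated straight from Python with petsc4py; the sketch below is only an equivalent way of setting the entries listed above (Firedrake normally injects them itself from the solver_parameters dict, and "solver_" is just the options prefix attached to this solve), not the code that produced this log:

from petsc4py import PETSc

opts = PETSc.Options()
# -log_summary was given on the command line; the rest mirrors the table above
opts["solver_ksp_type"] = "gmres"
opts["solver_ksp_rtol"] = 1e-7
opts["solver_pc_type"] = "fieldsplit"
opts["solver_pc_fieldsplit_type"] = "schur"
opts["solver_pc_fieldsplit_schur_fact_type"] = "upper"
opts["solver_pc_fieldsplit_schur_precondition"] = "selfp"
opts["solver_fieldsplit_0_ksp_type"] = "preonly"
opts["solver_fieldsplit_0_pc_type"] = "bjacobi"
opts["solver_fieldsplit_1_ksp_type"] = "preonly"
opts["solver_fieldsplit_1_pc_type"] = "gamg"
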
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 gamg 40 1
=================
Discretization: RT
MPI processes 4: solving... 
((288348, 1161600), (288348, 1161600))
	Solver time: 7.881676e+01
	Solver iterations: 15
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 4 processors, by jychang48 Wed Mar  2 17:59:23 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           9.180e+01      1.00003   9.180e+01
Objects:              9.550e+02      1.02358   9.390e+02
Flops:                3.967e+10      1.06829   3.820e+10  1.528e+11
Flops/sec:            4.321e+08      1.06831   4.161e+08  1.664e+09
MPI Messages:         2.896e+03      1.22894   2.581e+03  1.032e+04
MPI Message Lengths:  6.146e+08      1.57029   1.801e+05  1.860e+09
MPI Reductions:       1.010e+03      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.2979e+01  14.1%  0.0000e+00   0.0%  1.654e+03  16.0%  6.288e+04       34.9%  1.250e+02  12.4% 
 1:             FEM: 7.8817e+01  85.9%  1.5278e+11 100.0%  8.671e+03  84.0%  1.173e+05       65.1%  8.840e+02  87.5% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 9.2481e-01 16.9 0.00e+00 0.0 3.9e+02 4.0e+00 4.4e+01  1  0  4  0  4   5  0 24  0 35     0
VecScatterBegin        2 1.0 6.7115e-04 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 5.2452e-06 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 1.7374e+00 1.1 0.00e+00 0.0 3.8e+02 1.4e+05 2.1e+01  2  0  4  3  2  13  0 23  8 17     0
Mesh Migration         2 1.0 1.0188e+00 1.0 0.00e+00 0.0 1.1e+03 4.7e+05 5.4e+01  1  0 11 29  5   8  0 69 83 43     0
DMPlexInterp           1 1.0 2.0553e+00 52887.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   4  0  0  0  0     0
DMPlexDistribute       1 1.0 2.0098e+00 1.1 0.00e+00 0.0 3.9e+02 8.1e+05 2.5e+01  2  0  4 17  2  15  0 23 49 20     0
DMPlexDistCones        2 1.0 2.3204e-01 1.0 0.00e+00 0.0 1.6e+02 1.1e+06 4.0e+00  0  0  2 10  0   2  0 10 28  3     0
DMPlexDistLabels       2 1.0 5.4382e-01 1.0 0.00e+00 0.0 7.2e+02 4.2e+05 2.2e+01  1  0  7 16  2   4  0 44 47 18     0
DMPlexDistribOL        1 1.0 7.6249e-01 1.0 0.00e+00 0.0 1.2e+03 2.8e+05 5.0e+01  1  0 11 17  5   6  0 70 49 40     0
DMPlexDistField        3 1.0 2.7687e-02 1.2 0.00e+00 0.0 2.0e+02 1.1e+05 1.2e+01  0  0  2  1  1   0  0 12  4 10     0
DMPlexDistData         2 1.0 9.2141e-01 40.2 0.00e+00 0.0 2.2e+02 1.0e+05 6.0e+00  1  0  2  1  1   5  0 14  4  5     0
DMPlexStratify         6 1.5 6.4549e-01 4.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   2  0  0  0  0     0
SFSetGraph            51 1.0 2.4213e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   2  0  0  0  0     0
SFBcastBegin          95 1.0 9.8293e-01 4.6 0.00e+00 0.0 1.6e+03 4.0e+05 4.1e+01  1  0 15 34  4   6  0 95 96 33     0
SFBcastEnd            95 1.0 3.0583e-01 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 2.8150e-03 2.0 0.00e+00 0.0 4.9e+01 2.9e+05 3.0e+00  0  0  0  1  0   0  0  3  2  2     0
SFReduceEnd            4 1.0 6.0437e-03 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 3.9101e-05 7.8 0.00e+00 0.0 5.0e+00 1.7e+04 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 2.9993e-04 2.9 0.00e+00 0.0 5.0e+00 1.7e+04 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided         17 1.0 3.0973e-03 3.2 0.00e+00 0.0 5.4e+01 4.0e+00 1.7e+01  0  0  1  0  2   0  0  1  0  2     0
BuildTwoSidedF        12 1.0 3.7432e-04 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecMDot               95 1.0 6.2907e-02 1.6 9.45e+07 1.0 0.0e+00 0.0e+00 9.5e+01  0  0  0  0  9   0  0  0  0 11  5977
VecNorm              104 1.0 1.1690e-02 1.3 1.42e+07 1.0 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 12  4843
VecScale             210 1.0 1.1790e-02 1.0 1.89e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  6387
VecCopy               73 1.0 2.9068e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               563 1.0 6.0357e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                9 1.0 6.0582e-04 1.0 1.03e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  6752
VecAYPX              512 1.0 1.6990e-02 1.0 1.77e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  4161
VecAXPBYCZ           256 1.0 1.2354e-02 1.0 3.54e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 11446
VecMAXPY             104 1.0 3.6181e-02 1.0 1.08e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 11844
VecAssemblyBegin      13 1.0 4.4584e-04 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecAssemblyEnd        13 1.0 2.8133e-05 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecPointwiseMult      44 1.0 1.8675e-03 1.0 1.22e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2603
VecScatterBegin      936 1.0 2.9592e-02 1.1 0.00e+00 0.0 7.1e+03 1.2e+04 0.0e+00  0  0 69  5  0   0  0 82  7  0     0
VecScatterEnd        936 1.0 2.7927e-01 4.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSetRandom           4 1.0 4.5090e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize         104 1.0 1.6539e-02 1.2 2.13e+07 1.0 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 12  5135
MatMult              495 1.0 3.3400e+00 1.0 4.53e+09 1.0 5.7e+03 1.4e+04 1.2e+02  4 12 55  4 12   4 12 66  7 14  5284
MatMultAdd           214 1.0 1.9689e-01 1.0 2.56e+08 1.0 8.9e+02 6.8e+03 0.0e+00  0  1  9  0  0   0  1 10  1  0  5084
MatMultTranspose      64 1.0 1.5643e-01 1.3 1.67e+08 1.1 5.9e+02 3.0e+03 0.0e+00  0  0  6  0  0   0  0  7  0  0  4115
MatSolve             122 1.2 3.5537e-01 1.1 2.65e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  2957
MatSOR               428 1.0 2.0880e+00 1.1 2.71e+09 1.1 0.0e+00 0.0e+00 0.0e+00  2  7  0  0  0   3  7  0  0  0  4989
MatLUFactorSym         1 1.0 3.5048e-05 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 1.8154e-02 1.0 4.57e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   995
MatILUFactorSym        1 1.0 1.2837e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             5 1.0 1.3118e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale              14 1.0 5.5241e-02 1.0 4.79e+07 1.0 4.6e+01 1.4e+04 0.0e+00  0  0  0  0  0   0  0  1  0  0  3375
MatResidual           64 1.0 3.8316e-01 1.0 5.79e+08 1.0 7.4e+02 1.4e+04 0.0e+00  0  1  7  1  0   0  1  8  1  0  5882
MatAssemblyBegin      89 1.0 2.4723e+00 7.3 0.00e+00 0.0 2.1e+02 3.2e+06 5.4e+01  2  0  2 36  5   2  0  2 56  6     0
MatAssemblyEnd        89 1.0 1.3361e+00 1.0 0.00e+00 0.0 4.3e+02 1.8e+03 2.0e+02  1  0  4  0 20   2  0  5  0 23     0
MatGetRow         524034 1.0 1.0427e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
MatGetRowIJ            2 2.0 8.1062e-06 4.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        6 1.0 1.0785e-02 1.0 0.00e+00 0.0 2.1e+01 4.8e+01 4.0e+01  0  0  0  0  4   0  0  0  0  5     0
MatGetOrdering         2 2.0 1.2300e-03 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatCoarsen             4 1.0 8.7881e-03 1.1 0.00e+00 0.0 3.2e+02 3.0e+03 1.7e+01  0  0  3  0  2   0  0  4  0  2     0
MatZeroEntries         4 1.0 7.0884e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                5 1.0 1.2048e+00 1.0 0.00e+00 0.0 2.0e+01 2.7e+03 2.0e+01  1  0  0  0  2   2  0  0  0  2     0
MatMatMult             5 1.0 2.5813e+00 1.0 3.86e+07 1.0 3.2e+02 7.3e+03 8.0e+01  3  0  3  0  8   3  0  4  0  9    58
MatMatMultSym          5 1.0 2.4731e+00 1.0 0.00e+00 0.0 2.6e+02 5.9e+03 7.0e+01  3  0  3  0  7   3  0  3  0  8     0
MatMatMultNum          5 1.0 1.0818e-01 1.0 3.86e+07 1.0 5.1e+01 1.5e+04 1.0e+01  0  0  0  0  1   0  0  1  0  1  1394
MatPtAP                4 1.0 6.7203e+01 1.0 3.18e+10 1.1 5.1e+02 2.1e+06 6.8e+01 73 79  5 59  7  85 79  6 90  8  1799
MatPtAPSymbolic        4 1.0 4.1790e+01 1.0 0.00e+00 0.0 2.8e+02 1.5e+06 2.8e+01 46  0  3 22  3  53  0  3 34  3     0
MatPtAPNumeric         4 1.0 2.5414e+01 1.0 3.18e+10 1.1 2.3e+02 2.9e+06 4.0e+01 28 79  2 36  4  32 79  3 56  5  4756
MatTrnMatMult          1 1.0 1.3047e-01 1.0 4.76e+06 1.0 6.0e+01 1.7e+04 1.9e+01  0  0  1  0  2   0  0  1  0  2   146
MatTrnMatMultSym       1 1.0 8.3536e-02 1.0 0.00e+00 0.0 5.0e+01 9.7e+03 1.7e+01  0  0  0  0  2   0  0  1  0  2     0
MatTrnMatMultNum       1 1.0 4.6928e-02 1.0 4.76e+06 1.0 1.0e+01 5.4e+04 2.0e+00  0  0  0  0  0   0  0  0  0  0   405
MatGetLocalMat        16 1.0 5.5166e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol         14 1.0 1.3093e-01 1.2 0.00e+00 0.0 3.4e+02 7.3e+05 0.0e+00  0  0  3 13  0   0  0  4 21  0     0
PCGAMGGraph_AGG        4 1.0 1.1715e+00 1.0 3.62e+07 1.0 1.3e+02 5.5e+03 4.8e+01  1  0  1  0  5   1  0  2  0  5   120
PCGAMGCoarse_AGG       4 1.0 1.6066e-01 1.0 4.76e+06 1.0 4.3e+02 6.9e+03 4.4e+01  0  0  4  0  4   0  0  5  0  5   118
PCGAMGProl_AGG         4 1.0 2.7573e-02 1.0 0.00e+00 0.0 2.1e+02 3.5e+03 8.0e+01  0  0  2  0  8   0  0  2  0  9     0
PCGAMGPOpt_AGG         4 1.0 3.2287e+00 1.0 4.39e+08 1.0 7.4e+02 1.1e+04 1.9e+02  4  1  7  0 19   4  1  8  1 21   530
GAMG: createProl       4 1.0 4.5962e+00 1.0 4.80e+08 1.0 1.5e+03 8.4e+03 3.6e+02  5  1 15  1 36   6  1 17  1 41   407
  Graph                8 1.0 1.1613e+00 1.0 3.62e+07 1.0 1.3e+02 5.5e+03 4.8e+01  1  0  1  0  5   1  0  2  0  5   121
  MIS/Agg              4 1.0 8.8501e-03 1.1 0.00e+00 0.0 3.2e+02 3.0e+03 1.7e+01  0  0  3  0  2   0  0  4  0  2     0
  SA: col data         4 1.0 5.9159e-03 1.0 0.00e+00 0.0 8.8e+01 7.3e+03 2.4e+01  0  0  1  0  2   0  0  1  0  3     0
  SA: frmProl0         4 1.0 2.0285e-02 1.0 0.00e+00 0.0 1.2e+02 8.0e+02 4.0e+01  0  0  1  0  4   0  0  1  0  5     0
  SA: smooth           4 1.0 3.2287e+00 1.0 4.39e+08 1.0 7.4e+02 1.1e+04 1.9e+02  4  1  7  0 19   4  1  8  1 21   530
GAMG: partLevel        4 1.0 6.7203e+01 1.0 3.18e+10 1.1 5.4e+02 2.0e+06 1.2e+02 73 79  5 59 12  85 79  6 90 14  1799
  repartition          1 1.0 5.2929e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  1     0
  Invert-Sort          1 1.0 4.6968e-05 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  0   0  0  0  0  0     0
  Move A               1 1.0 2.0814e-04 1.0 0.00e+00 0.0 1.5e+01 6.0e+01 1.8e+01  0  0  0  0  2   0  0  0  0  2     0
  Move P               1 1.0 1.8096e-04 1.0 0.00e+00 0.0 6.0e+00 2.0e+01 1.8e+01  0  0  0  0  2   0  0  0  0  2     0
PCSetUp                5 1.0 7.2710e+01 1.0 3.22e+10 1.1 2.1e+03 5.2e+05 5.7e+02 79 80 21 60 56  92 80 25 92 64  1689
PCSetUpOnBlocks      122 1.0 3.2053e-02 1.1 4.57e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   563
PCApply               16 1.0 7.7192e+01 1.0 3.92e+10 1.1 8.2e+03 1.4e+05 5.9e+02 84 99 79 63 59  98 99 94 97 67  1955
KSPGMRESOrthog        95 1.0 9.5074e-02 1.3 1.89e+08 1.0 0.0e+00 0.0e+00 9.5e+01  0  0  0  0  9   0  0  0  0 11  7910
KSPSetUp              18 1.0 6.6414e-03 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
KSPSolve               1 1.0 7.8229e+01 1.0 3.94e+10 1.1 8.6e+03 1.4e+05 8.0e+02 85 99 83 64 80  99 99 99 98 91  1941
SFSetGraph             4 1.0 2.3985e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          22 1.0 3.1741e-03 2.8 0.00e+00 0.0 3.5e+02 4.2e+03 5.0e+00  0  0  3  0  0   0  0  4  0  1     0
SFBcastEnd            22 1.0 3.9601e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------
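
A quick sanity check on how the Total Mflop/s column in the table above is computed (the total flop rate over all processes, reported in Mflop/s), using the KSPSolve row and the FEM stage totals; this is only a rough recomputation, since the per-event flop sum over all ranks is not printed:

# Values read off the 4-process log above (KSPSolve is ~100% of the FEM stage flops)
total_flops = 1.5278e11   # FEM stage flops summed over all 4 processes
max_time = 7.8229e01      # KSPSolve max time over processes, in seconds
print(1e-6 * total_flops / max_time)   # ~1953 Mflop/s, close to the 1941 reported
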

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
         PetscRandom     0              1          646     0.
           Index Set    87             92     35927140     0.
   IS L to G Mapping     3              3     18881016     0.
             Section    70             53        35616     0.
              Vector    15            141     92771016     0.
      Vector Scatter     2             15      6936792     0.
              Matrix     0             59    361266968     0.
      Preconditioner     0             11        11020     0.
       Krylov Solver     0             15       151752     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

         PetscRandom     1              0            0     0.
           Index Set    88             76       260344     0.
   IS L to G Mapping     4              0            0     0.
              Vector   346            208     27367352     0.
      Vector Scatter    37             18        19744     0.
              Matrix   137             65    608134404     0.
      Matrix Coarsen     4              4         2544     0.
      Preconditioner    21             10         8944     0.
       Krylov Solver    21              6       123480     0.
Star Forest Bipartite Graph     4              4         3456     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 1.57356e-06
Average time for zero size MPI_Send(): 1.96695e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type gamg
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 gamg 40 1
=================
Discretization: RT
MPI processes 8: solving... 
((143102, 1161600), (143102, 1161600))
	Solver time: 4.012547e+01
	Solver iterations: 15
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 8 processors, by jychang48 Wed Mar  2 18:00:17 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           5.307e+01      1.00010   5.307e+01
Objects:              1.002e+03      1.03512   9.730e+02
Flops:                1.814e+10      1.32210   1.527e+10  1.221e+11
Flops/sec:            3.418e+08      1.32197   2.877e+08  2.301e+09
MPI Messages:         5.818e+03      1.51879   4.441e+03  3.553e+04
MPI Message Lengths:  5.914e+08      2.04577   8.271e+04  2.938e+09
MPI Reductions:       1.063e+03      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.2942e+01  24.4%  0.0000e+00   0.0%  5.296e+03  14.9%  1.912e+04       23.1%  1.250e+02  11.8% 
 1:             FEM: 4.0126e+01  75.6%  1.2213e+11 100.0%  3.023e+04  85.1%  6.359e+04       76.9%  9.370e+02  88.1% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 9.8084e-01 33.7 0.00e+00 0.0 1.2e+03 4.0e+00 4.4e+01  2  0  3  0  4   6  0 23  0 35     0
VecScatterBegin        2 1.0 3.1185e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 3.8147e-06 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 1.8528e+00 1.1 0.00e+00 0.0 1.4e+03 4.1e+04 2.1e+01  3  0  4  2  2  14  0 26  8 17     0
Mesh Migration         2 1.0 6.9385e-01 1.0 0.00e+00 0.0 3.4e+03 1.6e+05 5.4e+01  1  0 10 19  5   5  0 65 82 43     0
DMPlexInterp           1 1.0 2.1209e+00 58911.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   2  0  0  0  0     0
DMPlexDistribute       1 1.0 2.0432e+00 1.1 0.00e+00 0.0 1.0e+03 3.2e+05 2.5e+01  4  0  3 11  2  16  0 19 47 20     0
DMPlexDistCones        2 1.0 1.6185e-01 1.0 0.00e+00 0.0 4.9e+02 3.8e+05 4.0e+00  0  0  1  6  0   1  0  9 27  3     0
DMPlexDistLabels       2 1.0 3.9578e-01 1.0 0.00e+00 0.0 2.1e+03 1.5e+05 2.2e+01  1  0  6 11  2   3  0 40 47 18     0
DMPlexDistribOL        1 1.0 5.2214e-01 1.0 0.00e+00 0.0 3.9e+03 8.8e+04 5.0e+01  1  0 11 12  5   4  0 73 50 40     0
DMPlexDistField        3 1.0 2.2155e-02 1.2 0.00e+00 0.0 6.4e+02 3.8e+04 1.2e+01  0  0  2  1  1   0  0 12  4 10     0
DMPlexDistData         2 1.0 9.7258e-01 52.8 0.00e+00 0.0 8.5e+02 3.0e+04 6.0e+00  2  0  2  1  1   6  0 16  4  5     0
DMPlexStratify         6 1.5 6.0035e-01 8.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
SFSetGraph            51 1.0 1.4183e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
SFBcastBegin          95 1.0 1.0219e+00 6.0 0.00e+00 0.0 5.0e+03 1.3e+05 4.1e+01  2  0 14 22  4   7  0 95 96 33     0
SFBcastEnd            95 1.0 3.0369e-01 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 3.5670e-03 3.6 0.00e+00 0.0 1.8e+02 8.2e+04 3.0e+00  0  0  1  0  0   0  0  3  2  2     0
SFReduceEnd            4 1.0 6.7761e-03 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 6.9857e-05 11.3 0.00e+00 0.0 1.9e+01 7.0e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 2.3389e-04 2.6 0.00e+00 0.0 1.9e+01 7.0e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided         17 1.0 2.7728e-03 6.0 0.00e+00 0.0 1.7e+02 4.0e+00 1.7e+01  0  0  0  0  2   0  0  1  0  2     0
BuildTwoSidedF        12 1.0 4.5395e-04 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecMDot               95 1.0 4.5413e-02 1.8 4.73e+07 1.0 0.0e+00 0.0e+00 9.5e+01  0  0  0  0  9   0  0  0  0 10  8279
VecNorm              104 1.0 7.2548e-03 1.2 7.12e+06 1.0 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 11  7803
VecScale             210 1.0 6.3441e-03 1.0 9.46e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 11871
VecCopy               73 1.0 1.6530e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               566 1.0 1.8410e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                9 1.0 3.9196e-04 1.2 5.14e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 10436
VecAYPX              512 1.0 1.1320e-02 1.2 8.85e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  6245
VecAXPBYCZ           256 1.0 7.8399e-03 1.1 1.77e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 18034
VecMAXPY             104 1.0 2.3948e-02 1.1 5.39e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 17894
VecAssemblyBegin      14 1.0 5.6481e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecAssemblyEnd        14 1.0 5.9128e-05 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecPointwiseMult      44 1.0 1.1752e-03 1.1 6.09e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  4136
VecScatterBegin      937 1.0 2.5381e-02 1.1 0.00e+00 0.0 2.5e+04 6.9e+03 0.0e+00  0  0 70  6  0   0  0 82  8  0     0
VecScatterEnd        937 1.0 1.5833e-01 6.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSetRandom           4 1.0 2.2795e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize         104 1.0 1.0659e-02 1.1 1.07e+07 1.0 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 11  7967
MatMult              495 1.0 1.7763e+00 1.0 2.19e+09 1.1 2.0e+04 7.9e+03 1.2e+02  3 14 57  5 12   4 14 67  7 13  9571
MatMultAdd           214 1.0 1.1394e-01 1.1 1.18e+08 1.1 3.1e+03 3.3e+03 0.0e+00  0  1  9  0  0   0  1 10  0  0  8065
MatMultTranspose      64 1.0 7.4735e-02 1.3 7.34e+07 1.1 2.0e+03 1.8e+03 0.0e+00  0  0  6  0  0   0  0  7  0  0  7514
MatSolve             122 1.2 1.9528e-01 1.1 1.33e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  5355
MatSOR               428 1.0 1.1337e+00 1.0 1.18e+09 1.1 0.0e+00 0.0e+00 0.0e+00  2  8  0  0  0   3  8  0  0  0  8134
MatLUFactorSym         1 1.0 2.3842e-05 2.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 9.0170e-03 1.0 2.31e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2003
MatILUFactorSym        1 1.0 3.6991e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             5 1.0 7.0534e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale              14 1.0 3.9479e-02 1.0 2.27e+07 1.1 1.6e+02 8.1e+03 0.0e+00  0  0  0  0  0   0  0  1  0  0  4452
MatResidual           64 1.0 2.0590e-01 1.0 2.80e+08 1.1 2.6e+03 8.1e+03 0.0e+00  0  2  7  1  0   1  2  9  1  0 10516
MatAssemblyBegin      93 1.0 3.5671e+00 11.2 0.00e+00 0.0 7.3e+02 1.7e+06 5.8e+01  4  0  2 43  5   6  0  2 56  6     0
MatAssemblyEnd        93 1.0 1.8181e+00 1.0 0.00e+00 0.0 1.6e+03 9.3e+02 2.2e+02  3  0  4  0 20   4  0  5  0 23     0
MatGetRow         262026 1.0 1.0316e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   3  0  0  0  0     0
MatGetRowIJ            2 2.0 1.0014e-05 5.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        8 1.0 1.3851e-02 1.0 0.00e+00 0.0 1.4e+02 3.6e+03 7.4e+01  0  0  0  0  7   0  0  0  0  8     0
MatGetOrdering         2 2.0 3.9101e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatCoarsen             4 1.0 6.0310e-03 1.1 0.00e+00 0.0 9.5e+02 1.8e+03 1.7e+01  0  0  3  0  2   0  0  3  0  2     0
MatZeroEntries         4 1.0 3.4907e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                5 1.0 1.2078e+00 1.0 0.00e+00 0.0 7.6e+01 1.1e+03 2.0e+01  2  0  0  0  2   3  0  0  0  2     0
MatMatMult             5 1.0 1.0170e+00 1.0 1.87e+07 1.1 1.1e+03 4.1e+03 8.0e+01  2  0  3  0  8   3  0  4  0  9   143
MatMatMultSym          5 1.0 9.6113e-01 1.0 0.00e+00 0.0 9.4e+02 3.3e+03 7.0e+01  2  0  3  0  7   2  0  3  0  7     0
MatMatMultNum          5 1.0 5.5820e-02 1.0 1.87e+07 1.1 1.8e+02 8.2e+03 1.0e+01  0  0  1  0  1   0  0  1  0  1  2602
MatPtAP                4 1.0 3.3189e+01 1.0 1.43e+10 1.4 1.8e+03 1.1e+06 6.8e+01 63 75  5 70  6  83 75  6 91  7  2778
MatPtAPSymbolic        4 1.0 1.9334e+01 1.0 0.00e+00 0.0 9.7e+02 8.1e+05 2.8e+01 36  0  3 27  3  48  0  3 35  3     0
MatPtAPNumeric         4 1.0 1.3856e+01 1.0 1.43e+10 1.4 8.5e+02 1.5e+06 4.0e+01 26 75  2 43  4  35 75  3 56  4  6653
MatTrnMatMult          1 1.0 7.2624e-02 1.0 2.41e+06 1.0 2.3e+02 7.1e+03 1.9e+01  0  0  1  0  2   0  0  1  0  2   263
MatTrnMatMultSym       1 1.0 4.6745e-02 1.0 0.00e+00 0.0 1.9e+02 4.1e+03 1.7e+01  0  0  1  0  2   0  0  1  0  2     0
MatTrnMatMultNum       1 1.0 2.5871e-02 1.0 2.41e+06 1.0 3.8e+01 2.3e+04 2.0e+00  0  0  0  0  0   0  0  0  0  0   737
MatGetLocalMat        16 1.0 2.7739e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol         14 1.0 1.5730e-01 1.1 0.00e+00 0.0 1.2e+03 4.0e+05 0.0e+00  0  0  3 16  0   0  0  4 21  0     0
PCGAMGGraph_AGG        4 1.0 1.0277e+00 1.0 1.75e+07 1.1 4.2e+02 3.5e+03 4.8e+01  2  0  1  0  5   3  0  1  0  5   132
PCGAMGCoarse_AGG       4 1.0 9.0071e-02 1.0 2.41e+06 1.0 1.4e+03 3.6e+03 4.4e+01  0  0  4  0  4   0  0  5  0  5   212
PCGAMGProl_AGG         4 1.0 1.5908e-02 1.0 0.00e+00 0.0 6.6e+02 1.8e+03 8.0e+01  0  0  2  0  8   0  0  2  0  9     0
PCGAMGPOpt_AGG         4 1.0 1.5742e+00 1.0 2.12e+08 1.1 2.6e+03 6.6e+03 1.9e+02  3  1  7  1 18   4  1  9  1 20  1045
GAMG: createProl       4 1.0 2.7134e+00 1.0 2.32e+08 1.1 5.0e+03 4.9e+03 3.6e+02  5  1 14  1 34   7  1 17  1 38   663
  Graph                8 1.0 1.0201e+00 1.0 1.75e+07 1.1 4.2e+02 3.5e+03 4.8e+01  2  0  1  0  5   3  0  1  0  5   133
  MIS/Agg              4 1.0 6.1002e-03 1.1 0.00e+00 0.0 9.5e+02 1.8e+03 1.7e+01  0  0  3  0  2   0  0  3  0  2     0
  SA: col data         4 1.0 3.9160e-03 1.0 0.00e+00 0.0 2.6e+02 3.9e+03 2.4e+01  0  0  1  0  2   0  0  1  0  3     0
  SA: frmProl0         4 1.0 1.1169e-02 1.0 0.00e+00 0.0 4.0e+02 3.9e+02 4.0e+01  0  0  1  0  4   0  0  1  0  4     0
  SA: smooth           4 1.0 1.5742e+00 1.0 2.12e+08 1.1 2.6e+03 6.6e+03 1.9e+02  3  1  7  1 18   4  1  9  1 20  1045
GAMG: partLevel        4 1.0 3.3198e+01 1.0 1.43e+10 1.4 2.0e+03 1.0e+06 1.7e+02 63 75  6 70 16  83 75  7 91 19  2777
  repartition          2 1.0 1.0920e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
  Invert-Sort          2 1.0 1.3280e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
  Move A               2 1.0 1.1239e-03 1.0 0.00e+00 0.0 7.4e+01 6.6e+03 3.6e+01  0  0  0  0  3   0  0  0  0  4     0
  Move P               2 1.0 7.6461e-03 1.0 0.00e+00 0.0 6.2e+01 9.9e+01 3.6e+01  0  0  0  0  3   0  0  0  0  4     0
PCSetUp                5 1.0 3.6754e+01 1.0 1.45e+10 1.4 7.3e+03 2.9e+05 6.2e+02 69 77 21 71 58  92 77 24 92 66  2558
PCSetUpOnBlocks      122 1.0 1.3308e-02 1.0 2.31e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1357
PCApply               16 1.0 3.8816e+01 1.0 1.79e+10 1.3 2.8e+04 7.8e+04 6.4e+02 73 98 80 76 61  97 98 94 98 69  3097
KSPGMRESOrthog        95 1.0 6.7224e-02 1.5 9.47e+07 1.0 0.0e+00 0.0e+00 9.5e+01  0  1  0  0  9   0  1  0  0 10 11186
KSPSetUp              18 1.0 3.2098e-03 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
KSPSolve               1 1.0 3.9741e+01 1.0 1.80e+10 1.3 3.0e+04 7.5e+04 8.6e+02 75 99 84 76 81  99 99 99 99 92  3049
SFSetGraph             4 1.0 2.6512e-04 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          22 1.0 2.9793e-03 3.8 0.00e+00 0.0 1.1e+03 2.4e+03 5.0e+00  0  0  3  0  0   0  0  4  0  1     0
SFBcastEnd            22 1.0 7.0238e-04 2.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------
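
The "Main Stage"/"FEM" split in these tables comes from user-defined logging stages (PetscLogStagePush/PetscLogStagePop, as noted in the phase summary header). The driver script or Firedrake sets this up internally; a minimal petsc4py sketch of the same mechanism, just to show where the stage names come from:

from petsc4py import PETSc

stage = PETSc.Log.Stage("FEM")
stage.push()
# ... assembly and solve calls made here are logged under the "FEM" stage ...
stage.pop()
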

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
         PetscRandom     0              1          646     0.
           Index Set   103            110     29169724     0.
   IS L to G Mapping     3              3     16320748     0.
             Section    70             53        35616     0.
              Vector    15            141     48823152     0.
      Vector Scatter     2             15      3450888     0.
              Matrix     0             52    170292328     0.
      Preconditioner     0             11        11020     0.
       Krylov Solver     0             15       151752     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

         PetscRandom     1              0            0     0.
           Index Set   102             88       277596     0.
   IS L to G Mapping     4              0            0     0.
              Vector   352            214     14068176     0.
      Vector Scatter    40             21        23168     0.
              Matrix   145             80    299175352     0.
      Matrix Coarsen     4              4         2544     0.
      Preconditioner    21             10         8944     0.
       Krylov Solver    21              6       123480     0.
Star Forest Bipartite Graph     4              4         3456     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 3.24249e-06
Average time for zero size MPI_Send(): 1.49012e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type gamg
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 gamg 40 1
=================
Discretization: RT
MPI processes 16: solving... 
((70996, 1161600), (70996, 1161600))
	Solver time: 1.734145e+01
	Solver iterations: 16
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 16 processors, by jychang48 Wed Mar  2 18:00:52 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           3.188e+01      1.00047   3.187e+01
Objects:              1.028e+03      1.05761   9.805e+02
Flops:                6.348e+09      1.50259   5.466e+09  8.746e+10
Flops/sec:            1.991e+08      1.50208   1.715e+08  2.744e+09
MPI Messages:         9.623e+03      1.72687   7.338e+03  1.174e+05
MPI Message Lengths:  3.833e+08      2.99434   2.872e+04  3.371e+09
MPI Reductions:       1.076e+03      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4530e+01  45.6%  0.0000e+00   0.0%  1.470e+04  12.5%  6.199e+03       21.6%  1.250e+02  11.6% 
 1:             FEM: 1.7342e+01  54.4%  8.7456e+10 100.0%  1.027e+05  87.5%  2.252e+04       78.4%  9.500e+02  88.3% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 1e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.1578e+00 8.9 0.00e+00 0.0 3.5e+03 4.0e+00 4.4e+01  3  0  3  0  4   7  0 24  0 35     0
VecScatterBegin        2 1.0 8.3208e-05 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 5.0068e-06 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 1.9160e+00 1.1 0.00e+00 0.0 4.4e+03 1.4e+04 2.1e+01  6  0  4  2  2  13  0 30  9 17     0
Mesh Migration         2 1.0 5.1724e-01 1.0 0.00e+00 0.0 8.9e+03 6.6e+04 5.4e+01  2  0  8 18  5   4  0 61 81 43     0
DMPlexInterp           1 1.0 2.1073e+00 58535.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
DMPlexDistribute       1 1.0 2.1091e+00 1.1 0.00e+00 0.0 2.9e+03 1.1e+05 2.5e+01  7  0  2 10  2  14  0 20 44 20     0
DMPlexDistCones        2 1.0 1.2136e-01 1.1 0.00e+00 0.0 1.3e+03 1.5e+05 4.0e+00  0  0  1  6  0   1  0  9 27  3     0
DMPlexDistLabels       2 1.0 3.1182e-01 1.0 0.00e+00 0.0 5.5e+03 6.1e+04 2.2e+01  1  0  5 10  2   2  0 38 46 18     0
DMPlexDistribOL        1 1.0 3.4499e-01 1.0 0.00e+00 0.0 1.1e+04 3.5e+04 5.0e+01  1  0  9 11  5   2  0 72 52 40     0
DMPlexDistField        3 1.0 3.0422e-02 1.9 0.00e+00 0.0 1.7e+03 1.5e+04 1.2e+01  0  0  1  1  1   0  0 12  4 10     0
DMPlexDistData         2 1.0 9.8079e-01 61.7 0.00e+00 0.0 2.9e+03 9.6e+03 6.0e+00  3  0  3  1  1   6  0 20  4  5     0
DMPlexStratify         6 1.5 5.6268e-01 15.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 8.5973e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
SFBcastBegin          95 1.0 1.1731e+00 3.9 0.00e+00 0.0 1.4e+04 5.0e+04 4.1e+01  3  0 12 21  4   7  0 95 97 33     0
SFBcastEnd            95 1.0 2.9754e-01 5.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 7.7155e-03 12.1 0.00e+00 0.0 5.0e+02 3.0e+04 3.0e+00  0  0  0  0  0   0  0  3  2  2     0
SFReduceEnd            4 1.0 8.9478e-03 4.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 4.9114e-05 17.2 0.00e+00 0.0 5.4e+01 3.9e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 1.9598e-04 3.1 0.00e+00 0.0 5.4e+01 3.9e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided         17 1.0 3.8493e-03 8.3 0.00e+00 0.0 4.7e+02 4.0e+00 1.7e+01  0  0  0  0  2   0  0  0  0  2     0
BuildTwoSidedF        12 1.0 3.8695e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecMDot               96 1.0 4.4748e-02 3.3 2.61e+07 1.0 0.0e+00 0.0e+00 9.6e+01  0  0  0  0  9   0  0  0  0 10  9234
VecNorm              105 1.0 6.0410e-03 1.4 3.73e+06 1.0 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 11  9757
VecScale             217 1.0 3.4852e-03 1.1 4.99e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 22716
VecCopy               77 1.0 9.4819e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               596 1.0 9.9552e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                9 1.0 1.9503e-04 1.1 2.58e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 20979
VecAYPX              544 1.0 5.8689e-03 1.2 4.72e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 12804
VecAXPBYCZ           272 1.0 3.8891e-03 1.2 9.44e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 38645
VecMAXPY             105 1.0 1.0443e-02 1.1 2.96e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 44820
VecAssemblyBegin      14 1.0 4.9472e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecAssemblyEnd        14 1.0 6.8665e-05 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecPointwiseMult      44 1.0 5.2500e-04 1.2 3.05e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  9262
VecScatterBegin      988 1.0 2.9767e-02 1.3 0.00e+00 0.0 8.6e+04 3.5e+03 0.0e+00  0  0 73  9  0   0  0 84 11  0     0
VecScatterEnd        988 1.0 2.4255e-01 8.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
VecSetRandom           4 1.0 1.1868e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize         105 1.0 8.3659e-03 1.3 5.59e+06 1.0 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 11 10569
MatMult              521 1.0 9.2801e-01 1.1 1.15e+09 1.3 7.1e+04 3.9e+03 1.3e+02  3 19 61  8 12   5 19 70 11 14 17559
MatMultAdd           227 1.0 8.0478e-02 1.5 5.58e+07 1.1 9.7e+03 1.8e+03 0.0e+00  0  1  8  1  0   0  1  9  1  0 10703
MatMultTranspose      68 1.0 4.1057e-02 1.5 3.17e+07 1.2 6.3e+03 9.3e+02 0.0e+00  0  1  5  0  0   0  1  6  0  0 11713
MatSolve             129 1.2 1.0344e-01 1.1 7.01e+07 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   1  1  0  0  0 10602
MatSOR               452 1.0 5.2970e-01 1.1 5.58e+08 1.1 0.0e+00 0.0e+00 0.0e+00  2 10  0  0  0   3 10  0  0  0 15905
MatLUFactorSym         1 1.0 3.1948e-05 3.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 4.5488e-03 1.1 1.17e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  3964
MatILUFactorSym        1 1.0 1.7991e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             5 1.0 3.1865e-02 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale              14 1.0 1.8671e-02 1.2 1.09e+07 1.3 5.5e+02 4.0e+03 0.0e+00  0  0  0  0  0   0  0  1  0  0  8353
MatResidual           68 1.0 1.1264e-01 1.1 1.48e+08 1.3 9.4e+03 4.0e+03 0.0e+00  0  2  8  1  0   1  2  9  1  0 18463
MatAssemblyBegin      93 1.0 1.7310e+00 6.5 0.00e+00 0.0 2.3e+03 5.8e+05 5.8e+01  3  0  2 40  5   5  0  2 51  6     0
MatAssemblyEnd        93 1.0 1.6349e+00 1.3 0.00e+00 0.0 4.9e+03 4.9e+02 2.2e+02  5  0  4  0 20   9  0  5  0 23     0
MatGetRow         131316 1.0 5.1258e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   3  0  0  0  0     0
MatGetRowIJ            2 2.0 9.0599e-06 4.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        8 1.0 8.2927e-03 1.0 0.00e+00 0.0 2.4e+02 2.6e+03 7.4e+01  0  0  0  0  7   0  0  0  0  8     0
MatGetOrdering         2 2.0 2.1505e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatCoarsen             4 1.0 5.2981e-03 1.6 0.00e+00 0.0 2.9e+03 9.7e+02 2.0e+01  0  0  2  0  2   0  0  3  0  2     0
MatZeroEntries         4 1.0 1.5755e-02 3.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                5 1.0 6.0479e-01 1.0 0.00e+00 0.0 2.2e+02 6.2e+02 2.0e+01  2  0  0  0  2   3  0  0  0  2     0
MatMatMult             5 1.0 3.3290e-01 1.0 9.34e+06 1.3 3.7e+03 2.0e+03 8.0e+01  1  0  3  0  7   2  0  4  0  8   397
MatMatMultSym          5 1.0 3.0637e-01 1.0 0.00e+00 0.0 3.1e+03 1.6e+03 7.0e+01  1  0  3  0  7   2  0  3  0  7     0
MatMatMultNum          5 1.0 2.6513e-02 1.0 9.34e+06 1.3 6.0e+02 4.1e+03 1.0e+01  0  0  1  0  1   0  0  1  0  1  4987
MatPtAP                4 1.0 1.3960e+01 1.0 4.49e+09 1.7 6.4e+03 3.6e+05 6.8e+01 44 67  5 68  6  80 67  6 87  7  4228
MatPtAPSymbolic        4 1.0 8.5832e+00 1.0 0.00e+00 0.0 3.3e+03 2.9e+05 2.8e+01 27  0  3 28  3  49  0  3 36  3     0
MatPtAPNumeric         4 1.0 5.3769e+00 1.0 4.49e+09 1.7 3.0e+03 4.4e+05 4.0e+01 17 67  3 40  4  31 67  3 51  4 10978
MatTrnMatMult          1 1.0 3.8761e-02 1.0 1.22e+06 1.0 6.5e+02 4.0e+03 1.9e+01  0  0  1  0  2   0  0  1  0  2   495
MatTrnMatMultSym       1 1.0 2.4785e-02 1.0 0.00e+00 0.0 5.4e+02 2.3e+03 1.7e+01  0  0  0  0  2   0  0  1  0  2     0
MatTrnMatMultNum       1 1.0 1.3965e-02 1.0 1.22e+06 1.0 1.1e+02 1.3e+04 2.0e+00  0  0  0  0  0   0  0  0  0  0  1374
MatGetLocalMat        16 1.0 1.1930e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol         14 1.0 9.5529e-02 1.3 0.00e+00 0.0 4.1e+03 1.5e+05 0.0e+00  0  0  3 19  0   1  0  4 24  0     0
PCGAMGGraph_AGG        4 1.0 5.0889e-01 1.0 8.72e+06 1.3 1.3e+03 1.9e+03 4.8e+01  2  0  1  0  4   3  0  1  0  5   240
PCGAMGCoarse_AGG       4 1.0 5.0001e-02 1.0 1.22e+06 1.0 4.1e+03 1.9e+03 4.7e+01  0  0  3  0  4   0  0  4  0  5   384
PCGAMGProl_AGG         4 1.0 1.1742e-02 1.0 0.00e+00 0.0 1.8e+03 1.0e+03 8.0e+01  0  0  2  0  7   0  0  2  0  8     0
PCGAMGPOpt_AGG         4 1.0 6.1055e-01 1.0 1.05e+08 1.3 8.8e+03 3.2e+03 1.9e+02  2  2  8  1 17   4  2  9  1 20  2450
GAMG: createProl       4 1.0 1.1821e+00 1.0 1.15e+08 1.3 1.6e+04 2.5e+03 3.6e+02  4  2 14  1 34   7  2 16  2 38  1385
  Graph                8 1.0 5.0667e-01 1.0 8.72e+06 1.3 1.3e+03 1.9e+03 4.8e+01  2  0  1  0  4   3  0  1  0  5   241
  MIS/Agg              4 1.0 5.3558e-03 1.6 0.00e+00 0.0 2.9e+03 9.7e+02 2.0e+01  0  0  2  0  2   0  0  3  0  2     0
  SA: col data         4 1.0 2.2843e-03 1.0 0.00e+00 0.0 7.2e+02 2.3e+03 2.4e+01  0  0  1  0  2   0  0  1  0  3     0
  SA: frmProl0         4 1.0 8.7664e-03 1.0 0.00e+00 0.0 1.1e+03 2.3e+02 4.0e+01  0  0  1  0  4   0  0  1  0  4     0
  SA: smooth           4 1.0 6.1054e-01 1.0 1.05e+08 1.3 8.8e+03 3.2e+03 1.9e+02  2  2  8  1 17   4  2  9  1 20  2450
GAMG: partLevel        4 1.0 1.3967e+01 1.0 4.49e+09 1.7 6.6e+03 3.5e+05 1.7e+02 44 67  6 68 16  81 67  6 87 18  4226
  repartition          2 1.0 2.9707e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
  Invert-Sort          2 1.0 1.8883e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
  Move A               2 1.0 1.3578e-03 1.0 0.00e+00 0.0 1.1e+02 5.3e+03 3.6e+01  0  0  0  0  3   0  0  0  0  4     0
  Move P               2 1.0 4.5681e-03 1.0 0.00e+00 0.0 1.3e+02 1.1e+02 3.6e+01  0  0  0  0  3   0  0  0  0  4     0
PCSetUp                5 1.0 1.5573e+01 1.0 4.60e+09 1.7 2.3e+04 1.0e+05 6.2e+02 49 69 20 70 58  90 69 23 89 65  3898
PCSetUpOnBlocks      129 1.0 6.7956e-03 1.1 1.17e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2654
PCApply               17 1.0 1.6612e+01 1.0 6.21e+09 1.5 9.8e+04 2.7e+04 6.5e+02 52 98 83 77 60  96 98 95 98 68  5138
KSPGMRESOrthog        96 1.0 5.4426e-02 2.4 5.23e+07 1.0 0.0e+00 0.0e+00 9.6e+01  0  1  0  0  9   0  1  0  0 10 15184
KSPSetUp              18 1.0 1.6465e-03 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
KSPSolve               1 1.0 1.7078e+01 1.0 6.28e+09 1.5 1.0e+05 2.6e+04 8.7e+02 54 99 87 78 81  98 99 99 99 92  5059
SFSetGraph             4 1.0 2.7418e-04 2.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          25 1.0 4.1673e-03 4.9 0.00e+00 0.0 3.2e+03 1.3e+03 5.0e+00  0  0  3  0  0   0  0  3  0  1     0
SFBcastEnd            25 1.0 6.3801e-04 2.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------
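
As a quick check on the "Total Mflop/s" column in the table above: per the legend, it is 1e-6 times the flop count summed over all ranks, divided by the maximum time over all ranks. A minimal Python sketch of that convention, with numbers read off the KSPSolve row and the global flop total above (the small mismatch with the printed 5059 comes from the rounded %F column):

def mflops(total_flops, max_time_s):
    # Aggregate rate in Mflop/s: 1e-6 * (sum of flops over all ranks) / (max time over all ranks)
    return 1e-6 * total_flops / max_time_s

kspsolve_flops = 0.99 * 8.746e10   # ~99% of the global flop total, per the %F column
kspsolve_time  = 1.7078e+01        # max KSPSolve time over all ranks, in seconds
print(mflops(kspsolve_flops, kspsolve_time))   # ~5070 Mflop/s, vs. 5059 reported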

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
         PetscRandom     0              1          646     0.
           Index Set   125            132     25606724     0.
   IS L to G Mapping     3              3     15014432     0.
             Section    70             53        35616     0.
              Vector    15            141     26937184     0.
      Vector Scatter     2             15      1720344     0.
              Matrix     0             52     71903496     0.
      Preconditioner     0             11        11020     0.
       Krylov Solver     0             15       151752     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

         PetscRandom     1              0            0     0.
           Index Set   102             88       239952     0.
   IS L to G Mapping     4              0            0     0.
              Vector   356            218      7355832     0.
      Vector Scatter    40             21        23096     0.
              Matrix   145             80    132079708     0.
      Matrix Coarsen     4              4         2544     0.
      Preconditioner    21             10         8944     0.
       Krylov Solver    21              6       123480     0.
Star Forest Bipartite Graph     4              4         3456     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 5.38826e-06
Average time for zero size MPI_Send(): 1.68383e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type gamg
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
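
The "-solver_" prefix on every entry above reflects the options prefix attached to the outer KSP. A minimal petsc4py sketch of supplying the same option set by hand (an illustration only, assuming petsc4py is available; it is not extracted from these runs):

from petsc4py import PETSc

# Push the logged options into the options database under the "solver_" prefix
opts = PETSc.Options()
opts["solver_ksp_type"] = "gmres"
opts["solver_ksp_rtol"] = 1e-7
opts["solver_pc_type"] = "fieldsplit"
opts["solver_pc_fieldsplit_type"] = "schur"
opts["solver_pc_fieldsplit_schur_fact_type"] = "upper"
opts["solver_pc_fieldsplit_schur_precondition"] = "selfp"
opts["solver_fieldsplit_0_ksp_type"] = "preonly"
opts["solver_fieldsplit_0_pc_type"] = "bjacobi"
opts["solver_fieldsplit_1_ksp_type"] = "preonly"
opts["solver_fieldsplit_1_pc_type"] = "gamg"

# A KSP carrying the matching prefix picks all of these up from the database
ksp = PETSc.KSP().create()
ksp.setOptionsPrefix("solver_")
ksp.setFromOptions()
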
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Include paths, linkers, and libraries: identical to the listing above (omitted here).
-----------------------------------------

=================
 gamg 40 1
=================
Discretization: RT
MPI processes 24: solving... 
((47407, 1161600), (47407, 1161600))
	Solver time: 1.005248e+01
	Solver iterations: 16
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 24 processors, by jychang48 Wed Mar  2 18:01:20 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           2.430e+01      1.00050   2.430e+01
Objects:              1.042e+03      1.07202   9.828e+02
Flops:                3.581e+09      1.56180   2.922e+09  7.012e+10
Flops/sec:            1.474e+08      1.56148   1.202e+08  2.886e+09
MPI Messages:         1.260e+04      1.77245   9.542e+03  2.290e+05
MPI Message Lengths:  3.024e+08      3.27966   1.567e+04  3.589e+09
MPI Reductions:       1.076e+03      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4244e+01  58.6%  0.0000e+00   0.0%  2.618e+04  11.4%  3.318e+03       21.2%  1.250e+02  11.6% 
 1:             FEM: 1.0052e+01  41.4%  7.0118e+10 100.0%  2.028e+05  88.6%  1.235e+04       78.8%  9.500e+02  88.3% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 1e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.1165e+00 9.7 0.00e+00 0.0 6.2e+03 4.0e+00 4.4e+01  4  0  3  0  4   7  0 24  0 35     0
VecScatterBegin        2 1.0 6.1035e-05 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 1.0014e-05 10.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 1.9826e+00 1.1 0.00e+00 0.0 8.7e+03 7.7e+03 2.1e+01  8  0  4  2  2  14  0 33  9 17     0
Mesh Migration         2 1.0 4.5927e-01 1.0 0.00e+00 0.0 1.5e+04 4.0e+04 5.4e+01  2  0  7 17  5   3  0 58 81 43     0
DMPlexInterp           1 1.0 2.1183e+00 57320.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
DMPlexDistribute       1 1.0 2.1988e+00 1.1 0.00e+00 0.0 5.7e+03 5.7e+04 2.5e+01  9  0  2  9  2  15  0 22 43 20     0
DMPlexDistCones        2 1.0 1.0621e-01 1.1 0.00e+00 0.0 2.2e+03 9.3e+04 4.0e+00  0  0  1  6  0   1  0  8 27  3     0
DMPlexDistLabels       2 1.0 2.8685e-01 1.0 0.00e+00 0.0 9.3e+03 3.7e+04 2.2e+01  1  0  4 10  2   2  0 35 46 18     0
DMPlexDistribOL        1 1.0 2.6254e-01 1.0 0.00e+00 0.0 1.8e+04 2.2e+04 5.0e+01  1  0  8 11  5   2  0 70 53 40     0
DMPlexDistField        3 1.0 2.6364e-02 1.8 0.00e+00 0.0 3.0e+03 9.4e+03 1.2e+01  0  0  1  1  1   0  0 11  4 10     0
DMPlexDistData         2 1.0 9.9430e-01 27.9 0.00e+00 0.0 6.0e+03 5.0e+03 6.0e+00  4  0  3  1  1   6  0 23  4  5     0
DMPlexStratify         6 1.5 5.5264e-01 22.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 6.1109e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          95 1.0 1.1261e+00 4.0 0.00e+00 0.0 2.5e+04 2.9e+04 4.1e+01  4  0 11 20  4   7  0 96 97 33     0
SFBcastEnd            95 1.0 2.9690e-01 5.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 7.7369e-03 12.0 0.00e+00 0.0 8.7e+02 1.8e+04 3.0e+00  0  0  0  0  0   0  0  3  2  2     0
SFReduceEnd            4 1.0 8.3060e-03 4.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 4.6968e-05 21.9 0.00e+00 0.0 9.4e+01 2.8e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 1.3208e-04 3.2 0.00e+00 0.0 9.4e+01 2.8e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided         17 1.0 3.1033e-03 5.2 0.00e+00 0.0 8.3e+02 4.0e+00 1.7e+01  0  0  0  0  2   0  0  0  0  2     0
BuildTwoSidedF        12 1.0 4.2391e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecMDot               96 1.0 3.3720e-02 3.3 1.75e+07 1.0 0.0e+00 0.0e+00 9.6e+01  0  1  0  0  9   0  1  0  0 10 12253
VecNorm              105 1.0 4.9245e-03 1.4 2.49e+06 1.0 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 11 11969
VecScale             217 1.0 2.3873e-03 1.1 3.33e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 33163
VecCopy               77 1.0 7.2002e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               596 1.0 6.5861e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                9 1.0 1.4710e-04 1.2 1.73e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 27810
VecAYPX              544 1.0 3.7701e-03 1.1 3.16e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 19928
VecAXPBYCZ           272 1.0 2.6333e-03 1.2 6.31e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 57063
VecMAXPY             105 1.0 5.9943e-03 1.2 1.98e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 78080
VecAssemblyBegin      14 1.0 5.3096e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecAssemblyEnd        14 1.0 6.6996e-05 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecPointwiseMult      44 1.0 3.2210e-04 1.2 2.04e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 15093
VecScatterBegin      988 1.0 3.2264e-02 1.4 0.00e+00 0.0 1.7e+05 2.3e+03 0.0e+00  0  0 74 11  0   0  0 84 14  0     0
VecScatterEnd        988 1.0 1.3531e-01 5.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
VecSetRandom           4 1.0 8.0204e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize         105 1.0 6.9528e-03 1.2 3.74e+06 1.0 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 11 12716
MatMult              521 1.0 6.0050e-01 1.1 7.20e+08 1.3 1.4e+05 2.6e+03 1.3e+02  2 22 63 10 12   6 22 71 13 14 25122
MatMultAdd           227 1.0 6.2568e-02 1.7 3.48e+07 1.1 1.7e+04 1.2e+03 0.0e+00  0  1  8  1  0   1  1  9  1  0 12729
MatMultTranspose      68 1.0 3.3125e-02 2.0 1.88e+07 1.2 1.1e+04 6.3e+02 0.0e+00  0  1  5  0  0   0  1  6  0  0 12559
MatSolve             129 1.2 7.0106e-02 1.1 4.68e+07 1.1 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   1  2  0  0  0 15570
MatSOR               452 1.0 3.2208e-01 1.1 3.43e+08 1.2 0.0e+00 0.0e+00 0.0e+00  1 11  0  0  0   3 11  0  0  0 23760
MatLUFactorSym         1 1.0 2.0027e-05 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 2.9938e-03 1.1 7.77e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  6005
MatILUFactorSym        1 1.0 1.2600e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             5 1.0 2.0496e-02 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale              14 1.0 1.1414e-02 1.3 6.76e+06 1.2 1.1e+03 2.6e+03 0.0e+00  0  0  0  0  0   0  0  1  0  0 12464
MatResidual           68 1.0 7.3167e-02 1.1 9.23e+07 1.3 1.9e+04 2.6e+03 0.0e+00  0  3  8  1  0   1  3  9  2  0 26129
MatAssemblyBegin      93 1.0 1.0436e+00 7.0 0.00e+00 0.0 4.5e+03 3.0e+05 5.8e+01  3  0  2 38  5   6  0  2 48  6     0
MatAssemblyEnd        93 1.0 1.0572e+00 1.1 0.00e+00 0.0 9.3e+03 3.2e+02 2.2e+02  4  0  4  0 20  10  0  5  0 23     0
MatGetRow          87698 1.0 3.4188e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   3  0  0  0  0     0
MatGetRowIJ            2 2.0 7.8678e-06 8.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        8 1.0 6.2807e-03 1.0 0.00e+00 0.0 3.4e+02 1.7e+03 7.4e+01  0  0  0  0  7   0  0  0  0  8     0
MatGetOrdering         2 2.0 1.7095e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatCoarsen             4 1.0 3.9628e-03 1.5 0.00e+00 0.0 5.1e+03 7.0e+02 2.0e+01  0  0  2  0  2   0  0  2  0  2     0
MatZeroEntries         4 1.0 1.0336e-02 4.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                5 1.0 4.0474e-01 1.0 0.00e+00 0.0 3.7e+02 4.4e+02 2.0e+01  2  0  0  0  2   4  0  0  0  2     0
MatMatMult             5 1.0 1.8609e-01 1.0 5.84e+06 1.3 7.5e+03 1.3e+03 8.0e+01  1  0  3  0  7   2  0  4  0  8   657
MatMatMultSym          5 1.0 1.6933e-01 1.0 0.00e+00 0.0 6.3e+03 1.1e+03 7.0e+01  1  0  3  0  7   2  0  3  0  7     0
MatMatMultNum          5 1.0 1.6730e-02 1.0 5.84e+06 1.3 1.2e+03 2.6e+03 1.0e+01  0  0  1  0  1   0  0  1  0  1  7314
MatPtAP                4 1.0 7.8254e+00 1.0 2.42e+09 1.9 1.3e+04 1.8e+05 6.8e+01 32 62  6 67  6  78 62  6 84  7  5593
MatPtAPSymbolic        4 1.0 4.7722e+00 1.0 0.00e+00 0.0 6.8e+03 1.5e+05 2.8e+01 20  0  3 29  3  47  0  3 37  3     0
MatPtAPNumeric         4 1.0 3.0532e+00 1.0 2.42e+09 1.9 6.4e+03 2.1e+05 4.0e+01 13 62  3 38  4  30 62  3 48  4 14334
MatTrnMatMult          1 1.0 2.6716e-02 1.0 8.18e+05 1.0 1.1e+03 2.8e+03 1.9e+01  0  0  0  0  2   0  0  1  0  2   721
MatTrnMatMultSym       1 1.0 1.7252e-02 1.0 0.00e+00 0.0 9.4e+02 1.6e+03 1.7e+01  0  0  0  0  2   0  0  0  0  2     0
MatTrnMatMultNum       1 1.0 9.4512e-03 1.0 8.18e+05 1.0 1.9e+02 8.9e+03 2.0e+00  0  0  0  0  0   0  0  0  0  0  2039
MatGetLocalMat        16 1.0 7.0903e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol         14 1.0 8.4786e-02 1.2 0.00e+00 0.0 8.2e+03 8.7e+04 0.0e+00  0  0  4 20  0   1  0  4 25  0     0
PCGAMGGraph_AGG        4 1.0 3.3958e-01 1.0 5.43e+06 1.3 2.4e+03 1.4e+03 4.8e+01  1  0  1  0  4   3  0  1  0  5   331
PCGAMGCoarse_AGG       4 1.0 3.4880e-02 1.0 8.18e+05 1.0 7.1e+03 1.4e+03 4.7e+01  0  0  3  0  4   0  0  4  0  5   552
PCGAMGProl_AGG         4 1.0 1.0176e-02 1.0 0.00e+00 0.0 3.0e+03 7.7e+02 8.0e+01  0  0  1  0  7   0  0  2  0  8     0
PCGAMGPOpt_AGG         4 1.0 3.7025e-01 1.0 6.59e+07 1.3 1.8e+04 2.1e+03 1.9e+02  2  2  8  1 17   4  2  9  1 20  3737
GAMG: createProl       4 1.0 7.5580e-01 1.0 7.22e+07 1.3 3.1e+04 1.7e+03 3.6e+02  3  2 13  1 34   8  2 15  2 38  2005
  Graph                8 1.0 3.3810e-01 1.0 5.43e+06 1.3 2.4e+03 1.4e+03 4.8e+01  1  0  1  0  4   3  0  1  0  5   333
  MIS/Agg              4 1.0 4.0288e-03 1.5 0.00e+00 0.0 5.1e+03 7.0e+02 2.0e+01  0  0  2  0  2   0  0  2  0  2     0
  SA: col data         4 1.0 1.8752e-03 1.0 0.00e+00 0.0 1.3e+03 1.6e+03 2.4e+01  0  0  1  0  2   0  0  1  0  3     0
  SA: frmProl0         4 1.0 7.7009e-03 1.0 0.00e+00 0.0 1.8e+03 1.7e+02 4.0e+01  0  0  1  0  4   0  0  1  0  4     0
  SA: smooth           4 1.0 3.7022e-01 1.0 6.59e+07 1.3 1.8e+04 2.1e+03 1.9e+02  2  2  8  1 17   4  2  9  1 20  3737
GAMG: partLevel        4 1.0 7.8308e+00 1.0 2.42e+09 1.9 1.4e+04 1.8e+05 1.7e+02 32 62  6 67 16  78 62  7 85 18  5589
  repartition          2 1.0 2.1386e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
  Invert-Sort          2 1.0 2.1815e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
  Move A               2 1.0 1.4272e-03 1.0 0.00e+00 0.0 1.5e+02 3.7e+03 3.6e+01  0  0  0  0  3   0  0  0  0  4     0
  Move P               2 1.0 3.1939e-03 1.0 0.00e+00 0.0 1.9e+02 1.1e+02 3.6e+01  0  0  0  0  3   0  0  0  0  4     0
PCSetUp                5 1.0 8.8722e+00 1.0 2.49e+09 1.9 4.5e+04 5.4e+04 6.2e+02 37 65 20 68 58  88 65 22 87 65  5107
PCSetUpOnBlocks      129 1.0 4.6492e-03 1.1 7.77e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  3867
PCApply               17 1.0 9.5174e+00 1.0 3.49e+09 1.6 1.9e+05 1.4e+04 6.5e+02 39 97 85 78 60  95 97 96 98 68  7139
KSPGMRESOrthog        96 1.0 3.9721e-02 2.6 3.50e+07 1.0 0.0e+00 0.0e+00 9.6e+01  0  1  0  0  9   0  1  0  0 10 20804
KSPSetUp              18 1.0 1.1928e-03 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
KSPSolve               1 1.0 9.8317e+00 1.0 3.53e+09 1.6 2.0e+05 1.4e+04 8.7e+02 40 98 88 78 81  98 98 99 99 92  7018
SFSetGraph             4 1.0 2.1601e-04 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          25 1.0 3.2976e-03 3.6 0.00e+00 0.0 5.6e+03 9.2e+02 5.0e+00  0  0  2  0  0   0  0  3  0  1     0
SFBcastEnd            25 1.0 1.0622e-03 4.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
         PetscRandom     0              1          646     0.
           Index Set   139            146     24070156     0.
   IS L to G Mapping     3              3     14448024     0.
             Section    70             53        35616     0.
              Vector    15            141     19734088     0.
      Vector Scatter     2             15      1154208     0.
              Matrix     0             52     42870916     0.
      Preconditioner     0             11        11020     0.
       Krylov Solver     0             15       151752     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

         PetscRandom     1              0            0     0.
           Index Set   102             88       195904     0.
   IS L to G Mapping     4              0            0     0.
              Vector   356            218      5052384     0.
      Vector Scatter    40             21        23080     0.
              Matrix   145             80     79765836     0.
      Matrix Coarsen     4              4         2544     0.
      Preconditioner    21             10         8944     0.
       Krylov Solver    21              6       123480     0.
Star Forest Bipartite Graph     4              4         3456     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 1.71661e-05
Average time for zero size MPI_Send(): 1.41064e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type gamg
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options, build machine, compilers, include paths, and libraries: identical to the listing for the 16-process run above (omitted here).
-----------------------------------------

=================
 gamg 40 1
=================
Discretization: RT
MPI processes 32: solving... 
((35155, 1161600), (35155, 1161600))
	Solver time: 6.467208e+00
	Solver iterations: 16
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 32 processors, by jychang48 Wed Mar  2 18:01:44 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           2.074e+01      1.00048   2.074e+01
Objects:              1.068e+03      1.09877   9.844e+02
Flops:                2.198e+09      1.49553   1.837e+09  5.879e+10
Flops/sec:            1.060e+08      1.49497   8.860e+07  2.835e+09
MPI Messages:         1.553e+04      1.84528   1.145e+04  3.664e+05
MPI Message Lengths:  2.863e+08      4.24531   9.989e+03  3.660e+09
MPI Reductions:       1.078e+03      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4269e+01  68.8%  0.0000e+00   0.0%  3.925e+04  10.7%  2.153e+03       21.5%  1.250e+02  11.6% 
 1:             FEM: 6.4673e+00  31.2%  5.8792e+10 100.0%  3.271e+05  89.3%  7.837e+03       78.5%  9.520e+02  88.3% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 1e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.1607e+00 12.7 0.00e+00 0.0 9.3e+03 4.0e+00 4.4e+01  5  0  3  0  4   7  0 24  0 35     0
VecScatterBegin        2 1.0 4.7922e-05 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 6.9141e-06 3.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 2.0361e+00 1.1 0.00e+00 0.0 1.4e+04 5.1e+03 2.1e+01 10  0  4  2  2  14  0 36  9 17     0
Mesh Migration         2 1.0 4.3835e-01 1.0 0.00e+00 0.0 2.2e+04 2.9e+04 5.4e+01  2  0  6 17  5   3  0 56 80 43     0
DMPlexInterp           1 1.0 2.1146e+00 64271.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DMPlexDistribute       1 1.0 2.2676e+00 1.1 0.00e+00 0.0 9.4e+03 3.5e+04 2.5e+01 11  0  3  9  2  16  0 24 41 20     0
DMPlexDistCones        2 1.0 1.0199e-01 1.1 0.00e+00 0.0 3.2e+03 6.6e+04 4.0e+00  0  0  1  6  0   1  0  8 27  3     0
DMPlexDistLabels       2 1.0 2.7904e-01 1.0 0.00e+00 0.0 1.3e+04 2.7e+04 2.2e+01  1  0  4 10  2   2  0 34 45 18     0
DMPlexDistribOL        1 1.0 2.2615e-01 1.0 0.00e+00 0.0 2.7e+04 1.6e+04 5.0e+01  1  0  7 12  5   2  0 68 54 40     0
DMPlexDistField        3 1.0 3.1294e-02 2.0 0.00e+00 0.0 4.3e+03 6.8e+03 1.2e+01  0  0  1  1  1   0  0 11  4 10     0
DMPlexDistData         2 1.0 9.9918e-01 73.6 0.00e+00 0.0 1.0e+04 3.2e+03 6.0e+00  5  0  3  1  1   7  0 26  4  5     0
DMPlexStratify         6 1.5 5.4843e-01 29.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 5.1525e-02 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          95 1.0 1.1663e+00 4.4 0.00e+00 0.0 3.8e+04 2.0e+04 4.1e+01  5  0 10 21  4   7  0 96 97 33     0
SFBcastEnd            95 1.0 3.0130e-01 6.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 8.7683e-03 18.0 0.00e+00 0.0 1.3e+03 1.3e+04 3.0e+00  0  0  0  0  0   0  0  3  2  2     0
SFReduceEnd            4 1.0 8.8024e-03 5.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 3.8862e-05 12.5 0.00e+00 0.0 1.4e+02 2.2e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 1.3995e-04 3.2 0.00e+00 0.0 1.4e+02 2.2e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided         17 1.0 1.9822e-03 2.9 0.00e+00 0.0 1.2e+03 4.0e+00 1.7e+01  0  0  0  0  2   0  0  0  0  2     0
BuildTwoSidedF        12 1.0 4.4608e-04 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecMDot               96 1.0 3.1041e-02 3.4 1.32e+07 1.0 0.0e+00 0.0e+00 9.6e+01  0  1  0  0  9   0  1  0  0 10 13312
VecNorm              105 1.0 5.3015e-03 1.5 1.87e+06 1.0 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 11 11119
VecScale             217 1.0 1.9186e-03 1.1 2.50e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 41267
VecCopy               77 1.0 5.4383e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               596 1.0 4.6587e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                9 1.0 1.0824e-04 1.2 1.30e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 37802
VecAYPX              544 1.0 3.0854e-03 1.2 2.37e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 24362
VecAXPBYCZ           272 1.0 1.8950e-03 1.2 4.74e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 79333
VecMAXPY             105 1.0 4.1659e-03 1.2 1.49e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 112362
VecAssemblyBegin      14 1.0 5.6028e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecAssemblyEnd        14 1.0 6.9141e-05 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecPointwiseMult      44 1.0 2.4009e-04 1.2 1.53e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 20258
VecScatterBegin      988 1.0 3.2709e-02 1.6 0.00e+00 0.0 2.8e+05 1.7e+03 0.0e+00  0  0 75 13  0   0  0 84 16  0     0
VecScatterEnd        988 1.0 1.0493e-01 3.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
VecSetRandom           4 1.0 5.8675e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize         105 1.0 7.1294e-03 1.3 2.81e+06 1.0 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 11 12403
MatMult              521 1.0 4.4204e-01 1.1 5.11e+08 1.3 2.4e+05 1.9e+03 1.3e+02  2 24 65 12 12   6 24 72 15 14 32017
MatMultAdd           227 1.0 5.3586e-02 1.9 2.44e+07 1.1 2.6e+04 9.6e+02 0.0e+00  0  1  7  1  0   1  1  8  1  0 14062
MatMultTranspose      68 1.0 2.3165e-02 1.9 1.23e+07 1.1 1.7e+04 4.9e+02 0.0e+00  0  1  5  0  0   0  1  5  0  0 16104
MatSolve             129 1.2 5.2656e-02 1.2 3.51e+07 1.1 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   1  2  0  0  0 20642
MatSOR               452 1.0 2.2063e-01 1.1 2.33e+08 1.2 0.0e+00 0.0e+00 0.0e+00  1 12  0  0  0   3 12  0  0  0 32021
MatLUFactorSym         1 1.0 3.3855e-05 3.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 2.1760e-03 1.1 5.87e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  8254
MatILUFactorSym        1 1.0 9.6488e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             5 1.0 1.2136e-02 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale              14 1.0 9.3570e-03 1.7 4.72e+06 1.3 1.8e+03 1.9e+03 0.0e+00  0  0  1  0  0   0  0  1  0  0 14120
MatResidual           68 1.0 5.2496e-02 1.2 6.51e+07 1.3 3.1e+04 1.9e+03 0.0e+00  0  3  9  2  0   1  3 10  2  0 33952
MatAssemblyBegin      93 1.0 5.5953e-01 4.5 0.00e+00 0.0 7.1e+03 1.8e+05 5.8e+01  2  0  2 36  5   5  0  2 45  6     0
MatAssemblyEnd        93 1.0 8.5411e-01 1.1 0.00e+00 0.0 1.5e+04 2.4e+02 2.2e+02  4  0  4  0 20  13  0  5  0 23     0
MatGetRow          65844 1.0 2.5644e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   4  0  0  0  0     0
MatGetRowIJ            2 2.0 8.1062e-06 8.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        8 1.0 5.0519e-03 1.0 0.00e+00 0.0 4.5e+02 1.2e+03 7.4e+01  0  0  0  0  7   0  0  0  0  8     0
MatGetOrdering         2 2.0 1.4186e-04 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatCoarsen             4 1.0 3.4940e-03 1.5 0.00e+00 0.0 8.3e+03 5.3e+02 2.2e+01  0  0  2  0  2   0  0  3  0  2     0
MatZeroEntries         4 1.0 7.4160e-03 4.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                5 1.0 3.0466e-01 1.0 0.00e+00 0.0 5.4e+02 3.6e+02 2.0e+01  1  0  0  0  2   5  0  0  0  2     0
MatMatMult             5 1.0 1.1690e-01 1.0 4.14e+06 1.3 1.2e+04 9.7e+02 8.0e+01  1  0  3  0  7   2  0  4  0  8   982
MatMatMultSym          5 1.0 1.0482e-01 1.0 0.00e+00 0.0 1.0e+04 7.8e+02 7.0e+01  1  0  3  0  6   2  0  3  0  7     0
MatMatMultNum          5 1.0 1.2022e-02 1.0 4.14e+06 1.3 2.0e+03 1.9e+03 1.0e+01  0  0  1  0  1   0  0  1  0  1  9545
MatPtAP                4 1.0 4.8059e+00 1.0 1.35e+09 1.8 2.2e+04 1.1e+05 6.8e+01 23 58  6 64  6  74 58  7 82  7  7077
MatPtAPSymbolic        4 1.0 2.8555e+00 1.0 0.00e+00 0.0 1.1e+04 9.4e+04 2.8e+01 14  0  3 29  3  44  0  3 37  3     0
MatPtAPNumeric         4 1.0 1.9504e+00 1.0 1.35e+09 1.8 1.1e+04 1.2e+05 4.0e+01  9 58  3 36  4  30 58  3 45  4 17438
MatTrnMatMult          1 1.0 2.0930e-02 1.0 6.19e+05 1.1 1.6e+03 2.3e+03 1.9e+01  0  0  0  0  2   0  0  0  0  2   924
MatTrnMatMultSym       1 1.0 1.3424e-02 1.0 0.00e+00 0.0 1.4e+03 1.3e+03 1.7e+01  0  0  0  0  2   0  0  0  0  2     0
MatTrnMatMultNum       1 1.0 7.4990e-03 1.0 6.19e+05 1.1 2.7e+02 7.2e+03 2.0e+00  0  0  0  0  0   0  0  0  0  0  2579
MatGetLocalMat        16 1.0 4.7717e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol         14 1.0 6.7384e-02 1.3 0.00e+00 0.0 1.3e+04 5.4e+04 0.0e+00  0  0  4 20  0   1  0  4 26  0     0
PCGAMGGraph_AGG        4 1.0 2.5093e-01 1.0 3.83e+06 1.3 3.7e+03 1.0e+03 4.8e+01  1  0  1  0  4   4  0  1  0  5   418
PCGAMGCoarse_AGG       4 1.0 2.7763e-02 1.0 6.19e+05 1.1 1.1e+04 1.0e+03 4.9e+01  0  0  3  0  5   0  0  3  0  5   697
PCGAMGProl_AGG         4 1.0 8.3046e-03 1.0 0.00e+00 0.0 4.4e+03 6.2e+02 8.0e+01  0  0  1  0  7   0  0  1  0  8     0
PCGAMGPOpt_AGG         4 1.0 2.5502e-01 1.0 4.67e+07 1.3 3.0e+04 1.5e+03 1.9e+02  1  2  8  1 17   4  2  9  2 20  5087
GAMG: createProl       4 1.0 5.4260e-01 1.0 5.11e+07 1.3 4.9e+04 1.3e+03 3.6e+02  3  2 13  2 34   8  2 15  2 38  2620
  Graph                8 1.0 2.4971e-01 1.0 3.83e+06 1.3 3.7e+03 1.0e+03 4.8e+01  1  0  1  0  4   4  0  1  0  5   420
  MIS/Agg              4 1.0 3.5570e-03 1.4 0.00e+00 0.0 8.3e+03 5.3e+02 2.2e+01  0  0  2  0  2   0  0  3  0  2     0
  SA: col data         4 1.0 1.6532e-03 1.1 0.00e+00 0.0 1.9e+03 1.3e+03 2.4e+01  0  0  1  0  2   0  0  1  0  3     0
  SA: frmProl0         4 1.0 6.1510e-03 1.0 0.00e+00 0.0 2.5e+03 1.4e+02 4.0e+01  0  0  1  0  4   0  0  1  0  4     0
  SA: smooth           4 1.0 2.5500e-01 1.0 4.67e+07 1.3 3.0e+04 1.5e+03 1.9e+02  1  2  8  1 17   4  2  9  2 20  5087
GAMG: partLevel        4 1.0 4.8109e+00 1.0 1.35e+09 1.8 2.2e+04 1.1e+05 1.7e+02 23 58  6 64 16  74 58  7 82 18  7070
  repartition          2 1.0 2.5296e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
  Invert-Sort          2 1.0 3.9124e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
  Move A               2 1.0 1.4398e-03 1.0 0.00e+00 0.0 1.9e+02 2.6e+03 3.6e+01  0  0  0  0  3   0  0  0  0  4     0
  Move P               2 1.0 2.5690e-03 1.0 0.00e+00 0.0 2.5e+02 9.6e+01 3.6e+01  0  0  0  0  3   0  0  0  0  4     0
PCSetUp                5 1.0 5.5706e+00 1.0 1.41e+09 1.8 7.3e+04 3.3e+04 6.2e+02 27 60 20 66 58  86 60 22 85 66  6367
PCSetUpOnBlocks      129 1.0 3.5369e-03 1.1 5.87e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  5078
PCApply               17 1.0 6.0198e+00 1.0 2.13e+09 1.5 3.1e+05 9.0e+03 6.5e+02 29 96 86 77 60  93 96 96 98 68  9397
KSPGMRESOrthog        96 1.0 3.5287e-02 2.7 2.63e+07 1.0 0.0e+00 0.0e+00 9.6e+01  0  1  0  0  9   0  1  0  0 10 23420
KSPSetUp              18 1.0 1.0297e-03 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
KSPSolve               1 1.0 6.2599e+00 1.0 2.16e+09 1.5 3.2e+05 8.8e+03 8.7e+02 30 98 89 78 81  97 98 99 99 92  9205
SFSetGraph             4 1.0 2.0409e-04 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          27 1.0 2.2466e-03 2.6 0.00e+00 0.0 9.1e+03 6.9e+02 5.0e+00  0  0  2  0  0   0  0  3  0  1     0
SFBcastEnd            27 1.0 5.7459e-04 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
         PetscRandom     0              1          646     0.
           Index Set   165            172     23657268     0.
   IS L to G Mapping     3              3     14326164     0.
             Section    70             53        35616     0.
              Vector    15            141     16031216     0.
      Vector Scatter     2             15       860160     0.
              Matrix     0             52     30817568     0.
      Preconditioner     0             11        11020     0.
       Krylov Solver     0             15       151752     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

         PetscRandom     1              0            0     0.
           Index Set   102             88       197036     0.
   IS L to G Mapping     4              0            0     0.
              Vector   356            218      3951520     0.
      Vector Scatter    40             21        23048     0.
              Matrix   145             80     57676072     0.
      Matrix Coarsen     4              4         2544     0.
      Preconditioner    21             10         8944     0.
       Krylov Solver    21              6       123480     0.
Star Forest Bipartite Graph     4              4         3456     0.
========================================================================================================================
Average time to get PetscTime(): 6.19888e-07
Average time for MPI_Barrier(): 6.96182e-06
Average time for zero size MPI_Send(): 1.65403e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type gamg
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
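
For reference, a minimal sketch of how the option table above maps onto a Firedrake solver_parameters dict, assuming the "solver_" prefix comes from the solver object's options prefix rather than from the keys themselves:

# sketch only -- keys are the option-table entries above with the "solver_" prefix stripped
parameters = {
    "ksp_type": "gmres",
    "ksp_rtol": 1e-7,
    "pc_type": "fieldsplit",
    "pc_fieldsplit_type": "schur",
    "pc_fieldsplit_schur_fact_type": "upper",
    "pc_fieldsplit_schur_precondition": "selfp",
    "fieldsplit_0_ksp_type": "preonly",
    "fieldsplit_0_pc_type": "bjacobi",
    "fieldsplit_1_ksp_type": "preonly",
    "fieldsplit_1_pc_type": "gamg",
}
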
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 gamg 40 1
=================
Discretization: RT
MPI processes 40: solving... 
((27890, 1161600), (27890, 1161600))
	Solver time: 5.067154e+00
	Solver iterations: 16
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 40 processors, by jychang48 Wed Mar  2 18:02:07 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           1.954e+01      1.00033   1.953e+01
Objects:              1.090e+03      1.12371   9.856e+02
Flops:                1.585e+09      1.51148   1.325e+09  5.299e+10
Flops/sec:            8.113e+07      1.51121   6.782e+07  2.713e+09
MPI Messages:         1.975e+04      2.40864   1.332e+04  5.330e+05
MPI Message Lengths:  2.951e+08      6.39097   7.450e+03  3.971e+09
MPI Reductions:       1.077e+03      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops
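                            (Sanity check, assuming y <- a*x + y on vectors of length N: N multiplications plus
                             N additions give 2N real flops; each complex multiplication costs 6 real flops and
                             each complex addition 2, hence 8N.)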

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4467e+01  74.1%  0.0000e+00   0.0%  5.371e+04  10.1%  1.521e+03       20.4%  1.250e+02  11.6% 
 1:             FEM: 5.0672e+00  25.9%  5.2989e+10 100.0%  4.793e+05  89.9%  5.929e+03       79.6%  9.510e+02  88.3% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 1e-6 * (sum of flops over all processors)/(max time over all processors)
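   (Worked example: in the KSPSolve row of the table below, roughly 98% of the 5.3e+10 total flops divided by the
    4.88 s max time is about 1.06e+10 flop/s, i.e. the ~10600 Mflop/s reported in the last column.)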
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.1854e+0014.3 0.00e+00 0.0 1.3e+04 4.0e+00 4.4e+01  6  0  2  0  4   7  0 24  0 35     0
VecScatterBegin        2 1.0 2.8849e-05 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 8.1062e-06 3.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 2.1183e+00 1.1 0.00e+00 0.0 2.0e+04 3.6e+03 2.1e+01 11  0  4  2  2  15  0 38  9 17     0
Mesh Migration         2 1.0 4.1828e-01 1.0 0.00e+00 0.0 2.9e+04 2.2e+04 5.4e+01  2  0  5 16  5   3  0 54 80 43     0
DMPlexInterp           1 1.0 2.1106e+0061906.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DMPlexDistribute       1 1.0 2.3542e+00 1.1 0.00e+00 0.0 1.4e+04 2.3e+04 2.5e+01 12  0  3  8  2  16  0 26 40 20     0
DMPlexDistCones        2 1.0 9.7619e-02 1.2 0.00e+00 0.0 4.3e+03 5.1e+04 4.0e+00  0  0  1  5  0   1  0  8 27  3     0
DMPlexDistLabels       2 1.0 2.6993e-01 1.0 0.00e+00 0.0 1.7e+04 2.1e+04 2.2e+01  1  0  3  9  2   2  0 32 45 18     0
DMPlexDistribOL        1 1.0 2.0237e-01 1.0 0.00e+00 0.0 3.6e+04 1.2e+04 5.0e+01  1  0  7 11  5   1  0 66 55 40     0
DMPlexDistField        3 1.0 2.9826e-02 2.2 0.00e+00 0.0 5.7e+03 5.2e+03 1.2e+01  0  0  1  1  1   0  0 11  4 10     0
DMPlexDistData         2 1.0 1.0467e+0079.2 0.00e+00 0.0 1.5e+04 2.2e+03 6.0e+00  5  0  3  1  1   7  0 28  4  5     0
DMPlexStratify         6 1.5 5.4628e-0136.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 4.2505e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          95 1.0 1.1899e+00 4.6 0.00e+00 0.0 5.1e+04 1.5e+04 4.1e+01  6  0 10 20  4   8  0 96 97 33     0
SFBcastEnd            95 1.0 3.0756e-01 6.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 8.8451e-0319.5 0.00e+00 0.0 1.7e+03 9.4e+03 3.0e+00  0  0  0  0  0   0  0  3  2  2     0
SFReduceEnd            4 1.0 9.1884e-03 7.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 4.7207e-0524.8 0.00e+00 0.0 1.9e+02 1.8e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 1.4687e-04 3.1 0.00e+00 0.0 1.9e+02 1.8e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided         17 1.0 2.3334e-03 3.5 0.00e+00 0.0 1.7e+03 4.0e+00 1.7e+01  0  0  0  0  2   0  0  0  0  2     0
BuildTwoSidedF        12 1.0 4.2319e-04 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecMDot               96 1.0 2.6845e-02 3.8 1.05e+07 1.1 0.0e+00 0.0e+00 9.6e+01  0  1  0  0  9   0  1  0  0 10 15392
VecNorm              105 1.0 4.4451e-03 1.5 1.50e+06 1.1 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 11 13260
VecScale             217 1.0 1.6758e-03 1.2 2.01e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 47243
VecCopy               77 1.0 5.0402e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               596 1.0 3.8776e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                9 1.0 9.5606e-05 1.2 1.04e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 42794
VecAYPX              544 1.0 2.5859e-03 1.2 1.90e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 29060
VecAXPBYCZ           272 1.0 1.5566e-03 1.2 3.80e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 96549
VecMAXPY             105 1.0 3.2706e-03 1.2 1.19e+07 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 143109
VecAssemblyBegin      14 1.0 5.3811e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecAssemblyEnd        14 1.0 5.7459e-05 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecPointwiseMult      44 1.0 2.1434e-04 1.3 1.23e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 22685
VecScatterBegin      988 1.0 3.5461e-02 2.1 0.00e+00 0.0 4.0e+05 1.3e+03 0.0e+00  0  0 76 14  0   1  0 84 17  0     0
VecScatterEnd        988 1.0 1.2415e-01 5.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   2  0  0  0  0     0
VecSetRandom           4 1.0 5.0902e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize         105 1.0 6.2478e-03 1.3 2.25e+06 1.1 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 11 14152
MatMult              521 1.0 3.6698e-01 1.2 4.16e+08 1.4 3.5e+05 1.5e+03 1.3e+02  2 26 66 13 12   7 26 73 16 14 36848
MatMultAdd           227 1.0 5.2911e-02 2.1 1.89e+07 1.1 3.5e+04 7.9e+02 0.0e+00  0  1  7  1  0   1  1  7  1  0 13698
MatMultTranspose      68 1.0 2.4873e-02 2.5 9.31e+06 1.1 2.4e+04 4.0e+02 0.0e+00  0  1  4  0  0   0  1  5  0  0 13844
MatSolve             129 1.2 4.1506e-02 1.2 2.81e+07 1.1 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   1  2  0  0  0 26104
MatSOR               452 1.0 1.6256e-01 1.2 1.78e+08 1.3 0.0e+00 0.0e+00 0.0e+00  1 12  0  0  0   3 12  0  0  0 40040
MatLUFactorSym         1 1.0 4.1962e-05 4.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 1.7710e-03 1.1 4.73e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 10116
MatILUFactorSym        1 1.0 8.9598e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             5 1.0 1.1864e-02 3.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale              14 1.0 6.0964e-03 1.6 3.80e+06 1.4 2.8e+03 1.5e+03 0.0e+00  0  0  1  0  0   0  0  1  0  0 20551
MatResidual           68 1.0 4.3995e-02 1.2 5.31e+07 1.5 4.7e+04 1.5e+03 0.0e+00  0  3  9  2  0   1  3 10  2  0 38524
MatAssemblyBegin      93 1.0 4.5627e-01 4.6 0.00e+00 0.0 1.0e+04 1.4e+05 5.8e+01  1  0  2 35  5   5  0  2 44  6     0
MatAssemblyEnd        93 1.0 7.9526e-01 1.1 0.00e+00 0.0 2.1e+04 1.9e+02 2.2e+02  4  0  4  0 20  15  0  4  0 23     0
MatGetRow          52698 1.0 2.0620e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   4  0  0  0  0     0
MatGetRowIJ            2 2.0 1.0014e-05 8.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        8 1.0 5.0244e-03 1.0 0.00e+00 0.0 5.5e+02 1.1e+03 7.4e+01  0  0  0  0  7   0  0  0  0  8     0
MatGetOrdering         2 2.0 1.1802e-04 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatCoarsen             4 1.0 3.3238e-03 1.3 0.00e+00 0.0 1.1e+04 4.3e+02 2.1e+01  0  0  2  0  2   0  0  2  0  2     0
MatZeroEntries         4 1.0 5.5079e-03 4.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                5 1.0 2.4477e-01 1.0 0.00e+00 0.0 7.2e+02 3.0e+02 2.0e+01  1  0  0  0  2   5  0  0  0  2     0
MatMatMult             5 1.0 8.6163e-02 1.0 3.37e+06 1.4 1.8e+04 7.6e+02 8.0e+01  0  0  3  0  7   2  0  4  0  8  1272
MatMatMultSym          5 1.0 7.6432e-02 1.0 0.00e+00 0.0 1.5e+04 6.2e+02 7.0e+01  0  0  3  0  6   2  0  3  0  7     0
MatMatMultNum          5 1.0 9.7311e-03 1.0 3.37e+06 1.4 2.9e+03 1.5e+03 1.0e+01  0  0  1  0  1   0  0  1  0  1 11263
MatPtAP                4 1.0 3.7207e+00 1.0 9.62e+08 1.9 3.3e+04 7.8e+04 6.8e+01 19 56  6 65  6  73 56  7 81  7  7908
MatPtAPSymbolic        4 1.0 2.1426e+00 1.0 0.00e+00 0.0 1.7e+04 7.0e+04 2.8e+01 11  0  3 29  3  42  0  3 37  3     0
MatPtAPNumeric         4 1.0 1.5781e+00 1.0 9.62e+08 1.9 1.6e+04 8.6e+04 4.0e+01  8 56  3 35  4  31 56  3 44  4 18646
MatTrnMatMult          1 1.0 1.6947e-02 1.0 4.99e+05 1.1 2.2e+03 1.9e+03 1.9e+01  0  0  0  0  2   0  0  0  0  2  1144
MatTrnMatMultSym       1 1.0 1.1003e-02 1.0 0.00e+00 0.0 1.8e+03 1.1e+03 1.7e+01  0  0  0  0  2   0  0  0  0  2     0
MatTrnMatMultNum       1 1.0 5.9371e-03 1.0 4.99e+05 1.1 3.6e+02 6.0e+03 2.0e+00  0  0  0  0  0   0  0  0  0  0  3266
MatGetLocalMat        16 1.0 3.9840e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol         14 1.0 6.5035e-02 1.3 0.00e+00 0.0 2.0e+04 4.1e+04 0.0e+00  0  0  4 21  0   1  0  4 26  0     0
PCGAMGGraph_AGG        4 1.0 2.0390e-01 1.0 3.12e+06 1.5 5.3e+03 8.4e+02 4.8e+01  1  0  1  0  4   4  0  1  0  5   489
PCGAMGCoarse_AGG       4 1.0 2.3103e-02 1.0 4.99e+05 1.1 1.5e+04 8.4e+02 4.8e+01  0  0  3  0  4   0  0  3  0  5   839
PCGAMGProl_AGG         4 1.0 8.3230e-03 1.0 0.00e+00 0.0 5.9e+03 5.2e+02 8.0e+01  0  0  1  0  7   0  0  1  0  8     0
PCGAMGPOpt_AGG         4 1.0 1.9880e-01 1.0 3.80e+07 1.4 4.4e+04 1.2e+03 1.9e+02  1  2  8  1 17   4  2  9  2 20  6232
GAMG: createProl       4 1.0 4.3470e-01 1.0 4.16e+07 1.4 7.0e+04 1.0e+03 3.6e+02  2  3 13  2 34   9  3 15  2 38  3124
  Graph                8 1.0 2.0300e-01 1.0 3.12e+06 1.5 5.3e+03 8.4e+02 4.8e+01  1  0  1  0  4   4  0  1  0  5   491
  MIS/Agg              4 1.0 3.3939e-03 1.3 0.00e+00 0.0 1.1e+04 4.3e+02 2.1e+01  0  0  2  0  2   0  0  2  0  2     0
  SA: col data         4 1.0 1.5466e-03 1.1 0.00e+00 0.0 2.5e+03 1.0e+03 2.4e+01  0  0  0  0  2   0  0  1  0  3     0
  SA: frmProl0         4 1.0 6.2592e-03 1.0 0.00e+00 0.0 3.4e+03 1.2e+02 4.0e+01  0  0  1  0  4   0  0  1  0  4     0
  SA: smooth           4 1.0 1.9878e-01 1.0 3.80e+07 1.4 4.4e+04 1.2e+03 1.9e+02  1  2  8  1 17   4  2  9  2 20  6233
GAMG: partLevel        4 1.0 3.7256e+00 1.0 9.62e+08 1.9 3.4e+04 7.7e+04 1.7e+02 19 56  6 65 16  74 56  7 81 18  7898
  repartition          2 1.0 2.9397e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
  Invert-Sort          2 1.0 3.0017e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
  Move A               2 1.0 1.5130e-03 1.0 0.00e+00 0.0 2.3e+02 2.4e+03 3.6e+01  0  0  0  0  3   0  0  0  0  4     0
  Move P               2 1.0 2.5032e-03 1.0 0.00e+00 0.0 3.2e+02 1.0e+02 3.6e+01  0  0  0  0  3   0  0  0  0  4     0
PCSetUp                5 1.0 4.3353e+00 1.0 1.00e+09 1.9 1.1e+05 2.5e+04 6.2e+02 22 58 20 67 58  86 58 22 84 66  7108
PCSetUpOnBlocks      129 1.0 3.0313e-03 1.2 4.73e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  5910
PCApply               17 1.0 4.6916e+00 1.0 1.52e+09 1.5 4.6e+05 6.7e+03 6.5e+02 24 96 87 78 60  93 96 97 98 68 10812
KSPGMRESOrthog        96 1.0 3.0260e-02 2.9 2.11e+07 1.1 0.0e+00 0.0e+00 9.6e+01  0  2  0  0  9   0  2  0  0 10 27310
KSPSetUp              18 1.0 8.6641e-04 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
KSPSolve               1 1.0 4.8847e+00 1.0 1.55e+09 1.5 4.8e+05 6.6e+03 8.7e+02 25 98 89 79 81  96 98 99 99 92 10601
SFSetGraph             4 1.0 2.2912e-04 3.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          26 1.0 2.7394e-03 2.6 0.00e+00 0.0 1.2e+04 5.6e+02 5.0e+00  0  0  2  0  0   0  0  3  0  1     0
SFBcastEnd            26 1.0 1.0862e-03 3.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
         PetscRandom     0              1          646     0.
           Index Set   187            194     23336772     0.
   IS L to G Mapping     3              3     14094724     0.
             Section    70             53        35616     0.
              Vector    15            141     13835680     0.
      Vector Scatter     2             15       685800     0.
              Matrix     0             52     22699316     0.
      Preconditioner     0             11        11020     0.
       Krylov Solver     0             15       151752     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

         PetscRandom     1              0            0     0.
           Index Set   102             88       198216     0.
   IS L to G Mapping     4              0            0     0.
              Vector   356            218      3282272     0.
      Vector Scatter    40             21        23032     0.
              Matrix   145             80     42315724     0.
      Matrix Coarsen     4              4         2544     0.
      Preconditioner    21             10         8944     0.
       Krylov Solver    21              6       123480     0.
Star Forest Bipartite Graph     4              4         3456     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 9.58443e-06
Average time for zero size MPI_Send(): 1.40071e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type gamg
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 gamg 40 1
=================
Discretization: RT
MPI processes 48: solving... 
((23365, 1161600), (23365, 1161600))
	Solver time: 4.216640e+00
	Solver iterations: 16
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 48 processors, by jychang48 Wed Mar  2 18:02:30 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           1.886e+01      1.00032   1.886e+01
Objects:              1.098e+03      1.13196   9.852e+02
Flops:                1.310e+09      1.70226   1.017e+09  4.881e+10
Flops/sec:            6.945e+07      1.70193   5.392e+07  2.588e+09
MPI Messages:         2.029e+04      2.24362   1.474e+04  7.073e+05
MPI Message Lengths:  2.606e+08      6.46661   5.856e+03  4.142e+09
MPI Reductions:       1.078e+03      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4641e+01  77.6%  0.0000e+00   0.0%  6.669e+04   9.4%  1.171e+03       20.0%  1.250e+02  11.6% 
 1:             FEM: 4.2166e+00  22.4%  4.8807e+10 100.0%  6.406e+05  90.6%  4.686e+03       80.0%  9.520e+02  88.3% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 1e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.1856e+0013.6 0.00e+00 0.0 1.6e+04 4.0e+00 4.4e+01  6  0  2  0  4   7  0 24  0 35     0
VecScatterBegin        2 1.0 2.2173e-05 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 8.1062e-06 4.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 2.1368e+00 1.1 0.00e+00 0.0 2.7e+04 2.8e+03 2.1e+01 11  0  4  2  2  15  0 40  9 17     0
Mesh Migration         2 1.0 4.0564e-01 1.0 0.00e+00 0.0 3.4e+04 1.9e+04 5.4e+01  2  0  5 16  5   3  0 51 80 43     0
DMPlexInterp           1 1.0 2.1086e+0061847.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DMPlexDistribute       1 1.0 2.3904e+00 1.1 0.00e+00 0.0 1.9e+04 1.7e+04 2.5e+01 13  0  3  8  2  16  0 29 40 20     0
DMPlexDistCones        2 1.0 9.4622e-02 1.2 0.00e+00 0.0 5.1e+03 4.3e+04 4.0e+00  0  0  1  5  0   1  0  8 26  3     0
DMPlexDistLabels       2 1.0 2.6377e-01 1.0 0.00e+00 0.0 2.1e+04 1.8e+04 2.2e+01  1  0  3  9  2   2  0 31 45 18     0
DMPlexDistribOL        1 1.0 1.7173e-01 1.1 0.00e+00 0.0 4.2e+04 1.1e+04 5.0e+01  1  0  6 11  5   1  0 64 55 40     0
DMPlexDistField        3 1.0 2.9037e-02 2.1 0.00e+00 0.0 6.8e+03 4.5e+03 1.2e+01  0  0  1  1  1   0  0 10  4 10     0
DMPlexDistData         2 1.0 1.0405e+0065.1 0.00e+00 0.0 2.1e+04 1.7e+03 6.0e+00  5  0  3  1  1   7  0 31  4  5     0
DMPlexStratify         6 1.5 5.4211e-0141.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 3.6311e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          95 1.0 1.1934e+00 4.4 0.00e+00 0.0 6.4e+04 1.3e+04 4.1e+01  6  0  9 19  4   8  0 96 97 33     0
SFBcastEnd            95 1.0 2.9515e-01 6.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 9.3102e-0322.9 0.00e+00 0.0 2.0e+03 8.2e+03 3.0e+00  0  0  0  0  0   0  0  3  2  2     0
SFReduceEnd            4 1.0 9.3510e-03 7.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 4.2915e-0520.0 0.00e+00 0.0 2.2e+02 1.7e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 1.1206e-04 2.2 0.00e+00 0.0 2.2e+02 1.7e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided         17 1.0 2.1787e-03 3.6 0.00e+00 0.0 2.0e+03 4.0e+00 1.7e+01  0  0  0  0  2   0  0  0  0  2     0
BuildTwoSidedF        12 1.0 5.0044e-04 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecMDot               96 1.0 2.5555e-02 3.9 8.80e+06 1.1 0.0e+00 0.0e+00 9.6e+01  0  1  0  0  9   0  1  0  0 10 16169
VecNorm              105 1.0 4.3268e-03 1.5 1.25e+06 1.0 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 11 13624
VecScale             217 1.0 1.4749e-03 1.2 1.67e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 53683
VecCopy               77 1.0 5.0020e-04 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               596 1.0 3.2456e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                9 1.0 9.8467e-05 1.5 8.69e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 41556
VecAYPX              544 1.0 2.2662e-03 1.3 1.58e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 33170
VecAXPBYCZ           272 1.0 1.3177e-03 1.3 3.16e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 114089
VecMAXPY             105 1.0 2.5764e-03 1.1 9.97e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 181688
VecAssemblyBegin      14 1.0 6.0821e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecAssemblyEnd        14 1.0 6.1989e-05 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecPointwiseMult      44 1.0 2.0766e-04 1.5 1.02e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 23422
VecScatterBegin      988 1.0 3.0665e-02 1.9 0.00e+00 0.0 5.4e+05 1.1e+03 0.0e+00  0  0 77 15  0   1  0 85 18  0     0
VecScatterEnd        988 1.0 1.0807e-01 4.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   2  0  0  0  0     0
VecSetRandom           4 1.0 4.1986e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize         105 1.0 5.9941e-03 1.4 1.88e+06 1.0 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 11 14752
MatMult              521 1.0 2.9442e-01 1.2 3.40e+08 1.5 4.7e+05 1.2e+03 1.3e+02  1 27 67 14 12   7 27 74 17 14 44430
MatMultAdd           227 1.0 4.8976e-02 2.3 1.57e+07 1.1 4.4e+04 6.9e+02 0.0e+00  0  1  6  1  0   1  1  7  1  0 14413
MatMultTranspose      68 1.0 2.0373e-02 2.3 7.53e+06 1.2 3.0e+04 3.5e+02 0.0e+00  0  1  4  0  0   0  1  5  0  0 15974
MatSolve             129 1.2 3.4987e-02 1.2 2.34e+07 1.1 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   1  2  0  0  0 30892
MatSOR               452 1.0 1.1530e-01 1.2 1.39e+08 1.2 0.0e+00 0.0e+00 0.0e+00  1 13  0  0  0   3 13  0  0  0 53183
MatLUFactorSym         1 1.0 3.8862e-05 4.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 1.4689e-03 1.1 3.97e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 12180
MatILUFactorSym        1 1.0 6.6495e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             5 1.0 9.6900e-03 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale              14 1.0 5.3563e-03 1.9 3.09e+06 1.5 3.7e+03 1.2e+03 0.0e+00  0  0  1  0  0   0  0  1  0  0 22511
MatResidual           68 1.0 3.6604e-02 1.3 4.32e+07 1.6 6.4e+04 1.2e+03 0.0e+00  0  3  9  2  0   1  3 10  2  0 44632
MatAssemblyBegin      93 1.0 4.2010e-01 4.6 0.00e+00 0.0 1.4e+04 1.1e+05 5.8e+01  1  0  2 35  5   6  0  2 43  6     0
MatAssemblyEnd        93 1.0 5.8031e-01 1.2 0.00e+00 0.0 2.8e+04 1.6e+02 2.2e+02  3  0  4  0 20  13  0  4  0 23     0
MatGetRow          43909 1.0 1.7181e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   4  0  0  0  0     0
MatGetRowIJ            2 2.0 1.0252e-0510.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        8 1.0 4.6279e-03 1.0 0.00e+00 0.0 6.6e+02 8.8e+02 7.4e+01  0  0  0  0  7   0  0  0  0  8     0
MatGetOrdering         2 2.0 1.0300e-04 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatCoarsen             4 1.0 2.2120e-03 1.1 0.00e+00 0.0 1.4e+04 3.8e+02 2.2e+01  0  0  2  0  2   0  0  2  0  2     0
MatZeroEntries         4 1.0 4.2212e-03 4.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                5 1.0 2.0452e-01 1.0 0.00e+00 0.0 8.6e+02 2.7e+02 2.0e+01  1  0  0  0  2   5  0  0  0  2     0
MatMatMult             5 1.0 7.1930e-02 1.0 2.75e+06 1.5 2.4e+04 6.3e+02 8.0e+01  0  0  3  0  7   2  0  4  0  8  1474
MatMatMultSym          5 1.0 6.3674e-02 1.0 0.00e+00 0.0 2.0e+04 5.1e+02 7.0e+01  0  0  3  0  6   2  0  3  0  7     0
MatMatMultNum          5 1.0 8.2068e-03 1.0 2.75e+06 1.5 4.0e+03 1.2e+03 1.0e+01  0  0  1  0  1   0  0  1  0  1 12916
MatPtAP                4 1.0 3.0910e+00 1.0 7.76e+08 2.2 4.5e+04 5.9e+04 6.8e+01 16 53  6 64  6  73 53  7 80  7  8436
MatPtAPSymbolic        4 1.0 1.8711e+00 1.0 0.00e+00 0.0 2.2e+04 5.4e+04 2.8e+01 10  0  3 29  3  44  0  4 37  3     0
MatPtAPNumeric         4 1.0 1.2206e+00 1.0 7.76e+08 2.2 2.2e+04 6.4e+04 4.0e+01  6 53  3 35  4  29 53  3 43  4 21364
MatTrnMatMult          1 1.0 1.4337e-02 1.0 4.16e+05 1.1 2.6e+03 1.7e+03 1.9e+01  0  0  0  0  2   0  0  0  0  2  1355
MatTrnMatMultSym       1 1.0 9.2320e-03 1.0 0.00e+00 0.0 2.2e+03 9.6e+02 1.7e+01  0  0  0  0  2   0  0  0  0  2     0
MatTrnMatMultNum       1 1.0 5.0960e-03 1.0 4.16e+05 1.1 4.3e+02 5.3e+03 2.0e+00  0  0  0  0  0   0  0  0  0  0  3813
MatGetLocalMat        16 1.0 3.2232e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol         14 1.0 5.9111e-02 1.3 0.00e+00 0.0 2.7e+04 3.2e+04 0.0e+00  0  0  4 21  0   1  0  4 26  0     0
PCGAMGGraph_AGG        4 1.0 1.6926e-01 1.0 2.54e+06 1.6 6.8e+03 7.3e+02 4.8e+01  1  0  1  0  4   4  0  1  0  5   568
PCGAMGCoarse_AGG       4 1.0 1.9103e-02 1.0 4.16e+05 1.1 1.9e+04 7.3e+02 4.9e+01  0  0  3  0  5   0  0  3  0  5  1017
PCGAMGProl_AGG         4 1.0 6.7439e-03 1.0 0.00e+00 0.0 7.1e+03 4.7e+02 8.0e+01  0  0  1  0  7   0  0  1  0  8     0
PCGAMGPOpt_AGG         4 1.0 1.6605e-01 1.0 3.10e+07 1.5 6.0e+04 9.7e+02 1.9e+02  1  2  8  1 17   4  2  9  2 20  7216
GAMG: createProl       4 1.0 3.6235e-01 1.0 3.39e+07 1.5 9.3e+04 8.7e+02 3.6e+02  2  3 13  2 34   9  3 14  2 38  3626
  Graph                8 1.0 1.6913e-01 1.0 2.54e+06 1.6 6.8e+03 7.3e+02 4.8e+01  1  0  1  0  4   4  0  1  0  5   568
  MIS/Agg              4 1.0 2.2740e-03 1.0 0.00e+00 0.0 1.4e+04 3.8e+02 2.2e+01  0  0  2  0  2   0  0  2  0  2     0
  SA: col data         4 1.0 1.4884e-03 1.1 0.00e+00 0.0 3.1e+03 9.4e+02 2.4e+01  0  0  0  0  2   0  0  0  0  3     0
  SA: frmProl0         4 1.0 4.6790e-03 1.0 0.00e+00 0.0 4.0e+03 1.1e+02 4.0e+01  0  0  1  0  4   0  0  1  0  4     0
  SA: smooth           4 1.0 1.6603e-01 1.0 3.10e+07 1.5 6.0e+04 9.7e+02 1.9e+02  1  2  8  1 17   4  2  9  2 20  7217
GAMG: partLevel        4 1.0 3.0959e+00 1.0 7.76e+08 2.2 4.6e+04 5.8e+04 1.7e+02 16 53  6 64 16  73 53  7 80 18  8423
  repartition          2 1.0 3.0708e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
  Invert-Sort          2 1.0 3.5977e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
  Move A               2 1.0 1.6382e-03 1.0 0.00e+00 0.0 2.7e+02 2.0e+03 3.6e+01  0  0  0  0  3   0  0  0  0  4     0
  Move P               2 1.0 2.2459e-03 1.0 0.00e+00 0.0 3.8e+02 9.8e+01 3.6e+01  0  0  0  0  3   0  0  0  0  4     0
PCSetUp                5 1.0 3.6055e+00 1.0 8.07e+08 2.2 1.4e+05 1.9e+04 6.2e+02 19 56 20 66 58  86 56 22 83 66  7606
PCSetUpOnBlocks      129 1.0 2.4600e-03 1.1 3.97e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  7273
PCApply               17 1.0 3.8817e+00 1.0 1.26e+09 1.7 6.2e+05 5.3e+03 6.5e+02 21 95 88 79 60  92 95 97 98 68 11983
KSPGMRESOrthog        96 1.0 2.8517e-02 3.0 1.76e+07 1.1 0.0e+00 0.0e+00 9.6e+01  0  2  0  0  9   0  2  0  0 10 28981
KSPSetUp              18 1.0 7.9155e-04 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
KSPSolve               1 1.0 4.0445e+00 1.0 1.28e+09 1.7 6.4e+05 5.2e+03 8.7e+02 21 97 90 79 81  96 97 99 99 92 11761
SFSetGraph             4 1.0 1.7500e-04 3.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          27 1.0 2.1336e-03 2.8 0.00e+00 0.0 1.6e+04 4.9e+02 5.0e+00  0  0  2  0  0   0  0  2  0  1     0
SFBcastEnd            27 1.0 8.4925e-04 5.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
         PetscRandom     0              1          646     0.
           Index Set   195            202     22897792     0.
   IS L to G Mapping     3              3     14076928     0.
             Section    70             53        35616     0.
              Vector    15            141     12429688     0.
      Vector Scatter     2             15       577200     0.
              Matrix     0             52     18299000     0.
      Preconditioner     0             11        11020     0.
       Krylov Solver     0             15       151752     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

         PetscRandom     1              0            0     0.
           Index Set   102             88       174544     0.
   IS L to G Mapping     4              0            0     0.
              Vector   356            218      2791528     0.
      Vector Scatter    40             21        23056     0.
              Matrix   145             80     34632808     0.
      Matrix Coarsen     4              4         2544     0.
      Preconditioner    21             10         8944     0.
       Krylov Solver    21              6       123480     0.
Star Forest Bipartite Graph     4              4         3456     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 1.12057e-05
Average time for zero size MPI_Send(): 1.39574e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type gamg
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 gamg 40 1
=================
Discretization: RT
MPI processes 56: solving... 
((20104, 1161600), (20104, 1161600))
	Solver time: 3.855059e+00
	Solver iterations: 16
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 56 processors, by jychang48 Wed Mar  2 18:02:52 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           1.866e+01      1.00021   1.866e+01
Objects:              1.116e+03      1.14815   9.859e+02
Flops:                1.055e+09      1.76655   8.275e+08  4.634e+10
Flops/sec:            5.653e+07      1.76628   4.435e+07  2.484e+09
MPI Messages:         2.255e+04      2.17874   1.655e+04  9.265e+05
MPI Message Lengths:  2.460e+08      7.31784   4.913e+03  4.552e+09
MPI Reductions:       1.078e+03      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4803e+01  79.3%  0.0000e+00   0.0%  8.318e+04   9.0%  9.156e+02       18.6%  1.250e+02  11.6% 
 1:             FEM: 3.8551e+00  20.7%  4.6342e+10 100.0%  8.434e+05  91.0%  3.997e+03       81.4%  9.520e+02  88.3% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 1e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.2211e+0012.6 0.00e+00 0.0 2.0e+04 4.0e+00 4.4e+01  6  0  2  0  4   8  0 24  0 35     0
VecScatterBegin        2 1.0 2.0027e-05 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 8.1062e-06 4.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 2.1955e+00 1.1 0.00e+00 0.0 3.5e+04 2.2e+03 2.1e+01 12  0  4  2  2  15  0 42  9 17     0
Mesh Migration         2 1.0 4.0020e-01 1.0 0.00e+00 0.0 4.1e+04 1.6e+04 5.4e+01  2  0  4 15  5   3  0 50 79 43     0
DMPlexInterp           1 1.0 2.1409e+0065070.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DMPlexDistribute       1 1.0 2.4610e+00 1.1 0.00e+00 0.0 2.6e+04 1.3e+04 2.5e+01 13  0  3  7  2  17  0 31 39 20     0
DMPlexDistCones        2 1.0 9.4115e-02 1.2 0.00e+00 0.0 6.2e+03 3.6e+04 4.0e+00  0  0  1  5  0   1  0  7 26  3     0
DMPlexDistLabels       2 1.0 2.6018e-01 1.0 0.00e+00 0.0 2.5e+04 1.5e+04 2.2e+01  1  0  3  8  2   2  0 30 45 18     0
DMPlexDistribOL        1 1.0 1.5694e-01 1.1 0.00e+00 0.0 5.1e+04 9.2e+03 5.0e+01  1  0  6 10  5   1  0 62 56 40     0
DMPlexDistField        3 1.0 3.1035e-02 2.2 0.00e+00 0.0 8.3e+03 3.8e+03 1.2e+01  0  0  1  1  1   0  0 10  4 10     0
DMPlexDistData         2 1.0 1.0645e+0052.3 0.00e+00 0.0 2.8e+04 1.3e+03 6.0e+00  5  0  3  1  1   7  0 33  4  5     0
DMPlexStratify         6 1.5 5.4196e-0150.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 3.2694e-02 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          95 1.0 1.2282e+00 4.1 0.00e+00 0.0 8.0e+04 1.0e+04 4.1e+01  6  0  9 18  4   8  0 96 97 33     0
SFBcastEnd            95 1.0 2.9801e-01 9.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 9.2599e-0323.0 0.00e+00 0.0 2.5e+03 6.8e+03 3.0e+00  0  0  0  0  0   0  0  3  2  2     0
SFReduceEnd            4 1.0 1.0490e-02 8.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 3.5048e-0518.4 0.00e+00 0.0 2.7e+02 1.5e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 1.1206e-04 2.5 0.00e+00 0.0 2.7e+02 1.5e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided         17 1.0 2.0077e-03 3.6 0.00e+00 0.0 2.5e+03 4.0e+00 1.7e+01  0  0  0  0  2   0  0  0  0  2     0
BuildTwoSidedF        12 1.0 4.8900e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecMDot               96 1.0 2.3549e-02 3.5 7.55e+06 1.0 0.0e+00 0.0e+00 9.6e+01  0  1  0  0  9   0  1  0  0 10 17546
VecNorm              105 1.0 4.9648e-03 1.4 1.08e+06 1.0 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 11 11873
VecScale             217 1.0 1.3340e-03 1.2 1.43e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 59353
VecCopy               77 1.0 3.8218e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               596 1.0 2.8689e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                9 1.0 8.3923e-05 1.4 7.45e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 48756
VecAYPX              544 1.0 1.7948e-03 1.2 1.36e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 41879
VecAXPBYCZ           272 1.0 1.0500e-03 1.2 2.71e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 143173
VecMAXPY             105 1.0 2.2776e-03 1.1 8.55e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 205516
VecAssemblyBegin      14 1.0 5.9843e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecAssemblyEnd        14 1.0 6.9380e-05 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecPointwiseMult      44 1.0 1.8048e-04 1.4 8.77e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 26948
VecScatterBegin      988 1.0 3.1637e-02 2.2 0.00e+00 0.0 7.2e+05 9.4e+02 0.0e+00  0  0 77 15  0   1  0 85 18  0     0
VecScatterEnd        988 1.0 8.6891e-02 3.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
VecSetRandom           4 1.0 3.6597e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize         105 1.0 6.5765e-03 1.3 1.61e+06 1.0 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 11 13445
MatMult              521 1.0 2.4558e-01 1.2 2.74e+08 1.5 6.3e+05 1.0e+03 1.3e+02  1 27 68 14 12   6 27 75 17 14 51699
MatMultAdd           227 1.0 3.8159e-02 2.0 1.29e+07 1.1 5.5e+04 5.9e+02 0.0e+00  0  1  6  1  0   1  1  7  1  0 18085
MatMultTranspose      68 1.0 1.6905e-02 2.4 6.06e+06 1.2 3.8e+04 3.0e+02 0.0e+00  0  1  4  0  0   0  1  5  0  0 18319
MatSolve             129 1.2 2.8386e-02 1.1 2.01e+07 1.1 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   1  2  0  0  0 37974
MatSOR               452 1.0 8.4327e-02 1.2 1.13e+08 1.2 0.0e+00 0.0e+00 0.0e+00  0 12  0  0  0   2 12  0  0  0 67671
MatLUFactorSym         1 1.0 4.3869e-05 5.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 1.2627e-03 1.1 3.40e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 14138
MatILUFactorSym        1 1.0 7.9203e-04 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             5 1.0 6.6226e-03 2.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale              14 1.0 4.1234e-03 1.9 2.47e+06 1.4 5.0e+03 1.0e+03 0.0e+00  0  0  1  0  0   0  0  1  0  0 28256
MatResidual           68 1.0 2.7525e-02 1.1 3.46e+07 1.5 8.5e+04 1.0e+03 0.0e+00  0  3  9  2  0   1  3 10  2  0 57417
MatAssemblyBegin      93 1.0 4.3088e-01 3.8 0.00e+00 0.0 1.8e+04 9.0e+04 5.8e+01  2  0  2 35  5   7  0  2 43  6     0
MatAssemblyEnd        93 1.0 6.9626e-01 1.4 0.00e+00 0.0 3.6e+04 1.4e+02 2.2e+02  4  0  4  0 20  17  0  4  0 23     0
MatGetRow          37645 1.0 1.4686e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   4  0  0  0  0     0
MatGetRowIJ            2 2.0 5.9605e-06 6.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        8 1.0 4.5202e-03 1.0 0.00e+00 0.0 7.6e+02 8.8e+02 7.4e+01  0  0  0  0  7   0  0  0  0  8     0
MatGetOrdering         2 2.0 9.2030e-05 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatCoarsen             4 1.0 2.1741e-03 1.1 0.00e+00 0.0 1.7e+04 3.4e+02 2.2e+01  0  0  2  0  2   0  0  2  0  2     0
MatZeroEntries         4 1.0 3.0334e-03 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                5 1.0 1.7570e-01 1.0 0.00e+00 0.0 1.0e+03 2.4e+02 2.0e+01  1  0  0  0  2   5  0  0  0  2     0
MatMatMult             5 1.0 6.3858e-02 1.0 2.21e+06 1.5 3.2e+04 5.3e+02 8.0e+01  0  0  3  0  7   2  0  4  0  8  1611
MatMatMultSym          5 1.0 5.6910e-02 1.0 0.00e+00 0.0 2.7e+04 4.3e+02 7.0e+01  0  0  3  0  6   1  0  3  0  7     0
MatMatMultNum          5 1.0 6.9041e-03 1.0 2.21e+06 1.5 5.2e+03 1.1e+03 1.0e+01  0  0  1  0  1   0  0  1  0  1 14898
MatPtAP                4 1.0 2.8888e+00 1.0 6.50e+08 2.6 6.0e+04 5.0e+04 6.8e+01 15 53  6 65  6  75 53  7 80  7  8455
MatPtAPSymbolic        4 1.0 1.6264e+00 1.0 0.00e+00 0.0 3.0e+04 4.6e+04 2.8e+01  9  0  3 30  3  42  0  4 37  3     0
MatPtAPNumeric         4 1.0 1.2635e+00 1.0 6.50e+08 2.6 3.0e+04 5.4e+04 4.0e+01  7 53  3 35  4  33 53  4 43  4 19331
MatTrnMatMult          1 1.0 1.2778e-02 1.0 3.58e+05 1.1 3.2e+03 1.5e+03 1.9e+01  0  0  0  0  2   0  0  0  0  2  1524
MatTrnMatMultSym       1 1.0 8.3380e-03 1.0 0.00e+00 0.0 2.6e+03 8.5e+02 1.7e+01  0  0  0  0  2   0  0  0  0  2     0
MatTrnMatMultNum       1 1.0 4.4389e-03 1.0 3.58e+05 1.1 5.2e+02 4.8e+03 2.0e+00  0  0  0  0  0   0  0  0  0  0  4388
MatGetLocalMat        16 1.0 2.8000e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol         14 1.0 5.8958e-02 1.3 0.00e+00 0.0 3.6e+04 2.7e+04 0.0e+00  0  0  4 21  0   1  0  4 26  0     0
PCGAMGGraph_AGG        4 1.0 1.4344e-01 1.0 2.04e+06 1.5 8.7e+03 6.3e+02 4.8e+01  1  0  1  0  4   4  0  1  0  5   648
PCGAMGCoarse_AGG       4 1.0 1.7099e-02 1.0 3.58e+05 1.1 2.3e+04 6.5e+02 4.9e+01  0  0  2  0  5   0  0  3  0  5  1139
PCGAMGProl_AGG         4 1.0 6.5010e-03 1.0 0.00e+00 0.0 8.7e+03 4.1e+02 8.0e+01  0  0  1  0  7   0  0  1  0  8     0
PCGAMGPOpt_AGG         4 1.0 1.4499e-01 1.0 2.49e+07 1.5 8.0e+04 8.2e+02 1.9e+02  1  3  9  1 17   4  3  9  2 20  8020
GAMG: createProl       4 1.0 3.1266e-01 1.0 2.73e+07 1.5 1.2e+05 7.5e+02 3.6e+02  2  3 13  2 34   8  3 14  2 38  4079
  Graph                8 1.0 1.4329e-01 1.0 2.04e+06 1.5 8.7e+03 6.3e+02 4.8e+01  1  0  1  0  4   4  0  1  0  5   649
  MIS/Agg              4 1.0 2.2221e-03 1.0 0.00e+00 0.0 1.7e+04 3.4e+02 2.2e+01  0  0  2  0  2   0  0  2  0  2     0
  SA: col data         4 1.0 1.4677e-03 1.1 0.00e+00 0.0 3.8e+03 8.2e+02 2.4e+01  0  0  0  0  2   0  0  0  0  3     0
  SA: frmProl0         4 1.0 4.5345e-03 1.0 0.00e+00 0.0 4.9e+03 9.5e+01 4.0e+01  0  0  1  0  4   0  0  1  0  4     0
  SA: smooth           4 1.0 1.4497e-01 1.0 2.49e+07 1.5 8.0e+04 8.2e+02 1.9e+02  1  3  9  1 17   4  3  9  2 20  8021
GAMG: partLevel        4 1.0 2.8946e+00 1.0 6.50e+08 2.6 6.1e+04 4.9e+04 1.7e+02 16 53  7 65 16  75 53  7 80 18  8438
  repartition          2 1.0 3.2711e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
  Invert-Sort          2 1.0 4.0507e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
  Move A               2 1.0 2.5859e-03 1.5 0.00e+00 0.0 3.1e+02 2.0e+03 3.6e+01  0  0  0  0  3   0  0  0  0  4     0
  Move P               2 1.0 2.9900e-03 1.4 0.00e+00 0.0 4.5e+02 1.0e+02 3.6e+01  0  0  0  0  3   0  0  0  0  4     0
PCSetUp                5 1.0 3.3360e+00 1.0 6.75e+08 2.5 1.8e+05 1.7e+04 6.2e+02 18 56 20 68 58  87 56 22 83 66  7714
PCSetUpOnBlocks      129 1.0 2.3839e-03 1.2 3.40e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  7488
PCApply               17 1.0 3.5413e+00 1.0 1.01e+09 1.8 8.2e+05 4.5e+03 6.5e+02 19 95 88 80 60  92 95 97 99 68 12428
KSPGMRESOrthog        96 1.0 2.6282e-02 2.8 1.51e+07 1.0 0.0e+00 0.0e+00 9.6e+01  0  2  0  0  9   0  2  0  0 10 31445
KSPSetUp              18 1.0 7.0262e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
KSPSolve               1 1.0 3.6885e+00 1.0 1.03e+09 1.8 8.4e+05 4.4e+03 8.7e+02 20 97 91 81 81  96 97 99 99 92 12218
SFSetGraph             4 1.0 1.6308e-04 3.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          27 1.0 2.0990e-03 2.8 0.00e+00 0.0 1.9e+04 4.4e+02 5.0e+00  0  0  2  0  0   0  0  2  0  1     0
SFBcastEnd            27 1.0 7.8893e-04 3.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
         PetscRandom     0              1          646     0.
           Index Set   213            220     22661528     0.
   IS L to G Mapping     3              3     12801292     0.
             Section    70             53        35616     0.
              Vector    15            141     11422112     0.
      Vector Scatter     2             15       498936     0.
              Matrix     0             52     15514332     0.
      Preconditioner     0             11        11020     0.
       Krylov Solver     0             15       151752     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

         PetscRandom     1              0            0     0.
           Index Set   102             88       158676     0.
   IS L to G Mapping     4              0            0     0.
              Vector   356            218      2450960     0.
      Vector Scatter    40             21        23040     0.
              Matrix   145             80     29369344     0.
      Matrix Coarsen     4              4         2544     0.
      Preconditioner    21             10         8944     0.
       Krylov Solver    21              6       123480     0.
Star Forest Bipartite Graph     4              4         3456     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 9.44138e-06
Average time for zero size MPI_Send(): 1.28576e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type gamg
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------
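
For reference, the option table recorded in the log above can also be set from Python rather than on the command line. The following is a minimal sketch only, assuming petsc4py is importable in the same environment and that the solve picks up the same "solver_" options prefix shown in the table; it is not taken from the attached script.

from petsc4py import PETSc

opts = PETSc.Options()
for name, value in [
    ("solver_ksp_type", "gmres"),
    ("solver_ksp_rtol", "1e-7"),
    ("solver_pc_type", "fieldsplit"),
    ("solver_pc_fieldsplit_type", "schur"),
    ("solver_pc_fieldsplit_schur_fact_type", "upper"),
    ("solver_pc_fieldsplit_schur_precondition", "selfp"),
    ("solver_fieldsplit_0_ksp_type", "preonly"),
    ("solver_fieldsplit_0_pc_type", "bjacobi"),
    ("solver_fieldsplit_1_ksp_type", "preonly"),
    ("solver_fieldsplit_1_pc_type", "gamg"),  # "hypre" for the hypre runs shown further below
    ("log_summary", ""),                      # empty value just turns the flag on
]:
    opts.setValue(name, value)

Swapping only the value of solver_fieldsplit_1_pc_type reproduces the different preconditioner runs being compared in these logs.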

=================
 gamg 40 1
=================
Discretization: RT
MPI processes 64: solving... 
((17544, 1161600), (17544, 1161600))
	Solver time: 3.773817e+00
	Solver iterations: 16
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 64 processors, by jychang48 Wed Mar  2 18:03:15 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           1.874e+01      1.00026   1.873e+01
Objects:              1.130e+03      1.16495   9.868e+02
Flops:                9.997e+08      2.24666   6.960e+08  4.454e+10
Flops/sec:            5.336e+07      2.24622   3.715e+07  2.378e+09
MPI Messages:         2.500e+04      2.38812   1.814e+04  1.161e+06
MPI Message Lengths:  2.334e+08      8.33714   4.205e+03  4.883e+09
MPI Reductions:       1.081e+03      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4959e+01  79.9%  0.0000e+00   0.0%  1.007e+05   8.7%  7.457e+02       17.7%  1.250e+02  11.6% 
 1:             FEM: 3.7738e+00  20.1%  4.4543e+10 100.0%  1.061e+06  91.3%  3.459e+03       82.3%  9.550e+02  88.3% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.2383e+0012.0 0.00e+00 0.0 2.4e+04 4.0e+00 4.4e+01  6  0  2  0  4   8  0 24  0 35     0
VecScatterBegin        2 1.0 1.9073e-05 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 7.1526e-06 3.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 2.2405e+00 1.1 0.00e+00 0.0 4.4e+04 1.8e+03 2.1e+01 12  0  4  2  2  15  0 44  9 17     0
Mesh Migration         2 1.0 3.9701e-01 1.0 0.00e+00 0.0 4.9e+04 1.4e+04 5.4e+01  2  0  4 14  5   3  0 48 79 43     0
DMPlexInterp           1 1.0 2.1140e+0062441.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DMPlexDistribute       1 1.0 2.5115e+00 1.1 0.00e+00 0.0 3.3e+04 1.0e+04 2.5e+01 13  0  3  7  2  17  0 33 38 20     0
DMPlexDistCones        2 1.0 9.2092e-02 1.2 0.00e+00 0.0 7.2e+03 3.1e+04 4.0e+00  0  0  1  5  0   1  0  7 26  3     0
DMPlexDistLabels       2 1.0 2.6102e-01 1.0 0.00e+00 0.0 2.9e+04 1.3e+04 2.2e+01  1  0  3  8  2   2  0 29 45 18     0
DMPlexDistribOL        1 1.0 1.4294e-01 1.1 0.00e+00 0.0 6.1e+04 8.0e+03 5.0e+01  1  0  5 10  5   1  0 60 56 40     0
DMPlexDistField        3 1.0 3.2045e-02 2.3 0.00e+00 0.0 9.7e+03 3.4e+03 1.2e+01  0  0  1  1  1   0  0 10  4 10     0
DMPlexDistData         2 1.0 1.0870e+0055.1 0.00e+00 0.0 3.5e+04 1.0e+03 6.0e+00  6  0  3  1  1   7  0 35  4  5     0
DMPlexStratify         6 1.5 5.4254e-0157.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 2.8019e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          95 1.0 1.2455e+00 4.0 0.00e+00 0.0 9.7e+04 8.7e+03 4.1e+01  6  0  8 17  4   8  0 96 97 33     0
SFBcastEnd            95 1.0 3.0168e-0110.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 9.6369e-0322.3 0.00e+00 0.0 2.9e+03 5.8e+03 3.0e+00  0  0  0  0  0   0  0  3  2  2     0
SFReduceEnd            4 1.0 9.7442e-03 8.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 3.6001e-0516.8 0.00e+00 0.0 3.2e+02 1.3e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 1.6093e-04 4.6 0.00e+00 0.0 3.2e+02 1.3e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided         17 1.0 2.0092e-03 4.2 0.00e+00 0.0 2.9e+03 4.0e+00 1.7e+01  0  0  0  0  2   0  0  0  0  2     0
BuildTwoSidedF        12 1.0 4.2796e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecMDot               96 1.0 2.3543e-02 5.1 6.61e+06 1.1 0.0e+00 0.0e+00 9.6e+01  0  1  0  0  9   0  1  0  0 10 17550
VecNorm              105 1.0 1.2387e-02 5.0 9.42e+05 1.0 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 11  4759
VecScale             217 1.0 1.4968e-03 1.4 1.26e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 52895
VecCopy               77 1.0 3.4785e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               596 1.0 2.5966e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                9 1.0 7.5102e-05 1.4 6.53e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 54480
VecAYPX              544 1.0 1.8840e-03 1.5 1.19e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 39891
VecAXPBYCZ           272 1.0 1.1594e-03 1.5 2.37e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 129640
VecMAXPY             105 1.0 1.9822e-03 1.1 7.49e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 236134
VecAssemblyBegin      14 1.0 5.4741e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
VecAssemblyEnd        14 1.0 7.1526e-05 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecPointwiseMult      44 1.0 1.6665e-04 1.5 7.68e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 29180
VecScatterBegin      988 1.0 3.0876e-02 2.5 0.00e+00 0.0 9.0e+05 8.2e+02 0.0e+00  0  0 78 15  0   1  0 85 18  0     0
VecScatterEnd        988 1.0 9.7675e-02 4.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   2  0  0  0  0     0
VecSetRandom           4 1.0 3.1018e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize         105 1.0 1.3980e-02 3.5 1.41e+06 1.0 0.0e+00 0.0e+00 1.0e+02  0  0  0  0 10   0  0  0  0 11  6325
MatMult              521 1.0 2.2436e-01 1.2 2.57e+08 1.8 7.9e+05 8.8e+02 1.3e+02  1 28 68 14 12   5 28 75 17 14 55789
MatMultAdd           227 1.0 3.2643e-02 1.8 1.14e+07 1.2 6.5e+04 5.4e+02 0.0e+00  0  2  6  1  0   1  2  6  1  0 20795
MatMultTranspose      68 1.0 1.7001e-02 2.5 5.40e+06 1.4 4.6e+04 2.7e+02 0.0e+00  0  1  4  0  0   0  1  4  0  0 17550
MatSolve             129 1.2 2.5924e-02 1.2 1.76e+07 1.1 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   1  2  0  0  0 41475
MatSOR               452 1.0 7.1886e-02 1.3 9.34e+07 1.3 0.0e+00 0.0e+00 0.0e+00  0 12  0  0  0   2 12  0  0  0 74824
MatLUFactorSym         1 1.0 3.0994e-05 3.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 1.1089e-03 1.1 2.96e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 16070
MatILUFactorSym        1 1.0 7.7200e-04 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             5 1.0 6.6314e-03 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale              14 1.0 4.2133e-03 2.7 2.33e+06 1.7 6.3e+03 8.7e+02 0.0e+00  0  0  1  0  0   0  0  1  0  0 27151
MatResidual           68 1.0 2.6713e-02 1.2 3.27e+07 1.9 1.1e+05 8.7e+02 0.0e+00  0  3  9  2  0   1  3 10  2  0 58240
MatAssemblyBegin      93 1.0 4.7054e-01 4.1 0.00e+00 0.0 2.2e+04 7.8e+04 5.8e+01  2  0  2 36  5   9  0  2 44  6     0
MatAssemblyEnd        93 1.0 8.0991e-01 1.2 0.00e+00 0.0 4.6e+04 1.2e+02 2.2e+02  4  0  4  0 20  21  0  4  0 23     0
MatGetRow          32943 1.0 1.2840e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   3  0  0  0  0     0
MatGetRowIJ            2 2.0 6.9141e-06 7.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        8 1.0 4.1435e-03 1.0 0.00e+00 0.0 8.6e+02 6.5e+02 7.4e+01  0  0  0  0  7   0  0  0  0  8     0
MatGetOrdering         2 2.0 8.7023e-05 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatCoarsen             4 1.0 2.1451e-03 1.0 0.00e+00 0.0 2.2e+04 2.9e+02 2.5e+01  0  0  2  0  2   0  0  2  0  3     0
MatZeroEntries         4 1.0 2.5702e-03 3.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                5 1.0 1.5406e-01 1.0 0.00e+00 0.0 1.2e+03 2.2e+02 2.0e+01  1  0  0  0  2   4  0  0  0  2     0
MatMatMult             5 1.0 5.2665e-02 1.0 2.08e+06 1.8 4.0e+04 4.6e+02 8.0e+01  0  0  3  0  7   1  0  4  0  8  1926
MatMatMultSym          5 1.0 4.6258e-02 1.0 0.00e+00 0.0 3.4e+04 3.7e+02 7.0e+01  0  0  3  0  6   1  0  3  0  7     0
MatMatMultNum          5 1.0 6.3677e-03 1.0 2.08e+06 1.8 6.6e+03 9.2e+02 1.0e+01  0  0  1  0  1   0  0  1  0  1 15926
MatPtAP                4 1.0 2.8912e+00 1.0 6.30e+08 3.8 7.6e+04 4.2e+04 6.8e+01 15 52  7 66  6  77 52  7 80  7  7999
MatPtAPSymbolic        4 1.0 1.5281e+00 1.0 0.00e+00 0.0 3.8e+04 3.9e+04 2.8e+01  8  0  3 30  3  40  0  4 36  3     0
MatPtAPNumeric         4 1.0 1.3641e+00 1.0 6.30e+08 3.8 3.8e+04 4.6e+04 4.0e+01  7 52  3 36  4  36 52  4 44  4 16954
MatTrnMatMult          1 1.0 1.1292e-02 1.0 3.14e+05 1.1 3.7e+03 1.4e+03 1.9e+01  0  0  0  0  2   0  0  0  0  2  1728
MatTrnMatMultSym       1 1.0 7.5750e-03 1.0 0.00e+00 0.0 3.1e+03 7.8e+02 1.7e+01  0  0  0  0  2   0  0  0  0  2     0
MatTrnMatMultNum       1 1.0 3.7148e-03 1.0 3.14e+05 1.1 6.1e+02 4.3e+03 2.0e+00  0  0  0  0  0   0  0  0  0  0  5254
MatGetLocalMat        16 1.0 2.3561e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol         14 1.0 5.9310e-02 1.7 0.00e+00 0.0 4.5e+04 2.3e+04 0.0e+00  0  0  4 21  0   1  0  4 26  0     0
PCGAMGGraph_AGG        4 1.0 1.2765e-01 1.0 1.93e+06 1.9 1.1e+04 5.6e+02 4.8e+01  1  0  1  0  4   3  0  1  0  5   717
PCGAMGCoarse_AGG       4 1.0 1.5398e-02 1.0 3.14e+05 1.1 2.9e+04 5.6e+02 5.2e+01  0  0  3  0  5   0  0  3  0  5  1268
PCGAMGProl_AGG         4 1.0 6.5541e-03 1.0 0.00e+00 0.0 1.0e+04 3.8e+02 8.0e+01  0  0  1  0  7   0  0  1  0  8     0
PCGAMGPOpt_AGG         4 1.0 1.2338e-01 1.0 2.34e+07 1.8 1.0e+05 7.1e+02 1.9e+02  1  3  9  1 17   3  3 10  2 20  9290
GAMG: createProl       4 1.0 2.7354e-01 1.0 2.57e+07 1.8 1.5e+05 6.5e+02 3.7e+02  1  3 13  2 34   7  3 14  2 39  4596
  Graph                8 1.0 1.2751e-01 1.0 1.93e+06 1.9 1.1e+04 5.6e+02 4.8e+01  1  0  1  0  4   3  0  1  0  5   718
  MIS/Agg              4 1.0 2.1992e-03 1.0 0.00e+00 0.0 2.2e+04 2.9e+02 2.5e+01  0  0  2  0  2   0  0  2  0  3     0
  SA: col data         4 1.0 1.3566e-03 1.1 0.00e+00 0.0 4.5e+03 7.5e+02 2.4e+01  0  0  0  0  2   0  0  0  0  3     0
  SA: frmProl0         4 1.0 4.7398e-03 1.0 0.00e+00 0.0 5.6e+03 8.8e+01 4.0e+01  0  0  0  0  4   0  0  1  0  4     0
  SA: smooth           4 1.0 1.2335e-01 1.0 2.34e+07 1.8 1.0e+05 7.1e+02 1.9e+02  1  3  9  1 17   3  3 10  2 20  9292
GAMG: partLevel        4 1.0 2.8968e+00 1.0 6.30e+08 3.8 7.8e+04 4.1e+04 1.7e+02 15 52  7 66 16  77 52  7 80 18  7984
  repartition          2 1.0 3.7479e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
  Invert-Sort          2 1.0 4.2200e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
  Move A               2 1.0 2.3668e-03 1.5 0.00e+00 0.0 3.5e+02 1.4e+03 3.6e+01  0  0  0  0  3   0  0  0  0  4     0
  Move P               2 1.0 2.7559e-03 1.4 0.00e+00 0.0 5.1e+02 9.7e+01 3.6e+01  0  0  0  0  3   0  0  0  0  4     0
PCSetUp                5 1.0 3.2839e+00 1.0 6.53e+08 3.6 2.3e+05 1.4e+04 6.3e+02 18 55 20 68 58  87 55 22 83 66  7435
PCSetUpOnBlocks      129 1.0 2.1794e-03 1.3 2.96e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  8177
PCApply               17 1.0 3.4818e+00 1.0 9.60e+08 2.3 1.0e+06 3.8e+03 6.5e+02 19 95 89 81 60  92 95 97 99 68 12115
KSPGMRESOrthog        96 1.0 2.5954e-02 3.6 1.32e+07 1.1 0.0e+00 0.0e+00 9.6e+01  0  2  0  0  9   0  2  0  0 10 31841
KSPSetUp              18 1.0 7.0524e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
KSPSolve               1 1.0 3.6077e+00 1.0 9.76e+08 2.3 1.1e+06 3.8e+03 8.8e+02 19 97 91 82 81  96 97100 99 92 11984
SFSetGraph             4 1.0 1.5688e-04 3.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          30 1.0 2.1074e-03 2.9 0.00e+00 0.0 2.4e+04 3.8e+02 5.0e+00  0  0  2  0  0   0  0  2  0  1     0
SFBcastEnd            30 1.0 1.0002e-03 5.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
         PetscRandom     0              1          646     0.
           Index Set   227            234     22508856     0.
   IS L to G Mapping     3              3     12541056     0.
             Section    70             53        35616     0.
              Vector    15            141     10638768     0.
      Vector Scatter     2             15       437496     0.
              Matrix     0             52     12578516     0.
      Preconditioner     0             11        11020     0.
       Krylov Solver     0             15       151752     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

         PetscRandom     1              0            0     0.
           Index Set   102             88       152948     0.
   IS L to G Mapping     4              0            0     0.
              Vector   356            218      2199448     0.
      Vector Scatter    40             21        23056     0.
              Matrix   145             80     23656560     0.
      Matrix Coarsen     4              4         2544     0.
      Preconditioner    21             10         8944     0.
       Krylov Solver    21              6       123480     0.
Star Forest Bipartite Graph     4              4         3456     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 8.63075e-06
Average time for zero size MPI_Send(): 1.65775e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type gamg
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------
=================
 hypre 40 1
=================
Discretization: RT
MPI processes 1: solving... 
((1161600, 1161600), (1161600, 1161600))
	Solver time: 4.842733e+01
	Solver iterations: 12
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 1 processor, by jychang48 Wed Mar  2 17:34:27 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           6.507e+01      1.00000   6.507e+01
Objects:              2.470e+02      1.00000   2.470e+02
Flops:                1.711e+09      1.00000   1.711e+09  1.711e+09
Flops/sec:            2.630e+07      1.00000   2.630e+07  2.630e+07
MPI Messages:         0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Reductions:       0.000e+00      0.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.6646e+01  25.6%  0.0000e+00   0.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0% 
 1:             FEM: 4.8427e+01  74.4%  1.7111e+09 100.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

VecSet                 8 1.0 1.5751e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin        2 1.0 3.6759e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DMPlexInterp           1 1.0 2.1008e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  3  0  0  0  0  13  0  0  0  0     0
DMPlexStratify         4 1.0 5.1098e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   3  0  0  0  0     0
SFSetGraph             7 1.0 2.6133e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

VecMDot               12 1.0 6.1646e-02 1.0 1.81e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0 11  0  0  0   0 11  0  0  0  2939
VecNorm               13 1.0 1.5800e-02 1.0 3.02e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   0  2  0  0  0  1911
VecScale              26 1.0 1.6351e-02 1.0 2.52e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  1542
VecCopy                1 1.0 2.6531e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               104 1.0 1.5827e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                1 1.0 1.4691e-03 1.0 2.32e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1581
VecMAXPY              13 1.0 7.4229e-02 1.0 2.09e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0 12  0  0  0   0 12  0  0  0  2817
VecScatterBegin       58 1.0 6.3398e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          13 1.0 2.4897e-02 1.0 4.53e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  3  0  0  0   0  3  0  0  0  1820
MatMult               25 1.0 2.4851e-01 1.0 2.70e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0 16  0  0  0   1 16  0  0  0  1088
MatMultAdd            48 1.0 1.8471e-01 1.0 2.31e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0 13  0  0  0   0 13  0  0  0  1249
MatSolve              13 1.0 1.4378e-01 1.0 1.30e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  8  0  0  0   0  8  0  0  0   904
MatLUFactorNum         1 1.0 6.5479e-02 1.0 1.81e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0   276
MatILUFactorSym        1 1.0 4.7690e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             2 1.0 4.4951e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 8.9600e-03 1.0 5.34e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   596
MatAssemblyBegin      10 1.0 4.7684e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd        10 1.0 9.9671e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRow         768000 1.0 4.7885e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
MatGetRowIJ            2 1.0 2.8610e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        4 1.0 4.8168e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering         1 1.0 2.9769e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 1.0544e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
MatMatMult             1 1.0 1.2049e-01 1.0 1.33e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0   111
MatMatMultSym          1 1.0 8.2528e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatMatMultNum          1 1.0 3.7939e-02 1.0 1.33e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0   351
PCSetUp                4 1.0 2.7671e+01 1.0 3.68e+07 1.0 0.0e+00 0.0e+00 0.0e+00 43  2  0  0  0  57  2  0  0  0     1
PCSetUpOnBlocks       13 1.0 1.1621e-01 1.0 1.81e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0   156
PCApply               13 1.0 4.5220e+01 1.0 1.98e+08 1.0 0.0e+00 0.0e+00 0.0e+00 69 12  0  0  0  93 12  0  0  0     4
KSPGMRESOrthog        12 1.0 1.2641e-01 1.0 3.62e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0 21  0  0  0   0 21  0  0  0  2867
KSPSetUp               4 1.0 2.3469e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 4.6958e+01 1.0 8.85e+08 1.0 0.0e+00 0.0e+00 0.0e+00 72 52  0  0  0  97 52  0  0  0    19
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set    22             22     38639672     0.
             Section    26              8         5376     0.
              Vector    13             31    178264920     0.
      Vector Scatter     2              6         3984     0.
              Matrix     0              3    124219284     0.
      Preconditioner     0              5         5176     0.
       Krylov Solver     0              5        23264     0.
    Distributed Mesh    10              4        19256     0.
    GraphPartitioner     4              3         1836     0.
Star Forest Bipartite Graph    23             12         9696     0.
     Discrete System    10              4         3456     0.

--- Event Stage 1: FEM

           Index Set    19             12         9408     0.
   IS L to G Mapping     4              0            0     0.
              Vector    79             52     21737472     0.
      Vector Scatter     6              0            0     0.
              Matrix    10              2     37023836     0.
      Preconditioner     6              1         1016     0.
       Krylov Solver     6              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type hypre
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 hypre 40 1
=================
Discretization: RT
MPI processes 2: solving... 
((579051, 1161600), (579051, 1161600))
	Solver time: 3.476467e+01
	Solver iterations: 15
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 2 processors, by jychang48 Wed Mar  2 17:35:18 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           4.903e+01      1.00022   4.903e+01
Objects:              4.840e+02      1.02979   4.770e+02
Flops:                1.033e+09      1.00377   1.031e+09  2.063e+09
Flops/sec:            2.108e+07      1.00400   2.104e+07  4.207e+07
MPI Messages:         3.485e+02      1.24687   3.140e+02  6.280e+02
MPI Message Lengths:  4.050e+08      1.62105   1.043e+06  6.549e+08
MPI Reductions:       4.220e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4264e+01  29.1%  0.0000e+00   0.0%  5.020e+02  79.9%  9.935e+05       95.3%  1.250e+02  29.6% 
 1:             FEM: 3.4765e+01  70.9%  2.0627e+09 100.0%  1.260e+02  20.1%  4.932e+04        4.7%  2.960e+02  70.1% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 8.3311e-01 9.6 0.00e+00 0.0 1.2e+02 4.0e+00 4.4e+01  1  0 19  0 10   3  0 24  0 35     0
VecScatterBegin        2 1.0 1.6110e-03 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 1.9073e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 1.6245e+00 1.1 0.00e+00 0.0 9.2e+01 5.5e+05 2.1e+01  3  0 15  8  5  11  0 18  8 17     0
Mesh Migration         2 1.0 1.7902e+00 1.0 0.00e+00 0.0 3.7e+02 1.4e+06 5.4e+01  4  0 60 79 13  13  0 75 83 43     0
DMPlexInterp           1 1.0 2.0429e+0045337.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   7  0  0  0  0     0
DMPlexDistribute       1 1.0 2.2455e+00 1.1 0.00e+00 0.0 1.7e+02 1.9e+06 2.5e+01  4  0 26 48  6  15  0 33 50 20     0
DMPlexDistCones        2 1.0 3.6353e-01 1.0 0.00e+00 0.0 5.4e+01 3.2e+06 4.0e+00  1  0  9 27  1   3  0 11 28  3     0
DMPlexDistLabels       2 1.0 9.6565e-01 1.0 0.00e+00 0.0 2.4e+02 1.2e+06 2.2e+01  2  0 38 45  5   7  0 48 47 18     0
DMPlexDistribOL        1 1.0 1.1889e+00 1.0 0.00e+00 0.0 3.1e+02 9.6e+05 5.0e+01  2  0 49 45 12   8  0 61 48 40     0
DMPlexDistField        3 1.0 4.3184e-02 1.1 0.00e+00 0.0 6.2e+01 3.5e+05 1.2e+01  0  0 10  3  3   0  0 12  3 10     0
DMPlexDistData         2 1.0 8.3491e-0126.1 0.00e+00 0.0 5.4e+01 4.0e+05 6.0e+00  1  0  9  3  1   3  0 11  3  5     0
DMPlexStratify         6 1.5 7.6915e-01 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   4  0  0  0  0     0
SFSetGraph            51 1.0 4.2324e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   3  0  0  0  0     0
SFBcastBegin          95 1.0 9.3474e-01 3.1 0.00e+00 0.0 4.8e+02 1.2e+06 4.1e+01  1  0 77 92 10   4  0 96 96 33     0
SFBcastEnd            95 1.0 4.0109e-01 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 3.5613e-03 1.4 0.00e+00 0.0 1.1e+01 1.3e+06 3.0e+00  0  0  2  2  1   0  0  2  2  2     0
SFReduceEnd            4 1.0 5.1992e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 3.0994e-05 6.2 0.00e+00 0.0 1.0e+00 4.2e+04 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 9.9111e-04 6.3 0.00e+00 0.0 1.0e+00 4.2e+04 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 1.8351e-03108.4 0.00e+00 0.0 2.0e+00 4.0e+00 1.0e+00  0  0  0  0  0   0  0  2  0  0     0
VecMDot               15 1.0 4.5149e-02 1.0 1.40e+08 1.0 0.0e+00 0.0e+00 1.5e+01  0 14  0  0  4   0 14  0  0  5  6175
VecNorm               16 1.0 1.0752e-02 1.2 1.86e+07 1.0 0.0e+00 0.0e+00 1.6e+01  0  2  0  0  4   0  2  0  0  5  3457
VecScale              32 1.0 9.4597e-03 1.0 1.56e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   0  2  0  0  0  3280
VecCopy                1 1.0 1.1580e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               104 1.0 3.4921e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                1 1.0 6.0606e-04 1.0 1.17e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  3833
VecMAXPY              16 1.0 5.1952e-02 1.0 1.57e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0 15  0  0  0   0 15  0  0  0  6037
VecScatterBegin      146 1.0 3.5844e-02 1.0 0.00e+00 0.0 7.6e+01 3.4e+04 0.0e+00  0  0 12  0  0   0  0 60  8  0     0
VecScatterEnd        146 1.0 2.1336e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          16 1.0 1.6374e-02 1.1 2.80e+07 1.0 0.0e+00 0.0e+00 1.6e+01  0  3  0  0  4   0  3  0  0  5  3405
MatMult               31 1.0 1.5530e-01 1.0 1.69e+08 1.0 7.6e+01 3.4e+04 1.2e+02  0 16 12  0 28   0 16 60  8 41  2172
MatMultAdd            60 1.0 1.1559e-01 1.0 1.45e+08 1.0 6.0e+01 3.6e+04 0.0e+00  0 14 10  0  0   0 14 48  7  0  2494
MatSolve              16 1.0 9.8875e-02 1.0 8.00e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  8  0  0  0   0  8  0  0  0  1611
MatLUFactorNum         1 1.0 3.3598e-02 1.0 9.09e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0   538
MatILUFactorSym        1 1.0 2.3966e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             2 1.0 2.4728e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 3.5920e-03 1.0 2.67e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1485
MatAssemblyBegin      12 1.0 5.8002e-0312.5 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatAssemblyEnd        12 1.0 9.0917e-02 1.0 0.00e+00 0.0 1.6e+01 7.8e+03 4.8e+01  0  0  3  0 11   0  0 13  0 16     0
MatGetRow         384000 1.0 3.7148e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
MatGetRowIJ            3 1.0 3.0994e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        4 1.0 2.1708e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatGetOrdering         1 1.0 1.4248e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 7.6908e-01 1.0 0.00e+00 0.0 4.0e+00 6.7e+03 1.2e+01  2  0  1  0  3   2  0  3  0  4     0
MatMatMult             1 1.0 1.4067e-01 1.0 4.95e+06 1.0 8.0e+00 2.2e+04 1.6e+01  0  0  1  0  4   0  0  6  1  5    70
MatMatMultSym          1 1.0 1.2321e-01 1.0 0.00e+00 0.0 7.0e+00 1.8e+04 1.4e+01  0  0  1  0  3   0  0  6  0  5     0
MatMatMultNum          1 1.0 1.7438e-02 1.0 4.95e+06 1.0 1.0e+00 5.5e+04 2.0e+00  0  0  0  0  0   0  0  1  0  1   568
MatGetLocalMat         2 1.0 2.2079e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol          2 1.0 1.1523e-03 1.5 0.00e+00 0.0 4.0e+00 3.8e+04 0.0e+00  0  0  1  0  0   0  0  3  0  0     0
PCSetUp                4 1.0 2.2106e+01 1.0 1.67e+07 1.0 2.0e+01 4.7e+05 6.6e+01 45  2  3  1 16  64  2 16 31 22     2
PCSetUpOnBlocks       16 1.0 5.9039e-02 1.0 9.09e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0   306
PCApply               16 1.0 3.2480e+01 1.0 1.20e+08 1.0 1.6e+01 2.7e+04 4.0e+00 66 12  3  0  1  93 12 13  1  1     7
KSPGMRESOrthog        15 1.0 9.1311e-02 1.0 2.80e+08 1.0 0.0e+00 0.0e+00 1.5e+01  0 27  0  0  4   0 27  0  0  5  6106
KSPSetUp               4 1.0 9.7001e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 3.3756e+01 1.0 5.98e+08 1.0 9.6e+01 1.3e+05 2.2e+02 69 58 15  2 51  97 58 76 39 73    35
SFBcastBegin           1 1.0 1.8959e-0324.3 0.00e+00 0.0 6.0e+00 4.1e+04 1.0e+00  0  0  1  0  0   0  0  5  1  0     0
SFBcastEnd             1 1.0 3.1090e-04 4.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------
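To read the Mflop/s column above: per the legend, it is the sum of flops over all ranks divided by the maximum time, scaled to megaflops (10^-6). For the KSPSolve row, roughly 58% of the 2.06e+09 total flops (~1.2e+09) over 33.8 s gives about 35 Mflop/s, matching the last column.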

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set    79             79     49124340     0.
   IS L to G Mapping     3              3     23945692     0.
             Section    70             53        35616     0.
              Vector    15             45    140251432     0.
      Vector Scatter     2              7     13904896     0.
              Matrix     0              5     64871960     0.
      Preconditioner     0              5         5176     0.
       Krylov Solver     0              5        23264     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    31             24        73104     0.
   IS L to G Mapping     4              0            0     0.
              Vector   114             72     10946608     0.
      Vector Scatter    13              2         2192     0.
              Matrix    26              8     52067772     0.
      Preconditioner     6              1          896     0.
       Krylov Solver     6              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 6.19888e-07
Average time for MPI_Barrier(): 1.62125e-06
Average time for zero size MPI_Send(): 2.5034e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type hypre
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
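All of these entries carry the solver_ options prefix. A minimal petsc4py sketch (assuming petsc4py is available; the KSP object itself is not shown) of how the same prefixed options could be set programmatically instead of on the command line:

from petsc4py import PETSc

# populate the options database with the same 'solver_'-prefixed entries
# that appear in the option table above
opts = PETSc.Options('solver_')
opts['ksp_type'] = 'gmres'
opts['ksp_rtol'] = '1e-7'
opts['pc_type'] = 'fieldsplit'
opts['pc_fieldsplit_type'] = 'schur'
opts['pc_fieldsplit_schur_fact_type'] = 'upper'
opts['pc_fieldsplit_schur_precondition'] = 'selfp'
opts['fieldsplit_0_ksp_type'] = 'preonly'
opts['fieldsplit_0_pc_type'] = 'bjacobi'
opts['fieldsplit_1_ksp_type'] = 'preonly'
opts['fieldsplit_1_pc_type'] = 'hypre'

# a KSP created elsewhere would pick these up via
#   ksp.setOptionsPrefix('solver_'); ksp.setFromOptions()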
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 hypre 40 1
=================
Discretization: RT
MPI processes 4: solving... 
((288348, 1161600), (288348, 1161600))
	Solver time: 2.221880e+01
	Solver iterations: 15
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 4 processors, by jychang48 Wed Mar  2 17:35:54 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           3.523e+01      1.00003   3.523e+01
Objects:              4.920e+02      1.04237   4.775e+02
Flops:                5.295e+08      1.00702   5.270e+08  2.108e+09
Flops/sec:            1.503e+07      1.00704   1.496e+07  5.983e+07
MPI Messages:         7.315e+02      1.64938   5.530e+02  2.212e+03
MPI Message Lengths:  2.891e+08      2.21291   3.089e+05  6.833e+08
MPI Reductions:       4.220e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.3011e+01  36.9%  0.0000e+00   0.0%  1.654e+03  74.8%  2.935e+05       95.0%  1.250e+02  29.6% 
 1:             FEM: 2.2219e+01  63.1%  2.1080e+09 100.0%  5.580e+02  25.2%  1.541e+04        5.0%  2.960e+02  70.1% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 9.3237e-0117.3 0.00e+00 0.0 3.9e+02 4.0e+00 4.4e+01  2  0 18  0 10   5  0 24  0 35     0
VecScatterBegin        2 1.0 7.1883e-04 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 5.0068e-06 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 1.7474e+00 1.1 0.00e+00 0.0 3.8e+02 1.4e+05 2.1e+01  5  0 17  8  5  13  0 23  8 17     0
Mesh Migration         2 1.0 1.0405e+00 1.0 0.00e+00 0.0 1.1e+03 4.7e+05 5.4e+01  3  0 51 79 13   8  0 69 83 43     0
DMPlexInterp           1 1.0 2.0642e+0053115.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   4  0  0  0  0     0
DMPlexDistribute       1 1.0 2.0381e+00 1.1 0.00e+00 0.0 3.9e+02 8.1e+05 2.5e+01  6  0 18 46  6  15  0 23 49 20     0
DMPlexDistCones        2 1.0 2.3054e-01 1.0 0.00e+00 0.0 1.6e+02 1.1e+06 4.0e+00  1  0  7 26  1   2  0 10 28  3     0
DMPlexDistLabels       2 1.0 5.6307e-01 1.0 0.00e+00 0.0 7.2e+02 4.2e+05 2.2e+01  2  0 33 45  5   4  0 44 47 18     0
DMPlexDistribOL        1 1.0 7.6747e-01 1.0 0.00e+00 0.0 1.2e+03 2.8e+05 5.0e+01  2  0 52 46 12   6  0 70 49 40     0
DMPlexDistField        3 1.0 2.9449e-02 1.0 0.00e+00 0.0 2.0e+02 1.1e+05 1.2e+01  0  0  9  3  3   0  0 12  4 10     0
DMPlexDistData         2 1.0 9.2793e-0140.7 0.00e+00 0.0 2.2e+02 1.0e+05 6.0e+00  2  0 10  3  1   5  0 14  4  5     0
DMPlexStratify         6 1.5 6.5141e-01 4.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFSetGraph            51 1.0 2.4321e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFBcastBegin          95 1.0 9.8964e-01 4.5 0.00e+00 0.0 1.6e+03 4.0e+05 4.1e+01  2  0 71 92 10   6  0 95 96 33     0
SFBcastEnd            95 1.0 3.2814e-01 2.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 2.9824e-03 2.1 0.00e+00 0.0 4.9e+01 2.9e+05 3.0e+00  0  0  2  2  1   0  0  3  2  2     0
SFReduceEnd            4 1.0 6.0050e-03 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 3.6001e-05 9.4 0.00e+00 0.0 5.0e+00 1.7e+04 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 4.2892e-04 2.8 0.00e+00 0.0 5.0e+00 1.7e+04 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 1.8361e-0391.7 0.00e+00 0.0 1.0e+01 4.0e+00 1.0e+00  0  0  0  0  0   0  0  2  0  0     0
VecMDot               15 1.0 3.0009e-02 1.2 7.01e+07 1.0 0.0e+00 0.0e+00 1.5e+01  0 13  0  0  4   0 13  0  0  5  9290
VecNorm               16 1.0 6.5994e-03 1.3 9.35e+06 1.0 0.0e+00 0.0e+00 1.6e+01  0  2  0  0  4   0  2  0  0  5  5632
VecScale              32 1.0 4.8001e-03 1.0 7.81e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  6464
VecCopy                1 1.0 5.7697e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               104 1.0 1.5269e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                1 1.0 3.2783e-04 1.0 5.84e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  7087
VecMAXPY              16 1.0 2.9337e-02 1.0 7.89e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0 15  0  0  0   0 15  0  0  0 10691
VecScatterBegin      146 1.0 1.7044e-02 1.1 0.00e+00 0.0 3.8e+02 1.4e+04 0.0e+00  0  0 17  1  0   0  0 68 15  0     0
VecScatterEnd        146 1.0 1.6215e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          16 1.0 9.4516e-03 1.2 1.40e+07 1.0 0.0e+00 0.0e+00 1.6e+01  0  3  0  0  4   0  3  0  0  5  5899
MatMult               31 1.0 8.5318e-02 1.0 8.48e+07 1.0 3.8e+02 1.4e+04 1.2e+02  0 16 17  1 28   0 16 68 15 41  3953
MatMultAdd            60 1.0 6.3984e-02 1.0 7.25e+07 1.0 3.0e+02 1.4e+04 0.0e+00  0 14 14  1  0   0 14 54 13  0  4506
MatSolve              16 1.0 5.3797e-02 1.1 4.00e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  8  0  0  0   0  8  0  0  0  2948
MatLUFactorNum         1 1.0 1.7410e-02 1.0 4.57e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  1037
MatILUFactorSym        1 1.0 6.5880e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             2 1.0 1.2823e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 1.9329e-03 1.1 1.34e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2758
MatAssemblyBegin      12 1.0 1.0853e-0230.3 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatAssemblyEnd        12 1.0 4.9526e-02 1.0 0.00e+00 0.0 8.0e+01 3.1e+03 4.8e+01  0  0  4  0 11   0  0 14  1 16     0
MatGetRow         192000 1.0 3.7644e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
MatGetRowIJ            3 1.0 5.0068e-06 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        4 1.0 1.0573e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatGetOrdering         1 1.0 6.9499e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 7.6509e-01 1.0 0.00e+00 0.0 2.0e+01 2.7e+03 1.2e+01  2  0  1  0  3   3  0  4  0  4     0
MatMatMult             1 1.0 7.6802e-02 1.0 2.48e+06 1.0 4.0e+01 8.9e+03 1.6e+01  0  0  2  0  4   0  0  7  1  5   129
MatMatMultSym          1 1.0 6.7404e-02 1.1 0.00e+00 0.0 3.5e+01 7.0e+03 1.4e+01  0  0  2  0  3   0  0  6  1  5     0
MatMatMultNum          1 1.0 9.4151e-03 1.0 2.48e+06 1.0 5.0e+00 2.2e+04 2.0e+00  0  0  0  0  0   0  0  1  0  1  1051
MatGetLocalMat         2 1.0 1.1554e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol          2 1.0 4.0152e-03 5.9 0.00e+00 0.0 2.0e+01 1.5e+04 0.0e+00  0  0  1  0  0   0  0  4  1  0     0
PCSetUp                4 1.0 1.6053e+01 1.0 8.38e+06 1.0 7.6e+01 1.3e+05 6.6e+01 46  2  3  1 16  72  2 14 28 22     2
PCSetUpOnBlocks       16 1.0 2.4721e-02 1.0 4.57e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0   730
PCApply               16 1.0 2.0598e+01 1.0 6.00e+07 1.0 8.0e+01 1.1e+04 4.0e+00 58 11  4  0  1  93 11 14  3  1    12
KSPGMRESOrthog        15 1.0 5.6051e-02 1.1 1.40e+08 1.0 0.0e+00 0.0e+00 1.5e+01  0 26  0  0  4   0 26  0  0  5  9947
KSPSetUp               4 1.0 5.0209e-03 3.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 2.1633e+01 1.0 3.00e+08 1.0 4.6e+02 3.3e+04 2.2e+02 61 57 21  2 51  97 57 82 44 73    55
SFBcastBegin           1 1.0 1.9069e-0318.0 0.00e+00 0.0 3.0e+01 1.7e+04 1.0e+00  0  0  1  0  0   0  0  5  1  0     0
SFBcastEnd             1 1.0 2.2197e-04 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set    87             87     35923148     0.
   IS L to G Mapping     3              3     18881016     0.
             Section    70             53        35616     0.
              Vector    15             45     72396112     0.
      Vector Scatter     2              7      6928024     0.
              Matrix     0              5     32297372     0.
      Preconditioner     0              5         5176     0.
       Krylov Solver     0              5        23264     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    31             24        76832     0.
   IS L to G Mapping     4              0            0     0.
              Vector   114             72      5529184     0.
      Vector Scatter    13              2         2192     0.
              Matrix    26              8     26014168     0.
      Preconditioner     6              1          896     0.
       Krylov Solver     6              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 1.38283e-06
Average time for zero size MPI_Send(): 1.72853e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type hypre
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 hypre 40 1
=================
Discretization: RT
MPI processes 8: solving... 
((143102, 1161600), (143102, 1161600))
	Solver time: 1.735006e+01
	Solver iterations: 15
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 8 processors, by jychang48 Wed Mar  2 17:36:26 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           3.019e+01      1.00018   3.019e+01
Objects:              5.080e+02      1.06723   4.808e+02
Flops:                2.751e+08      1.02555   2.702e+08  2.162e+09
Flops/sec:            9.112e+06      1.02543   8.951e+06  7.161e+07
MPI Messages:         1.488e+03      1.92936   9.162e+02  7.330e+03
MPI Message Lengths:  2.303e+08      3.35599   9.783e+04  7.171e+08
MPI Reductions:       4.220e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.2840e+01  42.5%  0.0000e+00   0.0%  5.296e+03  72.3%  9.268e+04       94.7%  1.250e+02  29.6% 
 1:             FEM: 1.7350e+01  57.5%  2.1619e+09 100.0%  2.034e+03  27.7%  5.144e+03        5.3%  2.960e+02  70.1% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 9.9128e-0138.8 0.00e+00 0.0 1.2e+03 4.0e+00 4.4e+01  3  0 17  0 10   7  0 23  0 35     0
VecScatterBegin        2 1.0 2.8896e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 4.2915e-06 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 1.8551e+00 1.1 0.00e+00 0.0 1.4e+03 4.1e+04 2.1e+01  6  0 19  8  5  14  0 26  8 17     0
Mesh Migration         2 1.0 6.9572e-01 1.0 0.00e+00 0.0 3.4e+03 1.6e+05 5.4e+01  2  0 47 78 13   5  0 65 82 43     0
DMPlexInterp           1 1.0 2.1258e+0061070.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
DMPlexDistribute       1 1.0 2.0482e+00 1.1 0.00e+00 0.0 1.0e+03 3.2e+05 2.5e+01  7  0 14 44  6  16  0 19 47 20     0
DMPlexDistCones        2 1.0 1.6333e-01 1.0 0.00e+00 0.0 4.9e+02 3.8e+05 4.0e+00  1  0  7 26  1   1  0  9 27  3     0
DMPlexDistLabels       2 1.0 3.9636e-01 1.0 0.00e+00 0.0 2.1e+03 1.5e+05 2.2e+01  1  0 29 44  5   3  0 40 47 18     0
DMPlexDistribOL        1 1.0 5.2188e-01 1.0 0.00e+00 0.0 3.9e+03 8.8e+04 5.0e+01  2  0 53 47 12   4  0 73 50 40     0
DMPlexDistField        3 1.0 2.2661e-02 1.2 0.00e+00 0.0 6.4e+02 3.8e+04 1.2e+01  0  0  9  3  3   0  0 12  4 10     0
DMPlexDistData         2 1.0 9.7732e-0152.9 0.00e+00 0.0 8.5e+02 3.0e+04 6.0e+00  3  0 12  4  1   6  0 16  4  5     0
DMPlexStratify         6 1.5 6.0215e-01 8.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
SFSetGraph            51 1.0 1.4261e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
SFBcastBegin          95 1.0 1.0311e+00 6.1 0.00e+00 0.0 5.0e+03 1.3e+05 4.1e+01  3  0 69 91 10   7  0 95 96 33     0
SFBcastEnd            95 1.0 3.0112e-01 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 3.5210e-03 3.5 0.00e+00 0.0 1.8e+02 8.2e+04 3.0e+00  0  0  2  2  1   0  0  3  2  2     0
SFReduceEnd            4 1.0 6.8002e-03 3.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 6.3896e-0510.7 0.00e+00 0.0 1.9e+01 7.0e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 2.3389e-04 2.4 0.00e+00 0.0 1.9e+01 7.0e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 1.9801e-0368.1 0.00e+00 0.0 3.8e+01 4.0e+00 1.0e+00  0  0  1  0  0   0  0  2  0  0     0
VecMDot               15 1.0 2.0872e-02 1.2 3.52e+07 1.0 0.0e+00 0.0e+00 1.5e+01  0 13  0  0  4   0 13  0  0  5 13357
VecNorm               16 1.0 4.1130e-03 1.4 4.69e+06 1.0 0.0e+00 0.0e+00 1.6e+01  0  2  0  0  4   0  2  0  0  5  9038
VecScale              32 1.0 2.5165e-03 1.1 3.92e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 12329
VecCopy                1 1.0 3.9196e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               104 1.0 5.7840e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                1 1.0 2.0885e-04 1.1 2.93e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 11124
VecMAXPY              16 1.0 2.0167e-02 1.1 3.95e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0 15  0  0  0   0 15  0  0  0 15552
VecScatterBegin      146 1.0 9.3625e-03 1.1 0.00e+00 0.0 1.4e+03 5.6e+03 0.0e+00  0  0 20  1  0   0  0 71 22  0     0
VecScatterEnd        146 1.0 1.7571e-03 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          16 1.0 5.5876e-03 1.2 7.03e+06 1.0 0.0e+00 0.0e+00 1.6e+01  0  3  0  0  4   0  3  0  0  5  9979
MatMult               31 1.0 5.3636e-02 1.0 4.25e+07 1.0 1.4e+03 5.6e+03 1.2e+02  0 16 20  1 28   0 16 71 22 41  6288
MatMultAdd            60 1.0 4.1250e-02 1.0 3.64e+07 1.0 1.1e+03 6.0e+03 0.0e+00  0 13 16  1  0   0 13 56 18  0  6989
MatSolve              16 1.0 2.9535e-02 1.1 2.00e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  7  0  0  0   0  7  0  0  0  5344
MatLUFactorNum         1 1.0 9.0630e-03 1.0 2.31e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  1993
MatILUFactorSym        1 1.0 3.5720e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             2 1.0 7.2582e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 1.1430e-03 1.1 6.70e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  4666
MatAssemblyBegin      12 1.0 1.4740e-0230.9 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatAssemblyEnd        12 1.0 2.9862e-02 1.1 0.00e+00 0.0 3.0e+02 1.3e+03 4.8e+01  0  0  4  0 11   0  0 15  1 16     0
MatGetRow          96000 1.0 3.8036e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
MatGetRowIJ            3 1.0 7.3910e-06 2.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        4 1.0 5.1844e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatGetOrdering         1 1.0 3.9101e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 7.6591e-01 1.0 0.00e+00 0.0 7.6e+01 1.1e+03 1.2e+01  3  0  1  0  3   4  0  4  0  4     0
MatMatMult             1 1.0 4.1730e-02 1.0 1.24e+06 1.0 1.5e+02 3.7e+03 1.6e+01  0  0  2  0  4   0  0  7  1  5   237
MatMatMultSym          1 1.0 3.6349e-02 1.1 0.00e+00 0.0 1.3e+02 2.9e+03 1.4e+01  0  0  2  0  3   0  0  7  1  5     0
MatMatMultNum          1 1.0 5.3639e-03 1.0 1.24e+06 1.0 1.9e+01 9.1e+03 2.0e+00  0  0  0  0  0   0  0  1  0  1  1846
MatGetLocalMat         2 1.0 6.2211e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol          2 1.0 2.3620e-03 3.6 0.00e+00 0.0 7.6e+01 6.3e+03 0.0e+00  0  0  1  0  0   0  0  4  1  0     0
PCSetUp                4 1.0 1.3550e+01 1.0 4.22e+06 1.0 2.6e+02 3.8e+04 6.6e+01 45  2  4  1 16  78  2 13 26 22     2
PCSetUpOnBlocks       16 1.0 1.3087e-02 1.1 2.31e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  1380
PCApply               16 1.0 1.6049e+01 1.0 3.01e+07 1.0 3.0e+02 4.5e+03 4.0e+00 53 11  4  0  1  92 11 15  4  1    15
KSPGMRESOrthog        15 1.0 3.8775e-02 1.1 7.03e+07 1.0 0.0e+00 0.0e+00 1.5e+01  0 26  0  0  4   0 26  0  0  5 14380
KSPSetUp               4 1.0 2.0099e-03 3.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 1.6968e+01 1.0 1.50e+08 1.0 1.7e+03 1.1e+04 2.2e+02 56 55 23  3 51  98 55 84 48 73    70
SFBcastBegin           1 1.0 2.1381e-0310.9 0.00e+00 0.0 1.1e+02 7.1e+03 1.0e+00  0  0  2  0  0   0  0  6  2  0     0
SFBcastEnd             1 1.0 1.9217e-04 4.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set   103            103     29164164     0.
   IS L to G Mapping     3              3     16320748     0.
             Section    70             53        35616     0.
              Vector    15             45     38486192     0.
      Vector Scatter     2              7      3442120     0.
              Matrix     0              5     16044008     0.
      Preconditioner     0              5         5176     0.
       Krylov Solver     0              5        23264     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    31             24        72488     0.
   IS L to G Mapping     4              0            0     0.
              Vector   114             72      2819096     0.
      Vector Scatter    13              2         2192     0.
              Matrix    26              8     12996772     0.
      Preconditioner     6              1          896     0.
       Krylov Solver     6              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 2.57492e-06
Average time for zero size MPI_Send(): 1.63913e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type hypre
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 hypre 40 1
=================
Discretization: RT
MPI processes 16: solving... 
((70996, 1161600), (70996, 1161600))
	Solver time: 1.058687e+01
	Solver iterations: 15
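(Taking the solver times reported so far in this hypre series, 34.8 s on 2 ranks, 22.2 s on 4, 17.4 s on 8 and 10.6 s on 16, the strong-scaling efficiency relative to the 2-rank run, T_2 / ((p/2) * T_p), works out to roughly 78%, 50% and 41%.)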
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 16 processors, by jychang48 Wed Mar  2 17:36:54 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           2.499e+01      1.00048   2.498e+01
Objects:              5.300e+02      1.11345   4.844e+02
Flops:                1.457e+08      1.07257   1.405e+08  2.248e+09
Flops/sec:            5.832e+06      1.07236   5.625e+06  8.999e+07
MPI Messages:         2.200e+03      2.65801   1.275e+03  2.041e+04
MPI Message Lengths:  2.006e+08      5.65486   3.779e+04  7.712e+08
MPI Reductions:       4.220e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4394e+01  57.6%  0.0000e+00   0.0%  1.470e+04  72.0%  3.566e+04       94.4%  1.250e+02  29.6% 
 1:             FEM: 1.0587e+01  42.4%  2.2481e+09 100.0%  5.706e+03  28.0%  2.128e+03        5.6%  2.960e+02  70.1% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.1248e+0011.8 0.00e+00 0.0 3.5e+03 4.0e+00 4.4e+01  4  0 17  0 10   7  0 24  0 35     0
VecScatterBegin        2 1.0 9.4891e-05 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 5.0068e-06 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 1.9227e+00 1.1 0.00e+00 0.0 4.4e+03 1.4e+04 2.1e+01  8  0 22  8  5  13  0 30  9 17     0
Mesh Migration         2 1.0 5.1665e-01 1.0 0.00e+00 0.0 8.9e+03 6.6e+04 5.4e+01  2  0 44 77 13   4  0 61 81 43     0
DMPlexInterp           1 1.0 2.1289e+0057607.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
DMPlexDistribute       1 1.0 2.1160e+00 1.1 0.00e+00 0.0 2.9e+03 1.1e+05 2.5e+01  8  0 14 42  6  15  0 20 44 20     0
DMPlexDistCones        2 1.0 1.2104e-01 1.1 0.00e+00 0.0 1.3e+03 1.5e+05 4.0e+00  0  0  6 26  1   1  0  9 27  3     0
DMPlexDistLabels       2 1.0 3.1131e-01 1.0 0.00e+00 0.0 5.5e+03 6.1e+04 2.2e+01  1  0 27 43  5   2  0 38 46 18     0
DMPlexDistribOL        1 1.0 3.4250e-01 1.0 0.00e+00 0.0 1.1e+04 3.5e+04 5.0e+01  1  0 52 49 12   2  0 72 52 40     0
DMPlexDistField        3 1.0 2.6409e-02 1.7 0.00e+00 0.0 1.7e+03 1.5e+04 1.2e+01  0  0  8  3  3   0  0 12  4 10     0
DMPlexDistData         2 1.0 9.8067e-0166.0 0.00e+00 0.0 2.9e+03 9.6e+03 6.0e+00  4  0 14  4  1   6  0 20  4  5     0
DMPlexStratify         6 1.5 5.7033e-0116.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 8.5479e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
SFBcastBegin          95 1.0 1.1410e+00 4.2 0.00e+00 0.0 1.4e+04 5.0e+04 4.1e+01  4  0 69 91 10   7  0 95 97 33     0
SFBcastEnd            95 1.0 2.9869e-01 5.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 7.5610e-0311.6 0.00e+00 0.0 5.0e+02 3.0e+04 3.0e+00  0  0  2  2  1   0  0  3  2  2     0
SFReduceEnd            4 1.0 8.0378e-03 3.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 4.8876e-0517.1 0.00e+00 0.0 5.4e+01 3.9e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 2.0194e-04 2.8 0.00e+00 0.0 5.4e+01 3.9e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 1.9481e-0357.1 0.00e+00 0.0 1.1e+02 4.0e+00 1.0e+00  0  0  1  0  0   0  0  2  0  0     0
VecMDot               15 1.0 1.0908e-02 1.3 1.77e+07 1.0 0.0e+00 0.0e+00 1.5e+01  0 12  0  0  4   0 12  0  0  5 25557
VecNorm               16 1.0 2.3785e-03 1.4 2.36e+06 1.0 0.0e+00 0.0e+00 1.6e+01  0  2  0  0  4   0  2  0  0  5 15628
VecScale              32 1.0 1.2298e-03 1.1 1.97e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 25230
VecCopy                1 1.0 2.2292e-04 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               104 1.0 2.8174e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                1 1.0 1.0705e-04 1.2 1.47e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 21702
VecMAXPY              16 1.0 7.5836e-03 1.1 1.99e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0 14  0  0  0   0 14  0  0  0 41357
VecScatterBegin      146 1.0 5.2605e-03 1.2 0.00e+00 0.0 4.1e+03 3.1e+03 0.0e+00  0  0 20  2  0   0  0 72 30  0     0
VecScatterEnd        146 1.0 1.6868e-03 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          16 1.0 3.2125e-03 1.3 3.53e+06 1.0 0.0e+00 0.0e+00 1.6e+01  0  2  0  0  4   0  2  0  0  5 17356
MatMult               31 1.0 2.9088e-02 1.1 2.15e+07 1.0 4.1e+03 3.1e+03 1.2e+02  0 15 20  2 28   0 15 72 30 41 11595
MatMultAdd            60 1.0 2.0502e-02 1.1 1.84e+07 1.0 3.2e+03 3.3e+03 0.0e+00  0 13 16  1  0   0 13 57 25  0 14061
MatSolve              16 1.0 1.4782e-02 1.2 1.00e+07 1.1 0.0e+00 0.0e+00 0.0e+00  0  7  0  0  0   0  7  0  0  0 10598
MatLUFactorNum         1 1.0 4.4949e-03 1.1 1.17e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  4012
MatILUFactorSym        1 1.0 1.7769e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             2 1.0 3.6948e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 1.8299e-03 4.0 3.37e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2915
MatAssemblyBegin      12 1.0 6.6197e-0314.5 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatAssemblyEnd        12 1.0 1.8342e-02 1.2 0.00e+00 0.0 8.6e+02 7.2e+02 4.8e+01  0  0  4  0 11   0  0 15  1 16     0
MatGetRow          48000 1.0 1.9010e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
MatGetRowIJ            3 1.0 7.3910e-06 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        4 1.0 2.5330e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatGetOrdering         1 1.0 1.8501e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 3.8416e-01 1.0 0.00e+00 0.0 2.2e+02 6.2e+02 1.2e+01  2  0  1  0  3   4  0  4  0  4     0
MatMatMult             1 1.0 2.2542e-02 1.0 6.22e+05 1.0 4.3e+02 2.1e+03 1.6e+01  0  0  2  0  4   0  0  8  2  5   439
MatMatMultSym          1 1.0 1.9671e-02 1.0 0.00e+00 0.0 3.8e+02 1.6e+03 1.4e+01  0  0  2  0  3   0  0  7  1  5     0
MatMatMultNum          1 1.0 2.8560e-03 1.0 6.22e+05 1.0 5.4e+01 5.1e+03 2.0e+00  0  0  0  0  0   0  0  1  1  1  3467
MatGetLocalMat         2 1.0 3.1629e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol          2 1.0 1.3680e-03 3.8 0.00e+00 0.0 2.2e+02 3.5e+03 0.0e+00  0  0  1  0  0   0  0  4  2  0     0
PCSetUp                4 1.0 8.6555e+00 1.0 2.13e+06 1.0 7.1e+02 1.4e+04 6.6e+01 35  1  3  1 16  82  1 12 24 22     4
PCSetUpOnBlocks       16 1.0 6.4955e-03 1.1 1.17e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  2776
PCApply               16 1.0 9.8637e+00 1.0 1.51e+07 1.1 8.6e+02 2.5e+03 4.0e+00 39 11  4  0  1  93 11 15  5  1    24
KSPGMRESOrthog        15 1.0 1.7411e-02 1.1 3.53e+07 1.0 0.0e+00 0.0e+00 1.5e+01  0 25  0  0  4   0 25  0  0  5 32024
KSPSetUp               4 1.0 1.0309e-03 3.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 1.0327e+01 1.0 7.56e+07 1.0 4.8e+03 4.8e+03 2.2e+02 41 53 24  3 51  98 53 84 53 73   115
SFBcastBegin           1 1.0 2.0330e-0312.4 0.00e+00 0.0 3.3e+02 3.9e+03 1.0e+00  0  0  2  0  0   0  0  6  3  0     0
SFBcastEnd             1 1.0 6.5589e-0418.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set   125            125     25601172     0.
   IS L to G Mapping     3              3     15014432     0.
             Section    70             53        35616     0.
              Vector    15             45     21647864     0.
      Vector Scatter     2              7      1711576     0.
              Matrix     0              5      7939056     0.
      Preconditioner     0              5         5176     0.
       Krylov Solver     0              5        23264     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    31             24        62680     0.
   IS L to G Mapping     4              0            0     0.
              Vector   114             72      1468496     0.
      Vector Scatter    13              2         2192     0.
              Matrix    26              8      6483848     0.
      Preconditioner     6              1          896     0.
       Krylov Solver     6              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 5.19753e-06
Average time for zero size MPI_Send(): 1.74344e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type hypre
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
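All of the entries above carry the "solver_" options prefix. As a minimal sketch (not the attached Darcy_FE.py, just an illustration using petsc4py's Options class), the same table could be filled in programmatically instead of on the command line:

from petsc4py import PETSc

opts = PETSc.Options("solver_")   # same prefix as the -solver_* entries above
opts["ksp_type"] = "gmres"
opts["ksp_rtol"] = 1e-7
opts["pc_type"] = "fieldsplit"
opts["pc_fieldsplit_type"] = "schur"
opts["pc_fieldsplit_schur_fact_type"] = "upper"
opts["pc_fieldsplit_schur_precondition"] = "selfp"
opts["fieldsplit_0_ksp_type"] = "preonly"
opts["fieldsplit_0_pc_type"] = "bjacobi"
opts["fieldsplit_1_ksp_type"] = "preonly"
opts["fieldsplit_1_pc_type"] = "hypre"   # "ml" or "gamg" in the other runs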
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 hypre 40 1
=================
Discretization: RT
MPI processes 24: solving... 
((47407, 1161600), (47407, 1161600))
	Solver time: 7.910459e+00
	Solver iterations: 15

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 24 processors, by jychang48 Wed Mar  2 17:37:20 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           2.242e+01      1.00049   2.241e+01
Objects:              5.440e+02      1.14286   4.867e+02
Flops:                1.007e+08      1.09540   9.605e+07  2.305e+09
Flops/sec:            4.496e+06      1.09573   4.286e+06  1.029e+08
MPI Messages:         2.382e+03      2.69553   1.502e+03  3.605e+04
MPI Message Lengths:  1.887e+08      7.68458   2.238e+04  8.069e+08
MPI Reductions:       4.220e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4502e+01  64.7%  0.0000e+00   0.0%  2.618e+04  72.6%  2.108e+04       94.2%  1.250e+02  29.6% 
 1:             FEM: 7.9106e+00  35.3%  2.3052e+09 100.0%  9.866e+03  27.4%  1.304e+03        5.8%  2.960e+02  70.1% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
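As a quick sanity check on that last column (the prefactor is effectively 1e-6, i.e. flop/s to Mflop/s, despite the "10e-6" in the legend), take the KSPSolve row of this 24-process run from the table below; a rough back-of-the-envelope using the rounded numbers as printed:

flops_per_proc_max = 5.05e7   # KSPSolve max flops per process (Max/Min ratio 1.0)
nprocs = 24
time_max = 7.694              # KSPSolve max time over processes, in seconds
print(1e-6 * flops_per_proc_max * nprocs / time_max)   # ~157, vs. the 155 reported (rounding)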
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.1278e+00 9.6 0.00e+00 0.0 6.2e+03 4.0e+00 4.4e+01  5  0 17  0 10   7  0 24  0 35     0
VecScatterBegin        2 1.0 5.1975e-05 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 6.1989e-06 3.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 1.9991e+00 1.1 0.00e+00 0.0 8.7e+03 7.7e+03 2.1e+01  9  0 24  8  5  14  0 33  9 17     0
Mesh Migration         2 1.0 4.5798e-01 1.0 0.00e+00 0.0 1.5e+04 4.0e+04 5.4e+01  2  0 42 76 13   3  0 58 81 43     0
DMPlexInterp           1 1.0 2.1214e+0062660.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
DMPlexDistribute       1 1.0 2.2150e+00 1.1 0.00e+00 0.0 5.7e+03 5.7e+04 2.5e+01 10  0 16 40  6  15  0 22 43 20     0
DMPlexDistCones        2 1.0 1.0382e-01 1.1 0.00e+00 0.0 2.2e+03 9.3e+04 4.0e+00  0  0  6 25  1   1  0  8 27  3     0
DMPlexDistLabels       2 1.0 2.8698e-01 1.0 0.00e+00 0.0 9.3e+03 3.7e+04 2.2e+01  1  0 26 43  5   2  0 35 46 18     0
DMPlexDistribOL        1 1.0 2.6150e-01 1.0 0.00e+00 0.0 1.8e+04 2.2e+04 5.0e+01  1  0 51 50 12   2  0 70 53 40     0
DMPlexDistField        3 1.0 2.8048e-02 1.9 0.00e+00 0.0 3.0e+03 9.4e+03 1.2e+01  0  0  8  3  3   0  0 11  4 10     0
DMPlexDistData         2 1.0 1.0004e+0029.4 0.00e+00 0.0 6.0e+03 5.0e+03 6.0e+00  4  0 17  4  1   6  0 23  4  5     0
DMPlexStratify         6 1.5 5.5870e-0122.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 6.0882e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          95 1.0 1.1378e+00 4.0 0.00e+00 0.0 2.5e+04 2.9e+04 4.1e+01  5  0 69 91 10   7  0 96 97 33     0
SFBcastEnd            95 1.0 2.9645e-01 5.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 7.4639e-0312.3 0.00e+00 0.0 8.7e+02 1.8e+04 3.0e+00  0  0  2  2  1   0  0  3  2  2     0
SFReduceEnd            4 1.0 8.2810e-03 4.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 5.1975e-0527.2 0.00e+00 0.0 9.4e+01 2.8e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 1.4901e-04 3.2 0.00e+00 0.0 9.4e+01 2.8e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 1.2400e-0322.1 0.00e+00 0.0 1.9e+02 4.0e+00 1.0e+00  0  0  1  0  0   0  0  2  0  0     0
VecMDot               15 1.0 7.2665e-03 1.2 1.18e+07 1.0 0.0e+00 0.0e+00 1.5e+01  0 12  0  0  4   0 12  0  0  5 38365
VecNorm               16 1.0 1.5225e-03 1.3 1.58e+06 1.0 0.0e+00 0.0e+00 1.6e+01  0  2  0  0  4   0  2  0  0  5 24414
VecScale              32 1.0 8.1635e-04 1.1 1.32e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 38007
VecCopy                1 1.0 1.5211e-04 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               104 1.0 1.9157e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                1 1.0 9.1076e-05 1.5 9.87e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 25508
VecMAXPY              16 1.0 4.0472e-03 1.1 1.33e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0 14  0  0  0   0 14  0  0  0 77494
VecScatterBegin      146 1.0 3.9029e-03 1.3 0.00e+00 0.0 7.1e+03 2.2e+03 0.0e+00  0  0 20  2  0   0  0 72 34  0     0
VecScatterEnd        146 1.0 1.7533e-03 3.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          16 1.0 2.1329e-03 1.2 2.37e+06 1.0 0.0e+00 0.0e+00 1.6e+01  0  2  0  0  4   0  2  0  0  5 26141
MatMult               31 1.0 2.0300e-02 1.1 1.43e+07 1.0 7.1e+03 2.2e+03 1.2e+02  0 15 20  2 28   0 15 72 34 41 16614
MatMultAdd            60 1.0 1.4215e-02 1.1 1.22e+07 1.0 5.6e+03 2.3e+03 0.0e+00  0 13 16  2  0   0 13 57 28  0 20280
MatSolve              16 1.0 1.0126e-02 1.1 6.68e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  7  0  0  0   0  7  0  0  0 15400
MatLUFactorNum         1 1.0 2.9531e-03 1.1 7.77e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  6088
MatILUFactorSym        1 1.0 1.2059e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             2 1.0 2.4192e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 2.8942e-0310.6 2.25e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1842
MatAssemblyBegin      12 1.0 3.4149e-0312.4 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatAssemblyEnd        12 1.0 1.3027e-02 1.2 0.00e+00 0.0 1.5e+03 5.2e+02 4.8e+01  0  0  4  0 11   0  0 15  2 16     0
MatGetRow          32000 1.0 1.2811e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
MatGetRowIJ            3 1.0 9.0599e-06 2.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        4 1.0 1.8091e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatGetOrdering         1 1.0 1.3399e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 2.5847e-01 1.0 0.00e+00 0.0 3.7e+02 4.4e+02 1.2e+01  1  0  1  0  3   3  0  4  0  4     0
MatMatMult             1 1.0 1.5638e-02 1.0 4.15e+05 1.0 7.4e+02 1.5e+03 1.6e+01  0  0  2  0  4   0  0  8  2  5   633
MatMatMultSym          1 1.0 1.3800e-02 1.0 0.00e+00 0.0 6.5e+02 1.2e+03 1.4e+01  0  0  2  0  3   0  0  7  2  5     0
MatMatMultNum          1 1.0 1.8721e-03 1.0 4.15e+05 1.0 9.3e+01 3.6e+03 2.0e+00  0  0  0  0  0   0  0  1  1  1  5287
MatGetLocalMat         2 1.0 2.0990e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol          2 1.0 8.5115e-04 3.2 0.00e+00 0.0 3.7e+02 2.5e+03 0.0e+00  0  0  1  0  0   0  0  4  2  0     0
PCSetUp                4 1.0 6.6768e+00 1.0 1.42e+06 1.1 1.2e+03 8.7e+03 6.6e+01 30  1  3  1 16  84  1 12 22 22     5
PCSetUpOnBlocks       16 1.0 4.3478e-03 1.1 7.77e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  4135
PCApply               16 1.0 7.3809e+00 1.0 1.01e+07 1.1 1.5e+03 1.8e+03 4.0e+00 33 10  4  0  1  93 10 15  6  1    32
KSPGMRESOrthog        15 1.0 1.0740e-02 1.2 2.37e+07 1.0 0.0e+00 0.0e+00 1.5e+01  0 24  0  0  4   0 24  0  0  5 51917
KSPSetUp               4 1.0 7.2980e-04 4.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 7.6940e+00 1.0 5.05e+07 1.0 8.3e+03 3.2e+03 2.2e+02 34 52 23  3 51  97 52 85 56 73   155
SFBcastBegin           1 1.0 1.3211e-03 7.3 0.00e+00 0.0 5.8e+02 2.8e+03 1.0e+00  0  0  2  0  0   0  0  6  3  0     0
SFBcastEnd             1 1.0 4.4298e-0422.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set   139            139     24064612     0.
   IS L to G Mapping     3              3     14448024     0.
             Section    70             53        35616     0.
              Vector    15             45     16129832     0.
      Vector Scatter     2              7      1145440     0.
              Matrix     0              5      5296216     0.
      Preconditioner     0              5         5176     0.
       Krylov Solver     0              5        23264     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    31             24        48256     0.
   IS L to G Mapping     4              0            0     0.
              Vector   114             72      1020040     0.
      Vector Scatter    13              2         2192     0.
              Matrix    26              8      4329832     0.
      Preconditioner     6              1          896     0.
       Krylov Solver     6              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 9.01222e-06
Average time for zero size MPI_Send(): 1.41064e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type hypre
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries


=================
 hypre 40 1
=================
Discretization: RT
MPI processes 32: solving... 
((35155, 1161600), (35155, 1161600))
	Solver time: 6.555492e+00
	Solver iterations: 15

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 32 processors, by jychang48 Wed Mar  2 17:37:44 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           2.096e+01      1.00042   2.095e+01
Objects:              5.700e+02      1.19748   4.884e+02
Flops:                7.783e+07      1.11880   7.364e+07  2.357e+09
Flops/sec:            3.714e+06      1.11873   3.514e+06  1.125e+08
MPI Messages:         3.398e+03      3.61735   1.672e+03  5.349e+04
MPI Message Lengths:  1.852e+08      9.83329   1.568e+04  8.389e+08
MPI Reductions:       4.220e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4399e+01  68.7%  0.0000e+00   0.0%  3.925e+04  73.4%  1.474e+04       94.0%  1.250e+02  29.6% 
 1:             FEM: 6.5554e+00  31.3%  2.3565e+09 100.0%  1.424e+04  26.6%  9.392e+02        6.0%  2.960e+02  70.1% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
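The "Main Stage"/"FEM" breakdown in these tables comes from user-defined logging stages; a minimal petsc4py sketch of the PetscLogStagePush()/PetscLogStagePop() pattern mentioned above (the stage name is only illustrative):

from petsc4py import PETSc

fem_stage = PETSc.Log.Stage("FEM")   # create or look up a named logging stage
fem_stage.push()                     # events from here on are charged to "FEM"
# ... assembly and the KSP solve would go here ...
fem_stage.pop()                      # later events fall back to the enclosing stage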
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.1524e+0013.3 0.00e+00 0.0 9.3e+03 4.0e+00 4.4e+01  5  0 17  0 10   7  0 24  0 35     0
VecScatterBegin        2 1.0 4.9114e-05 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 1.0014e-05 5.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 2.0450e+00 1.1 0.00e+00 0.0 1.4e+04 5.1e+03 2.1e+01 10  0 26  8  5  14  0 36  9 17     0
Mesh Migration         2 1.0 4.3641e-01 1.0 0.00e+00 0.0 2.2e+04 2.9e+04 5.4e+01  2  0 41 75 13   3  0 56 80 43     0
DMPlexInterp           1 1.0 2.1204e+0066370.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DMPlexDistribute       1 1.0 2.2712e+00 1.1 0.00e+00 0.0 9.4e+03 3.5e+04 2.5e+01 11  0 18 39  6  16  0 24 41 20     0
DMPlexDistCones        2 1.0 1.0140e-01 1.2 0.00e+00 0.0 3.2e+03 6.6e+04 4.0e+00  0  0  6 25  1   1  0  8 27  3     0
DMPlexDistLabels       2 1.0 2.7727e-01 1.0 0.00e+00 0.0 1.3e+04 2.7e+04 2.2e+01  1  0 25 43  5   2  0 34 45 18     0
DMPlexDistribOL        1 1.0 2.2697e-01 1.0 0.00e+00 0.0 2.7e+04 1.6e+04 5.0e+01  1  0 50 51 12   2  0 68 54 40     0
DMPlexDistField        3 1.0 2.7891e-02 2.0 0.00e+00 0.0 4.3e+03 6.8e+03 1.2e+01  0  0  8  3  3   0  0 11  4 10     0
DMPlexDistData         2 1.0 9.9838e-0173.6 0.00e+00 0.0 1.0e+04 3.2e+03 6.0e+00  4  0 19  4  1   7  0 26  4  5     0
DMPlexStratify         6 1.5 5.4583e-0129.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 5.1974e-02 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          95 1.0 1.1583e+00 4.5 0.00e+00 0.0 3.8e+04 2.0e+04 4.1e+01  5  0 70 91 10   7  0 96 97 33     0
SFBcastEnd            95 1.0 3.0140e-01 5.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 8.4107e-0318.3 0.00e+00 0.0 1.3e+03 1.3e+04 3.0e+00  0  0  2  2  1   0  0  3  2  2     0
SFReduceEnd            4 1.0 8.6839e-03 5.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 4.0054e-0512.9 0.00e+00 0.0 1.4e+02 2.2e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 1.4400e-04 2.1 0.00e+00 0.0 1.4e+02 2.2e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 1.5190e-0325.4 0.00e+00 0.0 2.9e+02 4.0e+00 1.0e+00  0  0  1  0  0   0  0  2  0  0     0
VecMDot               15 1.0 7.0019e-03 1.3 8.90e+06 1.1 0.0e+00 0.0e+00 1.5e+01  0 12  0  0  4   0 12  0  0  5 39815
VecNorm               16 1.0 1.7715e-03 1.5 1.19e+06 1.1 0.0e+00 0.0e+00 1.6e+01  0  2  0  0  4   0  2  0  0  5 20983
VecScale              32 1.0 6.4635e-04 1.2 9.95e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 48004
VecCopy                1 1.0 1.8501e-04 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               104 1.0 1.4009e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                1 1.0 7.7963e-05 1.6 7.42e+04 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 29799
VecMAXPY              16 1.0 2.8646e-03 1.2 1.00e+07 1.1 0.0e+00 0.0e+00 0.0e+00  0 13  0  0  0   0 13  0  0  0 109485
VecScatterBegin      146 1.0 3.1700e-03 1.3 0.00e+00 0.0 1.0e+04 1.8e+03 0.0e+00  0  0 19  2  0   0  0 72 37  0     0
VecScatterEnd        146 1.0 1.4212e-03 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          16 1.0 2.2540e-03 1.3 1.78e+06 1.1 0.0e+00 0.0e+00 1.6e+01  0  2  0  0  4   0  2  0  0  5 24737
MatMult               31 1.0 1.6098e-02 1.1 1.08e+07 1.1 1.0e+04 1.8e+03 1.2e+02  0 14 19  2 28   0 14 72 37 41 20950
MatMultAdd            60 1.0 1.1012e-02 1.2 9.23e+06 1.1 8.1e+03 1.9e+03 0.0e+00  0 12 15  2  0   0 12 57 31  0 26178
MatSolve              16 1.0 7.5953e-03 1.2 5.01e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  7  0  0  0   0  7  0  0  0 20444
MatLUFactorNum         1 1.0 2.1951e-03 1.1 5.87e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  8182
MatILUFactorSym        1 1.0 9.3198e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             2 1.0 1.9350e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 1.6952e-03 8.8 1.69e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  3146
MatAssemblyBegin      12 1.0 4.2899e-0319.1 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatAssemblyEnd        12 1.0 1.1258e-02 1.2 0.00e+00 0.0 2.2e+03 4.2e+02 4.8e+01  0  0  4  0 11   0  0 15  2 16     0
MatGetRow          24000 1.0 9.5220e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
MatGetRowIJ            3 1.0 9.0599e-06 3.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        4 1.0 1.2531e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatGetOrdering         1 1.0 1.2612e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 1.9306e-01 1.0 0.00e+00 0.0 5.4e+02 3.6e+02 1.2e+01  1  0  1  0  3   3  0  4  0  4     0
MatMatMult             1 1.0 1.2406e-02 1.0 3.11e+05 1.0 1.1e+03 1.2e+03 1.6e+01  0  0  2  0  4   0  0  8  3  5   798
MatMatMultSym          1 1.0 1.1072e-02 1.0 0.00e+00 0.0 9.4e+02 9.4e+02 1.4e+01  0  0  2  0  3   0  0  7  2  5     0
MatMatMultNum          1 1.0 1.3359e-03 1.0 3.11e+05 1.0 1.4e+02 2.9e+03 2.0e+00  0  0  0  0  0   0  0  1  1  1  7412
MatGetLocalMat         2 1.0 1.5900e-03 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol          2 1.0 7.4410e-04 2.9 0.00e+00 0.0 5.4e+02 2.0e+03 0.0e+00  0  0  1  0  0   0  0  4  2  0     0
PCSetUp                4 1.0 5.6156e+00 1.0 1.07e+06 1.1 1.7e+03 6.2e+03 6.6e+01 27  1  3  1 16  86  1 12 21 22     6
PCSetUpOnBlocks       16 1.0 3.2737e-03 1.1 5.87e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  5486
PCApply               16 1.0 6.1178e+00 1.0 7.57e+06 1.1 2.2e+03 1.4e+03 4.0e+00 29 10  4  0  1  93 10 15  6  1    38
KSPGMRESOrthog        15 1.0 9.2309e-03 1.2 1.78e+07 1.1 0.0e+00 0.0e+00 1.5e+01  0 24  0  0  4   0 24  0  0  5 60402
KSPSetUp               4 1.0 5.3430e-04 3.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 6.3550e+00 1.0 3.80e+07 1.1 1.2e+04 2.4e+03 2.2e+02 30 50 23  3 51  97 50 85 58 73   187
SFBcastBegin           1 1.0 1.6100e-03 9.0 0.00e+00 0.0 8.6e+02 2.2e+03 1.0e+00  0  0  2  0  0   0  0  6  4  0     0
SFBcastEnd             1 1.0 2.7430e-03162.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set   165            165     23651716     0.
   IS L to G Mapping     3              3     14326164     0.
             Section    70             53        35616     0.
              Vector    15             45     13270832     0.
      Vector Scatter     2              7       851392     0.
              Matrix     0              5      3930504     0.
      Preconditioner     0              5         5176     0.
       Krylov Solver     0              5        23264     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    31             24        50224     0.
   IS L to G Mapping     4              0            0     0.
              Vector   114             72       792920     0.
      Vector Scatter    13              2         2192     0.
              Matrix    26              8      3242072     0.
      Preconditioner     6              1          896     0.
       Krylov Solver     6              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 7.58171e-06
Average time for zero size MPI_Send(): 1.68383e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type hypre
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries

=================
 hypre 40 1
=================
Discretization: RT
MPI processes 40: solving... 
((27890, 1161600), (27890, 1161600))
	Solver time: 5.808753e+00
	Solver iterations: 15

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 40 processors, by jychang48 Wed Mar  2 17:38:08 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           2.025e+01      1.00036   2.025e+01
Objects:              5.920e+02      1.24895   4.896e+02
Flops:                6.409e+07      1.14442   5.989e+07  2.396e+09
Flops/sec:            3.165e+06      1.14437   2.958e+06  1.183e+08
MPI Messages:         4.088e+03      4.83560   1.815e+03  7.260e+04
MPI Message Lengths:  1.828e+08     12.01197   1.189e+04  8.633e+08
MPI Reductions:       4.220e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops
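A worked instance of this convention for the problem size in these runs (1,161,600 total unknowns, as printed in the "((..., 1161600), ...)" lines): one real VecAXPY over the full mixed vector counts about 2N flops, which is roughly what the VecAXPY row below adds up to over 40 processes, assuming that AXPY acts on the full-space vector:

N = 1161600           # total unknowns in the mixed system
print(2 * N)          # ~2.32e6 flops for one real VecAXPY of length N
print(5.95e4 * 40)    # ~2.38e6: per-process max from the VecAXPY row times 40 ranks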

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4441e+01  71.3%  0.0000e+00   0.0%  5.371e+04  74.0%  1.117e+04       93.9%  1.250e+02  29.6% 
 1:             FEM: 5.8089e+00  28.7%  2.3956e+09 100.0%  1.888e+04  26.0%  7.254e+02        6.1%  2.960e+02  70.1% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.1936e+0013.3 0.00e+00 0.0 1.3e+04 4.0e+00 4.4e+01  5  0 18  0 10   8  0 24  0 35     0
VecScatterBegin        2 1.0 3.5048e-05 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 8.1062e-06 4.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 2.1145e+00 1.1 0.00e+00 0.0 2.0e+04 3.6e+03 2.1e+01 10  0 28  9  5  15  0 38  9 17     0
Mesh Migration         2 1.0 4.1762e-01 1.0 0.00e+00 0.0 2.9e+04 2.2e+04 5.4e+01  2  0 40 75 13   3  0 54 80 43     0
DMPlexInterp           1 1.0 2.1110e+0060232.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DMPlexDistribute       1 1.0 2.3499e+00 1.1 0.00e+00 0.0 1.4e+04 2.3e+04 2.5e+01 12  0 19 38  6  16  0 26 40 20     0
DMPlexDistCones        2 1.0 9.5561e-02 1.2 0.00e+00 0.0 4.3e+03 5.1e+04 4.0e+00  0  0  6 25  1   1  0  8 27  3     0
DMPlexDistLabels       2 1.0 2.6967e-01 1.0 0.00e+00 0.0 1.7e+04 2.1e+04 2.2e+01  1  0 24 42  5   2  0 32 45 18     0
DMPlexDistribOL        1 1.0 2.0232e-01 1.0 0.00e+00 0.0 3.6e+04 1.2e+04 5.0e+01  1  0 49 51 12   1  0 66 55 40     0
DMPlexDistField        3 1.0 3.1459e-02 2.2 0.00e+00 0.0 5.7e+03 5.2e+03 1.2e+01  0  0  8  3  3   0  0 11  4 10     0
DMPlexDistData         2 1.0 1.0449e+0078.5 0.00e+00 0.0 1.5e+04 2.2e+03 6.0e+00  5  0 21  4  1   7  0 28  4  5     0
DMPlexStratify         6 1.5 5.4727e-0136.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 4.2525e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          95 1.0 1.1982e+00 4.5 0.00e+00 0.0 5.1e+04 1.5e+04 4.1e+01  5  0 71 91 10   8  0 96 97 33     0
SFBcastEnd            95 1.0 3.0828e-01 6.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 8.9126e-0319.8 0.00e+00 0.0 1.7e+03 9.4e+03 3.0e+00  0  0  2  2  1   0  0  3  2  2     0
SFReduceEnd            4 1.0 9.0048e-03 6.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 4.0054e-0514.0 0.00e+00 0.0 1.9e+02 1.8e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 1.1182e-04 2.3 0.00e+00 0.0 1.9e+02 1.8e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 1.0710e-0319.5 0.00e+00 0.0 3.8e+02 4.0e+00 1.0e+00  0  0  1  0  0   0  0  2  0  0     0
VecMDot               15 1.0 4.8821e-03 1.3 7.14e+06 1.1 0.0e+00 0.0e+00 1.5e+01  0 12  0  0  4   0 12  0  0  5 57102
VecNorm               16 1.0 1.3859e-03 1.6 9.52e+05 1.1 0.0e+00 0.0e+00 1.6e+01  0  2  0  0  4   0  2  0  0  5 26820
VecScale              32 1.0 5.1856e-04 1.2 7.98e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 59833
VecCopy                1 1.0 9.8944e-05 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               104 1.0 1.1306e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                1 1.0 5.8174e-05 1.6 5.95e+04 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 39935
VecMAXPY              16 1.0 2.0962e-03 1.2 8.03e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0 13  0  0  0   0 13  0  0  0 149621
VecScatterBegin      146 1.0 2.9225e-03 1.6 0.00e+00 0.0 1.4e+04 1.5e+03 0.0e+00  0  0 19  2  0   0  0 72 39  0     0
VecScatterEnd        146 1.0 1.6057e-03 3.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          16 1.0 1.7741e-03 1.4 1.43e+06 1.1 0.0e+00 0.0e+00 1.6e+01  0  2  0  0  4   0  2  0  0  5 31429
MatMult               31 1.0 1.4111e-02 1.2 8.66e+06 1.1 1.4e+04 1.5e+03 1.2e+02  0 14 19  2 28   0 14 72 39 41 23901
MatMultAdd            60 1.0 9.0714e-03 1.2 7.40e+06 1.1 1.1e+04 1.6e+03 0.0e+00  0 12 15  2  0   0 12 57 32  0 31779
MatSolve              16 1.0 6.0453e-03 1.2 4.01e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  6  0  0  0   0  6  0  0  0 25604
MatLUFactorNum         1 1.0 1.7362e-03 1.1 4.73e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 10319
MatILUFactorSym        1 1.0 7.0596e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             2 1.0 1.5569e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 1.7138e-0311.6 1.35e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  3112
MatAssemblyBegin      12 1.0 4.5278e-0321.5 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatAssemblyEnd        12 1.0 9.7792e-03 1.2 0.00e+00 0.0 2.9e+03 3.5e+02 4.8e+01  0  0  4  0 11   0  0 15  2 16     0
MatGetRow          19200 1.0 7.6635e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
MatGetRowIJ            3 1.0 9.5367e-06 3.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        4 1.0 1.1559e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatGetOrdering         1 1.0 8.6069e-05 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 1.5613e-01 1.0 0.00e+00 0.0 7.2e+02 3.0e+02 1.2e+01  1  0  1  0  3   3  0  4  0  4     0
MatMatMult             1 1.0 1.0730e-02 1.0 2.49e+05 1.0 1.4e+03 9.9e+02 1.6e+01  0  0  2  0  4   0  0  8  3  5   923
MatMatMultSym          1 1.0 9.6390e-03 1.0 0.00e+00 0.0 1.3e+03 7.8e+02 1.4e+01  0  0  2  0  3   0  0  7  2  5     0
MatMatMultNum          1 1.0 1.0788e-03 1.1 2.49e+05 1.0 1.8e+02 2.4e+03 2.0e+00  0  0  0  0  0   0  0  1  1  1  9176
MatGetLocalMat         2 1.0 1.2472e-03 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol          2 1.0 6.6495e-04 3.2 0.00e+00 0.0 7.2e+02 1.7e+03 0.0e+00  0  0  1  0  0   0  0  4  2  0     0
PCSetUp                4 1.0 5.0245e+00 1.0 8.56e+05 1.1 2.3e+03 4.7e+03 6.6e+01 25  1  3  1 16  86  1 12 21 22     7
PCSetUpOnBlocks       16 1.0 2.5797e-03 1.1 4.73e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  6945
PCApply               16 1.0 5.4329e+00 1.0 6.07e+06 1.1 2.9e+03 1.2e+03 4.0e+00 27 10  4  0  1  94 10 15  6  1    43
KSPGMRESOrthog        15 1.0 6.5861e-03 1.2 1.43e+07 1.1 0.0e+00 0.0e+00 1.5e+01  0 23  0  0  4   0 23  0  0  5 84658
KSPSetUp               4 1.0 4.7898e-04 3.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 5.6257e+00 1.0 3.05e+07 1.1 1.6e+04 2.0e+03 2.2e+02 28 50 22  4 51  97 50 85 60 73   211
SFBcastBegin           1 1.0 1.1621e-03 6.0 0.00e+00 0.0 1.2e+03 1.8e+03 1.0e+00  0  0  2  0  0   0  0  6  4  0     0
SFBcastEnd             1 1.0 6.1488e-0436.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set   187            187     23331236     0.
   IS L to G Mapping     3              3     14094724     0.
             Section    70             53        35616     0.
              Vector    15             45     11572992     0.
      Vector Scatter     2              7       677032     0.
              Matrix     0              5      3124972     0.
      Preconditioner     0              5         5176     0.
       Krylov Solver     0              5        23264     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    31             24        48720     0.
   IS L to G Mapping     4              0            0     0.
              Vector   114             72       656808     0.
      Vector Scatter    13              2         2192     0.
              Matrix    26              8      2592528     0.
      Preconditioner     6              1          896     0.
       Krylov Solver     6              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 6.91414e-07
Average time for MPI_Barrier(): 9.63211e-06
Average time for zero size MPI_Send(): 1.42455e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type hypre
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------
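
(Aside on reproducing these runs: the prefixed entries in the "#PETSc Option Table entries" block of each log above can also be set programmatically. Below is a minimal sketch, assuming petsc4py is available; it is only an illustration of the options database, not the mechanism the Firedrake script actually used.)

    # Minimal sketch (assumption: petsc4py installed): populate the PETSc
    # options database with the same prefixed entries shown in the option
    # table above. Each assignment is equivalent to passing -solver_... on
    # the command line.
    from petsc4py import PETSc

    opts = PETSc.Options()
    for key, val in {
        "solver_ksp_type": "gmres",
        "solver_ksp_rtol": "1e-7",
        "solver_pc_type": "fieldsplit",
        "solver_pc_fieldsplit_type": "schur",
        "solver_pc_fieldsplit_schur_fact_type": "upper",
        "solver_pc_fieldsplit_schur_precondition": "selfp",
        "solver_fieldsplit_0_ksp_type": "preonly",
        "solver_fieldsplit_0_pc_type": "bjacobi",
        "solver_fieldsplit_1_ksp_type": "preonly",
        "solver_fieldsplit_1_pc_type": "hypre",
    }.items():
        opts[key] = val   # stored in the global options database
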

=================
 hypre 40 1
=================
Discretization: RT
MPI processes 48: solving... 
((23365, 1161600), (23365, 1161600))
	Solver time: 5.547698e+00
	Solver iterations: 16
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 48 processors, by jychang48 Wed Mar  2 17:38:33 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           2.058e+01      1.00033   2.058e+01
Objects:              6.040e+02      1.26360   4.931e+02
Flops:                5.665e+07      1.15192   5.290e+07  2.539e+09
Flops/sec:            2.753e+06      1.15212   2.571e+06  1.234e+08
MPI Messages:         3.949e+03      4.19214   1.888e+03  9.063e+04
MPI Message Lengths:  1.799e+08     13.98669   9.756e+03  8.842e+08
MPI Reductions:       4.320e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.5032e+01  73.0%  0.0000e+00   0.0%  6.669e+04  73.6%  9.138e+03       93.7%  1.250e+02  28.9% 
 1:             FEM: 5.5477e+00  27.0%  2.5394e+09 100.0%  2.394e+04  26.4%  6.183e+02        6.3%  3.060e+02  70.8% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.3304e+0082.7 0.00e+00 0.0 1.6e+04 4.0e+00 4.4e+01  6  0 18  0 10   8  0 24  0 35     0
VecScatterBegin        2 1.0 2.3127e-05 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 8.8215e-06 3.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 2.1502e+00 1.1 0.00e+00 0.0 2.7e+04 2.8e+03 2.1e+01 10  0 30  9  5  14  0 40  9 17     0
Mesh Migration         2 1.0 4.0710e-01 1.0 0.00e+00 0.0 3.4e+04 1.9e+04 5.4e+01  2  0 38 75 12   3  0 51 80 43     0
DMPlexInterp           1 1.0 2.1277e+0062846.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DMPlexDistribute       1 1.0 2.4059e+00 1.1 0.00e+00 0.0 1.9e+04 1.7e+04 2.5e+01 12  0 21 37  6  16  0 29 40 20     0
DMPlexDistCones        2 1.0 9.4815e-02 1.2 0.00e+00 0.0 5.1e+03 4.3e+04 4.0e+00  0  0  6 25  1   1  0  8 26  3     0
DMPlexDistLabels       2 1.0 2.6481e-01 1.0 0.00e+00 0.0 2.1e+04 1.8e+04 2.2e+01  1  0 23 42  5   2  0 31 45 18     0
DMPlexDistribOL        1 1.0 1.7179e-01 1.1 0.00e+00 0.0 4.2e+04 1.1e+04 5.0e+01  1  0 47 52 12   1  0 64 55 40     0
DMPlexDistField        3 1.0 3.1440e-02 2.3 0.00e+00 0.0 6.8e+03 4.5e+03 1.2e+01  0  0  8  3  3   0  0 10  4 10     0
DMPlexDistData         2 1.0 1.0401e+0067.8 0.00e+00 0.0 2.1e+04 1.7e+03 6.0e+00  5  0 23  4  1   7  0 31  4  5     0
DMPlexStratify         6 1.5 5.3913e-0141.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 3.6406e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          95 1.0 1.3376e+00 6.7 0.00e+00 0.0 6.4e+04 1.3e+04 4.1e+01  6  0 71 91  9   8  0 96 97 33     0
SFBcastEnd            95 1.0 3.0287e-01 6.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 9.4006e-0323.4 0.00e+00 0.0 2.0e+03 8.2e+03 3.0e+00  0  0  2  2  1   0  0  3  2  2     0
SFReduceEnd            4 1.0 9.3119e-03 7.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 3.3855e-0515.8 0.00e+00 0.0 2.2e+02 1.7e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 1.2398e-04 2.5 0.00e+00 0.0 2.2e+02 1.7e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 1.3649e-0317.7 0.00e+00 0.0 4.6e+02 4.0e+00 1.0e+00  0  0  1  0  0   0  0  2  0  0     0
VecMDot               16 1.0 4.8552e-03 1.3 6.76e+06 1.1 0.0e+00 0.0e+00 1.6e+01  0 12  0  0  4   0 12  0  0  5 65075
VecNorm               17 1.0 1.2991e-03 1.5 8.45e+05 1.1 0.0e+00 0.0e+00 1.7e+01  0  2  0  0  4   0  2  0  0  6 30400
VecScale              34 1.0 4.4942e-04 1.2 7.09e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 73353
VecCopy                1 1.0 1.1802e-04 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               110 1.0 9.8443e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                1 1.0 7.2956e-05 2.3 4.97e+04 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 31844
VecMAXPY              17 1.0 1.9226e-03 1.2 7.56e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0 14  0  0  0   0 14  0  0  0 183671
VecScatterBegin      155 1.0 2.6774e-03 1.6 0.00e+00 0.0 1.8e+04 1.3e+03 0.0e+00  0  0 19  3  0   0  0 74 42  0     0
VecScatterEnd        155 1.0 1.3506e-03 3.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          17 1.0 1.6842e-03 1.3 1.27e+06 1.1 0.0e+00 0.0e+00 1.7e+01  0  2  0  0  4   0  2  0  0  6 35175
MatMult               33 1.0 1.3046e-02 1.2 7.76e+06 1.1 1.8e+04 1.3e+03 1.3e+02  0 14 19  3 30   0 14 74 42 42 27561
MatMultAdd            64 1.0 8.1303e-03 1.2 6.63e+06 1.1 1.4e+04 1.4e+03 0.0e+00  0 12 15  2  0   0 12 58 35  0 37821
MatSolve              17 1.0 5.3048e-03 1.1 3.55e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  6  0  0  0   0  6  0  0  0 30925
MatLUFactorNum         1 1.0 1.4539e-03 1.1 3.97e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 12306
MatILUFactorSym        1 1.0 5.9080e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             2 1.0 1.2629e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 1.9281e-0315.4 1.13e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2766
MatAssemblyBegin      12 1.0 3.2587e-0318.4 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatAssemblyEnd        12 1.0 8.6644e-03 1.2 0.00e+00 0.0 3.5e+03 3.1e+02 4.8e+01  0  0  4  0 11   0  0 15  2 16     0
MatGetRow          16000 1.0 6.3950e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
MatGetRowIJ            3 1.0 1.0729e-05 3.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        4 1.0 9.5296e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatGetOrdering         1 1.0 7.7963e-05 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 1.3025e-01 1.0 0.00e+00 0.0 8.6e+02 2.7e+02 1.2e+01  1  0  1  0  3   2  0  4  0  4     0
MatMatMult             1 1.0 9.7330e-03 1.0 2.08e+05 1.0 1.7e+03 8.9e+02 1.6e+01  0  0  2  0  4   0  0  7  3  5  1017
MatMatMultSym          1 1.0 8.7850e-03 1.0 0.00e+00 0.0 1.5e+03 7.0e+02 1.4e+01  0  0  2  0  3   0  0  6  2  5     0
MatMatMultNum          1 1.0 9.3794e-04 1.0 2.08e+05 1.0 2.2e+02 2.2e+03 2.0e+00  0  0  0  0  0   0  0  1  1  1 10554
MatGetLocalMat         2 1.0 1.0867e-03 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol          2 1.0 6.5875e-04 3.9 0.00e+00 0.0 8.6e+02 1.5e+03 0.0e+00  0  0  1  0  0   0  0  4  2  0     0
PCSetUp                4 1.0 4.8043e+00 1.0 7.17e+05 1.1 2.8e+03 4.0e+03 6.6e+01 23  1  3  1 15  87  1 12 20 22     7
PCSetUpOnBlocks       17 1.0 2.2023e-03 1.1 3.97e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  8124
PCApply               17 1.0 5.2019e+00 1.0 5.36e+06 1.1 3.7e+03 1.1e+03 4.0e+00 25 10  4  0  1  94 10 15  7  1    48
KSPGMRESOrthog        16 1.0 6.3984e-03 1.2 1.35e+07 1.1 0.0e+00 0.0e+00 1.6e+01  0 25  0  0  4   0 25  0  0  5 98759
KSPSetUp               4 1.0 3.8409e-04 3.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 5.3660e+00 1.0 2.80e+07 1.1 2.0e+04 1.7e+03 2.3e+02 26 51 23  4 53  97 51 85 62 74   242
SFBcastBegin           1 1.0 1.4529e-03 8.4 0.00e+00 0.0 1.4e+03 1.7e+03 1.0e+00  0  0  2  0  0   0  0  6  4  0     0
SFBcastEnd             1 1.0 7.1597e-0450.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set   195            195     22892240     0.
   IS L to G Mapping     3              3     14076928     0.
             Section    70             53        35616     0.
              Vector    15             45     10511976     0.
      Vector Scatter     2              7       568432     0.
              Matrix     0              5      2614408     0.
      Preconditioner     0              5         5176     0.
       Krylov Solver     0              5        23264     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    31             24        42440     0.
   IS L to G Mapping     4              0            0     0.
              Vector   118             76       575296     0.
      Vector Scatter    13              2         2192     0.
              Matrix    26              8      2166428     0.
      Preconditioner     6              1          896     0.
       Krylov Solver     6              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 8.96454e-06
Average time for zero size MPI_Send(): 1.35601e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type hypre
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 hypre 40 1
=================
Discretization: RT
MPI processes 56: solving... 
((20104, 1161600), (20104, 1161600))
	Solver time: 5.245087e+00
	Solver iterations: 16
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 56 processors, by jychang48 Wed Mar  2 17:38:56 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           1.992e+01      1.00032   1.991e+01
Objects:              6.220e+02      1.29583   4.939e+02
Flops:                4.935e+07      1.17708   4.599e+07  2.575e+09
Flops/sec:            2.478e+06      1.17720   2.310e+06  1.293e+08
MPI Messages:         4.414e+03      3.94944   2.001e+03  1.120e+05
MPI Message Lengths:  1.783e+08     16.37358   8.092e+03  9.066e+08
MPI Reductions:       4.320e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4666e+01  73.7%  0.0000e+00   0.0%  8.318e+04  74.3%  7.573e+03       93.6%  1.250e+02  28.9% 
 1:             FEM: 5.2451e+00  26.3%  2.5753e+09 100.0%  2.884e+04  25.7%  5.196e+02        6.4%  3.060e+02  70.8% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.2130e+0012.1 0.00e+00 0.0 2.0e+04 4.0e+00 4.4e+01  6  0 18  0 10   8  0 24  0 35     0
VecScatterBegin        2 1.0 2.5034e-05 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 1.0014e-05 5.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 2.1797e+00 1.1 0.00e+00 0.0 3.5e+04 2.2e+03 2.1e+01 11  0 31  9  5  15  0 42  9 17     0
Mesh Migration         2 1.0 3.9948e-01 1.0 0.00e+00 0.0 4.1e+04 1.6e+04 5.4e+01  2  0 37 74 12   3  0 50 79 43     0
DMPlexInterp           1 1.0 2.1081e+0065986.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DMPlexDistribute       1 1.0 2.4434e+00 1.1 0.00e+00 0.0 2.6e+04 1.3e+04 2.5e+01 12  0 23 36  6  17  0 31 39 20     0
DMPlexDistCones        2 1.0 9.4779e-02 1.2 0.00e+00 0.0 6.2e+03 3.6e+04 4.0e+00  0  0  5 25  1   1  0  7 26  3     0
DMPlexDistLabels       2 1.0 2.6069e-01 1.0 0.00e+00 0.0 2.5e+04 1.5e+04 2.2e+01  1  0 22 42  5   2  0 30 45 18     0
DMPlexDistribOL        1 1.0 1.5735e-01 1.1 0.00e+00 0.0 5.1e+04 9.2e+03 5.0e+01  1  0 46 52 12   1  0 62 56 40     0
DMPlexDistField        3 1.0 3.0454e-02 2.4 0.00e+00 0.0 8.3e+03 3.8e+03 1.2e+01  0  0  7  3  3   0  0 10  4 10     0
DMPlexDistData         2 1.0 1.0585e+0051.5 0.00e+00 0.0 2.8e+04 1.3e+03 6.0e+00  5  0 25  4  1   7  0 33  4  5     0
DMPlexStratify         6 1.5 5.3799e-0149.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 3.2726e-02 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          95 1.0 1.2201e+00 4.0 0.00e+00 0.0 8.0e+04 1.0e+04 4.1e+01  6  0 71 91  9   8  0 96 97 33     0
SFBcastEnd            95 1.0 2.9608e-01 8.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 9.1550e-0324.3 0.00e+00 0.0 2.5e+03 6.8e+03 3.0e+00  0  0  2  2  1   0  0  3  2  2     0
SFReduceEnd            4 1.0 1.0157e-02 8.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 3.2902e-0517.2 0.00e+00 0.0 2.7e+02 1.5e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 1.2207e-04 2.7 0.00e+00 0.0 2.7e+02 1.5e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 1.4939e-0332.5 0.00e+00 0.0 5.6e+02 4.0e+00 1.0e+00  0  0  0  0  0   0  0  2  0  0     0
VecMDot               16 1.0 4.8904e-03 1.5 5.80e+06 1.1 0.0e+00 0.0e+00 1.6e+01  0 12  0  0  4   0 12  0  0  5 64605
VecNorm               17 1.0 1.1060e-03 1.5 7.25e+05 1.1 0.0e+00 0.0e+00 1.7e+01  0  2  0  0  4   0  2  0  0  6 35708
VecScale              34 1.0 4.6110e-04 1.4 6.08e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 71495
VecCopy                1 1.0 7.5102e-05 2.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               110 1.0 8.4519e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                1 1.0 4.8161e-05 1.8 4.26e+04 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 48239
VecMAXPY              17 1.0 1.5872e-03 1.2 6.48e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0 14  0  0  0   0 14  0  0  0 222491
VecScatterBegin      155 1.0 2.4736e-03 1.6 0.00e+00 0.0 2.1e+04 1.2e+03 0.0e+00  0  0 19  3  0   0  0 74 43  0     0
VecScatterEnd        155 1.0 1.1752e-03 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          17 1.0 1.4255e-03 1.4 1.09e+06 1.1 0.0e+00 0.0e+00 1.7e+01  0  2  0  0  4   0  2  0  0  6 41558
MatMult               33 1.0 1.2118e-02 1.2 6.64e+06 1.1 2.1e+04 1.2e+03 1.3e+02  0 14 19  3 30   0 14 74 43 42 29670
MatMultAdd            64 1.0 7.1990e-03 1.3 5.67e+06 1.1 1.7e+04 1.3e+03 0.0e+00  0 12 15  2  0   0 12 58 36  0 42713
MatSolve              17 1.0 4.6377e-03 1.1 3.05e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  6  0  0  0   0  6  0  0  0 35279
MatLUFactorNum         1 1.0 1.2581e-03 1.1 3.40e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 14188
MatILUFactorSym        1 1.0 5.0306e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             2 1.0 1.0850e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 1.8859e-0317.4 9.68e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2827
MatAssemblyBegin      12 1.0 3.0339e-0318.7 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatAssemblyEnd        12 1.0 8.4910e-03 1.4 0.00e+00 0.0 4.2e+03 2.8e+02 4.8e+01  0  0  4  0 11   0  0 15  2 16     0
MatGetRow          13716 1.0 5.4852e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
MatGetRowIJ            3 1.0 9.2983e-06 3.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        4 1.0 8.1325e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatGetOrdering         1 1.0 6.8188e-05 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 1.1231e-01 1.0 0.00e+00 0.0 1.0e+03 2.4e+02 1.2e+01  1  0  1  0  3   2  0  4  0  4     0
MatMatMult             1 1.0 8.4488e-03 1.0 1.78e+05 1.0 2.1e+03 8.0e+02 1.6e+01  0  0  2  0  4   0  0  7  3  5  1171
MatMatMultSym          1 1.0 7.6680e-03 1.1 0.00e+00 0.0 1.8e+03 6.3e+02 1.4e+01  0  0  2  0  3   0  0  6  2  5     0
MatMatMultNum          1 1.0 7.8893e-04 1.0 1.78e+05 1.0 2.6e+02 2.0e+03 2.0e+00  0  0  0  0  0   0  0  1  1  1 12545
MatGetLocalMat         2 1.0 9.2196e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol          2 1.0 5.2214e-04 3.5 0.00e+00 0.0 1.0e+03 1.4e+03 0.0e+00  0  0  1  0  0   0  0  4  2  0     0
PCSetUp                4 1.0 4.5953e+00 1.0 6.15e+05 1.1 3.4e+03 3.3e+03 6.6e+01 23  1  3  1 15  88  1 12 19 22     7
PCSetUpOnBlocks       17 1.0 1.9102e-03 1.1 3.40e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  9345
PCApply               17 1.0 4.9382e+00 1.0 4.60e+06 1.1 4.4e+03 9.4e+02 4.0e+00 25 10  4  0  1  94 10 15  7  1    50
KSPGMRESOrthog        16 1.0 6.3207e-03 1.3 1.16e+07 1.1 0.0e+00 0.0e+00 1.6e+01  0 25  0  0  4   0 25  0  0  5 99973
KSPSetUp               4 1.0 3.2091e-04 3.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 5.0805e+00 1.0 2.39e+07 1.1 2.5e+04 1.5e+03 2.3e+02 26 50 22  4 53  97 50 85 63 74   256
SFBcastBegin           1 1.0 1.5471e-0310.0 0.00e+00 0.0 1.7e+03 1.5e+03 1.0e+00  0  0  1  0  0   0  0  6  4  0     0
SFBcastEnd             1 1.0 6.0892e-0451.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set   213            213     22655968     0.
   IS L to G Mapping     3              3     12801292     0.
             Section    70             53        35616     0.
              Vector    15             45      9747824     0.
      Vector Scatter     2              7       490168     0.
              Matrix     0              5      2241752     0.
      Preconditioner     0              5         5176     0.
       Krylov Solver     0              5        23264     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    31             24        38312     0.
   IS L to G Mapping     4              0            0     0.
              Vector   118             76       511680     0.
      Vector Scatter    13              2         2192     0.
              Matrix    26              8      1858040     0.
      Preconditioner     6              1          896     0.
       Krylov Solver     6              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 1.07765e-05
Average time for zero size MPI_Send(): 1.37516e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type hypre
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 hypre 40 1
=================
Discretization: RT
MPI processes 64: solving... 
((17544, 1161600), (17544, 1161600))
	Solver time: 5.107411e+00
	Solver iterations: 16
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 64 processors, by jychang48 Wed Mar  2 17:39:20 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           1.985e+01      1.00035   1.984e+01
Objects:              6.360e+02      1.33054   4.947e+02
Flops:                4.357e+07      1.18779   4.072e+07  2.606e+09
Flops/sec:            2.196e+06      1.18785   2.052e+06  1.314e+08
MPI Messages:         4.678e+03      4.58129   2.100e+03  1.344e+05
MPI Message Lengths:  1.774e+08     18.56783   6.891e+03  9.262e+08
MPI Reductions:       4.320e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4735e+01  74.3%  0.0000e+00   0.0%  1.007e+05  74.9%  6.442e+03       93.5%  1.250e+02  28.9% 
 1:             FEM: 5.1075e+00  25.7%  2.6063e+09 100.0%  3.371e+04  25.1%  4.482e+02        6.5%  3.060e+02  70.8% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.2411e+0012.0 0.00e+00 0.0 2.4e+04 4.0e+00 4.4e+01  6  0 18  0 10   8  0 24  0 35     0
VecScatterBegin        2 1.0 5.1975e-05 4.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 8.1062e-06 4.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 2.2346e+00 1.1 0.00e+00 0.0 4.4e+04 1.8e+03 2.1e+01 11  0 33  9  5  15  0 44  9 17     0
Mesh Migration         2 1.0 3.9620e-01 1.0 0.00e+00 0.0 4.9e+04 1.4e+04 5.4e+01  2  0 36 74 12   3  0 48 79 43     0
DMPlexInterp           1 1.0 2.1139e+0058718.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DMPlexDistribute       1 1.0 2.5038e+00 1.1 0.00e+00 0.0 3.3e+04 1.0e+04 2.5e+01 13  0 25 36  6  17  0 33 38 20     0
DMPlexDistCones        2 1.0 9.2579e-02 1.2 0.00e+00 0.0 7.2e+03 3.1e+04 4.0e+00  0  0  5 25  1   1  0  7 26  3     0
DMPlexDistLabels       2 1.0 2.5964e-01 1.0 0.00e+00 0.0 2.9e+04 1.3e+04 2.2e+01  1  0 22 42  5   2  0 29 45 18     0
DMPlexDistribOL        1 1.0 1.4357e-01 1.1 0.00e+00 0.0 6.1e+04 8.0e+03 5.0e+01  1  0 45 52 12   1  0 60 56 40     0
DMPlexDistField        3 1.0 3.1811e-02 2.4 0.00e+00 0.0 9.7e+03 3.4e+03 1.2e+01  0  0  7  4  3   0  0 10  4 10     0
DMPlexDistData         2 1.0 1.0804e+0051.6 0.00e+00 0.0 3.5e+04 1.0e+03 6.0e+00  5  0 26  4  1   7  0 35  4  5     0
DMPlexStratify         6 1.5 5.3848e-0156.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 2.8280e-02 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          95 1.0 1.2476e+00 4.0 0.00e+00 0.0 9.7e+04 8.7e+03 4.1e+01  6  0 72 91  9   8  0 96 97 33     0
SFBcastEnd            95 1.0 3.0134e-0110.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 9.5589e-0322.9 0.00e+00 0.0 2.9e+03 5.8e+03 3.0e+00  0  0  2  2  1   0  0  3  2  2     0
SFReduceEnd            4 1.0 9.6300e-03 8.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 3.2902e-0515.3 0.00e+00 0.0 3.2e+02 1.3e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 1.2994e-04 3.1 0.00e+00 0.0 3.2e+02 1.3e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 1.2991e-0318.9 0.00e+00 0.0 6.5e+02 4.0e+00 1.0e+00  0  0  0  0  0   0  0  2  0  0     0
VecMDot               16 1.0 5.5118e-03 1.9 5.08e+06 1.1 0.0e+00 0.0e+00 1.6e+01  0 12  0  0  4   0 12  0  0  5 57322
VecNorm               17 1.0 1.2081e-03 1.3 6.35e+05 1.1 0.0e+00 0.0e+00 1.7e+01  0  2  0  0  4   0  2  0  0  6 32692
VecScale              34 1.0 3.6025e-04 1.2 5.33e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 91510
VecCopy                1 1.0 5.5075e-05 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               110 1.0 7.6127e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                1 1.0 4.4823e-05 1.8 3.73e+04 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 51831
VecMAXPY              17 1.0 1.3549e-03 1.1 5.68e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0 14  0  0  0   0 14  0  0  0 260623
VecScatterBegin      155 1.0 2.3611e-03 1.6 0.00e+00 0.0 2.5e+04 1.1e+03 0.0e+00  0  0 18  3  0   0  0 74 45  0     0
VecScatterEnd        155 1.0 1.4179e-03 3.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          17 1.0 1.5237e-03 1.3 9.52e+05 1.1 0.0e+00 0.0e+00 1.7e+01  0  2  0  0  4   0  2  0  0  6 38879
MatMult               33 1.0 1.1678e-02 1.2 5.80e+06 1.1 2.5e+04 1.1e+03 1.3e+02  0 14 18  3 30   0 14 74 45 42 30787
MatMultAdd            64 1.0 6.7151e-03 1.3 4.96e+06 1.1 2.0e+04 1.1e+03 0.0e+00  0 12 15  2  0   0 12 58 37  0 45791
MatSolve              17 1.0 4.0982e-03 1.1 2.67e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  6  0  0  0   0  6  0  0  0 39822
MatLUFactorNum         1 1.0 1.1230e-03 1.1 2.96e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 15869
MatILUFactorSym        1 1.0 4.8018e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             2 1.0 9.8610e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 2.9418e-0330.9 8.46e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1812
MatAssemblyBegin      12 1.0 2.3770e-0314.8 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatAssemblyEnd        12 1.0 7.4003e-03 1.4 0.00e+00 0.0 4.9e+03 2.5e+02 4.8e+01  0  0  4  0 11   0  0 15  2 16     0
MatGetRow          12000 1.0 4.7642e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
MatGetRowIJ            3 1.0 1.1206e-05 3.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        4 1.0 6.6686e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatGetOrdering         1 1.0 6.3181e-05 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 9.9198e-02 1.0 0.00e+00 0.0 1.2e+03 2.2e+02 1.2e+01  0  0  1  0  3   2  0  4  0  4     0
MatMatMult             1 1.0 7.7269e-03 1.0 1.56e+05 1.0 2.4e+03 7.3e+02 1.6e+01  0  0  2  0  4   0  0  7  3  5  1281
MatMatMultSym          1 1.0 7.0100e-03 1.0 0.00e+00 0.0 2.1e+03 5.8e+02 1.4e+01  0  0  2  0  3   0  0  6  2  5     0
MatMatMultNum          1 1.0 7.3886e-04 1.1 1.56e+05 1.0 3.1e+02 1.8e+03 2.0e+00  0  0  0  0  0   0  0  1  1  1 13397
MatGetLocalMat         2 1.0 8.2898e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol          2 1.0 4.1103e-04 2.6 0.00e+00 0.0 1.2e+03 1.2e+03 0.0e+00  0  0  1  0  0   0  0  4  3  0     0
PCSetUp                4 1.0 4.4948e+00 1.0 5.36e+05 1.1 3.9e+03 2.9e+03 6.6e+01 23  1  3  1 15  88  1 12 19 22     7
PCSetUpOnBlocks       17 1.0 1.6925e-03 1.1 2.96e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 10529
PCApply               17 1.0 4.8126e+00 1.0 4.02e+06 1.1 5.2e+03 8.5e+02 4.0e+00 24  9  4  0  1  94  9 15  7  1    51
KSPGMRESOrthog        16 1.0 6.7701e-03 1.6 1.02e+07 1.1 0.0e+00 0.0e+00 1.6e+01  0 24  0  0  4   0 24  0  0  5 93337
KSPSetUp               4 1.0 2.9278e-04 3.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 4.9404e+00 1.0 2.09e+07 1.1 2.9e+04 1.3e+03 2.3e+02 25 50 21  4 53  97 50 85 64 74   263
SFBcastBegin           1 1.0 1.3502e-03 8.5 0.00e+00 0.0 2.0e+03 1.4e+03 1.0e+00  0  0  1  0  0   0  0  6  4  0     0
SFBcastEnd             1 1.0 6.3281e-03530.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set   227            227     22503312     0.
   IS L to G Mapping     3              3     12541056     0.
             Section    70             53        35616     0.
              Vector    15             45      9148440     0.
      Vector Scatter     2              7       428728     0.
              Matrix     0              5      1962204     0.
      Preconditioner     0              5         5176     0.
       Krylov Solver     0              5        23264     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    31             24        36680     0.
   IS L to G Mapping     4              0            0     0.
              Vector   118             76       463200     0.
      Vector Scatter    13              2         2192     0.
              Matrix    26              8      1629940     0.
      Preconditioner     6              1          896     0.
       Krylov Solver     6              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 6.19888e-07
Average time for MPI_Barrier(): 1.2207e-05
Average time for zero size MPI_Send(): 1.64285e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type hypre
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------
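
For anyone reproducing the run above: every entry in the option table carries a "solver_" prefix, which is just the options prefix attached to the outer solve. Below is a minimal sketch (the dictionary name hypre_fieldsplit_opts is made up for illustration, and -log_summary is left out) of the same options written as a Python dictionary of PETSc options and printed back in the prefixed command-line form recorded in the log:

# Sketch only: the logged -solver_* entries with the "solver_" prefix stripped.
hypre_fieldsplit_opts = {
    "ksp_type": "gmres",
    "ksp_rtol": "1e-7",
    "pc_type": "fieldsplit",
    "pc_fieldsplit_type": "schur",
    "pc_fieldsplit_schur_fact_type": "upper",
    "pc_fieldsplit_schur_precondition": "selfp",
    "fieldsplit_0_ksp_type": "preonly",
    "fieldsplit_0_pc_type": "bjacobi",
    "fieldsplit_1_ksp_type": "preonly",
    "fieldsplit_1_pc_type": "hypre",
}

# Printing the entries back with the prefix reproduces the option table above.
for key, val in hypre_fieldsplit_opts.items():
    print("-solver_{} {}".format(key, val))
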
=================
 ml 40 1
=================
Discretization: RT
MPI processes 1: solving... 
((1161600, 1161600), (1161600, 1161600))
	Solver time: 5.434000e+00
	Solver iterations: 14
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 1 processor, by jychang48 Wed Mar  2 17:40:07 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           1.920e+01      1.00000   1.920e+01
Objects:              3.260e+02      1.00000   3.260e+02
Flops:                3.005e+09      1.00000   3.005e+09  3.005e+09
Flops/sec:            1.565e+08      1.00000   1.565e+08  1.565e+08
MPI Messages:         0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Reductions:       0.000e+00      0.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.3767e+01  71.7%  0.0000e+00   0.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0% 
 1:             FEM: 5.4340e+00  28.3%  3.0049e+09 100.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

VecSet                 8 1.0 1.6064e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin        2 1.0 3.6058e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DMPlexInterp           1 1.0 2.0997e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 11  0  0  0  0  15  0  0  0  0     0
DMPlexStratify         4 1.0 5.1062e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  3  0  0  0  0   4  0  0  0  0     0
SFSetGraph             7 1.0 2.6127e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

VecMDot               14 1.0 8.3214e-02 1.0 2.44e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  8  0  0  0   2  8  0  0  0  2931
VecNorm               15 1.0 1.8275e-02 1.0 3.48e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  1907
VecScale              30 1.0 1.8777e-02 1.0 2.91e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  1549
VecCopy                1 1.0 2.6550e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               231 1.0 2.2175e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   4  0  0  0  0     0
VecAXPY                1 1.0 1.4780e-03 1.0 2.32e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1572
VecAYPX               75 1.0 7.5226e-03 1.0 6.78e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   901
VecMAXPY              15 1.0 9.9054e-02 1.0 2.76e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1  9  0  0  0   2  9  0  0  0  2791
VecScatterBegin       66 1.0 7.1086e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
VecNormalize          15 1.0 2.8637e-02 1.0 5.23e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   1  2  0  0  0  1825
MatMult              189 1.0 4.6065e-01 1.0 5.58e+08 1.0 0.0e+00 0.0e+00 0.0e+00  2 19  0  0  0   8 19  0  0  0  1212
MatMultAdd           131 1.0 2.5532e-01 1.0 3.10e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1 10  0  0  0   5 10  0  0  0  1215
MatSolve              30 1.0 1.6508e-01 1.0 1.50e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1  5  0  0  0   3  5  0  0  0   908
MatSOR               150 1.0 7.3408e-01 1.0 8.00e+08 1.0 0.0e+00 0.0e+00 0.0e+00  4 27  0  0  0  14 27  0  0  0  1089
MatLUFactorSym         1 1.0 1.0014e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 6.5596e-02 1.0 1.81e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   1  1  0  0  0   276
MatILUFactorSym        1 1.0 4.8129e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
MatConvert             1 1.0 1.8516e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 7.8988e-03 1.0 5.34e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   676
MatResidual           75 1.0 1.0117e-01 1.0 1.45e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1  5  0  0  0   2  5  0  0  0  1436
MatAssemblyBegin      25 1.0 1.6451e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd        25 1.0 1.1168e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
MatGetRow         768000 1.0 4.7904e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   9  0  0  0  0     0
MatGetRowIJ            2 1.0 5.0068e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix        4 1.0 4.8298e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
MatGetOrdering         2 1.0 3.0270e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 1.0552e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  5  0  0  0  0  19  0  0  0  0     0
MatMatMult             1 1.0 1.2038e-01 1.0 1.33e+07 1.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0   111
MatMatMultSym          1 1.0 8.1955e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   2  0  0  0  0     0
MatMatMultNum          1 1.0 3.8406e-02 1.0 1.33e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0   347
PCSetUp                4 1.0 2.2227e+00 1.0 1.01e+08 1.0 0.0e+00 0.0e+00 0.0e+00 12  3  0  0  0  41  3  0  0  0    46
PCSetUpOnBlocks       15 1.0 1.1679e-01 1.0 1.81e+07 1.0 0.0e+00 0.0e+00 0.0e+00  1  1  0  0  0   2  1  0  0  0   155
PCApply               15 1.0 2.1770e+00 1.0 1.32e+09 1.0 0.0e+00 0.0e+00 0.0e+00 11 44  0  0  0  40 44  0  0  0   605
KSPGMRESOrthog        14 1.0 1.7076e-01 1.0 4.88e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1 16  0  0  0   3 16  0  0  0  2857
KSPSetUp              10 1.0 2.5997e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 4.0425e+00 1.0 2.18e+09 1.0 0.0e+00 0.0e+00 0.0e+00 21 73  0  0  0  74 73  0  0  0   539
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set    22             25     38642032     0.
             Section    26              8         5376     0.
              Vector    13             68    289326248     0.
      Vector Scatter     2              6         3984     0.
              Matrix     0             19    155400352     0.
      Preconditioner     0             11        11008     0.
       Krylov Solver     0             11        30976     0.
    Distributed Mesh    10              4        19256     0.
    GraphPartitioner     4              3         1836     0.
Star Forest Bipartite Graph    23             12         9696     0.
     Discrete System    10              4         3456     0.

--- Event Stage 1: FEM

           Index Set    22             12         9408     0.
   IS L to G Mapping     4              0            0     0.
              Vector   127             63     19225304     0.
      Vector Scatter     6              0            0     0.
              Matrix    26              2     37023836     0.
      Preconditioner    12              1         1016     0.
       Krylov Solver    12              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 6.19888e-07
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type ml
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 ml 40 1
=================
Discretization: RT
MPI processes 2: solving... 
((579051, 1161600), (579051, 1161600))
	Solver time: 3.572446e+00
	Solver iterations: 16
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 2 processors, by jychang48 Wed Mar  2 17:40:26 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           1.773e+01      1.00036   1.773e+01
Objects:              7.890e+02      1.01806   7.820e+02
Flops:                1.659e+09      1.00160   1.658e+09  3.315e+09
Flops/sec:            9.356e+07      1.00124   9.350e+07  1.870e+08
MPI Messages:         7.360e+02      1.10345   7.015e+02  1.403e+03
MPI Message Lengths:  4.089e+08      1.61162   4.723e+05  6.626e+08
MPI Reductions:       6.180e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4155e+01  79.8%  0.0000e+00   0.0%  5.020e+02  35.8%  4.447e+05       94.2%  1.250e+02  20.2% 
 1:             FEM: 3.5725e+00  20.2%  3.3150e+09 100.0%  9.010e+02  64.2%  2.757e+04        5.8%  4.920e+02  79.6% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 8.3753e-0110.1 0.00e+00 0.0 1.2e+02 4.0e+00 4.4e+01  3  0  8  0  7   3  0 24  0 35     0
VecScatterBegin        2 1.0 1.5290e-03 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 3.0994e-06 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 1.6361e+00 1.1 0.00e+00 0.0 9.2e+01 5.5e+05 2.1e+01  9  0  7  8  3  11  0 18  8 17     0
Mesh Migration         2 1.0 1.7862e+00 1.0 0.00e+00 0.0 3.7e+02 1.4e+06 5.4e+01 10  0 27 78  9  13  0 75 83 43     0
DMPlexInterp           1 1.0 2.0356e+0049929.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  6  0  0  0  0   7  0  0  0  0     0
DMPlexDistribute       1 1.0 2.2571e+00 1.1 0.00e+00 0.0 1.7e+02 1.9e+06 2.5e+01 12  0 12 47  4  15  0 33 50 20     0
DMPlexDistCones        2 1.0 3.6382e-01 1.0 0.00e+00 0.0 5.4e+01 3.2e+06 4.0e+00  2  0  4 26  1   3  0 11 28  3     0
DMPlexDistLabels       2 1.0 9.6086e-01 1.0 0.00e+00 0.0 2.4e+02 1.2e+06 2.2e+01  5  0 17 44  4   7  0 48 47 18     0
DMPlexDistribOL        1 1.0 1.1848e+00 1.0 0.00e+00 0.0 3.1e+02 9.6e+05 5.0e+01  7  0 22 45  8   8  0 61 48 40     0
DMPlexDistField        3 1.0 4.3099e-02 1.1 0.00e+00 0.0 6.2e+01 3.5e+05 1.2e+01  0  0  4  3  2   0  0 12  3 10     0
DMPlexDistData         2 1.0 8.3743e-0126.3 0.00e+00 0.0 5.4e+01 4.0e+05 6.0e+00  2  0  4  3  1   3  0 11  3  5     0
DMPlexStratify         6 1.5 7.6924e-01 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  3  0  0  0  0   4  0  0  0  0     0
SFSetGraph            51 1.0 4.1922e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   3  0  0  0  0     0
SFBcastBegin          95 1.0 9.3978e-01 3.2 0.00e+00 0.0 4.8e+02 1.2e+06 4.1e+01  3  0 34 91  7   4  0 96 96 33     0
SFBcastEnd            95 1.0 4.0925e-01 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 3.2341e-03 1.3 0.00e+00 0.0 1.1e+01 1.3e+06 3.0e+00  0  0  1  2  0   0  0  2  2  2     0
SFReduceEnd            4 1.0 5.1863e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 3.0994e-05 7.6 0.00e+00 0.0 1.0e+00 4.2e+04 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 4.2391e-04 2.8 0.00e+00 0.0 1.0e+00 4.2e+04 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 1.8229e-03121.4 0.00e+00 0.0 2.0e+00 4.0e+00 1.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecMDot               16 1.0 5.0983e-02 1.0 1.58e+08 1.0 0.0e+00 0.0e+00 1.6e+01  0 10  0  0  3   1 10  0  0  3  6197
VecNorm               17 1.0 1.0093e-02 1.0 1.98e+07 1.0 0.0e+00 0.0e+00 1.7e+01  0  1  0  0  3   0  1  0  0  3  3913
VecScale             289 1.0 1.0229e-02 1.0 1.69e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  3286
VecCopy                1 1.0 1.1420e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               238 1.0 3.8023e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
VecAXPY               86 1.0 4.3805e-03 1.0 8.83e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  4027
VecAYPX               85 1.0 3.9430e-03 1.0 3.83e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1942
VecMAXPY              17 1.0 5.8177e-02 1.0 1.77e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0 11  0  0  0   2 11  0  0  0  6070
VecScatterBegin      541 1.0 3.9975e-02 1.0 0.00e+00 0.0 8.2e+02 1.3e+04 0.0e+00  0  0 58  2  0   1  0 91 27  0     0
VecScatterEnd        541 1.0 1.5631e-02 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          17 1.0 1.6049e-02 1.0 2.97e+07 1.0 0.0e+00 0.0e+00 1.7e+01  0  2  0  0  3   0  2  0  0  3  3691
MatMult              213 1.0 2.6564e-01 1.0 2.90e+08 1.0 2.5e+02 1.8e+04 1.3e+02  1 17 18  1 21   7 17 28 12 26  2183
MatMultAdd           149 1.0 1.5978e-01 1.0 1.62e+08 1.0 6.4e+01 3.6e+04 0.0e+00  1 10  5  0  0   4 10  7  6  0  2020
MatSolve              34 1.0 1.0493e-01 1.0 8.50e+07 1.0 0.0e+00 0.0e+00 0.0e+00  1  5  0  0  0   3  5  0  0  0  1612
MatSOR               170 1.0 4.4878e-01 1.0 4.49e+08 1.0 5.1e+02 1.0e+04 0.0e+00  3 27 36  1  0  13 27 57 13  0  1996
MatLUFactorSym         1 1.0 1.0967e-05 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 3.4249e-02 1.0 9.09e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   1  1  0  0  0   527
MatILUFactorSym        1 1.0 2.4367e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
MatConvert             2 1.0 1.0835e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 3.6101e-03 1.0 2.67e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1477
MatResidual           85 1.0 5.9523e-02 1.0 8.19e+07 1.0 1.7e+02 1.0e+04 0.0e+00  0  5 12  0  0   2  5 19  4  0  2746
MatAssemblyBegin      19 1.0 1.9948e-03 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 1.6e+01  0  0  0  0  3   0  0  0  0  3     0
MatAssemblyEnd        19 1.0 1.0086e-01 1.0 0.00e+00 0.0 3.6e+01 4.1e+03 8.8e+01  1  0  3  0 14   3  0  4  0 18     0
MatGetRow         384000 1.0 3.7113e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0  10  0  0  0  0     0
MatGetRowIJ            2 1.0 5.9605e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice       1 1.0 1.2000e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetSubMatrix        4 1.0 2.1678e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   1  0  0  0  1     0
MatGetOrdering         2 1.0 1.4310e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 7.6797e-01 1.0 0.00e+00 0.0 4.0e+00 6.7e+03 1.2e+01  4  0  0  0  2  21  0  0  0  2     0
MatMatMult             1 1.0 1.4012e-01 1.0 4.95e+06 1.0 8.0e+00 2.2e+04 1.6e+01  1  0  1  0  3   4  0  1  0  3    71
MatMatMultSym          1 1.0 1.2285e-01 1.0 0.00e+00 0.0 7.0e+00 1.8e+04 1.4e+01  1  0  0  0  2   3  0  1  0  3     0
MatMatMultNum          1 1.0 1.7299e-02 1.0 4.95e+06 1.0 1.0e+00 5.5e+04 2.0e+00  0  0  0  0  0   0  0  0  0  0   572
MatRedundantMat        1 1.0 1.2240e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetLocalMat         2 1.0 2.2047e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
MatGetBrAoCol          2 1.0 1.2829e-03 2.9 0.00e+00 0.0 4.0e+00 3.8e+04 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
PCSetUp                4 1.0 1.4806e+00 1.0 4.89e+07 1.0 7.6e+01 1.3e+05 2.5e+02  8  3  5  2 41  41  3  8 26 51    66
PCSetUpOnBlocks       17 1.0 6.0061e-02 1.0 9.09e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   2  1  0  0  0   301
PCApply               17 1.0 1.2664e+00 1.0 6.96e+08 1.0 7.9e+02 1.0e+04 1.9e+02  7 42 56  1 31  35 42 87 21 39  1099
KSPGMRESOrthog        16 1.0 1.0304e-01 1.0 3.17e+08 1.0 0.0e+00 0.0e+00 1.6e+01  1 19  0  0  3   3 19  0  0  3  6133
KSPSetUp              11 1.0 9.7177e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+01  0  0  0  0  2   0  0  0  0  2     0
KSPSolve               1 1.0 2.5623e+00 1.0 1.22e+09 1.0 8.7e+02 2.3e+04 4.1e+02 14 74 62  3 67  72 74 97 51 84   954
SFBcastBegin           1 1.0 1.8821e-0324.7 0.00e+00 0.0 6.0e+00 4.1e+04 1.0e+00  0  0  0  0  0   0  0  1  1  0     0
SFBcastEnd             1 1.0 7.5102e-05 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set    79             84     49128276     0.
   IS L to G Mapping     3              3     23945692     0.
             Section    70             53        35616     0.
              Vector    15             96    153254352     0.
      Vector Scatter     2             14     13912584     0.
              Matrix     0             34     80629960     0.
      Preconditioner     0             12        11944     0.
       Krylov Solver     0             12        32144     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    52             40        97704     0.
   IS L to G Mapping     4              0            0     0.
              Vector   348            255     71197824     0.
      Vector Scatter    20              2         2192     0.
              Matrix    55              8     52067772     0.
      Preconditioner    13              1          896     0.
       Krylov Solver    13              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 1.00136e-06
Average time for zero size MPI_Send(): 2.5034e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type ml
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 ml 40 1
=================
Discretization: RT
MPI processes 4: solving... 
((288348, 1161600), (288348, 1161600))
	Solver time: 2.355974e+00
	Solver iterations: 17
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 4 processors, by jychang48 Wed Mar  2 17:40:43 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           1.540e+01      1.00012   1.540e+01
Objects:              7.540e+02      1.02725   7.395e+02
Flops:                8.911e+08      1.00740   8.865e+08  3.546e+09
Flops/sec:            5.788e+07      1.00742   5.759e+07  2.303e+08
MPI Messages:         1.734e+03      1.43406   1.438e+03  5.750e+03
MPI Message Lengths:  2.936e+08      2.18390   1.217e+05  7.001e+08
MPI Reductions:       5.940e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.3039e+01  84.7%  0.0000e+00   0.0%  1.654e+03  28.8%  1.129e+05       92.7%  1.250e+02  21.0% 
 1:             FEM: 2.3563e+00  15.3%  3.5462e+09 100.0%  4.096e+03  71.2%  8.839e+03        7.3%  4.680e+02  78.8% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 9.1547e-0122.2 0.00e+00 0.0 3.9e+02 4.0e+00 4.4e+01  4  0  7  0  7   5  0 24  0 35     0
VecScatterBegin        2 1.0 6.8903e-04 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 4.0531e-06 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 1.7451e+00 1.1 0.00e+00 0.0 3.8e+02 1.4e+05 2.1e+01 11  0  7  8  4  13  0 23  8 17     0
Mesh Migration         2 1.0 1.0095e+00 1.0 0.00e+00 0.0 1.1e+03 4.7e+05 5.4e+01  7  0 20 77  9   8  0 69 83 43     0
DMPlexInterp           1 1.0 2.0528e+0051557.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  3  0  0  0  0   4  0  0  0  0     0
DMPlexDistribute       1 1.0 2.0368e+00 1.1 0.00e+00 0.0 3.9e+02 8.1e+05 2.5e+01 13  0  7 45  4  15  0 23 49 20     0
DMPlexDistCones        2 1.0 2.3249e-01 1.0 0.00e+00 0.0 1.6e+02 1.1e+06 4.0e+00  2  0  3 26  1   2  0 10 28  3     0
DMPlexDistLabels       2 1.0 5.3011e-01 1.0 0.00e+00 0.0 7.2e+02 4.2e+05 2.2e+01  3  0 13 43  4   4  0 44 47 18     0
DMPlexDistribOL        1 1.0 7.3405e-01 1.0 0.00e+00 0.0 1.2e+03 2.8e+05 5.0e+01  5  0 20 45  8   6  0 70 49 40     0
DMPlexDistField        3 1.0 2.9428e-02 1.1 0.00e+00 0.0 2.0e+02 1.1e+05 1.2e+01  0  0  4  3  2   0  0 12  4 10     0
DMPlexDistData         2 1.0 9.2139e-0141.2 0.00e+00 0.0 2.2e+02 1.0e+05 6.0e+00  4  0  4  3  1   5  0 14  4  5     0
DMPlexStratify         6 1.5 6.4224e-01 4.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
SFSetGraph            51 1.0 2.3842e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
SFBcastBegin          95 1.0 9.6980e-01 4.8 0.00e+00 0.0 1.6e+03 4.0e+05 4.1e+01  5  0 27 89  7   6  0 95 96 33     0
SFBcastEnd            95 1.0 3.1558e-01 3.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 2.9893e-03 2.1 0.00e+00 0.0 4.9e+01 2.9e+05 3.0e+00  0  0  1  2  1   0  0  3  2  2     0
SFReduceEnd            4 1.0 6.0470e-03 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 3.6955e-0512.9 0.00e+00 0.0 5.0e+00 1.7e+04 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 3.9387e-04 2.3 0.00e+00 0.0 5.0e+00 1.7e+04 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 1.7231e-0396.4 0.00e+00 0.0 1.0e+01 4.0e+00 1.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecMDot               17 1.0 3.9099e-02 1.2 8.94e+07 1.0 0.0e+00 0.0e+00 1.7e+01  0 10  0  0  3   1 10  0  0  4  9091
VecNorm               18 1.0 8.1410e-03 1.3 1.05e+07 1.0 0.0e+00 0.0e+00 1.8e+01  0  1  0  0  3   0  1  0  0  4  5137
VecScale             252 1.0 5.7852e-03 1.0 9.11e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  6272
VecCopy                1 1.0 6.0797e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               232 1.0 1.7792e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
VecAXPY               73 1.0 2.5489e-03 1.0 4.64e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  7271
VecAYPX               72 1.0 2.3601e-03 1.1 2.03e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  3434
VecMAXPY              18 1.0 3.7059e-02 1.0 9.93e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0 11  0  0  0   2 11  0  0  0 10657
VecScatterBegin      500 1.0 2.3368e-02 1.0 0.00e+00 0.0 3.8e+03 5.8e+03 0.0e+00  0  0 66  3  0   1  0 93 43  0     0
VecScatterEnd        500 1.0 8.3990e-03 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          18 1.0 1.1338e-02 1.2 1.58e+07 1.0 0.0e+00 0.0e+00 1.8e+01  0  2  0  0  3   0  2  0  0  4  5532
MatMult              189 1.0 1.5278e-01 1.0 1.54e+08 1.0 1.2e+03 8.0e+03 1.4e+02  1 17 21  1 23   6 17 29 19 29  4009
MatMultAdd           140 1.0 9.3157e-02 1.0 8.62e+07 1.0 3.4e+02 1.4e+04 0.0e+00  1 10  6  1  0   4 10  8 10  0  3681
MatSolve              36 1.0 5.9907e-02 1.1 4.50e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  5  0  0  0   2  5  0  0  0  2978
MatSOR               144 1.0 2.5068e-01 1.0 2.40e+08 1.0 2.3e+03 4.9e+03 0.0e+00  2 27 39  2  0  11 27 55 22  0  3799
MatLUFactorSym         1 1.0 1.3113e-05 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 1.8109e-02 1.0 4.57e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   1  1  0  0  0   997
MatILUFactorSym        1 1.0 1.3030e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
MatConvert             2 1.0 5.7888e-03 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 2.0142e-03 1.1 1.34e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2647
MatResidual           72 1.0 3.2858e-02 1.0 4.39e+07 1.0 7.6e+02 4.9e+03 0.0e+00  0  5 13  1  0   1  5 18  7  0  5307
MatAssemblyBegin      18 1.0 1.1560e-02 9.3 0.00e+00 0.0 0.0e+00 0.0e+00 1.4e+01  0  0  0  0  2   0  0  0  0  3     0
MatAssemblyEnd        18 1.0 5.6518e-02 1.1 0.00e+00 0.0 1.7e+02 1.8e+03 8.0e+01  0  0  3  0 13   2  0  4  1 17     0
MatGetRow         192000 1.0 3.7683e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0  16  0  0  0  0     0
MatGetRowIJ            2 1.0 5.9605e-06 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice       1 1.0 6.4135e-05 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetSubMatrix        4 1.0 1.0420e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetOrdering         2 1.0 7.2408e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 7.6580e-01 1.0 0.00e+00 0.0 2.0e+01 2.7e+03 1.2e+01  5  0  0  0  2  32  0  0  0  3     0
MatMatMult             1 1.0 7.8246e-02 1.0 2.48e+06 1.0 4.0e+01 8.9e+03 1.6e+01  0  0  1  0  3   3  0  1  1  3   126
MatMatMultSym          1 1.0 6.8802e-02 1.1 0.00e+00 0.0 3.5e+01 7.0e+03 1.4e+01  0  0  1  0  2   3  0  1  0  3     0
MatMatMultNum          1 1.0 9.4202e-03 1.0 2.48e+06 1.0 5.0e+00 2.2e+04 2.0e+00  0  0  0  0  0   0  0  0  0  0  1050
MatRedundantMat        1 1.0 8.8930e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetLocalMat         2 1.0 1.1519e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol          2 1.0 4.1471e-03 5.9 0.00e+00 0.0 2.0e+01 1.5e+04 0.0e+00  0  0  0  0  0   0  0  0  1  0     0
PCSetUp                4 1.0 1.1437e+00 1.0 2.45e+07 1.0 3.2e+02 3.4e+04 2.2e+02  7  3  6  2 37  48  3  8 22 47    85
PCSetUpOnBlocks       18 1.0 3.1880e-02 1.1 4.57e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   1  1  0  0  0   566
PCApply               18 1.0 6.9520e-01 1.0 3.70e+08 1.0 3.6e+03 4.8e+03 1.6e+02  4 42 62  2 26  29 42 87 33 33  2119
KSPGMRESOrthog        17 1.0 7.2115e-02 1.1 1.79e+08 1.0 0.0e+00 0.0e+00 1.7e+01  0 20  0  0  3   3 20  0  0  4  9858
KSPSetUp              10 1.0 5.5130e-03 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  2     0
KSPSolve               1 1.0 1.7607e+00 1.0 6.60e+08 1.0 4.0e+03 7.9e+03 3.9e+02 11 74 69  5 65  75 74 98 62 83  1494
SFBcastBegin           1 1.0 1.7951e-0317.1 0.00e+00 0.0 3.0e+01 1.7e+04 1.0e+00  0  0  1  0  0   0  0  1  1  0     0
SFBcastEnd             1 1.0 1.2398e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set    87             92     35927100     0.
   IS L to G Mapping     3              3     18881016     0.
             Section    70             53        35616     0.
              Vector    15             87     78953080     0.
      Vector Scatter     2             13      6934616     0.
              Matrix     0             29     40306524     0.
      Preconditioner     0             11        10960     0.
       Krylov Solver     0             11        30856     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    50             38       100832     0.
   IS L to G Mapping     4              0            0     0.
              Vector   315            231     37564192     0.
      Vector Scatter    19              2         2192     0.
              Matrix    50              8     26014168     0.
      Preconditioner    12              1          896     0.
       Krylov Solver    12              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 6.19888e-07
Average time for MPI_Barrier(): 1.19209e-06
Average time for zero size MPI_Send(): 1.54972e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type ml
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------
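
As a quick aside before the 8-process log below, a small sketch (plain Python, nothing PETSc-specific; the times are copied from the ML "Solver time" lines above) of how those wall-clock numbers translate into speedup and parallel efficiency:

# ML solver times in seconds, keyed by MPI process count, taken from the runs above.
ml_times = {1: 5.434000, 2: 3.572446, 4: 2.355974}
t1 = ml_times[1]
for p in sorted(ml_times):
    speedup = t1 / ml_times[p]          # relative to the single-process run
    efficiency = speedup / p            # 1.0 would be ideal strong scaling
    print("{:2d} procs: speedup {:.2f}x, efficiency {:.0%}".format(p, speedup, efficiency))
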

=================
 ml 40 1
=================
Discretization: RT
MPI processes 8: solving... 
((143102, 1161600), (143102, 1161600))
	Solver time: 1.725371e+00
	Solver iterations: 17
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 8 processors, by jychang48 Wed Mar  2 17:40:59 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           1.389e+01      1.00022   1.389e+01
Objects:              7.700e+02      1.04336   7.428e+02
Flops:                4.567e+08      1.01904   4.507e+08  3.605e+09
Flops/sec:            3.287e+07      1.01910   3.244e+07  2.595e+08
MPI Messages:         3.824e+03      1.59612   2.718e+03  2.174e+04
MPI Message Lengths:  2.346e+08      3.28110   3.422e+04  7.441e+08
MPI Reductions:       5.940e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.2167e+01  87.6%  0.0000e+00   0.0%  5.296e+03  24.4%  3.124e+04       91.3%  1.250e+02  21.0% 
 1:             FEM: 1.7258e+00  12.4%  3.6052e+09 100.0%  1.645e+04  75.6%  2.978e+03        8.7%  4.680e+02  78.8% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10^-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 9.9127e-0135.3 0.00e+00 0.0 1.2e+03 4.0e+00 4.4e+01  6  0  6  0  7   7  0 23  0 35     0
VecScatterBegin        2 1.0 2.7800e-04 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 3.3379e-06 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 1.8539e+00 1.1 0.00e+00 0.0 1.4e+03 4.1e+04 2.1e+01 13  0  6  8  4  15  0 26  8 17     0
Mesh Migration         2 1.0 6.9888e-01 1.0 0.00e+00 0.0 3.4e+03 1.6e+05 5.4e+01  5  0 16 75  9   6  0 65 82 43     0
DMPlexInterp           1 1.0 2.1155e+0064296.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
DMPlexDistribute       1 1.0 2.0479e+00 1.1 0.00e+00 0.0 1.0e+03 3.2e+05 2.5e+01 15  0  5 43  4  17  0 19 47 20     0
DMPlexDistCones        2 1.0 1.6353e-01 1.0 0.00e+00 0.0 4.9e+02 3.8e+05 4.0e+00  1  0  2 25  1   1  0  9 27  3     0
DMPlexDistLabels       2 1.0 3.9835e-01 1.0 0.00e+00 0.0 2.1e+03 1.5e+05 2.2e+01  3  0 10 42  4   3  0 40 47 18     0
DMPlexDistribOL        1 1.0 5.2452e-01 1.0 0.00e+00 0.0 3.9e+03 8.8e+04 5.0e+01  4  0 18 46  8   4  0 73 50 40     0
DMPlexDistField        3 1.0 2.3566e-02 1.3 0.00e+00 0.0 6.4e+02 3.8e+04 1.2e+01  0  0  3  3  2   0  0 12  4 10     0
DMPlexDistData         2 1.0 9.7306e-0152.6 0.00e+00 0.0 8.5e+02 3.0e+04 6.0e+00  6  0  4  3  1   7  0 16  4  5     0
DMPlexStratify         6 1.5 5.9495e-01 8.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
SFSetGraph            51 1.0 1.4286e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
SFBcastBegin          95 1.0 1.0315e+00 6.1 0.00e+00 0.0 5.0e+03 1.3e+05 4.1e+01  6  0 23 88  7   7  0 95 96 33     0
SFBcastEnd            95 1.0 2.9712e-01 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 3.7477e-03 3.8 0.00e+00 0.0 1.8e+02 8.2e+04 3.0e+00  0  0  1  2  1   0  0  3  2  2     0
SFReduceEnd            4 1.0 6.8834e-03 2.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 6.5088e-0513.0 0.00e+00 0.0 1.9e+01 7.0e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 2.9087e-04 2.9 0.00e+00 0.0 1.9e+01 7.0e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 1.9469e-0377.8 0.00e+00 0.0 3.8e+01 4.0e+00 1.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecMDot               17 1.0 2.6954e-02 1.1 4.48e+07 1.0 0.0e+00 0.0e+00 1.7e+01  0 10  0  0  3   1 10  0  0  4 13187
VecNorm               18 1.0 1.0048e-02 2.8 5.27e+06 1.0 0.0e+00 0.0e+00 1.8e+01  0  1  0  0  3   0  1  0  0  4  4162
VecScale             252 1.0 3.3975e-03 1.1 4.70e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 10932
VecCopy                1 1.0 3.8004e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               232 1.0 6.7997e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY               73 1.0 1.5988e-03 1.1 2.33e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 11593
VecAYPX               72 1.0 1.4379e-03 1.1 1.02e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  5638
VecMAXPY              18 1.0 2.6705e-02 1.1 4.98e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0 11  0  0  0   1 11  0  0  0 14789
VecScatterBegin      500 1.0 1.6459e-02 1.1 0.00e+00 0.0 1.5e+04 2.3e+03 0.0e+00  0  0 71  5  0   1  0 93 54  0     0
VecScatterEnd        500 1.0 8.3652e-03 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          18 1.0 1.1821e-02 2.2 7.91e+06 1.0 0.0e+00 0.0e+00 1.8e+01  0  2  0  0  3   0  2  0  0  4  5307
MatMult              189 1.0 1.0047e-01 1.1 7.71e+07 1.0 4.7e+03 3.2e+03 1.4e+02  1 17 22  2 23   6 17 29 23 29  6111
MatMultAdd           140 1.0 5.8617e-02 1.0 4.32e+07 1.0 1.3e+03 6.0e+03 0.0e+00  0 10  6  1  0   3 10  8 12  0  5850
MatSolve              36 1.0 3.2941e-02 1.1 2.25e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  5  0  0  0   2  5  0  0  0  5391
MatSOR               144 1.0 1.4923e-01 1.0 1.20e+08 1.0 9.2e+03 1.9e+03 0.0e+00  1 27 42  2  0   9 27 56 28  0  6406
MatLUFactorSym         1 1.0 1.4782e-05 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 9.5110e-03 1.1 2.31e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   1  1  0  0  0  1900
MatILUFactorSym        1 1.0 6.8221e-03 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             2 1.0 3.1312e-03 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 1.1721e-03 1.1 6.70e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  4550
MatResidual           72 1.0 1.9800e-02 1.0 2.22e+07 1.0 3.1e+03 1.9e+03 0.0e+00  0  5 14  1  0   1  5 19  9  0  8879
MatAssemblyBegin      18 1.0 1.0952e-0212.5 0.00e+00 0.0 0.0e+00 0.0e+00 1.4e+01  0  0  0  0  2   0  0  0  0  3     0
MatAssemblyEnd        18 1.0 3.2517e-02 1.1 0.00e+00 0.0 6.8e+02 7.0e+02 8.0e+01  0  0  3  0 13   2  0  4  1 17     0
MatGetRow          96000 1.0 3.8074e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  3  0  0  0  0  22  0  0  0  0     0
MatGetRowIJ            2 1.0 9.0599e-06 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice       1 1.0 1.1802e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetSubMatrix        4 1.0 5.2059e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetOrdering         2 1.0 3.7909e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 7.6783e-01 1.0 0.00e+00 0.0 7.6e+01 1.1e+03 1.2e+01  6  0  0  0  2  44  0  0  0  3     0
MatMatMult             1 1.0 4.1842e-02 1.0 1.24e+06 1.0 1.5e+02 3.7e+03 1.6e+01  0  0  1  0  3   2  0  1  1  3   237
MatMatMultSym          1 1.0 3.6508e-02 1.1 0.00e+00 0.0 1.3e+02 2.9e+03 1.4e+01  0  0  1  0  2   2  0  1  1  3     0
MatMatMultNum          1 1.0 5.3141e-03 1.0 1.24e+06 1.0 1.9e+01 9.1e+03 2.0e+00  0  0  0  0  0   0  0  0  0  0  1863
MatRedundantMat        1 1.0 1.4997e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetLocalMat         2 1.0 6.3159e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol          2 1.0 2.3229e-03 3.7 0.00e+00 0.0 7.6e+01 6.3e+03 0.0e+00  0  0  0  0  0   0  0  0  1  0     0
PCSetUp                4 1.0 9.7671e-01 1.0 1.23e+07 1.0 1.2e+03 9.8e+03 2.2e+02  7  3  6  2 37  57  3  8 19 47   100
PCSetUpOnBlocks       18 1.0 1.6695e-02 1.3 2.31e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   1  1  0  0  0  1082
PCApply               18 1.0 4.0179e-01 1.0 1.86e+08 1.0 1.5e+04 1.9e+03 1.6e+02  3 41 67  4 26  23 41 89 42 33  3677
KSPGMRESOrthog        17 1.0 5.0506e-02 1.1 8.96e+07 1.0 0.0e+00 0.0e+00 1.7e+01  0 20  0  0  3   3 20  0  0  4 14075
KSPSetUp              10 1.0 2.3572e-03 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  2     0
KSPSolve               1 1.0 1.3470e+00 1.0 3.31e+08 1.0 1.6e+04 2.8e+03 3.9e+02 10 73 74  6 65  78 73 98 70 83  1956
SFBcastBegin           1 1.0 2.0890e-0312.0 0.00e+00 0.0 1.1e+02 7.1e+03 1.0e+00  0  0  1  0  0   0  0  1  1  0     0
SFBcastEnd             1 1.0 1.4305e-04 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set   103            108     29168148     0.
   IS L to G Mapping     3              3     16320748     0.
             Section    70             53        35616     0.
              Vector    15             87     41816936     0.
      Vector Scatter     2             13      3448712     0.
              Matrix     0             29     20176708     0.
      Preconditioner     0             11        10960     0.
       Krylov Solver     0             11        30856     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    50             38        96316     0.
   IS L to G Mapping     4              0            0     0.
              Vector   315            231     18936288     0.
      Vector Scatter    19              2         2192     0.
              Matrix    50              8     12996772     0.
      Preconditioner    12              1          896     0.
       Krylov Solver    12              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 2.00272e-06
Average time for zero size MPI_Send(): 1.63913e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type ml
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------
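A quick arithmetic check on the "Total Mflop/s" column, using the convention stated in the phase summary above (10^-6 * sum of flops over all processors / max time over all processors). This is a minimal Python sketch, not part of the log; the inputs are read off the 8-process summary, and the match is only as tight as the rounded %F column.

total_flops = 3.605e9        # "Flops: ... Total" from the 8-process summary header
kspsolve_flop_pct = 0.73     # global %F column of the KSPSolve row (73%, rounded)
kspsolve_max_time = 1.3470   # "Time (sec) Max" column of the KSPSolve row

mflops = 1e-6 * (kspsolve_flop_pct * total_flops) / kspsolve_max_time
print(round(mflops))         # ~1954, close to the reported 1956 (limited by the rounded %F)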

=================
 ml 40 1
=================
Discretization: RT
MPI processes 16: solving... 
((70996, 1161600), (70996, 1161600))
	Solver time: 9.724491e-01
	Solver iterations: 18
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 16 processors, by jychang48 Wed Mar  2 17:41:17 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           1.549e+01      1.00008   1.549e+01
Objects:              8.040e+02      1.07200   7.584e+02
Flops:                2.486e+08      1.05022   2.426e+08  3.882e+09
Flops/sec:            1.605e+07      1.05015   1.567e+07  2.507e+08
MPI Messages:         6.216e+03      1.99327   4.443e+03  7.109e+04
MPI Message Lengths:  2.044e+08      5.47991   1.150e+04  8.175e+08
MPI Reductions:       6.040e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4514e+01  93.7%  0.0000e+00   0.0%  1.470e+04  20.7%  1.024e+04       89.0%  1.250e+02  20.7% 
 1:             FEM: 9.7263e-01   6.3%  3.8821e+09 100.0%  5.639e+04  79.3%  1.263e+03       11.0%  4.780e+02  79.1% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10^-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.1980e+00 7.6 0.00e+00 0.0 3.5e+03 4.0e+00 4.4e+01  6  0  5  0  7   7  0 24  0 35     0
VecScatterBegin        2 1.0 8.1062e-05 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 5.0068e-06 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 1.9332e+00 1.1 0.00e+00 0.0 4.4e+03 1.4e+04 2.1e+01 12  0  6  8  3  13  0 30  9 17     0
Mesh Migration         2 1.0 5.2375e-01 1.0 0.00e+00 0.0 8.9e+03 6.6e+04 5.4e+01  3  0 13 72  9   4  0 61 81 43     0
DMPlexInterp           1 1.0 2.1157e+0055461.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
DMPlexDistribute       1 1.0 2.1336e+00 1.1 0.00e+00 0.0 2.9e+03 1.1e+05 2.5e+01 14  0  4 39  4  15  0 20 44 20     0
DMPlexDistCones        2 1.0 1.2215e-01 1.0 0.00e+00 0.0 1.3e+03 1.5e+05 4.0e+00  1  0  2 24  1   1  0  9 27  3     0
DMPlexDistLabels       2 1.0 3.1635e-01 1.0 0.00e+00 0.0 5.5e+03 6.1e+04 2.2e+01  2  0  8 41  4   2  0 38 46 18     0
DMPlexDistribOL        1 1.0 3.4299e-01 1.0 0.00e+00 0.0 1.1e+04 3.5e+04 5.0e+01  2  0 15 46  8   2  0 72 52 40     0
DMPlexDistField        3 1.0 2.6731e-02 1.7 0.00e+00 0.0 1.7e+03 1.5e+04 1.2e+01  0  0  2  3  2   0  0 12  4 10     0
DMPlexDistData         2 1.0 9.9244e-0167.5 0.00e+00 0.0 2.9e+03 9.6e+03 6.0e+00  6  0  4  3  1   6  0 20  4  5     0
DMPlexStratify         6 1.5 5.6545e-0115.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 8.5791e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
SFBcastBegin          95 1.0 1.2138e+00 3.6 0.00e+00 0.0 1.4e+04 5.0e+04 4.1e+01  7  0 20 86  7   7  0 95 97 33     0
SFBcastEnd            95 1.0 3.0706e-01 5.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 7.5688e-0311.7 0.00e+00 0.0 5.0e+02 3.0e+04 3.0e+00  0  0  1  2  0   0  0  3  2  2     0
SFReduceEnd            4 1.0 8.1227e-03 3.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 5.1022e-0526.8 0.00e+00 0.0 5.4e+01 3.9e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 2.3007e-04 2.4 0.00e+00 0.0 5.4e+01 3.9e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 1.2939e-0336.9 0.00e+00 0.0 1.1e+02 4.0e+00 1.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecMDot               18 1.0 1.5401e-02 1.3 2.52e+07 1.0 0.0e+00 0.0e+00 1.8e+01  0 10  0  0  3   1 10  0  0  4 25795
VecNorm               19 1.0 4.7302e-03 2.3 2.80e+06 1.0 0.0e+00 0.0e+00 1.9e+01  0  1  0  0  3   0  1  0  0  4  9332
VecScale             266 1.0 2.0359e-03 1.1 2.62e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 19960
VecCopy                1 1.0 2.0003e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               244 1.0 3.7987e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY               77 1.0 9.3222e-04 1.1 1.22e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 20859
VecAYPX               76 1.0 6.7663e-04 1.2 5.38e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 12652
VecMAXPY              19 1.0 1.0955e-02 1.1 2.78e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0 11  0  0  0   1 11  0  0  0 40080
VecScatterBegin      527 1.0 1.1807e-02 1.3 0.00e+00 0.0 5.3e+04 1.1e+03 0.0e+00  0  0 74  7  0   1  0 94 66  0     0
VecScatterEnd        527 1.0 1.1447e-02 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
VecNormalize          19 1.0 5.7294e-03 1.9 4.20e+06 1.0 0.0e+00 0.0e+00 1.9e+01  0  2  0  0  3   0  2  0  0  4 11556
MatMult              199 1.0 5.5462e-02 1.1 4.11e+07 1.0 1.5e+04 1.7e+03 1.4e+02  0 17 22  3 24   5 17 27 28 30 11672
MatMultAdd           148 1.0 3.2288e-02 1.1 2.31e+07 1.0 3.9e+03 3.3e+03 0.0e+00  0  9  5  2  0   3  9  7 14  0 11244
MatSolve              38 1.0 1.7801e-02 1.2 1.19e+07 1.1 0.0e+00 0.0e+00 0.0e+00  0  5  0  0  0   2  5  0  0  0 10460
MatSOR               152 1.0 7.6650e-02 1.0 6.41e+07 1.0 3.2e+04 9.6e+02 0.0e+00  0 26 44  4  0   8 26 56 34  0 13202
MatLUFactorSym         1 1.0 2.9087e-05 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 4.6940e-03 1.1 1.17e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  3851
MatILUFactorSym        1 1.0 3.3820e-03 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             2 1.0 1.5450e-03 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 2.5489e-03 5.7 3.37e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2093
MatResidual           76 1.0 1.2002e-02 1.2 1.20e+07 1.0 1.1e+04 9.6e+02 0.0e+00  0  5 15  1  0   1  5 19 11  0 15609
MatAssemblyBegin      18 1.0 6.2559e-0314.2 0.00e+00 0.0 0.0e+00 0.0e+00 1.4e+01  0  0  0  0  2   0  0  0  0  3     0
MatAssemblyEnd        18 1.0 2.1353e-02 1.2 0.00e+00 0.0 2.2e+03 3.4e+02 8.0e+01  0  0  3  0 13   2  0  4  1 17     0
MatGetRow          48000 1.0 1.9073e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0  19  0  0  0  0     0
MatGetRowIJ            2 1.0 4.4990e-04117.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice       1 1.0 1.5783e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetSubMatrix        4 1.0 2.5711e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetOrdering         2 1.0 6.6710e-04 3.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 3.8572e-01 1.0 0.00e+00 0.0 2.2e+02 6.2e+02 1.2e+01  2  0  0  0  2  40  0  0  0  3     0
MatMatMult             1 1.0 2.2668e-02 1.0 6.22e+05 1.0 4.3e+02 2.1e+03 1.6e+01  0  0  1  0  3   2  0  1  1  3   437
MatMatMultSym          1 1.0 1.9739e-02 1.1 0.00e+00 0.0 3.8e+02 1.6e+03 1.4e+01  0  0  1  0  2   2  0  1  1  3     0
MatMatMultNum          1 1.0 2.9190e-03 1.0 6.22e+05 1.0 5.4e+01 5.1e+03 2.0e+00  0  0  0  0  0   0  0  0  0  0  3392
MatRedundantMat        1 1.0 1.8787e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetLocalMat         2 1.0 3.1598e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol          2 1.0 1.4391e-03 3.9 0.00e+00 0.0 2.2e+02 3.5e+03 0.0e+00  0  0  0  0  0   0  0  0  1  0     0
PCSetUp                4 1.0 5.0388e-01 1.0 6.18e+06 1.0 3.9e+03 3.5e+03 2.2e+02  3  3  6  2 36  52  3  7 15 46   194
PCSetUpOnBlocks       19 1.0 8.1487e-03 1.3 1.17e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0  2213
PCApply               19 1.0 2.2397e-01 1.0 9.86e+07 1.0 5.1e+04 9.1e+02 1.6e+02  1 40 72  6 26  23 40 90 52 33  6956
KSPGMRESOrthog        18 1.0 2.4910e-02 1.2 5.04e+07 1.0 0.0e+00 0.0e+00 1.8e+01  0 20  0  0  3   2 20  0  0  4 31896
KSPSetUp              10 1.0 1.2152e-03 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  2     0
KSPSolve               1 1.0 7.0285e-01 1.0 1.79e+08 1.0 5.6e+04 1.3e+03 4.0e+02  5 73 78  9 66  72 73 98 77 83  4018
SFBcastBegin           1 1.0 1.3649e-03 8.3 0.00e+00 0.0 3.3e+02 3.9e+03 1.0e+00  0  0  0  0  0   0  0  1  1  0     0
SFBcastEnd             1 1.0 6.1879e-03167.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set   125            130     25605220     0.
   IS L to G Mapping     3              3     15014432     0.
             Section    70             53        35616     0.
              Vector    15             87     23370536     0.
      Vector Scatter     2             13      1718168     0.
              Matrix     0             29     10124824     0.
      Preconditioner     0             11        10960     0.
       Krylov Solver     0             11        30856     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    50             38        84652     0.
   IS L to G Mapping     4              0            0     0.
              Vector   327            243     10154536     0.
      Vector Scatter    19              2         2192     0.
              Matrix    50              8      6483848     0.
      Preconditioner    12              1          896     0.
       Krylov Solver    12              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 5.00679e-06
Average time for zero size MPI_Send(): 1.80304e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type ml
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
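For collating many of these runs, the banner lines printed before each summary ("MPI processes N: solving...", "Solver time", "Solver iterations") are regular enough to scrape. A hypothetical Python helper along these lines would rebuild the strong-scaling table; the file name is an assumption, not something the script writes.

import re

# Hypothetical post-processing helper: scrape the per-run banner lines from a
# concatenated log file like this one and rebuild the strong-scaling table.
# The field names match the log text above; the file name is an assumption.
pattern = re.compile(
    r"MPI processes (\d+): solving\.\.\..*?"
    r"Solver time: ([0-9.eE+-]+).*?"
    r"Solver iterations: (\d+)",
    re.S,
)

with open("ml_log_summary.txt") as f:    # assumed file name
    text = f.read()

for procs, time, iters in pattern.findall(text):
    print(f"{int(procs):3d} procs  {float(time):8.3f} s  {iters} iterations")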

=================
 ml 40 1
=================
Discretization: RT
MPI processes 24: solving... 
((47407, 1161600), (47407, 1161600))
	Solver time: 7.069969e-01
	Solver iterations: 18
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 24 processors, by jychang48 Wed Mar  2 17:41:35 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           1.503e+01      1.00014   1.503e+01
Objects:              8.180e+02      1.09067   7.607e+02
Flops:                1.700e+08      1.06571   1.644e+08  3.945e+09
Flops/sec:            1.131e+07      1.06582   1.094e+07  2.625e+08
MPI Messages:         8.270e+03      2.25324   5.616e+03  1.348e+05
MPI Message Lengths:  1.913e+08      7.34285   6.417e+03  8.650e+08
MPI Reductions:       6.040e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4322e+01  95.3%  0.0000e+00   0.0%  2.618e+04  19.4%  5.638e+03       87.8%  1.250e+02  20.7% 
 1:             FEM: 7.0697e-01   4.7%  3.9447e+09 100.0%  1.086e+05  80.6%  7.798e+02       12.2%  4.780e+02  79.1% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10^-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.1264e+00 9.5 0.00e+00 0.0 6.2e+03 4.0e+00 4.4e+01  7  0  5  0  7   7  0 24  0 35     0
VecScatterBegin        2 1.0 5.2214e-05 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 1.8835e-05 9.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 1.9900e+00 1.1 0.00e+00 0.0 8.7e+03 7.7e+03 2.1e+01 13  0  6  8  3  14  0 33  9 17     0
Mesh Migration         2 1.0 4.5752e-01 1.0 0.00e+00 0.0 1.5e+04 4.0e+04 5.4e+01  3  0 11 71  9   3  0 58 81 43     0
DMPlexInterp           1 1.0 2.1216e+0058932.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
DMPlexDistribute       1 1.0 2.2043e+00 1.1 0.00e+00 0.0 5.7e+03 5.7e+04 2.5e+01 15  0  4 37  4  15  0 22 43 20     0
DMPlexDistCones        2 1.0 1.0450e-01 1.1 0.00e+00 0.0 2.2e+03 9.3e+04 4.0e+00  1  0  2 24  1   1  0  8 27  3     0
DMPlexDistLabels       2 1.0 2.8710e-01 1.0 0.00e+00 0.0 9.3e+03 3.7e+04 2.2e+01  2  0  7 40  4   2  0 35 46 18     0
DMPlexDistribOL        1 1.0 2.6298e-01 1.0 0.00e+00 0.0 1.8e+04 2.2e+04 5.0e+01  2  0 14 47  8   2  0 70 53 40     0
DMPlexDistField        3 1.0 2.8758e-02 2.1 0.00e+00 0.0 3.0e+03 9.4e+03 1.2e+01  0  0  2  3  2   0  0 11  4 10     0
DMPlexDistData         2 1.0 9.9861e-0127.8 0.00e+00 0.0 6.0e+03 5.0e+03 6.0e+00  6  0  4  3  1   6  0 23  4  5     0
DMPlexStratify         6 1.5 5.6285e-0122.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 6.0824e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          95 1.0 1.1366e+00 4.0 0.00e+00 0.0 2.5e+04 2.9e+04 4.1e+01  7  0 19 85  7   7  0 96 97 33     0
SFBcastEnd            95 1.0 3.0235e-01 5.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 7.5159e-0313.3 0.00e+00 0.0 8.7e+02 1.8e+04 3.0e+00  0  0  1  2  0   0  0  3  2  2     0
SFReduceEnd            4 1.0 8.3404e-03 4.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 4.4107e-0523.1 0.00e+00 0.0 9.4e+01 2.8e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 1.7095e-04 3.6 0.00e+00 0.0 9.4e+01 2.8e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 1.7259e-0328.3 0.00e+00 0.0 1.9e+02 4.0e+00 1.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecMDot               18 1.0 1.1039e-02 1.3 1.69e+07 1.0 0.0e+00 0.0e+00 1.8e+01  0 10  0  0  3   1 10  0  0  4 35989
VecNorm               19 1.0 3.1846e-03 2.2 1.88e+06 1.0 0.0e+00 0.0e+00 1.9e+01  0  1  0  0  3   0  1  0  0  4 13861
VecScale             266 1.0 1.4791e-03 1.1 1.81e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 28126
VecCopy                1 1.0 1.4901e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               244 1.0 2.5969e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY               77 1.0 6.2633e-04 1.1 8.16e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 31062
VecAYPX               76 1.0 4.6229e-04 1.2 3.59e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 18529
VecMAXPY              19 1.0 6.4399e-03 1.1 1.87e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0 11  0  0  0   1 11  0  0  0 68182
VecScatterBegin      527 1.0 1.0387e-02 1.5 0.00e+00 0.0 1.0e+05 7.2e+02 0.0e+00  0  0 76  9  0   1  0 94 70  0     0
VecScatterEnd        527 1.0 1.1114e-02 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
VecNormalize          19 1.0 3.8595e-03 1.8 2.81e+06 1.0 0.0e+00 0.0e+00 1.9e+01  0  2  0  0  3   0  2  0  0  4 17155
MatMult              199 1.0 3.9954e-02 1.1 2.74e+07 1.0 2.9e+04 1.1e+03 1.4e+02  0 16 21  4 24   5 16 26 30 30 16239
MatMultAdd           148 1.0 2.3000e-02 1.1 1.54e+07 1.0 6.8e+03 2.3e+03 0.0e+00  0  9  5  2  0   3  9  6 15  0 15786
MatSolve              38 1.0 1.2058e-02 1.1 7.95e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  5  0  0  0   2  5  0  0  0 15399
MatSOR               152 1.0 5.3896e-02 1.0 4.33e+07 1.0 6.0e+04 6.3e+02 0.0e+00  0 26 45  4  0   7 26 56 36  0 18843
MatLUFactorSym         1 1.0 3.1948e-05 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 3.0081e-03 1.1 7.86e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  6048
MatILUFactorSym        1 1.0 2.2931e-03 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             2 1.0 1.0121e-03 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 2.3041e-03 8.9 2.25e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2314
MatResidual           76 1.0 8.7066e-03 1.2 8.11e+06 1.1 2.0e+04 6.3e+02 0.0e+00  0  5 15  1  0   1  5 19 12  0 21688
MatAssemblyBegin      18 1.0 4.5638e-03 9.1 0.00e+00 0.0 0.0e+00 0.0e+00 1.4e+01  0  0  0  0  2   0  0  0  0  3     0
MatAssemblyEnd        18 1.0 1.5963e-02 1.2 0.00e+00 0.0 4.3e+03 2.2e+02 8.0e+01  0  0  3  0 13   2  0  4  1 17     0
MatGetRow          32000 1.0 1.2688e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0  18  0  0  0  0     0
MatGetRowIJ            2 1.0 5.6219e-04112.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice       1 1.0 1.5306e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetSubMatrix        4 1.0 1.8289e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetOrdering         2 1.0 7.2408e-04 5.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 2.5769e-01 1.0 0.00e+00 0.0 3.7e+02 4.4e+02 1.2e+01  2  0  0  0  2  36  0  0  0  3     0
MatMatMult             1 1.0 1.5805e-02 1.0 4.15e+05 1.0 7.4e+02 1.5e+03 1.6e+01  0  0  1  0  3   2  0  1  1  3   626
MatMatMultSym          1 1.0 1.3917e-02 1.0 0.00e+00 0.0 6.5e+02 1.2e+03 1.4e+01  0  0  0  0  2   2  0  1  1  3     0
MatMatMultNum          1 1.0 1.8940e-03 1.0 4.15e+05 1.0 9.3e+01 3.6e+03 2.0e+00  0  0  0  0  0   0  0  0  0  0  5226
MatRedundantMat        1 1.0 1.8001e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetLocalMat         2 1.0 2.1231e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol          2 1.0 8.0800e-04 3.2 0.00e+00 0.0 3.7e+02 2.5e+03 0.0e+00  0  0  0  0  0   0  0  0  1  0     0
PCSetUp                4 1.0 3.4841e-01 1.0 4.13e+06 1.0 7.5e+03 2.0e+03 2.2e+02  2  2  6  2 36  49  2  7 14 46   281
PCSetUpOnBlocks       19 1.0 5.4433e-03 1.4 7.77e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0  3303
PCApply               19 1.0 1.6812e-01 1.0 6.60e+07 1.0 9.9e+04 5.9e+02 1.6e+02  1 40 74  7 26  24 40 91 55 33  9296
KSPGMRESOrthog        18 1.0 1.6622e-02 1.2 3.38e+07 1.0 0.0e+00 0.0e+00 1.8e+01  0 20  0  0  3   2 20  0  0  4 47800
KSPSetUp              10 1.0 8.8477e-04 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  2     0
KSPSolve               1 1.0 4.9091e-01 1.0 1.19e+08 1.0 1.1e+05 7.9e+02 4.0e+02  3 72 79 10 66  69 72 99 80 83  5762
SFBcastBegin           1 1.0 1.8461e-0310.3 0.00e+00 0.0 5.8e+02 2.8e+03 1.0e+00  0  0  0  0  0   0  0  1  2  0     0
SFBcastEnd             1 1.0 6.1107e-0451.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set   139            144     24068724     0.
   IS L to G Mapping     3              3     14448024     0.
             Section    70             53        35616     0.
              Vector    15             87     17303640     0.
      Vector Scatter     2             13      1152032     0.
              Matrix     0             29      6806588     0.
      Preconditioner     0             11        10960     0.
       Krylov Solver     0             11        30856     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    50             38        67128     0.
   IS L to G Mapping     4              0            0     0.
              Vector   327            243      6910640     0.
      Vector Scatter    19              2         2192     0.
              Matrix    50              8      4329832     0.
      Preconditioner    12              1          896     0.
       Krylov Solver    12              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 9.82285e-06
Average time for zero size MPI_Send(): 1.37091e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type ml
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
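Taking the ML solver times reported so far (1.73 s on 8 processes, 0.97 s on 16, 0.71 s on 24), strong-scaling speedup and parallel efficiency relative to the 8-process run can be computed as in the small sketch below; only the quoted times come from the log, the rest is illustrative.

# Solver times copied from the ML runs above; everything else is illustrative.
times = {8: 1.725371, 16: 0.9724491, 24: 0.7069969}

base_procs = 8
base_time = times[base_procs]
for procs, t in sorted(times.items()):
    speedup = base_time / t
    efficiency = speedup / (procs / base_procs)
    print(f"{procs:2d} procs: speedup {speedup:4.2f}x, efficiency {efficiency:.1%}")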

=================
 ml 40 1
=================
Discretization: RT
MPI processes 32: solving... 
((35155, 1161600), (35155, 1161600))
	Solver time: 5.819740e-01
	Solver iterations: 18
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 32 processors, by jychang48 Wed Mar  2 17:41:54 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           1.505e+01      1.00013   1.505e+01
Objects:              8.440e+02      1.12533   7.624e+02
Flops:                1.300e+08      1.08171   1.250e+08  3.999e+09
Flops/sec:            8.641e+06      1.08171   8.307e+06  2.658e+08
MPI Messages:         1.004e+04      2.37869   6.666e+03  2.133e+05
MPI Message Lengths:  1.880e+08      9.33983   4.255e+03  9.076e+08
MPI Reductions:       6.040e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4463e+01  96.1%  0.0000e+00   0.0%  3.925e+04  18.4%  3.698e+03       86.9%  1.250e+02  20.7% 
 1:             FEM: 5.8202e-01   3.9%  3.9993e+09 100.0%  1.740e+05  81.6%  5.576e+02       13.1%  4.780e+02  79.1% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 1e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.1604e+0012.6 0.00e+00 0.0 9.3e+03 4.0e+00 4.4e+01  7  0  4  0  7   7  0 24  0 35     0
VecScatterBegin        2 1.0 3.6001e-05 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 1.6928e-0517.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 2.0413e+00 1.1 0.00e+00 0.0 1.4e+04 5.1e+03 2.1e+01 14  0  7  8  3  14  0 36  9 17     0
Mesh Migration         2 1.0 4.3615e-01 1.0 0.00e+00 0.0 2.2e+04 2.9e+04 5.4e+01  3  0 10 70  9   3  0 56 80 43     0
DMPlexInterp           1 1.0 2.1172e+0064348.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DMPlexDistribute       1 1.0 2.2697e+00 1.1 0.00e+00 0.0 9.4e+03 3.5e+04 2.5e+01 15  0  4 36  4  16  0 24 41 20     0
DMPlexDistCones        2 1.0 1.0138e-01 1.1 0.00e+00 0.0 3.2e+03 6.6e+04 4.0e+00  1  0  2 23  1   1  0  8 27  3     0
DMPlexDistLabels       2 1.0 2.7745e-01 1.0 0.00e+00 0.0 1.3e+04 2.7e+04 2.2e+01  2  0  6 39  4   2  0 34 45 18     0
DMPlexDistribOL        1 1.0 2.2781e-01 1.0 0.00e+00 0.0 2.7e+04 1.6e+04 5.0e+01  1  0 13 47  8   2  0 68 54 40     0
DMPlexDistField        3 1.0 3.1382e-02 2.1 0.00e+00 0.0 4.3e+03 6.8e+03 1.2e+01  0  0  2  3  2   0  0 11  4 10     0
DMPlexDistData         2 1.0 1.0031e+0069.7 0.00e+00 0.0 1.0e+04 3.2e+03 6.0e+00  6  0  5  4  1   7  0 26  4  5     0
DMPlexStratify         6 1.5 5.4998e-0129.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 5.1585e-02 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          95 1.0 1.1656e+00 4.3 0.00e+00 0.0 3.8e+04 2.0e+04 4.1e+01  7  0 18 84  7   7  0 96 97 33     0
SFBcastEnd            95 1.0 2.9936e-01 6.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 8.7662e-0318.1 0.00e+00 0.0 1.3e+03 1.3e+04 3.0e+00  0  0  1  2  0   0  0  3  2  2     0
SFReduceEnd            4 1.0 8.8432e-03 5.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 4.0054e-0518.7 0.00e+00 0.0 1.4e+02 2.2e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 1.4091e-04 2.2 0.00e+00 0.0 1.4e+02 2.2e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 1.3969e-0331.2 0.00e+00 0.0 2.9e+02 4.0e+00 1.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecMDot               18 1.0 8.9464e-03 1.4 1.27e+07 1.1 0.0e+00 0.0e+00 1.8e+01  0 10  0  0  3   1 10  0  0  4 44405
VecNorm               19 1.0 3.4175e-03 2.4 1.41e+06 1.1 0.0e+00 0.0e+00 1.9e+01  0  1  0  0  3   0  1  0  0  4 12916
VecScale             266 1.0 1.2009e-03 1.2 1.39e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 35370
VecCopy                1 1.0 1.2088e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               244 1.0 2.0475e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY               77 1.0 4.8566e-04 1.2 6.13e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 40100
VecAYPX               76 1.0 3.9840e-04 1.4 2.70e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 21526
VecMAXPY              19 1.0 4.6327e-03 1.4 1.40e+07 1.1 0.0e+00 0.0e+00 0.0e+00  0 11  0  0  0   1 11  0  0  0 94779
VecScatterBegin      527 1.0 9.9692e-03 1.6 0.00e+00 0.0 1.6e+05 5.3e+02 0.0e+00  0  0 76 10  0   1  0 94 73  0     0
VecScatterEnd        527 1.0 1.0046e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
VecNormalize          19 1.0 3.9191e-03 2.0 2.11e+06 1.1 0.0e+00 0.0e+00 1.9e+01  0  2  0  0  3   0  2  0  0  4 16894
MatMult              199 1.0 3.2215e-02 1.1 2.07e+07 1.0 4.5e+04 8.3e+02 1.4e+02  0 16 21  4 24   5 16 26 31 30 20169
MatMultAdd           148 1.0 1.8383e-02 1.1 1.16e+07 1.1 9.8e+03 1.9e+03 0.0e+00  0  9  5  2  0   3  9  6 16  0 19751
MatSolve              38 1.0 9.1224e-03 1.2 5.99e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  5  0  0  0   1  5  0  0  0 20347
MatSOR               152 1.0 4.2917e-02 1.0 3.26e+07 1.1 9.7e+04 4.7e+02 0.0e+00  0 25 45  5  0   7 25 55 38  0 23694
MatLUFactorSym         1 1.0 4.6015e-05 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 2.2459e-03 1.1 6.08e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  8301
MatILUFactorSym        1 1.0 1.7860e-03 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             2 1.0 8.0991e-04 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 1.7459e-03 8.5 1.69e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  3055
MatResidual           76 1.0 7.5746e-03 1.2 6.14e+06 1.1 3.2e+04 4.7e+02 0.0e+00  0  5 15  2  0   1  5 18 13  0 25052
MatAssemblyBegin      18 1.0 3.6287e-0310.2 0.00e+00 0.0 0.0e+00 0.0e+00 1.4e+01  0  0  0  0  2   0  0  0  0  3     0
MatAssemblyEnd        18 1.0 1.3054e-02 1.2 0.00e+00 0.0 7.0e+03 1.6e+02 8.0e+01  0  0  3  0 13   2  0  4  1 17     0
MatGetRow          24000 1.0 9.4720e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0  16  0  0  0  0     0
MatGetRowIJ            2 1.0 5.1618e-04108.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice       1 1.0 1.8001e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetSubMatrix        4 1.0 1.2710e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetOrdering         2 1.0 6.5613e-04 5.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 1.9316e-01 1.0 0.00e+00 0.0 5.4e+02 3.6e+02 1.2e+01  1  0  0  0  2  33  0  0  0  3     0
MatMatMult             1 1.0 1.2574e-02 1.0 3.11e+05 1.0 1.1e+03 1.2e+03 1.6e+01  0  0  1  0  3   2  0  1  1  3   787
MatMatMultSym          1 1.0 1.1223e-02 1.0 0.00e+00 0.0 9.4e+02 9.4e+02 1.4e+01  0  0  0  0  2   2  0  1  1  3     0
MatMatMultNum          1 1.0 1.3480e-03 1.0 3.11e+05 1.0 1.4e+02 2.9e+03 2.0e+00  0  0  0  0  0   0  0  0  0  0  7345
MatRedundantMat        1 1.0 2.0981e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetLocalMat         2 1.0 1.5619e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol          2 1.0 7.3791e-04 2.4 0.00e+00 0.0 5.4e+02 2.0e+03 0.0e+00  0  0  0  0  0   0  0  0  1  0     0
PCSetUp                4 1.0 2.6482e-01 1.0 3.11e+06 1.0 1.2e+04 1.3e+03 2.2e+02  2  2  6  2 36  45  2  7 13 46   371
PCSetUpOnBlocks       19 1.0 4.1590e-03 1.4 5.87e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0  4318
PCApply               19 1.0 1.3506e-01 1.0 4.98e+07 1.0 1.6e+05 4.3e+02 1.6e+02  1 39 75  8 26  23 39 92 58 33 11590
KSPGMRESOrthog        18 1.0 1.2306e-02 1.3 2.54e+07 1.1 0.0e+00 0.0e+00 1.8e+01  0 20  0  0  3   2 20  0  0  4 64564
KSPSetUp              10 1.0 7.0786e-04 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  2     0
KSPSolve               1 1.0 3.7938e-01 1.0 9.02e+07 1.0 1.7e+05 5.7e+02 4.0e+02  3 71 81 11 66  65 71 99 82 83  7463
SFBcastBegin           1 1.0 1.4801e-03 9.9 0.00e+00 0.0 8.6e+02 2.2e+03 1.0e+00  0  0  0  0  0   0  0  0  2  0     0
SFBcastEnd             1 1.0 4.5705e-0426.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set   165            170     23655892     0.
   IS L to G Mapping     3              3     14326164     0.
             Section    70             53        35616     0.
              Vector    15             87     14178840     0.
      Vector Scatter     2             13       857984     0.
              Matrix     0             29      5131328     0.
      Preconditioner     0             11        10960     0.
       Krylov Solver     0             11        30856     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    50             38        69996     0.
   IS L to G Mapping     4              0            0     0.
              Vector   327            243      5282368     0.
      Vector Scatter    19              2         2192     0.
              Matrix    50              8      3242072     0.
      Preconditioner    12              1          896     0.
       Krylov Solver    12              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 6.19888e-07
Average time for MPI_Barrier(): 9.01222e-06
Average time for zero size MPI_Send(): 1.65403e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type ml
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 ml 40 1
=================
Discretization: RT
MPI processes 40: solving... 
((27890, 1161600), (27890, 1161600))
	Solver time: 5.010350e-01
	Solver iterations: 18
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 40 processors, by jychang48 Wed Mar  2 17:42:13 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           1.486e+01      1.00013   1.486e+01
Objects:              8.660e+02      1.15775   7.635e+02
Flops:                1.061e+08      1.10163   1.011e+08  4.043e+09
Flops/sec:            7.138e+06      1.10159   6.800e+06  2.720e+08
MPI Messages:         1.235e+04      2.82415   7.572e+03  3.029e+05
MPI Message Lengths:  1.855e+08     11.36833   3.104e+03  9.402e+08
MPI Reductions:       6.040e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4362e+01  96.6%  0.0000e+00   0.0%  5.371e+04  17.7%  2.676e+03       86.2%  1.250e+02  20.7% 
 1:             FEM: 5.0110e-01   3.4%  4.0430e+09 100.0%  2.492e+05  82.3%  4.278e+02       13.8%  4.780e+02  79.1% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 1e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.2159e+0011.9 0.00e+00 0.0 1.3e+04 4.0e+00 4.4e+01  7  0  4  0  7   8  0 24  0 35     0
VecScatterBegin        2 1.0 3.5763e-05 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 7.8678e-06 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 2.1240e+00 1.1 0.00e+00 0.0 2.0e+04 3.6e+03 2.1e+01 14  0  7  8  3  15  0 38  9 17     0
Mesh Migration         2 1.0 4.1766e-01 1.0 0.00e+00 0.0 2.9e+04 2.2e+04 5.4e+01  3  0 10 69  9   3  0 54 80 43     0
DMPlexInterp           1 1.0 2.1100e+0061887.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DMPlexDistribute       1 1.0 2.3591e+00 1.1 0.00e+00 0.0 1.4e+04 2.3e+04 2.5e+01 16  0  5 35  4  16  0 26 40 20     0
DMPlexDistCones        2 1.0 9.5764e-02 1.1 0.00e+00 0.0 4.3e+03 5.1e+04 4.0e+00  1  0  1 23  1   1  0  8 27  3     0
DMPlexDistLabels       2 1.0 2.6905e-01 1.0 0.00e+00 0.0 1.7e+04 2.1e+04 2.2e+01  2  0  6 39  4   2  0 32 45 18     0
DMPlexDistribOL        1 1.0 2.0288e-01 1.0 0.00e+00 0.0 3.6e+04 1.2e+04 5.0e+01  1  0 12 47  8   1  0 66 55 40     0
DMPlexDistField        3 1.0 2.9222e-02 2.1 0.00e+00 0.0 5.7e+03 5.2e+03 1.2e+01  0  0  2  3  2   0  0 11  4 10     0
DMPlexDistData         2 1.0 1.0574e+0077.8 0.00e+00 0.0 1.5e+04 2.2e+03 6.0e+00  7  0  5  4  1   7  0 28  4  5     0
DMPlexStratify         6 1.5 5.4311e-0135.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 4.2634e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          95 1.0 1.2205e+00 4.4 0.00e+00 0.0 5.1e+04 1.5e+04 4.1e+01  8  0 17 83  7   8  0 96 97 33     0
SFBcastEnd            95 1.0 3.0737e-01 6.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 9.1023e-0319.9 0.00e+00 0.0 1.7e+03 9.4e+03 3.0e+00  0  0  1  2  0   0  0  3  2  2     0
SFReduceEnd            4 1.0 9.7229e-03 7.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 4.1008e-0521.5 0.00e+00 0.0 1.9e+02 1.8e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 1.1396e-04 2.1 0.00e+00 0.0 1.9e+02 1.8e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 1.7800e-0332.3 0.00e+00 0.0 3.8e+02 4.0e+00 1.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecMDot               18 1.0 7.3071e-03 1.4 1.02e+07 1.1 0.0e+00 0.0e+00 1.8e+01  0 10  0  0  3   1 10  0  0  4 54367
VecNorm               19 1.0 1.8599e-03 1.8 1.13e+06 1.1 0.0e+00 0.0e+00 1.9e+01  0  1  0  0  3   0  1  0  0  4 23733
VecScale             266 1.0 1.0359e-03 1.2 1.14e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 41655
VecCopy                1 1.0 1.0204e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               244 1.0 1.7505e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY               77 1.0 4.1318e-04 1.3 4.91e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 47153
VecAYPX               76 1.0 3.1829e-04 1.3 2.16e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 26956
VecMAXPY              19 1.0 2.7597e-03 1.2 1.12e+07 1.1 0.0e+00 0.0e+00 0.0e+00  0 11  0  0  0   0 11  0  0  0 159106
VecScatterBegin      527 1.0 9.4907e-03 1.8 0.00e+00 0.0 2.3e+05 4.2e+02 0.0e+00  0  0 77 10  0   1  0 94 75  0     0
VecScatterEnd        527 1.0 1.1055e-02 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   2  0  0  0  0     0
VecNormalize          19 1.0 2.2883e-03 1.5 1.70e+06 1.1 0.0e+00 0.0e+00 1.9e+01  0  2  0  0  3   0  2  0  0  4 28934
MatMult              199 1.0 2.8809e-02 1.1 1.66e+07 1.0 6.2e+04 6.6e+02 1.4e+02  0 16 21  4 24   6 16 25 32 30 22581
MatMultAdd           148 1.0 1.5773e-02 1.2 9.31e+06 1.1 1.3e+04 1.6e+03 0.0e+00  0  9  4  2  0   3  9  5 16  0 23019
MatSolve              38 1.0 7.4098e-03 1.1 4.82e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  5  0  0  0   1  5  0  0  0 25130
MatSOR               152 1.0 3.6705e-02 1.0 2.62e+07 1.1 1.4e+05 3.7e+02 0.0e+00  0 25 45  5  0   7 25 55 39  0 27750
MatLUFactorSym         1 1.0 5.2929e-05 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 1.8580e-03 1.1 5.15e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 10544
MatILUFactorSym        1 1.0 1.2209e-03 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             2 1.0 6.3920e-04 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 2.2421e-0315.3 1.35e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2378
MatResidual           76 1.0 6.9928e-03 1.2 4.96e+06 1.1 4.6e+04 3.7e+02 0.0e+00  0  5 15  2  0   1  5 18 13  0 27255
MatAssemblyBegin      18 1.0 3.5076e-0310.1 0.00e+00 0.0 0.0e+00 0.0e+00 1.4e+01  0  0  0  0  2   0  0  0  0  3     0
MatAssemblyEnd        18 1.0 1.1627e-02 1.2 0.00e+00 0.0 1.0e+04 1.2e+02 8.0e+01  0  0  3  0 13   2  0  4  1 17     0
MatGetRow          19200 1.0 7.6632e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0  15  0  0  0  0     0
MatGetRowIJ            2 1.0 5.0902e-04101.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice       1 1.0 2.1482e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetSubMatrix        4 1.0 1.1673e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetOrdering         2 1.0 6.2823e-04 6.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 1.5566e-01 1.0 0.00e+00 0.0 7.2e+02 3.0e+02 1.2e+01  1  0  0  0  2  31  0  0  0  3     0
MatMatMult             1 1.0 1.0531e-02 1.0 2.49e+05 1.0 1.4e+03 9.9e+02 1.6e+01  0  0  0  0  3   2  0  1  1  3   940
MatMatMultSym          1 1.0 9.4621e-03 1.0 0.00e+00 0.0 1.3e+03 7.8e+02 1.4e+01  0  0  0  0  2   2  0  1  1  3     0
MatMatMultNum          1 1.0 1.0662e-03 1.0 2.49e+05 1.0 1.8e+02 2.4e+03 2.0e+00  0  0  0  0  0   0  0  0  0  0  9284
MatRedundantMat        1 1.0 2.4199e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetLocalMat         2 1.0 1.2550e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol          2 1.0 5.9271e-04 2.9 0.00e+00 0.0 7.2e+02 1.7e+03 0.0e+00  0  0  0  0  0   0  0  0  1  0     0
PCSetUp                4 1.0 2.1873e-01 1.0 2.52e+06 1.0 1.7e+04 9.5e+02 2.2e+02  1  2  6  2 36  44  2  7 13 46   454
PCSetUpOnBlocks       19 1.0 3.0646e-03 1.3 4.73e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0  5846
PCApply               19 1.0 1.1932e-01 1.0 4.00e+07 1.0 2.3e+05 3.3e+02 1.6e+02  1 39 76  8 26  24 39 93 59 33 13154
KSPGMRESOrthog        18 1.0 9.8231e-03 1.3 2.03e+07 1.1 0.0e+00 0.0e+00 1.8e+01  0 20  0  0  3   2 20  0  0  4 80884
KSPSetUp              10 1.0 6.3419e-04 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  2     0
KSPSolve               1 1.0 3.1758e-01 1.0 7.21e+07 1.0 2.5e+05 4.4e+02 4.0e+02  2 70 81 12 66  63 70 99 84 83  8928
SFBcastBegin           1 1.0 1.8940e-0310.1 0.00e+00 0.0 1.2e+03 1.8e+03 1.0e+00  0  0  0  0  0   0  0  0  2  0     0
SFBcastEnd             1 1.0 6.5684e-0443.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set   187            192     23335476     0.
   IS L to G Mapping     3              3     14094724     0.
             Section    70             53        35616     0.
              Vector    15             87     12319944     0.
      Vector Scatter     2             13       683624     0.
              Matrix     0             29      4148000     0.
      Preconditioner     0             11        10960     0.
       Krylov Solver     0             11        30856     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    50             38        68468     0.
   IS L to G Mapping     4              0            0     0.
              Vector   327            243      4309184     0.
      Vector Scatter    19              2         2192     0.
              Matrix    50              8      2592528     0.
      Preconditioner    12              1          896     0.
       Krylov Solver    12              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 9.01222e-06
Average time for zero size MPI_Send(): 1.40071e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type ml
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 ml 40 1
=================
Discretization: RT
MPI processes 48: solving... 
((23365, 1161600), (23365, 1161600))
	Solver time: 4.549189e-01
	Solver iterations: 18
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 48 processors, by jychang48 Wed Mar  2 17:42:31 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           1.494e+01      1.00013   1.494e+01
Objects:              8.740e+02      1.16845   7.631e+02
Flops:                8.941e+07      1.11020   8.501e+07  4.081e+09
Flops/sec:            5.986e+06      1.11026   5.692e+06  2.732e+08
MPI Messages:         1.241e+04      2.76359   8.136e+03  3.905e+05
MPI Message Lengths:  1.820e+08     13.08995   2.475e+03  9.666e+08
MPI Reductions:       6.040e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4481e+01  97.0%  0.0000e+00   0.0%  6.669e+04  17.1%  2.121e+03       85.7%  1.250e+02  20.7% 
 1:             FEM: 4.5505e-01   3.0%  4.0806e+09 100.0%  3.239e+05  82.9%  3.544e+02       14.3%  4.780e+02  79.1% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 1e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.2041e+0013.4 0.00e+00 0.0 1.6e+04 4.0e+00 4.4e+01  7  0  4  0  7   8  0 24  0 35     0
VecScatterBegin        2 1.0 3.1948e-05 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 9.0599e-06 4.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 2.1464e+00 1.1 0.00e+00 0.0 2.7e+04 2.8e+03 2.1e+01 14  0  7  8  3  15  0 40  9 17     0
Mesh Migration         2 1.0 4.0869e-01 1.0 0.00e+00 0.0 3.4e+04 1.9e+04 5.4e+01  3  0  9 68  9   3  0 51 80 43     0
DMPlexInterp           1 1.0 2.1102e+0064135.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DMPlexDistribute       1 1.0 2.4017e+00 1.1 0.00e+00 0.0 1.9e+04 1.7e+04 2.5e+01 16  0  5 34  4  17  0 29 40 20     0
DMPlexDistCones        2 1.0 9.4518e-02 1.2 0.00e+00 0.0 5.1e+03 4.3e+04 4.0e+00  1  0  1 23  1   1  0  8 26  3     0
DMPlexDistLabels       2 1.0 2.6648e-01 1.0 0.00e+00 0.0 2.1e+04 1.8e+04 2.2e+01  2  0  5 39  4   2  0 31 45 18     0
DMPlexDistribOL        1 1.0 1.7164e-01 1.1 0.00e+00 0.0 4.2e+04 1.1e+04 5.0e+01  1  0 11 47  8   1  0 64 55 40     0
DMPlexDistField        3 1.0 2.8804e-02 2.2 0.00e+00 0.0 6.8e+03 4.5e+03 1.2e+01  0  0  2  3  2   0  0 10  4 10     0
DMPlexDistData         2 1.0 1.0458e+0067.3 0.00e+00 0.0 2.1e+04 1.7e+03 6.0e+00  7  0  5  4  1   7  0 31  4  5     0
DMPlexStratify         6 1.5 5.3865e-0141.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 3.6523e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          95 1.0 1.2136e+00 4.5 0.00e+00 0.0 6.4e+04 1.3e+04 4.1e+01  8  0 16 83  7   8  0 96 97 33     0
SFBcastEnd            95 1.0 3.0116e-01 6.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 9.5670e-0323.0 0.00e+00 0.0 2.0e+03 8.2e+03 3.0e+00  0  0  1  2  0   0  0  3  2  2     0
SFReduceEnd            4 1.0 9.3460e-03 7.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 3.5048e-0518.4 0.00e+00 0.0 2.2e+02 1.7e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 1.1182e-04 2.5 0.00e+00 0.0 2.2e+02 1.7e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 3.1829e-0358.8 0.00e+00 0.0 4.6e+02 4.0e+00 1.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecMDot               18 1.0 6.1665e-03 1.5 8.50e+06 1.1 0.0e+00 0.0e+00 1.8e+01  0 10  0  0  3   1 10  0  0  4 64423
VecNorm               19 1.0 1.2932e-03 1.5 9.45e+05 1.1 0.0e+00 0.0e+00 1.9e+01  0  1  0  0  3   0  1  0  0  4 34133
VecScale             266 1.0 9.3555e-04 1.2 9.80e+05 1.2 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 46730
VecCopy                1 1.0 7.7009e-05 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               244 1.0 1.3967e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY               77 1.0 3.7766e-04 1.4 4.09e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 51622
VecAYPX               76 1.0 3.3188e-04 1.6 1.80e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 25871
VecMAXPY              19 1.0 2.2688e-03 1.2 9.40e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0 11  0  0  0   0 11  0  0  0 193532
VecScatterBegin      527 1.0 9.3722e-03 1.9 0.00e+00 0.0 3.0e+05 3.5e+02 0.0e+00  0  0 78 11  0   2  0 94 76  0     0
VecScatterEnd        527 1.0 1.1200e-02 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   2  0  0  0  0     0
VecNormalize          19 1.0 1.7309e-03 1.3 1.42e+06 1.1 0.0e+00 0.0e+00 1.9e+01  0  2  0  0  3   0  2  0  0  4 38252
MatMult              199 1.0 2.5053e-02 1.1 1.40e+07 1.1 7.9e+04 5.7e+02 1.4e+02  0 16 20  5 24   5 16 24 32 30 25999
MatMultAdd           148 1.0 1.3936e-02 1.2 7.82e+06 1.1 1.6e+04 1.4e+03 0.0e+00  0  9  4  2  0   3  9  5 16  0 26055
MatSolve              38 1.0 6.1853e-03 1.1 4.06e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  5  0  0  0   1  5  0  0  0 30315
MatSOR               152 1.0 3.2173e-02 1.1 2.18e+07 1.1 1.8e+05 3.1e+02 0.0e+00  0 25 45  6  0   7 25 54 40  0 31732
MatLUFactorSym         1 1.0 6.9857e-05 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 1.5531e-03 1.1 4.69e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 13764
MatILUFactorSym        1 1.0 6.4588e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             2 1.0 5.5194e-04 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 1.5218e-0312.3 1.13e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  3504
MatResidual           76 1.0 6.3055e-03 1.2 4.14e+06 1.1 5.9e+04 3.1e+02 0.0e+00  0  5 15  2  0   1  5 18 13  0 30359
MatAssemblyBegin      18 1.0 3.1672e-03 7.4 0.00e+00 0.0 0.0e+00 0.0e+00 1.4e+01  0  0  0  0  2   0  0  0  0  3     0
MatAssemblyEnd        18 1.0 1.0310e-02 1.2 0.00e+00 0.0 1.3e+04 1.0e+02 8.0e+01  0  0  3  0 13   2  0  4  1 17     0
MatGetRow          16000 1.0 6.3519e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0  14  0  0  0  0     0
MatGetRowIJ            2 1.0 4.5180e-0475.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice       1 1.0 2.7895e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetSubMatrix        4 1.0 9.4771e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetOrdering         2 1.0 5.5814e-04 6.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 1.2936e-01 1.0 0.00e+00 0.0 8.6e+02 2.7e+02 1.2e+01  1  0  0  0  2  28  0  0  0  3     0
MatMatMult             1 1.0 9.1429e-03 1.0 2.08e+05 1.0 1.7e+03 8.9e+02 1.6e+01  0  0  0  0  3   2  0  1  1  3  1083
MatMatMultSym          1 1.0 8.2440e-03 1.0 0.00e+00 0.0 1.5e+03 7.0e+02 1.4e+01  0  0  0  0  2   2  0  0  1  3     0
MatMatMultNum          1 1.0 8.9502e-04 1.0 2.08e+05 1.0 2.2e+02 2.2e+03 2.0e+00  0  0  0  0  0   0  0  0  0  0 11060
MatRedundantMat        1 1.0 3.0804e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetLocalMat         2 1.0 1.0741e-03 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol          2 1.0 5.8627e-04 3.3 0.00e+00 0.0 8.6e+02 1.5e+03 0.0e+00  0  0  0  0  0   0  0  0  1  0     0
PCSetUp                4 1.0 1.8617e-01 1.0 2.14e+06 1.0 2.3e+04 7.5e+02 2.2e+02  1  2  6  2 36  41  2  7 12 46   543
PCSetUpOnBlocks       19 1.0 2.2240e-03 1.1 3.97e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  8045
PCApply               19 1.0 1.0738e-01 1.0 3.36e+07 1.0 3.0e+05 2.8e+02 1.6e+02  1 39 77  9 26  23 39 93 61 33 14674
KSPGMRESOrthog        18 1.0 8.1601e-03 1.3 1.70e+07 1.1 0.0e+00 0.0e+00 1.8e+01  0 19  0  0  3   2 19  0  0  4 97367
KSPSetUp              10 1.0 5.8508e-04 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  2     0
KSPSolve               1 1.0 2.7342e-01 1.0 6.07e+07 1.0 3.2e+05 3.6e+02 4.0e+02  2 70 82 12 66  60 70 99 84 83 10394
SFBcastBegin           1 1.0 3.2558e-0320.1 0.00e+00 0.0 1.4e+03 1.7e+03 1.0e+00  0  0  0  0  0   0  0  0  2  0     0
SFBcastEnd             1 1.0 2.1031e-03163.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set   195            200     22896544     0.
   IS L to G Mapping     3              3     14076928     0.
             Section    70             53        35616     0.
              Vector    15             87     11144984     0.
      Vector Scatter     2             13       575024     0.
              Matrix     0             29      3506540     0.
      Preconditioner     0             11        10960     0.
       Krylov Solver     0             11        30856     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    50             38        60332     0.
   IS L to G Mapping     4              0            0     0.
              Vector   327            243      3661024     0.
      Vector Scatter    19              2         2192     0.
              Matrix    50              8      2166428     0.
      Preconditioner    12              1          896     0.
       Krylov Solver    12              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 9.63211e-06
Average time for zero size MPI_Send(): 1.35601e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type ml
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 ml 40 1
=================
Discretization: RT
MPI processes 56: solving... 
((20104, 1161600), (20104, 1161600))
	Solver time: 4.187219e-01
	Solver iterations: 18
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 56 processors, by jychang48 Wed Mar  2 17:42:50 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           1.527e+01      1.00017   1.527e+01
Objects:              8.920e+02      1.18933   7.639e+02
Flops:                7.757e+07      1.12535   7.363e+07  4.123e+09
Flops/sec:            5.079e+06      1.12536   4.821e+06  2.700e+08
MPI Messages:         1.388e+04      2.76367   8.987e+03  5.033e+05
MPI Message Lengths:  1.801e+08     15.41287   1.981e+03  9.969e+08
MPI Reductions:       6.040e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4853e+01  97.3%  0.0000e+00   0.0%  8.318e+04  16.5%  1.686e+03       85.1%  1.250e+02  20.7% 
 1:             FEM: 4.1873e-01   2.7%  4.1231e+09 100.0%  4.201e+05  83.5%  2.951e+02       14.9%  4.780e+02  79.1% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 1e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.2200e+0012.6 0.00e+00 0.0 2.0e+04 4.0e+00 4.4e+01  7  0  4  0  7   8  0 24  0 35     0
VecScatterBegin        2 1.0 2.7418e-05 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 9.0599e-06 4.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 2.1967e+00 1.1 0.00e+00 0.0 3.5e+04 2.2e+03 2.1e+01 14  0  7  8  3  15  0 42  9 17     0
Mesh Migration         2 1.0 4.0311e-01 1.0 0.00e+00 0.0 4.1e+04 1.6e+04 5.4e+01  3  0  8 68  9   3  0 50 79 43     0
DMPlexInterp           1 1.0 2.1118e+0061941.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DMPlexDistribute       1 1.0 2.4639e+00 1.1 0.00e+00 0.0 2.6e+04 1.3e+04 2.5e+01 16  0  5 33  4  17  0 31 39 20     0
DMPlexDistCones        2 1.0 9.4431e-02 1.2 0.00e+00 0.0 6.2e+03 3.6e+04 4.0e+00  1  0  1 22  1   1  0  7 26  3     0
DMPlexDistLabels       2 1.0 2.6321e-01 1.0 0.00e+00 0.0 2.5e+04 1.5e+04 2.2e+01  2  0  5 38  4   2  0 30 45 18     0
DMPlexDistribOL        1 1.0 1.5705e-01 1.1 0.00e+00 0.0 5.1e+04 9.2e+03 5.0e+01  1  0 10 47  8   1  0 62 56 40     0
DMPlexDistField        3 1.0 3.1058e-02 2.4 0.00e+00 0.0 8.3e+03 3.8e+03 1.2e+01  0  0  2  3  2   0  0 10  4 10     0
DMPlexDistData         2 1.0 1.0646e+0054.2 0.00e+00 0.0 2.8e+04 1.3e+03 6.0e+00  7  0  6  4  1   7  0 33  4  5     0
DMPlexStratify         6 1.5 5.4248e-0150.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 3.2736e-02 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          95 1.0 1.2264e+00 4.1 0.00e+00 0.0 8.0e+04 1.0e+04 4.1e+01  8  0 16 82  7   8  0 96 97 33     0
SFBcastEnd            95 1.0 2.9831e-01 9.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 9.1031e-0324.5 0.00e+00 0.0 2.5e+03 6.8e+03 3.0e+00  0  0  0  2  0   0  0  3  2  2     0
SFReduceEnd            4 1.0 9.3877e-03 7.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 3.6001e-0518.9 0.00e+00 0.0 2.7e+02 1.5e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 1.2803e-04 3.4 0.00e+00 0.0 2.7e+02 1.5e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 1.4439e-0329.0 0.00e+00 0.0 5.6e+02 4.0e+00 1.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecMDot               18 1.0 6.0709e-03 1.5 7.29e+06 1.1 0.0e+00 0.0e+00 1.8e+01  0 10  0  0  3   1 10  0  0  4 65437
VecNorm               19 1.0 1.3041e-03 1.4 8.10e+05 1.1 0.0e+00 0.0e+00 1.9e+01  0  1  0  0  3   0  1  0  0  4 33846
VecScale             266 1.0 8.6451e-04 1.2 8.50e+05 1.2 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 51332
VecCopy                1 1.0 7.3910e-05 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               244 1.0 1.2527e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY               77 1.0 3.4428e-04 1.5 3.51e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 56644
VecAYPX               76 1.0 2.8181e-04 1.5 1.54e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 30478
VecMAXPY              19 1.0 1.9228e-03 1.2 8.05e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0 11  0  0  0   0 11  0  0  0 228352
VecScatterBegin      527 1.0 9.7260e-03 2.0 0.00e+00 0.0 3.9e+05 2.9e+02 0.0e+00  0  0 78 12  0   2  0 94 78  0     0
VecScatterEnd        527 1.0 1.1311e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   2  0  0  0  0     0
VecNormalize          19 1.0 1.6813e-03 1.3 1.21e+06 1.1 0.0e+00 0.0e+00 1.9e+01  0  2  0  0  3   0  2  0  0  4 39380
MatMult              199 1.0 2.4066e-02 1.1 1.19e+07 1.1 1.0e+05 4.8e+02 1.4e+02  0 16 20  5 24   6 16 24 33 30 27088
MatMultAdd           148 1.0 1.2908e-02 1.2 6.69e+06 1.1 1.9e+04 1.3e+03 0.0e+00  0  9  4  2  0   3  9  5 16  0 28129
MatSolve              38 1.0 5.4102e-03 1.1 3.52e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  5  0  0  0   1  5  0  0  0 35022
MatSOR               152 1.0 3.0248e-02 1.1 1.88e+07 1.1 2.3e+05 2.6e+02 0.0e+00  0 25 45  6  0   7 25 54 41  0 33772
MatLUFactorSym         1 1.0 1.0395e-04 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 1.3649e-03 1.1 4.56e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 17818
MatILUFactorSym        1 1.0 5.8317e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             2 1.0 5.0116e-04 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 2.1529e-0319.4 9.68e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2476
MatResidual           76 1.0 6.1171e-03 1.2 3.58e+06 1.1 7.6e+04 2.6e+02 0.0e+00  0  5 15  2  0   1  5 18 14  0 31389
MatAssemblyBegin      18 1.0 3.7098e-03 7.8 0.00e+00 0.0 0.0e+00 0.0e+00 1.4e+01  0  0  0  0  2   1  0  0  0  3     0
MatAssemblyEnd        18 1.0 1.0228e-02 1.2 0.00e+00 0.0 1.7e+04 8.6e+01 8.0e+01  0  0  3  0 13   2  0  4  1 17     0
MatGetRow          13716 1.0 5.4721e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0  13  0  0  0  0     0
MatGetRowIJ            2 1.0 5.5599e-0493.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice       1 1.0 3.1495e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetSubMatrix        4 1.0 8.2636e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetOrdering         2 1.0 6.3801e-04 8.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 1.1296e-01 1.0 0.00e+00 0.0 1.0e+03 2.4e+02 1.2e+01  1  0  0  0  2  27  0  0  0  3     0
MatMatMult             1 1.0 8.5900e-03 1.0 1.78e+05 1.0 2.1e+03 8.0e+02 1.6e+01  0  0  0  0  3   2  0  0  1  3  1152
MatMatMultSym          1 1.0 7.7760e-03 1.0 0.00e+00 0.0 1.8e+03 6.3e+02 1.4e+01  0  0  0  0  2   2  0  0  1  3     0
MatMatMultNum          1 1.0 8.0490e-04 1.0 1.78e+05 1.0 2.6e+02 2.0e+03 2.0e+00  0  0  0  0  0   0  0  0  0  0 12296
MatRedundantMat        1 1.0 3.4714e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetLocalMat         2 1.0 9.2196e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol          2 1.0 4.9806e-04 2.9 0.00e+00 0.0 1.0e+03 1.4e+03 0.0e+00  0  0  0  0  0   0  0  0  1  0     0
PCSetUp                4 1.0 1.6844e-01 1.0 1.89e+06 1.0 2.9e+04 6.0e+02 2.2e+02  1  3  6  2 36  40  3  7 12 46   617
PCSetUpOnBlocks       19 1.0 1.9562e-03 1.1 3.40e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  9125
PCApply               19 1.0 1.0325e-01 1.0 2.89e+07 1.0 3.9e+05 2.3e+02 1.6e+02  1 38 78  9 26  24 38 94 62 33 15320
KSPGMRESOrthog        18 1.0 7.8821e-03 1.3 1.46e+07 1.1 0.0e+00 0.0e+00 1.8e+01  0 19  0  0  3   2 19  0  0  4 100801
KSPSetUp              10 1.0 5.4169e-04 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  2     0
KSPSolve               1 1.0 2.5057e-01 1.0 5.19e+07 1.0 4.2e+05 3.0e+02 4.0e+02  2 69 83 13 66  60 69 99 85 83 11366
SFBcastBegin           1 1.0 1.5159e-03 8.7 0.00e+00 0.0 1.7e+03 1.5e+03 1.0e+00  0  0  0  0  0   0  0  0  2  0     0
SFBcastEnd             1 1.0 4.6587e-0431.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set   213            218     22660336     0.
   IS L to G Mapping     3              3     12801292     0.
             Section    70             53        35616     0.
              Vector    15             87     10301560     0.
      Vector Scatter     2             13       496760     0.
              Matrix     0             29      3056328     0.
      Preconditioner     0             11        10960     0.
       Krylov Solver     0             11        30856     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    50             38        55528     0.
   IS L to G Mapping     4              0            0     0.
              Vector   327            243      3202296     0.
      Vector Scatter    19              2         2192     0.
              Matrix    50              8      1858040     0.
      Preconditioner    12              1          896     0.
       Krylov Solver    12              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 6.19888e-07
Average time for MPI_Barrier(): 9.39369e-06
Average time for zero size MPI_Send(): 1.51566e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type ml
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
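
Every entry in the option table above carries a "solver_" prefix because an options prefix has been attached to the outer KSP, so it only picks up options that begin with that string. Below is a minimal petsc4py sketch of the prefix mechanism (not the Firedrake driver that produced these logs); the 4x4 diagonal system A, b, x is a placeholder so the snippet runs on its own.

# Minimal petsc4py sketch of the "solver_" options prefix; A, b, x are placeholders.
from petsc4py import PETSc

A = PETSc.Mat().create()
A.setSizes([4, 4])
A.setType('aij')
A.setPreallocationNNZ(1)
for i in range(4):
    A.setValue(i, i, 2.0)          # simple diagonal operator, just to have something to solve
A.assemble()
b = A.createVecLeft(); b.set(1.0)
x = A.createVecRight()

opts = PETSc.Options()
opts['solver_ksp_type'] = 'gmres'  # same spelling as "-solver_ksp_type gmres" above
opts['solver_ksp_rtol'] = 1e-7

ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setOptionsPrefix('solver_')    # this is what puts "solver_" in front of every option
ksp.setFromOptions()               # reads only the options beginning with "solver_"
ksp.solve(b, x)
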
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

=================
 ml 40 1
=================
Discretization: RT
MPI processes 64: solving... 
((17544, 1161600), (17544, 1161600))
	Solver time: 3.889170e-01
	Solver iterations: 18
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Darcy_FE.py on a arch-linux2-c-opt named wf153.localdomain with 64 processors, by jychang48 Wed Mar  2 17:43:09 2016
Using Petsc Development GIT revision: v3.6.3-1924-ge972cd5  GIT Date: 2016-01-01 10:01:13 -0600

                         Max       Max/Min        Avg      Total 
Time (sec):           1.513e+01      1.00019   1.513e+01
Objects:              9.060e+02      1.21123   7.647e+02
Flops:                6.855e+07      1.13507   6.507e+07  4.164e+09
Flops/sec:            4.532e+06      1.13513   4.302e+06  2.753e+08
MPI Messages:         1.488e+04      2.83578   9.700e+03  6.208e+05
MPI Message Lengths:  1.790e+08     17.44347   1.649e+03  1.024e+09
MPI Reductions:       6.040e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4737e+01  97.4%  0.0000e+00   0.0%  1.007e+05  16.2%  1.395e+03       84.6%  1.250e+02  20.7% 
 1:             FEM: 3.8892e-01   2.6%  4.1645e+09 100.0%  5.201e+05  83.8%  2.538e+02       15.4%  4.780e+02  79.1% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 1e-6 * (sum of flops over all processors)/(max time over all processors)
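
As a quick sanity check of the Total Mflop/s column, here is a small Python calculation (not part of the PETSc output) using the MatMult row of the FEM stage in the table below for this 64-process run. Since the log prints only the per-process maximum flop count, the flop sum is approximated as max * nproc, so the result lands a little above the tabulated 28955.

# Approximate Total Mflop/s for MatMult (FEM stage, 64 processes) from the logged maxima.
nproc     = 64         # "with 64 processors" in the header above
max_flops = 1.04e7     # Max flops over all processes for MatMult
max_time  = 2.2541e-2  # Max time (seconds) over all processes for MatMult

approx_flop_sum = max_flops * nproc          # upper bound on the sum over processes
mflops = 1e-6 * approx_flop_sum / max_time   # formula from the line above
print(round(mflops))                         # ~29528, vs. 28955 reported in the table
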
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         44 1.0 1.2409e+0012.9 0.00e+00 0.0 2.4e+04 4.0e+00 4.4e+01  8  0  4  0  7   8  0 24  0 35     0
VecScatterBegin        2 1.0 1.8835e-05 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd          2 1.0 1.0729e-05 5.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Mesh Partition         2 1.0 2.2316e+00 1.1 0.00e+00 0.0 4.4e+04 1.8e+03 2.1e+01 15  0  7  8  3  15  0 44  9 17     0
Mesh Migration         2 1.0 3.9511e-01 1.0 0.00e+00 0.0 4.9e+04 1.4e+04 5.4e+01  3  0  8 67  9   3  0 48 79 43     0
DMPlexInterp           1 1.0 2.1213e+0062219.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DMPlexDistribute       1 1.0 2.5055e+00 1.1 0.00e+00 0.0 3.3e+04 1.0e+04 2.5e+01 17  0  5 32  4  17  0 33 38 20     0
DMPlexDistCones        2 1.0 9.2890e-02 1.2 0.00e+00 0.0 7.2e+03 3.1e+04 4.0e+00  1  0  1 22  1   1  0  7 26  3     0
DMPlexDistLabels       2 1.0 2.5924e-01 1.0 0.00e+00 0.0 2.9e+04 1.3e+04 2.2e+01  2  0  5 38  4   2  0 29 45 18     0
DMPlexDistribOL        1 1.0 1.4374e-01 1.1 0.00e+00 0.0 6.1e+04 8.0e+03 5.0e+01  1  0 10 47  8   1  0 60 56 40     0
DMPlexDistField        3 1.0 3.1577e-02 2.4 0.00e+00 0.0 9.7e+03 3.4e+03 1.2e+01  0  0  2  3  2   0  0 10  4 10     0
DMPlexDistData         2 1.0 1.0807e+0052.5 0.00e+00 0.0 3.5e+04 1.0e+03 6.0e+00  7  0  6  4  1   7  0 35  4  5     0
DMPlexStratify         6 1.5 5.4088e-0157.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            51 1.0 2.7965e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          95 1.0 1.2455e+00 4.1 0.00e+00 0.0 9.7e+04 8.7e+03 4.1e+01  8  0 16 82  7   8  0 96 97 33     0
SFBcastEnd            95 1.0 3.0530e-0110.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
SFReduceBegin          4 1.0 9.7501e-0323.5 0.00e+00 0.0 2.9e+03 5.8e+03 3.0e+00  0  0  0  2  0   0  0  3  2  2     0
SFReduceEnd            4 1.0 1.0731e-02 8.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpBegin         1 1.0 3.3140e-0517.4 0.00e+00 0.0 3.2e+02 1.3e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFFetchOpEnd           1 1.0 1.0800e-04 2.5 0.00e+00 0.0 3.2e+02 1.3e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: FEM

BuildTwoSided          1 1.0 1.4801e-0327.5 0.00e+00 0.0 6.5e+02 4.0e+00 1.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecMDot               18 1.0 5.7650e-03 1.6 6.39e+06 1.1 0.0e+00 0.0e+00 1.8e+01  0 10  0  0  3   1 10  0  0  4 68909
VecNorm               19 1.0 1.2131e-03 1.4 7.10e+05 1.1 0.0e+00 0.0e+00 1.9e+01  0  1  0  0  3   0  1  0  0  4 36388
VecScale             266 1.0 8.2517e-04 1.3 7.55e+05 1.2 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 54483
VecCopy                1 1.0 6.3181e-05 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               244 1.0 1.0672e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY               77 1.0 3.1257e-04 1.5 3.08e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 62425
VecAYPX               76 1.0 2.6488e-04 1.5 1.35e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 32446
VecMAXPY              19 1.0 1.6687e-03 1.1 7.06e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0 11  0  0  0   0 11  0  0  0 263131
VecScatterBegin      527 1.0 9.8925e-03 2.1 0.00e+00 0.0 4.9e+05 2.5e+02 0.0e+00  0  0 78 12  0   2  0 93 79  0     0
VecScatterEnd        527 1.0 1.3103e-02 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   2  0  0  0  0     0
VecNormalize          19 1.0 1.5395e-03 1.3 1.06e+06 1.1 0.0e+00 0.0e+00 1.9e+01  0  2  0  0  3   0  2  0  0  4 43009
MatMult              199 1.0 2.2541e-02 1.1 1.04e+07 1.0 1.2e+05 4.3e+02 1.4e+02  0 16 20  5 24   6 16 23 33 30 28955
MatMultAdd           148 1.0 1.1942e-02 1.2 5.84e+06 1.1 2.2e+04 1.1e+03 0.0e+00  0  9  4  2  0   3  9  4 16  0 30407
MatSolve              38 1.0 4.7653e-03 1.1 3.14e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  5  0  0  0   1  5  0  0  0 40351
MatSOR               152 1.0 2.8343e-02 1.1 1.65e+07 1.1 2.8e+05 2.3e+02 0.0e+00  0 25 45  6  0   7 25 54 41  0 36104
MatLUFactorSym         1 1.0 1.1396e-04 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         2 1.0 1.2221e-03 1.1 4.69e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 23627
MatILUFactorSym        1 1.0 5.0211e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             2 1.0 4.3702e-04 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               2 1.0 1.7388e-0318.5 8.46e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  3066
MatResidual           76 1.0 6.0871e-03 1.2 3.14e+06 1.1 9.4e+04 2.3e+02 0.0e+00  0  5 15  2  0   1  5 18 14  0 31668
MatAssemblyBegin      18 1.0 3.1993e-03 6.1 0.00e+00 0.0 0.0e+00 0.0e+00 1.4e+01  0  0  0  0  2   1  0  0  0  3     0
MatAssemblyEnd        18 1.0 8.8515e-03 1.2 0.00e+00 0.0 2.2e+04 7.4e+01 8.0e+01  0  0  3  0 13   2  0  4  1 17     0
MatGetRow          12000 1.0 4.8211e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0  12  0  0  0  0     0
MatGetRowIJ            2 1.0 5.0211e-0484.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice       1 1.0 4.3511e-04 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetSubMatrix        4 1.0 7.3576e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetOrdering         2 1.0 5.8365e-04 7.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                1 1.0 9.8725e-02 1.0 0.00e+00 0.0 1.2e+03 2.2e+02 1.2e+01  1  0  0  0  2  25  0  0  0  3     0
MatMatMult             1 1.0 7.8750e-03 1.0 1.56e+05 1.0 2.4e+03 7.3e+02 1.6e+01  0  0  0  0  3   2  0  0  1  3  1257
MatMatMultSym          1 1.0 7.1521e-03 1.0 0.00e+00 0.0 2.1e+03 5.8e+02 1.4e+01  0  0  0  0  2   2  0  0  1  3     0
MatMatMultNum          1 1.0 7.3504e-04 1.1 1.56e+05 1.0 3.1e+02 1.8e+03 2.0e+00  0  0  0  0  0   0  0  0  0  0 13467
MatRedundantMat        1 1.0 4.6396e-04 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetLocalMat         2 1.0 8.2898e-04 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol          2 1.0 4.4298e-04 2.5 0.00e+00 0.0 1.2e+03 1.2e+03 0.0e+00  0  0  0  0  0   0  0  0  1  0     0
PCSetUp                4 1.0 1.4942e-01 1.0 1.72e+06 1.0 3.6e+04 5.0e+02 2.2e+02  1  3  6  2 36  38  3  7 12 46   726
PCSetUpOnBlocks       19 1.0 1.7056e-03 1.1 2.96e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 10448
PCApply               19 1.0 9.6567e-02 1.0 2.55e+07 1.1 4.9e+05 2.0e+02 1.6e+02  1 38 79 10 26  25 38 94 63 33 16484
KSPGMRESOrthog        18 1.0 7.3059e-03 1.5 1.28e+07 1.1 0.0e+00 0.0e+00 1.8e+01  0 19  0  0  3   2 19  0  0  4 108752
KSPSetUp              10 1.0 4.7541e-04 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  2     0
KSPSolve               1 1.0 2.2602e-01 1.0 4.55e+07 1.0 5.2e+05 2.6e+02 4.0e+02  1 69 83 13 66  58 69 99 86 83 12644
SFBcastBegin           1 1.0 1.5361e-0312.2 0.00e+00 0.0 2.0e+03 1.4e+03 1.0e+00  0  0  0  0  0   0  0  0  2  0     0
SFBcastEnd             1 1.0 8.6808e-0472.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container     6              3         1728     0.
              Viewer     1              0            0     0.
           Index Set   227            232     22507744     0.
   IS L to G Mapping     3              3     12541056     0.
             Section    70             53        35616     0.
              Vector    15             87      9643080     0.
      Vector Scatter     2             13       435320     0.
              Matrix     0             29      2741220     0.
      Preconditioner     0             11        10960     0.
       Krylov Solver     0             11        30856     0.
    Distributed Mesh    14              8        38248     0.
    GraphPartitioner     6              5         3060     0.
Star Forest Bipartite Graph    74             63        53256     0.
     Discrete System    14              8         6912     0.

--- Event Stage 1: FEM

           Index Set    50             38        53548     0.
   IS L to G Mapping     4              0            0     0.
              Vector   327            243      2852408     0.
      Vector Scatter    19              2         2192     0.
              Matrix    50              8      1629940     0.
      Preconditioner    12              1          896     0.
       Krylov Solver    12              1         1352     0.
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 9.20296e-06
Average time for zero size MPI_Send(): 1.65403e-06
#PETSc Option Table entries:
-log_summary
-solver_fieldsplit_0_ksp_type preonly
-solver_fieldsplit_0_pc_type bjacobi
-solver_fieldsplit_1_ksp_type preonly
-solver_fieldsplit_1_pc_type ml
-solver_ksp_rtol 1e-7
-solver_ksp_type gmres
-solver_pc_fieldsplit_schur_fact_type upper
-solver_pc_fieldsplit_schur_precondition selfp
-solver_pc_fieldsplit_type schur
-solver_pc_type fieldsplit
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco=/users/jychang48/externalpackages/Chaco-2.2-p2.tar.gz --download-ctetgen=/users/jychang48/externalpackages/ctetgen-0.4.tar.gz --download-exodusii=/users/jychang48/externalpackages/exodus-5.24.tar.bz2 --download-fblaslapack=/users/jychang48/externalpackages/fblaslapack-3.4.2.tar.gz --download-hdf5=/users/jychang48/externalpackages/hdf5-1.8.12.tar.gz --download-hypre=/users/jychang48/externalpackages/hypre-2.10.0b-p1.tar.gz --download-metis=/users/jychang48/externalpackages/metis-5.1.0-p1.tar.gz --download-ml=/users/jychang48/externalpackages/ml-6.2-p3.tar.gz --download-mumps=/users/jychang48/externalpackages/MUMPS_5.0.1-p1.tar.gz --download-netcdf=/users/jychang48/externalpackages/netcdf-4.3.2.tar.gz --download-parmetis=/users/jychang48/externalpackages/parmetis-4.0.3-p2.tar.gz --download-scalapack=/users/jychang48/externalpackages/scalapack-2.0.2.tgz --download-superlu_dist --download-triangle=/users/jychang48/externalpackages/Triangle.tar.gz --with-cc=mpicc --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-papi=/usr/projects/hpcsoft/toss2/common/papi/5.4.1 --with-shared-libraries COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Fri Jan  1 21:44:06 2016 on wf-fe2.lanl.gov 
Machine characteristics: Linux-2.6.32-573.8.1.2chaos.ch5.4.x86_64-x86_64-with-redhat-6.7-Santiago
Using PETSc directory: /users/jychang48/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/include -I/users/jychang48/petsc/arch-linux2-c-opt/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi/opal/mca/hwloc/hwloc132/hwloc/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include -I/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/include/openmpi
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/users/jychang48/petsc/arch-linux2-c-opt/lib -L/users/jychang48/petsc/arch-linux2-c-opt/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_4.2 -lHYPRE -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lflapack -lfblas -lparmetis -lmetis -lchaco -lexoIIv2for -lexodus -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -ltriangle -lX11 -lhwloc -lctetgen -lssl -lcrypto -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -L/usr/projects/hpcsoft/toss2.2/wolf/openmpi/1.6.5-gcc-4.8/lib -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib/gcc -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib64 -Wl,-rpath,/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -L/turquoise/usr/projects/hpcsoft/toss2/common/gcc/4.8.2/lib -ldl -lmpi -losmcomp -lrdmacm -libverbs -lsctp -lrt -lnsl -lutil -lpsm_infinipath -lpmi -lnuma -lgcc_s -lpthread -ldl 
-----------------------------------------

