     F I N I T E   E L E M E N T   A N A L Y S I S   P R O G R A M

        FEAP (C) Regents of the University of California
                 All Rights Reserved.
                 VERSION: Release 8.4.1d
                    DATE: 01 January 2014

     Files are set as:   Status  Filename

       Input   (read ) : Exists  Icube_0001
       Output  (write) : Exists  Ocube_0001
       Restart (read ) : New     Rcube_0001
       Restart (write) : New     Rcube_0001
       Plots   (write) : New     Pcube_0001

     Caution, existing write files will be overwritten.

     Are filenames correct?( y or n; r = redefine all, s = stop) :

     R U N N I N G    F E A P    P R O B L E M    N O W

     --> Please report errors by e-mail to:
         feap@ce.berkeley.edu

  0 KSP Residual norm 1.266581117211e-01
  1 KSP Residual norm 9.014704066091e-03
  2 KSP Residual norm 3.237516386729e-03
  3 KSP Residual norm 6.444777226351e-04
  4 KSP Residual norm 6.342235112452e-05
  5 KSP Residual norm 1.146335639268e-05
  6 KSP Residual norm 3.173304611887e-06
  7 KSP Residual norm 4.119835364625e-07
  8 KSP Residual norm 4.991941760052e-08
  9 KSP Residual norm 1.590879862079e-08
 10 KSP Residual norm 2.663825097732e-09
 11 KSP Residual norm 3.616596060953e-10
KSP Object: 2 MPI processes
  type: cg
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-08, absolute=1e-16, divergence=1e+16
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 2 MPI processes
  type: gamg
    MG: type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object:    (mg_coarse_)     2 MPI processes
      type: preonly
      maximum iterations=1, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (mg_coarse_)     2 MPI processes
      type: bjacobi
        block Jacobi: number of blocks = 2
        Local solve is same for all blocks, in the following KSP and PC objects:
        KSP Object:        (mg_coarse_sub_)         1 MPI processes
          type: preonly
          maximum iterations=1, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
        PC Object:        (mg_coarse_sub_)         1 MPI processes
          type: lu
            LU: out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
            matrix ordering: nd
            factor fill ratio given 5, needed 1.08247
              Factored matrix follows:
                Matrix Object:                 1 MPI processes
                  type: seqaij
                  rows=96, cols=96, bs=6
                  package used to perform factorization: petsc
                  total: nonzeros=7560, allocated nonzeros=7560
                  total number of mallocs used during MatSetValues calls =0
                    using I-node routines: found 27 nodes, limit used is 5
          linear system matrix = precond matrix:
          Matrix Object:           1 MPI processes
            type: seqaij
            rows=96, cols=96, bs=6
            total: nonzeros=6984, allocated nonzeros=6984
            total number of mallocs used during MatSetValues calls =0
              using I-node routines: found 32 nodes, limit used is 5
      linear system matrix = precond matrix:
      Matrix Object:       2 MPI processes
        type: mpiaij
        rows=96, cols=96, bs=6
        total: nonzeros=6984, allocated nonzeros=6984
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 32 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (mg_levels_1_)     2 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.170852, max = 3.58789
      maximum iterations=1
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_1_)     2 MPI processes
      type: jacobi
      linear system matrix = precond matrix:
      Matrix Object:       2 MPI processes
        type: mpiaij
        rows=2013, cols=2013, bs=3
        total: nonzeros=100899, allocated nonzeros=100899
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 336 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Matrix Object:   2 MPI processes
    type: mpiaij
    rows=2013, cols=2013, bs=3
    total: nonzeros=100899, allocated nonzeros=100899
    total number of mallocs used during MatSetValues calls =0
      using I-node (on process 0) routines: found 336 nodes, limit used is 5
  0 KSP Residual norm 2.427936673019e-05
  1 KSP Residual norm 4.430719243198e-06
  2 KSP Residual norm 7.423194767525e-07
  3 KSP Residual norm 1.195825708861e-07
  4 KSP Residual norm 2.172335608327e-08
  5 KSP Residual norm 4.148761279987e-09
  6 KSP Residual norm 7.637937354691e-10
  7 KSP Residual norm 1.464316620831e-10
  8 KSP Residual norm 2.764846540774e-11
  9 KSP Residual norm 4.852929768008e-12
 10 KSP Residual norm 8.728520322047e-13
 11 KSP Residual norm 1.395722217412e-13
KSP Object: 2 MPI processes
  type: cg
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-08, absolute=1e-16, divergence=1e+16
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 2 MPI processes
  type: gamg
    MG: type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object:    (mg_coarse_)     2 MPI processes
      type: preonly
      maximum iterations=1, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (mg_coarse_)     2 MPI processes
      type: bjacobi
        block Jacobi: number of blocks = 2
        Local solve is same for all blocks, in the following KSP and PC objects:
        KSP Object:        (mg_coarse_sub_)         1 MPI processes
          type: preonly
          maximum iterations=1, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
        PC Object:        (mg_coarse_sub_)         1 MPI processes
          type: lu
            LU: out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
            matrix ordering: nd
            factor fill ratio given 5, needed 1.08247
              Factored matrix follows:
                Matrix Object:                 1 MPI processes
                  type: seqaij
                  rows=96, cols=96, bs=6
                  package used to perform factorization: petsc
                  total: nonzeros=7560, allocated nonzeros=7560
                  total number of mallocs used during MatSetValues calls =0
                    using I-node routines: found 27 nodes, limit used is 5
          linear system matrix = precond matrix:
          Matrix Object:           1 MPI processes
            type: seqaij
            rows=96, cols=96, bs=6
            total: nonzeros=6984, allocated nonzeros=6984
            total number of mallocs used during MatSetValues calls =0
              using I-node routines: found 32 nodes, limit used is 5
      linear system matrix = precond matrix:
      Matrix Object:       2 MPI processes
        type: mpiaij
        rows=96, cols=96, bs=6
        total: nonzeros=6984, allocated nonzeros=6984
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 32 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (mg_levels_1_)     2 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.170984, max = 3.59065
      maximum iterations=1
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_1_)     2 MPI processes
      type: jacobi
      linear system matrix = precond matrix:
      Matrix Object:       2 MPI processes
        type: mpiaij
        rows=2013, cols=2013, bs=3
        total: nonzeros=100899, allocated nonzeros=100899
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 336 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Matrix Object:   2 MPI processes
    type: mpiaij
    rows=2013, cols=2013, bs=3
    total: nonzeros=100899, allocated nonzeros=100899
    total number of mallocs used during MatSetValues calls =0
      using I-node (on process 0) routines: found 336 nodes, limit used is 5
  0 KSP Residual norm 1.257714930168e-10
  1 KSP Residual norm 2.887546908779e-11
  2 KSP Residual norm 5.122654634680e-12
  3 KSP Residual norm 8.988866569058e-13
  4 KSP Residual norm 1.775803815901e-13
  5 KSP Residual norm 3.141936676136e-14
  6 KSP Residual norm 3.986376658673e-15
  7 KSP Residual norm 6.909058218567e-16
  8 KSP Residual norm 1.363174651176e-16
  9 KSP Residual norm 2.146495218410e-17
KSP Object: 2 MPI processes
  type: cg
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-08, absolute=1e-16, divergence=1e+16
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 2 MPI processes
  type: gamg
    MG: type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object:    (mg_coarse_)     2 MPI processes
      type: preonly
      maximum iterations=1, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (mg_coarse_)     2 MPI processes
      type: bjacobi
        block Jacobi: number of blocks = 2
        Local solve is same for all blocks, in the following KSP and PC objects:
        KSP Object:        (mg_coarse_sub_)         1 MPI processes
          type: preonly
          maximum iterations=1, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
        PC Object:        (mg_coarse_sub_)         1 MPI processes
          type: lu
            LU: out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
            matrix ordering: nd
            factor fill ratio given 5, needed 1.08247
              Factored matrix follows:
                Matrix Object:                 1 MPI processes
                  type: seqaij
                  rows=96, cols=96, bs=6
                  package used to perform factorization: petsc
                  total: nonzeros=7560, allocated nonzeros=7560
                  total number of mallocs used during MatSetValues calls =0
                    using I-node routines: found 27 nodes, limit used is 5
          linear system matrix = precond matrix:
          Matrix Object:           1 MPI processes
            type: seqaij
            rows=96, cols=96, bs=6
            total: nonzeros=6984, allocated nonzeros=6984
            total number of mallocs used during MatSetValues calls =0
              using I-node routines: found 32 nodes, limit used is 5
      linear system matrix = precond matrix:
      Matrix Object:       2 MPI processes
        type: mpiaij
        rows=96, cols=96, bs=6
        total: nonzeros=6984, allocated nonzeros=6984
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 32 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (mg_levels_1_)     2 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.170983, max = 3.59065
      maximum iterations=1
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_1_)     2 MPI processes
      type: jacobi
      linear system matrix = precond matrix:
      Matrix Object:       2 MPI processes
        type: mpiaij
        rows=2013, cols=2013, bs=3
        total: nonzeros=100899, allocated nonzeros=100899
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 336 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Matrix Object:   2 MPI processes
    type: mpiaij
    rows=2013, cols=2013, bs=3
    total: nonzeros=100899, allocated nonzeros=100899
    total number of mallocs used during MatSetValues calls =0
      using I-node (on process 0) routines: found 336 nodes, limit used is 5
************************************************************************************************************************
***                                WIDEN YOUR WINDOW TO 120 CHARACTERS.
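The `-ksp_monitor` lines above lend themselves to scripted post-processing. A minimal sketch (the `residual_history` helper and the truncated sample log are illustrative, not part of FEAP or PETSc) that extracts the norms and checks the overall relative reduction against the solver's rtol of 1e-08:

```python
import re

def residual_history(log_text):
    """Collect the preconditioned residual norms printed by -ksp_monitor."""
    pattern = re.compile(r"^\s*\d+ KSP Residual norm (\S+)", re.MULTILINE)
    return [float(m.group(1)) for m in pattern.finditer(log_text)]

# Abbreviated excerpt of the first solve's monitor output
sample = """  0 KSP Residual norm 1.266581117211e-01
 11 KSP Residual norm 3.616596060953e-10"""

norms = residual_history(sample)
reduction = norms[-1] / norms[0]  # relative drop over the whole solve
print(reduction)  # comfortably below the rtol of 1e-08 seen in the KSP view
```

Running the same extraction over the full log separates the three residual histories wherever the iteration counter resets to 0.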
                    Use 'enscript -r -fCourier9' to print this document                                              ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/usr2/tgross/parFEAP/parFEAP84_mod/FEAP84/ver84/parfeap/feap on a linux-gnu-c named ilfb35.ilsb.tuwien.ac.at with 2 processors, by tgross Mon Jan 27 18:35:08 2014
Using Petsc Release Version 3.4.3, Oct, 15, 2013

                         Max       Max/Min        Avg      Total
Time (sec):           2.795e-01      1.02724   2.758e-01
Objects:              5.360e+02      1.01132   5.330e+02
Flops:                3.283e+07      1.00501   3.275e+07  6.549e+07
Flops/sec:            1.207e+08      1.03238   1.188e+08  2.375e+08
MPI Messages:         4.280e+02      1.00000   4.280e+02  8.560e+02
MPI Message Lengths:  9.792e+05      1.00000   2.288e+03  1.958e+06
MPI Reductions:       1.293e+03      1.00466

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 2.7579e-01 100.0%  6.5492e+07 100.0%  8.560e+02 100.0%  2.288e+03      100.0%  1.289e+03  99.7%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %f - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %f %M %L %R  %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

MatMult              197 1.0 2.1117e-02  1.1 2.00e+07 1.0 3.9e+02 1.9e+03 0.0e+00  7 60 46 38  0   7 60 46 38  0  1864
MatMultAdd            34 1.0 4.2396e-03  3.0 9.30e+05 1.0 3.4e+01 6.7e+02 0.0e+00  1  3  4  1  0   1  3  4  1  0   432
MatMultTranspose      34 1.0 2.3260e-03  1.1 9.30e+05 1.0 3.4e+01 6.7e+02 0.0e+00  1  3  4  1  0   1  3  4  1  0   787
MatSolve              34 0.0 3.3379e-04  0.0 5.11e+05 0.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  1530
MatLUFactorSym         3 1.0 6.3205e-04 33.1 0.00e+00 0.0 0.0e+00 0.0e+00 9.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatLUFactorNum         3 1.0 9.0981e-04 127.2 1.12e+06 0.0 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   0  2  0  0  0  1226
MatScale               9 1.0 2.5702e-04  1.0 1.16e+05 1.0 6.0e+00 6.3e+02 0.0e+00  0  0  1  0  0   0  0  1  0  0   890
MatAssemblyBegin      60 1.0 3.5784e-03  1.2 0.00e+00 0.0 1.8e+01 2.1e+03 6.6e+01  1  0  2  2  5   1  0  2  2  5     0
MatAssemblyEnd        60 1.0 1.1045e-02  1.0 0.00e+00 0.0 8.2e+01 1.4e+02 2.0e+02  4  0 10  1 16   4  0 10  1 16     0
MatGetRow          11088 1.0 1.5640e-03  1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
MatGetRowIJ            3 0.0 2.8133e-05  0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering         3 0.0 1.1015e-04  0.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatCoarsen             3 1.0 1.5969e-03  1.0 0.00e+00 0.0 2.4e+01 1.0e+03 5.1e+01  1  0  3  1  4   1  0  3  1  4     0
MatZeroEntries         3 1.0 3.2210e-04  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatView               15 1.7 1.0612e-03  1.1 0.00e+00 0.0 0.0e+00 0.0e+00 9.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatAXPY                3 1.0 9.8944e-05  1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatMatMult             3 1.0 9.8472e-03  1.0 1.16e+06 1.0 3.6e+01 3.9e+03 7.2e+01  4  3  4  7  6   4  3  4  7  6   230
MatMatMultSym          3 1.0 6.8631e-03  1.0 0.00e+00 0.0 3.0e+01 2.9e+03 6.6e+01  2  0  4  4  5   2  0  4  4  5     0
MatMatMultNum          3 1.0 3.0048e-03  1.0 1.16e+06 1.0 6.0e+00 9.0e+03 6.0e+00  1  3  1  3  0   1  3  1  3  0   754
MatPtAP                3 1.0 3.1201e-02  1.0 7.26e+06 1.1 5.4e+01 8.9e+03 7.5e+01 11 21  6 24  6  11 21  6 24  6   441
MatPtAPSymbolic        3 1.0 1.5792e-02  1.0 0.00e+00 0.0 3.6e+01 1.0e+04 4.5e+01  6  0  4 19  3   6  0  4 19  3     0
MatPtAPNumeric         3 1.0 1.5406e-02  1.0 7.26e+06 1.1 1.8e+01 5.8e+03 3.0e+01  6 21  2  5  2   6 21  2  5  2   893
MatTrnMatMult          3 1.0 7.7381e-03  1.0 3.04e+05 1.0 3.6e+01 4.3e+03 8.7e+01  3  1  4  8  7   3  1  4  8  7    78
MatGetLocalMat        15 1.0 7.3314e-04  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.8e+01  0  0  0  0  1   0  0  0  0  1     0
MatGetBrAoCol          9 1.0 1.3981e-03  1.1 0.00e+00 0.0 4.2e+01 1.1e+04 1.2e+01  0  0  5 24  1   0  0  5 24  1     0
MatGetSymTrans         6 1.0 1.8764e-04  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecDot                 3 1.0 1.0180e-04  1.2 6.04e+03 1.0 0.0e+00 0.0e+00 3.0e+00  0  0  0  0  0   0  0  0  0  0   119
VecMDot               30 1.0 1.8976e-03  1.1 3.32e+05 1.0 0.0e+00 0.0e+00 3.0e+01  1  1  0  0  2   1  1  0  0  2   350
VecTDot               62 1.0 2.4662e-03  1.6 1.25e+05 1.0 0.0e+00 0.0e+00 6.2e+01  1  0  0  0  5   1  0  0  0  5   101
VecNorm               67 1.0 2.0871e-03  1.1 1.35e+05 1.0 0.0e+00 0.0e+00 6.7e+01  1  0  0  0  5   1  0  0  0  5   129
VecScale             101 1.0 1.6022e-04  1.0 1.02e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1269
VecCopy              111 1.0 7.6056e-05  1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               268 1.0 1.0705e-04  1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY              201 1.0 2.7919e-04  1.0 4.05e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  2898
VecAYPX              232 1.0 4.4227e-04  1.0 3.31e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  1493
VecMAXPY              33 1.0 1.9598e-04  1.0 3.93e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  4006
VecAssemblyBegin      90 1.0 5.7271e-03  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.6e+02  2  0  0  0 20   2  0  0  0 20     0
VecAssemblyEnd        90 1.0 6.3896e-05  1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecPointwiseMult     169 1.0 3.6716e-04  1.0 1.70e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0   927
VecScatterBegin      355 1.0 3.1815e-03  1.3 0.00e+00 0.0 6.4e+02 1.5e+03 0.0e+00  1  0 75 49  0   1  0 75 49  0     0
VecScatterEnd        355 1.0 1.1966e-02  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  4  0  0  0  0   4  0  0  0  0     0
VecSetRandom           3 1.0 5.4121e-05  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          33 1.0 9.8348e-04  1.0 9.98e+04 1.0 0.0e+00 0.0e+00 3.3e+01  0  0  0  0  3   0  0  0  0  3   203
KSPGMRESOrthog        30 1.0 2.0945e-03  1.1 6.65e+05 1.0 0.0e+00 0.0e+00 3.0e+01  1  2  0  0  2   1  2  0  0  2   634
KSPSetUp              18 1.0 5.5146e-04  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
KSPSolve               3 1.0 1.3466e-01  1.0 3.28e+07 1.0 8.5e+02 2.3e+03 1.2e+03 49 100 99 100 96  49 100 99 100 96   486
PCSetUp                6 1.0 1.0567e-01  1.0 1.30e+07 1.0 4.4e+02 2.9e+03 1.1e+03 38 39 52 65 88  38 39 52 65 88   243
PCSetUpOnBlocks       34 1.0 1.7202e-03 14.5 1.12e+06 0.0 0.0e+00 0.0e+00 1.8e+01  0  2  0  0  1   0  2  0  0  1   648
PCApply               34 1.0 2.3529e-02  1.0 1.76e+07 1.1 3.4e+02 1.6e+03 2.4e+01  9 52 40 28  2   9 52 40 28  2  1443
PCGAMGgraph_AGG        1 1.0 6.9730e-03  1.0 1.14e+04 1.0 1.0e+01 2.4e+02 3.6e+01  3  0  1  0  3   3  0  1  0  3     3
PCGAMGcoarse_AGG       1 1.0 3.9728e-03  1.0 1.01e+05 1.0 3.0e+01 2.3e+03 6.6e+01  1  0  4  4  5   1  0  4  4  5    51
PCGAMGProl_AGG         1 1.0 4.4410e-03  1.0 0.00e+00 0.0 4.8e+01 1.1e+03 1.1e+02  2  0  6  3  9   2  0  6  3  9     0
PCGAMGPOpt_AGG         1 1.0 6.4251e-03  1.0 1.71e+06 1.0 3.2e+01 2.7e+03 5.6e+01  2  5  4  4  4   2  5  4  4  4   526
PCGAMGgraph_AGG        1 1.0 5.6651e-03  1.0 1.14e+04 1.0 1.0e+01 2.4e+02 3.6e+01  2  0  1  0  3   2  0  1  0  3     4
PCGAMGcoarse_AGG       1 1.0 3.6709e-03  1.0 1.01e+05 1.0 3.0e+01 2.3e+03 6.6e+01  1  0  4  4  5   1  0  4  4  5    55
PCGAMGProl_AGG         1 1.0 3.9499e-03  1.0 0.00e+00 0.0 4.8e+01 1.1e+03 1.1e+02  1  0  6  3  9   1  0  6  3  9     0
PCGAMGPOpt_AGG         1 1.0 5.8770e-03  1.0 1.71e+06 1.0 3.2e+01 2.7e+03 5.6e+01  2  5  4  4  4   2  5  4  4  4   576
PCGAMGgraph_AGG        1 1.0 5.6491e-03  1.0 1.14e+04 1.0 1.0e+01 2.4e+02 3.6e+01  2  0  1  0  3   2  0  1  0  3     4
PCGAMGcoarse_AGG       1 1.0 3.6740e-03  1.0 1.01e+05 1.0 3.0e+01 2.3e+03 6.6e+01  1  0  4  4  5   1  0  4  4  5    55
PCGAMGProl_AGG         1 1.0 3.8781e-03  1.0 0.00e+00 0.0 4.8e+01 1.1e+03 1.1e+02  1  0  6  3  9   1  0  6  3  9     0
PCGAMGPOpt_AGG         1 1.0 6.1820e-03  1.0 1.71e+06 1.0 3.2e+01 2.7e+03 5.6e+01  2  5  4  4  4   2  5  4  4  4   547
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Matrix   105            105      6082820     0
      Matrix Coarsen     3              3         1884     0
              Vector   259            259      1362280     0
      Vector Scatter    28             28        29456     0
           Index Set    98             98        86424     0
       Krylov Solver    18             18       160152     0
      Preconditioner    18             18        18036     0
              Viewer     4              3         2184     0
         PetscRandom     3              3         1872     0
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
Average time for MPI_Barrier(): 4.63963e-05
Average time for zero size MPI_Send(): 2.14577e-05
#PETSc Option Table entries:
-ksp_monitor
-ksp_type cg
-ksp_view
-log_summary
-mg_levels_ksp_max_it 1
-options_left
-pc_gamg_type agg
-pc_type gamg
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure run at: Thu Jan 23 19:10:29 2014
Configure options: --download-parmetis --download-superlu_dist --download-mpich --download-hypre --download-metis --download-ml --download-mumps --download-scalapack --download-blacs --download-cmake --download-f-blas-lapack=1 --with-debugging=0
-----------------------------------------
Libraries compiled on Thu Jan 23 19:10:29 2014 on ilfb35.ilsb.tuwien.ac.at
Machine characteristics: Linux-2.6.32-358.2.1.el6.x86_64-x86_64-with-redhat-6.4-Carbon
Using PETSc directory: /usr2/tgross/parFEAP/parFEAP84_mod/petsc-3.4.3
Using PETSc arch: linux-gnu-c
-----------------------------------------
Using C compiler: /usr2/tgross/parFEAP/parFEAP84_mod/petsc-3.4.3/linux-gnu-c/bin/mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /usr2/tgross/parFEAP/parFEAP84_mod/petsc-3.4.3/linux-gnu-c/bin/mpif90 -fPIC -Wall -Wno-unused-variable -O ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/usr2/tgross/parFEAP/parFEAP84_mod/petsc-3.4.3/linux-gnu-c/include -I/usr2/tgross/parFEAP/parFEAP84_mod/petsc-3.4.3/include -I/usr2/tgross/parFEAP/parFEAP84_mod/petsc-3.4.3/include -I/usr2/tgross/parFEAP/parFEAP84_mod/petsc-3.4.3/linux-gnu-c/include
-----------------------------------------
Using C linker: /usr2/tgross/parFEAP/parFEAP84_mod/petsc-3.4.3/linux-gnu-c/bin/mpicc
Using Fortran linker: /usr2/tgross/parFEAP/parFEAP84_mod/petsc-3.4.3/linux-gnu-c/bin/mpif90
Using libraries: -Wl,-rpath,/usr2/tgross/parFEAP/parFEAP84_mod/petsc-3.4.3/linux-gnu-c/lib -L/usr2/tgross/parFEAP/parFEAP84_mod/petsc-3.4.3/linux-gnu-c/lib -lpetsc -Wl,-rpath,/usr2/tgross/parFEAP/parFEAP84_mod/petsc-3.4.3/linux-gnu-c/lib -L/usr2/tgross/parFEAP/parFEAP84_mod/petsc-3.4.3/linux-gnu-c/lib -lHYPRE -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -lmpichcxx -lstdc++ -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lml -lmpichcxx -lstdc++ -lsuperlu_dist_3.3 -lflapack -lfblas -lX11 -lparmetis -lmetis -lpthread -lmpichf90 -lgfortran -lm -lm -lmpichcxx -lstdc++ -lmpichcxx -lstdc++ -ldl -lmpich -lopa -lmpl -lrt -lpthread -lgcc_s -ldl
-----------------------------------------
#PETSc Option Table entries:
-ksp_monitor
-ksp_type cg
-ksp_view
-log_summary
-mg_levels_ksp_max_it 1
-options_left
-pc_gamg_type agg
-pc_type gamg
#End of PETSc Option Table entries
There are no unused options.
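The Total Mflop/s column in the event table follows the formula stated in its header: 1e-6 times the flop sum over all processors divided by the maximum time. A small sketch reproduces the MatMult rate from its row (the `phase_mflops` helper is ours, not PETSc's; the flop sum is approximated as nprocs times the per-process maximum, which is reasonable here because the flop Ratio is 1.0 and the printed Max is rounded to three digits):

```python
def phase_mflops(nprocs, max_flops_per_proc, max_time):
    """Mflop/s as the -log_summary header defines it:
    1e-6 * (sum of flops over all processors) / (max time over all processors).
    The sum is approximated by nprocs * max, valid when the flop Ratio is ~1.0."""
    return 1e-6 * nprocs * max_flops_per_proc / max_time

# MatMult row above: 2 processes, Max flops 2.00e+07, Max time 2.1117e-02 s,
# reported rate 1864 Mflop/s
rate = phase_mflops(2, 2.00e7, 2.1117e-2)
print(rate)  # within a few percent of the reported 1864 Mflop/s
```

The same check on KSPSolve (3.28e+07 flops in 1.3466e-01 s) recovers its reported 486 Mflop/s to similar accuracy, which is a quick sanity test when comparing runs at different processor counts.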