************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./DG_EULER on a arch-linux2-c-opt named compute-0-15.local with 8 processors, by kyriazin Tue Mar 18 13:52:04 2014
Using Petsc Release Version 3.4.4, Mar, 13, 2014

                         Max       Max/Min        Avg      Total
Time (sec):           1.777e+01      1.00004   1.777e+01
Objects:              1.410e+03      1.00000   1.410e+03
Flops:                4.114e+09      1.02189   4.057e+09  3.246e+10
Flops/sec:            2.315e+08      1.02193   2.283e+08  1.827e+09
MPI Messages:         1.454e+04      2.32898   1.091e+04  8.726e+04
MPI Message Lengths:  7.003e+07      2.03372   4.622e+03  4.033e+08
MPI Reductions:       1.170e+04      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 1.7769e+01 100.0%  3.2457e+10 100.0%  8.726e+04 100.0%  4.622e+03      100.0%  1.170e+04 100.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %f - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
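This run logs everything under the single default "Main Stage". As a minimal sketch of the PetscLogStagePush()/PetscLogStagePop() calls named in the legend above (the stage names and placement are illustrative, not taken from DG_EULER), user-defined stages would make the summary report assembly and solve costs in separate stage columns:

    #include <petscsys.h>

    int main(int argc, char **argv)
    {
      PetscLogStage assembly, solve;   /* illustrative stage handles */

      PetscInitialize(&argc, &argv, NULL, NULL);

      PetscLogStageRegister("Assembly", &assembly);
      PetscLogStageRegister("Solve", &solve);

      PetscLogStagePush(assembly);
      /* ... matrix and vector assembly would go here ... */
      PetscLogStagePop();

      PetscLogStagePush(solve);
      /* ... KSPSolve() would go here ... */
      PetscLogStagePop();

      PetscFinalize();                 /* -log_summary prints its tables during finalize */
      return 0;
    }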
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %f %M %L %R  %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

VecDot             10272 1.0 2.0305e+00 1.1 5.08e+08 1.0 0.0e+00 0.0e+00 1.0e+04 11 13  0  0 88  11 13  0  0 88  2003
VecNorm             1372 1.0 2.5091e-01 1.2 6.79e+07 1.0 0.0e+00 0.0e+00 1.4e+03  1  2  0  0 12   1  2  0  0 12  2165
VecScale             696 1.0 1.2864e-02 1.0 1.72e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 10713
VecCopy              698 1.0 5.6528e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet              2073 1.0 1.0040e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
VecAXPY            10968 1.0 4.1884e-01 1.2 5.43e+08 1.0 0.0e+00 0.0e+00 0.0e+00  2 13  0  0  0   2 13  0  0  0 10370
VecAYPX              674 1.0 4.6228e-02 1.2 1.67e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2887
VecMAXPY             674 1.0 4.9761e-01 1.0 5.09e+08 1.0 0.0e+00 0.0e+00 0.0e+00  3 13  0  0  0   3 13  0  0  0  8184
VecAssemblyBegin       2 1.0 2.9973e-02 9.7 0.00e+00 0.0 9.2e+01 1.3e+04 6.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyEnd         2 1.0 7.4148e-05 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin     2760 1.0 2.4038e-01 1.2 0.00e+00 0.0 8.7e+04 4.3e+03 0.0e+00  1  0 99 92  0   1  0 99 92  0     0
VecScatterEnd       2760 1.0 5.9550e-01 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  3  0  0  0  0   3  0  0  0  0     0
VecNormalize         696 1.0 1.2408e-01 1.0 5.17e+07 1.0 0.0e+00 0.0e+00 7.0e+02  1  1  0  0  6   1  1  0  0  6  3332
MatMult             1368 1.0 3.2535e+00 1.0 1.56e+09 1.0 5.7e+04 4.3e+03 0.0e+00 18 38 66 61  0  18 38 66 61  0  3787
MatSolve             696 1.0 1.4506e+00 1.1 8.84e+08 1.1 0.0e+00 0.0e+00 0.0e+00  8 21  0  0  0   8 21  0  0  0  4670
MatLUFactorNum         1 1.0 1.3801e-02 1.1 8.30e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  4574
MatILUFactorSym        1 1.0 4.5230e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyBegin       2 1.0 6.7126e-02 21.8 0.00e+00 0.0 6.9e+01 6.6e+04 2.0e+00  0  0  0  1  0   0  0  0  1  0    0
MatAssemblyEnd         2 1.0 1.8846e-02 1.0 0.00e+00 0.0 8.4e+01 1.1e+03 8.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRowIJ            1 1.0 1.9073e-06 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice       1 1.0 3.7363e-02 1.2 0.00e+00 0.0 2.1e+02 3.0e+04 7.0e+00  0  0  0  2  0   0  0  0  2  0     0
MatGetOrdering         1 1.0 1.9789e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatIncreaseOvrlp       1 1.0 4.6601e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatZeroEntries         1 1.0 1.0946e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPGMRESOrthog       672 1.0 2.3478e+00 1.1 1.02e+09 1.0 0.0e+00 0.0e+00 1.0e+04 13 25  0  0 88  13 25  0  0 88  3465
KSPSetUp               2 1.0 4.3392e-04 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               2 1.0 8.3026e+00 1.0 4.11e+09 1.0 8.7e+04 4.3e+03 1.2e+04 47 100 100 93 100  47 100 100 93 100  3909
PCSetUp                2 1.0 6.3017e-02 1.1 8.30e+06 1.1 2.9e+02 2.2e+04 2.3e+01  0  0  0  2  0   0  0  0  2  0  1002
PCSetUpOnBlocks        2 1.0 1.8550e-02 1.1 8.30e+06 1.1 0.0e+00 0.0e+00 3.0e+00  0  0  0  0  0   0  0  0  0  0  3403
PCApply              696 1.0 1.7992e+00 1.1 8.84e+08 1.1 2.9e+04 4.3e+03 0.0e+00 10 21 34 31  0  10 21 34 31  0  3765
------------------------------------------------------------------------------------------------------------------------
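As a quick consistency check on the Mflop/s column (editor's arithmetic, not part of the original log; the scaling factor is effectively 1e-6, flop to Mflop, despite the "10e-6" wording in the legend), take the VecDot row. Its flop ratio is 1.0, so the total over 8 processes is roughly 8 * 5.08e+08, and

   Mflop/s ≈ 1e-6 * (sum of flops) / (max time)
           ≈ 1e-6 * (8 * 5.08e+08) / 2.0305e+00
           ≈ 2.0e+03

which matches the reported 2003.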
Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Vector  1389           1350    269197192     0
      Vector Scatter     2              0            0     0
              Matrix     5              0            0     0
       Krylov Solver     2              0            0     0
      Preconditioner     2              0            0     0
              Viewer     2              0            0     0
           Index Set     8              4         8512     0
========================================================================================================================
Average time to get PetscTime(): 0
Average time for MPI_Barrier(): 6.09875e-05
Average time for zero size MPI_Send(): 1.28746e-05
#PETSc Option Table entries:
-ksp_atol 1e-06
-ksp_divtol 10000
-ksp_max_it 100000
-ksp_monitor_true_residual
-ksp_rtol 1e-06
-log_summary output8
-pc_type asm
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure run at: Tue Mar 18 13:02:05 2014
Configure options: --with-debugging=no --with-cc=gcc --with-shared-libraries=1 --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-hypre=1 --download-hypre=yes --with-superlu=1 --download-superlu=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-mpi=1 --download-mpich=1
-----------------------------------------
Libraries compiled on Tue Mar 18 13:02:05 2014 on orion.erau.edu
Machine characteristics: Linux-2.6.18-194.17.4.el5xen-x86_64-with-redhat-5.5-Final
Using PETSc directory: /ihome/home4/kyriazin/petsc-3.4.4
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------
Using C compiler: /ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/bin/mpicc  -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/bin/mpif90  -fPIC -Wall -Wno-unused-variable -O ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/include -I/ihome/home4/kyriazin/petsc-3.4.4/include -I/ihome/home4/kyriazin/petsc-3.4.4/include -I/ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/include
-----------------------------------------
Using C linker: /ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/bin/mpicc
Using Fortran linker: /ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/bin/mpif90
Using libraries: -Wl,-rpath,/ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/lib -L/ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/lib -L/ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/lib -lHYPRE -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -lmpichcxx -lstdc++ -lscalapack -lml -lmpichcxx -lstdc++ -lsuperlu_4.3 -lumfpack -lamd -lflapack -lfblas -lX11 -lpthread -lmpichf90 -lgfortran -lm -lm -lmpichcxx -lstdc++ -lmpichcxx -lstdc++ -ldl -lmpich -lopa -lmpl -lrt -lpthread -lgcc_s -ldl
-----------------------------------------
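For reference, the solver settings recorded in the option table above have a direct programmatic equivalent. The sketch below is illustrative only: the ConfigureSolver helper is hypothetical, the original run set these values on the command line, and -ksp_monitor_true_residual plus -log_summary are left as runtime options.

    #include <petscksp.h>

    /* Hypothetical helper: hardwire the option-table settings on an existing KSP. */
    PetscErrorCode ConfigureSolver(KSP ksp)
    {
      PC             pc;
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCSetType(pc, PCASM);CHKERRQ(ierr);            /* -pc_type asm */
      /* rtol, abstol, dtol, maxits:
         -ksp_rtol 1e-06 -ksp_atol 1e-06 -ksp_divtol 10000 -ksp_max_it 100000 */
      ierr = KSPSetTolerances(ksp, 1e-6, 1e-6, 1e4, 100000);CHKERRQ(ierr);
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);          /* still honor runtime options */
      PetscFunctionReturn(0);
    }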