************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./DG_EULER on a arch-linux2-c-opt named compute-0-1.local with 32 processors, by kyriazin Tue Mar 18 13:55:55 2014
Using Petsc Release Version 3.4.4, Mar, 13, 2014

                         Max       Max/Min        Avg      Total
Time (sec):           2.299e+01      1.00004   2.299e+01
Objects:              1.486e+03      1.00000   1.486e+03
Flops:                1.150e+09      1.07385   1.116e+09  3.573e+10
Flops/sec:            5.000e+07      1.07383   4.856e+07  1.554e+09
MPI Messages:         3.284e+04      3.74588   1.971e+04  6.308e+05
MPI Message Lengths:  6.320e+07      3.23639   2.301e+03  1.452e+09
MPI Reductions:       1.238e+04      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 2.2992e+01 100.0%  3.5726e+10 100.0%  6.308e+05 100.0%  2.301e+03      100.0%  1.238e+04 100.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %f - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %f %M %L %R  %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

VecDot             10880 1.0 5.8044e+00 1.4 1.35e+08 1.0 0.0e+00 0.0e+00 1.1e+04 21 12  0  0 88  21 12  0  0 88   742
VecNorm             1448 1.0 2.2591e+00 3.9 1.79e+07 1.0 0.0e+00 0.0e+00 1.4e+03  5  2  0  0 12   5  2  0  0 12   254
VecScale             734 1.0 1.6861e-02 4.1 4.54e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  8620
VecCopy              736 1.0 3.0711e-02 4.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet              2187 1.0 5.8744e-02 3.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY            11614 1.0 3.5391e-01 3.7 1.44e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1 13  0  0  0   1 13  0  0  0 12996
VecAYPX              712 1.0 2.2741e-02 2.2 4.41e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  6199
VecMAXPY             712 1.0 3.8547e-01 5.3 1.35e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1 12  0  0  0   1 12  0  0  0 11229
VecAssemblyBegin       2 1.0 3.9465e-02 3.3 0.00e+00 0.0 4.8e+02 1.2e+04 6.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyEnd         2 1.0 1.2112e-04 7.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin     2912 1.0 6.4863e-01 6.4 0.00e+00 0.0 6.3e+05 2.2e+03 0.0e+00  2  0 99 95  0   2  0 99 95  0     0
VecScatterEnd       2912 1.0 4.5132e+00 3.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 10  0  0  0  0  10  0  0  0  0     0
VecNormalize         734 1.0 9.7445e-01 3.4 1.36e+07 1.0 0.0e+00 0.0e+00 7.3e+02  2  1  0  0  6   2  1  0  0  6   447
MatMult             1444 1.0 4.1564e+00 1.9 4.21e+08 1.1 4.2e+05 2.2e+03 0.0e+00 14 36 66 63  0  14 36 66 63  0  3129
MatSolve             734 1.0 1.2027e+00 3.8 2.93e+08 1.3 0.0e+00 0.0e+00 0.0e+00  4 24  0  0  0   4 24  0  0  0  7109
MatLUFactorNum         1 1.0 7.8979e-03 2.4 2.55e+06 1.3 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  9354
MatILUFactorSym        1 1.0 2.9581e-03 3.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyBegin       2 1.0 4.6793e-01 110.8 0.00e+00 0.0 3.6e+02 4.7e+04 2.0e+00  1  0  0  1  0   1  0  0  1  0     0
MatAssemblyEnd         2 1.0 3.7325e-02 1.6 0.00e+00 0.0 5.8e+02 5.5e+02 8.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRowIJ            1 1.0 3.0994e-06 3.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice       1 1.0 4.5207e-02 2.1 0.00e+00 0.0 1.4e+03 1.5e+04 7.0e+00  0  0  0  2  0   0  0  0  2  0     0
MatGetOrdering         1 1.0 1.4400e-04 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatIncreaseOvrlp       1 1.0 4.4651e-03 2.4 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatZeroEntries         1 1.0 3.2940e-03 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPGMRESOrthog       710 1.0 6.1561e+00 1.4 2.69e+08 1.0 0.0e+00 0.0e+00 1.1e+04 22 24  0  0 88  22 24  0  0 88  1400
KSPSetUp               2 1.0 2.1291e-04 4.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               2 1.0 1.1895e+01 1.0 1.15e+09 1.1 6.3e+05 2.2e+03 1.2e+04 52 100 100 97 100  52 100 100 97 100  3003
PCSetUp                2 1.0 6.0581e-02 1.7 2.55e+06 1.3 2.0e+03 1.1e+04 2.3e+01  0  0  0  2  0   0  0  0  2  0  1219
PCSetUpOnBlocks        2 1.0 1.0877e-02 2.5 2.55e+06 1.3 0.0e+00 0.0e+00 3.0e+00  0  0  0  0  0   0  0  0  0  0  6792
PCApply              734 1.0 2.2229e+00 2.0 2.93e+08 1.3 2.1e+05 2.2e+03 0.0e+00  8 24 34 32  0   8 24 34 32  0  3847
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Vector  1465           1426     72744944     0
      Vector Scatter     2              0            0     0
              Matrix     5              0            0     0
       Krylov Solver     2              0            0     0
      Preconditioner     2              0            0     0
              Viewer     2              0            0     0
           Index Set     8              4         6944     0
========================================================================================================================
Average time to get PetscTime(): 0
Average time for MPI_Barrier(): 0.000290966
Average time for zero size MPI_Send(): 6.18175e-05
#PETSc Option Table entries:
-ksp_atol 1e-06
-ksp_divtol 10000
-ksp_max_it 100000
-ksp_monitor_true_residual
-ksp_rtol 1e-06
-log_summary output32
-pc_type asm
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure run at: Tue Mar 18 13:02:05 2014
Configure options: --with-debugging=no --with-cc=gcc --with-shared-libraries=1 --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-hypre=1 --download-hypre=yes --with-superlu=1 --download-superlu=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-mpi=1 --download-mpich=1
-----------------------------------------
Libraries compiled on Tue Mar 18 13:02:05 2014 on orion.erau.edu
Machine characteristics: Linux-2.6.18-194.17.4.el5xen-x86_64-with-redhat-5.5-Final
Using PETSc directory: /ihome/home4/kyriazin/petsc-3.4.4
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------
Using C compiler: /ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/bin/mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/bin/mpif90 -fPIC -Wall -Wno-unused-variable -O ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/include -I/ihome/home4/kyriazin/petsc-3.4.4/include -I/ihome/home4/kyriazin/petsc-3.4.4/include -I/ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/include
-----------------------------------------
Using C linker: /ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/bin/mpicc
Using Fortran linker: /ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/bin/mpif90
Using libraries: -Wl,-rpath,/ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/lib -L/ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/lib -L/ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/lib -lHYPRE -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -lmpichcxx -lstdc++ -lscalapack -lml -lmpichcxx -lstdc++ -lsuperlu_4.3 -lumfpack -lamd -lflapack -lfblas -lX11 -lpthread -lmpichf90 -lgfortran -lm -lm -lmpichcxx -lstdc++ -lmpichcxx -lstdc++ -ldl -lmpich -lopa -lmpl -lrt -lpthread -lgcc_s -ldl
-----------------------------------------
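
Note: a run matching the 32 processors and the option table recorded above could be launched along these lines, assuming an mpiexec-style launcher from the --download-mpich build listed in the configure options (the actual launch command is not recorded in this log):

   mpiexec -n 32 ./DG_EULER -pc_type asm -ksp_rtol 1e-06 -ksp_atol 1e-06 -ksp_divtol 10000 \
           -ksp_max_it 100000 -ksp_monitor_true_residual -log_summary output32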
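
Note: all events above are attributed to the default Main Stage. As the phase summary text notes, additional stages are set with PetscLogStagePush() and PetscLogStagePop(); a minimal C sketch of that usage against the PETSc 3.4 API follows (the stage name and the profiled region are hypothetical illustrations, not taken from the DG_EULER source):

   #include <petscsys.h>

   int main(int argc, char **argv)
   {
     PetscErrorCode ierr;
     PetscLogStage  assembly_stage;                       /* hypothetical stage */

     ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

     /* Register a named stage; its events appear as a separate block in -log_summary */
     ierr = PetscLogStageRegister("Assembly", &assembly_stage);CHKERRQ(ierr);

     ierr = PetscLogStagePush(assembly_stage);CHKERRQ(ierr);
     /* ... work to be attributed to the "Assembly" stage ... */
     ierr = PetscLogStagePop();CHKERRQ(ierr);

     ierr = PetscFinalize();
     return ierr;
   }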