************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./DG_EULER on a arch-linux2-c-opt named compute-0-16.local with 16 processors, by kyriazin Tue Mar 18 13:53:50 2014
Using Petsc Release Version 3.4.4, Mar, 13, 2014

                         Max       Max/Min        Avg      Total
Time (sec):           1.862e+01      1.00007   1.862e+01
Objects:              1.438e+03      1.00000   1.438e+03
Flops:                2.115e+09      1.02984   2.089e+09  3.342e+10
Flops/sec:            1.136e+08      1.02983   1.122e+08  1.795e+09
MPI Messages:         2.119e+04      2.00014   1.510e+04  2.416e+05
MPI Message Lengths:  5.320e+07      1.83922   2.852e+03  6.890e+08
MPI Reductions:       1.187e+04      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 1.8623e+01 100.0%  3.3420e+10 100.0%  2.416e+05 100.0%  2.852e+03      100.0%  1.186e+04 100.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %f - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %f %M %L %R  %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

VecDot             10412 1.0 2.6733e+00 1.0 2.58e+08 1.0 0.0e+00 0.0e+00 1.0e+04 14 12  0  0 88  14 12  0  0 88  1542
VecNorm             1400 1.0 3.7876e-01 1.3 3.47e+07 1.0 0.0e+00 0.0e+00 1.4e+03  2  2  0  0 12   2  2  0  0 12  1464
VecScale             710 1.0 2.6576e-02 2.8 8.79e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  5290
VecCopy              712 1.0 5.6922e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet              2115 1.0 8.6033e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY            11122 1.0 3.8122e-01 1.3 2.75e+08 1.0 0.0e+00 0.0e+00 0.0e+00  2 13  0  0  0   2 13  0  0  0 11554
VecAYPX              688 1.0 3.6964e-02 1.5 8.51e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  3685
VecMAXPY             688 1.0 5.9494e-01 1.2 2.58e+08 1.0 0.0e+00 0.0e+00 0.0e+00  3 12  0  0  0   3 12  0  0  0  6948
VecAssemblyBegin       2 1.0 2.7307e-02 3.4 0.00e+00 0.0 2.3e+02 1.1e+04 6.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyEnd         2 1.0 1.0705e-04 3.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin     2816 1.0 3.4126e-01 1.3 0.00e+00 0.0 2.4e+05 2.7e+03 0.0e+00  2  0 99 94  0   2  0 99 94  0     0
VecScatterEnd       2816 1.0 1.3829e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  6  0  0  0  0   6  0  0  0  0     0
VecNormalize         710 1.0 1.9300e-01 1.2 2.64e+07 1.0 0.0e+00 0.0e+00 7.1e+02  1  1  0  0  6   1  1  0  0  6  2185
MatMult             1396 1.0 3.4525e+00 1.1 8.03e+08 1.0 1.6e+05 2.7e+03 0.0e+00 18 38 66 62  0  18 38 66 62  0  3642
MatSolve             710 1.0 1.4432e+00 1.1 4.73e+08 1.1 0.0e+00 0.0e+00 0.0e+00  8 22  0  0  0   8 22  0  0  0  5050
MatLUFactorNum         1 1.0 8.6749e-03 1.3 4.28e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  7621
MatILUFactorSym        1 1.0 4.3521e-03 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyBegin       2 1.0 4.2787e-01 123.1 0.00e+00 0.0 1.7e+02 4.6e+04 2.0e+00  1  0  0  1  0   1  0  0  1  0    0
MatAssemblyEnd         2 1.0 1.7166e-02 1.1 0.00e+00 0.0 2.3e+02 6.7e+02 8.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRowIJ            1 1.0 3.0994e-06 3.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice       1 1.0 3.4007e-02 1.5 0.00e+00 0.0 5.7e+02 1.9e+04 7.0e+00  0  0  0  2  0   0  0  0  2  0     0
MatGetOrdering         1 1.0 1.9002e-04 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatIncreaseOvrlp       1 1.0 4.0781e-03 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatZeroEntries         1 1.0 5.2390e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPGMRESOrthog       686 1.0 2.9872e+00 1.1 5.15e+08 1.0 0.0e+00 0.0e+00 1.0e+04 16 25  0  0 88  16 25  0  0 88  2761
KSPSetUp               2 1.0 2.6989e-04 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               2 1.0 9.2914e+00 1.0 2.11e+09 1.0 2.4e+05 2.7e+03 1.2e+04 50 100 100 95 100  50 100 100 95 100  3597
PCSetUp                2 1.0 5.1772e-02 1.3 4.28e+06 1.1 8.0e+02 1.4e+04 2.3e+01  0  0  0  2  0   0  0  0  2  0  1277
PCSetUpOnBlocks        2 1.0 1.2915e-02 1.4 4.28e+06 1.1 0.0e+00 0.0e+00 3.0e+00  0  0  0  0  0   0  0  0  0  0  5119
PCApply              710 1.0 1.9024e+00 1.1 4.73e+08 1.1 8.1e+04 2.7e+03 0.0e+00 10 22 34 32  0  10 22 34 32  0  3831
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage

              Vector  1417           1378    138461648     0
      Vector Scatter     2              0            0     0
              Matrix     5              0            0     0
       Krylov Solver     2              0            0     0
      Preconditioner     2              0            0     0
              Viewer     2              0            0     0
           Index Set     8              4        11212     0
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
Average time for MPI_Barrier(): 0.000125599
Average time for zero size MPI_Send(): 1.61231e-05
#PETSc Option Table entries:
-ksp_atol 1e-06
-ksp_divtol 10000
-ksp_max_it 100000
-ksp_monitor_true_residual
-ksp_rtol 1e-06
-log_summary output16
-pc_type asm
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure run at: Tue Mar 18 13:02:05 2014
Configure options: --with-debugging=no --with-cc=gcc --with-shared-libraries=1 --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-hypre=1 --download-hypre=yes --with-superlu=1 --download-superlu=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-mpi=1 --download-mpich=1
-----------------------------------------
Libraries compiled on Tue Mar 18 13:02:05 2014 on orion.erau.edu
Machine characteristics: Linux-2.6.18-194.17.4.el5xen-x86_64-with-redhat-5.5-Final
Using PETSc directory: /ihome/home4/kyriazin/petsc-3.4.4
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------
Using C compiler: /ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/bin/mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/bin/mpif90 -fPIC -Wall -Wno-unused-variable -O ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/include -I/ihome/home4/kyriazin/petsc-3.4.4/include -I/ihome/home4/kyriazin/petsc-3.4.4/include -I/ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/include
-----------------------------------------
Using C linker: /ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/bin/mpicc
Using Fortran linker: /ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/bin/mpif90
Using libraries: -Wl,-rpath,/ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/lib -L/ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/lib -L/ihome/home4/kyriazin/petsc-3.4.4/arch-linux2-c-opt/lib -lHYPRE -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -lmpichcxx -lstdc++ -lscalapack -lml -lmpichcxx -lstdc++ -lsuperlu_4.3 -lumfpack -lamd -lflapack -lfblas -lX11 -lpthread -lmpichf90 -lgfortran -lm -lm -lmpichcxx -lstdc++ -lmpichcxx -lstdc++ -ldl -lmpich -lopa -lmpl -lrt -lpthread -lgcc_s -ldl
-----------------------------------------