************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./RunPart on a arch-linux2-c-debug named research-5.ece.iastate.edu with 1 processor, by bodhi91 Sat Mar 25 22:54:58 2017
Using Petsc Release Version 3.7.5, Jan, 01, 2017

                         Max       Max/Min        Avg      Total
Time (sec):           1.179e+02      1.00000   1.179e+02
Objects:              4.620e+02      1.00000   4.620e+02
Flops:                1.128e+10      1.00000   1.128e+10  1.128e+10
Flops/sec:            9.567e+07      1.00000   9.567e+07  9.567e+07
Memory:               4.204e+09      1.00000              4.204e+09
MPI Messages:         0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Reductions:       0.000e+00      0.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 1.1791e+02 100.0%  1.1281e+10 100.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------


      ##########################################################
      #                                                        #
      #                       WARNING!!!                       #
      #                                                        #
      #   This code was compiled with a debugging option,      #
      #   To get timing results run ./configure                #
      #   using --with-debugging=no, the performance will      #
      #   be generally two or three times faster.              #
      #                                                        #
      ##########################################################


Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

MatMult              817 1.0 3.8235e+01 1.0 2.87e+09 1.0 0.0e+00 0.0e+00 0.0e+00 32 25  0  0  0  32 25  0  0  0    75
MatSolve             834 1.0 4.2134e+01 1.0 2.60e+09 1.0 0.0e+00 0.0e+00 0.0e+00 36 23  0  0  0  36 23  0  0  0    62
MatCholFctrNum         1 1.0 5.1851e-01 1.0 2.73e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     1
MatICCFactorSym        1 1.0 1.9637e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyBegin       3 1.0 1.6451e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd         3 1.0 2.8894e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRow         272836 1.0 1.0459e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
MatGetRowIJ            1 1.0 8.8215e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering         1 1.0 1.0352e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatZeroEntries        15 1.0 7.5817e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecTDot             1562 1.0 1.8975e+00 1.0 8.52e+08 1.0 0.0e+00 0.0e+00 0.0e+00  2  8  0  0  0   2  8  0  0  0   449
VecNorm              807 1.0 8.0234e-01 1.0 4.40e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1  4  0  0  0   1  4  0  0  0   549
VecScale              27 1.0 3.2004e-02 1.0 7.37e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   230
VecCopy              213 1.0 1.6642e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet              1802 1.0 6.3942e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
VecAXPY             2338 1.0 3.2840e+00 1.0 1.18e+09 1.0 0.0e+00 0.0e+00 0.0e+00  3 10  0  0  0   3 10  0  0  0   358
VecAYPX              754 1.0 1.3954e+00 1.0 4.11e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1  4  0  0  0   1  4  0  0  0   295
VecSetRandom           6 1.0 2.7488e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
VecReduceArith        90 1.0 1.0776e-01 1.0 4.91e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   456
VecReduceComm         60 1.0 4.1914e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
EPSSetUp               1 1.0 4.6988e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
EPSSolve               1 1.0 1.0800e+02 1.0 1.13e+10 1.0 0.0e+00 0.0e+00 0.0e+00 92 100 0  0  0  92 100 0  0  0   104
STSetUp                1 1.0 6.3181e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSetUp               2 1.0 6.5174e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve              57 1.0 9.0036e+01 1.0 8.79e+09 1.0 0.0e+00 0.0e+00 0.0e+00 76 78  0  0  0  76 78  0  0  0    98
PCSetUp                3 1.0 6.4302e-01 1.0 2.73e+05 1.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
PCApply              834 1.0 4.6604e+01 1.0 3.48e+09 1.0 0.0e+00 0.0e+00 0.0e+00 40 31  0  0  0  40 31  0  0  0    75
BVCreate              34 1.0 8.4237e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
BVCopy                66 1.0 5.7301e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
BVMultVec            942 1.0 4.5521e+00 1.0 1.22e+09 1.0 0.0e+00 0.0e+00 0.0e+00  4 11  0  0  0   4 11  0  0  0   268
BVMultInPlace         40 1.0 7.9652e+00 1.0 8.55e+08 1.0 0.0e+00 0.0e+00 0.0e+00  7  8  0  0  0   7  8  0  0  0   107
BVDot                 85 1.0 1.3696e+00 1.0 3.87e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1  3  0  0  0   1  3  0  0  0   282
BVDotVec             852 1.0 2.3064e+00 1.0 7.18e+08 1.0 0.0e+00 0.0e+00 0.0e+00  2  6  0  0  0   2  6  0  0  0   311
BVOrthogonalizeV      33 1.0 1.7609e+00 1.0 5.31e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1  5  0  0  0   1  5  0  0  0   302
BVScale               60 1.0 7.2256e-02 1.0 1.64e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   227
BVSetRandom            6 1.0 2.7492e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
BVMatProject          58 1.0 1.3924e+00 1.0 3.87e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1  3  0  0  0   1  3  0  0  0   278
DSSolve               33 1.0 9.0306e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DSVectors             60 1.0 4.4394e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DSOther               69 1.0 2.7680e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Matrix   165            164   2112124100     0.
              Vector   249            249    572244768     0.
          EPS Solver     1              1         2740     0.
  Spectral Transform     1              1          812     0.
       Krylov Solver     2              2         2552     0.
      Preconditioner     3              3         2800     0.
       Basis Vectors    35             35        84368     0.
         PetscRandom     1              1          638     0.
              Region     1              1          656     0.
       Direct Solver     1              1        17972     0.
           Index Set     2              2      1092896     0.
              Viewer     1              0            0     0.
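The derived columns of the event table above can be reproduced from its raw Time and Flops values: Mflop/s is flops divided by max time, scaled by 1e-6, and %T is the event's share of the 1.179e+02 s total run time. A minimal Python sketch using the MatMult row from this log (the variable names are illustrative, not PETSc API):

```python
# Recompute derived event-table columns from raw values in this log.
# Mflop/s = 1e-6 * (sum of flops) / (max time), per the phase summary.

matmult_time_sec = 3.8235e+01   # MatMult row: Time (sec), Max
matmult_flops = 2.87e+09        # MatMult row: Flops, Max
total_time_sec = 1.179e+02      # overall run time from the summary header

mflops = 1e-6 * matmult_flops / matmult_time_sec
percent_time = 100.0 * matmult_time_sec / total_time_sec

print(round(mflops))        # matches the 75 in MatMult's Mflop/s column
print(round(percent_time))  # matches the 32 in MatMult's %T column
```

The same arithmetic applied to the EPSSolve row (1.13e+10 flops over 1.0800e+02 s) gives the reported 104 Mflop/s and 92% of total time; with one processor, sum and max over processors coincide.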
========================================================================================================================
Average time to get PetscTime(): 7.15256e-08
#PETSc Option Table entries:
-eps_nev 3
-eps_smallest_real
-eps_tol 0.001
-eps_type jd
-log_view
-st_ksp_rtol 0.001
-st_ksp_type cg
-st_pc_type bjacobi
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich
-----------------------------------------
Libraries compiled on Wed Feb 22 17:20:19 2017 on research-5.ece.iastate.edu
Machine characteristics: Linux-3.10.0-514.2.2.el7.x86_64-x86_64-with-redhat-7.3-Maipo
Using PETSc directory: /tmp/Bodhi/petsc-3.7.5/petsc-3.7.5
Using PETSc arch: arch-linux2-c-debug
-----------------------------------------
Using C compiler: /tmp/Bodhi/petsc-3.7.5/petsc-3.7.5/arch-linux2-c-debug/bin/mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fvisibility=hidden -g3 ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /tmp/Bodhi/petsc-3.7.5/petsc-3.7.5/arch-linux2-c-debug/bin/mpif90 -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/tmp/Bodhi/petsc-3.7.5/petsc-3.7.5/arch-linux2-c-debug/include -I/tmp/Bodhi/petsc-3.7.5/petsc-3.7.5/include -I/tmp/Bodhi/petsc-3.7.5/petsc-3.7.5/include -I/tmp/Bodhi/petsc-3.7.5/petsc-3.7.5/arch-linux2-c-debug/include
-----------------------------------------
Using C linker: /tmp/Bodhi/petsc-3.7.5/petsc-3.7.5/arch-linux2-c-debug/bin/mpicc
Using Fortran linker: /tmp/Bodhi/petsc-3.7.5/petsc-3.7.5/arch-linux2-c-debug/bin/mpif90
Using libraries: -Wl,-rpath,/tmp/Bodhi/petsc-3.7.5/petsc-3.7.5/arch-linux2-c-debug/lib -L/tmp/Bodhi/petsc-3.7.5/petsc-3.7.5/arch-linux2-c-debug/lib -lpetsc -Wl,-rpath,/tmp/Bodhi/petsc-3.7.5/petsc-3.7.5/arch-linux2-c-debug/lib -L/tmp/Bodhi/petsc-3.7.5/petsc-3.7.5/arch-linux2-c-debug/lib -lflapack -lfblas -lpthread -lm -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.8.5 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.5 -lmpifort -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpicxx -lstdc++ -Wl,-rpath,/tmp/Bodhi/petsc-3.7.5/petsc-3.7.5/arch-linux2-c-debug/lib -L/tmp/Bodhi/petsc-3.7.5/petsc-3.7.5/arch-linux2-c-debug/lib -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.8.5 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.5 -ldl -Wl,-rpath,/tmp/Bodhi/petsc-3.7.5/petsc-3.7.5/arch-linux2-c-debug/lib -lmpi -lgcc_s -ldl
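The PETSc option table above corresponds to an invocation along these lines; this is a reconstruction from the logged options, not a command copied from the log. The options select SLEPc's Jacobi-Davidson eigensolver (`jd`) for the 3 smallest real eigenvalues, with CG plus block-Jacobi preconditioning for the inner spectral-transform solves, and `-log_view` produces this profile at exit.

```shell
# Reconstructed command line (assumes the same ./RunPart binary as above):
./RunPart -eps_type jd -eps_nev 3 -eps_smallest_real -eps_tol 0.001 \
          -st_ksp_type cg -st_ksp_rtol 0.001 -st_pc_type bjacobi \
          -log_view
```

Per the warning banner earlier in the log, rebuilding PETSc with `./configure --with-debugging=no` before timing runs like this one would typically improve these numbers by a factor of two to three.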