************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

bin/navier-stokes on a arch-linu named slippy.sen.cwi.nl with 2 processors, by sanderse Wed May 30 09:04:54 2012
Using Petsc Release Version 3.2.0, Patch 6, Wed Jan 11 09:28:45 CST 2012

                         Max       Max/Min        Avg      Total
Time (sec):           1.351e+02      1.00000   1.351e+02
Objects:              5.430e+03      1.00000   5.430e+03
Flops:                9.038e+06      1.00000   9.038e+06  1.808e+07
Flops/sec:            6.690e+04      1.00000   6.690e+04  1.338e+05
Memory:               1.549e+07      1.00011              3.097e+07
MPI Messages:         1.542e+03      1.00000   1.542e+03  3.084e+03
MPI Message Lengths:  6.473e+05      1.00000   4.198e+02  1.295e+06
MPI Reductions:       1.807e+04      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 1.3510e+02 100.0%  1.8075e+07 100.0%  3.084e+03 100.0%  4.198e+02      100.0%  1.807e+04 100.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase            %f - percent flops in this phase
      %M - percent messages in this phase        %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------

      ##########################################################
      #                                                        #
      #                          WARNING!!!                    #
      #                                                        #
      #   This code was compiled with a debugging option,      #
      #   To get timing results run ./configure                #
      #   using --with-debugging=no, the performance will      #
      #   be generally two or three times faster.              #
      #                                                        #
      ##########################################################
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %f %M %L %R  %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

VecView                49 1.0 4.2963e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  3  0  0  0  0   3  0  0  0  0     0
VecDot                198 1.0 5.0116e-03 1.1 8.56e+04 1.0 0.0e+00 0.0e+00 6.6e+01  0  1  0  0  0   0  1  0  0  0    34
VecTDot               392 1.0 4.9028e-02 1.1 3.92e+05 1.0 0.0e+00 0.0e+00 3.9e+02  0  4  0  0  2   0  4  0  0  2    16
VecNorm               322 1.0 1.7893e-02 1.0 2.78e+05 1.0 0.0e+00 0.0e+00 3.2e+02  0  3  0  0  2   0  3  0  0  2    31
VecScale               86 1.0 3.2830e-04 1.1 3.27e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   199
VecCopy               345 1.0 1.3387e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet                856 1.0 2.0201e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY               797 1.0 4.0824e-03 1.1 7.60e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  8  0  0  0   0  8  0  0  0   372
VecAYPX               426 1.0 2.5971e-03 1.1 3.30e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  4  0  0  0   0  4  0  0  0   254
VecAXPBYCZ            230 1.0 1.3807e-03 1.1 4.16e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  5  0  0  0   0  5  0  0  0   603
VecWAXPY              199 1.0 1.0917e-03 1.1 9.16e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0   168
VecMAXPY              120 1.0 7.3242e-04 1.1 2.70e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  3  0  0  0   0  3  0  0  0   737
VecAssemblyBegin      494 1.0 5.6386e-02 1.2 0.00e+00 0.0 1.6e+01 2.8e+03 8.6e+02  0  0  1  3  5   0  0  1  3  5     0
VecAssemblyEnd        494 1.0 1.2319e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecPointwiseMult      975 1.0 4.8943e-03 1.0 4.37e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  5  0  0  0   0  5  0  0  0   179
VecScatterBegin      2629 1.0 9.0248e-02 1.1 0.00e+00 0.0 2.6e+03 4.6e+02 0.0e+00  0  0 85 92  0   0  0 85 92  0     0
VecScatterEnd        2629 1.0 6.0380e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatMult              1055 1.0 3.3862e+01 1.0 2.33e+06 1.0 1.1e+03 5.2e+02 0.0e+00 25 26 35 43  0  25 26 35 43  0     0
MatMultAdd           1580 1.0 5.3828e+01 1.0 3.51e+06 1.0 1.5e+03 4.1e+02 0.0e+00 40 39 50 49  0  40 39 50 49  0     0
MatConvert             73 1.0 5.0633e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  0   0  0  0  0  0     0
MatScale                9 1.0 1.6282e-03 1.1 1.40e+04 1.0 2.0e+00 4.0e+02 1.2e+01  0  0  0  0  0   0  0  0  0  0    17
MatAssemblyBegin     1435 1.0 6.5864e-02 1.1 0.00e+00 0.0 6.0e+00 2.7e+02 4.3e+02  0  0  0  0  2   0  0  0  0  2     0
MatAssemblyEnd       1435 1.0 2.6729e-01 1.0 0.00e+00 0.0 2.1e+02 8.5e+01 3.5e+03  0  0  7  1 19   0  0  7  1 19     0
MatGetValues           96 1.0 1.6594e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRow           83884 1.0 1.0052e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRowIJ             2 1.0 5.2452e-06 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice       96 1.0 1.7238e+00 1.0 0.00e+00 0.0 2.3e+02 1.5e+02 6.7e+02  1  0  7  3  4   1  0  7  3  4     0
MatAXPY                56 1.0 8.2416e-01 1.0 0.00e+00 0.0 2.0e+01 1.6e+02 4.3e+02  1  0  1  0  2   1  0  1  0  2     0
MatTranspose            3 1.0 8.9080e-03 1.0 0.00e+00 0.0 1.0e+01 2.0e+02 9.0e+01  0  0  0  0  0   0  0  0  0  0     0
MatMatMult            357 1.0 9.8413e+00 1.0 8.81e+04 1.0 3.0e+02 1.4e+02 3.7e+03  7  1 10  3 20   7  1 10  3 20     0
MatGetLocalMatCondensed  48 1.0 8.6693e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 3.4e+02  1  0  0  0  2   1  0  0  0  2     0
MatGetBrowsOfAcols     48 1.0 8.6068e-01 1.0 0.00e+00 0.0 2.3e+02 1.5e+02 4.3e+02  1  0  7  3  2   1  0  7  3  2     0
KSPSetup                1 1.0 2.1319e-03 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               41 1.0 2.2578e+01 1.0 2.64e+06 1.0 4.7e+02 8.0e+02 2.2e+03 17 29 15 29 12  17 29 15 29 12     0
PCSetUp                 1 1.0 1.5878e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  0   0  0  0  0  0     0
PCApply               278 1.0 1.4611e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.8e+02 11  0  0  0  2  11  0  0  0  2     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container        48             12         6576     0
              Vector      3027           1779      7528496     0
      Vector Scatter       222             58        60088     0
              Matrix      1546            632      4264196     0
   Matrix Null Space         1              1          580     0
           Index Set       576            468       346272     0
       Krylov Solver         1              1         1136     0
      Preconditioner         1              1         1032     0
              Viewer         8              7         4760     0
========================================================================================================================
Average time to get PetscTime(): 5.96046e-07
Average time for MPI_Barrier(): 5.07832e-05
Average time for zero size MPI_Send(): 2.80142e-05
#PETSc Option Table entries:
-log_summary
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8
Configure run at: Wed Feb 22 18:04:02 2012
Configure options: --download-mpich=1 --with-shared-libraries --download-f-blas-lapack=1 --with-fc=gfortran --with-cxx=g++ --download-hypre --with-hdf5 --download-hdf5 --with-cc=gcc
-----------------------------------------
Libraries compiled on Wed Feb 22 18:04:02 2012 on slippy.mas.cwi.nl
Machine characteristics: Linux-3.2.2-1.fc16.x86_64-x86_64-with-fedora-16-Verne
Using PETSc directory: /export/scratch1/sanderse/software/petsc-3.2-p6-debug/
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------
Using C compiler: /export/scratch1/sanderse/software/petsc-3.2-p6-debug/arch-linux2-c-opt/bin/mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /export/scratch1/sanderse/software/petsc-3.2-p6-debug/arch-linux2-c-opt/bin/mpif90 -fPIC -Wall -Wno-unused-variable -Wno-unused-dummy-argument -g ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/export/scratch1/sanderse/software/petsc-3.2-p6-debug/arch-linux2-c-opt/include -I/export/scratch1/sanderse/software/petsc-3.2-p6-debug/include -I/export/scratch1/sanderse/software/petsc-3.2-p6-debug/include -I/export/scratch1/sanderse/software/petsc-3.2-p6-debug/arch-linux2-c-opt/include
-----------------------------------------
Using C linker: /export/scratch1/sanderse/software/petsc-3.2-p6-debug/arch-linux2-c-opt/bin/mpicc
Using Fortran linker: /export/scratch1/sanderse/software/petsc-3.2-p6-debug/arch-linux2-c-opt/bin/mpif90
Using libraries: -Wl,-rpath,/export/scratch1/sanderse/software/petsc-3.2-p6-debug/arch-linux2-c-opt/lib -L/export/scratch1/sanderse/software/petsc-3.2-p6-debug/arch-linux2-c-opt/lib -lpetsc -lX11 -Wl,-rpath,/export/scratch1/sanderse/software/petsc-3.2-p6-debug/arch-linux2-c-opt/lib -L/export/scratch1/sanderse/software/petsc-3.2-p6-debug/arch-linux2-c-opt/lib -lHYPRE -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.6.2 -lmpichcxx -lstdc++ -lpthread -lflapack -lfblas -lhdf5_fortran -lhdf5 -lz -lm -L/usr/lib/gcc/x86_64-redhat-linux/4.6.2 -ldl -lmpich -lopa -lmpl -lrt -lpthread -lgcc_s -lmpichf90 -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpichcxx -lstdc++ -ldl -lmpich -lopa -lmpl -lrt -lpthread -lgcc_s -ldl
-----------------------------------------
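
A quick cross-check of the headline numbers above, taking the Total Mflop/s convention from the phase summary legend with a
conversion factor of 1e-6 (flops to Mflops):

    (sum of flops) / (max time) = 1.808e+07 / 1.351e+02 s = 1.338e+05 flop/s, i.e. about 0.13 Mflop/s,

which matches the reported Flops/sec Total of 1.338e+05. The same convention explains the zeros in the per-event Mflop/s
column: for MatMult, roughly 2 x 2.33e+06 flops over 3.39e+01 s is about 0.14 Mflop/s, which prints as 0 in the table.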
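
The phase summary legend above notes that logging stages are set with PetscLogStagePush() and PetscLogStagePop(), yet this
run only reports the default Main Stage. As a minimal sketch (not taken from the profiled navier-stokes code), a separate
stage could be registered around the linear solve using the PETSc 3.2 C API; the stage name "Pressure solve" and the
commented-out solver objects are illustrative placeholders:

#include <petscksp.h>

/* Minimal sketch: put the linear solve in its own logging stage so that
   -log_summary reports it separately from the default "Main Stage".
   The stage name and the commented-out KSPSolve() call are placeholders. */
int main(int argc, char **argv)
{
  PetscLogStage  solveStage;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);CHKERRQ(ierr);

  ierr = PetscLogStageRegister("Pressure solve", &solveStage);CHKERRQ(ierr);

  /* ... create and assemble Mat A, Vec b, Vec x, and set up the KSP here ... */

  ierr = PetscLogStagePush(solveStage);CHKERRQ(ierr);   /* events below are charged to this stage */
  /* ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr); */
  ierr = PetscLogStagePop();CHKERRQ(ierr);              /* back to the enclosing stage */

  ierr = PetscFinalize();
  return ierr;
}

With a stage pushed like this, running again with -log_summary should add an "Event Stage 1: Pressure solve" section, so
the KSPSolve/PCApply/MatMult cost of the solve would be reported separately from assembly and I/O.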