************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

bin/navier-stokes on a linux-gnu named gb-r10n12.irc.sara.nl with 4 processors, by sanderse Tue Jul 10 15:14:32 2012
Using Petsc Release Version 3.3.0, Patch 1, Fri Jun 15 09:30:49 CDT 2012

                         Max       Max/Min        Avg      Total
Time (sec):           6.749e+01      1.00061   6.747e+01
Objects:              4.630e+02      1.00000   4.630e+02
Flops:                6.075e+08      1.00330   6.065e+08  2.426e+09
Flops/sec:            9.006e+06      1.00385   8.990e+06  3.596e+07
MPI Messages:         2.320e+02      1.98291   1.745e+02  6.980e+02
MPI Message Lengths:  1.681e+07      1.99407   7.232e+04  5.048e+07
MPI Reductions:       1.159e+03      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 6.7467e+01 100.0%  2.4262e+09 100.0%  6.980e+02 100.0%  7.232e+04      100.0%  1.158e+03  99.9%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase          %f - percent flops in this phase
      %M - percent messages in this phase      %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %f %M %L %R  %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

VecTDot              180 1.0 1.6677e-01 1.5 9.00e+07 1.0 0.0e+00 0.0e+00 1.8e+02  0 15  0  0 16   0 15  0  0 16  2159
VecNorm              109 1.0 1.6885e-02 1.0 5.45e+07 1.0 0.0e+00 0.0e+00 1.1e+02  0  9  0  0  9   0  9  0  0  9 12911
VecCopy               21 1.0 1.0588e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               345 1.0 3.7298e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY              180 1.0 1.6885e-01 1.2 9.00e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0 15  0  0  0   0 15  0  0  0  2132
VecAYPX               95 1.0 1.0317e-01 1.3 4.23e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  7  0  0  0   0  7  0  0  0  1640
VecAXPBYCZ             2 1.0 3.6781e-03 1.8 2.00e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2175
VecAssemblyBegin     112 1.0 1.8075e-03 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 8.4e+01  0  0  0  0  7   0  0  0  0  7     0
VecAssemblyEnd       112 1.0 4.2200e-05 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin      110 1.0 8.4791e-03 2.2 0.00e+00 0.0 6.0e+02 7.9e+04 0.0e+00  0  0 87 95  0   0  0 87 95  0     0
VecScatterEnd        110 1.0 5.6755e-02 5.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSetRandom          10 1.0 7.0530e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatMult              116 1.0 8.1799e-01 1.1 3.20e+08 1.0 6.0e+02 8.0e+04 0.0e+00  1 53 86 94  0   1 53 86 94  0  1559
MatConvert            10 1.0 7.0609e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               6 1.0 1.3887e-02 1.1 2.98e+06 1.0 6.0e+00 4.0e+04 0.0e+00  0  0  1  0  0   0  0  1  0  0   855
MatAssemblyBegin     116 1.0 4.4550e-02 2.9 0.00e+00 0.0 1.8e+01 2.7e+04 4.4e+01  0  0  3  1  4   0  0  3  1  4     0
MatAssemblyEnd       116 1.0 2.6326e-01 1.0 0.00e+00 0.0 5.2e+01 1.5e+04 1.6e+02  0  0  7  2 14   0  0  7  2 14     0
MatGetValues           6 1.0 3.0994e-06 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRow        3270606 1.0 3.3788e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRowIJ            2 1.0 1.3113e-05 3.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatView               10 1.0 1.4005e-03 3.6 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+01  0  0  0  0  1   0  0  0  0  1     0
MatAXPY                5 1.0 4.2841e-01 1.0 0.00e+00 0.0 1.2e+01 2.0e+04 3.8e+01  1  0  2  0  3   1  0  2  0  3     0
MatTranspose           3 1.0 1.6110e-01 1.0 0.00e+00 0.0 3.0e+01 2.0e+04 5.1e+01  0  0  4  1  4   0  0  4  1  4     0
MatMatMult            27 1.0 3.3668e-01 1.0 5.96e+06 1.0 3.6e+01 4.3e+04 1.2e+02  0  1  5  3 10   0  1  5  3 10    71
MatMatMultSym         27 1.0 2.7376e-01 1.0 0.00e+00 0.0 3.0e+01 3.6e+04 1.1e+02  0  0  4  2 10   0  0  4  2 10     0
MatMatMultNum         27 1.0 6.3208e-02 1.0 5.96e+06 1.0 6.0e+00 8.0e+04 6.0e+00  0  1  1  1  1   0  1  1  1  1   376
MatGetLocalMat         6 1.0 5.5014e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetBrAoCol          6 1.0 3.1843e-03 1.1 0.00e+00 0.0 2.4e+01 5.5e+04 6.0e+00  0  0  3  3  1   0  0  3  3  1     0
KSPSetUp               1 1.0 3.8800e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve              10 1.0 4.9268e+01 1.0 5.97e+08 1.0 5.9e+02 8.0e+04 4.0e+02 73 98 85 94 34  73 98 85 94 34    48
PCSetUp                1 1.0 1.5216e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00 23  0  0  0  0  23  0  0  0  0     0
PCApply              109 1.0 4.7984e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 71  0  0  0  0  71  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage

              Vector   250            171     68449128     0
      Vector Scatter    22             22        23320     0
              Matrix   139            133    352176864     0
   Matrix Null Space     1              1          588     0
           Index Set    38             38       168424     0
       Krylov Solver     1              1         1144     0
      Preconditioner     1              1         1040     0
         PetscRandom    10             10         6160     0
              Viewer     1              0            0     0
========================================================================================================================
Average time to get PetscTime(): 0
Average time for MPI_Barrier(): 2.6226e-06
Average time for zero size MPI_Send(): 7.21216e-06
#PETSc Option Table entries:
-ksp_view
-log_summary
-random_type rand
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure run at: Tue Jul 10 11:40:00 2012
Configure options: --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicxx --with-shared-libraries --with-hypre --download-hypre --with-blas-lapack-dir=/sara/sw/intel/Compiler/11.0/069 --with-hdf5 --download-hdf5 --with-debugging=0
-----------------------------------------
Libraries compiled on Tue Jul 10 11:40:00 2012 on login3.irc.sara.nl
Machine characteristics: Linux-2.6.32.41-sara2-x86_64-with-debian-6.0.5
Using PETSc directory: /home/sanderse/Software/petsc-3.3-p1/
Using PETSc arch: linux-gnu-c-debug
-----------------------------------------
Using C compiler: mpicc -fPIC -wd1572 -O3 ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90 -fPIC -O3 ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/home/sanderse/Software/petsc-3.3-p1/linux-gnu-c-debug/include -I/home/sanderse/Software/petsc-3.3-p1/include -I/home/sanderse/Software/petsc-3.3-p1/include -I/home/sanderse/Software/petsc-3.3-p1/linux-gnu-c-debug/include -I/sara/sw/openmpi-intel-1.4.5/include -I/sara/sw/ofed/1.5.0/64/include -I/sara/sw/intel/Compiler/11.0/069/mkl/include
-----------------------------------------
Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/home/sanderse/Software/petsc-3.3-p1/linux-gnu-c-debug/lib -L/home/sanderse/Software/petsc-3.3-p1/linux-gnu-c-debug/lib -lpetsc -lX11 -lpthread -Wl,-rpath,/home/sanderse/Software/petsc-3.3-p1/linux-gnu-c-debug/lib -L/home/sanderse/Software/petsc-3.3-p1/linux-gnu-c-debug/lib -lHYPRE -Wl,-rpath,/sara/sw/openmpi-intel-1.4.5/lib -L/sara/sw/openmpi-intel-1.4.5/lib -Wl,-rpath,/sara/sw/ofed/1.5.0/64/lib -L/sara/sw/ofed/1.5.0/64/lib -Wl,-rpath,/sara/sw/intel/Compiler/11.0/069/lib/intel64 -L/sara/sw/intel/Compiler/11.0/069/lib/intel64 -Wl,-rpath,/sara/sw/intel/Compiler/11.0/069/mkl/lib/em64t -L/sara/sw/intel/Compiler/11.0/069/mkl/lib/em64t -Wl,-rpath,/sara/sw/intel/Compiler/11.0/074/lib/intel64 -L/sara/sw/intel/Compiler/11.0/074/lib/intel64 -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.4.5 -L/usr/lib/gcc/x86_64-linux-gnu/4.4.5 -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -lmpi_cxx -lstdc++ -Wl,-rpath,/sara/sw/intel/Compiler/11.0/069 -L/sara/sw/intel/Compiler/11.0/069 -lmkl_lapack -lmkl -lguide -lpthread -lhdf5_fortran -lhdf5 -lhdf5hl_fortran -lhdf5_hl -lz -lmpi_f90 -lmpi_f77 -lifport -lifcore -lm -lm -lmpi_cxx -lstdc++ -lmpi_cxx -lstdc++ -ldl -lmpi -lopen-rte -lopen-pal -lnsl -lutil -limf -lsvml -lipgo -ldecimal -lirc -lgcc_s -lpthread -lirc_s -ldl
-----------------------------------------