************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

../bin/RUN.exe on a linux-gnu named n1. with 120 processors, by waad Thu Jul 31 16:11:34 2008
Using Petsc Release Version 2.3.3, Patch 13, Thu May 15 17:29:26 CDT 2008 HG revision: 4466c6289a0922df26e20626fd4a0b4dd03c8124

                         Max       Max/Min        Avg      Total
Time (sec):           5.283e+01      1.02357   5.169e+01
Objects:              2.600e+02      1.00000   2.600e+02
Flops:                3.187e+09      1.69853   2.721e+09  3.265e+11
Flops/sec:            6.165e+07      1.69865   5.264e+07  6.317e+09
Memory:               6.081e+07      1.39801              6.608e+09
MPI Messages:         1.067e+05      1.00000   1.067e+05  1.281e+07
MPI Message Lengths:  5.205e+08      1.00081   4.875e+03  6.245e+10
MPI Reductions:       1.898e+01      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                          e.g., VecAXPY() for real vectors of length N --> 2N flops
                          and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 5.1693e+01 100.0%  3.2654e+11 100.0%  1.281e+07 100.0%  4.875e+03      100.0%  2.277e+03 100.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops/sec: Max - maximum over all processors
                       Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 1e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------


      ##########################################################
      #                                                        #
      #                          WARNING!!!                    #
      #                                                        #
      #   This code was compiled with a debugging option.      #
      #   To get timing results, run config/configure.py       #
      #   using --with-debugging=no; the performance will      #
      #   generally be two or three times faster.              #
      #                                                        #
      ##########################################################


      ##########################################################
      #                                                        #
      #                          WARNING!!!                    #
      #                                                        #
      #   This code was run without the PreLoadBegin()         #
      #   macros. To get timing results we always recommend    #
      #   preloading; otherwise timing numbers may be          #
      #   meaningless.                                         #
      ##########################################################
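Both warnings above are addressed in the application source, not in PETSc itself. Below is a minimal sketch of the two remedies, named logging stages and preloading. The function names follow recent PETSc releases and are assumptions relative to this log: the 2.3.x series profiled here spells the preload macros PreLoadBegin()/PreLoadEnd() rather than PetscPreLoadBegin()/PetscPreLoadEnd().

    /* Sketch only: stage logging plus preloading, per the warnings above.
       API names follow recent PETSc and may differ slightly in 2.3.x. */
    #include <petsc.h>

    int main(int argc, char **argv)
    {
      PetscLogStage assembly;

      PetscInitialize(&argc, &argv, NULL, NULL);

      /* Named stages make the profiler attribute time to phases of the
         computation instead of lumping everything into "Main Stage". */
      PetscLogStageRegister("Assembly", &assembly);
      PetscLogStagePush(assembly);
      /* ... create and fill Mat/Vec objects here ... */
      PetscLogStagePop();

      /* PetscPreLoadBegin() executes the enclosed block twice: the first
         pass pages in code and warms caches, and only the second pass is
         reported, which is what the preloading warning above asks for. */
      PetscPreLoadBegin(PETSC_TRUE, "Solve");
      /* ... KSPSolve() or equivalent here ... */
      PetscPreLoadEnd();

      PetscFinalize();
      return 0;
    }

With named stages in place, the per-stage section of the event table below separates assembly from solve time instead of attributing everything to the default Main Stage.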

Event                Count      Time (sec)     Flops/sec                         --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions   Memory  Descendants' Mem.

--- Event Stage 0: Main Stage

              Matrix     7              0            0     0
           Index Set     5              2          672     0
                 Vec   243            225     22109920     0
         Vec Scatter     1              0            0     0
       Krylov Solver     2              0            0     0
      Preconditioner     2              0            0     0
========================================================================================================================
Average time to get PetscTime(): 1.90735e-07
Average time for MPI_Barrier(): 0.000220013
Average time for zero size MPI_Send(): 6.09159e-06
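The three averages just above are microbenchmarks of the timer and the MPI layer. A standalone sketch of the same kind of measurement, using only plain MPI (hypothetical, not part of the profiled run), looks like this:

    /* Sketch only: measure the average cost of MPI_Barrier(), in the
       spirit of the "Average time for MPI_Barrier()" line above. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
      const int trials = 100;
      double    t0, avg;
      int       rank;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      MPI_Barrier(MPI_COMM_WORLD);        /* synchronize before timing */
      t0 = MPI_Wtime();
      for (int i = 0; i < trials; i++) MPI_Barrier(MPI_COMM_WORLD);
      avg = (MPI_Wtime() - t0) / trials;

      if (rank == 0) printf("Average time for MPI_Barrier(): %g s\n", avg);
      MPI_Finalize();
      return 0;
    }

On this run the 2.2e-4 s barrier across 120 processes is much costlier than the 6.1e-6 s zero-size send, which is expected since a barrier involves every rank.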
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8
Configure run at: Mon Jul 14 15:09:38 2008
Configure options: --download-f-blas-lapack=1 --with-shared=0
-----------------------------------------
Libraries compiled on Mon Jul 14 15:10:09 CDT 2008 on n1
Machine characteristics: Linux n1 2.6.16.46-0.12-smp #1 SMP Thu May 17 14:00:09 UTC 2007 x86_64 x86_64 x86_64 GNU/Linux
Using PETSc directory: /home/waad/soft/petsc-2.3.3-p13
Using PETSc arch: linux-gnu-c-debug
-----------------------------------------
Using C compiler: mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -g3
Using Fortran compiler: mpif90 -I. -fPIC -g
-----------------------------------------
Using include paths: -I/home/waad/soft/petsc-2.3.3-p13 -I/home/waad/soft/petsc-2.3.3-p13/bmake/linux-gnu-c-debug -I/home/waad/soft/petsc-2.3.3-p13/include -I/home/waad/mvapich2-install/include -I/home/waad/mvapich2-install/include
------------------------------------------
Using C linker: mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -g3
Using Fortran linker: mpif90 -I. -fPIC -g
Using libraries: -Wl,-rpath,/home/waad/soft/petsc-2.3.3-p13/lib/linux-gnu-c-debug -L/home/waad/soft/petsc-2.3.3-p13/lib/linux-gnu-c-debug -lpetscts -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc -Wl,-rpath,/home/waad/soft/petsc-2.3.3-p13/externalpackages/fblaslapack/linux-gnu-c-debug -L/home/waad/soft/petsc-2.3.3-p13/externalpackages/fblaslapack/linux-gnu-c-debug -lflapack -Wl,-rpath,/home/waad/soft/petsc-2.3.3-p13/externalpackages/fblaslapack/linux-gnu-c-debug -L/home/waad/soft/petsc-2.3.3-p13/externalpackages/fblaslapack/linux-gnu-c-debug -lfblas -lm -Wl,-rpath,/home/waad/mvapich2-install/lib -L/home/waad/mvapich2-install/lib -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.1.2 -L/usr/lib64/gcc/x86_64-suse-linux/4.1.2 -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.1.2/../../../../lib64 -L/usr/lib64/gcc/x86_64-suse-linux/4.1.2/../../../../lib64 -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 -L/usr/lib/../lib64 -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.1.2/../../../../x86_64-suse-linux/lib -L/usr/lib64/gcc/x86_64-suse-linux/4.1.2/../../../../x86_64-suse-linux/lib -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.1.2/../../.. -L/usr/lib64/gcc/x86_64-suse-linux/4.1.2/../../.. -ldl -lmpich -lpthread -lrdmacm -libverbs -libumad -lrt -lgcc_s -lmpichf90 -Wl,-rpath,/home/waad/mvapich2-install/lib -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.1.2 -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.1.2/../../../../lib64 -Wl,-rpath,/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.1.2/../../../../x86_64-suse-linux/lib -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.1.2/../../.. -Wl,-rpath,/home/waad/intel/fce/10.1.015/lib -L/home/waad/intel/fce/10.1.015/lib -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.1.2/ -L/usr/lib64/gcc/x86_64-suse-linux/4.1.2/ -lifport -lifcore -limf -lsvml -lm -lipgo -lirc -lirc_s -Wl,-rpath,/home/waad/mvapich2-install/lib -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.1.2 -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.1.2/../../../../lib64 -Wl,-rpath,/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.1.2/../../../../x86_64-suse-linux/lib -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.1.2/../../.. -lm -Wl,-rpath,/home/waad/mvapich2-install/lib -L/home/waad/mvapich2-install/lib -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.1.2 -L/usr/lib64/gcc/x86_64-suse-linux/4.1.2 -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.1.2/../../../../lib64 -L/usr/lib64/gcc/x86_64-suse-linux/4.1.2/../../../../lib64 -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 -L/usr/lib/../lib64 -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.1.2/../../../../x86_64-suse-linux/lib -L/usr/lib64/gcc/x86_64-suse-linux/4.1.2/../../../../x86_64-suse-linux/lib -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.1.2/../../.. -L/usr/lib64/gcc/x86_64-suse-linux/4.1.2/../../.. -ldl -lmpich -lpthread -lrdmacm -libverbs -libumad -lrt -lgcc_s -ldl -lc
------------------------------------------
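A report like the one above is what PETSc prints at PetscFinalize() when the executable is run with -log_summary (renamed -log_view in later releases), so no source changes are required to obtain it. For completeness, a minimal sketch of producing it programmatically, assuming a recent PETSc API (the 2.3.x series used here exposed PetscLogPrintSummary() instead of PetscLogView()):

    /* Sketch only: generate a performance summary like this document.
       Assumes a recent PETSc; 2.3.x used PetscLogPrintSummary(). */
    #include <petsc.h>

    int main(int argc, char **argv)
    {
      PetscInitialize(&argc, &argv, NULL, NULL);
      PetscLogDefaultBegin();                  /* start collecting event data */
      /* ... application work to be profiled ... */
      PetscLogView(PETSC_VIEWER_STDOUT_WORLD); /* print the summary report */
      PetscFinalize();
      return 0;
    }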