Number of iterations = 7
Residual norm 0.122685
Setup time: 7.3178110123e+00
Solve time: 2.4407420158e+00
Total:      9.7585530281e+00
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./ex10 on a intel-opt-precise-O3 named lagrange.tomato with 1 processor, by jfe Fri Aug 17 09:58:42 2012
Using Petsc Development HG revision: f9c6cac2d69c724a2258d4e0dc2f51b0825aa875  HG Date: Thu Aug 16 08:37:18 2012 -0700

                         Max       Max/Min        Avg      Total
Time (sec):           9.950e+00      1.00000   9.950e+00
Objects:              1.760e+02      1.00000   1.760e+02
Flops:                2.777e+09      1.00000   2.777e+09  2.777e+09
Flops/sec:            2.791e+08      1.00000   2.791e+08  2.791e+08
MPI Messages:         0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Reductions:       2.010e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 5.1999e-04   0.0%  0.0000e+00   0.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%
 1:     Load system: 1.5789e-01   1.6%  0.0000e+00   0.0%  0.000e+00   0.0%  0.000e+00        0.0%  8.000e+00   4.0%
 2:   KSPSetUpSolve: 9.7915e+00  98.4%  2.7769e+09 100.0%  0.000e+00   0.0%  0.000e+00        0.0%  1.920e+02  95.5%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase            %f - percent flops in this phase
      %M - percent messages in this phase        %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %f %M %L %R  %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

PetscBarrier           1 1.0 2.8610e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0

--- Event Stage 1: Load system

MatAssemblyBegin       1 1.0 1.1921e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd         1 1.0 2.8972e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0  18  0  0  0  0     0
MatLoad                1 1.0 1.4296e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00  1  0  0  0  1  91  0  0  0 25     0
VecSet                 6 1.0 7.6947e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   5  0  0  0  0     0
VecAssemblyBegin       2 1.0 9.5367e-07 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyEnd         2 1.0 0.0000e+00 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecLoad                2 1.0 7.2479e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00  0  0  0  0  0   5  0  0  0 12     0

--- Event Stage 2: KSPSetUpSolve

MatMult              198 1.0 2.3313e+00 1.0 1.74e+09 1.0 0.0e+00 0.0e+00 0.0e+00 23 63  0  0  0  24 63  0  0  0   747
MatMultAdd            24 1.0 3.9210e-02 1.0 1.22e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   311
MatMultTranspose      24 1.0 3.0026e-02 1.0 1.22e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   406
MatSolve              16 1.0 5.1260e-05 1.0 2.93e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   572
MatLUFactorSym         1 1.0 4.7922e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00  0  0  0  0  1   0  0  0  0  2     0
MatLUFactorNum         1 1.0 4.0054e-05 1.0 9.41e+03 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   235
MatConvert             3 1.0 4.7890e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               3 1.0 4.8120e-02 1.0 2.45e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0   510
MatAssemblyBegin      27 1.0 5.9605e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd        27 1.0 2.1214e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
MatGetRow        1525666 1.0 1.0185e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
MatGetRowIJ            1 1.0 1.5020e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering         1 1.0 7.7963e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatCoarsen             3 1.0 9.0994e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 9.0e+00  1  0  0  0  4   1  0  0  0  5     0
MatPtAP                3 1.0 4.2753e-01 1.0 5.12e+07 1.0 0.0e+00 0.0e+00 1.8e+01  4  2  0  0  9   4  2  0  0  9   120
MatPtAPSymbolic        3 1.0 2.6720e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.8e+01  3  0  0  0  9   3  0  0  0  9     0
MatPtAPNumeric         3 1.0 1.6031e-01 1.0 5.12e+07 1.0 0.0e+00 0.0e+00 0.0e+00  2  2  0  0  0   2  2  0  0  0   319
MatTrnMatMult          3 1.0 4.9013e+00 1.0 3.19e+08 1.0 0.0e+00 0.0e+00 1.2e+01 49 11  0  0  6  50 11  0  0  6    65
MatGetSymTrans         3 1.0 1.6622e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
VecMDot               45 1.0 5.3461e-02 1.0 1.25e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1  4  0  0  0   1  4  0  0  0  2332
VecNorm               68 1.0 1.1581e-02 1.0 3.86e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  3334
VecScale             153 1.0 2.3536e-02 1.0 3.86e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  1641
VecCopy               44 1.0 2.0373e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               217 1.0 5.6363e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
VecAXPY              212 1.0 6.7505e-02 1.0 1.12e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1  4  0  0  0   1  4  0  0  0  1663
VecAYPX              200 1.0 9.8936e-02 1.0 6.68e+07 1.0 0.0e+00 0.0e+00 0.0e+00  1  2  0  0  0   1  2  0  0  0   676
VecMAXPY              64 1.0 1.0951e-01 1.0 1.91e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1  7  0  0  0   1  7  0  0  0  1743
VecAssemblyBegin       3 1.0 0.0000e+00 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyEnd         3 1.0 9.5367e-07 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecPointwiseMult     177 1.0 1.0243e-01 1.0 4.50e+07 1.0 0.0e+00 0.0e+00 0.0e+00  1  2  0  0  0   1  2  0  0  0   439
VecSetRandom           3 1.0 1.0293e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          57 1.0 1.5033e-02 1.0 4.26e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   0  2  0  0  0  2837
KSPGMRESOrthog        45 1.0 1.2512e-01 1.0 2.49e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1  9  0  0  0   1  9  0  0  0  1993
KSPSetUp               9 1.0 1.6909e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 4.4e+01  0  0  0  0 22   0  0  0  0 23     0
KSPSolve               1 1.0 2.4407e+00 1.0 1.90e+09 1.0 0.0e+00 0.0e+00 1.5e+01 25 68  0  0  7  25 68  0  0  8   778
PCSetUp                2 1.0 7.3089e+00 1.0 8.50e+08 1.0 0.0e+00 0.0e+00 1.8e+02 73 31  0  0 87  75 31  0  0 91   116
PCSetUpOnBlocks        8 1.0 2.5105e-04 1.0 9.41e+03 1.0 0.0e+00 0.0e+00 5.0e+00  0  0  0  0  2   0  0  0  0  3    37
PCApply                8 1.0 1.8634e+00 1.0 1.39e+09 1.0 0.0e+00 0.0e+00 5.0e+00 19 50  0  0  2  19 50  0  0  3   743
PCGAMGgraph_AGG        3 1.0 1.0681e+00 1.0 2.45e+07 1.0 0.0e+00 0.0e+00 1.2e+01 11  1  0  0  6  11  1  0  0  6    23
PCGAMGcoarse_AGG       3 1.0 5.0953e+00 1.0 3.19e+08 1.0 0.0e+00 0.0e+00 2.4e+01 51 11  0  0 12  52 11  0  0 12    63
PCGAMGProl_AGG         3 1.0 1.8773e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  2  0  0  0  6   2  0  0  0  6     0
PCGAMGPOpt_AGG         3 1.0 1.0014e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Viewer     1              0            0     0

--- Event Stage 1: Load system

              Viewer     1              1          728     0
              Matrix     1              0            0     0
              Vector     6              0            0     0

--- Event Stage 2: KSPSetUpSolve

              Matrix    25             26    501638360     0
      Matrix Coarsen     3              3         1860     0
              Vector   109            115    287486944     0
       Krylov Solver     9              9       131888     0
      Preconditioner     9              9         8836     0
           Index Set     9              9         7216     0
         PetscRandom     3              3         1848     0
========================================================================================================================
Average time to get PetscTime(): 0
#PETSc Option Table entries:
-Pressure_ksp_type preonly
-Pressure_pc_factor_mat_solver_package mumps
-Pressure_pc_type lu
-f0 Pressure__3_19_0.mtx
-ksp_type gmres
-log_summary
-pc_type gamg
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure run at: Fri Aug 17 09:14:50 2012
Configure options: --with-x=0 --download-f-blas-lapack=0 --with-blas-lapack-dir=/opt/intel/Compiler/11.1/072/mkl/lib/em64t --with-mpi=1 --with-mpi-shared=1 --with-mpi=1 --download-mpich=no --with-debugging=0 --with-gnu-compilers=no --with-vendor-compilers=intel --with-cc=/usr/local/encap/platform_mpi-8.01/bin/mpicc --with-cxx=/usr/local/encap/platform_mpi-8.01/bin/mpiCC --with-fc=/usr/local/encap/platform_mpi-8.01/bin/mpif90 --with-shared-libraries=1 --with-c++-support --with-clanguage=C --COPTFLAGS="-fPIC -O3 -xSSE4.2 -fp-model precise -g -debug inline_debug_info" --CXXOPTFLAGS="-fPIC -O3 -xSSE4.2 -fp-model precise -g -debug inline_debug_info" --FOPTFLAGS="-fPIC -O3 -xSSE4.2 -fp-model precise -g -debug inline_debug_info" --download-scalapack=1 --download-blacs=1 --with-blacs=1
--download-umfpack=1 --download-parmetis=1 --download-metis=1 --download-superlu=1 --download-superlu_dist=1 --download-mumps=1 --download-ml=1 --download-hypre=1
-----------------------------------------
Libraries compiled on Fri Aug 17 09:14:50 2012 on lagrange.tomato
Machine characteristics: Linux-2.6.32-279.2.1.el6.x86_64-x86_64-with-centos-6.3-Final
Using PETSc directory: /home/jfe/local/petsc-dev
Using PETSc arch: intel-opt-precise-O3
-----------------------------------------

Using C compiler: /usr/local/encap/platform_mpi-8.01/bin/mpicc -fPIC -wd1572 -fPIC -O3 -xSSE4.2 -fp-model precise -g -debug inline_debug_info  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /usr/local/encap/platform_mpi-8.01/bin/mpif90 -fPIC -fPIC -O3 -xSSE4.2 -fp-model precise -g -debug inline_debug_info  ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------

Using include paths: -I/home/jfe/local/petsc-dev/intel-opt-precise-O3/include -I/home/jfe/local/petsc-dev/include -I/home/jfe/local/petsc-dev/include -I/home/jfe/local/petsc-dev/intel-opt-precise-O3/include -I/usr/local/encap/platform_mpi-8.01/include
-----------------------------------------

Using C linker: /usr/local/encap/platform_mpi-8.01/bin/mpicc
Using Fortran linker: /usr/local/encap/platform_mpi-8.01/bin/mpif90
Using libraries: -Wl,-rpath,/home/jfe/local/petsc-dev/intel-opt-precise-O3/lib -L/home/jfe/local/petsc-dev/intel-opt-precise-O3/lib -lpetsc -Wl,-rpath,/home/jfe/local/petsc-dev/intel-opt-precise-O3/lib -L/home/jfe/local/petsc-dev/intel-opt-precise-O3/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lblacs -lml -Wl,-rpath,/usr/local/encap/platform_mpi-8.01/lib/linux_amd64 -L/usr/local/encap/platform_mpi-8.01/lib/linux_amd64 -lmpiCC -Wl,-rpath,/opt/intel/Compiler/11.1/072/lib/intel64 -L/opt/intel/Compiler/11.1/072/lib/intel64 -Wl,-rpath,/opt/intel/Compiler/11.1/072/ipp/em64t/lib -L/opt/intel/Compiler/11.1/072/ipp/em64t/lib -Wl,-rpath,/opt/intel/Compiler/11.1/072/mkl/lib/em64t
-L/opt/intel/Compiler/11.1/072/mkl/lib/em64t -Wl,-rpath,/opt/intel/Compiler/11.1/072/tbb/intel64/cc4.1.0_libc2.4_kernel2.6.16.21/lib -L/opt/intel/Compiler/11.1/072/tbb/intel64/cc4.1.0_libc2.4_kernel2.6.16.21/lib -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -lstdc++ -lsuperlu_dist_3.0 -lparmetis -lmetis -lpthread -lsuperlu_4.3 -lHYPRE -lmpiCC -lstdc++ -lumfpack -lamd -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lifport -lifcore -lm -lpthread -lm -lmpiCC -lstdc++ -lmpiCC -lstdc++ -lpcmpio -lpcmpi -ldl -limf -lsvml -lipgo -ldecimal -lgcc_s -lirc -lirc_s -ldl
-----------------------------------------