Weak Scaling Study

/usr/local/u/cekees/BOB/mpirun -np 1 /usr/local/u/cekees/proteus/externalPackages/petsc-dev/src/snes/examples/tutorials/ex5 -da_grid_x 201 -da_grid_y 201 -log_summary -pc_type mg -da_refine 2

************************************************************************************************************************
***            WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document             ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/usr/local/u/cekees/proteus/externalPackages/petsc-dev/src/snes/examples/tutorials/ex5 on a diamond named r8i3n8 with 1 processor, by cekees Thu Apr 11 22:31:20 2013
Using Petsc Development HG revision: unknown  HG Date: unknown

                         Max       Max/Min        Avg      Total
Time (sec):           6.344e+00      1.00000   6.344e+00
Objects:              2.150e+02      1.00000   2.150e+02
Flops:                3.973e+09      1.00000   3.973e+09  3.973e+09
Flops/sec:            6.262e+08      1.00000   6.262e+08  6.262e+08
MPI Messages:         0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Reductions:       2.670e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 6.3442e+00 100.0%  3.9730e+09 100.0%  0.000e+00   0.0%  0.000e+00        0.0%  2.660e+02  99.6%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
%T - percent time in this phase %f - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage SNESSolve 1 1.0 6.0290e+00 1.0 3.97e+09 1.0 0.0e+00 0.0e+00 2.3e+02 95100 0 0 87 95100 0 0 87 659 SNESFunctionEval 5 1.0 6.5801e-02 1.0 3.53e+07 1.0 0.0e+00 0.0e+00 2.0e+00 1 1 0 0 1 1 1 0 0 1 536 SNESJacobianEval 12 1.0 5.2314e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00 8 0 0 0 1 8 0 0 0 2 0 SNESLineSearch 4 1.0 1.1169e-01 1.0 7.95e+07 1.0 0.0e+00 0.0e+00 0.0e+00 2 2 0 0 0 2 2 0 0 0 712 VecDot 4 1.0 3.7341e-03 1.0 5.13e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1375 VecMDot 88 1.0 1.5842e-01 1.0 3.70e+08 1.0 0.0e+00 0.0e+00 0.0e+00 2 9 0 0 0 2 9 0 0 0 2334 VecNorm 108 1.0 3.0260e-02 1.0 9.63e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 0 2 0 0 0 3182 VecScale 196 1.0 5.0530e-02 1.0 8.15e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 2 0 0 0 1 2 0 0 0 1613 VecCopy 44 1.0 3.4321e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecSet 194 1.0 1.0791e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 0 VecAXPY 212 1.0 1.1473e-01 1.0 1.72e+08 1.0 0.0e+00 0.0e+00 0.0e+00 2 4 0 0 0 2 4 0 0 0 1500 VecAYPX 192 1.0 1.4111e-01 1.0 9.63e+07 1.0 0.0e+00 0.0e+00 0.0e+00 2 2 0 0 0 2 2 0 0 0 682 VecWAXPY 4 1.0 6.8300e-03 1.0 2.57e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 376 VecMAXPY 100 1.0 2.5357e-01 1.0 4.44e+08 1.0 0.0e+00 0.0e+00 0.0e+00 4 11 0 0 0 4 11 0 0 0 1752 VecPointwiseMult 8 1.0 1.5798e-03 1.0 8.05e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 509 VecScatterBegin 19 1.0 1.2397e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecReduceArith 9 1.0 5.6551e-02 1.0 1.15e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 204 VecReduceComm 5 1.0 1.4067e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 100 1.0 4.6867e-02 1.0 1.29e+08 1.0 0.0e+00 0.0e+00 0.0e+00 1 3 0 0 0 1 3 0 0 0 2753 MatMult 244 1.0 1.1714e+00 1.0 9.06e+08 1.0 0.0e+00 0.0e+00 0.0e+00 18 23 0 0 0 18 23 0 0 0 773 MatMultAdd 24 1.0 6.0903e-02 1.0 4.33e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 1 0 0 0 1 1 0 0 0 711 MatMultTranspose 34 1.0 9.4985e-02 1.0 6.13e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 2 0 0 0 1 2 0 0 0 646 MatSolve 12 1.0 3.6739e-02 1.0 4.46e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 1 0 0 0 1 1 0 0 0 1215 MatSOR 232 1.0 2.0776e+00 1.0 9.33e+08 1.0 0.0e+00 0.0e+00 0.0e+00 33 23 0 0 0 33 23 0 0 0 449 MatLUFactorSym 4 1.0 1.3071e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01 2 0 0 0 4 2 0 0 0 5 0 MatLUFactorNum 4 1.0 6.2064e-01 1.0 6.70e+08 1.0 0.0e+00 0.0e+00 0.0e+00 10 17 0 0 0 10 17 0 0 0 1079 MatAssemblyBegin 17 1.0 8.5831e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyEnd 17 1.0 6.0946e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 MatGetRowIJ 4 1.0 5.4560e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetOrdering 4 1.0 6.8804e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00 1 0 0 0 3 1 0 0 0 3 0 
KSPGMRESOrthog 88 1.0 3.6994e-01 1.0 7.39e+08 1.0 0.0e+00 0.0e+00 0.0e+00 6 19 0 0 0 6 19 0 0 0 1999 KSPSetUp 18 1.0 2.4695e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 4.2e+01 0 0 0 0 16 0 0 0 0 16 0 KSPSolve 4 1.0 5.4429e+00 1.0 3.89e+09 1.0 0.0e+00 0.0e+00 2.3e+02 86 98 0 0 85 86 98 0 0 85 714 PCSetUp 4 1.0 1.1644e+00 1.0 6.89e+08 1.0 0.0e+00 0.0e+00 1.4e+02 18 17 0 0 51 18 17 0 0 51 591 PCApply 12 1.0 4.1405e+00 1.0 3.08e+09 1.0 0.0e+00 0.0e+00 6.0e+01 65 77 0 0 22 65 77 0 0 23 744 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage SNES 1 1 1316 0 SNESLineSearch 1 1 864 0 DMSNES 1 1 672 0 Vector 98 98 291564544 0 Vector Scatter 14 14 9016 0 Matrix 9 9 207991720 0 Distributed Mesh 7 7 16889564 0 Bipartite Graph 14 14 11200 0 Index Set 46 46 8070520 0 IS L to G Mapping 9 9 10118928 0 Krylov Solver 6 6 82512 0 DMKSP interface 2 2 1312 0 Preconditioner 6 6 5520 0 Viewer 1 0 0 0 ======================================================================================================================== Average time to get PetscTime(): 0 #PETSc Option Table entries: -da_grid_x 201 -da_grid_y 201 -da_refine 2 -log_summary -pc_type mg #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 Configure run at: Wed Apr 10 18:41:36 2013 Configure options: --with-debugging=0 --with-clanguage=C --with-pic=1 --with-shared-libraries=0 --with-mpi-compilers=1 --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort --with-blas-lapack-dir=/opt/intel/cmkl/10.2.4.032 --download-cmake=1 --download-metis=1 --download-parmetis=1 --download-spooles=1 --download-blacs=1 --download-scalapack=1 --download-mumps=1 --download-superlu=1 --download-superlu_dist=1 --download-hypre=1 --PETSC_ARCH=diamond --PETSC_DIR=/usr/local/u/cekees/proteus/externalPackages/petsc-dev --prefix=/usr/local/u/cekees/proteus/diamond ----------------------------------------- Libraries compiled on Wed Apr 10 18:41:36 2013 on diamond03 Machine characteristics: Linux-2.6.32.59-0.7-default-x86_64-with-SuSE-11-x86_64 Using PETSc directory: /usr/local/u/cekees/proteus/externalPackages/petsc-dev Using PETSc arch: diamond ----------------------------------------- Using C compiler: mpiicc -fPIC -wd1572 -O3 ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: mpiifort -fPIC -O3 ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: -I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/include -I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/include -I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/include -I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/include -I/opt/intel/impi/4.0.3.008/intel64/include ----------------------------------------- Using C linker: mpiicc Using Fortran linker: mpiifort Using libraries: -Wl,-rpath,/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -L/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -lpetsc -Wl,-rpath,/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -L/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_4.3 -lsuperlu_dist_3.2 
-lHYPRE -Wl,-rpath,/opt/intel/impi/4.0.3.008/intel64/lib -L/opt/intel/impi/4.0.3.008/intel64/lib -Wl,-rpath,/opt/intel/impi/4.0.3/lib64 -L/opt/intel/impi/4.0.3/lib64 -Wl,-rpath,/opt/intel/Compiler/12.1.003/mkl/lib/intel64 -L/opt/intel/Compiler/12.1.003/mkl/lib/intel64 -Wl,-rpath,/usr/local/applic/intel_new/composer_xe_2011_sp1.9.293/compiler/lib/intel64 -L/usr/local/applic/intel_new/composer_xe_2011_sp1.9.293/compiler/lib/intel64 -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.3 -L/usr/lib64/gcc/x86_64-suse-linux/4.3 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/usr/local/u/cekees/proteus/externalPackages/petsc-dev/-Xlinker -lmpigc4 -Wl,-rpath,/opt/intel/mpi-rt/4.0.3 -Wl,-rpath,/opt/intel/cmkl/10.2.4.032 -L/opt/intel/cmkl/10.2.4.032 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lX11 -lparmetis -lmetis -lpthread -lifport -lifcore -lm -lm -lmpigc4 -ldl -lmpi -lmpigf -lmpigi -lpthread -lrt -limf -lsvml -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lirc_s -ldl ----------------------------------------- /usr/local/u/cekees/BOB/mpirun -np 4 /usr/local/u/cekees/proteus/externalPackages/petsc-dev/src/snes/examples/tutorials/ex5 -da_grid_x 201 -da_grid_y 201 -log_summary -pc_type mg -da_refine 3 ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- /usr/local/u/cekees/proteus/externalPackages/petsc-dev/src/snes/examples/tutorials/ex5 on a diamond named r8i3n8 with 4 processors, by cekees Thu Apr 11 22:31:36 2013 Using Petsc Development HG revision: unknown HG Date: unknown Max Max/Min Avg Total Time (sec): 8.249e+00 1.00000 8.249e+00 Objects: 3.450e+02 1.00000 3.450e+02 Flops: 3.988e+09 1.00267 3.982e+09 1.593e+10 Flops/sec: 4.834e+08 1.00267 4.828e+08 1.931e+09 MPI Messages: 9.955e+02 1.04569 9.738e+02 3.895e+03 MPI Message Lengths: 1.779e+07 1.00586 1.821e+04 7.094e+07 MPI Reductions: 8.440e+02 1.00000 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 8.2489e+00 100.0% 1.5929e+10 100.0% 3.895e+03 100.0% 1.821e+04 100.0% 8.430e+02 99.9% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length (bytes) Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). 
%T - percent time in this phase %f - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage SNESSolve 1 1.0 7.8948e+00 1.0 3.99e+09 1.0 3.8e+03 1.8e+04 7.8e+02 96100 99100 93 96100 99100 93 2018 SNESFunctionEval 5 1.0 7.4725e-02 1.1 3.53e+07 1.0 4.0e+01 6.4e+03 2.0e+00 1 1 1 0 0 1 1 1 0 0 1887 SNESJacobianEval 16 1.0 6.8283e-01 1.0 0.00e+00 0.0 1.3e+02 3.0e+03 3.8e+01 8 0 3 1 5 8 0 3 1 5 0 SNESLineSearch 4 1.0 1.4024e-01 1.0 7.95e+07 1.0 6.4e+01 6.4e+03 1.6e+01 2 2 2 1 2 2 2 2 1 2 2266 VecDot 4 1.0 7.0229e-03 1.5 5.13e+06 1.0 0.0e+00 0.0e+00 4.0e+00 0 0 0 0 0 0 0 0 0 0 2920 VecMDot 127 1.0 3.3611e-01 1.2 3.84e+08 1.0 0.0e+00 0.0e+00 1.3e+02 4 10 0 0 15 4 10 0 0 15 4558 VecNorm 151 1.0 8.0763e-02 1.2 9.85e+07 1.0 0.0e+00 0.0e+00 1.5e+02 1 2 0 0 18 1 2 0 0 18 4873 VecScale 275 1.0 9.0659e-02 1.1 8.12e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 2 0 0 0 1 2 0 0 0 3578 VecCopy 57 1.0 5.0821e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecSet 208 1.0 3.5352e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 292 1.0 1.9624e-01 1.2 1.67e+08 1.0 0.0e+00 0.0e+00 0.0e+00 2 4 0 0 0 2 4 0 0 0 3397 VecAYPX 264 1.0 1.9603e-01 1.1 9.27e+07 1.0 0.0e+00 0.0e+00 0.0e+00 2 2 0 0 0 2 2 0 0 0 1889 VecWAXPY 4 1.0 9.7122e-03 1.0 2.57e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1056 VecMAXPY 143 1.0 4.1248e-01 1.1 4.60e+08 1.0 0.0e+00 0.0e+00 0.0e+00 5 12 0 0 0 5 12 0 0 0 4454 VecPointwiseMult 12 1.0 2.0232e-03 1.2 8.46e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1666 VecScatterBegin 468 1.0 3.6024e-02 1.0 0.00e+00 0.0 3.4e+03 6.5e+03 0.0e+00 0 0 88 31 0 0 0 88 31 0 0 VecScatterEnd 468 1.0 3.9677e-02 4.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecReduceArith 9 1.0 1.3337e-02 1.1 1.15e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3459 VecReduceComm 5 1.0 1.9660e-0319.3 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00 0 0 0 0 1 0 0 0 0 1 0 VecNormalize 143 1.0 1.1828e-01 1.1 1.32e+08 1.0 0.0e+00 0.0e+00 1.4e+02 1 3 0 0 17 1 3 0 0 17 4471 MatMult 341 1.0 1.6538e+00 1.0 8.97e+08 1.0 2.7e+03 3.8e+03 0.0e+00 20 22 70 15 0 20 22 70 15 0 2167 MatMultAdd 33 1.0 9.5736e-02 1.0 4.17e+07 1.0 1.6e+02 1.5e+03 0.0e+00 1 1 4 0 0 1 1 4 0 0 1739 MatMultTranspose 48 1.0 1.3807e-01 1.0 6.06e+07 1.0 2.4e+02 1.5e+03 0.0e+00 2 2 6 1 0 2 2 6 1 0 1754 MatSolve 11 1.0 4.4869e-02 1.0 4.10e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 1 0 0 0 1 1 0 0 0 3658 MatSOR 330 1.0 2.7534e+00 1.0 9.29e+08 1.0 0.0e+00 0.0e+00 0.0e+00 33 23 0 0 0 33 23 0 0 0 1348 MatLUFactorSym 4 1.0 1.2822e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01 2 0 0 0 1 2 0 0 0 1 0 MatLUFactorNum 4 1.0 6.2048e-01 1.0 6.79e+08 1.0 0.0e+00 0.0e+00 0.0e+00 7 17 0 0 0 7 17 0 0 0 4378 MatAssemblyBegin 27 1.0 1.6308e-02 9.5 0.00e+00 0.0 0.0e+00 0.0e+00 4.6e+01 0 0 0 0 5 0 0 0 0 5 0 MatAssemblyEnd 27 1.0 1.2054e-01 1.0 0.00e+00 0.0 9.4e+01 6.3e+02 5.6e+01 1 0 2 0 7 1 0 2 0 7 0 MatGetRowIJ 4 1.0 4.7109e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetOrdering 4 1.0 6.6462e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00 1 0 0 0 1 1 0 0 0 1 0 MatGetRedundant 4 1.0 2.7046e-02 1.0 0.00e+00 0.0 1.4e+02 2.3e+05 1.6e+01 0 0 4 46 2 0 0 4 46 2 0 KSPGMRESOrthog 127 1.0 6.7928e-01 1.2 7.67e+08 1.0 0.0e+00 0.0e+00 1.3e+02 8 19 0 0 15 8 19 0 0 15 4511 KSPSetUp 27 1.0 3.7129e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 5.8e+01 0 0 0 0 7 0 0 0 0 7 0 KSPSolve 4 1.0 7.2148e+00 1.0 3.90e+09 1.0 3.7e+03 1.9e+04 7.5e+02 87 98 96 99 89 87 98 96 99 89 2159 PCSetUp 4 1.0 1.3319e+00 1.0 6.99e+08 1.0 5.8e+02 8.5e+04 3.8e+02 16 18 15 69 45 16 18 15 69 45 2099 PCApply 11 1.0 5.7404e+00 1.0 3.10e+09 1.0 3.1e+03 6.8e+03 3.4e+02 69 78 80 30 41 69 78 80 30 41 2156 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Container 4 4 2288 0 SNES 1 1 1316 0 SNESLineSearch 1 1 864 0 DMSNES 1 1 672 0 Vector 144 144 248776992 0 Vector Scatter 28 28 29680 0 Matrix 29 29 251221944 0 Distributed Mesh 9 9 17163504 0 Bipartite Graph 18 18 14400 0 Index Set 76 76 8537616 0 IS L to G Mapping 12 12 10279248 0 Krylov Solver 9 9 115112 0 DMKSP interface 3 3 1968 0 Preconditioner 9 9 8176 0 Viewer 1 0 0 0 ======================================================================================================================== Average time to get PetscTime(): 0 Average time for MPI_Barrier(): 1.43051e-06 Average time for zero size MPI_Send(): 1.32322e-05 #PETSc Option Table entries: -da_grid_x 201 -da_grid_y 201 -da_refine 3 -log_summary -pc_type mg #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 Configure run at: Wed Apr 10 18:41:36 2013 Configure options: --with-debugging=0 --with-clanguage=C --with-pic=1 --with-shared-libraries=0 --with-mpi-compilers=1 --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort --with-blas-lapack-dir=/opt/intel/cmkl/10.2.4.032 --download-cmake=1 --download-metis=1 --download-parmetis=1 --download-spooles=1 --download-blacs=1 --download-scalapack=1 --download-mumps=1 --download-superlu=1 --download-superlu_dist=1 --download-hypre=1 --PETSC_ARCH=diamond --PETSC_DIR=/usr/local/u/cekees/proteus/externalPackages/petsc-dev --prefix=/usr/local/u/cekees/proteus/diamond ----------------------------------------- Libraries compiled on Wed Apr 10 18:41:36 2013 on diamond03 Machine characteristics: Linux-2.6.32.59-0.7-default-x86_64-with-SuSE-11-x86_64 Using PETSc directory: /usr/local/u/cekees/proteus/externalPackages/petsc-dev Using PETSc arch: diamond ----------------------------------------- Using C compiler: mpiicc -fPIC -wd1572 -O3 ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: mpiifort -fPIC -O3 ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: -I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/include -I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/include -I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/include -I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/include -I/opt/intel/impi/4.0.3.008/intel64/include ----------------------------------------- Using C linker: mpiicc Using Fortran linker: mpiifort Using libraries: 
-Wl,-rpath,/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -L/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -lpetsc -Wl,-rpath,/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -L/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_4.3 -lsuperlu_dist_3.2 -lHYPRE -Wl,-rpath,/opt/intel/impi/4.0.3.008/intel64/lib -L/opt/intel/impi/4.0.3.008/intel64/lib -Wl,-rpath,/opt/intel/impi/4.0.3/lib64 -L/opt/intel/impi/4.0.3/lib64 -Wl,-rpath,/opt/intel/Compiler/12.1.003/mkl/lib/intel64 -L/opt/intel/Compiler/12.1.003/mkl/lib/intel64 -Wl,-rpath,/usr/local/applic/intel_new/composer_xe_2011_sp1.9.293/compiler/lib/intel64 -L/usr/local/applic/intel_new/composer_xe_2011_sp1.9.293/compiler/lib/intel64 -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.3 -L/usr/lib64/gcc/x86_64-suse-linux/4.3 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/usr/local/u/cekees/proteus/externalPackages/petsc-dev/-Xlinker -lmpigc4 -Wl,-rpath,/opt/intel/mpi-rt/4.0.3 -Wl,-rpath,/opt/intel/cmkl/10.2.4.032 -L/opt/intel/cmkl/10.2.4.032 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lX11 -lparmetis -lmetis -lpthread -lifport -lifcore -lm -lm -lmpigc4 -ldl -lmpi -lmpigf -lmpigi -lpthread -lrt -limf -lsvml -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lirc_s -ldl ----------------------------------------- /usr/local/u/cekees/BOB/mpirun -np 16 /usr/local/u/cekees/proteus/externalPackages/petsc-dev/src/snes/examples/tutorials/ex5 -da_grid_x 201 -da_grid_y 201 -log_summary -pc_type mg -da_refine 4 ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- /usr/local/u/cekees/proteus/externalPackages/petsc-dev/src/snes/examples/tutorials/ex5 on a diamond named r8i3n8 with 16 processors, by cekees Thu Apr 11 22:31:57 2013 Using Petsc Development HG revision: unknown HG Date: unknown Max Max/Min Avg Total Time (sec): 1.309e+01 1.00000 1.309e+01 Objects: 4.250e+02 1.00000 4.250e+02 Flops: 4.027e+09 1.00282 4.019e+09 6.430e+10 Flops/sec: 3.076e+08 1.00282 3.070e+08 4.911e+09 MPI Messages: 2.799e+03 1.83541 2.167e+03 3.467e+04 MPI Message Lengths: 2.374e+07 1.15468 1.026e+04 3.556e+08 MPI Reductions: 1.064e+03 1.00000 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 1.3092e+01 100.0% 6.4301e+10 100.0% 3.467e+04 100.0% 1.026e+04 100.0% 1.063e+03 99.9% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. 
Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length (bytes) Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). %T - percent time in this phase %f - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage SNESSolve 1 1.0 1.2642e+01 1.0 4.03e+09 1.0 3.4e+04 1.0e+04 1.0e+03 97100 99100 94 97100 99100 94 5086 SNESFunctionEval 5 1.0 9.4277e-02 1.1 3.53e+07 1.0 2.4e+02 6.4e+03 2.0e+00 1 1 1 0 0 1 1 1 0 0 5978 SNESJacobianEval 20 1.0 7.5614e-01 1.0 0.00e+00 0.0 9.6e+02 2.5e+03 4.8e+01 6 0 3 1 5 6 0 3 1 5 0 SNESLineSearch 4 1.0 2.1302e-01 1.0 7.95e+07 1.0 3.8e+02 6.4e+03 1.6e+01 2 2 1 1 2 2 2 1 1 2 5964 VecDot 4 1.0 1.1415e-02 1.2 5.13e+06 1.0 0.0e+00 0.0e+00 4.0e+00 0 0 0 0 0 0 0 0 0 0 7181 VecMDot 167 1.0 5.5768e-01 1.1 3.88e+08 1.0 0.0e+00 0.0e+00 1.7e+02 4 10 0 0 16 4 10 0 0 16 11107 VecNorm 195 1.0 1.5176e-01 1.4 9.94e+07 1.0 0.0e+00 0.0e+00 2.0e+02 1 2 0 0 18 1 2 0 0 18 10459 VecScale 363 1.0 2.1793e-01 1.1 8.21e+07 1.0 0.0e+00 0.0e+00 0.0e+00 2 2 0 0 0 2 2 0 0 0 6014 VecCopy 72 1.0 1.0662e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecSet 263 1.0 7.4222e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecAXPY 388 1.0 4.1770e-01 1.1 1.69e+08 1.0 0.0e+00 0.0e+00 0.0e+00 3 4 0 0 0 3 4 0 0 0 6453 VecAYPX 352 1.0 3.7398e-01 1.0 9.38e+07 1.0 0.0e+00 0.0e+00 0.0e+00 3 2 0 0 0 3 2 0 0 0 4004 VecWAXPY 4 1.0 1.8459e-02 1.0 2.57e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2220 VecMAXPY 187 1.0 7.1968e-01 1.1 4.65e+08 1.0 0.0e+00 0.0e+00 0.0e+00 5 12 0 0 0 5 12 0 0 0 10319 VecPointwiseMult 16 1.0 3.5028e-03 1.1 8.56e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3889 VecScatterBegin 610 1.0 7.2758e-02 1.1 0.00e+00 0.0 2.9e+04 4.4e+03 0.0e+00 1 0 84 36 0 1 0 84 36 0 0 VecScatterEnd 610 1.0 1.3253e-01 3.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecReduceArith 9 1.0 5.0444e-02 2.8 1.15e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3656 VecReduceComm 5 1.0 3.6287e-02107.0 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 187 1.0 2.5765e-01 1.2 1.34e+08 1.0 0.0e+00 0.0e+00 1.9e+02 2 3 0 0 18 2 3 0 0 18 8286 MatMult 451 1.0 2.6525e+00 1.0 9.07e+08 1.0 2.2e+04 3.1e+03 0.0e+00 20 23 62 19 0 20 23 62 19 0 5461 MatMultAdd 44 1.0 1.4956e-01 1.0 4.22e+07 1.0 1.5e+03 1.1e+03 0.0e+00 1 1 4 0 0 1 1 4 0 0 4504 MatMultTranspose 64 1.0 2.0639e-01 1.1 6.13e+07 1.0 2.1e+03 1.1e+03 0.0e+00 2 2 6 1 0 2 2 6 1 0 4747 MatSolve 11 1.0 7.0360e-02 1.0 4.11e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 1 0 0 0 1 1 0 0 0 9342 MatSOR 440 1.0 4.8279e+00 1.0 9.40e+08 1.0 0.0e+00 0.0e+00 0.0e+00 36 23 0 0 0 36 23 0 0 0 3108 MatLUFactorSym 4 1.0 1.3977e-01 1.0 0.00e+00 0.0 0.0e+00 
0.0e+00 1.2e+01 1 0 0 0 1 1 0 0 0 1 0 MatLUFactorNum 4 1.0 6.9736e-01 1.0 6.81e+08 1.0 0.0e+00 0.0e+00 0.0e+00 5 17 0 0 0 5 17 0 0 0 15634 MatAssemblyBegin 33 1.0 1.9700e-02 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 5.8e+01 0 0 0 0 5 0 0 0 0 5 0 MatAssemblyEnd 33 1.0 1.7567e-01 1.0 0.00e+00 0.0 7.4e+02 5.0e+02 7.2e+01 1 0 2 0 7 1 0 2 0 7 0 MatGetRowIJ 4 1.0 5.6391e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetOrdering 4 1.0 7.1495e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00 1 0 0 0 1 1 0 0 0 1 0 MatGetRedundant 4 1.0 5.1653e-02 1.0 0.00e+00 0.0 2.9e+03 5.7e+04 1.6e+01 0 0 8 46 2 0 0 8 46 2 0 KSPGMRESOrthog 167 1.0 1.1502e+00 1.1 7.76e+08 1.0 0.0e+00 0.0e+00 1.7e+02 9 19 0 0 16 9 19 0 0 16 10770 KSPSetUp 32 1.0 6.8159e-02 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 7.4e+01 0 0 0 0 7 0 0 0 0 7 0 KSPSolve 4 1.0 1.1797e+01 1.0 3.94e+09 1.0 3.4e+04 1.0e+04 9.7e+02 90 98 97 99 91 90 98 97 99 91 5332 PCSetUp 4 1.0 1.5480e+00 1.0 7.01e+08 1.0 6.7e+03 3.4e+04 4.9e+02 12 17 19 64 46 12 17 19 64 46 7250 PCApply 11 1.0 1.0006e+01 1.0 3.14e+09 1.0 2.7e+04 4.5e+03 4.6e+02 76 78 77 34 43 76 78 77 34 43 5003 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Container 4 4 2288 0 SNES 1 1 1316 0 SNESLineSearch 1 1 864 0 DMSNES 1 1 672 0 Vector 182 182 250475944 0 Vector Scatter 35 35 37100 0 Matrix 35 35 252442224 0 Distributed Mesh 11 11 17227264 0 Bipartite Graph 22 22 17600 0 Index Set 92 92 8578396 0 IS L to G Mapping 15 15 10313460 0 Krylov Solver 11 11 146560 0 DMKSP interface 3 3 1968 0 Preconditioner 11 11 9928 0 Viewer 1 0 0 0 ======================================================================================================================== Average time to get PetscTime(): 9.53674e-08 Average time for MPI_Barrier(): 4.19617e-06 Average time for zero size MPI_Send(): 7.49528e-06 #PETSc Option Table entries: -da_grid_x 201 -da_grid_y 201 -da_refine 4 -log_summary -pc_type mg #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 Configure run at: Wed Apr 10 18:41:36 2013 Configure options: --with-debugging=0 --with-clanguage=C --with-pic=1 --with-shared-libraries=0 --with-mpi-compilers=1 --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort --with-blas-lapack-dir=/opt/intel/cmkl/10.2.4.032 --download-cmake=1 --download-metis=1 --download-parmetis=1 --download-spooles=1 --download-blacs=1 --download-scalapack=1 --download-mumps=1 --download-superlu=1 --download-superlu_dist=1 --download-hypre=1 --PETSC_ARCH=diamond --PETSC_DIR=/usr/local/u/cekees/proteus/externalPackages/petsc-dev --prefix=/usr/local/u/cekees/proteus/diamond ----------------------------------------- Libraries compiled on Wed Apr 10 18:41:36 2013 on diamond03 Machine characteristics: Linux-2.6.32.59-0.7-default-x86_64-with-SuSE-11-x86_64 Using PETSc directory: /usr/local/u/cekees/proteus/externalPackages/petsc-dev Using PETSc arch: diamond ----------------------------------------- Using C compiler: mpiicc -fPIC -wd1572 -O3 ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: mpiifort -fPIC -O3 ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: 
-I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/include -I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/include -I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/include -I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/include -I/opt/intel/impi/4.0.3.008/intel64/include ----------------------------------------- Using C linker: mpiicc Using Fortran linker: mpiifort Using libraries: -Wl,-rpath,/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -L/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -lpetsc -Wl,-rpath,/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -L/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_4.3 -lsuperlu_dist_3.2 -lHYPRE -Wl,-rpath,/opt/intel/impi/4.0.3.008/intel64/lib -L/opt/intel/impi/4.0.3.008/intel64/lib -Wl,-rpath,/opt/intel/impi/4.0.3/lib64 -L/opt/intel/impi/4.0.3/lib64 -Wl,-rpath,/opt/intel/Compiler/12.1.003/mkl/lib/intel64 -L/opt/intel/Compiler/12.1.003/mkl/lib/intel64 -Wl,-rpath,/usr/local/applic/intel_new/composer_xe_2011_sp1.9.293/compiler/lib/intel64 -L/usr/local/applic/intel_new/composer_xe_2011_sp1.9.293/compiler/lib/intel64 -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.3 -L/usr/lib64/gcc/x86_64-suse-linux/4.3 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/usr/local/u/cekees/proteus/externalPackages/petsc-dev/-Xlinker -lmpigc4 -Wl,-rpath,/opt/intel/mpi-rt/4.0.3 -Wl,-rpath,/opt/intel/cmkl/10.2.4.032 -L/opt/intel/cmkl/10.2.4.032 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lX11 -lparmetis -lmetis -lpthread -lifport -lifcore -lm -lm -lmpigc4 -ldl -lmpi -lmpigf -lmpigi -lpthread -lrt -limf -lsvml -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lirc_s -ldl ----------------------------------------- /usr/local/u/cekees/BOB/mpirun -np 64 /usr/local/u/cekees/proteus/externalPackages/petsc-dev/src/snes/examples/tutorials/ex5 -da_grid_x 201 -da_grid_y 201 -log_summary -pc_type mg -da_refine 5 ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. 
Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- /usr/local/u/cekees/proteus/externalPackages/petsc-dev/src/snes/examples/tutorials/ex5 on a diamond named r8i3n8 with 64 processors, by cekees Thu Apr 11 22:32:24 2013 Using Petsc Development HG revision: unknown HG Date: unknown Max Max/Min Avg Total Time (sec): 1.424e+01 1.00001 1.424e+01 Objects: 5.060e+02 1.00000 5.060e+02 Flops: 4.037e+09 1.00290 4.027e+09 2.578e+11 Flops/sec: 2.835e+08 1.00291 2.828e+08 1.810e+10 MPI Messages: 4.576e+03 1.52635 4.182e+03 2.676e+05 MPI Message Lengths: 2.461e+07 1.15575 5.706e+03 1.527e+09 MPI Reductions: 1.284e+03 1.00000 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 1.4241e+01 100.0% 2.5775e+11 100.0% 2.676e+05 100.0% 5.706e+03 100.0% 1.283e+03 99.9% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length (bytes) Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). 
%T - percent time in this phase %f - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage SNESSolve 1 1.0 1.3713e+01 1.0 4.04e+09 1.0 2.7e+05 5.7e+03 1.2e+03 96100 99100 95 96100 99100 95 18796 SNESFunctionEval 5 1.0 9.5342e-02 1.1 3.53e+07 1.0 1.1e+03 6.4e+03 2.0e+00 1 1 0 0 0 1 1 0 0 0 23636 SNESJacobianEval 24 1.0 7.6989e-01 1.0 0.00e+00 0.0 5.4e+03 2.1e+03 5.8e+01 5 0 2 1 5 5 0 2 1 5 0 SNESLineSearch 4 1.0 2.1720e-01 1.0 7.95e+07 1.0 1.8e+03 6.4e+03 1.6e+01 2 2 1 1 1 2 2 1 1 1 23391 VecDot 4 1.0 1.2260e-02 1.3 5.13e+06 1.0 0.0e+00 0.0e+00 4.0e+00 0 0 0 0 0 0 0 0 0 0 26736 VecMDot 207 1.0 5.8739e-01 1.2 3.89e+08 1.0 0.0e+00 0.0e+00 2.1e+02 4 10 0 0 16 4 10 0 0 16 42283 VecNorm 239 1.0 1.9918e-01 1.7 9.97e+07 1.0 0.0e+00 0.0e+00 2.4e+02 1 2 0 0 19 1 2 0 0 19 31933 VecScale 451 1.0 2.2576e-01 1.2 8.24e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 2 0 0 0 1 2 0 0 0 23274 VecCopy 87 1.0 1.0824e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecSet 318 1.0 7.5548e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecAXPY 484 1.0 4.2225e-01 1.1 1.69e+08 1.0 0.0e+00 0.0e+00 0.0e+00 3 4 0 0 0 3 4 0 0 0 25596 VecAYPX 440 1.0 3.8367e-01 1.1 9.41e+07 1.0 0.0e+00 0.0e+00 0.0e+00 3 2 0 0 0 3 2 0 0 0 15650 VecWAXPY 4 1.0 1.8883e-02 1.1 2.57e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 8679 VecMAXPY 231 1.0 7.3848e-01 1.1 4.67e+08 1.0 0.0e+00 0.0e+00 0.0e+00 5 12 0 0 0 5 12 0 0 0 40322 VecPointwiseMult 20 1.0 3.5770e-03 1.1 8.59e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 15267 VecScatterBegin 752 1.0 7.6208e-02 1.1 0.00e+00 0.0 2.0e+05 2.9e+03 0.0e+00 1 0 74 38 0 1 0 74 38 0 0 VecScatterEnd 752 1.0 2.4344e-01 3.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecReduceArith 9 1.0 1.4358e-01 8.0 1.15e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 5137 VecReduceComm 5 1.0 1.3177e-0172.4 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 231 1.0 2.9808e-01 1.3 1.34e+08 1.0 0.0e+00 0.0e+00 2.3e+02 2 3 0 0 18 2 3 0 0 18 28708 MatMult 561 1.0 2.7506e+00 1.1 9.10e+08 1.0 1.3e+05 2.6e+03 0.0e+00 18 23 47 21 0 18 23 47 21 0 21117 MatMultAdd 55 1.0 1.5032e-01 1.0 4.23e+07 1.0 8.9e+03 8.7e+02 0.0e+00 1 1 3 1 0 1 1 3 1 0 17972 MatMultTranspose 80 1.0 2.1470e-01 1.1 6.15e+07 1.0 1.3e+04 8.7e+02 0.0e+00 1 2 5 1 0 1 2 5 1 0 18302 MatSolve 11 1.0 7.2504e-02 1.1 4.11e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 36299 MatSOR 550 1.0 4.8554e+00 1.0 9.43e+08 1.0 0.0e+00 0.0e+00 0.0e+00 34 23 0 0 0 34 23 0 0 0 12393 MatLUFactorSym 4 1.0 1.4190e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01 1 0 0 0 1 1 0 0 0 1 0 MatLUFactorNum 4 1.0 7.0759e-01 1.1 6.82e+08 1.0 0.0e+00 0.0e+00 0.0e+00 5 17 0 0 0 5 17 0 0 0 61691 MatAssemblyBegin 39 1.0 3.7880e-02 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 7.0e+01 0 0 0 0 5 0 0 0 0 5 0 MatAssemblyEnd 39 1.0 1.7815e-01 1.0 0.00e+00 0.0 4.3e+03 4.1e+02 8.8e+01 1 0 2 0 7 1 0 2 0 7 0 MatGetRowIJ 4 1.0 5.7302e-03 1.2 0.00e+00 0.0 
0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetOrdering 4 1.0 7.3185e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00 0 0 0 0 1 0 0 0 0 1 0 MatGetRedundant 4 1.0 1.7324e-01 1.1 0.00e+00 0.0 4.8e+04 1.4e+04 1.6e+01 1 0 18 45 1 1 0 18 45 1 0 KSPGMRESOrthog 207 1.0 1.1783e+00 1.1 7.79e+08 1.0 0.0e+00 0.0e+00 2.1e+02 8 19 0 0 16 8 19 0 0 16 42157 KSPSetUp 37 1.0 7.6290e-02 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 9.0e+01 0 0 0 0 7 0 0 0 0 7 0 KSPSolve 4 1.0 1.2760e+01 1.0 3.95e+09 1.0 2.6e+05 5.7e+03 1.2e+03 90 98 98 99 93 90 98 98 99 93 19760 PCSetUp 4 1.0 2.2942e+00 1.0 7.02e+08 1.0 7.6e+04 1.2e+04 5.9e+02 16 17 28 62 46 16 17 28 62 46 19586 PCApply 11 1.0 1.0241e+01 1.0 3.15e+09 1.0 1.9e+05 2.9e+03 5.7e+02 72 78 69 36 44 72 78 69 36 44 19600 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Container 4 4 2288 0 SNES 1 1 1316 0 SNESLineSearch 1 1 864 0 DMSNES 1 1 672 0 Vector 220 220 250954896 0 Vector Scatter 42 42 44520 0 Matrix 41 41 252818272 0 Distributed Mesh 13 13 17251524 0 Bipartite Graph 26 26 20800 0 Index Set 108 108 8609864 0 IS L to G Mapping 18 18 10323972 0 Krylov Solver 13 13 178008 0 DMKSP interface 4 4 2624 0 Preconditioner 13 13 11680 0 Viewer 1 0 0 0 ======================================================================================================================== Average time to get PetscTime(): 0 Average time for MPI_Barrier(): 1.18256e-05 Average time for zero size MPI_Send(): 5.29736e-06 #PETSc Option Table entries: -da_grid_x 201 -da_grid_y 201 -da_refine 5 -log_summary -pc_type mg #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 Configure run at: Wed Apr 10 18:41:36 2013 Configure options: --with-debugging=0 --with-clanguage=C --with-pic=1 --with-shared-libraries=0 --with-mpi-compilers=1 --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort --with-blas-lapack-dir=/opt/intel/cmkl/10.2.4.032 --download-cmake=1 --download-metis=1 --download-parmetis=1 --download-spooles=1 --download-blacs=1 --download-scalapack=1 --download-mumps=1 --download-superlu=1 --download-superlu_dist=1 --download-hypre=1 --PETSC_ARCH=diamond --PETSC_DIR=/usr/local/u/cekees/proteus/externalPackages/petsc-dev --prefix=/usr/local/u/cekees/proteus/diamond ----------------------------------------- Libraries compiled on Wed Apr 10 18:41:36 2013 on diamond03 Machine characteristics: Linux-2.6.32.59-0.7-default-x86_64-with-SuSE-11-x86_64 Using PETSc directory: /usr/local/u/cekees/proteus/externalPackages/petsc-dev Using PETSc arch: diamond ----------------------------------------- Using C compiler: mpiicc -fPIC -wd1572 -O3 ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: mpiifort -fPIC -O3 ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: -I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/include -I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/include -I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/include -I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/include -I/opt/intel/impi/4.0.3.008/intel64/include ----------------------------------------- Using C linker: mpiicc Using Fortran linker: mpiifort Using libraries: 
-Wl,-rpath,/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -L/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -lpetsc -Wl,-rpath,/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -L/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_4.3 -lsuperlu_dist_3.2 -lHYPRE -Wl,-rpath,/opt/intel/impi/4.0.3.008/intel64/lib -L/opt/intel/impi/4.0.3.008/intel64/lib -Wl,-rpath,/opt/intel/impi/4.0.3/lib64 -L/opt/intel/impi/4.0.3/lib64 -Wl,-rpath,/opt/intel/Compiler/12.1.003/mkl/lib/intel64 -L/opt/intel/Compiler/12.1.003/mkl/lib/intel64 -Wl,-rpath,/usr/local/applic/intel_new/composer_xe_2011_sp1.9.293/compiler/lib/intel64 -L/usr/local/applic/intel_new/composer_xe_2011_sp1.9.293/compiler/lib/intel64 -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.3 -L/usr/lib64/gcc/x86_64-suse-linux/4.3 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/usr/local/u/cekees/proteus/externalPackages/petsc-dev/-Xlinker -lmpigc4 -Wl,-rpath,/opt/intel/mpi-rt/4.0.3 -Wl,-rpath,/opt/intel/cmkl/10.2.4.032 -L/opt/intel/cmkl/10.2.4.032 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lX11 -lparmetis -lmetis -lpthread -lifport -lifcore -lm -lm -lmpigc4 -ldl -lmpi -lmpigf -lmpigi -lpthread -lrt -limf -lsvml -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lirc_s -ldl ----------------------------------------- /usr/local/u/cekees/BOB/mpirun -np 256 /usr/local/u/cekees/proteus/externalPackages/petsc-dev/src/snes/examples/tutorials/ex5 -da_grid_x 201 -da_grid_y 201 -log_summary -pc_type mg -da_refine 6 ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- /usr/local/u/cekees/proteus/externalPackages/petsc-dev/src/snes/examples/tutorials/ex5 on a diamond named r8i3n8 with 256 processors, by cekees Thu Apr 11 22:33:07 2013 Using Petsc Development HG revision: unknown HG Date: unknown Max Max/Min Avg Total Time (sec): 1.774e+01 1.00001 1.774e+01 Objects: 5.860e+02 1.00000 5.860e+02 Flops: 4.039e+09 1.00295 4.029e+09 1.031e+12 Flops/sec: 2.277e+08 1.00295 2.271e+08 5.814e+10 MPI Messages: 9.982e+03 1.23417 9.722e+03 2.489e+06 MPI Message Lengths: 2.548e+07 1.21988 2.524e+03 6.282e+09 MPI Reductions: 1.504e+03 1.00000 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 1.7737e+01 100.0% 1.0313e+12 100.0% 2.489e+06 100.0% 2.524e+03 100.0% 1.503e+03 99.9% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. 
Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length (bytes) Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). %T - percent time in this phase %f - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage SNESSolve 1 1.0 1.7086e+01 1.0 4.04e+09 1.0 2.5e+06 2.5e+03 1.4e+03 96100100100 96 96100100100 96 60360 SNESFunctionEval 5 1.0 9.7396e-02 1.1 3.53e+07 1.0 4.8e+03 6.4e+03 2.0e+00 1 1 0 0 0 1 1 0 0 0 92536 SNESJacobianEval 28 1.0 7.9204e-01 1.0 0.00e+00 0.0 2.7e+04 1.8e+03 6.8e+01 4 0 1 1 5 4 0 1 1 5 0 SNESLineSearch 4 1.0 2.2263e-01 1.0 7.95e+07 1.0 7.7e+03 6.4e+03 1.6e+01 1 2 0 1 1 1 2 0 1 1 91266 VecDot 4 1.0 1.2566e-02 1.2 5.13e+06 1.0 0.0e+00 0.0e+00 4.0e+00 0 0 0 0 0 0 0 0 0 0 104324 VecMDot 247 1.0 6.1848e-01 1.2 3.90e+08 1.0 0.0e+00 0.0e+00 2.5e+02 3 10 0 0 16 3 10 0 0 16 160709 VecNorm 283 1.0 2.6179e-01 1.5 9.97e+07 1.0 0.0e+00 0.0e+00 2.8e+02 1 2 0 0 19 1 2 0 0 19 97216 VecScale 539 1.0 2.2678e-01 1.2 8.24e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 2 0 0 0 1 2 0 0 0 92715 VecCopy 102 1.0 1.1017e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecSet 373 1.0 7.7293e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 580 1.0 4.3241e-01 1.2 1.70e+08 1.0 0.0e+00 0.0e+00 0.0e+00 2 4 0 0 0 2 4 0 0 0 100028 VecAYPX 528 1.0 3.8948e-01 1.1 9.42e+07 1.0 0.0e+00 0.0e+00 0.0e+00 2 2 0 0 0 2 2 0 0 0 61696 VecWAXPY 4 1.0 1.9225e-02 1.1 2.57e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 34094 VecMAXPY 275 1.0 7.3860e-01 1.1 4.67e+08 1.0 0.0e+00 0.0e+00 0.0e+00 4 12 0 0 0 4 12 0 0 0 161340 VecPointwiseMult 24 1.0 3.6476e-03 1.1 8.59e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 59903 VecScatterBegin 894 1.0 8.2150e-02 1.1 0.00e+00 0.0 1.5e+06 1.6e+03 0.0e+00 0 0 61 39 0 0 0 61 39 0 0 VecScatterEnd 894 1.0 7.1373e-01 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 3 0 0 0 0 0 VecReduceArith 9 1.0 3.0628e-0110.1 1.15e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 9630 VecReduceComm 5 1.0 2.8677e-01104.1 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecNormalize 275 1.0 3.6063e-01 1.3 1.34e+08 1.0 0.0e+00 0.0e+00 2.8e+02 2 3 0 0 18 2 3 0 0 18 94953 MatMult 671 1.0 2.9673e+00 1.2 9.11e+08 1.0 6.4e+05 2.2e+03 0.0e+00 16 23 26 22 0 16 23 26 22 0 78342 MatMultAdd 66 1.0 1.6735e-01 1.2 4.23e+07 1.0 4.7e+04 7.2e+02 0.0e+00 1 1 2 1 0 1 1 2 1 0 64610 MatMultTranspose 96 1.0 4.4701e-01 2.3 6.16e+07 1.0 6.8e+04 7.2e+02 0.0e+00 1 2 3 1 0 1 2 3 1 0 35183 MatSolve 11 1.0 7.7237e-02 1.1 4.11e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 136226 MatSOR 660 1.0 4.8745e+00 1.0 9.44e+08 1.0 0.0e+00 0.0e+00 0.0e+00 27 23 0 0 0 27 23 0 0 0 49403 MatLUFactorSym 4 1.0 1.4634e-01 1.1 
0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01 1 0 0 0 1 1 0 0 0 1 0 MatLUFactorNum 4 1.0 7.2301e-01 1.1 6.82e+08 1.0 0.0e+00 0.0e+00 0.0e+00 4 17 0 0 0 4 17 0 0 0 241346 MatAssemblyBegin 45 1.0 7.0929e-02 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 8.2e+01 0 0 0 0 5 0 0 0 0 5 0 MatAssemblyEnd 45 1.0 1.8686e-01 1.0 0.00e+00 0.0 2.2e+04 3.5e+02 1.0e+02 1 0 1 0 7 1 0 1 0 7 0 MatGetRowIJ 4 1.0 8.2872e-03 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetOrdering 4 1.0 8.1038e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00 0 0 0 0 1 0 0 0 0 1 0 MatGetRedundant 4 1.0 4.5670e-01 1.1 0.00e+00 0.0 7.8e+05 3.6e+03 1.6e+01 2 0 31 45 1 2 0 31 45 1 0 KSPGMRESOrthog 247 1.0 1.2252e+00 1.1 7.79e+08 1.0 0.0e+00 0.0e+00 2.5e+02 7 19 0 0 16 7 19 0 0 16 162252 KSPSetUp 42 1.0 1.4489e-01 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 1.1e+02 0 0 0 0 7 0 0 0 0 7 0 KSPSolve 4 1.0 1.5952e+01 1.0 3.95e+09 1.0 2.5e+06 2.5e+03 1.4e+03 90 98 99 99 94 90 98 99 99 94 63242 PCSetUp 4 1.0 4.8903e+00 1.0 7.02e+08 1.0 1.0e+06 3.8e+03 7.0e+02 27 17 41 61 47 27 17 41 61 47 36731 PCApply 11 1.0 1.0822e+01 1.0 3.15e+09 1.0 1.4e+06 1.6e+03 6.8e+02 61 78 58 37 45 61 78 58 37 46 74230 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Container 4 4 2288 0 SNES 1 1 1316 0 SNESLineSearch 1 1 864 0 DMSNES 1 1 672 0 Vector 258 258 251122352 0 Vector Scatter 49 49 51940 0 Matrix 47 47 252859436 0 Distributed Mesh 15 15 17265124 0 Bipartite Graph 30 30 24000 0 Index Set 124 124 8623888 0 IS L to G Mapping 21 21 10328088 0 Krylov Solver 15 15 209456 0 DMKSP interface 4 4 2624 0 Preconditioner 15 15 13432 0 Viewer 1 0 0 0 ======================================================================================================================== Average time to get PetscTime(): 9.53674e-08 Average time for MPI_Barrier(): 3.09944e-05 Average time for zero size MPI_Send(): 4.28967e-06 #PETSc Option Table entries: -da_grid_x 201 -da_grid_y 201 -da_refine 6 -log_summary -pc_type mg #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 Configure run at: Wed Apr 10 18:41:36 2013 Configure options: --with-debugging=0 --with-clanguage=C --with-pic=1 --with-shared-libraries=0 --with-mpi-compilers=1 --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort --with-blas-lapack-dir=/opt/intel/cmkl/10.2.4.032 --download-cmake=1 --download-metis=1 --download-parmetis=1 --download-spooles=1 --download-blacs=1 --download-scalapack=1 --download-mumps=1 --download-superlu=1 --download-superlu_dist=1 --download-hypre=1 --PETSC_ARCH=diamond --PETSC_DIR=/usr/local/u/cekees/proteus/externalPackages/petsc-dev --prefix=/usr/local/u/cekees/proteus/diamond ----------------------------------------- Libraries compiled on Wed Apr 10 18:41:36 2013 on diamond03 Machine characteristics: Linux-2.6.32.59-0.7-default-x86_64-with-SuSE-11-x86_64 Using PETSc directory: /usr/local/u/cekees/proteus/externalPackages/petsc-dev Using PETSc arch: diamond ----------------------------------------- Using C compiler: mpiicc -fPIC -wd1572 -O3 ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: mpiifort -fPIC -O3 ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: 
-I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/include -I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/include -I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/include -I/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/include -I/opt/intel/impi/4.0.3.008/intel64/include ----------------------------------------- Using C linker: mpiicc Using Fortran linker: mpiifort Using libraries: -Wl,-rpath,/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -L/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -lpetsc -Wl,-rpath,/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -L/usr/local/u/cekees/proteus/externalPackages/petsc-dev/diamond/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_4.3 -lsuperlu_dist_3.2 -lHYPRE -Wl,-rpath,/opt/intel/impi/4.0.3.008/intel64/lib -L/opt/intel/impi/4.0.3.008/intel64/lib -Wl,-rpath,/opt/intel/impi/4.0.3/lib64 -L/opt/intel/impi/4.0.3/lib64 -Wl,-rpath,/opt/intel/Compiler/12.1.003/mkl/lib/intel64 -L/opt/intel/Compiler/12.1.003/mkl/lib/intel64 -Wl,-rpath,/usr/local/applic/intel_new/composer_xe_2011_sp1.9.293/compiler/lib/intel64 -L/usr/local/applic/intel_new/composer_xe_2011_sp1.9.293/compiler/lib/intel64 -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.3 -L/usr/lib64/gcc/x86_64-suse-linux/4.3 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/usr/local/u/cekees/proteus/externalPackages/petsc-dev/-Xlinker -lmpigc4 -Wl,-rpath,/opt/intel/mpi-rt/4.0.3 -Wl,-rpath,/opt/intel/cmkl/10.2.4.032 -L/opt/intel/cmkl/10.2.4.032 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lX11 -lparmetis -lmetis -lpthread -lifport -lifcore -lm -lm -lmpigc4 -ldl -lmpi -lmpigf -lmpigi -lpthread -lrt -limf -lsvml -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lirc_s -ldl -----------------------------------------
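
The five runs above form a weak scaling series: the process count grows 1, 4, 16, 64, 256 while -da_refine grows from 2 to 6, and each refinement of the 201x201 base grid quadruples the number of grid points, so the problem size per process stays roughly constant. The following is a minimal Python sketch for tabulating the weak-scaling efficiency T(1)/T(p) from these logs; the SNESSolve max times are copied by hand from the event tables above. Ideal weak scaling would hold the time constant, whereas here the efficiency falls to roughly 35% at 256 processes.

    # Weak-scaling roll-up for the ex5 runs reported above: a minimal sketch.
    # SNESSolve max times are copied by hand from the -log_summary tables
    # earlier in this document (np = 1, 4, 16, 64, 256 with -da_refine 2..6).

    runs = [
        # (processes, -da_refine, SNESSolve max time in seconds)
        (1,   2,  6.0290),
        (4,   3,  7.8948),
        (16,  4, 12.642),
        (64,  5, 13.713),
        (256, 6, 17.086),
    ]

    t1 = runs[0][2]  # single-process reference time
    print("  np  refine  SNESSolve(s)  efficiency")
    for p, refine, t in runs:
        eff = t1 / t  # weak-scaling efficiency: T(1) / T(p)
        print("%4d  %6d  %12.3f  %9.1f%%" % (p, refine, t, 100.0 * eff))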