************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Unknown Name on a arch-linux2-c-debug named adpn243 with 72 processors, by hussaf Mon Jun 27 17:20:41 2016
Using Petsc Release Version 3.7.1, May, 15, 2016

                         Max       Max/Min        Avg      Total
Time (sec):           6.559e+02      1.02019   6.431e+02
Objects:              3.100e+01      1.00000   3.100e+01
Flops:                0.000e+00      0.00000   0.000e+00  0.000e+00
Flops/sec:            0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Messages:         2.140e+02      8.56000   5.324e+01  3.834e+03
MPI Message Lengths:  1.025e+09    171.42017   5.522e+05  2.117e+09
MPI Reductions:       3.400e+01      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 6.4307e+02 100.0%  0.0000e+00   0.0%  3.834e+03 100.0%  5.522e+05      100.0%  3.300e+01  97.1%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)        Flops                              --- Global ---  --- Stage ---   Total
                   Max Ratio  Max         Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

VecSet                 4 1.0 2.2069e-02   273.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyBegin       2 1.0 3.5793e-01     1.9 0.00e+00 0.0 1.8e+02 2.2e+05 0.0e+00  0  0  5  2  0   0  0  5  2  0     0
VecAssemblyEnd         2 1.0 5.9872e-03    38.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin        3 1.0 1.9280e+01 14113.0 0.00e+00 0.0 8.7e+02 6.1e+04 2.0e+00  0  0 23  2  6   0  0 23  2  6     0
VecScatterEnd          1 1.0 4.4847e-01  1239.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
BuildTwoSidedF         2 1.0 3.5790e-01     5.6 0.00e+00 0.0 1.8e+02 2.2e+05 0.0e+00  0  0  5  2  0   0  0  5  2  0     0
MatSolve               1 1.0 7.7200e+00     1.1 0.00e+00 0.0 2.6e+03 2.0e+04 3.0e+00  1  0 68  2  9   1  0 68  2  9     0
MatCholFctrSym         1 1.0 1.8439e+02     1.0 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00 29  0  0  0 15  29  0  0  0 15     0
MatCholFctrNum         1 1.0 3.3969e+02     1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 53  0  0  0  0  53  0  0  0  0     0
MatAssemblyBegin       1 1.0 1.4913e+01     8.7 0.00e+00 0.0 2.1e+02 9.4e+06 2.0e+00  2  0  6 95  6   2  0  6 95  6     0
MatAssemblyEnd         1 1.0 4.4464e+00     1.4 0.00e+00 0.0 8.3e+02 4.4e+03 8.0e+00  1  0 22  0 24   1  0 22  0 24     0
MatGetRowIJ            1 1.0 6.8808e-04   721.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering         1 1.0 1.1780e-03     3.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSetUp               1 1.0 3.0994e-06     3.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 7.7201e+00     1.1 0.00e+00 0.0 2.6e+03 2.0e+04 3.0e+00  1  0 68  2  9   1  0 68  2  9     0
PCSetUp                1 1.0 5.2408e+02     1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+01 81  0  0  0 29  81  0  0  0 30     0
PCApply                1 1.0 7.7200e+00     1.1 0.00e+00 0.0 2.6e+03 2.0e+04 3.0e+00  1  0 68  2  9   1  0 68  2  9     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Vector     8              3     27039160     0.
      Vector Scatter     4              1          656     0.
              Matrix     6              0            0     0.
           Index Set    10              9       461628     0.
       Krylov Solver     1              0            0     0.
      Preconditioner     1              0            0     0.
              Viewer     1              0            0     0.
========================================================================================================================
Average time to get PetscTime(): 0.
Average time for MPI_Barrier(): 0.000895643
Average time for zero size MPI_Send(): 0.000239276
#PETSc Option Table entries:
-log_summary
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --with-debugging=0 --download-mumps --download-scalapack --download-fblaslapack --download-parmetis --download-metis --download-ptscotch
-----------------------------------------
Libraries compiled on Mon Jun 20 20:41:00 2016 on admwvis06
Machine characteristics: Linux-2.6.32-504.23.4.el6.x86_64-x86_64-with-redhat-6.6-Santiago
Using PETSc directory: /home/hussaf/calculix/petsc-3.7.1
Using PETSc arch: arch-linux2-c-debug
-----------------------------------------
Using C compiler: mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fvisibility=hidden -g -O ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90 -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/home/hussaf/calculix/petsc-3.7.1/arch-linux2-c-debug/include -I/home/hussaf/calculix/petsc-3.7.1/include -I/home/hussaf/calculix/petsc-3.7.1/include -I/home/hussaf/calculix/petsc-3.7.1/arch-linux2-c-debug/include
-----------------------------------------
Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/home/hussaf/calculix/petsc-3.7.1/arch-linux2-c-debug/lib -L/home/hussaf/calculix/petsc-3.7.1/arch-linux2-c-debug/lib -lpetsc -Wl,-rpath,/home/hussaf/calculix/petsc-3.7.1/arch-linux2-c-debug/lib -L/home/hussaf/calculix/petsc-3.7.1/arch-linux2-c-debug/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lparmetis -lmetis -lscalapack -lflapack -lfblas -lX11 -lptesmumps -lptscotch -lptscotcherr -lscotch -lscotcherr -lssl -lcrypto -Wl,-rpath,/usr/local/lib/gcc/x86_64-unknown-linux-gnu/4.9.2 -L/usr/local/lib/gcc/x86_64-unknown-linux-gnu/4.9.2 -Wl,-rpath,/usr/local/lib64 -L/usr/local/lib64 -Wl,-rpath,/usr/local/lib -L/usr/local/lib -lmpifort -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpicxx -lstdc++ -lrt -lm -lz -Wl,-rpath,/home/hussaf/calculix/petsc-3.7.1/arch-linux2-c-debug/lib -L/home/hussaf/calculix/petsc-3.7.1/arch-linux2-c-debug/lib -Wl,-rpath,/usr/local/lib/gcc/x86_64-unknown-linux-gnu/4.9.2 -L/usr/local/lib/gcc/x86_64-unknown-linux-gnu/4.9.2 -Wl,-rpath,/usr/local/lib64 -L/usr/local/lib64 -Wl,-rpath,/usr/local/lib -L/usr/local/lib -ldl -Wl,-rpath,/home/hussaf/calculix/petsc-3.7.1/arch-linux2-c-debug/lib -lmpi -lgcc_s -ldl
-----------------------------------------

Using up to 1 cpu(s) for the stress calculation.

Job finished
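
Note on reproducing this kind of report: the legend above points to PetscLogStagePush() and PetscLogStagePop() for splitting the profile into stages, and the option table shows the run was profiled with -log_summary. The code below is a minimal sketch of how a user-defined stage would show up in such a summary. It assumes a standalone C driver built against the PETSc 3.7.1 installation listed above; the file name, the stage name "Solve Stage", and the vector work inside the stage are illustrative placeholders and are not taken from the CalculiX source.

/* stage_demo.c -- minimal sketch (illustrative, not CalculiX code).
 * Registers a user-defined logging stage so that work done between
 * Push/Pop is reported as its own stage next to "Main Stage".
 * One possible build/run sequence (paths assume the PETSc layout above):
 *   mpicc stage_demo.c -I$PETSC_DIR/include -I$PETSC_DIR/$PETSC_ARCH/include \
 *         -L$PETSC_DIR/$PETSC_ARCH/lib -lpetsc -o stage_demo
 *   mpiexec -n 4 ./stage_demo -log_summary
 */
#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec            x;
  PetscLogStage  solve_stage;   /* hypothetical stage handle */
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  /* Events logged before the push are attributed to "Main Stage" (stage 0). */
  ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 1000000, &x); CHKERRQ(ierr);

  /* Register and enter a named stage; events issued here (VecSet, a solve, ...)
     are accumulated under this stage in the -log_summary report. */
  ierr = PetscLogStageRegister("Solve Stage", &solve_stage); CHKERRQ(ierr);
  ierr = PetscLogStagePush(solve_stage); CHKERRQ(ierr);
  ierr = VecSet(x, 1.0); CHKERRQ(ierr);   /* stand-in for real work */
  ierr = PetscLogStagePop(); CHKERRQ(ierr);

  ierr = VecDestroy(&x); CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Run this way, the report gains a "Solve Stage" row in the Summary of Stages table alongside "Main Stage", which is how per-stage percentages such as those in the event table above are grouped.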