1-D Laplacian Eigenproblem, n=10

************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./petsc_assembly.x on a named lanesa with 4 processors, by ale Thu Jun 20 11:08:04 2019
Using Petsc Release Version 3.11.0, Mar, 29, 2019

                         Max       Max/Min     Avg       Total
Time (sec):           1.759e-03     1.005   1.753e-03
Objects:              9.000e+00     1.000   9.000e+00
Flop:                 0.000e+00     0.000   0.000e+00  0.000e+00
Flop/sec:             0.000e+00     0.000   0.000e+00  0.000e+00
MPI Messages:         4.000e+00     2.000   3.000e+00  1.200e+01
MPI Message Lengths:  2.400e+01     2.000   6.000e+00  7.200e+01
MPI Reductions:       1.900e+01     1.000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flop
                            and VecAXPY() for complex vectors of length N --> 8N flop

Summary of Stages:   ----- Time ------  ----- Flop ------  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total    Count   %Total     Avg         %Total    Count   %Total
 0:      Main Stage: 7.1973e-04  41.1%  0.0000e+00   0.0%  0.000e+00   0.0%  0.000e+00        0.0%  4.000e+00  21.1%
 1:        Assembly: 1.0322e-03  58.9%  0.0000e+00   0.0%  1.200e+01 100.0%  6.000e+00      100.0%  8.000e+00  42.1%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flop: Max - maximum over all processors
                  Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   AvgLen: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flop in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ----  Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   AvgLen  Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage


--- Event Stage 1: Assembly

BuildTwoSidedF         1 1.0 1.3494e-04 4.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  4  0  0  0  0   7  0  0  0  0     0
MatAssemblyBegin       1 1.0 3.3259e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 17  0  0  0  0  29  0  0  0  0     0
MatAssemblyEnd         1 1.0 7.9846e-04 1.1 0.00e+00 0.0 1.2e+01 6.0e+00 8.0e+00 42  0100100 42  70  0100100100     0
VecSet                 1 1.0 4.0531e-06 5.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Matrix     3              3        10672     0.
              Vector     0              1         1688     0.
         Vec Scatter     0              1         1528     0.
              Viewer     1              0            0     0.

--- Event Stage 1: Assembly

              Vector     2              1         1792     0.
           Index Set     2              2         1720     0.
         Vec Scatter     1              0            0     0.
========================================================================================================================
Average time to get PetscTime(): 7.15256e-08
Average time for MPI_Barrier(): 7.9155e-06
Average time for zero size MPI_Send(): 2.26498e-06
#PETSc Option Table entries:
-log_view
-n 10
#End of PETSc Option Table entries
Compiled with FORTRAN kernels
Compiled with 64 bit PetscInt
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 16 sizeof(PetscInt) 8
Configure options: --prefix=/opt/lib/petsc/3.11.0/prod-mkl_seq-avx512-64bit-double-complex --with-precision=double --with-scalar-type=complex --with-64-bit-indices=1 --with-shared-libraries=1 --with-avx512-kernels=1 --with-memalign=64 --CC=mpicc --CXX=mpicxx --FC=mpifort --F90=mpifort --F77=mpifort --COPTFLAGS="-O3 -g" --CXXOPTFLAGS="-O3 -g" --FOPTFLAGS="-O3 -g" --CFLAGS="-DMKL_ILP64 -I/opt/lib/intel/compilers_and_libraries_2018.3.222/linux/mkl/include" --CXXFLAGS="-DMKL_ILP64 -I/opt/lib/intel/compilers_and_libraries_2018.3.222/linux/mkl/include" --FFLAGS="-DMKL_ILP64 -I/opt/lib/intel/compilers_and_libraries_2018.3.222/linux/mkl/include" --with-debugging=0 --with-mpi=1 --with-mpi-compilers=1 --with-default-arch=0 --with-blaslapack=1 --with-blaslapack-pkg-config=/opt/lib/intel/compilers_and_libraries_2018.3.222/linux/mkl/bin/pkgconfig/mkl-dynamic-ilp64-seq.pc --with-valgrind=0 --PETSC_ARCH=prod-mkl_seq-avx512-64bit-double-complex -with-batch=0 --known-mpi-shared-libraries=1 --known-64-bit-blas-indices=1 --CXX_LINKER_FLAGS="-L/opt/lib/intel/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64 -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -lpthread -ldl" --CC_LINKER_FLAGS="-L/opt/lib/intel/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64 -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -lpthread -ldl" --FC_LINKER_FLAGS="-L/opt/lib/intel/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64 -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -lpthread -ldl" --with-fortran-kernels=1
-----------------------------------------
Libraries compiled on 2019-06-17 09:03:01 on lanesa
Machine characteristics: Linux-4.15.0-51-generic-x86_64-with-Ubuntu-18.04-bionic
Using PETSc directory: /opt/lib/petsc/3.11.0/prod-mkl_seq-avx512-64bit-double-complex
Using PETSc arch:
-----------------------------------------

Using C compiler: mpicc -DMKL_ILP64 -I/opt/lib/intel/compilers_and_libraries_2018.3.222/linux/mkl/include -fPIC -O3 -g
Using Fortran compiler: mpifort -DMKL_ILP64 -I/opt/lib/intel/compilers_and_libraries_2018.3.222/linux/mkl/include -fPIC -O3 -g
-----------------------------------------

Using include paths: -I/opt/lib/petsc/3.11.0/prod-mkl_seq-avx512-64bit-double-complex/include
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpifort
Using libraries: -Wl,-rpath,/opt/lib/petsc/3.11.0/prod-mkl_seq-avx512-64bit-double-complex/lib -L/opt/lib/petsc/3.11.0/prod-mkl_seq-avx512-64bit-double-complex/lib -lpetsc -Wl,-rpath,/opt/lib/intel/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64 -L/opt/lib/intel/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64 -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/7 -L/usr/lib/gcc/x86_64-linux-gnu/7 -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -lm -lX11 -lpthread -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -lpthread -ldl -lstdc++ -lmpichfort -lmpich -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -ldl
-----------------------------------------
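
The report above shows a user-defined logging stage named "Assembly" that accounts for 58.9% of the run time, with MatAssemblyBegin/MatAssemblyEnd and the associated BuildTwoSidedF communication setup dominating it. The source of petsc_assembly.x is not reproduced here; what follows is only a minimal sketch, in C, of how such a two-stage report could be produced, assuming the program registers the "Assembly" stage with PetscLogStageRegister() and brackets the assembly of the n-by-n tridiagonal 1-D Laplacian with PetscLogStagePush()/PetscLogStagePop(). The stencil values, option handling, and variable names are illustrative assumptions, not the actual code.

/* Sketch only: registers an "Assembly" logging stage around the assembly of the
 * 1-D Laplacian (tridiagonal -1, 2, -1), so that -log_view reports it separately
 * from the Main Stage. Not the actual petsc_assembly.x source. */
#include <petscmat.h>
#include <petsclog.h>

int main(int argc, char **argv)
{
  Mat            A;
  PetscInt       n = 10, i, Istart, Iend;
  PetscLogStage  stage;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = PetscOptionsGetInt(NULL, NULL, "-n", &n, NULL);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "1-D Laplacian Eigenproblem, n=%D\n", n);CHKERRQ(ierr);

  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &Istart, &Iend);CHKERRQ(ierr);

  /* Everything logged between Push and Pop is attributed to "Event Stage 1: Assembly". */
  ierr = PetscLogStageRegister("Assembly", &stage);CHKERRQ(ierr);
  ierr = PetscLogStagePush(stage);CHKERRQ(ierr);
  for (i = Istart; i < Iend; i++) {
    if (i > 0)     { ierr = MatSetValue(A, i, i-1, -1.0, INSERT_VALUES);CHKERRQ(ierr); }
    if (i < n - 1) { ierr = MatSetValue(A, i, i+1, -1.0, INSERT_VALUES);CHKERRQ(ierr); }
    ierr = MatSetValue(A, i, i, 2.0, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = PetscLogStagePop();CHKERRQ(ierr);

  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Run with, for example, mpiexec -n 4 ./petsc_assembly.x -n 10 -log_view to obtain a report with the same two stages as above; the exact timings, message counts, and object totals will of course depend on the build (here a complex-scalar, 64-bit-index, MKL-linked PETSc 3.11.0) and on the machine.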