[petsc-users] Call to PETSc functions way higher when using lower number of processors
Jose A. Abell M.
jaabell at ucdavis.edu
Wed Jul 1 16:29:48 CDT 2015
Dear PETSc-users,
I'm running the same dummy simulation (solving a 10000 x 10000 linear
system of equations 10 times) on 12 and on 18 processors of an SMP
machine. With 18 processors I spend 3.5 s in PETSc calls; with 12 I
spend ~260 s.
Again, the matrix is the same; the only difference is the number of
processors, which changes the ordering of the matrix rows and columns
as the domain gets partitioned differently.
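As background, PETSc distributes a parallel (MPIAIJ) matrix by
contiguous blocks of rows, so which rows each rank owns (and hence the
parallel ordering) changes with the number of processes. A minimal
sketch that prints each rank's row range; this code is illustrative
and not taken from the simulation above:

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat         A;
  PetscInt    rstart, rend;
  PetscMPIInt rank;

  PetscInitialize(&argc, &argv, NULL, NULL);
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

  /* A 10000 x 10000 matrix split into contiguous row blocks;
     the split depends on how many ranks the job runs on. */
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, 10000, 10000);
  MatSetFromOptions(A);
  MatSetUp(A);

  MatGetOwnershipRange(A, &rstart, &rend);
  PetscSynchronizedPrintf(PETSC_COMM_WORLD, "rank %d owns rows %D..%D\n",
                          rank, rstart, rend - 1);
  PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT);

  MatDestroy(&A);
  PetscFinalize();
  return 0;
}

Run with mpirun -np 12 and with -np 18 and the ownership ranges shift,
which is exactly the repartitioning effect described above.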
When looking at the performance log I see:
For 12 processors:
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
MatSolve          103340 1.0 8.6910e+01 1.2 7.54e+10 1.0 0.0e+00 0.0e+00 0.0e+00 31 34  0  0  0  31 34  0  0  0 10113
and for 18 processors:
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
MatSolve             108 1.0 6.9855e-02 1.4 5.25e+07 1.1 0.0e+00 0.0e+00 0.0e+00  2 32  0  0  0   2 32  0  0  0 13136
The MatSolve count is enormous in the slow case (103340 versus 108
calls, nearly a factor of 1000). The pattern is similar for other
operations like MatMult and all the vector-oriented operations. I've
included the complete logs for these cases.
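For reference, these summaries look like PETSc's -log_summary output
dumped cumulatively after each solve. A minimal sketch of one way such
dumps could be produced, assuming logging was started right after
PetscInitialize(); here ksp, b and x stand in for the application's
own objects:

  PetscLogBegin();              /* start collecting event counts/times */
  for (PetscInt k = 0; k < 10; k++) {
    KSPSolve(ksp, b, x);
    /* each call prints a summary of everything logged so far */
    PetscLogView(PETSC_VIEWER_STDOUT_WORLD);
  }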
What is the main driver behind the number of calls to these functions
being so high? Is the matrix ordering alone to blame, or is there
something else I'm missing?
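One reasoning step that helps frame this: in these logs the MatSolve
and MatMult counts grow by roughly one per Krylov iteration, so counts
that differ by a factor of ~1000 point to many more iterations per
solve rather than more solves. The logged events (KSPGMRESOrthog,
MatILUFactorSym, PCSetUpOnBlocks) indicate GMRES with a block ILU
preconditioner, whose quality can change with the partition. A minimal
sketch for checking iteration counts directly; ReportSolve is a
hypothetical helper and ksp, b, x are the application's objects:

#include <petscksp.h>

/* Print how many iterations a solve took and why it stopped. */
static PetscErrorCode ReportSolve(KSP ksp, Vec b, Vec x)
{
  PetscInt           its;
  KSPConvergedReason reason;

  KSPSolve(ksp, b, x);
  KSPGetIterationNumber(ksp, &its);
  KSPGetConvergedReason(ksp, &reason);  /* negative means divergence/max_it */
  PetscPrintf(PETSC_COMM_WORLD, "KSP iterations %D, reason %s\n",
              its, KSPConvergedReasons[reason]);
  return 0;
}

Equivalently, running both cases with -ksp_converged_reason and
-ksp_monitor_true_residual prints the same information with no code
changes.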
Regards and thanks!
--
José Abell
PhD Candidate
Computational Geomechanics Group
Dept. of Civil and Environmental Engineering
UC Davis
-------------- next part --------------
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 12 processors, by jaabell Wed Jul 1 13:40:46 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 2.522e+01 1.00014 2.522e+01
Objects: 5.800e+01 1.00000 5.800e+01
Flops: 2.200e+10 1.04006 2.139e+10 2.566e+11
Flops/sec: 8.723e+08 1.04010 8.481e+08 1.018e+10
MPI Messages: 5.168e+04 2.49964 3.618e+04 4.341e+05
MPI Message Lengths: 6.298e+06 2.17256 1.260e+02 5.469e+07
MPI Reductions: 2.035e+04 1.00000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flops
and VecAXPY() for complex vectors of length N --> 8N flops
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 2.5218e+01 100.0% 2.5663e+11 100.0% 4.341e+05 100.0% 1.260e+02 100.0% 2.035e+04 100.0%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
Count: number of times phase was executed
Time and Flops: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
Avg. len: average message length (bytes)
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
%T - percent time in this phase %F - percent flops in this phase
%M - percent messages in this phase %L - percent message lengths in this phase
%R - percent reductions in this phase
Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 10333 1.0 7.3286e+00 1.2 7.54e+09 1.0 4.3e+05 1.3e+02 0.0e+00 27 34100100 0 27 34100100 0 12012
MatSolve 10334 1.0 8.2564e+00 1.2 7.54e+09 1.0 0.0e+00 0.0e+00 0.0e+00 31 34 0 0 0 31 34 0 0 0 10646
MatLUFactorNum 1 1.0 9.8922e-03 1.2 6.72e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 7922
MatILUFactorSym 1 1.0 6.6602e-03 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 1 1.0 1.2205e-03 4.0 0.00e+00 0.0 7.2e+01 1.8e+03 2.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 1 1.0 6.1265e-02 1.0 0.00e+00 0.0 8.4e+01 3.3e+01 9.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetRowIJ 1 1.0 3.6001e-05 7.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 5.5051e-04 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 1 1.0 4.8053e-03 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 10000 1.0 8.3688e+00 1.5 3.19e+09 1.0 0.0e+00 0.0e+00 1.0e+04 26 14 0 0 49 26 14 0 0 49 4442
VecNorm 10334 1.0 1.6581e+00 1.6 2.13e+08 1.0 0.0e+00 0.0e+00 1.0e+04 5 1 0 0 51 5 1 0 0 51 1496
VecScale 10334 1.0 1.2875e-01 1.6 1.06e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 9631
VecCopy 334 1.0 1.0633e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 10670 1.0 2.1987e-01 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
VecAXPY 667 1.0 1.5060e-02 1.4 1.37e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 10630
VecMAXPY 10334 1.0 1.6847e+00 1.5 3.39e+09 1.0 0.0e+00 0.0e+00 0.0e+00 5 15 0 0 0 5 15 0 0 0 23491
VecScatterBegin 10333 1.0 6.2853e-02 1.4 0.00e+00 0.0 4.3e+05 1.3e+02 0.0e+00 0 0100100 0 0 0100100 0 0
VecScatterEnd 10333 1.0 7.3711e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 10334 1.0 1.7767e+00 1.5 3.19e+08 1.0 0.0e+00 0.0e+00 1.0e+04 6 1 0 0 51 6 1 0 0 51 2094
KSPGMRESOrthog 10000 1.0 9.4583e+00 1.4 6.37e+09 1.0 0.0e+00 0.0e+00 1.0e+04 31 29 0 0 49 31 29 0 0 49 7861
KSPSetUp 2 1.0 4.5919e-04 4.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 1 1.0 2.4813e+01 1.0 2.20e+10 1.0 4.3e+05 1.3e+02 2.0e+04 98100100100100 98100100100100 10343
PCSetUp 2 1.0 1.7168e-02 1.5 6.72e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 4564
PCSetUpOnBlocks 1 1.0 1.7009e-02 1.5 6.72e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 4607
PCApply 10334 1.0 8.9851e+00 1.2 7.54e+09 1.0 0.0e+00 0.0e+00 0.0e+00 34 34 0 0 0 34 34 0 0 0 9782
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 40 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 13448 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.19209e-07
Average time for MPI_Barrier(): 1.07765e-05
Average time for zero size MPI_Send(): 0.00124647
#PETSc Option Table entries:
-info petsc_info.txt
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --with-x=0 --with-debugging=0 COPTFLAGS="-O3 -march=native -mtune=native" CXXOPTFLAGS="-O3 -march=native -mtune=native" FOPTFLAGS="-O3 -march=native -mtune=native" --with-shared-libraries=0 --with-cxx=mpic++ --with-cc=mpicc --with-fc=mpif77 --with-blacs=1 --download-blacs=yes --with-scalapack=1 --download-scalapack=yes --with-spai=1 --download-spai=yes --with-hypre=0 --download-hypre=no --with-plapack=1 --download-plapack=yes --download-fblaslapack=1 --download-plapack=1 --with-metis-dir=/home/jaabell/Repositories/essi_dependencies/metis-4.0.2_install --with-parmetis-dir=/home/jaabell/Repositories/essi_dependencies/parmetis-4.0.2_install --with-superlu_dist=1 --download-superlu_dist=yes --download-superlu=yes --download-superlu_dist --with-spooles=1 --download-spooles make -j 24
-----------------------------------------
Libraries compiled on Tue Jun 30 15:38:16 2015 on nagoyqatsi.engr.ucdavis.edu
Machine characteristics: Linux-3.13.0-30-generic-x86_64-with-Ubuntu-14.04-trusty
Using PETSc directory: /home/jaabell/Repositories/essi_dependencies/petsc-3.6.0
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------
Using C compiler: mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 -march=native -mtune=native ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif77 -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3 -march=native -mtune=native ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/arch-linux2-c-opt/include -I/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/include -I/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/include -I/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/arch-linux2-c-opt/include -I/home/jaabell/Repositories/essi_dependencies/parmetis-4.0.2_install/include -I/home/jaabell/Repositories/essi_dependencies/metis-4.0.2_install/include -I/home/jaabell/.mvapich/include
-----------------------------------------
Using C linker: mpicc
Using Fortran linker: mpif77
Using libraries: -Wl,-rpath,/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/arch-linux2-c-opt/lib -L/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/arch-linux2-c-opt/lib -L/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/arch-linux2-c-opt/lib -lsuperlu_4.3 -lspai -lscalapack -lsuperlu_dist_4.0 -lflapack -lfblas -Wl,-rpath,/home/jaabell/Repositories/essi_dependencies/parmetis-4.0.2_install/lib -L/home/jaabell/Repositories/essi_dependencies/parmetis-4.0.2_install/lib -lparmetis -Wl,-rpath,/home/jaabell/Repositories/essi_dependencies/metis-4.0.2_install/lib -L/home/jaabell/Repositories/essi_dependencies/metis-4.0.2_install/lib -lmetis -lhwloc -lpthread -lssl -lcrypto -lm -L/home/jaabell/.mvapich/lib -L/usr/lib/gcc/x86_64-linux-gnu/4.9 -L/usr/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -lmpifort -lgfortran -lm -Wl,-rpath,/home/jaabell/.mvapich/lib -lgfortran -lm -lquadmath -lm -lmpicxx -lstdc++ -L/home/jaabell/.mvapich/lib -L/usr/lib/gcc/x86_64-linux-gnu/4.9 -L/usr/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -ldl -Wl,-rpath,/home/jaabell/.mvapich/lib -lmpi -lgcc_s -ldl
-----------------------------------------
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 12 processors, by jaabell Wed Jul 1 13:41:13 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 5.171e+01 1.00007 5.171e+01
Objects: 5.800e+01 1.00000 5.800e+01
Flops: 4.399e+10 1.04006 4.277e+10 5.133e+11
Flops/sec: 8.507e+08 1.04008 8.271e+08 9.925e+09
MPI Messages: 1.034e+05 2.49960 7.235e+04 8.682e+05
MPI Message Lengths: 1.260e+07 2.17256 1.260e+02 1.094e+08
MPI Reductions: 4.263e+04 1.00000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flops
and VecAXPY() for complex vectors of length N --> 8N flops
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 5.1680e+01 99.9% 5.1327e+11 100.0% 8.682e+05 100.0% 1.260e+02 100.0% 4.069e+04 95.5%
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 20666 1.0 1.5988e+01 1.4 1.51e+10 1.0 8.7e+05 1.3e+02 0.0e+00 27 34100100 0 27 34100100 0 11012
MatSolve 20668 1.0 1.7528e+01 1.3 1.51e+10 1.0 0.0e+00 0.0e+00 0.0e+00 31 34 0 0 0 31 34 0 0 0 10029
MatLUFactorNum 2 1.0 2.2892e-02 1.3 1.34e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 6846
MatILUFactorSym 1 1.0 6.6602e-03 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 2 1.0 1.7476e-03 2.2 0.00e+00 0.0 1.4e+02 1.8e+03 4.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 2 1.0 6.2687e-02 1.0 0.00e+00 0.0 8.4e+01 3.3e+01 1.1e+01 0 0 0 0 0 0 0 0 0 0 0
MatGetRowIJ 1 1.0 3.6001e-05 7.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 5.5051e-04 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 2 1.0 6.1951e-03 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 20000 1.0 1.8365e+01 1.7 6.37e+09 1.0 0.0e+00 0.0e+00 2.0e+04 28 14 0 0 47 28 14 0 0 49 4048
VecNorm 20668 1.0 2.9957e+00 1.4 4.25e+08 1.0 0.0e+00 0.0e+00 2.1e+04 5 1 0 0 48 5 1 0 0 51 1656
VecScale 20668 1.0 2.3675e-01 1.5 2.13e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 10476
VecCopy 668 1.0 2.3654e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 21339 1.0 4.6239e-01 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
VecAXPY 1334 1.0 2.8088e-02 1.3 2.74e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 11398
VecMAXPY 20668 1.0 2.8949e+00 1.3 6.78e+09 1.0 0.0e+00 0.0e+00 0.0e+00 5 15 0 0 0 5 15 0 0 0 27342
VecScatterBegin 20666 1.0 1.2330e-01 1.5 0.00e+00 0.0 8.7e+05 1.3e+02 0.0e+00 0 0100100 0 0 0100100 0 0
VecScatterEnd 20666 1.0 1.5949e-01 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 20668 1.0 3.2343e+00 1.4 6.38e+08 1.0 0.0e+00 0.0e+00 2.1e+04 5 1 0 0 48 5 1 0 0 51 2301
KSPGMRESOrthog 20000 1.0 2.0480e+01 1.5 1.27e+10 1.0 0.0e+00 0.0e+00 2.0e+04 33 29 0 0 47 33 29 0 0 49 7261
KSPSetUp 3 1.0 4.5967e-04 4.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 2 1.0 5.0777e+01 1.0 4.40e+10 1.0 8.7e+05 1.3e+02 4.1e+04 98100100100 95 98100100100100 10108
PCSetUp 4 1.0 2.8852e-02 1.4 1.34e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 5432
PCSetUpOnBlocks 2 1.0 1.7010e-02 1.5 6.72e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 4607
PCApply 20668 1.0 1.8838e+01 1.3 1.51e+10 1.0 0.0e+00 0.0e+00 0.0e+00 33 34 0 0 0 33 34 0 0 0 9336
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 40 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 13448 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.19209e-07
Average time for MPI_Barrier(): 1.16348e-05
Average time for zero size MPI_Send(): 2.30471e-06
#PETSc Option Table entries:
-info petsc_info.txt
#End of PETSc Option Table entries
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 12 processors, by jaabell Wed Jul 1 13:41:39 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 7.766e+01 1.00006 7.766e+01
Objects: 5.800e+01 1.00000 5.800e+01
Flops: 6.599e+10 1.04006 6.416e+10 7.699e+11
Flops/sec: 8.498e+08 1.04007 8.262e+08 9.914e+09
MPI Messages: 1.550e+05 2.49959 1.085e+05 1.302e+06
MPI Message Lengths: 1.889e+07 2.17256 1.260e+02 1.641e+08
MPI Reductions: 6.490e+04 1.00000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flops
and VecAXPY() for complex vectors of length N --> 8N flops
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 7.7610e+01 99.9% 7.6990e+11 100.0% 1.302e+06 100.0% 1.260e+02 100.0% 6.103e+04 94.0%
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 30999 1.0 2.3715e+01 1.3 2.26e+10 1.0 1.3e+06 1.3e+02 0.0e+00 27 34100100 0 27 34100100 0 11136
MatSolve 31002 1.0 2.6059e+01 1.2 2.26e+10 1.0 0.0e+00 0.0e+00 0.0e+00 31 34 0 0 0 31 34 0 0 0 10119
MatLUFactorNum 3 1.0 3.2447e-02 1.2 2.02e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 7245
MatILUFactorSym 1 1.0 6.6602e-03 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 3 1.0 2.3293e-03 2.2 0.00e+00 0.0 2.2e+02 1.8e+03 6.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 3 1.0 6.4434e-02 1.0 0.00e+00 0.0 8.4e+01 3.3e+01 1.3e+01 0 0 0 0 0 0 0 0 0 0 0
MatGetRowIJ 1 1.0 3.6001e-05 7.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 5.5051e-04 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 3 1.0 7.3338e-03 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 30000 1.0 2.6812e+01 1.6 9.56e+09 1.0 0.0e+00 0.0e+00 3.0e+04 28 14 0 0 46 28 14 0 0 49 4159
VecNorm 31002 1.0 4.4017e+00 1.4 6.38e+08 1.0 0.0e+00 0.0e+00 3.1e+04 5 1 0 0 48 5 1 0 0 51 1690
VecScale 31002 1.0 3.4563e-01 1.4 3.19e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 10764
VecCopy 1002 1.0 3.5723e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 32008 1.0 6.9122e-01 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
VecAXPY 2001 1.0 4.0981e-02 1.2 4.12e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 11719
VecMAXPY 31002 1.0 4.4094e+00 1.3 1.02e+10 1.0 0.0e+00 0.0e+00 0.0e+00 5 15 0 0 0 5 15 0 0 0 26926
VecScatterBegin 30999 1.0 1.8540e-01 1.5 0.00e+00 0.0 1.3e+06 1.3e+02 0.0e+00 0 0100100 0 0 0100100 0 0
VecScatterEnd 30999 1.0 2.7407e-01 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 31002 1.0 4.7597e+00 1.3 9.57e+08 1.0 0.0e+00 0.0e+00 3.1e+04 5 1 0 0 48 5 1 0 0 51 2345
KSPGMRESOrthog 30000 1.0 2.9975e+01 1.5 1.91e+10 1.0 0.0e+00 0.0e+00 3.0e+04 32 29 0 0 46 33 29 0 0 49 7441
KSPSetUp 4 1.0 4.5991e-04 4.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 3 1.0 7.6298e+01 1.0 6.60e+10 1.0 1.3e+06 1.3e+02 6.1e+04 98100100100 94 98100100100100 10091
PCSetUp 6 1.0 3.8442e-02 1.2 2.02e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 6115
PCSetUpOnBlocks 3 1.0 1.7011e-02 1.5 6.72e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 4607
PCApply 31002 1.0 2.8093e+01 1.2 2.26e+10 1.0 0.0e+00 0.0e+00 0.0e+00 33 34 0 0 0 33 34 0 0 0 9392
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 40 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 13448 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.19209e-07
Average time for MPI_Barrier(): 1.07765e-05
Average time for zero size MPI_Send(): 2.30471e-06
#PETSc Option Table entries:
-info petsc_info.txt
#End of PETSc Option Table entries
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 12 processors, by jaabell Wed Jul 1 13:42:05 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 1.042e+02 1.00003 1.042e+02
Objects: 5.800e+01 1.00000 5.800e+01
Flops: 8.798e+10 1.04006 8.554e+10 1.027e+12
Flops/sec: 8.447e+08 1.04006 8.212e+08 9.855e+09
MPI Messages: 2.067e+05 2.49958 1.447e+05 1.736e+06
MPI Message Lengths: 2.519e+07 2.17256 1.260e+02 2.187e+08
MPI Reductions: 8.718e+04 1.00000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flops
and VecAXPY() for complex vectors of length N --> 8N flops
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 1.0411e+02 99.9% 1.0265e+12 100.0% 1.736e+06 100.0% 1.260e+02 100.0% 8.136e+04 93.3%
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 41332 1.0 3.1362e+01 1.3 3.02e+10 1.0 1.7e+06 1.3e+02 0.0e+00 27 34100100 0 27 34100100 0 11228
MatSolve 41336 1.0 3.4559e+01 1.2 3.01e+10 1.0 0.0e+00 0.0e+00 0.0e+00 31 34 0 0 0 31 34 0 0 0 10173
MatLUFactorNum 4 1.0 4.2350e-02 1.2 2.69e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 7402
MatILUFactorSym 1 1.0 6.6602e-03 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 4 1.0 2.8634e-03 2.0 0.00e+00 0.0 2.9e+02 1.8e+03 8.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 4 1.0 6.6101e-02 1.0 0.00e+00 0.0 8.4e+01 3.3e+01 1.5e+01 0 0 0 0 0 0 0 0 0 0 0
MatGetRowIJ 1 1.0 3.6001e-05 7.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 5.5051e-04 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 4 1.0 8.3561e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 40000 1.0 3.6134e+01 1.6 1.27e+10 1.0 0.0e+00 0.0e+00 4.0e+04 28 14 0 0 46 28 14 0 0 49 4115
VecNorm 41336 1.0 6.2329e+00 1.4 8.50e+08 1.0 0.0e+00 0.0e+00 4.1e+04 5 1 0 0 47 5 1 0 0 51 1592
VecScale 41336 1.0 4.6990e-01 1.5 4.25e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 10556
VecCopy 1336 1.0 4.7449e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 42677 1.0 9.1115e-01 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
VecAXPY 2668 1.0 5.4109e-02 1.2 5.49e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 11834
VecMAXPY 41336 1.0 5.8184e+00 1.3 1.36e+10 1.0 0.0e+00 0.0e+00 0.0e+00 5 15 0 0 0 5 15 0 0 0 27208
VecScatterBegin 41332 1.0 2.5026e-01 1.5 0.00e+00 0.0 1.7e+06 1.3e+02 0.0e+00 0 0100100 0 0 0100100 0 0
VecScatterEnd 41332 1.0 3.5811e-01 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 41336 1.0 6.7111e+00 1.4 1.28e+09 1.0 0.0e+00 0.0e+00 4.1e+04 5 1 0 0 47 5 1 0 0 51 2217
KSPGMRESOrthog 40000 1.0 4.0329e+01 1.4 2.55e+10 1.0 0.0e+00 0.0e+00 4.0e+04 32 29 0 0 46 32 29 0 0 49 7374
KSPSetUp 5 1.0 4.6062e-04 4.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 4 1.0 1.0240e+02 1.0 8.80e+10 1.0 1.7e+06 1.3e+02 8.1e+04 98100100100 93 98100100100100 10025
PCSetUp 8 1.0 4.8366e-02 1.2 2.69e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 6481
PCSetUpOnBlocks 4 1.0 1.7013e-02 1.5 6.72e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 4606
PCApply 41336 1.0 3.7219e+01 1.2 3.02e+10 1.0 0.0e+00 0.0e+00 0.0e+00 33 34 0 0 0 33 34 0 0 0 9452
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 40 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 13448 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.19209e-07
Average time for MPI_Barrier(): 1.05381e-05
Average time for zero size MPI_Send(): 2.20537e-06
#PETSc Option Table entries:
-info petsc_info.txt
#End of PETSc Option Table entries
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 12 processors, by jaabell Wed Jul 1 13:42:34 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 1.326e+02 1.00003 1.326e+02
Objects: 5.800e+01 1.00000 5.800e+01
Flops: 1.100e+11 1.04006 1.069e+11 1.283e+12
Flops/sec: 8.293e+08 1.04007 8.063e+08 9.676e+09
MPI Messages: 2.584e+05 2.49958 1.809e+05 2.170e+06
MPI Message Lengths: 3.149e+07 2.17256 1.260e+02 2.734e+08
MPI Reductions: 1.095e+05 1.00000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flops
and VecAXPY() for complex vectors of length N --> 8N flops
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 1.3254e+02 99.9% 1.2832e+12 100.0% 2.170e+06 100.0% 1.260e+02 100.0% 1.017e+05 92.9%
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 51665 1.0 4.0262e+01 1.3 3.77e+10 1.0 2.2e+06 1.3e+02 0.0e+00 27 34100100 0 27 34100100 0 10932
MatSolve 51670 1.0 4.4664e+01 1.3 3.77e+10 1.0 0.0e+00 0.0e+00 0.0e+00 30 34 0 0 0 30 34 0 0 0 9839
MatLUFactorNum 5 1.0 5.2033e-02 1.1 3.36e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 7530
MatILUFactorSym 1 1.0 6.6602e-03 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 5 1.0 3.3691e-03 2.0 0.00e+00 0.0 3.6e+02 1.8e+03 1.0e+01 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 5 1.0 6.7972e-02 1.0 0.00e+00 0.0 8.4e+01 3.3e+01 1.7e+01 0 0 0 0 0 0 0 0 0 0 0
MatGetRowIJ 1 1.0 3.6001e-05 7.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 5.5051e-04 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 5 1.0 9.2900e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 50000 1.0 4.6689e+01 1.7 1.59e+10 1.0 0.0e+00 0.0e+00 5.0e+04 28 14 0 0 46 28 14 0 0 49 3981
VecNorm 51670 1.0 7.4591e+00 1.4 1.06e+09 1.0 0.0e+00 0.0e+00 5.2e+04 5 1 0 0 47 5 1 0 0 51 1662
VecScale 51670 1.0 5.8028e-01 1.4 5.31e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 10685
VecCopy 1670 1.0 5.8324e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 53346 1.0 1.1809e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
VecAXPY 3335 1.0 6.9923e-02 1.2 6.86e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 11447
VecMAXPY 51670 1.0 7.2888e+00 1.2 1.70e+10 1.0 0.0e+00 0.0e+00 0.0e+00 5 15 0 0 0 5 15 0 0 0 27149
VecScatterBegin 51665 1.0 3.1515e-01 1.5 0.00e+00 0.0 2.2e+06 1.3e+02 0.0e+00 0 0100100 0 0 0100100 0 0
VecScatterEnd 51665 1.0 4.6215e-01 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 51670 1.0 8.0691e+00 1.3 1.59e+09 1.0 0.0e+00 0.0e+00 5.2e+04 5 1 0 0 47 5 1 0 0 51 2305
KSPGMRESOrthog 50000 1.0 5.2287e+01 1.5 3.19e+10 1.0 0.0e+00 0.0e+00 5.0e+04 33 29 0 0 46 33 29 0 0 49 7110
KSPSetUp 6 1.0 4.6134e-04 4.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 5 1.0 1.3046e+02 1.0 1.10e+11 1.0 2.2e+06 1.3e+02 1.0e+05 98100100100 93 98100100100100 9836
PCSetUp 10 1.0 5.8070e-02 1.2 3.36e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 6747
PCSetUpOnBlocks 5 1.0 1.7014e-02 1.5 6.72e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 4606
PCApply 51670 1.0 4.6982e+01 1.2 3.77e+10 1.0 0.0e+00 0.0e+00 0.0e+00 33 34 0 0 0 33 34 0 0 0 9361
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 40 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 13448 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.19209e-07
Average time for MPI_Barrier(): 1.14441e-05
Average time for zero size MPI_Send(): 2.46366e-06
#PETSc Option Table entries:
-info petsc_info.txt
#End of PETSc Option Table entries
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 12 processors, by jaabell Wed Jul 1 13:43:00 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 1.587e+02 1.00002 1.587e+02
Objects: 5.800e+01 1.00000 5.800e+01
Flops: 1.320e+11 1.04006 1.283e+11 1.540e+12
Flops/sec: 8.318e+08 1.04006 8.088e+08 9.705e+09
MPI Messages: 3.100e+05 2.49958 2.170e+05 2.604e+06
MPI Message Lengths: 3.779e+07 2.17256 1.260e+02 3.281e+08
MPI Reductions: 1.317e+05 1.00000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flops
and VecAXPY() for complex vectors of length N --> 8N flops
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 1.5857e+02 99.9% 1.5398e+12 100.0% 2.604e+06 100.0% 1.260e+02 100.0% 1.220e+05 92.6%
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 61998 1.0 4.8605e+01 1.3 4.53e+10 1.0 2.6e+06 1.3e+02 0.0e+00 27 34100100 0 27 34100100 0 10867
MatSolve 62004 1.0 5.3971e+01 1.3 4.52e+10 1.0 0.0e+00 0.0e+00 0.0e+00 31 34 0 0 0 31 34 0 0 0 9771
MatLUFactorNum 6 1.0 6.1202e-02 1.1 4.03e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 7682
MatILUFactorSym 1 1.0 6.6602e-03 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 6 1.0 3.8674e-03 1.9 0.00e+00 0.0 4.3e+02 1.8e+03 1.2e+01 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 6 1.0 6.9834e-02 1.0 0.00e+00 0.0 8.4e+01 3.3e+01 1.9e+01 0 0 0 0 0 0 0 0 0 0 0
MatGetRowIJ 1 1.0 3.6001e-05 7.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 5.5051e-04 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 6 1.0 1.0138e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 60000 1.0 5.3860e+01 1.7 1.91e+10 1.0 0.0e+00 0.0e+00 6.0e+04 28 14 0 0 46 28 14 0 0 49 4141
VecNorm 62004 1.0 8.6848e+00 1.3 1.28e+09 1.0 0.0e+00 0.0e+00 6.2e+04 5 1 0 0 47 5 1 0 0 51 1713
VecScale 62004 1.0 7.0675e-01 1.5 6.38e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 10528
VecCopy 2004 1.0 6.7320e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 64015 1.0 1.4229e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
VecAXPY 4002 1.0 8.5091e-02 1.3 8.23e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 11288
VecMAXPY 62004 1.0 8.5972e+00 1.2 2.04e+10 1.0 0.0e+00 0.0e+00 0.0e+00 5 15 0 0 0 5 15 0 0 0 27620
VecScatterBegin 61998 1.0 3.8155e-01 1.5 0.00e+00 0.0 2.6e+06 1.3e+02 0.0e+00 0 0100100 0 0 0100100 0 0
VecScatterEnd 61998 1.0 5.3855e-01 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 62004 1.0 9.4208e+00 1.3 1.91e+09 1.0 0.0e+00 0.0e+00 6.2e+04 5 1 0 0 47 5 1 0 0 51 2369
KSPGMRESOrthog 60000 1.0 6.1600e+01 1.6 3.82e+10 1.0 0.0e+00 0.0e+00 6.0e+04 33 29 0 0 46 33 29 0 0 49 7242
KSPSetUp 7 1.0 4.6158e-04 4.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 6 1.0 1.5609e+02 1.0 1.32e+11 1.0 2.6e+06 1.3e+02 1.2e+05 98100100100 93 98100100100100 9865
PCSetUp 12 1.0 6.7255e-02 1.2 4.03e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 6991
PCSetUpOnBlocks 6 1.0 1.7015e-02 1.5 6.72e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 4606
PCApply 62004 1.0 5.6753e+01 1.2 4.53e+10 1.0 0.0e+00 0.0e+00 0.0e+00 33 34 0 0 0 33 34 0 0 0 9299
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 40 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 13448 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.19209e-07
Average time for MPI_Barrier(): 1.15395e-05
Average time for zero size MPI_Send(): 2.26498e-06
#PETSc Option Table entries:
-info petsc_info.txt
#End of PETSc Option Table entries
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 12 processors, by jaabell Wed Jul 1 13:43:26 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 1.848e+02 1.00002 1.848e+02
Objects: 5.800e+01 1.00000 5.800e+01
Flops: 1.540e+11 1.04006 1.497e+11 1.796e+12
Flops/sec: 8.330e+08 1.04007 8.099e+08 9.718e+09
MPI Messages: 3.617e+05 2.49958 2.532e+05 3.039e+06
MPI Message Lengths: 4.408e+07 2.17256 1.260e+02 3.828e+08
MPI Reductions: 1.540e+05 1.00000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flops
and VecAXPY() for complex vectors of length N --> 8N flops
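As a concrete illustration of this convention (an editor's sketch, not part of the original log): PETSc kernels report their operation counts through PetscLogFlops(), and user code can do the same so its work appears in tables like the ones below. PetscLogFlops() is the actual PETSc 3.6 call; the helper name user_axpy is hypothetical.

    #include <petscsys.h>

    /* A hand-written AXPY counted under the convention above:
       y <- a*x + y on real vectors of length n costs 2n flops. */
    static PetscErrorCode user_axpy(PetscInt n, PetscScalar a,
                                    const PetscScalar *x, PetscScalar *y)
    {
      PetscInt       i;
      PetscErrorCode ierr;

      for (i = 0; i < n; i++) y[i] += a * x[i];  /* one multiply + one add per entry */
      ierr = PetscLogFlops(2.0 * n);CHKERRQ(ierr);
      return 0;
    }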
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 1.8474e+02 99.9% 1.7964e+12 100.0% 3.038e+06 100.0% 1.260e+02 100.0% 1.424e+05 92.4%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
Count: number of times phase was executed
Time and Flops: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
Avg. len: average message length (bytes)
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
%T - percent time in this phase %F - percent flops in this phase
%M - percent messages in this phase %L - percent message lengths in this phase
%R - percent reductions in this phase
Total Mflop/s: 1e-6 * (sum of flops over all processors)/(max time over all processors)
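All of the logs in this message use only the default Main Stage, so every event above is charged to a single stage. For reference, a minimal sketch of the stage API mentioned in the note above (PETSc 3.6; the stage name and the elided assembly calls are placeholders):

    #include <petscsys.h>

    /* Charge assembly work to its own stage so it is reported separately. */
    static PetscErrorCode log_assembly_in_own_stage(void)
    {
      PetscLogStage  stage;
      PetscErrorCode ierr;

      ierr = PetscLogStageRegister("MatrixAssembly", &stage);CHKERRQ(ierr);
      ierr = PetscLogStagePush(stage);CHKERRQ(ierr);
      /* ... MatSetValues(), MatAssemblyBegin(), MatAssemblyEnd() ... */
      ierr = PetscLogStagePop();CHKERRQ(ierr);  /* return to the enclosing stage */
      return 0;
    }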
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 72331 1.0 5.5552e+01 1.2 5.28e+10 1.0 3.0e+06 1.3e+02 0.0e+00 27 34 100 100 0 27 34 100 100 0 11093
MatSolve 72338 1.0 6.2165e+01 1.2 5.28e+10 1.0 0.0e+00 0.0e+00 0.0e+00 30 34 0 0 0 31 34 0 0 0 9897
MatLUFactorNum 7 1.0 7.0379e-02 1.1 4.70e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 7794
MatILUFactorSym 1 1.0 6.6602e-03 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 7 1.0 4.4162e-03 1.8 0.00e+00 0.0 5.0e+02 1.8e+03 1.4e+01 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 7 1.0 7.1289e-02 1.0 0.00e+00 0.0 8.4e+01 3.3e+01 2.1e+01 0 0 0 0 0 0 0 0 0 0 0
MatGetRowIJ 1 1.0 3.6001e-05 7.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 5.5051e-04 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 7 1.0 1.1036e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 70000 1.0 6.0912e+01 1.6 2.23e+10 1.0 0.0e+00 0.0e+00 7.0e+04 28 14 0 0 45 28 14 0 0 49 4272
VecNorm 72338 1.0 1.0526e+01 1.3 1.49e+09 1.0 0.0e+00 0.0e+00 7.2e+04 5 1 0 0 47 5 1 0 0 51 1649
VecScale 72338 1.0 8.5160e-01 1.5 7.44e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 10193
VecCopy 2338 1.0 7.9150e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 74684 1.0 1.6278e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
VecAXPY 4669 1.0 9.7399e-02 1.2 9.60e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 11505
VecMAXPY 72338 1.0 1.0805e+01 1.3 2.37e+10 1.0 0.0e+00 0.0e+00 0.0e+00 5 15 0 0 0 5 15 0 0 0 25640
VecScatterBegin 72331 1.0 4.4327e-01 1.5 0.00e+00 0.0 3.0e+06 1.3e+02 0.0e+00 0 0 100 100 0 0 0 100 100 0 0
VecScatterEnd 72331 1.0 6.1295e-01 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 72338 1.0 1.1378e+01 1.3 2.23e+09 1.0 0.0e+00 0.0e+00 7.2e+04 5 1 0 0 47 5 1 0 0 51 2289
KSPGMRESOrthog 70000 1.0 7.0872e+01 1.5 4.46e+10 1.0 0.0e+00 0.0e+00 7.0e+04 33 29 0 0 45 33 29 0 0 49 7344
KSPSetUp 8 1.0 4.6229e-04 4.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 7 1.0 1.8188e+02 1.0 1.54e+11 1.0 3.0e+06 1.3e+02 1.4e+05 98 100 100 100 92 98 100 100 100 100 9877
PCSetUp 14 1.0 7.6451e-02 1.1 4.70e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 7175
PCSetUpOnBlocks 7 1.0 1.7016e-02 1.5 6.72e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 4605
PCApply 72338 1.0 6.5714e+01 1.2 5.28e+10 1.0 0.0e+00 0.0e+00 0.0e+00 33 34 0 0 0 33 34 0 0 0 9370
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 40 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 13448 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.19209e-07
Average time for MPI_Barrier(): 1.0252e-05
Average time for zero size MPI_Send(): 2.1259e-06
[option table and build configuration identical to the first summary above; omitted here]
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 12 processors, by jaabell Wed Jul 1 13:43:49 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 2.083e+02 1.00002 2.083e+02
Objects: 5.800e+01 1.00000 5.800e+01
Flops: 1.760e+11 1.04006 1.711e+11 2.053e+12
Flops/sec: 8.448e+08 1.04006 8.213e+08 9.856e+09
MPI Messages: 4.134e+05 2.49957 2.894e+05 3.473e+06
MPI Message Lengths: 5.038e+07 2.17256 1.260e+02 4.375e+08
MPI Reductions: 1.763e+05 1.00000
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 2.0819e+02 99.9% 2.0531e+12 100.0% 3.473e+06 100.0% 1.260e+02 100.0% 1.627e+05 92.3%
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 82664 1.0 6.2189e+01 1.2 6.03e+10 1.0 3.5e+06 1.3e+02 0.0e+00 27 34 100 100 0 27 34 100 100 0 11324
MatSolve 82672 1.0 6.9832e+01 1.2 6.03e+10 1.0 0.0e+00 0.0e+00 0.0e+00 31 34 0 0 0 31 34 0 0 0 10069
MatLUFactorNum 8 1.0 7.9988e-02 1.1 5.38e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 7838
MatILUFactorSym 1 1.0 6.6602e-03 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 8 1.0 5.0349e-03 1.7 0.00e+00 0.0 5.8e+02 1.8e+03 1.6e+01 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 8 1.0 7.2723e-02 1.0 0.00e+00 0.0 8.4e+01 3.3e+01 2.3e+01 0 0 0 0 0 0 0 0 0 0 0
MatGetRowIJ 1 1.0 3.6001e-05 7.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 5.5051e-04 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 8 1.0 1.2084e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 80000 1.0 6.5728e+01 1.5 2.55e+10 1.0 0.0e+00 0.0e+00 8.0e+04 27 14 0 0 45 27 14 0 0 49 4525
VecNorm 82672 1.0 1.1794e+01 1.4 1.70e+09 1.0 0.0e+00 0.0e+00 8.3e+04 5 1 0 0 47 5 1 0 0 51 1682
VecScale 82672 1.0 9.7938e-01 1.6 8.50e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 10130
VecCopy 2672 1.0 8.8863e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 85353 1.0 1.8428e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
VecAXPY 5336 1.0 1.1113e-01 1.2 1.10e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 11524
VecMAXPY 82672 1.0 1.2561e+01 1.4 2.71e+10 1.0 0.0e+00 0.0e+00 0.0e+00 5 15 0 0 0 5 15 0 0 0 25205
VecScatterBegin 82664 1.0 5.0803e-01 1.5 0.00e+00 0.0 3.5e+06 1.3e+02 0.0e+00 0 0 100 100 0 0 0 100 100 0 0
VecScatterEnd 82664 1.0 6.7971e-01 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 82672 1.0 1.2764e+01 1.3 2.55e+09 1.0 0.0e+00 0.0e+00 8.3e+04 5 1 0 0 47 5 1 0 0 51 2332
KSPGMRESOrthog 80000 1.0 7.7351e+01 1.4 5.10e+10 1.0 0.0e+00 0.0e+00 8.0e+04 32 29 0 0 45 32 29 0 0 49 7690
KSPSetUp 9 1.0 4.6277e-04 4.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 8 1.0 2.0496e+02 1.0 1.76e+11 1.0 3.5e+06 1.3e+02 1.6e+05 98 100 100 100 92 98 100 100 100 100 10017
PCSetUp 16 1.0 8.6079e-02 1.1 5.38e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 7283
PCSetUpOnBlocks 8 1.0 1.7017e-02 1.5 6.72e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 4605
PCApply 82672 1.0 7.3875e+01 1.2 6.03e+10 1.0 0.0e+00 0.0e+00 0.0e+00 33 34 0 0 0 33 34 0 0 0 9526
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 40 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 13448 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.19209e-07
Average time for MPI_Barrier(): 1.10626e-05
Average time for zero size MPI_Send(): 2.32458e-06
[option table and build configuration identical to the first summary above; omitted here]
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 12 processors, by jaabell Wed Jul 1 13:44:13 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 2.318e+02 1.00001 2.318e+02
Objects: 5.800e+01 1.00000 5.800e+01
Flops: 1.980e+11 1.04006 1.925e+11 2.310e+12
Flops/sec: 8.542e+08 1.04006 8.305e+08 9.966e+09
MPI Messages: 4.651e+05 2.49957 3.256e+05 3.907e+06
MPI Message Lengths: 5.668e+07 2.17256 1.260e+02 4.922e+08
MPI Reductions: 1.986e+05 1.00000
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 2.3162e+02 99.9% 2.3097e+12 100.0% 3.907e+06 100.0% 1.260e+02 100.0% 1.831e+05 92.2%
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 92997 1.0 7.0149e+01 1.2 6.79e+10 1.0 3.9e+06 1.3e+02 0.0e+00 27 34 100 100 0 27 34 100 100 0 11294
MatSolve 93006 1.0 7.7559e+01 1.2 6.78e+10 1.0 0.0e+00 0.0e+00 0.0e+00 31 34 0 0 0 31 34 0 0 0 10199
MatLUFactorNum 9 1.0 8.9484e-02 1.1 6.05e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 7881
MatILUFactorSym 1 1.0 6.6602e-03 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 9 1.0 5.5165e-03 1.7 0.00e+00 0.0 6.5e+02 1.8e+03 1.8e+01 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 9 1.0 7.4254e-02 1.0 0.00e+00 0.0 8.4e+01 3.3e+01 2.5e+01 0 0 0 0 0 0 0 0 0 0 0
MatGetRowIJ 1 1.0 3.6001e-05 7.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 5.5051e-04 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 9 1.0 1.3036e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 90000 1.0 7.5000e+01 1.6 2.87e+10 1.0 0.0e+00 0.0e+00 9.0e+04 27 14 0 0 45 27 14 0 0 49 4461
VecNorm 93006 1.0 1.2631e+01 1.3 1.91e+09 1.0 0.0e+00 0.0e+00 9.3e+04 5 1 0 0 47 5 1 0 0 51 1767
VecScale 93006 1.0 1.0864e+00 1.5 9.57e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 10273
VecCopy 3006 1.0 1.0196e-01 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 96022 1.0 2.0954e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
VecAXPY 6003 1.0 1.2338e-01 1.2 1.23e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 11677
VecMAXPY 93006 1.0 1.3557e+01 1.3 3.05e+10 1.0 0.0e+00 0.0e+00 0.0e+00 5 15 0 0 0 5 15 0 0 0 26274
VecScatterBegin 92997 1.0 5.8680e-01 1.5 0.00e+00 0.0 3.9e+06 1.3e+02 0.0e+00 0 0 100 100 0 0 0 100 100 0 0
VecScatterEnd 92997 1.0 7.3402e-01 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 93006 1.0 1.3718e+01 1.3 2.87e+09 1.0 0.0e+00 0.0e+00 9.3e+04 5 1 0 0 47 5 1 0 0 51 2441
KSPGMRESOrthog 90000 1.0 8.7551e+01 1.5 5.74e+10 1.0 0.0e+00 0.0e+00 9.0e+04 32 29 0 0 45 32 29 0 0 49 7643
KSPSetUp 10 1.0 4.6301e-04 4.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 9 1.0 2.2805e+02 1.0 1.98e+11 1.0 3.9e+06 1.3e+02 1.8e+05 98 100 100 100 92 98 100 100 100 100 10128
PCSetUp 18 1.0 9.5593e-02 1.1 6.05e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 7378
PCSetUpOnBlocks 9 1.0 1.7018e-02 1.5 6.72e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 4605
PCApply 93006 1.0 8.2385e+01 1.2 6.79e+10 1.0 0.0e+00 0.0e+00 0.0e+00 33 34 0 0 0 33 34 0 0 0 9609
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 40 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 13448 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.43051e-07
Average time for MPI_Barrier(): 1.92165e-05
Average time for zero size MPI_Send(): 2.14577e-06
[option table and build configuration identical to the first summary above; omitted here]
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 12 processors, by jaabell Wed Jul 1 13:44:41 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 2.597e+02 1.00001 2.597e+02
Objects: 5.800e+01 1.00000 5.800e+01
Flops: 2.200e+11 1.04006 2.139e+11 2.566e+12
Flops/sec: 8.469e+08 1.04006 8.234e+08 9.881e+09
MPI Messages: 5.167e+05 2.49957 3.617e+05 4.341e+06
MPI Message Lengths: 6.297e+07 2.17256 1.260e+02 5.469e+08
MPI Reductions: 2.208e+05 1.00000
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 2.5958e+02 99.9% 2.5663e+12 100.0% 4.341e+06 100.0% 1.260e+02 100.0% 2.034e+05 92.1%
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 103330 1.0 7.9059e+01 1.3 7.54e+10 1.0 4.3e+06 1.3e+02 0.0e+00 27 34 100 100 0 27 34 100 100 0 11135
MatSolve 103340 1.0 8.6910e+01 1.2 7.54e+10 1.0 0.0e+00 0.0e+00 0.0e+00 31 34 0 0 0 31 34 0 0 0 10113
MatLUFactorNum 10 1.0 9.9038e-02 1.1 6.72e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 7912
MatILUFactorSym 1 1.0 6.6602e-03 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 10 1.0 6.0179e-03 1.6 0.00e+00 0.0 7.2e+02 1.8e+03 2.0e+01 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 10 1.0 7.5516e-02 1.0 0.00e+00 0.0 8.4e+01 3.3e+01 2.7e+01 0 0 0 0 0 0 0 0 0 0 0
MatGetRowIJ 1 1.0 3.6001e-05 7.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 5.5051e-04 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 10 1.0 1.4120e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 100000 1.0 8.6161e+01 1.6 3.19e+10 1.0 0.0e+00 0.0e+00 1.0e+05 28 14 0 0 45 28 14 0 0 49 4315
VecNorm 103340 1.0 1.4042e+01 1.3 2.13e+09 1.0 0.0e+00 0.0e+00 1.0e+05 5 1 0 0 47 5 1 0 0 51 1766
VecScale 103340 1.0 1.1954e+00 1.5 1.06e+09 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 10374
VecCopy 3340 1.0 1.1708e-01 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 106691 1.0 2.3814e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
VecAXPY 6670 1.0 1.3779e-01 1.2 1.37e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 11618
VecMAXPY 103340 1.0 1.4617e+01 1.3 3.39e+10 1.0 0.0e+00 0.0e+00 0.0e+00 5 15 0 0 0 5 15 0 0 0 27076
VecScatterBegin 103330 1.0 6.6218e-01 1.5 0.00e+00 0.0 4.3e+06 1.3e+02 0.0e+00 0 0 100 100 0 0 0 100 100 0 0
VecScatterEnd 103330 1.0 8.0669e-01 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 103340 1.0 1.5246e+01 1.3 3.19e+09 1.0 0.0e+00 0.0e+00 1.0e+05 5 1 0 0 47 5 1 0 0 51 2440
KSPGMRESOrthog 100000 1.0 9.9719e+01 1.5 6.37e+10 1.0 0.0e+00 0.0e+00 1.0e+05 32 29 0 0 45 32 29 0 0 49 7456
KSPSetUp 11 1.0 4.6325e-04 4.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 10 1.0 2.5568e+02 1.0 2.20e+11 1.0 4.3e+06 1.3e+02 2.0e+05 98 100 100 100 92 98 100 100 100 100 10037
PCSetUp 20 1.0 1.0517e-01 1.1 6.72e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 7451
PCSetUpOnBlocks 10 1.0 1.7019e-02 1.5 6.72e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 4605
PCApply 103340 1.0 9.3555e+01 1.2 7.54e+10 1.0 0.0e+00 0.0e+00 0.0e+00 33 34 0 0 0 33 34 0 0 0 9402
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 40 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 13448 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.19209e-07
Average time for MPI_Barrier(): 8.86917e-06
Average time for zero size MPI_Send(): 2.32458e-06
[option table and build configuration identical to the first summary above; omitted here]
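Note that the 12-processor summaries above are cumulative snapshots of a single run: the KSPSolve count grows by one from each summary to the next, reaching 10 in the last one, and the timestamps are only seconds apart, so the log was evidently dumped after each solve rather than once at program exit. A sketch of how such repeated dumps can be produced (PetscLogView() is the PETSc 3.6 call; logging must already be active, e.g. via -log_summary or PetscLogBegin(); the loop bound of 10 simply mirrors the run above):

    #include <petscksp.h>

    /* Solve repeatedly and print a cumulative performance summary each time. */
    static PetscErrorCode solve_and_log(KSP ksp, Vec b, Vec x)
    {
      PetscErrorCode ierr;
      PetscInt       i;

      for (i = 0; i < 10; i++) {
        ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
        /* Event counters accumulate, so each report includes all prior solves. */
        ierr = PetscLogView(PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
      }
      return 0;
    }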
-------------- next part --------------
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 18 processors, by jaabell Wed Jul 1 13:54:59 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 3.717e-01 1.01553 3.688e-01
Objects: 3.900e+01 1.00000 3.900e+01
Flops: 1.161e+07 1.06173 1.129e+07 2.031e+08
Flops/sec: 3.164e+07 1.06524 3.060e+07 5.509e+08
MPI Messages: 5.400e+01 2.84211 3.503e+01 6.305e+02
MPI Message Lengths: 2.412e+04 3.68807 4.487e+02 2.829e+05
MPI Reductions: 3.000e+01 1.00000
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 3.6875e-01 100.0% 2.0314e+08 100.0% 6.305e+02 100.0% 4.487e+02 100.0% 2.900e+01 96.7%
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 6 1.0 5.7786e-03 2.4 2.92e+06 1.1 4.0e+02 1.3e+02 0.0e+00 1 25 63 18 0 1 25 63 18 0 8846
MatSolve 7 1.0 4.8840e-03 1.6 3.40e+06 1.1 0.0e+00 0.0e+00 0.0e+00 1 29 0 0 0 1 29 0 0 0 12178
MatLUFactorNum 1 1.0 8.3728e-03 1.6 4.48e+06 1.1 0.0e+00 0.0e+00 0.0e+00 2 39 0 0 0 2 39 0 0 0 9347
MatILUFactorSym 1 1.0 5.3070e-03 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
MatAssemblyBegin 1 1.0 1.4017e-03 2.6 0.00e+00 0.0 1.0e+02 2.2e+03 2.0e+00 0 0 16 81 7 0 0 16 81 7 0
MatAssemblyEnd 1 1.0 7.7770e-02 1.0 0.00e+00 0.0 1.3e+02 3.3e+01 9.0e+00 21 0 21 2 30 21 0 21 2 31 0
MatGetRowIJ 1 1.0 1.0014e-05 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 4.1127e-04 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 1 1.0 4.6177e-03 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
VecMDot 6 1.0 5.7666e-03 13.8 2.88e+05 1.1 0.0e+00 0.0e+00 6.0e+00 1 2 0 0 20 1 2 0 0 21 874
VecNorm 7 1.0 8.9319e-03 25.5 9.59e+04 1.1 0.0e+00 0.0e+00 7.0e+00 1 1 0 0 23 1 1 0 0 24 188
VecScale 7 1.0 9.1314e-05 2.1 4.80e+04 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 9199
VecCopy 1 1.0 6.4850e-05 3.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 10 1.0 1.7023e-04 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 1 1.0 2.6941e-05 2.6 1.37e+04 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 8908
VecMAXPY 7 1.0 2.6107e-04 2.3 3.70e+05 1.1 0.0e+00 0.0e+00 0.0e+00 0 3 0 0 0 0 3 0 0 0 24821
VecScatterBegin 6 1.0 8.4639e-05 1.9 0.00e+00 0.0 4.0e+02 1.3e+02 0.0e+00 0 0 63 18 0 0 0 63 18 0 0
VecScatterEnd 6 1.0 2.1577e-04 6.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 7 1.0 9.0244e-03 18.8 1.44e+05 1.1 0.0e+00 0.0e+00 7.0e+00 2 1 0 0 23 2 1 0 0 24 279
KSPGMRESOrthog 6 1.0 5.9283e-03 9.4 5.76e+05 1.1 0.0e+00 0.0e+00 6.0e+00 1 5 0 0 20 1 5 0 0 21 1700
KSPSetUp 2 1.0 9.8395e-04 10.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 1 1.0 3.2758e-02 1.3 1.16e+07 1.1 4.0e+02 1.3e+02 1.3e+01 8 100 63 18 43 8 100 63 18 45 6201
PCSetUp 2 1.0 1.3752e-02 1.6 4.48e+06 1.1 0.0e+00 0.0e+00 0.0e+00 3 39 0 0 0 3 39 0 0 0 5691
PCSetUpOnBlocks 1 1.0 1.3273e-02 1.6 4.48e+06 1.1 0.0e+00 0.0e+00 0.0e+00 3 39 0 0 0 3 39 0 0 0 5896
PCApply 7 1.0 5.2822e-03 1.6 3.40e+06 1.1 0.0e+00 0.0e+00 0.0e+00 1 29 0 0 0 1 29 0 0 0 11260
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 21 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 10056 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.66893e-07
Average time for MPI_Barrier(): 1.3876e-05
Average time for zero size MPI_Send(): 0.00168145
[option table and build configuration identical to the first summary above; omitted here]
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 18 processors, by jaabell Wed Jul 1 13:54:59 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 7.851e-01 1.00891 7.810e-01
Objects: 3.900e+01 1.00000 3.900e+01
Flops: 2.691e+07 1.06156 2.615e+07 4.706e+08
Flops/sec: 3.448e+07 1.06178 3.348e+07 6.026e+08
MPI Messages: 1.150e+02 2.80488 7.472e+01 1.345e+03
MPI Message Lengths: 4.966e+04 3.56270 4.359e+02 5.862e+05
MPI Reductions: 1.991e+03 1.00000
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 7.3103e-01 93.6% 4.7064e+08 100.0% 1.327e+03 98.7% 4.359e+02 100.0% 5.200e+01 2.6%
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 15 1.0 1.2228e-02 2.3 7.30e+06 1.1 9.9e+02 1.3e+02 0.0e+00 1 27 74 21 0 1 27 75 21 0 10451
MatSolve 17 1.0 1.1958e-02 1.7 8.26e+06 1.1 0.0e+00 0.0e+00 0.0e+00 1 31 0 0 0 1 31 0 0 0 12080
MatLUFactorNum 2 1.0 2.5573e-02 2.3 8.96e+06 1.1 0.0e+00 0.0e+00 0.0e+00 2 33 0 0 0 2 33 0 0 0 6121
MatILUFactorSym 1 1.0 5.3070e-03 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0
MatAssemblyBegin 2 1.0 1.7176e-03 2.0 0.00e+00 0.0 2.0e+02 2.2e+03 4.0e+00 0 0 15 78 0 0 0 15 78 8 0
MatAssemblyEnd 2 1.0 7.9341e-02 1.0 0.00e+00 0.0 1.3e+02 3.3e+01 1.1e+01 10 0 10 1 1 11 0 10 1 21 0
MatGetRowIJ 1 1.0 1.0014e-05 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 4.1127e-04 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 2 1.0 6.2950e-03 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
VecMDot 15 1.0 1.6722e-02 3.1 9.05e+05 1.1 0.0e+00 0.0e+00 1.5e+01 1 3 0 0 1 1 3 0 0 29 947
VecNorm 17 1.0 2.5777e-02 3.7 2.33e+05 1.1 0.0e+00 0.0e+00 1.7e+01 2 1 0 0 1 2 1 0 0 33 158
VecScale 17 1.0 1.8477e-04 1.8 1.17e+05 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 11040
VecCopy 2 1.0 1.6999e-04 5.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 22 1.0 1.8678e-03 7.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 2 1.0 5.4359e-05 2.2 2.74e+04 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 8830
VecMAXPY 17 1.0 5.8579e-04 1.9 1.11e+06 1.1 0.0e+00 0.0e+00 0.0e+00 0 4 0 0 0 0 4 0 0 0 33186
VecScatterBegin 15 1.0 2.3293e-04 2.8 0.00e+00 0.0 9.9e+02 1.3e+02 0.0e+00 0 0 74 21 0 0 0 75 21 0 0
VecScatterEnd 15 1.0 2.9254e-04 3.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 17 1.0 2.5988e-02 3.6 3.50e+05 1.1 0.0e+00 0.0e+00 1.7e+01 2 1 0 0 1 2 1 0 0 33 235
KSPGMRESOrthog 15 1.0 1.7086e-02 3.1 1.81e+06 1.1 0.0e+00 0.0e+00 1.5e+01 1 7 0 0 1 1 7 0 0 29 1854
KSPSetUp 3 1.0 9.8419e-04 10.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 2 1.0 7.6287e-02 1.3 2.69e+07 1.1 9.9e+02 1.3e+02 3.2e+01 9 100 74 21 2 9 100 75 21 62 6169
PCSetUp 4 1.0 3.0286e-02 2.0 8.96e+06 1.1 0.0e+00 0.0e+00 0.0e+00 3 33 0 0 0 3 33 0 0 0 5168
PCSetUpOnBlocks 2 1.0 1.3275e-02 1.6 4.48e+06 1.1 0.0e+00 0.0e+00 0.0e+00 1 17 0 0 0 2 17 0 0 0 5896
PCApply 17 1.0 3.0767e-02 2.3 1.27e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 47 0 0 0 3 47 0 0 0 7239
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 21 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 10056 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.43051e-07
Average time for MPI_Barrier(): 1.45912e-05
Average time for zero size MPI_Send(): 2.38419e-06
#PETSc Option Table entries:
-info petsc_info.txt
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --with-x=0 --with-debugging=0 COPTFLAGS="-O3 -march=native -mtune=native" CXXOPTFLAGS="-O3 -march=native -mtune=native" FOPTFLAGS="-O3 -march=native -mtune=native" --with-shared-libraries=0 --with-cxx=mpic++ --with-cc=mpicc --with-fc=mpif77 --with-blacs=1 --download-blacs=yes --with-scalapack=1 --download-scalapack=yes --with-spai=1 --download-spai=yes --with-hypre=0 --download-hypre=no --with-plapack=1 --download-plapack=yes --download-fblaslapack=1 --download-plapack=1 --with-metis-dir=/home/jaabell/Repositories/essi_dependencies/metis-4.0.2_install --with-parmetis-dir=/home/jaabell/Repositories/essi_dependencies/parmetis-4.0.2_install --with-superlu_dist=1 --download-superlu_dist=yes --download-superlu=yes --download-superlu_dist --with-spooles=1 --download-spooles make -j 24
-----------------------------------------
Libraries compiled on Tue Jun 30 15:38:16 2015 on nagoyqatsi.engr.ucdavis.edu
Machine characteristics: Linux-3.13.0-30-generic-x86_64-with-Ubuntu-14.04-trusty
Using PETSc directory: /home/jaabell/Repositories/essi_dependencies/petsc-3.6.0
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------
Using C compiler: mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 -march=native -mtune=native ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif77 -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3 -march=native -mtune=native ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/arch-linux2-c-opt/include -I/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/include -I/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/include -I/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/arch-linux2-c-opt/include -I/home/jaabell/Repositories/essi_dependencies/parmetis-4.0.2_install/include -I/home/jaabell/Repositories/essi_dependencies/metis-4.0.2_install/include -I/home/jaabell/.mvapich/include
-----------------------------------------
Using C linker: mpicc
Using Fortran linker: mpif77
Using libraries: -Wl,-rpath,/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/arch-linux2-c-opt/lib -L/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/arch-linux2-c-opt/lib -L/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/arch-linux2-c-opt/lib -lsuperlu_4.3 -lspai -lscalapack -lsuperlu_dist_4.0 -lflapack -lfblas -Wl,-rpath,/home/jaabell/Repositories/essi_dependencies/parmetis-4.0.2_install/lib -L/home/jaabell/Repositories/essi_dependencies/parmetis-4.0.2_install/lib -lparmetis -Wl,-rpath,/home/jaabell/Repositories/essi_dependencies/metis-4.0.2_install/lib -L/home/jaabell/Repositories/essi_dependencies/metis-4.0.2_install/lib -lmetis -lhwloc -lpthread -lssl -lcrypto -lm -L/home/jaabell/.mvapich/lib -L/usr/lib/gcc/x86_64-linux-gnu/4.9 -L/usr/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -lmpifort -lgfortran -lm -Wl,-rpath,/home/jaabell/.mvapich/lib -lgfortran -lm -lquadmath -lm -lmpicxx -lstdc++ -L/home/jaabell/.mvapich/lib -L/usr/lib/gcc/x86_64-linux-gnu/4.9 -L/usr/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -ldl -Wl,-rpath,/home/jaabell/.mvapich/lib -lmpi -lgcc_s -ldl
-----------------------------------------
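[Note: the summaries that follow are successive dumps from the same 18-processor run, so event counts and times accumulate from one summary to the next (e.g. KSPSolve appears 3, 4, 5, ... times). Assuming a standard PETSc 3.6 build, output of this form comes from the logging layer, either by running with the -log_summary option, e.g.

    mpirun -np 18 ./app -log_summary

(the application name is hypothetical), or by calling PetscLogView(PETSC_VIEWER_STDOUT_WORLD) from the code after each solve.]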
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 18 processors, by jaabell Wed Jul 1 13:55:00 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 1.159e+00 1.00558 1.155e+00
Objects: 3.900e+01 1.00000 3.900e+01
Flops: 4.220e+07 1.06152 4.101e+07 7.381e+08
Flops/sec: 3.653e+07 1.06064 3.550e+07 6.391e+08
MPI Messages: 1.760e+02 2.79365 1.144e+02 2.060e+03
MPI Message Lengths: 7.521e+04 3.52427 4.319e+02 8.895e+05
MPI Reductions: 3.952e+03 1.00000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flops
and VecAXPY() for complex vectors of length N --> 8N flops
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 1.0843e+00 93.9% 7.3815e+08 100.0% 2.024e+03 98.3% 4.319e+02 100.0% 7.500e+01 1.9%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
Count: number of times phase was executed
Time and Flops: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
Avg. len: average message length (bytes)
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
%T - percent time in this phase %F - percent flops in this phase
%M - percent messages in this phase %L - percent message lengths in this phase
%R - percent reductions in this phase
Total Mflop/s: 1e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
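[Note: the 'Main Stage' heading in the event tables is PETSc's default logging stage; the legend above mentions PetscLogStagePush()/PetscLogStagePop() for user-defined stages. A minimal sketch, assuming a standard PETSc 3.6 installation (the stage name and surrounding program are illustrative, not from the original post):

    #include <petscsys.h>

    int main(int argc, char **argv)
    {
      PetscLogStage stage;                         /* handle for a user-defined stage     */
      PetscInitialize(&argc, &argv, NULL, NULL);   /* starts PETSc and its logging        */
      PetscLogStageRegister("Assembly", &stage);   /* this name appears in the summary    */
      PetscLogStagePush(stage);                    /* events from here on bill this stage */
      /* ... matrix/vector assembly or other work to be timed separately ... */
      PetscLogStagePop();                          /* return to the default Main Stage    */
      PetscFinalize();                             /* with -log_summary, the report prints*/
      return 0;
    }

Events executed between the push and the pop are then reported under a separate 'Event Stage 1: Assembly' section in summaries like the ones below.]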
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 24 1.0 1.9367e-02 2.4 1.17e+07 1.1 1.6e+03 1.3e+02 0.0e+00 1 28 77 22 0 1 28 78 22 0 10558
MatSolve 27 1.0 1.9733e-02 1.8 1.31e+07 1.1 0.0e+00 0.0e+00 0.0e+00 1 31 0 0 0 1 31 0 0 0 11626
MatLUFactorNum 3 1.0 3.5394e-02 2.1 1.34e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 32 0 0 0 2 32 0 0 0 6634
MatILUFactorSym 1 1.0 5.3070e-03 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 3 1.0 3.0434e-03 2.7 0.00e+00 0.0 3.1e+02 2.2e+03 6.0e+00 0 0 15 77 0 0 0 15 77 8 0
MatAssemblyEnd 3 1.0 1.2823e-01 1.1 0.00e+00 0.0 1.3e+02 3.3e+01 1.3e+01 10 0 6 0 0 11 0 7 0 17 0
MatGetRowIJ 1 1.0 1.0014e-05 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 4.1127e-04 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 3 1.0 7.3771e-03 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0
VecMDot 24 1.0 2.7449e-02 3.8 1.52e+06 1.1 0.0e+00 0.0e+00 2.4e+01 1 4 0 0 1 1 4 0 0 32 970
VecNorm 27 1.0 3.3906e-02 3.1 3.70e+05 1.1 0.0e+00 0.0e+00 2.7e+01 2 1 0 0 1 2 1 0 0 36 191
VecScale 27 1.0 2.8515e-04 1.9 1.85e+05 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 11362
VecCopy 3 1.0 1.8811e-04 3.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 34 1.0 2.1212e-03 5.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 3 1.0 7.0095e-05 1.9 4.11e+04 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 10272
VecMAXPY 27 1.0 1.2825e-03 2.4 1.85e+06 1.1 0.0e+00 0.0e+00 0.0e+00 0 4 0 0 0 0 4 0 0 0 25264
VecScatterBegin 24 1.0 3.0899e-04 2.6 0.00e+00 0.0 1.6e+03 1.3e+02 0.0e+00 0 0 77 22 0 0 0 78 22 0 0
VecScatterEnd 24 1.0 3.6216e-04 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 27 1.0 3.4173e-02 3.0 5.55e+05 1.1 0.0e+00 0.0e+00 2.7e+01 2 1 0 0 1 2 1 0 0 36 284
KSPGMRESOrthog 24 1.0 2.8013e-02 3.6 3.04e+06 1.1 0.0e+00 0.0e+00 2.4e+01 1 7 0 0 1 1 7 0 0 32 1902
KSPSetUp 4 1.0 9.8443e-04 10.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 3 1.0 1.0952e-01 1.2 4.22e+07 1.1 1.6e+03 1.3e+02 5.1e+01 9 100 77 22 1 9 100 78 22 68 6740
PCSetUp 6 1.0 4.0119e-02 1.9 1.34e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 32 0 0 0 2 32 0 0 0 5852
PCSetUpOnBlocks 3 1.0 1.3276e-02 1.6 4.48e+06 1.1 0.0e+00 0.0e+00 0.0e+00 1 11 0 0 0 1 11 0 0 0 5895
PCApply 27 1.0 4.7019e-02 2.0 2.21e+07 1.1 0.0e+00 0.0e+00 0.0e+00 3 52 0 0 0 3 52 0 0 0 8208
------------------------------------------------------------------------------------------------------------------------
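[Note: a quick arithmetic check of the Mflop/s column against the legend's formula (with the 1e-6 factor), using the MatSolve row above: the flops summed over the 18 processes are roughly 18 x 1.27e7 = 2.29e8, so 1e-6 * 2.29e8 / 1.9733e-02 s gives about 11,600 Mflop/s, consistent with the reported 11626.]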
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 21 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 10056 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.43051e-07
Average time for MPI_Barrier(): 1.36852e-05
Average time for zero size MPI_Send(): 2.39743e-06
[Option table, configure options, compiler, and library information identical to the first summary above.]
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 18 processors, by jaabell Wed Jul 1 13:55:00 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 1.479e+00 1.00365 1.476e+00
Objects: 3.900e+01 1.00000 3.900e+01
Flops: 5.877e+07 1.06147 5.711e+07 1.028e+09
Flops/sec: 3.978e+07 1.06013 3.869e+07 6.964e+08
MPI Messages: 2.430e+02 2.79310 1.578e+02 2.840e+03
MPI Message Lengths: 1.013e+05 3.48302 4.229e+02 1.201e+06
MPI Reductions: 5.915e+03 1.00000
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 1.3853e+00 93.8% 1.0281e+09 100.0% 2.786e+03 98.1% 4.229e+02 100.0% 1.000e+02 1.7%
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 34 1.0 2.6043e-02 2.2 1.65e+07 1.1 2.2e+03 1.3e+02 0.0e+00 1 28 79 23 0 1 28 81 23 0 11122
MatSolve 38 1.0 2.6503e-02 1.7 1.85e+07 1.1 0.0e+00 0.0e+00 0.0e+00 1 31 0 0 0 2 31 0 0 0 12183
MatLUFactorNum 4 1.0 4.4976e-02 2.0 1.79e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 30 0 0 0 2 30 0 0 0 6960
MatILUFactorSym 1 1.0 5.3070e-03 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 4 1.0 3.5584e-03 2.6 0.00e+00 0.0 4.1e+02 2.2e+03 8.0e+00 0 0 14 76 0 0 0 15 76 8 0
MatAssemblyEnd 4 1.0 1.2939e-01 1.1 0.00e+00 0.0 1.3e+02 3.3e+01 1.5e+01 8 0 5 0 0 9 0 5 0 15 0
MatGetRowIJ 1 1.0 1.0014e-05 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 4.1127e-04 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 4 1.0 8.7659e-03 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 34 1.0 3.4745e-02 3.8 2.28e+06 1.1 0.0e+00 0.0e+00 3.4e+01 1 4 0 0 1 1 4 0 0 34 1147
VecNorm 38 1.0 4.1194e-02 3.1 5.21e+05 1.1 0.0e+00 0.0e+00 3.8e+01 2 1 0 0 1 2 1 0 0 38 221
VecScale 38 1.0 4.1938e-04 1.9 2.60e+05 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 10873
VecCopy 4 1.0 2.0719e-04 3.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 47 1.0 2.5659e-03 4.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 4 1.0 9.2745e-05 1.8 5.48e+04 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 10351
VecMAXPY 38 1.0 1.5228e-03 2.0 2.74e+06 1.1 0.0e+00 0.0e+00 0.0e+00 0 5 0 0 0 0 5 0 0 0 31521
VecScatterBegin 34 1.0 3.9530e-04 2.5 0.00e+00 0.0 2.2e+03 1.3e+02 0.0e+00 0 0 79 23 0 0 0 81 23 0 0
VecScatterEnd 34 1.0 4.3344e-04 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 38 1.0 4.1541e-02 3.0 7.81e+05 1.1 0.0e+00 0.0e+00 3.8e+01 2 1 0 0 1 2 1 0 0 38 329
KSPGMRESOrthog 34 1.0 3.5556e-02 3.6 4.55e+06 1.1 0.0e+00 0.0e+00 3.4e+01 1 8 0 0 1 1 8 0 0 34 2241
KSPSetUp 5 1.0 9.8467e-04 10.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 4 1.0 1.3956e-01 1.2 5.88e+07 1.1 2.2e+03 1.3e+02 7.2e+01 9 100 79 23 1 9 100 81 23 72 7366
PCSetUp 8 1.0 4.9715e-02 1.9 1.79e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 30 0 0 0 2 30 0 0 0 6297
PCSetUpOnBlocks 4 1.0 1.3277e-02 1.6 4.48e+06 1.1 0.0e+00 0.0e+00 0.0e+00 1 8 0 0 0 1 8 0 0 0 5895
PCApply 38 1.0 6.2709e-02 1.8 3.19e+07 1.1 0.0e+00 0.0e+00 0.0e+00 3 54 0 0 0 3 54 0 0 0 8893
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 21 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 10056 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.43051e-07
Average time for MPI_Barrier(): 1.28746e-05
Average time for zero size MPI_Send(): 2.43717e-06
[Option table, configure options, compiler, and library information identical to the first summary above.]
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 18 processors, by jaabell Wed Jul 1 13:55:00 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 1.868e+00 1.00356 1.864e+00
Objects: 3.900e+01 1.00000 3.900e+01
Flops: 7.534e+07 1.06145 7.322e+07 1.318e+09
Flops/sec: 4.038e+07 1.06008 3.928e+07 7.070e+08
MPI Messages: 3.100e+02 2.79279 2.011e+02 3.620e+03
MPI Message Lengths: 1.274e+05 3.45912 4.178e+02 1.513e+06
MPI Reductions: 7.878e+03 1.00000
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 1.7546e+00 94.1% 1.3180e+09 100.0% 3.548e+03 98.0% 4.178e+02 100.0% 1.250e+02 1.6%
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 44 1.0 3.6276e-02 2.1 2.14e+07 1.1 2.9e+03 1.3e+02 0.0e+00 1 28 80 24 0 2 28 82 24 0 10333
MatSolve 49 1.0 3.5264e-02 1.7 2.38e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 32 0 0 0 2 32 0 0 0 11807
MatLUFactorNum 5 1.0 5.3989e-02 1.8 2.24e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 30 0 0 0 2 30 0 0 0 7248
MatILUFactorSym 1 1.0 5.3070e-03 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 5 1.0 1.8149e-02 11.2 0.00e+00 0.0 5.1e+02 2.2e+03 1.0e+01 0 0 14 76 0 0 0 14 76 8 0
MatAssemblyEnd 5 1.0 1.6462e-01 1.2 0.00e+00 0.0 1.3e+02 3.3e+01 1.7e+01 8 0 4 0 0 9 0 4 0 14 0
MatGetRowIJ 1 1.0 1.0014e-05 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 4.1127e-04 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 5 1.0 9.7947e-03 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 44 1.0 4.4258e-02 3.6 3.03e+06 1.1 0.0e+00 0.0e+00 4.4e+01 1 4 0 0 1 2 4 0 0 35 1198
VecNorm 49 1.0 6.2629e-02 4.5 6.72e+05 1.1 0.0e+00 0.0e+00 4.9e+01 2 1 0 0 1 2 1 0 0 39 188
VecScale 49 1.0 5.4622e-04 1.8 3.36e+05 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 10765
VecCopy 5 1.0 2.5487e-04 3.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 60 1.0 3.1738e-03 3.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 5 1.0 1.1301e-04 1.8 6.85e+04 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 10618
VecMAXPY 49 1.0 1.9016e-03 1.6 3.63e+06 1.1 0.0e+00 0.0e+00 0.0e+00 0 5 0 0 0 0 5 0 0 0 33445
VecScatterBegin 44 1.0 4.8709e-04 2.2 0.00e+00 0.0 2.9e+03 1.3e+02 0.0e+00 0 0 80 24 0 0 0 82 24 0 0
VecScatterEnd 44 1.0 5.1999e-04 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 49 1.0 6.3123e-02 4.4 1.01e+06 1.1 0.0e+00 0.0e+00 4.9e+01 2 1 0 0 1 2 1 0 0 39 279
KSPGMRESOrthog 44 1.0 4.5371e-02 3.3 6.06e+06 1.1 0.0e+00 0.0e+00 4.4e+01 1 8 0 0 1 2 8 0 0 35 2338
KSPSetUp 6 1.0 9.8491e-04 10.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 5 1.0 1.9240e-01 1.3 7.53e+07 1.1 2.9e+03 1.3e+02 9.3e+01 9 100 80 24 1 10 100 82 24 74 6850
PCSetUp 10 1.0 5.8741e-02 1.7 2.24e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 30 0 0 0 2 30 0 0 0 6662
PCSetUpOnBlocks 5 1.0 1.3278e-02 1.6 4.48e+06 1.1 0.0e+00 0.0e+00 0.0e+00 1 6 0 0 0 1 6 0 0 0 5894
PCApply 49 1.0 8.0155e-02 1.6 4.17e+07 1.1 0.0e+00 0.0e+00 0.0e+00 3 55 0 0 0 4 55 0 0 0 9100
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 21 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 10056 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.43051e-07
Average time for MPI_Barrier(): 1.37329e-05
Average time for zero size MPI_Send(): 2.34445e-06
[Option table, configure options, compiler, and library information identical to the first summary above.]
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 18 processors, by jaabell Wed Jul 1 13:55:01 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 2.202e+00 1.00244 2.199e+00
Objects: 3.900e+01 1.00000 3.900e+01
Flops: 8.938e+07 1.06146 8.686e+07 1.564e+09
Flops/sec: 4.060e+07 1.06028 3.949e+07 7.109e+08
MPI Messages: 3.650e+02 2.78626 2.372e+02 4.269e+03
MPI Message Lengths: 1.524e+05 3.47252 4.235e+02 1.808e+06
MPI Reductions: 9.837e+03 1.00000
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 2.0674e+00 94.0% 1.5636e+09 100.0% 4.179e+03 97.9% 4.235e+02 100.0% 1.460e+02 1.5%
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 52 1.0 4.3921e-02 2.1 2.53e+07 1.1 3.4e+03 1.3e+02 0.0e+00 1 28 80 24 0 2 28 82 24 0 10086
MatSolve 58 1.0 3.8979e-02 1.6 2.82e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 32 0 0 0 2 32 0 0 0 12643
MatLUFactorNum 6 1.0 6.1010e-02 1.7 2.69e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 30 0 0 0 2 30 0 0 0 7697
MatILUFactorSym 1 1.0 5.3070e-03 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 6 1.0 1.8684e-02 10.2 0.00e+00 0.0 6.1e+02 2.2e+03 1.2e+01 0 0 14 76 0 0 0 15 76 8 0
MatAssemblyEnd 6 1.0 1.6610e-01 1.2 0.00e+00 0.0 1.3e+02 3.3e+01 1.9e+01 7 0 3 0 0 8 0 3 0 13 0
MatGetRowIJ 1 1.0 1.0014e-05 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 4.1127e-04 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 6 1.0 1.1241e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 52 1.0 5.1157e-02 3.6 3.52e+06 1.1 0.0e+00 0.0e+00 5.2e+01 1 4 0 0 1 1 4 0 0 36 1206
VecNorm 58 1.0 6.8662e-02 4.9 7.95e+05 1.1 0.0e+00 0.0e+00 5.8e+01 2 1 0 0 1 2 1 0 0 40 203
VecScale 58 1.0 6.3705e-04 1.8 3.97e+05 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 10925
VecCopy 6 1.0 2.8062e-04 3.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 71 1.0 3.3255e-03 3.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 6 1.0 1.3876e-04 1.7 8.22e+04 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 10378
VecMAXPY 58 1.0 2.1372e-03 1.6 4.24e+06 1.1 0.0e+00 0.0e+00 0.0e+00 0 5 0 0 0 0 5 0 0 0 34700
VecScatterBegin 52 1.0 5.4979e-04 2.2 0.00e+00 0.0 3.4e+03 1.3e+02 0.0e+00 0 0 80 24 0 0 0 82 24 0 0
VecScatterEnd 52 1.0 5.8532e-04 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 58 1.0 6.9216e-02 4.7 1.19e+06 1.1 0.0e+00 0.0e+00 5.8e+01 2 1 0 0 1 2 1 0 0 40 302
KSPGMRESOrthog 52 1.0 5.2438e-02 3.3 7.04e+06 1.1 0.0e+00 0.0e+00 5.2e+01 1 8 0 0 1 2 8 0 0 36 2352
KSPSetUp 7 1.0 9.8515e-04 10.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 6 1.0 2.2081e-01 1.3 8.94e+07 1.1 3.4e+03 1.3e+02 1.1e+02 9 100 80 24 1 9 100 82 24 75 7081
PCSetUp 12 1.0 6.5772e-02 1.6 2.69e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 30 0 0 0 2 30 0 0 0 7139
PCSetUpOnBlocks 6 1.0 1.3279e-02 1.6 4.48e+06 1.1 0.0e+00 0.0e+00 0.0e+00 1 5 0 0 0 1 5 0 0 0 5894
PCApply 58 1.0 9.2465e-02 1.5 5.06e+07 1.1 0.0e+00 0.0e+00 0.0e+00 3 57 0 0 0 4 57 0 0 0 9562
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 21 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 10056 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.43051e-07
Average time for MPI_Barrier(): 1.37329e-05
Average time for zero size MPI_Send(): 2.29147e-06
[Option table, configure options, compiler, and library information identical to the first summary above.]
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 18 processors, by jaabell Wed Jul 1 13:55:01 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 2.527e+00 1.00220 2.524e+00
Objects: 3.900e+01 1.00000 3.900e+01
Flops: 1.034e+08 1.06147 1.005e+08 1.809e+09
Flops/sec: 4.094e+07 1.06035 3.982e+07 7.168e+08
MPI Messages: 4.200e+02 2.78146 2.732e+02 4.918e+03
MPI Message Lengths: 1.774e+05 3.48221 4.276e+02 2.103e+06
MPI Reductions: 1.180e+04 1.00000
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 2.3704e+00 93.9% 1.8091e+09 100.0% 4.810e+03 97.8% 4.276e+02 100.0% 1.670e+02 1.4%
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 60 1.0 5.2383e-02 2.1 2.92e+07 1.1 4.0e+03 1.3e+02 0.0e+00 1 28 81 24 0 2 28 82 24 0 9758
MatSolve 67 1.0 4.3626e-02 1.5 3.26e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 31 0 0 0 2 31 0 0 0 13049
MatLUFactorNum 7 1.0 6.7851e-02 1.5 3.13e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 30 0 0 0 2 30 0 0 0 8074
MatILUFactorSym 1 1.0 5.3070e-03 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 7 1.0 1.9327e-02 8.9 0.00e+00 0.0 7.1e+02 2.2e+03 1.4e+01 0 0 15 76 0 0 0 15 76 8 0
MatAssemblyEnd 7 1.0 1.6972e-01 1.2 0.00e+00 0.0 1.3e+02 3.3e+01 2.1e+01 6 0 3 0 0 7 0 3 0 13 0
MatGetRowIJ 1 1.0 1.0014e-05 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 4.1127e-04 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 7 1.0 1.2034e-02 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 60 1.0 5.6120e-02 3.7 4.02e+06 1.1 0.0e+00 0.0e+00 6.0e+01 1 4 0 0 1 1 4 0 0 36 1253
VecNorm 67 1.0 7.5908e-02 4.9 9.18e+05 1.1 0.0e+00 0.0e+00 6.7e+01 2 1 0 0 1 2 1 0 0 40 212
VecScale 67 1.0 7.0763e-04 1.7 4.59e+05 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 11362
VecCopy 7 1.0 3.0398e-04 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 82 1.0 3.4907e-03 3.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 7 1.0 1.8215e-04 1.9 9.59e+04 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 9223
VecMAXPY 67 1.0 2.3444e-03 1.6 4.84e+06 1.1 0.0e+00 0.0e+00 0.0e+00 0 5 0 0 0 0 5 0 0 0 36138
VecScatterBegin 60 1.0 6.1297e-04 2.1 0.00e+00 0.0 4.0e+03 1.3e+02 0.0e+00 0 0 81 24 0 0 0 82 24 0 0
VecScatterEnd 60 1.0 6.3133e-04 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 67 1.0 7.6524e-02 4.8 1.38e+06 1.1 0.0e+00 0.0e+00 6.7e+01 2 1 0 0 1 2 1 0 0 40 315
KSPGMRESOrthog 60 1.0 5.7593e-02 3.4 8.03e+06 1.1 0.0e+00 0.0e+00 6.0e+01 1 8 0 0 1 2 8 0 0 36 2442
KSPSetUp 8 1.0 9.8562e-04 10.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 7 1.0 2.5042e-01 1.3 1.03e+08 1.1 4.0e+03 1.3e+02 1.3e+02 9 100 81 24 1 9 100 82 24 76 7224
PCSetUp 14 1.0 7.2622e-02 1.5 3.13e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 30 0 0 0 2 30 0 0 0 7544
PCSetUpOnBlocks 7 1.0 1.3280e-02 1.6 4.48e+06 1.1 0.0e+00 0.0e+00 0.0e+00 0 4 0 0 0 0 4 0 0 0 5893
PCApply 67 1.0 1.0518e-01 1.4 5.94e+07 1.1 0.0e+00 0.0e+00 0.0e+00 3 57 0 0 0 4 57 0 0 0 9877
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 21 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 10056 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.19209e-07
Average time for MPI_Barrier(): 1.41621e-05
Average time for zero size MPI_Send(): 2.27822e-06
[Option table, configure options, compiler, and library information identical to the first summary above.]
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 18 processors, by jaabell Wed Jul 1 13:55:01 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 2.852e+00 1.00183 2.850e+00
Objects: 3.900e+01 1.00000 3.900e+01
Flops: 1.200e+08 1.06145 1.166e+08 2.099e+09
Flops/sec: 4.208e+07 1.06047 4.092e+07 7.366e+08
MPI Messages: 4.870e+02 2.78286 3.166e+02 5.698e+03
MPI Message Lengths: 2.035e+05 3.46732 4.237e+02 2.414e+06
MPI Reductions: 1.376e+04 1.00000
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 2.6758e+00 93.9% 2.0990e+09 100.0% 5.572e+03 97.8% 4.237e+02 100.0% 1.920e+02 1.4%
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 70 1.0 5.7620e-02 1.9 3.41e+07 1.1 4.6e+03 1.3e+02 0.0e+00 2 28 81 24 0 2 28 83 24 0 10350
MatSolve 78 1.0 5.1231e-02 1.5 3.79e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 32 0 0 0 2 32 0 0 0 12937
MatLUFactorNum 8 1.0 7.5051e-02 1.5 3.58e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 30 0 0 0 2 30 0 0 0 8342
MatILUFactorSym 1 1.0 5.3070e-03 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 8 1.0 2.2929e-02 9.6 0.00e+00 0.0 8.2e+02 2.2e+03 1.6e+01 0 0 14 76 0 0 0 15 76 8 0
MatAssemblyEnd 8 1.0 1.7418e-01 1.2 0.00e+00 0.0 1.3e+02 3.3e+01 2.3e+01 6 0 2 0 0 6 0 2 0 12 0
MatGetRowIJ 1 1.0 1.0014e-05 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 4.1127e-04 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 8 1.0 1.2835e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 70 1.0 6.5610e-02 3.0 4.77e+06 1.1 0.0e+00 0.0e+00 7.0e+01 1 4 0 0 1 2 4 0 0 36 1273
VecNorm 78 1.0 8.1122e-02 5.2 1.07e+06 1.1 0.0e+00 0.0e+00 7.8e+01 2 1 0 0 1 2 1 0 0 41 231
VecScale 78 1.0 8.2183e-04 1.7 5.35e+05 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 11389
VecCopy 8 1.0 3.2353e-04 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 95 1.0 3.6633e-03 2.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 8 1.0 2.0409e-04 1.9 1.10e+05 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 9408
VecMAXPY 78 1.0 2.6994e-03 1.6 5.73e+06 1.1 0.0e+00 0.0e+00 0.0e+00 0 5 0 0 0 0 5 0 0 0 37164
VecScatterBegin 70 1.0 7.1168e-04 2.1 0.00e+00 0.0 4.6e+03 1.3e+02 0.0e+00 0 0 81 24 0 0 0 83 24 0 0
VecScatterEnd 70 1.0 7.1335e-04 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 78 1.0 8.1872e-02 4.9 1.60e+06 1.1 0.0e+00 0.0e+00 7.8e+01 2 1 0 0 1 2 1 0 0 41 343
KSPGMRESOrthog 70 1.0 6.7232e-02 2.8 9.54e+06 1.1 0.0e+00 0.0e+00 7.0e+01 2 8 0 0 1 2 8 0 0 36 2484
KSPSetUp 9 1.0 9.8586e-04 10.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 8 1.0 2.8454e-01 1.3 1.20e+08 1.1 4.6e+03 1.3e+02 1.5e+02 9 100 81 24 1 9 100 83 24 77 7377
PCSetUp 16 1.0 7.9836e-02 1.5 3.58e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 30 0 0 0 2 30 0 0 0 7842
PCSetUpOnBlocks 8 1.0 1.3281e-02 1.6 4.48e+06 1.1 0.0e+00 0.0e+00 0.0e+00 0 4 0 0 0 0 4 0 0 0 5893
PCApply 78 1.0 1.2265e-01 1.4 6.92e+07 1.1 0.0e+00 0.0e+00 0.0e+00 4 58 0 0 0 4 58 0 0 0 9870
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 21 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 10056 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.43051e-07
Average time for MPI_Barrier(): 1.18732e-05
Average time for zero size MPI_Send(): 2.17226e-06
#PETSc Option Table entries:
-info petsc_info.txt
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --with-x=0 --with-debugging=0 COPTFLAGS="-O3 -march=native -mtune=native" CXXOPTFLAGS="-O3 -march=native -mtune=native" FOPTFLAGS="-O3 -march=native -mtune=native" --with-shared-libraries=0 --with-cxx=mpic++ --with-cc=mpicc --with-fc=mpif77 --with-blacs=1 --download-blacs=yes --with-scalapack=1 --download-scalapack=yes --with-spai=1 --download-spai=yes --with-hypre=0 --download-hypre=no --with-plapack=1 --download-plapack=yes --download-fblaslapack=1 --download-plapack=1 --with-metis-dir=/home/jaabell/Repositories/essi_dependencies/metis-4.0.2_install --with-parmetis-dir=/home/jaabell/Repositories/essi_dependencies/parmetis-4.0.2_install --with-superlu_dist=1 --download-superlu_dist=yes --download-superlu=yes --download-superlu_dist --with-spooles=1 --download-spooles make -j 24
-----------------------------------------
Libraries compiled on Tue Jun 30 15:38:16 2015 on nagoyqatsi.engr.ucdavis.edu
Machine characteristics: Linux-3.13.0-30-generic-x86_64-with-Ubuntu-14.04-trusty
Using PETSc directory: /home/jaabell/Repositories/essi_dependencies/petsc-3.6.0
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------
Using C compiler: mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 -march=native -mtune=native ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif77 -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3 -march=native -mtune=native ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/arch-linux2-c-opt/include -I/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/include -I/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/include -I/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/arch-linux2-c-opt/include -I/home/jaabell/Repositories/essi_dependencies/parmetis-4.0.2_install/include -I/home/jaabell/Repositories/essi_dependencies/metis-4.0.2_install/include -I/home/jaabell/.mvapich/include
-----------------------------------------
Using C linker: mpicc
Using Fortran linker: mpif77
Using libraries: -Wl,-rpath,/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/arch-linux2-c-opt/lib -L/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/arch-linux2-c-opt/lib -L/home/jaabell/Repositories/essi_dependencies/petsc-3.6.0/arch-linux2-c-opt/lib -lsuperlu_4.3 -lspai -lscalapack -lsuperlu_dist_4.0 -lflapack -lfblas -Wl,-rpath,/home/jaabell/Repositories/essi_dependencies/parmetis-4.0.2_install/lib -L/home/jaabell/Repositories/essi_dependencies/parmetis-4.0.2_install/lib -lparmetis -Wl,-rpath,/home/jaabell/Repositories/essi_dependencies/metis-4.0.2_install/lib -L/home/jaabell/Repositories/essi_dependencies/metis-4.0.2_install/lib -lmetis -lhwloc -lpthread -lssl -lcrypto -lm -L/home/jaabell/.mvapich/lib -L/usr/lib/gcc/x86_64-linux-gnu/4.9 -L/usr/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -lmpifort -lgfortran -lm -Wl,-rpath,/home/jaabell/.mvapich/lib -lgfortran -lm -lquadmath -lm -lmpicxx -lstdc++ -L/home/jaabell/.mvapich/lib -L/usr/lib/gcc/x86_64-linux-gnu/4.9 -L/usr/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -ldl -Wl,-rpath,/home/jaabell/.mvapich/lib -lmpi -lgcc_s -ldl
-----------------------------------------
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 18 processors, by jaabell Wed Jul 1 13:55:02 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 3.198e+00 1.00167 3.196e+00
Objects: 4.900e+01 1.00000 4.900e+01
Flops: 1.420e+08 1.06140 1.380e+08 2.483e+09
Flops/sec: 4.440e+07 1.06062 4.317e+07 7.771e+08
MPI Messages: 5.780e+02 2.79227 3.746e+02 6.742e+03
MPI Message Lengths: 2.319e+05 3.41815 4.092e+02 2.759e+06
MPI Reductions: 1.573e+04 1.00000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flops
and VecAXPY() for complex vectors of length N --> 8N flops
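A worked count behind this convention (an annotation added here, not part of the PETSc output): VecAXPY() computes y <- alpha*x + y, so with N the vector length,

    % real vectors: one multiply and one add per entry
    \mathrm{flops}_{\mathbb{R}} = N\,(1_{\times} + 1_{+}) = 2N
    % complex vectors: alpha*x_i is a complex multiply (4 real multiplies
    % + 2 real adds), and adding y_i is a complex add (2 real adds)
    \mathrm{flops}_{\mathbb{C}} = N\,(4_{\times} + 2_{+} + 2_{+}) = 8N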
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 3.0012e+00 93.9% 2.4834e+09 100.0% 6.598e+03 97.9% 4.092e+02 100.0% 2.250e+02 1.4%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
Count: number of times phase was executed
Time and Flops: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
Avg. len: average message length (bytes)
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
%T - percent time in this phase %F - percent flops in this phase
%M - percent messages in this phase %L - percent message lengths in this phase
%R - percent reductions in this phase
Total Mflop/s: 1e-6 * (sum of flops over all processors)/(max time over all processors)
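The stage mechanism mentioned above is how the "--- Stage ---" columns in the table below get populated; everything in these logs falls into the default "Main Stage". A minimal sketch of registering a custom stage (an illustration against PETSc 3.6, not code from this simulation; the stage name "Assembly" is made up):

    #include <petscsys.h>

    int main(int argc, char **argv)
    {
      PetscLogStage  stage;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);

      /* Register a named stage; it appears as its own
         "--- Event Stage 1: Assembly" section in the log */
      ierr = PetscLogStageRegister("Assembly", &stage);CHKERRQ(ierr);

      ierr = PetscLogStagePush(stage);CHKERRQ(ierr);
      /* ... events triggered here (MatSetValues(), MatAssemblyBegin/End(),
         VecSet(), ...) are attributed to the "Assembly" stage ... */
      ierr = PetscLogStagePop();CHKERRQ(ierr);

      ierr = PetscFinalize();  /* with -log_summary, the tables print here */
      return ierr;
    }

Run with, e.g., mpirun -np 18 ./app -log_summary to get tables like the ones below (-log_summary is the petsc-3.6 spelling; newer releases use -log_view).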
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 84 1.0 6.6434e-02 1.8 4.09e+07 1.1 5.5e+03 1.3e+02 0.0e+00 2 29 82 25 0 2 29 84 25 0 10772
MatSolve 93 1.0 5.9960e-02 1.4 4.52e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 32 0 0 0 2 32 0 0 0 13179
MatLUFactorNum 9 1.0 8.4842e-02 1.5 4.03e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 28 0 0 0 2 28 0 0 0 8302
MatILUFactorSym 1 1.0 5.3070e-03 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 9 1.0 2.3517e-02 8.8 0.00e+00 0.0 9.2e+02 2.2e+03 1.8e+01 0 0 14 75 0 0 0 14 75 8 0
MatAssemblyEnd 9 1.0 1.7687e-01 1.2 0.00e+00 0.0 1.3e+02 3.3e+01 2.5e+01 5 0 2 0 0 6 0 2 0 11 0
MatGetRowIJ 1 1.0 1.0014e-05 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 4.1127e-04 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 9 1.0 1.3521e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 84 1.0 7.7051e-02 2.8 6.21e+06 1.1 0.0e+00 0.0e+00 8.4e+01 2 4 0 0 1 2 4 0 0 37 1411
VecNorm 93 1.0 8.5851e-02 5.2 1.27e+06 1.1 0.0e+00 0.0e+00 9.3e+01 2 1 0 0 1 2 1 0 0 41 260
VecScale 93 1.0 9.5606e-04 1.7 6.37e+05 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 11673
VecCopy 9 1.0 3.3855e-04 2.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 112 1.0 4.2744e-03 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 9 1.0 2.2507e-04 1.8 1.23e+05 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 9597
VecMAXPY 93 1.0 3.3107e-03 1.5 7.36e+06 1.1 0.0e+00 0.0e+00 0.0e+00 0 5 0 0 0 0 5 0 0 0 38929
VecScatterBegin 84 1.0 8.1396e-04 2.1 0.00e+00 0.0 5.5e+03 1.3e+02 0.0e+00 0 0 82 25 0 0 0 84 25 0 0
VecScatterEnd 84 1.0 1.4009e-03 3.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 93 1.0 8.6740e-02 4.9 1.91e+06 1.1 0.0e+00 0.0e+00 9.3e+01 2 1 0 0 1 2 1 0 0 41 386
KSPGMRESOrthog 84 1.0 7.9271e-02 2.6 1.24e+07 1.1 0.0e+00 0.0e+00 8.4e+01 2 9 0 0 1 2 9 0 0 37 2743
KSPSetUp 10 1.0 9.8610e-04 10.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 9 1.0 3.2460e-01 1.3 1.42e+08 1.1 5.5e+03 1.3e+02 1.8e+02 9 100 82 25 1 10 100 84 25 79 7651
PCSetUp 18 1.0 8.9639e-02 1.5 4.03e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 28 0 0 0 2 28 0 0 0 7858
PCSetUpOnBlocks 9 1.0 1.3281e-02 1.6 4.48e+06 1.1 0.0e+00 0.0e+00 0.0e+00 0 3 0 0 0 0 3 0 0 0 5893
PCApply 93 1.0 1.4292e-01 1.5 8.10e+07 1.1 0.0e+00 0.0e+00 0.0e+00 4 57 0 0 0 4 57 0 0 0 9910
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 31 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 10056 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.43051e-07
Average time for MPI_Barrier(): 1.45912e-05
Average time for zero size MPI_Send(): 2.19875e-06
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
Unknown Name on a arch-linux2-c-opt named nagoyqatsi.engr.ucdavis.edu with 18 processors, by jaabell Wed Jul 1 13:55:02 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015
Max Max/Min Avg Total
Time (sec): 3.526e+00 1.00152 3.524e+00
Objects: 4.900e+01 1.00000 4.900e+01
Flops: 1.639e+08 1.06136 1.593e+08 2.868e+09
Flops/sec: 4.650e+07 1.06064 4.521e+07 8.139e+08
MPI Messages: 6.690e+02 2.79916 4.326e+02 7.787e+03
MPI Message Lengths: 2.604e+05 3.38067 3.985e+02 3.104e+06
MPI Reductions: 1.770e+04 1.00000
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 3.3098e+00 93.9% 2.8678e+09 100.0% 7.625e+03 97.9% 3.985e+02 100.0% 2.580e+02 1.5%
------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 98 1.0 7.4918e-02 1.8 4.77e+07 1.1 6.5e+03 1.3e+02 0.0e+00 2 29 83 26 0 2 29 85 26 0 11144
MatSolve 108 1.0 6.9855e-02 1.4 5.25e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 32 0 0 0 2 32 0 0 0 13136
MatLUFactorNum 10 1.0 9.2005e-02 1.5 4.48e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 27 0 0 0 2 27 0 0 0 8506
MatILUFactorSym 1 1.0 5.3070e-03 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 10 1.0 2.4054e-02 8.1 0.00e+00 0.0 1.0e+03 2.2e+03 2.0e+01 0 0 13 74 0 0 0 13 74 8 0
MatAssemblyEnd 10 1.0 1.7850e-01 1.2 0.00e+00 0.0 1.3e+02 3.3e+01 2.7e+01 5 0 2 0 0 5 0 2 0 10 0
MatGetRowIJ 1 1.0 1.0014e-05 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 4.1127e-04 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 10 1.0 1.4319e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 98 1.0 8.5841e-02 2.8 7.65e+06 1.1 0.0e+00 0.0e+00 9.8e+01 2 5 0 0 1 2 5 0 0 38 1560
VecNorm 108 1.0 9.2370e-02 5.1 1.48e+06 1.1 0.0e+00 0.0e+00 1.1e+02 1 1 0 0 1 2 1 0 0 42 281
VecScale 108 1.0 1.1256e-03 1.8 7.40e+05 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 11514
VecCopy 10 1.0 3.6168e-04 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 129 1.0 4.5815e-03 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 10 1.0 2.4629e-04 1.8 1.37e+05 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 9745
VecMAXPY 108 1.0 3.8722e-03 1.5 8.99e+06 1.1 0.0e+00 0.0e+00 0.0e+00 0 5 0 0 0 0 5 0 0 0 40660
VecScatterBegin 98 1.0 9.2292e-04 2.1 0.00e+00 0.0 6.5e+03 1.3e+02 0.0e+00 0 0 83 26 0 0 0 85 26 0 0
VecScatterEnd 98 1.0 1.4791e-03 3.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 108 1.0 9.3357e-02 4.9 2.22e+06 1.1 0.0e+00 0.0e+00 1.1e+02 2 1 0 0 1 2 1 0 0 42 416
KSPGMRESOrthog 98 1.0 8.8507e-02 2.6 1.53e+07 1.1 0.0e+00 0.0e+00 9.8e+01 2 9 0 0 1 2 9 0 0 38 3026
KSPSetUp 11 1.0 9.8634e-04 10.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 10 1.0 3.5938e-01 1.3 1.64e+08 1.1 6.5e+03 1.3e+02 2.1e+02 9 100 83 26 1 10 100 85 26 80 7980
PCSetUp 20 1.0 9.6813e-02 1.4 4.48e+07 1.1 0.0e+00 0.0e+00 0.0e+00 2 27 0 0 0 2 27 0 0 0 8084
PCSetUpOnBlocks 10 1.0 1.3283e-02 1.6 4.48e+06 1.1 0.0e+00 0.0e+00 0.0e+00 0 3 0 0 0 0 3 0 0 0 5892
PCApply 108 1.0 1.5924e-01 1.4 9.28e+07 1.1 0.0e+00 0.0e+00 0.0e+00 4 57 0 0 0 4 57 0 0 0 10186
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 4 0 0 0
Vector 31 1 1584 0
Vector Scatter 1 0 0 0
Krylov Solver 3 0 0 0
Index Set 7 4 10056 0
Preconditioner 2 0 0 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.43051e-07
Average time for MPI_Barrier(): 1.27792e-05
Average time for zero size MPI_Send(): 2.39743e-06