************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./v3d3 on a linux-gcc named hb-2r06-n48 with 128 processors, by wjiang Sat Apr 28 18:49:27 2012
Using Petsc Release Version 3.2.0, Patch 7, Thu Mar 15 09:30:51 CDT 2012

                         Max       Max/Min        Avg      Total
Time (sec):           2.351e+03      1.00072   2.350e+03
Objects:              3.540e+02      1.00000   3.540e+02
Flops:                1.150e+07      1.80851   1.074e+07  1.375e+09
Flops/sec:            4.892e+03      1.80861   4.570e+03  5.850e+05
MPI Messages:         7.731e+03     11.25328   1.106e+03  1.415e+05
MPI Message Lengths:  9.360e+08      2.08360   5.476e+05  7.750e+10
MPI Reductions:       5.900e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 2.3501e+03 100.0%  1.3748e+09 100.0%  1.415e+05 100.0%  5.476e+05      100.0%  5.890e+02  99.8%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %f - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %f %M %L %R  %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

KSPSolve               4 1.0 2.2645e+03 1.0 0.00e+00 0.0 3.9e+04 3.6e+02 5.4e+01 96  0 27  0  9  96  0 27  0  9     0
PCSetUp                4 1.0 2.2633e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 3.4e+01 96  0  0  0  6  96  0  0  0  6     0
PCApply                4 1.0 1.1641e+00 1.0 0.00e+00 0.0 3.9e+04 3.6e+02 2.0e+01  0  0 27  0  3   0  0 27  0  3     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions   Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Vector   120            120     128915512     0
      Vector Scatter    89             89         61420     0
              Matrix    45             45      62915268     0
         PetscRandom     1              1           608     0
       Krylov Solver     3              3          3376     0
      Preconditioner     3              3          2664     0
              Viewer     5              4          2880     0
           Index Set    88             88        303528     0
========================================================================================================================
Average time to get PetscTime(): 0
Average time for MPI_Barrier(): 1.97887e-05
Average time for zero size MPI_Send(): 1.47521e-06
#PETSc Option Table entries:
-ksp_view
-log_summary
-mat_mumps_icntl_14 50
-mat_mumps_icntl_4 1
-mat_mumps_icntl_6 2
-mat_mumps_icntl_7 5
-pc_factor_mat_solver_package mumps
-pc_type lu
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8
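As a sanity check on the summary table, the aggregate flop rate can be recomputed from the rounded figures printed in the log: the "Total" Flops value divided by the "Max" wall-clock time, scaled by 1e-6 for Mflop/s. A minimal sketch, using only the rounded numbers above (so the result agrees with the reported 5.850e+05 Flops/sec to within rounding):

```python
# Values copied from the rounded summary table above, not measured here.
total_flops = 1.375e9   # Flops, "Total" column (sum over all 128 processes)
max_time    = 2.351e3   # Time (sec), "Max" column

flops_per_sec = total_flops / max_time   # aggregate Flops/sec across the run
mflops        = 1e-6 * flops_per_sec     # same quantity expressed in Mflop/s

print(f"{flops_per_sec:.3e} Flops/sec")  # log reports 5.850e+05
print(f"{mflops:.3f} Mflop/s")           # well under 1 Mflop/s: this run is
                                         # dominated by the (flop-free here)
                                         # MUMPS factorization in PCSetUp
```

The tiny sustained flop rate is consistent with the event table: 96% of the time is spent in PCSetUp, whose flops are performed inside MUMPS and not counted by PETSc's logging.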
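For reference, the option table above corresponds to an invocation along the following lines. The `mpiexec` launcher name is an assumption; the log records only the executable name and the 128-process count:

```shell
# Assumed launcher: the log shows ./v3d3 on 128 processors; options are
# taken verbatim from the "#PETSc Option Table entries" section above.
mpiexec -n 128 ./v3d3 \
    -pc_type lu \
    -pc_factor_mat_solver_package mumps \
    -mat_mumps_icntl_4 1 \
    -mat_mumps_icntl_6 2 \
    -mat_mumps_icntl_7 5 \
    -mat_mumps_icntl_14 50 \
    -ksp_view \
    -log_summary
```

The ICNTL options tune the MUMPS direct solver (e.g. ICNTL(7) selects the ordering, ICNTL(14) the working-space relaxation percentage); `-log_summary` is what produced this report.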