  0 KSP Residual norm 6.649323550319e-01
  1 KSP Residual norm 8.750923818038e-03
  2 KSP Residual norm 3.390897189550e-04
  3 KSP Residual norm 2.407098868614e-05
  4 KSP Residual norm 2.348138574468e-06
  5 KSP Residual norm 2.144365180509e-07
  6 KSP Residual norm 2.240928229398e-08
  7 KSP Residual norm 2.406236733356e-09
KSP Object: 2048 MPI processes
  type: gmres
    GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=200, initial guess is zero
  tolerances:  relative=1e-08, absolute=1e-50, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 2048 MPI processes
  type: hypre
    HYPRE BoomerAMG preconditioning
    HYPRE BoomerAMG: Cycle type V
    HYPRE BoomerAMG: Maximum number of levels 25
    HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1
    HYPRE BoomerAMG: Convergence tolerance PER hypre call 0
    HYPRE BoomerAMG: Threshold for strong coupling 0.25
    HYPRE BoomerAMG: Interpolation truncation factor 0
    HYPRE BoomerAMG: Interpolation: max elements per row 0
    HYPRE BoomerAMG: Number of levels of aggressive coarsening 0
    HYPRE BoomerAMG: Number of paths for aggressive coarsening 1
    HYPRE BoomerAMG: Maximum row sums 0.9
    HYPRE BoomerAMG: Sweeps down         1
    HYPRE BoomerAMG: Sweeps up           1
    HYPRE BoomerAMG: Sweeps on coarse    1
    HYPRE BoomerAMG: Relax down          symmetric-SOR/Jacobi
    HYPRE BoomerAMG: Relax up            symmetric-SOR/Jacobi
    HYPRE BoomerAMG: Relax on coarse     Gaussian-elimination
    HYPRE BoomerAMG: Relax weight  (all)      1
    HYPRE BoomerAMG: Outer relax weight (all) 1
    HYPRE BoomerAMG: Using CF-relaxation
    HYPRE BoomerAMG: Measure type        local
    HYPRE BoomerAMG: Coarsen type        Falgout
    HYPRE BoomerAMG: Interpolation type  classical
  linear system matrix = precond matrix:
  Matrix Object:   2048 MPI processes
    type: mpiaij
    rows=531441, cols=531441
    total: nonzeros=12013842, allocated nonzeros=12013842
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines
 --- system solved with PETSc (in 7.907795e+01)
 --- PETSc error = 5.165458e-07 / 1.955848e-03
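For reference, the solver configuration reported by -ksp_view above could also be set up programmatically instead of through runtime options. The following is a minimal sketch, not taken from the run itself: it assumes an already assembled MPIAIJ matrix A and vectors b and x, uses the PETSc 3.4-era KSPSetOperators() signature matching the version in the log (later releases drop the MatStructure argument), and omits error checking (ierr/CHKERRQ) for brevity; the helper name is hypothetical.

    #include <petscksp.h>

    /* Hypothetical helper: GMRES(30) preconditioned by hypre BoomerAMG,
       mirroring the KSP/PC view printed above. */
    PetscErrorCode solve_with_gmres_boomeramg(Mat A, Vec b, Vec x)
    {
      KSP ksp;
      PC  pc;

      KSPCreate(PETSC_COMM_WORLD, &ksp);
      KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN); /* linear system matrix = precond matrix */
      KSPSetType(ksp, KSPGMRES);
      KSPGMRESSetRestart(ksp, 30);                      /* GMRES: restart=30 */
      KSPSetTolerances(ksp, 1e-8, 1e-50, 1e4, 200);     /* rtol, abstol, dtol, maxits */
      KSPGetPC(ksp, &pc);
      PCSetType(pc, PCHYPRE);
      PCHYPRESetType(pc, "boomeramg");                  /* BoomerAMG V-cycle with hypre defaults */
      KSPSetFromOptions(ksp);                           /* still honor -ksp_monitor, -ksp_view, ... */
      KSPSolve(ksp, b, x);
      KSPDestroy(&ksp);
      return 0;
    }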
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/ccc/scratch/cont003/gen6654/jolivetp/Cive/ff++/src/mpi/FreeFem++-mpi-PETSc on a arch-linux2-c-opt named curie2958 with 2048 processors, by jolivetp Tue Oct 8 23:24:03 2013
Using Petsc Release Version 3.4.2, Jul, 02, 2013

                         Max       Max/Min        Avg      Total
Time (sec):           1.086e+02      1.01860   1.076e+02
Objects:              3.300e+01      1.00000   3.300e+01
Flops:                4.207e+05     72.10626   1.341e+05  2.745e+08
Flops/sec:            3.927e+03     72.38036   1.246e+03  2.551e+06
MPI Messages:         2.400e+02      6.00000   1.364e+02  2.794e+05
MPI Message Lengths:  5.716e+04      9.02463   2.018e+02  5.637e+07
MPI Reductions:       4.900e+01      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 1.0760e+02 100.0%  2.7454e+08 100.0%  2.794e+05 100.0%  2.018e+02      100.0%  4.800e+01  98.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %f - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                              --- Global ---  --- Stage ---   Total
                   Max Ratio  Max      Ratio   Max      Ratio  Mess   Avg len Reduct  %T %f %M %L %R  %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

MatMult                8 1.0 7.5954e-02   1.5 2.92e+05  64.1 2.2e+05 2.4e+02 0.0e+00  0 68 80 94  0   0 68 80 94  0  2475
MatConvert             1 1.0 7.2964e-02   1.1 0.00e+00   0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyBegin       1 1.0 1.0620e-01   1.5 0.00e+00   0.0 0.0e+00 0.0e+00 2.0e+00  0  0  0  0  4   0  0  0  0  4     0
MatAssemblyEnd         1 1.0 4.2796e-01   1.0 0.00e+00   0.0 5.6e+04 6.1e+01 8.0e+00  0  0 20  6 16   0  0 20  6 17     0
MatGetRowIJ            2 1.0 1.8420e-03   0.0 0.00e+00   0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatView                1 1.0 4.4504e-02   2.0 0.00e+00   0.0 0.0e+00 0.0e+00 1.0e+00  0  0  0  0  2   0  0  0  0  2     0
VecDot                 1 1.0 1.3344e+00   1.0 1.62e+03 107.8 0.0e+00 0.0e+00 1.0e+00  1  0  0  0  2   1  0  0  0  2     1
VecMDot                7 1.0 4.2954e-02   2.3 4.53e+04 107.8 0.0e+00 0.0e+00 7.0e+00  0 11  0  0 14   0 11  0  0 15   692
VecNorm               11 1.0 2.3901e-02   1.8 1.78e+04 101.1 0.0e+00 0.0e+00 1.1e+01  0  4  0  0 22   0  4  0  0 23   489
VecScale               9 1.0 1.2851e+00  27.7 7.28e+03 101.1 0.0e+00 0.0e+00 0.0e+00  1  2  0  0  0   1  2  0  0  0     4
VecCopy                1 1.0 6.9141e-06   7.2 0.00e+00   0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet                11 1.0 9.1875e-03  29.3 0.00e+00   0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                2 1.0 2.5773e-04  14.4 3.24e+03 101.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  8248
VecMAXPY               8 1.0 7.9651e-03   1.6 5.66e+04 101.1 0.0e+00 0.0e+00 0.0e+00  0 14  0  0  0   0 14  0  0  0  4670
VecScatterBegin        8 1.0 3.4055e-02   1.4 0.00e+00   0.0 2.2e+05 2.4e+02 0.0e+00  0  0 80 94  0   0  0 80 94  0     0
VecScatterEnd          8 1.0 2.4737e-02  15.9 0.00e+00   0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize           9 1.0 1.3002e+00  18.7 2.18e+04 101.1 0.0e+00 0.0e+00 9.0e+00  1  5  0  0 18   1  5  0  0 19    11
KSPGMRESOrthog         7 1.0 5.0120e-02   1.5 9.06e+04 104.4 0.0e+00 0.0e+00 7.0e+00  0 22  0  0 14   0 22  0  0 15  1186
KSPSetUp               1 1.0 2.0134e-02 162.4 0.00e+00   0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 7.8981e+01   1.0 3.76e+05  72.5 2.0e+05 2.4e+02 1.9e+01 73 89 70 82 39  73 89 70 82 40     3
PCSetUp                1 1.0 6.1480e+01   1.0 0.00e+00   0.0 0.0e+00 0.0e+00 4.0e+00 57  0  0  0  8  57  0  0  0  8     0
PCApply                8 1.0 1.7335e+01   1.0 0.00e+00   0.0 0.0e+00 0.0e+00 0.0e+00 16  0  0  0  0  16  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage

              Matrix     3              3       232360     0
              Vector    24             21       147344     0
      Vector Scatter     1              1         1076     0
           Index Set     2              2         4064     0
       Krylov Solver     1              1        18368     0
      Preconditioner     1              1         1072     0
              Viewer     1              0            0     0
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
Average time for MPI_Barrier(): 0.000101995
Average time for zero size MPI_Send(): 5.44626e-06
#PETSc Option Table entries:
-eps 1e-8
-iter 200
-ksp_monitor
-ksp_view
-log_summary
-pc_mg_log
-pc_type hypre
-pc_type_hypre boomeramg
#End of PETSc Option Table entries
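For completeness, the options recorded above could be reproduced on the command line roughly as sketched below. The launcher, process count, and script name (script.edp) are placeholders; -eps and -iter are consumed by the FreeFem++ script rather than by PETSc; and the hypre sub-type is spelled -pc_hypre_type in PETSc, so the -pc_type_hypre entry in the table appears to be a misspelling that went unused (BoomerAMG is the default hypre preconditioner in PETSc, so the run behaved the same).

    mpirun -np 2048 FreeFem++-mpi-PETSc script.edp -eps 1e-8 -iter 200 \
        -ksp_monitor -ksp_view -log_summary -pc_mg_log \
        -pc_type hypre -pc_hypre_type boomeramg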