--- PETSc preconditioner built (in 5.301285e+01)
  0 KSP Residual norm 2.110956032949e-01
  1 KSP Residual norm 4.690565179982e-02
  2 KSP Residual norm 1.839894244478e-02
  3 KSP Residual norm 6.542338061019e-03
  4 KSP Residual norm 2.429689521643e-03
  5 KSP Residual norm 9.004565623659e-04
  6 KSP Residual norm 3.318836859361e-04
  7 KSP Residual norm 1.275124299236e-04
  8 KSP Residual norm 4.895110252167e-05
  9 KSP Residual norm 1.830749031983e-05
 10 KSP Residual norm 6.776617839770e-06
 11 KSP Residual norm 2.509926306109e-06
 12 KSP Residual norm 9.267274226297e-07
 13 KSP Residual norm 3.483272190197e-07
 14 KSP Residual norm 1.295683269023e-07
 15 KSP Residual norm 4.747805360549e-08
 16 KSP Residual norm 1.722120251541e-08
 17 KSP Residual norm 6.276392198641e-09
 18 KSP Residual norm 2.294682152859e-09
 19 KSP Residual norm 8.577329475494e-10
KSP Object: 2048 MPI processes
  type: gmres
    GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=200, initial guess is zero
  tolerances:  relative=1e-08, absolute=1e-50, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 2048 MPI processes
  type: hypre
    HYPRE BoomerAMG preconditioning
    HYPRE BoomerAMG: Cycle type V
    HYPRE BoomerAMG: Maximum number of levels 25
    HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1
    HYPRE BoomerAMG: Convergence tolerance PER hypre call 0
    HYPRE BoomerAMG: Threshold for strong coupling 0.25
    HYPRE BoomerAMG: Interpolation truncation factor 0
    HYPRE BoomerAMG: Interpolation: max elements per row 0
    HYPRE BoomerAMG: Number of levels of aggressive coarsening 1
    HYPRE BoomerAMG: Number of paths for aggressive coarsening 1
    HYPRE BoomerAMG: Maximum row sums 0.9
    HYPRE BoomerAMG: Sweeps down         1
    HYPRE BoomerAMG: Sweeps up           1
    HYPRE BoomerAMG: Sweeps on coarse    1
    HYPRE BoomerAMG: Relax down          symmetric-SOR/Jacobi
    HYPRE BoomerAMG: Relax up            symmetric-SOR/Jacobi
    HYPRE BoomerAMG: Relax on coarse     Gaussian-elimination
    HYPRE BoomerAMG: Relax weight  (all)      1
    HYPRE BoomerAMG: Outer relax weight (all) 1
    HYPRE BoomerAMG: Not using CF-relaxation
    HYPRE BoomerAMG: Measure type        local
    HYPRE BoomerAMG: Coarsen type        HMIS
    HYPRE BoomerAMG: Interpolation type  ext+i
  linear system matrix = precond matrix:
  Matrix Object:   2048 MPI processes
    type: mpiaij
    rows=531441, cols=531441
    total: nonzeros=12476324, allocated nonzeros=12476324
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines
--- system solved with PETSc (in 8.078276e+00)
--- PETSc error = 3.153226e-06 / 1.955848e-03
--- number of dof: 531441.0, on average, number of neighbors: 22.9, h: 3.5e-02
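For context, a minimal sketch of how a solver matching the -ksp_view report above can be configured through the PETSc 3.4 C API (the release used in this run). This is not the FreeFem++ driver itself: the function name solve and the names A, b, x are hypothetical, error checking is omitted, and PETSc must have been configured with hypre. The BoomerAMG details (HMIS coarsening, ext+i interpolation, one level of aggressive coarsening, no CF-relaxation) are picked up from the runtime options listed at the end of this log.

    #include <petscksp.h>

    /* Sketch only: assumes Mat A and Vec b, x are already assembled. */
    PetscErrorCode solve(Mat A, Vec b, Vec x)
    {
      KSP ksp;
      PC  pc;
      KSPCreate(PETSC_COMM_WORLD, &ksp);
      /* "linear system matrix = precond matrix" in the report above;
         the MatStructure flag is required by the 3.4-era signature */
      KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);
      KSPSetType(ksp, KSPGMRES);                    /* restart defaults to 30 */
      KSPSetTolerances(ksp, 1e-8, 1e-50, 1e4, 200); /* rtol, abstol, dtol, maxits */
      KSPGetPC(ksp, &pc);
      PCSetType(pc, PCHYPRE);
      PCHYPRESetType(pc, "boomeramg");
      KSPSetFromOptions(ksp);  /* picks up the -pc_hypre_boomeramg_* options */
      KSPSolve(ksp, b, x);
      KSPDestroy(&ksp);
      return 0;
    }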
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/ccc/scratch/cont003/gen6654/jolivetp/Cive/ff++/src/mpi/FreeFem++-mpi-PETSc on a arch-linux2-c-opt named curie2688 with 2048 processors, by jolivetp Wed Oct 9 14:31:15 2013
Using Petsc Release Version 3.4.2, Jul, 02, 2013

                         Max       Max/Min        Avg      Total
Time (sec):           8.856e+01      1.01950   8.757e+01
Objects:              4.500e+01      1.00000   4.500e+01
Flops:                1.417e+06     77.92848   4.643e+05  9.509e+08
Flops/sec:            1.605e+04     76.80652   5.301e+03  1.086e+07
MPI Messages:         5.280e+02      6.00000   3.005e+02  6.155e+05
MPI Message Lengths:  1.378e+05      9.05442   2.239e+02  1.378e+08
MPI Reductions:       7.500e+01      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                          e.g., VecAXPY() for real vectors of length N --> 2N flops
                          and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 8.7564e+01 100.0%  9.5092e+08 100.0%  6.155e+05 100.0%  2.239e+02      100.0%  7.400e+01  98.7%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %f - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
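As a quick sanity check on the aggregate figures above (a hand calculation, not part of the original log): dividing the total flop count by the average run time, 9.509e+08 / 8.757e+01 ≈ 1.086e+07, reproduces the Total column of the Flops/sec line, confirming how the summary rows relate to one another before reading the per-event table below.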
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                              --- Global ---  --- Stage ---   Total
                   Max Ratio  Max      Ratio  Max      Ratio  Mess   Avg len Reduct  %T %f %M %L %R  %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

MatMult               20 1.0 4.7187e-02  1.9 7.31e+05  64.1 5.6e+05 2.4e+02 0.0e+00   0 51 91 97  0   0 51 91 97  0  10351
MatConvert             1 1.0 3.0565e-02  1.3 0.00e+00   0.0 0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0   0  0  0  0  0      0
MatAssemblyBegin       1 1.0 7.6596e-02  2.0 0.00e+00   0.0 0.0e+00 0.0e+00 2.0e+00   0  0  0  0  3   0  0  0  0  3      0
MatAssemblyEnd         1 1.0 4.2022e-01  1.1 0.00e+00   0.0 5.6e+04 6.2e+01 8.0e+00   0  0  9  3 11   0  0  9  3 11      0
MatGetRowIJ            2 1.0 5.4717e-04  0.0 0.00e+00   0.0 0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0   0  0  0  0  0      0
MatView                1 1.0 4.7474e-02  7.8 0.00e+00   0.0 0.0e+00 0.0e+00 1.0e+00   0  0  0  0  1   0  0  0  0  1      0
VecDot                 1 1.0 1.1117e+00  1.0 1.62e+03 107.8 0.0e+00 0.0e+00 1.0e+00   1  0  0  0  1   1  0  0  0  1      1
VecMDot               19 1.0 6.7045e-02  1.5 3.07e+05 107.8 0.0e+00 0.0e+00 1.9e+01   0 21  0  0 25   0 21  0  0 26   3006
VecNorm               23 1.0 3.8061e-02  1.7 3.72e+04 101.1 0.0e+00 0.0e+00 2.3e+01   0  3  0  0 31   0  3  0  0 31    642
VecScale              21 1.0 6.9720e-01 11.8 1.70e+04 101.1 0.0e+00 0.0e+00 0.0e+00   0  1  0  0  0   0  1  0  0  0     16
VecCopy                1 1.0 1.4067e-05 14.8 0.00e+00   0.0 0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0   0  0  0  0  0      0
VecSet                23 1.0 1.9797e-02 53.7 0.00e+00   0.0 0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0   0  0  0  0  0      0
VecAXPY                2 1.0 1.6403e-04 10.3 3.24e+03 101.1 0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0   0  0  0  0  0  12959
VecMAXPY              20 1.0 1.4663e-02  2.7 3.38e+05 101.1 0.0e+00 0.0e+00 0.0e+00   0 23  0  0  0   0 23  0  0  0  15150
VecScatterBegin       20 1.0 2.0975e-02 83.5 0.00e+00   0.0 5.6e+05 2.4e+02 0.0e+00   0  0 91 97  0   0  0 91 97  0      0
VecScatterEnd         20 1.0 1.8233e-02 19.9 0.00e+00   0.0 0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0   0  0  0  0  0      0
VecNormalize          21 1.0 7.2118e-01  8.8 5.10e+04 101.1 0.0e+00 0.0e+00 2.1e+01   0  4  0  0 28   0  4  0  0 28     46
KSPGMRESOrthog        19 1.0 8.1137e-02  1.5 6.15e+05 104.4 0.0e+00 0.0e+00 1.9e+01   0 42  0  0 25   0 42  0  0 26   4973
KSPSetUp               1 1.0 1.0141e-01 11.2 0.00e+00   0.0 0.0e+00 0.0e+00 2.0e+00   0  0  0  0  3   0  0  0  0  3      0
KSPSolve               1 1.0 7.9854e+00  1.0 1.37e+06  78.3 5.3e+05 2.4e+02 3.9e+01   9 97 86 93 52   9 97 86 93 53    115
PCSetUp                1 1.0 5.2691e+01  1.0 0.00e+00   0.0 0.0e+00 0.0e+00 4.0e+00  60  0  0  0  5  60  0  0  0  5      0
PCApply               20 1.0 7.8331e+00  1.0 0.00e+00   0.0 0.0e+00 0.0e+00 0.0e+00   9  0  0  0  0   9  0  0  0  0      0
------------------------------------------------------------------------------------------------------------------------
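To make the Mflop/s column concrete (a hand calculation using the formula above, not part of the original log): MatMult accounts for 51% of the 9.509e+08 global flops, i.e. roughly 4.85e+08 flops, in a maximum time of 4.7187e-02 s, giving 10e-6 * 4.85e+08 / 4.7187e-02 ≈ 1.0e+04 Mflop/s, consistent with the 10351 reported (the %f column is rounded to integer percent, hence the small discrepancy).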
Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Matrix     3              3       232360     0
   Matrix Null Space     1              1          620     0
              Vector    35             33       235280     0
      Vector Scatter     1              1         1076     0
           Index Set     2              2         4064     0
       Krylov Solver     1              1        18368     0
      Preconditioner     1              1         1072     0
              Viewer     1              0            0     0
========================================================================================================================
Average time to get PetscTime(): 0
Average time for MPI_Barrier(): 8.47816e-05
Average time for zero size MPI_Send(): 5.07268e-06
#PETSc Option Table entries:
-eps 1e-8
-iter 200
-ksp_monitor
-ksp_view
-log_summary
-pc_hypre_boomeramg_agg_nl 1
-pc_hypre_boomeramg_coarsen_type HMIS
-pc_hypre_boomeramg_interp_type ext+i
-pc_hypre_boomeramg_no_CF
-pc_mg_log
-pc_type hypre
-pc_type_hypre boomeramg
#End of PETSc Option Table entries
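If one wanted to hard-wire this configuration rather than pass it on the command line, the same option table can be injected programmatically before KSPSetFromOptions() is called. A minimal sketch, assuming PETSc 3.4 (newer releases add a leading PetscOptions argument to this call); the helper name set_solver_options is hypothetical, and only the options PETSc itself consumes are kept (-eps and -iter are application-level flags read by the FreeFem++ script):

    #include <petscsys.h>

    /* Hypothetical helper: inject the preconditioner options programmatically.
       Call after PetscInitialize() and before KSPSetFromOptions(). */
    static PetscErrorCode set_solver_options(void)
    {
      return PetscOptionsInsertString("-pc_type hypre "
                                      "-pc_hypre_type boomeramg "
                                      "-pc_hypre_boomeramg_coarsen_type HMIS "
                                      "-pc_hypre_boomeramg_interp_type ext+i "
                                      "-pc_hypre_boomeramg_agg_nl 1 "
                                      "-pc_hypre_boomeramg_no_CF");
    }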