************************************************************************************************************************
***        WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document                ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/ccc/work/cont003/rndm/rndm/FreeFem-sources/src/mpi/FreeFem++-mpi on a arch-linux2-c-opt-complex-bullxmpi named curie2328 with 2048 processors, by jolivetp Tue Jul 10 02:10:53 2018
Using Petsc Development GIT revision: v3.9.2-603-gceafe64  GIT Date: 2018-06-10 12:46:16 -0500

                         Max       Max/Min        Avg      Total
Time (sec):           1.691e+04      1.00000   1.691e+04
Objects:              3.145e+04      1.00003   3.145e+04
Flop:                 2.219e+12      1.95187   1.905e+12  3.901e+15
Flop/sec:             1.312e+08      1.95187   1.126e+08  2.307e+11
MPI Messages:         7.647e+07      7.97837   2.588e+07  5.300e+10
MPI Message Lengths:  1.195e+11      2.97535   2.889e+03  1.531e+14
MPI Reductions:       9.603e+05      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flop
                            and VecAXPY() for complex vectors of length N --> 8N flop

Summary of Stages:   ----- Time ------  ----- Flop -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 1.6910e+04 100.0%  3.9010e+15 100.0%  5.300e+10 100.0%  2.889e+03      100.0%  9.603e+05 100.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flop: Max - maximum over all processors
                  Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flop in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
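(A minimal illustration, not part of the PETSc output: the Mflop/s column of the event table below is the flop summed over all ranks divided by the maximum time, scaled to millions, and flop are counted per real operation, so a complex VecAXPY of length N is charged 8N flop. The Python sketch rechecks this against the aggregate figures quoted above and the KSPSolve row further down; all numbers are copied from this log.)

    # Recompute the aggregate flop rate from the summary above.
    total_flop = 3.901e15            # "Flop:" Total over all 2048 ranks
    max_time   = 1.691e4             # "Time (sec):" Max
    print(total_flop / max_time)         # ~2.307e+11 flop/s, the printed "Flop/sec" Total
    print(total_flop / max_time / 2048)  # ~1.126e+08 flop/s, the per-rank "Avg" column

    # Per-event column: Mflop/s = 1e-6 * (sum of flop over all ranks) / (max time over all ranks).
    # KSPSolve accounts for essentially all of the flop and has a max time of 1.6857e+04 s:
    print(1e-6 * total_flop / 1.6857e4)  # ~2.31e+05; the table prints 231112 because
                                         # KSPSolve's own flop sum is slightly below the grand total

    # Flop are counted per real operation, so one complex multiply-add is 8 real flop:
    N = 1000
    print(8 * N)                         # flop charged to a complex VecAXPY of length N (2*N if real)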
Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided        100 1.0 2.9788e+00 6.2 0.00e+00 0.0 2.0e+05 4.0e+00 1.0e+02 0 0 0 0 0 0 0 0 0 0 0
BuildTwoSidedF       97 1.0 2.9260e+00 7.0 0.00e+00 0.0 1.6e+05 1.3e+03 9.7e+01 0 0 0 0 0 0 0 0 0 0 0
MatMult              1778795 1.0 3.5511e+03 4.1 1.46e+12 1.9 4.0e+10 2.4e+03 0.0e+00 7 66 75 61 0 7 66 75 61 0 728371
MatMultAdd           222360 1.0 2.5904e+0348.0 4.31e+09 1.9 2.4e+09 1.3e+02 0.0e+00 14 0 4 0 0 14 0 4 0 0 2872
MatMultTranspose     222360 1.0 1.8736e+03421.8 4.31e+09 1.9 2.4e+09 1.3e+02 0.0e+00 0 0 4 0 0 0 0 4 0 0 3970
MatSolve             404624 1.2 1.2139e+04 2.0 1.12e+08 0.0 0.0e+00 0.0e+00 0.0e+00 62 0 0 0 0 62 0 0 0 0 0
MatSOR               1334193 1.0 5.9901e+01 2.1 9.31e+10 2.3 0.0e+00 0.0e+00 0.0e+00 0 4 0 0 0 0 4 0 0 0 2441131
MatLUFactorSym       4 1.0 1.1474e+00 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatLUFactorNum       4 1.0 4.8804e+00 4.6 6.95e+03 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatConvert           4 1.0 6.8563e-02 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatScale             9 1.0 1.0051e-0132.5 3.20e+05 2.1 6.3e+04 6.4e+02 0.0e+00 0 0 0 0 0 0 0 0 0 0 5029
MatResidual          222360 1.0 1.2651e+02 9.1 1.97e+10 2.2 4.6e+09 6.4e+02 0.0e+00 0 1 9 2 0 0 1 9 2 0 237370
MatAssemblyBegin     279 1.0 3.6115e+00 4.8 0.00e+00 0.0 1.6e+05 1.3e+03 1.3e+02 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd       279 1.0 4.5032e+00 1.1 0.00e+00 0.0 1.4e+06 8.3e+02 2.6e+02 0 0 0 0 0 0 0 0 0 0 0
MatGetRow            94928 2.0 4.5495e-02 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetRowIJ          4 1.3 1.7376e-02 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatCreateSubMats     3 1.0 3.4639e-01 2.2 0.00e+00 0.0 3.9e+05 5.4e+04 3.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatCreateSubMat      12 1.0 2.3843e+00 1.1 0.00e+00 0.0 8.6e+05 1.1e+04 1.9e+02 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering       4 1.3 1.1746e-01 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatIncreaseOvrlp     3 1.0 1.7994e-01 4.1 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatCoarsen           3 1.0 3.6146e-01 1.2 0.00e+00 0.0 4.7e+06 1.1e+02 6.4e+01 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries       4 1.0 5.1689e-04180.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAXPY              4 1.0 7.2161e-01 1.3 0.00e+00 0.0 5.3e+04 2.7e+03 1.4e+01 0 0 0 0 0 0 0 0 0 0 0
MatMatMult           57 1.0 5.4752e+00 1.2 8.95e+09 2.0 1.8e+06 8.7e+04 1.5e+02 0 0 0 0 0 0 0 0 0 0 2876760
MatMatMultSym        3 1.0 3.2586e-01 1.2 0.00e+00 0.0 2.9e+05 2.1e+02 3.9e+01 0 0 0 0 0 0 0 0 0 0 0
MatMatMultNum        57 1.0 4.1572e+00 1.0 8.95e+09 2.0 1.5e+06 1.0e+05 1.1e+02 0 0 0 0 0 0 0 0 0 0 3788824
MatPtAP              3 1.0 2.1469e+00 1.0 1.82e+06 1.9 6.6e+05 6.2e+02 4.9e+01 0 0 0 0 0 0 0 0 0 0 1413
MatPtAPSymbolic      3 1.0 3.4906e-01 1.0 0.00e+00 0.0 3.4e+05 7.6e+02 2.1e+01 0 0 0 0 0 0 0 0 0 0 0
MatPtAPNumeric       3 1.0 1.7279e+00 1.0 1.82e+06 1.9 3.2e+05 4.7e+02 2.7e+01 0 0 0 0 0 0 0 0 0 0 1756
MatTrnMatMult        3 1.0 1.3403e+00 1.1 1.31e+07 5.0 4.9e+05 4.9e+03 5.2e+01 0 0 0 0 0 0 0 0 0 0 6452
MatTrnMatMultSym     3 1.0 1.0600e+00 1.1 0.00e+00 0.0 4.2e+05 1.2e+03 4.1e+01 0 0 0 0 0 0 0 0 0 0 0
MatTrnMatMultNum     3 1.0 3.3198e-01 1.2 1.31e+07 5.0 6.5e+04 2.9e+04 1.1e+01 0 0 0 0 0 0 0 0 0 0 26048
MatGetLocalMat       14 1.0 9.4834e-0243.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetBrAoCol        9 1.0 1.8926e-0166.7 0.00e+00 0.0 4.4e+05 7.6e+02 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecDot               79680 1.0 1.2788e+01 1.9 1.20e+09 2.0 0.0e+00 0.0e+00 8.0e+04 0 0 0 0 8 0 0 0 0 8 153323
VecMDot              375036 1.0 6.0551e+0320.7 2.78e+11 2.0 0.0e+00 0.0e+00 3.8e+05 11 12 0 0 39 11 12 0 0 39 77250
VecNorm              451942 1.0 5.4607e+02 3.9 3.95e+10 2.0 0.0e+00 0.0e+00 4.5e+05 1 2 0 0 47 1 2 0 0 47 121916
VecScale             426926 1.0 8.7319e+00 1.6 2.03e+10 2.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 3919307
VecCopy              597347 1.0 9.7961e+00 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet               1962752 1.0 5.2963e+01 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY              116746 1.0 1.8358e+00 1.8 3.90e+09 2.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3597545
VecAYPX              1811308 1.0 2.1310e+00 1.8 6.32e+09 2.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 4831462
VecAXPBYCZ           889440 1.0 2.4358e+00 1.9 1.17e+10 2.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 7798605
VecMAXPY             404690 1.0 1.1737e+02 3.0 3.14e+11 2.0 0.0e+00 0.0e+00 0.0e+00 0 14 0 0 0 0 14 0 0 0 4504798
VecAssemblyBegin     20 1.0 1.3878e-01 5.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.8e+01 0 0 0 0 0 0 0 0 0 0 0
VecAssemblyEnd       20 1.0 4.8667e-021085.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecPointwiseMult     47285 1.0 2.7815e-01 1.5 3.55e+08 2.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2091381
VecScatterBegin      3604869 1.0 1.8385e+02 2.2 0.00e+00 0.0 5.3e+10 2.9e+03 0.0e+00 1 0100100 0 1 0100100 0 0
VecScatterEnd        2943861 1.0 3.2413e+03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 17 0 0 0 0 17 0 0 0 0 0
VecSetRandom         4 1.0 5.1200e-03 2.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize         404690 1.0 5.4170e+02 3.9 5.83e+10 2.0 0.0e+00 0.0e+00 4.0e+05 1 3 0 0 42 1 3 0 0 42 181127
KSPGMRESOrthog       375036 1.0 6.1054e+0318.1 5.56e+11 2.0 0.0e+00 0.0e+00 3.8e+05 11 24 0 0 39 11 24 0 0 39 153231
KSPSetUp             21 1.0 1.5609e-01 8.8 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+01 0 0 0 0 0 0 0 0 0 0 0
KSPSolve             55 1.0 1.6857e+04 1.0 2.22e+12 2.0 5.3e+10 2.9e+03 9.6e+05100100100100100 100100100100100 231112
PCGAMGGraph_AGG      3 1.0 1.9606e-01 1.0 2.66e+05 2.2 1.9e+05 2.7e+02 3.6e+01 0 0 0 0 0 0 0 0 0 0 2066
PCGAMGCoarse_AGG     3 1.0 2.0658e+00 1.0 1.31e+07 5.0 5.7e+06 5.9e+02 1.4e+02 0 0 0 0 0 0 0 0 0 0 4186
PCGAMGProl_AGG       3 1.0 3.1476e+00 1.0 0.00e+00 0.0 3.5e+05 6.3e+02 5.7e+01 0 0 0 0 0 0 0 0 0 0 0
PCGAMGPOpt_AGG       3 1.0 1.6646e+00 1.0 5.00e+06 2.0 9.8e+05 5.1e+02 1.3e+02 0 0 0 0 0 0 0 0 0 0 4862
GAMG: createProl     3 1.0 7.0599e+00 1.0 1.84e+07 3.5 7.2e+06 5.7e+02 3.7e+02 0 0 0 0 0 0 0 0 0 0 2429
  Graph              6 1.0 1.9571e-01 1.7 2.66e+05 2.2 1.9e+05 2.7e+02 3.6e+01 0 0 0 0 0 0 0 0 0 0 2070
  MIS/Agg            3 1.0 3.6166e-01 1.0 0.00e+00 0.0 4.7e+06 1.1e+02 6.4e+01 0 0 0 0 0 0 0 0 0 0 0
  SA: col data       3 1.0 1.3814e-01 1.1 0.00e+00 0.0 2.4e+05 8.7e+02 1.8e+01 0 0 0 0 0 0 0 0 0 0 0
  SA: frmProl0       3 1.0 2.9433e+00 1.0 0.00e+00 0.0 1.1e+05 1.5e+02 2.7e+01 0 0 0 0 0 0 0 0 0 0 0
  SA: smooth         3 1.0 1.5206e+00 3.1 3.20e+05 2.1 3.6e+05 2.8e+02 5.2e+01 0 0 0 0 0 0 0 0 0 0 332
GAMG: partLevel      3 1.0 4.4683e+00 1.0 1.82e+06 1.9 6.9e+05 6.0e+02 1.6e+02 0 0 0 0 0 0 0 0 0 0 679
  repartition        2 1.0 2.6092e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01 0 0 0 0 0 0 0 0 0 0 0
  Invert-Sort        2 1.0 4.7723e-01 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00 0 0 0 0 0 0 0 0 0 0 0
  Move A             2 1.0 2.4393e-01 1.3 0.00e+00 0.0 7.5e+03 3.2e+02 3.6e+01 0 0 0 0 0 0 0 0 0 0 0
  Move P             2 1.0 1.2479e+00 1.1 0.00e+00 0.0 1.2e+04 1.3e+01 3.6e+01 0 0 0 0 0 0 0 0 0 0 0
PCSetUp              11 1.0 2.0163e+01 1.3 2.02e+07 3.3 9.3e+06 3.9e+03 7.3e+02 0 0 0 0 0 0 0 0 0 0 1001
PCSetUpOnBlocks      96356 1.0 6.1786e+00 3.2 6.95e+03 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
PCApply              7412 1.0 1.6481e+04 1.0 1.96e+12 2.0 5.3e+10 2.8e+03 9.1e+05 97 88100 97 94 97 88100 97 94 209137
KSPSolve_FS_0        7412 1.0 3.9385e+03 1.0 4.31e+11 2.0 4.9e+09 6.8e+03 1.9e+05 23 19 9 22 19 23 19 9 22 19 190105
KSPSolve_FS_1        7412 1.0 4.3909e+03 1.0 5.03e+11 2.0 5.5e+09 6.8e+03 2.1e+05 26 22 10 24 22 26 22 10 24 22 198859
KSPSolve_FS_2        7412 1.0 5.1515e+03 1.0 6.32e+11 2.0 6.4e+09 6.8e+03 2.4e+05 30 28 12 29 25 30 28 12 29 25 212652
KSPSolve_FS_3        7412 1.0 2.8939e+03 1.0 2.66e+11 2.1 3.5e+10 6.1e+02 2.7e+05 17 11 67 14 28 17 11 67 14 28 148175
EPSSetUp             1 1.0 2.4281e+00 1.0 0.00e+00 0.0 9.1e+05 1.2e+04 1.8e+02 0 0 0 0 0 0 0 0 0 0 0
EPSSolve             1 1.0 1.6862e+04 1.0 2.22e+12 2.0 5.3e+10 2.9e+03 9.6e+05100100100100100 100100100100100 231352
STSetUp              1 1.0 2.2099e+00 1.1 0.00e+00 0.0 9.1e+05 1.2e+04 1.8e+02 0 0 0 0 0 0 0 0 0 0 0
STApply              55 1.0 1.6858e+04 1.0 2.22e+12 2.0 5.3e+10 2.9e+03 9.6e+05100100100100100 100100100100100 231287
STMatSolve           55 1.0 1.6857e+04 1.0 2.22e+12 2.0 5.3e+10 2.9e+03 9.6e+05100100100100100 100100100100100 231112
BVCopy               11 1.0 2.9671e-03 3.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
BVMultVec            110 1.0 2.4307e-01 2.0 4.33e+08 2.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2998161
BVMultInPlace        8 1.0 3.5421e-02 2.2 1.91e+08 2.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 9106553
BVDotVec             110 1.0 5.6104e+00 2.0 4.72e+08 2.0 0.0e+00 0.0e+00 1.1e+02 0 0 0 0 0 0 0 0 0 0 141694
BVOrthogonalizeV     56 1.0 5.7752e+00 1.9 9.05e+08 2.0 0.0e+00 0.0e+00 1.1e+02 0 0 0 0 0 0 0 0 0 0 263841
BVScale              56 1.0 4.2019e-03 1.6 1.00e+07 2.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 4010138
BVSetRandom          1 1.0 5.8031e-03 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
DSSolve              7 1.0 7.7277e-0236.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
DSVectors            90 1.0 1.1189e-0225.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
DSOther              7 1.0 6.6776e-0321.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFSetGraph           3 1.0 6.9141e-06 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFSetUp              3 1.0 2.6983e-01 9.3 0.00e+00 0.0 2.4e+05 1.1e+02 3.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFBcastBegin         67 1.0 2.4278e-0236.4 0.00e+00 0.0 4.4e+06 1.1e+02 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFBcastEnd           67 1.0 2.1362e-0121.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container    54             54        31536     0.
              Matrix   437            437    562055588     0.
      Matrix Coarsen     3              3         1932     0.
              Vector 30366          30366    108372264     0.
           Index Set   371            371      3832872     0.
   IS L to G Mapping     3              3       698736     0.
         Vec Scatter   156            156       809728     0.
       Krylov Solver    23             23      1401264     0.
      Preconditioner    22             22        21452     0.
          EPS Solver     1              1         2240     0.
  Spectral Transform     1              1          856     0.
              Viewer     2              1          848     0.
       Basis Vectors     1              1         9344     0.
         PetscRandom     7              7         4690     0.
              Region     1              1          680     0.
       Direct Solver     1              1        15736     0.
   Star Forest Graph     3              3         2640     0.
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
Average time for MPI_Barrier(): 6.33717e-05
Average time for zero size MPI_Send(): 4.55277e-06
#PETSc Option Table entries:
-eps_monitor_all
-eps_ncv 15
-eps_nev 5
-eps_target 1e-06+0.6i
-eps_tol 1e-6
-eps_type krylovschur
-log_view
-st_fieldsplit_pressure_ksp_type preonly
-st_fieldsplit_pressure_pc_composite_type additive
-st_fieldsplit_pressure_pc_type composite
-st_fieldsplit_pressure_sub_0_ksp_ksp_converged_reason
-st_fieldsplit_pressure_sub_0_ksp_ksp_rtol 1e-3
-st_fieldsplit_pressure_sub_0_ksp_ksp_type cg
-st_fieldsplit_pressure_sub_0_ksp_pc_type jacobi
-st_fieldsplit_pressure_sub_0_pc_type ksp
-st_fieldsplit_pressure_sub_1_ksp_ksp_converged_reason
-st_fieldsplit_pressure_sub_1_ksp_ksp_rtol 1e-3
-st_fieldsplit_pressure_sub_1_ksp_ksp_type gmres
-st_fieldsplit_pressure_sub_1_ksp_pc_gamg_square_graph 10
-st_fieldsplit_pressure_sub_1_ksp_pc_type gamg
-st_fieldsplit_pressure_sub_1_pc_type ksp
-st_pc_fieldsplit_type multiplicative
-st_pc_type fieldsplit
-st_type sinvert
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 16 sizeof(PetscInt) 4
Configure options: --with-blacs-include=/ccc/products/mkl-18.0.1.163/default/18.0.1.163/mkl/include --with-blacs-lib=/ccc/products/mkl-18.0.1.163/default/18.0.1.163/mkl/lib/intel64/libmkl_blacs_openmpi_lp64.so --with-blaslapack-dir=/ccc/products/mkl-18.0.1.163/default/18.0.1.163/mkl/lib/intel64 --with-debugging=0 --with-errorchecking=0 --with-fortran-bindings=0 --with-metis-dir=arch-linux2-c-opt-bullxmpi --with-mumps-dir=arch-linux2-c-opt-bullxmpi --with-parmetis-dir=arch-linux2-c-opt-bullxmpi --with-ptscotch-dir=arch-linux2-c-opt-bullxmpi --with-scalapack-include=/ccc/products/mkl-18.0.1.163/default/18.0.1.163/mkl/include --with-scalapack-lib="[/ccc/products/mkl-18.0.1.163/default/18.0.1.163/mkl/lib/intel64/libmkl_scalapack_lp64.so,/ccc/products/mkl-18.0.1.163/default/18.0.1.163/mkl/lib/intel64/libmkl_blacs_openmpi_lp64.so]" --with-scalar-type=complex --with-sowing-dir=arch-linux2-c-opt-bullxmpi --with-x=0 PETSC_ARCH=arch-linux2-c-opt-complex-bullxmpi
-----------------------------------------
Libraries compiled on 2018-07-08 20:17:01 on curie90
Machine characteristics: Linux-2.6.32-696.30.1.el6.Bull.140.x86_64-x86_64-with-redhat-6.9-Santiago
Using PETSc directory: /ccc/work/cont003/rndm/rndm/petsc
Using PETSc arch: arch-linux2-c-opt-complex-bullxmpi
-----------------------------------------
Using C compiler: mpicc -fPIC -wd1572 -g -O3
Using Fortran compiler: mpif90 -fPIC -g -O3
-----------------------------------------
Using include paths: -I/ccc/work/cont003/rndm/rndm/petsc/include -I/ccc/work/cont003/rndm/rndm/petsc/arch-linux2-c-opt-complex-bullxmpi/include -I/ccc/work/cont003/rndm/rndm/petsc/arch-linux2-c-opt-bullxmpi/include -I/ccc/products/mkl-18.0.1.163/default/18.0.1.163/mkl/lib/intel64/../../include
-----------------------------------------
Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/ccc/work/cont003/rndm/rndm/petsc/arch-linux2-c-opt-complex-bullxmpi/lib -L/ccc/work/cont003/rndm/rndm/petsc/arch-linux2-c-opt-complex-bullxmpi/lib -lpetsc -Wl,-rpath,/ccc/work/cont003/rndm/rndm/petsc/arch-linux2-c-opt-bullxmpi/lib -L/ccc/work/cont003/rndm/rndm/petsc/arch-linux2-c-opt-bullxmpi/lib -Wl,-rpath,/ccc/products/mkl-18.0.1.163/default/18.0.1.163/mkl/lib/intel64 -L/ccc/products/mkl-18.0.1.163/default/18.0.1.163/mkl/lib/intel64 -Wl,-rpath,/opt/mpi/bullxmpi/1.2.9.2/lib -L/opt/mpi/bullxmpi/1.2.9.2/lib -Wl,-rpath,/ccc/products/gcc-6.1.0/default/lib -L/ccc/products/gcc-6.1.0/default/lib -Wl,-rpath,/ccc/products2/ifort-18.0.1.163/BullEL_6__x86_64/default/compilers_and_libraries_2018.1.163/linux/compiler/lib/intel64_lin -L/ccc/products2/ifort-18.0.1.163/BullEL_6__x86_64/default/compilers_and_libraries_2018.1.163/linux/compiler/lib/intel64_lin -Wl,-rpath,/ccc/products2/gcc-6.1.0/BullEL_6__x86_64/default/lib/gcc/x86_64-pc-linux-gnu/6.1.0 -L/ccc/products2/gcc-6.1.0/BullEL_6__x86_64/default/lib/gcc/x86_64-pc-linux-gnu/6.1.0 -Wl,-rpath,/ccc/products2/gcc-6.1.0/BullEL_6__x86_64/default/lib/gcc -L/ccc/products2/gcc-6.1.0/BullEL_6__x86_64/default/lib/gcc -Wl,-rpath,/ccc/products/gcc-6.1.0/default/lib64 -L/ccc/products/gcc-6.1.0/default/lib64 -Wl,-rpath,/ccc/products2/gcc-6.1.0/BullEL_6__x86_64/default/lib64 -L/ccc/products2/gcc-6.1.0/BullEL_6__x86_64/default/lib64 -Wl,-rpath,/ccc/products2/gcc-6.1.0/BullEL_6__x86_64/default/lib -L/ccc/products2/gcc-6.1.0/BullEL_6__x86_64/default/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lparmetis -lmetis -lptesmumps -lptscotch -lptscotcherr -lesmumps -lscotch -lscotcherr -lstdc++ -ldl -lmpi_f90 -lmpi_f77 -lmpi -lm -lnuma -lrt -lnsl -lutil -limf -lifport -lifcoremt_pic -lsvml -lipgo -lirc -lpthread -lgcc_s -lirc_s -lstdc++ -ldl
-----------------------------------------

 ##########  ##########  ##########  ##########  ##########  ##########  ##########  ##########
                                        Execution Sum Up
 ##########  ##########  ##########  ##########  ##########  ##########  ##########  ##########

 Jobid     : 9963556
 Jobname   : Eig
 User      : jolivetp
 Account   : gen7519@standard
 Limits    : time = 10:00:00 , memory/task = 4000 Mo
 Date      : submit = 09/07/2018 21:28:37 , start = 09/07/2018 21:28:45
 Execution : partition = standard , QoS = normal , Comment = (null)
 Resources : ntasks = 2048 , cpus/task = 1 , ncpus = 2048 , nodes = 128
             Nodes=curie[2328-2363,2760-2768,2770-2777,4987-4989,5190-5207,5334-5351,5928-5945,6522-6539] CPU_IDs=0-15 Mem=64000

 Memory / step
 --------------
                  Resident Size (Mo)                   Virtual Size (Go)
 JobID        Max (Node:Task)          AveTask     Max (Node:Task)            AveTask
 -----------  ------------------------ -------     -------------------------- -------

 Accounting / step
 ------------------
 JobID         JobName         Ntasks   Ncpus   Nnodes   Layout    Elapsed   Ratio   CPusage   Eff    State
 ------------  ------------    ------   -----   ------   -------   -------   -----   -------   ---    -----
 9963556       Eig                  -    2048      128         -   04:42:13    100         -     -        -
 9963556.0     FreeFem++-mpi     2048    2048      128    BBlock   04:42:09   99.9   04:21:08  92.5   COMPLETED

 ##########  ##########  ##########  ##########  ##########  ##########  ##########  ##########
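(A minimal check, not part of the scheduler output: the Eff column of the accounting table above is the step's CPU usage divided by its elapsed wall-clock time. The hh:mm:ss values in the Python sketch below are copied from the 9963556.0 row; how the scheduler rounds or truncates the result is not specified here.)

    # Convert the scheduler's hh:mm:ss fields and recompute the reported efficiency.
    def seconds(hms):
        h, m, s = map(int, hms.split(":"))
        return 3600 * h + 60 * m + s

    elapsed = seconds("04:42:09")   # Elapsed for step 9963556.0
    cpusage = seconds("04:21:08")   # CPusage for step 9963556.0
    print(100.0 * cpusage / elapsed)   # ~92.55, consistent with the reported Eff of 92.5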