************************************************************************************************************************
***         WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document                ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/ccc/work/cont003/rndm/rndm/FreeFem-sources/src/mpi/FreeFem++-mpi on a arch-linux2-c-opt-complex-bullxmpi named curie1536 with 512 processors, by jolivetp Tue Jul 10 17:17:12 2018
Using Petsc Development GIT revision: v3.9.2-603-gceafe64  GIT Date: 2018-06-10 12:46:16 -0500

                         Max       Max/Min     Avg        Total
Time (sec):           3.914e+04   1.00000     3.914e+04
Objects:              3.092e+04   1.00003     3.092e+04
Flop:                 6.308e+12   1.64365     5.687e+12  2.912e+15
Flop/sec:             1.612e+08   1.64365     1.453e+08  7.439e+10
MPI Messages:         5.765e+07   7.26993     1.986e+07  1.017e+10
MPI Message Lengths:  2.085e+11   2.74277     6.708e+03  6.822e+13
MPI Reductions:       7.693e+05   1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flop
                            and VecAXPY() for complex vectors of length N --> 8N flop

Summary of Stages:   ----- Time ------  ----- Flop ------  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg      %Total    counts  %Total     Avg        %Total    counts  %Total
 0:      Main Stage: 3.9143e+04 100.0%  2.9119e+15 100.0%  1.017e+10 100.0%  6.708e+03     100.0%  7.693e+05 100.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flop: Max - maximum over all processors
                  Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase          %F - percent flop in this phase
      %M - percent messages in this phase      %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                 Count   Ratio   Time (sec)  Ratio   Flop     Ratio   Mess    Avg len  Reduct  %T %F %M %L %R   %T %F %M %L %R   Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided             99  1.0  3.0518e+00    6.1  0.00e+00  0.0  4.3e+04  4.0e+00  9.9e+01   0  0  0  0  0    0  0  0  0  0        0
BuildTwoSidedF            96  1.0  3.0232e+00    6.5  0.00e+00  0.0  4.4e+04  3.6e+03  9.6e+01   0  0  0  0  0    0  0  0  0  0        0
MatMult              1577790  1.0  3.1967e+03    1.2  4.48e+12  1.6  7.6e+09  5.6e+03  0.0e+00   7 71 75 63  0    7 71 75 63  0   650501
MatMultAdd            204786  1.0  1.3412e+02    5.5  1.50e+10  1.7  5.5e+08  2.7e+02  0.0e+00   0  0  5  0  0    0  0  5  0  0    50762
MatMultTranspose      204786  1.0  4.6790e+01    4.3  1.50e+10  1.7  5.5e+08  2.7e+02  0.0e+00   0  0  5  0  0    0  0  5  0  0   145505
MatSolve              308386  1.3  3.3958e+04    1.8  1.19e+08  0.0  0.0e+00  0.0e+00  0.0e+00  75  0  0  0  0   75  0  0  0  0        0
MatSOR               1228749  1.0  2.2734e+02    2.0  3.37e+11  2.0  0.0e+00  0.0e+00  0.0e+00   0  5  0  0  0    0  5  0  0  0   644166
MatLUFactorSym             4  1.0  4.2369e+00    1.6  0.00e+00  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
MatLUFactorNum             4  1.0  4.1001e+01    3.3  8.58e+03  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
MatConvert                 4  1.0  2.4988e-01    1.6  0.00e+00  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
MatScale                   9  1.0  9.5009e-02    2.1  1.11e+06  1.7  1.4e+04  1.6e+03  0.0e+00   0  0  0  0  0    0  0  0  0  0     5304
MatResidual           204786  1.0  5.4747e+01    1.7  6.06e+10  1.7  9.2e+08  1.6e+03  0.0e+00   0  1  9  2  0    0  1  9  2  0   503975
MatAssemblyBegin         276  1.0  4.9694e+00    5.5  0.00e+00  0.0  4.4e+04  3.6e+03  1.3e+02   0  0  0  0  0    0  0  0  0  0        0
MatAssemblyEnd           276  1.0  2.6348e+00    1.2  0.00e+00  0.0  3.3e+05  2.1e+03  2.6e+02   0  0  0  0  0    0  0  0  0  0        0
MatGetRow             358020  1.7  1.5754e-01    1.6  0.00e+00  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
MatGetRowIJ                4  1.3  6.1804e-02    1.6  0.00e+00  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
MatCreateSubMats           3  1.0  4.7321e-01    1.5  0.00e+00  0.0  8.9e+04  1.4e+05  3.0e+00   0  0  0  0  0    0  0  0  0  0        0
MatCreateSubMat           12  1.0  1.7808e+00    1.0  0.00e+00  0.0  2.0e+05  2.8e+04  1.9e+02   0  0  0  0  0    0  0  0  0  0        0
MatGetOrdering             4  1.3  5.2534e-01    1.8  0.00e+00  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
MatIncreaseOvrlp           3  1.0  1.4504e-01    2.7  0.00e+00  0.0  0.0e+00  0.0e+00  3.0e+00   0  0  0  0  0    0  0  0  0  0        0
MatCoarsen                 3  1.0  1.9934e-01    1.2  0.00e+00  0.0  3.5e+05  4.3e+02  3.7e+01   0  0  0  0  0    0  0  0  0  0        0
MatZeroEntries             4  1.0  6.7019e-04   85.2  0.00e+00  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
MatAXPY                    4  1.0  1.6480e+00    1.1  0.00e+00  0.0  1.2e+04  6.9e+03  1.4e+01   0  0  0  0  0    0  0  0  0  0        0
MatMatMult                56  1.0  1.5935e+01    1.1  3.34e+10  1.6  4.0e+05  2.2e+05  1.5e+02   0  1  0  0  0    0  1  0  0  0   970110
MatMatMultSym              3  1.0  2.4633e-01    1.1  0.00e+00  0.0  6.7e+04  5.0e+02  3.9e+01   0  0  0  0  0    0  0  0  0  0        0
MatMatMultNum             56  1.0  1.4204e+01    1.0  3.34e+10  1.6  3.3e+05  2.6e+05  1.1e+02   0  1  0  0  0    0  1  0  0  0  1088354
MatPtAP                    3  1.0  8.6670e-01    1.1  6.65e+06  1.7  1.6e+05  1.4e+03  4.9e+01   0  0  0  0  0    0  0  0  0  0     3504
MatPtAPSymbolic            3  1.0  2.4013e-01    1.0  0.00e+00  0.0  8.0e+04  1.8e+03  2.1e+01   0  0  0  0  0    0  0  0  0  0        0
MatPtAPNumeric             3  1.0  5.7762e-01    1.0  6.65e+06  1.7  8.2e+04  1.0e+03  2.7e+01   0  0  0  0  0    0  0  0  0  0     5258
MatTrnMatMult              3  1.0  1.0263e+00    1.0  3.11e+07  2.9  9.6e+04  1.4e+04  5.2e+01   0  0  0  0  0    0  0  0  0  0     7992
MatTrnMatMultSym           3  1.0  6.1593e-01    1.1  0.00e+00  0.0  8.0e+04  3.7e+03  4.1e+01   0  0  0  0  0    0  0  0  0  0        0
MatTrnMatMultNum           3  1.0  4.2675e-01    1.1  3.11e+07  2.9  1.6e+04  6.8e+04  1.1e+01   0  0  0  0  0    0  0  0  0  0    19220
MatGetLocalMat            14  1.0  3.5630e-02    9.9  0.00e+00  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
MatGetBrAoCol              9  1.0  9.5346e-02    6.6  0.00e+00  0.0  9.5e+04  1.9e+03  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
VecDot                 81038  1.0  8.9813e+00    1.9  4.65e+09  1.9  0.0e+00  0.0e+00  8.1e+04   0  0  0  0 11    0  0  0  0 11   222087
VecMDot               279302  1.0  1.4213e+04   32.3  5.61e+11  1.7  0.0e+00  0.0e+00  2.8e+05  12  9  0  0 36   12  9  0  0 36    17623
VecNorm               356257  1.0  1.5284e+03   11.3  1.10e+11  1.7  0.0e+00  0.0e+00  3.6e+05   1  2  0  0 46    1  2  0  0 46    32096
VecScale              330310  1.0  2.0543e+01    1.6  5.69e+10  1.7  0.0e+00  0.0e+00  0.0e+00   0  1  0  0  0    0  1  0  0  0  1237928
VecCopy               488637  1.0  2.4720e+01    1.6  0.00e+00  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
VecSet               1619927  1.0  1.0115e+02    1.6  0.00e+00  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
VecAXPY               117474  1.0  4.3116e+00    1.4  1.47e+10  1.8  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0  1521193
VecAYPX              1671521  1.0  6.6199e+00    1.9  2.25e+10  2.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0  1444337
VecAXPBYCZ            819144  1.0  8.2493e+00    1.9  4.11e+10  2.0  0.0e+00  0.0e+00  0.0e+00   0  1  0  0  0    0  1  0  0  0  2119776
VecMAXPY              308452  1.0  1.5663e+02    1.7  6.58e+11  1.7  0.0e+00  0.0e+00  0.0e+00   0 10  0  0  0    0 10  0  0  0  1876793
VecAssemblyBegin          20  1.0  6.9636e-02    2.7  0.00e+00  0.0  0.0e+00  0.0e+00  1.8e+01   0  0  0  0  0    0  0  0  0  0        0
VecAssemblyEnd            20  1.0  1.3635e-02  262.3  0.00e+00  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
VecPointwiseMult       47838  1.0  9.4340e-01    1.9  1.37e+09  1.9  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0   623825
VecScatterBegin      3006188  1.0  2.9468e+02    1.9  0.00e+00  0.0  1.0e+10  6.7e+03  0.0e+00   1  0 100 100  0   1  0 100 100  0      0
VecScatterEnd        2525940  1.0  1.5575e+03    6.3  0.00e+00  0.0  0.0e+00  0.0e+00  0.0e+00   1  0  0  0  0    1  0  0  0  0        0
VecSetRandom               4  1.0  1.5838e-02    1.7  0.00e+00  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
VecNormalize          308452  1.0  1.5323e+03   10.8  1.61e+11  1.7  0.0e+00  0.0e+00  3.1e+05   1  2  0  0 40    1  2  0  0 40    46869
KSPGMRESOrthog        279302  1.0  1.4301e+04   26.1  1.12e+12  1.7  0.0e+00  0.0e+00  2.8e+05  12 17  0  0 36   12 17  0  0 36    35027
KSPSetUp                  21  1.0  7.3652e-02    6.3  0.00e+00  0.0  0.0e+00  0.0e+00  1.0e+01   0  0  0  0  0    0  0  0  0  0        0
KSPSolve                  54  1.0  3.9037e+04    1.0  6.30e+12  1.6  1.0e+10  6.7e+03  7.7e+05  100 100 100 100 100   100 100 100 100 100    74467
PCGAMGGraph_AGG            3  1.0  1.0171e-01    1.0  8.87e+05  1.7  4.1e+04  6.8e+02  3.6e+01   0  0  0  0  0    0  0  0  0  0     3974
PCGAMGCoarse_AGG           3  1.0  1.4047e+00    1.0  3.11e+07  2.9  5.4e+05  3.3e+03  1.1e+02   0  0  0  0  0    0  0  0  0  0     5839
PCGAMGProl_AGG             3  1.0  2.8626e+00    1.0  0.00e+00  0.0  7.5e+04  1.5e+03  5.7e+01   0  0  0  0  0    0  0  0  0  0        0
PCGAMGPOpt_AGG             3  1.0  1.9892e+00    1.0  1.83e+07  1.8  2.2e+05  1.3e+03  1.3e+02   0  0  0  0  0    0  0  0  0  0     4062
GAMG: createProl           3  1.0  6.3449e+00    1.0  4.82e+07  2.3  8.7e+05  2.5e+03  3.4e+02   0  0  0  0  0    0  0  0  0  0     2630
  Graph                    6  1.0  1.0126e-01    1.5  8.87e+05  1.7  4.1e+04  6.8e+02  3.6e+01   0  0  0  0  0    0  0  0  0  0     3992
  MIS/Agg                  3  1.0  1.9955e-01    1.0  0.00e+00  0.0  3.5e+05  4.3e+02  3.7e+01   0  0  0  0  0    0  0  0  0  0        0
  SA: col data             3  1.0  6.9470e-02    1.2  0.00e+00  0.0  4.2e+04  2.5e+03  1.8e+01   0  0  0  0  0    0  0  0  0  0        0
  SA: frmProl0             3  1.0  2.7672e+00    1.0  0.00e+00  0.0  3.3e+04  3.0e+02  2.7e+01   0  0  0  0  0    0  0  0  0  0        0
  SA: smooth               3  1.0  1.8317e+00    5.6  1.11e+06  1.7  8.0e+04  6.9e+02  5.2e+01   0  0  0  0  0    0  0  0  0  0      275
GAMG: partLevel            3  1.0  1.7176e+00    1.0  6.65e+06  1.7  1.7e+05  1.4e+03  1.6e+02   0  0  0  0  0    0  0  0  0  0     1768
  repartition              2  1.0  1.4555e-01    1.0  0.00e+00  0.0  0.0e+00  0.0e+00  1.2e+01   0  0  0  0  0    0  0  0  0  0        0
  Invert-Sort              2  1.0  2.1987e-01    1.4  0.00e+00  0.0  0.0e+00  0.0e+00  8.0e+00   0  0  0  0  0    0  0  0  0  0        0
  Move A                   2  1.0  1.5709e-01    1.1  0.00e+00  0.0  3.6e+03  7.1e+02  3.6e+01   0  0  0  0  0    0  0  0  0  0        0
  Move P                   2  1.0  2.5289e-01    1.1  0.00e+00  0.0  4.3e+03  1.6e+01  3.6e+01   0  0  0  0  0    0  0  0  0  0        0
PCSetUp                   11  1.0  5.6209e+01    2.1  5.44e+07  2.2  1.4e+06  1.5e+04  7.1e+02   0  0  0  0  0    0  0  0  0  0      351
PCSetUpOnBlocks        90120  1.0  4.5512e+01    2.8  8.58e+03  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
PCApply                 7286  1.0  3.7974e+04    1.0  5.34e+12  1.6  1.0e+10  6.5e+03  7.2e+05  97 85 100 96 93   97 85 100 96 93    64924
KSPSolve_FS_0           7286  1.0  9.8110e+03    1.0  9.48e+11  1.6  7.2e+08  1.7e+04  1.2e+05  25 15  7 18 16   25 15  7 18 16    44106
KSPSolve_FS_1           7286  1.0  1.2586e+04    1.0  1.32e+12  1.6  9.3e+08  1.7e+04  1.6e+05  32 21  9 24 20   32 21  9 24 20    47889
KSPSolve_FS_2           7286  1.0  1.4417e+04    1.0  1.60e+12  1.7  1.1e+09  1.7e+04  1.8e+05  37 25 11 27 23   37 25 11 27 23    50464
KSPSolve_FS_3           7286  1.0  7.5506e+02    1.0  9.14e+11  1.8  7.3e+09  1.5e+03  2.6e+05   2 14 71 16 34    2 14 71 16 34   539009
EPSSetUp                   1  1.0  3.5744e+00    1.0  0.00e+00  0.0  2.1e+05  3.1e+04  1.8e+02   0  0  0  0  0    0  0  0  0  0        0
EPSSolve                   1  1.0  3.9044e+04    1.0  6.31e+12  1.6  1.0e+10  6.7e+03  7.7e+05  100 100 100 100 100   100 100 100 100 100    74580
STSetUp                    1  1.0  3.4491e+00    1.0  0.00e+00  0.0  2.1e+05  3.1e+04  1.8e+02   0  0  0  0  0    0  0  0  0  0        0
STApply                   54  1.0  3.9041e+04    1.0  6.30e+12  1.6  1.0e+10  6.7e+03  7.7e+05  100 100 100 100 100   100 100 100 100 100    74539
STMatSolve                54  1.0  3.9037e+04    1.0  6.30e+12  1.6  1.0e+10  6.7e+03  7.7e+05  100 100 100 100 100   100 100 100 100 100    74467
BVCopy                    11  1.0  7.0257e-03    2.1  0.00e+00  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
BVMultVec                108  1.0  5.3077e-01    1.7  1.61e+09  1.7  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0  1352635
BVMultInPlace              8  1.0  1.1700e-01    1.7  6.91e+08  1.7  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0  2638653
BVDotVec                 108  1.0  6.0818e+00    2.8  1.75e+09  1.7  0.0e+00  0.0e+00  1.1e+02   0  0  0  0  0    0  0  0  0  0   128734
BVOrthogonalizeV          55  1.0  6.3981e+00    2.4  3.36e+09  1.7  0.0e+00  0.0e+00  1.1e+02   0  0  0  0  0    0  0  0  0  0   234579
BVScale                   55  1.0  1.8698e-02    2.0  3.70e+07  1.7  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0   885105
BVSetRandom                1  1.0  1.5763e-02    1.7  0.00e+00  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
DSSolve                    7  1.0  4.5462e-02   37.4  0.00e+00  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
DSVectors                 88  1.0  1.3854e-02   32.1  0.00e+00  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
DSOther                    7  1.0  3.8334e-02  172.7  0.00e+00  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
SFSetGraph                 3  1.0  1.0014e-05    0.0  0.00e+00  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
SFSetUp                    3  1.0  1.0215e-01   12.2  0.00e+00  0.0  4.2e+04  3.2e+02  3.0e+00   0  0  0  0  0    0  0  0  0  0        0
SFBcastBegin              40  1.0  2.0697e-02   42.3  0.00e+00  0.0  3.1e+05  4.5e+02  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
SFBcastEnd                40  1.0  4.5877e-02   14.4  0.00e+00  0.0  0.0e+00  0.0e+00  0.0e+00   0  0  0  0  0    0  0  0  0  0        0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions       Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

          Container        53            53         30952     0.
             Matrix       432           432    1963070496     0.
     Matrix Coarsen         3             3          1932     0.
             Vector     29846         29846     244734008     0.
          Index Set       367           367      11133092     0.
  IS L to G Mapping         3             3       2250336     0.
        Vec Scatter       154           154       2638216     0.
      Krylov Solver        23            23       1401264     0.
     Preconditioner        22            22         21452     0.
         EPS Solver         1             1          2240     0.
 Spectral Transform         1             1           856     0.
             Viewer         2             1           848     0.
      Basis Vectors         1             1          9344     0.
        PetscRandom         7             7          4690     0.
             Region         1             1           680     0.
      Direct Solver         1             1         15736     0.
  Star Forest Graph         3             3          2640     0.
========================================================================================================================
Average time to get PetscTime(): 1.19209e-07
Average time for MPI_Barrier(): 2.8038e-05
Average time for zero size MPI_Send(): 5.38072e-06
#PETSc Option Table entries:
-eps_monitor_all
-eps_ncv 15
-eps_nev 5
-eps_target 1e-06+0.6i
-eps_tol 1e-6
-eps_type krylovschur
-log_view
-st_fieldsplit_pressure_ksp_type preonly
-st_fieldsplit_pressure_pc_composite_type additive
-st_fieldsplit_pressure_pc_type composite
-st_fieldsplit_pressure_sub_0_ksp_ksp_converged_reason
-st_fieldsplit_pressure_sub_0_ksp_ksp_rtol 1e-3
-st_fieldsplit_pressure_sub_0_ksp_ksp_type cg
-st_fieldsplit_pressure_sub_0_ksp_pc_type jacobi
-st_fieldsplit_pressure_sub_0_pc_type ksp
-st_fieldsplit_pressure_sub_1_ksp_ksp_converged_reason
-st_fieldsplit_pressure_sub_1_ksp_ksp_rtol 1e-3
-st_fieldsplit_pressure_sub_1_ksp_ksp_type gmres
-st_fieldsplit_pressure_sub_1_ksp_pc_gamg_square_graph 10
-st_fieldsplit_pressure_sub_1_ksp_pc_type gamg
-st_fieldsplit_pressure_sub_1_pc_type ksp
-st_pc_fieldsplit_type multiplicative
-st_pc_type fieldsplit
-st_type sinvert
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 16 sizeof(PetscInt) 4
Configure options: --with-blacs-include=/ccc/products/mkl-18.0.1.163/default/18.0.1.163/mkl/include --with-blacs-lib=/ccc/products/mkl-18.0.1.163/default/18.0.1.163/mkl/lib/intel64/libmkl_blacs_openmpi_lp64.so --with-blaslapack-dir=/ccc/products/mkl-18.0.1.163/default/18.0.1.163/mkl/lib/intel64 --with-debugging=0 --with-errorchecking=0 --with-fortran-bindings=0 --with-metis-dir=arch-linux2-c-opt-bullxmpi --with-mumps-dir=arch-linux2-c-opt-bullxmpi --with-parmetis-dir=arch-linux2-c-opt-bullxmpi --with-ptscotch-dir=arch-linux2-c-opt-bullxmpi --with-scalapack-include=/ccc/products/mkl-18.0.1.163/default/18.0.1.163/mkl/include --with-scalapack-lib="[/ccc/products/mkl-18.0.1.163/default/18.0.1.163/mkl/lib/intel64/libmkl_scalapack_lp64.so,/ccc/products/mkl-18.0.1.163/default/18.0.1.163/mkl/lib/intel64/libmkl_blacs_openmpi_lp64.so]" --with-scalar-type=complex --with-sowing-dir=arch-linux2-c-opt-bullxmpi --with-x=0 PETSC_ARCH=arch-linux2-c-opt-complex-bullxmpi
-----------------------------------------
Libraries compiled on 2018-07-08 20:17:01 on curie90
Machine characteristics: Linux-2.6.32-696.30.1.el6.Bull.140.x86_64-x86_64-with-redhat-6.9-Santiago
Using PETSc directory: /ccc/work/cont003/rndm/rndm/petsc
Using PETSc arch: arch-linux2-c-opt-complex-bullxmpi
-----------------------------------------
Using C compiler: mpicc -fPIC -wd1572 -g -O3
Using Fortran compiler: mpif90 -fPIC -g -O3
-----------------------------------------
Using include paths: -I/ccc/work/cont003/rndm/rndm/petsc/include -I/ccc/work/cont003/rndm/rndm/petsc/arch-linux2-c-opt-complex-bullxmpi/include -I/ccc/work/cont003/rndm/rndm/petsc/arch-linux2-c-opt-bullxmpi/include -I/ccc/products/mkl-18.0.1.163/default/18.0.1.163/mkl/lib/intel64/../../include
-----------------------------------------
Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/ccc/work/cont003/rndm/rndm/petsc/arch-linux2-c-opt-complex-bullxmpi/lib -L/ccc/work/cont003/rndm/rndm/petsc/arch-linux2-c-opt-complex-bullxmpi/lib -lpetsc -Wl,-rpath,/ccc/work/cont003/rndm/rndm/petsc/arch-linux2-c-opt-bullxmpi/lib -L/ccc/work/cont003/rndm/rndm/petsc/arch-linux2-c-opt-bullxmpi/lib -Wl,-rpath,/ccc/products/mkl-18.0.1.163/default/18.0.1.163/mkl/lib/intel64 -L/ccc/products/mkl-18.0.1.163/default/18.0.1.163/mkl/lib/intel64 -Wl,-rpath,/opt/mpi/bullxmpi/1.2.9.2/lib -L/opt/mpi/bullxmpi/1.2.9.2/lib -Wl,-rpath,/ccc/products/gcc-6.1.0/default/lib -L/ccc/products/gcc-6.1.0/default/lib -Wl,-rpath,/ccc/products2/ifort-18.0.1.163/BullEL_6__x86_64/default/compilers_and_libraries_2018.1.163/linux/compiler/lib/intel64_lin -L/ccc/products2/ifort-18.0.1.163/BullEL_6__x86_64/default/compilers_and_libraries_2018.1.163/linux/compiler/lib/intel64_lin -Wl,-rpath,/ccc/products2/gcc-6.1.0/BullEL_6__x86_64/default/lib/gcc/x86_64-pc-linux-gnu/6.1.0 -L/ccc/products2/gcc-6.1.0/BullEL_6__x86_64/default/lib/gcc/x86_64-pc-linux-gnu/6.1.0 -Wl,-rpath,/ccc/products2/gcc-6.1.0/BullEL_6__x86_64/default/lib/gcc -L/ccc/products2/gcc-6.1.0/BullEL_6__x86_64/default/lib/gcc -Wl,-rpath,/ccc/products/gcc-6.1.0/default/lib64 -L/ccc/products/gcc-6.1.0/default/lib64 -Wl,-rpath,/ccc/products2/gcc-6.1.0/BullEL_6__x86_64/default/lib64 -L/ccc/products2/gcc-6.1.0/BullEL_6__x86_64/default/lib64 -Wl,-rpath,/ccc/products2/gcc-6.1.0/BullEL_6__x86_64/default/lib -L/ccc/products2/gcc-6.1.0/BullEL_6__x86_64/default/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lparmetis -lmetis -lptesmumps -lptscotch -lptscotcherr -lesmumps -lscotch -lscotcherr -lstdc++ -ldl -lmpi_f90 -lmpi_f77 -lmpi -lm -lnuma -lrt -lnsl -lutil -limf -lifport -lifcoremt_pic -lsvml -lipgo -lirc -lpthread -lgcc_s -lirc_s -lstdc++ -ldl
-----------------------------------------

########## ########## ########## ########## ########## ########## ########## ##########
                                   Execution Sum Up
########## ########## ########## ########## ########## ########## ########## ##########

Jobid     : 9964350
Jobname   : Eig
User      : jolivetp
Account   : gen7519@standard
Limits    : time = 20:00:00 , memory/task = 4000 Mo
Date      : submit = 10/07/2018 06:24:25 , start = 10/07/2018 06:24:25
Execution : partition = standard , QoS = normal , Comment = (null)
Resources : ntasks = 512 , cpus/task = 2 , ncpus = 1024 , nodes = 64

Nodes=curie[1536-1537,1539-1542,1546,1548,1550,1552,1626-1643,4380-4397,4452-4469] CPU_IDs=0-15 Mem=64000

Memory / step
--------------
                     Resident Size (Mo)                 Virtual Size (Go)
JobID        Max (Node:Task)          AveTask   Max (Node:Task)            AveTask
-----------  ------------------------ -------   -------------------------- -------

Accounting / step
------------------

JobID        JobName        Ntasks  Ncpus  Nnodes  Layout   Elapsed   Ratio  CPusage   Eff   State
------------ ------------   ------  -----  ------  -------  --------  -----  --------  ----  ---------
9964350      Eig                 -   1024      64  -        10:52:50  100           -     -  -
9964350.0    FreeFem++-mpi     512   1024      64  BBlock   10:52:40  99.9   09:30:19  87.3  COMPLETED

########## ########## ########## ########## ########## ########## ########## ##########
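The phase summary legend near the top of the log defines each event-row column (Count/Ratio, Time, Flop, Mess, Avg len, Reduct, the Global and Stage percentage groups, and Total Mflop/s). A minimal Python sketch of mapping one row to those named fields follows; `parse_event_row` is a hypothetical helper, not part of PETSc, and it assumes whitespace-separated fields (in raw -log_view output a very wide ratio can fuse with the preceding column, e.g. `1.4213e+0432.3`, which this naive split would not handle).

```python
# Hypothetical helper: map one PETSc -log_view event row to named fields,
# following the column legend printed in the log header. Assumes the row's
# fields are whitespace-separated.

def parse_event_row(line):
    parts = line.split()
    vals = parts[1:]
    return {
        "name": parts[0],
        "count_max": int(vals[0]),          # Count: Max over ranks
        "count_ratio": float(vals[1]),      # Count: Max/Min ratio
        "time_max": float(vals[2]),         # Time (sec): Max
        "time_ratio": float(vals[3]),
        "flop_max": float(vals[4]),         # Flop: Max
        "flop_ratio": float(vals[5]),
        "messages": float(vals[6]),         # Mess
        "avg_len": float(vals[7]),          # Avg len (bytes)
        "reductions": float(vals[8]),       # Reduct
        "global_pct": [int(v) for v in vals[9:14]],   # %T %F %M %L %R
        "stage_pct": [int(v) for v in vals[14:19]],   # %T %F %M %L %R
        "mflops": float(vals[19]),          # Total Mflop/s
    }

# The MatMult row from the table above, dominated by %F (71% of all flop):
row = parse_event_row(
    "MatMult 1577790 1.0 3.1967e+03 1.2 4.48e+12 1.6 "
    "7.6e+09 5.6e+03 0.0e+00 7 71 75 63 0 7 71 75 63 0 650501"
)
print(row["name"], row["time_max"], row["mflops"])
```

Reading rows this way makes it easy to sort events by `time_max` or `global_pct` and confirm, for instance, that MatSolve (75% of wall time, the MUMPS factorized solves behind the shift-and-invert ST) dominates this run.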