Sender: LSF System
Subject: Job 98005: in cluster Done

Job was submitted from host by user in cluster at Thu Feb 18 11:50:03 2021
Job was executed on host(s) <36*c1u26n02>, in queue , as user in cluster at Thu Feb 18 11:50:04 2021
                            <36*c6u01n03>
                            <36*c6u01n04>
                            <36*a6u24n01>
                            <36*a6u24n02>
                            <36*a6u12n03>
                            <36*a6u19n04>
                            <36*a6u12n04>
                            <36*c6u15n02>
                            <36*c6u15n03>
                            <36*c6u15n04>
                            <36*a3u22n02>
                            <36*a3u10n01>
                            <36*a3u17n02>
                            <36*a3u17n03>
                            <36*c4u08n01>
                            <36*a3u17n04>
                            <36*c4u08n02>
                            <36*c4u01n01>
                            <36*c4u01n02>
                            <36*a6u26n01>
                            <36*a6u26n02>
                            <36*c3u08n01>
                            <36*c3u08n02>
                            <36*c3u01n02>
                            <36*c3u08n03>
                            <36*c6u22n02>
                            <36*c3u08n04>
                            <36*c6u22n04>
                            <36*c6u17n01>
                            <36*c6u10n01>
                            <36*c6u10n02>
was used as the home directory.
was used as the working directory.
Started at Thu Feb 18 11:50:04 2021
Terminated at Thu Feb 18 11:51:22 2021
Results reported at Thu Feb 18 11:51:22 2021

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
#########################################################################
# File Name: lsf.sh
#########################################################################
#!/bin/bash
#BSUB -J myjob
#BSUB -n 1152
#BSUB -o %J.lsf.out
#BSUB -e %J.lsf.err
#BSUB -W 60
#BSUB -q batch
#BSUB -R "span[ptile=36]"
module purge
module load mpi/mvapich2-2.3.5-gcc-10.2.0
cd $LS_SUBCWD
mpirun FreeFem++-mpi poisson-2d-PETSc.edp -v 0 -nn 4096 -log_view
------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time :                                   82179.11 sec.
    Max Memory :                                 3423707 MB
    Average Memory :                             1087270.25 MB
    Total Requested Memory :                     -
    Delta Memory :                               -
    Max Swap :                                   -
    Max Processes :                              1183
    Max Threads :                                4533
    Run time :                                   77 sec.
    Turnaround time :                            79 sec.

The output (if any) follows:

Linear solve converged due to CONVERGED_RTOL iterations 5398
--------------------------------------------
ThG.nt = 33554432
VhG.ndof = 16785409
||u - uh||_0 = 3.49928e-07
||u - uh||_a = 0.00340765
--------------------------------------------
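A quick consistency check on the reported mesh and finite-element sizes (assuming poisson-2d-PETSc.edp, which is not reproduced in this log, builds a uniform nn-by-nn structured grid with each square split into two triangles and discretized with P1 elements): with -nn 4096,

    ThG.nt   = 2 * 4096^2    = 33554432   triangles
    VhG.ndof = (4096 + 1)^2  = 16785409   degrees of freedom

both matching the values printed above.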
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

FreeFem++-mpi on a  named c1u26n02 with 1152 processors, by dongxy Thu Feb 18 11:51:09 2021
Using Petsc Release Version 3.14.2, Dec 03, 2020

                         Max       Max/Min     Avg       Total
Time (sec):           5.989e+01     1.000   5.989e+01
Objects:              2.500e+01     1.000   2.500e+01
Flop:                 2.823e+09     1.052   2.753e+09  3.171e+12
Flop/sec:             4.714e+07     1.052   4.597e+07  5.295e+10
MPI Messages:         4.324e+04     4.000   3.103e+04  3.575e+07
MPI Message Lengths:  2.696e+07     2.709   6.902e+02  2.467e+10
MPI Reductions:       2.600e+01     1.000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flop
                            and VecAXPY() for complex vectors of length N --> 8N flop

Summary of Stages:   ----- Time ------  ----- Flop ------  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total    Count   %Total     Avg        %Total    Count   %Total
 0:              Main Stage: 4.0990e+01  68.4%  0.0000e+00   0.0%  2.976e+04   0.1%  1.336e+03   0.2%  1.500e+01  57.7%
 1: Calling KSPSolve()...:   1.8899e+01  31.6%  3.1713e+12 100.0%  3.572e+07  99.9%  6.897e+02  99.8%  4.000e+00  15.4%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flop: Max - maximum over all processors
                  Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   AvgLen: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flop in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)      Flop                               --- Global ---  --- Stage ----  Total
                   Max Ratio  Max      Ratio   Max  Ratio  Mess    AvgLen  Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided          1 1.0 1.7054e-02  2.9 0.00e+00 0.0 3.3e+03 4.0e+00 0.0e+00  0  0  0  0  0   0  0 11  0  0     0
BuildTwoSidedF         1 1.0 5.1785e-04 15.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyBegin       2 1.0 5.6624e-04  8.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd         2 1.0 3.5110e-02  1.9 0.00e+00 0.0 9.9e+03 2.3e+02 4.0e+00  0  0  0  0 15   0  0 33  6 27     0
VecSet                 1 1.0 3.6001e-05 10.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph             1 1.0 4.8876e-05 25.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetUp                1 1.0 1.7292e-02  1.8 0.00e+00 0.0 9.9e+03 2.3e+02 0.0e+00  0  0  0  0  0   0  0 33  6  0     0

--- Event Stage 1: Calling KSPSolve()...
MatMult             5400 1.0 1.5915e+00  1.8 1.05e+09 1.1 3.6e+07 6.9e+02 0.0e+00  2  37 100 100  0    6  37 100 100  0  740161
VecCopy             5405 1.0 6.2853e-02  1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0   0   0   0  0    0   0   0   0  0       0
VecSet                 1 1.0 5.2214e-05  5.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0   0   0   0  0    0   0   0   0  0       0
VecAXPY            21592 1.0 2.3194e-01  1.4 6.45e+08 1.1 0.0e+00 0.0e+00 0.0e+00  0  23   0   0  0    1  23   0   0  0 3125164
VecAYPX            21588 1.0 5.5059e-01  1.7 6.45e+08 1.1 0.0e+00 0.0e+00 0.0e+00  1  23   0   0  0    2  23   0   0  0 1316272
VecScatterBegin     5400 1.0 7.0132e-02  3.7 0.00e+00 0.0 3.6e+07 6.9e+02 0.0e+00  0   0 100 100  0    0   0 100 100  0       0
VecScatterEnd       5400 1.0 6.5329e-01 22.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0   0   0   0  0    1   0   0   0  0       0
VecReduceArith     16197 1.0 3.1135e-01  4.7 4.84e+08 1.1 0.0e+00 0.0e+00 0.0e+00  0  17   0   0  0    1  17   0   0  0 1746339
VecReduceBegin      5400 1.0 3.1471e-02  2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0   0   0   0  0    0   0   0   0  0       0
VecReduceEnd        5400 1.0 1.7226e+01  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 28   0   0   0  0   90   0   0   0  0       0
SFBcastOpBegin      5400 1.0 6.6228e-02  4.1 0.00e+00 0.0 3.6e+07 6.9e+02 0.0e+00  0   0 100 100  0    0   0 100 100  0       0
SFBcastOpEnd        5400 1.0 6.5000e-01 24.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0   0   0   0  0    1   0   0   0  0       0
SFPack              5400 1.0 7.4158e-03  2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0   0   0   0  0    0   0   0   0  0       0
SFUnpack            5400 1.0 1.6499e-03  3.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0   0   0   0  0    0   0   0   0  0       0
KSPSetUp               1 1.0 4.8900e-04  4.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0   0   0   0  0    0   0   0   0  0       0
KSPSolve               1 1.0 1.8893e+01  1.0 2.82e+09 1.1 3.6e+07 6.9e+02 0.0e+00 32 100 100 100  0  100 100 100 100  0  167860
PCSetUp                1 1.0 5.4359e-05 32.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0   0   0   0  0    0   0   0   0  0       0
PCApply             5400 1.0 6.4616e-02  1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0   0   0   0  0    0   0   0   0  0       0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Matrix     4              4      3777360     0.
         Vec Scatter     1              1          800     0.
              Vector     2             13      1280032     0.
           Index Set     2              2         3708     0.
   Star Forest Graph     1              1         1136     0.
       Krylov Solver     1              1         1408     0.
      Preconditioner     1              1          832     0.
              Viewer     2              1          840     0.

--- Event Stage 1: Calling KSPSolve()...

              Vector    11              0            0     0.
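The "Total Mflop/s" column can be checked against the formula quoted in the phase-summary legend above. Worked through for the KSPSolve row: the stage performed 3.1713e+12 flop in a maximum time of 1.8893e+01 s, i.e. about 1.68e+05 Mflop/s in total, matching the reported 167860; spread over the 1152 MPI processes this is roughly 146 Mflop/s per process. The table also shows VecReduceEnd accounting for about 17.2 s of the 18.9 s stage (90% of KSPSolve), so the time of this pipelined CG solve is dominated by global reductions rather than by local flop work.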
========================================================================================================================
Average time to get PetscTime(): 2.38419e-08
Average time for MPI_Barrier(): 7.20501e-05
Average time for zero size MPI_Send(): 9.06529e-06
#PETSc Option Table entries:
-ksp_converged_reason
-ksp_rtol 1e-8
-ksp_type pipecg
-log_view
-nn 4096
-pc_type none
-v 0
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --prefix=/share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r MAKEFLAGS= --with-debugging=0 COPTFLAGS="-O3 -mtune=native" CXXOPTFLAGS="-O3 -mtune=native" FOPTFLAGS="-O3 -mtune=native" --with-cxx-dialect=C++11 --with-ssl=0 --with-x=0 --with-fortran-bindings=0 --with-cudac=0 --with-cc=/soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpicc --with-cxx=/soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpic++ --with-fc=/soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpif90 --with-scalar-type=real --with-blaslapack-include=/soft/apps/intel/oneapi_base_2021/mkl/latest/include --with-blaslapack-lib="-Wl,-rpath,/soft/apps/intel/oneapi_base_2021/mkl/latest/lib/intel64 -L/soft/apps/intel/oneapi_base_2021/mkl/latest/lib/intel64 -lmkl_rt -lmkl_sequential -lmkl_core -lpthread" --download-metis --download-ptscotch --download-hypre --download-ml --download-parmetis --download-superlu_dist --download-suitesparse --download-tetgen --download-slepc --download-elemental --download-hpddm --download-scalapack --download-mumps --download-slepc-configure-arguments=--download-arpack=https://github.com/prj-/arpack-ng/archive/6d11c37b2dc9110f3f6a434029353ae1c5112227.tar.gz PETSC_ARCH=fr
-----------------------------------------
Libraries compiled on 2021-01-24 05:31:11 on ln02
Machine characteristics: Linux-3.10.0-957.el7.x86_64-x86_64-with-redhat-7.6-Maipo
Using PETSc directory: /share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r
Using PETSc arch:
-----------------------------------------

Using C compiler: /soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -O3 -mtune=native
Using Fortran compiler: /soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpif90 -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -O3 -mtune=native
-----------------------------------------

Using include paths: -I/share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r/include -I/soft/apps/intel/oneapi_base_2021/mkl/latest/include
-----------------------------------------

Using C linker: /soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpicc
Using Fortran linker: /soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpif90
Using libraries: -Wl,-rpath,/share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r/lib -L/share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r/lib -lpetsc -Wl,-rpath,/share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r/lib -L/share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r/lib -Wl,-rpath,/soft/apps/intel/oneapi_base_2021/mkl/latest/lib/intel64 -L/soft/apps/intel/oneapi_base_2021/mkl/latest/lib/intel64 -Wl,-rpath,/soft/apps/mvapich2/2.3.5-gcc-10.2.0/lib -L/soft/apps/mvapich2/2.3.5-gcc-10.2.0/lib -Wl,-rpath,/soft/apps/gcc/gcc-10.2.0/lib64 -L/soft/apps/gcc/gcc-10.2.0/lib64 -Wl,-rpath,/soft/apps/gcc/gcc-10.2.0/lib/gcc/x86_64-pc-linux-gnu/10.2.0 -L/soft/apps/gcc/gcc-10.2.0/lib/gcc/x86_64-pc-linux-gnu/10.2.0 -Wl,-rpath,/soft/apps/gcc/gcc-10.2.0/lib -L/soft/apps/gcc/gcc-10.2.0/lib -lHYPRE -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lumfpack -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig -lsuperlu_dist -lml -lEl -lElSuiteSparse -lpmrrr -lmkl_rt -lmkl_sequential -lmkl_core -lpthread -lptesmumps -lptscotchparmetis -lptscotch -lptscotcherr -lesmumps -lscotch -lscotcherr -lpthread -lparmetis -lmetis -ltet -lm -lstdc++ -ldl -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lrt -lquadmath -lstdc++ -ldl
-----------------------------------------
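For reference, the option table above records every option that reached PETSc's options database during the run. Only -v 0, -nn 4096 and -log_view were passed on the mpirun line in lsf.sh, so the solver settings (-ksp_type pipecg, -pc_type none, -ksp_rtol 1e-8, -ksp_converged_reason) were evidently inserted from inside poisson-2d-PETSc.edp, which is not reproduced in this log. Since command-line options do reach the database (that is how -log_view took effect), an equivalent run could also supply them explicitly, provided the script's KSP reads un-prefixed options. A sketch, not the command actually used:

    # hypothetical command line: pass the Krylov options directly
    # instead of setting them inside the .edp script
    mpirun FreeFem++-mpi poisson-2d-PETSc.edp -v 0 -nn 4096 \
        -ksp_type pipecg -pc_type none -ksp_rtol 1e-8 \
        -ksp_converged_reason -log_view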