Sender: LSF System
Subject: Job 98002: in cluster Done

Job was submitted from host by user in cluster at Thu Feb 18 11:42:59 2021
Job was executed on host(s) <36*c3u26n02>, in queue, as user in cluster at Thu Feb 18 11:43:00 2021
                            <36*c1u24n03>
                            <36*c1u19n03>
                            <36*c2u26n02>
                            <36*a6u22n04>
                            <36*a6u10n04>
                            <36*a6u05n03>
                            <36*c1u26n04>
                            <36*c6u08n01>
                            <36*c6u01n01>
                            <36*c6u08n02>
                            <36*c6u01n02>
                            <36*c6u08n03>
                            <36*a3u15n01>
                            <36*c6u01n04>
                            <36*a6u24n01>
                            <36*a6u24n02>
                            <36*a3u03n03>
                            <36*a6u19n02>
                            <36*a6u12n03>
                            <36*a6u19n04>
                            <36*a6u12n04>
                            <36*c6u15n02>
                            <36*c6u15n03>
                            <36*c6u15n04>
                            <36*a3u22n01>
                            <36*c6u03n01>
                            <36*a3u22n03>
                            <36*a3u22n04>
                            <36*a3u17n01>
                            <36*c6u03n04>
                            <36*a3u10n01>
was used as the home directory.
was used as the working directory.
Started at Thu Feb 18 11:43:00 2021
Terminated at Thu Feb 18 11:44:08 2021
Results reported at Thu Feb 18 11:44:08 2021

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
#########################################################################
# File Name: lsf.sh
#########################################################################
#!/bin/bash
#BSUB -J myjob
#BSUB -n 1152
#BSUB -o %J.lsf.out
#BSUB -e %J.lsf.err
#BSUB -W 60
#BSUB -q batch
#BSUB -R "span[ptile=36]"

module purge
module load mpi/mvapich2-2.3.5-gcc-10.2.0

cd $LS_SUBCWD
mpirun FreeFem++-mpi poisson-2d-PETSc.edp -v 0 -nn 4096 -log_view
------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time :                 70748.41 sec.
    Max Memory :               3529136 MB
    Average Memory :           138442.45 MB
    Total Requested Memory :   -
    Delta Memory :             -
    Max Swap :                 -
    Max Processes :            147
    Max Threads :              473
    Run time :                 67 sec.
    Turnaround time :          69 sec.

The output (if any) follows:

Linear solve converged due to CONVERGED_RTOL iterations 5398
--------------------------------------------
ThG.nt       = 33554432
VhG.ndof     = 16785409
||u - uh||_0 = 3.49928e-07
||u - uh||_a = 0.00340765
--------------------------------------------
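(For reference, a batch script like the lsf.sh shown above is normally handed to LSF with bsub and monitored with bjobs/bpeek; the commands below are an illustrative sketch and assume the script sits in the submission directory.)

    # submit the script; bsub reads the #BSUB directives from stdin
    bsub < lsf.sh

    # inspect the job's state and the hosts it was dispatched to (98002 is this run's job ID)
    bjobs -l 98002

    # peek at the job's stdout while it runs, before %J.lsf.out is written at completion
    bpeek 98002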
************************************************************************************************************************
***            WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

FreeFem++-mpi on a named c3u26n02 with 1152 processors, by dongxy Thu Feb 18 11:44:06 2021
Using Petsc Release Version 3.14.2, Dec 03, 2020

                         Max       Max/Min     Avg       Total
Time (sec):           5.959e+01     1.000   5.959e+01
Objects:              1.900e+01     1.000   1.900e+01
Flop:                 2.016e+09     1.052   1.966e+09  2.265e+12
Flop/sec:             3.384e+07     1.052   3.299e+07  3.801e+10
MPI Messages:         4.322e+04     4.000   3.102e+04  3.573e+07
MPI Message Lengths:  2.695e+07     2.709   6.902e+02  2.466e+10
MPI Reductions:       1.622e+04     1.000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flop
                            and VecAXPY() for complex vectors of length N --> 8N flop

Summary of Stages:   ----- Time ------  ----- Flop ------   --- Messages ---   -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total    Count   %Total     Avg        %Total      Count   %Total
 0:             Main Stage: 4.2194e+01  70.8%  0.0000e+00   0.0%  2.976e+04   0.1%  1.336e+03    0.2%  1.500e+01   0.1%
 1: Calling KSPSolve()...:  1.7396e+01  29.2%  2.2648e+12 100.0%  3.570e+07  99.9%  6.897e+02   99.8%  1.620e+04  99.9%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flop: Max - maximum over all processors
                  Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   AvgLen: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase            %F - percent flop in this phase
      %M - percent messages in this phase        %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)      Flop                               --- Global ---  --- Stage ----  Total
                   Max Ratio  Max      Ratio   Max   Ratio  Mess   AvgLen  Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided          1 1.0 1.2405e-02  1.7 0.00e+00 0.0 3.3e+03 4.0e+00 0.0e+00  0  0  0  0  0   0  0 11  0  0     0
BuildTwoSidedF         1 1.0 2.7752e-04  6.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyBegin       2 1.0 3.2926e-04  4.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd         2 1.0 3.1900e-02  2.2 0.00e+00 0.0 9.9e+03 2.3e+02 4.0e+00  0  0  0  0  0   0  0 33  6 27     0
VecSet                 1 1.0 4.2439e-05 11.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph             1 1.0 9.7513e-05 51.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetUp                1 1.0 1.2566e-02  1.2 0.00e+00 0.0 9.9e+03 2.3e+02 0.0e+00  0  0  0  0  0   0  0 33  6  0     0

--- Event Stage 1: Calling KSPSolve()...
MatMult             5398 1.0 9.4707e+00  12.6 1.05e+09 1.1 3.6e+07 6.9e+02 0.0e+00  3 52 100 100   0  10  52 100 100   0  124335
VecTDot            10796 1.0 1.4993e+01   8.3 3.23e+08 1.1 0.0e+00 0.0e+00 1.1e+04 16 16   0   0  67  55  16   0   0  67   24172
VecNorm             5399 1.0 6.2343e+00   4.4 1.61e+08 1.1 0.0e+00 0.0e+00 5.4e+03 10  8   0   0  33  33   8   0   0  33   29073
VecCopy             5401 1.0 3.3677e-02   1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0   0   0   0   0   0   0   0   0       0
VecSet                 1 1.0 2.7180e-05   6.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0   0   0   0   0   0   0   0   0       0
VecAXPY            10796 1.0 1.1721e-01   1.4 3.23e+08 1.1 0.0e+00 0.0e+00 0.0e+00  0 16   0   0   0   1  16   0   0   0 3092074
VecAYPX             5397 1.0 5.4340e-02   1.4 1.61e+08 1.1 0.0e+00 0.0e+00 0.0e+00  0  8   0   0   0   0   8   0   0   0 3334231
VecScatterBegin     5398 1.0 5.4152e-02   3.3 0.00e+00 0.0 3.6e+07 6.9e+02 0.0e+00  0  0 100 100   0   0   0 100 100   0       0
VecScatterEnd       5398 1.0 8.6881e+00 489.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0   0   0   0   6   0   0   0   0       0
SFBcastOpBegin      5398 1.0 5.1739e-02   3.8 0.00e+00 0.0 3.6e+07 6.9e+02 0.0e+00  0  0 100 100   0   0   0 100 100   0       0
SFBcastOpEnd        5398 1.0 8.6858e+00 566.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0   0   0   0   6   0   0   0   0       0
SFPack              5398 1.0 7.6318e-03   3.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0   0   0   0   0   0   0   0   0       0
SFUnpack            5398 1.0 1.0190e-03   2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0   0   0   0   0   0   0   0   0       0
KSPSetUp               1 1.0 1.1086e-04   3.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0   0   0   0   0   0   0   0   0       0
KSPSolve               1 1.0 1.7389e+01   1.0 2.02e+09 1.1 3.6e+07 6.9e+02 1.6e+04 29 100 100 100 100 100 100 100 100 100  130242
PCSetUp                1 1.0 5.3644e-05  37.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0   0   0   0   0   0   0   0   0       0
PCApply             5399 1.0 3.5489e-02   1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0   0   0   0   0   0   0   0   0       0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Matrix     4              4      3777360     0.
         Vec Scatter     1              1          800     0.
              Vector     2              7       585712     0.
           Index Set     2              2         3708     0.
   Star Forest Graph     1              1         1136     0.
       Krylov Solver     1              1         1480     0.
      Preconditioner     1              1          832     0.
              Viewer     2              1          840     0.

--- Event Stage 1: Calling KSPSolve()...

              Vector     5              0            0     0.
========================================================================================================================
Average time to get PetscTime(): 2.38419e-08
Average time for MPI_Barrier(): 5.18799e-05
Average time for zero size MPI_Send(): 6.48035e-06
#PETSc Option Table entries:
-ksp_converged_reason
-ksp_rtol 1e-8
-ksp_type cg
-log_view
-nn 4096
-pc_type none
-v 0
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2  sizeof(int) 4  sizeof(long) 8  sizeof(void*) 8  sizeof(PetscScalar) 8  sizeof(PetscInt) 4
Configure options: --prefix=/share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r MAKEFLAGS= --with-debugging=0 COPTFLAGS="-O3 -mtune=native" CXXOPTFLAGS="-O3 -mtune=native" FOPTFLAGS="-O3 -mtune=native" --with-cxx-dialect=C++11 --with-ssl=0 --with-x=0 --with-fortran-bindings=0 --with-cudac=0 --with-cc=/soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpicc --with-cxx=/soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpic++ --with-fc=/soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpif90 --with-scalar-type=real --with-blaslapack-include=/soft/apps/intel/oneapi_base_2021/mkl/latest/include --with-blaslapack-lib="-Wl,-rpath,/soft/apps/intel/oneapi_base_2021/mkl/latest/lib/intel64 -L/soft/apps/intel/oneapi_base_2021/mkl/latest/lib/intel64 -lmkl_rt -lmkl_sequential -lmkl_core -lpthread" --download-metis --download-ptscotch --download-hypre --download-ml --download-parmetis --download-superlu_dist --download-suitesparse --download-tetgen --download-slepc --download-elemental --download-hpddm --download-scalapack --download-mumps --download-slepc-configure-arguments=--download-arpack=https://github.com/prj-/arpack-ng/archive/6d11c37b2dc9110f3f6a434029353ae1c5112227.tar.gz PETSC_ARCH=fr
-----------------------------------------
Libraries compiled on 2021-01-24 05:31:11 on ln02
Machine characteristics: Linux-3.10.0-957.el7.x86_64-x86_64-with-redhat-7.6-Maipo
Using PETSc directory: /share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r
Using PETSc arch:
-----------------------------------------
Using C compiler: /soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -O3 -mtune=native
Using Fortran compiler: /soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpif90 -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -O3 -mtune=native
-----------------------------------------
Using include paths: -I/share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r/include -I/soft/apps/intel/oneapi_base_2021/mkl/latest/include
-----------------------------------------
Using C linker: /soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpicc
Using Fortran linker: /soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpif90
Using libraries: -Wl,-rpath,/share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r/lib -L/share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r/lib -lpetsc -Wl,-rpath,/share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r/lib -L/share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r/lib -Wl,-rpath,/soft/apps/intel/oneapi_base_2021/mkl/latest/lib/intel64 -L/soft/apps/intel/oneapi_base_2021/mkl/latest/lib/intel64 -Wl,-rpath,/soft/apps/mvapich2/2.3.5-gcc-10.2.0/lib -L/soft/apps/mvapich2/2.3.5-gcc-10.2.0/lib -Wl,-rpath,/soft/apps/gcc/gcc-10.2.0/lib64 -L/soft/apps/gcc/gcc-10.2.0/lib64 -Wl,-rpath,/soft/apps/gcc/gcc-10.2.0/lib/gcc/x86_64-pc-linux-gnu/10.2.0 -L/soft/apps/gcc/gcc-10.2.0/lib/gcc/x86_64-pc-linux-gnu/10.2.0 -Wl,-rpath,/soft/apps/gcc/gcc-10.2.0/lib
-L/soft/apps/gcc/gcc-10.2.0/lib -lHYPRE -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lumfpack -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig -lsuperlu_dist -lml -lEl -lElSuiteSparse -lpmrrr -lmkl_rt -lmkl_sequential -lmkl_core -lpthread -lptesmumps -lptscotchparmetis -lptscotch -lptscotcherr -lesmumps -lscotch -lscotcherr -lpthread -lparmetis -lmetis -ltet -lm -lstdc++ -ldl -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lrt -lquadmath -lstdc++ -ldl
-----------------------------------------
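(Note on the solver settings: the PETSc option table above records -ksp_type cg, -ksp_rtol 1e-8 and -pc_type none, i.e. an unpreconditioned conjugate-gradient solve to a relative tolerance of 1e-8, consistent with the 5398 CONVERGED_RTOL iterations reported. PETSc reads these options from the command line at PetscInitialize(); assuming they were passed there rather than hard-coded in poisson-2d-PETSc.edp, an equivalent invocation would look like the sketch below, where -np 1152 mirrors the 32 hosts x 36 slots allocated by LSF.)

    # equivalent run sketch: FreeFem++ script arguments (-v, -nn) and PETSc runtime
    # options can be mixed on the same command line, since PETSc parses argv
    mpirun -np 1152 FreeFem++-mpi poisson-2d-PETSc.edp -v 0 -nn 4096 \
        -ksp_type cg -ksp_rtol 1e-8 -pc_type none \
        -ksp_converged_reason -log_view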