Sender: LSF System
Subject: Job 98000: in cluster Done

Job was submitted from host by user in cluster at Thu Feb 18 10:11:17 2021
Job was executed on host(s) <36*c3u26n02>, in queue , as user in cluster at Thu Feb 18 10:11:17 2021
                            <36*c1u19n02>
                            <36*c1u12n01>
                            <36*c2u26n02>
                            <36*a3u01n02>
                            <36*a6u22n04>
                            <36*a6u17n01>
                            <36*a6u10n04>
                            <36*a6u05n03>
                            <36*c1u26n01>
                            <36*c1u26n02>
                            <36*c1u26n04>
                            <36*c6u08n01>
                            <36*c6u01n01>
                            <36*c6u08n02>
                            <36*c6u08n03>
                            <36*c6u08n04>
                            <36*c6u01n04>
                            <36*a3u15n02>
                            <36*a6u24n01>
                            <36*a3u03n02>
                            <36*a6u24n02>
                            <36*a6u12n03>
                            <36*a6u19n04>
                            <36*a6u12n04>
                            <36*c6u15n02>
                            <36*c6u15n03>
                            <36*c6u15n04>
                            <36*a3u22n04>
                            <36*a3u10n02>
                            <36*a3u10n03>
                            <36*a3u17n04>
was used as the home directory.
was used as the working directory.
Started at Thu Feb 18 10:11:17 2021
Terminated at Thu Feb 18 10:12:39 2021
Results reported at Thu Feb 18 10:12:39 2021

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
#########################################################################
# File Name: lsf.sh
#########################################################################
#!/bin/bash
#BSUB -J myjob
#BSUB -n 1152
#BSUB -o %J.lsf.out
#BSUB -e %J.lsf.err
#BSUB -W 60
#BSUB -q batch
#BSUB -R "span[ptile=36]"
module purge
module load mpi/mvapich2-2.3.5-gcc-10.2.0
cd $LS_SUBCWD
mpirun FreeFem++-mpi poisson-2d-PETSc.edp -v 0 -nn 4096 -log_view
------------------------------------------------------------
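(For reference, a script like this is driven with the standard LSF commands; a minimal sketch, where 98000 is the job ID from this report:

    bsub < lsf.sh     # submit; LSF reads the #BSUB directives from the file
    bjobs -l 98000    # inspect the job's state and resource usage while it runs
    bpeek 98000       # peek at the job's stdout/stderr before it completes
)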
Successfully completed.

Resource usage summary:

    CPU time :                                   89312.00 sec.
    Max Memory :                                 3529117 MB
    Average Memory :                             1129126.00 MB
    Total Requested Memory :                     -
    Delta Memory :                               -
    Max Swap :                                   -
    Max Processes :                              1220
    Max Threads :                                4678
    Run time :                                   81 sec.
    Turnaround time :                            82 sec.

The output (if any) follows:

Linear solve converged due to CONVERGED_RTOL iterations 5398
KSP Object: 1152 MPI processes
  type: groppcg
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-08, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1152 MPI processes
  type: none
  linear system matrix = precond matrix:
  Mat Object: 1152 MPI processes
    type: mpiaij
    rows=16785409, cols=16785409
    total: nonzeros=117465089, allocated nonzeros=117465089
    total number of mallocs used during MatSetValues calls=0
      not using I-node (on process 0) routines
--- system solved with PETSc (in 28.1724)
--------------------------------------------
ThG.nt = 33554432
VhG.ndof = 16785409
||u - uh||_0 = 3.49928e-07
||u - uh||_a = 0.00340765
--------------------------------------------
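(A quick consistency check: assuming poisson-2d-PETSc.edp meshes the unit square as an nn x nn grid of quadrilaterals split into triangle pairs, with continuous P1 elements, one expects ThG.nt = 2 * 4096^2 = 33554432 triangles and VhG.ndof = (4096 + 1)^2 = 16785409 degrees of freedom, which matches both counts above and the rows=16785409 reported by -ksp_view.)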
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

FreeFem++-mpi on a named c3u26n02 with 1152 processors, by dongxy Thu Feb 18 10:12:30 2021
Using Petsc Release Version 3.14.2, Dec 03, 2020

                         Max       Max/Min     Avg       Total
Time (sec):           6.917e+01     1.000   6.916e+01
Objects:              2.200e+01     1.000   2.200e+01
Flop:                 2.339e+09     1.052   2.281e+09  2.627e+12
Flop/sec:             3.382e+07     1.052   3.298e+07  3.799e+10
MPI Messages:         4.323e+04     4.000   3.102e+04  3.574e+07
MPI Message Lengths:  2.695e+07     2.709   6.902e+02  2.467e+10
MPI Reductions:       2.800e+01     1.000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flop
                            and VecAXPY() for complex vectors of length N --> 8N flop

Summary of Stages:   ----- Time ------  ----- Flop ------  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total    Count   %Total     Avg         %Total    Count   %Total
 0:      Main Stage: 6.9165e+01 100.0%  2.6275e+12 100.0%  3.574e+07 100.0%  6.902e+02      100.0%  2.100e+01  75.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flop: Max - maximum over all processors
                  Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   AvgLen: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flop in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ----  Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   AvgLen  Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided          1 1.0 1.6172e-02 1.6 0.00e+00 0.0 3.3e+03 4.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
BuildTwoSidedF         1 1.0 2.7132e-04 6.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatMult             5399 1.0 9.3638e+0011.4 1.05e+09 1.1 3.6e+07 6.9e+02 0.0e+00  3 45100100  0   3 45100100  0 125778
MatAssemblyBegin       2 1.0 3.2473e-04 4.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd         2 1.0 3.3739e-02 1.9 0.00e+00 0.0 9.9e+03 2.3e+02 4.0e+00  0  0  0  0 14   0  0  0  0 19     0
MatView                1 1.0 5.9652e-04 4.9 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00  0  0  0  0  4   0  0  0  0  5     0
VecNorm                1 1.0 2.0823e-02459.7 2.99e+04 1.1 0.0e+00 0.0e+00 1.0e+00  0  0  0  0  4   0  0  0  0  5  1612
VecCopy             5401 1.0 7.7543e-02 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet                 2 1.0 6.5565e-05 5.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY            16194 1.0 1.5659e-01 1.3 4.84e+08 1.1 0.0e+00 0.0e+00 0.0e+00  0 21  0  0  0   0 21  0  0  0 3471832
VecAYPX            10794 1.0 1.8121e-01 1.3 3.23e+08 1.1 0.0e+00 0.0e+00 0.0e+00  0 14  0  0  0   0 14  0  0  0 1999706
VecScatterBegin     5399 1.0 5.9568e-02 3.9 0.00e+00 0.0 3.6e+07 6.9e+02 0.0e+00  0  0100100  0   0  0100100  0     0
VecScatterEnd       5399 1.0 8.5056e+00371.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
VecReduceArith     16195 1.0 2.4356e-01 4.1 4.84e+08 1.1 0.0e+00 0.0e+00 0.0e+00  0 21  0  0  0   0 21  0  0  0 2232131
VecReduceBegin     10797 1.0 3.5153e-02 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecReduceEnd       10797 1.0 2.6880e+01 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 37  0  0  0  0  37  0  0  0  0     0
SFSetGraph             1 1.0 1.6308e-0485.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetUp                1 1.0 1.6341e-02 1.5 0.00e+00 0.0 9.9e+03 2.3e+02 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastOpBegin      5399 1.0 5.7348e-02 4.4 0.00e+00 0.0 3.6e+07 6.9e+02 0.0e+00  0  0100100  0   0  0100100  0     0
SFBcastOpEnd        5399 1.0 8.5022e+00418.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
SFPack              5399 1.0 7.1075e-03 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFUnpack            5399 1.0 1.2619e-03 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSetUp               1 1.0 2.3532e-04 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 2.8172e+01 1.0 2.34e+09 1.1 3.6e+07 6.9e+02 1.0e+00 41100100100  4  41100100100  5  93266
PCSetUp                1 1.0 9.0837e-0563.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
PCApply             5399 1.0 7.9549e-02 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------
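(Two readings of this table: the KSPSolve rate is consistent with the global totals, since 2.6275e+12 flop / 2.8172e+01 s is about 9.33e+10 flop/s, i.e. the 93266 Mflop/s shown in the Total Mflop/s column; and with -pc_type none the run is dominated by waiting on global reductions, VecReduceEnd alone taking 37% of wall time, which is the communication cost that the pipelined groppcg variant of CG tries to overlap with the matrix-vector product.)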
Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Matrix     4              4      3777360     0.
         Vec Scatter     1              1          800     0.
              Vector    10             10       932872     0.
           Index Set     2              2         3708     0.
   Star Forest Graph     1              1         1136     0.
       Krylov Solver     1              1         1408     0.
      Preconditioner     1              1          832     0.
              Viewer     2              1          840     0.
========================================================================================================================
Average time to get PetscTime(): 2.38419e-08
Average time for MPI_Barrier(): 4.62532e-05
Average time for zero size MPI_Send(): 8.19109e-06
#PETSc Option Table entries:
-ksp_converged_reason
-ksp_rtol 1e-8
-ksp_type groppcg
-ksp_view
-log_view
-nn 4096
-pc_type none
-v 0
#End of PETSc Option Table entries
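(Note that only -v 0, -nn 4096 and -log_view appear on the mpirun line in the script above, so the remaining entries in this table were presumably set inside poisson-2d-PETSc.edp. Since FreeFem++-mpi hands its command line on to PetscInitialize(), an equivalent run could in principle pass them all explicitly; a sketch, assuming the same module environment and working directory:

    mpirun FreeFem++-mpi poisson-2d-PETSc.edp -v 0 -nn 4096 \
        -ksp_type groppcg -ksp_rtol 1e-8 -pc_type none \
        -ksp_view -ksp_converged_reason -log_view
)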
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --prefix=/share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r MAKEFLAGS= --with-debugging=0 COPTFLAGS="-O3 -mtune=native" CXXOPTFLAGS="-O3 -mtune=native" FOPTFLAGS="-O3 -mtune=native" --with-cxx-dialect=C++11 --with-ssl=0 --with-x=0 --with-fortran-bindings=0 --with-cudac=0 --with-cc=/soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpicc --with-cxx=/soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpic++ --with-fc=/soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpif90 --with-scalar-type=real --with-blaslapack-include=/soft/apps/intel/oneapi_base_2021/mkl/latest/include --with-blaslapack-lib="-Wl,-rpath,/soft/apps/intel/oneapi_base_2021/mkl/latest/lib/intel64 -L/soft/apps/intel/oneapi_base_2021/mkl/latest/lib/intel64 -lmkl_rt -lmkl_sequential -lmkl_core -lpthread" --download-metis --download-ptscotch --download-hypre --download-ml --download-parmetis --download-superlu_dist --download-suitesparse --download-tetgen --download-slepc --download-elemental --download-hpddm --download-scalapack --download-mumps --download-slepc-configure-arguments=--download-arpack=https://github.com/prj-/arpack-ng/archive/6d11c37b2dc9110f3f6a434029353ae1c5112227.tar.gz PETSC_ARCH=fr
-----------------------------------------
Libraries compiled on 2021-01-24 05:31:11 on ln02
Machine characteristics: Linux-3.10.0-957.el7.x86_64-x86_64-with-redhat-7.6-Maipo
Using PETSc directory: /share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r
Using PETSc arch:
-----------------------------------------

Using C compiler: /soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -O3 -mtune=native
Using Fortran compiler: /soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpif90 -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -O3 -mtune=native
-----------------------------------------

Using include paths: -I/share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r/include -I/soft/apps/intel/oneapi_base_2021/mkl/latest/include
-----------------------------------------

Using C linker: /soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpicc
Using Fortran linker: /soft/apps/mvapich2/2.3.5-gcc-10.2.0/bin/mpif90
Using libraries: -Wl,-rpath,/share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r/lib -L/share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r/lib -lpetsc -Wl,-rpath,/share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r/lib -L/share/home/dongxy/zg/FreeFEM/New_Version/ff-petsc/r/lib -Wl,-rpath,/soft/apps/intel/oneapi_base_2021/mkl/latest/lib/intel64 -L/soft/apps/intel/oneapi_base_2021/mkl/latest/lib/intel64 -Wl,-rpath,/soft/apps/mvapich2/2.3.5-gcc-10.2.0/lib -L/soft/apps/mvapich2/2.3.5-gcc-10.2.0/lib -Wl,-rpath,/soft/apps/gcc/gcc-10.2.0/lib64 -L/soft/apps/gcc/gcc-10.2.0/lib64 -Wl,-rpath,/soft/apps/gcc/gcc-10.2.0/lib/gcc/x86_64-pc-linux-gnu/10.2.0 -L/soft/apps/gcc/gcc-10.2.0/lib/gcc/x86_64-pc-linux-gnu/10.2.0 -Wl,-rpath,/soft/apps/gcc/gcc-10.2.0/lib -L/soft/apps/gcc/gcc-10.2.0/lib -lHYPRE -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lumfpack -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig -lsuperlu_dist -lml -lEl -lElSuiteSparse -lpmrrr -lmkl_rt -lmkl_sequential -lmkl_core -lpthread -lptesmumps -lptscotchparmetis -lptscotch -lptscotcherr -lesmumps -lscotch -lscotcherr -lpthread -lparmetis -lmetis -ltet -lm -lstdc++ -ldl -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lrt -lquadmath -lstdc++ -ldl
-----------------------------------------