[petsc-users] PETSC profiling on 1536 cores
Barry Smith
bsmith at mcs.anl.gov
Thu Jun 15 16:16:50 CDT 2017
Please send the complete -log_view files as attachments, for both the 1536-core and the 48-core runs. Mail programs mangle the ASCII formatting of inlined text, so I can't make heads or tails out of the result.
What MPI implementation is being used, and what kind of interconnect does the network have? Also, is the MPI built specifically for that interconnect, or just something generic compiled off the web?
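If it is more convenient, the summary can also be written straight to a file from the code and attached. A minimal sketch (assuming logging is enabled, e.g. with -log_summary or -log_view, and that a call can be added just before PetscFinalize(); the helper name and filename are only illustrative):

#include <petscsys.h>   /* PetscViewerASCIIOpen(), PetscLogView() */

/* Dump the performance summary to a file so the ASCII layout survives mail programs. */
static PetscErrorCode DumpLogToFile(const char *fname)
{
  PetscViewer    viewer;
  PetscErrorCode ierr;

  ierr = PetscViewerASCIIOpen(PETSC_COMM_WORLD, fname, &viewer);CHKERRQ(ierr);
  ierr = PetscLogView(viewer);CHKERRQ(ierr);          /* same report as -log_summary */
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  return 0;
}

/* usage, right before PetscFinalize():  DumpLogToFile("petsc_1536cores.log");  */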
Barry
> On Jun 15, 2017, at 4:09 PM, Pietro Incardona <asmprog32 at hotmail.com> wrote:
>
> Dear All
>
> I tried PETSc version 3.6.5 to solve a linear system with 256,000,000 unknowns. The equation is a finite-difference Poisson equation.
>
> I am using conjugate gradient (the matrix is symmetric) with no preconditioner. The solution looks reasonable when visualized.
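>
> For reference, here is a minimal sketch of the solver setup implied by the option table at the end of the log (-ksp_type cg, -ksp_atol 0.01, -ksp_max_it 500, no preconditioner). The matrix A and vectors b, x are assumed to be assembled elsewhere, so this is illustrative rather than the actual application code:
>
> #include <petscksp.h>
>
> /* Solve A x = b with unpreconditioned CG, absolute tolerance 1e-2, at most 500 iterations. */
> static PetscErrorCode SolveWithCG(Mat A, Vec b, Vec x)
> {
>   KSP            ksp;
>   PC             pc;
>   PetscErrorCode ierr;
>
>   ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
>   ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
>   ierr = KSPSetType(ksp, KSPCG);CHKERRQ(ierr);
>   ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
>   ierr = PCSetType(pc, PCNONE);CHKERRQ(ierr);
>   /* arguments: rtol, abstol, dtol, maxits -- only abstol and maxits are set explicitly */
>   ierr = KSPSetTolerances(ksp, PETSC_DEFAULT, 1.0e-2, PETSC_DEFAULT, 500);CHKERRQ(ierr);
>   ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);      /* lets -ksp_monitor etc. take effect */
>   ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
>   ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
>   return 0;
> }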
> Unfortunately the conjugate gradient does not scale at all, and I am extremely concerned about this, in particular about the profiling numbers.
> Looking at the profiler output:
>
> 1536 cores = 24 cores x 64
>
> VecScatterBegin 348 1.0 2.3975e-01 1.8 0.00e+00 0.0 7.7e+06 3.1e+04 0.0e+00 0 0 85 99 0 0 0 85 99 0 0
> VecScatterEnd 348 1.0 2.8680e+00 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
> MatMult 348 1.0 4.1088e+00 1.4 8.18e+08 1.3 7.7e+06 3.1e+04 0.0e+00 2 52 85 99 0 2 52 85 99 0 281866
>
> I was expecting this part to be the most expensive; it takes around 4 seconds in total, which sounds reasonable.
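>
> As a rough sanity check using only numbers from the log below: MatMult accounts for 52% of the 2.227e+12 total flops, so 0.52 * 2.227e+12 flops / 4.11 s ≈ 2.8e+11 flop/s ≈ 282 GFlop/s aggregate, which is consistent with the 281866 MFlop/s reported in the MatMult row (roughly 0.18 GFlop/s per core on 1536 cores).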
>
> Unfortunately
>
> on 1536 cores = 24 cores x 64
>
> VecTDot 696 1.0 3.4442e+01 1.4 2.52e+08 1.3 0.0e+00 0.0e+00 7.0e+02 12 16 0 0 65 12 16 0 0 65 10346
> VecNorm 349 1.0 1.1101e+02 1.1 1.26e+08 1.3 0.0e+00 0.0e+00 3.5e+02 46 8 0 0 33 46 8 0 0 33 1610
> VecAXPY 696 1.0 8.3134e+01 1.1 2.52e+08 1.3 0.0e+00 0.0e+00 0.0e+00 34 16 0 0 0 34 16 0 0 0 4286
>
> These take over 228 seconds in total (34.4 s + 111.0 s + 83.1 s). Considering that some tests I ran on the cluster showed no problem with MPI_Reduce, I do not understand how these numbers are possible.
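>
> For reference, PETSc's parallel VecTDot and VecNorm each do a local dot-product/norm computation followed by a single MPI_Allreduce of one double on every CG iteration (VecAXPY, by contrast, involves no communication at all). So the relevant microbenchmark here is MPI_Allreduce across all 1536 ranks rather than MPI_Reduce; a minimal, illustrative timing sketch (not taken from the application):
>
> #include <mpi.h>
> #include <stdio.h>
>
> int main(int argc, char **argv)
> {
>   int       rank, i;
>   const int iters = 696;                 /* the number of VecTDot calls in the log */
>   double    local = 1.0, global, t0, t1;
>
>   MPI_Init(&argc, &argv);
>   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>   MPI_Barrier(MPI_COMM_WORLD);
>   t0 = MPI_Wtime();
>   for (i = 0; i < iters; i++)
>     MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
>   t1 = MPI_Wtime();
>   if (rank == 0)
>     printf("%d MPI_Allreduce calls of one double took %g s\n", iters, t1 - t0);
>   MPI_Finalize();
>   return 0;
> }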
>
>
> ////////////////////////// I also attach the profiling of the same solve on 48 cores /////////////////////////
>
> VecTDot 696 1.0 1.4684e+01 1.3 3.92e+09 1.1 0.0e+00 0.0e+00 7.0e+02 6 16 0 0 65 6 16 0 0 65 24269
> VecNorm 349 1.0 4.9612e+01 1.3 1.96e+09 1.1 0.0e+00 0.0e+00 3.5e+02 22 8 0 0 33 22 8 0 0 33 3602
> VecCopy 351 1.0 8.8359e+00 7.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 0
> VecSet 2 1.0 1.6177e-02 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecAXPY 696 1.0 8.8559e+01 1.1 3.92e+09 1.1 0.0e+00 0.0e+00 0.0e+00 42 16 0 0 0 42 16 0 0 0 4024
> VecAYPX 347 1.0 4.6790e+00 1.2 1.95e+09 1.1 0.0e+00 0.0e+00 0.0e+00 2 8 0 0 0 2 8 0 0 0 37970
> VecAssemblyBegin 2 1.0 5.0942e-02 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00 0 0 0 0 1 0 0 0 0 1 0
> VecAssemblyEnd 2 1.0 1.9073e-05 6.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecScatterBegin 348 1.0 1.2763e+00 1.5 0.00e+00 0.0 4.6e+05 2.0e+05 0.0e+00 0 0 97 100 0 0 0 97 100 0 0
> VecScatterEnd 348 1.0 4.6741e+00 5.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
> MatMult 348 1.0 2.8440e+01 1.1 1.27e+10 1.1 4.6e+05 2.0e+05 0.0e+00 13 52 97 100 0 13 52 97 100 0 40722
> MatAssemblyBegin 1 1.0 7.4749e-01 24.5 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 0 0
> MatAssemblyEnd 1 1.0 8.3194e-01 1.0 0.00e+00 0.0 2.7e+03 5.1e+04 8.0e+00 0 0 1 0 1 0 0 1 0 1 0
> KSPSetUp 1 1.0 8.2883e-02 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> KSPSolve 1 1.0 1.7964e+02 1.0 2.45e+10 1.1 4.6e+05 2.0e+05 1.0e+03 87 100 97 100 98 87 100 97 100 98 12398
> PCSetUp 1 1.0 1.1921e-06 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> PCApply 349 1.0 8.8166e+00 7.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 0
>
>
> /////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
>
> If you need more information or further tests, please let me know.
>
> Thanks in advance
>
> Here is the log for 1536 cores:
>
> 345 KSP Residual norm 1.007085286893e-02
> 346 KSP Residual norm 1.010054402040e-02
> 347 KSP Residual norm 1.002139574355e-02
> 348 KSP Residual norm 9.775851299055e-03
> Max div for vorticity 1.84572e-05 Integral: 6.62466e-09 130.764 -132
> ************************************************************************************************************************
> *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document ***
> ************************************************************************************************************************
>
> ---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
>
> ./vic_petsc on a arch-linux2-c-opt named taurusi6217 with 1536 processors, by incard Thu Jun 15 22:27:09 2017
> Using Petsc Release Version 3.6.4, Apr, 12, 2016
>
> Max Max/Min Avg Total
> Time (sec): 2.312e+02 1.00027 2.312e+02
> Objects: 1.900e+01 1.00000 1.900e+01
> Flops: 1.573e+09 1.32212 1.450e+09 2.227e+12
> Flops/sec: 6.804e+06 1.32242 6.271e+06 9.633e+09
> MPI Messages: 8.202e+03 2.06821 5.871e+03 9.018e+06
> MPI Message Lengths: 2.013e+08 1.86665 2.640e+04 2.381e+11
> MPI Reductions: 1.066e+03 1.00000
>
> Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
> e.g., VecAXPY() for real vectors of length N --> 2N flops
> and VecAXPY() for complex vectors of length N --> 8N flops
>
> Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
> Avg %Total Avg %Total counts %Total Avg %Total counts %Total
> 0: Main Stage: 2.3120e+02 100.0% 2.2272e+12 100.0% 9.018e+06 100.0% 2.640e+04 100.0% 1.065e+03 99.9%
>
> ------------------------------------------------------------------------------------------------------------------------
> See the 'Profiling' chapter of the users' manual for details on interpreting output.
> Phase summary info:
> Count: number of times phase was executed
> Time and Flops: Max - maximum over all processors
> Ratio - ratio of maximum to minimum over all processors
> Mess: number of messages sent
> Avg. len: average message length (bytes)
> Reduct: number of global reductions
> Global: entire computation
> Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
> %T - percent time in this phase %F - percent flops in this phase
> %M - percent messages in this phase %L - percent message lengths in this phase
> %R - percent reductions in this phase
> Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
> ------------------------------------------------------------------------------------------------------------------------
> Event Count Time (sec) Flops --- Global --- --- Stage --- Total
> Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
> ------------------------------------------------------------------------------------------------------------------------
>
> --- Event Stage 0: Main Stage
>
> VecTDot 696 1.0 3.4442e+01 1.4 2.52e+08 1.3 0.0e+00 0.0e+00 7.0e+02 12 16 0 0 65 12 16 0 0 65 10346
> VecNorm 349 1.0 1.1101e+02 1.1 1.26e+08 1.3 0.0e+00 0.0e+00 3.5e+02 46 8 0 0 33 46 8 0 0 33 1610
> VecCopy 351 1.0 2.7609e-01 5.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecSet 2 1.0 3.8961e-02 56.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecAXPY 696 1.0 8.3134e+01 1.1 2.52e+08 1.3 0.0e+00 0.0e+00 0.0e+00 34 16 0 0 0 34 16 0 0 0 4286
> VecAYPX 347 1.0 2.0852e-01 2.0 1.25e+08 1.3 0.0e+00 0.0e+00 0.0e+00 0 8 0 0 0 0 8 0 0 0 852044
> VecAssemblyBegin 2 1.0 8.3237e-02 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00 0 0 0 0 1 0 0 0 0 1 0
> VecAssemblyEnd 2 1.0 5.1022e-05 17.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecScatterBegin 348 1.0 2.3975e-01 1.8 0.00e+00 0.0 7.7e+06 3.1e+04 0.0e+00 0 0 85 99 0 0 0 85 99 0 0
> VecScatterEnd 348 1.0 2.8680e+00 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
> MatMult 348 1.0 4.1088e+00 1.4 8.18e+08 1.3 7.7e+06 3.1e+04 0.0e+00 2 52 85 99 0 2 52 85 99 0 281866
> MatAssemblyBegin 1 1.0 9.1920e-02 3.4 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 0 0
> MatAssemblyEnd 1 1.0 3.9093e+00 1.0 0.00e+00 0.0 4.4e+04 7.7e+03 8.0e+00 2 0 0 0 1 2 0 0 0 1 0
> KSPSetUp 1 1.0 8.1890e-03 4.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> KSPSolve 1 1.0 2.1525e+02 1.0 1.57e+09 1.3 7.7e+06 3.1e+04 1.0e+03 93 100 85 99 98 93 100 85 99 98 10347
> PCSetUp 1 1.0 5.5075e-05 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> PCApply 349 1.0 2.7485e-01 6.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> ------------------------------------------------------------------------------------------------------------------------
>
> Memory usage is given in bytes:
>
> Object Type Creations Destructions Memory Descendants' Mem.
> Reports information only for process 0.
>
> --- Event Stage 0: Main Stage
>
> Vector 10 7 7599552 0
> Vector Scatter 1 1 1088 0
> Matrix 3 3 20858912 0
> Krylov Solver 1 1 1216 0
> Index Set 2 2 242288 0
> Preconditioner 1 1 816 0
> Viewer 1 0 0 0
> ========================================================================================================================
> Average time to get PetscTime(): 9.53674e-08
> Average time for MPI_Barrier(): 3.68118e-05
> Average time for zero size MPI_Send(): 3.24349e-06
> #PETSc Option Table entries:
> -ksp_atol 0.010000
> -ksp_max_it 500
> -ksp_monitor
> -ksp_type cg
> -log_summary
> #End of PETSc Option Table entries
> Compiled without FORTRAN kernels
> Compiled with full precision matrices (default)
> sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
> Configure options: --with-cxx-dialect=C++11 --with-mpi-dir=/sw/taurus/libraries/openmpi/1.10.2-gnu --with-parmetis=yes --with-parmetis-dir=/scratch/p_ppm//PARMETIS/ --with-metis=yes --with-metis-dir=/scratch/p_ppm//METIS --with-boost=yes --with-boost-dir=/scratch/p_ppm//BOOST --with-blas-lib=/scratch/p_ppm//OPENBLAS/lib/libopenblas.a --with-lapack-lib=/scratch/p_ppm//OPENBLAS/lib/libopenblas.a --with-suitesparse=yes --with-suitesparse-dir=/scratch/p_ppm//SUITESPARSE --with-trilinos=yes -with-trilinos-dir=/scratch/p_ppm//TRILINOS --with-scalapack=yes -with-scalapack-dir=/scratch/p_ppm//SCALAPACK --with-mumps=yes --with-mumps-include=/scratch/p_ppm//MUMPS/include --with-superlu_dist=yes --with-superlu_dist-lib=/scratch/p_ppm//SUPERLU_DIST/lib/libsuperlu_dist_4.3.a --with-superlu_dist-include=/scratch/p_ppm//SUPERLU_DIST/include/ --with-hypre=yes -with-hypre-dir=/scratch/p_ppm//HYPRE --with-mumps-lib=""/scratch/p_ppm//MUMPS/lib/libdmumps.a /scratch/p_ppm//MUMPS/lib/libmumps_common.a /scratch/p_ppm//MUMPS/lib/libpord.a"" --prefix=/scratch/p_ppm//PETSC --with-debugging=0
> -----------------------------------------
> Libraries compiled on Wed Feb 22 17:30:49 2017 on tauruslogin4
> Machine characteristics: Linux-2.6.32-642.11.1.el6.Bull.106.x86_64-x86_64-with-redhat-6.8-Santiago
> Using PETSc directory: /lustre/scratch2/p_ppm/jenkins2/workspace/OpenFPM_high_scal_tests/petsc-3.6.4
> Using PETSc arch: arch-linux2-c-opt
> -----------------------------------------
>
> Using C compiler: /sw/taurus/libraries/openmpi/1.10.2-gnu/bin/mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O ${COPTFLAGS} ${CFLAGS}
> Using Fortran compiler: /sw/taurus/libraries/openmpi/1.10.2-gnu/bin/mpif90 -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O ${FOPTFLAGS} ${FFLAGS}
> -----------------------------------------
>
> Using include paths: -I/lustre/scratch2/p_ppm/jenkins2/workspace/OpenFPM_high_scal_tests/petsc-3.6.4/arch-linux2-c-opt/include -I/lustre/scratch2/p_ppm/jenkins2/workspace/OpenFPM_high_scal_tests/petsc-3.6.4/include -I/lustre/scratch2/p_ppm/jenkins2/workspace/OpenFPM_high_scal_tests/petsc-3.6.4/include -I/lustre/scratch2/p_ppm/jenkins2/workspace/OpenFPM_high_scal_tests/petsc-3.6.4/arch-linux2-c-opt/include -I/scratch/p_ppm/TRILINOS/include -I/scratch/p_ppm/HYPRE/include -I/scratch/p_ppm/SUPERLU_DIST/include -I/scratch/p_ppm/SUITESPARSE/include -I/scratch/p_ppm/MUMPS/include -I/scratch/p_ppm/PARMETIS/include -I/scratch/p_ppm/METIS/include -I/scratch/p_ppm/BOOST/include -I/sw/taurus/libraries/openmpi/1.10.2-gnu/include
> -----------------------------------------
>
> Using C linker: /sw/taurus/libraries/openmpi/1.10.2-gnu/bin/mpicc
> Using Fortran linker: /sw/taurus/libraries/openmpi/1.10.2-gnu/bin/mpif90
> Using libraries: -Wl,-rpath,/lustre/scratch2/p_ppm/jenkins2/workspace/OpenFPM_high_scal_tests/petsc-3.6.4/arch-linux2-c-opt/lib -L/lustre/scratch2/p_ppm/jenkins2/workspace/OpenFPM_high_scal_tests/petsc-3.6.4/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/scratch/p_ppm/TRILINOS/lib -L/scratch/p_ppm/TRILINOS/lib -lpike-blackbox -ltrilinoscouplings -lmsqutil -lmesquite -lctrilinos -lsundancePdeopt -lsundanceStdFwk -lsundanceStdMesh -lsundanceCore -lsundanceInterop -lsundanceUtils -lsundancePlaya -lpiro -lrol -lstokhos_muelu -lstokhos_ifpack2 -lstokhos_amesos2 -lstokhos_tpetra -lstokhos_sacado -lstokhos -lmoochothyra -lmoocho -lrythmos -lmuelu-adapters -lmuelu-interface -lmuelu -lmoertel -llocathyra -llocaepetra -llocalapack -lloca -lnoxepetra -lnoxlapack -lnox -lphalanx -lstk_mesh_fixtures -lstk_search_util_base -lstk_search -lstk_unit_test_utils -lstk_io_util -lstk_io -lstk_mesh_base -lstk_topology -lstk_util_use_cases -lstk_util_registry -lstk_util_diag -lstk_util_env -lstk_util_util -lstkclassic_search_util -lstkclassic_search -lstkclassic_rebalance_utils -lstkclassic_rebalance -lstkclassic_linsys -lstkclassic_io_util -lstkclassic_io -lstkclassic_expreval -lstkclassic_algsup -lstkclassic_mesh_fem -lstkclassic_mesh_base -lstkclassic_util_use_cases -lstkclassic_util_unit_test_support -lstkclassic_util_parallel -lstkclassic_util_diag -lstkclassic_util_env -lstkclassic_util_util -lstk_mesh_fixtures -lstk_search_util_base -lstk_search -lstk_unit_test_utils -lstk_io_util -lstk_io -lstk_mesh_base -lstk_topology -lstk_util_use_cases -lstk_util_registry -lstk_util_diag -lstk_util_env -lstk_util_util -lstkclassic_search_util -lstkclassic_search -lstkclassic_rebalance_utils -lstkclassic_rebalance -lstkclassic_linsys -lstkclassic_io_util -lstkclassic_io -lstkclassic_expreval -lstkclassic_algsup -lstkclassic_mesh_fem -lstkclassic_mesh_base -lstkclassic_util_use_cases -lstkclassic_util_unit_test_support -lstkclassic_util_parallel -lstkclassic_util_diag -lstkclassic_util_env -lstkclassic_util_util -lintrepid -lteko -lfei_trilinos -lfei_base -lstratimikos -lstratimikosbelos -lstratimikosaztecoo -lstratimikosamesos -lstratimikosml -lstratimikosifpack -lifpack2-adapters -lifpack2 -lanasazitpetra -lModeLaplace -lanasaziepetra -lanasazi -lkomplex -lsupes -laprepro_lib -lchaco -lIonit -lIotr -lIohb -lIogn -lIopg -lIoss -lsupes -laprepro_lib -lchaco -lIonit -lIotr -lIohb -lIogn -lIopg -lIoss -lamesos2 -lshylu -lbelostpetra -lbelosepetra -lbelos -lml -lifpack -lzoltan2 -lpamgen_extras -lpamgen -lamesos -lgaleri-xpetra -lgaleri-epetra -laztecoo -ldpliris -lisorropia -loptipack -lxpetra-sup -lxpetra -lthyratpetra -lthyraepetraext -lthyraepetra -lthyracore -lthyratpetra -lthyraepetraext -lthyraepetra -lthyracore -lepetraext -ltpetraext -ltpetrainout -ltpetra -lkokkostsqr -ltpetrakernels -ltpetraclassiclinalg -ltpetraclassicnodeapi -ltpetraclassic -ltpetraext -ltpetrainout -ltpetra -lkokkostsqr -ltpetrakernels -ltpetraclassiclinalg -ltpetraclassicnodeapi -ltpetraclassic -ltriutils -lglobipack -lshards -lzoltan -lepetra -lsacado -lrtop -lteuchoskokkoscomm -lteuchoskokkoscompat -lteuchosremainder -lteuchosnumerics -lteuchoscomm -lteuchosparameterlist -lteuchoscore -lteuchoskokkoscomm -lteuchoskokkoscompat -lteuchosremainder -lteuchosnumerics -lteuchoscomm -lteuchosparameterlist -lteuchoscore -lkokkosalgorithms -lkokkoscontainers -lkokkoscore -lkokkosalgorithms -lkokkoscontainers -lkokkoscore -ltpi -lgtest -lpthread -Wl,-rpath,/scratch/p_ppm/HYPRE/lib -L/scratch/p_ppm/HYPRE/lib -lHYPRE 
-Wl,-rpath,/sw/taurus/libraries/openmpi/1.10.2-gnu/lib -L/sw/taurus/libraries/openmpi/1.10.2-gnu/lib -Wl,-rpath,/sw/global/compilers/gcc/5.3.0/lib64 -L/sw/global/compilers/gcc/5.3.0/lib64 -Wl,-rpath,/sw/global/compilers/gcc/5.3.0/lib/gcc/x86_64-unknown-linux-gnu/5.3.0 -L/sw/global/compilers/gcc/5.3.0/lib/gcc/x86_64-unknown-linux-gnu/5.3.0 -Wl,-rpath,/sw/global/compilers/gcc/5.3.0/libexec/gcc/x86_64-unknown-linux-gnu/5.3.0 -L/sw/global/compilers/gcc/5.3.0/libexec/gcc/x86_64-unknown-linux-gnu/5.3.0 -Wl,-rpath,/sw/global/compilers/gcc/5.3.0/x86_64-unknown-linux-gnu/lib -L/sw/global/compilers/gcc/5.3.0/x86_64-unknown-linux-gnu/lib -Wl,-rpath,/sw/global/compilers/gcc/5.3.0/lib -L/sw/global/compilers/gcc/5.3.0/lib -lmpi_cxx -lstdc++ -Wl,-rpath,/scratch/p_ppm//SUPERLU_DIST/lib -L/scratch/p_ppm//SUPERLU_DIST/lib -lsuperlu_dist_4.3 -Wl,-rpath,/scratch/p_ppm/SUITESPARSE/lib -L/scratch/p_ppm/SUITESPARSE/lib -lumfpack -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig -lrt -ldmumps -Wl,-rpath,/scratch/p_ppm//MUMPS/lib -L/scratch/p_ppm//MUMPS/lib -lmumps_common -lpord -Wl,-rpath,/scratch/p_ppm/SCALAPACK/lib -L/scratch/p_ppm/SCALAPACK/lib -lscalapack -Wl,-rpath,/scratch/p_ppm//OPENBLAS/lib -L/scratch/p_ppm//OPENBLAS/lib -lopenblas -Wl,-rpath,/scratch/p_ppm/PARMETIS/lib -L/scratch/p_ppm/PARMETIS/lib -lparmetis -Wl,-rpath,/scratch/p_ppm/METIS/lib -L/scratch/p_ppm/METIS/lib -lmetis -lX11 -lhwloc -lssl -lcrypto -lm -lmpi_usempi -lmpi_mpifh -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/sw/taurus/libraries/openmpi/1.10.2-gnu/lib -L/sw/taurus/libraries/openmpi/1.10.2-gnu/lib -Wl,-rpath,/sw/global/compilers/gcc/5.3.0/lib64 -L/sw/global/compilers/gcc/5.3.0/lib64 -Wl,-rpath,/sw/global/compilers/gcc/5.3.0/lib/gcc/x86_64-unknown-linux-gnu/5.3.0 -L/sw/global/compilers/gcc/5.3.0/lib/gcc/x86_64-unknown-linux-gnu/5.3.0 -Wl,-rpath,/sw/global/compilers/gcc/5.3.0/lib64 -L/sw/global/compilers/gcc/5.3.0/lib64 -Wl,-rpath,/sw/global/compilers/gcc/5.3.0/lib64 -L/sw/global/compilers/gcc/5.3.0/lib64 -Wl,-rpath,/sw/global/compilers/gcc/5.3.0/libexec/gcc/x86_64-unknown-linux-gnu/5.3.0 -L/sw/global/compilers/gcc/5.3.0/libexec/gcc/x86_64-unknown-linux-gnu/5.3.0 -Wl,-rpath,/sw/global/compilers/gcc/5.3.0/lib/gcc/x86_64-unknown-linux-gnu/5.3.0 -L/sw/global/compilers/gcc/5.3.0/lib/gcc/x86_64-unknown-linux-gnu/5.3.0 -Wl,-rpath,/sw/global/compilers/gcc/5.3.0/x86_64-unknown-linux-gnu/lib -L/sw/global/compilers/gcc/5.3.0/x86_64-unknown-linux-gnu/lib -Wl,-rpath,/sw/global/compilers/gcc/5.3.0/lib -L/sw/global/compilers/gcc/5.3.0/lib -ldl -Wl,-rpath,/sw/taurus/libraries/openmpi/1.10.2-gnu/lib -lmpi -lgcc_s -lpthread -ldl
> -----------------------------------------
>
> Regards
> Pietro Incardona