[petsc-users] PETSc doesn't allow use of multithreaded MKL with MUMPS + fblaslapack?

Appel, Thibaut t.appel17 at imperial.ac.uk
Sun Aug 12 11:42:58 CDT 2018


Good afternoon,

I have an application code written in pure MPI, but I would like to exploit the multithreading available in MUMPS (contained in its calls to BLAS routines).
On a high-end parallel cluster I’m using, I link with the Intel MKL library, but it seems that PETSc won’t configure the way I want:

./configure […] --with-openmp=1 --with-pic=1 --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort --with-blaslapack-dir=${MKLROOT} --with-scalapack-lib="-L${MKLROOT}/lib/intel64 -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64" --with-scalapack-include=${MKLROOT}/include --download-metis --download-parmetis --download-mumps

yields BLAS/LAPACK: -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread

while if I configure with cpardiso on top of the same flags

./configure […] --with-openmp=1 --with-pic=1 --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort --with-blaslapack-dir=${MKLROOT} --with-scalapack-lib="-L${MKLROOT}/lib/intel64 -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64" --with-scalapack-include=${MKLROOT}/include --with-mkl_cpardiso-dir=${MKLROOT} --download-metis --download-parmetis --download-mumps

the configure script says
===============================================
BLASLAPACK: Looking for Multithreaded MKL for C/Pardiso
===============================================

and yields BLAS/LAPACK: -lmkl_intel_lp64 -lmkl_core -lmkl_intel_thread -lmkl_blacs_intelmpi_lp64 -liomp5 -ldl -lpthread

In other words, there is currently no way to activate multithreaded BLAS with MUMPS, in spite of the option --with-openmp=1, since libmkl_sequential is linked. Would it be possible to fix that and link libmkl_intel_thread by default?
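
For reference, the workaround I have in mind (just a sketch on my part, using the standard threaded MKL link line; I do not know whether PETSc supports being driven this way) would be to drop --with-blaslapack-dir and pass the threaded libraries explicitly:

./configure […] --with-openmp=1 --with-blaslapack-include=${MKLROOT}/include --with-blaslapack-lib="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl" --with-scalapack-lib="-L${MKLROOT}/lib/intel64 -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64" --with-scalapack-include=${MKLROOT}/include --download-metis --download-parmetis --download-mumps

and then control the number of BLAS threads per MPI rank at run time with, e.g., export OMP_NUM_THREADS=4 and export MKL_NUM_THREADS=4 before calling mpiexec.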

On another, smaller cluster I do not have MKL, and I configure PETSc with the BLAS obtained through --download-fblaslapack, which is not multithreaded.
Could you confirm that I would need to link against a multithreaded BLAS library that I download and build myself, together with --with-openmp=1? Would it be recognized by the MUMPS installation built by PETSc?
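
If so, what I have in mind (a sketch, assuming OpenBLAS built with OpenMP support in its source directory and installed under a hypothetical prefix such as $HOME/openblas) would be something like:

make USE_OPENMP=1
make install PREFIX=$HOME/openblas

./configure […] --with-openmp=1 --with-blaslapack-lib="-L$HOME/openblas/lib -lopenblas" --download-scalapack --download-metis --download-parmetis --download-mumps

but I am not sure whether the MUMPS built by PETSc would then automatically benefit from the OpenMP threading inside that BLAS.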

Thanks for your support,


Thibaut