<div class="">Good afternoon,</div>
<div class=""><br class="">
</div>
<div class="">I have an application code written in pure MPI but wanted to exploit multithreading in MUMPS (contained in calls to BLAS routines) </div>
<div class="">On a high-end parallel cluster I’m using, I’m linking with the Intel MKL library but it seems that PETSc won’t configure the way I want: </div>
<div class=""><br class="">
</div>
<div class=""><span style="font-size: 11px;" class=""><font face="Courier" class="">./configure […] —with-openmp=1 --with-pic=1 --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort --with-blaslapack-dir=${MKLROOT} --with-scalapack-lib="-L${MKLROOT}/lib/intel64
 -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64" --with-scalapack-include=${MKLROOT}/include --download-metis --download-parmetis --download-mumps</font></span></div>
<div class=""><br class="">
</div>
<div class=""><span style="font-size: 11px;" class="">yields </span><span style="font-family: Courier;" class="">BLAS/LAPACK:</span><span style="font-family: Courier;" class=""> -</span><span style="font-family: Courier;" class="">lmkl_intel_lp64 -lmkl_sequential
 -lmkl_core -lpthread</span></div>
<span class=""><br class="">
while if I configure with cpardiso on top of the same flags</span>
<div class=""><span class=""><br class="">
</span></div>
<span style="font-size: 11px;" class=""><font face="Courier" class="">./configure […] —with-openmp=1 —with-pic=1 --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort --with-blaslapack-dir=${MKLROOT} --with-scalapack-lib="-L${MKLROOT}/lib/intel64 -lmkl_scalapack_lp64
 -lmkl_blacs_intelmpi_lp64" --with-scalapack-include=${MKLROOT}/include --with-mkl_cpardiso-dir=${MKLROOT} --download-metis --download-parmetis --download-mumps</font></span>
<div class=""><span class=""><br class="">
</span></div>
<div class="">the configure script says</div>
<div class="">===============================================<br class="">
BLASLAPACK: Looking for Multithreaded MKL for C/Pardiso<br class="">
===============================================</div>
<div class=""><br class="">
</div>
<div class=""><span class="">and yields </span><span style="font-family: Courier;" class="">BLAS/LAPACK: </span><span style="font-family: Courier;" class="">-</span><span style="font-family: Courier;" class="">lmkl</span><span style="font-family: Courier;" class="">_intel_lp64
 -</span><span style="font-family: Courier;" class="">lmkl</span><span style="font-family: Courier;" class="">_core -</span><span style="font-family: Courier;" class="">lmkl</span><span style="font-family: Courier;" class="">_intel_thread -</span><span style="font-family: Courier;" class="">lmkl_blacs_intelmpi_lp64
 -liomp5 -ldl -lpthread </span></div>
<div class=""><span style="font-family: Courier;" class=""><br class="">
</span></div>
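(Assuming a threaded MKL does get linked, my understanding is that I would then control the number of BLAS threads used inside MUMPS at run time with something like

  export OMP_NUM_THREADS=4
  export MKL_NUM_THREADS=4
  mpirun -np 16 ./my_app

where my_app is just a placeholder for my executable; please correct me if that is not how it is meant to work.)
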
<div class="">In other words, there is no current possibility of activating multithreaded BLAS with MUMPS in spite of the option —with-openmp=1, as libmkl_sequential is linked. Is it not possible to fix that and use libmkl_intel_thread by default?</div>
<div class=""><br class="">
</div>
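In the meantime, would it be acceptable to bypass --with-blaslapack-dir and pass the threaded MKL libraries explicitly? I was thinking of something along these lines (my guess at the link line, not tested):

  ./configure [...] --with-openmp=1 \
    --with-blaslapack-include=${MKLROOT}/include \
    --with-blaslapack-lib="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl" \
    --download-metis --download-parmetis --download-mumps
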
<div class="">On another smaller cluster, I do not have MKL and configure PETSc with BLAS downloaded with —download-fblaslapack, which is not multithreaded.</div>
<div class="">Could you confirm I would need to link with a multithreaded BLAS library I downloaded myself and use —with-openmp=1? Would it be `recognized` by the MUMPS installed by PETSc?</div>
<div class=""><br class="">
</div>
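For instance, I was considering building OpenBLAS with OpenMP support and pointing PETSc at it, roughly (my own guess at the steps, not tested):

  # build OpenBLAS with OpenMP threading and install it locally
  make USE_OPENMP=1
  make install PREFIX=$HOME/openblas

  # then configure PETSc against it
  ./configure [...] --with-openmp=1 --with-blaslapack-dir=$HOME/openblas --download-metis --download-parmetis --download-mumps

Would that be the right approach, or is there a preferred way to get a threaded BLAS through the configure script itself?
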
<div class="">Thanks for your support,</div>
<div class=""><br class="">
</div>
<div class=""><br class="">
</div>
<div class=""><span class="">Thibaut<br class="">
</span></div>