<div dir="ltr">Thank you very much for your information. I pulled the master branch but got the error when configuring it.<div><br></div><div>When I run configure without mkl_cpardiso (configure.log_nocpardiso): ./configure PETSC_ARCH=arch-debug --with-debugging=1 --with-mpi-dir=$MPI_ROOT --with-blaslapack-dir=${MKL_ROOT} , it works fine.</div><div><br></div><div>However, when I add mkl_cpardiso (configure.log_withcpardiso): ./configure PETSC_ARCH=arch-debug --with-debugging=1 --with-mpi-dir=$MPI_ROOT -with-blaslapack-dir=${MKL_ROOT} --with-mkl_cpardiso-dir=${MKL_ROOT} , it complains about "Could not find a functional BLAS.", but the blas was provided through mkl as same as previous configuration. </div><div><br></div><div>Can you help me on the configuration? Thank you.</div><div><br></div><div>Xiangdong</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Sep 18, 2019 at 2:39 PM Smith, Barry F. <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
<br>
> On Sep 18, 2019, at 9:15 AM, Xiangdong via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br>
> <br>
> Hello everyone,<br>
> <br>
> From here,<br>
> <a href="https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATSOLVERMKL_PARDISO.html" rel="noreferrer" target="_blank">https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATSOLVERMKL_PARDISO.html</a><br>
> <br>
> It seems thatMKL_PARDISO only works for seqaij. I am curious that whether one can use mkl_pardiso in petsc with multi-thread.<br>
<br>
You can use mkl_pardiso for multi-threaded and mkl_cpardiso for MPI parallelism.<br>
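
For example, the solver can be selected at run time with PETSc options (a sketch; ./app stands in for your PETSc executable, and the option names are documented on the MATSOLVERMKL_PARDISO and MATSOLVERMKL_CPARDISO manual pages):

    # factor with the threaded MKL PARDISO (sequential matrix)
    ./app -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type mkl_pardiso
    # factor with MKL Cluster PARDISO under MPI
    mpiexec -n 4 ./app -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type mkl_cpardiso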
<br>
In both cases you must use the master branch of PETSc (or the next release of PETSc) to do this this easily.<br>
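
The relevant configure flags would be along these lines (a sketch, assuming a single MKL installation at $MKL_ROOT supplies BLAS/LAPACK as well as the PARDISO libraries; see configure --help for the exact names):

    ./configure --with-blaslapack-dir=${MKL_ROOT} --with-mkl_pardiso-dir=${MKL_ROOT} --with-mkl_cpardiso-dir=${MKL_ROOT}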
<br>
Note that when you use mkl_pardiso with multiple threads the matrix is coming from a single MPI process (or the single program if not running with MPI). So it is not MPI parallel that matches the rest of the parallelism with PETSc. So one much be a little careful: for example if one has 4 cores and uses them all with mpiexec -n 4 and then uses mkl_pardiso with 4 threads (each) then you have 16 threads fighting over 4 cores. So you need to select the number of MPI processes and number of threads wisely.<br>
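
For example, on a 4-core node either of these layouts keeps the total thread count at 4 (a sketch; MKL_NUM_THREADS is MKL's thread-count environment variable, and -mat_mkl_pardiso_65 is the PETSc option that sets the MKL PARDISO thread count, per its manual page):

    # 4 MPI ranks, 1 MKL thread per rank
    MKL_NUM_THREADS=1 mpiexec -n 4 ./app -pc_factor_mat_solver_type mkl_cpardiso
    # 1 MPI rank, 4 MKL PARDISO threads
    ./app -pc_factor_mat_solver_type mkl_pardiso -mat_mkl_pardiso_65 4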
<br>
> <br>
> Is there any reason that MKL_PARDISO is not listed in the linear solver table?<br>
> <a href="https://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html" rel="noreferrer" target="_blank">https://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html</a><br>
> <br>
<br>
Just an oversight; thanks for letting us know. I have added it.

> Thank you.
> 
> Best,
> Xiangdong
</blockquote></div>