[petsc-users] MPI+OpenMP+MKL
Astor Piaz
appiazzolla at gmail.com
Fri Apr 7 13:26:35 CDT 2023
Hi Matthew, Junchao,
Thank you for your advice. The code still does not work; I give more
details about it below and can provide more information as needed.
I am implementing a spectral method that results in a block matrix whose
off-diagonal blocks are Poincare-Steklov operators of
impedance-to-impedance type.
These Poincare-Steklov operators are built by hierarchically merging
subdomain operators (the HPS method), and I have a well-tuned (but rather
complex) OpenMP+MKL code that applies this operator very fast.
I would like to use PETSc's MPI-parallel GMRES solver with a MatShell that
calls my OpenMP+MKL code, so that each block can live on a different MPI
process.
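
For context, the structure of what I am doing is roughly the following
(a simplified, self-contained sketch; apply_hps_block() is only a
placeholder for my real OpenMP+MKL apply, not the actual code):

#include <petscksp.h>

typedef struct {
  PetscInt nloc;   /* local (per-rank) block size; the real context holds the HPS operator data */
} ShellCtx;

/* Placeholder for the real OpenMP+MKL Poincare-Steklov apply (hypothetical name). */
static void apply_hps_block(PetscInt n, const PetscScalar *xin, PetscScalar *yout)
{
  #pragma omp parallel for
  for (PetscInt i = 0; i < n; ++i) yout[i] = xin[i];   /* identity stand-in */
}

static PetscErrorCode ShellMult(Mat A, Vec x, Vec y)
{
  ShellCtx          *ctx;
  const PetscScalar *xa;
  PetscScalar       *ya;

  PetscFunctionBeginUser;
  PetscCall(MatShellGetContext(A, &ctx));
  PetscCall(VecGetArrayRead(x, &xa));
  PetscCall(VecGetArray(y, &ya));
  apply_hps_block(ctx->nloc, xa, ya);   /* all OpenMP+MKL work happens here */
  PetscCall(VecRestoreArray(y, &ya));
  PetscCall(VecRestoreArrayRead(x, &xa));
  PetscFunctionReturn(0);
}

int main(int argc, char **argv)
{
  Mat      A;
  Vec      b, x;
  KSP      ksp;
  PC       pc;
  ShellCtx ctx;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  ctx.nloc = 1000;                      /* illustrative local block size */

  PetscCall(MatCreateShell(PETSC_COMM_WORLD, ctx.nloc, ctx.nloc,
                           PETSC_DETERMINE, PETSC_DETERMINE, &ctx, &A));
  PetscCall(MatShellSetOperation(A, MATOP_MULT, (void (*)(void))ShellMult));

  PetscCall(MatCreateVecs(A, &x, &b));
  PetscCall(VecSet(b, 1.0));

  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetType(ksp, KSPFGMRES));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCNONE));     /* shell matrix has no assembled entries for a default PC */
  PetscCall(KSPSetFromOptions(ksp));
  PetscCall(KSPSolve(ksp, b, x));

  PetscCall(KSPDestroy(&ksp));
  PetscCall(MatDestroy(&A));
  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&b));
  PetscCall(PetscFinalize());
  return 0;
}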
At the moment the code runs correctly, except that PETSc does not let my
OpenMP+MKL code schedule its threads as I choose.
I am configuring PETSc with

./configure --with-scalar-type=complex --prefix=../install/fast/ \
  --with-debugging=0 --with-openmp=1 --with-blaslapack-dir=${MKLROOT} \
  --with-mkl_cpardiso-dir=${MKLROOT} --with-threadsafety --with-log=0 \
  COPTFLAGS="-g -Ofast" CXXOPTFLAGS="-g -Ofast" FOPTFLAGS="-g -Ofast"
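
and a hybrid run like this would typically be launched along these lines
(illustrative only, not my exact script; the binding flags depend on the
MPI launcher, and I may be getting that part wrong):

export OMP_NUM_THREADS=8
export MKL_NUM_THREADS=8
export MKL_DYNAMIC=FALSE
mpiexec -n 4 --bind-to none ./my_solver -ksp_type fgmres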
Attached is an htop screenshot showing that the MKL threads are indeed
spawned, but they remain idle. Earlier stages of the code show that it is
perfectly capable of using OpenMP and MKL; only once PETSc's KSPSolve is
called does MKL appear to be switched off.
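
To check whether the thread counts are being reduced behind my back, I can
add something like this at the top of the shell multiply (a sketch;
omp_get_max_threads/mkl_get_max_threads only report what the runtimes
currently allow, and I am not sure this is the right place to reset them):

#include <stdio.h>
#include <omp.h>
#include <mkl.h>   /* mkl_get_max_threads, mkl_set_num_threads */

/* Report and, if needed, restore the thread counts before calling the kernel. */
static void check_and_reset_threads(int nthreads)
{
  printf("omp_get_max_threads = %d, mkl_get_max_threads = %d\n",
         omp_get_max_threads(), mkl_get_max_threads());
  omp_set_num_threads(nthreads);
  mkl_set_num_threads(nthreads);
}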
On Fri, Apr 7, 2023 at 8:10 AM Matthew Knepley <knepley at gmail.com> wrote:
> On Fri, Apr 7, 2023 at 10:06 AM Astor Piaz <appiazzolla at gmail.com> wrote:
>
>> Hello petsc-users,
>> I am trying to use a code that is parallelized with a combination of
>> OpenMP and MKL parallelisms, where OpenMP threads are able to spawn MPI
>> processes.
>> I have carefully scheduled the processes such that the right amount is
>> launched, at the right time.
>> When trying to use my code inside a MatShell (for later use in an FGMRES
>> KSPSolver), MKL processes are not being used.
>>
>> I am sorry if this has been asked before.
>> What configuration should I use in order to profit from MPI+OpenMP+MKL
>> parallelism?
>>
>
> You should configure using --with-threadsafety
>
> Thanks,
>
> Matt
>
>
>> Thank you!
>> --
>> Astor
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: htop.png
Type: image/png
Size: 298746 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20230407/0c8e4feb/attachment-0001.png>