<div dir="ltr"><div>Thanks for your reply Matt.</div><div><br></div><div>The problem seems to be the MKL threads I just realized.<br></div><div><br></div><div>Inside the MatShell I call:</div><div><br></div><div>call omp_set_nested(.true.)<br></div><div>call omp_set_dynamic(.false.)<br></div><div>call mkl_set_dynamic(0)<br></div><div><br></div><div>Then, inside the omp single thread I use:</div><div><br></div><div>nMkl0 = mkl_set_num_threads_local(nMkl)<br></div><div><br></div><div>where nMkl is set to 24</div><div><br></div><div>MKL_VERBOSE shows, that the calls to have access to 24 threads but the timings are the same as in 1 thread</div><div><br></div><div>MKL_VERBOSE ZGEMV(N,12544,12544,0x7ffde9edc800,0x14e4662d2010,12544,0x14985e610,1,0x7ffde9edc7f0,0x189faaa90,1) 117.09ms CNR:OFF Dyn:0 FastMM:1 TID:0 NThr:24<br></div><div>MKL_VERBOSE ZGEMV(N,12544,12544,0x7ffe00355700,0x14c8ec1e4010,12544,0x16959c830,1,0x7ffe003556f0,0x17dd7da70,1) 117.37ms CNR:OFF Dyn:0 FastMM:1 TID:0 NThr:1<br></div><div><br></div><div>The configuration of OpenMP that is launching these MKL processes is as follows:</div><div><br></div><div>OPENMP DISPLAY ENVIRONMENT BEGIN<br> _OPENMP = '201511'<br> OMP_DYNAMIC = 'FALSE'<br> OMP_NESTED = 'TRUE'<br> OMP_NUM_THREADS = '24'<br> OMP_SCHEDULE = 'DYNAMIC'<br> OMP_PROC_BIND = 'TRUE'<br> OMP_PLACES = '{0:24}'<br> OMP_STACKSIZE = '0'<br> OMP_WAIT_POLICY = 'PASSIVE'<br> OMP_THREAD_LIMIT = '4294967295'<br> OMP_MAX_ACTIVE_LEVELS = '255'<br> OMP_CANCELLATION = 'FALSE'<br> OMP_DEFAULT_DEVICE = '0'<br> OMP_MAX_TASK_PRIORITY = '0'<br> OMP_DISPLAY_AFFINITY = 'FALSE'<br> OMP_AFFINITY_FORMAT = 'level %L thread %i affinity %A'<br> OMP_ALLOCATOR = 'omp_default_mem_alloc'<br> OMP_TARGET_OFFLOAD = 'DEFAULT'<br> GOMP_CPU_AFFINITY = ''<br> GOMP_STACKSIZE = '0'<br> GOMP_SPINCOUNT = '300000'<br>OPENMP DISPLAY ENVIRONMENT END<br></div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Apr 7, 2023 at 1:25 PM Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Fri, Apr 7, 2023 at 2:26 PM Astor Piaz <<a href="mailto:appiazzolla@gmail.com" target="_blank">appiazzolla@gmail.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr">Hi Matthew, Jungchau,<div>Thank you for your advice. 
The code still does not work; I give more details about it below and can provide further specifics as needed.<br></div><div><br></div><div>I am implementing a spectral method resulting in a block matrix where the off-diagonal blocks are Poincare-Steklov operators of impedance-to-impedance type.</div><div>Those Poincare-Steklov operators have been created by hierarchically merging subdomain operators (the HPS method), and I have a well-tuned (but rather complex) OpenMP+MKL code that can apply this operator very fast.</div><div>I would like to use PETSc's MPI-parallel GMRES solver with a MatShell that calls my OpenMP+MKL code, while each block can be in a different MPI process.</div><div><br></div><div>At the moment the code runs correctly, except that PETSc is not letting my OpenMP+MKL code schedule its threads as I choose.</div></div></div></div></blockquote><div><br></div><div>PETSc does not say anything about OpenMP threads. However, maybe you need to launch the executable with the correct OMP env variables?</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div><div>I am using:</div><div>./configure --with-scalar-type=complex --prefix=../install/fast/ --with-debugging=0 -with-openmp=1 --with-blaslapack-dir=${MKLROOT} --with-mkl_cpardiso-dir=${MKLROOT} --with-threadsafety --with-log=0 COPTFLAGS=-g -Ofast CXXOPTFLAGS=-g -Ofast FOPTFLAGS=-g -Ofast</div><div><br></div><div>Attached is an image of htop showing that the MKL threads are indeed being spawned, but they remain unused by the code. Previous calculations with the code show that it is capable of using OpenMP and MKL; only when the PETSc KSPSolver is called does MKL seem to be turned off.</div></div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Apr 7, 2023 at 8:10 AM Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Fri, Apr 7, 2023 at 10:06 AM Astor Piaz <<a href="mailto:appiazzolla@gmail.com" target="_blank">appiazzolla@gmail.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hello petsc-users,<div>I am trying to use a code that is parallelized with a combination of OpenMP and MKL parallelisms, where OpenMP threads are able to spawn MPI processes.</div><div>I have carefully scheduled the processes such that the right number is launched at the right time.</div><div>When trying to use my code inside a MatShell (for later use in an FGMRES KSPSolver), MKL processes are not being used.</div><div><br></div><div>I am sorry if this has been asked before.</div><div>What configuration should I use in order to profit from MPI+OpenMP+MKL parallelism?</div></div></blockquote><div><br></div><div>You should configure using --with-threadsafety</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Thank you!</div><div>--</div><div>Astor</div></div>
</blockquote></div><br clear="all"><div><br></div><span>-- </span><br><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>
</blockquote></div>
</blockquote></div><br clear="all"><div><br></div><span>-- </span><br><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>
</blockquote></div>
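<div dir="ltr"><div><br></div><div>P.S. Here is the minimal sketch referenced at the top of this message, illustrating the nested OpenMP + MKL call pattern described above. It is only an illustration: shell_apply, my_block_apply and the argument names are placeholders rather than the actual code, and it assumes MKL is linked against a threaded layer so that mkl_set_num_threads_local takes effect.</div><div><br></div><div>subroutine shell_apply(x, y, n, nMkl)<br>  use omp_lib<br>  implicit none<br>  integer, intent(in) :: n, nMkl<br>  complex*16, intent(in) :: x(n)<br>  complex*16, intent(out) :: y(n)<br>  integer :: nMkl0<br>  integer, external :: mkl_set_num_threads_local<br><br>  call omp_set_nested(.true.)          ! allow the nested (MKL) level<br>  call omp_set_dynamic(.false.)<br>  call mkl_set_dynamic(0)<br><br>  !$omp parallel<br>  !$omp single<br>  ! give MKL on this thread its own team of nMkl threads<br>  nMkl0 = mkl_set_num_threads_local(nMkl)<br>  call my_block_apply(x, y, n)         ! ZGEMV-heavy kernel (placeholder)<br>  nMkl0 = mkl_set_num_threads_local(0) ! 0 restores the global MKL setting<br>  !$omp end single<br>  !$omp end parallel<br>end subroutine shell_apply<br></div><div><br></div><div>Restoring the thread-local setting with mkl_set_num_threads_local(0) on the way out keeps later MKL calls, outside the MatShell, at the global thread count.</div></div>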