I'm sorry, I meant that OpenMP threads are able to spawn MKL processes.

On Fri, Apr 7, 2023 at 8:29 PM Dave May <dave.mayhem23@gmail.com> wrote:
> On Fri 7. Apr 2023 at 07:06, Astor Piaz <appiazzolla@gmail.com> wrote:
>> Hello petsc-users,
>> I am trying to use a code that is parallelized with a combination of
>> OpenMP and MKL parallelism, where OpenMP threads are able to spawn
>> MPI processes.
>
> Is this really the correct way to go?
>
> Would it not be more suitable (or simpler) to run your application on
> an MPI sub-communicator that maps one rank to, say, one compute node,
> and then, within each rank of the sub-communicator, run your threaded
> OpenMP/MKL code with as many threads as there are cores per node
> (and/or hyper-threads, if those are effective for you)?
>
> Thanks,
> Dave
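A minimal sketch of the layout Dave describes, assuming MPI-3's
MPI_Comm_split_type is available; the communicator names (node_comm,
app_comm) and the thread-count choices are illustrative, not from the
thread:

    #include <mpi.h>
    #include <omp.h>
    #include <mkl.h>

    int main(int argc, char **argv)
    {
        MPI_Comm node_comm, app_comm;
        int      node_rank;

        MPI_Init(&argc, &argv);

        /* One sub-communicator per shared-memory node (MPI-3). */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);
        MPI_Comm_rank(node_comm, &node_rank);

        /* app_comm keeps exactly one rank per node; every other rank
           receives MPI_COMM_NULL and stays idle. */
        MPI_Comm_split(MPI_COMM_WORLD,
                       node_rank == 0 ? 0 : MPI_UNDEFINED,
                       0, &app_comm);

        if (app_comm != MPI_COMM_NULL) {
            /* This rank owns the whole node: give OpenMP/MKL the cores. */
            omp_set_num_threads(omp_get_num_procs());
            mkl_set_num_threads(omp_get_max_threads());
            /* ... run the application on app_comm with threaded kernels ... */
            MPI_Comm_free(&app_comm);
        }

        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }

The point of this layout is that MPI and the threads never compete:
each node carries exactly one rank of app_comm, and OpenMP/MKL own all
of its cores, so no processes need to be spawned at run time.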
>> I have carefully scheduled the processes so that the right number are
>> launched at the right time.
>> When I try to use my code inside a MatShell (for later use in an
>> FGMRES KSP solver), the MKL processes are not being used.
>>
>> I am sorry if this has been asked before.
>> What configuration should I use in order to profit from
>> MPI+OpenMP+MKL parallelism?
>>
>> Thank you!
>> --
>> Astor
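For reference, a skeletal version of the setup the question describes,
assuming a recent PETSc (PetscCall / PETSC_SUCCESS): a MatShell whose
apply routine would host the threaded OpenMP/MKL kernel, wired into an
FGMRES KSP. UserMult, UserCtx, and the sizes are hypothetical
placeholders:

    #include <petscksp.h>

    /* Hypothetical shell context; the threaded OpenMP/MKL kernel
       would be invoked from UserMult below. */
    typedef struct {
      PetscInt n;
    } UserCtx;

    static PetscErrorCode UserMult(Mat A, Vec x, Vec y)
    {
      UserCtx *ctx;

      PetscFunctionBeginUser;
      PetscCall(MatShellGetContext(A, &ctx));
      /* ... call the threaded OpenMP + MKL code to form y = A*x ... */
      PetscFunctionReturn(PETSC_SUCCESS);
    }

    int main(int argc, char **argv)
    {
      Mat      A;
      KSP      ksp;
      UserCtx  ctx;
      PetscInt n = 100; /* illustrative local size */

      PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
      ctx.n = n;

      /* Matrix-free operator: PETSc calls UserMult for every A*x. */
      PetscCall(MatCreateShell(PETSC_COMM_WORLD, n, n, PETSC_DETERMINE,
                               PETSC_DETERMINE, &ctx, &A));
      PetscCall(MatShellSetOperation(A, MATOP_MULT,
                                     (void (*)(void))UserMult));

      PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
      PetscCall(KSPSetType(ksp, KSPFGMRES));
      PetscCall(KSPSetOperators(ksp, A, A));
      PetscCall(KSPSetFromOptions(ksp));
      /* ... KSPSolve(ksp, b, x) with the user's right-hand side ... */

      PetscCall(KSPDestroy(&ksp));
      PetscCall(MatDestroy(&A));
      PetscCall(PetscFinalize());
      return 0;
    }

Note that whether MKL threads are actually used inside UserMult is
governed by the threading environment the program is launched with
(e.g. OMP_NUM_THREADS, MKL_NUM_THREADS, and the MPI launcher's binding
options), not by the MatShell itself.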