<div dir="ltr">So you want to run 16 MPI ranks. That is not bad. I was afraid you want to use like 1 or 2 MPI processes per socket.<div><br></div><div>I would not be optimistic that you are going to get a win with OMP here. But it would be good to get some data. Talk with the hypre folks, they can debug hypre and they have a lot more experience with OMP + AMG and can give you a better sense of what to expect.</div><div><br></div><div>Mark</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Oct 10, 2017 at 4:06 PM, Bakytzhan Kallemov <span dir="ltr"><<a href="mailto:bkallemov@lbl.gov" target="_blank">bkallemov@lbl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"><span class="">
<p><br>
</p>
<br>
<div class="m_-7126695782558398110moz-cite-prefix">On 10/10/2017 12:47 PM, Mark Adams
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr"><br>
<div class="gmail_extra"><br>
<div class="gmail_quote"><span></span><br>
<span></span>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span>
</span> What are you comparing? Are you using say 32 MPI
processes and 2 threads or 16 MPI processes and 4 threads?
How are you controlling the number of OpenMP threads,
OpenMP environmental variable? What parts of the time in
the code are you comparing? You should just -log_view and
compare the times for PCApply and PCSetUp() between say 64
MPI process/1 thread and 32 MPI processes/2 threads and
send us the output for those two cases.<br>
</blockquote>
>>
>> These folks don't use many MPI processes. I'm not sure what the optimal configuration is with Chombo-Crunch when using all of Cori.
>>
>> Baky: how many MPI processes per socket are you aiming for on Cori-KNL?
>
> Right now I am testing it on a single KNL node, going from flat 64+1 down to 2+32 (MPI ranks + threads per rank) for comparison. But as you can see from the plot in the previous mail, we have a sweet spot at the 16+4 point; we then scale that configuration accordingly when running with 8k nodes.
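A minimal sketch of the kind of -log_view comparison suggested above, assuming a Slurm launcher on a single Cori KNL node and a placeholder executable name ./chombo-crunch (the -c and binding flags follow the usual NERSC guidance for using 64 of the 68 KNL cores; adjust to the real binary, its arguments, and your affinity policy):

    # flat 64 ranks x 1 thread
    export OMP_NUM_THREADS=1
    srun -n 64 -c 4 --cpu_bind=cores ./chombo-crunch -log_view > log_64x1.txt

    # 16 ranks x 4 threads (the reported sweet spot)
    export OMP_NUM_THREADS=4
    export OMP_PLACES=cores
    export OMP_PROC_BIND=spread
    srun -n 16 -c 16 --cpu_bind=cores ./chombo-crunch -log_view > log_16x4.txt

Comparing the PCSetUp and PCApply rows of the two -log_view outputs shows whether the threaded AMG phases actually gain anything over the pure-MPI run.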
<blockquote type="cite">
<div dir="ltr">
<div class="gmail_extra">
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<span><br>
><br>
> It seems that it made no difference, so perhaps I
am doing something wrong or my build is not configured
right.<br>
><br>
> Do you have any example that makes use of threads
when running hybrid and show an advantage?<br>
<br>
</span> There is not reason to think that using threads
on KNL is faster than just using MPI processes. Despite
what the NERSc/LBL web pages may say, just because a
website says something doesn't make it true.<br>
<div class="m_-7126695782558398110HOEnZb">
<div class="m_-7126695782558398110h5"><br>
<br>
><br>
> I'd like to test it and make sure that my libs
are configured correctly, before start to investigate
it further.<br>
><br>
><br>
> Thanks,<br>
><br>
> Baky<br>
><br>
><br>
<br>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</blockquote>
<br>
</span></div>
</blockquote></div><br></div>
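A quick way to confirm that the libraries were actually built with OpenMP before investigating further -- a sketch assuming a standard PETSc source build with PETSC_DIR and PETSC_ARCH set, and the same placeholder executable as above:

    # PETSC_HAVE_OPENMP is only defined if PETSc was configured with --with-openmp
    grep PETSC_HAVE_OPENMP $PETSC_DIR/$PETSC_ARCH/include/petscconf.h

    # The configure options used are also echoed near the end of any -log_view output
    srun -n 16 -c 16 --cpu_bind=cores ./chombo-crunch -log_view | grep -A 2 "Configure options"

If the macro is missing, or hypre was built separately without its own OpenMP option, changing OMP_NUM_THREADS will have no effect on the PCSetUp/PCApply times no matter how the job is launched.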