<div dir="ltr"><div class="gmail_quote"><div dir="ltr">On Thu, Jul 5, 2018 at 2:04 PM Mark Adams <<a href="mailto:mfadams@lbl.gov">mfadams@lbl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><br><div class="gmail_quote"><div dir="ltr">On Thu, Jul 5, 2018 at 12:41 PM Tobin Isaac <<a href="mailto:tisaac@cc.gatech.edu" target="_blank">tisaac@cc.gatech.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Thu, Jul 05, 2018 at 09:28:16AM -0400, Mark Adams wrote:<br>
> ><br>
> ><br>
> > Please share the results of your experiments that prove OpenMP does not<br>
> > improve performance for Mark’s users.<br>
> ><br>
> <br>
> This obviously does not "prove" anything, but my users use OpenMP primarily<br>
> because they do not distribute their mesh metadata. They cannot replicate<br>
> the mesh on every core for large-scale problems, and shared memory allows<br>
> them to survive. They have decided to use threads as opposed to MPI shared<br>
> memory. (Not a big deal; once you decide not to use distributed memory, the<br>
> damage is done, and NERSC seems to be OMP-centric, so they can probably get<br>
> better support for OMP than for MPI shared memory.)<br>
<br>
Out of curiosity, is the mesh immutable for a full simulation or adaptive?<br>
If it's immutable, that seems like a poster child for the "private by<br>
default, shared by choice" paradigm.<br></blockquote><div><br></div><div>This is Chombo so it is dynamic.</div></div></div></blockquote><div><br></div><div>We need more competitors like this :) We need to give more talks advocating serial meshes,</div><div>unstable algorithms, OpenMP, and templates.</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> </div></div></div></blockquote><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
> BTW, PETSc does support OMP; that is what I have been working on testing<br>
> for the last few weeks: first with Hypre (the numerics are screwed up by an<br>
> apparent compiler bug or a race condition of some sort; it fails at higher<br>
> optimization levels), and second with MKL kernels. The numerics are<br>
> working with MKL, and we are working on packaging this up to deliver to a<br>
> user (who will test performance).<br>
</blockquote></div></div>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.caam.rice.edu/~mk51/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div>