<div dir="ltr">Jed (and others),<div><br></div><div>Are there any more concrete plans for MPI 3 shared memory-based approaches that have developed since some discussions about this topic a few months ago on PETSc-dev? Most of the discussion trended towards making better use of local memories by the use of neighborhood collectives (which sounds great, and I was intending to look into doing a neighborhood collective implementation for PetscSF, though I got buried under other things). We probably need to also think about doing things like allowing flexibility in what ghost points get allocated for local vectors obtained from a DM if neighbors live in the same shared memory domain, etc. Though this sounds... messy.</div><div><br></div><div>--Richard</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, May 22, 2015 at 12:20 PM, Jed Brown <span dir="ltr"><<a href="mailto:jed@jedbrown.org" target="_blank">jed@jedbrown.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">"Douglas A. Augusto" <<a href="mailto:daaugusto@gmail.com">daaugusto@gmail.com</a>> writes:<br>
>> So, will the threading support be eventually reintroduced in PETSc? What are
>> the plans regarding OpenMP?
>
> The plan is to use conventional OpenMP because that seems to be what
> people want and it's easier to maintain than a more radical approach.
> We're also developing shared memory MPI-based approaches and would
> recommend that you plan to not use OpenMP (it's a horrible programming
> model -- oversynchronizing and without a concept of memory locality).
> Note that there is huge selection bias -- people almost never publish
> when flat MPI runs faster (which it often does for quality
> implementations), but are happy to declare OpenMP a success when it
> looks favorable.
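
To be concrete about what I'm asking for: my understanding is that the shared-memory MPI-based approaches would be built on MPI-3 pieces like the following. This is just an untested sketch of mine using standard MPI-3 calls (MPI_Comm_split_type, MPI_Win_allocate_shared, MPI_Win_shared_query); the sizes and the "read my on-node left neighbor" access pattern are invented for illustration, and none of this reflects how PETSc currently does anything:

/* Untested sketch: carve COMM_WORLD into per-node communicators and give the
 * ranks on a node direct load/store access to one shared segment, instead of
 * each rank keeping a private ghost copy.  All calls are standard MPI-3; the
 * sizes and the neighbor access pattern are made up for illustration. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Comm  nodecomm;
  MPI_Win   win;
  MPI_Aint  qbytes;
  int       noderank, qunit;
  double   *mine, *left = NULL;
  const int n = 1000;                 /* entries owned per rank (arbitrary) */

  MPI_Init(&argc, &argv);

  /* All ranks that can share memory end up in the same nodecomm */
  MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &nodecomm);
  MPI_Comm_rank(nodecomm, &noderank);

  /* One shared segment per node; each rank contributes n doubles */
  MPI_Win_allocate_shared((MPI_Aint)(n*sizeof(double)), sizeof(double),
                          MPI_INFO_NULL, nodecomm, &mine, &win);
  MPI_Win_lock_all(MPI_MODE_NOCHECK, win);   /* passive epoch for load/store */

  for (int i = 0; i < n; i++) mine[i] = noderank;

  /* sync/barrier/sync makes my stores visible before neighbors load them */
  MPI_Win_sync(win);
  MPI_Barrier(nodecomm);
  MPI_Win_sync(win);

  /* Read the previous on-node rank's data in place -- no ghost copy needed */
  if (noderank > 0) {
    MPI_Win_shared_query(win, noderank-1, &qbytes, &qunit, &left);
    printf("[node rank %d] first entry of rank %d: %g\n", noderank, noderank-1, left[0]);
  }

  MPI_Win_unlock_all(win);
  MPI_Win_free(&win);
  MPI_Comm_free(&nodecomm);
  MPI_Finalize();
  return 0;
}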
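
And the neighborhood-collective idea I mentioned for PetscSF would, I imagine, boil down to something like the sketch below: describe the communication graph once, then let the MPI implementation carry out the halo exchange. Again untested and mine, not PETSc's; the 1D ring topology and the one-ghost-value-per-neighbor counts are invented purely for illustration:

/* Untested sketch of the kind of halo exchange a neighborhood-collective
 * backend for PetscSF might perform.  The ring decomposition, counts, and
 * buffer sizes are made up for illustration. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Comm graphcomm;
  int      rank, size, left, right;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  left  = (rank + size - 1) % size;
  right = (rank + 1) % size;

  /* Declare who I communicate with; MPI may reorder/optimize around it */
  int sources[2]      = {left, right};
  int destinations[2] = {left, right};
  MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD, 2, sources, MPI_UNWEIGHTED,
                                 2, destinations, MPI_UNWEIGHTED,
                                 MPI_INFO_NULL, 0, &graphcomm);

  /* One ghost value to and from each neighbor */
  double sendbuf[2] = {rank, rank}, recvbuf[2];
  int    counts[2]  = {1, 1}, displs[2] = {0, 1};
  MPI_Neighbor_alltoallv(sendbuf, counts, displs, MPI_DOUBLE,
                         recvbuf, counts, displs, MPI_DOUBLE, graphcomm);

  printf("[rank %d] ghosts from %d and %d: %g %g\n", rank, left, right, recvbuf[0], recvbuf[1]);

  MPI_Comm_free(&graphcomm);
  MPI_Finalize();
  return 0;
}

Whether a backend along these lines would actually beat the existing point-to-point path is, of course, exactly the kind of thing that would need measuring rather than assuming.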