[petsc-users] Current state of OpenMP support in PETSc

Richard Mills rtm at utk.edu
Fri May 22 15:02:23 CDT 2015


Jed (and others),

Are there any more concrete plans for MPI-3 shared-memory-based approaches
that have developed since the discussions of this topic a few months ago on
PETSc-dev?  Most of the discussion trended towards making better use of
local memories through neighborhood collectives (which sounds great, and I
had intended to look into a neighborhood-collective implementation for
PetscSF, though I got buried under other things).  We probably also need to
think about allowing flexibility in which ghost points get allocated for
local vectors obtained from a DM when neighbors live in the same shared
memory domain, etc.  Though this sounds... messy.
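
To make that concrete, here is a minimal sketch (plain MPI-3 calls only,
nothing from an existing PETSc interface, and the 100-entry ghost segment
is purely illustrative) of splitting off a per-node communicator and
placing each rank's ghost storage in a shared window that on-node
neighbors can address directly:

/* Minimal sketch (not PETSc API): discover on-node ranks and allocate
 * ghost-point storage in an MPI-3 shared-memory window. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Comm  nodecomm;
  MPI_Win   win;
  MPI_Aint  sz;
  int       disp_unit, nrank, nsize;
  double   *base, *neighbor_base;

  MPI_Init(&argc, &argv);

  /* Split COMM_WORLD into per-node (shared-memory) communicators */
  MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                      MPI_INFO_NULL, &nodecomm);
  MPI_Comm_rank(nodecomm, &nrank);
  MPI_Comm_size(nodecomm, &nsize);

  /* Each rank allocates its segment inside one node-wide shared window;
     the size (100 doubles) is purely illustrative */
  MPI_Win_allocate_shared(100 * sizeof(double), sizeof(double),
                          MPI_INFO_NULL, nodecomm, &base, &win);

  /* A rank can obtain a pointer to a same-node neighbor's segment, so
     on-node "ghost" values need not be copied at all */
  if (nrank > 0) {
    MPI_Win_shared_query(win, nrank - 1, &sz, &disp_unit, &neighbor_base);
    printf("rank %d of %d on node: neighbor segment at %p\n",
           nrank, nsize, (void *)neighbor_base);
  }

  MPI_Win_free(&win);
  MPI_Comm_free(&nodecomm);
  MPI_Finalize();
  return 0;
}

Wiring something like that into how a DM hands out local vectors is, of
course, exactly where the messiness comes in.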

--Richard

On Fri, May 22, 2015 at 12:20 PM, Jed Brown <jed at jedbrown.org> wrote:

> "Douglas A. Augusto" <daaugusto at gmail.com> writes:
> > So, will threading support eventually be reintroduced in PETSc? What are
> > the plans regarding OpenMP?
>
> The plan is to use conventional OpenMP because that seems to be what
> people want and it's easier to maintain than a more radical approach.
> We're also developing shared-memory MPI-based approaches and would
> recommend that you plan not to use OpenMP (it's a horrible programming
> model -- oversynchronizing and without a concept of memory locality).
> Note that there is a huge selection bias -- people almost never publish
> when flat MPI runs faster (which it often does for quality
> implementations), but are happy to declare OpenMP a success when it
> looks favorable.
>

