[petsc-dev] GAMG error with MKL

Jed Brown jed at jedbrown.org
Mon Jul 9 12:04:23 CDT 2018


Jeff Hammond <jeff.science at gmail.com> writes:

> This is the textbook Wrong Way to write OpenMP and the reason that the
> thread-scalability of DOE applications using MPI+OpenMP sucks.  It leads to
> codes that do fork-join far too often and suffer from death by Amdahl,
> unless you do a second pass where you fuse all the OpenMP regions and
> replace the serial regions between them with critical sections or similar.
>
> This isn't how you'd write MPI, is it?  No, you'd figure out how to
> decompose your data properly to exploit locality and then implement an
> algorithm that minimizes communication and synchronization.  Do that with
> OpenMP.

The applications that would call PETSc do not do this decomposition and
the OpenMP programming model does not provide a "communicator" or
similar abstraction to associate the work done by the various threads.
It's all implicit.  The idea with PETSc's threadcomm was to provide an
object for this, but nobody wanted to call PETSc that way.  It's clear
that applications using OpenMP are almost exclusively interested in its
incrementalism, not in doing it right.  It's also pretty clear that the
OpenMP forum agrees, otherwise they would be providing abstractions for
performing collective operations across module boundaries within a
parallel region.

So the practical solution is to use OpenMP the way everyone else does,
even if the performance is not good, because at least it works with the
programming model the application has chosen.
