[petsc-dev] GAMG error with MKL

Tobin Isaac tisaac at cc.gatech.edu
Thu Jul 5 11:41:38 CDT 2018


On Thu, Jul 05, 2018 at 09:28:16AM -0400, Mark Adams wrote:
> >
> >
> > Please share the results of your experiments that prove OpenMP does not
> > improve performance for Mark’s users.
> >
> 
> This obviously does not "prove" anything, but my users use OpenMP primarily
> because they do not distribute their mesh metadata. They cannot replicate
> the mesh on every core on large-scale problems, and shared memory allows
> them to survive. They have decided to use threads as opposed to MPI shared
> memory. (Not a big deal; once you decide not to use distributed memory the
> damage is done, and NERSC seems to be OMP-centric, so they can probably get
> better support for OMP than for MPI shared memory.)

Out of curiosity, is the mesh immutable for a full simulation, or is it
adaptive?  If it's immutable, that seems like a poster child for the
"private by default, shared by choice" paradigm.
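For anyone unfamiliar with the alternative being alluded to: a minimal
sketch of "private by default, shared by choice" via MPI-3 shared-memory
windows, where one rank per node allocates the read-only mesh metadata and
the other ranks on the node map the same segment instead of replicating it.
The array name and size here are made up for illustration; this is not
anyone's actual code.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Comm  nodecomm;
  MPI_Win   win;
  double   *mesh;            /* hypothetical mesh-metadata array */
  MPI_Aint  nbytes;
  int       noderank, disp;

  MPI_Init(&argc, &argv);
  /* One communicator per shared-memory node */
  MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                      MPI_INFO_NULL, &nodecomm);
  MPI_Comm_rank(nodecomm, &noderank);

  /* Only rank 0 on each node allocates; everyone else asks for 0 bytes */
  nbytes = (noderank == 0) ? 1000000 * sizeof(double) : 0;
  MPI_Win_allocate_shared(nbytes, sizeof(double), MPI_INFO_NULL,
                          nodecomm, &mesh, &win);
  /* Non-zero ranks map rank 0's segment: shared by choice */
  if (noderank) MPI_Win_shared_query(win, 0, &nbytes, &disp, &mesh);

  MPI_Win_fence(0, win);
  if (noderank == 0) mesh[0] = 42.0;  /* rank 0 fills the metadata once */
  MPI_Win_fence(0, win);
  /* All ranks on the node now read the same copy, with no replication */
  printf("node rank %d sees mesh[0] = %g\n", noderank, mesh[0]);

  MPI_Win_free(&win);
  MPI_Comm_free(&nodecomm);
  MPI_Finalize();
  return 0;
}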

> BTW, PETSc does support OMP; that is what I have been working on testing
> for the last few weeks. First with Hypre (the numerics are screwed up by an
> apparent compiler bug or a race condition of some sort; it fails at higher
> levels of optimization), and second with MKL kernels. The numerics are
> working with MKL, and we are packaging this up to deliver to a user (they
> will test performance).
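For readers following along: assuming the MKL kernels in question are
PETSc's AIJMKL matrix types (an assumption on my part; Mark does not name
them), opting in looks roughly like the sketch below, either in code or via
-mat_type aijmkl on the command line. This is a minimal illustration, not
the packaging Mark describes.

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, 100, 100);CHKERRQ(ierr);
  /* Route MatMult etc. through MKL's sparse kernels; requires a PETSc
     build configured against MKL */
  ierr = MatSetType(A, MATAIJMKL);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);  /* lets -mat_type override */
  ierr = MatSetUp(A);CHKERRQ(ierr);
  /* ... assemble and use A as usual ... */
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}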

