[petsc-dev] GAMG error with MKL

Mark Adams mfadams at lbl.gov
Thu Jul 5 14:04:26 CDT 2018


On Thu, Jul 5, 2018 at 12:41 PM Tobin Isaac <tisaac at cc.gatech.edu> wrote:

> On Thu, Jul 05, 2018 at 09:28:16AM -0400, Mark Adams wrote:
> > >
> > >
> > > Please share the results of your experiments that prove OpenMP does not
> > > improve performance for Mark’s users.
> > >
> >
> > This obviously does not "prove" anything, but my users use OpenMP
> > primarily because they do not distribute their mesh metadata. They cannot
> > replicate the mesh on every core for large-scale problems, and shared
> > memory allows them to survive. They have decided to use threads rather
> > than MPI shared memory. (Not a big deal: once you decide not to use
> > distributed memory the damage is done, and NERSC seems to be OMP-centric,
> > so they can probably get better support for OMP than for MPI shared
> > memory.)
>
> Out of curiosity, is the mesh immutable for a full simulation or adaptive?
> If it's immutable, that seems like a poster child for the "private by
> default, shared by choice" paradigm.
>

This is Chombo, so it is dynamic.
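
For readers skimming the thread, the "MPI shared memory" alternative mentioned
above means MPI-3 shared-memory windows: one rank per node allocates the
metadata and the other ranks on that node map and read it in place instead of
each keeping a private copy. A minimal sketch, with an invented array name and
size (this is not Chombo or PETSc code):

/*
 * Sketch only: node-local sharing of read-mostly mesh metadata through an
 * MPI-3 shared-memory window, the alternative to OpenMP threads mentioned
 * above.  The metadata array and its size are hypothetical.
 */
#include <mpi.h>

int main(int argc, char **argv)
{
  MPI_Comm  nodecomm;
  MPI_Win   win;
  double   *metadata = NULL;              /* node-shared metadata (hypothetical) */
  MPI_Aint  nbytes, i, nentries = 1 << 20;
  int       noderank, dispunit;

  MPI_Init(&argc, &argv);

  /* One communicator per shared-memory node. */
  MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &nodecomm);
  MPI_Comm_rank(nodecomm, &noderank);

  /* Rank 0 of each node allocates the segment; the others allocate 0 bytes
     and query rank 0's base address so all ranks see the same memory. */
  MPI_Win_allocate_shared(noderank == 0 ? nentries * (MPI_Aint)sizeof(double) : 0,
                          (int)sizeof(double), MPI_INFO_NULL, nodecomm, &metadata, &win);
  if (noderank != 0) MPI_Win_shared_query(win, 0, &nbytes, &dispunit, &metadata);

  MPI_Win_fence(0, win);                                       /* open epoch      */
  if (noderank == 0) {
    for (i = 0; i < nentries; i++) metadata[i] = (double)i;    /* fill once       */
  }
  MPI_Win_fence(0, win);                                       /* stores visible  */

  /* ... every rank on the node can now read metadata[] without a copy ... */

  MPI_Win_free(&win);
  MPI_Comm_free(&nodecomm);
  MPI_Finalize();
  return 0;
}

A dynamic (adaptive) mesh complicates this picture, since the shared segment
would have to be re-synchronized or reallocated after every regrid.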


>
> > BTW, PETSc does support OMP; that is what I have been working on testing
> > for the last few weeks: first with Hypre (the numerics are screwed up by
> > an apparent compiler bug or a race condition of some sort; it fails at
> > higher optimization levels), and second with MKL kernels. The numerics
> > are working with MKL, and we are working on packaging this up to deliver
> > to a user (they will test performance).
>
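
In case it helps to see what "MKL kernels" means in practice here, below is a
hedged sketch of the kind of driver involved (not the actual code being
packaged up; the problem, sizes, and options are invented). It assumes a PETSc
build configured against MKL BLAS/LAPACK with --with-openmp, plus
--download-hypre if you want to try the hypre path:

/*
 * Sketch only: a toy 1-D Laplacian whose matrix type is chosen at run time,
 * so the MKL-backed AIJ kernels can be selected with, e.g.,
 *
 *   OMP_NUM_THREADS=4 mpiexec -n 2 ./ex -mat_type aijmkl -ksp_type cg -pc_type gamg
 *
 * (or -pc_type hypre when PETSc was configured with --download-hypre).
 */
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat            A;
  Vec            x, b;
  KSP            ksp;
  PetscInt       i, Istart, Iend, n = 100000;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = PetscOptionsGetInt(NULL, NULL, "-n", &n, NULL);CHKERRQ(ierr);

  /* MatSetFromOptions() honours -mat_type (aij, aijmkl, ...). */
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);

  /* Standard 1-D Laplacian stencil. */
  ierr = MatGetOwnershipRange(A, &Istart, &Iend);CHKERRQ(ierr);
  for (i = Istart; i < Iend; i++) {
    if (i > 0)   {ierr = MatSetValue(A, i, i-1, -1.0, INSERT_VALUES);CHKERRQ(ierr);}
    if (i < n-1) {ierr = MatSetValue(A, i, i+1, -1.0, INSERT_VALUES);CHKERRQ(ierr);}
    ierr = MatSetValue(A, i, i, 2.0, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = MatCreateVecs(A, &x, &b);CHKERRQ(ierr);
  ierr = VecSet(b, 1.0);CHKERRQ(ierr);

  /* Solver and preconditioner (e.g. GAMG) come from the options database. */
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

With a setup along these lines, the number of threads used inside the MKL
sparse kernels would be controlled by OMP_NUM_THREADS (or MKL_NUM_THREADS),
independently of the MPI rank count.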