[petsc-dev] GAMG error with MKL

Mark Adams mfadams at lbl.gov
Thu Jul 5 08:28:16 CDT 2018


>
>
> Please share the results of your experiments that prove OpenMP does not
> improve performance for Mark’s users.
>

This obviously does not "prove" anything, but my users use OpenMP primarily
because they do not distribute their mesh metadata. They cannot replicate
the mesh on every core for large-scale problems, and shared memory allows
them to survive. They have decided to use threads rather than MPI shared
memory. (Not a big deal; once you decide not to use distributed memory the
damage is done, and NERSC seems to be OpenMP-centric, so they can probably
get better support for OpenMP than for MPI shared memory.)

BTW, PETSc does support OpenMP; that is what I have been testing for the
last few weeks. First with hypre (the numerics are screwed up by an
apparent compiler bug or a race condition of some sort; it fails at higher
optimization levels), and second with MKL kernels. The numerics are working
with MKL, and we are packaging this up to deliver to a user (they will test
performance).
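For anyone wanting to try the MKL path, a minimal sketch of the build and
run configuration might look like the following. This is an illustration,
not the exact setup used above: the configure options and the `aijmkl`
matrix type exist in PETSc, but the MKL install path and thread counts are
placeholder assumptions you would adjust for your own machine.

```shell
# Build PETSc with OpenMP enabled and MKL as the BLAS/LAPACK backend.
# $MKLROOT is assumed to point at your Intel MKL installation.
./configure --with-openmp \
            --with-blaslapack-dir=$MKLROOT

# Run with the MKL-backed sparse matrix type so MatMult etc. use the
# threaded MKL sparse kernels; OMP_NUM_THREADS controls the thread count.
export OMP_NUM_THREADS=4
mpiexec -n 2 ./my_app -mat_type aijmkl -pc_type gamg
```

The key runtime switch is `-mat_type aijmkl`, which swaps the default AIJ
kernels for MKL's sparse routines without any source changes.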
