[petsc-users] with-openmp error with hypre

Mark Adams mfadams at lbl.gov
Wed Feb 14 04:36:09 CST 2018


> > We have been tracking down what look like compiler bugs, and we have
> > only taken a peek at performance to make sure we are not wasting our
> > time with threads.
>
>    You are wasting your time. There are better ways to deal with global
> metadata than with threads.
>

OK, while I agree with Barry, let me just add this for Baky's benefit, if
nothing else.

You can write efficient code with thread programming models (data shared
by default), but a thread programming model does not help you develop the
good data models that efficient programs require. And you can write crappy
code with MPI shared memory: while it is a good start, just putting your
shared memory in an MPI shared memory window will not make your code
faster. Experience indicates that, in general, thread models are less
efficient in terms of programmer resources. Threads are a pain in the long
run.
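To make that concrete, here is a minimal sketch (mine, not from this
thread, with illustrative array sizes) of what "putting your shared memory
in an MPI shared memory window" amounts to: ranks on the same node allocate
one window and can then load/store into each other's segments directly.
Nothing in it designs the data decomposition or the synchronization for
you, which is the point above.

/* Minimal sketch: node-local shared memory via an MPI-3 shared window.
   The segment size (1000 doubles per rank) is an illustrative assumption. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Communicator containing only the ranks that share this node's memory. */
    MPI_Comm nodecomm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &nodecomm);

    int nrank, nsize;
    MPI_Comm_rank(nodecomm, &nrank);
    MPI_Comm_size(nodecomm, &nsize);

    /* Each rank contributes a segment to one node-local shared window. */
    MPI_Aint local_bytes = 1000 * sizeof(double);
    double *mybase;
    MPI_Win win;
    MPI_Win_allocate_shared(local_bytes, sizeof(double), MPI_INFO_NULL,
                            nodecomm, &mybase, &win);

    /* Query the base address of rank 0's segment; other ranks' segments are
       then reachable with ordinary loads and stores. */
    MPI_Aint qsize;
    int qdisp;
    double *base0;
    MPI_Win_shared_query(win, 0, &qsize, &qdisp, &base0);

    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
    mybase[0] = (double)nrank;      /* write into my own segment */
    MPI_Win_sync(win);
    MPI_Barrier(nodecomm);
    MPI_Win_sync(win);
    printf("rank %d of %d sees rank 0's first entry = %g\n",
           nrank, nsize, base0[0]); /* read a neighbor's segment */
    MPI_Win_unlock_all(win);

    MPI_Win_free(&win);
    MPI_Comm_free(&nodecomm);
    MPI_Finalize();
    return 0;
}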

While this experience (PETSc/hypre fails when going from -O1 to -O2 on KNL
with -with-openmp=1, even on flat MPI runs) is only anecdotal, and HPC is
going to involve pain no matter what you do, this may be an example of
threads biting you.

It is easier for everyone, compiler writers and programmers alike, to
reason about a program in which each execution stream lives in its own
address space; you need to decompose your data at a fine level to get good
performance anyway, and you can use MPI shared memory when you really need
it. I wish Chombo would get rid of OpenMP, but that is not likely to happen
any time soon.

Mark