[petsc-users] with-openmp error with hypre

Matthew Knepley knepley at gmail.com
Tue Feb 13 12:28:32 CST 2018


On Tue, Feb 13, 2018 at 11:30 AM, Smith, Barry F. <bsmith at mcs.anl.gov>
wrote:
>
> > On Feb 13, 2018, at 10:12 AM, Mark Adams <mfadams at lbl.gov> wrote:
> >
> > FYI, we were able to get hypre with threads working on KNL on Cori by
> going down to -O1 optimization. We are getting about 2x speedup with 4
> threads and 16 MPI processes per socket. Not bad.
>
>   In other words, is using 16 MPI processes with 4 threads per process
> twice as fast as running with 64 MPI processes?  Could you send the
> -log_view output for these two cases?


Is that what you mean? I took it to mean

  We ran 16 MPI processes and got time T.
  We ran 16 MPI processes with 4 threads each and got time T/2.

I would likely eat my shirt if 16x4 was 2x faster than 64.
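
For reference, a minimal sketch of how the two Cori cases might be launched
to collect the -log_view data Barry asks for (the binary name, node layout,
and srun binding flags below are assumptions, not what Mark actually ran):

    # flat MPI: 64 ranks, one hardware thread each
    export OMP_NUM_THREADS=1
    srun -n 64 -c 1 ./app -pc_type hypre -log_view > flat_mpi_64.log

    # hybrid: 16 ranks with 4 OpenMP threads per rank
    export OMP_NUM_THREADS=4
    srun -n 16 -c 4 ./app -pc_type hypre -log_view > hybrid_16x4.log

Comparing the hypre setup and solve events in the two logs would show
whether the reported 2x is against the same total core count or against a
16-rank flat run.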

  Matt


>
> >
> > The error (flatlined or slightly diverging hypre solves) occurred even
> in flat MPI runs with openmp=1.
>
>   But the answers are wrong as soon as you turn on OpenMP?
>
>    Thanks
>
>     Barry
>
>
> >
> > We are going to test the Haswell nodes next.
> >
> > On Thu, Jan 25, 2018 at 4:16 PM, Mark Adams <mfadams at lbl.gov> wrote:
> > Baky (cc'ed) is getting a strange error on Cori/KNL at NERSC. Using
> maint, it runs fine with -with-openmp=0 and with -with-openmp=1 and gamg,
> but with hypre and -with-openmp=1, even running with flat MPI, the solver
> seems to flatline (see attached, and notice that the residual starts to
> creep after a few time steps).
> >
> > Maybe you can suggest a hypre test that I can run?
> >
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/