[petsc-users] with-openmp error with hypre

Smith, Barry F. bsmith at mcs.anl.gov
Tue Feb 13 10:30:18 CST 2018



> On Feb 13, 2018, at 10:12 AM, Mark Adams <mfadams at lbl.gov> wrote:
> 
> FYI, we were able to get hypre with threads working on KNL on Cori by going down to -O1 optimization. We are getting about 2x speedup with 4 threads and 16 MPI processes per socket. Not bad.

  In other words, using 16 MPI processes with 4 threads per process is twice as fast as running with 64 MPI processes?  Could you send the -log_view output for these two cases?
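
  [For reference, the two cases could be collected with something like the following job-script fragment -- a sketch assuming a SLURM system such as Cori, with ./app standing in for the actual application binary; exact binding flags depend on the machine:]

```shell
# Case 1: flat MPI, 64 ranks, no threading
export OMP_NUM_THREADS=1
srun -n 64 ./app -log_view > flat_mpi.log

# Case 2: hybrid, 16 ranks x 4 OpenMP threads each
export OMP_NUM_THREADS=4
srun -n 16 -c 4 --cpu-bind=cores ./app -log_view > hybrid.log
```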

> 
> The error, flatlined or slightly diverging hypre solves, occurred even in flat MPI runs with openmp=1.

  But the answers are wrong as soon as you turn on OpenMP?

   Thanks

    Barry


> 
> We are going to test the Haswell nodes next.
> 
> On Thu, Jan 25, 2018 at 4:16 PM, Mark Adams <mfadams at lbl.gov> wrote:
> Baky (cc'ed) is getting a strange error on Cori/KNL at NERSC. Using maint it runs fine with -with-openmp=0, and it runs fine with -with-openmp=1 and gamg, but with hypre and -with-openmp=1, even running with flat MPI, the solver seems to flatline (see attached and notice that the residual starts to creep after a few time steps).
> 
> Maybe you can suggest a hypre test that I can run?
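> 
> [One simple sanity check -- a sketch, assuming a PETSc source tree of that era and that hypre was built into the install -- would be to run a stock KSP tutorial with BoomerAMG and watch the true residual:]
> 
> ```shell
> # Build and run a standard PETSc Laplacian example with hypre/BoomerAMG
> cd $PETSC_DIR/src/ksp/ksp/examples/tutorials
> make ex2
> mpiexec -n 4 ./ex2 -m 100 -n 100 -pc_type hypre -pc_hypre_type boomeramg \
>     -ksp_monitor_true_residual -ksp_converged_reason
> ```
> 
> [Comparing the true-residual history between the openmp=0 and openmp=1 builds should show whether the creep is reproducible outside the application.]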
> 


