[petsc-dev] GAMG error with MKL

Jed Brown jed at jedbrown.org
Tue Jul 10 11:37:38 CDT 2018


Jeff Hammond <jeff.science at gmail.com> writes:

> If PETSc was an application, it could do whatever it wanted, but it's not.
> If PETSc is a library that intends to meet the needs of HPC applications,
> it needs to support the programming models the applications are using.  Or
> I suppose you will continue to disparage everyone who doesn't bow down and
> worship flat MPI on homogeneous big-core machines as a divine execution
> model until your users abandon you for otherwise inferior software that is
> willing to embrace user requirements.

At present, we have users wanting to call PETSc interfaces at various
levels of granularity and have PETSc use threads internally.  To be
concrete, they'll call everything from a nonlinear solver to a vector
dot product and expect threads to be used internally.  There is a much
smaller set of users that would like to call collective operations [1]
from within an omp parallel block.  OpenMP does not provide a
"communicator" or any similar abstraction, and there are very few
precedents for collective operations that cross module boundaries.
It's super fragile (a terrible user experience) for a library to
assume the scope of parallelism at a call site.
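
To make the contrast concrete, here is a minimal C sketch of the two
call-site patterns (the helper and names are hypothetical; x and y are
assumed to be created and assembled elsewhere, and error checking is
elided for brevity):

    #include <petscvec.h>

    /* Hypothetical helper, for illustration only. */
    static PetscErrorCode DotDemo(Vec x, Vec y)
    {
      PetscScalar dot;

      /* Common pattern: the caller is serial at this point, so PETSc
         is free to use threads internally for the reduction. */
      VecDot(x, y, &dot);

      /* Rare, fragile pattern: every thread of an existing parallel
         region reaches the same collective call.  OpenMP gives PETSc
         no communicator-like handle, so the library cannot tell
         whether one thread or all of them called it. */
      #pragma omp parallel
      {
        PetscScalar t;
        VecDot(x, y, &t);  /* scope of parallelism is ambiguous here */
      }
      return 0;
    }

With MPI, the communicator argument makes the scope of parallelism
explicit at every call site; OpenMP offers no analogous handle to pass.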

[1] It's common to want to call non-collective operations like
MatSetValues(), but collective semantics are rarely requested in this
mode.  Users would need those collective semantics if they followed
your recommendations about OpenMP performance.
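
A hypothetical sketch of that mode, not a recommendation (A, rstart,
and rend are assumed to be set up elsewhere, and stock MatSetValues()
is not guaranteed thread-safe):

    #include <petscmat.h>

    /* Sketch only: MatSetValues() is NOT guaranteed thread-safe, so
       this pattern additionally requires a thread-safe insertion path
       and disjoint rows per thread.  A is assumed preallocated;
       [rstart, rend) is this rank's row range. */
    static PetscErrorCode ThreadedAssembly(Mat A, PetscInt rstart,
                                           PetscInt rend)
    {
      #pragma omp parallel for
      for (PetscInt i = rstart; i < rend; i++) {
        PetscScalar v = 2.0;
        MatSetValues(A, 1, &i, 1, &i, &v, INSERT_VALUES); /* non-collective */
      }
      /* The collective operations stay outside the parallel region. */
      MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
      MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
      return 0;
    }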

