[petsc-dev] Hybrid MPI/OpenMP reflections

Jed Brown jedbrown at mcs.anl.gov
Thu Aug 8 10:06:47 CDT 2013


Karl Rupp <rupp at mcs.anl.gov> writes:
> When using good preconditioners, spMV is essentially never the 
> bottleneck and hence I don't think a separate communication thread 
> should be implemented in PETSc. Instead, such a fallback should be part 
> of a good MPI implementation.

SpMV is an important part of most of those scalable preconditioners.  In
multigrid, the SpMV operations are the grid transfer operators, the
residual computations, and the Chebyshev- or Krylov-accelerated smoothers.
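
For concreteness, a rough sketch of a degree-k Chebyshev smoother written
directly in terms of MatMult (this is not PETSc's implementation, which is
KSPCHEBYSHEV; the function name and the eigenvalue bounds lmin/lmax are
placeholders).  Each application costs one SpMV per degree plus a few
vector operations, so SpMV throughput bounds the smoothing cost:

#include <petscksp.h>

/* Illustrative degree-k Chebyshev smoothing of A x = b, given eigenvalue
 * bounds [lmin, lmax] for the operator.  One MatMult (SpMV) per degree. */
PetscErrorCode ChebyshevSmoothSketch(Mat A, Vec b, Vec x,
                                     PetscReal lmin, PetscReal lmax,
                                     PetscInt degree)
{
  Vec            r, d;
  PetscReal      theta = 0.5*(lmax + lmin), delta = 0.5*(lmax - lmin);
  PetscReal      sigma = theta/delta, rho = 1.0/sigma, rho_old;
  PetscInt       k;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = VecDuplicate(x, &r);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &d);CHKERRQ(ierr);

  ierr = MatMult(A, x, r);CHKERRQ(ierr);        /* SpMV: r = A x       */
  ierr = VecAYPX(r, -1.0, b);CHKERRQ(ierr);     /* r = b - A x         */
  ierr = VecCopy(r, d);CHKERRQ(ierr);
  ierr = VecScale(d, 1.0/theta);CHKERRQ(ierr);  /* d = r / theta       */
  ierr = VecAXPY(x, 1.0, d);CHKERRQ(ierr);      /* x = x + d           */

  for (k = 1; k < degree; k++) {
    rho_old = rho;
    rho     = 1.0/(2.0*sigma - rho_old);
    ierr = MatMult(A, x, r);CHKERRQ(ierr);      /* one SpMV per degree */
    ierr = VecAYPX(r, -1.0, b);CHKERRQ(ierr);   /* r = b - A x         */
    ierr = VecAXPBY(d, 2.0*rho/delta, rho*rho_old, r);CHKERRQ(ierr);
                                   /* d = rho*rho_old*d + (2 rho/delta) r */
    ierr = VecAXPY(x, 1.0, d);CHKERRQ(ierr);    /* x = x + d           */
  }
  ierr = VecDestroy(&r);CHKERRQ(ierr);
  ierr = VecDestroy(&d);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The residual evaluation and the grid transfers (MatMult/MatMultTranspose
with the interpolation operator) are SpMV-shaped in the same way.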