[petsc-dev] Hybrid MPI/OpenMP reflections
Jed Brown
jedbrown at mcs.anl.gov
Thu Aug 8 17:42:25 CDT 2013
Karl Rupp <rupp at mcs.anl.gov> writes:
> Hi,
>
>>> When using good preconditioners, spMV is essentially never the
>>> bottleneck and hence I don't think a separate communication thread
>>> should be implemented in PETSc. Instead, such a fallback should be part
>>> of a good MPI implementation.
>>
>> SpMV is an important part of most of those scalable preconditioners. In
>> multigrid, those are grid transfer operators, residuals, and Chebyshev
>> or Krylov-accelerated smoothers.
>
> From the context I was referring to SpMV for a 'full system matrix' as
> part of the outer Krylov solver, since that was the topic of the paper
> (cf. third paragraph). In preconditioners you may use different storage
> formats, not communicate across nodes, etc.
It's exactly the same operation in the Chebyshev smoother, for example.
There is no reason to use a different format there (and PETSc doesn't), and
"not communicating across nodes" in the preconditioner is an egregious
crime that prevents a scalable method. So I would say that fine-grid SpMV
performance is of similarly high importance with a "scalable
preconditioner" as with Jacobi.