[petsc-dev] Generality of VecScatter

Jed Brown jedbrown at mcs.anl.gov
Fri Nov 25 09:57:25 CST 2011


On Fri, Nov 25, 2011 at 09:45, Mark F. Adams <mark.adams at columbia.edu> wrote:

> So you want MPI to provide an incremental path to a model that will work
> at exascale, with pthreads as part of that?  So you are in the MPI +
> pthreads camp?


I don't know what Barry wants, but I like the idea of building the library
interface that we want on top of MPI+pthreads. I see them merely as
portable network- and kernel-level abstractions, not as anything intended
to provide a good interface for applications to call directly, or even to
use directly when implementing most PETSc functionality.

It is a noble goal to offer a unified view of shared/threaded and
distributed memory. I don't know how feasible that is, so it may always
involve some explicit hierarchy. The distributed-memory primitives might
just provide a layout informed by NUMA, followed by threads or CUDA-style
kernels run locally; a sketch of that two-level structure follows.
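
To make that hierarchy concrete, here is a minimal sketch of the
two-level structure, assuming a ring of ranks that each exchange one
ghost value and a hypothetical Slice/local_kernel pair (none of this
is PETSc API):

  #include <mpi.h>
  #include <pthread.h>
  #include <stdlib.h>

  /* Hypothetical local kernel: each thread updates a contiguous slice,
     which a NUMA-aware allocator would place near that thread. */
  typedef struct { double *x; int begin, end; } Slice;

  static void *local_kernel(void *arg) {
    Slice *s = (Slice *)arg;
    for (int i = s->begin; i < s->end; i++) s->x[i] *= 2.0;
    return NULL;
  }

  int main(int argc, char **argv) {
    int rank, size, n = 1 << 20, nthreads = 4;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *x = malloc((n + 2) * sizeof(double)); /* 2 ghost slots */
    for (int i = 0; i < n + 2; i++) x[i] = (double)rank;

    /* Distributed-memory phase: exchange ghost values with neighbors. */
    int left = (rank + size - 1) % size, right = (rank + 1) % size;
    MPI_Sendrecv(&x[1], 1, MPI_DOUBLE, left, 0,
                 &x[n + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* Shared-memory phase: threads run the kernel on owned slices. */
    pthread_t tid[4];
    Slice s[4];
    for (int t = 0; t < nthreads; t++) {
      s[t] = (Slice){x, 1 + t * (n / nthreads),
                        1 + (t + 1) * (n / nthreads)};
      pthread_create(&tid[t], NULL, local_kernel, &s[t]);
    }
    for (int t = 0; t < nthreads; t++) pthread_join(tid[t], NULL);

    free(x);
    MPI_Finalize();
    return 0;
  }

The point is only the structure: the communication layer sees ranks
and messages, the kernel layer sees slices of local memory, and the
library interface we want would hide both behind something
VecScatter-like.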

I would really like to see some performance numbers for OpenCL running on
CPUs. It would make our lives simpler (once OpenCL supports indirect
calls) if we could just use that, but I'm not convinced that it will
deliver good performance, given the different memory model.
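
Measuring that would not be hard. Below is a minimal sketch, assuming
an OpenCL 1.x runtime that exposes a CL_DEVICE_TYPE_CPU device; it
times a placeholder kernel via event profiling (a real test should use
a bandwidth-bound stencil or SpMV, not this):

  #include <CL/cl.h>
  #include <stdio.h>

  /* Placeholder kernel; floats avoid requiring the fp64 extension. */
  static const char *src =
    "__kernel void scale(__global float *x) {"
    "  x[get_global_id(0)] *= 2.0f;"
    "}";

  int main(void) {
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    if (clGetDeviceIDs(plat, CL_DEVICE_TYPE_CPU, 1, &dev, NULL)
        != CL_SUCCESS) {
      fprintf(stderr, "no CPU OpenCL device\n");
      return 1;
    }
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q =
      clCreateCommandQueue(ctx, dev, CL_QUEUE_PROFILING_ENABLE, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    size_t n = 1 << 24;
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                n * sizeof(float), NULL, NULL);
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);

    /* Time one kernel launch using the queue's profiling counters. */
    cl_event ev;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, &ev);
    clWaitForEvents(1, &ev);
    cl_ulong t0, t1;
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_START,
                            sizeof(t0), &t0, NULL);
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_END,
                            sizeof(t1), &t1, NULL);
    printf("kernel time: %g ms\n", (t1 - t0) * 1e-6);

    clReleaseMemObject(buf);
    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
  }

Comparing that number against the same loop written with plain threads
would tell us what the OpenCL memory model and launch overhead actually
cost on a CPU.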