[petsc-dev] Generality of VecScatter

Matthew Knepley knepley at gmail.com
Thu Nov 24 16:43:46 CST 2011


On Thu, Nov 24, 2011 at 4:40 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:

> On Thu, Nov 24, 2011 at 16:26, Matthew Knepley <knepley at gmail.com> wrote:
>
>> This is one great reason that vectorization works and pthreads is crap. I
>> am not totally sold on the thread block system, but
>> it looks like genius compared to pthreads. I would start there.
>>
>
> Suppose you had a higher-level way to describe data movement (across
> shared and distributed memory) between invocations of CUDA/OpenCL kernels.
> How far would that get you?
>

Move this question to Barry's new thread. I think it would get you quite
far. The key questions for me are how the user will describe a
communication pattern, and how we will automate the generation of MPI
from that specification. Sieve has an attempt at this buried in it,
inspired by the "manifold" idea.

   Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
