<div class="gmail_quote">On Fri, Nov 25, 2011 at 16:48, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div>Synopsis of what I said before to elicit comment:</div><div><br></div><div>1) I think the only thing we can learn from Brook, CUDA, OpenCL is that you identify threads by a grid ID.</div>
<div><br></div><div>2) Things like BLAS are so easy that you can move up to the streaming model, but this does not work for </div><div><br></div><div> - FD and FEM residual evaluation (Jed has an FD example with Aron, SNES ex52 is my FEM example)</div>
<div><br></div><div> - FD and FEM Jacobian evaluation</div></blockquote><div><br></div><div>I think these are also probably too simple. Discontinuous Galerkin with overlapped flux computations and interior integration would be a somewhat better model problem. Nonlinear Gauss-Seidel in a multigrid context would be another.</div>
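For concreteness, here is a minimal sketch of why pointwise nonlinear Gauss-Seidel is a harder model problem than BLAS-style streaming: each update reads neighbors that may already have been updated in the same sweep, so a GPU version needs coloring or wavefront scheduling rather than one independent thread per entry. Everything below (the 1D -u'' + u^3 = f residual, the function names) is illustrative, not code from ex52 or PETSc.

#include <math.h>
#include <stddef.h>

/* Illustrative residual for -u'' + u^3 = f with second-order FD.
 * Only the data dependence matters here, not the physics. */
static double residual(double ul, double u, double ur, double f, double h)
{
  return (2.0 * u - ul - ur) / (h * h) + u * u * u - f;
}

/* One pointwise nonlinear Gauss-Seidel sweep: at each interior point, take a
 * few Newton steps on the scalar equation with the neighbors held fixed.
 * Note that u[i-1] was already updated in this sweep; that dependence is
 * exactly what breaks the "one independent thread per entry" streaming model. */
void ngs_sweep(size_t n, double *u, const double *f, double h)
{
  for (size_t i = 1; i + 1 < n; i++) {
    for (int it = 0; it < 3; it++) {
      double r = residual(u[i - 1], u[i], u[i + 1], f[i], h);
      double J = 2.0 / (h * h) + 3.0 * u[i] * u[i]; /* d r / d u_i */
      u[i] -= r / J;
    }
  }
}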
> 3) If you look at ex52 I do a "thread transposition", meaning threads start working on different areas of
> memory, which looks like a transpose on a 2D grid. I can do this using shared memory for the vector group.
>
> The API is very simple. Give grid indices to the thread, and it's done in CUDA and OpenCL essentially the
> same way.

As is, this seems to assume a flat memory model, and the memory access only appears in how the kernel uses threadIdx to determine what memory to operate on. If we could say something about this up-front, then the library could schedule tasks relative to memory and perhaps handle some updates for distributed memory.
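To make the concern concrete, here is a minimal sketch in the style being discussed (not the ex52 kernel; the names and the axpy-like body are illustrative): the thread computes its global grid ID, and the only place the memory access pattern exists is inside the kernel, where that ID is turned into an address. Nothing outside the kernel can see which parts of x and y a given launch will touch.

// Each thread computes its global grid ID and uses it to index memory.
// Which entries of x and y are read or written is implicit in the kernel body.
__global__ void axpy_like(int n, double a, const double *x, double *y)
{
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) y[i] = a * x[i] + y[i];
}

// Launch: the runtime sees only a grid of threads over flat device memory.
//   axpy_like<<<(n + 255) / 256, 256>>>(n, a, d_x, d_y);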
Can we have a way to specify the required memory access before launching the kernels?
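One hypothetical shape such an interface could take (everything below is invented for illustration; it is not an existing PETSc, CUDA, or OpenCL API): the caller declares, per kernel argument, which index range it will read or write, and the library can use that declaration to place the launch relative to the data and to schedule the distributed-memory updates implied by the write set.

// Hypothetical declaration of a kernel's memory footprint before launch.
// None of these types or functions exist; the point is only that the access
// pattern becomes visible to the library instead of being buried in how the
// kernel uses threadIdx.
typedef enum { ACCESS_READ, ACCESS_WRITE, ACCESS_READ_WRITE } AccessMode;

typedef struct {
  void      *base;   /* device pointer the kernel will index   */
  size_t     lo, hi; /* half-open index range [lo, hi) touched */
  AccessMode mode;   /* read, write, or both                   */
} AccessSpec;

/* The library would inspect the specs, schedule the launch near the data,
 * and post any ghost exchanges implied by the write set. */
int LaunchWithAccess(const void *kernel, void **args,
                     const AccessSpec *specs, int nspecs);

/* Usage sketch for the axpy_like kernel above:
 *   AccessSpec specs[] = {
 *     { d_x, 0, (size_t)n, ACCESS_READ       },
 *     { d_y, 0, (size_t)n, ACCESS_READ_WRITE },
 *   };
 *   LaunchWithAccess((void *)axpy_like, args, specs, 2);
 */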