On Thu, Nov 24, 2011 at 4:49 PM, Jed Brown <jedbrown@mcs.anl.gov> wrote:

<div class="gmail_quote"><div class="im">On Thu, Nov 24, 2011 at 16:41, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>></span> wrote: <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div><div></div></div><div class="gmail_quote"><div>Let's start with the "lowest" level, or at least the smallest. I think the only sane way to program for portable performance here</div><div>is using CUDA-type vectorization. This SIMT style is explained well here <a href="http://www.yosefk.com/blog/simd-simt-smt-parallelism-in-nvidia-gpus.html" target="_blank">http://www.yosefk.com/blog/simd-simt-smt-parallelism-in-nvidia-gpus.html</a></div>
<div>I think this is much easier and more portable than the intrinsics for Intel, and more performant and less error prone than threads.</div><div>I think you can show that it will accomplish anything we want to do. OpenCL seems to have capitulated on this point. Do we agree</div>
<div>here?</div></div></blockquote><div><br></div></div><div>Moving from the other thread, I asked how far we could get with an API for high-level data movement combined with CUDA/OpenCL kernels. Matt wrote</div><div><br>
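
To pin down what I mean by SIMT style before going further, here is
roughly the kind of kernel I have in mind (a toy AXPY sketch of my own,
not anything from PETSc): you write the scalar update once per element,
and the launch configuration supplies the parallelism instead of Intel
intrinsics or explicit threads.

__global__ void axpy(int n, double a, const double *x, double *y)
{
  /* One (virtual) thread per vector entry; the hardware maps these
     onto warps/SIMD lanes for us. */
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) y[i] += a * x[i];
}

int main()
{
  const int n = 1 << 20;
  double *x, *y;
  cudaMalloc(&x, n * sizeof(double));
  cudaMalloc(&y, n * sizeof(double));
  cudaMemset(x, 0, n * sizeof(double));
  cudaMemset(y, 0, n * sizeof(double));
  axpy<<<(n + 255) / 256, 256>>>(n, 2.0, x, y);
  cudaDeviceSynchronize();
  cudaFree(x);
  cudaFree(y);
  return 0;
}

The same source expresses the operation for any warp or vector width,
which is the portability argument.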

> Moving from the other thread, I asked how far we could get with an API
> for high-level data movement combined with CUDA/OpenCL kernels. Matt
> wrote:
>
>> I think it will get you quite far, and the point for me will be how
>> the user will describe a communication pattern, and how we will
>> automate the generation of MPI from that specification. Sieve has an
>> attempt to do this buried in it, inspired by the "manifold" idea.
>
> Now that CUDA supports function pointers and similar, we can write real
> code in it. Whenever OpenCL gets around to supporting them, we'll be
> able to write real code for multicore and see how it performs. To unify
> the distributed and manycore aspects, we need some sort of hierarchical
> abstraction for NUMA and a communicator-like object to maintain scope.
> After applying a local-distribution filter, we might be able to express
> this using coloring plus the parallel primitives that I have been
> suggesting in the other thread.
>
> I'll think more on this and see if I can put together a concrete API
> proposal.
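
On the function-pointer point, this is the pattern I expect us to lean
on (a minimal sketch of my own, not anything from PETSc; it needs a
Fermi-class card, compute capability 2.0 or later): a generic traversal
kernel calls through a device function pointer for the pointwise
operation, so the same traversal can be reused for different physics.

typedef double (*PointFunc)(double);

__device__ double d_square(double x) { return x * x; }

/* Host code cannot take the address of a __device__ function directly,
   so we stash it in a device variable and copy the value back. */
__device__ PointFunc d_square_ptr = d_square;

__global__ void apply(PointFunc f, const double *x, double *y, int n)
{
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) y[i] = f(x[i]); /* call through the pointer on the device */
}

int main()
{
  const int n = 1 << 20;
  double *x, *y;
  cudaMalloc(&x, n * sizeof(double));
  cudaMalloc(&y, n * sizeof(double));
  cudaMemset(x, 0, n * sizeof(double));
  cudaMemset(y, 0, n * sizeof(double));

  PointFunc f;
  cudaMemcpyFromSymbol(&f, d_square_ptr, sizeof(PointFunc));
  apply<<<(n + 255) / 256, 256>>>(f, x, y, n);
  cudaDeviceSynchronize();

  cudaFree(x);
  cudaFree(y);
  return 0;
}

Swapping d_square for another __device__ function changes the pointwise
operation without touching the traversal, which is the kind of reuse I
want at the kernel level.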

Next, I think we need example problems, as I said before. DM ex1 does
mesh distribution, which I think should also include distribution of
data over the mesh. I think we should add AMG and FMM. With these three
examples, we can prove this system is worthwhile. Any discussion of
these examples, or other suggestions?
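
To make "distribution of data over the mesh" concrete, here is a toy
sketch of the kind of motion I mean (not DM ex1 itself; the helper name
and the flat owner[] partition are made up for illustration): each rank
ships its per-cell values to the rank that owns the cell after
partitioning, using MPI_Alltoallv.

#include <mpi.h>
#include <cstdio>
#include <vector>

/* Hypothetical helper: redistribute one double per cell according to
   owner[c] = rank that owns cell c after partitioning. */
std::vector<double> DistributeCellData(const std::vector<int> &owner,
                                       const std::vector<double> &val,
                                       MPI_Comm comm)
{
  int size;
  MPI_Comm_size(comm, &size);
  std::vector<int> scount(size, 0), rcount(size), sdispl(size, 0), rdispl(size, 0);
  for (size_t c = 0; c < owner.size(); ++c) scount[owner[c]]++;
  MPI_Alltoall(scount.data(), 1, MPI_INT, rcount.data(), 1, MPI_INT, comm);
  for (int p = 1; p < size; ++p) {
    sdispl[p] = sdispl[p-1] + scount[p-1];
    rdispl[p] = rdispl[p-1] + rcount[p-1];
  }
  /* Pack the values in destination order, then exchange. */
  std::vector<double> sbuf(val.size()), rbuf(rdispl[size-1] + rcount[size-1]);
  std::vector<int> next(sdispl);
  for (size_t c = 0; c < owner.size(); ++c) sbuf[next[owner[c]]++] = val[c];
  MPI_Alltoallv(sbuf.data(), scount.data(), sdispl.data(), MPI_DOUBLE,
                rbuf.data(), rcount.data(), rdispl.data(), MPI_DOUBLE, comm);
  return rbuf;
}

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  /* Four local cells per rank, dealt round-robin to owners. */
  std::vector<int> owner(4);
  std::vector<double> val(4);
  for (int c = 0; c < 4; ++c) { owner[c] = c % size; val[c] = rank + 0.1 * c; }
  std::vector<double> mine = DistributeCellData(owner, val, MPI_COMM_WORLD);
  printf("[%d] now owns %d cell values\n", rank, (int)mine.size());
  MPI_Finalize();
  return 0;
}

The real example would move whatever data hangs off each mesh point
rather than a single scalar, but the communication skeleton is the same.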

   Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener