Launching smaller overlapping asynchronous kernels can give a speedup if your vectors are large and you are doing reductions. That way warp stalls can be compensated for and latencies can be hidden. Not sure what you mean by "the way it currently is", though...
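Roughly what I have in mind is something like the sketch below: one partial dot-product kernel per stream, so the launches return immediately and the kernels can overlap. This is only an illustration, not PETSc/CUSP code; the kernel name, stream count, and sizes are placeholders.

// Sketch: overlap several partial dot-product reductions on separate
// CUDA streams so one kernel's memory stalls can be hidden by another's
// compute.  partial_dot, NSTREAMS, N, etc. are illustrative names only.
#include <cuda_runtime.h>

#define NSTREAMS 4
#define N        (1 << 20)
#define THREADS  256
#define BLOCKS   64

// Each block reduces its strided slice of x*y into one partial sum.
__global__ void partial_dot(const double *x, const double *y,
                            double *partial, int n)
{
    __shared__ double cache[THREADS];
    double sum = 0.0;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        sum += x[i] * y[i];
    cache[threadIdx.x] = sum;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s) cache[threadIdx.x] += cache[threadIdx.x + s];
        __syncthreads();
    }
    if (threadIdx.x == 0) partial[blockIdx.x] = cache[0];
}

int main(void)
{
    double *x, *y[NSTREAMS], *partial[NSTREAMS];
    cudaStream_t stream[NSTREAMS];

    cudaMalloc(&x, N * sizeof(double));
    for (int k = 0; k < NSTREAMS; ++k) {
        cudaMalloc(&y[k], N * sizeof(double));
        cudaMalloc(&partial[k], BLOCKS * sizeof(double));
        cudaStreamCreate(&stream[k]);
    }
    /* ... fill x and y[k] with real data here ... */

    // Asynchronous launches: each returns immediately, so the reductions
    // can run concurrently instead of serializing on the default stream.
    for (int k = 0; k < NSTREAMS; ++k)
        partial_dot<<<BLOCKS, THREADS, 0, stream[k]>>>(x, y[k], partial[k], N);

    cudaDeviceSynchronize();
    // The BLOCKS partial sums per stream would then be finished on the
    // host (or with a second small kernel) to get each dot product.

    for (int k = 0; k < NSTREAMS; ++k) {
        cudaStreamDestroy(stream[k]);
        cudaFree(y[k]);
        cudaFree(partial[k]);
    }
    cudaFree(x);
    return 0;
}

The point is that while one kernel's warps are stalled on global memory, the scheduler can run blocks from another stream, which is where the latency hiding comes from.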
On Tue, Apr 24, 2012 at 2:21 PM, Jed Brown <jedbrown@mcs.anl.gov> wrote:
> On Tue, Apr 24, 2012 at 14:12, Daniel Lowell <redratio1@gmail.com> wrote:
>> I'm writing a vector type which uses flag syncing like you have in PETSc with Vec CUSP, but it uses asynchronous kernel launches (pipelining, etc.) and autotuned kernels. Not quite ready for prime time, but we have seen its value in terms of speedup.
>
> Okay, but why do dozens of small kernel launches when all the data is available up-front? I'm just skeptical that VecMDot should be implemented for CUDA the way it currently is.
--
Daniel Lowell