Doing a tree-wise parallel reduction for a single dot product leaves most of the threads stalled by the end of the kernel execution. I keep this inefficiency built in and take advantage of it by breaking the dot operation into smaller kernels and launching them as overlapping, pipelined asynchronous kernels. Obviously this only makes sense for large vectors. I haven't tried implementing the entire MDot as one large kernel; that might be worthwhile.
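For concreteness, here is a rough sketch of the kind of pipelining I mean. All the names and sizes (dot_chunk, NCHUNKS, BLOCK, managed memory for brevity) are made up for illustration; this is not the actual code, just the shape of it: one kernel per chunk, each on its own stream, so the reduction tail of one chunk overlaps the element-wise work of the next.

#include <cstdio>
#include <cuda_runtime.h>

#define NCHUNKS 4
#define BLOCK   256

// One pipelined "chunk" of the dot product: element-wise multiply
// followed by a tree-wise reduction within each block. The chunk's
// partial result is accumulated with atomicAdd (double-precision
// atomicAdd needs sm_60 or newer).
__global__ void dot_chunk(const double *x, const double *y, int n,
                          double *partial)
{
    __shared__ double s[BLOCK];
    double v = 0.0;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        v += x[i] * y[i];
    s[threadIdx.x] = v;
    __syncthreads();
    // Tree-wise reduction: at each level half the remaining threads
    // go idle; this is the stall the overlapping launches hide.
    for (int stride = BLOCK / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            s[threadIdx.x] += s[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0) atomicAdd(partial, s[0]);
}

int main()
{
    const int n = 1 << 24, chunk = n / NCHUNKS;
    double *x, *y, *partials;
    cudaMallocManaged(&x, n * sizeof(double));
    cudaMallocManaged(&y, n * sizeof(double));
    cudaMallocManaged(&partials, NCHUNKS * sizeof(double));
    for (int i = 0; i < n; i++) { x[i] = 1.0; y[i] = 2.0; }
    for (int c = 0; c < NCHUNKS; c++) partials[c] = 0.0;

    cudaStream_t streams[NCHUNKS];
    for (int c = 0; c < NCHUNKS; c++) cudaStreamCreate(&streams[c]);

    // One kernel per chunk, each on its own stream, so the reduction
    // tail of chunk c can overlap the multiply phase of chunk c+1.
    int blocks = (chunk + BLOCK - 1) / BLOCK;
    for (int c = 0; c < NCHUNKS; c++)
        dot_chunk<<<blocks, BLOCK, 0, streams[c]>>>(
            x + c * chunk, y + c * chunk, chunk, &partials[c]);
    cudaDeviceSynchronize();

    double dot = 0.0;  // final (cheap) cross-chunk sum on the host
    for (int c = 0; c < NCHUNKS; c++) dot += partials[c];
    printf("dot = %f (expected %f)\n", dot, 2.0 * n);

    for (int c = 0; c < NCHUNKS; c++) cudaStreamDestroy(streams[c]);
    cudaFree(x); cudaFree(y); cudaFree(partials);
    return 0;
}

This needs nvcc -arch=sm_60 or newer for the double-precision atomicAdd. Whether the overlap actually pays off depends on the vector size and on how many of the chunk kernels the scheduler can keep in flight at once.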
<br><div class="gmail_quote">On Tue, Apr 24, 2012 at 2:42 PM, Jed Brown <span dir="ltr"><<a href="mailto:jedbrown@mcs.anl.gov">jedbrown@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="im"><div class="gmail_quote">On Tue, Apr 24, 2012 at 14:29, Daniel Lowell <span dir="ltr"><<a href="mailto:redratio1@gmail.com" target="_blank">redratio1@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
>> Launching smaller overlapping asynchronous kernels can yield a speedup if your vectors are large and you are doing reductions. This way warp stalls can be compensated for and latencies can be hidden. Not sure what you mean by "the way it currently is", though...
> The reduction is only needed at the end. Any sequential launch adds artificial synchronization. I'd be interested to see the performance comparison, but I'd be surprised if independent kernel launches were faster than a decent implementation with one kernel launch.

-- 
Daniel Lowell