Thanks for the clarification, Jed. I am involved in numerical software development but have never come across code whose performance changes drastically with the BLAS/LAPACK implementation (for exactly the reason you mentioned), so I was trying to learn what the issue might be. I certainly agree with your comment on HT: the idea is to maintain computational throughput, but I guess if you already have high ILP or full registers it really doesn't help!
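To make the bandwidth point concrete, here is a rough standalone sketch (just an illustration in plain C, not PETSc code): a daxpy-style vector update does 2 flops per entry but moves about 24 bytes, so its speed is set by memory bandwidth and barely depends on which BLAS it is linked against.

/* Rough illustration (not PETSc code): a daxpy-style vector update is
   limited by memory bandwidth, not by the BLAS implementation.  Each
   entry costs 2 flops but about 24 bytes of traffic (load x, load y,
   store y), so the flop rate is roughly bandwidth / 12 no matter how
   cleverly the loop is coded. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    const size_t n = 20 * 1000 * 1000;   /* large enough to spill out of cache */
    double *x = malloc(n * sizeof(double));
    double *y = malloc(n * sizeof(double));
    if (!x || !y) return 1;
    for (size_t i = 0; i < n; i++) { x[i] = 1.0; y[i] = 2.0; }

    clock_t t0 = clock();                /* CPU time; adequate for a single-threaded sketch */
    const double alpha = 3.0;
    for (size_t i = 0; i < n; i++) y[i] += alpha * x[i];   /* y <- y + alpha*x */
    double sec = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* 2 flops and ~24 bytes of memory traffic per entry */
    printf("flop rate : %.2f Gflop/s\n", 2.0 * n / sec * 1e-9);
    printf("bandwidth : %.2f GB/s\n", 24.0 * n / sec * 1e-9);
    free(x); free(y);
    return 0;
}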
On Tue, Mar 15, 2011 at 10:48 AM, Jed Brown <jed@59a2.org> wrote:
> On Tue, Mar 15, 2011 at 16:36, Natarajan CS <csnataraj@gmail.com> wrote:
>> Also I wonder what percentage of the code is actually BLAS/LAPACK intensive enough to make any significant dent in wall clock?
>
> Rather little of PETSc depends on dense linear algebra. Some third-party direct solvers use it in their numerical factorization routines. Otherwise, it's mostly vector operations, which tend to be bandwidth limited anyway and so are not very sensitive to the implementation. It is also much more common for the majority of run time to be spent in matrix kernels than in pure vector operations. Note that while HT is effective at covering stalls due to irregular memory access, it's not so good for tight kernels or purely bandwidth-limited tasks.
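For reference, the "matrix kernels" Jed mentions are typically sparse matrix-vector products; a minimal CSR sketch (again just an illustration, not PETSc's actual implementation) shows both the bandwidth dependence and the irregular reads of x that HT can help cover:

/* Minimal CSR sparse matrix-vector product, y = A*x.  Each nonzero costs
   2 flops but needs a column index plus an indirect load of x[colind[j]],
   so the kernel is bandwidth limited and its access pattern is irregular --
   the case where hyperthreading can help hide memory stalls. */
#include <stddef.h>
#include <stdio.h>

static void csr_matvec(size_t nrows, const size_t *rowptr, const int *colind,
                       const double *val, const double *x, double *y)
{
    for (size_t i = 0; i < nrows; i++) {
        double sum = 0.0;
        for (size_t j = rowptr[i]; j < rowptr[i + 1]; j++)
            sum += val[j] * x[colind[j]];   /* indirect, irregular read of x */
        y[i] = sum;
    }
}

int main(void)
{
    /* 2x2 example: A = [[4, 1], [0, 3]], x = [1, 2]  =>  y = [6, 6] */
    size_t rowptr[] = {0, 2, 3};
    int    colind[] = {0, 1, 1};
    double val[]    = {4.0, 1.0, 3.0};
    double x[]      = {1.0, 2.0}, y[2];
    csr_matvec(2, rowptr, colind, val, x, y);
    printf("y = [%g, %g]\n", y[0], y[1]);
    return 0;
}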