On Sun, Feb 19, 2012 at 11:20 AM, Nun ion <m.skates82@gmail.com> wrote:

> Hello, I have a conceptual idea of a sparse matvec implementation where I
> have multiple matrices. How would I go about implementing something such as
>
>   for i = ...
>     for k = ...
>       w_{ik} = K_i * u_k
>     end
>   end
>
> where each of the K_i is a sparse matrix. The K_i are various stiffness
> matrices whose sizes can vary (although here they are all the same size),
> and the u_k are reused.
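
(For concreteness, a direct rendering of the loop above against the PETSc C
API might look like the sketch below; the function and array names are
illustrative only, and it assumes every K_i and u_k has already been created
and assembled with compatible layouts.)

    #include <petscmat.h>

    /* Sketch: apply each sparse stiffness matrix K[i] to each vector u[k],
       storing the result in w[i][k].  All Mat/Vec objects are assumed to be
       assembled already; error checking follows the usual PETSc pattern.  */
    PetscErrorCode MultiMatMult(PetscInt nmat, PetscInt nvec, Mat K[], Vec u[], Vec **w)
    {
      PetscErrorCode ierr;
      PetscInt       i, k;

      PetscFunctionBeginUser;
      for (i = 0; i < nmat; i++) {
        for (k = 0; k < nvec; k++) {
          /* w[i][k] = K[i] * u[k] */
          ierr = MatMult(K[i], u[k], w[i][k]);CHKERRQ(ierr);
        }
      }
      PetscFunctionReturn(0);
    }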

I suspect that this reuse does not matter. You can do a back-of-the-envelope
calculation for your matrices, using the analysis method in
http://www.mcs.anl.gov/~kaushik/Papers/pcfd99_gkks.pdf. K_i is much bigger
than u_k, and will generally blow u_k right out of the cache. In fact, this is
the optimization that PETSc currently makes (see the Prefetch code in MatMult).

   Matt
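
(A rough back-of-the-envelope number, with sizes that are purely illustrative
and not from this thread: a K_i stored in AIJ format with 10^6 rows and about
10 nonzeros per row carries roughly 10^7 x (8-byte value + 4-byte column
index) ~ 120 MB of matrix data, while u_k is only 10^6 x 8 bytes = 8 MB.
Streaming 120 MB through a cache of a few MB evicts u_k completely between
one K_i and the next, so the reuse across i buys essentially nothing; each
MatMult is bandwidth-bound on the matrix itself, which is what the model in
the paper above predicts.)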

> Thanks!
>
> Mark

--
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener