Only if your matrices are *not sparse*, by which we mean that more than, say, 20% of the entries are nonzero.
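
To make that concrete, below is a rough, untested sketch of the setup you describe; the size n, the placeholder diagonal entries, and the lack of real preallocation are just for illustration, and complex entries assume a PETSc build configured with --with-scalar-type=complex. The only thing that changes between the sparse and dense cases is the matrix type (MATAIJ vs. MATDENSE), and because of MatSetFromOptions you can also switch it at run time with -mat_type aij or -mat_type dense. As far as I know, the dense MatMatMult kernel already calls BLAS gemm underneath, so calling BLAS yourself should not buy you anything.

#include <petscmat.h>

/* Minimal sketch: build two n-by-n matrices and form C = A*B with MatMatMult().
   The size n and the entry values are placeholders; preallocate for your real
   sparsity pattern in practice.  Complex entries require a PETSc build
   configured with --with-scalar-type=complex. */
int main(int argc, char **argv)
{
  Mat            A, B, C;
  PetscInt       n = 1000, i;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);       /* sparse; use MATDENSE for a full matrix */
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);        /* lets -mat_type override the type at run time */
  ierr = MatSetUp(A);CHKERRQ(ierr);

  ierr = MatCreate(PETSC_COMM_WORLD, &B);CHKERRQ(ierr);
  ierr = MatSetSizes(B, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetType(B, MATAIJ);CHKERRQ(ierr);
  ierr = MatSetFromOptions(B);CHKERRQ(ierr);
  ierr = MatSetUp(B);CHKERRQ(ierr);

  /* Placeholder entries (a simple diagonal) so the product is well defined. */
  for (i = 0; i < n; i++) {
    ierr = MatSetValue(A, i, i, (PetscScalar)(i + 1), INSERT_VALUES);CHKERRQ(ierr);
    ierr = MatSetValue(B, i, i, (PetscScalar)2.0, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  /* C = A*B; PETSc picks the multiply kernel that matches the matrix type in use. */
  ierr = MatMatMult(A, B, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &C);CHKERRQ(ierr);

  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = MatDestroy(&B);CHKERRQ(ierr);
  ierr = MatDestroy(&C);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Running the same executable with -mat_type aij and then with -mat_type dense (plus -log_summary to get timings) should let you compare the two cases directly without recompiling.
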
On Thu, Nov 8, 2012 at 9:10 PM, Ganesh Hegde <ghegde@purdue.edu> wrote:
> I have an application where I am multiplying several large, complex PETSc matrices (that could be sparse) with each other.
> This happens often enough during program execution that I am interested in finding the most efficient way to do it.
> The usual procedure is to initialize a PETSc matrix with the required dimensions, specify the sparsity pattern, and then multiply the matrices with each other. So my questions are as follows:
> 1) Could specifying a matrix as full (dense) instead of sparse improve speed by reducing the overhead involved in processing sparse matrices?
> 2) If not, should BLAS routines be called directly from the code, instead of PETSc calling them, to improve performance?
> Regards,
> --
> Ganesh