On Sat, Mar 10, 2012 at 09:59, Xavier Garnaud <xavier.garnaud@ladhyx.polytechnique.fr> wrote:
> I am solving the compressible Navier--Stokes equations in conservative
> form, so in order to apply the operator, I
>
>   1. apply BCs on the flow field,
>   2. compute the flux,
>   3. take the derivative using finite differences,
>   4. apply BCs on the derivatives of the flux.
>
> In order to apply the linearized operator, I wish to linearize steps 2
> and 4 (the others are linear). For this I assemble sparse matrices
> (MPIAIJ). The matrices should be block diagonal -- with square or
> rectangular blocks -- so I preallocate the full diagonal blocks (but I
> only call MatSetValues for nonzero entries). When I do this, the
> linearized code runs approximately 50% slower (the computation of
> derivatives takes more than 70% of the time in the nonlinear code), so
> steps 2 and 4 are much slower for the linear operator, although the
> number of operations is very similar. Is this due to the poor
> preallocation? Is there a way to improve the performance?
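[A minimal sketch of the assembly pattern described above, assuming square
diagonal blocks of a fixed size: an MPIAIJ matrix preallocated for full
diagonal blocks, with MatSetValues called only on the entries actually used.
The local size, block size, and inserted values are hypothetical placeholders,
and error checking is omitted.]

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat      A;
  PetscInt nlocal = 1000; /* hypothetical local row count (multiple of bs) */
  PetscInt bs     = 5;    /* hypothetical block size */
  PetscInt rstart, rend, i, k;

  PetscInitialize(&argc, &argv, NULL, NULL);

  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, nlocal, nlocal, PETSC_DECIDE, PETSC_DECIDE);
  MatSetType(A, MATMPIAIJ);
  /* Preallocate bs nonzeros per row in the diagonal part and none in the
   * off-diagonal part. For a block-diagonal matrix whose blocks do not
   * cross process boundaries this preallocation is exact, so the extra
   * memory from preallocating full blocks should be modest. */
  MatMPIAIJSetPreallocation(A, bs, NULL, 0, NULL);

  MatGetOwnershipRange(A, &rstart, &rend);
  for (i = rstart; i < rend; i++) {
    /* rstart is a multiple of bs here (nlocal is a multiple of bs), so
     * each block lies entirely in the local diagonal portion. */
    PetscInt    jb = (i / bs) * bs; /* first column of this row's block */
    PetscInt    cols[5];
    PetscScalar vals[5];
    for (k = 0; k < bs; k++) {
      cols[k] = jb + k;
      vals[k] = 1.0; /* placeholder linearized flux entries */
    }
    /* Only nonzero entries are inserted, as in the description above. */
    MatSetValues(A, 1, &i, bs, cols, vals, INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  MatDestroy(&A);
  PetscFinalize();
  return 0;
}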
It's not clear to me from this description whether you are even using an implicit method. Is the linearization for use in a Newton iteration? How often do you have to reassemble? Please always send -log_summary output with performance questions.
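For example (executable name and process count hypothetical), running something like

  mpiexec -n 8 ./solver -log_summary

makes PETSc print, at PetscFinalize, the time and flop rate spent in each logged event (MatMult, MatSetValues, MatAssemblyBegin/End, ...), which is what is needed to see where the extra time is going.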