<div class="gmail_quote">On Sat, Aug 27, 2011 at 12:52, Milan Mitrovic <span dir="ltr"><<a href="mailto:milan.v.mitrovic@gmail.com">milan.v.mitrovic@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div id=":42">and do the multiplication myself because it is much faster. I tried<br>
using parallel AIJ format but constructing the matrix took more than a<br>
minute for my problem with only ~150 nonzero entries per row... (maybe<br>
I was doing something very wrong)</div></blockquote></div><br><div>Probably preallocation, seeĀ <a href="http://www.mcs.anl.gov/petsc/petsc-2/documentation/faq.html#efficient-assembly">http://www.mcs.anl.gov/petsc/petsc-2/documentation/faq.html#efficient-assembly</a></div>
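
For concreteness, a minimal sketch of preallocating a parallel AIJ matrix for ~150 nonzeros per row. The 100/50 split between the diagonal and off-diagonal blocks and the global size N are made-up numbers (the right split depends on your partitioning), and this is written against the current API, which has drifted slightly across versions:

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat      A;
    PetscInt N = 100000; /* hypothetical global size */

    PetscInitialize(&argc, &argv, NULL, NULL);
    MatCreate(PETSC_COMM_WORLD, &A);
    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N);
    MatSetType(A, MATMPIAIJ);
    /* ~150 nonzeros per row: overestimating is cheap, while
       underestimating triggers the slow dynamic reallocation
       that makes assembly take minutes instead of seconds */
    MatMPIAIJSetPreallocation(A, 100, NULL, 50, NULL);
    MatSeqAIJSetPreallocation(A, 150, NULL); /* covers the one-process case */

    /* ... MatSetValues() loop goes here ... */
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

    MatDestroy(&A);
    PetscFinalize();
    return 0;
  }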
Unless you have a more efficient way to apply the action of the matrix (e.g. exploiting a tensor product to reduce memory usage), it makes sense to use the PETSc matrix formats. Also, most problems need preconditioning, and it's convenient to have an assembled matrix available in a PETSc format so that you can try a variety of preconditioners. You might end up doing something clever in the end, for which fewer/smaller matrices are assembled, but having assembled matrices is very convenient for experimentation and for checking code correctness.
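
And if you do keep your own multiplication routine, you can still hand it to the PETSc solvers through a shell matrix (MATSHELL). A minimal sketch; MyContext, MyMatMult, and the y = scale*x action are placeholders for your own data and kernel:

  #include <petscmat.h>

  /* Hypothetical context carried by the shell matrix. */
  typedef struct {
    PetscScalar scale;
  } MyContext;

  /* User-supplied y = A*x; PETSc calls this whenever MatMult(A,..) runs. */
  static PetscErrorCode MyMatMult(Mat A, Vec x, Vec y)
  {
    MyContext *ctx;
    MatShellGetContext(A, &ctx);
    VecCopy(x, y);           /* illustrative action: y = scale * x */
    VecScale(y, ctx->scale);
    return 0;
  }

  int main(int argc, char **argv)
  {
    Mat       A;
    Vec       x, y;
    MyContext ctx = {2.0};
    PetscInt  N = 100; /* hypothetical global size */

    PetscInitialize(&argc, &argv, NULL, NULL);
    MatCreateShell(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, N, N, &ctx, &A);
    MatShellSetOperation(A, MATOP_MULT, (void (*)(void))MyMatMult);

    MatCreateVecs(A, &x, &y);
    VecSet(x, 1.0);
    MatMult(A, x, y); /* dispatches to MyMatMult */

    VecDestroy(&x); VecDestroy(&y); MatDestroy(&A);
    PetscFinalize();
    return 0;
  }

The Krylov solvers only ever apply MatMult() on A, so they don't care that it was never assembled; just keep in mind that most preconditioners (anything beyond, say, -pc_type none or your own PCSHELL) still need an assembled matrix to work with.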