<div dir="ltr"><div><div>Thanks, I will have a look. Regarding the performance, I am just using my desktop computer here; on the supercomputer, I don't have the issue of PETSc being compiled with debugging options. In any case, I am not yet at the point of optimizing performance.<br><br></div>Cheers<br><br></div>Timothee<br></div><div class="gmail_extra"><br><div class="gmail_quote">2015-09-25 14:34 GMT+09:00 Dave May <span dir="ltr">&lt;<a href="mailto:dave.mayhem23@gmail.com" target="_blank">dave.mayhem23@gmail.com</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span class="">On 25 September 2015 at 07:24, Timothée Nicolas <span dir="ltr">&lt;<a href="mailto:timothee.nicolas@gmail.com" target="_blank">timothee.nicolas@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi all, from the manual I gather that the options
<div title="Page 94">
<div>
<div>
<p><span style="font-size:11pt;font-family:NimbusMonL">-ksp_type
preonly -pc_type lu <span style="font-family:arial,helvetica,sans-serif"></span></span></p><p><span style="font-size:11pt;font-family:NimbusMonL"></span>to solve a problem by direct LU inversion </p></div></div></div></div></blockquote></span><div>This is doing LU factorization. <br>The inverse matrix is not assembled explicitly.<br></div><span class=""><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div title="Page 94"><div><div><p>are available only for sequential matrices. Should I conclude that there is no method to try a direct inversion of a big problem in parallel?</p></div></div></div></div></blockquote><div><br></div></span><div>The packages <br> superlu_dist, mumps, and pastix <br></div><div>provide support for parallel LU factorization.<br></div><div>These packages can be installed by PETSc's configure system.<br></div><span class=""><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div title="Page 94"><div><div><p>I plan to use the direct inversion only as a check that my approximation to the inverse problem is OK, because so far my algorithm, which should work, is not working at all, and I need to debug what is going on. Namely, I solve the linear problem approximately using an approximate Schur complement, and I want to know whether my approximation is wrong or whether my matrices are wrong from the start.</p><p>I have tried a direct inversion on one process with the above options for a quite small problem (12x12x40 with 8 dof), but it did not work, I suppose because of memory limitations (output with -log_summary attached at the end, just in case).</p></div></div></div></div></blockquote><div><br></div></span><div>From the output it appears you are running a debug build of PETSc. <br>If you want to see an immediate gain in performance, profile your algorithm with an optimized build of PETSc. 
<br>Also, if you want better performance from sequential sparse direct solvers, consider using the packages<br></div><div> umfpack (or cholmod if the matrix is symmetric positive definite)<br></div><div>available from SuiteSparse. <br>These libraries are great. <br>Their implementations also leverage multi-threaded BLAS, so they will be much faster than PETSc's default LU.<br><br></div><div>Cheers<br></div><div> Dave<br></div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div title="Page 94"><div><div><p>Best</p><p>Timothee NICOLAS<br></p>
</div>
</div>
</div>
</div>
</blockquote></div><br></div></div>
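Dave's point that a "direct solve" with LU factors the matrix rather than assembling its inverse can be illustrated with a small dense sketch. This uses NumPy/SciPy purely for illustration (the matrix and sizes here are made-up placeholders, not anything from the thread); the same factor-once, solve-many pattern is what `-ksp_type preonly -pc_type lu` does inside PETSc:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
n = 50
# Shift the diagonal so the random matrix is comfortably well conditioned
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

# Factor once; A^{-1} is never formed explicitly.
lu, piv = lu_factor(A)

# Each solve reuses the factors via forward/back substitution.
x = lu_solve((lu, piv), b)

# The residual norm should be at machine-precision level.
print(np.linalg.norm(A @ x - b))
```

Reusing `(lu, piv)` for additional right-hand sides is the analogue of calling KSPSolve repeatedly with the same factored preconditioner.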
</blockquote></div><br></div>
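The check Timothée describes, comparing a Schur-complement block solve against a plain direct solve, can be sketched in dense form independently of PETSc. All matrices below are random placeholders (not the actual application operators), and the Schur complement here is exact; substituting an approximate S and comparing against the direct answer is the debugging check being discussed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# 2x2 block system M = [[A, B], [C, D]] with diagonally shifted blocks
A = rng.standard_normal((n, n)) + n * np.eye(n)
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
D = rng.standard_normal((n, n)) + n * np.eye(n)
M = np.block([[A, B], [C, D]])
b = rng.standard_normal(2 * n)
b1, b2 = b[:n], b[n:]

# Reference: plain direct solve of the full block system
x_direct = np.linalg.solve(M, b)

# Block solve via the exact Schur complement S = D - C A^{-1} B
S = D - C @ np.linalg.solve(A, B)
x2 = np.linalg.solve(S, b2 - C @ np.linalg.solve(A, b1))
x1 = np.linalg.solve(A, b1 - B @ x2)
x_schur = np.concatenate([x1, x2])

# With the exact S the two answers agree to rounding error; a large
# discrepancy after swapping in an approximate S isolates the approximation
# (rather than the assembled matrices) as the source of the bug.
print(np.linalg.norm(x_direct - x_schur))
```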