Send all the output of the -view options and -log_summary.<br><br> Matt<br><br><div class="gmail_quote">On Fri, May 8, 2009 at 10:39 AM, Fredrik Bengzon <span dir="ltr"><<a href="mailto:fredrik.bengzon@math.umu.se">fredrik.bengzon@math.umu.se</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Hong,<br>
Thank you for the suggestions, but I have looked at the EPS and KSP objects and I cannot find anything wrong. The problem is that it takes longer to solve with 4 CPUs than with 2, so there seems to be no scalability when using superlu_dist. I have stored my mass and stiffness matrices in the MPIAIJ format and simply passed them on to SLEPc. When using the PETSc iterative Krylov solvers I see 100% workload on all processors, but when I switch to superlu_dist only two CPUs seem to do all the work of the LU factorization. I would rather not use a Krylov solver, though, since it might cause SLEPc not to converge.<br>
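For reference, a minimal sketch of how I select superlu_dist through the ST object's KSP options, written from memory and assuming a shift-and-invert type transformation (the exact option names vary between PETSc/SLEPc versions, and ./eigensolver is just a placeholder for my application):<br>
<br>
mpiexec -n 4 ./eigensolver -st_ksp_type preonly -st_pc_type lu -st_pc_factor_mat_solver_package superlu_dist<br>
<br>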
Regards,<br>
Fredrik<br>
<br>
Hong Zhang wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
Run your code with '-eps_view -ksp_view' to check<br>
which methods are being used,<br>
and with '-log_summary' to see which operations dominate<br>
the computation.<br>
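For example, a run collecting that information might look like this (the executable name and process count are placeholders):<br>
<br>
mpiexec -n 4 ./your_app -eps_view -ksp_view -log_summary<br>
<br>
Depending on the SLEPc version, the KSP inside the spectral transformation may carry the st_ prefix, in which case '-st_ksp_view' is needed instead of '-ksp_view'.<br>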
<br>
You can turn on parallel symbolic factorization<br>
with '-mat_superlu_dist_parsymbfact'.<br>
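A hypothetical invocation adding that flag (again with a placeholder executable name; note that the mat_solver_package option is spelled mat_solver_type in later PETSc releases, and adding -help prints the superlu_dist runtime options your installation understands):<br>
<br>
mpiexec -n 4 ./your_app -st_pc_type lu -st_pc_factor_mat_solver_package superlu_dist -mat_superlu_dist_parsymbfact<br>
<br>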
<br>
Unless you use a large number of processors, the symbolic factorization<br>
takes negligible execution time. The numeric<br>
factorization usually dominates.<br>
<br>
Hong<br>
<br>
On Fri, 8 May 2009, Fredrik Bengzon wrote:<br>
<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Hi PETSc team,<br>
Sorry for posting a question that does not really concern the PETSc core, but when I run superlu_dist from within SLEPc I notice that the load balance is poor. It is just fine during assembly (I use METIS to partition my finite element mesh), but it changes dramatically when the SLEPc solver is called. I use superlu_dist as the solver for the eigenvalue iteration. My question is: can this have something to do with the fact that the option 'Parallel symbolic factorization' is set to false? If so, can I change the superlu_dist options, for instance with MatSetOption? Also, does this mean that superlu_dist is not using ParMETIS to reorder the matrix?<br>
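Concretely, is the right way to pass such settings something like the command-line options below, or can it be done through MatSetOption? (These option names are my guess from the documentation and may not be spelled exactly right for my PETSc version.)<br>
<br>
-st_pc_type lu -st_pc_factor_mat_solver_package superlu_dist -mat_superlu_dist_colperm PARMETIS<br>
<br>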
Best Regards,<br>
Fredrik Bengzon<br>
<br>
<br>
</blockquote>
<br>
</blockquote>
<br>
</blockquote></div><br><br clear="all"><br>-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener<br>