superlu_dist options

Matthew Knepley knepley at gmail.com
Fri May 8 10:41:56 CDT 2009


Send all the output of -eps_view, -ksp_view, and -log_summary.

  Matt

On Fri, May 8, 2009 at 10:39 AM, Fredrik Bengzon <fredrik.bengzon at math.umu.se> wrote:

> Hong,
> Thank you for the suggestions, but I have looked at the EPS and KSP objects
> and I cannot find anything wrong. The problem is that it takes longer to
> solve with 4 CPUs than with 2, so scalability seems to be absent when
> using superlu_dist. I have stored my mass and stiffness matrices in the MPIAIJ
> format and simply passed them on to SLEPc. When using the PETSc iterative
> Krylov solvers I see 100% workload on all processors, but when I switch to
> superlu_dist only two CPUs seem to do all the work of the LU factorization. I
> would rather not use a Krylov solver, though, since it might keep SLEPc from
> converging.
> Regards,
> Fredrik
>
> Hong Zhang wrote:
>
>>
>> Run your code with '-eps_view -ksp_view' to check
>> which methods are being used,
>> and with '-log_summary' to see which operations dominate
>> the computation.
>>
>> You can turn on parallel symbolic factorization
>> with '-mat_superlu_dist_parsymbfact'.
>>
>> Unless you use a large number of processors, the symbolic factorization
>> takes negligible execution time; the numeric
>> factorization usually dominates.
>>
>> Hong
>>
>> On Fri, 8 May 2009, Fredrik Bengzon wrote:
>>
>>> Hi PETSc team,
>>> Sorry for posting a question not really concerning the PETSc core, but
>>> when I run superlu_dist from within SLEPc I notice that the load balance is
>>> poor. It is just fine during assembly (I use Metis to partition my finite
>>> element mesh), but it changes dramatically once the SLEPc solver is called. I
>>> use superlu_dist as the solver for the eigenvalue iteration. My question is: can
>>> this have something to do with the fact that the option 'Parallel symbolic
>>> factorization' is set to false? If so, can I change the options passed to
>>> superlu_dist, for instance with MatSetOption? Also, does this mean that
>>> superlu_dist is not using ParMETIS to reorder the matrix?
>>> Best Regards,
>>> Fredrik Bengzon
>>>
>>>
>>>
>>
>
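For readers skimming the thread, here is a minimal sketch of the setup Fredrik describes: a generalized eigenproblem whose shift-and-invert linear solves are handed to SuperLU_DIST through the ST's KSP/PC objects. The wrapper function, the matrices A and B, and the exact call names are assumptions (in particular, PCFactorSetMatSolverPackage has been renamed in later PETSc releases); error checking is omitted.

    #include <slepceps.h>                  /* SLEPc eigensolver interface; pulls in PETSc */

    /* Sketch only: A (stiffness) and B (mass) are assumed to be assembled
       MPIAIJ matrices distributed over PETSC_COMM_WORLD.                  */
    void solve_gevp_with_superlu_dist(Mat A, Mat B)
    {
      EPS eps;                             /* eigensolver                       */
      ST  st;                              /* spectral transformation           */
      KSP ksp;                             /* linear solver used inside the ST  */
      PC  pc;                              /* its preconditioner                */

      EPSCreate(PETSC_COMM_WORLD, &eps);
      EPSSetOperators(eps, A, B);
      EPSSetProblemType(eps, EPS_GHEP);    /* generalized Hermitian eigenproblem */

      EPSGetST(eps, &st);
      STSetType(st, STSINVERT);            /* shift-and-invert: each iteration needs a full solve */
      STGetKSP(st, &ksp);
      KSPSetType(ksp, KSPPREONLY);         /* no Krylov iteration, just the direct solve */
      KSPGetPC(ksp, &pc);
      PCSetType(pc, PCLU);
      PCFactorSetMatSolverPackage(pc, "superlu_dist");  /* use SuperLU_DIST for the LU factorization */

      EPSSetFromOptions(eps);              /* pick up -eps_* / -st_* run-time options */
      EPSSolve(eps);

      EPSDestroy(eps);                     /* SLEPc of that era; newer releases use EPSDestroy(&eps) */
    }

The same choices can be made purely at run time with something like '-st_ksp_type preonly -st_pc_type lu -st_pc_factor_mat_solver_package superlu_dist' (option names assumed for the PETSc/SLEPc versions of that era).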

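On the MatSetOption question raised above: SuperLU_DIST-specific switches such as the parallel symbolic factorization are not controlled through MatSetOption; they are read from the PETSc options database, so they can be given on the command line or set programmatically before the first factorization. A short sketch, assuming the two-argument PetscOptionsSetValue of 2009-era PETSc (newer releases take a PetscOptions object as an additional first argument):

    /* Equivalent to passing the flags on the command line; must be called
       before the factorization is set up so the options are actually seen. */
    PetscOptionsSetValue("-mat_superlu_dist_parsymbfact", PETSC_NULL);  /* flag Hong mentions above            */
    PetscOptionsSetValue("-mat_superlu_dist_colperm", "PARMETIS");      /* assumed name of the ordering option */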

-- 
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener