[petsc-users] Problem when switching from debug to optimized

Hong Zhang hzhang at mcs.anl.gov
Mon Nov 14 08:17:03 CST 2011


I suggest experimenting with the MUMPS parallel direct LU solver as well.
Hong
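
For reference, switching the direct solver is usually just a matter of runtime options; a minimal sketch, assuming a KSP-based code and a PETSc release from around this time (where the option was spelled -pc_factor_mat_solver_package; later releases renamed it -pc_factor_mat_solver_type):

    mpiexec -n 8 ./myapp -ksp_type preonly -pc_type lu \
        -pc_factor_mat_solver_package mumps

The executable name and process count here are placeholders; the same pattern with superlu_dist selects the solver already in use in this thread.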

On Mon, Nov 14, 2011 at 7:34 AM, Matthew Knepley <knepley at gmail.com> wrote:
> On Mon, Nov 14, 2011 at 12:02 PM, Bogdan Dita <bogdan at lmn.pub.ro> wrote:
>>
>>  Hello,
>>
>>  Below is my post from a few days ago and this time I've attached the
>> output from log_summary.
>
> The time increase comes completely from SuperLU_dist during the
> factorization
> phase. You should use -ksp_view so we can see what solver options are used.
>    Matt
>
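
As a sketch of the kind of run being asked for (executable name and process count are placeholders; -log_summary was the profiling option in PETSc releases of this period, later renamed -log_view):

    mpiexec -n 8 ./myapp -pc_type lu -pc_factor_mat_solver_package superlu_dist \
        -ksp_view -log_summary

-ksp_view prints the solver configuration that was actually used, and -log_summary breaks the runtime down by phase, which is how the factorization cost was identified above.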
>>
>> "
>>  Until a few days ago I had only been using PETSc in debug mode, and when I
>> switched to the optimised version (--with-debugging=0) I got a strange
>> result regarding the solve time: it was 10-15% higher than in debug mode.
>>  I'm trying to solve a linear system in parallel with superlu_dist, and
>> I've tested my program on a Beowulf cluster, so far only using a single
>> node with 2 quad-core Intel processors.
>>  From what I know the "no debug" version should be faster, and indeed on my
>> laptop (dual-core Intel), for the same program and even the same matrices,
>> the optimised version solves about 2 times faster. But when I use the
>> cluster, the optimised version is slower than the debug version.
>>  Any thoughts?
>>
>> "
>>  Best regards,
>>  Bogdan Dita
>>
>>
>>
>
>
>
> --
> What most experimenters take for granted before they begin their experiments
> is infinitely more interesting than any results to which their experiments
> lead.
> -- Norbert Wiener
>
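
On the --with-debugging=0 point quoted above: an optimized PETSc build is configured along these lines. This is a sketch only; the optimization flags and --download options are assumptions and depend on the machine and PETSc version:

    ./configure --with-debugging=0 \
        COPTFLAGS='-O2' CXXOPTFLAGS='-O2' FOPTFLAGS='-O2' \
        --download-superlu_dist --download-mumps --download-scalapack

Note that when the run is dominated by an external direct solver such as SuperLU_DIST, most of the time is spent inside that library rather than in PETSc itself, so the debug/optimized switch for PETSc alone may affect the solve time less than expected.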

