[petsc-users] an ambiguity on a Lanczos solver for a symmetric system -- advice needed

Hong Zhang hzhang at mcs.anl.gov
Mon Apr 21 10:00:34 CDT 2014


Sorry, ignore my previous email. You are not solving an eigenvalue problem.
Hong

On Mon, Apr 21, 2014 at 9:59 AM, Hong Zhang <hzhang at mcs.anl.gov> wrote:
> Umut,
> Have you tried SLEPc for eigenvalue problems?
> Why do you need MUMPS in your eigensolver? Shift-and-invert?
>
> Hong
>
> On Mon, Apr 21, 2014 at 9:39 AM, Matthew Knepley <knepley at gmail.com> wrote:
>> On Sat, Apr 19, 2014 at 2:13 PM, Umut Tabak <u.tabak at tudelft.nl> wrote:
>>>
>>> Dear all,
>>
>>
>> For any timing question, we need to see the output of -log_summary. Also,
>> if you have significant time in routines you wrote, we need you to create
>> PETSc events for these.
>>
>>   Matt
>>
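For the second point, registering a user event so that the time spent in the
reorthogonalization shows up in the -log_summary table can also be done from
Fortran. A minimal sketch, assuming a 2014-era PETSc with the finclude headers
(the program, class/event names, and the dummy loop are illustrative
placeholders, not anything from the original code):

    ! profile_ortho.F90 -- compile against PETSc, run with -log_summary
    program profile_ortho
    implicit none
#include <finclude/petscsys.h>
    PetscErrorCode ierr
    PetscClassId   classid
    PetscLogEvent  EV_ORTHO
    PetscInt       i
    PetscScalar    s

    call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
    ! register a class and an event; the names are only labels in the log
    call PetscClassIdRegister('Lanczos',classid,ierr)
    call PetscLogEventRegister('BlockReorth',classid,EV_ORTHO,ierr)

    call PetscLogEventBegin(EV_ORTHO,ierr)
    s = 0.0
    do i = 1, 1000000        ! stand-in for the dgemm/reorthogonalization work
       s = s + 1.0
    end do
    call PetscLogEventEnd(EV_ORTHO,ierr)

    call PetscFinalize(ierr)
    end program profile_ortho

Running with -log_summary then attributes the time (and any flops logged with
PetscLogFlops) to the 'BlockReorth' event.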
>>>
>>> I have lately been experiencing some issues with a symmetric Lanczos
>>> eigensolver in FORTRAN. Basically, I have test code in MATLAB where I am
>>> currently using HSL_MA97 (through its MATLAB interface).
>>>
>>> When I program the Lanczos iterations in blocks in MATLAB using HSL_MA97,
>>> my overall solution time decreases as expected, meaning that the block
>>> solves improve the solution efficiency.
>>>
>>> Then, to apply the same algorithm to problems with millions of unknowns, I
>>> am porting it to a FORTRAN code, this time with MUMPS as the solver. I was
>>> expecting the solution time to decrease there as well, but my overall
>>> solution times increase when I increase the block size.
>>>
>>> As a check with MUMPS, I timed only the block solution phase and compared
>>> 120 single solutions to
>>>
>>> 60 solutions by blocks of 2
>>> 30 solutions by blocks of 4
>>> 20 solutions by blocks of 6
>>> 15 solutions by blocks of 8
>>>
>>> and saw that the total solution time decreases in comparison to the single
>>> solves, so I believe this is not the source of the problem.
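For reference, a block (multiple right-hand-side) solve with the MUMPS Fortran
interface only needs NRHS and LRHS set before the solve phase, with the
right-hand-side columns stored contiguously in the RHS array. A minimal
sequential sketch with a toy 2x2 symmetric matrix (names, data, and the
MUMPS-4.x/5.0-era field names are illustrative assumptions; error checking
omitted):

    ! block_rhs_demo.f90 -- link against double-precision MUMPS and MPI
    program block_rhs_demo
    implicit none
    include 'mpif.h'
    include 'dmumps_struc.h'
    type (DMUMPS_STRUC) :: id
    integer :: ierr

    call MPI_INIT(ierr)
    id%COMM = MPI_COMM_WORLD
    id%SYM  = 2              ! symmetric matrix, one triangle supplied
    id%PAR  = 1              ! host takes part in the computation
    id%JOB  = -1             ! initialize the MUMPS instance
    call DMUMPS(id)

    ! 2x2 symmetric matrix [2 1; 1 2] in coordinate format (lower triangle)
    id%N  = 2
    id%NZ = 3
    allocate( id%IRN(id%NZ), id%JCN(id%NZ), id%A(id%NZ) )
    id%IRN = (/ 1, 2, 2 /)
    id%JCN = (/ 1, 1, 2 /)
    id%A   = (/ 2.0d0, 1.0d0, 2.0d0 /)

    ! block of 2 right-hand sides, stored column by column in id%RHS
    id%NRHS = 2
    id%LRHS = id%N
    allocate( id%RHS(id%LRHS*id%NRHS) )
    id%RHS = (/ 3.0d0, 3.0d0,  2.0d0, 1.0d0 /)

    id%JOB = 6               ! analysis + factorization + solve in one call
    call DMUMPS(id)
    write(*,*) 'solutions (overwriting id%RHS):', id%RHS

    id%JOB = -2              ! release the instance
    call DMUMPS(id)
    deallocate( id%IRN, id%JCN, id%A, id%RHS )
    call MPI_FINALIZE(ierr)
    end program block_rhs_demo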
>>>
>>> In the Lanczos loop I perform a full reorthogonalization, which involves
>>> some dgemm calls, and there are also calls to Intel MKL routines for the
>>> sparse symmetric matrix-vector multiplications.
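In case it helps to pin down where the time goes: the full reorthogonalization
of the current block against the stored basis is essentially two dgemm calls
per sweep, W <- W - V*(V'*W). A sketch of one classical block Gram-Schmidt
sweep (array names and the workspace layout are illustrative):

    ! V(n,k): stored Lanczos basis, W(n,b): current block, work(k,b): scratch
    subroutine block_reorth(n, k, b, V, W, work)
    implicit none
    integer, intent(in) :: n, k, b
    double precision, intent(in)    :: V(n,k)
    double precision, intent(inout) :: W(n,b)
    double precision, intent(out)   :: work(k,b)

    ! work = V^T * W : projection coefficients onto the stored basis
    call dgemm('T', 'N', k, b, n, 1.0d0, V, n, W, n, 0.0d0, work, k)
    ! W = W - V * work : subtract the components along the stored basis
    call dgemm('N', 'N', n, b, k, -1.0d0, V, n, work, k, 1.0d0, W, n)
    end subroutine block_reorth

In practice this sweep is often applied twice ("twice is enough") to restore
orthogonality to working precision. The dgemm sizes grow with both the basis
size k and the block size b, so timing this routine separately (for example
with a PETSc event as suggested above) should show whether it starts to
dominate as the block size grows.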
>>>
>>> I cannot really understand why the overall solution time increases with
>>> the block size in FORTRAN, whereas I was expecting even an improvement
>>> over my MATLAB code.
>>>
>>> Any ideas on what could be going wrong?
>>>
>>> Best regards and thanks in advance,
>>>
>>> Umut
>>
>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their experiments
>> is infinitely more interesting than any results to which their experiments
>> lead.
>> -- Norbert Wiener

