[petsc-users] SLEPc Bogus eigenvalues for large -eps_nev [SOLVED]

Vijay Gopal Chilkuri vijay.gopal.c at gmail.com
Fri May 15 08:21:27 CDT 2015


To close this thread, let me summarize the problem.

The problem occurred because of how the MPI processes were distributed over
the compute nodes (as Jose E. Roman pointed out). Once the correct placement
of the processes on the nodes was enforced using *numactl*, all 300 (good)
eigenvalues were obtained, properly converged.

TL;DR: an incorrect distribution of processes over the nodes may lead to
bogus eigenvalues; numactl can be used to enforce the correct placement.
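
For the record, here is a minimal sketch of the kind of per-rank binding
wrapper that can enforce such a placement. The script name, the executable
name (problem.x), and the OpenMPI-specific OMPI_COMM_WORLD_LOCAL_RANK
variable are only illustrative; adapt them to your MPI launcher and cluster.

    #!/bin/bash
    # bind_rank.sh: pin each local MPI rank to one NUMA node (round-robin),
    # so that a rank's CPUs and its memory live on the same node.
    NNODES=$(numactl --hardware | awk '/available:/ {print $2}')
    NODE=$(( ${OMPI_COMM_WORLD_LOCAL_RANK:-0} % NNODES ))
    exec numactl --cpunodebind=$NODE --membind=$NODE "$@"

It would then be launched as, for example,

    mpirun -np 741 ./bind_rank.sh ./problem.x -eps_nev 300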

Thanks, Jose E. Roman!

The problem is now solved.

With regards,
 Vijay

On Fri, May 15, 2015 at 3:00 PM, Vijay Gopal Chilkuri <
vijay.gopal.c at gmail.com> wrote:

> Yes, those seem to be the right eigenvalues.
> OK, so the solution is to recompile PETSc/SLEPc with a basic configuration
> and test with --with-debugging=1.
>
> Would it make a difference if I use *-eps_type lanczos* or some other
> diagonalization procedure?
>
> I'll run the test with a new version of PETSc/SLEPc and report back.
>
> Thanks a lot,
>  Vijay
>
> On Fri, May 15, 2015 at 2:55 PM, Jose E. Roman <jroman at dsic.upv.es> wrote:
>
>>
>> On 14/05/2015, at 19:13, Vijay Gopal Chilkuri wrote:
>>
>> > Oops, sorry, I sent you a smaller one (size 540540); this should finish
>> > in a few minutes.
>> >
>> > It requires the same makefile and irpf90.a library,
>> > so just replace the old problem.c file with this one and it should compile.
>> >
>> > Thanks again,
>> >  Vijay
>> >
>>
>> I was able to compute 300 eigenvalues of this matrix of size 540540. All
>> eigenvalues are in the range -4.70811 .. -4.613807, and the associated
>> residual is always below 1e-9.
>>
>> There must be something in your software configuration that is causing
>> problems. I would suggest trying with a basic PETSc/SLEPc configuration,
>> with no OpenMP flags, using --download-fblaslapack (instead of MKL). Also,
>> although it should not make any difference, you may want to try with a
>> smaller number of MPI processes (rather than 741).
>>
>> Jose
>>
>>
>
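
P.S. For anyone finding this thread later: the basic, MKL-free debug build
Jose suggests above would look roughly like the following (only the options
mentioned in this thread are shown; a real configure line will likely need
more):

    # minimal PETSc debug build using the reference BLAS/LAPACK instead of MKL
    ./configure --with-debugging=1 --download-fblaslapack
    make all    # with PETSC_DIR and PETSC_ARCH set as reported by configure
    # then reconfigure and rebuild SLEPc against this PETSc build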