[petsc-users] SLEPc Bogus eigenvalues for large -eps_nev

Jose E. Roman jroman at dsic.upv.es
Fri May 15 08:18:27 CDT 2015


On 15/05/2015, at 15:00, Vijay Gopal Chilkuri wrote:

> Yes, those seem to be the right eigenvalues.
> Ok, so the solution is to recompile PETSc/SLEPc with a basic configuration and test with --with-debugging=1.

Yes, it is always good to work with --with-debugging=1 until you know your code is working correctly. In any case, I did the computation with --with-debugging=0 and GNU compilers. For the moment it is more important to avoid multiple threads, both in the LAPACK linked with SLEPc (avoid MKL) and in the code that generates the matrix (just in case there was a problem in computing the coefficients).
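For example, a basic build could look something like this (the MPI path is only a placeholder for your installation):

  ./configure --with-debugging=1 --download-fblaslapack --with-mpi-dir=/path/to/mpi

and then build SLEPc against that PETSC_DIR/PETSC_ARCH as usual. This links a single-threaded reference LAPACK, so any effect of threaded MKL is ruled out.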

> 
> Would it make a difference if I use -eps_type lanczos or some other diagonalization procedure?

Do not use lanczos; it is simply a worse algorithm than the default Krylov-Schur solver (explicit versus implicit restart).
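For reference, with the default solver a typical run just adds the usual EPS options on the command line, for instance (the executable name and process count are only placeholders):

  mpiexec -n 64 ./problem.x -eps_type krylovschur -eps_nev 300 -eps_smallest_real -eps_tol 1e-8

No option is actually needed to get Krylov-Schur, since it is the default.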

In principle, methods of the CG type are appropriate for your problem, but convergence may be too slow. You can try RQCG with an ICC preconditioner. I am currently implementing LOBPCG in the development version; I will try it with your matrices.
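If you want to try that, a minimal sketch (untested, with a made-up routine name, assuming A is your already assembled symmetric matrix) would be something like:

#include <slepceps.h>

PetscErrorCode SolveWithRQCG(Mat A,PetscInt nev)
{
  EPS            eps;
  ST             st;
  KSP            ksp;
  PC             pc;
  PetscErrorCode ierr;

  ierr = EPSCreate(PETSC_COMM_WORLD,&eps);CHKERRQ(ierr);
  ierr = EPSSetOperators(eps,A,NULL);CHKERRQ(ierr);
  ierr = EPSSetProblemType(eps,EPS_HEP);CHKERRQ(ierr);       /* real symmetric problem */
  ierr = EPSSetType(eps,EPSRQCG);CHKERRQ(ierr);
  ierr = EPSSetWhichEigenpairs(eps,EPS_SMALLEST_REAL);CHKERRQ(ierr);
  ierr = EPSSetDimensions(eps,nev,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr);
  ierr = EPSGetST(eps,&st);CHKERRQ(ierr);
  ierr = STSetType(st,STPRECOND);CHKERRQ(ierr);              /* RQCG works with a preconditioner-only ST */
  ierr = STGetKSP(st,&ksp);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCICC);CHKERRQ(ierr);                  /* ICC is sequential; in an MPI run use block Jacobi with ICC on each block */
  ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);               /* still allow -eps_* / -st_* overrides */
  ierr = EPSSolve(eps);CHKERRQ(ierr);
  ierr = EPSDestroy(&eps);CHKERRQ(ierr);
  return 0;
}

The same selection should also work directly from the command line with -eps_type rqcg -st_pc_type icc, since the preconditioner is taken from the ST's KSP.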

Jose


> 
> I'll run the test with a new version of PETSc/SLEPc and report back.
> 
> Thanks a lot,
>  Vijay
> 
> On Fri, May 15, 2015 at 2:55 PM, Jose E. Roman <jroman at dsic.upv.es> wrote:
> 
> On 14/05/2015, at 19:13, Vijay Gopal Chilkuri wrote:
> 
> > Oops, sorry, I sent you a smaller one (size 540540); this should finish in a few minutes.
> >
> > It requires the same makefile and irpf90.a library.
> > Just replace the old problem.c file with this and it should compile.
> >
> > Thanks again,
> >  Vijay
> >
> 
> I was able to compute 300 eigenvalues of this matrix of size 540540. All eigenvalues are in the range -4.70811 .. -4.613807, and the associated residual is always below 1e-9.
> 
> There must be something in your software configuration that is causing problems. I would suggest trying with a basic PETSc/SLEPc configuration, with no OpenMP flags, using --download-fblaslapack (instead of MKL). Also, although it should not make any difference, you may want to try with a smaller number of MPI processes (rather than 741).
> 
> Jose
> 
> 


