[petsc-users] SLEPc eigensolver that uses minimal memory and finds ALL eigenvalues of a real symmetric sparse matrix in reasonable time

Jose E. Roman jroman at dsic.upv.es
Wed Aug 3 04:01:21 CDT 2011


On 03/08/2011, at 08:30, Shitij Bhargava wrote:

> Hi all !!
> 
> 
> 
> The reason I have started using SLEPc/PETSc is that I want to reduce the memory requirements of a program that I already have in Intel MKL. The program's purpose is to calculate ALL eigenvalues and eigenvectors of a real symmetric sparse matrix. Intel MKL doesn't have a sparse solver for this (it cannot solve for eigenvalues of a matrix represented in sparse format), so I am using SLEPc, as it can do this.
> (NOTE: by "sparsity" I mean ZEROES*100/(ZEROES+NONZEROES). Also, the tolerance I am using is 10^-6.)

If you really want to compute ALL eigenvalues, then SLEPc is not the way to go. SLEPc is intended for computing a few eigenpairs, and in some cases a small percentage of the spectrum, say 20% at most. Computing more than that with SLEPc is probably going to be less efficient than other alternatives.
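For reference, the intended usage pattern looks roughly like the C sketch below: compute a few eigenpairs of a symmetric sparse matrix with the default Krylov-Schur solver. A 1-D Laplacian stands in for your matrix, nev=20 is an invented value, and error checking is omitted for brevity.

#include <slepceps.h>

int main(int argc, char **argv)
{
  Mat         A;
  EPS         eps;
  PetscInt    n = 100, i, Istart, Iend, nev = 20, nconv;
  PetscScalar kr, ki;

  SlepcInitialize(&argc, &argv, NULL, NULL);

  /* 1-D Laplacian as a stand-in for the user's symmetric sparse matrix */
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
  MatSetFromOptions(A);
  MatSetUp(A);
  MatGetOwnershipRange(A, &Istart, &Iend);
  for (i = Istart; i < Iend; i++) {
    if (i > 0)   MatSetValue(A, i, i-1, -1.0, INSERT_VALUES);
    if (i < n-1) MatSetValue(A, i, i+1, -1.0, INSERT_VALUES);
    MatSetValue(A, i, i, 2.0, INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  /* Ask for a few eigenpairs only -- this is what SLEPc is designed for */
  EPSCreate(PETSC_COMM_WORLD, &eps);
  EPSSetOperators(eps, A, NULL);          /* standard problem Ax = lambda x */
  EPSSetProblemType(eps, EPS_HEP);        /* real symmetric (Hermitian) problem */
  EPSSetDimensions(eps, nev, PETSC_DEFAULT, PETSC_DEFAULT);
  EPSSetFromOptions(eps);                 /* honor -eps_* run-time options */
  EPSSolve(eps);

  EPSGetConverged(eps, &nconv);
  for (i = 0; i < nconv; i++) {
    EPSGetEigenpair(eps, i, &kr, &ki, NULL, NULL);
    PetscPrintf(PETSC_COMM_WORLD, "lambda[%d] = %g\n", (int)i, (double)PetscRealPart(kr));
  }

  EPSDestroy(&eps);
  MatDestroy(&A);
  SlepcFinalize();
  return 0;
}

With EPSSetFromOptions in place, the solver, the number of eigenvalues, and the tolerance can all be changed at run time with -eps_type, -eps_nev and -eps_tol.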

> 
> Till now I have been using the LAPACK method (EPSLAPACK) for the SLEPc eigensolver, as I found out that it was the fastest (I configured PETSc to use Intel MKL BLAS/LAPACK routines). But I just found out that it uses a lot of memory irrespective of the sparsity of the matrix. For example, on a matrix of order 9600x9600, no matter how sparse/dense it is (I tested from 0.02% to 75% sparsity), it used 1.3 GB of peak memory every time, whereas the part of the program in which the matrix is assembled in AIJ format (in the case of 75% sparsity) takes only 550 MB. The MKL program takes 3.1 GB. Although the memory requirement is lower, the difference is constant irrespective of sparsity. So it seems that the LAPACK method uses memory dependent on the order of the matrix, irrespective of the sparsity, and I guess it is not very suitable for my purpose? I also tried changing the NCV and MPD (maximum projected dimension) parameters, hoping they would somehow reduce memory requirements even at the cost of more time, but whatever values I pass for NCV and MPD (in EPSSetDimensions), they are not considered at all. They remain the same as the default values (as I verified with EPSGetDimensions).

The EPSLAPACK solver converts the matrix to dense format. As explained in the documentation, this solver should be used only on small matrices, for debugging purposes. NCV and MPD are not relevant in this setting; they only take effect with the iterative solvers, as in the snippet below.
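A hedged illustration of where NCV and MPD are honored (the values 20 and 80 are invented for this example):

/* With an iterative solver, EPSSetDimensions is honored; with EPSLAPACK
   the whole matrix is densified and these settings are ignored. */
EPSSetType(eps, EPSKRYLOVSCHUR);
EPSSetDimensions(eps, 20, 80, PETSC_DEFAULT);   /* nev, ncv, mpd */

The same values can be set at run time with -eps_nev, -eps_ncv and -eps_mpd, provided EPSSetFromOptions is called.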

> 
> So I tried other solvers like the default, Krylov-Schur, and Lanczos, but I think they are built to find only a few eigenvalues and not all, because they almost never converged to ALL eigenvalues, even when I gave them a lot of time (a big maximum-number-of-iterations parameter). But their memory requirements seem to be much lower. For example, I ran the program with EPSLANCZOS on the same 9600x9600 matrix (75% sparsity) with the maximum number of iterations set to 500,000. It ran for 6.5 hours (then I killed it), and at the end it was using only 530 MB (although I don't know how near it was to completion).

Krylov-Schur is almost always preferred over Lanczos, since the latter implements an explicit restart, which is much worse than the implicit restart in Krylov-Schur. Anyway, with either of them, if you set NEV=9600 you are building a projected problem (a dense matrix) of order equal to that of the original matrix, so you have the same cost as with EPSLAPACK plus the cost of building the Krylov subspace. Again: do not use SLEPc for all eigenvalues!
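That said, for computing a large fraction (still not all) of the spectrum, the approach suggested in the SLEPc users' manual is to bound the projected problem through MPD instead of letting NCV grow with NEV; a sketch with illustrative numbers:

/* Request many eigenvalues but cap the size of the dense projected
   problem at mpd; the solver then works through the spectrum in
   restarts of size ~mpd. nev=1000 and mpd=200 are invented values. */
PetscInt nev = 1000, mpd = 200;
EPSSetDimensions(eps, nev, PETSC_DEFAULT, mpd);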

> 
> I am somewhat illiterate about the mathematics behind all this and how the LAPACK, Krylov-Schur, etc. methods work, and I have no idea if what I am experiencing was expected/obvious, although I half-expected some things from reading the SLEPc documentation, where a table was given comparing different eigensolvers.

A general recommendation is to try to understand a little bit how the algorithms work before attempting to use them.

> 
> Also, I have observed that as the sparsity of the matrix increases (more zeroes), more eigenvalues also become zeroes. Is there a way to exploit this with Krylov-Schur/Lanczos, so that there is no need to converge to all eigenvalues (meaning the time they take will be reasonable), but only to the eigenvalues that are nonzero, so that I can assume the non-converged eigenvalues to be simply zeroes? (And is there a way to know beforehand, from the sparsity, approximately how many nonzero eigenvalues there will be, and the corresponding maximum number of iterations required?)

There is no direct connection between the number of zero entries and the number of zero eigenvalues, unless you have rows/columns with all zeros, in which case the (permuted) matrix can be written as A = [A11 0; 0 0], so it is enough to compute the eigenvalues of A11.
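If your matrix does have all-zero rows (and hence, by symmetry, all-zero columns), the deflation can be carried out explicitly in PETSc; the following is only a sequential sketch, assuming MatFindZeroRows is available in your PETSc version:

/* Sketch: extract the nonzero block A11 of a symmetric matrix whose
   remaining rows/columns are entirely zero. The spectrum of A is then
   the spectrum of A11 plus one zero eigenvalue per removed row.
   Sequential sketch: in parallel, ISComplement needs the local
   ownership range instead of (0, n). */
IS       zrows, keep;
Mat      A11;
PetscInt n;

MatGetSize(A, &n, NULL);
MatFindZeroRows(A, &zrows);           /* indices of the all-zero rows */
ISComplement(zrows, 0, n, &keep);     /* indices to retain */
MatCreateSubMatrix(A, keep, keep, MAT_INITIAL_MATRIX, &A11);
/* older PETSc versions call this routine MatGetSubMatrix */
EPSSetOperators(eps, A11, NULL);      /* solve the deflated problem */
ISDestroy(&zrows);
ISDestroy(&keep);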

> 
> Is there a way to do what I want through SLEPc??? I want ALL eigenvalues, and I want the memory requirements of the eigensolver to be low/in accordance with the sparsity of the matrix; otherwise the very purpose of using a sparse matrix representation is somewhat defeated. I am doing all this in Fortran, by the way.

Ask yourself this question: is it really necessary to compute all eigenvalues? If the answer is yes, then maybe ScaLAPACK is what you need.

Jose

> 
> 
> Thanks a lot in advance !!!
> 
> 
> Shitij
