Thank you very much for your reply.

I am a summer trainee, and I talked to my guide about this, so what I want is now a little different.

I really do want all the eigenvalues; there can be no compromise on that. But please note that time is not much of a concern here: the much bigger concern is memory. Even if all I can do is distribute the memory somehow for the computation of all eigenvalues, that is fine (it does not matter if it takes a huge amount of time; he is willing to wait). Can any SLEPc eigensolver distribute memory this way while solving for all eigenvalues?
I am sorry if what I am asking is obvious; I do not know much about distributed memory and such things yet. I am stupid at this moment, but I am learning.

Also, when I ran the SLEPc program on a small cluster and then ran the top command, I saw that while solving for eigenvalues the program was already using up to 500% CPU. Does this mean that the eigensolver was "distributing memory" automatically among different nodes? I understand that it must have been distributing the computational effort, but how do I know whether it was distributing memory as well? top simply showed the RES memory usage as 1.3 GB and the %MEM as about 17%. Also, I did not use any MPIXXX data structures; I simply used SeqAIJ. Will using MPIAIJ "distribute memory" while solving for all eigenvalues? (I understand that it will distribute the memory used to store the matrix, but the memory bottleneck is the eigensolver itself, so it is more important that memory be distributed there.) My guide said that one node will not have enough memory, but all nodes combined will, hence the requirement that memory be distributed.
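In case it makes my question clearer, this is roughly what I imagine the MPIAIJ version would look like (a rough sketch in C based on my reading of the manual; my actual code is in FORTRAN, the matrix values and preallocation numbers are only placeholders, and error checking is omitted):

#include <slepceps.h>

int main(int argc,char **argv)
{
  Mat         A;
  EPS         eps;
  PetscInt    n = 9600, i, Istart, Iend;
  PetscScalar v;

  SlepcInitialize(&argc,&argv,NULL,NULL);

  /* MPIAIJ: every MPI process stores only its own block of rows,
     so the matrix itself is spread over the nodes */
  MatCreate(PETSC_COMM_WORLD,&A);
  MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);
  MatSetType(A,MATMPIAIJ);
  MatMPIAIJSetPreallocation(A,50,NULL,50,NULL);  /* placeholder nonzero estimates */

  /* each process inserts entries only for the rows it owns */
  MatGetOwnershipRange(A,&Istart,&Iend);
  for (i=Istart;i<Iend;i++) {
    v = 2.0;                                     /* trivial diagonal, just for illustration */
    MatSetValues(A,1,&i,1,&i,&v,INSERT_VALUES);
  }
  MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);

  /* the eigensolver is then attached to the distributed matrix */
  EPSCreate(PETSC_COMM_WORLD,&eps);
  EPSSetOperators(eps,A,NULL);
  EPSSetProblemType(eps,EPS_HEP);
  EPSSetFromOptions(eps);
  EPSSolve(eps);

  EPSDestroy(&eps);
  MatDestroy(&A);
  SlepcFinalize();
  return 0;
}

What I do not understand is whether a setup like this also distributes the memory that the eigensolver itself needs, which is my real bottleneck.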
I read about ScaLAPACK, and I have already asked on their forums to confirm whether it is suitable for me. I am waiting for an answer.

Thank you once again.

Shitij

On 3 August 2011 14:31, Jose E. Roman <jroman@dsic.upv.es> wrote:
On 03/08/2011, at 08:30, Shitij Bhargava wrote:
> Hi all !!
>
> The reason I have started using SLEPc/PETSc is that I want to reduce the memory requirements of a program that I already have in Intel MKL. The program's purpose is to calculate ALL eigenvalues and eigenvectors of a real symmetric sparse matrix. Intel MKL does not have a sparse solver for this (it cannot solve for the eigenvalues of a matrix represented in sparse format), so I am using SLEPc, as it can do this.
> (NOTE: by "sparsity" I mean ZEROES*100/(ZEROES+NONZEROES). Also, the tolerance I am using is 10^-6.)
If you really want to compute ALL eigenvalues, then SLEPc is not the way to go. SLEPc is intended for computing a few eigenpairs, or in some cases a small percentage, e.g. 20% at most. Computing more than that with SLEPc is probably going to be less efficient than other alternatives.
>
> Till now I have been using the LAPACK method (EPSLAPACK) for the SLEPc eigensolver, because I found it to be the fastest (I configured PETSc to use the Intel MKL BLAS/LAPACK routines). But I just found out that it uses a lot of memory irrespective of the sparsity of the matrix. For example, on a matrix of order 9600x9600, no matter how sparse or dense it is (I tested from 0.02% to 75% sparsity), it used 1.3 GB of peak memory every time, whereas the part of the program in which the matrix is assembled in AIJ format (in the case of 75% sparsity) takes only 550 MB. The MKL program takes 3.1 GB, so although the memory requirement is lower, the difference is constant irrespective of sparsity. It seems that the LAPACK method uses memory dependent on the order of the matrix, irrespective of the sparsity, so I guess it is not very suitable for my purpose?
> I also tried changing the NCV and MPD (maximum projected dimension) parameters, hoping they would somehow reduce the memory requirements even at the cost of more time, but whatever I pass NCV and MPD as (in EPSSetDimensions), they are not considered at all: they remain at the default values (as I verified with EPSGetDimensions).
The EPSLAPACK solver converts the matrix to dense format. As explained in the documentation, this solver must be used only on small matrices, for debugging purposes. NCV and MPD are not relevant in this setting.
>
> So I tried other solvers such as the default, krylovschur, and lanczos, but I think they are built to find only a few eigenvalues and not all of them, because they almost never converged to ALL eigenvalues, even when I gave them a lot of time (a large maximum-number-of-iterations parameter). Their memory requirements, however, seem to be quite small. For example, I ran the program with EPSLANCZOS on the same 9600x9600 matrix (75% sparsity) with the maximum number of iterations set to 500,000. It ran for 6.5 hours (then I killed it), and at the end it was using only 530 MB (although I do not know how close it was to completion).
Krylov-Schur is almost always preferred over Lanczos, since the latter implements an explicit restart, which is much worse than the implicit restart in Krylov-Schur. Anyway, in both of them, if you set NEV=9600 you are building a projected problem (a dense matrix) of order equal to the original matrix, so you have the same cost as with EPSLAPACK plus the cost of building the Krylov subspace. Again: do not use SLEPc for all eigenvalues!
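The intended usage is to request only a modest number of eigenpairs, with NCV/MPD bounding the size of the subspace and of the projected (dense) problem. Something along these lines (a fragment using the C interface; eps and an assembled matrix A are assumed to exist already, and the numbers are only placeholders):

  EPSSetOperators(eps,A,NULL);
  EPSSetProblemType(eps,EPS_HEP);           /* Hermitian (here: real symmetric) problem */
  EPSSetType(eps,EPSKRYLOVSCHUR);           /* the default solver, implicit restart */
  /* nev = requested eigenpairs, ncv = subspace size, mpd = max projected dimension;
     these determine how much memory the solver itself needs */
  EPSSetDimensions(eps,200,PETSC_DEFAULT,PETSC_DEFAULT);
  EPSSetFromOptions(eps);                   /* or set -eps_nev, -eps_ncv, -eps_mpd at run time */
  EPSSolve(eps);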
>
> I am somewhat illiterate about the mathematics behind all this and about how LAPACK, Krylov-Schur, etc. work, and I have no idea whether what I am experiencing was expected/obvious, although I half-expected some of it after reading the SLEPc documentation, where a table is given comparing the different eigensolvers.
A general recommendation is to try to understand a little bit how the algorithms work before attempting to use them.
>
> Also, I have observed that as the sparsity of the matrix increases (more zeroes), more eigenvalues also become zero. Is there a way to exploit this with krylovschur/lanczos, so that there is no need to converge to all eigenvalues (meaning the time they take would be reasonable) but only to the eigenvalues that are non-zero, and the non-converged eigenvalues can simply be assumed to be zero? (And is there a way to know beforehand, from the sparsity, approximately how many non-zero eigenvalues there will be, and the corresponding maximum number of iterations required?)
There is no direct connection between the number of zero entries and the number of zero eigenvalues, unless you have rows/columns with all zeros, in which case the (permuted) matrix can be written as A = [A11 0; 0 0], so it is enough to compute the eigenvalues of A11.
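In symbols, with P a permutation matrix and k the number of zero rows/columns (just a restatement of the above):

  P^T A P = \begin{bmatrix} A_{11} & 0 \\ 0 & 0 \end{bmatrix},
  \qquad
  \lambda(A) = \lambda(A_{11}) \cup \{\underbrace{0,\dots,0}_{k}\}.

So the zero eigenvalues you can predict in advance are only those coming from entirely zero rows/columns, not from sparsity in general.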
>
> Is there a way to do what I want through SLEPc? I want ALL eigenvalues, and I want the memory requirements of the eigensolver to be small / in accordance with the sparsity of the matrix; otherwise the very purpose of using a sparse matrix representation is somewhat defeated. I am doing all this in FORTRAN, by the way.
Ask yourself this question: is it really necessary to compute all eigenvalues? If the answer is yes, then maybe ScaLAPACK is what you need.
Jose

>
> Thanks a lot in advance !!!
>
> Shitij