[petsc-users] SLEPc eigensolver that uses minimal memory and finds ALL eigenvalues of a real symmetric sparse matrix in reasonable time

Shitij Bhargava shitij.cse at gmail.com
Mon Aug 8 00:29:55 CDT 2011


Thank you very much for your reply.

Unfortunately, I do not have access to the cluster at this moment because
there is some problem with it.

But I made a separate program that reads a 2500x2500 matrix from a file and
computes its eigenvalues. The matrix itself is sparse, symmetric and has its
non-zero values randomly distributed. I read more about Krylov-Schur and the
other solvers, and about parallel computing and MPI, and I think I now
understand what you mean: the eigensolvers are efficient at computing
eigenvalues at the ends of the spectrum, and converge especially quickly
(in fewer iterations) at the large end compared to the small end, right?
I still don't understand what the maximum projected dimension (mpd) exactly
means, though. I played with mpd values nonetheless, and it doesn't seem to
have much effect on memory usage. I am running this on my laptop.
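
In case it is relevant, this is how I have been setting the dimensions when I
experimented with mpd (a minimal sketch based on my reading of the SLEPc
manual; the numbers 1250 and 200 are just placeholder values I picked myself,
not anything you suggested):

    /* request nev eigenvalues, let SLEPc choose ncv, and cap the projected
       subspace at mpd columns, which as far as I understand is what bounds
       the size of the working basis (and hence the memory) */
    ierr = EPSSetDimensions(eps,1250,PETSC_DECIDE,200);CHKERRQ(ierr);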

But the main problem seems to be that the different MPI processes don't
'cooperate'. Each process is computing its own solution and storing its own
copy of the matrix, as if it were running its own EPSSolve. I say this
because in the output of the top command each process (I used 2) uses nearly
the same amount of memory as when I run the program with only 1 process (so
I suppose the memory is not being distributed?). I am using MPIAIJ now. I am
putting the code below.
(I understand why the program prints things multiple times, and I know I can
fix that with something like "if (processID == 0) printSomething"; what I am
not able to understand is how to distribute memory while solving for the
eigenvalues, i.e. during the call to EPSSolve. There doesn't seem to be any
parameter that tells it to cooperate with the other processes.)
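
For reference, this is the kind of thing I have in mind for the printing and
for checking how much memory each process really uses (a rough sketch; I am
assuming PetscMemoryGetCurrentUsage reports the calling process's own usage,
which I have not verified):

    int rank;
    PetscLogDouble mem;
    MPI_Comm_rank(PETSC_COMM_WORLD,&rank);
    ierr = PetscMemoryGetCurrentUsage(&mem);CHKERRQ(ierr);
    /* every process reports its own usage */
    printf("[rank %d] current memory usage: %g bytes\n",rank,(double)mem);
    /* only rank 0 prints this */
    if (rank == 0) printf("matrix assembly done\n");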

*#include "slepceps.h"
#include <stdio.h>
#include <time.h>
#include <math.h>

#define TOL 0.000001
#define MAXIT 1000

int main(int argc,char**argv)
{
    FILE *fp;
    double entry;
    int N;
    int nzval=0,Istart,Iend;

    Mat A;
    PetscErrorCode ierr;
    double *value,kr,ki;
    int i,j,nconv,nev,maxit,its;
    int col_nz;/*number of non-zero columns in a row*/
    Vec xr, xi;
    double tol,error,re,im;
    EPS eps; /*Eigen solver context*/
    const EPSType  type;
    int ncv,mpd;

    /*PetscInt rows;*/

    int* columns;

    SlepcInitialize(&argc,&argv,(char*)0,PETSC_NULL);

     /*Read matrix from file*/
    fp=fopen("sparseMatrix.txt","r");
    fscanf(fp,"%d",&N);
    fscanf(fp,"%lf",&entry);/*not required, but only to move file pointer
forward*/

    /*Allocate memory to arrays*/

    value=(double*)malloc(N*sizeof(double));
    columns=(int*)malloc(N*sizeof(int));

    ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
    ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,N,N);CHKERRQ(ierr);
    ierr=MatSetType(A,MATMPIAIJ);CHKERRQ(ierr);
    /*ierr=MatSeqAIJSetPreallocation(A,N,PETSC_NULL);CHKERRQ(ierr);*/
    ierr=MatMPIAIJSetPreallocation(A,N,PETSC_NULL,N,PETSC_NULL);CHKERRQ(ierr); /* not sure this preallocation is right -- see the sketch after the listing */

    ierr = MatGetOwnershipRange(A,&Istart,&Iend);CHKERRQ(ierr);

    for(i=0;i<N;i++)
    {
        col_nz=0;
        for(j=0;j<N;j++)
        {
            fscanf(fp,"%lf",&entry); /* every process reads every row, so the file pointer stays in sync */
            if(entry!=0)
            {
                value[col_nz]=entry;
                columns[col_nz]=j;
                col_nz++;
            }
        }
        if(i>=Istart && i<Iend) /* important: only rows owned by this process are inserted; interval closed at Istart, open at Iend */
        {
            ierr = MatSetValues(A,1,&i,col_nz,columns,value,INSERT_VALUES);CHKERRQ(ierr);
        }
    }
    fclose(fp);

    ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

    printf("MATRIX ASSEMBLY DONE !!!!!!!!\n\n");

    /*Solve the Eigen Problem*/

    /***********START TIME************/
    clock_t start=clock();
    /********************************/

    ierr = EPSCreate(PETSC_COMM_WORLD,&eps);CHKERRQ(ierr);

    ierr = EPSSetOperators(eps,A,PETSC_NULL);CHKERRQ(ierr);
    ierr = EPSSetProblemType(eps,EPS_HEP);CHKERRQ(ierr);

    ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);
    ierr=EPSSetWhichEigenpairs(eps,EPS_LARGEST_REAL);CHKERRQ(ierr);

    ierr=EPSSetDimensions(eps,0.5*N,PETSC_DECIDE,PETSC_DECIDE);CHKERRQ(ierr); /* ask for the largest half of all N eigenvalues; the smallest half is computed in the second solve below */

    ierr = EPSSetTolerances(eps,TOL,500);CHKERRQ(ierr);

    /*ierr = MatDestroy(A);CHKERRQ(ierr);*/ /* A is still needed for the second solve below, so it must not be destroyed here */

    ierr = EPSSolve(eps);CHKERRQ(ierr);


    printf("Solved for half the largest values\n\n\n\n");

    ierr = EPSGetConverged(eps,&nconv);CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_WORLD," Number of converged eigenpairs: %d\n\n",nconv);CHKERRQ(ierr);

    ierr = EPSDestroy(eps);CHKERRQ(ierr);

    ierr = EPSCreate(PETSC_COMM_WORLD,&eps);CHKERRQ(ierr);

    ierr = EPSSetOperators(eps,A,PETSC_NULL);CHKERRQ(ierr);
    ierr = EPSSetProblemType(eps,EPS_HEP);CHKERRQ(ierr);

    ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);
    ierr=EPSSetWhichEigenpairs(eps,EPS_SMALLEST_REAL);CHKERRQ(ierr);

    ierr=EPSSetDimensions(eps,0.5*N,PETSC_DECIDE,PETSC_DECIDE);CHKERRQ(ierr); /* ask for the smallest half of all N eigenvalues */

    ierr = EPSSetTolerances(eps,TOL,500);CHKERRQ(ierr);
    ierr = EPSSolve(eps);CHKERRQ(ierr);


    ierr = EPSGetConverged(eps,&nconv);CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_WORLD," Number of converged eigenpairs: %d\n\n",nconv);CHKERRQ(ierr);


    ierr = EPSDestroy(eps);CHKERRQ(ierr);

    /**********STOP TIME**************/
    clock_t end=clock();
    /********************************/


    ierr = PetscPrintf(PETSC_COMM_WORLD,"Time taken to compute all eigenvalues of a %d x %d matrix is : %lf seconds\n\n",N,N,(double)(end-start)/CLOCKS_PER_SEC);CHKERRQ(ierr);

    /* clean up */
    ierr = MatDestroy(A);CHKERRQ(ierr);
    free(value);
    free(columns);

    ierr = SlepcFinalize();CHKERRQ(ierr);

    return 0;
}
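
About the preallocation line I flagged in the listing: as far as I
understand, asking for N nonzeros per row in both the diagonal and
off-diagonal blocks preallocates roughly dense storage, which may be part of
my memory problem. Below is a rough, untested sketch of what I think exact
per-row preallocation would look like, done as an extra counting pass over
the file before the matrix is filled. It is meant to replace the
preallocation line in the program above (so it reuses ierr, N and A from
there), and it assumes that PetscSplitOwnership gives the same row split
that PETSC_DECIDE later produces; please correct me if that is wrong.

    PetscInt nlocal = PETSC_DECIDE, Nglobal = N, Istart2, Iend2;
    PetscInt *d_nnz, *o_nnz;
    int dim, r, c;
    double v;
    FILE *fc;

    /* reproduce the default row split before the matrix layout exists */
    ierr = PetscSplitOwnership(PETSC_COMM_WORLD,&nlocal,&Nglobal);CHKERRQ(ierr);
    ierr = MPI_Scan(&nlocal,&Iend2,1,MPIU_INT,MPI_SUM,PETSC_COMM_WORLD);CHKERRQ(ierr);
    Istart2 = Iend2 - nlocal;

    d_nnz = (PetscInt*)calloc(nlocal,sizeof(PetscInt));
    o_nnz = (PetscInt*)calloc(nlocal,sizeof(PetscInt));

    /* first pass over the file: count nonzeros of each locally owned row */
    fc = fopen("sparseMatrix.txt","r");
    fscanf(fc,"%d",&dim);
    fscanf(fc,"%lf",&v); /* skip the same extra entry as in the program above */
    for (r = 0; r < N; r++)
        for (c = 0; c < N; c++) {
            fscanf(fc,"%lf",&v);
            if (v != 0 && r >= Istart2 && r < Iend2) {
                if (c >= Istart2 && c < Iend2) d_nnz[r-Istart2]++; /* diagonal block */
                else                           o_nnz[r-Istart2]++; /* off-diagonal part */
            }
        }
    fclose(fc);

    /* exact per-row preallocation instead of N nonzeros per row */
    ierr = MatMPIAIJSetPreallocation(A,0,d_nnz,0,o_nnz);CHKERRQ(ierr);
    free(d_nnz); free(o_nnz);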

I ran it with:

mpirun -np 2 ./slepcEigenMPI -eps_monitor

I didn't do exactly what you suggested, because the matrix-generation part of
the actual program is itself quite time consuming, but I assume what I am
doing here is equivalent to what you meant? Also, I left mpd at PETSC_DECIDE,
because I didn't know what value to use for a matrix of this dimension.
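
If explicit values would have been better, I suppose I could also have passed
them on the command line, something like this (the numbers are just my own
guesses for this 2500x2500 case, not values you recommended):

mpirun -np 2 ./slepcEigenMPI -eps_nev 1250 -eps_mpd 200 -eps_monitor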

This is part of the output I get:

MATRIX ASSEMBLY DONE !!!!!!!!

MATRIX ASSEMBLY DONE !!!!!!!!

  1 EPS nconv=98 first unconverged value (error) 1490.88 (1.73958730e-05)
  1 EPS nconv=98 first unconverged value (error) 1490.88 (1.73958730e-05)
  2 EPS nconv=282 first unconverged value (error) 3.04636e-27 (2.49532175e-04)
  2 EPS nconv=282 first unconverged value (error) 3.04636e-27 (2.49532175e-04)
  3 EPS nconv=386 first unconverged value (error) 2.81217e-25 (6.07438011e-04)
  3 EPS nconv=386 first unconverged value (error) 2.81217e-25 (6.07438011e-04)
  4 EPS nconv=420 first unconverged value (error) 2.75201e-27 (1.65345126e-03)
  4 EPS nconv=420 first unconverged value (error) 2.75201e-27 (1.65345126e-03)
  6 EPS nconv=1001 first unconverged value (error) 1.93852e-27 (8.40346506e-02)
  6 EPS nconv=1001 first unconverged value (error) 1.93852e-27 (8.40346506e-02)
  7 EPS nconv=1027 first unconverged value (error) 2.00333e-27 (7.40100945e-03)
  7 EPS nconv=1027 first unconverged value (error) 2.00333e-27 (7.40100945e-03)

How do I make the different processes store the matrix together and, more
importantly, run EPSSolve together? That is what I want, right?



Thank you very much !

Shitij





On 4 August 2011 16:57, Jose E. Roman <jroman at dsic.upv.es> wrote:

>
> On 04/08/2011, at 08:48, Shitij Bhargava wrote:
>
> > Thank you very much for your reply.
> >
> > I am a summer trainee, and I talked to my guide about this, and now what
> > I want might be a little different.
> >
> > I really do want all the eigenvalues. There cannot be any compromise on
> > that. But please note that time is not much of a concern here; the much
> > bigger concern is memory. Even if I can somehow distribute the memory for
> > the computation of all eigenvalues, that is fine (it doesn't matter if it
> > takes a huge amount of time, he is willing to wait). Can any SLEPc
> > eigensolver distribute memory this way while solving for all eigenvalues?
> >
> > I am sorry if what I am asking is obvious; I don't know much about
> > distributed memory and so on yet. I am rather lost at the moment, but I
> > am learning.
> >
> > Also, when I ran the SLEPc program on a small cluster and then ran the
> > top command, I saw that while solving for eigenvalues the program was
> > already using up to 500% CPU. Does this mean that the eigensolver was
> > "distributing memory" automatically among different nodes? I understand
> > that it must be distributing the computational effort, but how do I know
> > whether it was distributing memory as well? top simply showed me a RES
> > memory usage of 1.3 GB and a %MEM of about 17%. Also, I did not use any
> > MPIXXX data structures; I simply used SeqAIJ. Will using MPIAIJ
> > "distribute memory" while solving for all eigenvalues? (I understand that
> > it will distribute the memory needed to store the matrix, but the memory
> > bottleneck is the eigenvalue solver, so it is more important for memory
> > to be distributed there.) My guide said that one node will not have
> > enough memory, but all the nodes combined will, and hence the requirement
> > that the memory be distributed.
> >
> > I read about ScaLAPACK, and I have already asked on their forums to
> > confirm whether it is suitable for me. I am waiting for an answer.
> >
> > Thank you once again.
> >
> > Shitij
>
> Try something like this:
>
> $ mpirun -np 8 ./program -eps_nev 4800 -eps_mpd 400 -eps_largest_real
> $ mpirun -np 8 ./program -eps_nev 4800 -eps_mpd 400 -eps_smallest_real
>
> But this may not work for the smallest ones.
>
> Jose
>
>