[petsc-users] Newbie question: Something is wrong with this Slepc simple example for generalised eigenvalue problem
Jose E. Roman
jroman at dsic.upv.es
Mon Mar 29 05:15:38 CDT 2021
> On 29 Mar 2021, at 11:59, dazza simplythebest <sayosale at hotmail.com> wrote:
>
> Dear Jose,
> Many thanks for your reply, it enabled me to solve the problem I described below.
> In particular, I recompiled PETSC and SLEPC with MUMPS and SCALAPACK included
> (by including the following lines in the configuration file:
> '--download-mumps',
> '--download-scalapack',
> '--download-cmake',
> ).
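> For anyone reproducing this, the equivalent command-line form of those options (a sketch only:
> the complex-scalar, 64-bit-index and debugging settings reflect what is discussed elsewhere in
> this thread, and compiler and path settings are omitted) would be roughly:
>
>   ./configure --with-scalar-type=complex --with-64-bit-indices=1 --with-debugging=1 \
>               --download-mumps --download-scalapack --download-cmake
>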
> The program (unchanged from the previous attempt) then compiled
> and ran straight away with no hanging. Just in case someone is following this example later,
> the correct eigenvalues are (I have checked them against an independent lapack zggev code):
> 0 (15.7283987073479, 79.3812583009335)
> 1 (10.3657189951037, 65.4935496512632)
> 2 (20.2726807729152, 60.7235264113338)
> 3 (15.8693370278539, 54.4403170495817)
> 4 (8.93270390707530, 42.0676105026967)
> 5 (18.0161334989426, 31.7976217614629)
> 6 (16.2219350827311, 26.7999463239364)
> 7 (6.64653598248233, 19.2535093354505)
> 8 (7.23494239184217, 4.58606776802574)
> 9 (3.68158090200136, 1.65838104812904)
>
> The problem would then indeed appear to have been - exactly as you suggested -
> the fact that the generalised eigenvalue problem requires linear systems to be
> solved as part of the solution process, but in the previous attempt I was trying
> to solve an MPI-distributed system using a sequential solver.
>
> A couple of other quick points / queries:
> 1) By 'debug mode' you mean compiling the library with '--with-debugging=1'?
Yes.
> 2) Out of interest, which LU solver was actually used - ScaLAPACK or
> MUMPS? I could see both of these libraries in the linking command; is there an easy way
> to find out which solver was called?
MUMPS (which uses ScaLAPACK internally). Run with -eps_view and you will see the details of the solver being used.
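For example, running the program with something like

    mpiexec -n 2 ./gen_evp -eps_view

(the executable name here is just a placeholder) prints the EPS, ST, KSP and PC configuration; the PC factorization section of that output should name the package actually used (MUMPS in this case).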
> 3) One slightly strange thing is that I also compiled the library with '--with-64-bit-indices=1'
> for 64-bit integer indexing, but I noticed that in the compilation commands generated
> by make there is no '-i8' flag, which is used with mpiifort to request 64-bit
> integers. Is it the case that this is not required, since everything is somehow taken care of by
> the PETSc custom data type (PetscInt)? As an experiment I tried manually inserting -i8
> and got a few errors like:
> /data/work/rotplane/omega_to_zero/stability/test/tmp10/tmp3/write_and_solve8.F(38): warning #6075: The data type of the actual argument does not match the definition. [PETSC_COMM_WORLD]
> call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr);if (ierr .ne. 0) th
If you use PetscInt throughout your code, then no -i8 flag is required. Note that the argument of MPI functions should be PetscMPIInt, not PetscInt.
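A minimal sketch illustrating the two integer kinds (the program name and the printed kind values are just for illustration):

      program intkinds
#include <slepc/finclude/slepceps.h>
      use slepceps
      implicit none
      PetscErrorCode ierr
      PetscMPIInt    rank   ! arguments of MPI calls use the MPI library's integer kind
      PetscInt       n      ! PETSc sizes/indices: 64-bit when configured --with-64-bit-indices=1

      call SlepcInitialize(PETSC_NULL_CHARACTER,ierr)
      call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)   ! rank must be PetscMPIInt
      n = 10                                           ! values passed to Mat/Vec/EPS routines are PetscInt
      if (rank == 0) write(*,*) 'kind(PetscInt)=',kind(n),' kind(PetscMPIInt)=',kind(rank)
      call SlepcFinalize(ierr)
      end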
Jose
>
>
> A big thank you once again,
> Best wishes,
> Dan.
>
> From: Jose E. Roman <jroman at dsic.upv.es>
> Sent: Sunday, March 28, 2021 4:42 PM
> To: dazza simplythebest <sayosale at hotmail.com>
> Cc: PETSc users list <petsc-users at mcs.anl.gov>
> Subject: Re: [petsc-users] Newbie question: Something is wrong with this Slepc simple example for generalised eigenvalue problem
>
> You should run in debug mode until you get a correct code, otherwise you may not see some error messages. Also, it is recommended to add error checking after every call to PETSc, otherwise the execution continues and may get blocked. See the CHKERRA macro in https://slepc.upv.es/documentation/current/src/eps/tutorials/ex1f90.F90.html
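> In the style of that example, each PETSc/SLEPc call is followed by the macro, e.g.
>
>       call EPSSolve(eps,ierr);CHKERRA(ierr)
>
> (CHKERRA is meant for the main program; CHKERRQ is used inside subroutines.)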
>
> The problem you are probably having is that you are running with several MPI processes, so you need a parallel LU solver. See FAQ #10 https://slepc.upv.es/documentation/faq.htm
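> With MUMPS available, a parallel LU can be selected at run time with options along the lines of
> '-st_ksp_type preonly -st_pc_type lu -st_pc_factor_mat_solver_type mumps'.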
>
> Jose
>
>
> > On 28 Mar 2021, at 11:25, dazza simplythebest <sayosale at hotmail.com> wrote:
> >
> > Dear All,
> > I am seeking to use slepc/petsc to solve a generalised eigenvalue problem
> > that arises in a hydrodynamic stability problem. The code is a parallelisation of an existing
> > serial Fortran code. Before I get to grips with this target problem, I would of course like to get
> > some relevant examples working. I have installed petsc/slepc seemingly without any problem,
> > and the provided slepc example Fortran program ex1f.F, which solves a regular eigenvalue
> > problem Ax = lambda x, seemed to compile and run correctly.
> > I have now written a short program to instead solve the complex generalised
> > problem Ax = lambda B x (see below) . This code compiles and runs w/out errors but
> > for some reason hangs when calling EPSSolve - we enter EPSSolve but never leave.
> > The matrices appear to be correctly assembled - all the values are correct in the MatView
> > printout - so I am not quite sure where I have gone wrong; can anyone spot my mistake?
> > (Note that for the actual problem I wish to solve I have already written the code to construct the matrix,
> > which distributes the rows across the processes, and it is fully tested and working. Hence I want to specify
> > the distribution of rows and not leave it up to PETSC_DECIDE.)
> > I would be very grateful if someone can point out what is wrong with this small example code (see below),
> > Many thanks,
> > Dan.
> >
> > ! this program MUST be run with NGLOBAL = 10, MY_NO_ROWS = 5
> > ! and two MPI processes since row distribution is hard-baked into code
> > !
> > program main
> > #include <slepc/finclude/slepceps.h>
> > use slepceps
> > implicit none
> >
> > ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> > ! Declarations
> > ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> > !
> > ! Variables:
> > ! A , B double complex operator matrices
> > ! eps eigenproblem solver context
> >
> > Mat A,B
> > EPS eps
> > EPSType tname
> > PetscReal tol, error
> > PetscScalar kr, ki
> > Vec xr, xi
> > PetscInt NGLOBAL , MY_NO_ROWS, NL3, owner
> > PetscInt nev, maxit, its, nconv
> > PetscInt i,j,ic,jc
> > PetscReal reala, imaga, realb, imagb, di, dj
> > PetscScalar a_entry, b_entry
> > PetscMPIInt rank
> > PetscErrorCode ierr
> > PetscInt,parameter :: zero = 0, one = 1, two = 2, three = 3
> >
> > PetscInt M1, N1, mm1, nn1
> > ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> > ! Beginning of program
> > ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> >
> > call SlepcInitialize(PETSC_NULL_CHARACTER,ierr)
> > call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)
> > ! make sure you set NGLOBAL = 10, MY_NO_ROWS = 5 and run with two processes
> > NGLOBAL = 10
> > MY_NO_ROWS = 5
> > ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> > ! Compute the operator matrices that define the eigensystem, Ax=kBx
> > ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> > !!!!!!!!! Setup A matrix
> >
> > call MatCreate(PETSC_COMM_WORLD,A,ierr)
> > call MatSetSizes(A, MY_NO_ROWS, MY_NO_ROWS, NGLOBAL, NGLOBAL, ierr)
> > call MatSetFromOptions(A,ierr)
> > call MatGetSize(A,M1,N1,ierr)
> > write(*,*)'Rank [',rank,']: global size of A is ',M1, N1
> > call MatGetLocalSize(A,mm1,nn1,ierr)
> > write(*,*)'Rank [',rank,']: my local size of A is ',mm1, nn1
> > call MatMPIAIJSetPreallocation(A,three, PETSC_NULL_INTEGER,one, &
> > & PETSC_NULL_INTEGER,ierr) !parallel (MPI) allocation
> >
> > !!!!!!!!! Setup B matrix
> > call MatCreate(PETSC_COMM_WORLD,B,ierr)
> > call MatSetSizes(B, MY_NO_ROWS, MY_NO_ROWS, NGLOBAL, NGLOBAL, ierr)
> > call MatSetFromOptions(B,ierr)
> > call MatGetSize(B,M1,N1,ierr)
> > write(*,*)'Rank [',rank,']: global size of B is ',M1, N1
> > call MatGetLocalSize(B,mm1,nn1,ierr)
> > write(*,*)'Rank [',rank,']: my local size of B is ',mm1, nn1
> >
> > call MatMPIAIJSetPreallocation(B,three, PETSC_NULL_INTEGER,one, &
> > & PETSC_NULL_INTEGER,ierr) !parallel (MPI) allocation
> >
> > ! initialise
> > call MatZeroEntries(A,ierr)
> > call MatZeroEntries(B,ierr)
> >
> > ! Fill in values of A, B and assemble matrices
> > ! Both matrices are tridiagonal with
> > ! Aij = cmplx( (i-j)**2, (i+j)**2)
> > !     Bij = cmplx( i*j/(i+j), (i/j)**2 )
> > ! (numbering from 1 )
> >
> > do i = 1, NGLOBAL
> > ! a rather crude way to distribute rows
> > if (i < 6) owner = 0
> > if (i >= 6) owner = 1
> > if (rank /= owner) cycle
> > do j = 1, NGLOBAL
> > if ( abs(i-j) < 2 ) then
> > write(*,*)rank,' : Setting ',i,j
> > di = dble(i) ; dj = dble(j)
> >
> > reala = (di - dj)**2 ; imaga = (di + dj)**2
> > a_entry = dcmplx(reala, imaga)
> > realb = (di*dj)/(di + dj) ; imagb = di**2/dj**2
> > b_entry = dcmplx(realb, imagb)
> >
> > ic = i -1 ; jc = j-1 ! convert to C indexing
> > call MatSetValue(A, ic, jc, a_entry, ADD_VALUES,ierr)
> > call MatSetValue(B, ic, jc, b_entry, ADD_VALUES,ierr)
> > endif
> > enddo
> > enddo
> >
> > call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr)
> > call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr)
> > call MatAssemblyBegin(B,MAT_FINAL_ASSEMBLY,ierr)
> > call MatAssemblyEnd(B,MAT_FINAL_ASSEMBLY,ierr)
> >
> > ! Check matrices
> > write(*,*)'A matrix ... '
> > call MatView(A,PETSC_VIEWER_STDOUT_WORLD,ierr)
> > write(*,*)'B matrix ... '
> > call MatView(B,PETSC_VIEWER_STDOUT_WORLD,ierr)
> >
> > call MatCreateVecs(A,PETSC_NULL_VEC,xr,ierr)
> > call MatCreateVecs(A,PETSC_NULL_VEC,xi,ierr)
> >
> > ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> > ! Create the eigensolver and display info
> > ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> > ! ** Create eigensolver context
> > call EPSCreate(PETSC_COMM_WORLD,eps,ierr)
> >
> > ! ** Set operators for the generalised problem Ax = lambda B x
> > call EPSSetOperators(eps,A, B, ierr)
> > call EPSSetProblemType(eps,EPS_GNHEP,ierr)
> >
> > ! ** Set solver parameters at runtime
> > call EPSSetFromOptions(eps,ierr)
> >
> > ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> > ! Solve the eigensystem
> > ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> >
> > write(*,*)'rank',rank, 'entering solver ...'
> > call EPSSolve(eps,ierr)
> >
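> > ! Hedged editorial sketch (not in the original post): retrieve and print the
> > ! converged eigenvalues reported earlier in this thread, reusing the declared
> > ! variables nconv, kr, ki, xr, xi. In a complex build kr holds the eigenvalue.
> > call EPSGetConverged(eps,nconv,ierr)
> > do i = 0, nconv-1
> >    call EPSGetEigenpair(eps,i,kr,ki,xr,xi,ierr)
> >    if (rank == 0) write(*,*) i, kr
> > enddo
> >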
> > ! ** Free work space
> > call EPSDestroy(eps,ierr)
> > call MatDestroy(A,ierr)
> > call MatDestroy(B,ierr)
> >
> > call VecDestroy(xr,ierr)
> > call VecDestroy(xi,ierr)
> >
> > call SlepcFinalize(ierr)
> > end