[petsc-users] Solving tridiagonal hermitian generalized eigenvalue problem with SLEPC

Toon Weyens tweyens at fis.uc3m.es
Tue Aug 19 06:05:01 CDT 2014


Thanks for the answers!

First of all, I indeed didn't specify GHEP, because if I do, it gives me
the message "IPNorm: The inner product is not well defined!" (see below).
If I do the same calculation with LAPACK, it does work. And if I compare
the eigenvalues of the GHEP LAPACK simulation and the NHEP (standard)
Krylov-Schur simulation, there is a relative difference between the two of
about 1E-10 (while the solver reports a relative error of about 1E-14,
anyway).

So, when I try to do the same thing as in ex. 13 and set the spectral
transformation to STSINVERT, it takes the error away for Krylov-Schur, but
the solver does not find a solution. For Lanczos and for Arnoldi with
explicit restart, the error remains. The problem is that I don't really
have any idea about what I'm doing!
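
For reference, this is roughly what I mean by setting the spectral
transformation to shift-and-invert (a sketch only; eps is the solver
context from further below, and sigma is a placeholder shift, not a value
from my actual code):

   call EPSGetST(eps,st,ierr)              ! get the spectral transformation context
   call STSetType(st,STSINVERT,ierr)       ! shift-and-invert
   call EPSSetTarget(eps,sigma,ierr)       ! look for eigenvalues close to sigma
   call EPSSetWhichEigenpairs(eps,EPS_TARGET_MAGNITUDE,ierr)

or, equivalently, on the command line: -st_type sinvert -eps_target <sigma>.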

Anyway, here is what I need to do: my matrices A and B correspond to the
potential energy and the kinetic energy of a plasma, respectively. It is
very similar to the calculation of energy levels of quantum systems: you
perturb the energy of the system while holding the kinetic energy of the
perturbation constant. Minimizing this leads to the discrete (and
continuous) energy levels, in this case of the plasma system. If the
eigenvalues are positive, this means that there is an exploding solution
(the eigenvalues are the squares of the frequencies appearing in the
exponential time evolution: e^(i sqrt(lambda) t)).

So, abstracting away the boundary conditions, and using spectral
transforms in 2 dimensions and finite differences in the third, I get two
matrices A and B, both Hermitian.

I proceed as follows (in Fortran):

matrices:
- call MatCreateAIJ, explicitly giving the sizes for each processor
- call MatGetOwnershipRange to find the ownership and determine which
blocks to insert
- call MatSetValues with INSERT_VALUES to put these blocks in the
matrices
- call MatAssemblyBegin and End to assemble them (a rough sketch of these
steps follows below)
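
A minimal sketch of these assembly steps (n_loc, n_glob, nnz_d, nnz_o and
the index/value arrays are placeholders, not my actual code; the usual
PETSc Fortran includes and declarations are omitted):

   call MatCreateAIJ(PETSC_COMM_WORLD,n_loc,n_loc,n_glob,n_glob, &
      nnz_d,PETSC_NULL_INTEGER,nnz_o,PETSC_NULL_INTEGER,A,ierr)
   call MatGetOwnershipRange(A,r_start,r_end,ierr)
   ! for each locally owned block row, fill idx_r, idx_c and vals, then:
   call MatSetValues(A,n_r,idx_r,n_c,idx_c,vals,INSERT_VALUES,ierr)
   call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr)
   call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr)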

EPS:
- call EPSCreate
- call EPSSetOperators
- call EPSSetProblemType with GHEP -> gives the error about the IPNorm, so
I leave it out
- call EPSSetWhichEigenpairs with EPS_LARGEST_REAL
- call EPSSetFromOptions (why do I need to do this?)
- call EPSSolve (again, a rough sketch of these steps follows below)
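
A minimal sketch of these EPS steps (A and B as assembled above; not
verbatim from my code):

   call EPSCreate(PETSC_COMM_WORLD,eps,ierr)
   call EPSSetOperators(eps,A,B,ierr)
   call EPSSetProblemType(eps,EPS_GHEP,ierr)   ! this is the call that triggers the IPNorm error
   call EPSSetWhichEigenpairs(eps,EPS_LARGEST_REAL,ierr)
   call EPSSetFromOptions(eps,ierr)   ! lets command-line -eps_... options override these settings
   call EPSSolve(eps,ierr)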

That's it...

Thank you again in advance!
Toon



On 18 August 2014 22:20, Hong <hzhang at mcs.anl.gov> wrote:

> Toon:
> >
> > I think I expressed myself wrong: I can indeed get it to work with just
> > using AIJ matrices, as in example 13. This is the way that I am currently
> > solving my problem. There are only two issues:
> > 1. memory is indeed important, so I would certainly like to decrease it
> > by one third if possible :-) The goal is to make the simulations as fast
> > and light as possible, to be able to perform parameter studies (on the
> > stability of MHD configurations).
>
> We need to find out where the memory is taken. It seems you are using the
> slepc defaults (krylovschur/sinvert/preonly/lu); the LU factorization
> might consume most of the memory. Run your code with '-eps_view' and find
> out what solver you are actually using.
> >
> > 2. I have played around a little bit with the different solvers, but it
> > appears that the standard method and the Arnoldi with explicit restart
> > method are the best. Some of the others don't converge and some are
> > slower.
>
> Your problem is symmetric, so you should use:
> problem type: generalized symmetric eigenvalue problem
> It seems you are using a non-symmetric type. Check it with '-eps_view'.
> >
> > The thing is that, in the end, the matrices that I use are large but
> > they have a very easy structure: Hermitian tridiagonal. That's why, I
> > think, slepc usually converges in a few iterations (correct me if I'm
> > wrong).
>
> The number of iterations does not depend on the data structure of the
> matrices, but on the location of the eigenvalues.
> >
> > The problem is that sometimes, when I consider more grid points, the
> > solver doesn't work any more, because apparently it uses the LU
> > decomposition (not sure whether of the matrix A or B in A x = lambda B
> > x) and there is a zero pivot element (see below for the error message).
> > In other words: the matrices become almost singular. This is a
> > characteristic of the numerical method, I think. Is there any way to
> > fix this by setting a more precise threshold or something?
>
> You need to carefully select the target of the shift.
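> For example, something like the following (the value 1.0 is only a
> placeholder; the target should be close to the eigenvalues you want):
>   -st_type sinvert -eps_target 1.0 -eps_target_magnitude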
> >
> > Also, there is one more thing: is it possible and useful to use the
> > non-zero structure of A when defining matrix B? Does this affect
> > performance?
>
> Yes, using the same data structure for A and B would make MatAXPY
> faster, but it may not affect performance much if you only use a single
> shift, e.g., as in ex13.c.
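> For example (a sketch, not verbatim from ex13.c):
>   call MatDuplicate(A,MAT_DO_NOT_COPY_VALUES,B,ierr)
> gives B the same nonzero structure as A; you then only insert the values.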
>
> Hong
>
> >
> > Error messages: identical for KRYLOV-SCHUR and ARNOLDI
> >
> > [0]PETSC ERROR: --------------------- Error Message
> > ------------------------------------
> > [0]PETSC ERROR: Detected zero pivot in LU factorization:
> > see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot!
> > [0]PETSC ERROR: Zero pivot row 19 value 6.57877e-16 tolerance
> > 2.22045e-14!
> > [0]PETSC ERROR:
> > ------------------------------------------------------------------------
> > [0]PETSC ERROR: Petsc Release Version 3.4.5, Jun, 29, 2014
> > [0]PETSC ERROR: See docs/changes/index.html for recent updates.
> > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> > [0]PETSC ERROR: See docs/index.html for manual pages.
> > [0]PETSC ERROR:
> > ------------------------------------------------------------------------
> > [0]PETSC ERROR: ./PB3D on a debug-complex named toon-XPS-L501X by toon
> > Mon Aug 18 21:47:41 2014
> > [0]PETSC ERROR: Libraries linked from
> > /opt/petsc/petsc-3.4.5/debug-complex/lib
> > [0]PETSC ERROR: Configure run at Mon Aug 11 10:08:29 2014
> > [0]PETSC ERROR: Configure options PETSC_ARCH=debug-complex
> > --with-scalar-type=complex --with-debugging
> > [0]PETSC ERROR:
> > ------------------------------------------------------------------------
> > [0]PETSC ERROR: MatPivotCheck_none() line 589 in
> > /opt/petsc/petsc-3.4.5/include/petsc-private/matimpl.h
> > [0]PETSC ERROR: MatPivotCheck() line 608 in
> > /opt/petsc/petsc-3.4.5/include/petsc-private/matimpl.h
> > [0]PETSC ERROR: MatLUFactorNumeric_SeqAIJ_Inode() line 1837 in
> > /opt/petsc/petsc-3.4.5/src/mat/impls/aij/seq/inode.c
> > [0]PETSC ERROR: MatLUFactorNumeric() line 2889 in
> > /opt/petsc/petsc-3.4.5/src/mat/interface/matrix.c
> > [0]PETSC ERROR: PCSetUp_LU() line 152 in
> > /opt/petsc/petsc-3.4.5/src/ksp/pc/impls/factor/lu/lu.c
> > [0]PETSC ERROR: PCSetUp() line 890 in
> > /opt/petsc/petsc-3.4.5/src/ksp/pc/interface/precon.c
> > [0]PETSC ERROR: KSPSetUp() line 278 in
> > /opt/petsc/petsc-3.4.5/src/ksp/ksp/interface/itfunc.c
> > [0]PETSC ERROR: PCSetUp_Redundant() line 170 in
> > /opt/petsc/petsc-3.4.5/src/ksp/pc/impls/redundant/redundant.c
> > [0]PETSC ERROR: PCSetUp() line 890 in
> > /opt/petsc/petsc-3.4.5/src/ksp/pc/interface/precon.c
> > [0]PETSC ERROR: KSPSetUp() line 278 in
> > /opt/petsc/petsc-3.4.5/src/ksp/ksp/interface/itfunc.c
> > [0]PETSC ERROR: STSetUp_Shift() line 126 in
> > /opt/slepc/slepc-3.4.4/src/st/impls/shift/shift.c
> > [0]PETSC ERROR: STSetUp() line 294 in
> > /opt/slepc/slepc-3.4.4/src/st/interface/stsolve.c
> > [0]PETSC ERROR: EPSSetUp() line 215 in
> > /opt/slepc/slepc-3.4.4/src/eps/interface/setup.c
> > [0]PETSC ERROR: EPSSolve() line 90 in
> > /opt/slepc/slepc-3.4.4/src/eps/interface/solve.c
> >
> >
> >
> > On 18 August 2014 20:52, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >>
> >>
> >>    I would recommend just using AIJ matrices and not worrying about
> >> SBAIJ at this stage. SBAIJ can save you memory but not operations, and
> >> unless memory is really tight it is not likely a useful optimization.
> >>
> >>    What eigensolvers have you tried and which eigenvalues do you want?
> >>
> >>    Barry
> >>
> >> On Aug 18, 2014, at 5:46 AM, Toon Weyens <tweyens at fis.uc3m.es> wrote:
> >>
> >> > Dear all,
> >> >
> >> > I am using PETSC and SLEPC to simulate a problem in MHD, described in
> >> >
> >> > http://scitation.aip.org/content/aip/journal/pop/21/4/10.1063/1.4871859.
> >> >
> >> > I have a bit of experience with MPI but not too much with PETSC and
> >> > SLEPC. So after reading both user manuals and also the relevant
> >> > chapters of the PETSC developers manual, I still can't get it to work.
> >> >
> >> > The problem that I have to solve is a large generalized eigenvalue
> >> > system where the matrices are both Hermitian by blocks and
> >> > tridiagonal, e.g.:
> >> >
> >> > ( A11  A12   0    0    0  )            ( B11  B12   0    0    0  )
> >> > ( A12* A22  A23   0    0  )            ( B12* B22  B23   0    0  )
> >> > (  0   A23* A33  A34   0  ) = lambda   (  0   B23* B33  B34   0  )
> >> > (  0    0   A34* A44  A45 )            (  0    0   B34* B44  B45 )
> >> > (  0    0    0   A45* A55 )            (  0    0    0   B45* B55 )
> >> >
> >> > where Aii = Aii*, with * denoting the Hermitian conjugate. I apologize
> >> > for the ugly representation.
> >> >
> >> > The dimensions of both A and B are around 50 to 100 blocks (as there
> >> > is a block per discretized point) and the blocks themselves can vary
> >> > from 1 to more than 100x100 as well (as they correspond to a spectral
> >> > decomposition).
> >> >
> >> > Now, my question is: how to solve this economically?
> >> >
> >> > What I have been trying to do is make use of the fact that the
> >> > matrices are Hermitian, by using MatCreateSBAIJ and also through the
> >> > recommended MatCreate, MatSetType(MATSBAIJ), etc.
> >> >
> >> > Could someone help me out? All help would be greatly appreciated!
> >> >
> >> > Thank you in advance,
> >> > Toon
> >> > UC3M
> >>
> >
>