[petsc-users] [SLEPc] Krylov-Schur convergence

Ale Foggia amfoggia at gmail.com
Tue Nov 13 10:14:02 CST 2018


Thanks for the answers. I've checked with the PETSc function MatIsSymmetric
(MPIAIJ does not support MatIsHermitian) for the size that was giving the
wrong eigenvalue, and indeed the matrix is not symmetric for a tolerance of
1e-14. I also wanted to check the smaller sizes, for which the eigenvalue
was correct, but I get a "Bus error, possibly illegal memory access" when
the program reaches that function call. Any idea why it works in one case
and not in the other? I'm using PETSc 3.9.3 and SLEPc 3.9.2.
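
A minimal sketch of the kind of check I'm doing (variable names are
hypothetical, not my exact code; A is the assembled MPIAIJ matrix):

  PetscErrorCode ierr;
  PetscBool      symm;
  ierr = MatIsSymmetric(A, 1e-14, &symm);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "symmetric: %s\n",
                     symm ? "yes" : "no");CHKERRQ(ierr);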

The matrix has to be symmetric at all times, both by construction and
because the physics problem I'm dealing with implies a symmetric matrix.
What is strange to me is that my unit tests of the construction at smaller
sizes produce real symmetric matrices, as expected. So maybe the matrix
becomes asymmetric for some reason as the size grows? Are there any more
tests I can do to check this?
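
One more test I can think of is computing ||A - A^T|| directly (a sketch,
assuming a real-valued assembled matrix A; as Jose notes below, the
transpose is expensive on many processes, so this is only a one-off
diagnostic):

  Mat       At;
  PetscReal nrm;
  ierr = MatTranspose(A, MAT_INITIAL_MATRIX, &At);CHKERRQ(ierr);         /* At = A^T     */
  ierr = MatAXPY(At, -1.0, A, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);  /* At = A^T - A */
  ierr = MatNorm(At, NORM_FROBENIUS, &nrm);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "||A - A^T||_F = %g\n", (double)nrm);CHKERRQ(ierr);
  ierr = MatDestroy(&At);CHKERRQ(ierr);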

On Tue, Nov 13, 2018 at 14:52, Jose E. Roman (<jroman at dsic.upv.es>)
wrote:

> As Matt points out, the probable cause is that your matrix is not exactly
> Hermitian. In our implementation of Lanczos we assume symmetry, so if
> ||A-A^*|| is non-negligible then some information is being discarded, which
> may confuse Lanczos into reporting some eigenpair as converged when it is
> not.
>
> Why doesn't this happen with the basic Lanczos solver? I don't know;
> probably it is due to the restart scheme. In symmetric Krylov-Schur,
> restart is done with eigenvectors of the projected problem (thick-restart
> Lanczos), while non-symmetric Krylov-Schur uses Schur vectors. In the basic
> Lanczos solver, restart is done by building a new initial vector
> explicitly, which is harmless in this case.
>
> Suggestion: either use non-Hermitian Krylov-Schur, or symmetrize your
> matrix as (A+A^*)/2 (or, better, symmetrize it when constructing it; note
> that transposing a parallel matrix across 1024 processes requires a lot of
> communication).
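>
> In PETSc, that symmetrization can be sketched as follows (real case; for
> complex matrices one would use MatHermitianTranspose instead):
>
>   Mat At;
>   ierr = MatTranspose(A, MAT_INITIAL_MATRIX, &At);CHKERRQ(ierr);
>   ierr = MatAXPY(A, 1.0, At, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr); /* A = A + A^T   */
>   ierr = MatScale(A, 0.5);CHKERRQ(ierr);                               /* A = (A+A^T)/2 */
>   ierr = MatDestroy(&At);CHKERRQ(ierr);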
>
> Jose
>
>
> > On Nov 13, 2018, at 13:56, Matthew Knepley <knepley at gmail.com>
> wrote:
> >
> > On Tue, Nov 13, 2018 at 7:48 AM Ale Foggia via petsc-users <
> petsc-users at mcs.anl.gov> wrote:
> > Thanks Jose for the answer. I tried it and it worked! I send the output
> of -eps_view again, just in case:
> >
> > EPS Object: 1024 MPI processes
> >   type: krylovschur
> >     50% of basis vectors kept after restart
> >     using the locking variant
> >   problem type: non-symmetric eigenvalue problem
> >   selected portion of the spectrum: smallest real parts
> >   number of eigenvalues (nev): 1
> >   number of column vectors (ncv): 16
> >   maximum dimension of projected problem (mpd): 16
> >   maximum number of iterations: 291700777
> >   tolerance: 1e-09
> >   convergence test: relative to the eigenvalue
> > BV Object: 1024 MPI processes
> >   type: svec
> >   17 columns of global length 2333606220
> >   vector orthogonalization method: classical Gram-Schmidt
> >   orthogonalization refinement: if needed (eta: 0.7071)
> >   block orthogonalization method: GS
> >   doing matmult as a single matrix-matrix product
> > DS Object: 1024 MPI processes
> >   type: nhep
> >   parallel operation mode: REDUNDANT
> > ST Object: 1024 MPI processes
> >   type: shift
> >   shift: 0.
> >   number of matrices: 1
> >
> >               k               ||Ax-kx||/||kx||
> >    -----------------       ------------------
> >      -15.048025        1.85112e-10
> >      -15.047159        3.13104e-10
> >
> > Iterations performed 18
> >
> > Why does treating it as non-Hermitian help the convergence? Why doesn't
> this happen with Lanczos? I'm lost :/
> >
> > Are you sure your matrix is Hermitian? Lanczos will work for
> non-Hermitian things (I believe).
> >
> >   Thanks,
> >
> >      Matt
> >
> >
> > Ale
> >
> > On Tue, Nov 13, 2018 at 12:34, Jose E. Roman (<jroman at dsic.upv.es>)
> wrote:
> > This is really strange. We cannot say what is going on; everything seems
> fine.
> > Could you try solving the problem as non-Hermitian to see what happens?
> Just run with -eps_non_hermitian. Depending on the result, we can suggest
> other things to try.
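> >
> > (The programmatic equivalent of that option would be
> EPSSetProblemType(eps,EPS_NHEP) before EPSSolve.)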
> > Jose
> >
> >
> > > On Nov 13, 2018, at 10:58, Ale Foggia via petsc-users <
> petsc-users at mcs.anl.gov> wrote:
> > >
> > > Hello,
> > >
> > > I'm using SLEPc to get the smallest real eigenvalue
> (EPS_SMALLEST_REAL) of a Hermitian problem (EPS_HEP). The linear size of
> the matrices I'm solving is around 10**9, and they are sparse. I've asked
> a few questions before about the same problem setting, and you suggested I
> use Krylov-Schur (I was using Lanczos). I tried KS and, up to a certain
> matrix size, the convergence (relative to the eigenvalue) is good, around
> 10**-9, like with Lanczos; but when I increase the size I start getting
> the eigenvalue with only 3 correct digits. I've used the options -eps_tol
> 1e-9 -eps_mpd 100 (16 was the default), but the only thing I got was one
> more eigenvalue with the same big error, and only 2 iterations were
> performed. Why didn't it iterate more to reach convergence? Should I set
> other parameters? I don't know how to work out this problem; can you help
> me with it, please? I send the -eps_view output and the eigenvalues with
> their errors:
> > >
> > > EPS Object: 2048 MPI processes
> > >   type: krylovschur
> > >     50% of basis vectors kept after restart
> > >     using the locking variant
> > >   problem type: symmetric eigenvalue problem
> > >   selected portion of the spectrum: smallest real parts
> > >   number of eigenvalues (nev): 1
> > >   number of column vectors (ncv): 101
> > >   maximum dimension of projected problem (mpd): 100
> > >   maximum number of iterations: 46210024
> > >   tolerance: 1e-09
> > >   convergence test: relative to the eigenvalue
> > > BV Object: 2048 MPI processes
> > >   type: svec
> > >   102 columns of global length 2333606220
> > >   vector orthogonalization method: classical Gram-Schmidt
> > >   orthogonalization refinement: if needed (eta: 0.7071)
> > >   block orthogonalization method: GS
> > >   doing matmult as a single matrix-matrix product
> > > DS Object: 2048 MPI processes
> > >   type: hep
> > >   parallel operation mode: REDUNDANT
> > >   solving the problem with: Implicit QR method (_steqr)
> > > ST Object: 2048 MPI processes
> > >   type: shift
> > >   shift: 0.
> > >   number of matrices: 1
> > >
> > >               k                ||Ax-kx||/||kx||
> > >    -----------------        ------------------
> > >      -15.093051         0.00323917 (with KS)
> > >      -15.087320         0.00265215 (with KS)
> > >      -15.048025         8.67204e-09 (with Lanczos)
> > > Iterations performed 2
> > >
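> > > For completeness, a minimal sketch of the solver setup described above
> (variable names are hypothetical; error checking abbreviated):
> > >
> > >   EPS eps;
> > >   ierr = EPSCreate(PETSC_COMM_WORLD, &eps);CHKERRQ(ierr);
> > >   ierr = EPSSetOperators(eps, A, NULL);CHKERRQ(ierr);
> > >   ierr = EPSSetProblemType(eps, EPS_HEP);CHKERRQ(ierr);
> > >   ierr = EPSSetWhichEigenpairs(eps, EPS_SMALLEST_REAL);CHKERRQ(ierr);
> > >   ierr = EPSSetTolerances(eps, 1e-9, PETSC_DEFAULT);CHKERRQ(ierr);
> > >   ierr = EPSSetFromOptions(eps);CHKERRQ(ierr); /* picks up -eps_mpd etc. */
> > >   ierr = EPSSolve(eps);CHKERRQ(ierr);
> > >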
> > > Ale
> >
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> > -- Norbert Wiener
> >
> > https://www.cse.buffalo.edu/~knepley/
>
>