[petsc-users] [SLEPc] Krylov-Schur convergence
Jose E. Roman
jroman at dsic.upv.es
Tue Nov 13 05:34:47 CST 2018
This is really strange. We cannot say what is going on; everything seems fine.
Could you try solving the problem as non-Hermitian to see what happens? Just run with -eps_non_hermitian. Depending on the result, we can suggest other things to try.
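For concreteness, the suggested test just adds the switch to the options Ale already reported; a hypothetical invocation (the executable name ./solver and the process count are placeholders, not taken from this thread) might look like:

```shell
# Re-run the same problem, forcing the non-Hermitian (EPS_NHEP) path.
# -eps_tol and -eps_mpd are the settings already in use;
# -eps_non_hermitian overrides the EPS_HEP problem type for this test.
mpiexec -n 2048 ./solver \
    -eps_smallest_real \
    -eps_non_hermitian \
    -eps_tol 1e-9 \
    -eps_mpd 100 \
    -eps_view
```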
Jose
> On 13 Nov 2018, at 10:58, Ale Foggia via petsc-users <petsc-users at mcs.anl.gov> wrote:
>
> Hello,
>
> I'm using SLEPc to compute the smallest real eigenvalue (EPS_SMALLEST_REAL) of a Hermitian problem (EPS_HEP). The matrices are sparse, with a linear size of around 10**9. I've asked a few questions before about the same problem setting, and you suggested that I use Krylov-Schur instead of the Lanczos solver I was using. I tried KS, and up to a certain matrix size the convergence (relative to the eigenvalue) is good, around 10**-9, as with Lanczos; but when I increase the size I get the eigenvalue with only 3 correct digits. I've used the options -eps_tol 1e-9 -eps_mpd 100 (16 was the default), but the only thing I got was one more eigenvalue with the same large error, and only 2 iterations were performed. Why didn't the solver iterate more in order to reach convergence? Should I set other parameters? I don't know how to work around this problem; can you help me with it, please? I'm sending the -eps_view output and the eigenvalues with their errors:
>
> EPS Object: 2048 MPI processes
> type: krylovschur
> 50% of basis vectors kept after restart
> using the locking variant
> problem type: symmetric eigenvalue problem
> selected portion of the spectrum: smallest real parts
> number of eigenvalues (nev): 1
> number of column vectors (ncv): 101
> maximum dimension of projected problem (mpd): 100
> maximum number of iterations: 46210024
> tolerance: 1e-09
> convergence test: relative to the eigenvalue
> BV Object: 2048 MPI processes
> type: svec
> 102 columns of global length 2333606220
> vector orthogonalization method: classical Gram-Schmidt
> orthogonalization refinement: if needed (eta: 0.7071)
> block orthogonalization method: GS
> doing matmult as a single matrix-matrix product
> DS Object: 2048 MPI processes
> type: hep
> parallel operation mode: REDUNDANT
> solving the problem with: Implicit QR method (_steqr)
> ST Object: 2048 MPI processes
> type: shift
> shift: 0.
> number of matrices: 1
>
> k ||Ax-kx||/||kx||
> ----------------- ------------------
> -15.093051 0.00323917 (with KS)
> -15.087320 0.00265215 (with KS)
> -15.048025 8.67204e-09 (with Lanczos)
> Iterations performed 2
>
> Ale