<div dir="ltr"><div dir="ltr"><div></div><div>Thanks for the answers. I've checked with PETSc function MatIsSymmetric (mpiaij does not support MatIsHermitian) for the size that was giving the wrong eigenvalue and indeed it is not symmetric, for a tolerance of 1e-14. I also wanted to check for smaller sizes, for which the eigenvalue was correct, but I get a "Bus error, possibly illegal memory access" when the program arrives to that function call. Any idea why it works for one case and not for the other? I'm using PETSc 3.9.3 and SLEPc 3.9.2.<br></div><div><br></div><div>The matrix has to be symmetric at all times, by construction and because the physics problem I'm dealing with implies a symmetric matrix. What it's strange for me is that I do unit testing for the construction for smaller sizes and I obtain real symmetric matrices, as expected. So, maybe when growing in size it gets asymmetric for some reason? Are there any more test I can do to check this?<br></div></div></div><br><div class="gmail_quote"><div dir="ltr">El mar., 13 nov. 2018 a las 14:52, Jose E. Roman (<<a href="mailto:jroman@dsic.upv.es" target="_blank">jroman@dsic.upv.es</a>>) escribió:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">As Matt points out, the probable cause is that your matrix is not exactly Hermitian. In our implementation of Lanczos we assume symmetry, so if ||A-A^*|| is non-negligible then some information is being discarded, which may confuse Lanczos giving some eigenpair as converged when it is not.<br>
<br>
Why doesn't this happen with the basic Lanczos solver? I don't know; probably it is due to the restart scheme. In symmetric Krylov-Schur, restart is done with eigenvectors of the projected problem (thick-restart Lanczos), while non-symmetric Krylov-Schur uses Schur vectors. In basic Lanczos, restart is done by explicitly building a new initial vector, which is harmless in this case.<br>
<br>
Suggestion: either use non-Hermitian Krylov-Schur, or symmetrize your matrix as (A+A^*)/2 (or better, do it when constructing it; note that transposing a parallel matrix with 1024 processes requires a lot of communication).<br>
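The suggested symmetrization can be sketched as follows. This is a dense Python illustration with a real matrix (so A^* = A^T), not the parallel PETSc version, where one would instead combine the matrix with its transpose (e.g. via MatTranspose and MatAXPY); the slightly asymmetric example matrix is hypothetical:

```python
def symmetric_part(A):
    """Return (A + A^T)/2 for a dense list-of-lists matrix."""
    n = len(A)
    return [[0.5 * (A[i][j] + A[j][i]) for j in range(n)] for i in range(n)]

# Hypothetical slightly asymmetric matrix, as might arise from
# accumulated round-off during parallel assembly.
A = [[2.0, 1.0], [1.0 + 1e-9, 2.0]]
S = symmetric_part(A)
print(all(S[i][j] == S[j][i] for i in range(2) for j in range(2)))  # True
```

The result is exactly symmetric by construction, which is why doing this (or building the matrix symmetrically in the first place) removes the ambiguity that confuses the symmetric solver.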
<br>
Jose<br>
<br>
<br>
> On Nov 13, 2018, at 1:56 PM, Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br>
> <br>
> On Tue, Nov 13, 2018 at 7:48 AM Ale Foggia via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br>
> Thanks Jose for the answer. I tried it and it worked! I'm sending the output of -eps_view again, just in case:<br>
> <br>
> EPS Object: 1024 MPI processes<br>
> type: krylovschur<br>
> 50% of basis vectors kept after restart<br>
> using the locking variant<br>
> problem type: non-symmetric eigenvalue problem<br>
> selected portion of the spectrum: smallest real parts<br>
> number of eigenvalues (nev): 1<br>
> number of column vectors (ncv): 16<br>
> maximum dimension of projected problem (mpd): 16<br>
> maximum number of iterations: 291700777<br>
> tolerance: 1e-09<br>
> convergence test: relative to the eigenvalue<br>
> BV Object: 1024 MPI processes<br>
> type: svec<br>
> 17 columns of global length 2333606220<br>
> vector orthogonalization method: classical Gram-Schmidt<br>
> orthogonalization refinement: if needed (eta: 0.7071)<br>
> block orthogonalization method: GS<br>
> doing matmult as a single matrix-matrix product<br>
> DS Object: 1024 MPI processes<br>
> type: nhep<br>
> parallel operation mode: REDUNDANT<br>
> ST Object: 1024 MPI processes<br>
> type: shift<br>
> shift: 0.<br>
> number of matrices: 1<br>
> <br>
> k ||Ax-kx||/||kx||<br>
> ----------------- ------------------<br>
> -15.048025 1.85112e-10<br>
> -15.047159 3.13104e-10<br>
> <br>
> Iterations performed 18<br>
> <br>
> Why does treating it as non-Hermitian help the convergence? Why doesn't this happen with Lanczos? I'm lost :/<br>
> <br>
> Are you sure your matrix is Hermitian? Lanczos will work for non-Hermitian things (I believe).<br>
> <br>
> Thanks,<br>
> <br>
> Matt<br>
> <br>
> <br>
> Ale<br>
> <br>
> On Tue, Nov 13, 2018 at 12:34 PM, Jose E. Roman (<<a href="mailto:jroman@dsic.upv.es" target="_blank">jroman@dsic.upv.es</a>>) wrote:<br>
> This is really strange. We cannot tell what is going on; everything seems fine.<br>
> Could you try solving the problem as non-Hermitian to see what happens? Just run with -eps_non_hermitian. Depending on the result, we can suggest other things to try.<br>
> Jose<br>
> <br>
> <br>
> > On Nov 13, 2018, at 10:58 AM, Ale Foggia via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br>
> > <br>
> > Hello,<br>
> > <br>
> > I'm using SLEPc to get the smallest real eigenvalue (EPS_SMALLEST_REAL) of a Hermitian problem (EPS_HEP). The linear size of the matrices I'm solving is around 10**9, and they are sparse. I've asked a few questions before about the same problem setting, and you suggested I use Krylov-Schur (because I was using Lanczos). I tried KS, and up to a certain matrix size the convergence (relative to the eigenvalue) is good, around 10**-9, like with Lanczos, but when I increase the size I start getting the eigenvalue with only 3 correct digits. I've used the options -eps_tol 1e-9 -eps_mpd 100 (16 was the default), but the only thing I got was one more eigenvalue with the same large error, and only 2 iterations were performed. Why didn't it do more in order to reach convergence? Should I set other parameters? I don't know how to work out this problem; can you help me with it, please? I'm sending the -eps_view output and the eigenvalues with their errors:<br>
> > <br>
> > EPS Object: 2048 MPI processes<br>
> > type: krylovschur<br>
> > 50% of basis vectors kept after restart<br>
> > using the locking variant<br>
> > problem type: symmetric eigenvalue problem<br>
> > selected portion of the spectrum: smallest real parts<br>
> > number of eigenvalues (nev): 1<br>
> > number of column vectors (ncv): 101<br>
> > maximum dimension of projected problem (mpd): 100<br>
> > maximum number of iterations: 46210024<br>
> > tolerance: 1e-09<br>
> > convergence test: relative to the eigenvalue<br>
> > BV Object: 2048 MPI processes<br>
> > type: svec<br>
> > 102 columns of global length 2333606220<br>
> > vector orthogonalization method: classical Gram-Schmidt<br>
> > orthogonalization refinement: if needed (eta: 0.7071)<br>
> > block orthogonalization method: GS<br>
> > doing matmult as a single matrix-matrix product<br>
> > DS Object: 2048 MPI processes<br>
> > type: hep<br>
> > parallel operation mode: REDUNDANT<br>
> > solving the problem with: Implicit QR method (_steqr)<br>
> > ST Object: 2048 MPI processes<br>
> > type: shift<br>
> > shift: 0.<br>
> > number of matrices: 1<br>
> > <br>
> > k ||Ax-kx||/||kx||<br>
> > ----------------- ------------------<br>
> > -15.093051 0.00323917 (with KS)<br>
> > -15.087320 0.00265215 (with KS)<br>
> > -15.048025 8.67204e-09 (with Lanczos)<br>
> > Iterations performed 2<br>
> > <br>
> > Ale<br>
> <br>
> <br>
> <br>
> -- <br>
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
> -- Norbert Wiener<br>
> <br>
> <a href="https://www.cse.buffalo.edu/~knepley/" rel="noreferrer" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br>
<br>
</blockquote></div>