<div dir="ltr">Dear Jose and Matthew,<div><br></div><div>Thank you so much for the effort!<br><br>I still can't get convergence with the interval-region filtering technique for discarding the positive eigenvalues, but shift-and-invert combined with a target eigenvalue works wonders: I get extremely fast convergence.</div><div><br></div><div>The truth of the matter is that we are mainly interested in negative eigenvalues (unstable modes), and from physical considerations they lie more or less in -0.2 &lt; lambda &lt; 0 in the normalized quantities that we use. So we will just use guesses.<br></div><div><br></div><div>Thank you so much again!<br><br>Also, I have finally managed to run the STREAM benchmark (the cluster is quite full at the moment). These are the outputs:<br><br><div>1 processes</div><div>Number of MPI processes 1 Processor names c04b27 </div><div>Triad: 12352.0825 Rate (MB/s) </div><div>2 processes</div><div>Number of MPI processes 2 Processor names c04b27 c04b27 </div><div>Triad: 18968.0226 Rate (MB/s) </div><div>3 processes</div><div>Number of MPI processes 3 Processor names c04b27 c04b27 c04b27 </div><div>Triad: 21106.8580 Rate (MB/s) </div><div>4 processes</div><div>Number of MPI processes 4 Processor names c04b27 c04b27 c04b27 c04b27 </div><div>Triad: 21655.5885 Rate (MB/s) </div><div>5 processes</div><div>Number of MPI processes 5 Processor names c04b27 c04b27 c04b27 c04b27 c04b27 </div><div>Triad: 21627.5559 Rate (MB/s) </div><div>6 processes</div><div>Number of MPI processes 6 Processor names c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 </div><div>Triad: 21394.9620 Rate (MB/s) </div><div>7 processes</div><div>Number of MPI processes 7 Processor names c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 </div><div>Triad: 24952.7076 Rate (MB/s) </div><div>8 processes</div><div>Number of MPI processes 8 Processor names c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 </div><div>Triad: 28357.1062 Rate (MB/s) </div><div>9 processes</div><div>Number of MPI processes 9 
Processor names c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 </div><div>Triad: 31720.4545 Rate (MB/s) </div><div>10 processes</div><div>Number of MPI processes 10 Processor names c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 </div><div>Triad: 35198.7412 Rate (MB/s) </div><div>11 processes</div><div>Number of MPI processes 11 Processor names c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 </div><div>Triad: 38616.0615 Rate (MB/s) </div><div>12 processes</div><div>Number of MPI processes 12 Processor names c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 c04b27 </div><div>Triad: 41939.3994 Rate (MB/s) </div></div><div><br></div><div>I attach a figure.<br><br>Thanks again!</div><br><div class="gmail_quote"><div dir="ltr">On Mon, Apr 3, 2017 at 8:29 PM Jose E. Roman <<a href="mailto:jroman@dsic.upv.es">jroman@dsic.upv.es</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br class="gmail_msg">
> On 1 Apr 2017, at 00:01, Toon Weyens <<a href="mailto:toon.weyens@gmail.com" class="gmail_msg" target="_blank">toon.weyens@gmail.com</a>> wrote:<br class="gmail_msg">
><br class="gmail_msg">
> Dear Jose,<br class="gmail_msg">
><br class="gmail_msg">
> I have saved the matrices in Matlab format and am sending them to you using pCloud. If you want another format, please let me know. Please also note that they are about 1.4 GB each.<br class="gmail_msg">
><br class="gmail_msg">
> I also attach a typical output of eps_view and log_view in output.txt, for 8 processes.<br class="gmail_msg">
><br class="gmail_msg">
> Thanks so much for helping me out! I think PETSc and SLEPc are amazing inventions that have really saved me many months of work!<br class="gmail_msg">
><br class="gmail_msg">
> Regards<br class="gmail_msg">
<br class="gmail_msg">
I played a little bit with your matrices.<br class="gmail_msg">
<br class="gmail_msg">
With Krylov-Schur I can solve the problem quite easily. Note that for generalized eigenvalue problems it is always better to use STSINVERT, because you have to invert a matrix anyway. So instead of setting which=smallest_real, use shift-and-invert with a target close to the wanted eigenvalue. For instance, with target=-0.005 I get convergence in just one iteration:<br class="gmail_msg">
<br class="gmail_msg">
$ ./ex7 -f1 A.bin -f2 B.bin -st_ksp_type preonly -st_pc_type lu -st_pc_factor_mat_solver_package mumps -eps_tol 1e-5 -st_type sinvert -eps_target -0.005<br class="gmail_msg">
<br class="gmail_msg">
Generalized eigenproblem stored in file.<br class="gmail_msg">
<br class="gmail_msg">
Reading COMPLEX matrices from binary files...<br class="gmail_msg">
Number of iterations of the method: 1<br class="gmail_msg">
Number of linear iterations of the method: 16<br class="gmail_msg">
Solution method: krylovschur<br class="gmail_msg">
<br class="gmail_msg">
Number of requested eigenvalues: 1<br class="gmail_msg">
Stopping condition: tol=1e-05, maxit=7500<br class="gmail_msg">
Linear eigensolve converged (1 eigenpair) due to CONVERGED_TOL; iterations 1<br class="gmail_msg">
---------------------- --------------------<br class="gmail_msg">
k ||Ax-kBx||/||kx||<br class="gmail_msg">
---------------------- --------------------<br class="gmail_msg">
-0.004809-0.000000i 8.82085e-05<br class="gmail_msg">
---------------------- --------------------<br class="gmail_msg">
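As an aside for readers without SLEPc at hand: the shift-and-invert mechanism can be sketched on a small, hypothetical generalized problem with SciPy's ARPACK wrapper (this is purely illustrative; it is not the attached matrices and not SLEPc — passing `sigma` to `eigs` applies the same transformation, working with (A - sigma*B)^{-1} B so that eigenvalues nearest the target converge fastest):

```python
# Illustrative sketch (hypothetical matrices): shift-and-invert on a
# small generalized eigenproblem A x = k B x. With sigma set, SciPy's
# eigs works with (A - sigma*B)^{-1} B, so the eigenvalue nearest the
# target converges almost immediately, just as with STSINVERT above.
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import eigs

n = 200
A = diags(np.linspace(-0.2, 1.0, n)).tocsc()  # known spectrum in [-0.2, 1]
B = identity(n, format='csc')                 # B = I to keep the sketch simple

# Target -0.005, as in the run above: ask for one eigenvalue near it.
vals, vecs = eigs(A, k=1, M=B, sigma=-0.005)
print(vals[0])  # the diagonal entry of A closest to -0.005
```

The same idea carries over to the real problem: factorize once (MUMPS in the runs above), then every Krylov iteration amplifies precisely the part of the spectrum near the target.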
<br class="gmail_msg">
<br class="gmail_msg">
Of course, you don't know a priori where your eigenvalue is. Alternatively, you can set the target at 0 and get rid of positive eigenvalues with region filtering. For instance:<br class="gmail_msg">
<br class="gmail_msg">
$ ./ex7 -f1 A.bin -f2 B.bin -st_ksp_type preonly -st_pc_type lu -st_pc_factor_mat_solver_package mumps -eps_tol 1e-5 -st_type sinvert -eps_target 0 -rg_type interval -rg_interval_endpoints -1,0,-.05,.05 -eps_nev 2<br class="gmail_msg">
<br class="gmail_msg">
Generalized eigenproblem stored in file.<br class="gmail_msg">
<br class="gmail_msg">
Reading COMPLEX matrices from binary files...<br class="gmail_msg">
Number of iterations of the method: 8<br class="gmail_msg">
Number of linear iterations of the method: 74<br class="gmail_msg">
Solution method: krylovschur<br class="gmail_msg">
<br class="gmail_msg">
Number of requested eigenvalues: 2<br class="gmail_msg">
Stopping condition: tol=1e-05, maxit=7058<br class="gmail_msg">
Linear eigensolve converged (2 eigenpairs) due to CONVERGED_TOL; iterations 8<br class="gmail_msg">
---------------------- --------------------<br class="gmail_msg">
k ||Ax-kBx||/||kx||<br class="gmail_msg">
---------------------- --------------------<br class="gmail_msg">
-0.000392-0.000000i 2636.4<br class="gmail_msg">
-0.004809+0.000000i 318441<br class="gmail_msg">
---------------------- --------------------<br class="gmail_msg">
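The region-filtering idea (target 0, keep only eigenvalues inside an interval) can be mimicked on a toy problem by computing a few eigenvalues around the target and discarding the ones outside the region. This hypothetical sketch is not SLEPc's RG object, just the same filtering done by hand:

```python
# Illustrative sketch of region filtering: compute a few eigenvalues
# nearest the target 0, then keep only those inside the interval
# [-1, 0], mimicking -rg_interval_endpoints -1,0,-.05,.05 above.
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import eigs

n = 200
A = diags(np.linspace(-0.2, 1.0, n)).tocsc()  # eigenvalues on both sides of 0
B = identity(n, format='csc')

vals, vecs = eigs(A, k=6, M=B, sigma=0.0)     # 6 eigenvalues nearest 0
inside = (vals.real >= -1.0) & (vals.real <= 0.0)
negative = vals[inside]                        # only the negative ones survive
print(negative)
```

In SLEPc the RG object does this filtering inside the solver, so the iteration itself avoids locking eigenpairs outside the region instead of discarding them afterwards.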
<br class="gmail_msg">
In this case, the residuals seem very bad, but this is because your matrices have huge norms. Adding the option -eps_error_backward ::ascii_info_detail shows residuals relative to the matrix norms (backward errors):<br class="gmail_msg">
---------------------- --------------------<br class="gmail_msg">
k eta(x,k)<br class="gmail_msg">
---------------------- --------------------<br class="gmail_msg">
-0.000392-0.000000i 3.78647e-11<br class="gmail_msg">
-0.004809+0.000000i 5.61419e-08<br class="gmail_msg">
---------------------- --------------------<br class="gmail_msg">
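The backward error printed here is eta(x,k) = ||Ax - kBx|| / ((||A|| + |k|·||B||)·||x||). A hypothetical dense example (random matrices, nothing to do with the attached ones) shows how a huge matrix norm makes the plain residual look terrible while the backward error stays near machine precision:

```python
# Hypothetical example: with a huge scaling, the plain residual
# ||Ax-kBx||/||kx|| looks bad, while the backward error
# eta(x,k) = ||Ax-kBx|| / ((||A|| + |k|*||B||) * ||x||)
# stays near machine precision, as in the SLEPc output above.
import numpy as np

rng = np.random.default_rng(1)
n = 50
scale = 1e12                                  # mimic matrices with huge norms
S = rng.standard_normal((n, n))
A = scale * (S + S.T)                         # symmetric, enormous norm
B = scale * np.eye(n)

k_all, X = np.linalg.eig(np.linalg.solve(B, A))
k, x = k_all[0], X[:, 0]                      # one computed eigenpair, ||x|| = 1

r = np.linalg.norm(A @ x - k * (B @ x))
plain = r / (abs(k) * np.linalg.norm(x))      # the default residual column
eta = r / ((np.linalg.norm(A) + abs(k) * np.linalg.norm(B)) * np.linalg.norm(x))
print(plain, eta)                             # plain is large, eta is tiny
```

So a residual like 318441 on a matrix pair with norms around 1e12 is consistent with a perfectly well-converged eigenpair.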
<br class="gmail_msg">
<br class="gmail_msg">
Regarding the GD solver, I am also getting the correct solution. I don't know why you are not getting convergence to the wanted eigenvalue:<br class="gmail_msg">
<br class="gmail_msg">
$ ./ex7 -f1 A.bin -f2 B.bin -st_ksp_type preonly -st_pc_type lu -st_pc_factor_mat_solver_package mumps -eps_tol 1e-5 -eps_smallest_real -eps_ncv 32 -eps_type gd<br class="gmail_msg">
<br class="gmail_msg">
Generalized eigenproblem stored in file.<br class="gmail_msg">
<br class="gmail_msg">
Reading COMPLEX matrices from binary files...<br class="gmail_msg">
Number of iterations of the method: 132<br class="gmail_msg">
Number of linear iterations of the method: 0<br class="gmail_msg">
Solution method: gd<br class="gmail_msg">
<br class="gmail_msg">
Number of requested eigenvalues: 1<br class="gmail_msg">
Stopping condition: tol=1e-05, maxit=120000<br class="gmail_msg">
Linear eigensolve converged (1 eigenpair) due to CONVERGED_TOL; iterations 132<br class="gmail_msg">
---------------------- --------------------<br class="gmail_msg">
k ||Ax-kBx||/||kx||<br class="gmail_msg">
---------------------- --------------------<br class="gmail_msg">
-0.004809+0.000000i 2.16223e-05<br class="gmail_msg">
---------------------- --------------------<br class="gmail_msg">
<br class="gmail_msg">
<br class="gmail_msg">
Again, it is much better to use a target instead of smallest_real:<br class="gmail_msg">
<br class="gmail_msg">
$ ./ex7 -f1 A.bin -f2 B.bin -st_ksp_type preonly -st_pc_type lu -st_pc_factor_mat_solver_package mumps -eps_tol 1e-5 -eps_type gd -eps_target -0.005<br class="gmail_msg">
<br class="gmail_msg">
Generalized eigenproblem stored in file.<br class="gmail_msg">
<br class="gmail_msg">
Reading COMPLEX matrices from binary files...<br class="gmail_msg">
Number of iterations of the method: 23<br class="gmail_msg">
Number of linear iterations of the method: 0<br class="gmail_msg">
Solution method: gd<br class="gmail_msg">
<br class="gmail_msg">
Number of requested eigenvalues: 1<br class="gmail_msg">
Stopping condition: tol=1e-05, maxit=120000<br class="gmail_msg">
Linear eigensolve converged (1 eigenpair) due to CONVERGED_TOL; iterations 23<br class="gmail_msg">
---------------------- --------------------<br class="gmail_msg">
k ||Ax-kBx||/||kx||<br class="gmail_msg">
---------------------- --------------------<br class="gmail_msg">
-0.004809-0.000000i 2.06572e-05<br class="gmail_msg">
---------------------- --------------------<br class="gmail_msg">
<br class="gmail_msg">
<br class="gmail_msg">
Jose<br class="gmail_msg">
<br class="gmail_msg">
<br class="gmail_msg">
</blockquote></div></div>