<div dir="ltr">Dear Jose,<div><br></div><div>Thanks for the answer. I am looking for the smallest real, indeed. </div><div><br></div><div>I have, just now, accidentally figured out that I can get correct convergence by increasing NCV to higher values, so that's covered! I thought I had checked this before, but apparently not. It's converging well now, and rather fast (still about 8 times faster than Krylov-Schur).<br><br></div><div>The issue now is that it scales rather badly: If I use 2 or more MPI processes, the time required to solve it goes up drastically. A small test case, on my Ubuntu 16.04 laptop, takes 10 seconds (blazing fast) for 1 MPI process, 25 for 2, 33 for 3, 59 for 4, etc... It is a machine with 8 cores, so i don't really understand why this is.<br><br>Are there other methods that can actually maintain the time required to solve for multiple MPI process? Or, preferable, decrease it (why else would I use multiple processes if not for memory restrictions)?<br><br>I will never have to do something bigger than a generalized non-Hermitian ev problem of, let's say, 5000 blocks of 200x200 complex values per block, and a band size of about 11 blocks wide (so a few GB per matrix max).</div><div><br></div><div>Thanks so much!</div></div><br><div class="gmail_quote"><div dir="ltr">On Wed, Mar 29, 2017 at 9:54 AM Jose E. Roman <<a href="mailto:jroman@dsic.upv.es">jroman@dsic.upv.es</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br class="gmail_msg">
> > On 29 Mar 2017, at 9:08, Toon Weyens <toon.weyens@gmail.com> wrote:
> >
> > I started looking for alternatives to the standard Krylov-Schur method for solving the generalized eigenvalue problem Ax = kBx in my code. These matrices have a block-band structure (typically 5, 7 or 9 blocks wide, with block sizes of the order of 20) and a size of typically 1000 blocks. This eigenvalue problem results from minimizing the energy of a perturbed plasma-vacuum system in order to investigate its stability. So far, I have not taken advantage of the Hermiticity of the problem.
> >
> > For "easier" problems, the Generalized Davidson method in particular converges like lightning, sometimes up to 100 times faster than Krylov-Schur.
> >
> > However, for slightly more complicated problems, GD converges to the wrong eigenpair: there is certainly an eigenpair with an eigenvalue lower than 0 (i.e. unstable), but the solver never gets below some small positive value, to which it wrongly converges.
>
> I would need to know the settings you are using. Are you doing smallest_real? Maybe you can try target_magnitude with harmonic extraction.
>
> >
> > Is it possible to improve this behavior? I tried changing the preconditioner, but it did not work.
> >
> > Might it be possible to use Krylov-Schur until reaching some precision, and then switch to JD to converge quickly?
>
> Yes, you can do this, using EPSSetInitialSpace() in the second solve. But, depending on the settings, this may not buy you much.
>
> Jose
>
> >
> > Thanks!
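The solver settings discussed above (GD, smallest real eigenvalue, enlarged NCV) could be reproduced in a small stand-alone SLEPc program along the following lines. This is only a sketch: the actual block-banded plasma-vacuum matrices are not available here, so a 1-D Laplacian stands in for A and a diagonal matrix for B, and ncv=64 is an illustrative value, not the one actually used.

/* gd_smallest_real.c - sketch: Generalized Davidson with enlarged ncv for the
   smallest real eigenvalue of a generalized problem A x = k B x.
   A and B below are simple placeholders, not the real block-banded matrices. */
#include <slepceps.h>

int main(int argc,char **argv)
{
  Mat            A,B;
  EPS            eps;
  PetscInt       n=400,i,Istart,Iend;
  PetscErrorCode ierr;

  ierr = SlepcInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;

  /* Placeholder operators: A = 1-D Laplacian, B = identity-like diagonal */
  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);
  ierr = MatCreate(PETSC_COMM_WORLD,&B);CHKERRQ(ierr);
  ierr = MatSetSizes(B,PETSC_DECIDE,PETSC_DECIDE,n,n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(B);CHKERRQ(ierr);
  ierr = MatSetUp(B);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A,&Istart,&Iend);CHKERRQ(ierr);
  for (i=Istart;i<Iend;i++) {
    if (i>0)   { ierr = MatSetValue(A,i,i-1,-1.0,INSERT_VALUES);CHKERRQ(ierr); }
    if (i<n-1) { ierr = MatSetValue(A,i,i+1,-1.0,INSERT_VALUES);CHKERRQ(ierr); }
    ierr = MatSetValue(A,i,i,2.0,INSERT_VALUES);CHKERRQ(ierr);
    ierr = MatSetValue(B,i,i,1.0,INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(B,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(B,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = EPSCreate(PETSC_COMM_WORLD,&eps);CHKERRQ(ierr);
  ierr = EPSSetOperators(eps,A,B);CHKERRQ(ierr);
  ierr = EPSSetProblemType(eps,EPS_GNHEP);CHKERRQ(ierr);          /* generalized non-Hermitian */
  ierr = EPSSetType(eps,EPSGD);CHKERRQ(ierr);                     /* Generalized Davidson */
  ierr = EPSSetWhichEigenpairs(eps,EPS_SMALLEST_REAL);CHKERRQ(ierr);
  ierr = EPSSetDimensions(eps,1,64,PETSC_DEFAULT);CHKERRQ(ierr);  /* nev=1, enlarged ncv=64 (illustrative) */
  ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);                    /* -eps_ncv etc. can still override */
  ierr = EPSSolve(eps);CHKERRQ(ierr);

  ierr = EPSDestroy(&eps);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = MatDestroy(&B);CHKERRQ(ierr);
  ierr = SlepcFinalize();
  return ierr;
}

Because of the EPSSetFromOptions() call, the same choices can normally also be made at run time with options such as -eps_type gd -eps_smallest_real -eps_ncv 64, and the parallel timings mentioned above correspond to launching the same binary with different process counts, e.g. mpiexec -n 4 ./gd_smallest_real.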
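The target_magnitude plus harmonic-extraction suggestion in the quoted message could look roughly as follows. The function name, the target value 0.0 (the marginal-stability point) and the choice of GD as the solver are assumptions for illustration only.

/* Sketch: select eigenvalues closest to a target by magnitude, with harmonic extraction. */
#include <slepceps.h>

PetscErrorCode solve_with_harmonic_extraction(Mat A,Mat B,PetscScalar *lambda)
{
  EPS            eps;
  PetscScalar    kr,ki;
  PetscInt       nconv;
  PetscErrorCode ierr;

  *lambda = 0.0;
  ierr = EPSCreate(PETSC_COMM_WORLD,&eps);CHKERRQ(ierr);
  ierr = EPSSetOperators(eps,A,B);CHKERRQ(ierr);
  ierr = EPSSetProblemType(eps,EPS_GNHEP);CHKERRQ(ierr);
  ierr = EPSSetType(eps,EPSGD);CHKERRQ(ierr);
  ierr = EPSSetTarget(eps,0.0);CHKERRQ(ierr);                  /* look near k = 0 (assumed target) */
  ierr = EPSSetWhichEigenpairs(eps,EPS_TARGET_MAGNITUDE);CHKERRQ(ierr);
  ierr = EPSSetExtraction(eps,EPS_HARMONIC);CHKERRQ(ierr);     /* harmonic Ritz extraction */
  ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);
  ierr = EPSSolve(eps);CHKERRQ(ierr);

  ierr = EPSGetConverged(eps,&nconv);CHKERRQ(ierr);
  if (nconv>0) {
    ierr = EPSGetEigenpair(eps,0,&kr,&ki,NULL,NULL);CHKERRQ(ierr);
    *lambda = kr;                                              /* first converged eigenvalue */
  }
  ierr = EPSDestroy(&eps);CHKERRQ(ierr);
  return 0;
}

At run time the equivalent options would be along the lines of -eps_target 0 -eps_target_magnitude -eps_harmonic.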
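The two-stage idea (Krylov-Schur to a loose tolerance, then a Davidson-type solve seeded through EPSSetInitialSpace()) might be sketched like this. The tolerances, the use of JD for the second stage, and the function name are again illustrative assumptions.

/* Sketch: loose Krylov-Schur solve feeding an approximate eigenvector into a JD solve. */
#include <slepceps.h>

PetscErrorCode two_stage_solve(Mat A,Mat B)
{
  EPS            eps1,eps2;
  Vec            v0;
  PetscInt       nconv=0;
  PetscErrorCode ierr;

  ierr = MatCreateVecs(A,&v0,NULL);CHKERRQ(ierr);

  /* Stage 1: Krylov-Schur with a loose tolerance, just to get a rough eigenvector */
  ierr = EPSCreate(PETSC_COMM_WORLD,&eps1);CHKERRQ(ierr);
  ierr = EPSSetOperators(eps1,A,B);CHKERRQ(ierr);
  ierr = EPSSetProblemType(eps1,EPS_GNHEP);CHKERRQ(ierr);
  ierr = EPSSetType(eps1,EPSKRYLOVSCHUR);CHKERRQ(ierr);
  ierr = EPSSetWhichEigenpairs(eps1,EPS_SMALLEST_REAL);CHKERRQ(ierr);
  ierr = EPSSetTolerances(eps1,1e-3,PETSC_DEFAULT);CHKERRQ(ierr);   /* loose (assumed value) */
  ierr = EPSSolve(eps1);CHKERRQ(ierr);
  ierr = EPSGetConverged(eps1,&nconv);CHKERRQ(ierr);
  if (nconv>0) { ierr = EPSGetEigenvector(eps1,0,v0,NULL);CHKERRQ(ierr); }

  /* Stage 2: Jacobi-Davidson, seeded with the rough eigenvector */
  ierr = EPSCreate(PETSC_COMM_WORLD,&eps2);CHKERRQ(ierr);
  ierr = EPSSetOperators(eps2,A,B);CHKERRQ(ierr);
  ierr = EPSSetProblemType(eps2,EPS_GNHEP);CHKERRQ(ierr);
  ierr = EPSSetType(eps2,EPSJD);CHKERRQ(ierr);
  ierr = EPSSetWhichEigenpairs(eps2,EPS_SMALLEST_REAL);CHKERRQ(ierr);
  ierr = EPSSetTolerances(eps2,1e-8,PETSC_DEFAULT);CHKERRQ(ierr);   /* tight (assumed value) */
  if (nconv>0) { ierr = EPSSetInitialSpace(eps2,1,&v0);CHKERRQ(ierr); }
  ierr = EPSSolve(eps2);CHKERRQ(ierr);

  ierr = EPSDestroy(&eps1);CHKERRQ(ierr);
  ierr = EPSDestroy(&eps2);CHKERRQ(ierr);
  ierr = VecDestroy(&v0);CHKERRQ(ierr);
  return 0;
}

As noted in the quoted reply, whether this pays off depends on the settings: the second solve still has to build its own subspace around the supplied vector.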