<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Wed, Mar 29, 2017 at 6:58 AM, Toon Weyens <span dir="ltr"><<a href="mailto:toon.weyens@gmail.com" target="_blank">toon.weyens@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Dear Jose,<div><br></div><div>Thanks for the answer. I am looking for the smallest real, indeed. </div><div><br></div><div>I have, just now, accidentally figured out that I can get correct convergence by increasing NCV to higher values, so that's covered! I thought I had checked this before, but apparently not. It's converging well now, and rather fast (still about 8 times faster than Krylov-Schur).<br><br></div><div>The issue now is that it scales rather badly: If I use 2 or more MPI processes, the time required to solve it goes up drastically. A small test case, on my Ubuntu 16.04 laptop, takes 10 seconds (blazing fast) for 1 MPI process, 25 for 2, 33 for 3, 59 for 4, etc... It is a machine with 8 cores, so i don't really understand why this is.<br></div></div></blockquote><div><br></div><div>For any scalability question, we need to see the output of</div><div><br></div><div> -log_view -ksp_view -ksp_monitor_true_residual -ksp_converged_reason</div><div><br></div><div>and other EPS options which I forget unfortunately. What seems likely here is that you</div><div>are using a PC which is not scalable, so iteration would be going up.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Are there other methods that can actually maintain the time required to solve for multiple MPI process? Or, preferable, decrease it (why else would I use multiple processes if not for memory restrictions)?<br><br>I will never have to do something bigger than a generalized non-Hermitian ev problem of, let's say, 5000 blocks of 200x200 complex values per block, and a band size of about 11 blocks wide (so a few GB per matrix max).</div><div><br></div><div>Thanks so much!</div></div><div class="HOEnZb"><div class="h5"><br><div class="gmail_quote"><div dir="ltr">On Wed, Mar 29, 2017 at 9:54 AM Jose E. Roman <<a href="mailto:jroman@dsic.upv.es" target="_blank">jroman@dsic.upv.es</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br class="m_-3848618158238288581gmail_msg">
> Are there other methods that can actually keep the solve time from increasing with multiple MPI processes? Or, preferably, decrease it (why else would I use multiple processes, if not for memory restrictions)?
>
> I will never have to solve anything bigger than a generalized non-Hermitian eigenvalue problem of, say, 5000 blocks of 200x200 complex values per block, with a bandwidth of about 11 blocks (so a few GB per matrix at most).
>
> Thanks so much!
>
> On Wed, Mar 29, 2017 at 9:54 AM Jose E. Roman <jroman@dsic.upv.es> wrote:
>
>>> On 29 Mar 2017, at 9:08, Toon Weyens <toon.weyens@gmail.com> wrote:
>>>
>>> I started looking for alternatives to the standard Krylov-Schur method for solving the generalized eigenvalue problem Ax = kBx in my code. These matrices have a block-band structure (typically 5, 7 or 9 blocks wide, with block sizes of the order of 20) and a typical size of about 1000 blocks. The eigenvalue problem results from minimizing the energy of a perturbed plasma-vacuum system in order to investigate its stability. So far I have not taken advantage of the Hermiticity of the problem.
>>>
>>> For "easier" problems, the Generalized Davidson method in particular converges like lightning, sometimes up to 100 times faster than Krylov-Schur.
>>>
>>> However, for slightly more complicated problems, GD converges to the wrong eigenpair: there is certainly an eigenpair with an eigenvalue lower than 0 (i.e. unstable), but the solver never gets below some small positive value, to which it wrongly converges.
<br class="m_-3848618158238288581gmail_msg">
I would need to know the settings you are using. Are you doing smallest_real? Maybe you can try target_magnitude with harmonic extraction.<br class="m_-3848618158238288581gmail_msg">
<br class="m_-3848618158238288581gmail_msg">
>>>
>>> Is it possible to improve this behavior? I tried changing the preconditioner, but it did not work.
>>>
>>> Might it be possible to use Krylov-Schur until reaching some precision, and then switch to JD to converge quickly?
<br class="m_-3848618158238288581gmail_msg">
Yes, you can do this, using EPSSetInitialSpace() in the second solve. But, depending on the settings, this may not buy you much.<br class="m_-3848618158238288581gmail_msg">
<br class="m_-3848618158238288581gmail_msg">
Jose<br class="m_-3848618158238288581gmail_msg">
<br class="m_-3848618158238288581gmail_msg">
><br class="m_-3848618158238288581gmail_msg">
> Thanks!<br class="m_-3848618158238288581gmail_msg">
<br class="m_-3848618158238288581gmail_msg">
<br class="m_-3848618158238288581gmail_msg">
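As an illustration of Jose's suggestion above about target_magnitude with harmonic extraction, a minimal sketch using the SLEPc C interface; the function name and the target value 0.0 are placeholders, not taken from the thread:

   #include <slepceps.h>

   /* Sketch only: switch an existing EPS (operators already set) from
      smallest-real to target-magnitude with harmonic extraction. */
   PetscErrorCode UseHarmonicTarget(EPS eps)
   {
     PetscErrorCode ierr;

     ierr = EPSSetType(eps,EPSGD);CHKERRQ(ierr);
     /* instead of EPSSetWhichEigenpairs(eps,EPS_SMALLEST_REAL): */
     ierr = EPSSetWhichEigenpairs(eps,EPS_TARGET_MAGNITUDE);CHKERRQ(ierr);
     ierr = EPSSetTarget(eps,0.0);CHKERRQ(ierr);   /* example target: the origin */
     ierr = EPSSetExtraction(eps,EPS_HARMONIC);CHKERRQ(ierr);
     return 0;
   }

The corresponding command-line options would be -eps_target_magnitude -eps_target 0 -eps_harmonic.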
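Similarly, for the two-stage idea with EPSSetInitialSpace(), a rough sketch assuming matrices A and B are already assembled, a single eigenpair is wanted, and the same EPS object is reused for the second solve; the tolerances and names are placeholders:

   #include <slepceps.h>

   /* Sketch only: a loose Krylov-Schur solve whose approximate eigenvector
      seeds a second, tighter Davidson-type (JD) solve. */
   PetscErrorCode TwoStageSolve(Mat A,Mat B)
   {
     EPS            eps;
     Vec            x;
     PetscScalar    kr;
     PetscInt       nconv;
     PetscErrorCode ierr;

     ierr = MatCreateVecs(A,&x,NULL);CHKERRQ(ierr);
     ierr = EPSCreate(PetscObjectComm((PetscObject)A),&eps);CHKERRQ(ierr);
     ierr = EPSSetOperators(eps,A,B);CHKERRQ(ierr);
     ierr = EPSSetProblemType(eps,EPS_GNHEP);CHKERRQ(ierr);
     ierr = EPSSetWhichEigenpairs(eps,EPS_SMALLEST_REAL);CHKERRQ(ierr);

     /* Stage 1: Krylov-Schur with a loose tolerance */
     ierr = EPSSetType(eps,EPSKRYLOVSCHUR);CHKERRQ(ierr);
     ierr = EPSSetTolerances(eps,1e-3,PETSC_DEFAULT);CHKERRQ(ierr);
     ierr = EPSSolve(eps);CHKERRQ(ierr);
     ierr = EPSGetConverged(eps,&nconv);CHKERRQ(ierr);

     if (nconv > 0) {
       /* Stage 2: reuse the approximate eigenpair as the initial space */
       ierr = EPSGetEigenpair(eps,0,&kr,NULL,x,NULL);CHKERRQ(ierr);
       ierr = PetscPrintf(PetscObjectComm((PetscObject)A),
                          "Stage-1 eigenvalue estimate: %g\n",(double)PetscRealPart(kr));CHKERRQ(ierr);
       ierr = EPSSetType(eps,EPSJD);CHKERRQ(ierr);
       ierr = EPSSetInitialSpace(eps,1,&x);CHKERRQ(ierr);
       ierr = EPSSetTolerances(eps,1e-8,PETSC_DEFAULT);CHKERRQ(ierr);
       ierr = EPSSolve(eps);CHKERRQ(ierr);
     }
     ierr = EPSDestroy(&eps);CHKERRQ(ierr);
     ierr = VecDestroy(&x);CHKERRQ(ierr);
     return 0;
   }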
-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener