<div dir="ltr">Sorry, I forgot to add the download link for the matrix files: <a href="https://transfer.pcloud.com/download.html?code=5ZViHIZI96yPIODHYSZ7y1HZMloBfcyhAHunjQVMpWUJIykLt76k">https://transfer.pcloud.com/download.html?code=5ZViHIZI96yPIODHYSZ7y1HZMloBfcyhAHunjQVMpWUJIykLt76k</a><div><br></div><div>Thanks</div></div><br><div class="gmail_quote"><div dir="ltr">On Sat, Apr 1, 2017 at 12:01 AM Toon Weyens <<a href="mailto:toon.weyens@gmail.com">toon.weyens@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr" class="gmail_msg"><div dir="ltr" class="gmail_msg">Dear jose,<div class="gmail_msg"><br class="gmail_msg"></div><div class="gmail_msg">I have saved the matrices in Matlab format and am sending them to you using pCloud. If you want another format, please tell me. Please also note that they are about 1.4GB each.<br class="gmail_msg"><br class="gmail_msg">I also attach a typical output of eps_view and log_view in output.txt, for 8 processes.</div></div><div class="gmail_msg"><br class="gmail_msg"></div><div class="gmail_msg">Thanks so much for helping me out! I think Petsc and Slepc are amazing inventions that really have saved me many months of work!<br class="gmail_msg"><br class="gmail_msg">Regards</div></div><div dir="ltr" class="gmail_msg"><br class="gmail_msg"><div class="gmail_quote gmail_msg"><div dir="ltr" class="gmail_msg">On Fri, Mar 31, 2017 at 5:12 PM Jose E. Roman <<a href="mailto:jroman@dsic.upv.es" class="gmail_msg" target="_blank">jroman@dsic.upv.es</a>> wrote:<br class="gmail_msg"></div><blockquote class="gmail_quote gmail_msg" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">In order to answer about GD I would need to know all the settings you are using. Also if you could send me the matrix I could do some tests.<br class="gmail_msg">
GD and JD are preconditioned eigensolvers, which need a reasonably good preconditioner. MUMPS, however, is a direct solver, not a preconditioner, and using it in this way is often counterproductive for this kind of method.
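For illustration, here is a minimal sketch of how GD can be combined with a cheap, approximate preconditioner through the ST object instead of a full factorization. The file names, the GHEP problem type and the block-Jacobi choice are only assumptions for the example, and error checking is omitted:

    #include <slepceps.h>

    int main(int argc,char **argv)
    {
      Mat         A,B;
      EPS         eps;
      ST          st;
      KSP         ksp;
      PC          pc;
      PetscViewer viewer;

      SlepcInitialize(&argc,&argv,NULL,NULL);

      /* assumption: A and B stored in PETSc binary files A.bin and B.bin */
      MatCreate(PETSC_COMM_WORLD,&A);
      MatSetFromOptions(A);
      PetscViewerBinaryOpen(PETSC_COMM_WORLD,"A.bin",FILE_MODE_READ,&viewer);
      MatLoad(A,viewer);
      PetscViewerDestroy(&viewer);
      MatCreate(PETSC_COMM_WORLD,&B);
      MatSetFromOptions(B);
      PetscViewerBinaryOpen(PETSC_COMM_WORLD,"B.bin",FILE_MODE_READ,&viewer);
      MatLoad(B,viewer);
      PetscViewerDestroy(&viewer);

      EPSCreate(PETSC_COMM_WORLD,&eps);
      EPSSetOperators(eps,A,B);
      EPSSetProblemType(eps,EPS_GHEP);   /* assumption: generalized Hermitian problem */
      EPSSetType(eps,EPSGD);

      /* GD applies the preconditioner held by the ST object (STPRECOND) */
      EPSGetST(eps,&st);
      STSetType(st,STPRECOND);
      STGetKSP(st,&ksp);
      KSPGetPC(ksp,&pc);
      PCSetType(pc,PCBJACOBI);           /* a cheap preconditioner instead of a MUMPS factorization */

      EPSSetFromOptions(eps);            /* allow run-time overrides, e.g. -st_pc_type */
      EPSSolve(eps);

      EPSDestroy(&eps);
      MatDestroy(&A);
      MatDestroy(&B);
      SlepcFinalize();
      return 0;
    }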
Jose

> On 31 Mar 2017, at 16:45, Toon Weyens <toon.weyens@gmail.com> wrote:
>
> Dear both,
>
> I have recompiled SLEPc and PETSc without debugging, as well as with the recommended --with-fortran-kernels=1. In the attachment I show the scaling for a typical "large" simulation with about 120,000 unknowns, using Krylov-Schur.
>
> There are two sets of data points, as I do two EPS solves in one simulation. The second solve is faster because it results from a grid refinement of the first solve and uses the solution of the first solve as a good initial guess. Note that there are two pages in the PDF; on the second page I show time · n_procs.
>
> As you can see, the scaling is better than before, especially up to 8 processes (which corresponds to about 15,000 unknowns per process, which is, as I recall, cited as a good minimum on the website).
>
> I am currently trying to run make streams NPMAX=8, but the cluster is extraordinarily crowded today and does not like my interactive jobs. I will run it as soon as possible.
>
> The main issue now, however, is again the first issue: the Generalized Davidson method does not converge to the physically correct negative eigenvalue (it should be about -0.05, as Krylov-Schur gives me). Instead it stays stuck at a small positive eigenvalue of about +0.0002. It looks as if the solver really does not like crossing the eigenvalue = 0 barrier, a behaviour I also see in smaller simulations, where convergence slows down greatly at that point.
>
> However, this time, for this big simulation, just increasing NCV does not do the trick, at least not up to NCV=2048.
>
> I also tried using target magnitude, without success.
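>
> To be concrete, the kind of targeting I have in mind looks roughly like the sketch below (here with EPS_TARGET_REAL rather than target magnitude; the target value and NCV are only illustrative, error checking is omitted, and the EPS is assumed to be set up elsewhere):
>
>     #include <slepceps.h>
>
>     /* Sketch: steer the solver towards the expected negative eigenvalue using
>        a target. The target and subspace size are illustrative only. */
>     static void SetNegativeTarget(EPS eps)
>     {
>       EPSSetWhichEigenpairs(eps,EPS_TARGET_REAL);  /* eigenvalue closest to the target in real part */
>       EPSSetTarget(eps,-0.05);                     /* roughly the physically expected value */
>       EPSSetDimensions(eps,1,2048,PETSC_DEFAULT);  /* nev=1, large ncv, mpd left to SLEPc */
>     }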
>
> I started implementing the capability to start with Krylov-Schur and then switch to GD with EPSSetInitialSpace once a certain precision has been reached, but then realized it might be overkill, as the SLEPc solution phase is generally not more than 15% of my code's run time. There are probably other places where I can gain more than a few percent.
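>
> What I had in mind is roughly the sketch below (again only a sketch: the tolerances are arbitrary, error checking is omitted, and the EPS is assumed to have its operators and problem type already set; A is only used to create a compatible work vector):
>
>     #include <slepceps.h>
>
>     /* Sketch: run Krylov-Schur with a loose tolerance, then warm-start GD
>        from the resulting eigenvector via EPSSetInitialSpace. */
>     static void SolveCoarseThenGD(EPS eps,Mat A)
>     {
>       Vec         x;
>       PetscScalar lambda;
>       PetscInt    nconv;
>
>       /* first pass: Krylov-Schur to modest accuracy */
>       EPSSetType(eps,EPSKRYLOVSCHUR);
>       EPSSetTolerances(eps,1e-4,PETSC_DEFAULT);
>       EPSSolve(eps);
>
>       EPSGetConverged(eps,&nconv);
>       if (nconv > 0) {
>         /* extract the leading approximate eigenpair */
>         MatCreateVecs(A,&x,NULL);
>         EPSGetEigenpair(eps,0,&lambda,NULL,x,NULL);
>
>         /* second pass: GD, warm-started with the Krylov-Schur eigenvector */
>         EPSSetType(eps,EPSGD);
>         EPSSetInitialSpace(eps,1,&x);
>         EPSSetTolerances(eps,1e-8,PETSC_DEFAULT);
>         EPSSolve(eps);
>
>         VecDestroy(&x);
>       }
>     }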
>
> However, if there is another trick that can make GD work, it would certainly be appreciated, as in my experience it is really about 5 times faster than Krylov-Schur!
>
> Thanks!
>
> Toon