On 19 Nov 2015, at 11:19, Jose E. Roman <jroman@dsic.upv.es> wrote:

> On 19 Nov 2015, at 10:49, Denis Davydov <davydden@gmail.com> wrote:
>
>> Dear all,
>>
>> I was trying to obtain scaling results for the GD eigensolver applied to
>> density functional theory. Interestingly, the number of self-consistent
>> iterations (solution of the coupled eigenvalue and Poisson problems)
>> depends on the number of MPI cores used: in my case it ranges from 19 to
>> 24 for 2 to 160 MPI cores. That makes the whole scaling check useless,
>> since the eigenproblem is solved a different number of times.
>>
>> This is **not** the case when I use the Krylov-Schur eigensolver with a
>> zero shift, which makes me believe I am missing some setting that would
>> make GD fully deterministic. The only non-deterministic part I am
>> currently aware of is the initial subspace for the first SC iteration,
>> but that applies to both KS and GD. For subsequent iterations I provide
>> the previously obtained eigenvectors as the initial subspace.
>>
>> Certainly there will be some round-off error due to the different
>> partitioning of DoFs for different numbers of MPI cores, but I would not
>> expect it to have such a strong influence, especially given that I do
>> not see this problem with KS.
>>
>> Below is the output of -eps_view for GD with -eps_type gd -eps_harmonic
>> -st_pc_type bjacobi -eps_gd_krylov_start -eps_target -10.0
>> I would appreciate any suggestions on how to address the issue.
>
> The block Jacobi preconditioner differs when you change the number of
> processes. This will probably make GD iterate more when you use more
> processes.

Switching to the (point) Jacobi preconditioner reduced the variation in the
number of SC iterations, but did not remove it. Are there any other sources
of non-deterministic behaviour besides the initial vector space?

>> As a side question, why does GD use a KSP of type preonly? It could just
>> as well use a proper linear solver to apply K^{-1} in the expansion
>> stage --
>
> You can achieve that with PCKSP. But if you are going to do that, why not
> use JD instead of GD?

It was more a general question of why the inverse is applied with preonly in
GD, while JD gets full control of the KSP.

I will try JD as well, because so far GD has a bottleneck for my problems in
BVDot (13% of the time), BVOrthogonalize (10%) and DSSolve (62%), whereas
only 11% of the time is spent in MatMult. I suppose BVDot is mostly called
from BVOrthogonalize and partly in the computation of the Ritz vectors? My
best bet for reducing DSSolve time (with mpd=175) is a better preconditioner,
and thus fewer iterations, or a double expansion with a simple
preconditioner?

Regards,
Denis.
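
P.S. For reference, a sketch of the two configurations discussed above. The
GD-with-PCKSP inner solver options are an assumption based on PETSc's prefix
convention (PCKSP appends "ksp_" to the outer prefix, giving "-st_ksp_ksp_*"
here); the solver choices and iteration counts are illustrative, not a
recommendation:

```shell
# GD, replacing the preconditioner application by an inner Krylov solve
# via PCKSP (inner-KSP option names assumed from the PCKSP prefixing rule):
-eps_type gd -eps_harmonic -eps_target -10.0 \
  -st_pc_type ksp -st_ksp_ksp_type gmres -st_ksp_ksp_max_it 10

# JD, which exposes the correction-equation solver directly via the ST KSP:
-eps_type jd -eps_harmonic -eps_target -10.0 \
  -st_ksp_type gmres -st_ksp_max_it 10 -st_pc_type jacobi
```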