[petsc-users] [SLEPc] GD is not deterministic when using different number of cores

Jose E. Roman jroman at dsic.upv.es
Fri Nov 20 06:10:59 CST 2015


> On 20 Nov 2015, at 12:06, Denis Davydov <davydden at gmail.com> wrote:
> 
>> 
>> On 19 Nov 2015, at 11:19, Jose E. Roman <jroman at dsic.upv.es> wrote:
>> 
>>> 
>>> On 19 Nov 2015, at 10:49, Denis Davydov <davydden at gmail.com> wrote:
>>> 
>>> Dear all,
>>> 
>>> I was trying to get some scaling results for the GD eigensolver as applied to density functional theory.
>>> Interestingly enough, the number of self-consistent iterations (solution of a coupled eigenvalue problem and Poisson equations)
>>> depends on the number of MPI cores used. In my case the number of iterations ranges from 19 to 24 for 2 to 160 MPI cores.
>>> That makes the whole scaling check useless, as the eigenproblem is solved a different number of times.
>>> 
>>> That is **not** the case when I use the Krylov-Schur eigensolver with a zero shift, which makes me believe that I am missing some settings in GD to make it fully deterministic. The only non-deterministic part I am currently aware of is the initial subspace for the first SC iteration. But that is the case for both KS and GD. For subsequent iterations I provide the previously obtained eigenvectors as the initial subspace.
>>> 
>>> Certainly there will be some round-off error due to the different partitioning of DoFs for different numbers of MPI cores,
>>> but I don't expect it to have such a strong influence, especially given the fact that I don't see this problem with KS.
>>> 
>>> Below is the output of -eps_view for GD with -eps_type gd -eps_harmonic -st_pc_type bjacobi -eps_gd_krylov_start -eps_target -10.0
>>> I would appreciate any suggestions on how to address the issue.
>> 
>> The block Jacobi preconditioner differs when you change the number of processes. This will probably make GD iterate more when you use more processes.
> 
> Switching to the Jacobi preconditioner reduced the variation in the number of SC iterations, but did not remove it.
> Are there any other options, besides the initial vector space, that may introduce non-deterministic behaviour?
> 
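For reference, a minimal sketch in C of the two things that make the setup independent of the partition: a point-Jacobi preconditioner and an explicitly supplied initial space. This is only a sketch (error checking omitted); it assumes the matrix A is already assembled and that prev[] holds the nconv eigenvectors obtained in the previous SC iteration.

    #include <slepceps.h>

    EPS eps;
    ST  st;
    KSP ksp;
    PC  pc;

    EPSCreate(PETSC_COMM_WORLD,&eps);
    EPSSetOperators(eps,A,NULL);            /* A assumed assembled elsewhere */
    EPSSetProblemType(eps,EPS_HEP);
    EPSSetType(eps,EPSGD);
    EPSSetTarget(eps,-10.0);
    EPSSetWhichEigenpairs(eps,EPS_TARGET_MAGNITUDE);
    EPSSetExtraction(eps,EPS_HARMONIC);
    /* point Jacobi instead of block Jacobi: independent of the number
       of processes, unlike bjacobi */
    EPSGetST(eps,&st);
    STGetKSP(st,&ksp);
    KSPGetPC(ksp,&pc);
    PCSetType(pc,PCJACOBI);
    /* reuse the previously converged eigenvectors as the initial space */
    EPSSetInitialSpace(eps,nconv,prev);
    EPSSetFromOptions(eps);
    EPSSolve(eps);

The same preconditioner choice is obtained on the command line with -st_pc_type jacobi.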
>>> 
>>> As a side question, why does GD use KSP preonly? It could as well use a proper linear solver to apply K^{-1} in the expansion stage --
>> 
>> You can achieve that with PCKSP. But if you are going to do that, why not use JD instead of GD?
> 
> It was more a general question of why the inverse is implemented with preonly for GD, but with full control of the KSP for JD.

GD uses the preconditioned residual to expand the subspace. JD uses the (approximate) solution of the correction equation.
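If you try JD, a hedged sketch of giving the correction equation an actual (inexact) solve could look like the following; the GMRES choice and the tolerances are only illustrative placeholders, not a recommendation.

    /* sketch (error checking omitted): same EPS object as above, but JD
       with a few GMRES iterations for the correction equation */
    EPSSetType(eps,EPSJD);
    EPSGetST(eps,&st);
    STGetKSP(st,&ksp);
    KSPSetType(ksp,KSPGMRES);
    /* loose relative tolerance and a small iteration cap; tune for the
       application */
    KSPSetTolerances(ksp,1e-3,PETSC_DEFAULT,PETSC_DEFAULT,20);
    KSPGetPC(ksp,&pc);
    PCSetType(pc,PCJACOBI);

On the command line the equivalent would be -eps_type jd -st_ksp_type gmres -st_ksp_rtol 1e-3 -st_ksp_max_it 20 -st_pc_type jacobi.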

> 
> I will try JD as well, because so far GD for my problems has bottlenecks in BVDot (13% of time), BVOrthogonalize (10%), and DSSolve (62%);
> whereas only 11% of the time is spent in MatMult.
> I suppose BVDot is mostly used in BVOrthogonalize and partly in the calculation of Ritz vectors?
> Is my best bet for DSSolve (with mpd=175 only) a better preconditioner, and thus a reduced number of iterations, or the double expansion with a simple preconditioner?

The time spent in DSSolve should always be small. Try reducing mpd.
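For example (the value is only a placeholder), mpd can be reduced with -eps_mpd or programmatically:

    /* request nev eigenpairs, keep the default ncv, and cap the maximum
       projected dimension so that the dense problem in DSSolve stays small */
    EPSSetDimensions(eps,nev,PETSC_DEFAULT,64);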

> 
> Regards,
> Denis.


