[petsc-users] [SLEPc] Number of iterations changes with MPI processes in Lanczos
Jose E. Roman
jroman at dsic.upv.es
Tue Oct 23 05:59:29 CDT 2018
There is an undocumented option:
-bv_reproducible_random
It will force the initial vector of the Krylov subspace to be the same irrespective of the number of MPI processes. This should be used for scaling analyses such as the one you are trying to do.
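For instance, assuming your binary is called ./ex1 (the executable name and the process count here are just placeholders), the option is simply added to the usual command line:

mpiexec -n 1024 ./ex1 -eps_type lanczos -eps_smallest_real -bv_reproducible_random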
An additional comment: we strongly recommend using the default solver (Krylov-Schur), which does Lanczos with implicit restart. It is generally faster and more stable.
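In case it is useful, below is a minimal sketch (not from your code; the function name SolveSmallestReal and the already assembled Hermitian Mat A are assumptions) showing how the default Krylov-Schur solver can be set up so that command-line options such as -bv_reproducible_random are picked up, and how to query the iteration count afterwards:

#include <slepceps.h>

PetscErrorCode SolveSmallestReal(Mat A)
{
  EPS            eps;
  PetscInt       its;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = EPSCreate(PetscObjectComm((PetscObject)A),&eps);CHKERRQ(ierr);
  ierr = EPSSetOperators(eps,A,NULL);CHKERRQ(ierr);            /* standard problem A*x = k*x */
  ierr = EPSSetProblemType(eps,EPS_HEP);CHKERRQ(ierr);         /* Hermitian eigenproblem */
  ierr = EPSSetWhichEigenpairs(eps,EPS_SMALLEST_REAL);CHKERRQ(ierr);
  /* Krylov-Schur is the default, so no EPSSetType() call is needed;
     runtime options such as -bv_reproducible_random are applied here */
  ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);
  ierr = EPSSolve(eps);CHKERRQ(ierr);
  ierr = EPSGetIterationNumber(eps,&its);CHKERRQ(ierr);
  ierr = PetscPrintf(PetscObjectComm((PetscObject)A),"Iterations: %D\n",its);CHKERRQ(ierr);
  ierr = EPSDestroy(&eps);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}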
Jose
> On 23 Oct 2018, at 12:13, Ale Foggia <amfoggia at gmail.com> wrote:
>
> Hello,
>
> I'm currently using the Lanczos solver (EPSLANCZOS) to get the smallest real eigenvalue (EPS_SMALLEST_REAL) of a Hermitian problem (EPS_HEP). Those are the only options I set for the solver. My aim is to be able to predict/estimate the time-to-solution. To do so, I was doing a scaling study of the code for different matrix sizes and different numbers of MPI processes. As I was not observing good scaling, I checked the number of iterations of the solver (given by EPSGetIterationNumber). I've encountered that for the **same size** of matrix (that is, the same problem), when I change the number of MPI processes, the number of iterations changes, and the behaviour is not monotonic. These are the numbers I got:
>
> # procs # iters
> 960 157
> 992 189
> 1024 338
> 1056 190
> 1120 174
> 2048 136
>
> I've checked the mailing list for a similar situation and I found another person with the same problem but in another solver ("[SLEPc] GD is not deterministic when using different number of cores", Nov 19 2015), but I think the solution that person found (removing the "-eps_harmonic" option) does not apply to my problem.
>
> Can you give me any hint on the reason for this behaviour? Is there a way to prevent it? It's not possible to estimate/predict the time consumption for bigger problems if the number of iterations varies this much.
>
> Ale