[petsc-users] [SLEPc] Number of iterations changes with MPI processes in Lanczos
Ale Foggia
amfoggia at gmail.com
Tue Oct 23 05:13:36 CDT 2018
Hello,
I'm currently using the Lanczos solver (EPSLANCZOS) to get the smallest real
eigenvalue (EPS_SMALLEST_REAL) of a Hermitian problem (EPS_HEP). Those are
the only options I set for the solver. My aim is to be able to
predict/estimate the time-to-solution. To do so, I was running a scaling
study of the code for different matrix sizes and different numbers of MPI
processes. As I was not observing good scaling, I checked the number of
iterations of the solver (given by EPSGetIterationNumber). I've found
that for the **same** matrix size (that is, the same problem), when I
change the number of MPI processes, the number of iterations changes, and
the behaviour is not monotonic. These are the numbers I've got:
# procs # iters
960 157
992 189
1024 338
1056 190
1120 174
2048 136
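For reference, here is a minimal sketch of the solver setup described above (EPSLANCZOS, EPS_HEP, EPS_SMALLEST_REAL, then querying the iteration count). The function name and the assumption that the matrix A is already assembled are mine, not from the original code:

```c
/* Minimal sketch, assuming A is an already-assembled Hermitian matrix.
 * Sets only the options mentioned above and reports the iteration count. */
#include <slepceps.h>

PetscErrorCode solve_smallest_real(Mat A)
{
  EPS      eps;
  PetscInt its;

  PetscCall(EPSCreate(PETSC_COMM_WORLD, &eps));
  PetscCall(EPSSetOperators(eps, A, NULL));
  PetscCall(EPSSetProblemType(eps, EPS_HEP));              /* Hermitian problem */
  PetscCall(EPSSetType(eps, EPSLANCZOS));                  /* explicit Lanczos */
  PetscCall(EPSSetWhichEigenpairs(eps, EPS_SMALLEST_REAL));
  PetscCall(EPSSetFromOptions(eps));
  PetscCall(EPSSolve(eps));
  PetscCall(EPSGetIterationNumber(eps, &its));             /* varies with # procs */
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "iterations: %d\n", (int)its));
  PetscCall(EPSDestroy(&eps));
  return 0;
}
```

Note that the iteration count reported here is exactly the quantity tabulated above, with everything else held fixed except the number of MPI processes.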
I've checked the mailing list for a similar situation and found another
person with the same problem, but with a different solver ("[SLEPc] GD is
not deterministic when using different number of cores", Nov 19 2015).
However, I think the solution that person found (removing the
"-eps_harmonic" option) does not apply to my problem.
Can you give me any hint on what the reason for this behaviour is? Is there
a way to prevent it? It's not possible to estimate/predict the time
consumption for bigger problems if the number of iterations varies this
much.
Ale