<div dir="ltr"><div class="gmail_quote"><div dir="ltr">El mar., 23 oct. 2018 a las 13:53, Matthew Knepley (<<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>>) escribió:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div dir="ltr">On Tue, Oct 23, 2018 at 6:24 AM Ale Foggia <<a href="mailto:amfoggia@gmail.com" target="_blank">amfoggia@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr"><div>Hello, </div><div><br></div><div>I'm currently using Lanczos solver (EPSLANCZOS) to get the smallest real eigenvalue (EPS_SMALLEST_REAL) of a Hermitian problem (EPS_HEP). Those are the only options I set for the solver. My aim is to be able to predict/estimate the time-to-solution. To do so, I was doing a scaling of the code for different sizes of matrices and for different number of MPI processes. As I was not observing a good scaling I checked the number of iterations of the solver (given by EPSGetIterationNumber). I've encounter that for the **same size** of matrix (that meaning, the same problem), when I change the number of MPI processes, the amount of iterations changes, and the behaviour is not monotonic. This are the numbers I've got:</div></div></div></blockquote><div><br></div><div>I am sure you know this, but this test is strong scaling and will top out when the individual problem sizes become too small (we see this at several thousand unknowns).</div></div></div></blockquote><div><br></div><div>Thanks for pointing this out, we are aware of that and I've been "playing" around to try to see by myself this behaviour. Now, I think I'll go with the Krylov-Schur method because is the only solution to the problem of the number of iterations. With this I think I'll be able to see the individual problem size effect in the scaling.<br></div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr"><div><br></div><div># procs # iters<br></div><div>960 157<br></div><div>992 189<br></div><div>1024 338<br></div><div>1056 190<br></div><div>1120 174<br></div><div>2048 136<br></div><div><br></div><div>I've checked the mailing list for a similar situation and I've found another person with the same problem but in another solver ("[SLEPc] GD is not deterministic when using different number of cores", Nov 19 2015), but I think the solution this person finds does not apply to my problem (removing "-eps_harmonic" option).</div><div><br></div><div>Can you give me any hint on what is the reason for this behaviour? Is there a way to prevent this? It's not possible to estimate/predict any time consumption for bigger problems if the number of iterations varies this much.</div><div><br></div><div>Ale<br></div></div></div>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="m_4538236537080181968m_-8075526979556023096m_-1735611911667925449gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>
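Switching to Krylov-Schur, as proposed in the reply, is then a one-line
change. Krylov-Schur is in fact SLEPc's default EPS type, so removing the
EPSSetType call, or passing -eps_type krylovschur on the command line (with
EPSSetFromOptions in place), has the same effect:

  ierr = EPSSetType(eps, EPSKRYLOVSCHUR);CHKERRQ(ierr);  /* thick-restart Krylov-Schur */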