<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Sun, Aug 21, 2016 at 9:38 PM, Justin Chang <span dir="ltr"><<a href="mailto:jychang48@gmail.com" target="_blank">jychang48@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi all,<div><br></div><div>This may or may not be a PETSc specific question but...</div><div><br></div><div>I have seen some people claim that strong-scaling is harder to achieve than weak scaling (e.g., <a href="https://www.sharcnet.ca/help/index.php/Measuring_Parallel_Scaling_Performance" target="_blank">https://www.sharcnet.<wbr>ca/help/index.php/Measuring_<wbr>Parallel_Scaling_Performance</a>) and generally speaking it makes sense - communication overhead increases with concurrency.</div><div><br></div><div>However, we know that most PETSc solvers/applications are not only memory-bandwidth bound, but may not scale as well w.r.t. problem size as other solvers (e.g., ILU(0) may beat out GAMG for small elliptic problems but GAMG will eventually beat out ILU(0) for larger problems), so wouldn't weak-scaling not only be the more interesting but more difficult performance metric to achieve? Strong-scaling issues arise mostly from communication overhead but weak-scaling issues may come from that and also solver/algorithmic scalability w.r.t problem size (e.g., problem size N takes 10*T seconds to compute but problem size 2*N takes 50*T seconds to compute).</div><div><br></div><div>In other words, if one were to propose or design a new algorithm/solver capable of handling large-scale problems, would it be equally if not more important to show the weak-scaling potential? Because if you really think about it, a "truly efficient" algorithm will be less likely to scale in the strong sense but computation time will be close to linearly proportional to problem size (hence better scaling in the weak-sense). It seems if I am trying to convince someone that a proposed computational framework is "high performing" without getting too deep into performance modeling, a poor parallel efficiency (arising due to good sequential efficiency) in the strong sense may not look promising.</div></div></blockquote><div><br></div><div>It definitely depends on your point of view. However, I believe that the point being made by people is twofold:</div><div><br></div><div> 1) Weak scaling is relatively easy to game by making a serially inefficient code.</div><div><br></div><div> 2) Strong scaling involves reducing the serial fraction of your code to get around the Amdahl limit. For most codes, I don't think</div><div> communication overhead even makes it on the table. Engineering for this level of concurrency is not easy.</div><div><br></div><div>There is a related point:</div><div><br></div><div> 3) Scientists often want to solve a particular problem rather than the biggest problem a machine can fit in memory</div><div><br></div><div>For sophisticated algorithms that can handle real world problems, I don't think weak scaling is easy either. However, for toy</div><div>problems it is, and since most CS types only deal with the Laplace equation on a square, it does not look hard.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Thanks,</div><div>Justin</div><div><br></div><div><br></div></div>
  Thanks,

     Matt

> Thanks,
> Justin

-- 
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener