[petsc-users] strong-scaling vs weak-scaling
Justin Chang
jychang48 at gmail.com
Sun Aug 21 21:38:37 CDT 2016
Hi all,
This may or may not be a PETSc-specific question, but...
I have seen some people claim that strong scaling is harder to achieve than
weak scaling (e.g.,
https://www.sharcnet.ca/help/index.php/Measuring_Parallel_Scaling_Performance),
and generally speaking that makes sense: communication overhead increases
with concurrency.
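To fix terminology, I am using the usual textbook definitions, where T(N, p)
is the time to solve a problem of size N on p processes (nothing
PETSc-specific here):

  strong-scaling efficiency:  E_s(p) = T(N, 1) / (p * T(N, p))   (N fixed, p grows)
  weak-scaling efficiency:    E_w(p) = T(N, 1) / T(p*N, p)       (N/p fixed, p grows)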
However, we know that most PETSc solvers/applications are not only
memory-bandwidth bound, but some algorithms also scale worse with respect to
problem size than others (e.g., ILU(0) may beat GAMG for small elliptic
problems, but GAMG will eventually beat ILU(0) for larger ones). So wouldn't
weak scaling be not only the more interesting but also the more difficult
performance metric to achieve? Strong-scaling issues arise mostly from
communication overhead, whereas weak-scaling issues may come from that and
also from solver/algorithmic scalability with respect to problem size (e.g.,
problem size N takes 10*T seconds to compute but problem size 2*N takes 50*T
seconds).
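To make that last example concrete (with made-up numbers, just to illustrate
the point): if the serial cost grows like N^k, then

  50*T / 10*T = 2^k   =>   k = log2(5) ≈ 2.32

so even with perfect parallelization of the work, doubling both the problem
size and the process count gives

  E_w(2) = T(N, 1) / T(2*N, 2) ≈ 1 / 2^(k-1) ≈ 0.40

i.e., a weak-scaling efficiency of roughly 40% that has nothing to do with
the network.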
In other words, if one were to propose or design a new algorithm/solver
capable of handling large-scale problems, would it be equally if not more
important to demonstrate its weak-scaling potential? Because if you really
think about it, a "truly efficient" algorithm is less likely to scale well in
the strong sense, but its computation time should be close to linearly
proportional to problem size (hence better scaling in the weak sense). It
seems that if I am trying to convince someone that a proposed computational
framework is "high performing" without getting too deep into performance
modeling, a poor parallel efficiency in the strong sense (arising from good
sequential efficiency) may not look promising.
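As a hypothetical illustration of that last point (numbers invented for the
sake of argument): suppose algorithm A takes 100 s on 1 core and 20 s on 16
cores, while a slower algorithm B takes 1600 s on 1 core and 125 s on 16 cores.

  A: E_s(16) = 100 / (16 * 20)   ≈ 31%
  B: E_s(16) = 1600 / (16 * 125) = 80%

B reports much better strong-scaling efficiency, yet A is still more than 6x
faster in time-to-solution; the "better scaling" comes entirely from the
slower serial baseline.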
Thanks,
Justin