<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Sat, Oct 31, 2015 at 11:34 AM, TAY wee-beng <span dir="ltr"><<a href="mailto:zonexo@gmail.com" target="_blank">zonexo@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<p dir="ltr">Hi, </p>
<p dir="ltr">I understand that as mentioned in the faq, due to the
limitations in memory, the scaling is not linear. So, I am trying
to write a proposal to use a supercomputer.<br>
</p>
<p dir="ltr">Its specs are:<br>
</p>
<p dir="ltr">Compute nodes: 82,944 nodes (SPARC64 VIIIfx; 16GB of
memory per node)</p>
<p dir="ltr">8 cores / processor<br>
</p>
<p dir="ltr">Interconnect: Tofu (6-dimensional mesh/torus)
Interconnect<br>
</p>
<p dir="ltr">Each cabinet contains 96 computing nodes,<br>
</p>
<p dir="ltr">One of the requirement is to give the performance of my
current code with my current set of data, and there is a formula
to calculate the estimated parallel efficiency when using the new
large set of data<br>
</p>
<p dir="ltr">There are 2 ways to give performance:<br>
1. Strong scaling, which is defined as how the elapsed time varies
with the number of processors for a fixed<br>
problem. <br>
2. Weak scaling, which is defined as how the elapsed time varies
with the number of processors for a<br>
fixed problem size per processor.<br>
</p>
<p dir="ltr">I ran my cases with 48 and 96 cores with my current
cluster, giving 140 and 90 mins respectively. This is classified
as strong scaling.<br>
</p>
<p dir="ltr">Cluster specs:<br>
</p>
<p dir="ltr">CPU: AMD 6234 2.4GHz<br>
</p>
<p dir="ltr">8 cores / processor (CPU)<br>
</p>
<p dir="ltr">6 CPU / node<br>
</p>
<p dir="ltr">So 48 Cores / CPU<br>
</p>
<p dir="ltr">Not sure abt the memory / node<br>
</p>
<p dir="ltr"><br>
</p>
<p dir="ltr">The parallel efficiency ‘En’ for a given degree of
parallelism ‘n’ indicates how much the program is<br>
efficiently accelerated by parallel processing. ‘En’ is given by
the following formulae. Although their<br>
derivation processes are different depending on strong and weak
scaling, derived formulae are the<br>
same.<br>
</p>
<p dir="ltr">From the estimated time, my parallel efficiency using
Amdahl's law on the current old cluster was 52.7%.<br>
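
[A minimal Python sketch of this kind of estimate, assuming the simple Amdahl
model T(n) = T_serial + T_parallel/n fitted to the two timings quoted above;
the proposal's own formula may differ, which would account for the gap
between the roughly 56% this gives and the 52.7% quoted:]

# Amdahl fit from the two quoted timings (48 cores -> 140 min,
# 96 cores -> 90 min). Indicative only.
n1, t1 = 48, 140.0   # cores, elapsed minutes
n2, t2 = 96, 90.0

# Solve T_serial + T_parallel/n = t for the two measurements.
t_par = (t1 - t2) / (1.0 / n1 - 1.0 / n2)   # -> 4800 min
t_ser = t1 - t_par / n1                     # -> 40 min
t_one = t_ser + t_par                       # estimated single-core time

print("serial fraction        ~ %.2f%%" % (100 * t_ser / t_one))
print("efficiency at %d cores ~ %.1f%%" % (n2, 100 * t_one / (n2 * t2)))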

> So are my results acceptable?
>
<p dir="ltr">For the large data set, if using 2205 nodes
(2205X8cores), my expected parallel efficiency is only 0.5%. The
proposal recommends value of > 50%.<br>
</p>
<p dir="ltr"></p></div></blockquote><div>The problem with this analysis is that the estimated serial fraction from Amdahl's Law changes as a function<br></div><div>of problem size, so you cannot take the strong scaling from one problem and apply it to another without a</div><div>model of this dependence.</div><div><br></div><div>Weak scaling does model changes with problem size, so I would measure weak scaling on your current</div><div>cluster, and extrapolate to the big machine. I realize that this does not make sense for many scientific</div><div>applications, but neither does requiring a certain parallel efficiency.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000"><p dir="ltr">Is it possible for this type of scaling in PETSc
(>50%), when using 17640 (2205X8) cores?<br>
> Btw, I do not have access to the system.

--
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener