<div dir="ltr"><div><div><div><div><div><div>Lots of topics to discuss here...<br><br></div>- 315,342 unknowns is a very small problem. <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html#slowerparallel">The PETSc gurus require at minimum 10,000 unknowns per process</a> for the computation time to outweigh communication time (although 20,000 unknowns or more is preferable). So when using 32 MPI processes and more, you're going to have ~10k unknowns or less so that's one reason why you're going to see less speedup.<br><br>- Another reason you get poor parallel scalability is that PETSc is limited by the memory-bandwidth. Meaning you have to use the optimal number of cores per compute node or whatever it is you're running on. <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html#computers">The PETSc gurus talk about this issue in depth</a>. So not only do you need proper MPI process bindings, but it is likely that you will not want to saturate all available cores on a single node (the STREAMS Benchmark can tell you this). In other words, 16 cores spread across 2 nodes is going to outperform 16 cores on 1 node.<br><br></div><div>- If operations like MatMult are not scaling, this is likely due to the memory bandwidth limitations. If operations like VecDot or VecNorm are not scaling, this is likely due to the network latency between compute nodes. <br></div><br></div>- What kind of problem are you solving? CG/BJacobi is a mediocre solver/preconditioner combination, and solver iterations will increase with MPI processes if your tolerances are too lax. You can try using CG with any of the multi-grid preconditioners like GAMG if you have something nice like the poission equation. <br><br></div>- The best way to improve parallel performance is to make your code really inefficient and crappy.<br><br></div>- And most importantly, always send -log_view if you want people to help identify performance related issues with your application :)<br><br></div>Justin<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Sep 7, 2016 at 8:37 PM, Jinlei Shen <span dir="ltr"><<a href="mailto:jshen25@jhu.edu" target="_blank">jshen25@jhu.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_default" style="font-size:small">Hi,</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">I am trying to test the parallel scalablity of iterative solver (CG with BJacobi preconditioner) in PETSc.</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">Since the iteration number increases with more processors, I calculated the single iteration time by dividing the total KSPSolve time by number of iteration in this test.</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">The linear system I'm solving has 315342 unknowns. Only KSPSolve cost is analyzed.</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">The results show that the parallelism works well with small number of processes (less than 32 in my case), and is almost perfect parallel within first 10 processors. 
</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">However, the effect of parallelization degrades if I use more processors. The wired thing is that with more than 100 processors, the single iteration cost is slightly increasing.</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">To investigate this issue, I then looked into the composition of KSPSolve time.</div><div class="gmail_default" style="font-size:small">It seems KSPSolve consists of MatMult, VecTDot(min),VecNorm(min),VecA<wbr>XPY(max),VecAXPX(max),ApplyPC. Please correct me if I'm wrong.</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">And I found for small number of processors, all these components scale well. </div><div class="gmail_default" style="font-size:small">However, using more processors(roughly larger than 40), MatMult, VecTDot(min),VecNorm(min) behaves worse, and even increasing after 100 processors, while the other three parts parallel well even for 1000 processors.</div><div class="gmail_default" style="font-size:small">Since MatMult composed major cost in single iteration, the total single iteration cost increases as well.(See the below figure).</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">My question:</div><div class="gmail_default" style="font-size:small">1. Is such situation reasonable? Could anyone explain why MatMult scales poor after certain number of processors? I heard some about different algorithms for matrix multiplication. Is that the bottleneck?</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">2. Is the parallelism dependent of matrix size? If I use larger system size,e.g. million , can the solver scale better?</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">3. Do you have any idea to improve the parallel performance?</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">Thank you very much.</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">JInlei</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default"><img src="cid:ii_157076e42bc6f601" alt="Inline image 1" height="408" width="544"><br></div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small"><br></div></div>