<div dir="ltr">Barry,<div><br></div><div>Thanks a lot for your reply. Your explanation helps me understand my test results. So in this case, to compute the speedup for a strong-scalability test, I should use the wall-clock time of a multi-core run as the reference time instead of the serial run time? </div><div><br></div><div>E.g., to compute the speedup for 16 cores, I should use</div><div><div class="" align="center"><img alt="speedup=\frac{4 \times wclock_{4core}}{wclock_{16core}}" title="speedup=\frac{4 \times wclock_{4core}}{wclock_{16core}}" class="" src="http://latex.codecogs.com/gif.latex?%5Cdpi%7B300%7D%5Cdisplaystyle%09speedup=%5Cfrac{4%09%5Ctimes%09wclock%5F{4core}}{wclock%5F{16core}}" id="l0.17888954863883555" style="display: block;" height="40" width="202"></div><br><br></div><div>instead of using</div><div><br></div><div><div class="" align="center"><img alt="speedup=\frac{wclock_{1core}}{wclock_{16core}}" title="speedup=\frac{wclock_{1core}}{wclock_{16core}}" class="" src="http://latex.codecogs.com/gif.latex?%5Cdpi%7B300%7D%5Cdisplaystyle%09speedup=%5Cfrac{wclock%5F{1core}}{wclock%5F{16core}}" id="l0.23653535335324705" style="display: block;" height="40" width="177"><br></div><br></div><div class="gmail_extra">Another question: when I use ASM as a preconditioner only, the speedup on 2 cores is much better than in the case using ASM with a local solve (-sub_ksp_type gmres). 
</div><div class="gmail_extra"><blockquote style="font-size:12.8000001907349px;margin:0px 0px 0px 40px;border:none;padding:0px"><div>-ksp_type gmres -ksp_max_it 100 -ksp_rtol 1e-5 -ksp_atol 1e-50 </div><div>-ksp_gmres_restart 30 -ksp_pc_side right</div><div>-pc_type asm -sub_pc_type ilu -sub_pc_factor_levels 0 -sub_pc_factor_fill 1.9</div></blockquote><blockquote style="font-size:12.8000001907349px;margin:0px 0px 0px 40px;border:none;padding:0px"><table cellspacing="0" cellpadding="0" border="1" style="table-layout:fixed;font-size:13px;font-family:arial,sans,sans-serif;border-collapse:collapse;border:1px solid rgb(204,204,204)"><colgroup><col width="100"><col width="100"><col width="144"><col width="144"><col width="100"><col width="100"></colgroup><tbody><tr style="height:21px"><td style="padding:2px 3px;vertical-align:bottom">cores</td><td style="padding:2px 3px;vertical-align:bottom">iterations</td><td style="padding:2px 3px;vertical-align:bottom">err</td><td style="padding:2px 3px;vertical-align:bottom">petsc solve cpu time</td><td style="padding:2px 3px;vertical-align:bottom">speedup</td><td style="padding:2px 3px;vertical-align:bottom">efficiency</td></tr><tr style="height:21px"><td style="padding:2px 3px;vertical-align:bottom">1</td><td style="padding:2px 3px;vertical-align:bottom">10</td><td style="padding:2px 3px;vertical-align:bottom">4.54E-04</td><td style="padding:2px 3px;vertical-align:bottom">10.68</td><td style="padding:2px 3px;vertical-align:bottom">1</td><td></td></tr><tr style="height:21px"><td style="padding:2px 3px;vertical-align:bottom">2</td><td style="padding:2px 3px;vertical-align:bottom">11</td><td style="padding:2px 3px;vertical-align:bottom">9.55E-04</td><td style="padding:2px 3px;vertical-align:bottom">8.2</td><td style="padding:2px 3px;vertical-align:bottom">1.30</td><td style="padding:2px 3px;vertical-align:bottom">0.65</td></tr><tr style="height:21px"><td style="padding:2px 3px;vertical-align:bottom">4</td><td style="padding:2px 
3px;vertical-align:bottom">12</td><td style="padding:2px 3px;vertical-align:bottom">3.59E-04</td><td style="padding:2px 3px;vertical-align:bottom">5.26</td><td style="padding:2px 3px;vertical-align:bottom">2.03</td><td style="padding:2px 3px;vertical-align:bottom">0.50</td></tr></tbody></table></blockquote></div><div class="gmail_extra">What are the main differences between those two? Thanks.</div><div class="gmail_extra"><br></div><div class="gmail_extra">Would you please take a look at my profiling data? Do you think this is the best parallel efficiency I can get from PETSc? How can I improve it?</div><div class="gmail_extra"><br></div><div class="gmail_extra">Best,</div><div class="gmail_extra"><br></div><div class="gmail_extra">Lei Shi</div><div class="gmail_extra"><br clear="all"><div><div>Sincerely Yours,<br><br>Lei Shi <br>---------</div></div>
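To make the two speedup definitions above concrete, here is a minimal Python sketch. It uses the timings from the ASM-as-preconditioner table above; the function names are just for illustration, not anything from PETSc.

```python
# Strong-scaling speedup computed two ways, using the wall-clock
# times from the ASM-as-preconditioner-only table (cores -> seconds).
wclock = {1: 10.68, 2: 8.2, 4: 5.26}

def speedup_vs_serial(t, cores):
    """Classic definition: the reference is the 1-core run."""
    return t[1] / t[cores]

def speedup_vs_baseline(t, cores, base):
    """Reference is a 'base'-core run: speedup = base * t[base] / t[cores]."""
    return base * t[base] / t[cores]

for p in (2, 4):
    s = speedup_vs_serial(wclock, p)
    print(f"{p} cores: speedup {s:.2f}, efficiency {s / p:.2f}")
```

With the 4-core run as the baseline, the 16-core speedup in the formula above would be `speedup_vs_baseline(wclock, 16, 4)` once a 16-core timing is available.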
<br><div class="gmail_quote">On Thu, Jun 25, 2015 at 5:33 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span><br>
> On Jun 25, 2015, at 3:48 PM, Lei Shi <<a href="mailto:stoneszone@gmail.com" target="_blank">stoneszone@gmail.com</a>> wrote:<br>
><br>
> Hi Justin,<br>
><br>
> Thanks for your suggestion. I will test it asap.<br>
><br>
> Another thing confusing me is that the wall-clock time with 2 cores is almost the same as the serial run when I use ASM with -sub_ksp_type gmres and ILU(0) on the subdomains. The serial run takes 11.95 sec and the parallel run takes 10.5 sec; there is almost no speedup at all.<br>
<br>
</span> On one process ASM is ILU(0), so the setup time is one ILU(0) factorization of the entire matrix. On two processes, because of the overlap of 1, ILU(0) is run on matrices that are each more than 1/2 the size of the full matrix; in particular, for small problems the overlap will pull in most of the matrix, so the setup time is not 1/2 of the one-process setup time. Then the number of iterations increases a good amount in going from 1 to 2 processes. In combination this means that ASM going from one to two processes requires on each process much more than 1/2 the work of running on 1 process, so you should not expect great speedup in going from one to two processes. </blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span><br>
<br>
><br>
> And I found that some other people got similarly bad speedups when comparing 2 cores with 1 core. Attached is one slide from J.A. Davis's presentation; I found it on the web. As you can see, ASM with 2 cores takes almost the same CPU time as 1 core too! Maybe I am misunderstanding something fundamental about ASM.<br>
><br>
> cores iterations err petsc solve wclock time speedup efficiency<br>
> 1 2 1.15E-04 11.95 1<br>
> 2 5 2.05E-02 10.5 1.01 0.50<br>
> 4 6 2.19E-02 7.64 1.39 0.34<br>
><br>
><br>
><br>
><br>
><br>
><br>
><br>
</span>> <Screenshot - 06252015 - 03:44:53 PM.png><br>
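Barry's overlap argument above can be sketched with rough arithmetic. The fractions below are illustrative assumptions, not measured data: they only show why overlapping subdomains cap the achievable setup speedup on 2 processes.

```python
# Rough cost model for ASM setup on 2 processes with overlap 1.
# local_fraction is an assumed value: on a small mesh, overlap can
# easily pull ~70% of the global rows into each subdomain.
n = 1.0                 # normalized global matrix size
local_fraction = 0.7    # assumed fraction of rows per overlapped subdomain

# ILU(0) work is roughly linear in the number of local rows (same
# sparsity pattern), and the two factorizations run concurrently,
# so the wall-clock setup times compare as:
setup_parallel = local_fraction * n   # per process, done in parallel
setup_serial = 1.0 * n                # one ILU(0) of the whole matrix

best_setup_speedup = setup_serial / setup_parallel
print(f"best possible setup speedup on 2 ranks: {best_setup_speedup:.2f}x")
```

Under these assumptions the setup can speed up by at most ~1.4x, and the extra Krylov iterations eat into even that.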
<div><div>> <br>
><br>
> Sincerely Yours,<br>
><br>
> Lei Shi<br>
> ---------<br>
><br>
> On Thu, Jun 25, 2015 at 3:34 PM, Justin Chang <<a href="mailto:jychang48@gmail.com" target="_blank">jychang48@gmail.com</a>> wrote:<br>
> Hi Lei,<br>
><br>
> Depending on your machine and MPI library, you may have to use smart process-to-core/socket bindings to achieve better speedup. Instructions can be found here:<br>
><br>
> <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html#computers" rel="noreferrer" target="_blank">http://www.mcs.anl.gov/petsc/documentation/faq.html#computers</a><br>
><br>
><br>
> Justin<br>
><br>
> On Thu, Jun 25, 2015 at 3:24 PM, Lei Shi <<a href="mailto:stoneszone@gmail.com" target="_blank">stoneszone@gmail.com</a>> wrote:<br>
> Hi Matt,<br>
><br>
> Thanks for your suggestions. Here is the output from the STREAMS test on one node, which has 20 cores; I ran it with up to 20 processes. Attached is the dumped output with your suggested options. Really appreciate your help!<br>
><br>
> Number of MPI processes 1<br>
> Function Rate (MB/s)<br>
> Copy: 13816.9372<br>
> Scale: 8020.1809<br>
> Add: 12762.3830<br>
> Triad: 11852.5016<br>
><br>
> Number of MPI processes 2<br>
> Function Rate (MB/s)<br>
> Copy: 22748.7681<br>
> Scale: 14081.4906<br>
> Add: 18998.4516<br>
> Triad: 18303.2494<br>
><br>
> Number of MPI processes 3<br>
> Function Rate (MB/s)<br>
> Copy: 34045.2510<br>
> Scale: 23410.9767<br>
> Add: 30320.2702<br>
> Triad: 30163.7977<br>
><br>
> Number of MPI processes 4<br>
> Function Rate (MB/s)<br>
> Copy: 36875.5349<br>
> Scale: 29440.1694<br>
> Add: 36971.1860<br>
> Triad: 37377.0103<br>
><br>
> Number of MPI processes 5<br>
> Function Rate (MB/s)<br>
> Copy: 32272.8763<br>
> Scale: 30316.3435<br>
> Add: 38022.0193<br>
> Triad: 38815.4830<br>
><br>
> Number of MPI processes 6<br>
> Function Rate (MB/s)<br>
> Copy: 35619.8925<br>
> Scale: 34457.5078<br>
> Add: 41419.3722<br>
> Triad: 35825.3621<br>
><br>
> Number of MPI processes 7<br>
> Function Rate (MB/s)<br>
> Copy: 55284.2420<br>
> Scale: 47706.8009<br>
> Add: 59076.4735<br>
> Triad: 61680.5559<br>
><br>
> Number of MPI processes 8<br>
> Function Rate (MB/s)<br>
> Copy: 44525.8901<br>
> Scale: 48949.9599<br>
> Add: 57437.7784<br>
> Triad: 56671.0593<br>
><br>
> Number of MPI processes 9<br>
> Function Rate (MB/s)<br>
> Copy: 34375.7364<br>
> Scale: 29507.5293<br>
> Add: 45405.3120<br>
> Triad: 39518.7559<br>
><br>
> Number of MPI processes 10<br>
> Function Rate (MB/s)<br>
> Copy: 34278.0415<br>
> Scale: 41721.7843<br>
> Add: 46642.2465<br>
> Triad: 45454.7000<br>
><br>
> Number of MPI processes 11<br>
> Function Rate (MB/s)<br>
> Copy: 38093.7244<br>
> Scale: 35147.2412<br>
> Add: 45047.0853<br>
> Triad: 44983.2013<br>
><br>
> Number of MPI processes 12<br>
> Function Rate (MB/s)<br>
> Copy: 39750.8760<br>
> Scale: 52038.0631<br>
> Add: 55552.9503<br>
> Triad: 54884.3839<br>
><br>
> Number of MPI processes 13<br>
> Function Rate (MB/s)<br>
> Copy: 60839.0248<br>
> Scale: 74143.7458<br>
> Add: 85545.3135<br>
> Triad: 85667.6551<br>
><br>
> Number of MPI processes 14<br>
> Function Rate (MB/s)<br>
> Copy: 37766.2343<br>
> Scale: 40279.1928<br>
> Add: 49992.8572<br>
> Triad: 50303.4809<br>
><br>
> Number of MPI processes 15<br>
> Function Rate (MB/s)<br>
> Copy: 49762.3670<br>
> Scale: 59077.8251<br>
> Add: 60407.9651<br>
> Triad: 61691.9456<br>
><br>
> Number of MPI processes 16<br>
> Function Rate (MB/s)<br>
> Copy: 31996.7169<br>
> Scale: 36962.4860<br>
> Add: 40183.5060<br>
> Triad: 41096.0512<br>
><br>
> Number of MPI processes 17<br>
> Function Rate (MB/s)<br>
> Copy: 36348.3839<br>
> Scale: 39108.6761<br>
> Add: 46853.4476<br>
> Triad: 47266.1778<br>
><br>
> Number of MPI processes 18<br>
> Function Rate (MB/s)<br>
> Copy: 40438.7558<br>
> Scale: 43195.5785<br>
> Add: 53063.4321<br>
> Triad: 53605.0293<br>
><br>
> Number of MPI processes 19<br>
> Function Rate (MB/s)<br>
> Copy: 30739.4908<br>
> Scale: 34280.8118<br>
> Add: 40710.5155<br>
> Triad: 43330.9503<br>
><br>
> Number of MPI processes 20<br>
> Function Rate (MB/s)<br>
> Copy: 37488.3777<br>
> Scale: 41791.8999<br>
> Add: 49518.9604<br>
> Triad: 48908.2677<br>
> ------------------------------------------------<br>
> np speedup<br>
> 1 1.0<br>
> 2 1.54<br>
> 3 2.54<br>
> 4 3.15<br>
> 5 3.27<br>
> 6 3.02<br>
> 7 5.2<br>
> 8 4.78<br>
> 9 3.33<br>
> 10 3.84<br>
> 11 3.8<br>
> 12 4.63<br>
> 13 7.23<br>
> 14 4.24<br>
> 15 5.2<br>
> 16 3.47<br>
> 17 3.99<br>
> 18 4.52<br>
> 19 3.66<br>
> 20 4.13<br>
><br>
><br>
><br>
><br>
><br>
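One way to read the STREAMS numbers above: sparse matrix-vector products and triangular solves are typically memory-bandwidth bound, so the ratio of Triad rates gives a rough upper bound on the speedup a Krylov solve can achieve. A small sketch, with a few Triad figures copied from the output above:

```python
# Bandwidth-bound speedup estimate from STREAMS Triad rates (MB/s),
# values copied from the STREAMS output above.
triad = {1: 11852.5016, 2: 18303.2494, 4: 37377.0103, 16: 41096.0512}

def bandwidth_speedup(rates, p):
    """Rough upper bound on speedup for a memory-bound kernel on p processes."""
    return rates[p] / rates[1]

for p in (2, 4, 16):
    print(f"{p} processes: at most ~{bandwidth_speedup(triad, p):.2f}x")
```

This reproduces the "np speedup" column above (e.g. ~1.54x on 2 processes), which is why even a perfectly scaling solver would see modest gains on one node.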
</div></div><div><div>> Sincerely Yours,<br>
><br>
> Lei Shi<br>
> ---------<br>
><br>
> On Thu, Jun 25, 2015 at 6:44 AM, Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br>
> On Thu, Jun 25, 2015 at 5:51 AM, Lei Shi <<a href="mailto:stoneszone@gmail.com" target="_blank">stoneszone@gmail.com</a>> wrote:<br>
> Hello,<br>
><br>
> 1) In order to understand this, we have to disentangle the various effects. First, run the STREAMS benchmark<br>
><br>
> make NPMAX=4 streams<br>
><br>
> This will tell you the maximum speedup you can expect on this machine.<br>
><br>
> 2) For these test cases, also send the output of<br>
><br>
> -ksp_view -ksp_converged_reason -ksp_monitor_true_residual<br>
><br>
> Thanks,<br>
><br>
> Matt<br>
><br>
> I'm trying to improve the parallel efficiency of the GMRES solve in my CFD solver, where PETSc's GMRES is used to solve the linear system generated by Newton's method. To test its efficiency, I started with a very simple inviscid subsonic 3D flow as the first test case. The parallel efficiency of the GMRES solve with ASM as the preconditioner is very bad. The results are from our latest cluster. Right now, I'm only looking at the wall-clock time of the KSP solve.<br>
> • First I tested ASM with GMRES and ILU(0) on the subdomains; the CPU time on 2 cores is almost the same as the serial run. Here are the options for this case:<br>
> -ksp_type gmres -ksp_max_it 100 -ksp_rtol 1e-5 -ksp_atol 1e-50<br>
> -ksp_gmres_restart 30 -ksp_pc_side right<br>
> -pc_type asm -sub_ksp_type gmres -sub_ksp_rtol 0.001 -sub_ksp_atol 1e-30<br>
> -sub_ksp_max_it 1000 -sub_pc_type ilu -sub_pc_factor_levels 0<br>
> -sub_pc_factor_fill 1.9<br>
> The iteration counts increase a lot for the parallel runs.<br>
> cores iterations err petsc solve wclock time speedup efficiency<br>
> 1 2 1.15E-04 11.95 1<br>
> 2 5 2.05E-02 10.5 1.01 0.50<br>
> 4 6 2.19E-02 7.64 1.39 0.34<br>
><br>
><br>
><br>
><br>
><br>
><br>
><br>
> 2. Then I tested ASM with ILU(0) as the preconditioner only; the CPU time on 2 cores is better than in the first test, but the speedup is still very bad. Here are the options I'm using:<br>
> -ksp_type gmres -ksp_max_it 100 -ksp_rtol 1e-5 -ksp_atol 1e-50<br>
> -ksp_gmres_restart 30 -ksp_pc_side right<br>
> -pc_type asm -sub_pc_type ilu -sub_pc_factor_levels 0 -sub_pc_factor_fill 1.9<br>
> cores iterations err petsc solve cpu time speedup efficiency<br>
> 1 10 4.54E-04 10.68 1<br>
> 2 11 9.55E-04 8.2 1.30 0.65<br>
> 4 12 3.59E-04 5.26 2.03 0.50<br>
><br>
><br>
><br>
><br>
><br>
><br>
><br>
> Those results are from a third-order DG scheme with a very coarse 3D mesh (480 elements). I believe I should get some speedup for this test even on this coarse mesh.<br>
><br>
> My question is why does ASM with a local solve take a much longer time than ASM as a preconditioner only? Also, the accuracy is much worse. I have tested changing the overlap of ASM to 2, but that made it even worse.<br>
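One plausible reading of the timing gap described above, as a toy cost model: with -sub_ksp_type gmres, each outer GMRES iteration runs an inner GMRES on every subdomain, so ILU(0) is applied many times per outer iteration instead of once. All counts below are assumed, purely for illustration.

```python
# Toy comparison of preconditioner-application work (assumed counts,
# not measured): ASM-as-preconditioner applies ILU(0) once per outer
# iteration; ASM with an inner GMRES applies it inner_its times.
outer_its_pc_only = 11      # outer iterations, 2 cores, from the table above
outer_its_inner_gmres = 5   # fewer outer iterations with a local solve...
inner_its = 8               # ...but several inner sweeps each (assumed)

work_pc_only = outer_its_pc_only * 1
work_inner_gmres = outer_its_inner_gmres * inner_its
print(f"relative PC-application work: {work_inner_gmres / work_pc_only:.1f}x")
```

So even when the local solve cuts the outer iteration count in half, the inner sweeps can make each outer iteration several times more expensive.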
><br>
> If I use a larger mesh (~4000 elements), the 2nd case with ASM as the preconditioner gives me a better speedup, but still not a very good one.<br>
><br>
> cores iterations err petsc solve cpu time speedup efficiency<br>
> 1 7 1.91E-02 97.32 1<br>
> 2 7 2.07E-02 64.94 1.5 0.74<br>
> 4 7 2.61E-02 36.97 2.6 0.65<br>
><br>
><br>
> Attached are the log_summary outputs dumped from PETSc; any suggestions are welcome. I really appreciate it.<br>
><br>
><br>
> Sincerely Yours,<br>
><br>
> Lei Shi<br>
> ---------<br>
><br>
><br>
><br>
><br>
</div></div><div><div>> --<br>
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
> -- Norbert Wiener<br>
><br>
><br>
><br>
<br>
</div></div></blockquote></div><br></div></div>