Will the speedup measured by the streams benchmark be the upper limit on the speedup of a parallel program? I.e., suppose a program has ideal linear speedup (= 2 for np=2 when run on a machine perfect for parallelism); if it runs on your laptop, would the maximum speedup be 1.44 with np=2?

Thanks,
Qin

readonly="true"></div> <font size="2" face="Arial"> <b><span style="font-weight: bold;">From:</span></b> Barry Smith <bsmith@mcs.anl.gov><br> <b><span style="font-weight: bold;">To:</span></b> Qin Lu <lu_qin_2000@yahoo.com> <br><b><span style="font-weight: bold;">Cc:</span></b> petsc-users <petsc-users@mcs.anl.gov> <br> <b><span style="font-weight: bold;">Sent:</span></b> Thursday, May 29, 2014 5:46 PM<br> <b><span style="font-weight: bold;">Subject:</span></b> Re: [petsc-users] About parallel performance<br> </font> </div> <div class="y_msg_container"><br><br clear="none"> For the parallel case a perfect machine would have twice the memory bandwidth when using 2 cores as opposed to 1 core. For yours it is almost exactly the same. The issue is not with the MPI or software. It depends on how many memory sockets there are and how they are shared by the various cores. As I said the initial memory bandwidth for one core 21,682.
gigabytes per second is good so it is a very good sequential machine. <br clear="none"><br clear="none"> Here are the results on my laptop <br clear="none"><br clear="none">Number of MPI processes 1<br clear="none">Process 0 Barrys-MacBook-Pro.local<br clear="none">Function Rate (MB/s) <br clear="none">Copy: 7928.7346<br clear="none">Scale: 8271.5103<br clear="none">Add: 11017.0430<br clear="none">Triad: 10843.9018<br clear="none">Number of MPI processes 2<br clear="none">Process 0 Barrys-MacBook-Pro.local<br clear="none">Process 1 Barrys-MacBook-Pro.local<br clear="none">Function Rate (MB/s) <br clear="none">Copy: 13513.0365<br clear="none">Scale: 13516.7086<br clear="none">Add: 15455.3952<br clear="none">Triad: 15562.0822<br
clear="none">------------------------------------------------<br clear="none">np speedup<br clear="none">1 1.0<br clear="none">2 1.44<br clear="none"><br clear="none"><br clear="none">Note that the memory bandwidth is much lower than your machine but there is an increase in speedup from one to two cores because one core cannot utilize all the memory bandwidth. But even with two cores my laptop will be slower on PETSc then one core on your machine.<br clear="none"><br clear="none">Here is the performance on a workstation we have that has multiple CPUs and multiple memory sockets<br clear="none"><br clear="none">Number of MPI processes 1<br clear="none">Process 0 es<br clear="none">Function Rate (MB/s) <br clear="none">Copy: 13077.8260<br clear="none">Scale: 12867.1966<br clear="none">Add: 14637.6757<br clear="none">Triad: 14414.4478<br
clear="none">Number of MPI processes 2<br clear="none">Process 0 es<br clear="none">Process 1 es<br clear="none">Function Rate (MB/s) <br clear="none">Copy: 22663.3116<br clear="none">Scale: 22102.5495<br clear="none">Add: 25768.1550<br clear="none">Triad: 26076.0410<br clear="none">Number of MPI processes 3<br clear="none">Process 0 es<br clear="none">Process 1 es<br clear="none">Process 2 es<br clear="none">Function Rate (MB/s) <br clear="none">Copy: 27501.7610<br clear="none">Scale: 26971.2183<br clear="none">Add: 30433.3276<br clear="none">Triad: 31302.9396<br clear="none">Number of MPI processes 4<br clear="none">Process 0 es<br clear="none">Process 1 es<br clear="none">Process 2 es<br clear="none">Process 3 es<br
clear="none">Function Rate (MB/s) <br clear="none">Copy: 29302.3183<br clear="none">Scale: 30165.5295<br clear="none">Add: 34577.3458<br clear="none">Triad: 35195.8067<br clear="none">------------------------------------------------<br clear="none">np speedup<br clear="none">1 1.0<br clear="none">2 1.81<br clear="none">3 2.17<br clear="none">4 2.44<br clear="none"><br clear="none">Note that one core has a lower memory bandwidth than your machine but as I add more cores the memory bandwidth increases by a factor of 2.4<br clear="none"><br clear="none">There is nothing wrong with your machine, it is just not suitable to run sparse linear algebra on multiple cores for it.<br clear="none"><br clear="none"> Barry<br clear="none"><br clear="none"><div class="qtdSeparateBR"><br><br></div><div id="yqtfd88425" class="yqt3166546604"><br
clear="none">On May 29, 2014, at 5:15 PM, Qin Lu <<a href="mailto:lu_qin_2000@yahoo.com" shape="rect" ymailto="mailto:lu_qin_2000@yahoo.com">lu_qin_2000@yahoo.com</a>> wrote:<br clear="none"><br clear="none">> Barry,<br clear="none">> <br clear="none">> How did you read the test results? For a machine good for parallism, should the data of np=2 be about half of the those of np=1?<br clear="none">> <br clear="none">> The machine has very new Intel chips and is very for serial run. What may cause the bad parallism? - the configurations of the machine, or I am using a MPI lib (MPICH2) that was not built correctly?<br clear="none">> Many thanks,<br clear="none">> Qin<br clear="none">> <br clear="none">> ----- Original Message -----<br clear="none">> From: Barry Smith <<a href="mailto:bsmith@mcs.anl.gov" shape="rect" ymailto="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>><br clear="none">>
> To: Qin Lu <lu_qin_2000@yahoo.com>; petsc-users <petsc-users@mcs.anl.gov>
> Cc: 
> Sent: Thursday, May 29, 2014 4:54 PM
> Subject: Re: [petsc-users] About parallel performance
> 
> 
> In that PETSc version BasicVersion is actually the MPI streams benchmark, so you ran the right thing. Your machine is totally worthless for sparse linear algebra parallelism. The entire memory bandwidth is used by the first core, so adding the second core to the computation gives you no improvement at all in the streams benchmark.
> 
> But the single core memory bandwidth is pretty good, so for problems that don't need parallelism you should get good performance.
> 
> Barry
> 
> 
> 
> On May 29, 2014, at 4:37 PM, Qin Lu <lu_qin_2000@yahoo.com> wrote:
> 
>> Barry,
>> 
>> I have PETSc-3.4.2 and I didn't see MPIVersion there; do you mean BasicVersion? I built and ran it (if you did mean MPIVersion, I will get PETSc-3.4 later):
>> 
>> =================
>> [/petsc-3.4.2-64bit/src/benchmarks/streams]$ mpiexec -n 1 ./BasicVersion
>> Number of MPI processes 1
>> Function      Rate (MB/s)
>> Copy:      21682.9932
>> Scale:     21637.5509
>> Add:       21583.0395
>> Triad:     21504.6563
>> [/petsc-3.4.2-64bit/src/benchmarks/streams]$ mpiexec -n 2 ./BasicVersion
>> Number of MPI processes 2
>> Function      Rate (MB/s)
>> Copy:      21369.6976
>> Scale:     21632.3203
>> Add:       22203.7107
>> Triad:     22305.1841
>> =======================
>> 
>> Thanks a lot,
>> Qin
>> 
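(The Triad ratio on this machine is 22305.1841 / 21504.6563 ≈ 1.04: the second core adds essentially no memory bandwidth, which is the behavior Barry describes above.)
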
shape="rect" ymailto="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>><br clear="none">>> To: Qin Lu <<a href="mailto:lu_qin_2000@yahoo.com" shape="rect" ymailto="mailto:lu_qin_2000@yahoo.com">lu_qin_2000@yahoo.com</a>> <br clear="none">>> Cc: "<a href="mailto:petsc-users@mcs.anl.gov" shape="rect" ymailto="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>" <<a href="mailto:petsc-users@mcs.anl.gov" shape="rect" ymailto="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>> <br clear="none">>> Sent: Thursday, May 29, 2014 4:17 PM<br clear="none">>> Subject: Re: [petsc-users] About parallel performance<br clear="none">>> <br clear="none">>> <br clear="none">>> <br clear="none">>> You need to run the streams benchmarks are one and two processes to see how the memory bandwidth changes. If you are using petsc-3.4 you can <br clear="none">>> <br
clear="none">>> cd src/benchmarks/streams/ <br clear="none">>> <br clear="none">>> make MPIVersion<br clear="none">>> <br clear="none">>> mpiexec -n 1 ./MPIVersion<br clear="none">>> <br clear="none">>> mpiexec -n 2 ./MPIVersion <br clear="none">>> <br clear="none">>> and send all the results<br clear="none">>> <br clear="none">>> Barry<br clear="none">>> <br clear="none">>> <br clear="none">>> <br clear="none">>> On May 29, 2014, at 4:06 PM, Qin Lu <<a href="mailto:lu_qin_2000@yahoo.com" shape="rect" ymailto="mailto:lu_qin_2000@yahoo.com">lu_qin_2000@yahoo.com</a>> wrote:<br clear="none">>> <br clear="none">>>> For now I only care about the CPU of PETSc subroutines. I tried to add PetscLogEventBegin/End and the results are consistent with the log_summary attached in my first email.<br
clear="none">>>> <br clear="none">>>> The CPU of MatSetValues and MatAssemblyBegin/End of both p1 and p2 runs are small (< 20 sec). The CPU of PCSetup/PCApply are about the same between p1 and p2 (~120 sec). The CPU of KSPSolve of p2 (143 sec) is a little faster than p1's (176 sec), but p2 spent more time in MatGetSubMatrice (43 sec). So the total CPU of PETSc subtroutines are about the same between p1 and p2 (502 sec vs. 488 sec).<br clear="none">>>> <br clear="none">>>> It seems I need a more efficient parallel preconditioner. Do you have any suggestions for that?<br clear="none">>>> <br clear="none">>>> Many thanks,<br clear="none">>>> Qin<br clear="none">>>> <br clear="none">>>> ----- Original Message -----<br clear="none">>>> From: Barry Smith <<a href="mailto:bsmith@mcs.anl.gov" shape="rect"
ymailto="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>><br clear="none">>>> To: Qin Lu <<a href="mailto:lu_qin_2000@yahoo.com" shape="rect" ymailto="mailto:lu_qin_2000@yahoo.com">lu_qin_2000@yahoo.com</a>><br clear="none">>>> Cc: "<a href="mailto:petsc-users@mcs.anl.gov" shape="rect" ymailto="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>" <<a href="mailto:petsc-users@mcs.anl.gov" shape="rect" ymailto="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>><br clear="none">>>> Sent: Thursday, May 29, 2014 2:12 PM<br clear="none">>>> Subject: Re: [petsc-users] About parallel performance<br clear="none">>>> <br clear="none">>>> <br clear="none">>>> You need to determine where the other 80% of the time is. My guess it is in setting the values into the matrix each time. Use PetscLogEventRegister() and put a PetscLogEventBegin/End() around
the code that computes all the entries in the matrix and calls MatSetValues() and MatAssemblyBegin/End().<br clear="none">>>> <br clear="none">>>> Likely the reason the linear solver does not scale better is that you have a machine with multiple cores that share the same memory bandwidth and the first core is already using well over half the memory bandwidth so the second core cannot be fully utilized since both cores have to wait for data to arrive from memory. If you are using the development version of PETSc you can run make streams NPMAX=2 from the PETSc root directory and send this to us to confirm this.<br clear="none">>>> <br clear="none">>>> Barry<br clear="none">>>> <br clear="none">>>> <br clear="none">>>> <br clear="none">>>> <br clear="none">>>> <br clear="none">>>> On May 29, 2014, at 1:23 PM, Qin Lu <<a
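
A minimal sketch of that instrumentation (the event name "MatrixFill" is illustrative, and A stands for the Mat being assembled):

    PetscLogEvent  fill_event;
    PetscErrorCode ierr;

    /* register a custom event once, during setup */
    ierr = PetscLogEventRegister("MatrixFill", MAT_CLASSID, &fill_event);CHKERRQ(ierr);

    ierr = PetscLogEventBegin(fill_event, 0, 0, 0, 0);CHKERRQ(ierr);
    /* ... compute all matrix entries and call MatSetValues() here ... */
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = PetscLogEventEnd(fill_event, 0, 0, 0, 0);CHKERRQ(ierr);

The registered event then shows up as its own line in the -log_summary output.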
href="mailto:lu_qin_2000@yahoo.com" shape="rect" ymailto="mailto:lu_qin_2000@yahoo.com">lu_qin_2000@yahoo.com</a>> wrote:<br clear="none">>>> <br clear="none">>>>> Hello,<br clear="none">>>>> <br clear="none">>>>> I implemented PETSc parallel linear solver in a program, the implementation is basically the same as /src/ksp/ksp/examples/tutorials/ex2.c, i.e., I preallocated the MatMPIAIJ, and let PETSc partition the matrix through MatGetOwnershipRange. However, a few tests shows the parallel solver is always a little slower the serial solver (I have excluded the matrix generation CPU).<br clear="none">>>>> <br clear="none">>>>> For serial run I used PCILU as preconditioner; for parallel run, I used ASM with ILU(0) at each subblocks (-sub_pc_type ilu -sub_ksp_type preonly -ksp_type bcgs -pc_type asm). The number of unknowns are around 200,000.<br clear="none">>>>>
<br clear="none">>>>> I have used -log_summary to print out the performance summary as attached (log_summary_p1 for serial run and log_summary_p2 for the run with 2 processes). It seems the KSPSolve counts only for less than 20% of Global %T. <br clear="none">>>>> My questions are:<br clear="none">>>>> <br clear="none">>>>> 1. what is the bottle neck of the parallel run according to the summary?<br clear="none">>>>> 2. Do you have any suggestions to improve the parallel performance?<br clear="none">>>>> <br clear="none">>>>> Thanks a lot for your suggestions!<br clear="none">>>>> <br clear="none">>>>> Regards,<br clear="none">>>>> Qin <log_summary_p1.txt><log_summary_p2.txt> <br clear="none"></div><br><br></div> </div> </div> </div></body></html>