Just FYI,

From my knowledge, at least one answer to the question in that thread is absolutely wrong, according to the HW information on hand. Some of the information in that thread is not applicable across the board, and the original question (a threaded application) is not answered.

Whether to use numactl on a NUMA system is situation dependent. In general, numactl is bad if you oversubscribe the system.
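
For readers who have not used it, here is a minimal sketch of the kind of binding being discussed, on a hypothetical two-node NUMA box (my_app is a placeholder, not anyone's actual program):

  numactl --hardware                               # show the nodes, their cores and memory
  numactl --cpunodebind=0 --membind=0 ./my_app     # run on node 0's cores, allocate from node 0's memory
  mpiexec -n 4 numactl --interleave=all ./my_app   # or interleave pages across nodes instead of binding

Whether any of this helps depends on how many processes you start: bind more processes to a node than it has cores and the pinning only makes the oversubscription worse, which is the situation tan is warning about.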

tan

--- On Tue, 7/15/08, Robert Kubrick <robertkubrick@gmail.com> wrote:
<BLOCKQUOTE style="PADDING-LEFT: 5px; MARGIN-LEFT: 5px; BORDER-LEFT: rgb(16,16,255) 2px solid">From: Robert Kubrick <robertkubrick@gmail.com><BR>Subject: Re: [mpich-discuss] Why is my quad core slower than cluster<BR>To: mpich-discuss@mcs.anl.gov<BR>Date: Tuesday, July 15, 2008, 4:06 PM<BR><BR>
<DIV id=yiv1508286376>A recent (long) discussion about numactl and taskset on the beowulf mailing list:
<DIV><A href="http://www.beowulf.org/archive/2008-June/021810.html" target=_blank rel=nofollow>http://www.beowulf.org/archive/2008-June/021810.html</A></DIV>
<DIV><BR>
<DIV>
<DIV>On Jul 15, 2008, at 1:35 PM, chong tan wrote:</DIV><BR class=Apple-interchange-newline>
<BLOCKQUOTE type="cite">
<TABLE style="FONT-SIZE: 10pt; WIDTH: 100%; COLOR: #000000; FONT-FAMILY: arial; BACKGROUND-COLOR: transparent" cellSpacing=0 cellPadding=0 border=0>
<TBODY>
<TR>
<TD vAlign=top>
<DIV id=yiv1408584783>
<P>Eric,</P>
<P>I know you are referring me as the one not sharing. I am no expert on MP, but someone who have done his homeworks. I like to share, but the NDAs and company policy say no. </P>
<P>You have good points and did some good experiements. That is what I expect most MP designers and users to have done at the first place.</P>

The answers to the original question are simple:

- On the 2X quad you have one memory system, while on the cluster you have 8 memory systems; the total bandwidth favors the cluster considerably.
- On the cluster there is no way for the process to be context switched, while that can happen on the 2X quad. When that happens, life is bad.
- The only things that favor the SMP box are the cost of communication and shared memory.

There are more factors; the art is balancing them in your favor. In a way, the x86 quads are not designed to let us load them up with fat and heavy processes. That is what I have been saying all along: know your HW first. Your MP solution should come second. Whatever utilities you can find will help put the solution together.

So, the problem is not MPI in this case.

tan

--- On Mon, 7/14/08, Eric A. Borisch <eborisch@ieee.org> wrote:
<BLOCKQUOTE style="PADDING-LEFT: 5px; MARGIN-LEFT: 5px; BORDER-LEFT: rgb(16,16,255) 2px solid">From: Eric A. Borisch <<A href="mailto:eborisch@ieee.org" target=_blank rel=nofollow>eborisch@ieee.org</A>><BR>Subject: Re: [mpich-discuss] Why is my quad core slower than cluster<BR>To: <A href="mailto:mpich-discuss@mcs.anl.gov" target=_blank rel=nofollow>mpich-discuss@mcs.anl.gov</A><BR>Date: Monday, July 14, 2008, 9:36 PM<BR><BR>
<DIV id=yiv1302378445><SPAN class=Apple-style-span style="FONT-SIZE: small"><FONT class=Apple-style-span face="arial, sans-serif">Gus,</FONT></SPAN>
<DIV><BR></DIV>
<DIV>Information sharing is truly the point of the mailing list. Useful messages should ask questions or provide answers! :)</DIV>
<DIV><BR></DIV>

Someone mentioned STREAM (memory bandwidth) benchmarks a little while back. I ran them when our new system came in a while ago, so I dug them back out.

STREAM can be compiled to use MPI, but MPI is used only for synchronization; the benchmark is still a memory bus test. Each task streams through its own memory, so this is not an MPI communication test.
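
For anyone who has not looked at the benchmark, the sketch below shows the kind of per-process kernel being timed. It is illustrative only, not the actual STREAM source; the array size and the Triad operation follow the runs below, and MPI_Barrier is used purely to start every rank together.

/* stream_like.c: rough sketch of a Triad-style bandwidth probe (not STREAM itself) */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 20000000              /* 20M doubles per array, as in the runs below */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    MPI_Barrier(MPI_COMM_WORLD);          /* MPI only lines the ranks up...      */
    double t0 = MPI_Wtime();
    for (long i = 0; i < N; i++)          /* ...each rank then streams through   */
        a[i] = b[i] + 3.0 * c[i];         /* its own memory: two loads, one store */
    MPI_Barrier(MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0)                        /* 3 arrays touched per rank, MB = 1e6 bytes */
        printf("%d ranks: ~%.1f MB/s aggregate\n", size,
               size * 3.0 * N * sizeof(double) / (t1 - t0) / 1e6);

    free(a); free(b); free(c);
    MPI_Finalize();
    return 0;
}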

My results on a dual E5472 machine (two quad-core 3 GHz packages; 1600 MHz bus; 8 cores total):

Results (each set is [1..8] processes, in order), double-precision array size = 20,000,000, run through 10 times.

Function     Rate (MB/s)   Avg time   Min time   Max time
Copy:          2962.6937     0.1081     0.1080     0.1081
Copy:          5685.3008     0.1126     0.1126     0.1128
Copy:          5484.6846     0.1751     0.1750     0.1751
Copy:          7085.7959     0.1809     0.1806     0.1817
Copy:          5981.6033     0.2676     0.2675     0.2676
Copy:          7071.2490     0.2718     0.2715     0.2722
Copy:          6537.4934     0.3427     0.3426     0.3428
Copy:          7423.4545     0.3451     0.3449     0.3455

Scale:         3011.8445     0.1063     0.1062     0.1063
Scale:         5675.8162     0.1128     0.1128     0.1129
Scale:         5474.8854     0.1754     0.1753     0.1754
Scale:         7068.6204     0.1814     0.1811     0.1819
Scale:         5974.6112     0.2679     0.2678     0.2680
Scale:         7063.8307     0.2721     0.2718     0.2725
Scale:         6533.4473     0.3430     0.3429     0.3431
Scale:         7418.6128     0.3453     0.3451     0.3456

Add:           3184.3129     0.1508     0.1507     0.1508
Add:           5892.1781     0.1631     0.1629     0.1633
Add:           5588.0229     0.2577     0.2577     0.2578
Add:           7275.0745     0.2642     0.2639     0.2646
Add:           6175.7646     0.3887     0.3886     0.3889
Add:           7262.7112     0.3970     0.3965     0.3976
Add:           6687.7658     0.5025     0.5024     0.5026
Add:           7599.2516     0.5057     0.5053     0.5062

Triad:         3224.7856     0.1489     0.1488     0.1489
Triad:         6021.2613     0.1596     0.1594     0.1598
Triad:         5609.9260     0.2567     0.2567     0.2568
Triad:         7293.2790     0.2637     0.2633     0.2641
Triad:         6185.4376     0.3881     0.3880     0.3881
Triad:         7279.1231     0.3958     0.3957     0.3961
Triad:         6691.8560     0.5022     0.5021     0.5022
Triad:         7604.1238     0.5052     0.5050     0.5057

These work out to (~):

1x, 1.9x, 1.8x, 2.3x, 1.9x, 2.2x, 2.1x, 2.4x

for [1..8] cores.

As you can see, it doesn't take eight cores to saturate the bus, even with a 1600 MHz bus. Four of the eight cores running does the trick.

With all that said, there are still advantages to be had from the multicore chips, but only if you're not blowing full tilt through memory. If the problem allows it, do more work inside one loop rather than running multiple loops over the same memory.
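
As a hypothetical illustration of that last point (the function and array names here are made up, not from anyone's actual code), fusing two passes over the same arrays into one pass does the same arithmetic with half the trips through memory:

/* Two passes over the data: every array is streamed twice. */
void two_passes(double *a, const double *b, long n)
{
    for (long i = 0; i < n; i++) a[i] = b[i] * 2.0;    /* pass 1: read b, write a */
    for (long i = 0; i < n; i++) a[i] = a[i] + 1.0;    /* pass 2: re-streams a    */
}

/* One fused pass: same result, half the memory traffic. */
void one_pass(double *a, const double *b, long n)
{
    for (long i = 0; i < n; i++) a[i] = b[i] * 2.0 + 1.0;
}

Roughly speaking, a memory-bound kernel sees close to a 2x win from this kind of fusion; a compute-bound one barely notices.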

For reference, here's what the osu_mbw_mr test (from MVAPICH2 1.0.2; I also have a cluster running nearby :) ), compiled against MPICH2 1.0.7rc1 with the nemesis channel, delivers for one/two/four pairs (2/4/8 processes) of producers/consumers:
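
(If you want to reproduce this, the launcher just needs an even process count; the test splits the ranks into producer/consumer pairs, so 2/4/8 processes give the 1/2/4 pairs shown below. The ./osu_mbw_mr path is an assumption; adjust it to wherever your build put the binary.)

  mpiexec -n 2 ./osu_mbw_mr    # 1 pair
  mpiexec -n 4 ./osu_mbw_mr    # 2 pairs
  mpiexec -n 8 ./osu_mbw_mr    # 4 pairs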

# OSU MPI Multi BW / Message Rate Test (Version 1.0)
# [ pairs: 1 ] [ window size: 64 ]

# Size    MB/sec    Messages/sec
      1      1.08      1076540.83
      2      2.14      1068102.24
      4      3.99       997382.24
      8      7.97       996419.66
     16     15.95       996567.63
     32     31.67       989660.29
     64     62.73       980084.91
    128    124.12       969676.18
    256    243.59       951527.62
    512    445.52       870159.34
   1024    810.28       791284.80
   2048   1357.25       662721.78
   4096   1935.08       472431.28
   8192   2454.29       299596.49
  16384   2717.61       165869.84
  32768   2900.23        88507.85
  65536   2279.71        34785.63
 131072   2540.51        19382.53
 262144   1335.16         5093.21
 524288   1364.05         2601.72
1048576   1378.39         1314.53
2097152   1380.78          658.41
4194304   1343.48          320.31

# OSU MPI Multi BW / Message Rate Test (Version 1.0)
# [ pairs: 2 ] [ window size: 64 ]

# Size    MB/sec    Messages/sec
      1      2.15      2150580.48
      2      4.22      2109761.12
      4      7.84      1960742.53
      8     15.80      1974733.92
     16     31.38      1961100.64
     32     62.32      1947654.32
     64    123.39      1928000.11
    128    243.19      1899957.22
    256    475.32      1856721.12
    512    856.90      1673642.10
   1024   1513.19      1477721.26
   2048   2312.91      1129351.07
   4096   2891.21       705861.12
   8192   3267.49       398863.98
  16384   3400.64       207558.54
  32768   3519.74       107413.93
  65536   3141.80        47940.04
 131072   3368.65        25700.76
 262144   2211.53         8436.31
 524288   2264.90         4319.95
1048576   2282.69         2176.94
2097152   2250.72         1073.23
4194304   2087.00          497.58

# OSU MPI Multi BW / Message Rate Test (Version 1.0)
# [ pairs: 4 ] [ window size: 64 ]

# Size    MB/sec    Messages/sec
      1      3.65      3651934.64
      2      8.16      4080341.34
      4     15.66      3914908.02
      8     31.32      3915621.85
     16     62.67      3916764.51
     32    124.37      3886426.18
     64    246.38      3849640.84
    128    486.39      3799914.44
    256    942.40      3681232.25
    512   1664.21      3250414.19
   1024   2756.50      2691891.86
   2048   3829.45      1869848.54
   4096   4465.25      1090148.56
   8192   4777.45       583184.51
  16384   4822.75       294357.30
  32768   4829.77       147392.80
  65536   4556.93        69533.18
 131072   4789.32        36539.60
 262144   3631.68        13853.75
 524288   3679.31         7017.72
1048576   3553.61         3388.99
2097152   3113.12         1484.45
4194304   2452.69          584.77

So from a messaging standpoint, you can see that you squeeze more data through with more processes. I'd guess this is because there is processing to be done within MPI to move the data, and a lot of the bookkeeping steps probably cache well (updating the same status structure on a communication multiple times, perhaps reusing the structure for subsequent transfers and finding it still in cache), so the performance scaling is not completely FSB bound.

I'm sure there are plenty of additional things that could be done here to test different CPU-to-process layouts, etc., but in testing my own real-world code I've found that, unfortunately, "it depends." I have some code that scales nearly linearly (multiple computationally expensive operations inside the innermost loop) and some that scales like the STREAM results above ("add one to the next 20 million points") ...

As always, your mileage may vary. If your speedup looks like the STREAM numbers above, you're likely memory bound. Try to reformulate your problem to go through memory more slowly but with more done on each pass, or invest in a cluster. At some point -- for some problems -- you can't beat more memory buses!

Cheers,
 Eric Borisch

--
 borisch.eric@mayo.edu
 MRI Research
 Mayo Clinic

On Mon, Jul 14, 2008 at 9:48 PM, Gus Correa <gus@ldeo.columbia.edu> wrote:

Hello Sami and list

Oh, well, as you see, an expert who claims to know the answers to these problems seems not to be willing to share those answers with less knowledgeable MPI users like us. So maybe we can find the answers ourselves, not by individual "homework" brainstorming, but through community collaboration and generous information sharing, which is the hallmark of this mailing list.

I Googled around today to find out how to assign MPI processes to specific processors, and I found some interesting information on how to do it.

Below is a link to a posting from the computational fluid dynamics (CFD) community that may be of interest. Not surprisingly, they are struggling with the same type of problems all of us have, including how to tie MPI processes to specific processors:

http://openfoam.cfd-online.com/cgi-bin/forum/board-auth.cgi?file=/1/5949.html#POST18006

I would summarize these problems as related to three types of bottleneck:

1) Multicore processor bottlenecks (standalone machines and clusters)
2) Network fabric bottlenecks (clusters)
3) File system bottlenecks (clusters)

All three types of problem are due to contention for some system resource by the MPI processes that take part in a computation/program.

Our focus on this thread, started by Zach, has been on problem 1), although most of us may need to look into problems 2) and 3) sooner or later. (I already have all three of them!)

The CFD folks use MPI as we do. They seem to use another MPI flavor, but the same problems are there. The problems are not caused by MPI itself, but they become apparent when you run MPI programs. That has been my experience too.

As for how to map MPI processes to specific processors (or cores), the key command seems to be "taskset", as my googling afternoon showed. Try "man taskset" for more info.

For a standalone machine like yours, something like the command line below should work to force execution on "processors" 0 and 2 (which in my case are two different physical CPUs):

mpiexec -n 2 taskset -c 0,2 my_mpi_program

You need to check on your computer ("more /proc/cpuinfo") which "processor" numbers correspond to separate physical CPUs. Most likely they are the even-numbered processors only, or the odd-numbered only, since you have dual-core CPUs (integers modulo 2), with "processors" 0,1 being the two cores of the first physical CPU, "processors" 2,3 the cores of the second physical CPU, and so on. At least, this is what I see on my dual-core, dual-processor machine. I would say that for quad-cores the separate physical CPUs would be processors 0,4,8, etc., or 1,5,9, etc. (integers modulo 4), with "processors" 0,1,2,3 being the four cores in the first physical CPU, and so on. In /proc/cpuinfo look for the keyword "processor". These are the numbers you need to use in "taskset -c". However, other helpful information comes in the keywords "physical id", "core id", "siblings", and "cpu cores". They will allow you to map cores and physical CPUs to the "processor" number.

The "taskset" command line above worked on one of my standalone multicore machines, and I hope a variant of it will work on your machine also. It works with the "mpiexec" that comes with the MPICH distribution, and also with the "mpiexec" associated with the Torque/PBS batch system, which is nice for clusters as well.

"taskset" can change the default behavior of the Linux scheduler, which is to allow processes to be moved from one core/CPU to another during execution. The scheduler does this to ensure optimal CPU use (i.e. load balance). With taskset you can force execution to happen on the cores you specify on the command line, i.e. you can force the "CPU affinity" you wish. Note that the "taskset" man page uses both the terms "CPU" and "processor", and doesn't use the term "core", which may be a bit confusing. Make no mistake, "processor" and "CPU" there stand for what we've been calling "core" here.

Other postings that you may find useful on closely related topics are:

http://www.ibm.com/developerworks/linux/library/l-scheduler/
http://www.cyberciti.biz/tips/setting-processor-affinity-certain-task-or-process.html

I hope this helps.

Still, we have a long way to go to sort out how much of the multicore bottleneck can be ascribed to lack of memory bandwidth, and how much may be associated with how memcpy is compiled by different compilers, or whether there are other components of this problem that we don't see now.

Maybe our community won't find a solution to Zach's problem: "Why is my quad core slower than cluster?" However, I hope that through collaboration, and by sharing information, we may be able to nail down the root of the problem, and perhaps find ways to improve the alarmingly bad performance some of us have reported on multicore machines.
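
To double-check that taskset (or any other binding method) did what you asked, each rank can report its own affinity mask. Here is a minimal sketch, assuming Linux/glibc (sched_getaffinity) and an MPI compiler wrapper; the file name and output format are made up:

/* show_affinity.c: print the CPUs each MPI rank is allowed to run on (Linux only) */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    cpu_set_t mask;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    CPU_ZERO(&mask);
    sched_getaffinity(0, sizeof(mask), &mask);   /* 0 = the calling process */

    printf("rank %d may run on CPUs:", rank);
    for (int cpu = 0; cpu < CPU_SETSIZE; cpu++)
        if (CPU_ISSET(cpu, &mask))
            printf(" %d", cpu);
    printf("\n");

    MPI_Finalize();
    return 0;
}

Launched the same way, e.g. "mpiexec -n 2 taskset -c 0,2 ./show_affinity", each rank should list only the processors you named on the taskset command line.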

Gus Correa

--
---------------------------------------------------------------------
Gustavo J. Ponce Correa, PhD - Email: gus@ldeo.columbia.edu
Lamont-Doherty Earth Observatory - Columbia University
P.O. Box 1000 [61 Route 9W] - Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------