<div dir="ltr">I get it, thanks, that's a strong argument i will tell my advisor about<div><br></div><div>Have a great day,</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Apr 25, 2018 at 12:30 PM, Smith, Barry F. <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5"><br>
<br>
> On Apr 25, 2018, at 2:12 PM, Manuel Valera <mvalera-w@sdsu.edu> wrote:
>
> Hi, and thanks for the quick answer,
>
> Yes, it looks like I am using MPICH for my configure instead of the system installation of OpenMPI. In the past I had a better experience with MPICH, but maybe this creates a conflict now; should I reconfigure using the system MPI installation?
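>
> (For reference, a reconfigure against the system OpenMPI would look roughly like the line below, with the remaining configure options unchanged; the path is only a placeholder for wherever the system OpenMPI is installed:)
>
>   ./configure --with-mpi-dir=/path/to/system/openmpi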
>
> I worked around the problem by logging into the nodes I wanted to use and running the make streams tests there, but I get the following:
>
> np speedup
> 1 1.0
> 2 1.51
> 3 2.17
> 4 2.66
> 5 2.87
> 6 3.06
> 7 3.44
> 8 3.84
> 9 3.81
> 10 3.17
> 11 3.69
> 12 3.81
> 13 3.26
> 14 3.51
> 15 3.61
> 16 3.81
> 17 3.8
> 18 3.64
> 19 3.48
> 20 4.01
>
> So, very modest scaling. This is about the same as I get with my application; how can I make it run faster?

   You can't. Memory bandwidth, not the number of cores, is the limiting factor on this machine, and there is nothing to be done about it. When buying machines, make sure that memory bandwidth is an important factor in the decision.

Barry

> I am already using the --map-by and --machinefile arguments for mpirun; maybe this is also a conflict between the different MPI installations?
>
> Thanks,
>
>
>
> On Wed, Apr 25, 2018 at 11:51 AM, Karl Rupp <rupp@iue.tuwien.ac.at> wrote:
> Hi Manuel,
>
> this looks like the wrong MPI is being used. You should see an increasing number of processes, e.g.
>
> Number of MPI processes 1 Processor names node37
> Triad: 6052.3571 Rate (MB/s)
> Number of MPI processes 2 Processor names node37 node37
> Triad: 9138.9376 Rate (MB/s)
> Number of MPI processes 3 Processor names node37 node37 node37
> Triad: 11077.5905 Rate (MB/s)
> Number of MPI processes 4 Processor names node37 node37 node37 node37
> Triad: 12055.9123 Rate (MB/s)
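>
> (If the process count never increases like that, one quick check is to see which launcher is actually being picked up; the second command assumes PETSc was configured with --download-mpich, which installs its own mpiexec under the arch directory:)
>
>   which mpirun && mpirun --version
>   $PETSC_DIR/$PETSC_ARCH/bin/mpiexec --version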
>
> Best regards,
> Karli
>
>
>
>
> On 04/25/2018 08:26 PM, Manuel Valera wrote:
> Hi,
>
> I'm running scaling tests on my system to find out why my scaling is so poor, and after following the MPIVersion guidelines my scaling.log output looks like this:
>
> Number of MPI processes 1 Processor names node37
> Triad: 12856.9252 Rate (MB/s)
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Triad: 9138.3320 Rate (MB/s)
> Triad: 9945.0006 Rate (MB/s)
> Triad: 10480.8471 Rate (MB/s)
> Triad: 12055.4846 Rate (MB/s)
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Triad: 7394.1014 Rate (MB/s)
> Triad: 5528.9757 Rate (MB/s)
> Triad: 6052.7506 Rate (MB/s)
> Triad: 6188.5710 Rate (MB/s)
> Triad: 6944.4515 Rate (MB/s)
> Triad: 7407.1594 Rate (MB/s)
> Triad: 9508.1984 Rate (MB/s)
> Triad: 10699.7551 Rate (MB/s)
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Number of MPI processes 1 Processor names node37
> Triad: 6682.3749 Rate (MB/s)
> Triad: 6825.3243 Rate (MB/s)
> Triad: 7217.8178 Rate (MB/s)
> Triad: 7525.1025 Rate (MB/s)
> Triad: 7882.1781 Rate (MB/s)
> Triad: 8071.1430 Rate (MB/s)
> Triad: 10341.9424 Rate (MB/s)
> Triad: 10418.4740 Rate (MB/s)
>
>
> Is this normal? It feels different from what I get from a usual streams test; how can I get it to work properly?
>
> Thanks,
>