Okay, thanks.  What is the exact command line you used to set the affinity
correctly?  (I want to know the semantics of what you did.)

On Thu, Feb 23, 2012 at 23:37, Dave Nystrom <dnystrom1@comcast.net> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">Jed Brown writes:<br>
> On Thu, Feb 23, 2012 at 18:53, Nystrom, William D <<a href="mailto:wdn@lanl.gov">wdn@lanl.gov</a>> wrote:<br>
><br>
> > Rerunning the CPU case with numactl results in a 25x speedup and<br>
> > log_summary<br>
> > results that look reasonable to me now.<br>
> ><br>
><br>
> What command are you using for this? We usually use the affinity options to<br>
> mpiexec instead of using numactl/taskset manually.<br>
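
For reference, the two approaches look something like this (a hypothetical
8-rank run; exact option names vary across Open MPI versions):

    # manual pinning: run everything on NUMA node 0's cores and memory
    numactl --cpunodebind=0 --membind=0 ./app

    # letting the launcher bind each rank (Open MPI 1.5.x-era syntax)
    mpiexec -np 8 --bind-to-core --report-bindings ./app
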
>
> I was using openmpi-1.5.4 as installed by the system admins on our testbed
> cluster.  I talked to a couple of our openmpi developers and they indicated
> that the affinity support was broken in that version but should be fixed in
> 1.5.5 and 1.6, which are due out within the next month.
>
> I also tried mvapich2-1.7 built with slurm and tried using the affinity
> options with srun.  That did not seem to work either, but I should revisit
> it and make sure I really understand how to use srun.
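
With SLURM the binding is typically requested per task, e.g.

    srun -n 8 --cpu_bind=cores ./app

but that relies on the task/affinity plugin being enabled on the cluster,
which would be worth checking with your admins.
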
>
> I was pretty surprised that getting the NUMA placement right made such a
> huge difference.  I'm also wondering whether getting the affinity right
> will make much of a difference for the GPU case.
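
The quickest sanity checks are usually

    numactl --hardware    # show the node/core/memory layout
    numastat              # show per-node allocation counters

to confirm that the ranks and their memory really end up where you intended.
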
<div class="im"><br>
> Did you also set a specific memory policy?<br>
<br>
</div>I'm not sure what you mean by the above question but I'm kind of new to all<br>
this numa stuff.<br>
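
By "memory policy" I mean telling the kernel where pages may be placed, not
just which cores the process runs on, e.g. with numactl:

    numactl --localalloc ./app      # allocate on the node of the faulting CPU
    numactl --membind=0 ./app       # restrict allocation to node 0
    numactl --interleave=all ./app  # round-robin pages across nodes

If only the CPUs are bound, the default first-touch policy still decides
where each page lands, so binding memory explicitly can matter.
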
<div class="im"><br>
> Which Linux kernel is this?<br>
<br>
</div>The OS was the latest beta of TOSS2. If I remember, I can check next time I<br>
am in my office. It is probably RHEL6.<br>
</blockquote></div><br>