[MPICH] Binding an instance of the MPI program to a particular processor
William Gropp
gropp at mcs.anl.gov
Thu Jun 21 11:20:53 CDT 2007
That's a good point. I'd like to see the reference, because I don't
see how the scheduler can be sure that the data in cache won't be
needed in the near future (after all, the compiler often can't
predict this). That is, the question isn't simply whether the data is
already out of the cache; for performance on many scientific
workloads, it is essential to reuse data that is already in cache,
and if that data must be reloaded, performance will plummet.
Bill
On Jun 21, 2007, at 10:49 AM, Diego M. Vadell wrote:
> Hi,
> Is there any proof (e.g. a benchmark) that binding a process to a
> processor/core is better than letting the scheduler work things out? I
> remember reading - sorry for the lack of a reference - that Linux's
> scheduler knows about the cache when scheduling, so when processes
> bounce from processor to processor, it is because the process's data
> is already out of the cache (and there is really no gain in keeping
> the process on the same processor, especially when that processor is
> busy with another process).
>
> Thanks in advance,
> -- Diego.
>
> On Thursday, 21 June 2007 at 12:27, Darius Buntinas wrote:
>> In Linux there's a command called numactl which can be used to
>> execute a process and bind it to a physical processor. Unfortunately,
>> it looks like Solaris's pbind can only bind a process once it has
>> already been started. If you can find a command on Sun similar to
>> numactl on Linux, you can do something like what I do with numactl.
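>>
>> (Since pbind can bind an already-running process by its PID, one
>> possible workaround, sketched here with a made-up wrapper name and
>> only minimal error handling, is a small wrapper that binds its own
>> shell and then execs the real program; on Solaris the processor
>> binding is inherited across the exec:
>>
>> #!/bin/sh
>> # bind_exec: bind this shell to the CPU given as the first argument,
>> # then replace the shell with the real command; the binding carries
>> # over to the exec'd program.
>> cpu=$1; shift
>> pbind -b $cpu $$ || exit 1
>> exec "$@"
>>
>> Under mpiexec each rank would still need a different CPU argument,
>> e.g. computed from $PMI_RANK as in the numactl script below.)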
>>
>> With numactl one can do something like this to bind each process to a
>> different processor:
>>
>> mpiexec -n 1 numactl --physcpubind=0 -- myapp myapparg : \
>>   -n 1 numactl --physcpubind=1 -- myapp myapparg : \
>>   -n 1 numactl --physcpubind=2 -- myapp myapparg
>>
>> etc.
>>
>> Of course that's messy, so a more general solution would be to write
>> a script called, e.g., execbind, that does this:
>>
>> numactl --physcpubind=`expr $PMI_RANK % $PROCS_PER_NODE` -- "$@"
>>
>> where $PROCS_PER_NODE is the number of processors on each node of
>> your cluster, and $PMI_RANK is set by mpiexec.
>>
>> Then, if you wanted to start 16 processes on a cluster of machines
>> with 4 cores each, you could simply do:
>>
>> mpiexec -n 16 -env PROCS_PER_NODE 4 execbind myapp myapparg
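>>
>> Spelled out, such an execbind wrapper might look roughly like this (a
>> sketch only; it assumes PMI_RANK is present in each process's
>> environment as set by mpiexec, and that PROCS_PER_NODE is passed with
>> -env as above):
>>
>> #!/bin/sh
>> # execbind: pick a core from this process's MPI rank, bind to it with
>> # numactl, and exec the real program with its original arguments.
>> if [ -z "$PMI_RANK" ] || [ -z "$PROCS_PER_NODE" ]; then
>>     echo "execbind: PMI_RANK or PROCS_PER_NODE not set" 1>&2
>>     exit 1
>> fi
>> cpu=`expr $PMI_RANK % $PROCS_PER_NODE`
>> exec numactl --physcpubind=$cpu -- "$@"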
>>
>> Darius
>>
>> Christina Patrick wrote:
>>> Is it possible for me to modify the Python script (mpiexec) in such
>>> a way that I use the pbind command to bind each instance of the
>>> executing program to a different processor? I would like to give
>>> that a shot.
>>>
>>> Warm Regards,
>>> Christina.
>>>
>>>
>>> On 6/20/07, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
>>>
>>> MPICH2 leaves the scheduling of processes to the OS. If the OS has
>>> some way to bind processes to processors, you could try using it.
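>>>
>>> (On Solaris, the relevant commands would be pbind or psrset. A
>>> hypothetical session, with a made-up PID, might look like:
>>>
>>> # bind process 12345 to processor 3
>>> pbind -b 3 12345
>>> # later, remove the binding
>>> pbind -u 12345
>>>
>>> MPICH2 itself does not issue these; they are just examples of what
>>> the OS provides.)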
>>>
>>> Rajeev
>>>
>>>> -----Original Message-----
>>>> From: owner-mpich-discuss at mcs.anl.gov
>>>> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Christina Patrick
>>>> Sent: Wednesday, June 20, 2007 4:12 PM
>>>> To: mpich-discuss-digest at mcs.anl.gov
>>>> Subject: [MPICH] Binding an instance of the MPI program to a
>>>> particular processor
>>>>
>>>> Hi everybody,
>>>>
>>>> I have an 8-processor Solaris 9 machine and I want to execute an
>>>> MPI program on it. The problem is that the tasks created by mpiexec
>>>> keep migrating between the different processors. Since it is only
>>>> one machine, there is only one instance of the mpdboot daemon
>>>> running on it. When I execute the command below on this 8-processor
>>>> machine, I get output that says:
>>>>
>>>> (For example, if the MPI program name is "finalized")
>>>> # mpiexec -n 8 ./finalized
>>>> 0: No Errors
>>>>
>>>> When I examined the system using the "prstat" command, I observed
>>>> that the tasks are migrating between the different processors.
>>>>
>>>> Is there any way in which I could bind each instance of the MPI
>>>> program to a different processor?
>>>>
>>>> Your suggestions and help are appreciated,
>>>>
>>>> Thanks,
>>>> Christina.
>