[MPICH] Binding an instance of the MPI program to a particular processor
chong tan
chong_guan_tan at yahoo.com
Thu Jun 21 13:44:42 CDT 2007
I think processor_bind is for lightweight threads/processes. When I met with Sun a few months ago, they recommended pbind.
It is painful, but pain == gain.
tan
----- Original Message ----
From: William Gropp <gropp at mcs.anl.gov>
To: Christina Patrick <christina.subscribes at gmail.com>
Cc: Rajeev Thakur <thakur at mcs.anl.gov>; mpich-discuss-digest at mcs.anl.gov
Sent: Thursday, June 21, 2007 8:01:16 AM
Subject: Re: [MPICH] Binding an instance of the MPI program to a particular processor
There also appears to be a processor_bind routine that you can call. You might try using that just after MPI_Init. Let us know how that works; we could add that to mpiexec.
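For anyone who wants to try it, here is a minimal, untested sketch of what that could look like. It assumes Solaris's processor_bind(2) called with P_PID/P_MYID to bind the whole calling process, and it assumes the online processor IDs are simply 0..N-1, which is not guaranteed; a more careful version would enumerate the IDs with p_online(2) first.

/* Sketch: bind each MPI rank to a processor right after MPI_Init
 * using Solaris processor_bind(2).  Assumes contiguous processor IDs. */
#include <stdio.h>
#include <unistd.h>          /* sysconf */
#include <sys/types.h>
#include <sys/processor.h>   /* processor_bind, processorid_t */
#include <sys/procset.h>     /* P_PID, P_MYID */
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Pick a target CPU for this rank (wrap around if there are
     * more ranks than online processors). */
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
    processorid_t cpu = (processorid_t)(rank % ncpus);

    /* Bind the entire calling process to that processor. */
    if (processor_bind(P_PID, P_MYID, cpu, NULL) != 0)
        perror("processor_bind");
    else
        printf("rank %d bound to processor %d\n", rank, (int)cpu);

    /* ... rest of the MPI program ... */

    MPI_Finalize();
    return 0;
}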
Bill
On Jun 21, 2007, at 9:53 AM, Christina Patrick wrote:
Is it possible for me to modify the Python script (mpiexec) in such a way that I use the pbind command to bind each instance of the executing program to a different processor? I would like to give that a shot.
Warm Regards,
Christina.
On 6/20/07, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
MPICH2 leaves the scheduling of processes to the OS. If the OS has some way
to bind processes to processors, you could try using it.
Rajeev
> -----Original Message-----
> From: owner-mpich-discuss at mcs.anl.gov
> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of
> Christina Patrick
> Sent: Wednesday, June 20, 2007 4:12 PM
> To: mpich-discuss-digest at mcs.anl.gov
> Subject: [MPICH] Binding an instance of the MPI program to a
> particular processor
>
> Hi everybody,
>
> I have an 8-processor Solaris 9 machine and I want to
> execute an MPI program on it. The problem is that the tasks
> created by mpiexec keep migrating between the different
> processors. Since it is only one machine, there is only one
> instance of the mpdboot daemon running on it. Hence,
> when I execute the command below on the machine with 8
> processors, I get output that says:
>
> (For example, if the MPI program name is "finalized")
> # mpiexec -n 8 ./finalized
> 0: No Errors
>
> When I examined the system using the "prstat" command, I
> observed that the tasks were migrating between the different
> processors.
>
> Is there any way in which I could bind each instance of the
> MPI program to a different processor?
>
> Your suggestions and help are appreciated,
>
> Thanks,
> Christina.
>