<HTML><BODY style="word-wrap: break-word; -khtml-nbsp-mode: space; -khtml-line-break: after-white-space; ">There also appears to be a processor_bind routine that you can call. You might try using that just after MPI_Init. Let us know how that works; we could add that to mpiexec.<DIV>Bill</DIV><DIV><BR><DIV><DIV>On Jun 21, 2007, at 9:53 AM, Christina Patrick wrote:</DIV><BR class="Apple-interchange-newline"><BLOCKQUOTE type="cite"><DIV>Is it possible for me to modify the Python script (mpiexec) in such a way that I use the pbind command to bind each instance of the executing program to a different processor? I would like to give that a shot.</DIV> <DIV> </DIV> <DIV>Warm Regards,</DIV> <DIV>Christina.<BR><BR> </DIV> <DIV><SPAN class="gmail_quote">On 6/20/07, <B class="gmail_sendername">Rajeev Thakur</B> <<A href="mailto:thakur@mcs.anl.gov">thakur@mcs.anl.gov</A>> wrote:</SPAN> <BLOCKQUOTE class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">MPICH2 leaves the scheduling of processes to the OS. If the OS has some way<BR>to bind processes to processors, you could try using it.<BR><BR>Rajeev<BR><BR>> -----Original Message-----<BR>> From: <A href="mailto:owner-mpich-discuss@mcs.anl.gov">owner-mpich-discuss@mcs.anl.gov</A><BR>> [mailto:<A href="mailto:owner-mpich-discuss@mcs.anl.gov">owner-mpich-discuss@mcs.anl.gov</A>] On Behalf Of<BR>> Christina Patrick<BR>> Sent: Wednesday, June 20, 2007 4:12 PM<BR>> To: <A href="mailto:mpich-discuss-digest@mcs.anl.gov">mpich-discuss-digest@mcs.anl.gov</A><BR>> Subject: [MPICH] Binding an instance of the MPI program to a<BR>> particular processor<BR>><BR>> Hi everybody,<BR>><BR>> I have an 8-processor Solaris 9 machine and I want to<BR>> execute an MPI program on it. The problem is that the tasks<BR>> created by mpiexec keep migrating between the different<BR>> processors. Since it is only one machine, there is only one<BR>> instance of the mpdboot daemon running on the machine.
Hence,<BR>> when I execute the command below on the machine with 8<BR>> processors, I get output that says:<BR>><BR>> (For example, if the MPI program name is "finalized")<BR>> # mpiexec -n 8 ./finalized<BR>> 0: No Errors<BR>><BR>> When I examined the system using the "prstat" command, I<BR>> observed that the tasks are migrating between the different<BR>> processors.<BR>><BR>> Is there any way in which I can bind each instance of the<BR>> MPI program to a different processor?<BR>><BR>> Your suggestions and help are appreciated,<BR>><BR>> Thanks,<BR>> Christina.<BR>><BR><BR></BLOCKQUOTE></DIV><BR></BLOCKQUOTE></DIV><BR></DIV></BODY></HTML>