<p class="MsoNormal"><span lang="EN-US">Hi Jeff, </span></p><br><p class="MsoNormal"><span lang="EN-US">Yes, I am
running on the SCC with RCKMPI, not RCKMPI2. </span></p><p class="MsoNormal"><br><span lang="EN-US"></span></p>


<p class="MsoNormal"><span lang="EN-US">Best regards,</span></p>

<br><br><div class="gmail_quote">On 16 April 2012 16:14, Jeff Hammond <span dir="ltr">&lt;<a href="mailto:jhammond@alcf.anl.gov">jhammond@alcf.anl.gov</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
It seems that you are running on an Intel SCC.  Is this true?<br>
<br>
Which version of RCKMPI are you running?  It seems there are a few<br>
MPICH-derived implementations of MPI for SCC.<br>
<br>
Some people at Argonne have SCC access (I am one of them), but those of<br>
us who do are not necessarily the people qualified to debug the MPI<br>
process manager on SCC.  I am most certainly not qualified to do this.<br>
<br>
Jeff<br>
<div><div class="h5"><br>
On Mon, Apr 16, 2012 at 8:14 AM, Umit &lt;<a href="mailto:umitcanyilmaz@gmail.com">umitcanyilmaz@gmail.com</a>&gt; wrote:<br>
&gt; Hello all,<br>
&gt;<br>
&gt;<br>
&gt; There are several spawn calls in my program, and I want to specify the nodes<br>
&gt; on which the newly spawned processes run. I am trying to use a hostfile for<br>
&gt; this, but I have not been able to get it to work: the new processes are still<br>
&gt; spawned on the next available nodes.<br>
&gt;<br>
&gt; My code and console output are included below.<br>
&gt;<br>
&gt; My hostfile:<br>
&gt;<br>
&gt; root@rck00:~&gt; cat /shared/mpihosts<br>
&gt; rck03<br>
&gt; rck04<br>
&gt; rck05<br>
&gt;<br>
&gt; Can somebody help me? What is the problem? Could this be a bug?<br>
&gt;<br>
&gt;<br>
&gt; Here is my code, followed by its output:<br>
&gt;<br>
&gt; #include &quot;mpi.h&quot;<br>
&gt; #include &lt;stdio.h&gt;<br>
&gt; #include &lt;stdlib.h&gt;<br>
&gt;<br>
&gt; #define NUM_SPAWNS 3<br>
&gt;<br>
&gt; int main( int argc, char *argv[] )<br>
&gt; {<br>
&gt;     int errcodes[NUM_SPAWNS];<br>
&gt;     MPI_Comm parentcomm, intercomm;<br>
&gt;     int len;<br>
&gt;     char name[MPI_MAX_PROCESSOR_NAME];<br>
&gt;     int rank;<br>
&gt;<br>
&gt;     MPI_Init( &amp;argc, &amp;argv );<br>
&gt;     MPI_Comm_get_parent( &amp;parentcomm );<br>
&gt;     MPI_Comm_rank( MPI_COMM_WORLD, &amp;rank );<br>
&gt;<br>
&gt;     if (parentcomm == MPI_COMM_NULL)<br>
&gt;     {<br>
&gt;         MPI_Info info;<br>
&gt;         MPI_Info_create( &amp;info );<br>
&gt;         MPI_Info_set( info, &quot;hostfile&quot;, &quot;/shared/mpihosts&quot; );<br>
&gt;<br>
&gt;         MPI_Comm_spawn( &quot;/shared/spawn/./spawn&quot;, MPI_ARGV_NULL, NUM_SPAWNS,<br>
&gt;                         info, 0, MPI_COMM_WORLD, &amp;intercomm, errcodes );<br>
&gt;<br>
&gt;         MPI_Get_processor_name( name, &amp;len );<br>
&gt;         printf( &quot;I am parent process %d on %s.\n&quot;, rank, name );<br>
&gt;     }<br>
&gt;     else<br>
&gt;     {<br>
&gt;         MPI_Get_processor_name( name, &amp;len );<br>
&gt;         printf( &quot;I am a spawned process %d on %s.\n&quot;, rank, name );<br>
&gt;     }<br>
&gt;<br>
&gt;     fflush( stdout );<br>
&gt;     MPI_Finalize();<br>
&gt;     return 0;<br>
&gt; }<br>
&gt;<br>
&gt;<br>
&gt;<br>
&gt; Output of my program:<br>
&gt;<br>
&gt; root@rck00:~&gt; mpirun -np 1 /shared/spawn/./spawn<br>
&gt;<br>
&gt; I am parent process 0 on rck00.<br>
&gt; I am a spawned process 0 on rck01.<br>
&gt; I am a spawned process 1 on rck02.<br>
&gt; I am a spawned process 2 on rck03.<br>
&gt;<br>
&gt; Thanks in advance,<br>
&gt;<br>
</div></div>&gt; _______________________________________________<br>
&gt; mpich-discuss mailing list     <a href="mailto:mpich-discuss@mcs.anl.gov">mpich-discuss@mcs.anl.gov</a><br>
&gt; To manage subscription options or unsubscribe:<br>
&gt; <a href="https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss" target="_blank">https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss</a><br>
&gt;<br>
<br>
<br>
<br>
--<br>
Jeff Hammond<br>
Argonne Leadership Computing Facility<br>
University of Chicago Computation Institute<br>
<a href="mailto:jhammond@alcf.anl.gov">jhammond@alcf.anl.gov</a> / <a href="tel:%28630%29%20252-5381" value="+16302525381">(630) 252-5381</a><br>
<a href="http://www.linkedin.com/in/jeffhammond" target="_blank">http://www.linkedin.com/in/jeffhammond</a><br>
<a href="https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond" target="_blank">https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond</a> (in-progress)<br>
<a href="https://wiki.alcf.anl.gov/old/index.php/User:Jhammond" target="_blank">https://wiki.alcf.anl.gov/old/index.php/User:Jhammond</a> (deprecated)<br>
<a href="https://wiki-old.alcf.anl.gov/index.php/User:Jhammond%28deprecated%29" target="_blank">https://wiki-old.alcf.anl.gov/index.php/User:Jhammond(deprecated)</a><br>
</blockquote></div><br>