<p class="MsoNormal"><span style lang="EN-US">Hi Jeff, </span></p><br><p class="MsoNormal"><span style lang="EN-US">Yes I am
running on scc with rckmpi not rckmpi2. </span></p><p class="MsoNormal"><br><span style lang="EN-US"></span></p>
<p class="MsoNormal"><span style lang="EN-US"> </span></p>
<p class="MsoNormal"><span style lang="EN-US">Best
Regards, </span></p>
On 16 April 2012 16:14, Jeff Hammond <jhammond@alcf.anl.gov> wrote:
It seems that you are running on an Intel SCC. Is this true?

Which version of RCKMPI are you running? It seems there are a few
MPICH-derived implementations of MPI for the SCC.

Some people at Argonne have SCC access (I am one of them), but those of
us who do are not necessarily the people qualified to debug the MPI
process manager on the SCC. I am most certainly not qualified to do
this.

Jeff

On Mon, Apr 16, 2012 at 8:14 AM, Umit <umitcanyilmaz@gmail.com> wrote:
> Hello all,
>
> There are some spawn commands in my program, and I want to specify the
> nodes on which the newly spawned processes run. I am trying to use a
> hostfile for this, but I could not get it to work: the new processes
> are still spawned on the next available nodes.
>
> I have included my code and console output below.
>
> My hostfile:
>
> root@rck00:~> cat /shared/mpihosts
> rck03
> rck04
> rck05
>
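> (For comparison, stock MPICH's Hydra launcher accepts a hostfile at
> launch time, e.g. "mpiexec -f /shared/mpihosts -np 1 ...", but I do
> not know whether rckmpi's mpirun supports that option, and in any case
> I want the spawned children, not only the initially launched process,
> to be placed according to this file.)
>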
> Can somebody help me? What is the problem? Could this be a bug?
>
> Here is my code, followed by my program's output:
>
> #include "mpi.h"<br>
><br>
> #include <stdio.h><br>
><br>
> #include <stdlib.h><br>
><br>
><br>
><br>
> #define NUM_SPAWNS 3<br>
><br>
><br>
><br>
> int main( int argc, char *argv[] )<br>
><br>
> {<br>
><br>
> int errcodes[NUM_SPAWNS];<br>
><br>
> MPI_Comm parentcomm, intercomm;<br>
><br>
> int len;<br>
><br>
> char name[MPI_MAX_PROCESSOR_NAME];<br>
><br>
> int rank;<br>
><br>
><br>
><br>
> MPI_Init( &argc, &argv );<br>
><br>
> MPI_Comm_get_parent( &parentcomm );<br>
><br>
> MPI_Comm_rank(MPI_COMM_WORLD,&rank);<br>
><br>
><br>
><br>
> if (parentcomm == MPI_COMM_NULL)<br>
><br>
> {<br>
><br>
> MPI_Info info;<br>
><br>
> MPI_Info_create( &info );<br>
><br>
> MPI_Info_set(info, "hostfile", "/shared/mpihosts");<br>
><br>
><br>
><br>
> MPI_Comm_spawn( "/shared/spawn/./spawn", MPI_ARGV_NULL,<br>
> NUM_SPAWNS, info, 0, MPI_COMM_WORLD, &intercomm, errcodes );<br>
><br>
><br>
><br>
> MPI_Get_processor_name(name, &len);<br>
><br>
> printf("I am parent process %d on %s. \n", rank, name);<br>
><br>
> }<br>
><br>
> else<br>
><br>
> {<br>
><br>
> MPI_Get_processor_name(name, &len);<br>
><br>
> printf("I am a spawned process %d on %s.\n", rank, name);<br>
><br>
> }<br>
><br>
> fflush(stdout);<br>
><br>
> MPI_Finalize();<br>
><br>
> return 0;<br>
><br>
> }<br>
><br>
><br>
><br>
> Output of my program:
>
> root@rck00:~> mpirun -np 1 /shared/spawn/./spawn
> I am parent process 0 on rck00.
> I am a spawned process 0 on rck01.
> I am a spawned process 1 on rck02.
> I am a spawned process 2 on rck03.
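>
> I also looked at the MPI-standard "host" info key for comparison. I do
> not know whether rckmpi's process manager honors it either; a sketch
> (untested on the SCC, reusing the declarations from the program above)
> would be:
>
> /* Sketch only: the MPI-standard "host" key asks that all spawned
>    children be placed on one named node. Whether rckmpi honors this
>    key is an open question; variables are as declared above. */
> MPI_Info_create( &info );
> MPI_Info_set( info, "host", "rck03" );
> MPI_Comm_spawn( "/shared/spawn/./spawn", MPI_ARGV_NULL, NUM_SPAWNS,
>                 info, 0, MPI_COMM_WORLD, &intercomm, errcodes );
> MPI_Info_free( &info );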
>
> Thanks in advance,
>

--
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond@alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond (in-progress)
https://wiki.alcf.anl.gov/old/index.php/User:Jhammond (deprecated)
https://wiki-old.alcf.anl.gov/index.php/User:Jhammond (deprecated)