<div dir="ltr">Hello;<br>Thank you for the detailed information, however I am using visual windows software with the code impeded in it, so I am not using the commands directly. so I am wondering if there is a way to add the stdin/stdout script? and save them some where in the cluster<br>
I am attaching a screenshot of the software I am using, just so you know what I am talking about!<br><br><br><div class="gmail_quote">On Mon, Nov 29, 2010 at 9:28 PM, Gus Correa <span dir="ltr"><<a href="mailto:gus@ldeo.columbia.edu">gus@ldeo.columbia.edu</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">Hi Talla<br>
<br>
It is no rocket science, just add them to your mpirun command.<br>
Say:<br>
<br>
Old command (whatever you have there):<br>
mpirun -n 16 my_program<br>
<br>
Modified command (whatever you had there with "-s all" added):<br>
mpirun -s all -n 16 my_program<br>
<br>
I am not even sure it will work, but it is worth a try:<br>
all of your processes seem to be trying to read from stdin,<br>
which in MPICH2 seems to require the specific flag '-s all'.<br>
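If your program also reads its main input file on stdin (I believe ABINIT's old-style ".files" input works that way, but please check), keep the redirection on the same line, for instance (the file name is just an example):<br>
<br>
mpirun -s all -n 16 my_program < my_input.files<br>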
<br>
***<br>
<br>
On a different note,<br>
looking at your messages, and the paths that you refer to,<br>
it seems to me that you may be mixing MPICH2 and OpenMPI.<br>
At least the test command you mention suggests this:<div class="im"><br>
<br>
> I tested the cluster using the following command:<br>
> /opt/openmpi/bin/mpirun -nolocal -np 16 -machinefile machines<br>
> /opt/mpi-test/bin/mpi-ring<br>
<br>
<br></div>
In particular, the mpirun commands of each MPI are quite different,<br>
and to some extent so is the hardware support.<br>
This mixing is a very common source of frustration.<br>
<br>
Both MPICH2 and OpenMPI follow the MPI standard.<br>
However, they are different beasts,<br>
and mixing them will not work at all.<br>
<br>
Stick to only one of them, both for compilation (mpicc or mpif90)<br>
and for running the program (mpirun/mpiexec).<br>
If in doubt, use full path names for all of them (mpicc, mpif90, mpirun, etc.).<br>
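For example (the install locations below are only guesses, yours may differ; "which" will show what your PATH actually picks up):<br>
<br>
which mpicc mpif90 mpirun<br>
/opt/mpich2/bin/mpif90 -o my_program my_program.f90<br>
/opt/mpich2/bin/mpirun -s all -n 16 ./my_program<br>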
This mailing list is the support list for MPICH2:<br>
<a href="http://www.mcs.anl.gov/research/projects/mpich2/" target="_blank">http://www.mcs.anl.gov/research/projects/mpich2/</a><br>
<br>
For OpenMPI you need to look elsewhere:<br>
<a href="http://www.open-mpi.org/" target="_blank">http://www.open-mpi.org/</a><br>
<br>
You may want to check first how your computational chemistry<br>
program (ABINIT) was compiled.<br>
Computational chemistry is *not* my field: I have no idea what the<br>
heck ABINIT does, probably some many-body Schroedinger equation, and I forgot my Quantum Mechanics a while ago.<br>
Don't ask me about it.<br>
<br>
If your ABINIT came pre-compiled, you need to stick with<br>
whatever flavor (and even version) of MPI they used.<br>
If you compiled it yourself, you need to stick to the MPI flavor and<br>
version associated with the *same* MPI compiler wrapper you used to compile it.<br>
I.e., mpicc or mpif90 *and* mpirun/mpiexec must come from the same MPI flavor and version.<br>
Take a look at the Makefiles, perhaps at the configure scripts,<br>
they may give a hint.<br>
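A couple of quick checks that may also help, assuming the abinit executable is dynamically linked and the MPI compiler wrappers are in your PATH (the abinit path below is only a guess):<br>
<br>
# show which MPI libraries the executable was linked against<br>
ldd /opt/abinit/bin/abinit | grep -i mpi<br>
# show what the compiler wrappers really call<br>
mpif90 -show      # MPICH2 wrappers take -show<br>
mpif90 -showme    # OpenMPI wrappers take -showme<br>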
<br>
Good luck.<div class="im"><br>
<br>
My two cents,<br>
Gus Correa<br>
<br>
Talla wrote:<br>
</div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"><div class="im">
Hello Gus,<br>
Thank you for pointing me to this error, but I am new to the Linux world, so I am wondering if you have a ready-made script or commands to add the stdin flags to mpirun?<br>
Thanks<br>
<br></div><div><div></div><div class="h5">
On Mon, Nov 29, 2010 at 7:34 PM, Gus Correa <<a href="mailto:gus@ldeo.columbia.edu" target="_blank">gus@ldeo.columbia.edu</a>> wrote:<br>
<br>
Hi Talla<br>
<br>
It sounds like all processes are trying to read from stdin, right?<br>
Stdin/stdout are not guaranteed to be available to all processes.<br>
<br>
Have you tried adding this flag to your mpiexec/mpirun<br>
command line: "-s all"?<br>
Check the meaning with "man mpiexec".<br>
<br>
My two cents,<br>
Gus Correa<br>
<br>
Talla wrote:<br>
<br>
Hello,<br>
<br>
This issue is consuming all my time with no luck. I can submit<br>
any job without any error message, *BUT ONLY* when I am using<br>
*one* CPU. When I am using more than one CPU, I get the error<br>
message: " *forrtl: severe (24): end-of-file during read, unit<br>
5, file stdin* ". This error line is repeated as many times as the<br>
number of CPUs I am using.<br>
<br>
That means the other nodes are not doing any work here, so<br>
please help me hook up the other nodes so I can take<br>
advantage of all of them.<br>
I have 8 PC and each one has 2 CPU (in total I have 16 CPU).<br>
<br>
I tested the cluster using the following command:<br>
/opt/openmpi/bin/mpirun -nolocal -np 16 -machinefile machines<br>
/opt/mpi-test/bin/mpi-ring<br>
<br>
and all the nodes can send and receive data like a charm.<br>
<br>
Just to mention that I have the Rocks clustering software running<br>
on CentOS.<br>
<br>
The code I am using is called ABINIT, and it is embedded as a<br>
plugin in a visual program called MAPS.<br>
<br>
Your help is really appreciated.<br>
<br>
<br>
<br>
<br>
------------------------------------------------------------------------<br>
<br>
_______________________________________________<br>
mpich-discuss mailing list<br></div></div>
<a href="mailto:mpich-discuss@mcs.anl.gov" target="_blank">mpich-discuss@mcs.anl.gov</a><div class="im"><br>
<a href="https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss" target="_blank">https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss</a><br>
</div></blockquote><div><div></div><div class="h5">
<br>
_______________________________________________<br>
mpich-discuss mailing list<br>
<a href="mailto:mpich-discuss@mcs.anl.gov" target="_blank">mpich-discuss@mcs.anl.gov</a><br>
<a href="https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss" target="_blank">https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss</a><br>
</div></div></blockquote></div><br><br>
<div style="visibility: hidden; display: inline;" id="avg_ls_inline_popup"></div><style type="text/css">#avg_ls_inline_popup { position:absolute; z-index:9999; padding: 0px 0px; margin-left: 0px; margin-top: 0px; width: 240px; overflow: hidden; word-wrap: break-word; color: black; font-size: 10px; text-align: left; line-height: 13px;}</style></div>