<div dir="ltr">Hello Gus,<br>Thank you for pointing me to this error, but I am new to Linux world so I am wondering if you have a ready script or commands to add stdin flags to mpirun?<br> <br>Thanks<br><br><div class="gmail_quote">
On Mon, Nov 29, 2010 at 7:34 PM, Gus Correa <span dir="ltr"><<a href="mailto:gus@ldeo.columbia.edu">gus@ldeo.columbia.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
Hi Talla<br>
<br>
It sounds like all processes are trying to read from stdin, right?<br>
Stdin/stdout are not guaranteed to be available to all processes.<br>
<br>
Have you tried adding this flag to your mpiexec/mpirun<br>
command line: "-s all" ?<br>
Check the meaning with "man mpiexec".<br>
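<br>
For illustration only, a rough sketch of what such a command line might look like -- the abinit path and the "run.files"/"run.log" names below are placeholders, and "-s" is the stdin option of the mpd-based mpiexec, so please double-check it against your own "man mpiexec":<br>
<br>
&nbsp;&nbsp;# placeholder paths and file names -- adjust for your own setup<br>
&nbsp;&nbsp;mpiexec -s all -n 16 -machinefile machines /path/to/abinit &lt; run.files &gt; run.log<br>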
<br>
My two cents,<br>
Gus Correa<br>
<br>
Talla wrote:<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"><div><div></div><div class="h5">
Hello,<br>
<br>
this issue is consuming all my time with no luck. I can submit any job without any error message *BUT ONLY* when I am using *one* CPU; when I use more than one CPU, I get the error message: "*forrtl: severe (24): end-of-file during read, unit 5, file stdin*". This error line is repeated as many times as the number of CPUs I am using.<br>
<br>
This means that the other nodes are not doing any work here. Could you help me bring the other nodes in, so I can take advantage of all of them?<br>
I have 8 PCs and each one has 2 CPUs (16 CPUs in total).<br>
<br>
I tested the cluster using the following command: /opt/openmpi/bin/mpirun -nolocal -np 16 -machinefile machines /opt/mpi-test/bin/mpi-ring<br>
<br>
and all the nodes can send and receive data like a charm.<br>
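<br>
I am not sure if this is a meaningful check, but as a rough sketch (my own guess at a test, with a made-up rank count), I suppose one could also see whether stdin reaches more than one process by piping a line into a trivial command:<br>
<br>
&nbsp;&nbsp;# if "hello" is printed only once, only one rank is receiving stdin<br>
&nbsp;&nbsp;echo hello | /opt/openmpi/bin/mpirun -np 4 -machinefile machines cat<br>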
<br>
Just to mention that I am running the Rocks clustering software with CentOS.<br>
<br>
The code I am using is called ABINIT and it is embedded as a plugin in a graphical application called MAPS.<br>
<br>
Your help is really appreciated.<br>
<br>
<br>
<br>
<br></div></div>
</blockquote>
<br>
_______________________________________________<br>
mpich-discuss mailing list<br>
<a href="mailto:mpich-discuss@mcs.anl.gov" target="_blank">mpich-discuss@mcs.anl.gov</a><br>
<a href="https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss" target="_blank">https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss</a><br>
</blockquote></div><br><br>
<div style="visibility: hidden; display: inline;" id="avg_ls_inline_popup"></div><style type="text/css">#avg_ls_inline_popup { position:absolute; z-index:9999; padding: 0px 0px; margin-left: 0px; margin-top: 0px; width: 240px; overflow: hidden; word-wrap: break-word; color: black; font-size: 10px; text-align: left; line-height: 13px;}</style></div>