Hi Pavan,<br><br>As you suggested, I installed the new mpiexec, but now I am getting the error below. Do I have to repeat this new mpiexec installation on all four remaining nodes?<br>Here is a partial transcript:<br>-----------------------------------<br>
[root@beowulf bin]# cd bin/<br>[root@beowulf bin]# ls<br>hydra_nameserver hydra_pmi_proxy mpiexec.hydra<br>hydra_persist mpiexec mpirun<br>[root@beowulf bin]# pwd<br>/opt/mpich2-1.4.1p1/bin/bin<br>[root@beowulf bin]# mpiexec -n 4 /opt/mpich2-1.4.1p1/examples/./cpi<br>
-bash: mpiexec: command not found<br>[root@beowulf bin]# ls<br>hydra_nameserver hydra_pmi_proxy mpiexec.hydra<br>hydra_persist mpiexec mpirun<br>[root@beowulf bin]# mpiexec -n 4 /opt/mpich2-1.4.1p1/examples/./cpi<br>
-bash: mpiexec: command not found<br>[root@beowulf bin]# PATH=/opt/mpich2-1.4.1p1/bin/bin/:$PATH<br>[root@beowulf bin]# export PATH<br>[root@beowulf bin]# which mpiexec<br>/opt/mpich2-1.4.1p1/bin/bin/mpiexec<br>[root@beowulf bin]# mpiexec -n 4 /opt/mpich2-1.4.1p1/examples/./cpi<br>
Process 0 of 4 is on beowulf.master<br>Process 1 of 4 is on beowulf.master<br>Process 2 of 4 is on beowulf.master<br>Process 3 of 4 is on beowulf.master<br>pi is approximately 3.1415926544231239, Error is 0.0000000008333307<br>
wall clock time = 0.000260<br>[root@beowulf bin]# mpiexec -f /root/hosts -n 4 /opt/mpich2-1.4.1p1/examples/./cpi<br>bash: /opt/mpich2-1.4.1p1/bin/bin//hydra_pmi_proxy: No such file or directory<br>bash: /opt/mpich2-1.4.1p1/bin/bin//hydra_pmi_proxy: No such file or directory<br>
bash: /opt/mpich2-1.4.1p1/bin/bin//hydra_pmi_proxy: No such file or directory<br>^C[mpiexec@beowulf.master] Sending Ctrl-C to processes as requested<br>[mpiexec@beowulf.master] Press Ctrl-C again to force abort<br>[mpiexec@beowulf.master] HYD_pmcd_pmiserv_send_signal (./pm/pmiserv/pmiserv_cb.c:166): assert (!closed) failed<br>
[mpiexec@beowulf.master] ui_cmd_cb (./pm/pmiserv/pmiserv_pmci.c:79): unable to send signal downstream<br>[mpiexec@beowulf.master] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status<br>
[mpiexec@beowulf.master] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:205): error waiting for event<br>[mpiexec@beowulf.master] main (./ui/mpich/mpiexec.c:437): process manager error waiting for completion<br>
[root@beowulf bin]#<br><br><br><div class="gmail_quote">On Wed, May 2, 2012 at 12:32 AM, Pavan Balaji <span dir="ltr"><<a href="mailto:balaji@mcs.anl.gov" target="_blank">balaji@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="im"><br>
On 05/01/2012 02:01 PM, Albert Spade wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Thanks a lot for your help.<br>
Can you please tell me how I can recompile with this new mpiexec? In the<br>
ui folder I found mpiexec.c and mpiexec.h.<br>
Do I first have to compile mpiexec.c to create mpiexec, and<br>
then use this mpiexec to compile my cpi.c program?<br>
Sorry for the basic question; I am new to clusters and MPI.<br>
</blockquote>
<br></div>
./configure --prefix=`pwd`/install && make && make install<br>
<br>
You'll find the new mpiexec in `pwd`/install/bin<div class="HOEnZb"><div class="h5"><br>
<br>
-- Pavan<br>
<br>
-- <br>
Pavan Balaji<br>
<a href="http://www.mcs.anl.gov/%7Ebalaji" target="_blank">http://www.mcs.anl.gov/~balaji</a><br>
</div></div></blockquote></div><br>
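<br>P.S. In case it helps, here is a rough sketch of what I am planning to run from the master to push the new binaries to the other nodes (a dry run only: node1-node4 are placeholders for my real node names, and I am assuming root ssh access and the same /opt layout on every node):<br>

```shell
# Dry-run sketch: print the copy commands instead of running them
# (drop the leading "echo" to actually execute).
# node1..node4 are placeholders for my real compute node names.
nodes="node1 node2 node3 node4"
for host in $nodes; do
  echo scp -r /opt/mpich2-1.4.1p1/bin/bin "root@${host}:/opt/mpich2-1.4.1p1/bin/"
done
```

My understanding is that hydra_pmi_proxy has to exist at the same absolute path on every node as on the master, which is what the "No such file or directory" messages above seem to point to.<br>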