I reconfigured it with

/home/netlab/MPI/mpich2-1.3.2p1/configure -prefix=/home/netlab/MPI/mpich2-install --enable-g=all

and ran it with

mpiexec -n 2 -l -f machinefile /home/netlab/MPI/mpich2-build/examples/cpi -mpich-dbg=file -mpich-dbg-level=verbose -mpich2-dbg-class=all
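To be precise, by "reconfigure" I mean the usual configure / make / make install sequence, roughly like this (a sketch; I'm assuming the out-of-tree build directory is /home/netlab/MPI/mpich2-build, as in the cpi path above):

cd /home/netlab/MPI/mpich2-build
# re-run configure with debugging enabled, then rebuild and reinstall
/home/netlab/MPI/mpich2-1.3.2p1/configure -prefix=/home/netlab/MPI/mpich2-install --enable-g=all
make
make install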
<div class="gmail_quote"><br>I found the following error message<br>netlab@query:~/MPI$ mpiexec -n 2 -l -f machinefile /home/netlab/MPI/mpich2-build/examples/cpi -mpich-dbg=file -mpich-dbg-level=verbose -mpich2-dbg-class=all<br>
[0] /home/netlab/MPI/mpich2-build/examples/cpi: error while loading shared libraries: libopa.so.1: cannot open shared object file: No such file or directory<br>[mpiexec@query] ONE OF THE PROCESSES TERMINATED BADLY: CLEANING UP<br>
[1] /home/netlab/MPI/mpich2-build/examples/cpi: error while loading shared libraries: libopa.so.1: cannot open shared object file: No such file or directory<br>[mpiexec@query] ONE OF THE PROCESSES TERMINATED BADLY: CLEANING UP<br>
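My guess is that the runtime linker simply cannot find the MPICH2 shared libraries under the new install prefix. A minimal check I plan to try on each host (a sketch, assuming bash and that the libraries landed in the lib directory under the install prefix above):

# confirm which shared libraries cpi cannot resolve
ldd /home/netlab/MPI/mpich2-build/examples/cpi | grep "not found"
# point the dynamic linker at the freshly installed libraries
export LD_LIBRARY_PATH=/home/netlab/MPI/mpich2-install/lib:$LD_LIBRARY_PATH

I'm not sure whether mpiexec propagates this variable to the remote host, so it may also need to go into the shell startup files on both machines.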
<br>Sorry, I didn't find the log file<br><br>Best Wishes, <br><br>On Fri, May 27, 2011 at 1:38 PM, Darius Buntinas <span dir="ltr"><<a href="mailto:buntinas@mcs.anl.gov">buntinas@mcs.anl.gov</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">Can you reconfigure with the --enable-g=all option, then re-run it like this (all on one line):<br>
<br>
mpiexec -n 2 -l -f machinefile /home/netlab/MPI/mpich2-build/examples/cpi -mpich-dbg=file -mpich-dbg-level=verbose -mpich2-dbg-class=all<br>
<br>
There should then be two files starting with "dbg" and ending with ".log". Please send those to us.<br>
<br>
Thanks,<br>
<font color="#888888">-d<br>
</font><div><div></div><div class="h5"><br>
On May 27, 2011, at 11:51 AM, Fujun Liu wrote:<br>
<br>
> I also doubt it is a networking problem. I am trying to how to find it. Anyway, thanks a lot<br>
><br>
> On Fri, May 27, 2011 at 12:46 PM, Dave Goodell <<a href="mailto:goodell@mcs.anl.gov">goodell@mcs.anl.gov</a>> wrote:<br>
> If your firewall truly is disabled and those /etc/hosts files are accurate, then I don't know what the problem might be. It still sounds like a networking problem, but I don't have any concrete suggestions for what else to check.<br>
><br>
> Perhaps others on the list have experienced these sorts of problems before and can offer ideas.<br>
><br>
> -Dave<br>
><br>
> On May 27, 2011, at 11:24 AM CDT, Fujun Liu wrote:<br>
><br>
> > I use two hosts: one is query, the other is trigger<br>
> ><br>
> > (1) about firewall<br>
> ><br>
> > netlab@query:~$ sudo ufw status<br>
> > Status: inactive<br>
> ><br>
> > netlab@trigger:~$ sudo ufw status<br>
> > Status: inactive<br>
> ><br>
> > Both firewalls are turned off.<br>
> ><br>
> > (2)about DNS<br>
> ><br>
> > for query, /etc/hosts is as below:<br>
> ><br>
> > 127.0.0.1 localhost<br>
> > #127.0.1.1 query<br>
> ><br>
> > xxx.xxx.xxx.42 trigger<br>
> > xxx.xxx.xxx.43 query<br>
> ><br>
> > for trigger, /etc/hosts is as below:<br>
> > 127.0.0.1 localhost<br>
> > #127.0.1.1 trigger<br>
> ><br>
> > xxx.xxx.xxx.42 trigger<br>
> > xxx.xxx.xxx.43 query<br>
> ><br>
> > In fact, they are the same<br>
> ><br>
> > (3) version of MPICH2<br>
> ><br>
> > mpich2-1.3.2p1, it is from <a href="http://www.mcs.anl.gov/research/projects/mpich2/downloads/index.php?s=downloads" target="_blank">http://www.mcs.anl.gov/research/projects/mpich2/downloads/index.php?s=downloads</a><br>
> > As you can notice, it is called stable version<br>
> ><br>
> > (4) about configure.<br>
> ><br>
> > I did nothing about this. I just use the -prefix option. Do I need more about this?<br>
> ><br>
> > Now hellowworld workds fine on two hosts, cpi works fine on single one host. The problem is probably that the two hosts can't communicate. So any suggestion?<br>
> ><br>
> > Best Wishes,<br>
> ><br>
> > On Fri, May 27, 2011 at 11:55 AM, Dave Goodell <<a href="mailto:goodell@mcs.anl.gov">goodell@mcs.anl.gov</a>> wrote:<br>
> > The problem looks like a networking issue, either a firewall or DNS (bad /etc/hosts file?) issue. Are the firewalls disabled on these machines? How are the hostnames configured?<br>
> ><br>
> > What version of MPICH2 is this? What configure options did you use when you built MPICH2?<br>
> ><br>
> > -Dave<br>
> ><br>
> > On May 27, 2011, at 10:49 AM CDT, Fujun Liu wrote:<br>
> ><br>
> > > The cpi also does not work. There is no error message, but it takes forever:<br>
> > ><br>
> > > xxxx@query:~/MPI$ mpiexec -n 2 -f machinefile /home/netlab/MPI/mpich2-build/examples/cpi<br>
> > > Process 1 of 2 is on query<br>
> > > Process 0 of 2 is on trigger<br>
> > ><br>
> > > I think my two hosts are still trying to communicate to each other. Any suggestions?<br>
> > ><br>
> > > Best wishes,<br>
> > ><br>
> > ><br>
> > > On Fri, May 27, 2011 at 9:42 AM, Dave Goodell <<a href="mailto:goodell@mcs.anl.gov">goodell@mcs.anl.gov</a>> wrote:<br>
> > > Does the "examples/cpi" program from the MPICH2 build directory work correctly for you when you run it on multiple nodes?<br>
> > ><br>
> > > -Dave<br>
> > ><br>
> > > On May 26, 2011, at 5:49 PM CDT, Fujun Liu wrote:<br>
> > ><br>
> > > > Hi everyone,<br>
> > > ><br>
> > > > When I try one example from <a href="http://beige.ucs.indiana.edu/I590/node62.html" target="_blank">http://beige.ucs.indiana.edu/I590/node62.html</a>, I got the following error message as below. In the MPI cluster, there are two hosts. If I run the two processes on just one host, everything works fine. But if I run two processes on the two-host cluster, the following error happens. I think the two hosts just can't send/receive message to each other, but I don't know how to resolve this.<br>
> > > ><br>
> > > > Thanks in advance!<br>
> > > ><br>
> > > > xxxx@query:~/MPI$ mpiexec -n 2 -f machinefile ./GreetMaster<br>
> > > > Fatal error in PMPI_Bcast: Other MPI error, error stack:<br>
> > > > PMPI_Bcast(1430).......................: MPI_Bcast(buf=0x7fff13114cb0, count=8192, MPI_CHAR, root=0, MPI_COMM_WORLD) failed<br>
> > > > MPIR_Bcast_impl(1273)..................:<br>
> > > > MPIR_Bcast_intra(1107).................:<br>
> > > > MPIR_Bcast_binomial(143)...............:<br>
> > > > MPIC_Recv(110).........................:<br>
> > > > MPIC_Wait(540).........................:<br>
> > > > MPIDI_CH3I_Progress(353)...............:<br>
> > > > MPID_nem_mpich2_blocking_recv(905).....:<br>
> > > > MPID_nem_tcp_connpoll(1823)............:<br>
> > > > state_commrdy_handler(1665)............:<br>
> > > > MPID_nem_tcp_recv_handler(1559)........:<br>
> > > > MPID_nem_handle_pkt(587)...............:<br>
> > > > MPIDI_CH3_PktHandler_EagerSend(632)....: failure occurred while posting a receive for message data (MPIDI_CH3_PKT_EAGER_SEND)<br>
> > > > MPIDI_CH3U_Receive_data_unexpected(251): Out of memory (unable to allocate -1216907051 bytes)<br>
> > > > [mpiexec@query] ONE OF THE PROCESSES TERMINATED BADLY: CLEANING UP<br>
> > > > APPLICATION TERMINATED WITH THE EXIT STRING: Hangup (signal 1)<br>
> > > ><br>
> > > > --<br>
> > > > Fujun Liu<br>
> > > > Department of Computer Science, University of Kentucky, 2010.08-<br>
> > > > <a href="mailto:fujun.liu@uky.edu">fujun.liu@uky.edu</a>, <a href="tel:%28859%29229-3659" value="+18592293659">(859)229-3659</a><br>
> > > ><br>
> > > ><br>
> > > ><br>
> > > > _______________________________________________<br>
> > > > mpich-discuss mailing list<br>
> > > > <a href="mailto:mpich-discuss@mcs.anl.gov">mpich-discuss@mcs.anl.gov</a><br>
> > > > <a href="https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss" target="_blank">https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss</a><br>
> > ><br>
> > > _______________________________________________<br>
> > > mpich-discuss mailing list<br>
> > > <a href="mailto:mpich-discuss@mcs.anl.gov">mpich-discuss@mcs.anl.gov</a><br>
> > > <a href="https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss" target="_blank">https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss</a><br>
> > ><br>
> > ><br>
> > ><br>
> > > --<br>
> > > Fujun Liu<br>
> > > Department of Computer Science, University of Kentucky, 2010.08-<br>
> > > <a href="mailto:fujun.liu@uky.edu">fujun.liu@uky.edu</a>, <a href="tel:%28859%29229-3659" value="+18592293659">(859)229-3659</a><br>
> > ><br>
> > ><br>
> > ><br>
> > > _______________________________________________<br>
> > > mpich-discuss mailing list<br>
> > > <a href="mailto:mpich-discuss@mcs.anl.gov">mpich-discuss@mcs.anl.gov</a><br>
> > > <a href="https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss" target="_blank">https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss</a><br>
> ><br>
> > _______________________________________________<br>
> > mpich-discuss mailing list<br>
> > <a href="mailto:mpich-discuss@mcs.anl.gov">mpich-discuss@mcs.anl.gov</a><br>
> > <a href="https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss" target="_blank">https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss</a><br>
> ><br>
> ><br>
> ><br>
> > --<br>
> > Fujun Liu<br>
> > Department of Computer Science, University of Kentucky, 2010.08-<br>
> > <a href="mailto:fujun.liu@uky.edu">fujun.liu@uky.edu</a>, <a href="tel:%28859%29229-3659" value="+18592293659">(859)229-3659</a><br>
> ><br>
> ><br>
> ><br>
> > _______________________________________________<br>
> > mpich-discuss mailing list<br>
> > <a href="mailto:mpich-discuss@mcs.anl.gov">mpich-discuss@mcs.anl.gov</a><br>
> > <a href="https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss" target="_blank">https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss</a><br>
><br>
> _______________________________________________<br>
> mpich-discuss mailing list<br>
> <a href="mailto:mpich-discuss@mcs.anl.gov">mpich-discuss@mcs.anl.gov</a><br>
> <a href="https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss" target="_blank">https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss</a><br>
><br>
><br>
><br>
> --<br>
> Fujun Liu<br>
> Department of Computer Science, University of Kentucky, 2010.08-<br>
> <a href="mailto:fujun.liu@uky.edu">fujun.liu@uky.edu</a>, <a href="tel:%28859%29229-3659" value="+18592293659">(859)229-3659</a><br>
><br>
><br>
><br>
> _______________________________________________<br>
> mpich-discuss mailing list<br>
> <a href="mailto:mpich-discuss@mcs.anl.gov">mpich-discuss@mcs.anl.gov</a><br>
> <a href="https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss" target="_blank">https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss</a><br>
<br>
_______________________________________________<br>
mpich-discuss mailing list<br>
<a href="mailto:mpich-discuss@mcs.anl.gov">mpich-discuss@mcs.anl.gov</a><br>
<a href="https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss" target="_blank">https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss</a><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div>Fujun Liu<br>Department of Computer Science, University of Kentucky, 2010.08-<br></div>
<div><a href="mailto:fujun.liu@uky.edu" target="_blank">fujun.liu@uky.edu</a>, (859)229-3659</div>
<div><br> </div><br>