Hello everyone!

I am new to MPICH2 and am trying to build a small diskless cluster running CentOS Linux. I installed Torque as the job manager and installed mpich2-1.4 on all three compute nodes (slave0, slave2, slave3). There were no error messages during installation, so I assumed it was working.

But when I submit a sample MPI program (named a.out; a rough sketch of my submit script is at the end of this message), `ps aux | grep a.out` shows an 'a.out' process running on all three compute nodes, while `ps aux | grep mpiexec` shows only one 'mpiexec' process, on one of the three nodes.

It seems that Torque did allocate the computing resources to the job (which requests three compute nodes), but mpiexec does not appear to run simultaneously on all three nodes. I compiled mpich2 with the default process manager, Hydra. I even tried mpich2-1.3 with mpdboot, etc., and all show a similar problem. I have also checked password-less ssh, and it works fine, so I am now clueless.

Since I am new to MPICH2, my question to those of you with more experience is: is this an MPICH2 problem, or is it somehow related to Torque (pbs_mom, pbs_server, pbs_sched, or Maui)?

By the way, I ran mpiexec with -v to get more information, and this error message really confuses me:

[proxy:0:0@slave3] we don't understand this command put; forwarding upstream

Anyhow, I am posting the whole output below; sorry it is a little long.
Thanks in advance!

==========

host: slave3
host: slave2
host: slave0

==================================================================================================
mpiexec options:
----------------
 Base path: /usr/local/bin/
 Launcher: (null)
 Debug level: 1
 Enable X: -1

 Global environment:
 -------------------
 HOSTNAME=slave3
 PBS_VERSION=TORQUE-3.0.2
 SHELL=/bin/bash
 HISTSIZE=1000
 PBS_JOBNAME=p
 TMPDIR=/tmp/90.master
 PBS_ENVIRONMENT=PBS_BATCH
 OLDPWD=/home/tony
 PBS_O_WORKDIR=/home/tony/ab/test
 USER=tony
 PBS_TASKNUM=1
 LS_COLORS=
 LD_LIBRARY_PATH=/lib:/lib64:/usr/lib:/usr/lib64:/usr/local/lib:/usr/local/lib64:/root/resource/XCrySDen-1.5.24-bin-semishared/external/lib:
 PBS_O_HOME=/home/tony
 PBS_MOMPORT=15003
 PBS_GPUFILE=/var/spool/torque/aux//90.mastergpu
 PBS_O_QUEUE=batch
 PATH=./:/home/tony/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/resource/XCrySDen-1.5.17-bin-semishared/scripts:/usr/totalview/bin:/usr/local/maui/bin:/usr/local/maui/sbin:/u/bin:/u/sbin:/home/tony/bin
 PBS_O_LOGNAME=tony
 MAIL=/var/spool/mail/tony
 PBS_O_LANG=en_US.UTF-8
 PBS_JOBCOOKIE=E6650E0AF337859CA4C760C88A1B9D5F
 PWD=/home/tony/ab/test
 INPUTRC=/etc/inputrc
 LANG=en_US.UTF-8
 PBS_NODENUM=0
 PBS_NUM_NODES=3
 PBS_O_SHELL=/bin/bash
 PBS_SERVER=master
 PBS_JOBID=90.master
 ENVIRONMENT=BATCH
 HOME=/home/tony
 SHLVL=2
 PBS_O_HOST=master
 PBS_VNODENUM=0
 LOGNAME=tony
 PBS_QUEUE=batch
 PBS_O_MAIL=/var/spool/mail/tony
 LESSOPEN=|/usr/bin/lesspipe.sh %s
 PBS_NP=3
 PBS_NUM_PPN=1
 PBS_NODEFILE=/var/spool/torque/aux//90.master
 G_BROKEN_FILENAMES=1
 PBS_O_PATH=./:/home/tony/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/resource/XCrySDen-1.5.17-bin-semishared/scripts:/usr/totalview/bin:/usr/local/maui/bin:/usr/local/maui/sbin:/u/bin:/u/sbin
 _=/usr/local/bin/mpiexec
 Hydra internal environment:
 ---------------------------
 GFORTRAN_UNBUFFERED_PRECONNECTED=y


 Proxy information:
*********************
 [1] proxy: slave3 (1 cores)
 Exec list: /home/tony/ab/test/a.out (1 processes);

 [2] proxy: slave2 (1 cores)
 Exec list: /home/tony/ab/test/a.out (1 processes);

 [3] proxy: slave0 (1 cores)
 Exec list: /home/tony/ab/test/a.out (1 processes);


==================================================================================================

[mpiexec@slave3] Timeout set to -1 (-1 means infinite)
[mpiexec@slave3] Got a control port string of slave3:57288

Proxy launch args: /usr/local/bin/hydra_pmi_proxy --control-port slave3:57288 --debug --demux poll --pgid 0 --retries 10 --proxy-id

[mpiexec@slave3] PMI FD: (null); PMI PORT: (null); PMI ID/RANK: -1
Arguments being passed to proxy 0:
--version 1.4 --interface-env-name MPICH_INTERFACE_HOSTNAME --hostname slave3 --global-core-map 0,1,2 --filler-process-map 0,1,2 --global-process-count 3 --auto-cleanup 1 --pmi-rank -1 --pmi-kvsname kvs_4814_0 --pmi-process-mapping (vector,(0,3,1)) --ckpoint-num -1 --global-inherited-env 45 'HOSTNAME=slave3' 'PBS_VERSION=TORQUE-3.0.2' 'SHELL=/bin/bash' 'HISTSIZE=1000' 'PBS_JOBNAME=p' 'TMPDIR=/tmp/90.master' 'PBS_ENVIRONMENT=PBS_BATCH' 'OLDPWD=/home/tony' 'PBS_O_WORKDIR=/home/tony/ab/test' 'USER=tony' 'PBS_TASKNUM=1' 'LS_COLORS=' 'LD_LIBRARY_PATH=/lib:/lib64:/usr/lib:/usr/lib64:/usr/local/lib:/usr/local/lib64:/root/resource/XCrySDen-1.5.24-bin-semishared/external/lib:' 'PBS_O_HOME=/home/tony' 'PBS_MOMPORT=15003' 'PBS_GPUFILE=/var/spool/torque/aux//90.mastergpu' 'PBS_O_QUEUE=batch' 'PATH=./:/home/tony/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/resource/XCrySDen-1.5.17-bin-semishared/scripts:/usr/totalview/bin:/usr/local/maui/bin:/usr/local/maui/sbin:/u/bin:/u/sbin:/home/tony/bin' 'PBS_O_LOGNAME=tony' 'MAIL=/var/spool/mail/tony' 'PBS_O_LANG=en_US.UTF-8' 'PBS_JOBCOOKIE=E6650E0AF337859CA4C760C88A1B9D5F' 'PWD=/home/tony/ab/test' 'INPUTRC=/etc/inputrc' 'LANG=en_US.UTF-8' 'PBS_NODENUM=0' 'PBS_NUM_NODES=3' 'PBS_O_SHELL=/bin/bash' 'PBS_SERVER=master' 'PBS_JOBID=90.master' 'ENVIRONMENT=BATCH' 'HOME=/home/tony' 'SHLVL=2' 'PBS_O_HOST=master' 'PBS_VNODENUM=0' 'LOGNAME=tony' 'PBS_QUEUE=batch' 'PBS_O_MAIL=/var/spool/mail/tony' 'LESSOPEN=|/usr/bin/lesspipe.sh %s' 'PBS_NP=3' 'PBS_NUM_PPN=1' 'PBS_NODEFILE=/var/spool/torque/aux//90.master' 'G_BROKEN_FILENAMES=1' 'PBS_O_PATH=./:/home/tony/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/resource/XCrySDen-1.5.17-bin-semishared/scripts:/usr/totalview/bin:/usr/local/maui/bin:/usr/local/maui/sbin:/u/bin:/u/sbin' '_=/usr/local/bin/mpiexec' --global-user-env 0 --global-system-env 1 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' --proxy-core-count 1 --exec --exec-appnum 0 --exec-proc-count 1 --exec-local-env 0 --exec-wdir /home/tony/ab/test --exec-args 1 /home/tony/ab/test/a.out

[mpiexec@slave3] PMI FD: (null); PMI PORT: (null); PMI ID/RANK: -1
Arguments being passed to proxy 1:
--version 1.4 --interface-env-name MPICH_INTERFACE_HOSTNAME --hostname slave2 --global-core-map 1,1,1 --filler-process-map 1,1,1 --global-process-count 3 --auto-cleanup 1 --pmi-rank -1 --pmi-kvsname kvs_4814_0 --pmi-process-mapping (vector,(0,3,1)) --ckpoint-num -1 --global-inherited-env 45 'HOSTNAME=slave3' 'PBS_VERSION=TORQUE-3.0.2' 'SHELL=/bin/bash' 'HISTSIZE=1000' 'PBS_JOBNAME=p' 'TMPDIR=/tmp/90.master' 'PBS_ENVIRONMENT=PBS_BATCH' 'OLDPWD=/home/tony' 'PBS_O_WORKDIR=/home/tony/ab/test' 'USER=tony' 'PBS_TASKNUM=1' 'LS_COLORS=' 'LD_LIBRARY_PATH=/lib:/lib64:/usr/lib:/usr/lib64:/usr/local/lib:/usr/local/lib64:/root/resource/XCrySDen-1.5.24-bin-semishared/external/lib:' 'PBS_O_HOME=/home/tony' 'PBS_MOMPORT=15003' 'PBS_GPUFILE=/var/spool/torque/aux//90.mastergpu' 'PBS_O_QUEUE=batch' 'PATH=./:/home/tony/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/resource/XCrySDen-1.5.17-bin-semishared/scripts:/usr/totalview/bin:/usr/local/maui/bin:/usr/local/maui/sbin:/u/bin:/u/sbin:/home/tony/bin' 'PBS_O_LOGNAME=tony' 'MAIL=/var/spool/mail/tony' 'PBS_O_LANG=en_US.UTF-8' 'PBS_JOBCOOKIE=E6650E0AF337859CA4C760C88A1B9D5F' 'PWD=/home/tony/ab/test' 'INPUTRC=/etc/inputrc' 'LANG=en_US.UTF-8' 'PBS_NODENUM=0' 'PBS_NUM_NODES=3' 'PBS_O_SHELL=/bin/bash' 'PBS_SERVER=master' 'PBS_JOBID=90.master' 'ENVIRONMENT=BATCH' 'HOME=/home/tony' 'SHLVL=2' 'PBS_O_HOST=master' 'PBS_VNODENUM=0' 'LOGNAME=tony' 'PBS_QUEUE=batch' 'PBS_O_MAIL=/var/spool/mail/tony' 'LESSOPEN=|/usr/bin/lesspipe.sh %s' 'PBS_NP=3' 'PBS_NUM_PPN=1' 'PBS_NODEFILE=/var/spool/torque/aux//90.master' 'G_BROKEN_FILENAMES=1' 'PBS_O_PATH=./:/home/tony/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/resource/XCrySDen-1.5.17-bin-semishared/scripts:/usr/totalview/bin:/usr/local/maui/bin:/usr/local/maui/sbin:/u/bin:/u/sbin' '_=/usr/local/bin/mpiexec' --global-user-env 0 --global-system-env 1 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' --proxy-core-count 1 --exec --exec-appnum 0 --exec-proc-count 1 --exec-local-env 0 --exec-wdir /home/tony/ab/test --exec-args 1 /home/tony/ab/test/a.out

[mpiexec@slave3] PMI FD: (null); PMI PORT: (null); PMI ID/RANK: -1
Arguments being passed to proxy 2:
--version 1.4 --interface-env-name MPICH_INTERFACE_HOSTNAME --hostname slave0 --global-core-map 2,1,0 --filler-process-map 2,1,0 --global-process-count 3 --auto-cleanup 1 --pmi-rank -1 --pmi-kvsname kvs_4814_0 --pmi-process-mapping (vector,(0,3,1)) --ckpoint-num -1 --global-inherited-env 45 'HOSTNAME=slave3' 'PBS_VERSION=TORQUE-3.0.2' 'SHELL=/bin/bash' 'HISTSIZE=1000' 'PBS_JOBNAME=p' 'TMPDIR=/tmp/90.master' 'PBS_ENVIRONMENT=PBS_BATCH' 'OLDPWD=/home/tony' 'PBS_O_WORKDIR=/home/tony/ab/test' 'USER=tony' 'PBS_TASKNUM=1' 'LS_COLORS=' 'LD_LIBRARY_PATH=/lib:/lib64:/usr/lib:/usr/lib64:/usr/local/lib:/usr/local/lib64:/root/resource/XCrySDen-1.5.24-bin-semishared/external/lib:' 'PBS_O_HOME=/home/tony' 'PBS_MOMPORT=15003' 'PBS_GPUFILE=/var/spool/torque/aux//90.mastergpu' 'PBS_O_QUEUE=batch' 'PATH=./:/home/tony/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/resource/XCrySDen-1.5.17-bin-semishared/scripts:/usr/totalview/bin:/usr/local/maui/bin:/usr/local/maui/sbin:/u/bin:/u/sbin:/home/tony/bin' 'PBS_O_LOGNAME=tony' 'MAIL=/var/spool/mail/tony' 'PBS_O_LANG=en_US.UTF-8' 'PBS_JOBCOOKIE=E6650E0AF337859CA4C760C88A1B9D5F' 'PWD=/home/tony/ab/test' 'INPUTRC=/etc/inputrc' 'LANG=en_US.UTF-8' 'PBS_NODENUM=0' 'PBS_NUM_NODES=3' 'PBS_O_SHELL=/bin/bash' 'PBS_SERVER=master' 'PBS_JOBID=90.master' 'ENVIRONMENT=BATCH' 'HOME=/home/tony' 'SHLVL=2' 'PBS_O_HOST=master' 'PBS_VNODENUM=0' 'LOGNAME=tony' 'PBS_QUEUE=batch' 'PBS_O_MAIL=/var/spool/mail/tony' 'LESSOPEN=|/usr/bin/lesspipe.sh %s' 'PBS_NP=3' 'PBS_NUM_PPN=1' 'PBS_NODEFILE=/var/spool/torque/aux//90.master' 'G_BROKEN_FILENAMES=1' 'PBS_O_PATH=./:/home/tony/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/resource/XCrySDen-1.5.17-bin-semishared/scripts:/usr/totalview/bin:/usr/local/maui/bin:/usr/local/maui/sbin:/u/bin:/u/sbin' '_=/usr/local/bin/mpiexec' --global-user-env 0 --global-system-env 1 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' --proxy-core-count 1 --exec --exec-appnum 0 --exec-proc-count 1 --exec-local-env 0 --exec-wdir /home/tony/ab/test --exec-args 1 /home/tony/ab/test/a.out

[mpiexec@slave3] Launch arguments: /usr/local/bin/hydra_pmi_proxy --control-port slave3:57288 --debug --demux poll --pgid 0 --retries 10 --proxy-id 0
[mpiexec@slave3] Launch arguments: /usr/bin/ssh -x slave2 "/usr/local/bin/hydra_pmi_proxy" --control-port slave3:57288 --debug --demux poll --pgid 0 --retries 10 --proxy-id 1
[mpiexec@slave3] Launch arguments: /usr/bin/ssh -x slave0 "/usr/local/bin/hydra_pmi_proxy" --control-port slave3:57288 --debug --demux poll --pgid 0 --retries 10 --proxy-id 2
[proxy:0:0@slave3] got pmi command (from 0): init
pmi_version=1 pmi_subversion=1
[proxy:0:0@slave3] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0
[proxy:0:0@slave3] got pmi command (from 0): get_maxes

[proxy:0:0@slave3] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024
[proxy:0:0@slave3] got pmi command (from 0): get_appnum

[proxy:0:0@slave3] PMI response: cmd=appnum appnum=0
[proxy:0:0@slave3] got pmi command (from 0): get_my_kvsname

[proxy:0:0@slave3] PMI response: cmd=my_kvsname kvsname=kvs_4814_0
[proxy:0:0@slave3] got pmi command (from 0): get_my_kvsname

[proxy:0:0@slave3] PMI response: cmd=my_kvsname kvsname=kvs_4814_0
[proxy:0:0@slave3] got pmi command (from 0): get
kvsname=kvs_4814_0 key=PMI_process_mapping
[proxy:0:0@slave3] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,3,1))
[proxy:0:0@slave3] got pmi command (from 0): barrier_in


[mpiexec@slave3] [pgid: 0] got PMI command: cmd=barrier_in
[proxy:0:0@slave3] forwarding command (cmd=barrier_in) upstream
[mpiexec@slave3] [pgid: 0] got PMI command: cmd=barrier_in
[mpiexec@slave3] [pgid: 0] got PMI command: cmd=barrier_in
[mpiexec@slave3] PMI response to fd 6 pid 4: cmd=barrier_out
[mpiexec@slave3] PMI response to fd 7 pid 4: cmd=barrier_out
[mpiexec@slave3] PMI response to fd 15 pid 4: cmd=barrier_out
[proxy:0:0@slave3] PMI response: cmd=barrier_out
[proxy:0:0@slave3] got pmi command (from 0): put
kvsname=kvs_4814_0 key=P0-businesscard value=description#slave3$port#48677$ifname#192.168.1.111$
[proxy:0:0@slave3] we don't understand this command put; forwarding upstream
[mpiexec@slave3] [pgid: 0] got PMI command: cmd=put kvsname=kvs_4814_0 key=P0-businesscard value=description#slave3$port#48677$ifname#192.168.1.111$
[mpiexec@slave3] PMI response to fd 6 pid 0: cmd=put_result rc=0 msg=success
[mpiexec@slave3] [pgid: 0] got PMI command: cmd=put kvsname=kvs_4814_0 key=P1-businesscard value=description#slave2$port#52192$ifname#192.168.1.110$
[mpiexec@slave3] PMI response to fd 7 pid 4: cmd=put_result rc=0 msg=success
[mpiexec@slave3] [pgid: 0] got PMI command: cmd=put kvsname=kvs_4814_0 key=P2-businesscard value=description#slave0$port#57785$ifname#192.168.1.108$
[mpiexec@slave3] PMI response to fd 15 pid 4: cmd=put_result rc=0 msg=success
[proxy:0:0@slave3] we don't understand the response put_result; forwarding downstream
[proxy:0:0@slave3] got pmi command (from 0): barrier_in

[proxy:0:0@slave3] forwarding command (cmd=barrier_in) upstream
[mpiexec@slave3] [pgid: 0] got PMI command: cmd=barrier_in
[proxy:0:1@slave2] got pmi command (from 4): init
pmi_version=1 pmi_subversion=1
[proxy:0:1@slave2] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0
[proxy:0:1@slave2] got pmi command (from 4): get_maxes

[proxy:0:1@slave2] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024
[proxy:0:1@slave2] got pmi command (from 4): get_appnum

[proxy:0:1@slave2] PMI response: cmd=appnum appnum=0
[proxy:0:1@slave2] got pmi command (from 4): get_my_kvsname

[proxy:0:1@slave2] PMI response: cmd=my_kvsname kvsname=kvs_4814_0
[proxy:0:1@slave2] got pmi command (from 4): get_my_kvsname

[proxy:0:1@slave2] PMI response: cmd=my_kvsname kvsname=kvs_4814_0
[proxy:0:1@slave2] got pmi command (from 4): get
kvsname=kvs_4814_0 key=PMI_process_mapping
[proxy:0:1@slave2] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,3,1))
[proxy:0:1@slave2] got pmi command (from 4): barrier_in

[proxy:0:1@slave2] forwarding command (cmd=barrier_in) upstream
[proxy:0:1@slave2] PMI response: cmd=barrier_out
[mpiexec@slave3] [pgid: 0] got PMI command: cmd=barrier_in
[mpiexec@slave3] [pgid: 0] got PMI command: cmd=barrier_in
[mpiexec@slave3] PMI response to fd 6 pid 4: cmd=barrier_out
[mpiexec@slave3] PMI response to fd 7 pid 4: cmd=barrier_out
[mpiexec@slave3] PMI response to fd 15 pid 4: cmd=barrier_out
[proxy:0:1@slave2] got pmi command (from 4): put
kvsname=kvs_4814_0 key=P1-businesscard value=description#slave2$port#52192$ifname#192.168.1.110$
[proxy:0:1@slave2] we don't understand this command put; forwarding upstream
[proxy:0:1@slave2] we don't understand the response put_result; forwarding downstream
[proxy:0:1@slave2] got pmi command (from 4): barrier_in

[proxy:0:1@slave2] forwarding command (cmd=barrier_in) upstream
[proxy:0:0@slave3] PMI response: cmd=barrier_out
[proxy:0:2@slave0] got pmi command (from 4): init
pmi_version=1 pmi_subversion=1
[proxy:0:2@slave0] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0
[proxy:0:2@slave0] got pmi command (from 4): get_maxes

[proxy:0:2@slave0] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024
[proxy:0:2@slave0] got pmi command (from 4): get_appnum

[proxy:0:2@slave0] PMI response: cmd=appnum appnum=0
[proxy:0:2@slave0] got pmi command (from 4): get_my_kvsname

[proxy:0:2@slave0] PMI response: cmd=my_kvsname kvsname=kvs_4814_0
[proxy:0:2@slave0] got pmi command (from 4): get_my_kvsname

[proxy:0:2@slave0] PMI response: cmd=my_kvsname kvsname=kvs_4814_0
[proxy:0:2@slave0] got pmi command (from 4): get
kvsname=kvs_4814_0 key=PMI_process_mapping
[proxy:0:2@slave0] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,3,1))
[proxy:0:2@slave0] got pmi command (from 4): barrier_in
[proxy:0:2@slave0] forwarding command (cmd=barrier_in) upstream
[proxy:0:2@slave0] PMI response: cmd=barrier_out
Give number of samples in each process: [proxy:0:2@slave0] got pmi command (from 4): put
kvsname=kvs_4814_0 key=P2-businesscard value=description#slave0$port#57785$ifname#192.168.1.108$
[proxy:0:2@slave0] we don't understand this command put; forwarding upstream
[proxy:0:2@slave0] we don't understand the response put_result; forwarding downstream
[proxy:0:2@slave0] got pmi command (from 4): barrier_in

[proxy:0:2@slave0] forwarding command (cmd=barrier_in) upstream
[proxy:0:0@slave3] got pmi command (from 0): get
kvsname=kvs_4814_0 key=P2-businesscard
[mpiexec@slave3] [pgid: 0] got PMI command: cmd=get kvsname=kvs_4814_0 key=P2-businesscard
[mpiexec@slave3] PMI response to fd 6 pid 0: cmd=get_result rc=0 msg=success value=description#slave0$port#57785$ifname#192.168.1.108$
[proxy:0:0@slave3] forwarding command (cmd=get kvsname=kvs_4814_0 key=P2-businesscard) upstream
[proxy:0:0@slave3] we don't understand the response get_result; forwarding downstream
[proxy:0:0@slave3] got pmi command (from 0): get
kvsname=kvs_4814_0 key=P1-businesscard
[proxy:0:0@slave3] forwarding command (cmd=get kvsname=kvs_4814_0 key=P1-businesscard) upstream
[mpiexec@slave3] [pgid: 0] got PMI command: cmd=get kvsname=kvs_4814_0 key=P1-businesscard
[mpiexec@slave3] PMI response to fd 6 pid 0: cmd=get_result rc=0 msg=success value=description#slave2$port#52192$ifname#192.168.1.110$
[proxy:0:0@slave3] we don't understand the response get_result; forwarding downstream
Process 2 of 3 is on slave0
Number of samples is 0
Process 0 of 3 is on slave3
Process 1 of 3 is on slave2
The computed value of Pi is nan
The "exact" value of Pi is 3.141592653589793115997963
The difference is nan
wall clock time = 0.000128
[proxy:0:1@slave2] PMI response: cmd=barrier_out
[proxy:0:2@slave0] PMI response: cmd=barrier_out
[proxy:0:1@slave2] got pmi command (from 4): finalize

[proxy:0:1@slave2] PMI response: cmd=finalize_ack
[proxy:0:0@slave3] got pmi command (from 0): finalize

[proxy:0:0@slave3] PMI response: cmd=finalize_ack
[proxy:0:2@slave0] got pmi command (from 4): finalize

[proxy:0:2@slave0] PMI response: cmd=finalize_ack
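
P.S. In case it helps, the submit script I use looks roughly like the sketch below. I am reconstructing it from the job environment shown above (job name p, queue batch, nodes=3:ppn=1, working directory /home/tony/ab/test), so please treat the exact lines as approximate rather than a verbatim copy of the real script:

#!/bin/bash
#PBS -N p
#PBS -q batch
#PBS -l nodes=3:ppn=1

# run from the directory the job was submitted from
cd $PBS_O_WORKDIR

# no host file is passed explicitly here; the host list appears to be
# picked up from the Torque allocation, as shown in the output above
mpiexec -v ./a.out

I submit it with qsub from /home/tony/ab/test.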