[mpich-discuss] problem with mpiexec.hydra in mpich2-1.4

tony lee tonymidlee at hotmail.com
Fri Jul 29 08:09:47 CDT 2011


Hello everyone!

I am new to mpich2 and am trying to build a small diskless cluster (CentOS Linux). I installed Torque as the job manager and also installed mpich2-1.4 on all 3 compute nodes (slave0, slave2, slave3). There were no error messages during the installation, so I thought it should be working.
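
For the record, I built mpich2 on the nodes roughly like this (the --prefix matches the /usr/local/bin base path shown in the verbose output below; exact options are from memory):

    tar xzf mpich2-1.4.tar.gz && cd mpich2-1.4
    ./configure --prefix=/usr/local    # hydra is the default process manager in 1.4
    make
    make install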

But when I submit a sample MPI program (named a.out), `ps aux | grep a.out` shows that all 3 compute nodes have an 'a.out' process running, whereas `ps aux | grep mpiexec` shows that there is only one 'mpiexec' process, on one of the 3 nodes.
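
The checks were along these lines, run from the head node while the job was active (just a rough sketch of what I did):

    for h in slave0 slave2 slave3; do
        echo "== $h =="
        ssh $h ps aux | grep a.out      # shows up on every node
        ssh $h ps aux | grep mpiexec    # shows up on only one node
    done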

It seems that Torque did allocate the computing resources to the job (which requests 3 compute nodes), but mpiexec does not seem to run on all 3 nodes at once. I compiled mpich2 with the default process manager, hydra. I even tried mpich2-1.3 with mpdboot, etc., and they all showed a similar problem.
I also checked password-less ssh, and it works fine, so I am now clueless.
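
For the ssh test I did something like this from slave3 (the node where mpiexec ends up); each command comes back with the remote hostname and no password prompt, and -x matches what Hydra itself uses in the launch arguments further down:

    # no password prompt expected for either command
    ssh -x slave2 hostname
    ssh -x slave0 hostname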


I am new to mpich2, so my question to those of you with more experience in this field is: is this an MPICH2 problem, or is it somehow related to Torque (pbs_mom, pbs_server, pbs_sched, or maui)?

BTW, I used `mpiexec -v` to get more info. This error message really confuses me:
[proxy:0:0 at slave3] we don't understand this command put; forwarding upstream

Anyhow, I am posting the whole output below; sorry it is a bit long.
Thanks in advance!
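
For completeness, my job script is roughly this (reconstructed from memory; the node/ppn request matches the PBS_NUM_NODES / PBS_NUM_PPN values in the output below, and the exact mpiexec flags may differ):

    #!/bin/bash
    #PBS -N p
    #PBS -q batch
    #PBS -l nodes=3:ppn=1
    # change to the directory the job was submitted from
    cd $PBS_O_WORKDIR
    # Hydra mpiexec under /usr/local; -v produces the verbose log below
    /usr/local/bin/mpiexec -v -np 3 ./a.out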

==========

host: slave3
host: slave2
host: slave0

==================================================================================================
mpiexec options:
----------------
  Base path: /usr/local/bin/
  Launcher: (null)
  Debug level: 1
  Enable X: -1

  Global environment:
  -------------------
    HOSTNAME=slave3
    PBS_VERSION=TORQUE-3.0.2
    SHELL=/bin/bash
    HISTSIZE=1000
    PBS_JOBNAME=p
    TMPDIR=/tmp/90.master
    PBS_ENVIRONMENT=PBS_BATCH
    OLDPWD=/home/tony
    PBS_O_WORKDIR=/home/tony/ab/test
    USER=tony
    PBS_TASKNUM=1
    LS_COLORS=
    LD_LIBRARY_PATH=/lib:/lib64:/usr/lib:/usr/lib64:/usr/local/lib:/usr/local/lib64:/root/resource/XCrySDen-1.5.24-bin-semishared/external/lib:
    PBS_O_HOME=/home/tony
    PBS_MOMPORT=15003
    PBS_GPUFILE=/var/spool/torque/aux//90.mastergpu
    PBS_O_QUEUE=batch
    PATH=./:/home/tony/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/resource/XCrySDen-1.5.17-bin-semishared/scripts:/usr/totalview/bin:/usr/local/maui/bin:/usr/local/maui/sbin:/u/bin:/u/sbin:/home/tony/bin
    PBS_O_LOGNAME=tony
    MAIL=/var/spool/mail/tony
    PBS_O_LANG=en_US.UTF-8
    PBS_JOBCOOKIE=E6650E0AF337859CA4C760C88A1B9D5F
    PWD=/home/tony/ab/test
    INPUTRC=/etc/inputrc
    LANG=en_US.UTF-8
    PBS_NODENUM=0
    PBS_NUM_NODES=3
    PBS_O_SHELL=/bin/bash
    PBS_SERVER=master
    PBS_JOBID=90.master
    ENVIRONMENT=BATCH
    HOME=/home/tony
    SHLVL=2
    PBS_O_HOST=master
    PBS_VNODENUM=0
    LOGNAME=tony
    PBS_QUEUE=batch
    PBS_O_MAIL=/var/spool/mail/tony
    LESSOPEN=|/usr/bin/lesspipe.sh %s
    PBS_NP=3
    PBS_NUM_PPN=1
    PBS_NODEFILE=/var/spool/torque/aux//90.master
    G_BROKEN_FILENAMES=1
    PBS_O_PATH=./:/home/tony/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/resource/XCrySDen-1.5.17-bin-semishared/scripts:/usr/totalview/bin:/usr/local/maui/bin:/usr/local/maui/sbin:/u/bin:/u/sbin
    _=/usr/local/bin/mpiexec
  Hydra internal environment:
  ---------------------------
    GFORTRAN_UNBUFFERED_PRECONNECTED=y


    Proxy information:
    *********************
      [1] proxy: slave3 (1 cores)
      Exec list: /home/tony/ab/test/a.out (1 processes);

      [2] proxy: slave2 (1 cores)
      Exec list: /home/tony/ab/test/a.out (1 processes);

      [3] proxy: slave0 (1 cores)
      Exec list: /home/tony/ab/test/a.out (1 processes);


==================================================================================================

[mpiexec at slave3] Timeout set to -1 (-1 means infinite)
[mpiexec at slave3] Got a control port string of slave3:57288

Proxy launch args: /usr/local/bin/hydra_pmi_proxy --control-port slave3:57288 --debug --demux poll --pgid 0 --retries 10 --proxy-id

[mpiexec at slave3] PMI FD: (null); PMI PORT: (null); PMI ID/RANK: -1
Arguments being passed to proxy 0:
--version 1.4 --interface-env-name MPICH_INTERFACE_HOSTNAME --hostname slave3 --global-core-map 0,1,2 --filler-process-map 0,1,2 --global-process-count 3 --auto-cleanup 1 --pmi-rank -1 --pmi-kvsname kvs_4814_0 --pmi-process-mapping (vector,(0,3,1)) --ckpoint-num -1 --global-inherited-env 45 'HOSTNAME=slave3' 'PBS_VERSION=TORQUE-3.0.2' 'SHELL=/bin/bash' 'HISTSIZE=1000' 'PBS_JOBNAME=p' 'TMPDIR=/tmp/90.master' 'PBS_ENVIRONMENT=PBS_BATCH' 'OLDPWD=/home/tony' 'PBS_O_WORKDIR=/home/tony/ab/test' 'USER=tony' 'PBS_TASKNUM=1' 'LS_COLORS=' 'LD_LIBRARY_PATH=/lib:/lib64:/usr/lib:/usr/lib64:/usr/local/lib:/usr/local/lib64:/root/resource/XCrySDen-1.5.24-bin-semishared/external/lib:' 'PBS_O_HOME=/home/tony' 'PBS_MOMPORT=15003' 'PBS_GPUFILE=/var/spool/torque/aux//90.mastergpu' 'PBS_O_QUEUE=batch' 'PATH=./:/home/tony/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/resource/XCrySDen-1.5.17-bin-semishared/scripts:/usr/totalview/bin:/usr/local/maui/bin:/usr/local/maui/sbin:/u/bin:/u/sbin:/home/tony/bin' 'PBS_O_LOGNAME=tony' 'MAIL=/var/spool/mail/tony' 'PBS_O_LANG=en_US.UTF-8' 'PBS_JOBCOOKIE=E6650E0AF337859CA4C760C88A1B9D5F' 'PWD=/home/tony/ab/test' 'INPUTRC=/etc/inputrc' 'LANG=en_US.UTF-8' 'PBS_NODENUM=0' 'PBS_NUM_NODES=3' 'PBS_O_SHELL=/bin/bash' 'PBS_SERVER=master' 'PBS_JOBID=90.master' 'ENVIRONMENT=BATCH' 'HOME=/home/tony' 'SHLVL=2' 'PBS_O_HOST=master' 'PBS_VNODENUM=0' 'LOGNAME=tony' 'PBS_QUEUE=batch' 'PBS_O_MAIL=/var/spool/mail/tony' 'LESSOPEN=|/usr/bin/lesspipe.sh %s' 'PBS_NP=3' 'PBS_NUM_PPN=1' 'PBS_NODEFILE=/var/spool/torque/aux//90.master' 'G_BROKEN_FILENAMES=1' 'PBS_O_PATH=./:/home/tony/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/resource/XCrySDen-1.5.17-bin-semishared/scripts:/usr/totalview/bin:/usr/local/maui/bin:/usr/local/maui/sbin:/u/bin:/u/sbin' '_=/usr/local/bin/mpiexec' --global-user-env 0 --global-system-env 1 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' --proxy-core-count 1 --exec --exec-appnum 0 --exec-proc-count 1 --exec-local-env 0 --exec-wdir /home/tony/ab/test --exec-args 1 /home/tony/ab/test/a.out

[mpiexec at slave3] PMI FD: (null); PMI PORT: (null); PMI ID/RANK: -1
Arguments being passed to proxy 1:
--version 1.4 --interface-env-name MPICH_INTERFACE_HOSTNAME --hostname slave2 --global-core-map 1,1,1 --filler-process-map 1,1,1 --global-process-count 3 --auto-cleanup 1 --pmi-rank -1 --pmi-kvsname kvs_4814_0 --pmi-process-mapping (vector,(0,3,1)) --ckpoint-num -1 --global-inherited-env 45 'HOSTNAME=slave3' 'PBS_VERSION=TORQUE-3.0.2' 'SHELL=/bin/bash' 'HISTSIZE=1000' 'PBS_JOBNAME=p' 'TMPDIR=/tmp/90.master' 'PBS_ENVIRONMENT=PBS_BATCH' 'OLDPWD=/home/tony' 'PBS_O_WORKDIR=/home/tony/ab/test' 'USER=tony' 'PBS_TASKNUM=1' 'LS_COLORS=' 'LD_LIBRARY_PATH=/lib:/lib64:/usr/lib:/usr/lib64:/usr/local/lib:/usr/local/lib64:/root/resource/XCrySDen-1.5.24-bin-semishared/external/lib:' 'PBS_O_HOME=/home/tony' 'PBS_MOMPORT=15003' 'PBS_GPUFILE=/var/spool/torque/aux//90.mastergpu' 'PBS_O_QUEUE=batch' 'PATH=./:/home/tony/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/resource/XCrySDen-1.5.17-bin-semishared/scripts:/usr/totalview/bin:/usr/local/maui/bin:/usr/local/maui/sbin:/u/bin:/u/sbin:/home/tony/bin' 'PBS_O_LOGNAME=tony' 'MAIL=/var/spool/mail/tony' 'PBS_O_LANG=en_US.UTF-8' 'PBS_JOBCOOKIE=E6650E0AF337859CA4C760C88A1B9D5F' 'PWD=/home/tony/ab/test' 'INPUTRC=/etc/inputrc' 'LANG=en_US.UTF-8' 'PBS_NODENUM=0' 'PBS_NUM_NODES=3' 'PBS_O_SHELL=/bin/bash' 'PBS_SERVER=master' 'PBS_JOBID=90.master' 'ENVIRONMENT=BATCH' 'HOME=/home/tony' 'SHLVL=2' 'PBS_O_HOST=master' 'PBS_VNODENUM=0' 'LOGNAME=tony' 'PBS_QUEUE=batch' 'PBS_O_MAIL=/var/spool/mail/tony' 'LESSOPEN=|/usr/bin/lesspipe.sh %s' 'PBS_NP=3' 'PBS_NUM_PPN=1' 'PBS_NODEFILE=/var/spool/torque/aux//90.master' 'G_BROKEN_FILENAMES=1' 'PBS_O_PATH=./:/home/tony/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/resource/XCrySDen-1.5.17-bin-semishared/scripts:/usr/totalview/bin:/usr/local/maui/bin:/usr/local/maui/sbin:/u/bin:/u/sbin' '_=/usr/local/bin/mpiexec' --global-user-env 0 --global-system-env 1 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' --proxy-core-count 1 --exec --exec-appnum 0 --exec-proc-count 1 --exec-local-env 0 --exec-wdir /home/tony/ab/test --exec-args 1 /home/tony/ab/test/a.out

[mpiexec at slave3] PMI FD: (null); PMI PORT: (null); PMI ID/RANK: -1
Arguments being passed to proxy 2:
--version 1.4 --interface-env-name MPICH_INTERFACE_HOSTNAME --hostname slave0 --global-core-map 2,1,0 --filler-process-map 2,1,0 --global-process-count 3 --auto-cleanup 1 --pmi-rank -1 --pmi-kvsname kvs_4814_0 --pmi-process-mapping (vector,(0,3,1)) --ckpoint-num -1 --global-inherited-env 45 'HOSTNAME=slave3' 'PBS_VERSION=TORQUE-3.0.2' 'SHELL=/bin/bash' 'HISTSIZE=1000' 'PBS_JOBNAME=p' 'TMPDIR=/tmp/90.master' 'PBS_ENVIRONMENT=PBS_BATCH' 'OLDPWD=/home/tony' 'PBS_O_WORKDIR=/home/tony/ab/test' 'USER=tony' 'PBS_TASKNUM=1' 'LS_COLORS=' 'LD_LIBRARY_PATH=/lib:/lib64:/usr/lib:/usr/lib64:/usr/local/lib:/usr/local/lib64:/root/resource/XCrySDen-1.5.24-bin-semishared/external/lib:' 'PBS_O_HOME=/home/tony' 'PBS_MOMPORT=15003' 'PBS_GPUFILE=/var/spool/torque/aux//90.mastergpu' 'PBS_O_QUEUE=batch' 'PATH=./:/home/tony/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/resource/XCrySDen-1.5.17-bin-semishared/scripts:/usr/totalview/bin:/usr/local/maui/bin:/usr/local/maui/sbin:/u/bin:/u/sbin:/home/tony/bin' 'PBS_O_LOGNAME=tony' 'MAIL=/var/spool/mail/tony' 'PBS_O_LANG=en_US.UTF-8' 'PBS_JOBCOOKIE=E6650E0AF337859CA4C760C88A1B9D5F' 'PWD=/home/tony/ab/test' 'INPUTRC=/etc/inputrc' 'LANG=en_US.UTF-8' 'PBS_NODENUM=0' 'PBS_NUM_NODES=3' 'PBS_O_SHELL=/bin/bash' 'PBS_SERVER=master' 'PBS_JOBID=90.master' 'ENVIRONMENT=BATCH' 'HOME=/home/tony' 'SHLVL=2' 'PBS_O_HOST=master' 'PBS_VNODENUM=0' 'LOGNAME=tony' 'PBS_QUEUE=batch' 'PBS_O_MAIL=/var/spool/mail/tony' 'LESSOPEN=|/usr/bin/lesspipe.sh %s' 'PBS_NP=3' 'PBS_NUM_PPN=1' 'PBS_NODEFILE=/var/spool/torque/aux//90.master' 'G_BROKEN_FILENAMES=1' 'PBS_O_PATH=./:/home/tony/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/resource/XCrySDen-1.5.17-bin-semishared/scripts:/usr/totalview/bin:/usr/local/maui/bin:/usr/local/maui/sbin:/u/bin:/u/sbin' '_=/usr/local/bin/mpiexec' --global-user-env 0 --global-system-env 1 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' --proxy-core-count 1 --exec --exec-appnum 0 --exec-proc-count 1 --exec-local-env 0 --exec-wdir /home/tony/ab/test --exec-args 1 /home/tony/ab/test/a.out

[mpiexec at slave3] Launch arguments: /usr/local/bin/hydra_pmi_proxy --control-port slave3:57288 --debug --demux poll --pgid 0 --retries 10 --proxy-id 0
[mpiexec at slave3] Launch arguments: /usr/bin/ssh -x slave2 "/usr/local/bin/hydra_pmi_proxy" --control-port slave3:57288 --debug --demux poll --pgid 0 --retries 10 --proxy-id 1
[mpiexec at slave3] Launch arguments: /usr/bin/ssh -x slave0 "/usr/local/bin/hydra_pmi_proxy" --control-port slave3:57288 --debug --demux poll --pgid 0 --retries 10 --proxy-id 2
[proxy:0:0 at slave3] got pmi command (from 0): init
pmi_version=1 pmi_subversion=1
[proxy:0:0 at slave3] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0
[proxy:0:0 at slave3] got pmi command (from 0): get_maxes

[proxy:0:0 at slave3] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024
[proxy:0:0 at slave3] got pmi command (from 0): get_appnum

[proxy:0:0 at slave3] PMI response: cmd=appnum appnum=0
[proxy:0:0 at slave3] got pmi command (from 0): get_my_kvsname

[proxy:0:0 at slave3] PMI response: cmd=my_kvsname kvsname=kvs_4814_0
[proxy:0:0 at slave3] got pmi command (from 0): get_my_kvsname

[proxy:0:0 at slave3] PMI response: cmd=my_kvsname kvsname=kvs_4814_0
[proxy:0:0 at slave3] got pmi command (from 0): get
kvsname=kvs_4814_0 key=PMI_process_mapping
[proxy:0:0 at slave3] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,3,1))
[proxy:0:0 at slave3] got pmi command (from 0): barrier_in


[mpiexec at slave3] [pgid: 0] got PMI command: cmd=barrier_in
[proxy:0:0 at slave3] forwarding command (cmd=barrier_in) upstream
[mpiexec at slave3] [pgid: 0] got PMI command: cmd=barrier_in
[mpiexec at slave3] [pgid: 0] got PMI command: cmd=barrier_in
[mpiexec at slave3] PMI response to fd 6 pid 4: cmd=barrier_out
[mpiexec at slave3] PMI response to fd 7 pid 4: cmd=barrier_out
[mpiexec at slave3] PMI response to fd 15 pid 4: cmd=barrier_out
[proxy:0:0 at slave3] PMI response: cmd=barrier_out
[proxy:0:0 at slave3] got pmi command (from 0): put
kvsname=kvs_4814_0 key=P0-businesscard value=description#slave3$port#48677$ifname#192.168.1.111$
[proxy:0:0 at slave3] we don't understand this command put; forwarding upstream
[mpiexec at slave3] [pgid: 0] got PMI command: cmd=put kvsname=kvs_4814_0 key=P0-businesscard value=description#slave3$port#48677$ifname#192.168.1.111$
[mpiexec at slave3] PMI response to fd 6 pid 0: cmd=put_result rc=0 msg=success
[mpiexec at slave3] [pgid: 0] got PMI command: cmd=put kvsname=kvs_4814_0 key=P1-businesscard value=description#slave2$port#52192$ifname#192.168.1.110$
[mpiexec at slave3] PMI response to fd 7 pid 4: cmd=put_result rc=0 msg=success
[mpiexec at slave3] [pgid: 0] got PMI command: cmd=put kvsname=kvs_4814_0 key=P2-businesscard value=description#slave0$port#57785$ifname#192.168.1.108$
[mpiexec at slave3] PMI response to fd 15 pid 4: cmd=put_result rc=0 msg=success
[proxy:0:0 at slave3] we don't understand the response put_result; forwarding downstream
[proxy:0:0 at slave3] got pmi command (from 0): barrier_in

[proxy:0:0 at slave3] forwarding command (cmd=barrier_in) upstream
[mpiexec at slave3] [pgid: 0] got PMI command: cmd=barrier_in
[proxy:0:1 at slave2] got pmi command (from 4): init
pmi_version=1 pmi_subversion=1
[proxy:0:1 at slave2] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0
[proxy:0:1 at slave2] got pmi command (from 4): get_maxes

[proxy:0:1 at slave2] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024
[proxy:0:1 at slave2] got pmi command (from 4): get_appnum

[proxy:0:1 at slave2] PMI response: cmd=appnum appnum=0
[proxy:0:1 at slave2] got pmi command (from 4): get_my_kvsname

[proxy:0:1 at slave2] PMI response: cmd=my_kvsname kvsname=kvs_4814_0
[proxy:0:1 at slave2] got pmi command (from 4): get_my_kvsname

[proxy:0:1 at slave2] PMI response: cmd=my_kvsname kvsname=kvs_4814_0
[proxy:0:1 at slave2] got pmi command (from 4): get
kvsname=kvs_4814_0 key=PMI_process_mapping
[proxy:0:1 at slave2] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,3,1))
[proxy:0:1 at slave2] got pmi command (from 4): barrier_in

[proxy:0:1 at slave2] forwarding command (cmd=barrier_in) upstream
[proxy:0:1 at slave2] PMI response: cmd=barrier_out
[mpiexec at slave3] [pgid: 0] got PMI command: cmd=barrier_in
[mpiexec at slave3] [pgid: 0] got PMI command: cmd=barrier_in
[mpiexec at slave3] PMI response to fd 6 pid 4: cmd=barrier_out
[mpiexec at slave3] PMI response to fd 7 pid 4: cmd=barrier_out
[mpiexec at slave3] PMI response to fd 15 pid 4: cmd=barrier_out
[proxy:0:1 at slave2] got pmi command (from 4): put
kvsname=kvs_4814_0 key=P1-businesscard value=description#slave2$port#52192$ifname#192.168.1.110$
[proxy:0:1 at slave2] we don't understand this command put; forwarding upstream
[proxy:0:1 at slave2] we don't understand the response put_result; forwarding downstream
[proxy:0:1 at slave2] got pmi command (from 4): barrier_in

[proxy:0:1 at slave2] forwarding command (cmd=barrier_in) upstream
[proxy:0:0 at slave3] PMI response: cmd=barrier_out
[proxy:0:2 at slave0] got pmi command (from 4): init
pmi_version=1 pmi_subversion=1
[proxy:0:2 at slave0] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0
[proxy:0:2 at slave0] got pmi command (from 4): get_maxes

[proxy:0:2 at slave0] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024
[proxy:0:2 at slave0] got pmi command (from 4): get_appnum

[proxy:0:2 at slave0] PMI response: cmd=appnum appnum=0
[proxy:0:2 at slave0] got pmi command (from 4): get_my_kvsname

[proxy:0:2 at slave0] PMI response: cmd=my_kvsname kvsname=kvs_4814_0
[proxy:0:2 at slave0] got pmi command (from 4): get_my_kvsname

[proxy:0:2 at slave0] PMI response: cmd=my_kvsname kvsname=kvs_4814_0
[proxy:0:2 at slave0] got pmi command (from 4): get
kvsname=kvs_4814_0 key=PMI_process_mapping
[proxy:0:2 at slave0] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,3,1))
[proxy:0:2 at slave0] got pmi command (from 4): barrier_in
[proxy:0:2 at slave0] forwarding command (cmd=barrier_in) upstream
[proxy:0:2 at slave0] PMI response: cmd=barrier_out
Give number of samples in each process: [proxy:0:2 at slave0] got pmi command (from 4): put
kvsname=kvs_4814_0 key=P2-businesscard value=description#slave0$port#57785$ifname#192.168.1.108$
[proxy:0:2 at slave0] we don't understand this command put; forwarding upstream
[proxy:0:2 at slave0] we don't understand the response put_result; forwarding downstream
[proxy:0:2 at slave0] got pmi command (from 4): barrier_in

[proxy:0:2 at slave0] forwarding command (cmd=barrier_in) upstream
[proxy:0:0 at slave3] got pmi command (from 0): get
kvsname=kvs_4814_0 key=P2-businesscard
[mpiexec at slave3] [pgid: 0] got PMI command: cmd=get kvsname=kvs_4814_0 key=P2-businesscard
[mpiexec at slave3] PMI response to fd 6 pid 0: cmd=get_result rc=0 msg=success value=description#slave0$port#57785$ifname#192.168.1.108$
[proxy:0:0 at slave3] forwarding command (cmd=get kvsname=kvs_4814_0 key=P2-businesscard) upstream
[proxy:0:0 at slave3] we don't understand the response get_result; forwarding downstream
[proxy:0:0 at slave3] got pmi command (from 0): get
kvsname=kvs_4814_0 key=P1-businesscard
[proxy:0:0 at slave3] forwarding command (cmd=get kvsname=kvs_4814_0 key=P1-businesscard) upstream
[mpiexec at slave3] [pgid: 0] got PMI command: cmd=get kvsname=kvs_4814_0 key=P1-businesscard
[mpiexec at slave3] PMI response to fd 6 pid 0: cmd=get_result rc=0 msg=success value=description#slave2$port#52192$ifname#192.168.1.110$
[proxy:0:0 at slave3] we don't understand the response get_result; forwarding downstream
Process 2 of 3 is on slave0
Number of samples is 0
Process 0 of 3 is on slave3
Process 1 of 3 is on slave2
The computed value of Pi is nan
The  "exact" value of Pi is 3.141592653589793115997963
The difference is nan
wall clock time = 0.000128
[proxy:0:1 at slave2] PMI response: cmd=barrier_out
[proxy:0:2 at slave0] PMI response: cmd=barrier_out
[proxy:0:1 at slave2] got pmi command (from 4): finalize

[proxy:0:1 at slave2] PMI response: cmd=finalize_ack
[proxy:0:0 at slave3] got pmi command (from 0): finalize

[proxy:0:0 at slave3] PMI response: cmd=finalize_ack
[proxy:0:2 at slave0] got pmi command (from 4): finalize

[proxy:0:2 at slave0] PMI response: cmd=finalize_ack