[Swift-devel] Re: multiple worker.sh in the same job

Ben Clifford benc at hawaga.org.uk
Tue Jul 1 17:25:14 CDT 2008


Here's how I just ran a simple MPI hello-world with one wrapper.sh that 
launches MPI inside the wrapper. I would be interested to hear whether Andriy 
could try his app in the style shown below.

I think the behaviour is now correct. From a user configuration 
perspective it is somewhat unpleasant, though.

On TG-UC:

 /home/benc/mpi/a.out  is my MPI hello-world program
 /home/benc/mpi/mpi.sh contains:

#!/bin/bash

# report which node the launcher landed on, then start the MPI program
# across the nodes that PBS allocated to this (single) wrapper invocation
echo running launcher on $(hostname)
mpirun -np 3 -machinefile $PBS_NODEFILE /home/benc/mpi/a.out
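
(For reference, a.out is just a standard MPI hello-world binary; something 
like the following would build an equivalent one, assuming the site provides 
an mpicc wrapper and given some hypothetical hello.c in which every rank 
writes a line to stdout:)

# hypothetical build step -- hello.c is not part of this setup; any MPI
# program that writes a line per rank to stdout behaves the same way
mpicc -o /home/benc/mpi/a.out hello.c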


On the Swift submit side (my laptop):

tc.data maps mpi to /home/benc/mpi/mpi.sh
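
Roughly, that tc.data entry looks like the line below (the tguc handle 
matches the pool defined in sites.xml; the platform column is an assumption 
and not significant here):

tguc    mpi     /home/benc/mpi/mpi.sh   INSTALLED       INTEL32::LINUX  null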

sites.xml defines:

  <pool handle="tguc">
    <gridftp url="gsiftp://tg-gridftp.uc.teragrid.org" />
    <jobmanager universe="vanilla"
                url="tg-grid.uc.teragrid.org/jobmanager-pbs"
                major="2" />
    <profile namespace="globus" key="project">TG-CCR080002N</profile>
    <profile namespace="globus" key="host_types">ia64-compute</profile>
    <profile namespace="globus" key="host_xcount">4</profile>
    <profile namespace="globus" key="jobtype">single</profile>
    <workdirectory>/home/benc/mpi</workdirectory>
  </pool>

Note specifically the jobtype=single profile, which is what causes only a 
single wrapper.sh to be run even though 4 nodes will be allocated.

mpi.swift contains:

$ cat mpi.swift 

type file;

// one call to p() runs the mpi transformation (mpi.sh) exactly once;
// mpirun inside mpi.sh then starts the individual MPI processes
(file o, file e) p() {
    app {
        mpi stdout=@filename(o) stderr=@filename(e);
    }
}

file mpiout <"mpi.out">;
file mpierr <"mpi.err">;

(mpiout, mpierr) = p();



So now run the above, and the output of the hello-world MPI app (different 
pieces output by each worker) appears in mpi.out, correctly staged back 
through mpirun and wrapper.sh.
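
For completeness, the run itself is an ordinary Swift invocation pointing at 
that sites file and tc.data; something along these lines (exact flag names 
may differ slightly between Swift versions):

swift -sites.file sites.xml -tc.file tc.data mpi.swift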

-- 


