[Swift-commit] r6693 - SwiftApps/Scattering/paintgrid

davidk at ci.uchicago.edu
Wed Jul 31 14:18:07 CDT 2013


Author: davidk
Date: 2013-07-31 14:18:06 -0500 (Wed, 31 Jul 2013)
New Revision: 6693

Added:
   SwiftApps/Scattering/paintgrid/cleanup
   SwiftApps/Scattering/paintgrid/fewpoints.params
   SwiftApps/Scattering/paintgrid/onepoint.params
   SwiftApps/Scattering/paintgrid/pecos.xml
   SwiftApps/Scattering/paintgrid/run-paintgrid
   SwiftApps/Scattering/paintgrid/run-remote-persist.sh
   SwiftApps/Scattering/paintgrid/sites-used
   SwiftApps/Scattering/paintgrid/start-beagle
   SwiftApps/Scattering/paintgrid/start-tunnels
   SwiftApps/Scattering/paintgrid/stats
   SwiftApps/Scattering/paintgrid/stop-beagle
   SwiftApps/Scattering/paintgrid/tssh
Removed:
   SwiftApps/Scattering/paintgrid/apps.multisite
   SwiftApps/Scattering/paintgrid/multisites.xml
Modified:
   SwiftApps/Scattering/paintgrid/apps
   SwiftApps/Scattering/paintgrid/genpoints.py
   SwiftApps/Scattering/paintgrid/paintgrid.swift
   SwiftApps/Scattering/paintgrid/processpoints.py
   SwiftApps/Scattering/paintgrid/sites.xml
Log:
Various updates to a more recent set of scripts/configs 


Modified: SwiftApps/Scattering/paintgrid/apps
===================================================================
--- SwiftApps/Scattering/paintgrid/apps	2013-07-30 18:53:17 UTC (rev 6692)
+++ SwiftApps/Scattering/paintgrid/apps	2013-07-31 19:18:06 UTC (rev 6693)
@@ -1,2 +1,4 @@
 localhost local_python python
-westmere  python       python
+beagle    python       python
+#orthros   python       python
+

Deleted: SwiftApps/Scattering/paintgrid/apps.multisite
===================================================================
--- SwiftApps/Scattering/paintgrid/apps.multisite	2013-07-30 18:53:17 UTC (rev 6692)
+++ SwiftApps/Scattering/paintgrid/apps.multisite	2013-07-31 19:18:06 UTC (rev 6693)
@@ -1,4 +0,0 @@
-uc3      perl /usr/bin/perl null null null
-beagle   perl /usr/bin/perl null null null
-#sandy    perl /usr/bin/perl null null null
-westmere perl /usr/bin/perl null null null

Added: SwiftApps/Scattering/paintgrid/cleanup
===================================================================
--- SwiftApps/Scattering/paintgrid/cleanup	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/cleanup	2013-07-31 19:18:06 UTC (rev 6693)
@@ -0,0 +1,6 @@
+#! /bin/sh
+
+rm -rf out _concurrent *swiftx *kml *~ paintgrid-* python-* swift.log
+
+
+


Property changes on: SwiftApps/Scattering/paintgrid/cleanup
___________________________________________________________________
Added: svn:executable
   + *

Added: SwiftApps/Scattering/paintgrid/fewpoints.params
===================================================================
--- SwiftApps/Scattering/paintgrid/fewpoints.params	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/fewpoints.params	2013-07-31 19:18:06 UTC (rev 6693)
@@ -0,0 +1,10 @@
+minx=0.0
+maxx=10.0
+miny=0.0
+maxy=10.0
+minz=0.0
+maxz=10.0
+incr=1.0
+tuplesPerFile=100
+filePrefix="seq"
+outDir="out/seq"

Modified: SwiftApps/Scattering/paintgrid/genpoints.py
===================================================================
--- SwiftApps/Scattering/paintgrid/genpoints.py	2013-07-30 18:53:17 UTC (rev 6692)
+++ SwiftApps/Scattering/paintgrid/genpoints.py	2013-07-31 19:18:06 UTC (rev 6693)
@@ -46,7 +46,7 @@
   for y in xfrange(miny,maxy,incr):
     for z in xfrange(minz,maxz,incr):
       if n % tuplesPerFile == 0 :
-        filename = str.format(outDir + "/" + filePrefix + ".{0!s:0>5}",filenum)
+        filename = outDir + "/" + filePrefix + ("%05d" % filenum)
         print filename
         of = file(filename,"w")
         filenum += 1
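
(A sketch, not part of the commit: the two formatting styles above are nearly but not exactly equivalent. The old `str.format` pattern zero-pads to five digits and inserts a "." before the sequence number; the replacement `%05d` pads the same way but drops the dot, so output filenames change shape. Values below are hypothetical.)

```python
# Hypothetical values matching genpoints.params defaults
outDir, filePrefix, filenum = "out/seq", "seq", 7

# Old: "{0!s:0>5}" converts filenum to str, then right-aligns it in a
# width-5 field filled with "0"; note the literal "." in the pattern.
old_name = str.format(outDir + "/" + filePrefix + ".{0!s:0>5}", filenum)

# New: plain %-style zero-padding, no "." separator.
new_name = outDir + "/" + filePrefix + ("%05d" % filenum)

print(old_name)  # out/seq/seq.00007
print(new_name)  # out/seq/seq00007
```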

Deleted: SwiftApps/Scattering/paintgrid/multisites.xml
===================================================================
--- SwiftApps/Scattering/paintgrid/multisites.xml	2013-07-30 18:53:17 UTC (rev 6692)
+++ SwiftApps/Scattering/paintgrid/multisites.xml	2013-07-31 19:18:06 UTC (rev 6693)
@@ -1,71 +0,0 @@
-<config>
-
-  <pool handle="uc3">
-    <execution provider="coaster" url="uc3-sub.uchicago.edu" jobmanager="ssh-cl:condor"/>
-    <profile namespace="karajan" key="jobThrottle">10.00</profile>
-    <profile namespace="karajan" key="initialScore">10000</profile>
-    <profile namespace="globus"  key="jobsPerNode">1</profile>
-    <profile namespace="globus"  key="maxtime">3600</profile>
-    <profile namespace="globus"  key="maxWalltime">00:05:00</profile>
-    <profile namespace="globus"  key="highOverAllocation">100</profile>
-    <profile namespace="globus"  key="lowOverAllocation">100</profile>
-    <profile namespace="globus"  key="slots">1000</profile>
-    <profile namespace="globus"  key="maxNodes">1</profile>
-    <profile namespace="globus"  key="nodeGranularity">1</profile>
-    <profile namespace="globus"  key="condor.+AccountingGroup">"group_friends.{env.USER}"</profile>
-    <profile namespace="globus"  key="jobType">nonshared</profile>
-    <!-- <profile namespace="globus"  key="condor.+Requirements">isUndefined(GLIDECLIENT_Name) == FALSE</profile> -->
-    <workdirectory>.</workdirectory>
-  </pool>
-
-  <pool handle="beagle">
-    <execution provider="coaster" jobmanager="ssh-cl:pbs" url="login4.beagle.ci.uchicago.edu"/>
-    <profile namespace="globus" key="jobsPerNode">24</profile>
-    <profile namespace="globus" key="lowOverAllocation">100</profile>
-    <profile namespace="globus" key="highOverAllocation">100</profile>
-    <!-- <profile namespace="globus" key="providerAttributes">pbs.aprun;pbs.mpp;depth=24</profile> -->
-    <profile namespace="globus" key="providerAttributes">pbs.aprun;pbs.mpp;depth=24;pbs.resource_list=advres=wilde.1768</profile>
-    <profile namespace="globus" key="maxtime">3600</profile>
-    <profile namespace="globus" key="maxWalltime">00:05:00</profile>
-    <profile namespace="globus" key="userHomeOverride">/lustre/beagle/{env.USER}/swiftwork</profile>
-    <profile namespace="globus" key="slots">5</profile>
-    <profile namespace="globus" key="maxnodes">1</profile>
-    <profile namespace="globus" key="nodeGranularity">1</profile>
-    <profile namespace="karajan" key="jobThrottle">4.80</profile>
-    <profile namespace="karajan" key="initialScore">10000</profile>
-    <workdirectory>/tmp/{env.USER}/swiftwork</workdirectory>
-  </pool>
-
-  <pool handle="sandyb">
-    <execution provider="coaster" jobmanager="local:slurm"/>
-    <profile namespace="globus" key="queue">sandyb</profile>
-    <profile namespace="globus" key="jobsPerNode">16</profile>
-    <profile namespace="globus" key="maxWalltime">00:05:00</profile>
-    <profile namespace="globus" key="maxTime">3600</profile>
-    <profile namespace="globus" key="highOverAllocation">100</profile>
-    <profile namespace="globus" key="lowOverAllocation">100</profile>
-    <profile namespace="globus" key="slots">4</profile>
-    <profile namespace="globus" key="maxNodes">1</profile>
-    <profile namespace="globus" key="nodeGranularity">1</profile>
-    <profile namespace="karajan" key="jobThrottle">.64</profile>
-    <profile namespace="karajan" key="initialScore">10000</profile>
-    <workdirectory>/tmp/{env.USER}</workdirectory>
-  </pool>
-
-  <pool handle="westmere">
-    <execution provider="coaster" jobmanager="local:slurm"/>
-    <profile namespace="globus" key="queue">westmere</profile>
-    <profile namespace="globus" key="jobsPerNode">12</profile>
-    <profile namespace="globus" key="maxWalltime">00:05:00</profile>
-    <profile namespace="globus" key="maxTime">3600</profile>
-    <profile namespace="globus" key="highOverAllocation">100</profile>
-    <profile namespace="globus" key="lowOverAllocation">100</profile>
-    <profile namespace="globus" key="slots">4</profile>
-    <profile namespace="globus" key="maxNodes">1</profile>
-    <profile namespace="globus" key="nodeGranularity">1</profile>
-    <profile namespace="karajan" key="jobThrottle">.48</profile>
-    <profile namespace="karajan" key="initialScore">10000</profile>
-    <workdirectory>/tmp/{env.USER}</workdirectory>
-  </pool>
-
-</config>

Added: SwiftApps/Scattering/paintgrid/onepoint.params
===================================================================
--- SwiftApps/Scattering/paintgrid/onepoint.params	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/onepoint.params	2013-07-31 19:18:06 UTC (rev 6693)
@@ -0,0 +1,10 @@
+minx=0.0
+maxx=10.0
+miny=0.0
+maxy=10.0
+minz=0.0
+maxz=10.0
+incr=10.0
+tuplesPerFile=100
+filePrefix="seq"
+outDir="out/seq"

Modified: SwiftApps/Scattering/paintgrid/paintgrid.swift
===================================================================
--- SwiftApps/Scattering/paintgrid/paintgrid.swift	2013-07-30 18:53:17 UTC (rev 6692)
+++ SwiftApps/Scattering/paintgrid/paintgrid.swift	2013-07-31 19:18:06 UTC (rev 6693)
@@ -24,7 +24,7 @@
 file   params   <single_file_mapper;file=@arg("params", "genpoints.params")>;
 file   image    <single_file_mapper;file=@arg("image",  "UNSPECIFIED.tif")>;
 global string runTime = @arg("runTime","0.0");
-global string runDir  = @arg("runDir");
+global string runDir  = @java("java.lang.System","getProperty","user.dir");
 
 # Main script:
 #   Call genPoints to make a set of files, each of which contains a set of data points to process

Added: SwiftApps/Scattering/paintgrid/pecos.xml
===================================================================
--- SwiftApps/Scattering/paintgrid/pecos.xml	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/pecos.xml	2013-07-31 19:18:06 UTC (rev 6693)
@@ -0,0 +1,59 @@
+ <config>
+
+  <pool handle="localhost">
+    <execution provider="local"/>
+    <!-- <filesystem provider="local"/> -->
+    <workdirectory>/tmp/wilde/swiftwork</workdirectory>
+    <profile namespace="swift" key="stagingMethod">local</profile>
+  </pool>
+
+  <pool handle="orig-mcluster">
+    <execution provider="coaster" jobmanager="local:sge"/>
+
+    <!-- Set partition and account here: -->
+    <profile namespace="globus" key="queue">sec1all.q</profile>
+    <profile namespace="globus" key="pe">sec1_all</profile>
+    <profile namespace="globus" key="ppn">1</profile>
+    <!-- <profile namespace="globus" key="project">pi-wilde</profile> -->
+
+    <!-- Set number of jobs and nodes per job here: -->
+    <profile namespace="globus" key="slots">1</profile>
+    <profile namespace="globus" key="maxnodes">4</profile>
+    <profile namespace="globus" key="nodegranularity">4</profile>
+    <profile namespace="globus" key="jobsPerNode">64</profile> <!-- apps per node! -->
+    <profile namespace="karajan" key="jobThrottle">3.20</profile> <!-- eg .11 -> 12 -->
+
+    <!-- Set estimated app time (maxwalltime) and requested job time (maxtime) here: -->
+    <profile namespace="globus" key="maxWalltime">00:15:00</profile>
+    <profile namespace="globus" key="maxtime">1800</profile>  <!-- in seconds! -->
+
+    <!-- Set data staging model and work dir here: -->
+    <filesystem provider="local"/>
+    <workdirectory>/clhome/WILDE/swiftwork</workdirectory>
+
+    <!-- Typically leave these constant: -->
+    <!-- <profile namespace="globus" key="slurm.exclusive">false</profile> -->
+    <profile namespace="globus" key="highOverAllocation">100</profile>
+    <profile namespace="globus" key="lowOverAllocation">100</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+  </pool>
+
+  <pool handle="cluster"> <!-- beagle -->
+    <execution provider="coaster-persistent" url="http://localhost:59900" jobmanager="local:pbs"/>
+    <!-- <execution provider="coaster" jobmanager="ssh-cl:pbs" url="login4.beagle.ci.uchicago.edu"/> -->
+    <profile namespace="globus" key="userHomeOverride">/lustre/beagle/wilde/swift.scripts</profile>
+    <profile namespace="globus" key="jobsPerNode">24</profile>
+    <profile namespace="globus" key="lowOverAllocation">100</profile>
+    <profile namespace="globus" key="highOverAllocation">100</profile>
+    <profile namespace="globus" key="providerAttributes">pbs.aprun;pbs.mpp;depth=24</profile> 
+    <profile namespace="globus" key="maxtime">3600</profile>
+    <profile namespace="globus" key="maxWalltime">00:05:00</profile>
+    <profile namespace="globus" key="slots">5</profile>
+    <profile namespace="globus" key="maxnodes">1</profile>
+    <profile namespace="globus" key="nodeGranularity">1</profile>
+    <profile namespace="karajan" key="jobThrottle">4.80</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+    <workdirectory>/lustre/beagle/wilde/swiftwork</workdirectory>
+  </pool>
+
+</config>

Modified: SwiftApps/Scattering/paintgrid/processpoints.py
===================================================================
--- SwiftApps/Scattering/paintgrid/processpoints.py	2013-07-30 18:53:17 UTC (rev 6692)
+++ SwiftApps/Scattering/paintgrid/processpoints.py	2013-07-31 19:18:06 UTC (rev 6693)
@@ -9,11 +9,12 @@
 
 n = 0;
 pixel = [];
-with open(dataFileName, "rb") as f:
+#with open(dataFileName, "rb") as f:
+f = open(dataFileName, "rb")
+byte = f.read(1)
+while byte != "":
+    pixel.append(byte)
     byte = f.read(1)
-    while byte != "":
-        pixel.append(byte)
-        byte = f.read(1)
 
 print "Data file has ", len(pixel), " pixels"
 

Added: SwiftApps/Scattering/paintgrid/run-paintgrid
===================================================================
--- SwiftApps/Scattering/paintgrid/run-paintgrid	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/run-paintgrid	2013-07-31 19:18:06 UTC (rev 6693)
@@ -0,0 +1,9 @@
+#! /bin/sh
+
+PATH=/clhome/WILDE/swift/rev/swift-0.94.1/bin:$PATH
+
+export GLOBUS_HOSTNAME=localhost
+export GLOBUS_TCP_PORT_RANGE=59900,59909
+export GLOBUS_TCP_SOURCE_RANGE=59900,59909
+
+swift paintgrid.swift -params=genpoints.params -image=data.0001.tiny -runTime=0.0001


Property changes on: SwiftApps/Scattering/paintgrid/run-paintgrid
___________________________________________________________________
Added: svn:executable
   + *

Added: SwiftApps/Scattering/paintgrid/run-remote-persist.sh
===================================================================
--- SwiftApps/Scattering/paintgrid/run-remote-persist.sh	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/run-remote-persist.sh	2013-07-31 19:18:06 UTC (rev 6693)
@@ -0,0 +1,52 @@
+swift$ cat run-persist.sh 
+#! /bin/sh
+
+# Start a persistent swift coaster service, capturing its port # in portfile
+
+portfile=$(mktemp portfile.XXXX)
+coaster-service -portfile $portfile -nosec >& coaster.log &
+coasterpid=$!
+sleep 5  # Wait for the service to record its port
+port=$(cat $portfile)
+
+# Report the coaster service port and process id
+
+echo coaster pid: $coasterpid port: $port
+echo $coasterpid >coasterpid
+echo started coaster service:
+ps -p $coasterpid --ppid $coasterpid -H -j # display parent shell and java child processes
+
+# Create a sites file pointing to the service we just started
+
+cat >sites.xml <<END
+<config>
+  <pool handle="cluster">
+    <execution provider="coaster-persistent" url="http://localhost:$port" jobmanager="local:slurm"/>
+    <profile namespace="globus" key="jobsPerNode">4</profile>
+    <profile namespace="globus" key="ppn">16</profile>
+    <profile namespace="globus" key="slots">1</profile>
+    <profile namespace="globus" key="maxnodes">1</profile>
+    <profile namespace="globus" key="nodegranularity">1</profile>
+    <profile namespace="globus" key="maxWalltime">00:01:00</profile>
+    <profile namespace="globus" key="walltime">600</profile>
+    <profile namespace="globus" key="highOverAllocation">100</profile>
+    <profile namespace="globus" key="lowOverAllocation">100</profile>
+    <profile namespace="globus" key="queue">sandyb</profile>
+    <profile namespace="karajan" key="jobThrottle">1.00</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+    <profile namespace="globus" key="jobtype">single</profile>
+    <profile namespace="globus" key="slurm.exclusive">true</profile>
+    <filesystem provider="local"/>
+    <workdirectory>/scratch/midway/{env.USER}/work</workdirectory>
+  </pool>
+</config>
+END
+
+# Now run several Swift scripts, one at a time, just like MD_String will do
+
+NUM_SWIFT_RUNS=$1
+for ((i=0;i<$NUM_SWIFT_RUNS;i++)); do
+  set -x
+  swift -sites.file sites.xml -tc.file tc.intelmpi -config cf mpicatnap.swift -n=${2:-1} -t=${3:-2}
+  set +x
+done
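
(A sketch, not part of the commit: the core handshake in the script above is "service writes its chosen port into a temp portfile; launcher reads it back and templates a sites entry around it." Here a plain write stands in for the coaster service, with a hypothetical port 59900.)

```python
import os
import tempfile

# Create a portfile, as mktemp portfile.XXXX does in the script
fd, portfile = tempfile.mkstemp(prefix="portfile.")
with os.fdopen(fd, "w") as f:
    f.write("59900")          # the real coaster service writes its port here

# Launcher side: read the port back and build the service URL
with open(portfile) as f:
    port = f.read().strip()
os.unlink(portfile)

url = "http://localhost:%s" % port
print(url)  # http://localhost:59900
```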

Added: SwiftApps/Scattering/paintgrid/sites-used
===================================================================
--- SwiftApps/Scattering/paintgrid/sites-used	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/sites-used	2013-07-31 19:18:06 UTC (rev 6693)
@@ -0,0 +1,3 @@
+#! /bin/sh
+
+for l in $(/bin/ls -1t paint*.log) ; do echo $l; ./stats $l | grep -v localhost ; done | more


Property changes on: SwiftApps/Scattering/paintgrid/sites-used
___________________________________________________________________
Added: svn:executable
   + *

Modified: SwiftApps/Scattering/paintgrid/sites.xml
===================================================================
--- SwiftApps/Scattering/paintgrid/sites.xml	2013-07-30 18:53:17 UTC (rev 6692)
+++ SwiftApps/Scattering/paintgrid/sites.xml	2013-07-31 19:18:06 UTC (rev 6693)
@@ -1,25 +1,27 @@
-<config>
+ <config>
 
   <pool handle="localhost">
     <execution provider="local"/>
-    <filesystem provider="local"/>
-    <workdirectory>/scratch/midway/{env.USER}/swiftwork</workdirectory>
+    <!-- <filesystem provider="local"/> -->
+    <workdirectory>/tmp/wilde/swiftwork</workdirectory>
+    <profile namespace="swift" key="stagingMethod">local</profile>
   </pool>
 
-  <pool handle="westmere">
-    <execution provider="coaster" jobmanager="local:slurm"/>
+  <pool handle="orthros">
+    <execution provider="coaster" jobmanager="local:sge"/>
 
     <!-- Set partition and account here: -->
-    <profile namespace="globus" key="queue">westmere</profile>
-    <profile namespace="globus" key="ppn">12</profile>
+    <profile namespace="globus" key="queue">sec1all.q</profile>
+    <profile namespace="globus" key="pe">sec1_all</profile>
+    <profile namespace="globus" key="ppn">1</profile>
     <!-- <profile namespace="globus" key="project">pi-wilde</profile> -->
 
     <!-- Set number of jobs and nodes per job here: -->
     <profile namespace="globus" key="slots">1</profile>
-    <profile namespace="globus" key="maxnodes">1</profile>
-    <profile namespace="globus" key="nodegranularity">1</profile>
-    <profile namespace="globus" key="jobsPerNode">12</profile> <!-- apps per node! -->
-    <profile namespace="karajan" key="jobThrottle">.11</profile> <!-- eg .11 -> 12 -->
+    <profile namespace="globus" key="maxnodes">4</profile>
+    <profile namespace="globus" key="nodegranularity">4</profile>
+    <profile namespace="globus" key="jobsPerNode">64</profile> <!-- apps per node! -->
+    <profile namespace="karajan" key="jobThrottle">2.56</profile> <!-- eg .11 -> 12 -->
 
     <!-- Set estimated app time (maxwalltime) and requested job time (maxtime) here: -->
     <profile namespace="globus" key="maxWalltime">00:15:00</profile>
@@ -27,13 +29,31 @@
 
     <!-- Set data staging model and work dir here: -->
     <filesystem provider="local"/>
-    <workdirectory>/scratch/midway/{env.USER}/swiftwork</workdirectory>
+    <workdirectory>/clhome/WILDE/swiftwork</workdirectory>
 
     <!-- Typically leave these constant: -->
-    <profile namespace="globus" key="slurm.exclusive">false</profile>
+    <!-- <profile namespace="globus" key="slurm.exclusive">false</profile> -->
     <profile namespace="globus" key="highOverAllocation">100</profile>
     <profile namespace="globus" key="lowOverAllocation">100</profile>
     <profile namespace="karajan" key="initialScore">10000</profile>
   </pool>
 
+  <pool handle="beagle">
+    <execution provider="coaster" jobmanager="ssh-cl:pbs" url="login1.beagle.ci.uchicago.edu"/>
+    <!-- <execution provider="coaster-persistent" url="http://localhost:59900" jobmanager="local:pbs"/> -->
+    <profile namespace="globus" key="userHomeOverride">/lustre/beagle/wilde/swift.scripts</profile>
+    <profile namespace="globus" key="jobsPerNode">24</profile>
+    <profile namespace="globus" key="lowOverAllocation">100</profile>
+    <profile namespace="globus" key="highOverAllocation">100</profile>
+    <profile namespace="globus" key="providerAttributes">pbs.aprun;pbs.mpp;depth=24</profile> 
+    <profile namespace="globus" key="maxtime">3600</profile>
+    <profile namespace="globus" key="maxWalltime">00:05:00</profile>
+    <profile namespace="globus" key="slots">5</profile>
+    <profile namespace="globus" key="maxnodes">1</profile>
+    <profile namespace="globus" key="nodeGranularity">1</profile>
+    <profile namespace="karajan" key="jobThrottle">1.20</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+    <workdirectory>/lustre/beagle/wilde/swiftwork</workdirectory>
+  </pool>
+
 </config>

Added: SwiftApps/Scattering/paintgrid/start-beagle
===================================================================
--- SwiftApps/Scattering/paintgrid/start-beagle	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/start-beagle	2013-07-31 19:18:06 UTC (rev 6693)
@@ -0,0 +1,4 @@
+ssh -n login1.beagle.ci.uchicago.edu -L 59900:localhost:59900 \
+  'export SWIFT_USERHOME=/lustre/beagle/wilde/swift.scripts;
+   PATH=$HOME/swift/rev/swift-0.94.1/bin:$PATH;
+   coaster-service -nosec -p 59900 1>&2 & echo $!; sleep 99999' 1>beagle-pid 2>beagle-log &


Property changes on: SwiftApps/Scattering/paintgrid/start-beagle
___________________________________________________________________
Added: svn:executable
   + *

Added: SwiftApps/Scattering/paintgrid/start-tunnels
===================================================================
--- SwiftApps/Scattering/paintgrid/start-tunnels	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/start-tunnels	2013-07-31 19:18:06 UTC (rev 6693)
@@ -0,0 +1,11 @@
+ssh -n -N login1.beagle.ci.uchicago.edu \
+   -R 59900:orthros.xray.aps.anl.gov:59900 \
+   -R 59901:orthros.xray.aps.anl.gov:59901 \
+   -R 59902:orthros.xray.aps.anl.gov:59902 \
+   -R 59903:orthros.xray.aps.anl.gov:59903 \
+   -R 59904:orthros.xray.aps.anl.gov:59904 \
+   -R 59905:orthros.xray.aps.anl.gov:59905 \
+   -R 59906:orthros.xray.aps.anl.gov:59906 \
+   -R 59907:orthros.xray.aps.anl.gov:59907 \
+   -R 59908:orthros.xray.aps.anl.gov:59908 \
+   -R 59909:orthros.xray.aps.anl.gov:59909


Property changes on: SwiftApps/Scattering/paintgrid/start-tunnels
___________________________________________________________________
Added: svn:executable
   + *

Added: SwiftApps/Scattering/paintgrid/stats
===================================================================
--- SwiftApps/Scattering/paintgrid/stats	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/stats	2013-07-31 19:18:06 UTC (rev 6693)
@@ -0,0 +1,3 @@
+#! /bin/sh
+
+grep "JOB_START" $1 | sed -e 's/^.* host=//'| sort | uniq -c
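
(A sketch, not part of the commit: the `stats` one-liner tallies JOB_START events per host. The same count, expressed in Python against hypothetical log lines.)

```python
import re
from collections import Counter

# Hypothetical Swift log lines; the real format may differ
log = [
    "2013-07-31 JOB_START jobid=1 host=beagle",
    "2013-07-31 JOB_START jobid=2 host=beagle",
    "2013-07-31 JOB_START jobid=3 host=localhost",
    "2013-07-31 JOB_END jobid=1 host=beagle",
]

# grep JOB_START | sed 's/^.* host=//' | sort | uniq -c, as a Counter
hosts = Counter(re.sub(r"^.* host=", "", l) for l in log if "JOB_START" in l)
print(hosts)  # Counter({'beagle': 2, 'localhost': 1})
```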


Property changes on: SwiftApps/Scattering/paintgrid/stats
___________________________________________________________________
Added: svn:executable
   + *

Added: SwiftApps/Scattering/paintgrid/stop-beagle
===================================================================
--- SwiftApps/Scattering/paintgrid/stop-beagle	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/stop-beagle	2013-07-31 19:18:06 UTC (rev 6693)
@@ -0,0 +1 @@
+ssh -n login1.beagle.ci.uchicago.edu "echo killing coaster service: ; ps -j $(cat beagle-pid); /bin/kill 15 -\$(ps -j $(cat beagle-pid)| tail -1 | awk '{print \$2}' ) "


Property changes on: SwiftApps/Scattering/paintgrid/stop-beagle
___________________________________________________________________
Added: svn:executable
   + *

Added: SwiftApps/Scattering/paintgrid/tssh
===================================================================
--- SwiftApps/Scattering/paintgrid/tssh	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/tssh	2013-07-31 19:18:06 UTC (rev 6693)
@@ -0,0 +1,3 @@
+# ssh login1.beagle.ci.uchicago.edu -L 59900:localhost:59900 'sleep 1234'
+  ssh  -n login1.beagle.ci.uchicago.edu 'sleep 1234 & echo $!; exit'
+


Property changes on: SwiftApps/Scattering/paintgrid/tssh
___________________________________________________________________
Added: svn:executable
   + *



