[Swift-commit] r6625 - SwiftApps/Scattering/paintgrid

wilde at ci.uchicago.edu
Mon Jul 8 15:48:48 CDT 2013


Author: wilde
Date: 2013-07-08 15:48:48 -0500 (Mon, 08 Jul 2013)
New Revision: 6625

Added:
   SwiftApps/Scattering/paintgrid/README
   SwiftApps/Scattering/paintgrid/apps
   SwiftApps/Scattering/paintgrid/apps.multisite
   SwiftApps/Scattering/paintgrid/genpoints.params
   SwiftApps/Scattering/paintgrid/genpoints.py
   SwiftApps/Scattering/paintgrid/multisites.xml
   SwiftApps/Scattering/paintgrid/paintgrid.swift
   SwiftApps/Scattering/paintgrid/processpoints.py
   SwiftApps/Scattering/paintgrid/sites.xml
Log:
Initial revision of paintgrid demo code.

Added: SwiftApps/Scattering/paintgrid/README
===================================================================
--- SwiftApps/Scattering/paintgrid/README	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/README	2013-07-08 20:48:48 UTC (rev 6625)
@@ -0,0 +1,29 @@
+Demo / tutorial application to mimic the needs of the PaintGrid application
+
+Execution scenarios:
+
+  run.local.sh
+
+  run.beagle.sh
+
+  run.midway.sh
+
+  run.midway+beagle.sh
+
+  run.blues.sh
+
+  run.orthros.sh
+
+
+Data transfer models:
+
+- provider staging without caching
+
+- provider staging with ad-hoc caching of big common files
+
+- gridftp staging
+
+- ssh (scp) staging
+
+- wrapper staging with scp and caching
+

Added: SwiftApps/Scattering/paintgrid/apps
===================================================================
--- SwiftApps/Scattering/paintgrid/apps	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/apps	2013-07-08 20:48:48 UTC (rev 6625)
@@ -0,0 +1,2 @@
+localhost local_python python
+westmere  python       python

Added: SwiftApps/Scattering/paintgrid/apps.multisite
===================================================================
--- SwiftApps/Scattering/paintgrid/apps.multisite	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/apps.multisite	2013-07-08 20:48:48 UTC (rev 6625)
@@ -0,0 +1,4 @@
+uc3      perl /usr/bin/perl null null null
+beagle   perl /usr/bin/perl null null null
+#sandy    perl /usr/bin/perl null null null
+westmere perl /usr/bin/perl null null null

Added: SwiftApps/Scattering/paintgrid/genpoints.params
===================================================================
--- SwiftApps/Scattering/paintgrid/genpoints.params	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/genpoints.params	2013-07-08 20:48:48 UTC (rev 6625)
@@ -0,0 +1,10 @@
+minx=0.0
+maxx=171.0
+miny=0.0
+maxy=171.0
+minz=0.0
+maxz=171.0
+incr=1.0
+tuplesPerFile=10000
+filePrefix="seq"
+outDir="out/seq"

Added: SwiftApps/Scattering/paintgrid/genpoints.py
===================================================================
--- SwiftApps/Scattering/paintgrid/genpoints.py	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/genpoints.py	2013-07-08 20:48:48 UTC (rev 6625)
@@ -0,0 +1,54 @@
+#! /usr/bin/env python
+
+import sys
+import os
+
+x = y = z = 0
+n = filenum = 0
+
+# default param values, may be overridden by .params file (argv[1]) :
+
+minx = 0.0
+maxx = 5.0
+miny = 0.0
+maxy = 5.0
+minz = 0.0
+maxz = 5.0
+incr = 1.0
+tuplesPerFile = 25
+filePrefix = "seq"
+outDir = "out"
+maxFiles = 100000
+
+# xrange function for floats
+
+def xfrange(start, stop, step):
+  while start < stop:
+    yield start
+    start += step
+
+# read in user params if specified
+
+if len(sys.argv) > 1:
+  execfile(sys.argv[1], globals())
+    
+if len(sys.argv) > 2:
+  runDir = sys.argv[2]
+else:
+  runDir = "missingRunDir"
+    
+# Generate sequences of point values in the parameter space
+
+os.chdir(runDir)
+os.system("mkdir -p " + outDir)
+
+for x in xfrange(minx,maxx,incr):
+  for y in xfrange(miny,maxy,incr):
+    for z in xfrange(minz,maxz,incr):
+      if n % tuplesPerFile == 0 :
+        filename = str.format(outDir + "/" + filePrefix + ".{0!s:0>5}",filenum)
+        print filename
+        of = file(filename,"w")
+        filenum += 1
+      print >> of, x, y, z
+      n += 1


Property changes on: SwiftApps/Scattering/paintgrid/genpoints.py
___________________________________________________________________
Added: svn:executable
   + *
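genpoints.py above sweeps a 3-D grid of float coordinates and writes the tuples out tuplesPerFile at a time. A minimal Python 3 sketch of that same sweep-and-batch idea (in-memory, with illustrative names not taken from the commit):

```python
# Minimal sketch of genpoints.py's core idea: enumerate a 3-D grid of
# float coordinates and batch the tuples, tuples_per_file at a time.
def frange(start, stop, step):
    """xrange-style generator for floats (half-open, like the original)."""
    while start < stop:
        yield start
        start += step

def batched_points(minv, maxv, incr, tuples_per_file):
    """Yield lists of (x, y, z) tuples, each at most tuples_per_file long."""
    batch = []
    for x in frange(minv, maxv, incr):
        for y in frange(minv, maxv, incr):
            for z in frange(minv, maxv, incr):
                batch.append((x, y, z))
                if len(batch) == tuples_per_file:
                    yield batch
                    batch = []
    if batch:          # final partial batch, as in the original script
        yield batch

# 2*2*2 = 8 points split into batches of 3 -> batch sizes [3, 3, 2]
batches = list(batched_points(0.0, 2.0, 1.0, 3))
```

The original writes each batch to a numbered file under outDir instead of yielding it, but the loop nesting and the modulo-style batching are the same.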

Added: SwiftApps/Scattering/paintgrid/multisites.xml
===================================================================
--- SwiftApps/Scattering/paintgrid/multisites.xml	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/multisites.xml	2013-07-08 20:48:48 UTC (rev 6625)
@@ -0,0 +1,71 @@
+<config>
+
+  <pool handle="uc3">
+    <execution provider="coaster" url="uc3-sub.uchicago.edu" jobmanager="ssh-cl:condor"/>
+    <profile namespace="karajan" key="jobThrottle">10.00</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+    <profile namespace="globus"  key="jobsPerNode">1</profile>
+    <profile namespace="globus"  key="maxtime">3600</profile>
+    <profile namespace="globus"  key="maxWalltime">00:05:00</profile>
+    <profile namespace="globus"  key="highOverAllocation">100</profile>
+    <profile namespace="globus"  key="lowOverAllocation">100</profile>
+    <profile namespace="globus"  key="slots">1000</profile>
+    <profile namespace="globus"  key="maxNodes">1</profile>
+    <profile namespace="globus"  key="nodeGranularity">1</profile>
+    <profile namespace="globus"  key="condor.+AccountingGroup">"group_friends.{env.USER}"</profile>
+    <profile namespace="globus"  key="jobType">nonshared</profile>
+    <!-- <profile namespace="globus"  key="condor.+Requirements">isUndefined(GLIDECLIENT_Name) == FALSE</profile> -->
+    <workdirectory>.</workdirectory>
+  </pool>
+
+  <pool handle="beagle">
+    <execution provider="coaster" jobmanager="ssh-cl:pbs" url="login4.beagle.ci.uchicago.edu"/>
+    <profile namespace="globus" key="jobsPerNode">24</profile>
+    <profile namespace="globus" key="lowOverAllocation">100</profile>
+    <profile namespace="globus" key="highOverAllocation">100</profile>
+    <!-- <profile namespace="globus" key="providerAttributes">pbs.aprun;pbs.mpp;depth=24</profile> -->
+    <profile namespace="globus" key="providerAttributes">pbs.aprun;pbs.mpp;depth=24;pbs.resource_list=advres=wilde.1768</profile>
+    <profile namespace="globus" key="maxtime">3600</profile>
+    <profile namespace="globus" key="maxWalltime">00:05:00</profile>
+    <profile namespace="globus" key="userHomeOverride">/lustre/beagle/{env.USER}/swiftwork</profile>
+    <profile namespace="globus" key="slots">5</profile>
+    <profile namespace="globus" key="maxnodes">1</profile>
+    <profile namespace="globus" key="nodeGranularity">1</profile>
+    <profile namespace="karajan" key="jobThrottle">4.80</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+    <workdirectory>/tmp/{env.USER}/swiftwork</workdirectory>
+  </pool>
+
+  <pool handle="sandyb">
+    <execution provider="coaster" jobmanager="local:slurm"/>
+    <profile namespace="globus" key="queue">sandyb</profile>
+    <profile namespace="globus" key="jobsPerNode">16</profile>
+    <profile namespace="globus" key="maxWalltime">00:05:00</profile>
+    <profile namespace="globus" key="maxTime">3600</profile>
+    <profile namespace="globus" key="highOverAllocation">100</profile>
+    <profile namespace="globus" key="lowOverAllocation">100</profile>
+    <profile namespace="globus" key="slots">4</profile>
+    <profile namespace="globus" key="maxNodes">1</profile>
+    <profile namespace="globus" key="nodeGranularity">1</profile>
+    <profile namespace="karajan" key="jobThrottle">.64</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+    <workdirectory>/tmp/{env.USER}</workdirectory>
+  </pool>
+
+  <pool handle="westmere">
+    <execution provider="coaster" jobmanager="local:slurm"/>
+    <profile namespace="globus" key="queue">westmere</profile>
+    <profile namespace="globus" key="jobsPerNode">12</profile>
+    <profile namespace="globus" key="maxWalltime">00:05:00</profile>
+    <profile namespace="globus" key="maxTime">3600</profile>
+    <profile namespace="globus" key="highOverAllocation">100</profile>
+    <profile namespace="globus" key="lowOverAllocation">100</profile>
+    <profile namespace="globus" key="slots">4</profile>
+    <profile namespace="globus" key="maxNodes">1</profile>
+    <profile namespace="globus" key="nodeGranularity">1</profile>
+    <profile namespace="karajan" key="jobThrottle">.48</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+    <workdirectory>/tmp/{env.USER}</workdirectory>
+  </pool>
+
+</config>

Added: SwiftApps/Scattering/paintgrid/paintgrid.swift
===================================================================
--- SwiftApps/Scattering/paintgrid/paintgrid.swift	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/paintgrid.swift	2013-07-08 20:48:48 UTC (rev 6625)
@@ -0,0 +1,40 @@
+type file;
+
+# External app to generate sets of input data points
+
+app (file plist) genPoints (file pyscript, file paramFile)
+{
+  local_python @pyscript @paramFile runDir stdout=@plist;
+}
+
+# External app to process a set of data points (a mockup of paintGrid)
+
+app (file ofile) processPoints (file pyscript, file imageFile, file points)
+{
+  python @pyscript @imageFile @points runTime stdout=@ofile;
+}
+
+# The actual python scripts for the app() functions above
+
+file genPoints_script     <"genpoints.py">;
+file processPoints_script <"processpoints.py">;
+
+# Command line args to this script
+
+file   params   <single_file_mapper;file=@arg("params", "genpoints.params")>;
+file   image    <single_file_mapper;file=@arg("image",  "UNSPECIFIED.tif")>;
+global string runTime = @arg("runTime","0.0");
+global string runDir  = @arg("runDir");
+
+# Main script:
+#   Call genPoints to make a set of files, each of which contains a set of data points to process
+#   The params file specifies the range of points to generate, and how to batch them into files
+#   (In this example the input points are triples in 3-space)
+
+string pointSetFileNames[]=readData(genPoints(genPoints_script,params));
+file   pointSets[] <array_mapper; files=pointSetFileNames>;
+
+foreach pointSet, i in pointSets {
+  file ofile<single_file_mapper; file=@strcat("out/out.", at strcut(@strcat("00000",i),"(.....$)"))>;
+  ofile = processPoints(processPoints_script, image, pointSet);
+}
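The output mapper in the foreach above builds zero-padded file names via @strcat/@strcut. Assuming the usual regex reading of "(.....$)" (capture the last five characters), a Python equivalent of that naming rule:

```python
# Equivalent of the Swift mapper expression
#   @strcat("out/out.", @strcut(@strcat("00000", i), "(.....$)"))
# i.e. prepend "00000" to the index and keep the last five characters.
def out_name(i):
    return "out/out." + ("00000" + str(i))[-5:]
```

So index 7 maps to out/out.00007, keeping the per-point output files in lexical order.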

Added: SwiftApps/Scattering/paintgrid/processpoints.py
===================================================================
--- SwiftApps/Scattering/paintgrid/processpoints.py	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/processpoints.py	2013-07-08 20:48:48 UTC (rev 6625)
@@ -0,0 +1,37 @@
+#! /usr/bin/env python
+
+import sys
+import time
+
+dataFileName = sys.argv[1]
+pointFileName = sys.argv[2]
+runTime = float(sys.argv[3])
+
+n = 0
+pixel = []
+with open(dataFileName, "rb") as f:
+    byte = f.read(1)
+    while byte != "":
+        pixel.append(byte)
+        byte = f.read(1)
+
+print "Data file has ", len(pixel), " pixels"
+
+f = open(pointFileName)
+lines = f.read().splitlines()
+f.close()
+
+print "Processing ", len(lines), " points in model space"
+print "Runtime is ", runTime, " seconds per point"
+
+for line in lines:
+  xyz = line.split()
+  x = float(xyz[0]) * .001
+  y = float(xyz[1]) * .001
+  z = float(xyz[2]) * .001
+  func = 0.0
+  for p in pixel:
+    v=float(ord(p))
+    func += v*x + v*y + v*z
+  time.sleep(runTime)
+  print " %10f %10f %10f     %10f" % (x, y, z, func)


Property changes on: SwiftApps/Scattering/paintgrid/processpoints.py
___________________________________________________________________
Added: svn:executable
   + *
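The inner loop of processpoints.py computes a mock score per (x, y, z) point over all pixel bytes. Assuming the intended per-pixel term is v*x + v*y + v*z (y is otherwise unused), the score factors as (x + y + z) * sum(pixel values); a short Python sketch with illustrative names:

```python
# Mock per-point score as in processpoints.py's inner loop:
# each pixel byte value v contributes v*x + v*y + v*z.
def point_score(pixels, x, y, z):
    return sum(v * x + v * y + v * z for v in pixels)

# Closed form: the sum factors as (x + y + z) * sum(pixels),
# so the mock workload scales linearly in the pixel count.
```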

Added: SwiftApps/Scattering/paintgrid/sites.xml
===================================================================
--- SwiftApps/Scattering/paintgrid/sites.xml	                        (rev 0)
+++ SwiftApps/Scattering/paintgrid/sites.xml	2013-07-08 20:48:48 UTC (rev 6625)
@@ -0,0 +1,39 @@
+<config>
+
+  <pool handle="localhost">
+    <execution provider="local"/>
+    <filesystem provider="local"/>
+    <workdirectory>/scratch/midway/{env.USER}/swiftwork</workdirectory>
+  </pool>
+
+  <pool handle="westmere">
+    <execution provider="coaster" jobmanager="local:slurm"/>
+
+    <!-- Set partition and account here: -->
+    <profile namespace="globus" key="queue">westmere</profile>
+    <profile namespace="globus" key="ppn">12</profile>
+    <!-- <profile namespace="globus" key="project">pi-wilde</profile> -->
+
+    <!-- Set number of jobs and nodes per job here: -->
+    <profile namespace="globus" key="slots">1</profile>
+    <profile namespace="globus" key="maxnodes">1</profile>
+    <profile namespace="globus" key="nodegranularity">1</profile>
+    <profile namespace="globus" key="jobsPerNode">12</profile> <!-- apps per node! -->
+    <profile namespace="karajan" key="jobThrottle">.11</profile> <!-- eg .11 -> 12 -->
+
+    <!-- Set estimated app time (maxwalltime) and requested job time (maxtime) here: -->
+    <profile namespace="globus" key="maxWalltime">00:15:00</profile>
+    <profile namespace="globus" key="maxtime">1800</profile>  <!-- in seconds! -->
+
+    <!-- Set data staging model and work dir here: -->
+    <filesystem provider="local"/>
+    <workdirectory>/scratch/midway/{env.USER}/swiftwork</workdirectory>
+
+    <!-- Typically leave these constant: -->
+    <profile namespace="globus" key="slurm.exclusive">false</profile>
+    <profile namespace="globus" key="highOverAllocation">100</profile>
+    <profile namespace="globus" key="lowOverAllocation">100</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+  </pool>
+
+</config>



