[Swift-commit] r7006 - in SwiftTutorials/swift-cray-tutorial: . doc

wilde at ci.uchicago.edu
Mon Aug 26 14:16:38 CDT 2013


Author: wilde
Date: 2013-08-26 14:16:38 -0500 (Mon, 26 Aug 2013)
New Revision: 7006

Modified:
   SwiftTutorials/swift-cray-tutorial/doc/README
   SwiftTutorials/swift-cray-tutorial/setup.sh
Log:
Changes for Cray.

Modified: SwiftTutorials/swift-cray-tutorial/doc/README
===================================================================
--- SwiftTutorials/swift-cray-tutorial/doc/README	2013-08-26 16:16:18 UTC (rev 7005)
+++ SwiftTutorials/swift-cray-tutorial/doc/README	2013-08-26 19:16:38 UTC (rev 7006)
@@ -1,23 +1,29 @@
 Swift Cray Tutorial
 ===================
 
-////
+////
+Comments:
 
-Outline
+This is the asciidoc input file.
+The content below is also viewable as a plain-text README file.
 
-Introductory exercises
+////
 
-p1 - Run an application under Swift
+This tutorial is viewable at:
+http://swift-lang.org/tutorials/swift-cray-tutorial.html
 
-p2 - Parallel loops with foreach
+////
+Tutorial Outline:
 
-p3 - Merging/reducing the results of a parallel foreach loop
+Introductory exercises, run locally on the login node:
 
-p4 - Running on the remote site nodes
+  p1 - Run an application under Swift
+  p2 - Parallel loops with foreach
+  p3 - Merging/reducing the results of a parallel foreach loop
 
-p5 - Running the stats summary step on the remote site
+Compute-node exercises:
 
-p6 - Add additional apps for generating seeds remotely 
+  p4 - Running apps on Cray compute nodes
+  p5 - Running on multiple pools of compute nodes
+  p6 - A more complex workflow pattern
 
 ////
 
@@ -45,11 +51,11 @@
 compute nodes, and see how more complex workflows can be expressed
 with Swift scripts.
 
-
 Swift tutorial setup
 --------------------
 
-To install the tutorial scripts on Raven, do:
+To install the tutorial scripts on the Cray "Raven" XE6-XK7 test system,
+do:
 
 -----
 $ cd $HOME
@@ -69,8 +75,7 @@
 -----
 
 NOTE: If you re-login or open new ssh sessions, you must
-re-run `source setup.sh` in each ssh shell window:
-
+re-run `source setup.sh` in each ssh shell/window.
+
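+For example, to re-initialize a fresh login session (the directory name
+below assumes an SVN checkout into $HOME/Cray-Swift as described in the
+next section; adjust the path to match your installation):
+
+-----
+$ cd $HOME/Cray-Swift    # or wherever you installed the tutorial
+$ source setup.sh
+-----
+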
 To check out the tutorial scripts from SVN
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -78,7 +83,7 @@
 the Swift Subversion repository, do:
 
 -----
-$ svn co https://svn.ci.uchicago.edu/svn/vdl2/SwiftTutorials/Cray-Swift
+$ svn co https://svn.ci.uchicago.edu/svn/vdl2/SwiftTutorials/swift-cray-tutorial Cray-Swift
 -----
 
 This will create a directory called "Cray-Swift" which contains all of the
@@ -395,27 +400,6 @@
 sys::[cat -n ../part04/apps]
 -----
 
-
-Part 5: Controlling where applications run
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-p5.swift introduces a postprocessing step. After all the parallel
-simulations have completed, the files created by simulation.sh will be
-averaged by stats.sh. This is similar to p3, but all app invocations
-are done on remote nodes with Swift managing file transfers.
-
-image::part05.png[align="center"]
-
-.p5.swift
-----
-sys::[cat -n ../part05/p5.swift]
-----
-
-To run:
-----
-$ swift p5.swift
-----
-
 Larger runs
 ~~~~~~~~~~~
 To test with larger runs, there are two changes that are required. The first is a 
@@ -434,9 +418,6 @@
 -----
 
 
-
-
-
 Plotting 
 ~~~~~~~~
 Each part directory contains a file called plot.sh that can be used for plotting. 
@@ -456,10 +437,6 @@
 
 image::activeplot.png[width=700,align=center]
 image::cumulativeplot.png[width=700,align=center]
-////
-image::activeplot.png[align="center",scaledwidth="60%"]
-image::cumulativeplot.png[align="center",scaledwidth="60%"]
-////
 
 
 You can then scp the two resulting plot images to another host to view them. For example, from a Mac, do:
@@ -469,7 +446,29 @@
 $ open *.png
 ----
 
+Part 5: Controlling the compute-node pools where applications run
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
+p5.swift introduces a postprocessing step. After all the parallel
+simulations have completed, the files created by simulation.sh will be
+averaged by stats.sh. This is similar to p3, but all app invocations
+are done on remote nodes with Swift managing file transfers.
+
+image::part05.png[align="center"]
+
+.p5.swift
+----
+sys::[cat -n ../part05/p5.swift]
+----
+
+To run:
+----
+$ swift p5.swift
+----
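+
+For reference, p5.swift follows a scatter/gather pattern: a parallel
+foreach of simulation runs, followed by a single stats call that
+consumes all of their outputs. A minimal sketch of that pattern is
+shown below; the app names, mapper settings, and loop bounds are
+illustrative rather than the exact code shipped in part05:
+
+----
+type file;
+
+// Each app declaration maps a Swift function onto an executable
+// entry in the site's "apps" catalog.
+app (file o) simulate ()
+{
+  simulate stdout=@filename(o);
+}
+
+app (file o) stats (file s[])
+{
+  stats @filenames(s) stdout=@filename(o);
+}
+
+// Scatter: run the simulations in parallel, one output file each.
+file sims[] <simple_mapper; prefix="sim_", suffix=".out">;
+foreach i in [0:9] {
+  sims[i] = simulate();
+}
+
+// Gather: average all simulation outputs into a single result file.
+file result <"average.out">;
+result = stats(sims);
+----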
+
+
+
+
 Part 6: Specifying more complex workflow patterns
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 

Modified: SwiftTutorials/swift-cray-tutorial/setup.sh
===================================================================
--- SwiftTutorials/swift-cray-tutorial/setup.sh	2013-08-26 16:16:18 UTC (rev 7005)
+++ SwiftTutorials/swift-cray-tutorial/setup.sh	2013-08-26 19:16:38 UTC (rev 7006)
@@ -57,67 +57,63 @@
 sites.file=sites.xml
 tc.file=apps
 
-wrapperlog.always.transfer=true
-sitedir.keep=true
+wrapperlog.always.transfer=false
+sitedir.keep=false
 file.gc.enabled=false
 status.mode=provider
 
 execution.retries=0
 lazy.errors=false
 
-use.wrapper.staging=false
 use.provider.staging=true
-provider.staging.pin.swiftfiles=false
+provider.staging.pin.swiftfiles=true
+use.wrapper.staging=false
 
 END
 
 cat >sites.raven<<END
 <?xml version="1.0" encoding="UTF-8"?>
-<config xmlns="http://www.ci.uchicago.edu/swift/SwiftSites">
+<config xmlns="http://swift-lang.org/sites">
 
   <pool handle="localhost">
     <execution provider="local" />
-    <profile namespace="karajan" key="jobThrottle">0</profile>
+    <profile namespace="karajan" key="jobThrottle">0.04</profile>
     <profile namespace="karajan" key="initialScore">10000</profile>
-    <filesystem provider="local"/>
-    <workdirectory>.swift/tmp</workdirectory>
+    <workdirectory>/lus/scratch/$USER/swiftwork</workdirectory>
     <profile namespace="swift" key="stagingMethod">local</profile>
   </pool>
 
   <pool handle="raven">
     <execution provider="coaster" jobmanager="local:pbs" URL="local:01"/>
-    <profile namespace="env" key="SWIFT_GEN_SCRIPTS">KEEP</profile>
     <profile namespace="globus" key="jobsPerNode">32</profile>
+    <profile namespace="globus" key="queue">small</profile>
     <profile namespace="globus" key="providerAttributes">pbs.aprun;pbs.mpp;depth=32</profile>
     <profile namespace="globus" key="maxWallTime">00:01:00</profile>
-    <profile namespace="globus" key="slots">10</profile>
+    <profile namespace="globus" key="slots">2</profile>
     <profile namespace="globus" key="maxNodes">1</profile>
     <profile namespace="karajan" key="jobThrottle">3.20</profile>
     <profile namespace="karajan" key="initialScore">10000</profile>
-    <filesystem provider="local"/>
-    <workdirectory>{env.HOME}/swiftwork</workdirectory>
+    <workdirectory>/lus/scratch/{env.USER}/swiftwork</workdirectory>
+    <profile namespace="swift" key="stagingMethod">sfs</profile>
   </pool>
 
   <pool handle="ravenMED">
     <execution provider="coaster" jobmanager="local:pbs" URL="local:02"/>
-    <profile namespace="env" key="SWIFT_GEN_SCRIPTS">KEEP</profile>
     <profile namespace="globus" key="queue">medium</profile>
     <profile namespace="globus" key="jobsPerNode">32</profile>
     <profile namespace="globus" key="providerAttributes">pbs.aprun;pbs.mpp;depth=32</profile>
     <profile namespace="globus" key="maxWallTime">00:01:00</profile>
     <profile namespace="globus" key="slots">1</profile>
-    <profile namespace="globus" key="maxNodes">6</profile>
-    <profile namespace="globus" key="nodeGranularity">6</profile>
-    <profile namespace="karajan" key="jobThrottle">5.00</profile>
+    <profile namespace="globus" key="maxNodes">8</profile>
+    <profile namespace="globus" key="nodeGranularity">8</profile>
+    <profile namespace="karajan" key="jobThrottle">2.56</profile>
     <profile namespace="karajan" key="initialScore">10000</profile>
-    <!-- <filesystem provider="local"/> -->
-    <!-- <workdirectory>{env.HOME}/swiftwork</workdirectory> -->
-    <workdirectory>/dev/shm/</workdirectory> -->
+    <workdirectory>/lus/scratch/{env.USER}/swiftwork</workdirectory>
+    <profile namespace="swift" key="stagingMethod">sfs</profile>
   </pool>
 
   <pool handle="ravenGPU">
     <execution provider="coaster" jobmanager="local:pbs" URL="local:03"/> 
-    <profile namespace="env" key="SWIFT_GEN_SCRIPTS">KEEP</profile>
     <profile namespace="globus" key="queue">gpu_nodes</profile>
     <profile namespace="globus" key="jobsPerNode">16</profile>
     <profile namespace="globus" key="providerAttributes">pbs.aprun;pbs.mpp;depth=16</profile>
@@ -127,9 +123,8 @@
     <profile namespace="globus" key="nodeGranularity">6</profile>
     <profile namespace="karajan" key="jobThrottle">5.00</profile>
     <profile namespace="karajan" key="initialScore">10000</profile>
-    <!-- <filesystem provider="local"/> -->
-    <!-- <workdirectory>{env.HOME}/swiftwork</workdirectory> -->
-    <workdirectory>/dev/shm/</workdirectory> -->
+    <workdirectory>/lus/scratch/{env.USER}/swiftwork</workdirectory>
+    <profile namespace="swift" key="stagingMethod">sfs</profile>
   </pool>
 
 </config>



