[Swift-commit] r6118 - trunk/docs/siteguide

davidk at ci.uchicago.edu
Wed Jan 2 15:00:33 CST 2013


Author: davidk
Date: 2013-01-02 15:00:33 -0600 (Wed, 02 Jan 2013)
New Revision: 6118

Removed:
   trunk/docs/siteguide/pads
Modified:
   trunk/docs/siteguide/beagle
   trunk/docs/siteguide/fusion
   trunk/docs/siteguide/futuregrid
   trunk/docs/siteguide/intrepid
   trunk/docs/siteguide/mcs
   trunk/docs/siteguide/uc3
Log:
Rewrite uc3 site guide to use automatic coasters with condor
Some reformatting


Modified: trunk/docs/siteguide/beagle
===================================================================
--- trunk/docs/siteguide/beagle	2013-01-02 00:14:03 UTC (rev 6117)
+++ trunk/docs/siteguide/beagle	2013-01-02 21:00:33 UTC (rev 6118)
@@ -1,4 +1,4 @@
-Cray XE6: Beagle
+Beagle (Cray XE6)
 ----------------
 
 Beagle is a Cray XE6 supercomputer at UChicago. It employs a batch-oriented

Modified: trunk/docs/siteguide/fusion
===================================================================
--- trunk/docs/siteguide/fusion	2013-01-02 00:14:03 UTC (rev 6117)
+++ trunk/docs/siteguide/fusion	2013-01-02 21:00:33 UTC (rev 6118)
@@ -1,4 +1,4 @@
-x86 Cluster: Fusion
+Fusion (x86 cluster)
 -------------------
 
 Fusion is a 320-node computing cluster for the Argonne

Modified: trunk/docs/siteguide/futuregrid
===================================================================
--- trunk/docs/siteguide/futuregrid	2013-01-02 00:14:03 UTC (rev 6117)
+++ trunk/docs/siteguide/futuregrid	2013-01-02 21:00:33 UTC (rev 6118)
@@ -1,5 +1,5 @@
-x86 Cloud: Futuregrid Quickstart Guide
---------------------------------------
+Futuregrid (x86 cloud)
+----------------------
 
 FutureGrid is a distributed, high-performance test-bed that allows
 scientists to collaboratively develop and test innovative approaches

Modified: trunk/docs/siteguide/intrepid
===================================================================
--- trunk/docs/siteguide/intrepid	2013-01-02 00:14:03 UTC (rev 6117)
+++ trunk/docs/siteguide/intrepid	2013-01-02 21:00:33 UTC (rev 6118)
@@ -1,4 +1,4 @@
-Blue Gene/P: Intrepid
+Intrepid (Blue Gene/P)
 ---------------------
 
 Intrepid is an IBM Blue Gene/P supercomputer located at the Argonne

Modified: trunk/docs/siteguide/mcs
===================================================================
--- trunk/docs/siteguide/mcs	2013-01-02 00:14:03 UTC (rev 6117)
+++ trunk/docs/siteguide/mcs	2013-01-02 21:00:33 UTC (rev 6118)
@@ -1,4 +1,4 @@
-x86 Workstations: MCS Compute Servers
+MCS Compute Servers (x86 workstations)
 -------------------------------------
 
 This section describes how to use the general-use compute servers for

Deleted: trunk/docs/siteguide/pads
===================================================================
--- trunk/docs/siteguide/pads	2013-01-02 00:14:03 UTC (rev 6117)
+++ trunk/docs/siteguide/pads	2013-01-02 21:00:33 UTC (rev 6118)
@@ -1,183 +0,0 @@
-x86 Cluster: PADS
------------------
-
-PADS is a petabyte-scale, data intense computing resource located
-at the joint Argonne National Laboratory/University of Chicago
-Computation Institute. More information about PADS can be found
-at http://pads.ci.uchicago.edu.
-
-Requesting Access
-~~~~~~~~~~~~~~~~~
-If you do not already have a Computation Institute account, you can request
-access at https://www.ci.uchicago.edu/accounts. This page will give you a list
-of resources you can request access to. Be sure that PADS is selected. If
-you already have an existing CI account, but do not have access to PADS,
-send an email to support at ci.uchicago.edu to request access.
-
-SSH Keys
-~~~~~~~~
-Before accessing PADS, be sure to have your SSH keys configured correctly.
-There is some basic information about SSH and how to generate your key at
-http://www.ci.uchicago.edu/wiki/bin/view/Resources/SshKeys. Once you have
-followed those instructions, you can add your key at
-https://www.ci.uchicago.edu/support/sshkeys/.
-
-Connecting to a login node
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-Once your keys are configured, you should be able to access a PADS login
-node with the following command:
-
------
-ssh yourusername at login.pads.ci.uchicago.edu
------
-
-Adding Software Packages
-~~~~~~~~~~~~~~~~~~~~~~~~
-Softenv is a system used for managing applications. In order to run Swift,
-the softenv environment will have to be modified slightly. Softenv is
-configured by a file in your home directory called .soft. Edit this file
-to look like this:
------
-+java-sun
-+maui
-+torque
- at default
------
-
-Log out of PADS, and log back in for these changes to take effect.
-
-Which project(s) are you a member of?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-PADS requires that you are a member of a project. You can determine this by
-running the following command:
-
------
-$ projects --available
-
-The following projects are available for your use
-
-Project      PI                      Title
-CI-CCR000013 Michael Wilde           The Swift Parallel Scripting System
------
-
-If you are not a member of a project, you must first request access
-to a project at http://www.ci.uchicago.edu/hpc/projects.
-
-You should make sure that you have a project set as default. Run
-the projects command with no arguments to determine if you have a default.
-
-------
-$ projects
-You have no default project set.
------
-
-To set your default project, use projects --set
-------
-$ projects --set CI-CCR000013 --all
-Your default project for all CI clusters has been set to CI-CCR000013.
------
-
-Creating sites.xml
-^^^^^^^^^^^^^^^^^^
-Swift relies on various configuration files to determine how to
-run. This section will provide a working configuration file which
-you can copy and paste to get running quickly. The sites.xml file
-tells Swift how to submit jobs, where working directories are
-located, and various other configuration information. More
-information on sites.xml can be found in the Swift User's Guide.
-
-The first step is to paste the text below into a file named sites.xml.
-
------
-include::../../tests/providers/pads/coasters/sites.template.xml[]
------
-
-This file will require just a few customizations. First, create a
-directory called swiftwork. Modify \_WORK_ in sites.xml
-to point to this new directory. For example
------
-<workdirectory>/home/myhome/swiftwork</workdirectory>
------
-
-Creating tc.data
-^^^^^^^^^^^^^^^^
-The tc.data configuration file gives information about the applications
-that will be called by Swift. More information about the format
-of tc.data can be found in the Swift User's guide.
-
-Paste the following example into a file named tc.data
-
------
-include::../../tests/providers/pads/coasters/tc.template.data[]
------
-
-Copy a Swift Script
-^^^^^^^^^^^^^^^^^^^
-Within the Swift directory is an examples directory which contains
-several introductory Swift scripts. The example we will use in this
-section is called catsn.swift. Copy this script to the same directory
-that your sites.xml and tc.data files are located.
-
------
-$ cp ~/swift-0.93/examples/misc/catsn.swift .
-$ cp ~/swift-0.93/examples/misc/data.txt .
------
-
-TIP: The location of your swift directory may vary depending on how
-you installed it. Change this to the examples/misc directory of your
-installation as needed.
-
-
-Run Swift
-^^^^^^^^^
-
-Finally, run the script:
-
------
- swift -sites.file sites.xml -tc.file tc.data catsn.swift
------
-
-You should see several new files being created, called catsn.0001.out, catsn.0002.out, etc. Each of these
-files should contain the contents of what you placed into data.txt. If this happens, your job has run
-successfully on PADS!
-
-TIP: Make sure your default project is defined. Read on for more
-information.
-
-Read on for more detailed information about running Swift on PADS.
-
-
-Queues
-^^^^^^
-
-As you run more application in the future, you will likely need
-to change queues.
-
-PADS has several different queues you can submit jobs to depending on
-the type of work you will be doing. The command "qstat -q" will print
-the most up to date list of this information.
-
-.PADS Queues
-[options="header"]
-|=========================================================
-|Queue   |Memory|CPU Time|Walltime|Node|Run|Que|Lm  |State
-|route   |--    |--      |--      |--  |  0|  0|--  | E R
-|short   |--    |--      |04:00:00|--  | 64|  0|--  | E R
-|extended|--    |--      |--      |--  |  0|  0|--  | E R
-|fast    |--    |--      |01:00:00|1   |  0|152|--  | E R
-|long    |--    |--      |24:00:00|--  |232|130|--  | E R
-|=========================================================
-
-When you determine your computing requirements, modify this line in your
-sites.xml:
-
------
-<profile key="queue" namespace="globus">fast</profile>
------
-
-More Help
-~~~~~~~~~
-The best place for additional help is the Swift user mailing list. You can subscribe to this list at
-https://lists.ci.uchicago.edu/cgi-bin/mailman/listinfo/swift-user. When submitting information, please send your sites.xml file, your tc.data, and any Swift log files that were created during your attempt.
-
-

Modified: trunk/docs/siteguide/uc3
===================================================================
--- trunk/docs/siteguide/uc3	2013-01-02 00:14:03 UTC (rev 6117)
+++ trunk/docs/siteguide/uc3	2013-01-02 21:00:33 UTC (rev 6118)
@@ -1,53 +1,135 @@
-x86 Cluster: UC3
+UC3 (x86 cluster)
 ----------------
 
-Create a coaster-service.conf
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-To begin, copy the text below and paste it into your Swift distribution's etc
-directory. Name the file coaster-service.conf.
+Requesting Access
+~~~~~~~~~~~~~~~~~
+To request access to UC3, you must have a University of Chicago CNetID 
+and be a member of the UC3 group. More information about UC3 can be 
+found at https://wiki.uchicago.edu/display/uc3/UC3+Home or 
+uc3-support at lists.uchicago.edu.
 
+Connecting to a login node
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+To access the UC3 login node, you will use your CNetID and password.
+
 -----
-include::../../tests/providers/uc3/coaster-service.conf[]
+ssh -l <cnetid> uc3-sub.uchicago.edu
 -----
 
-Starting the Coaster Service
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Installing Swift
+~~~~~~~~~~~~~~~~
+Swift should be available by default on the UC3 login nodes. You can verify 
+this by running the following command:
+-----
+swift -version
+-----
 
-Change directories to the location you would like to run a
-Swift script and start the coaster service with this
-command:
+If for some reason Swift is not available, you can follow the instructions at 
+http://www.ci.uchicago.edu/swift/guides/release-0.93/quickstart/quickstart.html.
+Swift 0.94 or later is required to work with the condor provider on UC3.
 
+Creating sites.xml
+~~~~~~~~~~~~~~~~~~
+This section will provide a working configuration file which you can copy and paste 
+to get running quickly. The sites.xml file tells Swift how to submit jobs, where 
+working directories are located, and various other configuration information. 
+More information on sites.xml can be found in the Swift User’s Guide.
+
+The first step is to paste the text below into a file named sites.xml:
 -----
-start-coaster-service
+<config>
+  <pool handle="uc3">
+    <execution provider="coaster" url="uc3-sub.uchicago.edu" jobmanager="local:condor"/>
+    <profile namespace="karajan" key="jobThrottle">999.99</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+    <profile namespace="globus"  key="jobsPerNode">1</profile>
+    <profile namespace="globus"  key="maxWalltime">3600</profile>
+    <profile namespace="globus"  key="nodeGranularity">1</profile>
+    <profile namespace="globus"  key="highOverAllocation">100</profile>
+    <profile namespace="globus"  key="lowOverAllocation">100</profile>
+    <profile namespace="globus"  key="slots">1000</profile>
+    <profile namespace="globus"  key="maxNodes">1</profile>
+    <profile namespace="globus"  key="condor.+AccountingGroup">"group_friends.{env.USER}"</profile>
+    <profile namespace="globus"  key="jobType">nonshared</profile>
+    <filesystem provider="local" url="none" />
+    <workdirectory>.</workdirectory>
+  </pool>
+</config>
 -----
 
-This will create a configuration file that Swift needs
-called sites.xml.
+Creating tc.data
+~~~~~~~~~~~~~~~~
+The tc.data configuration file gives information about the applications that will be called by Swift.
+More information about the format of tc.data can be found in the Swift User’s guide.
 
-WARNING: Any existing sites.xml files in this directory
-will be overwritten. Be sure to make a copy of any
-custom configuration files you may have.
+Paste the following example into a file named tc.data:
+-----
+uc3 echo /bin/echo null null null
+-----
 
-Run Swift
-~~~~~~~~~
+Create a configuration file
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+A Swift configuration file enables and disables some settings in Swift. More information on what
+these settings do can be found in the Swift User's guide.
 
-Next, run Swift. If you do not have a particular script
-in mind, you can test Swift by using a Swift script in
-the examples/ directory.
+Paste the following lines into a file called cf:
+-----
+wrapperlog.always.transfer=false
+sitedir.keep=true
+execution.retries=0
+lazy.errors=false
+status.mode=provider
+use.provider.staging=true
+provider.staging.pin.swiftfiles=false
+use.wrapper.staging=false
+-----
 
-Run the following command to run the script:
+Creating echo.swift
+~~~~~~~~~~~~~~~~~~~
+Now we need to create a Swift script to test with. Let's use a simple application that calls /bin/echo.
+
 -----
-swift -sites.file sites.xml -tc.file tc.data yourscript.swift
+type file;
+
+app (file o) echo (string s) {
+   echo s stdout=@o;
+}
+
+foreach i in [1:5] {
+  file output_file <single_file_mapper; file=@strcat("output/output.", i, ".txt")>;
+  output_file = echo( @strcat("This is test number ", i) );
+}
 -----
 
-Stopping the Coaster Service
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The coaster service will run indefinitely. The stop-coaster-service
-script will terminate the coaster service.
+Running Swift
+~~~~~~~~~~~~~
+Putting everything together now, run your Swift script with the following command:
 
 -----
-$ stop-coaster-service
+swift -sites.file sites.xml -tc.file tc.data -config cf echo.swift
 -----
 
-This will kill the coaster service and kill the worker scripts on remote systems.
+If everything runs successfully, you will see five files created in the output directory.
 
+
+Controlling where jobs run
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+Swift will automatically generate Condor submit files for you with the basic information about
+how to run. However, Condor has hundreds of commands that let you customize how things work.
+If you need one of these advanced commands, you can add it to your sites.xml. The basic
+template for this is:
+
+-----
+<profile namespace="globus" key="condor.key">value</profile>
+-----
+
+For example, let's assume that you want to control where your jobs run by adding a
+requirement. The Condor command that will control the run is:
+-----
+Requirements = UidDomain == "osg-gk.mwt2.org"
+-----
+
+To have this generated by Swift, you will add a line to your sites.xml in the key/value style
+shown above:
+-----
+<profile namespace="globus" key="condor.Requirements">UidDomain == "osg-gk.mwt2.org"</profile>
+-----
+
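The mapping described above (a globus-namespace profile key starting with "condor." becomes one
line in the generated Condor submit file) can be sketched as follows. This is a purely
illustrative Python helper, not code from Swift itself; the function name is made up, and the
values are taken verbatim from the uc3 examples above:

```python
def condor_submit_line(profile_key, value):
    """Hypothetical sketch: strip the "condor." prefix from a globus-namespace
    profile key and emit the corresponding Condor submit-file line."""
    prefix = "condor."
    if not profile_key.startswith(prefix):
        raise ValueError("not a condor.* profile key: " + profile_key)
    return profile_key[len(prefix):] + " = " + value

# The Requirements example from this guide:
print(condor_submit_line("condor.Requirements", 'UidDomain == "osg-gk.mwt2.org"'))
# The custom "+AccountingGroup" ClassAd attribute from the sites.xml above
# (the {env.USER} placeholder is left for Swift to expand):
print(condor_submit_line("condor.+AccountingGroup", '"group_friends.{env.USER}"'))
```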



