[Swift-commit] r6303 - trunk/docs/userguide

ketan at ci.uchicago.edu
Thu Feb 21 19:36:09 CST 2013


Author: ketan
Date: 2013-02-21 19:36:08 -0600 (Thu, 21 Feb 2013)
New Revision: 6303

Modified:
   trunk/docs/userguide/configuration_properties
   trunk/docs/userguide/kickstart
   trunk/docs/userguide/profiles
Log:
cleanup and format

Modified: trunk/docs/userguide/configuration_properties
===================================================================
--- trunk/docs/userguide/configuration_properties	2013-02-21 18:53:23 UTC (rev 6302)
+++ trunk/docs/userguide/configuration_properties	2013-02-22 01:36:08 UTC (rev 6303)
@@ -416,6 +416,13 @@
     directory. In such cases, relative mode must be used. (since Swift
     0.9)
 
+use.wrapper.staging
+
+    Valid values: true, false
+    Default value: false
+    
+    Determines whether the Swift wrapper should perform file staging.
+
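+    For example, to enable wrapper staging, add the following line to the
+    Swift configuration file (swift.properties); this is a minimal sketch:
+
+        use.wrapper.staging=true
+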
 wrapper.parameter.mode
 
     Controls how Swift will supply parameters to the remote wrapper

Modified: trunk/docs/userguide/kickstart
===================================================================
--- trunk/docs/userguide/kickstart	2013-02-21 18:53:23 UTC (rev 6302)
+++ trunk/docs/userguide/kickstart	2013-02-22 01:36:08 UTC (rev 6303)
@@ -27,5 +27,5 @@
 </pool>
 ----
 
-There are various kickstat.* properties, which have sensible default
+There are various kickstart.* properties, which have sensible default
 values. These are documented in the properties section.

Modified: trunk/docs/userguide/profiles
===================================================================
--- trunk/docs/userguide/profiles	2013-02-21 18:53:23 UTC (rev 6302)
+++ trunk/docs/userguide/profiles	2013-02-22 01:36:08 UTC (rev 6303)
@@ -39,35 +39,47 @@
 
 swift namespace
 ~~~~~~~~~~~~~~~
-storagesize limits the amount of space that will be used on the remote
-site for temporary files. When more than that amount of space is used,
-the remote temporary file cache will be cleared using the algorithm
-specified in the caching.algorithm property.
 
-wrapperInterpreter - The wrapper interpreter indicates the command
-(executable) to be used to run the Swift wrapper script. The default is
-"/bin/bash" on Unix sites and "cscript.exe" on Windows sites.
+storagesize
 
-wrapperInterpreterOptions - Allows specifying additional options to the
-executable used to run the Swift wrapper. The defaults are no options on
-Unix sites and "Nologo" on Windows sites.
+    Limits the amount of space that will be used on the remote site for temporary
+    files. When more than that amount of space is used, the remote temporary file
+    cache will be cleared using the algorithm specified in the caching.algorithm
+    property.
 
-wrapperScript - Specifies the name of the wrapper script to be used on a
-site. The defaults are "_swiftwrap" on Unix sites and "_swiftwrap.vbs"
-on Windows sites. If you specify a custom wrapper script, it must be
-present in the "libexec" directory of the Swift installation.
+wrapperInterpreter 
 
-cleanupCommand Indicates the command to be run at the end of a Swift
-run to clean up the run directories on a remote site. Defaults are
-"/bin/rm" on Unix sites and "cmd.exe" on Windows sites
+    The wrapper interpreter indicates the command (executable) to be used to run
+    the Swift wrapper script. The default is "/bin/bash" on Unix sites and
+    "cscript.exe" on Windows sites.
 
-cleanupCommandOptions Specifies the options to be passed to the
-cleanup command above. The options are passed in the argument list to
-the cleanup command. After the options, the last argument is the
-directory to be deleted. The default on Unix sites is "-rf". The default
-on Windows sites is ["/C", "del", "/Q"].
+wrapperInterpreterOptions
 
+    Allows specifying additional options to the executable used to run the Swift
+    wrapper. The defaults are no options on Unix sites and "Nologo" on Windows
+    sites.
 
+wrapperScript
+
+    Specifies the name of the wrapper script to be used on a site. The defaults are
+    "_swiftwrap" on Unix sites and "_swiftwrap.vbs" on Windows sites. If you
+    specify a custom wrapper script, it must be present in the "libexec" directory
+    of the Swift installation.
+
+cleanupCommand
+
+    Indicates the command to be run at the end of a Swift run to clean up the run
+    directories on a remote site. Defaults are "/bin/rm" on Unix sites and
+    "cmd.exe" on Windows sites
+
+cleanupCommandOptions
+
+    Specifies the options to be passed to the cleanup command above. The options
+    are passed in the argument list to the cleanup command.  After the options, the
+    last argument is the directory to be deleted. The default on Unix sites is
+    "-rf". The default on Windows sites is ["/C", "del", "/Q"].
+
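+Example (a minimal sketch; the values shown are illustrative, not
+recommendations, and go in a site's entry in the sites file):
+----
+<profile namespace="swift" key="storagesize">20000000000</profile>
+<profile namespace="swift" key="wrapperInterpreter">/bin/bash</profile>
+----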
+
 Globus namespace
 ~~~~~~~~~~~~~~~~
 maxwalltime specifies a walltime limit for each job, in minutes.
@@ -80,29 +92,28 @@
 
 Example:
 ----
-localhost	echo	/bin/echo	INSTALLED	INTEL32::LINUX	GLOBUS::maxwalltime="00:20:00"
+localhost echo /bin/echo INSTALLED INTEL32::LINUX GLOBUS::maxwalltime="00:20:00"
 ----
 
-When replication is enabled (see replication), then
-walltime will also be enforced at the Swift client side: when a job has
-been active for more than twice the maxwalltime, Swift will kill the job
-and regard it as failed.
+When replication is enabled (see replication), then walltime will also be
+enforced at the Swift client side: when a job has been active for more than
+twice the maxwalltime, Swift will kill the job and regard it as failed.
 
-When clustering is used, maxwalltime will be used to select which jobs
-will be clustered together. More information on this is available in the
-clustering section.
+When clustering is used, maxwalltime will be used to select which jobs will be
+clustered together. More information on this is available in the clustering
+section.
 
-When coasters as used, maxwalltime influences the default coaster
-worker maxwalltime, and which jobs will be sent to which workers. More
-information on this is available in the coasters section.
+When coasters are used, maxwalltime influences the default coaster worker
+maxwalltime, and which jobs will be sent to which workers. More information on
+this is available in the coasters section.
 
-queue is used by the PBS, GRAM2 and GRAM4 providers. This profile
-entry specifies which queue jobs will be submitted to. The valid queue
-names are site-specific.
+queue is used by the PBS, GRAM2 and GRAM4 providers. This profile entry
+specifies which queue jobs will be submitted to. The valid queue names are
+site-specific.
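+
+Example (the queue name shown is hypothetical; valid names are site-specific):
+----
+<profile namespace="globus" key="queue">fast</profile>
+----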
 
-host_types specifies the types of host that are permissible for a job
-to run on. The valid values are site-specific. This profile entry is
-used by the GRAM2 and GRAM4 providers.
+host_types specifies the types of host that are permissible for a job to run
+on. The valid values are site-specific. This profile entry is used by the GRAM2
+and GRAM4 providers.
 
 condor_requirements allows a requirements string to be specified when
 Condor is used as an LRM behind GRAM2. Example:
@@ -111,113 +122,144 @@
 <profile namespace="globus" key="condor_requirements">Arch == "X86_64" || Arch="INTEL"</profile>
 ----
 
-slots When using coasters, this parameter specifies the
-maximum number of jobs/blocks that the coaster scheduler will have
-running at any given time. The default is 20.
+slots
 
-jobsPerNode - This parameter determines how many coaster workers are
-started one each compute node. The default value is 1.
+    When using coasters, this parameter specifies the maximum number of jobs/blocks
+    that the coaster scheduler will have running at any given time. The default is
+    20.
 
-nodeGranularity - When allocating a coaster worker block, this parameter
-restricts the number of nodes in a block to a multiple of this value.
-The total number of workers will then be a multiple of workersPerNode *
-nodeGranularity. The default value is 1.
+jobsPerNode
 
-allocationStepSize - Each time the coaster block scheduler computes a
-schedule, it will attempt to allocate a number of slots from the number
-of available slots (limited using the above slots profile). This
-parameter specifies the maximum fraction of slots that are allocated in
-one schedule. Default is 0.1.
+    This parameter determines how many coaster workers are started on each compute
+    node. The default value is 1.
 
-lowOverallocation - Overallocation is a function of the walltime of a
-job which determines how long (time-wise) a worker job will be. For
-example, if a number of 10s jobs are submitted to the coaster service,
-and the overallocation for 10s jobs is 10, the coaster scheduler will
-attempt to start worker jobs that have a walltime of 100s. The
-overallocation is controlled by manipulating the end-points of an
-overallocation function. The low endpoint, specified by this parameter,
-is the overallocation for a 1s job. The high endpoint is the
-overallocation for a (theoretical) job of infinite length. The
-overallocation for job sizes in the [1s, +inf) interval is determined
-using an exponential decay function: overallocation(walltime) = walltime
-* (lowOverallocation - highOverallocation) * exp(-walltime *
-overallocationDecayFactor) + highOverallocation The default value of
-lowOverallocation is 10.
+nodeGranularity
 
-highOverallocation - The high overallocation endpoint (as described
-above). Default: 1
+    When allocating a coaster worker block, this parameter restricts the number of
+    nodes in a block to a multiple of this value.  The total number of workers will
+    then be a multiple of jobsPerNode * nodeGranularity. The default value is 1.
 
-overallocationDecayFactor - The decay factor for the overallocation
-curve. Default 0.001 (1e-3).
+allocationStepSize
 
-spread - When a large number of jobs is submitted to the a coaster
-service, the work is divided into blocks. This parameter allows a rough
-control of the relative sizes of those blocks. A value of 0 indicates
-that all work should be divided equally between the blocks (and blocks
-will therefore have equal sizes). A value of 1 indicates the largest
-possible spread. The existence of the spread parameter is based on the
-assumption that smaller overall jobs will generally spend less time in
-the queue than larger jobs. By submitting blocks of different sizes,
-submitted jobs may be finished quicker by smaller blocks. Default: 0.9.
+    Each time the coaster block scheduler computes a schedule, it will attempt to
+    allocate a number of slots from the number of available slots (limited using
+    the above slots profile). This parameter specifies the maximum fraction of
+    slots that are allocated in one schedule. Default is 0.1.
 
-reserve - Reserve time is a time in the allocation of a worker that sits
-at the end of the worker time and is useable only for critical
-operations. For example, a job will not be submitted to a worker if it
-overlaps its reserve time, but a job that (due to inaccurate walltime
-specification) runs into the reserve time will not be killed (note that
-once the worker exceeds its walltime, the queuing system will kill the
-job anyway). Default 10 (s).
+lowOverallocation
 
-maxnodes - Determines the maximum number of nodes that can be allocated
-in one coaster block. Default: unlimited.
+    Overallocation is a function of the walltime of a job which determines how
+    long (time-wise) a worker job will be. For example, if a number of 10s jobs
+    are submitted to the coaster service, and the overallocation for 10s jobs
+    is 10, the coaster scheduler will attempt to start worker jobs that have a
+    walltime of 100s. The overallocation is controlled by manipulating the
+    endpoints of an overallocation function. The low endpoint, specified by
+    this parameter, is the overallocation for a 1s job. The high endpoint is
+    the overallocation for a (theoretical) job of infinite length. The
+    overallocation for job sizes in the [1s, +inf) interval is determined using
+    an exponential decay function:
+    
+    overallocation(walltime) = walltime * (lowOverallocation -
+    highOverallocation) * exp(-walltime * overallocationDecayFactor) +
+    highOverallocation
 
-maxtime - Indicates the maximum walltime, in seconds, that a coaster
-block can have.
-Default: unlimited.
+    The default value of lowOverallocation is 10.
 
-remoteMonitorEnabled - If set to "true", the client side will get a
-Swing window showing, graphically, the state of the coaster scheduler
-(blocks, jobs, etc.). Default: false
+highOverallocation
 
-internalhostname - If the head node has multiple network interfaces,
-only one of which is visible from the worker nodes. The choice of
-which interface is the one that worker nodes can connect to is a
-matter of the particular cluster. This must be set in the your
-sites file to clarify to the workers which exact interface on the
-head node they are to try to connect to.
+    The high overallocation endpoint (as described above). Default: 1
 
+overallocationDecayFactor
+
+    The decay factor for the overallocation curve. Default: 0.001 (1e-3).
+
+spread
+
+    When a large number of jobs are submitted to a coaster service, the work is
+    divided into blocks. This parameter allows a rough control of the relative
+    sizes of those blocks. A value of 0 indicates that all work should be divided
+    equally between the blocks (and blocks will therefore have equal sizes). A
+    value of 1 indicates the largest possible spread. The existence of the spread
+    parameter is based on the assumption that smaller overall jobs will generally
+    spend less time in the queue than larger jobs. By submitting blocks of
+    different sizes, submitted jobs may be finished quicker by smaller blocks.
+    Default: 0.9.
+
+reserve
+
+    Reserve time is a period at the end of a worker's allocation that is
+    usable only for critical operations. For example, a job will not be
+    submitted to a worker if it overlaps the worker's reserve time, but a job
+    that (due to an inaccurate walltime specification) runs into the reserve
+    time will not be killed (note that once the worker exceeds its walltime,
+    the queuing system will kill the job anyway). Default: 10 seconds.
+
+maxnodes
+
+    Determines the maximum number of nodes that can be allocated
+    in one coaster block. Default: unlimited.
+
+maxtime
+
+    Indicates the maximum walltime, in seconds, that a coaster block can
+    have. Default: unlimited.
+
+remoteMonitorEnabled
+
+    If set to "true", the client side will get a Swing window showing, graphically,
+    the state of the coaster scheduler (blocks, jobs, etc.). Default: false
+
+internalhostname
+
+    Set this key if the head node has multiple network interfaces, only one
+    of which is visible from the worker nodes. Which interface the worker
+    nodes can connect to depends on the particular cluster. Setting this key
+    in your sites file tells the workers which interface on the head node to
+    connect to.
+
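+Example (a minimal sketch of coaster settings for a hypothetical cluster; the
+values shown are illustrative, not recommendations):
+----
+<profile namespace="globus" key="slots">20</profile>
+<profile namespace="globus" key="jobsPerNode">8</profile>
+<profile namespace="globus" key="nodeGranularity">1</profile>
+<profile namespace="globus" key="maxtime">3600</profile>
+----
+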
 env namespace
 ~~~~~~~~~~~~~
+
 Profile keys set in the env namespace will be set in the unix
 environment of the executed job. Some environment variables influence
 the worker-side behaviour of Swift:
 
-PATHPREFIX - set in env namespace profiles. This path is prefixed onto
-the start of the PATH when jobs are executed. It can be more useful
-than setting the PATH environment variable directly, because setting
-PATH will cause the execution site's default path to be lost.
+PATHPREFIX
 
-SWIFT_JOBDIR_PATH - set in env namespace profiles. If set, then Swift
-will use the path specified here as a worker-node local temporary
-directory to copy input files to before running a job. If unset, Swift
-will keep input files on the site-shared filesystem. In some cases,
-copying to a worker-node local directory can be much faster than having
-applications access the site-shared filesystem directly.
+    Set in env namespace profiles. This path is prefixed onto the start of the
+    PATH when jobs are executed. It can be more useful than setting the PATH
+    environment variable directly, because setting PATH will cause the
+    execution site's default path to be lost.
 
-SWIFT_EXTRA_INFO - set in env namespace profiles. If set, then Swift
-will execute the command specified in SWIFT_EXTRA_INFO on execution
-sites immediately before each application execution, and will record the
-stdout of that command in the wrapper info log file for that job. This
-is intended to allow software version and other arbitrary information
-about the remote site to be gathered and returned to the submit side.
-(since Swift 0.9)
+SWIFT_JOBDIR_PATH 
 
-SWIFT_GEN_SCRIPTS - set in the env namespace profiles. This variable
-just needs to be set, it doesn't matter what it is set to. If set, then Swift
-will keep the script that was used to execute the job in the job directory.
-The script will be called run.sh and will have the command line that Swift
-tried to execute with.
+    Set in env namespace profiles. If set, then Swift will use the path
+    specified here as a worker-node local temporary directory to copy input
+    files to before running a job. If unset, Swift will keep input files on the
+    site-shared filesystem. In some cases, copying to a worker-node local
+    directory can be much faster than having applications access the
+    site-shared filesystem directly.
 
+SWIFT_EXTRA_INFO 
+
+    Set in env namespace profiles. If set, then Swift will execute the command
+    specified in SWIFT_EXTRA_INFO on execution sites immediately before each
+    application execution, and will record the stdout of that command in the
+    wrapper info log file for that job. This is intended to allow software
+    version and other arbitrary information about the remote site to be
+    gathered and returned to the submit side.  (since Swift 0.9)
+
+SWIFT_GEN_SCRIPTS 
+
+    Set in the env namespace profiles. This variable just needs to be set;
+    it doesn't matter what it is set to. If set, Swift will keep the script
+    that was used to execute the job in the job directory. The script will
+    be called run.sh and will contain the command line that Swift tried to
+    execute.
+
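+Example (a minimal sketch; the path and command shown are hypothetical):
+----
+<profile namespace="env" key="PATHPREFIX">/home/user/tools/bin</profile>
+<profile namespace="env" key="SWIFT_EXTRA_INFO">/bin/uname -a</profile>
+----
+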
 === Dynamic profiles
 
 To set a profile setting based on the value of a Swift variable, you



