[Swift-commit] r3581 - trunk/tests/cdm-ps

noreply at svn.ci.uchicago.edu
Mon Aug 30 12:18:21 CDT 2010


Author: wozniak
Date: 2010-08-30 12:18:21 -0500 (Mon, 30 Aug 2010)
New Revision: 3581

Modified:
   trunk/tests/cdm-ps/swift.properties
Log:
These tests will use provider staging


Modified: trunk/tests/cdm-ps/swift.properties
===================================================================
--- trunk/tests/cdm-ps/swift.properties	2010-08-30 17:17:51 UTC (rev 3580)
+++ trunk/tests/cdm-ps/swift.properties	2010-08-30 17:18:21 UTC (rev 3581)
@@ -4,10 +4,10 @@
 #
 # The host name of the submit machine is used by GRAM as a callback
 # address to report the status of submitted jobs. In general, Swift
-# can automatically detect the host name of the local machine. 
+# can automatically detect the host name of the local machine.
 # However, if the machine host name is improperly configured or if
 # it does not represent a valid DNS entry, certain services (such as
-# GRAM) will not be able to send job status notifications back to 
+# GRAM) will not be able to send job status notifications back to
 # the client. The value of this property can be an IP address.
 #
 # Format:
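
A minimal sketch of how this could look, assuming the property is named
hostname (the assignment line itself is outside this hunk) and using a
placeholder host:

    # hypothetical example; property name and value are assumptions
    hostname=submit.example.edu
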
@@ -20,7 +20,7 @@
 #
 # A TCP port range can be specified to restrict the ports on which GRAM
 # callback services are started. This is likely needed if your submit
-# host is behind a firewall, in which case the firewall should be 
+# host is behind a firewall, in which case the firewall should be
 # configured to allow incoming connections on ports in the range.
 #
 # Format:
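
A sketch of the restriction described above, assuming the property is named
tcp.port.range and takes a comma-separated range (neither appears in this
hunk); the ports are placeholders that would have to match the firewall rules:

    # hypothetical example; open these ports for incoming GRAM callbacks
    tcp.port.range=50000,50100
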
@@ -33,10 +33,10 @@
 # false	- means an error will be immediately reported and cause the
 # 		workflow to abort. At this time remote jobs that are already
 #		running will not be canceled
-# true	- means that Swift will try to do as much work as possible and 
+# true	- means that Swift will try to do as much work as possible and
 #		report all errors encountered at the end. However, "errors"
 #		here only applies to job execution errors. Certain errors
-#		that are related to the Swift implementation (should such 
+#		that are related to the Swift implementation (should such
 #		errors occur) will still be reported eagerly.
 #
 # Default: false
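
A sketch of enabling the lazy behaviour described above, assuming the property
is named lazy.errors (the assignment line is outside this hunk):

    # hypothetical example; collect job execution errors, report them at the end
    lazy.errors=true
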
@@ -46,7 +46,7 @@
 #
 # What algorithm to use for caching of remote files. LRU (as in what
 # files to purge) is the only implementation right now. One can set
-# a target size (in bytes) for a host by using the swift:storagesize 
+# a target size (in bytes) for a host by using the swift:storagesize
 # profile for a host in sites.xml
 #
 # Default: LRU
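
A sketch combining the two settings mentioned above; caching.algorithm as a
property name is an assumption, while swift:storagesize is the sites.xml
profile key named in the comment:

    # hypothetical example; LRU is the only implementation mentioned above
    caching.algorithm=LRU
    # the per-site target size in bytes is set with the swift:storagesize
    # profile on the relevant host entry in sites.xml, not in this file
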
@@ -56,7 +56,7 @@
 #
 # true       - generate a provenance graph in .dot format (Swift will
 #			 choose a random file name)
-# false      - do not generate a provenance graph 
+# false      - do not generate a provenance graph
 # <filename> - generate a provenance graph in the given file name
 #
 # Default: false
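
A sketch of the filename form described above, assuming the property is named
pgraph (not shown in this hunk); the file name is a placeholder:

    # hypothetical example; write the provenance graph to a chosen .dot file
    pgraph=provenance.dot
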
@@ -65,7 +65,7 @@
 
 
 #
-# graph properties for the provenance graph (.dot specific) 
+# graph properties for the provenance graph (.dot specific)
 #
 # Default: splines="compound", rankdir="TB"
 #
@@ -73,25 +73,25 @@
 
 
 #
-# node properties for the provenance graph (.dot specific) 
+# node properties for the provenance graph (.dot specific)
 #
 # Default: color="seagreen", style="filled"
 #
 pgraph.node.options=color="seagreen", style="filled"
 
 #
-# true	- clustering of small jobs is enabled. Clustering works in the 
+# true	- clustering of small jobs is enabled. Clustering works in the
 #       following way: If a job is clusterable (meaning that it has the
 #       GLOBUS::maxwalltime profile specified in tc.data and its value
 #       is less than the value of the "clustering.min.time" property) it will
-#       be put in a clustering queue. The queue is processed at intervals 
+#       be put in a clustering queue. The queue is processed at intervals
 #       specified by the "clustering.queue.delay" property. The processing
 #       of the clustering queue consists of selecting compatible jobs and
 #		grouping them in clusters whose max wall time does not exceed twice
-#       the value of the "clustering.min.time" property. Two or more jobs are 
+#       the value of the "clustering.min.time" property. Two or more jobs are
 #       considered compatible if they share the same site and do not have
 #       conflicting profiles (e.g. different values for the same environment
-#       variable). 
+#       variable).
 # false	- clustering of small jobs is disabled.
 #
 # Default: false
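
A sketch of turning clustering on as described above. clustering.min.time and
clustering.queue.delay are the names quoted in the comment; clustering.enabled
and all three values are assumptions used only for illustration:

    # hypothetical example; values are placeholders, not recommendations
    clustering.enabled=true
    clustering.min.time=60
    clustering.queue.delay=4
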
@@ -123,7 +123,7 @@
 # true  - use Kickstart. If a job is scheduled on a site that does not have
 #       Kickstart installed, that job will fail.
 # maybe - Use Kickstart if installed (i.e. the entry is present in the sites
-#       file) 
+#       file)
 #
 # Default: maybe
 #
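
A sketch of the "maybe" setting described above, assuming the property is
named kickstart.enabled (the name is not visible in this hunk):

    # hypothetical example; use Kickstart only where the sites file provides it
    kickstart.enabled=maybe
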
@@ -164,9 +164,9 @@
 # throttle only limits the number of concurrent tasks (jobs) that are being
 # sent to sites, not the total number of concurrent jobs that can be run.
 # The submission stage in GRAM is one of the most CPU expensive stages (due
-# mostly to the mutual authentication and delegation). Having too many 
+# mostly to the mutual authentication and delegation). Having too many
 # concurrent submissions can overload either or both the submit host CPU
-# and the remote host/head node causing degraded performance.     
+# and the remote host/head node, causing degraded performance.
 #
 # Default: 4
 #
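
A sketch using the stated default of 4, assuming the property is named
throttle.submit (the assignment line is outside this hunk):

    # hypothetical example; at most 4 jobs in the submission stage at any time
    throttle.submit=4
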
@@ -176,7 +176,7 @@
 
 #
 # Limits the number of concurrent submissions for any of the sites Swift will
-# try to send jobs to. In other words it guarantees that no more than the 
+# try to send jobs to. In other words, it guarantees that, for any single site,
 # no more jobs than the value of this throttle will be concurrently in a state
 # of being submitted.
 #
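
A sketch of this per-site limit, assuming the property is named
throttle.host.submit (both the name and the value are assumptions):

    # hypothetical example; at most 2 jobs being submitted to any single site
    throttle.host.submit=2
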
@@ -192,7 +192,7 @@
 # is assigned a score (initially 1), which can increase or decrease based
 # on whether the site yields successful or faulty job runs. The score for a
 # site can take values in the (0.1, 100) interval. The number of allowed jobs
-# is calculated using the following formula: 
+# is calculated using the following formula:
 # 	2 + score*throttle.score.job.factor
 # This means a site will always be allowed at least two concurrent jobs and
 # at most 2 + 100*throttle.score.job.factor. With a default of 4 this means
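
Working the quoted formula through with the default factor of 4: a site at the
initial score of 1 is allowed 2 + 1*4 = 6 concurrent jobs, a site near the
minimum score of 0.1 stays close to the floor of 2, and a site at the maximum
score of 100 is allowed 2 + 100*4 = 402. A sketch of the setting itself
(throttle.score.job.factor is named in the comment; the assignment line is
outside this hunk):

    # allowed jobs per site = 2 + score * throttle.score.job.factor
    throttle.score.job.factor=4
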
@@ -220,11 +220,11 @@
 # Limits the total number of concurrent file operations that can happen at any
 # given time. File operations (like transfers) require an exclusive connection
 # to a site. These connections can be expensive to establish. A large number
-# of concurrent file operations may cause Swift to attempt to establish many 
+# of concurrent file operations may cause Swift to attempt to establish many
 # such expensive connections to various sites. Limiting the number of concurrent
 # file operations causes Swift to use a small number of cached connections and
-# achieve better overall performance. 
-# 
+# achieve better overall performance.
+#
 # Default: 8
 #
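
A sketch using the stated default of 8, assuming the property is named
throttle.file.operations (not shown in this hunk):

    # hypothetical example; at most 8 concurrent file operations at any time
    throttle.file.operations=8
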
 
@@ -240,7 +240,7 @@
 
 sitedir.keep=false
 
-# number of time a job will be retried if it fails (giving a maximum of 
+# number of times a job will be retried if it fails (giving a maximum of
 # 1 + execution.retries attempts at execution)
 #
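
A sketch of the retry setting; execution.retries is the name used in the
comment, and the value here is only an illustration:

    # hypothetical example; 2 retries, i.e. at most 1 + 2 = 3 attempts per job
    execution.retries=2
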
 
@@ -256,7 +256,7 @@
 replication.enabled=false
 
 # If replication is enabled, this value specifies the minimum time, in seconds,
-# a job needs to be queued in a batch queue in order to be considered for 
+# a job needs to be queued in a batch queue in order to be considered for
 # replication
 #
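
A sketch of the pair of settings, with replication.enabled taken from the line
shown above; the name replication.min.queue.time and the value are assumptions
used for illustration:

    # hypothetical example; replicate a job only after 60 seconds in the queue
    replication.enabled=true
    replication.min.queue.time=60
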
 
@@ -271,7 +271,7 @@
 #
 # The IP address of the submit machine is used by GRAM as a callback
 # address to report the status of submitted jobs. In general, Swift
-# can automatically detect the IP address of the local machine. 
+# can automatically detect the IP address of the local machine.
 # However, if the machine has more than one network interface, Swift
 # will pick the first one, which may not be the right choice. It is
 # recommended that this property is set properly before attempting to
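
A sketch analogous to the hostname case earlier, assuming the property is
named ip.address (not shown in this hunk); the address is a placeholder for
the interface GRAM callbacks should use:

    # hypothetical example; pin callbacks to a specific local interface
    ip.address=192.0.2.10
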
@@ -312,7 +312,7 @@
 
 #
 # Limits the number of concurrent iterations that each foreach statement
-# can have at one time. This conserves memory for swift programs that 
+# can have at one time. This conserves memory for swift programs that
 # have large numbers of iterations (which would otherwise all be executed
 # in parallel).
 #
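
A sketch of the limit described above, assuming the property is named
foreach.max.threads (the name does not appear in this hunk); the value is a
placeholder:

    # hypothetical example; at most 1024 concurrent iterations per foreach
    foreach.max.threads=1024
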
@@ -327,11 +327,11 @@
 
 provenance.log=false
 
-# Controls whether file staging is done by swift or by the execution 
+# Controls whether file staging is done by swift or by the execution
 # provider. If set to false, the standard swift staging mechanism is
-# used. If set to true, swift does not stage files. Instead, the 
+# used. If set to true, swift does not stage files. Instead, the
 # execution provider is instructed to stage files in and out.
-# 
+#
 # Provider staging is experimental.
 #
 # When enabled, and when coasters are used as an execution provider,
@@ -339,14 +339,14 @@
 # using the swift:stagingMethod site profile in sites.xml. The
 # following is a list of accepted mechanisms:
 #
-# * file:  Staging is done from a filesystem accessible to the 
-#          coaster service (typically running on the head node) 
+# * file:  Staging is done from a filesystem accessible to the
+#          coaster service (typically running on the head node)
 # * proxy: Staging is done from a filesystem accessible to the
 #          client machine that swift is running on, and is proxied
 #          through the coaster service
 # * sfs:   (short for "shared filesystem") Staging is done by
 #          copying files to and from a filesystem accessible
-#          by the compute node (such as an NFS or GPFS mount).   
- 
+#          by the compute node (such as an NFS or GPFS mount).
 
-use.provider.staging=false
\ No newline at end of file
+
+use.provider.staging=true
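
Tying the change in this revision to the staging methods listed above, a
sketch of a provider staging setup; use.provider.staging is the property
changed here, and swift:stagingMethod is the sites.xml profile named in the
comment:

    # let the execution provider (e.g. coasters) move files in and out
    use.provider.staging=true
    # the mechanism is chosen per site with the swift:stagingMethod profile
    # in sites.xml, e.g. file, proxy, or sfs as listed above
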



