[Swift-commit] r7923 - in branches/release-0.95/tests: . groups random_fail sites/beagle sites/local-coasters sites/mcs sites/osgconnect sites/raven stress stress/IO/uc3 stress/apps/modis_local stress/apps/modis_midway stress/apps/modis_uc3 stress/local_cluster

yadunandb at ci.uchicago.edu
Sat Jun 14 16:07:39 CDT 2014


Author: yadunandb
Date: 2014-06-14 16:07:38 -0500 (Sat, 14 Jun 2014)
New Revision: 7923

Added:
   branches/release-0.95/tests/random_fail/
   branches/release-0.95/tests/random_fail/Bug_info
   branches/release-0.95/tests/random_fail/rand_fail_Bug1067.check.sh
   branches/release-0.95/tests/random_fail/rand_fail_Bug1067.setup.sh
   branches/release-0.95/tests/random_fail/rand_fail_Bug1067.swift
   branches/release-0.95/tests/random_fail/run
   branches/release-0.95/tests/random_fail/sites.template.xml
   branches/release-0.95/tests/random_fail/swift.properties
   branches/release-0.95/tests/random_fail/tc.template.data
   branches/release-0.95/tests/random_fail/title.txt
   branches/release-0.95/tests/sites/local-coasters/sites.backup
   branches/release-0.95/tests/sites/raven/sites.template.backup
   branches/release-0.95/tests/stress/local_cluster/
   branches/release-0.95/tests/stress/local_cluster/combiner.sh
   branches/release-0.95/tests/stress/local_cluster/run
   branches/release-0.95/tests/stress/local_cluster/simple_MapRed.args
   branches/release-0.95/tests/stress/local_cluster/simple_MapRed.check.sh
   branches/release-0.95/tests/stress/local_cluster/simple_MapRed.setup.sh
   branches/release-0.95/tests/stress/local_cluster/simple_MapRed.stdout
   branches/release-0.95/tests/stress/local_cluster/simple_MapRed.swift
   branches/release-0.95/tests/stress/local_cluster/simple_MapRed.timeout
   branches/release-0.95/tests/stress/local_cluster/sites.template.xml
   branches/release-0.95/tests/stress/local_cluster/sites.xml
   branches/release-0.95/tests/stress/local_cluster/swift.properties
   branches/release-0.95/tests/stress/local_cluster/teragen_wrap.sh
   branches/release-0.95/tests/stress/local_cluster/title.txt
Removed:
   branches/release-0.95/tests/sites/beagle/sanity.setup.sh
   branches/release-0.95/tests/sites/beagle/sanity.source.sh
   branches/release-0.95/tests/sites/beagle/sanity.swift
   branches/release-0.95/tests/sites/beagle/sanity.timeout
   branches/release-0.95/tests/sites/mcs/sanity.setup.sh
   branches/release-0.95/tests/sites/mcs/sanity.source.sh
   branches/release-0.95/tests/sites/mcs/sanity.swift
   branches/release-0.95/tests/sites/mcs/sanity.timeout
Modified:
   branches/release-0.95/tests/groups/group-daily-remote.sh
   branches/release-0.95/tests/sites/beagle/sites.template.xml
   branches/release-0.95/tests/sites/osgconnect/sites.template.xml
   branches/release-0.95/tests/sites/raven/sites.template.xml
   branches/release-0.95/tests/stress/IO/uc3/sites.template.xml
   branches/release-0.95/tests/stress/apps/modis_local/sites.template.xml
   branches/release-0.95/tests/stress/apps/modis_midway/sites.template.xml
   branches/release-0.95/tests/stress/apps/modis_uc3/modis.timeout
   branches/release-0.95/tests/stress/apps/modis_uc3/sites.template.xml
Log:

Fixes for the test suite.
Fixed modis, local_cluster, random_fail, and several other tests
that were failing because remote site quotas were being hit and
no housekeeping was being done. I should probably add a
housekeeping job to the tail end of the test suite.

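A minimal sketch of what such a tail-end housekeeping job could look like, based on the cleanup that the run scripts in this commit already do locally (a hypothetical script, not part of r7923; the work-directory path is an assumption, and it would need to be run against each remote site's work area):

    #!/bin/bash
    # housekeeping.sh -- hypothetical cleanup sketch, not part of this commit.
    # Removes leftover Swift run directories, compiled artifacts and staged
    # output that accumulate in a work area and eat into site quotas.
    set -u
    WORKDIR="${1:-$HOME/swiftwork}"       # assumed work area; pass the real one
    # Run directories look like <test>-YYYYMMDD-HHMM-<id>, e.g.
    # rand_fail_Bug1067-20130820-1749-6n7zbux4
    rm -rf "$WORKDIR"/*-20[0-9][0-9][0-9][0-9][0-9][0-9]-* \
           "$WORKDIR"/_concurrent* \
           "$WORKDIR"/failed*
    # Compiled .swiftx/.kml files and logs left next to the tests
    find "$WORKDIR" -maxdepth 1 \( -name '*.swiftx' -o -name '*.kml' -o -name '*.log' \) -delete
    exit 0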


Modified: branches/release-0.95/tests/groups/group-daily-remote.sh
===================================================================
--- branches/release-0.95/tests/groups/group-daily-remote.sh	2014-06-14 02:57:18 UTC (rev 7922)
+++ branches/release-0.95/tests/groups/group-daily-remote.sh	2014-06-14 21:07:38 UTC (rev 7923)
@@ -25,15 +25,19 @@
             $TESTDIR/functions \
 
             # Site testing test-group
+            $TESTDIR/sites/local \
+            $TESTDIR/sites/local-coasters \
+            $TESTDIR/sites/multiple_coaster_pools \
+
             $TESTDIR/sites/beagle \
             $TESTDIR/sites/mcs    \
             $TESTDIR/sites/midway \
-            $TESTDIR/sites/uc3    \
+            $TESTDIR/sites/osgconnect    \
 	        # Frisbee will fail due to Bug 1030
             $TESTDIR/sites/mac-frisbee  \
             $TESTDIR/sites/blues  \
             $TESTDIR/sites/fusion \
-            #$TESTDIR/sites/raven  \
+            $TESTDIR/sites/raven  \
 
  	        # Remote-cluster IO tests
 	        $TESTDIR/stress/IO/beagle \

Added: branches/release-0.95/tests/random_fail/Bug_info
===================================================================
--- branches/release-0.95/tests/random_fail/Bug_info	                        (rev 0)
+++ branches/release-0.95/tests/random_fail/Bug_info	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,22 @@
+Regression tests for Bug 1067
+| Link -> http://bugzilla.mcs.anl.gov/swift/show_bug.cgi?id=1067
+Swift 0.94 swift-r6888 cog-r3762
+
+Exception in sh:
+    Arguments: [randfail.sh, 50, 0]
+    Host: local
+    Directory: rand_fail_Bug1067-20130820-1749-6n7zbux4/jobs/6/sh-6abm13el
+    stderr.txt: Failing 11 < 50
+    stdout.txt: 
+Caused by: Application /bin/bash failed with an exit code of 255
+
+Exception in sh:
+    Arguments: [randfail.sh, 50, 0]
+    Host: local
+    Directory: rand_fail_Bug1067-20130820-1749-6n7zbux4/jobs/8/sh-8abm13el
+    stderr.txt: Failing 19 < 50
+    stdout.txt: 
+Caused by: Application /bin/bash failed with an exit code of 255
+
+Execution failed:
+    Got one name (derr) and 0 values: []
\ No newline at end of file

Added: branches/release-0.95/tests/random_fail/rand_fail_Bug1067.check.sh
===================================================================
--- branches/release-0.95/tests/random_fail/rand_fail_Bug1067.check.sh	                        (rev 0)
+++ branches/release-0.95/tests/random_fail/rand_fail_Bug1067.check.sh	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,17 @@
+#!/bin/bash
+
+if [ ! -f ${0%.check.sh}.stdout ]
+then
+    echo "${0%.check.sh}.stdout missing"
+    exit -1
+fi
+
+grep "Got one name (derr)" ${0%.check.sh}.stdout
+if [ "$?" == 0 ]
+then
+    echo "EXIT : REGRESSION FOUND!" >&2
+    exit -1
+else
+    echo "Test passed"
+fi
+exit 0
\ No newline at end of file


Property changes on: branches/release-0.95/tests/random_fail/rand_fail_Bug1067.check.sh
___________________________________________________________________
Added: svn:executable
   + *

Added: branches/release-0.95/tests/random_fail/rand_fail_Bug1067.setup.sh
===================================================================
--- branches/release-0.95/tests/random_fail/rand_fail_Bug1067.setup.sh	                        (rev 0)
+++ branches/release-0.95/tests/random_fail/rand_fail_Bug1067.setup.sh	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,22 @@
+#!/bin/bash
+
+# Setup script will just output the following file
+
+cat<<'EOF' > randfail.sh
+#!/bin/bash
+
+FAIL_PROBABILITY=$1
+DELAY=$2
+
+ITEM=$(($RANDOM%100))
+sleep $2
+
+if (( "$ITEM" <= "$FAIL_PROBABILITY" ))
+then
+    echo "Failing $ITEM < $FAIL_PROBABILITY" >&2
+    exit -1
+fi
+echo "Not failing $ITEM > $FAIL_PROBABILITY"
+exit 0
+EOF
+


Property changes on: branches/release-0.95/tests/random_fail/rand_fail_Bug1067.setup.sh
___________________________________________________________________
Added: svn:executable
   + *
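
For reference, the generated randfail.sh takes a failure probability (0-100) and a sleep delay in seconds, and exits 255 (the shell's rendering of exit -1) when its random draw falls at or below that probability. A quick standalone usage sketch, assuming the setup script above has been run in the current directory:

    ./rand_fail_Bug1067.setup.sh
    bash randfail.sh 50 0 ; echo "exit code: $?"   # exits 255 roughly half the time, 0 otherwise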

Added: branches/release-0.95/tests/random_fail/rand_fail_Bug1067.swift
===================================================================
--- branches/release-0.95/tests/random_fail/rand_fail_Bug1067.swift	                        (rev 0)
+++ branches/release-0.95/tests/random_fail/rand_fail_Bug1067.swift	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,34 @@
+type file;
+
+file script<"randfail.sh">;
+
+app (file ofile1, file ofile2) quicklyFailingApp(file script, int failchance,
+int delay)
+{
+  sh @script failchance delay stdout=@ofile1 stderr=@ofile2;
+}
+
+app (file ofile) someApp3(file ifile, file jfile, file kfile)
+{
+  sh "-c" @strcat("cat ", at filename(ifile)) stdout=@ofile;
+}
+
+app (file ofile) someApp(file ifile)
+{
+  sh "-c" @strcat("cat ", at filename(ifile)) stdout=@ofile;
+}
+
+app sleep (int sec)
+{
+  sh "-c" @strcat("sleep ",sec);
+}
+
+int sufficientlyLargeNumber = 100;
+
+file a[];
+foreach i in [0:sufficientlyLargeNumber] {
+  file f1<single_file_mapper; file=@strcat("failed1.",i,".out")>;
+  file f2<single_file_mapper; file=@strcat("failed2.",i,".out")>;
+  (f1,f2)  = quicklyFailingApp(script,50,0);
+  a[i] = someApp(f2);
+}

Added: branches/release-0.95/tests/random_fail/run
===================================================================
--- branches/release-0.95/tests/random_fail/run	                        (rev 0)
+++ branches/release-0.95/tests/random_fail/run	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,21 @@
+#!/bin/bash
+
+PATH=/scratch/midway/yadunand/swift-0.94RC2/cog/modules/swift/dist/swift-svn/bin:$PATH
+
+
+echo "Swift location: "; which swift
+echo "Swift version : "; swift -version
+
+rm rand_fail_Bug1067.stdout
+cat title.txt
+
+./rand_fail_Bug1067.setup.sh
+
+for i in `seq 1 10`
+do
+swift -tc.file tc.template.data -config swift.properties -sites.file sites.template.xml rand_fail_Bug1067.swift  | tee -a rand_fail_Bug1067.stdout
+
+rm -rf *{swiftx,kml} rand_fail_Bug1067-* _concurrent* failed*
+done
+
+./rand_fail_Bug1067.check.sh
\ No newline at end of file


Property changes on: branches/release-0.95/tests/random_fail/run
___________________________________________________________________
Added: svn:executable
   + *

Added: branches/release-0.95/tests/random_fail/sites.template.xml
===================================================================
--- branches/release-0.95/tests/random_fail/sites.template.xml	                        (rev 0)
+++ branches/release-0.95/tests/random_fail/sites.template.xml	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,20 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<config xmlns="http://www.ci.uchicago.edu/swift/SwiftSites">
+
+  <pool handle="midway">
+    <execution provider="coaster" jobmanager="local:local"/>
+    <profile namespace="globus" key="queue">sandyb</profile>
+    <profile namespace="globus" key="jobsPerNode">16</profile>
+    <profile namespace="globus" key="maxtime">36000</profile>
+    <profile namespace="globus" key="maxWalltime">00:10:00</profile>
+    <profile namespace="globus" key="highOverAllocation">100</profile>
+    <profile namespace="globus" key="lowOverAllocation">100</profile>
+    <profile namespace="globus" key="slots">4</profile>
+    <profile namespace="globus" key="maxNodes">1</profile>
+    <profile namespace="globus" key="nodeGranularity">1</profile>
+    <profile namespace="karajan" key="jobThrottle">.64</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+    <workdirectory>/tmp/MIDWAY_USERNAME</workdirectory>
+  </pool>
+
+</config>
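
As a rough sizing note (an interpretation of the settings above, not part of the commit): the coaster pool provides slots x maxNodes x jobsPerNode app slots, which lines up with the Karajan jobThrottle, since a throttle of t allows roughly t * 100 + 1 concurrent jobs:

    # sizing sketch (assumption): coaster capacity vs. Swift's job throttle
    SLOTS=4; MAXNODES=1; JOBSPERNODE=16
    echo "coaster app slots: $(( SLOTS * MAXNODES * JOBSPERNODE ))"   # 64
    # karajan jobThrottle .64 => .64 * 100 + 1 = 65 concurrent jobs from Swift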

Added: branches/release-0.95/tests/random_fail/swift.properties
===================================================================
--- branches/release-0.95/tests/random_fail/swift.properties	                        (rev 0)
+++ branches/release-0.95/tests/random_fail/swift.properties	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,11 @@
+wrapperlog.always.transfer=true
+sitedir.keep=true
+file.gc.enabled=false
+status.mode=provider
+
+execution.retries=5
+lazy.errors=true
+
+use.wrapper.staging=false
+use.provider.staging=false
+provider.staging.pin.swiftfiles=false

Added: branches/release-0.95/tests/random_fail/tc.template.data
===================================================================
--- branches/release-0.95/tests/random_fail/tc.template.data	                        (rev 0)
+++ branches/release-0.95/tests/random_fail/tc.template.data	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,2 @@
+local perl /usr/bin/perl null null null
+local sh   /bin/bash

Added: branches/release-0.95/tests/random_fail/title.txt
===================================================================
--- branches/release-0.95/tests/random_fail/title.txt	                        (rev 0)
+++ branches/release-0.95/tests/random_fail/title.txt	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,2 @@
+Regression tests for Bug 1067
+| Link -> http://bugzilla.mcs.anl.gov/swift/show_bug.cgi?id=1067

Deleted: branches/release-0.95/tests/sites/beagle/sanity.setup.sh
===================================================================
--- branches/release-0.95/tests/sites/beagle/sanity.setup.sh	2014-06-14 02:57:18 UTC (rev 7922)
+++ branches/release-0.95/tests/sites/beagle/sanity.setup.sh	2014-06-14 21:07:38 UTC (rev 7923)
@@ -1,15 +0,0 @@
-#!/bin/bash
-
-echo  "BEAGLE_USERNAME : $BEAGLE_USERNAME"
-echo  "MIDWAY_USERNAME : $MIDWAY_USERNAME"
-echo  "MCS_USERNAME    : $MCS_USERNAME"
-echo  "UC3_USERNAME    : $UC3_USERNAME"
-USERNAME=$BEAGLE_USERNAME
-
-if [[ -z $USERNAME ]] 
-then
-    echo "Remote username not provided. Skipping sites configs"
-else
-    ls *xml
-    cat sites.xml  | sed "s/{env.USER}/$USERNAME/" > tmp && mv tmp sites.xml
-fi

Deleted: branches/release-0.95/tests/sites/beagle/sanity.source.sh
===================================================================
--- branches/release-0.95/tests/sites/beagle/sanity.source.sh	2014-06-14 02:57:18 UTC (rev 7922)
+++ branches/release-0.95/tests/sites/beagle/sanity.source.sh	2014-06-14 21:07:38 UTC (rev 7923)
@@ -1,6 +0,0 @@
-#!/bin/bash
-if [ "$HOSTNAME" == "midway001" ]
-then
-   export GLOBUS_HOSTNAME=swift.rcc.uchicago.edu
-   export GLOBUS_TCP_PORT_RANGE=50000,51000
-fi;

Deleted: branches/release-0.95/tests/sites/beagle/sanity.swift
===================================================================
--- branches/release-0.95/tests/sites/beagle/sanity.swift	2014-06-14 02:57:18 UTC (rev 7922)
+++ branches/release-0.95/tests/sites/beagle/sanity.swift	2014-06-14 21:07:38 UTC (rev 7923)
@@ -1,11 +0,0 @@
-type file;
-
-app (file out, file err) remote_driver ()
-{
-    date stdout=@filename(out) stderr=@filename(err);
-}
-
-file driver_out <simple_mapper; prefix="sanity", suffix=".out">;
-file driver_err <simple_mapper; prefix="sanity", suffix=".err">;
-
-(driver_out, driver_err) = remote_driver();
\ No newline at end of file

Deleted: branches/release-0.95/tests/sites/beagle/sanity.timeout
===================================================================
--- branches/release-0.95/tests/sites/beagle/sanity.timeout	2014-06-14 02:57:18 UTC (rev 7922)
+++ branches/release-0.95/tests/sites/beagle/sanity.timeout	2014-06-14 21:07:38 UTC (rev 7923)
@@ -1 +0,0 @@
-300

Modified: branches/release-0.95/tests/sites/beagle/sites.template.xml
===================================================================
--- branches/release-0.95/tests/sites/beagle/sites.template.xml	2014-06-14 02:57:18 UTC (rev 7922)
+++ branches/release-0.95/tests/sites/beagle/sites.template.xml	2014-06-14 21:07:38 UTC (rev 7923)
@@ -19,4 +19,4 @@
     <!-- <workdirectory>/lustre/beagle/yadunandb/swiftwork</workdirectory> -->
     <workdirectory>/tmp/{env.USER}/swiftwork</workdirectory>
   </pool>
-</config>
\ No newline at end of file
+</config>

Added: branches/release-0.95/tests/sites/local-coasters/sites.backup
===================================================================
--- branches/release-0.95/tests/sites/local-coasters/sites.backup	                        (rev 0)
+++ branches/release-0.95/tests/sites/local-coasters/sites.backup	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,24 @@
+<config>
+
+  <pool handle="localhost" sysinfo="INTEL32::LINUX">
+    <gridftp url="local://localhost" />
+    <execution provider="local" url="none" />
+    <workdirectory>_WORK_</workdirectory>
+    <profile namespace="swift" key="stagingMethod">file</profile>
+  </pool>
+
+  <pool handle="coasterslocal">
+    <filesystem provider="local" />
+    <execution provider="coaster" jobmanager="local:local" url="localhost:0"/>
+    <!-- <profile namespace="globus"   key="internalHostname">_HOST_</profile> -->
+    <profile namespace="karajan"  key="jobthrottle">2.55</profile>
+    <profile namespace="karajan"  key="initialScore">10000</profile>
+    <profile namespace="globus"   key="jobsPerNode">4</profile>
+    <profile namespace="globus"   key="slots">8</profile>
+    <profile namespace="globus"   key="maxTime">1000</profile>
+    <profile namespace="globus"   key="nodeGranularity">1</profile>
+    <profile namespace="globus"   key="maxNodes">4</profile>
+    <workdirectory>_WORK_</workdirectory>
+  </pool>
+
+</config>

Deleted: branches/release-0.95/tests/sites/mcs/sanity.setup.sh
===================================================================
--- branches/release-0.95/tests/sites/mcs/sanity.setup.sh	2014-06-14 02:57:18 UTC (rev 7922)
+++ branches/release-0.95/tests/sites/mcs/sanity.setup.sh	2014-06-14 21:07:38 UTC (rev 7923)
@@ -1,11 +0,0 @@
-#!/bin/bash
-
-USERNAME=$MCS_USERNAME
-
-if [[ -z $USERNAME ]] 
-then
-    echo "Remote username not provided. Skipping sites configs"
-else
-    ls *xml
-    cat sites.xml  | sed "s/{env.USER}/$USERNAME/" > tmp && mv tmp sites.xml
-fi

Deleted: branches/release-0.95/tests/sites/mcs/sanity.source.sh
===================================================================
--- branches/release-0.95/tests/sites/mcs/sanity.source.sh	2014-06-14 02:57:18 UTC (rev 7922)
+++ branches/release-0.95/tests/sites/mcs/sanity.source.sh	2014-06-14 21:07:38 UTC (rev 7923)
@@ -1,6 +0,0 @@
-#!/bin/bash
-if [ "$HOSTNAME" == "midway001" ]
-then
-   export GLOBUS_HOSTNAME=swift.rcc.uchicago.edu
-   export GLOBUS_TCP_PORT_RANGE=50000,51000
-fi;

Deleted: branches/release-0.95/tests/sites/mcs/sanity.swift
===================================================================
--- branches/release-0.95/tests/sites/mcs/sanity.swift	2014-06-14 02:57:18 UTC (rev 7922)
+++ branches/release-0.95/tests/sites/mcs/sanity.swift	2014-06-14 21:07:38 UTC (rev 7923)
@@ -1,11 +0,0 @@
-type file;
-
-app (file out, file err) remote_driver ()
-{
-    date stdout=@filename(out) stderr=@filename(err);
-}
-
-file driver_out <simple_mapper; prefix="sanity", suffix=".out">;
-file driver_err <simple_mapper; prefix="sanity", suffix=".err">;
-
-(driver_out, driver_err) = remote_driver();
\ No newline at end of file

Deleted: branches/release-0.95/tests/sites/mcs/sanity.timeout
===================================================================
--- branches/release-0.95/tests/sites/mcs/sanity.timeout	2014-06-14 02:57:18 UTC (rev 7922)
+++ branches/release-0.95/tests/sites/mcs/sanity.timeout	2014-06-14 21:07:38 UTC (rev 7923)
@@ -1 +0,0 @@
-300

Modified: branches/release-0.95/tests/sites/osgconnect/sites.template.xml
===================================================================
--- branches/release-0.95/tests/sites/osgconnect/sites.template.xml	2014-06-14 02:57:18 UTC (rev 7922)
+++ branches/release-0.95/tests/sites/osgconnect/sites.template.xml	2014-06-14 21:07:38 UTC (rev 7923)
@@ -2,7 +2,7 @@
 <config xmlns="http://www.ci.uchicago.edu/swift/SwiftSites">
 
   <pool handle="osgc">
-    <execution provider="coaster" url="login01.osgconnect.net" jobmanager="ssh-cl:condor"/>
+    <execution provider="coaster" url="login.osgconnect.net" jobmanager="ssh-cl:condor"/>
     <profile namespace="karajan" key="jobThrottle">10.00</profile>
     <profile namespace="karajan" key="initialScore">10000</profile>
     <profile namespace="globus"  key="jobsPerNode">1</profile>

Added: branches/release-0.95/tests/sites/raven/sites.template.backup
===================================================================
--- branches/release-0.95/tests/sites/raven/sites.template.backup	                        (rev 0)
+++ branches/release-0.95/tests/sites/raven/sites.template.backup	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,21 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<config xmlns="http://www.ci.uchicago.edu/swift/SwiftSites">
+
+  <pool handle="raven">
+    <execution provider="coaster" jobmanager="ssh-cl:pbs" url="raven.cray.com"/>
+    <profile namespace="globus" key="project">CI-SES000031</profile>
+    <!-- <profile namespace="env" key="SWIFT_GEN_SCRIPTS">KEEP</profile> -->
+    <profile namespace="globus" key="jobsPerNode">24</profile>
+    <profile namespace="globus" key="providerAttributes">pbs.aprun;pbs.mpp;depth=24</profile>
+    <profile namespace="globus" key="maxWallTime">00:01:00</profile>
+    <profile namespace="globus" key="slots">1</profile>
+    <profile namespace="globus" key="nodeGranularity">1</profile>
+    <profile namespace="globus" key="maxNodes">1</profile>
+    <profile namespace="karajan" key="jobThrottle">1.00</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+    <filesystem provider="local"/>
+    <workdirectory>/home/users/{env.USER}/swiftwork</workdirectory>
+  </pool>
+
+</config>
+

Modified: branches/release-0.95/tests/sites/raven/sites.template.xml
===================================================================
--- branches/release-0.95/tests/sites/raven/sites.template.xml	2014-06-14 02:57:18 UTC (rev 7922)
+++ branches/release-0.95/tests/sites/raven/sites.template.xml	2014-06-14 21:07:38 UTC (rev 7923)
@@ -1,9 +1,9 @@
 <?xml version="1.0" encoding="UTF-8"?>
 <config xmlns="http://www.ci.uchicago.edu/swift/SwiftSites">
+
   <pool handle="raven">
     <execution provider="coaster" jobmanager="ssh-cl:pbs" url="raven.cray.com"/>
     <profile namespace="globus" key="project">CI-SES000031</profile>
-    <profile namespace="env" key="SWIFT_GEN_SCRIPTS">KEEP</profile>
     <profile namespace="globus" key="jobsPerNode">24</profile>
     <profile namespace="globus" key="providerAttributes">pbs.aprun;pbs.mpp;depth=24</profile>
     <profile namespace="globus" key="maxWallTime">00:01:00</profile>
@@ -13,7 +13,8 @@
     <profile namespace="karajan" key="jobThrottle">1.00</profile>
     <profile namespace="karajan" key="initialScore">10000</profile>
     <filesystem provider="local"/>
-    <workdirectory>/home/users/{env.USER}/swiftwork</workdirectory>
+    <workdirectory>/home/users/p01898/swiftwork/</workdirectory>
   </pool>
+
 </config>
 

Modified: branches/release-0.95/tests/stress/IO/uc3/sites.template.xml
===================================================================
--- branches/release-0.95/tests/stress/IO/uc3/sites.template.xml	2014-06-14 02:57:18 UTC (rev 7922)
+++ branches/release-0.95/tests/stress/IO/uc3/sites.template.xml	2014-06-14 21:07:38 UTC (rev 7923)
@@ -2,7 +2,7 @@
 <config xmlns="http://www.ci.uchicago.edu/swift/SwiftSites">
 
   <pool handle="uc3">
-    <execution provider="coaster" url="uc3-sub.uchicago.edu" jobmanager="ssh-cl:condor"/>
+    <execution provider="coaster" url="login.osgconnect.net" jobmanager="ssh-cl:condor"/>
     <profile namespace="karajan" key="jobThrottle">10.00</profile>
     <profile namespace="karajan" key="initialScore">10000</profile>
     <profile namespace="globus"  key="jobsPerNode">1</profile>

Modified: branches/release-0.95/tests/stress/apps/modis_local/sites.template.xml
===================================================================
--- branches/release-0.95/tests/stress/apps/modis_local/sites.template.xml	2014-06-14 02:57:18 UTC (rev 7922)
+++ branches/release-0.95/tests/stress/apps/modis_local/sites.template.xml	2014-06-14 21:07:38 UTC (rev 7923)
@@ -1,5 +1,16 @@
 <?xml version="1.0" encoding="UTF-8"?>
 <config xmlns="http://www.ci.uchicago.edu/swift/SwiftSites">
+
+<pool handle="local">
+  <execution provider="local"/>
+  <filesystem provider="local"/>
+  <profile key="initialScore" namespace="karajan">10000</profile>
+  <profile key="jobThrottle" namespace="karajan">.31</profile>
+  <workdirectory>_WORK_</workdirectory>
+</pool>
+
+
+<!-- 
   <pool handle="local">
     <execution provider="coaster" jobmanager="local:local"/>
     <profile namespace="globus" key="jobsPerNode">4</profile>
@@ -14,4 +25,5 @@
     <filesystem provider="local"/>
     <workdirectory>/scratch/midway/{env.USER}</workdirectory>
   </pool>
+-->
 </config>

Modified: branches/release-0.95/tests/stress/apps/modis_midway/sites.template.xml
===================================================================
--- branches/release-0.95/tests/stress/apps/modis_midway/sites.template.xml	2014-06-14 02:57:18 UTC (rev 7922)
+++ branches/release-0.95/tests/stress/apps/modis_midway/sites.template.xml	2014-06-14 21:07:38 UTC (rev 7923)
@@ -4,16 +4,16 @@
   <pool handle="midway">
     <execution provider="coaster" jobmanager="local:slurm"/>
     <profile namespace="globus" key="queue">westmere</profile>
-    <profile namespace="globus" key="jobsPerNode">16</profile>
     <profile namespace="globus" key="maxWalltime">00:05:00</profile>
     <profile namespace="globus" key="maxTime">3600</profile>
     <profile namespace="globus" key="highOverAllocation">100</profile>
     <profile namespace="globus" key="lowOverAllocation">100</profile>
-    <profile namespace="globus" key="slots">2</profile>
+    <profile namespace="globus" key="slots">1</profile>
     <profile namespace="globus" key="maxNodes">1</profile>
+    <profile namespace="globus" key="jobsPerNode">8</profile>
     <profile namespace="globus" key="nodeGranularity">1</profile>
     <profile namespace="karajan" key="jobThrottle">.64</profile>
     <profile namespace="karajan" key="initialScore">10000</profile>
-    <workdirectory>/tmp/{env.USER}</workdirectory>
+    <workdirectory>/scratch/midway/{env.USER}/swiftwork</workdirectory>
   </pool>
 </config>

Modified: branches/release-0.95/tests/stress/apps/modis_uc3/modis.timeout
===================================================================
--- branches/release-0.95/tests/stress/apps/modis_uc3/modis.timeout	2014-06-14 02:57:18 UTC (rev 7922)
+++ branches/release-0.95/tests/stress/apps/modis_uc3/modis.timeout	2014-06-14 21:07:38 UTC (rev 7923)
@@ -1 +1 @@
-600
+900

Modified: branches/release-0.95/tests/stress/apps/modis_uc3/sites.template.xml
===================================================================
--- branches/release-0.95/tests/stress/apps/modis_uc3/sites.template.xml	2014-06-14 02:57:18 UTC (rev 7922)
+++ branches/release-0.95/tests/stress/apps/modis_uc3/sites.template.xml	2014-06-14 21:07:38 UTC (rev 7923)
@@ -3,20 +3,19 @@
 
   <pool handle="uc3">
     <execution provider="coaster" url="login.osgconnect.net" jobmanager="ssh-cl:condor"/>
-    <profile namespace="karajan" key="jobThrottle">10.00</profile>
-    <profile namespace="karajan" key="initialScore">10000</profile>
-    <profile namespace="globus"  key="jobsPerNode">1</profile>
     <profile namespace="globus"  key="maxtime">3600</profile>
-    <profile namespace="globus"  key="maxWalltime">00:10:00</profile>
-    <profile namespace="globus"  key="highOverAllocation">100</profile>
-    <profile namespace="globus"  key="lowOverAllocation">100</profile>
-    <profile namespace="globus"  key="slots">1</profile>
+    <profile namespace="globus"  key="maxWalltime">00:30:00</profile>
+
+    <profile namespace="globus"  key="slots">4</profile>
     <profile namespace="globus"  key="maxNodes">1</profile>
+    <profile namespace="globus"  key="jobsPerNode">4</profile>
     <profile namespace="globus"  key="nodeGranularity">1</profile>
-    <profile namespace="globus"  key="condor.+AccountingGroup">"group_friends.{env.USER}"</profile>
-    <profile namespace="globus"  key="condor.+ProjectName">"Swift"</profile>
+    <profile namespace="karajan" key="jobThrottle">10.00</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+
     <profile namespace="globus"  key="jobType">nonshared</profile>
-    <workdirectory>/home/{env.USER}/swiftwork</workdirectory>
-  </pool>
+    <profile namespace="globus"  key="condor.+ProjectName">"Swift"</profile>
 
+    <workdirectory>.</workdirectory>
+  </pool>
 </config>

Added: branches/release-0.95/tests/stress/local_cluster/combiner.sh
===================================================================
--- branches/release-0.95/tests/stress/local_cluster/combiner.sh	                        (rev 0)
+++ branches/release-0.95/tests/stress/local_cluster/combiner.sh	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,16 @@
+#!/bin/bash
+
+FILES=$*
+SUM=0
+COUNT=0
+
+for file in $*
+do
+    RES=($(awk '{ sum += $1 } END { print sum,NR }' $file))
+    echo "${RES[0]} ${RES[1]}"
+    SUM=$(($SUM+${RES[0]}))
+    COUNT=$(($COUNT+${RES[1]}))
+done
+echo "SUM  : $SUM"
+echo "COUNT: $COUNT"
+exit 0


Property changes on: branches/release-0.95/tests/stress/local_cluster/combiner.sh
___________________________________________________________________
Added: svn:executable
   + *
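
A quick usage sketch for the combiner (hypothetical input files, assuming one number per line; not part of the test itself):

    printf '1\n2\n3\n' > part1.txt
    printf '10\n20\n'  > part2.txt
    ./combiner.sh part1.txt part2.txt
    # per-file "sum count" lines: 6 3 and 30 2, then SUM  : 36 and COUNT: 5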

Added: branches/release-0.95/tests/stress/local_cluster/run
===================================================================
--- branches/release-0.95/tests/stress/local_cluster/run	                        (rev 0)
+++ branches/release-0.95/tests/stress/local_cluster/run	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,27 @@
+#!/bin/bash
+
+PATH=/scratch/midway/yadunand/swift-0.94RC2/cog/modules/swift/dist/swift-svn/bin:$PATH
+
+echo "Swift location: "; which swift
+echo "Swift version : "; swift -version
+
+export MIDWAY_USERNAME=yadunand
+export BEAGLE_USERNAME=yadunandb
+export MCS_USERNAME=yadunand
+export UC3_USERNAME=yadunand
+
+SCRIPT=simple_MapRed.swift
+BASE=${SCRIPT%.swift}
+
+rm $BASE.stdout
+cat title.txt
+
+cp sites.template.xml sites.xml
+./$BASE.setup.sh
+
+ARGS=$(cat $BASE.args)
+swift -tc.file tc.data -config swift.properties -sites.file sites.xml $BASE.swift ${ARGS[*]} | tee -a $BASE.stdout
+
+rm -rf *{swiftx,kml} $BASE-* _concurrent* failed* &> /dev/null
+
+./$BASE.check.sh
\ No newline at end of file


Property changes on: branches/release-0.95/tests/stress/local_cluster/run
___________________________________________________________________
Added: svn:executable
   + *

Added: branches/release-0.95/tests/stress/local_cluster/simple_MapRed.args
===================================================================
--- branches/release-0.95/tests/stress/local_cluster/simple_MapRed.args	                        (rev 0)
+++ branches/release-0.95/tests/stress/local_cluster/simple_MapRed.args	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1 @@
+-loops=10
\ No newline at end of file

Added: branches/release-0.95/tests/stress/local_cluster/simple_MapRed.check.sh
===================================================================
--- branches/release-0.95/tests/stress/local_cluster/simple_MapRed.check.sh	                        (rev 0)
+++ branches/release-0.95/tests/stress/local_cluster/simple_MapRed.check.sh	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,21 @@
+#!/bin/bash
+
+BASE=${0%.check.sh}
+ARGS=`cat $BASE.args | sed 's/-loops=//'`
+
+EXPECTED=$(($ARGS * 10000))
+
+if [ -f "final_result" ];then
+    RESULT=($(tail -n 1 final_result))
+    echo "RESULT line : ${RESULT[*]}"
+    echo "EXPECTED = $EXPECTED"
+    echo "ACTUAL   = ${RESULT[1]}"
+fi
+
+if [[ "${RESULT[1]}" == "$EXPECTED" ]]
+then
+    echo "Result matched"
+else
+    echo "Result does not match expectation" >&2
+    exit 1
+fi


Property changes on: branches/release-0.95/tests/stress/local_cluster/simple_MapRed.check.sh
___________________________________________________________________
Added: svn:executable
   + *

Added: branches/release-0.95/tests/stress/local_cluster/simple_MapRed.setup.sh
===================================================================
--- branches/release-0.95/tests/stress/local_cluster/simple_MapRed.setup.sh	                        (rev 0)
+++ branches/release-0.95/tests/stress/local_cluster/simple_MapRed.setup.sh	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,86 @@
+#!/bin/bash
+
+HOST=$(hostname -f)
+
+if   [[ "$HOST" == *midway* ]]; then
+    echo "On Midway"
+    echo "midway bash /bin/bash null null null" > tc.data
+elif [[ "$HOST" == *beagle* ]]; then
+    echo "On Beagle"
+    echo "beagle bash /bin/bash null null null" > tc.data
+elif [[ "$HOST" == *mcs* ]]; then
+    echo "On MCS"
+    echo "mcs bash /bin/bash null null null" > tc.data
+elif [[ "$HOST" == *uc3* ]]; then
+    echo "On UC3"
+    echo "uc3 bash /bin/bash null null null" > tc.data
+else
+    echo "On unidentified machine, using defaults"
+    echo "local bash /bin/bash null null null" > tc.data
+fi
+
+if [[ -z $MIDWAY_USERNAME ]]
+then
+    echo "Remote username not provided. Skipping sites configs"
+else
+    cat sites.xml  | sed "s/{mid.USER}/$MIDWAY_USERNAME/" > tmp && mv tmp\
+ sites.xml
+fi
+if [[ -z $UC3_USERNAME ]]
+then
+    echo "Remote username not provided. Skipping sites configs"
+else
+    cat sites.xml  | sed "s/{uc3.USER}/$UC3_USERNAME/" > tmp && mv tmp si\
+tes.xml
+fi
+if [[ -z $BEAGLE_USERNAME ]]
+then
+    echo "Remote username not provided. Skipping sites configs"
+else
+    cat sites.xml  | sed "s/{beagle.USER}/$BEAGLE_USERNAME/" > tmp && mv \
+tmp sites.xml
+fi
+if [[ -z $MCS_USERNAME ]]
+then
+    echo "Remote username not provided. Skipping sites configs"
+else
+    cat sites.xml  | sed "s/{mcs.USER}/$MCS_USERNAME/" > tmp && mv \
+tmp sites.xml
+fi
+
+cat<<'EOF' > teragen_wrap.sh
+#!/bin/bash
+
+# By default with ARG1:100 and SLICESIZE=10000, this script will generate
+# 10^6 records.
+ARG1=1
+[ ! -z $1 ] && ARG1=$1
+
+FILE="input_$RANDOM.txt"
+LOWERLIMIT=0
+UPPERLIMIT=1000000 # 10^6
+SLICESIZE=10000     # 10^4 records padded to 100B would result in 1MB file
+#SLICESIZE=1000     # 10^3 records, if padded to 100B, would result in a 100KB file
+
+shuf -i $LOWERLIMIT-$UPPERLIMIT -n $(($SLICESIZE*$ARG1)) | awk '{printf "%-99s\n", $0}'
+exit 0
+EOF
+
+cat <<'EOF' > combiner.sh
+#!/bin/bash
+
+FILES=$*
+SUM=0
+COUNT=0
+
+for file in $*
+do
+    RES=($(awk '{ sum += $1 } END { print sum,NR }' $file))
+    echo "${RES[0]} ${RES[1]}"
+    SUM=$(($SUM+${RES[0]}))
+    COUNT=$(($COUNT+${RES[1]}))
+done
+echo "SUM  : $SUM"
+echo "COUNT: $COUNT"
+exit 0
+EOF


Property changes on: branches/release-0.95/tests/stress/local_cluster/simple_MapRed.setup.sh
___________________________________________________________________
Added: svn:executable
   + *
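
For context, the expected value in simple_MapRed.check.sh follows directly from the generator set up above: each gen_data call runs teragen_wrap.sh, which emits SLICESIZE * recsize lines (10000 * 1 with this commit's defaults), and the run makes -loops=10 such calls, so the reducer's COUNT should be loops * 10000. A small arithmetic sketch with those defaults:

    LOOPS=10        # from simple_MapRed.args (-loops=10)
    SLICESIZE=10000 # from teragen_wrap.sh
    RECSIZE=1       # default recsize in simple_MapRed.swift
    echo $(( LOOPS * SLICESIZE * RECSIZE ))   # 100000, matching EXPECTED=$(($ARGS * 10000))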

Added: branches/release-0.95/tests/stress/local_cluster/simple_MapRed.stdout
===================================================================
--- branches/release-0.95/tests/stress/local_cluster/simple_MapRed.stdout	                        (rev 0)
+++ branches/release-0.95/tests/stress/local_cluster/simple_MapRed.stdout	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,3 @@
+Swift 0.94.1 RC2 swift-r6895 cog-r3765
+
+RunID: 20130820-2209-384c1ky1

Added: branches/release-0.95/tests/stress/local_cluster/simple_MapRed.swift
===================================================================
--- branches/release-0.95/tests/stress/local_cluster/simple_MapRed.swift	                        (rev 0)
+++ branches/release-0.95/tests/stress/local_cluster/simple_MapRed.swift	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,30 @@
+type file;
+type script;
+
+app (file out, file err) gen_data (script run, int recsize)
+{
+    bash @run recsize stdout=@out stderr=@err;
+}
+
+app (file out, file err) comb_data (script comb, file array[])
+{
+    bash @comb @array stdout=@out stderr=@err;
+}
+
+
+file tgen_out[] <simple_mapper; prefix="tgen", suffix=".out">;
+file tgen_err[] <simple_mapper; prefix="tgen", suffix=".err">;
+
+script wrapper <"teragen_wrap.sh">;
+int loop = @toInt(@arg("loops","10"));
+int fsize = @toInt(@arg("recsize","1")); # records per file = recsize * 10^4 (teragen SLICESIZE)
+string dir = @arg("dir", "./");
+
+foreach item,i in [0:loop-1] {
+	(tgen_out[i], tgen_err[i]) = gen_data(wrapper, fsize);
+}
+
+script combine <"combiner.sh">;
+file final <"final_result">;
+file errs <"err_file">;
+(final, errs) = comb_data(combine, tgen_out);

Added: branches/release-0.95/tests/stress/local_cluster/simple_MapRed.timeout
===================================================================
--- branches/release-0.95/tests/stress/local_cluster/simple_MapRed.timeout	                        (rev 0)
+++ branches/release-0.95/tests/stress/local_cluster/simple_MapRed.timeout	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1 @@
+900
\ No newline at end of file

Added: branches/release-0.95/tests/stress/local_cluster/sites.template.xml
===================================================================
--- branches/release-0.95/tests/stress/local_cluster/sites.template.xml	                        (rev 0)
+++ branches/release-0.95/tests/stress/local_cluster/sites.template.xml	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,66 @@
+<config>
+
+  <pool handle="uc3">
+    <execution provider="coaster" url="uc3-sub.uchicago.edu" jobmanager="ssh-cl:condor"/>
+    <profile namespace="karajan" key="jobThrottle">10.00</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+    <profile namespace="globus"  key="jobsPerNode">1</profile>
+    <profile namespace="globus"  key="maxtime">3600</profile>
+    <profile namespace="globus"  key="maxWalltime">00:15:00</profile>
+    <profile namespace="globus"  key="highOverAllocation">100</profile>
+    <profile namespace="globus"  key="lowOverAllocation">100</profile>
+    <profile namespace="globus"  key="slots">1000</profile>
+    <profile namespace="globus"  key="maxNodes">1</profile>
+    <profile namespace="globus"  key="nodeGranularity">1</profile>
+    <profile namespace="globus"  key="condor.+AccountingGroup">"group_friends.{uc3.USER}"</profile>
+    <profile namespace="globus"  key="jobType">nonshared</profile>
+    <workdirectory>.</workdirectory>
+  </pool>
+
+  <pool handle="beagle">
+    <execution provider="coaster" jobmanager="ssh-cl:pbs" url="login4.beagle.ci.uchicago.edu"/>
+    <profile namespace="globus" key="jobsPerNode">24</profile>
+    <profile namespace="globus" key="lowOverAllocation">100</profile>
+    <profile namespace="globus" key="highOverAllocation">100</profile>
+    <profile namespace="globus" key="providerAttributes">pbs.aprun;pbs.mpp;depth=24</profile>
+    <profile namespace="globus" key="maxtime">3700</profile>
+    <profile namespace="globus" key="maxWalltime">01:00:00</profile>
+    <profile namespace="globus" key="userHomeOverride">/lustre/beagle/{env.USER}/swiftwork</profile>
+    <profile namespace="globus" key="slots">1</profile>
+    <profile namespace="globus" key="maxnodes">2</profile>
+    <profile namespace="globus" key="nodeGranularity">1</profile>
+    <profile namespace="karajan" key="jobThrottle">1.00</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+    <profile namespace="karajan" key="workerLoggingLevel">trace</profile>
+    <workdirectory>/tmp/{beagle.USER}/swiftwork</workdirectory>
+  </pool>
+
+  <pool handle="midway">
+    <execution provider="coaster" jobmanager="local:slurm"/>
+    <profile namespace="globus" key="queue">sandyb</profile>
+    <profile namespace="globus" key="jobsPerNode">16</profile>
+    <profile namespace="globus" key="maxWalltime">00:15:00</profile>
+    <profile namespace="globus" key="maxTime">3600</profile>
+    <profile namespace="globus" key="highOverAllocation">100</profile>
+    <profile namespace="globus" key="lowOverAllocation">100</profile>
+    <profile namespace="globus" key="slots">4</profile>
+    <profile namespace="globus" key="maxNodes">1</profile>
+    <profile namespace="globus" key="nodeGranularity">1</profile>
+    <profile namespace="karajan" key="jobThrottle">.64</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+    <workdirectory>/tmp/{mid.USER}</workdirectory>
+  </pool>
+
+  <pool handle="mcs">
+    <execution provider="coaster" jobmanager="ssh-cl:local" url="thwomp.mcs.anl.gov"/>
+    <profile namespace="globus" key="jobsPerNode">8</profile>
+    <profile namespace="globus" key="lowOverAllocation">100</profile>
+    <profile namespace="globus" key="highOverAllocation">100</profile>
+    <profile namespace="globus" key="maxtime">3600</profile>
+    <profile namespace="globus" key="maxWalltime">00:15:00</profile>
+    <profile namespace="karajan" key="jobThrottle">0.0799</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+    <workdirectory>/sandbox/{mcs.USER}/swiftwork</workdirectory>
+  </pool>
+
+</config>

Added: branches/release-0.95/tests/stress/local_cluster/sites.xml
===================================================================
--- branches/release-0.95/tests/stress/local_cluster/sites.xml	                        (rev 0)
+++ branches/release-0.95/tests/stress/local_cluster/sites.xml	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,65 @@
+<config>
+
+  <pool handle="uc3">
+    <execution provider="coaster" url="uc3-sub.uchicago.edu" jobmanager="ssh-cl:condor"/>
+    <profile namespace="karajan" key="jobThrottle">10.00</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+    <profile namespace="globus"  key="jobsPerNode">1</profile>
+    <profile namespace="globus"  key="maxtime">3600</profile>
+    <profile namespace="globus"  key="maxWalltime">00:15:00</profile>
+    <profile namespace="globus"  key="highOverAllocation">100</profile>
+    <profile namespace="globus"  key="lowOverAllocation">100</profile>
+    <profile namespace="globus"  key="slots">4</profile>
+    <profile namespace="globus"  key="maxNodes">1</profile>
+    <profile namespace="globus"  key="nodeGranularity">1</profile>
+    <profile namespace="globus"  key="condor.+AccountingGroup">"group_friends.yadunand"</profile>
+    <profile namespace="globus"  key="jobType">nonshared</profile>
+    <workdirectory>.</workdirectory>
+  </pool>
+
+  <pool handle="beagle">
+    <execution provider="coaster" jobmanager="ssh-cl:pbs" url="login4.beagle.ci.uchicago.edu"/>
+    <profile namespace="globus" key="jobsPerNode">24</profile>
+    <profile namespace="globus" key="lowOverAllocation">100</profile>
+    <profile namespace="globus" key="highOverAllocation">100</profile>
+    <profile namespace="globus" key="providerAttributes"></profile>
+    <profile namespace="globus" key="maxtime">3600</profile>
+    <profile namespace="globus" key="maxWalltime">00:15:00</profile>
+    <profile namespace="globus" key="userHomeOverride">/lustre/beagle/yadunandb/swiftwork</profile>
+    <profile namespace="globus" key="slots">4</profile>
+    <profile namespace="globus" key="maxnodes">1</profile>
+    <profile namespace="globus" key="nodeGranularity">1</profile>
+    <profile namespace="karajan" key="jobThrottle">1.00</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+    <workdirectory>/tmp/yadunandb/swiftwork</workdirectory>
+  </pool>
+
+  <pool handle="midway">
+    <execution provider="coaster" jobmanager="local:slurm"/>
+    <profile namespace="globus" key="queue">sandyb</profile>
+    <profile namespace="globus" key="jobsPerNode">16</profile>
+    <profile namespace="globus" key="maxWalltime">00:15:00</profile>
+    <profile namespace="globus" key="maxTime">3600</profile>
+    <profile namespace="globus" key="highOverAllocation">100</profile>
+    <profile namespace="globus" key="lowOverAllocation">100</profile>
+    <profile namespace="globus" key="slots">4</profile>
+    <profile namespace="globus" key="maxNodes">1</profile>
+    <profile namespace="globus" key="nodeGranularity">1</profile>
+    <profile namespace="karajan" key="jobThrottle">.64</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+    <workdirectory>/tmp/yadunand</workdirectory>
+  </pool>
+
+  <pool handle="mcs">
+    <execution provider="coaster" jobmanager="ssh-cl:local" url="thwomp.mcs.anl.gov"/>
+    <profile namespace="globus" key="jobsPerNode">8</profile>
+    <profile namespace="globus" key="lowOverAllocation">100</profile>
+    <profile namespace="globus" key="highOverAllocation">100</profile>
+    <profile namespace="globus" key="maxtime">3600</profile>
+    <profile namespace="globus" key="maxWalltime">00:15:00</profile>
+    <profile namespace="karajan" key="jobThrottle">0.0799</profile>
+    <profile namespace="karajan" key="initialScore">10000</profile>
+    <workdirectory>/sandbox/yadunand/swiftwork</workdirectory>
+  </pool>
+
+</config>

Added: branches/release-0.95/tests/stress/local_cluster/swift.properties
===================================================================
--- branches/release-0.95/tests/stress/local_cluster/swift.properties	                        (rev 0)
+++ branches/release-0.95/tests/stress/local_cluster/swift.properties	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,8 @@
+use.provider.staging=true
+use.wrapper.staging=false
+wrapperlog.always.transfer=true
+execution.retries=0
+lazy.errors=false
+provider.staging.pin.swiftfiles=false
+sitedir.keep=true
+tcp.port.range=50000,51000
\ No newline at end of file

Added: branches/release-0.95/tests/stress/local_cluster/teragen_wrap.sh
===================================================================
--- branches/release-0.95/tests/stress/local_cluster/teragen_wrap.sh	                        (rev 0)
+++ branches/release-0.95/tests/stress/local_cluster/teragen_wrap.sh	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,15 @@
+#!/bin/bash
+
+# By default with ARG1:100 and SLICESIZE=10000, this script will generate
+# 10^6 records.
+ARG1=1
+[ ! -z $1 ] && ARG1=$1
+
+FILE="input_$RANDOM.txt"
+LOWERLIMIT=0
+UPPERLIMIT=1000000 # 10^6
+SLICESIZE=10000     # 10^4 records padded to 100B would result in 1MB file
+#SLICESIZE=1000     # 10^3 records, if padded to 100B, would result in a 100KB file
+
+shuf -i $LOWERLIMIT-$UPPERLIMIT -n $(($SLICESIZE*$ARG1)) | awk '{printf "%-99s\n", $0}'
+exit 0


Property changes on: branches/release-0.95/tests/stress/local_cluster/teragen_wrap.sh
___________________________________________________________________
Added: svn:executable
   + *

Added: branches/release-0.95/tests/stress/local_cluster/title.txt
===================================================================
--- branches/release-0.95/tests/stress/local_cluster/title.txt	                        (rev 0)
+++ branches/release-0.95/tests/stress/local_cluster/title.txt	2014-06-14 21:07:38 UTC (rev 7923)
@@ -0,0 +1,4 @@
+Simple MapReduce style job for Local cluster testing
+| The first map stage generates a large number of random numbers
+| The reduce stage aggregates the results and outputs the count
+| and sum



