[Swift-commit] r3945 - text/parco10submission

noreply at svn.ci.uchicago.edu
Mon Jan 10 13:35:26 CST 2011


Author: dsk
Date: 2011-01-10 13:35:25 -0600 (Mon, 10 Jan 2011)
New Revision: 3945

Modified:
   text/parco10submission/paper.tex
Log:
adding hyphen


Modified: text/parco10submission/paper.tex
===================================================================
--- text/parco10submission/paper.tex	2011-01-10 19:31:51 UTC (rev 3944)
+++ text/parco10submission/paper.tex	2011-01-10 19:35:25 UTC (rev 3945)
@@ -1366,7 +1366,7 @@
 amount of node-time consumed to report a utilization ratio, which is
 plotted in Figure~\ref{fig:swift-performance}, case B.
 We observe that for tasks of 100-second duration, Swift achieves
-a 95\% CPU utilization of 2,048 compute nodes. Even for 30 second tasks,
+a 95\% CPU utilization of 2,048 compute nodes. Even for 30-second tasks,
 it can sustain an 80\% utilization at this level of concurrency.
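
The exact form of the normalization described above is not spelled out in this excerpt; a minimal sketch of the presumed ratio, assuming $N$ allocated compute nodes, a measured interval of length $T$, and total busy node-time $B$ summed over all tasks, is:

\[
  U \;=\; \frac{B}{N \cdot T}, \qquad B = \sum_i t_i \ \text{(summed task run times; assumed form)} .
\]

Under this reading, 95\% utilization on 2,048 nodes means the 100-second tasks kept roughly $0.95 \times 2048$ node-equivalents busy over the measured interval.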
 
 
@@ -1426,7 +1426,7 @@
 Previously published measurements of Swift performance on several scientific applications provide evidence that its parallel distributed programming model can be implemented with sufficient scalability and efficiency to make it a practical tool for large-scale parallel application scripting.
 
 The performance of Swift submitting jobs over the wide-area network from UChicago to the TeraGrid Ranger cluster at TACC is shown in Figure~\ref{SEMplots} (from \cite{CNARI_2009}), which shows an SEM workload of 131,072 jobs for four brain regions and two experimental conditions. This workflow completed in approximately 3 hours. The logs from the {\tt swift\_plot\_log} utility show the high degree of concurrent overlap between job execution and the staging of input and output files to remote computing resources. 
-The workflows were developed on and submitted (to Ranger) from a single-core Linux workstation at UChicago running an Intel¨ Xeonª 3.20 GHz CPU. Data staging was performed using the Globus GridFTP protocol and job execution was performed over the Globus GRAM 2 protocol.
+The workflows were developed on, and submitted to Ranger from, a single-core Linux workstation at UChicago with an Intel Xeon 3.20-GHz CPU. Data staging was performed with the Globus GridFTP protocol, and job execution used the Globus GRAM~2 protocol.
 During the third hour of the workflow, Swift achieved very high utilization of the 2,048 allocated processor cores and a steady rate of input and output transfers. The first two hours of the run were more bursty, due to fluctuating grid conditions and data server loads.
 
 
@@ -1446,7 +1446,7 @@
 
 The top plot in Figure~\ref{TaskPlots}-A shows the PTMap application running the stage-1 processing of the E.~coli K12 genome (4,127 sequences) on 2,048 Intrepid cores. The lower plot shows processor utilization as time progresses; overall, the average per-task execution time was 64 seconds, with a standard deviation of 14 seconds. These 4,127 tasks consumed a total of 73 CPU-hours in a span of 161 seconds on 2,048 processor cores, achieving 80\% utilization.
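
As a quick consistency check on the figures quoted above (arithmetic not stated in the excerpt, using the same utilization ratio as before):

\[
  4127 \times 64\,\mathrm{s} \approx 264{,}000\,\mathrm{s} \approx 73\ \text{CPU-hours},
  \qquad
  U \approx \frac{264{,}128\,\mathrm{s}}{2048 \times 161\,\mathrm{s}} \approx 0.80,
\]

which agrees with the reported 73 CPU-hours and 80\% utilization.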
 
-The top plot in Figure~\ref{TaskPlots}-B shows performance of Swift running structural equation modeling problem at large scale using on the Ranger Constellation to model neural pathway connectivity from experimental fMRI data~\cite{CNARI_2009}. The lower plot shows the active jobs for a larger version of the problem type shown in Figure~\ref{SEMplots}.  This shows a Swift script executing 418,000 structural equation modeling jobs over a 40 hour period.
+The top plot in Figure~\ref{TaskPlots}-B shows the performance of Swift running a structural equation modeling problem at large scale on the Ranger Constellation to model neural pathway connectivity from experimental fMRI data~\cite{CNARI_2009}. The lower plot shows the active jobs for a larger version of the problem type shown in Figure~\ref{SEMplots}: a Swift script executing 418,000 structural equation modeling jobs over a 40-hour period.
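
For a sense of sustained throughput (again arithmetic derived from the quoted figures, not a number reported in the excerpt):

\[
  \frac{418{,}000\ \text{jobs}}{40 \times 3600\,\mathrm{s}} \approx 2.9\ \text{jobs completed per second, averaged over the run.}
\]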
 
 \begin{figure}
   \begin{center}



