<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
Hi, Mike<br>
<br>
Michael Wilde wrote:
<blockquote cite="mid:48026E8B.501@mcs.anl.gov" type="cite">Ben, your
analysis sounds very good. Some notes below, including questions for
Zhao.
<br>
<br>
On 4/13/08 2:57 PM, Ben Clifford wrote:
<br>
<blockquote type="cite"><br>
<blockquote type="cite">Ben, can you point me to the graphs for
this run? (Zhao's *99cy0z4g.log)
<br>
</blockquote>
<br>
<a class="moz-txt-link-freetext" href="http://www.ci.uchicago.edu/~benc/report-dock2-20080412-1609-99cy0z4g">http://www.ci.uchicago.edu/~benc/report-dock2-20080412-1609-99cy0z4g</a>
<br>
<br>
<blockquote type="cite">Once stage-ins start to complete, are the
corresponding jobs initiated quickly, or is Swift doing mostly
stage-ins for some period?
<br>
</blockquote>
<br>
In the run dock2-20080412-1609-99cy0z4g, jobs are submitted (to Falkon)
pretty much as soon as the corresponding stage-in completes. I have no
deeper information about when the worker actually starts to run.
<br>
<br>
<blockquote type="cite">Zhao indicated he saw data indicating there
was about a 700 second lag from
<br>
workflow start time till the first Falkon jobs started, if I understood
<br>
correctly. Do the graphs confirm this or say something different?
<br>
</blockquote>
<br>
There is a period of about 500s or so until things start to happen; I
haven't looked into it yet. That is before stage-ins start too, though, which
means that I think this...
<br>
<br>
<blockquote type="cite">If the 700-second delay figure is true, and
stage-in was eliminated by copying
<br>
input files right to the /tmp workdir rather than first to /shared,
then we'd
<br>
have:
<br>
<br>
1190260 / ( 1290 * 2048 ) = .45 efficiency
<br>
</blockquote>
<br>
calculation is not meaningful.
<br>
<br>
I have not looked at what is going on during that 500s startup time,
but I plan to.
<br>
</blockquote>
<br>
Zhao, what SVN rev is your Swift at? Ben fixed an N^2 mapper logging
problem a few weeks ago. Could that cause such a delay, Ben? It would
be very obvious in the swift log.
<br>
</blockquote>
The version is Swift svn swift-r1780 cog-r1956<br>
<blockquote cite="mid:48026E8B.501@mcs.anl.gov" type="cite"><br>
<blockquote type="cite"><br>
<blockquote type="cite">I assume we're paying the same staging
price on the output side?
<br>
</blockquote>
<br>
Not really - the output stage-outs go very fast, and because job
completions are staggered, they don't all happen at once.
<br>
<br>
This is the same for most of the large runs I've seen (of any
application) - stage-out tends not to be a problem (or at least,
nowhere near the problem that stage-in is).
<br>
<br>
All stage-ins happen fairly smoothly over the period t=400 to t=1100.
There is still rate limiting on file operations (100 max) and file
transfers (2000 max), and it is still being hit.
<br>
</blockquote>
<br>
I thought Zhao had set the file operations throttle to 2000 as well. Sounds
like we can test with the transfers throttle set higher, and find out
what is limiting file operations to 100.
<br>
<br>
Zhao, what are your settings for property throttle.file.operations?
<br>
I assume you have throttle.transfers set to 2000.
<br>
<br>
If it's set right, is there any chance that Swift or Karajan is limiting it
somewhere?
<br>
</blockquote>
2000 for sure:<br>
throttle.submit=off <br>
throttle.host.submit=off <br>
throttle.score.job.factor=off <br>
throttle.transfers=2000 <br>
throttle.file.operation=2000<br>
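<br>
One thing that may be worth double-checking (just a guess from the names above): the question above spells the property throttle.file.operations (plural), while the pasted settings show throttle.file.operation. If only the plural form is recognized, the file operations throttle would silently stay at its default, which could explain why the run still shows a 100-max limit on file operations despite the 2000 setting. A minimal swift.properties fragment, assuming the plural spelling is the one that takes effect:<br>
<pre>
# hypothetical swift.properties fragment - names taken from the settings
# quoted above; the plural spelling is an assumption to be verified
throttle.transfers=2000
throttle.file.operations=2000
</pre>
<br>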
<blockquote cite="mid:48026E8B.501@mcs.anl.gov" type="cite">
<blockquote type="cite"><br>
I think there are two directions to proceed in here that make sense for
actual use on single clusters running Falkon (rather than trying to cut
things out randomly to push up the numbers):
<br>
<br>
i) use some of the data placement features in falkon, rather than
Swift's
<br>
relatively simple data management that was designed more for
running
<br>
on the grid.
<br>
</blockquote>
<br>
Long term, we should consider how the Coaster implementation could
eventually take a similar data placement approach. In the meantime (mid
term), examining what interface changes are needed for Falkon data
placement might help prepare for that. We need to discuss whether that
would be a good step or not.
<br>
<br>
<blockquote type="cite"><br>
ii) do stage-ins using symlinks rather than file copying. This makes
<br>
sense when everything is living in a single filesystem, which
again
<br>
is not what Swift's data management was originally optimised for.
<br>
</blockquote>
<br>
I assume you mean symlinks from shared/ back to the user's input files?
<br>
<br>
That sounds worth testing: find out if symlink creation is fast on NFS
and GPFS.
<br>
<br>
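A throwaway script along these lines could give a rough number for that (nothing Swift-specific; the target file and directory are just placeholders pointing at whichever NFS or GPFS location is being tested):<br>
<pre>
#!/usr/bin/env python
# Rough timing of symlink creation on a given filesystem (e.g. an NFS
# or GPFS mount). Paths and the link count are placeholders, not
# anything taken from Swift.
import os, sys, time, tempfile

def time_symlinks(target_file, link_dir, count=1000):
    # create `count` symlinks to target_file inside link_dir and
    # report the total and per-link time
    start = time.time()
    for i in range(count):
        os.symlink(target_file, os.path.join(link_dir, "link-%d" % i))
    elapsed = time.time() - start
    print("%d symlinks in %.3fs (%.2f ms each)" %
          (count, elapsed, 1000.0 * elapsed / count))

if __name__ == "__main__":
    # usage: symlink_bench.py /path/to/input/file /gpfs/or/nfs/dir
    target = sys.argv[1]
    link_dir = tempfile.mkdtemp(dir=sys.argv[2])
    time_symlinks(target, link_dir)
</pre>
<br>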
Is another approach to copy directly from the user's files to the /tmp
workdir (i.e., wrapper.sh pulls the data in)? Measurement will tell whether
symlinks alone give adequate performance; symlinks do seem like an easier
first step.
<br>
<br>
<blockquote type="cite">I think option ii) is substantially easier to
implement (on the order of days) and is generally useful in the
single-cluster, local-source-data situation that appears to be what
people want to do for running on the BG/P and SiCortex (that is,
pretty much ignoring anything grid-like at all).
<br>
</blockquote>
<br>
Grid-like might mean having the wrapper pull data to the /tmp workdir
directly - but that seems like a harder step, and would need measurement
and prototyping of such code before attempting it. Finding data transfer
clients that the wrapper script can count on might be an obstacle.
<br>
<br>
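As a concrete illustration of the two variants being weighed here (symlink into the workdir versus pulling the bytes in directly), a sketch of how a wrapper might do it - the function and flag names are made up, and this is not the real wrapper.sh:<br>
<pre>
# Sketch of the two stage-in variants discussed above. Names are made
# up for illustration; this is not the real wrapper.sh or Swift code.
import os, shutil

def stage_in(src, workdir, use_symlink=True):
    dest = os.path.join(workdir, os.path.basename(src))
    if use_symlink:
        # cheap when src sits on a filesystem the worker can see
        # (e.g. NFS or GPFS); the bytes are never copied
        os.symlink(os.path.abspath(src), dest)
    else:
        # pull the bytes into the workdir (e.g. node-local /tmp); in the
        # grid-like case this local copy would be replaced by whatever
        # transfer client the wrapper can count on being installed
        shutil.copy(src, dest)
    return dest
</pre>
<br>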
<blockquote type="cite"><br>
Option i) is much harder (on the order of months), needing a very
different interface between Swift and Falkon than exists at the moment.
<br>
<br>
<br>
<br>
</blockquote>
<br>
</blockquote>
</body>
</html>