<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
<br>
<br>
Mihael Hategan wrote:
<blockquote cite="mid:1207063923.30798.0.camel@blabla.mcs.anl.gov"
type="cite">
<pre wrap="">On Tue, 2008-04-01 at 10:26 -0500, Ioan Raicu wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Michael Wilde wrote:
</pre>
<blockquote type="cite">
<pre wrap="">We're only working on the BG/P system, and GPFS is the only shared
filesystem there.
</pre>
</blockquote>
<pre wrap="">There is PVFS, but that performed even worse in our tests.
</pre>
<blockquote type="cite">
<pre wrap="">GPFS access, however, remains a big scalabiity issue. Frequent small
accesses to GPFS in our measurements really slow down the workflow. We
did a lot of micro-benchmark tests.
</pre>
</blockquote>
<pre wrap="">Yes! The BG/P's GPFS probably performs the worst out of all GPFSes I
have worked on, in terms of small granular accesses. For example,
reading 1 byte files, invoking a trivial script (i.e. exit 0), etc...
all perform extremely poor, to the point that we need to move away from
GPFS almost completely. For example, the things that we eventually need
to avoid on GPFS for the BG/P are:
invoking wrapper.sh
mkdir
any logging to GPFS
</pre>
</blockquote>
<pre wrap=""><!---->
Doing nothing can be incredibly fast.
</pre>
</blockquote>
What I meant is that we need to move these operations to the local file
system, i.e. RAM. We have run applications on the BG/P via Falkon only,
and implemented a caching strategy that caches all scripts, binaries,
and input data in RAM. Once the task execution (all from RAM) completes
and has written its output to RAM, there is a single copy operation of
the output data from RAM to GPFS. We control how frequently this copy
operation occurs, so we can scale quite nicely and essentially linearly
with this approach. The hope is that we can eventually work this kind of
functionality into wrapper.sh, or into Swift itself. So, in reply to
your statement, we would like to preserve the functionality of
wrapper.sh, but move as much of that functionality as possible from the
shared file system to the local (RAM-based) file system; a rough sketch
of the staging pattern follows below. <br>
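<br>
The sketch below is a minimal illustration only, with made-up paths, and
assumes /dev/shm is mounted as tmpfs; the actual Falkon caching code and
wrapper.sh differ in the details, and in practice the copy-back step is
batched rather than done per task.<br>
<pre wrap="">#!/bin/sh
# Illustrative sketch only: not the actual Falkon or Swift wrapper.sh code.
# All paths are hypothetical; assumes /dev/shm is a tmpfs (RAM) mount.

GPFS_JOB_DIR=/gpfs/home/user/jobs/job0001   # shared-FS location (hypothetical)
RAM_JOB_DIR=/dev/shm/$USER/job0001          # RAM-backed scratch area

# Warn if the scratch area is not actually RAM-backed.
if ! grep -q '/dev/shm tmpfs' /proc/mounts; then
    echo "warning: /dev/shm is not tmpfs; staging will hit a real file system" 1>&amp;2
fi

# 1. Stage scripts, binaries, and input data from GPFS into RAM (one bulk copy).
mkdir -p "$RAM_JOB_DIR"
cp -r "$GPFS_JOB_DIR/input" "$GPFS_JOB_DIR/bin" "$RAM_JOB_DIR/"

# 2. Run the task entirely out of RAM; outputs and logs stay in RAM too.
cd "$RAM_JOB_DIR"
./bin/app input/data.in 1> output.dat 2> app.log

# 3. Copy the results back to GPFS in a single operation, then free the RAM.
mkdir -p "$GPFS_JOB_DIR/output"
cp output.dat app.log "$GPFS_JOB_DIR/output/"
rm -rf "$RAM_JOB_DIR"
</pre>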
<br>
Ioan<br>
<blockquote cite="mid:1207063923.30798.0.camel@blabla.mcs.anl.gov"
type="cite">
<pre wrap="">
</pre>
<blockquote type="cite">
<pre wrap="">There are probably others.
</pre>
<blockquote type="cite">
<pre wrap="">Zhao, can you gather a set of these tests into a small suite and post
numbers so the Swift developers can get an understanding of the
system's GPFS access performance?
Also note: the only local filesystem is RAM disk on /tmp or /dev/shm.
(Ioan and Zhao should confirm if they verified that /tmp is on RAM).
</pre>
</blockquote>
<pre wrap="">Yes, there are no local disks on either BG/P or SiCortex. Both machines
have /tmp and dev/shm mounted as ram disks.
Ioan
</pre>
<blockquote type="cite">
<pre wrap="">- Mike
On 4/1/08 5:05 AM, Ben Clifford wrote:
</pre>
<blockquote type="cite">
<pre wrap="">On Tue, 1 Apr 2008, Ben Clifford wrote:
</pre>
<blockquote type="cite">
<blockquote type="cite">
<pre wrap="">With this fixed, the total time in wrapper.sh including the app is
now about
15 seconds, with 3 being in the app-wrapper itself. The time seems
about
evenly spread over the several wrapper.sh operations, which is not
surprising
when 500 wrappers hit NFS all at once.
</pre>
</blockquote>
<pre wrap="">Does this machine have a higher (/different) performance shared file
system such as PVFS or GPFS? We spent some time in november layout
out the filesystem to be sympathetic to GPFS to help avoid
bottlenecks like you are seeing here. It would be kinda sad if
either it isn't available or you aren't using it even though it is
available.
</pre>
</blockquote>
<blockquote type="cite">
<pre wrap="">From what I can tell from the web, PVFS and/or GPFS are available on
all
</pre>
</blockquote>
<pre wrap="">of the Argonne Blue Gene machines. Is this true? I don't want to
provide more scalability support for NFS-on-bluegene if it is.
</pre>
</blockquote>
<pre wrap="">_______________________________________________
Swift-devel mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Swift-devel@ci.uchicago.edu">Swift-devel@ci.uchicago.edu</a>
<a class="moz-txt-link-freetext" href="http://mail.ci.uchicago.edu/mailman/listinfo/swift-devel">http://mail.ci.uchicago.edu/mailman/listinfo/swift-devel</a>
</pre>
</blockquote>
</blockquote>
<pre wrap=""><!---->
</pre>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
===================================================
Ioan Raicu
Ph.D. Candidate
===================================================
Distributed Systems Laboratory
Computer Science Department
University of Chicago
1100 E. 58th Street, Ryerson Hall
Chicago, IL 60637
===================================================
Email: <a class="moz-txt-link-abbreviated" href="mailto:iraicu@cs.uchicago.edu">iraicu@cs.uchicago.edu</a>
Web: <a class="moz-txt-link-freetext" href="http://www.cs.uchicago.edu/~iraicu">http://www.cs.uchicago.edu/~iraicu</a>
<a class="moz-txt-link-freetext" href="http://dev.globus.org/wiki/Incubator/Falkon">http://dev.globus.org/wiki/Incubator/Falkon</a>
<a class="moz-txt-link-freetext" href="http://dsl-wiki.cs.uchicago.edu/index.php/Main_Page">http://dsl-wiki.cs.uchicago.edu/index.php/Main_Page</a>
===================================================
===================================================
</pre>
</body>
</html>