<html><body><div>Dear All:</div><div><br></div><div>I am finding it hard to sort through this chain of emails, but I wanted to make a couple of points.</div><div><br></div><div>Zhao, Allan, Ioan, et al. have demonstrated considerable benefits from applying two methods to Swift-like workloads on the BG/P:</div><div><br></div><div>a) "Storage hierarchy": the use of federated per-node storage (RAM on the BG/P; it could be local disk on other systems) as an "intermediate file system" layer in the storage hierarchy, between the ultra-fast but low-capacity local storage and the high-capacity but slower GPFS.</div><div><br></div><div>b) "Collective I/O": improving performance between the intermediate file system and GPFS by aggregating many small operations into fewer large operations.</div><div><br></div><div>These are both well-known, extensively studied, and proven methods. Furthermore, we have some nice performance data that allows us to quantify their benefits in our specific situation. Perhaps it would be worth looking at the methods from that perspective.</div><div><br></div><div>Ian.</div><div><br></div><br><div><div>On Dec 1, 2008, at 9:32 PM, Ioan Raicu wrote:</div><br><blockquote type="cite"> <div bgcolor="#ffffff" text="#000000"> But it's not just about directories and GPFS locking... it's about 8 or 16 large servers with 10 Gb/s network connectivity (as is the case for GPFS) compared to potentially 40K servers, each with 1 Gb/s connectivity (as would be the case in our example). The potential raw throughput of the latter case, when we use all 40K nodes as servers for the file system, is orders of magnitude larger than a static configuration with 8 or 16 servers. It's not yet clear that we can actually achieve anything close to the upper bound of performance at full scale, but it should be obvious that the performance characteristics will be quite different between GPFS and CIO.<br> <br> Ioan<br> <br> Mihael Hategan wrote: <blockquote cite="mid:1228175016.5031.6.camel@localhost" type="cite"> <pre wrap="">On Mon, 2008-12-01 at 17:10 -0600, Ioan Raicu wrote:
</pre> <blockquote type="cite"> <pre wrap="">Mihael Hategan wrote:
</pre> <blockquote type="cite"> <pre wrap="">On Mon, 2008-12-01 at 16:52 -0600, Ioan Raicu wrote:
...
</pre> <blockquote type="cite"> <pre wrap="">I don't think you realize how expensive GPFS access is when doing so
at 100K CPU scale.
</pre> </blockquote> <pre wrap="">I don't think I understand what you mean by "access". As I said, things
that generate contention are going to be slow.
If the problem requires that contention to happen, then it doesn't
matter what the solution is. If it does not, then I suspect that there
is a way to avoid contention in GPFS, too (sticking things in different
directories).
</pre> </blockquote> <pre wrap="">The basic idea is that many smaller shared file systems will scale
better than 1 large file system, as the contention is localized.
</pre> </blockquote> <pre wrap=""><!---->Which is the same behaviour you get if you have a hierarchy of
directories. This is what Ben implemented in Swift.
</pre> <blockquote type="cite"> <pre wrap="">The problem is that having 1 global namespace is simple and
straightforward, but having N local namespaces is not, and requires
extra management.
</pre> </blockquote> <pre wrap=""><!---->Right. That's why most filesystems I know of treat directories as
independent files containing file metadata (aka. "local namespaces").
</pre> </blockquote> <br> <pre class="moz-signature" cols="72">--
===================================================
Ioan Raicu
Ph.D. Candidate
===================================================
Distributed Systems Laboratory
Computer Science Department
University of Chicago
1100 E. 58th Street, Ryerson Hall
Chicago, IL 60637
===================================================
Email: <a class="moz-txt-link-abbreviated" href="mailto:iraicu@cs.uchicago.edu">iraicu@cs.uchicago.edu</a>
Web: <a class="moz-txt-link-freetext" href="http://www.cs.uchicago.edu/~iraicu">http://www.cs.uchicago.edu/~iraicu</a>
<a class="moz-txt-link-freetext" href="http://dev.globus.org/wiki/Incubator/Falkon">http://dev.globus.org/wiki/Incubator/Falkon</a>
<a class="moz-txt-link-freetext" href="http://dsl-wiki.cs.uchicago.edu/index.php/Main_Page">http://dsl-wiki.cs.uchicago.edu/index.php/Main_Page</a>
===================================================
===================================================
</pre> </div> _______________________________________________<br>Swift-devel mailing list<br><a href="mailto:Swift-devel@ci.uchicago.edu">Swift-devel@ci.uchicago.edu</a><br>http://mail.ci.uchicago.edu/mailman/listinfo/swift-devel<br></blockquote></div>
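<div><br></div><div>For concreteness, the raw numbers behind the comparison in Ioan's message: 8 or 16 servers at 10 Gb/s give an aggregate of 80-160 Gb/s on the GPFS side, while 40K nodes at 1 Gb/s each give roughly 40 Tb/s, i.e. about 250-500 times more raw bandwidth in the best case. As Ioan notes, that is an upper bound, not a measured result.</div>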
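<div><br></div><div>A minimal sketch of the "collective I/O" idea in (b), in Python. This is illustrative only, not the actual Swift/Falkon code; the directory names and batch size are made up. The point is simply that many small per-task files staged on node-local storage get written to GPFS as a few large archives rather than as thousands of tiny files.</div><div><br></div><pre>
import os
import tarfile

def aggregate_outputs(local_dir, gpfs_dir, batch_size=1024):
    # Illustrative only: combine many small per-task output files from
    # node-local storage (e.g. RAM disk on the BG/P) into a few large
    # tar archives on GPFS, so GPFS sees sequential writes to a handful
    # of large files instead of thousands of small file creates.
    files = sorted(os.listdir(local_dir))
    for i in range(0, len(files), batch_size):
        batch = files[i:i + batch_size]
        archive = os.path.join(gpfs_dir, "batch-%06d.tar" % (i // batch_size))
        with tarfile.open(archive, "w") as tar:
            for name in batch:
                tar.add(os.path.join(local_dir, name), arcname=name)

# e.g. aggregate_outputs("/dev/shm/task-output", "/gpfs/scratch/some-run")
</pre>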
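<div><br></div><div>Similarly, a sketch of the directory-hierarchy point Mihael raises (spreading files over many directories so that no single directory, and no single directory lock, takes all the traffic). This is not Ben's actual Swift implementation, just the general technique; the function and path names are invented for illustration.</div><div><br></div><pre>
import hashlib
import os

def sharded_path(root, filename, levels=2, width=2):
    # Illustrative only: place each file in a subdirectory chosen from a
    # hash of its name, e.g. root/ab/3f/filename, so metadata operations
    # are spread over many directories instead of one hot directory.
    digest = hashlib.md5(filename.encode()).hexdigest()
    parts = [digest[i * width:(i + 1) * width] for i in range(levels)]
    shard_dir = os.path.join(root, *parts)
    os.makedirs(shard_dir, exist_ok=True)
    return os.path.join(shard_dir, filename)

# e.g. sharded_path("/gpfs/scratch/some-run", "task-123456.out")
</pre><br></body></html>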