[Fwd: Re: [Swift-devel] Re: swift-falkon problem... plots to explain plateaus...]
Mihael Hategan
hategan at mcs.anl.gov
Tue Apr 1 10:32:03 CDT 2008
On Tue, 2008-04-01 at 10:26 -0500, Ioan Raicu wrote:
>
> Michael Wilde wrote:
> > We're only working on the BG/P system, and GPFS is the only shared
> > filesystem there.
> There is PVFS, but that performed even worse in our tests.
> >
> > GPFS access, however, remains a big scalability issue. Frequent small
> > accesses to GPFS in our measurements really slow down the workflow. We
> > did a lot of micro-benchmark tests.
> Yes! The BG/P's GPFS probably performs the worst of all the GPFS
> installations I have worked on, in terms of small, granular accesses.
> For example, reading 1-byte files, invoking a trivial script (i.e.
> exit 0), etc. all perform extremely poorly, to the point that we need
> to move away from GPFS almost completely. For example, the things that
> we eventually need to avoid on GPFS for the BG/P are:
> - invoking wrapper.sh
> - mkdir
> - any logging to GPFS
Doing nothing can be incredibly fast.
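The micro-benchmarks Ioan describes above (1-byte file reads, many small metadata operations) can be sketched as a small timing harness. This is a hypothetical sketch, not the suite Zhao was asked to assemble; the function name and parameters are assumptions, and on the BG/P one would point `root` at a GPFS path versus /dev/shm to compare.

```python
import os
import tempfile
import time

def bench_small_files(root, n=100):
    """Time creating, then reading back, n one-byte files under root.

    Hypothetical micro-benchmark in the spirit of the tests described
    above: on GPFS the per-file metadata cost dominates, so these
    timings diverge sharply from a RAM-disk path like /dev/shm.
    """
    d = tempfile.mkdtemp(dir=root)

    t0 = time.time()
    for i in range(n):
        with open(os.path.join(d, "f%d" % i), "w") as f:
            f.write("x")  # a single byte per file
    t_create = time.time() - t0

    t0 = time.time()
    for i in range(n):
        with open(os.path.join(d, "f%d" % i)) as f:
            f.read()
    t_read = time.time() - t0

    return t_create, t_read
```

Running it twice, once against the shared filesystem and once against /dev/shm, gives the kind of side-by-side numbers the thread is asking Zhao to collect.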
>
> There are probably others.
> >
> > Zhao, can you gather a set of these tests into a small suite and post
> > numbers so the Swift developers can get an understanding of the
> > system's GPFS access performance?
> >
> > Also note: the only local filesystem is RAM disk on /tmp or /dev/shm.
> > (Ioan and Zhao should confirm if they verified that /tmp is on RAM).
> Yes, there are no local disks on either the BG/P or the SiCortex. Both
> machines have /tmp and /dev/shm mounted as RAM disks.
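Whether /tmp is actually RAM-backed (the point Mike asks Ioan and Zhao to confirm) can be checked by parsing the mount table. A minimal sketch, assuming a Linux-style /proc/mounts; the function name is illustrative, not from the thread:

```python
def ramdisk_mounts(mounts_file="/proc/mounts"):
    """Return mount points whose filesystem type is tmpfs or ramfs.

    On the BG/P compute nodes both /tmp and /dev/shm should appear
    here; on an ordinary Linux box typically only /dev/shm does.
    """
    ram = []
    with open(mounts_file) as f:
        for line in f:
            # /proc/mounts fields: device, mount point, fstype, options, ...
            dev, mnt, fstype = line.split()[:3]
            if fstype in ("tmpfs", "ramfs"):
                ram.append(mnt)
    return ram
```

Equivalently, `grep -E 'tmpfs|ramfs' /proc/mounts` on the node gives the same answer interactively.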
>
> Ioan
> >
> > - Mike
> >
> > On 4/1/08 5:05 AM, Ben Clifford wrote:
> >> On Tue, 1 Apr 2008, Ben Clifford wrote:
> >>
> >>>> With this fixed, the total time in wrapper.sh including the app is
> >>>> now about 15 seconds, with 3 being in the app-wrapper itself. The
> >>>> time seems about evenly spread over the several wrapper.sh
> >>>> operations, which is not surprising when 500 wrappers hit NFS all
> >>>> at once.
> >>> Does this machine have a higher-performance (or different) shared
> >>> filesystem such as PVFS or GPFS? We spent some time in November
> >>> laying out the filesystem to be sympathetic to GPFS, to help avoid
> >>> bottlenecks like the one you are seeing here. It would be kinda sad
> >>> if either it isn't available or you aren't using it even though it
> >>> is available.
> >>
> >> From what I can tell from the web, PVFS and/or GPFS are available on
> >> all of the Argonne Blue Gene machines. Is this true? I don't want to
> >> provide more scalability support for NFS-on-bluegene if it is.
> >>
> > _______________________________________________
> > Swift-devel mailing list
> > Swift-devel at ci.uchicago.edu
> > http://mail.ci.uchicago.edu/mailman/listinfo/swift-devel
> >
>