[Swift-devel] Swift-issues (PBS+NFS Cluster)

Tim Freeman tfreeman at mcs.anl.gov
Tue May 12 12:38:40 CDT 2009


On Tue, 12 May 2009 12:30:40 -0500
Ian Foster <foster at anl.gov> wrote:

> Tim:
> 
> Can you "share" data by mounting an EBS volume on node A, sticking  
> data on it, then unmounting the volume and mounting it on node B?

Yes, as long as node B is in the same EC2 "availability zone."

People often set up something like that: the production instance fails and a hot
backup instance takes over with the production data intact.  The only difference
from your scenario is that when an instance fails there, it detaches
'violently' from the EBS volume.  There can be sync issues in the violent case
(OS buffer cache --> disk).
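A minimal sketch of that hand-off, assuming the AWS CLI is available (the volume
ID, instance ID, device name, and mount point below are all placeholders), and
unmounting cleanly first to avoid the buffer-cache sync issue mentioned above:

```shell
# On node A: flush dirty pages and unmount before detaching, so the
# filesystem on the EBS volume is consistent (avoids the 'violent
# detach' sync issue).
sudo sync
sudo umount /mnt/data

# Detach from node A, wait until the volume is free, then attach it to
# node B (which must be in the same availability zone as the volume).
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 wait volume-available --volume-ids vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0fedcba9876543210 --device /dev/sdf

# On node B: mount the volume and pick up the data.
sudo mount /dev/sdf /mnt/data
```

This is only a sketch of the sequence, not what Swift itself does; the point is
that the volume is attached to exactly one instance at any given time.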

Tim


> Ian.
> 
> 
> On May 12, 2009, at 12:27 PM, Tim Freeman wrote:
> 
> > On Tue, 12 May 2009 12:04:23 -0500
> > Michael Wilde <wilde at mcs.anl.gov> wrote:
> >
> >> Indeed.
> >>
> >> I should note that the intent in Swift is to reduce or eliminate
> >> accesses to shared filesystems, so EBS and S3 would become similar.
> >
> > Is that for speed or fewer moving parts?  I think EBS is the fastest
> > option they have for disk space (faster than local disk), FYI.
> >
> > And just to make sure everyone is on the same page: EBS volumes are not
> > shared in and of themselves.  They are much like any block device that
> > you mount to an OS.  Multiple EC2 instances cannot mount them
> > simultaneously.
> >
> > They are durable (they exist without EC2 instances attached to them),
> > portable (they can be mounted by other instances if the current instance
> > dies, etc.), snapshottable (with an explicit save to S3 -- it is not the
> > same thing as S3), and RAID-able (one instance can mount multiple EBS
> > volumes and get even more performance), but they are not shared in the
> > traditional sense of the word.
> >
> > Tim
> 



More information about the Swift-devel mailing list