[tahoe-dev] [tahoe-lafs] #999: amazon s3 backend

tahoe-lafs trac at allmydata.org
Tue Mar 16 09:03:05 PDT 2010


#999: amazon s3 backend
--------------------------+-------------------------------------------------
 Reporter:  zooko         |           Owner:           
     Type:  enhancement   |          Status:  new      
 Priority:  major         |       Milestone:  undecided
Component:  code-storage  |         Version:  1.6.0    
 Keywords:  gsoc          |   Launchpad_bug:           
--------------------------+-------------------------------------------------
 (originally I incorrectly posted this to #917)

 The way to do it is to make a variant of
 [source:src/allmydata/storage/server.py] which doesn't read from local
 disk in its [source:src/allmydata/storage/server.py@4164#L359
 _iter_share_files()], but instead reads the files from its S3 bucket (it
 is an S3 client and a Tahoe-LAFS storage server). Likewise, make
 variants of [source:src/allmydata/storage/shares.py@3762
 storage/shares.py], [source:src/allmydata/storage/immutable.py@3871#L39
 storage/immutable.py], and [source:src/allmydata/storage/mutable.py@3815#L34
 storage/mutable.py] which write their data out to S3 instead of to the
 local filesystem.
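 For illustration, here is a minimal sketch of what the S3-reading
 variant of _iter_share_files() might look like, using the boto
 library. The bucket name, the key layout (mirroring the on-disk
 shares/ directory), and the S3ShareReader class are guesses of mine,
 not existing code:

{{{
#!python
import boto

class S3ShareReader(object):
    """Hypothetical sketch: list and read shares from an S3 bucket
    instead of the local shares/ directory."""

    def __init__(self, bucket_name):
        # boto reads AWS credentials from the environment or ~/.boto
        self._bucket = boto.connect_s3().get_bucket(bucket_name)

    def _iter_share_files(self, storageindex_s):
        # assume keys mirror the on-disk layout:
        # shares/<2-char prefix>/<storage index>/<shnum>
        prefix = "shares/%s/%s/" % (storageindex_s[:2], storageindex_s)
        for key in self._bucket.list(prefix=prefix):
            shnum = int(key.name.rsplit("/", 1)[-1])
            # a real variant would wrap this in a ShareFile-like
            # object rather than returning raw bytes
            yield shnum, key.get_contents_as_string()
}}}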

 Probably one should start by abstracting out just the "does this go
 to local disk, S3, Rackspace Cloud Files, etc." part from all the
 other functionality in those four files...  :-)
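 A rough sketch of what that abstraction might look like; the
 interface name, method names, and S3 key layout here are all
 assumptions (boto again for the S3 side):

{{{
#!python
import os
import boto

class ShareBackend(object):
    """Hypothetical interface: the "where do shares live?" part,
    factored out of server.py/shares.py/immutable.py/mutable.py."""

    def read_share(self, storageindex_s, shnum):
        raise NotImplementedError

    def write_share(self, storageindex_s, shnum, data):
        raise NotImplementedError


class DiskBackend(ShareBackend):
    """What the code does today: shares under a local directory."""

    def __init__(self, sharedir):
        self._sharedir = sharedir

    def _path(self, storageindex_s, shnum):
        return os.path.join(self._sharedir, storageindex_s[:2],
                            storageindex_s, str(shnum))

    def read_share(self, storageindex_s, shnum):
        f = open(self._path(storageindex_s, shnum), "rb")
        try:
            return f.read()
        finally:
            f.close()

    def write_share(self, storageindex_s, shnum, data):
        path = self._path(storageindex_s, shnum)
        d = os.path.dirname(path)
        if not os.path.isdir(d):
            os.makedirs(d)
        f = open(path, "wb")
        try:
            f.write(data)
        finally:
            f.close()


class S3Backend(ShareBackend):
    """Same interface, but the shares live in an S3 bucket."""

    def __init__(self, bucket_name):
        self._bucket = boto.connect_s3().get_bucket(bucket_name)

    def _keyname(self, storageindex_s, shnum):
        return "shares/%s/%s/%d" % (storageindex_s[:2],
                                    storageindex_s, shnum)

    def read_share(self, storageindex_s, shnum):
        key = self._bucket.get_key(self._keyname(storageindex_s, shnum))
        return key.get_contents_as_string()

    def write_share(self, storageindex_s, shnum, data):
        key = self._bucket.new_key(self._keyname(storageindex_s, shnum))
        key.set_contents_from_string(data)
}}}

 With something like that in place, those four files could take a
 backend object at construction time, and S3 vs. local disk becomes a
 configuration choice rather than a fork of the code.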

-- 
Ticket URL: <http://allmydata.org/trac/tahoe/ticket/999>
tahoe-lafs <http://allmydata.org>
secure decentralized file storage grid

