#707 new enhancement

use local storage server as encrypted cache

Reported by: kpreid
Owned by: nobody
Priority: minor
Milestone: undecided
Component: code-encoding
Version: 1.4.1
Keywords: performance
Cc: kpreid, vikarti@…
Launchpad Bug:

Description

Summary:

  • Local caching of arbitrary portions of a Tahoe filesystem can be implemented by adding shares to the node's local storage server.
  • This can be considered an aspect of server-selection policy, except that the implementation is not solely the uploader's choice: nodes proactively add shares to their own stores.
  • Caching of a selected subtree could be implemented as a variant of deep-check/repair (a sketch follows below).
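
A minimal sketch of "cache this subtree" as a deep traversal, analogous to deep-check/repair. All names used here (dirnode.list(), get_storage_index(), grid.download_shares(), local_server.count_shares()/put_share()) are hypothetical stand-ins for illustration, not actual Tahoe-LAFS APIs; k is the number of shares needed to reconstruct a file (3 in Tahoe's default 3-of-10 encoding).

{{{#!python
def cache_subtree(root_dirnode, grid, local_server, k=3):
    """Walk a directory subtree and ensure the local storage server holds
    enough shares (k) of every file to reconstruct it without the grid.
    The cache stays encrypted: only ciphertext shares land on local disk."""
    pending = [root_dirnode]
    while pending:
        dirnode = pending.pop()
        for _name, child in dirnode.list().items():
            if child.is_directory():
                pending.append(child)          # descend into subdirectories
                continue
            storage_index = child.get_storage_index()
            needed = k - local_server.count_shares(storage_index)
            if needed <= 0:
                continue                       # already fully cached locally
            # Pull the missing shares from the grid and keep them locally.
            for share_number, share_data in grid.download_shares(storage_index, needed):
                local_server.put_share(storage_index, share_number, share_data)
}}}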

Condensed log (the strawman proposal at [18:00] is sketched in code after the log):

  • [17:39] dreid: I think it's a negative user experience to require UI file operations to actually hit a tahoe node. At the "I have a folder on disk and I want to access its data" level.
  • [17:39] kpreid: you mean, due to response time?
  • [17:44] zooko: dreid: you mean you want the file contents to be cached locally, right?
  • [17:45] dreid: zooko: For most stuff yeah.
  • [17:46] kpreid: that's just a matter of proper distribution of shares :-)
  • [17:46] dreid: I obviously wouldn't want that for like my Pictures folder, because that's huge. But for where I keep my documents, and presentations, and my bzr repos, totally want them always local.
  • [17:47] kpreid: (semi-serious: using the shares for cache is good if you want your local machine to not have plaintext when idle. you could also put a plaintext cache in an encrypted fs, but that's less trivial)
  • [17:56] dreid: kpreid: I don't think in everyday usage I would care, but I can see how some people would.
  • [17:56] kpreid: well, I run an encrypted-homedir laptop. I want my private keys and financial info to not fall into random laptop thief's hands and so on
  • [17:57] dreid: Well you're a paranoid freak. (In the nicest way possible.)
  • [17:58] dreid: My private keys live in my pocket, but yes i see your point.
  • [17:58] kpreid: so I could put my cache of hypothetical private shared documents on tahoe in that home dir, but then I'm bloating it with cache -- I'd rather keep some already-ciphered shares around
  • [17:58] kpreid: so it's just a matter of appropriate share (re)location policy!
  • [18:00] kpreid: Random strawman proposal: start with downloading shares of all files in a directory you just viewed (do it starting with 1 share parallel across all files, not k shares)
  • [18:32] zooko: kpreid: I really like your idea of using "server selection policy" as an implementation of "caching".
  • [18:32] kpreid: but...this is not *just* the uploader's choice; this is nodes proactively adding shares to their own stores
  • [18:33] zooko: Good point.
  • [18:34] kpreid: which isn't to say it's not a good thing, but just that it's a slightly different mechanism from uploader's selection
  • [18:35] zooko: Yep.
  • [18:36] kpreid: also: deep-check/repair is similar to 'pull this subtree into my cache' in what it has to do
  • [18:59] zooko: Okay, that's a cool idea. Maybe mention it on some ticket? Gotta run... :-)
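
A minimal sketch of the [18:00] strawman: after a directory is viewed, fetch one share of each child file, working in parallel across the files rather than pulling k shares of any single file first. As above, the dirnode/grid/local_server helpers are hypothetical stand-ins, not actual Tahoe-LAFS APIs.

{{{#!python
from concurrent.futures import ThreadPoolExecutor

def prefetch_one_share_each(dirnode, grid, local_server, max_workers=8):
    """Warm the local encrypted cache for a just-viewed directory by
    fetching one share per child file, in parallel across the files."""
    def fetch_one(child):
        storage_index = child.get_storage_index()
        if local_server.count_shares(storage_index) > 0:
            return                             # a local share already exists
        share_number, share_data = grid.download_one_share(storage_index)
        local_server.put_share(storage_index, share_number, share_data)

    files = [child for _name, child in dirnode.list().items()
             if not child.is_directory()]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # breadth-first: one share per file, not k shares of one file
        list(pool.map(fetch_one, files))
}}}

Starting with one share per file spreads the prefetch budget across the whole directory; a later full read still needs the remaining k-1 shares from the grid (or from a subsequent pass of the subtree-caching sketch above).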

Change History (4)

comment:1 Changed at 2009-06-19T21:51:44Z by kpreid

  • Cc kpreid added

comment:2 Changed at 2009-07-11T11:33:12Z by warner

  • Component changed from unknown to code-encoding

comment:3 Changed at 2009-11-26T23:28:35Z by davidsarah

  • Keywords performance added

comment:4 Changed at 2012-02-16T07:46:12Z by vikarti

  • Cc vikarti@… added