#2059 closed enhancement (duplicate)

Increase file reliability against group failure

Reported by: markberger Owned by:
Priority: normal Milestone: undecided
Component: code-peerselection Version: 1.10.0
Keywords: preservation servers-of-happiness Cc:
Launchpad Bug:

Description (last modified by markberger)

Servers of happiness improves share distribution and guarantees a file can be recovered for up to h - k node failures. However, if a group of nodes fails together, servers of happiness makes no guarantees. If I lose all the machines in my house, I have no way of knowing whether my other nodes have enough shares to reconstruct all my data.

One way of fixing this is to place a maximum of h - k nodes in a single location, but I think that solution is silly because I might not want to increase my n or lower my h to meet the requirement. Instead, I should be able to organize storage nodes into failure groups and guarantee that a subset of those groups will be able to reconstruct every file. Given a set of g groups, any subset of g - 1 groups must collectively hold at least k shares of every file.
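
To make the proposed guarantee concrete, here is a minimal sketch of the check (not part of Tahoe-LAFS; the mapping layout and the function name survives_any_single_group_failure are hypothetical): given a placement of shares across failure groups, verify that every subset of g - 1 groups still holds at least k distinct shares, i.e. that the file survives the loss of any single group.

{{{
#!python
# Hypothetical sketch, not Tahoe-LAFS code: verify that the shares held by
# any g - 1 of the g failure groups suffice to reconstruct a k-of-n file.

def survives_any_single_group_failure(shares_by_group, k):
    """shares_by_group maps a failure-group name (e.g. 'home', 'office')
    to the set of distinct share numbers stored in that group.
    Return True if, whichever single group fails, the remaining groups
    still hold at least k distinct shares."""
    groups = list(shares_by_group)
    for failed in groups:
        remaining = set()
        for group in groups:
            if group != failed:
                remaining |= shares_by_group[group]
        if len(remaining) < k:
            return False
    return True

# Example: k = 3, seven shares spread over three failure groups.
placement = {
    'home':   {0, 1, 2},
    'office': {3, 4},
    'vps':    {5, 6},
}
print(survives_any_single_group_failure(placement, k=3))  # True
}}}

An uploader that knew the group assignments could run the same check on a candidate placement and reject it (or place additional shares) whenever it returns False.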

This is somewhat related to #467, but I think this ticket serves a different purpose.

Change History (5)

comment:1 Changed at 2013-08-17T22:10:09Z by markberger

  • Description modified (diff)

comment:2 Changed at 2013-08-18T14:51:19Z by markberger

  • Description modified (diff)

comment:3 Changed at 2013-08-19T12:49:02Z by markberger

This feature should be disabled by default, because new users should have to change as few options as possible to start using Tahoe.

comment:4 Changed at 2013-08-19T20:41:51Z by PRabahy

Looks related to/duplicate of #1838 to me.

comment:5 Changed at 2013-08-21T03:08:13Z by zooko

  • Resolution set to duplicate
  • Status changed from new to closed

I agree that this is a dup of #1838. As often happens, the text in this dup is worth reading for anyone following the other ticket, so I'll post a comment there urging them to do so. Also, please read #573, which was another dup and also had a lot of useful detail.
