[tahoe-lafs-trac-stream] [tahoe-lafs] #2059: Increase file reliability against group failure

tahoe-lafs trac at tahoe-lafs.org
Sat Aug 17 22:10:11 UTC 2013


#2059: Increase file reliability against group failure
------------------------------------+-----------------------------------------------
     Reporter:  markberger          |      Owner:
         Type:  enhancement         |     Status:  new
     Priority:  normal              |  Milestone:  undecided
    Component:  code-peerselection  |    Version:  1.10.0
   Resolution:                      |   Keywords:  preservation servers-of-happiness
Launchpad Bug:                      |
------------------------------------+-----------------------------------------------
Description changed by markberger:

Old description:

> Servers of happiness improves share distribution and guarantees a file
> can be recovered for up to n - h node failures. However, if a group of
> nodes fail, servers of happiness makes no guarantees. If I lose all the
> machines in my house, I have no way of knowing whether my other nodes
> have enough shares to reconstruct all my data.
>
> One way of fixing this is to group a maximum of n - h nodes in a single
> location, but I think that solution is silly because I might not want to
> increase my n or lower my h to meet the requirement. Instead, I should be
> able to organize storage nodes into failure groups and guarantee that a
> subset of those groups can reconstruct the file. Given a set of groups
> with g cardinality, any subset with a cardinality of g - 1 must have k
> shares.
>
> This is somewhat related to #467, but I think this ticket serves a
> different purpose.

New description:

 Servers of happiness improves share distribution and guarantees that a file
 can be recovered after up to n - h node failures. However, if a group of
 nodes fails, servers of happiness makes no guarantees. If I lose all the
 machines in my house, I have no way of knowing whether my other nodes have
 enough shares to reconstruct all my data.

 One way of fixing this is to place at most n - h nodes in any single
 location, but I think that solution is silly because I might not want to
 increase my n or lower my h to meet that requirement. Instead, I should be
 able to organize storage nodes into failure groups and guarantee that a
 subset of those groups will be able to reconstruct every file. Given a set
 of g groups, any subset of g - 1 groups must hold at least k shares.
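
 A minimal sketch (plain Python, not part of Tahoe-LAFS) of the proposed
 check, assuming a hypothetical "placement" mapping from each failure group
 to the set of share numbers held by servers in that group: verify that
 losing any single group still leaves at least k distinct shares among the
 remaining groups. The group names below are illustrative only.

     def survives_single_group_failure(placement, k):
         """Return True iff every subset of g - 1 groups still holds at
         least k distinct shares, i.e. the file can be reconstructed
         after the loss of any one failure group."""
         groups = list(placement)
         for failed in groups:
             remaining = set()
             for name in groups:
                 if name != failed:
                     remaining |= placement[name]
             if len(remaining) < k:
                 return False
         return True

     # Example with k = 3: losing the "house" group leaves only shares
     # {2, 3}, fewer than k, so this placement fails the check.
     placement = {
         "house":  {0, 1, 2},
         "office": {2, 3},
         "vps":    {3},
     }
     print(survives_single_group_failure(placement, k=3))  # False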

 This is somewhat related to #467, but I think this ticket serves a
 different purpose.

--

-- 
Ticket URL: <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2059#comment:1>
tahoe-lafs <https://tahoe-lafs.org>
secure decentralized storage

