[tahoe-lafs-trac-stream] [tahoe-lafs] #2107: don't place shares on servers that already have shares
tahoe-lafs
trac at tahoe-lafs.org
Tue Dec 3 02:42:34 UTC 2013
#2107: don't place shares on servers that already have shares
-------------------------+-------------------------------------------------
 Reporter:  zooko        |       Owner:
     Type:  enhancement  |      Status:  new
 Priority:  normal       |   Milestone:  undecided
Component:               |     Version:  1.10.0
  code-peerselection     |    Keywords:  upload servers-of-happiness
Resolution:              |               brians-opinion-needed
Launchpad Bug:           |
-------------------------+-------------------------------------------------
Comment (by gdt):
So no, I did not really follow the s-o-h definition. I don't see why a
k-sized subset is the right metric, where k is the shares-needed (e.g. 3
in 3-of-10 coding). If I have 4 servers and 3-of-10 encoding, then I
certainly want any 3 servers to be able to recover the file, but I want
more than that, because losing 2 of 4 servers is not so unthinkable. What
I want is to minimize the probability that the file will be
unrecoverable, given some arguably reasonable assumptions, and subject to
not trying to store any particular share on more than one server, on the
theory that such duplication is wasteful and should instead be achieved
by changing the coding parameters.
For S >> N, I see why s-o-h makes sense; it essentially (though not
quite) measures the number of servers that hold a share. But I don't see
how s-o-h relates strictly to the probability of successful recovery
(even assuming equal probability of node failure). And probability of
success is what we should care about, not some intermediate metric whose
relationship to what we care about is hard to understand.
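To make the distinction concrete, here is a small sketch (my own
illustration, not Tahoe-LAFS code) that computes the exact recovery
probability for a given share placement, under the assumptions stated
above: shares are distinct, no share is stored on more than one server,
and servers fail independently with equal probability:

```python
from itertools import product

def recovery_probability(shares_per_server, k, p_survive):
    """Probability that at least k distinct shares survive, where
    shares_per_server[i] is the number of (distinct) shares held by
    server i and each server survives independently with p_survive."""
    prob = 0.0
    n = len(shares_per_server)
    # Enumerate every alive/dead pattern over the n servers.
    for alive in product([0, 1], repeat=n):
        surviving = sum(c for c, a in zip(shares_per_server, alive) if a)
        if surviving >= k:
            p = 1.0
            for a in alive:
                p *= p_survive if a else (1.0 - p_survive)
            prob += p
    return prob

# 3-of-10 coding over 4 servers, each server up with probability 0.9:
print(recovery_probability([3, 3, 2, 2], 3, 0.9))  # 0.9981
print(recovery_probability([7, 1, 1, 1], 3, 0.9))  # 0.9729
```

Both placements put 10 shares on 4 servers and both satisfy "any 3
servers suffice", yet their failure probabilities differ by an order of
magnitude (0.0019 vs. 0.0271), which is exactly the kind of distinction
an intermediate metric can blur.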
--
Ticket URL: <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2107#comment:27>
tahoe-lafs <https://tahoe-lafs.org>
secure decentralized storage
More information about the tahoe-lafs-trac-stream
mailing list