[tahoe-dev] [tahoe-lafs] #778: "shares of happiness" is the wrong measure; "servers of happiness" is better
tahoe-lafs
trac at allmydata.org
Tue Aug 18 11:03:27 PDT 2009
#778: "shares of happiness" is the wrong measure; "servers of happiness" is
better
--------------------------------+-------------------------------------------
Reporter: zooko | Owner:
Type: defect | Status: new
Priority: critical | Milestone: undecided
Component: code-peerselection | Version: 1.4.1
Keywords: reliability | Launchpad_bug:
--------------------------------+-------------------------------------------
Comment(by swillden):
Replying to [comment:25 zooko]:
> > I'm not sure how this k interacts with the number-of-shares k. Are
> > you assuming implicitly that there is only one share per server?
>
> Treat that as an "implementation detail" that Tahoe-LAFS can optimize
> as it likes, as long as the guarantee that it offers to the user still
> holds.
I like it. Not as much as a statistical reliability threshold, but I like
it.
However, I think that it's worth working out how to implement the
"implementation detail".
So, to be clear, this is my understanding of your proposal:
We still have three configured parameters, {{{k, h, m}}}, which mean,
respectively, "Minimum number of servers required to recover a file",
"Servers of Happiness (aka the repair threshold)" and "Maximum number of
servers to use".
FEC encoding parameters {{{k_e, m_e}}} must be chosen by Tahoe such that
it's possible to satisfy {{{k, h, m}}}.
My suggested algorithm for selecting {{{k_e}}} is to set it to the number
of servers chosen to receive shares, {{{n}}}, which must satisfy {{{k <= h
<= n == k_e <= m}}}. Setting {{{k_e}}} to the number of servers chosen
will maximize download performance when all servers are available
(assuming all servers have similar performance -- and it could do a
reasonable job even if they don't).
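As a sketch, that selection rule might look like the following (the
function name and the error handling are mine, not anything that
exists in Tahoe-LAFS):
{{{
def choose_encoding_k(k, h, m, available_servers):
    """Pick n, the number of servers that will receive shares, and
    use it as k_e, so that k <= h <= n == k_e <= m."""
    assert k <= h <= m, "misconfigured k/h/m"
    n = min(len(available_servers), m)  # never spread over more than m
    if n < h:
        # Too few reachable servers to meet the happiness threshold.
        raise ValueError("only %d servers, need at least %d" % (n, h))
    return n  # k_e = n: one share per server per stripe, max parallelism
}}}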
{{{m_e}}} must then be chosen such that when those {{{m_e}}} shares are
equally distributed across the {{{n}}} servers, any {{{k}}}-server subset
of them has sufficient shares to recover the file. That is, each of the
{{{n}}} servers must have at least {{{k_e // k}}} shares. If
{{{k_e / k}}} is an integer, that means that {{{m_e = n * (k_e / k)}}}.
If {{{k_e / k}}} is not an integer, the minimal {{{m_e}}} is a little more
complicated. In that case, {{{n - k + 1}}} of the servers must have
{{{1 + k_e // k}}} shares, while the remaining {{{k - 1}}} only need
{{{k_e // k}}} shares. So
{{{m_e = (n - k + 1)*(1 + k_e // k) + (k - 1)*(k_e // k)}}}.
Rearranging gives {{{m_e = n - k + 1 + n * (k_e // k)}}}.
In Python code:
{{{
def compute_num_shares(k, n, k_e):
    if k_e % k == 0:
        # Every server gets exactly k_e // k shares.
        return n * (k_e // k)
    else:
        # n - k + 1 servers get 1 + k_e // k shares; the
        # remaining k - 1 get k_e // k shares.
        return n - k + 1 + n * (k_e // k)
}}}
So, for {{{k=3, n=10}}}, and assuming we set {{{k_e = n}}} to maximize
download performance, {{{k_e = 10, m_e = 38}}}. Eight of the 10 servers
will get four shares and two will get three shares. The fewest shares
that any three servers can collectively hold is 10 (exactly
{{{k_e}}}); every other three-server set holds 11 or 12 shares.
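As a quick sanity check on those numbers (plain Python; the share
distribution described above is hard-coded):
{{{
from itertools import combinations

shares = [4] * 8 + [3] * 2           # eight servers with 4, two with 3
assert sum(shares) == 38             # m_e

totals = [sum(c) for c in combinations(shares, 3)]
print(min(totals))                   # 10 == k_e, so any 3 servers suffice
print(sorted(set(totals)))           # [10, 11, 12]
}}}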
This multiplication of shares may even make it easy to address performance
mismatches between servers. The client can request one share from each of
10 servers, but if one of the servers is very fast and completes first,
while another is very slow, the client could request another share from
the fast server.
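A toy model of that strategy (the speeds and the greedy scheduler are
invented for illustration; real download logic would be more involved):
{{{
import heapq

def simulate_download(shares_held, speed, k_e):
    """shares_held maps server -> share count; speed maps server ->
    shares per second. Each server handles one request at a time, and
    the client keeps one request outstanding per server that still
    holds shares. Returns the time at which k_e shares have arrived."""
    pending = [(1.0 / speed[s], s) for s in shares_held]  # first requests
    heapq.heapify(pending)
    remaining = dict(shares_held)
    fetched, now = 0, 0.0
    while fetched < k_e and pending:
        now, s = heapq.heappop(pending)      # next share to arrive
        fetched += 1
        remaining[s] -= 1
        if remaining[s] > 0:                 # fast servers get re-asked
            heapq.heappush(pending, (now + 1.0 / speed[s], s))
    return now
}}}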
One downside of this approach is that it makes the expansion factor
somewhat unpredictable, since it depends on the number of servers
available. More servers means more expansion -- but also more
reliability.
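For instance, reusing {{{compute_num_shares()}}} from above with
{{{k = 3}}} and {{{k_e = n}}} (my numbers, just to show the trend):
{{{
for n in (4, 10, 20, 40):
    m_e = compute_num_shares(3, n, n)
    print(n, m_e, float(m_e) / n)  # expansion factor m_e / k_e
# 4 -> 6 (1.5x), 10 -> 38 (3.8x), 20 -> 138 (6.9x), 40 -> 558 (13.95x)
}}}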
For completeness, we should also define what this approach means for
repair. An algorithm for deciding whether a file needs to be repaired
is (sketched in Python below the list):
1. Sort the available servers by number of shares, ascending.
2. Look at the first {{{k}}} servers in the list. If they collectively
have fewer than {{{k_e}}} shares, remove the first one and repeat this
step.
3. If {{{len(list) >= h}}}, the file is happy.
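In Python, again as a sketch (assuming {{{share_counts}}} is the list
of per-server share counts for the servers that still hold shares):
{{{
def is_happy(share_counts, k, h, k_e):
    counts = sorted(share_counts)             # step 1: ascending
    while len(counts) >= k and sum(counts[:k]) < k_e:
        counts.pop(0)                         # step 2: drop the poorest
    return len(counts) >= h                   # step 3

# The k=3, n=10 example above, with a happiness threshold of h=7:
print(is_happy([4] * 8 + [3] * 2, k=3, h=7, k_e=10))  # True
}}}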
--
Ticket URL: <http://allmydata.org/trac/tahoe/ticket/778#comment:26>
tahoe-lafs <http://allmydata.org>
secure decentralized file storage grid