[tahoe-lafs-trac-stream] [tahoe-lafs] #573: Allow client to control which storage servers receive shares
tahoe-lafs
trac at tahoe-lafs.org
Fri Jun 28 04:28:01 UTC 2013
#573: Allow client to control which storage servers receive shares
------------------------------------+-----------------------
Reporter: swillden | Owner:
Type: enhancement | Status: closed
Priority: minor | Milestone: undecided
Component: code-peerselection | Version: 1.2.0
Resolution: duplicate | Keywords:
Launchpad Bug: |
------------------------------------+-----------------------
New description:
Although in general the peer list permutation method is great, there are
some situations in which clients may want to choose which specific peers
get their shares.

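
For context, the permuted-list idea can be sketched like this (a simplified illustration of per-file server ordering, not Tahoe's actual hashing scheme):

```python
import hashlib

def permuted_servers(storage_index: bytes, server_ids: list[bytes]) -> list[bytes]:
    # Order servers by hash(storage_index + server_id): every file gets its
    # own stable server ordering, so a downloader re-deriving the same
    # permutation knows which servers to ask first.
    return sorted(server_ids,
                  key=lambda sid: hashlib.sha256(storage_index + sid).digest())
```

Because the ordering is derived only from the storage index and the server IDs, any client can recompute it later; that property is exactly what explicit targeting risks breaking.
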
One example is clients performing backup operations. If the client
machine is also running a storage server, the client will often place a
share there, but in the most important backup recovery scenarios the local
storage server's data will be gone, too. By choosing to place a share
locally, the client is more or less "pre-failing" that share. Similarly,
if a single site has multiple systems participating in a network, it's
probably not advisable to store shares on any of the co-located machines,
if part of the backup goal is the ability to survive catastrophe
(excepting data centers that provide other disaster recovery solutions, of
course).

So, backup clients should be able to specify that machine-local and
site-local storage is not allowed.

Another example is a client that wants to provide fast local access to
specific caps (mutable or immutable) to certain peer clients. By choosing
to place k shares with a specified storage server, the client can ensure
that a client on that machine can access those caps without reaching out
over the network.

Of course, deliberately placing k shares with a single storage server
somewhat reduces reliability, assuming the remaining n-k shares are
distributed normally. This is mitigated to a large degree if the k-share
server is known to be highly available (e.g. if I want to back up my
digital photos to the network, but put k shares of each on my Mom's
computer so that she has fast access to the photos, I can take steps to
make sure that the data on her storage server is available). However, to
further mitigate this risk, a client that is storing k shares with a
single storage server should probably distribute more than n-k shares to
other storage servers.

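
To make that reliability trade-off concrete, here is a back-of-the-envelope comparison, assuming independent server failures; the parameters (n=10, k=3, per-server availability 0.9) are illustrative and not from the ticket:

```python
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    # Probability that at least k of n independent servers, each available
    # with probability p, are reachable.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, k, p = 10, 3, 0.9  # illustrative parameters

# Normal placement: one share on each of n servers; any k suffice.
p_normal = p_at_least(k, n, p)                        # ~0.9999996

# Targeted placement: k shares on one server, n-k spread normally.
# Recovery succeeds if that one server is up, or if at least k of
# the remaining n-k servers are.
p_targeted = p + (1 - p) * p_at_least(k, n - k, p)    # ~0.999982
```

With these assumptions both layouts recover with very high probability, but the targeted layout's failure probability is roughly fifty times larger, which is why storing extra shares beyond n-k elsewhere helps.
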
There are probably other scenarios where clients should be able to
exercise greater control over share targeting as well.

I think share targeting should not be a tahoe-level configuration option.
Instead, I think tahoe should provide an API to allow applications using
tahoe to specify target selection parameters.

One issue created by targeted peer selection is that it breaks the share
recovery search process. A backup client's refusal to store a share
locally is not a problem: essentially we're just simulating a full storage
peer, and we can otherwise walk the permuted peer list. It becomes a
problem when clients choose the peers that receive shares, potentially
ignoring the permuted list entirely and completely breaking the recovery
search. This could force the recovery process to search the entire
network.

One solution is to simply ignore the issue and accept that recovery of
targeted shares is harder. In small networks that would be fine, since
you're probably retrieving from almost every peer anyway. In larger
networks, searching the entire peer set might be unacceptable. If
applications can request specific targeting for storage, perhaps they
should also be able to suggest specific peers for recovery. Then they
could store the targeted peer list as another file, and place that file
normally. The only problem I see with making this a purely application-
level issue is that a generic Repairer will have a hard time finding the
shares, unless it is also told where they are, or knows about the pointer
file.

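
The pointer-file idea might look roughly like this; the JSON layout, field names, and cap string are invented for illustration:

```python
import json

def make_pointer_file(cap: str, targeted_peers: list[str]) -> bytes:
    # Hypothetical "pointer file": records which peers hold the targeted
    # shares for a cap.  Stored via the normal permuted placement, it lets
    # a repairer (or the owner) rediscover out-of-band share locations.
    return json.dumps({"cap": cap, "targeted_peers": targeted_peers}).encode()

def read_pointer_file(blob: bytes) -> list[str]:
    # Counterpart: recover the targeted-peer list before searching.
    return json.loads(blob)["targeted_peers"]
```

A repairer that knows this convention could fetch the pointer file normally, then query the listed peers directly instead of scanning the grid.
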
As for the nature of the targeting APIs, I can think of a lot of
sophisticated ways to specify selection criteria, but we should probably
start simple and then see if something more is required. The simplest
solution I can think of is to allow the application to specify a list of
(peer ID, share count) tuples. The client would traverse this list and
deliver the specified number of shares to each peer. Any remaining shares
(assuming sum(share_counts) < n) would be delivered normally, except that
peers with a specified count of 0 would not receive any shares, even if
they're at the top of the peer list.
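
That proposed API could be sketched roughly as follows; `assign_shares`, its arguments, and the one-share-per-remaining-peer fallback are all hypothetical, not an existing Tahoe interface:

```python
def assign_shares(targets, permuted_peers, n):
    """Sketch of the proposed API: `targets` is a list of
    (peer_id, share_count) tuples, where a count of 0 means "never give
    this peer a share".  Remaining shares fall back to the permuted list."""
    placement = {}
    excluded = set()
    shares_left = n
    for peer_id, count in targets:
        if count == 0:
            excluded.add(peer_id)
        else:
            take = min(count, shares_left)
            placement[peer_id] = take
            shares_left -= take
    # Distribute whatever is left along the normal permuted order,
    # skipping excluded peers and those already targeted.
    for peer_id in permuted_peers:
        if shares_left == 0:
            break
        if peer_id in excluded or peer_id in placement:
            continue
        placement[peer_id] = 1
        shares_left -= 1
    return placement
```
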
--
Comment (by nejucomo):
This is one possible stepping-stone feature toward the "universal caps"
use case, for which I just created ticket #2009.
--
Ticket URL: <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/573#comment:13>
tahoe-lafs <https://tahoe-lafs.org>
secure decentralized storage