[tahoe-dev] [tahoe-lafs] #573: Allow client to control which storage servers receive shares

tahoe-lafs trac at allmydata.org
Sun Jun 21 23:50:49 PDT 2009


#573: Allow client to control which storage servers receive shares
--------------------------------+-------------------------------------------
 Reporter:  swillden            |           Owner:           
     Type:  enhancement         |          Status:  new      
 Priority:  minor               |       Milestone:  undecided
Component:  code-peerselection  |         Version:  1.2.0    
 Keywords:                      |   Launchpad_bug:           
--------------------------------+-------------------------------------------

Comment(by warner):

 I had a thought: one scaling problem we've anticipated is the practical
 requirement of maintaining a connection to all known servers. Because each
 file will put shares on a pseudorandomly-selected set of servers, reading
 A files will obligate you to talk to something like min(A*k, NUMSERVERS)
 hosts. You can make peer-selection more clever and have it prefer to use
 hosts that it already has connections to, but if you're grabbing a
 significant number of files, you're going to wind up with a full mesh of
 connections.
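 As a rough back-of-the-envelope sketch (my own estimate, not from the
 ticket), you can see how quickly random share placement approaches a full
 mesh with a standard occupancy calculation:

```python
def expected_servers_contacted(num_files, k, num_servers):
    """Occupancy estimate: each read touches k pseudorandomly-chosen
    servers, so a given server stays untouched across num_files reads
    with probability roughly (1 - k/num_servers)**num_files."""
    p_untouched = (1 - k / num_servers) ** num_files
    return num_servers * (1 - p_untouched)

# e.g. with k=3 and 100 servers, reading 100 files already touches
# roughly 95 of the 100 servers -- nearly a full mesh.
```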

 But, what if we assume that most of the time, people are downloading their
 own files? Or files that were mostly uploaded by the same person? We could
 set it up such that Alice's files all tend to wind up on the same N
 servers (except for overspill due to full/faulty ones). Then, if we make
 the network layer only bring up connections on demand, and prefer
 connected servers to not-yet-connected servers, this use case would see
 fewer connections being made.
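 A minimal sketch of that preference (purely illustrative, not Tahoe's
 actual selector; `candidates` is assumed to already be in permuted
 preference order):

```python
def pick_servers(candidates, connected, needed):
    """Prefer servers we already hold connections to; only fall back
    to not-yet-connected servers when the connected pool can't
    satisfy the request (which would trigger an on-demand connect)."""
    chosen = [s for s in candidates if s in connected][:needed]
    for s in candidates:
        if len(chosen) >= needed:
            break
        if s not in chosen:
            chosen.append(s)  # on-demand connection happens here
    return chosen
```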

 We've talked about splitting out a new "peer selection index" from the
 existing storage-index. To achieve useful load-balancing across the grid
 in this scheme I'm thinking about, we could have each node choose a random
 peer-selection-index at create-client time, and use it for each upload (of
 course, it could be overridden per-upload). Now, and this is the important
 part, the peer-selection-index gets written into the filecap. Everything
 Alice uploads will have the same PSI, and anyone who sees the filecap will
 have enough information to create the right serverlist and ask the right
 servers. But Alice's files will all tend to be concentrated on the first N
 machines in her own personal list.
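 Since Tahoe already orders servers by hashing the storage-index together
 with each serverid, the change amounts to keying that permutation on the
 PSI instead. A hedged sketch (function name hypothetical):

```python
import hashlib

def permuted_serverlist(psi, server_ids):
    """Order servers by H(PSI || serverid). Every upload carrying the
    same PSI sees the same ordering, so Alice's shares concentrate on
    the first N entries of *her* list, and any reader holding the
    filecap (and thus the PSI) can recompute the same serverlist."""
    return sorted(server_ids,
                  key=lambda sid: hashlib.sha256(psi + sid).digest())
```

 The ordering depends only on the PSI and the set of serverids, not on the
 order they were learned in, which is what makes it reproducible by readers.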

 You could imagine changing the way this PSI is implemented to reference
 some other ordering, perhaps an explicitly-defined one (which is perhaps
 stored in some distributed fashion, and retrievable with the PSI? imagine
 if the PSI is a random string that can somehow be turned into a mutable
 readcap, and you publish a list of storage server ids/furls in that slot,
 and you give it a DNS-style TTL so readers are allowed to cache it for a
 long time).  I think that might accomplish the goals of this ticket while
 still making the filecap sufficient to retrieve any file.
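 The DNS-style TTL idea might look like this on the read side (purely
 illustrative; `fetch_serverlist_from_slot` is a hypothetical helper that
 resolves the PSI-derived mutable readcap and returns the published list
 plus its TTL):

```python
import time

_cache = {}  # psi -> (expiry_timestamp, serverlist)

def resolve_serverlist(psi, fetch_serverlist_from_slot, now=time.time):
    """Return the explicit serverlist published in the PSI's mutable
    slot, honoring the publisher's TTL the way a DNS resolver would."""
    entry = _cache.get(psi)
    if entry and entry[0] > now():
        return entry[1]          # still fresh: no network round-trip
    serverlist, ttl = fetch_serverlist_from_slot(psi)
    _cache[psi] = (now() + ttl, serverlist)
    return serverlist
```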

 On the other hand, maybe Alice doesn't want all of her files to be
 concentrated on the same N servers. If she loses N-k+1 of those, she's
 lost everything. She might prefer a distribute-over-all-servers mode which
 gives a slightly higher chance of losing a small number of files, over the
 slightly-lower chance of losing everything. For her, she'll just duplicate
 or hash the storage-index to generate the PSI for each upload, giving the
 exact same behavior that we have now.
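 In code, that fallback is just a different PSI derivation per upload (a
 sketch; the tag string is hypothetical):

```python
import hashlib

def psi_for_upload(storage_index, shared_psi=None):
    """A node-wide PSI concentrates all of Alice's files on her first
    N servers; deriving a per-file PSI from the storage-index instead
    reproduces today's spread-over-all-servers placement."""
    if shared_psi is not None:
        return shared_psi
    return hashlib.sha256(b"peer-selection-index:" + storage_index).digest()
```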

-- 
Ticket URL: <http://allmydata.org/trac/tahoe/ticket/573#comment:2>
tahoe-lafs <http://allmydata.org>
secure decentralized file storage grid

