Changes between Version 22 and Version 23 of ServerSelection


Timestamp: 2014-02-04T12:17:21Z (10 years ago)
Author: zooko
Comment: tracify and/or relativize links

  • ServerSelection

    v22 v23
    10  10  * Jacob Appelbaum and Harold Gonzales want to specify a set of servers which collectively are guaranteed to have at least K shares -- they intend to use this to specify the ones that are running as Tor hidden services and thus are attack-resistant (but also extra slow-and-expensive to reach). Interestingly, the server selection policy on ''download'' should be that the K servers which are Tor hidden services are downloaded from only as a last resort.
    11  11  * Several people -- again, I'm sorry, I've forgotten specific attribution -- want to identify which servers live in which cluster, co-lo, or geographical area, and then to distribute shares evenly across clusters/colos/geographical areas instead of evenly across servers.
    12        * Here's an example of this desire: Nathan Eisenberg asked on the mailing list for "Proximity Aware Decoding": http://tahoe-lafs.org/pipermail/tahoe-dev/2009-December/003286.html
        12    * Here's an example of this desire: Nathan Eisenberg asked on the mailing list for "Proximity Aware Decoding": [//pipermail/tahoe-dev/2009-December/003286.html]
    13  13    * If you have ''K+1'' shares stored in a single location, then you can repair after a loss (such as a hard drive failure) in that location without having to transfer data from other locations. This can save bandwidth expenses (since intra-location bandwidth is typically free), and of course it also means you can recover from that hard drive failure in that one location even if all the other locations have been stomped to death by Godzilla.
    14  14    * This is called "rack awareness" in the Hadoop and Cassandra projects, where the unit of distribution would be the rack.
    15        * John Case wrote a letter to tahoe-dev asking for this feature and comparing it to the concept of "families" in the Tor project: http://tahoe-lafs.org/pipermail/tahoe-dev/2011-April/006301.html
        15    * John Case wrote a letter to tahoe-dev asking for this feature and comparing it to the concept of "families" in the Tor project: [//pipermail/tahoe-dev/2011-April/006301.html]
    16  16  * Brian Parma wants to share storage with one other person, and have all of his files stored on their server and vice versa. (Since he already has local copies of his files, there's no value to him in storing his files on his own server.)
    17  17  * A. Montero wants a typical reciprocal friendnet backup grid, but the nodes are connected only sporadically via direct links (LAN/USB). Nodes are unlikely to see each other over the internet, and bandwidth is low. In order to exchange the shares of each participant's local backups, nodes connect from time to time in a rendezvous operation and the exchange happens. See ticket #1657 for a detailed description.
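
The "distribute evenly across locations" and ''K+1''-shares-per-location items above lend themselves to a small worked example. Below is a minimal, hypothetical Python sketch -- not Tahoe-LAFS code; the Server class, the location tags, and the round-robin policy are assumptions made for illustration -- of dealing shares out evenly across locations rather than across individual servers, plus a check of the local-repair property: a location holding at least K+1 shares can lose one share and still have K on hand, enough to regenerate the lost share without transferring share data in from other locations.

{{{#!python
# Illustrative sketch only (not Tahoe-LAFS internals): distribute shares
# evenly across locations instead of evenly across servers, and check the
# "K+1 shares in one location" local-repair property described above.
# The Server class, location tags, and round-robin policy are assumptions.

from collections import defaultdict
from itertools import cycle


class Server:
    def __init__(self, name, location):
        self.name = name          # e.g. a server nickname or node ID
        self.location = location  # e.g. rack, colo, or geographic region

    def __repr__(self):
        return "%s@%s" % (self.name, self.location)


def place_shares_by_location(servers, total_shares):
    """Deal share numbers out round-robin over locations, then over the
    servers inside each location, so locations (not servers) end up with
    roughly equal share counts."""
    by_location = defaultdict(list)
    for s in servers:
        by_location[s.location].append(s)

    # One rotating iterator of servers per location, and one over locations.
    server_cycles = {loc: cycle(members) for loc, members in by_location.items()}
    location_cycle = cycle(sorted(by_location))

    placement = {}  # share number -> Server
    for sharenum in range(total_shares):
        loc = next(location_cycle)
        placement[sharenum] = next(server_cycles[loc])
    return placement


def can_repair_locally(placement, location, k):
    """True if `location` holds at least k+1 shares: after losing one share
    it still has k, enough to regenerate the lost share without pulling any
    share data in from other locations."""
    local_shares = [s for s in placement.values() if s.location == location]
    return len(local_shares) >= k + 1


if __name__ == "__main__":
    servers = [Server("s1", "colo-A"), Server("s2", "colo-A"),
               Server("s3", "colo-B"), Server("s4", "colo-B"),
               Server("s5", "colo-C")]
    placement = place_shares_by_location(servers, total_shares=10)
    for sharenum, server in sorted(placement.items()):
        print(sharenum, server)
    # With 10 shares over 3 locations, colo-A ends up with 4 shares, so for
    # k=3 it can repair a single lost share without inter-location transfer.
    print("colo-A can repair locally (k=3):",
          can_repair_locally(placement, "colo-A", k=3))
}}}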