Changes between Version 49 and Version 50 of FAQ


Timestamp:
2011-07-22T22:09:55Z (13 years ago)
Author:
zooko
Comment:

add an anchor link for each question

  • FAQ

    v49 v50  
    11
    2 '''Q: What is special about Tahoe-LAFS? Why should anyone care about it instead of [http://tahoe-lafs.org/trac/tahoe/wiki/RelatedProjects#OtherProjects other distributed storage systems]?'''
     2'''[=#Q1_why_tahoe_lafs Q1:] What is special about Tahoe-LAFS? Why should anyone care about it instead of [http://tahoe-lafs.org/trac/tahoe/wiki/RelatedProjects#OtherProjects other distributed storage systems]?'''
    33
    44A1: Tahoe-LAFS is the first Free !Software/Open Source storage technology to offer ''provider-independent security''.  ''Provider-independent security'' means that the integrity and confidentiality of your files are guaranteed by mathematics computed on the client side, and are independent of the servers, which may be owned and operated by someone else.  To learn more, read [http://tahoe-lafs.org/source/tahoe/trunk/docs/about.html our one-page explanation].
     
    66A2: Tahoe-LAFS provides reliable, fault-tolerant storage. Even if you do not need its security properties, you might want to use Tahoe-LAFS for extremely reliable storage. (Tahoe-LAFS's security features do a good job of staying out of your way when you don't need them.)
    77
    8 '''Q: "Erasure-coding"?  What's that?'''
     8'''[=#Q2_what_is_erasure_coding Q2:] "Erasure-coding"?  What's that?'''
    99
    1010A: You know how with RAID-5 you can lose any one drive and still recover?  And there is also something called RAID-6 where you can lose any two drives and still recover.  Erasure coding is the generalization of this pattern: you get to configure how many drives you could lose and still recover.  You can choose how many drives (actually storage servers) will be used in total, from 1 to 256, and how many storage servers are required to recover all the data, from 1 to however many storage servers there are.  We call the number of total servers {{{N}}} and the number required {{{K}}}, and we write the parameters as "{{{K-of-N}}}".
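A rough sketch of the arithmetic, purely for illustration (this is not Tahoe-LAFS's actual encoder, which uses the zfec library; it just shows how {{{K}}} and {{{N}}} determine share size, expansion, and fault tolerance):

{{{
#!python
def erasure_coding_numbers(file_size_bytes, k, n):
    """Back-of-the-envelope figures for K-of-N encoding (small overhead ignored)."""
    share_size = file_size_bytes / k      # each share is about 1/K of the file
    total_stored = share_size * n         # so the expansion factor is N/K
    tolerated_failures = n - k            # any N-K servers can be lost
    return share_size, total_stored, tolerated_failures

# Example: the default 3-of-10 encoding applied to a 1 GiB file
share, total, lost_ok = erasure_coding_numbers(2**30, k=3, n=10)
print(share / 2**20)   # ~341 MiB per share
print(total / 2**30)   # ~3.33 GiB stored across the whole grid
print(lost_ok)         # 7 servers can fail before the file becomes unrecoverable
}}}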
     
    2020A: There isn't currently a way to disable the encryption, but if you look at the "Recent Uploads and Downloads" page on your local tahoe-lafs gateway, you'll see that the encryption takes a tiny sliver of the total time to upload or download a file, so there isn't significant performance to be gained by skipping the encryption. We prefer 'secure by default', so without a compelling reason to allow insecure operation, our plan is to leave encryption turned on all the time.  Note that because Tahoe-LAFS includes the decryption key in the capability to a file, it is trivial to share or to publish an encrypted file—you just share or publish the capability, and everyone who uses that capability automatically sees the plaintext of the file.
    2121
    22 '''Q: Where should I look for current documentation about the Tahoe-LAFS protocols?'''
     22'''[=#Q4_where_are_the_docs Q4:] Where should I look for current documentation about the Tahoe-LAFS protocols?'''
    2323
    2424A: http://tahoe-lafs.org/source/tahoe/trunk/docs/architecture.rst
    2525
    26 '''Q: Does Tahoe-LAFS work on embedded devices such as a [http://www.pogoplug.com PogoPlug] or an [http://openwrt.org OpenWRT] router?'''
     26'''[=#Q5_embedded_devices Q5:] Does Tahoe-LAFS work on embedded devices such as a [http://www.pogoplug.com PogoPlug] or an [http://openwrt.org OpenWRT] router?'''
    2727
    2828A: Yes.  François Deppierraz contributes [http://tahoe-lafs.org/buildbot/builders/FranXois%20lenny-armv5tel a buildbot] which shows that Tahoe-LAFS builds and all the unit tests pass on his Intel SS4000-E NAS box running under Debian Squeeze.  Zandr Milewski [http://tahoe-lafs.org/pipermail/tahoe-dev/2009-November/003157.html reported] that it took him only an hour to build, install, and test Tahoe-LAFS on a !PogoPlug.
    2929
    30 '''Q: Does Tahoe-LAFS work on Windows?'''
     30'''[=#Q6_windows Q6:] Does Tahoe-LAFS work on Windows?'''
    3131
    3232A: Yes.  Follow [http://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/docs/quickstart.rst the standard quickstart instructions] to get Tahoe-LAFS running on Windows. (There was also an "Allmydata Windows client", but that is not actively maintained at the moment, and relied on some components that are not open-source.)
    3333
    34 '''Q: Does Tahoe-LAFS work on Mac OS X?'''
     34'''[=#Q7_mac_os_x Q7:] Does Tahoe-LAFS work on Mac OS X?'''
    3535
    3636A: Yes.  Follow [http://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/docs/quickstart.rst the standard quickstart instructions] on Mac OS X and you will get a working command-line tool, just as on other Unixes.
    3737
    38 '''Q: Can there be more than one storage folder on a storage node? So if a storage server contains 3 drives without RAID, can it use all 3 for storage?'''
     38'''[=#Q8_storage_in_multiple_dirs Q8:] Can there be more than one storage directory on a storage node? So if a storage server contains 3 drives without RAID, can it use all 3 for storage?'''
    3939
    4040A: Not directly. Each storage server has a single "base directory" which we term {{{$BASEDIR}}}. The server keeps all of its shares in a subdirectory named {{{$BASEDIR/storage/shares/}}}. (Note that you can symlink this to whatever you want: you can keep the rest of the node's files in one place, and store all the shares somewhere else). Since there's only one such subdirectory, you can only use one filesystem per node. On the other hand, shares are stored in a set of 1024 subdirectories of that one, named {{{$BASEDIR/storage/shares/aa/}}}, {{{$BASEDIR/storage/shares/ab/}}}, etc. If you were to symlink the first third of these to one filesystem, the next third to a second filesystem, etc. (hopefully with a script!), then you'd get about 1/3rd of the shares stored on each disk. The "how much space is available" and space-reservation tools would be confused (including making the {{{reserved_space}}} parameter unusable), but basically everything else should work normally.
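For illustration, such a script might look like the sketch below. It assumes the prefix directories use two-character base32 names (hence 32 × 32 = 1024 of them), that the node is stopped while you run it, and that the base directory and mount points are placeholders you would replace with your own:

{{{
#!python
import os
from itertools import product

BASEDIR = os.path.expanduser("~/.tahoe")               # your node's base directory
MOUNTS = ["/mnt/disk1", "/mnt/disk2", "/mnt/disk3"]    # placeholder filesystems

# 1024 two-character prefixes, assuming the lowercase base32 alphabet.
BASE32 = "abcdefghijklmnopqrstuvwxyz234567"
prefixes = ["".join(pair) for pair in product(BASE32, repeat=2)]

shares_root = os.path.join(BASEDIR, "storage", "shares")
for i, prefix in enumerate(prefixes):
    # Spread the prefixes evenly across the available filesystems (thirds, here).
    mount = MOUNTS[i * len(MOUNTS) // len(prefixes)]
    target = os.path.join(mount, "tahoe-shares", prefix)
    link = os.path.join(shares_root, prefix)
    os.makedirs(target, exist_ok=True)
    if not os.path.lexists(link):      # don't clobber a prefix dir that already exists
        os.symlink(target, link)
}}}

Any prefix directories that already contain shares would have to be moved onto the target filesystem before being replaced by a symlink; the sketch only links names that don't exist yet.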
     
    4242A cleaner solution would be to use LVM instead, which can combine several physical disks (or loop devices backed by ordinary files) into a single logical volume. This logical volume can then be mounted or symlinked to {{{$BASEDIR/storage}}}. This is also a more flexible solution because new disks can later be added seamlessly to the volume with LVM.
    4343
    44 '''Q: Would it make sense to not use any RAID and let Tahoe-LAFS deal with the redundancy?'''
     44'''[=#Q9_use_raid_with_tahoe_lafs Q9:] Would it make sense to not use any RAID and let Tahoe-LAFS deal with the redundancy?'''
    4545
    4646A: The Allmydata grid didn't bother with RAID at all: each Tahoe-LAFS storage server node used a single spindle.
     
    5050Using RAID for your redundancy gets you fairly fast repair, because it's all being handled by a controller that sits right on top of the raw drive. Tahoe-LAFS's repair is a lot slower, because it is driven by a client that examines one file at a time, and because each file requires a lot of network round trips. A repair of a 1TB RAID-5 drive can easily be finished in a day. If that 1TB drive is filled with a million Tahoe-LAFS files that are being repaired over a Wide Area Network, the repair could take a month.  On the other hand, many RAID configurations degrade significantly when a drive is lost, whereas Tahoe-LAFS's read performance is nearly unaffected.  So repair events may be infrequent enough to just let them happen quietly in the background and not care much about how long they take.
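To see where a month-long figure can come from, here is a hedged back-of-the-envelope estimate; the round-trip count, latency, and bandwidth below are assumptions chosen only to show the order of magnitude:

{{{
#!python
# Rough estimate of repairing a million files over a WAN (all figures assumed).
num_files       = 1_000_000
roundtrips_each = 4                 # assumed protocol round trips per file
rtt_seconds     = 0.1               # assumed 100 ms WAN round-trip time
data_bytes      = 10**12            # ~1 TB of share data to fetch and re-encode
wan_bytes_per_s = 10 * 10**6 / 8    # assumed 10 Mbit/s effective throughput

latency_days  = num_files * roundtrips_each * rtt_seconds / 86400   # ~4.6 days
transfer_days = data_bytes / wan_bytes_per_s / 86400                # ~9.3 days
print(latency_days + transfer_days)  # roughly two weeks; a month with slower links or retries
}}}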
    5151
    52 '''Q: Suppose I have a file of 100GB and 2 storage nodes each with 75GB available, will I be able to store the file or does it have to fit
    53 within the realms of a single node?'''
     52'''[=#Q10_file_bigger_than_one_server Q10:] Suppose I have a file of 100GB and 2 storage nodes each with 75GB available, will I be able to store the file or does it have to fit within the realms of a single node?'''
    5453
    5554A: The ability to store the file will depend upon how you set the encoding parameters: you get to choose the tradeoff between expansion (how much space gets used) and reliability. The default settings are {{{3-of-10}}}, which means the file is encoded into 10 shares, and any 3 will be sufficient to reconstruct it. That means each share will be 1/3rd the size of the original file (plus a small overhead, less than 0.5% for large files). For your 100GB file, that means 10 shares, each of which is 33GB in size, which would not fit (it could get two shares on each server, but it couldn't place all ten, so it would return an error).
     
    5958You could also set the encoding to {{{4-of-6}}}, which would generate six 25GB shares, and put three on each server. This would still be vulnerable to either server being down (since neither server has enough shares to give you the whole file by itself), but would become tolerant to errors in an individual share (if only one share file were damaged, there are still five other shares, and we only need four). A lot of disk errors affect only a single file, so there's some benefit to this even if you're still vulnerable to a full disk/server failure.
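Here is the same arithmetic as a small sketch (share sizes are approximate; real shares add a little under 0.5% of overhead):

{{{
#!python
def placement(file_gb, k, n, servers, free_gb_each):
    """Would an evenly spread K-of-N upload fit on this many servers?"""
    share_gb = file_gb / k            # each share is about 1/K of the file
    per_server = n // servers         # shares landing on each server with an even spread
    needed_gb = per_server * share_gb
    return share_gb, per_server, needed_gb <= free_gb_each

# The scenario from the question: a 100GB file, two servers with 75GB free each.
print(placement(100, 3, 10, 2, 75))   # (33.3.., 5, False) -- the default encoding won't fit
print(placement(100, 4, 6, 2, 75))    # (25.0, 3, True)  -- three 25GB shares per server
}}}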
    6059
    61 '''Q: Do I need to shutdown all clients/servers to add a storage node?'''
     60'''[=#Q11_dynamically_add_servers Q11:] Do I need to shutdown all clients/servers to add a storage node?'''
    6261
    6362A: No, you can add or remove clients or servers at any time. The central "Introducer" is responsible for telling clients and servers about each other, and it acts as a simple publish-subscribe hub, so everything is very dynamic. Clients re-evaluate the list of available servers each time they do an upload.
     
    6564This is great for long-term servers, but can cause a problem right when the node starts up. If you've just started your client and upload a file before it has a chance to connect to all of the servers, your upload may fail due to insufficient servers. Usually you can just try again (your client will usually have finished connecting to all the servers in the time it takes you to see the error message and click retry).
    6665
    67 '''Q: If I had 3 locations each with 5 storage nodes, could I configure the grid to ensure a file is written to each location so that I could handle all
    68 servers at a particular location going down?'''
     66'''[=#Q12_server_location_distribution Q12:] If I had 3 locations each with 5 storage nodes, could I configure the grid to ensure a file is written to each location so that I could handle all servers at a particular location going down?'''
    6967
    7068A: Not directly. We have a ticket about that one (#467, #302), but it's deeper than it looks and we haven't come to a conclusion on how to
     
    9290So you've got a 50% chance of the ideal distribution, and a 1/1000 chance of the worst-case distribution.
    9391
    94 '''Q: Is it possible to modify a mutable file by "patching" it? Also... if I have a file stored and I want to update a section of the file in the middle, is that possible or would the file need to be downloaded, patched and re-uploaded?'''
     92'''[=#Q13_modify_a_section_of_mutable_file Q13:] Is it possible to modify a mutable file by "patching" it? Also... if I have a file stored and I want to update a section of the file in the middle, is that possible or would the file need to be downloaded, patched and re-uploaded?'''
    9593
    9694A: Not at present. We've implemented only "Small Distributed Mutable Files" (SDMF) so far, which have the property that the whole file must be
     
    10199Kevan Carstensen has implemented MDMF, thanks in part to the sponsorship of Google Summer of Code. Ticket #393 is tracking this work.
    102100
    103 '''Q: How can Tahoe-LAFS ensure that every node ID is unique?'''
     101'''[=#Q14_unique_node_id Q14:] How can Tahoe-LAFS ensure that every node ID is unique?'''
    104102
    105103A: The node ID is the secure hash of the SSL public key certificate of the node.  As long as the node's public key is unique and the secure hash function doesn't allow collisions, the node ID will be unique.
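Purely as an illustration of the idea (the hash function and certificate encoding shown here are stand-ins, not necessarily what the node's SSL layer actually uses):

{{{
#!python
import hashlib

def node_id_from_certificate(cert_der):
    """Illustrative node ID: a secure hash of the node's public key certificate.
    SHA-256 over the DER bytes is an assumption made for this sketch."""
    return hashlib.sha256(cert_der).hexdigest()

# Different certificates give different IDs, barring hash collisions.
print(node_id_from_certificate(b"certificate bytes of node A"))
print(node_id_from_certificate(b"certificate bytes of node B"))
}}}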
    106104
    107 '''Q: If I upload the same file again and again, Tahoe-LAFS will return the same capability. How does Tahoe-LAFS identify that the client is the same when I upload a file multiple times? Is it based on the node ID?'''
     105'''[=#Q15_same_file_same_cap Q15:] If I upload the same file again and again, Tahoe-LAFS will return the same capability. How does Tahoe-LAFS identify that the client is the same when I upload a file multiple times? Is it based on the node ID?'''
    108106
    109107A: For immutable files this is true—the resulting capability will be the same each time you upload the same file contents.  The capability is derived from two pieces of information:  the content of the file and the "convergence secret".  By default, the convergence secret is randomly generated by the node when it first starts up, then stored and re-used after that.  So the same file content uploaded from the same node will always have the same cap string.  Uploading the file from a different node with a different convergence secret would result in a different cap string—and in a second copy of the file's contents stored on the grid. If you want the files you upload to converge (also known as "deduplicate") with files uploaded by someone else, just make sure you're using the same convergence secret as they are.
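The idea can be sketched as follows (this is not Tahoe-LAFS's actual key-derivation scheme, which also mixes in other parameters; it only shows why the same secret plus the same content yields the same capability):

{{{
#!python
import hashlib

def convergent_key(convergence_secret, file_content):
    """Illustrative convergent-encryption key: same secret + same content -> same key,
    and therefore the same capability and the same stored shares."""
    return hashlib.sha256(convergence_secret + file_content).hexdigest()

data = b"identical file contents"
print(convergent_key(b"alice's secret", data) == convergent_key(b"alice's secret", data))  # True
print(convergent_key(b"alice's secret", data) == convergent_key(b"bob's secret", data))    # False: different cap, second copy on the grid
}}}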
    110108
    111 '''Q: If I move the client node base directory to a different machine and start the client there, will the node have the same node ID as on the previous machine?'''
     109'''[=#Q16_move_node_to_different_machine Q16:] If I move the client node base directory to a different machine and start the client there, will the node have the same node ID as on the previous machine?'''
    112110
    113111A: Yes, the node ID is stored in the {{{my_nodeid}}} file in your tahoe base directory, and it is derived from the SSL public/private keypair which is stored in the {{{private}}} subdirectory of the tahoe base directory. As long as you move both of those, the node on the new machine will have the same node ID.
    114112
    115 '''Q: Is it possible to run multiple introducers on the same grid?'''
     113'''[=#Q17_multiple_introducers Q17:] Is it possible to run multiple introducers on the same grid?'''
    116114
    117115A: Faruque Sarker has been working on this as a Google Summer of Code project. His changes are blocked on the need for more people to test them, review the code, and write more unit tests. For more information, please take a look at ticket #68.
    118116
    119 '''Q: Will this thing run only when I tell it to? Will it use up a lot of my network bandwidth, CPU, or RAM?'''
     117'''[=#Q18_unobtrusive_software Q18:] Will this thing run only when I tell it to? Will it use up a lot of my network bandwidth, CPU, or RAM?'''
    120118
    121119A: Tahoe-LAFS is designed to be unobtrusive. First of all, it doesn't start at all except when you tell it to—you start it with {{{tahoe start}}} and stop it with {{{tahoe stop}}}. Secondly, the software doesn't act as a server unless you configure it to do so—it isn't like peer-to-peer software which automatically acts as a server as well as a client. Thirdly, the client doesn't do anything except in response to the user starting an upload or a download—it doesn't do anything automatically or in the background. Fourthly, with two minor exceptions described below, the server doesn't do anything either, except in response to clients doing uploads or downloads. Finally, even when the server is actively serving clients it isn't a very demanding process. It uses between 40 and 56 MB of RAM on a 64-bit Linux server. We used to run eight of them on a single-core 2 GHz Opteron and had plenty of CPU to spare, so it isn't too CPU intensive.
     
    123121The two minor exceptions are that the server periodically inspects all of the ciphertext that it is storing on behalf of clients. It is configured to do this "in the background", by doing it only for a second at a time and waiting for a few seconds in between each step. The intent is that this will not noticeably impact other users of the same server. For all the details about when these background processes run and what they do, read the documentation in [http://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/src/allmydata/storage/crawler.py?annotate=blame&rev=4164 storage/crawler.py] and [http://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/src/allmydata/storage/expirer.py?annotate=blame&rev=4329 storage/expirer.py].
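The "work briefly, then sleep" pattern those crawlers use can be sketched as follows (a simplification with made-up timing names; the real crawler also remembers its position so that it can resume across restarts):

{{{
#!python
import time

def throttled_crawl(shares, inspect, slice_seconds=1.0, sleep_seconds=5.0):
    """Inspect shares in short time slices so the crawl stays in the background."""
    deadline = time.monotonic() + slice_seconds
    for share in shares:
        inspect(share)
        if time.monotonic() >= deadline:   # the current one-second slice is used up
            time.sleep(sleep_seconds)      # yield the disk and CPU for a while
            deadline = time.monotonic() + slice_seconds
}}}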
    124122
    125 '''Q: What about revoking access to a file or directory?'''
     123'''[=#Q19_revocation Q19:] What about revoking access to a file or directory?'''
    126124
    127125Please see these mailing list threads: