'''[=#Q0_what_is_it Q0:] What is Tahoe-LAFS? What can you do with it?'''

A: Think of Tahoe-LAFS as being like [https://en.wikipedia.org/wiki/BitTorrent BitTorrent], except that you can upload as well as download. Tahoe-LAFS also has directories and files, so if you're looking at a directory stored in Tahoe-LAFS, you can navigate to a file or sub-directory that is also in Tahoe-LAFS. In that sense it is a little more like a filesystem than !BitTorrent is.

'''[=#Q1_why_tahoe_lafs Q1:] What is special about Tahoe-LAFS? Why should anyone care about it instead of [wiki:RelatedProjects#OtherProjects other distributed storage systems]?'''

A1: Tahoe-LAFS is the first Free !Software/Open Source storage technology to offer ''provider-independent security''. ''Provider-independent security'' means that the integrity and confidentiality of your files are guaranteed by mathematics computed on the client side, independently of the servers, which may be owned and operated by someone else. To learn more, read [source:docs/about.rst our one-page explanation].

A2: Tahoe-LAFS provides reliable, fault-tolerant storage. Even if you do not need its security properties, you might want to use Tahoe-LAFS for extremely reliable storage. (Tahoe-LAFS's security features do a good job of staying out of your way when you don't need them.)

'''[=#Q1_5_what_s_the_difference_from_freenet Q1.5:] What's the difference between Tahoe-LAFS and Freenet?'''

A: Zooko wrote [//pipermail/tahoe-dev/2011-July/006560.html a long post about that] to the tahoe-dev mailing list.

'''[=#Q2_what_is_erasure_coding Q2:] "Erasure-coding"? What's that?'''

A: RAID-5 can lose one drive, and RAID-6 can lose two drives, and still recover. Erasure coding is a method of data protection in which the data is broken into fragments, expanded and encoded with redundant pieces, and stored across a set of different locations or storage servers. The total number of shares (one per storage server) can be chosen from 1 to 256, and the number of shares required to recover all the data can be chosen from 1 up to the total. We call the total number of shares {{{N}}}, the number needed for recovery {{{K}}}, and write the parameters as "{{{K-of-N}}}". Each storage server uses an amount of space equal to the total size of your data divided by {{{K}}}.

Tahoe-LAFS's default parameters are {{{3-of-10}}}: the data is spread over 10 different servers, and you can lose any 7 of them and still recover all the data. This is more reliable than comparable RAID arrangements, at a cost of only about 3.3 times the storage space of a single copy. It takes about 3.3 times the storage space because each of the 10 servers stores a share equal to 1/3 of the size of the data.

"Forward error correction" (FEC) is another term for erasure coding. Erasure coding should not be confused with "secret sharing", which has the additional security property that fewer than {{{K}}} servers cannot recover any information about the data. Tahoe-LAFS's erasure coding does not have this property, and does not need it, because we rely on secret-key encryption (using a key in the read cap) for confidentiality. "Information Dispersal Algorithm" (IDA) can refer to either an erasure code or a secret-sharing algorithm depending on context, so we prefer not to use that term.
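The space arithmetic above is easy to check for yourself. Here is a minimal sketch (ordinary Python, not Tahoe-LAFS code) that computes the per-server share size and the expansion factor for a given {{{K-of-N}}} encoding, ignoring the small per-share overhead:

{{{#!python
# A minimal sketch (not Tahoe-LAFS code) of the K-of-N space arithmetic described above.
def share_layout(file_size_bytes, k=3, n=10):
    """Return the per-server share size and the total expansion factor for K-of-N encoding."""
    share_size = file_size_bytes / k      # each of the N servers stores ~1/K of the data
    total_stored = share_size * n         # space consumed across the whole grid
    return share_size, total_stored / file_size_bytes

share, expansion = share_layout(1_000_000_000)   # a 1 GB file with the default 3-of-10
print(f"each share: {share / 1e6:.0f} MB, expansion factor: {expansion:.1f}x")
# -> each share: 333 MB, expansion factor: 3.3x
}}}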
'''[=#Q3_disable_encryption Q3:] Is there a way to disable the encryption for content which isn't secret? Won't that save a lot of CPU cycles?'''

A: There isn't currently a way to disable the encryption, but if you look at the "Recent Uploads and Downloads" page on your local Tahoe-LAFS gateway, you'll see that encryption takes a tiny sliver of the total time to upload or download a file, so there isn't significant performance to be gained by skipping it. We prefer 'secure by default', so without a compelling reason to allow insecure operation, our plan is to leave encryption turned on all the time. Note that because Tahoe-LAFS includes the decryption key in the capability to a file, it is trivial to share or to publish an encrypted file—you just share or publish the capability, and everyone who uses that capability automatically sees the plaintext of the file.

'''[=#Q4_where_are_the_docs Q4:] Where should I look for current documentation about the Tahoe-LAFS protocols?'''

A: [source:docs/architecture.rst https://tahoe-lafs.org/trac/tahoe-lafs/browser/docs/architecture.rst]

'''[=#Q5_embedded_devices Q5:] Does Tahoe-LAFS work on embedded devices such as a [http://www.pogoplug.com PogoPlug] or an [http://openwrt.org OpenWRT] router?'''

A: Yes. François Deppierraz contributes [//buildbot-tahoe-lafs/builders/Francois%20lenny-armv5tel a buildbot] which shows that Tahoe-LAFS builds and all the unit tests pass on his Intel SS4000-E NAS box running under Debian Squeeze. Zandr Milewski [//pipermail/tahoe-dev/2009-November/003157.html reported] that it took him only an hour to build, install, and test Tahoe-LAFS on a !PogoPlug. Johannes Nix [//pipermail/tahoe-dev/2012-March/007073.html reported] that the Tahoe-LAFS storage server runs fine on a DNS-323, which has only 64 MB of RAM.

If you try it, note that the Tahoe-LAFS storage ''server'' is a much less demanding process than the Tahoe-LAFS gateway. The server doesn't do any decryption, digital-signature signing or verifying, or erasure coding, and in general is pretty dumb, so it fits more easily into RAM and CPU limits. The gateway has to do all of that, so it requires more CPU and RAM than the server does. Please send a message to the tahoe-dev mailing list if you try deploying Tahoe-LAFS on an embedded device and let us know the details of your device and how well it worked.

'''[=#Q6_windows Q6:] Does Tahoe-LAFS work on Windows?'''

A: Yes. Follow [source:docs/quickstart.rst the standard quickstart instructions] to get Tahoe-LAFS running on Windows.

'''[=#Q7_mac_os_x Q7:] Does Tahoe-LAFS work on Mac OS X?'''

A: Yes. Follow [source:docs/quickstart.rst the standard quickstart instructions] and you will get a working command-line tool on Mac OS X, just as on other Unixes.

'''[=#Q8_storage_in_multiple_dirs Q8:] Can there be more than one storage directory on a storage node? So if a storage server contains 3 drives without RAID, can it use all 3 for storage?'''

A: Not directly. Each storage server has a single "base directory", which we term {{{$BASEDIR}}}. The server keeps all of its shares in a subdirectory named {{{$BASEDIR/storage/shares/}}}. (Note that you can symlink this to whatever you want: you can keep the rest of the node's files in one place, and store all the shares somewhere else.) Since there's only one such subdirectory, you can only use one filesystem per node.

On the other hand, shares are stored in a set of 1024 subdirectories of that one, named {{{$BASEDIR/storage/shares/aa/}}}, {{{$BASEDIR/storage/shares/ab/}}}, etc. If you were to symlink the first third of these to one filesystem, the next third to a second filesystem, etc. (hopefully with a script; see the sketch below), then you'd get about 1/3 of the shares stored on each disk. The "how much space is available" and space-reservation tools would be confused (including making the {{{reserved_space}}} parameter unusable), but basically everything else should work normally.
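Here is a rough sketch of such a script. It assumes that the 1024 prefix directories are the two-character combinations of a 32-character lowercase base32 alphabet (an assumption; check your own {{{shares/}}} directory), and the mount points {{{/mnt/disk1}}} through {{{/mnt/disk3}}} are hypothetical, so adjust everything to your own layout:

{{{#!python
# Rough sketch: spread the 1024 share-prefix directories across three filesystems
# via symlinks. The base32 alphabet and the mount-point paths are assumptions.
import os
from itertools import product

ALPHABET = "abcdefghijklmnopqrstuvwxyz234567"        # 32 chars -> 32*32 = 1024 prefixes
BASEDIR = os.path.expanduser("~/.tahoe")             # hypothetical node base directory
MOUNTS = ["/mnt/disk1", "/mnt/disk2", "/mnt/disk3"]  # hypothetical extra filesystems

shares_dir = os.path.join(BASEDIR, "storage", "shares")
for i, pair in enumerate(product(ALPHABET, repeat=2)):
    prefix = "".join(pair)
    target = os.path.join(MOUNTS[i % len(MOUNTS)], "tahoe-shares", prefix)
    link = os.path.join(shares_dir, prefix)
    os.makedirs(target, exist_ok=True)
    if not os.path.islink(link) and not os.path.isdir(link):
        os.symlink(target, link)   # e.g. .../shares/aa -> /mnt/disk1/tahoe-shares/aa
}}}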
A cleaner solution would be to use LVM instead, which can combine several physical disks (or loop devices backed by ordinary files) into a single logical volume. This logical volume can then be mounted or symlinked to {{{$BASEDIR/storage}}}. This is also a more flexible solution, because new disks can be added to the volume seamlessly with LVM.

Another clean solution is to run three Tahoe-LAFS storage server processes—one for each of your three drives. That's what the Tahoe-LAFS developers would do.

'''[=#Q9_use_raid_with_tahoe_lafs Q9:] Would it make sense to not use any RAID and let Tahoe-LAFS deal with the redundancy?'''

A: The Allmydata grid didn't bother with RAID at all: each Tahoe-LAFS storage server node used a single spindle. The optimal layout depends on how expensive the different forms of repair would be. Tahoe-LAFS can correctly be thought of as a form of "application-level RAID", with more flexibility than the usual RAID-1/4/5 styles (RAID-1 is equivalent to {{{1-of-2}}} encoding, and RAID-5 is like {{{3-of-4}}}).

Using RAID for your redundancy gets you fairly fast repair, because it's all handled by a controller that sits right on top of the raw drive. Tahoe-LAFS's repair is a lot slower, because it is driven by a client that examines one file at a time, and there are a lot of network round trips for each file. A repair of a 1TB RAID-5 array can easily finish in a day. If that 1TB were instead filled with a million Tahoe-LAFS files being repaired over a wide-area network, the repair could take a month. On the other hand, many RAID configurations degrade significantly when a drive is lost, whereas Tahoe-LAFS's read performance is nearly unaffected. So repair events may be infrequent enough to just let them happen quietly in the background without caring much about how long they take.

'''[=#Q10_file_bigger_than_one_server Q10:] Suppose I have a file of 100GB and 2 storage nodes each with 75GB available. Will I be able to store the file, or does it have to fit within a single node?'''

A: Whether the file can be stored depends on how you set the encoding parameters: you get to choose the tradeoff between expansion (how much space gets used) and reliability. The default settings are {{{3-of-10}}}, which means the file is encoded into 10 shares, any 3 of which are sufficient to reconstruct it. That means each share is 1/3 the size of the original file (plus a small overhead, less than 0.5% for large files). For your 100GB file, that means 10 shares, each 33GB in size, which would not fit (it could get two shares on each server, but it couldn't place all ten, so it would return an error).

But you could set the encoding to {{{2-of-2}}}, which would give you two 50GB shares, and it would happily put one share on each server. That would store the file, but it wouldn't give you any redundancy: a failure of either server would prevent you from recovering the file. You could also set the encoding to {{{4-of-6}}}, which would generate six 25GB shares and put three on each server. This would still be vulnerable to either server being down (since neither server has enough shares to provide the whole file by itself), but would become tolerant to errors in an individual share: if only one share file were damaged, there would still be five other shares, and we only need four. A lot of disk errors affect only a single file, so there's some benefit to this even if you're still vulnerable to a full disk/server failure.
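The tradeoffs described above are easy to tabulate. This sketch (plain Python, not part of Tahoe-LAFS) computes the share size for each encoding and checks whether an even spread of shares would fit on two hypothetical 75GB servers, ignoring the small per-share overhead:

{{{#!python
# Sketch of the arithmetic above: for a given K-of-N encoding, how big is each share,
# and can the shares be spread evenly over the available servers?
def plan(file_gb, k, n, servers_gb):
    share_gb = file_gb / k
    shares_per_server = n // len(servers_gb)            # assuming an even spread
    fits = all(cap >= shares_per_server * share_gb for cap in servers_gb)
    return share_gb, shares_per_server, fits

for k, n in [(3, 10), (2, 2), (4, 6)]:
    share_gb, per_server, fits = plan(100, k, n, [75, 75])
    print(f"{k}-of-{n}: shares of {share_gb:.0f} GB, {per_server} per server, fits: {fits}")
# -> 3-of-10: shares of 33 GB, 5 per server, fits: False
# -> 2-of-2: shares of 50 GB, 1 per server, fits: True
# -> 4-of-6: shares of 25 GB, 3 per server, fits: True
}}}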
'''[=#Q11_dynamically_add_servers Q11:] Do I need to shut down all clients/servers to add a storage node?'''

A: No, you can add or remove clients or servers any time you like. The central "Introducer" is responsible for telling clients and servers about each other, and it acts as a simple publish-subscribe hub, so everything is very dynamic. Clients re-evaluate the list of available servers each time they do an upload.

This is great for long-term servers, but can cause a problem right when the node starts up: if you've just started your client and upload a file before it has had a chance to connect to all of the servers, your upload may fail due to insufficient servers. Usually you can just try again (your client will usually have finished connecting to all the servers in the time it takes you to see the error message and click retry).

'''[=#Q12_server_location_distribution Q12:] If I had 3 locations each with 5 storage nodes, could I configure the grid to ensure a file is written to each location, so that I could handle all servers at a particular location going down?'''

A: Not directly. We have [wiki:ServerSelection a wiki page] and some tickets (linked from the wiki page) about this, but it's deeper than it looks and we haven't come to a conclusion on how to build it.

The current system will try to distribute the shares as widely as possible, using a different pseudo-random permutation for each file, but it is completely unaware of server properties like "location". If you have more free servers than shares, it will only put one share on any given server, but you might wind up with more shares in one location than in the others. For example, if you have 15 servers in three locations A:1/2/3/4/5, B:6/7/8/9/10, C:11/12/13/14/15, and use the default {{{3-of-10}}} encoding, your worst case is winding up with shares on 1/2/3/4/5/6/7/8/9/10, not using location C at all. The most ''likely'' case is that you'll wind up with 3 or 4 shares in each location, but there's nothing in the system to enforce that: it just shuffles all the servers into a ring, starts at 0, and assigns shares to servers around and around the ring until all the shares have a home.

The possible distributions of shares into locations (A, B, C), and the number of ways each can occur, are:

(3, 3, 4) 1500[[BR]]
(2, 4, 4) 750[[BR]]
(2, 3, 5) 600[[BR]]
(1, 4, 5) 150[[BR]]
(0, 5, 5) 3[[BR]]
sum = 3003[[BR]]

So you've got a 50% chance of the ideal distribution, and a 1/1000 chance of the worst-case distribution.
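Those counts come from straightforward combinatorics: 10 shares placed on 10 distinct servers drawn from three groups of 5. You can reproduce them with a few lines of Python (an illustration of the arithmetic, not Tahoe-LAFS's server-selection code):

{{{#!python
# Count how many of the C(15,10) = 3003 equally likely server selections
# give each per-location split of the 10 shares.
from math import comb
from itertools import product
from collections import Counter

counts = Counter()
for a, b, c in product(range(6), repeat=3):          # shares in locations A, B, C (0..5 each)
    if a + b + c == 10:
        counts[tuple(sorted((a, b, c)))] += comb(5, a) * comb(5, b) * comb(5, c)

for split, ways in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(split, ways)
print("total:", sum(counts.values()))                # 3003 == comb(15, 10)
# -> (3, 3, 4) 1500, (2, 4, 4) 750, (2, 3, 5) 600, (1, 4, 5) 150, (0, 5, 5) 3
}}}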
'''[=#Q13_modify_a_section_of_mutable_file Q13:] Is it possible to modify a mutable file by "patching" it? In other words, if I have a file stored and I want to update a section in the middle, is that possible, or would the file need to be downloaded, patched, and re-uploaded?'''

A: Some steps have been taken toward implementing this. There are two kinds of mutable file: "Small Distributed Mutable Files" (SDMF) and "Medium Distributed Mutable Files" (MDMF). MDMF files are broken into segments (default size 128 KiB), and are designed to allow replacing only the segments changed by a write. However, although MDMF files work correctly and are the preferred format (soon to be the default in Tahoe-LAFS v1.11), writes to them currently (as of Tahoe-LAFS v1.10.1) still replace the whole file. Once writing of individual segments is implemented, a write within a single segment will only require the upload of N/K × 128 KiB, or about 427 KiB for the default segment size and {{{3-of-10}}} encoding. Kevan Carstensen implemented MDMF thanks in part to the sponsorship of Google Summer of Code.

'''[=#Q14_unique_node_id Q14:] How can Tahoe-LAFS ensure that every node ID is unique?'''

A: The node ID is the secure hash of the SSL public key certificate of the node. As long as the node's public key is unique and the secure hash function doesn't allow collisions, the node ID will be unique.

'''[=#Q15_same_file_same_cap Q15:] If I upload the same file again and again, Tahoe-LAFS returns the same capability. How does Tahoe-LAFS identify that the client is the same when I upload files multiple times? Is it based on the node ID?'''

A: For immutable files this is true—the resulting capability will be the same each time you upload the same file contents. The capability is derived from two pieces of information: the content of the file and the "[//trac/tahoe-lafs/wiki/Convergence%20Secret convergence secret]". By default, the convergence secret is randomly generated by the node when it first starts up, then stored in the node's base directory ({{{~/.tahoe}}}) and re-used after that. So the same file content uploaded from the same node will always produce the same cap string. Uploading the file from a different node with a different convergence secret would result in a different cap string—and in a second copy of the file's contents stored on the grid. If you want files you upload to converge (also known as "deduplicate") with files uploaded by someone else, just make sure you're using the same convergence secret as they are.
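Here is a conceptual illustration of why the cap depends on both the content and the convergence secret. This is ''not'' Tahoe-LAFS's actual key-derivation code; it only shows the idea that the encryption key (and therefore the cap and the storage location) is a deterministic function of the pair (convergence secret, file content):

{{{#!python
# Conceptual illustration only -- not Tahoe-LAFS's actual key-derivation code.
import hashlib

def convergent_key(convergence_secret: bytes, file_content: bytes) -> bytes:
    # Same secret + same content -> same key; either input changes -> different key.
    return hashlib.sha256(convergence_secret + file_content).digest()

secret_a = b"node A's convergence secret"
secret_b = b"node B's convergence secret"
data = b"the same file content"

print(convergent_key(secret_a, data) == convergent_key(secret_a, data))  # True: same cap
print(convergent_key(secret_a, data) == convergent_key(secret_b, data))  # False: different cap
}}}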
'''[=#Q15.1_dedupe_dangers Q15.1:] Isn't deduplication dangerous? Can someone figure out whether or not I have a certain file?'''

A: It is dangerous, [//hacktahoelafs/drew_perttula.html even more so than most people realize]! But Tahoe-LAFS provides a defense: [source:docs/convergence-secret.rst].

'''[=#Q16_move_node_to_different_machine Q16:] If I move a client node's base directory to a different machine and start the client there, will the node have the same node ID as on the previous machine?'''

A: Yes. The node ID is stored in the {{{my_nodeid}}} file in each node's base directory, and it is derived from the SSL public/private keypair which is stored in {{{private/node.pem}}} relative to the base directory. As long as you move both of those, the node on the new machine will have the same node ID. If you are moving these files into an existing base directory of a node that has already been run, then you will also need to delete or move aside {{{private/*.furl}}} under that directory, otherwise the node won't start.

'''[=#Q17_multiple_introducers Q17:] Is it possible to run multiple introducers on the same grid?'''

A: Faruque Sarker has been working on this as a Google Summer of Code project. His changes are blocked on needing more people to test them, review the code, and write more unit tests. For more information please take a look at ticket #68.

'''[=#Q18_unobtrusive_software Q18:] Will this thing run only when I tell it to? Will it use up a lot of my network bandwidth, CPU, or RAM?'''

A: Tahoe-LAFS is designed to be unobtrusive. First of all, it doesn't start at all except when you tell it to—you start it with {{{tahoe start}}} and stop it with {{{tahoe stop}}}. Secondly, the software doesn't act as a server unless you configure it to do so—it isn't like peer-to-peer software which automatically acts as a server as well as a client. Thirdly, the client doesn't do anything except in response to the user starting an upload or a download—it doesn't do anything automatically or in the background (this might change in the future, to support background repair for example, but probably only if you explicitly enable it). Fourthly, with two minor exceptions described below, the server doesn't do anything either, except in response to clients doing uploads or downloads. Finally, even when the server is actively serving clients it isn't too intensive a process. It uses between 40 and 56 MB of RAM on a 64-bit Linux server. We used to run eight of them on a single-core 2 GHz Opteron and had plenty of CPU to spare, so it isn't too CPU-intensive.

The two minor exceptions are that the server periodically inspects all of the ciphertext that it is storing on behalf of clients. It is configured to do this "in the background", by working for only a second at a time and waiting a few seconds between each step. The intent is that this will not noticeably impact other users of the same server. For all the details about when these background processes run and what they do, read the documentation in [source:src/allmydata/storage/crawler.py?annotate=blame&rev=3cb99364e6a83d0064d2838a0c470278903e19ac storage/crawler.py] and [source:src/allmydata/storage/expirer.py?annotate=blame&rev=e76092e16c64019857441e9020d6d8ba2bdaa0bc storage/expirer.py].

'''[=#Q19_repair Q19:] If a storage server dies and a new one is installed, will Tahoe-LAFS automatically generate a new share of each file to store on the new one?'''

A: Not automatically (see also [#Q18_unobtrusive_software Q18]). There is a repair operation, but it starts only when the user triggers it, by clicking the "repair" button on the web user interface or running the "tahoe check" command. You can, of course, execute the "tahoe check" command from a script. Kevin Reid posted [//pipermail/tahoe-dev/2009-October/003012.html his cron script] with which he has configured his node to repair all files every night.
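As a sketch of what such a script might look like (assuming your files live under an alias named {{{tahoe:}}}, which is an assumption about your setup), something like this could be run nightly from cron; see Kevin Reid's script linked above for a fuller example:

{{{#!python
# Minimal sketch of a scheduled deep-check-and-repair job. Adjust the alias to your setup.
import subprocess
import sys

result = subprocess.run(
    ["tahoe", "deep-check", "--repair", "--add-lease", "tahoe:"],
    capture_output=True, text=True,
)
sys.stdout.write(result.stdout)      # keep the report in the cron mail / log
sys.exit(result.returncode)
}}}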
'''[=#Q20_revocation Q20:] What about revoking access to a file or directory?'''

A: Please see these mailing list threads:
 * [//pipermail/tahoe-dev/2011-June/006387.html Tahoe Access Control]
 * [//pipermail/tahoe-dev/2011-June/006388.html question about sharing...] (especially [//pipermail/tahoe-dev/2011-June/006427.html this message by Brian Warner])
 * [//pipermail/tahoe-dev/2011-June/006424.html revocation of read-access to an immutable file]

'''[=#Q21_NAT Q21:] How come my client is sometimes connected to my server even though the server is behind NAT?'''

A: Ideally, all clients would attempt to open connections to all servers, and all servers would attempt to open connections to all clients. Then, as long as the client is not behind NAT, the two could connect even if the server is behind NAT. However, this is not currently the case. '''Currently''', all clients attempt to open connections to all servers, but if there is a connection between two Tahoe-LAFS processes (i.e. Tahoe-LAFS nodes), it can be re-used for any client or server in either node. So, when you enable a storage server on the publicly facing machine, the node behind NAT initiates a TCP connection to the node on the publicly facing machine. Once that connection is established, it enables the node there to ''use'' the storage server behind NAT. Related issue: comment:7:ticket:1086

'''[=#Q22_literalcaps Q22:] What are literal caps?'''

A: Literal caps (or LIT caps) are simply the base32 encoding of the file data, and are used for very small files. The threshold is 55 bytes (source: [source:src/allmydata/immutable/upload.py?annotate=blame&rev=196bd583b6c4959c60d3f73cdcefc9edda6a38ae#L1504 immutable/upload.py]), which is the break-even point at which the LIT filecap is the same length as a typical CHK filecap. They are sufficient (you don't even need network access to turn the LIT filecap into the data) and necessary (if you don't know the filecap for my data, you can't figure out the data). See this mailing list thread:
 * [//pipermail/tahoe-dev/2010-April/004235.html Storing a small file leads to a weird read capability] (especially [//pipermail/tahoe-dev/2010-April/004237.html this message by Brian Warner])

Literal caps are supported for immutable files and immutable directories (see [wiki:Capabilities the Capabilities wiki page]). Whenever the contents of the file or directory are small enough that it is more efficient to fit the contents into the cap itself than to store the contents remotely and use the cap to fetch them, a literal cap is used.
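As an illustration of the idea, the sketch below builds a LIT-style cap from small file contents. It follows the {{{URI:LIT:}}} prefix and lowercase unpadded base32 used by Tahoe-LAFS caps, but it is an illustration, not the code in immutable/upload.py:

{{{#!python
# Illustration only: a literal cap simply embeds the file's data in the cap string.
import base64

def lit_cap(data: bytes) -> str:
    if len(data) > 55:
        raise ValueError("too large for a literal cap; a CHK cap would be used instead")
    b32 = base64.b32encode(data).decode("ascii").rstrip("=").lower()
    return "URI:LIT:" + b32

print(lit_cap(b"hello, world"))   # the cap *is* the data; no servers are needed to read it
}}}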
'''[=#Q23_FUSE Q23:] Can I access files stored in Tahoe-LAFS via FUSE?'''

A: Yes, but it is not recommended, because while it will work for certain usages, its performance will slow to a crawl for other usages. You can try it and see whether your particular uses happen to fit into its performance contours. Tahoe-LAFS comes with an [source:docs/frontends/FTP-and-SFTP.rst SFTP server]. If you point [http://fuse.sourceforge.net/sshfs.html sshfs] at the SFTP server then you have access to Tahoe-LAFS through FUSE. Alternatively, [wiki:pyFilesystem pyfilesystem] interfaces directly with Tahoe-LAFS through the latter's [source:docs/frontends/webapi.rst web-API] and provides both FUSE and Microsoft Windows filesystem access. See #1353 for discussion of possible improvements to FUSE integration.

There will be fatal performance problems with the FUSE interface if your apps use it in a way that doesn't fit Tahoe-LAFS's semantics, so that the FUSE layer is required to make many copies of entire files in order to emulate the desired semantics. See [http://lists.alioth.debian.org/pipermail/freedombox-discuss/2011-November/003162.html Zooko's post to freedombox-discuss] and [https://plus.google.com/108313527900507320366/posts/ZrgdgLhV3NG Zooko's post to Google+].

'''[=#Q24_smallgrid Q24:] How should I set up k, h, N on my small private grid?'''

A: Assume that at least some of your nodes are on the same LAN and others are widely distributed (VPSes over the Internet, 'recycled servers' from, for example, Atlas Networks, etc.), that the total number of nodes is not too big, and that all of your nodes are under your control (at least in the sense that they are VPSes or servers rented in your name, or in the name of friends you trust to pay on time). Also assume that if some storage nodes are permanently down, you can manually reconfigure the gateways you use and run a repair.

Please note that it's much better to have nodes with roughly the same amount of space (a 500GB node and a 5GB node in the same grid is not a good idea; if for some odd reason you must have them both, read http://bigpig.org/twiki/bin/view/Main/VolunteerGrid2Philosophies and think again, and if you still think you must, don't count the small one in S in the calculation below).

Let S be the total number of nodes. Then set k=3, N=S, and h=N-2.

The following links may also be helpful:
 * https://tahoe-lafs.org/pipermail/tahoe-dev/2011-October/006754.html
 * https://tahoe-lafs.org/pipermail/tahoe-dev/2011-October/006757.html
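A sketch of turning that rule of thumb into configuration: the {{{shares.needed}}}, {{{shares.happy}}} and {{{shares.total}}} options go in the {{{[client]}}} section of {{{tahoe.cfg}}}, and the example total of 10 nodes is hypothetical.

{{{#!python
# Sketch: derive suggested tahoe.cfg encoding settings from the rule of thumb above.
def suggest_encoding(total_nodes: int):
    k = 3                      # shares needed to recover a file
    n = total_nodes            # one share per storage node
    h = n - 2                  # "happiness": tolerate two nodes being down at upload time
    return k, h, n

k, h, n = suggest_encoding(10)     # hypothetical grid of 10 nodes
print(f"[client]\nshares.needed = {k}\nshares.happy = {h}\nshares.total = {n}")
}}}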
'''[=#Q25_sharespreading Q25:] Is there a process or command to make shares spread to new storage servers?'''

A: This is called "rebalancing". It isn't currently implemented, but the repair function can sometimes accomplish a similar result. Repair of immutable files will upload shares to servers if necessary to reach "servers-of-happiness", which sometimes has the desired effect of uploading shares to newly added servers. Repair of mutable files never uploads new shares. Here are tickets about improving the rebalancing behavior: #232, #1657, #699, #661, #543.

'''[=#Q26_compile_error Q26:] What do I do about this compile error?'''

A: See [wiki:CompileError].

'''[=#Q27_ipv6 Q27:] Is IPv6 supported?'''

A: Not yet; see ticket #867.

'''[=#Q28_delete_files Q28:] How do I delete files or folders from a Tahoe-LAFS grid?'''

A: Tahoe-LAFS is designed to allow multiple users to access the same files. You wouldn't like it if one day a file of yours had been deleted by somebody else, right? Therefore, we have to solve The Garbage Collection Problem. The current solution is: renew leases every month or so, and delete any files whose leases nobody has renewed in more than a month or so. For more details read [[https://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/docs/garbage-collection.rst|Garbage Collection in Tahoe]].

'''[=#Q29_bandwidth_performance Q29:] What is the actual operational read/write speed of a Tahoe-LAFS grid?'''

A: It depends on many factors, from your hardware to your TCP/IP stack parameters, and much, much more! In practice, the following results have been reported: '''16 Mbps throughput for writing and about 8.8 Mbps for reading''' (based on a grid of 24 storage nodes on 24 VMs running under !OpenStack in 4 data centres; each VM had 2 VCPUs, 4GB of RAM, and 50GB of disk space). You can read the following section for more details: [[https://tahoe-lafs.org/trac/tahoe-lafs/wiki/Performance|Performance]].

'''[=#Q30_authorization Q30:] How can I prevent intruders from using my Tahoe-LAFS web interface? Even without knowing exact object caps, they would be able to see statistics and upload objects.'''

A: There is no such built-in authorization capability in Tahoe-LAFS; security is based on secret object caps. Meanwhile, you can forbid unauthorized access to your Tahoe-LAFS WUI by using a firewall (iptables, ipfw, etc.) and combining it with proxy-server authorization and redirection (nginx, Apache, Squid, etc.).

'''[=#Q31_multiple_users Q31:] I've got multiple users connected to the same directory and their experience is really poor. What can I do?'''

A: If many people have write access to the same directories, then they'll probably get failures and stalls a lot, regardless of the frontend, and if they keep doing it they might eventually lose the data, should a bunch of servers get disconnected right while they are writing. Currently we advise people to adopt a style of usage where each user gets exclusive write access to one directory, and read access to the directories of many other users. This is a limitation that we hope to lift in the future.

'''[=#Q32_remove_storage_server Q32:] I have removed storage servers but their names still appear on the list. How can I remove them from the list?'''

A: You can just ignore them — having their names on the list doesn't do any harm. But if you restart the introducer, their names will disappear from the list.

'''[=#Q33_stats_access Q33:] Can I tell how much storage space is offered by storage servers?'''

A: That information is currently broadcast by storage servers and collected by clients, but not yet displayed to the human user of the client. For more information, see [source:trunk/docs/stats.rst docs/stats.rst]. The ticket to add a display of that information for the human user is #648.

'''[=#Q34_better_logging Q34:] There's not a lot of information in the logs. How can I get more verbose logging?'''

A: The default logfiles provided by Tahoe-LAFS do not contain much information. To really see what's going on, you will need to use the "flogtool" utility to expose detailed logging. For more information, see [source:trunk/docs/logging.rst].

'''[=#Q35_backing_up_caps Q35:] How do I back up the decryption key and other information necessary to recover my files?'''

A: In Tahoe-LAFS the decryption keys are bundled together with the other necessary information into a short string called a ''capability'' (see the [//trac/tahoe-lafs/browser/docs/architecture.rst#capabilities architecture document]). To back up your capabilities, we recommend printing two or more copies of them on a printer and storing the resulting pieces of paper in a safe place.