= Known Issues =

Below is a list of known issues in older releases of Tahoe-LAFS, and how to
manage them. The current version of this file can be found at

https://tahoe-lafs.org/source/tahoe/trunk/docs/historical/historical_known_issues.txt

Issues in newer releases of Tahoe-LAFS can be found at:

https://tahoe-lafs.org/source/tahoe/trunk/docs/known_issues.rst

== issues in Tahoe v1.8.2, released 30-Jan-2011 ==

Unauthorized deletion of an immutable file by its storage index
---------------------------------------------------------------

Due to a flaw in the Tahoe-LAFS storage server software in v1.3.0 through
v1.8.2, a person who knows the "storage index" that identifies an immutable
file can cause the server to delete its shares of that file.

If an attacker can cause enough shares to be deleted from enough storage
servers, this deletes the file; with the default 3-of-10 encoding, a file
becomes unrecoverable once fewer than 3 of its 10 shares remain.

This vulnerability does not enable anyone to read file contents without
authorization (confidentiality), nor to change the contents of a file
(integrity).

A person could learn the storage index of a file in several ways:

1. By being granted the authority to read the immutable file—i.e. by being
   granted a read capability to the file. They can determine the file's
   storage index from its read capability (a sketch of this derivation
   follows the list).

2. By being granted a verify capability to the file. They can determine the
   file's storage index from its verify capability. This case probably
   doesn't happen often because users typically don't share verify caps.

3. By operating a storage server and receiving a request from a client that
   has a read cap or a verify cap. If the client attempts to upload,
   download, or verify the file with that storage server, then the server
   operator can learn the file's storage index, even if the server doesn't
   actually hold a share of the file.

4. By gaining read access to an existing storage server's local filesystem,
   and inspecting the directory structure that it stores its shares in. They
   can thus learn the storage indexes of all files that the server is holding
   at least one share of. Normally only the operator of a storage server
   would be able to inspect its local filesystem, so this requires either
   being such an operator or somehow gaining the ability to inspect the
   server's local filesystem.
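
For illustration, here is a minimal sketch of the derivation mentioned in
items 1 and 2, using Tahoe's own "allmydata" package. It assumes an
environment where that package is importable (for example, the virtualenv
of a Tahoe installation); the function and method names match recent
releases but are not guaranteed for every historical version:

    # Sketch: derive a file's storage index from a read cap or verify cap.
    # Pass a capability string as the only command-line argument.
    import sys
    from binascii import hexlify

    from allmydata import uri

    cap = uri.from_string(sys.argv[1])
    storage_index = cap.get_storage_index()   # 16-byte binary identifier

    # Tahoe normally displays storage indexes base32-encoded; hex is used
    # here only to keep the sketch free of extra helpers.
    print(hexlify(storage_index).decode("ascii"))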

*how to manage it*

Tahoe-LAFS version v1.8.3 or newer (except v1.9a1) no longer has this flaw;
if you upgrade a storage server to a fixed release then that server is no
longer vulnerable to this problem.
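
If you want to audit a grid for this issue, one approach is to compare each
storage server's advertised Tahoe-LAFS version (for example, as shown on
your gateway's welcome page) against the vulnerable range. The helper below
is only an illustrative sketch; real version strings may carry suffixes
that need more careful handling than this:

    # Sketch: decide whether an advertised Tahoe-LAFS version string falls
    # in the range vulnerable to unauthorized deletion by storage index
    # (v1.3.0 through v1.8.2, plus the v1.9a1 prerelease, per the text
    # above). Illustrative only.
    def parse_version(version_string):
        # "allmydata-tahoe/1.8.2" or "v1.8.2" -> (1, 8, 2)
        v = version_string.split("/")[-1].lstrip("v")
        parts = []
        for piece in v.split(".")[:3]:
            digits = "".join(ch for ch in piece if ch.isdigit())
            parts.append(int(digits) if digits else 0)
        while len(parts) < 3:
            parts.append(0)
        return tuple(parts)

    def is_vulnerable(version_string):
        if "1.9a1" in version_string or "1.9.0a1" in version_string:
            return True   # the v1.9a1 prerelease still has the flaw
        return (1, 3, 0) <= parse_version(version_string) <= (1, 8, 2)

    # Example:
    #   is_vulnerable("allmydata-tahoe/1.8.2")  -> True
    #   is_vulnerable("1.8.3")                  -> False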

Note that the issue is local to each storage server independently of other
storage servers—when you upgrade a storage server, that particular storage
server can no longer be tricked into deleting its shares of the target file.

If you can't immediately upgrade your storage server to a version of
Tahoe-LAFS that eliminates this vulnerability, then you could temporarily
shut down your storage server. This would of course negatively impact
availability—clients would not be able to upload shares to or download
shares from that particular storage server while it was shut down—but it
would protect the shares already stored on that server from being deleted
for as long as the server is shut down.

If the servers that store shares of your file are running a version of
Tahoe-LAFS with this vulnerability, then you should think about whether
someone can learn the storage indexes of your files by one of the methods
described above. A person cannot exploit this vulnerability unless they have
received a read cap or verify cap, or they control a storage server that has
been queried about this file by a client that has a read cap or a verify cap.

Tahoe-LAFS does not currently have a mechanism to limit which storage servers
can connect to your grid, but it does have a way to see which storage servers
have been connected to the grid. The Introducer's front page in the Web User
Interface has a list of all storage servers that the Introducer has ever
seen, along with the first time and the most recent time that it saw them.
Each Tahoe-LAFS gateway maintains a similar list on its front page in its Web
User Interface, showing all of the storage servers that it learned about from
the Introducer, when it first connected to that storage server, and when it
most recently connected to that storage server. These lists are stored in
memory and are reset to empty when the process is restarted.

See ticket `#1528`_ for technical details.

.. _#1528: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1528



== issues in Tahoe v1.1.0, released 2008-06-11 ==

(Tahoe v1.1.0 was superseded by v1.2.0, which was released 2008-07-21.)

=== more than one file can match an immutable file cap ===

In Tahoe v1.0 and v1.1, a flaw in the cryptographic integrity check
makes it possible for the original uploader of an immutable file to
produce more than one immutable file matching the same capability, so
that different downloads using the same capability could result in
different files. This flaw can be exploited only by the original
uploader of an immutable file, which means that it is not a severe
vulnerability: you can still rely on the integrity check to make sure
that the file you download with a given capability is a file that the
original uploader intended. The only issue is that you can't assume
that every time you use the same capability to download a file you'll
get the same file.

==== how to manage it ====

This was fixed in Tahoe v1.2.0, released 2008-07-21, under ticket
#491. Upgrade to that release of Tahoe and then you can rely on the
property that there is only one file that you can download using a
given capability. If you are still using Tahoe v1.0 or v1.1, then
remember that the original uploader could produce multiple files that
match the same capability: for example, if someone gives you a
capability and you use it to download a file, and your friend uses the
same capability to download a file, the two of you could get different
files.
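
If you are stuck on v1.0 or v1.1 and want to check whether two parties are
actually seeing the same bytes for a given immutable cap, one workaround is
to download the file through each party's gateway and compare digests. This
is only a sketch: the gateway URLs are placeholders for your own
deployment, and it assumes the standard "/uri/<cap>" web-API download path:

    # Sketch: detect the "more than one file per cap" condition by
    # downloading the same cap via two gateways and comparing SHA-256
    # digests. Gateway URLs are placeholders.
    import hashlib

    try:
        from urllib.request import urlopen      # Python 3
        from urllib.parse import quote
    except ImportError:                          # Python 2
        from urllib2 import urlopen
        from urllib import quote

    GATEWAYS = ["http://127.0.0.1:3456", "http://127.0.0.1:3457"]

    def digest_via_gateway(gateway, cap):
        # The Tahoe web API serves file contents at /uri/<capability>.
        url = "%s/uri/%s" % (gateway, quote(cap, safe=":"))
        h = hashlib.sha256()
        response = urlopen(url)
        for chunk in iter(lambda: response.read(65536), b""):
            h.update(chunk)
        return h.hexdigest()

    def caps_agree(cap):
        digests = set(digest_via_gateway(gw, cap) for gw in GATEWAYS)
        return len(digests) == 1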


=== server out of space when writing mutable file ===

If a v1.0 or v1.1 storage server runs out of disk space or is
otherwise unable to write to its local filesystem, then problems can
ensue. For immutable files, this will not lead to any problem (the
attempt to upload that share to that server will fail, the partially
uploaded share will be deleted from the storage server's "incoming
shares" directory, and the client will move on to using another
storage server instead).

If the write was an attempt to modify an existing mutable file,
however, a problem will result: when the attempt to write the new
share fails (e.g. due to insufficient disk space), then it will be
aborted and the old share will be left in place. If enough such old
shares are left, then a subsequent read may get those old shares and
see the file in its earlier state, which is a "rollback" failure.
With the default encoding parameters (3-of-10), six old shares are
enough to potentially lead to a rollback failure.

==== how to manage it ====

Make sure your Tahoe storage servers don't run out of disk space.
This means refusing storage requests before the disk fills up. There
are a couple of ways to do that with v1.1.

First, there is a configuration option named "sizelimit" which will
cause the storage server to do a "du"-style recursive examination of
its directories at startup, and then, if the sum of the sizes of the
files found therein is greater than the "sizelimit" number, it will
reject requests by clients to write new immutable shares.

However, that examination can take a long time (something on the
order of a minute per 10 GB of data stored in the Tahoe server), and
the Tahoe server will be unavailable to clients during that time.
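
For illustration, the startup examination amounts to recursively summing
the sizes of the files under the server's storage directory and comparing
the total to the configured limit. The sketch below shows that idea only;
it is not Tahoe's actual implementation, and the path and limit are
placeholders:

    # Sketch of the "du"-style accounting described above: recursively sum
    # file sizes under the storage directory and compare against a limit.
    import os

    def used_bytes(storage_dir):
        total = 0
        for dirpath, dirnames, filenames in os.walk(storage_dir):
            for name in filenames:
                try:
                    total += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    pass   # file vanished or is unreadable; skip it
        return total

    def accepting_new_immutable_shares(storage_dir, sizelimit_bytes):
        # Once usage exceeds the limit, new immutable shares are refused;
        # mutable shares are unaffected (see the note below).
        return used_bytes(storage_dir) <= sizelimit_bytes

    # Example (placeholders): a 100 GB limit on a node's storage directory.
    # accepting_new_immutable_shares("/var/tahoe/storage", 100 * 10**9)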

Another option is to set the "readonly_storage" configuration option
on the storage server before startup. This will cause the storage
server to reject all requests to upload new immutable shares.

Note that neither of these configurations affects mutable shares: even
if sizelimit is configured and the storage server is currently using
more space than allowed, or even if readonly_storage is configured,
servers will continue to accept new mutable shares and will continue
to accept requests to overwrite existing mutable shares.

Mutable files are typically used only for directories, and are usually
much smaller than immutable files, so if you use one of these
configurations to stop the influx of immutable files while there is
still sufficient disk space to receive an influx of (much smaller)
mutable files, you may be able to avoid the potential for "rollback"
failure.

A future version of Tahoe will include a fix for this issue. Here is
[https://lists.tahoe-lafs.org/pipermail/tahoe-dev/2008-May/000628.html the
mailing list discussion] about how that future version will work.


=== pyOpenSSL/Twisted defect causes false alarms in tests ===

The combination of Twisted v8.0 or Twisted v8.1 with pyOpenSSL v0.7
causes the Tahoe v1.1 unit tests to fail, even though the behavior of
Tahoe itself that is being tested is correct.

==== how to manage it ====

If you are using Twisted v8.0 or Twisted v8.1 and pyOpenSSL v0.7, then
please ignore the ERROR "Reactor was unclean" in test_system and
test_introducer. Upgrading to a newer version of Twisted or pyOpenSSL
will make those false alarms stop happening (as will downgrading to an
older version of either of those packages).

== issues in Tahoe v1.0.0, released 2008-03-25 ==

(Tahoe v1.0 was superseded by v1.1, which was released 2008-06-11.)

=== server out of space when writing mutable file ===

In addition to the problems caused by insufficient disk space
described above, v1.0 clients that are writing mutable files while the
servers fail to write to their filesystems are likely to think the
write succeeded when it in fact failed. This can cause data loss.

==== how to manage it ====

Upgrade the client to v1.1, or make sure that servers are always able
to write to their local filesystem (including that there is space
available) as described in "server out of space when writing mutable
file" above.


=== server out of space when writing immutable file ===

Tahoe v1.0 clients that are using v1.0 servers which are unable to
write to their filesystem during an immutable upload will correctly
detect the first failure, but if they retry the upload without
restarting the client, or if another client attempts to upload the
same file, the second upload may appear to succeed when it hasn't,
which can lead to data loss.

==== how to manage it ====

Upgrading either or both of the client and the server to v1.1 will fix
this issue. It can also be avoided by ensuring that the servers are
always able to write to their local filesystem (including that there
is space available) as described in "server out of space when writing
mutable file" above.


=== large directories or mutable files of certain sizes ===

If a client attempts to upload a large mutable file with a size
greater than about 3,139,000 bytes and less than or equal to 3,500,000
bytes, then it will fail but appear to succeed, which can lead to data
loss.

(Mutable files larger than 3,500,000 bytes are refused outright.) The
symptom of the failure is very high memory usage (3 GB of memory) and
100% CPU for about 5 minutes, before the operation appears to succeed,
although it hasn't.

Directories are stored in mutable files, and a directory of
approximately 9000 entries may fall into this range of mutable file
sizes (depending on the size of the filenames or other metadata
associated with the entries).

==== how to manage it ====

This was fixed in v1.1, under ticket #379. If the client is upgraded
to v1.1, then it will fail cleanly instead of falsely appearing to
succeed when it tries to write a file whose size is in this range. If
the server is also upgraded to v1.1, then writes of mutable files
whose size is in this range will succeed. (If the server is upgraded
to v1.1 but the client is still v1.0, then the client will still
suffer this failure.)
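
If you cannot yet upgrade a v1.0 client, a crude client-side precaution is
to check the size of a mutable file before writing it, using the boundaries
quoted above. A minimal sketch (the constants come from the description
above; the lower boundary is approximate, so leave yourself a margin):

    # Sketch: is this mutable-file size in the range that silently fails
    # on Tahoe v1.0?
    DANGEROUS_MIN = 3139000   # bytes; "greater than about 3,139,000"
    HARD_LIMIT = 3500000      # bytes; larger mutable files are refused

    def in_dangerous_range(num_bytes):
        # True if a v1.0 client writing a mutable file of this size may
        # appear to succeed while actually failing.
        return DANGEROUS_MIN < num_bytes <= HARD_LIMIT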


=== uploading files greater than 12 GiB ===

If a Tahoe v1.0 client uploads a file greater than 12 GiB in size, the
file will be silently corrupted so that it is not retrievable, but the
client will think that it succeeded. This is a "data loss" failure.

==== how to manage it ====

Don't upload files larger than 12 GiB. If you have previously uploaded
files of that size, assume that they have been corrupted and are not
retrievable from the Tahoe storage grid. Tahoe v1.1 clients will
refuse, with a clean failure, to upload files larger than 12 GiB. A
future release of Tahoe will remove this limitation so that larger
files can be uploaded.
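
As a precaution while still on v1.0, you might also scan the files you plan
to upload and flag anything over the limit. A minimal sketch (the 12 GiB
threshold comes from the text above; everything else is an assumption):

    # Sketch: flag local files that a Tahoe v1.0 client should not upload
    # because they exceed the 12 GiB limit described above.
    import os

    TWELVE_GIB = 12 * 2**30   # 12 GiB in bytes

    def files_too_large(paths):
        # Yield (path, size) for each regular file larger than 12 GiB.
        for path in paths:
            if os.path.isfile(path):
                size = os.path.getsize(path)
                if size > TWELVE_GIB:
                    yield path, size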