Changeset 2cad199 in trunk for docs/architecture.rst
- Timestamp: 2017-06-06T17:01:52Z
- Branches: master
- Children: 0977e52, b8010ad
- Parents: 958f79d4 (diff), 705dc85 (diff)
- Files: 1 edited (docs/architecture.rst)

Note: this is a merge changeset; the changes displayed below correspond to the merge itself. Use the (diff) links above to see all the changes relative to each parent.
docs/architecture.rst

--- docs/architecture.rst (parent r958f79d4)
+++ docs/architecture.rst (r2cad199)
@@ -11,5 +11,5 @@
 5. `Server Selection`_
 6. `Swarming Download, Trickling Upload`_
-7. `The Filesystem Layer`_
+7. `The File Store Layer`_
 8. `Leases, Refreshing, Garbage Collection`_
 9. `File Repairer`_
@@ -23,5 +23,5 @@
 (See the `docs/specifications directory`_ for more details.)
 
-There are three layers: the key-value store, the filesystem, and the
+There are three layers: the key-value store, the file store, and the
 application.
 
@@ -34,5 +34,5 @@
 a few bytes and as large as tens of gigabytes are in common use.
 
-The middle layer is the decentralized filesystem: a directed graph in which
+The middle layer is the decentralized file store: a directed graph in which
 the intermediate nodes are directories and the leaf nodes are files. The leaf
 nodes contain only the data -- they contain no metadata other than the length
@@ -41,10 +41,10 @@
 different metadata if it is referred to through different edges.
 
-The top layer consists of the applications using the filesystem.
+The top layer consists of the applications using the file store.
 Allmydata.com used it for a backup service: the application periodically
-copies files from the local disk onto the decentralized filesystem. We later
+copies files from the local disk onto the decentralized file store. We later
 provide read-only access to those files, allowing users to recover them.
 There are several other applications built on top of the Tahoe-LAFS
-filesystem (see the RelatedProjects_ page of the wiki for a list).
+file store (see the RelatedProjects_ page of the wiki for a list).
 
 .. _docs/specifications directory: https://github.com/tahoe-lafs/tahoe-lafs/tree/master/docs/specifications
@@ -158,5 +158,5 @@
 self-authenticating, meaning that nobody can trick you into accepting a file
 that doesn't match the capability you used to refer to that file. The
-filesystem layer (described below) adds human-meaningful names atop the
+file store layer (described below) adds human-meaningful names atop the
 key-value layer.
 
@@ -320,13 +320,13 @@
 
 
-The Filesystem Layer
+The File Store Layer
 ====================
 
-The "filesystem" layer is responsible for mapping human-meaningful pathnames
+The "file store" layer is responsible for mapping human-meaningful pathnames
 (directories and filenames) to pieces of data. The actual bytes inside these
-files are referenced by capability, but the filesystem layer is where the
+files are referenced by capability, but the file store layer is where the
 directory names, file names, and metadata are kept.
 
-The filesystem layer is a graph of directories. Each directory contains a
+The file store layer is a graph of directories. Each directory contains a
 table of named children. These children are either other directories or
 files. All children are referenced by their capability.
@@ -354,9 +354,9 @@
 ======================================
 
-When a file or directory in the virtual filesystem is no longer referenced,
-the space that its shares occupied on each storage server can be freed,
-making room for other shares. Tahoe-LAFS uses a garbage collection ("GC")
-mechanism to implement this space-reclamation process. Each share has one or
-more "leases", which are managed by clients who want the file/directory to be
+When a file or directory in the file store is no longer referenced, the space
+that its shares occupied on each storage server can be freed, making room for
+other shares. Tahoe-LAFS uses a garbage collection ("GC") mechanism to
+implement this space-reclamation process. Each share has one or more
+"leases", which are managed by clients who want the file/directory to be
 retained. The storage server accepts each share for a pre-defined period of
 time, and is allowed to delete the share if all of the leases are cancelled
@@ -379,5 +379,5 @@
 permanent data loss (affecting the preservation of the file). Hard drives
 crash, power supplies explode, coffee spills, and asteroids strike. The goal
-of a robust distributed filesystem is to survive these setbacks.
+of a robust distributed file store is to survive these setbacks.
 
 To work against this slow, continual loss of shares, a File Checker is used
@@ -487,10 +487,5 @@
 
 The application layer can provide whatever access model is desired, built on
-top of this capability access model. The first big user of this system so far
-is allmydata.com. The allmydata.com access model currently works like a
-normal web site, using username and password to give a user access to her
-"virtual drive". In addition, allmydata.com users can share individual files
-(using a file sharing interface built on top of the immutable file read
-capabilities).
+top of this capability access model.
 
 
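
The file-store hunks above describe a concrete data structure: a directed graph in which directories are tables of named children, every child is referenced by capability, and metadata lives on the edges rather than on the files themselves. A minimal Python sketch of that model (all names here -- Capability, Edge, Directory, resolve, fetch -- are illustrative, not Tahoe-LAFS's real classes or APIs):

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass(frozen=True)
    class Capability:
        # An unguessable string that both locates a node and verifies its
        # contents; files and directories are always referenced this way.
        uri: str

    @dataclass
    class Edge:
        # Metadata lives on the edge, so the same file can carry different
        # metadata when reached through different directory entries.
        child_cap: Capability
        metadata: dict

    @dataclass
    class Directory:
        # A directory is just a table of named children (files or other
        # directories), each referenced by capability.
        children: Dict[str, Edge] = field(default_factory=dict)

    def resolve(root: Directory, path: List[str],
                fetch: Callable[[Capability], Directory]) -> Capability:
        # Walk a human-meaningful pathname down the graph to a capability.
        # `fetch` stands in for retrieving a directory node from the
        # key-value layer; it is an assumed helper, not a real API.
        node = root
        for name in path[:-1]:
            node = fetch(node.children[name].child_cap)
        return node.children[path[-1]].child_cap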
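Likewise, the leases/GC hunk states the reclamation rule precisely: a storage server accepts a share for a pre-defined period and may delete it once every lease on it has expired or been cancelled. A minimal sketch of that rule under the same caveat (hypothetical names; the lease period is arbitrary, not Tahoe-LAFS's actual default):

    import time
    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class Share:
        data: bytes
        # owner id -> lease expiration time (seconds since the epoch)
        leases: Dict[str, float] = field(default_factory=dict)

    class StorageServer:
        LEASE_PERIOD = 31 * 24 * 3600  # pre-defined period; illustrative

        def __init__(self) -> None:
            self.shares: Dict[str, Share] = {}

        def add_or_renew_lease(self, share_id: str, owner: str) -> None:
            # Clients that want the file/directory retained keep
            # refreshing their lease on each of its shares.
            self.shares[share_id].leases[owner] = time.time() + self.LEASE_PERIOD

        def cancel_lease(self, share_id: str, owner: str) -> None:
            self.shares[share_id].leases.pop(owner, None)

        def garbage_collect(self) -> None:
            # A share may be deleted once every lease on it has expired
            # or been cancelled, reclaiming the space it occupied.
            now = time.time()
            for share_id, share in list(self.shares.items()):
                share.leases = {o: t for o, t in share.leases.items() if t > now}
                if not share.leases:
                    del self.shares[share_id]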