.. -*- coding: utf-8-with-signature -*-

==================
The Tahoe BackupDB
==================

1.  `Overview`_
2.  `Schema`_
3.  `Upload Operation`_
4.  `Directory Operations`_

Overview
========

To speed up backup operations, Tahoe maintains a small database known as the
"backupdb". This is used to avoid re-uploading files which have already been
uploaded recently.

This database lives in ``~/.tahoe/private/backupdb.sqlite``, and is a SQLite
single-file database. It is used by the "``tahoe backup``" command. In the
future, it may optionally be used by other commands such as "``tahoe cp``".

The purpose of this database is twofold: to manage the file-to-cap
translation (the "upload" step) and the directory-to-cap translation (the
"mkdir-immutable" step).

The overall goal of optimizing backup is to reduce the work required when the
source disk has not changed (much) since the last backup. In the ideal case,
running "``tahoe backup``" twice in a row, with no intervening changes to the
disk, will not require any network traffic. Minimal changes to the source
disk should result in minimal traffic.

This database is optional. If it is deleted, the worst effect is that a
subsequent backup operation may use more effort (network bandwidth, CPU
cycles, and disk IO) than it would have without the backupdb.

The database uses sqlite3, which is included as part of the standard Python
library with Python 2.5 and later. For Python 2.4, Tahoe will try to install the
"pysqlite" package at build-time, but this will succeed only if sqlite3 with
development headers is already installed.  On Debian and Debian derivatives
you can install the "python-pysqlite2" package (which, despite the name,
actually provides sqlite3 rather than sqlite2). On old distributions such
as Debian etch (4.0 "oldstable") or Ubuntu Edgy (6.10) the "python-pysqlite2"
package won't work, but the "sqlite3-dev" package will.

Schema
======

The database contains the following tables::

  CREATE TABLE version
  (
   version integer  -- contains one row, set to 1
  );

  CREATE TABLE local_files
  (
   path  varchar(1024) PRIMARY KEY,  -- index, this is an absolute UTF-8-encoded local filename
   size  integer,         -- os.stat(fn)[stat.ST_SIZE]
   mtime number,          -- os.stat(fn)[stat.ST_MTIME]
   ctime number,          -- os.stat(fn)[stat.ST_CTIME]
   fileid integer
  );

  CREATE TABLE caps
  (
   fileid integer PRIMARY KEY AUTOINCREMENT,
   filecap varchar(256) UNIQUE    -- URI:CHK:...
  );

  CREATE TABLE last_upload
  (
   fileid INTEGER PRIMARY KEY,
   last_uploaded TIMESTAMP,
   last_checked TIMESTAMP
  );

  CREATE TABLE directories
  (
   dirhash varchar(256) PRIMARY KEY,
   dircap varchar(256),
   last_uploaded TIMESTAMP,
   last_checked TIMESTAMP
  );
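
The backupdb is an ordinary SQLite database, so it can be created or
inspected with Python's standard ``sqlite3`` module. The following sketch is
illustrative only (it is not the code Tahoe ships); it creates the tables
above in a fresh database and sets the single ``version`` row to 1::

  import os
  import sqlite3

  # A condensed copy of the CREATE TABLE statements shown above.
  SCHEMA = """
  CREATE TABLE version ( version INTEGER );  -- contains one row, set to 1
  CREATE TABLE local_files ( path VARCHAR(1024) PRIMARY KEY, size INTEGER,
                             mtime NUMBER, ctime NUMBER, fileid INTEGER );
  CREATE TABLE caps ( fileid INTEGER PRIMARY KEY AUTOINCREMENT,
                      filecap VARCHAR(256) UNIQUE );
  CREATE TABLE last_upload ( fileid INTEGER PRIMARY KEY,
                             last_uploaded TIMESTAMP, last_checked TIMESTAMP );
  CREATE TABLE directories ( dirhash VARCHAR(256) PRIMARY KEY, dircap VARCHAR(256),
                             last_uploaded TIMESTAMP, last_checked TIMESTAMP );
  """

  def open_backupdb(dbfile=os.path.expanduser("~/.tahoe/private/backupdb.sqlite")):
      """Open the backupdb, creating the tables on first use (sketch only)."""
      needs_init = not os.path.exists(dbfile)
      db = sqlite3.connect(dbfile)
      if needs_init:
          db.executescript(SCHEMA)
          db.execute("INSERT INTO version (version) VALUES (1)")
          db.commit()
      return db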

Upload Operation
================

The upload process starts with a pathname (like ``~/.emacs``) and wants to end up
with a file-cap (like ``URI:CHK:...``).

The first step is to convert the path to an absolute form
(``/home/warner/.emacs``) and do a lookup in the local_files table. If the path
is not present in this table, the file must be uploaded. The upload process
is (a sketch of the database bookkeeping follows the list):

1. record the file's size, ctime (which is the directory-entry change time or
   file creation time depending on OS) and modification time

2. upload the file into the grid, obtaining an immutable file read-cap

3. add an entry to the 'caps' table, with the read-cap, to get a fileid

4. add an entry to the 'last_upload' table, with the current time

5. add an entry to the 'local_files' table, with the fileid, the path,
   and the local file's size/ctime/mtime
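
Steps 3-5 amount to three rows of bookkeeping. A sketch of how they might be
recorded, given an open ``sqlite3`` connection ``db`` (hypothetical helper,
not Tahoe's actual code)::

  import os
  import stat
  import time

  def remember_upload(db, abspath, filecap):
      """Record a newly uploaded file (steps 3-5 above). Sketch only."""
      s = os.stat(abspath)
      size, mtime, ctime = s[stat.ST_SIZE], s[stat.ST_MTIME], s[stat.ST_CTIME]
      now = time.time()
      # step 3: store the read-cap in 'caps', yielding a fileid
      # (a real implementation would handle a filecap that is already present)
      fileid = db.execute("INSERT INTO caps (filecap) VALUES (?)",
                          (filecap,)).lastrowid
      # step 4: note when the file was uploaded (and, implicitly, checked)
      db.execute("INSERT INTO last_upload (fileid, last_uploaded, last_checked)"
                 " VALUES (?,?,?)", (fileid, now, now))
      # step 5: map the local path and its size/ctime/mtime to the fileid
      db.execute("INSERT INTO local_files (path, size, mtime, ctime, fileid)"
                 " VALUES (?,?,?,?,?)", (abspath, size, mtime, ctime, fileid))
      db.commit()
      return fileid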

If the path *is* present in 'local_files', the easy-to-compute identifying
information is compared: file size and ctime/mtime. If these differ, the file
must be uploaded. The row is removed from the local_files table, and the
upload process above is followed.
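
A sketch of that comparison (again given an open ``sqlite3`` connection
``db``; illustrative only, not Tahoe's actual code)::

  import os
  import stat

  def seems_unchanged(db, abspath):
      """Return the stored fileid if size/mtime/ctime still match, else None."""
      row = db.execute("SELECT size, mtime, ctime, fileid FROM local_files"
                       " WHERE path=?", (abspath,)).fetchone()
      if row is None:
          return None                  # never seen before: must upload
      size, mtime, ctime, fileid = row
      s = os.stat(abspath)
      if (s[stat.ST_SIZE], s[stat.ST_MTIME], s[stat.ST_CTIME]) != (size, mtime, ctime):
          # the identifiers differ: drop the stale row and follow the
          # upload process described above
          db.execute("DELETE FROM local_files WHERE path=?", (abspath,))
          db.commit()
          return None
      return fileid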

If the path is present but ctime or mtime differs, the file may have changed.
If the size differs, then the file has certainly changed. At this point, a
future version of the "backup" command might hash the file and look for a
match in an as-yet-defined table, in the hopes that the file has simply been
moved from somewhere else on the disk. This enhancement requires changes to
the Tahoe upload API before it can be significantly more efficient than
simply handing the file to Tahoe and relying upon the normal convergence to
notice the similarity.

If ctime, mtime, or size is different, the client will upload the file, as
above.

If these identifiers are the same, the client will assume that the file is
unchanged (unless the ``--ignore-timestamps`` option is provided, in which
case the client always re-uploads the file), and it may be allowed to skip
the upload. For safety, however, we require that the client periodically
perform a filecheck on these probably-already-uploaded files, and re-upload
anything that doesn't look healthy. The client looks the fileid up in the
'last_upload' table to see how long it has been since the file was last
checked.

A "random early check" algorithm should be used, in which a check is
performed with a probability that increases with the age of the previous
results. E.g. files that were last checked within a month are not checked,
files that were checked 5 weeks ago are re-checked with 25% probability, 6
weeks with 50%, more than 8 weeks are always checked. This reduces the
"thundering herd" of filechecks-on-everything that would otherwise result
when a backup operation is run one month after the original backup. If a
filecheck reveals the file is not healthy, it is re-uploaded.
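
One way to implement such a policy (illustrative only; the 4-to-8-week
linear ramp simply reproduces the example figures above)::

  import random
  import time

  def should_check(last_checked, now=None):
      """Random early check: decide whether to re-run a filecheck, given
      the timestamp of the previous check (sketch only)."""
      WEEK = 7 * 24 * 60 * 60
      now = time.time() if now is None else now
      age = now - last_checked
      if age < 4 * WEEK:
          return False       # checked within the last month: skip
      if age >= 8 * WEEK:
          return True        # stale for two months or more: always check
      # in between, ramp linearly: 5 weeks -> 25%, 6 weeks -> 50%, 7 -> 75%
      return random.random() < (age - 4 * WEEK) / (4.0 * WEEK)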

If the filecheck shows the file is healthy, or if the filecheck was skipped,
the client gets to skip the upload, and uses the previous filecap (from the
'caps' table) to add to the parent directory.

If a new file is uploaded, new entries are put in the 'caps' and 'last_upload'
tables, and an entry is made in the 'local_files' table to reflect the mapping
from local disk pathname to uploaded filecap. If an old file is re-uploaded,
the 'last_upload' entry is updated with the new timestamps. If an old file is
checked and found healthy, the 'last_upload' entry is updated.
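
The bookkeeping for those last two cases might look like this (a sketch
against an open ``sqlite3`` connection ``db``, not Tahoe's actual code; the
assumption that a healthy check refreshes only ``last_checked`` is ours)::

  import time

  def note_reuploaded(db, fileid, now=None):
      """An old file was re-uploaded: refresh both timestamps (sketch)."""
      now = time.time() if now is None else now
      db.execute("UPDATE last_upload SET last_uploaded=?, last_checked=?"
                 " WHERE fileid=?", (now, now, fileid))
      db.commit()

  def note_checked_healthy(db, fileid, now=None):
      """An old file was checked and found healthy: refresh last_checked."""
      now = time.time() if now is None else now
      db.execute("UPDATE last_upload SET last_checked=? WHERE fileid=?",
                 (now, fileid))
      db.commit()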

Relying upon timestamps is a compromise between efficiency and safety: a file
which is modified without changing the timestamp or size will be treated as
unmodified, and the "``tahoe backup``" command will not copy the new contents
into the grid. The ``--ignore-timestamps`` option can be used to disable this
optimization, forcing every byte of the file to be hashed and encoded.

Directory Operations
====================

Once the contents of a directory are known (a filecap for each file, and a
dircap for each directory), the backup process must find or create a tahoe
directory node with the same contents. The contents are hashed, and the hash
is queried in the 'directories' table. If found, the last-checked timestamp
is used to perform the same random-early-check algorithm described for files
above, but no new upload is performed. Since "``tahoe backup``" creates immutable
directories, it is perfectly safe to re-use a directory from a previous
backup.
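
The exact hash construction is an internal detail of "``tahoe backup``"; the
sketch below merely illustrates the idea of deriving a stable key from the
sorted (name, cap) pairs and consulting the 'directories' table (hypothetical
helper, not Tahoe's actual dirhash)::

  import hashlib

  def lookup_dircap(db, children):
      """Look up a previously created immutable directory by a hash of its
      contents. `children` maps child name -> file/dir cap; `db` is an open
      sqlite3 connection. Illustrative only: Tahoe's real dirhash is not
      necessarily computed this way."""
      h = hashlib.sha256()
      for name in sorted(children):
          # only names and caps participate; timestamps and other metadata
          # are deliberately ignored
          h.update(name.encode("utf-8") + b"\x00")
          h.update(children[name].encode("utf-8") + b"\x00")
      dirhash = h.hexdigest()
      row = db.execute("SELECT dircap FROM directories WHERE dirhash=?",
                       (dirhash,)).fetchone()
      return dirhash, (row[0] if row else None)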

If not found, the web-API "mkdir-immutable" operation is used to create a new
directory, and an entry is stored in the table.
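
For reference, here is a minimal sketch of driving that operation over HTTP
from Python. The gateway URL, the child-specification shape, and the helper
name are assumptions made for this sketch; webapi.rst is the authoritative
description of ``t=mkdir-immutable``::

  import json
  try:
      from urllib.request import Request, urlopen   # Python 3
  except ImportError:
      from urllib2 import Request, urlopen          # Python 2

  def mkdir_immutable(children, nodeurl="http://127.0.0.1:3456/"):
      """Create a deep-immutable directory via the web-API and return its
      dircap. `children` maps child name -> read-cap (files only, in this
      sketch). See webapi.rst for the real request format."""
      body = dict((name, ["filenode", {"ro_uri": cap}])
                  for name, cap in children.items())
      req = Request(nodeurl + "uri?t=mkdir-immutable",
                    data=json.dumps(body).encode("utf-8"),
                    headers={"Content-Type": "application/json"})
      # the response body is the new directory's cap
      return urlopen(req).read().decode("utf-8").strip()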

The comparison operation ignores timestamps and metadata, and pays attention
solely to the file names and contents.

By using a directory-contents hash, the "``tahoe backup``" command is able to
re-use directories from other places in the backed up data, or from old
backups. This means that renaming a directory and moving a subdirectory to a
new parent both count as "minor changes" and will result in minimal Tahoe
operations and subsequent network traffic (new directories will be created
for the modified directory and all of its ancestors). It also means that you
can perform a backup ("#1"), delete a file or directory, perform a backup
("#2"), restore it, and then the next backup ("#3") will re-use the
directories from backup #1.

The best case is a null backup, in which nothing has changed. This will
result in minimal network bandwidth: one directory read and two modifies. The
``Archives/`` directory must be read to locate the latest backup, and must be
modified to add a new snapshot, and the ``Latest/`` directory will be updated to
point to that same snapshot.