Ticket #999: pluggable-backends-davidsarah-v17.darcs.patch

File pluggable-backends-davidsarah-v17.darcs.patch, 841.1 KB (added by davidsarah at 2011-09-29T08:24:10Z)

Completes the splitting of IStoredShare into IShareForReading and IShareForWriting. Does not include configuration changes.
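To illustrate the read/write split this patch series performs, here is a minimal, hypothetical sketch. Tahoe-LAFS declares these interfaces with zope.interface; plain Python ABCs are used below as stand-ins, and the method names beyond those mentioned in the patch log (e.g. write_share_data) are illustrative, not the actual API.

```python
# Hypothetical sketch: splitting one "stored share" abstraction into a
# read-side interface and a write-side interface, analogous to
# IStoredShare -> IShareForReading / IShareForWriting in this patch series.
# Plain ABCs stand in for Tahoe-LAFS's zope.interface declarations.
from abc import ABC, abstractmethod


class IShareForReading(ABC):
    """Read-side view of an immutable share."""

    @abstractmethod
    def get_shnum(self):
        """Return this share's number within its shareset."""

    @abstractmethod
    def readv(self, read_vector):
        """Read a list of (offset, length) extents; return a list of bytes."""


class IShareForWriting(ABC):
    """Write-side view of an immutable share being uploaded."""

    @abstractmethod
    def get_allocated_size(self):
        """Return the maximum size this share may grow to."""

    @abstractmethod
    def write_share_data(self, offset, data):
        """Write data at the given offset, within the allocated size."""


class InMemoryShare(IShareForReading, IShareForWriting):
    """Toy implementation backed by a bytearray, for illustration only."""

    def __init__(self, shnum, max_size):
        self._shnum = shnum
        self._max_size = max_size
        self._data = bytearray()

    def get_shnum(self):
        return self._shnum

    def get_allocated_size(self):
        return self._max_size

    def write_share_data(self, offset, data):
        end = offset + len(data)
        assert end <= self._max_size, "write past allocated size"
        if end > len(self._data):
            self._data.extend(b"\0" * (end - len(self._data)))
        self._data[offset:end] = data

    def readv(self, read_vector):
        return [bytes(self._data[o:o + l]) for (o, l) in read_vector]
```

The point of the split is that download code paths can depend only on IShareForReading, so write-side methods cannot leak into readers.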

55 patches for repository http://tahoe-lafs.org/source/tahoe/trunk:

Thu Aug 25 01:32:17 BST 2011  david-sarah@jacaranda.org
  * interfaces.py: 'which -> that' grammar cleanup.

Tue Sep 20 00:29:26 BST 2011  david-sarah@jacaranda.org
  * Pluggable backends -- new and moved files, changes to moved files. refs #999

Tue Sep 20 00:32:56 BST 2011  david-sarah@jacaranda.org
  * Pluggable backends -- all other changes. refs #999

Tue Sep 20 04:38:03 BST 2011  david-sarah@jacaranda.org
  * Work-in-progress, includes fix to bug involving BucketWriter. refs #999

Tue Sep 20 18:17:37 BST 2011  david-sarah@jacaranda.org
  * docs/backends: document the configuration options for the pluggable backends scheme. refs #999

Wed Sep 21 04:12:07 BST 2011  david-sarah@jacaranda.org
  * Fix some incorrect attribute accesses. refs #999

Wed Sep 21 04:16:25 BST 2011  david-sarah@jacaranda.org
  * docs/backends/S3.rst: remove Issues section. refs #999

Wed Sep 21 04:17:05 BST 2011  david-sarah@jacaranda.org
  * docs/backends/S3.rst, disk.rst: describe type of space settings as 'quantity of space', not 'str'. refs #999

Wed Sep 21 19:46:49 BST 2011  david-sarah@jacaranda.org
  * More fixes to tests needed for pluggable backends. refs #999

Wed Sep 21 23:14:21 BST 2011  david-sarah@jacaranda.org
  * Fix more shallow bugs, mainly FilePathification. Also, remove the max_space_per_bucket parameter from BucketWriter since it can be obtained from the _max_size attribute of the share (via a new get_allocated_size() accessor). refs #999

Wed Sep 21 23:20:38 BST 2011  david-sarah@jacaranda.org
  * uri.py: resolve a conflict between trunk and the pluggable-backends patches. refs #999

Thu Sep 22 05:54:51 BST 2011  david-sarah@jacaranda.org
  * Fix some more test failures. refs #999

Thu Sep 22 19:30:08 BST 2011  david-sarah@jacaranda.org
  * Fix most of the crawler tests. refs #999

Thu Sep 22 19:33:23 BST 2011  david-sarah@jacaranda.org
  * Reinstate the cancel_lease methods of ImmutableDiskShare and MutableDiskShare, since they are needed for lease expiry. refs #999

Fri Sep 23 02:20:44 BST 2011  david-sarah@jacaranda.org
  * Blank line cleanups.

Fri Sep 23 05:08:25 BST 2011  david-sarah@jacaranda.org
  * mutable/publish.py: elements should not be removed from a dictionary while it is being iterated over. refs #393

Fri Sep 23 05:10:03 BST 2011  david-sarah@jacaranda.org
  * A few comment cleanups. refs #999

Fri Sep 23 05:11:15 BST 2011  david-sarah@jacaranda.org
  * Move advise_corrupt_share to allmydata/storage/backends/base.py, since it will be common to the disk and S3 backends. refs #999

Fri Sep 23 05:13:14 BST 2011  david-sarah@jacaranda.org
  * Add incomplete S3 backend. refs #999

Fri Sep 23 21:37:23 BST 2011  david-sarah@jacaranda.org
  * interfaces.py: add fill_in_space_stats method to IStorageBackend. refs #999

Fri Sep 23 21:44:25 BST 2011  david-sarah@jacaranda.org
  * Remove redundant si_s argument from check_write_enabler. refs #999

Fri Sep 23 21:46:11 BST 2011  david-sarah@jacaranda.org
  * Implement readv for immutable shares. refs #999

Fri Sep 23 21:49:14 BST 2011  david-sarah@jacaranda.org
  * The cancel secret needs to be unique, even if it isn't explicitly provided. refs #999

Fri Sep 23 21:49:45 BST 2011  david-sarah@jacaranda.org
  * Make EmptyShare.check_testv a simple function. refs #999

Fri Sep 23 21:52:19 BST 2011  david-sarah@jacaranda.org
  * Update the null backend to take into account interface changes. Also, it now records which shares are present, but not their contents. refs #999

Fri Sep 23 21:53:45 BST 2011  david-sarah@jacaranda.org
  * Update the S3 backend. refs #999

Fri Sep 23 21:55:10 BST 2011  david-sarah@jacaranda.org
  * Minor cleanup to disk backend. refs #999

Fri Sep 23 23:09:35 BST 2011  david-sarah@jacaranda.org
  * Add 'has-immutable-readv' to server version information. refs #999

Tue Sep 27 08:09:47 BST 2011  david-sarah@jacaranda.org
  * util/deferredutil.py: add some utilities for asynchronous iteration. refs #999

Tue Sep 27 08:14:03 BST 2011  david-sarah@jacaranda.org
  * test_storage.py: fix test_status_bad_disk_stats. refs #999

Tue Sep 27 08:15:44 BST 2011  david-sarah@jacaranda.org
  * Cleanups to disk backend. refs #999

Tue Sep 27 08:18:55 BST 2011  david-sarah@jacaranda.org
  * Cleanups to S3 backend (not including Deferred changes). refs #999

Tue Sep 27 08:28:48 BST 2011  david-sarah@jacaranda.org
  * test_storage.py: fix test_no_st_blocks. refs #999

Tue Sep 27 08:35:30 BST 2011  david-sarah@jacaranda.org
  * mutable/publish.py: resolve conflicting patches. refs #999

Wed Sep 28 02:37:29 BST 2011  david-sarah@jacaranda.org
  * Undo an incompatible change to RIStorageServer. refs #999

Wed Sep 28 02:38:57 BST 2011  david-sarah@jacaranda.org
  * test_system.py: incorrect arguments were being passed to the constructor for MutableDiskShare. refs #999

Wed Sep 28 02:40:19 BST 2011  david-sarah@jacaranda.org
  * test_system.py: more debug output for a failing check in test_filesystem. refs #999

Wed Sep 28 02:40:49 BST 2011  david-sarah@jacaranda.org
  * scripts/debug.py: fix incorrect arguments to dump_immutable_share. refs #999

Wed Sep 28 02:41:26 BST 2011  david-sarah@jacaranda.org
  * mutable/publish.py: don't crash if there are no writers in _report_verinfo. refs #999

Tue Sep 27 08:39:03 BST 2011  david-sarah@jacaranda.org
  * Work in progress for asyncifying the backend interface (necessary to call txaws methods that return Deferreds). This is incomplete so lots of tests fail. refs #999

Wed Sep 28 06:23:24 BST 2011  david-sarah@jacaranda.org
  * Use factory functions to create share objects rather than their constructors, to allow the factory to return a Deferred. Also change some methods on IShareSet and IStoredShare to return Deferreds. Refactor some constants associated with mutable shares. refs #999

Thu Sep 29 04:53:41 BST 2011  david-sarah@jacaranda.org
  * Add some debugging code (switched off) to no_network.py. When switched on (PRINT_TRACEBACKS = True), this prints the stack trace associated with the caller of a remote method, mitigating the problem that the traceback normally gets lost at that point. TODO: think of a better way to preserve the traceback that can be enabled by default. refs #999

Thu Sep 29 04:55:37 BST 2011  david-sarah@jacaranda.org
  * no_network.py: add some assertions that the things we wrap using LocalWrapper are not Deferred (which is not supported and causes hard-to-debug failures). refs #999

Thu Sep 29 04:56:44 BST 2011  david-sarah@jacaranda.org
  * More asyncification of tests. refs #999

Thu Sep 29 05:01:36 BST 2011  david-sarah@jacaranda.org
  * Make get_sharesets_for_prefix synchronous for the time being (returning a Deferred breaks crawlers). refs #999

Thu Sep 29 05:05:39 BST 2011  david-sarah@jacaranda.org
  * scripts/debug.py: take account of some API changes. refs #999

Thu Sep 29 05:06:57 BST 2011  david-sarah@jacaranda.org
  * Add some debugging assertions that share objects are not Deferred. refs #999

Thu Sep 29 05:08:00 BST 2011  david-sarah@jacaranda.org
  * Fix some incorrect or incomplete asyncifications. refs #999

Thu Sep 29 05:11:10 BST 2011  david-sarah@jacaranda.org
  * Comment out an assertion that was causing all mutable tests to fail. THIS IS PROBABLY WRONG. refs #999

Thu Sep 29 06:50:38 BST 2011  zooko@zooko.com
  * split Immutable S3 Share into for-reading and for-writing classes, remove unused (as far as I can tell) methods, use cStringIO for buffering the writes
  TODO: define the interfaces that the new classes claim to implement

Thu Sep 29 08:55:44 BST 2011  david-sarah@jacaranda.org
  * Complete the splitting of the immutable IStoredShare interface into IShareForReading and IShareForWriting. Also remove the 'load' method from shares, and other minor interface changes. refs #999

Thu Sep 29 09:05:30 BST 2011  david-sarah@jacaranda.org
  * Add get_s3_share function in place of S3ShareSet._load_shares. refs #999

Thu Sep 29 09:07:12 BST 2011  david-sarah@jacaranda.org
  * Make the make_bucket_writer method synchronous. refs #999

Thu Sep 29 09:11:32 BST 2011  david-sarah@jacaranda.org
  * Move the implementation of lease methods to disk_backend.py, and add stub implementations in s3_backend.py that raise NotImplementedError. Fix the lease methods in the disk backend to be synchronous. Also make sure that get_shares() returns a Deferred list sorted by shnum. refs #999

Thu Sep 29 09:13:31 BST 2011  david-sarah@jacaranda.org
  * test_storage.py: fix an incorrect argument in construction of S3Backend. refs #999

New patches:

[interfaces.py: 'which -> that' grammar cleanup.
david-sarah@jacaranda.org**20110825003217
 Ignore-this: a3e15f3676de1b346ad78aabdfb8cac6
] {
hunk ./src/allmydata/interfaces.py 38
     the StubClient. This object doesn't actually offer any services, but the
     announcement helps the Introducer keep track of which clients are
     subscribed (so the grid admin can keep track of things like the size of
-    the grid and the client versions in use. This is the (empty)
+    the grid and the client versions in use). This is the (empty)
     RemoteInterface for the StubClient."""
 
 class RIBucketWriter(RemoteInterface):
hunk ./src/allmydata/interfaces.py 276
         (binary) storage index string, and 'shnum' is the integer share
         number. 'reason' is a human-readable explanation of the problem,
         probably including some expected hash values and the computed ones
-        which did not match. Corruption advisories for mutable shares should
+        that did not match. Corruption advisories for mutable shares should
         include a hash of the public key (the same value that appears in the
         mutable-file verify-cap), since the current share format does not
         store that on disk.
hunk ./src/allmydata/interfaces.py 413
           remote_host: the IAddress, if connected, otherwise None
 
         This method is intended for monitoring interfaces, such as a web page
-        which describes connecting and connected peers.
+        that describes connecting and connected peers.
         """
 
     def get_all_peerids():
hunk ./src/allmydata/interfaces.py 515
 
     # TODO: rename to get_read_cap()
     def get_readonly():
-        """Return another IURI instance, which represents a read-only form of
+        """Return another IURI instance that represents a read-only form of
         this one. If is_readonly() is True, this returns self."""
 
     def get_verify_cap():
hunk ./src/allmydata/interfaces.py 542
         passing into init_from_string."""
 
 class IDirnodeURI(Interface):
-    """I am a URI which represents a dirnode."""
+    """I am a URI that represents a dirnode."""
 
 class IFileURI(Interface):
hunk ./src/allmydata/interfaces.py 545
-    """I am a URI which represents a filenode."""
+    """I am a URI that represents a filenode."""
     def get_size():
         """Return the length (in bytes) of the file that I represent."""
 
hunk ./src/allmydata/interfaces.py 553
     pass
 
 class IMutableFileURI(Interface):
-    """I am a URI which represents a mutable filenode."""
+    """I am a URI that represents a mutable filenode."""
     def get_extension_params():
         """Return the extension parameters in the URI"""
 
hunk ./src/allmydata/interfaces.py 856
         """
 
 class IFileNode(IFilesystemNode):
-    """I am a node which represents a file: a sequence of bytes. I am not a
+    """I am a node that represents a file: a sequence of bytes. I am not a
     container, like IDirectoryNode."""
     def get_best_readable_version():
         """Return a Deferred that fires with an IReadable for the 'best'
hunk ./src/allmydata/interfaces.py 905
     multiple versions of a file present in the grid, some of which might be
     unrecoverable (i.e. have fewer than 'k' shares). These versions are
     loosely ordered: each has a sequence number and a hash, and any version
-    with seqnum=N was uploaded by a node which has seen at least one version
+    with seqnum=N was uploaded by a node that has seen at least one version
     with seqnum=N-1.
 
     The 'servermap' (an instance of IMutableFileServerMap) is used to
hunk ./src/allmydata/interfaces.py 1014
         as a guide to where the shares are located.
 
         I return a Deferred that fires with the requested contents, or
-        errbacks with UnrecoverableFileError. Note that a servermap which was
+        errbacks with UnrecoverableFileError. Note that a servermap that was
         updated with MODE_ANYTHING or MODE_READ may not know about shares for
         all versions (those modes stop querying servers as soon as they can
         fulfil their goals), so you may want to use MODE_CHECK (which checks
hunk ./src/allmydata/interfaces.py 1073
     """Upload was unable to satisfy 'servers_of_happiness'"""
 
 class UnableToFetchCriticalDownloadDataError(Exception):
-    """I was unable to fetch some piece of critical data which is supposed to
+    """I was unable to fetch some piece of critical data that is supposed to
     be identically present in all shares."""
 
 class NoServersError(Exception):
hunk ./src/allmydata/interfaces.py 1085
     exists, and overwrite= was set to False."""
 
 class NoSuchChildError(Exception):
-    """A directory node was asked to fetch a child which does not exist."""
+    """A directory node was asked to fetch a child that does not exist."""
 
 class ChildOfWrongTypeError(Exception):
     """An operation was attempted on a child of the wrong type (file or directory)."""
hunk ./src/allmydata/interfaces.py 1403
         if you initially thought you were going to use 10 peers, started
         encoding, and then two of the peers dropped out: you could use
         desired_share_ids= to skip the work (both memory and CPU) of
-        producing shares for the peers which are no longer available.
+        producing shares for the peers that are no longer available.
 
         """
 
hunk ./src/allmydata/interfaces.py 1478
         if you initially thought you were going to use 10 peers, started
         encoding, and then two of the peers dropped out: you could use
         desired_share_ids= to skip the work (both memory and CPU) of
-        producing shares for the peers which are no longer available.
+        producing shares for the peers that are no longer available.
 
         For each call, encode() will return a Deferred that fires with two
         lists, one containing shares and the other containing the shareids.
hunk ./src/allmydata/interfaces.py 1535
         required to be of the same length.  The i'th element of their_shareids
         is required to be the shareid of the i'th buffer in some_shares.
 
-        This returns a Deferred which fires with a sequence of buffers. This
+        This returns a Deferred that fires with a sequence of buffers. This
         sequence will contain all of the segments of the original data, in
         order. The sum of the lengths of all of the buffers will be the
         'data_size' value passed into the original ICodecEncode.set_params()
hunk ./src/allmydata/interfaces.py 1582
         Encoding parameters can be set in three ways. 1: The Encoder class
         provides defaults (3/7/10). 2: the Encoder can be constructed with
         an 'options' dictionary, in which the
-        needed_and_happy_and_total_shares' key can be a (k,d,n) tuple. 3:
+        'needed_and_happy_and_total_shares' key can be a (k,d,n) tuple. 3:
         set_params((k,d,n)) can be called.
 
         If you intend to use set_params(), you must call it before
hunk ./src/allmydata/interfaces.py 1780
         produced, so that the segment hashes can be generated with only a
         single pass.
 
-        This returns a Deferred which fires with a sequence of hashes, using:
+        This returns a Deferred that fires with a sequence of hashes, using:
 
          tuple(segment_hashes[first:last])
 
hunk ./src/allmydata/interfaces.py 1796
     def get_plaintext_hash():
         """OBSOLETE; Get the hash of the whole plaintext.
 
-        This returns a Deferred which fires with a tagged SHA-256 hash of the
+        This returns a Deferred that fires with a tagged SHA-256 hash of the
        whole plaintext, obtained from hashutil.plaintext_hash(data).
         """
 
hunk ./src/allmydata/interfaces.py 1856
         be used to encrypt the data. The key will also be hashed to derive
         the StorageIndex.
 
-        Uploadables which want to achieve convergence should hash their file
+        Uploadables that want to achieve convergence should hash their file
         contents and the serialized_encoding_parameters to form the key
         (which of course requires a full pass over the data). Uploadables can
         use the upload.ConvergentUploadMixin class to achieve this
hunk ./src/allmydata/interfaces.py 1862
         automatically.
 
-        Uploadables which do not care about convergence (or do not wish to
+        Uploadables that do not care about convergence (or do not wish to
         make multiple passes over the data) can simply return a
         strongly-random 16 byte string.
 
hunk ./src/allmydata/interfaces.py 1872
 
     def read(length):
         """Return a Deferred that fires with a list of strings (perhaps with
-        only a single element) which, when concatenated together, contain the
+        only a single element) that, when concatenated together, contain the
         next 'length' bytes of data. If EOF is near, this may provide fewer
         than 'length' bytes. The total number of bytes provided by read()
         before it signals EOF must equal the size provided by get_size().
hunk ./src/allmydata/interfaces.py 1919
 
     def read(length):
         """
-        Returns a list of strings which, when concatenated, are the next
+        Returns a list of strings that, when concatenated, are the next
         length bytes of the file, or fewer if there are fewer bytes
         between the current location and the end of the file.
         """
hunk ./src/allmydata/interfaces.py 1932
 
 class IUploadResults(Interface):
     """I am returned by upload() methods. I contain a number of public
-    attributes which can be read to determine the results of the upload. Some
+    attributes that can be read to determine the results of the upload. Some
     of these are functional, some are timing information. All of these may be
     None.
 
hunk ./src/allmydata/interfaces.py 1965
 
 class IDownloadResults(Interface):
     """I am created internally by download() methods. I contain a number of
-    public attributes which contain details about the download process.::
+    public attributes that contain details about the download process.::
 
      .file_size : the size of the file, in bytes
      .servers_used : set of server peerids that were used during download
hunk ./src/allmydata/interfaces.py 1991
 class IUploader(Interface):
     def upload(uploadable):
         """Upload the file. 'uploadable' must impement IUploadable. This
-        returns a Deferred which fires with an IUploadResults instance, from
+        returns a Deferred that fires with an IUploadResults instance, from
         which the URI of the file can be obtained as results.uri ."""
 
     def upload_ssk(write_capability, new_version, uploadable):
hunk ./src/allmydata/interfaces.py 2041
         kind of lease that is obtained (which account number to claim, etc).
 
         TODO: any problems seen during checking will be reported to the
-        health-manager.furl, a centralized object which is responsible for
+        health-manager.furl, a centralized object that is responsible for
         figuring out why files are unhealthy so corrective action can be
         taken.
         """
hunk ./src/allmydata/interfaces.py 2056
         will be put in the check-and-repair results. The Deferred will not
         fire until the repair is complete.
 
-        This returns a Deferred which fires with an instance of
+        This returns a Deferred that fires with an instance of
         ICheckAndRepairResults."""
 
 class IDeepCheckable(Interface):
hunk ./src/allmydata/interfaces.py 2141
                               that was found to be corrupt. Each share
                               locator is a list of (serverid, storage_index,
                               sharenum).
-         count-incompatible-shares: the number of shares which are of a share
+         count-incompatible-shares: the number of shares that are of a share
                                     format unknown to this checker
          list-incompatible-shares: a list of 'share locators', one for each
                                    share that was found to be of an unknown
hunk ./src/allmydata/interfaces.py 2148
                                    format. Each share locator is a list of
                                    (serverid, storage_index, sharenum).
          servers-responding: list of (binary) storage server identifiers,
-                             one for each server which responded to the share
+                             one for each server that responded to the share
                              query (even if they said they didn't have
                              shares, and even if they said they did have
                              shares but then didn't send them when asked, or
hunk ./src/allmydata/interfaces.py 2345
         will use the data in the checker results to guide the repair process,
         such as which servers provided bad data and should therefore be
         avoided. The ICheckResults object is inside the
-        ICheckAndRepairResults object, which is returned by the
+        ICheckAndRepairResults object that is returned by the
         ICheckable.check() method::
 
          d = filenode.check(repair=False)
hunk ./src/allmydata/interfaces.py 2436
         methods to create new objects. I return synchronously."""
 
     def create_mutable_file(contents=None, keysize=None):
-        """I create a new mutable file, and return a Deferred which will fire
+        """I create a new mutable file, and return a Deferred that will fire
         with the IMutableFileNode instance when it is ready. If contents= is
         provided (a bytestring), it will be used as the initial contents of
         the new file, otherwise the file will contain zero bytes. keysize= is
hunk ./src/allmydata/interfaces.py 2444
         usual."""
 
     def create_new_mutable_directory(initial_children={}):
-        """I create a new mutable directory, and return a Deferred which will
+        """I create a new mutable directory, and return a Deferred that will
         fire with the IDirectoryNode instance when it is ready. If
         initial_children= is provided (a dict mapping unicode child name to
         (childnode, metadata_dict) tuples), the directory will be populated
hunk ./src/allmydata/interfaces.py 2452
 
 class IClientStatus(Interface):
     def list_all_uploads():
-        """Return a list of uploader objects, one for each upload which
+        """Return a list of uploader objects, one for each upload that
         currently has an object available (tracked with weakrefs). This is
         intended for debugging purposes."""
     def list_active_uploads():
hunk ./src/allmydata/interfaces.py 2462
         started uploads."""
 
     def list_all_downloads():
-        """Return a list of downloader objects, one for each download which
+        """Return a list of downloader objects, one for each download that
         currently has an object available (tracked with weakrefs). This is
         intended for debugging purposes."""
     def list_active_downloads():
hunk ./src/allmydata/interfaces.py 2689
 
     def provide(provider=RIStatsProvider, nickname=str):
         """
-        @param provider: a stats collector instance which should be polled
+        @param provider: a stats collector instance that should be polled
                          periodically by the gatherer to collect stats.
         @param nickname: a name useful to identify the provided client
         """
hunk ./src/allmydata/interfaces.py 2722
 
 class IValidatedThingProxy(Interface):
     def start():
-        """ Acquire a thing and validate it. Return a deferred which is
+        """ Acquire a thing and validate it. Return a deferred that is
         eventually fired with self if the thing is valid or errbacked if it
         can't be acquired or validated."""
 
}
495[Pluggable backends -- new and moved files, changes to moved files. refs #999
496david-sarah@jacaranda.org**20110919232926
497 Ignore-this: ec5d2d1362a092d919e84327d3092424
498] {
499adddir ./src/allmydata/storage/backends
500adddir ./src/allmydata/storage/backends/disk
501move ./src/allmydata/storage/immutable.py ./src/allmydata/storage/backends/disk/immutable.py
502move ./src/allmydata/storage/mutable.py ./src/allmydata/storage/backends/disk/mutable.py
503adddir ./src/allmydata/storage/backends/null
504addfile ./src/allmydata/storage/backends/__init__.py
505addfile ./src/allmydata/storage/backends/base.py
506hunk ./src/allmydata/storage/backends/base.py 1
507+
508+from twisted.application import service
509+
510+from allmydata.storage.common import si_b2a
511+from allmydata.storage.lease import LeaseInfo
512+from allmydata.storage.bucket import BucketReader
513+
514+
515+class Backend(service.MultiService):
516+    def __init__(self):
517+        service.MultiService.__init__(self)
518+
519+
520+class ShareSet(object):
521+    """
522+    This class implements shareset logic that could work for all backends, but
523+    might be useful to override for efficiency.
524+    """
525+
526+    def __init__(self, storageindex):
527+        self.storageindex = storageindex
528+
529+    def get_storage_index(self):
530+        return self.storageindex
531+
532+    def get_storage_index_string(self):
533+        return si_b2a(self.storageindex)
534+
535+    def renew_lease(self, renew_secret, new_expiration_time):
536+        found_shares = False
537+        for share in self.get_shares():
538+            found_shares = True
539+            share.renew_lease(renew_secret, new_expiration_time)
540+
541+        if not found_shares:
542+            raise IndexError("no such lease to renew")
543+
544+    def get_leases(self):
545+        # Since all shares get the same lease data, we just grab the leases
546+        # from the first share.
547+        try:
548+            sf = self.get_shares().next()
549+            return sf.get_leases()
550+        except StopIteration:
551+            return iter([])
552+
553+    def add_or_renew_lease(self, lease_info):
554+        # This implementation assumes that lease data is duplicated in
555+        # all shares of a shareset, which might not be true for all backends.
556+        for share in self.get_shares():
557+            share.add_or_renew_lease(lease_info)
558+
559+    def make_bucket_reader(self, storageserver, share):
560+        return BucketReader(storageserver, share)
561+
562+    def testv_and_readv_and_writev(self, storageserver, secrets,
563+                                   test_and_write_vectors, read_vector,
564+                                   expiration_time):
565+        # The implementation here depends on the following helper methods,
566+        # which must be provided by subclasses:
567+        #
568+        # def _clean_up_after_unlink(self):
569+        #     """clean up resources associated with the shareset after some
570+        #     shares might have been deleted"""
571+        #
572+        # def _create_mutable_share(self, storageserver, shnum, write_enabler):
573+        #     """create a mutable share with the given shnum and write_enabler"""
574+
575+        # secrets might be a triple with cancel_secret in secrets[2], but if
576+        # so we ignore the cancel_secret.
577+        write_enabler = secrets[0]
578+        renew_secret = secrets[1]
579+
580+        si_s = self.get_storage_index_string()
581+        shares = {}
582+        for share in self.get_shares():
583+            # XXX is it correct to ignore immutable shares? Maybe get_shares should
584+            # have a parameter saying what type it's expecting.
585+            if share.sharetype == "mutable":
586+                share.check_write_enabler(write_enabler, si_s)
587+                shares[share.get_shnum()] = share
588+
589+        # write_enabler is good for all existing shares
590+
591+        # now evaluate test vectors
592+        testv_is_good = True
593+        for sharenum in test_and_write_vectors:
594+            (testv, datav, new_length) = test_and_write_vectors[sharenum]
595+            if sharenum in shares:
596+                if not shares[sharenum].check_testv(testv):
597+                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
598+                    testv_is_good = False
599+                    break
600+            else:
601+                # compare the vectors against an empty share, in which all
602+                # reads return empty strings
603+                if not EmptyShare().check_testv(testv):
604+                    self.log("testv failed (empty): [%d] %r" % (sharenum,
605+                                                                testv))
606+                    testv_is_good = False
607+                    break
608+
609+        # gather the read vectors, before we do any writes
610+        read_data = {}
611+        for shnum, share in shares.items():
612+            read_data[shnum] = share.readv(read_vector)
613+
614+        ownerid = 1 # TODO
615+        lease_info = LeaseInfo(ownerid, renew_secret,
616+                               expiration_time, storageserver.get_serverid())
617+
618+        if testv_is_good:
619+            # now apply the write vectors
620+            any_unlinked = False
621+            for shnum in test_and_write_vectors:
622+                (testv, datav, new_length) = test_and_write_vectors[shnum]
623+                if new_length == 0:
624+                    if shnum in shares:
625+                        shares[shnum].unlink()
626+                        any_unlinked = True
627+                else:
628+                    if shnum not in shares:
629+                        # allocate a new share
630+                        share = self._create_mutable_share(storageserver, shnum, write_enabler)
631+                        shares[shnum] = share
632+                    shares[shnum].writev(datav, new_length)
633+                    # and update the lease
634+                    shares[shnum].add_or_renew_lease(lease_info)
635+
636+            # clean up only if at least one share was actually unlinked
637+            if any_unlinked:
638+                self._clean_up_after_unlink()
636+
637+        return (testv_is_good, read_data)
638+
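The method above implements a test-and-set batch: every test vector is evaluated before any write is applied, and one failure aborts the whole batch. A toy model of that contract over plain dicts (shares as strings, and only the common "eq" test — both simplifications for illustration, not the patch's code):

```python
def apply_tw_vectors(shares, tw_vectors):
    # shares: {shnum: str}; tw_vectors: {shnum: (testv, datav, new_length)}
    # Phase 1: evaluate all test vectors; any failure aborts the batch.
    for shnum, (testv, _datav, _newlen) in tw_vectors.items():
        data = shares.get(shnum, "")
        for (offset, length, op, specimen) in testv:
            if op != "eq" or data[offset:offset+length] != specimen:
                return False
    # Phase 2: apply writes (new_length == 0 means unlink the share).
    for shnum, (_testv, datav, new_length) in tw_vectors.items():
        if new_length == 0:
            shares.pop(shnum, None)
            continue
        data = shares.get(shnum, "")
        for (offset, piece) in datav:
            data = data[:offset] + piece + data[offset+len(piece):]
        shares[shnum] = data
    return True

s = {0: "hello"}
print(apply_tw_vectors(s, {0: ([(0, 5, "eq", "nope!")], [(0, "X")], 5)}), s)
print(apply_tw_vectors(s, {0: ([(0, 5, "eq", "hello")], [(0, "HELLO")], 5)}), s)
```

The two-phase shape is what makes the operation usable as an atomic compare-and-swap by mutable-file writers.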
639+    def readv(self, wanted_shnums, read_vector):
640+        """
641+        Read a vector from the numbered shares in this shareset. An empty
642+        wanted_shnums list means to return data from all known shares.
643+
644+        @param wanted_shnums=ListOf(int)
645+        @param read_vector=ReadVector
646+        @return DictOf(int, ReadData): shnum -> results, with one key per share
647+        """
648+        datavs = {}
649+        for share in self.get_shares():
650+            shnum = share.get_shnum()
651+            if not wanted_shnums or shnum in wanted_shnums:
652+                datavs[shnum] = share.readv(read_vector)
653+
654+        return datavs
655+
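The selection rule in readv above — an empty wanted list selects every known share — in isolation:

```python
def select_shnums(available, wanted_shnums):
    # empty wanted_shnums selects all known shares, as in readv() above
    return [sh for sh in available if not wanted_shnums or sh in wanted_shnums]

print(select_shnums([0, 1, 2], []))      # [0, 1, 2]
print(select_shnums([0, 1, 2], [1, 5]))  # [1]
```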
656+
657+def testv_compare(a, op, b):
658+    assert op in ("lt", "le", "eq", "ne", "ge", "gt")
659+    if op == "lt":
660+        return a < b
661+    if op == "le":
662+        return a <= b
663+    if op == "eq":
664+        return a == b
665+    if op == "ne":
666+        return a != b
667+    if op == "ge":
668+        return a >= b
669+    if op == "gt":
670+        return a > b
671+    # never reached
672+
673+
674+class EmptyShare:
675+    def check_testv(self, testv):
676+        test_good = True
677+        for (offset, length, operator, specimen) in testv:
678+            data = ""
679+            if not testv_compare(data, operator, specimen):
680+                test_good = False
681+                break
682+        return test_good
683+
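EmptyShare's role above is to let test vectors run against shares that do not exist yet: every read yields the empty string, so a test vector passes iff each comparison holds against "". A self-contained illustration, duplicating the comparator locally so it runs standalone:

```python
def _compare(a, op, b):
    # local duplicate of testv_compare, for this demo only
    return {"lt": a < b, "le": a <= b, "eq": a == b,
            "ne": a != b, "ge": a >= b, "gt": a > b}[op]

class EmptyShare(object):
    def check_testv(self, testv):
        # all reads from a nonexistent share return the empty string
        return all(_compare("", op, specimen)
                   for (offset, length, op, specimen) in testv)

# "create only if absent": expect an empty read at offset 0
print(EmptyShare().check_testv([(0, 1, "eq", "")]))   # True
print(EmptyShare().check_testv([(0, 1, "eq", "x")]))  # False
```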
684addfile ./src/allmydata/storage/backends/disk/__init__.py
685addfile ./src/allmydata/storage/backends/disk/disk_backend.py
686hunk ./src/allmydata/storage/backends/disk/disk_backend.py 1
687+
688+import re
689+
690+from twisted.python.filepath import UnlistableError
691+
692+from zope.interface import implements
693+from allmydata.interfaces import IStorageBackend, IShareSet
694+from allmydata.util import fileutil, log, time_format
695+from allmydata.storage.common import si_b2a, si_a2b
696+from allmydata.storage.bucket import BucketWriter
697+from allmydata.storage.backends.base import Backend, ShareSet
698+from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
699+from allmydata.storage.backends.disk.mutable import MutableDiskShare, create_mutable_disk_share
700+
701+# storage/
702+# storage/shares/incoming
703+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
704+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
705+# storage/shares/$START/$STORAGEINDEX
706+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
707+
708+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
709+# base-32 chars).
710+# $SHARENUM matches this regex:
711+NUM_RE=re.compile("^[0-9]+$")
712+
713+
714+def si_si2dir(startfp, storageindex):
715+    sia = si_b2a(storageindex)
716+    newfp = startfp.child(sia[:2])
717+    return newfp.child(sia)
718+
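The layout comment above maps a storage index to shares/$START/$STORAGEINDEX, where $START is the first two base-32 characters (10 bits). A standalone sketch of that mapping, using RFC 4648 base-32 (lowercased, unpadded) as a stand-in for Tahoe's own si_b2a alphabet — an assumption for illustration only:

```python
import base64

def si_to_relpath(storageindex):
    # stand-in for si_b2a(): Tahoe-LAFS uses its own base-32 alphabet;
    # RFC 4648 lowercase base-32 is used here purely for illustration
    sia = base64.b32encode(storageindex).decode("ascii").lower().rstrip("=")
    # $START is the first 2 base-32 chars (10 bits) of the storage index
    return "%s/%s" % (sia[:2], sia)

print(si_to_relpath(b"\x00" * 16))  # aa/aaaaaaaaaaaaaaaaaaaaaaaaaa
```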
719+
720+def get_share(fp):
721+    f = fp.open('rb')
722+    try:
723+        prefix = f.read(32)
724+    finally:
725+        f.close()
726+
727+    if prefix == MutableDiskShare.MAGIC:
728+        return MutableDiskShare(fp)
729+    else:
730+        # assume it's immutable
731+        return ImmutableDiskShare(fp)
732+
733+
734+class DiskBackend(Backend):
735+    implements(IStorageBackend)
736+
737+    def __init__(self, storedir, readonly=False, reserved_space=0, discard_storage=False):
738+        Backend.__init__(self)
739+        self._setup_storage(storedir, readonly, reserved_space, discard_storage)
740+        self._setup_corruption_advisory()
741+
742+    def _setup_storage(self, storedir, readonly, reserved_space, discard_storage):
743+        self._storedir = storedir
744+        self._readonly = readonly
745+        self._reserved_space = int(reserved_space)
746+        self._discard_storage = discard_storage
747+        self._sharedir = self._storedir.child("shares")
748+        fileutil.fp_make_dirs(self._sharedir)
749+        self._incomingdir = self._sharedir.child('incoming')
750+        self._clean_incomplete()
751+        if self._reserved_space and (self.get_available_space() is None):
752+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
753+                    umid="0wZ27w", level=log.UNUSUAL)
754+
755+    def _clean_incomplete(self):
756+        fileutil.fp_remove(self._incomingdir)
757+        fileutil.fp_make_dirs(self._incomingdir)
758+
759+    def _setup_corruption_advisory(self):
760+        # we don't actually create the corruption-advisory dir until necessary
761+        self._corruption_advisory_dir = self._storedir.child("corruption-advisories")
762+
763+    def _make_shareset(self, sharehomedir):
764+        return self.get_shareset(si_a2b(sharehomedir.basename()))
765+
766+    def get_sharesets_for_prefix(self, prefix):
767+        prefixfp = self._sharedir.child(prefix)
768+        try:
769+            sharesets = map(self._make_shareset, prefixfp.children())
770+            def _by_base32si(b):
771+                return b.get_storage_index_string()
772+            sharesets.sort(key=_by_base32si)
773+        except EnvironmentError:
774+            sharesets = []
775+        return sharesets
776+
777+    def get_shareset(self, storageindex):
778+        sharehomedir = si_si2dir(self._sharedir, storageindex)
779+        incominghomedir = si_si2dir(self._incomingdir, storageindex)
780+        return DiskShareSet(storageindex, sharehomedir, incominghomedir, discard_storage=self._discard_storage)
781+
782+    def fill_in_space_stats(self, stats):
783+        stats['storage_server.reserved_space'] = self._reserved_space
784+        try:
785+            disk = fileutil.get_disk_stats(self._sharedir, self._reserved_space)
786+            writeable = disk['avail'] > 0
787+
788+            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
789+            stats['storage_server.disk_total'] = disk['total']
790+            stats['storage_server.disk_used'] = disk['used']
791+            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
792+            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
793+            stats['storage_server.disk_avail'] = disk['avail']
794+        except AttributeError:
795+            writeable = True
796+        except EnvironmentError:
797+            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
798+            writeable = False
799+
800+        if self._readonly:
801+            stats['storage_server.disk_avail'] = 0
802+            writeable = False
803+
804+        stats['storage_server.accepting_immutable_shares'] = int(writeable)
805+
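How 'avail' and the operator's reservation interact in the stats above, as a simplified model (not fileutil's actual implementation, whose details are platform-specific):

```python
def available_space(free_for_nonroot, reserved_space):
    # space the server will actually offer: nonroot free space minus
    # the configured [storage]reserved_space, floored at zero
    return max(0, free_for_nonroot - reserved_space)

print(available_space(10 * 10**9, 10**9))  # 9000000000
print(available_space(5, 10))              # 0: stop accepting shares
```

When 'avail' reaches zero, `writeable` goes False and the server stops advertising itself for immutable uploads.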
806+    def get_available_space(self):
807+        if self._readonly:
808+            return 0
809+        return fileutil.get_available_space(self._sharedir, self._reserved_space)
810+
811+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
812+        fileutil.fp_make_dirs(self._corruption_advisory_dir)
813+        now = time_format.iso_utc(sep="T")
814+        si_s = si_b2a(storageindex)
815+
816+        # Windows can't handle colons in the filename.
817+        name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "")
818+        f = self._corruption_advisory_dir.child(name).open("w")
819+        try:
820+            f.write("report: Share Corruption\n")
821+            f.write("type: %s\n" % sharetype)
822+            f.write("storage_index: %s\n" % si_s)
823+            f.write("share_number: %d\n" % shnum)
824+            f.write("\n")
825+            f.write(reason)
826+            f.write("\n")
827+        finally:
828+            f.close()
829+
830+        log.msg(format=("client claims corruption in (%(share_type)s) " +
831+                        "%(si)s-%(shnum)d: %(reason)s"),
832+                share_type=sharetype, si=si_s, shnum=shnum, reason=reason,
833+                level=log.SCARY, umid="SGx2fA")
834+
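The advisory filename built above strips colons because Windows filesystems reserve ':' in filenames, and ISO-8601 timestamps with a 'T' separator contain them. Standalone:

```python
def advisory_name(now_iso, si_s, shnum):
    # mirror of the name construction in advise_corrupt_share()
    return ("%s--%s-%d" % (now_iso, si_s, shnum)).replace(":", "")

name = advisory_name("2011-09-29T08:24:10", "b" * 26, 3)
print(name)  # 2011-09-29T082410--bbbbbbbbbbbbbbbbbbbbbbbbbb-3
```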
835+
836+class DiskShareSet(ShareSet):
837+    implements(IShareSet)
838+
839+    def __init__(self, storageindex, sharehomedir, incominghomedir=None, discard_storage=False):
840+        ShareSet.__init__(self, storageindex)
841+        self._sharehomedir = sharehomedir
842+        self._incominghomedir = incominghomedir
843+        self._discard_storage = discard_storage
844+
845+    def get_overhead(self):
846+        return (fileutil.get_disk_usage(self._sharehomedir) +
847+                fileutil.get_disk_usage(self._incominghomedir))
848+
849+    def get_shares(self):
850+        """
851+        Generate IStorageBackendShare objects for shares we have for this storage index.
852+        ("Shares we have" means completed ones, excluding incoming ones.)
853+        """
854+        try:
855+            for fp in self._sharehomedir.children():
856+                shnumstr = fp.basename()
857+                if not NUM_RE.match(shnumstr):
858+                    continue
859+                yield get_share(fp)
861+        except UnlistableError:
862+            # There is no shares directory at all.
863+            pass
864+
865+    def has_incoming(self, shnum):
866+        if self._incominghomedir is None:
867+            return False
868+        return self._incominghomedir.child(str(shnum)).exists()
869+
870+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
871+        sharehome = self._sharehomedir.child(str(shnum))
872+        incominghome = self._incominghomedir.child(str(shnum))
873+        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
874+                                   max_size=max_space_per_bucket, create=True)
875+        bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
876+        if self._discard_storage:
877+            bw.throw_out_all_data = True
878+        return bw
879+
880+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
881+        fileutil.fp_make_dirs(self._sharehomedir)
882+        sharehome = self._sharehomedir.child(str(shnum))
883+        serverid = storageserver.get_serverid()
884+        return create_mutable_disk_share(sharehome, serverid, write_enabler, storageserver)
885+
886+    def _clean_up_after_unlink(self):
887+        fileutil.fp_rmdir_if_empty(self._sharehomedir)
888+
889hunk ./src/allmydata/storage/backends/disk/immutable.py 1
890-import os, stat, struct, time
891 
892hunk ./src/allmydata/storage/backends/disk/immutable.py 2
893-from foolscap.api import Referenceable
894+import struct
895 
896 from zope.interface import implements
897hunk ./src/allmydata/storage/backends/disk/immutable.py 5
898-from allmydata.interfaces import RIBucketWriter, RIBucketReader
899-from allmydata.util import base32, fileutil, log
900+
901+from allmydata.interfaces import IStoredShare
902+from allmydata.util import fileutil
903 from allmydata.util.assertutil import precondition
904hunk ./src/allmydata/storage/backends/disk/immutable.py 9
905+from allmydata.util.fileutil import fp_make_dirs
906 from allmydata.util.hashutil import constant_time_compare
907hunk ./src/allmydata/storage/backends/disk/immutable.py 11
908+from allmydata.util.encodingutil import quote_filepath
909+from allmydata.storage.common import si_b2a, UnknownImmutableContainerVersionError, DataTooLargeError
910 from allmydata.storage.lease import LeaseInfo
911hunk ./src/allmydata/storage/backends/disk/immutable.py 14
912-from allmydata.storage.common import UnknownImmutableContainerVersionError, \
913-     DataTooLargeError
914+
915 
916 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
917 # and share data. The share data is accessed by RIBucketWriter.write and
918hunk ./src/allmydata/storage/backends/disk/immutable.py 41
919 # then the value stored in this field will be the actual share data length
920 # modulo 2**32.
921 
922-class ShareFile:
923-    LEASE_SIZE = struct.calcsize(">L32s32sL")
924+class ImmutableDiskShare(object):
925+    implements(IStoredShare)
926+
927     sharetype = "immutable"
928hunk ./src/allmydata/storage/backends/disk/immutable.py 45
929+    LEASE_SIZE = struct.calcsize(">L32s32sL")
930+
931 
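LEASE_SIZE above is derived from the packed lease record. Per LeaseInfo's serialization in the surrounding code, the four fields are the owner number, renew secret, cancel secret, and expiration time (field naming taken from that context):

```python
import struct

LEASE_FMT = ">L32s32sL"  # owner num, renew secret, cancel secret, expire time
record = struct.pack(LEASE_FMT, 1, b"r" * 32, b"c" * 32, 1317254650)
print(struct.calcsize(LEASE_FMT), len(record))  # 72 72
```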
932hunk ./src/allmydata/storage/backends/disk/immutable.py 48
933-    def __init__(self, filename, max_size=None, create=False):
934-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
935+    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
936+        """ If max_size is not None then I won't allow more than
937+        max_size to be written to me. If create=True then max_size
938+        must not be None. """
939         precondition((max_size is not None) or (not create), max_size, create)
940hunk ./src/allmydata/storage/backends/disk/immutable.py 53
941-        self.home = filename
942+        self._storageindex = storageindex
943         self._max_size = max_size
944hunk ./src/allmydata/storage/backends/disk/immutable.py 55
945+        self._incominghome = incominghome
946+        self._home = finalhome
947+        self._shnum = shnum
948         if create:
949             # touch the file, so later callers will see that we're working on
950             # it. Also construct the metadata.
951hunk ./src/allmydata/storage/backends/disk/immutable.py 61
952-            assert not os.path.exists(self.home)
953-            fileutil.make_dirs(os.path.dirname(self.home))
954-            f = open(self.home, 'wb')
955+            assert not finalhome.exists()
956+            fp_make_dirs(self._incominghome.parent())
957             # The second field -- the four-byte share data length -- is no
958             # longer used as of Tahoe v1.3.0, but we continue to write it in
959             # there in case someone downgrades a storage server from >=
960hunk ./src/allmydata/storage/backends/disk/immutable.py 72
961             # the largest length that can fit into the field. That way, even
962             # if this does happen, the old < v1.3.0 server will still allow
963             # clients to read the first part of the share.
964-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
965-            f.close()
966+            self._incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
967             self._lease_offset = max_size + 0x0c
968             self._num_leases = 0
969         else:
970hunk ./src/allmydata/storage/backends/disk/immutable.py 76
971-            f = open(self.home, 'rb')
972-            filesize = os.path.getsize(self.home)
973-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
974-            f.close()
975+            f = self._home.open(mode='rb')
976+            try:
977+                (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
978+            finally:
979+                f.close()
980+            filesize = self._home.getsize()
981             if version != 1:
982                 msg = "sharefile %s had version %d but we wanted 1" % \
983hunk ./src/allmydata/storage/backends/disk/immutable.py 84
984-                      (filename, version)
985+                      (self._home, version)
986                 raise UnknownImmutableContainerVersionError(msg)
987             self._num_leases = num_leases
988             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
989hunk ./src/allmydata/storage/backends/disk/immutable.py 90
990         self._data_offset = 0xc
991 
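The 12-byte container header written in the create branch above: version 1, the legacy four-byte data-length field (capped, as the comment explains, so that pre-v1.3.0 servers can still serve the first part of an oversized share), and a zero lease count:

```python
import struct

def make_header(max_size):
    # version, legacy capped data length, number of leases
    return struct.pack(">LLL", 1, min(2**32 - 1, max_size), 0)

# a share larger than 4 GiB still fits the legacy length field
version, legacy_len, num_leases = struct.unpack(">LLL", make_header(5 * 2**30))
print(version, legacy_len, num_leases)  # 1 4294967295 0
```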
992+    def __repr__(self):
993+        return ("<ImmutableDiskShare %s:%r at %s>"
994+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
995+
996+    def close(self):
997+        fileutil.fp_make_dirs(self._home.parent())
998+        self._incominghome.moveTo(self._home)
999+        try:
1000+            # self._incominghome is like storage/shares/incoming/ab/abcde/4 .
1001+            # We try to delete the parent (.../ab/abcde) to avoid leaving
1002+            # these directories lying around forever, but the delete might
1003+            # fail if we're working on another share for the same storage
1004+            # index (like ab/abcde/5). The alternative approach would be to
1005+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
1006+            # ShareWriter), each of which is responsible for a single
1007+            # directory on disk, and have them use reference counting of
1008+            # their children to know when they should do the rmdir. This
1009+            # approach is simpler, but relies on os.rmdir refusing to delete
1010+            # a non-empty directory. Do *not* use fileutil.fp_remove() here!
1011+            fileutil.fp_rmdir_if_empty(self._incominghome.parent())
1012+            # we also delete the grandparent (prefix) directory, .../ab ,
1013+            # again to avoid leaving directories lying around. This might
1014+            # fail if there is another bucket open that shares a prefix (like
1015+            # ab/abfff).
1016+            fileutil.fp_rmdir_if_empty(self._incominghome.parent().parent())
1017+            # we leave the great-grandparent (incoming/) directory in place.
1018+        except EnvironmentError:
1019+            # ignore the "can't rmdir because the directory is not empty"
1020+            # exceptions, those are normal consequences of the
1021+            # above-mentioned conditions.
1022+            pass
1024+
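The cleanup in close() above leans on the documented contract that rmdir refuses to delete a non-empty directory; a quick standalone demonstration of that contract:

```python
import os, tempfile

parent = tempfile.mkdtemp()
child = os.path.join(parent, "abcde")
os.mkdir(child)
try:
    os.rmdir(parent)  # parent still contains 'abcde'
    outcome = "removed"
except OSError:
    outcome = "refused: not empty"
print(outcome)  # refused: not empty
os.rmdir(child)
os.rmdir(parent)  # now empty, so this succeeds
```

That refusal is exactly what makes "try the rmdir, ignore EnvironmentError" safe when another share of the same storage index is still in the directory.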
1025+    def get_used_space(self):
1026+        return (fileutil.get_used_space(self._home) +
1027+                fileutil.get_used_space(self._incominghome))
1028+
1029+    def get_storage_index(self):
1030+        return self._storageindex
1031+
1032+    def get_shnum(self):
1033+        return self._shnum
1034+
1035     def unlink(self):
1036hunk ./src/allmydata/storage/backends/disk/immutable.py 134
1037-        os.unlink(self.home)
1038+        self._home.remove()
1039+
1040+    def get_size(self):
1041+        return self._home.getsize()
1042+
1043+    def get_data_length(self):
1044+        return self._lease_offset - self._data_offset
1045+
1046+    #def readv(self, read_vector):
1047+    #    ...
1048 
1049     def read_share_data(self, offset, length):
1050         precondition(offset >= 0)
1051hunk ./src/allmydata/storage/backends/disk/immutable.py 147
1052-        # reads beyond the end of the data are truncated. Reads that start
1053+
1054+        # Reads beyond the end of the data are truncated. Reads that start
1055         # beyond the end of the data return an empty string.
1056         seekpos = self._data_offset+offset
1057         actuallength = max(0, min(length, self._lease_offset-seekpos))
1058hunk ./src/allmydata/storage/backends/disk/immutable.py 154
1059         if actuallength == 0:
1060             return ""
1061-        f = open(self.home, 'rb')
1062-        f.seek(seekpos)
1063-        return f.read(actuallength)
1064+        f = self._home.open(mode='rb')
1065+        try:
1066+            f.seek(seekpos)
1067+            sharedata = f.read(actuallength)
1068+        finally:
1069+            f.close()
1070+        return sharedata
1071 
1072     def write_share_data(self, offset, data):
1073         length = len(data)
1074hunk ./src/allmydata/storage/backends/disk/immutable.py 167
1075         precondition(offset >= 0, offset)
1076         if self._max_size is not None and offset+length > self._max_size:
1077             raise DataTooLargeError(self._max_size, offset, length)
1078-        f = open(self.home, 'rb+')
1079-        real_offset = self._data_offset+offset
1080-        f.seek(real_offset)
1081-        assert f.tell() == real_offset
1082-        f.write(data)
1083-        f.close()
1084+        f = self._incominghome.open(mode='rb+')
1085+        try:
1086+            real_offset = self._data_offset+offset
1087+            f.seek(real_offset)
1088+            assert f.tell() == real_offset
1089+            f.write(data)
1090+        finally:
1091+            f.close()
1092 
1093     def _write_lease_record(self, f, lease_number, lease_info):
1094         offset = self._lease_offset + lease_number * self.LEASE_SIZE
1095hunk ./src/allmydata/storage/backends/disk/immutable.py 184
1096 
1097     def _read_num_leases(self, f):
1098         f.seek(0x08)
1099-        (num_leases,) = struct.unpack(">L", f.read(4))
1100+        ro = f.read(4)
1101+        (num_leases,) = struct.unpack(">L", ro)
1102         return num_leases
1103 
1104     def _write_num_leases(self, f, num_leases):
1105hunk ./src/allmydata/storage/backends/disk/immutable.py 195
1106     def _truncate_leases(self, f, num_leases):
1107         f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1108 
1109+    # These lease operations are intended for use by disk_backend.py.
1110+    # Other clients should not depend on the fact that the disk backend
1111+    # stores leases in share files.
1112+
1113     def get_leases(self):
1114         """Yields a LeaseInfo instance for all leases."""
1115hunk ./src/allmydata/storage/backends/disk/immutable.py 201
1116-        f = open(self.home, 'rb')
1117-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1118-        f.seek(self._lease_offset)
1119-        for i in range(num_leases):
1120-            data = f.read(self.LEASE_SIZE)
1121-            if data:
1122-                yield LeaseInfo().from_immutable_data(data)
1123+        f = self._home.open(mode='rb')
1124+        try:
1125+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1126+            f.seek(self._lease_offset)
1127+            for i in range(num_leases):
1128+                data = f.read(self.LEASE_SIZE)
1129+                if data:
1130+                    yield LeaseInfo().from_immutable_data(data)
1131+        finally:
1132+            f.close()
1133 
1134     def add_lease(self, lease_info):
1135hunk ./src/allmydata/storage/backends/disk/immutable.py 213
1136-        f = open(self.home, 'rb+')
1137-        num_leases = self._read_num_leases(f)
1138-        self._write_lease_record(f, num_leases, lease_info)
1139-        self._write_num_leases(f, num_leases+1)
1140-        f.close()
1141+        f = self._incominghome.open(mode='rb')
1142+        try:
1143+            num_leases = self._read_num_leases(f)
1144+        finally:
1145+            f.close()
1146+        f = self._home.open(mode='rb+')
1147+        try:
1148+            self._write_lease_record(f, num_leases, lease_info)
1149+            self._write_num_leases(f, num_leases+1)
1150+        finally:
1151+            f.close()
1152 
1153     def renew_lease(self, renew_secret, new_expire_time):
1154hunk ./src/allmydata/storage/backends/disk/immutable.py 226
1155-        for i,lease in enumerate(self.get_leases()):
1156-            if constant_time_compare(lease.renew_secret, renew_secret):
1157-                # yup. See if we need to update the owner time.
1158-                if new_expire_time > lease.expiration_time:
1159-                    # yes
1160-                    lease.expiration_time = new_expire_time
1161-                    f = open(self.home, 'rb+')
1162-                    self._write_lease_record(f, i, lease)
1163-                    f.close()
1164-                return
1165+        try:
1166+            for i, lease in enumerate(self.get_leases()):
1167+                if constant_time_compare(lease.renew_secret, renew_secret):
1168+                    # yup. See if we need to update the owner time.
1169+                    if new_expire_time > lease.expiration_time:
1170+                        # yes
1171+                        lease.expiration_time = new_expire_time
1172+                        f = self._home.open('rb+')
1173+                        try:
1174+                            self._write_lease_record(f, i, lease)
1175+                        finally:
1176+                            f.close()
1177+                    return
1178+        except IndexError, e:
1179+            raise Exception("IndexError: %s" % (e,))
1180         raise IndexError("unable to renew non-existent lease")
1181 
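renew_lease above matches the presented renew secret with constant_time_compare to avoid a timing side channel. The standard library's hmac.compare_digest (used here as an illustrative stand-in, not the allmydata helper) gives the same guarantee:

```python
import hmac

def secrets_match(stored, presented):
    # constant-time: comparison time doesn't depend on where bytes differ,
    # so an attacker can't recover the secret byte-by-byte via timing
    return hmac.compare_digest(stored, presented)

print(secrets_match(b"s" * 32, b"s" * 32))  # True
print(secrets_match(b"s" * 32, b"x" * 32))  # False
```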
1182     def add_or_renew_lease(self, lease_info):
1183hunk ./src/allmydata/storage/backends/disk/immutable.py 249
1184                              lease_info.expiration_time)
1185         except IndexError:
1186             self.add_lease(lease_info)
1187-
1188-
1189-    def cancel_lease(self, cancel_secret):
1190-        """Remove a lease with the given cancel_secret. If the last lease is
1191-        cancelled, the file will be removed. Return the number of bytes that
1192-        were freed (by truncating the list of leases, and possibly by
1193-        deleting the file. Raise IndexError if there was no lease with the
1194-        given cancel_secret.
1195-        """
1196-
1197-        leases = list(self.get_leases())
1198-        num_leases_removed = 0
1199-        for i,lease in enumerate(leases):
1200-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1201-                leases[i] = None
1202-                num_leases_removed += 1
1203-        if not num_leases_removed:
1204-            raise IndexError("unable to find matching lease to cancel")
1205-        if num_leases_removed:
1206-            # pack and write out the remaining leases. We write these out in
1207-            # the same order as they were added, so that if we crash while
1208-            # doing this, we won't lose any non-cancelled leases.
1209-            leases = [l for l in leases if l] # remove the cancelled leases
1210-            f = open(self.home, 'rb+')
1211-            for i,lease in enumerate(leases):
1212-                self._write_lease_record(f, i, lease)
1213-            self._write_num_leases(f, len(leases))
1214-            self._truncate_leases(f, len(leases))
1215-            f.close()
1216-        space_freed = self.LEASE_SIZE * num_leases_removed
1217-        if not len(leases):
1218-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1219-            self.unlink()
1220-        return space_freed
1221-
1222-
1223-class BucketWriter(Referenceable):
1224-    implements(RIBucketWriter)
1225-
1226-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1227-        self.ss = ss
1228-        self.incominghome = incominghome
1229-        self.finalhome = finalhome
1230-        self._max_size = max_size # don't allow the client to write more than this
1231-        self._canary = canary
1232-        self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1233-        self.closed = False
1234-        self.throw_out_all_data = False
1235-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1236-        # also, add our lease to the file now, so that other ones can be
1237-        # added by simultaneous uploaders
1238-        self._sharefile.add_lease(lease_info)
1239-
1240-    def allocated_size(self):
1241-        return self._max_size
1242-
1243-    def remote_write(self, offset, data):
1244-        start = time.time()
1245-        precondition(not self.closed)
1246-        if self.throw_out_all_data:
1247-            return
1248-        self._sharefile.write_share_data(offset, data)
1249-        self.ss.add_latency("write", time.time() - start)
1250-        self.ss.count("write")
1251-
1252-    def remote_close(self):
1253-        precondition(not self.closed)
1254-        start = time.time()
1255-
1256-        fileutil.make_dirs(os.path.dirname(self.finalhome))
1257-        fileutil.rename(self.incominghome, self.finalhome)
1258-        try:
1259-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
1260-            # We try to delete the parent (.../ab/abcde) to avoid leaving
1261-            # these directories lying around forever, but the delete might
1262-            # fail if we're working on another share for the same storage
1263-            # index (like ab/abcde/5). The alternative approach would be to
1264-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
1265-            # ShareWriter), each of which is responsible for a single
1266-            # directory on disk, and have them use reference counting of
1267-            # their children to know when they should do the rmdir. This
1268-            # approach is simpler, but relies on os.rmdir refusing to delete
1269-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
1270-            os.rmdir(os.path.dirname(self.incominghome))
1271-            # we also delete the grandparent (prefix) directory, .../ab ,
1272-            # again to avoid leaving directories lying around. This might
1273-            # fail if there is another bucket open that shares a prefix (like
1274-            # ab/abfff).
1275-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
1276-            # we leave the great-grandparent (incoming/) directory in place.
1277-        except EnvironmentError:
1278-            # ignore the "can't rmdir because the directory is not empty"
1279-            # exceptions, those are normal consequences of the
1280-            # above-mentioned conditions.
1281-            pass
1282-        self._sharefile = None
1283-        self.closed = True
1284-        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
1285-
1286-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
1287-        self.ss.bucket_writer_closed(self, filelen)
1288-        self.ss.add_latency("close", time.time() - start)
1289-        self.ss.count("close")
1290-
1291-    def _disconnected(self):
1292-        if not self.closed:
1293-            self._abort()
1294-
1295-    def remote_abort(self):
1296-        log.msg("storage: aborting sharefile %s" % self.incominghome,
1297-                facility="tahoe.storage", level=log.UNUSUAL)
1298-        if not self.closed:
1299-            self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
1300-        self._abort()
1301-        self.ss.count("abort")
1302-
1303-    def _abort(self):
1304-        if self.closed:
1305-            return
1306-
1307-        os.remove(self.incominghome)
1308-        # if we were the last share to be moved, remove the incoming/
1309-        # directory that was our parent
1310-        parentdir = os.path.split(self.incominghome)[0]
1311-        if not os.listdir(parentdir):
1312-            os.rmdir(parentdir)
1313-        self._sharefile = None
1314-
1315-        # We are now considered closed for further writing. We must tell
1316-        # the storage server about this so that it stops expecting us to
1317-        # use the space it allocated for us earlier.
1318-        self.closed = True
1319-        self.ss.bucket_writer_closed(self, 0)
1320-
1321-
1322-class BucketReader(Referenceable):
1323-    implements(RIBucketReader)
1324-
1325-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
1326-        self.ss = ss
1327-        self._share_file = ShareFile(sharefname)
1328-        self.storage_index = storage_index
1329-        self.shnum = shnum
1330-
1331-    def __repr__(self):
1332-        return "<%s %s %s>" % (self.__class__.__name__,
1333-                               base32.b2a_l(self.storage_index[:8], 60),
1334-                               self.shnum)
1335-
1336-    def remote_read(self, offset, length):
1337-        start = time.time()
1338-        data = self._share_file.read_share_data(offset, length)
1339-        self.ss.add_latency("read", time.time() - start)
1340-        self.ss.count("read")
1341-        return data
1342-
1343-    def remote_advise_corrupt_share(self, reason):
1344-        return self.ss.remote_advise_corrupt_share("immutable",
1345-                                                   self.storage_index,
1346-                                                   self.shnum,
1347-                                                   reason)
1348hunk ./src/allmydata/storage/backends/disk/mutable.py 1
1349-import os, stat, struct
1350 
1351hunk ./src/allmydata/storage/backends/disk/mutable.py 2
1352-from allmydata.interfaces import BadWriteEnablerError
1353-from allmydata.util import idlib, log
1354+import struct
1355+
1356+from zope.interface import implements
1357+
1358+from allmydata.interfaces import IStoredMutableShare, BadWriteEnablerError
1359+from allmydata.util import fileutil, idlib, log
1360 from allmydata.util.assertutil import precondition
1361 from allmydata.util.hashutil import constant_time_compare
1362hunk ./src/allmydata/storage/backends/disk/mutable.py 10
1363-from allmydata.storage.lease import LeaseInfo
1364-from allmydata.storage.common import UnknownMutableContainerVersionError, \
1365+from allmydata.util.encodingutil import quote_filepath
1366+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
1367      DataTooLargeError
1368hunk ./src/allmydata/storage/backends/disk/mutable.py 13
1369+from allmydata.storage.lease import LeaseInfo
1370+from allmydata.storage.backends.base import testv_compare
1371 
1372hunk ./src/allmydata/storage/backends/disk/mutable.py 16
1373-# the MutableShareFile is like the ShareFile, but used for mutable data. It
1374-# has a different layout. See docs/mutable.txt for more details.
1375+
1376+# The MutableDiskShare is like the ImmutableDiskShare, but used for mutable data.
1377+# It has a different layout. See docs/mutable.rst for more details.
1378 
1379 # #   offset    size    name
1380 # 1   0         32      magic verstr "tahoe mutable container v1" plus binary
1381hunk ./src/allmydata/storage/backends/disk/mutable.py 31
1382 #                        4    4   expiration timestamp
1383 #                        8   32   renewal token
1384 #                        40  32   cancel token
1385-#                        72  20   nodeid which accepted the tokens
1386+#                        72  20   nodeid that accepted the tokens
1387 # 7   468       (a)     data
1388 # 8   ??        4       count of extra leases
1389 # 9   ??        n*92    extra leases
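The layout table above fixes the header and lease-record sizes that the class asserts below. As a quick standalone sketch (the header format string is the one unpacked in `__init__`; the lease format string is inferred from the table's field widths, so treat it as an assumption):

```python
import struct

# Sizes implied by the layout table above. Header: magic, nodeid,
# write enabler, data length, extra-lease offset. Lease (inferred from
# the table): owner number, expiration, renew token, cancel token, nodeid.
HEADER = ">32s20s32sQQ"
LEASE = ">LL32s32s20s"

header_size = struct.calcsize(HEADER)        # 100 bytes
lease_size = struct.calcsize(LEASE)          # 92 bytes, as asserted below
data_offset = header_size + 4 * lease_size   # 468, matching DATA_OFFSET

print(header_size, lease_size, data_offset)
```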
1390hunk ./src/allmydata/storage/backends/disk/mutable.py 37
1391 
1392 
1393-# The struct module doc says that L's are 4 bytes in size., and that Q's are
1394+# The struct module doc says that L's are 4 bytes in size, and that Q's are
1395 # 8 bytes in size. Since compatibility depends upon this, double-check it.
1396 assert struct.calcsize(">L") == 4, struct.calcsize(">L")
1397 assert struct.calcsize(">Q") == 8, struct.calcsize(">Q")
1398hunk ./src/allmydata/storage/backends/disk/mutable.py 42
1399 
1400-class MutableShareFile:
1401+
1402+class MutableDiskShare(object):
1403+    implements(IStoredMutableShare)
1404 
1405     sharetype = "mutable"
1406     DATA_LENGTH_OFFSET = struct.calcsize(">32s20s32s")
1407hunk ./src/allmydata/storage/backends/disk/mutable.py 54
1408     assert LEASE_SIZE == 92
1409     DATA_OFFSET = HEADER_SIZE + 4*LEASE_SIZE
1410     assert DATA_OFFSET == 468, DATA_OFFSET
1411+
1412     # our sharefiles share with a recognizable string, plus some random
1413     # binary data to reduce the chance that a regular text file will look
1414     # like a sharefile.
1415hunk ./src/allmydata/storage/backends/disk/mutable.py 63
1416     MAX_SIZE = 2*1000*1000*1000 # 2GB, kind of arbitrary
1417     # TODO: decide upon a policy for max share size
1418 
1419-    def __init__(self, filename, parent=None):
1420-        self.home = filename
1421-        if os.path.exists(self.home):
1422+    def __init__(self, storageindex, shnum, home, parent=None):
1423+        self._storageindex = storageindex
1424+        self._shnum = shnum
1425+        self._home = home
1426+        if self._home.exists():
1427             # we don't cache anything, just check the magic
1428hunk ./src/allmydata/storage/backends/disk/mutable.py 69
1429-            f = open(self.home, 'rb')
1430-            data = f.read(self.HEADER_SIZE)
1431-            (magic,
1432-             write_enabler_nodeid, write_enabler,
1433-             data_length, extra_least_offset) = \
1434-             struct.unpack(">32s20s32sQQ", data)
1435-            if magic != self.MAGIC:
1436-                msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
1437-                      (filename, magic, self.MAGIC)
1438-                raise UnknownMutableContainerVersionError(msg)
1439+            f = self._home.open('rb')
1440+            try:
1441+                data = f.read(self.HEADER_SIZE)
1442+                (magic,
1443+                 write_enabler_nodeid, write_enabler,
1444+                 data_length, extra_lease_offset) = \
1445+                 struct.unpack(">32s20s32sQQ", data)
1446+                if magic != self.MAGIC:
1447+                    msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
1448+                          (quote_filepath(self._home), magic, self.MAGIC)
1449+                    raise UnknownMutableContainerVersionError(msg)
1450+            finally:
1451+                f.close()
1452         self.parent = parent # for logging
1453 
1454     def log(self, *args, **kwargs):
1455hunk ./src/allmydata/storage/backends/disk/mutable.py 87
1456         return self.parent.log(*args, **kwargs)
1457 
1458-    def create(self, my_nodeid, write_enabler):
1459-        assert not os.path.exists(self.home)
1460+    def create(self, serverid, write_enabler):
1461+        assert not self._home.exists()
1462         data_length = 0
1463         extra_lease_offset = (self.HEADER_SIZE
1464                               + 4 * self.LEASE_SIZE
1465hunk ./src/allmydata/storage/backends/disk/mutable.py 95
1466                               + data_length)
1467         assert extra_lease_offset == self.DATA_OFFSET # true at creation
1468         num_extra_leases = 0
1469-        f = open(self.home, 'wb')
1470-        header = struct.pack(">32s20s32sQQ",
1471-                             self.MAGIC, my_nodeid, write_enabler,
1472-                             data_length, extra_lease_offset,
1473-                             )
1474-        leases = ("\x00"*self.LEASE_SIZE) * 4
1475-        f.write(header + leases)
1476-        # data goes here, empty after creation
1477-        f.write(struct.pack(">L", num_extra_leases))
1478-        # extra leases go here, none at creation
1479-        f.close()
1480+        f = self._home.open('wb')
1481+        try:
1482+            header = struct.pack(">32s20s32sQQ",
1483+                                 self.MAGIC, serverid, write_enabler,
1484+                                 data_length, extra_lease_offset,
1485+                                 )
1486+            leases = ("\x00"*self.LEASE_SIZE) * 4
1487+            f.write(header + leases)
1488+            # data goes here, empty after creation
1489+            f.write(struct.pack(">L", num_extra_leases))
1490+            # extra leases go here, none at creation
1491+        finally:
1492+            f.close()
1493+
1494+    def __repr__(self):
1495+        return ("<MutableDiskShare %s:%r at %s>"
1496+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
1497+
1498+    def get_used_space(self):
1499+        return fileutil.get_used_space(self._home)
1500+
1501+    def get_storage_index(self):
1502+        return self._storageindex
1503+
1504+    def get_shnum(self):
1505+        return self._shnum
1506 
1507     def unlink(self):
1508hunk ./src/allmydata/storage/backends/disk/mutable.py 123
1509-        os.unlink(self.home)
1510+        self._home.remove()
1511 
1512     def _read_data_length(self, f):
1513         f.seek(self.DATA_LENGTH_OFFSET)
1514hunk ./src/allmydata/storage/backends/disk/mutable.py 291
1515 
1516     def get_leases(self):
1517         """Yields a LeaseInfo instance for all leases."""
1518-        f = open(self.home, 'rb')
1519-        for i, lease in self._enumerate_leases(f):
1520-            yield lease
1521-        f.close()
1522+        f = self._home.open('rb')
1523+        try:
1524+            for i, lease in self._enumerate_leases(f):
1525+                yield lease
1526+        finally:
1527+            f.close()
1528 
1529     def _enumerate_leases(self, f):
1530         for i in range(self._get_num_lease_slots(f)):
1531hunk ./src/allmydata/storage/backends/disk/mutable.py 303
1532             try:
1533                 data = self._read_lease_record(f, i)
1534                 if data is not None:
1535-                    yield i,data
1536+                    yield i, data
1537             except IndexError:
1538                 return
1539 
1540hunk ./src/allmydata/storage/backends/disk/mutable.py 307
1541+    # These lease operations are intended for use by disk_backend.py.
1542+    # Other non-test clients should not depend on the fact that the disk
1543+    # backend stores leases in share files.
1544+
1545     def add_lease(self, lease_info):
1546         precondition(lease_info.owner_num != 0) # 0 means "no lease here"
1547hunk ./src/allmydata/storage/backends/disk/mutable.py 313
1548-        f = open(self.home, 'rb+')
1549-        num_lease_slots = self._get_num_lease_slots(f)
1550-        empty_slot = self._get_first_empty_lease_slot(f)
1551-        if empty_slot is not None:
1552-            self._write_lease_record(f, empty_slot, lease_info)
1553-        else:
1554-            self._write_lease_record(f, num_lease_slots, lease_info)
1555-        f.close()
1556+        f = self._home.open('rb+')
1557+        try:
1558+            num_lease_slots = self._get_num_lease_slots(f)
1559+            empty_slot = self._get_first_empty_lease_slot(f)
1560+            if empty_slot is not None:
1561+                self._write_lease_record(f, empty_slot, lease_info)
1562+            else:
1563+                self._write_lease_record(f, num_lease_slots, lease_info)
1564+        finally:
1565+            f.close()
1566 
1567     def renew_lease(self, renew_secret, new_expire_time):
1568         accepting_nodeids = set()
1569hunk ./src/allmydata/storage/backends/disk/mutable.py 326
1570-        f = open(self.home, 'rb+')
1571-        for (leasenum,lease) in self._enumerate_leases(f):
1572-            if constant_time_compare(lease.renew_secret, renew_secret):
1573-                # yup. See if we need to update the owner time.
1574-                if new_expire_time > lease.expiration_time:
1575-                    # yes
1576-                    lease.expiration_time = new_expire_time
1577-                    self._write_lease_record(f, leasenum, lease)
1578-                f.close()
1579-                return
1580-            accepting_nodeids.add(lease.nodeid)
1581-        f.close()
1582+        f = self._home.open('rb+')
1583+        try:
1584+            for (leasenum, lease) in self._enumerate_leases(f):
1585+                if constant_time_compare(lease.renew_secret, renew_secret):
1586+                    # yup. See if we need to update the owner time.
1587+                    if new_expire_time > lease.expiration_time:
1588+                        # yes
1589+                        lease.expiration_time = new_expire_time
1590+                        self._write_lease_record(f, leasenum, lease)
1591+                    return
1592+                accepting_nodeids.add(lease.nodeid)
1593+        finally:
1594+            f.close()
1595         # Return the accepting_nodeids set, to give the client a chance to
1596hunk ./src/allmydata/storage/backends/disk/mutable.py 340
1597-        # update the leases on a share which has been migrated from its
1598+        # update the leases on a share that has been migrated from its
1599         # original server to a new one.
1600         msg = ("Unable to renew non-existent lease. I have leases accepted by"
1601                " nodeids: ")
1602hunk ./src/allmydata/storage/backends/disk/mutable.py 357
1603         except IndexError:
1604             self.add_lease(lease_info)
1605 
1606-    def cancel_lease(self, cancel_secret):
1607-        """Remove any leases with the given cancel_secret. If the last lease
1608-        is cancelled, the file will be removed. Return the number of bytes
1609-        that were freed (by truncating the list of leases, and possibly by
1610-        deleting the file. Raise IndexError if there was no lease with the
1611-        given cancel_secret."""
1612-
1613-        accepting_nodeids = set()
1614-        modified = 0
1615-        remaining = 0
1616-        blank_lease = LeaseInfo(owner_num=0,
1617-                                renew_secret="\x00"*32,
1618-                                cancel_secret="\x00"*32,
1619-                                expiration_time=0,
1620-                                nodeid="\x00"*20)
1621-        f = open(self.home, 'rb+')
1622-        for (leasenum,lease) in self._enumerate_leases(f):
1623-            accepting_nodeids.add(lease.nodeid)
1624-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1625-                self._write_lease_record(f, leasenum, blank_lease)
1626-                modified += 1
1627-            else:
1628-                remaining += 1
1629-        if modified:
1630-            freed_space = self._pack_leases(f)
1631-            f.close()
1632-            if not remaining:
1633-                freed_space += os.stat(self.home)[stat.ST_SIZE]
1634-                self.unlink()
1635-            return freed_space
1636-
1637-        msg = ("Unable to cancel non-existent lease. I have leases "
1638-               "accepted by nodeids: ")
1639-        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
1640-                         for anid in accepting_nodeids])
1641-        msg += " ."
1642-        raise IndexError(msg)
1643-
1644-    def _pack_leases(self, f):
1645-        # TODO: reclaim space from cancelled leases
1646-        return 0
1647-
1648     def _read_write_enabler_and_nodeid(self, f):
1649         f.seek(0)
1650         data = f.read(self.HEADER_SIZE)
1651hunk ./src/allmydata/storage/backends/disk/mutable.py 369
1652 
1653     def readv(self, readv):
1654         datav = []
1655-        f = open(self.home, 'rb')
1656-        for (offset, length) in readv:
1657-            datav.append(self._read_share_data(f, offset, length))
1658-        f.close()
1659+        f = self._home.open('rb')
1660+        try:
1661+            for (offset, length) in readv:
1662+                datav.append(self._read_share_data(f, offset, length))
1663+        finally:
1664+            f.close()
1665         return datav
1666 
1667hunk ./src/allmydata/storage/backends/disk/mutable.py 377
1668-#    def remote_get_length(self):
1669-#        f = open(self.home, 'rb')
1670-#        data_length = self._read_data_length(f)
1671-#        f.close()
1672-#        return data_length
1673+    def get_size(self):
1674+        return self._home.getsize()
1675+
1676+    def get_data_length(self):
1677+        f = self._home.open('rb')
1678+        try:
1679+            data_length = self._read_data_length(f)
1680+        finally:
1681+            f.close()
1682+        return data_length
1683 
1684     def check_write_enabler(self, write_enabler, si_s):
1685hunk ./src/allmydata/storage/backends/disk/mutable.py 389
1686-        f = open(self.home, 'rb+')
1687-        (real_write_enabler, write_enabler_nodeid) = \
1688-                             self._read_write_enabler_and_nodeid(f)
1689-        f.close()
1690+        f = self._home.open('rb+')
1691+        try:
1692+            (real_write_enabler, write_enabler_nodeid) = self._read_write_enabler_and_nodeid(f)
1693+        finally:
1694+            f.close()
1695         # avoid a timing attack
1696         #if write_enabler != real_write_enabler:
1697         if not constant_time_compare(write_enabler, real_write_enabler):
1698hunk ./src/allmydata/storage/backends/disk/mutable.py 410
1699 
1700     def check_testv(self, testv):
1701         test_good = True
1702-        f = open(self.home, 'rb+')
1703-        for (offset, length, operator, specimen) in testv:
1704-            data = self._read_share_data(f, offset, length)
1705-            if not testv_compare(data, operator, specimen):
1706-                test_good = False
1707-                break
1708-        f.close()
1709+        f = self._home.open('rb+')
1710+        try:
1711+            for (offset, length, operator, specimen) in testv:
1712+                data = self._read_share_data(f, offset, length)
1713+                if not testv_compare(data, operator, specimen):
1714+                    test_good = False
1715+                    break
1716+        finally:
1717+            f.close()
1718         return test_good
1719 
1720     def writev(self, datav, new_length):
1721hunk ./src/allmydata/storage/backends/disk/mutable.py 422
1722-        f = open(self.home, 'rb+')
1723-        for (offset, data) in datav:
1724-            self._write_share_data(f, offset, data)
1725-        if new_length is not None:
1726-            cur_length = self._read_data_length(f)
1727-            if new_length < cur_length:
1728-                self._write_data_length(f, new_length)
1729-                # TODO: if we're going to shrink the share file when the
1730-                # share data has shrunk, then call
1731-                # self._change_container_size() here.
1732-        f.close()
1733-
1734-def testv_compare(a, op, b):
1735-    assert op in ("lt", "le", "eq", "ne", "ge", "gt")
1736-    if op == "lt":
1737-        return a < b
1738-    if op == "le":
1739-        return a <= b
1740-    if op == "eq":
1741-        return a == b
1742-    if op == "ne":
1743-        return a != b
1744-    if op == "ge":
1745-        return a >= b
1746-    if op == "gt":
1747-        return a > b
1748-    # never reached
1749+        f = self._home.open('rb+')
1750+        try:
1751+            for (offset, data) in datav:
1752+                self._write_share_data(f, offset, data)
1753+            if new_length is not None:
1754+                cur_length = self._read_data_length(f)
1755+                if new_length < cur_length:
1756+                    self._write_data_length(f, new_length)
1757+                    # TODO: if we're going to shrink the share file when the
1758+                    # share data has shrunk, then call
1759+                    # self._change_container_size() here.
1760+        finally:
1761+            f.close()
1762 
1763hunk ./src/allmydata/storage/backends/disk/mutable.py 436
1764-class EmptyShare:
1765+    def close(self):
1766+        pass
1767 
1768hunk ./src/allmydata/storage/backends/disk/mutable.py 439
1769-    def check_testv(self, testv):
1770-        test_good = True
1771-        for (offset, length, operator, specimen) in testv:
1772-            data = ""
1773-            if not testv_compare(data, operator, specimen):
1774-                test_good = False
1775-                break
1776-        return test_good
1777 
1778hunk ./src/allmydata/storage/backends/disk/mutable.py 440
1779-def create_mutable_sharefile(filename, my_nodeid, write_enabler, parent):
1780-    ms = MutableShareFile(filename, parent)
1781-    ms.create(my_nodeid, write_enabler)
1782+def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent):
1783+    ms = MutableDiskShare(storageindex, shnum, fp, parent)
1784+    ms.create(serverid, write_enabler)
1785     del ms
1786hunk ./src/allmydata/storage/backends/disk/mutable.py 444
1787-    return MutableShareFile(filename, parent)
1788-
1789+    return MutableDiskShare(storageindex, shnum, fp, parent)
1790addfile ./src/allmydata/storage/backends/null/__init__.py
1791addfile ./src/allmydata/storage/backends/null/null_backend.py
1792hunk ./src/allmydata/storage/backends/null/null_backend.py 2
1793 
1794+import os, struct
1795+
1796+from zope.interface import implements
1797+
1798+from allmydata.interfaces import IStorageBackend, IShareSet, IStoredShare, IStoredMutableShare
1799+from allmydata.util.assertutil import precondition
1800+from allmydata.util.hashutil import constant_time_compare
1801+from allmydata.storage.backends.base import Backend, ShareSet
1802+from allmydata.storage.bucket import BucketWriter
1803+from allmydata.storage.common import si_b2a
1804+from allmydata.storage.lease import LeaseInfo
1805+
1806+
1807+class NullBackend(Backend):
1808+    implements(IStorageBackend)
1809+
1810+    def __init__(self):
1811+        Backend.__init__(self)
1812+
1813+    def get_available_space(self, reserved_space):
1814+        return None
1815+
1816+    def get_sharesets_for_prefix(self, prefix):
1817+        pass
1818+
1819+    def get_shareset(self, storageindex):
1820+        return NullShareSet(storageindex)
1821+
1822+    def fill_in_space_stats(self, stats):
1823+        pass
1824+
1825+    def set_storage_server(self, ss):
1826+        self.ss = ss
1827+
1828+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
1829+        pass
1830+
1831+
1832+class NullShareSet(ShareSet):
1833+    implements(IShareSet)
1834+
1835+    def __init__(self, storageindex):
1836+        self.storageindex = storageindex
1837+
1838+    def get_overhead(self):
1839+        return 0
1840+
1841+    def get_incoming_shnums(self):
1842+        return frozenset()
1843+
1844+    def get_shares(self):
1845+        pass
1846+
1847+    def get_share(self, shnum):
1848+        return None
1849+
1850+    def get_storage_index(self):
1851+        return self.storageindex
1852+
1853+    def get_storage_index_string(self):
1854+        return si_b2a(self.storageindex)
1855+
1856+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
1857+        immutableshare = ImmutableNullShare()
1858+        return BucketWriter(storageserver, immutableshare, max_space_per_bucket, lease_info, canary)
1859+
1860+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
1861+        return MutableNullShare()
1862+
1863+    def _clean_up_after_unlink(self):
1864+        pass
1865+
1866+
1867+class ImmutableNullShare:
1868+    implements(IStoredShare)
1869+    sharetype = "immutable"
1870+
1871+    def __init__(self):
1872+        """ I am a null share: data written to me is discarded, so I
1873+        keep no backing file, enforce no maximum size, and need no
1874+        creation arguments. """
1875+        pass
1876+
1877+    def get_shnum(self):
1878+        return self.shnum
1879+
1880+    def unlink(self):
1881+        os.unlink(self.fname)
1882+
1883+    def read_share_data(self, offset, length):
1884+        precondition(offset >= 0)
1885+        # Reads beyond the end of the data are truncated. Reads that start
1886+        # beyond the end of the data return an empty string.
1887+        seekpos = self._data_offset+offset
1888+        fsize = os.path.getsize(self.fname)
1889+        actuallength = max(0, min(length, fsize-seekpos)) # XXX #1528
1890+        if actuallength == 0:
1891+            return ""
1892+        f = open(self.fname, 'rb')
1893+        try:
1894+            f.seek(seekpos)
1895+            return f.read(actuallength)
1896+        finally:
1897+            f.close()
1895+
1896+    def write_share_data(self, offset, data):
1897+        pass
1898+
1899+    def _write_lease_record(self, f, lease_number, lease_info):
1900+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1901+        f.seek(offset)
1902+        assert f.tell() == offset
1903+        f.write(lease_info.to_immutable_data())
1904+
1905+    def _read_num_leases(self, f):
1906+        f.seek(0x08)
1907+        (num_leases,) = struct.unpack(">L", f.read(4))
1908+        return num_leases
1909+
1910+    def _write_num_leases(self, f, num_leases):
1911+        f.seek(0x08)
1912+        f.write(struct.pack(">L", num_leases))
1913+
1914+    def _truncate_leases(self, f, num_leases):
1915+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1916+
1917+    def get_leases(self):
1918+        """Yields a LeaseInfo instance for all leases."""
1919+        f = open(self.fname, 'rb')
1920+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1921+        f.seek(self._lease_offset)
1922+        for i in range(num_leases):
1923+            data = f.read(self.LEASE_SIZE)
1924+            if data:
1925+                yield LeaseInfo().from_immutable_data(data)
1926+
1927+    def add_lease(self, lease):
1928+        pass
1929+
1930+    def renew_lease(self, renew_secret, new_expire_time):
1931+        for i, lease in enumerate(self.get_leases()):
1932+            if constant_time_compare(lease.renew_secret, renew_secret):
1933+                # yup. See if we need to update the owner time.
1934+                if new_expire_time > lease.expiration_time:
1935+                    # yes
1936+                    lease.expiration_time = new_expire_time
1937+                    f = open(self.fname, 'rb+')
1938+                    self._write_lease_record(f, i, lease)
1939+                    f.close()
1940+                return
1941+        raise IndexError("unable to renew non-existent lease")
1942+
1943+    def add_or_renew_lease(self, lease_info):
1944+        try:
1945+            self.renew_lease(lease_info.renew_secret,
1946+                             lease_info.expiration_time)
1947+        except IndexError:
1948+            self.add_lease(lease_info)
1949+
1950+
1951+class MutableNullShare:
1952+    implements(IStoredMutableShare)
1953+    sharetype = "mutable"
1954+
1955+    """ XXX: TODO """
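The null backend's contract — accept every write, store nothing — can be sketched independently of Tahoe's interfaces. `NullShare` below is a hypothetical stand-in for `ImmutableNullShare`, not the real class:

```python
class NullShare:
    """Minimal stand-in illustrating the discard-everything contract
    of the null backend above (hypothetical; not Tahoe's class)."""
    def __init__(self, shnum=0):
        self.shnum = shnum
        self._size = 0

    def write_share_data(self, offset, data):
        # Track the size a real share would have, but keep no bytes.
        self._size = max(self._size, offset + len(data))

    def read_share_data(self, offset, length):
        return b""  # nothing is ever stored

    def get_allocated_size(self):
        return self._size

s = NullShare()
s.write_share_data(0, b"hello")
print(s.read_share_data(0, 5), s.get_allocated_size())
```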
1956addfile ./src/allmydata/storage/bucket.py
1957hunk ./src/allmydata/storage/bucket.py 1
1958+
1959+import time
1960+
1961+from foolscap.api import Referenceable
1962+
1963+from zope.interface import implements
1964+from allmydata.interfaces import RIBucketWriter, RIBucketReader
1965+from allmydata.util import base32, log
1966+from allmydata.util.assertutil import precondition
1967+
1968+
1969+class BucketWriter(Referenceable):
1970+    implements(RIBucketWriter)
1971+
1972+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1973+        self.ss = ss
1974+        self._max_size = max_size # don't allow the client to write more than this
1975+        self._canary = canary
1976+        self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1977+        self.closed = False
1978+        self.throw_out_all_data = False
1979+        self._share = immutableshare
1980+        # also, add our lease to the file now, so that other ones can be
1981+        # added by simultaneous uploaders
1982+        self._share.add_lease(lease_info)
1983+
1984+    def allocated_size(self):
1985+        return self._max_size
1986+
1987+    def remote_write(self, offset, data):
1988+        start = time.time()
1989+        precondition(not self.closed)
1990+        if self.throw_out_all_data:
1991+            return
1992+        self._share.write_share_data(offset, data)
1993+        self.ss.add_latency("write", time.time() - start)
1994+        self.ss.count("write")
1995+
1996+    def remote_close(self):
1997+        precondition(not self.closed)
1998+        start = time.time()
1999+
2000+        self._share.close()
2001+        filelen = self._share.get_size()
2002+        self._share = None
2003+
2004+        self.closed = True
2005+        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2006+
2007+        self.ss.bucket_writer_closed(self, filelen)
2008+        self.ss.add_latency("close", time.time() - start)
2009+        self.ss.count("close")
2010+
2011+    def _disconnected(self):
2012+        if not self.closed:
2013+            self._abort()
2014+
2015+    def remote_abort(self):
2016+        log.msg("storage: aborting write to share %r" % self._share,
2017+                facility="tahoe.storage", level=log.UNUSUAL)
2018+        if not self.closed:
2019+            self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2020+        self._abort()
2021+        self.ss.count("abort")
2022+
2023+    def _abort(self):
2024+        if self.closed:
2025+            return
2026+        self._share.unlink()
2027+        self._share = None
2028+
2029+        # We are now considered closed for further writing. We must tell
2030+        # the storage server about this so that it stops expecting us to
2031+        # use the space it allocated for us earlier.
2032+        self.closed = True
2033+        self.ss.bucket_writer_closed(self, 0)
2034+
2035+
2036+class BucketReader(Referenceable):
2037+    implements(RIBucketReader)
2038+
2039+    def __init__(self, ss, share):
2040+        self.ss = ss
2041+        self._share = share
2042+        self.storageindex = share.get_storage_index()
2043+        self.shnum = share.get_shnum()
2044+
2045+    def __repr__(self):
2046+        return "<%s %s %s>" % (self.__class__.__name__,
2047+                               base32.b2a_l(self.storageindex[:8], 60),
2048+                               self.shnum)
2049+
2050+    def remote_read(self, offset, length):
2051+        start = time.time()
2052+        data = self._share.read_share_data(offset, length)
2053+        self.ss.add_latency("read", time.time() - start)
2054+        self.ss.count("read")
2055+        return data
2056+
2057+    def remote_advise_corrupt_share(self, reason):
2058+        return self.ss.remote_advise_corrupt_share("immutable",
2059+                                                   self.storageindex,
2060+                                                   self.shnum,
2061+                                                   reason)
2062addfile ./src/allmydata/test/test_backends.py
2063hunk ./src/allmydata/test/test_backends.py 1
2064+import os, stat
2065+from twisted.trial import unittest
2066+from allmydata.util.log import msg
2067+from allmydata.test.common_util import ReallyEqualMixin
2068+import mock
2069+
2070+# This is the code that we're going to be testing.
2071+from allmydata.storage.server import StorageServer
2072+from allmydata.storage.backends.disk.disk_backend import DiskBackend, si_si2dir
2073+from allmydata.storage.backends.null.null_backend import NullBackend
2074+
2075+# The following share file content was generated with
2076+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2077+# with share data == 'a'. The total size of this input
2078+# is 85 bytes.
2079+shareversionnumber = '\x00\x00\x00\x01'
2080+sharedatalength = '\x00\x00\x00\x01'
2081+numberofleases = '\x00\x00\x00\x01'
2082+shareinputdata = 'a'
2083+ownernumber = '\x00\x00\x00\x00'
2084+renewsecret  = 'x'*32
2085+cancelsecret = 'y'*32
2086+expirationtime = '\x00(\xde\x80'
2087+nextlease = ''
2088+containerdata = shareversionnumber + sharedatalength + numberofleases
2089+client_data = shareinputdata + ownernumber + renewsecret + \
2090+    cancelsecret + expirationtime + nextlease
2091+share_data = containerdata + client_data
2092+testnodeid = 'testnodeidxxxxxxxxxx'
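The arithmetic behind the "85 bytes" comment above: a v1 immutable share container starts with a 12-byte header (three big-endian uint32s: version, data length, lease count), followed here by one byte of share data ('a') and a single 72-byte lease record (owner number, renew secret, cancel secret, expiration timestamp):

```python
# Field widths taken from the test data defined above.
header = 4 + 4 + 4        # shareversionnumber + sharedatalength + numberofleases
data = 1                  # shareinputdata == 'a'
lease = 4 + 32 + 32 + 4   # ownernumber + renewsecret + cancelsecret + expirationtime
total = header + data + lease
print(total)  # 85
```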
2093+
2094+
2095+class MockFileSystem(unittest.TestCase):
2096+    """ I simulate a filesystem that the code under test can use. I simulate
2097+    just the parts of the filesystem that the current implementation of the
2098+    disk backend needs. """
2099+    def setUp(self):
2100+        # Make patcher, patch, and effects for disk-using functions.
2101+        msg( "%s.setUp()" % (self,))
2102+        self.mockedfilepaths = {}
2103+        # keys are pathnames, values are MockFilePath objects. This is necessary because
2104+        # MockFilePath behavior sometimes depends on the filesystem. Where it does,
2105+        # self.mockedfilepaths has the relevant information.
2106+        self.storedir = MockFilePath('teststoredir', self.mockedfilepaths)
2107+        self.basedir = self.storedir.child('shares')
2108+        self.baseincdir = self.basedir.child('incoming')
2109+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
2110+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
2111+        self.shareincomingname = self.sharedirincomingname.child('0')
2112+        self.sharefinalname = self.sharedirfinalname.child('0')
2113+
2114+        # FIXME: these patches won't work; disk_backend no longer imports FilePath, BucketCountingCrawler,
2115+        # or LeaseCheckingCrawler.
2116+
2117+        self.FilePathFake = mock.patch('allmydata.storage.backends.disk.disk_backend.FilePath', new = MockFilePath)
2118+        self.FilePathFake.__enter__()
2119+
2120+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.BucketCountingCrawler')
2121+        FakeBCC = self.BCountingCrawler.__enter__()
2122+        FakeBCC.side_effect = self.call_FakeBCC
2123+
2124+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.LeaseCheckingCrawler')
2125+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
2126+        FakeLCC.side_effect = self.call_FakeLCC
2127+
2128+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
2129+        GetSpace = self.get_available_space.__enter__()
2130+        GetSpace.side_effect = self.call_get_available_space
2131+
2132+        self.statforsize = mock.patch('allmydata.storage.backends.disk.core.filepath.stat')
2133+        getsize = self.statforsize.__enter__()
2134+        getsize.side_effect = self.call_statforsize
2135+
2136+    def call_FakeBCC(self, StateFile):
2137+        return MockBCC()
2138+
2139+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
2140+        return MockLCC()
2141+
2142+    def call_get_available_space(self, storedir, reservedspace):
2143+        # The input vector has an input size of 85.
2144+        return 85 - reservedspace
2145+
2146+    def call_statforsize(self, fakefpname):
2147+        return self.mockedfilepaths[fakefpname].fileobject.size()
2148+
2149+    def tearDown(self):
2150+        msg( "%s.tearDown()" % (self,))
2151+        self.FilePathFake.__exit__()
2152+        self.mockedfilepaths = {}
2153+
2154+
2155+class MockFilePath:
2156+    def __init__(self, pathstring, ffpathsenvironment, existence=False):
2157+        #  I can't just make the values MockFileObjects because they may be directories.
2158+        self.mockedfilepaths = ffpathsenvironment
2159+        self.path = pathstring
2160+        self.existence = existence
2161+        if not self.mockedfilepaths.has_key(self.path):
2162+            #  The first MockFilePath object is special
2163+            self.mockedfilepaths[self.path] = self
2164+            self.fileobject = None
2165+        else:
2166+            self.fileobject = self.mockedfilepaths[self.path].fileobject
2167+        self.spawn = {}
2168+        self.antecedent = os.path.dirname(self.path)
2169+
2170+    def setContent(self, contentstring):
2171+        # This method rewrites the data in the file that corresponds to its path
2172+        # name whether it preexisted or not.
2173+        self.fileobject = MockFileObject(contentstring)
2174+        self.existence = True
2175+        self.mockedfilepaths[self.path].fileobject = self.fileobject
2176+        self.mockedfilepaths[self.path].existence = self.existence
2177+        self.setparents()
2178+
2179+    def create(self):
2180+        # This method chokes if there's a pre-existing file!
2181+        if self.mockedfilepaths[self.path].fileobject:
2182+            raise OSError
2183+        else:
2184+            self.existence = True
2185+            self.mockedfilepaths[self.path].fileobject = self.fileobject
2186+            self.mockedfilepaths[self.path].existence = self.existence
2187+            self.setparents()
2188+
2189+    def open(self, mode='r'):
2190+        # XXX Makes no use of mode.
2191+        if not self.mockedfilepaths[self.path].fileobject:
2192+            # If there's no fileobject there already then make one and put it there.
2193+            self.fileobject = MockFileObject()
2194+            self.existence = True
2195+            self.mockedfilepaths[self.path].fileobject = self.fileobject
2196+            self.mockedfilepaths[self.path].existence = self.existence
2197+        else:
2198+            # Otherwise get a ref to it.
2199+            self.fileobject = self.mockedfilepaths[self.path].fileobject
2200+            self.existence = self.mockedfilepaths[self.path].existence
2201+        return self.fileobject.open(mode)
2202+
2203+    def child(self, childstring):
2204+        arg2child = os.path.join(self.path, childstring)
2205+        child = MockFilePath(arg2child, self.mockedfilepaths)
2206+        return child
2207+
2208+    def children(self):
2209+        childrenfromffs = [ffp for ffp in self.mockedfilepaths.values() if ffp.path.startswith(self.path)]
2210+        childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)]
2211+        childrenfromffs = [ffp for ffp in childrenfromffs if ffp.exists()]
2212+        self.spawn = frozenset(childrenfromffs)
2213+        return self.spawn
2214+
2215+    def parent(self):
2216+        if self.mockedfilepaths.has_key(self.antecedent):
2217+            parent = self.mockedfilepaths[self.antecedent]
2218+        else:
2219+            parent = MockFilePath(self.antecedent, self.mockedfilepaths)
2220+        return parent
2221+
2222+    def parents(self):
2223+        antecedents = []
2224+        def f(fps, antecedents):
2225+            newfps = os.path.split(fps)[0]
2226+            if newfps:
2227+                antecedents.append(newfps)
2228+                f(newfps, antecedents)
2229+        f(self.path, antecedents)
2230+        return antecedents
2231+
2232+    def setparents(self):
2233+        for fps in self.parents():
2234+            if not self.mockedfilepaths.has_key(fps):
2235+                self.mockedfilepaths[fps] = MockFilePath(fps, self.mockedfilepaths, existence=True)
2236+
2237+    def basename(self):
2238+        return os.path.split(self.path)[1]
2239+
2240+    def moveTo(self, newffp):
2241+        #  XXX Makes no distinction between file and directory arguments; this is a deviation from filepath.moveTo
2242+        if self.mockedfilepaths[newffp.path].exists():
2243+            raise OSError
2244+        else:
2245+            self.mockedfilepaths[newffp.path] = self
2246+            self.path = newffp.path
2247+
2248+    def getsize(self):
2249+        return self.fileobject.getsize()
2250+
2251+    def exists(self):
2252+        return self.existence
2253+
2254+    def isdir(self):
2255+        return True
2256+
2257+    def makedirs(self):
2258+        # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
2259+        pass
2260+
2261+    def remove(self):
2262+        pass
2263+
2264+
2265+class MockFileObject:
2266+    def __init__(self, contentstring=''):
2267+        self.buffer = contentstring
2268+        self.pos = 0
2269+    def open(self, mode='r'):
2270+        return self
2271+    def write(self, instring):
2272+        begin = self.pos
2273+        padlen = begin - len(self.buffer)
2274+        if padlen > 0:
2275+            self.buffer += '\x00' * padlen
2276+        end = self.pos + len(instring)
2277+        self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
2278+        self.pos = end
2279+    def close(self):
2280+        self.pos = 0
2281+    def seek(self, pos):
2282+        self.pos = pos
2283+    def read(self, numberbytes):
2284+        return self.buffer[self.pos:self.pos+numberbytes]
2285+    def tell(self):
2286+        return self.pos
2287+    def size(self):
2288+        # XXX This method A: Is not to be found in a real file B: Is part of a wild-mung-up of filepath.stat!
2289+        # XXX Finally we shall hopefully use a getsize method soon, must consult first though.
2290+        # Hmmm...  perhaps we need to sometimes stat the address when there's not a mockfileobject present?
2291+        return {stat.ST_SIZE:len(self.buffer)}
2292+    def getsize(self):
2293+        return len(self.buffer)
2294+
2295+class MockBCC:
2296+    def setServiceParent(self, Parent):
2297+        pass
2298+
2299+
2300+class MockLCC:
2301+    def setServiceParent(self, Parent):
2302+        pass
2303+
2304+
2305+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
2306+    """ NullBackend is just for testing and executable documentation, so
2307+    this test is actually a test of StorageServer in which we're using
2308+    NullBackend as helper code for the test, rather than a test of
2309+    NullBackend. """
2310+    def setUp(self):
2311+        self.ss = StorageServer(testnodeid, NullBackend())
2312+
2313+    @mock.patch('os.mkdir')
2314+    @mock.patch('__builtin__.open')
2315+    @mock.patch('os.listdir')
2316+    @mock.patch('os.path.isdir')
2317+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2318+        """
2319+        Write a new share. This tests that StorageServer's remote_allocate_buckets
2320+        generates the correct return types when given test-vector arguments. That
2321+        bs is of the correct type is verified by attempting to invoke remote_write
2322+        on bs[0].
2323+        """
2324+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2325+        bs[0].remote_write(0, 'a')
2326+        self.failIf(mockisdir.called)
2327+        self.failIf(mocklistdir.called)
2328+        self.failIf(mockopen.called)
2329+        self.failIf(mockmkdir.called)
2330+
2331+
2332+class TestServerConstruction(MockFileSystem, ReallyEqualMixin):
2333+    def test_create_server_disk_backend(self):
2334+        """ This tests whether a server instance can be constructed with a
2335+        filesystem backend. To pass the test, it mustn't use the filesystem
2336+        outside of its configured storedir. """
2337+        StorageServer(testnodeid, DiskBackend(self.storedir))
2338+
2339+
2340+class TestServerAndDiskBackend(MockFileSystem, ReallyEqualMixin):
2341+    """ This tests both the StorageServer and the Disk backend together. """
2342+    def setUp(self):
2343+        MockFileSystem.setUp(self)
2344+        try:
2345+            self.backend = DiskBackend(self.storedir)
2346+            self.ss = StorageServer(testnodeid, self.backend)
2347+
2348+            self.backendwithreserve = DiskBackend(self.storedir, reserved_space = 1)
2349+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
2350+        except:
2351+            MockFileSystem.tearDown(self)
2352+            raise
2353+
2354+    @mock.patch('time.time')
2355+    @mock.patch('allmydata.util.fileutil.get_available_space')
2356+    def test_out_of_space(self, mockget_available_space, mocktime):
2357+        mocktime.return_value = 0
2358+
2359+        def call_get_available_space(dir, reserve):
2360+            return 0
2361+
2362+        mockget_available_space.side_effect = call_get_available_space
2363+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2364+        self.failUnlessReallyEqual(bsc, {})
2365+
2366+    @mock.patch('time.time')
2367+    def test_write_and_read_share(self, mocktime):
2368+        """
2369+        Write a new share, read it, and test the server's (and disk backend's)
2370+        handling of simultaneous and successive attempts to write the same
2371+        share.
2372+        """
2373+        mocktime.return_value = 0
2374+        # Inspect incoming and fail unless it's empty.
2375+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
2376+
2377+        self.failUnlessReallyEqual(incomingset, frozenset())
2378+
2379+        # Populate incoming with the sharenum: 0.
2380+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
2381+
2382+        # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
2383+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
2384+
2385+
2386+
2387+        # Attempt to create a second share writer with the same sharenum.
2388+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
2389+
2390+        # Show that no sharewriter results from a remote_allocate_buckets
2391+        # with the same si and sharenum, until BucketWriter.remote_close()
2392+        # has been called.
2393+        self.failIf(bsa)
2394+
2395+        # Test allocated size.
2396+        spaceint = self.ss.allocated_size()
2397+        self.failUnlessReallyEqual(spaceint, 1)
2398+
2399+        # Write 'a' to shnum 0. Only tested together with close and read.
2400+        bs[0].remote_write(0, 'a')
2401+
2402+        # Preclose: Inspect final, failUnless nothing there.
2403+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
2404+        bs[0].remote_close()
2405+
2406+        # Postclose: (Omnibus) failUnless written data is in final.
2407+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
2408+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
2409+        contents = sharesinfinal[0].read_share_data(0, 73)
2410+        self.failUnlessReallyEqual(contents, client_data)
2411+
2412+        # Exercise the case that the share we're asking to allocate is
2413+        # already (completely) uploaded.
2414+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2415+
2416+
2417+    def test_read_old_share(self):
2418+        """ This tests whether the code correctly finds and reads
2419+        shares written out by old (Tahoe-LAFS <= v1.8.2)
2420+        servers. There is a similar test in test_download, but that one
2421+        is from the perspective of the client and exercises a deeper
2422+        stack of code. This one is for exercising just the
2423+        StorageServer object. """
2424+        # Construct a file with the appropriate contents in the mockfilesystem.
2425+        datalen = len(share_data)
2426+        finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(0))
2427+        finalhome.setContent(share_data)
2428+
2429+        # Now begin the test.
2430+        bs = self.ss.remote_get_buckets('teststorage_index')
2431+
2432+        self.failUnlessEqual(len(bs), 1)
2433+        b = bs['0']
2434+        # These should match by definition; the next two cases cover reads whose behavior is not completely unambiguous.
2435+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
2436+        # If you try to read past the end you get as much data as is there.
2437+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
2438+        # If you start reading past the end of the file you get the empty string.
2439+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2440}
2441[Pluggable backends -- all other changes. refs #999
2442david-sarah@jacaranda.org**20110919233256
2443 Ignore-this: 1a77b6b5d178b32a9b914b699ba7e957
2444] {
2445hunk ./src/allmydata/client.py 245
2446             sharetypes.append("immutable")
2447         if self.get_config("storage", "expire.mutable", True, boolean=True):
2448             sharetypes.append("mutable")
2449-        expiration_sharetypes = tuple(sharetypes)
2450 
2451hunk ./src/allmydata/client.py 246
2452+        expiration_policy = {
2453+            'enabled': expire,
2454+            'mode': mode,
2455+            'override_lease_duration': o_l_d,
2456+            'cutoff_date': cutoff_date,
2457+            'sharetypes': tuple(sharetypes),
2458+        }
2459         ss = StorageServer(storedir, self.nodeid,
2460                            reserved_space=reserved,
2461                            discard_storage=discard,
2462hunk ./src/allmydata/client.py 258
2463                            readonly_storage=readonly,
2464                            stats_provider=self.stats_provider,
2465-                           expiration_enabled=expire,
2466-                           expiration_mode=mode,
2467-                           expiration_override_lease_duration=o_l_d,
2468-                           expiration_cutoff_date=cutoff_date,
2469-                           expiration_sharetypes=expiration_sharetypes)
2470+                           expiration_policy=expiration_policy)
2471         self.add_service(ss)
2472 
2473         d = self.when_tub_ready()
2474hunk ./src/allmydata/immutable/offloaded.py 306
2475         if os.path.exists(self._encoding_file):
2476             self.log("ciphertext already present, bypassing fetch",
2477                      level=log.UNUSUAL)
2478+            # XXX the following comment is probably stale, since
2479+            # LocalCiphertextReader.get_plaintext_hashtree_leaves does not exist.
2480+            #
2481             # we'll still need the plaintext hashes (when
2482             # LocalCiphertextReader.get_plaintext_hashtree_leaves() is
2483             # called), and currently the easiest way to get them is to ask
2484hunk ./src/allmydata/immutable/upload.py 765
2485             self._status.set_progress(1, progress)
2486         return cryptdata
2487 
2488-
2489     def get_plaintext_hashtree_leaves(self, first, last, num_segments):
2490hunk ./src/allmydata/immutable/upload.py 766
2491+        """OBSOLETE; Get the leaf nodes of a merkle hash tree over the
2492+        plaintext segments, i.e. get the tagged hashes of the given segments.
2493+        The segment size is expected to be generated by the
2494+        IEncryptedUploadable before any plaintext is read or ciphertext
2495+        produced, so that the segment hashes can be generated with only a
2496+        single pass.
2497+
2498+        This returns a Deferred that fires with a sequence of hashes, using:
2499+
2500+         tuple(segment_hashes[first:last])
2501+
2502+        'num_segments' is used to assert that the number of segments that the
2503+        IEncryptedUploadable handled matches the number of segments that the
2504+        encoder was expecting.
2505+
2506+        This method must not be called until the final byte has been read
2507+        from read_encrypted(). Once this method is called, read_encrypted()
2508+        can never be called again.
2509+        """
2510         # this is currently unused, but will live again when we fix #453
2511         if len(self._plaintext_segment_hashes) < num_segments:
2512             # close out the last one
2513hunk ./src/allmydata/immutable/upload.py 803
2514         return defer.succeed(tuple(self._plaintext_segment_hashes[first:last]))
2515 
2516     def get_plaintext_hash(self):
2517+        """OBSOLETE; Get the hash of the whole plaintext.
2518+
2519+        This returns a Deferred that fires with a tagged SHA-256 hash of the
2520+        whole plaintext, obtained from hashutil.plaintext_hash(data).
2521+        """
2522+        # this is currently unused, but will live again when we fix #453
2523         h = self._plaintext_hasher.digest()
2524         return defer.succeed(h)
2525 
2526hunk ./src/allmydata/interfaces.py 29
2527 Number = IntegerConstraint(8) # 2**(8*8) == 16EiB ~= 18e18 ~= 18 exabytes
2528 Offset = Number
2529 ReadSize = int # the 'int' constraint is 2**31 == 2Gib -- large files are processed in not-so-large increments
2530-WriteEnablerSecret = Hash # used to protect mutable bucket modifications
2531-LeaseRenewSecret = Hash # used to protect bucket lease renewal requests
2532-LeaseCancelSecret = Hash # used to protect bucket lease cancellation requests
2533+WriteEnablerSecret = Hash # used to protect mutable share modifications
2534+LeaseRenewSecret = Hash # used to protect lease renewal requests
2535+LeaseCancelSecret = Hash # used to protect lease cancellation requests
2536 
2537 class RIStubClient(RemoteInterface):
2538     """Each client publishes a service announcement for a dummy object called
2539hunk ./src/allmydata/interfaces.py 106
2540                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2541                          allocated_size=Offset, canary=Referenceable):
2542         """
2543-        @param storage_index: the index of the bucket to be created or
2544+        @param storage_index: the index of the shareset to be created or
2545                               increfed.
2546         @param sharenums: these are the share numbers (probably between 0 and
2547                           99) that the sender is proposing to store on this
2548hunk ./src/allmydata/interfaces.py 111
2549                           server.
2550-        @param renew_secret: This is the secret used to protect bucket refresh
2551+        @param renew_secret: This is the secret used to protect lease renewal.
2552                              This secret is generated by the client and
2553                              stored for later comparison by the server. Each
2554                              server is given a different secret.
2555hunk ./src/allmydata/interfaces.py 115
2556-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2557-        @param canary: If the canary is lost before close(), the bucket is
2558+        @param cancel_secret: ignored
2559+        @param canary: If the canary is lost before close(), the allocation is
2560                        deleted.
2561         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2562                  already have and allocated is what we hereby agree to accept.
2563hunk ./src/allmydata/interfaces.py 129
2564                   renew_secret=LeaseRenewSecret,
2565                   cancel_secret=LeaseCancelSecret):
2566         """
2567-        Add a new lease on the given bucket. If the renew_secret matches an
2568+        Add a new lease on the given shareset. If the renew_secret matches an
2569         existing lease, that lease will be renewed instead. If there is no
2570hunk ./src/allmydata/interfaces.py 131
2571-        bucket for the given storage_index, return silently. (note that in
2572+        shareset for the given storage_index, return silently. (Note that in
2573         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2574hunk ./src/allmydata/interfaces.py 133
2575-        bucket)
2576+        shareset.)
2577         """
2578         return Any() # returns None now, but future versions might change
2579 
2580hunk ./src/allmydata/interfaces.py 139
2581     def renew_lease(storage_index=StorageIndex, renew_secret=LeaseRenewSecret):
2582         """
2583-        Renew the lease on a given bucket, resetting the timer to 31 days.
2584-        Some networks will use this, some will not. If there is no bucket for
2585+        Renew the lease on a given shareset, resetting the timer to 31 days.
2586+        Some networks will use this, some will not. If there is no shareset for
2587         the given storage_index, IndexError will be raised.
2588 
2589         For mutable shares, if the given renew_secret does not match an
2590hunk ./src/allmydata/interfaces.py 146
2591         existing lease, IndexError will be raised with a note listing the
2592         server-nodeids on the existing leases, so leases on migrated shares
2593-        can be renewed or cancelled. For immutable shares, IndexError
2594-        (without the note) will be raised.
2595+        can be renewed. For immutable shares, IndexError (without the note)
2596+        will be raised.
2597         """
2598         return Any()
2599 
2600hunk ./src/allmydata/interfaces.py 154
2601     def get_buckets(storage_index=StorageIndex):
2602         return DictOf(int, RIBucketReader, maxKeys=MAX_BUCKETS)
2603 
2604-
2605-
2606     def slot_readv(storage_index=StorageIndex,
2607                    shares=ListOf(int), readv=ReadVector):
2608         """Read a vector from the numbered shares associated with the given
2609hunk ./src/allmydata/interfaces.py 163
2610 
2611     def slot_testv_and_readv_and_writev(storage_index=StorageIndex,
2612                                         secrets=TupleOf(WriteEnablerSecret,
2613-                                                        LeaseRenewSecret,
2614-                                                        LeaseCancelSecret),
2615+                                                        LeaseRenewSecret),
2616                                         tw_vectors=TestAndWriteVectorsForShares,
2617                                         r_vector=ReadVector,
2618                                         ):
2619hunk ./src/allmydata/interfaces.py 167
2620-        """General-purpose test-and-set operation for mutable slots. Perform
2621-        a bunch of comparisons against the existing shares. If they all pass,
2622-        then apply a bunch of write vectors to those shares. Then use the
2623-        read vectors to extract data from all the shares and return the data.
2624+        """
2625+        General-purpose atomic test-read-and-set operation for mutable slots.
2626+        Perform a bunch of comparisons against the existing shares. If they
2627+        all pass: use the read vectors to extract data from all the shares,
2628+        then apply a bunch of write vectors to those shares. Return the read
2629+        data, which does not include any modifications made by the writes.
2630 
2631         This method is, um, large. The goal is to allow clients to update all
2632         the shares associated with a mutable file in a single round trip.
2633hunk ./src/allmydata/interfaces.py 177
2634 
2635-        @param storage_index: the index of the bucket to be created or
2636+        @param storage_index: the index of the shareset to be created or
2637                               increfed.
2638         @param write_enabler: a secret that is stored along with the slot.
2639                               Writes are accepted from any caller who can
2640hunk ./src/allmydata/interfaces.py 183
2641                               present the matching secret. A different secret
2642                               should be used for each slot*server pair.
2643-        @param renew_secret: This is the secret used to protect bucket refresh
2644+        @param renew_secret: This is the secret used to protect lease renewal.
2645                              This secret is generated by the client and
2646                              stored for later comparison by the server. Each
2647                              server is given a different secret.
2648hunk ./src/allmydata/interfaces.py 187
2649-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2650+        @param cancel_secret: ignored
2651 
2652hunk ./src/allmydata/interfaces.py 189
2653-        The 'secrets' argument is a tuple of (write_enabler, renew_secret,
2654-        cancel_secret). The first is required to perform any write. The
2655-        latter two are used when allocating new shares. To simply acquire a
2656-        new lease on existing shares, use an empty testv and an empty writev.
2657+        The 'secrets' argument is a tuple with (write_enabler, renew_secret).
2658+        The write_enabler is required to perform any write. The renew_secret
2659+        is used when allocating new shares.
2660 
2661         Each share can have a separate test vector (i.e. a list of
2662         comparisons to perform). If all vectors for all shares pass, then all
2663hunk ./src/allmydata/interfaces.py 280
2664         store that on disk.
2665         """
2666 
2667-class IStorageBucketWriter(Interface):
2668+
2669+class IStorageBackend(Interface):
2670     """
2671hunk ./src/allmydata/interfaces.py 283
2672-    Objects of this kind live on the client side.
2673+    Objects of this kind live on the server side and are used by the
2674+    storage server object.
2675     """
2676hunk ./src/allmydata/interfaces.py 286
2677-    def put_block(segmentnum=int, data=ShareData):
2678-        """@param data: For most segments, this data will be 'blocksize'
2679-        bytes in length. The last segment might be shorter.
2680-        @return: a Deferred that fires (with None) when the operation completes
2681+    def get_available_space():
2682+        """
2683+        Returns available space for share storage in bytes, or
2684+        None if this information is not available or if the available
2685+        space is unlimited.
2686+
2687+        If the backend is configured for read-only mode then this will
2688+        return 0.
2689+        """
2690+
2691+    def get_sharesets_for_prefix(prefix):
2692+        """
2693+        Generates IShareSet objects for all storage indices matching the
2694+        given prefix for which this backend holds shares.
2695+        """
2696+
2697+    def get_shareset(storageindex):
2698+        """
2699+        Get an IShareSet object for the given storage index.
2700+        """
2701+
2702+    def advise_corrupt_share(storageindex, sharetype, shnum, reason):
2703+        """
2704+        Clients who discover hash failures in shares that they have
2705+        downloaded from me will use this method to inform me about the
2706+        failures. I will record their concern so that my operator can
2707+        manually inspect the shares in question.
2708+
2709+        'sharetype' is either 'mutable' or 'immutable'. 'shnum' is the integer
2710+        share number. 'reason' is a human-readable explanation of the problem,
2711+        probably including some expected hash values and the computed ones
2712+        that did not match. Corruption advisories for mutable shares should
2713+        include a hash of the public key (the same value that appears in the
2714+        mutable-file verify-cap), since the current share format does not
2715+        store that on disk.
2716+
2717+        @param storageindex=str
2718+        @param sharetype=str
2719+        @param shnum=int
2720+        @param reason=str
2721+        """
2722+
2723+
2724+class IShareSet(Interface):
2725+    def get_storage_index():
2726+        """
2727+        Returns the storage index for this shareset.
2728+        """
2729+
2730+    def get_storage_index_string():
2731+        """
2732+        Returns the base32-encoded storage index for this shareset.
2733+        """
2734+
2735+    def get_overhead():
2736+        """
2737+        Returns the storage overhead, in bytes, of this shareset (exclusive
2738+        of the space used by its shares).
2739+        """
2740+
2741+    def get_shares():
2742+        """
2743+        Generates the IStoredShare objects held in this shareset.
2744+        """
2745+
2746+    def has_incoming(shnum):
2747+        """
2748+        Returns True if this shareset has an incoming (partial) share with this number, otherwise False.
2749+        """
2750+
2751+    def make_bucket_writer(storageserver, shnum, max_space_per_bucket, lease_info, canary):
2752+        """
2753+        Create a bucket writer that can be used to write data to a given share.
2754+
2755+        @param storageserver=RIStorageServer
2756+        @param shnum=int: A share number in this shareset
2757+        @param max_space_per_bucket=int: The maximum space allocated for the
2758+                 share, in bytes
2759+        @param lease_info=LeaseInfo: The initial lease information
2760+        @param canary=Referenceable: If the canary is lost before close(), the
2761+                 bucket is deleted.
2762+        @return an IStorageBucketWriter for the given share
2763+        """
2764+
2765+    def make_bucket_reader(storageserver, share):
2766+        """
2767+        Create a bucket reader that can be used to read data from a given share.
2768+
2769+        @param storageserver=RIStorageServer
2770+        @param share=IStoredShare
2771+        @return an IStorageBucketReader for the given share
2772+        """
2773+
2774+    def readv(wanted_shnums, read_vector):
2775+        """
2776+        Read a vector from the numbered shares in this shareset. An empty
2777+        wanted_shnums list means to return data from all known shares.
2778+
2779+        @param wanted_shnums=ListOf(int)
2780+        @param read_vector=ReadVector
2781+        @return DictOf(int, ReadData): shnum -> results, with one key per share
2782+        """
2783+
2784+    def testv_and_readv_and_writev(storageserver, secrets, test_and_write_vectors, read_vector, expiration_time):
2785+        """
2786+        General-purpose atomic test-read-and-set operation for mutable slots.
2787+        Perform a bunch of comparisons against the existing shares in this
2788+        shareset. If they all pass: use the read vectors to extract data from
2789+        all the shares, then apply a bunch of write vectors to those shares.
2790+        Return the read data, which does not include any modifications made by
2791+        the writes.
2792+
2793+        See the similar method in RIStorageServer for more detail.
2794+
2795+        @param storageserver=RIStorageServer
2796+        @param secrets=TupleOf(WriteEnablerSecret, LeaseRenewSecret[, ...])
2797+        @param test_and_write_vectors=TestAndWriteVectorsForShares
2798+        @param read_vector=ReadVector
2799+        @param expiration_time=int
2800+        @return TupleOf(bool, DictOf(int, ReadData))
2801+        """
2802+
2803+    def add_or_renew_lease(lease_info):
2804+        """
2805+        Add a new lease on the shares in this shareset. If the renew_secret
2806+        matches an existing lease, that lease will be renewed instead. If
2807+        there are no shares in this shareset, return silently.
2808+
2809+        @param lease_info=LeaseInfo
2810+        """
2811+
2812+    def renew_lease(renew_secret, new_expiration_time):
2813+        """
2814+        Renew a lease on the shares in this shareset, resetting the timer
2815+        to 31 days. Some grids will use this, some will not. If there are no
2816+        shares in this shareset, IndexError will be raised.
2817+
2818+        For mutable shares, if the given renew_secret does not match an
2819+        existing lease, IndexError will be raised with a note listing the
2820+        server-nodeids on the existing leases, so leases on migrated shares
2821+        can be renewed. For immutable shares, IndexError (without the note)
2822+        will be raised.
2823+
2824+        @param renew_secret=LeaseRenewSecret
2825+        """
2826+
2827+
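The read-vector semantics used by IShareSet.readv above (a list of (offset, length) ranges applied to each wanted share, with an empty wanted_shnums meaning all shares) can be sketched in plain Python. This is an illustrative model only, not part of the patch; the `shares` dict and the helper names are hypothetical:

```python
def apply_read_vector(share_data, read_vector):
    """Return the list of byte ranges of share_data described by read_vector,
    where read_vector is a list of (offset, length) pairs."""
    return [share_data[offset:offset + length]
            for (offset, length) in read_vector]

def readv(shares, wanted_shnums, read_vector):
    """Mimic IShareSet.readv over an in-memory dict mapping shnum -> bytes.
    An empty wanted_shnums list means: read from all known shares."""
    shnums = wanted_shnums or sorted(shares.keys())
    # unknown share numbers are silently omitted from the result dict
    return dict((shnum, apply_read_vector(shares[shnum], read_vector))
                for shnum in shnums if shnum in shares)
```

For example, `readv({0: b"abcdefgh", 3: b"ijklmnop"}, [], [(0, 4)])` returns `{0: [b"abcd"], 3: [b"ijkl"]}`.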
2828+class IStoredShare(Interface):
2829+    """
2830+    This object provides access to all of the share data. It is intended
2831+    for lazy evaluation: in many use cases, substantially less than all
2832+    of the share data will actually be accessed.
2833+    """
2834+    def close():
2835+        """
2836+        Complete writing to this share.
2837+        """
2838+
2839+    def get_storage_index():
2840+        """
2841+        Returns the storage index.
2842+        """
2843+
2844+    def get_shnum():
2845+        """
2846+        Returns the share number.
2847+        """
2848+
2849+    def get_data_length():
2850+        """
2851+        Returns the data length in bytes.
2852+        """
2853+
2854+    def get_size():
2855+        """
2856+        Returns the size of the share in bytes.
2857+        """
2858+
2859+    def get_used_space():
2860+        """
2861+        Returns the amount of backend storage used by this share,
2862+        including overhead, in bytes.
2863+        """
2864+
2865+    def unlink():
2866+        """
2867+        Signal that this share can be removed from the backend storage. This does
2868+        not guarantee that the share data will be immediately inaccessible, or
2869+        that it will be securely erased.
2870+        """
2871+
2872+    def readv(read_vector):
2873+        """
2874+        Read a vector of (offset, length) ranges from this share's data, returning a list of byte strings, one per range.
2875+        """
2876+
2877+
2878+class IStoredMutableShare(IStoredShare):
2879+    def check_write_enabler(write_enabler, si_s):
2880+        """
2881+        Check that write_enabler matches this share's write enabler secret, raising an error (including si_s, the base32 storage index) if it does not.
2882         """
2883 
2884hunk ./src/allmydata/interfaces.py 489
2885-    def put_plaintext_hashes(hashes=ListOf(Hash)):
2886+    def check_testv(test_vector):
2887+        """
2888+        Return True if this share's data satisfies all of the comparisons in test_vector.
2889+        """
2890+
2891+    def writev(datav, new_length):
2892+        """
2893+        Apply the (offset, data) write vectors in datav to this share, then set the share's data length to new_length if it is not None.
2894+        """
2895+
2896+
2897+class IStorageBucketWriter(Interface):
2898+    """
2899+    Objects of this kind live on the client side.
2900+    """
2901+    def put_block(segmentnum, data):
2902         """
2903hunk ./src/allmydata/interfaces.py 506
2904+        @param segmentnum=int
2905+        @param data=ShareData: For most segments, this data will be 'blocksize'
2906+        bytes in length. The last segment might be shorter.
2907         @return: a Deferred that fires (with None) when the operation completes
2908         """
2909 
2910hunk ./src/allmydata/interfaces.py 512
2911-    def put_crypttext_hashes(hashes=ListOf(Hash)):
2912+    def put_crypttext_hashes(hashes):
2913         """
2914hunk ./src/allmydata/interfaces.py 514
2915+        @param hashes=ListOf(Hash)
2916         @return: a Deferred that fires (with None) when the operation completes
2917         """
2918 
2919hunk ./src/allmydata/interfaces.py 518
2920-    def put_block_hashes(blockhashes=ListOf(Hash)):
2921+    def put_block_hashes(blockhashes):
2922         """
2923hunk ./src/allmydata/interfaces.py 520
2924+        @param blockhashes=ListOf(Hash)
2925         @return: a Deferred that fires (with None) when the operation completes
2926         """
2927 
2928hunk ./src/allmydata/interfaces.py 524
2929-    def put_share_hashes(sharehashes=ListOf(TupleOf(int, Hash))):
2930+    def put_share_hashes(sharehashes):
2931         """
2932hunk ./src/allmydata/interfaces.py 526
2933+        @param sharehashes=ListOf(TupleOf(int, Hash))
2934         @return: a Deferred that fires (with None) when the operation completes
2935         """
2936 
2937hunk ./src/allmydata/interfaces.py 530
2938-    def put_uri_extension(data=URIExtensionData):
2939+    def put_uri_extension(data):
2940         """This block of data contains integrity-checking information (hashes
2941         of plaintext, crypttext, and shares), as well as encoding parameters
2942         that are necessary to recover the data. This is a serialized dict
2943hunk ./src/allmydata/interfaces.py 535
2944         mapping strings to other strings. The hash of this data is kept in
2945-        the URI and verified before any of the data is used. All buckets for
2946-        a given file contain identical copies of this data.
2947+        the URI and verified before any of the data is used. All share
2948+        containers for a given file contain identical copies of this data.
2949 
2950         The serialization format is specified with the following pseudocode:
2951         for k in sorted(dict.keys()):
2952hunk ./src/allmydata/interfaces.py 543
2953             assert re.match(r'^[a-zA-Z_\-]+$', k)
2954             write(k + ':' + netstring(dict[k]))
2955 
2956+        @param data=URIExtensionData
2957         @return: a Deferred that fires (with None) when the operation completes
2958         """
2959 
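The serialization pseudocode in the put_uri_extension docstring above can be made concrete. A minimal sketch, assuming the conventional `<decimal length>:<bytes>,` netstring framing used by allmydata.util.netstring; `serialize_uri_extension` is an illustrative name, not part of the patch:

```python
import re

def netstring(s):
    # frame a byte string as b"<decimal length>:<bytes>,"
    return b"%d:%s," % (len(s), s)

def serialize_uri_extension(d):
    """Serialize a dict of str keys -> byte-string values as described
    above: keys in sorted order, each written as key ':' netstring(value).
    Sorting makes the serialization canonical, so its hash is stable."""
    out = []
    for k in sorted(d.keys()):
        assert re.match(r'^[a-zA-Z_\-]+$', k)
        out.append(k.encode('ascii') + b':' + netstring(d[k]))
    return b''.join(out)
```

For example, `serialize_uri_extension({"size": b"123", "codec_name": b"zfec"})` yields `b"codec_name:4:zfec,size:3:123,"`.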
2960hunk ./src/allmydata/interfaces.py 558
2961 
2962 class IStorageBucketReader(Interface):
2963 
2964-    def get_block_data(blocknum=int, blocksize=int, size=int):
2965+    def get_block_data(blocknum, blocksize, size):
2966         """Most blocks will be the same size. The last block might be shorter
2967         than the others.
2968 
2969hunk ./src/allmydata/interfaces.py 562
2970+        @param blocknum=int
2971+        @param blocksize=int
2972+        @param size=int
2973         @return: ShareData
2974         """
2975 
2976hunk ./src/allmydata/interfaces.py 573
2977         @return: ListOf(Hash)
2978         """
2979 
2980-    def get_block_hashes(at_least_these=SetOf(int)):
2981+    def get_block_hashes(at_least_these=()):
2982         """
2983hunk ./src/allmydata/interfaces.py 575
2984+        @param at_least_these=SetOf(int)
2985         @return: ListOf(Hash)
2986         """
2987 
2988hunk ./src/allmydata/interfaces.py 579
2989-    def get_share_hashes(at_least_these=SetOf(int)):
2990+    def get_share_hashes():
2991         """
2992         @return: ListOf(TupleOf(int, Hash))
2993         """
2994hunk ./src/allmydata/interfaces.py 611
2995         @return: unicode nickname, or None
2996         """
2997 
2998-    # methods moved from IntroducerClient, need review
2999-    def get_all_connections():
3000-        """Return a frozenset of (nodeid, service_name, rref) tuples, one for
3001-        each active connection we've established to a remote service. This is
3002-        mostly useful for unit tests that need to wait until a certain number
3003-        of connections have been made."""
3004-
3005-    def get_all_connectors():
3006-        """Return a dict that maps from (nodeid, service_name) to a
3007-        RemoteServiceConnector instance for all services that we are actively
3008-        trying to connect to. Each RemoteServiceConnector has the following
3009-        public attributes::
3010-
3011-          service_name: the type of service provided, like 'storage'
3012-          announcement_time: when we first heard about this service
3013-          last_connect_time: when we last established a connection
3014-          last_loss_time: when we last lost a connection
3015-
3016-          version: the peer's version, from the most recent connection
3017-          oldest_supported: the peer's oldest supported version, same
3018-
3019-          rref: the RemoteReference, if connected, otherwise None
3020-          remote_host: the IAddress, if connected, otherwise None
3021-
3022-        This method is intended for monitoring interfaces, such as a web page
3023-        that describes connecting and connected peers.
3024-        """
3025-
3026-    def get_all_peerids():
3027-        """Return a frozenset of all peerids to whom we have a connection (to
3028-        one or more services) established. Mostly useful for unit tests."""
3029-
3030-    def get_all_connections_for(service_name):
3031-        """Return a frozenset of (nodeid, service_name, rref) tuples, one
3032-        for each active connection that provides the given SERVICE_NAME."""
3033-
3034-    def get_permuted_peers(service_name, key):
3035-        """Returns an ordered list of (peerid, rref) tuples, selecting from
3036-        the connections that provide SERVICE_NAME, using a hash-based
3037-        permutation keyed by KEY. This randomizes the service list in a
3038-        repeatable way, to distribute load over many peers.
3039-        """
3040-
3041 
3042 class IMutableSlotWriter(Interface):
3043     """
3044hunk ./src/allmydata/interfaces.py 616
3045     The interface for a writer around a mutable slot on a remote server.
3046     """
3047-    def set_checkstring(checkstring, *args):
3048+    def set_checkstring(seqnum_or_checkstring, root_hash=None, salt=None):
3049         """
3050         Set the checkstring that I will pass to the remote server when
3051         writing.
3052hunk ./src/allmydata/interfaces.py 640
3053         Add a block and salt to the share.
3054         """
3055 
3056-    def put_encprivey(encprivkey):
3057+    def put_encprivkey(encprivkey):
3058         """
3059         Add the encrypted private key to the share.
3060         """
3061hunk ./src/allmydata/interfaces.py 645
3062 
3063-    def put_blockhashes(blockhashes=list):
3064+    def put_blockhashes(blockhashes):
3065         """
3066hunk ./src/allmydata/interfaces.py 647
3067+        @param blockhashes=list
3068         Add the block hash tree to the share.
3069         """
3070 
3071hunk ./src/allmydata/interfaces.py 651
3072-    def put_sharehashes(sharehashes=dict):
3073+    def put_sharehashes(sharehashes):
3074         """
3075hunk ./src/allmydata/interfaces.py 653
3076+        @param sharehashes=dict
3077         Add the share hash chain to the share.
3078         """
3079 
3080hunk ./src/allmydata/interfaces.py 739
3081     def get_extension_params():
3082         """Return the extension parameters in the URI"""
3083 
3084-    def set_extension_params():
3085+    def set_extension_params(params):
3086         """Set the extension parameters that should be in the URI"""
3087 
3088 class IDirectoryURI(Interface):
3089hunk ./src/allmydata/interfaces.py 879
3090         writer-visible data using this writekey.
3091         """
3092 
3093-    # TODO: Can this be overwrite instead of replace?
3094-    def replace(new_contents):
3095-        """Replace the contents of the mutable file, provided that no other
3096+    def overwrite(new_contents):
3097+        """Overwrite the contents of the mutable file, provided that no other
3098         node has published (or is attempting to publish, concurrently) a
3099         newer version of the file than this one.
3100 
3101hunk ./src/allmydata/interfaces.py 1346
3102         is empty, the metadata will be an empty dictionary.
3103         """
3104 
3105-    def set_uri(name, writecap, readcap=None, metadata=None, overwrite=True):
3106+    def set_uri(name, writecap, readcap, metadata=None, overwrite=True):
3107         """I add a child (by writecap+readcap) at the specific name. I return
3108         a Deferred that fires when the operation finishes. If overwrite= is
3109         True, I will replace any existing child of the same name, otherwise
3110hunk ./src/allmydata/interfaces.py 1745
3111     Block Hash, and the encoding parameters, both of which must be included
3112     in the URI.
3113 
3114-    I do not choose shareholders, that is left to the IUploader. I must be
3115-    given a dict of RemoteReferences to storage buckets that are ready and
3116-    willing to receive data.
3117+    I do not choose shareholders, that is left to the IUploader.
3118     """
3119 
3120     def set_size(size):
3121hunk ./src/allmydata/interfaces.py 1752
3122         """Specify the number of bytes that will be encoded. This must be
3123         peformed before get_serialized_params() can be called.
3124         """
3125+
3126     def set_params(params):
3127         """Override the default encoding parameters. 'params' is a tuple of
3128         (k,d,n), where 'k' is the number of required shares, 'd' is the
3129hunk ./src/allmydata/interfaces.py 1848
3130     download, validate, decode, and decrypt data from them, writing the
3131     results to an output file.
3132 
3133-    I do not locate the shareholders, that is left to the IDownloader. I must
3134-    be given a dict of RemoteReferences to storage buckets that are ready to
3135-    send data.
3136+    I do not locate the shareholders, that is left to the IDownloader.
3137     """
3138 
3139     def setup(outfile):
3140hunk ./src/allmydata/interfaces.py 1950
3141         resuming an interrupted upload (where we need to compute the
3142         plaintext hashes, but don't need the redundant encrypted data)."""
3143 
3144-    def get_plaintext_hashtree_leaves(first, last, num_segments):
3145-        """OBSOLETE; Get the leaf nodes of a merkle hash tree over the
3146-        plaintext segments, i.e. get the tagged hashes of the given segments.
3147-        The segment size is expected to be generated by the
3148-        IEncryptedUploadable before any plaintext is read or ciphertext
3149-        produced, so that the segment hashes can be generated with only a
3150-        single pass.
3151-
3152-        This returns a Deferred that fires with a sequence of hashes, using:
3153-
3154-         tuple(segment_hashes[first:last])
3155-
3156-        'num_segments' is used to assert that the number of segments that the
3157-        IEncryptedUploadable handled matches the number of segments that the
3158-        encoder was expecting.
3159-
3160-        This method must not be called until the final byte has been read
3161-        from read_encrypted(). Once this method is called, read_encrypted()
3162-        can never be called again.
3163-        """
3164-
3165-    def get_plaintext_hash():
3166-        """OBSOLETE; Get the hash of the whole plaintext.
3167-
3168-        This returns a Deferred that fires with a tagged SHA-256 hash of the
3169-        whole plaintext, obtained from hashutil.plaintext_hash(data).
3170-        """
3171-
3172     def close():
3173         """Just like IUploadable.close()."""
3174 
3175hunk ./src/allmydata/interfaces.py 2144
3176         returns a Deferred that fires with an IUploadResults instance, from
3177         which the URI of the file can be obtained as results.uri ."""
3178 
3179-    def upload_ssk(write_capability, new_version, uploadable):
3180-        """TODO: how should this work?"""
3181-
3182 class ICheckable(Interface):
3183     def check(monitor, verify=False, add_lease=False):
3184         """Check up on my health, optionally repairing any problems.
3185hunk ./src/allmydata/interfaces.py 2505
3186 
3187 class IRepairResults(Interface):
3188     """I contain the results of a repair operation."""
3189-    def get_successful(self):
3190+    def get_successful():
3191         """Returns a boolean: True if the repair made the file healthy, False
3192         if not. Repair failure generally indicates a file that has been
3193         damaged beyond repair."""
3194hunk ./src/allmydata/interfaces.py 2577
3195     Tahoe process will typically have a single NodeMaker, but unit tests may
3196     create simplified/mocked forms for testing purposes.
3197     """
3198-    def create_from_cap(writecap, readcap=None, **kwargs):
3199+    def create_from_cap(writecap, readcap=None, deep_immutable=False, name=u"<unknown name>"):
3200         """I create an IFilesystemNode from the given writecap/readcap. I can
3201         only provide nodes for existing file/directory objects: use my other
3202         methods to create new objects. I return synchronously."""
3203hunk ./src/allmydata/monitor.py 30
3204 
3205     # the following methods are provided for the operation code
3206 
3207-    def is_cancelled(self):
3208+    def is_cancelled():
3209         """Returns True if the operation has been cancelled. If True,
3210         operation code should stop creating new work, and attempt to stop any
3211         work already in progress."""
3212hunk ./src/allmydata/monitor.py 35
3213 
3214-    def raise_if_cancelled(self):
3215+    def raise_if_cancelled():
3216         """Raise OperationCancelledError if the operation has been cancelled.
3217         Operation code that has a robust error-handling path can simply call
3218         this periodically."""
3219hunk ./src/allmydata/monitor.py 40
3220 
3221-    def set_status(self, status):
3222+    def set_status(status):
3223         """Sets the Monitor's 'status' object to an arbitrary value.
3224         Different operations will store different sorts of status information
3225         here. Operation code should use get+modify+set sequences to update
3226hunk ./src/allmydata/monitor.py 46
3227         this."""
3228 
3229-    def get_status(self):
3230+    def get_status():
3231         """Return the status object. If the operation failed, this will be a
3232         Failure instance."""
3233 
3234hunk ./src/allmydata/monitor.py 50
3235-    def finish(self, status):
3236+    def finish(status):
3237         """Call this when the operation is done, successful or not. The
3238         Monitor's lifetime is influenced by the completion of the operation
3239         it is monitoring. The Monitor's 'status' value will be set with the
3240hunk ./src/allmydata/monitor.py 63
3241 
3242     # the following methods are provided for the initiator of the operation
3243 
3244-    def is_finished(self):
3245+    def is_finished():
3246         """Return a boolean, True if the operation is done (whether
3247         successful or failed), False if it is still running."""
3248 
3249hunk ./src/allmydata/monitor.py 67
3250-    def when_done(self):
3251+    def when_done():
3252         """Return a Deferred that fires when the operation is complete. It
3253         will fire with the operation status, the same value as returned by
3254         get_status()."""
3255hunk ./src/allmydata/monitor.py 72
3256 
3257-    def cancel(self):
3258+    def cancel():
3259         """Cancel the operation as soon as possible. is_cancelled() will
3260         start returning True after this is called."""
3261 
3262hunk ./src/allmydata/mutable/filenode.py 753
3263         self._writekey = writekey
3264         self._serializer = defer.succeed(None)
3265 
3266-
3267     def get_sequence_number(self):
3268         """
3269         Get the sequence number of the mutable version that I represent.
3270hunk ./src/allmydata/mutable/filenode.py 759
3271         """
3272         return self._version[0] # verinfo[0] == the sequence number
3273 
3274+    def get_servermap(self):
3275+        return self._servermap
3276 
3277hunk ./src/allmydata/mutable/filenode.py 762
3278-    # TODO: Terminology?
3279     def get_writekey(self):
3280         """
3281         I return a writekey or None if I don't have a writekey.
3282hunk ./src/allmydata/mutable/filenode.py 768
3283         """
3284         return self._writekey
3285 
3286-
3287     def set_downloader_hints(self, hints):
3288         """
3289         I set the downloader hints.
3290hunk ./src/allmydata/mutable/filenode.py 776
3291 
3292         self._downloader_hints = hints
3293 
3294-
3295     def get_downloader_hints(self):
3296         """
3297         I return the downloader hints.
3298hunk ./src/allmydata/mutable/filenode.py 782
3299         """
3300         return self._downloader_hints
3301 
3302-
3303     def overwrite(self, new_contents):
3304         """
3305         I overwrite the contents of this mutable file version with the
3306hunk ./src/allmydata/mutable/filenode.py 791
3307 
3308         return self._do_serialized(self._overwrite, new_contents)
3309 
3310-
3311     def _overwrite(self, new_contents):
3312         assert IMutableUploadable.providedBy(new_contents)
3313         assert self._servermap.last_update_mode == MODE_WRITE
3314hunk ./src/allmydata/mutable/filenode.py 797
3315 
3316         return self._upload(new_contents)
3317 
3318-
3319     def modify(self, modifier, backoffer=None):
3320         """I use a modifier callback to apply a change to the mutable file.
3321         I implement the following pseudocode::
3322hunk ./src/allmydata/mutable/filenode.py 841
3323 
3324         return self._do_serialized(self._modify, modifier, backoffer)
3325 
3326-
3327     def _modify(self, modifier, backoffer):
3328         if backoffer is None:
3329             backoffer = BackoffAgent().delay
3330hunk ./src/allmydata/mutable/filenode.py 846
3331         return self._modify_and_retry(modifier, backoffer, True)
3332 
3333-
3334     def _modify_and_retry(self, modifier, backoffer, first_time):
3335         """
3336         I try to apply modifier to the contents of this version of the
3337hunk ./src/allmydata/mutable/filenode.py 878
3338         d.addErrback(_retry)
3339         return d
3340 
3341-
3342     def _modify_once(self, modifier, first_time):
3343         """
3344         I attempt to apply a modifier to the contents of the mutable
3345hunk ./src/allmydata/mutable/filenode.py 913
3346         d.addCallback(_apply)
3347         return d
3348 
3349-
3350     def is_readonly(self):
3351         """
3352         I return True if this MutableFileVersion provides no write
3353hunk ./src/allmydata/mutable/filenode.py 921
3354         """
3355         return self._writekey is None
3356 
3357-
3358     def is_mutable(self):
3359         """
3360         I return True, since mutable files are always mutable by
3361hunk ./src/allmydata/mutable/filenode.py 928
3362         """
3363         return True
3364 
3365-
3366     def get_storage_index(self):
3367         """
3368         I return the storage index of the reference that I encapsulate.
3369hunk ./src/allmydata/mutable/filenode.py 934
3370         """
3371         return self._storage_index
3372 
3373-
3374     def get_size(self):
3375         """
3376         I return the length, in bytes, of this readable object.
3377hunk ./src/allmydata/mutable/filenode.py 940
3378         """
3379         return self._servermap.size_of_version(self._version)
3380 
3381-
3382     def download_to_data(self, fetch_privkey=False):
3383         """
3384         I return a Deferred that fires with the contents of this
3385hunk ./src/allmydata/mutable/filenode.py 951
3386         d.addCallback(lambda mc: "".join(mc.chunks))
3387         return d
3388 
3389-
3390     def _try_to_download_data(self):
3391         """
3392         I am an unserialized cousin of download_to_data; I am called
3393hunk ./src/allmydata/mutable/filenode.py 963
3394         d.addCallback(lambda mc: "".join(mc.chunks))
3395         return d
3396 
3397-
3398     def read(self, consumer, offset=0, size=None, fetch_privkey=False):
3399         """
3400         I read a portion (possibly all) of the mutable file that I
3401hunk ./src/allmydata/mutable/filenode.py 971
3402         return self._do_serialized(self._read, consumer, offset, size,
3403                                    fetch_privkey)
3404 
3405-
3406     def _read(self, consumer, offset=0, size=None, fetch_privkey=False):
3407         """
3408         I am the serialized companion of read.
3409hunk ./src/allmydata/mutable/filenode.py 981
3410         d = r.download(consumer, offset, size)
3411         return d
3412 
3413-
3414     def _do_serialized(self, cb, *args, **kwargs):
3415         # note: to avoid deadlock, this callable is *not* allowed to invoke
3416         # other serialized methods within this (or any other)
3417hunk ./src/allmydata/mutable/filenode.py 999
3418         self._serializer.addErrback(log.err)
3419         return d
3420 
3421-
3422     def _upload(self, new_contents):
3423         #assert self._pubkey, "update_servermap must be called before publish"
3424         p = Publish(self._node, self._storage_broker, self._servermap)
3425hunk ./src/allmydata/mutable/filenode.py 1009
3426         d.addCallback(self._did_upload, new_contents.get_size())
3427         return d
3428 
3429-
3430     def _did_upload(self, res, size):
3431         self._most_recent_size = size
3432         return res
3433hunk ./src/allmydata/mutable/filenode.py 1029
3434         """
3435         return self._do_serialized(self._update, data, offset)
3436 
3437-
3438     def _update(self, data, offset):
3439         """
3440         I update the mutable file version represented by this particular
3441hunk ./src/allmydata/mutable/filenode.py 1058
3442         d.addCallback(self._build_uploadable_and_finish, data, offset)
3443         return d
3444 
3445-
3446     def _do_modify_update(self, data, offset):
3447         """
3448         I perform a file update by modifying the contents of the file
3449hunk ./src/allmydata/mutable/filenode.py 1073
3450             return new
3451         return self._modify(m, None)
3452 
3453-
3454     def _do_update_update(self, data, offset):
3455         """
3456         I start the Servermap update that gets us the data we need to
3457hunk ./src/allmydata/mutable/filenode.py 1108
3458         return self._update_servermap(update_range=(start_segment,
3459                                                     end_segment))
3460 
3461-
3462     def _decode_and_decrypt_segments(self, ignored, data, offset):
3463         """
3464         After the servermap update, I take the encrypted and encoded
3465hunk ./src/allmydata/mutable/filenode.py 1148
3466         d3 = defer.succeed(blockhashes)
3467         return deferredutil.gatherResults([d1, d2, d3])
3468 
3469-
3470     def _build_uploadable_and_finish(self, segments_and_bht, data, offset):
3471         """
3472         After the process has the plaintext segments, I build the
3473hunk ./src/allmydata/mutable/filenode.py 1163
3474         p = Publish(self._node, self._storage_broker, self._servermap)
3475         return p.update(u, offset, segments_and_bht[2], self._version)
3476 
3477-
3478     def _update_servermap(self, mode=MODE_WRITE, update_range=None):
3479         """
3480         I update the servermap. I return a Deferred that fires when the
3481hunk ./src/allmydata/storage/common.py 1
3482-
3483-import os.path
3484 from allmydata.util import base32
3485 
3486 class DataTooLargeError(Exception):
3487hunk ./src/allmydata/storage/common.py 5
3488     pass
3489+
3490 class UnknownMutableContainerVersionError(Exception):
3491     pass
3492hunk ./src/allmydata/storage/common.py 8
3493+
3494 class UnknownImmutableContainerVersionError(Exception):
3495     pass
3496 
3497hunk ./src/allmydata/storage/common.py 18
3498 
3499 def si_a2b(ascii_storageindex):
3500     return base32.a2b(ascii_storageindex)
3501-
3502-def storage_index_to_dir(storageindex):
3503-    sia = si_b2a(storageindex)
3504-    return os.path.join(sia[:2], sia)
3505hunk ./src/allmydata/storage/crawler.py 2
3506 
3507-import os, time, struct
3508+import time, struct
3509 import cPickle as pickle
3510 from twisted.internet import reactor
3511 from twisted.application import service
3512hunk ./src/allmydata/storage/crawler.py 6
3513+
3514+from allmydata.util.assertutil import precondition
3515+from allmydata.interfaces import IStorageBackend
3516 from allmydata.storage.common import si_b2a
3517hunk ./src/allmydata/storage/crawler.py 10
3518-from allmydata.util import fileutil
3519+
3520 
3521 class TimeSliceExceeded(Exception):
3522     pass
3523hunk ./src/allmydata/storage/crawler.py 15
3524 
3525+
3526 class ShareCrawler(service.MultiService):
3527hunk ./src/allmydata/storage/crawler.py 17
3528-    """A ShareCrawler subclass is attached to a StorageServer, and
3529-    periodically walks all of its shares, processing each one in some
3530-    fashion. This crawl is rate-limited, to reduce the IO burden on the host,
3531-    since large servers can easily have a terabyte of shares, in several
3532-    million files, which can take hours or days to read.
3533+    """
3534+    An instance of a subclass of ShareCrawler is attached to a storage
3535+    backend, and periodically walks the backend's shares, processing them
3536+    in some fashion. This crawl is rate-limited to reduce the I/O burden on
3537+    the host, since large servers can easily have a terabyte of shares in
3538+    several million files, which can take hours or days to read.
3539 
3540     Once the crawler starts a cycle, it will proceed at a rate limited by the
3541     allowed_cpu_percentage= and cpu_slice= parameters: yielding the reactor
3542hunk ./src/allmydata/storage/crawler.py 33
3543     long enough to ensure that 'minimum_cycle_time' elapses between the start
3544     of two consecutive cycles.
3545 
3546-    We assume that the normal upload/download/get_buckets traffic of a tahoe
3547+    We assume that the normal upload/download/DYHB traffic of a Tahoe-LAFS
3548     grid will cause the prefixdir contents to be mostly cached in the kernel,
3549hunk ./src/allmydata/storage/crawler.py 35
3550-    or that the number of buckets in each prefixdir will be small enough to
3551-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
3552-    buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
3553+    or that the number of sharesets in each prefixdir will be small enough to
3554+    load quickly. A 1TB allmydata.com server was measured to have 2.56 million
3555+    sharesets, spread into the 1024 prefixdirs, with about 2500 sharesets per
3556     prefix. On this server, each prefixdir took 130ms-200ms to list the first
3557     time, and 17ms to list the second time.
3558 
3559hunk ./src/allmydata/storage/crawler.py 41
3560-    To use a crawler, create a subclass which implements the process_bucket()
3561-    method. It will be called with a prefixdir and a base32 storage index
3562-    string. process_bucket() must run synchronously. Any keys added to
3563-    self.state will be preserved. Override add_initial_state() to set up
3564-    initial state keys. Override finished_cycle() to perform additional
3565-    processing when the cycle is complete. Any status that the crawler
3566-    produces should be put in the self.state dictionary. Status renderers
3567-    (like a web page which describes the accomplishments of your crawler)
3568-    will use crawler.get_state() to retrieve this dictionary; they can
3569-    present the contents as they see fit.
3570+    To implement a crawler, create a subclass that implements the
3571+    process_shareset() method. It will be called with a storage index prefix
3572+    and an object providing the IShareSet interface. process_shareset() must
3573+    run synchronously. Any keys added to self.state will be preserved. Override
3574+    add_initial_state() to set up initial state keys. Override
3575+    finished_cycle() to perform additional processing when the cycle is
3576+    complete. Any status that the crawler produces should be put in the
3577+    self.state dictionary. Status renderers (like a web page describing the
3578+    accomplishments of your crawler) will use crawler.get_state() to retrieve
3579+    this dictionary; they can present the contents as they see fit.
3580 
3581hunk ./src/allmydata/storage/crawler.py 52
3582-    Then create an instance, with a reference to a StorageServer and a
3583-    filename where it can store persistent state. The statefile is used to
3584-    keep track of how far around the ring the process has travelled, as well
3585-    as timing history to allow the pace to be predicted and controlled. The
3586-    statefile will be updated and written to disk after each time slice (just
3587-    before the crawler yields to the reactor), and also after each cycle is
3588-    finished, and also when stopService() is called. Note that this means
3589-    that a crawler which is interrupted with SIGKILL while it is in the
3590-    middle of a time slice will lose progress: the next time the node is
3591-    started, the crawler will repeat some unknown amount of work.
3592+    Then create an instance, with a reference to a backend object providing
3593+    the IStorageBackend interface, and a filename where it can store
3594+    persistent state. The statefile is used to keep track of how far around
3595+    the ring the process has travelled, as well as timing history to allow
3596+    the pace to be predicted and controlled. The statefile will be updated
3597+    and written to disk after each time slice (just before the crawler yields
3598+    to the reactor), and also after each cycle is finished, and also when
3599+    stopService() is called. Note that this means that a crawler that is
3600+    interrupted with SIGKILL while it is in the middle of a time slice will
3601+    lose progress: the next time the node is started, the crawler will repeat
3602+    some unknown amount of work.
3603 
3604     The crawler instance must be started with startService() before it will
3605hunk ./src/allmydata/storage/crawler.py 65
3606-    do any work. To make it stop doing work, call stopService().
3607+    do any work. To make it stop doing work, call stopService(). A crawler
3608+    is usually a child service of a StorageServer, although it should not
3609+    depend on that.
3610+
3611+    For historical reasons, some dictionary key names use the term "bucket"
3612+    for what is now preferably called a "shareset" (the set of shares that a
3613+    server holds under a given storage index).
3614     """
3615 
3616     slow_start = 300 # don't start crawling for 5 minutes after startup
3617hunk ./src/allmydata/storage/crawler.py 80
3618     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
3619     minimum_cycle_time = 300 # don't run a cycle faster than this
3620 
3621-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
3622+    def __init__(self, backend, statefp, allowed_cpu_percentage=None):
3623+        precondition(IStorageBackend.providedBy(backend), backend)
3624         service.MultiService.__init__(self)
3625hunk ./src/allmydata/storage/crawler.py 83
3626+        self.backend = backend
3627+        self.statefp = statefp
3628         if allowed_cpu_percentage is not None:
3629             self.allowed_cpu_percentage = allowed_cpu_percentage
3630hunk ./src/allmydata/storage/crawler.py 87
3631-        self.server = server
3632-        self.sharedir = server.sharedir
3633-        self.statefile = statefile
3634         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
3635                          for i in range(2**10)]
3636         self.prefixes.sort()
3637hunk ./src/allmydata/storage/crawler.py 91
3638         self.timer = None
3639-        self.bucket_cache = (None, [])
3640+        self.shareset_cache = (None, [])
3641         self.current_sleep_time = None
3642         self.next_wake_time = None
3643         self.last_prefix_finished_time = None
3644hunk ./src/allmydata/storage/crawler.py 154
3645                 left = len(self.prefixes) - self.last_complete_prefix_index
3646                 remaining = left * self.last_prefix_elapsed_time
3647                 # TODO: remainder of this prefix: we need to estimate the
3648-                # per-bucket time, probably by measuring the time spent on
3649-                # this prefix so far, divided by the number of buckets we've
3650+                # per-shareset time, probably by measuring the time spent on
3651+                # this prefix so far, divided by the number of sharesets we've
3652                 # processed.
3653             d["estimated-cycle-complete-time-left"] = remaining
3654             # it's possible to call get_progress() from inside a crawler's
3655hunk ./src/allmydata/storage/crawler.py 175
3656         state dictionary.
3657 
3658         If we are not currently sleeping (i.e. get_state() was called from
3659-        inside the process_prefixdir, process_bucket, or finished_cycle()
3660+        inside the process_prefixdir, process_shareset, or finished_cycle()
3661         methods, or if startService has not yet been called on this crawler),
3662         these two keys will be None.
3663 
3664hunk ./src/allmydata/storage/crawler.py 188
3665     def load_state(self):
3666         # we use this to store state for both the crawler's internals and
3667         # anything the subclass-specific code needs. The state is stored
3668-        # after each bucket is processed, after each prefixdir is processed,
3669+        # after each shareset is processed, after each prefixdir is processed,
3670         # and after a cycle is complete. The internal keys we use are:
3671         #  ["version"]: int, always 1
3672         #  ["last-cycle-finished"]: int, or None if we have not yet finished
3673hunk ./src/allmydata/storage/crawler.py 202
3674         #                            are sleeping between cycles, or if we
3675         #                            have not yet finished any prefixdir since
3676         #                            a cycle was started
3677-        #  ["last-complete-bucket"]: str, base32 storage index bucket name
3678-        #                            of the last bucket to be processed, or
3679-        #                            None if we are sleeping between cycles
3680+        #  ["last-complete-bucket"]: str, base32 storage index of the last
3681+        #                            shareset to be processed, or None if we
3682+        #                            are sleeping between cycles
3683         try:
3684hunk ./src/allmydata/storage/crawler.py 206
3685-            f = open(self.statefile, "rb")
3686-            state = pickle.load(f)
3687-            f.close()
3688+            state = pickle.loads(self.statefp.getContent())
3689         except EnvironmentError:
3690             state = {"version": 1,
3691                      "last-cycle-finished": None,
3692hunk ./src/allmydata/storage/crawler.py 242
3693         else:
3694             last_complete_prefix = self.prefixes[lcpi]
3695         self.state["last-complete-prefix"] = last_complete_prefix
3696-        tmpfile = self.statefile + ".tmp"
3697-        f = open(tmpfile, "wb")
3698-        pickle.dump(self.state, f)
3699-        f.close()
3700-        fileutil.move_into_place(tmpfile, self.statefile)
3701+        self.statefp.setContent(pickle.dumps(self.state))
3702 
3703     def startService(self):
3704         # arrange things to look like we were just sleeping, so
3705hunk ./src/allmydata/storage/crawler.py 284
3706         sleep_time = (this_slice / self.allowed_cpu_percentage) - this_slice
3707         # if the math gets weird, or a timequake happens, don't sleep
3708         # forever. Note that this means that, while a cycle is running, we
3709-        # will process at least one bucket every 5 minutes, no matter how
3710-        # long that bucket takes.
3711+        # will process at least one shareset every 5 minutes, no matter how
3712+        # long that shareset takes.
3713         sleep_time = max(0.0, min(sleep_time, 299))
3714         if finished_cycle:
3715             # how long should we sleep between cycles? Don't run faster than
3716hunk ./src/allmydata/storage/crawler.py 315
3717         for i in range(self.last_complete_prefix_index+1, len(self.prefixes)):
3718             # if we want to yield earlier, just raise TimeSliceExceeded()
3719             prefix = self.prefixes[i]
3720-            prefixdir = os.path.join(self.sharedir, prefix)
3721-            if i == self.bucket_cache[0]:
3722-                buckets = self.bucket_cache[1]
3723+            if i == self.shareset_cache[0]:
3724+                sharesets = self.shareset_cache[1]
3725             else:
3726hunk ./src/allmydata/storage/crawler.py 318
3727-                try:
3728-                    buckets = os.listdir(prefixdir)
3729-                    buckets.sort()
3730-                except EnvironmentError:
3731-                    buckets = []
3732-                self.bucket_cache = (i, buckets)
3733-            self.process_prefixdir(cycle, prefix, prefixdir,
3734-                                   buckets, start_slice)
3735+                sharesets = self.backend.get_sharesets_for_prefix(prefix)
3736+                self.shareset_cache = (i, sharesets)
3737+            self.process_prefixdir(cycle, prefix, sharesets, start_slice)
3738             self.last_complete_prefix_index = i
3739 
3740             now = time.time()
3741hunk ./src/allmydata/storage/crawler.py 345
3742         self.finished_cycle(cycle)
3743         self.save_state()
3744 
3745-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
3746-        """This gets a list of bucket names (i.e. storage index strings,
3747+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
3748+        """
3749+        This gets a list of shareset names (i.e. storage index strings,
3750         base32-encoded) in sorted order.
3751 
3752         You can override this if your crawler doesn't care about the actual
3753hunk ./src/allmydata/storage/crawler.py 352
3754         shares, for example a crawler which merely keeps track of how many
3755-        buckets are being managed by this server.
3756+        sharesets are being managed by this server.
3757 
3758hunk ./src/allmydata/storage/crawler.py 354
3759-        Subclasses which *do* care about actual bucket should leave this
3760-        method along, and implement process_bucket() instead.
3761+        Subclasses that *do* care about the actual sharesets should leave this
3762+        method alone, and implement process_shareset() instead.
3763         """
3764 
3765hunk ./src/allmydata/storage/crawler.py 358
3766-        for bucket in buckets:
3767-            if bucket <= self.state["last-complete-bucket"]:
3768+        for shareset in sharesets:
3769+            base32si = shareset.get_storage_index_string()
3770+            if base32si <= self.state["last-complete-bucket"]:
3771                 continue
3772hunk ./src/allmydata/storage/crawler.py 362
3773-            self.process_bucket(cycle, prefix, prefixdir, bucket)
3774-            self.state["last-complete-bucket"] = bucket
3775+            self.process_shareset(cycle, prefix, shareset)
3776+            self.state["last-complete-bucket"] = base32si
3777             if time.time() >= start_slice + self.cpu_slice:
3778                 raise TimeSliceExceeded()
3779 
3780hunk ./src/allmydata/storage/crawler.py 370
3781     # the remaining methods are explictly for subclasses to implement.
3782 
3783     def started_cycle(self, cycle):
3784-        """Notify a subclass that the crawler is about to start a cycle.
3785+        """
3786+        Notify a subclass that the crawler is about to start a cycle.
3787 
3788         This method is for subclasses to override. No upcall is necessary.
3789         """
3790hunk ./src/allmydata/storage/crawler.py 377
3791         pass
3792 
3793-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
3794-        """Examine a single bucket. Subclasses should do whatever they want
3795+    def process_shareset(self, cycle, prefix, shareset):
3796+        """
3797+        Examine a single shareset. Subclasses should do whatever they want
3798         to do to the shares therein, then update self.state as necessary.
3799 
3800         If the crawler is never interrupted by SIGKILL, this method will be
3801hunk ./src/allmydata/storage/crawler.py 383
3802-        called exactly once per share (per cycle). If it *is* interrupted,
3803+        called exactly once per shareset (per cycle). If it *is* interrupted,
3804         then the next time the node is started, some amount of work will be
3805         duplicated, according to when self.save_state() was last called. By
3806         default, save_state() is called at the end of each timeslice, and
3807hunk ./src/allmydata/storage/crawler.py 391
3808 
3809         To reduce the chance of duplicate work (i.e. to avoid adding multiple
3810         records to a database), you can call save_state() at the end of your
3811-        process_bucket() method. This will reduce the maximum duplicated work
3812-        to one bucket per SIGKILL. It will also add overhead, probably 1-20ms
3813-        per bucket (and some disk writes), which will count against your
3814-        allowed_cpu_percentage, and which may be considerable if
3815-        process_bucket() runs quickly.
3816+        process_shareset() method. This will reduce the maximum duplicated
3817+        work to one shareset per SIGKILL. It will also add overhead, probably
3818+        1-20ms per shareset (and some disk writes), which will count against
3819+        your allowed_cpu_percentage, and which may be considerable if
3820+        process_shareset() runs quickly.
3821 
3822         This method is for subclasses to override. No upcall is necessary.
3823         """
3824hunk ./src/allmydata/storage/crawler.py 402
3825         pass
3826 
3827     def finished_prefix(self, cycle, prefix):
3828-        """Notify a subclass that the crawler has just finished processing a
3829-        prefix directory (all buckets with the same two-character/10bit
3830+        """
3831+        Notify a subclass that the crawler has just finished processing a
3832+        prefix directory (all sharesets with the same two-character/10-bit
3833         prefix). To impose a limit on how much work might be duplicated by a
3834         SIGKILL that occurs during a timeslice, you can call
3835         self.save_state() here, but be aware that it may represent a
3836hunk ./src/allmydata/storage/crawler.py 415
3837         pass
3838 
3839     def finished_cycle(self, cycle):
3840-        """Notify subclass that a cycle (one complete traversal of all
3841+        """
3842+        Notify subclass that a cycle (one complete traversal of all
3843         prefixdirs) has just finished. 'cycle' is the number of the cycle
3844         that just finished. This method should perform summary work and
3845         update self.state to publish information to status displays.
3846hunk ./src/allmydata/storage/crawler.py 433
3847         pass
3848 
3849     def yielding(self, sleep_time):
3850-        """The crawler is about to sleep for 'sleep_time' seconds. This
3851+        """
3852+        The crawler is about to sleep for 'sleep_time' seconds. This
3853         method is mostly for the convenience of unit tests.
3854 
3855         This method is for subclasses to override. No upcall is necessary.
3856hunk ./src/allmydata/storage/crawler.py 443
3857 
3858 
3859 class BucketCountingCrawler(ShareCrawler):
3860-    """I keep track of how many buckets are being managed by this server.
3861-    This is equivalent to the number of distributed files and directories for
3862-    which I am providing storage. The actual number of files+directories in
3863-    the full grid is probably higher (especially when there are more servers
3864-    than 'N', the number of generated shares), because some files+directories
3865-    will have shares on other servers instead of me. Also note that the
3866-    number of buckets will differ from the number of shares in small grids,
3867-    when more than one share is placed on a single server.
3868+    """
3869+    I keep track of how many sharesets, each corresponding to a storage index,
3870+    are being managed by this server. This is equivalent to the number of
3871+    distributed files and directories for which I am providing storage. The
3872+    actual number of files and directories in the full grid is probably higher
3873+    (especially when there are more servers than 'N', the number of generated
3874+    shares), because some files and directories will have shares on other
3875+    servers instead of me. Also note that the number of sharesets will differ
3876+    from the number of shares in small grids, when more than one share is
3877+    placed on a single server.
3878     """
3879 
3880     minimum_cycle_time = 60*60 # we don't need this more than once an hour
3881hunk ./src/allmydata/storage/crawler.py 457
3882 
3883-    def __init__(self, server, statefile, num_sample_prefixes=1):
3884-        ShareCrawler.__init__(self, server, statefile)
3885+    def __init__(self, backend, statefp, num_sample_prefixes=1):
3886+        ShareCrawler.__init__(self, backend, statefp)
3887         self.num_sample_prefixes = num_sample_prefixes
3888 
3889     def add_initial_state(self):
3890hunk ./src/allmydata/storage/crawler.py 471
3891         self.state.setdefault("last-complete-bucket-count", None)
3892         self.state.setdefault("storage-index-samples", {})
3893 
3894-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
3895+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
3896         # we override process_prefixdir() because we don't want to look at
3897hunk ./src/allmydata/storage/crawler.py 473
3898-        # the individual buckets. We'll save state after each one. On my
3899+        # the individual sharesets. We'll save state after each one. On my
3900         # laptop, a mostly-empty storage server can process about 70
3901         # prefixdirs in a 1.0s slice.
3902         if cycle not in self.state["bucket-counts"]:
3903hunk ./src/allmydata/storage/crawler.py 478
3904             self.state["bucket-counts"][cycle] = {}
3905-        self.state["bucket-counts"][cycle][prefix] = len(buckets)
3906+        self.state["bucket-counts"][cycle][prefix] = len(sharesets)
3907         if prefix in self.prefixes[:self.num_sample_prefixes]:
3908hunk ./src/allmydata/storage/crawler.py 480
3909-            self.state["storage-index-samples"][prefix] = (cycle, buckets)
3910+            sample = [s.get_storage_index_string() for s in sharesets]
3911+            self.state["storage-index-samples"][prefix] = (cycle, sample)
3911 
3912     def finished_cycle(self, cycle):
3913         last_counts = self.state["bucket-counts"].get(cycle, [])
3914hunk ./src/allmydata/storage/crawler.py 486
3915         if len(last_counts) == len(self.prefixes):
3916             # great, we have a whole cycle.
3917-            num_buckets = sum(last_counts.values())
3918-            self.state["last-complete-bucket-count"] = num_buckets
3919+            num_sharesets = sum(last_counts.values())
3920+            self.state["last-complete-bucket-count"] = num_sharesets
3921             # get rid of old counts
3922             for old_cycle in list(self.state["bucket-counts"].keys()):
3923                 if old_cycle != cycle:
3924hunk ./src/allmydata/storage/crawler.py 494
3925                     del self.state["bucket-counts"][old_cycle]
3926         # get rid of old samples too
3927         for prefix in list(self.state["storage-index-samples"].keys()):
3928-            old_cycle,buckets = self.state["storage-index-samples"][prefix]
3929+            old_cycle, sharesets = self.state["storage-index-samples"][prefix]
3930             if old_cycle != cycle:
3931                 del self.state["storage-index-samples"][prefix]
3932hunk ./src/allmydata/storage/crawler.py 497
3933-
3934hunk ./src/allmydata/storage/expirer.py 1
3935-import time, os, pickle, struct
3936+
3937+import time, pickle, struct
3938+from twisted.python import log as twlog
3939+
3940 from allmydata.storage.crawler import ShareCrawler
3941hunk ./src/allmydata/storage/expirer.py 6
3942-from allmydata.storage.shares import get_share_file
3943-from allmydata.storage.common import UnknownMutableContainerVersionError, \
3944+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
3945      UnknownImmutableContainerVersionError
3946hunk ./src/allmydata/storage/expirer.py 8
3947-from twisted.python import log as twlog
3948+
3949 
3950 class LeaseCheckingCrawler(ShareCrawler):
3951     """I examine the leases on all shares, determining which are still valid
3952hunk ./src/allmydata/storage/expirer.py 17
3953     removed.
3954 
3955     I collect statistics on the leases and make these available to a web
3956-    status page, including::
3957+    status page, including:
3958 
3959     Space recovered during this cycle-so-far:
3960      actual (only if expiration_enabled=True):
3961hunk ./src/allmydata/storage/expirer.py 21
3962-      num-buckets, num-shares, sum of share sizes, real disk usage
3963+      num-storage-indices, num-shares, sum of share sizes, real disk usage
3964       ('real disk usage' means we use stat(fn).st_blocks*512 and include any
3965        space used by the directory)
3966      what it would have been with the original lease expiration time
3967hunk ./src/allmydata/storage/expirer.py 32
3968 
3969     Space recovered during the last 10 cycles  <-- saved in separate pickle
3970 
3971-    Shares/buckets examined:
3972+    Shares/storage-indices examined:
3973      this cycle-so-far
3974      prediction of rest of cycle
3975      during last 10 cycles <-- separate pickle
3976hunk ./src/allmydata/storage/expirer.py 42
3977     Histogram of leases-per-share:
3978      this-cycle-to-date
3979      last 10 cycles <-- separate pickle
3980-    Histogram of lease ages, buckets = 1day
3981+    Histogram of lease ages, in 1-day bins
3982      cycle-to-date
3983      last 10 cycles <-- separate pickle
3984 
3985hunk ./src/allmydata/storage/expirer.py 53
3986     slow_start = 360 # wait 6 minutes after startup
3987     minimum_cycle_time = 12*60*60 # not more than twice per day
3988 
3989-    def __init__(self, server, statefile, historyfile,
3990-                 expiration_enabled, mode,
3991-                 override_lease_duration, # used if expiration_mode=="age"
3992-                 cutoff_date, # used if expiration_mode=="cutoff-date"
3993-                 sharetypes):
3994-        self.historyfile = historyfile
3995-        self.expiration_enabled = expiration_enabled
3996-        self.mode = mode
3997+    def __init__(self, backend, statefp, historyfp, expiration_policy):
3998+        # ShareCrawler.__init__ will call add_initial_state, so self.historyfp has to be set first.
3999+        self.historyfp = historyfp
4000+        ShareCrawler.__init__(self, backend, statefp)
4001+
4002+        self.expiration_enabled = expiration_policy['enabled']
4003+        self.mode = expiration_policy['mode']
4004         self.override_lease_duration = None
4005         self.cutoff_date = None
4006         if self.mode == "age":
4007hunk ./src/allmydata/storage/expirer.py 63
4008-            assert isinstance(override_lease_duration, (int, type(None)))
4009-            self.override_lease_duration = override_lease_duration # seconds
4010+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
4011+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
4012         elif self.mode == "cutoff-date":
4013hunk ./src/allmydata/storage/expirer.py 66
4014-            assert isinstance(cutoff_date, int) # seconds-since-epoch
4015-            assert cutoff_date is not None
4016-            self.cutoff_date = cutoff_date
4017+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
4018+            self.cutoff_date = expiration_policy['cutoff_date']
4019         else:
4020hunk ./src/allmydata/storage/expirer.py 69
4021-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
4022-        self.sharetypes_to_expire = sharetypes
4023-        ShareCrawler.__init__(self, server, statefile)
4024+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
4025+        self.sharetypes_to_expire = expiration_policy['sharetypes']
4026 
4027     def add_initial_state(self):
4028         # we fill ["cycle-to-date"] here (even though they will be reset in
4029hunk ./src/allmydata/storage/expirer.py 84
4030             self.state["cycle-to-date"].setdefault(k, so_far[k])
4031 
4032         # initialize history
4033-        if not os.path.exists(self.historyfile):
4034+        if not self.historyfp.exists():
4035             history = {} # cyclenum -> dict
4036hunk ./src/allmydata/storage/expirer.py 86
4037-            f = open(self.historyfile, "wb")
4038-            pickle.dump(history, f)
4039-            f.close()
4040+            self.historyfp.setContent(pickle.dumps(history))
4041 
4042     def create_empty_cycle_dict(self):
4043         recovered = self.create_empty_recovered_dict()
4044hunk ./src/allmydata/storage/expirer.py 99
4045 
4046     def create_empty_recovered_dict(self):
4047         recovered = {}
4048+        # "buckets" is ambiguous; here it means the number of sharesets (one per storage index per server)
4049         for a in ("actual", "original", "configured", "examined"):
4050             for b in ("buckets", "shares", "sharebytes", "diskbytes"):
4051                 recovered[a+"-"+b] = 0
4052hunk ./src/allmydata/storage/expirer.py 110
4053     def started_cycle(self, cycle):
4054         self.state["cycle-to-date"] = self.create_empty_cycle_dict()
4055 
4056-    def stat(self, fn):
4057-        return os.stat(fn)
4058-
4059-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
4060-        bucketdir = os.path.join(prefixdir, storage_index_b32)
4061-        s = self.stat(bucketdir)
4062+    def process_storage_index(self, cycle, prefix, container):
4063         would_keep_shares = []
4064         wks = None
4065hunk ./src/allmydata/storage/expirer.py 113
4066+        sharetype = None
4067 
4068hunk ./src/allmydata/storage/expirer.py 115
4069-        for fn in os.listdir(bucketdir):
4070-            try:
4071-                shnum = int(fn)
4072-            except ValueError:
4073-                continue # non-numeric means not a sharefile
4074-            sharefile = os.path.join(bucketdir, fn)
4075+        for share in container.get_shares():
4076+            sharetype = share.sharetype
4077             try:
4078hunk ./src/allmydata/storage/expirer.py 118
4079-                wks = self.process_share(sharefile)
4080+                wks = self.process_share(share)
4081             except (UnknownMutableContainerVersionError,
4082                     UnknownImmutableContainerVersionError,
4083                     struct.error):
4084hunk ./src/allmydata/storage/expirer.py 122
4085-                twlog.msg("lease-checker error processing %s" % sharefile)
4086+                twlog.msg("lease-checker error processing %r" % (share,))
4087                 twlog.err()
4088hunk ./src/allmydata/storage/expirer.py 124
4089-                which = (storage_index_b32, shnum)
4090+                which = (si_b2a(share.storageindex), share.get_shnum())
4091                 self.state["cycle-to-date"]["corrupt-shares"].append(which)
4092                 wks = (1, 1, 1, "unknown")
4093             would_keep_shares.append(wks)
4094hunk ./src/allmydata/storage/expirer.py 129
4095 
4096-        sharetype = None
4097+        container_type = None
4098         if wks:
4099hunk ./src/allmydata/storage/expirer.py 131
4100-            # use the last share's sharetype as the buckettype
4101-            sharetype = wks[3]
4102+            # use the last share's sharetype as the container type
4103+            container_type = wks[3]
4104         rec = self.state["cycle-to-date"]["space-recovered"]
4105         self.increment(rec, "examined-buckets", 1)
4106         if sharetype:
4107hunk ./src/allmydata/storage/expirer.py 136
4108-            self.increment(rec, "examined-buckets-"+sharetype, 1)
4109+            self.increment(rec, "examined-buckets-"+container_type, 1)
4110+
4111+        container_diskbytes = container.get_overhead()
4112 
4113hunk ./src/allmydata/storage/expirer.py 140
4114-        try:
4115-            bucket_diskbytes = s.st_blocks * 512
4116-        except AttributeError:
4117-            bucket_diskbytes = 0 # no stat().st_blocks on windows
4118         if sum([wks[0] for wks in would_keep_shares]) == 0:
4119hunk ./src/allmydata/storage/expirer.py 141
4120-            self.increment_bucketspace("original", bucket_diskbytes, sharetype)
4121+            self.increment_container_space("original", container_diskbytes, sharetype)
4122         if sum([wks[1] for wks in would_keep_shares]) == 0:
4123hunk ./src/allmydata/storage/expirer.py 143
4124-            self.increment_bucketspace("configured", bucket_diskbytes, sharetype)
4125+            self.increment_container_space("configured", container_diskbytes, sharetype)
4126         if sum([wks[2] for wks in would_keep_shares]) == 0:
4127hunk ./src/allmydata/storage/expirer.py 145
4128-            self.increment_bucketspace("actual", bucket_diskbytes, sharetype)
4129+            self.increment_container_space("actual", container_diskbytes, sharetype)
4130 
4131hunk ./src/allmydata/storage/expirer.py 147
4132-    def process_share(self, sharefilename):
4133-        # first, find out what kind of a share it is
4134-        sf = get_share_file(sharefilename)
4135-        sharetype = sf.sharetype
4136+    def process_share(self, share):
4137+        sharetype = share.sharetype
4138         now = time.time()
4139hunk ./src/allmydata/storage/expirer.py 150
4140-        s = self.stat(sharefilename)
4141+        sharebytes = share.get_size()
4142+        diskbytes = share.get_used_space()
4143 
4144         num_leases = 0
4145         num_valid_leases_original = 0
4146hunk ./src/allmydata/storage/expirer.py 158
4147         num_valid_leases_configured = 0
4148         expired_leases_configured = []
4149 
4150-        for li in sf.get_leases():
4151+        for li in share.get_leases():
4152             num_leases += 1
4153             original_expiration_time = li.get_expiration_time()
4154             grant_renew_time = li.get_grant_renew_time_time()
4155hunk ./src/allmydata/storage/expirer.py 171
4156 
4157             #  expired-or-not according to our configured age limit
4158             expired = False
4159-            if self.mode == "age":
4160-                age_limit = original_expiration_time
4161-                if self.override_lease_duration is not None:
4162-                    age_limit = self.override_lease_duration
4163-                if age > age_limit:
4164-                    expired = True
4165-            else:
4166-                assert self.mode == "cutoff-date"
4167-                if grant_renew_time < self.cutoff_date:
4168-                    expired = True
4169-            if sharetype not in self.sharetypes_to_expire:
4170-                expired = False
4171+            if sharetype in self.sharetypes_to_expire:
4172+                if self.mode == "age":
4173+                    age_limit = original_expiration_time
4174+                    if self.override_lease_duration is not None:
4175+                        age_limit = self.override_lease_duration
4176+                    if age > age_limit:
4177+                        expired = True
4178+                else:
4179+                    assert self.mode == "cutoff-date"
4180+                    if grant_renew_time < self.cutoff_date:
4181+                        expired = True
4182 
4183             if expired:
4184                 expired_leases_configured.append(li)
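The restructured loop above makes the expiry decision conditional on the share type being configured to expire, instead of computing it and then discarding it. A minimal standalone sketch of that decision logic (the function name and flat argument list are illustrative, not the patch's API):

```python
def is_lease_expired(mode, age, original_expiration_time, grant_renew_time,
                     sharetype, sharetypes_to_expire,
                     override_lease_duration=None, cutoff_date=None):
    """Mirror the patch's logic: leases on share types not configured
    to expire are never considered expired."""
    if sharetype not in sharetypes_to_expire:
        return False
    if mode == "age":
        age_limit = original_expiration_time
        if override_lease_duration is not None:
            age_limit = override_lease_duration
        return age > age_limit
    assert mode == "cutoff-date"
    return grant_renew_time < cutoff_date
```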
4185hunk ./src/allmydata/storage/expirer.py 190
4186 
4187         so_far = self.state["cycle-to-date"]
4188         self.increment(so_far["leases-per-share-histogram"], num_leases, 1)
4189-        self.increment_space("examined", s, sharetype)
4190+        self.increment_space("examined", sharebytes, diskbytes, sharetype)
4191 
4192         would_keep_share = [1, 1, 1, sharetype]
4193 
4194hunk ./src/allmydata/storage/expirer.py 196
4195         if self.expiration_enabled:
4196             for li in expired_leases_configured:
4197-                sf.cancel_lease(li.cancel_secret)
4198+                share.cancel_lease(li.cancel_secret)
4199 
4200         if num_valid_leases_original == 0:
4201             would_keep_share[0] = 0
4202hunk ./src/allmydata/storage/expirer.py 200
4203-            self.increment_space("original", s, sharetype)
4204+            self.increment_space("original", sharebytes, diskbytes, sharetype)
4205 
4206         if num_valid_leases_configured == 0:
4207             would_keep_share[1] = 0
4208hunk ./src/allmydata/storage/expirer.py 204
4209-            self.increment_space("configured", s, sharetype)
4210+            self.increment_space("configured", sharebytes, diskbytes, sharetype)
4211             if self.expiration_enabled:
4212                 would_keep_share[2] = 0
4213hunk ./src/allmydata/storage/expirer.py 207
4214-                self.increment_space("actual", s, sharetype)
4215+                self.increment_space("actual", sharebytes, diskbytes, sharetype)
4216 
4217         return would_keep_share
4218 
4219hunk ./src/allmydata/storage/expirer.py 211
4220-    def increment_space(self, a, s, sharetype):
4221-        sharebytes = s.st_size
4222-        try:
4223-            # note that stat(2) says that st_blocks is 512 bytes, and that
4224-            # st_blksize is "optimal file sys I/O ops blocksize", which is
4225-            # independent of the block-size that st_blocks uses.
4226-            diskbytes = s.st_blocks * 512
4227-        except AttributeError:
4228-            # the docs say that st_blocks is only on linux. I also see it on
4229-            # MacOS. But it isn't available on windows.
4230-            diskbytes = sharebytes
4231+    def increment_space(self, a, sharebytes, diskbytes, sharetype):
4232         so_far_sr = self.state["cycle-to-date"]["space-recovered"]
4233         self.increment(so_far_sr, a+"-shares", 1)
4234         self.increment(so_far_sr, a+"-sharebytes", sharebytes)
4235hunk ./src/allmydata/storage/expirer.py 221
4236             self.increment(so_far_sr, a+"-sharebytes-"+sharetype, sharebytes)
4237             self.increment(so_far_sr, a+"-diskbytes-"+sharetype, diskbytes)
4238 
4239-    def increment_bucketspace(self, a, bucket_diskbytes, sharetype):
4240+    def increment_container_space(self, a, container_diskbytes, container_type):
4241         rec = self.state["cycle-to-date"]["space-recovered"]
4242hunk ./src/allmydata/storage/expirer.py 223
4243-        self.increment(rec, a+"-diskbytes", bucket_diskbytes)
4244+        self.increment(rec, a+"-diskbytes", container_diskbytes)
4245         self.increment(rec, a+"-buckets", 1)
4246hunk ./src/allmydata/storage/expirer.py 225
4247-        if sharetype:
4248-            self.increment(rec, a+"-diskbytes-"+sharetype, bucket_diskbytes)
4249-            self.increment(rec, a+"-buckets-"+sharetype, 1)
4250+        if container_type:
4251+            self.increment(rec, a+"-diskbytes-"+container_type, container_diskbytes)
4252+            self.increment(rec, a+"-buckets-"+container_type, 1)
4253 
4254     def increment(self, d, k, delta=1):
4255         if k not in d:
4256hunk ./src/allmydata/storage/expirer.py 281
4257         # copy() needs to become a deepcopy
4258         h["space-recovered"] = s["space-recovered"].copy()
4259 
4260-        history = pickle.load(open(self.historyfile, "rb"))
4261+        history = pickle.loads(self.historyfp.getContent())
4262         history[cycle] = h
4263         while len(history) > 10:
4264             oldcycles = sorted(history.keys())
4265hunk ./src/allmydata/storage/expirer.py 286
4266             del history[oldcycles[0]]
4267-        f = open(self.historyfile, "wb")
4268-        pickle.dump(history, f)
4269-        f.close()
4270+        self.historyfp.setContent(pickle.dumps(history))
4271 
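The move from `open()`/`pickle.load` to a FilePath-style `getContent()`/`setContent()` pair means the history is now serialized to and from whole byte strings, so `pickle.loads`/`pickle.dumps` are the matching calls. A sketch of the round trip, using a hypothetical in-memory stand-in for the FilePath:

```python
import pickle

class FakeFilePath(object):
    """Minimal stand-in for the getContent/setContent part of
    twisted.python.filepath.FilePath, for illustration only."""
    def __init__(self):
        self._data = pickle.dumps({})
    def getContent(self):
        return self._data
    def setContent(self, data):
        self._data = data

def record_cycle(historyfp, cycle, h, keep=10):
    # getContent() returns the raw bytes, so use pickle.loads, not pickle.load
    history = pickle.loads(historyfp.getContent())
    history[cycle] = h
    # prune to the most recent `keep` cycles, as the expirer does
    while len(history) > keep:
        del history[min(history.keys())]
    historyfp.setContent(pickle.dumps(history))
    return history
```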
4272     def get_state(self):
4273         """In addition to the crawler state described in
4274hunk ./src/allmydata/storage/expirer.py 355
4275         progress = self.get_progress()
4276 
4277         state = ShareCrawler.get_state(self) # does a shallow copy
4278-        history = pickle.load(open(self.historyfile, "rb"))
4279+        history = pickle.loads(self.historyfp.getContent())
4280         state["history"] = history
4281 
4282         if not progress["cycle-in-progress"]:
4283hunk ./src/allmydata/storage/lease.py 3
4284 import struct, time
4285 
4286+
4287+class NonExistentLeaseError(Exception):
4288+    pass
4289+
4290 class LeaseInfo:
4291     def __init__(self, owner_num=None, renew_secret=None, cancel_secret=None,
4292                  expiration_time=None, nodeid=None):
4293hunk ./src/allmydata/storage/lease.py 21
4294 
4295     def get_expiration_time(self):
4296         return self.expiration_time
4297+
4298     def get_grant_renew_time_time(self):
4299         # hack, based upon fixed 31day expiration period
4300         return self.expiration_time - 31*24*60*60
4301hunk ./src/allmydata/storage/lease.py 25
4302+
4303     def get_age(self):
4304         return time.time() - self.get_grant_renew_time_time()
4305 
4306hunk ./src/allmydata/storage/lease.py 36
4307          self.expiration_time) = struct.unpack(">L32s32sL", data)
4308         self.nodeid = None
4309         return self
4310+
4311     def to_immutable_data(self):
4312         return struct.pack(">L32s32sL",
4313                            self.owner_num,
4314hunk ./src/allmydata/storage/lease.py 49
4315                            int(self.expiration_time),
4316                            self.renew_secret, self.cancel_secret,
4317                            self.nodeid)
4318+
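For reference, the immutable lease record serialized above is a fixed-size struct: a 4-byte owner number, two 32-byte secrets, and a 4-byte expiration time, 72 bytes in all. A hedged round-trip sketch (the helper names are illustrative, not the patch's API):

```python
import struct

# owner_num, renew_secret, cancel_secret, expiration_time
IMMUTABLE_LEASE_FORMAT = ">L32s32sL"

def pack_lease(owner_num, renew_secret, cancel_secret, expiration_time):
    return struct.pack(IMMUTABLE_LEASE_FORMAT,
                       owner_num, renew_secret, cancel_secret,
                       int(expiration_time))

def unpack_lease(data):
    return struct.unpack(IMMUTABLE_LEASE_FORMAT, data)
```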
4319     def from_mutable_data(self, data):
4320         (self.owner_num,
4321          self.expiration_time,
4322hunk ./src/allmydata/storage/server.py 1
4323-import os, re, weakref, struct, time
4324+import weakref, time
4325 
4326 from foolscap.api import Referenceable
4327 from twisted.application import service
4328hunk ./src/allmydata/storage/server.py 7
4329 
4330 from zope.interface import implements
4331-from allmydata.interfaces import RIStorageServer, IStatsProducer
4332-from allmydata.util import fileutil, idlib, log, time_format
4333+from allmydata.interfaces import RIStorageServer, IStatsProducer, IStorageBackend
4334+from allmydata.util.assertutil import precondition
4335+from allmydata.util import idlib, log
4336 import allmydata # for __full_version__
4337 
4338hunk ./src/allmydata/storage/server.py 12
4339-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
4340-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
4341+from allmydata.storage.common import si_a2b, si_b2a
4342+[si_a2b]  # hush pyflakes
4343 from allmydata.storage.lease import LeaseInfo
4344hunk ./src/allmydata/storage/server.py 15
4345-from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
4346-     create_mutable_sharefile
4347-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
4348-from allmydata.storage.crawler import BucketCountingCrawler
4349 from allmydata.storage.expirer import LeaseCheckingCrawler
4350hunk ./src/allmydata/storage/server.py 16
4351-
4352-# storage/
4353-# storage/shares/incoming
4354-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
4355-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
4356-# storage/shares/$START/$STORAGEINDEX
4357-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
4358-
4359-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
4360-# base-32 chars).
4361-
4362-# $SHARENUM matches this regex:
4363-NUM_RE=re.compile("^[0-9]+$")
4364-
4365+from allmydata.storage.crawler import BucketCountingCrawler
4366 
4367 
4368 class StorageServer(service.MultiService, Referenceable):
4369hunk ./src/allmydata/storage/server.py 21
4370     implements(RIStorageServer, IStatsProducer)
4371+
4372     name = 'storage'
4373     LeaseCheckerClass = LeaseCheckingCrawler
4374hunk ./src/allmydata/storage/server.py 24
4375+    DEFAULT_EXPIRATION_POLICY = {
4376+        'enabled': False,
4377+        'mode': 'age',
4378+        'override_lease_duration': None,
4379+        'cutoff_date': None,
4380+        'sharetypes': ('mutable', 'immutable'),
4381+    }
4382 
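Collapsing the five `expiration_*` constructor parameters into one `expiration_policy` dict means a caller supplies the whole policy, or None for the default. A sketch of how a caller might build one; the merge helper is illustrative (the patch itself just uses `expiration_policy or self.DEFAULT_EXPIRATION_POLICY`):

```python
DEFAULT_EXPIRATION_POLICY = {
    'enabled': False,
    'mode': 'age',
    'override_lease_duration': None,
    'cutoff_date': None,
    'sharetypes': ('mutable', 'immutable'),
}

def effective_policy(expiration_policy=None):
    # tolerate a partial dict by filling in the defaults
    policy = dict(DEFAULT_EXPIRATION_POLICY)
    policy.update(expiration_policy or {})
    return policy
```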
4383hunk ./src/allmydata/storage/server.py 32
4384-    def __init__(self, storedir, nodeid, reserved_space=0,
4385-                 discard_storage=False, readonly_storage=False,
4386+    def __init__(self, serverid, backend, statedir,
4387                  stats_provider=None,
4388hunk ./src/allmydata/storage/server.py 34
4389-                 expiration_enabled=False,
4390-                 expiration_mode="age",
4391-                 expiration_override_lease_duration=None,
4392-                 expiration_cutoff_date=None,
4393-                 expiration_sharetypes=("mutable", "immutable")):
4394+                 expiration_policy=None):
4395         service.MultiService.__init__(self)
4396hunk ./src/allmydata/storage/server.py 36
4397-        assert isinstance(nodeid, str)
4398-        assert len(nodeid) == 20
4399-        self.my_nodeid = nodeid
4400-        self.storedir = storedir
4401-        sharedir = os.path.join(storedir, "shares")
4402-        fileutil.make_dirs(sharedir)
4403-        self.sharedir = sharedir
4404-        # we don't actually create the corruption-advisory dir until necessary
4405-        self.corruption_advisory_dir = os.path.join(storedir,
4406-                                                    "corruption-advisories")
4407-        self.reserved_space = int(reserved_space)
4408-        self.no_storage = discard_storage
4409-        self.readonly_storage = readonly_storage
4410+        precondition(IStorageBackend.providedBy(backend), backend)
4411+        precondition(isinstance(serverid, str), serverid)
4412+        precondition(len(serverid) == 20, serverid)
4413+
4414+        self._serverid = serverid
4415         self.stats_provider = stats_provider
4416         if self.stats_provider:
4417             self.stats_provider.register_producer(self)
4418hunk ./src/allmydata/storage/server.py 44
4419-        self.incomingdir = os.path.join(sharedir, 'incoming')
4420-        self._clean_incomplete()
4421-        fileutil.make_dirs(self.incomingdir)
4422         self._active_writers = weakref.WeakKeyDictionary()
4423hunk ./src/allmydata/storage/server.py 45
4424+        self.backend = backend
4425+        self.backend.setServiceParent(self)
4426+        self._statedir = statedir
4427         log.msg("StorageServer created", facility="tahoe.storage")
4428 
4429hunk ./src/allmydata/storage/server.py 50
4430-        if reserved_space:
4431-            if self.get_available_space() is None:
4432-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
4433-                        umin="0wZ27w", level=log.UNUSUAL)
4434-
4435         self.latencies = {"allocate": [], # immutable
4436                           "write": [],
4437                           "close": [],
4438hunk ./src/allmydata/storage/server.py 61
4439                           "renew": [],
4440                           "cancel": [],
4441                           }
4442-        self.add_bucket_counter()
4443-
4444-        statefile = os.path.join(self.storedir, "lease_checker.state")
4445-        historyfile = os.path.join(self.storedir, "lease_checker.history")
4446-        klass = self.LeaseCheckerClass
4447-        self.lease_checker = klass(self, statefile, historyfile,
4448-                                   expiration_enabled, expiration_mode,
4449-                                   expiration_override_lease_duration,
4450-                                   expiration_cutoff_date,
4451-                                   expiration_sharetypes)
4452-        self.lease_checker.setServiceParent(self)
4453+        self._setup_bucket_counter()
4454+        self._setup_lease_checker(expiration_policy or self.DEFAULT_EXPIRATION_POLICY)
4455 
4456     def __repr__(self):
4457hunk ./src/allmydata/storage/server.py 65
4458-        return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
4459+        return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self._serverid),)
4460 
4461hunk ./src/allmydata/storage/server.py 67
4462-    def add_bucket_counter(self):
4463-        statefile = os.path.join(self.storedir, "bucket_counter.state")
4464-        self.bucket_counter = BucketCountingCrawler(self, statefile)
4465+    def _setup_bucket_counter(self):
4466+        statefp = self._statedir.child("bucket_counter.state")
4467+        self.bucket_counter = BucketCountingCrawler(self.backend, statefp)
4468         self.bucket_counter.setServiceParent(self)
4469 
4470hunk ./src/allmydata/storage/server.py 72
4471+    def _setup_lease_checker(self, expiration_policy):
4472+        statefp = self._statedir.child("lease_checker.state")
4473+        historyfp = self._statedir.child("lease_checker.history")
4474+        self.lease_checker = self.LeaseCheckerClass(self.backend, statefp, historyfp, expiration_policy)
4475+        self.lease_checker.setServiceParent(self)
4476+
4477     def count(self, name, delta=1):
4478         if self.stats_provider:
4479             self.stats_provider.count("storage_server." + name, delta)
4480hunk ./src/allmydata/storage/server.py 92
4481         """Return a dict, indexed by category, that contains a dict of
4482         latency numbers for each category. If there are sufficient samples
4483         for unambiguous interpretation, each dict will contain the
4484-        following keys: mean, 01_0_percentile, 10_0_percentile,
4485+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
4486         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
4487         99_0_percentile, 99_9_percentile.  If there are insufficient
4488         samples for a given percentile to be interpreted unambiguously
4489hunk ./src/allmydata/storage/server.py 114
4490             else:
4491                 stats["mean"] = None
4492 
4493-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
4494-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
4495-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
4496+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
4497+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
4498+                             (0.01, "01_0_percentile", 100),  (0.99, "99_0_percentile", 100),\
4499                              (0.999, "99_9_percentile", 1000)]
4500 
4501             for percentile, percentilestring, minnumtoobserve in orderstatlist:
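The reordered orderstatlist above still pairs each percentile with the minimum sample count needed to report it unambiguously (the 99.9th percentile needs at least 1000 samples). A simplified sketch of the selection, returning None for percentiles with too few samples:

```python
def percentile_stats(samples):
    """Illustrative version of the latency percentile reporting: each
    percentile is only reported when enough samples have been observed."""
    orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10),
                     (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20),
                     (0.01, "01_0_percentile", 100), (0.99, "99_0_percentile", 100),
                     (0.999, "99_9_percentile", 1000)]
    samples = sorted(samples)
    count = len(samples)
    stats = {"samplesize": count}
    for percentile, name, minnumtoobserve in orderstatlist:
        if count >= minnumtoobserve:
            stats[name] = samples[min(int(percentile * count), count - 1)]
        else:
            stats[name] = None
    return stats
```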
4502hunk ./src/allmydata/storage/server.py 133
4503             kwargs["facility"] = "tahoe.storage"
4504         return log.msg(*args, **kwargs)
4505 
4506-    def _clean_incomplete(self):
4507-        fileutil.rm_dir(self.incomingdir)
4508+    def get_serverid(self):
4509+        return self._serverid
4510 
4511     def get_stats(self):
4512         # remember: RIStatsProvider requires that our return dict
4513hunk ./src/allmydata/storage/server.py 138
4514-        # contains numeric values.
4515+        # contains numeric or None values.
4516         stats = { 'storage_server.allocated': self.allocated_size(), }
4517hunk ./src/allmydata/storage/server.py 140
4518-        stats['storage_server.reserved_space'] = self.reserved_space
4519         for category,ld in self.get_latencies().items():
4520             for name,v in ld.items():
4521                 stats['storage_server.latencies.%s.%s' % (category, name)] = v
4522hunk ./src/allmydata/storage/server.py 144
4523 
4524-        try:
4525-            disk = fileutil.get_disk_stats(self.sharedir, self.reserved_space)
4526-            writeable = disk['avail'] > 0
4527-
4528-            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
4529-            stats['storage_server.disk_total'] = disk['total']
4530-            stats['storage_server.disk_used'] = disk['used']
4531-            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
4532-            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
4533-            stats['storage_server.disk_avail'] = disk['avail']
4534-        except AttributeError:
4535-            writeable = True
4536-        except EnvironmentError:
4537-            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
4538-            writeable = False
4539-
4540-        if self.readonly_storage:
4541-            stats['storage_server.disk_avail'] = 0
4542-            writeable = False
4543+        self.backend.fill_in_space_stats(stats)
4544 
4545hunk ./src/allmydata/storage/server.py 146
4546-        stats['storage_server.accepting_immutable_shares'] = int(writeable)
4547         s = self.bucket_counter.get_state()
4548         bucket_count = s.get("last-complete-bucket-count")
4549         if bucket_count:
4550hunk ./src/allmydata/storage/server.py 153
4551         return stats
4552 
4553     def get_available_space(self):
4554-        """Returns available space for share storage in bytes, or None if no
4555-        API to get this information is available."""
4556-
4557-        if self.readonly_storage:
4558-            return 0
4559-        return fileutil.get_available_space(self.sharedir, self.reserved_space)
4560+        return self.backend.get_available_space()
4561 
4562     def allocated_size(self):
4563         space = 0
4564hunk ./src/allmydata/storage/server.py 162
4565         return space
4566 
4567     def remote_get_version(self):
4568-        remaining_space = self.get_available_space()
4569+        remaining_space = self.backend.get_available_space()
4570         if remaining_space is None:
4571             # We're on a platform that has no API to get disk stats.
4572             remaining_space = 2**64
4573hunk ./src/allmydata/storage/server.py 178
4574                     }
4575         return version
4576 
4577-    def remote_allocate_buckets(self, storage_index,
4578+    def remote_allocate_buckets(self, storageindex,
4579                                 renew_secret, cancel_secret,
4580                                 sharenums, allocated_size,
4581                                 canary, owner_num=0):
4582hunk ./src/allmydata/storage/server.py 182
4583+        # cancel_secret is no longer used.
4584         # owner_num is not for clients to set, but rather it should be
4585hunk ./src/allmydata/storage/server.py 184
4586-        # curried into the PersonalStorageServer instance that is dedicated
4587-        # to a particular owner.
4588+        # curried into a StorageServer instance dedicated to a particular
4589+        # owner.
4590         start = time.time()
4591         self.count("allocate")
4592hunk ./src/allmydata/storage/server.py 188
4593-        alreadygot = set()
4594         bucketwriters = {} # k: shnum, v: BucketWriter
4595hunk ./src/allmydata/storage/server.py 189
4596-        si_dir = storage_index_to_dir(storage_index)
4597-        si_s = si_b2a(storage_index)
4598 
4599hunk ./src/allmydata/storage/server.py 190
4600+        si_s = si_b2a(storageindex)
4601         log.msg("storage: allocate_buckets %s" % si_s)
4602 
4603hunk ./src/allmydata/storage/server.py 193
4604-        # in this implementation, the lease information (including secrets)
4605-        # goes into the share files themselves. It could also be put into a
4606-        # separate database. Note that the lease should not be added until
4607-        # the BucketWriter has been closed.
4608+        # Note that the lease should not be added until the BucketWriter
4609+        # has been closed.
4610         expire_time = time.time() + 31*24*60*60
4611hunk ./src/allmydata/storage/server.py 196
4612-        lease_info = LeaseInfo(owner_num,
4613-                               renew_secret, cancel_secret,
4614-                               expire_time, self.my_nodeid)
4615+        lease_info = LeaseInfo(owner_num, renew_secret,
4616+                               expiration_time=expire_time, nodeid=self._serverid)
4617 
4618         max_space_per_bucket = allocated_size
4619 
4620hunk ./src/allmydata/storage/server.py 201
4621-        remaining_space = self.get_available_space()
4622+        remaining_space = self.backend.get_available_space()
4623         limited = remaining_space is not None
4624         if limited:
4625hunk ./src/allmydata/storage/server.py 204
4626-            # this is a bit conservative, since some of this allocated_size()
4627-            # has already been written to disk, where it will show up in
4628+            # This is a bit conservative, since some of this allocated_size()
4629+            # has already been written to the backend, where it will show up in
4630             # get_available_space.
4631             remaining_space -= self.allocated_size()
4632hunk ./src/allmydata/storage/server.py 208
4633-        # self.readonly_storage causes remaining_space <= 0
4634+            # If the backend is read-only, remaining_space will be <= 0.
4635+
4636+        shareset = self.backend.get_shareset(storageindex)
4637 
4638hunk ./src/allmydata/storage/server.py 212
4639-        # fill alreadygot with all shares that we have, not just the ones
4640+        # Fill alreadygot with all shares that we have, not just the ones
4641         # they asked about: this will save them a lot of work. Add or update
4642         # leases for all of them: if they want us to hold shares for this
4643hunk ./src/allmydata/storage/server.py 215
4644-        # file, they'll want us to hold leases for this file.
4645-        for (shnum, fn) in self._get_bucket_shares(storage_index):
4646-            alreadygot.add(shnum)
4647-            sf = ShareFile(fn)
4648-            sf.add_or_renew_lease(lease_info)
4649+        # file, they'll want us to hold leases for all the shares of it.
4650+        #
4651+        # XXX should we be making the assumption here that lease info is
4652+        # duplicated in all shares?
4653+        alreadygot = set()
4654+        for share in shareset.get_shares():
4655+            share.add_or_renew_lease(lease_info)
4656+            alreadygot.add(share.get_shnum())
4657 
4658hunk ./src/allmydata/storage/server.py 224
4659-        for shnum in sharenums:
4660-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
4661-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
4662-            if os.path.exists(finalhome):
4663-                # great! we already have it. easy.
4664-                pass
4665-            elif os.path.exists(incominghome):
4666+        for shnum in sharenums - alreadygot:
4667+            if shareset.has_incoming(shnum):
4668                 # Note that we don't create BucketWriters for shnums that
4669                 # have a partial share (in incoming/), so if a second upload
4670                 # occurs while the first is still in progress, the second
4671hunk ./src/allmydata/storage/server.py 232
4672                 # uploader will use different storage servers.
4673                 pass
4674             elif (not limited) or (remaining_space >= max_space_per_bucket):
4675-                # ok! we need to create the new share file.
4676-                bw = BucketWriter(self, incominghome, finalhome,
4677-                                  max_space_per_bucket, lease_info, canary)
4678-                if self.no_storage:
4679-                    bw.throw_out_all_data = True
4680+                bw = shareset.make_bucket_writer(self, shnum, max_space_per_bucket,
4681+                                                 lease_info, canary)
4682                 bucketwriters[shnum] = bw
4683                 self._active_writers[bw] = 1
4684                 if limited:
4685hunk ./src/allmydata/storage/server.py 239
4686                     remaining_space -= max_space_per_bucket
4687             else:
4688-                # bummer! not enough space to accept this bucket
4689+                # Bummer! Not enough space to accept this share.
4690                 pass
4691 
4692hunk ./src/allmydata/storage/server.py 242
4693-        if bucketwriters:
4694-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
4695-
4696         self.add_latency("allocate", time.time() - start)
4697         return alreadygot, bucketwriters
4698 
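The rewritten allocation loop above reduces to a small decision per requested share: skip it if we already have it or if a partial upload is in flight, and accept it only while space remains (a remaining_space of None means there is no API to measure it, so everything is accepted). A standalone sketch, with illustrative function and parameter names:

```python
def plan_allocations(existing_shnums, incoming_shnums, requested_shnums,
                     max_space_per_share, remaining_space):
    """Illustrative version of the decision in remote_allocate_buckets."""
    alreadygot = set(existing_shnums)
    limited = remaining_space is not None
    accepted = set()
    for shnum in sorted(set(requested_shnums) - alreadygot):
        if shnum in incoming_shnums:
            continue  # a first upload is still in progress; let the
                      # second uploader use a different server
        if (not limited) or (remaining_space >= max_space_per_share):
            accepted.add(shnum)
            if limited:
                remaining_space -= max_space_per_share
    return alreadygot, accepted
```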
4699hunk ./src/allmydata/storage/server.py 245
4700-    def _iter_share_files(self, storage_index):
4701-        for shnum, filename in self._get_bucket_shares(storage_index):
4702-            f = open(filename, 'rb')
4703-            header = f.read(32)
4704-            f.close()
4705-            if header[:32] == MutableShareFile.MAGIC:
4706-                sf = MutableShareFile(filename, self)
4707-                # note: if the share has been migrated, the renew_lease()
4708-                # call will throw an exception, with information to help the
4709-                # client update the lease.
4710-            elif header[:4] == struct.pack(">L", 1):
4711-                sf = ShareFile(filename)
4712-            else:
4713-                continue # non-sharefile
4714-            yield sf
4715-
4716-    def remote_add_lease(self, storage_index, renew_secret, cancel_secret,
4717+    def remote_add_lease(self, storageindex, renew_secret, cancel_secret,
4718                          owner_num=1):
4719hunk ./src/allmydata/storage/server.py 247
4720+        # cancel_secret is no longer used.
4721         start = time.time()
4722         self.count("add-lease")
4723         new_expire_time = time.time() + 31*24*60*60
4724hunk ./src/allmydata/storage/server.py 251
4725-        lease_info = LeaseInfo(owner_num,
4726-                               renew_secret, cancel_secret,
4727-                               new_expire_time, self.my_nodeid)
4728-        for sf in self._iter_share_files(storage_index):
4729-            sf.add_or_renew_lease(lease_info)
4730-        self.add_latency("add-lease", time.time() - start)
4731-        return None
4732+        lease_info = LeaseInfo(owner_num, renew_secret,
4733+                               expiration_time=new_expire_time, nodeid=self._serverid)
4734 
4735hunk ./src/allmydata/storage/server.py 254
4736-    def remote_renew_lease(self, storage_index, renew_secret):
4737+        try:
4738+            self.backend.add_or_renew_lease(lease_info)
4739+        finally:
4740+            self.add_latency("add-lease", time.time() - start)
4741+
4742+    def remote_renew_lease(self, storageindex, renew_secret):
4743         start = time.time()
4744         self.count("renew")
4745hunk ./src/allmydata/storage/server.py 262
4746-        new_expire_time = time.time() + 31*24*60*60
4747-        found_buckets = False
4748-        for sf in self._iter_share_files(storage_index):
4749-            found_buckets = True
4750-            sf.renew_lease(renew_secret, new_expire_time)
4751-        self.add_latency("renew", time.time() - start)
4752-        if not found_buckets:
4753-            raise IndexError("no such lease to renew")
4754+
4755+        try:
4756+            shareset = self.backend.get_shareset(storageindex)
4757+            new_expiration_time = start + 31*24*60*60   # one month from now
4758+            shareset.renew_lease(renew_secret, new_expiration_time)
4759+        finally:
4760+            self.add_latency("renew", time.time() - start)
4761 
4762     def bucket_writer_closed(self, bw, consumed_size):
4763         if self.stats_provider:
4764hunk ./src/allmydata/storage/server.py 275
4765             self.stats_provider.count('storage_server.bytes_added', consumed_size)
4766         del self._active_writers[bw]
4767 
4768-    def _get_bucket_shares(self, storage_index):
4769-        """Return a list of (shnum, pathname) tuples for files that hold
4770-        shares for this storage_index. In each tuple, 'shnum' will always be
4771-        the integer form of the last component of 'pathname'."""
4772-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4773-        try:
4774-            for f in os.listdir(storagedir):
4775-                if NUM_RE.match(f):
4776-                    filename = os.path.join(storagedir, f)
4777-                    yield (int(f), filename)
4778-        except OSError:
4779-            # Commonly caused by there being no buckets at all.
4780-            pass
4781-
4782-    def remote_get_buckets(self, storage_index):
4783+    def remote_get_buckets(self, storageindex):
4784         start = time.time()
4785         self.count("get")
4786hunk ./src/allmydata/storage/server.py 278
4787-        si_s = si_b2a(storage_index)
4788+        si_s = si_b2a(storageindex)
4789         log.msg("storage: get_buckets %s" % si_s)
4790         bucketreaders = {} # k: sharenum, v: BucketReader
4791hunk ./src/allmydata/storage/server.py 281
4792-        for shnum, filename in self._get_bucket_shares(storage_index):
4793-            bucketreaders[shnum] = BucketReader(self, filename,
4794-                                                storage_index, shnum)
4795-        self.add_latency("get", time.time() - start)
4796-        return bucketreaders
4797 
4798hunk ./src/allmydata/storage/server.py 282
4799-    def get_leases(self, storage_index):
4800-        """Provide an iterator that yields all of the leases attached to this
4801-        bucket. Each lease is returned as a LeaseInfo instance.
4802+        try:
4803+            shareset = self.backend.get_shareset(storageindex)
4804+            for share in shareset.get_shares():
4805+                bucketreaders[share.get_shnum()] = shareset.make_bucket_reader(self, share)
4806+            return bucketreaders
4807+        finally:
4808+            self.add_latency("get", time.time() - start)
4809 
4810hunk ./src/allmydata/storage/server.py 290
4811-        This method is not for client use.
4812+    def get_leases(self, storageindex):
4813         """
4814hunk ./src/allmydata/storage/server.py 292
4815+        Provide an iterator that yields all of the leases attached to this
4816+        bucket. Each lease is returned as a LeaseInfo instance.
4817 
4818hunk ./src/allmydata/storage/server.py 295
4819-        # since all shares get the same lease data, we just grab the leases
4820-        # from the first share
4821-        try:
4822-            shnum, filename = self._get_bucket_shares(storage_index).next()
4823-            sf = ShareFile(filename)
4824-            return sf.get_leases()
4825-        except StopIteration:
4826-            return iter([])
4827+        This method is not for client use. XXX do we need it at all?
4828+        """
4829+        return self.backend.get_shareset(storageindex).get_leases()
4830 
4831hunk ./src/allmydata/storage/server.py 299
4832-    def remote_slot_testv_and_readv_and_writev(self, storage_index,
4833+    def remote_slot_testv_and_readv_and_writev(self, storageindex,
4834                                                secrets,
4835                                                test_and_write_vectors,
4836                                                read_vector):
4837hunk ./src/allmydata/storage/server.py 305
4838         start = time.time()
4839         self.count("writev")
4840-        si_s = si_b2a(storage_index)
4841+        si_s = si_b2a(storageindex)
4842         log.msg("storage: slot_writev %s" % si_s)
4843hunk ./src/allmydata/storage/server.py 307
4844-        si_dir = storage_index_to_dir(storage_index)
4845-        (write_enabler, renew_secret, cancel_secret) = secrets
4846-        # shares exist if there is a file for them
4847-        bucketdir = os.path.join(self.sharedir, si_dir)
4848-        shares = {}
4849-        if os.path.isdir(bucketdir):
4850-            for sharenum_s in os.listdir(bucketdir):
4851-                try:
4852-                    sharenum = int(sharenum_s)
4853-                except ValueError:
4854-                    continue
4855-                filename = os.path.join(bucketdir, sharenum_s)
4856-                msf = MutableShareFile(filename, self)
4857-                msf.check_write_enabler(write_enabler, si_s)
4858-                shares[sharenum] = msf
4859-        # write_enabler is good for all existing shares.
4860-
4861-        # Now evaluate test vectors.
4862-        testv_is_good = True
4863-        for sharenum in test_and_write_vectors:
4864-            (testv, datav, new_length) = test_and_write_vectors[sharenum]
4865-            if sharenum in shares:
4866-                if not shares[sharenum].check_testv(testv):
4867-                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
4868-                    testv_is_good = False
4869-                    break
4870-            else:
4871-                # compare the vectors against an empty share, in which all
4872-                # reads return empty strings.
4873-                if not EmptyShare().check_testv(testv):
4874-                    self.log("testv failed (empty): [%d] %r" % (sharenum,
4875-                                                                testv))
4876-                    testv_is_good = False
4877-                    break
4878-
4879-        # now gather the read vectors, before we do any writes
4880-        read_data = {}
4881-        for sharenum, share in shares.items():
4882-            read_data[sharenum] = share.readv(read_vector)
4883-
4884-        ownerid = 1 # TODO
4885-        expire_time = time.time() + 31*24*60*60   # one month
4886-        lease_info = LeaseInfo(ownerid,
4887-                               renew_secret, cancel_secret,
4888-                               expire_time, self.my_nodeid)
4889-
4890-        if testv_is_good:
4891-            # now apply the write vectors
4892-            for sharenum in test_and_write_vectors:
4893-                (testv, datav, new_length) = test_and_write_vectors[sharenum]
4894-                if new_length == 0:
4895-                    if sharenum in shares:
4896-                        shares[sharenum].unlink()
4897-                else:
4898-                    if sharenum not in shares:
4899-                        # allocate a new share
4900-                        allocated_size = 2000 # arbitrary, really
4901-                        share = self._allocate_slot_share(bucketdir, secrets,
4902-                                                          sharenum,
4903-                                                          allocated_size,
4904-                                                          owner_num=0)
4905-                        shares[sharenum] = share
4906-                    shares[sharenum].writev(datav, new_length)
4907-                    # and update the lease
4908-                    shares[sharenum].add_or_renew_lease(lease_info)
4909-
4910-            if new_length == 0:
4911-                # delete empty bucket directories
4912-                if not os.listdir(bucketdir):
4913-                    os.rmdir(bucketdir)
4914 
4915hunk ./src/allmydata/storage/server.py 308
4916+        try:
4917+            shareset = self.backend.get_shareset(storageindex)
4918+            expiration_time = start + 31*24*60*60   # one month from now
4919+            return shareset.testv_and_readv_and_writev(self, secrets, test_and_write_vectors,
4920+                                                       read_vector, expiration_time)
4921+        finally:
4922+            self.add_latency("writev", time.time() - start)
4923 
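The removed inline logic above checked each share's test vectors before applying any writes; in this patch that check moves behind `shareset.testv_and_readv_and_writev()`. A minimal sketch of the equality-only case (the function name and vector shape are simplified from the original code, which also supported other comparison operators):

```python
def check_testv(share_data, testv):
    """Return True only if every test vector matches the current share
    contents. Each entry is (offset, length, operator, specimen); this
    sketch handles only the "eq" operator."""
    for (offset, length, operator, specimen) in testv:
        observed = share_data[offset:offset + length]
        if operator != "eq" or observed != specimen:
            return False
    return True
```

A write vector is applied only when this returns True for every share named in `test_and_write_vectors`; otherwise the server returns the read data along with a failure flag.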
4924hunk ./src/allmydata/storage/server.py 316
4925-        # all done
4926-        self.add_latency("writev", time.time() - start)
4927-        return (testv_is_good, read_data)
4928-
4929-    def _allocate_slot_share(self, bucketdir, secrets, sharenum,
4930-                             allocated_size, owner_num=0):
4931-        (write_enabler, renew_secret, cancel_secret) = secrets
4932-        my_nodeid = self.my_nodeid
4933-        fileutil.make_dirs(bucketdir)
4934-        filename = os.path.join(bucketdir, "%d" % sharenum)
4935-        share = create_mutable_sharefile(filename, my_nodeid, write_enabler,
4936-                                         self)
4937-        return share
4938-
4939-    def remote_slot_readv(self, storage_index, shares, readv):
4940+    def remote_slot_readv(self, storageindex, shares, readv):
4941         start = time.time()
4942         self.count("readv")
4943hunk ./src/allmydata/storage/server.py 319
4944-        si_s = si_b2a(storage_index)
4945-        lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
4946-                     facility="tahoe.storage", level=log.OPERATIONAL)
4947-        si_dir = storage_index_to_dir(storage_index)
4948-        # shares exist if there is a file for them
4949-        bucketdir = os.path.join(self.sharedir, si_dir)
4950-        if not os.path.isdir(bucketdir):
4951+        si_s = si_b2a(storageindex)
4952+        log.msg("storage: slot_readv %s %s" % (si_s, shares),
4953+                facility="tahoe.storage", level=log.OPERATIONAL)
4954+
4955+        try:
4956+            shareset = self.backend.get_shareset(storageindex)
4957+            return shareset.readv(self, shares, readv)
4958+        finally:
4959             self.add_latency("readv", time.time() - start)
4960hunk ./src/allmydata/storage/server.py 328
4961-            return {}
4962-        datavs = {}
4963-        for sharenum_s in os.listdir(bucketdir):
4964-            try:
4965-                sharenum = int(sharenum_s)
4966-            except ValueError:
4967-                continue
4968-            if sharenum in shares or not shares:
4969-                filename = os.path.join(bucketdir, sharenum_s)
4970-                msf = MutableShareFile(filename, self)
4971-                datavs[sharenum] = msf.readv(readv)
4972-        log.msg("returning shares %s" % (datavs.keys(),),
4973-                facility="tahoe.storage", level=log.NOISY, parent=lp)
4974-        self.add_latency("readv", time.time() - start)
4975-        return datavs
4976 
4977hunk ./src/allmydata/storage/server.py 329
4978-    def remote_advise_corrupt_share(self, share_type, storage_index, shnum,
4979-                                    reason):
4980-        fileutil.make_dirs(self.corruption_advisory_dir)
4981-        now = time_format.iso_utc(sep="T")
4982-        si_s = si_b2a(storage_index)
4983-        # windows can't handle colons in the filename
4984-        fn = os.path.join(self.corruption_advisory_dir,
4985-                          "%s--%s-%d" % (now, si_s, shnum)).replace(":","")
4986-        f = open(fn, "w")
4987-        f.write("report: Share Corruption\n")
4988-        f.write("type: %s\n" % share_type)
4989-        f.write("storage_index: %s\n" % si_s)
4990-        f.write("share_number: %d\n" % shnum)
4991-        f.write("\n")
4992-        f.write(reason)
4993-        f.write("\n")
4994-        f.close()
4995-        log.msg(format=("client claims corruption in (%(share_type)s) " +
4996-                        "%(si)s-%(shnum)d: %(reason)s"),
4997-                share_type=share_type, si=si_s, shnum=shnum, reason=reason,
4998-                level=log.SCARY, umid="SGx2fA")
4999-        return None
5000+    def remote_advise_corrupt_share(self, share_type, storageindex, shnum, reason):
5001+        self.backend.advise_corrupt_share(share_type, storageindex, shnum, reason)
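The hunks above all follow one pattern: each `remote_*` method fetches a shareset from the pluggable backend and records latency in a `finally` block, so timing is captured even when the backend raises. A minimal sketch of that pattern, with hypothetical class and backend names (not the actual Tahoe-LAFS API):

```python
import time

class LatencyTrackingServer:
    """Sketch of the delegation pattern: the server no longer touches the
    filesystem directly; it asks the backend for a shareset and times the
    whole operation via try/finally."""
    def __init__(self, backend):
        self.backend = backend
        self.latencies = {}

    def add_latency(self, category, elapsed):
        self.latencies.setdefault(category, []).append(elapsed)

    def remote_get_buckets(self, storageindex):
        start = time.time()
        try:
            shareset = self.backend.get_shareset(storageindex)
            return dict((s.get_shnum(), s) for s in shareset.get_shares())
        finally:
            self.add_latency("get", time.time() - start)
```

The try/finally arrangement is what lets the patch delete the duplicated `self.add_latency(...)` calls that previously appeared on every exit path.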
5002hunk ./src/allmydata/test/common.py 20
5003 from allmydata.mutable.common import CorruptShareError
5004 from allmydata.mutable.layout import unpack_header
5005 from allmydata.mutable.publish import MutableData
5006-from allmydata.storage.mutable import MutableShareFile
5007+from allmydata.storage.backends.disk.mutable import MutableDiskShare
5008 from allmydata.util import hashutil, log, fileutil, pollmixin
5009 from allmydata.util.assertutil import precondition
5010 from allmydata.util.consumer import download_to_data
5011hunk ./src/allmydata/test/common.py 1297
5012 
5013 def _corrupt_mutable_share_data(data, debug=False):
5014     prefix = data[:32]
5015-    assert prefix == MutableShareFile.MAGIC, "This function is designed to corrupt mutable shares of v1, and the magic number doesn't look right: %r vs %r" % (prefix, MutableShareFile.MAGIC)
5016-    data_offset = MutableShareFile.DATA_OFFSET
5017+    assert prefix == MutableDiskShare.MAGIC, "This function is designed to corrupt mutable shares of v1, and the magic number doesn't look right: %r vs %r" % (prefix, MutableDiskShare.MAGIC)
5018+    data_offset = MutableDiskShare.DATA_OFFSET
5019     sharetype = data[data_offset:data_offset+1]
5020     assert sharetype == "\x00", "non-SDMF mutable shares not supported"
5021     (version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize,
5022hunk ./src/allmydata/test/no_network.py 21
5023 from twisted.application import service
5024 from twisted.internet import defer, reactor
5025 from twisted.python.failure import Failure
5026+from twisted.python.filepath import FilePath
5027 from foolscap.api import Referenceable, fireEventually, RemoteException
5028 from base64 import b32encode
5029hunk ./src/allmydata/test/no_network.py 24
5030+
5031 from allmydata import uri as tahoe_uri
5032 from allmydata.client import Client
5033hunk ./src/allmydata/test/no_network.py 27
5034-from allmydata.storage.server import StorageServer, storage_index_to_dir
5035+from allmydata.storage.server import StorageServer
5036+from allmydata.storage.backends.disk.disk_backend import DiskBackend
5037 from allmydata.util import fileutil, idlib, hashutil
5038 from allmydata.util.hashutil import sha1
5039 from allmydata.test.common_web import HTTPClientGETFactory
5040hunk ./src/allmydata/test/no_network.py 155
5041             seed = server.get_permutation_seed()
5042             return sha1(peer_selection_index + seed).digest()
5043         return sorted(self.get_connected_servers(), key=_permuted)
5044+
5045     def get_connected_servers(self):
5046         return self.client._servers
5047hunk ./src/allmydata/test/no_network.py 158
5048+
5049     def get_nickname_for_serverid(self, serverid):
5050         return None
5051 
5052hunk ./src/allmydata/test/no_network.py 162
5053+    def get_known_servers(self):
5054+        return self.get_connected_servers()
5055+
5056+    def get_all_serverids(self):
5057+        return self.client.get_all_serverids()
5058+
5059+
5060 class NoNetworkClient(Client):
5061     def create_tub(self):
5062         pass
5063hunk ./src/allmydata/test/no_network.py 262
5064 
5065     def make_server(self, i, readonly=False):
5066         serverid = hashutil.tagged_hash("serverid", str(i))[:20]
5067-        serverdir = os.path.join(self.basedir, "servers",
5068-                                 idlib.shortnodeid_b2a(serverid), "storage")
5069-        fileutil.make_dirs(serverdir)
5070-        ss = StorageServer(serverdir, serverid, stats_provider=SimpleStats(),
5071-                           readonly_storage=readonly)
5072+        storagedir = FilePath(self.basedir).child("servers").child(idlib.shortnodeid_b2a(serverid)).child("storage")
5073+
5074+        # The backend will make the storage directory and any necessary parents.
5075+        backend = DiskBackend(storagedir, readonly=readonly)
5076+        ss = StorageServer(serverid, backend, storagedir, stats_provider=SimpleStats())
5077         ss._no_network_server_number = i
5078         return ss
5079 
5080hunk ./src/allmydata/test/no_network.py 276
5081         middleman = service.MultiService()
5082         middleman.setServiceParent(self)
5083         ss.setServiceParent(middleman)
5084-        serverid = ss.my_nodeid
5085+        serverid = ss.get_serverid()
5086         self.servers_by_number[i] = ss
5087         wrapper = wrap_storage_server(ss)
5088         self.wrappers_by_id[serverid] = wrapper
5089hunk ./src/allmydata/test/no_network.py 295
5090         # it's enough to remove the server from c._servers (we don't actually
5091         # have to detach and stopService it)
5092         for i,ss in self.servers_by_number.items():
5093-            if ss.my_nodeid == serverid:
5094+            if ss.get_serverid() == serverid:
5095                 del self.servers_by_number[i]
5096                 break
5097         del self.wrappers_by_id[serverid]
5098hunk ./src/allmydata/test/no_network.py 345
5099     def get_clientdir(self, i=0):
5100         return self.g.clients[i].basedir
5101 
5102+    def get_server(self, i):
5103+        return self.g.servers_by_number[i]
5104+
5105     def get_serverdir(self, i):
5106hunk ./src/allmydata/test/no_network.py 349
5107-        return self.g.servers_by_number[i].storedir
5108+        return self.g.servers_by_number[i].backend.storedir
5109+
5110+    def remove_server(self, i):
5111+        self.g.remove_server(self.g.servers_by_number[i].get_serverid())
5112 
5113     def iterate_servers(self):
5114         for i in sorted(self.g.servers_by_number.keys()):
5115hunk ./src/allmydata/test/no_network.py 357
5116             ss = self.g.servers_by_number[i]
5117-            yield (i, ss, ss.storedir)
5118+            yield (i, ss, ss.backend.storedir)
5119 
5120     def find_uri_shares(self, uri):
5121         si = tahoe_uri.from_string(uri).get_storage_index()
5122hunk ./src/allmydata/test/no_network.py 361
5123-        prefixdir = storage_index_to_dir(si)
5124         shares = []
5125         for i,ss in self.g.servers_by_number.items():
5126hunk ./src/allmydata/test/no_network.py 363
5127-            serverid = ss.my_nodeid
5128-            basedir = os.path.join(ss.sharedir, prefixdir)
5129-            if not os.path.exists(basedir):
5130-                continue
5131-            for f in os.listdir(basedir):
5132-                try:
5133-                    shnum = int(f)
5134-                    shares.append((shnum, serverid, os.path.join(basedir, f)))
5135-                except ValueError:
5136-                    pass
5137+            for share in ss.backend.get_shareset(si).get_shares():
5138+                shares.append((share.get_shnum(), ss.get_serverid(), share._home))
5139         return sorted(shares)
5140 
5141hunk ./src/allmydata/test/no_network.py 367
5142+    def count_leases(self, uri):
5143+        """Return (filename, leasecount) pairs in arbitrary order."""
5144+        si = tahoe_uri.from_string(uri).get_storage_index()
5145+        lease_counts = []
5146+        for i,ss in self.g.servers_by_number.items():
5147+            for share in ss.backend.get_shareset(si).get_shares():
5148+                num_leases = len(list(share.get_leases()))
5149+                lease_counts.append( (share._home.path, num_leases) )
5150+        return lease_counts
5151+
5152     def copy_shares(self, uri):
5153         shares = {}
5154hunk ./src/allmydata/test/no_network.py 379
5155-        for (shnum, serverid, sharefile) in self.find_uri_shares(uri):
5156-            shares[sharefile] = open(sharefile, "rb").read()
5157+        for (shnum, serverid, sharefp) in self.find_uri_shares(uri):
5158+            shares[sharefp.path] = sharefp.getContent()
5159         return shares
5160 
5161hunk ./src/allmydata/test/no_network.py 383
5162+    def copy_share(self, from_share, uri, to_server):
5163+        si = tahoe_uri.from_string(uri).get_storage_index()
5164+        (i_shnum, i_serverid, i_sharefp) = from_share
5165+        shares_dir = to_server.backend.get_shareset(si)._sharehomedir
5166+        i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
5167+
5168     def restore_all_shares(self, shares):
5169hunk ./src/allmydata/test/no_network.py 390
5170-        for sharefile, data in shares.items():
5171-            open(sharefile, "wb").write(data)
5172+        for sharepath, data in shares.items():
5173+            FilePath(sharepath).setContent(data)
5174 
5175hunk ./src/allmydata/test/no_network.py 393
5176-    def delete_share(self, (shnum, serverid, sharefile)):
5177-        os.unlink(sharefile)
5178+    def delete_share(self, (shnum, serverid, sharefp)):
5179+        sharefp.remove()
5180 
5181     def delete_shares_numbered(self, uri, shnums):
5182hunk ./src/allmydata/test/no_network.py 397
5183-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5184+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5185             if i_shnum in shnums:
5186hunk ./src/allmydata/test/no_network.py 399
5187-                os.unlink(i_sharefile)
5188+                i_sharefp.remove()
5189 
5190hunk ./src/allmydata/test/no_network.py 401
5191-    def corrupt_share(self, (shnum, serverid, sharefile), corruptor_function):
5192-        sharedata = open(sharefile, "rb").read()
5193-        corruptdata = corruptor_function(sharedata)
5194-        open(sharefile, "wb").write(corruptdata)
5195+    def corrupt_share(self, (shnum, serverid, sharefp), corruptor_function, debug=False):
5196+        sharedata = sharefp.getContent()
5197+        corruptdata = corruptor_function(sharedata, debug=debug)
5198+        sharefp.setContent(corruptdata)
5199 
5200     def corrupt_shares_numbered(self, uri, shnums, corruptor, debug=False):
5201hunk ./src/allmydata/test/no_network.py 407
5202-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5203+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5204             if i_shnum in shnums:
5205hunk ./src/allmydata/test/no_network.py 409
5206-                sharedata = open(i_sharefile, "rb").read()
5207-                corruptdata = corruptor(sharedata, debug=debug)
5208-                open(i_sharefile, "wb").write(corruptdata)
5209+                self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug)
5210 
5211     def corrupt_all_shares(self, uri, corruptor, debug=False):
5212hunk ./src/allmydata/test/no_network.py 412
5213-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5214-            sharedata = open(i_sharefile, "rb").read()
5215-            corruptdata = corruptor(sharedata, debug=debug)
5216-            open(i_sharefile, "wb").write(corruptdata)
5217+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5218+            self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug)
5219 
5220     def GET(self, urlpath, followRedirect=False, return_response=False,
5221             method="GET", clientnum=0, **kwargs):
5222hunk ./src/allmydata/test/test_download.py 6
5223 # a previous run. This asserts that the current code is capable of decoding
5224 # shares from a previous version.
5225 
5226-import os
5227 from twisted.trial import unittest
5228 from twisted.internet import defer, reactor
5229 from allmydata import uri
5230hunk ./src/allmydata/test/test_download.py 9
5231-from allmydata.storage.server import storage_index_to_dir
5232 from allmydata.util import base32, fileutil, spans, log, hashutil
5233 from allmydata.util.consumer import download_to_data, MemoryConsumer
5234 from allmydata.immutable import upload, layout
5235hunk ./src/allmydata/test/test_download.py 85
5236         u = upload.Data(plaintext, None)
5237         d = self.c0.upload(u)
5238         f = open("stored_shares.py", "w")
5239-        def _created_immutable(ur):
5240-            # write the generated shares and URI to a file, which can then be
5241-            # incorporated into this one next time.
5242-            f.write('immutable_uri = "%s"\n' % ur.uri)
5243-            f.write('immutable_shares = {\n')
5244-            si = uri.from_string(ur.uri).get_storage_index()
5245-            si_dir = storage_index_to_dir(si)
5246+
5247+        def _write_py(u):
5248+            si = uri.from_string(u).get_storage_index()
5249             for (i,ss,ssdir) in self.iterate_servers():
5250hunk ./src/allmydata/test/test_download.py 89
5251-                sharedir = os.path.join(ssdir, "shares", si_dir)
5252                 shares = {}
5253hunk ./src/allmydata/test/test_download.py 90
5254-                for fn in os.listdir(sharedir):
5255-                    shnum = int(fn)
5256-                    sharedata = open(os.path.join(sharedir, fn), "rb").read()
5257-                    shares[shnum] = sharedata
5258-                fileutil.rm_dir(sharedir)
5259+                shareset = ss.backend.get_shareset(si)
5260+                for share in shareset.get_shares():
5261+                    sharedata = share._home.getContent()
5262+                    shares[share.get_shnum()] = sharedata
5263+
5264+                fileutil.fp_remove(shareset._sharehomedir)
5265                 if shares:
5266                     f.write(' %d: { # client[%d]\n' % (i, i))
5267                     for shnum in sorted(shares.keys()):
5268hunk ./src/allmydata/test/test_download.py 103
5269                                 (shnum, base32.b2a(shares[shnum])))
5270                     f.write('    },\n')
5271             f.write('}\n')
5272-            f.write('\n')
5273 
5274hunk ./src/allmydata/test/test_download.py 104
5275+        def _created_immutable(ur):
5276+            # write the generated shares and URI to a file, which can then be
5277+            # incorporated into this one next time.
5278+            f.write('immutable_uri = "%s"\n' % ur.uri)
5279+            f.write('immutable_shares = {\n')
5280+            _write_py(ur.uri)
5281+            f.write('\n')
5282         d.addCallback(_created_immutable)
5283 
5284         d.addCallback(lambda ignored:
5285hunk ./src/allmydata/test/test_download.py 118
5286         def _created_mutable(n):
5287             f.write('mutable_uri = "%s"\n' % n.get_uri())
5288             f.write('mutable_shares = {\n')
5289-            si = uri.from_string(n.get_uri()).get_storage_index()
5290-            si_dir = storage_index_to_dir(si)
5291-            for (i,ss,ssdir) in self.iterate_servers():
5292-                sharedir = os.path.join(ssdir, "shares", si_dir)
5293-                shares = {}
5294-                for fn in os.listdir(sharedir):
5295-                    shnum = int(fn)
5296-                    sharedata = open(os.path.join(sharedir, fn), "rb").read()
5297-                    shares[shnum] = sharedata
5298-                fileutil.rm_dir(sharedir)
5299-                if shares:
5300-                    f.write(' %d: { # client[%d]\n' % (i, i))
5301-                    for shnum in sorted(shares.keys()):
5302-                        f.write('  %d: base32.a2b("%s"),\n' %
5303-                                (shnum, base32.b2a(shares[shnum])))
5304-                    f.write('    },\n')
5305-            f.write('}\n')
5306-
5307-            f.close()
5308+            _write_py(n.get_uri())
5309         d.addCallback(_created_mutable)
5310 
5311         def _done(ignored):
5312hunk ./src/allmydata/test/test_download.py 123
5313             f.close()
5314-        d.addCallback(_done)
5315+        d.addBoth(_done)
5316 
5317         return d
5318 
5319hunk ./src/allmydata/test/test_download.py 127
5320+    def _write_shares(self, uri_s, shares):
5321+        si = uri.from_string(uri_s).get_storage_index()
5322+        for i in shares:
5323+            shares_for_server = shares[i]
5324+            for shnum in shares_for_server:
5325+                share_dir = self.get_server(i).backend.get_shareset(si)._sharehomedir
5326+                fileutil.fp_make_dirs(share_dir)
5327+                share_dir.child(str(shnum)).setContent(shares_for_server[shnum])
5328+
5329     def load_shares(self, ignored=None):
5330         # this uses the data generated by create_shares() to populate the
5331         # storage servers with pre-generated shares
5332hunk ./src/allmydata/test/test_download.py 139
5333-        si = uri.from_string(immutable_uri).get_storage_index()
5334-        si_dir = storage_index_to_dir(si)
5335-        for i in immutable_shares:
5336-            shares = immutable_shares[i]
5337-            for shnum in shares:
5338-                dn = os.path.join(self.get_serverdir(i), "shares", si_dir)
5339-                fileutil.make_dirs(dn)
5340-                fn = os.path.join(dn, str(shnum))
5341-                f = open(fn, "wb")
5342-                f.write(shares[shnum])
5343-                f.close()
5344-
5345-        si = uri.from_string(mutable_uri).get_storage_index()
5346-        si_dir = storage_index_to_dir(si)
5347-        for i in mutable_shares:
5348-            shares = mutable_shares[i]
5349-            for shnum in shares:
5350-                dn = os.path.join(self.get_serverdir(i), "shares", si_dir)
5351-                fileutil.make_dirs(dn)
5352-                fn = os.path.join(dn, str(shnum))
5353-                f = open(fn, "wb")
5354-                f.write(shares[shnum])
5355-                f.close()
5356+        self._write_shares(immutable_uri, immutable_shares)
5357+        self._write_shares(mutable_uri, mutable_shares)
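The `immutable_shares`/`mutable_shares` structures that `_write_shares` walks are nested dicts mapping server number to a per-server dict of share number to share data. A small illustration of that layout with made-up values:

```python
# Illustrative walk over the {servernum: {shnum: data}} layout used by
# the pre-generated share dicts above (values here are invented).
shares = {
    0: {0: b"a" * 10, 2: b"b" * 10},   # client[0] holds shnums 0 and 2
    1: {1: b"c" * 10},                 # client[1] holds shnum 1
}
flat = []
for servernum in shares:
    shares_for_server = shares[servernum]
    for shnum in shares_for_server:
        flat.append((servernum, shnum, shares_for_server[shnum]))
```

Note that the inner lookup must go through `shares_for_server`, not the outer `shares` dict, since the outer keys are server numbers rather than share numbers.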
5358 
5359     def download_immutable(self, ignored=None):
5360         n = self.c0.create_node_from_uri(immutable_uri)
5361hunk ./src/allmydata/test/test_download.py 183
5362 
5363         self.load_shares()
5364         si = uri.from_string(immutable_uri).get_storage_index()
5365-        si_dir = storage_index_to_dir(si)
5366 
5367         n = self.c0.create_node_from_uri(immutable_uri)
5368         d = download_to_data(n)
5369hunk ./src/allmydata/test/test_download.py 198
5370                 for clientnum in immutable_shares:
5371                     for shnum in immutable_shares[clientnum]:
5372                         if s._shnum == shnum:
5373-                            fn = os.path.join(self.get_serverdir(clientnum),
5374-                                              "shares", si_dir, str(shnum))
5375-                            os.unlink(fn)
5376+                            share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5377+                            share_dir.child(str(shnum)).remove()
5378         d.addCallback(_clobber_some_shares)
5379         d.addCallback(lambda ign: download_to_data(n))
5380         d.addCallback(_got_data)
5381hunk ./src/allmydata/test/test_download.py 212
5382                 for shnum in immutable_shares[clientnum]:
5383                     if shnum == save_me:
5384                         continue
5385-                    fn = os.path.join(self.get_serverdir(clientnum),
5386-                                      "shares", si_dir, str(shnum))
5387-                    if os.path.exists(fn):
5388-                        os.unlink(fn)
5389+                    share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5390+                    fileutil.fp_remove(share_dir.child(str(shnum)))
5391             # now the download should fail with NotEnoughSharesError
5392             return self.shouldFail(NotEnoughSharesError, "1shares", None,
5393                                    download_to_data, n)
5394hunk ./src/allmydata/test/test_download.py 223
5395             # delete the last remaining share
5396             for clientnum in immutable_shares:
5397                 for shnum in immutable_shares[clientnum]:
5398-                    fn = os.path.join(self.get_serverdir(clientnum),
5399-                                      "shares", si_dir, str(shnum))
5400-                    if os.path.exists(fn):
5401-                        os.unlink(fn)
5402+                    share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5403+                    share_dir.child(str(shnum)).remove()
5404             # now a new download should fail with NoSharesError. We want a
5405             # new ImmutableFileNode so it will forget about the old shares.
5406             # If we merely called create_node_from_uri() without first
5407hunk ./src/allmydata/test/test_download.py 801
5408         # will report two shares, and the ShareFinder will handle the
5409         # duplicate by attaching both to the same CommonShare instance.
5410         si = uri.from_string(immutable_uri).get_storage_index()
5411-        si_dir = storage_index_to_dir(si)
5412-        sh0_file = [sharefile
5413-                    for (shnum, serverid, sharefile)
5414-                    in self.find_uri_shares(immutable_uri)
5415-                    if shnum == 0][0]
5416-        sh0_data = open(sh0_file, "rb").read()
5417+        sh0_fp = [sharefp for (shnum, serverid, sharefp)
5418+                          in self.find_uri_shares(immutable_uri)
5419+                          if shnum == 0][0]
5420+        sh0_data = sh0_fp.getContent()
5421         for clientnum in immutable_shares:
5422             if 0 in immutable_shares[clientnum]:
5423                 continue
5424hunk ./src/allmydata/test/test_download.py 808
5425-            cdir = self.get_serverdir(clientnum)
5426-            target = os.path.join(cdir, "shares", si_dir, "0")
5427-            outf = open(target, "wb")
5428-            outf.write(sh0_data)
5429-            outf.close()
5430+            cdir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5431+            fileutil.fp_make_dirs(cdir)
5432+            cdir.child("0").setContent(sh0_data)
5433 
5434         d = self.download_immutable()
5435         return d
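[editor's note, not part of the patch] The hunks above convert os.path/open file handling to Twisted FilePath methods: child(), setContent(), getContent(), remove(), plus the new fileutil.fp_make_dirs(). A rough equivalence sketch using the stdlib pathlib (the left-hand names are from the patch; the pathlib mapping is my own, for readers without Twisted at hand):

```python
from pathlib import Path
import tempfile

base = Path(tempfile.mkdtemp())

# share_dir.child(str(shnum))       ~  share_dir / str(shnum)
# fileutil.fp_make_dirs(share_dir)  ~  share_dir.mkdir(parents=True, exist_ok=True)
# fp.setContent(data)               ~  fp.write_bytes(data)
# fp.getContent()                   ~  fp.read_bytes()
# fp.remove()                       ~  fp.unlink()
share_dir = base / "shares" / "si1"
share_dir.mkdir(parents=True, exist_ok=True)
sh0 = share_dir / "0"
sh0.write_bytes(b"\xff" * 10)
data = sh0.read_bytes()
sh0.unlink()
```

Unlike the old os.path code, both APIs treat the path as a value object, so tests stop string-concatenating directory names.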
5436hunk ./src/allmydata/test/test_encode.py 134
5437         d.addCallback(_try)
5438         return d
5439 
5440-    def get_share_hashes(self, at_least_these=()):
5441+    def get_share_hashes(self):
5442         d = self._start()
5443         def _try(unused=None):
5444             if self.mode == "bad sharehash":
5445hunk ./src/allmydata/test/test_hung_server.py 3
5446 # -*- coding: utf-8 -*-
5447 
5448-import os, shutil
5449 from twisted.trial import unittest
5450 from twisted.internet import defer
5451hunk ./src/allmydata/test/test_hung_server.py 5
5452-from allmydata import uri
5453+
5454 from allmydata.util.consumer import download_to_data
5455 from allmydata.immutable import upload
5456 from allmydata.mutable.common import UnrecoverableFileError
5457hunk ./src/allmydata/test/test_hung_server.py 10
5458 from allmydata.mutable.publish import MutableData
5459-from allmydata.storage.common import storage_index_to_dir
5460 from allmydata.test.no_network import GridTestMixin
5461 from allmydata.test.common import ShouldFailMixin
5462 from allmydata.util.pollmixin import PollMixin
5463hunk ./src/allmydata/test/test_hung_server.py 18
5464 immutable_plaintext = "data" * 10000
5465 mutable_plaintext = "muta" * 10000
5466 
5467+
5468 class HungServerDownloadTest(GridTestMixin, ShouldFailMixin, PollMixin,
5469                              unittest.TestCase):
5470     # Many of these tests take around 60 seconds on François's ARM buildslave:
5471hunk ./src/allmydata/test/test_hung_server.py 31
5472     timeout = 240
5473 
5474     def _break(self, servers):
5475-        for (id, ss) in servers:
5476-            self.g.break_server(id)
5477+        for ss in servers:
5478+            self.g.break_server(ss.get_serverid())
5479 
5480     def _hang(self, servers, **kwargs):
5481hunk ./src/allmydata/test/test_hung_server.py 35
5482-        for (id, ss) in servers:
5483-            self.g.hang_server(id, **kwargs)
5484+        for ss in servers:
5485+            self.g.hang_server(ss.get_serverid(), **kwargs)
5486 
5487     def _unhang(self, servers, **kwargs):
5488hunk ./src/allmydata/test/test_hung_server.py 39
5489-        for (id, ss) in servers:
5490-            self.g.unhang_server(id, **kwargs)
5491+        for ss in servers:
5492+            self.g.unhang_server(ss.get_serverid(), **kwargs)
5493 
5494     def _hang_shares(self, shnums, **kwargs):
5495         # hang all servers who are holding the given shares
5496hunk ./src/allmydata/test/test_hung_server.py 52
5497                     hung_serverids.add(i_serverid)
5498 
5499     def _delete_all_shares_from(self, servers):
5500-        serverids = [id for (id, ss) in servers]
5501-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5502+        serverids = [ss.get_serverid() for ss in servers]
5503+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5504             if i_serverid in serverids:
5505hunk ./src/allmydata/test/test_hung_server.py 55
5506-                os.unlink(i_sharefile)
5507+                i_sharefp.remove()
5508 
5509     def _corrupt_all_shares_in(self, servers, corruptor_func):
5510hunk ./src/allmydata/test/test_hung_server.py 58
5511-        serverids = [id for (id, ss) in servers]
5512-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5513+        serverids = [ss.get_serverid() for ss in servers]
5514+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5515             if i_serverid in serverids:
5516hunk ./src/allmydata/test/test_hung_server.py 61
5517-                self._corrupt_share((i_shnum, i_sharefile), corruptor_func)
5518+                self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor_func)
5519 
5520     def _copy_all_shares_from(self, from_servers, to_server):
5521hunk ./src/allmydata/test/test_hung_server.py 64
5522-        serverids = [id for (id, ss) in from_servers]
5523-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5524+        serverids = [ss.get_serverid() for ss in from_servers]
5525+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5526             if i_serverid in serverids:
5527hunk ./src/allmydata/test/test_hung_server.py 67
5528-                self._copy_share((i_shnum, i_sharefile), to_server)
5529+                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
5530 
5531hunk ./src/allmydata/test/test_hung_server.py 69
5532-    def _copy_share(self, share, to_server):
5533-        (sharenum, sharefile) = share
5534-        (id, ss) = to_server
5535-        shares_dir = os.path.join(ss.original.storedir, "shares")
5536-        si = uri.from_string(self.uri).get_storage_index()
5537-        si_dir = os.path.join(shares_dir, storage_index_to_dir(si))
5538-        if not os.path.exists(si_dir):
5539-            os.makedirs(si_dir)
5540-        new_sharefile = os.path.join(si_dir, str(sharenum))
5541-        shutil.copy(sharefile, new_sharefile)
5542         self.shares = self.find_uri_shares(self.uri)
5543hunk ./src/allmydata/test/test_hung_server.py 70
5544-        # Make sure that the storage server has the share.
5545-        self.failUnless((sharenum, ss.original.my_nodeid, new_sharefile)
5546-                        in self.shares)
5547-
5548-    def _corrupt_share(self, share, corruptor_func):
5549-        (sharenum, sharefile) = share
5550-        data = open(sharefile, "rb").read()
5551-        newdata = corruptor_func(data)
5552-        os.unlink(sharefile)
5553-        wf = open(sharefile, "wb")
5554-        wf.write(newdata)
5555-        wf.close()
5556 
5557     def _set_up(self, mutable, testdir, num_clients=1, num_servers=10):
5558         self.mutable = mutable
5559hunk ./src/allmydata/test/test_hung_server.py 82
5560 
5561         self.c0 = self.g.clients[0]
5562         nm = self.c0.nodemaker
5563-        self.servers = sorted([(s.get_serverid(), s.get_rref())
5564-                               for s in nm.storage_broker.get_connected_servers()])
5565+        unsorted = [(s.get_serverid(), s.get_rref()) for s in nm.storage_broker.get_connected_servers()]
5566+        self.servers = [ss for (id, ss) in sorted(unsorted)]
5567         self.servers = self.servers[5:] + self.servers[:5]
5568 
5569         if mutable:
5570hunk ./src/allmydata/test/test_hung_server.py 244
5571             # stuck-but-not-overdue, and 4 live requests. All 4 live requests
5572             # will retire before the download is complete and the ShareFinder
5573             # is shut off. That will leave 4 OVERDUE and 1
5574-            # stuck-but-not-overdue, for a total of 5 requests in in
5575+            # stuck-but-not-overdue, for a total of 5 requests in
5576             # _sf.pending_requests
5577             for t in self._sf.overdue_timers.values()[:4]:
5578                 t.reset(-1.0)
5579hunk ./src/allmydata/test/test_mutable.py 21
5580 from foolscap.api import eventually, fireEventually
5581 from foolscap.logging import log
5582 from allmydata.storage_client import StorageFarmBroker
5583-from allmydata.storage.common import storage_index_to_dir
5584 from allmydata.scripts import debug
5585 
5586 from allmydata.mutable.filenode import MutableFileNode, BackoffAgent
5587hunk ./src/allmydata/test/test_mutable.py 3669
5588         # Now execute each assignment by writing the storage.
5589         for (share, servernum) in assignments:
5590             sharedata = base64.b64decode(self.sdmf_old_shares[share])
5591-            storedir = self.get_serverdir(servernum)
5592-            storage_path = os.path.join(storedir, "shares",
5593-                                        storage_index_to_dir(si))
5594-            fileutil.make_dirs(storage_path)
5595-            fileutil.write(os.path.join(storage_path, "%d" % share),
5596-                           sharedata)
5597+            storage_dir = self.get_server(servernum).backend.get_shareset(si).sharehomedir
5598+            fileutil.fp_make_dirs(storage_dir)
5599+            storage_dir.child("%d" % share).setContent(sharedata)
5600         # ...and verify that the shares are there.
5601         shares = self.find_uri_shares(self.sdmf_old_cap)
5602         assert len(shares) == 10
5603hunk ./src/allmydata/test/test_provisioning.py 13
5604 from nevow import inevow
5605 from zope.interface import implements
5606 
5607-class MyRequest:
5608+class MockRequest:
5609     implements(inevow.IRequest)
5610     pass
5611 
5612hunk ./src/allmydata/test/test_provisioning.py 26
5613     def test_load(self):
5614         pt = provisioning.ProvisioningTool()
5615         self.fields = {}
5616-        #r = MyRequest()
5617+        #r = MockRequest()
5618         #r.fields = self.fields
5619         #ctx = RequestContext()
5620         #unfilled = pt.renderSynchronously(ctx)
5621hunk ./src/allmydata/test/test_repairer.py 537
5622         # happiness setting.
5623         def _delete_some_servers(ignored):
5624             for i in xrange(7):
5625-                self.g.remove_server(self.g.servers_by_number[i].my_nodeid)
5626+                self.remove_server(i)
5627 
5628             assert len(self.g.servers_by_number) == 3
5629 
5630hunk ./src/allmydata/test/test_storage.py 14
5631 from allmydata import interfaces
5632 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
5633 from allmydata.storage.server import StorageServer
5634-from allmydata.storage.mutable import MutableShareFile
5635-from allmydata.storage.immutable import BucketWriter, BucketReader
5636-from allmydata.storage.common import DataTooLargeError, storage_index_to_dir, \
5637+from allmydata.storage.backends.disk.mutable import MutableDiskShare
5638+from allmydata.storage.bucket import BucketWriter, BucketReader
5639+from allmydata.storage.common import DataTooLargeError, \
5640      UnknownMutableContainerVersionError, UnknownImmutableContainerVersionError
5641 from allmydata.storage.lease import LeaseInfo
5642 from allmydata.storage.crawler import BucketCountingCrawler
5643hunk ./src/allmydata/test/test_storage.py 474
5644         w[0].remote_write(0, "\xff"*10)
5645         w[0].remote_close()
5646 
5647-        fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0")
5648-        f = open(fn, "rb+")
5649+        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
5650+        f = fp.open("rb+")
5651         f.seek(0)
5652         f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1
5653         f.close()
5654hunk ./src/allmydata/test/test_storage.py 814
5655     def test_bad_magic(self):
5656         ss = self.create("test_bad_magic")
5657         self.allocate(ss, "si1", "we1", self._lease_secret.next(), set([0]), 10)
5658-        fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0")
5659-        f = open(fn, "rb+")
5660+        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
5661+        f = fp.open("rb+")
5662         f.seek(0)
5663         f.write("BAD MAGIC")
5664         f.close()
5665hunk ./src/allmydata/test/test_storage.py 842
5666 
5667         # Trying to make the container too large (by sending a write vector
5668         # whose offset is too high) will raise an exception.
5669-        TOOBIG = MutableShareFile.MAX_SIZE + 10
5670+        TOOBIG = MutableDiskShare.MAX_SIZE + 10
5671         self.failUnlessRaises(DataTooLargeError,
5672                               rstaraw, "si1", secrets,
5673                               {0: ([], [(TOOBIG,data)], None)},
5674hunk ./src/allmydata/test/test_storage.py 1229
5675 
5676         # create a random non-numeric file in the bucket directory, to
5677         # exercise the code that's supposed to ignore those.
5678-        bucket_dir = os.path.join(self.workdir("test_leases"),
5679-                                  "shares", storage_index_to_dir("si1"))
5680-        f = open(os.path.join(bucket_dir, "ignore_me.txt"), "w")
5681-        f.write("you ought to be ignoring me\n")
5682-        f.close()
5683+        bucket_dir = ss.backend.get_shareset("si1").sharehomedir
5684+        bucket_dir.child("ignore_me.txt").setContent("you ought to be ignoring me\n")
5685 
5686hunk ./src/allmydata/test/test_storage.py 1232
5687-        s0 = MutableShareFile(os.path.join(bucket_dir, "0"))
5688+        s0 = MutableDiskShare(bucket_dir.child("0"))
5689         self.failUnlessEqual(len(list(s0.get_leases())), 1)
5690 
5691         # add-lease on a missing storage index is silently ignored
5692hunk ./src/allmydata/test/test_storage.py 3118
5693         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
5694 
5695         # add a non-sharefile to exercise another code path
5696-        fn = os.path.join(ss.sharedir,
5697-                          storage_index_to_dir(immutable_si_0),
5698-                          "not-a-share")
5699-        f = open(fn, "wb")
5700-        f.write("I am not a share.\n")
5701-        f.close()
5702+        fp = ss.backend.get_shareset(immutable_si_0).sharehomedir.child("not-a-share")
5703+        fp.setContent("I am not a share.\n")
5704 
5705         # this is before the crawl has started, so we're not in a cycle yet
5706         initial_state = lc.get_state()
5707hunk ./src/allmydata/test/test_storage.py 3282
5708     def test_expire_age(self):
5709         basedir = "storage/LeaseCrawler/expire_age"
5710         fileutil.make_dirs(basedir)
5711-        # setting expiration_time to 2000 means that any lease which is more
5712-        # than 2000s old will be expired.
5713-        ss = InstrumentedStorageServer(basedir, "\x00" * 20,
5714-                                       expiration_enabled=True,
5715-                                       expiration_mode="age",
5716-                                       expiration_override_lease_duration=2000)
5717+        # setting 'override_lease_duration' to 2000 means that any lease that
5718+        # is more than 2000 seconds old will be expired.
5719+        expiration_policy = {
5720+            'enabled': True,
5721+            'mode': 'age',
5722+            'override_lease_duration': 2000,
5723+            'sharetypes': ('mutable', 'immutable'),
5724+        }
5725+        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
5726         # make it start sooner than usual.
5727         lc = ss.lease_checker
5728         lc.slow_start = 0
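[editor's note, not part of the patch] These hunks collapse the individual expiration_* keyword arguments into a single expiration_policy dict. A minimal validator sketch for the dict shape used throughout these tests (the helper name and checks are illustrative only; the real StorageServer may accept more keys or apply defaults):

```python
def check_expiration_policy(policy):
    # Keys and values as exercised by the tests in this patch.
    assert isinstance(policy.get('enabled'), bool)
    mode = policy.get('mode')
    assert mode in ('age', 'cutoff-date')
    if mode == 'age':
        # negative durations are allowed: the no_st_blocks test uses -1000
        assert isinstance(policy['override_lease_duration'], int)
    else:
        # seconds since the epoch, e.g. int(time.time() - 2000)
        assert isinstance(policy['cutoff_date'], int)
    assert set(policy['sharetypes']) <= set(['mutable', 'immutable'])
    return True

policy = {
    'enabled': True,
    'mode': 'age',
    'override_lease_duration': 2000,
    'sharetypes': ('mutable', 'immutable'),
}
ok = check_expiration_policy(policy)
```

Passing one dict rather than five keywords keeps the constructor signature stable as further policy options are added.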
5729hunk ./src/allmydata/test/test_storage.py 3423
5730     def test_expire_cutoff_date(self):
5731         basedir = "storage/LeaseCrawler/expire_cutoff_date"
5732         fileutil.make_dirs(basedir)
5733-        # setting cutoff-date to 2000 seconds ago means that any lease which
5734-        # is more than 2000s old will be expired.
5735+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5736+        # is more than 2000 seconds old will be expired.
5737         now = time.time()
5738         then = int(now - 2000)
5739hunk ./src/allmydata/test/test_storage.py 3427
5740-        ss = InstrumentedStorageServer(basedir, "\x00" * 20,
5741-                                       expiration_enabled=True,
5742-                                       expiration_mode="cutoff-date",
5743-                                       expiration_cutoff_date=then)
5744+        expiration_policy = {
5745+            'enabled': True,
5746+            'mode': 'cutoff-date',
5747+            'cutoff_date': then,
5748+            'sharetypes': ('mutable', 'immutable'),
5749+        }
5750+        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
5751         # make it start sooner than usual.
5752         lc = ss.lease_checker
5753         lc.slow_start = 0
5754hunk ./src/allmydata/test/test_storage.py 3575
5755     def test_only_immutable(self):
5756         basedir = "storage/LeaseCrawler/only_immutable"
5757         fileutil.make_dirs(basedir)
5758+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5759+        # is more than 2000 seconds old will be expired.
5760         now = time.time()
5761         then = int(now - 2000)
5762hunk ./src/allmydata/test/test_storage.py 3579
5763-        ss = StorageServer(basedir, "\x00" * 20,
5764-                           expiration_enabled=True,
5765-                           expiration_mode="cutoff-date",
5766-                           expiration_cutoff_date=then,
5767-                           expiration_sharetypes=("immutable",))
5768+        expiration_policy = {
5769+            'enabled': True,
5770+            'mode': 'cutoff-date',
5771+            'cutoff_date': then,
5772+            'sharetypes': ('immutable',),
5773+        }
5774+        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
5775         lc = ss.lease_checker
5776         lc.slow_start = 0
5777         webstatus = StorageStatus(ss)
5778hunk ./src/allmydata/test/test_storage.py 3636
5779     def test_only_mutable(self):
5780         basedir = "storage/LeaseCrawler/only_mutable"
5781         fileutil.make_dirs(basedir)
5782+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5783+        # is more than 2000 seconds old will be expired.
5784         now = time.time()
5785         then = int(now - 2000)
5786hunk ./src/allmydata/test/test_storage.py 3640
5787-        ss = StorageServer(basedir, "\x00" * 20,
5788-                           expiration_enabled=True,
5789-                           expiration_mode="cutoff-date",
5790-                           expiration_cutoff_date=then,
5791-                           expiration_sharetypes=("mutable",))
5792+        expiration_policy = {
5793+            'enabled': True,
5794+            'mode': 'cutoff-date',
5795+            'cutoff_date': then,
5796+            'sharetypes': ('mutable',),
5797+        }
5798+        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
5799         lc = ss.lease_checker
5800         lc.slow_start = 0
5801         webstatus = StorageStatus(ss)
5802hunk ./src/allmydata/test/test_storage.py 3819
5803     def test_no_st_blocks(self):
5804         basedir = "storage/LeaseCrawler/no_st_blocks"
5805         fileutil.make_dirs(basedir)
5806-        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20,
5807-                                        expiration_mode="age",
5808-                                        expiration_override_lease_duration=-1000)
5809-        # a negative expiration_time= means the "configured-"
5810+        # A negative 'override_lease_duration' means that the "configured-"
5811         # space-recovered counts will be non-zero, since all shares will have
5812hunk ./src/allmydata/test/test_storage.py 3821
5813-        # expired by then
5814+        # expired by then.
5815+        expiration_policy = {
5816+            'enabled': True,
5817+            'mode': 'age',
5818+            'override_lease_duration': -1000,
5819+            'sharetypes': ('mutable', 'immutable'),
5820+        }
5821+        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20, expiration_policy)
5822 
5823         # make it start sooner than usual.
5824         lc = ss.lease_checker
5825hunk ./src/allmydata/test/test_storage.py 3877
5826         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
5827         first = min(self.sis)
5828         first_b32 = base32.b2a(first)
5829-        fn = os.path.join(ss.sharedir, storage_index_to_dir(first), "0")
5830-        f = open(fn, "rb+")
5831+        fp = ss.backend.get_shareset(first).sharehomedir.child("0")
5832+        f = fp.open("rb+")
5833         f.seek(0)
5834         f.write("BAD MAGIC")
5835         f.close()
5836hunk ./src/allmydata/test/test_storage.py 3890
5837 
5838         # also create an empty bucket
5839         empty_si = base32.b2a("\x04"*16)
5840-        empty_bucket_dir = os.path.join(ss.sharedir,
5841-                                        storage_index_to_dir(empty_si))
5842-        fileutil.make_dirs(empty_bucket_dir)
5843+        empty_bucket_dir = ss.backend.get_shareset(empty_si).sharehomedir
5844+        fileutil.fp_make_dirs(empty_bucket_dir)
5845 
5846         ss.setServiceParent(self.s)
5847 
5848hunk ./src/allmydata/test/test_system.py 10
5849 
5850 import allmydata
5851 from allmydata import uri
5852-from allmydata.storage.mutable import MutableShareFile
5853+from allmydata.storage.backends.disk.mutable import MutableDiskShare
5854 from allmydata.storage.server import si_a2b
5855 from allmydata.immutable import offloaded, upload
5856 from allmydata.immutable.literal import LiteralFileNode
5857hunk ./src/allmydata/test/test_system.py 421
5858         return shares
5859 
5860     def _corrupt_mutable_share(self, filename, which):
5861-        msf = MutableShareFile(filename)
5862+        msf = MutableDiskShare(filename)
5863         datav = msf.readv([ (0, 1000000) ])
5864         final_share = datav[0]
5865         assert len(final_share) < 1000000 # ought to be truncated
5866hunk ./src/allmydata/test/test_upload.py 22
5867 from allmydata.util.happinessutil import servers_of_happiness, \
5868                                          shares_by_server, merge_servers
5869 from allmydata.storage_client import StorageFarmBroker
5870-from allmydata.storage.server import storage_index_to_dir
5871 
5872 MiB = 1024*1024
5873 
5874hunk ./src/allmydata/test/test_upload.py 821
5875 
5876     def _copy_share_to_server(self, share_number, server_number):
5877         ss = self.g.servers_by_number[server_number]
5878-        # Copy share i from the directory associated with the first
5879-        # storage server to the directory associated with this one.
5880-        assert self.g, "I tried to find a grid at self.g, but failed"
5881-        assert self.shares, "I tried to find shares at self.shares, but failed"
5882-        old_share_location = self.shares[share_number][2]
5883-        new_share_location = os.path.join(ss.storedir, "shares")
5884-        si = uri.from_string(self.uri).get_storage_index()
5885-        new_share_location = os.path.join(new_share_location,
5886-                                          storage_index_to_dir(si))
5887-        if not os.path.exists(new_share_location):
5888-            os.makedirs(new_share_location)
5889-        new_share_location = os.path.join(new_share_location,
5890-                                          str(share_number))
5891-        if old_share_location != new_share_location:
5892-            shutil.copy(old_share_location, new_share_location)
5893-        shares = self.find_uri_shares(self.uri)
5894-        # Make sure that the storage server has the share.
5895-        self.failUnless((share_number, ss.my_nodeid, new_share_location)
5896-                        in shares)
5897+        self.copy_share(self.shares[share_number], ss)
5898 
5899     def _setup_grid(self):
5900         """
5901hunk ./src/allmydata/test/test_upload.py 1103
5902                 self._copy_share_to_server(i, 2)
5903         d.addCallback(_copy_shares)
5904         # Remove the first server, and add a placeholder with share 0
5905-        d.addCallback(lambda ign:
5906-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5907+        d.addCallback(lambda ign: self.remove_server(0))
5908         d.addCallback(lambda ign:
5909             self._add_server_with_share(server_number=4, share_number=0))
5910         # Now try uploading.
5911hunk ./src/allmydata/test/test_upload.py 1134
5912         d.addCallback(lambda ign:
5913             self._add_server(server_number=4))
5914         d.addCallback(_copy_shares)
5915-        d.addCallback(lambda ign:
5916-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5917+        d.addCallback(lambda ign: self.remove_server(0))
5918         d.addCallback(_reset_encoding_parameters)
5919         d.addCallback(lambda client:
5920             client.upload(upload.Data("data" * 10000, convergence="")))
5921hunk ./src/allmydata/test/test_upload.py 1196
5922                 self._copy_share_to_server(i, 2)
5923         d.addCallback(_copy_shares)
5924         # Remove server 0, and add another in its place
5925-        d.addCallback(lambda ign:
5926-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5927+        d.addCallback(lambda ign: self.remove_server(0))
5928         d.addCallback(lambda ign:
5929             self._add_server_with_share(server_number=4, share_number=0,
5930                                         readonly=True))
5931hunk ./src/allmydata/test/test_upload.py 1237
5932             for i in xrange(1, 10):
5933                 self._copy_share_to_server(i, 2)
5934         d.addCallback(_copy_shares)
5935-        d.addCallback(lambda ign:
5936-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5937+        d.addCallback(lambda ign: self.remove_server(0))
5938         def _reset_encoding_parameters(ign, happy=4):
5939             client = self.g.clients[0]
5940             client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
5941hunk ./src/allmydata/test/test_upload.py 1273
5942         # remove the original server
5943         # (necessary to ensure that the Tahoe2ServerSelector will distribute
5944         #  all the shares)
5945-        def _remove_server(ign):
5946-            server = self.g.servers_by_number[0]
5947-            self.g.remove_server(server.my_nodeid)
5948-        d.addCallback(_remove_server)
5949+        d.addCallback(lambda ign: self.remove_server(0))
5950         # This should succeed; we still have 4 servers, and the
5951         # happiness of the upload is 4.
5952         d.addCallback(lambda ign:
5953hunk ./src/allmydata/test/test_upload.py 1285
5954         d.addCallback(lambda ign:
5955             self._setup_and_upload())
5956         d.addCallback(_do_server_setup)
5957-        d.addCallback(_remove_server)
5958+        d.addCallback(lambda ign: self.remove_server(0))
5959         d.addCallback(lambda ign:
5960             self.shouldFail(UploadUnhappinessError,
5961                             "test_dropped_servers_in_encoder",
5962hunk ./src/allmydata/test/test_upload.py 1307
5963             self._add_server_with_share(4, 7, readonly=True)
5964             self._add_server_with_share(5, 8, readonly=True)
5965         d.addCallback(_do_server_setup_2)
5966-        d.addCallback(_remove_server)
5967+        d.addCallback(lambda ign: self.remove_server(0))
5968         d.addCallback(lambda ign:
5969             self._do_upload_with_broken_servers(1))
5970         d.addCallback(_set_basedir)
5971hunk ./src/allmydata/test/test_upload.py 1314
5972         d.addCallback(lambda ign:
5973             self._setup_and_upload())
5974         d.addCallback(_do_server_setup_2)
5975-        d.addCallback(_remove_server)
5976+        d.addCallback(lambda ign: self.remove_server(0))
5977         d.addCallback(lambda ign:
5978             self.shouldFail(UploadUnhappinessError,
5979                             "test_dropped_servers_in_encoder",
5980hunk ./src/allmydata/test/test_upload.py 1528
5981             for i in xrange(1, 10):
5982                 self._copy_share_to_server(i, 1)
5983         d.addCallback(_copy_shares)
5984-        d.addCallback(lambda ign:
5985-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5986+        d.addCallback(lambda ign: self.remove_server(0))
5987         def _prepare_client(ign):
5988             client = self.g.clients[0]
5989             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
5990hunk ./src/allmydata/test/test_upload.py 1550
5991         def _setup(ign):
5992             for i in xrange(1, 11):
5993                 self._add_server(server_number=i)
5994-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5995+            self.remove_server(0)
5996             c = self.g.clients[0]
5997             # We set happy to an unsatisfiable value so that we can check the
5998             # counting in the exception message. The same progress message
5999hunk ./src/allmydata/test/test_upload.py 1577
6000                 self._add_server(server_number=i)
6001             self._add_server(server_number=11, readonly=True)
6002             self._add_server(server_number=12, readonly=True)
6003-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
6004+            self.remove_server(0)
6005             c = self.g.clients[0]
6006             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
6007             return c
6008hunk ./src/allmydata/test/test_upload.py 1605
6009             # the first one that the selector sees.
6010             for i in xrange(10):
6011                 self._copy_share_to_server(i, 9)
6012-            # Remove server 0, and its contents
6013-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
6014+            self.remove_server(0)
6015             # Make happiness unsatisfiable
6016             c = self.g.clients[0]
6017             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
6018hunk ./src/allmydata/test/test_upload.py 1625
6019         def _then(ign):
6020             for i in xrange(1, 11):
6021                 self._add_server(server_number=i, readonly=True)
6022-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
6023+            self.remove_server(0)
6024             c = self.g.clients[0]
6025             c.DEFAULT_ENCODING_PARAMETERS['k'] = 2
6026             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
6027hunk ./src/allmydata/test/test_upload.py 1661
6028             self._add_server(server_number=4, readonly=True))
6029         d.addCallback(lambda ign:
6030             self._add_server(server_number=5, readonly=True))
6031-        d.addCallback(lambda ign:
6032-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
6033+        d.addCallback(lambda ign: self.remove_server(0))
6034         def _reset_encoding_parameters(ign, happy=4):
6035             client = self.g.clients[0]
6036             client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
6037hunk ./src/allmydata/test/test_upload.py 1696
6038         d.addCallback(lambda ign:
6039             self._add_server(server_number=2))
6040         def _break_server_2(ign):
6041-            serverid = self.g.servers_by_number[2].my_nodeid
6042+            serverid = self.get_server(2).get_serverid()
6043             self.g.break_server(serverid)
6044         d.addCallback(_break_server_2)
6045         d.addCallback(lambda ign:
6046hunk ./src/allmydata/test/test_upload.py 1705
6047             self._add_server(server_number=4, readonly=True))
6048         d.addCallback(lambda ign:
6049             self._add_server(server_number=5, readonly=True))
6050-        d.addCallback(lambda ign:
6051-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
6052+        d.addCallback(lambda ign: self.remove_server(0))
6053         d.addCallback(_reset_encoding_parameters)
6054         d.addCallback(lambda client:
6055             self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
6056hunk ./src/allmydata/test/test_upload.py 1816
6057             # Copy shares
6058             self._copy_share_to_server(1, 1)
6059             self._copy_share_to_server(2, 1)
6060-            # Remove server 0
6061-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
6062+            self.remove_server(0)
6063             client = self.g.clients[0]
6064             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 3
6065             return client
6066hunk ./src/allmydata/test/test_upload.py 1930
6067                                         readonly=True)
6068             self._add_server_with_share(server_number=4, share_number=3,
6069                                         readonly=True)
6070-            # Remove server 0.
6071-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
6072+            self.remove_server(0)
6073             # Set the client appropriately
6074             c = self.g.clients[0]
6075             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
6076hunk ./src/allmydata/test/test_util.py 9
6077 from twisted.trial import unittest
6078 from twisted.internet import defer, reactor
6079 from twisted.python.failure import Failure
6080+from twisted.python.filepath import FilePath
6081 from twisted.python import log
6082 from pycryptopp.hash.sha256 import SHA256 as _hash
6083 
6084hunk ./src/allmydata/test/test_util.py 508
6085                 os.chdir(saved_cwd)
6086 
6087     def test_disk_stats(self):
6088-        avail = fileutil.get_available_space('.', 2**14)
6089+        avail = fileutil.get_available_space(FilePath('.'), 2**14)
6090         if avail == 0:
6091             raise unittest.SkipTest("This test will spuriously fail if there is no disk space left.")
6092 
6093hunk ./src/allmydata/test/test_util.py 512
6094-        disk = fileutil.get_disk_stats('.', 2**13)
6095+        disk = fileutil.get_disk_stats(FilePath('.'), 2**13)
6096         self.failUnless(disk['total'] > 0, disk['total'])
6097         self.failUnless(disk['used'] > 0, disk['used'])
6098         self.failUnless(disk['free_for_root'] > 0, disk['free_for_root'])
6099hunk ./src/allmydata/test/test_util.py 521
6100 
6101     def test_disk_stats_avail_nonnegative(self):
6102         # This test will spuriously fail if you have more than 2^128
6103-        # bytes of available space on your filesystem.
6104-        disk = fileutil.get_disk_stats('.', 2**128)
6105+        # bytes of available space on your filesystem (lucky you).
6106+        disk = fileutil.get_disk_stats(FilePath('.'), 2**128)
6107         self.failUnlessEqual(disk['avail'], 0)
6108 
6109 class PollMixinTests(unittest.TestCase):
6110hunk ./src/allmydata/test/test_web.py 12
6111 from twisted.python import failure, log
6112 from nevow import rend
6113 from allmydata import interfaces, uri, webish, dirnode
6114-from allmydata.storage.shares import get_share_file
6115 from allmydata.storage_client import StorageFarmBroker
6116 from allmydata.immutable import upload
6117 from allmydata.immutable.downloader.status import DownloadStatus
6118hunk ./src/allmydata/test/test_web.py 4111
6119             good_shares = self.find_uri_shares(self.uris["good"])
6120             self.failUnlessReallyEqual(len(good_shares), 10)
6121             sick_shares = self.find_uri_shares(self.uris["sick"])
6122-            os.unlink(sick_shares[0][2])
6123+            sick_shares[0][2].remove()
6124             dead_shares = self.find_uri_shares(self.uris["dead"])
6125             for i in range(1, 10):
6126hunk ./src/allmydata/test/test_web.py 4114
6127-                os.unlink(dead_shares[i][2])
6128+                dead_shares[i][2].remove()
6129             c_shares = self.find_uri_shares(self.uris["corrupt"])
6130             cso = CorruptShareOptions()
6131             cso.stdout = StringIO()
6132hunk ./src/allmydata/test/test_web.py 4118
6133-            cso.parseOptions([c_shares[0][2]])
6134+            cso.parseOptions([c_shares[0][2].path])
6135             corrupt_share(cso)
6136         d.addCallback(_clobber_shares)
6137 
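The hunks above mechanically replace `os.unlink(path_string)` with `FilePath.remove()`, and pass `.path` where an option parser still needs a plain string. For readers unfamiliar with Twisted's FilePath, the object-vs-string split can be sketched with the stdlib's pathlib, used here purely as a stand-in:

```python
import os
import tempfile
from pathlib import Path

# String style (old): the share location is a bare path string.
fd, share_str = tempfile.mkstemp()
os.close(fd)
os.unlink(share_str)

# Path-object style (new): operations are methods on the object, and a
# plain string is recovered only at API boundaries that require one
# (str(p) plays the role of FilePath.path here).
fd, share_str = tempfile.mkstemp()
os.close(fd)
share = Path(share_str)
share.unlink()                # analogous to sick_shares[0][2].remove()
argv_for_cli = [str(share)]   # analogous to [c_shares[0][2].path]
```

pathlib's `unlink()` only covers the file case; Twisted's `FilePath.remove()` also removes directories recursively.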
6138hunk ./src/allmydata/test/test_web.py 4253
6139             good_shares = self.find_uri_shares(self.uris["good"])
6140             self.failUnlessReallyEqual(len(good_shares), 10)
6141             sick_shares = self.find_uri_shares(self.uris["sick"])
6142-            os.unlink(sick_shares[0][2])
6143+            sick_shares[0][2].remove()
6144             dead_shares = self.find_uri_shares(self.uris["dead"])
6145             for i in range(1, 10):
6146hunk ./src/allmydata/test/test_web.py 4256
6147-                os.unlink(dead_shares[i][2])
6148+                dead_shares[i][2].remove()
6149             c_shares = self.find_uri_shares(self.uris["corrupt"])
6150             cso = CorruptShareOptions()
6151             cso.stdout = StringIO()
6152hunk ./src/allmydata/test/test_web.py 4260
6153-            cso.parseOptions([c_shares[0][2]])
6154+            cso.parseOptions([c_shares[0][2].path])
6155             corrupt_share(cso)
6156         d.addCallback(_clobber_shares)
6157 
6158hunk ./src/allmydata/test/test_web.py 4319
6159 
6160         def _clobber_shares(ignored):
6161             sick_shares = self.find_uri_shares(self.uris["sick"])
6162-            os.unlink(sick_shares[0][2])
6163+            sick_shares[0][2].remove()
6164         d.addCallback(_clobber_shares)
6165 
6166         d.addCallback(self.CHECK, "sick", "t=check&repair=true&output=json")
6167hunk ./src/allmydata/test/test_web.py 4811
6168             good_shares = self.find_uri_shares(self.uris["good"])
6169             self.failUnlessReallyEqual(len(good_shares), 10)
6170             sick_shares = self.find_uri_shares(self.uris["sick"])
6171-            os.unlink(sick_shares[0][2])
6172+            sick_shares[0][2].remove()
6173             #dead_shares = self.find_uri_shares(self.uris["dead"])
6174             #for i in range(1, 10):
6175hunk ./src/allmydata/test/test_web.py 4814
6176-            #    os.unlink(dead_shares[i][2])
6177+            #    dead_shares[i][2].remove()
6178 
6179             #c_shares = self.find_uri_shares(self.uris["corrupt"])
6180             #cso = CorruptShareOptions()
6181hunk ./src/allmydata/test/test_web.py 4819
6182             #cso.stdout = StringIO()
6183-            #cso.parseOptions([c_shares[0][2]])
6184+            #cso.parseOptions([c_shares[0][2].path])
6185             #corrupt_share(cso)
6186         d.addCallback(_clobber_shares)
6187 
6188hunk ./src/allmydata/test/test_web.py 4870
6189         d.addErrback(self.explain_web_error)
6190         return d
6191 
6192-    def _count_leases(self, ignored, which):
6193-        u = self.uris[which]
6194-        shares = self.find_uri_shares(u)
6195-        lease_counts = []
6196-        for shnum, serverid, fn in shares:
6197-            sf = get_share_file(fn)
6198-            num_leases = len(list(sf.get_leases()))
6199-            lease_counts.append( (fn, num_leases) )
6200-        return lease_counts
6201-
6202-    def _assert_leasecount(self, lease_counts, expected):
6203+    def _assert_leasecount(self, ignored, which, expected):
6204+        lease_counts = self.count_leases(self.uris[which])
6205         for (fn, num_leases) in lease_counts:
6206             if num_leases != expected:
6207                 self.fail("expected %d leases, have %d, on %s" %
6208hunk ./src/allmydata/test/test_web.py 4903
6209                 self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
6210         d.addCallback(_compute_fileurls)
6211 
6212-        d.addCallback(self._count_leases, "one")
6213-        d.addCallback(self._assert_leasecount, 1)
6214-        d.addCallback(self._count_leases, "two")
6215-        d.addCallback(self._assert_leasecount, 1)
6216-        d.addCallback(self._count_leases, "mutable")
6217-        d.addCallback(self._assert_leasecount, 1)
6218+        d.addCallback(self._assert_leasecount, "one", 1)
6219+        d.addCallback(self._assert_leasecount, "two", 1)
6220+        d.addCallback(self._assert_leasecount, "mutable", 1)
6221 
6222         d.addCallback(self.CHECK, "one", "t=check") # no add-lease
6223         def _got_html_good(res):
6224hunk ./src/allmydata/test/test_web.py 4913
6225             self.failIf("Not Healthy" in res, res)
6226         d.addCallback(_got_html_good)
6227 
6228-        d.addCallback(self._count_leases, "one")
6229-        d.addCallback(self._assert_leasecount, 1)
6230-        d.addCallback(self._count_leases, "two")
6231-        d.addCallback(self._assert_leasecount, 1)
6232-        d.addCallback(self._count_leases, "mutable")
6233-        d.addCallback(self._assert_leasecount, 1)
6234+        d.addCallback(self._assert_leasecount, "one", 1)
6235+        d.addCallback(self._assert_leasecount, "two", 1)
6236+        d.addCallback(self._assert_leasecount, "mutable", 1)
6237 
6238         # this CHECK uses the original client, which uses the same
6239         # lease-secrets, so it will just renew the original lease
6240hunk ./src/allmydata/test/test_web.py 4922
6241         d.addCallback(self.CHECK, "one", "t=check&add-lease=true")
6242         d.addCallback(_got_html_good)
6243 
6244-        d.addCallback(self._count_leases, "one")
6245-        d.addCallback(self._assert_leasecount, 1)
6246-        d.addCallback(self._count_leases, "two")
6247-        d.addCallback(self._assert_leasecount, 1)
6248-        d.addCallback(self._count_leases, "mutable")
6249-        d.addCallback(self._assert_leasecount, 1)
6250+        d.addCallback(self._assert_leasecount, "one", 1)
6251+        d.addCallback(self._assert_leasecount, "two", 1)
6252+        d.addCallback(self._assert_leasecount, "mutable", 1)
6253 
6254         # this CHECK uses an alternate client, which adds a second lease
6255         d.addCallback(self.CHECK, "one", "t=check&add-lease=true", clientnum=1)
6256hunk ./src/allmydata/test/test_web.py 4930
6257         d.addCallback(_got_html_good)
6258 
6259-        d.addCallback(self._count_leases, "one")
6260-        d.addCallback(self._assert_leasecount, 2)
6261-        d.addCallback(self._count_leases, "two")
6262-        d.addCallback(self._assert_leasecount, 1)
6263-        d.addCallback(self._count_leases, "mutable")
6264-        d.addCallback(self._assert_leasecount, 1)
6265+        d.addCallback(self._assert_leasecount, "one", 2)
6266+        d.addCallback(self._assert_leasecount, "two", 1)
6267+        d.addCallback(self._assert_leasecount, "mutable", 1)
6268 
6269         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true")
6270         d.addCallback(_got_html_good)
6271hunk ./src/allmydata/test/test_web.py 4937
6272 
6273-        d.addCallback(self._count_leases, "one")
6274-        d.addCallback(self._assert_leasecount, 2)
6275-        d.addCallback(self._count_leases, "two")
6276-        d.addCallback(self._assert_leasecount, 1)
6277-        d.addCallback(self._count_leases, "mutable")
6278-        d.addCallback(self._assert_leasecount, 1)
6279+        d.addCallback(self._assert_leasecount, "one", 2)
6280+        d.addCallback(self._assert_leasecount, "two", 1)
6281+        d.addCallback(self._assert_leasecount, "mutable", 1)
6282 
6283         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true",
6284                       clientnum=1)
6285hunk ./src/allmydata/test/test_web.py 4945
6286         d.addCallback(_got_html_good)
6287 
6288-        d.addCallback(self._count_leases, "one")
6289-        d.addCallback(self._assert_leasecount, 2)
6290-        d.addCallback(self._count_leases, "two")
6291-        d.addCallback(self._assert_leasecount, 1)
6292-        d.addCallback(self._count_leases, "mutable")
6293-        d.addCallback(self._assert_leasecount, 2)
6294+        d.addCallback(self._assert_leasecount, "one", 2)
6295+        d.addCallback(self._assert_leasecount, "two", 1)
6296+        d.addCallback(self._assert_leasecount, "mutable", 2)
6297 
6298         d.addErrback(self.explain_web_error)
6299         return d
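The `_assert_leasecount` refactor in the hunks above folds each `_count_leases`/`_assert_leasecount` pair into a single callback, relying on the fact that `Deferred.addCallback` forwards extra positional arguments to the callback after the previous result. A Deferred-free mimic of that calling convention (toy class and hypothetical lease data, not Twisted's API):

```python
class MiniChain:
    """Toy stand-in for an already-fired Deferred: each callback receives
    the previous result plus the extra args captured at registration time."""
    def __init__(self, result=None):
        self.result = result

    def addCallback(self, f, *args, **kwargs):
        self.result = f(self.result, *args, **kwargs)
        return self

# Hypothetical lease data: (share filename, lease count) pairs per URI name.
counts = {"one": [("sh0", 2)], "two": [("sh0", 1)]}
checked = []

def assert_leasecount(ignored, which, expected):
    # 'ignored' is the previous callback's result, as in the refactored test.
    for fn, num_leases in counts[which]:
        assert num_leases == expected, (fn, num_leases, expected)
    checked.append(which)
    return ignored

d = MiniChain()
d.addCallback(assert_leasecount, "one", 2)
d.addCallback(assert_leasecount, "two", 1)
```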
6300hunk ./src/allmydata/test/test_web.py 4989
6301             self.failUnlessReallyEqual(len(units), 4+1)
6302         d.addCallback(_done)
6303 
6304-        d.addCallback(self._count_leases, "root")
6305-        d.addCallback(self._assert_leasecount, 1)
6306-        d.addCallback(self._count_leases, "one")
6307-        d.addCallback(self._assert_leasecount, 1)
6308-        d.addCallback(self._count_leases, "mutable")
6309-        d.addCallback(self._assert_leasecount, 1)
6310+        d.addCallback(self._assert_leasecount, "root", 1)
6311+        d.addCallback(self._assert_leasecount, "one", 1)
6312+        d.addCallback(self._assert_leasecount, "mutable", 1)
6313 
6314         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true")
6315         d.addCallback(_done)
6316hunk ./src/allmydata/test/test_web.py 4996
6317 
6318-        d.addCallback(self._count_leases, "root")
6319-        d.addCallback(self._assert_leasecount, 1)
6320-        d.addCallback(self._count_leases, "one")
6321-        d.addCallback(self._assert_leasecount, 1)
6322-        d.addCallback(self._count_leases, "mutable")
6323-        d.addCallback(self._assert_leasecount, 1)
6324+        d.addCallback(self._assert_leasecount, "root", 1)
6325+        d.addCallback(self._assert_leasecount, "one", 1)
6326+        d.addCallback(self._assert_leasecount, "mutable", 1)
6327 
6328         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true",
6329                       clientnum=1)
6330hunk ./src/allmydata/test/test_web.py 5004
6331         d.addCallback(_done)
6332 
6333-        d.addCallback(self._count_leases, "root")
6334-        d.addCallback(self._assert_leasecount, 2)
6335-        d.addCallback(self._count_leases, "one")
6336-        d.addCallback(self._assert_leasecount, 2)
6337-        d.addCallback(self._count_leases, "mutable")
6338-        d.addCallback(self._assert_leasecount, 2)
6339+        d.addCallback(self._assert_leasecount, "root", 2)
6340+        d.addCallback(self._assert_leasecount, "one", 2)
6341+        d.addCallback(self._assert_leasecount, "mutable", 2)
6342 
6343         d.addErrback(self.explain_web_error)
6344         return d
6345merger 0.0 (
6346hunk ./src/allmydata/uri.py 829
6347+    def is_readonly(self):
6348+        return True
6349+
6350+    def get_readonly(self):
6351+        return self
6352+
6353+
6354hunk ./src/allmydata/uri.py 829
6355+    def is_readonly(self):
6356+        return True
6357+
6358+    def get_readonly(self):
6359+        return self
6360+
6361+
6362)
6363merger 0.0 (
6364hunk ./src/allmydata/uri.py 848
6365+    def is_readonly(self):
6366+        return True
6367+
6368+    def get_readonly(self):
6369+        return self
6370+
6371hunk ./src/allmydata/uri.py 848
6372+    def is_readonly(self):
6373+        return True
6374+
6375+    def get_readonly(self):
6376+        return self
6377+
6378)
6379hunk ./src/allmydata/util/encodingutil.py 221
6380 def quote_path(path, quotemarks=True):
6381     return quote_output("/".join(map(to_str, path)), quotemarks=quotemarks)
6382 
6383+def quote_filepath(fp, quotemarks=True, encoding=None):
6384+    path = fp.path
6385+    if isinstance(path, str):
6386+        try:
6387+            path = path.decode(filesystem_encoding)
6388+        except UnicodeDecodeError:
6389+            return 'b"%s"' % (ESCAPABLE_8BIT.sub(_str_escape, path),)
6390+
6391+    return quote_output(path, quotemarks=quotemarks, encoding=encoding)
6392+
6393 
6394 def unicode_platform():
6395     """
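`quote_filepath` above decodes a bytes path with the filesystem encoding and falls back to an escaped `b"..."` literal when decoding fails. A simplified, Python-3-phrased sketch of that decode-or-escape idea (the regex and escape helper are stand-ins for the module's real `ESCAPABLE_8BIT` and `_str_escape`):

```python
import re

# Assumption: escape everything outside printable ASCII, like the real helpers.
_ESCAPABLE = re.compile(r'[^ -~]')

def _escape_char(m):
    return '\\x%02x' % ord(m.group(0))

def quote_path_like(path, encoding='utf-8'):
    """Return a printable, quoted form of a possibly-bytes path."""
    if isinstance(path, bytes):
        try:
            path = path.decode(encoding)
        except UnicodeDecodeError:
            # Undecodable bytes: show an escaped bytes literal rather than fail.
            text = path.decode('latin-1')   # lossless byte-to-char mapping
            return 'b"%s"' % _ESCAPABLE.sub(_escape_char, text)
    return '"%s"' % path
```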
6396hunk ./src/allmydata/util/fileutil.py 5
6397 Futz with files like a pro.
6398 """
6399 
6400-import sys, exceptions, os, stat, tempfile, time, binascii
6401+import errno, sys, exceptions, os, stat, tempfile, time, binascii
6402+
6403+from allmydata.util.assertutil import precondition
6404 
6405 from twisted.python import log
6406hunk ./src/allmydata/util/fileutil.py 10
6407+from twisted.python.filepath import FilePath, UnlistableError
6408 
6409 from pycryptopp.cipher.aes import AES
6410 
6411hunk ./src/allmydata/util/fileutil.py 189
6412             raise tx
6413         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
6414 
6415-def rm_dir(dirname):
6416+def fp_make_dirs(dirfp):
6417+    """
6418+    An idempotent version of FilePath.makedirs().  If the dir already
6419+    exists, do nothing and return without raising an exception.  If this
6420+    call creates the dir, return without raising an exception.  If there is
6421+    an error that prevents creation or if the directory gets deleted after
6422+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
6423+    exists, raise an exception.
6424+    """
6426+    tx = None
6427+    try:
6428+        dirfp.makedirs()
6429+    except OSError, x:
6430+        tx = x
6431+
6432+    if not dirfp.isdir():
6433+        if tx:
6434+            raise tx
6435+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
6436+
6437+def fp_rmdir_if_empty(dirfp):
6438+    """ Remove the directory if it is empty. """
6439+    try:
6440+        os.rmdir(dirfp.path)
6441+    except OSError, e:
6442+        if e.errno != errno.ENOTEMPTY:
6443+            raise
6444+    else:
6445+        dirfp.changed()
6446+
6447+def rmtree(dirname):
6448     """
6449     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
6450     already gone, do nothing and return without raising an exception.  If this
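`fp_rmdir_if_empty` above deletes a directory only when it is empty, swallowing ENOTEMPTY; this is what lets the backend opportunistically prune `incoming/ab/abcde`-style prefix directories without racing another writer still using them. A standalone sketch in modern syntax, on a plain path string instead of a FilePath:

```python
import errno
import os

def rmdir_if_empty(path):
    """Remove the directory iff it is empty; a non-empty directory is
    not an error, it simply stays."""
    try:
        os.rmdir(path)
    except OSError as e:
        if e.errno != errno.ENOTEMPTY:
            raise
```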
6451hunk ./src/allmydata/util/fileutil.py 239
6452             else:
6453                 remove(fullname)
6454         os.rmdir(dirname)
6455-    except Exception, le:
6456-        # Ignore "No such file or directory"
6457-        if (not isinstance(le, OSError)) or le.args[0] != 2:
6458+    except EnvironmentError, le:
6459+        # Ignore "No such file or directory", collect any other exception.
6460+        if le.args[0] not in (2, 3, errno.ENOENT):
6461             excs.append(le)
6462hunk ./src/allmydata/util/fileutil.py 243
6463+    except Exception, le:
6464+        excs.append(le)
6465 
6466     # Okay, now we've recursively removed everything, ignoring any "No
6467     # such file or directory" errors, and collecting any other errors.
6468hunk ./src/allmydata/util/fileutil.py 256
6469             raise OSError, "Failed to remove dir for unknown reason."
6470         raise OSError, excs
6471 
6472+def fp_remove(fp):
6473+    """
6474+    An idempotent version of FilePath.remove().  If the file/dir is already
6475+    gone, do nothing and return without raising an exception.  If this call
6476+    removes the file/dir, return without raising an exception.  If there is
6477+    an error that prevents removal, or if a file or directory at the same
6478+    path gets created again by someone else after this deletes it and before
6479+    this checks that it is gone, raise an exception.
6480+    """
6481+    try:
6482+        fp.remove()
6483+    except UnlistableError, e:
6484+        if e.originalException.errno != errno.ENOENT:
6485+            raise
6486+    except OSError, e:
6487+        if e.errno != errno.ENOENT:
6488+            raise
6489+
6490+def rm_dir(dirname):
6491+    # Renamed to be like shutil.rmtree and unlike rmdir.
6492+    return rmtree(dirname)
6493 
6494 def remove_if_possible(f):
6495     try:
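`fp_remove` above makes removal idempotent: an already-missing target counts as success, and only ENOENT is swallowed (Twisted reports a vanished directory listing as `UnlistableError`, hence the extra clause). The same contract sketched with stdlib calls only:

```python
import errno
import os
import shutil

def remove_idempotent(path):
    """Remove a file or directory tree; 'already gone' is not an error."""
    try:
        if os.path.isdir(path):
            shutil.rmtree(path)
        else:
            os.remove(path)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise
```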
6496hunk ./src/allmydata/util/fileutil.py 387
6497         import traceback
6498         traceback.print_exc()
6499 
6500-def get_disk_stats(whichdir, reserved_space=0):
6501+def get_disk_stats(whichdirfp, reserved_space=0):
6502     """Return disk statistics for the storage disk, in the form of a dict
6503     with the following fields.
6504       total:            total bytes on disk
6505hunk ./src/allmydata/util/fileutil.py 408
6506     you can pass how many bytes you would like to leave unused on this
6507     filesystem as reserved_space.
6508     """
6509+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
6510 
6511     if have_GetDiskFreeSpaceExW:
6512         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
6513hunk ./src/allmydata/util/fileutil.py 419
6514         n_free_for_nonroot = c_ulonglong(0)
6515         n_total            = c_ulonglong(0)
6516         n_free_for_root    = c_ulonglong(0)
6517-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
6518-                                               byref(n_total),
6519-                                               byref(n_free_for_root))
6520+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
6521+                                                      byref(n_total),
6522+                                                      byref(n_free_for_root))
6523         if retval == 0:
6524             raise OSError("Windows error %d attempting to get disk statistics for %r"
6525hunk ./src/allmydata/util/fileutil.py 424
6526-                          % (GetLastError(), whichdir))
6527+                          % (GetLastError(), whichdirfp.path))
6528         free_for_nonroot = n_free_for_nonroot.value
6529         total            = n_total.value
6530         free_for_root    = n_free_for_root.value
6531hunk ./src/allmydata/util/fileutil.py 433
6532         # <http://docs.python.org/library/os.html#os.statvfs>
6533         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
6534         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
6535-        s = os.statvfs(whichdir)
6536+        s = os.statvfs(whichdirfp.path)
6537 
6538         # on my mac laptop:
6539         #  statvfs(2) is a wrapper around statfs(2).
6540hunk ./src/allmydata/util/fileutil.py 460
6541              'avail': avail,
6542            }
6543 
6544-def get_available_space(whichdir, reserved_space):
6545+def get_available_space(whichdirfp, reserved_space):
6546     """Returns available space for share storage in bytes, or None if no
6547     API to get this information is available.
6548 
6549hunk ./src/allmydata/util/fileutil.py 472
6550     you can pass how many bytes you would like to leave unused on this
6551     filesystem as reserved_space.
6552     """
6553+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
6554     try:
6555hunk ./src/allmydata/util/fileutil.py 474
6556-        return get_disk_stats(whichdir, reserved_space)['avail']
6557+        return get_disk_stats(whichdirfp, reserved_space)['avail']
6558     except AttributeError:
6559         return None
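On the POSIX branch, `get_disk_stats` derives the available-space figure from `os.statvfs` and the caller's `reserved_space`, and `get_available_space` returns None on platforms with no usable API. A compressed sketch of that arithmetic (the real code also has a Windows branch using `GetDiskFreeSpaceExW` and reports several other fields):

```python
import os

def available_space(path, reserved_space):
    """Bytes a non-root user may still write on path's filesystem, after
    holding back reserved_space; None if the platform lacks statvfs."""
    if not hasattr(os, 'statvfs'):
        return None                      # e.g. native Windows Python
    s = os.statvfs(path)
    free_for_nonroot = s.f_frsize * s.f_bavail
    return max(0, free_for_nonroot - reserved_space)
```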
6560hunk ./src/allmydata/util/fileutil.py 477
6561-    except EnvironmentError:
6562-        log.msg("OS call to get disk statistics failed")
6563+
6564+
6565+def get_used_space(fp):
6566+    if fp is None:
6567         return 0
6568hunk ./src/allmydata/util/fileutil.py 482
6569+    try:
6570+        s = os.stat(fp.path)
6571+    except EnvironmentError:
6572+        if not fp.exists():
6573+            return 0
6574+        raise
6575+    else:
6576+        # POSIX defines st_blocks (originally a BSDism):
6577+        #   <http://pubs.opengroup.org/onlinepubs/009695399/basedefs/sys/stat.h.html>
6578+        # but does not require stat() to give it a "meaningful value"
6579+        #   <http://pubs.opengroup.org/onlinepubs/009695399/functions/stat.html>
6580+        # and says:
6581+        #   "The unit for the st_blocks member of the stat structure is not defined
6582+        #    within IEEE Std 1003.1-2001. In some implementations it is 512 bytes.
6583+        #    It may differ on a file system basis. There is no correlation between
6584+        #    values of the st_blocks and st_blksize, and the f_bsize (from <sys/statvfs.h>)
6585+        #    structure members."
6586+        #
6587+        # The Linux docs define it as "the number of blocks allocated to the file,
6588+        # [in] 512-byte units." It is also defined that way on MacOS X. Python does
6589+        # not set the attribute on Windows.
6590+        #
6591+        # We consider platforms that define st_blocks but give it a wrong value, or
6592+        # measure it in a unit other than 512 bytes, to be broken. See also
6593+        # <http://bugs.python.org/issue12350>.
6594+
6595+        if hasattr(s, 'st_blocks'):
6596+            return s.st_blocks * 512
6597+        else:
6598+            return s.st_size
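`get_used_space` above reports on-disk usage as `st_blocks * 512` where the field exists (Linux and Mac OS X define it in 512-byte units), falling back to the apparent size. The core of it, minus the FilePath wrapper and the missing-file handling:

```python
import os

def used_space(path):
    """Approximate bytes a file actually occupies on disk."""
    s = os.stat(path)
    if hasattr(s, 'st_blocks'):
        # 512-byte allocation units on the platforms considered non-broken.
        return s.st_blocks * 512
    return s.st_size        # e.g. Windows: no st_blocks, use apparent size
```

For a sparse file the two branches can differ wildly, which is why the accounting code prefers `st_blocks` when it is available.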
6599}
6600[Work-in-progress, includes fix to bug involving BucketWriter. refs #999
6601david-sarah@jacaranda.org**20110920033803
6602 Ignore-this: 64e9e019421454e4d08141d10b6e4eed
6603] {
6604hunk ./src/allmydata/client.py 9
6605 from twisted.internet import reactor, defer
6606 from twisted.application import service
6607 from twisted.application.internet import TimerService
6608+from twisted.python.filepath import FilePath
6609 from foolscap.api import Referenceable
6610 from pycryptopp.publickey import rsa
6611 
6612hunk ./src/allmydata/client.py 15
6613 import allmydata
6614 from allmydata.storage.server import StorageServer
6615+from allmydata.storage.backends.disk.disk_backend import DiskBackend
6616 from allmydata import storage_client
6617 from allmydata.immutable.upload import Uploader
6618 from allmydata.immutable.offloaded import Helper
6619hunk ./src/allmydata/client.py 213
6620             return
6621         readonly = self.get_config("storage", "readonly", False, boolean=True)
6622 
6623-        storedir = os.path.join(self.basedir, self.STOREDIR)
6624+        storedir = FilePath(self.basedir).child(self.STOREDIR)
6625 
6626         data = self.get_config("storage", "reserved_space", None)
6627         reserved = None
6628hunk ./src/allmydata/client.py 255
6629             'cutoff_date': cutoff_date,
6630             'sharetypes': tuple(sharetypes),
6631         }
6632-        ss = StorageServer(storedir, self.nodeid,
6633-                           reserved_space=reserved,
6634-                           discard_storage=discard,
6635-                           readonly_storage=readonly,
6636+
6637+        backend = DiskBackend(storedir, readonly=readonly, reserved_space=reserved,
6638+                              discard_storage=discard)
6639+        ss = StorageServer(nodeid, backend, storedir,
6640                            stats_provider=self.stats_provider,
6641                            expiration_policy=expiration_policy)
6642         self.add_service(ss)
6643hunk ./src/allmydata/interfaces.py 348
6644 
6645     def get_shares():
6646         """
6647-        Generates the IStoredShare objects held in this shareset.
6648+        Generates IStoredShare objects for all completed shares in this shareset.
6649         """
6650 
6651     def has_incoming(shnum):
6652hunk ./src/allmydata/storage/backends/base.py 69
6653         # def _create_mutable_share(self, storageserver, shnum, write_enabler):
6654         #     """create a mutable share with the given shnum and write_enabler"""
6655 
6656-        # secrets might be a triple with cancel_secret in secrets[2], but if
6657-        # so we ignore the cancel_secret.
6658         write_enabler = secrets[0]
6659         renew_secret = secrets[1]
6660hunk ./src/allmydata/storage/backends/base.py 71
6661+        cancel_secret = '\x00'*32
6662+        if len(secrets) > 2:
6663+            cancel_secret = secrets[2]
6664 
6665         si_s = self.get_storage_index_string()
6666         shares = {}
6667hunk ./src/allmydata/storage/backends/base.py 110
6668             read_data[shnum] = share.readv(read_vector)
6669 
6670         ownerid = 1 # TODO
6671-        lease_info = LeaseInfo(ownerid, renew_secret,
6672+        lease_info = LeaseInfo(ownerid, renew_secret, cancel_secret,
6673                                expiration_time, storageserver.get_serverid())
6674 
6675         if testv_is_good:
6676hunk ./src/allmydata/storage/backends/disk/disk_backend.py 34
6677     return newfp.child(sia)
6678 
6679 
6680-def get_share(fp):
6681+def get_share(storageindex, shnum, fp):
6682     f = fp.open('rb')
6683     try:
6684         prefix = f.read(32)
6685hunk ./src/allmydata/storage/backends/disk/disk_backend.py 42
6686         f.close()
6687 
6688     if prefix == MutableDiskShare.MAGIC:
6689-        return MutableDiskShare(fp)
6690+        return MutableDiskShare(storageindex, shnum, fp)
6691     else:
6692         # assume it's immutable
6693hunk ./src/allmydata/storage/backends/disk/disk_backend.py 45
6694-        return ImmutableDiskShare(fp)
6695+        return ImmutableDiskShare(storageindex, shnum, fp)
6696 
6697 
6698 class DiskBackend(Backend):
6699hunk ./src/allmydata/storage/backends/disk/disk_backend.py 174
6700                 if not NUM_RE.match(shnumstr):
6701                     continue
6702                 sharehome = self._sharehomedir.child(shnumstr)
6703-                yield self.get_share(sharehome)
6704+                yield get_share(self.get_storage_index(), int(shnumstr), sharehome)
6705         except UnlistableError:
6706             # There is no shares directory at all.
6707             pass
6708hunk ./src/allmydata/storage/backends/disk/disk_backend.py 185
6709         return self._incominghomedir.child(str(shnum)).exists()
6710 
6711     def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
6712-        sharehome = self._sharehomedir.child(str(shnum))
6713+        finalhome = self._sharehomedir.child(str(shnum))
6714         incominghome = self._incominghomedir.child(str(shnum))
6715hunk ./src/allmydata/storage/backends/disk/disk_backend.py 187
6716-        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
6717-                                   max_size=max_space_per_bucket, create=True)
6718+        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, incominghome, finalhome,
6719+                                   max_size=max_space_per_bucket)
6720         bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
6721         if self._discard_storage:
6722             bw.throw_out_all_data = True
6723hunk ./src/allmydata/storage/backends/disk/disk_backend.py 198
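`make_bucket_writer` above now hands the share both its incoming path and its final path, so the share itself can implement the write-to-incoming-then-move-on-close dance. The pattern in isolation, with plain paths (hypothetical helper, not the backend's API):

```python
import os
import tempfile

def two_phase_write(final_path, data):
    """Write data next to its destination, then rename into place, so a
    reader can never observe a half-written file under the final name."""
    d = os.path.dirname(final_path) or '.'
    os.makedirs(d, exist_ok=True)
    fd, incoming = tempfile.mkstemp(dir=d, prefix='incoming-')
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
    os.rename(incoming, final_path)   # atomic on POSIX within one filesystem
```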
6724         fileutil.fp_make_dirs(self._sharehomedir)
6725         sharehome = self._sharehomedir.child(str(shnum))
6726         serverid = storageserver.get_serverid()
6727-        return create_mutable_disk_share(sharehome, serverid, write_enabler, storageserver)
6728+        return create_mutable_disk_share(self.get_storage_index(), shnum, sharehome, serverid, write_enabler, storageserver)
6729 
6730     def _clean_up_after_unlink(self):
6731         fileutil.fp_rmdir_if_empty(self._sharehomedir)
6732hunk ./src/allmydata/storage/backends/disk/immutable.py 48
6733     LEASE_SIZE = struct.calcsize(">L32s32sL")
6734 
6735 
6736-    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
6737-        """ If max_size is not None then I won't allow more than
6738-        max_size to be written to me. If create=True then max_size
6739-        must not be None. """
6740-        precondition((max_size is not None) or (not create), max_size, create)
6741+    def __init__(self, storageindex, shnum, home, finalhome=None, max_size=None):
6742+        """
6743+        If max_size is not None then I won't allow more than max_size to be written to me.
6744+        If finalhome is not None (meaning that we are creating the share) then max_size
6745+        must not be None.
6746+        """
6747+        precondition((max_size is not None) or (finalhome is None), max_size, finalhome)
6748         self._storageindex = storageindex
6749         self._max_size = max_size
6750hunk ./src/allmydata/storage/backends/disk/immutable.py 57
6751-        self._incominghome = incominghome
6752-        self._home = finalhome
6753+
6754+        # If we are creating the share, _finalhome refers to the final path and
6755+        # _home to the incoming path. Otherwise, _finalhome is None.
6756+        self._finalhome = finalhome
6757+        self._home = home
6758         self._shnum = shnum
6759hunk ./src/allmydata/storage/backends/disk/immutable.py 63
6760-        if create:
6761-            # touch the file, so later callers will see that we're working on
6762+
6763+        if self._finalhome is not None:
6764+            # Touch the file, so later callers will see that we're working on
6765             # it. Also construct the metadata.
6766hunk ./src/allmydata/storage/backends/disk/immutable.py 67
6767-            assert not finalhome.exists()
6768-            fp_make_dirs(self._incominghome.parent())
6769+            assert not self._finalhome.exists()
6770+            fp_make_dirs(self._home.parent())
6771             # The second field -- the four-byte share data length -- is no
6772             # longer used as of Tahoe v1.3.0, but we continue to write it in
6773             # there in case someone downgrades a storage server from >=
6774hunk ./src/allmydata/storage/backends/disk/immutable.py 78
6775             # the largest length that can fit into the field. That way, even
6776             # if this does happen, the old < v1.3.0 server will still allow
6777             # clients to read the first part of the share.
6778-            self._incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6779+            self._home.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6780             self._lease_offset = max_size + 0x0c
6781             self._num_leases = 0
6782         else:
6783hunk ./src/allmydata/storage/backends/disk/immutable.py 101
6784                 % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
6785 
6786     def close(self):
6787-        fileutil.fp_make_dirs(self._home.parent())
6788-        self._incominghome.moveTo(self._home)
6789-        try:
6790-            # self._incominghome is like storage/shares/incoming/ab/abcde/4 .
6791-            # We try to delete the parent (.../ab/abcde) to avoid leaving
6792-            # these directories lying around forever, but the delete might
6793-            # fail if we're working on another share for the same storage
6794-            # index (like ab/abcde/5). The alternative approach would be to
6795-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
6796-            # ShareWriter), each of which is responsible for a single
6797-            # directory on disk, and have them use reference counting of
6798-            # their children to know when they should do the rmdir. This
6799-            # approach is simpler, but relies on os.rmdir refusing to delete
6800-            # a non-empty directory. Do *not* use fileutil.fp_remove() here!
6801-            fileutil.fp_rmdir_if_empty(self._incominghome.parent())
6802-            # we also delete the grandparent (prefix) directory, .../ab ,
6803-            # again to avoid leaving directories lying around. This might
6804-            # fail if there is another bucket open that shares a prefix (like
6805-            # ab/abfff).
6806-            fileutil.fp_rmdir_if_empty(self._incominghome.parent().parent())
6807-            # we leave the great-grandparent (incoming/) directory in place.
6808-        except EnvironmentError:
6809-            # ignore the "can't rmdir because the directory is not empty"
6810-            # exceptions, those are normal consequences of the
6811-            # above-mentioned conditions.
6812-            pass
6813-        pass
6814+        fileutil.fp_make_dirs(self._finalhome.parent())
6815+        self._home.moveTo(self._finalhome)
6816+
6817+        # self._home is like storage/shares/incoming/ab/abcde/4 .
6818+        # We try to delete the parent (.../ab/abcde) to avoid leaving
6819+        # these directories lying around forever, but the delete might
6820+        # fail if we're working on another share for the same storage
6821+        # index (like ab/abcde/5). The alternative approach would be to
6822+        # use a hierarchy of objects (PrefixHolder, BucketHolder,
6823+        # ShareWriter), each of which is responsible for a single
6824+        # directory on disk, and have them use reference counting of
6825+        # their children to know when they should do the rmdir. This
6826+        # approach is simpler, but relies on os.rmdir (used by
6827+        # fp_rmdir_if_empty) refusing to delete a non-empty directory.
6828+        # Do *not* use fileutil.fp_remove() here!
6829+        parent = self._home.parent()
6830+        fileutil.fp_rmdir_if_empty(parent)
6831+
6832+        # we also delete the grandparent (prefix) directory, .../ab ,
6833+        # again to avoid leaving directories lying around. This might
6834+        # fail if there is another bucket open that shares a prefix (like
6835+        # ab/abfff).
6836+        fileutil.fp_rmdir_if_empty(parent.parent())
6837+
6838+        # we leave the great-grandparent (incoming/) directory in place.
6839+
6840+        # allow lease changes after closing.
6841+        self._home = self._finalhome
6842+        self._finalhome = None
6843 
6844     def get_used_space(self):
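The directory cleanup in `close()` above relies on `os.rmdir` refusing to delete non-empty directories. A minimal standalone sketch of that pattern, using a hypothetical `rmdir_if_empty` helper standing in for `fileutil.fp_rmdir_if_empty`:

```python
import os
import tempfile

def rmdir_if_empty(path):
    # Stand-in for fileutil.fp_rmdir_if_empty: remove the directory only
    # if it is empty; os.rmdir raises OSError for non-empty directories.
    try:
        os.rmdir(path)
    except OSError:
        pass

# Mimic the storage/shares/incoming/ab/abcde/4 layout.
incoming = tempfile.mkdtemp()
bucketdir = os.path.join(incoming, "ab", "abcde")
os.makedirs(bucketdir)
sharefile = os.path.join(bucketdir, "4")
open(sharefile, "wb").close()

os.remove(sharefile)                         # the share was moved to its final home
rmdir_if_empty(bucketdir)                    # .../ab/abcde is now empty: removed
rmdir_if_empty(os.path.dirname(bucketdir))   # .../ab (prefix dir) is now empty: removed
assert not os.path.exists(bucketdir)
assert os.path.exists(incoming)              # incoming/ itself is left in place
```

If another share for the same storage index (say `ab/abcde/5`) were still present, the first `rmdir_if_empty` call would silently do nothing, which is exactly the behavior the comment block above depends on.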
6845hunk ./src/allmydata/storage/backends/disk/immutable.py 132
6846-        return (fileutil.get_used_space(self._home) +
6847-                fileutil.get_used_space(self._incominghome))
6848+        return (fileutil.get_used_space(self._finalhome) +
6849+                fileutil.get_used_space(self._home))
6850 
6851     def get_storage_index(self):
6852         return self._storageindex
6853hunk ./src/allmydata/storage/backends/disk/immutable.py 175
6854         precondition(offset >= 0, offset)
6855         if self._max_size is not None and offset+length > self._max_size:
6856             raise DataTooLargeError(self._max_size, offset, length)
6857-        f = self._incominghome.open(mode='rb+')
6858+        f = self._home.open(mode='rb+')
6859         try:
6860             real_offset = self._data_offset+offset
6861             f.seek(real_offset)
6862hunk ./src/allmydata/storage/backends/disk/immutable.py 205
6863 
6864     # These lease operations are intended for use by disk_backend.py.
6865     # Other clients should not depend on the fact that the disk backend
6866-    # stores leases in share files.
6867+    # stores leases in share files. XXX bucket.py also relies on this.
6868 
6869     def get_leases(self):
6870         """Yields a LeaseInfo instance for all leases."""
6871hunk ./src/allmydata/storage/backends/disk/immutable.py 221
6872             f.close()
6873 
6874     def add_lease(self, lease_info):
6875-        f = self._incominghome.open(mode='rb')
6876+        f = self._home.open(mode='rb+')
6877         try:
6878             num_leases = self._read_num_leases(f)
6879hunk ./src/allmydata/storage/backends/disk/immutable.py 224
6880-        finally:
6881-            f.close()
6882-        f = self._home.open(mode='wb+')
6883-        try:
6884             self._write_lease_record(f, num_leases, lease_info)
6885             self._write_num_leases(f, num_leases+1)
6886         finally:
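The `add_lease` change above collapses two opens (`'rb'` to read, then a second open with `'wb+'` to write) into a single `'rb+'` open. One reason a single `'rb+'` open is the safe choice: `'wb+'` truncates the file on open. An illustrative sketch (not Tahoe code):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"share data")

# "rb+" opens the existing file for read/update without truncating it.
with open(path, "rb+") as f:
    f.seek(0, os.SEEK_END)
    f.write(b" + lease")
with open(path, "rb") as f:
    after_append = f.read()
assert after_append == b"share data + lease"

# "wb+" truncates on open, destroying the existing contents.
with open(path, "wb+") as f:
    pass
with open(path, "rb") as f:
    after_truncate = f.read()
assert after_truncate == b""
os.remove(path)
```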
6887hunk ./src/allmydata/storage/backends/disk/mutable.py 440
6888         pass
6889 
6890 
6891-def create_mutable_disk_share(fp, serverid, write_enabler, parent):
6892-    ms = MutableDiskShare(fp, parent)
6893+def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent):
6894+    ms = MutableDiskShare(storageindex, shnum, fp, parent)
6895     ms.create(serverid, write_enabler)
6896     del ms
6897hunk ./src/allmydata/storage/backends/disk/mutable.py 444
6898-    return MutableDiskShare(fp, parent)
6899+    return MutableDiskShare(storageindex, shnum, fp, parent)
6900hunk ./src/allmydata/storage/bucket.py 44
6901         start = time.time()
6902 
6903         self._share.close()
6904-        filelen = self._share.stat()
6905+        # XXX should this be self._share.get_used_space() ?
6906+        consumed_size = self._share.get_size()
6907         self._share = None
6908 
6909         self.closed = True
6910hunk ./src/allmydata/storage/bucket.py 51
6911         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
6912 
6913-        self.ss.bucket_writer_closed(self, filelen)
6914+        self.ss.bucket_writer_closed(self, consumed_size)
6915         self.ss.add_latency("close", time.time() - start)
6916         self.ss.count("close")
6917 
6918hunk ./src/allmydata/storage/server.py 182
6919                                 renew_secret, cancel_secret,
6920                                 sharenums, allocated_size,
6921                                 canary, owner_num=0):
6922-        # cancel_secret is no longer used.
6923         # owner_num is not for clients to set, but rather it should be
6924         # curried into a StorageServer instance dedicated to a particular
6925         # owner.
6926hunk ./src/allmydata/storage/server.py 195
6927         # Note that the lease should not be added until the BucketWriter
6928         # has been closed.
6929         expire_time = time.time() + 31*24*60*60
6930-        lease_info = LeaseInfo(owner_num, renew_secret,
6931+        lease_info = LeaseInfo(owner_num, renew_secret, cancel_secret,
6932                                expire_time, self._serverid)
6933 
6934         max_space_per_bucket = allocated_size
6935hunk ./src/allmydata/test/no_network.py 349
6936         return self.g.servers_by_number[i]
6937 
6938     def get_serverdir(self, i):
6939-        return self.g.servers_by_number[i].backend.storedir
6940+        return self.g.servers_by_number[i].backend._storedir
6941 
6942     def remove_server(self, i):
6943         self.g.remove_server(self.g.servers_by_number[i].get_serverid())
6944hunk ./src/allmydata/test/no_network.py 357
6945     def iterate_servers(self):
6946         for i in sorted(self.g.servers_by_number.keys()):
6947             ss = self.g.servers_by_number[i]
6948-            yield (i, ss, ss.backend.storedir)
6949+            yield (i, ss, ss.backend._storedir)
6950 
6951     def find_uri_shares(self, uri):
6952         si = tahoe_uri.from_string(uri).get_storage_index()
6953hunk ./src/allmydata/test/no_network.py 384
6954         return shares
6955 
6956     def copy_share(self, from_share, uri, to_server):
6957-        si = uri.from_string(self.uri).get_storage_index()
6958+        si = tahoe_uri.from_string(uri).get_storage_index()
6959         (i_shnum, i_serverid, i_sharefp) = from_share
6960         shares_dir = to_server.backend.get_shareset(si)._sharehomedir
6961         i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
6962hunk ./src/allmydata/test/test_download.py 127
6963 
6964         return d
6965 
6966-    def _write_shares(self, uri, shares):
6967-        si = uri.from_string(uri).get_storage_index()
6968+    def _write_shares(self, fileuri, shares):
6969+        si = uri.from_string(fileuri).get_storage_index()
6970         for i in shares:
6971             shares_for_server = shares[i]
6972             for shnum in shares_for_server:
6973hunk ./src/allmydata/test/test_hung_server.py 36
6974 
6975     def _hang(self, servers, **kwargs):
6976         for ss in servers:
6977-            self.g.hang_server(ss.get_serverid(), **kwargs)
6978+            self.g.hang_server(ss.original.get_serverid(), **kwargs)
6979 
6980     def _unhang(self, servers, **kwargs):
6981         for ss in servers:
6982hunk ./src/allmydata/test/test_hung_server.py 40
6983-            self.g.unhang_server(ss.get_serverid(), **kwargs)
6984+            self.g.unhang_server(ss.original.get_serverid(), **kwargs)
6985 
6986     def _hang_shares(self, shnums, **kwargs):
6987         # hang all servers who are holding the given shares
6988hunk ./src/allmydata/test/test_hung_server.py 52
6989                     hung_serverids.add(i_serverid)
6990 
6991     def _delete_all_shares_from(self, servers):
6992-        serverids = [ss.get_serverid() for ss in servers]
6993+        serverids = [ss.original.get_serverid() for ss in servers]
6994         for (i_shnum, i_serverid, i_sharefp) in self.shares:
6995             if i_serverid in serverids:
6996                 i_sharefp.remove()
6997hunk ./src/allmydata/test/test_hung_server.py 58
6998 
6999     def _corrupt_all_shares_in(self, servers, corruptor_func):
7000-        serverids = [ss.get_serverid() for ss in servers]
7001+        serverids = [ss.original.get_serverid() for ss in servers]
7002         for (i_shnum, i_serverid, i_sharefp) in self.shares:
7003             if i_serverid in serverids:
7004                 self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor_func)
7005hunk ./src/allmydata/test/test_hung_server.py 64
7006 
7007     def _copy_all_shares_from(self, from_servers, to_server):
7008-        serverids = [ss.get_serverid() for ss in from_servers]
7009+        serverids = [ss.original.get_serverid() for ss in from_servers]
7010         for (i_shnum, i_serverid, i_sharefp) in self.shares:
7011             if i_serverid in serverids:
7012                 self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
7013hunk ./src/allmydata/test/test_mutable.py 2990
7014             fso = debug.FindSharesOptions()
7015             storage_index = base32.b2a(n.get_storage_index())
7016             fso.si_s = storage_index
7017-            fso.nodedirs = [unicode(os.path.dirname(os.path.abspath(storedir)))
7018+            fso.nodedirs = [unicode(storedir.parent().path)
7019                             for (i,ss,storedir)
7020                             in self.iterate_servers()]
7021             fso.stdout = StringIO()
7022hunk ./src/allmydata/test/test_upload.py 818
7023         if share_number is not None:
7024             self._copy_share_to_server(share_number, server_number)
7025 
7026-
7027     def _copy_share_to_server(self, share_number, server_number):
7028         ss = self.g.servers_by_number[server_number]
7029hunk ./src/allmydata/test/test_upload.py 820
7030-        self.copy_share(self.shares[share_number], ss)
7031+        self.copy_share(self.shares[share_number], self.uri, ss)
7032 
7033     def _setup_grid(self):
7034         """
7035}
7036[docs/backends: document the configuration options for the pluggable backends scheme. refs #999
7037david-sarah@jacaranda.org**20110920171737
7038 Ignore-this: 5947e864682a43cb04e557334cda7c19
7039] {
7040adddir ./docs/backends
7041addfile ./docs/backends/S3.rst
7042hunk ./docs/backends/S3.rst 1
7043+====================================================
7044+Storing Shares in Amazon Simple Storage Service (S3)
7045+====================================================
7046+
7047+S3 is a commercial storage service provided by Amazon, described at
7048+`<https://aws.amazon.com/s3/>`_.
7049+
7050+The Tahoe-LAFS storage server can be configured to store its shares in
7051+an S3 bucket, rather than on the local filesystem. To enable this, add the

7052+following keys to the server's ``tahoe.cfg`` file:
7053+
7054+``[storage]``
7055+
7056+``backend = s3``
7057+
7058+    This turns off the local filesystem backend and enables use of S3.
7059+
7060+``s3.access_key_id = (string, required)``
7061+``s3.secret_access_key = (string, required)``
7062+
7063+    These two give the storage server permission to access your Amazon
7064+    Web Services account, allowing it to upload and download shares
7065+    from S3.
7066+
7067+``s3.bucket = (string, required)``
7068+
7069+    This controls which bucket will be used to hold shares. The Tahoe-LAFS
7070+    storage server will only modify and access objects in the configured S3
7071+    bucket.
7072+
7073+``s3.url = (URL string, optional)``
7074+
7075+    This URL tells the storage server how to access the S3 service. It
7076+    defaults to ``http://s3.amazonaws.com``, but by setting it to something
7077+    else, you may be able to use some other S3-like service if it is
7078+    sufficiently compatible.
7079+
7080+``s3.max_space = (str, optional)``
7081+
7082+    This tells the server to limit how much space can be used in the S3
7083+    bucket. Before each share is uploaded, the server will ask S3 for the
7084+    current bucket usage, and will only accept the share if it does not cause
7085+    the usage to grow above this limit.
7086+
7087+    The string contains a number, with an optional case-insensitive scale
7088+    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
7089+    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
7090+    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
7091+    thing.
7092+
7093+    If ``s3.max_space`` is omitted, the default behavior is to allow
7094+    unlimited usage.
7095+
7096+
7097+Once configured, the WUI "storage server" page will provide information about
7098+how much space is being used and how many shares are being stored.
7099+
7100+
7101+Issues
7102+------
7103+
7104+Objects in an S3 bucket cannot be read for free. As a result, when Tahoe-LAFS
7105+is configured to store shares in S3 rather than on local disk, some common
7106+operations may behave differently:
7107+
7108+* Lease crawling/expiration is not yet implemented. As a result, shares will
7109+  be retained forever, and the Storage Server status web page will not show
7110+  information about the number of mutable/immutable shares present.
7111+
7112+* Enabling ``s3.max_space`` causes an extra S3 usage query to be sent for
7113+  each share upload, causing the upload process to run slightly slower and
7114+  incur more S3 request charges.
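Taken together, the options documented above might be combined in a server's ``tahoe.cfg`` like this (illustrative only; the key values and bucket name are placeholders):

```ini
[storage]
enabled = true
backend = s3
s3.access_key_id = AKIAEXAMPLEEXAMPLE
s3.secret_access_key = exampleSecretKeyExampleSecretKeyEx
s3.bucket = example-tahoe-shares
s3.max_space = 500GB
```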
7115addfile ./docs/backends/disk.rst
7116hunk ./docs/backends/disk.rst 1
7117+====================================
7118+Storing Shares on a Local Filesystem
7119+====================================
7120+
7121+The "disk" backend stores shares on the local filesystem. Versions of
7122+Tahoe-LAFS <= 1.9.0 always stored shares in this way.
7123+
7124+``[storage]``
7125+
7126+``backend = disk``
7127+
7128+    This enables use of the disk backend, and is the default.
7129+
7130+``reserved_space = (str, optional)``
7131+
7132+    If provided, this value defines how much disk space is reserved: the
7133+    storage server will not accept any share that causes the amount of free
7134+    disk space to drop below this value. (The free space is measured by a
7135+    call to statvfs(2) on Unix, or GetDiskFreeSpaceEx on Windows, and is the
7136+    space available to the user account under which the storage server runs.)
7137+
7138+    This string contains a number, with an optional case-insensitive scale
7139+    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
7140+    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
7141+    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
7142+    thing.
7143+
7144+    "``tahoe create-node``" generates a tahoe.cfg with
7145+    "``reserved_space=1G``", but you may wish to raise, lower, or remove the
7146+    reservation to suit your needs.
7147+
7148+``expire.enabled =``
7149+
7150+``expire.mode =``
7151+
7152+``expire.override_lease_duration =``
7153+
7154+``expire.cutoff_date =``
7155+
7156+``expire.immutable =``
7157+
7158+``expire.mutable =``
7159+
7160+    These settings control garbage collection, causing the server to
7161+    delete shares that no longer have an up-to-date lease on them. Please
7162+    see `<garbage-collection.rst>`_ for full details.
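The "quantity of space" syntax described for ``reserved_space`` (and ``s3.max_space`` above) can be sketched as a small parser. This is a hypothetical illustration of the rules stated in the docs, not the parser Tahoe-LAFS actually uses:

```python
import re

def parse_abbreviated_size(s):
    # Hypothetical parser: a number, an optional case-insensitive scale
    # suffix (K/M/G/T/E), and an optional "B" or "iB" suffix. "iB" selects
    # binary (1024-based) scaling; otherwise scaling is decimal (1000-based).
    m = re.match(r"^\s*(\d+)\s*([kmgte]?)(i?b?)\s*$", s, re.IGNORECASE)
    if not m:
        raise ValueError("not a quantity of space: %r" % (s,))
    number, scale, rest = m.groups()
    base = 1024 if rest.lower().startswith("i") else 1000
    exponent = ("kmgte".find(scale.lower()) + 1) if scale else 0
    return int(number) * (base ** exponent)

assert parse_abbreviated_size("100MB") == 100000000
assert parse_abbreviated_size("100000kb") == 100000000
assert parse_abbreviated_size("1MiB") == 1048576
assert parse_abbreviated_size("1024KiB") == 1048576
assert parse_abbreviated_size("100000000B") == 100000000
```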
7163hunk ./docs/configuration.rst 436
7164     <http://tahoe-lafs.org/trac/tahoe-lafs/ticket/390>`_ for the current
7165     status of this bug. The default value is ``False``.
7166 
7167-``reserved_space = (str, optional)``
7168+``backend = (string, optional)``
7169 
7170hunk ./docs/configuration.rst 438
7171-    If provided, this value defines how much disk space is reserved: the
7172-    storage server will not accept any share that causes the amount of free
7173-    disk space to drop below this value. (The free space is measured by a
7174-    call to statvfs(2) on Unix, or GetDiskFreeSpaceEx on Windows, and is the
7175-    space available to the user account under which the storage server runs.)
7176+    Storage servers can store their shares in different "backends". Clients
7177+    need not be aware of which backend is used by a server. The default
7178+    value is ``disk``.
7179 
7180hunk ./docs/configuration.rst 442
7181-    This string contains a number, with an optional case-insensitive scale
7182-    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
7183-    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
7184-    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
7185-    thing.
7186+``backend = disk``
7187 
7188hunk ./docs/configuration.rst 444
7189-    "``tahoe create-node``" generates a tahoe.cfg with
7190-    "``reserved_space=1G``", but you may wish to raise, lower, or remove the
7191-    reservation to suit your needs.
7192+    The default is to store shares on the local filesystem (in
7193+    BASEDIR/storage/shares/). For configuration details (including how to
7194+    reserve a minimum amount of free space), see `<backends/disk.rst>`_.
7195 
7196hunk ./docs/configuration.rst 448
7197-``expire.enabled =``
7198+``backend = S3``
7199 
7200hunk ./docs/configuration.rst 450
7201-``expire.mode =``
7202-
7203-``expire.override_lease_duration =``
7204-
7205-``expire.cutoff_date =``
7206-
7207-``expire.immutable =``
7208-
7209-``expire.mutable =``
7210-
7211-    These settings control garbage collection, in which the server will
7212-    delete shares that no longer have an up-to-date lease on them. Please see
7213-    `<garbage-collection.rst>`_ for full details.
7214+    The storage server can store all shares to an Amazon Simple Storage
7215+    Service (S3) bucket. For configuration details, see `<backends/S3.rst>`_.
7216 
7217 
7218 Running A Helper
7219}
7220[Fix some incorrect attribute accesses. refs #999
7221david-sarah@jacaranda.org**20110921031207
7222 Ignore-this: f1ea4c3ea191f6d4b719afaebd2b2bcd
7223] {
7224hunk ./src/allmydata/client.py 258
7225 
7226         backend = DiskBackend(storedir, readonly=readonly, reserved_space=reserved,
7227                               discard_storage=discard)
7228-        ss = StorageServer(nodeid, backend, storedir,
7229+        ss = StorageServer(self.nodeid, backend, storedir,
7230                            stats_provider=self.stats_provider,
7231                            expiration_policy=expiration_policy)
7232         self.add_service(ss)
7233hunk ./src/allmydata/interfaces.py 449
7234         Returns the storage index.
7235         """
7236 
7237+    def get_storage_index_string():
7238+        """
7239+        Returns the base32-encoded storage index.
7240+        """
7241+
7242     def get_shnum():
7243         """
7244         Returns the share number.
7245hunk ./src/allmydata/storage/backends/disk/immutable.py 138
7246     def get_storage_index(self):
7247         return self._storageindex
7248 
7249+    def get_storage_index_string(self):
7250+        return si_b2a(self._storageindex)
7251+
7252     def get_shnum(self):
7253         return self._shnum
7254 
7255hunk ./src/allmydata/storage/backends/disk/mutable.py 119
7256     def get_storage_index(self):
7257         return self._storageindex
7258 
7259+    def get_storage_index_string(self):
7260+        return si_b2a(self._storageindex)
7261+
7262     def get_shnum(self):
7263         return self._shnum
7264 
7265hunk ./src/allmydata/storage/bucket.py 86
7266     def __init__(self, ss, share):
7267         self.ss = ss
7268         self._share = share
7269-        self.storageindex = share.storageindex
7270-        self.shnum = share.shnum
7271+        self.storageindex = share.get_storage_index()
7272+        self.shnum = share.get_shnum()
7273 
7274     def __repr__(self):
7275         return "<%s %s %s>" % (self.__class__.__name__,
7276hunk ./src/allmydata/storage/expirer.py 6
7277 from twisted.python import log as twlog
7278 
7279 from allmydata.storage.crawler import ShareCrawler
7280-from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
7281+from allmydata.storage.common import UnknownMutableContainerVersionError, \
7282      UnknownImmutableContainerVersionError
7283 
7284 
7285hunk ./src/allmydata/storage/expirer.py 124
7286                     struct.error):
7287                 twlog.msg("lease-checker error processing %r" % (share,))
7288                 twlog.err()
7289-                which = (si_b2a(share.storageindex), share.get_shnum())
7290+                which = (share.get_storage_index_string(), share.get_shnum())
7291                 self.state["cycle-to-date"]["corrupt-shares"].append(which)
7292                 wks = (1, 1, 1, "unknown")
7293             would_keep_shares.append(wks)
7294hunk ./src/allmydata/storage/server.py 221
7295         alreadygot = set()
7296         for share in shareset.get_shares():
7297             share.add_or_renew_lease(lease_info)
7298-            alreadygot.add(share.shnum)
7299+            alreadygot.add(share.get_shnum())
7300 
7301         for shnum in sharenums - alreadygot:
7302             if shareset.has_incoming(shnum):
7303hunk ./src/allmydata/storage/server.py 324
7304 
7305         try:
7306             shareset = self.backend.get_shareset(storageindex)
7307-            return shareset.readv(self, shares, readv)
7308+            return shareset.readv(shares, readv)
7309         finally:
7310             self.add_latency("readv", time.time() - start)
7311 
7312hunk ./src/allmydata/storage/shares.py 1
7313-#! /usr/bin/python
7314-
7315-from allmydata.storage.mutable import MutableShareFile
7316-from allmydata.storage.immutable import ShareFile
7317-
7318-def get_share_file(filename):
7319-    f = open(filename, "rb")
7320-    prefix = f.read(32)
7321-    f.close()
7322-    if prefix == MutableShareFile.MAGIC:
7323-        return MutableShareFile(filename)
7324-    # otherwise assume it's immutable
7325-    return ShareFile(filename)
7326-
7327rmfile ./src/allmydata/storage/shares.py
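The removed ``get_share_file()`` dispatched on a fixed 32-byte magic prefix to distinguish mutable from immutable containers; under the pluggable-backends scheme the equivalent classification lives in the disk backend's ``get_share()``. A self-contained sketch of that dispatch (``MUTABLE_MAGIC`` here is a placeholder, not the real ``MutableShareFile.MAGIC`` value):

```python
import os
import tempfile

MUTABLE_MAGIC = b"example mutable container magic!"  # 32-byte placeholder

def classify_share(path, magic=MUTABLE_MAGIC):
    # Mutable containers start with a fixed 32-byte magic string;
    # anything else is assumed to be an immutable share file.
    with open(path, "rb") as f:
        prefix = f.read(32)
    return "mutable" if prefix == magic else "immutable"

fd, sharepath = tempfile.mkstemp()
os.close(fd)
with open(sharepath, "wb") as f:
    f.write(MUTABLE_MAGIC + b"rest of mutable container")
assert classify_share(sharepath) == "mutable"

with open(sharepath, "wb") as f:
    f.write(b"\x00\x00\x00\x01 immutable share header")
assert classify_share(sharepath) == "immutable"
os.remove(sharepath)
```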
7328hunk ./src/allmydata/test/no_network.py 387
7329         si = tahoe_uri.from_string(uri).get_storage_index()
7330         (i_shnum, i_serverid, i_sharefp) = from_share
7331         shares_dir = to_server.backend.get_shareset(si)._sharehomedir
7332+        fileutil.fp_make_dirs(shares_dir)
7333         i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
7334 
7335     def restore_all_shares(self, shares):
7336hunk ./src/allmydata/test/no_network.py 391
7337-        for share, data in shares.items():
7338-            share.home.setContent(data)
7339+        for sharepath, data in shares.items():
7340+            FilePath(sharepath).setContent(data)
7341 
7342     def delete_share(self, (shnum, serverid, sharefp)):
7343         sharefp.remove()
7344hunk ./src/allmydata/test/test_upload.py 744
7345         servertoshnums = {} # k: server, v: set(shnum)
7346 
7347         for i, c in self.g.servers_by_number.iteritems():
7348-            for (dirp, dirns, fns) in os.walk(c.sharedir):
7349+            for (dirp, dirns, fns) in os.walk(c.backend._sharedir.path):
7350                 for fn in fns:
7351                     try:
7352                         sharenum = int(fn)
7353}
7354[docs/backends/S3.rst: remove Issues section. refs #999
7355david-sarah@jacaranda.org**20110921031625
7356 Ignore-this: c83d8f52b790bc32488869e6ee1df8c2
7357] hunk ./docs/backends/S3.rst 57
7358 
7359 Once configured, the WUI "storage server" page will provide information about
7360 how much space is being used and how many shares are being stored.
7361-
7362-
7363-Issues
7364-------
7365-
7366-Objects in an S3 bucket cannot be read for free. As a result, when Tahoe-LAFS
7367-is configured to store shares in S3 rather than on local disk, some common
7368-operations may behave differently:
7369-
7370-* Lease crawling/expiration is not yet implemented. As a result, shares will
7371-  be retained forever, and the Storage Server status web page will not show
7372-  information about the number of mutable/immutable shares present.
7373-
7374-* Enabling ``s3.max_space`` causes an extra S3 usage query to be sent for
7375-  each share upload, causing the upload process to run slightly slower and
7376-  incur more S3 request charges.
7377[docs/backends/S3.rst, disk.rst: describe type of space settings as 'quantity of space', not 'str'. refs #999
7378david-sarah@jacaranda.org**20110921031705
7379 Ignore-this: a74ed8e01b0a1ab5f07a1487d7bf138
7380] {
7381hunk ./docs/backends/S3.rst 38
7382     else, you may be able to use some other S3-like service if it is
7383     sufficiently compatible.
7384 
7385-``s3.max_space = (str, optional)``
7386+``s3.max_space = (quantity of space, optional)``
7387 
7388     This tells the server to limit how much space can be used in the S3
7389     bucket. Before each share is uploaded, the server will ask S3 for the
7390hunk ./docs/backends/disk.rst 14
7391 
7392     This enables use of the disk backend, and is the default.
7393 
7394-``reserved_space = (str, optional)``
7395+``reserved_space = (quantity of space, optional)``
7396 
7397     If provided, this value defines how much disk space is reserved: the
7398     storage server will not accept any share that causes the amount of free
7399}
7400[More fixes to tests needed for pluggable backends. refs #999
7401david-sarah@jacaranda.org**20110921184649
7402 Ignore-this: 9be0d3a98e350fd4e17a07d2c00bb4ca
7403] {
7404hunk ./src/allmydata/scripts/debug.py 8
7405 from twisted.python import usage, failure
7406 from twisted.internet import defer
7407 from twisted.scripts import trial as twisted_trial
7408+from twisted.python.filepath import FilePath
7409 
7410 
7411 class DumpOptions(usage.Options):
7412hunk ./src/allmydata/scripts/debug.py 38
7413         self['filename'] = argv_to_abspath(filename)
7414 
7415 def dump_share(options):
7416-    from allmydata.storage.mutable import MutableShareFile
7417+    from allmydata.storage.backends.disk.disk_backend import get_share
7418     from allmydata.util.encodingutil import quote_output
7419 
7420     out = options.stdout
7421hunk ./src/allmydata/scripts/debug.py 46
7422     # check the version, to see if we have a mutable or immutable share
7423     print >>out, "share filename: %s" % quote_output(options['filename'])
7424 
7425-    f = open(options['filename'], "rb")
7426-    prefix = f.read(32)
7427-    f.close()
7428-    if prefix == MutableShareFile.MAGIC:
7429-        return dump_mutable_share(options)
7430-    # otherwise assume it's immutable
7431-    return dump_immutable_share(options)
7432-
7433-def dump_immutable_share(options):
7434-    from allmydata.storage.immutable import ShareFile
7435+    share = get_share("", 0, FilePath(options['filename']))
7436+    if share.sharetype == "mutable":
7437+        return dump_mutable_share(options, share)
7438+    else:
7439+        assert share.sharetype == "immutable", share.sharetype
7440+        return dump_immutable_share(options, share)
7441 
7442hunk ./src/allmydata/scripts/debug.py 53
7443+def dump_immutable_share(options, share):
7444     out = options.stdout
7445hunk ./src/allmydata/scripts/debug.py 55
7446-    f = ShareFile(options['filename'])
7447     if not options["leases-only"]:
7448hunk ./src/allmydata/scripts/debug.py 56
7449-        dump_immutable_chk_share(f, out, options)
7450-    dump_immutable_lease_info(f, out)
7451+        dump_immutable_chk_share(share, out, options)
7452+    dump_immutable_lease_info(share, out)
7453     print >>out
7454     return 0
7455 
7456hunk ./src/allmydata/scripts/debug.py 166
7457     return when
7458 
7459 
7460-def dump_mutable_share(options):
7461-    from allmydata.storage.mutable import MutableShareFile
7462+def dump_mutable_share(options, m):
7463     from allmydata.util import base32, idlib
7464     out = options.stdout
7465hunk ./src/allmydata/scripts/debug.py 169
7466-    m = MutableShareFile(options['filename'])
7467     f = open(options['filename'], "rb")
7468     WE, nodeid = m._read_write_enabler_and_nodeid(f)
7469     num_extra_leases = m._read_num_extra_leases(f)
7470hunk ./src/allmydata/scripts/debug.py 641
7471     /home/warner/testnet/node-1/storage/shares/44k/44kai1tui348689nrw8fjegc8c/9
7472     /home/warner/testnet/node-2/storage/shares/44k/44kai1tui348689nrw8fjegc8c/2
7473     """
7474-    from allmydata.storage.server import si_a2b, storage_index_to_dir
7475-    from allmydata.util.encodingutil import listdir_unicode
7476+    from allmydata.storage.server import si_a2b
7477+    from allmydata.storage.backends.disk_backend import si_si2dir
7478+    from allmydata.util.encodingutil import quote_filepath
7479 
7480     out = options.stdout
7481hunk ./src/allmydata/scripts/debug.py 646
7482-    sharedir = storage_index_to_dir(si_a2b(options.si_s))
7483-    for d in options.nodedirs:
7484-        d = os.path.join(d, "storage/shares", sharedir)
7485-        if os.path.exists(d):
7486-            for shnum in listdir_unicode(d):
7487-                print >>out, os.path.join(d, shnum)
7488+    si = si_a2b(options.si_s)
7489+    for nodedir in options.nodedirs:
7490+        sharedir = si_si2dir(nodedir.child("storage").child("shares"), si)
7491+        if sharedir.exists():
7492+            for sharefp in sharedir.children():
7493+                print >>out, quote_filepath(sharefp, quotemarks=False)
7494 
7495     return 0
7496 
7497hunk ./src/allmydata/scripts/debug.py 878
7498         print >>err, "Error processing %s" % quote_output(si_dir)
7499         failure.Failure().printTraceback(err)
7500 
7501+
7502 class CorruptShareOptions(usage.Options):
7503     def getSynopsis(self):
7504         return "Usage: tahoe debug corrupt-share SHARE_FILENAME"
7505hunk ./src/allmydata/scripts/debug.py 902
7506 Obviously, this command should not be used in normal operation.
7507 """
7508         return t
7509+
7510     def parseArgs(self, filename):
7511         self['filename'] = filename
7512 
7513hunk ./src/allmydata/scripts/debug.py 907
7514 def corrupt_share(options):
7515+    do_corrupt_share(options.stdout, FilePath(options['filename']), options['offset'])
7516+
7517+def do_corrupt_share(out, fp, offset="block-random"):
7518     import random
7519hunk ./src/allmydata/scripts/debug.py 911
7520-    from allmydata.storage.mutable import MutableShareFile
7521-    from allmydata.storage.immutable import ShareFile
7522+    from allmydata.storage.backends.disk.mutable import MutableDiskShare
7523+    from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
7524     from allmydata.mutable.layout import unpack_header
7525     from allmydata.immutable.layout import ReadBucketProxy
7526hunk ./src/allmydata/scripts/debug.py 915
7527-    out = options.stdout
7528-    fn = options['filename']
7529-    assert options["offset"] == "block-random", "other offsets not implemented"
7530+
7531+    assert offset == "block-random", "other offsets not implemented"
7532+
7533     # first, what kind of share is it?
7534 
7535     def flip_bit(start, end):
7536hunk ./src/allmydata/scripts/debug.py 924
7537         offset = random.randrange(start, end)
7538         bit = random.randrange(0, 8)
7539         print >>out, "[%d..%d):  %d.b%d" % (start, end, offset, bit)
7540-        f = open(fn, "rb+")
7541-        f.seek(offset)
7542-        d = f.read(1)
7543-        d = chr(ord(d) ^ 0x01)
7544-        f.seek(offset)
7545-        f.write(d)
7546-        f.close()
7547+        f = fp.open("rb+")
7548+        try:
7549+            f.seek(offset)
7550+            d = f.read(1)
7551+            d = chr(ord(d) ^ 0x01)
7552+            f.seek(offset)
7553+            f.write(d)
7554+        finally:
7555+            f.close()
7556 
7557hunk ./src/allmydata/scripts/debug.py 934
7558-    f = open(fn, "rb")
7559-    prefix = f.read(32)
7560-    f.close()
7561-    if prefix == MutableShareFile.MAGIC:
7562-        # mutable
7563-        m = MutableShareFile(fn)
7564-        f = open(fn, "rb")
7565-        f.seek(m.DATA_OFFSET)
7566-        data = f.read(2000)
7567-        # make sure this slot contains an SMDF share
7568-        assert data[0] == "\x00", "non-SDMF mutable shares not supported"
7569+    f = fp.open("rb")
7570+    try:
7571+        prefix = f.read(32)
7572+    finally:
7573         f.close()
7574hunk ./src/allmydata/scripts/debug.py 939
7575+    if prefix == MutableDiskShare.MAGIC:
7576+        # mutable
7577+        m = MutableDiskShare("", 0, fp)
7578+        f = fp.open("rb")
7579+        try:
7580+            f.seek(m.DATA_OFFSET)
7581+            data = f.read(2000)
7582+            # make sure this slot contains an SDMF share
7583+            assert data[0] == "\x00", "non-SDMF mutable shares not supported"
7584+        finally:
7585+            f.close()
7586 
7587         (version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize,
7588          ig_datalen, offsets) = unpack_header(data)
7589hunk ./src/allmydata/scripts/debug.py 960
7590         flip_bit(start, end)
7591     else:
7592         # otherwise assume it's immutable
7593-        f = ShareFile(fn)
7594+        f = ImmutableDiskShare("", 0, fp)
7595         bp = ReadBucketProxy(None, None, '')
7596         offsets = bp._parse_offsets(f.read_share_data(0, 0x24))
7597         start = f._data_offset + offsets["data"]
7598hunk ./src/allmydata/storage/backends/base.py 92
7599             (testv, datav, new_length) = test_and_write_vectors[sharenum]
7600             if sharenum in shares:
7601                 if not shares[sharenum].check_testv(testv):
7602-                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
7603+                    storageserver.log("testv failed: [%d]: %r" % (sharenum, testv))
7604                     testv_is_good = False
7605                     break
7606             else:
7607hunk ./src/allmydata/storage/backends/base.py 99
7608                 # compare the vectors against an empty share, in which all
7609                 # reads return empty strings
7610                 if not EmptyShare().check_testv(testv):
7611-                    self.log("testv failed (empty): [%d] %r" % (sharenum,
7612-                                                                testv))
7613+                    storageserver.log("testv failed (empty): [%d] %r" % (sharenum, testv))
7614                     testv_is_good = False
7615                     break
7616 
7617hunk ./src/allmydata/test/test_cli.py 2892
7618             # delete one, corrupt a second
7619             shares = self.find_uri_shares(self.uri)
7620             self.failUnlessReallyEqual(len(shares), 10)
7621-            os.unlink(shares[0][2])
7622-            cso = debug.CorruptShareOptions()
7623-            cso.stdout = StringIO()
7624-            cso.parseOptions([shares[1][2]])
7625+            shares[0][2].remove()
7626+            stdout = StringIO()
7627+            sharefile = shares[1][2]
7628             storage_index = uri.from_string(self.uri).get_storage_index()
7629             self._corrupt_share_line = "  server %s, SI %s, shnum %d" % \
7630                                        (base32.b2a(shares[1][1]),
7631hunk ./src/allmydata/test/test_cli.py 2900
7632                                         base32.b2a(storage_index),
7633                                         shares[1][0])
7634-            debug.corrupt_share(cso)
7635+            debug.do_corrupt_share(stdout, sharefile)
7636         d.addCallback(_clobber_shares)
7637 
7638         d.addCallback(lambda ign: self.do_cli("check", "--verify", self.uri))
7639hunk ./src/allmydata/test/test_cli.py 3017
7640         def _clobber_shares(ignored):
7641             shares = self.find_uri_shares(self.uris[u"g\u00F6\u00F6d"])
7642             self.failUnlessReallyEqual(len(shares), 10)
7643-            os.unlink(shares[0][2])
7644+            shares[0][2].remove()
7645 
7646             shares = self.find_uri_shares(self.uris["mutable"])
7647hunk ./src/allmydata/test/test_cli.py 3020
7648-            cso = debug.CorruptShareOptions()
7649-            cso.stdout = StringIO()
7650-            cso.parseOptions([shares[1][2]])
7651+            stdout = StringIO()
7652+            sharefile = shares[1][2]
7653             storage_index = uri.from_string(self.uris["mutable"]).get_storage_index()
7654             self._corrupt_share_line = " corrupt: server %s, SI %s, shnum %d" % \
7655                                        (base32.b2a(shares[1][1]),
7656hunk ./src/allmydata/test/test_cli.py 3027
7657                                         base32.b2a(storage_index),
7658                                         shares[1][0])
7659-            debug.corrupt_share(cso)
7660+            debug.do_corrupt_share(stdout, sharefile)
7661         d.addCallback(_clobber_shares)
7662 
7663         # root
7664hunk ./src/allmydata/test/test_client.py 90
7665                            "enabled = true\n" + \
7666                            "reserved_space = 1000\n")
7667         c = client.Client(basedir)
7668-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 1000)
7669+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 1000)
7670 
7671     def test_reserved_2(self):
7672         basedir = "client.Basic.test_reserved_2"
7673hunk ./src/allmydata/test/test_client.py 101
7674                            "enabled = true\n" + \
7675                            "reserved_space = 10K\n")
7676         c = client.Client(basedir)
7677-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 10*1000)
7678+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 10*1000)
7679 
7680     def test_reserved_3(self):
7681         basedir = "client.Basic.test_reserved_3"
7682hunk ./src/allmydata/test/test_client.py 112
7683                            "enabled = true\n" + \
7684                            "reserved_space = 5mB\n")
7685         c = client.Client(basedir)
7686-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space,
7687+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space,
7688                              5*1000*1000)
7689 
7690     def test_reserved_4(self):
7691hunk ./src/allmydata/test/test_client.py 124
7692                            "enabled = true\n" + \
7693                            "reserved_space = 78Gb\n")
7694         c = client.Client(basedir)
7695-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space,
7696+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space,
7697                              78*1000*1000*1000)
7698 
7699     def test_reserved_bad(self):
7700hunk ./src/allmydata/test/test_client.py 136
7701                            "enabled = true\n" + \
7702                            "reserved_space = bogus\n")
7703         c = client.Client(basedir)
7704-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 0)
7705+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 0)
7706 
7707     def _permute(self, sb, key):
7708         return [ s.get_serverid() for s in sb.get_servers_for_psi(key) ]
7709hunk ./src/allmydata/test/test_crawler.py 7
7710 from twisted.trial import unittest
7711 from twisted.application import service
7712 from twisted.internet import defer
7713+from twisted.python.filepath import FilePath
7714 from foolscap.api import eventually, fireEventually
7715 
7716 from allmydata.util import fileutil, hashutil, pollmixin
7717hunk ./src/allmydata/test/test_crawler.py 13
7718 from allmydata.storage.server import StorageServer, si_b2a
7719 from allmydata.storage.crawler import ShareCrawler, TimeSliceExceeded
7720+from allmydata.storage.backends.disk.disk_backend import DiskBackend
7721 
7722 from allmydata.test.test_storage import FakeCanary
7723 from allmydata.test.common_util import StallMixin
7724hunk ./src/allmydata/test/test_crawler.py 115
7725 
7726     def test_immediate(self):
7727         self.basedir = "crawler/Basic/immediate"
7728-        fileutil.make_dirs(self.basedir)
7729         serverid = "\x00" * 20
7730hunk ./src/allmydata/test/test_crawler.py 116
7731-        ss = StorageServer(self.basedir, serverid)
7732+        fp = FilePath(self.basedir)
7733+        backend = DiskBackend(fp)
7734+        ss = StorageServer(serverid, backend, fp)
7735         ss.setServiceParent(self.s)
7736 
7737         sis = [self.write(i, ss, serverid) for i in range(10)]
7738hunk ./src/allmydata/test/test_crawler.py 122
7739-        statefile = os.path.join(self.basedir, "statefile")
7740+        statefp = fp.child("statefile")
7741 
7742hunk ./src/allmydata/test/test_crawler.py 124
7743-        c = BucketEnumeratingCrawler(ss, statefile, allowed_cpu_percentage=.1)
7744+        c = BucketEnumeratingCrawler(backend, statefp, allowed_cpu_percentage=.1)
7745         c.load_state()
7746 
7747         c.start_current_prefix(time.time())
7748hunk ./src/allmydata/test/test_crawler.py 137
7749         self.failUnlessEqual(sorted(sis), sorted(c.all_buckets))
7750 
7751         # check that a new crawler picks up on the state file properly
7752-        c2 = BucketEnumeratingCrawler(ss, statefile)
7753+        c2 = BucketEnumeratingCrawler(backend, statefp)
7754         c2.load_state()
7755 
7756         c2.start_current_prefix(time.time())
7757hunk ./src/allmydata/test/test_crawler.py 145
7758 
7759     def test_service(self):
7760         self.basedir = "crawler/Basic/service"
7761-        fileutil.make_dirs(self.basedir)
7762         serverid = "\x00" * 20
7763hunk ./src/allmydata/test/test_crawler.py 146
7764-        ss = StorageServer(self.basedir, serverid)
7765+        fp = FilePath(self.basedir)
7766+        backend = DiskBackend(fp)
7767+        ss = StorageServer(serverid, backend, fp)
7768         ss.setServiceParent(self.s)
7769 
7770         sis = [self.write(i, ss, serverid) for i in range(10)]
7771hunk ./src/allmydata/test/test_crawler.py 153
7772 
7773-        statefile = os.path.join(self.basedir, "statefile")
7774-        c = BucketEnumeratingCrawler(ss, statefile)
7775+        statefp = fp.child("statefile")
7776+        c = BucketEnumeratingCrawler(backend, statefp)
7777         c.setServiceParent(self.s)
7778 
7779         # it should be legal to call get_state() and get_progress() right
7780hunk ./src/allmydata/test/test_crawler.py 174
7781 
7782     def test_paced(self):
7783         self.basedir = "crawler/Basic/paced"
7784-        fileutil.make_dirs(self.basedir)
7785         serverid = "\x00" * 20
7786hunk ./src/allmydata/test/test_crawler.py 175
7787-        ss = StorageServer(self.basedir, serverid)
7788+        fp = FilePath(self.basedir)
7789+        backend = DiskBackend(fp)
7790+        ss = StorageServer(serverid, backend, fp)
7791         ss.setServiceParent(self.s)
7792 
7793         # put four buckets in each prefixdir
7794hunk ./src/allmydata/test/test_crawler.py 186
7795             for tail in range(4):
7796                 sis.append(self.write(i, ss, serverid, tail))
7797 
7798-        statefile = os.path.join(self.basedir, "statefile")
7799+        statefp = fp.child("statefile")
7800 
7801hunk ./src/allmydata/test/test_crawler.py 188
7802-        c = PacedCrawler(ss, statefile)
7803+        c = PacedCrawler(backend, statefp)
7804         c.load_state()
7805         try:
7806             c.start_current_prefix(time.time())
7807hunk ./src/allmydata/test/test_crawler.py 213
7808         del c
7809 
7810         # start a new crawler, it should start from the beginning
7811-        c = PacedCrawler(ss, statefile)
7812+        c = PacedCrawler(backend, statefp)
7813         c.load_state()
7814         try:
7815             c.start_current_prefix(time.time())
7816hunk ./src/allmydata/test/test_crawler.py 226
7817         c.cpu_slice = PacedCrawler.cpu_slice
7818 
7819         # a third crawler should pick up from where it left off
7820-        c2 = PacedCrawler(ss, statefile)
7821+        c2 = PacedCrawler(backend, statefp)
7822         c2.all_buckets = c.all_buckets[:]
7823         c2.load_state()
7824         c2.countdown = -1
7825hunk ./src/allmydata/test/test_crawler.py 237
7826 
7827         # now stop it at the end of a bucket (countdown=4), to exercise a
7828         # different place that checks the time
7829-        c = PacedCrawler(ss, statefile)
7830+        c = PacedCrawler(backend, statefp)
7831         c.load_state()
7832         c.countdown = 4
7833         try:
7834hunk ./src/allmydata/test/test_crawler.py 256
7835 
7836         # stop it again at the end of the bucket, check that a new checker
7837         # picks up correctly
7838-        c = PacedCrawler(ss, statefile)
7839+        c = PacedCrawler(backend, statefp)
7840         c.load_state()
7841         c.countdown = 4
7842         try:
7843hunk ./src/allmydata/test/test_crawler.py 266
7844         # that should stop at the end of one of the buckets.
7845         c.save_state()
7846 
7847-        c2 = PacedCrawler(ss, statefile)
7848+        c2 = PacedCrawler(backend, statefp)
7849         c2.all_buckets = c.all_buckets[:]
7850         c2.load_state()
7851         c2.countdown = -1
7852hunk ./src/allmydata/test/test_crawler.py 277
7853 
7854     def test_paced_service(self):
7855         self.basedir = "crawler/Basic/paced_service"
7856-        fileutil.make_dirs(self.basedir)
7857         serverid = "\x00" * 20
7858hunk ./src/allmydata/test/test_crawler.py 278
7859-        ss = StorageServer(self.basedir, serverid)
7860+        fp = FilePath(self.basedir)
7861+        backend = DiskBackend(fp)
7862+        ss = StorageServer(serverid, backend, fp)
7863         ss.setServiceParent(self.s)
7864 
7865         sis = [self.write(i, ss, serverid) for i in range(10)]
7866hunk ./src/allmydata/test/test_crawler.py 285
7867 
7868-        statefile = os.path.join(self.basedir, "statefile")
7869-        c = PacedCrawler(ss, statefile)
7870+        statefp = fp.child("statefile")
7871+        c = PacedCrawler(backend, statefp)
7872 
7873         did_check_progress = [False]
7874         def check_progress():
7875hunk ./src/allmydata/test/test_crawler.py 345
7876         # and read the stdout when it runs.
7877 
7878         self.basedir = "crawler/Basic/cpu_usage"
7879-        fileutil.make_dirs(self.basedir)
7880         serverid = "\x00" * 20
7881hunk ./src/allmydata/test/test_crawler.py 346
7882-        ss = StorageServer(self.basedir, serverid)
7883+        fp = FilePath(self.basedir)
7884+        backend = DiskBackend(fp)
7885+        ss = StorageServer(serverid, backend, fp)
7886         ss.setServiceParent(self.s)
7887 
7888         for i in range(10):
7889hunk ./src/allmydata/test/test_crawler.py 354
7890             self.write(i, ss, serverid)
7891 
7892-        statefile = os.path.join(self.basedir, "statefile")
7893-        c = ConsumingCrawler(ss, statefile)
7894+        statefp = fp.child("statefile")
7895+        c = ConsumingCrawler(backend, statefp)
7896         c.setServiceParent(self.s)
7897 
7898         # this will run as fast as it can, consuming about 50ms per call to
7899hunk ./src/allmydata/test/test_crawler.py 391
7900 
7901     def test_empty_subclass(self):
7902         self.basedir = "crawler/Basic/empty_subclass"
7903-        fileutil.make_dirs(self.basedir)
7904         serverid = "\x00" * 20
7905hunk ./src/allmydata/test/test_crawler.py 392
7906-        ss = StorageServer(self.basedir, serverid)
7907+        fp = FilePath(self.basedir)
7908+        backend = DiskBackend(fp)
7909+        ss = StorageServer(serverid, backend, fp)
7910         ss.setServiceParent(self.s)
7911 
7912         for i in range(10):
7913hunk ./src/allmydata/test/test_crawler.py 400
7914             self.write(i, ss, serverid)
7915 
7916-        statefile = os.path.join(self.basedir, "statefile")
7917-        c = ShareCrawler(ss, statefile)
7918+        statefp = fp.child("statefile")
7919+        c = ShareCrawler(backend, statefp)
7920         c.slow_start = 0
7921         c.setServiceParent(self.s)
7922 
7923hunk ./src/allmydata/test/test_crawler.py 417
7924         d.addCallback(_done)
7925         return d
7926 
7927-
7928     def test_oneshot(self):
7929         self.basedir = "crawler/Basic/oneshot"
7930hunk ./src/allmydata/test/test_crawler.py 419
7931-        fileutil.make_dirs(self.basedir)
7932         serverid = "\x00" * 20
7933hunk ./src/allmydata/test/test_crawler.py 420
7934-        ss = StorageServer(self.basedir, serverid)
7935+        fp = FilePath(self.basedir)
7936+        backend = DiskBackend(fp)
7937+        ss = StorageServer(serverid, backend, fp)
7938         ss.setServiceParent(self.s)
7939 
7940         for i in range(30):
7941hunk ./src/allmydata/test/test_crawler.py 428
7942             self.write(i, ss, serverid)
7943 
7944-        statefile = os.path.join(self.basedir, "statefile")
7945-        c = OneShotCrawler(ss, statefile)
7946+        statefp = fp.child("statefile")
7947+        c = OneShotCrawler(backend, statefp)
7948         c.setServiceParent(self.s)
7949 
7950         d = c.finished_d
7951hunk ./src/allmydata/test/test_crawler.py 447
7952             self.failUnlessEqual(s["current-cycle"], None)
7953         d.addCallback(_check)
7954         return d
7955-
7956hunk ./src/allmydata/test/test_deepcheck.py 23
7957      ShouldFailMixin
7958 from allmydata.test.common_util import StallMixin
7959 from allmydata.test.no_network import GridTestMixin
7960+from allmydata.scripts import debug
7961+
7962 
7963 timeout = 2400 # One of these took 1046.091s on Zandr's ARM box.
7964 
7965hunk ./src/allmydata/test/test_deepcheck.py 905
7966         d.addErrback(self.explain_error)
7967         return d
7968 
7969-
7970-
7971     def set_up_damaged_tree(self):
7972         # 6.4s
7973 
7974hunk ./src/allmydata/test/test_deepcheck.py 989
7975 
7976         return d
7977 
7978-    def _run_cli(self, argv):
7979-        stdout, stderr = StringIO(), StringIO()
7980-        # this can only do synchronous operations
7981-        assert argv[0] == "debug"
7982-        runner.runner(argv, run_by_human=False, stdout=stdout, stderr=stderr)
7983-        return stdout.getvalue()
7984-
7985     def _delete_some_shares(self, node):
7986         self.delete_shares_numbered(node.get_uri(), [0,1])
7987 
7988hunk ./src/allmydata/test/test_deepcheck.py 995
7989     def _corrupt_some_shares(self, node):
7990         for (shnum, serverid, sharefile) in self.find_uri_shares(node.get_uri()):
7991             if shnum in (0,1):
7992-                self._run_cli(["debug", "corrupt-share", sharefile])
7993+                debug.do_corrupt_share(StringIO(), sharefile)
7994 
7995     def _delete_most_shares(self, node):
7996         self.delete_shares_numbered(node.get_uri(), range(1,10))
7997hunk ./src/allmydata/test/test_deepcheck.py 1000
7998 
7999-
8000     def check_is_healthy(self, cr, where):
8001         try:
8002             self.failUnless(ICheckResults.providedBy(cr), (cr, type(cr), where))
8003hunk ./src/allmydata/test/test_download.py 134
8004             for shnum in shares_for_server:
8005                 share_dir = self.get_server(i).backend.get_shareset(si)._sharehomedir
8006                 fileutil.fp_make_dirs(share_dir)
8007-                share_dir.child(str(shnum)).setContent(shares[shnum])
8008+                share_dir.child(str(shnum)).setContent(shares_for_server[shnum])
8009 
8010     def load_shares(self, ignored=None):
8011         # this uses the data generated by create_shares() to populate the
8012hunk ./src/allmydata/test/test_hung_server.py 32
8013 
8014     def _break(self, servers):
8015         for ss in servers:
8016-            self.g.break_server(ss.get_serverid())
8017+            self.g.break_server(ss.original.get_serverid())
8018 
8019     def _hang(self, servers, **kwargs):
8020         for ss in servers:
8021hunk ./src/allmydata/test/test_hung_server.py 67
8022         serverids = [ss.original.get_serverid() for ss in from_servers]
8023         for (i_shnum, i_serverid, i_sharefp) in self.shares:
8024             if i_serverid in serverids:
8025-                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
8026+                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server.original)
8027 
8028         self.shares = self.find_uri_shares(self.uri)
8029 
8030hunk ./src/allmydata/test/test_mutable.py 3669
8031         # Now execute each assignment by writing the storage.
8032         for (share, servernum) in assignments:
8033             sharedata = base64.b64decode(self.sdmf_old_shares[share])
8034-            storage_dir = self.get_server(servernum).backend.get_shareset(si).sharehomedir
8035+            storage_dir = self.get_server(servernum).backend.get_shareset(si)._sharehomedir
8036             fileutil.fp_make_dirs(storage_dir)
8037             storage_dir.child("%d" % share).setContent(sharedata)
8038         # ...and verify that the shares are there.
8039hunk ./src/allmydata/test/test_no_network.py 10
8040 from allmydata.immutable.upload import Data
8041 from allmydata.util.consumer import download_to_data
8042 
8043+
8044 class Harness(unittest.TestCase):
8045     def setUp(self):
8046         self.s = service.MultiService()
8047hunk ./src/allmydata/test/test_storage.py 1
8048-import time, os.path, platform, stat, re, simplejson, struct, shutil
8049+import time, os.path, platform, stat, re, simplejson, struct, shutil, itertools
8050 
8051 import mock
8052 
8053hunk ./src/allmydata/test/test_storage.py 6
8054 from twisted.trial import unittest
8055-
8056 from twisted.internet import defer
8057 from twisted.application import service
8058hunk ./src/allmydata/test/test_storage.py 8
8059+from twisted.python.filepath import FilePath
8060 from foolscap.api import fireEventually
8061hunk ./src/allmydata/test/test_storage.py 10
8062-import itertools
8063+
8064 from allmydata import interfaces
8065 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
8066 from allmydata.storage.server import StorageServer
8067hunk ./src/allmydata/test/test_storage.py 14
8068+from allmydata.storage.backends.disk.disk_backend import DiskBackend
8069 from allmydata.storage.backends.disk.mutable import MutableDiskShare
8070 from allmydata.storage.bucket import BucketWriter, BucketReader
8071 from allmydata.storage.common import DataTooLargeError, \
8072hunk ./src/allmydata/test/test_storage.py 310
8073         return self.sparent.stopService()
8074 
8075     def workdir(self, name):
8076-        basedir = os.path.join("storage", "Server", name)
8077-        return basedir
8078+        return FilePath("storage").child("Server").child(name)
8079 
8080     def create(self, name, reserved_space=0, klass=StorageServer):
8081         workdir = self.workdir(name)
8082hunk ./src/allmydata/test/test_storage.py 314
8083-        ss = klass(workdir, "\x00" * 20, reserved_space=reserved_space,
8084+        backend = DiskBackend(workdir, readonly=False, reserved_space=reserved_space)
8085+        ss = klass("\x00" * 20, backend, workdir,
8086                    stats_provider=FakeStatsProvider())
8087         ss.setServiceParent(self.sparent)
8088         return ss
8089hunk ./src/allmydata/test/test_storage.py 1386
8090 
8091     def tearDown(self):
8092         self.sparent.stopService()
8093-        shutil.rmtree(self.workdir("MDMFProxies storage test server"))
8094+        fileutil.fp_remove(self.workdir("MDMFProxies storage test server"))
8095 
8096 
8097     def write_enabler(self, we_tag):
8098hunk ./src/allmydata/test/test_storage.py 2781
8099         return self.sparent.stopService()
8100 
8101     def workdir(self, name):
8102-        basedir = os.path.join("storage", "Server", name)
8103-        return basedir
8104+        return FilePath("storage").child("Server").child(name)
8105 
8106     def create(self, name):
8107         workdir = self.workdir(name)
8108hunk ./src/allmydata/test/test_storage.py 2785
8109-        ss = StorageServer(workdir, "\x00" * 20)
8110+        backend = DiskBackend(workdir)
8111+        ss = StorageServer("\x00" * 20, backend, workdir)
8112         ss.setServiceParent(self.sparent)
8113         return ss
8114 
8115hunk ./src/allmydata/test/test_storage.py 4061
8116         }
8117 
8118         basedir = "storage/WebStatus/status_right_disk_stats"
8119-        fileutil.make_dirs(basedir)
8120-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=reserved_space)
8121-        expecteddir = ss.sharedir
8122+        fp = FilePath(basedir)
8123+        backend = DiskBackend(fp, readonly=False, reserved_space=reserved_space)
8124+        ss = StorageServer("\x00" * 20, backend, fp)
8125+        expecteddir = backend._sharedir
8126         ss.setServiceParent(self.s)
8127         w = StorageStatus(ss)
8128         html = w.renderSynchronously()
8129hunk ./src/allmydata/test/test_storage.py 4084
8130 
8131     def test_readonly(self):
8132         basedir = "storage/WebStatus/readonly"
8133-        fileutil.make_dirs(basedir)
8134-        ss = StorageServer(basedir, "\x00" * 20, readonly_storage=True)
8135+        fp = FilePath(basedir)
8136+        backend = DiskBackend(fp, readonly=True)
8137+        ss = StorageServer("\x00" * 20, backend, fp)
8138         ss.setServiceParent(self.s)
8139         w = StorageStatus(ss)
8140         html = w.renderSynchronously()
8141hunk ./src/allmydata/test/test_storage.py 4096
8142 
8143     def test_reserved(self):
8144         basedir = "storage/WebStatus/reserved"
8145-        fileutil.make_dirs(basedir)
8146-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=10e6)
8147-        ss.setServiceParent(self.s)
8148-        w = StorageStatus(ss)
8149-        html = w.renderSynchronously()
8150-        self.failUnlessIn("<h1>Storage Server Status</h1>", html)
8151-        s = remove_tags(html)
8152-        self.failUnlessIn("Reserved space: - 10.00 MB (10000000)", s)
8153-
8154-    def test_huge_reserved(self):
8155-        basedir = "storage/WebStatus/reserved"
8156-        fileutil.make_dirs(basedir)
8157-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=10e6)
8158+        fp = FilePath(basedir)
8159+        backend = DiskBackend(fp, readonly=False, reserved_space=10e6)
8160+        ss = StorageServer("\x00" * 20, backend, fp)
8161         ss.setServiceParent(self.s)
8162         w = StorageStatus(ss)
8163         html = w.renderSynchronously()
8164hunk ./src/allmydata/test/test_upload.py 3
8165 # -*- coding: utf-8 -*-
8166 
8167-import os, shutil
8168+import os
8169 from cStringIO import StringIO
8170 from twisted.trial import unittest
8171 from twisted.python.failure import Failure
8172hunk ./src/allmydata/test/test_upload.py 14
8173 from allmydata import uri, monitor, client
8174 from allmydata.immutable import upload, encode
8175 from allmydata.interfaces import FileTooLargeError, UploadUnhappinessError
8176-from allmydata.util import log
8177+from allmydata.util import log, fileutil
8178 from allmydata.util.assertutil import precondition
8179 from allmydata.util.deferredutil import DeferredListShouldSucceed
8180 from allmydata.test.no_network import GridTestMixin
8181hunk ./src/allmydata/test/test_upload.py 972
8182                                         readonly=True))
8183         # Remove the first share from server 0.
8184         def _remove_share_0_from_server_0():
8185-            share_location = self.shares[0][2]
8186-            os.remove(share_location)
8187+            self.shares[0][2].remove()
8188         d.addCallback(lambda ign:
8189             _remove_share_0_from_server_0())
8190         # Set happy = 4 in the client.
8191hunk ./src/allmydata/test/test_upload.py 1847
8192             self._copy_share_to_server(3, 1)
8193             storedir = self.get_serverdir(0)
8194             # remove the storedir, wiping out any existing shares
8195-            shutil.rmtree(storedir)
8196+            fileutil.fp_remove(storedir)
8197             # create an empty storedir to replace the one we just removed
8198hunk ./src/allmydata/test/test_upload.py 1849
8199-            os.mkdir(storedir)
8200+            storedir.mkdir()
8201             client = self.g.clients[0]
8202             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
8203             return client
8204hunk ./src/allmydata/test/test_upload.py 1888
8205             self._copy_share_to_server(3, 1)
8206             storedir = self.get_serverdir(0)
8207             # remove the storedir, wiping out any existing shares
8208-            shutil.rmtree(storedir)
8209+            fileutil.fp_remove(storedir)
8210             # create an empty storedir to replace the one we just removed
8211hunk ./src/allmydata/test/test_upload.py 1890
8212-            os.mkdir(storedir)
8213+            storedir.mkdir()
8214             client = self.g.clients[0]
8215             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
8216             return client
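The test_upload.py hunks above replace `shutil.rmtree(storedir)` / `os.mkdir(storedir)` with FilePath-style calls (`fileutil.fp_remove(storedir)` / `storedir.mkdir()`), since `get_serverdir()` now returns a FilePath rather than a path string. A minimal plain-os sketch of the removal idiom being assumed here (not the actual Tahoe `fileutil.fp_remove` implementation) is: delete a file or directory tree, and tolerate a target that is already gone:

```python
import errno
import os
import shutil

def fp_remove(path):
    """Remove the file or directory tree at `path`.

    A missing target is not an error, which lets tests wipe a storedir
    unconditionally before recreating it. (Illustrative sketch only; the
    real fileutil.fp_remove operates on a Twisted FilePath.)
    """
    try:
        if os.path.isdir(path):
            shutil.rmtree(path)   # recursive removal for directories
        else:
            os.remove(path)       # single-file removal
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise                 # re-raise anything other than "not found"
```

Calling `fp_remove` twice on the same storedir is then safe, which matches how the tests use it: remove whatever shares exist, then `mkdir` a fresh empty storedir.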
8217hunk ./src/allmydata/test/test_web.py 4870
8218         d.addErrback(self.explain_web_error)
8219         return d
8220 
8221-    def _assert_leasecount(self, ignored, which, expected):
8222+    def _assert_leasecount(self, which, expected):
8223         lease_counts = self.count_leases(self.uris[which])
8224         for (fn, num_leases) in lease_counts:
8225             if num_leases != expected:
8226hunk ./src/allmydata/test/test_web.py 4903
8227                 self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
8228         d.addCallback(_compute_fileurls)
8229 
8230-        d.addCallback(self._assert_leasecount, "one", 1)
8231-        d.addCallback(self._assert_leasecount, "two", 1)
8232-        d.addCallback(self._assert_leasecount, "mutable", 1)
8233+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8234+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8235+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8236 
8237         d.addCallback(self.CHECK, "one", "t=check") # no add-lease
8238         def _got_html_good(res):
8239hunk ./src/allmydata/test/test_web.py 4913
8240             self.failIf("Not Healthy" in res, res)
8241         d.addCallback(_got_html_good)
8242 
8243-        d.addCallback(self._assert_leasecount, "one", 1)
8244-        d.addCallback(self._assert_leasecount, "two", 1)
8245-        d.addCallback(self._assert_leasecount, "mutable", 1)
8246+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8247+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8248+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8249 
8250         # this CHECK uses the original client, which uses the same
8251         # lease-secrets, so it will just renew the original lease
8252hunk ./src/allmydata/test/test_web.py 4922
8253         d.addCallback(self.CHECK, "one", "t=check&add-lease=true")
8254         d.addCallback(_got_html_good)
8255 
8256-        d.addCallback(self._assert_leasecount, "one", 1)
8257-        d.addCallback(self._assert_leasecount, "two", 1)
8258-        d.addCallback(self._assert_leasecount, "mutable", 1)
8259+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8260+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8261+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8262 
8263         # this CHECK uses an alternate client, which adds a second lease
8264         d.addCallback(self.CHECK, "one", "t=check&add-lease=true", clientnum=1)
8265hunk ./src/allmydata/test/test_web.py 4930
8266         d.addCallback(_got_html_good)
8267 
8268-        d.addCallback(self._assert_leasecount, "one", 2)
8269-        d.addCallback(self._assert_leasecount, "two", 1)
8270-        d.addCallback(self._assert_leasecount, "mutable", 1)
8271+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8272+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8273+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8274 
8275         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true")
8276         d.addCallback(_got_html_good)
8277hunk ./src/allmydata/test/test_web.py 4937
8278 
8279-        d.addCallback(self._assert_leasecount, "one", 2)
8280-        d.addCallback(self._assert_leasecount, "two", 1)
8281-        d.addCallback(self._assert_leasecount, "mutable", 1)
8282+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8283+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8284+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8285 
8286         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true",
8287                       clientnum=1)
8288hunk ./src/allmydata/test/test_web.py 4945
8289         d.addCallback(_got_html_good)
8290 
8291-        d.addCallback(self._assert_leasecount, "one", 2)
8292-        d.addCallback(self._assert_leasecount, "two", 1)
8293-        d.addCallback(self._assert_leasecount, "mutable", 2)
8294+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8295+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8296+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 2))
8297 
8298         d.addErrback(self.explain_web_error)
8299         return d
8300hunk ./src/allmydata/test/test_web.py 4989
8301             self.failUnlessReallyEqual(len(units), 4+1)
8302         d.addCallback(_done)
8303 
8304-        d.addCallback(self._assert_leasecount, "root", 1)
8305-        d.addCallback(self._assert_leasecount, "one", 1)
8306-        d.addCallback(self._assert_leasecount, "mutable", 1)
8307+        d.addCallback(lambda ign: self._assert_leasecount("root", 1))
8308+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8309+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8310 
8311         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true")
8312         d.addCallback(_done)
8313hunk ./src/allmydata/test/test_web.py 4996
8314 
8315-        d.addCallback(self._assert_leasecount, "root", 1)
8316-        d.addCallback(self._assert_leasecount, "one", 1)
8317-        d.addCallback(self._assert_leasecount, "mutable", 1)
8318+        d.addCallback(lambda ign: self._assert_leasecount("root", 1))
8319+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8320+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8321 
8322         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true",
8323                       clientnum=1)
8324hunk ./src/allmydata/test/test_web.py 5004
8325         d.addCallback(_done)
8326 
8327-        d.addCallback(self._assert_leasecount, "root", 2
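The test_web.py hunks drop the `ignored` parameter from `_assert_leasecount` and rewrite every caller from `d.addCallback(self._assert_leasecount, "one", 1)` to `d.addCallback(lambda ign: self._assert_leasecount("one", 1))`. The reason is the Deferred calling convention: `addCallback` always passes the previous callback's result as the first argument, so once the method no longer accepts it, a lambda must discard it explicitly. A tiny stand-in class (not Twisted or Tahoe code, just enough to show the convention) illustrates:

```python
class MiniDeferred:
    """Minimal stand-in for a Twisted Deferred: each queued callback
    receives the previous callback's result as its first argument."""
    def __init__(self, result=None):
        self.result = result
        self._callbacks = []

    def addCallback(self, f, *args):
        self._callbacks.append((f, args))
        return self

    def fire(self):
        for f, args in self._callbacks:
            # old-style call sites relied on this implicit first argument
            self.result = f(self.result, *args)
        return self.result

def assert_leasecount(which, expected):
    # new-style helper: no leading `ignored` parameter
    return (which, expected)

d = MiniDeferred("previous result")
# With the old signature, d.addCallback(assert_leasecount, "one", 1) worked
# because "previous result" landed in the `ignored` slot. With the new
# signature, the chained result must be discarded by a lambda:
d.addCallback(lambda ign: assert_leasecount("one", 1))
```

The same transformation is applied mechanically at every `_assert_leasecount` call site in the hunks above, leaving the assertion logic itself unchanged.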