Ticket #999: pluggable-backends-davidsarah-v13a.darcs.patch

File pluggable-backends-davidsarah-v13a.darcs.patch, 532.1 KB (added by davidsarah at 2011-09-28T01:45:53Z)

This does not include the asyncification changes from v14, but does include a couple of fixes for failures in test_system.
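
For orientation, the core of this bundle splits the storage server into a backend object (IStorageBackend) that maps storage indices to sharesets (IShareSet), which in turn enumerate shares and create bucket readers/writers. Below is a minimal illustrative sketch of how that layering is meant to be used, based on the DiskBackend code in this bundle; the count_shares helper and the FilePath setup are illustrative only, and since the bundle is still work-in-progress the details may not run as-is:

    # Illustrative sketch only, not part of the patch bundle. Assumes the
    # backend classes added by this bundle; DiskBackend expects a Twisted
    # FilePath for its storage directory.
    from twisted.python.filepath import FilePath
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    def count_shares(storedir, storageindex):
        # DiskBackend implements IStorageBackend; get_shareset() maps a
        # (binary) storage index to the IShareSet holding its shares.
        backend = DiskBackend(FilePath(storedir), readonly=True)
        shareset = backend.get_shareset(storageindex)
        # get_shares() yields completed shares only, excluding any still
        # in the incoming/ directory.
        return len(list(shareset.get_shares()))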

39 patches for repository http://tahoe-lafs.org/source/tahoe/trunk:

Thu Aug 25 01:32:17 BST 2011  david-sarah@jacaranda.org
  * interfaces.py: 'which -> that' grammar cleanup.

Tue Sep 20 00:29:26 BST 2011  david-sarah@jacaranda.org
  * Pluggable backends -- new and moved files, changes to moved files. refs #999

Tue Sep 20 00:32:56 BST 2011  david-sarah@jacaranda.org
  * Pluggable backends -- all other changes. refs #999

Tue Sep 20 04:38:03 BST 2011  david-sarah@jacaranda.org
  * Work-in-progress, includes fix to bug involving BucketWriter. refs #999

Tue Sep 20 18:17:37 BST 2011  david-sarah@jacaranda.org
  * docs/backends: document the configuration options for the pluggable backends scheme. refs #999

Wed Sep 21 04:12:07 BST 2011  david-sarah@jacaranda.org
  * Fix some incorrect attribute accesses. refs #999

Wed Sep 21 04:16:25 BST 2011  david-sarah@jacaranda.org
  * docs/backends/S3.rst: remove Issues section. refs #999

Wed Sep 21 04:17:05 BST 2011  david-sarah@jacaranda.org
  * docs/backends/S3.rst, disk.rst: describe type of space settings as 'quantity of space', not 'str'. refs #999

Wed Sep 21 19:46:49 BST 2011  david-sarah@jacaranda.org
  * More fixes to tests needed for pluggable backends. refs #999

Wed Sep 21 23:14:21 BST 2011  david-sarah@jacaranda.org
  * Fix more shallow bugs, mainly FilePathification. Also, remove the max_space_per_bucket parameter from BucketWriter since it can be obtained from the _max_size attribute of the share (via a new get_allocated_size() accessor). refs #999

Wed Sep 21 23:20:38 BST 2011  david-sarah@jacaranda.org
  * uri.py: resolve a conflict between trunk and the pluggable-backends patches. refs #999

Thu Sep 22 05:54:51 BST 2011  david-sarah@jacaranda.org
  * Fix some more test failures. refs #999

Thu Sep 22 19:30:08 BST 2011  david-sarah@jacaranda.org
  * Fix most of the crawler tests. refs #999

Thu Sep 22 19:33:23 BST 2011  david-sarah@jacaranda.org
  * Reinstate the cancel_lease methods of ImmutableDiskShare and MutableDiskShare, since they are needed for lease expiry. refs #999

Fri Sep 23 02:20:44 BST 2011  david-sarah@jacaranda.org
  * Blank line cleanups.

Fri Sep 23 05:08:25 BST 2011  david-sarah@jacaranda.org
  * mutable/publish.py: elements should not be removed from a dictionary while it is being iterated over. refs #393

Fri Sep 23 05:10:03 BST 2011  david-sarah@jacaranda.org
  * A few comment cleanups. refs #999

Fri Sep 23 05:11:15 BST 2011  david-sarah@jacaranda.org
  * Move advise_corrupt_share to allmydata/storage/backends/base.py, since it will be common to the disk and S3 backends. refs #999

Fri Sep 23 05:13:14 BST 2011  david-sarah@jacaranda.org
  * Add incomplete S3 backend. refs #999

Fri Sep 23 21:37:23 BST 2011  david-sarah@jacaranda.org
  * interfaces.py: add fill_in_space_stats method to IStorageBackend. refs #999

Fri Sep 23 21:44:25 BST 2011  david-sarah@jacaranda.org
  * Remove redundant si_s argument from check_write_enabler. refs #999

Fri Sep 23 21:46:11 BST 2011  david-sarah@jacaranda.org
  * Implement readv for immutable shares. refs #999

Fri Sep 23 21:49:14 BST 2011  david-sarah@jacaranda.org
  * The cancel secret needs to be unique, even if it isn't explicitly provided. refs #999

Fri Sep 23 21:49:45 BST 2011  david-sarah@jacaranda.org
  * Make EmptyShare.check_testv a simple function. refs #999

Fri Sep 23 21:52:19 BST 2011  david-sarah@jacaranda.org
  * Update the null backend to take into account interface changes. Also, it now records which shares are present, but not their contents. refs #999

Fri Sep 23 21:53:45 BST 2011  david-sarah@jacaranda.org
  * Update the S3 backend. refs #999

Fri Sep 23 21:55:10 BST 2011  david-sarah@jacaranda.org
  * Minor cleanup to disk backend. refs #999

Fri Sep 23 23:09:35 BST 2011  david-sarah@jacaranda.org
  * Add 'has-immutable-readv' to server version information. refs #999

Tue Sep 27 08:09:47 BST 2011  david-sarah@jacaranda.org
  * util/deferredutil.py: add some utilities for asynchronous iteration. refs #999

Tue Sep 27 08:14:03 BST 2011  david-sarah@jacaranda.org
  * test_storage.py: fix test_status_bad_disk_stats. refs #999

Tue Sep 27 08:15:44 BST 2011  david-sarah@jacaranda.org
  * Cleanups to disk backend. refs #999

Tue Sep 27 08:18:55 BST 2011  david-sarah@jacaranda.org
  * Cleanups to S3 backend (not including Deferred changes). refs #999

Tue Sep 27 08:28:48 BST 2011  david-sarah@jacaranda.org
  * test_storage.py: fix test_no_st_blocks. refs #999

Tue Sep 27 08:35:30 BST 2011  david-sarah@jacaranda.org
  * mutable/publish.py: resolve conflicting patches. refs #999

Wed Sep 28 02:37:29 BST 2011  david-sarah@jacaranda.org
  * Undo an incompatible change to RIStorageServer. refs #999

Wed Sep 28 02:38:57 BST 2011  david-sarah@jacaranda.org
  * test_system.py: incorrect arguments were being passed to the constructor for MutableDiskShare. refs #999

Wed Sep 28 02:40:19 BST 2011  david-sarah@jacaranda.org
  * test_system.py: more debug output for a failing check in test_filesystem. refs #999

Wed Sep 28 02:40:49 BST 2011  david-sarah@jacaranda.org
  * scripts/debug.py: fix incorrect arguments to dump_immutable_share. refs #999

Wed Sep 28 02:41:26 BST 2011  david-sarah@jacaranda.org
  * mutable/publish.py: don't crash if there are no writers in _report_verinfo. refs #999

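An aside on the mutable/publish.py entry above (refs #393): removing entries from a dict while it is being iterated over raises RuntimeError in Python. The usual remedy, sketched here for illustration rather than taken from the patch itself, is to iterate over a snapshot of the keys:

    # Illustrative only, not part of the patch. Deleting from a dict
    # during direct iteration raises "RuntimeError: dictionary changed
    # size during iteration"; iterating over a snapshot of the keys
    # avoids that.
    writers = {0: 'w0', 1: 'w1', 2: 'w2'}
    for shnum in list(writers.keys()):
        if shnum != 1:
            del writers[shnum]
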
New patches:

[interfaces.py: 'which -> that' grammar cleanup.
david-sarah@jacaranda.org**20110825003217
 Ignore-this: a3e15f3676de1b346ad78aabdfb8cac6
] {
hunk ./src/allmydata/interfaces.py 38
     the StubClient. This object doesn't actually offer any services, but the
     announcement helps the Introducer keep track of which clients are
     subscribed (so the grid admin can keep track of things like the size of
-    the grid and the client versions in use. This is the (empty)
+    the grid and the client versions in use). This is the (empty)
     RemoteInterface for the StubClient."""
 
 class RIBucketWriter(RemoteInterface):
hunk ./src/allmydata/interfaces.py 276
         (binary) storage index string, and 'shnum' is the integer share
         number. 'reason' is a human-readable explanation of the problem,
         probably including some expected hash values and the computed ones
-        which did not match. Corruption advisories for mutable shares should
+        that did not match. Corruption advisories for mutable shares should
         include a hash of the public key (the same value that appears in the
         mutable-file verify-cap), since the current share format does not
         store that on disk.
hunk ./src/allmydata/interfaces.py 413
           remote_host: the IAddress, if connected, otherwise None
 
         This method is intended for monitoring interfaces, such as a web page
-        which describes connecting and connected peers.
+        that describes connecting and connected peers.
         """
 
     def get_all_peerids():
hunk ./src/allmydata/interfaces.py 515
 
     # TODO: rename to get_read_cap()
     def get_readonly():
-        """Return another IURI instance, which represents a read-only form of
+        """Return another IURI instance that represents a read-only form of
         this one. If is_readonly() is True, this returns self."""
 
     def get_verify_cap():
hunk ./src/allmydata/interfaces.py 542
         passing into init_from_string."""
 
 class IDirnodeURI(Interface):
-    """I am a URI which represents a dirnode."""
+    """I am a URI that represents a dirnode."""
 
 class IFileURI(Interface):
hunk ./src/allmydata/interfaces.py 545
-    """I am a URI which represents a filenode."""
+    """I am a URI that represents a filenode."""
     def get_size():
         """Return the length (in bytes) of the file that I represent."""
 
hunk ./src/allmydata/interfaces.py 553
     pass
 
 class IMutableFileURI(Interface):
-    """I am a URI which represents a mutable filenode."""
+    """I am a URI that represents a mutable filenode."""
     def get_extension_params():
         """Return the extension parameters in the URI"""
 
hunk ./src/allmydata/interfaces.py 856
         """
 
 class IFileNode(IFilesystemNode):
-    """I am a node which represents a file: a sequence of bytes. I am not a
+    """I am a node that represents a file: a sequence of bytes. I am not a
     container, like IDirectoryNode."""
     def get_best_readable_version():
         """Return a Deferred that fires with an IReadable for the 'best'
hunk ./src/allmydata/interfaces.py 905
     multiple versions of a file present in the grid, some of which might be
     unrecoverable (i.e. have fewer than 'k' shares). These versions are
     loosely ordered: each has a sequence number and a hash, and any version
-    with seqnum=N was uploaded by a node which has seen at least one version
+    with seqnum=N was uploaded by a node that has seen at least one version
     with seqnum=N-1.
 
     The 'servermap' (an instance of IMutableFileServerMap) is used to
hunk ./src/allmydata/interfaces.py 1014
         as a guide to where the shares are located.
 
         I return a Deferred that fires with the requested contents, or
-        errbacks with UnrecoverableFileError. Note that a servermap which was
+        errbacks with UnrecoverableFileError. Note that a servermap that was
         updated with MODE_ANYTHING or MODE_READ may not know about shares for
         all versions (those modes stop querying servers as soon as they can
         fulfil their goals), so you may want to use MODE_CHECK (which checks
hunk ./src/allmydata/interfaces.py 1073
     """Upload was unable to satisfy 'servers_of_happiness'"""
 
 class UnableToFetchCriticalDownloadDataError(Exception):
-    """I was unable to fetch some piece of critical data which is supposed to
+    """I was unable to fetch some piece of critical data that is supposed to
     be identically present in all shares."""
 
 class NoServersError(Exception):
hunk ./src/allmydata/interfaces.py 1085
     exists, and overwrite= was set to False."""
 
 class NoSuchChildError(Exception):
-    """A directory node was asked to fetch a child which does not exist."""
+    """A directory node was asked to fetch a child that does not exist."""
 
 class ChildOfWrongTypeError(Exception):
     """An operation was attempted on a child of the wrong type (file or directory)."""
hunk ./src/allmydata/interfaces.py 1403
         if you initially thought you were going to use 10 peers, started
         encoding, and then two of the peers dropped out: you could use
         desired_share_ids= to skip the work (both memory and CPU) of
-        producing shares for the peers which are no longer available.
+        producing shares for the peers that are no longer available.
 
         """
 
hunk ./src/allmydata/interfaces.py 1478
         if you initially thought you were going to use 10 peers, started
         encoding, and then two of the peers dropped out: you could use
         desired_share_ids= to skip the work (both memory and CPU) of
-        producing shares for the peers which are no longer available.
+        producing shares for the peers that are no longer available.
 
         For each call, encode() will return a Deferred that fires with two
         lists, one containing shares and the other containing the shareids.
hunk ./src/allmydata/interfaces.py 1535
         required to be of the same length.  The i'th element of their_shareids
         is required to be the shareid of the i'th buffer in some_shares.
 
-        This returns a Deferred which fires with a sequence of buffers. This
+        This returns a Deferred that fires with a sequence of buffers. This
         sequence will contain all of the segments of the original data, in
         order. The sum of the lengths of all of the buffers will be the
         'data_size' value passed into the original ICodecEncode.set_params()
hunk ./src/allmydata/interfaces.py 1582
         Encoding parameters can be set in three ways. 1: The Encoder class
         provides defaults (3/7/10). 2: the Encoder can be constructed with
         an 'options' dictionary, in which the
-        needed_and_happy_and_total_shares' key can be a (k,d,n) tuple. 3:
+        'needed_and_happy_and_total_shares' key can be a (k,d,n) tuple. 3:
         set_params((k,d,n)) can be called.
 
         If you intend to use set_params(), you must call it before
hunk ./src/allmydata/interfaces.py 1780
         produced, so that the segment hashes can be generated with only a
         single pass.
 
-        This returns a Deferred which fires with a sequence of hashes, using:
+        This returns a Deferred that fires with a sequence of hashes, using:
 
          tuple(segment_hashes[first:last])
 
hunk ./src/allmydata/interfaces.py 1796
     def get_plaintext_hash():
         """OBSOLETE; Get the hash of the whole plaintext.
 
-        This returns a Deferred which fires with a tagged SHA-256 hash of the
+        This returns a Deferred that fires with a tagged SHA-256 hash of the
         whole plaintext, obtained from hashutil.plaintext_hash(data).
         """
 
hunk ./src/allmydata/interfaces.py 1856
         be used to encrypt the data. The key will also be hashed to derive
         the StorageIndex.
 
-        Uploadables which want to achieve convergence should hash their file
+        Uploadables that want to achieve convergence should hash their file
         contents and the serialized_encoding_parameters to form the key
         (which of course requires a full pass over the data). Uploadables can
         use the upload.ConvergentUploadMixin class to achieve this
hunk ./src/allmydata/interfaces.py 1862
         automatically.
 
-        Uploadables which do not care about convergence (or do not wish to
+        Uploadables that do not care about convergence (or do not wish to
         make multiple passes over the data) can simply return a
         strongly-random 16 byte string.
 
hunk ./src/allmydata/interfaces.py 1872
 
     def read(length):
         """Return a Deferred that fires with a list of strings (perhaps with
-        only a single element) which, when concatenated together, contain the
+        only a single element) that, when concatenated together, contain the
         next 'length' bytes of data. If EOF is near, this may provide fewer
         than 'length' bytes. The total number of bytes provided by read()
         before it signals EOF must equal the size provided by get_size().
hunk ./src/allmydata/interfaces.py 1919
 
     def read(length):
         """
-        Returns a list of strings which, when concatenated, are the next
+        Returns a list of strings that, when concatenated, are the next
        length bytes of the file, or fewer if there are fewer bytes
         between the current location and the end of the file.
         """
hunk ./src/allmydata/interfaces.py 1932
 
 class IUploadResults(Interface):
     """I am returned by upload() methods. I contain a number of public
-    attributes which can be read to determine the results of the upload. Some
+    attributes that can be read to determine the results of the upload. Some
     of these are functional, some are timing information. All of these may be
     None.
 
hunk ./src/allmydata/interfaces.py 1965
 
 class IDownloadResults(Interface):
     """I am created internally by download() methods. I contain a number of
-    public attributes which contain details about the download process.::
+    public attributes that contain details about the download process.::
 
      .file_size : the size of the file, in bytes
      .servers_used : set of server peerids that were used during download
hunk ./src/allmydata/interfaces.py 1991
 class IUploader(Interface):
     def upload(uploadable):
         """Upload the file. 'uploadable' must impement IUploadable. This
-        returns a Deferred which fires with an IUploadResults instance, from
+        returns a Deferred that fires with an IUploadResults instance, from
         which the URI of the file can be obtained as results.uri .
 
     def upload_ssk(write_capability, new_version, uploadable):
hunk ./src/allmydata/interfaces.py 2041
         kind of lease that is obtained (which account number to claim, etc).
 
         TODO: any problems seen during checking will be reported to the
-        health-manager.furl, a centralized object which is responsible for
+        health-manager.furl, a centralized object that is responsible for
         figuring out why files are unhealthy so corrective action can be
         taken.
         """
hunk ./src/allmydata/interfaces.py 2056
         will be put in the check-and-repair results. The Deferred will not
         fire until the repair is complete.
 
-        This returns a Deferred which fires with an instance of
+        This returns a Deferred that fires with an instance of
         ICheckAndRepairResults."""
 
 class IDeepCheckable(Interface):
hunk ./src/allmydata/interfaces.py 2141
                               that was found to be corrupt. Each share
                               locator is a list of (serverid, storage_index,
                               sharenum).
-         count-incompatible-shares: the number of shares which are of a share
+         count-incompatible-shares: the number of shares that are of a share
                                     format unknown to this checker
          list-incompatible-shares: a list of 'share locators', one for each
                                    share that was found to be of an unknown
hunk ./src/allmydata/interfaces.py 2148
                                    format. Each share locator is a list of
                                    (serverid, storage_index, sharenum).
          servers-responding: list of (binary) storage server identifiers,
-                             one for each server which responded to the share
+                             one for each server that responded to the share
                              query (even if they said they didn't have
                              shares, and even if they said they did have
                              shares but then didn't send them when asked, or
hunk ./src/allmydata/interfaces.py 2345
         will use the data in the checker results to guide the repair process,
         such as which servers provided bad data and should therefore be
         avoided. The ICheckResults object is inside the
-        ICheckAndRepairResults object, which is returned by the
+        ICheckAndRepairResults object that is returned by the
         ICheckable.check() method::
 
          d = filenode.check(repair=False)
hunk ./src/allmydata/interfaces.py 2436
         methods to create new objects. I return synchronously."""
 
     def create_mutable_file(contents=None, keysize=None):
-        """I create a new mutable file, and return a Deferred which will fire
+        """I create a new mutable file, and return a Deferred that will fire
         with the IMutableFileNode instance when it is ready. If contents= is
         provided (a bytestring), it will be used as the initial contents of
         the new file, otherwise the file will contain zero bytes. keysize= is
hunk ./src/allmydata/interfaces.py 2444
         usual."""
 
     def create_new_mutable_directory(initial_children={}):
-        """I create a new mutable directory, and return a Deferred which will
+        """I create a new mutable directory, and return a Deferred that will
         fire with the IDirectoryNode instance when it is ready. If
         initial_children= is provided (a dict mapping unicode child name to
         (childnode, metadata_dict) tuples), the directory will be populated
hunk ./src/allmydata/interfaces.py 2452
 
 class IClientStatus(Interface):
     def list_all_uploads():
-        """Return a list of uploader objects, one for each upload which
+        """Return a list of uploader objects, one for each upload that
         currently has an object available (tracked with weakrefs). This is
         intended for debugging purposes."""
     def list_active_uploads():
hunk ./src/allmydata/interfaces.py 2462
         started uploads."""
 
     def list_all_downloads():
-        """Return a list of downloader objects, one for each download which
+        """Return a list of downloader objects, one for each download that
         currently has an object available (tracked with weakrefs). This is
         intended for debugging purposes."""
     def list_active_downloads():
hunk ./src/allmydata/interfaces.py 2689
 
     def provide(provider=RIStatsProvider, nickname=str):
         """
-        @param provider: a stats collector instance which should be polled
+        @param provider: a stats collector instance that should be polled
                          periodically by the gatherer to collect stats.
         @param nickname: a name useful to identify the provided client
         """
hunk ./src/allmydata/interfaces.py 2722
 
 class IValidatedThingProxy(Interface):
     def start():
-        """ Acquire a thing and validate it. Return a deferred which is
+        """ Acquire a thing and validate it. Return a deferred that is
         eventually fired with self if the thing is valid or errbacked if it
         can't be acquired or validated."""
 
}
[Pluggable backends -- new and moved files, changes to moved files. refs #999
david-sarah@jacaranda.org**20110919232926
 Ignore-this: ec5d2d1362a092d919e84327d3092424
] {
adddir ./src/allmydata/storage/backends
adddir ./src/allmydata/storage/backends/disk
move ./src/allmydata/storage/immutable.py ./src/allmydata/storage/backends/disk/immutable.py
move ./src/allmydata/storage/mutable.py ./src/allmydata/storage/backends/disk/mutable.py
adddir ./src/allmydata/storage/backends/null
addfile ./src/allmydata/storage/backends/__init__.py
addfile ./src/allmydata/storage/backends/base.py
hunk ./src/allmydata/storage/backends/base.py 1
+
+from twisted.application import service
+
+from allmydata.storage.common import si_b2a
+from allmydata.storage.lease import LeaseInfo
+from allmydata.storage.bucket import BucketReader
+
+
+class Backend(service.MultiService):
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+
+class ShareSet(object):
+    """
+    This class implements shareset logic that could work for all backends, but
+    might be useful to override for efficiency.
+    """
+
+    def __init__(self, storageindex):
+        self.storageindex = storageindex
+
+    def get_storage_index(self):
+        return self.storageindex
+
+    def get_storage_index_string(self):
+        return si_b2a(self.storageindex)
+
+    def renew_lease(self, renew_secret, new_expiration_time):
+        found_shares = False
+        for share in self.get_shares():
+            found_shares = True
+            share.renew_lease(renew_secret, new_expiration_time)
+
+        if not found_shares:
+            raise IndexError("no such lease to renew")
+
+    def get_leases(self):
+        # Since all shares get the same lease data, we just grab the leases
+        # from the first share.
+        try:
+            sf = self.get_shares().next()
+            return sf.get_leases()
+        except StopIteration:
+            return iter([])
+
+    def add_or_renew_lease(self, lease_info):
+        # This implementation assumes that lease data is duplicated in
+        # all shares of a shareset, which might not be true for all backends.
+        for share in self.get_shares():
+            share.add_or_renew_lease(lease_info)
+
+    def make_bucket_reader(self, storageserver, share):
+        return BucketReader(storageserver, share)
+
+    def testv_and_readv_and_writev(self, storageserver, secrets,
+                                   test_and_write_vectors, read_vector,
+                                   expiration_time):
+        # The implementation here depends on the following helper methods,
+        # which must be provided by subclasses:
+        #
+        # def _clean_up_after_unlink(self):
+        #     """clean up resources associated with the shareset after some
+        #     shares might have been deleted"""
+        #
+        # def _create_mutable_share(self, storageserver, shnum, write_enabler):
+        #     """create a mutable share with the given shnum and write_enabler"""
+
+        # secrets might be a triple with cancel_secret in secrets[2], but if
+        # so we ignore the cancel_secret.
+        write_enabler = secrets[0]
+        renew_secret = secrets[1]
+
+        si_s = self.get_storage_index_string()
+        shares = {}
+        for share in self.get_shares():
+            # XXX is it correct to ignore immutable shares? Maybe get_shares should
+            # have a parameter saying what type it's expecting.
+            if share.sharetype == "mutable":
+                share.check_write_enabler(write_enabler, si_s)
+                shares[share.get_shnum()] = share
+
+        # write_enabler is good for all existing shares
+
+        # now evaluate test vectors
+        testv_is_good = True
+        for sharenum in test_and_write_vectors:
+            (testv, datav, new_length) = test_and_write_vectors[sharenum]
+            if sharenum in shares:
+                if not shares[sharenum].check_testv(testv):
+                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
+                    testv_is_good = False
+                    break
+            else:
+                # compare the vectors against an empty share, in which all
+                # reads return empty strings
+                if not EmptyShare().check_testv(testv):
+                    self.log("testv failed (empty): [%d] %r" % (sharenum,
+                                                                testv))
+                    testv_is_good = False
+                    break
+
+        # gather the read vectors, before we do any writes
+        read_data = {}
+        for shnum, share in shares.items():
+            read_data[shnum] = share.readv(read_vector)
+
+        ownerid = 1 # TODO
+        lease_info = LeaseInfo(ownerid, renew_secret,
+                               expiration_time, storageserver.get_serverid())
+
+        if testv_is_good:
+            # now apply the write vectors
+            for shnum in test_and_write_vectors:
+                (testv, datav, new_length) = test_and_write_vectors[shnum]
+                if new_length == 0:
+                    if shnum in shares:
+                        shares[shnum].unlink()
+                else:
+                    if shnum not in shares:
+                        # allocate a new share
+                        share = self._create_mutable_share(storageserver, shnum, write_enabler)
+                        shares[shnum] = share
+                    shares[shnum].writev(datav, new_length)
+                    # and update the lease
+                    shares[shnum].add_or_renew_lease(lease_info)
+
+            if new_length == 0:
+                self._clean_up_after_unlink()
+
+        return (testv_is_good, read_data)
+
+    def readv(self, wanted_shnums, read_vector):
+        """
+        Read a vector from the numbered shares in this shareset. An empty
+        shares list means to return data from all known shares.
+
+        @param wanted_shnums=ListOf(int)
+        @param read_vector=ReadVector
+        @return DictOf(int, ReadData): shnum -> results, with one key per share
+        """
+        datavs = {}
+        for share in self.get_shares():
+            shnum = share.get_shnum()
+            if not wanted_shnums or shnum in wanted_shnums:
+                datavs[shnum] = share.readv(read_vector)
+
+        return datavs
+
+
+def testv_compare(a, op, b):
+    assert op in ("lt", "le", "eq", "ne", "ge", "gt")
+    if op == "lt":
+        return a < b
+    if op == "le":
+        return a <= b
+    if op == "eq":
+        return a == b
+    if op == "ne":
+        return a != b
+    if op == "ge":
+        return a >= b
+    if op == "gt":
+        return a > b
+    # never reached
+
+
+class EmptyShare:
+    def check_testv(self, testv):
+        test_good = True
+        for (offset, length, operator, specimen) in testv:
+            data = ""
+            if not testv_compare(data, operator, specimen):
+                test_good = False
+                break
+        return test_good
+
addfile ./src/allmydata/storage/backends/disk/__init__.py
addfile ./src/allmydata/storage/backends/disk/disk_backend.py
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 1
+
+import re
+
+from twisted.python.filepath import UnlistableError
+
+from zope.interface import implements
+from allmydata.interfaces import IStorageBackend, IShareSet
+from allmydata.util import fileutil, log, time_format
+from allmydata.storage.common import si_b2a, si_a2b
+from allmydata.storage.bucket import BucketWriter
+from allmydata.storage.backends.base import Backend, ShareSet
+from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
+from allmydata.storage.backends.disk.mutable import MutableDiskShare, create_mutable_disk_share
+
+# storage/
+# storage/shares/incoming
+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
+# storage/shares/$START/$STORAGEINDEX
+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
+
+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
+# base-32 chars).
+# $SHARENUM matches this regex:
+NUM_RE=re.compile("^[0-9]+$")
+
+
+def si_si2dir(startfp, storageindex):
+    sia = si_b2a(storageindex)
+    newfp = startfp.child(sia[:2])
+    return newfp.child(sia)
+
+
+def get_share(fp):
+    f = fp.open('rb')
+    try:
+        prefix = f.read(32)
+    finally:
+        f.close()
+
+    if prefix == MutableDiskShare.MAGIC:
+        return MutableDiskShare(fp)
+    else:
+        # assume it's immutable
+        return ImmutableDiskShare(fp)
+
+
+class DiskBackend(Backend):
+    implements(IStorageBackend)
+
+    def __init__(self, storedir, readonly=False, reserved_space=0, discard_storage=False):
+        Backend.__init__(self)
+        self._setup_storage(storedir, readonly, reserved_space, discard_storage)
+        self._setup_corruption_advisory()
+
+    def _setup_storage(self, storedir, readonly, reserved_space, discard_storage):
+        self._storedir = storedir
+        self._readonly = readonly
+        self._reserved_space = int(reserved_space)
+        self._discard_storage = discard_storage
+        self._sharedir = self._storedir.child("shares")
+        fileutil.fp_make_dirs(self._sharedir)
+        self._incomingdir = self._sharedir.child('incoming')
+        self._clean_incomplete()
+        if self._reserved_space and (self.get_available_space() is None):
+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                    umid="0wZ27w", level=log.UNUSUAL)
+
+    def _clean_incomplete(self):
+        fileutil.fp_remove(self._incomingdir)
+        fileutil.fp_make_dirs(self._incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self._corruption_advisory_dir = self._storedir.child("corruption-advisories")
+
+    def _make_shareset(self, sharehomedir):
+        return self.get_shareset(si_a2b(sharehomedir.basename()))
+
+    def get_sharesets_for_prefix(self, prefix):
+        prefixfp = self._sharedir.child(prefix)
+        try:
+            sharesets = map(self._make_shareset, prefixfp.children())
+            def _by_base32si(b):
+                return b.get_storage_index_string()
+            sharesets.sort(key=_by_base32si)
+        except EnvironmentError:
+            sharesets = []
+        return sharesets
+
+    def get_shareset(self, storageindex):
+        sharehomedir = si_si2dir(self._sharedir, storageindex)
+        incominghomedir = si_si2dir(self._incomingdir, storageindex)
+        return DiskShareSet(storageindex, sharehomedir, incominghomedir, discard_storage=self._discard_storage)
+
+    def fill_in_space_stats(self, stats):
+        stats['storage_server.reserved_space'] = self._reserved_space
+        try:
+            disk = fileutil.get_disk_stats(self._sharedir, self._reserved_space)
+            writeable = disk['avail'] > 0
+
+            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
+            stats['storage_server.disk_total'] = disk['total']
+            stats['storage_server.disk_used'] = disk['used']
+            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
+            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
+            stats['storage_server.disk_avail'] = disk['avail']
+        except AttributeError:
+            writeable = True
+        except EnvironmentError:
+            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
+            writeable = False
+
+        if self._readonly:
+            stats['storage_server.disk_avail'] = 0
+            writeable = False
+
+        stats['storage_server.accepting_immutable_shares'] = int(writeable)
+
+    def get_available_space(self):
+        if self._readonly:
+            return 0
+        return fileutil.get_available_space(self._sharedir, self._reserved_space)
+
+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
+        fileutil.fp_make_dirs(self._corruption_advisory_dir)
+        now = time_format.iso_utc(sep="T")
+        si_s = si_b2a(storageindex)
+
+        # Windows can't handle colons in the filename.
+        name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "")
+        f = self._corruption_advisory_dir.child(name).open("w")
+        try:
+            f.write("report: Share Corruption\n")
+            f.write("type: %s\n" % sharetype)
+            f.write("storage_index: %s\n" % si_s)
+            f.write("share_number: %d\n" % shnum)
+            f.write("\n")
+            f.write(reason)
+            f.write("\n")
+        finally:
+            f.close()
+
+        log.msg(format=("client claims corruption in (%(share_type)s) " +
+                        "%(si)s-%(shnum)d: %(reason)s"),
+                share_type=sharetype, si=si_s, shnum=shnum, reason=reason,
+                level=log.SCARY, umid="SGx2fA")
+
+
+class DiskShareSet(ShareSet):
+    implements(IShareSet)
+
+    def __init__(self, storageindex, sharehomedir, incominghomedir=None, discard_storage=False):
+        ShareSet.__init__(self, storageindex)
+        self._sharehomedir = sharehomedir
+        self._incominghomedir = incominghomedir
+        self._discard_storage = discard_storage
+
+    def get_overhead(self):
+        return (fileutil.get_disk_usage(self._sharehomedir) +
+                fileutil.get_disk_usage(self._incominghomedir))
+
+    def get_shares(self):
+        """
+        Generate IStorageBackendShare objects for shares we have for this storage index.
+        ("Shares we have" means completed ones, excluding incoming ones.)
+        """
+        try:
+            for fp in self._sharehomedir.children():
+                shnumstr = fp.basename()
+                if not NUM_RE.match(shnumstr):
+                    continue
+                sharehome = self._sharehomedir.child(shnumstr)
+                yield self.get_share(sharehome)
+        except UnlistableError:
+            # There is no shares directory at all.
+            pass
+
+    def has_incoming(self, shnum):
+        if self._incominghomedir is None:
+            return False
+        return self._incominghomedir.child(str(shnum)).exists()
+
+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
+        sharehome = self._sharehomedir.child(str(shnum))
+        incominghome = self._incominghomedir.child(str(shnum))
+        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
+                                   max_size=max_space_per_bucket, create=True)
+        bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
+        if self._discard_storage:
+            bw.throw_out_all_data = True
+        return bw
+
+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
+        fileutil.fp_make_dirs(self._sharehomedir)
+        sharehome = self._sharehomedir.child(str(shnum))
+        serverid = storageserver.get_serverid()
+        return create_mutable_disk_share(sharehome, serverid, write_enabler, storageserver)
+
+    def _clean_up_after_unlink(self):
+        fileutil.fp_rmdir_if_empty(self._sharehomedir)
+
hunk ./src/allmydata/storage/backends/disk/immutable.py 1
-import os, stat, struct, time
 
hunk ./src/allmydata/storage/backends/disk/immutable.py 2
-from foolscap.api import Referenceable
+import struct
 
 from zope.interface import implements
hunk ./src/allmydata/storage/backends/disk/immutable.py 5
-from allmydata.interfaces import RIBucketWriter, RIBucketReader
-from allmydata.util import base32, fileutil, log
+
+from allmydata.interfaces import IStoredShare
+from allmydata.util import fileutil
 from allmydata.util.assertutil import precondition
hunk ./src/allmydata/storage/backends/disk/immutable.py 9
+from allmydata.util.fileutil import fp_make_dirs
 from allmydata.util.hashutil import constant_time_compare
hunk ./src/allmydata/storage/backends/disk/immutable.py 11
+from allmydata.util.encodingutil import quote_filepath
+from allmydata.storage.common import si_b2a, UnknownImmutableContainerVersionError, DataTooLargeError
 from allmydata.storage.lease import LeaseInfo
hunk ./src/allmydata/storage/backends/disk/immutable.py 14
-from allmydata.storage.common import UnknownImmutableContainerVersionError, \
-     DataTooLargeError
+
 
 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
 # and share data. The share data is accessed by RIBucketWriter.write and
hunk ./src/allmydata/storage/backends/disk/immutable.py 41
 # then the value stored in this field will be the actual share data length
 # modulo 2**32.
 
-class ShareFile:
-    LEASE_SIZE = struct.calcsize(">L32s32sL")
+class ImmutableDiskShare(object):
+    implements(IStoredShare)
+
     sharetype = "immutable"
hunk ./src/allmydata/storage/backends/disk/immutable.py 45
+    LEASE_SIZE = struct.calcsize(">L32s32sL")
+
 
hunk ./src/allmydata/storage/backends/disk/immutable.py 48
-    def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
hunk ./src/allmydata/storage/backends/disk/immutable.py 53
-        self.home = filename
+        self._storageindex = storageindex
         self._max_size = max_size
hunk ./src/allmydata/storage/backends/disk/immutable.py 55
+        self._incominghome = incominghome
+        self._home = finalhome
+        self._shnum = shnum
         if create:
             # touch the file, so later callers will see that we're working on
             # it. Also construct the metadata.
hunk ./src/allmydata/storage/backends/disk/immutable.py 61
-            assert not os.path.exists(self.home)
-            fileutil.make_dirs(os.path.dirname(self.home))
-            f = open(self.home, 'wb')
+            assert not finalhome.exists()
+            fp_make_dirs(self._incominghome.parent())
             # The second field -- the four-byte share data length -- is no
             # longer used as of Tahoe v1.3.0, but we continue to write it in
             # there in case someone downgrades a storage server from >=
hunk ./src/allmydata/storage/backends/disk/immutable.py 72
             # the largest length that can fit into the field. That way, even
             # if this does happen, the old < v1.3.0 server will still allow
             # clients to read the first part of the share.
-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
-            f.close()
+            self._incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
            self._lease_offset = max_size + 0x0c
             self._num_leases = 0
         else:
hunk ./src/allmydata/storage/backends/disk/immutable.py 76
-            f = open(self.home, 'rb')
-            filesize = os.path.getsize(self.home)
-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-            f.close()
+            f = self._home.open(mode='rb')
+            try:
+                (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            finally:
+                f.close()
+            filesize = self._home.getsize()
             if version != 1:
                 msg = "sharefile %s had version %d but we wanted 1" % \
hunk ./src/allmydata/storage/backends/disk/immutable.py 84
-                      (filename, version)
+                      (self._home, version)
                 raise UnknownImmutableContainerVersionError(msg)
             self._num_leases = num_leases
             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
hunk ./src/allmydata/storage/backends/disk/immutable.py 90
         self._data_offset = 0xc
 
+    def __repr__(self):
+        return ("<ImmutableDiskShare %s:%r at %s>"
+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
+
+    def close(self):
+        fileutil.fp_make_dirs(self._home.parent())
+        self._incominghome.moveTo(self._home)
+        try:
+            # self._incominghome is like storage/shares/incoming/ab/abcde/4 .
+            # We try to delete the parent (.../ab/abcde) to avoid leaving
+            # these directories lying around forever, but the delete might
+            # fail if we're working on another share for the same storage
+            # index (like ab/abcde/5). The alternative approach would be to
+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
+            # ShareWriter), each of which is responsible for a single
+            # directory on disk, and have them use reference counting of
+            # their children to know when they should do the rmdir. This
+            # approach is simpler, but relies on os.rmdir refusing to delete
+            # a non-empty directory. Do *not* use fileutil.fp_remove() here!
+            fileutil.fp_rmdir_if_empty(self._incominghome.parent())
+            # we also delete the grandparent (prefix) directory, .../ab ,
+            # again to avoid leaving directories lying around. This might
+            # fail if there is another bucket open that shares a prefix (like
+            # ab/abfff).
+            fileutil.fp_rmdir_if_empty(self._incominghome.parent().parent())
+            # we leave the great-grandparent (incoming/) directory in place.
+        except EnvironmentError:
+            # ignore the "can't rmdir because the directory is not empty"
+            # exceptions, those are normal consequences of the
+            # above-mentioned conditions.
+            pass
+        pass
+
+    def get_used_space(self):
+        return (fileutil.get_used_space(self._home) +
+                fileutil.get_used_space(self._incominghome))
+
+    def get_storage_index(self):
+        return self._storageindex
+
+    def get_shnum(self):
+        return self._shnum
+
     def unlink(self):
hunk ./src/allmydata/storage/backends/disk/immutable.py 134
-        os.unlink(self.home)
+        self._home.remove()
+
+    def get_size(self):
+        return self._home.getsize()
+
+    def get_data_length(self):
+        return self._lease_offset - self._data_offset
+
+    #def readv(self, read_vector):
+    #    ...
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
hunk ./src/allmydata/storage/backends/disk/immutable.py 147
-        # reads beyond the end of the data are truncated. Reads that start
+
+        # Reads beyond the end of the data are truncated. Reads that start
         # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         actuallength = max(0, min(length, self._lease_offset-seekpos))
hunk ./src/allmydata/storage/backends/disk/immutable.py 154
         if actuallength == 0:
             return ""
-        f = open(self.home, 'rb')
-        f.seek(seekpos)
-        return f.read(actuallength)
+        f = self._home.open(mode='rb')
+        try:
+            f.seek(seekpos)
+            sharedata = f.read(actuallength)
+        finally:
+            f.close()
+        return sharedata
 
     def write_share_data(self, offset, data):
         length = len(data)
hunk ./src/allmydata/storage/backends/disk/immutable.py 167
         precondition(offset >= 0, offset)
         if self._max_size is not None and offset+length > self._max_size:
             raise DataTooLargeError(self._max_size, offset, length)
-        f = open(self.home, 'rb+')
-        real_offset = self._data_offset+offset
-        f.seek(real_offset)
-        assert f.tell() == real_offset
-        f.write(data)
-        f.close()
+        f = self._incominghome.open(mode='rb+')
+        try:
+            real_offset = self._data_offset+offset
+            f.seek(real_offset)
+            assert f.tell() == real_offset
+            f.write(data)
+        finally:
+            f.close()
 
     def _write_lease_record(self, f, lease_number, lease_info):
         offset = self._lease_offset + lease_number * self.LEASE_SIZE
hunk ./src/allmydata/storage/backends/disk/immutable.py 184
 
     def _read_num_leases(self, f):
         f.seek(0x08)
-        (num_leases,) = struct.unpack(">L", f.read(4))
+        ro = f.read(4)
+        (num_leases,) = struct.unpack(">L", ro)
         return num_leases
 
     def _write_num_leases(self, f, num_leases):
hunk ./src/allmydata/storage/backends/disk/immutable.py 195
     def _truncate_leases(self, f, num_leases):
         f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
 
+    # These lease operations are intended for use by disk_backend.py.
+    # Other clients should not depend on the fact that the disk backend
+    # stores leases in share files.
+
     def get_leases(self):
         """Yields a LeaseInfo instance for all leases."""
hunk ./src/allmydata/storage/backends/disk/immutable.py 201
-        f = open(self.home, 'rb')
-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-        f.seek(self._lease_offset)
-        for i in range(num_leases):
-            data = f.read(self.LEASE_SIZE)
-            if data:
-                yield LeaseInfo().from_immutable_data(data)
+        f = self._home.open(mode='rb')
+        try:
+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            f.seek(self._lease_offset)
+            for i in range(num_leases):
+                data = f.read(self.LEASE_SIZE)
+                if data:
+                    yield LeaseInfo().from_immutable_data(data)
+        finally:
+            f.close()
 
     def add_lease(self, lease_info):
hunk ./src/allmydata/storage/backends/disk/immutable.py 213
-        f = open(self.home, 'rb+')
-        num_leases = self._read_num_leases(f)
-        self._write_lease_record(f, num_leases, lease_info)
-        self._write_num_leases(f, num_leases+1)
-        f.close()
+        f = self._incominghome.open(mode='rb')
+        try:
+            num_leases = self._read_num_leases(f)
+        finally:
+            f.close()
+        f = self._home.open(mode='wb+')
+        try:
+            self._write_lease_record(f, num_leases, lease_info)
+            self._write_num_leases(f, num_leases+1)
+        finally:
+            f.close()
 
     def renew_lease(self, renew_secret, new_expire_time):
hunk ./src/allmydata/storage/backends/disk/immutable.py 226
-        for i,lease in enumerate(self.get_leases()):
-            if constant_time_compare(lease.renew_secret, renew_secret):
-                # yup. See if we need to update the owner time.
-                if new_expire_time > lease.expiration_time:
-                    # yes
-                    lease.expiration_time = new_expire_time
-                    f = open(self.home, 'rb+')
-                    self._write_lease_record(f, i, lease)
-                    f.close()
-                return
+        try:
+            for i, lease in enumerate(self.get_leases()):
+                if constant_time_compare(lease.renew_secret, renew_secret):
+                    # yup. See if we need to update the owner time.
+                    if new_expire_time > lease.expiration_time:
+                        # yes
+                        lease.expiration_time = new_expire_time
+                        f = self._home.open('rb+')
+                        try:
+                            self._write_lease_record(f, i, lease)
+                        finally:
+                            f.close()
+                    return
+        except IndexError, e:
+            raise Exception("IndexError: %s" % (e,))
         raise IndexError("unable to renew non-existent lease")
 
     def add_or_renew_lease(self, lease_info):
hunk ./src/allmydata/storage/backends/disk/immutable.py 249
                              lease_info.expiration_time)
         except IndexError:
             self.add_lease(lease_info)
-
-
-    def cancel_lease(self, cancel_secret):
-        """Remove a lease with the given cancel_secret. If the last lease is
-        cancelled, the file will be removed. Return the number of bytes that
-        were freed (by truncating the list of leases, and possibly by
-        deleting the file. Raise IndexError if there was no lease with the
-        given cancel_secret.
-        """
-
-        leases = list(self.get_leases())
-        num_leases_removed = 0
-        for i,lease in enumerate(leases):
-            if constant_time_compare(lease.cancel_secret, cancel_secret):
-                leases[i] = None
-                num_leases_removed += 1
-        if not num_leases_removed:
-            raise IndexError("unable to find matching lease to cancel")
-        if num_leases_removed:
-            # pack and write out the remaining leases. We write these out in
-            # the same order as they were added, so that if we crash while
-            # doing this, we won't lose any non-cancelled leases.
-            leases = [l for l in leases if l] # remove the cancelled leases
-            f = open(self.home, 'rb+')
-            for i,lease in enumerate(leases):
-                self._write_lease_record(f, i, lease)
-            self._write_num_leases(f, len(leases))
-            self._truncate_leases(f, len(leases))
-            f.close()
-        space_freed = self.LEASE_SIZE * num_leases_removed
-        if not len(leases):
-            space_freed += os.stat(self.home)[stat.ST_SIZE]
-            self.unlink()
-        return space_freed
-
-
-class BucketWriter(Referenceable):
-    implements(RIBucketWriter)
-
-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
-        self.ss = ss
-        self.incominghome = incominghome
-        self.finalhome = finalhome
-        self._max_size = max_size # don't allow the client to write more than this
-        self._canary = canary
-        self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
-        self.closed = False
-        self.throw_out_all_data = False
-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
-        # also, add our lease to the file now, so that other ones can be
-        # added by simultaneous uploaders
-        self._sharefile.add_lease(lease_info)
-
-    def allocated_size(self):
-        return self._max_size
-
-    def remote_write(self, offset, data):
-        start = time.time()
-        precondition(not self.closed)
-        if self.throw_out_all_data:
-            return
-        self._sharefile.write_share_data(offset, data)
-        self.ss.add_latency("write", time.time() - start)
-        self.ss.count("write")
-
-    def remote_close(self):
-        precondition(not self.closed)
-        start = time.time()
-
-        fileutil.make_dirs(os.path.dirname(self.finalhome))
-        fileutil.rename(self.incominghome, self.finalhome)
-        try:
-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
-            # We try to delete the parent (.../ab/abcde) to avoid leaving
-            # these directories lying around forever, but the delete might
-            # fail if we're working on another share for the same storage
-            # index (like ab/abcde/5). The alternative approach would be to
-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
-            # ShareWriter), each of which is responsible for a single
-            # directory on disk, and have them use reference counting of
-            # their children to know when they should do the rmdir. This
-            # approach is simpler, but relies on os.rmdir refusing to delete
-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
-            os.rmdir(os.path.dirname(self.incominghome))
-            # we also delete the grandparent (prefix) directory, .../ab ,
-            # again to avoid leaving directories lying around. This might
-            # fail if there is another bucket open that shares a prefix (like
-            # ab/abfff).
-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
-            # we leave the great-grandparent (incoming/) directory in place.
-        except EnvironmentError:
-            # ignore the "can't rmdir because the directory is not empty"
-            # exceptions, those are normal consequences of the
-            # above-mentioned conditions.
-            pass
-        self._sharefile = None
-        self.closed = True
-        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
-
-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
-        self.ss.bucket_writer_closed(self, filelen)
-        self.ss.add_latency("close", time.time() - start)
-        self.ss.count("close")
-
-    def _disconnected(self):
-        if not self.closed:
-            self._abort()
-
-    def remote_abort(self):
-        log.msg("storage: aborting sharefile %s" % self.incominghome,
-                facility="tahoe.storage", level=log.UNUSUAL)
-        if not self.closed:
-            self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
-        self._abort()
-        self.ss.count("abort")
-
-    def _abort(self):
-        if self.closed:
-            return
-
-        os.remove(self.incominghome)
-        # if we were the last share to be moved, remove the incoming/
-        # directory that was our parent
-        parentdir = os.path.split(self.incominghome)[0]
-        if not os.listdir(parentdir):
-            os.rmdir(parentdir)
-        self._sharefile = None
-
-        # We are now considered closed for further writing. We must tell
-        # the storage server about this so that it stops expecting us to
-        # use the space it allocated for us earlier.
-        self.closed = True
-        self.ss.bucket_writer_closed(self, 0)
-
-
-class BucketReader(Referenceable):
-    implements(RIBucketReader)
-
-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
-        self.ss = ss
-        self._share_file = ShareFile(sharefname)
-        self.storage_index = storage_index
-        self.shnum = shnum
-    def __repr__(self):
1282-    def __repr__(self):
1283-        return "<%s %s %s>" % (self.__class__.__name__,
1284-                               base32.b2a_l(self.storage_index[:8], 60),
1285-                               self.shnum)
1286-
1287-    def remote_read(self, offset, length):
1288-        start = time.time()
1289-        data = self._share_file.read_share_data(offset, length)
1290-        self.ss.add_latency("read", time.time() - start)
1291-        self.ss.count("read")
1292-        return data
1293-
1294-    def remote_advise_corrupt_share(self, reason):
1295-        return self.ss.remote_advise_corrupt_share("immutable",
1296-                                                   self.storage_index,
1297-                                                   self.shnum,
1298-                                                   reason)
1299hunk ./src/allmydata/storage/backends/disk/mutable.py 1
1300-import os, stat, struct
1301 
1302hunk ./src/allmydata/storage/backends/disk/mutable.py 2
1303-from allmydata.interfaces import BadWriteEnablerError
1304-from allmydata.util import idlib, log
1305+import struct
1306+
1307+from zope.interface import implements
1308+
1309+from allmydata.interfaces import IStoredMutableShare, BadWriteEnablerError
1310+from allmydata.util import fileutil, idlib, log
1311 from allmydata.util.assertutil import precondition
1312 from allmydata.util.hashutil import constant_time_compare
1313hunk ./src/allmydata/storage/backends/disk/mutable.py 10
1314-from allmydata.storage.lease import LeaseInfo
1315-from allmydata.storage.common import UnknownMutableContainerVersionError, \
1316+from allmydata.util.encodingutil import quote_filepath
1317+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
1318      DataTooLargeError
1319hunk ./src/allmydata/storage/backends/disk/mutable.py 13
1320+from allmydata.storage.lease import LeaseInfo
1321+from allmydata.storage.backends.base import testv_compare
1322 
1323hunk ./src/allmydata/storage/backends/disk/mutable.py 16
1324-# the MutableShareFile is like the ShareFile, but used for mutable data. It
1325-# has a different layout. See docs/mutable.txt for more details.
1326+
1327+# The MutableDiskShare is like the ImmutableDiskShare, but used for mutable data.
1328+# It has a different layout. See docs/mutable.rst for more details.
1329 
1330 # #   offset    size    name
1331 # 1   0         32      magic verstr "tahoe mutable container v1" plus binary
1332hunk ./src/allmydata/storage/backends/disk/mutable.py 31
1333 #                        4    4   expiration timestamp
1334 #                        8   32   renewal token
1335 #                        40  32   cancel token
1336-#                        72  20   nodeid which accepted the tokens
1337+#                        72  20   nodeid that accepted the tokens
1338 # 7   468       (a)     data
1339 # 8   ??        4       count of extra leases
1340 # 9   ??        n*92    extra leases
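[editor's note: illustrative sketch, not part of the patch]
The layout table above corresponds to the ">32s20s32sQQ" header format that
MutableDiskShare packs and unpacks below. Assuming an ordinary path to an
existing mutable share file, the fixed 100-byte header could be decoded as:

    import struct

    HEADER_FORMAT = ">32s20s32sQQ"  # magic, WE nodeid, write enabler,
                                    # data length, extra-lease offset
    HEADER_SIZE = struct.calcsize(HEADER_FORMAT)

    def read_mutable_header(path):
        f = open(path, 'rb')
        try:
            (magic, we_nodeid, write_enabler,
             data_length, extra_lease_offset) = struct.unpack(
                HEADER_FORMAT, f.read(HEADER_SIZE))
        finally:
            f.close()
        return (magic, data_length, extra_lease_offset)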
1341hunk ./src/allmydata/storage/backends/disk/mutable.py 37
1342 
1343 
1344-# The struct module doc says that L's are 4 bytes in size., and that Q's are
1345+# The struct module doc says that L's are 4 bytes in size, and that Q's are
1346 # 8 bytes in size. Since compatibility depends upon this, double-check it.
1347 assert struct.calcsize(">L") == 4, struct.calcsize(">L")
1348 assert struct.calcsize(">Q") == 8, struct.calcsize(">Q")
1349hunk ./src/allmydata/storage/backends/disk/mutable.py 42
1350 
1351-class MutableShareFile:
1352+
1353+class MutableDiskShare(object):
1354+    implements(IStoredMutableShare)
1355 
1356     sharetype = "mutable"
1357     DATA_LENGTH_OFFSET = struct.calcsize(">32s20s32s")
1358hunk ./src/allmydata/storage/backends/disk/mutable.py 54
1359     assert LEASE_SIZE == 92
1360     DATA_OFFSET = HEADER_SIZE + 4*LEASE_SIZE
1361     assert DATA_OFFSET == 468, DATA_OFFSET
1362+
1363     # our sharefiles share with a recognizable string, plus some random
1364     # binary data to reduce the chance that a regular text file will look
1365     # like a sharefile.
1366hunk ./src/allmydata/storage/backends/disk/mutable.py 63
1367     MAX_SIZE = 2*1000*1000*1000 # 2GB, kind of arbitrary
1368     # TODO: decide upon a policy for max share size
1369 
1370-    def __init__(self, filename, parent=None):
1371-        self.home = filename
1372-        if os.path.exists(self.home):
1373+    def __init__(self, storageindex, shnum, home, parent=None):
1374+        self._storageindex = storageindex
1375+        self._shnum = shnum
1376+        self._home = home
1377+        if self._home.exists():
1378             # we don't cache anything, just check the magic
1379hunk ./src/allmydata/storage/backends/disk/mutable.py 69
1380-            f = open(self.home, 'rb')
1381-            data = f.read(self.HEADER_SIZE)
1382-            (magic,
1383-             write_enabler_nodeid, write_enabler,
1384-             data_length, extra_least_offset) = \
1385-             struct.unpack(">32s20s32sQQ", data)
1386-            if magic != self.MAGIC:
1387-                msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
1388-                      (filename, magic, self.MAGIC)
1389-                raise UnknownMutableContainerVersionError(msg)
1390+            f = self._home.open('rb')
1391+            try:
1392+                data = f.read(self.HEADER_SIZE)
1393+                (magic,
1394+                 write_enabler_nodeid, write_enabler,
1395+                 data_length, extra_lease_offset) = \
1396+                 struct.unpack(">32s20s32sQQ", data)
1397+                if magic != self.MAGIC:
1398+                    msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
1399+                          (quote_filepath(self._home), magic, self.MAGIC)
1400+                    raise UnknownMutableContainerVersionError(msg)
1401+            finally:
1402+                f.close()
1403         self.parent = parent # for logging
1404 
1405     def log(self, *args, **kwargs):
1406hunk ./src/allmydata/storage/backends/disk/mutable.py 87
1407         return self.parent.log(*args, **kwargs)
1408 
1409-    def create(self, my_nodeid, write_enabler):
1410-        assert not os.path.exists(self.home)
1411+    def create(self, serverid, write_enabler):
1412+        assert not self._home.exists()
1413         data_length = 0
1414         extra_lease_offset = (self.HEADER_SIZE
1415                               + 4 * self.LEASE_SIZE
1416hunk ./src/allmydata/storage/backends/disk/mutable.py 95
1417                               + data_length)
1418         assert extra_lease_offset == self.DATA_OFFSET # true at creation
1419         num_extra_leases = 0
1420-        f = open(self.home, 'wb')
1421-        header = struct.pack(">32s20s32sQQ",
1422-                             self.MAGIC, my_nodeid, write_enabler,
1423-                             data_length, extra_lease_offset,
1424-                             )
1425-        leases = ("\x00"*self.LEASE_SIZE) * 4
1426-        f.write(header + leases)
1427-        # data goes here, empty after creation
1428-        f.write(struct.pack(">L", num_extra_leases))
1429-        # extra leases go here, none at creation
1430-        f.close()
1431+        f = self._home.open('wb')
1432+        try:
1433+            header = struct.pack(">32s20s32sQQ",
1434+                                 self.MAGIC, serverid, write_enabler,
1435+                                 data_length, extra_lease_offset,
1436+                                 )
1437+            leases = ("\x00"*self.LEASE_SIZE) * 4
1438+            f.write(header + leases)
1439+            # data goes here, empty after creation
1440+            f.write(struct.pack(">L", num_extra_leases))
1441+            # extra leases go here, none at creation
1442+        finally:
1443+            f.close()
1444+
1445+    def __repr__(self):
1446+        return ("<MutableDiskShare %s:%r at %s>"
1447+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
1448+
1449+    def get_used_space(self):
1450+        return fileutil.get_used_space(self._home)
1451+
1452+    def get_storage_index(self):
1453+        return self._storageindex
1454+
1455+    def get_shnum(self):
1456+        return self._shnum
1457 
1458     def unlink(self):
1459hunk ./src/allmydata/storage/backends/disk/mutable.py 123
1460-        os.unlink(self.home)
1461+        self._home.remove()
1462 
1463     def _read_data_length(self, f):
1464         f.seek(self.DATA_LENGTH_OFFSET)
1465hunk ./src/allmydata/storage/backends/disk/mutable.py 291
1466 
1467     def get_leases(self):
1468         """Yields a LeaseInfo instance for all leases."""
1469-        f = open(self.home, 'rb')
1470-        for i, lease in self._enumerate_leases(f):
1471-            yield lease
1472-        f.close()
1473+        f = self._home.open('rb')
1474+        try:
1475+            for i, lease in self._enumerate_leases(f):
1476+                yield lease
1477+        finally:
1478+            f.close()
1479 
1480     def _enumerate_leases(self, f):
1481         for i in range(self._get_num_lease_slots(f)):
1482hunk ./src/allmydata/storage/backends/disk/mutable.py 303
1483             try:
1484                 data = self._read_lease_record(f, i)
1485                 if data is not None:
1486-                    yield i,data
1487+                    yield i, data
1488             except IndexError:
1489                 return
1490 
1491hunk ./src/allmydata/storage/backends/disk/mutable.py 307
1492+    # These lease operations are intended for use by disk_backend.py.
1493+    # Other non-test clients should not depend on the fact that the disk
1494+    # backend stores leases in share files.
1495+
1496     def add_lease(self, lease_info):
1497         precondition(lease_info.owner_num != 0) # 0 means "no lease here"
1498hunk ./src/allmydata/storage/backends/disk/mutable.py 313
1499-        f = open(self.home, 'rb+')
1500-        num_lease_slots = self._get_num_lease_slots(f)
1501-        empty_slot = self._get_first_empty_lease_slot(f)
1502-        if empty_slot is not None:
1503-            self._write_lease_record(f, empty_slot, lease_info)
1504-        else:
1505-            self._write_lease_record(f, num_lease_slots, lease_info)
1506-        f.close()
1507+        f = self._home.open('rb+')
1508+        try:
1509+            num_lease_slots = self._get_num_lease_slots(f)
1510+            empty_slot = self._get_first_empty_lease_slot(f)
1511+            if empty_slot is not None:
1512+                self._write_lease_record(f, empty_slot, lease_info)
1513+            else:
1514+                self._write_lease_record(f, num_lease_slots, lease_info)
1515+        finally:
1516+            f.close()
1517 
1518     def renew_lease(self, renew_secret, new_expire_time):
1519         accepting_nodeids = set()
1520hunk ./src/allmydata/storage/backends/disk/mutable.py 326
1521-        f = open(self.home, 'rb+')
1522-        for (leasenum,lease) in self._enumerate_leases(f):
1523-            if constant_time_compare(lease.renew_secret, renew_secret):
1524-                # yup. See if we need to update the owner time.
1525-                if new_expire_time > lease.expiration_time:
1526-                    # yes
1527-                    lease.expiration_time = new_expire_time
1528-                    self._write_lease_record(f, leasenum, lease)
1529-                f.close()
1530-                return
1531-            accepting_nodeids.add(lease.nodeid)
1532-        f.close()
1533+        f = self._home.open('rb+')
1534+        try:
1535+            for (leasenum, lease) in self._enumerate_leases(f):
1536+                if constant_time_compare(lease.renew_secret, renew_secret):
1537+                    # yup. See if we need to update the owner time.
1538+                    if new_expire_time > lease.expiration_time:
1539+                        # yes
1540+                        lease.expiration_time = new_expire_time
1541+                        self._write_lease_record(f, leasenum, lease)
1542+                    return
1543+                accepting_nodeids.add(lease.nodeid)
1544+        finally:
1545+            f.close()
1546         # Return the accepting_nodeids set, to give the client a chance to
1547hunk ./src/allmydata/storage/backends/disk/mutable.py 340
1548-        # update the leases on a share which has been migrated from its
1549+        # update the leases on a share that has been migrated from its
1550         # original server to a new one.
1551         msg = ("Unable to renew non-existent lease. I have leases accepted by"
1552                " nodeids: ")
1553hunk ./src/allmydata/storage/backends/disk/mutable.py 357
1554         except IndexError:
1555             self.add_lease(lease_info)
1556 
1557-    def cancel_lease(self, cancel_secret):
1558-        """Remove any leases with the given cancel_secret. If the last lease
1559-        is cancelled, the file will be removed. Return the number of bytes
1560-        that were freed (by truncating the list of leases, and possibly by
1561-        deleting the file. Raise IndexError if there was no lease with the
1562-        given cancel_secret."""
1563-
1564-        accepting_nodeids = set()
1565-        modified = 0
1566-        remaining = 0
1567-        blank_lease = LeaseInfo(owner_num=0,
1568-                                renew_secret="\x00"*32,
1569-                                cancel_secret="\x00"*32,
1570-                                expiration_time=0,
1571-                                nodeid="\x00"*20)
1572-        f = open(self.home, 'rb+')
1573-        for (leasenum,lease) in self._enumerate_leases(f):
1574-            accepting_nodeids.add(lease.nodeid)
1575-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1576-                self._write_lease_record(f, leasenum, blank_lease)
1577-                modified += 1
1578-            else:
1579-                remaining += 1
1580-        if modified:
1581-            freed_space = self._pack_leases(f)
1582-            f.close()
1583-            if not remaining:
1584-                freed_space += os.stat(self.home)[stat.ST_SIZE]
1585-                self.unlink()
1586-            return freed_space
1587-
1588-        msg = ("Unable to cancel non-existent lease. I have leases "
1589-               "accepted by nodeids: ")
1590-        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
1591-                         for anid in accepting_nodeids])
1592-        msg += " ."
1593-        raise IndexError(msg)
1594-
1595-    def _pack_leases(self, f):
1596-        # TODO: reclaim space from cancelled leases
1597-        return 0
1598-
1599     def _read_write_enabler_and_nodeid(self, f):
1600         f.seek(0)
1601         data = f.read(self.HEADER_SIZE)
1602hunk ./src/allmydata/storage/backends/disk/mutable.py 369
1603 
1604     def readv(self, readv):
1605         datav = []
1606-        f = open(self.home, 'rb')
1607-        for (offset, length) in readv:
1608-            datav.append(self._read_share_data(f, offset, length))
1609-        f.close()
1610+        f = self._home.open('rb')
1611+        try:
1612+            for (offset, length) in readv:
1613+                datav.append(self._read_share_data(f, offset, length))
1614+        finally:
1615+            f.close()
1616         return datav
1617 
1618hunk ./src/allmydata/storage/backends/disk/mutable.py 377
1619-#    def remote_get_length(self):
1620-#        f = open(self.home, 'rb')
1621-#        data_length = self._read_data_length(f)
1622-#        f.close()
1623-#        return data_length
1624+    def get_size(self):
1625+        return self._home.getsize()
1626+
1627+    def get_data_length(self):
1628+        f = self._home.open('rb')
1629+        try:
1630+            data_length = self._read_data_length(f)
1631+        finally:
1632+            f.close()
1633+        return data_length
1634 
1635     def check_write_enabler(self, write_enabler, si_s):
1636hunk ./src/allmydata/storage/backends/disk/mutable.py 389
1637-        f = open(self.home, 'rb+')
1638-        (real_write_enabler, write_enabler_nodeid) = \
1639-                             self._read_write_enabler_and_nodeid(f)
1640-        f.close()
1641+        f = self._home.open('rb+')
1642+        try:
1643+            (real_write_enabler, write_enabler_nodeid) = self._read_write_enabler_and_nodeid(f)
1644+        finally:
1645+            f.close()
1646         # avoid a timing attack
1647         #if write_enabler != real_write_enabler:
1648         if not constant_time_compare(write_enabler, real_write_enabler):
1649hunk ./src/allmydata/storage/backends/disk/mutable.py 410
1650 
1651     def check_testv(self, testv):
1652         test_good = True
1653-        f = open(self.home, 'rb+')
1654-        for (offset, length, operator, specimen) in testv:
1655-            data = self._read_share_data(f, offset, length)
1656-            if not testv_compare(data, operator, specimen):
1657-                test_good = False
1658-                break
1659-        f.close()
1660+        f = self._home.open('rb+')
1661+        try:
1662+            for (offset, length, operator, specimen) in testv:
1663+                data = self._read_share_data(f, offset, length)
1664+                if not testv_compare(data, operator, specimen):
1665+                    test_good = False
1666+                    break
1667+        finally:
1668+            f.close()
1669         return test_good
1670 
1671     def writev(self, datav, new_length):
1672hunk ./src/allmydata/storage/backends/disk/mutable.py 422
1673-        f = open(self.home, 'rb+')
1674-        for (offset, data) in datav:
1675-            self._write_share_data(f, offset, data)
1676-        if new_length is not None:
1677-            cur_length = self._read_data_length(f)
1678-            if new_length < cur_length:
1679-                self._write_data_length(f, new_length)
1680-                # TODO: if we're going to shrink the share file when the
1681-                # share data has shrunk, then call
1682-                # self._change_container_size() here.
1683-        f.close()
1684-
1685-def testv_compare(a, op, b):
1686-    assert op in ("lt", "le", "eq", "ne", "ge", "gt")
1687-    if op == "lt":
1688-        return a < b
1689-    if op == "le":
1690-        return a <= b
1691-    if op == "eq":
1692-        return a == b
1693-    if op == "ne":
1694-        return a != b
1695-    if op == "ge":
1696-        return a >= b
1697-    if op == "gt":
1698-        return a > b
1699-    # never reached
1700+        f = self._home.open('rb+')
1701+        try:
1702+            for (offset, data) in datav:
1703+                self._write_share_data(f, offset, data)
1704+            if new_length is not None:
1705+                cur_length = self._read_data_length(f)
1706+                if new_length < cur_length:
1707+                    self._write_data_length(f, new_length)
1708+                    # TODO: if we're going to shrink the share file when the
1709+                    # share data has shrunk, then call
1710+                    # self._change_container_size() here.
1711+        finally:
1712+            f.close()
1713 
1714hunk ./src/allmydata/storage/backends/disk/mutable.py 436
1715-class EmptyShare:
1716+    def close(self):
1717+        pass
1718 
1719hunk ./src/allmydata/storage/backends/disk/mutable.py 439
1720-    def check_testv(self, testv):
1721-        test_good = True
1722-        for (offset, length, operator, specimen) in testv:
1723-            data = ""
1724-            if not testv_compare(data, operator, specimen):
1725-                test_good = False
1726-                break
1727-        return test_good
1728 
1729hunk ./src/allmydata/storage/backends/disk/mutable.py 440
1730-def create_mutable_sharefile(filename, my_nodeid, write_enabler, parent):
1731-    ms = MutableShareFile(filename, parent)
1732-    ms.create(my_nodeid, write_enabler)
1733+def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent):
1734+    ms = MutableDiskShare(storageindex, shnum, fp, parent)
1735+    ms.create(serverid, write_enabler)
1736     del ms
1737hunk ./src/allmydata/storage/backends/disk/mutable.py 444
1738-    return MutableShareFile(filename, parent)
1739-
1740+    return MutableDiskShare(storageindex, shnum, fp, parent)
1741addfile ./src/allmydata/storage/backends/null/__init__.py
1742addfile ./src/allmydata/storage/backends/null/null_backend.py
1743hunk ./src/allmydata/storage/backends/null/null_backend.py 2
1744 
1745+import os, struct
1746+
1747+from zope.interface import implements
1748+
1749+from allmydata.interfaces import IStorageBackend, IShareSet, IStoredShare, IStoredMutableShare
1750+from allmydata.util.assertutil import precondition
1751+from allmydata.util.hashutil import constant_time_compare
1752+from allmydata.storage.backends.base import Backend, ShareSet
1753+from allmydata.storage.bucket import BucketWriter
1754+from allmydata.storage.common import si_b2a
1755+from allmydata.storage.lease import LeaseInfo
1756+
1757+
1758+class NullBackend(Backend):
1759+    implements(IStorageBackend)
1760+
1761+    def __init__(self):
1762+        Backend.__init__(self)
1763+
1764+    def get_available_space(self, reserved_space):
1765+        return None
1766+
1767+    def get_sharesets_for_prefix(self, prefix):
1768+        pass
1769+
1770+    def get_shareset(self, storageindex):
1771+        return NullShareSet(storageindex)
1772+
1773+    def fill_in_space_stats(self, stats):
1774+        pass
1775+
1776+    def set_storage_server(self, ss):
1777+        self.ss = ss
1778+
1779+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
1780+        pass
1781+
1782+
1783+class NullShareSet(ShareSet):
1784+    implements(IShareSet)
1785+
1786+    def __init__(self, storageindex):
1787+        self.storageindex = storageindex
1788+
1789+    def get_overhead(self):
1790+        return 0
1791+
1792+    def get_incoming_shnums(self):
1793+        return frozenset()
1794+
1795+    def get_shares(self):
1796+        pass
1797+
1798+    def get_share(self, shnum):
1799+        return None
1800+
1801+    def get_storage_index(self):
1802+        return self.storageindex
1803+
1804+    def get_storage_index_string(self):
1805+        return si_b2a(self.storageindex)
1806+
1807+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
1808+        immutableshare = ImmutableNullShare()
1809+        return BucketWriter(storageserver, immutableshare, max_space_per_bucket, lease_info, canary)
1810+
1811+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
1812+        return MutableNullShare()
1813+
1814+    def _clean_up_after_unlink(self):
1815+        pass
1816+
1817+
1818+class ImmutableNullShare:
1819+    implements(IStoredShare)
1820+    sharetype = "immutable"
1821+
1822+    def __init__(self):
1823+        """ If max_size is not None then I won't allow more than
1824+        max_size to be written to me. If create=True then max_size
1825+        must not be None. """
1826+        pass
1827+
1828+    def get_shnum(self):
1829+        return self.shnum
1830+
1831+    def unlink(self):
1832+        os.unlink(self.fname)
1833+
1834+    def read_share_data(self, offset, length):
1835+        precondition(offset >= 0)
1836+        # Reads beyond the end of the data are truncated. Reads that start
1837+        # beyond the end of the data return an empty string.
1838+        seekpos = self._data_offset+offset
1839+        fsize = os.path.getsize(self.fname)
1840+        actuallength = max(0, min(length, fsize-seekpos)) # XXX #1528
1841+        if actuallength == 0:
1842+            return ""
1843+        f = open(self.fname, 'rb')
1844+        f.seek(seekpos)
1845+        return f.read(actuallength)
1846+
1847+    def write_share_data(self, offset, data):
1848+        pass
1849+
1850+    def _write_lease_record(self, f, lease_number, lease_info):
1851+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1852+        f.seek(offset)
1853+        assert f.tell() == offset
1854+        f.write(lease_info.to_immutable_data())
1855+
1856+    def _read_num_leases(self, f):
1857+        f.seek(0x08)
1858+        (num_leases,) = struct.unpack(">L", f.read(4))
1859+        return num_leases
1860+
1861+    def _write_num_leases(self, f, num_leases):
1862+        f.seek(0x08)
1863+        f.write(struct.pack(">L", num_leases))
1864+
1865+    def _truncate_leases(self, f, num_leases):
1866+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1867+
1868+    def get_leases(self):
1869+        """Yields a LeaseInfo instance for all leases."""
1870+        f = open(self.fname, 'rb')
1871+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1872+        f.seek(self._lease_offset)
1873+        for i in range(num_leases):
1874+            data = f.read(self.LEASE_SIZE)
1875+            if data:
1876+                yield LeaseInfo().from_immutable_data(data)
1877+
1878+    def add_lease(self, lease):
1879+        pass
1880+
1881+    def renew_lease(self, renew_secret, new_expire_time):
1882+        for i, lease in enumerate(self.get_leases()):
1883+            if constant_time_compare(lease.renew_secret, renew_secret):
1884+                # yup. See if we need to update the owner time.
1885+                if new_expire_time > lease.expiration_time:
1886+                    # yes
1887+                    lease.expiration_time = new_expire_time
1888+                    f = open(self.fname, 'rb+')
1889+                    self._write_lease_record(f, i, lease)
1890+                    f.close()
1891+                return
1892+        raise IndexError("unable to renew non-existent lease")
1893+
1894+    def add_or_renew_lease(self, lease_info):
1895+        try:
1896+            self.renew_lease(lease_info.renew_secret,
1897+                             lease_info.expiration_time)
1898+        except IndexError:
1899+            self.add_lease(lease_info)
1900+
1901+
1902+class MutableNullShare:
1903+    implements(IStoredMutableShare)
1904+    sharetype = "mutable"
1905+
1906+    """ XXX: TODO """
1907addfile ./src/allmydata/storage/bucket.py
1908hunk ./src/allmydata/storage/bucket.py 1
1909+
1910+import time
1911+
1912+from foolscap.api import Referenceable
1913+
1914+from zope.interface import implements
1915+from allmydata.interfaces import RIBucketWriter, RIBucketReader
1916+from allmydata.util import base32, log
1917+from allmydata.util.assertutil import precondition
1918+
1919+
1920+class BucketWriter(Referenceable):
1921+    implements(RIBucketWriter)
1922+
1923+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1924+        self.ss = ss
1925+        self._max_size = max_size # don't allow the client to write more than this
1926+        self._canary = canary
1927+        self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1928+        self.closed = False
1929+        self.throw_out_all_data = False
1930+        self._share = immutableshare
1931+        # also, add our lease to the file now, so that other ones can be
1932+        # added by simultaneous uploaders
1933+        self._share.add_lease(lease_info)
1934+
1935+    def allocated_size(self):
1936+        return self._max_size
1937+
1938+    def remote_write(self, offset, data):
1939+        start = time.time()
1940+        precondition(not self.closed)
1941+        if self.throw_out_all_data:
1942+            return
1943+        self._share.write_share_data(offset, data)
1944+        self.ss.add_latency("write", time.time() - start)
1945+        self.ss.count("write")
1946+
1947+    def remote_close(self):
1948+        precondition(not self.closed)
1949+        start = time.time()
1950+
1951+        self._share.close()
1952+        filelen = self._share.stat()
1953+        self._share = None
1954+
1955+        self.closed = True
1956+        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
1957+
1958+        self.ss.bucket_writer_closed(self, filelen)
1959+        self.ss.add_latency("close", time.time() - start)
1960+        self.ss.count("close")
1961+
1962+    def _disconnected(self):
1963+        if not self.closed:
1964+            self._abort()
1965+
1966+    def remote_abort(self):
1967+        log.msg("storage: aborting write to share %r" % self._share,
1968+                facility="tahoe.storage", level=log.UNUSUAL)
1969+        if not self.closed:
1970+            self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
1971+        self._abort()
1972+        self.ss.count("abort")
1973+
1974+    def _abort(self):
1975+        if self.closed:
1976+            return
1977+        self._share.unlink()
1978+        self._share = None
1979+
1980+        # We are now considered closed for further writing. We must tell
1981+        # the storage server about this so that it stops expecting us to
1982+        # use the space it allocated for us earlier.
1983+        self.closed = True
1984+        self.ss.bucket_writer_closed(self, 0)
1985+
1986+
1987+class BucketReader(Referenceable):
1988+    implements(RIBucketReader)
1989+
1990+    def __init__(self, ss, share):
1991+        self.ss = ss
1992+        self._share = share
1993+        self.storageindex = share.storageindex
1994+        self.shnum = share.shnum
1995+
1996+    def __repr__(self):
1997+        return "<%s %s %s>" % (self.__class__.__name__,
1998+                               base32.b2a_l(self.storageindex[:8], 60),
1999+                               self.shnum)
2000+
2001+    def remote_read(self, offset, length):
2002+        start = time.time()
2003+        data = self._share.read_share_data(offset, length)
2004+        self.ss.add_latency("read", time.time() - start)
2005+        self.ss.count("read")
2006+        return data
2007+
2008+    def remote_advise_corrupt_share(self, reason):
2009+        return self.ss.remote_advise_corrupt_share("immutable",
2010+                                                   self.storageindex,
2011+                                                   self.shnum,
2012+                                                   reason)
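[editor's note: illustrative sketch, not part of the patch]
BucketWriter is now backend-agnostic: it drives any share object providing
the write-side IStoredShare methods used above (add_lease, write_share_data,
close, unlink). The lifecycle it expects, with placeholder names:

    bw = BucketWriter(ss, share, max_size, lease_info, canary)
    bw.remote_write(0, 'some data')   # may be called repeatedly
    bw.remote_close()                 # or bw.remote_abort() to discard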
2013addfile ./src/allmydata/test/test_backends.py
2014hunk ./src/allmydata/test/test_backends.py 1
2015+import os, stat
2016+from twisted.trial import unittest
2017+from allmydata.util.log import msg
2018+from allmydata.test.common_util import ReallyEqualMixin
2019+import mock
2020+
2021+# This is the code that we're going to be testing.
2022+from allmydata.storage.server import StorageServer
2023+from allmydata.storage.backends.disk.disk_backend import DiskBackend, si_si2dir
2024+from allmydata.storage.backends.null.null_backend import NullBackend
2025+
2026+# The following share file content was generated with
2027+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2028+# with share data == 'a'. The total size of this input
2029+# is 85 bytes.
2030+shareversionnumber = '\x00\x00\x00\x01'
2031+sharedatalength = '\x00\x00\x00\x01'
2032+numberofleases = '\x00\x00\x00\x01'
2033+shareinputdata = 'a'
2034+ownernumber = '\x00\x00\x00\x00'
2035+renewsecret  = 'x'*32
2036+cancelsecret = 'y'*32
2037+expirationtime = '\x00(\xde\x80'
2038+nextlease = ''
2039+containerdata = shareversionnumber + sharedatalength + numberofleases
2040+client_data = shareinputdata + ownernumber + renewsecret + \
2041+    cancelsecret + expirationtime + nextlease
2042+share_data = containerdata + client_data
2043+testnodeid = 'testnodeidxxxxxxxxxx'
2044+
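[editor's note: arithmetic check, not part of the patch]
The 85-byte total is consistent: the container header is 4+4+4 = 12 bytes,
and client_data is 1 (data) + 4 (owner) + 32 (renew) + 32 (cancel) +
4 (expiration) = 73 bytes; 12 + 73 = 85. The 73-byte figure is why
test_write_and_read_share below reads exactly 73 bytes of client data.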
2045+
2046+class MockFileSystem(unittest.TestCase):
2047+    """ I simulate a filesystem that the code under test can use. I simulate
2048+    just the parts of the filesystem that the current implementation of the Disk
2049+    backend needs. """
2050+    def setUp(self):
2051+        # Make patcher, patch, and effects for disk-using functions.
2052+        msg( "%s.setUp()" % (self,))
2053+        self.mockedfilepaths = {}
2054+        # keys are pathnames, values are MockFilePath objects. This is necessary because
2055+        # MockFilePath behavior sometimes depends on the filesystem. Where it does,
2056+        # self.mockedfilepaths has the relevant information.
2057+        self.storedir = MockFilePath('teststoredir', self.mockedfilepaths)
2058+        self.basedir = self.storedir.child('shares')
2059+        self.baseincdir = self.basedir.child('incoming')
2060+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
2061+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
2062+        self.shareincomingname = self.sharedirincomingname.child('0')
2063+        self.sharefinalname = self.sharedirfinalname.child('0')
2064+
2065+        # FIXME: these patches won't work; disk_backend no longer imports FilePath, BucketCountingCrawler,
2066+        # or LeaseCheckingCrawler.
2067+
2068+        self.FilePathFake = mock.patch('allmydata.storage.backends.disk.disk_backend.FilePath', new = MockFilePath)
2069+        self.FilePathFake.__enter__()
2070+
2071+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.BucketCountingCrawler')
2072+        FakeBCC = self.BCountingCrawler.__enter__()
2073+        FakeBCC.side_effect = self.call_FakeBCC
2074+
2075+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.LeaseCheckingCrawler')
2076+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
2077+        FakeLCC.side_effect = self.call_FakeLCC
2078+
2079+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
2080+        GetSpace = self.get_available_space.__enter__()
2081+        GetSpace.side_effect = self.call_get_available_space
2082+
2083+        self.statforsize = mock.patch('allmydata.storage.backends.disk.core.filepath.stat')
2084+        getsize = self.statforsize.__enter__()
2085+        getsize.side_effect = self.call_statforsize
2086+
2087+    def call_FakeBCC(self, StateFile):
2088+        return MockBCC()
2089+
2090+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
2091+        return MockLCC()
2092+
2093+    def call_get_available_space(self, storedir, reservedspace):
2094+        # The input vector has an input size of 85.
2095+        return 85 - reservedspace
2096+
2097+    def call_statforsize(self, fakefpname):
2098+        return self.mockedfilepaths[fakefpname].fileobject.size()
2099+
2100+    def tearDown(self):
2101+        msg( "%s.tearDown()" % (self,))
2102+        self.FilePathFake.__exit__()
2103+        self.mockedfilepaths = {}
2104+
2105+
2106+class MockFilePath:
2107+    def __init__(self, pathstring, ffpathsenvironment, existence=False):
2108+        #  I can't just make the values MockFileObjects because they may be directories.
2109+        self.mockedfilepaths = ffpathsenvironment
2110+        self.path = pathstring
2111+        self.existence = existence
2112+        if not self.mockedfilepaths.has_key(self.path):
2113+            #  The first MockFilePath object is special
2114+            self.mockedfilepaths[self.path] = self
2115+            self.fileobject = None
2116+        else:
2117+            self.fileobject = self.mockedfilepaths[self.path].fileobject
2118+        self.spawn = {}
2119+        self.antecedent = os.path.dirname(self.path)
2120+
2121+    def setContent(self, contentstring):
2122+        # This method rewrites the data in the file that corresponds to its path
2123+        # name whether it preexisted or not.
2124+        self.fileobject = MockFileObject(contentstring)
2125+        self.existence = True
2126+        self.mockedfilepaths[self.path].fileobject = self.fileobject
2127+        self.mockedfilepaths[self.path].existence = self.existence
2128+        self.setparents()
2129+
2130+    def create(self):
2131+        # This method chokes if there's a pre-existing file!
2132+        if self.mockedfilepaths[self.path].fileobject:
2133+            raise OSError
2134+        else:
2135+            self.existence = True
2136+            self.mockedfilepaths[self.path].fileobject = self.fileobject
2137+            self.mockedfilepaths[self.path].existence = self.existence
2138+            self.setparents()
2139+
2140+    def open(self, mode='r'):
2141+        # XXX Makes no use of mode.
2142+        if not self.mockedfilepaths[self.path].fileobject:
2143+            # If there's no fileobject there already then make one and put it there.
2144+            self.fileobject = MockFileObject()
2145+            self.existence = True
2146+            self.mockedfilepaths[self.path].fileobject = self.fileobject
2147+            self.mockedfilepaths[self.path].existence = self.existence
2148+        else:
2149+            # Otherwise get a ref to it.
2150+            self.fileobject = self.mockedfilepaths[self.path].fileobject
2151+            self.existence = self.mockedfilepaths[self.path].existence
2152+        return self.fileobject.open(mode)
2153+
2154+    def child(self, childstring):
2155+        arg2child = os.path.join(self.path, childstring)
2156+        child = MockFilePath(arg2child, self.mockedfilepaths)
2157+        return child
2158+
2159+    def children(self):
2160+        childrenfromffs = [ffp for ffp in self.mockedfilepaths.values() if ffp.path.startswith(self.path)]
2161+        childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)]
2162+        childrenfromffs = [ffp for ffp in childrenfromffs if ffp.exists()]
2163+        self.spawn = frozenset(childrenfromffs)
2164+        return self.spawn
2165+
2166+    def parent(self):
2167+        if self.mockedfilepaths.has_key(self.antecedent):
2168+            parent = self.mockedfilepaths[self.antecedent]
2169+        else:
2170+            parent = MockFilePath(self.antecedent, self.mockedfilepaths)
2171+        return parent
2172+
2173+    def parents(self):
2174+        antecedents = []
2175+        def f(fps, antecedents):
2176+            newfps = os.path.split(fps)[0]
2177+            if newfps:
2178+                antecedents.append(newfps)
2179+                f(newfps, antecedents)
2180+        f(self.path, antecedents)
2181+        return antecedents
2182+
2183+    def setparents(self):
2184+        for fps in self.parents():
2185+            if not self.mockedfilepaths.has_key(fps):
2186+                self.mockedfilepaths[fps] = MockFilePath(fps, self.mockedfilepaths, existence=True)
2187+
2188+    def basename(self):
2189+        return os.path.split(self.path)[1]
2190+
2191+    def moveTo(self, newffp):
2192+        #  XXX Makes no distinction between file and directory arguments; this is a deviation from filepath.moveTo
2193+        if self.mockedfilepaths[newffp.path].exists():
2194+            raise OSError
2195+        else:
2196+            self.mockedfilepaths[newffp.path] = self
2197+            self.path = newffp.path
2198+
2199+    def getsize(self):
2200+        return self.fileobject.getsize()
2201+
2202+    def exists(self):
2203+        return self.existence
2204+
2205+    def isdir(self):
2206+        return True
2207+
2208+    def makedirs(self):
2209+        # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
2210+        pass
2211+
2212+    def remove(self):
2213+        pass
2214+
2215+
2216+class MockFileObject:
2217+    def __init__(self, contentstring=''):
2218+        self.buffer = contentstring
2219+        self.pos = 0
2220+    def open(self, mode='r'):
2221+        return self
2222+    def write(self, instring):
2223+        begin = self.pos
2224+        padlen = begin - len(self.buffer)
2225+        if padlen > 0:
2226+            self.buffer += '\x00' * padlen
2227+        end = self.pos + len(instring)
2228+        self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
2229+        self.pos = end
2230+    def close(self):
2231+        self.pos = 0
2232+    def seek(self, pos):
2233+        self.pos = pos
2234+    def read(self, numberbytes):
2235+        return self.buffer[self.pos:self.pos+numberbytes]
2236+    def tell(self):
2237+        return self.pos
2238+    def size(self):
2239+        # XXX This method A: Is not to be found in a real file B: Is part of a wild-mung-up of filepath.stat!
2240+        # XXX Finally we shall hopefully use a getsize method soon, must consult first though.
2241+        # Hmmm...  perhaps we need to sometimes stat the address when there's not a mockfileobject present?
2242+        return {stat.ST_SIZE:len(self.buffer)}
2243+    def getsize(self):
2244+        return len(self.buffer)
2245+
2246+class MockBCC:
2247+    def setServiceParent(self, Parent):
2248+        pass
2249+
2250+
2251+class MockLCC:
2252+    def setServiceParent(self, Parent):
2253+        pass
2254+
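[editor's note: illustrative usage, not part of the patch]
The mock classes above behave like Twisted FilePath objects backed by a
shared dict of path -> MockFilePath entries, e.g.:

    env = {}
    fp = MockFilePath('teststoredir', env)
    child = fp.child('shares')    # joins paths within the same environment
    child.setContent('abc')       # creates a MockFileObject, marks parents
    assert child.exists() and child.getsize() == 3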
2255+
2256+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
2257+    """ NullBackend is just for testing and executable documentation, so
2258+    this test is actually a test of StorageServer in which we're using
2259+    NullBackend as helper code for the test, rather than a test of
2260+    NullBackend. """
2261+    def setUp(self):
2262+        self.ss = StorageServer(testnodeid, NullBackend())
2263+
2264+    @mock.patch('os.mkdir')
2265+    @mock.patch('__builtin__.open')
2266+    @mock.patch('os.listdir')
2267+    @mock.patch('os.path.isdir')
2268+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2269+        """
2270+        Write a new share. This tests that StorageServer's remote_allocate_buckets
2271+        generates the correct return types when given test-vector arguments. That
2272+        bs is of the correct type is verified by attempting to invoke remote_write
2273+        on bs[0].
2274+        """
2275+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2276+        bs[0].remote_write(0, 'a')
2277+        self.failIf(mockisdir.called)
2278+        self.failIf(mocklistdir.called)
2279+        self.failIf(mockopen.called)
2280+        self.failIf(mockmkdir.called)
2281+
2282+
2283+class TestServerConstruction(MockFileSystem, ReallyEqualMixin):
2284+    def test_create_server_disk_backend(self):
2285+        """ This tests whether a server instance can be constructed with a
2286+        filesystem backend. To pass the test, it mustn't use the filesystem
2287+        outside of its configured storedir. """
2288+        StorageServer(testnodeid, DiskBackend(self.storedir))
2289+
2290+
2291+class TestServerAndDiskBackend(MockFileSystem, ReallyEqualMixin):
2292+    """ This tests both the StorageServer and the Disk backend together. """
2293+    def setUp(self):
2294+        MockFileSystem.setUp(self)
2295+        try:
2296+            self.backend = DiskBackend(self.storedir)
2297+            self.ss = StorageServer(testnodeid, self.backend)
2298+
2299+            self.backendwithreserve = DiskBackend(self.storedir, reserved_space = 1)
2300+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
2301+        except:
2302+            MockFileSystem.tearDown(self)
2303+            raise
2304+
2305+    @mock.patch('time.time')
2306+    @mock.patch('allmydata.util.fileutil.get_available_space')
2307+    def test_out_of_space(self, mockget_available_space, mocktime):
2308+        mocktime.return_value = 0
2309+
2310+        def call_get_available_space(dir, reserve):
2311+            return 0
2312+
2313+        mockget_available_space.side_effect = call_get_available_space
2314+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2315+        self.failUnlessReallyEqual(bsc, {})
2316+
2317+    @mock.patch('time.time')
2318+    def test_write_and_read_share(self, mocktime):
2319+        """
2320+        Write a new share, read it, and test the server's (and disk backend's)
2321+        handling of simultaneous and successive attempts to write the same
2322+        share.
2323+        """
2324+        mocktime.return_value = 0
2325+        # Inspect incoming and fail unless it's empty.
2326+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
2327+
2328+        self.failUnlessReallyEqual(incomingset, frozenset())
2329+
2330+        # Populate incoming with sharenum 0.
2331+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
2332+
2333+        # This is a transparent-box test: inspect incoming and fail unless sharenum 0 is listed there.
2334+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
2335+
2336+
2337+
2338+        # Attempt to create a second share writer with the same sharenum.
2339+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
2340+
2341+        # Show that no sharewriter results from a remote_allocate_buckets
2342+        # with the same si and sharenum, until BucketWriter.remote_close()
2343+        # has been called.
2344+        self.failIf(bsa)
2345+
2346+        # Test allocated size.
2347+        spaceint = self.ss.allocated_size()
2348+        self.failUnlessReallyEqual(spaceint, 1)
2349+
2350+        # Write 'a' to shnum 0. Only tested together with close and read.
2351+        bs[0].remote_write(0, 'a')
2352+
2353+        # Preclose: Inspect final, failUnless nothing there.
2354+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
2355+        bs[0].remote_close()
2356+
2357+        # Postclose: (Omnibus) failUnless written data is in final.
2358+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
2359+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
2360+        contents = sharesinfinal[0].read_share_data(0, 73)
2361+        self.failUnlessReallyEqual(contents, client_data)
2362+
2363+        # Exercise the case that the share we're asking to allocate is
2364+        # already (completely) uploaded.
2365+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2366+
2367+
2368+    def test_read_old_share(self):
2369+        """ This tests whether the code correctly finds and reads
2370+        shares written out by old (Tahoe-LAFS <= v1.8.2)
2371+        servers. There is a similar test in test_download, but that one
2372+        is from the perspective of the client and exercises a deeper
2373+        stack of code. This one is for exercising just the
2374+        StorageServer object. """
2375+        # Construct a file with the appropriate contents in the mockfilesystem.
2376+        datalen = len(share_data)
2377+        finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(0))
2378+        finalhome.setContent(share_data)
2379+
2380+        # Now begin the test.
2381+        bs = self.ss.remote_get_buckets('teststorage_index')
2382+
2383+        self.failUnlessEqual(len(bs), 1)
2384+        b = bs['0']
2385+        # These should match by definition; the next two cases cover reads whose behavior is less obvious.
2386+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
2387+        # If you try to read past the end you get as much data as is there.
2388+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
2389+        # If you start reading past the end of the file you get the empty string.
2390+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2391}
2392[Pluggable backends -- all other changes. refs #999
2393david-sarah@jacaranda.org**20110919233256
2394 Ignore-this: 1a77b6b5d178b32a9b914b699ba7e957
2395] {
2396hunk ./src/allmydata/client.py 245
2397             sharetypes.append("immutable")
2398         if self.get_config("storage", "expire.mutable", True, boolean=True):
2399             sharetypes.append("mutable")
2400-        expiration_sharetypes = tuple(sharetypes)
2401 
2402hunk ./src/allmydata/client.py 246
2403+        expiration_policy = {
2404+            'enabled': expire,
2405+            'mode': mode,
2406+            'override_lease_duration': o_l_d,
2407+            'cutoff_date': cutoff_date,
2408+            'sharetypes': tuple(sharetypes),
2409+        }
2410         ss = StorageServer(storedir, self.nodeid,
2411                            reserved_space=reserved,
2412                            discard_storage=discard,
2413hunk ./src/allmydata/client.py 258
2414                            readonly_storage=readonly,
2415                            stats_provider=self.stats_provider,
2416-                           expiration_enabled=expire,
2417-                           expiration_mode=mode,
2418-                           expiration_override_lease_duration=o_l_d,
2419-                           expiration_cutoff_date=cutoff_date,
2420-                           expiration_sharetypes=expiration_sharetypes)
2421+                           expiration_policy=expiration_policy)
2422         self.add_service(ss)
2423 
2424         d = self.when_tub_ready()
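[editor's note: illustrative sketch, not part of the patch]
The five expiration_* keyword arguments collapse into a single
expiration_policy dict, so a caller now constructs the server along these
lines (the values here are only examples):

    expiration_policy = {
        'enabled': False,
        'mode': 'age',
        'override_lease_duration': None,
        'cutoff_date': None,
        'sharetypes': ('mutable', 'immutable'),
    }
    ss = StorageServer(storedir, nodeid, expiration_policy=expiration_policy)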
2425hunk ./src/allmydata/immutable/offloaded.py 306
2426         if os.path.exists(self._encoding_file):
2427             self.log("ciphertext already present, bypassing fetch",
2428                      level=log.UNUSUAL)
2429+            # XXX the following comment is probably stale, since
2430+            # LocalCiphertextReader.get_plaintext_hashtree_leaves does not exist.
2431+            #
2432             # we'll still need the plaintext hashes (when
2433             # LocalCiphertextReader.get_plaintext_hashtree_leaves() is
2434             # called), and currently the easiest way to get them is to ask
2435hunk ./src/allmydata/immutable/upload.py 765
2436             self._status.set_progress(1, progress)
2437         return cryptdata
2438 
2439-
2440     def get_plaintext_hashtree_leaves(self, first, last, num_segments):
2441hunk ./src/allmydata/immutable/upload.py 766
2442+        """OBSOLETE; Get the leaf nodes of a merkle hash tree over the
2443+        plaintext segments, i.e. get the tagged hashes of the given segments.
2444+        The segment size is expected to be generated by the
2445+        IEncryptedUploadable before any plaintext is read or ciphertext
2446+        produced, so that the segment hashes can be generated with only a
2447+        single pass.
2448+
2449+        This returns a Deferred that fires with a sequence of hashes, using:
2450+
2451+         tuple(segment_hashes[first:last])
2452+
2453+        'num_segments' is used to assert that the number of segments that the
2454+        IEncryptedUploadable handled matches the number of segments that the
2455+        encoder was expecting.
2456+
2457+        This method must not be called until the final byte has been read
2458+        from read_encrypted(). Once this method is called, read_encrypted()
2459+        can never be called again.
2460+        """
2461         # this is currently unused, but will live again when we fix #453
2462         if len(self._plaintext_segment_hashes) < num_segments:
2463             # close out the last one
2464hunk ./src/allmydata/immutable/upload.py 803
2465         return defer.succeed(tuple(self._plaintext_segment_hashes[first:last]))
2466 
2467     def get_plaintext_hash(self):
2468+        """OBSOLETE; Get the hash of the whole plaintext.
2469+
2470+        This returns a Deferred that fires with a tagged SHA-256 hash of the
2471+        whole plaintext, obtained from hashutil.plaintext_hash(data).
2472+        """
2473+        # this is currently unused, but will live again when we fix #453
2474         h = self._plaintext_hasher.digest()
2475         return defer.succeed(h)
2476 
2477hunk ./src/allmydata/interfaces.py 29
2478 Number = IntegerConstraint(8) # 2**(8*8) == 16EiB ~= 18e18 ~= 18 exabytes
2479 Offset = Number
2480 ReadSize = int # the 'int' constraint is 2**31 == 2Gib -- large files are processed in not-so-large increments
2481-WriteEnablerSecret = Hash # used to protect mutable bucket modifications
2482-LeaseRenewSecret = Hash # used to protect bucket lease renewal requests
2483-LeaseCancelSecret = Hash # used to protect bucket lease cancellation requests
2484+WriteEnablerSecret = Hash # used to protect mutable share modifications
2485+LeaseRenewSecret = Hash # used to protect lease renewal requests
2486+LeaseCancelSecret = Hash # used to protect lease cancellation requests
2487 
2488 class RIStubClient(RemoteInterface):
2489     """Each client publishes a service announcement for a dummy object called
2490hunk ./src/allmydata/interfaces.py 106
2491                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2492                          allocated_size=Offset, canary=Referenceable):
2493         """
2494-        @param storage_index: the index of the bucket to be created or
2495+        @param storage_index: the index of the shareset to be created or
2496                               increfed.
2497         @param sharenums: these are the share numbers (probably between 0 and
2498                           99) that the sender is proposing to store on this
2499hunk ./src/allmydata/interfaces.py 111
2500                           server.
2501-        @param renew_secret: This is the secret used to protect bucket refresh
2502+        @param renew_secret: This is the secret used to protect lease renewal.
2503                              This secret is generated by the client and
2504                              stored for later comparison by the server. Each
2505                              server is given a different secret.
2506hunk ./src/allmydata/interfaces.py 115
2507-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2508-        @param canary: If the canary is lost before close(), the bucket is
2509+        @param cancel_secret: ignored
2510+        @param canary: If the canary is lost before close(), the allocation is
2511                        deleted.
2512         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2513                  already have and allocated is what we hereby agree to accept.
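For illustration only: a client-side call to this method over foolscap might
look like the following sketch, where rref (a remote reference to an
RIStorageServer), si, rs, cs, and allocated_size are assumed placeholders:

    from foolscap.api import Referenceable

    d = rref.callRemote("allocate_buckets",
                        storage_index=si,
                        renew_secret=rs,
                        cancel_secret=cs,   # ignored, per the change above
                        sharenums=set([0, 1, 2]),
                        allocated_size=allocated_size,
                        canary=Referenceable())
    def _got((alreadygot, bucketwriters)):
        # alreadygot: set of share numbers already stored;
        # bucketwriters: dict of shnum -> RIBucketWriter for the rest
        return bucketwriters
    d.addCallback(_got)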
2514hunk ./src/allmydata/interfaces.py 129
2515                   renew_secret=LeaseRenewSecret,
2516                   cancel_secret=LeaseCancelSecret):
2517         """
2518-        Add a new lease on the given bucket. If the renew_secret matches an
2519+        Add a new lease on the given shareset. If the renew_secret matches an
2520         existing lease, that lease will be renewed instead. If there is no
2521hunk ./src/allmydata/interfaces.py 131
2522-        bucket for the given storage_index, return silently. (note that in
2523+        shareset for the given storage_index, return silently. (Note that in
2524         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2525hunk ./src/allmydata/interfaces.py 133
2526-        bucket)
2527+        shareset.)
2528         """
2529         return Any() # returns None now, but future versions might change
2530 
2531hunk ./src/allmydata/interfaces.py 139
2532     def renew_lease(storage_index=StorageIndex, renew_secret=LeaseRenewSecret):
2533         """
2534-        Renew the lease on a given bucket, resetting the timer to 31 days.
2535-        Some networks will use this, some will not. If there is no bucket for
2536+        Renew the lease on a given shareset, resetting the timer to 31 days.
2537+        Some networks will use this, some will not. If there is no shareset for
2538         the given storage_index, IndexError will be raised.
2539 
2540         For mutable shares, if the given renew_secret does not match an
2541hunk ./src/allmydata/interfaces.py 146
2542         existing lease, IndexError will be raised with a note listing the
2543         server-nodeids on the existing leases, so leases on migrated shares
2544-        can be renewed or cancelled. For immutable shares, IndexError
2545-        (without the note) will be raised.
2546+        can be renewed. For immutable shares, IndexError (without the note)
2547+        will be raised.
2548         """
2549         return Any()
2550 
2551hunk ./src/allmydata/interfaces.py 154
2552     def get_buckets(storage_index=StorageIndex):
2553         return DictOf(int, RIBucketReader, maxKeys=MAX_BUCKETS)
2554 
2555-
2556-
2557     def slot_readv(storage_index=StorageIndex,
2558                    shares=ListOf(int), readv=ReadVector):
2559         """Read a vector from the numbered shares associated with the given
2560hunk ./src/allmydata/interfaces.py 163
2561 
2562     def slot_testv_and_readv_and_writev(storage_index=StorageIndex,
2563                                         secrets=TupleOf(WriteEnablerSecret,
2564-                                                        LeaseRenewSecret,
2565-                                                        LeaseCancelSecret),
2566+                                                        LeaseRenewSecret),
2567                                         tw_vectors=TestAndWriteVectorsForShares,
2568                                         r_vector=ReadVector,
2569                                         ):
2570hunk ./src/allmydata/interfaces.py 167
2571-        """General-purpose test-and-set operation for mutable slots. Perform
2572-        a bunch of comparisons against the existing shares. If they all pass,
2573-        then apply a bunch of write vectors to those shares. Then use the
2574-        read vectors to extract data from all the shares and return the data.
2575+        """
2576+        General-purpose atomic test-read-and-set operation for mutable slots.
2577+        Perform a bunch of comparisons against the existing shares. If they
2578+        all pass: use the read vectors to extract data from all the shares,
2579+        then apply a bunch of write vectors to those shares. Return the read
2580+        data, which does not include any modifications made by the writes.
2581 
2582         This method is, um, large. The goal is to allow clients to update all
2583         the shares associated with a mutable file in a single round trip.
2584hunk ./src/allmydata/interfaces.py 177
2585 
2586-        @param storage_index: the index of the bucket to be created or
2587+        @param storage_index: the index of the shareset to be created or
2588                               increfed.
2589         @param write_enabler: a secret that is stored along with the slot.
2590                               Writes are accepted from any caller who can
2591hunk ./src/allmydata/interfaces.py 183
2592                               present the matching secret. A different secret
2593                               should be used for each slot*server pair.
2594-        @param renew_secret: This is the secret used to protect bucket refresh
2595+        @param renew_secret: This is the secret used to protect lease renewal.
2596                              This secret is generated by the client and
2597                              stored for later comparison by the server. Each
2598                              server is given a different secret.
2599hunk ./src/allmydata/interfaces.py 187
2600-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2601+        @param cancel_secret: ignored
2602 
2603hunk ./src/allmydata/interfaces.py 189
2604-        The 'secrets' argument is a tuple of (write_enabler, renew_secret,
2605-        cancel_secret). The first is required to perform any write. The
2606-        latter two are used when allocating new shares. To simply acquire a
2607-        new lease on existing shares, use an empty testv and an empty writev.
2608+        The 'secrets' argument is a tuple with (write_enabler, renew_secret).
2609+        The write_enabler is required to perform any write. The renew_secret
2610+        is used when allocating new shares.
2611 
2612         Each share can have a separate test vector (i.e. a list of
2613         comparisons to perform). If all vectors for all shares pass, then all
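As a sketch of the calling convention described above (rref, the secret
values, expected_prefix, and new_data are assumed placeholders; the vector
shapes follow the TestAndWriteVectorsForShares constraint):

    secrets = (write_enabler, renew_secret)   # no cancel_secret any more
    testv = [(0, 4, 'eq', expected_prefix)]   # compare 4 bytes at offset 0
    writev = [(0, new_data)]                  # write new_data at offset 0
    tw_vectors = {0: (testv, writev, None)}   # share 0; None: keep length
    d = rref.callRemote("slot_testv_and_readv_and_writev",
                        storage_index, secrets, tw_vectors, [(0, 4)])
    # fires with (wrote, read_data): a bool plus {shnum: [data, ...]}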
2614hunk ./src/allmydata/interfaces.py 280
2615         store that on disk.
2616         """
2617 
2618-class IStorageBucketWriter(Interface):
2619+
2620+class IStorageBackend(Interface):
2621     """
2622hunk ./src/allmydata/interfaces.py 283
2623-    Objects of this kind live on the client side.
2624+    Objects of this kind live on the server side and are used by the
2625+    storage server object.
2626     """
2627hunk ./src/allmydata/interfaces.py 286
2628-    def put_block(segmentnum=int, data=ShareData):
2629-        """@param data: For most segments, this data will be 'blocksize'
2630-        bytes in length. The last segment might be shorter.
2631-        @return: a Deferred that fires (with None) when the operation completes
2632+    def get_available_space():
2633+        """
2634+        Returns available space for share storage in bytes, or
2635+        None if this information is not available or if the available
2636+        space is unlimited.
2637+
2638+        If the backend is configured for read-only mode then this will
2639+        return 0.
2640+        """
2641+
2642+    def get_sharesets_for_prefix(prefix):
2643+        """
2644+        Generates IShareSet objects for all storage indices matching the
2645+        given prefix for which this backend holds shares.
2646+        """
2647+
2648+    def get_shareset(storageindex):
2649+        """
2650+        Get an IShareSet object for the given storage index.
2651+        """
2652+
2653+    def advise_corrupt_share(storageindex, sharetype, shnum, reason):
2654+        """
2655+        Clients who discover hash failures in shares that they have
2656+        downloaded from me will use this method to inform me about the
2657+        failures. I will record their concern so that my operator can
2658+        manually inspect the shares in question.
2659+
2660+        'sharetype' is either 'mutable' or 'immutable'. 'shnum' is the integer
2661+        share number. 'reason' is a human-readable explanation of the problem,
2662+        probably including some expected hash values and the computed ones
2663+        that did not match. Corruption advisories for mutable shares should
2664+        include a hash of the public key (the same value that appears in the
2665+        mutable-file verify-cap), since the current share format does not
2666+        store that on disk.
2667+
2668+        @param storageindex=str
2669+        @param sharetype=str
2670+        @param shnum=int
2671+        @param reason=str
2672+        """
2673+
2674+
2675+class IShareSet(Interface):
2676+    def get_storage_index():
2677+        """
2678+        Returns the storage index for this shareset.
2679+        """
2680+
2681+    def get_storage_index_string():
2682+        """
2683+        Returns the base32-encoded storage index for this shareset.
2684+        """
2685+
2686+    def get_overhead():
2687+        """
2688+        Returns the storage overhead, in bytes, of this shareset (exclusive
2689+        of the space used by its shares).
2690+        """
2691+
2692+    def get_shares():
2693+        """
2694+        Generates the IStoredShare objects held in this shareset.
2695+        """
2696+
2697+    def has_incoming(shnum):
2698+        """
2699+        Returns True if this shareset has an incoming (partial) share with
2700+        this number, otherwise False.
2700+        """
2701+
2702+    def make_bucket_writer(storageserver, shnum, max_space_per_bucket, lease_info, canary):
2703+        """
2704+        Create a bucket writer that can be used to write data to a given share.
2705+
2706+        @param storageserver=RIStorageServer
2707+        @param shnum=int: A share number in this shareset
2708+        @param max_space_per_bucket=int: The maximum space allocated for the
2709+                 share, in bytes
2710+        @param lease_info=LeaseInfo: The initial lease information
2711+        @param canary=Referenceable: If the canary is lost before close(), the
2712+                 bucket is deleted.
2713+        @return an IStorageBucketWriter for the given share
2714+        """
2715+
2716+    def make_bucket_reader(storageserver, share):
2717+        """
2718+        Create a bucket reader that can be used to read data from a given share.
2719+
2720+        @param storageserver=RIStorageServer
2721+        @param share=IStoredShare
2722+        @return an IStorageBucketReader for the given share
2723+        """
2724+
2725+    def readv(wanted_shnums, read_vector):
2726+        """
2727+        Read a vector from the numbered shares in this shareset. An empty
2728+        wanted_shnums list means to return data from all known shares.
2729+
2730+        @param wanted_shnums=ListOf(int)
2731+        @param read_vector=ReadVector
2732+        @return DictOf(int, ReadData): shnum -> results, with one key per share
2733+        """
2734+
2735+    def testv_and_readv_and_writev(storageserver, secrets, test_and_write_vectors, read_vector, expiration_time):
2736+        """
2737+        General-purpose atomic test-read-and-set operation for mutable slots.
2738+        Perform a bunch of comparisons against the existing shares in this
2739+        shareset. If they all pass: use the read vectors to extract data from
2740+        all the shares, then apply a bunch of write vectors to those shares.
2741+        Return the read data, which does not include any modifications made by
2742+        the writes.
2743+
2744+        See the similar method in RIStorageServer for more detail.
2745+
2746+        @param storageserver=RIStorageServer
2747+        @param secrets=TupleOf(WriteEnablerSecret, LeaseRenewSecret[, ...])
2748+        @param test_and_write_vectors=TestAndWriteVectorsForShares
2749+        @param read_vector=ReadVector
2750+        @param expiration_time=int
2751+        @return TupleOf(bool, DictOf(int, ReadData))
2752+        """
2753+
2754+    def add_or_renew_lease(lease_info):
2755+        """
2756+        Add a new lease on the shares in this shareset. If the renew_secret
2757+        matches an existing lease, that lease will be renewed instead. If
2758+        there are no shares in this shareset, return silently.
2759+
2760+        @param lease_info=LeaseInfo
2761+        """
2762+
2763+    def renew_lease(renew_secret, new_expiration_time):
2764+        """
2765+        Renew a lease on the shares in this shareset, resetting the timer
2766+        to 31 days. Some grids will use this, some will not. If there are no
2767+        shares in this shareset, IndexError will be raised.
2768+
2769+        For mutable shares, if the given renew_secret does not match an
2770+        existing lease, IndexError will be raised with a note listing the
2771+        server-nodeids on the existing leases, so leases on migrated shares
2772+        can be renewed. For immutable shares, IndexError (without the note)
2773+        will be raised.
2774+
2775+        @param renew_secret=LeaseRenewSecret
2776+        """
2777+
2778+
2779+class IStoredShare(Interface):
2780+    """
2781+    This object represents up to all of the data of a single share. It is
2782+    intended for lazy evaluation, such that in many use cases substantially
2783+    less than all of the share data will actually be accessed.
2784+    """
2785+    def close():
2786+        """
2787+        Complete writing to this share.
2788+        """
2789+
2790+    def get_storage_index():
2791+        """
2792+        Returns the storage index.
2793+        """
2794+
2795+    def get_shnum():
2796+        """
2797+        Returns the share number.
2798+        """
2799+
2800+    def get_data_length():
2801+        """
2802+        Returns the data length in bytes.
2803+        """
2804+
2805+    def get_size():
2806+        """
2807+        Returns the size of the share in bytes.
2808+        """
2809+
2810+    def get_used_space():
2811+        """
2812+        Returns the amount of backend storage used by this share, in bytes,
2813+        including overhead.
2814+        """
2815+
2816+    def unlink():
2817+        """
2818+        Signal that this share can be removed from the backend storage. This does
2819+        not guarantee that the share data will be immediately inaccessible, or
2820+        that it will be securely erased.
2821+        """
2822+
2823+    def readv(read_vector):
2824+        """
2825+        Read the given read vector from this share, returning a list of
2826+        data strings corresponding to its (offset, length) entries.
2826+        """
2827+
2828+
2829+class IStoredMutableShare(IStoredShare):
2830+    def check_write_enabler(write_enabler, si_s):
2831+        """
2832+        Check that the given write_enabler matches this share's write enabler.
2833+        si_s is the base32-encoded storage index, used in error reporting.
2833         """
2834 
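A minimal sketch of an object providing the new IStorageBackend interface;
the class and its trivial behaviour are illustrative assumptions, not part
of this patch:

    from zope.interface import implements
    from allmydata.interfaces import IStorageBackend

    class EmptyBackend(object):
        implements(IStorageBackend)

        def get_available_space(self):
            return None       # space information not available
        def get_sharesets_for_prefix(self, prefix):
            return iter([])   # holds no shares for any prefix
        def get_shareset(self, storageindex):
            # a real backend returns an IShareSet (possibly empty) here
            raise NotImplementedError
        def advise_corrupt_share(self, storageindex, sharetype, shnum, reason):
            pass              # a real backend would record the advisory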
2835hunk ./src/allmydata/interfaces.py 489
2836-    def put_plaintext_hashes(hashes=ListOf(Hash)):
2837+    def check_testv(test_vector):
2838+        """
2839+        Check whether this share satisfies the given test vector (a list of
2840+        comparisons), returning a boolean.
2840+        """
2841+
2842+    def writev(datav, new_length):
2843+        """
2844+        Apply the given write vector (a list of (offset, data) tuples) to
2845+        this share, then truncate it to new_length if that is not None.
2845+        """
2846+
2847+
2848+class IStorageBucketWriter(Interface):
2849+    """
2850+    Objects of this kind live on the client side.
2851+    """
2852+    def put_block(segmentnum, data):
2853         """
2854hunk ./src/allmydata/interfaces.py 506
2855+        @param segmentnum=int
2856+        @param data=ShareData: For most segments, this data will be 'blocksize'
2857+        bytes in length. The last segment might be shorter.
2858         @return: a Deferred that fires (with None) when the operation completes
2859         """
2860 
2861hunk ./src/allmydata/interfaces.py 512
2862-    def put_crypttext_hashes(hashes=ListOf(Hash)):
2863+    def put_crypttext_hashes(hashes):
2864         """
2865hunk ./src/allmydata/interfaces.py 514
2866+        @param hashes=ListOf(Hash)
2867         @return: a Deferred that fires (with None) when the operation completes
2868         """
2869 
2870hunk ./src/allmydata/interfaces.py 518
2871-    def put_block_hashes(blockhashes=ListOf(Hash)):
2872+    def put_block_hashes(blockhashes):
2873         """
2874hunk ./src/allmydata/interfaces.py 520
2875+        @param blockhashes=ListOf(Hash)
2876         @return: a Deferred that fires (with None) when the operation completes
2877         """
2878 
2879hunk ./src/allmydata/interfaces.py 524
2880-    def put_share_hashes(sharehashes=ListOf(TupleOf(int, Hash))):
2881+    def put_share_hashes(sharehashes):
2882         """
2883hunk ./src/allmydata/interfaces.py 526
2884+        @param sharehashes=ListOf(TupleOf(int, Hash))
2885         @return: a Deferred that fires (with None) when the operation completes
2886         """
2887 
2888hunk ./src/allmydata/interfaces.py 530
2889-    def put_uri_extension(data=URIExtensionData):
2890+    def put_uri_extension(data):
2891         """This block of data contains integrity-checking information (hashes
2892         of plaintext, crypttext, and shares), as well as encoding parameters
2893         that are necessary to recover the data. This is a serialized dict
2894hunk ./src/allmydata/interfaces.py 535
2895         mapping strings to other strings. The hash of this data is kept in
2896-        the URI and verified before any of the data is used. All buckets for
2897-        a given file contain identical copies of this data.
2898+        the URI and verified before any of the data is used. All share
2899+        containers for a given file contain identical copies of this data.
2900 
2901         The serialization format is specified with the following pseudocode:
2902         for k in sorted(dict.keys()):
2903hunk ./src/allmydata/interfaces.py 543
2904             assert re.match(r'^[a-zA-Z_\-]+$', k)
2905             write(k + ':' + netstring(dict[k]))
2906 
2907+        @param data=URIExtensionData
2908         @return: a Deferred that fires (with None) when the operation completes
2909         """
2910 
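The pseudocode above can be rendered as runnable Python; this sketch assumes
the "<length>:<bytes>," netstring encoding used elsewhere in Tahoe-LAFS, and
the function names are illustrative:

    import re

    def netstring(s):
        return "%d:%s," % (len(s), s)

    def serialize_uri_extension(d):
        # keys are emitted in sorted order so the resulting hash is
        # deterministic for a given dict
        out = []
        for k in sorted(d.keys()):
            assert re.match(r'^[a-zA-Z_\-]+$', k)
            out.append(k + ':' + netstring(d[k]))
        return ''.join(out)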
2911hunk ./src/allmydata/interfaces.py 558
2912 
2913 class IStorageBucketReader(Interface):
2914 
2915-    def get_block_data(blocknum=int, blocksize=int, size=int):
2916+    def get_block_data(blocknum, blocksize, size):
2917         """Most blocks will be the same size. The last block might be shorter
2918         than the others.
2919 
2920hunk ./src/allmydata/interfaces.py 562
2921+        @param blocknum=int
2922+        @param blocksize=int
2923+        @param size=int
2924         @return: ShareData
2925         """
2926 
2927hunk ./src/allmydata/interfaces.py 573
2928         @return: ListOf(Hash)
2929         """
2930 
2931-    def get_block_hashes(at_least_these=SetOf(int)):
2932+    def get_block_hashes(at_least_these=()):
2933         """
2934hunk ./src/allmydata/interfaces.py 575
2935+        @param at_least_these=SetOf(int)
2936         @return: ListOf(Hash)
2937         """
2938 
2939hunk ./src/allmydata/interfaces.py 579
2940-    def get_share_hashes(at_least_these=SetOf(int)):
2941+    def get_share_hashes():
2942         """
2943         @return: ListOf(TupleOf(int, Hash))
2944         """
2945hunk ./src/allmydata/interfaces.py 611
2946         @return: unicode nickname, or None
2947         """
2948 
2949-    # methods moved from IntroducerClient, need review
2950-    def get_all_connections():
2951-        """Return a frozenset of (nodeid, service_name, rref) tuples, one for
2952-        each active connection we've established to a remote service. This is
2953-        mostly useful for unit tests that need to wait until a certain number
2954-        of connections have been made."""
2955-
2956-    def get_all_connectors():
2957-        """Return a dict that maps from (nodeid, service_name) to a
2958-        RemoteServiceConnector instance for all services that we are actively
2959-        trying to connect to. Each RemoteServiceConnector has the following
2960-        public attributes::
2961-
2962-          service_name: the type of service provided, like 'storage'
2963-          announcement_time: when we first heard about this service
2964-          last_connect_time: when we last established a connection
2965-          last_loss_time: when we last lost a connection
2966-
2967-          version: the peer's version, from the most recent connection
2968-          oldest_supported: the peer's oldest supported version, same
2969-
2970-          rref: the RemoteReference, if connected, otherwise None
2971-          remote_host: the IAddress, if connected, otherwise None
2972-
2973-        This method is intended for monitoring interfaces, such as a web page
2974-        that describes connecting and connected peers.
2975-        """
2976-
2977-    def get_all_peerids():
2978-        """Return a frozenset of all peerids to whom we have a connection (to
2979-        one or more services) established. Mostly useful for unit tests."""
2980-
2981-    def get_all_connections_for(service_name):
2982-        """Return a frozenset of (nodeid, service_name, rref) tuples, one
2983-        for each active connection that provides the given SERVICE_NAME."""
2984-
2985-    def get_permuted_peers(service_name, key):
2986-        """Returns an ordered list of (peerid, rref) tuples, selecting from
2987-        the connections that provide SERVICE_NAME, using a hash-based
2988-        permutation keyed by KEY. This randomizes the service list in a
2989-        repeatable way, to distribute load over many peers.
2990-        """
2991-
2992 
2993 class IMutableSlotWriter(Interface):
2994     """
2995hunk ./src/allmydata/interfaces.py 616
2996     The interface for a writer around a mutable slot on a remote server.
2997     """
2998-    def set_checkstring(checkstring, *args):
2999+    def set_checkstring(seqnum_or_checkstring, root_hash=None, salt=None):
3000         """
3001         Set the checkstring that I will pass to the remote server when
3002         writing.
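The widened signature admits two calling conventions; a hedged illustration
(writer and the argument values are placeholders, not part of this patch):

    writer.set_checkstring(checkstring)              # one packed checkstring
    writer.set_checkstring(seqnum, root_hash, salt)  # or its components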
3003hunk ./src/allmydata/interfaces.py 640
3004         Add a block and salt to the share.
3005         """
3006 
3007-    def put_encprivey(encprivkey):
3008+    def put_encprivkey(encprivkey):
3009         """
3010         Add the encrypted private key to the share.
3011         """
3012hunk ./src/allmydata/interfaces.py 645
3013 
3014-    def put_blockhashes(blockhashes=list):
3015+    def put_blockhashes(blockhashes):
3016         """
3017hunk ./src/allmydata/interfaces.py 647
3018+        @param blockhashes=list
3019         Add the block hash tree to the share.
3020         """
3021 
3022hunk ./src/allmydata/interfaces.py 651
3023-    def put_sharehashes(sharehashes=dict):
3024+    def put_sharehashes(sharehashes):
3025         """
3026hunk ./src/allmydata/interfaces.py 653
3027+        @param sharehashes=dict
3028         Add the share hash chain to the share.
3029         """
3030 
3031hunk ./src/allmydata/interfaces.py 739
3032     def get_extension_params():
3033         """Return the extension parameters in the URI"""
3034 
3035-    def set_extension_params():
3036+    def set_extension_params(params):
3037         """Set the extension parameters that should be in the URI"""
3038 
3039 class IDirectoryURI(Interface):
3040hunk ./src/allmydata/interfaces.py 879
3041         writer-visible data using this writekey.
3042         """
3043 
3044-    # TODO: Can this be overwrite instead of replace?
3045-    def replace(new_contents):
3046-        """Replace the contents of the mutable file, provided that no other
3047+    def overwrite(new_contents):
3048+        """Overwrite the contents of the mutable file, provided that no other
3049         node has published (or is attempting to publish, concurrently) a
3050         newer version of the file than this one.
3051 
3052hunk ./src/allmydata/interfaces.py 1346
3053         is empty, the metadata will be an empty dictionary.
3054         """
3055 
3056-    def set_uri(name, writecap, readcap=None, metadata=None, overwrite=True):
3057+    def set_uri(name, writecap, readcap, metadata=None, overwrite=True):
3058         """I add a child (by writecap+readcap) at the specific name. I return
3059         a Deferred that fires when the operation finishes. If overwrite= is
3060         True, I will replace any existing child of the same name, otherwise
3061hunk ./src/allmydata/interfaces.py 1745
3062     Block Hash, and the encoding parameters, both of which must be included
3063     in the URI.
3064 
3065-    I do not choose shareholders, that is left to the IUploader. I must be
3066-    given a dict of RemoteReferences to storage buckets that are ready and
3067-    willing to receive data.
3068+    I do not choose shareholders, that is left to the IUploader.
3069     """
3070 
3071     def set_size(size):
3072hunk ./src/allmydata/interfaces.py 1752
3073         """Specify the number of bytes that will be encoded. This must be
2074         performed before get_serialized_params() can be called.
3075         """
3076+
3077     def set_params(params):
3078         """Override the default encoding parameters. 'params' is a tuple of
3079         (k,d,n), where 'k' is the number of required shares, 'd' is the
3080hunk ./src/allmydata/interfaces.py 1848
3081     download, validate, decode, and decrypt data from them, writing the
3082     results to an output file.
3083 
3084-    I do not locate the shareholders, that is left to the IDownloader. I must
3085-    be given a dict of RemoteReferences to storage buckets that are ready to
3086-    send data.
3087+    I do not locate the shareholders, that is left to the IDownloader.
3088     """
3089 
3090     def setup(outfile):
3091hunk ./src/allmydata/interfaces.py 1950
3092         resuming an interrupted upload (where we need to compute the
3093         plaintext hashes, but don't need the redundant encrypted data)."""
3094 
3095-    def get_plaintext_hashtree_leaves(first, last, num_segments):
3096-        """OBSOLETE; Get the leaf nodes of a merkle hash tree over the
3097-        plaintext segments, i.e. get the tagged hashes of the given segments.
3098-        The segment size is expected to be generated by the
3099-        IEncryptedUploadable before any plaintext is read or ciphertext
3100-        produced, so that the segment hashes can be generated with only a
3101-        single pass.
3102-
3103-        This returns a Deferred that fires with a sequence of hashes, using:
3104-
3105-         tuple(segment_hashes[first:last])
3106-
3107-        'num_segments' is used to assert that the number of segments that the
3108-        IEncryptedUploadable handled matches the number of segments that the
3109-        encoder was expecting.
3110-
3111-        This method must not be called until the final byte has been read
3112-        from read_encrypted(). Once this method is called, read_encrypted()
3113-        can never be called again.
3114-        """
3115-
3116-    def get_plaintext_hash():
3117-        """OBSOLETE; Get the hash of the whole plaintext.
3118-
3119-        This returns a Deferred that fires with a tagged SHA-256 hash of the
3120-        whole plaintext, obtained from hashutil.plaintext_hash(data).
3121-        """
3122-
3123     def close():
3124         """Just like IUploadable.close()."""
3125 
3126hunk ./src/allmydata/interfaces.py 2144
3127         returns a Deferred that fires with an IUploadResults instance, from
3128         which the URI of the file can be obtained as results.uri ."""
3129 
3130-    def upload_ssk(write_capability, new_version, uploadable):
3131-        """TODO: how should this work?"""
3132-
3133 class ICheckable(Interface):
3134     def check(monitor, verify=False, add_lease=False):
3135         """Check up on my health, optionally repairing any problems.
3136hunk ./src/allmydata/interfaces.py 2505
3137 
3138 class IRepairResults(Interface):
3139     """I contain the results of a repair operation."""
3140-    def get_successful(self):
3141+    def get_successful():
3142         """Returns a boolean: True if the repair made the file healthy, False
3143         if not. Repair failure generally indicates a file that has been
3144         damaged beyond repair."""
3145hunk ./src/allmydata/interfaces.py 2577
3146     Tahoe process will typically have a single NodeMaker, but unit tests may
3147     create simplified/mocked forms for testing purposes.
3148     """
3149-    def create_from_cap(writecap, readcap=None, **kwargs):
3150+    def create_from_cap(writecap, readcap=None, deep_immutable=False, name=u"<unknown name>"):
3151         """I create an IFilesystemNode from the given writecap/readcap. I can
3152         only provide nodes for existing file/directory objects: use my other
3153         methods to create new objects. I return synchronously."""
3154hunk ./src/allmydata/monitor.py 30
3155 
3156     # the following methods are provided for the operation code
3157 
3158-    def is_cancelled(self):
3159+    def is_cancelled():
3160         """Returns True if the operation has been cancelled. If True,
3161         operation code should stop creating new work, and attempt to stop any
3162         work already in progress."""
3163hunk ./src/allmydata/monitor.py 35
3164 
3165-    def raise_if_cancelled(self):
3166+    def raise_if_cancelled():
3167         """Raise OperationCancelledError if the operation has been cancelled.
3168         Operation code that has a robust error-handling path can simply call
3169         this periodically."""
3170hunk ./src/allmydata/monitor.py 40
3171 
3172-    def set_status(self, status):
3173+    def set_status(status):
3174         """Sets the Monitor's 'status' object to an arbitrary value.
3175         Different operations will store different sorts of status information
3176         here. Operation code should use get+modify+set sequences to update
3177hunk ./src/allmydata/monitor.py 46
3178         this."""
3179 
3180-    def get_status(self):
3181+    def get_status():
3182         """Return the status object. If the operation failed, this will be a
3183         Failure instance."""
3184 
3185hunk ./src/allmydata/monitor.py 50
3186-    def finish(self, status):
3187+    def finish(status):
3188         """Call this when the operation is done, successful or not. The
3189         Monitor's lifetime is influenced by the completion of the operation
3190         it is monitoring. The Monitor's 'status' value will be set with the
3191hunk ./src/allmydata/monitor.py 63
3192 
3193     # the following methods are provided for the initiator of the operation
3194 
3195-    def is_finished(self):
3196+    def is_finished():
3197         """Return a boolean, True if the operation is done (whether
3198         successful or failed), False if it is still running."""
3199 
3200hunk ./src/allmydata/monitor.py 67
3201-    def when_done(self):
3202+    def when_done():
3203         """Return a Deferred that fires when the operation is complete. It
3204         will fire with the operation status, the same value as returned by
3205         get_status()."""
3206hunk ./src/allmydata/monitor.py 72
3207 
3208-    def cancel(self):
3209+    def cancel():
3210         """Cancel the operation as soon as possible. is_cancelled() will
3211         start returning True after this is called."""
3212 
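A short sketch of the usage pattern these docstrings describe; run_operation,
items, and process are illustrative names, not part of this patch:

    def run_operation(monitor, items):
        done = 0
        for item in items:
            monitor.raise_if_cancelled()   # abort promptly if cancelled
            process(item)
            done += 1
            monitor.set_status(done)       # publish progress for renderers
        monitor.finish(done)               # also fires monitor.when_done()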
3213hunk ./src/allmydata/mutable/filenode.py 753
3214         self._writekey = writekey
3215         self._serializer = defer.succeed(None)
3216 
3217-
3218     def get_sequence_number(self):
3219         """
3220         Get the sequence number of the mutable version that I represent.
3221hunk ./src/allmydata/mutable/filenode.py 759
3222         """
3223         return self._version[0] # verinfo[0] == the sequence number
3224 
3225+    def get_servermap(self):
3226+        return self._servermap
3227 
3228hunk ./src/allmydata/mutable/filenode.py 762
3229-    # TODO: Terminology?
3230     def get_writekey(self):
3231         """
3232         I return a writekey or None if I don't have a writekey.
3233hunk ./src/allmydata/mutable/filenode.py 768
3234         """
3235         return self._writekey
3236 
3237-
3238     def set_downloader_hints(self, hints):
3239         """
3240         I set the downloader hints.
3241hunk ./src/allmydata/mutable/filenode.py 776
3242 
3243         self._downloader_hints = hints
3244 
3245-
3246     def get_downloader_hints(self):
3247         """
3248         I return the downloader hints.
3249hunk ./src/allmydata/mutable/filenode.py 782
3250         """
3251         return self._downloader_hints
3252 
3253-
3254     def overwrite(self, new_contents):
3255         """
3256         I overwrite the contents of this mutable file version with the
3257hunk ./src/allmydata/mutable/filenode.py 791
3258 
3259         return self._do_serialized(self._overwrite, new_contents)
3260 
3261-
3262     def _overwrite(self, new_contents):
3263         assert IMutableUploadable.providedBy(new_contents)
3264         assert self._servermap.last_update_mode == MODE_WRITE
3265hunk ./src/allmydata/mutable/filenode.py 797
3266 
3267         return self._upload(new_contents)
3268 
3269-
3270     def modify(self, modifier, backoffer=None):
3271         """I use a modifier callback to apply a change to the mutable file.
3272         I implement the following pseudocode::
3273hunk ./src/allmydata/mutable/filenode.py 841
3274 
3275         return self._do_serialized(self._modify, modifier, backoffer)
3276 
3277-
3278     def _modify(self, modifier, backoffer):
3279         if backoffer is None:
3280             backoffer = BackoffAgent().delay
3281hunk ./src/allmydata/mutable/filenode.py 846
3282         return self._modify_and_retry(modifier, backoffer, True)
3283 
3284-
3285     def _modify_and_retry(self, modifier, backoffer, first_time):
3286         """
3287         I try to apply modifier to the contents of this version of the
3288hunk ./src/allmydata/mutable/filenode.py 878
3289         d.addErrback(_retry)
3290         return d
3291 
3292-
3293     def _modify_once(self, modifier, first_time):
3294         """
3295         I attempt to apply a modifier to the contents of the mutable
3296hunk ./src/allmydata/mutable/filenode.py 913
3297         d.addCallback(_apply)
3298         return d
3299 
3300-
3301     def is_readonly(self):
3302         """
3303         I return True if this MutableFileVersion provides no write
3304hunk ./src/allmydata/mutable/filenode.py 921
3305         """
3306         return self._writekey is None
3307 
3308-
3309     def is_mutable(self):
3310         """
3311         I return True, since mutable files are always mutable by
3312hunk ./src/allmydata/mutable/filenode.py 928
3313         """
3314         return True
3315 
3316-
3317     def get_storage_index(self):
3318         """
3319         I return the storage index of the reference that I encapsulate.
3320hunk ./src/allmydata/mutable/filenode.py 934
3321         """
3322         return self._storage_index
3323 
3324-
3325     def get_size(self):
3326         """
3327         I return the length, in bytes, of this readable object.
3328hunk ./src/allmydata/mutable/filenode.py 940
3329         """
3330         return self._servermap.size_of_version(self._version)
3331 
3332-
3333     def download_to_data(self, fetch_privkey=False):
3334         """
3335         I return a Deferred that fires with the contents of this
3336hunk ./src/allmydata/mutable/filenode.py 951
3337         d.addCallback(lambda mc: "".join(mc.chunks))
3338         return d
3339 
3340-
3341     def _try_to_download_data(self):
3342         """
3343         I am an unserialized cousin of download_to_data; I am called
3344hunk ./src/allmydata/mutable/filenode.py 963
3345         d.addCallback(lambda mc: "".join(mc.chunks))
3346         return d
3347 
3348-
3349     def read(self, consumer, offset=0, size=None, fetch_privkey=False):
3350         """
3351         I read a portion (possibly all) of the mutable file that I
3352hunk ./src/allmydata/mutable/filenode.py 971
3353         return self._do_serialized(self._read, consumer, offset, size,
3354                                    fetch_privkey)
3355 
3356-
3357     def _read(self, consumer, offset=0, size=None, fetch_privkey=False):
3358         """
3359         I am the serialized companion of read.
3360hunk ./src/allmydata/mutable/filenode.py 981
3361         d = r.download(consumer, offset, size)
3362         return d
3363 
3364-
3365     def _do_serialized(self, cb, *args, **kwargs):
3366         # note: to avoid deadlock, this callable is *not* allowed to invoke
3367         # other serialized methods within this (or any other)
3368hunk ./src/allmydata/mutable/filenode.py 999
3369         self._serializer.addErrback(log.err)
3370         return d
3371 
3372-
3373     def _upload(self, new_contents):
3374         #assert self._pubkey, "update_servermap must be called before publish"
3375         p = Publish(self._node, self._storage_broker, self._servermap)
3376hunk ./src/allmydata/mutable/filenode.py 1009
3377         d.addCallback(self._did_upload, new_contents.get_size())
3378         return d
3379 
3380-
3381     def _did_upload(self, res, size):
3382         self._most_recent_size = size
3383         return res
3384hunk ./src/allmydata/mutable/filenode.py 1029
3385         """
3386         return self._do_serialized(self._update, data, offset)
3387 
3388-
3389     def _update(self, data, offset):
3390         """
3391         I update the mutable file version represented by this particular
3392hunk ./src/allmydata/mutable/filenode.py 1058
3393         d.addCallback(self._build_uploadable_and_finish, data, offset)
3394         return d
3395 
3396-
3397     def _do_modify_update(self, data, offset):
3398         """
3399         I perform a file update by modifying the contents of the file
3400hunk ./src/allmydata/mutable/filenode.py 1073
3401             return new
3402         return self._modify(m, None)
3403 
3404-
3405     def _do_update_update(self, data, offset):
3406         """
3407         I start the Servermap update that gets us the data we need to
3408hunk ./src/allmydata/mutable/filenode.py 1108
3409         return self._update_servermap(update_range=(start_segment,
3410                                                     end_segment))
3411 
3412-
3413     def _decode_and_decrypt_segments(self, ignored, data, offset):
3414         """
3415         After the servermap update, I take the encrypted and encoded
3416hunk ./src/allmydata/mutable/filenode.py 1148
3417         d3 = defer.succeed(blockhashes)
3418         return deferredutil.gatherResults([d1, d2, d3])
3419 
3420-
3421     def _build_uploadable_and_finish(self, segments_and_bht, data, offset):
3422         """
3423         After the process has the plaintext segments, I build the
3424hunk ./src/allmydata/mutable/filenode.py 1163
3425         p = Publish(self._node, self._storage_broker, self._servermap)
3426         return p.update(u, offset, segments_and_bht[2], self._version)
3427 
3428-
3429     def _update_servermap(self, mode=MODE_WRITE, update_range=None):
3430         """
3431         I update the servermap. I return a Deferred that fires when the
3432hunk ./src/allmydata/storage/common.py 1
3433-
3434-import os.path
3435 from allmydata.util import base32
3436 
3437 class DataTooLargeError(Exception):
3438hunk ./src/allmydata/storage/common.py 5
3439     pass
3440+
3441 class UnknownMutableContainerVersionError(Exception):
3442     pass
3443hunk ./src/allmydata/storage/common.py 8
3444+
3445 class UnknownImmutableContainerVersionError(Exception):
3446     pass
3447 
3448hunk ./src/allmydata/storage/common.py 18
3449 
3450 def si_a2b(ascii_storageindex):
3451     return base32.a2b(ascii_storageindex)
3452-
3453-def storage_index_to_dir(storageindex):
3454-    sia = si_b2a(storageindex)
3455-    return os.path.join(sia[:2], sia)
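For illustration, the two remaining helpers are inverses; a tiny sketch,
assuming allmydata is importable (the 16-byte value is arbitrary):

    from allmydata.storage.common import si_a2b, si_b2a

    si = "\x00" * 16                  # storage indices are 16 bytes
    assert si_a2b(si_b2a(si)) == si   # base32 round trip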
3456hunk ./src/allmydata/storage/crawler.py 2
3457 
3458-import os, time, struct
3459+import time, struct
3460 import cPickle as pickle
3461 from twisted.internet import reactor
3462 from twisted.application import service
3463hunk ./src/allmydata/storage/crawler.py 6
3464+
3465+from allmydata.util.assertutil import precondition
3466+from allmydata.interfaces import IStorageBackend
3467 from allmydata.storage.common import si_b2a
3468hunk ./src/allmydata/storage/crawler.py 10
3469-from allmydata.util import fileutil
3470+
3471 
3472 class TimeSliceExceeded(Exception):
3473     pass
3474hunk ./src/allmydata/storage/crawler.py 15
3475 
3476+
3477 class ShareCrawler(service.MultiService):
3478hunk ./src/allmydata/storage/crawler.py 17
3479-    """A ShareCrawler subclass is attached to a StorageServer, and
3480-    periodically walks all of its shares, processing each one in some
3481-    fashion. This crawl is rate-limited, to reduce the IO burden on the host,
3482-    since large servers can easily have a terabyte of shares, in several
3483-    million files, which can take hours or days to read.
3484+    """
3485+    An instance of a subclass of ShareCrawler is attached to a storage
3486+    backend, and periodically walks the backend's shares, processing them
3487+    in some fashion. This crawl is rate-limited to reduce the I/O burden on
3488+    the host, since large servers can easily have a terabyte of shares in
3489+    several million files, which can take hours or days to read.
3490 
3491     Once the crawler starts a cycle, it will proceed at a rate limited by the
3492     allowed_cpu_percentage= and cpu_slice= parameters: yielding the reactor
3493hunk ./src/allmydata/storage/crawler.py 33
3494     long enough to ensure that 'minimum_cycle_time' elapses between the start
3495     of two consecutive cycles.
3496 
3497-    We assume that the normal upload/download/get_buckets traffic of a tahoe
3498+    We assume that the normal upload/download/DYHB traffic of a Tahoe-LAFS
3499     grid will cause the prefixdir contents to be mostly cached in the kernel,
3500hunk ./src/allmydata/storage/crawler.py 35
3501-    or that the number of buckets in each prefixdir will be small enough to
3502-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
3503-    buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
3504+    or that the number of sharesets in each prefixdir will be small enough to
3505+    load quickly. A 1TB allmydata.com server was measured to have 2.56 million
3506+    sharesets, spread into the 1024 prefixdirs, with about 2500 sharesets per
3507     prefix. On this server, each prefixdir took 130ms-200ms to list the first
3508     time, and 17ms to list the second time.
3509 
3510hunk ./src/allmydata/storage/crawler.py 41
3511-    To use a crawler, create a subclass which implements the process_bucket()
3512-    method. It will be called with a prefixdir and a base32 storage index
3513-    string. process_bucket() must run synchronously. Any keys added to
3514-    self.state will be preserved. Override add_initial_state() to set up
3515-    initial state keys. Override finished_cycle() to perform additional
3516-    processing when the cycle is complete. Any status that the crawler
3517-    produces should be put in the self.state dictionary. Status renderers
3518-    (like a web page which describes the accomplishments of your crawler)
3519-    will use crawler.get_state() to retrieve this dictionary; they can
3520-    present the contents as they see fit.
3521+    To implement a crawler, create a subclass that implements the
3522+    process_shareset() method. It will be called with a prefixdir and an
3523+    object providing the IShareSet interface. process_shareset() must run
3524+    synchronously. Any keys added to self.state will be preserved. Override
3525+    add_initial_state() to set up initial state keys. Override
3526+    finished_cycle() to perform additional processing when the cycle is
3527+    complete. Any status that the crawler produces should be put in the
3528+    self.state dictionary. Status renderers (like a web page describing the
3529+    accomplishments of your crawler) will use crawler.get_state() to retrieve
3530+    this dictionary; they can present the contents as they see fit.
3531 
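A minimal subclass along the lines this docstring describes (a sketch only;
the statistic kept here is illustrative):

    class ShareCountingCrawler(ShareCrawler):
        def add_initial_state(self):
            self.state.setdefault("share-count", 0)

        def process_shareset(self, cycle, prefix, shareset):
            # count every share in each shareset we visit
            for share in shareset.get_shares():
                self.state["share-count"] += 1

        def finished_cycle(self, cycle):
            pass   # summary work would go here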
3532hunk ./src/allmydata/storage/crawler.py 52
3533-    Then create an instance, with a reference to a StorageServer and a
3534-    filename where it can store persistent state. The statefile is used to
3535-    keep track of how far around the ring the process has travelled, as well
3536-    as timing history to allow the pace to be predicted and controlled. The
3537-    statefile will be updated and written to disk after each time slice (just
3538-    before the crawler yields to the reactor), and also after each cycle is
3539-    finished, and also when stopService() is called. Note that this means
3540-    that a crawler which is interrupted with SIGKILL while it is in the
3541-    middle of a time slice will lose progress: the next time the node is
3542-    started, the crawler will repeat some unknown amount of work.
3543+    Then create an instance, with a reference to a backend object providing
3544+    the IStorageBackend interface, and a filename where it can store
3545+    persistent state. The statefile is used to keep track of how far around
3546+    the ring the process has travelled, as well as timing history to allow
3547+    the pace to be predicted and controlled. The statefile will be updated
3548+    and written to disk after each time slice (just before the crawler yields
3549+    to the reactor), and also after each cycle is finished, and also when
3550+    stopService() is called. Note that this means that a crawler that is
3551+    interrupted with SIGKILL while it is in the middle of a time slice will
3552+    lose progress: the next time the node is started, the crawler will repeat
3553+    some unknown amount of work.
3554 
3555     The crawler instance must be started with startService() before it will
3556hunk ./src/allmydata/storage/crawler.py 65
3557-    do any work. To make it stop doing work, call stopService().
3558+    do any work. To make it stop doing work, call stopService(). A crawler
3559+    is usually a child service of a StorageServer, although it should not
3560+    depend on that.
3561+
3562+    For historical reasons, some dictionary key names use the term "bucket"
3563+    for what is now preferably called a "shareset" (the set of shares that a
3564+    server holds under a given storage index).
3565     """
3566 
3567     slow_start = 300 # don't start crawling for 5 minutes after startup
3568hunk ./src/allmydata/storage/crawler.py 80
3569     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
3570     minimum_cycle_time = 300 # don't run a cycle faster than this
3571 
3572-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
3573+    def __init__(self, backend, statefp, allowed_cpu_percentage=None):
3574+        precondition(IStorageBackend.providedBy(backend), backend)
3575         service.MultiService.__init__(self)
3576hunk ./src/allmydata/storage/crawler.py 83
3577+        self.backend = backend
3578+        self.statefp = statefp
3579         if allowed_cpu_percentage is not None:
3580             self.allowed_cpu_percentage = allowed_cpu_percentage
3581hunk ./src/allmydata/storage/crawler.py 87
3582-        self.server = server
3583-        self.sharedir = server.sharedir
3584-        self.statefile = statefile
3585         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
3586                          for i in range(2**10)]
3587         self.prefixes.sort()
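For illustration, that comprehension enumerates the 1024 possible
two-character base32 prefixes (the top 10 bits of the storage index space);
a standalone equivalent, assuming allmydata is importable:

    import struct
    from allmydata.util import base32

    prefixes = sorted(base32.b2a(struct.pack(">H", i << 6))[:2]
                      for i in range(2**10))
    assert len(prefixes) == 1024   # two base32 characters cover 10 bits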
3588hunk ./src/allmydata/storage/crawler.py 91
3589         self.timer = None
3590-        self.bucket_cache = (None, [])
3591+        self.shareset_cache = (None, [])
3592         self.current_sleep_time = None
3593         self.next_wake_time = None
3594         self.last_prefix_finished_time = None
3595hunk ./src/allmydata/storage/crawler.py 154
3596                 left = len(self.prefixes) - self.last_complete_prefix_index
3597                 remaining = left * self.last_prefix_elapsed_time
3598                 # TODO: remainder of this prefix: we need to estimate the
3599-                # per-bucket time, probably by measuring the time spent on
3600-                # this prefix so far, divided by the number of buckets we've
3601+                # per-shareset time, probably by measuring the time spent on
3602+                # this prefix so far, divided by the number of sharesets we've
3603                 # processed.
3604             d["estimated-cycle-complete-time-left"] = remaining
3605             # it's possible to call get_progress() from inside a crawler's
3606hunk ./src/allmydata/storage/crawler.py 175
3607         state dictionary.
3608 
3609         If we are not currently sleeping (i.e. get_state() was called from
3610-        inside the process_prefixdir, process_bucket, or finished_cycle()
3611+        inside the process_prefixdir, process_shareset, or finished_cycle()
3612         methods, or if startService has not yet been called on this crawler),
3613         these two keys will be None.
3614 
3615hunk ./src/allmydata/storage/crawler.py 188
3616     def load_state(self):
3617         # we use this to store state for both the crawler's internals and
3618         # anything the subclass-specific code needs. The state is stored
3619-        # after each bucket is processed, after each prefixdir is processed,
3620+        # after each shareset is processed, after each prefixdir is processed,
3621         # and after a cycle is complete. The internal keys we use are:
3622         #  ["version"]: int, always 1
3623         #  ["last-cycle-finished"]: int, or None if we have not yet finished
3624hunk ./src/allmydata/storage/crawler.py 202
3625         #                            are sleeping between cycles, or if we
3626         #                            have not yet finished any prefixdir since
3627         #                            a cycle was started
3628-        #  ["last-complete-bucket"]: str, base32 storage index bucket name
3629-        #                            of the last bucket to be processed, or
3630-        #                            None if we are sleeping between cycles
3631+        #  ["last-complete-bucket"]: str, base32 storage index of the last
3632+        #                            shareset to be processed, or None if we
3633+        #                            are sleeping between cycles
3634         try:
3635hunk ./src/allmydata/storage/crawler.py 206
3636-            f = open(self.statefile, "rb")
3637-            state = pickle.load(f)
3638-            f.close()
3639+            state = pickle.loads(self.statefp.getContent())
3640         except EnvironmentError:
3641             state = {"version": 1,
3642                      "last-cycle-finished": None,
3643hunk ./src/allmydata/storage/crawler.py 242
3644         else:
3645             last_complete_prefix = self.prefixes[lcpi]
3646         self.state["last-complete-prefix"] = last_complete_prefix
3647-        tmpfile = self.statefile + ".tmp"
3648-        f = open(tmpfile, "wb")
3649-        pickle.dump(self.state, f)
3650-        f.close()
3651-        fileutil.move_into_place(tmpfile, self.statefile)
3652+        self.statefp.setContent(pickle.dumps(self.state))
3653 
3654     def startService(self):
3655         # arrange things to look like we were just sleeping, so
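State persistence now goes through a Twisted FilePath instead of the manual
tempfile-and-rename above; a standalone sketch of the idiom (the state file
location is illustrative):

    import cPickle as pickle
    from twisted.python.filepath import FilePath

    statefp = FilePath("/tmp/crawler.state")
    statefp.setContent(pickle.dumps({"version": 1}))   # write via temp file
    state = pickle.loads(statefp.getContent())         # read it back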
3656hunk ./src/allmydata/storage/crawler.py 284
3657         sleep_time = (this_slice / self.allowed_cpu_percentage) - this_slice
3658         # if the math gets weird, or a timequake happens, don't sleep
3659         # forever. Note that this means that, while a cycle is running, we
3660-        # will process at least one bucket every 5 minutes, no matter how
3661-        # long that bucket takes.
3662+        # will process at least one shareset every 5 minutes, no matter how
3663+        # long that shareset takes.
3664         sleep_time = max(0.0, min(sleep_time, 299))
3665         if finished_cycle:
3666             # how long should we sleep between cycles? Don't run faster than
3667hunk ./src/allmydata/storage/crawler.py 315
3668         for i in range(self.last_complete_prefix_index+1, len(self.prefixes)):
3669             # if we want to yield earlier, just raise TimeSliceExceeded()
3670             prefix = self.prefixes[i]
3671-            prefixdir = os.path.join(self.sharedir, prefix)
3672-            if i == self.bucket_cache[0]:
3673-                buckets = self.bucket_cache[1]
3674+            if i == self.shareset_cache[0]:
3675+                sharesets = self.shareset_cache[1]
3676             else:
3677hunk ./src/allmydata/storage/crawler.py 318
3678-                try:
3679-                    buckets = os.listdir(prefixdir)
3680-                    buckets.sort()
3681-                except EnvironmentError:
3682-                    buckets = []
3683-                self.bucket_cache = (i, buckets)
3684-            self.process_prefixdir(cycle, prefix, prefixdir,
3685-                                   buckets, start_slice)
3686+                sharesets = self.backend.get_sharesets_for_prefix(prefix)
3687+                self.shareset_cache = (i, sharesets)
3688+            self.process_prefixdir(cycle, prefix, sharesets, start_slice)
3689             self.last_complete_prefix_index = i
3690 
3691             now = time.time()
3692hunk ./src/allmydata/storage/crawler.py 345
3693         self.finished_cycle(cycle)
3694         self.save_state()
3695 
3696-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
3697-        """This gets a list of bucket names (i.e. storage index strings,
3698+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
3699+        """
2700+        This gets a list of sharesets (named by storage index strings,
3701         base32-encoded) in sorted order.
3702 
3703         You can override this if your crawler doesn't care about the actual
3704hunk ./src/allmydata/storage/crawler.py 352
3705         shares, for example a crawler which merely keeps track of how many
3706-        buckets are being managed by this server.
3707+        sharesets are being managed by this server.
3708 
3709hunk ./src/allmydata/storage/crawler.py 354
3710-        Subclasses which *do* care about actual bucket should leave this
3711-        method along, and implement process_bucket() instead.
2712+        Subclasses that *do* care about the actual sharesets should leave
2713+        this method alone, and implement process_shareset() instead.
3714         """
3715 
3716hunk ./src/allmydata/storage/crawler.py 358
3717-        for bucket in buckets:
3718-            if bucket <= self.state["last-complete-bucket"]:
3719+        for shareset in sharesets:
3720+            base32si = shareset.get_storage_index_string()
3721+            if base32si <= self.state["last-complete-bucket"]:
3722                 continue
3723hunk ./src/allmydata/storage/crawler.py 362
3724-            self.process_bucket(cycle, prefix, prefixdir, bucket)
3725-            self.state["last-complete-bucket"] = bucket
3726+            self.process_shareset(cycle, prefix, shareset)
3727+            self.state["last-complete-bucket"] = base32si
3728             if time.time() >= start_slice + self.cpu_slice:
3729                 raise TimeSliceExceeded()
3730 
3731hunk ./src/allmydata/storage/crawler.py 370
2732     # the remaining methods are explicitly for subclasses to implement.
3733 
3734     def started_cycle(self, cycle):
3735-        """Notify a subclass that the crawler is about to start a cycle.
3736+        """
3737+        Notify a subclass that the crawler is about to start a cycle.
3738 
3739         This method is for subclasses to override. No upcall is necessary.
3740         """
3741hunk ./src/allmydata/storage/crawler.py 377
3742         pass
3743 
3744-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
3745-        """Examine a single bucket. Subclasses should do whatever they want
3746+    def process_shareset(self, cycle, prefix, shareset):
3747+        """
3748+        Examine a single shareset. Subclasses should do whatever they want
3749         to do to the shares therein, then update self.state as necessary.
3750 
3751         If the crawler is never interrupted by SIGKILL, this method will be
3752hunk ./src/allmydata/storage/crawler.py 383
3753-        called exactly once per share (per cycle). If it *is* interrupted,
3754+        called exactly once per shareset (per cycle). If it *is* interrupted,
3755         then the next time the node is started, some amount of work will be
3756         duplicated, according to when self.save_state() was last called. By
3757         default, save_state() is called at the end of each timeslice, and
3758hunk ./src/allmydata/storage/crawler.py 391
3759 
3760         To reduce the chance of duplicate work (i.e. to avoid adding multiple
3761         records to a database), you can call save_state() at the end of your
3762-        process_bucket() method. This will reduce the maximum duplicated work
3763-        to one bucket per SIGKILL. It will also add overhead, probably 1-20ms
3764-        per bucket (and some disk writes), which will count against your
3765-        allowed_cpu_percentage, and which may be considerable if
3766-        process_bucket() runs quickly.
3767+        process_shareset() method. This will reduce the maximum duplicated
3768+        work to one shareset per SIGKILL. It will also add overhead, probably
3769+        1-20ms per shareset (and some disk writes), which will count against
3770+        your allowed_cpu_percentage, and which may be considerable if
3771+        process_shareset() runs quickly.
3772 
3773         This method is for subclasses to override. No upcall is necessary.
3774         """
3775hunk ./src/allmydata/storage/crawler.py 402
3776         pass
3777 
3778     def finished_prefix(self, cycle, prefix):
3779-        """Notify a subclass that the crawler has just finished processing a
3780-        prefix directory (all buckets with the same two-character/10bit
3781+        """
3782+        Notify a subclass that the crawler has just finished processing a
3783+        prefix directory (all sharesets with the same two-character/10-bit
3784         prefix). To impose a limit on how much work might be duplicated by a
3785         SIGKILL that occurs during a timeslice, you can call
3786         self.save_state() here, but be aware that it may represent a
3787hunk ./src/allmydata/storage/crawler.py 415
3788         pass
3789 
3790     def finished_cycle(self, cycle):
3791-        """Notify subclass that a cycle (one complete traversal of all
3792+        """
3793+        Notify subclass that a cycle (one complete traversal of all
3794         prefixdirs) has just finished. 'cycle' is the number of the cycle
3795         that just finished. This method should perform summary work and
3796         update self.state to publish information to status displays.
3797hunk ./src/allmydata/storage/crawler.py 433
3798         pass
3799 
3800     def yielding(self, sleep_time):
3801-        """The crawler is about to sleep for 'sleep_time' seconds. This
3802+        """
3803+        The crawler is about to sleep for 'sleep_time' seconds. This
3804         method is mostly for the convenience of unit tests.
3805 
3806         This method is for subclasses to override. No upcall is necessary.
3807hunk ./src/allmydata/storage/crawler.py 443
3808 
3809 
3810 class BucketCountingCrawler(ShareCrawler):
3811-    """I keep track of how many buckets are being managed by this server.
3812-    This is equivalent to the number of distributed files and directories for
3813-    which I am providing storage. The actual number of files+directories in
3814-    the full grid is probably higher (especially when there are more servers
3815-    than 'N', the number of generated shares), because some files+directories
3816-    will have shares on other servers instead of me. Also note that the
3817-    number of buckets will differ from the number of shares in small grids,
3818-    when more than one share is placed on a single server.
3819+    """
3820+    I keep track of how many sharesets, each corresponding to a storage index,
3821+    are being managed by this server. This is equivalent to the number of
3822+    distributed files and directories for which I am providing storage. The
3823+    actual number of files and directories in the full grid is probably higher
3824+    (especially when there are more servers than 'N', the number of generated
3825+    shares), because some files and directories will have shares on other
3826+    servers instead of me. Also note that the number of sharesets will differ
3827+    from the number of shares in small grids, when more than one share is
3828+    placed on a single server.
3829     """
3830 
3831     minimum_cycle_time = 60*60 # we don't need this more than once an hour
3832hunk ./src/allmydata/storage/crawler.py 457
3833 
3834-    def __init__(self, server, statefile, num_sample_prefixes=1):
3835-        ShareCrawler.__init__(self, server, statefile)
3836+    def __init__(self, backend, statefp, num_sample_prefixes=1):
3837+        ShareCrawler.__init__(self, backend, statefp)
3838         self.num_sample_prefixes = num_sample_prefixes
3839 
3840     def add_initial_state(self):
3841hunk ./src/allmydata/storage/crawler.py 471
3842         self.state.setdefault("last-complete-bucket-count", None)
3843         self.state.setdefault("storage-index-samples", {})
3844 
3845-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
3846+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
3847         # we override process_prefixdir() because we don't want to look at
3848hunk ./src/allmydata/storage/crawler.py 473
3849-        # the individual buckets. We'll save state after each one. On my
3850+        # the individual sharesets. We'll save state after each one. On my
3851         # laptop, a mostly-empty storage server can process about 70
3852         # prefixdirs in a 1.0s slice.
3853         if cycle not in self.state["bucket-counts"]:
3854hunk ./src/allmydata/storage/crawler.py 478
3855             self.state["bucket-counts"][cycle] = {}
3856-        self.state["bucket-counts"][cycle][prefix] = len(buckets)
3857+        self.state["bucket-counts"][cycle][prefix] = len(sharesets)
3858         if prefix in self.prefixes[:self.num_sample_prefixes]:
3859hunk ./src/allmydata/storage/crawler.py 480
3860-            self.state["storage-index-samples"][prefix] = (cycle, buckets)
3861+            self.state["storage-index-samples"][prefix] = (cycle, sharesets)
3862 
3863     def finished_cycle(self, cycle):
3864         last_counts = self.state["bucket-counts"].get(cycle, [])
3865hunk ./src/allmydata/storage/crawler.py 486
3866         if len(last_counts) == len(self.prefixes):
3867             # great, we have a whole cycle.
3868-            num_buckets = sum(last_counts.values())
3869-            self.state["last-complete-bucket-count"] = num_buckets
3870+            num_sharesets = sum(last_counts.values())
3871+            self.state["last-complete-bucket-count"] = num_sharesets
3872             # get rid of old counts
3873             for old_cycle in list(self.state["bucket-counts"].keys()):
3874                 if old_cycle != cycle:
3875hunk ./src/allmydata/storage/crawler.py 494
3876                     del self.state["bucket-counts"][old_cycle]
3877         # get rid of old samples too
3878         for prefix in list(self.state["storage-index-samples"].keys()):
3879-            old_cycle,buckets = self.state["storage-index-samples"][prefix]
3880+            old_cycle, storage_indices = self.state["storage-index-samples"][prefix]
3881             if old_cycle != cycle:
3882                 del self.state["storage-index-samples"][prefix]
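
For orientation (all values invented), the BucketCountingCrawler state that finished_cycle prunes looks roughly like:

    state = {
        # cycle number -> {prefix -> number of sharesets under that prefix}
        "bucket-counts": {7: {"aa": 13, "ab": 9}},
        # sum over all prefixes, set once a full cycle has completed
        "last-complete-bucket-count": 1234,
        # prefix -> (cycle, storage indices); kept only for the first
        # num_sample_prefixes prefixes
        "storage-index-samples": {"aa": (7, [])},
    }
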
3883hunk ./src/allmydata/storage/crawler.py 497
3884-
3885hunk ./src/allmydata/storage/expirer.py 1
3886-import time, os, pickle, struct
3887+
3888+import time, pickle, struct
3889+from twisted.python import log as twlog
3890+
3891 from allmydata.storage.crawler import ShareCrawler
3892hunk ./src/allmydata/storage/expirer.py 6
3893-from allmydata.storage.shares import get_share_file
3894-from allmydata.storage.common import UnknownMutableContainerVersionError, \
3895+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
3896      UnknownImmutableContainerVersionError
3897hunk ./src/allmydata/storage/expirer.py 8
3898-from twisted.python import log as twlog
3899+
3900 
3901 class LeaseCheckingCrawler(ShareCrawler):
3902     """I examine the leases on all shares, determining which are still valid
3903hunk ./src/allmydata/storage/expirer.py 17
3904     removed.
3905 
3906     I collect statistics on the leases and make these available to a web
3907-    status page, including::
3908+    status page, including:
3909 
3910     Space recovered during this cycle-so-far:
3911      actual (only if expiration_enabled=True):
3912hunk ./src/allmydata/storage/expirer.py 21
3913-      num-buckets, num-shares, sum of share sizes, real disk usage
3914+      num-storage-indices, num-shares, sum of share sizes, real disk usage
3915       ('real disk usage' means we use stat(fn).st_blocks*512 and include any
3916        space used by the directory)
3917      what it would have been with the original lease expiration time
3918hunk ./src/allmydata/storage/expirer.py 32
3919 
3920     Space recovered during the last 10 cycles  <-- saved in separate pickle
3921 
3922-    Shares/buckets examined:
3923+    Shares/storage-indices examined:
3924      this cycle-so-far
3925      prediction of rest of cycle
3926      during last 10 cycles <-- separate pickle
3927hunk ./src/allmydata/storage/expirer.py 42
3928     Histogram of leases-per-share:
3929      this-cycle-to-date
3930      last 10 cycles <-- separate pickle
3931-    Histogram of lease ages, buckets = 1day
3932+    Histogram of lease ages, with 1-day bins
3933      cycle-to-date
3934      last 10 cycles <-- separate pickle
3935 
3936hunk ./src/allmydata/storage/expirer.py 53
3937     slow_start = 360 # wait 6 minutes after startup
3938     minimum_cycle_time = 12*60*60 # not more than twice per day
3939 
3940-    def __init__(self, server, statefile, historyfile,
3941-                 expiration_enabled, mode,
3942-                 override_lease_duration, # used if expiration_mode=="age"
3943-                 cutoff_date, # used if expiration_mode=="cutoff-date"
3944-                 sharetypes):
3945-        self.historyfile = historyfile
3946-        self.expiration_enabled = expiration_enabled
3947-        self.mode = mode
3948+    def __init__(self, backend, statefp, historyfp, expiration_policy):
3949+        # ShareCrawler.__init__ will call add_initial_state, so self.historyfp has to be set first.
3950+        self.historyfp = historyfp
3951+        ShareCrawler.__init__(self, backend, statefp)
3952+
3953+        self.expiration_enabled = expiration_policy['enabled']
3954+        self.mode = expiration_policy['mode']
3955         self.override_lease_duration = None
3956         self.cutoff_date = None
3957         if self.mode == "age":
3958hunk ./src/allmydata/storage/expirer.py 63
3959-            assert isinstance(override_lease_duration, (int, type(None)))
3960-            self.override_lease_duration = override_lease_duration # seconds
3961+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
3962+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
3963         elif self.mode == "cutoff-date":
3964hunk ./src/allmydata/storage/expirer.py 66
3965-            assert isinstance(cutoff_date, int) # seconds-since-epoch
3966-            assert cutoff_date is not None
3967-            self.cutoff_date = cutoff_date
3968+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
3969+            self.cutoff_date = expiration_policy['cutoff_date']
3970         else:
3971hunk ./src/allmydata/storage/expirer.py 69
3972-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
3973-        self.sharetypes_to_expire = sharetypes
3974-        ShareCrawler.__init__(self, server, statefile)
3975+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
3976+        self.sharetypes_to_expire = expiration_policy['sharetypes']
3977 
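
The expiration_policy dict replaces the five separate constructor arguments that LeaseCheckingCrawler used to take. Two example policies, one per mode, with illustrative values (not part of the patch):

    import time

    # 'age' mode: a lease expires once its age exceeds the limit
    # (override_lease_duration if set, else the lease's own duration).
    age_policy = {
        'enabled': True,
        'mode': 'age',
        'override_lease_duration': 60*60*24*60,  # 60 days, in seconds
        'cutoff_date': None,
        'sharetypes': ('mutable', 'immutable'),
    }

    # 'cutoff-date' mode: a lease expires if its grant/renew time falls
    # before the cutoff (seconds since epoch).
    cutoff_policy = {
        'enabled': True,
        'mode': 'cutoff-date',
        'override_lease_duration': None,
        'cutoff_date': int(time.time()) - 60*60*24*30,
        'sharetypes': ('immutable',),
    }
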
3978     def add_initial_state(self):
3979         # we fill ["cycle-to-date"] here (even though they will be reset in
3980hunk ./src/allmydata/storage/expirer.py 84
3981             self.state["cycle-to-date"].setdefault(k, so_far[k])
3982 
3983         # initialize history
3984-        if not os.path.exists(self.historyfile):
3985+        if not self.historyfp.exists():
3986             history = {} # cyclenum -> dict
3987hunk ./src/allmydata/storage/expirer.py 86
3988-            f = open(self.historyfile, "wb")
3989-            pickle.dump(history, f)
3990-            f.close()
3991+            self.historyfp.setContent(pickle.dumps(history))
3992 
3993     def create_empty_cycle_dict(self):
3994         recovered = self.create_empty_recovered_dict()
3995hunk ./src/allmydata/storage/expirer.py 99
3996 
3997     def create_empty_recovered_dict(self):
3998         recovered = {}
3999+        # "buckets" is ambiguous; here it means the number of sharesets (one per storage index per server)
4000         for a in ("actual", "original", "configured", "examined"):
4001             for b in ("buckets", "shares", "sharebytes", "diskbytes"):
4002                 recovered[a+"-"+b] = 0
4003hunk ./src/allmydata/storage/expirer.py 110
4004     def started_cycle(self, cycle):
4005         self.state["cycle-to-date"] = self.create_empty_cycle_dict()
4006 
4007-    def stat(self, fn):
4008-        return os.stat(fn)
4009-
4010-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
4011-        bucketdir = os.path.join(prefixdir, storage_index_b32)
4012-        s = self.stat(bucketdir)
4013+    def process_shareset(self, cycle, prefix, container):
4014         would_keep_shares = []
4015         wks = None
4016hunk ./src/allmydata/storage/expirer.py 113
4017+        sharetype = None
4018 
4019hunk ./src/allmydata/storage/expirer.py 115
4020-        for fn in os.listdir(bucketdir):
4021-            try:
4022-                shnum = int(fn)
4023-            except ValueError:
4024-                continue # non-numeric means not a sharefile
4025-            sharefile = os.path.join(bucketdir, fn)
4026+        for share in container.get_shares():
4027+            sharetype = share.sharetype
4028             try:
4029hunk ./src/allmydata/storage/expirer.py 118
4030-                wks = self.process_share(sharefile)
4031+                wks = self.process_share(share)
4032             except (UnknownMutableContainerVersionError,
4033                     UnknownImmutableContainerVersionError,
4034                     struct.error):
4035hunk ./src/allmydata/storage/expirer.py 122
4036-                twlog.msg("lease-checker error processing %s" % sharefile)
4037+                twlog.msg("lease-checker error processing %r" % (share,))
4038                 twlog.err()
4039hunk ./src/allmydata/storage/expirer.py 124
4040-                which = (storage_index_b32, shnum)
4041+                which = (si_b2a(share.storageindex), share.get_shnum())
4042                 self.state["cycle-to-date"]["corrupt-shares"].append(which)
4043                 wks = (1, 1, 1, "unknown")
4044             would_keep_shares.append(wks)
4045hunk ./src/allmydata/storage/expirer.py 129
4046 
4047-        sharetype = None
4048+        container_type = None
4049         if wks:
4050hunk ./src/allmydata/storage/expirer.py 131
4051-            # use the last share's sharetype as the buckettype
4052-            sharetype = wks[3]
4053+            # use the last share's sharetype as the container type
4054+            container_type = wks[3]
4055         rec = self.state["cycle-to-date"]["space-recovered"]
4056         self.increment(rec, "examined-buckets", 1)
4057         if container_type:
4058hunk ./src/allmydata/storage/expirer.py 136
4059-            self.increment(rec, "examined-buckets-"+sharetype, 1)
4060+            self.increment(rec, "examined-buckets-"+container_type, 1)
4061+
4062+        container_diskbytes = container.get_overhead()
4063 
4064hunk ./src/allmydata/storage/expirer.py 140
4065-        try:
4066-            bucket_diskbytes = s.st_blocks * 512
4067-        except AttributeError:
4068-            bucket_diskbytes = 0 # no stat().st_blocks on windows
4069         if sum([wks[0] for wks in would_keep_shares]) == 0:
4070hunk ./src/allmydata/storage/expirer.py 141
4071-            self.increment_bucketspace("original", bucket_diskbytes, sharetype)
4072+            self.increment_container_space("original", container_diskbytes, sharetype)
4073         if sum([wks[1] for wks in would_keep_shares]) == 0:
4074hunk ./src/allmydata/storage/expirer.py 143
4075-            self.increment_bucketspace("configured", bucket_diskbytes, sharetype)
4076+            self.increment_container_space("configured", container_diskbytes, container_type)
4077         if sum([wks[2] for wks in would_keep_shares]) == 0:
4078hunk ./src/allmydata/storage/expirer.py 145
4079-            self.increment_bucketspace("actual", bucket_diskbytes, sharetype)
4080+            self.increment_container_space("actual", container_diskbytes, container_type)
4081 
4082hunk ./src/allmydata/storage/expirer.py 147
4083-    def process_share(self, sharefilename):
4084-        # first, find out what kind of a share it is
4085-        sf = get_share_file(sharefilename)
4086-        sharetype = sf.sharetype
4087+    def process_share(self, share):
4088+        sharetype = share.sharetype
4089         now = time.time()
4090hunk ./src/allmydata/storage/expirer.py 150
4091-        s = self.stat(sharefilename)
4092+        sharebytes = share.get_size()
4093+        diskbytes = share.get_used_space()
4094 
4095         num_leases = 0
4096         num_valid_leases_original = 0
4097hunk ./src/allmydata/storage/expirer.py 158
4098         num_valid_leases_configured = 0
4099         expired_leases_configured = []
4100 
4101-        for li in sf.get_leases():
4102+        for li in share.get_leases():
4103             num_leases += 1
4104             original_expiration_time = li.get_expiration_time()
4105             grant_renew_time = li.get_grant_renew_time_time()
4106hunk ./src/allmydata/storage/expirer.py 171
4107 
4108             #  expired-or-not according to our configured age limit
4109             expired = False
4110-            if self.mode == "age":
4111-                age_limit = original_expiration_time
4112-                if self.override_lease_duration is not None:
4113-                    age_limit = self.override_lease_duration
4114-                if age > age_limit:
4115-                    expired = True
4116-            else:
4117-                assert self.mode == "cutoff-date"
4118-                if grant_renew_time < self.cutoff_date:
4119-                    expired = True
4120-            if sharetype not in self.sharetypes_to_expire:
4121-                expired = False
4122+            if sharetype in self.sharetypes_to_expire:
4123+                if self.mode == "age":
4124+                    age_limit = original_expiration_time
4125+                    if self.override_lease_duration is not None:
4126+                        age_limit = self.override_lease_duration
4127+                    if age > age_limit:
4128+                        expired = True
4129+                else:
4130+                    assert self.mode == "cutoff-date"
4131+                    if grant_renew_time < self.cutoff_date:
4132+                        expired = True
4133 
4134             if expired:
4135                 expired_leases_configured.append(li)
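
The decision above, restated as a pure function for reference (not part of the patch). With the restructuring, leases on sharetypes that the policy does not select never reach either test:

    def lease_is_expired(mode, age, original_expiration_time,
                         grant_renew_time, override_lease_duration,
                         cutoff_date):
        # Mirrors the per-lease logic above for an expirable sharetype.
        if mode == "age":
            age_limit = original_expiration_time
            if override_lease_duration is not None:
                age_limit = override_lease_duration
            return age > age_limit
        else:
            assert mode == "cutoff-date"
            return grant_renew_time < cutoff_date
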
4136hunk ./src/allmydata/storage/expirer.py 190
4137 
4138         so_far = self.state["cycle-to-date"]
4139         self.increment(so_far["leases-per-share-histogram"], num_leases, 1)
4140-        self.increment_space("examined", s, sharetype)
4141+        self.increment_space("examined", sharebytes, diskbytes, sharetype)
4142 
4143         would_keep_share = [1, 1, 1, sharetype]
4144 
4145hunk ./src/allmydata/storage/expirer.py 196
4146         if self.expiration_enabled:
4147             for li in expired_leases_configured:
4148-                sf.cancel_lease(li.cancel_secret)
4149+                share.cancel_lease(li.cancel_secret)
4150 
4151         if num_valid_leases_original == 0:
4152             would_keep_share[0] = 0
4153hunk ./src/allmydata/storage/expirer.py 200
4154-            self.increment_space("original", s, sharetype)
4155+            self.increment_space("original", sharebytes, diskbytes, sharetype)
4156 
4157         if num_valid_leases_configured == 0:
4158             would_keep_share[1] = 0
4159hunk ./src/allmydata/storage/expirer.py 204
4160-            self.increment_space("configured", s, sharetype)
4161+            self.increment_space("configured", sharebytes, diskbytes, sharetype)
4162             if self.expiration_enabled:
4163                 would_keep_share[2] = 0
4164hunk ./src/allmydata/storage/expirer.py 207
4165-                self.increment_space("actual", s, sharetype)
4166+                self.increment_space("actual", sharebytes, diskbytes, sharetype)
4167 
4168         return would_keep_share
4169 
4170hunk ./src/allmydata/storage/expirer.py 211
4171-    def increment_space(self, a, s, sharetype):
4172-        sharebytes = s.st_size
4173-        try:
4174-            # note that stat(2) says that st_blocks is 512 bytes, and that
4175-            # st_blksize is "optimal file sys I/O ops blocksize", which is
4176-            # independent of the block-size that st_blocks uses.
4177-            diskbytes = s.st_blocks * 512
4178-        except AttributeError:
4179-            # the docs say that st_blocks is only on linux. I also see it on
4180-            # MacOS. But it isn't available on windows.
4181-            diskbytes = sharebytes
4182+    def increment_space(self, a, sharebytes, diskbytes, sharetype):
4183         so_far_sr = self.state["cycle-to-date"]["space-recovered"]
4184         self.increment(so_far_sr, a+"-shares", 1)
4185         self.increment(so_far_sr, a+"-sharebytes", sharebytes)
4186hunk ./src/allmydata/storage/expirer.py 221
4187             self.increment(so_far_sr, a+"-sharebytes-"+sharetype, sharebytes)
4188             self.increment(so_far_sr, a+"-diskbytes-"+sharetype, diskbytes)
4189 
4190-    def increment_bucketspace(self, a, bucket_diskbytes, sharetype):
4191+    def increment_container_space(self, a, container_diskbytes, container_type):
4192         rec = self.state["cycle-to-date"]["space-recovered"]
4193hunk ./src/allmydata/storage/expirer.py 223
4194-        self.increment(rec, a+"-diskbytes", bucket_diskbytes)
4195+        self.increment(rec, a+"-diskbytes", container_diskbytes)
4196         self.increment(rec, a+"-buckets", 1)
4197hunk ./src/allmydata/storage/expirer.py 225
4198-        if sharetype:
4199-            self.increment(rec, a+"-diskbytes-"+sharetype, bucket_diskbytes)
4200-            self.increment(rec, a+"-buckets-"+sharetype, 1)
4201+        if container_type:
4202+            self.increment(rec, a+"-diskbytes-"+container_type, container_diskbytes)
4203+            self.increment(rec, a+"-buckets-"+container_type, 1)
4204 
4205     def increment(self, d, k, delta=1):
4206         if k not in d:
4207hunk ./src/allmydata/storage/expirer.py 281
4208         # copy() needs to become a deepcopy
4209         h["space-recovered"] = s["space-recovered"].copy()
4210 
4211-        history = pickle.load(open(self.historyfile, "rb"))
4212+        history = pickle.loads(self.historyfp.getContent())
4213         history[cycle] = h
4214         while len(history) > 10:
4215             oldcycles = sorted(history.keys())
4216hunk ./src/allmydata/storage/expirer.py 286
4217             del history[oldcycles[0]]
4218-        f = open(self.historyfile, "wb")
4219-        pickle.dump(history, f)
4220-        f.close()
4221+        self.historyfp.setContent(pickle.dumps(history))
4222 
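
With the move to FilePath, the history is read and written as whole-file operations. Since getContent() returns the file's bytes rather than a file object, the matching loader is pickle.loads. A minimal round-trip, not part of the patch:

    import pickle
    from twisted.python.filepath import FilePath

    historyfp = FilePath("lease_checker.history")
    historyfp.setContent(pickle.dumps({}))          # initialize empty history
    history = pickle.loads(historyfp.getContent())  # bytes in, dict out
    history[1] = {"space-recovered": {}}
    historyfp.setContent(pickle.dumps(history))     # rewrite the whole file
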
4223     def get_state(self):
4224         """In addition to the crawler state described in
4225hunk ./src/allmydata/storage/expirer.py 355
4226         progress = self.get_progress()
4227 
4228         state = ShareCrawler.get_state(self) # does a shallow copy
4229-        history = pickle.load(open(self.historyfile, "rb"))
4230+        history = pickle.loads(self.historyfp.getContent())
4231         state["history"] = history
4232 
4233         if not progress["cycle-in-progress"]:
4234hunk ./src/allmydata/storage/lease.py 3
4235 import struct, time
4236 
4237+
4238+class NonExistentLeaseError(Exception):
4239+    pass
4240+
4241 class LeaseInfo:
4242     def __init__(self, owner_num=None, renew_secret=None, cancel_secret=None,
4243                  expiration_time=None, nodeid=None):
4244hunk ./src/allmydata/storage/lease.py 21
4245 
4246     def get_expiration_time(self):
4247         return self.expiration_time
4248+
4249     def get_grant_renew_time_time(self):
4250         # hack, based upon fixed 31day expiration period
4251         return self.expiration_time - 31*24*60*60
4252hunk ./src/allmydata/storage/lease.py 25
4253+
4254     def get_age(self):
4255         return time.time() - self.get_grant_renew_time_time()
4256 
4257hunk ./src/allmydata/storage/lease.py 36
4258          self.expiration_time) = struct.unpack(">L32s32sL", data)
4259         self.nodeid = None
4260         return self
4261+
4262     def to_immutable_data(self):
4263         return struct.pack(">L32s32sL",
4264                            self.owner_num,
4265hunk ./src/allmydata/storage/lease.py 49
4266                            int(self.expiration_time),
4267                            self.renew_secret, self.cancel_secret,
4268                            self.nodeid)
4269+
4270     def from_mutable_data(self, data):
4271         (self.owner_num,
4272          self.expiration_time,
4273hunk ./src/allmydata/storage/server.py 1
4274-import os, re, weakref, struct, time
4275+import weakref, time
4276 
4277 from foolscap.api import Referenceable
4278 from twisted.application import service
4279hunk ./src/allmydata/storage/server.py 7
4280 
4281 from zope.interface import implements
4282-from allmydata.interfaces import RIStorageServer, IStatsProducer
4283-from allmydata.util import fileutil, idlib, log, time_format
4284+from allmydata.interfaces import RIStorageServer, IStatsProducer, IStorageBackend
4285+from allmydata.util.assertutil import precondition
4286+from allmydata.util import idlib, log
4287 import allmydata # for __full_version__
4288 
4289hunk ./src/allmydata/storage/server.py 12
4290-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
4291-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
4292+from allmydata.storage.common import si_a2b, si_b2a
4293+[si_a2b]  # hush pyflakes
4294 from allmydata.storage.lease import LeaseInfo
4295hunk ./src/allmydata/storage/server.py 15
4296-from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
4297-     create_mutable_sharefile
4298-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
4299-from allmydata.storage.crawler import BucketCountingCrawler
4300 from allmydata.storage.expirer import LeaseCheckingCrawler
4301hunk ./src/allmydata/storage/server.py 16
4302-
4303-# storage/
4304-# storage/shares/incoming
4305-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
4306-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
4307-# storage/shares/$START/$STORAGEINDEX
4308-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
4309-
4310-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
4311-# base-32 chars).
4312-
4313-# $SHARENUM matches this regex:
4314-NUM_RE=re.compile("^[0-9]+$")
4315-
4316+from allmydata.storage.crawler import BucketCountingCrawler
4317 
4318 
4319 class StorageServer(service.MultiService, Referenceable):
4320hunk ./src/allmydata/storage/server.py 21
4321     implements(RIStorageServer, IStatsProducer)
4322+
4323     name = 'storage'
4324     LeaseCheckerClass = LeaseCheckingCrawler
4325hunk ./src/allmydata/storage/server.py 24
4326+    DEFAULT_EXPIRATION_POLICY = {
4327+        'enabled': False,
4328+        'mode': 'age',
4329+        'override_lease_duration': None,
4330+        'cutoff_date': None,
4331+        'sharetypes': ('mutable', 'immutable'),
4332+    }
4333 
4334hunk ./src/allmydata/storage/server.py 32
4335-    def __init__(self, storedir, nodeid, reserved_space=0,
4336-                 discard_storage=False, readonly_storage=False,
4337+    def __init__(self, serverid, backend, statedir,
4338                  stats_provider=None,
4339hunk ./src/allmydata/storage/server.py 34
4340-                 expiration_enabled=False,
4341-                 expiration_mode="age",
4342-                 expiration_override_lease_duration=None,
4343-                 expiration_cutoff_date=None,
4344-                 expiration_sharetypes=("mutable", "immutable")):
4345+                 expiration_policy=None):
4346         service.MultiService.__init__(self)
4347hunk ./src/allmydata/storage/server.py 36
4348-        assert isinstance(nodeid, str)
4349-        assert len(nodeid) == 20
4350-        self.my_nodeid = nodeid
4351-        self.storedir = storedir
4352-        sharedir = os.path.join(storedir, "shares")
4353-        fileutil.make_dirs(sharedir)
4354-        self.sharedir = sharedir
4355-        # we don't actually create the corruption-advisory dir until necessary
4356-        self.corruption_advisory_dir = os.path.join(storedir,
4357-                                                    "corruption-advisories")
4358-        self.reserved_space = int(reserved_space)
4359-        self.no_storage = discard_storage
4360-        self.readonly_storage = readonly_storage
4361+        precondition(IStorageBackend.providedBy(backend), backend)
4362+        precondition(isinstance(serverid, str), serverid)
4363+        precondition(len(serverid) == 20, serverid)
4364+
4365+        self._serverid = serverid
4366         self.stats_provider = stats_provider
4367         if self.stats_provider:
4368             self.stats_provider.register_producer(self)
4369hunk ./src/allmydata/storage/server.py 44
4370-        self.incomingdir = os.path.join(sharedir, 'incoming')
4371-        self._clean_incomplete()
4372-        fileutil.make_dirs(self.incomingdir)
4373         self._active_writers = weakref.WeakKeyDictionary()
4374hunk ./src/allmydata/storage/server.py 45
4375+        self.backend = backend
4376+        self.backend.setServiceParent(self)
4377+        self._statedir = statedir
4378         log.msg("StorageServer created", facility="tahoe.storage")
4379 
4380hunk ./src/allmydata/storage/server.py 50
4381-        if reserved_space:
4382-            if self.get_available_space() is None:
4383-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
4384-                        umin="0wZ27w", level=log.UNUSUAL)
4385-
4386         self.latencies = {"allocate": [], # immutable
4387                           "write": [],
4388                           "close": [],
4389hunk ./src/allmydata/storage/server.py 61
4390                           "renew": [],
4391                           "cancel": [],
4392                           }
4393-        self.add_bucket_counter()
4394-
4395-        statefile = os.path.join(self.storedir, "lease_checker.state")
4396-        historyfile = os.path.join(self.storedir, "lease_checker.history")
4397-        klass = self.LeaseCheckerClass
4398-        self.lease_checker = klass(self, statefile, historyfile,
4399-                                   expiration_enabled, expiration_mode,
4400-                                   expiration_override_lease_duration,
4401-                                   expiration_cutoff_date,
4402-                                   expiration_sharetypes)
4403-        self.lease_checker.setServiceParent(self)
4404+        self._setup_bucket_counter()
4405+        self._setup_lease_checker(expiration_policy or self.DEFAULT_EXPIRATION_POLICY)
4406 
4407     def __repr__(self):
4408hunk ./src/allmydata/storage/server.py 65
4409-        return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
4410+        return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self._serverid),)
4411 
4412hunk ./src/allmydata/storage/server.py 67
4413-    def add_bucket_counter(self):
4414-        statefile = os.path.join(self.storedir, "bucket_counter.state")
4415-        self.bucket_counter = BucketCountingCrawler(self, statefile)
4416+    def _setup_bucket_counter(self):
4417+        statefp = self._statedir.child("bucket_counter.state")
4418+        self.bucket_counter = BucketCountingCrawler(self.backend, statefp)
4419         self.bucket_counter.setServiceParent(self)
4420 
4421hunk ./src/allmydata/storage/server.py 72
4422+    def _setup_lease_checker(self, expiration_policy):
4423+        statefp = self._statedir.child("lease_checker.state")
4424+        historyfp = self._statedir.child("lease_checker.history")
4425+        self.lease_checker = self.LeaseCheckerClass(self.backend, statefp, historyfp, expiration_policy)
4426+        self.lease_checker.setServiceParent(self)
4427+
4428     def count(self, name, delta=1):
4429         if self.stats_provider:
4430             self.stats_provider.count("storage_server." + name, delta)
4431hunk ./src/allmydata/storage/server.py 92
4432         """Return a dict, indexed by category, that contains a dict of
4433         latency numbers for each category. If there are sufficient samples
4434         for unambiguous interpretation, each dict will contain the
4435-        following keys: mean, 01_0_percentile, 10_0_percentile,
4436+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
4437         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
4438         99_0_percentile, 99_9_percentile.  If there are insufficient
4439         samples for a given percentile to be interpreted unambiguously
4440hunk ./src/allmydata/storage/server.py 114
4441             else:
4442                 stats["mean"] = None
4443 
4444-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
4445-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
4446-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
4447+            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),
4448+                             (0.5, "50_0_percentile", 10), (0.9, "90_0_percentile", 10),
4449+                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),
4450                              (0.999, "99_9_percentile", 1000)]
4451 
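
Each percentile is reported only when there are at least minnumtoobserve samples (the third element of each tuple). A sketch of how such a selection rule can be applied, assuming sorted_samples is the category's latency list in ascending order (the real loop body lies outside this hunk):

    def percentile_or_none(sorted_samples, fraction, minnumtoobserve):
        # Too few samples make this percentile ambiguous; report None.
        if len(sorted_samples) < minnumtoobserve:
            return None
        return sorted_samples[int(len(sorted_samples) * fraction)]

    samples = sorted([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8])
    print(percentile_or_none(samples, 0.5, 10))      # the median
    print(percentile_or_none(samples, 0.999, 1000))  # None: too few samples
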
4452             for percentile, percentilestring, minnumtoobserve in orderstatlist:
4453hunk ./src/allmydata/storage/server.py 133
4454             kwargs["facility"] = "tahoe.storage"
4455         return log.msg(*args, **kwargs)
4456 
4457-    def _clean_incomplete(self):
4458-        fileutil.rm_dir(self.incomingdir)
4459+    def get_serverid(self):
4460+        return self._serverid
4461 
4462     def get_stats(self):
4463         # remember: RIStatsProvider requires that our return dict
4464hunk ./src/allmydata/storage/server.py 138
4465-        # contains numeric values.
4466+        # contains numeric or None values.
4467         stats = { 'storage_server.allocated': self.allocated_size(), }
4468hunk ./src/allmydata/storage/server.py 140
4469-        stats['storage_server.reserved_space'] = self.reserved_space
4470         for category,ld in self.get_latencies().items():
4471             for name,v in ld.items():
4472                 stats['storage_server.latencies.%s.%s' % (category, name)] = v
4473hunk ./src/allmydata/storage/server.py 144
4474 
4475-        try:
4476-            disk = fileutil.get_disk_stats(self.sharedir, self.reserved_space)
4477-            writeable = disk['avail'] > 0
4478-
4479-            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
4480-            stats['storage_server.disk_total'] = disk['total']
4481-            stats['storage_server.disk_used'] = disk['used']
4482-            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
4483-            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
4484-            stats['storage_server.disk_avail'] = disk['avail']
4485-        except AttributeError:
4486-            writeable = True
4487-        except EnvironmentError:
4488-            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
4489-            writeable = False
4490-
4491-        if self.readonly_storage:
4492-            stats['storage_server.disk_avail'] = 0
4493-            writeable = False
4494+        self.backend.fill_in_space_stats(stats)
4495 
4496hunk ./src/allmydata/storage/server.py 146
4497-        stats['storage_server.accepting_immutable_shares'] = int(writeable)
4498         s = self.bucket_counter.get_state()
4499         bucket_count = s.get("last-complete-bucket-count")
4500         if bucket_count:
4501hunk ./src/allmydata/storage/server.py 153
4502         return stats
4503 
4504     def get_available_space(self):
4505-        """Returns available space for share storage in bytes, or None if no
4506-        API to get this information is available."""
4507-
4508-        if self.readonly_storage:
4509-            return 0
4510-        return fileutil.get_available_space(self.sharedir, self.reserved_space)
4511+        return self.backend.get_available_space()
4512 
4513     def allocated_size(self):
4514         space = 0
4515hunk ./src/allmydata/storage/server.py 162
4516         return space
4517 
4518     def remote_get_version(self):
4519-        remaining_space = self.get_available_space()
4520+        remaining_space = self.backend.get_available_space()
4521         if remaining_space is None:
4522             # We're on a platform that has no API to get disk stats.
4523             remaining_space = 2**64
4524hunk ./src/allmydata/storage/server.py 178
4525                     }
4526         return version
4527 
4528-    def remote_allocate_buckets(self, storage_index,
4529+    def remote_allocate_buckets(self, storageindex,
4530                                 renew_secret, cancel_secret,
4531                                 sharenums, allocated_size,
4532                                 canary, owner_num=0):
4533hunk ./src/allmydata/storage/server.py 182
4534+        # cancel_secret is no longer used.
4535         # owner_num is not for clients to set, but rather it should be
4536hunk ./src/allmydata/storage/server.py 184
4537-        # curried into the PersonalStorageServer instance that is dedicated
4538-        # to a particular owner.
4539+        # curried into a StorageServer instance dedicated to a particular
4540+        # owner.
4541         start = time.time()
4542         self.count("allocate")
4543hunk ./src/allmydata/storage/server.py 188
4544-        alreadygot = set()
4545         bucketwriters = {} # k: shnum, v: BucketWriter
4546hunk ./src/allmydata/storage/server.py 189
4547-        si_dir = storage_index_to_dir(storage_index)
4548-        si_s = si_b2a(storage_index)
4549 
4550hunk ./src/allmydata/storage/server.py 190
4551+        si_s = si_b2a(storageindex)
4552         log.msg("storage: allocate_buckets %s" % si_s)
4553 
4554hunk ./src/allmydata/storage/server.py 193
4555-        # in this implementation, the lease information (including secrets)
4556-        # goes into the share files themselves. It could also be put into a
4557-        # separate database. Note that the lease should not be added until
4558-        # the BucketWriter has been closed.
4559+        # Note that the lease should not be added until the BucketWriter
4560+        # has been closed.
4561         expire_time = time.time() + 31*24*60*60
4562hunk ./src/allmydata/storage/server.py 196
4563-        lease_info = LeaseInfo(owner_num,
4564-                               renew_secret, cancel_secret,
4565-                               expire_time, self.my_nodeid)
4566+        lease_info = LeaseInfo(owner_num, renew_secret,
4567+                               expiration_time=expire_time, nodeid=self._serverid)
4568 
4569         max_space_per_bucket = allocated_size
4570 
4571hunk ./src/allmydata/storage/server.py 201
4572-        remaining_space = self.get_available_space()
4573+        remaining_space = self.backend.get_available_space()
4574         limited = remaining_space is not None
4575         if limited:
4576hunk ./src/allmydata/storage/server.py 204
4577-            # this is a bit conservative, since some of this allocated_size()
4578-            # has already been written to disk, where it will show up in
4579+            # This is a bit conservative, since some of this allocated_size()
4580+            # has already been written to the backend, where it will show up in
4581             # get_available_space.
4582             remaining_space -= self.allocated_size()
4583hunk ./src/allmydata/storage/server.py 208
4584-        # self.readonly_storage causes remaining_space <= 0
4585+            # If the backend is read-only, remaining_space will be <= 0.
4586+
4587+        shareset = self.backend.get_shareset(storageindex)
4588 
4589hunk ./src/allmydata/storage/server.py 212
4590-        # fill alreadygot with all shares that we have, not just the ones
4591+        # Fill alreadygot with all shares that we have, not just the ones
4592         # they asked about: this will save them a lot of work. Add or update
4593         # leases for all of them: if they want us to hold shares for this
4594hunk ./src/allmydata/storage/server.py 215
4595-        # file, they'll want us to hold leases for this file.
4596-        for (shnum, fn) in self._get_bucket_shares(storage_index):
4597-            alreadygot.add(shnum)
4598-            sf = ShareFile(fn)
4599-            sf.add_or_renew_lease(lease_info)
4600+        # file, they'll want us to hold leases for all of its shares.
4601+        #
4602+        # XXX should we be making the assumption here that lease info is
4603+        # duplicated in all shares?
4604+        alreadygot = set()
4605+        for share in shareset.get_shares():
4606+            share.add_or_renew_lease(lease_info)
4607+            alreadygot.add(share.get_shnum())
4608 
4609hunk ./src/allmydata/storage/server.py 224
4610-        for shnum in sharenums:
4611-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
4612-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
4613-            if os.path.exists(finalhome):
4614-                # great! we already have it. easy.
4615-                pass
4616-            elif os.path.exists(incominghome):
4617+        for shnum in sharenums - alreadygot:
4618+            if shareset.has_incoming(shnum):
4619                 # Note that we don't create BucketWriters for shnums that
4620                 # have a partial share (in incoming/), so if a second upload
4621                 # occurs while the first is still in progress, the second
4622hunk ./src/allmydata/storage/server.py 232
4623                 # uploader will use different storage servers.
4624                 pass
4625             elif (not limited) or (remaining_space >= max_space_per_bucket):
4626-                # ok! we need to create the new share file.
4627-                bw = BucketWriter(self, incominghome, finalhome,
4628-                                  max_space_per_bucket, lease_info, canary)
4629-                if self.no_storage:
4630-                    bw.throw_out_all_data = True
4631+                bw = shareset.make_bucket_writer(self, shnum, max_space_per_bucket,
4632+                                                 lease_info, canary)
4633                 bucketwriters[shnum] = bw
4634                 self._active_writers[bw] = 1
4635                 if limited:
4636hunk ./src/allmydata/storage/server.py 239
4637                     remaining_space -= max_space_per_bucket
4638             else:
4639-                # bummer! not enough space to accept this bucket
4640+                # Bummer: not enough space to accept this share.
4641                 pass
4642 
4643hunk ./src/allmydata/storage/server.py 242
4644-        if bucketwriters:
4645-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
4646-
4647         self.add_latency("allocate", time.time() - start)
4648         return alreadygot, bucketwriters
4649 
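
The per-share decision in remote_allocate_buckets, restated without side effects (not part of the patch; shareset.has_incoming is the backend query used above):

    def plan_allocation(shareset, sharenums, alreadygot, limited,
                        remaining_space, max_space_per_bucket):
        # For each share we don't already hold: skip it if an upload is in
        # progress, accept it if there is room, otherwise refuse it.
        plan = {}
        for shnum in sharenums - alreadygot:
            if shareset.has_incoming(shnum):
                plan[shnum] = "upload already in progress"
            elif (not limited) or (remaining_space >= max_space_per_bucket):
                plan[shnum] = "accept"
                if limited:
                    remaining_space -= max_space_per_bucket
            else:
                plan[shnum] = "refuse: not enough space"
        return plan
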
4650hunk ./src/allmydata/storage/server.py 245
4651-    def _iter_share_files(self, storage_index):
4652-        for shnum, filename in self._get_bucket_shares(storage_index):
4653-            f = open(filename, 'rb')
4654-            header = f.read(32)
4655-            f.close()
4656-            if header[:32] == MutableShareFile.MAGIC:
4657-                sf = MutableShareFile(filename, self)
4658-                # note: if the share has been migrated, the renew_lease()
4659-                # call will throw an exception, with information to help the
4660-                # client update the lease.
4661-            elif header[:4] == struct.pack(">L", 1):
4662-                sf = ShareFile(filename)
4663-            else:
4664-                continue # non-sharefile
4665-            yield sf
4666-
4667-    def remote_add_lease(self, storage_index, renew_secret, cancel_secret,
4668+    def remote_add_lease(self, storageindex, renew_secret, cancel_secret,
4669                          owner_num=1):
4670hunk ./src/allmydata/storage/server.py 247
4671+        # cancel_secret is no longer used.
4672         start = time.time()
4673         self.count("add-lease")
4674         new_expire_time = time.time() + 31*24*60*60
4675hunk ./src/allmydata/storage/server.py 251
4676-        lease_info = LeaseInfo(owner_num,
4677-                               renew_secret, cancel_secret,
4678-                               new_expire_time, self.my_nodeid)
4679-        for sf in self._iter_share_files(storage_index):
4680-            sf.add_or_renew_lease(lease_info)
4681-        self.add_latency("add-lease", time.time() - start)
4682-        return None
4683+        lease_info = LeaseInfo(owner_num, renew_secret,
4684+                               expiration_time=new_expire_time, nodeid=self._serverid)
4685 
4686hunk ./src/allmydata/storage/server.py 254
4687-    def remote_renew_lease(self, storage_index, renew_secret):
4688+        try:
4689+            self.backend.get_shareset(storageindex).add_or_renew_lease(lease_info)
4690+        finally:
4691+            self.add_latency("add-lease", time.time() - start)
4692+
4693+    def remote_renew_lease(self, storageindex, renew_secret):
4694         start = time.time()
4695         self.count("renew")
4696hunk ./src/allmydata/storage/server.py 262
4697-        new_expire_time = time.time() + 31*24*60*60
4698-        found_buckets = False
4699-        for sf in self._iter_share_files(storage_index):
4700-            found_buckets = True
4701-            sf.renew_lease(renew_secret, new_expire_time)
4702-        self.add_latency("renew", time.time() - start)
4703-        if not found_buckets:
4704-            raise IndexError("no such lease to renew")
4705+
4706+        try:
4707+            shareset = self.backend.get_shareset(storageindex)
4708+            new_expiration_time = start + 31*24*60*60   # one month from now
4709+            shareset.renew_lease(renew_secret, new_expiration_time)
4710+        finally:
4711+            self.add_latency("renew", time.time() - start)
4712 
4713     def bucket_writer_closed(self, bw, consumed_size):
4714         if self.stats_provider:
4715hunk ./src/allmydata/storage/server.py 275
4716             self.stats_provider.count('storage_server.bytes_added', consumed_size)
4717         del self._active_writers[bw]
4718 
4719-    def _get_bucket_shares(self, storage_index):
4720-        """Return a list of (shnum, pathname) tuples for files that hold
4721-        shares for this storage_index. In each tuple, 'shnum' will always be
4722-        the integer form of the last component of 'pathname'."""
4723-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4724-        try:
4725-            for f in os.listdir(storagedir):
4726-                if NUM_RE.match(f):
4727-                    filename = os.path.join(storagedir, f)
4728-                    yield (int(f), filename)
4729-        except OSError:
4730-            # Commonly caused by there being no buckets at all.
4731-            pass
4732-
4733-    def remote_get_buckets(self, storage_index):
4734+    def remote_get_buckets(self, storageindex):
4735         start = time.time()
4736         self.count("get")
4737hunk ./src/allmydata/storage/server.py 278
4738-        si_s = si_b2a(storage_index)
4739+        si_s = si_b2a(storageindex)
4740         log.msg("storage: get_buckets %s" % si_s)
4741         bucketreaders = {} # k: sharenum, v: BucketReader
4742hunk ./src/allmydata/storage/server.py 281
4743-        for shnum, filename in self._get_bucket_shares(storage_index):
4744-            bucketreaders[shnum] = BucketReader(self, filename,
4745-                                                storage_index, shnum)
4746-        self.add_latency("get", time.time() - start)
4747-        return bucketreaders
4748 
4749hunk ./src/allmydata/storage/server.py 282
4750-    def get_leases(self, storage_index):
4751-        """Provide an iterator that yields all of the leases attached to this
4752-        bucket. Each lease is returned as a LeaseInfo instance.
4753+        try:
4754+            shareset = self.backend.get_shareset(storageindex)
4755+            for share in shareset.get_shares():
4756+                bucketreaders[share.get_shnum()] = shareset.make_bucket_reader(self, share)
4757+            return bucketreaders
4758+        finally:
4759+            self.add_latency("get", time.time() - start)
4760 
4761hunk ./src/allmydata/storage/server.py 290
4762-        This method is not for client use.
4763+    def get_leases(self, storageindex):
4764         """
4765hunk ./src/allmydata/storage/server.py 292
4766+        Provide an iterator that yields all of the leases attached to this
4767+        shareset. Each lease is returned as a LeaseInfo instance.
4768 
4769hunk ./src/allmydata/storage/server.py 295
4770-        # since all shares get the same lease data, we just grab the leases
4771-        # from the first share
4772-        try:
4773-            shnum, filename = self._get_bucket_shares(storage_index).next()
4774-            sf = ShareFile(filename)
4775-            return sf.get_leases()
4776-        except StopIteration:
4777-            return iter([])
4778+        This method is not for client use. XXX do we need it at all?
4779+        """
4780+        return self.backend.get_shareset(storageindex).get_leases()
4781 
4782hunk ./src/allmydata/storage/server.py 299
4783-    def remote_slot_testv_and_readv_and_writev(self, storage_index,
4784+    def remote_slot_testv_and_readv_and_writev(self, storageindex,
4785                                                secrets,
4786                                                test_and_write_vectors,
4787                                                read_vector):
4788hunk ./src/allmydata/storage/server.py 305
4789         start = time.time()
4790         self.count("writev")
4791-        si_s = si_b2a(storage_index)
4792+        si_s = si_b2a(storageindex)
4793         log.msg("storage: slot_writev %s" % si_s)
4794hunk ./src/allmydata/storage/server.py 307
4795-        si_dir = storage_index_to_dir(storage_index)
4796-        (write_enabler, renew_secret, cancel_secret) = secrets
4797-        # shares exist if there is a file for them
4798-        bucketdir = os.path.join(self.sharedir, si_dir)
4799-        shares = {}
4800-        if os.path.isdir(bucketdir):
4801-            for sharenum_s in os.listdir(bucketdir):
4802-                try:
4803-                    sharenum = int(sharenum_s)
4804-                except ValueError:
4805-                    continue
4806-                filename = os.path.join(bucketdir, sharenum_s)
4807-                msf = MutableShareFile(filename, self)
4808-                msf.check_write_enabler(write_enabler, si_s)
4809-                shares[sharenum] = msf
4810-        # write_enabler is good for all existing shares.
4811-
4812-        # Now evaluate test vectors.
4813-        testv_is_good = True
4814-        for sharenum in test_and_write_vectors:
4815-            (testv, datav, new_length) = test_and_write_vectors[sharenum]
4816-            if sharenum in shares:
4817-                if not shares[sharenum].check_testv(testv):
4818-                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
4819-                    testv_is_good = False
4820-                    break
4821-            else:
4822-                # compare the vectors against an empty share, in which all
4823-                # reads return empty strings.
4824-                if not EmptyShare().check_testv(testv):
4825-                    self.log("testv failed (empty): [%d] %r" % (sharenum,
4826-                                                                testv))
4827-                    testv_is_good = False
4828-                    break
4829-
4830-        # now gather the read vectors, before we do any writes
4831-        read_data = {}
4832-        for sharenum, share in shares.items():
4833-            read_data[sharenum] = share.readv(read_vector)
4834-
4835-        ownerid = 1 # TODO
4836-        expire_time = time.time() + 31*24*60*60   # one month
4837-        lease_info = LeaseInfo(ownerid,
4838-                               renew_secret, cancel_secret,
4839-                               expire_time, self.my_nodeid)
4840-
4841-        if testv_is_good:
4842-            # now apply the write vectors
4843-            for sharenum in test_and_write_vectors:
4844-                (testv, datav, new_length) = test_and_write_vectors[sharenum]
4845-                if new_length == 0:
4846-                    if sharenum in shares:
4847-                        shares[sharenum].unlink()
4848-                else:
4849-                    if sharenum not in shares:
4850-                        # allocate a new share
4851-                        allocated_size = 2000 # arbitrary, really
4852-                        share = self._allocate_slot_share(bucketdir, secrets,
4853-                                                          sharenum,
4854-                                                          allocated_size,
4855-                                                          owner_num=0)
4856-                        shares[sharenum] = share
4857-                    shares[sharenum].writev(datav, new_length)
4858-                    # and update the lease
4859-                    shares[sharenum].add_or_renew_lease(lease_info)
4860-
4861-            if new_length == 0:
4862-                # delete empty bucket directories
4863-                if not os.listdir(bucketdir):
4864-                    os.rmdir(bucketdir)
4865 
4866hunk ./src/allmydata/storage/server.py 308
4867+        try:
4868+            shareset = self.backend.get_shareset(storageindex)
4869+            expiration_time = start + 31*24*60*60   # one month from now
4870+            return shareset.testv_and_readv_and_writev(self, secrets, test_and_write_vectors,
4871+                                                       read_vector, expiration_time)
4872+        finally:
4873+            self.add_latency("writev", time.time() - start)
4874 
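
remote_add_lease, remote_renew_lease, remote_get_buckets, and the two slot methods now share one shape: count the operation, delegate to the shareset, and record latency in a finally clause so that failures are timed too. The pattern, factored out as a sketch (count and add_latency stand for the StorageServer methods of the same names):

    import time

    def with_latency(count, add_latency, category, body):
        # body(start) does the real work; latency is recorded even when
        # it raises.
        start = time.time()
        count(category)
        try:
            return body(start)
        finally:
            add_latency(category, time.time() - start)
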
4875hunk ./src/allmydata/storage/server.py 316
4876-        # all done
4877-        self.add_latency("writev", time.time() - start)
4878-        return (testv_is_good, read_data)
4879-
4880-    def _allocate_slot_share(self, bucketdir, secrets, sharenum,
4881-                             allocated_size, owner_num=0):
4882-        (write_enabler, renew_secret, cancel_secret) = secrets
4883-        my_nodeid = self.my_nodeid
4884-        fileutil.make_dirs(bucketdir)
4885-        filename = os.path.join(bucketdir, "%d" % sharenum)
4886-        share = create_mutable_sharefile(filename, my_nodeid, write_enabler,
4887-                                         self)
4888-        return share
4889-
4890-    def remote_slot_readv(self, storage_index, shares, readv):
4891+    def remote_slot_readv(self, storageindex, shares, readv):
4892         start = time.time()
4893         self.count("readv")
4894hunk ./src/allmydata/storage/server.py 319
4895-        si_s = si_b2a(storage_index)
4896-        lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
4897-                     facility="tahoe.storage", level=log.OPERATIONAL)
4898-        si_dir = storage_index_to_dir(storage_index)
4899-        # shares exist if there is a file for them
4900-        bucketdir = os.path.join(self.sharedir, si_dir)
4901-        if not os.path.isdir(bucketdir):
4902+        si_s = si_b2a(storageindex)
4903+        log.msg("storage: slot_readv %s %s" % (si_s, shares),
4904+                facility="tahoe.storage", level=log.OPERATIONAL)
4905+
4906+        try:
4907+            shareset = self.backend.get_shareset(storageindex)
4908+            return shareset.readv(self, shares, readv)
4909+        finally:
4910             self.add_latency("readv", time.time() - start)
4911hunk ./src/allmydata/storage/server.py 328
4912-            return {}
4913-        datavs = {}
4914-        for sharenum_s in os.listdir(bucketdir):
4915-            try:
4916-                sharenum = int(sharenum_s)
4917-            except ValueError:
4918-                continue
4919-            if sharenum in shares or not shares:
4920-                filename = os.path.join(bucketdir, sharenum_s)
4921-                msf = MutableShareFile(filename, self)
4922-                datavs[sharenum] = msf.readv(readv)
4923-        log.msg("returning shares %s" % (datavs.keys(),),
4924-                facility="tahoe.storage", level=log.NOISY, parent=lp)
4925-        self.add_latency("readv", time.time() - start)
4926-        return datavs
4927 
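
The per-share loop deleted above moves behind shareset.readv. Judging from the old code and the call site (shareset.readv(self, shares, readv)), the disk backend's version is approximately the following sketch; the server argument is passed through by the call but unused here:

    def readv(self, server, wanted_shnums, read_vector):
        # An empty wanted_shnums means "read from every share we hold".
        datavs = {}
        for share in self.get_shares():
            shnum = share.get_shnum()
            if not wanted_shnums or shnum in wanted_shnums:
                datavs[shnum] = share.readv(read_vector)
        return datavs
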
4928hunk ./src/allmydata/storage/server.py 329
4929-    def remote_advise_corrupt_share(self, share_type, storage_index, shnum,
4930-                                    reason):
4931-        fileutil.make_dirs(self.corruption_advisory_dir)
4932-        now = time_format.iso_utc(sep="T")
4933-        si_s = si_b2a(storage_index)
4934-        # windows can't handle colons in the filename
4935-        fn = os.path.join(self.corruption_advisory_dir,
4936-                          "%s--%s-%d" % (now, si_s, shnum)).replace(":","")
4937-        f = open(fn, "w")
4938-        f.write("report: Share Corruption\n")
4939-        f.write("type: %s\n" % share_type)
4940-        f.write("storage_index: %s\n" % si_s)
4941-        f.write("share_number: %d\n" % shnum)
4942-        f.write("\n")
4943-        f.write(reason)
4944-        f.write("\n")
4945-        f.close()
4946-        log.msg(format=("client claims corruption in (%(share_type)s) " +
4947-                        "%(si)s-%(shnum)d: %(reason)s"),
4948-                share_type=share_type, si=si_s, shnum=shnum, reason=reason,
4949-                level=log.SCARY, umid="SGx2fA")
4950-        return None
4951+    def remote_advise_corrupt_share(self, share_type, storage_index, shnum, reason):
4952+        self.backend.advise_corrupt_share(share_type, storage_index, shnum, reason)
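
The advisory bookkeeping deleted above (write a report file, then log at SCARY) moves wholesale into the backend. A sketch of what the backend method presumably inherits, assuming a FilePath attribute named _corruption_advisory_dir (that name is an assumption, not the patch's):

    def advise_corrupt_share(self, share_type, storage_index, shnum, reason):
        fileutil.fp_make_dirs(self._corruption_advisory_dir)  # assumed attribute
        now = time_format.iso_utc(sep="T")
        si_s = si_b2a(storage_index)
        # Windows can't handle colons in the filename.
        name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "")
        f = self._corruption_advisory_dir.child(name).open("w")
        try:
            f.write("report: Share Corruption\n")
            f.write("type: %s\n" % share_type)
            f.write("storage_index: %s\n" % si_s)
            f.write("share_number: %d\n\n" % shnum)
            f.write(reason)
            f.write("\n")
        finally:
            f.close()
        log.msg(format=("client claims corruption in (%(share_type)s) " +
                        "%(si)s-%(shnum)d: %(reason)s"),
                share_type=share_type, si=si_s, shnum=shnum, reason=reason,
                level=log.SCARY, umid="SGx2fA")
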
4953hunk ./src/allmydata/test/common.py 20
4954 from allmydata.mutable.common import CorruptShareError
4955 from allmydata.mutable.layout import unpack_header
4956 from allmydata.mutable.publish import MutableData
4957-from allmydata.storage.mutable import MutableShareFile
4958+from allmydata.storage.backends.disk.mutable import MutableDiskShare
4959 from allmydata.util import hashutil, log, fileutil, pollmixin
4960 from allmydata.util.assertutil import precondition
4961 from allmydata.util.consumer import download_to_data
4962hunk ./src/allmydata/test/common.py 1297
4963 
4964 def _corrupt_mutable_share_data(data, debug=False):
4965     prefix = data[:32]
4966-    assert prefix == MutableShareFile.MAGIC, "This function is designed to corrupt mutable shares of v1, and the magic number doesn't look right: %r vs %r" % (prefix, MutableShareFile.MAGIC)
4967-    data_offset = MutableShareFile.DATA_OFFSET
4968+    assert prefix == MutableDiskShare.MAGIC, "This function is designed to corrupt mutable shares of v1, and the magic number doesn't look right: %r vs %r" % (prefix, MutableDiskShare.MAGIC)
4969+    data_offset = MutableDiskShare.DATA_OFFSET
4970     sharetype = data[data_offset:data_offset+1]
4971     assert sharetype == "\x00", "non-SDMF mutable shares not supported"
4972     (version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize,
4973hunk ./src/allmydata/test/no_network.py 21
4974 from twisted.application import service
4975 from twisted.internet import defer, reactor
4976 from twisted.python.failure import Failure
4977+from twisted.python.filepath import FilePath
4978 from foolscap.api import Referenceable, fireEventually, RemoteException
4979 from base64 import b32encode
4980hunk ./src/allmydata/test/no_network.py 24
4981+
4982 from allmydata import uri as tahoe_uri
4983 from allmydata.client import Client
4984hunk ./src/allmydata/test/no_network.py 27
4985-from allmydata.storage.server import StorageServer, storage_index_to_dir
4986+from allmydata.storage.server import StorageServer
4987+from allmydata.storage.backends.disk.disk_backend import DiskBackend
4988 from allmydata.util import fileutil, idlib, hashutil
4989 from allmydata.util.hashutil import sha1
4990 from allmydata.test.common_web import HTTPClientGETFactory
4991hunk ./src/allmydata/test/no_network.py 155
4992             seed = server.get_permutation_seed()
4993             return sha1(peer_selection_index + seed).digest()
4994         return sorted(self.get_connected_servers(), key=_permuted)
4995+
4996     def get_connected_servers(self):
4997         return self.client._servers
4998hunk ./src/allmydata/test/no_network.py 158
4999+
5000     def get_nickname_for_serverid(self, serverid):
5001         return None
5002 
5003hunk ./src/allmydata/test/no_network.py 162
5004+    def get_known_servers(self):
5005+        return self.get_connected_servers()
5006+
5007+    def get_all_serverids(self):
5008+        return self.client.get_all_serverids()
5009+
5010+
5011 class NoNetworkClient(Client):
5012     def create_tub(self):
5013         pass
5014hunk ./src/allmydata/test/no_network.py 262
5015 
5016     def make_server(self, i, readonly=False):
5017         serverid = hashutil.tagged_hash("serverid", str(i))[:20]
5018-        serverdir = os.path.join(self.basedir, "servers",
5019-                                 idlib.shortnodeid_b2a(serverid), "storage")
5020-        fileutil.make_dirs(serverdir)
5021-        ss = StorageServer(serverdir, serverid, stats_provider=SimpleStats(),
5022-                           readonly_storage=readonly)
5023+        storagedir = FilePath(self.basedir).child("servers").child(idlib.shortnodeid_b2a(serverid)).child("storage")
5024+
5025+        # The backend will make the storage directory and any necessary parents.
5026+        backend = DiskBackend(storagedir, readonly=readonly)
5027+        ss = StorageServer(serverid, backend, storagedir, stats_provider=SimpleStats())
5028         ss._no_network_server_number = i
5029         return ss
5030 
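
Note the constructor order: the backend is built first from a FilePath, then handed to StorageServer. A minimal usage sketch of the new signatures as this harness exercises them (stats_provider omitted on the assumption that it is optional; the path is hypothetical):

    from twisted.python.filepath import FilePath
    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    storagedir = FilePath("/tmp/example-node/storage")  # hypothetical path
    backend = DiskBackend(storagedir, readonly=False)   # makes the directory itself
    ss = StorageServer("\x00" * 20, backend, storagedir)
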
5031hunk ./src/allmydata/test/no_network.py 276
5032         middleman = service.MultiService()
5033         middleman.setServiceParent(self)
5034         ss.setServiceParent(middleman)
5035-        serverid = ss.my_nodeid
5036+        serverid = ss.get_serverid()
5037         self.servers_by_number[i] = ss
5038         wrapper = wrap_storage_server(ss)
5039         self.wrappers_by_id[serverid] = wrapper
5040hunk ./src/allmydata/test/no_network.py 295
5041         # it's enough to remove the server from c._servers (we don't actually
5042         # have to detach and stopService it)
5043         for i,ss in self.servers_by_number.items():
5044-            if ss.my_nodeid == serverid:
5045+            if ss.get_serverid() == serverid:
5046                 del self.servers_by_number[i]
5047                 break
5048         del self.wrappers_by_id[serverid]
5049hunk ./src/allmydata/test/no_network.py 345
5050     def get_clientdir(self, i=0):
5051         return self.g.clients[i].basedir
5052 
5053+    def get_server(self, i):
5054+        return self.g.servers_by_number[i]
5055+
5056     def get_serverdir(self, i):
5057hunk ./src/allmydata/test/no_network.py 349
5058-        return self.g.servers_by_number[i].storedir
5059+        return self.g.servers_by_number[i].backend.storedir
5060+
5061+    def remove_server(self, i):
5062+        self.g.remove_server(self.g.servers_by_number[i].get_serverid())
5063 
5064     def iterate_servers(self):
5065         for i in sorted(self.g.servers_by_number.keys()):
5066hunk ./src/allmydata/test/no_network.py 357
5067             ss = self.g.servers_by_number[i]
5068-            yield (i, ss, ss.storedir)
5069+            yield (i, ss, ss.backend.storedir)
5070 
5071     def find_uri_shares(self, uri):
5072         si = tahoe_uri.from_string(uri).get_storage_index()
5073hunk ./src/allmydata/test/no_network.py 361
5074-        prefixdir = storage_index_to_dir(si)
5075         shares = []
5076         for i,ss in self.g.servers_by_number.items():
5077hunk ./src/allmydata/test/no_network.py 363
5078-            serverid = ss.my_nodeid
5079-            basedir = os.path.join(ss.sharedir, prefixdir)
5080-            if not os.path.exists(basedir):
5081-                continue
5082-            for f in os.listdir(basedir):
5083-                try:
5084-                    shnum = int(f)
5085-                    shares.append((shnum, serverid, os.path.join(basedir, f)))
5086-                except ValueError:
5087-                    pass
5088+            for share in ss.backend.get_shareset(si).get_shares():
5089+                shares.append((share.get_shnum(), ss.get_serverid(), share._home))
5090         return sorted(shares)
5091 
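
find_uri_shares now yields (shnum, serverid, sharefp) triples whose third element is a FilePath rather than a string path, so callers read and write shares through FilePath methods. A usage sketch (corrupt() is a hypothetical transform):

    for (shnum, serverid, sharefp) in self.find_uri_shares(uri):
        data = sharefp.getContent()         # read the whole share
        sharefp.setContent(corrupt(data))   # write it back in place
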
5092hunk ./src/allmydata/test/no_network.py 367
5093+    def count_leases(self, uri):
5094+        """Return (share path, leasecount) pairs in arbitrary order."""
5095+        si = tahoe_uri.from_string(uri).get_storage_index()
5096+        lease_counts = []
5097+        for i,ss in self.g.servers_by_number.items():
5098+            for share in ss.backend.get_shareset(si).get_shares():
5099+                num_leases = len(list(share.get_leases()))
5100+            lease_counts.append((share._home.path, num_leases))
5101+        return lease_counts
5102+
5103     def copy_shares(self, uri):
5104         shares = {}
5105hunk ./src/allmydata/test/no_network.py 379
5106-        for (shnum, serverid, sharefile) in self.find_uri_shares(uri):
5107-            shares[sharefile] = open(sharefile, "rb").read()
5108+        for (shnum, serverid, sharefp) in self.find_uri_shares(uri):
5109+            shares[sharefp.path] = sharefp.getContent()
5110         return shares
5111 
5112hunk ./src/allmydata/test/no_network.py 383
5113+    def copy_share(self, from_share, uri, to_server):
5114+        si = tahoe_uri.from_string(uri).get_storage_index()
5115+        (i_shnum, i_serverid, i_sharefp) = from_share
5116+        shares_dir = to_server.backend.get_shareset(si)._sharehomedir
5117+        i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
5118+
5119     def restore_all_shares(self, shares):
5120hunk ./src/allmydata/test/no_network.py 390
5121-        for sharefile, data in shares.items():
5122-            open(sharefile, "wb").write(data)
5123+        for sharepath, data in shares.items():
5124+            FilePath(sharepath).setContent(data)
5125 
5126hunk ./src/allmydata/test/no_network.py 393
5127-    def delete_share(self, (shnum, serverid, sharefile)):
5128-        os.unlink(sharefile)
5129+    def delete_share(self, (shnum, serverid, sharefp)):
5130+        sharefp.remove()
5131 
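
copy_shares keys its snapshot by sharefp.path (a plain string), and restore_all_shares rebuilds a FilePath per entry when writing the bytes back. Round-trip usage, as the repairer-style tests use it (a sketch):

    snapshot = self.copy_shares(self.uri)    # {share path: share bytes}
    # ... delete or corrupt shares under test ...
    self.restore_all_shares(snapshot)        # rewrite every file from the snapshot
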
5132     def delete_shares_numbered(self, uri, shnums):
5133hunk ./src/allmydata/test/no_network.py 397
5134-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5135+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5136             if i_shnum in shnums:
5137hunk ./src/allmydata/test/no_network.py 399
5138-                os.unlink(i_sharefile)
5139+                i_sharefp.remove()
5140 
5141hunk ./src/allmydata/test/no_network.py 401
5142-    def corrupt_share(self, (shnum, serverid, sharefile), corruptor_function):
5143-        sharedata = open(sharefile, "rb").read()
5144-        corruptdata = corruptor_function(sharedata)
5145-        open(sharefile, "wb").write(corruptdata)
5146+    def corrupt_share(self, (shnum, serverid, sharefp), corruptor_function, debug=False):
5147+        sharedata = sharefp.getContent()
5148+        corruptdata = corruptor_function(sharedata, debug=debug)
5149+        sharefp.setContent(corruptdata)
5150 
5151     def corrupt_shares_numbered(self, uri, shnums, corruptor, debug=False):
5152hunk ./src/allmydata/test/no_network.py 407
5153-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5154+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5155             if i_shnum in shnums:
5156hunk ./src/allmydata/test/no_network.py 409
5157-                sharedata = open(i_sharefile, "rb").read()
5158-                corruptdata = corruptor(sharedata, debug=debug)
5159-                open(i_sharefile, "wb").write(corruptdata)
5160+                self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug)
5161 
5162     def corrupt_all_shares(self, uri, corruptor, debug=False):
5163hunk ./src/allmydata/test/no_network.py 412
5164-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5165-            sharedata = open(i_sharefile, "rb").read()
5166-            corruptdata = corruptor(sharedata, debug=debug)
5167-            open(i_sharefile, "wb").write(corruptdata)
5168+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5169+            self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug)
5170 
5171     def GET(self, urlpath, followRedirect=False, return_response=False,
5172             method="GET", clientnum=0, **kwargs):
5173hunk ./src/allmydata/test/test_download.py 6
5174 # a previous run. This asserts that the current code is capable of decoding
5175 # shares from a previous version.
5176 
5177-import os
5178 from twisted.trial import unittest
5179 from twisted.internet import defer, reactor
5180 from allmydata import uri
5181hunk ./src/allmydata/test/test_download.py 9
5182-from allmydata.storage.server import storage_index_to_dir
5183 from allmydata.util import base32, fileutil, spans, log, hashutil
5184 from allmydata.util.consumer import download_to_data, MemoryConsumer
5185 from allmydata.immutable import upload, layout
5186hunk ./src/allmydata/test/test_download.py 85
5187         u = upload.Data(plaintext, None)
5188         d = self.c0.upload(u)
5189         f = open("stored_shares.py", "w")
5190-        def _created_immutable(ur):
5191-            # write the generated shares and URI to a file, which can then be
5192-            # incorporated into this one next time.
5193-            f.write('immutable_uri = "%s"\n' % ur.uri)
5194-            f.write('immutable_shares = {\n')
5195-            si = uri.from_string(ur.uri).get_storage_index()
5196-            si_dir = storage_index_to_dir(si)
5197+
5198+        def _write_py(u):
5199+            si = uri.from_string(u).get_storage_index()
5200             for (i,ss,ssdir) in self.iterate_servers():
5201hunk ./src/allmydata/test/test_download.py 89
5202-                sharedir = os.path.join(ssdir, "shares", si_dir)
5203                 shares = {}
5204hunk ./src/allmydata/test/test_download.py 90
5205-                for fn in os.listdir(sharedir):
5206-                    shnum = int(fn)
5207-                    sharedata = open(os.path.join(sharedir, fn), "rb").read()
5208-                    shares[shnum] = sharedata
5209-                fileutil.rm_dir(sharedir)
5210+                shareset = ss.backend.get_shareset(si)
5211+                for share in shareset.get_shares():
5212+                    sharedata = share._home.getContent()
5213+                    shares[share.get_shnum()] = sharedata
5214+
5215+                fileutil.fp_remove(shareset._sharehomedir)
5216                 if shares:
5217                     f.write(' %d: { # client[%d]\n' % (i, i))
5218                     for shnum in sorted(shares.keys()):
5219hunk ./src/allmydata/test/test_download.py 103
5220                                 (shnum, base32.b2a(shares[shnum])))
5221                     f.write('    },\n')
5222             f.write('}\n')
5223-            f.write('\n')
5224 
5225hunk ./src/allmydata/test/test_download.py 104
5226+        def _created_immutable(ur):
5227+            # write the generated shares and URI to a file, which can then be
5228+            # incorporated into this one next time.
5229+            f.write('immutable_uri = "%s"\n' % ur.uri)
5230+            f.write('immutable_shares = {\n')
5231+            _write_py(ur.uri)
5232+            f.write('\n')
5233         d.addCallback(_created_immutable)
5234 
5235         d.addCallback(lambda ignored:
5236hunk ./src/allmydata/test/test_download.py 118
5237         def _created_mutable(n):
5238             f.write('mutable_uri = "%s"\n' % n.get_uri())
5239             f.write('mutable_shares = {\n')
5240-            si = uri.from_string(n.get_uri()).get_storage_index()
5241-            si_dir = storage_index_to_dir(si)
5242-            for (i,ss,ssdir) in self.iterate_servers():
5243-                sharedir = os.path.join(ssdir, "shares", si_dir)
5244-                shares = {}
5245-                for fn in os.listdir(sharedir):
5246-                    shnum = int(fn)
5247-                    sharedata = open(os.path.join(sharedir, fn), "rb").read()
5248-                    shares[shnum] = sharedata
5249-                fileutil.rm_dir(sharedir)
5250-                if shares:
5251-                    f.write(' %d: { # client[%d]\n' % (i, i))
5252-                    for shnum in sorted(shares.keys()):
5253-                        f.write('  %d: base32.a2b("%s"),\n' %
5254-                                (shnum, base32.b2a(shares[shnum])))
5255-                    f.write('    },\n')
5256-            f.write('}\n')
5257-
5258-            f.close()
5259+            _write_py(n.get_uri())
5260         d.addCallback(_created_mutable)
5261 
5262         def _done(ignored):
5263hunk ./src/allmydata/test/test_download.py 123
5264             f.close()
5265-        d.addCallback(_done)
5266+        d.addBoth(_done)
5267 
5268         return d
5269 
5270hunk ./src/allmydata/test/test_download.py 127
5271+    def _write_shares(self, uri, shares):
5272+    def _write_shares(self, u, shares):
5273+        si = uri.from_string(u).get_storage_index()
5274+            shares_for_server = shares[i]
5275+            for shnum in shares_for_server:
5276+                share_dir = self.get_server(i).backend.get_shareset(si)._sharehomedir
5277+                fileutil.fp_make_dirs(share_dir)
5278+                share_dir.child(str(shnum)).setContent(shares_for_server[shnum])
5279+
5280     def load_shares(self, ignored=None):
5281         # this uses the data generated by create_shares() to populate the
5282         # storage servers with pre-generated shares
5283hunk ./src/allmydata/test/test_download.py 139
5284-        si = uri.from_string(immutable_uri).get_storage_index()
5285-        si_dir = storage_index_to_dir(si)
5286-        for i in immutable_shares:
5287-            shares = immutable_shares[i]
5288-            for shnum in shares:
5289-                dn = os.path.join(self.get_serverdir(i), "shares", si_dir)
5290-                fileutil.make_dirs(dn)
5291-                fn = os.path.join(dn, str(shnum))
5292-                f = open(fn, "wb")
5293-                f.write(shares[shnum])
5294-                f.close()
5295-
5296-        si = uri.from_string(mutable_uri).get_storage_index()
5297-        si_dir = storage_index_to_dir(si)
5298-        for i in mutable_shares:
5299-            shares = mutable_shares[i]
5300-            for shnum in shares:
5301-                dn = os.path.join(self.get_serverdir(i), "shares", si_dir)
5302-                fileutil.make_dirs(dn)
5303-                fn = os.path.join(dn, str(shnum))
5304-                f = open(fn, "wb")
5305-                f.write(shares[shnum])
5306-                f.close()
5307+        self._write_shares(immutable_uri, immutable_shares)
5308+        self._write_shares(mutable_uri, mutable_shares)
5309 
5310     def download_immutable(self, ignored=None):
5311         n = self.c0.create_node_from_uri(immutable_uri)
5312hunk ./src/allmydata/test/test_download.py 183
5313 
5314         self.load_shares()
5315         si = uri.from_string(immutable_uri).get_storage_index()
5316-        si_dir = storage_index_to_dir(si)
5317 
5318         n = self.c0.create_node_from_uri(immutable_uri)
5319         d = download_to_data(n)
5320hunk ./src/allmydata/test/test_download.py 198
5321                 for clientnum in immutable_shares:
5322                     for shnum in immutable_shares[clientnum]:
5323                         if s._shnum == shnum:
5324-                            fn = os.path.join(self.get_serverdir(clientnum),
5325-                                              "shares", si_dir, str(shnum))
5326-                            os.unlink(fn)
5327+                            share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5328+                            share_dir.child(str(shnum)).remove()
5329         d.addCallback(_clobber_some_shares)
5330         d.addCallback(lambda ign: download_to_data(n))
5331         d.addCallback(_got_data)
5332hunk ./src/allmydata/test/test_download.py 212
5333                 for shnum in immutable_shares[clientnum]:
5334                     if shnum == save_me:
5335                         continue
5336-                    fn = os.path.join(self.get_serverdir(clientnum),
5337-                                      "shares", si_dir, str(shnum))
5338-                    if os.path.exists(fn):
5339-                        os.unlink(fn)
5340+                    share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5341+                    fileutil.fp_remove(share_dir.child(str(shnum)))
5342             # now the download should fail with NotEnoughSharesError
5343             return self.shouldFail(NotEnoughSharesError, "1shares", None,
5344                                    download_to_data, n)
5345hunk ./src/allmydata/test/test_download.py 223
5346             # delete the last remaining share
5347             for clientnum in immutable_shares:
5348                 for shnum in immutable_shares[clientnum]:
5349-                    fn = os.path.join(self.get_serverdir(clientnum),
5350-                                      "shares", si_dir, str(shnum))
5351-                    if os.path.exists(fn):
5352-                        os.unlink(fn)
5353+                    share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5354+                    share_dir.child(str(shnum)).remove()
5355             # now a new download should fail with NoSharesError. We want a
5356             # new ImmutableFileNode so it will forget about the old shares.
5357             # If we merely called create_node_from_uri() without first
5358hunk ./src/allmydata/test/test_download.py 801
5359         # will report two shares, and the ShareFinder will handle the
5360         # duplicate by attaching both to the same CommonShare instance.
5361         si = uri.from_string(immutable_uri).get_storage_index()
5362-        si_dir = storage_index_to_dir(si)
5363-        sh0_file = [sharefile
5364-                    for (shnum, serverid, sharefile)
5365-                    in self.find_uri_shares(immutable_uri)
5366-                    if shnum == 0][0]
5367-        sh0_data = open(sh0_file, "rb").read()
5368+        sh0_fp = [sharefp for (shnum, serverid, sharefp)
5369+                          in self.find_uri_shares(immutable_uri)
5370+                          if shnum == 0][0]
5371+        sh0_data = sh0_fp.getContent()
5372         for clientnum in immutable_shares:
5373             if 0 in immutable_shares[clientnum]:
5374                 continue
5375hunk ./src/allmydata/test/test_download.py 808
5376-            cdir = self.get_serverdir(clientnum)
5377-            target = os.path.join(cdir, "shares", si_dir, "0")
5378-            outf = open(target, "wb")
5379-            outf.write(sh0_data)
5380-            outf.close()
5381+            cdir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5382+            fileutil.fp_make_dirs(cdir)
5383+            cdir.child("0").setContent(sh0_data)
5384 
5385         d = self.download_immutable()
5386         return d
5387hunk ./src/allmydata/test/test_encode.py 134
5388         d.addCallback(_try)
5389         return d
5390 
5391-    def get_share_hashes(self, at_least_these=()):
5392+    def get_share_hashes(self):
5393         d = self._start()
5394         def _try(unused=None):
5395             if self.mode == "bad sharehash":
5396hunk ./src/allmydata/test/test_hung_server.py 3
5397 # -*- coding: utf-8 -*-
5398 
5399-import os, shutil
5400 from twisted.trial import unittest
5401 from twisted.internet import defer
5402hunk ./src/allmydata/test/test_hung_server.py 5
5403-from allmydata import uri
5404+
5405 from allmydata.util.consumer import download_to_data
5406 from allmydata.immutable import upload
5407 from allmydata.mutable.common import UnrecoverableFileError
5408hunk ./src/allmydata/test/test_hung_server.py 10
5409 from allmydata.mutable.publish import MutableData
5410-from allmydata.storage.common import storage_index_to_dir
5411 from allmydata.test.no_network import GridTestMixin
5412 from allmydata.test.common import ShouldFailMixin
5413 from allmydata.util.pollmixin import PollMixin
5414hunk ./src/allmydata/test/test_hung_server.py 18
5415 immutable_plaintext = "data" * 10000
5416 mutable_plaintext = "muta" * 10000
5417 
5418+
5419 class HungServerDownloadTest(GridTestMixin, ShouldFailMixin, PollMixin,
5420                              unittest.TestCase):
5421     # Many of these tests take around 60 seconds on François's ARM buildslave:
5422hunk ./src/allmydata/test/test_hung_server.py 31
5423     timeout = 240
5424 
5425     def _break(self, servers):
5426-        for (id, ss) in servers:
5427-            self.g.break_server(id)
5428+        for ss in servers:
5429+            self.g.break_server(ss.get_serverid())
5430 
5431     def _hang(self, servers, **kwargs):
5432hunk ./src/allmydata/test/test_hung_server.py 35
5433-        for (id, ss) in servers:
5434-            self.g.hang_server(id, **kwargs)
5435+        for ss in servers:
5436+            self.g.hang_server(ss.get_serverid(), **kwargs)
5437 
5438     def _unhang(self, servers, **kwargs):
5439hunk ./src/allmydata/test/test_hung_server.py 39
5440-        for (id, ss) in servers:
5441-            self.g.unhang_server(id, **kwargs)
5442+        for ss in servers:
5443+            self.g.unhang_server(ss.get_serverid(), **kwargs)
5444 
5445     def _hang_shares(self, shnums, **kwargs):
5446         # hang all servers who are holding the given shares
5447hunk ./src/allmydata/test/test_hung_server.py 52
5448                     hung_serverids.add(i_serverid)
5449 
5450     def _delete_all_shares_from(self, servers):
5451-        serverids = [id for (id, ss) in servers]
5452-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5453+        serverids = [ss.get_serverid() for ss in servers]
5454+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5455             if i_serverid in serverids:
5456hunk ./src/allmydata/test/test_hung_server.py 55
5457-                os.unlink(i_sharefile)
5458+                i_sharefp.remove()
5459 
5460     def _corrupt_all_shares_in(self, servers, corruptor_func):
5461hunk ./src/allmydata/test/test_hung_server.py 58
5462-        serverids = [id for (id, ss) in servers]
5463-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5464+        serverids = [ss.get_serverid() for ss in servers]
5465+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5466             if i_serverid in serverids:
5467hunk ./src/allmydata/test/test_hung_server.py 61
5468-                self._corrupt_share((i_shnum, i_sharefile), corruptor_func)
5469+                self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor_func)
5470 
5471     def _copy_all_shares_from(self, from_servers, to_server):
5472hunk ./src/allmydata/test/test_hung_server.py 64
5473-        serverids = [id for (id, ss) in from_servers]
5474-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5475+        serverids = [ss.get_serverid() for ss in from_servers]
5476+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5477             if i_serverid in serverids:
5478hunk ./src/allmydata/test/test_hung_server.py 67
5479-                self._copy_share((i_shnum, i_sharefile), to_server)
5480+                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
5481 
5482hunk ./src/allmydata/test/test_hung_server.py 69
5483-    def _copy_share(self, share, to_server):
5484-        (sharenum, sharefile) = share
5485-        (id, ss) = to_server
5486-        shares_dir = os.path.join(ss.original.storedir, "shares")
5487-        si = uri.from_string(self.uri).get_storage_index()
5488-        si_dir = os.path.join(shares_dir, storage_index_to_dir(si))
5489-        if not os.path.exists(si_dir):
5490-            os.makedirs(si_dir)
5491-        new_sharefile = os.path.join(si_dir, str(sharenum))
5492-        shutil.copy(sharefile, new_sharefile)
5493         self.shares = self.find_uri_shares(self.uri)
5494hunk ./src/allmydata/test/test_hung_server.py 70
5495-        # Make sure that the storage server has the share.
5496-        self.failUnless((sharenum, ss.original.my_nodeid, new_sharefile)
5497-                        in self.shares)
5498-
5499-    def _corrupt_share(self, share, corruptor_func):
5500-        (sharenum, sharefile) = share
5501-        data = open(sharefile, "rb").read()
5502-        newdata = corruptor_func(data)
5503-        os.unlink(sharefile)
5504-        wf = open(sharefile, "wb")
5505-        wf.write(newdata)
5506-        wf.close()
5507 
5508     def _set_up(self, mutable, testdir, num_clients=1, num_servers=10):
5509         self.mutable = mutable
5510hunk ./src/allmydata/test/test_hung_server.py 82
5511 
5512         self.c0 = self.g.clients[0]
5513         nm = self.c0.nodemaker
5514-        self.servers = sorted([(s.get_serverid(), s.get_rref())
5515-                               for s in nm.storage_broker.get_connected_servers()])
5516+        unsorted = [(s.get_serverid(), s.get_rref()) for s in nm.storage_broker.get_connected_servers()]
5517+        self.servers = [ss for (id, ss) in sorted(unsorted)]
5518         self.servers = self.servers[5:] + self.servers[:5]
5519 
5520         if mutable:
5521hunk ./src/allmydata/test/test_hung_server.py 244
5522             # stuck-but-not-overdue, and 4 live requests. All 4 live requests
5523             # will retire before the download is complete and the ShareFinder
5524             # is shut off. That will leave 4 OVERDUE and 1
5525-            # stuck-but-not-overdue, for a total of 5 requests in in
5526+            # stuck-but-not-overdue, for a total of 5 requests in
5527             # _sf.pending_requests
5528             for t in self._sf.overdue_timers.values()[:4]:
5529                 t.reset(-1.0)
5530hunk ./src/allmydata/test/test_mutable.py 21
5531 from foolscap.api import eventually, fireEventually
5532 from foolscap.logging import log
5533 from allmydata.storage_client import StorageFarmBroker
5534-from allmydata.storage.common import storage_index_to_dir
5535 from allmydata.scripts import debug
5536 
5537 from allmydata.mutable.filenode import MutableFileNode, BackoffAgent
5538hunk ./src/allmydata/test/test_mutable.py 3669
5539         # Now execute each assignment by writing the storage.
5540         for (share, servernum) in assignments:
5541             sharedata = base64.b64decode(self.sdmf_old_shares[share])
5542-            storedir = self.get_serverdir(servernum)
5543-            storage_path = os.path.join(storedir, "shares",
5544-                                        storage_index_to_dir(si))
5545-            fileutil.make_dirs(storage_path)
5546-            fileutil.write(os.path.join(storage_path, "%d" % share),
5547-                           sharedata)
5548+            storage_dir = self.get_server(servernum).backend.get_shareset(si).sharehomedir
5549+            fileutil.fp_make_dirs(storage_dir)
5550+            storage_dir.child("%d" % share).setContent(sharedata)
5551         # ...and verify that the shares are there.
5552         shares = self.find_uri_shares(self.sdmf_old_cap)
5553         assert len(shares) == 10
5554hunk ./src/allmydata/test/test_provisioning.py 13
5555 from nevow import inevow
5556 from zope.interface import implements
5557 
5558-class MyRequest:
5559+class MockRequest:
5560     implements(inevow.IRequest)
5561     pass
5562 
5563hunk ./src/allmydata/test/test_provisioning.py 26
5564     def test_load(self):
5565         pt = provisioning.ProvisioningTool()
5566         self.fields = {}
5567-        #r = MyRequest()
5568+        #r = MockRequest()
5569         #r.fields = self.fields
5570         #ctx = RequestContext()
5571         #unfilled = pt.renderSynchronously(ctx)
5572hunk ./src/allmydata/test/test_repairer.py 537
5573         # happiness setting.
5574         def _delete_some_servers(ignored):
5575             for i in xrange(7):
5576-                self.g.remove_server(self.g.servers_by_number[i].my_nodeid)
5577+                self.remove_server(i)
5578 
5579             assert len(self.g.servers_by_number) == 3
5580 
5581hunk ./src/allmydata/test/test_storage.py 14
5582 from allmydata import interfaces
5583 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
5584 from allmydata.storage.server import StorageServer
5585-from allmydata.storage.mutable import MutableShareFile
5586-from allmydata.storage.immutable import BucketWriter, BucketReader
5587-from allmydata.storage.common import DataTooLargeError, storage_index_to_dir, \
5588+from allmydata.storage.backends.disk.mutable import MutableDiskShare
5589+from allmydata.storage.bucket import BucketWriter, BucketReader
5590+from allmydata.storage.common import DataTooLargeError, \
5591      UnknownMutableContainerVersionError, UnknownImmutableContainerVersionError
5592 from allmydata.storage.lease import LeaseInfo
5593 from allmydata.storage.crawler import BucketCountingCrawler
5594hunk ./src/allmydata/test/test_storage.py 474
5595         w[0].remote_write(0, "\xff"*10)
5596         w[0].remote_close()
5597 
5598-        fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0")
5599-        f = open(fn, "rb+")
5600+        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
5601+        f = fp.open("rb+")
5602         f.seek(0)
5603         f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1
5604         f.close()
5605hunk ./src/allmydata/test/test_storage.py 814
5606     def test_bad_magic(self):
5607         ss = self.create("test_bad_magic")
5608         self.allocate(ss, "si1", "we1", self._lease_secret.next(), set([0]), 10)
5609-        fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0")
5610-        f = open(fn, "rb+")
5611+        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
5612+        f = fp.open("rb+")
5613         f.seek(0)
5614         f.write("BAD MAGIC")
5615         f.close()
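
This open/seek/write/close dance for clobbering a share header recurs several times in this file. A small helper would capture it; nothing like this exists in the patch, it is only a suggested refactoring:

    def _overwrite_share_prefix(sharefp, prefix):
        # sharefp is a FilePath; overwrite the first len(prefix) bytes in place.
        f = sharefp.open("rb+")
        try:
            f.seek(0)
            f.write(prefix)
        finally:
            f.close()

    # e.g.:
    # _overwrite_share_prefix(ss.backend.get_shareset("si1").sharehomedir.child("0"),
    #                         "BAD MAGIC")
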
5616hunk ./src/allmydata/test/test_storage.py 842
5617 
5618         # Trying to make the container too large (by sending a write vector
5619         # whose offset is too high) will raise an exception.
5620-        TOOBIG = MutableShareFile.MAX_SIZE + 10
5621+        TOOBIG = MutableDiskShare.MAX_SIZE + 10
5622         self.failUnlessRaises(DataTooLargeError,
5623                               rstaraw, "si1", secrets,
5624                               {0: ([], [(TOOBIG,data)], None)},
5625hunk ./src/allmydata/test/test_storage.py 1229
5626 
5627         # create a random non-numeric file in the bucket directory, to
5628         # exercise the code that's supposed to ignore those.
5629-        bucket_dir = os.path.join(self.workdir("test_leases"),
5630-                                  "shares", storage_index_to_dir("si1"))
5631-        f = open(os.path.join(bucket_dir, "ignore_me.txt"), "w")
5632-        f.write("you ought to be ignoring me\n")
5633-        f.close()
5634+        bucket_dir = ss.backend.get_shareset("si1").sharehomedir
5635+        bucket_dir.child("ignore_me.txt").setContent("you ought to be ignoring me\n")
5636 
5637hunk ./src/allmydata/test/test_storage.py 1232
5638-        s0 = MutableShareFile(os.path.join(bucket_dir, "0"))
5639+        s0 = MutableDiskShare(bucket_dir.child("0"))
5640         self.failUnlessEqual(len(list(s0.get_leases())), 1)
5641 
5642         # add-lease on a missing storage index is silently ignored
5643hunk ./src/allmydata/test/test_storage.py 3118
5644         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
5645 
5646         # add a non-sharefile to exercise another code path
5647-        fn = os.path.join(ss.sharedir,
5648-                          storage_index_to_dir(immutable_si_0),
5649-                          "not-a-share")
5650-        f = open(fn, "wb")
5651-        f.write("I am not a share.\n")
5652-        f.close()
5653+        fp = ss.backend.get_shareset(immutable_si_0).sharehomedir.child("not-a-share")
5654+        fp.setContent("I am not a share.\n")
5655 
5656         # this is before the crawl has started, so we're not in a cycle yet
5657         initial_state = lc.get_state()
5658hunk ./src/allmydata/test/test_storage.py 3282
5659     def test_expire_age(self):
5660         basedir = "storage/LeaseCrawler/expire_age"
5661         fileutil.make_dirs(basedir)
5662-        # setting expiration_time to 2000 means that any lease which is more
5663-        # than 2000s old will be expired.
5664-        ss = InstrumentedStorageServer(basedir, "\x00" * 20,
5665-                                       expiration_enabled=True,
5666-                                       expiration_mode="age",
5667-                                       expiration_override_lease_duration=2000)
5668+        # setting 'override_lease_duration' to 2000 means that any lease that
5669+        # is more than 2000 seconds old will be expired.
5670+        expiration_policy = {
5671+            'enabled': True,
5672+            'mode': 'age',
5673+            'override_lease_duration': 2000,
5674+            'sharetypes': ('mutable', 'immutable'),
5675+        }
5676+        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
5677         # make it start sooner than usual.
5678         lc = ss.lease_checker
5679         lc.slow_start = 0
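
The old expiration_* keyword arguments collapse into a single expiration_policy dict. The keys used across these tests (not necessarily the full schema) are: 'enabled' (bool), 'mode' ('age' or 'cutoff-date'), 'override_lease_duration' (seconds, 'age' mode only), 'cutoff_date' (epoch seconds, 'cutoff-date' mode only), and 'sharetypes' (a tuple drawn from 'mutable' and 'immutable'). For example:

    import time

    expiration_policy = {
        'enabled': True,                         # master switch for lease expiry
        'mode': 'cutoff-date',                   # or 'age'
        'cutoff_date': int(time.time() - 2000),  # expire leases older than this
        'sharetypes': ('immutable',),            # leave mutable shares alone
    }
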
5680hunk ./src/allmydata/test/test_storage.py 3423
5681     def test_expire_cutoff_date(self):
5682         basedir = "storage/LeaseCrawler/expire_cutoff_date"
5683         fileutil.make_dirs(basedir)
5684-        # setting cutoff-date to 2000 seconds ago means that any lease which
5685-        # is more than 2000s old will be expired.
5686+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5687+        # is more than 2000 seconds old will be expired.
5688         now = time.time()
5689         then = int(now - 2000)
5690hunk ./src/allmydata/test/test_storage.py 3427
5691-        ss = InstrumentedStorageServer(basedir, "\x00" * 20,
5692-                                       expiration_enabled=True,
5693-                                       expiration_mode="cutoff-date",
5694-                                       expiration_cutoff_date=then)
5695+        expiration_policy = {
5696+            'enabled': True,
5697+            'mode': 'cutoff-date',
5698+            'cutoff_date': then,
5699+            'sharetypes': ('mutable', 'immutable'),
5700+        }
5701+        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
5702         # make it start sooner than usual.
5703         lc = ss.lease_checker
5704         lc.slow_start = 0
5705hunk ./src/allmydata/test/test_storage.py 3575
5706     def test_only_immutable(self):
5707         basedir = "storage/LeaseCrawler/only_immutable"
5708         fileutil.make_dirs(basedir)
5709+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5710+        # is more than 2000 seconds old will be expired.
5711         now = time.time()
5712         then = int(now - 2000)
5713hunk ./src/allmydata/test/test_storage.py 3579
5714-        ss = StorageServer(basedir, "\x00" * 20,
5715-                           expiration_enabled=True,
5716-                           expiration_mode="cutoff-date",
5717-                           expiration_cutoff_date=then,
5718-                           expiration_sharetypes=("immutable",))
5719+        expiration_policy = {
5720+            'enabled': True,
5721+            'mode': 'cutoff-date',
5722+            'cutoff_date': then,
5723+            'sharetypes': ('immutable',),
5724+        }
5725+        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
5726         lc = ss.lease_checker
5727         lc.slow_start = 0
5728         webstatus = StorageStatus(ss)
5729hunk ./src/allmydata/test/test_storage.py 3636
5730     def test_only_mutable(self):
5731         basedir = "storage/LeaseCrawler/only_mutable"
5732         fileutil.make_dirs(basedir)
5733+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5734+        # is more than 2000 seconds old will be expired.
5735         now = time.time()
5736         then = int(now - 2000)
5737hunk ./src/allmydata/test/test_storage.py 3640
5738-        ss = StorageServer(basedir, "\x00" * 20,
5739-                           expiration_enabled=True,
5740-                           expiration_mode="cutoff-date",
5741-                           expiration_cutoff_date=then,
5742-                           expiration_sharetypes=("mutable",))
5743+        expiration_policy = {
5744+            'enabled': True,
5745+            'mode': 'cutoff-date',
5746+            'cutoff_date': then,
5747+            'sharetypes': ('mutable',),
5748+        }
5749+        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
5750         lc = ss.lease_checker
5751         lc.slow_start = 0
5752         webstatus = StorageStatus(ss)
5753hunk ./src/allmydata/test/test_storage.py 3819
5754     def test_no_st_blocks(self):
5755         basedir = "storage/LeaseCrawler/no_st_blocks"
5756         fileutil.make_dirs(basedir)
5757-        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20,
5758-                                        expiration_mode="age",
5759-                                        expiration_override_lease_duration=-1000)
5760-        # a negative expiration_time= means the "configured-"
5761+        # A negative 'override_lease_duration' means that the "configured-"
5762         # space-recovered counts will be non-zero, since all shares will have
5763hunk ./src/allmydata/test/test_storage.py 3821
5764-        # expired by then
5765+        # expired by then.
5766+        expiration_policy = {
5767+            'enabled': True,
5768+            'mode': 'age',
5769+            'override_lease_duration': -1000,
5770+            'sharetypes': ('mutable', 'immutable'),
5771+        }
5772+        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20, expiration_policy)
5773 
5774         # make it start sooner than usual.
5775         lc = ss.lease_checker
5776hunk ./src/allmydata/test/test_storage.py 3877
5777         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
5778         first = min(self.sis)
5779         first_b32 = base32.b2a(first)
5780-        fn = os.path.join(ss.sharedir, storage_index_to_dir(first), "0")
5781-        f = open(fn, "rb+")
5782+        fp = ss.backend.get_shareset(first).sharehomedir.child("0")
5783+        f = fp.open("rb+")
5784         f.seek(0)
5785         f.write("BAD MAGIC")
5786         f.close()
5787hunk ./src/allmydata/test/test_storage.py 3890
5788 
5789         # also create an empty bucket
5790         empty_si = base32.b2a("\x04"*16)
5791-        empty_bucket_dir = os.path.join(ss.sharedir,
5792-                                        storage_index_to_dir(empty_si))
5793-        fileutil.make_dirs(empty_bucket_dir)
5794+        empty_bucket_dir = ss.backend.get_shareset(empty_si).sharehomedir
5795+        fileutil.fp_make_dirs(empty_bucket_dir)
5796 
5797         ss.setServiceParent(self.s)
5798 
5799hunk ./src/allmydata/test/test_system.py 10
5800 
5801 import allmydata
5802 from allmydata import uri
5803-from allmydata.storage.mutable import MutableShareFile
5804+from allmydata.storage.backends.disk.mutable import MutableDiskShare
5805 from allmydata.storage.server import si_a2b
5806 from allmydata.immutable import offloaded, upload
5807 from allmydata.immutable.literal import LiteralFileNode
5808hunk ./src/allmydata/test/test_system.py 421
5809         return shares
5810 
5811     def _corrupt_mutable_share(self, filename, which):
5812-        msf = MutableShareFile(filename)
5813+        msf = MutableDiskShare(filename)
5814         datav = msf.readv([ (0, 1000000) ])
5815         final_share = datav[0]
5816         assert len(final_share) < 1000000 # ought to be truncated
5817hunk ./src/allmydata/test/test_upload.py 22
5818 from allmydata.util.happinessutil import servers_of_happiness, \
5819                                          shares_by_server, merge_servers
5820 from allmydata.storage_client import StorageFarmBroker
5821-from allmydata.storage.server import storage_index_to_dir
5822 
5823 MiB = 1024*1024
5824 
5825hunk ./src/allmydata/test/test_upload.py 821
5826 
5827     def _copy_share_to_server(self, share_number, server_number):
5828         ss = self.g.servers_by_number[server_number]
5829-        # Copy share i from the directory associated with the first
5830-        # storage server to the directory associated with this one.
5831-        assert self.g, "I tried to find a grid at self.g, but failed"
5832-        assert self.shares, "I tried to find shares at self.shares, but failed"
5833-        old_share_location = self.shares[share_number][2]
5834-        new_share_location = os.path.join(ss.storedir, "shares")
5835-        si = uri.from_string(self.uri).get_storage_index()
5836-        new_share_location = os.path.join(new_share_location,
5837-                                          storage_index_to_dir(si))
5838-        if not os.path.exists(new_share_location):
5839-            os.makedirs(new_share_location)
5840-        new_share_location = os.path.join(new_share_location,
5841-                                          str(share_number))
5842-        if old_share_location != new_share_location:
5843-            shutil.copy(old_share_location, new_share_location)
5844-        shares = self.find_uri_shares(self.uri)
5845-        # Make sure that the storage server has the share.
5846-        self.failUnless((share_number, ss.my_nodeid, new_share_location)
5847-                        in shares)
5848+        self.copy_share(self.shares[share_number], self.uri, ss)
5849 
5850     def _setup_grid(self):
5851         """
5852hunk ./src/allmydata/test/test_upload.py 1103
5853                 self._copy_share_to_server(i, 2)
5854         d.addCallback(_copy_shares)
5855         # Remove the first server, and add a placeholder with share 0
5856-        d.addCallback(lambda ign:
5857-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5858+        d.addCallback(lambda ign: self.remove_server(0))
5859         d.addCallback(lambda ign:
5860             self._add_server_with_share(server_number=4, share_number=0))
5861         # Now try uploading.
5862hunk ./src/allmydata/test/test_upload.py 1134
5863         d.addCallback(lambda ign:
5864             self._add_server(server_number=4))
5865         d.addCallback(_copy_shares)
5866-        d.addCallback(lambda ign:
5867-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5868+        d.addCallback(lambda ign: self.remove_server(0))
5869         d.addCallback(_reset_encoding_parameters)
5870         d.addCallback(lambda client:
5871             client.upload(upload.Data("data" * 10000, convergence="")))
5872hunk ./src/allmydata/test/test_upload.py 1196
5873                 self._copy_share_to_server(i, 2)
5874         d.addCallback(_copy_shares)
5875         # Remove server 0, and add another in its place
5876-        d.addCallback(lambda ign:
5877-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5878+        d.addCallback(lambda ign: self.remove_server(0))
5879         d.addCallback(lambda ign:
5880             self._add_server_with_share(server_number=4, share_number=0,
5881                                         readonly=True))
5882hunk ./src/allmydata/test/test_upload.py 1237
5883             for i in xrange(1, 10):
5884                 self._copy_share_to_server(i, 2)
5885         d.addCallback(_copy_shares)
5886-        d.addCallback(lambda ign:
5887-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5888+        d.addCallback(lambda ign: self.remove_server(0))
5889         def _reset_encoding_parameters(ign, happy=4):
5890             client = self.g.clients[0]
5891             client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
5892hunk ./src/allmydata/test/test_upload.py 1273
5893         # remove the original server
5894         # (necessary to ensure that the Tahoe2ServerSelector will distribute
5895         #  all the shares)
5896-        def _remove_server(ign):
5897-            server = self.g.servers_by_number[0]
5898-            self.g.remove_server(server.my_nodeid)
5899-        d.addCallback(_remove_server)
5900+        d.addCallback(lambda ign: self.remove_server(0))
5901         # This should succeed; we still have 4 servers, and the
5902         # happiness of the upload is 4.
5903         d.addCallback(lambda ign:
5904hunk ./src/allmydata/test/test_upload.py 1285
5905         d.addCallback(lambda ign:
5906             self._setup_and_upload())
5907         d.addCallback(_do_server_setup)
5908-        d.addCallback(_remove_server)
5909+        d.addCallback(lambda ign: self.remove_server(0))
5910         d.addCallback(lambda ign:
5911             self.shouldFail(UploadUnhappinessError,
5912                             "test_dropped_servers_in_encoder",
5913hunk ./src/allmydata/test/test_upload.py 1307
5914             self._add_server_with_share(4, 7, readonly=True)
5915             self._add_server_with_share(5, 8, readonly=True)
5916         d.addCallback(_do_server_setup_2)
5917-        d.addCallback(_remove_server)
5918+        d.addCallback(lambda ign: self.remove_server(0))
5919         d.addCallback(lambda ign:
5920             self._do_upload_with_broken_servers(1))
5921         d.addCallback(_set_basedir)
5922hunk ./src/allmydata/test/test_upload.py 1314
5923         d.addCallback(lambda ign:
5924             self._setup_and_upload())
5925         d.addCallback(_do_server_setup_2)
5926-        d.addCallback(_remove_server)
5927+        d.addCallback(lambda ign: self.remove_server(0))
5928         d.addCallback(lambda ign:
5929             self.shouldFail(UploadUnhappinessError,
5930                             "test_dropped_servers_in_encoder",
5931hunk ./src/allmydata/test/test_upload.py 1528
5932             for i in xrange(1, 10):
5933                 self._copy_share_to_server(i, 1)
5934         d.addCallback(_copy_shares)
5935-        d.addCallback(lambda ign:
5936-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5937+        d.addCallback(lambda ign: self.remove_server(0))
5938         def _prepare_client(ign):
5939             client = self.g.clients[0]
5940             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
5941hunk ./src/allmydata/test/test_upload.py 1550
5942         def _setup(ign):
5943             for i in xrange(1, 11):
5944                 self._add_server(server_number=i)
5945-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5946+            self.remove_server(0)
5947             c = self.g.clients[0]
5948             # We set happy to an unsatisfiable value so that we can check the
5949             # counting in the exception message. The same progress message
5950hunk ./src/allmydata/test/test_upload.py 1577
5951                 self._add_server(server_number=i)
5952             self._add_server(server_number=11, readonly=True)
5953             self._add_server(server_number=12, readonly=True)
5954-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5955+            self.remove_server(0)
5956             c = self.g.clients[0]
5957             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
5958             return c
5959hunk ./src/allmydata/test/test_upload.py 1605
5960             # the first one that the selector sees.
5961             for i in xrange(10):
5962                 self._copy_share_to_server(i, 9)
5963-            # Remove server 0, and its contents
5964-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5965+            self.remove_server(0)
5966             # Make happiness unsatisfiable
5967             c = self.g.clients[0]
5968             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
5969hunk ./src/allmydata/test/test_upload.py 1625
5970         def _then(ign):
5971             for i in xrange(1, 11):
5972                 self._add_server(server_number=i, readonly=True)
5973-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5974+            self.remove_server(0)
5975             c = self.g.clients[0]
5976             c.DEFAULT_ENCODING_PARAMETERS['k'] = 2
5977             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
5978hunk ./src/allmydata/test/test_upload.py 1661
5979             self._add_server(server_number=4, readonly=True))
5980         d.addCallback(lambda ign:
5981             self._add_server(server_number=5, readonly=True))
5982-        d.addCallback(lambda ign:
5983-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5984+        d.addCallback(lambda ign: self.remove_server(0))
5985         def _reset_encoding_parameters(ign, happy=4):
5986             client = self.g.clients[0]
5987             client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
5988hunk ./src/allmydata/test/test_upload.py 1696
5989         d.addCallback(lambda ign:
5990             self._add_server(server_number=2))
5991         def _break_server_2(ign):
5992-            serverid = self.g.servers_by_number[2].my_nodeid
5993+            serverid = self.get_server(2).get_serverid()
5994             self.g.break_server(serverid)
5995         d.addCallback(_break_server_2)
5996         d.addCallback(lambda ign:
5997hunk ./src/allmydata/test/test_upload.py 1705
5998             self._add_server(server_number=4, readonly=True))
5999         d.addCallback(lambda ign:
6000             self._add_server(server_number=5, readonly=True))
6001-        d.addCallback(lambda ign:
6002-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
6003+        d.addCallback(lambda ign: self.remove_server(0))
6004         d.addCallback(_reset_encoding_parameters)
6005         d.addCallback(lambda client:
6006             self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
6007hunk ./src/allmydata/test/test_upload.py 1816
6008             # Copy shares
6009             self._copy_share_to_server(1, 1)
6010             self._copy_share_to_server(2, 1)
6011-            # Remove server 0
6012-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
6013+            self.remove_server(0)
6014             client = self.g.clients[0]
6015             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 3
6016             return client
6017hunk ./src/allmydata/test/test_upload.py 1930
6018                                         readonly=True)
6019             self._add_server_with_share(server_number=4, share_number=3,
6020                                         readonly=True)
6021-            # Remove server 0.
6022-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
6023+            self.remove_server(0)
6024             # Set the client appropriately
6025             c = self.g.clients[0]
6026             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
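
The hunks above replace direct pokes at the grid's servers_by_number map
with small helpers on the test mixin. A sketch consistent with the helpers
in no_network.py later in this patch (remove_server appears verbatim there;
get_server is inferred from the surrounding context):

    def get_server(self, i):
        return self.g.servers_by_number[i]

    def remove_server(self, i):
        # translate a small integer index into the server's id, then
        # remove that server from the simulated grid
        self.g.remove_server(self.g.servers_by_number[i].get_serverid())
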
6027hunk ./src/allmydata/test/test_util.py 9
6028 from twisted.trial import unittest
6029 from twisted.internet import defer, reactor
6030 from twisted.python.failure import Failure
6031+from twisted.python.filepath import FilePath
6032 from twisted.python import log
6033 from pycryptopp.hash.sha256 import SHA256 as _hash
6034 
6035hunk ./src/allmydata/test/test_util.py 508
6036                 os.chdir(saved_cwd)
6037 
6038     def test_disk_stats(self):
6039-        avail = fileutil.get_available_space('.', 2**14)
6040+        avail = fileutil.get_available_space(FilePath('.'), 2**14)
6041         if avail == 0:
6042             raise unittest.SkipTest("This test will spuriously fail if there is no disk space left.")
6043 
6044hunk ./src/allmydata/test/test_util.py 512
6045-        disk = fileutil.get_disk_stats('.', 2**13)
6046+        disk = fileutil.get_disk_stats(FilePath('.'), 2**13)
6047         self.failUnless(disk['total'] > 0, disk['total'])
6048         self.failUnless(disk['used'] > 0, disk['used'])
6049         self.failUnless(disk['free_for_root'] > 0, disk['free_for_root'])
6050hunk ./src/allmydata/test/test_util.py 521
6051 
6052     def test_disk_stats_avail_nonnegative(self):
6053         # This test will spuriously fail if you have more than 2^128
6054-        # bytes of available space on your filesystem.
6055-        disk = fileutil.get_disk_stats('.', 2**128)
6056+        # bytes of available space on your filesystem (lucky you).
6057+        disk = fileutil.get_disk_stats(FilePath('.'), 2**128)
6058         self.failUnlessEqual(disk['avail'], 0)
6059 
6060 class PollMixinTests(unittest.TestCase):
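
These hunks track the fileutil API change: get_available_space and
get_disk_stats now take a Twisted FilePath instead of a path string (a
precondition later in this patch enforces it). A short usage sketch:

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    stats = fileutil.get_disk_stats(FilePath('.'), reserved_space=2**13)
    avail = fileutil.get_available_space(FilePath('.'), reserved_space=2**14)
    # stats is a dict with 'total', 'used', 'free_for_root',
    # 'free_for_nonroot', and 'avail' keys; avail is a byte count or None
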
6061hunk ./src/allmydata/test/test_web.py 12
6062 from twisted.python import failure, log
6063 from nevow import rend
6064 from allmydata import interfaces, uri, webish, dirnode
6065-from allmydata.storage.shares import get_share_file
6066 from allmydata.storage_client import StorageFarmBroker
6067 from allmydata.immutable import upload
6068 from allmydata.immutable.downloader.status import DownloadStatus
6069hunk ./src/allmydata/test/test_web.py 4111
6070             good_shares = self.find_uri_shares(self.uris["good"])
6071             self.failUnlessReallyEqual(len(good_shares), 10)
6072             sick_shares = self.find_uri_shares(self.uris["sick"])
6073-            os.unlink(sick_shares[0][2])
6074+            sick_shares[0][2].remove()
6075             dead_shares = self.find_uri_shares(self.uris["dead"])
6076             for i in range(1, 10):
6077hunk ./src/allmydata/test/test_web.py 4114
6078-                os.unlink(dead_shares[i][2])
6079+                dead_shares[i][2].remove()
6080             c_shares = self.find_uri_shares(self.uris["corrupt"])
6081             cso = CorruptShareOptions()
6082             cso.stdout = StringIO()
6083hunk ./src/allmydata/test/test_web.py 4118
6084-            cso.parseOptions([c_shares[0][2]])
6085+            cso.parseOptions([c_shares[0][2].path])
6086             corrupt_share(cso)
6087         d.addCallback(_clobber_shares)
6088 
6089hunk ./src/allmydata/test/test_web.py 4253
6090             good_shares = self.find_uri_shares(self.uris["good"])
6091             self.failUnlessReallyEqual(len(good_shares), 10)
6092             sick_shares = self.find_uri_shares(self.uris["sick"])
6093-            os.unlink(sick_shares[0][2])
6094+            sick_shares[0][2].remove()
6095             dead_shares = self.find_uri_shares(self.uris["dead"])
6096             for i in range(1, 10):
6097hunk ./src/allmydata/test/test_web.py 4256
6098-                os.unlink(dead_shares[i][2])
6099+                dead_shares[i][2].remove()
6100             c_shares = self.find_uri_shares(self.uris["corrupt"])
6101             cso = CorruptShareOptions()
6102             cso.stdout = StringIO()
6103hunk ./src/allmydata/test/test_web.py 4260
6104-            cso.parseOptions([c_shares[0][2]])
6105+            cso.parseOptions([c_shares[0][2].path])
6106             corrupt_share(cso)
6107         d.addCallback(_clobber_shares)
6108 
6109hunk ./src/allmydata/test/test_web.py 4319
6110 
6111         def _clobber_shares(ignored):
6112             sick_shares = self.find_uri_shares(self.uris["sick"])
6113-            os.unlink(sick_shares[0][2])
6114+            sick_shares[0][2].remove()
6115         d.addCallback(_clobber_shares)
6116 
6117         d.addCallback(self.CHECK, "sick", "t=check&repair=true&output=json")
6118hunk ./src/allmydata/test/test_web.py 4811
6119             good_shares = self.find_uri_shares(self.uris["good"])
6120             self.failUnlessReallyEqual(len(good_shares), 10)
6121             sick_shares = self.find_uri_shares(self.uris["sick"])
6122-            os.unlink(sick_shares[0][2])
6123+            sick_shares[0][2].remove()
6124             #dead_shares = self.find_uri_shares(self.uris["dead"])
6125             #for i in range(1, 10):
6126hunk ./src/allmydata/test/test_web.py 4814
6127-            #    os.unlink(dead_shares[i][2])
6128+            #    dead_shares[i][2].remove()
6129 
6130             #c_shares = self.find_uri_shares(self.uris["corrupt"])
6131             #cso = CorruptShareOptions()
6132hunk ./src/allmydata/test/test_web.py 4819
6133             #cso.stdout = StringIO()
6134-            #cso.parseOptions([c_shares[0][2]])
6135+            #cso.parseOptions([c_shares[0][2].path])
6136             #corrupt_share(cso)
6137         d.addCallback(_clobber_shares)
6138 
6139hunk ./src/allmydata/test/test_web.py 4870
6140         d.addErrback(self.explain_web_error)
6141         return d
6142 
6143-    def _count_leases(self, ignored, which):
6144-        u = self.uris[which]
6145-        shares = self.find_uri_shares(u)
6146-        lease_counts = []
6147-        for shnum, serverid, fn in shares:
6148-            sf = get_share_file(fn)
6149-            num_leases = len(list(sf.get_leases()))
6150-            lease_counts.append( (fn, num_leases) )
6151-        return lease_counts
6152-
6153-    def _assert_leasecount(self, lease_counts, expected):
6154+    def _assert_leasecount(self, ignored, which, expected):
6155+        lease_counts = self.count_leases(self.uris[which])
6156         for (fn, num_leases) in lease_counts:
6157             if num_leases != expected:
6158                 self.fail("expected %d leases, have %d, on %s" %
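
The refactoring above folds the old _count_leases step into
_assert_leasecount itself, leaning on the fact that Deferred.addCallback
passes extra positional arguments through to the callback. A generic
sketch of that idiom (names here are illustrative):

    from twisted.internet import defer

    def check(ignored, which, expected):
        # 'ignored' is the previous callback's result; 'which' and
        # 'expected' arrive via addCallback's extra arguments
        print which, expected

    d = defer.succeed(None)
    d.addCallback(check, "one", 1)
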
6159hunk ./src/allmydata/test/test_web.py 4903
6160                 self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
6161         d.addCallback(_compute_fileurls)
6162 
6163-        d.addCallback(self._count_leases, "one")
6164-        d.addCallback(self._assert_leasecount, 1)
6165-        d.addCallback(self._count_leases, "two")
6166-        d.addCallback(self._assert_leasecount, 1)
6167-        d.addCallback(self._count_leases, "mutable")
6168-        d.addCallback(self._assert_leasecount, 1)
6169+        d.addCallback(self._assert_leasecount, "one", 1)
6170+        d.addCallback(self._assert_leasecount, "two", 1)
6171+        d.addCallback(self._assert_leasecount, "mutable", 1)
6172 
6173         d.addCallback(self.CHECK, "one", "t=check") # no add-lease
6174         def _got_html_good(res):
6175hunk ./src/allmydata/test/test_web.py 4913
6176             self.failIf("Not Healthy" in res, res)
6177         d.addCallback(_got_html_good)
6178 
6179-        d.addCallback(self._count_leases, "one")
6180-        d.addCallback(self._assert_leasecount, 1)
6181-        d.addCallback(self._count_leases, "two")
6182-        d.addCallback(self._assert_leasecount, 1)
6183-        d.addCallback(self._count_leases, "mutable")
6184-        d.addCallback(self._assert_leasecount, 1)
6185+        d.addCallback(self._assert_leasecount, "one", 1)
6186+        d.addCallback(self._assert_leasecount, "two", 1)
6187+        d.addCallback(self._assert_leasecount, "mutable", 1)
6188 
6189         # this CHECK uses the original client, which uses the same
6190         # lease-secrets, so it will just renew the original lease
6191hunk ./src/allmydata/test/test_web.py 4922
6192         d.addCallback(self.CHECK, "one", "t=check&add-lease=true")
6193         d.addCallback(_got_html_good)
6194 
6195-        d.addCallback(self._count_leases, "one")
6196-        d.addCallback(self._assert_leasecount, 1)
6197-        d.addCallback(self._count_leases, "two")
6198-        d.addCallback(self._assert_leasecount, 1)
6199-        d.addCallback(self._count_leases, "mutable")
6200-        d.addCallback(self._assert_leasecount, 1)
6201+        d.addCallback(self._assert_leasecount, "one", 1)
6202+        d.addCallback(self._assert_leasecount, "two", 1)
6203+        d.addCallback(self._assert_leasecount, "mutable", 1)
6204 
6205         # this CHECK uses an alternate client, which adds a second lease
6206         d.addCallback(self.CHECK, "one", "t=check&add-lease=true", clientnum=1)
6207hunk ./src/allmydata/test/test_web.py 4930
6208         d.addCallback(_got_html_good)
6209 
6210-        d.addCallback(self._count_leases, "one")
6211-        d.addCallback(self._assert_leasecount, 2)
6212-        d.addCallback(self._count_leases, "two")
6213-        d.addCallback(self._assert_leasecount, 1)
6214-        d.addCallback(self._count_leases, "mutable")
6215-        d.addCallback(self._assert_leasecount, 1)
6216+        d.addCallback(self._assert_leasecount, "one", 2)
6217+        d.addCallback(self._assert_leasecount, "two", 1)
6218+        d.addCallback(self._assert_leasecount, "mutable", 1)
6219 
6220         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true")
6221         d.addCallback(_got_html_good)
6222hunk ./src/allmydata/test/test_web.py 4937
6223 
6224-        d.addCallback(self._count_leases, "one")
6225-        d.addCallback(self._assert_leasecount, 2)
6226-        d.addCallback(self._count_leases, "two")
6227-        d.addCallback(self._assert_leasecount, 1)
6228-        d.addCallback(self._count_leases, "mutable")
6229-        d.addCallback(self._assert_leasecount, 1)
6230+        d.addCallback(self._assert_leasecount, "one", 2)
6231+        d.addCallback(self._assert_leasecount, "two", 1)
6232+        d.addCallback(self._assert_leasecount, "mutable", 1)
6233 
6234         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true",
6235                       clientnum=1)
6236hunk ./src/allmydata/test/test_web.py 4945
6237         d.addCallback(_got_html_good)
6238 
6239-        d.addCallback(self._count_leases, "one")
6240-        d.addCallback(self._assert_leasecount, 2)
6241-        d.addCallback(self._count_leases, "two")
6242-        d.addCallback(self._assert_leasecount, 1)
6243-        d.addCallback(self._count_leases, "mutable")
6244-        d.addCallback(self._assert_leasecount, 2)
6245+        d.addCallback(self._assert_leasecount, "one", 2)
6246+        d.addCallback(self._assert_leasecount, "two", 1)
6247+        d.addCallback(self._assert_leasecount, "mutable", 2)
6248 
6249         d.addErrback(self.explain_web_error)
6250         return d
6251hunk ./src/allmydata/test/test_web.py 4989
6252             self.failUnlessReallyEqual(len(units), 4+1)
6253         d.addCallback(_done)
6254 
6255-        d.addCallback(self._count_leases, "root")
6256-        d.addCallback(self._assert_leasecount, 1)
6257-        d.addCallback(self._count_leases, "one")
6258-        d.addCallback(self._assert_leasecount, 1)
6259-        d.addCallback(self._count_leases, "mutable")
6260-        d.addCallback(self._assert_leasecount, 1)
6261+        d.addCallback(self._assert_leasecount, "root", 1)
6262+        d.addCallback(self._assert_leasecount, "one", 1)
6263+        d.addCallback(self._assert_leasecount, "mutable", 1)
6264 
6265         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true")
6266         d.addCallback(_done)
6267hunk ./src/allmydata/test/test_web.py 4996
6268 
6269-        d.addCallback(self._count_leases, "root")
6270-        d.addCallback(self._assert_leasecount, 1)
6271-        d.addCallback(self._count_leases, "one")
6272-        d.addCallback(self._assert_leasecount, 1)
6273-        d.addCallback(self._count_leases, "mutable")
6274-        d.addCallback(self._assert_leasecount, 1)
6275+        d.addCallback(self._assert_leasecount, "root", 1)
6276+        d.addCallback(self._assert_leasecount, "one", 1)
6277+        d.addCallback(self._assert_leasecount, "mutable", 1)
6278 
6279         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true",
6280                       clientnum=1)
6281hunk ./src/allmydata/test/test_web.py 5004
6282         d.addCallback(_done)
6283 
6284-        d.addCallback(self._count_leases, "root")
6285-        d.addCallback(self._assert_leasecount, 2)
6286-        d.addCallback(self._count_leases, "one")
6287-        d.addCallback(self._assert_leasecount, 2)
6288-        d.addCallback(self._count_leases, "mutable")
6289-        d.addCallback(self._assert_leasecount, 2)
6290+        d.addCallback(self._assert_leasecount, "root", 2)
6291+        d.addCallback(self._assert_leasecount, "one", 2)
6292+        d.addCallback(self._assert_leasecount, "mutable", 2)
6293 
6294         d.addErrback(self.explain_web_error)
6295         return d
6296merger 0.0 (
6297hunk ./src/allmydata/uri.py 829
6298+    def is_readonly(self):
6299+        return True
6300+
6301+    def get_readonly(self):
6302+        return self
6303+
6304+
6305hunk ./src/allmydata/uri.py 829
6306+    def is_readonly(self):
6307+        return True
6308+
6309+    def get_readonly(self):
6310+        return self
6311+
6312+
6313)
6314merger 0.0 (
6315hunk ./src/allmydata/uri.py 848
6316+    def is_readonly(self):
6317+        return True
6318+
6319+    def get_readonly(self):
6320+        return self
6321+
6322hunk ./src/allmydata/uri.py 848
6323+    def is_readonly(self):
6324+        return True
6325+
6326+    def get_readonly(self):
6327+        return self
6328+
6329)
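
The two 'merger' entries above are darcs conflict records: the same hunk,
adding is_readonly and get_readonly to two URI classes, arrived along two
lines of development, and the bundle keeps both copies for darcs to
reconcile. The methods themselves encode the usual cap convention that a
read-only cap is its own read-only form; as an illustrative check (not
patch code):

    def check_readonly_cap(u):
        # for a read-only URI object, get_readonly() returns the
        # object itself
        if u.is_readonly():
            assert u.get_readonly() is u
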
6330hunk ./src/allmydata/util/encodingutil.py 221
6331 def quote_path(path, quotemarks=True):
6332     return quote_output("/".join(map(to_str, path)), quotemarks=quotemarks)
6333 
6334+def quote_filepath(fp, quotemarks=True, encoding=None):
6335+    path = fp.path
6336+    if isinstance(path, str):
6337+        try:
6338+            path = path.decode(filesystem_encoding)
6339+        except UnicodeDecodeError:
6340+            return 'b"%s"' % (ESCAPABLE_8BIT.sub(_str_escape, path),)
6341+
6342+    return quote_output(path, quotemarks=quotemarks, encoding=encoding)
6343+
6344 
6345 def unicode_platform():
6346     """
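
quote_filepath extends quote_output to FilePath objects: byte paths are
decoded with the filesystem encoding, falling back to an escaped b"..."
form when decoding fails. A usage sketch (the path is hypothetical):

    from twisted.python.filepath import FilePath
    from allmydata.util.encodingutil import quote_filepath

    fp = FilePath('/tmp/example')
    print quote_filepath(fp)                    # quoted form of the path
    print quote_filepath(fp, quotemarks=False)  # without surrounding quotes
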
6347hunk ./src/allmydata/util/fileutil.py 5
6348 Futz with files like a pro.
6349 """
6350 
6351-import sys, exceptions, os, stat, tempfile, time, binascii
6352+import errno, sys, exceptions, os, stat, tempfile, time, binascii
6353+
6354+from allmydata.util.assertutil import precondition
6355 
6356 from twisted.python import log
6357hunk ./src/allmydata/util/fileutil.py 10
6358+from twisted.python.filepath import FilePath, UnlistableError
6359 
6360 from pycryptopp.cipher.aes import AES
6361 
6362hunk ./src/allmydata/util/fileutil.py 189
6363             raise tx
6364         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
6365 
6366-def rm_dir(dirname):
6367+def fp_make_dirs(dirfp):
6368+    """
6369+    An idempotent version of FilePath.makedirs().  If the dir already
6370+    exists, do nothing and return without raising an exception.  If this
6371+    call creates the dir, return without raising an exception.  If there is
6372+    an error that prevents creation or if the directory gets deleted after
6373+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
6374+    exists, raise an exception.
6375+    """
6377+    tx = None
6378+    try:
6379+        dirfp.makedirs()
6380+    except OSError, x:
6381+        tx = x
6382+
6383+    if not dirfp.isdir():
6384+        if tx:
6385+            raise tx
6386+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
6387+
6388+def fp_rmdir_if_empty(dirfp):
6389+    """ Remove the directory if it is empty. """
6390+    try:
6391+        os.rmdir(dirfp.path)
6392+    except OSError, e:
6393+        if e.errno != errno.ENOTEMPTY:
6394+            raise
6395+    else:
6396+        dirfp.changed()
6397+
6398+def rmtree(dirname):
6399     """
6400     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
6401     already gone, do nothing and return without raising an exception.  If this
6402hunk ./src/allmydata/util/fileutil.py 239
6403             else:
6404                 remove(fullname)
6405         os.rmdir(dirname)
6406-    except Exception, le:
6407-        # Ignore "No such file or directory"
6408-        if (not isinstance(le, OSError)) or le.args[0] != 2:
6409+    except EnvironmentError, le:
6410+        # Ignore "No such file or directory", collect any other exception.
6411+        if le.args[0] != errno.ENOENT:
6412             excs.append(le)
6413hunk ./src/allmydata/util/fileutil.py 243
6414+    except Exception, le:
6415+        excs.append(le)
6416 
6417     # Okay, now we've recursively removed everything, ignoring any "No
6418     # such file or directory" errors, and collecting any other errors.
6419hunk ./src/allmydata/util/fileutil.py 256
6420             raise OSError, "Failed to remove dir for unknown reason."
6421         raise OSError, excs
6422 
6423+def fp_remove(fp):
6424+    """
6425+    An idempotent version of shutil.rmtree().  If the file/dir is already
6426+    gone, do nothing and return without raising an exception.  If this call
6427+    removes the file/dir, return without raising an exception.  If there is
6428+    an error that prevents removal, or if a file or directory at the same
6429+    path gets created again by someone else after this deletes it and before
6430+    this checks that it is gone, raise an exception.
6431+    """
6432+    try:
6433+        fp.remove()
6434+    except UnlistableError, e:
6435+        if e.originalException.errno != errno.ENOENT:
6436+            raise
6437+    except OSError, e:
6438+        if e.errno != errno.ENOENT:
6439+            raise
6440+
6441+def rm_dir(dirname):
6442+    # Renamed to be like shutil.rmtree and unlike rmdir.
6443+    return rmtree(dirname)
6444 
6445 def remove_if_possible(f):
6446     try:
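
Taken together, fp_make_dirs, fp_rmdir_if_empty, and fp_remove are the
FilePath counterparts of the existing string-path helpers. A short sketch
of how they behave (the directory name is hypothetical):

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    fp = FilePath('/tmp/example-dir')
    fileutil.fp_make_dirs(fp)       # idempotent mkdir -p
    fileutil.fp_rmdir_if_empty(fp)  # removes it only if empty (ENOTEMPTY
                                    # is swallowed, anything else raises)
    fileutil.fp_remove(fp)          # recursive removal; already-gone is
                                    # not an error
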
6447hunk ./src/allmydata/util/fileutil.py 387
6448         import traceback
6449         traceback.print_exc()
6450 
6451-def get_disk_stats(whichdir, reserved_space=0):
6452+def get_disk_stats(whichdirfp, reserved_space=0):
6453     """Return disk statistics for the storage disk, in the form of a dict
6454     with the following fields.
6455       total:            total bytes on disk
6456hunk ./src/allmydata/util/fileutil.py 408
6457     you can pass how many bytes you would like to leave unused on this
6458     filesystem as reserved_space.
6459     """
6460+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
6461 
6462     if have_GetDiskFreeSpaceExW:
6463         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
6464hunk ./src/allmydata/util/fileutil.py 419
6465         n_free_for_nonroot = c_ulonglong(0)
6466         n_total            = c_ulonglong(0)
6467         n_free_for_root    = c_ulonglong(0)
6468-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
6469-                                               byref(n_total),
6470-                                               byref(n_free_for_root))
6471+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
6472+                                                      byref(n_total),
6473+                                                      byref(n_free_for_root))
6474         if retval == 0:
6475             raise OSError("Windows error %d attempting to get disk statistics for %r"
6476hunk ./src/allmydata/util/fileutil.py 424
6477-                          % (GetLastError(), whichdir))
6478+                          % (GetLastError(), whichdirfp.path))
6479         free_for_nonroot = n_free_for_nonroot.value
6480         total            = n_total.value
6481         free_for_root    = n_free_for_root.value
6482hunk ./src/allmydata/util/fileutil.py 433
6483         # <http://docs.python.org/library/os.html#os.statvfs>
6484         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
6485         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
6486-        s = os.statvfs(whichdir)
6487+        s = os.statvfs(whichdirfp.path)
6488 
6489         # on my mac laptop:
6490         #  statvfs(2) is a wrapper around statfs(2).
6491hunk ./src/allmydata/util/fileutil.py 460
6492              'avail': avail,
6493            }
6494 
6495-def get_available_space(whichdir, reserved_space):
6496+def get_available_space(whichdirfp, reserved_space):
6497     """Returns available space for share storage in bytes, or None if no
6498     API to get this information is available.
6499 
6500hunk ./src/allmydata/util/fileutil.py 472
6501     you can pass how many bytes you would like to leave unused on this
6502     filesystem as reserved_space.
6503     """
6504+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
6505     try:
6506hunk ./src/allmydata/util/fileutil.py 474
6507-        return get_disk_stats(whichdir, reserved_space)['avail']
6508+        return get_disk_stats(whichdirfp, reserved_space)['avail']
6509     except AttributeError:
6510         return None
6511hunk ./src/allmydata/util/fileutil.py 477
6512-    except EnvironmentError:
6513-        log.msg("OS call to get disk statistics failed")
6514+
6515+
6516+def get_used_space(fp):
6517+    if fp is None:
6518         return 0
6519hunk ./src/allmydata/util/fileutil.py 482
6520+    try:
6521+        s = os.stat(fp.path)
6522+    except EnvironmentError:
6523+        if not fp.exists():
6524+            return 0
6525+        raise
6526+    else:
6527+        # POSIX defines st_blocks (originally a BSDism):
6528+        #   <http://pubs.opengroup.org/onlinepubs/009695399/basedefs/sys/stat.h.html>
6529+        # but does not require stat() to give it a "meaningful value"
6530+        #   <http://pubs.opengroup.org/onlinepubs/009695399/functions/stat.html>
6531+        # and says:
6532+        #   "The unit for the st_blocks member of the stat structure is not defined
6533+        #    within IEEE Std 1003.1-2001. In some implementations it is 512 bytes.
6534+        #    It may differ on a file system basis. There is no correlation between
6535+        #    values of the st_blocks and st_blksize, and the f_bsize (from <sys/statvfs.h>)
6536+        #    structure members."
6537+        #
6538+        # The Linux docs define it as "the number of blocks allocated to the file,
6539+        # [in] 512-byte units." It is also defined that way on MacOS X. Python does
6540+        # not set the attribute on Windows.
6541+        #
6542+        # We consider platforms that define st_blocks but give it a wrong value, or
6543+        # measure it in a unit other than 512 bytes, to be broken. See also
6544+        # <http://bugs.python.org/issue12350>.
6545+
6546+        if hasattr(s, 'st_blocks'):
6547+            return s.st_blocks * 512
6548+        else:
6549+            return s.st_size
6550}
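
The st_blocks commentary above is why get_used_space prefers allocated
blocks over logical size: a sparse file can occupy far less disk than its
length suggests. A minimal POSIX-only illustration, under the same
512-bytes-per-block assumption stated in the comment (the path is
hypothetical):

    import os

    s = os.stat("some-share-file")
    logical  = s.st_size                         # apparent file length
    physical = getattr(s, 'st_blocks', 0) * 512  # bytes actually allocated
    # for a sparse file, physical can be much smaller than logical
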
6551[Work-in-progress, includes fix to bug involving BucketWriter. refs #999
6552david-sarah@jacaranda.org**20110920033803
6553 Ignore-this: 64e9e019421454e4d08141d10b6e4eed
6554] {
6555hunk ./src/allmydata/client.py 9
6556 from twisted.internet import reactor, defer
6557 from twisted.application import service
6558 from twisted.application.internet import TimerService
6559+from twisted.python.filepath import FilePath
6560 from foolscap.api import Referenceable
6561 from pycryptopp.publickey import rsa
6562 
6563hunk ./src/allmydata/client.py 15
6564 import allmydata
6565 from allmydata.storage.server import StorageServer
6566+from allmydata.storage.backends.disk.disk_backend import DiskBackend
6567 from allmydata import storage_client
6568 from allmydata.immutable.upload import Uploader
6569 from allmydata.immutable.offloaded import Helper
6570hunk ./src/allmydata/client.py 213
6571             return
6572         readonly = self.get_config("storage", "readonly", False, boolean=True)
6573 
6574-        storedir = os.path.join(self.basedir, self.STOREDIR)
6575+        storedir = FilePath(self.basedir).child(self.STOREDIR)
6576 
6577         data = self.get_config("storage", "reserved_space", None)
6578         reserved = None
6579hunk ./src/allmydata/client.py 255
6580             'cutoff_date': cutoff_date,
6581             'sharetypes': tuple(sharetypes),
6582         }
6583-        ss = StorageServer(storedir, self.nodeid,
6584-                           reserved_space=reserved,
6585-                           discard_storage=discard,
6586-                           readonly_storage=readonly,
6587+
6588+        backend = DiskBackend(storedir, readonly=readonly, reserved_space=reserved,
6589+                              discard_storage=discard)
6590+        ss = StorageServer(nodeid, backend, storedir,
6591                            stats_provider=self.stats_provider,
6592                            expiration_policy=expiration_policy)
6593         self.add_service(ss)
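
Under the pluggable-backends scheme the client builds a backend object
first and hands it to StorageServer, rather than passing disk-specific
options directly. A hedged sketch of the construction order shown above
(basedir and nodeid are assumed to be in scope):

    from twisted.python.filepath import FilePath
    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    storedir = FilePath(basedir).child("storage")
    backend = DiskBackend(storedir, readonly=False, reserved_space=0,
                          discard_storage=False)
    ss = StorageServer(nodeid, backend, storedir)
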
6594hunk ./src/allmydata/interfaces.py 348
6595 
6596     def get_shares():
6597         """
6598-        Generates the IStoredShare objects held in this shareset.
6599+        Generates IStoredShare objects for all completed shares in this shareset.
6600         """
6601 
6602     def has_incoming(shnum):
6603hunk ./src/allmydata/storage/backends/base.py 69
6604         # def _create_mutable_share(self, storageserver, shnum, write_enabler):
6605         #     """create a mutable share with the given shnum and write_enabler"""
6606 
6607-        # secrets might be a triple with cancel_secret in secrets[2], but if
6608-        # so we ignore the cancel_secret.
6609         write_enabler = secrets[0]
6610         renew_secret = secrets[1]
6611hunk ./src/allmydata/storage/backends/base.py 71
6612+        cancel_secret = '\x00'*32
6613+        if len(secrets) > 2:
6614+            cancel_secret = secrets[2]
6615 
6616         si_s = self.get_storage_index_string()
6617         shares = {}
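
The hunk above reinstates use of a third secrets element: cancel_secret,
defaulting to 32 zero bytes when the client supplies only a
(write_enabler, renew_secret) pair. Restated compactly (an equivalent
sketch, not the patch text):

    write_enabler, renew_secret = secrets[0], secrets[1]
    cancel_secret = secrets[2] if len(secrets) > 2 else '\x00'*32
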
6618hunk ./src/allmydata/storage/backends/base.py 110
6619             read_data[shnum] = share.readv(read_vector)
6620 
6621         ownerid = 1 # TODO
6622-        lease_info = LeaseInfo(ownerid, renew_secret,
6623+        lease_info = LeaseInfo(ownerid, renew_secret, cancel_secret,
6624                                expiration_time, storageserver.get_serverid())
6625 
6626         if testv_is_good:
6627hunk ./src/allmydata/storage/backends/disk/disk_backend.py 34
6628     return newfp.child(sia)
6629 
6630 
6631-def get_share(fp):
6632+def get_share(storageindex, shnum, fp):
6633     f = fp.open('rb')
6634     try:
6635         prefix = f.read(32)
6636hunk ./src/allmydata/storage/backends/disk/disk_backend.py 42
6637         f.close()
6638 
6639     if prefix == MutableDiskShare.MAGIC:
6640-        return MutableDiskShare(fp)
6641+        return MutableDiskShare(storageindex, shnum, fp)
6642     else:
6643         # assume it's immutable
6644hunk ./src/allmydata/storage/backends/disk/disk_backend.py 45
6645-        return ImmutableDiskShare(fp)
6646+        return ImmutableDiskShare(storageindex, shnum, fp)
6647 
6648 
6649 class DiskBackend(Backend):
6650hunk ./src/allmydata/storage/backends/disk/disk_backend.py 174
6651                 if not NUM_RE.match(shnumstr):
6652                     continue
6653                 sharehome = self._sharehomedir.child(shnumstr)
6654-                yield self.get_share(sharehome)
6655+                yield get_share(self.get_storage_index(), int(shnumstr), sharehome)
6656         except UnlistableError:
6657             # There is no shares directory at all.
6658             pass
6659hunk ./src/allmydata/storage/backends/disk/disk_backend.py 185
6660         return self._incominghomedir.child(str(shnum)).exists()
6661 
6662     def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
6663-        sharehome = self._sharehomedir.child(str(shnum))
6664+        finalhome = self._sharehomedir.child(str(shnum))
6665         incominghome = self._incominghomedir.child(str(shnum))
6666hunk ./src/allmydata/storage/backends/disk/disk_backend.py 187
6667-        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
6668-                                   max_size=max_space_per_bucket, create=True)
6669+        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, incominghome, finalhome,
6670+                                   max_size=max_space_per_bucket)
6671         bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
6672         if self._discard_storage:
6673             bw.throw_out_all_data = True
6674hunk ./src/allmydata/storage/backends/disk/disk_backend.py 198
6675         fileutil.fp_make_dirs(self._sharehomedir)
6676         sharehome = self._sharehomedir.child(str(shnum))
6677         serverid = storageserver.get_serverid()
6678-        return create_mutable_disk_share(sharehome, serverid, write_enabler, storageserver)
6679+        return create_mutable_disk_share(self.get_storage_index(), shnum, sharehome, serverid, write_enabler, storageserver)
6680 
6681     def _clean_up_after_unlink(self):
6682         fileutil.fp_rmdir_if_empty(self._sharehomedir)
6683hunk ./src/allmydata/storage/backends/disk/immutable.py 48
6684     LEASE_SIZE = struct.calcsize(">L32s32sL")
6685 
6686 
6687-    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
6688-        """ If max_size is not None then I won't allow more than
6689-        max_size to be written to me. If create=True then max_size
6690-        must not be None. """
6691-        precondition((max_size is not None) or (not create), max_size, create)
6692+    def __init__(self, storageindex, shnum, home, finalhome=None, max_size=None):
6693+        """
6694+        If max_size is not None then I won't allow more than max_size to be written to me.
6695+        If finalhome is not None (meaning that we are creating the share) then max_size
6696+        must not be None.
6697+        """
6698+        precondition((max_size is not None) or (finalhome is None), max_size, finalhome)
6699         self._storageindex = storageindex
6700         self._max_size = max_size
6701hunk ./src/allmydata/storage/backends/disk/immutable.py 57
6702-        self._incominghome = incominghome
6703-        self._home = finalhome
6704+
6705+        # If we are creating the share, _finalhome refers to the final path and
6706+        # _home to the incoming path. Otherwise, _finalhome is None.
6707+        self._finalhome = finalhome
6708+        self._home = home
6709         self._shnum = shnum
6710hunk ./src/allmydata/storage/backends/disk/immutable.py 63
6711-        if create:
6712-            # touch the file, so later callers will see that we're working on
6713+
6714+        if self._finalhome is not None:
6715+            # Touch the file, so later callers will see that we're working on
6716             # it. Also construct the metadata.
6717hunk ./src/allmydata/storage/backends/disk/immutable.py 67
6718-            assert not finalhome.exists()
6719-            fp_make_dirs(self._incominghome.parent())
6720+            assert not self._finalhome.exists()
6721+            fp_make_dirs(self._home.parent())
6722             # The second field -- the four-byte share data length -- is no
6723             # longer used as of Tahoe v1.3.0, but we continue to write it in
6724             # there in case someone downgrades a storage server from >=
6725hunk ./src/allmydata/storage/backends/disk/immutable.py 78
6726             # the largest length that can fit into the field. That way, even
6727             # if this does happen, the old < v1.3.0 server will still allow
6728             # clients to read the first part of the share.
6729-            self._incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6730+            self._home.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6731             self._lease_offset = max_size + 0x0c
6732             self._num_leases = 0
6733         else:
6734hunk ./src/allmydata/storage/backends/disk/immutable.py 101
6735                 % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
6736 
6737     def close(self):
6738-        fileutil.fp_make_dirs(self._home.parent())
6739-        self._incominghome.moveTo(self._home)
6740-        try:
6741-            # self._incominghome is like storage/shares/incoming/ab/abcde/4 .
6742-            # We try to delete the parent (.../ab/abcde) to avoid leaving
6743-            # these directories lying around forever, but the delete might
6744-            # fail if we're working on another share for the same storage
6745-            # index (like ab/abcde/5). The alternative approach would be to
6746-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
6747-            # ShareWriter), each of which is responsible for a single
6748-            # directory on disk, and have them use reference counting of
6749-            # their children to know when they should do the rmdir. This
6750-            # approach is simpler, but relies on os.rmdir refusing to delete
6751-            # a non-empty directory. Do *not* use fileutil.fp_remove() here!
6752-            fileutil.fp_rmdir_if_empty(self._incominghome.parent())
6753-            # we also delete the grandparent (prefix) directory, .../ab ,
6754-            # again to avoid leaving directories lying around. This might
6755-            # fail if there is another bucket open that shares a prefix (like
6756-            # ab/abfff).
6757-            fileutil.fp_rmdir_if_empty(self._incominghome.parent().parent())
6758-            # we leave the great-grandparent (incoming/) directory in place.
6759-        except EnvironmentError:
6760-            # ignore the "can't rmdir because the directory is not empty"
6761-            # exceptions, those are normal consequences of the
6762-            # above-mentioned conditions.
6763-            pass
6764-        pass
6765+        fileutil.fp_make_dirs(self._finalhome.parent())
6766+        self._home.moveTo(self._finalhome)
6767+
6768+        # self._home is like storage/shares/incoming/ab/abcde/4 .
6769+        # We try to delete the parent (.../ab/abcde) to avoid leaving
6770+        # these directories lying around forever, but the delete might
6771+        # fail if we're working on another share for the same storage
6772+        # index (like ab/abcde/5). The alternative approach would be to
6773+        # use a hierarchy of objects (PrefixHolder, BucketHolder,
6774+        # ShareWriter), each of which is responsible for a single
6775+        # directory on disk, and have them use reference counting of
6776+        # their children to know when they should do the rmdir. This
6777+        # approach is simpler, but relies on os.rmdir (used by
6778+        # fp_rmdir_if_empty) refusing to delete a non-empty directory.
6779+        # Do *not* use fileutil.fp_remove() here!
6780+        parent = self._home.parent()
6781+        fileutil.fp_rmdir_if_empty(parent)
6782+
6783+        # we also delete the grandparent (prefix) directory, .../ab ,
6784+        # again to avoid leaving directories lying around. This might
6785+        # fail if there is another bucket open that shares a prefix (like
6786+        # ab/abfff).
6787+        fileutil.fp_rmdir_if_empty(parent.parent())
6788+
6789+        # we leave the great-grandparent (incoming/) directory in place.
6790+
6791+        # allow lease changes after closing.
6792+        self._home = self._finalhome
6793+        self._finalhome = None
6794 
6795     def get_used_space(self):
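
The rewritten constructor and close() above give immutable shares a
two-phase lifecycle: while a share is being written, _home points at the
incoming path and _finalhome at the final one; close() moves the file,
prunes the now-empty incoming directories, and re-points _home at the
final location so lease operations keep working afterwards. In outline
(a sketch of the call pattern, with hypothetical variable names):

    # creating: writes land at the incoming path
    share = ImmutableDiskShare(si, shnum, incominghome, finalhome,
                               max_size=allocated)
    # ... data is written to the incoming file ...
    share.close()   # moveTo(final), fp_rmdir_if_empty on the parents,
                    # then _home = _finalhome and _finalhome = None
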
6796hunk ./src/allmydata/storage/backends/disk/immutable.py 132
6797-        return (fileutil.get_used_space(self._home) +
6798-                fileutil.get_used_space(self._incominghome))
6799+        return (fileutil.get_used_space(self._finalhome) +
6800+                fileutil.get_used_space(self._home))
6801 
6802     def get_storage_index(self):
6803         return self._storageindex
6804hunk ./src/allmydata/storage/backends/disk/immutable.py 175
6805         precondition(offset >= 0, offset)
6806         if self._max_size is not None and offset+length > self._max_size:
6807             raise DataTooLargeError(self._max_size, offset, length)
6808-        f = self._incominghome.open(mode='rb+')
6809+        f = self._home.open(mode='rb+')
6810         try:
6811             real_offset = self._data_offset+offset
6812             f.seek(real_offset)
6813hunk ./src/allmydata/storage/backends/disk/immutable.py 205
6814 
6815     # These lease operations are intended for use by disk_backend.py.
6816     # Other clients should not depend on the fact that the disk backend
6817-    # stores leases in share files.
6818+    # stores leases in share files. XXX bucket.py also relies on this.
6819 
6820     def get_leases(self):
6821         """Yields a LeaseInfo instance for all leases."""
6822hunk ./src/allmydata/storage/backends/disk/immutable.py 221
6823             f.close()
6824 
6825     def add_lease(self, lease_info):
6826-        f = self._incominghome.open(mode='rb')
6827+        f = self._home.open(mode='rb+')
6828         try:
6829             num_leases = self._read_num_leases(f)
6830hunk ./src/allmydata/storage/backends/disk/immutable.py 224
6831-        finally:
6832-            f.close()
6833-        f = self._home.open(mode='wb+')
6834-        try:
6835             self._write_lease_record(f, num_leases, lease_info)
6836             self._write_num_leases(f, num_leases+1)
6837         finally:
6838hunk ./src/allmydata/storage/backends/disk/mutable.py 440
6839         pass
6840 
6841 
6842-def create_mutable_disk_share(fp, serverid, write_enabler, parent):
6843-    ms = MutableDiskShare(fp, parent)
6844+def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent):
6845+    ms = MutableDiskShare(storageindex, shnum, fp, parent)
6846     ms.create(serverid, write_enabler)
6847     del ms
6848hunk ./src/allmydata/storage/backends/disk/mutable.py 444
6849-    return MutableDiskShare(fp, parent)
6850+    return MutableDiskShare(storageindex, shnum, fp, parent)
6851hunk ./src/allmydata/storage/bucket.py 44
6852         start = time.time()
6853 
6854         self._share.close()
6855-        filelen = self._share.stat()
6856+        # XXX should this be self._share.get_used_space() ?
6857+        consumed_size = self._share.get_size()
6858         self._share = None
6859 
6860         self.closed = True
6861hunk ./src/allmydata/storage/bucket.py 51
6862         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
6863 
6864-        self.ss.bucket_writer_closed(self, filelen)
6865+        self.ss.bucket_writer_closed(self, consumed_size)
6866         self.ss.add_latency("close", time.time() - start)
6867         self.ss.count("close")
6868 
6869hunk ./src/allmydata/storage/server.py 182
6870                                 renew_secret, cancel_secret,
6871                                 sharenums, allocated_size,
6872                                 canary, owner_num=0):
6873-        # cancel_secret is no longer used.
6874         # owner_num is not for clients to set, but rather it should be
6875         # curried into a StorageServer instance dedicated to a particular
6876         # owner.
6877hunk ./src/allmydata/storage/server.py 195
6878         # Note that the lease should not be added until the BucketWriter
6879         # has been closed.
6880         expire_time = time.time() + 31*24*60*60
6881-        lease_info = LeaseInfo(owner_num, renew_secret,
6882+        lease_info = LeaseInfo(owner_num, renew_secret, cancel_secret,
6883                                expire_time, self._serverid)
6884 
6885         max_space_per_bucket = allocated_size
6886hunk ./src/allmydata/test/no_network.py 349
6887         return self.g.servers_by_number[i]
6888 
6889     def get_serverdir(self, i):
6890-        return self.g.servers_by_number[i].backend.storedir
6891+        return self.g.servers_by_number[i].backend._storedir
6892 
6893     def remove_server(self, i):
6894         self.g.remove_server(self.g.servers_by_number[i].get_serverid())
6895hunk ./src/allmydata/test/no_network.py 357
6896     def iterate_servers(self):
6897         for i in sorted(self.g.servers_by_number.keys()):
6898             ss = self.g.servers_by_number[i]
6899-            yield (i, ss, ss.backend.storedir)
6900+            yield (i, ss, ss.backend._storedir)
6901 
6902     def find_uri_shares(self, uri):
6903         si = tahoe_uri.from_string(uri).get_storage_index()
6904hunk ./src/allmydata/test/no_network.py 384
6905         return shares
6906 
6907     def copy_share(self, from_share, uri, to_server):
6908-        si = uri.from_string(self.uri).get_storage_index()
6909+        si = tahoe_uri.from_string(uri).get_storage_index()
6910         (i_shnum, i_serverid, i_sharefp) = from_share
6911         shares_dir = to_server.backend.get_shareset(si)._sharehomedir
6912         i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
6913hunk ./src/allmydata/test/test_download.py 127
6914 
6915         return d
6916 
6917-    def _write_shares(self, uri, shares):
6918-        si = uri.from_string(uri).get_storage_index()
6919+    def _write_shares(self, fileuri, shares):
6920+        si = uri.from_string(fileuri).get_storage_index()
6921         for i in shares:
6922             shares_for_server = shares[i]
6923             for shnum in shares_for_server:
6924hunk ./src/allmydata/test/test_hung_server.py 36
6925 
6926     def _hang(self, servers, **kwargs):
6927         for ss in servers:
6928-            self.g.hang_server(ss.get_serverid(), **kwargs)
6929+            self.g.hang_server(ss.original.get_serverid(), **kwargs)
6930 
6931     def _unhang(self, servers, **kwargs):
6932         for ss in servers:
6933hunk ./src/allmydata/test/test_hung_server.py 40
6934-            self.g.unhang_server(ss.get_serverid(), **kwargs)
6935+            self.g.unhang_server(ss.original.get_serverid(), **kwargs)
6936 
6937     def _hang_shares(self, shnums, **kwargs):
6938         # hang all servers who are holding the given shares
6939hunk ./src/allmydata/test/test_hung_server.py 52
6940                     hung_serverids.add(i_serverid)
6941 
6942     def _delete_all_shares_from(self, servers):
6943-        serverids = [ss.get_serverid() for ss in servers]
6944+        serverids = [ss.original.get_serverid() for ss in servers]
6945         for (i_shnum, i_serverid, i_sharefp) in self.shares:
6946             if i_serverid in serverids:
6947                 i_sharefp.remove()
6948hunk ./src/allmydata/test/test_hung_server.py 58
6949 
6950     def _corrupt_all_shares_in(self, servers, corruptor_func):
6951-        serverids = [ss.get_serverid() for ss in servers]
6952+        serverids = [ss.original.get_serverid() for ss in servers]
6953         for (i_shnum, i_serverid, i_sharefp) in self.shares:
6954             if i_serverid in serverids:
6955                 self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor_func)
6956hunk ./src/allmydata/test/test_hung_server.py 64
6957 
6958     def _copy_all_shares_from(self, from_servers, to_server):
6959-        serverids = [ss.get_serverid() for ss in from_servers]
6960+        serverids = [ss.original.get_serverid() for ss in from_servers]
6961         for (i_shnum, i_serverid, i_sharefp) in self.shares:
6962             if i_serverid in serverids:
6963                 self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
6964hunk ./src/allmydata/test/test_mutable.py 2990
6965             fso = debug.FindSharesOptions()
6966             storage_index = base32.b2a(n.get_storage_index())
6967             fso.si_s = storage_index
6968-            fso.nodedirs = [unicode(os.path.dirname(os.path.abspath(storedir)))
6969+            fso.nodedirs = [unicode(storedir.parent().path)
6970                             for (i,ss,storedir)
6971                             in self.iterate_servers()]
6972             fso.stdout = StringIO()
6973hunk ./src/allmydata/test/test_upload.py 818
6974         if share_number is not None:
6975             self._copy_share_to_server(share_number, server_number)
6976 
6977-
6978     def _copy_share_to_server(self, share_number, server_number):
6979         ss = self.g.servers_by_number[server_number]
6980hunk ./src/allmydata/test/test_upload.py 820
6981-        self.copy_share(self.shares[share_number], ss)
6982+        self.copy_share(self.shares[share_number], self.uri, ss)
6983 
6984     def _setup_grid(self):
6985         """
6986}
6987[docs/backends: document the configuration options for the pluggable backends scheme. refs #999
6988david-sarah@jacaranda.org**20110920171737
6989 Ignore-this: 5947e864682a43cb04e557334cda7c19
6990] {
6991adddir ./docs/backends
6992addfile ./docs/backends/S3.rst
6993hunk ./docs/backends/S3.rst 1
6994+====================================================
6995+Storing Shares in Amazon Simple Storage Service (S3)
6996+====================================================
6997+
6998+S3 is a commercial storage service provided by Amazon, described at
6999+`<https://aws.amazon.com/s3/>`_.
7000+
7001+The Tahoe-LAFS storage server can be configured to store its shares in
7003+an S3 bucket, rather than on the local filesystem. To enable this, add the
7003+following keys to the server's ``tahoe.cfg`` file:
7004+
7005+``[storage]``
7006+
7007+``backend = s3``
7008+
7009+    This turns off the local filesystem backend and enables use of S3.
7010+
7011+``s3.access_key_id = (string, required)``
7012+``s3.secret_access_key = (string, required)``
7013+
7014+    These two settings give the storage server permission to access your Amazon
7015+    Web Services account, allowing it to upload and download shares
7016+    from S3.
7017+
7018+``s3.bucket = (string, required)``
7019+
7020+    This controls which bucket will be used to hold shares. The Tahoe-LAFS
7021+    storage server will only modify and access objects in the configured S3
7022+    bucket.
7023+
7024+``s3.url = (URL string, optional)``
7025+
7026+    This URL tells the storage server how to access the S3 service. It
7027+    defaults to ``http://s3.amazonaws.com``, but by setting it to something
7028+    else, you may be able to use some other S3-like service if it is
7029+    sufficiently compatible.
7030+
7031+``s3.max_space = (str, optional)``
7032+
7033+    This tells the server to limit how much space can be used in the S3
7034+    bucket. Before each share is uploaded, the server will ask S3 for the
7035+    current bucket usage, and will only accept the share if it does not cause
7036+    the usage to grow above this limit.
7037+
7038+    The string contains a number, with an optional case-insensitive scale
7039+    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
7040+    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
7041+    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
7042+    thing.
7043+
7044+    If ``s3.max_space`` is omitted, the default behavior is to allow
7045+    unlimited usage.
7046+
7047+
7048+Once configured, the WUI "storage server" page will provide information about
7049+how much space is being used and how many shares are being stored.
7050+
7051+
7052+Issues
7053+------
7054+
7055+Objects in an S3 bucket cannot be read for free. As a result, when Tahoe-LAFS
7056+is configured to store shares in S3 rather than on local disk, some common
7057+operations may behave differently:
7058+
7059+* Lease crawling/expiration is not yet implemented. As a result, shares will
7060+  be retained forever, and the Storage Server status web page will not show
7061+  information about the number of mutable/immutable shares present.
7062+
7063+* Enabling ``s3.max_space`` causes an extra S3 usage query to be sent for
7064+  each share upload, causing the upload process to run slightly slower and
7065+  incur more S3 request charges.
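
A hypothetical tahoe.cfg fragment combining the settings above (the key
names are as documented; the values are placeholders):

    [storage]
    backend = s3
    s3.access_key_id = YOUR-ACCESS-KEY-ID
    s3.secret_access_key = YOUR-SECRET-ACCESS-KEY
    s3.bucket = example-tahoe-shares
    s3.max_space = 100G
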
7066addfile ./docs/backends/disk.rst
7067hunk ./docs/backends/disk.rst 1
7068+====================================
7069+Storing Shares on a Local Filesystem
7070+====================================
7071+
7072+The "disk" backend stores shares on the local filesystem. Versions of
7073+Tahoe-LAFS <= 1.9.0 always stored shares in this way.
7074+
7075+``[storage]``
7076+
7077+``backend = disk``
7078+
7079+    This enables use of the disk backend, and is the default.
7080+
7081+``reserved_space = (str, optional)``
7082+
7083+    If provided, this value defines how much disk space is reserved: the
7084+    storage server will not accept any share that causes the amount of free
7085+    disk space to drop below this value. (The free space is measured by a
7086+    call to statvfs(2) on Unix, or GetDiskFreeSpaceEx on Windows, and is the
7087+    space available to the user account under which the storage server runs.)
7088+
7089+    This string contains a number, with an optional case-insensitive scale
7090+    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
7091+    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
7092+    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
7093+    thing.
7094+
7095+    "``tahoe create-node``" generates a tahoe.cfg with
7096+    "``reserved_space=1G``", but you may wish to raise, lower, or remove the
7097+    reservation to suit your needs.
7098+
7099+``expire.enabled =``
7100+
7101+``expire.mode =``
7102+
7103+``expire.override_lease_duration =``
7104+
7105+``expire.cutoff_date =``
7106+
7107+``expire.immutable =``
7108+
7109+``expire.mutable =``
7110+
7111+    These settings control garbage collection, causing the server to
7112+    delete shares that no longer have an up-to-date lease on them. Please
7113+    see `<garbage-collection.rst>`_ for full details.
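
And the corresponding fragment for the default disk backend (the value is
illustrative; "tahoe create-node" writes reserved_space = 1G by default):

    [storage]
    backend = disk
    reserved_space = 1G
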
7114hunk ./docs/configuration.rst 436
7115     <http://tahoe-lafs.org/trac/tahoe-lafs/ticket/390>`_ for the current
7116     status of this bug. The default value is ``False``.
7117 
7118-``reserved_space = (str, optional)``
7119+``backend = (string, optional)``
7120 
7121hunk ./docs/configuration.rst 438
7122-    If provided, this value defines how much disk space is reserved: the
7123-    storage server will not accept any share that causes the amount of free
7124-    disk space to drop below this value. (The free space is measured by a
7125-    call to statvfs(2) on Unix, or GetDiskFreeSpaceEx on Windows, and is the
7126-    space available to the user account under which the storage server runs.)
7127+    Storage servers can store their shares in different "backends". Clients
7128+    need not be aware of which backend is used by a server. The default
7129+    value is ``disk``.
7130 
7131hunk ./docs/configuration.rst 442
7132-    This string contains a number, with an optional case-insensitive scale
7133-    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
7134-    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
7135-    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
7136-    thing.
7137+``backend = disk``
7138 
7139hunk ./docs/configuration.rst 444
7140-    "``tahoe create-node``" generates a tahoe.cfg with
7141-    "``reserved_space=1G``", but you may wish to raise, lower, or remove the
7142-    reservation to suit your needs.
7143+    The default is to store shares on the local filesystem (in
7144+    BASEDIR/storage/shares/). For configuration details (including how to
7145+    reserve a minimum amount of free space), see `<backends/disk.rst>`_.
7146 
7147hunk ./docs/configuration.rst 448
7148-``expire.enabled =``
7149+``backend = s3``
7150 
7151hunk ./docs/configuration.rst 450
7152-``expire.mode =``
7153-
7154-``expire.override_lease_duration =``
7155-
7156-``expire.cutoff_date =``
7157-
7158-``expire.immutable =``
7159-
7160-``expire.mutable =``
7161-
7162-    These settings control garbage collection, in which the server will
7163-    delete shares that no longer have an up-to-date lease on them. Please see
7164-    `<garbage-collection.rst>`_ for full details.
7165+    The storage server can store all shares in an Amazon Simple Storage
7166+    Service (S3) bucket. For configuration details, see `<backends/S3.rst>`_.
7167 
7168 
7169 Running A Helper
7170}
7171[Fix some incorrect attribute accesses. refs #999
7172david-sarah@jacaranda.org**20110921031207
7173 Ignore-this: f1ea4c3ea191f6d4b719afaebd2b2bcd
7174] {
7175hunk ./src/allmydata/client.py 258
7176 
7177         backend = DiskBackend(storedir, readonly=readonly, reserved_space=reserved,
7178                               discard_storage=discard)
7179-        ss = StorageServer(nodeid, backend, storedir,
7180+        ss = StorageServer(self.nodeid, backend, storedir,
7181                            stats_provider=self.stats_provider,
7182                            expiration_policy=expiration_policy)
7183         self.add_service(ss)
7184hunk ./src/allmydata/interfaces.py 449
7185         Returns the storage index.
7186         """
7187 
7188+    def get_storage_index_string():
7189+        """
7190+        Returns the base32-encoded storage index.
7191+        """
7192+
7193     def get_shnum():
7194         """
7195         Returns the share number.
7196hunk ./src/allmydata/storage/backends/disk/immutable.py 138
7197     def get_storage_index(self):
7198         return self._storageindex
7199 
7200+    def get_storage_index_string(self):
7201+        return si_b2a(self._storageindex)
7202+
7203     def get_shnum(self):
7204         return self._shnum
7205 
7206hunk ./src/allmydata/storage/backends/disk/mutable.py 119
7207     def get_storage_index(self):
7208         return self._storageindex
7209 
7210+    def get_storage_index_string(self):
7211+        return si_b2a(self._storageindex)
7212+
7213     def get_shnum(self):
7214         return self._shnum
7215 
7216hunk ./src/allmydata/storage/bucket.py 86
7217     def __init__(self, ss, share):
7218         self.ss = ss
7219         self._share = share
7220-        self.storageindex = share.storageindex
7221-        self.shnum = share.shnum
7222+        self.storageindex = share.get_storage_index()
7223+        self.shnum = share.get_shnum()
7224 
7225     def __repr__(self):
7226         return "<%s %s %s>" % (self.__class__.__name__,
7227hunk ./src/allmydata/storage/expirer.py 6
7228 from twisted.python import log as twlog
7229 
7230 from allmydata.storage.crawler import ShareCrawler
7231-from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
7232+from allmydata.storage.common import UnknownMutableContainerVersionError, \
7233      UnknownImmutableContainerVersionError
7234 
7235 
7236hunk ./src/allmydata/storage/expirer.py 124
7237                     struct.error):
7238                 twlog.msg("lease-checker error processing %r" % (share,))
7239                 twlog.err()
7240-                which = (si_b2a(share.storageindex), share.get_shnum())
7241+                which = (share.get_storage_index_string(), share.get_shnum())
7242                 self.state["cycle-to-date"]["corrupt-shares"].append(which)
7243                 wks = (1, 1, 1, "unknown")
7244             would_keep_shares.append(wks)
7245hunk ./src/allmydata/storage/server.py 221
7246         alreadygot = set()
7247         for share in shareset.get_shares():
7248             share.add_or_renew_lease(lease_info)
7249-            alreadygot.add(share.shnum)
7250+            alreadygot.add(share.get_shnum())
7251 
7252         for shnum in sharenums - alreadygot:
7253             if shareset.has_incoming(shnum):
7254hunk ./src/allmydata/storage/server.py 324
7255 
7256         try:
7257             shareset = self.backend.get_shareset(storageindex)
7258-            return shareset.readv(self, shares, readv)
7259+            return shareset.readv(shares, readv)
7260         finally:
7261             self.add_latency("readv", time.time() - start)
7262 
7263hunk ./src/allmydata/storage/shares.py 1
7264-#! /usr/bin/python
7265-
7266-from allmydata.storage.mutable import MutableShareFile
7267-from allmydata.storage.immutable import ShareFile
7268-
7269-def get_share_file(filename):
7270-    f = open(filename, "rb")
7271-    prefix = f.read(32)
7272-    f.close()
7273-    if prefix == MutableShareFile.MAGIC:
7274-        return MutableShareFile(filename)
7275-    # otherwise assume it's immutable
7276-    return ShareFile(filename)
7277-
7278rmfile ./src/allmydata/storage/shares.py
7279hunk ./src/allmydata/test/no_network.py 387
7280         si = tahoe_uri.from_string(uri).get_storage_index()
7281         (i_shnum, i_serverid, i_sharefp) = from_share
7282         shares_dir = to_server.backend.get_shareset(si)._sharehomedir
7283+        fileutil.fp_make_dirs(shares_dir)
7284         i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
7285 
7286     def restore_all_shares(self, shares):
7287hunk ./src/allmydata/test/no_network.py 391
7288-        for share, data in shares.items():
7289-            share.home.setContent(data)
7290+        for sharepath, data in shares.items():
7291+            FilePath(sharepath).setContent(data)
7292 
7293     def delete_share(self, (shnum, serverid, sharefp)):
7294         sharefp.remove()
7295hunk ./src/allmydata/test/test_upload.py 744
7296         servertoshnums = {} # k: server, v: set(shnum)
7297 
7298         for i, c in self.g.servers_by_number.iteritems():
7299-            for (dirp, dirns, fns) in os.walk(c.sharedir):
7300+            for (dirp, dirns, fns) in os.walk(c.backend._sharedir.path):
7301                 for fn in fns:
7302                     try:
7303                         sharenum = int(fn)
7304}
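
(The accessor pattern this patch introduces is small enough to summarize. Below is a
minimal sketch of a share object with the new methods, assuming si_b2a from
allmydata.storage.common as used in the hunks above; the class name is hypothetical,
the real implementations being ImmutableDiskShare and MutableDiskShare.)

    from allmydata.storage.common import si_b2a

    class ShareSketch(object):
        # Hypothetical minimal share illustrating the accessors added above.
        def __init__(self, storageindex, shnum):
            self._storageindex = storageindex  # binary storage index
            self._shnum = shnum                # integer share number

        def get_storage_index(self):
            return self._storageindex

        def get_storage_index_string(self):
            # base32-encoded form, used e.g. by the lease crawler when
            # recording corrupt shares
            return si_b2a(self._storageindex)

        def get_shnum(self):
            return self._shnum
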
7305[docs/backends/S3.rst: remove Issues section. refs #999
7306david-sarah@jacaranda.org**20110921031625
7307 Ignore-this: c83d8f52b790bc32488869e6ee1df8c2
7308] hunk ./docs/backends/S3.rst 57
7309 
7310 Once configured, the WUI "storage server" page will provide information about
7311 how much space is being used and how many shares are being stored.
7312-
7313-
7314-Issues
7315-------
7316-
7317-Objects in an S3 bucket cannot be read for free. As a result, when Tahoe-LAFS
7318-is configured to store shares in S3 rather than on local disk, some common
7319-operations may behave differently:
7320-
7321-* Lease crawling/expiration is not yet implemented. As a result, shares will
7322-  be retained forever, and the Storage Server status web page will not show
7323-  information about the number of mutable/immutable shares present.
7324-
7325-* Enabling ``s3.max_space`` causes an extra S3 usage query to be sent for
7326-  each share upload, causing the upload process to run slightly slower and
7327-  incur more S3 request charges.
7328[docs/backends/S3.rst, disk.rst: describe type of space settings as 'quantity of space', not 'str'. refs #999
7329david-sarah@jacaranda.org**20110921031705
7330 Ignore-this: a74ed8e01b0a1ab5f07a1487d7bf138
7331] {
7332hunk ./docs/backends/S3.rst 38
7333     else, you may be able to use some other S3-like service if it is
7334     sufficiently compatible.
7335 
7336-``s3.max_space = (str, optional)``
7337+``s3.max_space = (quantity of space, optional)``
7338 
7339     This tells the server to limit how much space can be used in the S3
7340     bucket. Before each share is uploaded, the server will ask S3 for the
7341hunk ./docs/backends/disk.rst 14
7342 
7343     This enables use of the disk backend, and is the default.
7344 
7345-``reserved_space = (str, optional)``
7346+``reserved_space = (quantity of space, optional)``
7347 
7348     If provided, this value defines how much disk space is reserved: the
7349     storage server will not accept any share that causes the amount of free
7350}
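
(The 'quantity of space' type accepts a decimal count with an optional K/M/G suffix,
interpreted with decimal multipliers: the test_client.py hunks later in this file
check that 10K parses as 10,000, 5mB as 5,000,000, and 78Gb as 78,000,000,000, with
an unparseable value falling back to 0. The following is a hypothetical parser
consistent with that behaviour; Tahoe-LAFS's actual parser is not shown in this
patch.)

    import re

    def parse_space_quantity(s):
        # Illustrative only; accepts e.g. "1000", "10K", "5mB", "78Gb".
        m = re.match(r"^\s*(\d+)\s*([kmg]?)b?\s*$", s, re.IGNORECASE)
        if not m:
            raise ValueError("unparseable space quantity %r" % (s,))
        multipliers = {"": 1, "k": 10**3, "m": 10**6, "g": 10**9}
        return int(m.group(1)) * multipliers[m.group(2).lower()]
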
7351[More fixes to tests needed for pluggable backends. refs #999
7352david-sarah@jacaranda.org**20110921184649
7353 Ignore-this: 9be0d3a98e350fd4e17a07d2c00bb4ca
7354] {
7355hunk ./src/allmydata/scripts/debug.py 8
7356 from twisted.python import usage, failure
7357 from twisted.internet import defer
7358 from twisted.scripts import trial as twisted_trial
7359+from twisted.python.filepath import FilePath
7360 
7361 
7362 class DumpOptions(usage.Options):
7363hunk ./src/allmydata/scripts/debug.py 38
7364         self['filename'] = argv_to_abspath(filename)
7365 
7366 def dump_share(options):
7367-    from allmydata.storage.mutable import MutableShareFile
7368+    from allmydata.storage.backends.disk.disk_backend import get_share
7369     from allmydata.util.encodingutil import quote_output
7370 
7371     out = options.stdout
7372hunk ./src/allmydata/scripts/debug.py 46
7373     # check the version, to see if we have a mutable or immutable share
7374     print >>out, "share filename: %s" % quote_output(options['filename'])
7375 
7376-    f = open(options['filename'], "rb")
7377-    prefix = f.read(32)
7378-    f.close()
7379-    if prefix == MutableShareFile.MAGIC:
7380-        return dump_mutable_share(options)
7381-    # otherwise assume it's immutable
7382-    return dump_immutable_share(options)
7383-
7384-def dump_immutable_share(options):
7385-    from allmydata.storage.immutable import ShareFile
7386+    share = get_share("", 0, FilePath(options['filename']))
7387+    if share.sharetype == "mutable":
7388+        return dump_mutable_share(options, share)
7389+    else:
7390+        assert share.sharetype == "immutable", share.sharetype
7391+        return dump_immutable_share(options, share)
7392 
7393hunk ./src/allmydata/scripts/debug.py 53
7394+def dump_immutable_share(options, share):
7395     out = options.stdout
7396hunk ./src/allmydata/scripts/debug.py 55
7397-    f = ShareFile(options['filename'])
7398     if not options["leases-only"]:
7399hunk ./src/allmydata/scripts/debug.py 56
7400-        dump_immutable_chk_share(f, out, options)
7401-    dump_immutable_lease_info(f, out)
7402+        dump_immutable_chk_share(share, out, options)
7403+    dump_immutable_lease_info(share, out)
7404     print >>out
7405     return 0
7406 
7407hunk ./src/allmydata/scripts/debug.py 166
7408     return when
7409 
7410 
7411-def dump_mutable_share(options):
7412-    from allmydata.storage.mutable import MutableShareFile
7413+def dump_mutable_share(options, m):
7414     from allmydata.util import base32, idlib
7415     out = options.stdout
7416hunk ./src/allmydata/scripts/debug.py 169
7417-    m = MutableShareFile(options['filename'])
7418     f = open(options['filename'], "rb")
7419     WE, nodeid = m._read_write_enabler_and_nodeid(f)
7420     num_extra_leases = m._read_num_extra_leases(f)
7421hunk ./src/allmydata/scripts/debug.py 641
7422     /home/warner/testnet/node-1/storage/shares/44k/44kai1tui348689nrw8fjegc8c/9
7423     /home/warner/testnet/node-2/storage/shares/44k/44kai1tui348689nrw8fjegc8c/2
7424     """
7425-    from allmydata.storage.server import si_a2b, storage_index_to_dir
7426-    from allmydata.util.encodingutil import listdir_unicode
7427+    from allmydata.storage.server import si_a2b
7428+    from allmydata.storage.backends.disk_backend import si_si2dir
7429+    from allmydata.util.encodingutil import quote_filepath
7430 
7431     out = options.stdout
7432hunk ./src/allmydata/scripts/debug.py 646
7433-    sharedir = storage_index_to_dir(si_a2b(options.si_s))
7434-    for d in options.nodedirs:
7435-        d = os.path.join(d, "storage/shares", sharedir)
7436-        if os.path.exists(d):
7437-            for shnum in listdir_unicode(d):
7438-                print >>out, os.path.join(d, shnum)
7439+    si = si_a2b(options.si_s)
7440+    for nodedir in options.nodedirs:
7441+        sharedir = si_si2dir(nodedir.child("storage").child("shares"), si)
7442+        if sharedir.exists():
7443+            for sharefp in sharedir.children():
7444+                print >>out, quote_filepath(sharefp, quotemarks=False)
7445 
7446     return 0
7447 
7448hunk ./src/allmydata/scripts/debug.py 878
7449         print >>err, "Error processing %s" % quote_output(si_dir)
7450         failure.Failure().printTraceback(err)
7451 
7452+
7453 class CorruptShareOptions(usage.Options):
7454     def getSynopsis(self):
7455         return "Usage: tahoe debug corrupt-share SHARE_FILENAME"
7456hunk ./src/allmydata/scripts/debug.py 902
7457 Obviously, this command should not be used in normal operation.
7458 """
7459         return t
7460+
7461     def parseArgs(self, filename):
7462         self['filename'] = filename
7463 
7464hunk ./src/allmydata/scripts/debug.py 907
7465 def corrupt_share(options):
7466+    do_corrupt_share(options.stdout, FilePath(options['filename']), options['offset'])
7467+
7468+def do_corrupt_share(out, fp, offset="block-random"):
7469     import random
7470hunk ./src/allmydata/scripts/debug.py 911
7471-    from allmydata.storage.mutable import MutableShareFile
7472-    from allmydata.storage.immutable import ShareFile
7473+    from allmydata.storage.backends.disk.mutable import MutableDiskShare
7474+    from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
7475     from allmydata.mutable.layout import unpack_header
7476     from allmydata.immutable.layout import ReadBucketProxy
7477hunk ./src/allmydata/scripts/debug.py 915
7478-    out = options.stdout
7479-    fn = options['filename']
7480-    assert options["offset"] == "block-random", "other offsets not implemented"
7481+
7482+    assert offset == "block-random", "other offsets not implemented"
7483+
7484     # first, what kind of share is it?
7485 
7486     def flip_bit(start, end):
7487hunk ./src/allmydata/scripts/debug.py 924
7488         offset = random.randrange(start, end)
7489         bit = random.randrange(0, 8)
7490         print >>out, "[%d..%d):  %d.b%d" % (start, end, offset, bit)
7491-        f = open(fn, "rb+")
7492-        f.seek(offset)
7493-        d = f.read(1)
7494-        d = chr(ord(d) ^ 0x01)
7495-        f.seek(offset)
7496-        f.write(d)
7497-        f.close()
7498+        f = fp.open("rb+")
7499+        try:
7500+            f.seek(offset)
7501+            d = f.read(1)
7502+            d = chr(ord(d) ^ 0x01)
7503+            f.seek(offset)
7504+            f.write(d)
7505+        finally:
7506+            f.close()
7507 
7508hunk ./src/allmydata/scripts/debug.py 934
7509-    f = open(fn, "rb")
7510-    prefix = f.read(32)
7511-    f.close()
7512-    if prefix == MutableShareFile.MAGIC:
7513-        # mutable
7514-        m = MutableShareFile(fn)
7515-        f = open(fn, "rb")
7516-        f.seek(m.DATA_OFFSET)
7517-        data = f.read(2000)
7518-        # make sure this slot contains an SMDF share
7519-        assert data[0] == "\x00", "non-SDMF mutable shares not supported"
7520+    f = fp.open("rb")
7521+    try:
7522+        prefix = f.read(32)
7523+    finally:
7524         f.close()
7525hunk ./src/allmydata/scripts/debug.py 939
7526+    if prefix == MutableDiskShare.MAGIC:
7527+        # mutable
7528+        m = MutableDiskShare("", 0, fp)
7529+        f = fp.open("rb")
7530+        try:
7531+            f.seek(m.DATA_OFFSET)
7532+            data = f.read(2000)
7533+            # make sure this slot contains an SDMF share
7534+            assert data[0] == "\x00", "non-SDMF mutable shares not supported"
7535+        finally:
7536+            f.close()
7537 
7538         (version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize,
7539          ig_datalen, offsets) = unpack_header(data)
7540hunk ./src/allmydata/scripts/debug.py 960
7541         flip_bit(start, end)
7542     else:
7543         # otherwise assume it's immutable
7544-        f = ShareFile(fn)
7545+        f = ImmutableDiskShare("", 0, fp)
7546         bp = ReadBucketProxy(None, None, '')
7547         offsets = bp._parse_offsets(f.read_share_data(0, 0x24))
7548         start = f._data_offset + offsets["data"]
7549hunk ./src/allmydata/storage/backends/base.py 92
7550             (testv, datav, new_length) = test_and_write_vectors[sharenum]
7551             if sharenum in shares:
7552                 if not shares[sharenum].check_testv(testv):
7553-                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
7554+                    storageserver.log("testv failed: [%d]: %r" % (sharenum, testv))
7555                     testv_is_good = False
7556                     break
7557             else:
7558hunk ./src/allmydata/storage/backends/base.py 99
7559                 # compare the vectors against an empty share, in which all
7560                 # reads return empty strings
7561                 if not EmptyShare().check_testv(testv):
7562-                    self.log("testv failed (empty): [%d] %r" % (sharenum,
7563-                                                                testv))
7564+                    storageserver.log("testv failed (empty): [%d] %r" % (sharenum, testv))
7565                     testv_is_good = False
7566                     break
7567 
7568hunk ./src/allmydata/test/test_cli.py 2892
7569             # delete one, corrupt a second
7570             shares = self.find_uri_shares(self.uri)
7571             self.failUnlessReallyEqual(len(shares), 10)
7572-            os.unlink(shares[0][2])
7573-            cso = debug.CorruptShareOptions()
7574-            cso.stdout = StringIO()
7575-            cso.parseOptions([shares[1][2]])
7576+            shares[0][2].remove()
7577+            stdout = StringIO()
7578+            sharefile = shares[1][2]
7579             storage_index = uri.from_string(self.uri).get_storage_index()
7580             self._corrupt_share_line = "  server %s, SI %s, shnum %d" % \
7581                                        (base32.b2a(shares[1][1]),
7582hunk ./src/allmydata/test/test_cli.py 2900
7583                                         base32.b2a(storage_index),
7584                                         shares[1][0])
7585-            debug.corrupt_share(cso)
7586+            debug.do_corrupt_share(stdout, sharefile)
7587         d.addCallback(_clobber_shares)
7588 
7589         d.addCallback(lambda ign: self.do_cli("check", "--verify", self.uri))
7590hunk ./src/allmydata/test/test_cli.py 3017
7591         def _clobber_shares(ignored):
7592             shares = self.find_uri_shares(self.uris[u"g\u00F6\u00F6d"])
7593             self.failUnlessReallyEqual(len(shares), 10)
7594-            os.unlink(shares[0][2])
7595+            shares[0][2].remove()
7596 
7597             shares = self.find_uri_shares(self.uris["mutable"])
7598hunk ./src/allmydata/test/test_cli.py 3020
7599-            cso = debug.CorruptShareOptions()
7600-            cso.stdout = StringIO()
7601-            cso.parseOptions([shares[1][2]])
7602+            stdout = StringIO()
7603+            sharefile = shares[1][2]
7604             storage_index = uri.from_string(self.uris["mutable"]).get_storage_index()
7605             self._corrupt_share_line = " corrupt: server %s, SI %s, shnum %d" % \
7606                                        (base32.b2a(shares[1][1]),
7607hunk ./src/allmydata/test/test_cli.py 3027
7608                                         base32.b2a(storage_index),
7609                                         shares[1][0])
7610-            debug.corrupt_share(cso)
7611+            debug.do_corrupt_share(stdout, sharefile)
7612         d.addCallback(_clobber_shares)
7613 
7614         # root
7615hunk ./src/allmydata/test/test_client.py 90
7616                            "enabled = true\n" + \
7617                            "reserved_space = 1000\n")
7618         c = client.Client(basedir)
7619-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 1000)
7620+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 1000)
7621 
7622     def test_reserved_2(self):
7623         basedir = "client.Basic.test_reserved_2"
7624hunk ./src/allmydata/test/test_client.py 101
7625                            "enabled = true\n" + \
7626                            "reserved_space = 10K\n")
7627         c = client.Client(basedir)
7628-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 10*1000)
7629+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 10*1000)
7630 
7631     def test_reserved_3(self):
7632         basedir = "client.Basic.test_reserved_3"
7633hunk ./src/allmydata/test/test_client.py 112
7634                            "enabled = true\n" + \
7635                            "reserved_space = 5mB\n")
7636         c = client.Client(basedir)
7637-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space,
7638+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space,
7639                              5*1000*1000)
7640 
7641     def test_reserved_4(self):
7642hunk ./src/allmydata/test/test_client.py 124
7643                            "enabled = true\n" + \
7644                            "reserved_space = 78Gb\n")
7645         c = client.Client(basedir)
7646-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space,
7647+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space,
7648                              78*1000*1000*1000)
7649 
7650     def test_reserved_bad(self):
7651hunk ./src/allmydata/test/test_client.py 136
7652                            "enabled = true\n" + \
7653                            "reserved_space = bogus\n")
7654         c = client.Client(basedir)
7655-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 0)
7656+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 0)
7657 
7658     def _permute(self, sb, key):
7659         return [ s.get_serverid() for s in sb.get_servers_for_psi(key) ]
7660hunk ./src/allmydata/test/test_crawler.py 7
7661 from twisted.trial import unittest
7662 from twisted.application import service
7663 from twisted.internet import defer
7664+from twisted.python.filepath import FilePath
7665 from foolscap.api import eventually, fireEventually
7666 
7667 from allmydata.util import fileutil, hashutil, pollmixin
7668hunk ./src/allmydata/test/test_crawler.py 13
7669 from allmydata.storage.server import StorageServer, si_b2a
7670 from allmydata.storage.crawler import ShareCrawler, TimeSliceExceeded
7671+from allmydata.storage.backends.disk.disk_backend import DiskBackend
7672 
7673 from allmydata.test.test_storage import FakeCanary
7674 from allmydata.test.common_util import StallMixin
7675hunk ./src/allmydata/test/test_crawler.py 115
7676 
7677     def test_immediate(self):
7678         self.basedir = "crawler/Basic/immediate"
7679-        fileutil.make_dirs(self.basedir)
7680         serverid = "\x00" * 20
7681hunk ./src/allmydata/test/test_crawler.py 116
7682-        ss = StorageServer(self.basedir, serverid)
7683+        fp = FilePath(self.basedir)
7684+        backend = DiskBackend(fp)
7685+        ss = StorageServer(serverid, backend, fp)
7686         ss.setServiceParent(self.s)
7687 
7688         sis = [self.write(i, ss, serverid) for i in range(10)]
7689hunk ./src/allmydata/test/test_crawler.py 122
7690-        statefile = os.path.join(self.basedir, "statefile")
7691+        statefp = fp.child("statefile")
7692 
7693hunk ./src/allmydata/test/test_crawler.py 124
7694-        c = BucketEnumeratingCrawler(ss, statefile, allowed_cpu_percentage=.1)
7695+        c = BucketEnumeratingCrawler(backend, statefp, allowed_cpu_percentage=.1)
7696         c.load_state()
7697 
7698         c.start_current_prefix(time.time())
7699hunk ./src/allmydata/test/test_crawler.py 137
7700         self.failUnlessEqual(sorted(sis), sorted(c.all_buckets))
7701 
7702         # check that a new crawler picks up on the state file properly
7703-        c2 = BucketEnumeratingCrawler(ss, statefile)
7704+        c2 = BucketEnumeratingCrawler(backend, statefp)
7705         c2.load_state()
7706 
7707         c2.start_current_prefix(time.time())
7708hunk ./src/allmydata/test/test_crawler.py 145
7709 
7710     def test_service(self):
7711         self.basedir = "crawler/Basic/service"
7712-        fileutil.make_dirs(self.basedir)
7713         serverid = "\x00" * 20
7714hunk ./src/allmydata/test/test_crawler.py 146
7715-        ss = StorageServer(self.basedir, serverid)
7716+        fp = FilePath(self.basedir)
7717+        backend = DiskBackend(fp)
7718+        ss = StorageServer(serverid, backend, fp)
7719         ss.setServiceParent(self.s)
7720 
7721         sis = [self.write(i, ss, serverid) for i in range(10)]
7722hunk ./src/allmydata/test/test_crawler.py 153
7723 
7724-        statefile = os.path.join(self.basedir, "statefile")
7725-        c = BucketEnumeratingCrawler(ss, statefile)
7726+        statefp = fp.child("statefile")
7727+        c = BucketEnumeratingCrawler(backend, statefp)
7728         c.setServiceParent(self.s)
7729 
7730         # it should be legal to call get_state() and get_progress() right
7731hunk ./src/allmydata/test/test_crawler.py 174
7732 
7733     def test_paced(self):
7734         self.basedir = "crawler/Basic/paced"
7735-        fileutil.make_dirs(self.basedir)
7736         serverid = "\x00" * 20
7737hunk ./src/allmydata/test/test_crawler.py 175
7738-        ss = StorageServer(self.basedir, serverid)
7739+        fp = FilePath(self.basedir)
7740+        backend = DiskBackend(fp)
7741+        ss = StorageServer(serverid, backend, fp)
7742         ss.setServiceParent(self.s)
7743 
7744         # put four buckets in each prefixdir
7745hunk ./src/allmydata/test/test_crawler.py 186
7746             for tail in range(4):
7747                 sis.append(self.write(i, ss, serverid, tail))
7748 
7749-        statefile = os.path.join(self.basedir, "statefile")
7750+        statefp = fp.child("statefile")
7751 
7752hunk ./src/allmydata/test/test_crawler.py 188
7753-        c = PacedCrawler(ss, statefile)
7754+        c = PacedCrawler(backend, statefp)
7755         c.load_state()
7756         try:
7757             c.start_current_prefix(time.time())
7758hunk ./src/allmydata/test/test_crawler.py 213
7759         del c
7760 
7761         # start a new crawler, it should start from the beginning
7762-        c = PacedCrawler(ss, statefile)
7763+        c = PacedCrawler(backend, statefp)
7764         c.load_state()
7765         try:
7766             c.start_current_prefix(time.time())
7767hunk ./src/allmydata/test/test_crawler.py 226
7768         c.cpu_slice = PacedCrawler.cpu_slice
7769 
7770         # a third crawler should pick up from where it left off
7771-        c2 = PacedCrawler(ss, statefile)
7772+        c2 = PacedCrawler(backend, statefp)
7773         c2.all_buckets = c.all_buckets[:]
7774         c2.load_state()
7775         c2.countdown = -1
7776hunk ./src/allmydata/test/test_crawler.py 237
7777 
7778         # now stop it at the end of a bucket (countdown=4), to exercise a
7779         # different place that checks the time
7780-        c = PacedCrawler(ss, statefile)
7781+        c = PacedCrawler(backend, statefp)
7782         c.load_state()
7783         c.countdown = 4
7784         try:
7785hunk ./src/allmydata/test/test_crawler.py 256
7786 
7787         # stop it again at the end of the bucket, check that a new checker
7788         # picks up correctly
7789-        c = PacedCrawler(ss, statefile)
7790+        c = PacedCrawler(backend, statefp)
7791         c.load_state()
7792         c.countdown = 4
7793         try:
7794hunk ./src/allmydata/test/test_crawler.py 266
7795         # that should stop at the end of one of the buckets.
7796         c.save_state()
7797 
7798-        c2 = PacedCrawler(ss, statefile)
7799+        c2 = PacedCrawler(backend, statefp)
7800         c2.all_buckets = c.all_buckets[:]
7801         c2.load_state()
7802         c2.countdown = -1
7803hunk ./src/allmydata/test/test_crawler.py 277
7804 
7805     def test_paced_service(self):
7806         self.basedir = "crawler/Basic/paced_service"
7807-        fileutil.make_dirs(self.basedir)
7808         serverid = "\x00" * 20
7809hunk ./src/allmydata/test/test_crawler.py 278
7810-        ss = StorageServer(self.basedir, serverid)
7811+        fp = FilePath(self.basedir)
7812+        backend = DiskBackend(fp)
7813+        ss = StorageServer(serverid, backend, fp)
7814         ss.setServiceParent(self.s)
7815 
7816         sis = [self.write(i, ss, serverid) for i in range(10)]
7817hunk ./src/allmydata/test/test_crawler.py 285
7818 
7819-        statefile = os.path.join(self.basedir, "statefile")
7820-        c = PacedCrawler(ss, statefile)
7821+        statefp = fp.child("statefile")
7822+        c = PacedCrawler(backend, statefp)
7823 
7824         did_check_progress = [False]
7825         def check_progress():
7826hunk ./src/allmydata/test/test_crawler.py 345
7827         # and read the stdout when it runs.
7828 
7829         self.basedir = "crawler/Basic/cpu_usage"
7830-        fileutil.make_dirs(self.basedir)
7831         serverid = "\x00" * 20
7832hunk ./src/allmydata/test/test_crawler.py 346
7833-        ss = StorageServer(self.basedir, serverid)
7834+        fp = FilePath(self.basedir)
7835+        backend = DiskBackend(fp)
7836+        ss = StorageServer(serverid, backend, fp)
7837         ss.setServiceParent(self.s)
7838 
7839         for i in range(10):
7840hunk ./src/allmydata/test/test_crawler.py 354
7841             self.write(i, ss, serverid)
7842 
7843-        statefile = os.path.join(self.basedir, "statefile")
7844-        c = ConsumingCrawler(ss, statefile)
7845+        statefp = fp.child("statefile")
7846+        c = ConsumingCrawler(backend, statefp)
7847         c.setServiceParent(self.s)
7848 
7849         # this will run as fast as it can, consuming about 50ms per call to
7850hunk ./src/allmydata/test/test_crawler.py 391
7851 
7852     def test_empty_subclass(self):
7853         self.basedir = "crawler/Basic/empty_subclass"
7854-        fileutil.make_dirs(self.basedir)
7855         serverid = "\x00" * 20
7856hunk ./src/allmydata/test/test_crawler.py 392
7857-        ss = StorageServer(self.basedir, serverid)
7858+        fp = FilePath(self.basedir)
7859+        backend = DiskBackend(fp)
7860+        ss = StorageServer(serverid, backend, fp)
7861         ss.setServiceParent(self.s)
7862 
7863         for i in range(10):
7864hunk ./src/allmydata/test/test_crawler.py 400
7865             self.write(i, ss, serverid)
7866 
7867-        statefile = os.path.join(self.basedir, "statefile")
7868-        c = ShareCrawler(ss, statefile)
7869+        statefp = fp.child("statefile")
7870+        c = ShareCrawler(backend, statefp)
7871         c.slow_start = 0
7872         c.setServiceParent(self.s)
7873 
7874hunk ./src/allmydata/test/test_crawler.py 417
7875         d.addCallback(_done)
7876         return d
7877 
7878-
7879     def test_oneshot(self):
7880         self.basedir = "crawler/Basic/oneshot"
7881hunk ./src/allmydata/test/test_crawler.py 419
7882-        fileutil.make_dirs(self.basedir)
7883         serverid = "\x00" * 20
7884hunk ./src/allmydata/test/test_crawler.py 420
7885-        ss = StorageServer(self.basedir, serverid)
7886+        fp = FilePath(self.basedir)
7887+        backend = DiskBackend(fp)
7888+        ss = StorageServer(serverid, backend, fp)
7889         ss.setServiceParent(self.s)
7890 
7891         for i in range(30):
7892hunk ./src/allmydata/test/test_crawler.py 428
7893             self.write(i, ss, serverid)
7894 
7895-        statefile = os.path.join(self.basedir, "statefile")
7896-        c = OneShotCrawler(ss, statefile)
7897+        statefp = fp.child("statefile")
7898+        c = OneShotCrawler(backend, statefp)
7899         c.setServiceParent(self.s)
7900 
7901         d = c.finished_d
7902hunk ./src/allmydata/test/test_crawler.py 447
7903             self.failUnlessEqual(s["current-cycle"], None)
7904         d.addCallback(_check)
7905         return d
7906-
7907hunk ./src/allmydata/test/test_deepcheck.py 23
7908      ShouldFailMixin
7909 from allmydata.test.common_util import StallMixin
7910 from allmydata.test.no_network import GridTestMixin
7911+from allmydata.scripts import debug
7912+
7913 
7914 timeout = 2400 # One of these took 1046.091s on Zandr's ARM box.
7915 
7916hunk ./src/allmydata/test/test_deepcheck.py 905
7917         d.addErrback(self.explain_error)
7918         return d
7919 
7920-
7921-
7922     def set_up_damaged_tree(self):
7923         # 6.4s
7924 
7925hunk ./src/allmydata/test/test_deepcheck.py 989
7926 
7927         return d
7928 
7929-    def _run_cli(self, argv):
7930-        stdout, stderr = StringIO(), StringIO()
7931-        # this can only do synchronous operations
7932-        assert argv[0] == "debug"
7933-        runner.runner(argv, run_by_human=False, stdout=stdout, stderr=stderr)
7934-        return stdout.getvalue()
7935-
7936     def _delete_some_shares(self, node):
7937         self.delete_shares_numbered(node.get_uri(), [0,1])
7938 
7939hunk ./src/allmydata/test/test_deepcheck.py 995
7940     def _corrupt_some_shares(self, node):
7941         for (shnum, serverid, sharefile) in self.find_uri_shares(node.get_uri()):
7942             if shnum in (0,1):
7943-                self._run_cli(["debug", "corrupt-share", sharefile])
7944+                debug.do_corrupt_share(StringIO(), sharefile)
7945 
7946     def _delete_most_shares(self, node):
7947         self.delete_shares_numbered(node.get_uri(), range(1,10))
7948hunk ./src/allmydata/test/test_deepcheck.py 1000
7949 
7950-
7951     def check_is_healthy(self, cr, where):
7952         try:
7953             self.failUnless(ICheckResults.providedBy(cr), (cr, type(cr), where))
7954hunk ./src/allmydata/test/test_download.py 134
7955             for shnum in shares_for_server:
7956                 share_dir = self.get_server(i).backend.get_shareset(si)._sharehomedir
7957                 fileutil.fp_make_dirs(share_dir)
7958-                share_dir.child(str(shnum)).setContent(shares[shnum])
7959+                share_dir.child(str(shnum)).setContent(shares_for_server[shnum])
7960 
7961     def load_shares(self, ignored=None):
7962         # this uses the data generated by create_shares() to populate the
7963hunk ./src/allmydata/test/test_hung_server.py 32
7964 
7965     def _break(self, servers):
7966         for ss in servers:
7967-            self.g.break_server(ss.get_serverid())
7968+            self.g.break_server(ss.original.get_serverid())
7969 
7970     def _hang(self, servers, **kwargs):
7971         for ss in servers:
7972hunk ./src/allmydata/test/test_hung_server.py 67
7973         serverids = [ss.original.get_serverid() for ss in from_servers]
7974         for (i_shnum, i_serverid, i_sharefp) in self.shares:
7975             if i_serverid in serverids:
7976-                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
7977+                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server.original)
7978 
7979         self.shares = self.find_uri_shares(self.uri)
7980 
7981hunk ./src/allmydata/test/test_mutable.py 3669
7982         # Now execute each assignment by writing the storage.
7983         for (share, servernum) in assignments:
7984             sharedata = base64.b64decode(self.sdmf_old_shares[share])
7985-            storage_dir = self.get_server(servernum).backend.get_shareset(si).sharehomedir
7986+            storage_dir = self.get_server(servernum).backend.get_shareset(si)._sharehomedir
7987             fileutil.fp_make_dirs(storage_dir)
7988             storage_dir.child("%d" % share).setContent(sharedata)
7989         # ...and verify that the shares are there.
7990hunk ./src/allmydata/test/test_no_network.py 10
7991 from allmydata.immutable.upload import Data
7992 from allmydata.util.consumer import download_to_data
7993 
7994+
7995 class Harness(unittest.TestCase):
7996     def setUp(self):
7997         self.s = service.MultiService()
7998hunk ./src/allmydata/test/test_storage.py 1
7999-import time, os.path, platform, stat, re, simplejson, struct, shutil
8000+import time, os.path, platform, stat, re, simplejson, struct, shutil, itertools
8001 
8002 import mock
8003 
8004hunk ./src/allmydata/test/test_storage.py 6
8005 from twisted.trial import unittest
8006-
8007 from twisted.internet import defer
8008 from twisted.application import service
8009hunk ./src/allmydata/test/test_storage.py 8
8010+from twisted.python.filepath import FilePath
8011 from foolscap.api import fireEventually
8012hunk ./src/allmydata/test/test_storage.py 10
8013-import itertools
8014+
8015 from allmydata import interfaces
8016 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
8017 from allmydata.storage.server import StorageServer
8018hunk ./src/allmydata/test/test_storage.py 14
8019+from allmydata.storage.backends.disk.disk_backend import DiskBackend
8020 from allmydata.storage.backends.disk.mutable import MutableDiskShare
8021 from allmydata.storage.bucket import BucketWriter, BucketReader
8022 from allmydata.storage.common import DataTooLargeError, \
8023hunk ./src/allmydata/test/test_storage.py 310
8024         return self.sparent.stopService()
8025 
8026     def workdir(self, name):
8027-        basedir = os.path.join("storage", "Server", name)
8028-        return basedir
8029+        return FilePath("storage").child("Server").child(name)
8030 
8031     def create(self, name, reserved_space=0, klass=StorageServer):
8032         workdir = self.workdir(name)
8033hunk ./src/allmydata/test/test_storage.py 314
8034-        ss = klass(workdir, "\x00" * 20, reserved_space=reserved_space,
8035+        backend = DiskBackend(workdir, readonly=False, reserved_space=reserved_space)
8036+        ss = klass("\x00" * 20, backend, workdir,
8037                    stats_provider=FakeStatsProvider())
8038         ss.setServiceParent(self.sparent)
8039         return ss
8040hunk ./src/allmydata/test/test_storage.py 1386
8041 
8042     def tearDown(self):
8043         self.sparent.stopService()
8044-        shutil.rmtree(self.workdir("MDMFProxies storage test server"))
8045+        fileutil.fp_remove(self.workdir("MDMFProxies storage test server"))
8046 
8047 
8048     def write_enabler(self, we_tag):
8049hunk ./src/allmydata/test/test_storage.py 2781
8050         return self.sparent.stopService()
8051 
8052     def workdir(self, name):
8053-        basedir = os.path.join("storage", "Server", name)
8054-        return basedir
8055+        return FilePath("storage").child("Server").child(name)
8056 
8057     def create(self, name):
8058         workdir = self.workdir(name)
8059hunk ./src/allmydata/test/test_storage.py 2785
8060-        ss = StorageServer(workdir, "\x00" * 20)
8061+        backend = DiskBackend(workdir)
8062+        ss = StorageServer("\x00" * 20, backend, workdir)
8063         ss.setServiceParent(self.sparent)
8064         return ss
8065 
8066hunk ./src/allmydata/test/test_storage.py 4061
8067         }
8068 
8069         basedir = "storage/WebStatus/status_right_disk_stats"
8070-        fileutil.make_dirs(basedir)
8071-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=reserved_space)
8072-        expecteddir = ss.sharedir
8073+        fp = FilePath(basedir)
8074+        backend = DiskBackend(fp, readonly=False, reserved_space=reserved_space)
8075+        ss = StorageServer("\x00" * 20, backend, fp)
8076+        expecteddir = backend._sharedir
8077         ss.setServiceParent(self.s)
8078         w = StorageStatus(ss)
8079         html = w.renderSynchronously()
8080hunk ./src/allmydata/test/test_storage.py 4084
8081 
8082     def test_readonly(self):
8083         basedir = "storage/WebStatus/readonly"
8084-        fileutil.make_dirs(basedir)
8085-        ss = StorageServer(basedir, "\x00" * 20, readonly_storage=True)
8086+        fp = FilePath(basedir)
8087+        backend = DiskBackend(fp, readonly=True)
8088+        ss = StorageServer("\x00" * 20, backend, fp)
8089         ss.setServiceParent(self.s)
8090         w = StorageStatus(ss)
8091         html = w.renderSynchronously()
8092hunk ./src/allmydata/test/test_storage.py 4096
8093 
8094     def test_reserved(self):
8095         basedir = "storage/WebStatus/reserved"
8096-        fileutil.make_dirs(basedir)
8097-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=10e6)
8098-        ss.setServiceParent(self.s)
8099-        w = StorageStatus(ss)
8100-        html = w.renderSynchronously()
8101-        self.failUnlessIn("<h1>Storage Server Status</h1>", html)
8102-        s = remove_tags(html)
8103-        self.failUnlessIn("Reserved space: - 10.00 MB (10000000)", s)
8104-
8105-    def test_huge_reserved(self):
8106-        basedir = "storage/WebStatus/reserved"
8107-        fileutil.make_dirs(basedir)
8108-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=10e6)
8109+        fp = FilePath(basedir)
8110+        backend = DiskBackend(fp, readonly=False, reserved_space=10e6)
8111+        ss = StorageServer("\x00" * 20, backend, fp)
8112         ss.setServiceParent(self.s)
8113         w = StorageStatus(ss)
8114         html = w.renderSynchronously()
8115hunk ./src/allmydata/test/test_upload.py 3
8116 # -*- coding: utf-8 -*-
8117 
8118-import os, shutil
8119+import os
8120 from cStringIO import StringIO
8121 from twisted.trial import unittest
8122 from twisted.python.failure import Failure
8123hunk ./src/allmydata/test/test_upload.py 14
8124 from allmydata import uri, monitor, client
8125 from allmydata.immutable import upload, encode
8126 from allmydata.interfaces import FileTooLargeError, UploadUnhappinessError
8127-from allmydata.util import log
8128+from allmydata.util import log, fileutil
8129 from allmydata.util.assertutil import precondition
8130 from allmydata.util.deferredutil import DeferredListShouldSucceed
8131 from allmydata.test.no_network import GridTestMixin
8132hunk ./src/allmydata/test/test_upload.py 972
8133                                         readonly=True))
8134         # Remove the first share from server 0.
8135         def _remove_share_0_from_server_0():
8136-            share_location = self.shares[0][2]
8137-            os.remove(share_location)
8138+            self.shares[0][2].remove()
8139         d.addCallback(lambda ign:
8140             _remove_share_0_from_server_0())
8141         # Set happy = 4 in the client.
8142hunk ./src/allmydata/test/test_upload.py 1847
8143             self._copy_share_to_server(3, 1)
8144             storedir = self.get_serverdir(0)
8145             # remove the storedir, wiping out any existing shares
8146-            shutil.rmtree(storedir)
8147+            fileutil.fp_remove(storedir)
8148             # create an empty storedir to replace the one we just removed
8149hunk ./src/allmydata/test/test_upload.py 1849
8150-            os.mkdir(storedir)
8151+            storedir.mkdir()
8152             client = self.g.clients[0]
8153             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
8154             return client
8155hunk ./src/allmydata/test/test_upload.py 1888
8156             self._copy_share_to_server(3, 1)
8157             storedir = self.get_serverdir(0)
8158             # remove the storedir, wiping out any existing shares
8159-            shutil.rmtree(storedir)
8160+            fileutil.fp_remove(storedir)
8161             # create an empty storedir to replace the one we just removed
8162hunk ./src/allmydata/test/test_upload.py 1890
8163-            os.mkdir(storedir)
8164+            storedir.mkdir()
8165             client = self.g.clients[0]
8166             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
8167             return client
8168hunk ./src/allmydata/test/test_web.py 4870
8169         d.addErrback(self.explain_web_error)
8170         return d
8171 
8172-    def _assert_leasecount(self, ignored, which, expected):
8173+    def _assert_leasecount(self, which, expected):
8174         lease_counts = self.count_leases(self.uris[which])
8175         for (fn, num_leases) in lease_counts:
8176             if num_leases != expected:
8177hunk ./src/allmydata/test/test_web.py 4903
8178                 self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
8179         d.addCallback(_compute_fileurls)
8180 
8181-        d.addCallback(self._assert_leasecount, "one", 1)
8182-        d.addCallback(self._assert_leasecount, "two", 1)
8183-        d.addCallback(self._assert_leasecount, "mutable", 1)
8184+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8185+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8186+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8187 
8188         d.addCallback(self.CHECK, "one", "t=check") # no add-lease
8189         def _got_html_good(res):
8190hunk ./src/allmydata/test/test_web.py 4913
8191             self.failIf("Not Healthy" in res, res)
8192         d.addCallback(_got_html_good)
8193 
8194-        d.addCallback(self._assert_leasecount, "one", 1)
8195-        d.addCallback(self._assert_leasecount, "two", 1)
8196-        d.addCallback(self._assert_leasecount, "mutable", 1)
8197+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8198+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8199+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8200 
8201         # this CHECK uses the original client, which uses the same
8202         # lease-secrets, so it will just renew the original lease
8203hunk ./src/allmydata/test/test_web.py 4922
8204         d.addCallback(self.CHECK, "one", "t=check&add-lease=true")
8205         d.addCallback(_got_html_good)
8206 
8207-        d.addCallback(self._assert_leasecount, "one", 1)
8208-        d.addCallback(self._assert_leasecount, "two", 1)
8209-        d.addCallback(self._assert_leasecount, "mutable", 1)
8210+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8211+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8212+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8213 
8214         # this CHECK uses an alternate client, which adds a second lease
8215         d.addCallback(self.CHECK, "one", "t=check&add-lease=true", clientnum=1)
8216hunk ./src/allmydata/test/test_web.py 4930
8217         d.addCallback(_got_html_good)
8218 
8219-        d.addCallback(self._assert_leasecount, "one", 2)
8220-        d.addCallback(self._assert_leasecount, "two", 1)
8221-        d.addCallback(self._assert_leasecount, "mutable", 1)
8222+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8223+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8224+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8225 
8226         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true")
8227         d.addCallback(_got_html_good)
8228hunk ./src/allmydata/test/test_web.py 4937
8229 
8230-        d.addCallback(self._assert_leasecount, "one", 2)
8231-        d.addCallback(self._assert_leasecount, "two", 1)
8232-        d.addCallback(self._assert_leasecount, "mutable", 1)
8233+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8234+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8235+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8236 
8237         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true",
8238                       clientnum=1)
8239hunk ./src/allmydata/test/test_web.py 4945
8240         d.addCallback(_got_html_good)
8241 
8242-        d.addCallback(self._assert_leasecount, "one", 2)
8243-        d.addCallback(self._assert_leasecount, "two", 1)
8244-        d.addCallback(self._assert_leasecount, "mutable", 2)
8245+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8246+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8247+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 2))
8248 
8249         d.addErrback(self.explain_web_error)
8250         return d
8251hunk ./src/allmydata/test/test_web.py 4989
8252             self.failUnlessReallyEqual(len(units), 4+1)
8253         d.addCallback(_done)
8254 
8255-        d.addCallback(self._assert_leasecount, "root", 1)
8256-        d.addCallback(self._assert_leasecount, "one", 1)
8257-        d.addCallback(self._assert_leasecount, "mutable", 1)
8258+        d.addCallback(lambda ign: self._assert_leasecount("root", 1))
8259+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8260+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8261 
8262         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true")
8263         d.addCallback(_done)
8264hunk ./src/allmydata/test/test_web.py 4996
8265 
8266-        d.addCallback(self._assert_leasecount, "root", 1)
8267-        d.addCallback(self._assert_leasecount, "one", 1)
8268-        d.addCallback(self._assert_leasecount, "mutable", 1)
8269+        d.addCallback(lambda ign: self._assert_leasecount("root", 1))
8270+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8271+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8272 
8273         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true",
8274                       clientnum=1)
8275hunk ./src/allmydata/test/test_web.py 5004
8276         d.addCallback(_done)
8277 
8278-        d.addCallback(self._assert_leasecount, "root", 2)
8279-        d.addCallback(self._assert_leasecount, "one", 2)
8280-        d.addCallback(self._assert_leasecount, "mutable", 2)
8281+        d.addCallback(lambda ign: self._assert_leasecount("root", 2))
8282+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8283+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 2))
8284 
8285         d.addErrback(self.explain_web_error)
8286         return d
8287}
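
(A recurring change in the test hunks above is the move from path strings and
os.path calls to Twisted FilePath objects, together with the new StorageServer
signature. The post-patch construction pattern, as used by the tests -- the
directory name here is just an example -- is:)

    from twisted.python.filepath import FilePath
    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    workdir = FilePath("storage/Server/example")  # example base directory
    backend = DiskBackend(workdir, readonly=False, reserved_space=0)
    ss = StorageServer("\x00" * 20, backend, workdir)
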
8288[Fix more shallow bugs, mainly FilePathification. Also, remove the max_space_per_bucket parameter from BucketWriter since it can be obtained from the _max_size attribute of the share (via a new get_allocated_size() accessor). refs #999
8289david-sarah@jacaranda.org**20110921221421
8290 Ignore-this: 600e3ccef8533aa43442fa576c7d88cf
8291] {
8292hunk ./src/allmydata/scripts/debug.py 642
8293     /home/warner/testnet/node-2/storage/shares/44k/44kai1tui348689nrw8fjegc8c/2
8294     """
8295     from allmydata.storage.server import si_a2b
8296-    from allmydata.storage.backends.disk_backend import si_si2dir
8297+    from allmydata.storage.backends.disk.disk_backend import si_si2dir
8298     from allmydata.util.encodingutil import quote_filepath
8299 
8300     out = options.stdout
8301hunk ./src/allmydata/scripts/debug.py 648
8302     si = si_a2b(options.si_s)
8303     for nodedir in options.nodedirs:
8304-        sharedir = si_si2dir(nodedir.child("storage").child("shares"), si)
8305+        sharedir = si_si2dir(FilePath(nodedir).child("storage").child("shares"), si)
8306         if sharedir.exists():
8307             for sharefp in sharedir.children():
8308                 print >>out, quote_filepath(sharefp, quotemarks=False)
8309hunk ./src/allmydata/storage/backends/disk/disk_backend.py 189
8310         incominghome = self._incominghomedir.child(str(shnum))
8311         immsh = ImmutableDiskShare(self.get_storage_index(), shnum, incominghome, finalhome,
8312                                    max_size=max_space_per_bucket)
8313-        bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
8314+        bw = BucketWriter(storageserver, immsh, lease_info, canary)
8315         if self._discard_storage:
8316             bw.throw_out_all_data = True
8317         return bw
8318hunk ./src/allmydata/storage/backends/disk/immutable.py 147
8319     def unlink(self):
8320         self._home.remove()
8321 
8322+    def get_allocated_size(self):
8323+        return self._max_size
8324+
8325     def get_size(self):
8326         return self._home.getsize()
8327 
8328hunk ./src/allmydata/storage/bucket.py 15
8329 class BucketWriter(Referenceable):