Ticket #999: pluggable-backends-davidsarah-v10.darcs.patch

File pluggable-backends-davidsarah-v10.darcs.patch, 425.7 KB (added by davidsarah at 2011-09-22T18:38:53Z)

Fix most of the crawler tests. Reinstate the cancel_lease methods of ImmutableDiskShare and MutableDiskShare, since they are needed for lease expiry. refs #999
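For anyone who wants to test these changes, a darcs patch bundle like this one is normally applied to a darcs checkout of the repository named in its header. A minimal sketch, assuming darcs 2.x and that the bundle has been saved next to the checkout (the directory name tahoe-trunk is just an example):

  darcs get http://tahoe-lafs.org/source/tahoe/trunk tahoe-trunk
  cd tahoe-trunk
  darcs apply ../pluggable-backends-davidsarah-v10.darcs.patch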

14 patches for repository http://tahoe-lafs.org/source/tahoe/trunk:

Thu Aug 25 01:32:17 BST 2011  david-sarah@jacaranda.org
  * interfaces.py: 'which -> that' grammar cleanup.

Tue Sep 20 00:29:26 BST 2011  david-sarah@jacaranda.org
  * Pluggable backends -- new and moved files, changes to moved files. refs #999

Tue Sep 20 00:32:56 BST 2011  david-sarah@jacaranda.org
  * Pluggable backends -- all other changes. refs #999

Tue Sep 20 04:38:03 BST 2011  david-sarah@jacaranda.org
  * Work-in-progress, includes fix to bug involving BucketWriter. refs #999

Tue Sep 20 18:17:37 BST 2011  david-sarah@jacaranda.org
  * docs/backends: document the configuration options for the pluggable backends scheme. refs #999

Wed Sep 21 04:12:07 BST 2011  david-sarah@jacaranda.org
  * Fix some incorrect attribute accesses. refs #999

Wed Sep 21 04:16:25 BST 2011  david-sarah@jacaranda.org
  * docs/backends/S3.rst: remove Issues section. refs #999

Wed Sep 21 04:17:05 BST 2011  david-sarah@jacaranda.org
  * docs/backends/S3.rst, disk.rst: describe type of space settings as 'quantity of space', not 'str'. refs #999

Wed Sep 21 19:46:49 BST 2011  david-sarah@jacaranda.org
  * More fixes to tests needed for pluggable backends. refs #999

Wed Sep 21 23:14:21 BST 2011  david-sarah@jacaranda.org
  * Fix more shallow bugs, mainly FilePathification. Also, remove the max_space_per_bucket parameter from BucketWriter since it can be obtained from the _max_size attribute of the share (via a new get_allocated_size() accessor). refs #999

Wed Sep 21 23:20:38 BST 2011  david-sarah@jacaranda.org
  * uri.py: resolve a conflict between trunk and the pluggable-backends patches. refs #999

Thu Sep 22 05:54:51 BST 2011  david-sarah@jacaranda.org
  * Fix some more test failures. refs #999

Thu Sep 22 19:30:08 BST 2011  david-sarah@jacaranda.org
  * Fix most of the crawler tests. refs #999

Thu Sep 22 19:33:23 BST 2011  david-sarah@jacaranda.org
  * Reinstate the cancel_lease methods of ImmutableDiskShare and MutableDiskShare, since they are needed for lease expiry. refs #999

New patches:

[interfaces.py: 'which -> that' grammar cleanup.
david-sarah@jacaranda.org**20110825003217
 Ignore-this: a3e15f3676de1b346ad78aabdfb8cac6
] {
hunk ./src/allmydata/interfaces.py 38
     the StubClient. This object doesn't actually offer any services, but the
     announcement helps the Introducer keep track of which clients are
     subscribed (so the grid admin can keep track of things like the size of
-    the grid and the client versions in use. This is the (empty)
+    the grid and the client versions in use). This is the (empty)
     RemoteInterface for the StubClient."""

 class RIBucketWriter(RemoteInterface):
hunk ./src/allmydata/interfaces.py 276
         (binary) storage index string, and 'shnum' is the integer share
         number. 'reason' is a human-readable explanation of the problem,
         probably including some expected hash values and the computed ones
-        which did not match. Corruption advisories for mutable shares should
+        that did not match. Corruption advisories for mutable shares should
         include a hash of the public key (the same value that appears in the
         mutable-file verify-cap), since the current share format does not
         store that on disk.
hunk ./src/allmydata/interfaces.py 413
           remote_host: the IAddress, if connected, otherwise None

         This method is intended for monitoring interfaces, such as a web page
-        which describes connecting and connected peers.
+        that describes connecting and connected peers.
         """

     def get_all_peerids():
hunk ./src/allmydata/interfaces.py 515

     # TODO: rename to get_read_cap()
     def get_readonly():
-        """Return another IURI instance, which represents a read-only form of
+        """Return another IURI instance that represents a read-only form of
         this one. If is_readonly() is True, this returns self."""

     def get_verify_cap():
hunk ./src/allmydata/interfaces.py 542
         passing into init_from_string."""

 class IDirnodeURI(Interface):
-    """I am a URI which represents a dirnode."""
+    """I am a URI that represents a dirnode."""

 class IFileURI(Interface):
hunk ./src/allmydata/interfaces.py 545
-    """I am a URI which represents a filenode."""
+    """I am a URI that represents a filenode."""
     def get_size():
         """Return the length (in bytes) of the file that I represent."""

hunk ./src/allmydata/interfaces.py 553
     pass

 class IMutableFileURI(Interface):
-    """I am a URI which represents a mutable filenode."""
+    """I am a URI that represents a mutable filenode."""
     def get_extension_params():
         """Return the extension parameters in the URI"""

hunk ./src/allmydata/interfaces.py 856
         """

 class IFileNode(IFilesystemNode):
-    """I am a node which represents a file: a sequence of bytes. I am not a
+    """I am a node that represents a file: a sequence of bytes. I am not a
     container, like IDirectoryNode."""
     def get_best_readable_version():
         """Return a Deferred that fires with an IReadable for the 'best'
hunk ./src/allmydata/interfaces.py 905
     multiple versions of a file present in the grid, some of which might be
     unrecoverable (i.e. have fewer than 'k' shares). These versions are
     loosely ordered: each has a sequence number and a hash, and any version
-    with seqnum=N was uploaded by a node which has seen at least one version
+    with seqnum=N was uploaded by a node that has seen at least one version
     with seqnum=N-1.

     The 'servermap' (an instance of IMutableFileServerMap) is used to
hunk ./src/allmydata/interfaces.py 1014
         as a guide to where the shares are located.

         I return a Deferred that fires with the requested contents, or
-        errbacks with UnrecoverableFileError. Note that a servermap which was
+        errbacks with UnrecoverableFileError. Note that a servermap that was
         updated with MODE_ANYTHING or MODE_READ may not know about shares for
         all versions (those modes stop querying servers as soon as they can
         fulfil their goals), so you may want to use MODE_CHECK (which checks
hunk ./src/allmydata/interfaces.py 1073
     """Upload was unable to satisfy 'servers_of_happiness'"""

 class UnableToFetchCriticalDownloadDataError(Exception):
-    """I was unable to fetch some piece of critical data which is supposed to
+    """I was unable to fetch some piece of critical data that is supposed to
     be identically present in all shares."""

 class NoServersError(Exception):
hunk ./src/allmydata/interfaces.py 1085
     exists, and overwrite= was set to False."""

 class NoSuchChildError(Exception):
-    """A directory node was asked to fetch a child which does not exist."""
+    """A directory node was asked to fetch a child that does not exist."""

 class ChildOfWrongTypeError(Exception):
     """An operation was attempted on a child of the wrong type (file or directory)."""
hunk ./src/allmydata/interfaces.py 1403
         if you initially thought you were going to use 10 peers, started
         encoding, and then two of the peers dropped out: you could use
         desired_share_ids= to skip the work (both memory and CPU) of
-        producing shares for the peers which are no longer available.
+        producing shares for the peers that are no longer available.

         """

hunk ./src/allmydata/interfaces.py 1478
         if you initially thought you were going to use 10 peers, started
         encoding, and then two of the peers dropped out: you could use
         desired_share_ids= to skip the work (both memory and CPU) of
-        producing shares for the peers which are no longer available.
+        producing shares for the peers that are no longer available.

         For each call, encode() will return a Deferred that fires with two
         lists, one containing shares and the other containing the shareids.
hunk ./src/allmydata/interfaces.py 1535
         required to be of the same length.  The i'th element of their_shareids
         is required to be the shareid of the i'th buffer in some_shares.

-        This returns a Deferred which fires with a sequence of buffers. This
+        This returns a Deferred that fires with a sequence of buffers. This
         sequence will contain all of the segments of the original data, in
         order. The sum of the lengths of all of the buffers will be the
         'data_size' value passed into the original ICodecEncode.set_params()
hunk ./src/allmydata/interfaces.py 1582
         Encoding parameters can be set in three ways. 1: The Encoder class
         provides defaults (3/7/10). 2: the Encoder can be constructed with
         an 'options' dictionary, in which the
-        needed_and_happy_and_total_shares' key can be a (k,d,n) tuple. 3:
+        'needed_and_happy_and_total_shares' key can be a (k,d,n) tuple. 3:
         set_params((k,d,n)) can be called.

         If you intend to use set_params(), you must call it before
hunk ./src/allmydata/interfaces.py 1780
         produced, so that the segment hashes can be generated with only a
         single pass.

-        This returns a Deferred which fires with a sequence of hashes, using:
+        This returns a Deferred that fires with a sequence of hashes, using:

          tuple(segment_hashes[first:last])

hunk ./src/allmydata/interfaces.py 1796
     def get_plaintext_hash():
         """OBSOLETE; Get the hash of the whole plaintext.

-        This returns a Deferred which fires with a tagged SHA-256 hash of the
+        This returns a Deferred that fires with a tagged SHA-256 hash of the
         whole plaintext, obtained from hashutil.plaintext_hash(data).
         """

hunk ./src/allmydata/interfaces.py 1856
         be used to encrypt the data. The key will also be hashed to derive
         the StorageIndex.

-        Uploadables which want to achieve convergence should hash their file
+        Uploadables that want to achieve convergence should hash their file
         contents and the serialized_encoding_parameters to form the key
         (which of course requires a full pass over the data). Uploadables can
         use the upload.ConvergentUploadMixin class to achieve this
hunk ./src/allmydata/interfaces.py 1862
         automatically.

-        Uploadables which do not care about convergence (or do not wish to
+        Uploadables that do not care about convergence (or do not wish to
         make multiple passes over the data) can simply return a
         strongly-random 16 byte string.

hunk ./src/allmydata/interfaces.py 1872

     def read(length):
         """Return a Deferred that fires with a list of strings (perhaps with
-        only a single element) which, when concatenated together, contain the
+        only a single element) that, when concatenated together, contain the
         next 'length' bytes of data. If EOF is near, this may provide fewer
         than 'length' bytes. The total number of bytes provided by read()
         before it signals EOF must equal the size provided by get_size().
hunk ./src/allmydata/interfaces.py 1919

     def read(length):
         """
-        Returns a list of strings which, when concatenated, are the next
+        Returns a list of strings that, when concatenated, are the next
        length bytes of the file, or fewer if there are fewer bytes
         between the current location and the end of the file.
         """
hunk ./src/allmydata/interfaces.py 1932

 class IUploadResults(Interface):
     """I am returned by upload() methods. I contain a number of public
-    attributes which can be read to determine the results of the upload. Some
+    attributes that can be read to determine the results of the upload. Some
     of these are functional, some are timing information. All of these may be
     None.

hunk ./src/allmydata/interfaces.py 1965

 class IDownloadResults(Interface):
     """I am created internally by download() methods. I contain a number of
-    public attributes which contain details about the download process.::
+    public attributes that contain details about the download process.::

      .file_size : the size of the file, in bytes
      .servers_used : set of server peerids that were used during download
hunk ./src/allmydata/interfaces.py 1991
 class IUploader(Interface):
     def upload(uploadable):
         """Upload the file. 'uploadable' must impement IUploadable. This
-        returns a Deferred which fires with an IUploadResults instance, from
+        returns a Deferred that fires with an IUploadResults instance, from
         which the URI of the file can be obtained as results.uri ."""

     def upload_ssk(write_capability, new_version, uploadable):
hunk ./src/allmydata/interfaces.py 2041
         kind of lease that is obtained (which account number to claim, etc).

         TODO: any problems seen during checking will be reported to the
-        health-manager.furl, a centralized object which is responsible for
+        health-manager.furl, a centralized object that is responsible for
         figuring out why files are unhealthy so corrective action can be
         taken.
         """
hunk ./src/allmydata/interfaces.py 2056
         will be put in the check-and-repair results. The Deferred will not
         fire until the repair is complete.

-        This returns a Deferred which fires with an instance of
+        This returns a Deferred that fires with an instance of
         ICheckAndRepairResults."""

 class IDeepCheckable(Interface):
hunk ./src/allmydata/interfaces.py 2141
                               that was found to be corrupt. Each share
                               locator is a list of (serverid, storage_index,
                               sharenum).
-         count-incompatible-shares: the number of shares which are of a share
+         count-incompatible-shares: the number of shares that are of a share
                                     format unknown to this checker
          list-incompatible-shares: a list of 'share locators', one for each
                                    share that was found to be of an unknown
hunk ./src/allmydata/interfaces.py 2148
                                    format. Each share locator is a list of
                                    (serverid, storage_index, sharenum).
          servers-responding: list of (binary) storage server identifiers,
-                             one for each server which responded to the share
+                             one for each server that responded to the share
                              query (even if they said they didn't have
                              shares, and even if they said they did have
                              shares but then didn't send them when asked, or
hunk ./src/allmydata/interfaces.py 2345
         will use the data in the checker results to guide the repair process,
         such as which servers provided bad data and should therefore be
         avoided. The ICheckResults object is inside the
-        ICheckAndRepairResults object, which is returned by the
+        ICheckAndRepairResults object that is returned by the
         ICheckable.check() method::

          d = filenode.check(repair=False)
hunk ./src/allmydata/interfaces.py 2436
         methods to create new objects. I return synchronously."""

     def create_mutable_file(contents=None, keysize=None):
-        """I create a new mutable file, and return a Deferred which will fire
+        """I create a new mutable file, and return a Deferred that will fire
         with the IMutableFileNode instance when it is ready. If contents= is
         provided (a bytestring), it will be used as the initial contents of
         the new file, otherwise the file will contain zero bytes. keysize= is
hunk ./src/allmydata/interfaces.py 2444
         usual."""

     def create_new_mutable_directory(initial_children={}):
-        """I create a new mutable directory, and return a Deferred which will
+        """I create a new mutable directory, and return a Deferred that will
         fire with the IDirectoryNode instance when it is ready. If
         initial_children= is provided (a dict mapping unicode child name to
         (childnode, metadata_dict) tuples), the directory will be populated
hunk ./src/allmydata/interfaces.py 2452

 class IClientStatus(Interface):
     def list_all_uploads():
-        """Return a list of uploader objects, one for each upload which
+        """Return a list of uploader objects, one for each upload that
         currently has an object available (tracked with weakrefs). This is
         intended for debugging purposes."""
     def list_active_uploads():
hunk ./src/allmydata/interfaces.py 2462
         started uploads."""

     def list_all_downloads():
-        """Return a list of downloader objects, one for each download which
+        """Return a list of downloader objects, one for each download that
         currently has an object available (tracked with weakrefs). This is
         intended for debugging purposes."""
     def list_active_downloads():
hunk ./src/allmydata/interfaces.py 2689

     def provide(provider=RIStatsProvider, nickname=str):
         """
-        @param provider: a stats collector instance which should be polled
+        @param provider: a stats collector instance that should be polled
                          periodically by the gatherer to collect stats.
         @param nickname: a name useful to identify the provided client
         """
hunk ./src/allmydata/interfaces.py 2722

 class IValidatedThingProxy(Interface):
     def start():
-        """ Acquire a thing and validate it. Return a deferred which is
+        """ Acquire a thing and validate it. Return a deferred that is
         eventually fired with self if the thing is valid or errbacked if it
         can't be acquired or validated."""

}
[Pluggable backends -- new and moved files, changes to moved files. refs #999
david-sarah@jacaranda.org**20110919232926
 Ignore-this: ec5d2d1362a092d919e84327d3092424
] {
adddir ./src/allmydata/storage/backends
adddir ./src/allmydata/storage/backends/disk
move ./src/allmydata/storage/immutable.py ./src/allmydata/storage/backends/disk/immutable.py
move ./src/allmydata/storage/mutable.py ./src/allmydata/storage/backends/disk/mutable.py
adddir ./src/allmydata/storage/backends/null
addfile ./src/allmydata/storage/backends/__init__.py
addfile ./src/allmydata/storage/backends/base.py
hunk ./src/allmydata/storage/backends/base.py 1
+
+from twisted.application import service
+
+from allmydata.storage.common import si_b2a
+from allmydata.storage.lease import LeaseInfo
+from allmydata.storage.bucket import BucketReader
+
+
+class Backend(service.MultiService):
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+
+class ShareSet(object):
+    """
+    This class implements shareset logic that could work for all backends, but
+    might be useful to override for efficiency.
+    """
+
+    def __init__(self, storageindex):
+        self.storageindex = storageindex
+
+    def get_storage_index(self):
+        return self.storageindex
+
+    def get_storage_index_string(self):
+        return si_b2a(self.storageindex)
+
+    def renew_lease(self, renew_secret, new_expiration_time):
+        found_shares = False
+        for share in self.get_shares():
+            found_shares = True
+            share.renew_lease(renew_secret, new_expiration_time)
+
+        if not found_shares:
+            raise IndexError("no such lease to renew")
+
+    def get_leases(self):
+        # Since all shares get the same lease data, we just grab the leases
+        # from the first share.
+        try:
+            sf = self.get_shares().next()
+            return sf.get_leases()
+        except StopIteration:
+            return iter([])
+
+    def add_or_renew_lease(self, lease_info):
+        # This implementation assumes that lease data is duplicated in
+        # all shares of a shareset, which might not be true for all backends.
+        for share in self.get_shares():
+            share.add_or_renew_lease(lease_info)
+
+    def make_bucket_reader(self, storageserver, share):
+        return BucketReader(storageserver, share)
+
+    def testv_and_readv_and_writev(self, storageserver, secrets,
+                                   test_and_write_vectors, read_vector,
+                                   expiration_time):
+        # The implementation here depends on the following helper methods,
+        # which must be provided by subclasses:
+        #
+        # def _clean_up_after_unlink(self):
+        #     """clean up resources associated with the shareset after some
+        #     shares might have been deleted"""
+        #
+        # def _create_mutable_share(self, storageserver, shnum, write_enabler):
+        #     """create a mutable share with the given shnum and write_enabler"""
+
+        # secrets might be a triple with cancel_secret in secrets[2], but if
+        # so we ignore the cancel_secret.
+        write_enabler = secrets[0]
+        renew_secret = secrets[1]
+
+        si_s = self.get_storage_index_string()
+        shares = {}
+        for share in self.get_shares():
+            # XXX is it correct to ignore immutable shares? Maybe get_shares should
+            # have a parameter saying what type it's expecting.
+            if share.sharetype == "mutable":
+                share.check_write_enabler(write_enabler, si_s)
+                shares[share.get_shnum()] = share
+
+        # write_enabler is good for all existing shares
+
+        # now evaluate test vectors
+        testv_is_good = True
+        for sharenum in test_and_write_vectors:
+            (testv, datav, new_length) = test_and_write_vectors[sharenum]
+            if sharenum in shares:
+                if not shares[sharenum].check_testv(testv):
+                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
+                    testv_is_good = False
+                    break
+            else:
+                # compare the vectors against an empty share, in which all
+                # reads return empty strings
+                if not EmptyShare().check_testv(testv):
+                    self.log("testv failed (empty): [%d] %r" % (sharenum,
+                                                                testv))
+                    testv_is_good = False
+                    break
+
+        # gather the read vectors, before we do any writes
+        read_data = {}
+        for shnum, share in shares.items():
+            read_data[shnum] = share.readv(read_vector)
+
+        ownerid = 1 # TODO
+        lease_info = LeaseInfo(ownerid, renew_secret,
+                               expiration_time, storageserver.get_serverid())
+
+        if testv_is_good:
+            # now apply the write vectors
+            for shnum in test_and_write_vectors:
+                (testv, datav, new_length) = test_and_write_vectors[shnum]
+                if new_length == 0:
+                    if shnum in shares:
+                        shares[shnum].unlink()
+                else:
+                    if shnum not in shares:
+                        # allocate a new share
+                        share = self._create_mutable_share(storageserver, shnum, write_enabler)
+                        shares[shnum] = share
+                    shares[shnum].writev(datav, new_length)
+                    # and update the lease
+                    shares[shnum].add_or_renew_lease(lease_info)
+
+            if new_length == 0:
+                self._clean_up_after_unlink()
+
+        return (testv_is_good, read_data)
+
+    def readv(self, wanted_shnums, read_vector):
+        """
+        Read a vector from the numbered shares in this shareset. An empty
+        shares list means to return data from all known shares.
+
+        @param wanted_shnums=ListOf(int)
+        @param read_vector=ReadVector
+        @return DictOf(int, ReadData): shnum -> results, with one key per share
+        """
+        datavs = {}
+        for share in self.get_shares():
+            shnum = share.get_shnum()
+            if not wanted_shnums or shnum in wanted_shnums:
+                datavs[shnum] = share.readv(read_vector)
+
+        return datavs
+
+
+def testv_compare(a, op, b):
+    assert op in ("lt", "le", "eq", "ne", "ge", "gt")
+    if op == "lt":
+        return a < b
+    if op == "le":
+        return a <= b
+    if op == "eq":
+        return a == b
+    if op == "ne":
+        return a != b
+    if op == "ge":
+        return a >= b
+    if op == "gt":
+        return a > b
+    # never reached
+
+
+class EmptyShare:
+    def check_testv(self, testv):
+        test_good = True
+        for (offset, length, operator, specimen) in testv:
+            data = ""
+            if not testv_compare(data, operator, specimen):
+                test_good = False
+                break
+        return test_good
+
addfile ./src/allmydata/storage/backends/disk/__init__.py
addfile ./src/allmydata/storage/backends/disk/disk_backend.py
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 1
+
+import re
+
+from twisted.python.filepath import UnlistableError
+
+from zope.interface import implements
+from allmydata.interfaces import IStorageBackend, IShareSet
+from allmydata.util import fileutil, log, time_format
+from allmydata.storage.common import si_b2a, si_a2b
+from allmydata.storage.bucket import BucketWriter
+from allmydata.storage.backends.base import Backend, ShareSet
+from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
+from allmydata.storage.backends.disk.mutable import MutableDiskShare, create_mutable_disk_share
+
+# storage/
+# storage/shares/incoming
+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
+# storage/shares/$START/$STORAGEINDEX
+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
+
+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
+# base-32 chars).
+# $SHARENUM matches this regex:
+NUM_RE=re.compile("^[0-9]+$")
+
+
+def si_si2dir(startfp, storageindex):
+    sia = si_b2a(storageindex)
+    newfp = startfp.child(sia[:2])
+    return newfp.child(sia)
+
+
+def get_share(fp):
+    f = fp.open('rb')
+    try:
+        prefix = f.read(32)
+    finally:
+        f.close()
+
+    if prefix == MutableDiskShare.MAGIC:
+        return MutableDiskShare(fp)
+    else:
+        # assume it's immutable
+        return ImmutableDiskShare(fp)
+
+
+class DiskBackend(Backend):
+    implements(IStorageBackend)
+
+    def __init__(self, storedir, readonly=False, reserved_space=0, discard_storage=False):
+        Backend.__init__(self)
+        self._setup_storage(storedir, readonly, reserved_space, discard_storage)
+        self._setup_corruption_advisory()
+
+    def _setup_storage(self, storedir, readonly, reserved_space, discard_storage):
+        self._storedir = storedir
+        self._readonly = readonly
+        self._reserved_space = int(reserved_space)
+        self._discard_storage = discard_storage
+        self._sharedir = self._storedir.child("shares")
+        fileutil.fp_make_dirs(self._sharedir)
+        self._incomingdir = self._sharedir.child('incoming')
+        self._clean_incomplete()
+        if self._reserved_space and (self.get_available_space() is None):
+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                    umid="0wZ27w", level=log.UNUSUAL)
+
+    def _clean_incomplete(self):
+        fileutil.fp_remove(self._incomingdir)
+        fileutil.fp_make_dirs(self._incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self._corruption_advisory_dir = self._storedir.child("corruption-advisories")
+
+    def _make_shareset(self, sharehomedir):
+        return self.get_shareset(si_a2b(sharehomedir.basename()))
+
+    def get_sharesets_for_prefix(self, prefix):
+        prefixfp = self._sharedir.child(prefix)
+        try:
+            sharesets = map(self._make_shareset, prefixfp.children())
+            def _by_base32si(b):
+                return b.get_storage_index_string()
+            sharesets.sort(key=_by_base32si)
+        except EnvironmentError:
+            sharesets = []
+        return sharesets
+
+    def get_shareset(self, storageindex):
+        sharehomedir = si_si2dir(self._sharedir, storageindex)
+        incominghomedir = si_si2dir(self._incomingdir, storageindex)
+        return DiskShareSet(storageindex, sharehomedir, incominghomedir, discard_storage=self._discard_storage)
+
+    def fill_in_space_stats(self, stats):
+        stats['storage_server.reserved_space'] = self._reserved_space
+        try:
+            disk = fileutil.get_disk_stats(self._sharedir, self._reserved_space)
+            writeable = disk['avail'] > 0
+
+            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
+            stats['storage_server.disk_total'] = disk['total']
+            stats['storage_server.disk_used'] = disk['used']
+            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
+            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
+            stats['storage_server.disk_avail'] = disk['avail']
+        except AttributeError:
+            writeable = True
+        except EnvironmentError:
+            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
+            writeable = False
+
+        if self._readonly:
+            stats['storage_server.disk_avail'] = 0
+            writeable = False
+
+        stats['storage_server.accepting_immutable_shares'] = int(writeable)
+
+    def get_available_space(self):
+        if self._readonly:
+            return 0
+        return fileutil.get_available_space(self._sharedir, self._reserved_space)
+
+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
+        fileutil.fp_make_dirs(self._corruption_advisory_dir)
+        now = time_format.iso_utc(sep="T")
+        si_s = si_b2a(storageindex)
+
+        # Windows can't handle colons in the filename.
+        name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "")
+        f = self._corruption_advisory_dir.child(name).open("w")
+        try:
+            f.write("report: Share Corruption\n")
+            f.write("type: %s\n" % sharetype)
+            f.write("storage_index: %s\n" % si_s)
+            f.write("share_number: %d\n" % shnum)
+            f.write("\n")
+            f.write(reason)
+            f.write("\n")
+        finally:
+            f.close()
+
+        log.msg(format=("client claims corruption in (%(share_type)s) " +
+                        "%(si)s-%(shnum)d: %(reason)s"),
+                share_type=sharetype, si=si_s, shnum=shnum, reason=reason,
+                level=log.SCARY, umid="SGx2fA")
+
+
+class DiskShareSet(ShareSet):
+    implements(IShareSet)
+
+    def __init__(self, storageindex, sharehomedir, incominghomedir=None, discard_storage=False):
+        ShareSet.__init__(self, storageindex)
+        self._sharehomedir = sharehomedir
+        self._incominghomedir = incominghomedir
+        self._discard_storage = discard_storage
+
+    def get_overhead(self):
+        return (fileutil.get_disk_usage(self._sharehomedir) +
+                fileutil.get_disk_usage(self._incominghomedir))
+
+    def get_shares(self):
+        """
+        Generate IStorageBackendShare objects for shares we have for this storage index.
+        ("Shares we have" means completed ones, excluding incoming ones.)
+        """
+        try:
+            for fp in self._sharehomedir.children():
+                shnumstr = fp.basename()
+                if not NUM_RE.match(shnumstr):
+                    continue
+                sharehome = self._sharehomedir.child(shnumstr)
+                yield self.get_share(sharehome)
+        except UnlistableError:
+            # There is no shares directory at all.
+            pass
+
+    def has_incoming(self, shnum):
+        if self._incominghomedir is None:
+            return False
+        return self._incominghomedir.child(str(shnum)).exists()
+
+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
+        sharehome = self._sharehomedir.child(str(shnum))
+        incominghome = self._incominghomedir.child(str(shnum))
+        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
+                                   max_size=max_space_per_bucket, create=True)
+        bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
+        if self._discard_storage:
+            bw.throw_out_all_data = True
+        return bw
+
+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
+        fileutil.fp_make_dirs(self._sharehomedir)
+        sharehome = self._sharehomedir.child(str(shnum))
+        serverid = storageserver.get_serverid()
+        return create_mutable_disk_share(sharehome, serverid, write_enabler, storageserver)
+
+    def _clean_up_after_unlink(self):
+        fileutil.fp_rmdir_if_empty(self._sharehomedir)
+
hunk ./src/allmydata/storage/backends/disk/immutable.py 1
-import os, stat, struct, time

hunk ./src/allmydata/storage/backends/disk/immutable.py 2
-from foolscap.api import Referenceable
+import struct

 from zope.interface import implements
hunk ./src/allmydata/storage/backends/disk/immutable.py 5
-from allmydata.interfaces import RIBucketWriter, RIBucketReader
-from allmydata.util import base32, fileutil, log
+
+from allmydata.interfaces import IStoredShare
+from allmydata.util import fileutil
 from allmydata.util.assertutil import precondition
hunk ./src/allmydata/storage/backends/disk/immutable.py 9
+from allmydata.util.fileutil import fp_make_dirs
 from allmydata.util.hashutil import constant_time_compare
hunk ./src/allmydata/storage/backends/disk/immutable.py 11
+from allmydata.util.encodingutil import quote_filepath
+from allmydata.storage.common import si_b2a, UnknownImmutableContainerVersionError, DataTooLargeError
 from allmydata.storage.lease import LeaseInfo
hunk ./src/allmydata/storage/backends/disk/immutable.py 14
-from allmydata.storage.common import UnknownImmutableContainerVersionError, \
-     DataTooLargeError
+

 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
 # and share data. The share data is accessed by RIBucketWriter.write and
hunk ./src/allmydata/storage/backends/disk/immutable.py 41
 # then the value stored in this field will be the actual share data length
 # modulo 2**32.

-class ShareFile:
-    LEASE_SIZE = struct.calcsize(">L32s32sL")
+class ImmutableDiskShare(object):
+    implements(IStoredShare)
+
     sharetype = "immutable"
hunk ./src/allmydata/storage/backends/disk/immutable.py 45
+    LEASE_SIZE = struct.calcsize(">L32s32sL")
+

hunk ./src/allmydata/storage/backends/disk/immutable.py 48
-    def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
hunk ./src/allmydata/storage/backends/disk/immutable.py 53
-        self.home = filename
+        self._storageindex = storageindex
         self._max_size = max_size
hunk ./src/allmydata/storage/backends/disk/immutable.py 55
+        self._incominghome = incominghome
+        self._home = finalhome
+        self._shnum = shnum
         if create:
             # touch the file, so later callers will see that we're working on
             # it. Also construct the metadata.
hunk ./src/allmydata/storage/backends/disk/immutable.py 61
-            assert not os.path.exists(self.home)
-            fileutil.make_dirs(os.path.dirname(self.home))
-            f = open(self.home, 'wb')
+            assert not finalhome.exists()
+            fp_make_dirs(self._incominghome.parent())
             # The second field -- the four-byte share data length -- is no
             # longer used as of Tahoe v1.3.0, but we continue to write it in
             # there in case someone downgrades a storage server from >=
hunk ./src/allmydata/storage/backends/disk/immutable.py 72
             # the largest length that can fit into the field. That way, even
             # if this does happen, the old < v1.3.0 server will still allow
             # clients to read the first part of the share.
-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
-            f.close()
+            self._incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
             self._lease_offset = max_size + 0x0c
             self._num_leases = 0
         else:
hunk ./src/allmydata/storage/backends/disk/immutable.py 76
-            f = open(self.home, 'rb')
-            filesize = os.path.getsize(self.home)
-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-            f.close()
+            f = self._home.open(mode='rb')
+            try:
+                (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            finally:
+                f.close()
+            filesize = self._home.getsize()
             if version != 1:
                 msg = "sharefile %s had version %d but we wanted 1" % \
hunk ./src/allmydata/storage/backends/disk/immutable.py 84
-                      (filename, version)
+                      (self._home, version)
                 raise UnknownImmutableContainerVersionError(msg)
             self._num_leases = num_leases
             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
hunk ./src/allmydata/storage/backends/disk/immutable.py 90
         self._data_offset = 0xc

+    def __repr__(self):
+        return ("<ImmutableDiskShare %s:%r at %s>"
+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
+
+    def close(self):
+        fileutil.fp_make_dirs(self._home.parent())
+        self._incominghome.moveTo(self._home)
+        try:
+            # self._incominghome is like storage/shares/incoming/ab/abcde/4 .
+            # We try to delete the parent (.../ab/abcde) to avoid leaving
+            # these directories lying around forever, but the delete might
+            # fail if we're working on another share for the same storage
+            # index (like ab/abcde/5). The alternative approach would be to
+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
+            # ShareWriter), each of which is responsible for a single
+            # directory on disk, and have them use reference counting of
+            # their children to know when they should do the rmdir. This
+            # approach is simpler, but relies on os.rmdir refusing to delete
+            # a non-empty directory. Do *not* use fileutil.fp_remove() here!
+            fileutil.fp_rmdir_if_empty(self._incominghome.parent())
+            # we also delete the grandparent (prefix) directory, .../ab ,
+            # again to avoid leaving directories lying around. This might
+            # fail if there is another bucket open that shares a prefix (like
+            # ab/abfff).
+            fileutil.fp_rmdir_if_empty(self._incominghome.parent().parent())
+            # we leave the great-grandparent (incoming/) directory in place.
+        except EnvironmentError:
+            # ignore the "can't rmdir because the directory is not empty"
+            # exceptions, those are normal consequences of the
+            # above-mentioned conditions.
+            pass
+        pass
+
+    def get_used_space(self):
+        return (fileutil.get_used_space(self._home) +
+                fileutil.get_used_space(self._incominghome))
+
+    def get_storage_index(self):
+        return self._storageindex
+
+    def get_shnum(self):
+        return self._shnum
+
     def unlink(self):
hunk ./src/allmydata/storage/backends/disk/immutable.py 134
-        os.unlink(self.home)
+        self._home.remove()
+
+    def get_size(self):
+        return self._home.getsize()
+
+    def get_data_length(self):
+        return self._lease_offset - self._data_offset
+
+    #def readv(self, read_vector):
+    #    ...

     def read_share_data(self, offset, length):
         precondition(offset >= 0)
hunk ./src/allmydata/storage/backends/disk/immutable.py 147
-        # reads beyond the end of the data are truncated. Reads that start
+
+        # Reads beyond the end of the data are truncated. Reads that start
         # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         actuallength = max(0, min(length, self._lease_offset-seekpos))
hunk ./src/allmydata/storage/backends/disk/immutable.py 154
         if actuallength == 0:
             return ""
-        f = open(self.home, 'rb')
-        f.seek(seekpos)
-        return f.read(actuallength)
+        f = self._home.open(mode='rb')
+        try:
+            f.seek(seekpos)
+            sharedata = f.read(actuallength)
+        finally:
+            f.close()
+        return sharedata

     def write_share_data(self, offset, data):
         length = len(data)
hunk ./src/allmydata/storage/backends/disk/immutable.py 167
         precondition(offset >= 0, offset)
         if self._max_size is not None and offset+length > self._max_size:
             raise DataTooLargeError(self._max_size, offset, length)
-        f = open(self.home, 'rb+')
-        real_offset = self._data_offset+offset
-        f.seek(real_offset)
-        assert f.tell() == real_offset
-        f.write(data)
-        f.close()
+        f = self._incominghome.open(mode='rb+')
+        try:
+            real_offset = self._data_offset+offset
+            f.seek(real_offset)
+            assert f.tell() == real_offset
+            f.write(data)
+        finally:
+            f.close()

     def _write_lease_record(self, f, lease_number, lease_info):
         offset = self._lease_offset + lease_number * self.LEASE_SIZE
hunk ./src/allmydata/storage/backends/disk/immutable.py 184

     def _read_num_leases(self, f):
         f.seek(0x08)
-        (num_leases,) = struct.unpack(">L", f.read(4))
+        ro = f.read(4)
+        (num_leases,) = struct.unpack(">L", ro)
         return num_leases

     def _write_num_leases(self, f, num_leases):
hunk ./src/allmydata/storage/backends/disk/immutable.py 195
     def _truncate_leases(self, f, num_leases):
         f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)

+    # These lease operations are intended for use by disk_backend.py.
+    # Other clients should not depend on the fact that the disk backend
+    # stores leases in share files.
+
     def get_leases(self):
         """Yields a LeaseInfo instance for all leases."""
hunk ./src/allmydata/storage/backends/disk/immutable.py 201
-        f = open(self.home, 'rb')
-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-        f.seek(self._lease_offset)
-        for i in range(num_leases):
-            data = f.read(self.LEASE_SIZE)
-            if data:
-                yield LeaseInfo().from_immutable_data(data)
+        f = self._home.open(mode='rb')
+        try:
+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            f.seek(self._lease_offset)
+            for i in range(num_leases):
+                data = f.read(self.LEASE_SIZE)
+                if data:
+                    yield LeaseInfo().from_immutable_data(data)
+        finally:
+            f.close()

     def add_lease(self, lease_info):
hunk ./src/allmydata/storage/backends/disk/immutable.py 213
-        f = open(self.home, 'rb+')
-        num_leases = self._read_num_leases(f)
-        self._write_lease_record(f, num_leases, lease_info)
-        self._write_num_leases(f, num_leases+1)
-        f.close()
+        f = self._incominghome.open(mode='rb')
+        try:
+            num_leases = self._read_num_leases(f)
+        finally:
+            f.close()
+        f = self._home.open(mode='wb+')
+        try:
+            self._write_lease_record(f, num_leases, lease_info)
+            self._write_num_leases(f, num_leases+1)
+        finally:
+            f.close()

     def renew_lease(self, renew_secret, new_expire_time):
hunk ./src/allmydata/storage/backends/disk/immutable.py 226
-        for i,lease in enumerate(self.get_leases()):
-            if constant_time_compare(lease.renew_secret, renew_secret):
-                # yup. See if we need to update the owner time.
-                if new_expire_time > lease.expiration_time:
-                    # yes
-                    lease.expiration_time = new_expire_time
-                    f = open(self.home, 'rb+')
-                    self._write_lease_record(f, i, lease)
-                    f.close()
-                return
+        try:
+            for i, lease in enumerate(self.get_leases()):
+                if constant_time_compare(lease.renew_secret, renew_secret):
+                    # yup. See if we need to update the owner time.
+                    if new_expire_time > lease.expiration_time:
+                        # yes
+                        lease.expiration_time = new_expire_time
+                        f = self._home.open('rb+')
+                        try:
+                            self._write_lease_record(f, i, lease)
+                        finally:
+                            f.close()
+                    return
+        except IndexError, e:
+            raise Exception("IndexError: %s" % (e,))
         raise IndexError("unable to renew non-existent lease")

     def add_or_renew_lease(self, lease_info):
hunk ./src/allmydata/storage/backends/disk/immutable.py 249
                              lease_info.expiration_time)
         except IndexError:
             self.add_lease(lease_info)
-
-
-    def cancel_lease(self, cancel_secret):
-        """Remove a lease with the given cancel_secret. If the last lease is
-        cancelled, the file will be removed. Return the number of bytes that
-        were freed (by truncating the list of leases, and possibly by
-        deleting the file. Raise IndexError if there was no lease with the
-        given cancel_secret.
-        """
-
-        leases = list(self.get_leases())
-        num_leases_removed = 0
-        for i,lease in enumerate(leases):
-            if constant_time_compare(lease.cancel_secret, cancel_secret):
-                leases[i] = None
-                num_leases_removed += 1
-        if not num_leases_removed:
-            raise IndexError("unable to find matching lease to cancel")
-        if num_leases_removed:
-            # pack and write out the remaining leases. We write these out in
-            # the same order as they were added, so that if we crash while
-            # doing this, we won't lose any non-cancelled leases.
-            leases = [l for l in leases if l] # remove the cancelled leases
-            f = open(self.home, 'rb+')
-            for i,lease in enumerate(leases):
-                self._write_lease_record(f, i, lease)
-            self._write_num_leases(f, len(leases))
-            self._truncate_leases(f, len(leases))
-            f.close()
-        space_freed = self.LEASE_SIZE * num_leases_removed
-        if not len(leases):
-            space_freed += os.stat(self.home)[stat.ST_SIZE]
-            self.unlink()
-        return space_freed
-
-
-class BucketWriter(Referenceable):
-    implements(RIBucketWriter)
-
-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
-        self.ss = ss
-        self.incominghome = incominghome
-        self.finalhome = finalhome
-        self._max_size = max_size # don't allow the client to write more than this
-        self._canary = canary
-        self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
-        self.closed = False
-        self.throw_out_all_data = False
-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
-        # also, add our lease to the file now, so that other ones can be
-        # added by simultaneous uploaders
-        self._sharefile.add_lease(lease_info)
-
-    def allocated_size(self):
-        return self._max_size
-
-    def remote_write(self, offset, data):
-        start = time.time()
-        precondition(not self.closed)
-        if self.throw_out_all_data:
-            return
-        self._sharefile.write_share_data(offset, data)
-        self.ss.add_latency("write", time.time() - start)
-        self.ss.count("write")
-
-    def remote_close(self):
-        precondition(not self.closed)
-        start = time.time()
-
-        fileutil.make_dirs(os.path.dirname(self.finalhome))
-        fileutil.rename(self.incominghome, self.finalhome)
-        try:
-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
-            # We try to delete the parent (.../ab/abcde) to avoid leaving
-            # these directories lying around forever, but the delete might
-            # fail if we're working on another share for the same storage
-            # index (like ab/abcde/5). The alternative approach would be to
-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
-            # ShareWriter), each of which is responsible for a single
-            # directory on disk, and have them use reference counting of
-            # their children to know when they should do the rmdir. This
-            # approach is simpler, but relies on os.rmdir refusing to delete
-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
-            os.rmdir(os.path.dirname(self.incominghome))
-            # we also delete the grandparent (prefix) directory, .../ab ,
-            # again to avoid leaving directories lying around. This might
-            # fail if there is another bucket open that shares a prefix (like
-            # ab/abfff).
-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
-            # we leave the great-grandparent (incoming/) directory in place.
-        except EnvironmentError:
-            # ignore the "can't rmdir because the directory is not empty"
-            # exceptions, those are normal consequences of the
-            # above-mentioned conditions.
-            pass
-        self._sharefile = None
-        self.closed = True
-        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
-
-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
-        self.ss.bucket_writer_closed(self, filelen)
-        self.ss.add_latency("close", time.time() - start)
-        self.ss.count("close")
-
-    def _disconnected(self):
-        if not self.closed:
-            self._abort()
-
-    def remote_abort(self):
-        log.msg("storage: aborting sharefile %s" % self.incominghome,
-                facility="tahoe.storage", level=log.UNUSUAL)
-        if not self.closed:
-            self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
-        self._abort()
-        self.ss.count("abort")
-
-    def _abort(self):
-        if self.closed:
-            return
-
-        os.remove(self.incominghome)
-        # if we were the last share to be moved, remove the incoming/
-        # directory that was our parent
-        parentdir = os.path.split(self.incominghome)[0]
-        if not os.listdir(parentdir):
-            os.rmdir(parentdir)
-        self._sharefile = None
-
-        # We are now considered closed for further writing. We must tell
-        # the storage server about this so that it stops expecting us to
-        # use the space it allocated for us earlier.
-        self.closed = True
-        self.ss.bucket_writer_closed(self, 0)
-
-
-class BucketReader(Referenceable):
-    implements(RIBucketReader)
-
-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
-        self.ss = ss
-        self._share_file = ShareFile(sharefname)
-        self.storage_index = storage_index
-        self.shnum = shnum
-
-    def __repr__(self):
-        return "<%s %s %s>" % (self.__class__.__name__,
-                               base32.b2a_l(self.storage_index[:8], 60),
-                               self.shnum)
-
-    def remote_read(self, offset, length):
-        start = time.time()
-        data = self._share_file.read_share_data(offset, length)
-        self.ss.add_latency("read", time.time() - start)
-        self.ss.count("read")
-        return data
-
-    def remote_advise_corrupt_share(self, reason):
-        return self.ss.remote_advise_corrupt_share("immutable",
-                                                   self.storage_index,
-                                                   self.shnum,
-                                                   reason)
1224hunk ./src/allmydata/storage/backends/disk/mutable.py 1
1225-import os, stat, struct
1226 
1227hunk ./src/allmydata/storage/backends/disk/mutable.py 2
1228-from allmydata.interfaces import BadWriteEnablerError
1229-from allmydata.util import idlib, log
1230+import struct
1231+
1232+from zope.interface import implements
1233+
1234+from allmydata.interfaces import IStoredMutableShare, BadWriteEnablerError
1235+from allmydata.util import fileutil, idlib, log
1236 from allmydata.util.assertutil import precondition
1237 from allmydata.util.hashutil import constant_time_compare
1238hunk ./src/allmydata/storage/backends/disk/mutable.py 10
1239-from allmydata.storage.lease import LeaseInfo
1240-from allmydata.storage.common import UnknownMutableContainerVersionError, \
1241+from allmydata.util.encodingutil import quote_filepath
1242+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
1243      DataTooLargeError
1244hunk ./src/allmydata/storage/backends/disk/mutable.py 13
1245+from allmydata.storage.lease import LeaseInfo
1246+from allmydata.storage.backends.base import testv_compare
1247 
1248hunk ./src/allmydata/storage/backends/disk/mutable.py 16
1249-# the MutableShareFile is like the ShareFile, but used for mutable data. It
1250-# has a different layout. See docs/mutable.txt for more details.
1251+
1252+# The MutableDiskShare is like the ImmutableDiskShare, but used for mutable data.
1253+# It has a different layout. See docs/mutable.rst for more details.
1254 
1255 # #   offset    size    name
1256 # 1   0         32      magic verstr "tahoe mutable container v1" plus binary
1257hunk ./src/allmydata/storage/backends/disk/mutable.py 31
1258 #                        4    4   expiration timestamp
1259 #                        8   32   renewal token
1260 #                        40  32   cancel token
1261-#                        72  20   nodeid which accepted the tokens
1262+#                        72  20   nodeid that accepted the tokens
1263 # 7   468       (a)     data
1264 # 8   ??        4       count of extra leases
1265 # 9   ??        n*92    extra leases
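
As a quick reference for the layout above, a minimal sketch of unpacking the first five header fields with the same struct format the code uses; the helper name read_mutable_header is hypothetical:

    import struct

    HEADER_SIZE = struct.calcsize(">32s20s32sQQ")   # fields 1-5 of the layout above

    def read_mutable_header(f):
        # Hypothetical helper: unpack magic, write-enabler nodeid, write enabler,
        # data length and extra-lease offset from a share file open in binary mode.
        f.seek(0)
        (magic, write_enabler_nodeid, write_enabler,
         data_length, extra_lease_offset) = struct.unpack(">32s20s32sQQ", f.read(HEADER_SIZE))
        return (magic, data_length, extra_lease_offset)
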
1266hunk ./src/allmydata/storage/backends/disk/mutable.py 37
1267 
1268 
1269-# The struct module doc says that L's are 4 bytes in size., and that Q's are
1270+# The struct module doc says that L's are 4 bytes in size, and that Q's are
1271 # 8 bytes in size. Since compatibility depends upon this, double-check it.
1272 assert struct.calcsize(">L") == 4, struct.calcsize(">L")
1273 assert struct.calcsize(">Q") == 8, struct.calcsize(">Q")
1274hunk ./src/allmydata/storage/backends/disk/mutable.py 42
1275 
1276-class MutableShareFile:
1277+
1278+class MutableDiskShare(object):
1279+    implements(IStoredMutableShare)
1280 
1281     sharetype = "mutable"
1282     DATA_LENGTH_OFFSET = struct.calcsize(">32s20s32s")
1283hunk ./src/allmydata/storage/backends/disk/mutable.py 54
1284     assert LEASE_SIZE == 92
1285     DATA_OFFSET = HEADER_SIZE + 4*LEASE_SIZE
1286     assert DATA_OFFSET == 468, DATA_OFFSET
1287+
1288     # our sharefiles share with a recognizable string, plus some random
1289     # binary data to reduce the chance that a regular text file will look
1290     # like a sharefile.
1291hunk ./src/allmydata/storage/backends/disk/mutable.py 63
1292     MAX_SIZE = 2*1000*1000*1000 # 2GB, kind of arbitrary
1293     # TODO: decide upon a policy for max share size
1294 
1295-    def __init__(self, filename, parent=None):
1296-        self.home = filename
1297-        if os.path.exists(self.home):
1298+    def __init__(self, storageindex, shnum, home, parent=None):
1299+        self._storageindex = storageindex
1300+        self._shnum = shnum
1301+        self._home = home
1302+        if self._home.exists():
1303             # we don't cache anything, just check the magic
1304hunk ./src/allmydata/storage/backends/disk/mutable.py 69
1305-            f = open(self.home, 'rb')
1306-            data = f.read(self.HEADER_SIZE)
1307-            (magic,
1308-             write_enabler_nodeid, write_enabler,
1309-             data_length, extra_least_offset) = \
1310-             struct.unpack(">32s20s32sQQ", data)
1311-            if magic != self.MAGIC:
1312-                msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
1313-                      (filename, magic, self.MAGIC)
1314-                raise UnknownMutableContainerVersionError(msg)
1315+            f = self._home.open('rb')
1316+            try:
1317+                data = f.read(self.HEADER_SIZE)
1318+                (magic,
1319+                 write_enabler_nodeid, write_enabler,
1320+                 data_length, extra_lease_offset) = \
1321+                 struct.unpack(">32s20s32sQQ", data)
1322+                if magic != self.MAGIC:
1323+                    msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
1324+                          (quote_filepath(self._home), magic, self.MAGIC)
1325+                    raise UnknownMutableContainerVersionError(msg)
1326+            finally:
1327+                f.close()
1328         self.parent = parent # for logging
1329 
1330     def log(self, *args, **kwargs):
1331hunk ./src/allmydata/storage/backends/disk/mutable.py 87
1332         return self.parent.log(*args, **kwargs)
1333 
1334-    def create(self, my_nodeid, write_enabler):
1335-        assert not os.path.exists(self.home)
1336+    def create(self, serverid, write_enabler):
1337+        assert not self._home.exists()
1338         data_length = 0
1339         extra_lease_offset = (self.HEADER_SIZE
1340                               + 4 * self.LEASE_SIZE
1341hunk ./src/allmydata/storage/backends/disk/mutable.py 95
1342                               + data_length)
1343         assert extra_lease_offset == self.DATA_OFFSET # true at creation
1344         num_extra_leases = 0
1345-        f = open(self.home, 'wb')
1346-        header = struct.pack(">32s20s32sQQ",
1347-                             self.MAGIC, my_nodeid, write_enabler,
1348-                             data_length, extra_lease_offset,
1349-                             )
1350-        leases = ("\x00"*self.LEASE_SIZE) * 4
1351-        f.write(header + leases)
1352-        # data goes here, empty after creation
1353-        f.write(struct.pack(">L", num_extra_leases))
1354-        # extra leases go here, none at creation
1355-        f.close()
1356+        f = self._home.open('wb')
1357+        try:
1358+            header = struct.pack(">32s20s32sQQ",
1359+                                 self.MAGIC, serverid, write_enabler,
1360+                                 data_length, extra_lease_offset,
1361+                                 )
1362+            leases = ("\x00"*self.LEASE_SIZE) * 4
1363+            f.write(header + leases)
1364+            # data goes here, empty after creation
1365+            f.write(struct.pack(">L", num_extra_leases))
1366+            # extra leases go here, none at creation
1367+        finally:
1368+            f.close()
1369+
1370+    def __repr__(self):
1371+        return ("<MutableDiskShare %s:%r at %s>"
1372+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
1373+
1374+    def get_used_space(self):
1375+        return fileutil.get_used_space(self._home)
1376+
1377+    def get_storage_index(self):
1378+        return self._storageindex
1379+
1380+    def get_shnum(self):
1381+        return self._shnum
1382 
1383     def unlink(self):
1384hunk ./src/allmydata/storage/backends/disk/mutable.py 123
1385-        os.unlink(self.home)
1386+        self._home.remove()
1387 
1388     def _read_data_length(self, f):
1389         f.seek(self.DATA_LENGTH_OFFSET)
1390hunk ./src/allmydata/storage/backends/disk/mutable.py 291
1391 
1392     def get_leases(self):
1393         """Yields a LeaseInfo instance for all leases."""
1394-        f = open(self.home, 'rb')
1395-        for i, lease in self._enumerate_leases(f):
1396-            yield lease
1397-        f.close()
1398+        f = self._home.open('rb')
1399+        try:
1400+            for i, lease in self._enumerate_leases(f):
1401+                yield lease
1402+        finally:
1403+            f.close()
1404 
1405     def _enumerate_leases(self, f):
1406         for i in range(self._get_num_lease_slots(f)):
1407hunk ./src/allmydata/storage/backends/disk/mutable.py 303
1408             try:
1409                 data = self._read_lease_record(f, i)
1410                 if data is not None:
1411-                    yield i,data
1412+                    yield i, data
1413             except IndexError:
1414                 return
1415 
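
For orientation, a brief usage sketch of the generator above; 'share' stands for a MutableDiskShare, and the attribute names follow the LeaseInfo constructor arguments used elsewhere in this file:

    # Enumerate the leases on a mutable share (usage sketch):
    for lease in share.get_leases():
        # LeaseInfo carries owner_num, renew_secret, cancel_secret,
        # expiration_time and nodeid.
        print lease.owner_num, lease.expiration_time
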
1416hunk ./src/allmydata/storage/backends/disk/mutable.py 307
1417+    # These lease operations are intended for use by disk_backend.py.
1418+    # Other non-test clients should not depend on the fact that the disk
1419+    # backend stores leases in share files.
1420+
1421     def add_lease(self, lease_info):
1422         precondition(lease_info.owner_num != 0) # 0 means "no lease here"
1423hunk ./src/allmydata/storage/backends/disk/mutable.py 313
1424-        f = open(self.home, 'rb+')
1425-        num_lease_slots = self._get_num_lease_slots(f)
1426-        empty_slot = self._get_first_empty_lease_slot(f)
1427-        if empty_slot is not None:
1428-            self._write_lease_record(f, empty_slot, lease_info)
1429-        else:
1430-            self._write_lease_record(f, num_lease_slots, lease_info)
1431-        f.close()
1432+        f = self._home.open('rb+')
1433+        try:
1434+            num_lease_slots = self._get_num_lease_slots(f)
1435+            empty_slot = self._get_first_empty_lease_slot(f)
1436+            if empty_slot is not None:
1437+                self._write_lease_record(f, empty_slot, lease_info)
1438+            else:
1439+                self._write_lease_record(f, num_lease_slots, lease_info)
1440+        finally:
1441+            f.close()
1442 
1443     def renew_lease(self, renew_secret, new_expire_time):
1444         accepting_nodeids = set()
1445hunk ./src/allmydata/storage/backends/disk/mutable.py 326
1446-        f = open(self.home, 'rb+')
1447-        for (leasenum,lease) in self._enumerate_leases(f):
1448-            if constant_time_compare(lease.renew_secret, renew_secret):
1449-                # yup. See if we need to update the owner time.
1450-                if new_expire_time > lease.expiration_time:
1451-                    # yes
1452-                    lease.expiration_time = new_expire_time
1453-                    self._write_lease_record(f, leasenum, lease)
1454-                f.close()
1455-                return
1456-            accepting_nodeids.add(lease.nodeid)
1457-        f.close()
1458+        f = self._home.open('rb+')
1459+        try:
1460+            for (leasenum, lease) in self._enumerate_leases(f):
1461+                if constant_time_compare(lease.renew_secret, renew_secret):
1462+                    # yup. See if we need to update the owner time.
1463+                    if new_expire_time > lease.expiration_time:
1464+                        # yes
1465+                        lease.expiration_time = new_expire_time
1466+                        self._write_lease_record(f, leasenum, lease)
1467+                    return
1468+                accepting_nodeids.add(lease.nodeid)
1469+        finally:
1470+            f.close()
1471         # Return the accepting_nodeids set, to give the client a chance to
1472hunk ./src/allmydata/storage/backends/disk/mutable.py 340
1473-        # update the leases on a share which has been migrated from its
1474+        # update the leases on a share that has been migrated from its
1475         # original server to a new one.
1476         msg = ("Unable to renew non-existent lease. I have leases accepted by"
1477                " nodeids: ")
1478hunk ./src/allmydata/storage/backends/disk/mutable.py 357
1479         except IndexError:
1480             self.add_lease(lease_info)
1481 
1482-    def cancel_lease(self, cancel_secret):
1483-        """Remove any leases with the given cancel_secret. If the last lease
1484-        is cancelled, the file will be removed. Return the number of bytes
1485-        that were freed (by truncating the list of leases, and possibly by
1486-        deleting the file. Raise IndexError if there was no lease with the
1487-        given cancel_secret."""
1488-
1489-        accepting_nodeids = set()
1490-        modified = 0
1491-        remaining = 0
1492-        blank_lease = LeaseInfo(owner_num=0,
1493-                                renew_secret="\x00"*32,
1494-                                cancel_secret="\x00"*32,
1495-                                expiration_time=0,
1496-                                nodeid="\x00"*20)
1497-        f = open(self.home, 'rb+')
1498-        for (leasenum,lease) in self._enumerate_leases(f):
1499-            accepting_nodeids.add(lease.nodeid)
1500-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1501-                self._write_lease_record(f, leasenum, blank_lease)
1502-                modified += 1
1503-            else:
1504-                remaining += 1
1505-        if modified:
1506-            freed_space = self._pack_leases(f)
1507-            f.close()
1508-            if not remaining:
1509-                freed_space += os.stat(self.home)[stat.ST_SIZE]
1510-                self.unlink()
1511-            return freed_space
1512-
1513-        msg = ("Unable to cancel non-existent lease. I have leases "
1514-               "accepted by nodeids: ")
1515-        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
1516-                         for anid in accepting_nodeids])
1517-        msg += " ."
1518-        raise IndexError(msg)
1519-
1520-    def _pack_leases(self, f):
1521-        # TODO: reclaim space from cancelled leases
1522-        return 0
1523-
1524     def _read_write_enabler_and_nodeid(self, f):
1525         f.seek(0)
1526         data = f.read(self.HEADER_SIZE)
1527hunk ./src/allmydata/storage/backends/disk/mutable.py 369
1528 
1529     def readv(self, readv):
1530         datav = []
1531-        f = open(self.home, 'rb')
1532-        for (offset, length) in readv:
1533-            datav.append(self._read_share_data(f, offset, length))
1534-        f.close()
1535+        f = self._home.open('rb')
1536+        try:
1537+            for (offset, length) in readv:
1538+                datav.append(self._read_share_data(f, offset, length))
1539+        finally:
1540+            f.close()
1541         return datav
1542 
1543hunk ./src/allmydata/storage/backends/disk/mutable.py 377
1544-#    def remote_get_length(self):
1545-#        f = open(self.home, 'rb')
1546-#        data_length = self._read_data_length(f)
1547-#        f.close()
1548-#        return data_length
1549+    def get_size(self):
1550+        return self._home.getsize()
1551+
1552+    def get_data_length(self):
1553+        f = self._home.open('rb')
1554+        try:
1555+            data_length = self._read_data_length(f)
1556+        finally:
1557+            f.close()
1558+        return data_length
1559 
1560     def check_write_enabler(self, write_enabler, si_s):
1561hunk ./src/allmydata/storage/backends/disk/mutable.py 389
1562-        f = open(self.home, 'rb+')
1563-        (real_write_enabler, write_enabler_nodeid) = \
1564-                             self._read_write_enabler_and_nodeid(f)
1565-        f.close()
1566+        f = self._home.open('rb+')
1567+        try:
1568+            (real_write_enabler, write_enabler_nodeid) = self._read_write_enabler_and_nodeid(f)
1569+        finally:
1570+            f.close()
1571         # avoid a timing attack
1572         #if write_enabler != real_write_enabler:
1573         if not constant_time_compare(write_enabler, real_write_enabler):
1574hunk ./src/allmydata/storage/backends/disk/mutable.py 410
1575 
1576     def check_testv(self, testv):
1577         test_good = True
1578-        f = open(self.home, 'rb+')
1579-        for (offset, length, operator, specimen) in testv:
1580-            data = self._read_share_data(f, offset, length)
1581-            if not testv_compare(data, operator, specimen):
1582-                test_good = False
1583-                break
1584-        f.close()
1585+        f = self._home.open('rb+')
1586+        try:
1587+            for (offset, length, operator, specimen) in testv:
1588+                data = self._read_share_data(f, offset, length)
1589+                if not testv_compare(data, operator, specimen):
1590+                    test_good = False
1591+                    break
1592+        finally:
1593+            f.close()
1594         return test_good
1595 
1596     def writev(self, datav, new_length):
1597hunk ./src/allmydata/storage/backends/disk/mutable.py 422
1598-        f = open(self.home, 'rb+')
1599-        for (offset, data) in datav:
1600-            self._write_share_data(f, offset, data)
1601-        if new_length is not None:
1602-            cur_length = self._read_data_length(f)
1603-            if new_length < cur_length:
1604-                self._write_data_length(f, new_length)
1605-                # TODO: if we're going to shrink the share file when the
1606-                # share data has shrunk, then call
1607-                # self._change_container_size() here.
1608-        f.close()
1609-
1610-def testv_compare(a, op, b):
1611-    assert op in ("lt", "le", "eq", "ne", "ge", "gt")
1612-    if op == "lt":
1613-        return a < b
1614-    if op == "le":
1615-        return a <= b
1616-    if op == "eq":
1617-        return a == b
1618-    if op == "ne":
1619-        return a != b
1620-    if op == "ge":
1621-        return a >= b
1622-    if op == "gt":
1623-        return a > b
1624-    # never reached
1625+        f = self._home.open('rb+')
1626+        try:
1627+            for (offset, data) in datav:
1628+                self._write_share_data(f, offset, data)
1629+            if new_length is not None:
1630+                cur_length = self._read_data_length(f)
1631+                if new_length < cur_length:
1632+                    self._write_data_length(f, new_length)
1633+                    # TODO: if we're going to shrink the share file when the
1634+                    # share data has shrunk, then call
1635+                    # self._change_container_size() here.
1636+        finally:
1637+            f.close()
1638 
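
check_testv now imports testv_compare from allmydata.storage.backends.base (see the imports hunk above); the definition removed above is equivalent to this compact sketch:

    import operator

    _testv_ops = {"lt": operator.lt, "le": operator.le, "eq": operator.eq,
                  "ne": operator.ne, "ge": operator.ge, "gt": operator.gt}

    def testv_compare(a, op, b):
        # Compare share data 'a' against specimen 'b' using the named operator.
        assert op in _testv_ops, op
        return _testv_ops[op](a, b)
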
1639hunk ./src/allmydata/storage/backends/disk/mutable.py 436
1640-class EmptyShare:
1641+    def close(self):
1642+        pass
1643 
1644hunk ./src/allmydata/storage/backends/disk/mutable.py 439
1645-    def check_testv(self, testv):
1646-        test_good = True
1647-        for (offset, length, operator, specimen) in testv:
1648-            data = ""
1649-            if not testv_compare(data, operator, specimen):
1650-                test_good = False
1651-                break
1652-        return test_good
1653 
1654hunk ./src/allmydata/storage/backends/disk/mutable.py 440
1655-def create_mutable_sharefile(filename, my_nodeid, write_enabler, parent):
1656-    ms = MutableShareFile(filename, parent)
1657-    ms.create(my_nodeid, write_enabler)
1658+def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent):
1659+    ms = MutableDiskShare(storageindex, shnum, fp, parent)
1660+    ms.create(serverid, write_enabler)
1661     del ms
1662hunk ./src/allmydata/storage/backends/disk/mutable.py 444
1663-    return MutableShareFile(filename, parent)
1664-
1665+    return MutableDiskShare(storageindex, shnum, fp, parent)
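
A usage sketch of the factory above, assuming the corrected signature that matches the new MutableDiskShare constructor; fp is a Twisted FilePath for the share's home, serverid a 20-byte nodeid, and write_enabler a 32-byte secret:

    # Create and reopen a mutable share (usage sketch):
    share = create_mutable_disk_share(storageindex, shnum, fp, serverid,
                                      write_enabler, parent=None)
    assert share.get_storage_index() == storageindex
    assert share.get_shnum() == shnum
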
1666addfile ./src/allmydata/storage/backends/null/__init__.py
1667addfile ./src/allmydata/storage/backends/null/null_backend.py
1668hunk ./src/allmydata/storage/backends/null/null_backend.py 2
1669 
1670+import os, struct
1671+
1672+from zope.interface import implements
1673+
1674+from allmydata.interfaces import IStorageBackend, IShareSet, IStoredShare, IStoredMutableShare
1675+from allmydata.util.assertutil import precondition
1676+from allmydata.util.hashutil import constant_time_compare
1677+from allmydata.storage.backends.base import Backend, ShareSet
1678+from allmydata.storage.bucket import BucketWriter
1679+from allmydata.storage.common import si_b2a
1680+from allmydata.storage.lease import LeaseInfo
1681+
1682+
1683+class NullBackend(Backend):
1684+    implements(IStorageBackend)
1685+
1686+    def __init__(self):
1687+        Backend.__init__(self)
1688+
1689+    def get_available_space(self, reserved_space):
1690+        return None
1691+
1692+    def get_sharesets_for_prefix(self, prefix):
1693+        pass
1694+
1695+    def get_shareset(self, storageindex):
1696+        return NullShareSet(storageindex)
1697+
1698+    def fill_in_space_stats(self, stats):
1699+        pass
1700+
1701+    def set_storage_server(self, ss):
1702+        self.ss = ss
1703+
1704+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
1705+        pass
1706+
1707+
1708+class NullShareSet(ShareSet):
1709+    implements(IShareSet)
1710+
1711+    def __init__(self, storageindex):
1712+        self.storageindex = storageindex
1713+
1714+    def get_overhead(self):
1715+        return 0
1716+
1717+    def get_incoming_shnums(self):
1718+        return frozenset()
1719+
1720+    def get_shares(self):
1721+        pass
1722+
1723+    def get_share(self, shnum):
1724+        return None
1725+
1726+    def get_storage_index(self):
1727+        return self.storageindex
1728+
1729+    def get_storage_index_string(self):
1730+        return si_b2a(self.storageindex)
1731+
1732+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
1733+        immutableshare = ImmutableNullShare()
1734+        return BucketWriter(storageserver, immutableshare, max_space_per_bucket, lease_info, canary)
1735+
1736+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
1737+        return MutableNullShare()
1738+
1739+    def _clean_up_after_unlink(self):
1740+        pass
1741+
1742+
1743+class ImmutableNullShare:
1744+    implements(IStoredShare)
1745+    sharetype = "immutable"
1746+
1747+    def __init__(self):
1748+        """ If max_size is not None then I won't allow more than
1749+        max_size to be written to me. If create=True then max_size
1750+        must not be None. """
1751+        pass
1752+
1753+    def get_shnum(self):
1754+        return self.shnum
1755+
1756+    def unlink(self):
1757+        os.unlink(self.fname)
1758+
1759+    def read_share_data(self, offset, length):
1760+        precondition(offset >= 0)
1761+        # Reads beyond the end of the data are truncated. Reads that start
1762+        # beyond the end of the data return an empty string.
1763+        seekpos = self._data_offset+offset
1764+        fsize = os.path.getsize(self.fname)
1765+        actuallength = max(0, min(length, fsize-seekpos)) # XXX #1528
1766+        if actuallength == 0:
1767+            return ""
1768+        f = open(self.fname, 'rb')
1769+        f.seek(seekpos)
1770+        return f.read(actuallength)
1771+
1772+    def write_share_data(self, offset, data):
1773+        pass
1774+
1775+    def _write_lease_record(self, f, lease_number, lease_info):
1776+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1777+        f.seek(offset)
1778+        assert f.tell() == offset
1779+        f.write(lease_info.to_immutable_data())
1780+
1781+    def _read_num_leases(self, f):
1782+        f.seek(0x08)
1783+        (num_leases,) = struct.unpack(">L", f.read(4))
1784+        return num_leases
1785+
1786+    def _write_num_leases(self, f, num_leases):
1787+        f.seek(0x08)
1788+        f.write(struct.pack(">L", num_leases))
1789+
1790+    def _truncate_leases(self, f, num_leases):
1791+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1792+
1793+    def get_leases(self):
1794+        """Yields a LeaseInfo instance for all leases."""
1795+        f = open(self.fname, 'rb')
1796+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1797+        f.seek(self._lease_offset)
1798+        for i in range(num_leases):
1799+            data = f.read(self.LEASE_SIZE)
1800+            if data:
1801+                yield LeaseInfo().from_immutable_data(data)
1802+
1803+    def add_lease(self, lease):
1804+        pass
1805+
1806+    def renew_lease(self, renew_secret, new_expire_time):
1807+        for i,lease in enumerate(self.get_leases()):
1808+            if constant_time_compare(lease.renew_secret, renew_secret):
1809+                # yup. See if we need to update the owner time.
1810+                if new_expire_time > lease.expiration_time:
1811+                    # yes
1812+                    lease.expiration_time = new_expire_time
1813+                    f = open(self.fname, 'rb+')
1814+                    self._write_lease_record(f, i, lease)
1815+                    f.close()
1816+                return
1817+        raise IndexError("unable to renew non-existent lease")
1818+
1819+    def add_or_renew_lease(self, lease_info):
1820+        try:
1821+            self.renew_lease(lease_info.renew_secret,
1822+                             lease_info.expiration_time)
1823+        except IndexError:
1824+            self.add_lease(lease_info)
1825+
1826+
1827+class MutableNullShare:
1828+    """ XXX: TODO """
1829+    implements(IStoredMutableShare)
1830+    sharetype = "mutable"
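
A minimal sketch of wiring this backend into a server, using the same two-argument StorageServer construction as the tests later in this patch; all writes are discarded, which is the point of the null backend:

    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.null.null_backend import NullBackend

    ss = StorageServer("testnodeidxxxxxxxxxx", NullBackend())
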
1832addfile ./src/allmydata/storage/bucket.py
1833hunk ./src/allmydata/storage/bucket.py 1
1834+
1835+import time
1836+
1837+from foolscap.api import Referenceable
1838+
1839+from zope.interface import implements
1840+from allmydata.interfaces import RIBucketWriter, RIBucketReader
1841+from allmydata.util import base32, log
1842+from allmydata.util.assertutil import precondition
1843+
1844+
1845+class BucketWriter(Referenceable):
1846+    implements(RIBucketWriter)
1847+
1848+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1849+        self.ss = ss
1850+        self._max_size = max_size # don't allow the client to write more than this
1851+        self._canary = canary
1852+        self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1853+        self.closed = False
1854+        self.throw_out_all_data = False
1855+        self._share = immutableshare
1856+        # also, add our lease to the file now, so that other ones can be
1857+        # added by simultaneous uploaders
1858+        self._share.add_lease(lease_info)
1859+
1860+    def allocated_size(self):
1861+        return self._max_size
1862+
1863+    def remote_write(self, offset, data):
1864+        start = time.time()
1865+        precondition(not self.closed)
1866+        if self.throw_out_all_data:
1867+            return
1868+        self._share.write_share_data(offset, data)
1869+        self.ss.add_latency("write", time.time() - start)
1870+        self.ss.count("write")
1871+
1872+    def remote_close(self):
1873+        precondition(not self.closed)
1874+        start = time.time()
1875+
1876+        self._share.close()
1877+        filelen = self._share.stat()
1878+        self._share = None
1879+
1880+        self.closed = True
1881+        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
1882+
1883+        self.ss.bucket_writer_closed(self, filelen)
1884+        self.ss.add_latency("close", time.time() - start)
1885+        self.ss.count("close")
1886+
1887+    def _disconnected(self):
1888+        if not self.closed:
1889+            self._abort()
1890+
1891+    def remote_abort(self):
1892+        log.msg("storage: aborting write to share %r" % self._share,
1893+                facility="tahoe.storage", level=log.UNUSUAL)
1894+        if not self.closed:
1895+            self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
1896+        self._abort()
1897+        self.ss.count("abort")
1898+
1899+    def _abort(self):
1900+        if self.closed:
1901+            return
1902+        self._share.unlink()
1903+        self._share = None
1904+
1905+        # We are now considered closed for further writing. We must tell
1906+        # the storage server about this so that it stops expecting us to
1907+        # use the space it allocated for us earlier.
1908+        self.closed = True
1909+        self.ss.bucket_writer_closed(self, 0)
1910+
1911+
1912+class BucketReader(Referenceable):
1913+    implements(RIBucketReader)
1914+
1915+    def __init__(self, ss, share):
1916+        self.ss = ss
1917+        self._share = share
1918+        self.storageindex = share.storageindex
1919+        self.shnum = share.shnum
1920+
1921+    def __repr__(self):
1922+        return "<%s %s %s>" % (self.__class__.__name__,
1923+                               base32.b2a_l(self.storageindex[:8], 60),
1924+                               self.shnum)
1925+
1926+    def remote_read(self, offset, length):
1927+        start = time.time()
1928+        data = self._share.read_share_data(offset, length)
1929+        self.ss.add_latency("read", time.time() - start)
1930+        self.ss.count("read")
1931+        return data
1932+
1933+    def remote_advise_corrupt_share(self, reason):
1934+        return self.ss.remote_advise_corrupt_share("immutable",
1935+                                                   self.storageindex,
1936+                                                   self.shnum,
1937+                                                   reason)
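
BucketWriter and BucketReader are now backend-agnostic: they drive whatever IStoredShare the shareset hands them. A lifecycle sketch mirroring the tests below (canary is a Foolscap Referenceable; the tests substitute mock.Mock() for it):

    # Allocate, write and seal share 0 (usage sketch):
    alreadygot, bs = ss.remote_allocate_buckets('teststorage_index',
                                                'x'*32, 'y'*32,
                                                frozenset((0,)), 1, canary)
    bs[0].remote_write(0, 'a')
    bs[0].remote_close()
    readers = ss.remote_get_buckets('teststorage_index')  # sharenum -> BucketReader
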
1938addfile ./src/allmydata/test/test_backends.py
1939hunk ./src/allmydata/test/test_backends.py 1
1940+import os, stat
1941+from twisted.trial import unittest
1942+from allmydata.util.log import msg
1943+from allmydata.test.common_util import ReallyEqualMixin
1944+import mock
1945+
1946+# This is the code that we're going to be testing.
1947+from allmydata.storage.server import StorageServer
1948+from allmydata.storage.backends.disk.disk_backend import DiskBackend, si_si2dir
1949+from allmydata.storage.backends.null.null_backend import NullBackend
1950+
1951+# The following share file content was generated with
1952+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1953+# with share data == 'a'. The total size of this input
1954+# is 85 bytes.
1955+shareversionnumber = '\x00\x00\x00\x01'
1956+sharedatalength = '\x00\x00\x00\x01'
1957+numberofleases = '\x00\x00\x00\x01'
1958+shareinputdata = 'a'
1959+ownernumber = '\x00\x00\x00\x00'
1960+renewsecret = 'x'*32
1961+cancelsecret = 'y'*32
1962+expirationtime = '\x00(\xde\x80'
1963+nextlease = ''
1964+containerdata = shareversionnumber + sharedatalength + numberofleases
1965+client_data = shareinputdata + ownernumber + renewsecret + \
1966+    cancelsecret + expirationtime + nextlease
1967+share_data = containerdata + client_data
1968+testnodeid = 'testnodeidxxxxxxxxxx'
1969+
1970+
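
As a sanity check on the constants above, the pieces add up to the 85 bytes mentioned in the comment:

    assert len(containerdata) == 4 + 4 + 4            # version, data length, lease count
    assert len(client_data) == 1 + 4 + 32 + 32 + 4    # data, owner, two secrets, expiration
    assert len(share_data) == 85
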
1971+class MockFileSystem(unittest.TestCase):
1972+    """ I simulate a filesystem that the code under test can use. I simulate
1973+    just the parts of the filesystem that the current implementation of the
1974+    disk backend needs. """
1975+    def setUp(self):
1976+        # Make patcher, patch, and effects for disk-using functions.
1977+        msg( "%s.setUp()" % (self,))
1978+        self.mockedfilepaths = {}
1979+        # keys are pathnames, values are MockFilePath objects. This is necessary because
1980+        # MockFilePath behavior sometimes depends on the filesystem. Where it does,
1981+        # self.mockedfilepaths has the relevant information.
1982+        self.storedir = MockFilePath('teststoredir', self.mockedfilepaths)
1983+        self.basedir = self.storedir.child('shares')
1984+        self.baseincdir = self.basedir.child('incoming')
1985+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
1986+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
1987+        self.shareincomingname = self.sharedirincomingname.child('0')
1988+        self.sharefinalname = self.sharedirfinalname.child('0')
1989+
1990+        # FIXME: these patches won't work; disk_backend no longer imports FilePath, BucketCountingCrawler,
1991+        # or LeaseCheckingCrawler.
1992+
1993+        self.FilePathFake = mock.patch('allmydata.storage.backends.disk.disk_backend.FilePath', new = MockFilePath)
1994+        self.FilePathFake.__enter__()
1995+
1996+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.BucketCountingCrawler')
1997+        FakeBCC = self.BCountingCrawler.__enter__()
1998+        FakeBCC.side_effect = self.call_FakeBCC
1999+
2000+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.LeaseCheckingCrawler')
2001+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
2002+        FakeLCC.side_effect = self.call_FakeLCC
2003+
2004+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
2005+        GetSpace = self.get_available_space.__enter__()
2006+        GetSpace.side_effect = self.call_get_available_space
2007+
2008+        self.statforsize = mock.patch('allmydata.storage.backends.disk.core.filepath.stat')
2009+        getsize = self.statforsize.__enter__()
2010+        getsize.side_effect = self.call_statforsize
2011+
2012+    def call_FakeBCC(self, StateFile):
2013+        return MockBCC()
2014+
2015+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
2016+        return MockLCC()
2017+
2018+    def call_get_available_space(self, storedir, reservedspace):
2019+        # The input vector has an input size of 85.
2020+        return 85 - reservedspace
2021+
2022+    def call_statforsize(self, fakefpname):
2023+        return self.mockedfilepaths[fakefpname].fileobject.size()
2024+
2025+    def tearDown(self):
2026+        msg( "%s.tearDown()" % (self,))
2027+        self.FilePathFake.__exit__()
2028+        self.mockedfilepaths = {}
2029+
2030+
2031+class MockFilePath:
2032+    def __init__(self, pathstring, ffpathsenvironment, existence=False):
2033+        #  I can't just make the values MockFileObjects because they may be directories.
2034+        self.mockedfilepaths = ffpathsenvironment
2035+        self.path = pathstring
2036+        self.existence = existence
2037+        if not self.mockedfilepaths.has_key(self.path):
2038+            #  The first MockFilePath object is special
2039+            self.mockedfilepaths[self.path] = self
2040+            self.fileobject = None
2041+        else:
2042+            self.fileobject = self.mockedfilepaths[self.path].fileobject
2043+        self.spawn = {}
2044+        self.antecedent = os.path.dirname(self.path)
2045+
2046+    def setContent(self, contentstring):
2047+        # This method rewrites the data in the file that corresponds to its path
2048+        # name whether it preexisted or not.
2049+        self.fileobject = MockFileObject(contentstring)
2050+        self.existence = True
2051+        self.mockedfilepaths[self.path].fileobject = self.fileobject
2052+        self.mockedfilepaths[self.path].existence = self.existence
2053+        self.setparents()
2054+
2055+    def create(self):
2056+        # This method chokes if there's a pre-existing file!
2057+        if self.mockedfilepaths[self.path].fileobject:
2058+            raise OSError
2059+        else:
2060+            self.existence = True
2061+            self.mockedfilepaths[self.path].fileobject = self.fileobject
2062+            self.mockedfilepaths[self.path].existence = self.existence
2063+            self.setparents()
2064+
2065+    def open(self, mode='r'):
2066+        # XXX Makes no use of mode.
2067+        if not self.mockedfilepaths[self.path].fileobject:
2068+            # If there's no fileobject there already then make one and put it there.
2069+            self.fileobject = MockFileObject()
2070+            self.existence = True
2071+            self.mockedfilepaths[self.path].fileobject = self.fileobject
2072+            self.mockedfilepaths[self.path].existence = self.existence
2073+        else:
2074+            # Otherwise get a ref to it.
2075+            self.fileobject = self.mockedfilepaths[self.path].fileobject
2076+            self.existence = self.mockedfilepaths[self.path].existence
2077+        return self.fileobject.open(mode)
2078+
2079+    def child(self, childstring):
2080+        arg2child = os.path.join(self.path, childstring)
2081+        child = MockFilePath(arg2child, self.mockedfilepaths)
2082+        return child
2083+
2084+    def children(self):
2085+        childrenfromffs = [ffp for ffp in self.mockedfilepaths.values() if ffp.path.startswith(self.path)]
2086+        childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)]
2087+        childrenfromffs = [ffp for ffp in childrenfromffs if ffp.exists()]
2088+        self.spawn = frozenset(childrenfromffs)
2089+        return self.spawn
2090+
2091+    def parent(self):
2092+        if self.mockedfilepaths.has_key(self.antecedent):
2093+            parent = self.mockedfilepaths[self.antecedent]
2094+        else:
2095+            parent = MockFilePath(self.antecedent, self.mockedfilepaths)
2096+        return parent
2097+
2098+    def parents(self):
2099+        antecedents = []
2100+        def f(fps, antecedents):
2101+            newfps = os.path.split(fps)[0]
2102+            if newfps:
2103+                antecedents.append(newfps)
2104+                f(newfps, antecedents)
2105+        f(self.path, antecedents)
2106+        return antecedents
2107+
2108+    def setparents(self):
2109+        for fps in self.parents():
2110+            if not self.mockedfilepaths.has_key(fps):
2111+                self.mockedfilepaths[fps] = MockFilePath(fps, self.mockedfilepaths, existence=True)
2112+
2113+    def basename(self):
2114+        return os.path.split(self.path)[1]
2115+
2116+    def moveTo(self, newffp):
2117+        #  XXX Makes no distinction between file and directory arguments; this deviates from FilePath.moveTo.
2118+        if self.mockedfilepaths[newffp.path].exists():
2119+            raise OSError
2120+        else:
2121+            self.mockedfilepaths[newffp.path] = self
2122+            self.path = newffp.path
2123+
2124+    def getsize(self):
2125+        return self.fileobject.getsize()
2126+
2127+    def exists(self):
2128+        return self.existence
2129+
2130+    def isdir(self):
2131+        return True
2132+
2133+    def makedirs(self):
2134+        # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
2135+        pass
2136+
2137+    def remove(self):
2138+        pass
2139+
2140+
2141+class MockFileObject:
2142+    def __init__(self, contentstring=''):
2143+        self.buffer = contentstring
2144+        self.pos = 0
2145+    def open(self, mode='r'):
2146+        return self
2147+    def write(self, instring):
2148+        begin = self.pos
2149+        padlen = begin - len(self.buffer)
2150+        if padlen > 0:
2151+            self.buffer += '\x00' * padlen
2152+        end = self.pos + len(instring)
2153+        self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
2154+        self.pos = end
2155+    def close(self):
2156+        self.pos = 0
2157+    def seek(self, pos):
2158+        self.pos = pos
2159+    def read(self, numberbytes):
2160+        return self.buffer[self.pos:self.pos+numberbytes]
2161+    def tell(self):
2162+        return self.pos
2163+    def size(self):
2164+        # XXX This method A: is not found on a real file object, and B: is part of a rough mock of filepath.stat!
2165+        # XXX We shall hopefully switch to a getsize method soon, but must consult first.
2166+        # Hmmm... perhaps we need to sometimes stat the path when there's no MockFileObject present?
2167+        return {stat.ST_SIZE:len(self.buffer)}
2168+    def getsize(self):
2169+        return len(self.buffer)
2170+
2171+class MockBCC:
2172+    def setServiceParent(self, Parent):
2173+        pass
2174+
2175+
2176+class MockLCC:
2177+    def setServiceParent(self, Parent):
2178+        pass
2179+
2180+
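
A brief usage sketch of the mock filesystem above; all MockFilePath objects in a run share one dict, so two objects created for the same path see the same contents:

    fs = {}                                  # shared pathname -> MockFilePath map
    root = MockFilePath('teststoredir', fs)
    share = root.child('shares').child('0')
    share.setContent('a' * 85)
    assert share.exists()
    assert share.getsize() == 85
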
2181+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
2182+    """ NullBackend is just for testing and executable documentation, so
2183+    this test is actually a test of StorageServer in which we're using
2184+    NullBackend as helper code for the test, rather than a test of
2185+    NullBackend. """
2186+    def setUp(self):
2187+        self.ss = StorageServer(testnodeid, NullBackend())
2188+
2189+    @mock.patch('os.mkdir')
2190+    @mock.patch('__builtin__.open')
2191+    @mock.patch('os.listdir')
2192+    @mock.patch('os.path.isdir')
2193+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2194+        """
2195+        Write a new share. This tests that StorageServer's remote_allocate_buckets
2196+        generates the correct return types when given test-vector arguments. That
2197+        bs is of the correct type is verified by attempting to invoke remote_write
2198+        on bs[0].
2199+        """
2200+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2201+        bs[0].remote_write(0, 'a')
2202+        self.failIf(mockisdir.called)
2203+        self.failIf(mocklistdir.called)
2204+        self.failIf(mockopen.called)
2205+        self.failIf(mockmkdir.called)
2206+
2207+
2208+class TestServerConstruction(MockFileSystem, ReallyEqualMixin):
2209+    def test_create_server_disk_backend(self):
2210+        """ This tests whether a server instance can be constructed with a
2211+        filesystem backend. To pass the test, it mustn't use the filesystem
2212+        outside of its configured storedir. """
2213+        StorageServer(testnodeid, DiskBackend(self.storedir))
2214+
2215+
2216+class TestServerAndDiskBackend(MockFileSystem, ReallyEqualMixin):
2217+    """ This tests both the StorageServer and the Disk backend together. """
2218+    def setUp(self):
2219+        MockFileSystem.setUp(self)
2220+        try:
2221+            self.backend = DiskBackend(self.storedir)
2222+            self.ss = StorageServer(testnodeid, self.backend)
2223+
2224+            self.backendwithreserve = DiskBackend(self.storedir, reserved_space=1)
2225+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
2226+        except:
2227+            MockFileSystem.tearDown(self)
2228+            raise
2229+
2230+    @mock.patch('time.time')
2231+    @mock.patch('allmydata.util.fileutil.get_available_space')
2232+    def test_out_of_space(self, mockget_available_space, mocktime):
2233+        mocktime.return_value = 0
2234+
2235+        def call_get_available_space(dir, reserve):
2236+            return 0
2237+
2238+        mockget_available_space.side_effect = call_get_available_space
2239+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2240+        self.failUnlessReallyEqual(bsc, {})
2241+
2242+    @mock.patch('time.time')
2243+    def test_write_and_read_share(self, mocktime):
2244+        """
2245+        Write a new share, read it, and test the server's (and disk backend's)
2246+        handling of simultaneous and successive attempts to write the same
2247+        share.
2248+        """
2249+        mocktime.return_value = 0
2250+        # Inspect incoming and fail unless it's empty.
2251+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
2252+
2253+        self.failUnlessReallyEqual(incomingset, frozenset())
2254+
2255+        # Populate incoming with the sharenum: 0.
2256+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
2257+
2258+        # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
2259+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
2260+
2263+        # Attempt to create a second share writer with the same sharenum.
2264+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
2265+
2266+        # Show that no sharewriter results from a remote_allocate_buckets
2267+        # with the same si and sharenum, until BucketWriter.remote_close()
2268+        # has been called.
2269+        self.failIf(bsa)
2270+
2271+        # Test allocated size.
2272+        spaceint = self.ss.allocated_size()
2273+        self.failUnlessReallyEqual(spaceint, 1)
2274+
2275+        # Write 'a' to shnum 0. Only tested together with close and read.
2276+        bs[0].remote_write(0, 'a')
2277+
2278+        # Preclose: Inspect final, failUnless nothing there.
2279+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
2280+        bs[0].remote_close()
2281+
2282+        # Postclose: (Omnibus) failUnless written data is in final.
2283+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
2284+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
2285+        contents = sharesinfinal[0].read_share_data(0, 73)
2286+        self.failUnlessReallyEqual(contents, client_data)
2287+
2288+        # Exercise the case that the share we're asking to allocate is
2289+        # already (completely) uploaded.
2290+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2291+
2292+
2293+    def test_read_old_share(self):
2294+        """ This tests whether the code correctly finds and reads
2295+        shares written out by old (Tahoe-LAFS <= v1.8.2)
2296+        servers. There is a similar test in test_download, but that one
2297+        is from the perspective of the client and exercises a deeper
2298+        stack of code. This one is for exercising just the
2299+        StorageServer object. """
2300+        # Construct a file with the appropriate contents in the mock filesystem.
2301+        datalen = len(share_data)
2302+        finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(0))
2303+        finalhome.setContent(share_data)
2304+
2305+        # Now begin the test.
2306+        bs = self.ss.remote_get_buckets('teststorage_index')
2307+
2308+        self.failUnlessEqual(len(bs), 1)
2309+        b = bs['0']
2310+        # These should match by definition; the next two cases cover behaviors that are not completely unambiguous.
2311+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
2312+        # If you try to read past the end, you get as much data as is there.
2313+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
2314+        # If you start reading past the end of the file you get the empty string.
2315+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2316}
2317[Pluggable backends -- all other changes. refs #999
2318david-sarah@jacaranda.org**20110919233256
2319 Ignore-this: 1a77b6b5d178b32a9b914b699ba7e957
2320] {
2321hunk ./src/allmydata/client.py 245
2322             sharetypes.append("immutable")
2323         if self.get_config("storage", "expire.mutable", True, boolean=True):
2324             sharetypes.append("mutable")
2325-        expiration_sharetypes = tuple(sharetypes)
2326 
2327hunk ./src/allmydata/client.py 246
2328+        expiration_policy = {
2329+            'enabled': expire,
2330+            'mode': mode,
2331+            'override_lease_duration': o_l_d,
2332+            'cutoff_date': cutoff_date,
2333+            'sharetypes': tuple(sharetypes),
2334+        }
2335         ss = StorageServer(storedir, self.nodeid,
2336                            reserved_space=reserved,
2337                            discard_storage=discard,
2338hunk ./src/allmydata/client.py 258
2339                            readonly_storage=readonly,
2340                            stats_provider=self.stats_provider,
2341-                           expiration_enabled=expire,
2342-                           expiration_mode=mode,
2343-                           expiration_override_lease_duration=o_l_d,
2344-                           expiration_cutoff_date=cutoff_date,
2345-                           expiration_sharetypes=expiration_sharetypes)
2346+                           expiration_policy=expiration_policy)
2347         self.add_service(ss)
2348 
2349         d = self.when_tub_ready()
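
The five expiration_* keyword arguments collapse into a single expiration_policy dict with the keys shown above. An illustrative value (the particular settings here are examples, not defaults):

    expiration_policy = {
        'enabled': True,
        'mode': 'cutoff-date',
        'override_lease_duration': None,
        'cutoff_date': cutoff_date,     # as parsed from the config above
        'sharetypes': ('immutable', 'mutable'),
    }
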
2350hunk ./src/allmydata/immutable/offloaded.py 306
2351         if os.path.exists(self._encoding_file):
2352             self.log("ciphertext already present, bypassing fetch",
2353                      level=log.UNUSUAL)
2354+            # XXX the following comment is probably stale, since
2355+            # LocalCiphertextReader.get_plaintext_hashtree_leaves does not exist.
2356+            #
2357             # we'll still need the plaintext hashes (when
2358             # LocalCiphertextReader.get_plaintext_hashtree_leaves() is
2359             # called), and currently the easiest way to get them is to ask
2360hunk ./src/allmydata/immutable/upload.py 765
2361             self._status.set_progress(1, progress)
2362         return cryptdata
2363 
2364-
2365     def get_plaintext_hashtree_leaves(self, first, last, num_segments):
2366hunk ./src/allmydata/immutable/upload.py 766
2367+        """OBSOLETE; Get the leaf nodes of a merkle hash tree over the
2368+        plaintext segments, i.e. get the tagged hashes of the given segments.
2369+        The segment size is expected to be generated by the
2370+        IEncryptedUploadable before any plaintext is read or ciphertext
2371+        produced, so that the segment hashes can be generated with only a
2372+        single pass.
2373+
2374+        This returns a Deferred that fires with a sequence of hashes, using:
2375+
2376+         tuple(segment_hashes[first:last])
2377+
2378+        'num_segments' is used to assert that the number of segments that the
2379+        IEncryptedUploadable handled matches the number of segments that the
2380+        encoder was expecting.
2381+
2382+        This method must not be called until the final byte has been read
2383+        from read_encrypted(). Once this method is called, read_encrypted()
2384+        can never be called again.
2385+        """
2386         # this is currently unused, but will live again when we fix #453
2387         if len(self._plaintext_segment_hashes) < num_segments:
2388             # close out the last one
2389hunk ./src/allmydata/immutable/upload.py 803
2390         return defer.succeed(tuple(self._plaintext_segment_hashes[first:last]))
2391 
2392     def get_plaintext_hash(self):
2393+        """OBSOLETE; Get the hash of the whole plaintext.
2394+
2395+        This returns a Deferred that fires with a tagged SHA-256 hash of the
2396+        whole plaintext, obtained from hashutil.plaintext_hash(data).
2397+        """
2398+        # this is currently unused, but will live again when we fix #453
2399         h = self._plaintext_hasher.digest()
2400         return defer.succeed(h)
2401 
2402hunk ./src/allmydata/interfaces.py 29
2403 Number = IntegerConstraint(8) # 2**(8*8) == 16EiB ~= 18e18 ~= 18 exabytes
2404 Offset = Number
2405 ReadSize = int # the 'int' constraint is 2**31 == 2Gib -- large files are processed in not-so-large increments
2406-WriteEnablerSecret = Hash # used to protect mutable bucket modifications
2407-LeaseRenewSecret = Hash # used to protect bucket lease renewal requests
2408-LeaseCancelSecret = Hash # used to protect bucket lease cancellation requests
2409+WriteEnablerSecret = Hash # used to protect mutable share modifications
2410+LeaseRenewSecret = Hash # used to protect lease renewal requests
2411+LeaseCancelSecret = Hash # used to protect lease cancellation requests
2412 
2413 class RIStubClient(RemoteInterface):
2414     """Each client publishes a service announcement for a dummy object called
2415hunk ./src/allmydata/interfaces.py 106
2416                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2417                          allocated_size=Offset, canary=Referenceable):
2418         """
2419-        @param storage_index: the index of the bucket to be created or
2420+        @param storage_index: the index of the shareset to be created or
2421                               increfed.
2422         @param sharenums: these are the share numbers (probably between 0 and
2423                           99) that the sender is proposing to store on this
2424hunk ./src/allmydata/interfaces.py 111
2425                           server.
2426-        @param renew_secret: This is the secret used to protect bucket refresh
2427+        @param renew_secret: This is the secret used to protect lease renewal.
2428                              This secret is generated by the client and
2429                              stored for later comparison by the server. Each
2430                              server is given a different secret.
2431hunk ./src/allmydata/interfaces.py 115
2432-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2433-        @param canary: If the canary is lost before close(), the bucket is
2434+        @param cancel_secret: ignored
2435+        @param canary: If the canary is lost before close(), the allocation is
2436                        deleted.
2437         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2438                  already have and allocated is what we hereby agree to accept.
2439hunk ./src/allmydata/interfaces.py 129
2440                   renew_secret=LeaseRenewSecret,
2441                   cancel_secret=LeaseCancelSecret):
2442         """
2443-        Add a new lease on the given bucket. If the renew_secret matches an
2444+        Add a new lease on the given shareset. If the renew_secret matches an
2445         existing lease, that lease will be renewed instead. If there is no
2446hunk ./src/allmydata/interfaces.py 131
2447-        bucket for the given storage_index, return silently. (note that in
2448+        shareset for the given storage_index, return silently. (Note that in
2449         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2450hunk ./src/allmydata/interfaces.py 133
2451-        bucket)
2452+        shareset.)
2453         """
2454         return Any() # returns None now, but future versions might change
2455 
2456hunk ./src/allmydata/interfaces.py 139
2457     def renew_lease(storage_index=StorageIndex, renew_secret=LeaseRenewSecret):
2458         """
2459-        Renew the lease on a given bucket, resetting the timer to 31 days.
2460-        Some networks will use this, some will not. If there is no bucket for
2461+        Renew the lease on a given shareset, resetting the timer to 31 days.
2462+        Some networks will use this, some will not. If there is no shareset for
2463         the given storage_index, IndexError will be raised.
2464 
2465         For mutable shares, if the given renew_secret does not match an
2466hunk ./src/allmydata/interfaces.py 146
2467         existing lease, IndexError will be raised with a note listing the
2468         server-nodeids on the existing leases, so leases on migrated shares
2469-        can be renewed or cancelled. For immutable shares, IndexError
2470-        (without the note) will be raised.
2471+        can be renewed. For immutable shares, IndexError (without the note)
2472+        will be raised.
2473         """
2474         return Any()
2475 
2476hunk ./src/allmydata/interfaces.py 154
2477     def get_buckets(storage_index=StorageIndex):
2478         return DictOf(int, RIBucketReader, maxKeys=MAX_BUCKETS)
2479 
2480-
2481-
2482     def slot_readv(storage_index=StorageIndex,
2483                    shares=ListOf(int), readv=ReadVector):
2484         """Read a vector from the numbered shares associated with the given
2485hunk ./src/allmydata/interfaces.py 163
2486 
2487     def slot_testv_and_readv_and_writev(storage_index=StorageIndex,
2488                                         secrets=TupleOf(WriteEnablerSecret,
2489-                                                        LeaseRenewSecret,
2490-                                                        LeaseCancelSecret),
2491+                                                        LeaseRenewSecret),
2492                                         tw_vectors=TestAndWriteVectorsForShares,
2493                                         r_vector=ReadVector,
2494                                         ):
2495hunk ./src/allmydata/interfaces.py 167
2496-        """General-purpose test-and-set operation for mutable slots. Perform
2497-        a bunch of comparisons against the existing shares. If they all pass,
2498-        then apply a bunch of write vectors to those shares. Then use the
2499-        read vectors to extract data from all the shares and return the data.
2500+        """
2501+        General-purpose atomic test-read-and-set operation for mutable slots.
2502+        Perform a bunch of comparisons against the existing shares. If they
2503+        all pass: use the read vectors to extract data from all the shares,
2504+        then apply a bunch of write vectors to those shares. Return the read
2505+        data, which does not include any modifications made by the writes.
2506 
2507         This method is, um, large. The goal is to allow clients to update all
2508         the shares associated with a mutable file in a single round trip.
2509hunk ./src/allmydata/interfaces.py 177
2510 
2511-        @param storage_index: the index of the bucket to be created or
2512+        @param storage_index: the index of the shareset to be created or
2513                               increfed.
2514         @param write_enabler: a secret that is stored along with the slot.
2515                               Writes are accepted from any caller who can
2516hunk ./src/allmydata/interfaces.py 183
2517                               present the matching secret. A different secret
2518                               should be used for each slot*server pair.
2519-        @param renew_secret: This is the secret used to protect bucket refresh
2520+        @param renew_secret: This is the secret used to protect lease renewal.
2521                              This secret is generated by the client and
2522                              stored for later comparison by the server. Each
2523                              server is given a different secret.
2524hunk ./src/allmydata/interfaces.py 187
2525-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2526+        @param cancel_secret: ignored
2527 
2528hunk ./src/allmydata/interfaces.py 189
2529-        The 'secrets' argument is a tuple of (write_enabler, renew_secret,
2530-        cancel_secret). The first is required to perform any write. The
2531-        latter two are used when allocating new shares. To simply acquire a
2532-        new lease on existing shares, use an empty testv and an empty writev.
2533+        The 'secrets' argument is a tuple of (write_enabler, renew_secret).
2534+        The write_enabler is required to perform any write. The renew_secret
2535+        is used when allocating new shares.
2536 
2537         Each share can have a separate test vector (i.e. a list of
2538         comparisons to perform). If all vectors for all shares pass, then all
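A sketch (not part of this patch) of what a call to the method documented
above might look like, assuming the foolscap schemas declared in this file;
'server' is a RemoteReference to an RIStorageServer and the data values are
illustrative:

    secrets = (write_enabler, renew_secret)  # cancel secret no longer sent
    tw_vectors = {
        0: ([(0, 3, "eq", "abc")],  # test vector: (offset, length, op, specimen)
            [(0, "xyz")],           # write vector: (offset, data)
            None),                  # new_length: None leaves the size alone
    }
    read_vector = [(0, 10)]         # read bytes 0-9 of every share
    d = server.callRemote("slot_testv_and_readv_and_writev",
                          storage_index, secrets, tw_vectors, read_vector)
    # d fires with (wrote, read_data): wrote is False if any test failed,
    # and read_data maps shnum -> data as it was before any writes.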
2539hunk ./src/allmydata/interfaces.py 280
2540         store that on disk.
2541         """
2542 
2543-class IStorageBucketWriter(Interface):
2544+
2545+class IStorageBackend(Interface):
2546     """
2547hunk ./src/allmydata/interfaces.py 283
2548-    Objects of this kind live on the client side.
2549+    Objects of this kind live on the server side and are used by the
2550+    storage server object.
2551     """
2552hunk ./src/allmydata/interfaces.py 286
2553-    def put_block(segmentnum=int, data=ShareData):
2554-        """@param data: For most segments, this data will be 'blocksize'
2555-        bytes in length. The last segment might be shorter.
2556-        @return: a Deferred that fires (with None) when the operation completes
2557+    def get_available_space():
2558+        """
2559+        Returns available space for share storage in bytes, or
2560+        None if this information is not available or if the available
2561+        space is unlimited.
2562+
2563+        If the backend is configured for read-only mode then this will
2564+        return 0.
2565+        """
2566+
2567+    def get_sharesets_for_prefix(prefix):
2568+        """
2569+        Generates IShareSet objects for all storage indices matching the
2570+        given prefix for which this backend holds shares.
2571+        """
2572+
2573+    def get_shareset(storageindex):
2574+        """
2575+        Get an IShareSet object for the given storage index.
2576+        """
2577+
2578+    def advise_corrupt_share(storageindex, sharetype, shnum, reason):
2579+        """
2580+        Clients who discover hash failures in shares that they have
2581+        downloaded from me will use this method to inform me about the
2582+        failures. I will record their concern so that my operator can
2583+        manually inspect the shares in question.
2584+
2585+        'sharetype' is either 'mutable' or 'immutable'. 'shnum' is the integer
2586+        share number. 'reason' is a human-readable explanation of the problem,
2587+        probably including some expected hash values and the computed ones
2588+        that did not match. Corruption advisories for mutable shares should
2589+        include a hash of the public key (the same value that appears in the
2590+        mutable-file verify-cap), since the current share format does not
2591+        store that on disk.
2592+
2593+        @param storageindex=str
2594+        @param sharetype=str
2595+        @param shnum=int
2596+        @param reason=str
2597+        """
2598+
2599+
2600+class IShareSet(Interface):
2601+    def get_storage_index():
2602+        """
2603+        Returns the storage index for this shareset.
2604+        """
2605+
2606+    def get_storage_index_string():
2607+        """
2608+        Returns the base32-encoded storage index for this shareset.
2609+        """
2610+
2611+    def get_overhead():
2612+        """
2613+        Returns the storage overhead, in bytes, of this shareset (exclusive
2614+        of the space used by its shares).
2615+        """
2616+
2617+    def get_shares():
2618+        """
2619+        Generates the IStoredShare objects held in this shareset.
2620+        """
2621+
2622+    def has_incoming(shnum):
2623+        """
2624+        Returns True if this shareset has an incoming (partial) share with
2624+        this number, otherwise False.
2625+        """
2626+
2627+    def make_bucket_writer(storageserver, shnum, max_space_per_bucket, lease_info, canary):
2628+        """
2629+        Create a bucket writer that can be used to write data to a given share.
2630+
2631+        @param storageserver=RIStorageServer
2632+        @param shnum=int: A share number in this shareset
2633+        @param max_space_per_bucket=int: The maximum space allocated for the
2634+                 share, in bytes
2635+        @param lease_info=LeaseInfo: The initial lease information
2636+        @param canary=Referenceable: If the canary is lost before close(), the
2637+                 bucket is deleted.
2638+        @return an IStorageBucketWriter for the given share
2639+        """
2640+
2641+    def make_bucket_reader(storageserver, share):
2642+        """
2643+        Create a bucket reader that can be used to read data from a given share.
2644+
2645+        @param storageserver=RIStorageServer
2646+        @param share=IStoredShare
2647+        @return an IStorageBucketReader for the given share
2648+        """
2649+
2650+    def readv(wanted_shnums, read_vector):
2651+        """
2652+        Read a vector from the numbered shares in this shareset. An empty
2653+        wanted_shnums list means to return data from all known shares.
2654+
2655+        @param wanted_shnums=ListOf(int)
2656+        @param read_vector=ReadVector
2657+        @return DictOf(int, ReadData): shnum -> results, with one key per share
2658+        """
2659+
2660+    def testv_and_readv_and_writev(storageserver, secrets, test_and_write_vectors, read_vector, expiration_time):
2661+        """
2662+        General-purpose atomic test-read-and-set operation for mutable slots.
2663+        Perform a bunch of comparisons against the existing shares in this
2664+        shareset. If they all pass: use the read vectors to extract data from
2665+        all the shares, then apply a bunch of write vectors to those shares.
2666+        Return the read data, which does not include any modifications made by
2667+        the writes.
2668+
2669+        See the similar method in RIStorageServer for more detail.
2670+
2671+        @param storageserver=RIStorageServer
2672+        @param secrets=TupleOf(WriteEnablerSecret, LeaseRenewSecret[, ...])
2673+        @param test_and_write_vectors=TestAndWriteVectorsForShares
2674+        @param read_vector=ReadVector
2675+        @param expiration_time=int
2676+        @return TupleOf(bool, DictOf(int, ReadData))
2677+        """
2678+
2679+    def add_or_renew_lease(lease_info):
2680+        """
2681+        Add a new lease on the shares in this shareset. If the renew_secret
2682+        matches an existing lease, that lease will be renewed instead. If
2683+        there are no shares in this shareset, return silently.
2684+
2685+        @param lease_info=LeaseInfo
2686+        """
2687+
2688+    def renew_lease(renew_secret, new_expiration_time):
2689+        """
2690+        Renew a lease on the shares in this shareset, resetting the timer
2691+        to 31 days. Some grids will use this, some will not. If there are no
2692+        shares in this shareset, IndexError will be raised.
2693+
2694+        For mutable shares, if the given renew_secret does not match an
2695+        existing lease, IndexError will be raised with a note listing the
2696+        server-nodeids on the existing leases, so leases on migrated shares
2697+        can be renewed. For immutable shares, IndexError (without the note)
2698+        will be raised.
2699+
2700+        @param renew_secret=LeaseRenewSecret
2701+        """
2702+
2703+
2704+class IStoredShare(Interface):
2705+    """
2706+    This object may contain as much as all of the share data. It is
2707+    intended for lazy evaluation, so that in many use cases substantially
2708+    less than all of the share data will be accessed.
2709+    """
2710+    def close():
2711+        """
2712+        Complete writing to this share.
2713+        """
2714+
2715+    def get_storage_index():
2716+        """
2717+        Returns the storage index.
2718+        """
2719+
2720+    def get_shnum():
2721+        """
2722+        Returns the share number.
2723+        """
2724+
2725+    def get_data_length():
2726+        """
2727+        Returns the data length in bytes.
2728+        """
2729+
2730+    def get_size():
2731+        """
2732+        Returns the size of the share in bytes.
2733+        """
2734+
2735+    def get_used_space():
2736+        """
2737+        Returns the amount of backend storage used by this share, in bytes,
2738+        including overhead.
2739+        """
2740+
2741+    def unlink():
2742+        """
2743+        Signal that this share can be removed from the backend storage. This does
2744+        not guarantee that the share data will be immediately inaccessible, or
2745+        that it will be securely erased.
2746+        """
2747+
2748+    def readv(read_vector):
2749+        """
2750+        Read a vector of (offset, length) ranges from this share and
2750+        return the resulting data.
2751+        """
2752+
2753+
2754+class IStoredMutableShare(IStoredShare):
2755+    def check_write_enabler(write_enabler, si_s):
2756+        """
2757+        Check that write_enabler matches the write enabler stored in this
2757+        share; raise BadWriteEnablerError (mentioning si_s) if it does not.
2758         """
2759 
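A hedged sketch (not part of this patch) of how server-side code might
navigate the three interfaces above; 'backend' is assumed to be any
IStorageBackend provider, such as the disk backend:

    # enumerate every shareset under prefix "aa", then its shares
    for shareset in backend.get_sharesets_for_prefix("aa"):
        print shareset.get_storage_index_string()
        for share in shareset.get_shares():
            print "  share %d uses %d bytes" % (share.get_shnum(),
                                                share.get_used_space())

    # or go straight to one shareset and start an upload
    shareset = backend.get_shareset(storage_index)
    if not shareset.has_incoming(shnum):
        writer = shareset.make_bucket_writer(storageserver, shnum,
                                             max_space, lease_info, canary)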
2760hunk ./src/allmydata/interfaces.py 489
2761-    def put_plaintext_hashes(hashes=ListOf(Hash)):
2762+    def check_testv(test_vector):
2763+        """
2764+        Return True if this share passes the given test vector, otherwise False.
2765+        """
2766+
2767+    def writev(datav, new_length):
2768+        """
2769+        Apply the given write vector to this share; if new_length is not
2769+        None, set the size of the share data to new_length.
2770+        """
2771+
2772+
2773+class IStorageBucketWriter(Interface):
2774+    """
2775+    Objects of this kind live on the client side.
2776+    """
2777+    def put_block(segmentnum, data):
2778         """
2779hunk ./src/allmydata/interfaces.py 506
2780+        @param segmentnum=int
2781+        @param data=ShareData: For most segments, this data will be 'blocksize'
2782+        bytes in length. The last segment might be shorter.
2783         @return: a Deferred that fires (with None) when the operation completes
2784         """
2785 
2786hunk ./src/allmydata/interfaces.py 512
2787-    def put_crypttext_hashes(hashes=ListOf(Hash)):
2788+    def put_crypttext_hashes(hashes):
2789         """
2790hunk ./src/allmydata/interfaces.py 514
2791+        @param hashes=ListOf(Hash)
2792         @return: a Deferred that fires (with None) when the operation completes
2793         """
2794 
2795hunk ./src/allmydata/interfaces.py 518
2796-    def put_block_hashes(blockhashes=ListOf(Hash)):
2797+    def put_block_hashes(blockhashes):
2798         """
2799hunk ./src/allmydata/interfaces.py 520
2800+        @param blockhashes=ListOf(Hash)
2801         @return: a Deferred that fires (with None) when the operation completes
2802         """
2803 
2804hunk ./src/allmydata/interfaces.py 524
2805-    def put_share_hashes(sharehashes=ListOf(TupleOf(int, Hash))):
2806+    def put_share_hashes(sharehashes):
2807         """
2808hunk ./src/allmydata/interfaces.py 526
2809+        @param sharehashes=ListOf(TupleOf(int, Hash))
2810         @return: a Deferred that fires (with None) when the operation completes
2811         """
2812 
2813hunk ./src/allmydata/interfaces.py 530
2814-    def put_uri_extension(data=URIExtensionData):
2815+    def put_uri_extension(data):
2816         """This block of data contains integrity-checking information (hashes
2817         of plaintext, crypttext, and shares), as well as encoding parameters
2818         that are necessary to recover the data. This is a serialized dict
2819hunk ./src/allmydata/interfaces.py 535
2820         mapping strings to other strings. The hash of this data is kept in
2821-        the URI and verified before any of the data is used. All buckets for
2822-        a given file contain identical copies of this data.
2823+        the URI and verified before any of the data is used. All share
2824+        containers for a given file contain identical copies of this data.
2825 
2826         The serialization format is specified with the following pseudocode:
2827         for k in sorted(dict.keys()):
2828hunk ./src/allmydata/interfaces.py 543
2829             assert re.match(r'^[a-zA-Z_\-]+$', k)
2830             write(k + ':' + netstring(dict[k]))
2831 
2832+        @param data=URIExtensionData
2833         @return: a Deferred that fires (with None) when the operation completes
2834         """
2835 
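The serialization pseudocode above is concrete enough to render directly.
A sketch (not part of this patch), where netstring() is the usual
'<length>:<data>,' encoding:

    import re

    def netstring(s):
        return "%d:%s," % (len(s), s)

    def serialize_uri_extension(d):
        out = []
        for k in sorted(d.keys()):
            assert re.match(r'^[a-zA-Z_\-]+$', k)
            out.append(k + ':' + netstring(d[k]))
        return ''.join(out)

An upload driver would then chain the writer's Deferred-returning calls,
roughly as follows (close() is assumed to finalize the share, as in
RIBucketWriter):

    d = bucket_writer.put_block(0, block0)
    d.addCallback(lambda ign: bucket_writer.put_block_hashes(blockhashes))
    d.addCallback(lambda ign: bucket_writer.put_share_hashes(sharehashes))
    d.addCallback(lambda ign: bucket_writer.put_uri_extension(
        serialize_uri_extension(ueb_dict)))
    d.addCallback(lambda ign: bucket_writer.close())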
2836hunk ./src/allmydata/interfaces.py 558
2837 
2838 class IStorageBucketReader(Interface):
2839 
2840-    def get_block_data(blocknum=int, blocksize=int, size=int):
2841+    def get_block_data(blocknum, blocksize, size):
2842         """Most blocks will be the same size. The last block might be shorter
2843         than the others.
2844 
2845hunk ./src/allmydata/interfaces.py 562
2846+        @param blocknum=int
2847+        @param blocksize=int
2848+        @param size=int
2849         @return: ShareData
2850         """
2851 
2852hunk ./src/allmydata/interfaces.py 573
2853         @return: ListOf(Hash)
2854         """
2855 
2856-    def get_block_hashes(at_least_these=SetOf(int)):
2857+    def get_block_hashes(at_least_these=()):
2858         """
2859hunk ./src/allmydata/interfaces.py 575
2860+        @param at_least_these=SetOf(int)
2861         @return: ListOf(Hash)
2862         """
2863 
2864hunk ./src/allmydata/interfaces.py 579
2865-    def get_share_hashes(at_least_these=SetOf(int)):
2866+    def get_share_hashes():
2867         """
2868         @return: ListOf(TupleOf(int, Hash))
2869         """
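On the read side, a sketch (not part of this patch) of fetching one block
together with hashes needed to verify it; 'bucket_reader' is assumed to
provide IStorageBucketReader, and the block size is illustrative:

    d = bucket_reader.get_block_data(3, 4096, 4096)  # the last block may be shorter

    def _got_block(block):
        # the caller checks 'block' against the block hash tree before use
        return bucket_reader.get_block_hashes(set([3]))
    d.addCallback(_got_block)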
2870hunk ./src/allmydata/interfaces.py 611
2871         @return: unicode nickname, or None
2872         """
2873 
2874-    # methods moved from IntroducerClient, need review
2875-    def get_all_connections():
2876-        """Return a frozenset of (nodeid, service_name, rref) tuples, one for
2877-        each active connection we've established to a remote service. This is
2878-        mostly useful for unit tests that need to wait until a certain number
2879-        of connections have been made."""
2880-
2881-    def get_all_connectors():
2882-        """Return a dict that maps from (nodeid, service_name) to a
2883-        RemoteServiceConnector instance for all services that we are actively
2884-        trying to connect to. Each RemoteServiceConnector has the following
2885-        public attributes::
2886-
2887-          service_name: the type of service provided, like 'storage'
2888-          announcement_time: when we first heard about this service
2889-          last_connect_time: when we last established a connection
2890-          last_loss_time: when we last lost a connection
2891-
2892-          version: the peer's version, from the most recent connection
2893-          oldest_supported: the peer's oldest supported version, same
2894-
2895-          rref: the RemoteReference, if connected, otherwise None
2896-          remote_host: the IAddress, if connected, otherwise None
2897-
2898-        This method is intended for monitoring interfaces, such as a web page
2899-        that describes connecting and connected peers.
2900-        """
2901-
2902-    def get_all_peerids():
2903-        """Return a frozenset of all peerids to whom we have a connection (to
2904-        one or more services) established. Mostly useful for unit tests."""
2905-
2906-    def get_all_connections_for(service_name):
2907-        """Return a frozenset of (nodeid, service_name, rref) tuples, one
2908-        for each active connection that provides the given SERVICE_NAME."""
2909-
2910-    def get_permuted_peers(service_name, key):
2911-        """Returns an ordered list of (peerid, rref) tuples, selecting from
2912-        the connections that provide SERVICE_NAME, using a hash-based
2913-        permutation keyed by KEY. This randomizes the service list in a
2914-        repeatable way, to distribute load over many peers.
2915-        """
2916-
2917 
2918 class IMutableSlotWriter(Interface):
2919     """
2920hunk ./src/allmydata/interfaces.py 616
2921     The interface for a writer around a mutable slot on a remote server.
2922     """
2923-    def set_checkstring(checkstring, *args):
2924+    def set_checkstring(seqnum_or_checkstring, root_hash=None, salt=None):
2925         """
2926         Set the checkstring that I will pass to the remote server when
2927         writing.
2928hunk ./src/allmydata/interfaces.py 640
2929         Add a block and salt to the share.
2930         """
2931 
2932-    def put_encprivey(encprivkey):
2933+    def put_encprivkey(encprivkey):
2934         """
2935         Add the encrypted private key to the share.
2936         """
2937hunk ./src/allmydata/interfaces.py 645
2938 
2939-    def put_blockhashes(blockhashes=list):
2940+    def put_blockhashes(blockhashes):
2941         """
2942hunk ./src/allmydata/interfaces.py 647
2943+        @param blockhashes=list
2944         Add the block hash tree to the share.
2945         """
2946 
2947hunk ./src/allmydata/interfaces.py 651
2948-    def put_sharehashes(sharehashes=dict):
2949+    def put_sharehashes(sharehashes):
2950         """
2951hunk ./src/allmydata/interfaces.py 653
2952+        @param sharehashes=dict
2953         Add the share hash chain to the share.
2954         """
2955 
2956hunk ./src/allmydata/interfaces.py 739
2957     def get_extension_params():
2958         """Return the extension parameters in the URI"""
2959 
2960-    def set_extension_params():
2961+    def set_extension_params(params):
2962         """Set the extension parameters that should be in the URI"""
2963 
2964 class IDirectoryURI(Interface):
2965hunk ./src/allmydata/interfaces.py 879
2966         writer-visible data using this writekey.
2967         """
2968 
2969-    # TODO: Can this be overwrite instead of replace?
2970-    def replace(new_contents):
2971-        """Replace the contents of the mutable file, provided that no other
2972+    def overwrite(new_contents):
2973+        """Overwrite the contents of the mutable file, provided that no other
2974         node has published (or is attempting to publish, concurrently) a
2975         newer version of the file than this one.
2976 
2977hunk ./src/allmydata/interfaces.py 1346
2978         is empty, the metadata will be an empty dictionary.
2979         """
2980 
2981-    def set_uri(name, writecap, readcap=None, metadata=None, overwrite=True):
2982+    def set_uri(name, writecap, readcap, metadata=None, overwrite=True):
2983         """I add a child (by writecap+readcap) at the specific name. I return
2984         a Deferred that fires when the operation finishes. If overwrite= is
2985         True, I will replace any existing child of the same name, otherwise
2986hunk ./src/allmydata/interfaces.py 1745
2987     Block Hash, and the encoding parameters, both of which must be included
2988     in the URI.
2989 
2990-    I do not choose shareholders, that is left to the IUploader. I must be
2991-    given a dict of RemoteReferences to storage buckets that are ready and
2992-    willing to receive data.
2993+    I do not choose shareholders; that is left to the IUploader.
2994     """
2995 
2996     def set_size(size):
2997hunk ./src/allmydata/interfaces.py 1752
2998         """Specify the number of bytes that will be encoded. This must be
2999         performed before get_serialized_params() can be called.
3000         """
3001+
3002     def set_params(params):
3003         """Override the default encoding parameters. 'params' is a tuple of
3004         (k,d,n), where 'k' is the number of required shares, 'd' is the
3005hunk ./src/allmydata/interfaces.py 1848
3006     download, validate, decode, and decrypt data from them, writing the
3007     results to an output file.
3008 
3009-    I do not locate the shareholders, that is left to the IDownloader. I must
3010-    be given a dict of RemoteReferences to storage buckets that are ready to
3011-    send data.
3012+    I do not locate the shareholders; that is left to the IDownloader.
3013     """
3014 
3015     def setup(outfile):
3016hunk ./src/allmydata/interfaces.py 1950
3017         resuming an interrupted upload (where we need to compute the
3018         plaintext hashes, but don't need the redundant encrypted data)."""
3019 
3020-    def get_plaintext_hashtree_leaves(first, last, num_segments):
3021-        """OBSOLETE; Get the leaf nodes of a merkle hash tree over the
3022-        plaintext segments, i.e. get the tagged hashes of the given segments.
3023-        The segment size is expected to be generated by the
3024-        IEncryptedUploadable before any plaintext is read or ciphertext
3025-        produced, so that the segment hashes can be generated with only a
3026-        single pass.
3027-
3028-        This returns a Deferred that fires with a sequence of hashes, using:
3029-
3030-         tuple(segment_hashes[first:last])
3031-
3032-        'num_segments' is used to assert that the number of segments that the
3033-        IEncryptedUploadable handled matches the number of segments that the
3034-        encoder was expecting.
3035-
3036-        This method must not be called until the final byte has been read
3037-        from read_encrypted(). Once this method is called, read_encrypted()
3038-        can never be called again.
3039-        """
3040-
3041-    def get_plaintext_hash():
3042-        """OBSOLETE; Get the hash of the whole plaintext.
3043-
3044-        This returns a Deferred that fires with a tagged SHA-256 hash of the
3045-        whole plaintext, obtained from hashutil.plaintext_hash(data).
3046-        """
3047-
3048     def close():
3049         """Just like IUploadable.close()."""
3050 
3051hunk ./src/allmydata/interfaces.py 2144
3052         returns a Deferred that fires with an IUploadResults instance, from
3053         which the URI of the file can be obtained as results.uri ."""
3054 
3055-    def upload_ssk(write_capability, new_version, uploadable):
3056-        """TODO: how should this work?"""
3057-
3058 class ICheckable(Interface):
3059     def check(monitor, verify=False, add_lease=False):
3060         """Check up on my health, optionally repairing any problems.
3061hunk ./src/allmydata/interfaces.py 2505
3062 
3063 class IRepairResults(Interface):
3064     """I contain the results of a repair operation."""
3065-    def get_successful(self):
3066+    def get_successful():
3067         """Returns a boolean: True if the repair made the file healthy, False
3068         if not. Repair failure generally indicates a file that has been
3069         damaged beyond repair."""
3070hunk ./src/allmydata/interfaces.py 2577
3071     Tahoe process will typically have a single NodeMaker, but unit tests may
3072     create simplified/mocked forms for testing purposes.
3073     """
3074-    def create_from_cap(writecap, readcap=None, **kwargs):
3075+    def create_from_cap(writecap, readcap=None, deep_immutable=False, name=u"<unknown name>"):
3076         """I create an IFilesystemNode from the given writecap/readcap. I can
3077         only provide nodes for existing file/directory objects: use my other
3078         methods to create new objects. I return synchronously."""
3079hunk ./src/allmydata/monitor.py 30
3080 
3081     # the following methods are provided for the operation code
3082 
3083-    def is_cancelled(self):
3084+    def is_cancelled():
3085         """Returns True if the operation has been cancelled. If True,
3086         operation code should stop creating new work, and attempt to stop any
3087         work already in progress."""
3088hunk ./src/allmydata/monitor.py 35
3089 
3090-    def raise_if_cancelled(self):
3091+    def raise_if_cancelled():
3092         """Raise OperationCancelledError if the operation has been cancelled.
3093         Operation code that has a robust error-handling path can simply call
3094         this periodically."""
3095hunk ./src/allmydata/monitor.py 40
3096 
3097-    def set_status(self, status):
3098+    def set_status(status):
3099         """Sets the Monitor's 'status' object to an arbitrary value.
3100         Different operations will store different sorts of status information
3101         here. Operation code should use get+modify+set sequences to update
3102hunk ./src/allmydata/monitor.py 46
3103         this."""
3104 
3105-    def get_status(self):
3106+    def get_status():
3107         """Return the status object. If the operation failed, this will be a
3108         Failure instance."""
3109 
3110hunk ./src/allmydata/monitor.py 50
3111-    def finish(self, status):
3112+    def finish(status):
3113         """Call this when the operation is done, successful or not. The
3114         Monitor's lifetime is influenced by the completion of the operation
3115         it is monitoring. The Monitor's 'status' value will be set with the
3116hunk ./src/allmydata/monitor.py 63
3117 
3118     # the following methods are provided for the initiator of the operation
3119 
3120-    def is_finished(self):
3121+    def is_finished():
3122         """Return a boolean, True if the operation is done (whether
3123         successful or failed), False if it is still running."""
3124 
3125hunk ./src/allmydata/monitor.py 67
3126-    def when_done(self):
3127+    def when_done():
3128         """Return a Deferred that fires when the operation is complete. It
3129         will fire with the operation status, the same value as returned by
3130         get_status()."""
3131hunk ./src/allmydata/monitor.py 72
3132 
3133-    def cancel(self):
3134+    def cancel():
3135         """Cancel the operation as soon as possible. is_cancelled() will
3136         start returning True after this is called."""
3137 
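A sketch (not part of this patch) of the division of labour this interface
implies: operation code polls for cancellation and updates status with a
get+modify+set sequence, while the initiator waits on when_done(). The
work() function is illustrative, and the status is assumed to have been
initialized to 0:

    def operation(monitor, items):
        for item in items:
            monitor.raise_if_cancelled()
            work(item)                                    # per-item work
            monitor.set_status(monitor.get_status() + 1)  # items processed
        monitor.finish(monitor.get_status())

    d = monitor.when_done()  # fires with the final status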
3138hunk ./src/allmydata/mutable/filenode.py 753
3139         self._writekey = writekey
3140         self._serializer = defer.succeed(None)
3141 
3142-
3143     def get_sequence_number(self):
3144         """
3145         Get the sequence number of the mutable version that I represent.
3146hunk ./src/allmydata/mutable/filenode.py 759
3147         """
3148         return self._version[0] # verinfo[0] == the sequence number
3149 
3150+    def get_servermap(self):
3151+        return self._servermap
3152 
3153hunk ./src/allmydata/mutable/filenode.py 762
3154-    # TODO: Terminology?
3155     def get_writekey(self):
3156         """
3157         I return a writekey or None if I don't have a writekey.
3158hunk ./src/allmydata/mutable/filenode.py 768
3159         """
3160         return self._writekey
3161 
3162-
3163     def set_downloader_hints(self, hints):
3164         """
3165         I set the downloader hints.
3166hunk ./src/allmydata/mutable/filenode.py 776
3167 
3168         self._downloader_hints = hints
3169 
3170-
3171     def get_downloader_hints(self):
3172         """
3173         I return the downloader hints.
3174hunk ./src/allmydata/mutable/filenode.py 782
3175         """
3176         return self._downloader_hints
3177 
3178-
3179     def overwrite(self, new_contents):
3180         """
3181         I overwrite the contents of this mutable file version with the
3182hunk ./src/allmydata/mutable/filenode.py 791
3183 
3184         return self._do_serialized(self._overwrite, new_contents)
3185 
3186-
3187     def _overwrite(self, new_contents):
3188         assert IMutableUploadable.providedBy(new_contents)
3189         assert self._servermap.last_update_mode == MODE_WRITE
3190hunk ./src/allmydata/mutable/filenode.py 797
3191 
3192         return self._upload(new_contents)
3193 
3194-
3195     def modify(self, modifier, backoffer=None):
3196         """I use a modifier callback to apply a change to the mutable file.
3197         I implement the following pseudocode::
3198hunk ./src/allmydata/mutable/filenode.py 841
3199 
3200         return self._do_serialized(self._modify, modifier, backoffer)
3201 
3202-
3203     def _modify(self, modifier, backoffer):
3204         if backoffer is None:
3205             backoffer = BackoffAgent().delay
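A sketch (not part of this patch) of a modifier callback as consumed by
modify() above, assuming the usual Tahoe-LAFS modifier signature of
(old_contents, servermap, first_time):

    def _append_line(old_contents, servermap, first_time):
        # compute and return the new contents of the mutable file
        return old_contents + "another line\n"

    d = version.modify(_append_line)  # 'version' is a MutableFileVersion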
3206hunk ./src/allmydata/mutable/filenode.py 846
3207         return self._modify_and_retry(modifier, backoffer, True)
3208 
3209-
3210     def _modify_and_retry(self, modifier, backoffer, first_time):
3211         """
3212         I try to apply modifier to the contents of this version of the
3213hunk ./src/allmydata/mutable/filenode.py 878
3214         d.addErrback(_retry)
3215         return d
3216 
3217-
3218     def _modify_once(self, modifier, first_time):
3219         """
3220         I attempt to apply a modifier to the contents of the mutable
3221hunk ./src/allmydata/mutable/filenode.py 913
3222         d.addCallback(_apply)
3223         return d
3224 
3225-
3226     def is_readonly(self):
3227         """
3228         I return True if this MutableFileVersion provides no write
3229hunk ./src/allmydata/mutable/filenode.py 921
3230         """
3231         return self._writekey is None
3232 
3233-
3234     def is_mutable(self):
3235         """
3236         I return True, since mutable files are always mutable by
3237hunk ./src/allmydata/mutable/filenode.py 928
3238         """
3239         return True
3240 
3241-
3242     def get_storage_index(self):
3243         """
3244         I return the storage index of the reference that I encapsulate.
3245hunk ./src/allmydata/mutable/filenode.py 934
3246         """
3247         return self._storage_index
3248 
3249-
3250     def get_size(self):
3251         """
3252         I return the length, in bytes, of this readable object.
3253hunk ./src/allmydata/mutable/filenode.py 940
3254         """
3255         return self._servermap.size_of_version(self._version)
3256 
3257-
3258     def download_to_data(self, fetch_privkey=False):
3259         """
3260         I return a Deferred that fires with the contents of this
3261hunk ./src/allmydata/mutable/filenode.py 951
3262         d.addCallback(lambda mc: "".join(mc.chunks))
3263         return d
3264 
3265-
3266     def _try_to_download_data(self):
3267         """
3268         I am an unserialized cousin of download_to_data; I am called
3269hunk ./src/allmydata/mutable/filenode.py 963
3270         d.addCallback(lambda mc: "".join(mc.chunks))
3271         return d
3272 
3273-
3274     def read(self, consumer, offset=0, size=None, fetch_privkey=False):
3275         """
3276         I read a portion (possibly all) of the mutable file that I
3277hunk ./src/allmydata/mutable/filenode.py 971
3278         return self._do_serialized(self._read, consumer, offset, size,
3279                                    fetch_privkey)
3280 
3281-
3282     def _read(self, consumer, offset=0, size=None, fetch_privkey=False):
3283         """
3284         I am the serialized companion of read.
3285hunk ./src/allmydata/mutable/filenode.py 981
3286         d = r.download(consumer, offset, size)
3287         return d
3288 
3289-
3290     def _do_serialized(self, cb, *args, **kwargs):
3291         # note: to avoid deadlock, this callable is *not* allowed to invoke
3292         # other serialized methods within this (or any other)
3293hunk ./src/allmydata/mutable/filenode.py 999
3294         self._serializer.addErrback(log.err)
3295         return d
3296 
3297-
3298     def _upload(self, new_contents):
3299         #assert self._pubkey, "update_servermap must be called before publish"
3300         p = Publish(self._node, self._storage_broker, self._servermap)
3301hunk ./src/allmydata/mutable/filenode.py 1009
3302         d.addCallback(self._did_upload, new_contents.get_size())
3303         return d
3304 
3305-
3306     def _did_upload(self, res, size):
3307         self._most_recent_size = size
3308         return res
3309hunk ./src/allmydata/mutable/filenode.py 1029
3310         """
3311         return self._do_serialized(self._update, data, offset)
3312 
3313-
3314     def _update(self, data, offset):
3315         """
3316         I update the mutable file version represented by this particular
3317hunk ./src/allmydata/mutable/filenode.py 1058
3318         d.addCallback(self._build_uploadable_and_finish, data, offset)
3319         return d
3320 
3321-
3322     def _do_modify_update(self, data, offset):
3323         """
3324         I perform a file update by modifying the contents of the file
3325hunk ./src/allmydata/mutable/filenode.py 1073
3326             return new
3327         return self._modify(m, None)
3328 
3329-
3330     def _do_update_update(self, data, offset):
3331         """
3332         I start the Servermap update that gets us the data we need to
3333hunk ./src/allmydata/mutable/filenode.py 1108
3334         return self._update_servermap(update_range=(start_segment,
3335                                                     end_segment))
3336 
3337-
3338     def _decode_and_decrypt_segments(self, ignored, data, offset):
3339         """
3340         After the servermap update, I take the encrypted and encoded
3341hunk ./src/allmydata/mutable/filenode.py 1148
3342         d3 = defer.succeed(blockhashes)
3343         return deferredutil.gatherResults([d1, d2, d3])
3344 
3345-
3346     def _build_uploadable_and_finish(self, segments_and_bht, data, offset):
3347         """
3348         After the process has the plaintext segments, I build the
3349hunk ./src/allmydata/mutable/filenode.py 1163
3350         p = Publish(self._node, self._storage_broker, self._servermap)
3351         return p.update(u, offset, segments_and_bht[2], self._version)
3352 
3353-
3354     def _update_servermap(self, mode=MODE_WRITE, update_range=None):
3355         """
3356         I update the servermap. I return a Deferred that fires when the
3357hunk ./src/allmydata/storage/common.py 1
3358-
3359-import os.path
3360 from allmydata.util import base32
3361 
3362 class DataTooLargeError(Exception):
3363hunk ./src/allmydata/storage/common.py 5
3364     pass
3365+
3366 class UnknownMutableContainerVersionError(Exception):
3367     pass
3368hunk ./src/allmydata/storage/common.py 8
3369+
3370 class UnknownImmutableContainerVersionError(Exception):
3371     pass
3372 
3373hunk ./src/allmydata/storage/common.py 18
3374 
3375 def si_a2b(ascii_storageindex):
3376     return base32.a2b(ascii_storageindex)
3377-
3378-def storage_index_to_dir(storageindex):
3379-    sia = si_b2a(storageindex)
3380-    return os.path.join(sia[:2], sia)
3381hunk ./src/allmydata/storage/crawler.py 2
3382 
3383-import os, time, struct
3384+import time, struct
3385 import cPickle as pickle
3386 from twisted.internet import reactor
3387 from twisted.application import service
3388hunk ./src/allmydata/storage/crawler.py 6
3389+
3390+from allmydata.util.assertutil import precondition
3391+from allmydata.interfaces import IStorageBackend
3392 from allmydata.storage.common import si_b2a
3393hunk ./src/allmydata/storage/crawler.py 10
3394-from allmydata.util import fileutil
3395+
3396 
3397 class TimeSliceExceeded(Exception):
3398     pass
3399hunk ./src/allmydata/storage/crawler.py 15
3400 
3401+
3402 class ShareCrawler(service.MultiService):
3403hunk ./src/allmydata/storage/crawler.py 17
3404-    """A ShareCrawler subclass is attached to a StorageServer, and
3405-    periodically walks all of its shares, processing each one in some
3406-    fashion. This crawl is rate-limited, to reduce the IO burden on the host,
3407-    since large servers can easily have a terabyte of shares, in several
3408-    million files, which can take hours or days to read.
3409+    """
3410+    An instance of a subclass of ShareCrawler is attached to a storage
3411+    backend, and periodically walks the backend's shares, processing them
3412+    in some fashion. This crawl is rate-limited to reduce the I/O burden on
3413+    the host, since large servers can easily have a terabyte of shares in
3414+    several million files, which can take hours or days to read.
3415 
3416     Once the crawler starts a cycle, it will proceed at a rate limited by the
3417     allowed_cpu_percentage= and cpu_slice= parameters: yielding the reactor
3418hunk ./src/allmydata/storage/crawler.py 33
3419     long enough to ensure that 'minimum_cycle_time' elapses between the start
3420     of two consecutive cycles.
3421 
3422-    We assume that the normal upload/download/get_buckets traffic of a tahoe
3423+    We assume that the normal upload/download/DYHB traffic of a Tahoe-LAFS
3424     grid will cause the prefixdir contents to be mostly cached in the kernel,
3425hunk ./src/allmydata/storage/crawler.py 35
3426-    or that the number of buckets in each prefixdir will be small enough to
3427-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
3428-    buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
3429+    or that the number of sharesets in each prefixdir will be small enough to
3430+    load quickly. A 1TB allmydata.com server was measured to have 2.56 million
3431+    sharesets, spread into the 1024 prefixdirs, with about 2500 sharesets per
3432     prefix. On this server, each prefixdir took 130ms-200ms to list the first
3433     time, and 17ms to list the second time.
3434 
3435hunk ./src/allmydata/storage/crawler.py 41
3436-    To use a crawler, create a subclass which implements the process_bucket()
3437-    method. It will be called with a prefixdir and a base32 storage index
3438-    string. process_bucket() must run synchronously. Any keys added to
3439-    self.state will be preserved. Override add_initial_state() to set up
3440-    initial state keys. Override finished_cycle() to perform additional
3441-    processing when the cycle is complete. Any status that the crawler
3442-    produces should be put in the self.state dictionary. Status renderers
3443-    (like a web page which describes the accomplishments of your crawler)
3444-    will use crawler.get_state() to retrieve this dictionary; they can
3445-    present the contents as they see fit.
3446+    To implement a crawler, create a subclass that implements the
3447+    process_shareset() method. It will be called with a prefix and an
3448+    object providing the IShareSet interface. process_shareset() must run
3449+    synchronously. Any keys added to self.state will be preserved. Override
3450+    add_initial_state() to set up initial state keys. Override
3451+    finished_cycle() to perform additional processing when the cycle is
3452+    complete. Any status that the crawler produces should be put in the
3453+    self.state dictionary. Status renderers (like a web page describing the
3454+    accomplishments of your crawler) will use crawler.get_state() to retrieve
3455+    this dictionary; they can present the contents as they see fit.
3456 
3457hunk ./src/allmydata/storage/crawler.py 52
3458-    Then create an instance, with a reference to a StorageServer and a
3459-    filename where it can store persistent state. The statefile is used to
3460-    keep track of how far around the ring the process has travelled, as well
3461-    as timing history to allow the pace to be predicted and controlled. The
3462-    statefile will be updated and written to disk after each time slice (just
3463-    before the crawler yields to the reactor), and also after each cycle is
3464-    finished, and also when stopService() is called. Note that this means
3465-    that a crawler which is interrupted with SIGKILL while it is in the
3466-    middle of a time slice will lose progress: the next time the node is
3467-    started, the crawler will repeat some unknown amount of work.
3468+    Then create an instance, with a reference to a backend object providing
3469+    the IStorageBackend interface, and a filename where it can store
3470+    persistent state. The statefile is used to keep track of how far around
3471+    the ring the process has travelled, as well as timing history to allow
3472+    the pace to be predicted and controlled. The statefile will be updated
3473+    and written to disk after each time slice (just before the crawler yields
3474+    to the reactor), and also after each cycle is finished, and also when
3475+    stopService() is called. Note that this means that a crawler that is
3476+    interrupted with SIGKILL while it is in the middle of a time slice will
3477+    lose progress: the next time the node is started, the crawler will repeat
3478+    some unknown amount of work.
3479 
3480     The crawler instance must be started with startService() before it will
3481hunk ./src/allmydata/storage/crawler.py 65
3482-    do any work. To make it stop doing work, call stopService().
3483+    do any work. To make it stop doing work, call stopService(). A crawler
3484+    is usually a child service of a StorageServer, although it should not
3485+    depend on that.
3486+
3487+    For historical reasons, some dictionary key names use the term "bucket"
3488+    for what is now preferably called a "shareset" (the set of shares that a
3489+    server holds under a given storage index).
3490     """
3491 
3492     slow_start = 300 # don't start crawling for 5 minutes after startup
3493hunk ./src/allmydata/storage/crawler.py 80
3494     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
3495     minimum_cycle_time = 300 # don't run a cycle faster than this
3496 
3497-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
3498+    def __init__(self, backend, statefp, allowed_cpu_percentage=None):
3499+        precondition(IStorageBackend.providedBy(backend), backend)
3500         service.MultiService.__init__(self)
3501hunk ./src/allmydata/storage/crawler.py 83
3502+        self.backend = backend
3503+        self.statefp = statefp
3504         if allowed_cpu_percentage is not None:
3505             self.allowed_cpu_percentage = allowed_cpu_percentage
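A sketch (not part of this patch) of a minimal subclass following the
recipe in the docstring above; 'backend' and 'statefp' are an
IStorageBackend provider and a FilePath, per the new __init__ signature:

    class ShareCountingCrawler(ShareCrawler):
        minimum_cycle_time = 60*60  # at most one cycle per hour

        def add_initial_state(self):
            self.state.setdefault("share-counts", {})

        def process_shareset(self, cycle, prefix, shareset):
            si = shareset.get_storage_index_string()
            self.state["share-counts"][si] = len(list(shareset.get_shares()))

        def finished_cycle(self, cycle):
            counts = self.state["share-counts"]
            self.state["total-shares"] = sum(counts.values())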
3506hunk ./src/allmydata/storage/crawler.py 87
3507-        self.server = server
3508-        self.sharedir = server.sharedir
3509-        self.statefile = statefile
3510         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
3511                          for i in range(2**10)]
3512         self.prefixes.sort()
3513hunk ./src/allmydata/storage/crawler.py 91
3514         self.timer = None
3515-        self.bucket_cache = (None, [])
3516+        self.shareset_cache = (None, [])
3517         self.current_sleep_time = None
3518         self.next_wake_time = None
3519         self.last_prefix_finished_time = None
3520hunk ./src/allmydata/storage/crawler.py 154
3521                 left = len(self.prefixes) - self.last_complete_prefix_index
3522                 remaining = left * self.last_prefix_elapsed_time
3523                 # TODO: remainder of this prefix: we need to estimate the
3524-                # per-bucket time, probably by measuring the time spent on
3525-                # this prefix so far, divided by the number of buckets we've
3526+                # per-shareset time, probably by measuring the time spent on
3527+                # this prefix so far, divided by the number of sharesets we've
3528                 # processed.
3529             d["estimated-cycle-complete-time-left"] = remaining
3530             # it's possible to call get_progress() from inside a crawler's
3531hunk ./src/allmydata/storage/crawler.py 175
3532         state dictionary.
3533 
3534         If we are not currently sleeping (i.e. get_state() was called from
3535-        inside the process_prefixdir, process_bucket, or finished_cycle()
3536+        inside the process_prefixdir, process_shareset, or finished_cycle()
3537         methods, or if startService has not yet been called on this crawler),
3538         these two keys will be None.
3539 
3540hunk ./src/allmydata/storage/crawler.py 188
3541     def load_state(self):
3542         # we use this to store state for both the crawler's internals and
3543         # anything the subclass-specific code needs. The state is stored
3544-        # after each bucket is processed, after each prefixdir is processed,
3545+        # after each shareset is processed, after each prefixdir is processed,
3546         # and after a cycle is complete. The internal keys we use are:
3547         #  ["version"]: int, always 1
3548         #  ["last-cycle-finished"]: int, or None if we have not yet finished
3549hunk ./src/allmydata/storage/crawler.py 202
3550         #                            are sleeping between cycles, or if we
3551         #                            have not yet finished any prefixdir since
3552         #                            a cycle was started
3553-        #  ["last-complete-bucket"]: str, base32 storage index bucket name
3554-        #                            of the last bucket to be processed, or
3555-        #                            None if we are sleeping between cycles
3556+        #  ["last-complete-bucket"]: str, base32 storage index of the last
3557+        #                            shareset to be processed, or None if we
3558+        #                            are sleeping between cycles
3559         try:
3560hunk ./src/allmydata/storage/crawler.py 206
3561-            f = open(self.statefile, "rb")
3562-            state = pickle.load(f)
3563-            f.close()
3564+            state = pickle.loads(self.statefp.getContent())
3565         except EnvironmentError:
3566             state = {"version": 1,
3567                      "last-cycle-finished": None,
3568hunk ./src/allmydata/storage/crawler.py 242
3569         else:
3570             last_complete_prefix = self.prefixes[lcpi]
3571         self.state["last-complete-prefix"] = last_complete_prefix
3572-        tmpfile = self.statefile + ".tmp"
3573-        f = open(tmpfile, "wb")
3574-        pickle.dump(self.state, f)
3575-        f.close()
3576-        fileutil.move_into_place(tmpfile, self.statefile)
3577+        self.statefp.setContent(pickle.dumps(self.state))
3578 
3579     def startService(self):
3580         # arrange things to look like we were just sleeping, so
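For reference, a sketch (not part of this patch) of the FilePath calls now
used for state persistence, assuming twisted.python.filepath.FilePath; the
path is illustrative:

    import cPickle as pickle
    from twisted.python.filepath import FilePath

    statefp = FilePath("storage/bucket_counter.state")
    statefp.setContent(pickle.dumps({"version": 1}))  # replace file contents
    state = pickle.loads(statefp.getContent())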
3581hunk ./src/allmydata/storage/crawler.py 284
3582         sleep_time = (this_slice / self.allowed_cpu_percentage) - this_slice
3583         # if the math gets weird, or a timequake happens, don't sleep
3584         # forever. Note that this means that, while a cycle is running, we
3585-        # will process at least one bucket every 5 minutes, no matter how
3586-        # long that bucket takes.
3587+        # will process at least one shareset every 5 minutes, no matter how
3588+        # long that shareset takes.
3589         sleep_time = max(0.0, min(sleep_time, 299))
3590         if finished_cycle:
3591             # how long should we sleep between cycles? Don't run faster than
3592hunk ./src/allmydata/storage/crawler.py 315
3593         for i in range(self.last_complete_prefix_index+1, len(self.prefixes)):
3594             # if we want to yield earlier, just raise TimeSliceExceeded()
3595             prefix = self.prefixes[i]
3596-            prefixdir = os.path.join(self.sharedir, prefix)
3597-            if i == self.bucket_cache[0]:
3598-                buckets = self.bucket_cache[1]
3599+            if i == self.shareset_cache[0]:
3600+                sharesets = self.shareset_cache[1]
3601             else:
3602hunk ./src/allmydata/storage/crawler.py 318
3603-                try:
3604-                    buckets = os.listdir(prefixdir)
3605-                    buckets.sort()
3606-                except EnvironmentError:
3607-                    buckets = []
3608-                self.bucket_cache = (i, buckets)
3609-            self.process_prefixdir(cycle, prefix, prefixdir,
3610-                                   buckets, start_slice)
3611+                sharesets = self.backend.get_sharesets_for_prefix(prefix)
3612+                self.shareset_cache = (i, sharesets)
3613+            self.process_prefixdir(cycle, prefix, sharesets, start_slice)
3614             self.last_complete_prefix_index = i
3615 
3616             now = time.time()
3617hunk ./src/allmydata/storage/crawler.py 345
3618         self.finished_cycle(cycle)
3619         self.save_state()
3620 
3621-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
3622-        """This gets a list of bucket names (i.e. storage index strings,
3623+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
3624+        """
3625+        This gets the IShareSet objects for this prefix, sorted by their
3626         base32-encoded storage index strings.
3627 
3628         You can override this if your crawler doesn't care about the actual
3629hunk ./src/allmydata/storage/crawler.py 352
3630         shares, for example a crawler which merely keeps track of how many
3631-        buckets are being managed by this server.
3632+        sharesets are being managed by this server.
3633 
3634hunk ./src/allmydata/storage/crawler.py 354
3635-        Subclasses which *do* care about actual bucket should leave this
3636-        method along, and implement process_bucket() instead.
3637+        Subclasses that *do* care about the actual sharesets should leave this
3638+        method alone, and implement process_shareset() instead.
3639         """
3640 
3641hunk ./src/allmydata/storage/crawler.py 358
3642-        for bucket in buckets:
3643-            if bucket <= self.state["last-complete-bucket"]:
3644+        for shareset in sharesets:
3645+            base32si = shareset.get_storage_index_string()
3646+            if base32si <= self.state["last-complete-bucket"]:
3647                 continue
3648hunk ./src/allmydata/storage/crawler.py 362
3649-            self.process_bucket(cycle, prefix, prefixdir, bucket)
3650-            self.state["last-complete-bucket"] = bucket
3651+            self.process_shareset(cycle, prefix, shareset)
3652+            self.state["last-complete-bucket"] = base32si
3653             if time.time() >= start_slice + self.cpu_slice:
3654                 raise TimeSliceExceeded()
3655 
3656hunk ./src/allmydata/storage/crawler.py 370
3657     # the remaining methods are explicitly for subclasses to implement.
3658 
3659     def started_cycle(self, cycle):
3660-        """Notify a subclass that the crawler is about to start a cycle.
3661+        """
3662+        Notify a subclass that the crawler is about to start a cycle.
3663 
3664         This method is for subclasses to override. No upcall is necessary.
3665         """
3666hunk ./src/allmydata/storage/crawler.py 377
3667         pass
3668 
3669-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
3670-        """Examine a single bucket. Subclasses should do whatever they want
3671+    def process_shareset(self, cycle, prefix, shareset):
3672+        """
3673+        Examine a single shareset. Subclasses should do whatever they want
3674         to do to the shares therein, then update self.state as necessary.
3675 
3676         If the crawler is never interrupted by SIGKILL, this method will be
3677hunk ./src/allmydata/storage/crawler.py 383
3678-        called exactly once per share (per cycle). If it *is* interrupted,
3679+        called exactly once per shareset (per cycle). If it *is* interrupted,
3680         then the next time the node is started, some amount of work will be
3681         duplicated, according to when self.save_state() was last called. By
3682         default, save_state() is called at the end of each timeslice, and
3683hunk ./src/allmydata/storage/crawler.py 391
3684 
3685         To reduce the chance of duplicate work (i.e. to avoid adding multiple
3686         records to a database), you can call save_state() at the end of your
3687-        process_bucket() method. This will reduce the maximum duplicated work
3688-        to one bucket per SIGKILL. It will also add overhead, probably 1-20ms
3689-        per bucket (and some disk writes), which will count against your
3690-        allowed_cpu_percentage, and which may be considerable if
3691-        process_bucket() runs quickly.
3692+        process_shareset() method. This will reduce the maximum duplicated
3693+        work to one shareset per SIGKILL. It will also add overhead, probably
3694+        1-20ms per shareset (and some disk writes), which will count against
3695+        your allowed_cpu_percentage, and which may be considerable if
3696+        process_shareset() runs quickly.
3697 
3698         This method is for subclasses to override. No upcall is necessary.
3699         """
3700hunk ./src/allmydata/storage/crawler.py 402
3701         pass
3702 
3703     def finished_prefix(self, cycle, prefix):
3704-        """Notify a subclass that the crawler has just finished processing a
3705-        prefix directory (all buckets with the same two-character/10bit
3706+        """
3707+        Notify a subclass that the crawler has just finished processing a
3708+        prefix directory (all sharesets with the same two-character/10-bit
3709         prefix). To impose a limit on how much work might be duplicated by a
3710         SIGKILL that occurs during a timeslice, you can call
3711         self.save_state() here, but be aware that it may represent a
3712hunk ./src/allmydata/storage/crawler.py 415
3713         pass
3714 
3715     def finished_cycle(self, cycle):
3716-        """Notify subclass that a cycle (one complete traversal of all
3717+        """
3718+        Notify subclass that a cycle (one complete traversal of all
3719         prefixdirs) has just finished. 'cycle' is the number of the cycle
3720         that just finished. This method should perform summary work and
3721         update self.state to publish information to status displays.
3722hunk ./src/allmydata/storage/crawler.py 433
3723         pass
3724 
3725     def yielding(self, sleep_time):
3726-        """The crawler is about to sleep for 'sleep_time' seconds. This
3727+        """
3728+        The crawler is about to sleep for 'sleep_time' seconds. This
3729         method is mostly for the convenience of unit tests.
3730 
3731         This method is for subclasses to override. No upcall is necessary.
3732hunk ./src/allmydata/storage/crawler.py 443
3733 
3734 
3735 class BucketCountingCrawler(ShareCrawler):
3736-    """I keep track of how many buckets are being managed by this server.
3737-    This is equivalent to the number of distributed files and directories for
3738-    which I am providing storage. The actual number of files+directories in
3739-    the full grid is probably higher (especially when there are more servers
3740-    than 'N', the number of generated shares), because some files+directories
3741-    will have shares on other servers instead of me. Also note that the
3742-    number of buckets will differ from the number of shares in small grids,
3743-    when more than one share is placed on a single server.
3744+    """
3745+    I keep track of how many sharesets, each corresponding to a storage index,
3746+    are being managed by this server. This is equivalent to the number of
3747+    distributed files and directories for which I am providing storage. The
3748+    actual number of files and directories in the full grid is probably higher
3749+    (especially when there are more servers than 'N', the number of generated
3750+    shares), because some files and directories will have shares on other
3751+    servers instead of me. Also note that the number of sharesets will differ
3752+    from the number of shares in small grids, when more than one share is
3753+    placed on a single server.
3754     """
3755 
3756     minimum_cycle_time = 60*60 # we don't need this more than once an hour
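As a usage sketch (hypothetical helper, not in the patch), a status display would consume this counter via the crawler's get_state():

    def format_shareset_count(bucket_counter):
        # "last-complete-bucket-count" stays None until a full cycle has
        # completed; the key keeps its historical name even though it now
        # counts sharesets.
        count = bucket_counter.get_state().get("last-complete-bucket-count")
        if count is None:
            return "shareset count not yet available"
        return "%d sharesets (storage indices) stored" % (count,)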
3757hunk ./src/allmydata/storage/crawler.py 457
3758 
3759-    def __init__(self, server, statefile, num_sample_prefixes=1):
3760-        ShareCrawler.__init__(self, server, statefile)
3761+    def __init__(self, backend, statefp, num_sample_prefixes=1):
3762+        ShareCrawler.__init__(self, backend, statefp)
3763         self.num_sample_prefixes = num_sample_prefixes
3764 
3765     def add_initial_state(self):
3766hunk ./src/allmydata/storage/crawler.py 471
3767         self.state.setdefault("last-complete-bucket-count", None)
3768         self.state.setdefault("storage-index-samples", {})
3769 
3770-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
3771+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
3772         # we override process_prefixdir() because we don't want to look at
3773hunk ./src/allmydata/storage/crawler.py 473
3774-        # the individual buckets. We'll save state after each one. On my
3775+        # the individual sharesets. We'll save state after each one. On my
3776         # laptop, a mostly-empty storage server can process about 70
3777         # prefixdirs in a 1.0s slice.
3778         if cycle not in self.state["bucket-counts"]:
3779hunk ./src/allmydata/storage/crawler.py 478
3780             self.state["bucket-counts"][cycle] = {}
3781-        self.state["bucket-counts"][cycle][prefix] = len(buckets)
3782+        self.state["bucket-counts"][cycle][prefix] = len(sharesets)
3783         if prefix in self.prefixes[:self.num_sample_prefixes]:
3784hunk ./src/allmydata/storage/crawler.py 480
3785-            self.state["storage-index-samples"][prefix] = (cycle, buckets)
3786+            self.state["storage-index-samples"][prefix] = (cycle, sharesets)
3787 
3788     def finished_cycle(self, cycle):
3789         last_counts = self.state["bucket-counts"].get(cycle, [])
3790hunk ./src/allmydata/storage/crawler.py 486
3791         if len(last_counts) == len(self.prefixes):
3792             # great, we have a whole cycle.
3793-            num_buckets = sum(last_counts.values())
3794-            self.state["last-complete-bucket-count"] = num_buckets
3795+            num_sharesets = sum(last_counts.values())
3796+            self.state["last-complete-bucket-count"] = num_sharesets
3797             # get rid of old counts
3798             for old_cycle in list(self.state["bucket-counts"].keys()):
3799                 if old_cycle != cycle:
3800hunk ./src/allmydata/storage/crawler.py 494
3801                     del self.state["bucket-counts"][old_cycle]
3802         # get rid of old samples too
3803         for prefix in list(self.state["storage-index-samples"].keys()):
3804-            old_cycle,buckets = self.state["storage-index-samples"][prefix]
3805+            old_cycle, storage_indices = self.state["storage-index-samples"][prefix]
3806             if old_cycle != cycle:
3807                 del self.state["storage-index-samples"][prefix]
3808hunk ./src/allmydata/storage/crawler.py 497
3809-
3810hunk ./src/allmydata/storage/expirer.py 1
3811-import time, os, pickle, struct
3812+
3813+import time, pickle, struct
3814+from twisted.python import log as twlog
3815+
3816 from allmydata.storage.crawler import ShareCrawler
3817hunk ./src/allmydata/storage/expirer.py 6
3818-from allmydata.storage.shares import get_share_file
3819-from allmydata.storage.common import UnknownMutableContainerVersionError, \
3820+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
3821      UnknownImmutableContainerVersionError
3822hunk ./src/allmydata/storage/expirer.py 8
3823-from twisted.python import log as twlog
3824+
3825 
3826 class LeaseCheckingCrawler(ShareCrawler):
3827     """I examine the leases on all shares, determining which are still valid
3828hunk ./src/allmydata/storage/expirer.py 17
3829     removed.
3830 
3831     I collect statistics on the leases and make these available to a web
3832-    status page, including::
3833+    status page, including:
3834 
3835     Space recovered during this cycle-so-far:
3836      actual (only if expiration_enabled=True):
3837hunk ./src/allmydata/storage/expirer.py 21
3838-      num-buckets, num-shares, sum of share sizes, real disk usage
3839+      num-storage-indices, num-shares, sum of share sizes, real disk usage
3840       ('real disk usage' means we use stat(fn).st_blocks*512 and include any
3841        space used by the directory)
3842      what it would have been with the original lease expiration time
3843hunk ./src/allmydata/storage/expirer.py 32
3844 
3845     Space recovered during the last 10 cycles  <-- saved in separate pickle
3846 
3847-    Shares/buckets examined:
3848+    Shares/storage-indices examined:
3849      this cycle-so-far
3850      prediction of rest of cycle
3851      during last 10 cycles <-- separate pickle
3852hunk ./src/allmydata/storage/expirer.py 42
3853     Histogram of leases-per-share:
3854      this-cycle-to-date
3855      last 10 cycles <-- separate pickle
3856-    Histogram of lease ages, buckets = 1day
3857+    Histogram of lease ages, in one-day bins
3858      cycle-to-date
3859      last 10 cycles <-- separate pickle
3860 
3861hunk ./src/allmydata/storage/expirer.py 53
3862     slow_start = 360 # wait 6 minutes after startup
3863     minimum_cycle_time = 12*60*60 # not more than twice per day
3864 
3865-    def __init__(self, server, statefile, historyfile,
3866-                 expiration_enabled, mode,
3867-                 override_lease_duration, # used if expiration_mode=="age"
3868-                 cutoff_date, # used if expiration_mode=="cutoff-date"
3869-                 sharetypes):
3870-        self.historyfile = historyfile
3871-        self.expiration_enabled = expiration_enabled
3872-        self.mode = mode
3873+    def __init__(self, backend, statefp, historyfp, expiration_policy):
3874+        # ShareCrawler.__init__ will call add_initial_state, so self.historyfp has to be set first.
3875+        self.historyfp = historyfp
3876+        ShareCrawler.__init__(self, backend, statefp)
3877+
3878+        self.expiration_enabled = expiration_policy['enabled']
3879+        self.mode = expiration_policy['mode']
3880         self.override_lease_duration = None
3881         self.cutoff_date = None
3882         if self.mode == "age":
3883hunk ./src/allmydata/storage/expirer.py 63
3884-            assert isinstance(override_lease_duration, (int, type(None)))
3885-            self.override_lease_duration = override_lease_duration # seconds
3886+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
3887+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
3888         elif self.mode == "cutoff-date":
3889hunk ./src/allmydata/storage/expirer.py 66
3890-            assert isinstance(cutoff_date, int) # seconds-since-epoch
3891-            assert cutoff_date is not None
3892-            self.cutoff_date = cutoff_date
3893+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
3894+            self.cutoff_date = expiration_policy['cutoff_date']
3895         else:
3896hunk ./src/allmydata/storage/expirer.py 69
3897-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
3898-        self.sharetypes_to_expire = sharetypes
3899-        ShareCrawler.__init__(self, server, statefile)
3900+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
3901+        self.sharetypes_to_expire = expiration_policy['sharetypes']
3902 
3903     def add_initial_state(self):
3904         # we fill ["cycle-to-date"] here (even though they will be reset in
3905hunk ./src/allmydata/storage/expirer.py 84
3906             self.state["cycle-to-date"].setdefault(k, so_far[k])
3907 
3908         # initialize history
3909-        if not os.path.exists(self.historyfile):
3910+        if not self.historyfp.exists():
3911             history = {} # cyclenum -> dict
3912hunk ./src/allmydata/storage/expirer.py 86
3913-            f = open(self.historyfile, "wb")
3914-            pickle.dump(history, f)
3915-            f.close()
3916+            self.historyfp.setContent(pickle.dumps(history))
3917 
3918     def create_empty_cycle_dict(self):
3919         recovered = self.create_empty_recovered_dict()
3920hunk ./src/allmydata/storage/expirer.py 99
3921 
3922     def create_empty_recovered_dict(self):
3923         recovered = {}
3924+        # "buckets" is ambiguous; here it means the number of sharesets (one per storage index per server)
3925         for a in ("actual", "original", "configured", "examined"):
3926             for b in ("buckets", "shares", "sharebytes", "diskbytes"):
3927                 recovered[a+"-"+b] = 0
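For reference, the nested loop yields sixteen zeroed counters; a quick interactive sketch of the generated key names:

    >>> keys = [a+"-"+b
    ...         for a in ("actual", "original", "configured", "examined")
    ...         for b in ("buckets", "shares", "sharebytes", "diskbytes")]
    >>> len(keys)
    16
    >>> keys[:4]
    ['actual-buckets', 'actual-shares', 'actual-sharebytes', 'actual-diskbytes']

increment_space() and increment_container_space() below additionally maintain per-type variants such as "examined-buckets-immutable" and "actual-diskbytes-mutable".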
3928hunk ./src/allmydata/storage/expirer.py 110
3929     def started_cycle(self, cycle):
3930         self.state["cycle-to-date"] = self.create_empty_cycle_dict()
3931 
3932-    def stat(self, fn):
3933-        return os.stat(fn)
3934-
3935-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
3936-        bucketdir = os.path.join(prefixdir, storage_index_b32)
3937-        s = self.stat(bucketdir)
3938+    def process_storage_index(self, cycle, prefix, container):
3939         would_keep_shares = []
3940         wks = None
3941hunk ./src/allmydata/storage/expirer.py 113
3942+        sharetype = None
3943 
3944hunk ./src/allmydata/storage/expirer.py 115
3945-        for fn in os.listdir(bucketdir):
3946-            try:
3947-                shnum = int(fn)
3948-            except ValueError:
3949-                continue # non-numeric means not a sharefile
3950-            sharefile = os.path.join(bucketdir, fn)
3951+        for share in container.get_shares():
3952+            sharetype = share.sharetype
3953             try:
3954hunk ./src/allmydata/storage/expirer.py 118
3955-                wks = self.process_share(sharefile)
3956+                wks = self.process_share(share)
3957             except (UnknownMutableContainerVersionError,
3958                     UnknownImmutableContainerVersionError,
3959                     struct.error):
3960hunk ./src/allmydata/storage/expirer.py 122
3961-                twlog.msg("lease-checker error processing %s" % sharefile)
3962+                twlog.msg("lease-checker error processing %r" % (share,))
3963                 twlog.err()
3964hunk ./src/allmydata/storage/expirer.py 124
3965-                which = (storage_index_b32, shnum)
3966+                which = (si_b2a(share.storageindex), share.get_shnum())
3967                 self.state["cycle-to-date"]["corrupt-shares"].append(which)
3968                 wks = (1, 1, 1, "unknown")
3969             would_keep_shares.append(wks)
3970hunk ./src/allmydata/storage/expirer.py 129
3971 
3972-        sharetype = None
3973+        container_type = None
3974         if wks:
3975hunk ./src/allmydata/storage/expirer.py 131
3976-            # use the last share's sharetype as the buckettype
3977-            sharetype = wks[3]
3978+            # use the last share's sharetype as the container type
3979+            container_type = wks[3]
3980         rec = self.state["cycle-to-date"]["space-recovered"]
3981         self.increment(rec, "examined-buckets", 1)
3982         if sharetype:
3983hunk ./src/allmydata/storage/expirer.py 136
3984-            self.increment(rec, "examined-buckets-"+sharetype, 1)
3985+            self.increment(rec, "examined-buckets-"+container_type, 1)
3986+
3987+        container_diskbytes = container.get_overhead()
3988 
3989hunk ./src/allmydata/storage/expirer.py 140
3990-        try:
3991-            bucket_diskbytes = s.st_blocks * 512
3992-        except AttributeError:
3993-            bucket_diskbytes = 0 # no stat().st_blocks on windows
3994         if sum([wks[0] for wks in would_keep_shares]) == 0:
3995hunk ./src/allmydata/storage/expirer.py 141
3996-            self.increment_bucketspace("original", bucket_diskbytes, sharetype)
3997+            self.increment_container_space("original", container_diskbytes, container_type)
3998         if sum([wks[1] for wks in would_keep_shares]) == 0:
3999hunk ./src/allmydata/storage/expirer.py 143
4000-            self.increment_bucketspace("configured", bucket_diskbytes, sharetype)
4001+            self.increment_container_space("configured", container_diskbytes, sharetype)
4002         if sum([wks[2] for wks in would_keep_shares]) == 0:
4003hunk ./src/allmydata/storage/expirer.py 145
4004-            self.increment_bucketspace("actual", bucket_diskbytes, sharetype)
4005+            self.increment_container_space("actual", container_diskbytes, sharetype)
4006 
4007hunk ./src/allmydata/storage/expirer.py 147
4008-    def process_share(self, sharefilename):
4009-        # first, find out what kind of a share it is
4010-        sf = get_share_file(sharefilename)
4011-        sharetype = sf.sharetype
4012+    def process_share(self, share):
4013+        sharetype = share.sharetype
4014         now = time.time()
4015hunk ./src/allmydata/storage/expirer.py 150
4016-        s = self.stat(sharefilename)
4017+        sharebytes = share.get_size()
4018+        diskbytes = share.get_used_space()
4019 
4020         num_leases = 0
4021         num_valid_leases_original = 0
4022hunk ./src/allmydata/storage/expirer.py 158
4023         num_valid_leases_configured = 0
4024         expired_leases_configured = []
4025 
4026-        for li in sf.get_leases():
4027+        for li in share.get_leases():
4028             num_leases += 1
4029             original_expiration_time = li.get_expiration_time()
4030             grant_renew_time = li.get_grant_renew_time_time()
4031hunk ./src/allmydata/storage/expirer.py 171
4032 
4033             #  expired-or-not according to our configured age limit
4034             expired = False
4035-            if self.mode == "age":
4036-                age_limit = original_expiration_time
4037-                if self.override_lease_duration is not None:
4038-                    age_limit = self.override_lease_duration
4039-                if age > age_limit:
4040-                    expired = True
4041-            else:
4042-                assert self.mode == "cutoff-date"
4043-                if grant_renew_time < self.cutoff_date:
4044-                    expired = True
4045-            if sharetype not in self.sharetypes_to_expire:
4046-                expired = False
4047+            if sharetype in self.sharetypes_to_expire:
4048+                if self.mode == "age":
4049+                    age_limit = original_expiration_time
4050+                    if self.override_lease_duration is not None:
4051+                        age_limit = self.override_lease_duration
4052+                    if age > age_limit:
4053+                        expired = True
4054+                else:
4055+                    assert self.mode == "cutoff-date"
4056+                    if grant_renew_time < self.cutoff_date:
4057+                        expired = True
4058 
4059             if expired:
4060                 expired_leases_configured.append(li)
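The expired-or-not decision above reads more easily as a small pure function; this sketch (hypothetical name, logic copied from the code) isolates the two modes:

    def lease_is_expired(mode, age, original_expiration_time,
                         override_lease_duration, grant_renew_time, cutoff_date):
        if mode == "age":
            # Expire once the lease's age exceeds the limit; the limit is the
            # lease's own expiration figure unless an override is configured.
            age_limit = original_expiration_time
            if override_lease_duration is not None:
                age_limit = override_lease_duration
            return age > age_limit
        # "cutoff-date": expire anything last granted/renewed before the cutoff.
        assert mode == "cutoff-date"
        return grant_renew_time < cutoff_date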
4061hunk ./src/allmydata/storage/expirer.py 190
4062 
4063         so_far = self.state["cycle-to-date"]
4064         self.increment(so_far["leases-per-share-histogram"], num_leases, 1)
4065-        self.increment_space("examined", s, sharetype)
4066+        self.increment_space("examined", sharebytes, diskbytes, sharetype)
4067 
4068         would_keep_share = [1, 1, 1, sharetype]
4069 
4070hunk ./src/allmydata/storage/expirer.py 196
4071         if self.expiration_enabled:
4072             for li in expired_leases_configured:
4073-                sf.cancel_lease(li.cancel_secret)
4074+                share.cancel_lease(li.cancel_secret)
4075 
4076         if num_valid_leases_original == 0:
4077             would_keep_share[0] = 0
4078hunk ./src/allmydata/storage/expirer.py 200
4079-            self.increment_space("original", s, sharetype)
4080+            self.increment_space("original", sharebytes, diskbytes, sharetype)
4081 
4082         if num_valid_leases_configured == 0:
4083             would_keep_share[1] = 0
4084hunk ./src/allmydata/storage/expirer.py 204
4085-            self.increment_space("configured", s, sharetype)
4086+            self.increment_space("configured", sharebytes, diskbytes, sharetype)
4087             if self.expiration_enabled:
4088                 would_keep_share[2] = 0
4089hunk ./src/allmydata/storage/expirer.py 207
4090-                self.increment_space("actual", s, sharetype)
4091+                self.increment_space("actual", sharebytes, diskbytes, sharetype)
4092 
4093         return would_keep_share
4094 
4095hunk ./src/allmydata/storage/expirer.py 211
4096-    def increment_space(self, a, s, sharetype):
4097-        sharebytes = s.st_size
4098-        try:
4099-            # note that stat(2) says that st_blocks is 512 bytes, and that
4100-            # st_blksize is "optimal file sys I/O ops blocksize", which is
4101-            # independent of the block-size that st_blocks uses.
4102-            diskbytes = s.st_blocks * 512
4103-        except AttributeError:
4104-            # the docs say that st_blocks is only on linux. I also see it on
4105-            # MacOS. But it isn't available on windows.
4106-            diskbytes = sharebytes
4107+    def increment_space(self, a, sharebytes, diskbytes, sharetype):
4108         so_far_sr = self.state["cycle-to-date"]["space-recovered"]
4109         self.increment(so_far_sr, a+"-shares", 1)
4110         self.increment(so_far_sr, a+"-sharebytes", sharebytes)
4111hunk ./src/allmydata/storage/expirer.py 221
4112             self.increment(so_far_sr, a+"-sharebytes-"+sharetype, sharebytes)
4113             self.increment(so_far_sr, a+"-diskbytes-"+sharetype, diskbytes)
4114 
4115-    def increment_bucketspace(self, a, bucket_diskbytes, sharetype):
4116+    def increment_container_space(self, a, container_diskbytes, container_type):
4117         rec = self.state["cycle-to-date"]["space-recovered"]
4118hunk ./src/allmydata/storage/expirer.py 223
4119-        self.increment(rec, a+"-diskbytes", bucket_diskbytes)
4120+        self.increment(rec, a+"-diskbytes", container_diskbytes)
4121         self.increment(rec, a+"-buckets", 1)
4122hunk ./src/allmydata/storage/expirer.py 225
4123-        if sharetype:
4124-            self.increment(rec, a+"-diskbytes-"+sharetype, bucket_diskbytes)
4125-            self.increment(rec, a+"-buckets-"+sharetype, 1)
4126+        if container_type:
4127+            self.increment(rec, a+"-diskbytes-"+container_type, container_diskbytes)
4128+            self.increment(rec, a+"-buckets-"+container_type, 1)
4129 
4130     def increment(self, d, k, delta=1):
4131         if k not in d:
4132hunk ./src/allmydata/storage/expirer.py 281
4133         # copy() needs to become a deepcopy
4134         h["space-recovered"] = s["space-recovered"].copy()
4135 
4136-        history = pickle.load(open(self.historyfile, "rb"))
4137+        history = pickle.loads(self.historyfp.getContent())
4138         history[cycle] = h
4139         while len(history) > 10:
4140             oldcycles = sorted(history.keys())
4141hunk ./src/allmydata/storage/expirer.py 286
4142             del history[oldcycles[0]]
4143-        f = open(self.historyfile, "wb")
4144-        pickle.dump(history, f)
4145-        f.close()
4146+        self.historyfp.setContent(pickle.dumps(history))
4147 
4148     def get_state(self):
4149         """In addition to the crawler state described in
4150hunk ./src/allmydata/storage/expirer.py 355
4151         progress = self.get_progress()
4152 
4153         state = ShareCrawler.get_state(self) # does a shallow copy
4154-        history = pickle.load(open(self.historyfile, "rb"))
4155+        history = pickle.loads(self.historyfp.getContent())
4156         state["history"] = history
4157 
4158         if not progress["cycle-in-progress"]:
4159hunk ./src/allmydata/storage/lease.py 3
4160 import struct, time
4161 
4162+
4163+class NonExistentLeaseError(Exception):
4164+    pass
4165+
4166 class LeaseInfo:
4167     def __init__(self, owner_num=None, renew_secret=None, cancel_secret=None,
4168                  expiration_time=None, nodeid=None):
4169hunk ./src/allmydata/storage/lease.py 21
4170 
4171     def get_expiration_time(self):
4172         return self.expiration_time
4173+
4174     def get_grant_renew_time_time(self):
4175         # hack, based upon fixed 31day expiration period
4176         return self.expiration_time - 31*24*60*60
4177hunk ./src/allmydata/storage/lease.py 25
4178+
4179     def get_age(self):
4180         return time.time() - self.get_grant_renew_time_time()
4181 
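Illustrative arithmetic for the fixed 31-day hack above (it only holds if every lease really was granted for 31 days):

    # expiration_time is absolute seconds-since-epoch, so:
    #   grant_renew_time = expiration_time - 31*24*60*60
    #   age              = now - grant_renew_time
    # e.g. a lease expiring 10 days from now was granted/renewed 21 days ago,
    # and get_age() returns roughly 21*24*60*60.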
4182hunk ./src/allmydata/storage/lease.py 36
4183          self.expiration_time) = struct.unpack(">L32s32sL", data)
4184         self.nodeid = None
4185         return self
4186+
4187     def to_immutable_data(self):
4188         return struct.pack(">L32s32sL",
4189                            self.owner_num,
4190hunk ./src/allmydata/storage/lease.py 49
4191                            int(self.expiration_time),
4192                            self.renew_secret, self.cancel_secret,
4193                            self.nodeid)
4194+
4195     def from_mutable_data(self, data):
4196         (self.owner_num,
4197          self.expiration_time,
4198hunk ./src/allmydata/storage/server.py 1
4199-import os, re, weakref, struct, time
4200+import weakref, time
4201 
4202 from foolscap.api import Referenceable
4203 from twisted.application import service
4204hunk ./src/allmydata/storage/server.py 7
4205 
4206 from zope.interface import implements
4207-from allmydata.interfaces import RIStorageServer, IStatsProducer
4208-from allmydata.util import fileutil, idlib, log, time_format
4209+from allmydata.interfaces import RIStorageServer, IStatsProducer, IStorageBackend
4210+from allmydata.util.assertutil import precondition
4211+from allmydata.util import idlib, log
4212 import allmydata # for __full_version__
4213 
4214hunk ./src/allmydata/storage/server.py 12
4215-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
4216-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
4217+from allmydata.storage.common import si_a2b, si_b2a
4218+[si_a2b]  # hush pyflakes
4219 from allmydata.storage.lease import LeaseInfo
4220hunk ./src/allmydata/storage/server.py 15
4221-from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
4222-     create_mutable_sharefile
4223-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
4224-from allmydata.storage.crawler import BucketCountingCrawler
4225 from allmydata.storage.expirer import LeaseCheckingCrawler
4226hunk ./src/allmydata/storage/server.py 16
4227-
4228-# storage/
4229-# storage/shares/incoming
4230-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
4231-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
4232-# storage/shares/$START/$STORAGEINDEX
4233-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
4234-
4235-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
4236-# base-32 chars).
4237-
4238-# $SHARENUM matches this regex:
4239-NUM_RE=re.compile("^[0-9]+$")
4240-
4241+from allmydata.storage.crawler import BucketCountingCrawler
4242 
4243 
4244 class StorageServer(service.MultiService, Referenceable):
4245hunk ./src/allmydata/storage/server.py 21
4246     implements(RIStorageServer, IStatsProducer)
4247+
4248     name = 'storage'
4249     LeaseCheckerClass = LeaseCheckingCrawler
4250hunk ./src/allmydata/storage/server.py 24
4251+    DEFAULT_EXPIRATION_POLICY = {
4252+        'enabled': False,
4253+        'mode': 'age',
4254+        'override_lease_duration': None,
4255+        'cutoff_date': None,
4256+        'sharetypes': ('mutable', 'immutable'),
4257+    }
4258 
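For example, a policy that expires only immutable shares not renewed since a given date might be built like this (a sketch; the timestamp is an arbitrary example):

    cutoff_policy = {
        'enabled': True,
        'mode': 'cutoff-date',
        'override_lease_duration': None,
        'cutoff_date': 1316649600,   # seconds since epoch, as expirer.py asserts
        'sharetypes': ('immutable',),
    }
    ss = StorageServer(serverid, backend, statedir,
                       expiration_policy=cutoff_policy)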
4259hunk ./src/allmydata/storage/server.py 32
4260-    def __init__(self, storedir, nodeid, reserved_space=0,
4261-                 discard_storage=False, readonly_storage=False,
4262+    def __init__(self, serverid, backend, statedir,
4263                  stats_provider=None,
4264hunk ./src/allmydata/storage/server.py 34
4265-                 expiration_enabled=False,
4266-                 expiration_mode="age",
4267-                 expiration_override_lease_duration=None,
4268-                 expiration_cutoff_date=None,
4269-                 expiration_sharetypes=("mutable", "immutable")):
4270+                 expiration_policy=None):
4271         service.MultiService.__init__(self)
4272hunk ./src/allmydata/storage/server.py 36
4273-        assert isinstance(nodeid, str)
4274-        assert len(nodeid) == 20
4275-        self.my_nodeid = nodeid
4276-        self.storedir = storedir
4277-        sharedir = os.path.join(storedir, "shares")
4278-        fileutil.make_dirs(sharedir)
4279-        self.sharedir = sharedir
4280-        # we don't actually create the corruption-advisory dir until necessary
4281-        self.corruption_advisory_dir = os.path.join(storedir,
4282-                                                    "corruption-advisories")
4283-        self.reserved_space = int(reserved_space)
4284-        self.no_storage = discard_storage
4285-        self.readonly_storage = readonly_storage
4286+        precondition(IStorageBackend.providedBy(backend), backend)
4287+        precondition(isinstance(serverid, str), serverid)
4288+        precondition(len(serverid) == 20, serverid)
4289+
4290+        self._serverid = serverid
4291         self.stats_provider = stats_provider
4292         if self.stats_provider:
4293             self.stats_provider.register_producer(self)
4294hunk ./src/allmydata/storage/server.py 44
4295-        self.incomingdir = os.path.join(sharedir, 'incoming')
4296-        self._clean_incomplete()
4297-        fileutil.make_dirs(self.incomingdir)
4298         self._active_writers = weakref.WeakKeyDictionary()
4299hunk ./src/allmydata/storage/server.py 45
4300+        self.backend = backend
4301+        self.backend.setServiceParent(self)
4302+        self._statedir = statedir
4303         log.msg("StorageServer created", facility="tahoe.storage")
4304 
4305hunk ./src/allmydata/storage/server.py 50
4306-        if reserved_space:
4307-            if self.get_available_space() is None:
4308-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
4309-                        umin="0wZ27w", level=log.UNUSUAL)
4310-
4311         self.latencies = {"allocate": [], # immutable
4312                           "write": [],
4313                           "close": [],
4314hunk ./src/allmydata/storage/server.py 61
4315                           "renew": [],
4316                           "cancel": [],
4317                           }
4318-        self.add_bucket_counter()
4319-
4320-        statefile = os.path.join(self.storedir, "lease_checker.state")
4321-        historyfile = os.path.join(self.storedir, "lease_checker.history")
4322-        klass = self.LeaseCheckerClass
4323-        self.lease_checker = klass(self, statefile, historyfile,
4324-                                   expiration_enabled, expiration_mode,
4325-                                   expiration_override_lease_duration,
4326-                                   expiration_cutoff_date,
4327-                                   expiration_sharetypes)
4328-        self.lease_checker.setServiceParent(self)
4329+        self._setup_bucket_counter()
4330+        self._setup_lease_checker(expiration_policy or self.DEFAULT_EXPIRATION_POLICY)
4331 
4332     def __repr__(self):
4333hunk ./src/allmydata/storage/server.py 65
4334-        return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
4335+        return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self._serverid),)
4336 
4337hunk ./src/allmydata/storage/server.py 67
4338-    def add_bucket_counter(self):
4339-        statefile = os.path.join(self.storedir, "bucket_counter.state")
4340-        self.bucket_counter = BucketCountingCrawler(self, statefile)
4341+    def _setup_bucket_counter(self):
4342+        statefp = self._statedir.child("bucket_counter.state")
4343+        self.bucket_counter = BucketCountingCrawler(self.backend, statefp)
4344         self.bucket_counter.setServiceParent(self)
4345 
4346hunk ./src/allmydata/storage/server.py 72
4347+    def _setup_lease_checker(self, expiration_policy):
4348+        statefp = self._statedir.child("lease_checker.state")
4349+        historyfp = self._statedir.child("lease_checker.history")
4350+        self.lease_checker = self.LeaseCheckerClass(self.backend, statefp, historyfp, expiration_policy)
4351+        self.lease_checker.setServiceParent(self)
4352+
4353     def count(self, name, delta=1):
4354         if self.stats_provider:
4355             self.stats_provider.count("storage_server." + name, delta)
4356hunk ./src/allmydata/storage/server.py 92
4357         """Return a dict, indexed by category, that contains a dict of
4358         latency numbers for each category. If there are sufficient samples
4359         for unambiguous interpretation, each dict will contain the
4360-        following keys: mean, 01_0_percentile, 10_0_percentile,
4361+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
4362         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
4363         99_0_percentile, 99_9_percentile.  If there are insufficient
4364         samples for a given percentile to be interpreted unambiguously
4365hunk ./src/allmydata/storage/server.py 114
4366             else:
4367                 stats["mean"] = None
4368 
4369-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
4370-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
4371-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
4372+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
4373+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
4374+                             (0.01, "01_0_percentile", 100), (0.99, "99_0_percentile", 100), \
4375                              (0.999, "99_9_percentile", 1000)]
4376 
4377             for percentile, percentilestring, minnumtoobserve in orderstatlist:
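The loop body falls outside this hunk; presumably it keeps the trunk behaviour, along these lines (a sketch assuming 'samples' is the sorted latency list for the category):

    for percentile, percentilestring, minnumtoobserve in orderstatlist:
        if len(samples) < minnumtoobserve:
            # Too few samples for this percentile to be meaningful.
            stats[percentilestring] = None
        else:
            stats[percentilestring] = samples[int(percentile * len(samples))]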
4378hunk ./src/allmydata/storage/server.py 133
4379             kwargs["facility"] = "tahoe.storage"
4380         return log.msg(*args, **kwargs)
4381 
4382-    def _clean_incomplete(self):
4383-        fileutil.rm_dir(self.incomingdir)
4384+    def get_serverid(self):
4385+        return self._serverid
4386 
4387     def get_stats(self):
4388         # remember: RIStatsProvider requires that our return dict
4389hunk ./src/allmydata/storage/server.py 138
4390-        # contains numeric values.
4391+        # contains numeric or None values.
4392         stats = { 'storage_server.allocated': self.allocated_size(), }
4393hunk ./src/allmydata/storage/server.py 140
4394-        stats['storage_server.reserved_space'] = self.reserved_space
4395         for category,ld in self.get_latencies().items():
4396             for name,v in ld.items():
4397                 stats['storage_server.latencies.%s.%s' % (category, name)] = v
4398hunk ./src/allmydata/storage/server.py 144
4399 
4400-        try:
4401-            disk = fileutil.get_disk_stats(self.sharedir, self.reserved_space)
4402-            writeable = disk['avail'] > 0
4403-
4404-            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
4405-            stats['storage_server.disk_total'] = disk['total']
4406-            stats['storage_server.disk_used'] = disk['used']
4407-            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
4408-            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
4409-            stats['storage_server.disk_avail'] = disk['avail']
4410-        except AttributeError:
4411-            writeable = True
4412-        except EnvironmentError:
4413-            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
4414-            writeable = False
4415-
4416-        if self.readonly_storage:
4417-            stats['storage_server.disk_avail'] = 0
4418-            writeable = False
4419+        self.backend.fill_in_space_stats(stats)
4420 
4421hunk ./src/allmydata/storage/server.py 146
4422-        stats['storage_server.accepting_immutable_shares'] = int(writeable)
4423         s = self.bucket_counter.get_state()
4424         bucket_count = s.get("last-complete-bucket-count")
4425         if bucket_count:
4426hunk ./src/allmydata/storage/server.py 153
4427         return stats
4428 
4429     def get_available_space(self):
4430-        """Returns available space for share storage in bytes, or None if no
4431-        API to get this information is available."""
4432-
4433-        if self.readonly_storage:
4434-            return 0
4435-        return fileutil.get_available_space(self.sharedir, self.reserved_space)
4436+        return self.backend.get_available_space()
4437 
4438     def allocated_size(self):
4439         space = 0
4440hunk ./src/allmydata/storage/server.py 162
4441         return space
4442 
4443     def remote_get_version(self):
4444-        remaining_space = self.get_available_space()
4445+        remaining_space = self.backend.get_available_space()
4446         if remaining_space is None:
4447             # We're on a platform that has no API to get disk stats.
4448             remaining_space = 2**64
4449hunk ./src/allmydata/storage/server.py 178
4450                     }
4451         return version
4452 
4453-    def remote_allocate_buckets(self, storage_index,
4454+    def remote_allocate_buckets(self, storageindex,
4455                                 renew_secret, cancel_secret,
4456                                 sharenums, allocated_size,
4457                                 canary, owner_num=0):
4458hunk ./src/allmydata/storage/server.py 182
4459+        # cancel_secret is no longer used.
4460         # owner_num is not for clients to set, but rather it should be
4461hunk ./src/allmydata/storage/server.py 184
4462-        # curried into the PersonalStorageServer instance that is dedicated
4463-        # to a particular owner.
4464+        # curried into a StorageServer instance dedicated to a particular
4465+        # owner.
4466         start = time.time()
4467         self.count("allocate")
4468hunk ./src/allmydata/storage/server.py 188
4469-        alreadygot = set()
4470         bucketwriters = {} # k: shnum, v: BucketWriter
4471hunk ./src/allmydata/storage/server.py 189
4472-        si_dir = storage_index_to_dir(storage_index)
4473-        si_s = si_b2a(storage_index)
4474 
4475hunk ./src/allmydata/storage/server.py 190
4476+        si_s = si_b2a(storageindex)
4477         log.msg("storage: allocate_buckets %s" % si_s)
4478 
4479hunk ./src/allmydata/storage/server.py 193
4480-        # in this implementation, the lease information (including secrets)
4481-        # goes into the share files themselves. It could also be put into a
4482-        # separate database. Note that the lease should not be added until
4483-        # the BucketWriter has been closed.
4484+        # Note that the lease should not be added until the BucketWriter
4485+        # has been closed.
4486         expire_time = time.time() + 31*24*60*60
4487hunk ./src/allmydata/storage/server.py 196
4488-        lease_info = LeaseInfo(owner_num,
4489-                               renew_secret, cancel_secret,
4490-                               expire_time, self.my_nodeid)
4491+        lease_info = LeaseInfo(owner_num, renew_secret,
4492+                               expiration_time=expire_time, nodeid=self._serverid)
4493 
4494         max_space_per_bucket = allocated_size
4495 
4496hunk ./src/allmydata/storage/server.py 201
4497-        remaining_space = self.get_available_space()
4498+        remaining_space = self.backend.get_available_space()
4499         limited = remaining_space is not None
4500         if limited:
4501hunk ./src/allmydata/storage/server.py 204
4502-            # this is a bit conservative, since some of this allocated_size()
4503-            # has already been written to disk, where it will show up in
4504+            # This is a bit conservative, since some of this allocated_size()
4505+            # has already been written to the backend, where it will show up in
4506             # get_available_space.
4507             remaining_space -= self.allocated_size()
4508hunk ./src/allmydata/storage/server.py 208
4509-        # self.readonly_storage causes remaining_space <= 0
4510+            # If the backend is read-only, remaining_space will be <= 0.
4511+
4512+        shareset = self.backend.get_shareset(storageindex)
4513 
4514hunk ./src/allmydata/storage/server.py 212
4515-        # fill alreadygot with all shares that we have, not just the ones
4516+        # Fill alreadygot with all shares that we have, not just the ones
4517         # they asked about: this will save them a lot of work. Add or update
4518         # leases for all of them: if they want us to hold shares for this
4519hunk ./src/allmydata/storage/server.py 215
4520-        # file, they'll want us to hold leases for this file.
4521-        for (shnum, fn) in self._get_bucket_shares(storage_index):
4522-            alreadygot.add(shnum)
4523-            sf = ShareFile(fn)
4524-            sf.add_or_renew_lease(lease_info)
4525+        # file, they'll want us to hold leases for all the shares of it.
4526+        #
4527+        # XXX should we be making the assumption here that lease info is
4528+        # duplicated in all shares?
4529+        alreadygot = set()
4530+        for share in shareset.get_shares():
4531+            share.add_or_renew_lease(lease_info)
4532+            alreadygot.add(share.get_shnum())
4533 
4534hunk ./src/allmydata/storage/server.py 224
4535-        for shnum in sharenums:
4536-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
4537-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
4538-            if os.path.exists(finalhome):
4539-                # great! we already have it. easy.
4540-                pass
4541-            elif os.path.exists(incominghome):
4542+        for shnum in sharenums - alreadygot:
4543+            if shareset.has_incoming(shnum):
4544                 # Note that we don't create BucketWriters for shnums that
4545                 # have a partial share (in incoming/), so if a second upload
4546                 # occurs while the first is still in progress, the second
4547hunk ./src/allmydata/storage/server.py 232
4548                 # uploader will use different storage servers.
4549                 pass
4550             elif (not limited) or (remaining_space >= max_space_per_bucket):
4551-                # ok! we need to create the new share file.
4552-                bw = BucketWriter(self, incominghome, finalhome,
4553-                                  max_space_per_bucket, lease_info, canary)
4554-                if self.no_storage:
4555-                    bw.throw_out_all_data = True
4556+                bw = shareset.make_bucket_writer(self, shnum, max_space_per_bucket,
4557+                                                 lease_info, canary)
4558                 bucketwriters[shnum] = bw
4559                 self._active_writers[bw] = 1
4560                 if limited:
4561hunk ./src/allmydata/storage/server.py 239
4562                     remaining_space -= max_space_per_bucket
4563             else:
4564-                # bummer! not enough space to accept this bucket
4565+                # Bummer! Not enough space to accept this share.
4566                 pass
4567 
4568hunk ./src/allmydata/storage/server.py 242
4569-        if bucketwriters:
4570-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
4571-
4572         self.add_latency("allocate", time.time() - start)
4573         return alreadygot, bucketwriters
4574 
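A caller's-eye sketch of this allocation round trip (ss, the secrets, canary and share_data are hypothetical; remote_write/remote_close are the server-side faces of RIBucketWriter):

    alreadygot, writers = ss.remote_allocate_buckets(
        storageindex, renew_secret, cancel_secret,
        sharenums=set([0, 1, 2]), allocated_size=2**16,
        canary=canary)
    # 'alreadygot' holds shnums the server already stores (leases renewed);
    # 'writers' maps each newly allocated shnum to a BucketWriter that will
    # accept at most allocated_size bytes.
    for shnum, bw in sorted(writers.items()):
        bw.remote_write(0, share_data[shnum])
        bw.remote_close()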
4575hunk ./src/allmydata/storage/server.py 245
4576-    def _iter_share_files(self, storage_index):
4577-        for shnum, filename in self._get_bucket_shares(storage_index):
4578-            f = open(filename, 'rb')
4579-            header = f.read(32)
4580-            f.close()
4581-            if header[:32] == MutableShareFile.MAGIC:
4582-                sf = MutableShareFile(filename, self)
4583-                # note: if the share has been migrated, the renew_lease()
4584-                # call will throw an exception, with information to help the
4585-                # client update the lease.
4586-            elif header[:4] == struct.pack(">L", 1):
4587-                sf = ShareFile(filename)
4588-            else:
4589-                continue # non-sharefile
4590-            yield sf
4591-
4592-    def remote_add_lease(self, storage_index, renew_secret, cancel_secret,
4593+    def remote_add_lease(self, storageindex, renew_secret, cancel_secret,
4594                          owner_num=1):
4595hunk ./src/allmydata/storage/server.py 247
4596+        # cancel_secret is no longer used.
4597         start = time.time()
4598         self.count("add-lease")
4599         new_expire_time = time.time() + 31*24*60*60
4600hunk ./src/allmydata/storage/server.py 251
4601-        lease_info = LeaseInfo(owner_num,
4602-                               renew_secret, cancel_secret,
4603-                               new_expire_time, self.my_nodeid)
4604-        for sf in self._iter_share_files(storage_index):
4605-            sf.add_or_renew_lease(lease_info)
4606-        self.add_latency("add-lease", time.time() - start)
4607-        return None
4608+        lease_info = LeaseInfo(owner_num, renew_secret,
4609+                               expiration_time=new_expire_time, nodeid=self._serverid)
4610 
4611hunk ./src/allmydata/storage/server.py 254
4612-    def remote_renew_lease(self, storage_index, renew_secret):
4613+        try:
4614+            self.backend.add_or_renew_lease(lease_info)
4615+        finally:
4616+            self.add_latency("add-lease", time.time() - start)
4617+
4618+    def remote_renew_lease(self, storageindex, renew_secret):
4619         start = time.time()
4620         self.count("renew")
4621hunk ./src/allmydata/storage/server.py 262
4622-        new_expire_time = time.time() + 31*24*60*60
4623-        found_buckets = False
4624-        for sf in self._iter_share_files(storage_index):
4625-            found_buckets = True
4626-            sf.renew_lease(renew_secret, new_expire_time)
4627-        self.add_latency("renew", time.time() - start)
4628-        if not found_buckets:
4629-            raise IndexError("no such lease to renew")
4630+
4631+        try:
4632+            shareset = self.backend.get_shareset(storageindex)
4633+            new_expiration_time = start + 31*24*60*60   # one month from now
4634+            shareset.renew_lease(renew_secret, new_expiration_time)
4635+        finally:
4636+            self.add_latency("renew", time.time() - start)
4637 
4638     def bucket_writer_closed(self, bw, consumed_size):
4639         if self.stats_provider:
4640hunk ./src/allmydata/storage/server.py 275
4641             self.stats_provider.count('storage_server.bytes_added', consumed_size)
4642         del self._active_writers[bw]
4643 
4644-    def _get_bucket_shares(self, storage_index):
4645-        """Return a list of (shnum, pathname) tuples for files that hold
4646-        shares for this storage_index. In each tuple, 'shnum' will always be
4647-        the integer form of the last component of 'pathname'."""
4648-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4649-        try:
4650-            for f in os.listdir(storagedir):
4651-                if NUM_RE.match(f):
4652-                    filename = os.path.join(storagedir, f)
4653-                    yield (int(f), filename)
4654-        except OSError:
4655-            # Commonly caused by there being no buckets at all.
4656-            pass
4657-
4658-    def remote_get_buckets(self, storage_index):
4659+    def remote_get_buckets(self, storageindex):
4660         start = time.time()
4661         self.count("get")
4662hunk ./src/allmydata/storage/server.py 278
4663-        si_s = si_b2a(storage_index)
4664+        si_s = si_b2a(storageindex)
4665         log.msg("storage: get_buckets %s" % si_s)
4666         bucketreaders = {} # k: sharenum, v: BucketReader
4667hunk ./src/allmydata/storage/server.py 281
4668-        for shnum, filename in self._get_bucket_shares(storage_index):
4669-            bucketreaders[shnum] = BucketReader(self, filename,
4670-                                                storage_index, shnum)
4671-        self.add_latency("get", time.time() - start)
4672-        return bucketreaders
4673 
4674hunk ./src/allmydata/storage/server.py 282
4675-    def get_leases(self, storage_index):
4676-        """Provide an iterator that yields all of the leases attached to this
4677-        bucket. Each lease is returned as a LeaseInfo instance.
4678+        try:
4679+            shareset = self.backend.get_shareset(storageindex)
4680+            for share in shareset.get_shares():
4681+                bucketreaders[share.get_shnum()] = shareset.make_bucket_reader(self, share)
4682+            return bucketreaders
4683+        finally:
4684+            self.add_latency("get", time.time() - start)
4685 
4686hunk ./src/allmydata/storage/server.py 290
4687-        This method is not for client use.
4688+    def get_leases(self, storageindex):
4689         """
4690hunk ./src/allmydata/storage/server.py 292
4691+        Provide an iterator that yields all of the leases attached to this
4692+        bucket. Each lease is returned as a LeaseInfo instance.
4693 
4694hunk ./src/allmydata/storage/server.py 295
4695-        # since all shares get the same lease data, we just grab the leases
4696-        # from the first share
4697-        try:
4698-            shnum, filename = self._get_bucket_shares(storage_index).next()
4699-            sf = ShareFile(filename)
4700-            return sf.get_leases()
4701-        except StopIteration:
4702-            return iter([])
4703+        This method is not for client use. XXX do we need it at all?
4704+        """
4705+        return self.backend.get_shareset(storageindex).get_leases()
4706 
4707hunk ./src/allmydata/storage/server.py 299
4708-    def remote_slot_testv_and_readv_and_writev(self, storage_index,
4709+    def remote_slot_testv_and_readv_and_writev(self, storageindex,
4710                                                secrets,
4711                                                test_and_write_vectors,
4712                                                read_vector):
4713hunk ./src/allmydata/storage/server.py 305
4714         start = time.time()
4715         self.count("writev")
4716-        si_s = si_b2a(storage_index)
4717+        si_s = si_b2a(storageindex)
4718         log.msg("storage: slot_writev %s" % si_s)
4719hunk ./src/allmydata/storage/server.py 307
4720-        si_dir = storage_index_to_dir(storage_index)
4721-        (write_enabler, renew_secret, cancel_secret) = secrets
4722-        # shares exist if there is a file for them
4723-        bucketdir = os.path.join(self.sharedir, si_dir)
4724-        shares = {}
4725-        if os.path.isdir(bucketdir):
4726-            for sharenum_s in os.listdir(bucketdir):
4727-                try:
4728-                    sharenum = int(sharenum_s)
4729-                except ValueError:
4730-                    continue
4731-                filename = os.path.join(bucketdir, sharenum_s)
4732-                msf = MutableShareFile(filename, self)
4733-                msf.check_write_enabler(write_enabler, si_s)
4734-                shares[sharenum] = msf
4735-        # write_enabler is good for all existing shares.
4736-
4737-        # Now evaluate test vectors.
4738-        testv_is_good = True
4739-        for sharenum in test_and_write_vectors:
4740-            (testv, datav, new_length) = test_and_write_vectors[sharenum]
4741-            if sharenum in shares:
4742-                if not shares[sharenum].check_testv(testv):
4743-                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
4744-                    testv_is_good = False
4745-                    break
4746-            else:
4747-                # compare the vectors against an empty share, in which all
4748-                # reads return empty strings.
4749-                if not EmptyShare().check_testv(testv):
4750-                    self.log("testv failed (empty): [%d] %r" % (sharenum,
4751-                                                                testv))
4752-                    testv_is_good = False
4753-                    break
4754-
4755-        # now gather the read vectors, before we do any writes
4756-        read_data = {}
4757-        for sharenum, share in shares.items():
4758-            read_data[sharenum] = share.readv(read_vector)
4759-
4760-        ownerid = 1 # TODO
4761-        expire_time = time.time() + 31*24*60*60   # one month
4762-        lease_info = LeaseInfo(ownerid,
4763-                               renew_secret, cancel_secret,
4764-                               expire_time, self.my_nodeid)
4765-
4766-        if testv_is_good:
4767-            # now apply the write vectors
4768-            for sharenum in test_and_write_vectors:
4769-                (testv, datav, new_length) = test_and_write_vectors[sharenum]
4770-                if new_length == 0:
4771-                    if sharenum in shares:
4772-                        shares[sharenum].unlink()
4773-                else:
4774-                    if sharenum not in shares:
4775-                        # allocate a new share
4776-                        allocated_size = 2000 # arbitrary, really
4777-                        share = self._allocate_slot_share(bucketdir, secrets,
4778-                                                          sharenum,
4779-                                                          allocated_size,
4780-                                                          owner_num=0)
4781-                        shares[sharenum] = share
4782-                    shares[sharenum].writev(datav, new_length)
4783-                    # and update the lease
4784-                    shares[sharenum].add_or_renew_lease(lease_info)
4785-
4786-            if new_length == 0:
4787-                # delete empty bucket directories
4788-                if not os.listdir(bucketdir):
4789-                    os.rmdir(bucketdir)
4790 
4791hunk ./src/allmydata/storage/server.py 308
4792+        try:
4793+            shareset = self.backend.get_shareset(storageindex)
4794+            expiration_time = start + 31*24*60*60   # one month from now
4795+            return shareset.testv_and_readv_and_writev(self, secrets, test_and_write_vectors,
4796+                                                       read_vector, expiration_time)
4797+        finally:
4798+            self.add_latency("writev", time.time() - start)
4799 
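The delegated method is expected to keep the contract of the removed implementation: reads are taken before any writes, and the writes are applied only if every test vector matches. The expected return shape (illustrative values):

    # (testv_is_good, read_data), where read_data maps each existing sharenum
    # to a list of strings, one per (offset, length) pair in read_vector, e.g.
    #   (True, {0: ["\x00\x00..."], 1: ["\x00\x00..."]})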
4800hunk ./src/allmydata/storage/server.py 316
4801-        # all done
4802-        self.add_latency("writev", time.time() - start)
4803-        return (testv_is_good, read_data)
4804-
4805-    def _allocate_slot_share(self, bucketdir, secrets, sharenum,
4806-                             allocated_size, owner_num=0):
4807-        (write_enabler, renew_secret, cancel_secret) = secrets
4808-        my_nodeid = self.my_nodeid
4809-        fileutil.make_dirs(bucketdir)
4810-        filename = os.path.join(bucketdir, "%d" % sharenum)
4811-        share = create_mutable_sharefile(filename, my_nodeid, write_enabler,
4812-                                         self)
4813-        return share
4814-
4815-    def remote_slot_readv(self, storage_index, shares, readv):
4816+    def remote_slot_readv(self, storageindex, shares, readv):
4817         start = time.time()
4818         self.count("readv")
4819hunk ./src/allmydata/storage/server.py 319
4820-        si_s = si_b2a(storage_index)
4821-        lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
4822-                     facility="tahoe.storage", level=log.OPERATIONAL)
4823-        si_dir = storage_index_to_dir(storage_index)
4824-        # shares exist if there is a file for them
4825-        bucketdir = os.path.join(self.sharedir, si_dir)
4826-        if not os.path.isdir(bucketdir):
4827+        si_s = si_b2a(storageindex)
4828+        log.msg("storage: slot_readv %s %s" % (si_s, shares),
4829+                facility="tahoe.storage", level=log.OPERATIONAL)
4830+
4831+        try:
4832+            shareset = self.backend.get_shareset(storageindex)
4833+            return shareset.readv(self, shares, readv)
4834+        finally:
4835             self.add_latency("readv", time.time() - start)
4836hunk ./src/allmydata/storage/server.py 328
4837-            return {}
4838-        datavs = {}
4839-        for sharenum_s in os.listdir(bucketdir):
4840-            try:
4841-                sharenum = int(sharenum_s)
4842-            except ValueError:
4843-                continue
4844-            if sharenum in shares or not shares:
4845-                filename = os.path.join(bucketdir, sharenum_s)
4846-                msf = MutableShareFile(filename, self)
4847-                datavs[sharenum] = msf.readv(readv)
4848-        log.msg("returning shares %s" % (datavs.keys(),),
4849-                facility="tahoe.storage", level=log.NOISY, parent=lp)
4850-        self.add_latency("readv", time.time() - start)
4851-        return datavs
4852 
4853hunk ./src/allmydata/storage/server.py 329
4854-    def remote_advise_corrupt_share(self, share_type, storage_index, shnum,
4855-                                    reason):
4856-        fileutil.make_dirs(self.corruption_advisory_dir)
4857-        now = time_format.iso_utc(sep="T")
4858-        si_s = si_b2a(storage_index)
4859-        # windows can't handle colons in the filename
4860-        fn = os.path.join(self.corruption_advisory_dir,
4861-                          "%s--%s-%d" % (now, si_s, shnum)).replace(":","")
4862-        f = open(fn, "w")
4863-        f.write("report: Share Corruption\n")
4864-        f.write("type: %s\n" % share_type)
4865-        f.write("storage_index: %s\n" % si_s)
4866-        f.write("share_number: %d\n" % shnum)
4867-        f.write("\n")
4868-        f.write(reason)
4869-        f.write("\n")
4870-        f.close()
4871-        log.msg(format=("client claims corruption in (%(share_type)s) " +
4872-                        "%(si)s-%(shnum)d: %(reason)s"),
4873-                share_type=share_type, si=si_s, shnum=shnum, reason=reason,
4874-                level=log.SCARY, umid="SGx2fA")
4875-        return None
4876+    def remote_advise_corrupt_share(self, share_type, storageindex, shnum, reason):
4877+        self.backend.advise_corrupt_share(share_type, storageindex, shnum, reason)
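The advisory-writing behaviour removed above presumably moves into the backend; a disk-backend sketch reconstructed from the removed lines (the method's new home and self.corruption_advisory_dir are assumptions):

    import os
    from allmydata.util import fileutil, time_format
    from allmydata.storage.common import si_b2a

    def advise_corrupt_share(self, share_type, storageindex, shnum, reason):
        fileutil.make_dirs(self.corruption_advisory_dir)
        now = time_format.iso_utc(sep="T")
        si_s = si_b2a(storageindex)
        # Windows can't handle colons in the filename.
        fn = os.path.join(self.corruption_advisory_dir,
                          "%s--%s-%d" % (now, si_s, shnum)).replace(":", "")
        f = open(fn, "w")
        try:
            f.write("report: Share Corruption\n")
            f.write("type: %s\n" % share_type)
            f.write("storage_index: %s\n" % si_s)
            f.write("share_number: %d\n" % shnum)
            f.write("\n")
            f.write(reason)
            f.write("\n")
        finally:
            f.close()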
4878hunk ./src/allmydata/test/common.py 20
4879 from allmydata.mutable.common import CorruptShareError
4880 from allmydata.mutable.layout import unpack_header
4881 from allmydata.mutable.publish import MutableData
4882-from allmydata.storage.mutable import MutableShareFile
4883+from allmydata.storage.backends.disk.mutable import MutableDiskShare
4884 from allmydata.util import hashutil, log, fileutil, pollmixin
4885 from allmydata.util.assertutil import precondition
4886 from allmydata.util.consumer import download_to_data
4887hunk ./src/allmydata/test/common.py 1297
4888 
4889 def _corrupt_mutable_share_data(data, debug=False):
4890     prefix = data[:32]
4891-    assert prefix == MutableShareFile.MAGIC, "This function is designed to corrupt mutable shares of v1, and the magic number doesn't look right: %r vs %r" % (prefix, MutableShareFile.MAGIC)
4892-    data_offset = MutableShareFile.DATA_OFFSET
4893+    assert prefix == MutableDiskShare.MAGIC, "This function is designed to corrupt mutable shares of v1, and the magic number doesn't look right: %r vs %r" % (prefix, MutableDiskShare.MAGIC)
4894+    data_offset = MutableDiskShare.DATA_OFFSET
4895     sharetype = data[data_offset:data_offset+1]
4896     assert sharetype == "\x00", "non-SDMF mutable shares not supported"
4897     (version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize,
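MutableShareFile becomes MutableDiskShare but keeps the v1 container layout this helper depends on. A small sketch using just the two class constants referenced here:

    from allmydata.storage.backends.disk.mutable import MutableDiskShare

    # Sketch: recognise a v1 mutable container and split off its share data,
    # relying only on the constants used above (32-byte MAGIC, DATA_OFFSET).
    def split_mutable_container(data):
        assert data[:32] == MutableDiskShare.MAGIC, "not a v1 mutable container"
        return (data[:MutableDiskShare.DATA_OFFSET],
                data[MutableDiskShare.DATA_OFFSET:])
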
4898hunk ./src/allmydata/test/no_network.py 21
4899 from twisted.application import service
4900 from twisted.internet import defer, reactor
4901 from twisted.python.failure import Failure
4902+from twisted.python.filepath import FilePath
4903 from foolscap.api import Referenceable, fireEventually, RemoteException
4904 from base64 import b32encode
4905hunk ./src/allmydata/test/no_network.py 24
4906+
4907 from allmydata import uri as tahoe_uri
4908 from allmydata.client import Client
4909hunk ./src/allmydata/test/no_network.py 27
4910-from allmydata.storage.server import StorageServer, storage_index_to_dir
4911+from allmydata.storage.server import StorageServer
4912+from allmydata.storage.backends.disk.disk_backend import DiskBackend
4913 from allmydata.util import fileutil, idlib, hashutil
4914 from allmydata.util.hashutil import sha1
4915 from allmydata.test.common_web import HTTPClientGETFactory
4916hunk ./src/allmydata/test/no_network.py 155
4917             seed = server.get_permutation_seed()
4918             return sha1(peer_selection_index + seed).digest()
4919         return sorted(self.get_connected_servers(), key=_permuted)
4920+
4921     def get_connected_servers(self):
4922         return self.client._servers
4923hunk ./src/allmydata/test/no_network.py 158
4924+
4925     def get_nickname_for_serverid(self, serverid):
4926         return None
4927 
4928hunk ./src/allmydata/test/no_network.py 162
4929+    def get_known_servers(self):
4930+        return self.get_connected_servers()
4931+
4932+    def get_all_serverids(self):
4933+        return self.client.get_all_serverids()
4934+
4935+
4936 class NoNetworkClient(Client):
4937     def create_tub(self):
4938         pass
4939hunk ./src/allmydata/test/no_network.py 262
4940 
4941     def make_server(self, i, readonly=False):
4942         serverid = hashutil.tagged_hash("serverid", str(i))[:20]
4943-        serverdir = os.path.join(self.basedir, "servers",
4944-                                 idlib.shortnodeid_b2a(serverid), "storage")
4945-        fileutil.make_dirs(serverdir)
4946-        ss = StorageServer(serverdir, serverid, stats_provider=SimpleStats(),
4947-                           readonly_storage=readonly)
4948+        storagedir = FilePath(self.basedir).child("servers").child(idlib.shortnodeid_b2a(serverid)).child("storage")
4949+
4950+        # The backend will make the storage directory and any necessary parents.
4951+        backend = DiskBackend(storagedir, readonly=readonly)
4952+        ss = StorageServer(serverid, backend, storagedir, stats_provider=SimpleStats())
4953         ss._no_network_server_number = i
4954         return ss
4955 
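For readers unused to Twisted's FilePath, which this patch adopts throughout: child() chaining replaces os.path.join, and content access replaces explicit open/read/write. Roughly:

    import os
    from twisted.python.filepath import FilePath

    base = "/tmp/grid"
    fp = FilePath(base).child("servers").child("xyz").child("storage")
    assert fp.path == os.path.join(base, "servers", "xyz", "storage")
    fp.makedirs()                       # cf. fileutil.make_dirs(fp.path)
    fp.child("f").setContent("data")    # cf. open(..., "wb").write("data")
    assert fp.child("f").getContent() == "data"
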
4956hunk ./src/allmydata/test/no_network.py 276
4957         middleman = service.MultiService()
4958         middleman.setServiceParent(self)
4959         ss.setServiceParent(middleman)
4960-        serverid = ss.my_nodeid
4961+        serverid = ss.get_serverid()
4962         self.servers_by_number[i] = ss
4963         wrapper = wrap_storage_server(ss)
4964         self.wrappers_by_id[serverid] = wrapper
4965hunk ./src/allmydata/test/no_network.py 295
4966         # it's enough to remove the server from c._servers (we don't actually
4967         # have to detach and stopService it)
4968         for i,ss in self.servers_by_number.items():
4969-            if ss.my_nodeid == serverid:
4970+            if ss.get_serverid() == serverid:
4971                 del self.servers_by_number[i]
4972                 break
4973         del self.wrappers_by_id[serverid]
4974hunk ./src/allmydata/test/no_network.py 345
4975     def get_clientdir(self, i=0):
4976         return self.g.clients[i].basedir
4977 
4978+    def get_server(self, i):
4979+        return self.g.servers_by_number[i]
4980+
4981     def get_serverdir(self, i):
4982hunk ./src/allmydata/test/no_network.py 349
4983-        return self.g.servers_by_number[i].storedir
4984+        return self.g.servers_by_number[i].backend.storedir
4985+
4986+    def remove_server(self, i):
4987+        self.g.remove_server(self.g.servers_by_number[i].get_serverid())
4988 
4989     def iterate_servers(self):
4990         for i in sorted(self.g.servers_by_number.keys()):
4991hunk ./src/allmydata/test/no_network.py 357
4992             ss = self.g.servers_by_number[i]
4993-            yield (i, ss, ss.storedir)
4994+            yield (i, ss, ss.backend.storedir)
4995 
4996     def find_uri_shares(self, uri):
4997         si = tahoe_uri.from_string(uri).get_storage_index()
4998hunk ./src/allmydata/test/no_network.py 361
4999-        prefixdir = storage_index_to_dir(si)
5000         shares = []
5001         for i,ss in self.g.servers_by_number.items():
5002hunk ./src/allmydata/test/no_network.py 363
5003-            serverid = ss.my_nodeid
5004-            basedir = os.path.join(ss.sharedir, prefixdir)
5005-            if not os.path.exists(basedir):
5006-                continue
5007-            for f in os.listdir(basedir):
5008-                try:
5009-                    shnum = int(f)
5010-                    shares.append((shnum, serverid, os.path.join(basedir, f)))
5011-                except ValueError:
5012-                    pass
5013+            for share in ss.backend.get_shareset(si).get_shares():
5014+                shares.append((share.get_shnum(), ss.get_serverid(), share._home))
5015         return sorted(shares)
5016 
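The listdir-and-int() filtering that used to happen here presumably lives in the disk backend now. A sketch of what its get_shares() might do, mirroring the removed loop (internals such as _load_share are assumptions):

    # Hedged sketch of a disk shareset's get_shares(), mirroring the removed
    # listdir loop: skip entries whose names are not share numbers.
    def get_shares(self):
        for child in self._sharehomedir.children():
            shnumstr = child.basename()
            if not shnumstr.isdigit():
                continue  # e.g. the "ignore_me.txt" file planted by the tests
            yield self._load_share(int(shnumstr))  # _load_share is hypothetical
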
5017hunk ./src/allmydata/test/no_network.py 367
5018+    def count_leases(self, uri):
5019+        """Return (filename, leasecount) pairs in arbitrary order."""
5020+        si = tahoe_uri.from_string(uri).get_storage_index()
5021+        lease_counts = []
5022+        for i,ss in self.g.servers_by_number.items():
5023+            for share in ss.backend.get_shareset(si).get_shares():
5024+                num_leases = len(list(share.get_leases()))
5025+                lease_counts.append( (share._home.path, num_leases) )
5026+        return lease_counts
5027+
5028     def copy_shares(self, uri):
5029         shares = {}
5030hunk ./src/allmydata/test/no_network.py 379
5031-        for (shnum, serverid, sharefile) in self.find_uri_shares(uri):
5032-            shares[sharefile] = open(sharefile, "rb").read()
5033+        for (shnum, serverid, sharefp) in self.find_uri_shares(uri):
5034+            shares[sharefp.path] = sharefp.getContent()
5035         return shares
5036 
5037hunk ./src/allmydata/test/no_network.py 383
5038+    def copy_share(self, from_share, uri, to_server):
5039+        si = tahoe_uri.from_string(uri).get_storage_index()
5040+        (i_shnum, i_serverid, i_sharefp) = from_share
5041+        shares_dir = to_server.backend.get_shareset(si)._sharehomedir
5042+        i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
5043+
5044     def restore_all_shares(self, shares):
5045hunk ./src/allmydata/test/no_network.py 390
5046-        for sharefile, data in shares.items():
5047-            open(sharefile, "wb").write(data)
5048+        for sharepath, data in shares.items():
5049+            FilePath(sharepath).setContent(data)
5050 
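copy_shares and restore_all_shares now pair up as a byte-level snapshot/rollback keyed by share path. Typical use in a test (_some_corruptor stands for any corruptor function; see the sketch after the corruption helpers below):

    # Snapshot every share of a file, corrupt them all, then roll back.
    snapshot = self.copy_shares(self.uri)             # dict: {path: bytes}
    self.corrupt_all_shares(self.uri, _some_corruptor)
    self.restore_all_shares(snapshot)                 # rewrite each FilePath
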
5051hunk ./src/allmydata/test/no_network.py 393
5052-    def delete_share(self, (shnum, serverid, sharefile)):
5053-        os.unlink(sharefile)
5054+    def delete_share(self, (shnum, serverid, sharefp)):
5055+        sharefp.remove()
5056 
5057     def delete_shares_numbered(self, uri, shnums):
5058hunk ./src/allmydata/test/no_network.py 397
5059-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5060+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5061             if i_shnum in shnums:
5062hunk ./src/allmydata/test/no_network.py 399
5063-                os.unlink(i_sharefile)
5064+                i_sharefp.remove()
5065 
5066hunk ./src/allmydata/test/no_network.py 401
5067-    def corrupt_share(self, (shnum, serverid, sharefile), corruptor_function):
5068-        sharedata = open(sharefile, "rb").read()
5069-        corruptdata = corruptor_function(sharedata)
5070-        open(sharefile, "wb").write(corruptdata)
5071+    def corrupt_share(self, (shnum, serverid, sharefp), corruptor_function, debug=False):
5072+        sharedata = sharefp.getContent()
5073+        corruptdata = corruptor_function(sharedata, debug=debug)
5074+        sharefp.setContent(corruptdata)
5075 
5076     def corrupt_shares_numbered(self, uri, shnums, corruptor, debug=False):
5077hunk ./src/allmydata/test/no_network.py 407
5078-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5079+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5080             if i_shnum in shnums:
5081hunk ./src/allmydata/test/no_network.py 409
5082-                sharedata = open(i_sharefile, "rb").read()
5083-                corruptdata = corruptor(sharedata, debug=debug)
5084-                open(i_sharefile, "wb").write(corruptdata)
5085+                self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug)
5086 
5087     def corrupt_all_shares(self, uri, corruptor, debug=False):
5088hunk ./src/allmydata/test/no_network.py 412
5089-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5090-            sharedata = open(i_sharefile, "rb").read()
5091-            corruptdata = corruptor(sharedata, debug=debug)
5092-            open(i_sharefile, "wb").write(corruptdata)
5093+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5094+            self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug)
5095 
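Corruptors passed to corrupt_share and friends must now accept a debug keyword. A hypothetical corruptor has this shape:

    # Hypothetical corruptor: flip the low bit of the last byte of the share.
    def _flip_last_bit(data, debug=False):
        if debug:
            print "corrupting a %d-byte share" % len(data)
        return data[:-1] + chr(ord(data[-1]) ^ 0x01)
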
5096     def GET(self, urlpath, followRedirect=False, return_response=False,
5097             method="GET", clientnum=0, **kwargs):
5098hunk ./src/allmydata/test/test_download.py 6
5099 # a previous run. This asserts that the current code is capable of decoding
5100 # shares from a previous version.
5101 
5102-import os
5103 from twisted.trial import unittest
5104 from twisted.internet import defer, reactor
5105 from allmydata import uri
5106hunk ./src/allmydata/test/test_download.py 9
5107-from allmydata.storage.server import storage_index_to_dir
5108 from allmydata.util import base32, fileutil, spans, log, hashutil
5109 from allmydata.util.consumer import download_to_data, MemoryConsumer
5110 from allmydata.immutable import upload, layout
5111hunk ./src/allmydata/test/test_download.py 85
5112         u = upload.Data(plaintext, None)
5113         d = self.c0.upload(u)
5114         f = open("stored_shares.py", "w")
5115-        def _created_immutable(ur):
5116-            # write the generated shares and URI to a file, which can then be
5117-            # incorporated into this one next time.
5118-            f.write('immutable_uri = "%s"\n' % ur.uri)
5119-            f.write('immutable_shares = {\n')
5120-            si = uri.from_string(ur.uri).get_storage_index()
5121-            si_dir = storage_index_to_dir(si)
5122+
5123+        def _write_py(u):
5124+            si = uri.from_string(u).get_storage_index()
5125             for (i,ss,ssdir) in self.iterate_servers():
5126hunk ./src/allmydata/test/test_download.py 89
5127-                sharedir = os.path.join(ssdir, "shares", si_dir)
5128                 shares = {}
5129hunk ./src/allmydata/test/test_download.py 90
5130-                for fn in os.listdir(sharedir):
5131-                    shnum = int(fn)
5132-                    sharedata = open(os.path.join(sharedir, fn), "rb").read()
5133-                    shares[shnum] = sharedata
5134-                fileutil.rm_dir(sharedir)
5135+                shareset = ss.backend.get_shareset(si)
5136+                for share in shareset.get_shares():
5137+                    sharedata = share._home.getContent()
5138+                    shares[share.get_shnum()] = sharedata
5139+
5140+                fileutil.fp_remove(shareset._sharehomedir)
5141                 if shares:
5142                     f.write(' %d: { # client[%d]\n' % (i, i))
5143                     for shnum in sorted(shares.keys()):
5144hunk ./src/allmydata/test/test_download.py 103
5145                                 (shnum, base32.b2a(shares[shnum])))
5146                     f.write('    },\n')
5147             f.write('}\n')
5148-            f.write('\n')
5149 
5150hunk ./src/allmydata/test/test_download.py 104
5151+        def _created_immutable(ur):
5152+            # write the generated shares and URI to a file, which can then be
5153+            # incorporated into this one next time.
5154+            f.write('immutable_uri = "%s"\n' % ur.uri)
5155+            f.write('immutable_shares = {\n')
5156+            _write_py(ur.uri)
5157+            f.write('\n')
5158         d.addCallback(_created_immutable)
5159 
5160         d.addCallback(lambda ignored:
5161hunk ./src/allmydata/test/test_download.py 118
5162         def _created_mutable(n):
5163             f.write('mutable_uri = "%s"\n' % n.get_uri())
5164             f.write('mutable_shares = {\n')
5165-            si = uri.from_string(n.get_uri()).get_storage_index()
5166-            si_dir = storage_index_to_dir(si)
5167-            for (i,ss,ssdir) in self.iterate_servers():
5168-                sharedir = os.path.join(ssdir, "shares", si_dir)
5169-                shares = {}
5170-                for fn in os.listdir(sharedir):
5171-                    shnum = int(fn)
5172-                    sharedata = open(os.path.join(sharedir, fn), "rb").read()
5173-                    shares[shnum] = sharedata
5174-                fileutil.rm_dir(sharedir)
5175-                if shares:
5176-                    f.write(' %d: { # client[%d]\n' % (i, i))
5177-                    for shnum in sorted(shares.keys()):
5178-                        f.write('  %d: base32.a2b("%s"),\n' %
5179-                                (shnum, base32.b2a(shares[shnum])))
5180-                    f.write('    },\n')
5181-            f.write('}\n')
5182-
5183-            f.close()
5184+            _write_py(n.get_uri())
5185         d.addCallback(_created_mutable)
5186 
5187         def _done(ignored):
5188hunk ./src/allmydata/test/test_download.py 123
5189             f.close()
5190-        d.addCallback(_done)
5191+        d.addBoth(_done)
5192 
5193         return d
5194 
5195hunk ./src/allmydata/test/test_download.py 127
5196+    def _write_shares(self, u, shares):
5197+        si = uri.from_string(u).get_storage_index()
5198+        for i in shares:
5199+            shares_for_server = shares[i]
5200+            for shnum in shares_for_server:
5201+                share_dir = self.get_server(i).backend.get_shareset(si)._sharehomedir
5202+                fileutil.fp_make_dirs(share_dir)
5203+                share_dir.child(str(shnum)).setContent(shares_for_server[shnum])
5204+
5205     def load_shares(self, ignored=None):
5206         # this uses the data generated by create_shares() to populate the
5207         # storage servers with pre-generated shares
5208hunk ./src/allmydata/test/test_download.py 139
5209-        si = uri.from_string(immutable_uri).get_storage_index()
5210-        si_dir = storage_index_to_dir(si)
5211-        for i in immutable_shares:
5212-            shares = immutable_shares[i]
5213-            for shnum in shares:
5214-                dn = os.path.join(self.get_serverdir(i), "shares", si_dir)
5215-                fileutil.make_dirs(dn)
5216-                fn = os.path.join(dn, str(shnum))
5217-                f = open(fn, "wb")
5218-                f.write(shares[shnum])
5219-                f.close()
5220-
5221-        si = uri.from_string(mutable_uri).get_storage_index()
5222-        si_dir = storage_index_to_dir(si)
5223-        for i in mutable_shares:
5224-            shares = mutable_shares[i]
5225-            for shnum in shares:
5226-                dn = os.path.join(self.get_serverdir(i), "shares", si_dir)
5227-                fileutil.make_dirs(dn)
5228-                fn = os.path.join(dn, str(shnum))
5229-                f = open(fn, "wb")
5230-                f.write(shares[shnum])
5231-                f.close()
5232+        self._write_shares(immutable_uri, immutable_shares)
5233+        self._write_shares(mutable_uri, mutable_shares)
5234 
5235     def download_immutable(self, ignored=None):
5236         n = self.c0.create_node_from_uri(immutable_uri)
5237hunk ./src/allmydata/test/test_download.py 183
5238 
5239         self.load_shares()
5240         si = uri.from_string(immutable_uri).get_storage_index()
5241-        si_dir = storage_index_to_dir(si)
5242 
5243         n = self.c0.create_node_from_uri(immutable_uri)
5244         d = download_to_data(n)
5245hunk ./src/allmydata/test/test_download.py 198
5246                 for clientnum in immutable_shares:
5247                     for shnum in immutable_shares[clientnum]:
5248                         if s._shnum == shnum:
5249-                            fn = os.path.join(self.get_serverdir(clientnum),
5250-                                              "shares", si_dir, str(shnum))
5251-                            os.unlink(fn)
5252+                            share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5253+                            share_dir.child(str(shnum)).remove()
5254         d.addCallback(_clobber_some_shares)
5255         d.addCallback(lambda ign: download_to_data(n))
5256         d.addCallback(_got_data)
5257hunk ./src/allmydata/test/test_download.py 212
5258                 for shnum in immutable_shares[clientnum]:
5259                     if shnum == save_me:
5260                         continue
5261-                    fn = os.path.join(self.get_serverdir(clientnum),
5262-                                      "shares", si_dir, str(shnum))
5263-                    if os.path.exists(fn):
5264-                        os.unlink(fn)
5265+                    share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5266+                    fileutil.fp_remove(share_dir.child(str(shnum)))
5267             # now the download should fail with NotEnoughSharesError
5268             return self.shouldFail(NotEnoughSharesError, "1shares", None,
5269                                    download_to_data, n)
5270hunk ./src/allmydata/test/test_download.py 223
5271             # delete the last remaining share
5272             for clientnum in immutable_shares:
5273                 for shnum in immutable_shares[clientnum]:
5274-                    fn = os.path.join(self.get_serverdir(clientnum),
5275-                                      "shares", si_dir, str(shnum))
5276-                    if os.path.exists(fn):
5277-                        os.unlink(fn)
5278+                    share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5279+                    share_dir.child(str(shnum)).remove()
5280             # now a new download should fail with NoSharesError. We want a
5281             # new ImmutableFileNode so it will forget about the old shares.
5282             # If we merely called create_node_from_uri() without first
5283hunk ./src/allmydata/test/test_download.py 801
5284         # will report two shares, and the ShareFinder will handle the
5285         # duplicate by attaching both to the same CommonShare instance.
5286         si = uri.from_string(immutable_uri).get_storage_index()
5287-        si_dir = storage_index_to_dir(si)
5288-        sh0_file = [sharefile
5289-                    for (shnum, serverid, sharefile)
5290-                    in self.find_uri_shares(immutable_uri)
5291-                    if shnum == 0][0]
5292-        sh0_data = open(sh0_file, "rb").read()
5293+        sh0_fp = [sharefp for (shnum, serverid, sharefp)
5294+                          in self.find_uri_shares(immutable_uri)
5295+                          if shnum == 0][0]
5296+        sh0_data = sh0_fp.getContent()
5297         for clientnum in immutable_shares:
5298             if 0 in immutable_shares[clientnum]:
5299                 continue
5300hunk ./src/allmydata/test/test_download.py 808
5301-            cdir = self.get_serverdir(clientnum)
5302-            target = os.path.join(cdir, "shares", si_dir, "0")
5303-            outf = open(target, "wb")
5304-            outf.write(sh0_data)
5305-            outf.close()
5306+            cdir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5307+            fileutil.fp_make_dirs(cdir)
5308+            cdir.child("0").setContent(sh0_data)
5309 
5310         d = self.download_immutable()
5311         return d
5312hunk ./src/allmydata/test/test_encode.py 134
5313         d.addCallback(_try)
5314         return d
5315 
5316-    def get_share_hashes(self, at_least_these=()):
5317+    def get_share_hashes(self):
5318         d = self._start()
5319         def _try(unused=None):
5320             if self.mode == "bad sharehash":
5321hunk ./src/allmydata/test/test_hung_server.py 3
5322 # -*- coding: utf-8 -*-
5323 
5324-import os, shutil
5325 from twisted.trial import unittest
5326 from twisted.internet import defer
5327hunk ./src/allmydata/test/test_hung_server.py 5
5328-from allmydata import uri
5329+
5330 from allmydata.util.consumer import download_to_data
5331 from allmydata.immutable import upload
5332 from allmydata.mutable.common import UnrecoverableFileError
5333hunk ./src/allmydata/test/test_hung_server.py 10
5334 from allmydata.mutable.publish import MutableData
5335-from allmydata.storage.common import storage_index_to_dir
5336 from allmydata.test.no_network import GridTestMixin
5337 from allmydata.test.common import ShouldFailMixin
5338 from allmydata.util.pollmixin import PollMixin
5339hunk ./src/allmydata/test/test_hung_server.py 18
5340 immutable_plaintext = "data" * 10000
5341 mutable_plaintext = "muta" * 10000
5342 
5343+
5344 class HungServerDownloadTest(GridTestMixin, ShouldFailMixin, PollMixin,
5345                              unittest.TestCase):
5346     # Many of these tests take around 60 seconds on François's ARM buildslave:
5347hunk ./src/allmydata/test/test_hung_server.py 31
5348     timeout = 240
5349 
5350     def _break(self, servers):
5351-        for (id, ss) in servers:
5352-            self.g.break_server(id)
5353+        for ss in servers:
5354+            self.g.break_server(ss.get_serverid())
5355 
5356     def _hang(self, servers, **kwargs):
5357hunk ./src/allmydata/test/test_hung_server.py 35
5358-        for (id, ss) in servers:
5359-            self.g.hang_server(id, **kwargs)
5360+        for ss in servers:
5361+            self.g.hang_server(ss.get_serverid(), **kwargs)
5362 
5363     def _unhang(self, servers, **kwargs):
5364hunk ./src/allmydata/test/test_hung_server.py 39
5365-        for (id, ss) in servers:
5366-            self.g.unhang_server(id, **kwargs)
5367+        for ss in servers:
5368+            self.g.unhang_server(ss.get_serverid(), **kwargs)
5369 
5370     def _hang_shares(self, shnums, **kwargs):
5371         # hang all servers who are holding the given shares
5372hunk ./src/allmydata/test/test_hung_server.py 52
5373                     hung_serverids.add(i_serverid)
5374 
5375     def _delete_all_shares_from(self, servers):
5376-        serverids = [id for (id, ss) in servers]
5377-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5378+        serverids = [ss.get_serverid() for ss in servers]
5379+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5380             if i_serverid in serverids:
5381hunk ./src/allmydata/test/test_hung_server.py 55
5382-                os.unlink(i_sharefile)
5383+                i_sharefp.remove()
5384 
5385     def _corrupt_all_shares_in(self, servers, corruptor_func):
5386hunk ./src/allmydata/test/test_hung_server.py 58
5387-        serverids = [id for (id, ss) in servers]
5388-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5389+        serverids = [ss.get_serverid() for ss in servers]
5390+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5391             if i_serverid in serverids:
5392hunk ./src/allmydata/test/test_hung_server.py 61
5393-                self._corrupt_share((i_shnum, i_sharefile), corruptor_func)
5394+                self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor_func)
5395 
5396     def _copy_all_shares_from(self, from_servers, to_server):
5397hunk ./src/allmydata/test/test_hung_server.py 64
5398-        serverids = [id for (id, ss) in from_servers]
5399-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5400+        serverids = [ss.get_serverid() for ss in from_servers]
5401+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5402             if i_serverid in serverids:
5403hunk ./src/allmydata/test/test_hung_server.py 67
5404-                self._copy_share((i_shnum, i_sharefile), to_server)
5405+                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
5406 
5407hunk ./src/allmydata/test/test_hung_server.py 69
5408-    def _copy_share(self, share, to_server):
5409-        (sharenum, sharefile) = share
5410-        (id, ss) = to_server
5411-        shares_dir = os.path.join(ss.original.storedir, "shares")
5412-        si = uri.from_string(self.uri).get_storage_index()
5413-        si_dir = os.path.join(shares_dir, storage_index_to_dir(si))
5414-        if not os.path.exists(si_dir):
5415-            os.makedirs(si_dir)
5416-        new_sharefile = os.path.join(si_dir, str(sharenum))
5417-        shutil.copy(sharefile, new_sharefile)
5418         self.shares = self.find_uri_shares(self.uri)
5419hunk ./src/allmydata/test/test_hung_server.py 70
5420-        # Make sure that the storage server has the share.
5421-        self.failUnless((sharenum, ss.original.my_nodeid, new_sharefile)
5422-                        in self.shares)
5423-
5424-    def _corrupt_share(self, share, corruptor_func):
5425-        (sharenum, sharefile) = share
5426-        data = open(sharefile, "rb").read()
5427-        newdata = corruptor_func(data)
5428-        os.unlink(sharefile)
5429-        wf = open(sharefile, "wb")
5430-        wf.write(newdata)
5431-        wf.close()
5432 
5433     def _set_up(self, mutable, testdir, num_clients=1, num_servers=10):
5434         self.mutable = mutable
5435hunk ./src/allmydata/test/test_hung_server.py 82
5436 
5437         self.c0 = self.g.clients[0]
5438         nm = self.c0.nodemaker
5439-        self.servers = sorted([(s.get_serverid(), s.get_rref())
5440-                               for s in nm.storage_broker.get_connected_servers()])
5441+        unsorted = [(s.get_serverid(), s.get_rref()) for s in nm.storage_broker.get_connected_servers()]
5442+        self.servers = [ss for (id, ss) in sorted(unsorted)]
5443         self.servers = self.servers[5:] + self.servers[:5]
5444 
5445         if mutable:
5446hunk ./src/allmydata/test/test_hung_server.py 244
5447             # stuck-but-not-overdue, and 4 live requests. All 4 live requests
5448             # will retire before the download is complete and the ShareFinder
5449             # is shut off. That will leave 4 OVERDUE and 1
5450-            # stuck-but-not-overdue, for a total of 5 requests in in
5451+            # stuck-but-not-overdue, for a total of 5 requests in
5452             # _sf.pending_requests
5453             for t in self._sf.overdue_timers.values()[:4]:
5454                 t.reset(-1.0)
5455hunk ./src/allmydata/test/test_mutable.py 21
5456 from foolscap.api import eventually, fireEventually
5457 from foolscap.logging import log
5458 from allmydata.storage_client import StorageFarmBroker
5459-from allmydata.storage.common import storage_index_to_dir
5460 from allmydata.scripts import debug
5461 
5462 from allmydata.mutable.filenode import MutableFileNode, BackoffAgent
5463hunk ./src/allmydata/test/test_mutable.py 3670
5464         # Now execute each assignment by writing the storage.
5465         for (share, servernum) in assignments:
5466             sharedata = base64.b64decode(self.sdmf_old_shares[share])
5467-            storedir = self.get_serverdir(servernum)
5468-            storage_path = os.path.join(storedir, "shares",
5469-                                        storage_index_to_dir(si))
5470-            fileutil.make_dirs(storage_path)
5471-            fileutil.write(os.path.join(storage_path, "%d" % share),
5472-                           sharedata)
5473+            storage_dir = self.get_server(servernum).backend.get_shareset(si)._sharehomedir
5474+            fileutil.fp_make_dirs(storage_dir)
5475+            storage_dir.child("%d" % share).setContent(sharedata)
5476         # ...and verify that the shares are there.
5477         shares = self.find_uri_shares(self.sdmf_old_cap)
5478         assert len(shares) == 10
5479hunk ./src/allmydata/test/test_provisioning.py 13
5480 from nevow import inevow
5481 from zope.interface import implements
5482 
5483-class MyRequest:
5484+class MockRequest:
5485     implements(inevow.IRequest)
5486     pass
5487 
5488hunk ./src/allmydata/test/test_provisioning.py 26
5489     def test_load(self):
5490         pt = provisioning.ProvisioningTool()
5491         self.fields = {}
5492-        #r = MyRequest()
5493+        #r = MockRequest()
5494         #r.fields = self.fields
5495         #ctx = RequestContext()
5496         #unfilled = pt.renderSynchronously(ctx)
5497hunk ./src/allmydata/test/test_repairer.py 537
5498         # happiness setting.
5499         def _delete_some_servers(ignored):
5500             for i in xrange(7):
5501-                self.g.remove_server(self.g.servers_by_number[i].my_nodeid)
5502+                self.remove_server(i)
5503 
5504             assert len(self.g.servers_by_number) == 3
5505 
5506hunk ./src/allmydata/test/test_storage.py 14
5507 from allmydata import interfaces
5508 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
5509 from allmydata.storage.server import StorageServer
5510-from allmydata.storage.mutable import MutableShareFile
5511-from allmydata.storage.immutable import BucketWriter, BucketReader
5512-from allmydata.storage.common import DataTooLargeError, storage_index_to_dir, \
5513+from allmydata.storage.backends.disk.mutable import MutableDiskShare
5514+from allmydata.storage.bucket import BucketWriter, BucketReader
5515+from allmydata.storage.common import DataTooLargeError, \
5516      UnknownMutableContainerVersionError, UnknownImmutableContainerVersionError
5517 from allmydata.storage.lease import LeaseInfo
5518 from allmydata.storage.crawler import BucketCountingCrawler
5519hunk ./src/allmydata/test/test_storage.py 474
5520         w[0].remote_write(0, "\xff"*10)
5521         w[0].remote_close()
5522 
5523-        fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0")
5524-        f = open(fn, "rb+")
5525+        fp = ss.backend.get_shareset("si1")._sharehomedir.child("0")
5526+        f = fp.open("rb+")
5527         f.seek(0)
5528         f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1
5529         f.close()
5530hunk ./src/allmydata/test/test_storage.py 814
5531     def test_bad_magic(self):
5532         ss = self.create("test_bad_magic")
5533         self.allocate(ss, "si1", "we1", self._lease_secret.next(), set([0]), 10)
5534-        fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0")
5535-        f = open(fn, "rb+")
5536+        fp = ss.backend.get_shareset("si1")._sharehomedir.child("0")
5537+        f = fp.open("rb+")
5538         f.seek(0)
5539         f.write("BAD MAGIC")
5540         f.close()
5541hunk ./src/allmydata/test/test_storage.py 842
5542 
5543         # Trying to make the container too large (by sending a write vector
5544         # whose offset is too high) will raise an exception.
5545-        TOOBIG = MutableShareFile.MAX_SIZE + 10
5546+        TOOBIG = MutableDiskShare.MAX_SIZE + 10
5547         self.failUnlessRaises(DataTooLargeError,
5548                               rstaraw, "si1", secrets,
5549                               {0: ([], [(TOOBIG,data)], None)},
5550hunk ./src/allmydata/test/test_storage.py 1229
5551 
5552         # create a random non-numeric file in the bucket directory, to
5553         # exercise the code that's supposed to ignore those.
5554-        bucket_dir = os.path.join(self.workdir("test_leases"),
5555-                                  "shares", storage_index_to_dir("si1"))
5556-        f = open(os.path.join(bucket_dir, "ignore_me.txt"), "w")
5557-        f.write("you ought to be ignoring me\n")
5558-        f.close()
5559+        bucket_dir = ss.backend.get_shareset("si1")._sharehomedir
5560+        bucket_dir.child("ignore_me.txt").setContent("you ought to be ignoring me\n")
5561 
5562hunk ./src/allmydata/test/test_storage.py 1232
5563-        s0 = MutableShareFile(os.path.join(bucket_dir, "0"))
5564+        s0 = MutableDiskShare(bucket_dir.child("0"))
5565         self.failUnlessEqual(len(list(s0.get_leases())), 1)
5566 
5567         # add-lease on a missing storage index is silently ignored
5568hunk ./src/allmydata/test/test_storage.py 3118
5569         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
5570 
5571         # add a non-sharefile to exercise another code path
5572-        fn = os.path.join(ss.sharedir,
5573-                          storage_index_to_dir(immutable_si_0),
5574-                          "not-a-share")
5575-        f = open(fn, "wb")
5576-        f.write("I am not a share.\n")
5577-        f.close()
5578+        fp = ss.backend.get_shareset(immutable_si_0)._sharehomedir.child("not-a-share")
5579+        fp.setContent("I am not a share.\n")
5580 
5581         # this is before the crawl has started, so we're not in a cycle yet
5582         initial_state = lc.get_state()
5583hunk ./src/allmydata/test/test_storage.py 3282
5584     def test_expire_age(self):
5585         basedir = "storage/LeaseCrawler/expire_age"
5586         fileutil.make_dirs(basedir)
5587-        # setting expiration_time to 2000 means that any lease which is more
5588-        # than 2000s old will be expired.
5589-        ss = InstrumentedStorageServer(basedir, "\x00" * 20,
5590-                                       expiration_enabled=True,
5591-                                       expiration_mode="age",
5592-                                       expiration_override_lease_duration=2000)
5593+        # setting 'override_lease_duration' to 2000 means that any lease that
5594+        # is more than 2000 seconds old will be expired.
5595+        expiration_policy = {
5596+            'enabled': True,
5597+            'mode': 'age',
5598+            'override_lease_duration': 2000,
5599+            'sharetypes': ('mutable', 'immutable'),
5600+        }
5601+        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
5602         # make it start sooner than usual.
5603         lc = ss.lease_checker
5604         lc.slow_start = 0
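The former expiration_* keyword arguments are folded into one expiration_policy dict. What the two modes mean for a single lease, as a hypothetical helper (the real crawler logic lives elsewhere):

    import time

    # Hypothetical helper: when does a lease count as expired under each mode?
    def lease_is_expired(policy, renewal_time, now=None):
        if now is None:
            now = time.time()
        if not policy['enabled']:
            return False
        if policy['mode'] == 'age':
            # ignore the lease's own duration; expire anything older
            # than override_lease_duration seconds
            return now > renewal_time + policy['override_lease_duration']
        else:  # 'cutoff-date'
            # expire any lease last renewed before the configured cutoff
            return renewal_time < policy['cutoff_date']
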
5605hunk ./src/allmydata/test/test_storage.py 3423
5606     def test_expire_cutoff_date(self):
5607         basedir = "storage/LeaseCrawler/expire_cutoff_date"
5608         fileutil.make_dirs(basedir)
5609-        # setting cutoff-date to 2000 seconds ago means that any lease which
5610-        # is more than 2000s old will be expired.
5611+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5612+        # is more than 2000 seconds old will be expired.
5613         now = time.time()
5614         then = int(now - 2000)
5615hunk ./src/allmydata/test/test_storage.py 3427
5616-        ss = InstrumentedStorageServer(basedir, "\x00" * 20,
5617-                                       expiration_enabled=True,
5618-                                       expiration_mode="cutoff-date",
5619-                                       expiration_cutoff_date=then)
5620+        expiration_policy = {
5621+            'enabled': True,
5622+            'mode': 'cutoff-date',
5623+            'cutoff_date': then,
5624+            'sharetypes': ('mutable', 'immutable'),
5625+        }
5626+        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
5627         # make it start sooner than usual.
5628         lc = ss.lease_checker
5629         lc.slow_start = 0
5630hunk ./src/allmydata/test/test_storage.py 3575
5631     def test_only_immutable(self):
5632         basedir = "storage/LeaseCrawler/only_immutable"
5633         fileutil.make_dirs(basedir)
5634+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5635+        # is more than 2000 seconds old will be expired.
5636         now = time.time()
5637         then = int(now - 2000)
5638hunk ./src/allmydata/test/test_storage.py 3579
5639-        ss = StorageServer(basedir, "\x00" * 20,
5640-                           expiration_enabled=True,
5641-                           expiration_mode="cutoff-date",
5642-                           expiration_cutoff_date=then,
5643-                           expiration_sharetypes=("immutable",))
5644+        expiration_policy = {
5645+            'enabled': True,
5646+            'mode': 'cutoff-date',
5647+            'cutoff_date': then,
5648+            'sharetypes': ('immutable',),
5649+        }
5650+        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
5651         lc = ss.lease_checker
5652         lc.slow_start = 0
5653         webstatus = StorageStatus(ss)
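The sharetypes entry narrows which container types the expirer will touch at all; conceptually:

    # Hypothetical: with sharetypes == ('immutable',), mutable shares keep
    # their leases no matter how old those leases are.
    def expirer_may_touch(policy, sharetype):
        return policy['enabled'] and sharetype in policy['sharetypes']
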
5654hunk ./src/allmydata/test/test_storage.py 3636
5655     def test_only_mutable(self):
5656         basedir = "storage/LeaseCrawler/only_mutable"
5657         fileutil.make_dirs(basedir)
5658+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5659+        # is more than 2000 seconds old will be expired.
5660         now = time.time()
5661         then = int(now - 2000)
5662hunk ./src/allmydata/test/test_storage.py 3640
5663-        ss = StorageServer(basedir, "\x00" * 20,
5664-                           expiration_enabled=True,
5665-                           expiration_mode="cutoff-date",
5666-                           expiration_cutoff_date=then,
5667-                           expiration_sharetypes=("mutable",))
5668+        expiration_policy = {
5669+            'enabled': True,
5670+            'mode': 'cutoff-date',
5671+            'cutoff_date': then,
5672+            'sharetypes': ('mutable',),
5673+        }
5674+        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
5675         lc = ss.lease_checker
5676         lc.slow_start = 0
5677         webstatus = StorageStatus(ss)
5678hunk ./src/allmydata/test/test_storage.py 3819
5679     def test_no_st_blocks(self):
5680         basedir = "storage/LeaseCrawler/no_st_blocks"
5681         fileutil.make_dirs(basedir)
5682-        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20,
5683-                                        expiration_mode="age",
5684-                                        expiration_override_lease_duration=-1000)
5685-        # a negative expiration_time= means the "configured-"
5686+        # A negative 'override_lease_duration' means that the "configured-"
5687         # space-recovered counts will be non-zero, since all shares will have
5688hunk ./src/allmydata/test/test_storage.py 3821
5689-        # expired by then
5690+        # expired by then.
5691+        expiration_policy = {
5692+            'enabled': True,
5693+            'mode': 'age',
5694+            'override_lease_duration': -1000,
5695+            'sharetypes': ('mutable', 'immutable'),
5696+        }
5697+        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20, expiration_policy)
5698 
5699         # make it start sooner than usual.
5700         lc = ss.lease_checker
5701hunk ./src/allmydata/test/test_storage.py 3877
5702         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
5703         first = min(self.sis)
5704         first_b32 = base32.b2a(first)
5705-        fn = os.path.join(ss.sharedir, storage_index_to_dir(first), "0")
5706-        f = open(fn, "rb+")
5707+        fp = ss.backend.get_shareset(first)._sharehomedir.child("0")
5708+        f = fp.open("rb+")
5709         f.seek(0)
5710         f.write("BAD MAGIC")
5711         f.close()
5712hunk ./src/allmydata/test/test_storage.py 3890
5713 
5714         # also create an empty bucket
5715         empty_si = base32.b2a("\x04"*16)
5716-        empty_bucket_dir = os.path.join(ss.sharedir,
5717-                                        storage_index_to_dir(empty_si))
5718-        fileutil.make_dirs(empty_bucket_dir)
5719+        empty_bucket_dir = ss.backend.get_shareset(empty_si)._sharehomedir
5720+        fileutil.fp_make_dirs(empty_bucket_dir)
5721 
5722         ss.setServiceParent(self.s)
5723 
5724hunk ./src/allmydata/test/test_system.py 10
5725 
5726 import allmydata
5727 from allmydata import uri
5728-from allmydata.storage.mutable import MutableShareFile
5729+from allmydata.storage.backends.disk.mutable import MutableDiskShare
5730 from allmydata.storage.server import si_a2b
5731 from allmydata.immutable import offloaded, upload
5732 from allmydata.immutable.literal import LiteralFileNode
5733hunk ./src/allmydata/test/test_system.py 421
5734         return shares
5735 
5736     def _corrupt_mutable_share(self, filename, which):
5737-        msf = MutableShareFile(filename)
5738+        msf = MutableDiskShare(filename)
5739         datav = msf.readv([ (0, 1000000) ])
5740         final_share = datav[0]
5741         assert len(final_share) < 1000000 # ought to be truncated
5742hunk ./src/allmydata/test/test_upload.py 22
5743 from allmydata.util.happinessutil import servers_of_happiness, \
5744                                          shares_by_server, merge_servers
5745 from allmydata.storage_client import StorageFarmBroker
5746-from allmydata.storage.server import storage_index_to_dir
5747 
5748 MiB = 1024*1024
5749 
5750hunk ./src/allmydata/test/test_upload.py 821
5751 
5752     def _copy_share_to_server(self, share_number, server_number):
5753         ss = self.g.servers_by_number[server_number]
5754-        # Copy share i from the directory associated with the first
5755-        # storage server to the directory associated with this one.
5756-        assert self.g, "I tried to find a grid at self.g, but failed"
5757-        assert self.shares, "I tried to find shares at self.shares, but failed"
5758-        old_share_location = self.shares[share_number][2]
5759-        new_share_location = os.path.join(ss.storedir, "shares")
5760-        si = uri.from_string(self.uri).get_storage_index()
5761-        new_share_location = os.path.join(new_share_location,
5762-                                          storage_index_to_dir(si))
5763-        if not os.path.exists(new_share_location):
5764-            os.makedirs(new_share_location)
5765-        new_share_location = os.path.join(new_share_location,
5766-                                          str(share_number))
5767-        if old_share_location != new_share_location:
5768-            shutil.copy(old_share_location, new_share_location)
5769-        shares = self.find_uri_shares(self.uri)
5770-        # Make sure that the storage server has the share.
5771-        self.failUnless((share_number, ss.my_nodeid, new_share_location)
5772-                        in shares)
5773+        self.copy_share(self.shares[share_number], self.uri, ss)
5774 
5775     def _setup_grid(self):
5776         """
5777hunk ./src/allmydata/test/test_upload.py 1103
5778                 self._copy_share_to_server(i, 2)
5779         d.addCallback(_copy_shares)
5780         # Remove the first server, and add a placeholder with share 0
5781-        d.addCallback(lambda ign:
5782-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5783+        d.addCallback(lambda ign: self.remove_server(0))
5784         d.addCallback(lambda ign:
5785             self._add_server_with_share(server_number=4, share_number=0))
5786         # Now try uploading.
5787hunk ./src/allmydata/test/test_upload.py 1134
5788         d.addCallback(lambda ign:
5789             self._add_server(server_number=4))
5790         d.addCallback(_copy_shares)
5791-        d.addCallback(lambda ign:
5792-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5793+        d.addCallback(lambda ign: self.remove_server(0))
5794         d.addCallback(_reset_encoding_parameters)
5795         d.addCallback(lambda client:
5796             client.upload(upload.Data("data" * 10000, convergence="")))
5797hunk ./src/allmydata/test/test_upload.py 1196
5798                 self._copy_share_to_server(i, 2)
5799         d.addCallback(_copy_shares)
5800         # Remove server 0, and add another in its place
5801-        d.addCallback(lambda ign:
5802-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5803+        d.addCallback(lambda ign: self.remove_server(0))
5804         d.addCallback(lambda ign:
5805             self._add_server_with_share(server_number=4, share_number=0,
5806                                         readonly=True))
5807hunk ./src/allmydata/test/test_upload.py 1237
5808             for i in xrange(1, 10):
5809                 self._copy_share_to_server(i, 2)
5810         d.addCallback(_copy_shares)
5811-        d.addCallback(lambda ign:
5812-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5813+        d.addCallback(lambda ign: self.remove_server(0))
5814         def _reset_encoding_parameters(ign, happy=4):
5815             client = self.g.clients[0]
5816             client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
5817hunk ./src/allmydata/test/test_upload.py 1273
5818         # remove the original server
5819         # (necessary to ensure that the Tahoe2ServerSelector will distribute
5820         #  all the shares)
5821-        def _remove_server(ign):
5822-            server = self.g.servers_by_number[0]
5823-            self.g.remove_server(server.my_nodeid)
5824-        d.addCallback(_remove_server)
5825+        d.addCallback(lambda ign: self.remove_server(0))
5826         # This should succeed; we still have 4 servers, and the
5827         # happiness of the upload is 4.
5828         d.addCallback(lambda ign:
5829hunk ./src/allmydata/test/test_upload.py 1285
5830         d.addCallback(lambda ign:
5831             self._setup_and_upload())
5832         d.addCallback(_do_server_setup)
5833-        d.addCallback(_remove_server)
5834+        d.addCallback(lambda ign: self.remove_server(0))
5835         d.addCallback(lambda ign:
5836             self.shouldFail(UploadUnhappinessError,
5837                             "test_dropped_servers_in_encoder",
5838hunk ./src/allmydata/test/test_upload.py 1307
5839             self._add_server_with_share(4, 7, readonly=True)
5840             self._add_server_with_share(5, 8, readonly=True)
5841         d.addCallback(_do_server_setup_2)
5842-        d.addCallback(_remove_server)
5843+        d.addCallback(lambda ign: self.remove_server(0))
5844         d.addCallback(lambda ign:
5845             self._do_upload_with_broken_servers(1))
5846         d.addCallback(_set_basedir)
5847hunk ./src/allmydata/test/test_upload.py 1314
5848         d.addCallback(lambda ign:
5849             self._setup_and_upload())
5850         d.addCallback(_do_server_setup_2)
5851-        d.addCallback(_remove_server)
5852+        d.addCallback(lambda ign: self.remove_server(0))
5853         d.addCallback(lambda ign:
5854             self.shouldFail(UploadUnhappinessError,
5855                             "test_dropped_servers_in_encoder",
5856hunk ./src/allmydata/test/test_upload.py 1528
5857             for i in xrange(1, 10):
5858                 self._copy_share_to_server(i, 1)
5859         d.addCallback(_copy_shares)
5860-        d.addCallback(lambda ign:
5861-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5862+        d.addCallback(lambda ign: self.remove_server(0))
5863         def _prepare_client(ign):
5864             client = self.g.clients[0]
5865             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
5866hunk ./src/allmydata/test/test_upload.py 1550
5867         def _setup(ign):
5868             for i in xrange(1, 11):
5869                 self._add_server(server_number=i)
5870-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5871+            self.remove_server(0)
5872             c = self.g.clients[0]
5873             # We set happy to an unsatisfiable value so that we can check the
5874             # counting in the exception message. The same progress message
5875hunk ./src/allmydata/test/test_upload.py 1577
5876                 self._add_server(server_number=i)
5877             self._add_server(server_number=11, readonly=True)
5878             self._add_server(server_number=12, readonly=True)
5879-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5880+            self.remove_server(0)
5881             c = self.g.clients[0]
5882             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
5883             return c
5884hunk ./src/allmydata/test/test_upload.py 1605
5885             # the first one that the selector sees.
5886             for i in xrange(10):
5887                 self._copy_share_to_server(i, 9)
5888-            # Remove server 0, and its contents
5889-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5890+            self.remove_server(0)
5891             # Make happiness unsatisfiable
5892             c = self.g.clients[0]
5893             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
5894hunk ./src/allmydata/test/test_upload.py 1625
5895         def _then(ign):
5896             for i in xrange(1, 11):
5897                 self._add_server(server_number=i, readonly=True)
5898-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5899+            self.remove_server(0)
5900             c = self.g.clients[0]
5901             c.DEFAULT_ENCODING_PARAMETERS['k'] = 2
5902             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
5903hunk ./src/allmydata/test/test_upload.py 1661
5904             self._add_server(server_number=4, readonly=True))
5905         d.addCallback(lambda ign:
5906             self._add_server(server_number=5, readonly=True))
5907-        d.addCallback(lambda ign:
5908-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5909+        d.addCallback(lambda ign: self.remove_server(0))
5910         def _reset_encoding_parameters(ign, happy=4):
5911             client = self.g.clients[0]
5912             client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
5913hunk ./src/allmydata/test/test_upload.py 1696
5914         d.addCallback(lambda ign:
5915             self._add_server(server_number=2))
5916         def _break_server_2(ign):
5917-            serverid = self.g.servers_by_number[2].my_nodeid
5918+            serverid = self.get_server(2).get_serverid()
5919             self.g.break_server(serverid)
5920         d.addCallback(_break_server_2)
5921         d.addCallback(lambda ign:
5922hunk ./src/allmydata/test/test_upload.py 1705
5923             self._add_server(server_number=4, readonly=True))
5924         d.addCallback(lambda ign:
5925             self._add_server(server_number=5, readonly=True))
5926-        d.addCallback(lambda ign:
5927-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5928+        d.addCallback(lambda ign: self.remove_server(0))
5929         d.addCallback(_reset_encoding_parameters)
5930         d.addCallback(lambda client:
5931             self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
5932hunk ./src/allmydata/test/test_upload.py 1816
5933             # Copy shares
5934             self._copy_share_to_server(1, 1)
5935             self._copy_share_to_server(2, 1)
5936-            # Remove server 0
5937-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5938+            self.remove_server(0)
5939             client = self.g.clients[0]
5940             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 3
5941             return client
5942hunk ./src/allmydata/test/test_upload.py 1930
5943                                         readonly=True)
5944             self._add_server_with_share(server_number=4, share_number=3,
5945                                         readonly=True)
5946-            # Remove server 0.
5947-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5948+            self.remove_server(0)
5949             # Set the client appropriately
5950             c = self.g.clients[0]
5951             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
5952hunk ./src/allmydata/test/test_util.py 9
5953 from twisted.trial import unittest
5954 from twisted.internet import defer, reactor
5955 from twisted.python.failure import Failure
5956+from twisted.python.filepath import FilePath
5957 from twisted.python import log
5958 from pycryptopp.hash.sha256 import SHA256 as _hash
5959 
5960hunk ./src/allmydata/test/test_util.py 508
5961                 os.chdir(saved_cwd)
5962 
5963     def test_disk_stats(self):
5964-        avail = fileutil.get_available_space('.', 2**14)
5965+        avail = fileutil.get_available_space(FilePath('.'), 2**14)
5966         if avail == 0:
5967             raise unittest.SkipTest("This test will spuriously fail if there is no disk space left.")
5968 
5969hunk ./src/allmydata/test/test_util.py 512
5970-        disk = fileutil.get_disk_stats('.', 2**13)
5971+        disk = fileutil.get_disk_stats(FilePath('.'), 2**13)
5972         self.failUnless(disk['total'] > 0, disk['total'])
5973         self.failUnless(disk['used'] > 0, disk['used'])
5974         self.failUnless(disk['free_for_root'] > 0, disk['free_for_root'])
5975hunk ./src/allmydata/test/test_util.py 521
5976 
5977     def test_disk_stats_avail_nonnegative(self):
5978         # This test will spuriously fail if you have more than 2^128
5979-        # bytes of available space on your filesystem.
5980-        disk = fileutil.get_disk_stats('.', 2**128)
5981+        # bytes of available space on your filesystem (lucky you).
5982+        disk = fileutil.get_disk_stats(FilePath('.'), 2**128)
5983         self.failUnlessEqual(disk['avail'], 0)
5984 
5985 class PollMixinTests(unittest.TestCase):
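get_available_space and get_disk_stats now take a FilePath plus a reserved-space figure. A sketch of probing the returned dict; 'avail' is floored at zero once the reservation exceeds free space, which is what the 2**128 case above checks:

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    disk = fileutil.get_disk_stats(FilePath('.'), 2**20)   # reserve 1 MiB
    # 'avail' == free space minus the reservation, never negative
    print disk['total'], disk['used'], disk['free_for_root'], disk['avail']
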
5986hunk ./src/allmydata/test/test_web.py 12
5987 from twisted.python import failure, log
5988 from nevow import rend
5989 from allmydata import interfaces, uri, webish, dirnode
5990-from allmydata.storage.shares import get_share_file
5991 from allmydata.storage_client import StorageFarmBroker
5992 from allmydata.immutable import upload
5993 from allmydata.immutable.downloader.status import DownloadStatus
5994hunk ./src/allmydata/test/test_web.py 4111
5995             good_shares = self.find_uri_shares(self.uris["good"])
5996             self.failUnlessReallyEqual(len(good_shares), 10)
5997             sick_shares = self.find_uri_shares(self.uris["sick"])
5998-            os.unlink(sick_shares[0][2])
5999+            sick_shares[0][2].remove()
6000             dead_shares = self.find_uri_shares(self.uris["dead"])
6001             for i in range(1, 10):
6002hunk ./src/allmydata/test/test_web.py 4114
6003-                os.unlink(dead_shares[i][2])
6004+                dead_shares[i][2].remove()
6005             c_shares = self.find_uri_shares(self.uris["corrupt"])
6006             cso = CorruptShareOptions()
6007             cso.stdout = StringIO()
6008hunk ./src/allmydata/test/test_web.py 4118
6009-            cso.parseOptions([c_shares[0][2]])
6010+            cso.parseOptions([c_shares[0][2].path])
6011             corrupt_share(cso)
6012         d.addCallback(_clobber_shares)
6013 
6014hunk ./src/allmydata/test/test_web.py 4253
6015             good_shares = self.find_uri_shares(self.uris["good"])
6016             self.failUnlessReallyEqual(len(good_shares), 10)
6017             sick_shares = self.find_uri_shares(self.uris["sick"])
6018-            os.unlink(sick_shares[0][2])
6019+            sick_shares[0][2].remove()
6020             dead_shares = self.find_uri_shares(self.uris["dead"])
6021             for i in range(1, 10):
6022hunk ./src/allmydata/test/test_web.py 4256
6023-                os.unlink(dead_shares[i][2])
6024+                dead_shares[i][2].remove()
6025             c_shares = self.find_uri_shares(self.uris["corrupt"])
6026             cso = CorruptShareOptions()
6027             cso.stdout = StringIO()
6028hunk ./src/allmydata/test/test_web.py 4260
6029-            cso.parseOptions([c_shares[0][2]])
6030+            cso.parseOptions([c_shares[0][2].path])
6031             corrupt_share(cso)
6032         d.addCallback(_clobber_shares)
6033 
6034hunk ./src/allmydata/test/test_web.py 4319
6035 
6036         def _clobber_shares(ignored):
6037             sick_shares = self.find_uri_shares(self.uris["sick"])
6038-            os.unlink(sick_shares[0][2])
6039+            sick_shares[0][2].remove()
6040         d.addCallback(_clobber_shares)
6041 
6042         d.addCallback(self.CHECK, "sick", "t=check&repair=true&output=json")
6043hunk ./src/allmydata/test/test_web.py 4811
6044             good_shares = self.find_uri_shares(self.uris["good"])
6045             self.failUnlessReallyEqual(len(good_shares), 10)
6046             sick_shares = self.find_uri_shares(self.uris["sick"])
6047-            os.unlink(sick_shares[0][2])
6048+            sick_shares[0][2].remove()
6049             #dead_shares = self.find_uri_shares(self.uris["dead"])
6050             #for i in range(1, 10):
6051hunk ./src/allmydata/test/test_web.py 4814
6052-            #    os.unlink(dead_shares[i][2])
6053+            #    dead_shares[i][2].remove()
6054 
6055             #c_shares = self.find_uri_shares(self.uris["corrupt"])
6056             #cso = CorruptShareOptions()
6057hunk ./src/allmydata/test/test_web.py 4819
6058             #cso.stdout = StringIO()
6059-            #cso.parseOptions([c_shares[0][2]])
6060+            #cso.parseOptions([c_shares[0][2].path])
6061             #corrupt_share(cso)
6062         d.addCallback(_clobber_shares)
6063 
6064hunk ./src/allmydata/test/test_web.py 4870
6065         d.addErrback(self.explain_web_error)
6066         return d
6067 
6068-    def _count_leases(self, ignored, which):
6069-        u = self.uris[which]
6070-        shares = self.find_uri_shares(u)
6071-        lease_counts = []
6072-        for shnum, serverid, fn in shares:
6073-            sf = get_share_file(fn)
6074-            num_leases = len(list(sf.get_leases()))
6075-            lease_counts.append( (fn, num_leases) )
6076-        return lease_counts
6077-
6078-    def _assert_leasecount(self, lease_counts, expected):
6079+    def _assert_leasecount(self, ignored, which, expected):
6080+        lease_counts = self.count_leases(self.uris[which])
6081         for (fn, num_leases) in lease_counts:
6082             if num_leases != expected:
6083                 self.fail("expected %d leases, have %d, on %s" %
6084hunk ./src/allmydata/test/test_web.py 4903
6085                 self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
6086         d.addCallback(_compute_fileurls)
6087 
6088-        d.addCallback(self._count_leases, "one")
6089-        d.addCallback(self._assert_leasecount, 1)
6090-        d.addCallback(self._count_leases, "two")
6091-        d.addCallback(self._assert_leasecount, 1)
6092-        d.addCallback(self._count_leases, "mutable")
6093-        d.addCallback(self._assert_leasecount, 1)
6094+        d.addCallback(self._assert_leasecount, "one", 1)
6095+        d.addCallback(self._assert_leasecount, "two", 1)
6096+        d.addCallback(self._assert_leasecount, "mutable", 1)
6097 
6098         d.addCallback(self.CHECK, "one", "t=check") # no add-lease
6099         def _got_html_good(res):
6100hunk ./src/allmydata/test/test_web.py 4913
6101             self.failIf("Not Healthy" in res, res)
6102         d.addCallback(_got_html_good)
6103 
6104-        d.addCallback(self._count_leases, "one")
6105-        d.addCallback(self._assert_leasecount, 1)
6106-        d.addCallback(self._count_leases, "two")
6107-        d.addCallback(self._assert_leasecount, 1)
6108-        d.addCallback(self._count_leases, "mutable")
6109-        d.addCallback(self._assert_leasecount, 1)
6110+        d.addCallback(self._assert_leasecount, "one", 1)
6111+        d.addCallback(self._assert_leasecount, "two", 1)
6112+        d.addCallback(self._assert_leasecount, "mutable", 1)
6113 
6114         # this CHECK uses the original client, which uses the same
6115         # lease-secrets, so it will just renew the original lease
6116hunk ./src/allmydata/test/test_web.py 4922
6117         d.addCallback(self.CHECK, "one", "t=check&add-lease=true")
6118         d.addCallback(_got_html_good)
6119 
6120-        d.addCallback(self._count_leases, "one")
6121-        d.addCallback(self._assert_leasecount, 1)
6122-        d.addCallback(self._count_leases, "two")
6123-        d.addCallback(self._assert_leasecount, 1)
6124-        d.addCallback(self._count_leases, "mutable")
6125-        d.addCallback(self._assert_leasecount, 1)
6126+        d.addCallback(self._assert_leasecount, "one", 1)
6127+        d.addCallback(self._assert_leasecount, "two", 1)
6128+        d.addCallback(self._assert_leasecount, "mutable", 1)
6129 
6130         # this CHECK uses an alternate client, which adds a second lease
6131         d.addCallback(self.CHECK, "one", "t=check&add-lease=true", clientnum=1)
6132hunk ./src/allmydata/test/test_web.py 4930
6133         d.addCallback(_got_html_good)
6134 
6135-        d.addCallback(self._count_leases, "one")
6136-        d.addCallback(self._assert_leasecount, 2)
6137-        d.addCallback(self._count_leases, "two")
6138-        d.addCallback(self._assert_leasecount, 1)
6139-        d.addCallback(self._count_leases, "mutable")
6140-        d.addCallback(self._assert_leasecount, 1)
6141+        d.addCallback(self._assert_leasecount, "one", 2)
6142+        d.addCallback(self._assert_leasecount, "two", 1)
6143+        d.addCallback(self._assert_leasecount, "mutable", 1)
6144 
6145         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true")
6146         d.addCallback(_got_html_good)
6147hunk ./src/allmydata/test/test_web.py 4937
6148 
6149-        d.addCallback(self._count_leases, "one")
6150-        d.addCallback(self._assert_leasecount, 2)
6151-        d.addCallback(self._count_leases, "two")
6152-        d.addCallback(self._assert_leasecount, 1)
6153-        d.addCallback(self._count_leases, "mutable")
6154-        d.addCallback(self._assert_leasecount, 1)
6155+        d.addCallback(self._assert_leasecount, "one", 2)
6156+        d.addCallback(self._assert_leasecount, "two", 1)
6157+        d.addCallback(self._assert_leasecount, "mutable", 1)
6158 
6159         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true",
6160                       clientnum=1)
6161hunk ./src/allmydata/test/test_web.py 4945
6162         d.addCallback(_got_html_good)
6163 
6164-        d.addCallback(self._count_leases, "one")
6165-        d.addCallback(self._assert_leasecount, 2)
6166-        d.addCallback(self._count_leases, "two")
6167-        d.addCallback(self._assert_leasecount, 1)
6168-        d.addCallback(self._count_leases, "mutable")
6169-        d.addCallback(self._assert_leasecount, 2)
6170+        d.addCallback(self._assert_leasecount, "one", 2)
6171+        d.addCallback(self._assert_leasecount, "two", 1)
6172+        d.addCallback(self._assert_leasecount, "mutable", 2)
6173 
6174         d.addErrback(self.explain_web_error)
6175         return d
6176hunk ./src/allmydata/test/test_web.py 4989
6177             self.failUnlessReallyEqual(len(units), 4+1)
6178         d.addCallback(_done)
6179 
6180-        d.addCallback(self._count_leases, "root")
6181-        d.addCallback(self._assert_leasecount, 1)
6182-        d.addCallback(self._count_leases, "one")
6183-        d.addCallback(self._assert_leasecount, 1)
6184-        d.addCallback(self._count_leases, "mutable")
6185-        d.addCallback(self._assert_leasecount, 1)
6186+        d.addCallback(self._assert_leasecount, "root", 1)
6187+        d.addCallback(self._assert_leasecount, "one", 1)
6188+        d.addCallback(self._assert_leasecount, "mutable", 1)
6189 
6190         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true")
6191         d.addCallback(_done)
6192hunk ./src/allmydata/test/test_web.py 4996
6193 
6194-        d.addCallback(self._count_leases, "root")
6195-        d.addCallback(self._assert_leasecount, 1)
6196-        d.addCallback(self._count_leases, "one")
6197-        d.addCallback(self._assert_leasecount, 1)
6198-        d.addCallback(self._count_leases, "mutable")
6199-        d.addCallback(self._assert_leasecount, 1)
6200+        d.addCallback(self._assert_leasecount, "root", 1)
6201+        d.addCallback(self._assert_leasecount, "one", 1)
6202+        d.addCallback(self._assert_leasecount, "mutable", 1)
6203 
6204         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true",
6205                       clientnum=1)
6206hunk ./src/allmydata/test/test_web.py 5004
6207         d.addCallback(_done)
6208 
6209-        d.addCallback(self._count_leases, "root")
6210-        d.addCallback(self._assert_leasecount, 2)
6211-        d.addCallback(self._count_leases, "one")
6212-        d.addCallback(self._assert_leasecount, 2)
6213-        d.addCallback(self._count_leases, "mutable")
6214-        d.addCallback(self._assert_leasecount, 2)
6215+        d.addCallback(self._assert_leasecount, "root", 2)
6216+        d.addCallback(self._assert_leasecount, "one", 2)
6217+        d.addCallback(self._assert_leasecount, "mutable", 2)
6218 
6219         d.addErrback(self.explain_web_error)
6220         return d
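
The refactored _assert_leasecount above leans on Twisted's addCallback passing
extra positional arguments through to the callback; the preceding Deferred
result arrives as the 'ignored' parameter. A minimal sketch:

    from twisted.internet import defer

    def _assert_leasecount(ignored, which, expected):
        # 'ignored' receives the prior callback's result; 'which' and
        # 'expected' arrive via the extra arguments given to addCallback.
        assert (which, expected) == ("one", 1)

    d = defer.succeed(None)
    d.addCallback(_assert_leasecount, "one", 1)
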
6221merger 0.0 (
6222hunk ./src/allmydata/uri.py 829
6223+    def is_readonly(self):
6224+        return True
6225+
6226+    def get_readonly(self):
6227+        return self
6228+
6229+
6230hunk ./src/allmydata/uri.py 829
6231+    def is_readonly(self):
6232+        return True
6233+
6234+    def get_readonly(self):
6235+        return self
6236+
6237+
6238)
6239merger 0.0 (
6240hunk ./src/allmydata/uri.py 848
6241+    def is_readonly(self):
6242+        return True
6243+
6244+    def get_readonly(self):
6245+        return self
6246+
6247hunk ./src/allmydata/uri.py 848
6248+    def is_readonly(self):
6249+        return True
6250+
6251+    def get_readonly(self):
6252+        return self
6253+
6254)
6255hunk ./src/allmydata/util/encodingutil.py 221
6256 def quote_path(path, quotemarks=True):
6257     return quote_output("/".join(map(to_str, path)), quotemarks=quotemarks)
6258 
6259+def quote_filepath(fp, quotemarks=True, encoding=None):
6260+    path = fp.path
6261+    if isinstance(path, str):
6262+        try:
6263+            path = path.decode(filesystem_encoding)
6264+        except UnicodeDecodeError:
6265+            return 'b"%s"' % (ESCAPABLE_8BIT.sub(_str_escape, path),)
6266+
6267+    return quote_output(path, quotemarks=quotemarks, encoding=encoding)
6268+
6269 
6270 def unicode_platform():
6271     """
6272hunk ./src/allmydata/util/fileutil.py 5
6273 Futz with files like a pro.
6274 """
6275 
6276-import sys, exceptions, os, stat, tempfile, time, binascii
6277+import errno, sys, exceptions, os, stat, tempfile, time, binascii
6278+
6279+from allmydata.util.assertutil import precondition
6280 
6281 from twisted.python import log
6282hunk ./src/allmydata/util/fileutil.py 10
6283+from twisted.python.filepath import FilePath, UnlistableError
6284 
6285 from pycryptopp.cipher.aes import AES
6286 
6287hunk ./src/allmydata/util/fileutil.py 189
6288             raise tx
6289         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
6290 
6291-def rm_dir(dirname):
6292+def fp_make_dirs(dirfp):
6293+    """
6294+    An idempotent version of FilePath.makedirs().  If the dir already
6295+    exists, do nothing and return without raising an exception.  If this
6296+    call creates the dir, return without raising an exception.  If there is
6297+    an error that prevents creation or if the directory gets deleted after
6298+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
6299+    exists, raise an exception.
6300+    """
6302+    tx = None
6303+    try:
6304+        dirfp.makedirs()
6305+    except OSError, x:
6306+        tx = x
6307+
6308+    if not dirfp.isdir():
6309+        if tx:
6310+            raise tx
6311+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
6312+
6313+def fp_rmdir_if_empty(dirfp):
6314+    """ Remove the directory if it is empty. """
6315+    try:
6316+        os.rmdir(dirfp.path)
6317+    except OSError, e:
6318+        if e.errno != errno.ENOTEMPTY:
6319+            raise
6320+    else:
6321+        dirfp.changed()
6322+
6323+def rmtree(dirname):
6324     """
6325     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
6326     already gone, do nothing and return without raising an exception.  If this
6327hunk ./src/allmydata/util/fileutil.py 239
6328             else:
6329                 remove(fullname)
6330         os.rmdir(dirname)
6331-    except Exception, le:
6332-        # Ignore "No such file or directory"
6333-        if (not isinstance(le, OSError)) or le.args[0] != 2:
6334+    except EnvironmentError, le:
6335+        # Ignore "No such file or directory" (errno 2, or 3 on Windows); collect any other exception.
6336+        if le.args[0] != errno.ENOENT and le.args[0] != 3:
6337             excs.append(le)
6338hunk ./src/allmydata/util/fileutil.py 243
6339+    except Exception, le:
6340+        excs.append(le)
6341 
6342     # Okay, now we've recursively removed everything, ignoring any "No
6343     # such file or directory" errors, and collecting any other errors.
6344hunk ./src/allmydata/util/fileutil.py 256
6345             raise OSError, "Failed to remove dir for unknown reason."
6346         raise OSError, excs
6347 
6348+def fp_remove(fp):
6349+    """
6350+    An idempotent version of shutil.rmtree().  If the file/dir is already
6351+    gone, do nothing and return without raising an exception.  If this call
6352+    removes the file/dir, return without raising an exception.  If there is
6353+    an error that prevents removal, or if a file or directory at the same
6354+    path gets created again by someone else after this deletes it and before
6355+    this checks that it is gone, raise an exception.
6356+    """
6357+    try:
6358+        fp.remove()
6359+    except UnlistableError, e:
6360+        if e.originalException.errno != errno.ENOENT:
6361+            raise
6362+    except OSError, e:
6363+        if e.errno != errno.ENOENT:
6364+            raise
6365+
6366+def rm_dir(dirname):
6367+    # Renamed to be like shutil.rmtree and unlike rmdir.
6368+    return rmtree(dirname)
6369 
6370 def remove_if_possible(f):
6371     try:
6372hunk ./src/allmydata/util/fileutil.py 387
6373         import traceback
6374         traceback.print_exc()
6375 
6376-def get_disk_stats(whichdir, reserved_space=0):
6377+def get_disk_stats(whichdirfp, reserved_space=0):
6378     """Return disk statistics for the storage disk, in the form of a dict
6379     with the following fields.
6380       total:            total bytes on disk
6381hunk ./src/allmydata/util/fileutil.py 408
6382     you can pass how many bytes you would like to leave unused on this
6383     filesystem as reserved_space.
6384     """
6385+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
6386 
6387     if have_GetDiskFreeSpaceExW:
6388         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
6389hunk ./src/allmydata/util/fileutil.py 419
6390         n_free_for_nonroot = c_ulonglong(0)
6391         n_total            = c_ulonglong(0)
6392         n_free_for_root    = c_ulonglong(0)
6393-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
6394-                                               byref(n_total),
6395-                                               byref(n_free_for_root))
6396+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
6397+                                                      byref(n_total),
6398+                                                      byref(n_free_for_root))
6399         if retval == 0:
6400             raise OSError("Windows error %d attempting to get disk statistics for %r"
6401hunk ./src/allmydata/util/fileutil.py 424
6402-                          % (GetLastError(), whichdir))
6403+                          % (GetLastError(), whichdirfp.path))
6404         free_for_nonroot = n_free_for_nonroot.value
6405         total            = n_total.value
6406         free_for_root    = n_free_for_root.value
6407hunk ./src/allmydata/util/fileutil.py 433
6408         # <http://docs.python.org/library/os.html#os.statvfs>
6409         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
6410         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
6411-        s = os.statvfs(whichdir)
6412+        s = os.statvfs(whichdirfp.path)
6413 
6414         # on my mac laptop:
6415         #  statvfs(2) is a wrapper around statfs(2).
6416hunk ./src/allmydata/util/fileutil.py 460
6417              'avail': avail,
6418            }
6419 
6420-def get_available_space(whichdir, reserved_space):
6421+def get_available_space(whichdirfp, reserved_space):
6422     """Returns available space for share storage in bytes, or None if no
6423     API to get this information is available.
6424 
6425hunk ./src/allmydata/util/fileutil.py 472
6426     you can pass how many bytes you would like to leave unused on this
6427     filesystem as reserved_space.
6428     """
6429+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
6430     try:
6431hunk ./src/allmydata/util/fileutil.py 474
6432-        return get_disk_stats(whichdir, reserved_space)['avail']
6433+        return get_disk_stats(whichdirfp, reserved_space)['avail']
6434     except AttributeError:
6435         return None
6436hunk ./src/allmydata/util/fileutil.py 477
6437-    except EnvironmentError:
6438-        log.msg("OS call to get disk statistics failed")
6439+
6440+
6441+def get_used_space(fp):
6442+    if fp is None:
6443         return 0
6444hunk ./src/allmydata/util/fileutil.py 482
6445+    try:
6446+        s = os.stat(fp.path)
6447+    except EnvironmentError:
6448+        if not fp.exists():
6449+            return 0
6450+        raise
6451+    else:
6452+        # POSIX defines st_blocks (originally a BSDism):
6453+        #   <http://pubs.opengroup.org/onlinepubs/009695399/basedefs/sys/stat.h.html>
6454+        # but does not require stat() to give it a "meaningful value"
6455+        #   <http://pubs.opengroup.org/onlinepubs/009695399/functions/stat.html>
6456+        # and says:
6457+        #   "The unit for the st_blocks member of the stat structure is not defined
6458+        #    within IEEE Std 1003.1-2001. In some implementations it is 512 bytes.
6459+        #    It may differ on a file system basis. There is no correlation between
6460+        #    values of the st_blocks and st_blksize, and the f_bsize (from <sys/statvfs.h>)
6461+        #    structure members."
6462+        #
6463+        # The Linux docs define it as "the number of blocks allocated to the file,
6464+        # [in] 512-byte units." It is also defined that way on MacOS X. Python does
6465+        # not set the attribute on Windows.
6466+        #
6467+        # We consider platforms that define st_blocks but give it a wrong value, or
6468+        # measure it in a unit other than 512 bytes, to be broken. See also
6469+        # <http://bugs.python.org/issue12350>.
6470+
6471+        if hasattr(s, 'st_blocks'):
6472+            return s.st_blocks * 512
6473+        else:
6474+            return s.st_size
6475}
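
A minimal usage sketch of the FilePath-based fileutil helpers added in the
patch above. The directory path is a hypothetical example, and note that
get_available_space() may return None on platforms with no usable API:

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    base = FilePath("/tmp/fileutil-demo")    # hypothetical directory
    fileutil.fp_make_dirs(base)              # idempotent: ok if it already exists
    share = base.child("0")
    share.setContent("some share data")
    print fileutil.get_used_space(share)     # st_blocks*512 where available, else st_size
    print fileutil.get_available_space(base, reserved_space=2**20)
    fileutil.fp_remove(share)                # idempotent: a missing path is ignored
    fileutil.fp_rmdir_if_empty(base)         # removes base only because it is now empty
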
6476[Work-in-progress, includes fix to bug involving BucketWriter. refs #999
6477david-sarah@jacaranda.org**20110920033803
6478 Ignore-this: 64e9e019421454e4d08141d10b6e4eed
6479] {
6480hunk ./src/allmydata/client.py 9
6481 from twisted.internet import reactor, defer
6482 from twisted.application import service
6483 from twisted.application.internet import TimerService
6484+from twisted.python.filepath import FilePath
6485 from foolscap.api import Referenceable
6486 from pycryptopp.publickey import rsa
6487 
6488hunk ./src/allmydata/client.py 15
6489 import allmydata
6490 from allmydata.storage.server import StorageServer
6491+from allmydata.storage.backends.disk.disk_backend import DiskBackend
6492 from allmydata import storage_client
6493 from allmydata.immutable.upload import Uploader
6494 from allmydata.immutable.offloaded import Helper
6495hunk ./src/allmydata/client.py 213
6496             return
6497         readonly = self.get_config("storage", "readonly", False, boolean=True)
6498 
6499-        storedir = os.path.join(self.basedir, self.STOREDIR)
6500+        storedir = FilePath(self.basedir).child(self.STOREDIR)
6501 
6502         data = self.get_config("storage", "reserved_space", None)
6503         reserved = None
6504hunk ./src/allmydata/client.py 255
6505             'cutoff_date': cutoff_date,
6506             'sharetypes': tuple(sharetypes),
6507         }
6508-        ss = StorageServer(storedir, self.nodeid,
6509-                           reserved_space=reserved,
6510-                           discard_storage=discard,
6511-                           readonly_storage=readonly,
6512+
6513+        backend = DiskBackend(storedir, readonly=readonly, reserved_space=reserved,
6514+                              discard_storage=discard)
6515+        ss = StorageServer(nodeid, backend, storedir,
6516                            stats_provider=self.stats_provider,
6517                            expiration_policy=expiration_policy)
6518         self.add_service(ss)
6519hunk ./src/allmydata/interfaces.py 348
6520 
6521     def get_shares():
6522         """
6523-        Generates the IStoredShare objects held in this shareset.
6524+        Generates IStoredShare objects for all completed shares in this shareset.
6525         """
6526 
6527     def has_incoming(shnum):
6528hunk ./src/allmydata/storage/backends/base.py 69
6529         # def _create_mutable_share(self, storageserver, shnum, write_enabler):
6530         #     """create a mutable share with the given shnum and write_enabler"""
6531 
6532-        # secrets might be a triple with cancel_secret in secrets[2], but if
6533-        # so we ignore the cancel_secret.
6534         write_enabler = secrets[0]
6535         renew_secret = secrets[1]
6536hunk ./src/allmydata/storage/backends/base.py 71
6537+        cancel_secret = '\x00'*32
6538+        if len(secrets) > 2:
6539+            cancel_secret = secrets[2]
6540 
6541         si_s = self.get_storage_index_string()
6542         shares = {}
6543hunk ./src/allmydata/storage/backends/base.py 110
6544             read_data[shnum] = share.readv(read_vector)
6545 
6546         ownerid = 1 # TODO
6547-        lease_info = LeaseInfo(ownerid, renew_secret,
6548+        lease_info = LeaseInfo(ownerid, renew_secret, cancel_secret,
6549                                expiration_time, storageserver.get_serverid())
6550 
6551         if testv_is_good:
6552hunk ./src/allmydata/storage/backends/disk/disk_backend.py 34
6553     return newfp.child(sia)
6554 
6555 
6556-def get_share(fp):
6557+def get_share(storageindex, shnum, fp):
6558     f = fp.open('rb')
6559     try:
6560         prefix = f.read(32)
6561hunk ./src/allmydata/storage/backends/disk/disk_backend.py 42
6562         f.close()
6563 
6564     if prefix == MutableDiskShare.MAGIC:
6565-        return MutableDiskShare(fp)
6566+        return MutableDiskShare(storageindex, shnum, fp)
6567     else:
6568         # assume it's immutable
6569hunk ./src/allmydata/storage/backends/disk/disk_backend.py 45
6570-        return ImmutableDiskShare(fp)
6571+        return ImmutableDiskShare(storageindex, shnum, fp)
6572 
6573 
6574 class DiskBackend(Backend):
6575hunk ./src/allmydata/storage/backends/disk/disk_backend.py 174
6576                 if not NUM_RE.match(shnumstr):
6577                     continue
6578                 sharehome = self._sharehomedir.child(shnumstr)
6579-                yield self.get_share(sharehome)
6580+                yield get_share(self.get_storage_index(), int(shnumstr), sharehome)
6581         except UnlistableError:
6582             # There is no shares directory at all.
6583             pass
6584hunk ./src/allmydata/storage/backends/disk/disk_backend.py 185
6585         return self._incominghomedir.child(str(shnum)).exists()
6586 
6587     def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
6588-        sharehome = self._sharehomedir.child(str(shnum))
6589+        finalhome = self._sharehomedir.child(str(shnum))
6590         incominghome = self._incominghomedir.child(str(shnum))
6591hunk ./src/allmydata/storage/backends/disk/disk_backend.py 187
6592-        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
6593-                                   max_size=max_space_per_bucket, create=True)
6594+        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, incominghome, finalhome,
6595+                                   max_size=max_space_per_bucket)
6596         bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
6597         if self._discard_storage:
6598             bw.throw_out_all_data = True
6599hunk ./src/allmydata/storage/backends/disk/disk_backend.py 198
6600         fileutil.fp_make_dirs(self._sharehomedir)
6601         sharehome = self._sharehomedir.child(str(shnum))
6602         serverid = storageserver.get_serverid()
6603-        return create_mutable_disk_share(sharehome, serverid, write_enabler, storageserver)
6604+        return create_mutable_disk_share(self.get_storage_index(), shnum, sharehome, serverid, write_enabler, storageserver)
6605 
6606     def _clean_up_after_unlink(self):
6607         fileutil.fp_rmdir_if_empty(self._sharehomedir)
6608hunk ./src/allmydata/storage/backends/disk/immutable.py 48
6609     LEASE_SIZE = struct.calcsize(">L32s32sL")
6610 
6611 
6612-    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
6613-        """ If max_size is not None then I won't allow more than
6614-        max_size to be written to me. If create=True then max_size
6615-        must not be None. """
6616-        precondition((max_size is not None) or (not create), max_size, create)
6617+    def __init__(self, storageindex, shnum, home, finalhome=None, max_size=None):
6618+        """
6619+        If max_size is not None then I won't allow more than max_size to be written to me.
6620+        If finalhome is not None (meaning that we are creating the share) then max_size
6621+        must not be None.
6622+        """
6623+        precondition((max_size is not None) or (finalhome is None), max_size, finalhome)
6624         self._storageindex = storageindex
6625         self._max_size = max_size
6626hunk ./src/allmydata/storage/backends/disk/immutable.py 57
6627-        self._incominghome = incominghome
6628-        self._home = finalhome
6629+
6630+        # If we are creating the share, _finalhome refers to the final path and
6631+        # _home to the incoming path. Otherwise, _finalhome is None.
6632+        self._finalhome = finalhome
6633+        self._home = home
6634         self._shnum = shnum
6635hunk ./src/allmydata/storage/backends/disk/immutable.py 63
6636-        if create:
6637-            # touch the file, so later callers will see that we're working on
6638+
6639+        if self._finalhome is not None:
6640+            # Touch the file, so later callers will see that we're working on
6641             # it. Also construct the metadata.
6642hunk ./src/allmydata/storage/backends/disk/immutable.py 67
6643-            assert not finalhome.exists()
6644-            fp_make_dirs(self._incominghome.parent())
6645+            assert not self._finalhome.exists()
6646+            fp_make_dirs(self._home.parent())
6647             # The second field -- the four-byte share data length -- is no
6648             # longer used as of Tahoe v1.3.0, but we continue to write it in
6649             # there in case someone downgrades a storage server from >=
6650hunk ./src/allmydata/storage/backends/disk/immutable.py 78
6651             # the largest length that can fit into the field. That way, even
6652             # if this does happen, the old < v1.3.0 server will still allow
6653             # clients to read the first part of the share.
6654-            self._incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6655+            self._home.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6656             self._lease_offset = max_size + 0x0c
6657             self._num_leases = 0
6658         else:
6659hunk ./src/allmydata/storage/backends/disk/immutable.py 101
6660                 % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
6661 
6662     def close(self):
6663-        fileutil.fp_make_dirs(self._home.parent())
6664-        self._incominghome.moveTo(self._home)
6665-        try:
6666-            # self._incominghome is like storage/shares/incoming/ab/abcde/4 .
6667-            # We try to delete the parent (.../ab/abcde) to avoid leaving
6668-            # these directories lying around forever, but the delete might
6669-            # fail if we're working on another share for the same storage
6670-            # index (like ab/abcde/5). The alternative approach would be to
6671-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
6672-            # ShareWriter), each of which is responsible for a single
6673-            # directory on disk, and have them use reference counting of
6674-            # their children to know when they should do the rmdir. This
6675-            # approach is simpler, but relies on os.rmdir refusing to delete
6676-            # a non-empty directory. Do *not* use fileutil.fp_remove() here!
6677-            fileutil.fp_rmdir_if_empty(self._incominghome.parent())
6678-            # we also delete the grandparent (prefix) directory, .../ab ,
6679-            # again to avoid leaving directories lying around. This might
6680-            # fail if there is another bucket open that shares a prefix (like
6681-            # ab/abfff).
6682-            fileutil.fp_rmdir_if_empty(self._incominghome.parent().parent())
6683-            # we leave the great-grandparent (incoming/) directory in place.
6684-        except EnvironmentError:
6685-            # ignore the "can't rmdir because the directory is not empty"
6686-            # exceptions, those are normal consequences of the
6687-            # above-mentioned conditions.
6688-            pass
6689-        pass
6690+        fileutil.fp_make_dirs(self._finalhome.parent())
6691+        self._home.moveTo(self._finalhome)
6692+
6693+        # self._home is like storage/shares/incoming/ab/abcde/4 .
6694+        # We try to delete the parent (.../ab/abcde) to avoid leaving
6695+        # these directories lying around forever, but the delete might
6696+        # fail if we're working on another share for the same storage
6697+        # index (like ab/abcde/5). The alternative approach would be to
6698+        # use a hierarchy of objects (PrefixHolder, BucketHolder,
6699+        # ShareWriter), each of which is responsible for a single
6700+        # directory on disk, and have them use reference counting of
6701+        # their children to know when they should do the rmdir. This
6702+        # approach is simpler, but relies on os.rmdir (used by
6703+        # fp_rmdir_if_empty) refusing to delete a non-empty directory.
6704+        # Do *not* use fileutil.fp_remove() here!
6705+        parent = self._home.parent()
6706+        fileutil.fp_rmdir_if_empty(parent)
6707+
6708+        # we also delete the grandparent (prefix) directory, .../ab ,
6709+        # again to avoid leaving directories lying around. This might
6710+        # fail if there is another bucket open that shares a prefix (like
6711+        # ab/abfff).
6712+        fileutil.fp_rmdir_if_empty(parent.parent())
6713+
6714+        # we leave the great-grandparent (incoming/) directory in place.
6715+
6716+        # allow lease changes after closing.
6717+        self._home = self._finalhome
6718+        self._finalhome = None
6719 
6720     def get_used_space(self):
6721hunk ./src/allmydata/storage/backends/disk/immutable.py 132
6722-        return (fileutil.get_used_space(self._home) +
6723-                fileutil.get_used_space(self._incominghome))
6724+        return (fileutil.get_used_space(self._finalhome) +
6725+                fileutil.get_used_space(self._home))
6726 
6727     def get_storage_index(self):
6728         return self._storageindex
6729hunk ./src/allmydata/storage/backends/disk/immutable.py 175
6730         precondition(offset >= 0, offset)
6731         if self._max_size is not None and offset+length > self._max_size:
6732             raise DataTooLargeError(self._max_size, offset, length)
6733-        f = self._incominghome.open(mode='rb+')
6734+        f = self._home.open(mode='rb+')
6735         try:
6736             real_offset = self._data_offset+offset
6737             f.seek(real_offset)
6738hunk ./src/allmydata/storage/backends/disk/immutable.py 205
6739 
6740     # These lease operations are intended for use by disk_backend.py.
6741     # Other clients should not depend on the fact that the disk backend
6742-    # stores leases in share files.
6743+    # stores leases in share files. XXX bucket.py also relies on this.
6744 
6745     def get_leases(self):
6746         """Yields a LeaseInfo instance for all leases."""
6747hunk ./src/allmydata/storage/backends/disk/immutable.py 221
6748             f.close()
6749 
6750     def add_lease(self, lease_info):
6751-        f = self._incominghome.open(mode='rb')
6752+        f = self._home.open(mode='rb+')
6753         try:
6754             num_leases = self._read_num_leases(f)
6755hunk ./src/allmydata/storage/backends/disk/immutable.py 224
6756-        finally:
6757-            f.close()
6758-        f = self._home.open(mode='wb+')
6759-        try:
6760             self._write_lease_record(f, num_leases, lease_info)
6761             self._write_num_leases(f, num_leases+1)
6762         finally:
6763hunk ./src/allmydata/storage/backends/disk/mutable.py 440
6764         pass
6765 
6766 
6767-def create_mutable_disk_share(fp, serverid, write_enabler, parent):
6768-    ms = MutableDiskShare(fp, parent)
6769+def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent):
6770+    ms = MutableDiskShare(storageindex, shnum, fp, parent)
6771     ms.create(serverid, write_enabler)
6772     del ms
6773hunk ./src/allmydata/storage/backends/disk/mutable.py 444
6774-    return MutableDiskShare(fp, parent)
6775+    return MutableDiskShare(storageindex, shnum, fp, parent)
6776hunk ./src/allmydata/storage/bucket.py 44
6777         start = time.time()
6778 
6779         self._share.close()
6780-        filelen = self._share.stat()
6781+        # XXX should this be self._share.get_used_space() ?
6782+        consumed_size = self._share.get_size()
6783         self._share = None
6784 
6785         self.closed = True
6786hunk ./src/allmydata/storage/bucket.py 51
6787         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
6788 
6789-        self.ss.bucket_writer_closed(self, filelen)
6790+        self.ss.bucket_writer_closed(self, consumed_size)
6791         self.ss.add_latency("close", time.time() - start)
6792         self.ss.count("close")
6793 
6794hunk ./src/allmydata/storage/server.py 182
6795                                 renew_secret, cancel_secret,
6796                                 sharenums, allocated_size,
6797                                 canary, owner_num=0):
6798-        # cancel_secret is no longer used.
6799         # owner_num is not for clients to set, but rather it should be
6800         # curried into a StorageServer instance dedicated to a particular
6801         # owner.
6802hunk ./src/allmydata/storage/server.py 195
6803         # Note that the lease should not be added until the BucketWriter
6804         # has been closed.
6805         expire_time = time.time() + 31*24*60*60
6806-        lease_info = LeaseInfo(owner_num, renew_secret,
6807+        lease_info = LeaseInfo(owner_num, renew_secret, cancel_secret,
6808                                expire_time, self._serverid)
6809 
6810         max_space_per_bucket = allocated_size
6811hunk ./src/allmydata/test/no_network.py 349
6812         return self.g.servers_by_number[i]
6813 
6814     def get_serverdir(self, i):
6815-        return self.g.servers_by_number[i].backend.storedir
6816+        return self.g.servers_by_number[i].backend._storedir
6817 
6818     def remove_server(self, i):
6819         self.g.remove_server(self.g.servers_by_number[i].get_serverid())
6820hunk ./src/allmydata/test/no_network.py 357
6821     def iterate_servers(self):
6822         for i in sorted(self.g.servers_by_number.keys()):
6823             ss = self.g.servers_by_number[i]
6824-            yield (i, ss, ss.backend.storedir)
6825+            yield (i, ss, ss.backend._storedir)
6826 
6827     def find_uri_shares(self, uri):
6828         si = tahoe_uri.from_string(uri).get_storage_index()
6829hunk ./src/allmydata/test/no_network.py 384
6830         return shares
6831 
6832     def copy_share(self, from_share, uri, to_server):
6833-        si = uri.from_string(self.uri).get_storage_index()
6834+        si = tahoe_uri.from_string(uri).get_storage_index()
6835         (i_shnum, i_serverid, i_sharefp) = from_share
6836         shares_dir = to_server.backend.get_shareset(si)._sharehomedir
6837         i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
6838hunk ./src/allmydata/test/test_download.py 127
6839 
6840         return d
6841 
6842-    def _write_shares(self, uri, shares):
6843-        si = uri.from_string(uri).get_storage_index()
6844+    def _write_shares(self, fileuri, shares):
6845+        si = uri.from_string(fileuri).get_storage_index()
6846         for i in shares:
6847             shares_for_server = shares[i]
6848             for shnum in shares_for_server:
6849hunk ./src/allmydata/test/test_hung_server.py 36
6850 
6851     def _hang(self, servers, **kwargs):
6852         for ss in servers:
6853-            self.g.hang_server(ss.get_serverid(), **kwargs)
6854+            self.g.hang_server(ss.original.get_serverid(), **kwargs)
6855 
6856     def _unhang(self, servers, **kwargs):
6857         for ss in servers:
6858hunk ./src/allmydata/test/test_hung_server.py 40
6859-            self.g.unhang_server(ss.get_serverid(), **kwargs)
6860+            self.g.unhang_server(ss.original.get_serverid(), **kwargs)
6861 
6862     def _hang_shares(self, shnums, **kwargs):
6863         # hang all servers who are holding the given shares
6864hunk ./src/allmydata/test/test_hung_server.py 52
6865                     hung_serverids.add(i_serverid)
6866 
6867     def _delete_all_shares_from(self, servers):
6868-        serverids = [ss.get_serverid() for ss in servers]
6869+        serverids = [ss.original.get_serverid() for ss in servers]
6870         for (i_shnum, i_serverid, i_sharefp) in self.shares:
6871             if i_serverid in serverids:
6872                 i_sharefp.remove()
6873hunk ./src/allmydata/test/test_hung_server.py 58
6874 
6875     def _corrupt_all_shares_in(self, servers, corruptor_func):
6876-        serverids = [ss.get_serverid() for ss in servers]
6877+        serverids = [ss.original.get_serverid() for ss in servers]
6878         for (i_shnum, i_serverid, i_sharefp) in self.shares:
6879             if i_serverid in serverids:
6880                 self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor_func)
6881hunk ./src/allmydata/test/test_hung_server.py 64
6882 
6883     def _copy_all_shares_from(self, from_servers, to_server):
6884-        serverids = [ss.get_serverid() for ss in from_servers]
6885+        serverids = [ss.original.get_serverid() for ss in from_servers]
6886         for (i_shnum, i_serverid, i_sharefp) in self.shares:
6887             if i_serverid in serverids:
6888                 self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
6889hunk ./src/allmydata/test/test_mutable.py 2991
6890             fso = debug.FindSharesOptions()
6891             storage_index = base32.b2a(n.get_storage_index())
6892             fso.si_s = storage_index
6893-            fso.nodedirs = [unicode(os.path.dirname(os.path.abspath(storedir)))
6894+            fso.nodedirs = [unicode(storedir.parent().path)
6895                             for (i,ss,storedir)
6896                             in self.iterate_servers()]
6897             fso.stdout = StringIO()
6898hunk ./src/allmydata/test/test_upload.py 818
6899         if share_number is not None:
6900             self._copy_share_to_server(share_number, server_number)
6901 
6902-
6903     def _copy_share_to_server(self, share_number, server_number):
6904         ss = self.g.servers_by_number[server_number]
6905hunk ./src/allmydata/test/test_upload.py 820
6906-        self.copy_share(self.shares[share_number], ss)
6907+        self.copy_share(self.shares[share_number], self.uri, ss)
6908 
6909     def _setup_grid(self):
6910         """
6911}
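
For orientation, the patch above changes how a storage server is wired
together. A rough sketch, with illustrative stand-ins for values a real node
reads from tahoe.cfg and its node ID (the patched client also passes
stats_provider and expiration_policy, omitted here on the assumption that
they are optional):

    from twisted.python.filepath import FilePath
    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    storedir = FilePath("/var/tahoe/storage")   # hypothetical store directory
    nodeid = "\x00" * 20                        # placeholder 20-byte server ID
    backend = DiskBackend(storedir, readonly=False, reserved_space=10**9,
                          discard_storage=False)
    ss = StorageServer(nodeid, backend, storedir)
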
6912[docs/backends: document the configuration options for the pluggable backends scheme. refs #999
6913david-sarah@jacaranda.org**20110920171737
6914 Ignore-this: 5947e864682a43cb04e557334cda7c19
6915] {
6916adddir ./docs/backends
6917addfile ./docs/backends/S3.rst
6918hunk ./docs/backends/S3.rst 1
6919+====================================================
6920+Storing Shares in Amazon Simple Storage Service (S3)
6921+====================================================
6922+
6923+S3 is a commercial storage service provided by Amazon, described at
6924+`<https://aws.amazon.com/s3/>`_.
6925+
6926+The Tahoe-LAFS storage server can be configured to store its shares in
6927+an S3 bucket, rather than on local filesystem. To enable this, add the
6928+following keys to the server's ``tahoe.cfg`` file:
6929+
6930+``[storage]``
6931+
6932+``backend = s3``
6933+
6934+    This turns off the local filesystem backend and enables use of S3.
6935+
6936+``s3.access_key_id = (string, required)``
6937+``s3.secret_access_key = (string, required)``
6938+
6939+    These two settings give the storage server permission to access your
6940+    Amazon Web Services account, allowing it to upload and download shares
6941+    from S3.
6942+
6943+``s3.bucket = (string, required)``
6944+
6945+    This controls which bucket will be used to hold shares. The Tahoe-LAFS
6946+    storage server will only modify and access objects in the configured S3
6947+    bucket.
6948+
6949+``s3.url = (URL string, optional)``
6950+
6951+    This URL tells the storage server how to access the S3 service. It
6952+    defaults to ``http://s3.amazonaws.com``, but by setting it to something
6953+    else, you may be able to use some other S3-like service if it is
6954+    sufficiently compatible.
6955+
6956+``s3.max_space = (str, optional)``
6957+
6958+    This tells the server to limit how much space can be used in the S3
6959+    bucket. Before each share is uploaded, the server will ask S3 for the
6960+    current bucket usage, and will only accept the share if it does not cause
6961+    the usage to grow above this limit.
6962+
6963+    The string contains a number, with an optional case-insensitive scale
6964+    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
6965+    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
6966+    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
6967+    thing.
6968+
6969+    If ``s3.max_space`` is omitted, the default behavior is to allow
6970+    unlimited usage.
6971+
6972+
6973+Once configured, the WUI "storage server" page will provide information about
6974+how much space is being used and how many shares are being stored.
6975+
6976+
6977+Issues
6978+------
6979+
6980+Objects in an S3 bucket cannot be read for free. As a result, when Tahoe-LAFS
6981+is configured to store shares in S3 rather than on local disk, some common
6982+operations may behave differently:
6983+
6984+* Lease crawling/expiration is not yet implemented. As a result, shares will
6985+  be retained forever, and the Storage Server status web page will not show
6986+  information about the number of mutable/immutable shares present.
6987+
6988+* Enabling ``s3.max_space`` causes an extra S3 usage query to be sent for
6989+  each share upload, causing the upload process to run slightly slower and
6990+  incur more S3 request charges.
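
An illustrative tahoe.cfg stanza using the keys documented above; the
credentials and bucket name are placeholders, not working values:

    [storage]
    backend = s3
    s3.access_key_id = AKIAIOSFODNN7EXAMPLE
    s3.secret_access_key = (your secret key)
    s3.bucket = example-tahoe-shares
    s3.max_space = 500G
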
6991addfile ./docs/backends/disk.rst
6992hunk ./docs/backends/disk.rst 1
6993+====================================
6994+Storing Shares on a Local Filesystem
6995+====================================
6996+
6997+The "disk" backend stores shares on the local filesystem. Versions of
6998+Tahoe-LAFS <= 1.9.0 always stored shares in this way.
6999+
7000+``[storage]``
7001+
7002+``backend = disk``
7003+
7004+    This enables use of the disk backend, and is the default.
7005+
7006+``reserved_space = (str, optional)``
7007+
7008+    If provided, this value defines how much disk space is reserved: the
7009+    storage server will not accept any share that causes the amount of free
7010+    disk space to drop below this value. (The free space is measured by a
7011+    call to statvfs(2) on Unix, or GetDiskFreeSpaceEx on Windows, and is the
7012+    space available to the user account under which the storage server runs.)
7013+
7014+    This string contains a number, with an optional case-insensitive scale
7015+    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
7016+    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
7017+    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
7018+    thing.
7019+
7020+    "``tahoe create-node``" generates a tahoe.cfg with
7021+    "``reserved_space=1G``", but you may wish to raise, lower, or remove the
7022+    reservation to suit your needs.
7023+
7024+``expire.enabled =``
7025+
7026+``expire.mode =``
7027+
7028+``expire.override_lease_duration =``
7029+
7030+``expire.cutoff_date =``
7031+
7032+``expire.immutable =``
7033+
7034+``expire.mutable =``
7035+
7036+    These settings control garbage collection, causing the server to
7037+    delete shares that no longer have an up-to-date lease on them. Please
7038+    see `<garbage-collection.rst>`_ for full details.
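
The space syntax described above (and shared by ``s3.max_space``) can be
restated as a small parser sketch; Tahoe has its own parsing code, so this is
only an illustration of the rules, not the real implementation:

    import re

    def parse_space(s):
        # number + optional case-insensitive K/M/G scale + optional B/iB suffix;
        # an "iB" suffix selects powers of 1024, otherwise powers of 1000 apply.
        m = re.match(r"^(\d+)\s*([kmg]?)(i?b?)$", s.strip().lower())
        if not m:
            raise ValueError("not a quantity of space: %r" % (s,))
        digits, scale, suffix = m.groups()
        base = 1024 if suffix == "ib" else 1000
        return int(digits) * base ** {"": 0, "k": 1, "m": 2, "g": 3}[scale]

    # "100MB", "100M", "100000000B", "100000000", "100000kb" -> 100000000
    # "1MiB", "1024KiB", "1048576B"                           -> 1048576
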
7039hunk ./docs/configuration.rst 412
7040     <http://tahoe-lafs.org/trac/tahoe-lafs/ticket/390>`_ for the current
7041     status of this bug. The default value is ``False``.
7042 
7043-``reserved_space = (str, optional)``
7044+``backend = (string, optional)``
7045 
7046hunk ./docs/configuration.rst 414
7047-    If provided, this value defines how much disk space is reserved: the
7048-    storage server will not accept any share that causes the amount of free
7049-    disk space to drop below this value. (The free space is measured by a
7050-    call to statvfs(2) on Unix, or GetDiskFreeSpaceEx on Windows, and is the
7051-    space available to the user account under which the storage server runs.)
7052+    Storage servers can store shares in different "backends". Clients
7053+    need not be aware of which backend is used by a server. The default
7054+    value is ``disk``.
7055 
7056hunk ./docs/configuration.rst 418
7057-    This string contains a number, with an optional case-insensitive scale
7058-    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
7059-    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
7060-    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
7061-    thing.
7062+``backend = disk``
7063 
7064hunk ./docs/configuration.rst 420
7065-    "``tahoe create-node``" generates a tahoe.cfg with
7066-    "``reserved_space=1G``", but you may wish to raise, lower, or remove the
7067-    reservation to suit your needs.
7068+    The default is to store shares on the local filesystem (in
7069+    BASEDIR/storage/shares/). For configuration details (including how to
7070+    reserve a minimum amount of free space), see `<backends/disk.rst>`_.
7071 
7072hunk ./docs/configuration.rst 424
7073-``expire.enabled =``
7074+``backend = S3``
7075 
7076hunk ./docs/configuration.rst 426
7077-``expire.mode =``
7078-
7079-``expire.override_lease_duration =``
7080-
7081-``expire.cutoff_date =``
7082-
7083-``expire.immutable =``
7084-
7085-``expire.mutable =``
7086-
7087-    These settings control garbage collection, in which the server will
7088-    delete shares that no longer have an up-to-date lease on them. Please see
7089-    `<garbage-collection.rst>`_ for full details.
7090+    The storage server can store all shares to an Amazon Simple Storage
7091+    Service (S3) bucket. For configuration details, see `<backends/S3.rst>`_.
7092 
7093 
7094 Running A Helper
7095}
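
Taken together with the configuration.rst changes above, backend selection in
tahoe.cfg looks roughly like this (illustrative values; reserved_space = 1G
matches what "tahoe create-node" generates):

    [storage]
    backend = disk
    reserved_space = 1G
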
7096[Fix some incorrect attribute accesses. refs #999
7097david-sarah@jacaranda.org**20110921031207
7098 Ignore-this: f1ea4c3ea191f6d4b719afaebd2b2bcd
7099] {
7100hunk ./src/allmydata/client.py 258
7101 
7102         backend = DiskBackend(storedir, readonly=readonly, reserved_space=reserved,
7103                               discard_storage=discard)
7104-        ss = StorageServer(nodeid, backend, storedir,
7105+        ss = StorageServer(self.nodeid, backend, storedir,
7106                            stats_provider=self.stats_provider,
7107                            expiration_policy=expiration_policy)
7108         self.add_service(ss)
7109hunk ./src/allmydata/interfaces.py 449
7110         Returns the storage index.
7111         """
7112 
7113+    def get_storage_index_string():
7114+        """
7115+        Returns the base32-encoded storage index.
7116+        """
7117+
7118     def get_shnum():
7119         """
7120         Returns the share number.
7121hunk ./src/allmydata/storage/backends/disk/immutable.py 138
7122     def get_storage_index(self):
7123         return self._storageindex
7124 
7125+    def get_storage_index_string(self):
7126+        return si_b2a(self._storageindex)
7127+
7128     def get_shnum(self):
7129         return self._shnum
7130 
7131hunk ./src/allmydata/storage/backends/disk/mutable.py 119
7132     def get_storage_index(self):
7133         return self._storageindex
7134 
7135+    def get_storage_index_string(self):
7136+        return si_b2a(self._storageindex)
7137+
7138     def get_shnum(self):
7139         return self._shnum
7140 
7141hunk ./src/allmydata/storage/bucket.py 86
7142     def __init__(self, ss, share):
7143         self.ss = ss
7144         self._share = share
7145-        self.storageindex = share.storageindex
7146-        self.shnum = share.shnum
7147+        self.storageindex = share.get_storage_index()
7148+        self.shnum = share.get_shnum()
7149 
7150     def __repr__(self):
7151         return "<%s %s %s>" % (self.__class__.__name__,
7152hunk ./src/allmydata/storage/expirer.py 6
7153 from twisted.python import log as twlog
7154 
7155 from allmydata.storage.crawler import ShareCrawler
7156-from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
7157+from allmydata.storage.common import UnknownMutableContainerVersionError, \
7158      UnknownImmutableContainerVersionError
7159 
7160 
7161hunk ./src/allmydata/storage/expirer.py 124
7162                     struct.error):
7163                 twlog.msg("lease-checker error processing %r" % (share,))
7164                 twlog.err()
7165-                which = (si_b2a(share.storageindex), share.get_shnum())
7166+                which = (share.get_storage_index_string(), share.get_shnum())
7167                 self.state["cycle-to-date"]["corrupt-shares"].append(which)
7168                 wks = (1, 1, 1, "unknown")
7169             would_keep_shares.append(wks)
7170hunk ./src/allmydata/storage/server.py 221
7171         alreadygot = set()
7172         for share in shareset.get_shares():
7173             share.add_or_renew_lease(lease_info)
7174-            alreadygot.add(share.shnum)
7175+            alreadygot.add(share.get_shnum())
7176 
7177         for shnum in sharenums - alreadygot:
7178             if shareset.has_incoming(shnum):
7179hunk ./src/allmydata/storage/server.py 324
7180 
7181         try:
7182             shareset = self.backend.get_shareset(storageindex)
7183-            return shareset.readv(self, shares, readv)
7184+            return shareset.readv(shares, readv)
7185         finally:
7186             self.add_latency("readv", time.time() - start)
7187 
7188hunk ./src/allmydata/storage/shares.py 1
7189-#! /usr/bin/python
7190-
7191-from allmydata.storage.mutable import MutableShareFile
7192-from allmydata.storage.immutable import ShareFile
7193-
7194-def get_share_file(filename):
7195-    f = open(filename, "rb")
7196-    prefix = f.read(32)
7197-    f.close()
7198-    if prefix == MutableShareFile.MAGIC:
7199-        return MutableShareFile(filename)
7200-    # otherwise assume it's immutable
7201-    return ShareFile(filename)
7202-
7203rmfile ./src/allmydata/storage/shares.py
7204hunk ./src/allmydata/test/no_network.py 387
7205         si = tahoe_uri.from_string(uri).get_storage_index()
7206         (i_shnum, i_serverid, i_sharefp) = from_share
7207         shares_dir = to_server.backend.get_shareset(si)._sharehomedir
7208+        fileutil.fp_make_dirs(shares_dir)
7209         i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
7210 
7211     def restore_all_shares(self, shares):
7212hunk ./src/allmydata/test/no_network.py 391
7213-        for share, data in shares.items():
7214-            share.home.setContent(data)
7215+        for sharepath, data in shares.items():
7216+            FilePath(sharepath).setContent(data)
7217 
7218     def delete_share(self, (shnum, serverid, sharefp)):
7219         sharefp.remove()
7220hunk ./src/allmydata/test/test_upload.py 744
7221         servertoshnums = {} # k: server, v: set(shnum)
7222 
7223         for i, c in self.g.servers_by_number.iteritems():
7224-            for (dirp, dirns, fns) in os.walk(c.sharedir):
7225+            for (dirp, dirns, fns) in os.walk(c.backend._sharedir.path):
7226                 for fn in fns:
7227                     try:
7228                         sharenum = int(fn)
7229}
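
A small sketch of the accessor style these fixes converge on: code outside a
backend calls the IStoredShare methods added above, such as
get_storage_index_string() and get_shnum(), instead of reaching into
.storageindex / .shnum attributes:

    def describe_share(share):
        # 'share' is any IStoredShare provider; both accessors are part of
        # the interface as extended in this patch.
        return "si=%s shnum=%d" % (share.get_storage_index_string(),
                                   share.get_shnum())
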
7230[docs/backends/S3.rst: remove Issues section. refs #999
7231david-sarah@jacaranda.org**20110921031625
7232 Ignore-this: c83d8f52b790bc32488869e6ee1df8c2
7233] hunk ./docs/backends/S3.rst 57
7234 
7235 Once configured, the WUI "storage server" page will provide information about
7236 how much space is being used and how many shares are being stored.
7237-
7238-
7239-Issues
7240-------
7241-
7242-Objects in an S3 bucket cannot be read for free. As a result, when Tahoe-LAFS
7243-is configured to store shares in S3 rather than on local disk, some common
7244-operations may behave differently:
7245-
7246-* Lease crawling/expiration is not yet implemented. As a result, shares will
7247-  be retained forever, and the Storage Server status web page will not show
7248-  information about the number of mutable/immutable shares present.
7249-
7250-* Enabling ``s3.max_space`` causes an extra S3 usage query to be sent for
7251-  each share upload, causing the upload process to run slightly slower and
7252-  incur more S3 request charges.
7253[docs/backends/S3.rst, disk.rst: describe type of space settings as 'quantity of space', not 'str'. refs #999
7254david-sarah@jacaranda.org**20110921031705
7255 Ignore-this: a74ed8e01b0a1ab5f07a1487d7bf138
7256] {
7257hunk ./docs/backends/S3.rst 38
7258     else, you may be able to use some other S3-like service if it is
7259     sufficiently compatible.
7260 
7261-``s3.max_space = (str, optional)``
7262+``s3.max_space = (quantity of space, optional)``
7263 
7264     This tells the server to limit how much space can be used in the S3
7265     bucket. Before each share is uploaded, the server will ask S3 for the
7266hunk ./docs/backends/disk.rst 14
7267 
7268     This enables use of the disk backend, and is the default.
7269 
7270-``reserved_space = (str, optional)``
7271+``reserved_space = (quantity of space, optional)``
7272 
7273     If provided, this value defines how much disk space is reserved: the
7274     storage server will not accept any share that causes the amount of free
7275}
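
An aside: a 'quantity of space' accepts abbreviated decimal sizes, and the
test_client hunks in the next patch pin the expected values (for example
"10K" meaning 10*1000 and "5mB" meaning 5*1000*1000). A hypothetical parser
with the same behaviour, for illustration only, since Tahoe-LAFS has its own:

    import re

    _MULTIPLIERS = {"": 1, "K": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}

    def parse_space_quantity(s):
        # Case-insensitive, decimal (SI) multipliers, optional trailing 'B'.
        m = re.match(r"^\s*(\d+)\s*([KMGT]?)B?\s*$", s.upper())
        if not m:
            raise ValueError("invalid quantity of space: %r" % (s,))
        return int(m.group(1)) * _MULTIPLIERS[m.group(2)]

    assert parse_space_quantity("10K") == 10*1000
    assert parse_space_quantity("78Gb") == 78*1000*1000*1000
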
7276[More fixes to tests needed for pluggable backends. refs #999
7277david-sarah@jacaranda.org**20110921184649
7278 Ignore-this: 9be0d3a98e350fd4e17a07d2c00bb4ca
7279] {
7280hunk ./src/allmydata/scripts/debug.py 8
7281 from twisted.python import usage, failure
7282 from twisted.internet import defer
7283 from twisted.scripts import trial as twisted_trial
7284+from twisted.python.filepath import FilePath
7285 
7286 
7287 class DumpOptions(usage.Options):
7288hunk ./src/allmydata/scripts/debug.py 38
7289         self['filename'] = argv_to_abspath(filename)
7290 
7291 def dump_share(options):
7292-    from allmydata.storage.mutable import MutableShareFile
7293+    from allmydata.storage.backends.disk.disk_backend import get_share
7294     from allmydata.util.encodingutil import quote_output
7295 
7296     out = options.stdout
7297hunk ./src/allmydata/scripts/debug.py 46
7298     # check the version, to see if we have a mutable or immutable share
7299     print >>out, "share filename: %s" % quote_output(options['filename'])
7300 
7301-    f = open(options['filename'], "rb")
7302-    prefix = f.read(32)
7303-    f.close()
7304-    if prefix == MutableShareFile.MAGIC:
7305-        return dump_mutable_share(options)
7306-    # otherwise assume it's immutable
7307-    return dump_immutable_share(options)
7308-
7309-def dump_immutable_share(options):
7310-    from allmydata.storage.immutable import ShareFile
7311+    share = get_share("", 0, fp)
7312+    if share.sharetype == "mutable":
7313+        return dump_mutable_share(options, share)
7314+    else:
7315+        assert share.sharetype == "immutable", share.sharetype
7316+        return dump_immutable_share(options, share)
7317 
7318hunk ./src/allmydata/scripts/debug.py 53
7319+def dump_immutable_share(options, share):
7320     out = options.stdout
7321hunk ./src/allmydata/scripts/debug.py 55
7322-    f = ShareFile(options['filename'])
7323     if not options["leases-only"]:
7324hunk ./src/allmydata/scripts/debug.py 56
7325-        dump_immutable_chk_share(f, out, options)
7326-    dump_immutable_lease_info(f, out)
7327+        dump_immutable_chk_share(share, out, options)
7328+    dump_immutable_lease_info(share, out)
7329     print >>out
7330     return 0
7331 
7332hunk ./src/allmydata/scripts/debug.py 166
7333     return when
7334 
7335 
7336-def dump_mutable_share(options):
7337-    from allmydata.storage.mutable import MutableShareFile
7338+def dump_mutable_share(options, m):
7339     from allmydata.util import base32, idlib
7340     out = options.stdout
7341hunk ./src/allmydata/scripts/debug.py 169
7342-    m = MutableShareFile(options['filename'])
7343     f = open(options['filename'], "rb")
7344     WE, nodeid = m._read_write_enabler_and_nodeid(f)
7345     num_extra_leases = m._read_num_extra_leases(f)
7346hunk ./src/allmydata/scripts/debug.py 641
7347     /home/warner/testnet/node-1/storage/shares/44k/44kai1tui348689nrw8fjegc8c/9
7348     /home/warner/testnet/node-2/storage/shares/44k/44kai1tui348689nrw8fjegc8c/2
7349     """
7350-    from allmydata.storage.server import si_a2b, storage_index_to_dir
7351-    from allmydata.util.encodingutil import listdir_unicode
7352+    from allmydata.storage.server import si_a2b
7353+    from allmydata.storage.backends.disk_backend import si_si2dir
7354+    from allmydata.util.encodingutil import quote_filepath
7355 
7356     out = options.stdout
7357hunk ./src/allmydata/scripts/debug.py 646
7358-    sharedir = storage_index_to_dir(si_a2b(options.si_s))
7359-    for d in options.nodedirs:
7360-        d = os.path.join(d, "storage/shares", sharedir)
7361-        if os.path.exists(d):
7362-            for shnum in listdir_unicode(d):
7363-                print >>out, os.path.join(d, shnum)
7364+    si = si_a2b(options.si_s)
7365+    for nodedir in options.nodedirs:
7366+        sharedir = si_si2dir(nodedir.child("storage").child("shares"), si)
7367+        if sharedir.exists():
7368+            for sharefp in sharedir.children():
7369+                print >>out, quote_filepath(sharefp, quotemarks=False)
7370 
7371     return 0
7372 
7373hunk ./src/allmydata/scripts/debug.py 878
7374         print >>err, "Error processing %s" % quote_output(si_dir)
7375         failure.Failure().printTraceback(err)
7376 
7377+
7378 class CorruptShareOptions(usage.Options):
7379     def getSynopsis(self):
7380         return "Usage: tahoe debug corrupt-share SHARE_FILENAME"
7381hunk ./src/allmydata/scripts/debug.py 902
7382 Obviously, this command should not be used in normal operation.
7383 """
7384         return t
7385+
7386     def parseArgs(self, filename):
7387         self['filename'] = filename
7388 
7389hunk ./src/allmydata/scripts/debug.py 907
7390 def corrupt_share(options):
7391+    do_corrupt_share(options.stdout, FilePath(options['filename']), options['offset'])
7392+
7393+def do_corrupt_share(out, fp, offset="block-random"):
7394     import random
7395hunk ./src/allmydata/scripts/debug.py 911
7396-    from allmydata.storage.mutable import MutableShareFile
7397-    from allmydata.storage.immutable import ShareFile
7398+    from allmydata.storage.backends.disk.mutable import MutableDiskShare
7399+    from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
7400     from allmydata.mutable.layout import unpack_header
7401     from allmydata.immutable.layout import ReadBucketProxy
7402hunk ./src/allmydata/scripts/debug.py 915
7403-    out = options.stdout
7404-    fn = options['filename']
7405-    assert options["offset"] == "block-random", "other offsets not implemented"
7406+
7407+    assert offset == "block-random", "other offsets not implemented"
7408+
7409     # first, what kind of share is it?
7410 
7411     def flip_bit(start, end):
7412hunk ./src/allmydata/scripts/debug.py 924
7413         offset = random.randrange(start, end)
7414         bit = random.randrange(0, 8)
7415         print >>out, "[%d..%d):  %d.b%d" % (start, end, offset, bit)
7416-        f = open(fn, "rb+")
7417-        f.seek(offset)
7418-        d = f.read(1)
7419-        d = chr(ord(d) ^ 0x01)
7420-        f.seek(offset)
7421-        f.write(d)
7422-        f.close()
7423+        f = fp.open("rb+")
7424+        try:
7425+            f.seek(offset)
7426+            d = f.read(1)
7427+            d = chr(ord(d) ^ 0x01)
7428+            f.seek(offset)
7429+            f.write(d)
7430+        finally:
7431+            f.close()
7432 
7433hunk ./src/allmydata/scripts/debug.py 934
7434-    f = open(fn, "rb")
7435-    prefix = f.read(32)
7436-    f.close()
7437-    if prefix == MutableShareFile.MAGIC:
7438-        # mutable
7439-        m = MutableShareFile(fn)
7440-        f = open(fn, "rb")
7441-        f.seek(m.DATA_OFFSET)
7442-        data = f.read(2000)
7443-        # make sure this slot contains an SMDF share
7444-        assert data[0] == "\x00", "non-SDMF mutable shares not supported"
7445+    f = fp.open("rb")
7446+    try:
7447+        prefix = f.read(32)
7448+    finally:
7449         f.close()
7450hunk ./src/allmydata/scripts/debug.py 939
7451+    if prefix == MutableDiskShare.MAGIC:
7452+        # mutable
7453+        m = MutableDiskShare("", 0, fp)
7454+        f = fp.open("rb")
7455+        try:
7456+            f.seek(m.DATA_OFFSET)
7457+            data = f.read(2000)
7458+            # make sure this slot contains an SDMF share
7459+            assert data[0] == "\x00", "non-SDMF mutable shares not supported"
7460+        finally:
7461+            f.close()
7462 
7463         (version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize,
7464          ig_datalen, offsets) = unpack_header(data)
7465hunk ./src/allmydata/scripts/debug.py 960
7466         flip_bit(start, end)
7467     else:
7468         # otherwise assume it's immutable
7469-        f = ShareFile(fn)
7470+        f = ImmutableDiskShare("", 0, fp)
7471         bp = ReadBucketProxy(None, None, '')
7472         offsets = bp._parse_offsets(f.read_share_data(0, 0x24))
7473         start = f._data_offset + offsets["data"]
7474hunk ./src/allmydata/storage/backends/base.py 92
7475             (testv, datav, new_length) = test_and_write_vectors[sharenum]
7476             if sharenum in shares:
7477                 if not shares[sharenum].check_testv(testv):
7478-                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
7479+                    storageserver.log("testv failed: [%d]: %r" % (sharenum, testv))
7480                     testv_is_good = False
7481                     break
7482             else:
7483hunk ./src/allmydata/storage/backends/base.py 99
7484                 # compare the vectors against an empty share, in which all
7485                 # reads return empty strings
7486                 if not EmptyShare().check_testv(testv):
7487-                    self.log("testv failed (empty): [%d] %r" % (sharenum,
7488-                                                                testv))
7489+                    storageserver.log("testv failed (empty): [%d] %r" % (sharenum, testv))
7490                     testv_is_good = False
7491                     break
7492 
7493hunk ./src/allmydata/test/test_cli.py 2892
7494             # delete one, corrupt a second
7495             shares = self.find_uri_shares(self.uri)
7496             self.failUnlessReallyEqual(len(shares), 10)
7497-            os.unlink(shares[0][2])
7498-            cso = debug.CorruptShareOptions()
7499-            cso.stdout = StringIO()
7500-            cso.parseOptions([shares[1][2]])
7501+            shares[0][2].remove()
7502+            stdout = StringIO()
7503+            sharefile = shares[1][2]
7504             storage_index = uri.from_string(self.uri).get_storage_index()
7505             self._corrupt_share_line = "  server %s, SI %s, shnum %d" % \
7506                                        (base32.b2a(shares[1][1]),
7507hunk ./src/allmydata/test/test_cli.py 2900
7508                                         base32.b2a(storage_index),
7509                                         shares[1][0])
7510-            debug.corrupt_share(cso)
7511+            debug.do_corrupt_share(stdout, sharefile)
7512         d.addCallback(_clobber_shares)
7513 
7514         d.addCallback(lambda ign: self.do_cli("check", "--verify", self.uri))
7515hunk ./src/allmydata/test/test_cli.py 3017
7516         def _clobber_shares(ignored):
7517             shares = self.find_uri_shares(self.uris[u"g\u00F6\u00F6d"])
7518             self.failUnlessReallyEqual(len(shares), 10)
7519-            os.unlink(shares[0][2])
7520+            shares[0][2].remove()
7521 
7522             shares = self.find_uri_shares(self.uris["mutable"])
7523hunk ./src/allmydata/test/test_cli.py 3020
7524-            cso = debug.CorruptShareOptions()
7525-            cso.stdout = StringIO()
7526-            cso.parseOptions([shares[1][2]])
7527+            stdout = StringIO()
7528+            sharefile = shares[1][2]
7529             storage_index = uri.from_string(self.uris["mutable"]).get_storage_index()
7530             self._corrupt_share_line = " corrupt: server %s, SI %s, shnum %d" % \
7531                                        (base32.b2a(shares[1][1]),
7532hunk ./src/allmydata/test/test_cli.py 3027
7533                                         base32.b2a(storage_index),
7534                                         shares[1][0])
7535-            debug.corrupt_share(cso)
7536+            debug.do_corrupt_share(stdout, sharefile)
7537         d.addCallback(_clobber_shares)
7538 
7539         # root
7540hunk ./src/allmydata/test/test_client.py 90
7541                            "enabled = true\n" + \
7542                            "reserved_space = 1000\n")
7543         c = client.Client(basedir)
7544-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 1000)
7545+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 1000)
7546 
7547     def test_reserved_2(self):
7548         basedir = "client.Basic.test_reserved_2"
7549hunk ./src/allmydata/test/test_client.py 101
7550                            "enabled = true\n" + \
7551                            "reserved_space = 10K\n")
7552         c = client.Client(basedir)
7553-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 10*1000)
7554+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 10*1000)
7555 
7556     def test_reserved_3(self):
7557         basedir = "client.Basic.test_reserved_3"
7558hunk ./src/allmydata/test/test_client.py 112
7559                            "enabled = true\n" + \
7560                            "reserved_space = 5mB\n")
7561         c = client.Client(basedir)
7562-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space,
7563+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space,
7564                              5*1000*1000)
7565 
7566     def test_reserved_4(self):
7567hunk ./src/allmydata/test/test_client.py 124
7568                            "enabled = true\n" + \
7569                            "reserved_space = 78Gb\n")
7570         c = client.Client(basedir)
7571-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space,
7572+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space,
7573                              78*1000*1000*1000)
7574 
7575     def test_reserved_bad(self):
7576hunk ./src/allmydata/test/test_client.py 136
7577                            "enabled = true\n" + \
7578                            "reserved_space = bogus\n")
7579         c = client.Client(basedir)
7580-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 0)
7581+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 0)
7582 
7583     def _permute(self, sb, key):
7584         return [ s.get_serverid() for s in sb.get_servers_for_psi(key) ]
7585hunk ./src/allmydata/test/test_crawler.py 7
7586 from twisted.trial import unittest
7587 from twisted.application import service
7588 from twisted.internet import defer
7589+from twisted.python.filepath import FilePath
7590 from foolscap.api import eventually, fireEventually
7591 
7592 from allmydata.util import fileutil, hashutil, pollmixin
7593hunk ./src/allmydata/test/test_crawler.py 13
7594 from allmydata.storage.server import StorageServer, si_b2a
7595 from allmydata.storage.crawler import ShareCrawler, TimeSliceExceeded
7596+from allmydata.storage.backends.disk.disk_backend import DiskBackend
7597 
7598 from allmydata.test.test_storage import FakeCanary
7599 from allmydata.test.common_util import StallMixin
7600hunk ./src/allmydata/test/test_crawler.py 115
7601 
7602     def test_immediate(self):
7603         self.basedir = "crawler/Basic/immediate"
7604-        fileutil.make_dirs(self.basedir)
7605         serverid = "\x00" * 20
7606hunk ./src/allmydata/test/test_crawler.py 116
7607-        ss = StorageServer(self.basedir, serverid)
7608+        fp = FilePath(self.basedir)
7609+        backend = DiskBackend(fp)
7610+        ss = StorageServer(serverid, backend, fp)
7611         ss.setServiceParent(self.s)
7612 
7613         sis = [self.write(i, ss, serverid) for i in range(10)]
7614hunk ./src/allmydata/test/test_crawler.py 122
7615-        statefile = os.path.join(self.basedir, "statefile")
7616+        statefp = fp.child("statefile")
7617 
7618hunk ./src/allmydata/test/test_crawler.py 124
7619-        c = BucketEnumeratingCrawler(ss, statefile, allowed_cpu_percentage=.1)
7620+        c = BucketEnumeratingCrawler(backend, statefp, allowed_cpu_percentage=.1)
7621         c.load_state()
7622 
7623         c.start_current_prefix(time.time())
7624hunk ./src/allmydata/test/test_crawler.py 137
7625         self.failUnlessEqual(sorted(sis), sorted(c.all_buckets))
7626 
7627         # check that a new crawler picks up on the state file properly
7628-        c2 = BucketEnumeratingCrawler(ss, statefile)
7629+        c2 = BucketEnumeratingCrawler(backend, statefp)
7630         c2.load_state()
7631 
7632         c2.start_current_prefix(time.time())
7633hunk ./src/allmydata/test/test_crawler.py 145
7634 
7635     def test_service(self):
7636         self.basedir = "crawler/Basic/service"
7637-        fileutil.make_dirs(self.basedir)
7638         serverid = "\x00" * 20
7639hunk ./src/allmydata/test/test_crawler.py 146
7640-        ss = StorageServer(self.basedir, serverid)
7641+        fp = FilePath(self.basedir)
7642+        backend = DiskBackend(fp)
7643+        ss = StorageServer(serverid, backend, fp)
7644         ss.setServiceParent(self.s)
7645 
7646         sis = [self.write(i, ss, serverid) for i in range(10)]
7647hunk ./src/allmydata/test/test_crawler.py 153
7648 
7649-        statefile = os.path.join(self.basedir, "statefile")
7650-        c = BucketEnumeratingCrawler(ss, statefile)
7651+        statefp = fp.child("statefile")
7652+        c = BucketEnumeratingCrawler(backend, statefp)
7653         c.setServiceParent(self.s)
7654 
7655         # it should be legal to call get_state() and get_progress() right
7656hunk ./src/allmydata/test/test_crawler.py 174
7657 
7658     def test_paced(self):
7659         self.basedir = "crawler/Basic/paced"
7660-        fileutil.make_dirs(self.basedir)
7661         serverid = "\x00" * 20
7662hunk ./src/allmydata/test/test_crawler.py 175
7663-        ss = StorageServer(self.basedir, serverid)
7664+        fp = FilePath(self.basedir)
7665+        backend = DiskBackend(fp)
7666+        ss = StorageServer(serverid, backend, fp)
7667         ss.setServiceParent(self.s)
7668 
7669         # put four buckets in each prefixdir
7670hunk ./src/allmydata/test/test_crawler.py 186
7671             for tail in range(4):
7672                 sis.append(self.write(i, ss, serverid, tail))
7673 
7674-        statefile = os.path.join(self.basedir, "statefile")
7675+        statefp = fp.child("statefile")
7676 
7677hunk ./src/allmydata/test/test_crawler.py 188
7678-        c = PacedCrawler(ss, statefile)
7679+        c = PacedCrawler(backend, statefp)
7680         c.load_state()
7681         try:
7682             c.start_current_prefix(time.time())
7683hunk ./src/allmydata/test/test_crawler.py 213
7684         del c
7685 
7686         # start a new crawler, it should start from the beginning
7687-        c = PacedCrawler(ss, statefile)
7688+        c = PacedCrawler(backend, statefp)
7689         c.load_state()
7690         try:
7691             c.start_current_prefix(time.time())
7692hunk ./src/allmydata/test/test_crawler.py 226
7693         c.cpu_slice = PacedCrawler.cpu_slice
7694 
7695         # a third crawler should pick up from where it left off
7696-        c2 = PacedCrawler(ss, statefile)
7697+        c2 = PacedCrawler(backend, statefp)
7698         c2.all_buckets = c.all_buckets[:]
7699         c2.load_state()
7700         c2.countdown = -1
7701hunk ./src/allmydata/test/test_crawler.py 237
7702 
7703         # now stop it at the end of a bucket (countdown=4), to exercise a
7704         # different place that checks the time
7705-        c = PacedCrawler(ss, statefile)
7706+        c = PacedCrawler(backend, statefp)
7707         c.load_state()
7708         c.countdown = 4
7709         try:
7710hunk ./src/allmydata/test/test_crawler.py 256
7711 
7712         # stop it again at the end of the bucket, check that a new checker
7713         # picks up correctly
7714-        c = PacedCrawler(ss, statefile)
7715+        c = PacedCrawler(backend, statefp)
7716         c.load_state()
7717         c.countdown = 4
7718         try:
7719hunk ./src/allmydata/test/test_crawler.py 266
7720         # that should stop at the end of one of the buckets.
7721         c.save_state()
7722 
7723-        c2 = PacedCrawler(ss, statefile)
7724+        c2 = PacedCrawler(backend, statefp)
7725         c2.all_buckets = c.all_buckets[:]
7726         c2.load_state()
7727         c2.countdown = -1
7728hunk ./src/allmydata/test/test_crawler.py 277
7729 
7730     def test_paced_service(self):
7731         self.basedir = "crawler/Basic/paced_service"
7732-        fileutil.make_dirs(self.basedir)
7733         serverid = "\x00" * 20
7734hunk ./src/allmydata/test/test_crawler.py 278
7735-        ss = StorageServer(self.basedir, serverid)
7736+        fp = FilePath(self.basedir)
7737+        backend = DiskBackend(fp)
7738+        ss = StorageServer(serverid, backend, fp)
7739         ss.setServiceParent(self.s)
7740 
7741         sis = [self.write(i, ss, serverid) for i in range(10)]
7742hunk ./src/allmydata/test/test_crawler.py 285
7743 
7744-        statefile = os.path.join(self.basedir, "statefile")
7745-        c = PacedCrawler(ss, statefile)
7746+        statefp = fp.child("statefile")
7747+        c = PacedCrawler(backend, statefp)
7748 
7749         did_check_progress = [False]
7750         def check_progress():
7751hunk ./src/allmydata/test/test_crawler.py 345
7752         # and read the stdout when it runs.
7753 
7754         self.basedir = "crawler/Basic/cpu_usage"
7755-        fileutil.make_dirs(self.basedir)
7756         serverid = "\x00" * 20
7757hunk ./src/allmydata/test/test_crawler.py 346
7758-        ss = StorageServer(self.basedir, serverid)
7759+        fp = FilePath(self.basedir)
7760+        backend = DiskBackend(fp)
7761+        ss = StorageServer(serverid, backend, fp)
7762         ss.setServiceParent(self.s)
7763 
7764         for i in range(10):
7765hunk ./src/allmydata/test/test_crawler.py 354
7766             self.write(i, ss, serverid)
7767 
7768-        statefile = os.path.join(self.basedir, "statefile")
7769-        c = ConsumingCrawler(ss, statefile)
7770+        statefp = fp.child("statefile")
7771+        c = ConsumingCrawler(backend, statefp)
7772         c.setServiceParent(self.s)
7773 
7774         # this will run as fast as it can, consuming about 50ms per call to
7775hunk ./src/allmydata/test/test_crawler.py 391
7776 
7777     def test_empty_subclass(self):
7778         self.basedir = "crawler/Basic/empty_subclass"
7779-        fileutil.make_dirs(self.basedir)
7780         serverid = "\x00" * 20
7781hunk ./src/allmydata/test/test_crawler.py 392
7782-        ss = StorageServer(self.basedir, serverid)
7783+        fp = FilePath(self.basedir)
7784+        backend = DiskBackend(fp)
7785+        ss = StorageServer(serverid, backend, fp)
7786         ss.setServiceParent(self.s)
7787 
7788         for i in range(10):
7789hunk ./src/allmydata/test/test_crawler.py 400
7790             self.write(i, ss, serverid)
7791 
7792-        statefile = os.path.join(self.basedir, "statefile")
7793-        c = ShareCrawler(ss, statefile)
7794+        statefp = fp.child("statefile")
7795+        c = ShareCrawler(backend, statefp)
7796         c.slow_start = 0
7797         c.setServiceParent(self.s)
7798 
7799hunk ./src/allmydata/test/test_crawler.py 417
7800         d.addCallback(_done)
7801         return d
7802 
7803-
7804     def test_oneshot(self):
7805         self.basedir = "crawler/Basic/oneshot"
7806hunk ./src/allmydata/test/test_crawler.py 419
7807-        fileutil.make_dirs(self.basedir)
7808         serverid = "\x00" * 20
7809hunk ./src/allmydata/test/test_crawler.py 420
7810-        ss = StorageServer(self.basedir, serverid)
7811+        fp = FilePath(self.basedir)
7812+        backend = DiskBackend(fp)
7813+        ss = StorageServer(serverid, backend, fp)
7814         ss.setServiceParent(self.s)
7815 
7816         for i in range(30):
7817hunk ./src/allmydata/test/test_crawler.py 428
7818             self.write(i, ss, serverid)
7819 
7820-        statefile = os.path.join(self.basedir, "statefile")
7821-        c = OneShotCrawler(ss, statefile)
7822+        statefp = fp.child("statefile")
7823+        c = OneShotCrawler(backend, statefp)
7824         c.setServiceParent(self.s)
7825 
7826         d = c.finished_d
7827hunk ./src/allmydata/test/test_crawler.py 447
7828             self.failUnlessEqual(s["current-cycle"], None)
7829         d.addCallback(_check)
7830         return d
7831-
7832hunk ./src/allmydata/test/test_deepcheck.py 23
7833      ShouldFailMixin
7834 from allmydata.test.common_util import StallMixin
7835 from allmydata.test.no_network import GridTestMixin
7836+from allmydata.scripts import debug
7837+
7838 
7839 timeout = 2400 # One of these took 1046.091s on Zandr's ARM box.
7840 
7841hunk ./src/allmydata/test/test_deepcheck.py 905
7842         d.addErrback(self.explain_error)
7843         return d
7844 
7845-
7846-
7847     def set_up_damaged_tree(self):
7848         # 6.4s
7849 
7850hunk ./src/allmydata/test/test_deepcheck.py 989
7851 
7852         return d
7853 
7854-    def _run_cli(self, argv):
7855-        stdout, stderr = StringIO(), StringIO()
7856-        # this can only do synchronous operations
7857-        assert argv[0] == "debug"
7858-        runner.runner(argv, run_by_human=False, stdout=stdout, stderr=stderr)
7859-        return stdout.getvalue()
7860-
7861     def _delete_some_shares(self, node):
7862         self.delete_shares_numbered(node.get_uri(), [0,1])
7863 
7864hunk ./src/allmydata/test/test_deepcheck.py 995
7865     def _corrupt_some_shares(self, node):
7866         for (shnum, serverid, sharefile) in self.find_uri_shares(node.get_uri()):
7867             if shnum in (0,1):
7868-                self._run_cli(["debug", "corrupt-share", sharefile])
7869+                debug.do_corrupt_share(StringIO(), sharefile)
7870 
7871     def _delete_most_shares(self, node):
7872         self.delete_shares_numbered(node.get_uri(), range(1,10))
7873hunk ./src/allmydata/test/test_deepcheck.py 1000
7874 
7875-
7876     def check_is_healthy(self, cr, where):
7877         try:
7878             self.failUnless(ICheckResults.providedBy(cr), (cr, type(cr), where))
7879hunk ./src/allmydata/test/test_download.py 134
7880             for shnum in shares_for_server:
7881                 share_dir = self.get_server(i).backend.get_shareset(si)._sharehomedir
7882                 fileutil.fp_make_dirs(share_dir)
7883-                share_dir.child(str(shnum)).setContent(shares[shnum])
7884+                share_dir.child(str(shnum)).setContent(shares_for_server[shnum])
7885 
7886     def load_shares(self, ignored=None):
7887         # this uses the data generated by create_shares() to populate the
7888hunk ./src/allmydata/test/test_hung_server.py 32
7889 
7890     def _break(self, servers):
7891         for ss in servers:
7892-            self.g.break_server(ss.get_serverid())
7893+            self.g.break_server(ss.original.get_serverid())
7894 
7895     def _hang(self, servers, **kwargs):
7896         for ss in servers:
7897hunk ./src/allmydata/test/test_hung_server.py 67
7898         serverids = [ss.original.get_serverid() for ss in from_servers]
7899         for (i_shnum, i_serverid, i_sharefp) in self.shares:
7900             if i_serverid in serverids:
7901-                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
7902+                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server.original)
7903 
7904         self.shares = self.find_uri_shares(self.uri)
7905 
7906hunk ./src/allmydata/test/test_mutable.py 3670
7907         # Now execute each assignment by writing the storage.
7908         for (share, servernum) in assignments:
7909             sharedata = base64.b64decode(self.sdmf_old_shares[share])
7910-            storage_dir = self.get_server(servernum).backend.get_shareset(si).sharehomedir
7911+            storage_dir = self.get_server(servernum).backend.get_shareset(si)._sharehomedir
7912             fileutil.fp_make_dirs(storage_dir)
7913             storage_dir.child("%d" % share).setContent(sharedata)
7914         # ...and verify that the shares are there.
7915hunk ./src/allmydata/test/test_no_network.py 10
7916 from allmydata.immutable.upload import Data
7917 from allmydata.util.consumer import download_to_data
7918 
7919+
7920 class Harness(unittest.TestCase):
7921     def setUp(self):
7922         self.s = service.MultiService()
7923hunk ./src/allmydata/test/test_storage.py 1
7924-import time, os.path, platform, stat, re, simplejson, struct, shutil
7925+import time, os.path, platform, stat, re, simplejson, struct, shutil, itertools
7926 
7927 import mock
7928 
7929hunk ./src/allmydata/test/test_storage.py 6
7930 from twisted.trial import unittest
7931-
7932 from twisted.internet import defer
7933 from twisted.application import service
7934hunk ./src/allmydata/test/test_storage.py 8
7935+from twisted.python.filepath import FilePath
7936 from foolscap.api import fireEventually
7937hunk ./src/allmydata/test/test_storage.py 10
7938-import itertools
7939+
7940 from allmydata import interfaces
7941 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
7942 from allmydata.storage.server import StorageServer
7943hunk ./src/allmydata/test/test_storage.py 14
7944+from allmydata.storage.backends.disk.disk_backend import DiskBackend
7945 from allmydata.storage.backends.disk.mutable import MutableDiskShare
7946 from allmydata.storage.bucket import BucketWriter, BucketReader
7947 from allmydata.storage.common import DataTooLargeError, \
7948hunk ./src/allmydata/test/test_storage.py 310
7949         return self.sparent.stopService()
7950 
7951     def workdir(self, name):
7952-        basedir = os.path.join("storage", "Server", name)
7953-        return basedir
7954+        return FilePath("storage").child("Server").child(name)
7955 
7956     def create(self, name, reserved_space=0, klass=StorageServer):
7957         workdir = self.workdir(name)
7958hunk ./src/allmydata/test/test_storage.py 314
7959-        ss = klass(workdir, "\x00" * 20, reserved_space=reserved_space,
7960+        backend = DiskBackend(workdir, readonly=False, reserved_space=reserved_space)
7961+        ss = klass("\x00" * 20, backend, workdir,
7962                    stats_provider=FakeStatsProvider())
7963         ss.setServiceParent(self.sparent)
7964         return ss
7965hunk ./src/allmydata/test/test_storage.py 1386
7966 
7967     def tearDown(self):
7968         self.sparent.stopService()
7969-        shutil.rmtree(self.workdir("MDMFProxies storage test server"))
7970+        fileutil.fp_remove(self.workdir("MDMFProxies storage test server"))
7971 
7972 
7973     def write_enabler(self, we_tag):
7974hunk ./src/allmydata/test/test_storage.py 2781
7975         return self.sparent.stopService()
7976 
7977     def workdir(self, name):
7978-        basedir = os.path.join("storage", "Server", name)
7979-        return basedir
7980+        return FilePath("storage").child("Server").child(name)
7981 
7982     def create(self, name):
7983         workdir = self.workdir(name)
7984hunk ./src/allmydata/test/test_storage.py 2785
7985-        ss = StorageServer(workdir, "\x00" * 20)
7986+        backend = DiskBackend(workdir)
7987+        ss = StorageServer("\x00" * 20, backend, workdir)
7988         ss.setServiceParent(self.sparent)
7989         return ss
7990 
7991hunk ./src/allmydata/test/test_storage.py 4061
7992         }
7993 
7994         basedir = "storage/WebStatus/status_right_disk_stats"
7995-        fileutil.make_dirs(basedir)
7996-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=reserved_space)
7997-        expecteddir = ss.sharedir
7998+        fp = FilePath(basedir)
7999+        backend = DiskBackend(fp, readonly=False, reserved_space=reserved_space)
8000+        ss = StorageServer("\x00" * 20, backend, fp)
8001+        expecteddir = backend._sharedir
8002         ss.setServiceParent(self.s)
8003         w = StorageStatus(ss)
8004         html = w.renderSynchronously()
8005hunk ./src/allmydata/test/test_storage.py 4084
8006 
8007     def test_readonly(self):
8008         basedir = "storage/WebStatus/readonly"
8009-        fileutil.make_dirs(basedir)
8010-        ss = StorageServer(basedir, "\x00" * 20, readonly_storage=True)
8011+        fp = FilePath(basedir)
8012+        backend = DiskBackend(fp, readonly=True)
8013+        ss = StorageServer("\x00" * 20, backend, fp)
8014         ss.setServiceParent(self.s)
8015         w = StorageStatus(ss)
8016         html = w.renderSynchronously()
8017hunk ./src/allmydata/test/test_storage.py 4096
8018 
8019     def test_reserved(self):
8020         basedir = "storage/WebStatus/reserved"
8021-        fileutil.make_dirs(basedir)
8022-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=10e6)
8023-        ss.setServiceParent(self.s)
8024-        w = StorageStatus(ss)
8025-        html = w.renderSynchronously()
8026-        self.failUnlessIn("<h1>Storage Server Status</h1>", html)
8027-        s = remove_tags(html)
8028-        self.failUnlessIn("Reserved space: - 10.00 MB (10000000)", s)
8029-
8030-    def test_huge_reserved(self):
8031-        basedir = "storage/WebStatus/reserved"
8032-        fileutil.make_dirs(basedir)
8033-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=10e6)
8034+        fp = FilePath(basedir)
8035+        backend = DiskBackend(fp, readonly=False, reserved_space=10e6)
8036+        ss = StorageServer("\x00" * 20, backend, fp)
8037         ss.setServiceParent(self.s)
8038         w = StorageStatus(ss)
8039         html = w.renderSynchronously()
8040hunk ./src/allmydata/test/test_upload.py 3
8041 # -*- coding: utf-8 -*-
8042 
8043-import os, shutil
8044+import os
8045 from cStringIO import StringIO
8046 from twisted.trial import unittest
8047 from twisted.python.failure import Failure
8048hunk ./src/allmydata/test/test_upload.py 14
8049 from allmydata import uri, monitor, client
8050 from allmydata.immutable import upload, encode
8051 from allmydata.interfaces import FileTooLargeError, UploadUnhappinessError
8052-from allmydata.util import log
8053+from allmydata.util import log, fileutil
8054 from allmydata.util.assertutil import precondition
8055 from allmydata.util.deferredutil import DeferredListShouldSucceed
8056 from allmydata.test.no_network import GridTestMixin
8057hunk ./src/allmydata/test/test_upload.py 972
8058                                         readonly=True))
8059         # Remove the first share from server 0.
8060         def _remove_share_0_from_server_0():
8061-            share_location = self.shares[0][2]
8062-            os.remove(share_location)
8063+            self.shares[0][2].remove()
8064         d.addCallback(lambda ign:
8065             _remove_share_0_from_server_0())
8066         # Set happy = 4 in the client.
8067hunk ./src/allmydata/test/test_upload.py 1847
8068             self._copy_share_to_server(3, 1)
8069             storedir = self.get_serverdir(0)
8070             # remove the storedir, wiping out any existing shares
8071-            shutil.rmtree(storedir)
8072+            fileutil.fp_remove(storedir)
8073             # create an empty storedir to replace the one we just removed
8074hunk ./src/allmydata/test/test_upload.py 1849
8075-            os.mkdir(storedir)
8076+            storedir.mkdir()
8077             client = self.g.clients[0]
8078             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
8079             return client
8080hunk ./src/allmydata/test/test_upload.py 1888
8081             self._copy_share_to_server(3, 1)
8082             storedir = self.get_serverdir(0)
8083             # remove the storedir, wiping out any existing shares
8084-            shutil.rmtree(storedir)
8085+            fileutil.fp_remove(storedir)
8086             # create an empty storedir to replace the one we just removed
8087hunk ./src/allmydata/test/test_upload.py 1890
8088-            os.mkdir(storedir)
8089+            storedir.mkdir()
8090             client = self.g.clients[0]
8091             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
8092             return client
8093hunk ./src/allmydata/test/test_web.py 4870
8094         d.addErrback(self.explain_web_error)
8095         return d
8096 
8097-    def _assert_leasecount(self, ignored, which, expected):
8098+    def _assert_leasecount(self, which, expected):
8099         lease_counts = self.count_leases(self.uris[which])
8100         for (fn, num_leases) in lease_counts:
8101             if num_leases != expected:
8102hunk ./src/allmydata/test/test_web.py 4903
8103                 self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
8104         d.addCallback(_compute_fileurls)
8105 
8106-        d.addCallback(self._assert_leasecount, "one", 1)
8107-        d.addCallback(self._assert_leasecount, "two", 1)
8108-        d.addCallback(self._assert_leasecount, "mutable", 1)
8109+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8110+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8111+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8112 
8113         d.addCallback(self.CHECK, "one", "t=check") # no add-lease
8114         def _got_html_good(res):
8115hunk ./src/allmydata/test/test_web.py 4913
8116             self.failIf("Not Healthy" in res, res)
8117         d.addCallback(_got_html_good)
8118 
8119-        d.addCallback(self._assert_leasecount, "one", 1)
8120-        d.addCallback(self._assert_leasecount, "two", 1)
8121-        d.addCallback(self._assert_leasecount, "mutable", 1)
8122+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8123+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8124+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8125 
8126         # this CHECK uses the original client, which uses the same
8127         # lease-secrets, so it will just renew the original lease
8128hunk ./src/allmydata/test/test_web.py 4922
8129         d.addCallback(self.CHECK, "one", "t=check&add-lease=true")
8130         d.addCallback(_got_html_good)
8131 
8132-        d.addCallback(self._assert_leasecount, "one", 1)
8133-        d.addCallback(self._assert_leasecount, "two", 1)
8134-        d.addCallback(self._assert_leasecount, "mutable", 1)
8135+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8136+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8137+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8138 
8139         # this CHECK uses an alternate client, which adds a second lease
8140         d.addCallback(self.CHECK, "one", "t=check&add-lease=true", clientnum=1)
8141hunk ./src/allmydata/test/test_web.py 4930
8142         d.addCallback(_got_html_good)
8143 
8144-        d.addCallback(self._assert_leasecount, "one", 2)
8145-        d.addCallback(self._assert_leasecount, "two", 1)
8146-        d.addCallback(self._assert_leasecount, "mutable", 1)
8147+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8148+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8149+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8150 
8151         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true")
8152         d.addCallback(_got_html_good)
8153hunk ./src/allmydata/test/test_web.py 4937
8154 
8155-        d.addCallback(self._assert_leasecount, "one", 2)
8156-        d.addCallback(self._assert_leasecount, "two", 1)
8157-        d.addCallback(self._assert_leasecount, "mutable", 1)
8158+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8159+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8160+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8161 
8162         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true",
8163                       clientnum=1)
8164hunk ./src/allmydata/test/test_web.py 4945
8165         d.addCallback(_got_html_good)
8166 
8167-        d.addCallback(self._assert_leasecount, "one", 2)
8168-        d.addCallback(self._assert_leasecount, "two", 1)
8169-        d.addCallback(self._assert_leasecount, "mutable", 2)
8170+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8171+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8172+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 2))
8173 
8174         d.addErrback(self.explain_web_error)
8175         return d
8176hunk ./src/allmydata/test/test_web.py 4989
8177             self.failUnlessReallyEqual(len(units), 4+1)
8178         d.addCallback(_done)
8179 
8180-        d.addCallback(self._assert_leasecount, "root", 1)
8181-        d.addCallback(self._assert_leasecount, "one", 1)
8182-        d.addCallback(self._assert_leasecount, "mutable", 1)
8183+        d.addCallback(lambda ign: self._assert_leasecount("root", 1))
8184+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8185+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8186 
8187         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true")
8188         d.addCallback(_done)
8189hunk ./src/allmydata/test/test_web.py 4996
8190 
8191-        d.addCallback(self._assert_leasecount, "root", 1)
8192-        d.addCallback(self._assert_leasecount, "one", 1)
8193-        d.addCallback(self._assert_leasecount, "mutable", 1)
8194+        d.addCallback(lambda ign: self._assert_leasecount("root", 1))
8195+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8196+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8197 
8198         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true",
8199                       clientnum=1)
8200hunk ./src/allmydata/test/test_web.py 5004
8201         d.addCallback(_done)
8202 
8203-        d.addCallback(self._assert_leasecount, "root", 2)
8204-        d.addCallback(self._assert_leasecount, "one", 2)
8205-        d.addCallback(self._assert_leasecount, "mutable", 2)
8206+        d.addCallback(lambda ign: self._assert_leasecount("root", 2))
8207+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8208+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 2))
8209 
8210         d.addErrback(self.explain_web_error)
8211         return d
8212}
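
An aside: the recurring edit in the test hunks above is the new way a
storage server is constructed; tests now build a DiskBackend from a
FilePath and hand it to StorageServer, instead of passing a bare directory.
The pattern, factored into a sketch (signatures as used in this bundle):

    from twisted.python.filepath import FilePath
    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    def make_disk_server(basedir, serverid="\x00" * 20, reserved_space=0):
        # One FilePath serves as both the backend's storage root and the
        # server's state directory, matching the converted tests above.
        fp = FilePath(basedir)
        backend = DiskBackend(fp, readonly=False, reserved_space=reserved_space)
        return StorageServer(serverid, backend, fp)
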
8213[Fix more shallow bugs, mainly FilePathification. Also, remove the max_space_per_bucket parameter from BucketWriter since it can be obtained from the _max_size attribute of the share (via a new get_allocated_size() accessor). refs #999
8214david-sarah@jacaranda.org**20110921221421
8215 Ignore-this: 600e3ccef8533aa43442fa576c7d88cf
8216] {
8217hunk ./src/allmydata/scripts/debug.py 642
8218     /home/warner/testnet/node-2/storage/shares/44k/44kai1tui348689nrw8fjegc8c/2
8219     """
8220     from allmydata.storage.server import si_a2b
8221-    from allmydata.storage.backends.disk_backend import si_si2dir
8222+    from allmydata.storage.backends.disk.disk_backend import si_si2dir
8223     from allmydata.util.encodingutil import quote_filepath
8224 
8225     out = options.stdout
8226hunk ./src/allmydata/scripts/debug.py 648
8227     si = si_a2b(options.si_s)
8228     for nodedir in options.nodedirs:
8229-        sharedir = si_si2dir(nodedir.child("storage").child("shares"), si)
8230+        sharedir = si_si2dir(FilePath(nodedir).child("storage").child("shares"), si)
8231         if sharedir.exists():
8232             for sharefp in sharedir.children():
8233                 print >>out, quote_filepath(sharefp, quotemarks=False)
8234hunk ./src/allmydata/storage/backends/disk/disk_backend.py 189
8235         incominghome = self._incominghomedir.child(str(shnum))
8236         immsh = ImmutableDiskShare(self.get_storage_index(), shnum, incominghome, finalhome,
8237                                    max_size=max_space_per_bucket)
8238-        bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
8239+        bw = BucketWriter(storageserver, immsh, lease_info, canary)
8240         if self._discard_storage:
8241             bw.throw_out_all_data = True
8242         return bw
8243hunk ./src/allmydata/storage/backends/disk/immutable.py 147
8244     def unlink(self):
8245         self._home.remove()
8246 
8247+    def get_allocated_size(self):
8248+        return self._max_size
8249+
8250     def get_size(self):
8251         return self._home.getsize()
8252 
8253hunk ./src/allmydata/storage/bucket.py 15
8254 class BucketWriter(Referenceable):
8255     implements(RIBucketWriter)
8256 
8257-    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
8258+    def __init__(self, ss, immutableshare, lease_info, canary):
8259         self.ss = ss
8260hunk ./src/allmydata/storage/bucket.py 17
8261-        self._max_size = max_size # don't allow the client to write more than this
8262         self._canary = canary
8263         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
8264         self.closed = False
8265hunk ./src/allmydata/storage/bucket.py 27
8266         self._share.add_lease(lease_info)
8267 
8268     def allocated_size(self):
8269-        return self._max_size
8270+        return self._share.get_allocated_size()
8271 
8272     def remote_write(self, offset, data):
8273         start = time.time()
8274hunk ./src/allmydata/storage/crawler.py 480
8275             self.state["bucket-counts"][cycle] = {}
8276         self.state["bucket-counts"][cycle][prefix] = len(sharesets)
8277         if prefix in self.prefixes[:self.num_sample_prefixes]:
8278-            self.state["storage-index-samples"][prefix] = (cycle, sharesets)
8279+            si_strings = [shareset.get_storage_index_string() for shareset in sharesets]
8280+            self.state["storage-index-samples"][prefix] = (cycle, si_strings)
8281 
8282     def finished_cycle(self, cycle):
8283         last_counts = self.state["bucket-counts"].get(cycle, [])
8284hunk ./src/allmydata/storage/expirer.py 281
8285         # copy() needs to become a deepcopy
8286         h["space-recovered"] = s["space-recovered"].copy()
8287 
8288-        history = pickle.load(self.historyfp.getContent())
8289+        history = pickle.loads(self.historyfp.getContent())
8290         history[cycle] = h
8291         while len(history) > 10:
8292             oldcycles = sorted(history.keys())
8293hunk ./src/allmydata/storage/expirer.py 355
8294         progress = self.get_progress()
8295 
8296         state = ShareCrawler.get_state(self) # does a shallow copy
8297-        history = pickle.load(self.historyfp.getContent())
8298+        history = pickle.loads(self.historyfp.getContent())
8299         state["history"] = history
8300 
8301         if not progress["cycle-in-progress"]:
8302hunk ./src/allmydata/test/test_download.py 199
8303                     for shnum in immutable_shares[clientnum]:
8304                         if s._shnum == shnum:
8305                             share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
8306-                            share_dir.child(str(shnum)).remove()
8307+                            fileutil.fp_remove(share_dir.child(str(shnum)))
8308         d.addCallback(_clobber_some_shares)
8309         d.addCallback(lambda ign: download_to_data(n))
8310         d.addCallback(_got_data)
8311hunk ./src/allmydata/test/test_download.py 224
8312             for clientnum in immutable_shares:
8313                 for shnum in immutable_shares[clientnum]:
8314                     share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
8315-                    share_dir.child(str(shnum)).remove()
8316+                    fileutil.fp_remove(share_dir.child(str(shnum)))
8317             # now a new download should fail with NoSharesError. We want a
8318             # new ImmutableFileNode so it will forget about the old shares.
8319             # If we merely called create_node_from_uri() without first
8320hunk ./src/allmydata/test/test_repairer.py 415
8321         def _test_corrupt(ignored):
8322             olddata = {}
8323             shares = self.find_uri_shares(self.uri)
8324-            for (shnum, serverid, sharefile) in shares:
8325-                olddata[ (shnum, serverid) ] = open(sharefile, "rb").read()
8326+            for (shnum, serverid, sharefp) in shares:
8327+                olddata[ (shnum, serverid) ] = sharefp.getContent()
8328             for sh in shares:
8329                 self.corrupt_share(sh, common._corrupt_uri_extension)
8330hunk ./src/allmydata/test/test_repairer.py 419
8331-            for (shnum, serverid, sharefile) in shares:
8332-                newdata = open(sharefile, "rb").read()
8333+            for (shnum, serverid, sharefp) in shares:
8334+                newdata = sharefp.getContent()
8335                 self.failIfEqual(olddata[ (shnum, serverid) ], newdata)
8336         d.addCallback(_test_corrupt)
8337 
8338hunk ./src/allmydata/test/test_storage.py 63
8339 
8340 class Bucket(unittest.TestCase):
8341     def make_workdir(self, name):
8342-        basedir = os.path.join("storage", "Bucket", name)
8343-        incoming = os.path.join(basedir, "tmp", "bucket")
8344-        final = os.path.join(basedir, "bucket")
8345-        fileutil.make_dirs(basedir)
8346-        fileutil.make_dirs(os.path.join(basedir, "tmp"))
8347+        basedir = FilePath("storage").child("Bucket").child(name)
8348+        tmpdir = basedir.child("tmp")
8349+        tmpdir.makedirs()
8350+        incoming = tmpdir.child("bucket")
8351+        final = basedir.child("bucket")
8352         return incoming, final
8353 
8354     def bucket_writer_closed(self, bw, consumed):
8355hunk ./src/allmydata/test/test_storage.py 87
8356 
8357     def test_create(self):
8358         incoming, final = self.make_workdir("test_create")
8359-        bw = BucketWriter(self, incoming, final, 200, self.make_lease(),
8360-                          FakeCanary())
8361+        share = ImmutableDiskShare("", 0, incoming, final, 200)
8362+        bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
8363         bw.remote_write(0, "a"*25)
8364         bw.remote_write(25, "b"*25)
8365         bw.remote_write(50, "c"*25)
8366hunk ./src/allmydata/test/test_storage.py 97
8367 
8368     def test_readwrite(self):
8369         incoming, final = self.make_workdir("test_readwrite")
8370-        bw = BucketWriter(self, incoming, final, 200, self.make_lease(),
8371-                          FakeCanary())
8372+        share = ImmutableDiskShare("", 0, incoming, final, 200)
8373+        bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
8374         bw.remote_write(0, "a"*25)
8375         bw.remote_write(25, "b"*25)
8376         bw.remote_write(50, "c"*7) # last block may be short
8377hunk ./src/allmydata/test/test_storage.py 140
8378 
8379         incoming, final = self.make_workdir("test_read_past_end_of_share_data")
8380 
8381-        fileutil.write(final, share_file_data)
8382+        final.setContent(share_file_data)
8383 
8384         mockstorageserver = mock.Mock()
8385 
8386hunk ./src/allmydata/test/test_storage.py 179
8387 
8388 class BucketProxy(unittest.TestCase):
8389     def make_bucket(self, name, size):
8390-        basedir = os.path.join("storage", "BucketProxy", name)
8391-        incoming = os.path.join(basedir, "tmp", "bucket")
8392-        final = os.path.join(basedir, "bucket")
8393-        fileutil.make_dirs(basedir)
8394-        fileutil.make_dirs(os.path.join(basedir, "tmp"))
8395-        bw = BucketWriter(self, incoming, final, size, self.make_lease(),
8396-                          FakeCanary())
8397+        basedir = FilePath("storage").child("BucketProxy").child(name)
8398+        tmpdir = basedir.child("tmp")
8399+        tmpdir.makedirs()
8400+        incoming = tmpdir.child("bucket")
8401+        final = basedir.child("bucket")
8402+        share = ImmutableDiskShare("", 0, incoming, final, size)
8403+        bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
8404         rb = RemoteBucket()
8405         rb.target = bw
8406         return bw, rb, final
8407hunk ./src/allmydata/test/test_storage.py 206
8408         pass
8409 
8410     def test_create(self):
8411-        bw, rb, sharefname = self.make_bucket("test_create", 500)
8412+        bw, rb, sharefp = self.make_bucket("test_create", 500)
8413         bp = WriteBucketProxy(rb, None,
8414                               data_size=300,
8415                               block_size=10,
8416hunk ./src/allmydata/test/test_storage.py 237
8417                         for i in (1,9,13)]
8418         uri_extension = "s" + "E"*498 + "e"
8419 
8420-        bw, rb, sharefname = self.make_bucket(name, sharesize)
8421+        bw, rb, sharefp = self.make_bucket(name, sharesize)
8422         bp = wbp_class(rb, None,
8423                        data_size=95,
8424                        block_size=25,
8425hunk ./src/allmydata/test/test_storage.py 258
8426 
8427         # now read everything back
8428         def _start_reading(res):
-            br = BucketReader(self, sharefname)
+            br = BucketReader(self, sharefp)
             rb = RemoteBucket()
             rb.target = br
             server = NoNetworkServer("abc", None)
hunk ./src/allmydata/test/test_storage.py 373
         for i, wb in writers.items():
             wb.remote_write(0, "%10d" % i)
             wb.remote_close()
-        storedir = os.path.join(self.workdir("test_dont_overfill_dirs"),
-                                "shares")
-        children_of_storedir = set(os.listdir(storedir))
+        storedir = self.workdir("test_dont_overfill_dirs").child("shares")
+        children_of_storedir = sorted([child.basename() for child in storedir.children()])
 
         # Now store another one under another storageindex that has leading
         # chars the same as the first storageindex.
hunk ./src/allmydata/test/test_storage.py 382
         for i, wb in writers.items():
             wb.remote_write(0, "%10d" % i)
             wb.remote_close()
-        storedir = os.path.join(self.workdir("test_dont_overfill_dirs"),
-                                "shares")
-        new_children_of_storedir = set(os.listdir(storedir))
+        storedir = self.workdir("test_dont_overfill_dirs").child("shares")
+        new_children_of_storedir = sorted([child.basename() for child in storedir.children()])
         self.failUnlessEqual(children_of_storedir, new_children_of_storedir)
 
     def test_remove_incoming(self):
hunk ./src/allmydata/test/test_storage.py 390
         ss = self.create("test_remove_incoming")
         already, writers = self.allocate(ss, "vid", range(3), 10)
         for i,wb in writers.items():
+            incoming_share_home = wb._share._home
             wb.remote_write(0, "%10d" % i)
             wb.remote_close()
hunk ./src/allmydata/test/test_storage.py 393
-        incoming_share_dir = wb.incominghome
-        incoming_bucket_dir = os.path.dirname(incoming_share_dir)
-        incoming_prefix_dir = os.path.dirname(incoming_bucket_dir)
-        incoming_dir = os.path.dirname(incoming_prefix_dir)
-        self.failIf(os.path.exists(incoming_bucket_dir), incoming_bucket_dir)
-        self.failIf(os.path.exists(incoming_prefix_dir), incoming_prefix_dir)
-        self.failUnless(os.path.exists(incoming_dir), incoming_dir)
+        incoming_bucket_dir = incoming_share_home.parent()
+        incoming_prefix_dir = incoming_bucket_dir.parent()
+        incoming_dir = incoming_prefix_dir.parent()
+        self.failIf(incoming_bucket_dir.exists(), incoming_bucket_dir)
+        self.failIf(incoming_prefix_dir.exists(), incoming_prefix_dir)
+        self.failUnless(incoming_dir.exists(), incoming_dir)
 
     def test_abort(self):
         # remote_abort, when called on a writer, should make sure that
hunk ./src/allmydata/test/test_upload.py 1849
             # remove the storedir, wiping out any existing shares
             fileutil.fp_remove(storedir)
             # create an empty storedir to replace the one we just removed
-            storedir.mkdir()
+            storedir.makedirs()
             client = self.g.clients[0]
             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
             return client
hunk ./src/allmydata/test/test_upload.py 1890
             # remove the storedir, wiping out any existing shares
             fileutil.fp_remove(storedir)
             # create an empty storedir to replace the one we just removed
-            storedir.mkdir()
+            storedir.makedirs()
             client = self.g.clients[0]
             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
             return client
}
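The FilePathification hunks above (and throughout this bundle) apply one mechanical mapping from os.path calls on path strings to Twisted FilePath methods. A minimal sketch of the correspondence, not part of the patch; the example path is hypothetical and the asserts assume a POSIX path:

    import os.path
    from twisted.python.filepath import FilePath

    p = "/storage/shares/ab/abcdefgh/0"   # hypothetical share path
    fp = FilePath(p)

    # os.path.join(p, "x")  -> fp.child("x")
    # os.path.dirname(p)    -> fp.parent()
    # os.path.basename(p)   -> fp.basename()
    # os.path.exists(p)     -> fp.exists()
    # os.listdir(p)         -> [c.basename() for c in fp.children()]
    assert fp.child("x").path == os.path.join(p, "x")
    assert fp.parent().path == os.path.dirname(p)
    assert fp.basename() == os.path.basename(p)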
[uri.py: resolve a conflict between trunk and the pluggable-backends patches. refs #999
david-sarah@jacaranda.org**20110921222038
 Ignore-this: ffeeab60d8e71a6a29a002d024d76fcf
] {
hunk ./src/allmydata/uri.py 829
     def is_mutable(self):
         return False
 
+    def is_readonly(self):
+        return True
+
+    def get_readonly(self):
+        return self
+
+
 class DirectoryURIVerifier(_DirectoryBaseURI):
     implements(IVerifierURI)
 
hunk ./src/allmydata/uri.py 855
     def is_mutable(self):
         return False
 
+    def is_readonly(self):
+        return True
+
+    def get_readonly(self):
+        return self
+
 
 class ImmutableDirectoryURIVerifier(DirectoryURIVerifier):
     implements(IVerifierURI)
}
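The methods added above fill in the read-only side of the IVerifierURI interface for these verifier classes: verifier caps are immutable, so the read-only view of one is the object itself. An illustrative sketch of the resulting contract (`verifier` stands for any instance of the classes touched here):

    assert not verifier.is_mutable()
    assert verifier.is_readonly()
    assert verifier.get_readonly() is verifier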
[Fix some more test failures. refs #999
david-sarah@jacaranda.org**20110922045451
 Ignore-this: b726193cbd03a7c3d343f6e4a0f33ee7
] {
hunk ./src/allmydata/scripts/debug.py 42
     from allmydata.util.encodingutil import quote_output
 
     out = options.stdout
+    filename = options['filename']
 
     # check the version, to see if we have a mutable or immutable share
hunk ./src/allmydata/scripts/debug.py 45
-    print >>out, "share filename: %s" % quote_output(options['filename'])
+    print >>out, "share filename: %s" % quote_output(filename)
 
hunk ./src/allmydata/scripts/debug.py 47
-    share = get_share("", 0, fp)
+    share = get_share("", 0, FilePath(filename))
     if share.sharetype == "mutable":
         return dump_mutable_share(options, share)
     else:
hunk ./src/allmydata/storage/backends/disk/mutable.py 85
         self.parent = parent # for logging
 
     def log(self, *args, **kwargs):
-        return self.parent.log(*args, **kwargs)
+        if self.parent:
+            return self.parent.log(*args, **kwargs)
 
     def create(self, serverid, write_enabler):
         assert not self._home.exists()
hunk ./src/allmydata/storage/common.py 6
 class DataTooLargeError(Exception):
     pass
 
-class UnknownMutableContainerVersionError(Exception):
+class UnknownContainerVersionError(Exception):
     pass
 
hunk ./src/allmydata/storage/common.py 9
-class UnknownImmutableContainerVersionError(Exception):
+class UnknownMutableContainerVersionError(UnknownContainerVersionError):
+    pass
+
+class UnknownImmutableContainerVersionError(UnknownContainerVersionError):
     pass
 
 
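The new UnknownContainerVersionError base class above lets callers that don't care which container type was malformed catch both variants with one except clause; the test_bad_magic change later in this patch relies on exactly that. A sketch (Python 2; `read` stands for the server's remote_slot_readv, as in the test):

    from allmydata.storage.common import UnknownContainerVersionError

    try:
        read("si1", [0], [(0, 10)])
    except UnknownContainerVersionError, e:
        # catches both the mutable and the immutable variant
        print str(e)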
hunk ./src/allmydata/storage/crawler.py 208
         try:
             state = pickle.loads(self.statefp.getContent())
         except EnvironmentError:
+            if self.statefp.exists():
+                raise
             state = {"version": 1,
                      "last-cycle-finished": None,
                      "current-cycle": None,
hunk ./src/allmydata/storage/server.py 24
 
     name = 'storage'
     LeaseCheckerClass = LeaseCheckingCrawler
+    BucketCounterClass = BucketCountingCrawler
     DEFAULT_EXPIRATION_POLICY = {
         'enabled': False,
         'mode': 'age',
hunk ./src/allmydata/storage/server.py 70
 
     def _setup_bucket_counter(self):
         statefp = self._statedir.child("bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(self.backend, statefp)
+        self.bucket_counter = self.BucketCounterClass(self.backend, statefp)
         self.bucket_counter.setServiceParent(self)
 
     def _setup_lease_checker(self, expiration_policy):
hunk ./src/allmydata/storage/server.py 224
             share.add_or_renew_lease(lease_info)
             alreadygot.add(share.get_shnum())
 
-        for shnum in sharenums - alreadygot:
+        for shnum in set(sharenums) - alreadygot:
             if shareset.has_incoming(shnum):
                 # Note that we don't create BucketWriters for shnums that
                 # have a partial share (in incoming/), so if a second upload
hunk ./src/allmydata/storage/server.py 247
 
     def remote_add_lease(self, storageindex, renew_secret, cancel_secret,
                          owner_num=1):
-        # cancel_secret is no longer used.
         start = time.time()
         self.count("add-lease")
         new_expire_time = time.time() + 31*24*60*60
hunk ./src/allmydata/storage/server.py 250
-        lease_info = LeaseInfo(owner_num, renew_secret,
+        lease_info = LeaseInfo(owner_num, renew_secret, cancel_secret,
                                new_expire_time, self._serverid)
 
         try:
hunk ./src/allmydata/storage/server.py 254
-            self.backend.add_or_renew_lease(lease_info)
+            shareset = self.backend.get_shareset(storageindex)
+            shareset.add_or_renew_lease(lease_info)
         finally:
             self.add_latency("add-lease", time.time() - start)
 
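The set(sharenums) conversion above matters because the tests in this bundle call the allocation path directly with a plain list (e.g. [0,1,2]), and set difference is not defined between a list and a set. A sketch of the failure mode being avoided:

    sharenums = [0, 1, 2]          # as passed directly by the tests
    alreadygot = set([1])
    # "sharenums - alreadygot" raises TypeError for a list; converting
    # first makes the difference well-defined:
    needed = set(sharenums) - alreadygot   # -> set([0, 2])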
hunk ./src/allmydata/test/test_crawler.py 3
 
 import time
-import os.path
+
 from twisted.trial import unittest
 from twisted.application import service
 from twisted.internet import defer
hunk ./src/allmydata/test/test_crawler.py 10
 from twisted.python.filepath import FilePath
 from foolscap.api import eventually, fireEventually
 
-from allmydata.util import fileutil, hashutil, pollmixin
+from allmydata.util import hashutil, pollmixin
 from allmydata.storage.server import StorageServer, si_b2a
 from allmydata.storage.crawler import ShareCrawler, TimeSliceExceeded
 from allmydata.storage.backends.disk.disk_backend import DiskBackend
hunk ./src/allmydata/test/test_mutable.py 3025
             cso.stderr = StringIO()
             debug.catalog_shares(cso)
             shares = cso.stdout.getvalue().splitlines()
+            self.failIf(len(shares) < 1, shares)
             oneshare = shares[0] # all shares should be MDMF
             self.failIf(oneshare.startswith("UNKNOWN"), oneshare)
             self.failUnless(oneshare.startswith("MDMF"), oneshare)
hunk ./src/allmydata/test/test_storage.py 1
-import time, os.path, platform, stat, re, simplejson, struct, shutil, itertools
+import time, os.path, platform, re, simplejson, struct, itertools
 
 import mock
 
hunk ./src/allmydata/test/test_storage.py 15
 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
 from allmydata.storage.server import StorageServer
 from allmydata.storage.backends.disk.disk_backend import DiskBackend
+from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
 from allmydata.storage.backends.disk.mutable import MutableDiskShare
 from allmydata.storage.bucket import BucketWriter, BucketReader
hunk ./src/allmydata/test/test_storage.py 18
-from allmydata.storage.common import DataTooLargeError, \
+from allmydata.storage.common import DataTooLargeError, UnknownContainerVersionError, \
      UnknownMutableContainerVersionError, UnknownImmutableContainerVersionError
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.crawler import BucketCountingCrawler
hunk ./src/allmydata/test/test_storage.py 88
 
     def test_create(self):
         incoming, final = self.make_workdir("test_create")
-        share = ImmutableDiskShare("", 0, incoming, final, 200)
+        share = ImmutableDiskShare("", 0, incoming, final, max_size=200)
         bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
         bw.remote_write(0, "a"*25)
         bw.remote_write(25, "b"*25)
hunk ./src/allmydata/test/test_storage.py 98
 
     def test_readwrite(self):
         incoming, final = self.make_workdir("test_readwrite")
-        share = ImmutableDiskShare("", 0, incoming, 200)
+        share = ImmutableDiskShare("", 0, incoming, final, max_size=200)
         bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
         bw.remote_write(0, "a"*25)
         bw.remote_write(25, "b"*25)
hunk ./src/allmydata/test/test_storage.py 106
         bw.remote_close()
 
         # now read from it
-        br = BucketReader(self, bw.finalhome)
+        br = BucketReader(self, share)
         self.failUnlessEqual(br.remote_read(0, 25), "a"*25)
         self.failUnlessEqual(br.remote_read(25, 25), "b"*25)
         self.failUnlessEqual(br.remote_read(50, 7), "c"*7)
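As the hunks above show, BucketReader is now constructed from a share object instead of a share file path; opening an existing share file goes through ImmutableDiskShare first. A condensed sketch of the new pattern (argument values follow the tests; `ss` is the test's storage server stand-in):

    share = ImmutableDiskShare("", 0, final)   # storage index, shnum, home FilePath
    br = BucketReader(ss, share)               # was: BucketReader(ss, final)
    data = br.remote_read(0, 25)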
hunk ./src/allmydata/test/test_storage.py 131
         ownernumber = struct.pack('>L', 0)
         renewsecret  = 'THIS LETS ME RENEW YOUR FILE....'
         assert len(renewsecret) == 32
-        cancelsecret = 'THIS LETS ME KILL YOUR FILE HAHA'
+        cancelsecret = 'THIS USED TO LET ME KILL YR FILE'
         assert len(cancelsecret) == 32
         expirationtime = struct.pack('>L', 60*60*24*31) # 31 days in seconds
 
hunk ./src/allmydata/test/test_storage.py 142
         incoming, final = self.make_workdir("test_read_past_end_of_share_data")
 
         final.setContent(share_file_data)
+        share = ImmutableDiskShare("", 0, final)
 
         mockstorageserver = mock.Mock()
 
hunk ./src/allmydata/test/test_storage.py 147
         # Now read from it.
-        br = BucketReader(mockstorageserver, final)
+        br = BucketReader(mockstorageserver, share)
 
         self.failUnlessEqual(br.remote_read(0, len(share_data)), share_data)
 
hunk ./src/allmydata/test/test_storage.py 260
 
         # now read everything back
         def _start_reading(res):
-            br = BucketReader(self, sharefp)
+            share = ImmutableDiskShare("", 0, sharefp)
+            br = BucketReader(self, share)
             rb = RemoteBucket()
             rb.target = br
             server = NoNetworkServer("abc", None)
hunk ./src/allmydata/test/test_storage.py 346
         if 'cygwin' in syslow or 'windows' in syslow or 'darwin' in syslow:
             raise unittest.SkipTest("If your filesystem doesn't support efficient sparse files then it is very expensive (Mac OS X and Windows don't support efficient sparse files).")
 
-        avail = fileutil.get_available_space('.', 512*2**20)
+        avail = fileutil.get_available_space(FilePath('.'), 512*2**20)
         if avail <= 4*2**30:
             raise unittest.SkipTest("This test will spuriously fail if you have less than 4 GiB free on your filesystem.")
 
hunk ./src/allmydata/test/test_storage.py 476
         w[0].remote_write(0, "\xff"*10)
         w[0].remote_close()
 
-        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
+        fp = ss.backend.get_shareset("si1")._sharehomedir.child("0")
         f = fp.open("rb+")
hunk ./src/allmydata/test/test_storage.py 478
-        f.seek(0)
-        f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1
-        f.close()
+        try:
+            f.seek(0)
+            f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1
+        finally:
+            f.close()
 
         ss.remote_get_buckets("allocate")
 
hunk ./src/allmydata/test/test_storage.py 575
 
     def test_seek(self):
         basedir = self.workdir("test_seek_behavior")
-        fileutil.make_dirs(basedir)
-        filename = os.path.join(basedir, "testfile")
-        f = open(filename, "wb")
-        f.write("start")
-        f.close()
+        basedir.makedirs()
+        fp = basedir.child("testfile")
+        fp.setContent("start")
+
         # mode="w" allows seeking-to-create-holes, but truncates pre-existing
         # files. mode="a" preserves previous contents but does not allow
         # seeking-to-create-holes. mode="r+" allows both.
hunk ./src/allmydata/test/test_storage.py 582
-        f = open(filename, "rb+")
-        f.seek(100)
-        f.write("100")
-        f.close()
-        filelen = os.stat(filename)[stat.ST_SIZE]
+        f = fp.open("rb+")
+        try:
+            f.seek(100)
+            f.write("100")
+        finally:
+            f.close()
+        fp.restat()
+        filelen = fp.getsize()
         self.failUnlessEqual(filelen, 100+3)
hunk ./src/allmydata/test/test_storage.py 591
-        f2 = open(filename, "rb")
-        self.failUnlessEqual(f2.read(5), "start")
-
+        f2 = fp.open("rb")
+        try:
+            self.failUnlessEqual(f2.read(5), "start")
+        finally:
+            f2.close()
 
     def test_leases(self):
         ss = self.create("test_leases")
hunk ./src/allmydata/test/test_storage.py 693
 
     def test_readonly(self):
         workdir = self.workdir("test_readonly")
-        ss = StorageServer(workdir, "\x00" * 20, readonly_storage=True)
+        backend = DiskBackend(workdir, readonly=True)
+        ss = StorageServer("\x00" * 20, backend, workdir)
         ss.setServiceParent(self.sparent)
 
         already,writers = self.allocate(ss, "vid", [0,1,2], 75)
hunk ./src/allmydata/test/test_storage.py 710
 
     def test_discard(self):
         # discard is really only used for other tests, but we test it anyways
+        # XXX replace this with a null backend test
         workdir = self.workdir("test_discard")
hunk ./src/allmydata/test/test_storage.py 712
-        ss = StorageServer(workdir, "\x00" * 20, discard_storage=True)
+        backend = DiskBackend(workdir, readonly=False, discard_storage=True)
+        ss = StorageServer("\x00" * 20, backend, workdir)
         ss.setServiceParent(self.sparent)
 
         already,writers = self.allocate(ss, "vid", [0,1,2], 75)
hunk ./src/allmydata/test/test_storage.py 731
 
     def test_advise_corruption(self):
         workdir = self.workdir("test_advise_corruption")
-        ss = StorageServer(workdir, "\x00" * 20, discard_storage=True)
+        backend = DiskBackend(workdir, readonly=False, discard_storage=True)
+        ss = StorageServer("\x00" * 20, backend, workdir)
         ss.setServiceParent(self.sparent)
 
         si0_s = base32.b2a("si0")
hunk ./src/allmydata/test/test_storage.py 738
         ss.remote_advise_corrupt_share("immutable", "si0", 0,
                                        "This share smells funny.\n")
-        reportdir = os.path.join(workdir, "corruption-advisories")
-        reports = os.listdir(reportdir)
+        reportdir = workdir.child("corruption-advisories")
+        reports = [child.basename() for child in reportdir.children()]
         self.failUnlessEqual(len(reports), 1)
         report_si0 = reports[0]
hunk ./src/allmydata/test/test_storage.py 742
-        self.failUnlessIn(si0_s, report_si0)
-        f = open(os.path.join(reportdir, report_si0), "r")
-        report = f.read()
-        f.close()
+        self.failUnlessIn(si0_s, str(report_si0))
+        report = reportdir.child(report_si0).getContent()
+
         self.failUnlessIn("type: immutable", report)
         self.failUnlessIn("storage_index: %s" % si0_s, report)
         self.failUnlessIn("share_number: 0", report)
hunk ./src/allmydata/test/test_storage.py 762
         self.failUnlessEqual(set(b.keys()), set([1]))
         b[1].remote_advise_corrupt_share("This share tastes like dust.\n")
 
-        reports = os.listdir(reportdir)
+        reports = [child.basename() for child in reportdir.children()]
         self.failUnlessEqual(len(reports), 2)
hunk ./src/allmydata/test/test_storage.py 764
-        report_si1 = [r for r in reports if si1_s in r][0]
-        f = open(os.path.join(reportdir, report_si1), "r")
-        report = f.read()
-        f.close()
+        report_si1 = [r for r in reports if si1_s in str(r)][0]
+        report = reportdir.child(report_si1).getContent()
+
         self.failUnlessIn("type: immutable", report)
         self.failUnlessIn("storage_index: %s" % si1_s, report)
         self.failUnlessIn("share_number: 1", report)
hunk ./src/allmydata/test/test_storage.py 783
         return self.sparent.stopService()
 
     def workdir(self, name):
-        basedir = os.path.join("storage", "MutableServer", name)
-        return basedir
+        return FilePath("storage").child("MutableServer").child(name)
 
     def create(self, name):
         workdir = self.workdir(name)
hunk ./src/allmydata/test/test_storage.py 787
-        ss = StorageServer(workdir, "\x00" * 20)
+        backend = DiskBackend(workdir)
+        ss = StorageServer("\x00" * 20, backend, workdir)
         ss.setServiceParent(self.sparent)
         return ss
 
hunk ./src/allmydata/test/test_storage.py 810
         cancel_secret = self.cancel_secret(lease_tag)
         rstaraw = ss.remote_slot_testv_and_readv_and_writev
         testandwritev = dict( [ (shnum, ([], [], None) )
-                         for shnum in sharenums ] )
+                                for shnum in sharenums ] )
         readv = []
         rc = rstaraw(storage_index,
                      (write_enabler, renew_secret, cancel_secret),
hunk ./src/allmydata/test/test_storage.py 824
     def test_bad_magic(self):
         ss = self.create("test_bad_magic")
         self.allocate(ss, "si1", "we1", self._lease_secret.next(), set([0]), 10)
-        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
+        fp = ss.backend.get_shareset("si1")._sharehomedir.child("0")
         f = fp.open("rb+")
hunk ./src/allmydata/test/test_storage.py 826
-        f.seek(0)
-        f.write("BAD MAGIC")
-        f.close()
+        try:
+            f.seek(0)
+            f.write("BAD MAGIC")
+        finally:
+            f.close()
         read = ss.remote_slot_readv
hunk ./src/allmydata/test/test_storage.py 832
-        e = self.failUnlessRaises(UnknownMutableContainerVersionError,
+
+        # This used to test for UnknownMutableContainerVersionError,
+        # but the current code raises UnknownImmutableContainerVersionError.
+        # (It changed because remote_slot_readv now works with either
+        # mutable or immutable shares.) Since the share file doesn't have
+        # the mutable magic, it's not clear that this is wrong.
+        # For now, accept either exception.
+        e = self.failUnlessRaises(UnknownContainerVersionError,
                                   read, "si1", [0], [(0,10)])
hunk ./src/allmydata/test/test_storage.py 841
-        self.failUnlessIn(" had magic ", str(e))
+        self.failUnlessIn(" had ", str(e))
         self.failUnlessIn(" but we wanted ", str(e))
 
     def test_container_size(self):
hunk ./src/allmydata/test/test_storage.py 1248
 
         # create a random non-numeric file in the bucket directory, to
         # exercise the code that's supposed to ignore those.
-        bucket_dir = ss.backend.get_shareset("si1").sharehomedir
+        bucket_dir = ss.backend.get_shareset("si1")._sharehomedir
         bucket_dir.child("ignore_me.txt").setContent("you ought to be ignoring me\n")
 
-        s0 = MutableDiskShare(os.path.join(bucket_dir, "0"))
+        s0 = MutableDiskShare("", 0, bucket_dir.child("0"))
         self.failUnlessEqual(len(list(s0.get_leases())), 1)
 
         # add-lease on a missing storage index is silently ignored
hunk ./src/allmydata/test/test_storage.py 1365
         # note: this is a detail of the storage server implementation, and
         # may change in the future
         prefix = si[:2]
-        prefixdir = os.path.join(self.workdir("test_remove"), "shares", prefix)
-        bucketdir = os.path.join(prefixdir, si)
-        self.failUnless(os.path.exists(prefixdir), prefixdir)
-        self.failIf(os.path.exists(bucketdir), bucketdir)
+        prefixdir = self.workdir("test_remove").child("shares").child(prefix)
+        bucketdir = prefixdir.child(si)
+        self.failUnless(prefixdir.exists(), prefixdir)
+        self.failIf(bucketdir.exists(), bucketdir)
 
 
 class MDMFProxies(unittest.TestCase, ShouldFailMixin):
hunk ./src/allmydata/test/test_storage.py 1420
 
 
     def workdir(self, name):
-        basedir = os.path.join("storage", "MutableServer", name)
-        return basedir
-
+        return FilePath("storage").child("MDMFProxies").child(name)
 
     def create(self, name):
         workdir = self.workdir(name)
hunk ./src/allmydata/test/test_storage.py 1424
-        ss = StorageServer(workdir, "\x00" * 20)
+        backend = DiskBackend(workdir)
+        ss = StorageServer("\x00" * 20, backend, workdir)
         ss.setServiceParent(self.sparent)
         return ss
 
hunk ./src/allmydata/test/test_storage.py 2798
         return self.sparent.stopService()
 
     def workdir(self, name):
-        return FilePath("storage").child("Server").child(name)
+        return FilePath("storage").child("Stats").child(name)
 
     def create(self, name):
         workdir = self.workdir(name)
hunk ./src/allmydata/test/test_storage.py 2886
             d.callback(None)
 
 class MyStorageServer(StorageServer):
-    def add_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = MyBucketCountingCrawler(self, statefile)
-        self.bucket_counter.setServiceParent(self)
+    BucketCounterClass = MyBucketCountingCrawler
+
 
 class BucketCounter(unittest.TestCase, pollmixin.PollMixin):
 
hunk ./src/allmydata/test/test_storage.py 2899
 
     def test_bucket_counter(self):
         basedir = "storage/BucketCounter/bucket_counter"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
+
         # to make sure we capture the bucket-counting-crawler in the middle
         # of a cycle, we reach in and reduce its maximum slice time to 0. We
         # also make it start sooner than usual.
hunk ./src/allmydata/test/test_storage.py 2958
 
     def test_bucket_counter_cleanup(self):
         basedir = "storage/BucketCounter/bucket_counter_cleanup"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
+
         # to make sure we capture the bucket-counting-crawler in the middle
         # of a cycle, we reach in and reduce its maximum slice time to 0.
         ss.bucket_counter.slow_start = 0
hunk ./src/allmydata/test/test_storage.py 3002
 
     def test_bucket_counter_eta(self):
         basedir = "storage/BucketCounter/bucket_counter_eta"
-        fileutil.make_dirs(basedir)
-        ss = MyStorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = MyStorageServer("\x00" * 20, backend, fp)
         ss.bucket_counter.slow_start = 0
         # these will be fired inside finished_prefix()
         hooks = ss.bucket_counter.hook_ds = [defer.Deferred() for i in range(3)]
hunk ./src/allmydata/test/test_storage.py 3125
 
     def test_basic(self):
         basedir = "storage/LeaseCrawler/basic"
-        fileutil.make_dirs(basedir)
-        ss = InstrumentedStorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp)
+
         # make it start sooner than usual.
         lc = ss.lease_checker
         lc.slow_start = 0
hunk ./src/allmydata/test/test_storage.py 3141
         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
 
         # add a non-sharefile to exercise another code path
-        fp = ss.backend.get_shareset(immutable_si_0).sharehomedir.child("not-a-share")
+        fp = ss.backend.get_shareset(immutable_si_0)._sharehomedir.child("not-a-share")
         fp.setContent("I am not a share.\n")
 
         # this is before the crawl has started, so we're not in a cycle yet
hunk ./src/allmydata/test/test_storage.py 3264
             self.failUnlessEqual(rec["configured-sharebytes"], 0)
 
             def _get_sharefile(si):
-                return list(ss._iter_share_files(si))[0]
+                return list(ss.backend.get_shareset(si).get_shares())[0]
             def count_leases(si):
                 return len(list(_get_sharefile(si).get_leases()))
             self.failUnlessEqual(count_leases(immutable_si_0), 1)
hunk ./src/allmydata/test/test_storage.py 3296
         for i,lease in enumerate(sf.get_leases()):
             if lease.renew_secret == renew_secret:
                 lease.expiration_time = new_expire_time
-                f = open(sf.home, 'rb+')
-                sf._write_lease_record(f, i, lease)
-                f.close()
+                f = sf._home.open('rb+')
+                try:
+                    sf._write_lease_record(f, i, lease)
+                finally:
+                    f.close()
                 return
         raise IndexError("unable to renew non-existent lease")
 
hunk ./src/allmydata/test/test_storage.py 3306
     def test_expire_age(self):
         basedir = "storage/LeaseCrawler/expire_age"
-        fileutil.make_dirs(basedir)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+
         # setting 'override_lease_duration' to 2000 means that any lease that
         # is more than 2000 seconds old will be expired.
         expiration_policy = {
hunk ./src/allmydata/test/test_storage.py 3317
             'override_lease_duration': 2000,
             'sharetypes': ('mutable', 'immutable'),
         }
-        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
+
         # make it start sooner than usual.
         lc = ss.lease_checker
         lc.slow_start = 0
hunk ./src/allmydata/test/test_storage.py 3330
         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
 
         def count_shares(si):
-            return len(list(ss._iter_share_files(si)))
+            return len(list(ss.backend.get_shareset(si).get_shares()))
         def _get_sharefile(si):
hunk ./src/allmydata/test/test_storage.py 3332
-            return list(ss._iter_share_files(si))[0]
+            return list(ss.backend.get_shareset(si).get_shares())[0]
         def count_leases(si):
             return len(list(_get_sharefile(si).get_leases()))
 
hunk ./src/allmydata/test/test_storage.py 3355
 
         sf0 = _get_sharefile(immutable_si_0)
         self.backdate_lease(sf0, self.renew_secrets[0], now - 1000)
-        sf0_size = os.stat(sf0.home).st_size
+        sf0_size = sf0.get_size()
 
         # immutable_si_1 gets an extra lease
         sf1 = _get_sharefile(immutable_si_1)
hunk ./src/allmydata/test/test_storage.py 3363
 
         sf2 = _get_sharefile(mutable_si_2)
         self.backdate_lease(sf2, self.renew_secrets[3], now - 1000)
-        sf2_size = os.stat(sf2.home).st_size
+        sf2_size = sf2.get_size()
 
         # mutable_si_3 gets an extra lease
         sf3 = _get_sharefile(mutable_si_3)
hunk ./src/allmydata/test/test_storage.py 3450
 
     def test_expire_cutoff_date(self):
         basedir = "storage/LeaseCrawler/expire_cutoff_date"
-        fileutil.make_dirs(basedir)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+
         # setting 'cutoff_date' to 2000 seconds ago means that any lease that
         # is more than 2000 seconds old will be expired.
         now = time.time()
hunk ./src/allmydata/test/test_storage.py 3463
             'cutoff_date': then,
             'sharetypes': ('mutable', 'immutable'),
         }
-        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
+
        # make it start sooner than usual.
         lc = ss.lease_checker
         lc.slow_start = 0
hunk ./src/allmydata/test/test_storage.py 3476
         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
 
         def count_shares(si):
-            return len(list(ss._iter_share_files(si)))
+            return len(list(ss.backend.get_shareset(si).get_shares()))
         def _get_sharefile(si):
hunk ./src/allmydata/test/test_storage.py 3478
-            return list(ss._iter_share_files(si))[0]
+            return list(ss.backend.get_shareset(si).get_shares())[0]
         def count_leases(si):
             return len(list(_get_sharefile(si).get_leases()))
 
hunk ./src/allmydata/test/test_storage.py 3505
 
         sf0 = _get_sharefile(immutable_si_0)
         self.backdate_lease(sf0, self.renew_secrets[0], new_expiration_time)
-        sf0_size = os.stat(sf0.home).st_size
+        sf0_size = sf0.get_size()
 
         # immutable_si_1 gets an extra lease
         sf1 = _get_sharefile(immutable_si_1)
hunk ./src/allmydata/test/test_storage.py 3513
 
         sf2 = _get_sharefile(mutable_si_2)
         self.backdate_lease(sf2, self.renew_secrets[3], new_expiration_time)
-        sf2_size = os.stat(sf2.home).st_size
+        sf2_size = sf2.get_size()
 
         # mutable_si_3 gets an extra lease
         sf3 = _get_sharefile(mutable_si_3)
hunk ./src/allmydata/test/test_storage.py 3605
 
     def test_only_immutable(self):
         basedir = "storage/LeaseCrawler/only_immutable"
-        fileutil.make_dirs(basedir)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+
         # setting 'cutoff_date' to 2000 seconds ago means that any lease that
         # is more than 2000 seconds old will be expired.
         now = time.time()
hunk ./src/allmydata/test/test_storage.py 3618
             'cutoff_date': then,
             'sharetypes': ('immutable',),
         }
-        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
+        ss = StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
         lc = ss.lease_checker
         lc.slow_start = 0
         webstatus = StorageStatus(ss)
hunk ./src/allmydata/test/test_storage.py 3629
         new_expiration_time = now - 3000 + 31*24*60*60
 
         def count_shares(si):
-            return len(list(ss._iter_share_files(si)))
+            return len(list(ss.backend.get_shareset(si).get_shares()))
         def _get_sharefile(si):
hunk ./src/allmydata/test/test_storage.py 3631
-            return list(ss._iter_share_files(si))[0]
+            return list(ss.backend.get_shareset(si).get_shares())[0]
         def count_leases(si):
             return len(list(_get_sharefile(si).get_leases()))
 
hunk ./src/allmydata/test/test_storage.py 3668
 
     def test_only_mutable(self):
         basedir = "storage/LeaseCrawler/only_mutable"
-        fileutil.make_dirs(basedir)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+
         # setting 'cutoff_date' to 2000 seconds ago means that any lease that
         # is more than 2000 seconds old will be expired.
         now = time.time()
hunk ./src/allmydata/test/test_storage.py 3681
             'cutoff_date': then,
             'sharetypes': ('mutable',),
         }
-        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
+        ss = StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
         lc = ss.lease_checker
         lc.slow_start = 0
         webstatus = StorageStatus(ss)
hunk ./src/allmydata/test/test_storage.py 3692
         new_expiration_time = now - 3000 + 31*24*60*60
 
         def count_shares(si):
-            return len(list(ss._iter_share_files(si)))
+            return len(list(ss.backend.get_shareset(si).get_shares()))
         def _get_sharefile(si):
hunk ./src/allmydata/test/test_storage.py 3694
-            return list(ss._iter_share_files(si))[0]
+            return list(ss.backend.get_shareset(si).get_shares())[0]
         def count_leases(si):
             return len(list(_get_sharefile(si).get_leases()))
 
hunk ./src/allmydata/test/test_storage.py 3731
 
     def test_bad_mode(self):
         basedir = "storage/LeaseCrawler/bad_mode"
-        fileutil.make_dirs(basedir)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+
+        expiration_policy = {
+            'enabled': True,
+            'mode': 'bogus',
+            'override_lease_duration': None,
+            'cutoff_date': None,
+            'sharetypes': ('mutable', 'immutable'),
+        }
         e = self.failUnlessRaises(ValueError,
hunk ./src/allmydata/test/test_storage.py 3742
-                                  StorageServer, basedir, "\x00" * 20,
-                                  expiration_mode="bogus")
+                                  StorageServer, "\x00" * 20, backend, fp,
+                                  expiration_policy=expiration_policy)
         self.failUnlessIn("GC mode 'bogus' must be 'age' or 'cutoff-date'", str(e))
 
     def test_parse_duration(self):
hunk ./src/allmydata/test/test_storage.py 3767
 
     def test_limited_history(self):
         basedir = "storage/LeaseCrawler/limited_history"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
+
        # make it start sooner than usual.
         lc = ss.lease_checker
         lc.slow_start = 0
hunk ./src/allmydata/test/test_storage.py 3801
 
     def test_unpredictable_future(self):
         basedir = "storage/LeaseCrawler/unpredictable_future"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
+
        # make it start sooner than usual.
         lc = ss.lease_checker
         lc.slow_start = 0
hunk ./src/allmydata/test/test_storage.py 3866
 
     def test_no_st_blocks(self):
         basedir = "storage/LeaseCrawler/no_st_blocks"
-        fileutil.make_dirs(basedir)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+
         # A negative 'override_lease_duration' means that the "configured-"
         # space-recovered counts will be non-zero, since all shares will have
         # expired by then.
hunk ./src/allmydata/test/test_storage.py 3878
             'override_lease_duration': -1000,
             'sharetypes': ('mutable', 'immutable'),
         }
-        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20, expiration_policy)
+        ss = No_ST_BLOCKS_StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
 
         # make it start sooner than usual.
         lc = ss.lease_checker
hunk ./src/allmydata/test/test_storage.py 3911
             UnknownImmutableContainerVersionError,
             ]
         basedir = "storage/LeaseCrawler/share_corruption"
-        fileutil.make_dirs(basedir)
-        ss = InstrumentedStorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp)
         w = StorageStatus(ss)
         # make it start sooner than usual.
         lc = ss.lease_checker
hunk ./src/allmydata/test/test_storage.py 3928
         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
         first = min(self.sis)
         first_b32 = base32.b2a(first)
-        fp = ss.backend.get_shareset(first).sharehomedir.child("0")
+        fp = ss.backend.get_shareset(first)._sharehomedir.child("0")
         f = fp.open("rb+")
hunk ./src/allmydata/test/test_storage.py 3930
-        f.seek(0)
-        f.write("BAD MAGIC")
-        f.close()
+        try:
+            f.seek(0)
+            f.write("BAD MAGIC")
+        finally:
+            f.close()
         # if get_share_file() doesn't see the correct mutable magic, it
         # assumes the file is an immutable share, and then
         # immutable.ShareFile sees a bad version. So regardless of which kind
hunk ./src/allmydata/test/test_storage.py 3943
 
         # also create an empty bucket
         empty_si = base32.b2a("\x04"*16)
-        empty_bucket_dir = ss.backend.get_shareset(empty_si).sharehomedir
+        empty_bucket_dir = ss.backend.get_shareset(empty_si)._sharehomedir
         fileutil.fp_make_dirs(empty_bucket_dir)
 
         ss.setServiceParent(self.s)
hunk ./src/allmydata/test/test_storage.py 4031
 
     def test_status(self):
         basedir = "storage/WebStatus/status"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
         ss.setServiceParent(self.s)
         w = StorageStatus(ss)
         d = self.render1(w)
hunk ./src/allmydata/test/test_storage.py 4065
         # Some platforms may have no disk stats API. Make sure the code can handle that
         # (test runs on all platforms).
         basedir = "storage/WebStatus/status_no_disk_stats"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
        ss.setServiceParent(self.s)
         w = StorageStatus(ss)
         html = w.renderSynchronously()
hunk ./src/allmydata/test/test_storage.py 4085
         # If the API to get disk stats exists but a call to it fails, then the status should
         # show that no shares will be accepted, and get_available_space() should be 0.
         basedir = "storage/WebStatus/status_bad_disk_stats"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
         ss.setServiceParent(self.s)
         w = StorageStatus(ss)
         html = w.renderSynchronously()
}
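Nearly every fixture change in the patch above is the same constructor migration: the disk backend is built first and handed to StorageServer, whose signature becomes (serverid, backend, statedir, ...) instead of (storedir, serverid, ...). A condensed before/after sketch, following the calls shown in the hunks:

    from twisted.python.filepath import FilePath
    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    # old: ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
    fp = FilePath(basedir)
    backend = DiskBackend(fp)
    ss = StorageServer("\x00" * 20, backend, fp,
                       expiration_policy=expiration_policy)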
[Fix most of the crawler tests. refs #999
david-sarah@jacaranda.org**20110922183008
 Ignore-this: 116c0848008f3989ba78d87c07ec783c
] {
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 160
         self._discard_storage = discard_storage
 
     def get_overhead(self):
-        return (fileutil.get_disk_usage(self._sharehomedir) +
-                fileutil.get_disk_usage(self._incominghomedir))
+        return (fileutil.get_used_space(self._sharehomedir) +
+                fileutil.get_used_space(self._incominghomedir))
 
     def get_shares(self):
         """
hunk ./src/allmydata/storage/crawler.py 2
 
-import time, struct
-import cPickle as pickle
+import time, pickle, struct
 from twisted.internet import reactor
 from twisted.application import service
 
hunk ./src/allmydata/storage/crawler.py 205
         #                            shareset to be processed, or None if we
         #                            are sleeping between cycles
         try:
-            state = pickle.loads(self.statefp.getContent())
+            pickled = self.statefp.getContent()
         except EnvironmentError:
             if self.statefp.exists():
                 raise
hunk ./src/allmydata/storage/crawler.py 215
                      "last-complete-prefix": None,
                      "last-complete-bucket": None,
                      }
+        else:
+            state = pickle.loads(pickled)
+
         state.setdefault("current-cycle-start-time", time.time()) # approximate
         self.state = state
         lcp = state["last-complete-prefix"]
hunk ./src/allmydata/storage/crawler.py 246
         else:
             last_complete_prefix = self.prefixes[lcpi]
         self.state["last-complete-prefix"] = last_complete_prefix
-        self.statefp.setContent(pickle.dumps(self.state))
+        pickled = pickle.dumps(self.state)
+        self.statefp.setContent(pickled)
 
     def startService(self):
         # arrange things to look like we were just sleeping, so
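The two crawler.py hunks above restructure state loading so that a missing state file falls back to defaults while a present-but-unreadable one still raises. The resulting control flow, sketched in isolation (default keys abbreviated; `statefp` is a Twisted FilePath, as in crawler.py):

    import pickle

    try:
        pickled = statefp.getContent()
    except EnvironmentError:
        if statefp.exists():
            raise                  # real I/O error: propagate it
        state = {"version": 1}     # first run: default state (more keys above)
    else:
        state = pickle.loads(pickled)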
hunk ./src/allmydata/storage/expirer.py 86
         # initialize history
         if not self.historyfp.exists():
             history = {} # cyclenum -> dict
-            self.historyfp.setContent(pickle.dumps(history))
+            pickled = pickle.dumps(history)
+            self.historyfp.setContent(pickled)
 
     def create_empty_cycle_dict(self):
         recovered = self.create_empty_recovered_dict()
hunk ./src/allmydata/storage/expirer.py 111
     def started_cycle(self, cycle):
         self.state["cycle-to-date"] = self.create_empty_cycle_dict()
 
-    def process_storage_index(self, cycle, prefix, container):
+    def process_shareset(self, cycle, prefix, shareset):
         would_keep_shares = []
         wks = None
hunk ./src/allmydata/storage/expirer.py 114
-        sharetype = None
 
hunk ./src/allmydata/storage/expirer.py 115
-        for share in container.get_shares():
-            sharetype = share.sharetype
+        for share in shareset.get_shares():
             try:
                 wks = self.process_share(share)
             except (UnknownMutableContainerVersionError,
hunk ./src/allmydata/storage/expirer.py 128
                 wks = (1, 1, 1, "unknown")
             would_keep_shares.append(wks)
 
-        container_type = None
+        shareset_type = None
         if wks:
hunk ./src/allmydata/storage/expirer.py 130
-            # use the last share's sharetype as the container type
-            container_type = wks[3]
+            # use the last share's type as the shareset type
+            shareset_type = wks[3]
         rec = self.state["cycle-to-date"]["space-recovered"]
         self.increment(rec, "examined-buckets", 1)
hunk ./src/allmydata/storage/expirer.py 134
-        if sharetype:
-            self.increment(rec, "examined-buckets-"+container_type, 1)
+        if shareset_type:
+            self.increment(rec, "examined-buckets-"+shareset_type, 1)
 
hunk ./src/allmydata/storage/expirer.py 137
-        container_diskbytes = container.get_overhead()
+        shareset_diskbytes = shareset.get_overhead()
 
         if sum([wks[0] for wks in would_keep_shares]) == 0:
hunk ./src/allmydata/storage/expirer.py 140
-            self.increment_container_space("original", container_diskbytes, sharetype)
+            self.increment_shareset_space("original", shareset_diskbytes, shareset_type)
         if sum([wks[1] for wks in would_keep_shares]) == 0:
hunk ./src/allmydata/storage/expirer.py 142
-            self.increment_container_space("configured", container_diskbytes, sharetype)
+            self.increment_shareset_space("configured", shareset_diskbytes, shareset_type)
         if sum([wks[2] for wks in would_keep_shares]) == 0:
hunk ./src/allmydata/storage/expirer.py 144
-            self.increment_container_space("actual", container_diskbytes, sharetype)
+            self.increment_shareset_space("actual", shareset_diskbytes, shareset_type)
 
     def process_share(self, share):
         sharetype = share.sharetype
hunk ./src/allmydata/storage/expirer.py 189
 
         so_far = self.state["cycle-to-date"]
         self.increment(so_far["leases-per-share-histogram"], num_leases, 1)
-        self.increment_space("examined", diskbytes, sharetype)
+        self.increment_space("examined", sharebytes, diskbytes, sharetype)
 
         would_keep_share = [1, 1, 1, sharetype]
 
hunk ./src/allmydata/storage/expirer.py 220
             self.increment(so_far_sr, a+"-sharebytes-"+sharetype, sharebytes)
             self.increment(so_far_sr, a+"-diskbytes-"+sharetype, diskbytes)
 
-    def increment_container_space(self, a, container_diskbytes, container_type):
+    def increment_shareset_space(self, a, shareset_diskbytes, shareset_type):
         rec = self.state["cycle-to-date"]["space-recovered"]
-        self.increment(rec, a+"-diskbytes", container_diskbytes)
+        self.increment(rec, a+"-diskbytes", shareset_diskbytes)
         self.increment(rec, a+"-buckets", 1)
hunk ./src/allmydata/storage/expirer.py 224
-        if container_type:
-            self.increment(rec, a+"-diskbytes-"+container_type, container_diskbytes)
-            self.increment(rec, a+"-buckets-"+container_type, 1)
+        if shareset_type:
+            self.increment(rec, a+"-diskbytes-"+shareset_type, shareset_diskbytes)
+            self.increment(rec, a+"-buckets-"+shareset_type, 1)
 
     def increment(self, d, k, delta=1):
         if k not in d:
hunk ./src/allmydata/storage/expirer.py 280
         # copy() needs to become a deepcopy
         h["space-recovered"] = s["space-recovered"].copy()
 
-        history = pickle.loads(self.historyfp.getContent())
+        pickled = self.historyfp.getContent()
+        history = pickle.loads(pickled)
         history[cycle] = h
         while len(history) > 10:
             oldcycles = sorted(history.keys())
hunk ./src/allmydata/storage/expirer.py 286
             del history[oldcycles[0]]
-        self.historyfp.setContent(pickle.dumps(history))
+        repickled = pickle.dumps(history)
+        self.historyfp.setContent(repickled)
 
     def get_state(self):
         """In addition to the crawler state described in
hunk ./src/allmydata/storage/expirer.py 356
         progress = self.get_progress()
 
         state = ShareCrawler.get_state(self) # does a shallow copy
-        history = pickle.loads(self.historyfp.getContent())
+        pickled = self.historyfp.getContent()
+        history = pickle.loads(pickled)
         state["history"] = history
 
         if not progress["cycle-in-progress"]:
hunk ./src/allmydata/test/test_crawler.py 25
         ShareCrawler.__init__(self, *args, **kwargs)
         self.all_buckets = []
         self.finished_d = defer.Deferred()
-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
-        self.all_buckets.append(storage_index_b32)
+
+    def process_shareset(self, cycle, prefix, shareset):
+        self.all_buckets.append(shareset.get_storage_index_string())
+
     def finished_cycle(self, cycle):
         eventually(self.finished_d.callback, None)
 
hunk ./src/allmydata/test/test_crawler.py 41
         self.all_buckets = []
         self.finished_d = defer.Deferred()
         self.yield_cb = None
-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
-        self.all_buckets.append(storage_index_b32)
+
+    def process_shareset(self, cycle, prefix, shareset):
+        self.all_buckets.append(shareset.get_storage_index_string())
         self.countdown -= 1
         if self.countdown == 0:
             # force a timeout. We restore it in yielding()
hunk ./src/allmydata/test/test_crawler.py 66
         self.accumulated = 0.0
         self.cycles = 0
         self.last_yield = 0.0
-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
+
+    def process_shareset(self, cycle, prefix, shareset):
         start = time.time()
         time.sleep(0.05)
         elapsed = time.time() - start
hunk ./src/allmydata/test/test_crawler.py 85
         ShareCrawler.__init__(self, *args, **kwargs)
         self.counter = 0
         self.finished_d = defer.Deferred()
-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
+
+    def process_shareset(self, cycle, prefix, shareset):
         self.counter += 1
     def finished_cycle(self, cycle):
         self.finished_d.callback(None)
hunk ./src/allmydata/test/test_storage.py 3041
 
 class InstrumentedLeaseCheckingCrawler(LeaseCheckingCrawler):
     stop_after_first_bucket = False
-    def process_bucket(self, *args, **kwargs):
-        LeaseCheckingCrawler.process_bucket(self, *args, **kwargs)
+
+    def process_shareset(self, cycle, prefix, shareset):
+        LeaseCheckingCrawler.process_shareset(self, cycle, prefix, shareset)
         if self.stop_after_first_bucket:
             self.stop_after_first_bucket = False
             self.cpu_slice = -1.0
hunk ./src/allmydata/test/test_storage.py 3051
         if not self.stop_after_first_bucket:
             self.cpu_slice = 500
 
+class InstrumentedStorageServer(StorageServer):
+    LeaseCheckerClass = InstrumentedLeaseCheckingCrawler
+
+
 class BrokenStatResults:
     pass
 class No_ST_BLOCKS_LeaseCheckingCrawler(LeaseCheckingCrawler):
hunk ./src/allmydata/test/test_storage.py 3069
             setattr(bsr, attrname, getattr(s, attrname))
         return bsr
 
-class InstrumentedStorageServer(StorageServer):
-    LeaseCheckerClass = InstrumentedLeaseCheckingCrawler
 class No_ST_BLOCKS_StorageServer(StorageServer):
     LeaseCheckerClass = No_ST_BLOCKS_LeaseCheckingCrawler
 
}
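The crawler test changes above all migrate the subclass hook from process_bucket(cycle, prefix, prefixdir, storage_index_b32) to process_shareset(cycle, prefix, shareset), with the storage index now read off the shareset object itself. A minimal subclass under the new interface (the class name is hypothetical; the body mirrors the test_crawler.py hunks):

    from allmydata.storage.crawler import ShareCrawler

    class EnumeratingCrawler(ShareCrawler):
        def __init__(self, *args, **kwargs):
            ShareCrawler.__init__(self, *args, **kwargs)
            self.all_buckets = []

        def process_shareset(self, cycle, prefix, shareset):
            self.all_buckets.append(shareset.get_storage_index_string())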
9668[Reinstate the cancel_lease methods of ImmutableDiskShare and MutableDiskShare, since they are needed for lease expiry. refs #999
9669david-sarah@jacaranda.org**20110922183323
9670 Ignore-this: a11fb0dd0078ff627cb727fc769ec848
9671] {
9672hunk ./src/allmydata/storage/backends/disk/immutable.py 260
9673         except IndexError:
9674             self.add_lease(lease_info)
9675 
9676+    def cancel_lease(self, cancel_secret):
9677+        """Remove a lease with the given cancel_secret. If the last lease is
9678+        cancelled, the file will be removed. Return the number of bytes that
9679+        were freed (by truncating the list of leases, and possibly by
9680+        deleting the file). Raise IndexError if there was no lease with the
9681+        given cancel_secret.
9682+        """
9683+
9684+        leases = list(self.get_leases())
9685+        num_leases_removed = 0
9686+        for i, lease in enumerate(leases):
9687+            if constant_time_compare(lease.cancel_secret, cancel_secret):
9688+                leases[i] = None
9689+                num_leases_removed += 1
9690+        if not num_leases_removed:
9691+            raise IndexError("unable to find matching lease to cancel")
9692+
9693+        space_freed = 0
9694+        if num_leases_removed:
9695+            # pack and write out the remaining leases. We write these out in
9696+            # the same order as they were added, so that if we crash while
9697+            # doing this, we won't lose any non-cancelled leases.
9698+            leases = [l for l in leases if l] # remove the cancelled leases
9699+            if len(leases) > 0:
9700+                f = self._home.open('rb+')
9701+                try:
9702+                    for i, lease in enumerate(leases):
9703+                        self._write_lease_record(f, i, lease)
9704+                    self._write_num_leases(f, len(leases))
9705+                    self._truncate_leases(f, len(leases))
9706+                finally:
9707+                    f.close()
9708+                space_freed = self.LEASE_SIZE * num_leases_removed
9709+            else:
9710+                space_freed = fileutil.get_used_space(self._home)
9711+                self.unlink()
9712+        return space_freed
9713+
9714hunk ./src/allmydata/storage/backends/disk/mutable.py 361
9715         except IndexError:
9716             self.add_lease(lease_info)
9717 
9718+    def cancel_lease(self, cancel_secret):
9719+        """Remove any leases with the given cancel_secret. If the last lease
9720+        is cancelled, the file will be removed. Return the number of bytes
9721+        that were freed (by truncating the list of leases, and possibly by
9722+        deleting the file). Raise IndexError if there was no lease with the
9723+        given cancel_secret."""
9724+
9725+        # XXX can this be more like ImmutableDiskShare.cancel_lease?
9726+
9727+        accepting_nodeids = set()
9728+        modified = 0
9729+        remaining = 0
9730+        blank_lease = LeaseInfo(owner_num=0,
9731+                                renew_secret="\x00"*32,
9732+                                cancel_secret="\x00"*32,
9733+                                expiration_time=0,
9734+                                nodeid="\x00"*20)
9735+        f = self._home.open('rb+')
9736+        try:
9737+            for (leasenum, lease) in self._enumerate_leases(f):
9738+                accepting_nodeids.add(lease.nodeid)
9739+                if constant_time_compare(lease.cancel_secret, cancel_secret):
9740+                    self._write_lease_record(f, leasenum, blank_lease)
9741+                    modified += 1
9742+                else:
9743+                    remaining += 1
9744+            if modified:
9745+                freed_space = self._pack_leases(f)
9746+        finally:
9747+            f.close()
9748+
9749+        if modified > 0:
9750+            if remaining == 0:
9751+                freed_space = fileutil.get_used_space(self._home)
9752+                self.unlink()
9753+            return freed_space
9754+
9755+        msg = ("Unable to cancel non-existent lease. I have leases "
9756+               "accepted by nodeids: ")
9757+        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
9758+                         for anid in accepting_nodeids])
9759+        msg += "."
9760+        raise IndexError(msg)
9761+
9762+    def _pack_leases(self, f):
9763+        # TODO: reclaim space from cancelled leases
9764+        return 0
9765+
9766     def _read_write_enabler_and_nodeid(self, f):
9767         f.seek(0)
9768         data = f.read(self.HEADER_SIZE)
9769}
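Note that MutableDiskShare._pack_leases above is a placeholder: cancelled lease records
are blanked in place, but the method returns 0, so cancel_lease on a mutable share
reports no space freed unless the last lease goes away and the whole share is unlinked.
A hedged sketch of what reclaiming that space might look like, reusing _enumerate_leases
and _write_lease_record from the hunk above (the blank-record test, the LEASE_SIZE
attribute, and the omitted truncation step are assumptions, not part of this patch):

    def _pack_leases(self, f):
        # Sketch only. Skip the blanked records that cancel_lease() wrote;
        # they carry an all-zero renew_secret.
        records = list(self._enumerate_leases(f))
        live = [lease for (leasenum, lease) in records
                if lease.renew_secret != "\x00"*32]
        # Rewrite the surviving leases contiguously at the lowest slots.
        for i, lease in enumerate(live):
            self._write_lease_record(f, i, lease)
        # A real implementation would also update the stored lease count and
        # truncate the container; LEASE_SIZE is assumed here by analogy with
        # ImmutableDiskShare.
        return self.LEASE_SIZE * (len(records) - len(live))

Until something along these lines is implemented, callers should expect cancel_lease on
a mutable share to return 0 whenever any lease survives.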
9770
9771Context:
9772
9773[test_mutable.py: skip test_publish_surprise_mdmf, which is causing an error. refs #1534, #393
9774david-sarah@jacaranda.org**20110920183319
9775 Ignore-this: 6fb020e09e8de437cbcc2c9f57835b31
9776]
9777[test/test_mutable: write publish surprise test for MDMF, rename existing test_publish_surprise to clarify that it is for SDMF
9778kevan@isnotajoke.com**20110918003657
9779 Ignore-this: 722c507e8f5b537ff920e0555951059a
9780]
9781[test/test_mutable: refactor publish surprise test into common test fixture, rewrite test_publish_surprise to use test fixture
9782kevan@isnotajoke.com**20110918003533
9783 Ignore-this: 6f135888d400a99a09b5f9a4be443b6e
9784]
9785[mutable/publish: add errback immediately after write, don't consume errors from other parts of the publisher
9786kevan@isnotajoke.com**20110917234708
9787 Ignore-this: 12bf6b0918a5dc5ffc30ece669fad51d
9788]
9789[.darcs-boringfile: minor cleanups.
9790david-sarah@jacaranda.org**20110920154918
9791 Ignore-this: cab78e30d293da7e2832207dbee2ffeb
9792]
9793[uri.py: fix two interface violations in verifier URI classes. refs #1474
9794david-sarah@jacaranda.org**20110920030156
9795 Ignore-this: 454ddd1419556cb1d7576d914cb19598
9796]
9797[Make platform-detection code tolerate linux-3.0, patch by zooko.
9798Brian Warner <warner@lothar.com>**20110915202620
9799 Ignore-this: af63cf9177ae531984dea7a1cad03762
9800 
9801 Otherwise address-autodetection can't find ifconfig. refs #1536
9802]
9803[test_web.py: fix a bug in _count_leases that was causing us to check only the lease count of one share file, not of all share files as intended.
9804david-sarah@jacaranda.org**20110915185126
9805 Ignore-this: d96632bc48d770b9b577cda1bbd8ff94
9806]
9807[docs: insert a newline at the beginning of known_issues.rst to see if this makes it render more nicely in trac
9808zooko@zooko.com**20110914064728
9809 Ignore-this: aca15190fa22083c5d4114d3965f5d65
9810]
9811[docs: remove the coding: utf-8 declaration at the top of known_issues.rst, since the trac rendering doesn't hide it
9812zooko@zooko.com**20110914055713
9813 Ignore-this: 941ed32f83ead377171aa7a6bd198fcf
9814]
9815[docs: more cleanup of known_issues.rst -- now it passes "rst2html --verbose" without comment
9816zooko@zooko.com**20110914055419
9817 Ignore-this: 5505b3d76934bd97d0312cc59ed53879
9818]
9819[docs: more formatting improvements to known_issues.rst
9820zooko@zooko.com**20110914051639
9821 Ignore-this: 9ae9230ec9a38a312cbacaf370826691
9822]
9823[docs: reformatting of known_issues.rst
9824zooko@zooko.com**20110914050240
9825 Ignore-this: b8be0375079fb478be9d07500f9aaa87
9826]
9827[docs: fix formatting error in docs/known_issues.rst
9828zooko@zooko.com**20110914045909
9829 Ignore-this: f73fe74ad2b9e655aa0c6075acced15a
9830]
9831[merge Tahoe-LAFS v1.8.3 release announcement with trunk
9832zooko@zooko.com**20110913210544
9833 Ignore-this: 163f2c3ddacca387d7308e4b9332516e
9834]
9835[docs: release notes for Tahoe-LAFS v1.8.3
9836zooko@zooko.com**20110913165826
9837 Ignore-this: 84223604985b14733a956d2fbaeb4e9f
9838]
9839[tests: bump up the timeout in this test that fails on FreeStorm's CentOS in order to see if it is just very slow
9840zooko@zooko.com**20110913024255
9841 Ignore-this: 6a86d691e878cec583722faad06fb8e4
9842]
9843[interfaces: document that the 'fills-holes-with-zero-bytes' key should be used to detect whether a storage server has that behavior. refs #1528
9844david-sarah@jacaranda.org**20110913002843
9845 Ignore-this: 1a00a6029d40f6792af48c5578c1fd69
9846]
9847[CREDITS: more CREDITS for Kevan and David-Sarah
9848zooko@zooko.com**20110912223357
9849 Ignore-this: 4ea8f0d6f2918171d2f5359c25ad1ada
9850]
9851[merge NEWS about the mutable file bounds fixes with NEWS about work-in-progress
9852zooko@zooko.com**20110913205521
9853 Ignore-this: 4289a4225f848d6ae6860dd39bc92fa8
9854]
9855[doc: add NEWS item about fixes to potential palimpsest issues in mutable files
9856zooko@zooko.com**20110912223329
9857 Ignore-this: 9d63c95ddf95c7d5453c94a1ba4d406a
9858 ref. #1528
9859]
9860[merge the NEWS about the security fix (#1528) with the work-in-progress NEWS
9861zooko@zooko.com**20110913205153
9862 Ignore-this: 88e88a2ad140238c62010cf7c66953fc
9863]
9864[doc: add NEWS entry about the issue which allows unauthorized deletion of shares
9865zooko@zooko.com**20110912223246
9866 Ignore-this: 77e06d09103d2ef6bb51ea3e5d6e80b0
9867 ref. #1528
9868]
9869[doc: add entry in known_issues.rst about the issue which allows unauthorized deletion of shares
9870zooko@zooko.com**20110912223135
9871 Ignore-this: b26c6ea96b6c8740b93da1f602b5a4cd
9872 ref. #1528
9873]
9874[storage: more paranoid handling of bounds and palimpsests in mutable share files
9875zooko@zooko.com**20110912222655
9876 Ignore-this: a20782fa423779ee851ea086901e1507
9877 * storage server ignores requests to extend shares by sending a new_length
9878 * storage server fills exposed holes (created by sending a write vector whose offset begins after the end of the current data) with 0 to avoid "palimpsest" exposure of previous contents
9879 * storage server zeroes out lease info at the old location when moving it to a new location
9880 ref. #1528
9881]
9882[storage: test that the storage server ignores requests to extend shares by sending a new_length, and that the storage server fills exposed holes with 0 to avoid "palimpsest" exposure of previous contents
9883zooko@zooko.com**20110912222554
9884 Ignore-this: 61ebd7b11250963efdf5b1734a35271
9885 ref. #1528
9886]
9887[immutable: prevent clients from reading past the end of share data, which would allow them to learn the cancellation secret
9888zooko@zooko.com**20110912222458
9889 Ignore-this: da1ebd31433ea052087b75b2e3480c25
9890 Declare explicitly that we prevent this problem in the server's version dict.
9891 fixes #1528 (there are two patches that are each a sufficient fix to #1528 and this is one of them)
9892]
9893[storage: remove the storage server's "remote_cancel_lease" function
9894zooko@zooko.com**20110912222331
9895 Ignore-this: 1c32dee50e0981408576daffad648c50
9896 We're removing this function because it is currently unused, because it is dangerous, and because the bug described in #1528 leaks the cancellation secret, which allows anyone who knows a file's storage index to abuse this function to delete shares of that file.
9897 fixes #1528 (there are two patches that are each a sufficient fix to #1528 and this is one of them)
9898]
9899[storage: test that the storage server does *not* have a "remote_cancel_lease" function
9900zooko@zooko.com**20110912222324
9901 Ignore-this: 21c652009704652d35f34651f98dd403
9902 We're removing this function because it is currently unused, because it is dangerous, and because the bug described in #1528 leaks the cancellation secret, which allows anyone who knows a file's storage index to abuse this function to delete shares of that file.
9903 ref. #1528
9904]
9905[immutable: test whether the server allows clients to read past the end of share data, which would allow them to learn the cancellation secret
9906zooko@zooko.com**20110912221201
9907 Ignore-this: 376e47b346c713d37096531491176349
9908 Also test whether the server explicitly declares that it prevents this problem.
9909 ref #1528
9910]
9911[Retrieve._activate_enough_peers: rewrite Verify logic
9912Brian Warner <warner@lothar.com>**20110909181150
9913 Ignore-this: 9367c11e1eacbf025f75ce034030d717
9914]
9915[Retrieve: implement/test stopProducing
9916Brian Warner <warner@lothar.com>**20110909181150
9917 Ignore-this: 47b2c3df7dc69835e0a066ca12e3c178
9918]
9919[move DownloadStopped from download.common to interfaces
9920Brian Warner <warner@lothar.com>**20110909181150
9921 Ignore-this: 8572acd3bb16e50341dbed8eb1d90a50
9922]
9923[retrieve.py: remove vestigal self._validated_readers
9924Brian Warner <warner@lothar.com>**20110909181150
9925 Ignore-this: faab2ec14e314a53a2ffb714de626e2d
9926]
9927[Retrieve: rewrite flow-control: use a top-level loop() to catch all errors
9928Brian Warner <warner@lothar.com>**20110909181150
9929 Ignore-this: e162d2cd53b3d3144fc6bc757e2c7714
9930 
9931 This ought to close the potential for dropped errors and hanging downloads.
9932 Verify needs to be examined; I may have broken it, although all tests pass.
9933]
9934[Retrieve: merge _validate_active_prefixes into _add_active_peers
9935Brian Warner <warner@lothar.com>**20110909181150
9936 Ignore-this: d3ead31e17e69394ae7058eeb5beaf4c
9937]
9938[Retrieve: remove the initial prefix-is-still-good check
9939Brian Warner <warner@lothar.com>**20110909181150
9940 Ignore-this: da66ee51c894eaa4e862e2dffb458acc
9941 
9942 This check needs to be done with each fetch from the storage server, to
9943 detect when someone has changed the share (i.e. our servermap goes stale).
9944 Doing it just once at the beginning of retrieve isn't enough: a write might
9945 occur after the first segment but before the second, etc.
9946 
9947 _try_to_validate_prefix() was not removed: it will be used by the future
9948 check-with-each-fetch code.
9949 
9950 test_mutable.Roundtrip.test_corrupt_all_seqnum_late was disabled, since it
9951 fails until this check is brought back. (The corruption it applies only
9952 touches the prefix, not the block data, so the check-less retrieve actually
9953 tolerates it). Don't forget to re-enable it once the check is brought back.
9954]
9955[MDMFSlotReadProxy: remove the queue
9956Brian Warner <warner@lothar.com>**20110909181150
9957 Ignore-this: 96673cb8dda7a87a423de2f4897d66d2
9958 
9959 This is a neat trick to reduce Foolscap overhead, but the need for an
9960 explicit flush() complicates the Retrieve path and makes it prone to
9961 lost-progress bugs.
9962 
9963 Also change test_mutable.FakeStorageServer to tolerate multiple reads of the
9964 same share in a row, a limitation exposed by turning off the queue.
9965]
9966[rearrange Retrieve: first step, shouldn't change order of execution
9967Brian Warner <warner@lothar.com>**20110909181149
9968 Ignore-this: e3006368bfd2802b82ea45c52409e8d6
9969]
9970[CLI: test_cli.py -- remove an unnecessary call in test_mkdir_mutable_type. refs #1527
9971david-sarah@jacaranda.org**20110906183730
9972 Ignore-this: 122e2ffbee84861c32eda766a57759cf
9973]
9974[CLI: improve test for 'tahoe mkdir --mutable-type='. refs #1527
9975david-sarah@jacaranda.org**20110906183020
9976 Ignore-this: f1d4598e6c536f0a2b15050b3bc0ef9d
9977]
9978[CLI: make the --mutable-type option value for 'tahoe put' and 'tahoe mkdir' case-insensitive, and change --help for these commands accordingly. fixes #1527
9979david-sarah@jacaranda.org**20110905020922
9980 Ignore-this: 75a6df0a2df9c467d8c010579e9a024e
9981]
9982[cli: make --mutable-type imply --mutable in 'tahoe put'
9983Kevan Carstensen <kevan@isnotajoke.com>**20110903190920
9984 Ignore-this: 23336d3c43b2a9554e40c2a11c675e93
9985]
9986[SFTP: add a comment about a subtle interaction between OverwriteableFileConsumer and GeneralSFTPFile, and test the case it is commenting on.
9987david-sarah@jacaranda.org**20110903222304
9988 Ignore-this: 980c61d4dd0119337f1463a69aeebaf0
9989]
9990[improve the storage/mutable.py asserts even more
9991warner@lothar.com**20110901160543
9992 Ignore-this: 5b2b13c49bc4034f96e6e3aaaa9a9946
9993]
9994[storage/mutable.py: special characters in struct.foo arguments indicate standard as opposed to native sizes; we should be using these characters in these asserts
9995wilcoxjg@gmail.com**20110901084144
9996 Ignore-this: 28ace2b2678642e4d7269ddab8c67f30
9997]
9998[docs/write_coordination.rst: fix formatting and add more specific warning about access via sshfs.
9999david-sarah@jacaranda.org**20110831232148
10000 Ignore-this: cd9c851d3eb4e0a1e088f337c291586c
10001]
10002[test_mutable.Version: consolidate some tests, reduce runtime from 19s to 15s
10003warner@lothar.com**20110831050451
10004 Ignore-this: 64815284d9e536f8f3798b5f44cf580c
10005]
10006[mutable/retrieve: handle the case where self._read_length is 0.
10007Kevan Carstensen <kevan@isnotajoke.com>**20110830210141
10008 Ignore-this: fceafbe485851ca53f2774e5a4fd8d30
10009 
10010 Note that the downloader will still fetch a segment for a zero-length
10011 read, which is wasteful. Fixing that isn't specifically required to fix
10012 #1512, but it should probably be fixed before 1.9.
10013]
10014[NEWS: added summary of all changes since 1.8.2. Needs editing.
10015Brian Warner <warner@lothar.com>**20110830163205
10016 Ignore-this: 273899b37a899fc6919b74572454b8b2
10017]
10018[test_mutable.Update: only upload the files needed for each test. refs #1500
10019Brian Warner <warner@lothar.com>**20110829072717
10020 Ignore-this: 4d2ab4c7523af9054af7ecca9c3d9dc7
10021 
10022 This first step shaves 15% off the runtime: from 139s to 119s on my laptop.
10023 It also fixes a couple of places where a Deferred was being dropped, which
10024 would cause two tests to run in parallel and also confuse error reporting.
10025]
10026[Let Uploader retain History instead of passing it into upload(). Fixes #1079.
10027Brian Warner <warner@lothar.com>**20110829063246
10028 Ignore-this: 3902c58ec12bd4b2d876806248e19f17
10029 
10030 This consistently records all immutable uploads in the Recent Uploads And
10031 Downloads page, regardless of code path. Previously, certain webapi upload
10032 operations (like PUT /uri/$DIRCAP/newchildname) failed to pass the History
10033 object and were left out.
10034]
10035[Fix mutable publish/retrieve timing status displays. Fixes #1505.
10036Brian Warner <warner@lothar.com>**20110828232221
10037 Ignore-this: 4080ce065cf481b2180fd711c9772dd6
10038 
10039 publish:
10040 * encrypt and encode times are cumulative, not just current-segment
10041 
10042 retrieve:
10043 * same for decrypt and decode times
10044 * update "current status" to include segment number
10045 * set status to Finished/Failed when download is complete
10046 * set progress to 1.0 when complete
10047 
10048 More improvements to consider:
10049 * progress is currently 0% or 100%: should calculate how many segments are
10050   involved (remembering retrieve can be less than the whole file) and set it
10051   to a fraction
10052 * "fetch" time is fuzzy: what we want is to know how much of the delay is not
10053   our own fault, but since we do decode/decrypt work while waiting for more
10054   shares, it's not straightforward
10055]
10056[Teach 'tahoe debug catalog-shares' about MDMF. Closes #1507.
10057Brian Warner <warner@lothar.com>**20110828080931
10058 Ignore-this: 56ef2951db1a648353d7daac6a04c7d1
10059]
10060[debug.py: remove some dead comments
10061Brian Warner <warner@lothar.com>**20110828074556
10062 Ignore-this: 40e74040dd4d14fd2f4e4baaae506b31
10063]
10064[hush pyflakes
10065Brian Warner <warner@lothar.com>**20110828074254
10066 Ignore-this: bef9d537a969fa82fe4decc4ba2acb09
10067]
10068[MutableFileNode.set_downloader_hints: never depend upon order of dict.values()
10069Brian Warner <warner@lothar.com>**20110828074103
10070 Ignore-this: caaf1aa518dbdde4d797b7f335230faa
10071 
10072 The old code was calculating the "extension parameters" (a list) from the
10073 downloader hints (a dictionary) with hints.values(), which is not stable, and
10074 would result in corrupted filecaps (with the 'k' and 'segsize' hints
10075 occasionally swapped). The new code always uses [k,segsize].
10076]
10077[layout.py: fix MDMF share layout documentation
10078Brian Warner <warner@lothar.com>**20110828073921
10079 Ignore-this: 3f13366fed75b5e31b51ae895450a225
10080]
10081[teach 'tahoe debug dump-share' about MDMF and offsets. refs #1507
10082Brian Warner <warner@lothar.com>**20110828073834
10083 Ignore-this: 3a9d2ef9c47a72bf1506ba41199a1dea
10084]
10085[test_mutable.Version.test_debug: use splitlines() to fix buildslaves
10086Brian Warner <warner@lothar.com>**20110828064728
10087 Ignore-this: c7f6245426fc80b9d1ae901d5218246a
10088 
10089 Any slave running in a directory with spaces in the name was miscounting
10090 shares, causing the test to fail.
10091]
10092[test_mutable.Version: exercise 'tahoe debug find-shares' on MDMF. refs #1507
10093Brian Warner <warner@lothar.com>**20110828005542
10094 Ignore-this: cb20bea1c28bfa50a72317d70e109672
10095 
10096 Also changes NoNetworkGrid to put shares in storage/shares/ .
10097]
10098[test_mutable.py: oops, missed a .todo
10099Brian Warner <warner@lothar.com>**20110828002118
10100 Ignore-this: fda09ae86481352b7a627c278d2a3940
10101]
10102[test_mutable: merge davidsarah's patch with my Version refactorings
10103warner@lothar.com**20110827235707
10104 Ignore-this: b5aaf481c90d99e33827273b5d118fd0
10105]
10106[Make the immutable/read-only constraint checking for MDMF URIs identical to that for SSK URIs. refs #393
10107david-sarah@jacaranda.org**20110823012720
10108 Ignore-this: e1f59d7ff2007c81dbef2aeb14abd721
10109]
10110[Additional tests for MDMF URIs and for zero-length files. refs #393
10111david-sarah@jacaranda.org**20110823011532
10112 Ignore-this: a7cc0c09d1d2d72413f9cd227c47a9d5
10113]
10114[Additional tests for zero-length partial reads and updates to mutable versions. refs #393
10115david-sarah@jacaranda.org**20110822014111
10116 Ignore-this: 5fc6f4d06e11910124e4a277ec8a43ea
10117]
10118[test_mutable.Version: factor out some expensive uploads, save 25% runtime
10119Brian Warner <warner@lothar.com>**20110827232737
10120 Ignore-this: ea37383eb85ea0894b254fe4dfb45544
10121]
10122[SDMF: update filenode with correct k/N after Retrieve. Fixes #1510.
10123Brian Warner <warner@lothar.com>**20110827225031
10124 Ignore-this: b50ae6e1045818c400079f118b4ef48
10125 
10126 Without this, we get a regression when modifying a mutable file that was
10127 created with more shares (larger N) than our current tahoe.cfg . The
10128 modification attempt creates new versions of the (0,1,..,newN-1) shares, but
10129 leaves the old versions of the (newN,..,oldN-1) shares alone (and throws an
10130 assertion error in SDMFSlotWriteProxy.finish_publishing in the process).
10131 
10132 The mixed versions that result (some shares with e.g. N=10, some with N=20,
10133 such that both versions are recoverable) cause problems for the Publish code,
10134 even before MDMF landed. Might be related to refs #1390 and refs #1042.
10135]
10136[layout.py: annotate assertion to figure out 'tahoe backup' failure
10137Brian Warner <warner@lothar.com>**20110827195253
10138 Ignore-this: 9b92b954e3ed0d0f80154fff1ff674e5
10139]
10140[Add 'tahoe debug dump-cap' support for MDMF, DIR2-CHK, DIR2-MDMF. refs #1507.
10141Brian Warner <warner@lothar.com>**20110827195048
10142 Ignore-this: 61c6af5e33fc88e0251e697a50addb2c
10143 
10144 This also adds tests for all those cases, and fixes an omission in uri.py
10145 that broke parsing of DIR2-MDMF-Verifier and DIR2-CHK-Verifier.
10146]
10147[MDMF: more writable/writeable consistentifications
10148warner@lothar.com**20110827190602
10149 Ignore-this: 22492a9e20c1819ddb12091062888b55
10150]
10151[MDMF: s/Writable/Writeable/g, for consistency with existing SDMF code
10152warner@lothar.com**20110827183357
10153 Ignore-this: 9dd312acedbdb2fc2f7bef0d0fb17c0b
10154]
10155[setup.cfg: remove no-longer-supported test_mac_diskimage alias. refs #1479
10156david-sarah@jacaranda.org**20110826230345
10157 Ignore-this: 40e908b8937322a290fb8012bfcad02a
10158]
10159[test_mutable.Update: increase timeout from 120s to 400s, slaves are failing
10160Brian Warner <warner@lothar.com>**20110825230140
10161 Ignore-this: 101b1924a30cdbda9b2e419e95ca15ec
10162]
10163[tests: fix check_memory test
10164zooko@zooko.com**20110825201116
10165 Ignore-this: 4d66299fa8cb61d2ca04b3f45344d835
10166 fixes #1503
10167]
10168[TAG allmydata-tahoe-1.9.0a1
10169warner@lothar.com**20110825161122
10170 Ignore-this: 3cbf49f00dbda58189f893c427f65605
10171]
10172Patch bundle hash:
101737e9fd7ca66bba646aab82d2886530d0caa025f44