Ticket #999: pluggable-backends-davidsarah-v9.darcs.patch

File pluggable-backends-davidsarah-v9.darcs.patch, 410.9 KB (added by davidsarah at 2011-09-22T05:11:43Z)

Still more test fixes.

12 patches for repository http://tahoe-lafs.org/source/tahoe/trunk:

Thu Aug 25 01:32:17 BST 2011  david-sarah@jacaranda.org
  * interfaces.py: 'which -> that' grammar cleanup.

Tue Sep 20 00:29:26 BST 2011  david-sarah@jacaranda.org
  * Pluggable backends -- new and moved files, changes to moved files. refs #999

Tue Sep 20 00:32:56 BST 2011  david-sarah@jacaranda.org
  * Pluggable backends -- all other changes. refs #999

Tue Sep 20 04:38:03 BST 2011  david-sarah@jacaranda.org
  * Work-in-progress, includes fix to bug involving BucketWriter. refs #999

Tue Sep 20 18:17:37 BST 2011  david-sarah@jacaranda.org
  * docs/backends: document the configuration options for the pluggable backends scheme. refs #999

Wed Sep 21 04:12:07 BST 2011  david-sarah@jacaranda.org
  * Fix some incorrect attribute accesses. refs #999

Wed Sep 21 04:16:25 BST 2011  david-sarah@jacaranda.org
  * docs/backends/S3.rst: remove Issues section. refs #999

Wed Sep 21 04:17:05 BST 2011  david-sarah@jacaranda.org
  * docs/backends/S3.rst, disk.rst: describe type of space settings as 'quantity of space', not 'str'. refs #999

Wed Sep 21 19:46:49 BST 2011  david-sarah@jacaranda.org
  * More fixes to tests needed for pluggable backends. refs #999

Wed Sep 21 23:14:21 BST 2011  david-sarah@jacaranda.org
  * Fix more shallow bugs, mainly FilePathification. Also, remove the max_space_per_bucket parameter from BucketWriter since it can be obtained from the _max_size attribute of the share (via a new get_allocated_size() accessor). refs #999

Wed Sep 21 23:20:38 BST 2011  david-sarah@jacaranda.org
  * uri.py: resolve a conflict between trunk and the pluggable-backends patches. refs #999

Thu Sep 22 05:54:51 BST 2011  david-sarah@jacaranda.org
  * Fix some more test failures. refs #999

New patches:

[interfaces.py: 'which -> that' grammar cleanup.
david-sarah@jacaranda.org**20110825003217
 Ignore-this: a3e15f3676de1b346ad78aabdfb8cac6
] {
hunk ./src/allmydata/interfaces.py 38
     the StubClient. This object doesn't actually offer any services, but the
     announcement helps the Introducer keep track of which clients are
     subscribed (so the grid admin can keep track of things like the size of
-    the grid and the client versions in use. This is the (empty)
+    the grid and the client versions in use). This is the (empty)
     RemoteInterface for the StubClient."""
 
 class RIBucketWriter(RemoteInterface):
hunk ./src/allmydata/interfaces.py 276
         (binary) storage index string, and 'shnum' is the integer share
         number. 'reason' is a human-readable explanation of the problem,
         probably including some expected hash values and the computed ones
-        which did not match. Corruption advisories for mutable shares should
+        that did not match. Corruption advisories for mutable shares should
         include a hash of the public key (the same value that appears in the
         mutable-file verify-cap), since the current share format does not
         store that on disk.
hunk ./src/allmydata/interfaces.py 413
           remote_host: the IAddress, if connected, otherwise None
 
         This method is intended for monitoring interfaces, such as a web page
-        which describes connecting and connected peers.
+        that describes connecting and connected peers.
         """
 
     def get_all_peerids():
hunk ./src/allmydata/interfaces.py 515
 
     # TODO: rename to get_read_cap()
     def get_readonly():
-        """Return another IURI instance, which represents a read-only form of
+        """Return another IURI instance that represents a read-only form of
         this one. If is_readonly() is True, this returns self."""
 
     def get_verify_cap():
hunk ./src/allmydata/interfaces.py 542
         passing into init_from_string."""
 
 class IDirnodeURI(Interface):
-    """I am a URI which represents a dirnode."""
+    """I am a URI that represents a dirnode."""
 
 class IFileURI(Interface):
hunk ./src/allmydata/interfaces.py 545
-    """I am a URI which represents a filenode."""
+    """I am a URI that represents a filenode."""
     def get_size():
         """Return the length (in bytes) of the file that I represent."""
 
hunk ./src/allmydata/interfaces.py 553
     pass
 
 class IMutableFileURI(Interface):
-    """I am a URI which represents a mutable filenode."""
+    """I am a URI that represents a mutable filenode."""
     def get_extension_params():
         """Return the extension parameters in the URI"""
 
hunk ./src/allmydata/interfaces.py 856
         """
 
 class IFileNode(IFilesystemNode):
-    """I am a node which represents a file: a sequence of bytes. I am not a
+    """I am a node that represents a file: a sequence of bytes. I am not a
     container, like IDirectoryNode."""
     def get_best_readable_version():
         """Return a Deferred that fires with an IReadable for the 'best'
hunk ./src/allmydata/interfaces.py 905
     multiple versions of a file present in the grid, some of which might be
     unrecoverable (i.e. have fewer than 'k' shares). These versions are
     loosely ordered: each has a sequence number and a hash, and any version
-    with seqnum=N was uploaded by a node which has seen at least one version
+    with seqnum=N was uploaded by a node that has seen at least one version
     with seqnum=N-1.
 
     The 'servermap' (an instance of IMutableFileServerMap) is used to
hunk ./src/allmydata/interfaces.py 1014
         as a guide to where the shares are located.
 
         I return a Deferred that fires with the requested contents, or
-        errbacks with UnrecoverableFileError. Note that a servermap which was
+        errbacks with UnrecoverableFileError. Note that a servermap that was
         updated with MODE_ANYTHING or MODE_READ may not know about shares for
         all versions (those modes stop querying servers as soon as they can
         fulfil their goals), so you may want to use MODE_CHECK (which checks
hunk ./src/allmydata/interfaces.py 1073
     """Upload was unable to satisfy 'servers_of_happiness'"""
 
 class UnableToFetchCriticalDownloadDataError(Exception):
-    """I was unable to fetch some piece of critical data which is supposed to
+    """I was unable to fetch some piece of critical data that is supposed to
     be identically present in all shares."""
 
 class NoServersError(Exception):
hunk ./src/allmydata/interfaces.py 1085
     exists, and overwrite= was set to False."""
 
 class NoSuchChildError(Exception):
-    """A directory node was asked to fetch a child which does not exist."""
+    """A directory node was asked to fetch a child that does not exist."""
 
 class ChildOfWrongTypeError(Exception):
     """An operation was attempted on a child of the wrong type (file or directory)."""
hunk ./src/allmydata/interfaces.py 1403
         if you initially thought you were going to use 10 peers, started
         encoding, and then two of the peers dropped out: you could use
         desired_share_ids= to skip the work (both memory and CPU) of
-        producing shares for the peers which are no longer available.
+        producing shares for the peers that are no longer available.
 
         """
 
hunk ./src/allmydata/interfaces.py 1478
         if you initially thought you were going to use 10 peers, started
         encoding, and then two of the peers dropped out: you could use
         desired_share_ids= to skip the work (both memory and CPU) of
-        producing shares for the peers which are no longer available.
+        producing shares for the peers that are no longer available.
 
         For each call, encode() will return a Deferred that fires with two
         lists, one containing shares and the other containing the shareids.
hunk ./src/allmydata/interfaces.py 1535
         required to be of the same length.  The i'th element of their_shareids
         is required to be the shareid of the i'th buffer in some_shares.
 
-        This returns a Deferred which fires with a sequence of buffers. This
+        This returns a Deferred that fires with a sequence of buffers. This
         sequence will contain all of the segments of the original data, in
         order. The sum of the lengths of all of the buffers will be the
         'data_size' value passed into the original ICodecEncode.set_params()
hunk ./src/allmydata/interfaces.py 1582
         Encoding parameters can be set in three ways. 1: The Encoder class
         provides defaults (3/7/10). 2: the Encoder can be constructed with
         an 'options' dictionary, in which the
-        needed_and_happy_and_total_shares' key can be a (k,d,n) tuple. 3:
+        'needed_and_happy_and_total_shares' key can be a (k,d,n) tuple. 3:
         set_params((k,d,n)) can be called.
 
         If you intend to use set_params(), you must call it before
hunk ./src/allmydata/interfaces.py 1780
         produced, so that the segment hashes can be generated with only a
         single pass.
 
-        This returns a Deferred which fires with a sequence of hashes, using:
+        This returns a Deferred that fires with a sequence of hashes, using:
 
          tuple(segment_hashes[first:last])
 
hunk ./src/allmydata/interfaces.py 1796
     def get_plaintext_hash():
         """OBSOLETE; Get the hash of the whole plaintext.
 
-        This returns a Deferred which fires with a tagged SHA-256 hash of the
+        This returns a Deferred that fires with a tagged SHA-256 hash of the
         whole plaintext, obtained from hashutil.plaintext_hash(data).
         """
 
hunk ./src/allmydata/interfaces.py 1856
         be used to encrypt the data. The key will also be hashed to derive
         the StorageIndex.
 
-        Uploadables which want to achieve convergence should hash their file
+        Uploadables that want to achieve convergence should hash their file
         contents and the serialized_encoding_parameters to form the key
         (which of course requires a full pass over the data). Uploadables can
         use the upload.ConvergentUploadMixin class to achieve this
hunk ./src/allmydata/interfaces.py 1862
         automatically.
 
-        Uploadables which do not care about convergence (or do not wish to
+        Uploadables that do not care about convergence (or do not wish to
         make multiple passes over the data) can simply return a
         strongly-random 16 byte string.
 
hunk ./src/allmydata/interfaces.py 1872
 
     def read(length):
         """Return a Deferred that fires with a list of strings (perhaps with
-        only a single element) which, when concatenated together, contain the
+        only a single element) that, when concatenated together, contain the
         next 'length' bytes of data. If EOF is near, this may provide fewer
         than 'length' bytes. The total number of bytes provided by read()
         before it signals EOF must equal the size provided by get_size().
hunk ./src/allmydata/interfaces.py 1919
 
     def read(length):
         """
-        Returns a list of strings which, when concatenated, are the next
+        Returns a list of strings that, when concatenated, are the next
         length bytes of the file, or fewer if there are fewer bytes
         between the current location and the end of the file.
         """
hunk ./src/allmydata/interfaces.py 1932
 
 class IUploadResults(Interface):
     """I am returned by upload() methods. I contain a number of public
-    attributes which can be read to determine the results of the upload. Some
+    attributes that can be read to determine the results of the upload. Some
     of these are functional, some are timing information. All of these may be
     None.
 
hunk ./src/allmydata/interfaces.py 1965
 
 class IDownloadResults(Interface):
     """I am created internally by download() methods. I contain a number of
-    public attributes which contain details about the download process.::
+    public attributes that contain details about the download process.::
 
      .file_size : the size of the file, in bytes
      .servers_used : set of server peerids that were used during download
hunk ./src/allmydata/interfaces.py 1991
 class IUploader(Interface):
     def upload(uploadable):
         """Upload the file. 'uploadable' must impement IUploadable. This
-        returns a Deferred which fires with an IUploadResults instance, from
+        returns a Deferred that fires with an IUploadResults instance, from
         which the URI of the file can be obtained as results.uri ."""
 
     def upload_ssk(write_capability, new_version, uploadable):
hunk ./src/allmydata/interfaces.py 2041
         kind of lease that is obtained (which account number to claim, etc).
 
         TODO: any problems seen during checking will be reported to the
-        health-manager.furl, a centralized object which is responsible for
+        health-manager.furl, a centralized object that is responsible for
         figuring out why files are unhealthy so corrective action can be
         taken.
         """
hunk ./src/allmydata/interfaces.py 2056
         will be put in the check-and-repair results. The Deferred will not
         fire until the repair is complete.
 
-        This returns a Deferred which fires with an instance of
+        This returns a Deferred that fires with an instance of
         ICheckAndRepairResults."""
 
 class IDeepCheckable(Interface):
hunk ./src/allmydata/interfaces.py 2141
                               that was found to be corrupt. Each share
                               locator is a list of (serverid, storage_index,
                               sharenum).
-         count-incompatible-shares: the number of shares which are of a share
+         count-incompatible-shares: the number of shares that are of a share
                                     format unknown to this checker
          list-incompatible-shares: a list of 'share locators', one for each
                                    share that was found to be of an unknown
hunk ./src/allmydata/interfaces.py 2148
                                    format. Each share locator is a list of
                                    (serverid, storage_index, sharenum).
         servers-responding: list of (binary) storage server identifiers,
-                             one for each server which responded to the share
+                             one for each server that responded to the share
                              query (even if they said they didn't have
                              shares, and even if they said they did have
                              shares but then didn't send them when asked, or
hunk ./src/allmydata/interfaces.py 2345
         will use the data in the checker results to guide the repair process,
         such as which servers provided bad data and should therefore be
         avoided. The ICheckResults object is inside the
-        ICheckAndRepairResults object, which is returned by the
+        ICheckAndRepairResults object that is returned by the
         ICheckable.check() method::
 
          d = filenode.check(repair=False)
hunk ./src/allmydata/interfaces.py 2436
         methods to create new objects. I return synchronously."""
 
     def create_mutable_file(contents=None, keysize=None):
-        """I create a new mutable file, and return a Deferred which will fire
+        """I create a new mutable file, and return a Deferred that will fire
         with the IMutableFileNode instance when it is ready. If contents= is
         provided (a bytestring), it will be used as the initial contents of
         the new file, otherwise the file will contain zero bytes. keysize= is
hunk ./src/allmydata/interfaces.py 2444
         usual."""
 
     def create_new_mutable_directory(initial_children={}):
-        """I create a new mutable directory, and return a Deferred which will
+        """I create a new mutable directory, and return a Deferred that will
         fire with the IDirectoryNode instance when it is ready. If
         initial_children= is provided (a dict mapping unicode child name to
         (childnode, metadata_dict) tuples), the directory will be populated
hunk ./src/allmydata/interfaces.py 2452
 
 class IClientStatus(Interface):
     def list_all_uploads():
-        """Return a list of uploader objects, one for each upload which
+        """Return a list of uploader objects, one for each upload that
         currently has an object available (tracked with weakrefs). This is
         intended for debugging purposes."""
     def list_active_uploads():
hunk ./src/allmydata/interfaces.py 2462
         started uploads."""
 
     def list_all_downloads():
-        """Return a list of downloader objects, one for each download which
+        """Return a list of downloader objects, one for each download that
         currently has an object available (tracked with weakrefs). This is
         intended for debugging purposes."""
     def list_active_downloads():
hunk ./src/allmydata/interfaces.py 2689
 
     def provide(provider=RIStatsProvider, nickname=str):
         """
-        @param provider: a stats collector instance which should be polled
+        @param provider: a stats collector instance that should be polled
                          periodically by the gatherer to collect stats.
         @param nickname: a name useful to identify the provided client
         """
hunk ./src/allmydata/interfaces.py 2722
 
 class IValidatedThingProxy(Interface):
     def start():
-        """ Acquire a thing and validate it. Return a deferred which is
+        """ Acquire a thing and validate it. Return a deferred that is
         eventually fired with self if the thing is valid or errbacked if it
         can't be acquired or validated."""
 
}
[Pluggable backends -- new and moved files, changes to moved files. refs #999
david-sarah@jacaranda.org**20110919232926
 Ignore-this: ec5d2d1362a092d919e84327d3092424
] {
adddir ./src/allmydata/storage/backends
adddir ./src/allmydata/storage/backends/disk
move ./src/allmydata/storage/immutable.py ./src/allmydata/storage/backends/disk/immutable.py
move ./src/allmydata/storage/mutable.py ./src/allmydata/storage/backends/disk/mutable.py
adddir ./src/allmydata/storage/backends/null
addfile ./src/allmydata/storage/backends/__init__.py
addfile ./src/allmydata/storage/backends/base.py
hunk ./src/allmydata/storage/backends/base.py 1
+
+from twisted.application import service
+
+from allmydata.storage.common import si_b2a
+from allmydata.storage.lease import LeaseInfo
+from allmydata.storage.bucket import BucketReader
+
+
+class Backend(service.MultiService):
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+
+class ShareSet(object):
+    """
+    This class implements shareset logic that could work for all backends, but
+    might be useful to override for efficiency.
+    """
+
+    def __init__(self, storageindex):
+        self.storageindex = storageindex
+
+    def get_storage_index(self):
+        return self.storageindex
+
+    def get_storage_index_string(self):
+        return si_b2a(self.storageindex)
+
+    def renew_lease(self, renew_secret, new_expiration_time):
+        found_shares = False
+        for share in self.get_shares():
+            found_shares = True
+            share.renew_lease(renew_secret, new_expiration_time)
+
+        if not found_shares:
+            raise IndexError("no such lease to renew")
+
+    def get_leases(self):
+        # Since all shares get the same lease data, we just grab the leases
+        # from the first share.
+        try:
+            sf = self.get_shares().next()
+            return sf.get_leases()
+        except StopIteration:
+            return iter([])
+
+    def add_or_renew_lease(self, lease_info):
+        # This implementation assumes that lease data is duplicated in
+        # all shares of a shareset, which might not be true for all backends.
+        for share in self.get_shares():
+            share.add_or_renew_lease(lease_info)
+
+    def make_bucket_reader(self, storageserver, share):
+        return BucketReader(storageserver, share)
+
+    def testv_and_readv_and_writev(self, storageserver, secrets,
+                                   test_and_write_vectors, read_vector,
+                                   expiration_time):
+        # The implementation here depends on the following helper methods,
+        # which must be provided by subclasses:
+        #
+        # def _clean_up_after_unlink(self):
+        #     """clean up resources associated with the shareset after some
+        #     shares might have been deleted"""
+        #
+        # def _create_mutable_share(self, storageserver, shnum, write_enabler):
+        #     """create a mutable share with the given shnum and write_enabler"""
+
+        # secrets might be a triple with cancel_secret in secrets[2], but if
+        # so we ignore the cancel_secret.
+        write_enabler = secrets[0]
+        renew_secret = secrets[1]
+
+        si_s = self.get_storage_index_string()
+        shares = {}
+        for share in self.get_shares():
+            # XXX is it correct to ignore immutable shares? Maybe get_shares should
+            # have a parameter saying what type it's expecting.
+            if share.sharetype == "mutable":
+                share.check_write_enabler(write_enabler, si_s)
+                shares[share.get_shnum()] = share
+
+        # write_enabler is good for all existing shares
+
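+        # (Illustrative note, restating the wire types rather than adding
+        # behavior: test_and_write_vectors maps shnum -> (testv, datav,
+        # new_length), where testv is a list of (offset, length, operator,
+        # specimen) comparisons and datav is a list of (offset, data)
+        # writes, as for RIStorageServer.slot_testv_and_readv_and_writev.)
+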
+        # now evaluate test vectors
+        testv_is_good = True
+        for sharenum in test_and_write_vectors:
+            (testv, datav, new_length) = test_and_write_vectors[sharenum]
+            if sharenum in shares:
+                if not shares[sharenum].check_testv(testv):
+                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
+                    testv_is_good = False
+                    break
+            else:
+                # compare the vectors against an empty share, in which all
+                # reads return empty strings
+                if not EmptyShare().check_testv(testv):
+                    self.log("testv failed (empty): [%d] %r" % (sharenum,
+                                                                testv))
+                    testv_is_good = False
+                    break
+
+        # gather the read vectors, before we do any writes
+        read_data = {}
+        for shnum, share in shares.items():
+            read_data[shnum] = share.readv(read_vector)
+
+        ownerid = 1 # TODO
+        lease_info = LeaseInfo(ownerid, renew_secret,
+                               expiration_time, storageserver.get_serverid())
+
+        if testv_is_good:
+            # now apply the write vectors
+            any_unlinked = False
+            for shnum in test_and_write_vectors:
+                (testv, datav, new_length) = test_and_write_vectors[shnum]
+                if new_length == 0:
+                    if shnum in shares:
+                        shares[shnum].unlink()
+                        any_unlinked = True
+                else:
+                    if shnum not in shares:
+                        # allocate a new share
+                        share = self._create_mutable_share(storageserver, shnum, write_enabler)
+                        shares[shnum] = share
+                    shares[shnum].writev(datav, new_length)
+                    # and update the lease
+                    shares[shnum].add_or_renew_lease(lease_info)
+
+            # clean up only if at least one share was actually unlinked,
+            # rather than testing new_length left over from the last
+            # iteration of the loop above
+            if any_unlinked:
+                self._clean_up_after_unlink()
+
+        return (testv_is_good, read_data)
+
+    def readv(self, wanted_shnums, read_vector):
+        """
+        Read a vector from the numbered shares in this shareset. An empty
+        shares list means to return data from all known shares.
+
+        @param wanted_shnums=ListOf(int)
+        @param read_vector=ReadVector
+        @return DictOf(int, ReadData): shnum -> results, with one key per share
+        """
+        datavs = {}
+        for share in self.get_shares():
+            shnum = share.get_shnum()
+            if not wanted_shnums or shnum in wanted_shnums:
+                datavs[shnum] = share.readv(read_vector)
+
+        return datavs
+
+
+def testv_compare(a, op, b):
+    assert op in ("lt", "le", "eq", "ne", "ge", "gt")
+    if op == "lt":
+        return a < b
+    if op == "le":
+        return a <= b
+    if op == "eq":
+        return a == b
+    if op == "ne":
+        return a != b
+    if op == "ge":
+        return a >= b
+    if op == "gt":
+        return a > b
+    # never reached
+
+
+class EmptyShare:
+    def check_testv(self, testv):
+        test_good = True
+        for (offset, length, operator, specimen) in testv:
+            data = ""
+            if not testv_compare(data, operator, specimen):
+                test_good = False
+                break
+        return test_good
+
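+# Illustrative example (an editorial aside, not part of the patch's API):
+# a test vector such as [(0, 4, "eq", "")] succeeds against EmptyShare,
+# because every read from an empty share yields "" and
+# testv_compare("", "eq", "") is True. This is what lets a caller perform
+# a test-and-set against a share that does not exist yet.
+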
addfile ./src/allmydata/storage/backends/disk/__init__.py
addfile ./src/allmydata/storage/backends/disk/disk_backend.py
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 1
+
+import re
+
+from twisted.python.filepath import UnlistableError
+
+from zope.interface import implements
+from allmydata.interfaces import IStorageBackend, IShareSet
+from allmydata.util import fileutil, log, time_format
+from allmydata.storage.common import si_b2a, si_a2b
+from allmydata.storage.bucket import BucketWriter
+from allmydata.storage.backends.base import Backend, ShareSet
+from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
+from allmydata.storage.backends.disk.mutable import MutableDiskShare, create_mutable_disk_share
+
+# storage/
+# storage/shares/incoming
+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
+# storage/shares/$START/$STORAGEINDEX
+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
+
+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
+# base-32 chars).
+# $SHARENUM matches this regex:
+NUM_RE=re.compile("^[0-9]+$")
+
+
+def si_si2dir(startfp, storageindex):
+    sia = si_b2a(storageindex)
+    newfp = startfp.child(sia[:2])
+    return newfp.child(sia)
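+
+# Illustrative example (hypothetical storage index): if si_b2a(storageindex)
+# returned the base-32 string "aybabtu4fhwlkmzsj3tv7fqxxx", si_si2dir would
+# return the FilePath <startfp>/ay/aybabtu4fhwlkmzsj3tv7fqxxx -- the
+# two-character prefix "ay" being $START in the layout described above.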
+
+
+def get_share(fp):
+    f = fp.open('rb')
+    try:
+        prefix = f.read(32)
+    finally:
+        f.close()
+
+    if prefix == MutableDiskShare.MAGIC:
+        return MutableDiskShare(fp)
+    else:
+        # assume it's immutable
+        return ImmutableDiskShare(fp)
+
+
+class DiskBackend(Backend):
+    implements(IStorageBackend)
+
+    def __init__(self, storedir, readonly=False, reserved_space=0, discard_storage=False):
+        Backend.__init__(self)
+        self._setup_storage(storedir, readonly, reserved_space, discard_storage)
+        self._setup_corruption_advisory()
+
+    def _setup_storage(self, storedir, readonly, reserved_space, discard_storage):
+        self._storedir = storedir
+        self._readonly = readonly
+        self._reserved_space = int(reserved_space)
+        self._discard_storage = discard_storage
+        self._sharedir = self._storedir.child("shares")
+        fileutil.fp_make_dirs(self._sharedir)
+        self._incomingdir = self._sharedir.child('incoming')
+        self._clean_incomplete()
+        if self._reserved_space and (self.get_available_space() is None):
+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                    umid="0wZ27w", level=log.UNUSUAL)
+
+    def _clean_incomplete(self):
+        fileutil.fp_remove(self._incomingdir)
+        fileutil.fp_make_dirs(self._incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self._corruption_advisory_dir = self._storedir.child("corruption-advisories")
+
+    def _make_shareset(self, sharehomedir):
+        return self.get_shareset(si_a2b(sharehomedir.basename()))
+
+    def get_sharesets_for_prefix(self, prefix):
+        prefixfp = self._sharedir.child(prefix)
+        try:
+            sharesets = map(self._make_shareset, prefixfp.children())
+            def _by_base32si(b):
+                return b.get_storage_index_string()
+            sharesets.sort(key=_by_base32si)
+        except EnvironmentError:
+            sharesets = []
+        return sharesets
+
+    def get_shareset(self, storageindex):
+        sharehomedir = si_si2dir(self._sharedir, storageindex)
+        incominghomedir = si_si2dir(self._incomingdir, storageindex)
+        return DiskShareSet(storageindex, sharehomedir, incominghomedir, discard_storage=self._discard_storage)
+
+    def fill_in_space_stats(self, stats):
+        stats['storage_server.reserved_space'] = self._reserved_space
+        try:
+            disk = fileutil.get_disk_stats(self._sharedir, self._reserved_space)
+            writeable = disk['avail'] > 0
+
+            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
+            stats['storage_server.disk_total'] = disk['total']
+            stats['storage_server.disk_used'] = disk['used']
+            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
+            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
+            stats['storage_server.disk_avail'] = disk['avail']
+        except AttributeError:
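+            # (Assumed rationale: fileutil.get_disk_stats relies on a
+            # platform API -- statvfs(2) or GetDiskFreeSpaceEx, per the
+            # warning in _setup_storage above -- which may be missing,
+            # raising AttributeError; in that case we cannot measure usage
+            # and optimistically report the server as writeable.)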
+            writeable = True
+        except EnvironmentError:
+            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
+            writeable = False
+
+        if self._readonly:
+            stats['storage_server.disk_avail'] = 0
+            writeable = False
+
+        stats['storage_server.accepting_immutable_shares'] = int(writeable)
+
+    def get_available_space(self):
+        if self._readonly:
+            return 0
+        return fileutil.get_available_space(self._sharedir, self._reserved_space)
+
+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
+        fileutil.fp_make_dirs(self._corruption_advisory_dir)
+        now = time_format.iso_utc(sep="T")
+        si_s = si_b2a(storageindex)
+
+        # Windows can't handle colons in the filename.
+        name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "")
+        f = self._corruption_advisory_dir.child(name).open("w")
+        try:
+            f.write("report: Share Corruption\n")
+            f.write("type: %s\n" % sharetype)
+            f.write("storage_index: %s\n" % si_s)
+            f.write("share_number: %d\n" % shnum)
+            f.write("\n")
+            f.write(reason)
+            f.write("\n")
+        finally:
+            f.close()
+
+        log.msg(format=("client claims corruption in (%(share_type)s) " +
+                        "%(si)s-%(shnum)d: %(reason)s"),
+                share_type=sharetype, si=si_s, shnum=shnum, reason=reason,
+                level=log.SCARY, umid="SGx2fA")
+
+
+class DiskShareSet(ShareSet):
+    implements(IShareSet)
+
+    def __init__(self, storageindex, sharehomedir, incominghomedir=None, discard_storage=False):
+        ShareSet.__init__(self, storageindex)
+        self._sharehomedir = sharehomedir
+        self._incominghomedir = incominghomedir
+        self._discard_storage = discard_storage
+
+    def get_overhead(self):
+        return (fileutil.get_disk_usage(self._sharehomedir) +
+                fileutil.get_disk_usage(self._incominghomedir))
+
+    def get_shares(self):
+        """
+        Generate IStorageBackendShare objects for shares we have for this storage index.
+        ("Shares we have" means completed ones, excluding incoming ones.)
+        """
+        try:
+            for fp in self._sharehomedir.children():
+                shnumstr = fp.basename()
+                if not NUM_RE.match(shnumstr):
+                    continue
+                sharehome = self._sharehomedir.child(shnumstr)
+                yield get_share(sharehome)
+        except UnlistableError:
+            # There is no shares directory at all.
+            pass
+
+    def has_incoming(self, shnum):
+        if self._incominghomedir is None:
+            return False
+        return self._incominghomedir.child(str(shnum)).exists()
+
+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
+        sharehome = self._sharehomedir.child(str(shnum))
+        incominghome = self._incominghomedir.child(str(shnum))
+        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
+                                   max_size=max_space_per_bucket, create=True)
+        bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
+        if self._discard_storage:
+            bw.throw_out_all_data = True
+        return bw
+
+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
+        fileutil.fp_make_dirs(self._sharehomedir)
+        sharehome = self._sharehomedir.child(str(shnum))
+        serverid = storageserver.get_serverid()
+        return create_mutable_disk_share(sharehome, serverid, write_enabler, storageserver)
+
+    def _clean_up_after_unlink(self):
+        fileutil.fp_rmdir_if_empty(self._sharehomedir)
+
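+# Illustrative usage sketch (hypothetical; the server-side wiring lands in
+# the later "all other changes" patch of this series). A storage server
+# would construct the backend roughly like this, where storedir is a
+# twisted FilePath:
+#
+#   from twisted.python.filepath import FilePath
+#   backend = DiskBackend(FilePath(basedir).child("storage"),
+#                         readonly=False, reserved_space=0)
+#   shareset = backend.get_shareset(storageindex)
+#   shares = list(shareset.get_shares())
+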
hunk ./src/allmydata/storage/backends/disk/immutable.py 1
-import os, stat, struct, time
 
hunk ./src/allmydata/storage/backends/disk/immutable.py 2
-from foolscap.api import Referenceable
+import struct
 
 from zope.interface import implements
hunk ./src/allmydata/storage/backends/disk/immutable.py 5
-from allmydata.interfaces import RIBucketWriter, RIBucketReader
-from allmydata.util import base32, fileutil, log
+
+from allmydata.interfaces import IStoredShare
+from allmydata.util import fileutil
 from allmydata.util.assertutil import precondition
hunk ./src/allmydata/storage/backends/disk/immutable.py 9
+from allmydata.util.fileutil import fp_make_dirs
 from allmydata.util.hashutil import constant_time_compare
hunk ./src/allmydata/storage/backends/disk/immutable.py 11
+from allmydata.util.encodingutil import quote_filepath
+from allmydata.storage.common import si_b2a, UnknownImmutableContainerVersionError, DataTooLargeError
 from allmydata.storage.lease import LeaseInfo
hunk ./src/allmydata/storage/backends/disk/immutable.py 14
-from allmydata.storage.common import UnknownImmutableContainerVersionError, \
-     DataTooLargeError
+
 
 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
 # and share data. The share data is accessed by RIBucketWriter.write and
hunk ./src/allmydata/storage/backends/disk/immutable.py 41
 # then the value stored in this field will be the actual share data length
 # modulo 2**32.
 
-class ShareFile:
-    LEASE_SIZE = struct.calcsize(">L32s32sL")
+class ImmutableDiskShare(object):
+    implements(IStoredShare)
+
     sharetype = "immutable"
hunk ./src/allmydata/storage/backends/disk/immutable.py 45
+    LEASE_SIZE = struct.calcsize(">L32s32sL")
+
 
hunk ./src/allmydata/storage/backends/disk/immutable.py 48
-    def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
hunk ./src/allmydata/storage/backends/disk/immutable.py 53
-        self.home = filename
+        self._storageindex = storageindex
         self._max_size = max_size
hunk ./src/allmydata/storage/backends/disk/immutable.py 55
+        self._incominghome = incominghome
+        self._home = finalhome
+        self._shnum = shnum
         if create:
             # touch the file, so later callers will see that we're working on
             # it. Also construct the metadata.
hunk ./src/allmydata/storage/backends/disk/immutable.py 61
-            assert not os.path.exists(self.home)
-            fileutil.make_dirs(os.path.dirname(self.home))
-            f = open(self.home, 'wb')
+            assert not finalhome.exists()
+            fp_make_dirs(self._incominghome.parent())
             # The second field -- the four-byte share data length -- is no
             # longer used as of Tahoe v1.3.0, but we continue to write it in
             # there in case someone downgrades a storage server from >=
hunk ./src/allmydata/storage/backends/disk/immutable.py 72
             # the largest length that can fit into the field. That way, even
             # if this does happen, the old < v1.3.0 server will still allow
             # clients to read the first part of the share.
-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
-            f.close()
+            self._incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
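+            # (The header written above is three big-endian 32-bit fields:
+            # the container version (1), the legacy share-data-length field
+            # capped at 2**32-1 as described above, and an initial lease
+            # count of 0.)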
             self._lease_offset = max_size + 0x0c
             self._num_leases = 0
         else:
hunk ./src/allmydata/storage/backends/disk/immutable.py 76
-            f = open(self.home, 'rb')
-            filesize = os.path.getsize(self.home)
-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-            f.close()
+            f = self._home.open(mode='rb')
+            try:
+                (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            finally:
+                f.close()
+            filesize = self._home.getsize()
             if version != 1:
                 msg = "sharefile %s had version %d but we wanted 1" % \
hunk ./src/allmydata/storage/backends/disk/immutable.py 84
-                      (filename, version)
+                      (self._home, version)
                 raise UnknownImmutableContainerVersionError(msg)
             self._num_leases = num_leases
             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
hunk ./src/allmydata/storage/backends/disk/immutable.py 90
         self._data_offset = 0xc
 
+    def __repr__(self):
+        return ("<ImmutableDiskShare %s:%r at %s>"
+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
+
+    def close(self):
+        fileutil.fp_make_dirs(self._home.parent())
+        self._incominghome.moveTo(self._home)
+        try:
+            # self._incominghome is like storage/shares/incoming/ab/abcde/4 .
+            # We try to delete the parent (.../ab/abcde) to avoid leaving
+            # these directories lying around forever, but the delete might
+            # fail if we're working on another share for the same storage
+            # index (like ab/abcde/5). The alternative approach would be to
+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
+            # ShareWriter), each of which is responsible for a single
+            # directory on disk, and have them use reference counting of
+            # their children to know when they should do the rmdir. This
+            # approach is simpler, but relies on os.rmdir refusing to delete
+            # a non-empty directory. Do *not* use fileutil.fp_remove() here!
+            fileutil.fp_rmdir_if_empty(self._incominghome.parent())
+            # we also delete the grandparent (prefix) directory, .../ab ,
+            # again to avoid leaving directories lying around. This might
+            # fail if there is another bucket open that shares a prefix (like
+            # ab/abfff).
+            fileutil.fp_rmdir_if_empty(self._incominghome.parent().parent())
+            # we leave the great-grandparent (incoming/) directory in place.
+        except EnvironmentError:
+            # ignore the "can't rmdir because the directory is not empty"
+            # exceptions, those are normal consequences of the
+            # above-mentioned conditions.
+            pass
+
+    def get_used_space(self):
+        return (fileutil.get_used_space(self._home) +
+                fileutil.get_used_space(self._incominghome))
+
+    def get_storage_index(self):
+        return self._storageindex
+
+    def get_shnum(self):
+        return self._shnum
+
     def unlink(self):
hunk ./src/allmydata/storage/backends/disk/immutable.py 134
-        os.unlink(self.home)
+        self._home.remove()
+
+    def get_size(self):
+        return self._home.getsize()
+
+    def get_data_length(self):
+        return self._lease_offset - self._data_offset
+
+    #def readv(self, read_vector):
+    #    ...
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
hunk ./src/allmydata/storage/backends/disk/immutable.py 147
-        # reads beyond the end of the data are truncated. Reads that start
+
+        # Reads beyond the end of the data are truncated. Reads that start
         # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         actuallength = max(0, min(length, self._lease_offset-seekpos))
hunk ./src/allmydata/storage/backends/disk/immutable.py 154
         if actuallength == 0:
             return ""
-        f = open(self.home, 'rb')
-        f.seek(seekpos)
-        return f.read(actuallength)
+        f = self._home.open(mode='rb')
+        try:
+            f.seek(seekpos)
+            sharedata = f.read(actuallength)
+        finally:
+            f.close()
+        return sharedata
 
     def write_share_data(self, offset, data):
         length = len(data)
hunk ./src/allmydata/storage/backends/disk/immutable.py 167
         precondition(offset >= 0, offset)
         if self._max_size is not None and offset+length > self._max_size:
             raise DataTooLargeError(self._max_size, offset, length)
-        f = open(self.home, 'rb+')
-        real_offset = self._data_offset+offset
-        f.seek(real_offset)
-        assert f.tell() == real_offset
-        f.write(data)
-        f.close()
+        f = self._incominghome.open(mode='rb+')
+        try:
+            real_offset = self._data_offset+offset
+            f.seek(real_offset)
+            assert f.tell() == real_offset
+            f.write(data)
+        finally:
+            f.close()
 
     def _write_lease_record(self, f, lease_number, lease_info):
         offset = self._lease_offset + lease_number * self.LEASE_SIZE
hunk ./src/allmydata/storage/backends/disk/immutable.py 184
 
     def _read_num_leases(self, f):
         f.seek(0x08)
-        (num_leases,) = struct.unpack(">L", f.read(4))
+        ro = f.read(4)
+        (num_leases,) = struct.unpack(">L", ro)
         return num_leases
 
     def _write_num_leases(self, f, num_leases):
hunk ./src/allmydata/storage/backends/disk/immutable.py 195
     def _truncate_leases(self, f, num_leases):
         f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
 
+    # These lease operations are intended for use by disk_backend.py.
+    # Other clients should not depend on the fact that the disk backend
+    # stores leases in share files.
+
     def get_leases(self):
         """Yields a LeaseInfo instance for all leases."""
-        f = open(self.home, 'rb')
-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-        f.seek(self._lease_offset)
-        for i in range(num_leases):
-            data = f.read(self.LEASE_SIZE)
-            if data:
-                yield LeaseInfo().from_immutable_data(data)
+        f = self._home.open(mode='rb')
+        try:
+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            f.seek(self._lease_offset)
+            for i in range(num_leases):
+                data = f.read(self.LEASE_SIZE)
+                if data:
+                    yield LeaseInfo().from_immutable_data(data)
+        finally:
+            f.close()
 
     def add_lease(self, lease_info):
hunk ./src/allmydata/storage/backends/disk/immutable.py 213
-        f = open(self.home, 'rb+')
-        num_leases = self._read_num_leases(f)
-        self._write_lease_record(f, num_leases, lease_info)
-        self._write_num_leases(f, num_leases+1)
-        f.close()
+        f = self._incominghome.open(mode='rb')
+        try:
+            num_leases = self._read_num_leases(f)
+        finally:
+            f.close()
+        f = self._home.open(mode='rb+')
+        try:
+            self._write_lease_record(f, num_leases, lease_info)
+            self._write_num_leases(f, num_leases+1)
+        finally:
+            f.close()
 
     def renew_lease(self, renew_secret, new_expire_time):
hunk ./src/allmydata/storage/backends/disk/immutable.py 226
-        for i,lease in enumerate(self.get_leases()):
-            if constant_time_compare(lease.renew_secret, renew_secret):
-                # yup. See if we need to update the owner time.
-                if new_expire_time > lease.expiration_time:
-                    # yes
-                    lease.expiration_time = new_expire_time
-                    f = open(self.home, 'rb+')
-                    self._write_lease_record(f, i, lease)
-                    f.close()
-                return
+        try:
+            for i, lease in enumerate(self.get_leases()):
+                if constant_time_compare(lease.renew_secret, renew_secret):
+                    # yup. See if we need to update the owner time.
+                    if new_expire_time > lease.expiration_time:
+                        # yes
+                        lease.expiration_time = new_expire_time
+                        f = self._home.open('rb+')
+                        try:
+                            self._write_lease_record(f, i, lease)
+                        finally:
+                            f.close()
+                    return
+        except IndexError, e:
+            raise Exception("IndexError: %s" % (e,))
         raise IndexError("unable to renew non-existent lease")
 
     def add_or_renew_lease(self, lease_info):
hunk ./src/allmydata/storage/backends/disk/immutable.py 249
                              lease_info.expiration_time)
         except IndexError:
             self.add_lease(lease_info)
-
-
-    def cancel_lease(self, cancel_secret):
-        """Remove a lease with the given cancel_secret. If the last lease is
-        cancelled, the file will be removed. Return the number of bytes that
-        were freed (by truncating the list of leases, and possibly by
-        deleting the file. Raise IndexError if there was no lease with the
-        given cancel_secret.
-        """
-
-        leases = list(self.get_leases())
-        num_leases_removed = 0
-        for i,lease in enumerate(leases):
-            if constant_time_compare(lease.cancel_secret, cancel_secret):
-                leases[i] = None
-                num_leases_removed += 1
-        if not num_leases_removed:
-            raise IndexError("unable to find matching lease to cancel")
-        if num_leases_removed:
-            # pack and write out the remaining leases. We write these out in
-            # the same order as they were added, so that if we crash while
-            # doing this, we won't lose any non-cancelled leases.
-            leases = [l for l in leases if l] # remove the cancelled leases
-            f = open(self.home, 'rb+')
-            for i,lease in enumerate(leases):
-                self._write_lease_record(f, i, lease)
-            self._write_num_leases(f, len(leases))
-            self._truncate_leases(f, len(leases))
-            f.close()
-        space_freed = self.LEASE_SIZE * num_leases_removed
-        if not len(leases):
-            space_freed += os.stat(self.home)[stat.ST_SIZE]
-            self.unlink()
-        return space_freed
-
-
-class BucketWriter(Referenceable):
-    implements(RIBucketWriter)
-
-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
-        self.ss = ss
-        self.incominghome = incominghome
-        self.finalhome = finalhome
-        self._max_size = max_size # don't allow the client to write more than this
-        self._canary = canary
-        self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
-        self.closed = False
-        self.throw_out_all_data = False
-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
-        # also, add our lease to the file now, so that other ones can be
-        # added by simultaneous uploaders
-        self._sharefile.add_lease(lease_info)
-
-    def allocated_size(self):
-        return self._max_size
-
-    def remote_write(self, offset, data):
-        start = time.time()
-        precondition(not self.closed)
-        if self.throw_out_all_data:
-            return
-        self._sharefile.write_share_data(offset, data)
-        self.ss.add_latency("write", time.time() - start)
-        self.ss.count("write")
-
-    def remote_close(self):
-        precondition(not self.closed)
-        start = time.time()
-
-        fileutil.make_dirs(os.path.dirname(self.finalhome))
-        fileutil.rename(self.incominghome, self.finalhome)
-        try:
-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
-            # We try to delete the parent (.../ab/abcde) to avoid leaving
-            # these directories lying around forever, but the delete might
-            # fail if we're working on another share for the same storage
-            # index (like ab/abcde/5). The alternative approach would be to
-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
-            # ShareWriter), each of which is responsible for a single
-            # directory on disk, and have them use reference counting of
-            # their children to know when they should do the rmdir. This
-            # approach is simpler, but relies on os.rmdir refusing to delete
-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
-            os.rmdir(os.path.dirname(self.incominghome))
-            # we also delete the grandparent (prefix) directory, .../ab ,
-            # again to avoid leaving directories lying around. This might
-            # fail if there is another bucket open that shares a prefix (like
-            # ab/abfff).
-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
-            # we leave the great-grandparent (incoming/) directory in place.
-        except EnvironmentError:
-            # ignore the "can't rmdir because the directory is not empty"
-            # exceptions, those are normal consequences of the
-            # above-mentioned conditions.
-            pass
-        self._sharefile = None
-        self.closed = True
-        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
-
-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
-        self.ss.bucket_writer_closed(self, filelen)
-        self.ss.add_latency("close", time.time() - start)
-        self.ss.count("close")
-
-    def _disconnected(self):
-        if not self.closed:
-            self._abort()
-
-    def remote_abort(self):
-        log.msg("storage: aborting sharefile %s" % self.incominghome,
-                facility="tahoe.storage", level=log.UNUSUAL)
-        if not self.closed:
-            self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
-        self._abort()
-        self.ss.count("abort")
-
-    def _abort(self):
-        if self.closed:
-            return
-
-        os.remove(self.incominghome)
-        # if we were the last share to be moved, remove the incoming/
-        # directory that was our parent
-        parentdir = os.path.split(self.incominghome)[0]
-        if not os.listdir(parentdir):
-            os.rmdir(parentdir)
-        self._sharefile = None
-
-        # We are now considered closed for further writing. We must tell
-        # the storage server about this so that it stops expecting us to
-        # use the space it allocated for us earlier.
-        self.closed = True
-        self.ss.bucket_writer_closed(self, 0)
-
-
-class BucketReader(Referenceable):
-    implements(RIBucketReader)
-
-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
-        self.ss = ss
-        self._share_file = ShareFile(sharefname)
-        self.storage_index = storage_index
-        self.shnum = shnum
-
-    def __repr__(self):
-        return "<%s %s %s>" % (self.__class__.__name__,
-                               base32.b2a_l(self.storage_index[:8], 60),
-                               self.shnum)
-
-    def remote_read(self, offset, length):
-        start = time.time()
-        data = self._share_file.read_share_data(offset, length)
-        self.ss.add_latency("read", time.time() - start)
-        self.ss.count("read")
-        return data
-
-    def remote_advise_corrupt_share(self, reason):
-        return self.ss.remote_advise_corrupt_share("immutable",
-                                                   self.storage_index,
-                                                   self.shnum,
-                                                   reason)
1218hunk ./src/allmydata/storage/backends/disk/mutable.py 1
1219-import os, stat, struct
1220 
1221hunk ./src/allmydata/storage/backends/disk/mutable.py 2
1222-from allmydata.interfaces import BadWriteEnablerError
1223-from allmydata.util import idlib, log
1224+import struct
1225+
1226+from zope.interface import implements
1227+
1228+from allmydata.interfaces import IStoredMutableShare, BadWriteEnablerError
1229+from allmydata.util import fileutil, idlib, log
1230 from allmydata.util.assertutil import precondition
1231 from allmydata.util.hashutil import constant_time_compare
1232hunk ./src/allmydata/storage/backends/disk/mutable.py 10
1233-from allmydata.storage.lease import LeaseInfo
1234-from allmydata.storage.common import UnknownMutableContainerVersionError, \
1235+from allmydata.util.encodingutil import quote_filepath
1236+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
1237      DataTooLargeError
1238hunk ./src/allmydata/storage/backends/disk/mutable.py 13
1239+from allmydata.storage.lease import LeaseInfo
1240+from allmydata.storage.backends.base import testv_compare
1241 
1242hunk ./src/allmydata/storage/backends/disk/mutable.py 16
1243-# the MutableShareFile is like the ShareFile, but used for mutable data. It
1244-# has a different layout. See docs/mutable.txt for more details.
1245+
1246+# The MutableDiskShare is like the ImmutableDiskShare, but used for mutable data.
1247+# It has a different layout. See docs/mutable.rst for more details.
1248 
1249 # #   offset    size    name
1250 # 1   0         32      magic verstr "tahoe mutable container v1" plus binary
1251hunk ./src/allmydata/storage/backends/disk/mutable.py 31
1252 #                        4    4   expiration timestamp
1253 #                        8   32   renewal token
1254 #                        40  32   cancel token
1255-#                        72  20   nodeid which accepted the tokens
1256+#                        72  20   nodeid that accepted the tokens
1257 # 7   468       (a)     data
1258 # 8   ??        4       count of extra leases
1259 # 9   ??        n*92    extra leases
1260hunk ./src/allmydata/storage/backends/disk/mutable.py 37
1261 
1262 
1263-# The struct module doc says that L's are 4 bytes in size., and that Q's are
1264+# The struct module doc says that L's are 4 bytes in size, and that Q's are
1265 # 8 bytes in size. Since compatibility depends upon this, double-check it.
1266 assert struct.calcsize(">L") == 4, struct.calcsize(">L")
1267 assert struct.calcsize(">Q") == 8, struct.calcsize(">Q")
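+# Illustrative arithmetic (derived from the constants above, not part of the
+# original logic): the ">32s20s32sQQ" header occupies 32+20+32+8+8 == 100
+# bytes and each ">LL32s32s20s" lease record occupies 4+4+32+32+20 == 92
+# bytes, so share data begins at offset 100 + 4*92 == 468, matching the
+# layout table above.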
1268hunk ./src/allmydata/storage/backends/disk/mutable.py 42
1269 
1270-class MutableShareFile:
1271+
1272+class MutableDiskShare(object):
1273+    implements(IStoredMutableShare)
1274 
1275     sharetype = "mutable"
1276     DATA_LENGTH_OFFSET = struct.calcsize(">32s20s32s")
1277hunk ./src/allmydata/storage/backends/disk/mutable.py 54
1278     assert LEASE_SIZE == 92
1279     DATA_OFFSET = HEADER_SIZE + 4*LEASE_SIZE
1280     assert DATA_OFFSET == 468, DATA_OFFSET
1281+
1282     # our sharefiles start with a recognizable string, plus some random
1283     # binary data to reduce the chance that a regular text file will look
1284     # like a sharefile.
1285hunk ./src/allmydata/storage/backends/disk/mutable.py 63
1286     MAX_SIZE = 2*1000*1000*1000 # 2GB, kind of arbitrary
1287     # TODO: decide upon a policy for max share size
1288 
1289-    def __init__(self, filename, parent=None):
1290-        self.home = filename
1291-        if os.path.exists(self.home):
1292+    def __init__(self, storageindex, shnum, home, parent=None):
1293+        self._storageindex = storageindex
1294+        self._shnum = shnum
1295+        self._home = home
1296+        if self._home.exists():
1297             # we don't cache anything, just check the magic
1298hunk ./src/allmydata/storage/backends/disk/mutable.py 69
1299-            f = open(self.home, 'rb')
1300-            data = f.read(self.HEADER_SIZE)
1301-            (magic,
1302-             write_enabler_nodeid, write_enabler,
1303-             data_length, extra_least_offset) = \
1304-             struct.unpack(">32s20s32sQQ", data)
1305-            if magic != self.MAGIC:
1306-                msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
1307-                      (filename, magic, self.MAGIC)
1308-                raise UnknownMutableContainerVersionError(msg)
1309+            f = self._home.open('rb')
1310+            try:
1311+                data = f.read(self.HEADER_SIZE)
1312+                (magic,
1313+                 write_enabler_nodeid, write_enabler,
1314+                 data_length, extra_lease_offset) = \
1315+                 struct.unpack(">32s20s32sQQ", data)
1316+                if magic != self.MAGIC:
1317+                    msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
1318+                          (quote_filepath(self._home), magic, self.MAGIC)
1319+                    raise UnknownMutableContainerVersionError(msg)
1320+            finally:
1321+                f.close()
1322         self.parent = parent # for logging
1323 
1324     def log(self, *args, **kwargs):
1325hunk ./src/allmydata/storage/backends/disk/mutable.py 87
1326         return self.parent.log(*args, **kwargs)
1327 
1328-    def create(self, my_nodeid, write_enabler):
1329-        assert not os.path.exists(self.home)
1330+    def create(self, serverid, write_enabler):
1331+        assert not self._home.exists()
1332         data_length = 0
1333         extra_lease_offset = (self.HEADER_SIZE
1334                               + 4 * self.LEASE_SIZE
1335hunk ./src/allmydata/storage/backends/disk/mutable.py 95
1336                               + data_length)
1337         assert extra_lease_offset == self.DATA_OFFSET # true at creation
1338         num_extra_leases = 0
1339-        f = open(self.home, 'wb')
1340-        header = struct.pack(">32s20s32sQQ",
1341-                             self.MAGIC, my_nodeid, write_enabler,
1342-                             data_length, extra_lease_offset,
1343-                             )
1344-        leases = ("\x00"*self.LEASE_SIZE) * 4
1345-        f.write(header + leases)
1346-        # data goes here, empty after creation
1347-        f.write(struct.pack(">L", num_extra_leases))
1348-        # extra leases go here, none at creation
1349-        f.close()
1350+        f = self._home.open('wb')
1351+        try:
1352+            header = struct.pack(">32s20s32sQQ",
1353+                                 self.MAGIC, serverid, write_enabler,
1354+                                 data_length, extra_lease_offset,
1355+                                 )
1356+            leases = ("\x00"*self.LEASE_SIZE) * 4
1357+            f.write(header + leases)
1358+            # data goes here, empty after creation
1359+            f.write(struct.pack(">L", num_extra_leases))
1360+            # extra leases go here, none at creation
1361+        finally:
1362+            f.close()
1363+
1364+    def __repr__(self):
1365+        return ("<MutableDiskShare %s:%r at %s>"
1366+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
1367+
1368+    def get_used_space(self):
1369+        return fileutil.get_used_space(self._home)
1370+
1371+    def get_storage_index(self):
1372+        return self._storageindex
1373+
1374+    def get_shnum(self):
1375+        return self._shnum
1376 
1377     def unlink(self):
1378hunk ./src/allmydata/storage/backends/disk/mutable.py 123
1379-        os.unlink(self.home)
1380+        self._home.remove()
1381 
1382     def _read_data_length(self, f):
1383         f.seek(self.DATA_LENGTH_OFFSET)
1384hunk ./src/allmydata/storage/backends/disk/mutable.py 291
1385 
1386     def get_leases(self):
1387         """Yields a LeaseInfo instance for all leases."""
1388-        f = open(self.home, 'rb')
1389-        for i, lease in self._enumerate_leases(f):
1390-            yield lease
1391-        f.close()
1392+        f = self._home.open('rb')
1393+        try:
1394+            for i, lease in self._enumerate_leases(f):
1395+                yield lease
1396+        finally:
1397+            f.close()
1398 
1399     def _enumerate_leases(self, f):
1400         for i in range(self._get_num_lease_slots(f)):
1401hunk ./src/allmydata/storage/backends/disk/mutable.py 303
1402             try:
1403                 data = self._read_lease_record(f, i)
1404                 if data is not None:
1405-                    yield i,data
1406+                    yield i, data
1407             except IndexError:
1408                 return
1409 
1410hunk ./src/allmydata/storage/backends/disk/mutable.py 307
1411+    # These lease operations are intended for use by disk_backend.py.
1412+    # Other non-test clients should not depend on the fact that the disk
1413+    # backend stores leases in share files.
1414+
1415     def add_lease(self, lease_info):
1416         precondition(lease_info.owner_num != 0) # 0 means "no lease here"
1417hunk ./src/allmydata/storage/backends/disk/mutable.py 313
1418-        f = open(self.home, 'rb+')
1419-        num_lease_slots = self._get_num_lease_slots(f)
1420-        empty_slot = self._get_first_empty_lease_slot(f)
1421-        if empty_slot is not None:
1422-            self._write_lease_record(f, empty_slot, lease_info)
1423-        else:
1424-            self._write_lease_record(f, num_lease_slots, lease_info)
1425-        f.close()
1426+        f = self._home.open('rb+')
1427+        try:
1428+            num_lease_slots = self._get_num_lease_slots(f)
1429+            empty_slot = self._get_first_empty_lease_slot(f)
1430+            if empty_slot is not None:
1431+                self._write_lease_record(f, empty_slot, lease_info)
1432+            else:
1433+                self._write_lease_record(f, num_lease_slots, lease_info)
1434+        finally:
1435+            f.close()
1436 
1437     def renew_lease(self, renew_secret, new_expire_time):
1438         accepting_nodeids = set()
1439hunk ./src/allmydata/storage/backends/disk/mutable.py 326
1440-        f = open(self.home, 'rb+')
1441-        for (leasenum,lease) in self._enumerate_leases(f):
1442-            if constant_time_compare(lease.renew_secret, renew_secret):
1443-                # yup. See if we need to update the owner time.
1444-                if new_expire_time > lease.expiration_time:
1445-                    # yes
1446-                    lease.expiration_time = new_expire_time
1447-                    self._write_lease_record(f, leasenum, lease)
1448-                f.close()
1449-                return
1450-            accepting_nodeids.add(lease.nodeid)
1451-        f.close()
1452+        f = self._home.open('rb+')
1453+        try:
1454+            for (leasenum, lease) in self._enumerate_leases(f):
1455+                if constant_time_compare(lease.renew_secret, renew_secret):
1456+                    # yup. See if we need to update the owner time.
1457+                    if new_expire_time > lease.expiration_time:
1458+                        # yes
1459+                        lease.expiration_time = new_expire_time
1460+                        self._write_lease_record(f, leasenum, lease)
1461+                    return
1462+                accepting_nodeids.add(lease.nodeid)
1463+        finally:
1464+            f.close()
1465         # Return the accepting_nodeids set, to give the client a chance to
1466hunk ./src/allmydata/storage/backends/disk/mutable.py 340
1467-        # update the leases on a share which has been migrated from its
1468+        # update the leases on a share that has been migrated from its
1469         # original server to a new one.
1470         msg = ("Unable to renew non-existent lease. I have leases accepted by"
1471                " nodeids: ")
1472hunk ./src/allmydata/storage/backends/disk/mutable.py 357
1473         except IndexError:
1474             self.add_lease(lease_info)
1475 
1476-    def cancel_lease(self, cancel_secret):
1477-        """Remove any leases with the given cancel_secret. If the last lease
1478-        is cancelled, the file will be removed. Return the number of bytes
1479-        that were freed (by truncating the list of leases, and possibly by
1480-        deleting the file. Raise IndexError if there was no lease with the
1481-        given cancel_secret."""
1482-
1483-        accepting_nodeids = set()
1484-        modified = 0
1485-        remaining = 0
1486-        blank_lease = LeaseInfo(owner_num=0,
1487-                                renew_secret="\x00"*32,
1488-                                cancel_secret="\x00"*32,
1489-                                expiration_time=0,
1490-                                nodeid="\x00"*20)
1491-        f = open(self.home, 'rb+')
1492-        for (leasenum,lease) in self._enumerate_leases(f):
1493-            accepting_nodeids.add(lease.nodeid)
1494-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1495-                self._write_lease_record(f, leasenum, blank_lease)
1496-                modified += 1
1497-            else:
1498-                remaining += 1
1499-        if modified:
1500-            freed_space = self._pack_leases(f)
1501-            f.close()
1502-            if not remaining:
1503-                freed_space += os.stat(self.home)[stat.ST_SIZE]
1504-                self.unlink()
1505-            return freed_space
1506-
1507-        msg = ("Unable to cancel non-existent lease. I have leases "
1508-               "accepted by nodeids: ")
1509-        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
1510-                         for anid in accepting_nodeids])
1511-        msg += " ."
1512-        raise IndexError(msg)
1513-
1514-    def _pack_leases(self, f):
1515-        # TODO: reclaim space from cancelled leases
1516-        return 0
1517-
1518     def _read_write_enabler_and_nodeid(self, f):
1519         f.seek(0)
1520         data = f.read(self.HEADER_SIZE)
1521hunk ./src/allmydata/storage/backends/disk/mutable.py 369
1522 
1523     def readv(self, readv):
1524         datav = []
1525-        f = open(self.home, 'rb')
1526-        for (offset, length) in readv:
1527-            datav.append(self._read_share_data(f, offset, length))
1528-        f.close()
1529+        f = self._home.open('rb')
1530+        try:
1531+            for (offset, length) in readv:
1532+                datav.append(self._read_share_data(f, offset, length))
1533+        finally:
1534+            f.close()
1535         return datav
1536 
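+    # Example (illustrative): share.readv([(0, 10), (100, 4)]) returns a
+    # list of two strings holding those (offset, length) ranges of the
+    # share data.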
1537hunk ./src/allmydata/storage/backends/disk/mutable.py 377
1538-#    def remote_get_length(self):
1539-#        f = open(self.home, 'rb')
1540-#        data_length = self._read_data_length(f)
1541-#        f.close()
1542-#        return data_length
1543+    def get_size(self):
1544+        return self._home.getsize()
1545+
1546+    def get_data_length(self):
1547+        f = self._home.open('rb')
1548+        try:
1549+            data_length = self._read_data_length(f)
1550+        finally:
1551+            f.close()
1552+        return data_length
1553 
1554     def check_write_enabler(self, write_enabler, si_s):
1555hunk ./src/allmydata/storage/backends/disk/mutable.py 389
1556-        f = open(self.home, 'rb+')
1557-        (real_write_enabler, write_enabler_nodeid) = \
1558-                             self._read_write_enabler_and_nodeid(f)
1559-        f.close()
1560+        f = self._home.open('rb+')
1561+        try:
1562+            (real_write_enabler, write_enabler_nodeid) = self._read_write_enabler_and_nodeid(f)
1563+        finally:
1564+            f.close()
1565         # avoid a timing attack
1566         #if write_enabler != real_write_enabler:
1567         if not constant_time_compare(write_enabler, real_write_enabler):
1568hunk ./src/allmydata/storage/backends/disk/mutable.py 410
1569 
1570     def check_testv(self, testv):
1571         test_good = True
1572-        f = open(self.home, 'rb+')
1573-        for (offset, length, operator, specimen) in testv:
1574-            data = self._read_share_data(f, offset, length)
1575-            if not testv_compare(data, operator, specimen):
1576-                test_good = False
1577-                break
1578-        f.close()
1579+        f = self._home.open('rb+')
1580+        try:
1581+            for (offset, length, operator, specimen) in testv:
1582+                data = self._read_share_data(f, offset, length)
1583+                if not testv_compare(data, operator, specimen):
1584+                    test_good = False
1585+                    break
1586+        finally:
1587+            f.close()
1588         return test_good
1589 
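+    # Each test vector entry is (offset, length, operator, specimen); e.g.
+    # (0, 3, "eq", "abc") passes iff bytes 0..2 of the share data equal
+    # "abc" (illustrative values; the operators are those understood by
+    # testv_compare).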
1590     def writev(self, datav, new_length):
1591hunk ./src/allmydata/storage/backends/disk/mutable.py 422
1592-        f = open(self.home, 'rb+')
1593-        for (offset, data) in datav:
1594-            self._write_share_data(f, offset, data)
1595-        if new_length is not None:
1596-            cur_length = self._read_data_length(f)
1597-            if new_length < cur_length:
1598-                self._write_data_length(f, new_length)
1599-                # TODO: if we're going to shrink the share file when the
1600-                # share data has shrunk, then call
1601-                # self._change_container_size() here.
1602-        f.close()
1603-
1604-def testv_compare(a, op, b):
1605-    assert op in ("lt", "le", "eq", "ne", "ge", "gt")
1606-    if op == "lt":
1607-        return a < b
1608-    if op == "le":
1609-        return a <= b
1610-    if op == "eq":
1611-        return a == b
1612-    if op == "ne":
1613-        return a != b
1614-    if op == "ge":
1615-        return a >= b
1616-    if op == "gt":
1617-        return a > b
1618-    # never reached
1619+        f = self._home.open('rb+')
1620+        try:
1621+            for (offset, data) in datav:
1622+                self._write_share_data(f, offset, data)
1623+            if new_length is not None:
1624+                cur_length = self._read_data_length(f)
1625+                if new_length < cur_length:
1626+                    self._write_data_length(f, new_length)
1627+                    # TODO: if we're going to shrink the share file when the
1628+                    # share data has shrunk, then call
1629+                    # self._change_container_size() here.
1630+        finally:
1631+            f.close()
1632 
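+    # Example (illustrative): share.writev([(0, "new")], None) overwrites
+    # the first three bytes and leaves the recorded data length unchanged;
+    # a smaller new_length (say 0) would shrink the recorded length.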
1633hunk ./src/allmydata/storage/backends/disk/mutable.py 436
1634-class EmptyShare:
1635+    def close(self):
1636+        pass
1637 
1638hunk ./src/allmydata/storage/backends/disk/mutable.py 439
1639-    def check_testv(self, testv):
1640-        test_good = True
1641-        for (offset, length, operator, specimen) in testv:
1642-            data = ""
1643-            if not testv_compare(data, operator, specimen):
1644-                test_good = False
1645-                break
1646-        return test_good
1647 
1648hunk ./src/allmydata/storage/backends/disk/mutable.py 440
1649-def create_mutable_sharefile(filename, my_nodeid, write_enabler, parent):
1650-    ms = MutableShareFile(filename, parent)
1651-    ms.create(my_nodeid, write_enabler)
1652+def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent):
1653+    ms = MutableDiskShare(storageindex, shnum, fp, parent)
1654+    ms.create(serverid, write_enabler)
1655     del ms
1656hunk ./src/allmydata/storage/backends/disk/mutable.py 444
1657-    return MutableShareFile(filename, parent)
1658-
1659+    return MutableDiskShare(storageindex, shnum, fp, parent)
1660addfile ./src/allmydata/storage/backends/null/__init__.py
1661addfile ./src/allmydata/storage/backends/null/null_backend.py
1662hunk ./src/allmydata/storage/backends/null/null_backend.py 2
1663 
1664+import os, struct
1665+
1666+from zope.interface import implements
1667+
1668+from allmydata.interfaces import IStorageBackend, IShareSet, IStoredShare, IStoredMutableShare
1669+from allmydata.util.assertutil import precondition
1670+from allmydata.util.hashutil import constant_time_compare
1671+from allmydata.storage.backends.base import Backend, ShareSet
1672+from allmydata.storage.bucket import BucketWriter
1673+from allmydata.storage.common import si_b2a
1674+from allmydata.storage.lease import LeaseInfo
1675+
1676+
1677+class NullBackend(Backend):
1678+    implements(IStorageBackend)
1679+
1680+    def __init__(self):
1681+        Backend.__init__(self)
1682+
1683+    def get_available_space(self, reserved_space):
1684+        return None
1685+
1686+    def get_sharesets_for_prefix(self, prefix):
1687+        pass
1688+
1689+    def get_shareset(self, storageindex):
1690+        return NullShareSet(storageindex)
1691+
1692+    def fill_in_space_stats(self, stats):
1693+        pass
1694+
1695+    def set_storage_server(self, ss):
1696+        self.ss = ss
1697+
1698+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
1699+        pass
1700+
1701+
1702+class NullShareSet(ShareSet):
1703+    implements(IShareSet)
1704+
1705+    def __init__(self, storageindex):
1706+        self.storageindex = storageindex
1707+
1708+    def get_overhead(self):
1709+        return 0
1710+
1711+    def get_incoming_shnums(self):
1712+        return frozenset()
1713+
1714+    def get_shares(self):
1715+        pass
1716+
1717+    def get_share(self, shnum):
1718+        return None
1719+
1720+    def get_storage_index(self):
1721+        return self.storageindex
1722+
1723+    def get_storage_index_string(self):
1724+        return si_b2a(self.storageindex)
1725+
1726+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
1727+        immutableshare = ImmutableNullShare()
1728+        return BucketWriter(storageserver, immutableshare, max_space_per_bucket, lease_info, canary)
1729+
1730+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
1731+        return MutableNullShare()
1732+
1733+    def _clean_up_after_unlink(self):
1734+        pass
1735+
1736+
1737+class ImmutableNullShare:
1738+    implements(IStoredShare)
1739+    sharetype = "immutable"
1740+
1741+    def __init__(self):
1742+        """ If max_size is not None then I won't allow more than
1743+        max_size to be written to me. If create=True then max_size
1744+        must not be None. """
1745+        pass
1746+
1747+    def get_shnum(self):
1748+        return self.shnum
1749+
1750+    def unlink(self):
1751+        os.unlink(self.fname)
1752+
1753+    def read_share_data(self, offset, length):
1754+        precondition(offset >= 0)
1755+        # Reads beyond the end of the data are truncated. Reads that start
1756+        # beyond the end of the data return an empty string.
1757+        seekpos = self._data_offset+offset
1758+        fsize = os.path.getsize(self.fname)
1759+        actuallength = max(0, min(length, fsize-seekpos)) # XXX #1528
1760+        if actuallength == 0:
1761+            return ""
1762+        f = open(self.fname, 'rb')
1763+        f.seek(seekpos)
1764+        return f.read(actuallength)
1765+
1766+    def write_share_data(self, offset, data):
1767+        pass
1768+
1769+    def _write_lease_record(self, f, lease_number, lease_info):
1770+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1771+        f.seek(offset)
1772+        assert f.tell() == offset
1773+        f.write(lease_info.to_immutable_data())
1774+
1775+    def _read_num_leases(self, f):
1776+        f.seek(0x08)
1777+        (num_leases,) = struct.unpack(">L", f.read(4))
1778+        return num_leases
1779+
1780+    def _write_num_leases(self, f, num_leases):
1781+        f.seek(0x08)
1782+        f.write(struct.pack(">L", num_leases))
1783+
1784+    def _truncate_leases(self, f, num_leases):
1785+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1786+
1787+    def get_leases(self):
1788+        """Yields a LeaseInfo instance for all leases."""
1789+        f = open(self.fname, 'rb')
1790+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1791+        f.seek(self._lease_offset)
1792+        for i in range(num_leases):
1793+            data = f.read(self.LEASE_SIZE)
1794+            if data:
1795+                yield LeaseInfo().from_immutable_data(data)
1796+
1797+    def add_lease(self, lease):
1798+        pass
1799+
1800+    def renew_lease(self, renew_secret, new_expire_time):
1801+        for i, lease in enumerate(self.get_leases()):
1802+            if constant_time_compare(lease.renew_secret, renew_secret):
1803+                # yup. See if we need to update the owner time.
1804+                if new_expire_time > lease.expiration_time:
1805+                    # yes
1806+                    lease.expiration_time = new_expire_time
1807+                    f = open(self.fname, 'rb+')
1808+                    self._write_lease_record(f, i, lease)
1809+                    f.close()
1810+                return
1811+        raise IndexError("unable to renew non-existent lease")
1812+
1813+    def add_or_renew_lease(self, lease_info):
1814+        try:
1815+            self.renew_lease(lease_info.renew_secret,
1816+                             lease_info.expiration_time)
1817+        except IndexError:
1818+            self.add_lease(lease_info)
1819+
1820+
1821+class MutableNullShare:
1822+    implements(IStoredMutableShare)
1823+    sharetype = "mutable"
1824+
1825+    """ XXX: TODO """
1826addfile ./src/allmydata/storage/bucket.py
1827hunk ./src/allmydata/storage/bucket.py 1
1828+
1829+import time
1830+
1831+from foolscap.api import Referenceable
1832+
1833+from zope.interface import implements
1834+from allmydata.interfaces import RIBucketWriter, RIBucketReader
1835+from allmydata.util import base32, log
1836+from allmydata.util.assertutil import precondition
1837+
1838+
1839+class BucketWriter(Referenceable):
1840+    implements(RIBucketWriter)
1841+
1842+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1843+        self.ss = ss
1844+        self._max_size = max_size # don't allow the client to write more than this
1845+        self._canary = canary
1846+        self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1847+        self.closed = False
1848+        self.throw_out_all_data = False
1849+        self._share = immutableshare
1850+        # also, add our lease to the file now, so that other ones can be
1851+        # added by simultaneous uploaders
1852+        self._share.add_lease(lease_info)
1853+
1854+    def allocated_size(self):
1855+        return self._max_size
1856+
1857+    def remote_write(self, offset, data):
1858+        start = time.time()
1859+        precondition(not self.closed)
1860+        if self.throw_out_all_data:
1861+            return
1862+        self._share.write_share_data(offset, data)
1863+        self.ss.add_latency("write", time.time() - start)
1864+        self.ss.count("write")
1865+
1866+    def remote_close(self):
1867+        precondition(not self.closed)
1868+        start = time.time()
1869+
1870+        self._share.close()
1871+        filelen = self._share.stat()
1872+        self._share = None
1873+
1874+        self.closed = True
1875+        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
1876+
1877+        self.ss.bucket_writer_closed(self, filelen)
1878+        self.ss.add_latency("close", time.time() - start)
1879+        self.ss.count("close")
1880+
1881+    def _disconnected(self):
1882+        if not self.closed:
1883+            self._abort()
1884+
1885+    def remote_abort(self):
1886+        log.msg("storage: aborting write to share %r" % self._share,
1887+                facility="tahoe.storage", level=log.UNUSUAL)
1888+        if not self.closed:
1889+            self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
1890+        self._abort()
1891+        self.ss.count("abort")
1892+
1893+    def _abort(self):
1894+        if self.closed:
1895+            return
1896+        self._share.unlink()
1897+        self._share = None
1898+
1899+        # We are now considered closed for further writing. We must tell
1900+        # the storage server about this so that it stops expecting us to
1901+        # use the space it allocated for us earlier.
1902+        self.closed = True
1903+        self.ss.bucket_writer_closed(self, 0)
1904+
1905+
1906+class BucketReader(Referenceable):
1907+    implements(RIBucketReader)
1908+
1909+    def __init__(self, ss, share):
1910+        self.ss = ss
1911+        self._share = share
1912+        self.storageindex = share.get_storage_index()
1913+        self.shnum = share.get_shnum()
1914+
1915+    def __repr__(self):
1916+        return "<%s %s %s>" % (self.__class__.__name__,
1917+                               base32.b2a_l(self.storageindex[:8], 60),
1918+                               self.shnum)
1919+
1920+    def remote_read(self, offset, length):
1921+        start = time.time()
1922+        data = self._share.read_share_data(offset, length)
1923+        self.ss.add_latency("read", time.time() - start)
1924+        self.ss.count("read")
1925+        return data
1926+
1927+    def remote_advise_corrupt_share(self, reason):
1928+        return self.ss.remote_advise_corrupt_share("immutable",
1929+                                                   self.storageindex,
1930+                                                   self.shnum,
1931+                                                   reason)
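+
+# Illustrative lifecycle sketch (names abbreviated from this module; a
+# hedged gloss, not a definitive API description):
+#
+#     bw = shareset.make_bucket_writer(server, shnum, max_space, lease_info, canary)
+#     bw.remote_write(0, data)   # in-order writes of at most max_space bytes
+#     bw.remote_close()          # finalize and report the final length
+#
+# The storage server's remote_get_buckets() later returns BucketReader
+# objects for reading the finalized share.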
1932addfile ./src/allmydata/test/test_backends.py
1933hunk ./src/allmydata/test/test_backends.py 1
1934+import os, stat
1935+from twisted.trial import unittest
1936+from allmydata.util.log import msg
1937+from allmydata.test.common_util import ReallyEqualMixin
1938+import mock
1939+
1940+# This is the code that we're going to be testing.
1941+from allmydata.storage.server import StorageServer
1942+from allmydata.storage.backends.disk.disk_backend import DiskBackend, si_si2dir
1943+from allmydata.storage.backends.null.null_backend import NullBackend
1944+
1945+# The following share file content was generated with
1946+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1947+# with share data == 'a'. The total size of this input
1948+# is 85 bytes.
1949+shareversionnumber = '\x00\x00\x00\x01'
1950+sharedatalength = '\x00\x00\x00\x01'
1951+numberofleases = '\x00\x00\x00\x01'
1952+shareinputdata = 'a'
1953+ownernumber = '\x00\x00\x00\x00'
1954+renewsecret  = 'x'*32
1955+cancelsecret = 'y'*32
1956+expirationtime = '\x00(\xde\x80'
1957+nextlease = ''
1958+containerdata = shareversionnumber + sharedatalength + numberofleases
1959+client_data = shareinputdata + ownernumber + renewsecret + \
1960+    cancelsecret + expirationtime + nextlease
1961+share_data = containerdata + client_data
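+# (Illustrative tally: containerdata is 4+4+4 == 12 bytes and client_data is
+# 1+4+32+32+4+0 == 73 bytes, giving the 85-byte total mentioned above.)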
1962+testnodeid = 'testnodeidxxxxxxxxxx'
1963+
1964+
1965+class MockFileSystem(unittest.TestCase):
1966+    """ I simulate a filesystem that the code under test can use. I simulate
1967+    just the parts of the filesystem that the current implementation of the
1968+    Disk backend needs. """
1969+    def setUp(self):
1970+        # Make patcher, patch, and effects for disk-using functions.
1971+        msg( "%s.setUp()" % (self,))
1972+        self.mockedfilepaths = {}
1973+        # keys are pathnames, values are MockFilePath objects. This is necessary because
1974+        # MockFilePath behavior sometimes depends on the filesystem. Where it does,
1975+        # self.mockedfilepaths has the relevant information.
1976+        self.storedir = MockFilePath('teststoredir', self.mockedfilepaths)
1977+        self.basedir = self.storedir.child('shares')
1978+        self.baseincdir = self.basedir.child('incoming')
1979+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
1980+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
1981+        self.shareincomingname = self.sharedirincomingname.child('0')
1982+        self.sharefinalname = self.sharedirfinalname.child('0')
1983+
1984+        # FIXME: these patches won't work; disk_backend no longer imports FilePath, BucketCountingCrawler,
1985+        # or LeaseCheckingCrawler.
1986+
1987+        self.FilePathFake = mock.patch('allmydata.storage.backends.disk.disk_backend.FilePath', new = MockFilePath)
1988+        self.FilePathFake.__enter__()
1989+
1990+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.BucketCountingCrawler')
1991+        FakeBCC = self.BCountingCrawler.__enter__()
1992+        FakeBCC.side_effect = self.call_FakeBCC
1993+
1994+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.LeaseCheckingCrawler')
1995+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
1996+        FakeLCC.side_effect = self.call_FakeLCC
1997+
1998+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
1999+        GetSpace = self.get_available_space.__enter__()
2000+        GetSpace.side_effect = self.call_get_available_space
2001+
2002+        self.statforsize = mock.patch('allmydata.storage.backends.disk.core.filepath.stat')
2003+        getsize = self.statforsize.__enter__()
2004+        getsize.side_effect = self.call_statforsize
2005+
2006+    def call_FakeBCC(self, StateFile):
2007+        return MockBCC()
2008+
2009+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
2010+        return MockLCC()
2011+
2012+    def call_get_available_space(self, storedir, reservedspace):
2013+        # The input vector has an input size of 85.
2014+        return 85 - reservedspace
2015+
2016+    def call_statforsize(self, fakefpname):
2017+        return self.mockedfilepaths[fakefpname].fileobject.size()
2018+
2019+    def tearDown(self):
2020+        msg( "%s.tearDown()" % (self,))
2021+        self.FilePathFake.__exit__()
2022+        self.mockedfilepaths = {}
2023+
2024+
2025+class MockFilePath:
2026+    def __init__(self, pathstring, ffpathsenvironment, existence=False):
2027+        #  I can't just make the values MockFileObjects because they may be directories.
2028+        self.mockedfilepaths = ffpathsenvironment
2029+        self.path = pathstring
2030+        self.existence = existence
2031+        if not self.mockedfilepaths.has_key(self.path):
2032+            #  The first MockFilePath object is special
2033+            self.mockedfilepaths[self.path] = self
2034+            self.fileobject = None
2035+        else:
2036+            self.fileobject = self.mockedfilepaths[self.path].fileobject
2037+        self.spawn = {}
2038+        self.antecedent = os.path.dirname(self.path)
2039+
2040+    def setContent(self, contentstring):
2041+        # This method rewrites the data in the file that corresponds to its path
2042+        # name whether it preexisted or not.
2043+        self.fileobject = MockFileObject(contentstring)
2044+        self.existence = True
2045+        self.mockedfilepaths[self.path].fileobject = self.fileobject
2046+        self.mockedfilepaths[self.path].existence = self.existence
2047+        self.setparents()
2048+
2049+    def create(self):
2050+        # This method chokes if there's a pre-existing file!
2051+        if self.mockedfilepaths[self.path].fileobject:
2052+            raise OSError
2053+        else:
2054+            self.existence = True
2055+            self.mockedfilepaths[self.path].fileobject = self.fileobject
2056+            self.mockedfilepaths[self.path].existence = self.existence
2057+            self.setparents()
2058+
2059+    def open(self, mode='r'):
2060+        # XXX Makes no use of mode.
2061+        if not self.mockedfilepaths[self.path].fileobject:
2062+            # If there's no fileobject there already then make one and put it there.
2063+            self.fileobject = MockFileObject()
2064+            self.existence = True
2065+            self.mockedfilepaths[self.path].fileobject = self.fileobject
2066+            self.mockedfilepaths[self.path].existence = self.existence
2067+        else:
2068+            # Otherwise get a ref to it.
2069+            self.fileobject = self.mockedfilepaths[self.path].fileobject
2070+            self.existence = self.mockedfilepaths[self.path].existence
2071+        return self.fileobject.open(mode)
2072+
2073+    def child(self, childstring):
2074+        arg2child = os.path.join(self.path, childstring)
2075+        child = MockFilePath(arg2child, self.mockedfilepaths)
2076+        return child
2077+
2078+    def children(self):
2079+        childrenfromffs = [ffp for ffp in self.mockedfilepaths.values() if ffp.path.startswith(self.path)]
2080+        childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)]
2081+        childrenfromffs = [ffp for ffp in childrenfromffs if ffp.exists()]
2082+        self.spawn = frozenset(childrenfromffs)
2083+        return self.spawn
2084+
2085+    def parent(self):
2086+        if self.mockedfilepaths.has_key(self.antecedent):
2087+            parent = self.mockedfilepaths[self.antecedent]
2088+        else:
2089+            parent = MockFilePath(self.antecedent, self.mockedfilepaths)
2090+        return parent
2091+
2092+    def parents(self):
2093+        antecedents = []
2094+        def f(fps, antecedents):
2095+            newfps = os.path.split(fps)[0]
2096+            if newfps:
2097+                antecedents.append(newfps)
2098+                f(newfps, antecedents)
2099+        f(self.path, antecedents)
2100+        return antecedents
2101+
2102+    def setparents(self):
2103+        for fps in self.parents():
2104+            if not self.mockedfilepaths.has_key(fps):
2105+                self.mockedfilepaths[fps] = MockFilePath(fps, self.mockedfilepaths, existence=True)
2106+
2107+    def basename(self):
2108+        return os.path.split(self.path)[1]
2109+
2110+    def moveTo(self, newffp):
2111+        #  XXX Makes no distinction between file and directory arguments; this is a deviation from filepath.moveTo.
2112+        if self.mockedfilepaths[newffp.path].exists():
2113+            raise OSError
2114+        else:
2115+            self.mockedfilepaths[newffp.path] = self
2116+            self.path = newffp.path
2117+
2118+    def getsize(self):
2119+        return self.fileobject.getsize()
2120+
2121+    def exists(self):
2122+        return self.existence
2123+
2124+    def isdir(self):
2125+        return True
2126+
2127+    def makedirs(self):
2128+        # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
2129+        pass
2130+
2131+    def remove(self):
2132+        pass
2133+
2134+
2135+class MockFileObject:
2136+    def __init__(self, contentstring=''):
2137+        self.buffer = contentstring
2138+        self.pos = 0
2139+    def open(self, mode='r'):
2140+        return self
2141+    def write(self, instring):
2142+        begin = self.pos
2143+        padlen = begin - len(self.buffer)
2144+        if padlen > 0:
2145+            self.buffer += '\x00' * padlen
2146+        end = self.pos + len(instring)
2147+        self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
2148+        self.pos = end
2149+    def close(self):
2150+        self.pos = 0
2151+    def seek(self, pos):
2152+        self.pos = pos
2153+    def read(self, numberbytes):
2154+        return self.buffer[self.pos:self.pos+numberbytes]
2155+    def tell(self):
2156+        return self.pos
2157+    def size(self):
2158+        # XXX This method (a) is not found on a real file object, and (b) is part of a wild mung-up of filepath.stat!
2159+        # XXX Finally we shall hopefully use a getsize method soon; must consult first though.
2160+        # Hmmm...  perhaps we need to sometimes stat the address when there's not a mockfileobject present?
2161+        return {stat.ST_SIZE:len(self.buffer)}
2162+    def getsize(self):
2163+        return len(self.buffer)
2164+
2165+class MockBCC:
2166+    def setServiceParent(self, Parent):
2167+        pass
2168+
2169+
2170+class MockLCC:
2171+    def setServiceParent(self, Parent):
2172+        pass
2173+
2174+
2175+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
2176+    """ NullBackend is just for testing and executable documentation, so
2177+    this test is actually a test of StorageServer in which we're using
2178+    NullBackend as helper code for the test, rather than a test of
2179+    NullBackend. """
2180+    def setUp(self):
2181+        self.ss = StorageServer(testnodeid, NullBackend())
2182+
2183+    @mock.patch('os.mkdir')
2184+    @mock.patch('__builtin__.open')
2185+    @mock.patch('os.listdir')
2186+    @mock.patch('os.path.isdir')
2187+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2188+        """
2189+        Write a new share. This tests that StorageServer's remote_allocate_buckets
2190+        generates the correct return types when given test-vector arguments. That
2191+        bs is of the correct type is verified by attempting to invoke remote_write
2192+        on bs[0].
2193+        """
2194+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2195+        bs[0].remote_write(0, 'a')
2196+        self.failIf(mockisdir.called)
2197+        self.failIf(mocklistdir.called)
2198+        self.failIf(mockopen.called)
2199+        self.failIf(mockmkdir.called)
2200+
2201+
2202+class TestServerConstruction(MockFileSystem, ReallyEqualMixin):
2203+    def test_create_server_disk_backend(self):
2204+        """ This tests whether a server instance can be constructed with a
2205+        filesystem backend. To pass the test, it mustn't use the filesystem
2206+        outside of its configured storedir. """
2207+        StorageServer(testnodeid, DiskBackend(self.storedir))
2208+
2209+
2210+class TestServerAndDiskBackend(MockFileSystem, ReallyEqualMixin):
2211+    """ This tests both the StorageServer and the Disk backend together. """
2212+    def setUp(self):
2213+        MockFileSystem.setUp(self)
2214+        try:
2215+            self.backend = DiskBackend(self.storedir)
2216+            self.ss = StorageServer(testnodeid, self.backend)
2217+
2218+            self.backendwithreserve = DiskBackend(self.storedir, reserved_space = 1)
2219+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
2220+        except:
2221+            MockFileSystem.tearDown(self)
2222+            raise
2223+
2224+    @mock.patch('time.time')
2225+    @mock.patch('allmydata.util.fileutil.get_available_space')
2226+    def test_out_of_space(self, mockget_available_space, mocktime):
2227+        mocktime.return_value = 0
2228+
2229+        def call_get_available_space(dir, reserve):
2230+            return 0
2231+
2232+        mockget_available_space.side_effect = call_get_available_space
2233+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2234+        self.failUnlessReallyEqual(bsc, {})
2235+
2236+    @mock.patch('time.time')
2237+    def test_write_and_read_share(self, mocktime):
2238+        """
2239+        Write a new share, read it, and test the server's (and disk backend's)
2240+        handling of simultaneous and successive attempts to write the same
2241+        share.
2242+        """
2243+        mocktime.return_value = 0
2244+        # Inspect incoming and fail unless it's empty.
2245+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
2246+
2247+        self.failUnlessReallyEqual(incomingset, frozenset())
2248+
2249+        # Populate incoming with sharenum 0.
2250+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
2251+
2252+        # This is a transparent-box test: inspect incoming and fail unless sharenum 0 is listed there.
2253+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
2254+
2257+        # Attempt to create a second share writer with the same sharenum.
2258+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
2259+
2260+        # Show that no sharewriter results from a remote_allocate_buckets
2261+        # with the same si and sharenum, until BucketWriter.remote_close()
2262+        # has been called.
2263+        self.failIf(bsa)
2264+
2265+        # Test allocated size.
2266+        spaceint = self.ss.allocated_size()
2267+        self.failUnlessReallyEqual(spaceint, 1)
2268+
2269+        # Write 'a' to shnum 0. Only tested together with close and read.
2270+        bs[0].remote_write(0, 'a')
2271+
2272+        # Preclose: Inspect final, failUnless nothing there.
2273+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
2274+        bs[0].remote_close()
2275+
2276+        # Postclose: (Omnibus) failUnless written data is in final.
2277+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
2278+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
2279+        contents = sharesinfinal[0].read_share_data(0, 73)
2280+        self.failUnlessReallyEqual(contents, client_data)
2281+
2282+        # Exercise the case that the share we're asking to allocate is
2283+        # already (completely) uploaded.
2284+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2285+
2286+
2287+    def test_read_old_share(self):
2288+        """ This tests whether the code correctly finds and reads
2289+        shares written out by old (Tahoe-LAFS <= v1.8.2)
2290+        servers. There is a similar test in test_download, but that one
2291+        is from the perspective of the client and exercises a deeper
2292+        stack of code. This one is for exercising just the
2293+        StorageServer object. """
2294+        # Construct a file with the appropriate contents in the mockfilesystem.
2295+        datalen = len(share_data)
2296+        finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(0))
2297+        finalhome.setContent(share_data)
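+        # (si_si2dir maps a storage index to its two-level share directory,
+        # shares/<prefix>/<si>, under which each share is named by its
+        # shnum; illustrative gloss of the disk backend layout.)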
2298+
2299+        # Now begin the test.
2300+        bs = self.ss.remote_get_buckets('teststorage_index')
2301+
2302+        self.failUnlessEqual(len(bs), 1)
2303+        b = bs['0']
2304+        # These should match by definition; the next two cases cover reads whose behaviour is not completely unambiguous.
2305+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
2306+        # If you try to read past the end, you get as much data as is there.
2307+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
2308+        # If you start reading past the end of the file you get the empty string.
2309+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2310}
2311[Pluggable backends -- all other changes. refs #999
2312david-sarah@jacaranda.org**20110919233256
2313 Ignore-this: 1a77b6b5d178b32a9b914b699ba7e957
2314] {
2315hunk ./src/allmydata/client.py 245
2316             sharetypes.append("immutable")
2317         if self.get_config("storage", "expire.mutable", True, boolean=True):
2318             sharetypes.append("mutable")
2319-        expiration_sharetypes = tuple(sharetypes)
2320 
2321hunk ./src/allmydata/client.py 246
2322+        expiration_policy = {
2323+            'enabled': expire,
2324+            'mode': mode,
2325+            'override_lease_duration': o_l_d,
2326+            'cutoff_date': cutoff_date,
2327+            'sharetypes': tuple(sharetypes),
2328+        }
2329         ss = StorageServer(storedir, self.nodeid,
2330                            reserved_space=reserved,
2331                            discard_storage=discard,
2332hunk ./src/allmydata/client.py 258
2333                            readonly_storage=readonly,
2334                            stats_provider=self.stats_provider,
2335-                           expiration_enabled=expire,
2336-                           expiration_mode=mode,
2337-                           expiration_override_lease_duration=o_l_d,
2338-                           expiration_cutoff_date=cutoff_date,
2339-                           expiration_sharetypes=expiration_sharetypes)
2340+                           expiration_policy=expiration_policy)
2341         self.add_service(ss)
2342 
2343         d = self.when_tub_ready()
2344hunk ./src/allmydata/immutable/offloaded.py 306
2345         if os.path.exists(self._encoding_file):
2346             self.log("ciphertext already present, bypassing fetch",
2347                      level=log.UNUSUAL)
2348+            # XXX the following comment is probably stale, since
2349+            # LocalCiphertextReader.get_plaintext_hashtree_leaves does not exist.
2350+            #
2351             # we'll still need the plaintext hashes (when
2352             # LocalCiphertextReader.get_plaintext_hashtree_leaves() is
2353             # called), and currently the easiest way to get them is to ask
2354hunk ./src/allmydata/immutable/upload.py 765
2355             self._status.set_progress(1, progress)
2356         return cryptdata
2357 
2358-
2359     def get_plaintext_hashtree_leaves(self, first, last, num_segments):
2360hunk ./src/allmydata/immutable/upload.py 766
2361+        """OBSOLETE; Get the leaf nodes of a merkle hash tree over the
2362+        plaintext segments, i.e. get the tagged hashes of the given segments.
2363+        The segment size is expected to be generated by the
2364+        IEncryptedUploadable before any plaintext is read or ciphertext
2365+        produced, so that the segment hashes can be generated with only a
2366+        single pass.
2367+
2368+        This returns a Deferred that fires with a sequence of hashes, using:
2369+
2370+         tuple(segment_hashes[first:last])
2371+
2372+        'num_segments' is used to assert that the number of segments that the
2373+        IEncryptedUploadable handled matches the number of segments that the
2374+        encoder was expecting.
2375+
2376+        This method must not be called until the final byte has been read
2377+        from read_encrypted(). Once this method is called, read_encrypted()
2378+        can never be called again.
2379+        """
2380         # this is currently unused, but will live again when we fix #453
2381         if len(self._plaintext_segment_hashes) < num_segments:
2382             # close out the last one
2383hunk ./src/allmydata/immutable/upload.py 803
2384         return defer.succeed(tuple(self._plaintext_segment_hashes[first:last]))
2385 
2386     def get_plaintext_hash(self):
2387+        """OBSOLETE; Get the hash of the whole plaintext.
2388+
2389+        This returns a Deferred that fires with a tagged SHA-256 hash of the
2390+        whole plaintext, obtained from hashutil.plaintext_hash(data).
2391+        """
2392+        # this is currently unused, but will live again when we fix #453
2393         h = self._plaintext_hasher.digest()
2394         return defer.succeed(h)
2395 
2396hunk ./src/allmydata/interfaces.py 29
2397 Number = IntegerConstraint(8) # 2**(8*8) == 16EiB ~= 18e18 ~= 18 exabytes
2398 Offset = Number
2399 ReadSize = int # the 'int' constraint is 2**31 == 2Gib -- large files are processed in not-so-large increments
2400-WriteEnablerSecret = Hash # used to protect mutable bucket modifications
2401-LeaseRenewSecret = Hash # used to protect bucket lease renewal requests
2402-LeaseCancelSecret = Hash # used to protect bucket lease cancellation requests
2403+WriteEnablerSecret = Hash # used to protect mutable share modifications
2404+LeaseRenewSecret = Hash # used to protect lease renewal requests
2405+LeaseCancelSecret = Hash # used to protect lease cancellation requests
2406 
2407 class RIStubClient(RemoteInterface):
2408     """Each client publishes a service announcement for a dummy object called
2409hunk ./src/allmydata/interfaces.py 106
2410                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2411                          allocated_size=Offset, canary=Referenceable):
2412         """
2413-        @param storage_index: the index of the bucket to be created or
2414+        @param storage_index: the index of the shareset to be created or
2415                               increfed.
2416         @param sharenums: these are the share numbers (probably between 0 and
2417                           99) that the sender is proposing to store on this
2418hunk ./src/allmydata/interfaces.py 111
2419                           server.
2420-        @param renew_secret: This is the secret used to protect bucket refresh
2421+        @param renew_secret: This is the secret used to protect lease renewal.
2422                              This secret is generated by the client and
2423                              stored for later comparison by the server. Each
2424                              server is given a different secret.
2425hunk ./src/allmydata/interfaces.py 115
2426-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2427-        @param canary: If the canary is lost before close(), the bucket is
2428+        @param cancel_secret: ignored
2429+        @param canary: If the canary is lost before close(), the allocation is
2430                        deleted.
2431         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2432                  already have and allocated is what we hereby agree to accept.
2433hunk ./src/allmydata/interfaces.py 129
2434                   renew_secret=LeaseRenewSecret,
2435                   cancel_secret=LeaseCancelSecret):
2436         """
2437-        Add a new lease on the given bucket. If the renew_secret matches an
2438+        Add a new lease on the given shareset. If the renew_secret matches an
2439         existing lease, that lease will be renewed instead. If there is no
2440hunk ./src/allmydata/interfaces.py 131
2441-        bucket for the given storage_index, return silently. (note that in
2442+        shareset for the given storage_index, return silently. (Note that in
2443         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2444hunk ./src/allmydata/interfaces.py 133
2445-        bucket)
2446+        shareset.)
2447         """
2448         return Any() # returns None now, but future versions might change
2449 
2450hunk ./src/allmydata/interfaces.py 139
2451     def renew_lease(storage_index=StorageIndex, renew_secret=LeaseRenewSecret):
2452         """
2453-        Renew the lease on a given bucket, resetting the timer to 31 days.
2454-        Some networks will use this, some will not. If there is no bucket for
2455+        Renew the lease on a given shareset, resetting the timer to 31 days.
2456+        Some networks will use this, some will not. If there is no shareset for
2457         the given storage_index, IndexError will be raised.
2458 
2459         For mutable shares, if the given renew_secret does not match an
2460hunk ./src/allmydata/interfaces.py 146
2461         existing lease, IndexError will be raised with a note listing the
2462         server-nodeids on the existing leases, so leases on migrated shares
2463-        can be renewed or cancelled. For immutable shares, IndexError
2464-        (without the note) will be raised.
2465+        can be renewed. For immutable shares, IndexError (without the note)
2466+        will be raised.
2467         """
2468         return Any()
2469 
2470hunk ./src/allmydata/interfaces.py 154
2471     def get_buckets(storage_index=StorageIndex):
2472         return DictOf(int, RIBucketReader, maxKeys=MAX_BUCKETS)
2473 
2474-
2475-
2476     def slot_readv(storage_index=StorageIndex,
2477                    shares=ListOf(int), readv=ReadVector):
2478         """Read a vector from the numbered shares associated with the given
2479hunk ./src/allmydata/interfaces.py 163
2480 
2481     def slot_testv_and_readv_and_writev(storage_index=StorageIndex,
2482                                         secrets=TupleOf(WriteEnablerSecret,
2483-                                                        LeaseRenewSecret,
2484-                                                        LeaseCancelSecret),
2485+                                                        LeaseRenewSecret),
2486                                         tw_vectors=TestAndWriteVectorsForShares,
2487                                         r_vector=ReadVector,
2488                                         ):
2489hunk ./src/allmydata/interfaces.py 167
2490-        """General-purpose test-and-set operation for mutable slots. Perform
2491-        a bunch of comparisons against the existing shares. If they all pass,
2492-        then apply a bunch of write vectors to those shares. Then use the
2493-        read vectors to extract data from all the shares and return the data.
2494+        """
2495+        General-purpose atomic test-read-and-set operation for mutable slots.
2496+        Perform a bunch of comparisons against the existing shares. If they
2497+        all pass: use the read vectors to extract data from all the shares,
2498+        then apply a bunch of write vectors to those shares. Return the read
2499+        data, which does not include any modifications made by the writes.
2500 
2501         This method is, um, large. The goal is to allow clients to update all
2502         the shares associated with a mutable file in a single round trip.
2503hunk ./src/allmydata/interfaces.py 177
2504 
2505-        @param storage_index: the index of the bucket to be created or
2506+        @param storage_index: the index of the shareset to be created or
2507                               increfed.
2508         @param write_enabler: a secret that is stored along with the slot.
2509                               Writes are accepted from any caller who can
2510hunk ./src/allmydata/interfaces.py 183
2511                               present the matching secret. A different secret
2512                               should be used for each slot*server pair.
2513-        @param renew_secret: This is the secret used to protect bucket refresh
2514+        @param renew_secret: This is the secret used to protect lease renewal.
2515                              This secret is generated by the client and
2516                              stored for later comparison by the server. Each
2517                              server is given a different secret.
2518hunk ./src/allmydata/interfaces.py 187
2519-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2520+        @param cancel_secret: ignored
2521 
2522hunk ./src/allmydata/interfaces.py 189
2523-        The 'secrets' argument is a tuple of (write_enabler, renew_secret,
2524-        cancel_secret). The first is required to perform any write. The
2525-        latter two are used when allocating new shares. To simply acquire a
2526-        new lease on existing shares, use an empty testv and an empty writev.
2527+        The 'secrets' argument is a tuple with (write_enabler, renew_secret).
2528+        The write_enabler is required to perform any write. The renew_secret
2529+        is used when allocating new shares.
2530 
2531         Each share can have a separate test vector (i.e. a list of
2532         comparisons to perform). If all vectors for all shares pass, then all
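As a concrete illustration of the semantics described above, a client-side call might look like the following sketch (rref, storage_index, write_enabler, and renew_secret are assumed to be in scope; the vectors are made up):

    secrets = (write_enabler, renew_secret)   # no cancel secret any more
    tw_vectors = {
        0: ([(0, 5, "eq", "hello")],    # test: bytes 0..4 must equal "hello"
            [(0, "HELLO")],             # write: replace them at offset 0
            None),                      # new_length: leave the size alone
    }
    d = rref.callRemote("slot_testv_and_readv_and_writev",
                        storage_index, secrets, tw_vectors, [(0, 5)])
    # fires with (wrote, read_data); read_data holds the pre-write contents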
2533hunk ./src/allmydata/interfaces.py 280
2534         store that on disk.
2535         """
2536 
2537-class IStorageBucketWriter(Interface):
2538+
2539+class IStorageBackend(Interface):
2540     """
2541hunk ./src/allmydata/interfaces.py 283
2542-    Objects of this kind live on the client side.
2543+    Objects of this kind live on the server side and are used by the
2544+    storage server object.
2545     """
2546hunk ./src/allmydata/interfaces.py 286
2547-    def put_block(segmentnum=int, data=ShareData):
2548-        """@param data: For most segments, this data will be 'blocksize'
2549-        bytes in length. The last segment might be shorter.
2550-        @return: a Deferred that fires (with None) when the operation completes
2551+    def get_available_space():
2552+        """
2553+        Returns available space for share storage in bytes, or
2554+        None if this information is not available or if the available
2555+        space is unlimited.
2556+
2557+        If the backend is configured for read-only mode then this will
2558+        return 0.
2559+        """
2560+
2561+    def get_sharesets_for_prefix(prefix):
2562+        """
2563+        Generates IShareSet objects for all storage indices matching the
2564+        given prefix for which this backend holds shares.
2565+        """
2566+
2567+    def get_shareset(storageindex):
2568+        """
2569+        Get an IShareSet object for the given storage index.
2570+        """
2571+
2572+    def advise_corrupt_share(storageindex, sharetype, shnum, reason):
2573+        """
2574+        Clients who discover hash failures in shares that they have
2575+        downloaded from me will use this method to inform me about the
2576+        failures. I will record their concern so that my operator can
2577+        manually inspect the shares in question.
2578+
2579+        'sharetype' is either 'mutable' or 'immutable'. 'shnum' is the integer
2580+        share number. 'reason' is a human-readable explanation of the problem,
2581+        probably including some expected hash values and the computed ones
2582+        that did not match. Corruption advisories for mutable shares should
2583+        include a hash of the public key (the same value that appears in the
2584+        mutable-file verify-cap), since the current share format does not
2585+        store that on disk.
2586+
2587+        @param storageindex=str
2588+        @param sharetype=str
2589+        @param shnum=int
2590+        @param reason=str
2591+        """
2592+
2593+
2594+class IShareSet(Interface):
2595+    def get_storage_index():
2596+        """
2597+        Returns the storage index for this shareset.
2598+        """
2599+
2600+    def get_storage_index_string():
2601+        """
2602+        Returns the base32-encoded storage index for this shareset.
2603+        """
2604+
2605+    def get_overhead():
2606+        """
2607+        Returns the storage overhead, in bytes, of this shareset (exclusive
2608+        of the space used by its shares).
2609+        """
2610+
2611+    def get_shares():
2612+        """
2613+        Generates the IStoredShare objects held in this shareset.
2614+        """
2615+
2616+    def has_incoming(shnum):
2617+        """
2618+        Returns True if this shareset has an incoming (partial) share with the given number, otherwise False.
2619+        """
2620+
2621+    def make_bucket_writer(storageserver, shnum, max_space_per_bucket, lease_info, canary):
2622+        """
2623+        Create a bucket writer that can be used to write data to a given share.
2624+
2625+        @param storageserver=RIStorageServer
2626+        @param shnum=int: A share number in this shareset
2627+        @param max_space_per_bucket=int: The maximum space allocated for the
2628+                 share, in bytes
2629+        @param lease_info=LeaseInfo: The initial lease information
2630+        @param canary=Referenceable: If the canary is lost before close(), the
2631+                 bucket is deleted.
2632+        @return an IStorageBucketWriter for the given share
2633+        """
2634+
2635+    def make_bucket_reader(storageserver, share):
2636+        """
2637+        Create a bucket reader that can be used to read data from a given share.
2638+
2639+        @param storageserver=RIStorageServer
2640+        @param share=IStoredShare
2641+        @return an IStorageBucketReader for the given share
2642+        """
2643+
2644+    def readv(wanted_shnums, read_vector):
2645+        """
2646+        Read a vector from the numbered shares in this shareset. An empty
2647+        wanted_shnums list means to return data from all known shares.
2648+
2649+        @param wanted_shnums=ListOf(int)
2650+        @param read_vector=ReadVector
2651+        @return DictOf(int, ReadData): shnum -> results, with one key per share
2652+        """
2653+
2654+    def testv_and_readv_and_writev(storageserver, secrets, test_and_write_vectors, read_vector, expiration_time):
2655+        """
2656+        General-purpose atomic test-read-and-set operation for mutable slots.
2657+        Perform a bunch of comparisons against the existing shares in this
2658+        shareset. If they all pass: use the read vectors to extract data from
2659+        all the shares, then apply a bunch of write vectors to those shares.
2660+        Return the read data, which does not include any modifications made by
2661+        the writes.
2662+
2663+        See the similar method in RIStorageServer for more detail.
2664+
2665+        @param storageserver=RIStorageServer
2666+        @param secrets=TupleOf(WriteEnablerSecret, LeaseRenewSecret[, ...])
2667+        @param test_and_write_vectors=TestAndWriteVectorsForShares
2668+        @param read_vector=ReadVector
2669+        @param expiration_time=int
2670+        @return TupleOf(bool, DictOf(int, ReadData))
2671+        """
2672+
2673+    def add_or_renew_lease(lease_info):
2674+        """
2675+        Add a new lease on the shares in this shareset. If the renew_secret
2676+        matches an existing lease, that lease will be renewed instead. If
2677+        there are no shares in this shareset, return silently.
2678+
2679+        @param lease_info=LeaseInfo
2680+        """
2681+
2682+    def renew_lease(renew_secret, new_expiration_time):
2683+        """
2684+        Renew a lease on the shares in this shareset, resetting the timer
2685+        to 31 days. Some grids will use this, some will not. If there are no
2686+        shares in this shareset, IndexError will be raised.
2687+
2688+        For mutable shares, if the given renew_secret does not match an
2689+        existing lease, IndexError will be raised with a note listing the
2690+        server-nodeids on the existing leases, so leases on migrated shares
2691+        can be renewed. For immutable shares, IndexError (without the note)
2692+        will be raised.
2693+
2694+        @param renew_secret=LeaseRenewSecret
2695+        """
2696+
2697+
2698+class IStoredShare(Interface):
2699+    """
2700+    This object may contain up to all of the share data.  It is intended
2701+    to be evaluated lazily, so that in many use cases substantially less
2702+    than all of the share data will be accessed.
2703+    """
2704+    def close():
2705+        """
2706+        Complete writing to this share.
2707+        """
2708+
2709+    def get_storage_index():
2710+        """
2711+        Returns the storage index.
2712+        """
2713+
2714+    def get_shnum():
2715+        """
2716+        Returns the share number.
2717+        """
2718+
2719+    def get_data_length():
2720+        """
2721+        Returns the data length in bytes.
2722+        """
2723+
2724+    def get_size():
2725+        """
2726+        Returns the size of the share in bytes.
2727+        """
2728+
2729+    def get_used_space():
2730+        """
2731+        Returns the amount of backend storage used by this share, including
2732+        overhead, in bytes.
2733+        """
2734+
2735+    def unlink():
2736+        """
2737+        Signal that this share can be removed from the backend storage. This does
2738+        not guarantee that the share data will be immediately inaccessible, or
2739+        that it will be securely erased.
2740+        """
2741+
2742+    def readv(read_vector):
2743+        """
2744+        Read the given read vector from this share and return the data.
2745+        """
2746+
2747+
2748+class IStoredMutableShare(IStoredShare):
2749+    def check_write_enabler(write_enabler, si_s):
2750+        """
2751+        Check that the given write_enabler matches the one stored in this share; si_s is the base32-encoded storage index, for use in error messages.
2752         """
2753 
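To make the relationships among these interfaces concrete, here is a hedged sketch of server-side code that counts the shares under a single two-letter prefix using only the methods defined above (count_shares_for_prefix is a hypothetical helper, not part of this patch):

    def count_shares_for_prefix(backend, prefix):
        # 'backend' provides IStorageBackend; each object generated by
        # get_sharesets_for_prefix() provides IShareSet.
        total = 0
        for shareset in backend.get_sharesets_for_prefix(prefix):
            total += sum(1 for _share in shareset.get_shares())
        return total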
2754hunk ./src/allmydata/interfaces.py 489
2755-    def put_plaintext_hashes(hashes=ListOf(Hash)):
2756+    def check_testv(test_vector):
2757+        """
2758+        Return True if this share satisfies the given test vector, otherwise False.
2759+        """
2760+
2761+    def writev(datav, new_length):
2762+        """
2763+        Apply the given write vector to this share, and set the new data length to new_length if it is not None.
2764+        """
2765+
2766+
2767+class IStorageBucketWriter(Interface):
2768+    """
2769+    Objects of this kind live on the client side.
2770+    """
2771+    def put_block(segmentnum, data):
2772         """
2773hunk ./src/allmydata/interfaces.py 506
2774+        @param segmentnum=int
2775+        @param data=ShareData: For most segments, this data will be 'blocksize'
2776+        bytes in length. The last segment might be shorter.
2777         @return: a Deferred that fires (with None) when the operation completes
2778         """
2779 
2780hunk ./src/allmydata/interfaces.py 512
2781-    def put_crypttext_hashes(hashes=ListOf(Hash)):
2782+    def put_crypttext_hashes(hashes):
2783         """
2784hunk ./src/allmydata/interfaces.py 514
2785+        @param hashes=ListOf(Hash)
2786         @return: a Deferred that fires (with None) when the operation completes
2787         """
2788 
2789hunk ./src/allmydata/interfaces.py 518
2790-    def put_block_hashes(blockhashes=ListOf(Hash)):
2791+    def put_block_hashes(blockhashes):
2792         """
2793hunk ./src/allmydata/interfaces.py 520
2794+        @param blockhashes=ListOf(Hash)
2795         @return: a Deferred that fires (with None) when the operation completes
2796         """
2797 
2798hunk ./src/allmydata/interfaces.py 524
2799-    def put_share_hashes(sharehashes=ListOf(TupleOf(int, Hash))):
2800+    def put_share_hashes(sharehashes):
2801         """
2802hunk ./src/allmydata/interfaces.py 526
2803+        @param sharehashes=ListOf(TupleOf(int, Hash))
2804         @return: a Deferred that fires (with None) when the operation completes
2805         """
2806 
2807hunk ./src/allmydata/interfaces.py 530
2808-    def put_uri_extension(data=URIExtensionData):
2809+    def put_uri_extension(data):
2810         """This block of data contains integrity-checking information (hashes
2811         of plaintext, crypttext, and shares), as well as encoding parameters
2812         that are necessary to recover the data. This is a serialized dict
2813hunk ./src/allmydata/interfaces.py 535
2814         mapping strings to other strings. The hash of this data is kept in
2815-        the URI and verified before any of the data is used. All buckets for
2816-        a given file contain identical copies of this data.
2817+        the URI and verified before any of the data is used. All share
2818+        containers for a given file contain identical copies of this data.
2819 
2820         The serialization format is specified with the following pseudocode:
2821         for k in sorted(dict.keys()):
2822hunk ./src/allmydata/interfaces.py 543
2823             assert re.match(r'^[a-zA-Z_\-]+$', k)
2824             write(k + ':' + netstring(dict[k]))
2825 
2826+        @param data=URIExtensionData
2827         @return: a Deferred that fires (with None) when the operation completes
2828         """
2829 
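The pseudocode above corresponds to a function along these lines (a sketch; serialize_uri_extension and the local netstring helper are illustrative names):

    import re

    def netstring(s):
        return "%d:%s," % (len(s), s)

    def serialize_uri_extension(d):
        out = []
        for k in sorted(d.keys()):
            assert re.match(r'^[a-zA-Z_\-]+$', k)
            out.append(k + ':' + netstring(d[k]))
        return ''.join(out)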
2830hunk ./src/allmydata/interfaces.py 558
2831 
2832 class IStorageBucketReader(Interface):
2833 
2834-    def get_block_data(blocknum=int, blocksize=int, size=int):
2835+    def get_block_data(blocknum, blocksize, size):
2836         """Most blocks will be the same size. The last block might be shorter
2837         than the others.
2838 
2839hunk ./src/allmydata/interfaces.py 562
2840+        @param blocknum=int
2841+        @param blocksize=int
2842+        @param size=int
2843         @return: ShareData
2844         """
2845 
2846hunk ./src/allmydata/interfaces.py 573
2847         @return: ListOf(Hash)
2848         """
2849 
2850-    def get_block_hashes(at_least_these=SetOf(int)):
2851+    def get_block_hashes(at_least_these=()):
2852         """
2853hunk ./src/allmydata/interfaces.py 575
2854+        @param at_least_these=SetOf(int)
2855         @return: ListOf(Hash)
2856         """
2857 
2858hunk ./src/allmydata/interfaces.py 579
2859-    def get_share_hashes(at_least_these=SetOf(int)):
2860+    def get_share_hashes():
2861         """
2862         @return: ListOf(TupleOf(int, Hash))
2863         """
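A minimal read-side sketch using the reader methods above (the reader object and the sizes are assumptions; each call returns a Deferred):

    d1 = reader.get_block_data(0, blocksize=65536, size=65536)
    d2 = reader.get_block_hashes(at_least_these=set([0]))
    d3 = reader.get_share_hashes()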
2864hunk ./src/allmydata/interfaces.py 611
2865         @return: unicode nickname, or None
2866         """
2867 
2868-    # methods moved from IntroducerClient, need review
2869-    def get_all_connections():
2870-        """Return a frozenset of (nodeid, service_name, rref) tuples, one for
2871-        each active connection we've established to a remote service. This is
2872-        mostly useful for unit tests that need to wait until a certain number
2873-        of connections have been made."""
2874-
2875-    def get_all_connectors():
2876-        """Return a dict that maps from (nodeid, service_name) to a
2877-        RemoteServiceConnector instance for all services that we are actively
2878-        trying to connect to. Each RemoteServiceConnector has the following
2879-        public attributes::
2880-
2881-          service_name: the type of service provided, like 'storage'
2882-          announcement_time: when we first heard about this service
2883-          last_connect_time: when we last established a connection
2884-          last_loss_time: when we last lost a connection
2885-
2886-          version: the peer's version, from the most recent connection
2887-          oldest_supported: the peer's oldest supported version, same
2888-
2889-          rref: the RemoteReference, if connected, otherwise None
2890-          remote_host: the IAddress, if connected, otherwise None
2891-
2892-        This method is intended for monitoring interfaces, such as a web page
2893-        that describes connecting and connected peers.
2894-        """
2895-
2896-    def get_all_peerids():
2897-        """Return a frozenset of all peerids to whom we have a connection (to
2898-        one or more services) established. Mostly useful for unit tests."""
2899-
2900-    def get_all_connections_for(service_name):
2901-        """Return a frozenset of (nodeid, service_name, rref) tuples, one
2902-        for each active connection that provides the given SERVICE_NAME."""
2903-
2904-    def get_permuted_peers(service_name, key):
2905-        """Returns an ordered list of (peerid, rref) tuples, selecting from
2906-        the connections that provide SERVICE_NAME, using a hash-based
2907-        permutation keyed by KEY. This randomizes the service list in a
2908-        repeatable way, to distribute load over many peers.
2909-        """
2910-
2911 
2912 class IMutableSlotWriter(Interface):
2913     """
2914hunk ./src/allmydata/interfaces.py 616
2915     The interface for a writer around a mutable slot on a remote server.
2916     """
2917-    def set_checkstring(checkstring, *args):
2918+    def set_checkstring(seqnum_or_checkstring, root_hash=None, salt=None):
2919         """
2920         Set the checkstring that I will pass to the remote server when
2921         writing.
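The widened signature admits both calling conventions used by the mutable-file code; schematically (writer and the argument values are assumed):

    writer.set_checkstring(checkstring)              # one packed checkstring
    writer.set_checkstring(seqnum, root_hash, salt)  # or its three components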
2922hunk ./src/allmydata/interfaces.py 640
2923         Add a block and salt to the share.
2924         """
2925 
2926-    def put_encprivey(encprivkey):
2927+    def put_encprivkey(encprivkey):
2928         """
2929         Add the encrypted private key to the share.
2930         """
2931hunk ./src/allmydata/interfaces.py 645
2932 
2933-    def put_blockhashes(blockhashes=list):
2934+    def put_blockhashes(blockhashes):
2935         """
2936hunk ./src/allmydata/interfaces.py 647
2937+        @param blockhashes=list
2938         Add the block hash tree to the share.
2939         """
2940 
2941hunk ./src/allmydata/interfaces.py 651
2942-    def put_sharehashes(sharehashes=dict):
2943+    def put_sharehashes(sharehashes):
2944         """
2945hunk ./src/allmydata/interfaces.py 653
2946+        @param sharehashes=dict
2947         Add the share hash chain to the share.
2948         """
2949 
2950hunk ./src/allmydata/interfaces.py 739
2951     def get_extension_params():
2952         """Return the extension parameters in the URI"""
2953 
2954-    def set_extension_params():
2955+    def set_extension_params(params):
2956         """Set the extension parameters that should be in the URI"""
2957 
2958 class IDirectoryURI(Interface):
2959hunk ./src/allmydata/interfaces.py 879
2960         writer-visible data using this writekey.
2961         """
2962 
2963-    # TODO: Can this be overwrite instead of replace?
2964-    def replace(new_contents):
2965-        """Replace the contents of the mutable file, provided that no other
2966+    def overwrite(new_contents):
2967+        """Overwrite the contents of the mutable file, provided that no other
2968         node has published (or is attempting to publish, concurrently) a
2969         newer version of the file than this one.
2970 
2971hunk ./src/allmydata/interfaces.py 1346
2972         is empty, the metadata will be an empty dictionary.
2973         """
2974 
2975-    def set_uri(name, writecap, readcap=None, metadata=None, overwrite=True):
2976+    def set_uri(name, writecap, readcap, metadata=None, overwrite=True):
2977         """I add a child (by writecap+readcap) at the specific name. I return
2978         a Deferred that fires when the operation finishes. If overwrite= is
2979         True, I will replace any existing child of the same name, otherwise
2980hunk ./src/allmydata/interfaces.py 1745
2981     Block Hash, and the encoding parameters, both of which must be included
2982     in the URI.
2983 
2984-    I do not choose shareholders, that is left to the IUploader. I must be
2985-    given a dict of RemoteReferences to storage buckets that are ready and
2986-    willing to receive data.
2987+    I do not choose shareholders, that is left to the IUploader.
2988     """
2989 
2990     def set_size(size):
2991hunk ./src/allmydata/interfaces.py 1752
2992         """Specify the number of bytes that will be encoded. This must be
2993         performed before get_serialized_params() can be called.
2994         """
2995+
2996     def set_params(params):
2997         """Override the default encoding parameters. 'params' is a tuple of
2998         (k,d,n), where 'k' is the number of required shares, 'd' is the
2999hunk ./src/allmydata/interfaces.py 1848
3000     download, validate, decode, and decrypt data from them, writing the
3001     results to an output file.
3002 
3003-    I do not locate the shareholders, that is left to the IDownloader. I must
3004-    be given a dict of RemoteReferences to storage buckets that are ready to
3005-    send data.
3006+    I do not locate the shareholders, that is left to the IDownloader.
3007     """
3008 
3009     def setup(outfile):
3010hunk ./src/allmydata/interfaces.py 1950
3011         resuming an interrupted upload (where we need to compute the
3012         plaintext hashes, but don't need the redundant encrypted data)."""
3013 
3014-    def get_plaintext_hashtree_leaves(first, last, num_segments):
3015-        """OBSOLETE; Get the leaf nodes of a merkle hash tree over the
3016-        plaintext segments, i.e. get the tagged hashes of the given segments.
3017-        The segment size is expected to be generated by the
3018-        IEncryptedUploadable before any plaintext is read or ciphertext
3019-        produced, so that the segment hashes can be generated with only a
3020-        single pass.
3021-
3022-        This returns a Deferred that fires with a sequence of hashes, using:
3023-
3024-         tuple(segment_hashes[first:last])
3025-
3026-        'num_segments' is used to assert that the number of segments that the
3027-        IEncryptedUploadable handled matches the number of segments that the
3028-        encoder was expecting.
3029-
3030-        This method must not be called until the final byte has been read
3031-        from read_encrypted(). Once this method is called, read_encrypted()
3032-        can never be called again.
3033-        """
3034-
3035-    def get_plaintext_hash():
3036-        """OBSOLETE; Get the hash of the whole plaintext.
3037-
3038-        This returns a Deferred that fires with a tagged SHA-256 hash of the
3039-        whole plaintext, obtained from hashutil.plaintext_hash(data).
3040-        """
3041-
3042     def close():
3043         """Just like IUploadable.close()."""
3044 
3045hunk ./src/allmydata/interfaces.py 2144
3046         returns a Deferred that fires with an IUploadResults instance, from
3047         which the URI of the file can be obtained as results.uri ."""
3048 
3049-    def upload_ssk(write_capability, new_version, uploadable):
3050-        """TODO: how should this work?"""
3051-
3052 class ICheckable(Interface):
3053     def check(monitor, verify=False, add_lease=False):
3054         """Check up on my health, optionally repairing any problems.
3055hunk ./src/allmydata/interfaces.py 2505
3056 
3057 class IRepairResults(Interface):
3058     """I contain the results of a repair operation."""
3059-    def get_successful(self):
3060+    def get_successful():
3061         """Returns a boolean: True if the repair made the file healthy, False
3062         if not. Repair failure generally indicates a file that has been
3063         damaged beyond repair."""
3064hunk ./src/allmydata/interfaces.py 2577
3065     Tahoe process will typically have a single NodeMaker, but unit tests may
3066     create simplified/mocked forms for testing purposes.
3067     """
3068-    def create_from_cap(writecap, readcap=None, **kwargs):
3069+    def create_from_cap(writecap, readcap=None, deep_immutable=False, name=u"<unknown name>"):
3070         """I create an IFilesystemNode from the given writecap/readcap. I can
3071         only provide nodes for existing file/directory objects: use my other
3072         methods to create new objects. I return synchronously."""
3073hunk ./src/allmydata/monitor.py 30
3074 
3075     # the following methods are provided for the operation code
3076 
3077-    def is_cancelled(self):
3078+    def is_cancelled():
3079         """Returns True if the operation has been cancelled. If True,
3080         operation code should stop creating new work, and attempt to stop any
3081         work already in progress."""
3082hunk ./src/allmydata/monitor.py 35
3083 
3084-    def raise_if_cancelled(self):
3085+    def raise_if_cancelled():
3086         """Raise OperationCancelledError if the operation has been cancelled.
3087         Operation code that has a robust error-handling path can simply call
3088         this periodically."""
3089hunk ./src/allmydata/monitor.py 40
3090 
3091-    def set_status(self, status):
3092+    def set_status(status):
3093         """Sets the Monitor's 'status' object to an arbitrary value.
3094         Different operations will store different sorts of status information
3095         here. Operation code should use get+modify+set sequences to update
3096hunk ./src/allmydata/monitor.py 46
3097         this."""
3098 
3099-    def get_status(self):
3100+    def get_status():
3101         """Return the status object. If the operation failed, this will be a
3102         Failure instance."""
3103 
3104hunk ./src/allmydata/monitor.py 50
3105-    def finish(self, status):
3106+    def finish(status):
3107         """Call this when the operation is done, successful or not. The
3108         Monitor's lifetime is influenced by the completion of the operation
3109         it is monitoring. The Monitor's 'status' value will be set with the
3110hunk ./src/allmydata/monitor.py 63
3111 
3112     # the following methods are provided for the initiator of the operation
3113 
3114-    def is_finished(self):
3115+    def is_finished():
3116         """Return a boolean, True if the operation is done (whether
3117         successful or failed), False if it is still running."""
3118 
3119hunk ./src/allmydata/monitor.py 67
3120-    def when_done(self):
3121+    def when_done():
3122         """Return a Deferred that fires when the operation is complete. It
3123         will fire with the operation status, the same value as returned by
3124         get_status()."""
3125hunk ./src/allmydata/monitor.py 72
3126 
3127-    def cancel(self):
3128+    def cancel():
3129         """Cancel the operation as soon as possible. is_cancelled() will
3130         start returning True after this is called."""
3131 
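Taken together, these methods support an operation loop like the following sketch (do_work and items are hypothetical):

    def run_operation(monitor, items):
        for n, item in enumerate(items):
            monitor.raise_if_cancelled()    # stop promptly if cancelled
            do_work(item)
            status = monitor.get_status() or {}
            status["processed"] = n + 1     # get+modify+set, as documented
            monitor.set_status(status)
        monitor.finish(monitor.get_status())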
3132hunk ./src/allmydata/mutable/filenode.py 753
3133         self._writekey = writekey
3134         self._serializer = defer.succeed(None)
3135 
3136-
3137     def get_sequence_number(self):
3138         """
3139         Get the sequence number of the mutable version that I represent.
3140hunk ./src/allmydata/mutable/filenode.py 759
3141         """
3142         return self._version[0] # verinfo[0] == the sequence number
3143 
3144+    def get_servermap(self):
3145+        return self._servermap
3146 
3147hunk ./src/allmydata/mutable/filenode.py 762
3148-    # TODO: Terminology?
3149     def get_writekey(self):
3150         """
3151         I return a writekey or None if I don't have a writekey.
3152hunk ./src/allmydata/mutable/filenode.py 768
3153         """
3154         return self._writekey
3155 
3156-
3157     def set_downloader_hints(self, hints):
3158         """
3159         I set the downloader hints.
3160hunk ./src/allmydata/mutable/filenode.py 776
3161 
3162         self._downloader_hints = hints
3163 
3164-
3165     def get_downloader_hints(self):
3166         """
3167         I return the downloader hints.
3168hunk ./src/allmydata/mutable/filenode.py 782
3169         """
3170         return self._downloader_hints
3171 
3172-
3173     def overwrite(self, new_contents):
3174         """
3175         I overwrite the contents of this mutable file version with the
3176hunk ./src/allmydata/mutable/filenode.py 791
3177 
3178         return self._do_serialized(self._overwrite, new_contents)
3179 
3180-
3181     def _overwrite(self, new_contents):
3182         assert IMutableUploadable.providedBy(new_contents)
3183         assert self._servermap.last_update_mode == MODE_WRITE
3184hunk ./src/allmydata/mutable/filenode.py 797
3185 
3186         return self._upload(new_contents)
3187 
3188-
3189     def modify(self, modifier, backoffer=None):
3190         """I use a modifier callback to apply a change to the mutable file.
3191         I implement the following pseudocode::
3192hunk ./src/allmydata/mutable/filenode.py 841
3193 
3194         return self._do_serialized(self._modify, modifier, backoffer)
3195 
3196-
3197     def _modify(self, modifier, backoffer):
3198         if backoffer is None:
3199             backoffer = BackoffAgent().delay
3200hunk ./src/allmydata/mutable/filenode.py 846
3201         return self._modify_and_retry(modifier, backoffer, True)
3202 
3203-
3204     def _modify_and_retry(self, modifier, backoffer, first_time):
3205         """
3206         I try to apply modifier to the contents of this version of the
3207hunk ./src/allmydata/mutable/filenode.py 878
3208         d.addErrback(_retry)
3209         return d
3210 
3211-
3212     def _modify_once(self, modifier, first_time):
3213         """
3214         I attempt to apply a modifier to the contents of the mutable
3215hunk ./src/allmydata/mutable/filenode.py 913
3216         d.addCallback(_apply)
3217         return d
3218 
3219-
3220     def is_readonly(self):
3221         """
3222         I return True if this MutableFileVersion provides no write
3223hunk ./src/allmydata/mutable/filenode.py 921
3224         """
3225         return self._writekey is None
3226 
3227-
3228     def is_mutable(self):
3229         """
3230         I return True, since mutable files are always mutable by
3231hunk ./src/allmydata/mutable/filenode.py 928
3232         """
3233         return True
3234 
3235-
3236     def get_storage_index(self):
3237         """
3238         I return the storage index of the reference that I encapsulate.
3239hunk ./src/allmydata/mutable/filenode.py 934
3240         """
3241         return self._storage_index
3242 
3243-
3244     def get_size(self):
3245         """
3246         I return the length, in bytes, of this readable object.
3247hunk ./src/allmydata/mutable/filenode.py 940
3248         """
3249         return self._servermap.size_of_version(self._version)
3250 
3251-
3252     def download_to_data(self, fetch_privkey=False):
3253         """
3254         I return a Deferred that fires with the contents of this
3255hunk ./src/allmydata/mutable/filenode.py 951
3256         d.addCallback(lambda mc: "".join(mc.chunks))
3257         return d
3258 
3259-
3260     def _try_to_download_data(self):
3261         """
3262         I am an unserialized cousin of download_to_data; I am called
3263hunk ./src/allmydata/mutable/filenode.py 963
3264         d.addCallback(lambda mc: "".join(mc.chunks))
3265         return d
3266 
3267-
3268     def read(self, consumer, offset=0, size=None, fetch_privkey=False):
3269         """
3270         I read a portion (possibly all) of the mutable file that I
3271hunk ./src/allmydata/mutable/filenode.py 971
3272         return self._do_serialized(self._read, consumer, offset, size,
3273                                    fetch_privkey)
3274 
3275-
3276     def _read(self, consumer, offset=0, size=None, fetch_privkey=False):
3277         """
3278         I am the serialized companion of read.
3279hunk ./src/allmydata/mutable/filenode.py 981
3280         d = r.download(consumer, offset, size)
3281         return d
3282 
3283-
3284     def _do_serialized(self, cb, *args, **kwargs):
3285         # note: to avoid deadlock, this callable is *not* allowed to invoke
3286         # other serialized methods within this (or any other)
3287hunk ./src/allmydata/mutable/filenode.py 999
3288         self._serializer.addErrback(log.err)
3289         return d
3290 
3291-
3292     def _upload(self, new_contents):
3293         #assert self._pubkey, "update_servermap must be called before publish"
3294         p = Publish(self._node, self._storage_broker, self._servermap)
3295hunk ./src/allmydata/mutable/filenode.py 1009
3296         d.addCallback(self._did_upload, new_contents.get_size())
3297         return d
3298 
3299-
3300     def _did_upload(self, res, size):
3301         self._most_recent_size = size
3302         return res
3303hunk ./src/allmydata/mutable/filenode.py 1029
3304         """
3305         return self._do_serialized(self._update, data, offset)
3306 
3307-
3308     def _update(self, data, offset):
3309         """
3310         I update the mutable file version represented by this particular
3311hunk ./src/allmydata/mutable/filenode.py 1058
3312         d.addCallback(self._build_uploadable_and_finish, data, offset)
3313         return d
3314 
3315-
3316     def _do_modify_update(self, data, offset):
3317         """
3318         I perform a file update by modifying the contents of the file
3319hunk ./src/allmydata/mutable/filenode.py 1073
3320             return new
3321         return self._modify(m, None)
3322 
3323-
3324     def _do_update_update(self, data, offset):
3325         """
3326         I start the Servermap update that gets us the data we need to
3327hunk ./src/allmydata/mutable/filenode.py 1108
3328         return self._update_servermap(update_range=(start_segment,
3329                                                     end_segment))
3330 
3331-
3332     def _decode_and_decrypt_segments(self, ignored, data, offset):
3333         """
3334         After the servermap update, I take the encrypted and encoded
3335hunk ./src/allmydata/mutable/filenode.py 1148
3336         d3 = defer.succeed(blockhashes)
3337         return deferredutil.gatherResults([d1, d2, d3])
3338 
3339-
3340     def _build_uploadable_and_finish(self, segments_and_bht, data, offset):
3341         """
3342         After the process has the plaintext segments, I build the
3343hunk ./src/allmydata/mutable/filenode.py 1163
3344         p = Publish(self._node, self._storage_broker, self._servermap)
3345         return p.update(u, offset, segments_and_bht[2], self._version)
3346 
3347-
3348     def _update_servermap(self, mode=MODE_WRITE, update_range=None):
3349         """
3350         I update the servermap. I return a Deferred that fires when the
3351hunk ./src/allmydata/storage/common.py 1
3352-
3353-import os.path
3354 from allmydata.util import base32
3355 
3356 class DataTooLargeError(Exception):
3357hunk ./src/allmydata/storage/common.py 5
3358     pass
3359+
3360 class UnknownMutableContainerVersionError(Exception):
3361     pass
3362hunk ./src/allmydata/storage/common.py 8
3363+
3364 class UnknownImmutableContainerVersionError(Exception):
3365     pass
3366 
3367hunk ./src/allmydata/storage/common.py 18
3368 
3369 def si_a2b(ascii_storageindex):
3370     return base32.a2b(ascii_storageindex)
3371-
3372-def storage_index_to_dir(storageindex):
3373-    sia = si_b2a(storageindex)
3374-    return os.path.join(sia[:2], sia)
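For reference, the two helpers that remain are inverses of each other; a quick sanity check with a dummy 16-byte index:

    si = "\x00\x01" * 8                # a dummy binary storage index
    assert si_a2b(si_b2a(si)) == si    # base32 round-trip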
3375hunk ./src/allmydata/storage/crawler.py 2
3376 
3377-import os, time, struct
3378+import time, struct
3379 import cPickle as pickle
3380 from twisted.internet import reactor
3381 from twisted.application import service
3382hunk ./src/allmydata/storage/crawler.py 6
3383+
3384+from allmydata.util.assertutil import precondition
3385+from allmydata.interfaces import IStorageBackend
3386 from allmydata.storage.common import si_b2a
3387hunk ./src/allmydata/storage/crawler.py 10
3388-from allmydata.util import fileutil
3389+
3390 
3391 class TimeSliceExceeded(Exception):
3392     pass
3393hunk ./src/allmydata/storage/crawler.py 15
3394 
3395+
3396 class ShareCrawler(service.MultiService):
3397hunk ./src/allmydata/storage/crawler.py 17
3398-    """A ShareCrawler subclass is attached to a StorageServer, and
3399-    periodically walks all of its shares, processing each one in some
3400-    fashion. This crawl is rate-limited, to reduce the IO burden on the host,
3401-    since large servers can easily have a terabyte of shares, in several
3402-    million files, which can take hours or days to read.
3403+    """
3404+    An instance of a subclass of ShareCrawler is attached to a storage
3405+    backend, and periodically walks the backend's shares, processing them
3406+    in some fashion. This crawl is rate-limited to reduce the I/O burden on
3407+    the host, since large servers can easily have a terabyte of shares in
3408+    several million files, which can take hours or days to read.
3409 
3410     Once the crawler starts a cycle, it will proceed at a rate limited by the
3411     allowed_cpu_percentage= and cpu_slice= parameters: yielding the reactor
3412hunk ./src/allmydata/storage/crawler.py 33
3413     long enough to ensure that 'minimum_cycle_time' elapses between the start
3414     of two consecutive cycles.
3415 
3416-    We assume that the normal upload/download/get_buckets traffic of a tahoe
3417+    We assume that the normal upload/download/DYHB traffic of a Tahoe-LAFS
3418     grid will cause the prefixdir contents to be mostly cached in the kernel,
3419hunk ./src/allmydata/storage/crawler.py 35
3420-    or that the number of buckets in each prefixdir will be small enough to
3421-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
3422-    buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
3423+    or that the number of sharesets in each prefixdir will be small enough to
3424+    load quickly. A 1TB allmydata.com server was measured to have 2.56 million
3425+    sharesets, spread into the 1024 prefixdirs, with about 2500 sharesets per
3426     prefix. On this server, each prefixdir took 130ms-200ms to list the first
3427     time, and 17ms to list the second time.
3428 
3429hunk ./src/allmydata/storage/crawler.py 41
3430-    To use a crawler, create a subclass which implements the process_bucket()
3431-    method. It will be called with a prefixdir and a base32 storage index
3432-    string. process_bucket() must run synchronously. Any keys added to
3433-    self.state will be preserved. Override add_initial_state() to set up
3434-    initial state keys. Override finished_cycle() to perform additional
3435-    processing when the cycle is complete. Any status that the crawler
3436-    produces should be put in the self.state dictionary. Status renderers
3437-    (like a web page which describes the accomplishments of your crawler)
3438-    will use crawler.get_state() to retrieve this dictionary; they can
3439-    present the contents as they see fit.
3440+    To implement a crawler, create a subclass that implements the
3441+    process_shareset() method. It will be called with a prefix and an
3442+    object providing the IShareSet interface. process_shareset() must run
3443+    synchronously. Any keys added to self.state will be preserved. Override
3444+    add_initial_state() to set up initial state keys. Override
3445+    finished_cycle() to perform additional processing when the cycle is
3446+    complete. Any status that the crawler produces should be put in the
3447+    self.state dictionary. Status renderers (like a web page describing the
3448+    accomplishments of your crawler) will use crawler.get_state() to retrieve
3449+    this dictionary; they can present the contents as they see fit.
3450 
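A hedged sketch of that subclassing pattern (CountingCrawler is a hypothetical example, not part of this patch):

    class CountingCrawler(ShareCrawler):
        def add_initial_state(self):
            self.state.setdefault("sharesets-processed", 0)

        def process_shareset(self, cycle, prefix, shareset):
            # keys added to self.state are preserved across restarts
            self.state["sharesets-processed"] += 1

        def finished_cycle(self, cycle):
            self.state["last-cycle"] = cycle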
3451hunk ./src/allmydata/storage/crawler.py 52
3452-    Then create an instance, with a reference to a StorageServer and a
3453-    filename where it can store persistent state. The statefile is used to
3454-    keep track of how far around the ring the process has travelled, as well
3455-    as timing history to allow the pace to be predicted and controlled. The
3456-    statefile will be updated and written to disk after each time slice (just
3457-    before the crawler yields to the reactor), and also after each cycle is
3458-    finished, and also when stopService() is called. Note that this means
3459-    that a crawler which is interrupted with SIGKILL while it is in the
3460-    middle of a time slice will lose progress: the next time the node is
3461-    started, the crawler will repeat some unknown amount of work.
3462+    Then create an instance, with a reference to a backend object providing
3463+    the IStorageBackend interface, and a filename where it can store
3464+    persistent state. The statefile is used to keep track of how far around
3465+    the ring the process has travelled, as well as timing history to allow
3466+    the pace to be predicted and controlled. The statefile will be updated
3467+    and written to disk after each time slice (just before the crawler yields
3468+    to the reactor), and also after each cycle is finished, and also when
3469+    stopService() is called. Note that this means that a crawler that is
3470+    interrupted with SIGKILL while it is in the middle of a time slice will
3471+    lose progress: the next time the node is started, the crawler will repeat
3472+    some unknown amount of work.
3473 
3474     The crawler instance must be started with startService() before it will
3475hunk ./src/allmydata/storage/crawler.py 65
3476-    do any work. To make it stop doing work, call stopService().
3477+    do any work. To make it stop doing work, call stopService(). A crawler
3478+    is usually a child service of a StorageServer, although it should not
3479+    depend on that.
3480+
3481+    For historical reasons, some dictionary key names use the term "bucket"
3482+    for what is now preferably called a "shareset" (the set of shares that a
3483+    server holds under a given storage index).
3484     """
3485 
3486     slow_start = 300 # don't start crawling for 5 minutes after startup
3487hunk ./src/allmydata/storage/crawler.py 80
3488     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
3489     minimum_cycle_time = 300 # don't run a cycle faster than this
3490 
3491-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
3492+    def __init__(self, backend, statefp, allowed_cpu_percentage=None):
3493+        precondition(IStorageBackend.providedBy(backend), backend)
3494         service.MultiService.__init__(self)
3495hunk ./src/allmydata/storage/crawler.py 83
3496+        self.backend = backend
3497+        self.statefp = statefp
3498         if allowed_cpu_percentage is not None:
3499             self.allowed_cpu_percentage = allowed_cpu_percentage
3500hunk ./src/allmydata/storage/crawler.py 87
3501-        self.server = server
3502-        self.sharedir = server.sharedir
3503-        self.statefile = statefile
3504         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
3505                          for i in range(2**10)]
3506         self.prefixes.sort()
3507hunk ./src/allmydata/storage/crawler.py 91
3508         self.timer = None
3509-        self.bucket_cache = (None, [])
3510+        self.shareset_cache = (None, [])
3511         self.current_sleep_time = None
3512         self.next_wake_time = None
3513         self.last_prefix_finished_time = None
3514hunk ./src/allmydata/storage/crawler.py 154
3515                 left = len(self.prefixes) - self.last_complete_prefix_index
3516                 remaining = left * self.last_prefix_elapsed_time
3517                 # TODO: remainder of this prefix: we need to estimate the
3518-                # per-bucket time, probably by measuring the time spent on
3519-                # this prefix so far, divided by the number of buckets we've
3520+                # per-shareset time, probably by measuring the time spent on
3521+                # this prefix so far, divided by the number of sharesets we've
3522                 # processed.
3523             d["estimated-cycle-complete-time-left"] = remaining
3524             # it's possible to call get_progress() from inside a crawler's
3525hunk ./src/allmydata/storage/crawler.py 175
3526         state dictionary.
3527 
3528         If we are not currently sleeping (i.e. get_state() was called from
3529-        inside the process_prefixdir, process_bucket, or finished_cycle()
3530+        inside the process_prefixdir, process_shareset, or finished_cycle()
3531         methods, or if startService has not yet been called on this crawler),
3532         these two keys will be None.
3533 
3534hunk ./src/allmydata/storage/crawler.py 188
3535     def load_state(self):
3536         # we use this to store state for both the crawler's internals and
3537         # anything the subclass-specific code needs. The state is stored
3538-        # after each bucket is processed, after each prefixdir is processed,
3539+        # after each shareset is processed, after each prefixdir is processed,
3540         # and after a cycle is complete. The internal keys we use are:
3541         #  ["version"]: int, always 1
3542         #  ["last-cycle-finished"]: int, or None if we have not yet finished
3543hunk ./src/allmydata/storage/crawler.py 202
3544         #                            are sleeping between cycles, or if we
3545         #                            have not yet finished any prefixdir since
3546         #                            a cycle was started
3547-        #  ["last-complete-bucket"]: str, base32 storage index bucket name
3548-        #                            of the last bucket to be processed, or
3549-        #                            None if we are sleeping between cycles
3550+        #  ["last-complete-bucket"]: str, base32 storage index of the last
3551+        #                            shareset to be processed, or None if we
3552+        #                            are sleeping between cycles
3553         try:
3554hunk ./src/allmydata/storage/crawler.py 206
3555-            f = open(self.statefile, "rb")
3556-            state = pickle.load(f)
3557-            f.close()
3558+            state = pickle.loads(self.statefp.getContent())
3559         except EnvironmentError:
3560             state = {"version": 1,
3561                      "last-cycle-finished": None,
3562hunk ./src/allmydata/storage/crawler.py 242
3563         else:
3564             last_complete_prefix = self.prefixes[lcpi]
3565         self.state["last-complete-prefix"] = last_complete_prefix
3566-        tmpfile = self.statefile + ".tmp"
3567-        f = open(tmpfile, "wb")
3568-        pickle.dump(self.state, f)
3569-        f.close()
3570-        fileutil.move_into_place(tmpfile, self.statefile)
3571+        self.statefp.setContent(pickle.dumps(self.state))
3572 
3573     def startService(self):
3574         # arrange things to look like we were just sleeping, so
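The statefp used above is a Twisted FilePath; the pickle round-trip that replaces the old open/dump/move dance looks like this sketch (the path is hypothetical):

    import cPickle as pickle
    from twisted.python.filepath import FilePath

    statefp = FilePath("BASEDIR/storage/crawler.state")  # hypothetical
    statefp.setContent(pickle.dumps({"version": 1}))
    state = pickle.loads(statefp.getContent())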
3575hunk ./src/allmydata/storage/crawler.py 284
3576         sleep_time = (this_slice / self.allowed_cpu_percentage) - this_slice
3577         # if the math gets weird, or a timequake happens, don't sleep
3578         # forever. Note that this means that, while a cycle is running, we
3579-        # will process at least one bucket every 5 minutes, no matter how
3580-        # long that bucket takes.
3581+        # will process at least one shareset every 5 minutes, no matter how
3582+        # long that shareset takes.
3583         sleep_time = max(0.0, min(sleep_time, 299))
3584         if finished_cycle:
3585             # how long should we sleep between cycles? Don't run faster than
3586hunk ./src/allmydata/storage/crawler.py 315
3587         for i in range(self.last_complete_prefix_index+1, len(self.prefixes)):
3588             # if we want to yield earlier, just raise TimeSliceExceeded()
3589             prefix = self.prefixes[i]
3590-            prefixdir = os.path.join(self.sharedir, prefix)
3591-            if i == self.bucket_cache[0]:
3592-                buckets = self.bucket_cache[1]
3593+            if i == self.shareset_cache[0]:
3594+                sharesets = self.shareset_cache[1]
3595             else:
3596hunk ./src/allmydata/storage/crawler.py 318
3597-                try:
3598-                    buckets = os.listdir(prefixdir)
3599-                    buckets.sort()
3600-                except EnvironmentError:
3601-                    buckets = []
3602-                self.bucket_cache = (i, buckets)
3603-            self.process_prefixdir(cycle, prefix, prefixdir,
3604-                                   buckets, start_slice)
3605+                sharesets = list(self.backend.get_sharesets_for_prefix(prefix))
3606+                self.shareset_cache = (i, sharesets)
3607+            self.process_prefixdir(cycle, prefix, sharesets, start_slice)
3608             self.last_complete_prefix_index = i
3609 
3610             now = time.time()
3611hunk ./src/allmydata/storage/crawler.py 345
3612         self.finished_cycle(cycle)
3613         self.save_state()
3614 
3615-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
3616-        """This gets a list of bucket names (i.e. storage index strings,
3617+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
3618+        """
3619+        This gets a list of shareset objects (whose storage index strings are
3620         base32-encoded) in sorted order.
3621 
3622         You can override this if your crawler doesn't care about the actual
3623hunk ./src/allmydata/storage/crawler.py 352
3624         shares, for example a crawler which merely keeps track of how many
3625-        buckets are being managed by this server.
3626+        sharesets are being managed by this server.
3627 
3628hunk ./src/allmydata/storage/crawler.py 354
3629-        Subclasses which *do* care about actual bucket should leave this
3630-        method along, and implement process_bucket() instead.
3631+    Subclasses which *do* care about actual sharesets should leave this
3632+        method alone, and implement process_shareset() instead.
3633         """
3634 
3635hunk ./src/allmydata/storage/crawler.py 358
3636-        for bucket in buckets:
3637-            if bucket <= self.state["last-complete-bucket"]:
3638+        for shareset in sharesets:
3639+            base32si = shareset.get_storage_index_string()
3640+            if base32si <= self.state["last-complete-bucket"]:
3641                 continue
3642hunk ./src/allmydata/storage/crawler.py 362
3643-            self.process_bucket(cycle, prefix, prefixdir, bucket)
3644-            self.state["last-complete-bucket"] = bucket
3645+            self.process_shareset(cycle, prefix, shareset)
3646+            self.state["last-complete-bucket"] = base32si
3647             if time.time() >= start_slice + self.cpu_slice:
3648                 raise TimeSliceExceeded()
3649 
3650hunk ./src/allmydata/storage/crawler.py 370
3651     # the remaining methods are explictly for subclasses to implement.
3652 
3653     def started_cycle(self, cycle):
3654-        """Notify a subclass that the crawler is about to start a cycle.
3655+        """
3656+        Notify a subclass that the crawler is about to start a cycle.
3657 
3658         This method is for subclasses to override. No upcall is necessary.
3659         """
3660hunk ./src/allmydata/storage/crawler.py 377
3661         pass
3662 
3663-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
3664-        """Examine a single bucket. Subclasses should do whatever they want
3665+    def process_shareset(self, cycle, prefix, shareset):
3666+        """
3667+        Examine a single shareset. Subclasses should do whatever they want
3668         to do to the shares therein, then update self.state as necessary.
3669 
3670         If the crawler is never interrupted by SIGKILL, this method will be
3671hunk ./src/allmydata/storage/crawler.py 383
3672-        called exactly once per share (per cycle). If it *is* interrupted,
3673+        called exactly once per shareset (per cycle). If it *is* interrupted,
3674         then the next time the node is started, some amount of work will be
3675         duplicated, according to when self.save_state() was last called. By
3676         default, save_state() is called at the end of each timeslice, and
3677hunk ./src/allmydata/storage/crawler.py 391
3678 
3679         To reduce the chance of duplicate work (i.e. to avoid adding multiple
3680         records to a database), you can call save_state() at the end of your
3681-        process_bucket() method. This will reduce the maximum duplicated work
3682-        to one bucket per SIGKILL. It will also add overhead, probably 1-20ms
3683-        per bucket (and some disk writes), which will count against your
3684-        allowed_cpu_percentage, and which may be considerable if
3685-        process_bucket() runs quickly.
3686+        process_shareset() method. This will reduce the maximum duplicated
3687+        work to one shareset per SIGKILL. It will also add overhead, probably
3688+        1-20ms per shareset (and some disk writes), which will count against
3689+        your allowed_cpu_percentage, and which may be considerable if
3690+        process_shareset() runs quickly.
3691 
3692         This method is for subclasses to override. No upcall is necessary.
3693         """
3694hunk ./src/allmydata/storage/crawler.py 402
3695         pass
3696 
3697     def finished_prefix(self, cycle, prefix):
3698-        """Notify a subclass that the crawler has just finished processing a
3699-        prefix directory (all buckets with the same two-character/10bit
3700+        """
3701+        Notify a subclass that the crawler has just finished processing a
3702+        prefix directory (all sharesets with the same two-character/10-bit
3703         prefix). To impose a limit on how much work might be duplicated by a
3704         SIGKILL that occurs during a timeslice, you can call
3705         self.save_state() here, but be aware that it may represent a
3706hunk ./src/allmydata/storage/crawler.py 415
3707         pass
3708 
3709     def finished_cycle(self, cycle):
3710-        """Notify subclass that a cycle (one complete traversal of all
3711+        """
3712+        Notify subclass that a cycle (one complete traversal of all
3713         prefixdirs) has just finished. 'cycle' is the number of the cycle
3714         that just finished. This method should perform summary work and
3715         update self.state to publish information to status displays.
3716hunk ./src/allmydata/storage/crawler.py 433
3717         pass
3718 
3719     def yielding(self, sleep_time):
3720-        """The crawler is about to sleep for 'sleep_time' seconds. This
3721+        """
3722+        The crawler is about to sleep for 'sleep_time' seconds. This
3723         method is mostly for the convenience of unit tests.
3724 
3725         This method is for subclasses to override. No upcall is necessary.
3726hunk ./src/allmydata/storage/crawler.py 443
3727 
3728 
3729 class BucketCountingCrawler(ShareCrawler):
3730-    """I keep track of how many buckets are being managed by this server.
3731-    This is equivalent to the number of distributed files and directories for
3732-    which I am providing storage. The actual number of files+directories in
3733-    the full grid is probably higher (especially when there are more servers
3734-    than 'N', the number of generated shares), because some files+directories
3735-    will have shares on other servers instead of me. Also note that the
3736-    number of buckets will differ from the number of shares in small grids,
3737-    when more than one share is placed on a single server.
3738+    """
3739+    I keep track of how many sharesets, each corresponding to a storage index,
3740+    are being managed by this server. This is equivalent to the number of
3741+    distributed files and directories for which I am providing storage. The
3742+    actual number of files and directories in the full grid is probably higher
3743+    (especially when there are more servers than 'N', the number of generated
3744+    shares), because some files and directories will have shares on other
3745+    servers instead of me. Also note that the number of sharesets will differ
3746+    from the number of shares in small grids, when more than one share is
3747+    placed on a single server.
3748     """
3749 
3750     minimum_cycle_time = 60*60 # we don't need this more than once an hour
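To make the shareset/share distinction concrete, here is a small worked example (the 3-server layout is illustrative, not from this patch):

    # A file encoded with N=10 shares, stored on 3 servers as 4+3+3:
    # the server holding 4 of those shares counts 4 shares but only
    # 1 shareset, because all 4 belong to the same storage index.
    shares_held = 4
    sharesets_held = 1   # one per distinct storage index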
3751hunk ./src/allmydata/storage/crawler.py 457
3752 
3753-    def __init__(self, server, statefile, num_sample_prefixes=1):
3754-        ShareCrawler.__init__(self, server, statefile)
3755+    def __init__(self, backend, statefp, num_sample_prefixes=1):
3756+        ShareCrawler.__init__(self, backend, statefp)
3757         self.num_sample_prefixes = num_sample_prefixes
3758 
3759     def add_initial_state(self):
3760hunk ./src/allmydata/storage/crawler.py 471
3761         self.state.setdefault("last-complete-bucket-count", None)
3762         self.state.setdefault("storage-index-samples", {})
3763 
3764-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
3765+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
3766         # we override process_prefixdir() because we don't want to look at
3767hunk ./src/allmydata/storage/crawler.py 473
3768-        # the individual buckets. We'll save state after each one. On my
3769+        # the individual sharesets. We'll save state after each one. On my
3770         # laptop, a mostly-empty storage server can process about 70
3771         # prefixdirs in a 1.0s slice.
3772         if cycle not in self.state["bucket-counts"]:
3773hunk ./src/allmydata/storage/crawler.py 478
3774             self.state["bucket-counts"][cycle] = {}
3775-        self.state["bucket-counts"][cycle][prefix] = len(buckets)
3776+        self.state["bucket-counts"][cycle][prefix] = len(sharesets)
3777         if prefix in self.prefixes[:self.num_sample_prefixes]:
3778hunk ./src/allmydata/storage/crawler.py 480
3779-            self.state["storage-index-samples"][prefix] = (cycle, buckets)
3780+            self.state["storage-index-samples"][prefix] = (cycle, sharesets)
3781 
3782     def finished_cycle(self, cycle):
3783         last_counts = self.state["bucket-counts"].get(cycle, [])
3784hunk ./src/allmydata/storage/crawler.py 486
3785         if len(last_counts) == len(self.prefixes):
3786             # great, we have a whole cycle.
3787-            num_buckets = sum(last_counts.values())
3788-            self.state["last-complete-bucket-count"] = num_buckets
3789+            num_sharesets = sum(last_counts.values())
3790+            self.state["last-complete-bucket-count"] = num_sharesets
3791             # get rid of old counts
3792             for old_cycle in list(self.state["bucket-counts"].keys()):
3793                 if old_cycle != cycle:
3794hunk ./src/allmydata/storage/crawler.py 494
3795                     del self.state["bucket-counts"][old_cycle]
3796         # get rid of old samples too
3797         for prefix in list(self.state["storage-index-samples"].keys()):
3798-            old_cycle,buckets = self.state["storage-index-samples"][prefix]
3799+            old_cycle, storage_indices = self.state["storage-index-samples"][prefix]
3800             if old_cycle != cycle:
3801                 del self.state["storage-index-samples"][prefix]
3802hunk ./src/allmydata/storage/crawler.py 497
3803-
3804hunk ./src/allmydata/storage/expirer.py 1
3805-import time, os, pickle, struct
3806+
3807+import time, pickle, struct
3808+from twisted.python import log as twlog
3809+
3810 from allmydata.storage.crawler import ShareCrawler
3811hunk ./src/allmydata/storage/expirer.py 6
3812-from allmydata.storage.shares import get_share_file
3813-from allmydata.storage.common import UnknownMutableContainerVersionError, \
3814+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
3815      UnknownImmutableContainerVersionError
3816hunk ./src/allmydata/storage/expirer.py 8
3817-from twisted.python import log as twlog
3818+
3819 
3820 class LeaseCheckingCrawler(ShareCrawler):
3821     """I examine the leases on all shares, determining which are still valid
3822hunk ./src/allmydata/storage/expirer.py 17
3823     removed.
3824 
3825     I collect statistics on the leases and make these available to a web
3826-    status page, including::
3827+    status page, including:
3828 
3829     Space recovered during this cycle-so-far:
3830      actual (only if expiration_enabled=True):
3831hunk ./src/allmydata/storage/expirer.py 21
3832-      num-buckets, num-shares, sum of share sizes, real disk usage
3833+      num-storage-indices, num-shares, sum of share sizes, real disk usage
3834       ('real disk usage' means we use stat(fn).st_blocks*512 and include any
3835        space used by the directory)
3836      what it would have been with the original lease expiration time
3837hunk ./src/allmydata/storage/expirer.py 32
3838 
3839     Space recovered during the last 10 cycles  <-- saved in separate pickle
3840 
3841-    Shares/buckets examined:
3842+    Shares/storage-indices examined:
3843      this cycle-so-far
3844      prediction of rest of cycle
3845      during last 10 cycles <-- separate pickle
3846hunk ./src/allmydata/storage/expirer.py 42
3847     Histogram of leases-per-share:
3848      this-cycle-to-date
3849      last 10 cycles <-- separate pickle
3850-    Histogram of lease ages, buckets = 1day
3851+    Histogram of lease ages, in 1-day bins
3852      cycle-to-date
3853      last 10 cycles <-- separate pickle
3854 
3855hunk ./src/allmydata/storage/expirer.py 53
3856     slow_start = 360 # wait 6 minutes after startup
3857     minimum_cycle_time = 12*60*60 # not more than twice per day
3858 
3859-    def __init__(self, server, statefile, historyfile,
3860-                 expiration_enabled, mode,
3861-                 override_lease_duration, # used if expiration_mode=="age"
3862-                 cutoff_date, # used if expiration_mode=="cutoff-date"
3863-                 sharetypes):
3864-        self.historyfile = historyfile
3865-        self.expiration_enabled = expiration_enabled
3866-        self.mode = mode
3867+    def __init__(self, backend, statefp, historyfp, expiration_policy):
3868+        # ShareCrawler.__init__ will call add_initial_state, so self.historyfp has to be set first.
3869+        self.historyfp = historyfp
3870+        ShareCrawler.__init__(self, backend, statefp)
3871+
3872+        self.expiration_enabled = expiration_policy['enabled']
3873+        self.mode = expiration_policy['mode']
3874         self.override_lease_duration = None
3875         self.cutoff_date = None
3876         if self.mode == "age":
3877hunk ./src/allmydata/storage/expirer.py 63
3878-            assert isinstance(override_lease_duration, (int, type(None)))
3879-            self.override_lease_duration = override_lease_duration # seconds
3880+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
3881+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
3882         elif self.mode == "cutoff-date":
3883hunk ./src/allmydata/storage/expirer.py 66
3884-            assert isinstance(cutoff_date, int) # seconds-since-epoch
3885-            assert cutoff_date is not None
3886-            self.cutoff_date = cutoff_date
3887+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
3888+            self.cutoff_date = expiration_policy['cutoff_date']
3889         else:
3890hunk ./src/allmydata/storage/expirer.py 69
3891-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
3892-        self.sharetypes_to_expire = sharetypes
3893-        ShareCrawler.__init__(self, server, statefile)
3894+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
3895+        self.sharetypes_to_expire = expiration_policy['sharetypes']
3896 
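For reference, a policy dict with the keys read above might be constructed like this (a sketch; the cutoff timestamp is an arbitrary example value):

    expiration_policy = {
        'enabled': True,
        'mode': 'cutoff-date',             # or 'age'
        'override_lease_duration': None,   # seconds; consulted only in 'age' mode
        'cutoff_date': 1316476800,         # seconds-since-epoch; 'cutoff-date' mode only
        'sharetypes': ('mutable', 'immutable'),
    }
    lease_checker = LeaseCheckingCrawler(backend, statefp, historyfp, expiration_policy)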
3897     def add_initial_state(self):
3898         # we fill ["cycle-to-date"] here (even though they will be reset in
3899hunk ./src/allmydata/storage/expirer.py 84
3900             self.state["cycle-to-date"].setdefault(k, so_far[k])
3901 
3902         # initialize history
3903-        if not os.path.exists(self.historyfile):
3904+        if not self.historyfp.exists():
3905             history = {} # cyclenum -> dict
3906hunk ./src/allmydata/storage/expirer.py 86
3907-            f = open(self.historyfile, "wb")
3908-            pickle.dump(history, f)
3909-            f.close()
3910+            self.historyfp.setContent(pickle.dumps(history))
3911 
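Note that FilePath.getContent() returns the file's contents as a byte string rather than a file object, so the matching deserializer is pickle.loads(), not pickle.load(). A minimal round-trip, assuming an example path:

    import pickle
    from twisted.python.filepath import FilePath

    historyfp = FilePath("lease_checker.history")    # example path
    historyfp.setContent(pickle.dumps({}))           # serialize and write
    history = pickle.loads(historyfp.getContent())   # read and deserialize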
3912     def create_empty_cycle_dict(self):
3913         recovered = self.create_empty_recovered_dict()
3914hunk ./src/allmydata/storage/expirer.py 99
3915 
3916     def create_empty_recovered_dict(self):
3917         recovered = {}
3918+        # "buckets" is ambiguous; here it means the number of sharesets (one per storage index per server)
3919         for a in ("actual", "original", "configured", "examined"):
3920             for b in ("buckets", "shares", "sharebytes", "diskbytes"):
3921                 recovered[a+"-"+b] = 0
3922hunk ./src/allmydata/storage/expirer.py 110
3923     def started_cycle(self, cycle):
3924         self.state["cycle-to-date"] = self.create_empty_cycle_dict()
3925 
3926-    def stat(self, fn):
3927-        return os.stat(fn)
3928-
3929-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
3930-        bucketdir = os.path.join(prefixdir, storage_index_b32)
3931-        s = self.stat(bucketdir)
3932+    def process_storage_index(self, cycle, prefix, container):
3933         would_keep_shares = []
3934         wks = None
3935hunk ./src/allmydata/storage/expirer.py 113
3936+        sharetype = None
3937 
3938hunk ./src/allmydata/storage/expirer.py 115
3939-        for fn in os.listdir(bucketdir):
3940-            try:
3941-                shnum = int(fn)
3942-            except ValueError:
3943-                continue # non-numeric means not a sharefile
3944-            sharefile = os.path.join(bucketdir, fn)
3945+        for share in container.get_shares():
3946+            sharetype = share.sharetype
3947             try:
3948hunk ./src/allmydata/storage/expirer.py 118
3949-                wks = self.process_share(sharefile)
3950+                wks = self.process_share(share)
3951             except (UnknownMutableContainerVersionError,
3952                     UnknownImmutableContainerVersionError,
3953                     struct.error):
3954hunk ./src/allmydata/storage/expirer.py 122
3955-                twlog.msg("lease-checker error processing %s" % sharefile)
3956+                twlog.msg("lease-checker error processing %r" % (share,))
3957                 twlog.err()
3958hunk ./src/allmydata/storage/expirer.py 124
3959-                which = (storage_index_b32, shnum)
3960+                which = (si_b2a(share.storageindex), share.get_shnum())
3961                 self.state["cycle-to-date"]["corrupt-shares"].append(which)
3962                 wks = (1, 1, 1, "unknown")
3963             would_keep_shares.append(wks)
3964hunk ./src/allmydata/storage/expirer.py 129
3965 
3966-        sharetype = None
3967+        container_type = None
3968         if wks:
3969hunk ./src/allmydata/storage/expirer.py 131
3970-            # use the last share's sharetype as the buckettype
3971-            sharetype = wks[3]
3972+            # use the last share's sharetype as the container type
3973+            container_type = wks[3]
3974         rec = self.state["cycle-to-date"]["space-recovered"]
3975         self.increment(rec, "examined-buckets", 1)
3976         if sharetype:
3977hunk ./src/allmydata/storage/expirer.py 136
3978-            self.increment(rec, "examined-buckets-"+sharetype, 1)
3979+            self.increment(rec, "examined-buckets-"+container_type, 1)
3980+
3981+        container_diskbytes = container.get_overhead()
3982 
3983hunk ./src/allmydata/storage/expirer.py 140
3984-        try:
3985-            bucket_diskbytes = s.st_blocks * 512
3986-        except AttributeError:
3987-            bucket_diskbytes = 0 # no stat().st_blocks on windows
3988         if sum([wks[0] for wks in would_keep_shares]) == 0:
3989hunk ./src/allmydata/storage/expirer.py 141
3990-            self.increment_bucketspace("original", bucket_diskbytes, sharetype)
3991+            self.increment_container_space("original", container_diskbytes, container_type)
3992         if sum([wks[1] for wks in would_keep_shares]) == 0:
3993hunk ./src/allmydata/storage/expirer.py 143
3994-            self.increment_bucketspace("configured", bucket_diskbytes, sharetype)
3995+            self.increment_container_space("configured", container_diskbytes, container_type)
3996         if sum([wks[2] for wks in would_keep_shares]) == 0:
3997hunk ./src/allmydata/storage/expirer.py 145
3998-            self.increment_bucketspace("actual", bucket_diskbytes, sharetype)
3999+            self.increment_container_space("actual", container_diskbytes, container_type)
4000 
4001hunk ./src/allmydata/storage/expirer.py 147
4002-    def process_share(self, sharefilename):
4003-        # first, find out what kind of a share it is
4004-        sf = get_share_file(sharefilename)
4005-        sharetype = sf.sharetype
4006+    def process_share(self, share):
4007+        sharetype = share.sharetype
4008         now = time.time()
4009hunk ./src/allmydata/storage/expirer.py 150
4010-        s = self.stat(sharefilename)
4011+        sharebytes = share.get_size()
4012+        diskbytes = share.get_used_space()
4013 
4014         num_leases = 0
4015         num_valid_leases_original = 0
4016hunk ./src/allmydata/storage/expirer.py 158
4017         num_valid_leases_configured = 0
4018         expired_leases_configured = []
4019 
4020-        for li in sf.get_leases():
4021+        for li in share.get_leases():
4022             num_leases += 1
4023             original_expiration_time = li.get_expiration_time()
4024             grant_renew_time = li.get_grant_renew_time_time()
4025hunk ./src/allmydata/storage/expirer.py 171
4026 
4027             #  expired-or-not according to our configured age limit
4028             expired = False
4029-            if self.mode == "age":
4030-                age_limit = original_expiration_time
4031-                if self.override_lease_duration is not None:
4032-                    age_limit = self.override_lease_duration
4033-                if age > age_limit:
4034-                    expired = True
4035-            else:
4036-                assert self.mode == "cutoff-date"
4037-                if grant_renew_time < self.cutoff_date:
4038-                    expired = True
4039-            if sharetype not in self.sharetypes_to_expire:
4040-                expired = False
4041+            if sharetype in self.sharetypes_to_expire:
4042+                if self.mode == "age":
4043+                    age_limit = original_expiration_time
4044+                    if self.override_lease_duration is not None:
4045+                        age_limit = self.override_lease_duration
4046+                    if age > age_limit:
4047+                        expired = True
4048+                else:
4049+                    assert self.mode == "cutoff-date"
4050+                    if grant_renew_time < self.cutoff_date:
4051+                        expired = True
4052 
4053             if expired:
4054                 expired_leases_configured.append(li)
4055hunk ./src/allmydata/storage/expirer.py 190
4056 
4057         so_far = self.state["cycle-to-date"]
4058         self.increment(so_far["leases-per-share-histogram"], num_leases, 1)
4059-        self.increment_space("examined", s, sharetype)
4060+        self.increment_space("examined", sharebytes, diskbytes, sharetype)
4061 
4062         would_keep_share = [1, 1, 1, sharetype]
4063 
4064hunk ./src/allmydata/storage/expirer.py 196
4065         if self.expiration_enabled:
4066             for li in expired_leases_configured:
4067-                sf.cancel_lease(li.cancel_secret)
4068+                share.cancel_lease(li.cancel_secret)
4069 
4070         if num_valid_leases_original == 0:
4071             would_keep_share[0] = 0
4072hunk ./src/allmydata/storage/expirer.py 200
4073-            self.increment_space("original", s, sharetype)
4074+            self.increment_space("original", sharebytes, diskbytes, sharetype)
4075 
4076         if num_valid_leases_configured == 0:
4077             would_keep_share[1] = 0
4078hunk ./src/allmydata/storage/expirer.py 204
4079-            self.increment_space("configured", s, sharetype)
4080+            self.increment_space("configured", sharebytes, diskbytes, sharetype)
4081             if self.expiration_enabled:
4082                 would_keep_share[2] = 0
4083hunk ./src/allmydata/storage/expirer.py 207
4084-                self.increment_space("actual", s, sharetype)
4085+                self.increment_space("actual", sharebytes, diskbytes, sharetype)
4086 
4087         return would_keep_share
4088 
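The four-element list returned here is aggregated by process_storage_index() above: index 0 says whether the share would survive under the original lease terms, index 1 under the configured policy, index 2 after actual expiration, and index 3 carries the share type. A sketch of how the sums are read (values illustrative):

    # two shares in one container, both expired under every policy:
    would_keep_shares = [[0, 0, 0, "immutable"], [0, 0, 0, "immutable"]]
    # sum(wks[0] for wks in would_keep_shares) == 0, so the container's
    # disk bytes count as recoverable under "original"; indices 1 and 2
    # do the same for "configured" and "actual".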
4089hunk ./src/allmydata/storage/expirer.py 211
4090-    def increment_space(self, a, s, sharetype):
4091-        sharebytes = s.st_size
4092-        try:
4093-            # note that stat(2) says that st_blocks is 512 bytes, and that
4094-            # st_blksize is "optimal file sys I/O ops blocksize", which is
4095-            # independent of the block-size that st_blocks uses.
4096-            diskbytes = s.st_blocks * 512
4097-        except AttributeError:
4098-            # the docs say that st_blocks is only on linux. I also see it on
4099-            # MacOS. But it isn't available on windows.
4100-            diskbytes = sharebytes
4101+    def increment_space(self, a, sharebytes, diskbytes, sharetype):
4102         so_far_sr = self.state["cycle-to-date"]["space-recovered"]
4103         self.increment(so_far_sr, a+"-shares", 1)
4104         self.increment(so_far_sr, a+"-sharebytes", sharebytes)
4105hunk ./src/allmydata/storage/expirer.py 221
4106             self.increment(so_far_sr, a+"-sharebytes-"+sharetype, sharebytes)
4107             self.increment(so_far_sr, a+"-diskbytes-"+sharetype, diskbytes)
4108 
4109-    def increment_bucketspace(self, a, bucket_diskbytes, sharetype):
4110+    def increment_container_space(self, a, container_diskbytes, container_type):
4111         rec = self.state["cycle-to-date"]["space-recovered"]
4112hunk ./src/allmydata/storage/expirer.py 223
4113-        self.increment(rec, a+"-diskbytes", bucket_diskbytes)
4114+        self.increment(rec, a+"-diskbytes", container_diskbytes)
4115         self.increment(rec, a+"-buckets", 1)
4116hunk ./src/allmydata/storage/expirer.py 225
4117-        if sharetype:
4118-            self.increment(rec, a+"-diskbytes-"+sharetype, bucket_diskbytes)
4119-            self.increment(rec, a+"-buckets-"+sharetype, 1)
4120+        if container_type:
4121+            self.increment(rec, a+"-diskbytes-"+container_type, container_diskbytes)
4122+            self.increment(rec, a+"-buckets-"+container_type, 1)
4123 
4124     def increment(self, d, k, delta=1):
4125         if k not in d:
4126hunk ./src/allmydata/storage/expirer.py 281
4127         # copy() needs to become a deepcopy
4128         h["space-recovered"] = s["space-recovered"].copy()
4129 
4130-        history = pickle.load(open(self.historyfile, "rb"))
4131+        history = pickle.loads(self.historyfp.getContent())
4132         history[cycle] = h
4133         while len(history) > 10:
4134             oldcycles = sorted(history.keys())
4135hunk ./src/allmydata/storage/expirer.py 286
4136             del history[oldcycles[0]]
4137-        f = open(self.historyfile, "wb")
4138-        pickle.dump(history, f)
4139-        f.close()
4140+        self.historyfp.setContent(pickle.dumps(history))
4141 
4142     def get_state(self):
4143         """In addition to the crawler state described in
4144hunk ./src/allmydata/storage/expirer.py 355
4145         progress = self.get_progress()
4146 
4147         state = ShareCrawler.get_state(self) # does a shallow copy
4148-        history = pickle.load(open(self.historyfile, "rb"))
4149+        history = pickle.loads(self.historyfp.getContent())
4150         state["history"] = history
4151 
4152         if not progress["cycle-in-progress"]:
4153hunk ./src/allmydata/storage/lease.py 3
4154 import struct, time
4155 
4156+
4157+class NonExistentLeaseError(Exception):
4158+    pass
4159+
4160 class LeaseInfo:
4161     def __init__(self, owner_num=None, renew_secret=None, cancel_secret=None,
4162                  expiration_time=None, nodeid=None):
4163hunk ./src/allmydata/storage/lease.py 21
4164 
4165     def get_expiration_time(self):
4166         return self.expiration_time
4167+
4168     def get_grant_renew_time_time(self):
4169         # hack, based upon fixed 31day expiration period
4170         return self.expiration_time - 31*24*60*60
4171hunk ./src/allmydata/storage/lease.py 25
4172+
4173     def get_age(self):
4174         return time.time() - self.get_grant_renew_time_time()
4175 
4176hunk ./src/allmydata/storage/lease.py 36
4177          self.expiration_time) = struct.unpack(">L32s32sL", data)
4178         self.nodeid = None
4179         return self
4180+
4181     def to_immutable_data(self):
4182         return struct.pack(">L32s32sL",
4183                            self.owner_num,
4184hunk ./src/allmydata/storage/lease.py 49
4185                            int(self.expiration_time),
4186                            self.renew_secret, self.cancel_secret,
4187                            self.nodeid)
4188+
4189     def from_mutable_data(self, data):
4190         (self.owner_num,
4191          self.expiration_time,
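For the immutable container format, each lease record is a fixed 72-byte structure. A round-trip sketch using the format string above (the secret values are placeholders):

    import struct

    fmt = ">L32s32sL"   # owner_num, renew_secret, cancel_secret, expiration_time
    assert struct.calcsize(fmt) == 72
    packed = struct.pack(fmt, 0, "r"*32, "c"*32, 1316476800)
    (owner_num, renew_secret,
     cancel_secret, expiration_time) = struct.unpack(fmt, packed)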
4192hunk ./src/allmydata/storage/server.py 1
4193-import os, re, weakref, struct, time
4194+import weakref, time
4195 
4196 from foolscap.api import Referenceable
4197 from twisted.application import service
4198hunk ./src/allmydata/storage/server.py 7
4199 
4200 from zope.interface import implements
4201-from allmydata.interfaces import RIStorageServer, IStatsProducer
4202-from allmydata.util import fileutil, idlib, log, time_format
4203+from allmydata.interfaces import RIStorageServer, IStatsProducer, IStorageBackend
4204+from allmydata.util.assertutil import precondition
4205+from allmydata.util import idlib, log
4206 import allmydata # for __full_version__
4207 
4208hunk ./src/allmydata/storage/server.py 12
4209-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
4210-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
4211+from allmydata.storage.common import si_a2b, si_b2a
4212+[si_a2b]  # hush pyflakes
4213 from allmydata.storage.lease import LeaseInfo
4214hunk ./src/allmydata/storage/server.py 15
4215-from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
4216-     create_mutable_sharefile
4217-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
4218-from allmydata.storage.crawler import BucketCountingCrawler
4219 from allmydata.storage.expirer import LeaseCheckingCrawler
4220hunk ./src/allmydata/storage/server.py 16
4221-
4222-# storage/
4223-# storage/shares/incoming
4224-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
4225-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
4226-# storage/shares/$START/$STORAGEINDEX
4227-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
4228-
4229-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
4230-# base-32 chars).
4231-
4232-# $SHARENUM matches this regex:
4233-NUM_RE=re.compile("^[0-9]+$")
4234-
4235+from allmydata.storage.crawler import BucketCountingCrawler
4236 
4237 
4238 class StorageServer(service.MultiService, Referenceable):
4239hunk ./src/allmydata/storage/server.py 21
4240     implements(RIStorageServer, IStatsProducer)
4241+
4242     name = 'storage'
4243     LeaseCheckerClass = LeaseCheckingCrawler
4244hunk ./src/allmydata/storage/server.py 24
4245+    DEFAULT_EXPIRATION_POLICY = {
4246+        'enabled': False,
4247+        'mode': 'age',
4248+        'override_lease_duration': None,
4249+        'cutoff_date': None,
4250+        'sharetypes': ('mutable', 'immutable'),
4251+    }
4252 
4253hunk ./src/allmydata/storage/server.py 32
4254-    def __init__(self, storedir, nodeid, reserved_space=0,
4255-                 discard_storage=False, readonly_storage=False,
4256+    def __init__(self, serverid, backend, statedir,
4257                  stats_provider=None,
4258hunk ./src/allmydata/storage/server.py 34
4259-                 expiration_enabled=False,
4260-                 expiration_mode="age",
4261-                 expiration_override_lease_duration=None,
4262-                 expiration_cutoff_date=None,
4263-                 expiration_sharetypes=("mutable", "immutable")):
4264+                 expiration_policy=None):
4265         service.MultiService.__init__(self)
4266hunk ./src/allmydata/storage/server.py 36
4267-        assert isinstance(nodeid, str)
4268-        assert len(nodeid) == 20
4269-        self.my_nodeid = nodeid
4270-        self.storedir = storedir
4271-        sharedir = os.path.join(storedir, "shares")
4272-        fileutil.make_dirs(sharedir)
4273-        self.sharedir = sharedir
4274-        # we don't actually create the corruption-advisory dir until necessary
4275-        self.corruption_advisory_dir = os.path.join(storedir,
4276-                                                    "corruption-advisories")
4277-        self.reserved_space = int(reserved_space)
4278-        self.no_storage = discard_storage
4279-        self.readonly_storage = readonly_storage
4280+        precondition(IStorageBackend.providedBy(backend), backend)
4281+        precondition(isinstance(serverid, str), serverid)
4282+        precondition(len(serverid) == 20, serverid)
4283+
4284+        self._serverid = serverid
4285         self.stats_provider = stats_provider
4286         if self.stats_provider:
4287             self.stats_provider.register_producer(self)
4288hunk ./src/allmydata/storage/server.py 44
4289-        self.incomingdir = os.path.join(sharedir, 'incoming')
4290-        self._clean_incomplete()
4291-        fileutil.make_dirs(self.incomingdir)
4292         self._active_writers = weakref.WeakKeyDictionary()
4293hunk ./src/allmydata/storage/server.py 45
4294+        self.backend = backend
4295+        self.backend.setServiceParent(self)
4296+        self._statedir = statedir
4297         log.msg("StorageServer created", facility="tahoe.storage")
4298 
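With this change a server is wired up from a backend plus a state directory rather than a bare storedir. A sketch using the disk backend (the path and serverid are illustrative; see also make_server() in test/no_network.py below):

    from twisted.python.filepath import FilePath
    from allmydata.storage.backends.disk.disk_backend import DiskBackend
    from allmydata.storage.server import StorageServer

    storagedir = FilePath("/var/tahoe/storage")          # example path
    backend = DiskBackend(storagedir, readonly=False)
    server = StorageServer("x"*20, backend, storagedir)  # serverid: 20-byte node id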
4299hunk ./src/allmydata/storage/server.py 50
4300-        if reserved_space:
4301-            if self.get_available_space() is None:
4302-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
4303-                        umin="0wZ27w", level=log.UNUSUAL)
4304-
4305         self.latencies = {"allocate": [], # immutable
4306                           "write": [],
4307                           "close": [],
4308hunk ./src/allmydata/storage/server.py 61
4309                           "renew": [],
4310                           "cancel": [],
4311                           }
4312-        self.add_bucket_counter()
4313-
4314-        statefile = os.path.join(self.storedir, "lease_checker.state")
4315-        historyfile = os.path.join(self.storedir, "lease_checker.history")
4316-        klass = self.LeaseCheckerClass
4317-        self.lease_checker = klass(self, statefile, historyfile,
4318-                                   expiration_enabled, expiration_mode,
4319-                                   expiration_override_lease_duration,
4320-                                   expiration_cutoff_date,
4321-                                   expiration_sharetypes)
4322-        self.lease_checker.setServiceParent(self)
4323+        self._setup_bucket_counter()
4324+        self._setup_lease_checker(expiration_policy or self.DEFAULT_EXPIRATION_POLICY)
4325 
4326     def __repr__(self):
4327hunk ./src/allmydata/storage/server.py 65
4328-        return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
4329+        return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self._serverid),)
4330 
4331hunk ./src/allmydata/storage/server.py 67
4332-    def add_bucket_counter(self):
4333-        statefile = os.path.join(self.storedir, "bucket_counter.state")
4334-        self.bucket_counter = BucketCountingCrawler(self, statefile)
4335+    def _setup_bucket_counter(self):
4336+        statefp = self._statedir.child("bucket_counter.state")
4337+        self.bucket_counter = BucketCountingCrawler(self.backend, statefp)
4338         self.bucket_counter.setServiceParent(self)
4339 
4340hunk ./src/allmydata/storage/server.py 72
4341+    def _setup_lease_checker(self, expiration_policy):
4342+        statefp = self._statedir.child("lease_checker.state")
4343+        historyfp = self._statedir.child("lease_checker.history")
4344+        self.lease_checker = self.LeaseCheckerClass(self.backend, statefp, historyfp, expiration_policy)
4345+        self.lease_checker.setServiceParent(self)
4346+
4347     def count(self, name, delta=1):
4348         if self.stats_provider:
4349             self.stats_provider.count("storage_server." + name, delta)
4350hunk ./src/allmydata/storage/server.py 92
4351         """Return a dict, indexed by category, that contains a dict of
4352         latency numbers for each category. If there are sufficient samples
4353         for unambiguous interpretation, each dict will contain the
4354-        following keys: mean, 01_0_percentile, 10_0_percentile,
4355+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
4356         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
4357         99_0_percentile, 99_9_percentile.  If there are insufficient
4358         samples for a given percentile to be interpreted unambiguously
4359hunk ./src/allmydata/storage/server.py 114
4360             else:
4361                 stats["mean"] = None
4362 
4363-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
4364-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
4365-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
4366+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
4367+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
4368+                             (0.01, "01_0_percentile", 100), (0.99, "99_0_percentile", 100),\
4369                              (0.999, "99_9_percentile", 1000)]
4370 
4371             for percentile, percentilestring, minnumtoobserve in orderstatlist:
4372hunk ./src/allmydata/storage/server.py 133
4373             kwargs["facility"] = "tahoe.storage"
4374         return log.msg(*args, **kwargs)
4375 
4376-    def _clean_incomplete(self):
4377-        fileutil.rm_dir(self.incomingdir)
4378+    def get_serverid(self):
4379+        return self._serverid
4380 
4381     def get_stats(self):
4382         # remember: RIStatsProvider requires that our return dict
4383hunk ./src/allmydata/storage/server.py 138
4384-        # contains numeric values.
4385+        # contains numeric or None values.
4386         stats = { 'storage_server.allocated': self.allocated_size(), }
4387hunk ./src/allmydata/storage/server.py 140
4388-        stats['storage_server.reserved_space'] = self.reserved_space
4389         for category,ld in self.get_latencies().items():
4390             for name,v in ld.items():
4391                 stats['storage_server.latencies.%s.%s' % (category, name)] = v
4392hunk ./src/allmydata/storage/server.py 144
4393 
4394-        try:
4395-            disk = fileutil.get_disk_stats(self.sharedir, self.reserved_space)
4396-            writeable = disk['avail'] > 0
4397-
4398-            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
4399-            stats['storage_server.disk_total'] = disk['total']
4400-            stats['storage_server.disk_used'] = disk['used']
4401-            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
4402-            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
4403-            stats['storage_server.disk_avail'] = disk['avail']
4404-        except AttributeError:
4405-            writeable = True
4406-        except EnvironmentError:
4407-            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
4408-            writeable = False
4409-
4410-        if self.readonly_storage:
4411-            stats['storage_server.disk_avail'] = 0
4412-            writeable = False
4413+        self.backend.fill_in_space_stats(stats)
4414 
4415hunk ./src/allmydata/storage/server.py 146
4416-        stats['storage_server.accepting_immutable_shares'] = int(writeable)
4417         s = self.bucket_counter.get_state()
4418         bucket_count = s.get("last-complete-bucket-count")
4419         if bucket_count:
4420hunk ./src/allmydata/storage/server.py 153
4421         return stats
4422 
4423     def get_available_space(self):
4424-        """Returns available space for share storage in bytes, or None if no
4425-        API to get this information is available."""
4426-
4427-        if self.readonly_storage:
4428-            return 0
4429-        return fileutil.get_available_space(self.sharedir, self.reserved_space)
4430+        return self.backend.get_available_space()
4431 
4432     def allocated_size(self):
4433         space = 0
4434hunk ./src/allmydata/storage/server.py 162
4435         return space
4436 
4437     def remote_get_version(self):
4438-        remaining_space = self.get_available_space()
4439+        remaining_space = self.backend.get_available_space()
4440         if remaining_space is None:
4441             # We're on a platform that has no API to get disk stats.
4442             remaining_space = 2**64
4443hunk ./src/allmydata/storage/server.py 178
4444                     }
4445         return version
4446 
4447-    def remote_allocate_buckets(self, storage_index,
4448+    def remote_allocate_buckets(self, storageindex,
4449                                 renew_secret, cancel_secret,
4450                                 sharenums, allocated_size,
4451                                 canary, owner_num=0):
4452hunk ./src/allmydata/storage/server.py 182
4453+        # cancel_secret is no longer used.
4454         # owner_num is not for clients to set, but rather it should be
4455hunk ./src/allmydata/storage/server.py 184
4456-        # curried into the PersonalStorageServer instance that is dedicated
4457-        # to a particular owner.
4458+        # curried into a StorageServer instance dedicated to a particular
4459+        # owner.
4460         start = time.time()
4461         self.count("allocate")
4462hunk ./src/allmydata/storage/server.py 188
4463-        alreadygot = set()
4464         bucketwriters = {} # k: shnum, v: BucketWriter
4465hunk ./src/allmydata/storage/server.py 189
4466-        si_dir = storage_index_to_dir(storage_index)
4467-        si_s = si_b2a(storage_index)
4468 
4469hunk ./src/allmydata/storage/server.py 190
4470+        si_s = si_b2a(storageindex)
4471         log.msg("storage: allocate_buckets %s" % si_s)
4472 
4473hunk ./src/allmydata/storage/server.py 193
4474-        # in this implementation, the lease information (including secrets)
4475-        # goes into the share files themselves. It could also be put into a
4476-        # separate database. Note that the lease should not be added until
4477-        # the BucketWriter has been closed.
4478+        # Note that the lease should not be added until the BucketWriter
4479+        # has been closed.
4480         expire_time = time.time() + 31*24*60*60
4481hunk ./src/allmydata/storage/server.py 196
4482-        lease_info = LeaseInfo(owner_num,
4483-                               renew_secret, cancel_secret,
4484-                               expire_time, self.my_nodeid)
4485+        lease_info = LeaseInfo(owner_num, renew_secret,
4486+                               expiration_time=expire_time, nodeid=self._serverid)
4487 
4488         max_space_per_bucket = allocated_size
4489 
4490hunk ./src/allmydata/storage/server.py 201
4491-        remaining_space = self.get_available_space()
4492+        remaining_space = self.backend.get_available_space()
4493         limited = remaining_space is not None
4494         if limited:
4495hunk ./src/allmydata/storage/server.py 204
4496-            # this is a bit conservative, since some of this allocated_size()
4497-            # has already been written to disk, where it will show up in
4498+            # This is a bit conservative, since some of this allocated_size()
4499+            # has already been written to the backend, where it will show up in
4500             # get_available_space.
4501             remaining_space -= self.allocated_size()
4502hunk ./src/allmydata/storage/server.py 208
4503-        # self.readonly_storage causes remaining_space <= 0
4504+            # If the backend is read-only, remaining_space will be <= 0.
4505+
4506+        shareset = self.backend.get_shareset(storageindex)
4507 
4508hunk ./src/allmydata/storage/server.py 212
4509-        # fill alreadygot with all shares that we have, not just the ones
4510+        # Fill alreadygot with all shares that we have, not just the ones
4511         # they asked about: this will save them a lot of work. Add or update
4512         # leases for all of them: if they want us to hold shares for this
4513hunk ./src/allmydata/storage/server.py 215
4514-        # file, they'll want us to hold leases for this file.
4515-        for (shnum, fn) in self._get_bucket_shares(storage_index):
4516-            alreadygot.add(shnum)
4517-            sf = ShareFile(fn)
4518-            sf.add_or_renew_lease(lease_info)
4519+        # file, they'll want us to hold leases for all the shares of it.
4520+        #
4521+        # XXX should we be making the assumption here that lease info is
4522+        # duplicated in all shares?
4523+        alreadygot = set()
4524+        for share in shareset.get_shares():
4525+            share.add_or_renew_lease(lease_info)
4526+            alreadygot.add(share.shnum)
4527 
4528hunk ./src/allmydata/storage/server.py 224
4529-        for shnum in sharenums:
4530-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
4531-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
4532-            if os.path.exists(finalhome):
4533-                # great! we already have it. easy.
4534-                pass
4535-            elif os.path.exists(incominghome):
4536+        for shnum in sharenums - alreadygot:
4537+            if shareset.has_incoming(shnum):
4538                 # Note that we don't create BucketWriters for shnums that
4539                 # have a partial share (in incoming/), so if a second upload
4540                 # occurs while the first is still in progress, the second
4541hunk ./src/allmydata/storage/server.py 232
4542                 # uploader will use different storage servers.
4543                 pass
4544             elif (not limited) or (remaining_space >= max_space_per_bucket):
4545-                # ok! we need to create the new share file.
4546-                bw = BucketWriter(self, incominghome, finalhome,
4547-                                  max_space_per_bucket, lease_info, canary)
4548-                if self.no_storage:
4549-                    bw.throw_out_all_data = True
4550+                bw = shareset.make_bucket_writer(self, shnum, max_space_per_bucket,
4551+                                                 lease_info, canary)
4552                 bucketwriters[shnum] = bw
4553                 self._active_writers[bw] = 1
4554                 if limited:
4555hunk ./src/allmydata/storage/server.py 239
4556                     remaining_space -= max_space_per_bucket
4557             else:
4558-                # bummer! not enough space to accept this bucket
4559+                # Bummer! Not enough space to accept this share.
4560                 pass
4561 
4562hunk ./src/allmydata/storage/server.py 242
4563-        if bucketwriters:
4564-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
4565-
4566         self.add_latency("allocate", time.time() - start)
4567         return alreadygot, bucketwriters
4568 
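The return value keeps its pre-refactoring shape: a set of share numbers the server already holds, plus a dict of writers for the shares it agreed to accept. Roughly, with illustrative values:

    alreadygot, bucketwriters = ss.remote_allocate_buckets(
        storageindex, renew_secret, cancel_secret,
        sharenums=set([0, 1, 2]), allocated_size=1000, canary=canary)
    # alreadygot:    e.g. set([1]) -- shares already stored for this SI
    # bucketwriters: e.g. {0: <BucketWriter>, 2: <BucketWriter>}
    # a shnum with a partial upload in incoming/ appears in neither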
4569hunk ./src/allmydata/storage/server.py 245
4570-    def _iter_share_files(self, storage_index):
4571-        for shnum, filename in self._get_bucket_shares(storage_index):
4572-            f = open(filename, 'rb')
4573-            header = f.read(32)
4574-            f.close()
4575-            if header[:32] == MutableShareFile.MAGIC:
4576-                sf = MutableShareFile(filename, self)
4577-                # note: if the share has been migrated, the renew_lease()
4578-                # call will throw an exception, with information to help the
4579-                # client update the lease.
4580-            elif header[:4] == struct.pack(">L", 1):
4581-                sf = ShareFile(filename)
4582-            else:
4583-                continue # non-sharefile
4584-            yield sf
4585-
4586-    def remote_add_lease(self, storage_index, renew_secret, cancel_secret,
4587+    def remote_add_lease(self, storageindex, renew_secret, cancel_secret,
4588                          owner_num=1):
4589hunk ./src/allmydata/storage/server.py 247
4590+        # cancel_secret is no longer used.
4591         start = time.time()
4592         self.count("add-lease")
4593         new_expire_time = time.time() + 31*24*60*60
4594hunk ./src/allmydata/storage/server.py 251
4595-        lease_info = LeaseInfo(owner_num,
4596-                               renew_secret, cancel_secret,
4597-                               new_expire_time, self.my_nodeid)
4598-        for sf in self._iter_share_files(storage_index):
4599-            sf.add_or_renew_lease(lease_info)
4600-        self.add_latency("add-lease", time.time() - start)
4601-        return None
4602+        lease_info = LeaseInfo(owner_num, renew_secret,
4603+                               expiration_time=new_expire_time, nodeid=self._serverid)
4604 
4605hunk ./src/allmydata/storage/server.py 254
4606-    def remote_renew_lease(self, storage_index, renew_secret):
4607+        try:
4608+            self.backend.add_or_renew_lease(lease_info)
4609+        finally:
4610+            self.add_latency("add-lease", time.time() - start)
4611+
4612+    def remote_renew_lease(self, storageindex, renew_secret):
4613         start = time.time()
4614         self.count("renew")
4615hunk ./src/allmydata/storage/server.py 262
4616-        new_expire_time = time.time() + 31*24*60*60
4617-        found_buckets = False
4618-        for sf in self._iter_share_files(storage_index):
4619-            found_buckets = True
4620-            sf.renew_lease(renew_secret, new_expire_time)
4621-        self.add_latency("renew", time.time() - start)
4622-        if not found_buckets:
4623-            raise IndexError("no such lease to renew")
4624+
4625+        try:
4626+            shareset = self.backend.get_shareset(storageindex)
4627+            new_expiration_time = start + 31*24*60*60   # one month from now
4628+            shareset.renew_lease(renew_secret, new_expiration_time)
4629+        finally:
4630+            self.add_latency("renew", time.time() - start)
4631 
4632     def bucket_writer_closed(self, bw, consumed_size):
4633         if self.stats_provider:
4634hunk ./src/allmydata/storage/server.py 275
4635             self.stats_provider.count('storage_server.bytes_added', consumed_size)
4636         del self._active_writers[bw]
4637 
4638-    def _get_bucket_shares(self, storage_index):
4639-        """Return a list of (shnum, pathname) tuples for files that hold
4640-        shares for this storage_index. In each tuple, 'shnum' will always be
4641-        the integer form of the last component of 'pathname'."""
4642-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4643-        try:
4644-            for f in os.listdir(storagedir):
4645-                if NUM_RE.match(f):
4646-                    filename = os.path.join(storagedir, f)
4647-                    yield (int(f), filename)
4648-        except OSError:
4649-            # Commonly caused by there being no buckets at all.
4650-            pass
4651-
4652-    def remote_get_buckets(self, storage_index):
4653+    def remote_get_buckets(self, storageindex):
4654         start = time.time()
4655         self.count("get")
4656hunk ./src/allmydata/storage/server.py 278
4657-        si_s = si_b2a(storage_index)
4658+        si_s = si_b2a(storageindex)
4659         log.msg("storage: get_buckets %s" % si_s)
4660         bucketreaders = {} # k: sharenum, v: BucketReader
4661hunk ./src/allmydata/storage/server.py 281
4662-        for shnum, filename in self._get_bucket_shares(storage_index):
4663-            bucketreaders[shnum] = BucketReader(self, filename,
4664-                                                storage_index, shnum)
4665-        self.add_latency("get", time.time() - start)
4666-        return bucketreaders
4667 
4668hunk ./src/allmydata/storage/server.py 282
4669-    def get_leases(self, storage_index):
4670-        """Provide an iterator that yields all of the leases attached to this
4671-        bucket. Each lease is returned as a LeaseInfo instance.
4672+        try:
4673+            shareset = self.backend.get_shareset(storageindex)
4674+            for share in shareset.get_shares():
4675+                bucketreaders[share.get_shnum()] = shareset.make_bucket_reader(self, share)
4676+            return bucketreaders
4677+        finally:
4678+            self.add_latency("get", time.time() - start)
4679 
4680hunk ./src/allmydata/storage/server.py 290
4681-        This method is not for client use.
4682+    def get_leases(self, storageindex):
4683         """
4684hunk ./src/allmydata/storage/server.py 292
4685+        Provide an iterator that yields all of the leases attached to this
4686+        shareset. Each lease is returned as a LeaseInfo instance.
4687 
4688hunk ./src/allmydata/storage/server.py 295
4689-        # since all shares get the same lease data, we just grab the leases
4690-        # from the first share
4691-        try:
4692-            shnum, filename = self._get_bucket_shares(storage_index).next()
4693-            sf = ShareFile(filename)
4694-            return sf.get_leases()
4695-        except StopIteration:
4696-            return iter([])
4697+        This method is not for client use. XXX do we need it at all?
4698+        """
4699+        return self.backend.get_shareset(storageindex).get_leases()
4700 
4701hunk ./src/allmydata/storage/server.py 299
4702-    def remote_slot_testv_and_readv_and_writev(self, storage_index,
4703+    def remote_slot_testv_and_readv_and_writev(self, storageindex,
4704                                                secrets,
4705                                                test_and_write_vectors,
4706                                                read_vector):
4707hunk ./src/allmydata/storage/server.py 305
4708         start = time.time()
4709         self.count("writev")
4710-        si_s = si_b2a(storage_index)
4711+        si_s = si_b2a(storageindex)
4712         log.msg("storage: slot_writev %s" % si_s)
4713hunk ./src/allmydata/storage/server.py 307
4714-        si_dir = storage_index_to_dir(storage_index)
4715-        (write_enabler, renew_secret, cancel_secret) = secrets
4716-        # shares exist if there is a file for them
4717-        bucketdir = os.path.join(self.sharedir, si_dir)
4718-        shares = {}
4719-        if os.path.isdir(bucketdir):
4720-            for sharenum_s in os.listdir(bucketdir):
4721-                try:
4722-                    sharenum = int(sharenum_s)
4723-                except ValueError:
4724-                    continue
4725-                filename = os.path.join(bucketdir, sharenum_s)
4726-                msf = MutableShareFile(filename, self)
4727-                msf.check_write_enabler(write_enabler, si_s)
4728-                shares[sharenum] = msf
4729-        # write_enabler is good for all existing shares.
4730-
4731-        # Now evaluate test vectors.
4732-        testv_is_good = True
4733-        for sharenum in test_and_write_vectors:
4734-            (testv, datav, new_length) = test_and_write_vectors[sharenum]
4735-            if sharenum in shares:
4736-                if not shares[sharenum].check_testv(testv):
4737-                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
4738-                    testv_is_good = False
4739-                    break
4740-            else:
4741-                # compare the vectors against an empty share, in which all
4742-                # reads return empty strings.
4743-                if not EmptyShare().check_testv(testv):
4744-                    self.log("testv failed (empty): [%d] %r" % (sharenum,
4745-                                                                testv))
4746-                    testv_is_good = False
4747-                    break
4748-
4749-        # now gather the read vectors, before we do any writes
4750-        read_data = {}
4751-        for sharenum, share in shares.items():
4752-            read_data[sharenum] = share.readv(read_vector)
4753-
4754-        ownerid = 1 # TODO
4755-        expire_time = time.time() + 31*24*60*60   # one month
4756-        lease_info = LeaseInfo(ownerid,
4757-                               renew_secret, cancel_secret,
4758-                               expire_time, self.my_nodeid)
4759-
4760-        if testv_is_good:
4761-            # now apply the write vectors
4762-            for sharenum in test_and_write_vectors:
4763-                (testv, datav, new_length) = test_and_write_vectors[sharenum]
4764-                if new_length == 0:
4765-                    if sharenum in shares:
4766-                        shares[sharenum].unlink()
4767-                else:
4768-                    if sharenum not in shares:
4769-                        # allocate a new share
4770-                        allocated_size = 2000 # arbitrary, really
4771-                        share = self._allocate_slot_share(bucketdir, secrets,
4772-                                                          sharenum,
4773-                                                          allocated_size,
4774-                                                          owner_num=0)
4775-                        shares[sharenum] = share
4776-                    shares[sharenum].writev(datav, new_length)
4777-                    # and update the lease
4778-                    shares[sharenum].add_or_renew_lease(lease_info)
4779-
4780-            if new_length == 0:
4781-                # delete empty bucket directories
4782-                if not os.listdir(bucketdir):
4783-                    os.rmdir(bucketdir)
4784 
4785hunk ./src/allmydata/storage/server.py 308
4786+        try:
4787+            shareset = self.backend.get_shareset(storageindex)
4788+            expiration_time = start + 31*24*60*60   # one month from now
4789+            return shareset.testv_and_readv_and_writev(self, secrets, test_and_write_vectors,
4790+                                                       read_vector, expiration_time)
4791+        finally:
4792+            self.add_latency("writev", time.time() - start)
4793 
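The vector arguments delegated here keep their existing shape: test_and_write_vectors maps a share number to (testv, datav, new_length), where each test is (offset, length, operator, specimen) and each write is (offset, data); read_vector is a list of (offset, length) pairs. For example (illustrative values):

    test_and_write_vectors = {
        0: ([(0, 4, "eq", "abcd")],   # test: bytes 0..3 must equal "abcd"
            [(0, "wxyz")],            # write: replace bytes 0..3 with "wxyz"
            None),                    # new_length: None leaves the size alone
    }
    read_vector = [(0, 4)]            # read back bytes 0..3 of each share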
4794hunk ./src/allmydata/storage/server.py 316
4795-        # all done
4796-        self.add_latency("writev", time.time() - start)
4797-        return (testv_is_good, read_data)
4798-
4799-    def _allocate_slot_share(self, bucketdir, secrets, sharenum,
4800-                             allocated_size, owner_num=0):
4801-        (write_enabler, renew_secret, cancel_secret) = secrets
4802-        my_nodeid = self.my_nodeid
4803-        fileutil.make_dirs(bucketdir)
4804-        filename = os.path.join(bucketdir, "%d" % sharenum)
4805-        share = create_mutable_sharefile(filename, my_nodeid, write_enabler,
4806-                                         self)
4807-        return share
4808-
4809-    def remote_slot_readv(self, storage_index, shares, readv):
4810+    def remote_slot_readv(self, storageindex, shares, readv):
4811         start = time.time()
4812         self.count("readv")
4813hunk ./src/allmydata/storage/server.py 319
4814-        si_s = si_b2a(storage_index)
4815-        lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
4816-                     facility="tahoe.storage", level=log.OPERATIONAL)
4817-        si_dir = storage_index_to_dir(storage_index)
4818-        # shares exist if there is a file for them
4819-        bucketdir = os.path.join(self.sharedir, si_dir)
4820-        if not os.path.isdir(bucketdir):
4821+        si_s = si_b2a(storageindex)
4822+        log.msg("storage: slot_readv %s %s" % (si_s, shares),
4823+                facility="tahoe.storage", level=log.OPERATIONAL)
4824+
4825+        try:
4826+            shareset = self.backend.get_shareset(storageindex)
4827+            return shareset.readv(self, shares, readv)
4828+        finally:
4829             self.add_latency("readv", time.time() - start)
4830hunk ./src/allmydata/storage/server.py 328
4831-            return {}
4832-        datavs = {}
4833-        for sharenum_s in os.listdir(bucketdir):
4834-            try:
4835-                sharenum = int(sharenum_s)
4836-            except ValueError:
4837-                continue
4838-            if sharenum in shares or not shares:
4839-                filename = os.path.join(bucketdir, sharenum_s)
4840-                msf = MutableShareFile(filename, self)
4841-                datavs[sharenum] = msf.readv(readv)
4842-        log.msg("returning shares %s" % (datavs.keys(),),
4843-                facility="tahoe.storage", level=log.NOISY, parent=lp)
4844-        self.add_latency("readv", time.time() - start)
4845-        return datavs
4846 
4847hunk ./src/allmydata/storage/server.py 329
4848-    def remote_advise_corrupt_share(self, share_type, storage_index, shnum,
4849-                                    reason):
4850-        fileutil.make_dirs(self.corruption_advisory_dir)
4851-        now = time_format.iso_utc(sep="T")
4852-        si_s = si_b2a(storage_index)
4853-        # windows can't handle colons in the filename
4854-        fn = os.path.join(self.corruption_advisory_dir,
4855-                          "%s--%s-%d" % (now, si_s, shnum)).replace(":","")
4856-        f = open(fn, "w")
4857-        f.write("report: Share Corruption\n")
4858-        f.write("type: %s\n" % share_type)
4859-        f.write("storage_index: %s\n" % si_s)
4860-        f.write("share_number: %d\n" % shnum)
4861-        f.write("\n")
4862-        f.write(reason)
4863-        f.write("\n")
4864-        f.close()
4865-        log.msg(format=("client claims corruption in (%(share_type)s) " +
4866-                        "%(si)s-%(shnum)d: %(reason)s"),
4867-                share_type=share_type, si=si_s, shnum=shnum, reason=reason,
4868-                level=log.SCARY, umid="SGx2fA")
4869-        return None
4870+    def remote_advise_corrupt_share(self, share_type, storageindex, shnum, reason):
4871+        self.backend.advise_corrupt_share(share_type, storageindex, shnum, reason)
4872hunk ./src/allmydata/test/common.py 20
4873 from allmydata.mutable.common import CorruptShareError
4874 from allmydata.mutable.layout import unpack_header
4875 from allmydata.mutable.publish import MutableData
4876-from allmydata.storage.mutable import MutableShareFile
4877+from allmydata.storage.backends.disk.mutable import MutableDiskShare
4878 from allmydata.util import hashutil, log, fileutil, pollmixin
4879 from allmydata.util.assertutil import precondition
4880 from allmydata.util.consumer import download_to_data
4881hunk ./src/allmydata/test/common.py 1297
4882 
4883 def _corrupt_mutable_share_data(data, debug=False):
4884     prefix = data[:32]
4885-    assert prefix == MutableShareFile.MAGIC, "This function is designed to corrupt mutable shares of v1, and the magic number doesn't look right: %r vs %r" % (prefix, MutableShareFile.MAGIC)
4886-    data_offset = MutableShareFile.DATA_OFFSET
4887+    assert prefix == MutableDiskShare.MAGIC, "This function is designed to corrupt mutable shares of v1, and the magic number doesn't look right: %r vs %r" % (prefix, MutableDiskShare.MAGIC)
4888+    data_offset = MutableDiskShare.DATA_OFFSET
4889     sharetype = data[data_offset:data_offset+1]
4890     assert sharetype == "\x00", "non-SDMF mutable shares not supported"
4891     (version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize,
4892hunk ./src/allmydata/test/no_network.py 21
4893 from twisted.application import service
4894 from twisted.internet import defer, reactor
4895 from twisted.python.failure import Failure
4896+from twisted.python.filepath import FilePath
4897 from foolscap.api import Referenceable, fireEventually, RemoteException
4898 from base64 import b32encode
4899hunk ./src/allmydata/test/no_network.py 24
4900+
4901 from allmydata import uri as tahoe_uri
4902 from allmydata.client import Client
4903hunk ./src/allmydata/test/no_network.py 27
4904-from allmydata.storage.server import StorageServer, storage_index_to_dir
4905+from allmydata.storage.server import StorageServer
4906+from allmydata.storage.backends.disk.disk_backend import DiskBackend
4907 from allmydata.util import fileutil, idlib, hashutil
4908 from allmydata.util.hashutil import sha1
4909 from allmydata.test.common_web import HTTPClientGETFactory
4910hunk ./src/allmydata/test/no_network.py 155
4911             seed = server.get_permutation_seed()
4912             return sha1(peer_selection_index + seed).digest()
4913         return sorted(self.get_connected_servers(), key=_permuted)
4914+
4915     def get_connected_servers(self):
4916         return self.client._servers
4917hunk ./src/allmydata/test/no_network.py 158
4918+
4919     def get_nickname_for_serverid(self, serverid):
4920         return None
4921 
4922hunk ./src/allmydata/test/no_network.py 162
4923+    def get_known_servers(self):
4924+        return self.get_connected_servers()
4925+
4926+    def get_all_serverids(self):
4927+        return self.client.get_all_serverids()
4928+
4929+
4930 class NoNetworkClient(Client):
4931     def create_tub(self):
4932         pass
4933hunk ./src/allmydata/test/no_network.py 262
4934 
4935     def make_server(self, i, readonly=False):
4936         serverid = hashutil.tagged_hash("serverid", str(i))[:20]
4937-        serverdir = os.path.join(self.basedir, "servers",
4938-                                 idlib.shortnodeid_b2a(serverid), "storage")
4939-        fileutil.make_dirs(serverdir)
4940-        ss = StorageServer(serverdir, serverid, stats_provider=SimpleStats(),
4941-                           readonly_storage=readonly)
4942+        storagedir = FilePath(self.basedir).child("servers").child(idlib.shortnodeid_b2a(serverid)).child("storage")
4943+
4944+        # The backend will make the storage directory and any necessary parents.
4945+        backend = DiskBackend(storagedir, readonly=readonly)
4946+        ss = StorageServer(serverid, backend, storagedir, stats_provider=SimpleStats())
4947         ss._no_network_server_number = i
4948         return ss
4949 
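
The make_server hunk above captures the new construction order: build the
backend first, then hand it to StorageServer along with the serverid and the
storage directory. Reduced to a standalone sketch (the path is a placeholder
and the stats_provider argument is omitted):

    from twisted.python.filepath import FilePath
    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    storagedir = FilePath("/tmp/example-node").child("storage")  # placeholder
    backend = DiskBackend(storagedir, readonly=False)
    # The backend makes the storage directory and any necessary parents.
    ss = StorageServer("\x00" * 20, backend, storagedir)
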
4950hunk ./src/allmydata/test/no_network.py 276
4951         middleman = service.MultiService()
4952         middleman.setServiceParent(self)
4953         ss.setServiceParent(middleman)
4954-        serverid = ss.my_nodeid
4955+        serverid = ss.get_serverid()
4956         self.servers_by_number[i] = ss
4957         wrapper = wrap_storage_server(ss)
4958         self.wrappers_by_id[serverid] = wrapper
4959hunk ./src/allmydata/test/no_network.py 295
4960         # it's enough to remove the server from c._servers (we don't actually
4961         # have to detach and stopService it)
4962         for i,ss in self.servers_by_number.items():
4963-            if ss.my_nodeid == serverid:
4964+            if ss.get_serverid() == serverid:
4965                 del self.servers_by_number[i]
4966                 break
4967         del self.wrappers_by_id[serverid]
4968hunk ./src/allmydata/test/no_network.py 345
4969     def get_clientdir(self, i=0):
4970         return self.g.clients[i].basedir
4971 
4972+    def get_server(self, i):
4973+        return self.g.servers_by_number[i]
4974+
4975     def get_serverdir(self, i):
4976hunk ./src/allmydata/test/no_network.py 349
4977-        return self.g.servers_by_number[i].storedir
4978+        return self.g.servers_by_number[i].backend.storedir
4979+
4980+    def remove_server(self, i):
4981+        self.g.remove_server(self.g.servers_by_number[i].get_serverid())
4982 
4983     def iterate_servers(self):
4984         for i in sorted(self.g.servers_by_number.keys()):
4985hunk ./src/allmydata/test/no_network.py 357
4986             ss = self.g.servers_by_number[i]
4987-            yield (i, ss, ss.storedir)
4988+            yield (i, ss, ss.backend.storedir)
4989 
4990     def find_uri_shares(self, uri):
4991         si = tahoe_uri.from_string(uri).get_storage_index()
4992hunk ./src/allmydata/test/no_network.py 361
4993-        prefixdir = storage_index_to_dir(si)
4994         shares = []
4995         for i,ss in self.g.servers_by_number.items():
4996hunk ./src/allmydata/test/no_network.py 363
4997-            serverid = ss.my_nodeid
4998-            basedir = os.path.join(ss.sharedir, prefixdir)
4999-            if not os.path.exists(basedir):
5000-                continue
5001-            for f in os.listdir(basedir):
5002-                try:
5003-                    shnum = int(f)
5004-                    shares.append((shnum, serverid, os.path.join(basedir, f)))
5005-                except ValueError:
5006-                    pass
5007+            for share in ss.backend.get_shareset(si).get_shares():
5008+                shares.append((share.get_shnum(), ss.get_serverid(), share._home))
5009         return sorted(shares)
5010 
5011hunk ./src/allmydata/test/no_network.py 367
5012+    def count_leases(self, uri):
5013+        """Return (sharefile path, lease count) pairs, in arbitrary order."""
5014+        si = tahoe_uri.from_string(uri).get_storage_index()
5015+        lease_counts = []
5016+        for i,ss in self.g.servers_by_number.items():
5017+            for share in ss.backend.get_shareset(si).get_shares():
5018+                num_leases = len(list(share.get_leases()))
5019+                lease_counts.append( (share._home.path, num_leases) )
5020+        return lease_counts
5021+
5022     def copy_shares(self, uri):
5023         shares = {}
5024hunk ./src/allmydata/test/no_network.py 379
5025-        for (shnum, serverid, sharefile) in self.find_uri_shares(uri):
5026-            shares[sharefile] = open(sharefile, "rb").read()
5027+        for (shnum, serverid, sharefp) in self.find_uri_shares(uri):
5028+            shares[sharefp.path] = sharefp.getContent()
5029         return shares
5030 
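
find_uri_shares, count_leases, and copy_shares above all use the same idiom
now: ask the backend for the shareset of a storage index and enumerate its
share objects, instead of walking storage_index_to_dir paths. Condensed into
one illustrative helper, using only the accessors visible in these hunks
(get_shareset, get_shares, get_shnum, get_leases, and the share's _home
FilePath):

    def shares_on_grid(grid, si):
        # Illustrative only: yield (server index, shnum, share FilePath).
        for i, ss in sorted(grid.servers_by_number.items()):
            for share in ss.backend.get_shareset(si).get_shares():
                yield (i, share.get_shnum(), share._home)
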
5031hunk ./src/allmydata/test/no_network.py 383
5032+    def copy_share(self, from_share, uri, to_server):
5033+        si = tahoe_uri.from_string(uri).get_storage_index()
5034+        (i_shnum, i_serverid, i_sharefp) = from_share
5035+        shares_dir = to_server.backend.get_shareset(si)._sharehomedir
5036+        i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
5037+
5038     def restore_all_shares(self, shares):
5039hunk ./src/allmydata/test/no_network.py 390
5040-        for sharefile, data in shares.items():
5041-            open(sharefile, "wb").write(data)
5042+        for sharepath, data in shares.items():
5043+            FilePath(sharepath).setContent(data)
5044 
5045hunk ./src/allmydata/test/no_network.py 393
5046-    def delete_share(self, (shnum, serverid, sharefile)):
5047-        os.unlink(sharefile)
5048+    def delete_share(self, (shnum, serverid, sharefp)):
5049+        sharefp.remove()
5050 
5051     def delete_shares_numbered(self, uri, shnums):
5052hunk ./src/allmydata/test/no_network.py 397
5053-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5054+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5055             if i_shnum in shnums:
5056hunk ./src/allmydata/test/no_network.py 399
5057-                os.unlink(i_sharefile)
5058+                i_sharefp.remove()
5059 
5060hunk ./src/allmydata/test/no_network.py 401
5061-    def corrupt_share(self, (shnum, serverid, sharefile), corruptor_function):
5062-        sharedata = open(sharefile, "rb").read()
5063-        corruptdata = corruptor_function(sharedata)
5064-        open(sharefile, "wb").write(corruptdata)
5065+    def corrupt_share(self, (shnum, serverid, sharefp), corruptor_function, debug=False):
5066+        sharedata = sharefp.getContent()
5067+        corruptdata = corruptor_function(sharedata, debug=debug)
5068+        sharefp.setContent(corruptdata)
5069 
5070     def corrupt_shares_numbered(self, uri, shnums, corruptor, debug=False):
5071hunk ./src/allmydata/test/no_network.py 407
5072-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5073+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5074             if i_shnum in shnums:
5075hunk ./src/allmydata/test/no_network.py 409
5076-                sharedata = open(i_sharefile, "rb").read()
5077-                corruptdata = corruptor(sharedata, debug=debug)
5078-                open(i_sharefile, "wb").write(corruptdata)
5079+                self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug)
5080 
5081     def corrupt_all_shares(self, uri, corruptor, debug=False):
5082hunk ./src/allmydata/test/no_network.py 412
5083-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5084-            sharedata = open(i_sharefile, "rb").read()
5085-            corruptdata = corruptor(sharedata, debug=debug)
5086-            open(i_sharefile, "wb").write(corruptdata)
5087+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5088+            self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug)
5089 
5090     def GET(self, urlpath, followRedirect=False, return_response=False,
5091             method="GET", clientnum=0, **kwargs):
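
The delete and corrupt helpers above now manipulate shares through Twisted
FilePath objects rather than path strings; only stock FilePath methods are
relied on. A tiny usage sketch with placeholder path and data:

    from twisted.python.filepath import FilePath

    fp = FilePath("/tmp/share-0")               # placeholder
    fp.setContent("original share data")        # write, as restore does
    data = fp.getContent()                      # read, as copy/corrupt do
    fp.setContent(data[:10] + "X" + data[11:])  # flip a byte, like a corruptor
    fp.remove()                                 # delete, as delete_share does
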
5092hunk ./src/allmydata/test/test_download.py 6
5093 # a previous run. This asserts that the current code is capable of decoding
5094 # shares from a previous version.
5095 
5096-import os
5097 from twisted.trial import unittest
5098 from twisted.internet import defer, reactor
5099 from allmydata import uri
5100hunk ./src/allmydata/test/test_download.py 9
5101-from allmydata.storage.server import storage_index_to_dir
5102 from allmydata.util import base32, fileutil, spans, log, hashutil
5103 from allmydata.util.consumer import download_to_data, MemoryConsumer
5104 from allmydata.immutable import upload, layout
5105hunk ./src/allmydata/test/test_download.py 85
5106         u = upload.Data(plaintext, None)
5107         d = self.c0.upload(u)
5108         f = open("stored_shares.py", "w")
5109-        def _created_immutable(ur):
5110-            # write the generated shares and URI to a file, which can then be
5111-            # incorporated into this one next time.
5112-            f.write('immutable_uri = "%s"\n' % ur.uri)
5113-            f.write('immutable_shares = {\n')
5114-            si = uri.from_string(ur.uri).get_storage_index()
5115-            si_dir = storage_index_to_dir(si)
5116+
5117+        def _write_py(u):
5118+            si = uri.from_string(u).get_storage_index()
5119             for (i,ss,ssdir) in self.iterate_servers():
5120hunk ./src/allmydata/test/test_download.py 89
5121-                sharedir = os.path.join(ssdir, "shares", si_dir)
5122                 shares = {}
5123hunk ./src/allmydata/test/test_download.py 90
5124-                for fn in os.listdir(sharedir):
5125-                    shnum = int(fn)
5126-                    sharedata = open(os.path.join(sharedir, fn), "rb").read()
5127-                    shares[shnum] = sharedata
5128-                fileutil.rm_dir(sharedir)
5129+                shareset = ss.backend.get_shareset(si)
5130+                for share in shareset.get_shares():
5131+                    sharedata = share._home.getContent()
5132+                    shares[share.get_shnum()] = sharedata
5133+
5134+                fileutil.fp_remove(shareset._sharehomedir)
5135                 if shares:
5136                     f.write(' %d: { # client[%d]\n' % (i, i))
5137                     for shnum in sorted(shares.keys()):
5138hunk ./src/allmydata/test/test_download.py 103
5139                                 (shnum, base32.b2a(shares[shnum])))
5140                     f.write('    },\n')
5141             f.write('}\n')
5142-            f.write('\n')
5143 
5144hunk ./src/allmydata/test/test_download.py 104
5145+        def _created_immutable(ur):
5146+            # write the generated shares and URI to a file, which can then be
5147+            # incorporated into this one next time.
5148+            f.write('immutable_uri = "%s"\n' % ur.uri)
5149+            f.write('immutable_shares = {\n')
5150+            _write_py(ur.uri)
5151+            f.write('\n')
5152         d.addCallback(_created_immutable)
5153 
5154         d.addCallback(lambda ignored:
5155hunk ./src/allmydata/test/test_download.py 118
5156         def _created_mutable(n):
5157             f.write('mutable_uri = "%s"\n' % n.get_uri())
5158             f.write('mutable_shares = {\n')
5159-            si = uri.from_string(n.get_uri()).get_storage_index()
5160-            si_dir = storage_index_to_dir(si)
5161-            for (i,ss,ssdir) in self.iterate_servers():
5162-                sharedir = os.path.join(ssdir, "shares", si_dir)
5163-                shares = {}
5164-                for fn in os.listdir(sharedir):
5165-                    shnum = int(fn)
5166-                    sharedata = open(os.path.join(sharedir, fn), "rb").read()
5167-                    shares[shnum] = sharedata
5168-                fileutil.rm_dir(sharedir)
5169-                if shares:
5170-                    f.write(' %d: { # client[%d]\n' % (i, i))
5171-                    for shnum in sorted(shares.keys()):
5172-                        f.write('  %d: base32.a2b("%s"),\n' %
5173-                                (shnum, base32.b2a(shares[shnum])))
5174-                    f.write('    },\n')
5175-            f.write('}\n')
5176-
5177-            f.close()
5178+            _write_py(n.get_uri())
5179         d.addCallback(_created_mutable)
5180 
5181         def _done(ignored):
5182hunk ./src/allmydata/test/test_download.py 123
5183             f.close()
5184-        d.addCallback(_done)
5185+        d.addBoth(_done)
5186 
5187         return d
5188 
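
Switching from addCallback(_done) to addBoth(_done) closes the file on the
error path as well as on success. One caution about the general idiom: an
addBoth cleanup handler should normally return its argument, so that a
Failure keeps propagating to later errbacks after the cleanup runs (a sketch
of the idiom, not the code above):

    def _cleanup(result_or_failure):
        f.close()
        return result_or_failure  # pass failures through, don't swallow them
    d.addBoth(_cleanup)
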
5189hunk ./src/allmydata/test/test_download.py 127
5190+    def _write_shares(self, u, shares):
5191+        si = uri.from_string(u).get_storage_index()
5192+        for i in shares:
5193+            shares_for_server = shares[i]
5194+            for shnum in shares_for_server:
5195+                share_dir = self.get_server(i).backend.get_shareset(si)._sharehomedir
5196+                fileutil.fp_make_dirs(share_dir)
5197+                share_dir.child(str(shnum)).setContent(shares_for_server[shnum])
5198+
5199     def load_shares(self, ignored=None):
5200         # this uses the data generated by create_shares() to populate the
5201         # storage servers with pre-generated shares
5202hunk ./src/allmydata/test/test_download.py 139
5203-        si = uri.from_string(immutable_uri).get_storage_index()
5204-        si_dir = storage_index_to_dir(si)
5205-        for i in immutable_shares:
5206-            shares = immutable_shares[i]
5207-            for shnum in shares:
5208-                dn = os.path.join(self.get_serverdir(i), "shares", si_dir)
5209-                fileutil.make_dirs(dn)
5210-                fn = os.path.join(dn, str(shnum))
5211-                f = open(fn, "wb")
5212-                f.write(shares[shnum])
5213-                f.close()
5214-
5215-        si = uri.from_string(mutable_uri).get_storage_index()
5216-        si_dir = storage_index_to_dir(si)
5217-        for i in mutable_shares:
5218-            shares = mutable_shares[i]
5219-            for shnum in shares:
5220-                dn = os.path.join(self.get_serverdir(i), "shares", si_dir)
5221-                fileutil.make_dirs(dn)
5222-                fn = os.path.join(dn, str(shnum))
5223-                f = open(fn, "wb")
5224-                f.write(shares[shnum])
5225-                f.close()
5226+        self._write_shares(immutable_uri, immutable_shares)
5227+        self._write_shares(mutable_uri, mutable_shares)
5228 
5229     def download_immutable(self, ignored=None):
5230         n = self.c0.create_node_from_uri(immutable_uri)
5231hunk ./src/allmydata/test/test_download.py 183
5232 
5233         self.load_shares()
5234         si = uri.from_string(immutable_uri).get_storage_index()
5235-        si_dir = storage_index_to_dir(si)
5236 
5237         n = self.c0.create_node_from_uri(immutable_uri)
5238         d = download_to_data(n)
5239hunk ./src/allmydata/test/test_download.py 198
5240                 for clientnum in immutable_shares:
5241                     for shnum in immutable_shares[clientnum]:
5242                         if s._shnum == shnum:
5243-                            fn = os.path.join(self.get_serverdir(clientnum),
5244-                                              "shares", si_dir, str(shnum))
5245-                            os.unlink(fn)
5246+                            share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5247+                            share_dir.child(str(shnum)).remove()
5248         d.addCallback(_clobber_some_shares)
5249         d.addCallback(lambda ign: download_to_data(n))
5250         d.addCallback(_got_data)
5251hunk ./src/allmydata/test/test_download.py 212
5252                 for shnum in immutable_shares[clientnum]:
5253                     if shnum == save_me:
5254                         continue
5255-                    fn = os.path.join(self.get_serverdir(clientnum),
5256-                                      "shares", si_dir, str(shnum))
5257-                    if os.path.exists(fn):
5258-                        os.unlink(fn)
5259+                    share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5260+                    fileutil.fp_remove(share_dir.child(str(shnum)))
5261             # now the download should fail with NotEnoughSharesError
5262             return self.shouldFail(NotEnoughSharesError, "1shares", None,
5263                                    download_to_data, n)
5264hunk ./src/allmydata/test/test_download.py 223
5265             # delete the last remaining share
5266             for clientnum in immutable_shares:
5267                 for shnum in immutable_shares[clientnum]:
5268-                    fn = os.path.join(self.get_serverdir(clientnum),
5269-                                      "shares", si_dir, str(shnum))
5270-                    if os.path.exists(fn):
5271-                        os.unlink(fn)
5272+                    share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5273+                    share_dir.child(str(shnum)).remove()
5274             # now a new download should fail with NoSharesError. We want a
5275             # new ImmutableFileNode so it will forget about the old shares.
5276             # If we merely called create_node_from_uri() without first
5277hunk ./src/allmydata/test/test_download.py 801
5278         # will report two shares, and the ShareFinder will handle the
5279         # duplicate by attaching both to the same CommonShare instance.
5280         si = uri.from_string(immutable_uri).get_storage_index()
5281-        si_dir = storage_index_to_dir(si)
5282-        sh0_file = [sharefile
5283-                    for (shnum, serverid, sharefile)
5284-                    in self.find_uri_shares(immutable_uri)
5285-                    if shnum == 0][0]
5286-        sh0_data = open(sh0_file, "rb").read()
5287+        sh0_fp = [sharefp for (shnum, serverid, sharefp)
5288+                          in self.find_uri_shares(immutable_uri)
5289+                          if shnum == 0][0]
5290+        sh0_data = sh0_fp.getContent()
5291         for clientnum in immutable_shares:
5292             if 0 in immutable_shares[clientnum]:
5293                 continue
5294hunk ./src/allmydata/test/test_download.py 808
5295-            cdir = self.get_serverdir(clientnum)
5296-            target = os.path.join(cdir, "shares", si_dir, "0")
5297-            outf = open(target, "wb")
5298-            outf.write(sh0_data)
5299-            outf.close()
5300+            cdir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5301+            fileutil.fp_make_dirs(cdir)
5302+            cdir.child("0").setContent(sh0_data)
5303 
5304         d = self.download_immutable()
5305         return d
5306hunk ./src/allmydata/test/test_encode.py 134
5307         d.addCallback(_try)
5308         return d
5309 
5310-    def get_share_hashes(self, at_least_these=()):
5311+    def get_share_hashes(self):
5312         d = self._start()
5313         def _try(unused=None):
5314             if self.mode == "bad sharehash":
5315hunk ./src/allmydata/test/test_hung_server.py 3
5316 # -*- coding: utf-8 -*-
5317 
5318-import os, shutil
5319 from twisted.trial import unittest
5320 from twisted.internet import defer
5321hunk ./src/allmydata/test/test_hung_server.py 5
5322-from allmydata import uri
5323+
5324 from allmydata.util.consumer import download_to_data
5325 from allmydata.immutable import upload
5326 from allmydata.mutable.common import UnrecoverableFileError
5327hunk ./src/allmydata/test/test_hung_server.py 10
5328 from allmydata.mutable.publish import MutableData
5329-from allmydata.storage.common import storage_index_to_dir
5330 from allmydata.test.no_network import GridTestMixin
5331 from allmydata.test.common import ShouldFailMixin
5332 from allmydata.util.pollmixin import PollMixin
5333hunk ./src/allmydata/test/test_hung_server.py 18
5334 immutable_plaintext = "data" * 10000
5335 mutable_plaintext = "muta" * 10000
5336 
5337+
5338 class HungServerDownloadTest(GridTestMixin, ShouldFailMixin, PollMixin,
5339                              unittest.TestCase):
5340     # Many of these tests take around 60 seconds on François's ARM buildslave:
5341hunk ./src/allmydata/test/test_hung_server.py 31
5342     timeout = 240
5343 
5344     def _break(self, servers):
5345-        for (id, ss) in servers:
5346-            self.g.break_server(id)
5347+        for ss in servers:
5348+            self.g.break_server(ss.get_serverid())
5349 
5350     def _hang(self, servers, **kwargs):
5351hunk ./src/allmydata/test/test_hung_server.py 35
5352-        for (id, ss) in servers:
5353-            self.g.hang_server(id, **kwargs)
5354+        for ss in servers:
5355+            self.g.hang_server(ss.get_serverid(), **kwargs)
5356 
5357     def _unhang(self, servers, **kwargs):
5358hunk ./src/allmydata/test/test_hung_server.py 39
5359-        for (id, ss) in servers:
5360-            self.g.unhang_server(id, **kwargs)
5361+        for ss in servers:
5362+            self.g.unhang_server(ss.get_serverid(), **kwargs)
5363 
5364     def _hang_shares(self, shnums, **kwargs):
5365         # hang all servers who are holding the given shares
5366hunk ./src/allmydata/test/test_hung_server.py 52
5367                     hung_serverids.add(i_serverid)
5368 
5369     def _delete_all_shares_from(self, servers):
5370-        serverids = [id for (id, ss) in servers]
5371-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5372+        serverids = [ss.get_serverid() for ss in servers]
5373+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5374             if i_serverid in serverids:
5375hunk ./src/allmydata/test/test_hung_server.py 55
5376-                os.unlink(i_sharefile)
5377+                i_sharefp.remove()
5378 
5379     def _corrupt_all_shares_in(self, servers, corruptor_func):
5380hunk ./src/allmydata/test/test_hung_server.py 58
5381-        serverids = [id for (id, ss) in servers]
5382-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5383+        serverids = [ss.get_serverid() for ss in servers]
5384+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5385             if i_serverid in serverids:
5386hunk ./src/allmydata/test/test_hung_server.py 61
5387-                self._corrupt_share((i_shnum, i_sharefile), corruptor_func)
5388+                self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor_func)
5389 
5390     def _copy_all_shares_from(self, from_servers, to_server):
5391hunk ./src/allmydata/test/test_hung_server.py 64
5392-        serverids = [id for (id, ss) in from_servers]
5393-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5394+        serverids = [ss.get_serverid() for ss in from_servers]
5395+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5396             if i_serverid in serverids:
5397hunk ./src/allmydata/test/test_hung_server.py 67
5398-                self._copy_share((i_shnum, i_sharefile), to_server)
5399+                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
5400 
5401hunk ./src/allmydata/test/test_hung_server.py 69
5402-    def _copy_share(self, share, to_server):
5403-        (sharenum, sharefile) = share
5404-        (id, ss) = to_server
5405-        shares_dir = os.path.join(ss.original.storedir, "shares")
5406-        si = uri.from_string(self.uri).get_storage_index()
5407-        si_dir = os.path.join(shares_dir, storage_index_to_dir(si))
5408-        if not os.path.exists(si_dir):
5409-            os.makedirs(si_dir)
5410-        new_sharefile = os.path.join(si_dir, str(sharenum))
5411-        shutil.copy(sharefile, new_sharefile)
5412         self.shares = self.find_uri_shares(self.uri)
5413hunk ./src/allmydata/test/test_hung_server.py 70
5414-        # Make sure that the storage server has the share.
5415-        self.failUnless((sharenum, ss.original.my_nodeid, new_sharefile)
5416-                        in self.shares)
5417-
5418-    def _corrupt_share(self, share, corruptor_func):
5419-        (sharenum, sharefile) = share
5420-        data = open(sharefile, "rb").read()
5421-        newdata = corruptor_func(data)
5422-        os.unlink(sharefile)
5423-        wf = open(sharefile, "wb")
5424-        wf.write(newdata)
5425-        wf.close()
5426 
5427     def _set_up(self, mutable, testdir, num_clients=1, num_servers=10):
5428         self.mutable = mutable
5429hunk ./src/allmydata/test/test_hung_server.py 82
5430 
5431         self.c0 = self.g.clients[0]
5432         nm = self.c0.nodemaker
5433-        self.servers = sorted([(s.get_serverid(), s.get_rref())
5434-                               for s in nm.storage_broker.get_connected_servers()])
5435+        unsorted = [(s.get_serverid(), s.get_rref()) for s in nm.storage_broker.get_connected_servers()]
5436+        self.servers = [ss for (id, ss) in sorted(unsorted)]
5437         self.servers = self.servers[5:] + self.servers[:5]
5438 
5439         if mutable:
5440hunk ./src/allmydata/test/test_hung_server.py 244
5441             # stuck-but-not-overdue, and 4 live requests. All 4 live requests
5442             # will retire before the download is complete and the ShareFinder
5443             # is shut off. That will leave 4 OVERDUE and 1
5444-            # stuck-but-not-overdue, for a total of 5 requests in in
5445+            # stuck-but-not-overdue, for a total of 5 requests in
5446             # _sf.pending_requests
5447             for t in self._sf.overdue_timers.values()[:4]:
5448                 t.reset(-1.0)
5449hunk ./src/allmydata/test/test_mutable.py 21
5450 from foolscap.api import eventually, fireEventually
5451 from foolscap.logging import log
5452 from allmydata.storage_client import StorageFarmBroker
5453-from allmydata.storage.common import storage_index_to_dir
5454 from allmydata.scripts import debug
5455 
5456 from allmydata.mutable.filenode import MutableFileNode, BackoffAgent
5457hunk ./src/allmydata/test/test_mutable.py 3670
5458         # Now execute each assignment by writing the storage.
5459         for (share, servernum) in assignments:
5460             sharedata = base64.b64decode(self.sdmf_old_shares[share])
5461-            storedir = self.get_serverdir(servernum)
5462-            storage_path = os.path.join(storedir, "shares",
5463-                                        storage_index_to_dir(si))
5464-            fileutil.make_dirs(storage_path)
5465-            fileutil.write(os.path.join(storage_path, "%d" % share),
5466-                           sharedata)
5467+            storage_dir = self.get_server(servernum).backend.get_shareset(si).sharehomedir
5468+            fileutil.fp_make_dirs(storage_dir)
5469+            storage_dir.child("%d" % share).setContent(sharedata)
5470         # ...and verify that the shares are there.
5471         shares = self.find_uri_shares(self.sdmf_old_cap)
5472         assert len(shares) == 10
5473hunk ./src/allmydata/test/test_provisioning.py 13
5474 from nevow import inevow
5475 from zope.interface import implements
5476 
5477-class MyRequest:
5478+class MockRequest:
5479     implements(inevow.IRequest)
5480     pass
5481 
5482hunk ./src/allmydata/test/test_provisioning.py 26
5483     def test_load(self):
5484         pt = provisioning.ProvisioningTool()
5485         self.fields = {}
5486-        #r = MyRequest()
5487+        #r = MockRequest()
5488         #r.fields = self.fields
5489         #ctx = RequestContext()
5490         #unfilled = pt.renderSynchronously(ctx)
5491hunk ./src/allmydata/test/test_repairer.py 537
5492         # happiness setting.
5493         def _delete_some_servers(ignored):
5494             for i in xrange(7):
5495-                self.g.remove_server(self.g.servers_by_number[i].my_nodeid)
5496+                self.remove_server(i)
5497 
5498             assert len(self.g.servers_by_number) == 3
5499 
5500hunk ./src/allmydata/test/test_storage.py 14
5501 from allmydata import interfaces
5502 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
5503 from allmydata.storage.server import StorageServer
5504-from allmydata.storage.mutable import MutableShareFile
5505-from allmydata.storage.immutable import BucketWriter, BucketReader
5506-from allmydata.storage.common import DataTooLargeError, storage_index_to_dir, \
5507+from allmydata.storage.backends.disk.mutable import MutableDiskShare
5508+from allmydata.storage.bucket import BucketWriter, BucketReader
5509+from allmydata.storage.common import DataTooLargeError, \
5510      UnknownMutableContainerVersionError, UnknownImmutableContainerVersionError
5511 from allmydata.storage.lease import LeaseInfo
5512 from allmydata.storage.crawler import BucketCountingCrawler
5513hunk ./src/allmydata/test/test_storage.py 474
5514         w[0].remote_write(0, "\xff"*10)
5515         w[0].remote_close()
5516 
5517-        fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0")
5518-        f = open(fn, "rb+")
5519+        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
5520+        f = fp.open("rb+")
5521         f.seek(0)
5522         f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1
5523         f.close()
5524hunk ./src/allmydata/test/test_storage.py 814
5525     def test_bad_magic(self):
5526         ss = self.create("test_bad_magic")
5527         self.allocate(ss, "si1", "we1", self._lease_secret.next(), set([0]), 10)
5528-        fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0")
5529-        f = open(fn, "rb+")
5530+        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
5531+        f = fp.open("rb+")
5532         f.seek(0)
5533         f.write("BAD MAGIC")
5534         f.close()
5535hunk ./src/allmydata/test/test_storage.py 842
5536 
5537         # Trying to make the container too large (by sending a write vector
5538         # whose offset is too high) will raise an exception.
5539-        TOOBIG = MutableShareFile.MAX_SIZE + 10
5540+        TOOBIG = MutableDiskShare.MAX_SIZE + 10
5541         self.failUnlessRaises(DataTooLargeError,
5542                               rstaraw, "si1", secrets,
5543                               {0: ([], [(TOOBIG,data)], None)},
5544hunk ./src/allmydata/test/test_storage.py 1229
5545 
5546         # create a random non-numeric file in the bucket directory, to
5547         # exercise the code that's supposed to ignore those.
5548-        bucket_dir = os.path.join(self.workdir("test_leases"),
5549-                                  "shares", storage_index_to_dir("si1"))
5550-        f = open(os.path.join(bucket_dir, "ignore_me.txt"), "w")
5551-        f.write("you ought to be ignoring me\n")
5552-        f.close()
5553+        bucket_dir = ss.backend.get_shareset("si1").sharehomedir
5554+        bucket_dir.child("ignore_me.txt").setContent("you ought to be ignoring me\n")
5555 
5556hunk ./src/allmydata/test/test_storage.py 1232
5557-        s0 = MutableShareFile(os.path.join(bucket_dir, "0"))
5558+        s0 = MutableDiskShare(bucket_dir.child("0"))
5559         self.failUnlessEqual(len(list(s0.get_leases())), 1)
5560 
5561         # add-lease on a missing storage index is silently ignored
5562hunk ./src/allmydata/test/test_storage.py 3118
5563         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
5564 
5565         # add a non-sharefile to exercise another code path
5566-        fn = os.path.join(ss.sharedir,
5567-                          storage_index_to_dir(immutable_si_0),
5568-                          "not-a-share")
5569-        f = open(fn, "wb")
5570-        f.write("I am not a share.\n")
5571-        f.close()
5572+        fp = ss.backend.get_shareset(immutable_si_0).sharehomedir.child("not-a-share")
5573+        fp.setContent("I am not a share.\n")
5574 
5575         # this is before the crawl has started, so we're not in a cycle yet
5576         initial_state = lc.get_state()
5577hunk ./src/allmydata/test/test_storage.py 3282
5578     def test_expire_age(self):
5579         basedir = "storage/LeaseCrawler/expire_age"
5580         fileutil.make_dirs(basedir)
5581-        # setting expiration_time to 2000 means that any lease which is more
5582-        # than 2000s old will be expired.
5583-        ss = InstrumentedStorageServer(basedir, "\x00" * 20,
5584-                                       expiration_enabled=True,
5585-                                       expiration_mode="age",
5586-                                       expiration_override_lease_duration=2000)
5587+        # setting 'override_lease_duration' to 2000 means that any lease that
5588+        # is more than 2000 seconds old will be expired.
5589+        expiration_policy = {
5590+            'enabled': True,
5591+            'mode': 'age',
5592+            'override_lease_duration': 2000,
5593+            'sharetypes': ('mutable', 'immutable'),
5594+        }
5595+        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
5596         # make it start sooner than usual.
5597         lc = ss.lease_checker
5598         lc.slow_start = 0
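
Here and in the following hunks, the old expiration_* keyword arguments are
collapsed into a single expiration_policy dict. The keys and value types are
exactly those used by the tests; the helper wrapping them below is
illustrative:

    def make_expiration_policy(mode, override_lease_duration=None,
                               cutoff_date=None,
                               sharetypes=('mutable', 'immutable')):
        # mode 'age' uses override_lease_duration (in seconds); mode
        # 'cutoff-date' uses cutoff_date (a unix timestamp), matching the
        # dicts in the surrounding hunks.
        assert mode in ('age', 'cutoff-date')
        policy = {'enabled': True, 'mode': mode, 'sharetypes': sharetypes}
        if override_lease_duration is not None:
            policy['override_lease_duration'] = override_lease_duration
        if cutoff_date is not None:
            policy['cutoff_date'] = cutoff_date
        return policy

    # e.g. make_expiration_policy('age', override_lease_duration=2000)
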
5599hunk ./src/allmydata/test/test_storage.py 3423
5600     def test_expire_cutoff_date(self):
5601         basedir = "storage/LeaseCrawler/expire_cutoff_date"
5602         fileutil.make_dirs(basedir)
5603-        # setting cutoff-date to 2000 seconds ago means that any lease which
5604-        # is more than 2000s old will be expired.
5605+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5606+        # is more than 2000 seconds old will be expired.
5607         now = time.time()
5608         then = int(now - 2000)
5609hunk ./src/allmydata/test/test_storage.py 3427
5610-        ss = InstrumentedStorageServer(basedir, "\x00" * 20,
5611-                                       expiration_enabled=True,
5612-                                       expiration_mode="cutoff-date",
5613-                                       expiration_cutoff_date=then)
5614+        expiration_policy = {
5615+            'enabled': True,
5616+            'mode': 'cutoff-date',
5617+            'cutoff_date': then,
5618+            'sharetypes': ('mutable', 'immutable'),
5619+        }
5620+        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
5621         # make it start sooner than usual.
5622         lc = ss.lease_checker
5623         lc.slow_start = 0
5624hunk ./src/allmydata/test/test_storage.py 3575
5625     def test_only_immutable(self):
5626         basedir = "storage/LeaseCrawler/only_immutable"
5627         fileutil.make_dirs(basedir)
5628+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5629+        # is more than 2000 seconds old will be expired.
5630         now = time.time()
5631         then = int(now - 2000)
5632hunk ./src/allmydata/test/test_storage.py 3579
5633-        ss = StorageServer(basedir, "\x00" * 20,
5634-                           expiration_enabled=True,
5635-                           expiration_mode="cutoff-date",
5636-                           expiration_cutoff_date=then,
5637-                           expiration_sharetypes=("immutable",))
5638+        expiration_policy = {
5639+            'enabled': True,
5640+            'mode': 'cutoff-date',
5641+            'cutoff_date': then,
5642+            'sharetypes': ('immutable',),
5643+        }
5644+        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
5645         lc = ss.lease_checker
5646         lc.slow_start = 0
5647         webstatus = StorageStatus(ss)
5648hunk ./src/allmydata/test/test_storage.py 3636
5649     def test_only_mutable(self):
5650         basedir = "storage/LeaseCrawler/only_mutable"
5651         fileutil.make_dirs(basedir)
5652+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5653+        # is more than 2000 seconds old will be expired.
5654         now = time.time()
5655         then = int(now - 2000)
5656hunk ./src/allmydata/test/test_storage.py 3640
5657-        ss = StorageServer(basedir, "\x00" * 20,
5658-                           expiration_enabled=True,
5659-                           expiration_mode="cutoff-date",
5660-                           expiration_cutoff_date=then,
5661-                           expiration_sharetypes=("mutable",))
5662+        expiration_policy = {
5663+            'enabled': True,
5664+            'mode': 'cutoff-date',
5665+            'cutoff_date': then,
5666+            'sharetypes': ('mutable',),
5667+        }
5668+        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
5669         lc = ss.lease_checker
5670         lc.slow_start = 0
5671         webstatus = StorageStatus(ss)
5672hunk ./src/allmydata/test/test_storage.py 3819
5673     def test_no_st_blocks(self):
5674         basedir = "storage/LeaseCrawler/no_st_blocks"
5675         fileutil.make_dirs(basedir)
5676-        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20,
5677-                                        expiration_mode="age",
5678-                                        expiration_override_lease_duration=-1000)
5679-        # a negative expiration_time= means the "configured-"
5680+        # A negative 'override_lease_duration' means that the "configured-"
5681         # space-recovered counts will be non-zero, since all shares will have
5682hunk ./src/allmydata/test/test_storage.py 3821
5683-        # expired by then
5684+        # expired by then.
5685+        expiration_policy = {
5686+            'enabled': True,
5687+            'mode': 'age',
5688+            'override_lease_duration': -1000,
5689+            'sharetypes': ('mutable', 'immutable'),
5690+        }
5691+        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20, expiration_policy)
5692 
5693         # make it start sooner than usual.
5694         lc = ss.lease_checker
5695hunk ./src/allmydata/test/test_storage.py 3877
5696         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
5697         first = min(self.sis)
5698         first_b32 = base32.b2a(first)
5699-        fn = os.path.join(ss.sharedir, storage_index_to_dir(first), "0")
5700-        f = open(fn, "rb+")
5701+        fp = ss.backend.get_shareset(first).sharehomedir.child("0")
5702+        f = fp.open("rb+")
5703         f.seek(0)
5704         f.write("BAD MAGIC")
5705         f.close()
5706hunk ./src/allmydata/test/test_storage.py 3890
5707 
5708         # also create an empty bucket
5709         empty_si = base32.b2a("\x04"*16)
5710-        empty_bucket_dir = os.path.join(ss.sharedir,
5711-                                        storage_index_to_dir(empty_si))
5712-        fileutil.make_dirs(empty_bucket_dir)
5713+        empty_bucket_dir = ss.backend.get_shareset(empty_si).sharehomedir
5714+        fileutil.fp_make_dirs(empty_bucket_dir)
5715 
5716         ss.setServiceParent(self.s)
5717 
5718hunk ./src/allmydata/test/test_system.py 10
5719 
5720 import allmydata
5721 from allmydata import uri
5722-from allmydata.storage.mutable import MutableShareFile
5723+from allmydata.storage.backends.disk.mutable import MutableDiskShare
5724 from allmydata.storage.server import si_a2b
5725 from allmydata.immutable import offloaded, upload
5726 from allmydata.immutable.literal import LiteralFileNode
5727hunk ./src/allmydata/test/test_system.py 421
5728         return shares
5729 
5730     def _corrupt_mutable_share(self, filename, which):
5731-        msf = MutableShareFile(filename)
5732+        msf = MutableDiskShare(filename)
5733         datav = msf.readv([ (0, 1000000) ])
5734         final_share = datav[0]
5735         assert len(final_share) < 1000000 # ought to be truncated
5736hunk ./src/allmydata/test/test_upload.py 22
5737 from allmydata.util.happinessutil import servers_of_happiness, \
5738                                          shares_by_server, merge_servers
5739 from allmydata.storage_client import StorageFarmBroker
5740-from allmydata.storage.server import storage_index_to_dir
5741 
5742 MiB = 1024*1024
5743 
5744hunk ./src/allmydata/test/test_upload.py 821
5745 
5746     def _copy_share_to_server(self, share_number, server_number):
5747         ss = self.g.servers_by_number[server_number]
5748-        # Copy share i from the directory associated with the first
5749-        # storage server to the directory associated with this one.
5750-        assert self.g, "I tried to find a grid at self.g, but failed"
5751-        assert self.shares, "I tried to find shares at self.shares, but failed"
5752-        old_share_location = self.shares[share_number][2]
5753-        new_share_location = os.path.join(ss.storedir, "shares")
5754-        si = uri.from_string(self.uri).get_storage_index()
5755-        new_share_location = os.path.join(new_share_location,
5756-                                          storage_index_to_dir(si))
5757-        if not os.path.exists(new_share_location):
5758-            os.makedirs(new_share_location)
5759-        new_share_location = os.path.join(new_share_location,
5760-                                          str(share_number))
5761-        if old_share_location != new_share_location:
5762-            shutil.copy(old_share_location, new_share_location)
5763-        shares = self.find_uri_shares(self.uri)
5764-        # Make sure that the storage server has the share.
5765-        self.failUnless((share_number, ss.my_nodeid, new_share_location)
5766-                        in shares)
5767+        self.copy_share(self.shares[share_number], self.uri, ss)
5768 
5769     def _setup_grid(self):
5770         """
5771hunk ./src/allmydata/test/test_upload.py 1103
5772                 self._copy_share_to_server(i, 2)
5773         d.addCallback(_copy_shares)
5774         # Remove the first server, and add a placeholder with share 0
5775-        d.addCallback(lambda ign:
5776-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5777+        d.addCallback(lambda ign: self.remove_server(0))
5778         d.addCallback(lambda ign:
5779             self._add_server_with_share(server_number=4, share_number=0))
5780         # Now try uploading.
5781hunk ./src/allmydata/test/test_upload.py 1134
5782         d.addCallback(lambda ign:
5783             self._add_server(server_number=4))
5784         d.addCallback(_copy_shares)
5785-        d.addCallback(lambda ign:
5786-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5787+        d.addCallback(lambda ign: self.remove_server(0))
5788         d.addCallback(_reset_encoding_parameters)
5789         d.addCallback(lambda client:
5790             client.upload(upload.Data("data" * 10000, convergence="")))
5791hunk ./src/allmydata/test/test_upload.py 1196
5792                 self._copy_share_to_server(i, 2)
5793         d.addCallback(_copy_shares)
5794         # Remove server 0, and add another in its place
5795-        d.addCallback(lambda ign:
5796-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5797+        d.addCallback(lambda ign: self.remove_server(0))
5798         d.addCallback(lambda ign:
5799             self._add_server_with_share(server_number=4, share_number=0,
5800                                         readonly=True))
5801hunk ./src/allmydata/test/test_upload.py 1237
5802             for i in xrange(1, 10):
5803                 self._copy_share_to_server(i, 2)
5804         d.addCallback(_copy_shares)
5805-        d.addCallback(lambda ign:
5806-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5807+        d.addCallback(lambda ign: self.remove_server(0))
5808         def _reset_encoding_parameters(ign, happy=4):
5809             client = self.g.clients[0]
5810             client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
5811hunk ./src/allmydata/test/test_upload.py 1273
5812         # remove the original server
5813         # (necessary to ensure that the Tahoe2ServerSelector will distribute
5814         #  all the shares)
5815-        def _remove_server(ign):
5816-            server = self.g.servers_by_number[0]
5817-            self.g.remove_server(server.my_nodeid)
5818-        d.addCallback(_remove_server)
5819+        d.addCallback(lambda ign: self.remove_server(0))
5820         # This should succeed; we still have 4 servers, and the
5821         # happiness of the upload is 4.
5822         d.addCallback(lambda ign:
5823hunk ./src/allmydata/test/test_upload.py 1285
5824         d.addCallback(lambda ign:
5825             self._setup_and_upload())
5826         d.addCallback(_do_server_setup)
5827-        d.addCallback(_remove_server)
5828+        d.addCallback(lambda ign: self.remove_server(0))
5829         d.addCallback(lambda ign:
5830             self.shouldFail(UploadUnhappinessError,
5831                             "test_dropped_servers_in_encoder",
5832hunk ./src/allmydata/test/test_upload.py 1307
5833             self._add_server_with_share(4, 7, readonly=True)
5834             self._add_server_with_share(5, 8, readonly=True)
5835         d.addCallback(_do_server_setup_2)
5836-        d.addCallback(_remove_server)
5837+        d.addCallback(lambda ign: self.remove_server(0))
5838         d.addCallback(lambda ign:
5839             self._do_upload_with_broken_servers(1))
5840         d.addCallback(_set_basedir)
5841hunk ./src/allmydata/test/test_upload.py 1314
5842         d.addCallback(lambda ign:
5843             self._setup_and_upload())
5844         d.addCallback(_do_server_setup_2)
5845-        d.addCallback(_remove_server)
5846+        d.addCallback(lambda ign: self.remove_server(0))
5847         d.addCallback(lambda ign:
5848             self.shouldFail(UploadUnhappinessError,
5849                             "test_dropped_servers_in_encoder",
5850hunk ./src/allmydata/test/test_upload.py 1528
5851             for i in xrange(1, 10):
5852                 self._copy_share_to_server(i, 1)
5853         d.addCallback(_copy_shares)
5854-        d.addCallback(lambda ign:
5855-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5856+        d.addCallback(lambda ign: self.remove_server(0))
5857         def _prepare_client(ign):
5858             client = self.g.clients[0]
5859             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
5860hunk ./src/allmydata/test/test_upload.py 1550
5861         def _setup(ign):
5862             for i in xrange(1, 11):
5863                 self._add_server(server_number=i)
5864-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5865+            self.remove_server(0)
5866             c = self.g.clients[0]
5867             # We set happy to an unsatisfiable value so that we can check the
5868             # counting in the exception message. The same progress message
5869hunk ./src/allmydata/test/test_upload.py 1577
5870                 self._add_server(server_number=i)
5871             self._add_server(server_number=11, readonly=True)
5872             self._add_server(server_number=12, readonly=True)
5873-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5874+            self.remove_server(0)
5875             c = self.g.clients[0]
5876             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
5877             return c
5878hunk ./src/allmydata/test/test_upload.py 1605
5879             # the first one that the selector sees.
5880             for i in xrange(10):
5881                 self._copy_share_to_server(i, 9)
5882-            # Remove server 0, and its contents
5883-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5884+            self.remove_server(0)
5885             # Make happiness unsatisfiable
5886             c = self.g.clients[0]
5887             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
5888hunk ./src/allmydata/test/test_upload.py 1625
5889         def _then(ign):
5890             for i in xrange(1, 11):
5891                 self._add_server(server_number=i, readonly=True)
5892-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5893+            self.remove_server(0)
5894             c = self.g.clients[0]
5895             c.DEFAULT_ENCODING_PARAMETERS['k'] = 2
5896             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
5897hunk ./src/allmydata/test/test_upload.py 1661
5898             self._add_server(server_number=4, readonly=True))
5899         d.addCallback(lambda ign:
5900             self._add_server(server_number=5, readonly=True))
5901-        d.addCallback(lambda ign:
5902-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5903+        d.addCallback(lambda ign: self.remove_server(0))
5904         def _reset_encoding_parameters(ign, happy=4):
5905             client = self.g.clients[0]
5906             client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
5907hunk ./src/allmydata/test/test_upload.py 1696
5908         d.addCallback(lambda ign:
5909             self._add_server(server_number=2))
5910         def _break_server_2(ign):
5911-            serverid = self.g.servers_by_number[2].my_nodeid
5912+            serverid = self.get_server(2).get_serverid()
5913             self.g.break_server(serverid)
5914         d.addCallback(_break_server_2)
5915         d.addCallback(lambda ign:
5916hunk ./src/allmydata/test/test_upload.py 1705
5917             self._add_server(server_number=4, readonly=True))
5918         d.addCallback(lambda ign:
5919             self._add_server(server_number=5, readonly=True))
5920-        d.addCallback(lambda ign:
5921-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5922+        d.addCallback(lambda ign: self.remove_server(0))
5923         d.addCallback(_reset_encoding_parameters)
5924         d.addCallback(lambda client:
5925             self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
5926hunk ./src/allmydata/test/test_upload.py 1816
5927             # Copy shares
5928             self._copy_share_to_server(1, 1)
5929             self._copy_share_to_server(2, 1)
5930-            # Remove server 0
5931-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5932+            self.remove_server(0)
5933             client = self.g.clients[0]
5934             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 3
5935             return client
5936hunk ./src/allmydata/test/test_upload.py 1930
5937                                         readonly=True)
5938             self._add_server_with_share(server_number=4, share_number=3,
5939                                         readonly=True)
5940-            # Remove server 0.
5941-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5942+            self.remove_server(0)
5943             # Set the client appropriately
5944             c = self.g.clients[0]
5945             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
5946hunk ./src/allmydata/test/test_util.py 9
5947 from twisted.trial import unittest
5948 from twisted.internet import defer, reactor
5949 from twisted.python.failure import Failure
5950+from twisted.python.filepath import FilePath
5951 from twisted.python import log
5952 from pycryptopp.hash.sha256 import SHA256 as _hash
5953 
5954hunk ./src/allmydata/test/test_util.py 508
5955                 os.chdir(saved_cwd)
5956 
5957     def test_disk_stats(self):
5958-        avail = fileutil.get_available_space('.', 2**14)
5959+        avail = fileutil.get_available_space(FilePath('.'), 2**14)
5960         if avail == 0:
5961             raise unittest.SkipTest("This test will spuriously fail if there is no disk space left.")
5962 
5963hunk ./src/allmydata/test/test_util.py 512
5964-        disk = fileutil.get_disk_stats('.', 2**13)
5965+        disk = fileutil.get_disk_stats(FilePath('.'), 2**13)
5966         self.failUnless(disk['total'] > 0, disk['total'])
5967         self.failUnless(disk['used'] > 0, disk['used'])
5968         self.failUnless(disk['free_for_root'] > 0, disk['free_for_root'])
5969hunk ./src/allmydata/test/test_util.py 521
5970 
5971     def test_disk_stats_avail_nonnegative(self):
5972         # This test will spuriously fail if you have more than 2^128
5973-        # bytes of available space on your filesystem.
5974-        disk = fileutil.get_disk_stats('.', 2**128)
5975+        # bytes of available space on your filesystem (lucky you).
5976+        disk = fileutil.get_disk_stats(FilePath('.'), 2**128)
5977         self.failUnlessEqual(disk['avail'], 0)
5978 
5979 class PollMixinTests(unittest.TestCase):
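
The fileutil calls exercised by these tests now take a FilePath plus a
reserved-space figure in bytes. A usage sketch limited to the keys the tests
actually check:

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    reserved = 2**20  # keep 1 MiB in reserve (placeholder figure)
    disk = fileutil.get_disk_stats(FilePath('.'), reserved)
    # disk['total'], disk['used'], disk['free_for_root'], disk['avail']

    # get_available_space returns the 'avail' figure directly; as the
    # nonnegative-avail test above shows, it is floored at zero when the
    # reservation exceeds the free space.
    avail = fileutil.get_available_space(FilePath('.'), reserved)
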
5980hunk ./src/allmydata/test/test_web.py 12
5981 from twisted.python import failure, log
5982 from nevow import rend
5983 from allmydata import interfaces, uri, webish, dirnode
5984-from allmydata.storage.shares import get_share_file
5985 from allmydata.storage_client import StorageFarmBroker
5986 from allmydata.immutable import upload
5987 from allmydata.immutable.downloader.status import DownloadStatus
5988hunk ./src/allmydata/test/test_web.py 4111
5989             good_shares = self.find_uri_shares(self.uris["good"])
5990             self.failUnlessReallyEqual(len(good_shares), 10)
5991             sick_shares = self.find_uri_shares(self.uris["sick"])
5992-            os.unlink(sick_shares[0][2])
5993+            sick_shares[0][2].remove()
5994             dead_shares = self.find_uri_shares(self.uris["dead"])
5995             for i in range(1, 10):
5996hunk ./src/allmydata/test/test_web.py 4114
5997-                os.unlink(dead_shares[i][2])
5998+                dead_shares[i][2].remove()
5999             c_shares = self.find_uri_shares(self.uris["corrupt"])
6000             cso = CorruptShareOptions()
6001             cso.stdout = StringIO()
6002hunk ./src/allmydata/test/test_web.py 4118
6003-            cso.parseOptions([c_shares[0][2]])
6004+            cso.parseOptions([c_shares[0][2].path])
6005             corrupt_share(cso)
6006         d.addCallback(_clobber_shares)
6007 
6008hunk ./src/allmydata/test/test_web.py 4253
6009             good_shares = self.find_uri_shares(self.uris["good"])
6010             self.failUnlessReallyEqual(len(good_shares), 10)
6011             sick_shares = self.find_uri_shares(self.uris["sick"])
6012-            os.unlink(sick_shares[0][2])
6013+            sick_shares[0][2].remove()
6014             dead_shares = self.find_uri_shares(self.uris["dead"])
6015             for i in range(1, 10):
6016hunk ./src/allmydata/test/test_web.py 4256
6017-                os.unlink(dead_shares[i][2])
6018+                dead_shares[i][2].remove()
6019             c_shares = self.find_uri_shares(self.uris["corrupt"])
6020             cso = CorruptShareOptions()
6021             cso.stdout = StringIO()
6022hunk ./src/allmydata/test/test_web.py 4260
6023-            cso.parseOptions([c_shares[0][2]])
6024+            cso.parseOptions([c_shares[0][2].path])
6025             corrupt_share(cso)
6026         d.addCallback(_clobber_shares)
6027 
6028hunk ./src/allmydata/test/test_web.py 4319
6029 
6030         def _clobber_shares(ignored):
6031             sick_shares = self.find_uri_shares(self.uris["sick"])
6032-            os.unlink(sick_shares[0][2])
6033+            sick_shares[0][2].remove()
6034         d.addCallback(_clobber_shares)
6035 
6036         d.addCallback(self.CHECK, "sick", "t=check&repair=true&output=json")
6037hunk ./src/allmydata/test/test_web.py 4811
6038             good_shares = self.find_uri_shares(self.uris["good"])
6039             self.failUnlessReallyEqual(len(good_shares), 10)
6040             sick_shares = self.find_uri_shares(self.uris["sick"])
6041-            os.unlink(sick_shares[0][2])
6042+            sick_shares[0][2].remove()
6043             #dead_shares = self.find_uri_shares(self.uris["dead"])
6044             #for i in range(1, 10):
6045hunk ./src/allmydata/test/test_web.py 4814
6046-            #    os.unlink(dead_shares[i][2])
6047+            #    dead_shares[i][2].remove()
6048 
6049             #c_shares = self.find_uri_shares(self.uris["corrupt"])
6050             #cso = CorruptShareOptions()
6051hunk ./src/allmydata/test/test_web.py 4819
6052             #cso.stdout = StringIO()
6053-            #cso.parseOptions([c_shares[0][2]])
6054+            #cso.parseOptions([c_shares[0][2].path])
6055             #corrupt_share(cso)
6056         d.addCallback(_clobber_shares)
6057 
6058hunk ./src/allmydata/test/test_web.py 4870
6059         d.addErrback(self.explain_web_error)
6060         return d
6061 
6062-    def _count_leases(self, ignored, which):
6063-        u = self.uris[which]
6064-        shares = self.find_uri_shares(u)
6065-        lease_counts = []
6066-        for shnum, serverid, fn in shares:
6067-            sf = get_share_file(fn)
6068-            num_leases = len(list(sf.get_leases()))
6069-            lease_counts.append( (fn, num_leases) )
6070-        return lease_counts
6071-
6072-    def _assert_leasecount(self, lease_counts, expected):
6073+    def _assert_leasecount(self, ignored, which, expected):
6074+        lease_counts = self.count_leases(self.uris[which])
6075         for (fn, num_leases) in lease_counts:
6076             if num_leases != expected:
6077                 self.fail("expected %d leases, have %d, on %s" %
6078hunk ./src/allmydata/test/test_web.py 4903
6079                 self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
6080         d.addCallback(_compute_fileurls)
6081 
6082-        d.addCallback(self._count_leases, "one")
6083-        d.addCallback(self._assert_leasecount, 1)
6084-        d.addCallback(self._count_leases, "two")
6085-        d.addCallback(self._assert_leasecount, 1)
6086-        d.addCallback(self._count_leases, "mutable")
6087-        d.addCallback(self._assert_leasecount, 1)
6088+        d.addCallback(self._assert_leasecount, "one", 1)
6089+        d.addCallback(self._assert_leasecount, "two", 1)
6090+        d.addCallback(self._assert_leasecount, "mutable", 1)
6091 
6092         d.addCallback(self.CHECK, "one", "t=check") # no add-lease
6093         def _got_html_good(res):
6094hunk ./src/allmydata/test/test_web.py 4913
6095             self.failIf("Not Healthy" in res, res)
6096         d.addCallback(_got_html_good)
6097 
6098-        d.addCallback(self._count_leases, "one")
6099-        d.addCallback(self._assert_leasecount, 1)
6100-        d.addCallback(self._count_leases, "two")
6101-        d.addCallback(self._assert_leasecount, 1)
6102-        d.addCallback(self._count_leases, "mutable")
6103-        d.addCallback(self._assert_leasecount, 1)
6104+        d.addCallback(self._assert_leasecount, "one", 1)
6105+        d.addCallback(self._assert_leasecount, "two", 1)
6106+        d.addCallback(self._assert_leasecount, "mutable", 1)
6107 
6108         # this CHECK uses the original client, which uses the same
6109         # lease-secrets, so it will just renew the original lease
6110hunk ./src/allmydata/test/test_web.py 4922
6111         d.addCallback(self.CHECK, "one", "t=check&add-lease=true")
6112         d.addCallback(_got_html_good)
6113 
6114-        d.addCallback(self._count_leases, "one")
6115-        d.addCallback(self._assert_leasecount, 1)
6116-        d.addCallback(self._count_leases, "two")
6117-        d.addCallback(self._assert_leasecount, 1)
6118-        d.addCallback(self._count_leases, "mutable")
6119-        d.addCallback(self._assert_leasecount, 1)
6120+        d.addCallback(self._assert_leasecount, "one", 1)
6121+        d.addCallback(self._assert_leasecount, "two", 1)
6122+        d.addCallback(self._assert_leasecount, "mutable", 1)
6123 
6124         # this CHECK uses an alternate client, which adds a second lease
6125         d.addCallback(self.CHECK, "one", "t=check&add-lease=true", clientnum=1)
6126hunk ./src/allmydata/test/test_web.py 4930
6127         d.addCallback(_got_html_good)
6128 
6129-        d.addCallback(self._count_leases, "one")
6130-        d.addCallback(self._assert_leasecount, 2)
6131-        d.addCallback(self._count_leases, "two")
6132-        d.addCallback(self._assert_leasecount, 1)
6133-        d.addCallback(self._count_leases, "mutable")
6134-        d.addCallback(self._assert_leasecount, 1)
6135+        d.addCallback(self._assert_leasecount, "one", 2)
6136+        d.addCallback(self._assert_leasecount, "two", 1)
6137+        d.addCallback(self._assert_leasecount, "mutable", 1)
6138 
6139         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true")
6140         d.addCallback(_got_html_good)
6141hunk ./src/allmydata/test/test_web.py 4937
6142 
6143-        d.addCallback(self._count_leases, "one")
6144-        d.addCallback(self._assert_leasecount, 2)
6145-        d.addCallback(self._count_leases, "two")
6146-        d.addCallback(self._assert_leasecount, 1)
6147-        d.addCallback(self._count_leases, "mutable")
6148-        d.addCallback(self._assert_leasecount, 1)
6149+        d.addCallback(self._assert_leasecount, "one", 2)
6150+        d.addCallback(self._assert_leasecount, "two", 1)
6151+        d.addCallback(self._assert_leasecount, "mutable", 1)
6152 
6153         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true",
6154                       clientnum=1)
6155hunk ./src/allmydata/test/test_web.py 4945
6156         d.addCallback(_got_html_good)
6157 
6158-        d.addCallback(self._count_leases, "one")
6159-        d.addCallback(self._assert_leasecount, 2)
6160-        d.addCallback(self._count_leases, "two")
6161-        d.addCallback(self._assert_leasecount, 1)
6162-        d.addCallback(self._count_leases, "mutable")
6163-        d.addCallback(self._assert_leasecount, 2)
6164+        d.addCallback(self._assert_leasecount, "one", 2)
6165+        d.addCallback(self._assert_leasecount, "two", 1)
6166+        d.addCallback(self._assert_leasecount, "mutable", 2)
6167 
6168         d.addErrback(self.explain_web_error)
6169         return d
6170hunk ./src/allmydata/test/test_web.py 4989
6171             self.failUnlessReallyEqual(len(units), 4+1)
6172         d.addCallback(_done)
6173 
6174-        d.addCallback(self._count_leases, "root")
6175-        d.addCallback(self._assert_leasecount, 1)
6176-        d.addCallback(self._count_leases, "one")
6177-        d.addCallback(self._assert_leasecount, 1)
6178-        d.addCallback(self._count_leases, "mutable")
6179-        d.addCallback(self._assert_leasecount, 1)
6180+        d.addCallback(self._assert_leasecount, "root", 1)
6181+        d.addCallback(self._assert_leasecount, "one", 1)
6182+        d.addCallback(self._assert_leasecount, "mutable", 1)
6183 
6184         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true")
6185         d.addCallback(_done)
6186hunk ./src/allmydata/test/test_web.py 4996
6187 
6188-        d.addCallback(self._count_leases, "root")
6189-        d.addCallback(self._assert_leasecount, 1)
6190-        d.addCallback(self._count_leases, "one")
6191-        d.addCallback(self._assert_leasecount, 1)
6192-        d.addCallback(self._count_leases, "mutable")
6193-        d.addCallback(self._assert_leasecount, 1)
6194+        d.addCallback(self._assert_leasecount, "root", 1)
6195+        d.addCallback(self._assert_leasecount, "one", 1)
6196+        d.addCallback(self._assert_leasecount, "mutable", 1)
6197 
6198         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true",
6199                       clientnum=1)
6200hunk ./src/allmydata/test/test_web.py 5004
6201         d.addCallback(_done)
6202 
6203-        d.addCallback(self._count_leases, "root")
6204-        d.addCallback(self._assert_leasecount, 2)
6205-        d.addCallback(self._count_leases, "one")
6206-        d.addCallback(self._assert_leasecount, 2)
6207-        d.addCallback(self._count_leases, "mutable")
6208-        d.addCallback(self._assert_leasecount, 2)
6209+        d.addCallback(self._assert_leasecount, "root", 2)
6210+        d.addCallback(self._assert_leasecount, "one", 2)
6211+        d.addCallback(self._assert_leasecount, "mutable", 2)
6212 
6213         d.addErrback(self.explain_web_error)
6214         return d
6215merger 0.0 (
6216hunk ./src/allmydata/uri.py 829
6217+    def is_readonly(self):
6218+        return True
6219+
6220+    def get_readonly(self):
6221+        return self
6222+
6223+
6224hunk ./src/allmydata/uri.py 829
6225+    def is_readonly(self):
6226+        return True
6227+
6228+    def get_readonly(self):
6229+        return self
6230+
6231+
6232)
6233merger 0.0 (
6234hunk ./src/allmydata/uri.py 848
6235+    def is_readonly(self):
6236+        return True
6237+
6238+    def get_readonly(self):
6239+        return self
6240+
6241hunk ./src/allmydata/uri.py 848
6242+    def is_readonly(self):
6243+        return True
6244+
6245+    def get_readonly(self):
6246+        return self
6247+
6248)
6249hunk ./src/allmydata/util/encodingutil.py 221
6250 def quote_path(path, quotemarks=True):
6251     return quote_output("/".join(map(to_str, path)), quotemarks=quotemarks)
6252 
6253+def quote_filepath(fp, quotemarks=True, encoding=None):
6254+    path = fp.path
6255+    if isinstance(path, str):
6256+        try:
6257+            path = path.decode(filesystem_encoding)
6258+        except UnicodeDecodeError:
6259+            return 'b"%s"' % (ESCAPABLE_8BIT.sub(_str_escape, path),)
6260+
6261+    return quote_output(path, quotemarks=quotemarks, encoding=encoding)
6262+
6263 
6264 def unicode_platform():
6265     """
6266hunk ./src/allmydata/util/fileutil.py 5
6267 Futz with files like a pro.
6268 """
6269 
6270-import sys, exceptions, os, stat, tempfile, time, binascii
6271+import errno, sys, exceptions, os, stat, tempfile, time, binascii
6272+
6273+from allmydata.util.assertutil import precondition
6274 
6275 from twisted.python import log
6276hunk ./src/allmydata/util/fileutil.py 10
6277+from twisted.python.filepath import FilePath, UnlistableError
6278 
6279 from pycryptopp.cipher.aes import AES
6280 
6281hunk ./src/allmydata/util/fileutil.py 189
6282             raise tx
6283         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
6284 
6285-def rm_dir(dirname):
6286+def fp_make_dirs(dirfp):
6287+    """
6288+    An idempotent version of FilePath.makedirs().  If the dir already
6289+    exists, do nothing and return without raising an exception.  If this
6290+    call creates the dir, return without raising an exception.  If there is
6291+    an error that prevents creation or if the directory gets deleted after
6292+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
6293+    exists, raise an exception.
6294+    """
6296+    tx = None
6297+    try:
6298+        dirfp.makedirs()
6299+    except OSError, x:
6300+        tx = x
6301+
6302+    if not dirfp.isdir():
6303+        if tx:
6304+            raise tx
6305+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
6306+
6307+def fp_rmdir_if_empty(dirfp):
6308+    """ Remove the directory if it is empty. """
6309+    try:
6310+        os.rmdir(dirfp.path)
6311+    except OSError, e:
6312+        if e.errno != errno.ENOTEMPTY:
6313+            raise
6314+    else:
6315+        dirfp.changed()
6316+
6317+def rmtree(dirname):
6318     """
6319     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
6320     already gone, do nothing and return without raising an exception.  If this
6321hunk ./src/allmydata/util/fileutil.py 239
6322             else:
6323                 remove(fullname)
6324         os.rmdir(dirname)
6325-    except Exception, le:
6326-        # Ignore "No such file or directory"
6327-        if (not isinstance(le, OSError)) or le.args[0] != 2:
6328+    except EnvironmentError, le:
6329+        # Ignore "No such file or directory", collect any other exception.
6330+        if le.args[0] != errno.ENOENT:
6331             excs.append(le)
6332hunk ./src/allmydata/util/fileutil.py 243
6333+    except Exception, le:
6334+        excs.append(le)
6335 
6336     # Okay, now we've recursively removed everything, ignoring any "No
6337     # such file or directory" errors, and collecting any other errors.
6338hunk ./src/allmydata/util/fileutil.py 256
6339             raise OSError, "Failed to remove dir for unknown reason."
6340         raise OSError, excs
6341 
6342+def fp_remove(fp):
6343+    """
6344+    An idempotent version of FilePath.remove().  If the file/dir is already
6345+    gone, do nothing and return without raising an exception.  If this call
6346+    removes the file/dir, return without raising an exception.  If there is
6347+    an error that prevents removal, or if a file or directory at the same
6348+    path gets created again by someone else after this deletes it and before
6349+    this checks that it is gone, raise an exception.
6350+    """
6351+    try:
6352+        fp.remove()
6353+    except UnlistableError, e:
6354+        if e.originalException.errno != errno.ENOENT:
6355+            raise
6356+    except OSError, e:
6357+        if e.errno != errno.ENOENT:
6358+            raise
6359+
6360+def rm_dir(dirname):
6361+    # rm_dir was renamed to rmtree (matching shutil.rmtree); this alias is kept for compatibility.
6362+    return rmtree(dirname)
6363 
6364 def remove_if_possible(f):
6365     try:
6366hunk ./src/allmydata/util/fileutil.py 387
6367         import traceback
6368         traceback.print_exc()
6369 
6370-def get_disk_stats(whichdir, reserved_space=0):
6371+def get_disk_stats(whichdirfp, reserved_space=0):
6372     """Return disk statistics for the storage disk, in the form of a dict
6373     with the following fields.
6374       total:            total bytes on disk
6375hunk ./src/allmydata/util/fileutil.py 408
6376     you can pass how many bytes you would like to leave unused on this
6377     filesystem as reserved_space.
6378     """
6379+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
6380 
6381     if have_GetDiskFreeSpaceExW:
6382         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
6383hunk ./src/allmydata/util/fileutil.py 419
6384         n_free_for_nonroot = c_ulonglong(0)
6385         n_total            = c_ulonglong(0)
6386         n_free_for_root    = c_ulonglong(0)
6387-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
6388-                                               byref(n_total),
6389-                                               byref(n_free_for_root))
6390+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
6391+                                                      byref(n_total),
6392+                                                      byref(n_free_for_root))
6393         if retval == 0:
6394             raise OSError("Windows error %d attempting to get disk statistics for %r"
6395hunk ./src/allmydata/util/fileutil.py 424
6396-                          % (GetLastError(), whichdir))
6397+                          % (GetLastError(), whichdirfp.path))
6398         free_for_nonroot = n_free_for_nonroot.value
6399         total            = n_total.value
6400         free_for_root    = n_free_for_root.value
6401hunk ./src/allmydata/util/fileutil.py 433
6402         # <http://docs.python.org/library/os.html#os.statvfs>
6403         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
6404         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
6405-        s = os.statvfs(whichdir)
6406+        s = os.statvfs(whichdirfp.path)
6407 
6408         # on my mac laptop:
6409         #  statvfs(2) is a wrapper around statfs(2).
6410hunk ./src/allmydata/util/fileutil.py 460
6411              'avail': avail,
6412            }
6413 
6414-def get_available_space(whichdir, reserved_space):
6415+def get_available_space(whichdirfp, reserved_space):
6416     """Returns available space for share storage in bytes, or None if no
6417     API to get this information is available.
6418 
6419hunk ./src/allmydata/util/fileutil.py 472
6420     you can pass how many bytes you would like to leave unused on this
6421     filesystem as reserved_space.
6422     """
6423+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
6424     try:
6425hunk ./src/allmydata/util/fileutil.py 474
6426-        return get_disk_stats(whichdir, reserved_space)['avail']
6427+        return get_disk_stats(whichdirfp, reserved_space)['avail']
6428     except AttributeError:
6429         return None
6430hunk ./src/allmydata/util/fileutil.py 477
6431-    except EnvironmentError:
6432-        log.msg("OS call to get disk statistics failed")
6433+
6434+
6435+def get_used_space(fp):
6436+    if fp is None:
6437         return 0
6438hunk ./src/allmydata/util/fileutil.py 482
6439+    try:
6440+        s = os.stat(fp.path)
6441+    except EnvironmentError:
6442+        if not fp.exists():
6443+            return 0
6444+        raise
6445+    else:
6446+        # POSIX defines st_blocks (originally a BSDism):
6447+        #   <http://pubs.opengroup.org/onlinepubs/009695399/basedefs/sys/stat.h.html>
6448+        # but does not require stat() to give it a "meaningful value"
6449+        #   <http://pubs.opengroup.org/onlinepubs/009695399/functions/stat.html>
6450+        # and says:
6451+        #   "The unit for the st_blocks member of the stat structure is not defined
6452+        #    within IEEE Std 1003.1-2001. In some implementations it is 512 bytes.
6453+        #    It may differ on a file system basis. There is no correlation between
6454+        #    values of the st_blocks and st_blksize, and the f_bsize (from <sys/statvfs.h>)
6455+        #    structure members."
6456+        #
6457+        # The Linux docs define it as "the number of blocks allocated to the file,
6458+        # [in] 512-byte units." It is also defined that way on MacOS X. Python does
6459+        # not set the attribute on Windows.
6460+        #
6461+        # We consider platforms that define st_blocks but give it a wrong value, or
6462+        # measure it in a unit other than 512 bytes, to be broken. See also
6463+        # <http://bugs.python.org/issue12350>.
6464+
6465+        if hasattr(s, 'st_blocks'):
6466+            return s.st_blocks * 512
6467+        else:
6468+            return s.st_size
6469}
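
As an orientation to the FilePathification in the patch above: the fileutil
helpers now take Twisted FilePath objects instead of path strings. A minimal
usage sketch, with an illustrative directory path and reservation value that
are not taken from the patch:

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    storedir = FilePath("/tmp/example-storage")     # hypothetical location
    fileutil.fp_make_dirs(storedir)                 # idempotent; ok if it already exists

    # get_disk_stats and get_available_space now require a FilePath:
    stats = fileutil.get_disk_stats(storedir, reserved_space=2**20)
    avail = fileutil.get_available_space(storedir, 2**20)

    sharefp = storedir.child("0")
    sharefp.setContent("dummy share data")
    used = fileutil.get_used_space(sharefp)         # st_blocks * 512 where defined

    fileutil.fp_remove(sharefp)                     # idempotent removal
    fileutil.fp_rmdir_if_empty(storedir)            # removes the dir only if empty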
6470[Work-in-progress, includes fix to bug involving BucketWriter. refs #999
6471david-sarah@jacaranda.org**20110920033803
6472 Ignore-this: 64e9e019421454e4d08141d10b6e4eed
6473] {
6474hunk ./src/allmydata/client.py 9
6475 from twisted.internet import reactor, defer
6476 from twisted.application import service
6477 from twisted.application.internet import TimerService
6478+from twisted.python.filepath import FilePath
6479 from foolscap.api import Referenceable
6480 from pycryptopp.publickey import rsa
6481 
6482hunk ./src/allmydata/client.py 15
6483 import allmydata
6484 from allmydata.storage.server import StorageServer
6485+from allmydata.storage.backends.disk.disk_backend import DiskBackend
6486 from allmydata import storage_client
6487 from allmydata.immutable.upload import Uploader
6488 from allmydata.immutable.offloaded import Helper
6489hunk ./src/allmydata/client.py 213
6490             return
6491         readonly = self.get_config("storage", "readonly", False, boolean=True)
6492 
6493-        storedir = os.path.join(self.basedir, self.STOREDIR)
6494+        storedir = FilePath(self.basedir).child(self.STOREDIR)
6495 
6496         data = self.get_config("storage", "reserved_space", None)
6497         reserved = None
6498hunk ./src/allmydata/client.py 255
6499             'cutoff_date': cutoff_date,
6500             'sharetypes': tuple(sharetypes),
6501         }
6502-        ss = StorageServer(storedir, self.nodeid,
6503-                           reserved_space=reserved,
6504-                           discard_storage=discard,
6505-                           readonly_storage=readonly,
6506+
6507+        backend = DiskBackend(storedir, readonly=readonly, reserved_space=reserved,
6508+                              discard_storage=discard)
6509+        ss = StorageServer(nodeid, backend, storedir,
6510                            stats_provider=self.stats_provider,
6511                            expiration_policy=expiration_policy)
6512         self.add_service(ss)
6513hunk ./src/allmydata/interfaces.py 348
6514 
6515     def get_shares():
6516         """
6517-        Generates the IStoredShare objects held in this shareset.
6518+        Generates IStoredShare objects for all completed shares in this shareset.
6519         """
6520 
6521     def has_incoming(shnum):
6522hunk ./src/allmydata/storage/backends/base.py 69
6523         # def _create_mutable_share(self, storageserver, shnum, write_enabler):
6524         #     """create a mutable share with the given shnum and write_enabler"""
6525 
6526-        # secrets might be a triple with cancel_secret in secrets[2], but if
6527-        # so we ignore the cancel_secret.
6528         write_enabler = secrets[0]
6529         renew_secret = secrets[1]
6530hunk ./src/allmydata/storage/backends/base.py 71
6531+        cancel_secret = '\x00'*32
6532+        if len(secrets) > 2:
6533+            cancel_secret = secrets[2]
6534 
6535         si_s = self.get_storage_index_string()
6536         shares = {}
6537hunk ./src/allmydata/storage/backends/base.py 110
6538             read_data[shnum] = share.readv(read_vector)
6539 
6540         ownerid = 1 # TODO
6541-        lease_info = LeaseInfo(ownerid, renew_secret,
6542+        lease_info = LeaseInfo(ownerid, renew_secret, cancel_secret,
6543                                expiration_time, storageserver.get_serverid())
6544 
6545         if testv_is_good:
6546hunk ./src/allmydata/storage/backends/disk/disk_backend.py 34
6547     return newfp.child(sia)
6548 
6549 
6550-def get_share(fp):
6551+def get_share(storageindex, shnum, fp):
6552     f = fp.open('rb')
6553     try:
6554         prefix = f.read(32)
6555hunk ./src/allmydata/storage/backends/disk/disk_backend.py 42
6556         f.close()
6557 
6558     if prefix == MutableDiskShare.MAGIC:
6559-        return MutableDiskShare(fp)
6560+        return MutableDiskShare(storageindex, shnum, fp)
6561     else:
6562         # assume it's immutable
6563hunk ./src/allmydata/storage/backends/disk/disk_backend.py 45
6564-        return ImmutableDiskShare(fp)
6565+        return ImmutableDiskShare(storageindex, shnum, fp)
6566 
6567 
6568 class DiskBackend(Backend):
6569hunk ./src/allmydata/storage/backends/disk/disk_backend.py 174
6570                 if not NUM_RE.match(shnumstr):
6571                     continue
6572                 sharehome = self._sharehomedir.child(shnumstr)
6573-                yield self.get_share(sharehome)
6574+                yield get_share(self.get_storage_index(), int(shnumstr), sharehome)
6575         except UnlistableError:
6576             # There is no shares directory at all.
6577             pass
6578hunk ./src/allmydata/storage/backends/disk/disk_backend.py 185
6579         return self._incominghomedir.child(str(shnum)).exists()
6580 
6581     def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
6582-        sharehome = self._sharehomedir.child(str(shnum))
6583+        finalhome = self._sharehomedir.child(str(shnum))
6584         incominghome = self._incominghomedir.child(str(shnum))
6585hunk ./src/allmydata/storage/backends/disk/disk_backend.py 187
6586-        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
6587-                                   max_size=max_space_per_bucket, create=True)
6588+        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, incominghome, finalhome,
6589+                                   max_size=max_space_per_bucket)
6590         bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
6591         if self._discard_storage:
6592             bw.throw_out_all_data = True
6593hunk ./src/allmydata/storage/backends/disk/disk_backend.py 198
6594         fileutil.fp_make_dirs(self._sharehomedir)
6595         sharehome = self._sharehomedir.child(str(shnum))
6596         serverid = storageserver.get_serverid()
6597-        return create_mutable_disk_share(sharehome, serverid, write_enabler, storageserver)
6598+        return create_mutable_disk_share(self.get_storage_index(), shnum, sharehome, serverid, write_enabler, storageserver)
6599 
6600     def _clean_up_after_unlink(self):
6601         fileutil.fp_rmdir_if_empty(self._sharehomedir)
6602hunk ./src/allmydata/storage/backends/disk/immutable.py 48
6603     LEASE_SIZE = struct.calcsize(">L32s32sL")
6604 
6605 
6606-    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
6607-        """ If max_size is not None then I won't allow more than
6608-        max_size to be written to me. If create=True then max_size
6609-        must not be None. """
6610-        precondition((max_size is not None) or (not create), max_size, create)
6611+    def __init__(self, storageindex, shnum, home, finalhome=None, max_size=None):
6612+        """
6613+        If max_size is not None then I won't allow more than max_size to be written to me.
6614+        If finalhome is not None (meaning that we are creating the share) then max_size
6615+        must not be None.
6616+        """
6617+        precondition((max_size is not None) or (finalhome is None), max_size, finalhome)
6618         self._storageindex = storageindex
6619         self._max_size = max_size
6620hunk ./src/allmydata/storage/backends/disk/immutable.py 57
6621-        self._incominghome = incominghome
6622-        self._home = finalhome
6623+
6624+        # If we are creating the share, _finalhome refers to the final path and
6625+        # _home to the incoming path. Otherwise, _finalhome is None.
6626+        self._finalhome = finalhome
6627+        self._home = home
6628         self._shnum = shnum
6629hunk ./src/allmydata/storage/backends/disk/immutable.py 63
6630-        if create:
6631-            # touch the file, so later callers will see that we're working on
6632+
6633+        if self._finalhome is not None:
6634+            # Touch the file, so later callers will see that we're working on
6635             # it. Also construct the metadata.
6636hunk ./src/allmydata/storage/backends/disk/immutable.py 67
6637-            assert not finalhome.exists()
6638-            fp_make_dirs(self._incominghome.parent())
6639+            assert not self._finalhome.exists()
6640+            fp_make_dirs(self._home.parent())
6641             # The second field -- the four-byte share data length -- is no
6642             # longer used as of Tahoe v1.3.0, but we continue to write it in
6643             # there in case someone downgrades a storage server from >=
6644hunk ./src/allmydata/storage/backends/disk/immutable.py 78
6645             # the largest length that can fit into the field. That way, even
6646             # if this does happen, the old < v1.3.0 server will still allow
6647             # clients to read the first part of the share.
6648-            self._incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6649+            self._home.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6650             self._lease_offset = max_size + 0x0c
6651             self._num_leases = 0
6652         else:
6653hunk ./src/allmydata/storage/backends/disk/immutable.py 101
6654                 % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
6655 
6656     def close(self):
6657-        fileutil.fp_make_dirs(self._home.parent())
6658-        self._incominghome.moveTo(self._home)
6659-        try:
6660-            # self._incominghome is like storage/shares/incoming/ab/abcde/4 .
6661-            # We try to delete the parent (.../ab/abcde) to avoid leaving
6662-            # these directories lying around forever, but the delete might
6663-            # fail if we're working on another share for the same storage
6664-            # index (like ab/abcde/5). The alternative approach would be to
6665-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
6666-            # ShareWriter), each of which is responsible for a single
6667-            # directory on disk, and have them use reference counting of
6668-            # their children to know when they should do the rmdir. This
6669-            # approach is simpler, but relies on os.rmdir refusing to delete
6670-            # a non-empty directory. Do *not* use fileutil.fp_remove() here!
6671-            fileutil.fp_rmdir_if_empty(self._incominghome.parent())
6672-            # we also delete the grandparent (prefix) directory, .../ab ,
6673-            # again to avoid leaving directories lying around. This might
6674-            # fail if there is another bucket open that shares a prefix (like
6675-            # ab/abfff).
6676-            fileutil.fp_rmdir_if_empty(self._incominghome.parent().parent())
6677-            # we leave the great-grandparent (incoming/) directory in place.
6678-        except EnvironmentError:
6679-            # ignore the "can't rmdir because the directory is not empty"
6680-            # exceptions, those are normal consequences of the
6681-            # above-mentioned conditions.
6682-            pass
6683-        pass
6684+        fileutil.fp_make_dirs(self._finalhome.parent())
6685+        self._home.moveTo(self._finalhome)
6686+
6687+        # self._home is like storage/shares/incoming/ab/abcde/4 .
6688+        # We try to delete the parent (.../ab/abcde) to avoid leaving
6689+        # these directories lying around forever, but the delete might
6690+        # fail if we're working on another share for the same storage
6691+        # index (like ab/abcde/5). The alternative approach would be to
6692+        # use a hierarchy of objects (PrefixHolder, BucketHolder,
6693+        # ShareWriter), each of which is responsible for a single
6694+        # directory on disk, and have them use reference counting of
6695+        # their children to know when they should do the rmdir. This
6696+        # approach is simpler, but relies on os.rmdir (used by
6697+        # fp_rmdir_if_empty) refusing to delete a non-empty directory.
6698+        # Do *not* use fileutil.fp_remove() here!
6699+        parent = self._home.parent()
6700+        fileutil.fp_rmdir_if_empty(parent)
6701+
6702+        # we also delete the grandparent (prefix) directory, .../ab ,
6703+        # again to avoid leaving directories lying around. This might
6704+        # fail if there is another bucket open that shares a prefix (like
6705+        # ab/abfff).
6706+        fileutil.fp_rmdir_if_empty(parent.parent())
6707+
6708+        # we leave the great-grandparent (incoming/) directory in place.
6709+
6710+        # allow lease changes after closing.
6711+        self._home = self._finalhome
6712+        self._finalhome = None
6713 
6714     def get_used_space(self):
6715hunk ./src/allmydata/storage/backends/disk/immutable.py 132
6716-        return (fileutil.get_used_space(self._home) +
6717-                fileutil.get_used_space(self._incominghome))
6718+        return (fileutil.get_used_space(self._finalhome) +
6719+                fileutil.get_used_space(self._home))
6720 
6721     def get_storage_index(self):
6722         return self._storageindex
6723hunk ./src/allmydata/storage/backends/disk/immutable.py 175
6724         precondition(offset >= 0, offset)
6725         if self._max_size is not None and offset+length > self._max_size:
6726             raise DataTooLargeError(self._max_size, offset, length)
6727-        f = self._incominghome.open(mode='rb+')
6728+        f = self._home.open(mode='rb+')
6729         try:
6730             real_offset = self._data_offset+offset
6731             f.seek(real_offset)
6732hunk ./src/allmydata/storage/backends/disk/immutable.py 205
6733 
6734     # These lease operations are intended for use by disk_backend.py.
6735     # Other clients should not depend on the fact that the disk backend
6736-    # stores leases in share files.
6737+    # stores leases in share files. XXX bucket.py also relies on this.
6738 
6739     def get_leases(self):
6740         """Yields a LeaseInfo instance for all leases."""
6741hunk ./src/allmydata/storage/backends/disk/immutable.py 221
6742             f.close()
6743 
6744     def add_lease(self, lease_info):
6745-        f = self._incominghome.open(mode='rb')
6746+        f = self._home.open(mode='rb+')
6747         try:
6748             num_leases = self._read_num_leases(f)
6749hunk ./src/allmydata/storage/backends/disk/immutable.py 224
6750-        finally:
6751-            f.close()
6752-        f = self._home.open(mode='wb+')
6753-        try:
6754             self._write_lease_record(f, num_leases, lease_info)
6755             self._write_num_leases(f, num_leases+1)
6756         finally:
6757hunk ./src/allmydata/storage/backends/disk/mutable.py 440
6758         pass
6759 
6760 
6761-def create_mutable_disk_share(fp, serverid, write_enabler, parent):
6762-    ms = MutableDiskShare(fp, parent)
6763+def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent):
6764+    ms = MutableDiskShare(storageindex, shnum, fp, parent)
6765     ms.create(serverid, write_enabler)
6766     del ms
6767hunk ./src/allmydata/storage/backends/disk/mutable.py 444
6768-    return MutableDiskShare(fp, parent)
6769+    return MutableDiskShare(storageindex, shnum, fp, parent)
6770hunk ./src/allmydata/storage/bucket.py 44
6771         start = time.time()
6772 
6773         self._share.close()
6774-        filelen = self._share.stat()
6775+        # XXX should this be self._share.get_used_space() ?
6776+        consumed_size = self._share.get_size()
6777         self._share = None
6778 
6779         self.closed = True
6780hunk ./src/allmydata/storage/bucket.py 51
6781         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
6782 
6783-        self.ss.bucket_writer_closed(self, filelen)
6784+        self.ss.bucket_writer_closed(self, consumed_size)
6785         self.ss.add_latency("close", time.time() - start)
6786         self.ss.count("close")
6787 
6788hunk ./src/allmydata/storage/server.py 182
6789                                 renew_secret, cancel_secret,
6790                                 sharenums, allocated_size,
6791                                 canary, owner_num=0):
6792-        # cancel_secret is no longer used.
6793         # owner_num is not for clients to set, but rather it should be
6794         # curried into a StorageServer instance dedicated to a particular
6795         # owner.
6796hunk ./src/allmydata/storage/server.py 195
6797         # Note that the lease should not be added until the BucketWriter
6798         # has been closed.
6799         expire_time = time.time() + 31*24*60*60
6800-        lease_info = LeaseInfo(owner_num, renew_secret,
6801+        lease_info = LeaseInfo(owner_num, renew_secret, cancel_secret,
6802                                expire_time, self._serverid)
6803 
6804         max_space_per_bucket = allocated_size
6805hunk ./src/allmydata/test/no_network.py 349
6806         return self.g.servers_by_number[i]
6807 
6808     def get_serverdir(self, i):
6809-        return self.g.servers_by_number[i].backend.storedir
6810+        return self.g.servers_by_number[i].backend._storedir
6811 
6812     def remove_server(self, i):
6813         self.g.remove_server(self.g.servers_by_number[i].get_serverid())
6814hunk ./src/allmydata/test/no_network.py 357
6815     def iterate_servers(self):
6816         for i in sorted(self.g.servers_by_number.keys()):
6817             ss = self.g.servers_by_number[i]
6818-            yield (i, ss, ss.backend.storedir)
6819+            yield (i, ss, ss.backend._storedir)
6820 
6821     def find_uri_shares(self, uri):
6822         si = tahoe_uri.from_string(uri).get_storage_index()
6823hunk ./src/allmydata/test/no_network.py 384
6824         return shares
6825 
6826     def copy_share(self, from_share, uri, to_server):
6827-        si = uri.from_string(self.uri).get_storage_index()
6828+        si = tahoe_uri.from_string(uri).get_storage_index()
6829         (i_shnum, i_serverid, i_sharefp) = from_share
6830         shares_dir = to_server.backend.get_shareset(si)._sharehomedir
6831         i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
6832hunk ./src/allmydata/test/test_download.py 127
6833 
6834         return d
6835 
6836-    def _write_shares(self, uri, shares):
6837-        si = uri.from_string(uri).get_storage_index()
6838+    def _write_shares(self, fileuri, shares):
6839+        si = uri.from_string(fileuri).get_storage_index()
6840         for i in shares:
6841             shares_for_server = shares[i]
6842             for shnum in shares_for_server:
6843hunk ./src/allmydata/test/test_hung_server.py 36
6844 
6845     def _hang(self, servers, **kwargs):
6846         for ss in servers:
6847-            self.g.hang_server(ss.get_serverid(), **kwargs)
6848+            self.g.hang_server(ss.original.get_serverid(), **kwargs)
6849 
6850     def _unhang(self, servers, **kwargs):
6851         for ss in servers:
6852hunk ./src/allmydata/test/test_hung_server.py 40
6853-            self.g.unhang_server(ss.get_serverid(), **kwargs)
6854+            self.g.unhang_server(ss.original.get_serverid(), **kwargs)
6855 
6856     def _hang_shares(self, shnums, **kwargs):
6857         # hang all servers who are holding the given shares
6858hunk ./src/allmydata/test/test_hung_server.py 52
6859                     hung_serverids.add(i_serverid)
6860 
6861     def _delete_all_shares_from(self, servers):
6862-        serverids = [ss.get_serverid() for ss in servers]
6863+        serverids = [ss.original.get_serverid() for ss in servers]
6864         for (i_shnum, i_serverid, i_sharefp) in self.shares:
6865             if i_serverid in serverids:
6866                 i_sharefp.remove()
6867hunk ./src/allmydata/test/test_hung_server.py 58
6868 
6869     def _corrupt_all_shares_in(self, servers, corruptor_func):
6870-        serverids = [ss.get_serverid() for ss in servers]
6871+        serverids = [ss.original.get_serverid() for ss in servers]
6872         for (i_shnum, i_serverid, i_sharefp) in self.shares:
6873             if i_serverid in serverids:
6874                 self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor_func)
6875hunk ./src/allmydata/test/test_hung_server.py 64
6876 
6877     def _copy_all_shares_from(self, from_servers, to_server):
6878-        serverids = [ss.get_serverid() for ss in from_servers]
6879+        serverids = [ss.original.get_serverid() for ss in from_servers]
6880         for (i_shnum, i_serverid, i_sharefp) in self.shares:
6881             if i_serverid in serverids:
6882                 self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
6883hunk ./src/allmydata/test/test_mutable.py 2991
6884             fso = debug.FindSharesOptions()
6885             storage_index = base32.b2a(n.get_storage_index())
6886             fso.si_s = storage_index
6887-            fso.nodedirs = [unicode(os.path.dirname(os.path.abspath(storedir)))
6888+            fso.nodedirs = [unicode(storedir.parent().path)
6889                             for (i,ss,storedir)
6890                             in self.iterate_servers()]
6891             fso.stdout = StringIO()
6892hunk ./src/allmydata/test/test_upload.py 818
6893         if share_number is not None:
6894             self._copy_share_to_server(share_number, server_number)
6895 
6896-
6897     def _copy_share_to_server(self, share_number, server_number):
6898         ss = self.g.servers_by_number[server_number]
6899hunk ./src/allmydata/test/test_upload.py 820
6900-        self.copy_share(self.shares[share_number], ss)
6901+        self.copy_share(self.shares[share_number], self.uri, ss)
6902 
6903     def _setup_grid(self):
6904         """
6905}
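
The client.py hunk above inverts the old construction order: the storage
configuration is now bundled into a backend object, which is then handed to
StorageServer. A sketch of the new wiring, using a placeholder node id and
omitting the optional stats_provider and expiration_policy arguments that
client.py passes:

    from twisted.python.filepath import FilePath
    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    storedir = FilePath("/tmp/example-node/storage")    # placeholder basedir
    backend = DiskBackend(storedir, readonly=False,
                          reserved_space=10**9, discard_storage=False)
    ss = StorageServer("\x00"*20, backend, storedir)    # 20-byte nodeid placeholder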
6906[docs/backends: document the configuration options for the pluggable backends scheme. refs #999
6907david-sarah@jacaranda.org**20110920171737
6908 Ignore-this: 5947e864682a43cb04e557334cda7c19
6909] {
6910adddir ./docs/backends
6911addfile ./docs/backends/S3.rst
6912hunk ./docs/backends/S3.rst 1
6913+====================================================
6914+Storing Shares in Amazon Simple Storage Service (S3)
6915+====================================================
6916+
6917+S3 is a commercial storage service provided by Amazon, described at
6918+`<https://aws.amazon.com/s3/>`_.
6919+
6920+The Tahoe-LAFS storage server can be configured to store its shares in
6921+an S3 bucket, rather than on the local filesystem. To enable this, add the
6922+following keys to the server's ``tahoe.cfg`` file:
6923+
6924+``[storage]``
6925+
6926+``backend = s3``
6927+
6928+    This turns off the local filesystem backend and enables use of S3.
6929+
6930+``s3.access_key_id = (string, required)``
6931+``s3.secret_access_key = (string, required)``
6932+
6933+    These two give the storage server permission to access your Amazon
6934+    Web Services account, allowing it to upload and download shares
6935+    from S3.
6936+
6937+``s3.bucket = (string, required)``
6938+
6939+    This controls which bucket will be used to hold shares. The Tahoe-LAFS
6940+    storage server will only modify and access objects in the configured S3
6941+    bucket.
6942+
6943+``s3.url = (URL string, optional)``
6944+
6945+    This URL tells the storage server how to access the S3 service. It
6946+    defaults to ``http://s3.amazonaws.com``, but by setting it to something
6947+    else, you may be able to use some other S3-like service if it is
6948+    sufficiently compatible.
6949+
6950+``s3.max_space = (str, optional)``
6951+
6952+    This tells the server to limit how much space can be used in the S3
6953+    bucket. Before each share is uploaded, the server will ask S3 for the
6954+    current bucket usage, and will only accept the share if it does not cause
6955+    the usage to grow above this limit.
6956+
6957+    The string contains a number, with an optional case-insensitive scale
6958+    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
6959+    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
6960+    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
6961+    thing.
6962+
6963+    If ``s3.max_space`` is omitted, the default behavior is to allow
6964+    unlimited usage.
6965+
6966+
6967+Once configured, the WUI "storage server" page will provide information about
6968+how much space is being used and how many shares are being stored.
6969+
6970+
6971+Issues
6972+------
6973+
6974+Objects in an S3 bucket cannot be read for free. As a result, when Tahoe-LAFS
6975+is configured to store shares in S3 rather than on local disk, some common
6976+operations may behave differently:
6977+
6978+* Lease crawling/expiration is not yet implemented. As a result, shares will
6979+  be retained forever, and the Storage Server status web page will not show
6980+  information about the number of mutable/immutable shares present.
6981+
6982+* Enabling ``s3.max_space`` causes an extra S3 usage query to be sent for
6983+  each share upload, causing the upload process to run slightly slower and
6984+  incur more S3 request charges.
6985addfile ./docs/backends/disk.rst
6986hunk ./docs/backends/disk.rst 1
6987+====================================
6988+Storing Shares on a Local Filesystem
6989+====================================
6990+
6991+The "disk" backend stores shares on the local filesystem. Versions of
6992+Tahoe-LAFS <= 1.9.0 always stored shares in this way.
6993+
6994+``[storage]``
6995+
6996+``backend = disk``
6997+
6998+    This enables use of the disk backend, and is the default.
6999+
7000+``reserved_space = (str, optional)``
7001+
7002+    If provided, this value defines how much disk space is reserved: the
7003+    storage server will not accept any share that causes the amount of free
7004+    disk space to drop below this value. (The free space is measured by a
7005+    call to statvfs(2) on Unix, or GetDiskFreeSpaceEx on Windows, and is the
7006+    space available to the user account under which the storage server runs.)
7007+
7008+    This string contains a number, with an optional case-insensitive scale
7009+    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
7010+    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
7011+    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
7012+    thing.
7013+
7014+    "``tahoe create-node``" generates a tahoe.cfg with
7015+    "``reserved_space=1G``", but you may wish to raise, lower, or remove the
7016+    reservation to suit your needs.
7017+
7018+``expire.enabled =``
7019+
7020+``expire.mode =``
7021+
7022+``expire.override_lease_duration =``
7023+
7024+``expire.cutoff_date =``
7025+
7026+``expire.immutable =``
7027+
7028+``expire.mutable =``
7029+
7030+    These settings control garbage collection, causing the server to
7031+    delete shares that no longer have an up-to-date lease on them. Please
7032+    see `<garbage-collection.rst>`_ for full details.
7033hunk ./docs/configuration.rst 412
7034     <http://tahoe-lafs.org/trac/tahoe-lafs/ticket/390>`_ for the current
7035     status of this bug. The default value is ``False``.
7036 
7037-``reserved_space = (str, optional)``
7038+``backend = (string, optional)``
7039 
7040hunk ./docs/configuration.rst 414
7041-    If provided, this value defines how much disk space is reserved: the
7042-    storage server will not accept any share that causes the amount of free
7043-    disk space to drop below this value. (The free space is measured by a
7044-    call to statvfs(2) on Unix, or GetDiskFreeSpaceEx on Windows, and is the
7045-    space available to the user account under which the storage server runs.)
7046+    Storage servers can store their data in one of several "backends".
7047+    Clients need not be aware of which backend is used by a server. The
7048+    default value is ``disk``.
7049 
7050hunk ./docs/configuration.rst 418
7051-    This string contains a number, with an optional case-insensitive scale
7052-    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
7053-    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
7054-    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
7055-    thing.
7056+``backend = disk``
7057 
7058hunk ./docs/configuration.rst 420
7059-    "``tahoe create-node``" generates a tahoe.cfg with
7060-    "``reserved_space=1G``", but you may wish to raise, lower, or remove the
7061-    reservation to suit your needs.
7062+    The default is to store shares on the local filesystem (in
7063+    BASEDIR/storage/shares/). For configuration details (including how to
7064+    reserve a minimum amount of free space), see `<backends/disk.rst>`_.
7065 
7066hunk ./docs/configuration.rst 424
7067-``expire.enabled =``
7068+``backend = s3``
7069 
7070hunk ./docs/configuration.rst 426
7071-``expire.mode =``
7072-
7073-``expire.override_lease_duration =``
7074-
7075-``expire.cutoff_date =``
7076-
7077-``expire.immutable =``
7078-
7079-``expire.mutable =``
7080-
7081-    These settings control garbage collection, in which the server will
7082-    delete shares that no longer have an up-to-date lease on them. Please see
7083-    `<garbage-collection.rst>`_ for full details.
7084+    The storage server can store all shares to an Amazon Simple Storage
7085+    Service (S3) bucket. For configuration details, see `<backends/S3.rst>`_.
7086 
7087 
7088 Running A Helper
7089}
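
Putting the options documented above together, a hypothetical tahoe.cfg
fragment for an S3-backed storage server might look like this (the
credentials and bucket name are placeholders):

    [storage]
    enabled = true
    backend = s3
    s3.access_key_id = AKIAEXAMPLEKEYID
    s3.secret_access_key = exampleSecretAccessKey
    s3.bucket = example-tahoe-shares
    s3.max_space = 100GiB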
7090[Fix some incorrect attribute accesses. refs #999
7091david-sarah@jacaranda.org**20110921031207
7092 Ignore-this: f1ea4c3ea191f6d4b719afaebd2b2bcd
7093] {
7094hunk ./src/allmydata/client.py 258
7095 
7096         backend = DiskBackend(storedir, readonly=readonly, reserved_space=reserved,
7097                               discard_storage=discard)
7098-        ss = StorageServer(nodeid, backend, storedir,
7099+        ss = StorageServer(self.nodeid, backend, storedir,
7100                            stats_provider=self.stats_provider,
7101                            expiration_policy=expiration_policy)
7102         self.add_service(ss)
7103hunk ./src/allmydata/interfaces.py 449
7104         Returns the storage index.
7105         """
7106 
7107+    def get_storage_index_string():
7108+        """
7109+        Returns the base32-encoded storage index.
7110+        """
7111+
7112     def get_shnum():
7113         """
7114         Returns the share number.
7115hunk ./src/allmydata/storage/backends/disk/immutable.py 138
7116     def get_storage_index(self):
7117         return self._storageindex
7118 
7119+    def get_storage_index_string(self):
7120+        return si_b2a(self._storageindex)
7121+
7122     def get_shnum(self):
7123         return self._shnum
7124 
7125hunk ./src/allmydata/storage/backends/disk/mutable.py 119
7126     def get_storage_index(self):
7127         return self._storageindex
7128 
7129+    def get_storage_index_string(self):
7130+        return si_b2a(self._storageindex)
7131+
7132     def get_shnum(self):
7133         return self._shnum
7134 
7135hunk ./src/allmydata/storage/bucket.py 86
7136     def __init__(self, ss, share):
7137         self.ss = ss
7138         self._share = share
7139-        self.storageindex = share.storageindex
7140-        self.shnum = share.shnum
7141+        self.storageindex = share.get_storage_index()
7142+        self.shnum = share.get_shnum()
7143 
7144     def __repr__(self):
7145         return "<%s %s %s>" % (self.__class__.__name__,
7146hunk ./src/allmydata/storage/expirer.py 6
7147 from twisted.python import log as twlog
7148 
7149 from allmydata.storage.crawler import ShareCrawler
7150-from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
7151+from allmydata.storage.common import UnknownMutableContainerVersionError, \
7152      UnknownImmutableContainerVersionError
7153 
7154 
7155hunk ./src/allmydata/storage/expirer.py 124
7156                     struct.error):
7157                 twlog.msg("lease-checker error processing %r" % (share,))
7158                 twlog.err()
7159-                which = (si_b2a(share.storageindex), share.get_shnum())
7160+                which = (share.get_storage_index_string(), share.get_shnum())
7161                 self.state["cycle-to-date"]["corrupt-shares"].append(which)
7162                 wks = (1, 1, 1, "unknown")
7163             would_keep_shares.append(wks)
7164hunk ./src/allmydata/storage/server.py 221
7165         alreadygot = set()
7166         for share in shareset.get_shares():
7167             share.add_or_renew_lease(lease_info)
7168-            alreadygot.add(share.shnum)
7169+            alreadygot.add(share.get_shnum())
7170 
7171         for shnum in sharenums - alreadygot:
7172             if shareset.has_incoming(shnum):
7173hunk ./src/allmydata/storage/server.py 324
7174 
7175         try:
7176             shareset = self.backend.get_shareset(storageindex)
7177-            return shareset.readv(self, shares, readv)
7178+            return shareset.readv(shares, readv)
7179         finally:
7180             self.add_latency("readv", time.time() - start)
7181 
7182hunk ./src/allmydata/storage/shares.py 1
7183-#! /usr/bin/python
7184-
7185-from allmydata.storage.mutable import MutableShareFile
7186-from allmydata.storage.immutable import ShareFile
7187-
7188-def get_share_file(filename):
7189-    f = open(filename, "rb")
7190-    prefix = f.read(32)
7191-    f.close()
7192-    if prefix == MutableShareFile.MAGIC:
7193-        return MutableShareFile(filename)
7194-    # otherwise assume it's immutable
7195-    return ShareFile(filename)
7196-
7197rmfile ./src/allmydata/storage/shares.py
7198hunk ./src/allmydata/test/no_network.py 387
7199         si = tahoe_uri.from_string(uri).get_storage_index()
7200         (i_shnum, i_serverid, i_sharefp) = from_share
7201         shares_dir = to_server.backend.get_shareset(si)._sharehomedir
7202+        fileutil.fp_make_dirs(shares_dir)
7203         i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
7204 
7205     def restore_all_shares(self, shares):
7206hunk ./src/allmydata/test/no_network.py 391
7207-        for share, data in shares.items():
7208-            share.home.setContent(data)
7209+        for sharepath, data in shares.items():
7210+            FilePath(sharepath).setContent(data)
7211 
7212     def delete_share(self, (shnum, serverid, sharefp)):
7213         sharefp.remove()
7214hunk ./src/allmydata/test/test_upload.py 744
7215         servertoshnums = {} # k: server, v: set(shnum)
7216 
7217         for i, c in self.g.servers_by_number.iteritems():
7218-            for (dirp, dirns, fns) in os.walk(c.sharedir):
7219+            for (dirp, dirns, fns) in os.walk(c.backend._sharedir.path):
7220                 for fn in fns:
7221                     try:
7222                         sharenum = int(fn)
7223}
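
The accessors added above (get_shnum, get_storage_index_string) let callers
stop reaching into backend-specific attributes such as share.shnum. For
illustration, a hypothetical helper written purely against those accessors
(get_used_space is implemented by the disk backend shares shown earlier):

    def describe_shares(shareset):
        # Uses only accessor methods, not the attributes removed above.
        for share in shareset.get_shares():
            print "SI %s shnum %d: %d bytes used" % (
                share.get_storage_index_string(),
                share.get_shnum(),
                share.get_used_space())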
7224[docs/backends/S3.rst: remove Issues section. refs #999
7225david-sarah@jacaranda.org**20110921031625
7226 Ignore-this: c83d8f52b790bc32488869e6ee1df8c2
7227] hunk ./docs/backends/S3.rst 57
7228 
7229 Once configured, the WUI "storage server" page will provide information about
7230 how much space is being used and how many shares are being stored.
7231-
7232-
7233-Issues
7234-------
7235-
7236-Objects in an S3 bucket cannot be read for free. As a result, when Tahoe-LAFS
7237-is configured to store shares in S3 rather than on local disk, some common
7238-operations may behave differently:
7239-
7240-* Lease crawling/expiration is not yet implemented. As a result, shares will
7241-  be retained forever, and the Storage Server status web page will not show
7242-  information about the number of mutable/immutable shares present.
7243-
7244-* Enabling ``s3.max_space`` causes an extra S3 usage query to be sent for
7245-  each share upload, causing the upload process to run slightly slower and
7246-  incur more S3 request charges.
7247[docs/backends/S3.rst, disk.rst: describe type of space settings as 'quantity of space', not 'str'. refs #999
7248david-sarah@jacaranda.org**20110921031705
7249 Ignore-this: a74ed8e01b0a1ab5f07a1487d7bf138
7250] {
7251hunk ./docs/backends/S3.rst 38
7252     else, you may be able to use some other S3-like service if it is
7253     sufficiently compatible.
7254 
7255-``s3.max_space = (str, optional)``
7256+``s3.max_space = (quantity of space, optional)``
7257 
7258     This tells the server to limit how much space can be used in the S3
7259     bucket. Before each share is uploaded, the server will ask S3 for the
7260hunk ./docs/backends/disk.rst 14
7261 
7262     This enables use of the disk backend, and is the default.
7263 
7264-``reserved_space = (str, optional)``
7265+``reserved_space = (quantity of space, optional)``
7266 
7267     If provided, this value defines how much disk space is reserved: the
7268     storage server will not accept any share that causes the amount of free
7269}
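
The 'quantity of space' type documented above is pinned down by the
reserved_space tests in test_client.py later in this bundle: base-10
suffixes, case-insensitive, with an optional trailing 'B' (10K -> 10*1000,
5mB -> 5*1000*1000, 78Gb -> 78*10**9). A hedged sketch of a parser with those
semantics -- parse_space_quantity is a hypothetical helper, not necessarily
the code Tahoe actually uses:

    import re

    _MULTIPLIERS = {"": 1, "K": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}

    def parse_space_quantity(s):
        # accepts e.g. "1000", "10K", "5mB", "78Gb"; raises on anything else
        m = re.match(r"^\s*(\d+)\s*([KMGT]?)B?\s*$", s, re.IGNORECASE)
        if not m:
            raise ValueError("invalid quantity of space: %r" % (s,))
        return int(m.group(1)) * _MULTIPLIERS[m.group(2).upper()]

Note that test_reserved_bad expects an unparseable value to fall back to 0
reserved bytes, so a caller would catch the ValueError rather than let it
propagate.
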
7270[More fixes to tests needed for pluggable backends. refs #999
7271david-sarah@jacaranda.org**20110921184649
7272 Ignore-this: 9be0d3a98e350fd4e17a07d2c00bb4ca
7273] {
7274hunk ./src/allmydata/scripts/debug.py 8
7275 from twisted.python import usage, failure
7276 from twisted.internet import defer
7277 from twisted.scripts import trial as twisted_trial
7278+from twisted.python.filepath import FilePath
7279 
7280 
7281 class DumpOptions(usage.Options):
7282hunk ./src/allmydata/scripts/debug.py 38
7283         self['filename'] = argv_to_abspath(filename)
7284 
7285 def dump_share(options):
7286-    from allmydata.storage.mutable import MutableShareFile
7287+    from allmydata.storage.backends.disk.disk_backend import get_share
7288     from allmydata.util.encodingutil import quote_output
7289 
7290     out = options.stdout
7291hunk ./src/allmydata/scripts/debug.py 46
7292     # check the version, to see if we have a mutable or immutable share
7293     print >>out, "share filename: %s" % quote_output(options['filename'])
7294 
7295-    f = open(options['filename'], "rb")
7296-    prefix = f.read(32)
7297-    f.close()
7298-    if prefix == MutableShareFile.MAGIC:
7299-        return dump_mutable_share(options)
7300-    # otherwise assume it's immutable
7301-    return dump_immutable_share(options)
7302-
7303-def dump_immutable_share(options):
7304-    from allmydata.storage.immutable import ShareFile
7305+    share = get_share("", 0, fp)
7306+    if share.sharetype == "mutable":
7307+        return dump_mutable_share(options, share)
7308+    else:
7309+        assert share.sharetype == "immutable", share.sharetype
7310+        return dump_immutable_share(options, share)
7311 
7312hunk ./src/allmydata/scripts/debug.py 53
7313+def dump_immutable_share(options, share):
7314     out = options.stdout
7315hunk ./src/allmydata/scripts/debug.py 55
7316-    f = ShareFile(options['filename'])
7317     if not options["leases-only"]:
7318hunk ./src/allmydata/scripts/debug.py 56
7319-        dump_immutable_chk_share(f, out, options)
7320-    dump_immutable_lease_info(f, out)
7321+        dump_immutable_chk_share(share, out, options)
7322+    dump_immutable_lease_info(share, out)
7323     print >>out
7324     return 0
7325 
7326hunk ./src/allmydata/scripts/debug.py 166
7327     return when
7328 
7329 
7330-def dump_mutable_share(options):
7331-    from allmydata.storage.mutable import MutableShareFile
7332+def dump_mutable_share(options, m):
7333     from allmydata.util import base32, idlib
7334     out = options.stdout
7335hunk ./src/allmydata/scripts/debug.py 169
7336-    m = MutableShareFile(options['filename'])
7337     f = open(options['filename'], "rb")
7338     WE, nodeid = m._read_write_enabler_and_nodeid(f)
7339     num_extra_leases = m._read_num_extra_leases(f)
7340hunk ./src/allmydata/scripts/debug.py 641
7341     /home/warner/testnet/node-1/storage/shares/44k/44kai1tui348689nrw8fjegc8c/9
7342     /home/warner/testnet/node-2/storage/shares/44k/44kai1tui348689nrw8fjegc8c/2
7343     """
7344-    from allmydata.storage.server import si_a2b, storage_index_to_dir
7345-    from allmydata.util.encodingutil import listdir_unicode
7346+    from allmydata.storage.server import si_a2b
7347+    from allmydata.storage.backends.disk_backend import si_si2dir
7348+    from allmydata.util.encodingutil import quote_filepath
7349 
7350     out = options.stdout
7351hunk ./src/allmydata/scripts/debug.py 646
7352-    sharedir = storage_index_to_dir(si_a2b(options.si_s))
7353-    for d in options.nodedirs:
7354-        d = os.path.join(d, "storage/shares", sharedir)
7355-        if os.path.exists(d):
7356-            for shnum in listdir_unicode(d):
7357-                print >>out, os.path.join(d, shnum)
7358+    si = si_a2b(options.si_s)
7359+    for nodedir in options.nodedirs:
7360+        sharedir = si_si2dir(nodedir.child("storage").child("shares"), si)
7361+        if sharedir.exists():
7362+            for sharefp in sharedir.children():
7363+                print >>out, quote_filepath(sharefp, quotemarks=False)
7364 
7365     return 0
7366 
7367hunk ./src/allmydata/scripts/debug.py 878
7368         print >>err, "Error processing %s" % quote_output(si_dir)
7369         failure.Failure().printTraceback(err)
7370 
7371+
7372 class CorruptShareOptions(usage.Options):
7373     def getSynopsis(self):
7374         return "Usage: tahoe debug corrupt-share SHARE_FILENAME"
7375hunk ./src/allmydata/scripts/debug.py 902
7376 Obviously, this command should not be used in normal operation.
7377 """
7378         return t
7379+
7380     def parseArgs(self, filename):
7381         self['filename'] = filename
7382 
7383hunk ./src/allmydata/scripts/debug.py 907
7384 def corrupt_share(options):
7385+    do_corrupt_share(options.stdout, FilePath(options['filename']), options['offset'])
7386+
7387+def do_corrupt_share(out, fp, offset="block-random"):
7388     import random
7389hunk ./src/allmydata/scripts/debug.py 911
7390-    from allmydata.storage.mutable import MutableShareFile
7391-    from allmydata.storage.immutable import ShareFile
7392+    from allmydata.storage.backends.disk.mutable import MutableDiskShare
7393+    from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
7394     from allmydata.mutable.layout import unpack_header
7395     from allmydata.immutable.layout import ReadBucketProxy
7396hunk ./src/allmydata/scripts/debug.py 915
7397-    out = options.stdout
7398-    fn = options['filename']
7399-    assert options["offset"] == "block-random", "other offsets not implemented"
7400+
7401+    assert offset == "block-random", "other offsets not implemented"
7402+
7403     # first, what kind of share is it?
7404 
7405     def flip_bit(start, end):
7406hunk ./src/allmydata/scripts/debug.py 924
7407         offset = random.randrange(start, end)
7408         bit = random.randrange(0, 8)
7409         print >>out, "[%d..%d):  %d.b%d" % (start, end, offset, bit)
7410-        f = open(fn, "rb+")
7411-        f.seek(offset)
7412-        d = f.read(1)
7413-        d = chr(ord(d) ^ 0x01)
7414-        f.seek(offset)
7415-        f.write(d)
7416-        f.close()
7417+        f = fp.open("rb+")
7418+        try:
7419+            f.seek(offset)
7420+            d = f.read(1)
7421+            d = chr(ord(d) ^ 0x01)
7422+            f.seek(offset)
7423+            f.write(d)
7424+        finally:
7425+            f.close()
7426 
7427hunk ./src/allmydata/scripts/debug.py 934
7428-    f = open(fn, "rb")
7429-    prefix = f.read(32)
7430-    f.close()
7431-    if prefix == MutableShareFile.MAGIC:
7432-        # mutable
7433-        m = MutableShareFile(fn)
7434-        f = open(fn, "rb")
7435-        f.seek(m.DATA_OFFSET)
7436-        data = f.read(2000)
7437-        # make sure this slot contains an SMDF share
7438-        assert data[0] == "\x00", "non-SDMF mutable shares not supported"
7439+    f = fp.open("rb")
7440+    try:
7441+        prefix = f.read(32)
7442+    finally:
7443         f.close()
7444hunk ./src/allmydata/scripts/debug.py 939
7445+    if prefix == MutableDiskShare.MAGIC:
7446+        # mutable
7447+        m = MutableDiskShare("", 0, fp)
7448+        f = fp.open("rb")
7449+        try:
7450+            f.seek(m.DATA_OFFSET)
7451+            data = f.read(2000)
7452+            # make sure this slot contains an SDMF share
7453+            assert data[0] == "\x00", "non-SDMF mutable shares not supported"
7454+        finally:
7455+            f.close()
7456 
7457         (version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize,
7458          ig_datalen, offsets) = unpack_header(data)
7459hunk ./src/allmydata/scripts/debug.py 960
7460         flip_bit(start, end)
7461     else:
7462         # otherwise assume it's immutable
7463-        f = ShareFile(fn)
7464+        f = ImmutableDiskShare("", 0, fp)
7465         bp = ReadBucketProxy(None, None, '')
7466         offsets = bp._parse_offsets(f.read_share_data(0, 0x24))
7467         start = f._data_offset + offsets["data"]
7468hunk ./src/allmydata/storage/backends/base.py 92
7469             (testv, datav, new_length) = test_and_write_vectors[sharenum]
7470             if sharenum in shares:
7471                 if not shares[sharenum].check_testv(testv):
7472-                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
7473+                    storageserver.log("testv failed: [%d]: %r" % (sharenum, testv))
7474                     testv_is_good = False
7475                     break
7476             else:
7477hunk ./src/allmydata/storage/backends/base.py 99
7478                 # compare the vectors against an empty share, in which all
7479                 # reads return empty strings
7480                 if not EmptyShare().check_testv(testv):
7481-                    self.log("testv failed (empty): [%d] %r" % (sharenum,
7482-                                                                testv))
7483+                    storageserver.log("testv failed (empty): [%d] %r" % (sharenum, testv))
7484                     testv_is_good = False
7485                     break
7486 
7487hunk ./src/allmydata/test/test_cli.py 2892
7488             # delete one, corrupt a second
7489             shares = self.find_uri_shares(self.uri)
7490             self.failUnlessReallyEqual(len(shares), 10)
7491-            os.unlink(shares[0][2])
7492-            cso = debug.CorruptShareOptions()
7493-            cso.stdout = StringIO()
7494-            cso.parseOptions([shares[1][2]])
7495+            shares[0][2].remove()
7496+            stdout = StringIO()
7497+            sharefile = shares[1][2]
7498             storage_index = uri.from_string(self.uri).get_storage_index()
7499             self._corrupt_share_line = "  server %s, SI %s, shnum %d" % \
7500                                        (base32.b2a(shares[1][1]),
7501hunk ./src/allmydata/test/test_cli.py 2900
7502                                         base32.b2a(storage_index),
7503                                         shares[1][0])
7504-            debug.corrupt_share(cso)
7505+            debug.do_corrupt_share(stdout, sharefile)
7506         d.addCallback(_clobber_shares)
7507 
7508         d.addCallback(lambda ign: self.do_cli("check", "--verify", self.uri))
7509hunk ./src/allmydata/test/test_cli.py 3017
7510         def _clobber_shares(ignored):
7511             shares = self.find_uri_shares(self.uris[u"g\u00F6\u00F6d"])
7512             self.failUnlessReallyEqual(len(shares), 10)
7513-            os.unlink(shares[0][2])
7514+            shares[0][2].remove()
7515 
7516             shares = self.find_uri_shares(self.uris["mutable"])
7517hunk ./src/allmydata/test/test_cli.py 3020
7518-            cso = debug.CorruptShareOptions()
7519-            cso.stdout = StringIO()
7520-            cso.parseOptions([shares[1][2]])
7521+            stdout = StringIO()
7522+            sharefile = shares[1][2]
7523             storage_index = uri.from_string(self.uris["mutable"]).get_storage_index()
7524             self._corrupt_share_line = " corrupt: server %s, SI %s, shnum %d" % \
7525                                        (base32.b2a(shares[1][1]),
7526hunk ./src/allmydata/test/test_cli.py 3027
7527                                         base32.b2a(storage_index),
7528                                         shares[1][0])
7529-            debug.corrupt_share(cso)
7530+            debug.do_corrupt_share(stdout, sharefile)
7531         d.addCallback(_clobber_shares)
7532 
7533         # root
7534hunk ./src/allmydata/test/test_client.py 90
7535                            "enabled = true\n" + \
7536                            "reserved_space = 1000\n")
7537         c = client.Client(basedir)
7538-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 1000)
7539+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 1000)
7540 
7541     def test_reserved_2(self):
7542         basedir = "client.Basic.test_reserved_2"
7543hunk ./src/allmydata/test/test_client.py 101
7544                            "enabled = true\n" + \
7545                            "reserved_space = 10K\n")
7546         c = client.Client(basedir)
7547-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 10*1000)
7548+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 10*1000)
7549 
7550     def test_reserved_3(self):
7551         basedir = "client.Basic.test_reserved_3"
7552hunk ./src/allmydata/test/test_client.py 112
7553                            "enabled = true\n" + \
7554                            "reserved_space = 5mB\n")
7555         c = client.Client(basedir)
7556-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space,
7557+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space,
7558                              5*1000*1000)
7559 
7560     def test_reserved_4(self):
7561hunk ./src/allmydata/test/test_client.py 124
7562                            "enabled = true\n" + \
7563                            "reserved_space = 78Gb\n")
7564         c = client.Client(basedir)
7565-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space,
7566+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space,
7567                              78*1000*1000*1000)
7568 
7569     def test_reserved_bad(self):
7570hunk ./src/allmydata/test/test_client.py 136
7571                            "enabled = true\n" + \
7572                            "reserved_space = bogus\n")
7573         c = client.Client(basedir)
7574-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 0)
7575+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 0)
7576 
7577     def _permute(self, sb, key):
7578         return [ s.get_serverid() for s in sb.get_servers_for_psi(key) ]
7579hunk ./src/allmydata/test/test_crawler.py 7
7580 from twisted.trial import unittest
7581 from twisted.application import service
7582 from twisted.internet import defer
7583+from twisted.python.filepath import FilePath
7584 from foolscap.api import eventually, fireEventually
7585 
7586 from allmydata.util import fileutil, hashutil, pollmixin
7587hunk ./src/allmydata/test/test_crawler.py 13
7588 from allmydata.storage.server import StorageServer, si_b2a
7589 from allmydata.storage.crawler import ShareCrawler, TimeSliceExceeded
7590+from allmydata.storage.backends.disk.disk_backend import DiskBackend
7591 
7592 from allmydata.test.test_storage import FakeCanary
7593 from allmydata.test.common_util import StallMixin
7594hunk ./src/allmydata/test/test_crawler.py 115
7595 
7596     def test_immediate(self):
7597         self.basedir = "crawler/Basic/immediate"
7598-        fileutil.make_dirs(self.basedir)
7599         serverid = "\x00" * 20
7600hunk ./src/allmydata/test/test_crawler.py 116
7601-        ss = StorageServer(self.basedir, serverid)
7602+        fp = FilePath(self.basedir)
7603+        backend = DiskBackend(fp)
7604+        ss = StorageServer(serverid, backend, fp)
7605         ss.setServiceParent(self.s)
7606 
7607         sis = [self.write(i, ss, serverid) for i in range(10)]
7608hunk ./src/allmydata/test/test_crawler.py 122
7609-        statefile = os.path.join(self.basedir, "statefile")
7610+        statefp = fp.child("statefile")
7611 
7612hunk ./src/allmydata/test/test_crawler.py 124
7613-        c = BucketEnumeratingCrawler(ss, statefile, allowed_cpu_percentage=.1)
7614+        c = BucketEnumeratingCrawler(backend, statefp, allowed_cpu_percentage=.1)
7615         c.load_state()
7616 
7617         c.start_current_prefix(time.time())
7618hunk ./src/allmydata/test/test_crawler.py 137
7619         self.failUnlessEqual(sorted(sis), sorted(c.all_buckets))
7620 
7621         # check that a new crawler picks up on the state file properly
7622-        c2 = BucketEnumeratingCrawler(ss, statefile)
7623+        c2 = BucketEnumeratingCrawler(backend, statefp)
7624         c2.load_state()
7625 
7626         c2.start_current_prefix(time.time())
7627hunk ./src/allmydata/test/test_crawler.py 145
7628 
7629     def test_service(self):
7630         self.basedir = "crawler/Basic/service"
7631-        fileutil.make_dirs(self.basedir)
7632         serverid = "\x00" * 20
7633hunk ./src/allmydata/test/test_crawler.py 146
7634-        ss = StorageServer(self.basedir, serverid)
7635+        fp = FilePath(self.basedir)
7636+        backend = DiskBackend(fp)
7637+        ss = StorageServer(serverid, backend, fp)
7638         ss.setServiceParent(self.s)
7639 
7640         sis = [self.write(i, ss, serverid) for i in range(10)]
7641hunk ./src/allmydata/test/test_crawler.py 153
7642 
7643-        statefile = os.path.join(self.basedir, "statefile")
7644-        c = BucketEnumeratingCrawler(ss, statefile)
7645+        statefp = fp.child("statefile")
7646+        c = BucketEnumeratingCrawler(backend, statefp)
7647         c.setServiceParent(self.s)
7648 
7649         # it should be legal to call get_state() and get_progress() right
7650hunk ./src/allmydata/test/test_crawler.py 174
7651 
7652     def test_paced(self):
7653         self.basedir = "crawler/Basic/paced"
7654-        fileutil.make_dirs(self.basedir)
7655         serverid = "\x00" * 20
7656hunk ./src/allmydata/test/test_crawler.py 175
7657-        ss = StorageServer(self.basedir, serverid)
7658+        fp = FilePath(self.basedir)
7659+        backend = DiskBackend(fp)
7660+        ss = StorageServer(serverid, backend, fp)
7661         ss.setServiceParent(self.s)
7662 
7663         # put four buckets in each prefixdir
7664hunk ./src/allmydata/test/test_crawler.py 186
7665             for tail in range(4):
7666                 sis.append(self.write(i, ss, serverid, tail))
7667 
7668-        statefile = os.path.join(self.basedir, "statefile")
7669+        statefp = fp.child("statefile")
7670 
7671hunk ./src/allmydata/test/test_crawler.py 188
7672-        c = PacedCrawler(ss, statefile)
7673+        c = PacedCrawler(backend, statefp)
7674         c.load_state()
7675         try:
7676             c.start_current_prefix(time.time())
7677hunk ./src/allmydata/test/test_crawler.py 213
7678         del c
7679 
7680         # start a new crawler, it should start from the beginning
7681-        c = PacedCrawler(ss, statefile)
7682+        c = PacedCrawler(backend, statefp)
7683         c.load_state()
7684         try:
7685             c.start_current_prefix(time.time())
7686hunk ./src/allmydata/test/test_crawler.py 226
7687         c.cpu_slice = PacedCrawler.cpu_slice
7688 
7689         # a third crawler should pick up from where it left off
7690-        c2 = PacedCrawler(ss, statefile)
7691+        c2 = PacedCrawler(backend, statefp)
7692         c2.all_buckets = c.all_buckets[:]
7693         c2.load_state()
7694         c2.countdown = -1
7695hunk ./src/allmydata/test/test_crawler.py 237
7696 
7697         # now stop it at the end of a bucket (countdown=4), to exercise a
7698         # different place that checks the time
7699-        c = PacedCrawler(ss, statefile)
7700+        c = PacedCrawler(backend, statefp)
7701         c.load_state()
7702         c.countdown = 4
7703         try:
7704hunk ./src/allmydata/test/test_crawler.py 256
7705 
7706         # stop it again at the end of the bucket, check that a new checker
7707         # picks up correctly
7708-        c = PacedCrawler(ss, statefile)
7709+        c = PacedCrawler(backend, statefp)
7710         c.load_state()
7711         c.countdown = 4
7712         try:
7713hunk ./src/allmydata/test/test_crawler.py 266
7714         # that should stop at the end of one of the buckets.
7715         c.save_state()
7716 
7717-        c2 = PacedCrawler(ss, statefile)
7718+        c2 = PacedCrawler(backend, statefp)
7719         c2.all_buckets = c.all_buckets[:]
7720         c2.load_state()
7721         c2.countdown = -1
7722hunk ./src/allmydata/test/test_crawler.py 277
7723 
7724     def test_paced_service(self):
7725         self.basedir = "crawler/Basic/paced_service"
7726-        fileutil.make_dirs(self.basedir)
7727         serverid = "\x00" * 20
7728hunk ./src/allmydata/test/test_crawler.py 278
7729-        ss = StorageServer(self.basedir, serverid)
7730+        fp = FilePath(self.basedir)
7731+        backend = DiskBackend(fp)
7732+        ss = StorageServer(serverid, backend, fp)
7733         ss.setServiceParent(self.s)
7734 
7735         sis = [self.write(i, ss, serverid) for i in range(10)]
7736hunk ./src/allmydata/test/test_crawler.py 285
7737 
7738-        statefile = os.path.join(self.basedir, "statefile")
7739-        c = PacedCrawler(ss, statefile)
7740+        statefp = fp.child("statefile")
7741+        c = PacedCrawler(backend, statefp)
7742 
7743         did_check_progress = [False]
7744         def check_progress():
7745hunk ./src/allmydata/test/test_crawler.py 345
7746         # and read the stdout when it runs.
7747 
7748         self.basedir = "crawler/Basic/cpu_usage"
7749-        fileutil.make_dirs(self.basedir)
7750         serverid = "\x00" * 20
7751hunk ./src/allmydata/test/test_crawler.py 346
7752-        ss = StorageServer(self.basedir, serverid)
7753+        fp = FilePath(self.basedir)
7754+        backend = DiskBackend(fp)
7755+        ss = StorageServer(serverid, backend, fp)
7756         ss.setServiceParent(self.s)
7757 
7758         for i in range(10):
7759hunk ./src/allmydata/test/test_crawler.py 354
7760             self.write(i, ss, serverid)
7761 
7762-        statefile = os.path.join(self.basedir, "statefile")
7763-        c = ConsumingCrawler(ss, statefile)
7764+        statefp = fp.child("statefile")
7765+        c = ConsumingCrawler(backend, statefp)
7766         c.setServiceParent(self.s)
7767 
7768         # this will run as fast as it can, consuming about 50ms per call to
7769hunk ./src/allmydata/test/test_crawler.py 391
7770 
7771     def test_empty_subclass(self):
7772         self.basedir = "crawler/Basic/empty_subclass"
7773-        fileutil.make_dirs(self.basedir)
7774         serverid = "\x00" * 20
7775hunk ./src/allmydata/test/test_crawler.py 392
7776-        ss = StorageServer(self.basedir, serverid)
7777+        fp = FilePath(self.basedir)
7778+        backend = DiskBackend(fp)
7779+        ss = StorageServer(serverid, backend, fp)
7780         ss.setServiceParent(self.s)
7781 
7782         for i in range(10):
7783hunk ./src/allmydata/test/test_crawler.py 400
7784             self.write(i, ss, serverid)
7785 
7786-        statefile = os.path.join(self.basedir, "statefile")
7787-        c = ShareCrawler(ss, statefile)
7788+        statefp = fp.child("statefile")
7789+        c = ShareCrawler(backend, statefp)
7790         c.slow_start = 0
7791         c.setServiceParent(self.s)
7792 
7793hunk ./src/allmydata/test/test_crawler.py 417
7794         d.addCallback(_done)
7795         return d
7796 
7797-
7798     def test_oneshot(self):
7799         self.basedir = "crawler/Basic/oneshot"
7800hunk ./src/allmydata/test/test_crawler.py 419
7801-        fileutil.make_dirs(self.basedir)
7802         serverid = "\x00" * 20
7803hunk ./src/allmydata/test/test_crawler.py 420
7804-        ss = StorageServer(self.basedir, serverid)
7805+        fp = FilePath(self.basedir)
7806+        backend = DiskBackend(fp)
7807+        ss = StorageServer(serverid, backend, fp)
7808         ss.setServiceParent(self.s)
7809 
7810         for i in range(30):
7811hunk ./src/allmydata/test/test_crawler.py 428
7812             self.write(i, ss, serverid)
7813 
7814-        statefile = os.path.join(self.basedir, "statefile")
7815-        c = OneShotCrawler(ss, statefile)
7816+        statefp = fp.child("statefile")
7817+        c = OneShotCrawler(backend, statefp)
7818         c.setServiceParent(self.s)
7819 
7820         d = c.finished_d
7821hunk ./src/allmydata/test/test_crawler.py 447
7822             self.failUnlessEqual(s["current-cycle"], None)
7823         d.addCallback(_check)
7824         return d
7825-
7826hunk ./src/allmydata/test/test_deepcheck.py 23
7827      ShouldFailMixin
7828 from allmydata.test.common_util import StallMixin
7829 from allmydata.test.no_network import GridTestMixin
7830+from allmydata.scripts import debug
7831+
7832 
7833 timeout = 2400 # One of these took 1046.091s on Zandr's ARM box.
7834 
7835hunk ./src/allmydata/test/test_deepcheck.py 905
7836         d.addErrback(self.explain_error)
7837         return d
7838 
7839-
7840-
7841     def set_up_damaged_tree(self):
7842         # 6.4s
7843 
7844hunk ./src/allmydata/test/test_deepcheck.py 989
7845 
7846         return d
7847 
7848-    def _run_cli(self, argv):
7849-        stdout, stderr = StringIO(), StringIO()
7850-        # this can only do synchronous operations
7851-        assert argv[0] == "debug"
7852-        runner.runner(argv, run_by_human=False, stdout=stdout, stderr=stderr)
7853-        return stdout.getvalue()
7854-
7855     def _delete_some_shares(self, node):
7856         self.delete_shares_numbered(node.get_uri(), [0,1])
7857 
7858hunk ./src/allmydata/test/test_deepcheck.py 995
7859     def _corrupt_some_shares(self, node):
7860         for (shnum, serverid, sharefile) in self.find_uri_shares(node.get_uri()):
7861             if shnum in (0,1):
7862-                self._run_cli(["debug", "corrupt-share", sharefile])
7863+                debug.do_corrupt_share(StringIO(), sharefile)
7864 
7865     def _delete_most_shares(self, node):
7866         self.delete_shares_numbered(node.get_uri(), range(1,10))
7867hunk ./src/allmydata/test/test_deepcheck.py 1000
7868 
7869-
7870     def check_is_healthy(self, cr, where):
7871         try:
7872             self.failUnless(ICheckResults.providedBy(cr), (cr, type(cr), where))
7873hunk ./src/allmydata/test/test_download.py 134
7874             for shnum in shares_for_server:
7875                 share_dir = self.get_server(i).backend.get_shareset(si)._sharehomedir
7876                 fileutil.fp_make_dirs(share_dir)
7877-                share_dir.child(str(shnum)).setContent(shares[shnum])
7878+                share_dir.child(str(shnum)).setContent(shares_for_server[shnum])
7879 
7880     def load_shares(self, ignored=None):
7881         # this uses the data generated by create_shares() to populate the
7882hunk ./src/allmydata/test/test_hung_server.py 32
7883 
7884     def _break(self, servers):
7885         for ss in servers:
7886-            self.g.break_server(ss.get_serverid())
7887+            self.g.break_server(ss.original.get_serverid())
7888 
7889     def _hang(self, servers, **kwargs):
7890         for ss in servers:
7891hunk ./src/allmydata/test/test_hung_server.py 67
7892         serverids = [ss.original.get_serverid() for ss in from_servers]
7893         for (i_shnum, i_serverid, i_sharefp) in self.shares:
7894             if i_serverid in serverids:
7895-                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
7896+                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server.original)
7897 
7898         self.shares = self.find_uri_shares(self.uri)
7899 
7900hunk ./src/allmydata/test/test_mutable.py 3670
7901         # Now execute each assignment by writing the storage.
7902         for (share, servernum) in assignments:
7903             sharedata = base64.b64decode(self.sdmf_old_shares[share])
7904-            storage_dir = self.get_server(servernum).backend.get_shareset(si).sharehomedir
7905+            storage_dir = self.get_server(servernum).backend.get_shareset(si)._sharehomedir
7906             fileutil.fp_make_dirs(storage_dir)
7907             storage_dir.child("%d" % share).setContent(sharedata)
7908         # ...and verify that the shares are there.
7909hunk ./src/allmydata/test/test_no_network.py 10
7910 from allmydata.immutable.upload import Data
7911 from allmydata.util.consumer import download_to_data
7912 
7913+
7914 class Harness(unittest.TestCase):
7915     def setUp(self):
7916         self.s = service.MultiService()
7917hunk ./src/allmydata/test/test_storage.py 1
7918-import time, os.path, platform, stat, re, simplejson, struct, shutil
7919+import time, os.path, platform, stat, re, simplejson, struct, shutil, itertools
7920 
7921 import mock
7922 
7923hunk ./src/allmydata/test/test_storage.py 6
7924 from twisted.trial import unittest
7925-
7926 from twisted.internet import defer
7927 from twisted.application import service
7928hunk ./src/allmydata/test/test_storage.py 8
7929+from twisted.python.filepath import FilePath
7930 from foolscap.api import fireEventually
7931hunk ./src/allmydata/test/test_storage.py 10
7932-import itertools
7933+
7934 from allmydata import interfaces
7935 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
7936 from allmydata.storage.server import StorageServer
7937hunk ./src/allmydata/test/test_storage.py 14
7938+from allmydata.storage.backends.disk.disk_backend import DiskBackend
7939 from allmydata.storage.backends.disk.mutable import MutableDiskShare
7940 from allmydata.storage.bucket import BucketWriter, BucketReader
7941 from allmydata.storage.common import DataTooLargeError, \
7942hunk ./src/allmydata/test/test_storage.py 310
7943         return self.sparent.stopService()
7944 
7945     def workdir(self, name):
7946-        basedir = os.path.join("storage", "Server", name)
7947-        return basedir
7948+        return FilePath("storage").child("Server").child(name)
7949 
7950     def create(self, name, reserved_space=0, klass=StorageServer):
7951         workdir = self.workdir(name)
7952hunk ./src/allmydata/test/test_storage.py 314
7953-        ss = klass(workdir, "\x00" * 20, reserved_space=reserved_space,
7954+        backend = DiskBackend(workdir, readonly=False, reserved_space=reserved_space)
7955+        ss = klass("\x00" * 20, backend, workdir,
7956                    stats_provider=FakeStatsProvider())
7957         ss.setServiceParent(self.sparent)
7958         return ss
7959hunk ./src/allmydata/test/test_storage.py 1386
7960 
7961     def tearDown(self):
7962         self.sparent.stopService()
7963-        shutil.rmtree(self.workdir("MDMFProxies storage test server"))
7964+        fileutil.fp_remove(self.workdir("MDMFProxies storage test server"))
7965 
7966 
7967     def write_enabler(self, we_tag):
7968hunk ./src/allmydata/test/test_storage.py 2781
7969         return self.sparent.stopService()
7970 
7971     def workdir(self, name):
7972-        basedir = os.path.join("storage", "Server", name)
7973-        return basedir
7974+        return FilePath("storage").child("Server").child(name)
7975 
7976     def create(self, name):
7977         workdir = self.workdir(name)
7978hunk ./src/allmydata/test/test_storage.py 2785
7979-        ss = StorageServer(workdir, "\x00" * 20)
7980+        backend = DiskBackend(workdir)
7981+        ss = StorageServer("\x00" * 20, backend, workdir)
7982         ss.setServiceParent(self.sparent)
7983         return ss
7984 
7985hunk ./src/allmydata/test/test_storage.py 4061
7986         }
7987 
7988         basedir = "storage/WebStatus/status_right_disk_stats"
7989-        fileutil.make_dirs(basedir)
7990-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=reserved_space)
7991-        expecteddir = ss.sharedir
7992+        fp = FilePath(basedir)
7993+        backend = DiskBackend(fp, readonly=False, reserved_space=reserved_space)
7994+        ss = StorageServer("\x00" * 20, backend, fp)
7995+        expecteddir = backend._sharedir
7996         ss.setServiceParent(self.s)
7997         w = StorageStatus(ss)
7998         html = w.renderSynchronously()
7999hunk ./src/allmydata/test/test_storage.py 4084
8000 
8001     def test_readonly(self):
8002         basedir = "storage/WebStatus/readonly"
8003-        fileutil.make_dirs(basedir)
8004-        ss = StorageServer(basedir, "\x00" * 20, readonly_storage=True)
8005+        fp = FilePath(basedir)
8006+        backend = DiskBackend(fp, readonly=True)
8007+        ss = StorageServer("\x00" * 20, backend, fp)
8008         ss.setServiceParent(self.s)
8009         w = StorageStatus(ss)
8010         html = w.renderSynchronously()
8011hunk ./src/allmydata/test/test_storage.py 4096
8012 
8013     def test_reserved(self):
8014         basedir = "storage/WebStatus/reserved"
8015-        fileutil.make_dirs(basedir)
8016-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=10e6)
8017-        ss.setServiceParent(self.s)
8018-        w = StorageStatus(ss)
8019-        html = w.renderSynchronously()
8020-        self.failUnlessIn("<h1>Storage Server Status</h1>", html)
8021-        s = remove_tags(html)
8022-        self.failUnlessIn("Reserved space: - 10.00 MB (10000000)", s)
8023-
8024-    def test_huge_reserved(self):
8025-        basedir = "storage/WebStatus/reserved"
8026-        fileutil.make_dirs(basedir)
8027-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=10e6)
8028+        fp = FilePath(basedir)
8029+        backend = DiskBackend(fp, readonly=False, reserved_space=10e6)
8030+        ss = StorageServer("\x00" * 20, backend, fp)
8031         ss.setServiceParent(self.s)
8032         w = StorageStatus(ss)
8033         html = w.renderSynchronously()
8034hunk ./src/allmydata/test/test_upload.py 3
8035 # -*- coding: utf-8 -*-
8036 
8037-import os, shutil
8038+import os
8039 from cStringIO import StringIO
8040 from twisted.trial import unittest
8041 from twisted.python.failure import Failure
8042hunk ./src/allmydata/test/test_upload.py 14
8043 from allmydata import uri, monitor, client
8044 from allmydata.immutable import upload, encode
8045 from allmydata.interfaces import FileTooLargeError, UploadUnhappinessError
8046-from allmydata.util import log
8047+from allmydata.util import log, fileutil
8048 from allmydata.util.assertutil import precondition
8049 from allmydata.util.deferredutil import DeferredListShouldSucceed
8050 from allmydata.test.no_network import GridTestMixin
8051hunk ./src/allmydata/test/test_upload.py 972
8052                                         readonly=True))
8053         # Remove the first share from server 0.
8054         def _remove_share_0_from_server_0():
8055-            share_location = self.shares[0][2]
8056-            os.remove(share_location)
8057+            self.shares[0][2].remove()
8058         d.addCallback(lambda ign:
8059             _remove_share_0_from_server_0())
8060         # Set happy = 4 in the client.
8061hunk ./src/allmydata/test/test_upload.py 1847
8062             self._copy_share_to_server(3, 1)
8063             storedir = self.get_serverdir(0)
8064             # remove the storedir, wiping out any existing shares
8065-            shutil.rmtree(storedir)
8066+            fileutil.fp_remove(storedir)
8067             # create an empty storedir to replace the one we just removed
8068hunk ./src/allmydata/test/test_upload.py 1849
8069-            os.mkdir(storedir)
8070+            storedir.mkdir()
8071             client = self.g.clients[0]
8072             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
8073             return client
8074hunk ./src/allmydata/test/test_upload.py 1888
8075             self._copy_share_to_server(3, 1)
8076             storedir = self.get_serverdir(0)
8077             # remove the storedir, wiping out any existing shares
8078-            shutil.rmtree(storedir)
8079+            fileutil.fp_remove(storedir)
8080             # create an empty storedir to replace the one we just removed
8081hunk ./src/allmydata/test/test_upload.py 1890
8082-            os.mkdir(storedir)
8083+            storedir.mkdir()
8084             client = self.g.clients[0]
8085             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
8086             return client
8087hunk ./src/allmydata/test/test_web.py 4870
8088         d.addErrback(self.explain_web_error)
8089         return d
8090 
8091-    def _assert_leasecount(self, ignored, which, expected):
8092+    def _assert_leasecount(self, which, expected):
8093         lease_counts = self.count_leases(self.uris[which])
8094         for (fn, num_leases) in lease_counts:
8095             if num_leases != expected:
8096hunk ./src/allmydata/test/test_web.py 4903
8097                 self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
8098         d.addCallback(_compute_fileurls)
8099 
8100-        d.addCallback(self._assert_leasecount, "one", 1)
8101-        d.addCallback(self._assert_leasecount, "two", 1)
8102-        d.addCallback(self._assert_leasecount, "mutable", 1)
8103+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8104+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8105+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8106 
8107         d.addCallback(self.CHECK, "one", "t=check") # no add-lease
8108         def _got_html_good(res):
8109hunk ./src/allmydata/test/test_web.py 4913
8110             self.failIf("Not Healthy" in res, res)
8111         d.addCallback(_got_html_good)
8112 
8113-        d.addCallback(self._assert_leasecount, "one", 1)
8114-        d.addCallback(self._assert_leasecount, "two", 1)
8115-        d.addCallback(self._assert_leasecount, "mutable", 1)
8116+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8117+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8118+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8119 
8120         # this CHECK uses the original client, which uses the same
8121         # lease-secrets, so it will just renew the original lease
8122hunk ./src/allmydata/test/test_web.py 4922
8123         d.addCallback(self.CHECK, "one", "t=check&add-lease=true")
8124         d.addCallback(_got_html_good)
8125 
8126-        d.addCallback(self._assert_leasecount, "one", 1)
8127-        d.addCallback(self._assert_leasecount, "two", 1)
8128-        d.addCallback(self._assert_leasecount, "mutable", 1)
8129+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8130+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8131+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8132 
8133         # this CHECK uses an alternate client, which adds a second lease
8134         d.addCallback(self.CHECK, "one", "t=check&add-lease=true", clientnum=1)
8135hunk ./src/allmydata/test/test_web.py 4930
8136         d.addCallback(_got_html_good)
8137 
8138-        d.addCallback(self._assert_leasecount, "one", 2)
8139-        d.addCallback(self._assert_leasecount, "two", 1)
8140-        d.addCallback(self._assert_leasecount, "mutable", 1)
8141+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8142+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8143+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8144 
8145         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true")
8146         d.addCallback(_got_html_good)
8147hunk ./src/allmydata/test/test_web.py 4937
8148 
8149-        d.addCallback(self._assert_leasecount, "one", 2)
8150-        d.addCallback(self._assert_leasecount, "two", 1)
8151-        d.addCallback(self._assert_leasecount, "mutable", 1)
8152+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8153+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8154+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8155 
8156         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true",
8157                       clientnum=1)
8158hunk ./src/allmydata/test/test_web.py 4945
8159         d.addCallback(_got_html_good)
8160 
8161-        d.addCallback(self._assert_leasecount, "one", 2)
8162-        d.addCallback(self._assert_leasecount, "two", 1)
8163-        d.addCallback(self._assert_leasecount, "mutable", 2)
8164+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8165+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8166+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 2))
8167 
8168         d.addErrback(self.explain_web_error)
8169         return d
8170hunk ./src/allmydata/test/test_web.py 4989
8171             self.failUnlessReallyEqual(len(units), 4+1)
8172         d.addCallback(_done)
8173 
8174-        d.addCallback(self._assert_leasecount, "root", 1)
8175-        d.addCallback(self._assert_leasecount, "one", 1)
8176-        d.addCallback(self._assert_leasecount, "mutable", 1)
8177+        d.addCallback(lambda ign: self._assert_leasecount("root", 1))
8178+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8179+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8180 
8181         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true")
8182         d.addCallback(_done)
8183hunk ./src/allmydata/test/test_web.py 4996
8184 
8185-        d.addCallback(self._assert_leasecount, "root", 1)
8186-        d.addCallback(self._assert_leasecount, "one", 1)
8187-        d.addCallback(self._assert_leasecount, "mutable", 1)
8188+        d.addCallback(lambda ign: self._assert_leasecount("root", 1))
8189+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8190+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8191 
8192         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true",
8193                       clientnum=1)
8194hunk ./src/allmydata/test/test_web.py 5004
8195         d.addCallback(_done)
8196 
8197-        d.addCallback(self._assert_leasecount, "root", 2)
8198-        d.addCallback(self._assert_leasecount, "one", 2)
8199-        d.addCallback(self._assert_leasecount, "mutable", 2)
8200+        d.addCallback(lambda ign: self._assert_leasecount("root", 2))
8201+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8202+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 2))
8203 
8204         d.addErrback(self.explain_web_error)
8205         return d
8206}
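
The test changes above all follow one wiring pattern: the backend object now
owns the on-disk layout and the space policy, and StorageServer takes the
serverid, the backend, and a state directory. A sketch of that construction
(make_disk_server is a hypothetical convenience wrapper; the argument names
come from the hunks above):

    from twisted.python.filepath import FilePath

    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    def make_disk_server(basedir, serverid="\x00" * 20, reserved_space=0):
        fp = FilePath(basedir)
        backend = DiskBackend(fp, readonly=False, reserved_space=reserved_space)
        return StorageServer(serverid, backend, fp)
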
8207[Fix more shallow bugs, mainly FilePathification. Also, remove the max_space_per_bucket parameter from BucketWriter since it can be obtained from the _max_size attribute of the share (via a new get_allocated_size() accessor). refs #999
8208david-sarah@jacaranda.org**20110921221421
8209 Ignore-this: 600e3ccef8533aa43442fa576c7d88cf
8210] {
8211hunk ./src/allmydata/scripts/debug.py 642
8212     /home/warner/testnet/node-2/storage/shares/44k/44kai1tui348689nrw8fjegc8c/2
8213     """
8214     from allmydata.storage.server import si_a2b
8215-    from allmydata.storage.backends.disk_backend import si_si2dir
8216+    from allmydata.storage.backends.disk.disk_backend import si_si2dir
8217     from allmydata.util.encodingutil import quote_filepath
8218 
8219     out = options.stdout
8220hunk ./src/allmydata/scripts/debug.py 648
8221     si = si_a2b(options.si_s)
8222     for nodedir in options.nodedirs:
8223-        sharedir = si_si2dir(nodedir.child("storage").child("shares"), si)
8224+        sharedir = si_si2dir(FilePath(nodedir).child("storage").child("shares"), si)
8225         if sharedir.exists():
8226             for sharefp in sharedir.children():
8227                 print >>out, quote_filepath(sharefp, quotemarks=False)
8228hunk ./src/allmydata/storage/backends/disk/disk_backend.py 189
8229         incominghome = self._incominghomedir.child(str(shnum))
8230         immsh = ImmutableDiskShare(self.get_storage_index(), shnum, incominghome, finalhome,
8231                                    max_size=max_space_per_bucket)
8232-        bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
8233+        bw = BucketWriter(storageserver, immsh, lease_info, canary)
8234         if self._discard_storage:
8235             bw.throw_out_all_data = True
8236         return bw
8237hunk ./src/allmydata/storage/backends/disk/immutable.py 147
8238     def unlink(self):
8239         self._home.remove()
8240 
8241+    def get_allocated_size(self):
8242+        return self._max_size
8243+
8244     def get_size(self):
8245         return self._home.getsize()
8246 
8247hunk ./src/allmydata/storage/bucket.py 15
8248 class BucketWriter(Referenceable):
8249     implements(RIBucketWriter)
8250 
8251-    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
8252+    def __init__(self, ss, immutableshare, lease_info, canary):
8253         self.ss = ss
8254hunk ./src/allmydata/storage/bucket.py 17
8255-        self._max_size = max_size # don't allow the client to write more than this
8256         self._canary = canary
8257         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
8258         self.closed = False
8259hunk ./src/allmydata/storage/bucket.py 27
8260         self._share.add_lease(lease_info)
8261 
8262     def allocated_size(self):
8263-        return self._max_size
8264+        return self._share.get_allocated_size()
8265 
8266     def remote_write(self, offset, data):
8267         start = time.time()
8268hunk ./src/allmydata/storage/crawler.py 480
8269             self.state["bucket-counts"][cycle] = {}
8270         self.state["bucket-counts"][cycle][prefix] = len(sharesets)
8271         if prefix in self.prefixes[:self.num_sample_prefixes]:
8272-            self.state["storage-index-samples"][prefix] = (cycle, sharesets)
8273+            si_strings = [shareset.get_storage_index_string() for shareset in sharesets]
8274+            self.state["storage-index-samples"][prefix] = (cycle, si_strings)
8275 
8276     def finished_cycle(self, cycle):
8277         last_counts = self.state["bucket-counts"].get(cycle, [])
8278hunk ./src/allmydata/storage/expirer.py 281
8279         # copy() needs to become a deepcopy
8280         h["space-recovered"] = s["space-recovered"].copy()
8281 
8282-        history = pickle.load(self.historyfp.getContent())
8283+        history = pickle.loads(self.historyfp.getContent())
8284         history[cycle] = h
8285         while len(history) > 10:
8286             oldcycles = sorted(history.keys())
8287hunk ./src/allmydata/storage/expirer.py 355
8288         progress = self.get_progress()
8289 
8290         state = ShareCrawler.get_state(self) # does a shallow copy
8291-        history = pickle.load(self.historyfp.getContent())
8292+        history = pickle.loads(self.historyfp.getContent())
8293         state["history"] = history
8294 
8295         if not progress["cycle-in-progress"]:
8296hunk ./src/allmydata/test/test_download.py 199
8297                     for shnum in immutable_shares[clientnum]:
8298                         if s._shnum == shnum:
8299                             share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
8300-                            share_dir.child(str(shnum)).remove()
8301+                            fileutil.fp_remove(share_dir.child(str(shnum)))
8302         d.addCallback(_clobber_some_shares)
8303         d.addCallback(lambda ign: download_to_data(n))
8304         d.addCallback(_got_data)
8305hunk ./src/allmydata/test/test_download.py 224
8306             for clientnum in immutable_shares:
8307                 for shnum in immutable_shares[clientnum]:
8308                     share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
8309-                    share_dir.child(str(shnum)).remove()
8310+                    fileutil.fp_remove(share_dir.child(str(shnum)))
8311             # now a new download should fail with NoSharesError. We want a
8312             # new ImmutableFileNode so it will forget about the old shares.
8313             # If we merely called create_node_from_uri() without first
8314hunk ./src/allmydata/test/test_repairer.py 415
8315         def _test_corrupt(ignored):
8316             olddata = {}
8317             shares = self.find_uri_shares(self.uri)
8318-            for (shnum, serverid, sharefile) in shares:
8319-                olddata[ (shnum, serverid) ] = open(sharefile, "rb").read()
8320+            for (shnum, serverid, sharefp) in shares:
8321+                olddata[ (shnum, serverid) ] = sharefp.getContent()
8322             for sh in shares:
8323                 self.corrupt_share(sh, common._corrupt_uri_extension)
8324hunk ./src/allmydata/test/test_repairer.py 419
8325-            for (shnum, serverid, sharefile) in shares:
8326-                newdata = open(sharefile, "rb").read()
8327+            for (shnum, serverid, sharefp) in shares:
8328+                newdata = sharefp.getContent()
8329                 self.failIfEqual(olddata[ (shnum, serverid) ], newdata)
8330         d.addCallback(_test_corrupt)
8331 
8332hunk ./src/allmydata/test/test_storage.py 63
8333 
8334 class Bucket(unittest.TestCase):
8335     def make_workdir(self, name):
8336-        basedir = os.path.join("storage", "Bucket", name)
8337-        incoming = os.path.join(basedir, "tmp", "bucket")
8338-        final = os.path.join(basedir, "bucket")
8339-        fileutil.make_dirs(basedir)
8340-        fileutil.make_dirs(os.path.join(basedir, "tmp"))
8341+        basedir = FilePath("storage").child("Bucket").child(name)
8342+        tmpdir = basedir.child("tmp")
8343+        tmpdir.makedirs()
8344+        incoming = tmpdir.child("bucket")
8345+        final = basedir.child("bucket")
8346         return incoming, final
8347 
8348     def bucket_writer_closed(self, bw, consumed):
8349hunk ./src/allmydata/test/test_storage.py 87
8350 
8351     def test_create(self):
8352         incoming, final = self.make_workdir("test_create")
8353-        bw = BucketWriter(self, incoming, final, 200, self.make_lease(),
8354-                          FakeCanary())
8355+        share = ImmutableDiskShare("", 0, incoming, final, 200)
8356+        bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
8357         bw.remote_write(0, "a"*25)
8358         bw.remote_write(25, "b"*25)
8359         bw.remote_write(50, "c"*25)
8360hunk ./src/allmydata/test/test_storage.py 97
8361 
8362     def test_readwrite(self):
8363         incoming, final = self.make_workdir("test_readwrite")
8364-        bw = BucketWriter(self, incoming, final, 200, self.make_lease(),
8365-                          FakeCanary())
7366+        share = ImmutableDiskShare("", 0, incoming, final, 200)
8367+        bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
8368         bw.remote_write(0, "a"*25)
8369         bw.remote_write(25, "b"*25)
8370         bw.remote_write(50, "c"*7) # last block may be short
8371hunk ./src/allmydata/test/test_storage.py 140
8372 
8373         incoming, final = self.make_workdir("test_read_past_end_of_share_data")
8374 
8375-        fileutil.write(final, share_file_data)
8376+        final.setContent(share_file_data)
8377 
8378         mockstorageserver = mock.Mock()
8379 
8380hunk ./src/allmydata/test/test_storage.py 179
8381 
8382 class BucketProxy(unittest.TestCase):
8383     def make_bucket(self, name, size):
8384-        basedir = os.path.join("storage", "BucketProxy", name)
8385-        incoming = os.path.join(basedir, "tmp", "bucket")
8386-        final = os.path.join(basedir, "bucket")
8387-        fileutil.make_dirs(basedir)
8388-        fileutil.make_dirs(os.path.join(basedir, "tmp"))
8389-        bw = BucketWriter(self, incoming, final, size, self.make_lease(),
8390-                          FakeCanary())
8391+        basedir = FilePath("storage").child("BucketProxy").child(name)
8392+        tmpdir = basedir.child("tmp")
8393+        tmpdir.makedirs()
8394+        incoming = tmpdir.child("bucket")
8395+        final = basedir.child("bucket")
8396+        share = ImmutableDiskShare("", 0, incoming, final, size)
8397+        bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
8398         rb = RemoteBucket()
8399         rb.target = bw
8400         return bw, rb, final
8401hunk ./src/allmydata/test/test_storage.py 206
8402         pass
8403 
8404     def test_create(self):
8405-        bw, rb, sharefname = self.make_bucket("test_create", 500)
8406+        bw, rb, sharefp = self.make_bucket("test_create", 500)
8407         bp = WriteBucketProxy(rb, None,
8408                               data_size=300,
8409                               block_size=10,
8410hunk ./src/allmydata/test/test_storage.py 237
8411                         for i in (1,9,13)]
8412         uri_extension = "s" + "E"*498 + "e"
8413 
8414-        bw, rb, sharefname = self.make_bucket(name, sharesize)
8415+        bw, rb, sharefp = self.make_bucket(name, sharesize)
8416         bp = wbp_class(rb, None,
8417                        data_size=95,
8418                        block_size=25,
8419hunk ./src/allmydata/test/test_storage.py 258
8420 
8421         # now read everything back
8422         def _start_reading(res):
8423-            br = BucketReader(self, sharefname)
8424+            br = BucketReader(self, sharefp)
8425             rb = RemoteBucket()
8426             rb.target = br
8427             server = NoNetworkServer("abc", None)
8428hunk ./src/allmydata/test/test_storage.py 373
8429         for i, wb in writers.items():
8430             wb.remote_write(0, "%10d" % i)
8431             wb.remote_close()
8432-        storedir = os.path.join(self.workdir("test_dont_overfill_dirs"),
8433-                                "shares")
8434-        children_of_storedir = set(os.listdir(storedir))
8435+        storedir = self.workdir("test_dont_overfill_dirs").child("shares")
8436+        children_of_storedir = sorted([child.basename() for child in storedir.children()])
8437 
8438         # Now store another one under another storageindex that has leading
8439         # chars the same as the first storageindex.
8440hunk ./src/allmydata/test/test_storage.py 382
8441         for i, wb in writers.items():
8442             wb.remote_write(0, "%10d" % i)
8443             wb.remote_close()
8444-        storedir = os.path.join(self.workdir("test_dont_overfill_dirs"),
8445-                                "shares")
8446-        new_children_of_storedir = set(os.listdir(storedir))
8447+        storedir = self.workdir("test_dont_overfill_dirs").child("shares")
8448+        new_children_of_storedir = sorted([child.basename() for child in storedir.children()])
8449         self.failUnlessEqual(children_of_storedir, new_children_of_storedir)
8450 
8451     def test_remove_incoming(self):
8452hunk ./src/allmydata/test/test_storage.py 390
8453         ss = self.create("test_remove_incoming")
8454         already, writers = self.allocate(ss, "vid", range(3), 10)
8455         for i,wb in writers.items():
8456+            incoming_share_home = wb._share._home
8457             wb.remote_write(0, "%10d" % i)
8458             wb.remote_close()
8459hunk ./src/allmydata/test/test_storage.py 393
8460-        incoming_share_dir = wb.incominghome
8461-        incoming_bucket_dir = os.path.dirname(incoming_share_dir)
8462-        incoming_prefix_dir = os.path.dirname(incoming_bucket_dir)
8463-        incoming_dir = os.path.dirname(incoming_prefix_dir)
8464-        self.failIf(os.path.exists(incoming_bucket_dir), incoming_bucket_dir)
8465-        self.failIf(os.path.exists(incoming_prefix_dir), incoming_prefix_dir)
8466-        self.failUnless(os.path.exists(incoming_dir), incoming_dir)
8467+        incoming_bucket_dir = incoming_share_home.parent()
8468+        incoming_prefix_dir = incoming_bucket_dir.parent()
8469+        incoming_dir = incoming_prefix_dir.parent()
8470+        self.failIf(incoming_bucket_dir.exists(), incoming_bucket_dir)
8471+        self.failIf(incoming_prefix_dir.exists(), incoming_prefix_dir)
8472+        self.failUnless(incoming_dir.exists(), incoming_dir)
8473 
8474     def test_abort(self):
8475         # remote_abort, when called on a writer, should make sure that
8476hunk ./src/allmydata/test/test_upload.py 1849
8477             # remove the storedir, wiping out any existing shares
8478             fileutil.fp_remove(storedir)
8479             # create an empty storedir to replace the one we just removed
8480-            storedir.mkdir()
8481+            storedir.makedirs()
8482             client = self.g.clients[0]
8483             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
8484             return client
8485hunk ./src/allmydata/test/test_upload.py 1890
8486             # remove the storedir, wiping out any existing shares
8487             fileutil.fp_remove(storedir)
8488             # create an empty storedir to replace the one we just removed
8489-            storedir.mkdir()
8490+            storedir.makedirs()
8491             client = self.g.clients[0]
8492             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
8493             return client
8494}
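
The mkdir() -> makedirs() changes in the patch above matter because Twisted's FilePath.mkdir() wraps os.mkdir() and creates only the leaf directory, while FilePath.makedirs() wraps os.makedirs() and creates any missing ancestors as well. A minimal sketch of the difference (the path used here is hypothetical):

    from twisted.python.filepath import FilePath

    base = FilePath("/tmp/fp-demo")                 # hypothetical scratch location
    nested = base.child("storage").child("shares")

    # nested.mkdir() would raise OSError if "storage" does not exist yet,
    # since it creates only the leaf. makedirs() creates the whole chain.
    # (Like os.makedirs(), it raises if the leaf already exists; in the
    # tests above the directory was just removed, so that cannot happen.)
    nested.makedirs()
    assert nested.isdir()
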
8495[uri.py: resolve a conflict between trunk and the pluggable-backends patches. refs #999
8496david-sarah@jacaranda.org**20110921222038
8497 Ignore-this: ffeeab60d8e71a6a29a002d024d76fcf
8498] {
8499hunk ./src/allmydata/uri.py 829
8500     def is_mutable(self):
8501         return False
8502 
8503+    def is_readonly(self):
8504+        return True
8505+
8506+    def get_readonly(self):
8507+        return self
8508+
8509+
8510 class DirectoryURIVerifier(_DirectoryBaseURI):
8511     implements(IVerifierURI)
8512 
8513hunk ./src/allmydata/uri.py 855
8514     def is_mutable(self):
8515         return False
8516 
8517+    def is_readonly(self):
8518+        return True
8519+
8520+    def get_readonly(self):
8521+        return self
8522+
8523 
8524 class ImmutableDirectoryURIVerifier(DirectoryURIVerifier):
8525     implements(IVerifierURI)
8526}
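
The methods added above fill in part of the IVerifierURI contract: a verify cap grants no write authority, so it is its own read-only form. A minimal sketch of the idiom, not the actual uri.py classes:

    class VerifierExample(object):
        # Illustrative only: verify caps are read-only by construction,
        # so the read-only view of a verifier is the verifier itself.
        def is_mutable(self):
            return False
        def is_readonly(self):
            return True
        def get_readonly(self):
            return self

    v = VerifierExample()
    assert v.is_readonly() and v.get_readonly() is v
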
8527[Fix some more test failures. refs #999
8528david-sarah@jacaranda.org**20110922045451
8529 Ignore-this: b726193cbd03a7c3d343f6e4a0f33ee7
8530] {
8531hunk ./src/allmydata/scripts/debug.py 42
8532     from allmydata.util.encodingutil import quote_output
8533 
8534     out = options.stdout
8535+    filename = options['filename']
8536 
8537     # check the version, to see if we have a mutable or immutable share
8538hunk ./src/allmydata/scripts/debug.py 45
8539-    print >>out, "share filename: %s" % quote_output(options['filename'])
8540+    print >>out, "share filename: %s" % quote_output(filename)
8541 
8542hunk ./src/allmydata/scripts/debug.py 47
8543-    share = get_share("", 0, fp)
8544+    share = get_share("", 0, FilePath(filename))
8545     if share.sharetype == "mutable":
8546         return dump_mutable_share(options, share)
8547     else:
8548hunk ./src/allmydata/storage/backends/disk/mutable.py 85
8549         self.parent = parent # for logging
8550 
8551     def log(self, *args, **kwargs):
8552-        return self.parent.log(*args, **kwargs)
8553+        if self.parent:
8554+            return self.parent.log(*args, **kwargs)
8555 
8556     def create(self, serverid, write_enabler):
8557         assert not self._home.exists()
8558hunk ./src/allmydata/storage/common.py 6
8559 class DataTooLargeError(Exception):
8560     pass
8561 
8562-class UnknownMutableContainerVersionError(Exception):
8563+class UnknownContainerVersionError(Exception):
8564     pass
8565 
8566hunk ./src/allmydata/storage/common.py 9
8567-class UnknownImmutableContainerVersionError(Exception):
8568+class UnknownMutableContainerVersionError(UnknownContainerVersionError):
8569+    pass
8570+
8571+class UnknownImmutableContainerVersionError(UnknownContainerVersionError):
8572     pass
8573 
8574 
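 Introducing a common base class lets code paths that handle both share types (such as remote_slot_readv, and the test_bad_magic change later in this patch) catch one exception, while the concrete subclasses stay available for precise assertions. A self-contained restatement of the hierarchy from the hunk above:
 
     class UnknownContainerVersionError(Exception):
         pass
 
     class UnknownMutableContainerVersionError(UnknownContainerVersionError):
         pass
 
     class UnknownImmutableContainerVersionError(UnknownContainerVersionError):
         pass
 
     # Either concrete error is caught by the shared base class:
     for cls in (UnknownMutableContainerVersionError,
                 UnknownImmutableContainerVersionError):
         try:
             raise cls("container had version 0 but we wanted 1")
         except UnknownContainerVersionError:
             pass
 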
8575hunk ./src/allmydata/storage/crawler.py 208
8576         try:
8577             state = pickle.loads(self.statefp.getContent())
8578         except EnvironmentError:
8579+            if self.statefp.exists():
8580+                raise
8581             state = {"version": 1,
8582                      "last-cycle-finished": None,
8583                      "current-cycle": None,
8584hunk ./src/allmydata/storage/server.py 24
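
The two added lines distinguish "no state file yet" (expected on a crawler's first run, so fall back to the default state) from a real I/O error on a file that does exist, which should propagate. The pattern, as a standalone sketch:

    import pickle
    from twisted.python.filepath import FilePath

    def load_crawler_state(statefp):
        # Swallow the error only when it just means "no state saved yet";
        # re-raise if the file exists but could not be read (permissions,
        # transient I/O failure, ...).
        try:
            return pickle.loads(statefp.getContent())
        except EnvironmentError:
            if statefp.exists():
                raise
            return {"version": 1,
                    "last-cycle-finished": None,
                    "current-cycle": None}
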
8585 
8586     name = 'storage'
8587     LeaseCheckerClass = LeaseCheckingCrawler
8588+    BucketCounterClass = BucketCountingCrawler
8589     DEFAULT_EXPIRATION_POLICY = {
8590         'enabled': False,
8591         'mode': 'age',
8592hunk ./src/allmydata/storage/server.py 70
8593 
8594     def _setup_bucket_counter(self):
8595         statefp = self._statedir.child("bucket_counter.state")
8596-        self.bucket_counter = BucketCountingCrawler(self.backend, statefp)
8597+        self.bucket_counter = self.BucketCounterClass(self.backend, statefp)
8598         self.bucket_counter.setServiceParent(self)
8599 
8600     def _setup_lease_checker(self, expiration_policy):
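
Referring to the crawler through a class attribute turns it into a test seam: a subclass can substitute an instrumented crawler without duplicating the setup code, which is what the MyStorageServer change later in this patch relies on. A sketch of the pattern with illustrative (non-Tahoe) names:

    class Crawler(object):
        def __init__(self, backend, statefp):
            self.backend = backend
            self.statefp = statefp

    class Server(object):
        CrawlerClass = Crawler                     # overridable seam
        def setup_crawler(self, backend, statefp):
            self.crawler = self.CrawlerClass(backend, statefp)

    class InstrumentedCrawler(Crawler):
        pass                                       # test hooks would go here

    class InstrumentedServer(Server):
        CrawlerClass = InstrumentedCrawler         # the only override needed

    s = InstrumentedServer()
    s.setup_crawler(backend=None, statefp=None)
    assert isinstance(s.crawler, InstrumentedCrawler)
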
8601hunk ./src/allmydata/storage/server.py 224
8602             share.add_or_renew_lease(lease_info)
8603             alreadygot.add(share.get_shnum())
8604 
8605-        for shnum in sharenums - alreadygot:
8606+        for shnum in set(sharenums) - alreadygot:
8607             if shareset.has_incoming(shnum):
8608                 # Note that we don't create BucketWriters for shnums that
8609                 # have a partial share (in incoming/), so if a second upload
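
The set() wrapper is needed because the difference operator is only defined between two sets; sharenums may arrive from a remote caller as a plain sequence, while alreadygot is built as a set just above. For example:

    sharenums = [0, 1, 2]            # may arrive as a list or tuple
    alreadygot = set([1])
    # sharenums - alreadygot  ->  TypeError for a list left operand
    wanted = set(sharenums) - alreadygot
    assert wanted == set([0, 2])
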
8610hunk ./src/allmydata/storage/server.py 247
8611 
8612     def remote_add_lease(self, storageindex, renew_secret, cancel_secret,
8613                          owner_num=1):
8614-        # cancel_secret is no longer used.
8615         start = time.time()
8616         self.count("add-lease")
8617         new_expire_time = time.time() + 31*24*60*60
8618hunk ./src/allmydata/storage/server.py 250
8619-        lease_info = LeaseInfo(owner_num, renew_secret,
8620+        lease_info = LeaseInfo(owner_num, renew_secret, cancel_secret,
8621                                new_expire_time, self._serverid)
8622 
8623         try:
8624hunk ./src/allmydata/storage/server.py 254
8625-            self.backend.add_or_renew_lease(lease_info)
8626+            shareset = self.backend.get_shareset(storageindex)
8627+            shareset.add_or_renew_lease(lease_info)
8628         finally:
8629             self.add_latency("add-lease", time.time() - start)
8630 
8631hunk ./src/allmydata/test/test_crawler.py 3
8632 
8633 import time
8634-import os.path
8635+
8636 from twisted.trial import unittest
8637 from twisted.application import service
8638 from twisted.internet import defer
8639hunk ./src/allmydata/test/test_crawler.py 10
8640 from twisted.python.filepath import FilePath
8641 from foolscap.api import eventually, fireEventually
8642 
8643-from allmydata.util import fileutil, hashutil, pollmixin
8644+from allmydata.util import hashutil, pollmixin
8645 from allmydata.storage.server import StorageServer, si_b2a
8646 from allmydata.storage.crawler import ShareCrawler, TimeSliceExceeded
8647 from allmydata.storage.backends.disk.disk_backend import DiskBackend
8648hunk ./src/allmydata/test/test_mutable.py 3025
8649             cso.stderr = StringIO()
8650             debug.catalog_shares(cso)
8651             shares = cso.stdout.getvalue().splitlines()
8652+            self.failIf(len(shares) < 1, shares)
8653             oneshare = shares[0] # all shares should be MDMF
8654             self.failIf(oneshare.startswith("UNKNOWN"), oneshare)
8655             self.failUnless(oneshare.startswith("MDMF"), oneshare)
8656hunk ./src/allmydata/test/test_storage.py 1
8657-import time, os.path, platform, stat, re, simplejson, struct, shutil, itertools
8658+import time, os.path, platform, re, simplejson, struct, itertools
8659 
8660 import mock
8661 
8662hunk ./src/allmydata/test/test_storage.py 15
8663 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
8664 from allmydata.storage.server import StorageServer
8665 from allmydata.storage.backends.disk.disk_backend import DiskBackend
8666+from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
8667 from allmydata.storage.backends.disk.mutable import MutableDiskShare
8668 from allmydata.storage.bucket import BucketWriter, BucketReader
8669hunk ./src/allmydata/test/test_storage.py 18
8670-from allmydata.storage.common import DataTooLargeError, \
8671+from allmydata.storage.common import DataTooLargeError, UnknownContainerVersionError, \
8672      UnknownMutableContainerVersionError, UnknownImmutableContainerVersionError
8673 from allmydata.storage.lease import LeaseInfo
8674 from allmydata.storage.crawler import BucketCountingCrawler
8675hunk ./src/allmydata/test/test_storage.py 88
8676 
8677     def test_create(self):
8678         incoming, final = self.make_workdir("test_create")
8679-        share = ImmutableDiskShare("", 0, incoming, final, 200)
8680+        share = ImmutableDiskShare("", 0, incoming, final, max_size=200)
8681         bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
8682         bw.remote_write(0, "a"*25)
8683         bw.remote_write(25, "b"*25)
8684hunk ./src/allmydata/test/test_storage.py 98
8685 
8686     def test_readwrite(self):
8687         incoming, final = self.make_workdir("test_readwrite")
8688-        share = ImmutableDiskShare("", 0, incoming, 200)
8689+        share = ImmutableDiskShare("", 0, incoming, final, max_size=200)
8690         bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
8691         bw.remote_write(0, "a"*25)
8692         bw.remote_write(25, "b"*25)
8693hunk ./src/allmydata/test/test_storage.py 106
8694         bw.remote_close()
8695 
8696         # now read from it
8697-        br = BucketReader(self, bw.finalhome)
8698+        br = BucketReader(self, share)
8699         self.failUnlessEqual(br.remote_read(0, 25), "a"*25)
8700         self.failUnlessEqual(br.remote_read(25, 25), "b"*25)
8701         self.failUnlessEqual(br.remote_read(50, 7), "c"*7)
8702hunk ./src/allmydata/test/test_storage.py 131
8703         ownernumber = struct.pack('>L', 0)
8704         renewsecret  = 'THIS LETS ME RENEW YOUR FILE....'
8705         assert len(renewsecret) == 32
8706-        cancelsecret = 'THIS LETS ME KILL YOUR FILE HAHA'
8707+        cancelsecret = 'THIS USED TO LET ME KILL YR FILE'
8708         assert len(cancelsecret) == 32
8709         expirationtime = struct.pack('>L', 60*60*24*31) # 31 days in seconds
8710 
8711hunk ./src/allmydata/test/test_storage.py 142
8712         incoming, final = self.make_workdir("test_read_past_end_of_share_data")
8713 
8714         final.setContent(share_file_data)
8715+        share = ImmutableDiskShare("", 0, final)
8716 
8717         mockstorageserver = mock.Mock()
8718 
8719hunk ./src/allmydata/test/test_storage.py 147
8720         # Now read from it.
8721-        br = BucketReader(mockstorageserver, final)
8722+        br = BucketReader(mockstorageserver, share)
8723 
8724         self.failUnlessEqual(br.remote_read(0, len(share_data)), share_data)
8725 
8726hunk ./src/allmydata/test/test_storage.py 260
8727 
8728         # now read everything back
8729         def _start_reading(res):
8730-            br = BucketReader(self, sharefp)
8731+            share = ImmutableDiskShare("", 0, sharefp)
8732+            br = BucketReader(self, share)
8733             rb = RemoteBucket()
8734             rb.target = br
8735             server = NoNetworkServer("abc", None)
8736hunk ./src/allmydata/test/test_storage.py 346
8737         if 'cygwin' in syslow or 'windows' in syslow or 'darwin' in syslow:
8738             raise unittest.SkipTest("If your filesystem doesn't support efficient sparse files then this test is very expensive (Mac OS X and Windows don't support efficient sparse files).")
8739 
8740-        avail = fileutil.get_available_space('.', 512*2**20)
8741+        avail = fileutil.get_available_space(FilePath('.'), 512*2**20)
8742         if avail <= 4*2**30:
8743             raise unittest.SkipTest("This test will spuriously fail if you have less than 4 GiB free on your filesystem.")
8744 
8745hunk ./src/allmydata/test/test_storage.py 476
8746         w[0].remote_write(0, "\xff"*10)
8747         w[0].remote_close()
8748 
8749-        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
8750+        fp = ss.backend.get_shareset("si1")._sharehomedir.child("0")
8751         f = fp.open("rb+")
8752hunk ./src/allmydata/test/test_storage.py 478
8753-        f.seek(0)
8754-        f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1
8755-        f.close()
8756+        try:
8757+            f.seek(0)
8758+            f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1
8759+        finally:
8760+            f.close()
8761 
8762         ss.remote_get_buckets("allocate")
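
Several hunks in this patch wrap the seek-and-write corruption helpers in try/finally so that the handle is closed even if the write raises, instead of leaking an open file into later tests. The recurring shape, as a standalone sketch:

    import struct

    def corrupt_version_field(fp):
        # fp is a Twisted FilePath to a share file, as in the tests above.
        f = fp.open("rb+")
        try:
            f.seek(0)
            f.write(struct.pack(">L", 0))   # 0 is invalid: minimum version is 1
        finally:
            f.close()
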
8763 
8764hunk ./src/allmydata/test/test_storage.py 575
8765 
8766     def test_seek(self):
8767         basedir = self.workdir("test_seek_behavior")
8768-        fileutil.make_dirs(basedir)
8769-        filename = os.path.join(basedir, "testfile")
8770-        f = open(filename, "wb")
8771-        f.write("start")
8772-        f.close()
8773+        basedir.makedirs()
8774+        fp = basedir.child("testfile")
8775+        fp.setContent("start")
8776+
8777         # mode="w" allows seeking-to-create-holes, but truncates pre-existing
8778         # files. mode="a" preserves previous contents but does not allow
8779         # seeking-to-create-holes. mode="r+" allows both.
8780hunk ./src/allmydata/test/test_storage.py 582
8781-        f = open(filename, "rb+")
8782-        f.seek(100)
8783-        f.write("100")
8784-        f.close()
8785-        filelen = os.stat(filename)[stat.ST_SIZE]
8786+        f = fp.open("rb+")
8787+        try:
8788+            f.seek(100)
8789+            f.write("100")
8790+        finally:
8791+            f.close()
8792+        fp.restat()
8793+        filelen = fp.getsize()
8794         self.failUnlessEqual(filelen, 100+3)
8795hunk ./src/allmydata/test/test_storage.py 591
8796-        f2 = open(filename, "rb")
8797-        self.failUnlessEqual(f2.read(5), "start")
8798-
8799+        f2 = fp.open("rb")
8800+        try:
8801+            self.failUnlessEqual(f2.read(5), "start")
8802+        finally:
8803+            f2.close()
8804 
8805     def test_leases(self):
8806         ss = self.create("test_leases")
8807hunk ./src/allmydata/test/test_storage.py 693
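The comment in the rewritten test_seek summarizes the file-mode trade-off: "w" truncates, "a" cannot seek backwards to create holes, and "r+" both preserves existing contents and permits seeking past the end. A compact demonstration, using a hypothetical scratch file:

    from twisted.python.filepath import FilePath

    fp = FilePath("/tmp/seek-demo")     # hypothetical scratch file
    fp.setContent("start")
    f = fp.open("rb+")                  # preserves contents, allows holes
    try:
        f.seek(100)
        f.write("100")
    finally:
        f.close()
    fp.restat()
    assert fp.getsize() == 103          # bytes 5..99 read back as zeroes
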
8808 
8809     def test_readonly(self):
8810         workdir = self.workdir("test_readonly")
8811-        ss = StorageServer(workdir, "\x00" * 20, readonly_storage=True)
8812+        backend = DiskBackend(workdir, readonly=True)
8813+        ss = StorageServer("\x00" * 20, backend, workdir)
8814         ss.setServiceParent(self.sparent)
8815 
8816         already,writers = self.allocate(ss, "vid", [0,1,2], 75)
8817hunk ./src/allmydata/test/test_storage.py 710
8818 
8819     def test_discard(self):
8820         # discard is really only used for other tests, but we test it anyway
8821+        # XXX replace this with a null backend test
8822         workdir = self.workdir("test_discard")
8823hunk ./src/allmydata/test/test_storage.py 712
8824-        ss = StorageServer(workdir, "\x00" * 20, discard_storage=True)
8825+        backend = DiskBackend(workdir, readonly=False, discard_storage=True)
8826+        ss = StorageServer("\x00" * 20, backend, workdir)
8827         ss.setServiceParent(self.sparent)
8828 
8829         already,writers = self.allocate(ss, "vid", [0,1,2], 75)
8830hunk ./src/allmydata/test/test_storage.py 731
8831 
8832     def test_advise_corruption(self):
8833         workdir = self.workdir("test_advise_corruption")
8834-        ss = StorageServer(workdir, "\x00" * 20, discard_storage=True)
8835+        backend = DiskBackend(workdir, readonly=False, discard_storage=True)
8836+        ss = StorageServer("\x00" * 20, backend, workdir)
8837         ss.setServiceParent(self.sparent)
8838 
8839         si0_s = base32.b2a("si0")
8840hunk ./src/allmydata/test/test_storage.py 738
8841         ss.remote_advise_corrupt_share("immutable", "si0", 0,
8842                                        "This share smells funny.\n")
8843-        reportdir = os.path.join(workdir, "corruption-advisories")
8844-        reports = os.listdir(reportdir)
8845+        reportdir = workdir.child("corruption-advisories")
8846+        reports = [child.basename() for child in reportdir.children()]
8847         self.failUnlessEqual(len(reports), 1)
8848         report_si0 = reports[0]
8849hunk ./src/allmydata/test/test_storage.py 742
8850-        self.failUnlessIn(si0_s, report_si0)
8851-        f = open(os.path.join(reportdir, report_si0), "r")
8852-        report = f.read()
8853-        f.close()
8854+        self.failUnlessIn(si0_s, str(report_si0))
8855+        report = reportdir.child(report_si0).getContent()
8856+
8857         self.failUnlessIn("type: immutable", report)
8858         self.failUnlessIn("storage_index: %s" % si0_s, report)
8859         self.failUnlessIn("share_number: 0", report)
8860hunk ./src/allmydata/test/test_storage.py 762
8861         self.failUnlessEqual(set(b.keys()), set([1]))
8862         b[1].remote_advise_corrupt_share("This share tastes like dust.\n")
8863 
8864-        reports = os.listdir(reportdir)
8865+        reports = [child.basename() for child in reportdir.children()]
8866         self.failUnlessEqual(len(reports), 2)
8867hunk ./src/allmydata/test/test_storage.py 764
8868-        report_si1 = [r for r in reports if si1_s in r][0]
8869-        f = open(os.path.join(reportdir, report_si1), "r")
8870-        report = f.read()
8871-        f.close()
8872+        report_si1 = [r for r in reports if si1_s in str(r)][0]
8873+        report = reportdir.child(report_si1).getContent()
8874+
8875         self.failUnlessIn("type: immutable", report)
8876         self.failUnlessIn("storage_index: %s" % si1_s, report)
8877         self.failUnlessIn("share_number: 1", report)
8878hunk ./src/allmydata/test/test_storage.py 783
8879         return self.sparent.stopService()
8880 
8881     def workdir(self, name):
8882-        basedir = os.path.join("storage", "MutableServer", name)
8883-        return basedir
8884+        return FilePath("storage").child("MutableServer").child(name)
8885 
8886     def create(self, name):
8887         workdir = self.workdir(name)
8888hunk ./src/allmydata/test/test_storage.py 787
8889-        ss = StorageServer(workdir, "\x00" * 20)
8890+        backend = DiskBackend(workdir)
8891+        ss = StorageServer("\x00" * 20, backend, workdir)
8892         ss.setServiceParent(self.sparent)
8893         return ss
8894 
8895hunk ./src/allmydata/test/test_storage.py 810
8896         cancel_secret = self.cancel_secret(lease_tag)
8897         rstaraw = ss.remote_slot_testv_and_readv_and_writev
8898         testandwritev = dict( [ (shnum, ([], [], None) )
8899-                         for shnum in sharenums ] )
8900+                                for shnum in sharenums ] )
8901         readv = []
8902         rc = rstaraw(storage_index,
8903                      (write_enabler, renew_secret, cancel_secret),
8904hunk ./src/allmydata/test/test_storage.py 824
8905     def test_bad_magic(self):
8906         ss = self.create("test_bad_magic")
8907         self.allocate(ss, "si1", "we1", self._lease_secret.next(), set([0]), 10)
8908-        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
8909+        fp = ss.backend.get_shareset("si1")._sharehomedir.child("0")
8910         f = fp.open("rb+")
8911hunk ./src/allmydata/test/test_storage.py 826
8912-        f.seek(0)
8913-        f.write("BAD MAGIC")
8914-        f.close()
8915+        try:
8916+            f.seek(0)
8917+            f.write("BAD MAGIC")
8918+        finally:
8919+            f.close()
8920         read = ss.remote_slot_readv
8921hunk ./src/allmydata/test/test_storage.py 832
8922-        e = self.failUnlessRaises(UnknownMutableContainerVersionError,
8923+
8924+        # This used to test for UnknownMutableContainerVersionError,
8925+        # but the current code raises UnknownImmutableContainerVersionError.
8926+        # (It changed because remote_slot_readv now works with either
8927+        # mutable or immutable shares.) Since the share file doesn't have
8928+        # the mutable magic, it's not clear that this is wrong.
8929+        # For now, accept either exception.
8930+        e = self.failUnlessRaises(UnknownContainerVersionError,
8931                                   read, "si1", [0], [(0,10)])
8932hunk ./src/allmydata/test/test_storage.py 841
8933-        self.failUnlessIn(" had magic ", str(e))
8934+        self.failUnlessIn(" had ", str(e))
8935         self.failUnlessIn(" but we wanted ", str(e))
8936 
8937     def test_container_size(self):
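
Asserting on the base class keeps the test valid whichever concrete error the server raises, while the exception's message can still be inspected. In plain try/except form, with a stand-in for the remote read call (the stub and its message are illustrative only):

    class UnknownContainerVersionError(Exception):
        pass

    def read(si, shnums, readv):
        # stand-in for ss.remote_slot_readv on a share with a bad version
        raise UnknownContainerVersionError("had version 0 but we wanted 1")

    try:
        read("si1", [0], [(0, 10)])
    except UnknownContainerVersionError, e:
        assert " but we wanted " in str(e)
    else:
        assert False, "expected an UnknownContainerVersionError"
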
8938hunk ./src/allmydata/test/test_storage.py 1248
8939 
8940         # create a random non-numeric file in the bucket directory, to
8941         # exercise the code that's supposed to ignore those.
8942-        bucket_dir = ss.backend.get_shareset("si1").sharehomedir
8943+        bucket_dir = ss.backend.get_shareset("si1")._sharehomedir
8944         bucket_dir.child("ignore_me.txt").setContent("you ought to be ignoring me\n")
8945 
8946hunk ./src/allmydata/test/test_storage.py 1251
8947-        s0 = MutableDiskShare(os.path.join(bucket_dir, "0"))
8948+        s0 = MutableDiskShare("", 0, bucket_dir.child("0"))
8949         self.failUnlessEqual(len(list(s0.get_leases())), 1)
8950 
8951         # add-lease on a missing storage index is silently ignored
8952hunk ./src/allmydata/test/test_storage.py 1365
8953         # note: this is a detail of the storage server implementation, and
8954         # may change in the future
8955         prefix = si[:2]
8956-        prefixdir = os.path.join(self.workdir("test_remove"), "shares", prefix)
8957-        bucketdir = os.path.join(prefixdir, si)
8958-        self.failUnless(os.path.exists(prefixdir), prefixdir)
8959-        self.failIf(os.path.exists(bucketdir), bucketdir)
8960+        prefixdir = self.workdir("test_remove").child("shares").child(prefix)
8961+        bucketdir = prefixdir.child(si)
8962+        self.failUnless(prefixdir.exists(), prefixdir)
8963+        self.failIf(bucketdir.exists(), bucketdir)
8964 
8965 
8966 class MDMFProxies(unittest.TestCase, ShouldFailMixin):
8967hunk ./src/allmydata/test/test_storage.py 1420
8968 
8969 
8970     def workdir(self, name):
8971-        basedir = os.path.join("storage", "MutableServer", name)
8972-        return basedir
8973-
8974+        return FilePath("storage").child("MDMFProxies").child(name)
8975 
8976     def create(self, name):
8977         workdir = self.workdir(name)
8978hunk ./src/allmydata/test/test_storage.py 1424
8979-        ss = StorageServer(workdir, "\x00" * 20)
8980+        backend = DiskBackend(workdir)
8981+        ss = StorageServer("\x00" * 20, backend, workdir)
8982         ss.setServiceParent(self.sparent)
8983         return ss
8984 
8985hunk ./src/allmydata/test/test_storage.py 2798
8986         return self.sparent.stopService()
8987 
8988     def workdir(self, name):
8989-        return FilePath("storage").child("Server").child(name)
8990+        return FilePath("storage").child("Stats").child(name)
8991 
8992     def create(self, name):
8993         workdir = self.workdir(name)
8994hunk ./src/allmydata/test/test_storage.py 2886
8995             d.callback(None)
8996 
8997 class MyStorageServer(StorageServer):
8998-    def add_bucket_counter(self):
8999-        statefile = os.path.join(self.storedir, "bucket_counter.state")
9000-        self.bucket_counter = MyBucketCountingCrawler(self, statefile)
9001-        self.bucket_counter.setServiceParent(self)
9002+    BucketCounterClass = MyBucketCountingCrawler
9003+
9004 
9005 class BucketCounter(unittest.TestCase, pollmixin.PollMixin):
9006 
9007hunk ./src/allmydata/test/test_storage.py 2899
9008 
9009     def test_bucket_counter(self):
9010         basedir = "storage/BucketCounter/bucket_counter"
9011-        fileutil.make_dirs(basedir)
9012-        ss = StorageServer(basedir, "\x00" * 20)
9013+        fp = FilePath(basedir)
9014+        backend = DiskBackend(fp)
9015+        ss = StorageServer("\x00" * 20, backend, fp)
9016+
9017         # to make sure we capture the bucket-counting-crawler in the middle
9018         # of a cycle, we reach in and reduce its maximum slice time to 0. We
9019         # also make it start sooner than usual.
9020hunk ./src/allmydata/test/test_storage.py 2958
9021 
9022     def test_bucket_counter_cleanup(self):
9023         basedir = "storage/BucketCounter/bucket_counter_cleanup"
9024-        fileutil.make_dirs(basedir)
9025-        ss = StorageServer(basedir, "\x00" * 20)
9026+        fp = FilePath(basedir)
9027+        backend = DiskBackend(fp)
9028+        ss = StorageServer("\x00" * 20, backend, fp)
9029+
9030         # to make sure we capture the bucket-counting-crawler in the middle
9031         # of a cycle, we reach in and reduce its maximum slice time to 0.
9032         ss.bucket_counter.slow_start = 0
9033hunk ./src/allmydata/test/test_storage.py 3002
9034 
9035     def test_bucket_counter_eta(self):
9036         basedir = "storage/BucketCounter/bucket_counter_eta"
9037-        fileutil.make_dirs(basedir)
9038-        ss = MyStorageServer(basedir, "\x00" * 20)
9039+        fp = FilePath(basedir)
9040+        backend = DiskBackend(fp)
9041+        ss = MyStorageServer("\x00" * 20, backend, fp)
9042         ss.bucket_counter.slow_start = 0
9043         # these will be fired inside finished_prefix()
9044         hooks = ss.bucket_counter.hook_ds = [defer.Deferred() for i in range(3)]
9045hunk ./src/allmydata/test/test_storage.py 3125
9046 
9047     def test_basic(self):
9048         basedir = "storage/LeaseCrawler/basic"
9049-        fileutil.make_dirs(basedir)
9050-        ss = InstrumentedStorageServer(basedir, "\x00" * 20)
9051+        fp = FilePath(basedir)
9052+        backend = DiskBackend(fp)
9053+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp)
9054+
9055         # make it start sooner than usual.
9056         lc = ss.lease_checker
9057         lc.slow_start = 0
9058hunk ./src/allmydata/test/test_storage.py 3141
9059         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
9060 
9061         # add a non-sharefile to exercise another code path
9062-        fp = ss.backend.get_shareset(immutable_si_0).sharehomedir.child("not-a-share")
9063+        fp = ss.backend.get_shareset(immutable_si_0)._sharehomedir.child("not-a-share")
9064         fp.setContent("I am not a share.\n")
9065 
9066         # this is before the crawl has started, so we're not in a cycle yet
9067hunk ./src/allmydata/test/test_storage.py 3264
9068             self.failUnlessEqual(rec["configured-sharebytes"], 0)
9069 
9070             def _get_sharefile(si):
9071-                return list(ss._iter_share_files(si))[0]
9072+                return list(ss.backend.get_shareset(si).get_shares())[0]
9073             def count_leases(si):
9074                 return len(list(_get_sharefile(si).get_leases()))
9075             self.failUnlessEqual(count_leases(immutable_si_0), 1)
9076hunk ./src/allmydata/test/test_storage.py 3296
9077         for i,lease in enumerate(sf.get_leases()):
9078             if lease.renew_secret == renew_secret:
9079                 lease.expiration_time = new_expire_time
9080-                f = open(sf.home, 'rb+')
9081-                sf._write_lease_record(f, i, lease)
9082-                f.close()
9083+                f = sf._home.open('rb+')
9084+                try:
9085+                    sf._write_lease_record(f, i, lease)
9086+                finally:
9087+                    f.close()
9088                 return
9089         raise IndexError("unable to renew non-existent lease")
9090 
9091hunk ./src/allmydata/test/test_storage.py 3306
9092     def test_expire_age(self):
9093         basedir = "storage/LeaseCrawler/expire_age"
9094-        fileutil.make_dirs(basedir)
9095+        fp = FilePath(basedir)
9096+        backend = DiskBackend(fp)
9097+
9098         # setting 'override_lease_duration' to 2000 means that any lease that
9099         # is more than 2000 seconds old will be expired.
9100         expiration_policy = {
9101hunk ./src/allmydata/test/test_storage.py 3317
9102             'override_lease_duration': 2000,
9103             'sharetypes': ('mutable', 'immutable'),
9104         }
9105-        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
9106+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
9107+
9108         # make it start sooner than usual.
9109         lc = ss.lease_checker
9110         lc.slow_start = 0
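
With 'mode' set to 'age' and an override_lease_duration configured, expiry ignores the duration stored in each lease: a lease is reclaimed as soon as its last grant or renewal is older than the override. A simplified restatement of the policy configured above:

    import time

    OVERRIDE_LEASE_DURATION = 2000        # seconds, as in the policy above

    def lease_is_expired(renewal_time, now=None):
        # 'age' mode with an override: expire anything whose last renewal
        # is more than OVERRIDE_LEASE_DURATION seconds in the past.
        if now is None:
            now = time.time()
        return now - renewal_time > OVERRIDE_LEASE_DURATION

    now = time.time()
    assert lease_is_expired(now - 3000, now)       # backdated past the limit
    assert not lease_is_expired(now - 1000, now)   # still within the window
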
9111hunk ./src/allmydata/test/test_storage.py 3330
9112         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
9113 
9114         def count_shares(si):
9115-            return len(list(ss._iter_share_files(si)))
9116+            return len(list(ss.backend.get_shareset(si).get_shares()))
9117         def _get_sharefile(si):
9118hunk ./src/allmydata/test/test_storage.py 3332
9119-            return list(ss._iter_share_files(si))[0]
9120+            return list(ss.backend.get_shareset(si).get_shares())[0]
9121         def count_leases(si):
9122             return len(list(_get_sharefile(si).get_leases()))
9123 
9124hunk ./src/allmydata/test/test_storage.py 3355
9125 
9126         sf0 = _get_sharefile(immutable_si_0)
9127         self.backdate_lease(sf0, self.renew_secrets[0], now - 1000)
9128-        sf0_size = os.stat(sf0.home).st_size
9129+        sf0_size = sf0.get_size()
9130 
9131         # immutable_si_1 gets an extra lease
9132         sf1 = _get_sharefile(immutable_si_1)
9133hunk ./src/allmydata/test/test_storage.py 3363
9134 
9135         sf2 = _get_sharefile(mutable_si_2)
9136         self.backdate_lease(sf2, self.renew_secrets[3], now - 1000)
9137-        sf2_size = os.stat(sf2.home).st_size
9138+        sf2_size = sf2.get_size()
9139 
9140         # mutable_si_3 gets an extra lease
9141         sf3 = _get_sharefile(mutable_si_3)
9142hunk ./src/allmydata/test/test_storage.py 3450
9143 
9144     def test_expire_cutoff_date(self):
9145         basedir = "storage/LeaseCrawler/expire_cutoff_date"
9146-        fileutil.make_dirs(basedir)
9147+        fp = FilePath(basedir)
9148+        backend = DiskBackend(fp)
9149+
9150         # setting 'cutoff_date' to 2000 seconds ago means that any lease that
9151         # is more than 2000 seconds old will be expired.
9152         now = time.time()
9153hunk ./src/allmydata/test/test_storage.py 3463
9154             'cutoff_date': then,
9155             'sharetypes': ('mutable', 'immutable'),
9156         }
9157-        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
9158+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
9159+
9160         # make it start sooner than usual.
9161         lc = ss.lease_checker
9162         lc.slow_start = 0
9163hunk ./src/allmydata/test/test_storage.py 3476
9164         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
9165 
9166         def count_shares(si):
9167-            return len(list(ss._iter_share_files(si)))
9168+            return len(list(ss.backend.get_shareset(si).get_shares()))
9169         def _get_sharefile(si):
9170hunk ./src/allmydata/test/test_storage.py 3478
9171-            return list(ss._iter_share_files(si))[0]
9172+            return list(ss.backend.get_shareset(si).get_shares())[0]
9173         def count_leases(si):
9174             return len(list(_get_sharefile(si).get_leases()))
9175 
9176hunk ./src/allmydata/test/test_storage.py 3505
9177 
9178         sf0 = _get_sharefile(immutable_si_0)
9179         self.backdate_lease(sf0, self.renew_secrets[0], new_expiration_time)
9180-        sf0_size = os.stat(sf0.home).st_size
9181+        sf0_size = sf0.get_size()
9182 
9183         # immutable_si_1 gets an extra lease
9184         sf1 = _get_sharefile(immutable_si_1)
9185hunk ./src/allmydata/test/test_storage.py 3513
9186 
9187         sf2 = _get_sharefile(mutable_si_2)
9188         self.backdate_lease(sf2, self.renew_secrets[3], new_expiration_time)
9189-        sf2_size = os.stat(sf2.home).st_size
9190+        sf2_size = sf2.get_size()
9191 
9192         # mutable_si_3 gets an extra lease
9193         sf3 = _get_sharefile(mutable_si_3)
9194hunk ./src/allmydata/test/test_storage.py 3605
9195 
9196     def test_only_immutable(self):
9197         basedir = "storage/LeaseCrawler/only_immutable"
9198-        fileutil.make_dirs(basedir)
9199+        fp = FilePath(basedir)
9200+        backend = DiskBackend(fp)
9201+
9202         # setting 'cutoff_date' to 2000 seconds ago means that any lease that
9203         # is more than 2000 seconds old will be expired.
9204         now = time.time()
9205hunk ./src/allmydata/test/test_storage.py 3618
9206             'cutoff_date': then,
9207             'sharetypes': ('immutable',),
9208         }
9209-        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
9210+        ss = StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
9211         lc = ss.lease_checker
9212         lc.slow_start = 0
9213         webstatus = StorageStatus(ss)
9214hunk ./src/allmydata/test/test_storage.py 3629
9215         new_expiration_time = now - 3000 + 31*24*60*60
9216 
9217         def count_shares(si):
9218-            return len(list(ss._iter_share_files(si)))
9219+            return len(list(ss.backend.get_shareset(si).get_shares()))
9220         def _get_sharefile(si):
9221hunk ./src/allmydata/test/test_storage.py 3631
9222-            return list(ss._iter_share_files(si))[0]
9223+            return list(ss.backend.get_shareset(si).get_shares())[0]
9224         def count_leases(si):
9225             return len(list(_get_sharefile(si).get_leases()))
9226 
9227hunk ./src/allmydata/test/test_storage.py 3668
9228 
9229     def test_only_mutable(self):
9230         basedir = "storage/LeaseCrawler/only_mutable"
9231-        fileutil.make_dirs(basedir)
9232+        fp = FilePath(basedir)
9233+        backend = DiskBackend(fp)
9234+
9235         # setting 'cutoff_date' to 2000 seconds ago means that any lease that
9236         # is more than 2000 seconds old will be expired.
9237         now = time.time()
9238hunk ./src/allmydata/test/test_storage.py 3681
9239             'cutoff_date': then,
9240             'sharetypes': ('mutable',),
9241         }
9242-        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
9243+        ss = StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
9244         lc = ss.lease_checker
9245         lc.slow_start = 0
9246         webstatus = StorageStatus(ss)
9247hunk ./src/allmydata/test/test_storage.py 3692
9248         new_expiration_time = now - 3000 + 31*24*60*60
9249 
9250         def count_shares(si):
9251-            return len(list(ss._iter_share_files(si)))
9252+            return len(list(ss.backend.get_shareset(si).get_shares()))
9253         def _get_sharefile(si):
9254hunk ./src/allmydata/test/test_storage.py 3694
9255-            return list(ss._iter_share_files(si))[0]
9256+            return list(ss.backend.get_shareset(si).get_shares())[0]
9257         def count_leases(si):
9258             return len(list(_get_sharefile(si).get_leases()))
9259 
9260hunk ./src/allmydata/test/test_storage.py 3731
9261 
9262     def test_bad_mode(self):
9263         basedir = "storage/LeaseCrawler/bad_mode"
9264-        fileutil.make_dirs(basedir)
9265+        fp = FilePath(basedir)
9266+        backend = DiskBackend(fp)
9267+
9268+        expiration_policy = {
9269+            'enabled': True,
9270+            'mode': 'bogus',
9271+            'override_lease_duration': None,
9272+            'cutoff_date': None,
9273+            'sharetypes': ('mutable', 'immutable'),
9274+        }
9275         e = self.failUnlessRaises(ValueError,
9276hunk ./src/allmydata/test/test_storage.py 3742
9277-                                  StorageServer, basedir, "\x00" * 20,
9278-                                  expiration_mode="bogus")
9279+                                  StorageServer, "\x00" * 20, backend, fp,
9280+                                  expiration_policy=expiration_policy)
9281         self.failUnlessIn("GC mode 'bogus' must be 'age' or 'cutoff-date'", str(e))
9282 
9283     def test_parse_duration(self):
9284hunk ./src/allmydata/test/test_storage.py 3767
9285 
9286     def test_limited_history(self):
9287         basedir = "storage/LeaseCrawler/limited_history"
9288-        fileutil.make_dirs(basedir)
9289-        ss = StorageServer(basedir, "\x00" * 20)
9290+        fp = FilePath(basedir)
9291+        backend = DiskBackend(fp)
9292+        ss = StorageServer("\x00" * 20, backend, fp)
9293+
9294         # make it start sooner than usual.
9295         lc = ss.lease_checker
9296         lc.slow_start = 0
9297hunk ./src/allmydata/test/test_storage.py 3801
9298 
9299     def test_unpredictable_future(self):
9300         basedir = "storage/LeaseCrawler/unpredictable_future"
9301-        fileutil.make_dirs(basedir)
9302-        ss = StorageServer(basedir, "\x00" * 20)
9303+        fp = FilePath(basedir)
9304+        backend = DiskBackend(fp)
9305+        ss = StorageServer("\x00" * 20, backend, fp)
9306+
9307         # make it start sooner than usual.
9308         lc = ss.lease_checker
9309         lc.slow_start = 0
9310hunk ./src/allmydata/test/test_storage.py 3866
9311 
9312     def test_no_st_blocks(self):
9313         basedir = "storage/LeaseCrawler/no_st_blocks"
9314-        fileutil.make_dirs(basedir)
9315+        fp = FilePath(basedir)
9316+        backend = DiskBackend(fp)
9317+
9318         # A negative 'override_lease_duration' means that the "configured-"
9319         # space-recovered counts will be non-zero, since all shares will have
9320         # expired by then.
9321hunk ./src/allmydata/test/test_storage.py 3878
9322             'override_lease_duration': -1000,
9323             'sharetypes': ('mutable', 'immutable'),
9324         }
9325-        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20, expiration_policy)
9326+        ss = No_ST_BLOCKS_StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
9327 
9328         # make it start sooner than usual.
9329         lc = ss.lease_checker
9330hunk ./src/allmydata/test/test_storage.py 3911
9331             UnknownImmutableContainerVersionError,
9332             ]
9333         basedir = "storage/LeaseCrawler/share_corruption"
9334-        fileutil.make_dirs(basedir)
9335-        ss = InstrumentedStorageServer(basedir, "\x00" * 20)
9336+        fp = FilePath(basedir)
9337+        backend = DiskBackend(fp)
9338+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp)
9339         w = StorageStatus(ss)
9340         # make it start sooner than usual.
9341         lc = ss.lease_checker
9342hunk ./src/allmydata/test/test_storage.py 3928
9343         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
9344         first = min(self.sis)
9345         first_b32 = base32.b2a(first)
9346-        fp = ss.backend.get_shareset(first).sharehomedir.child("0")
9347+        fp = ss.backend.get_shareset(first)._sharehomedir.child("0")
9348         f = fp.open("rb+")
9349hunk ./src/allmydata/test/test_storage.py 3930
9350-        f.seek(0)
9351-        f.write("BAD MAGIC")
9352-        f.close()
9353+        try:
9354+            f.seek(0)
9355+            f.write("BAD MAGIC")
9356+        finally:
9357+            f.close()
9358         # if get_share_file() doesn't see the correct mutable magic, it
9359         # assumes the file is an immutable share, and then
9360         # immutable.ShareFile sees a bad version. So regardless of which kind
9361hunk ./src/allmydata/test/test_storage.py 3943
9362 
9363         # also create an empty bucket
9364         empty_si = base32.b2a("\x04"*16)
9365-        empty_bucket_dir = ss.backend.get_shareset(empty_si).sharehomedir
9366+        empty_bucket_dir = ss.backend.get_shareset(empty_si)._sharehomedir
9367         fileutil.fp_make_dirs(empty_bucket_dir)
9368 
9369         ss.setServiceParent(self.s)
9370hunk ./src/allmydata/test/test_storage.py 4031
9371 
9372     def test_status(self):
9373         basedir = "storage/WebStatus/status"
9374-        fileutil.make_dirs(basedir)
9375-        ss = StorageServer(basedir, "\x00" * 20)
9376+        fp = FilePath(basedir)
9377+        backend = DiskBackend(fp)
9378+        ss = StorageServer("\x00" * 20, backend, fp)
9379         ss.setServiceParent(self.s)
9380         w = StorageStatus(ss)
9381         d = self.render1(w)
9382hunk ./src/allmydata/test/test_storage.py 4065
9383         # Some platforms may have no disk stats API. Make sure the code can handle that
9384         # (test runs on all platforms).
9385         basedir = "storage/WebStatus/status_no_disk_stats"
9386-        fileutil.make_dirs(basedir)
9387-        ss = StorageServer(basedir, "\x00" * 20)
9388+        fp = FilePath(basedir)
9389+        backend = DiskBackend(fp)
9390+        ss = StorageServer("\x00" * 20, backend, fp)
9391         ss.setServiceParent(self.s)
9392         w = StorageStatus(ss)
9393         html = w.renderSynchronously()
9394hunk ./src/allmydata/test/test_storage.py 4085
9395         # If the API to get disk stats exists but a call to it fails, then the status should
9396         # show that no shares will be accepted, and get_available_space() should be 0.
9397         basedir = "storage/WebStatus/status_bad_disk_stats"
9398-        fileutil.make_dirs(basedir)
9399-        ss = StorageServer(basedir, "\x00" * 20)
9400+        fp = FilePath(basedir)
9401+        backend = DiskBackend(fp)
9402+        ss = StorageServer("\x00" * 20, backend, fp)
9403         ss.setServiceParent(self.s)
9404         w = StorageStatus(ss)
9405         html = w.renderSynchronously()
9406}
9407
9408Context:
9409
9410[test_mutable.py: skip test_publish_surprise_mdmf, which is causing an error. refs #1534, #393
9411david-sarah@jacaranda.org**20110920183319
9412 Ignore-this: 6fb020e09e8de437cbcc2c9f57835b31
9413]
9414[test/test_mutable: write publish surprise test for MDMF, rename existing test_publish_surprise to clarify that it is for SDMF
9415kevan@isnotajoke.com**20110918003657
9416 Ignore-this: 722c507e8f5b537ff920e0555951059a
9417]
9418[test/test_mutable: refactor publish surprise test into common test fixture, rewrite test_publish_surprise to use test fixture
9419kevan@isnotajoke.com**20110918003533
9420 Ignore-this: 6f135888d400a99a09b5f9a4be443b6e
9421]
9422[mutable/publish: add errback immediately after write, don't consume errors from other parts of the publisher
9423kevan@isnotajoke.com**20110917234708
9424 Ignore-this: 12bf6b0918a5dc5ffc30ece669fad51d
9425]
9426[.darcs-boringfile: minor cleanups.
9427david-sarah@jacaranda.org**20110920154918
9428 Ignore-this: cab78e30d293da7e2832207dbee2ffeb
9429]
9430[uri.py: fix two interface violations in verifier URI classes. refs #1474
9431david-sarah@jacaranda.org**20110920030156
9432 Ignore-this: 454ddd1419556cb1d7576d914cb19598
9433]
9434[Make platform-detection code tolerate linux-3.0, patch by zooko.
9435Brian Warner <warner@lothar.com>**20110915202620
9436 Ignore-this: af63cf9177ae531984dea7a1cad03762
9437 
9438 Otherwise address-autodetection can't find ifconfig. refs #1536
9439]
9440[test_web.py: fix a bug in _count_leases that was causing us to check only the lease count of one share file, not of all share files as intended.
9441david-sarah@jacaranda.org**20110915185126
9442 Ignore-this: d96632bc48d770b9b577cda1bbd8ff94
9443]
9444[docs: insert a newline at the beginning of known_issues.rst to see if this makes it render more nicely in trac
9445zooko@zooko.com**20110914064728
9446 Ignore-this: aca15190fa22083c5d4114d3965f5d65
9447]
9448[docs: remove the coding: utf-8 declaration at the top of known_issues.rst, since the trac rendering doesn't hide it
9449zooko@zooko.com**20110914055713
9450 Ignore-this: 941ed32f83ead377171aa7a6bd198fcf
9451]
9452[docs: more cleanup of known_issues.rst -- now it passes "rst2html --verbose" without comment
9453zooko@zooko.com**20110914055419
9454 Ignore-this: 5505b3d76934bd97d0312cc59ed53879
9455]
9456[docs: more formatting improvements to known_issues.rst
9457zooko@zooko.com**20110914051639
9458 Ignore-this: 9ae9230ec9a38a312cbacaf370826691
9459]
9460[docs: reformatting of known_issues.rst
9461zooko@zooko.com**20110914050240
9462 Ignore-this: b8be0375079fb478be9d07500f9aaa87
9463]
9464[docs: fix formatting error in docs/known_issues.rst
9465zooko@zooko.com**20110914045909
9466 Ignore-this: f73fe74ad2b9e655aa0c6075acced15a
9467]
9468[merge Tahoe-LAFS v1.8.3 release announcement with trunk
9469zooko@zooko.com**20110913210544
9470 Ignore-this: 163f2c3ddacca387d7308e4b9332516e
9471]
9472[docs: release notes for Tahoe-LAFS v1.8.3
9473zooko@zooko.com**20110913165826
9474 Ignore-this: 84223604985b14733a956d2fbaeb4e9f
9475]
9476[tests: bump up the timeout in this test that fails on FreeStorm's CentOS in order to see if it is just very slow
9477zooko@zooko.com**20110913024255
9478 Ignore-this: 6a86d691e878cec583722faad06fb8e4
9479]
9480[interfaces: document that the 'fills-holes-with-zero-bytes' key should be used to detect whether a storage server has that behavior. refs #1528
9481david-sarah@jacaranda.org**20110913002843
9482 Ignore-this: 1a00a6029d40f6792af48c5578c1fd69
9483]
9484[CREDITS: more CREDITS for Kevan and David-Sarah
9485zooko@zooko.com**20110912223357
9486 Ignore-this: 4ea8f0d6f2918171d2f5359c25ad1ada
9487]
9488[merge NEWS about the mutable file bounds fixes with NEWS about work-in-progress
9489zooko@zooko.com**20110913205521
9490 Ignore-this: 4289a4225f848d6ae6860dd39bc92fa8
9491]
9492[doc: add NEWS item about fixes to potential palimpsest issues in mutable files
9493zooko@zooko.com**20110912223329
9494 Ignore-this: 9d63c95ddf95c7d5453c94a1ba4d406a
9495 ref. #1528
9496]
9497[merge the NEWS about the security fix (#1528) with the work-in-progress NEWS
9498zooko@zooko.com**20110913205153
9499 Ignore-this: 88e88a2ad140238c62010cf7c66953fc
9500]
9501[doc: add NEWS entry about the issue which allows unauthorized deletion of shares
9502zooko@zooko.com**20110912223246
9503 Ignore-this: 77e06d09103d2ef6bb51ea3e5d6e80b0
9504 ref. #1528
9505]
9506[doc: add entry in known_issues.rst about the issue which allows unauthorized deletion of shares
9507zooko@zooko.com**20110912223135
9508 Ignore-this: b26c6ea96b6c8740b93da1f602b5a4cd
9509 ref. #1528
9510]
9511[storage: more paranoid handling of bounds and palimpsests in mutable share files
9512zooko@zooko.com**20110912222655
9513 Ignore-this: a20782fa423779ee851ea086901e1507
9514 * storage server ignores requests to extend shares by sending a new_length
9515 * storage server fills exposed holes (created by sending a write vector whose offset begins after the end of the current data) with 0 to avoid "palimpsest" exposure of previous contents
9516 * storage server zeroes out lease info at the old location when moving it to a new location
9517 ref. #1528
9518]
9519[storage: test that the storage server ignores requests to extend shares by sending a new_length, and that the storage server fills exposed holes with 0 to avoid "palimpsest" exposure of previous contents
9520zooko@zooko.com**20110912222554
9521 Ignore-this: 61ebd7b11250963efdf5b1734a35271
9522 ref. #1528
9523]
9524[immutable: prevent clients from reading past the end of share data, which would allow them to learn the cancellation secret
9525zooko@zooko.com**20110912222458
9526 Ignore-this: da1ebd31433ea052087b75b2e3480c25
9527 Declare explicitly that we prevent this problem in the server's version dict.
9528 fixes #1528 (there are two patches that are each a sufficient fix to #1528 and this is one of them)
9529]
9530[storage: remove the storage server's "remote_cancel_lease" function
9531zooko@zooko.com**20110912222331
9532 Ignore-this: 1c32dee50e0981408576daffad648c50
9533 We're removing this function because it is currently unused, because it is dangerous, and because the bug described in #1528 leaks the cancellation secret, which allows anyone who knows a file's storage index to abuse this function to delete shares of that file.
9534 fixes #1528 (there are two patches that are each a sufficient fix to #1528 and this is one of them)
9535]
9536[storage: test that the storage server does *not* have a "remote_cancel_lease" function
9537zooko@zooko.com**20110912222324
9538 Ignore-this: 21c652009704652d35f34651f98dd403
9539 We're removing this function because it is currently unused, because it is dangerous, and because the bug described in #1528 leaks the cancellation secret, which allows anyone who knows a file's storage index to abuse this function to delete shares of that file.
9540 ref. #1528
9541]
9542[immutable: test whether the server allows clients to read past the end of share data, which would allow them to learn the cancellation secret
9543zooko@zooko.com**20110912221201
9544 Ignore-this: 376e47b346c713d37096531491176349
9545 Also test whether the server explicitly declares that it prevents this problem.
9546 ref #1528
9547]
9548[Retrieve._activate_enough_peers: rewrite Verify logic
9549Brian Warner <warner@lothar.com>**20110909181150
9550 Ignore-this: 9367c11e1eacbf025f75ce034030d717
9551]
9552[Retrieve: implement/test stopProducing
9553Brian Warner <warner@lothar.com>**20110909181150
9554 Ignore-this: 47b2c3df7dc69835e0a066ca12e3c178
9555]
9556[move DownloadStopped from download.common to interfaces
9557Brian Warner <warner@lothar.com>**20110909181150
9558 Ignore-this: 8572acd3bb16e50341dbed8eb1d90a50
9559]
9560[retrieve.py: remove vestigal self._validated_readers
9561Brian Warner <warner@lothar.com>**20110909181150
9562 Ignore-this: faab2ec14e314a53a2ffb714de626e2d
9563]
9564[Retrieve: rewrite flow-control: use a top-level loop() to catch all errors
9565Brian Warner <warner@lothar.com>**20110909181150
9566 Ignore-this: e162d2cd53b3d3144fc6bc757e2c7714
9567 
9568 This ought to close the potential for dropped errors and hanging downloads.
9569 Verify needs to be examined, I may have broken it, although all tests pass.
9570]
9571[Retrieve: merge _validate_active_prefixes into _add_active_peers
9572Brian Warner <warner@lothar.com>**20110909181150
9573 Ignore-this: d3ead31e17e69394ae7058eeb5beaf4c
9574]
9575[Retrieve: remove the initial prefix-is-still-good check
9576Brian Warner <warner@lothar.com>**20110909181150
9577 Ignore-this: da66ee51c894eaa4e862e2dffb458acc
9578 
9579 This check needs to be done with each fetch from the storage server, to
9580 detect when someone has changed the share (i.e. our servermap goes stale).
9581 Doing it just once at the beginning of retrieve isn't enough: a write might
9582 occur after the first segment but before the second, etc.
9583 
9584 _try_to_validate_prefix() was not removed: it will be used by the future
9585 check-with-each-fetch code.
9586 
9587 test_mutable.Roundtrip.test_corrupt_all_seqnum_late was disabled, since it
9588 fails until this check is brought back. (the corruption it applies only
9589 touches the prefix, not the block data, so the check-less retrieve actually
9590 tolerates it). Don't forget to re-enable it once the check is brought back.
9591]
9592[MDMFSlotReadProxy: remove the queue
9593Brian Warner <warner@lothar.com>**20110909181150
9594 Ignore-this: 96673cb8dda7a87a423de2f4897d66d2
9595 
9596 This is a neat trick to reduce Foolscap overhead, but the need for an
9597 explicit flush() complicates the Retrieve path and makes it prone to
9598 lost-progress bugs.
9599 
9600 Also change test_mutable.FakeStorageServer to tolerate multiple reads of the
9601 same share in a row, a limitation exposed by turning off the queue.
9602]
9603[rearrange Retrieve: first step, shouldn't change order of execution
9604Brian Warner <warner@lothar.com>**20110909181149
9605 Ignore-this: e3006368bfd2802b82ea45c52409e8d6
9606]
9607[CLI: test_cli.py -- remove an unnecessary call in test_mkdir_mutable_type. refs #1527
9608david-sarah@jacaranda.org**20110906183730
9609 Ignore-this: 122e2ffbee84861c32eda766a57759cf
9610]
9611[CLI: improve test for 'tahoe mkdir --mutable-type='. refs #1527
9612david-sarah@jacaranda.org**20110906183020
9613 Ignore-this: f1d4598e6c536f0a2b15050b3bc0ef9d
9614]
9615[CLI: make the --mutable-type option value for 'tahoe put' and 'tahoe mkdir' case-insensitive, and change --help for these commands accordingly. fixes #1527
9616david-sarah@jacaranda.org**20110905020922
9617 Ignore-this: 75a6df0a2df9c467d8c010579e9a024e
9618]
9619[cli: make --mutable-type imply --mutable in 'tahoe put'
9620Kevan Carstensen <kevan@isnotajoke.com>**20110903190920
9621 Ignore-this: 23336d3c43b2a9554e40c2a11c675e93
9622]
9623[SFTP: add a comment about a subtle interaction between OverwriteableFileConsumer and GeneralSFTPFile, and test the case it is commenting on.
9624david-sarah@jacaranda.org**20110903222304
9625 Ignore-this: 980c61d4dd0119337f1463a69aeebaf0
9626]
9627[improve the storage/mutable.py asserts even more
9628warner@lothar.com**20110901160543
9629 Ignore-this: 5b2b13c49bc4034f96e6e3aaaa9a9946
9630]
9631[storage/mutable.py: special characters in struct.foo arguments indicate standard as opposed to native sizes; we should be using these characters in these asserts
9632wilcoxjg@gmail.com**20110901084144
9633 Ignore-this: 28ace2b2678642e4d7269ddab8c67f30
9634]
9635[docs/write_coordination.rst: fix formatting and add more specific warning about access via sshfs.
9636david-sarah@jacaranda.org**20110831232148
9637 Ignore-this: cd9c851d3eb4e0a1e088f337c291586c
9638]
9639[test_mutable.Version: consolidate some tests, reduce runtime from 19s to 15s
9640warner@lothar.com**20110831050451
9641 Ignore-this: 64815284d9e536f8f3798b5f44cf580c
9642]
9643[mutable/retrieve: handle the case where self._read_length is 0.
9644Kevan Carstensen <kevan@isnotajoke.com>**20110830210141
9645 Ignore-this: fceafbe485851ca53f2774e5a4fd8d30
9646 
9647 Note that the downloader will still fetch a segment for a zero-length
9648 read, which is wasteful. Fixing that isn't specifically required to fix
9649 #1512, but it should probably be fixed before 1.9.
9650]
9651[NEWS: added summary of all changes since 1.8.2. Needs editing.
9652Brian Warner <warner@lothar.com>**20110830163205
9653 Ignore-this: 273899b37a899fc6919b74572454b8b2
9654]
9655[test_mutable.Update: only upload the files needed for each test. refs #1500
9656Brian Warner <warner@lothar.com>**20110829072717
9657 Ignore-this: 4d2ab4c7523af9054af7ecca9c3d9dc7
9658 
9659 This first step shaves 15% off the runtime: from 139s to 119s on my laptop.
9660 It also fixes a couple of places where a Deferred was being dropped, which
9661 would cause two tests to run in parallel and also confuse error reporting.
9662]
9663[Let Uploader retain History instead of passing it into upload(). Fixes #1079.
9664Brian Warner <warner@lothar.com>**20110829063246
9665 Ignore-this: 3902c58ec12bd4b2d876806248e19f17
9666 
9667 This consistently records all immutable uploads in the Recent Uploads And
9668 Downloads page, regardless of code path. Previously, certain webapi upload
9669 operations (like PUT /uri/$DIRCAP/newchildname) failed to pass the History
9670 object and were left out.
9671]
9672[Fix mutable publish/retrieve timing status displays. Fixes #1505.
9673Brian Warner <warner@lothar.com>**20110828232221
9674 Ignore-this: 4080ce065cf481b2180fd711c9772dd6
9675 
9676 publish:
9677 * encrypt and encode times are cumulative, not just current-segment
9678 
9679 retrieve:
9680 * same for decrypt and decode times
9681 * update "current status" to include segment number
9682 * set status to Finished/Failed when download is complete
9683 * set progress to 1.0 when complete
9684 
9685 More improvements to consider:
9686 * progress is currently 0% or 100%: should calculate how many segments are
9687   involved (remembering retrieve can be less than the whole file) and set it
9688   to a fraction
9689 * "fetch" time is fuzzy: what we want is to know how much of the delay is not
9690   our own fault, but since we do decode/decrypt work while waiting for more
9691   shares, it's not straightforward
9692]
9693[Teach 'tahoe debug catalog-shares' about MDMF. Closes #1507.
9694Brian Warner <warner@lothar.com>**20110828080931
9695 Ignore-this: 56ef2951db1a648353d7daac6a04c7d1
9696]
9697[debug.py: remove some dead comments
9698Brian Warner <warner@lothar.com>**20110828074556
9699 Ignore-this: 40e74040dd4d14fd2f4e4baaae506b31
9700]
9701[hush pyflakes
9702Brian Warner <warner@lothar.com>**20110828074254
9703 Ignore-this: bef9d537a969fa82fe4decc4ba2acb09
9704]
9705[MutableFileNode.set_downloader_hints: never depend upon order of dict.values()
9706Brian Warner <warner@lothar.com>**20110828074103
9707 Ignore-this: caaf1aa518dbdde4d797b7f335230faa
9708 
9709 The old code was calculating the "extension parameters" (a list) from the
9710 downloader hints (a dictionary) with hints.values(), which is not stable, and
9711 would result in corrupted filecaps (with the 'k' and 'segsize' hints
9712 occasionally swapped). The new code always uses [k,segsize].
9713]
9714[layout.py: fix MDMF share layout documentation
9715Brian Warner <warner@lothar.com>**20110828073921
9716 Ignore-this: 3f13366fed75b5e31b51ae895450a225
9717]
9718[teach 'tahoe debug dump-share' about MDMF and offsets. refs #1507
9719Brian Warner <warner@lothar.com>**20110828073834
9720 Ignore-this: 3a9d2ef9c47a72bf1506ba41199a1dea
9721]
9722[test_mutable.Version.test_debug: use splitlines() to fix buildslaves
9723Brian Warner <warner@lothar.com>**20110828064728
9724 Ignore-this: c7f6245426fc80b9d1ae901d5218246a
9725 
9726 Any slave running in a directory with spaces in the name was miscounting
9727 shares, causing the test to fail.
9728]
[test_mutable.Version: exercise 'tahoe debug find-shares' on MDMF. refs #1507
Brian Warner <warner@lothar.com>**20110828005542
 Ignore-this: cb20bea1c28bfa50a72317d70e109672
 
 Also changes NoNetworkGrid to put shares in storage/shares/ .
]
[test_mutable.py: oops, missed a .todo
Brian Warner <warner@lothar.com>**20110828002118
 Ignore-this: fda09ae86481352b7a627c278d2a3940
]
[test_mutable: merge davidsarah's patch with my Version refactorings
warner@lothar.com**20110827235707
 Ignore-this: b5aaf481c90d99e33827273b5d118fd0
]
[Make the immutable/read-only constraint checking for MDMF URIs identical to that for SSK URIs. refs #393
david-sarah@jacaranda.org**20110823012720
 Ignore-this: e1f59d7ff2007c81dbef2aeb14abd721
]
[Additional tests for MDMF URIs and for zero-length files. refs #393
david-sarah@jacaranda.org**20110823011532
 Ignore-this: a7cc0c09d1d2d72413f9cd227c47a9d5
]
[Additional tests for zero-length partial reads and updates to mutable versions. refs #393
david-sarah@jacaranda.org**20110822014111
 Ignore-this: 5fc6f4d06e11910124e4a277ec8a43ea
]
[test_mutable.Version: factor out some expensive uploads, save 25% runtime
Brian Warner <warner@lothar.com>**20110827232737
 Ignore-this: ea37383eb85ea0894b254fe4dfb45544
]
[SDMF: update filenode with correct k/N after Retrieve. Fixes #1510.
Brian Warner <warner@lothar.com>**20110827225031
 Ignore-this: b50ae6e1045818c400079f118b4ef48
 
 Without this, we get a regression when modifying a mutable file that was
 created with more shares (larger N) than our current tahoe.cfg . The
 modification attempt creates new versions of the (0,1,..,newN-1) shares, but
 leaves the old versions of the (newN,..,oldN-1) shares alone (and throws an
 assertion error in SDMFSlotWriteProxy.finish_publishing in the process).
 
 The mixed versions that result (some shares with e.g. N=10, some with N=20,
 such that both versions are recoverable) cause problems for the Publish code,
 even before MDMF landed. Might be related to refs #1390 and refs #1042.
]
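The "mixed versions" hazard can be shown with plain arithmetic; the N, k, and
version numbers below are made up for illustration:

    # File originally published with N=20, k=3; tahoe.cfg later says N=10.
    old_n, new_n, k = 20, 10, 3

    versions = dict((shnum, 1) for shnum in range(old_n))  # all shares at v1
    for shnum in range(new_n):       # the buggy modification rewrites only
        versions[shnum] = 2          # shares 0..newN-1

    v1_shares = [s for s, v in versions.items() if v == 1]  # shares 10..19
    v2_shares = [s for s, v in versions.items() if v == 2]  # shares 0..9
    # both sets have at least k members, so both versions stay recoverable
    assert len(v1_shares) >= k and len(v2_shares) >= k

Updating the filenode with the k/N actually observed during Retrieve means
the subsequent Publish uses the file's original N, so all oldN shares
receive the new version.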
[layout.py: annotate assertion to figure out 'tahoe backup' failure
Brian Warner <warner@lothar.com>**20110827195253
 Ignore-this: 9b92b954e3ed0d0f80154fff1ff674e5
]
[Add 'tahoe debug dump-cap' support for MDMF, DIR2-CHK, DIR2-MDMF. refs #1507.
Brian Warner <warner@lothar.com>**20110827195048
 Ignore-this: 61c6af5e33fc88e0251e697a50addb2c
 
 This also adds tests for all those cases, and fixes an omission in uri.py
 that broke parsing of DIR2-MDMF-Verifier and DIR2-CHK-Verifier.
]
[MDMF: more writable/writeable consistentifications
warner@lothar.com**20110827190602
 Ignore-this: 22492a9e20c1819ddb12091062888b55
]
[MDMF: s/Writable/Writeable/g, for consistency with existing SDMF code
warner@lothar.com**20110827183357
 Ignore-this: 9dd312acedbdb2fc2f7bef0d0fb17c0b
]
[setup.cfg: remove no-longer-supported test_mac_diskimage alias. refs #1479
david-sarah@jacaranda.org**20110826230345
 Ignore-this: 40e908b8937322a290fb8012bfcad02a
]
[test_mutable.Update: increase timeout from 120s to 400s, slaves are failing
Brian Warner <warner@lothar.com>**20110825230140
 Ignore-this: 101b1924a30cdbda9b2e419e95ca15ec
]
[tests: fix check_memory test
zooko@zooko.com**20110825201116
 Ignore-this: 4d66299fa8cb61d2ca04b3f45344d835
 fixes #1503
]
[TAG allmydata-tahoe-1.9.0a1
warner@lothar.com**20110825161122
 Ignore-this: 3cbf49f00dbda58189f893c427f65605
]
Patch bundle hash:
fc502d038e02cff4144b39e5603e82fcbbe73ff9