Ticket #999: pluggable-backends-davidsarah-v5.darcs.patch

File pluggable-backends-davidsarah-v5.darcs.patch, 308.1 KB (added by davidsarah at 2011-09-20T03:42:59Z)

Work-in-progress, includes fix to bug involving BucketWriter. refs #999

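For orientation: this bundle makes the storage server talk to a pluggable backend object instead of manipulating share files directly. A backend hands out one shareset per storage index, and each shareset knows how to enumerate its shares and create bucket writers. The bundle below adds the base classes (Backend and ShareSet in backends/base.py) and a disk backend; it also adds a backends/null directory whose contents are not shown in this excerpt. A rough sketch of what a minimal backend might look like under this API (NullBackend and NullShareSet are illustrative names for this note, not necessarily the patch's actual null backend):

    from allmydata.storage.backends.base import Backend, ShareSet

    class NullShareSet(ShareSet):
        # A shareset that stores nothing: it never has any shares, so
        # reads see only empty shares.
        def get_shares(self):
            return iter([])

        def has_incoming(self, shnum):
            return False

    class NullBackend(Backend):
        def __init__(self):
            Backend.__init__(self)

        def get_shareset(self, storageindex):
            # One shareset per storage index; the disk backend maps this
            # to the storage/shares/$START/$STORAGEINDEX directory.
            return NullShareSet(storageindex)

        def get_available_space(self):
            return None  # None means "unknown", as in the disk backend

The bundle can be applied to a trunk checkout with "darcs apply pluggable-backends-davidsarah-v5.darcs.patch".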
4 patches for repository http://tahoe-lafs.org/source/tahoe/trunk:

Thu Aug 25 01:32:17 BST 2011  david-sarah@jacaranda.org
  * interfaces.py: 'which -> that' grammar cleanup.

Tue Sep 20 00:29:26 BST 2011  david-sarah@jacaranda.org
  * Pluggable backends -- new and moved files, changes to moved files. refs #999

Tue Sep 20 00:32:56 BST 2011  david-sarah@jacaranda.org
  * Pluggable backends -- all other changes. refs #999

Tue Sep 20 04:38:03 BST 2011  david-sarah@jacaranda.org
  * Work-in-progress, includes fix to bug involving BucketWriter. refs #999

New patches:

[interfaces.py: 'which -> that' grammar cleanup.
david-sarah@jacaranda.org**20110825003217
 Ignore-this: a3e15f3676de1b346ad78aabdfb8cac6
] {
hunk ./src/allmydata/interfaces.py 38
     the StubClient. This object doesn't actually offer any services, but the
     announcement helps the Introducer keep track of which clients are
     subscribed (so the grid admin can keep track of things like the size of
-    the grid and the client versions in use. This is the (empty)
+    the grid and the client versions in use). This is the (empty)
     RemoteInterface for the StubClient."""
 
 class RIBucketWriter(RemoteInterface):
hunk ./src/allmydata/interfaces.py 276
         (binary) storage index string, and 'shnum' is the integer share
         number. 'reason' is a human-readable explanation of the problem,
         probably including some expected hash values and the computed ones
-        which did not match. Corruption advisories for mutable shares should
+        that did not match. Corruption advisories for mutable shares should
         include a hash of the public key (the same value that appears in the
         mutable-file verify-cap), since the current share format does not
         store that on disk.
hunk ./src/allmydata/interfaces.py 413
           remote_host: the IAddress, if connected, otherwise None
 
         This method is intended for monitoring interfaces, such as a web page
-        which describes connecting and connected peers.
+        that describes connecting and connected peers.
         """
 
     def get_all_peerids():
hunk ./src/allmydata/interfaces.py 515
 
     # TODO: rename to get_read_cap()
     def get_readonly():
-        """Return another IURI instance, which represents a read-only form of
+        """Return another IURI instance that represents a read-only form of
         this one. If is_readonly() is True, this returns self."""
 
     def get_verify_cap():
hunk ./src/allmydata/interfaces.py 542
         passing into init_from_string."""
 
 class IDirnodeURI(Interface):
-    """I am a URI which represents a dirnode."""
+    """I am a URI that represents a dirnode."""
 
 class IFileURI(Interface):
hunk ./src/allmydata/interfaces.py 545
-    """I am a URI which represents a filenode."""
+    """I am a URI that represents a filenode."""
     def get_size():
         """Return the length (in bytes) of the file that I represent."""
 
hunk ./src/allmydata/interfaces.py 553
     pass
 
 class IMutableFileURI(Interface):
-    """I am a URI which represents a mutable filenode."""
+    """I am a URI that represents a mutable filenode."""
     def get_extension_params():
         """Return the extension parameters in the URI"""
 
hunk ./src/allmydata/interfaces.py 856
         """
 
 class IFileNode(IFilesystemNode):
-    """I am a node which represents a file: a sequence of bytes. I am not a
+    """I am a node that represents a file: a sequence of bytes. I am not a
     container, like IDirectoryNode."""
     def get_best_readable_version():
         """Return a Deferred that fires with an IReadable for the 'best'
hunk ./src/allmydata/interfaces.py 905
     multiple versions of a file present in the grid, some of which might be
     unrecoverable (i.e. have fewer than 'k' shares). These versions are
     loosely ordered: each has a sequence number and a hash, and any version
-    with seqnum=N was uploaded by a node which has seen at least one version
+    with seqnum=N was uploaded by a node that has seen at least one version
     with seqnum=N-1.
 
     The 'servermap' (an instance of IMutableFileServerMap) is used to
hunk ./src/allmydata/interfaces.py 1014
         as a guide to where the shares are located.
 
         I return a Deferred that fires with the requested contents, or
-        errbacks with UnrecoverableFileError. Note that a servermap which was
+        errbacks with UnrecoverableFileError. Note that a servermap that was
         updated with MODE_ANYTHING or MODE_READ may not know about shares for
         all versions (those modes stop querying servers as soon as they can
         fulfil their goals), so you may want to use MODE_CHECK (which checks
hunk ./src/allmydata/interfaces.py 1073
     """Upload was unable to satisfy 'servers_of_happiness'"""
 
 class UnableToFetchCriticalDownloadDataError(Exception):
-    """I was unable to fetch some piece of critical data which is supposed to
+    """I was unable to fetch some piece of critical data that is supposed to
     be identically present in all shares."""
 
 class NoServersError(Exception):
hunk ./src/allmydata/interfaces.py 1085
     exists, and overwrite= was set to False."""
 
 class NoSuchChildError(Exception):
-    """A directory node was asked to fetch a child which does not exist."""
+    """A directory node was asked to fetch a child that does not exist."""
 
 class ChildOfWrongTypeError(Exception):
     """An operation was attempted on a child of the wrong type (file or directory)."""
hunk ./src/allmydata/interfaces.py 1403
         if you initially thought you were going to use 10 peers, started
         encoding, and then two of the peers dropped out: you could use
         desired_share_ids= to skip the work (both memory and CPU) of
-        producing shares for the peers which are no longer available.
+        producing shares for the peers that are no longer available.
 
         """
 
hunk ./src/allmydata/interfaces.py 1478
         if you initially thought you were going to use 10 peers, started
         encoding, and then two of the peers dropped out: you could use
         desired_share_ids= to skip the work (both memory and CPU) of
-        producing shares for the peers which are no longer available.
+        producing shares for the peers that are no longer available.
 
         For each call, encode() will return a Deferred that fires with two
         lists, one containing shares and the other containing the shareids.
hunk ./src/allmydata/interfaces.py 1535
         required to be of the same length.  The i'th element of their_shareids
         is required to be the shareid of the i'th buffer in some_shares.
 
-        This returns a Deferred which fires with a sequence of buffers. This
+        This returns a Deferred that fires with a sequence of buffers. This
         sequence will contain all of the segments of the original data, in
         order. The sum of the lengths of all of the buffers will be the
         'data_size' value passed into the original ICodecEncode.set_params()
hunk ./src/allmydata/interfaces.py 1582
         Encoding parameters can be set in three ways. 1: The Encoder class
         provides defaults (3/7/10). 2: the Encoder can be constructed with
         an 'options' dictionary, in which the
-        needed_and_happy_and_total_shares' key can be a (k,d,n) tuple. 3:
+        'needed_and_happy_and_total_shares' key can be a (k,d,n) tuple. 3:
         set_params((k,d,n)) can be called.
 
         If you intend to use set_params(), you must call it before
hunk ./src/allmydata/interfaces.py 1780
         produced, so that the segment hashes can be generated with only a
         single pass.
 
-        This returns a Deferred which fires with a sequence of hashes, using:
+        This returns a Deferred that fires with a sequence of hashes, using:
 
          tuple(segment_hashes[first:last])
 
hunk ./src/allmydata/interfaces.py 1796
     def get_plaintext_hash():
         """OBSOLETE; Get the hash of the whole plaintext.
 
-        This returns a Deferred which fires with a tagged SHA-256 hash of the
+        This returns a Deferred that fires with a tagged SHA-256 hash of the
         whole plaintext, obtained from hashutil.plaintext_hash(data).
         """
 
hunk ./src/allmydata/interfaces.py 1856
         be used to encrypt the data. The key will also be hashed to derive
         the StorageIndex.
 
-        Uploadables which want to achieve convergence should hash their file
+        Uploadables that want to achieve convergence should hash their file
         contents and the serialized_encoding_parameters to form the key
         (which of course requires a full pass over the data). Uploadables can
         use the upload.ConvergentUploadMixin class to achieve this
hunk ./src/allmydata/interfaces.py 1862
         automatically.
 
-        Uploadables which do not care about convergence (or do not wish to
+        Uploadables that do not care about convergence (or do not wish to
         make multiple passes over the data) can simply return a
         strongly-random 16 byte string.
 
hunk ./src/allmydata/interfaces.py 1872
 
     def read(length):
         """Return a Deferred that fires with a list of strings (perhaps with
-        only a single element) which, when concatenated together, contain the
+        only a single element) that, when concatenated together, contain the
         next 'length' bytes of data. If EOF is near, this may provide fewer
         than 'length' bytes. The total number of bytes provided by read()
         before it signals EOF must equal the size provided by get_size().
hunk ./src/allmydata/interfaces.py 1919
 
     def read(length):
         """
-        Returns a list of strings which, when concatenated, are the next
+        Returns a list of strings that, when concatenated, are the next
         length bytes of the file, or fewer if there are fewer bytes
         between the current location and the end of the file.
         """
hunk ./src/allmydata/interfaces.py 1932
 
 class IUploadResults(Interface):
     """I am returned by upload() methods. I contain a number of public
-    attributes which can be read to determine the results of the upload. Some
+    attributes that can be read to determine the results of the upload. Some
     of these are functional, some are timing information. All of these may be
     None.
 
hunk ./src/allmydata/interfaces.py 1965
 
 class IDownloadResults(Interface):
     """I am created internally by download() methods. I contain a number of
-    public attributes which contain details about the download process.::
+    public attributes that contain details about the download process.::
 
      .file_size : the size of the file, in bytes
      .servers_used : set of server peerids that were used during download
hunk ./src/allmydata/interfaces.py 1991
 class IUploader(Interface):
     def upload(uploadable):
         """Upload the file. 'uploadable' must impement IUploadable. This
-        returns a Deferred which fires with an IUploadResults instance, from
+        returns a Deferred that fires with an IUploadResults instance, from
         which the URI of the file can be obtained as results.uri ."""
 
     def upload_ssk(write_capability, new_version, uploadable):
hunk ./src/allmydata/interfaces.py 2041
         kind of lease that is obtained (which account number to claim, etc).
 
         TODO: any problems seen during checking will be reported to the
-        health-manager.furl, a centralized object which is responsible for
+        health-manager.furl, a centralized object that is responsible for
         figuring out why files are unhealthy so corrective action can be
         taken.
         """
hunk ./src/allmydata/interfaces.py 2056
         will be put in the check-and-repair results. The Deferred will not
         fire until the repair is complete.
 
-        This returns a Deferred which fires with an instance of
+        This returns a Deferred that fires with an instance of
         ICheckAndRepairResults."""
 
 class IDeepCheckable(Interface):
hunk ./src/allmydata/interfaces.py 2141
                               that was found to be corrupt. Each share
                               locator is a list of (serverid, storage_index,
                               sharenum).
-         count-incompatible-shares: the number of shares which are of a share
+         count-incompatible-shares: the number of shares that are of a share
                                     format unknown to this checker
          list-incompatible-shares: a list of 'share locators', one for each
                                    share that was found to be of an unknown
hunk ./src/allmydata/interfaces.py 2148
                                    format. Each share locator is a list of
                                    (serverid, storage_index, sharenum).
          servers-responding: list of (binary) storage server identifiers,
-                             one for each server which responded to the share
+                             one for each server that responded to the share
                              query (even if they said they didn't have
                              shares, and even if they said they did have
                              shares but then didn't send them when asked, or
hunk ./src/allmydata/interfaces.py 2345
         will use the data in the checker results to guide the repair process,
         such as which servers provided bad data and should therefore be
         avoided. The ICheckResults object is inside the
-        ICheckAndRepairResults object, which is returned by the
+        ICheckAndRepairResults object that is returned by the
         ICheckable.check() method::
 
          d = filenode.check(repair=False)
hunk ./src/allmydata/interfaces.py 2436
         methods to create new objects. I return synchronously."""
 
     def create_mutable_file(contents=None, keysize=None):
-        """I create a new mutable file, and return a Deferred which will fire
+        """I create a new mutable file, and return a Deferred that will fire
         with the IMutableFileNode instance when it is ready. If contents= is
         provided (a bytestring), it will be used as the initial contents of
         the new file, otherwise the file will contain zero bytes. keysize= is
hunk ./src/allmydata/interfaces.py 2444
         usual."""
 
     def create_new_mutable_directory(initial_children={}):
-        """I create a new mutable directory, and return a Deferred which will
+        """I create a new mutable directory, and return a Deferred that will
         fire with the IDirectoryNode instance when it is ready. If
         initial_children= is provided (a dict mapping unicode child name to
         (childnode, metadata_dict) tuples), the directory will be populated
hunk ./src/allmydata/interfaces.py 2452
 
 class IClientStatus(Interface):
     def list_all_uploads():
-        """Return a list of uploader objects, one for each upload which
+        """Return a list of uploader objects, one for each upload that
         currently has an object available (tracked with weakrefs). This is
         intended for debugging purposes."""
     def list_active_uploads():
hunk ./src/allmydata/interfaces.py 2462
         started uploads."""
 
     def list_all_downloads():
-        """Return a list of downloader objects, one for each download which
+        """Return a list of downloader objects, one for each download that
         currently has an object available (tracked with weakrefs). This is
         intended for debugging purposes."""
     def list_active_downloads():
hunk ./src/allmydata/interfaces.py 2689
 
     def provide(provider=RIStatsProvider, nickname=str):
         """
-        @param provider: a stats collector instance which should be polled
+        @param provider: a stats collector instance that should be polled
                          periodically by the gatherer to collect stats.
         @param nickname: a name useful to identify the provided client
         """
hunk ./src/allmydata/interfaces.py 2722
 
 class IValidatedThingProxy(Interface):
     def start():
-        """ Acquire a thing and validate it. Return a deferred which is
+        """ Acquire a thing and validate it. Return a deferred that is
         eventually fired with self if the thing is valid or errbacked if it
         can't be acquired or validated."""
 
}
[Pluggable backends -- new and moved files, changes to moved files. refs #999
david-sarah@jacaranda.org**20110919232926
 Ignore-this: ec5d2d1362a092d919e84327d3092424
] {
adddir ./src/allmydata/storage/backends
adddir ./src/allmydata/storage/backends/disk
move ./src/allmydata/storage/immutable.py ./src/allmydata/storage/backends/disk/immutable.py
move ./src/allmydata/storage/mutable.py ./src/allmydata/storage/backends/disk/mutable.py
adddir ./src/allmydata/storage/backends/null
addfile ./src/allmydata/storage/backends/__init__.py
addfile ./src/allmydata/storage/backends/base.py
hunk ./src/allmydata/storage/backends/base.py 1
+
+from twisted.application import service
+
+from allmydata.storage.common import si_b2a
+from allmydata.storage.lease import LeaseInfo
+from allmydata.storage.bucket import BucketReader
+
+
+class Backend(service.MultiService):
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+
+class ShareSet(object):
+    """
+    This class implements shareset logic that could work for all backends, but
+    might be useful to override for efficiency.
+    """
+
+    def __init__(self, storageindex):
+        self.storageindex = storageindex
+
+    def get_storage_index(self):
+        return self.storageindex
+
+    def get_storage_index_string(self):
+        return si_b2a(self.storageindex)
+
+    def renew_lease(self, renew_secret, new_expiration_time):
+        found_shares = False
+        for share in self.get_shares():
+            found_shares = True
+            share.renew_lease(renew_secret, new_expiration_time)
+
+        if not found_shares:
+            raise IndexError("no such lease to renew")
+
+    def get_leases(self):
+        # Since all shares get the same lease data, we just grab the leases
+        # from the first share.
+        try:
+            sf = self.get_shares().next()
+            return sf.get_leases()
+        except StopIteration:
+            return iter([])
+
+    def add_or_renew_lease(self, lease_info):
+        # This implementation assumes that lease data is duplicated in
+        # all shares of a shareset, which might not be true for all backends.
+        for share in self.get_shares():
+            share.add_or_renew_lease(lease_info)
+
+    def make_bucket_reader(self, storageserver, share):
+        return BucketReader(storageserver, share)
+
+    def testv_and_readv_and_writev(self, storageserver, secrets,
+                                   test_and_write_vectors, read_vector,
+                                   expiration_time):
+        # The implementation here depends on the following helper methods,
+        # which must be provided by subclasses:
+        #
+        # def _clean_up_after_unlink(self):
+        #     """clean up resources associated with the shareset after some
+        #     shares might have been deleted"""
+        #
+        # def _create_mutable_share(self, storageserver, shnum, write_enabler):
+        #     """create a mutable share with the given shnum and write_enabler"""
+
+        # secrets might be a triple with cancel_secret in secrets[2], but if
+        # so we ignore the cancel_secret.
+        write_enabler = secrets[0]
+        renew_secret = secrets[1]
+
+        si_s = self.get_storage_index_string()
+        shares = {}
+        for share in self.get_shares():
+            # XXX is it correct to ignore immutable shares? Maybe get_shares should
+            # have a parameter saying what type it's expecting.
+            if share.sharetype == "mutable":
+                share.check_write_enabler(write_enabler, si_s)
+                shares[share.get_shnum()] = share
+
+        # write_enabler is good for all existing shares
+
+        # now evaluate test vectors
+        testv_is_good = True
+        for sharenum in test_and_write_vectors:
+            (testv, datav, new_length) = test_and_write_vectors[sharenum]
+            if sharenum in shares:
+                if not shares[sharenum].check_testv(testv):
+                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
+                    testv_is_good = False
+                    break
+            else:
+                # compare the vectors against an empty share, in which all
+                # reads return empty strings
+                if not EmptyShare().check_testv(testv):
+                    self.log("testv failed (empty): [%d] %r" % (sharenum,
+                                                                testv))
+                    testv_is_good = False
+                    break
+
+        # gather the read vectors, before we do any writes
+        read_data = {}
+        for shnum, share in shares.items():
+            read_data[shnum] = share.readv(read_vector)
+
+        ownerid = 1 # TODO
+        lease_info = LeaseInfo(ownerid, renew_secret,
+                               expiration_time, storageserver.get_serverid())
+
+        if testv_is_good:
+            # now apply the write vectors
+            for shnum in test_and_write_vectors:
+                (testv, datav, new_length) = test_and_write_vectors[shnum]
+                if new_length == 0:
+                    if shnum in shares:
+                        shares[shnum].unlink()
+                else:
+                    if shnum not in shares:
+                        # allocate a new share
+                        share = self._create_mutable_share(storageserver, shnum, write_enabler)
+                        shares[shnum] = share
+                    shares[shnum].writev(datav, new_length)
+                    # and update the lease
+                    shares[shnum].add_or_renew_lease(lease_info)
+
+            if new_length == 0:
+                self._clean_up_after_unlink()
+
+        return (testv_is_good, read_data)
+
+    def readv(self, wanted_shnums, read_vector):
+        """
+        Read a vector from the numbered shares in this shareset. An empty
+        shares list means to return data from all known shares.
+
+        @param wanted_shnums=ListOf(int)
+        @param read_vector=ReadVector
+        @return DictOf(int, ReadData): shnum -> results, with one key per share
+        """
+        datavs = {}
+        for share in self.get_shares():
+            shnum = share.get_shnum()
+            if not wanted_shnums or shnum in wanted_shnums:
+                datavs[shnum] = share.readv(read_vector)
+
+        return datavs
+
+
+def testv_compare(a, op, b):
+    assert op in ("lt", "le", "eq", "ne", "ge", "gt")
+    if op == "lt":
+        return a < b
+    if op == "le":
+        return a <= b
+    if op == "eq":
+        return a == b
+    if op == "ne":
+        return a != b
+    if op == "ge":
+        return a >= b
+    if op == "gt":
+        return a > b
+    # never reached
+
+
+class EmptyShare:
+    def check_testv(self, testv):
+        test_good = True
+        for (offset, length, operator, specimen) in testv:
+            data = ""
+            if not testv_compare(data, operator, specimen):
+                test_good = False
+                break
+        return test_good
+
addfile ./src/allmydata/storage/backends/disk/__init__.py
addfile ./src/allmydata/storage/backends/disk/disk_backend.py
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 1
+
+import re
+
+from twisted.python.filepath import UnlistableError
+
+from zope.interface import implements
+from allmydata.interfaces import IStorageBackend, IShareSet
+from allmydata.util import fileutil, log, time_format
+from allmydata.storage.common import si_b2a, si_a2b
+from allmydata.storage.bucket import BucketWriter
+from allmydata.storage.backends.base import Backend, ShareSet
+from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
+from allmydata.storage.backends.disk.mutable import MutableDiskShare, create_mutable_disk_share
+
+# storage/
+# storage/shares/incoming
+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
+# storage/shares/$START/$STORAGEINDEX
+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
+
+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
+# base-32 chars).
+# $SHARENUM matches this regex:
+NUM_RE=re.compile("^[0-9]+$")
+
+
+def si_si2dir(startfp, storageindex):
+    sia = si_b2a(storageindex)
+    newfp = startfp.child(sia[:2])
+    return newfp.child(sia)
+
+
+def get_share(fp):
+    f = fp.open('rb')
+    try:
+        prefix = f.read(32)
+    finally:
+        f.close()
+
+    if prefix == MutableDiskShare.MAGIC:
+        return MutableDiskShare(fp)
+    else:
+        # assume it's immutable
+        return ImmutableDiskShare(fp)
+
+
+class DiskBackend(Backend):
+    implements(IStorageBackend)
+
+    def __init__(self, storedir, readonly=False, reserved_space=0, discard_storage=False):
+        Backend.__init__(self)
+        self._setup_storage(storedir, readonly, reserved_space, discard_storage)
+        self._setup_corruption_advisory()
+
+    def _setup_storage(self, storedir, readonly, reserved_space, discard_storage):
+        self._storedir = storedir
+        self._readonly = readonly
+        self._reserved_space = int(reserved_space)
+        self._discard_storage = discard_storage
+        self._sharedir = self._storedir.child("shares")
+        fileutil.fp_make_dirs(self._sharedir)
+        self._incomingdir = self._sharedir.child('incoming')
+        self._clean_incomplete()
+        if self._reserved_space and (self.get_available_space() is None):
+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                    umid="0wZ27w", level=log.UNUSUAL)
+
+    def _clean_incomplete(self):
+        fileutil.fp_remove(self._incomingdir)
+        fileutil.fp_make_dirs(self._incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self._corruption_advisory_dir = self._storedir.child("corruption-advisories")
+
+    def _make_shareset(self, sharehomedir):
+        return self.get_shareset(si_a2b(sharehomedir.basename()))
+
+    def get_sharesets_for_prefix(self, prefix):
+        prefixfp = self._sharedir.child(prefix)
+        try:
+            sharesets = map(self._make_shareset, prefixfp.children())
+            def _by_base32si(b):
+                return b.get_storage_index_string()
+            sharesets.sort(key=_by_base32si)
+        except EnvironmentError:
+            sharesets = []
+        return sharesets
+
+    def get_shareset(self, storageindex):
+        sharehomedir = si_si2dir(self._sharedir, storageindex)
+        incominghomedir = si_si2dir(self._incomingdir, storageindex)
+        return DiskShareSet(storageindex, sharehomedir, incominghomedir, discard_storage=self._discard_storage)
+
+    def fill_in_space_stats(self, stats):
+        stats['storage_server.reserved_space'] = self._reserved_space
+        try:
+            disk = fileutil.get_disk_stats(self._sharedir, self._reserved_space)
+            writeable = disk['avail'] > 0
+
+            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
+            stats['storage_server.disk_total'] = disk['total']
+            stats['storage_server.disk_used'] = disk['used']
+            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
+            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
+            stats['storage_server.disk_avail'] = disk['avail']
+        except AttributeError:
+            writeable = True
+        except EnvironmentError:
+            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
+            writeable = False
+
+        if self._readonly:
+            stats['storage_server.disk_avail'] = 0
+            writeable = False
+
+        stats['storage_server.accepting_immutable_shares'] = int(writeable)
+
+    def get_available_space(self):
+        if self._readonly:
+            return 0
+        return fileutil.get_available_space(self._sharedir, self._reserved_space)
+
+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
+        fileutil.fp_make_dirs(self._corruption_advisory_dir)
+        now = time_format.iso_utc(sep="T")
+        si_s = si_b2a(storageindex)
+
+        # Windows can't handle colons in the filename.
+        name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "")
+        f = self._corruption_advisory_dir.child(name).open("w")
+        try:
+            f.write("report: Share Corruption\n")
+            f.write("type: %s\n" % sharetype)
+            f.write("storage_index: %s\n" % si_s)
+            f.write("share_number: %d\n" % shnum)
+            f.write("\n")
+            f.write(reason)
+            f.write("\n")
+        finally:
+            f.close()
+
+        log.msg(format=("client claims corruption in (%(share_type)s) " +
+                        "%(si)s-%(shnum)d: %(reason)s"),
+                share_type=sharetype, si=si_s, shnum=shnum, reason=reason,
+                level=log.SCARY, umid="SGx2fA")
+
+
+class DiskShareSet(ShareSet):
+    implements(IShareSet)
+
+    def __init__(self, storageindex, sharehomedir, incominghomedir=None, discard_storage=False):
+        ShareSet.__init__(self, storageindex)
+        self._sharehomedir = sharehomedir
+        self._incominghomedir = incominghomedir
+        self._discard_storage = discard_storage
+
+    def get_overhead(self):
+        return (fileutil.get_disk_usage(self._sharehomedir) +
+                fileutil.get_disk_usage(self._incominghomedir))
+
+    def get_shares(self):
+        """
+        Generate IStorageBackendShare objects for shares we have for this storage index.
+        ("Shares we have" means completed ones, excluding incoming ones.)
+        """
+        try:
+            for fp in self._sharehomedir.children():
+                shnumstr = fp.basename()
+                if not NUM_RE.match(shnumstr):
+                    continue
+                sharehome = self._sharehomedir.child(shnumstr)
+                yield self.get_share(sharehome)
+        except UnlistableError:
+            # There is no shares directory at all.
+            pass
+
+    def has_incoming(self, shnum):
+        if self._incominghomedir is None:
+            return False
+        return self._incominghomedir.child(str(shnum)).exists()
+
+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
+        sharehome = self._sharehomedir.child(str(shnum))
+        incominghome = self._incominghomedir.child(str(shnum))
+        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
+                                   max_size=max_space_per_bucket, create=True)
+        bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
+        if self._discard_storage:
+            bw.throw_out_all_data = True
+        return bw
+
+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
+        fileutil.fp_make_dirs(self._sharehomedir)
+        sharehome = self._sharehomedir.child(str(shnum))
+        serverid = storageserver.get_serverid()
+        return create_mutable_disk_share(sharehome, serverid, write_enabler, storageserver)
+
+    def _clean_up_after_unlink(self):
+        fileutil.fp_rmdir_if_empty(self._sharehomedir)
+
hunk ./src/allmydata/storage/backends/disk/immutable.py 1
-import os, stat, struct, time
 
hunk ./src/allmydata/storage/backends/disk/immutable.py 2
-from foolscap.api import Referenceable
+import struct
 
 from zope.interface import implements
hunk ./src/allmydata/storage/backends/disk/immutable.py 5
-from allmydata.interfaces import RIBucketWriter, RIBucketReader
-from allmydata.util import base32, fileutil, log
+
+from allmydata.interfaces import IStoredShare
+from allmydata.util import fileutil
 from allmydata.util.assertutil import precondition
hunk ./src/allmydata/storage/backends/disk/immutable.py 9
+from allmydata.util.fileutil import fp_make_dirs
 from allmydata.util.hashutil import constant_time_compare
hunk ./src/allmydata/storage/backends/disk/immutable.py 11
+from allmydata.util.encodingutil import quote_filepath
+from allmydata.storage.common import si_b2a, UnknownImmutableContainerVersionError, DataTooLargeError
 from allmydata.storage.lease import LeaseInfo
hunk ./src/allmydata/storage/backends/disk/immutable.py 14
-from allmydata.storage.common import UnknownImmutableContainerVersionError, \
-     DataTooLargeError
+
 
 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
 # and share data. The share data is accessed by RIBucketWriter.write and
hunk ./src/allmydata/storage/backends/disk/immutable.py 41
 # then the value stored in this field will be the actual share data length
 # modulo 2**32.
 
-class ShareFile:
-    LEASE_SIZE = struct.calcsize(">L32s32sL")
+class ImmutableDiskShare(object):
+    implements(IStoredShare)
+
     sharetype = "immutable"
hunk ./src/allmydata/storage/backends/disk/immutable.py 45
+    LEASE_SIZE = struct.calcsize(">L32s32sL")
+
 
hunk ./src/allmydata/storage/backends/disk/immutable.py 48
-    def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
hunk ./src/allmydata/storage/backends/disk/immutable.py 53
-        self.home = filename
+        self._storageindex = storageindex
         self._max_size = max_size
hunk ./src/allmydata/storage/backends/disk/immutable.py 55
+        self._incominghome = incominghome
+        self._home = finalhome
+        self._shnum = shnum
         if create:
             # touch the file, so later callers will see that we're working on
             # it. Also construct the metadata.
hunk ./src/allmydata/storage/backends/disk/immutable.py 61
-            assert not os.path.exists(self.home)
-            fileutil.make_dirs(os.path.dirname(self.home))
-            f = open(self.home, 'wb')
+            assert not finalhome.exists()
+            fp_make_dirs(self._incominghome.parent())
             # The second field -- the four-byte share data length -- is no
             # longer used as of Tahoe v1.3.0, but we continue to write it in
             # there in case someone downgrades a storage server from >=
hunk ./src/allmydata/storage/backends/disk/immutable.py 72
             # the largest length that can fit into the field. That way, even
             # if this does happen, the old < v1.3.0 server will still allow
             # clients to read the first part of the share.
-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
-            f.close()
+            self._incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
            self._lease_offset = max_size + 0x0c
             self._num_leases = 0
         else:
hunk ./src/allmydata/storage/backends/disk/immutable.py 76
-            f = open(self.home, 'rb')
-            filesize = os.path.getsize(self.home)
-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-            f.close()
+            f = self._home.open(mode='rb')
+            try:
+                (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            finally:
+                f.close()
+            filesize = self._home.getsize()
             if version != 1:
                 msg = "sharefile %s had version %d but we wanted 1" % \
hunk ./src/allmydata/storage/backends/disk/immutable.py 84
-                      (filename, version)
+                      (self._home, version)
                 raise UnknownImmutableContainerVersionError(msg)
             self._num_leases = num_leases
             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
hunk ./src/allmydata/storage/backends/disk/immutable.py 90
         self._data_offset = 0xc
 
+    def __repr__(self):
+        return ("<ImmutableDiskShare %s:%r at %s>"
+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
+
+    def close(self):
+        fileutil.fp_make_dirs(self._home.parent())
+        self._incominghome.moveTo(self._home)
+        try:
+            # self._incominghome is like storage/shares/incoming/ab/abcde/4 .
+            # We try to delete the parent (.../ab/abcde) to avoid leaving
+            # these directories lying around forever, but the delete might
+            # fail if we're working on another share for the same storage
+            # index (like ab/abcde/5). The alternative approach would be to
+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
+            # ShareWriter), each of which is responsible for a single
+            # directory on disk, and have them use reference counting of
+            # their children to know when they should do the rmdir. This
+            # approach is simpler, but relies on os.rmdir refusing to delete
+            # a non-empty directory. Do *not* use fileutil.fp_remove() here!
+            fileutil.fp_rmdir_if_empty(self._incominghome.parent())
+            # we also delete the grandparent (prefix) directory, .../ab ,
+            # again to avoid leaving directories lying around. This might
+            # fail if there is another bucket open that shares a prefix (like
+            # ab/abfff).
+            fileutil.fp_rmdir_if_empty(self._incominghome.parent().parent())
+            # we leave the great-grandparent (incoming/) directory in place.
+        except EnvironmentError:
+            # ignore the "can't rmdir because the directory is not empty"
+            # exceptions, those are normal consequences of the
+            # above-mentioned conditions.
+            pass
+        pass
+
+    def get_used_space(self):
+        return (fileutil.get_used_space(self._home) +
+                fileutil.get_used_space(self._incominghome))
+
+    def get_storage_index(self):
+        return self._storageindex
+
+    def get_shnum(self):
+        return self._shnum
+
     def unlink(self):
hunk ./src/allmydata/storage/backends/disk/immutable.py 134
-        os.unlink(self.home)
+        self._home.remove()
+
+    def get_size(self):
+        return self._home.getsize()
+
+    def get_data_length(self):
+        return self._lease_offset - self._data_offset
+
+    #def readv(self, read_vector):
+    #    ...
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
hunk ./src/allmydata/storage/backends/disk/immutable.py 147
-        # reads beyond the end of the data are truncated. Reads that start
+
+        # Reads beyond the end of the data are truncated. Reads that start
         # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         actuallength = max(0, min(length, self._lease_offset-seekpos))
hunk ./src/allmydata/storage/backends/disk/immutable.py 154
         if actuallength == 0:
             return ""
-        f = open(self.home, 'rb')
-        f.seek(seekpos)
-        return f.read(actuallength)
+        f = self._home.open(mode='rb')
+        try:
+            f.seek(seekpos)
+            sharedata = f.read(actuallength)
+        finally:
+            f.close()
+        return sharedata
 
     def write_share_data(self, offset, data):
         length = len(data)
hunk ./src/allmydata/storage/backends/disk/immutable.py 167
         precondition(offset >= 0, offset)
         if self._max_size is not None and offset+length > self._max_size:
             raise DataTooLargeError(self._max_size, offset, length)
-        f = open(self.home, 'rb+')
-        real_offset = self._data_offset+offset
-        f.seek(real_offset)
-        assert f.tell() == real_offset
-        f.write(data)
-        f.close()
+        f = self._incominghome.open(mode='rb+')
+        try:
+            real_offset = self._data_offset+offset
+            f.seek(real_offset)
+            assert f.tell() == real_offset
+            f.write(data)
+        finally:
+            f.close()
 
     def _write_lease_record(self, f, lease_number, lease_info):
         offset = self._lease_offset + lease_number * self.LEASE_SIZE
hunk ./src/allmydata/storage/backends/disk/immutable.py 184
 
     def _read_num_leases(self, f):
         f.seek(0x08)
-        (num_leases,) = struct.unpack(">L", f.read(4))
+        ro = f.read(4)
+        (num_leases,) = struct.unpack(">L", ro)
         return num_leases
 
     def _write_num_leases(self, f, num_leases):
hunk ./src/allmydata/storage/backends/disk/immutable.py 195
     def _truncate_leases(self, f, num_leases):
         f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
 
+    # These lease operations are intended for use by disk_backend.py.
+    # Other clients should not depend on the fact that the disk backend
+    # stores leases in share files.
+
     def get_leases(self):
         """Yields a LeaseInfo instance for all leases."""
hunk ./src/allmydata/storage/backends/disk/immutable.py 201
-        f = open(self.home, 'rb')
-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-        f.seek(self._lease_offset)
-        for i in range(num_leases):
-            data = f.read(self.LEASE_SIZE)
-            if data:
-                yield LeaseInfo().from_immutable_data(data)
+        f = self._home.open(mode='rb')
+        try:
+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            f.seek(self._lease_offset)
+            for i in range(num_leases):
+                data = f.read(self.LEASE_SIZE)
+                if data:
+                    yield LeaseInfo().from_immutable_data(data)
+        finally:
+            f.close()
 
     def add_lease(self, lease_info):
hunk ./src/allmydata/storage/backends/disk/immutable.py 213
-        f = open(self.home, 'rb+')
-        num_leases = self._read_num_leases(f)
-        self._write_lease_record(f, num_leases, lease_info)
-        self._write_num_leases(f, num_leases+1)
-        f.close()
+        f = self._incominghome.open(mode='rb')
+        try:
+            num_leases = self._read_num_leases(f)
+        finally:
+            f.close()
+        f = self._home.open(mode='wb+')
+        try:
+            self._write_lease_record(f, num_leases, lease_info)
+            self._write_num_leases(f, num_leases+1)
+        finally:
+            f.close()
 
     def renew_lease(self, renew_secret, new_expire_time):
hunk ./src/allmydata/storage/backends/disk/immutable.py 226
-        for i,lease in enumerate(self.get_leases()):
-            if constant_time_compare(lease.renew_secret, renew_secret):
-                # yup. See if we need to update the owner time.
-                if new_expire_time > lease.expiration_time:
-                    # yes
-                    lease.expiration_time = new_expire_time
-                    f = open(self.home, 'rb+')
-                    self._write_lease_record(f, i, lease)
-                    f.close()
-                return
+        try:
+            for i, lease in enumerate(self.get_leases()):
+                if constant_time_compare(lease.renew_secret, renew_secret):
+                    # yup. See if we need to update the owner time.
+                    if new_expire_time > lease.expiration_time:
+                        # yes
+                        lease.expiration_time = new_expire_time
+                        f = self._home.open('rb+')
+                        try:
+                            self._write_lease_record(f, i, lease)
+                        finally:
+                            f.close()
+                    return
+        except IndexError, e:
+            raise Exception("IndexError: %s" % (e,))
         raise IndexError("unable to renew non-existent lease")
 
     def add_or_renew_lease(self, lease_info):
hunk ./src/allmydata/storage/backends/disk/immutable.py 249
                              lease_info.expiration_time)
         except IndexError:
             self.add_lease(lease_info)
-
-
-    def cancel_lease(self, cancel_secret):
-        """Remove a lease with the given cancel_secret. If the last lease is
-        cancelled, the file will be removed. Return the number of bytes that
-        were freed (by truncating the list of leases, and possibly by
-        deleting the file. Raise IndexError if there was no lease with the
-        given cancel_secret.
-        """
-
-        leases = list(self.get_leases())
-        num_leases_removed = 0
-        for i,lease in enumerate(leases):
-            if constant_time_compare(lease.cancel_secret, cancel_secret):
-                leases[i] = None
-                num_leases_removed += 1
-        if not num_leases_removed:
-            raise IndexError("unable to find matching lease to cancel")
-        if num_leases_removed:
-            # pack and write out the remaining leases. We write these out in
-            # the same order as they were added, so that if we crash while
-            # doing this, we won't lose any non-cancelled leases.
-            leases = [l for l in leases if l] # remove the cancelled leases
-            f = open(self.home, 'rb+')
-            for i,lease in enumerate(leases):
-                self._write_lease_record(f, i, lease)
-            self._write_num_leases(f, len(leases))
-            self._truncate_leases(f, len(leases))
-            f.close()
-        space_freed = self.LEASE_SIZE * num_leases_removed
-        if not len(leases):
-            space_freed += os.stat(self.home)[stat.ST_SIZE]
-            self.unlink()
-        return space_freed
-
-
-class BucketWriter(Referenceable):
-    implements(RIBucketWriter)
-
-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
-        self.ss = ss
-        self.incominghome = incominghome
-        self.finalhome = finalhome
-        self._max_size = max_size # don't allow the client to write more than this
-        self._canary = canary
-        self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
-        self.closed = False
-        self.throw_out_all_data = False
-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
-        # also, add our lease to the file now, so that other ones can be
-        # added by simultaneous uploaders
-        self._sharefile.add_lease(lease_info)
-
-    def allocated_size(self):
-        return self._max_size
-
-    def remote_write(self, offset, data):
-        start = time.time()
-        precondition(not self.closed)
-        if self.throw_out_all_data:
-            return
-        self._sharefile.write_share_data(offset, data)
-        self.ss.add_latency("write", time.time() - start)
-        self.ss.count("write")
-
-    def remote_close(self):
-        precondition(not self.closed)
-        start = time.time()
-
-        fileutil.make_dirs(os.path.dirname(self.finalhome))
-        fileutil.rename(self.incominghome, self.finalhome)
-        try:
-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
-            # We try to delete the parent (.../ab/abcde) to avoid leaving
-            # these directories lying around forever, but the delete might
-            # fail if we're working on another share for the same storage
-            # index (like ab/abcde/5). The alternative approach would be to
-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
-            # ShareWriter), each of which is responsible for a single
-            # directory on disk, and have them use reference counting of
-            # their children to know when they should do the rmdir. This
-            # approach is simpler, but relies on os.rmdir refusing to delete
-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
-            os.rmdir(os.path.dirname(self.incominghome))
-            # we also delete the grandparent (prefix) directory, .../ab ,
-            # again to avoid leaving directories lying around. This might
-            # fail if there is another bucket open that shares a prefix (like
-            # ab/abfff).
-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
-            # we leave the great-grandparent (incoming/) directory in place.
-        except EnvironmentError:
-            # ignore the "can't rmdir because the directory is not empty"
-            # exceptions, those are normal consequences of the
-            # above-mentioned conditions.
-            pass
-        self._sharefile = None
-        self.closed = True
-        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
-
-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
-        self.ss.bucket_writer_closed(self, filelen)
-        self.ss.add_latency("close", time.time() - start)
-        self.ss.count("close")
-
-    def _disconnected(self):
-        if not self.closed:
-            self._abort()
-
-    def remote_abort(self):
-        log.msg("storage: aborting sharefile %s" % self.incominghome,
-                facility="tahoe.storage", level=log.UNUSUAL)
-        if not self.closed:
-            self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
-        self._abort()
-        self.ss.count("abort")
-
-    def _abort(self):
-        if self.closed:
-            return
-
-        os.remove(self.incominghome)
-        # if we were the last share to be moved, remove the incoming/
-        # directory that was our parent
-        parentdir = os.path.split(self.incominghome)[0]
-        if not os.listdir(parentdir):
-            os.rmdir(parentdir)
-        self._sharefile = None
-
-        # We are now considered closed for further writing. We must tell
-        # the storage server about this so that it stops expecting us to
-        # use the space it allocated for us earlier.
-        self.closed = True
-        self.ss.bucket_writer_closed(self, 0)
-
-
-class BucketReader(Referenceable):
-    implements(RIBucketReader)
-
-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
-        self.ss = ss
-        self._share_file = ShareFile(sharefname)
-        self.storage_index = storage_index
-        self.shnum = shnum
-
-    def __repr__(self):
-        return "<%s %s %s>" % (self.__class__.__name__,
-                               base32.b2a_l(self.storage_index[:8], 60),
-                               self.shnum)
-
-    def remote_read(self, offset, length):
-        start = time.time()
-        data = self._share_file.read_share_data(offset, length)
-        self.ss.add_latency("read", time.time() - start)
-        self.ss.count("read")
-        return data
-
-    def remote_advise_corrupt_share(self, reason):
-        return self.ss.remote_advise_corrupt_share("immutable",
-                                                   self.storage_index,
-                                                   self.shnum,
-                                                   reason)
1194hunk ./src/allmydata/storage/backends/disk/mutable.py 1
1195-import os, stat, struct
1196 
1197hunk ./src/allmydata/storage/backends/disk/mutable.py 2
1198-from allmydata.interfaces import BadWriteEnablerError
1199-from allmydata.util import idlib, log
1200+import struct
1201+
1202+from zope.interface import implements
1203+
1204+from allmydata.interfaces import IStoredMutableShare, BadWriteEnablerError
1205+from allmydata.util import fileutil, idlib, log
1206 from allmydata.util.assertutil import precondition
1207 from allmydata.util.hashutil import constant_time_compare
1208hunk ./src/allmydata/storage/backends/disk/mutable.py 10
1209-from allmydata.storage.lease import LeaseInfo
1210-from allmydata.storage.common import UnknownMutableContainerVersionError, \
1211+from allmydata.util.encodingutil import quote_filepath
1212+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
1213      DataTooLargeError
1214hunk ./src/allmydata/storage/backends/disk/mutable.py 13
1215+from allmydata.storage.lease import LeaseInfo
1216+from allmydata.storage.backends.base import testv_compare
1217 
1218hunk ./src/allmydata/storage/backends/disk/mutable.py 16
1219-# the MutableShareFile is like the ShareFile, but used for mutable data. It
1220-# has a different layout. See docs/mutable.txt for more details.
1221+
1222+# The MutableDiskShare is like the ImmutableDiskShare, but used for mutable data.
1223+# It has a different layout. See docs/mutable.rst for more details.
1224 
1225 # #   offset    size    name
1226 # 1   0         32      magic verstr "tahoe mutable container v1" plus binary
1227hunk ./src/allmydata/storage/backends/disk/mutable.py 31
1228 #                        4    4   expiration timestamp
1229 #                        8   32   renewal token
1230 #                        40  32   cancel token
1231-#                        72  20   nodeid which accepted the tokens
1232+#                        72  20   nodeid that accepted the tokens
1233 # 7   468       (a)     data
1234 # 8   ??        4       count of extra leases
1235 # 9   ??        n*92    extra leases
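
For illustration, the five header fields above (magic, write-enabler nodeid,
write enabler, data length, extra-lease offset) unpack with the same
">32s20s32sQQ" format string the class uses; this standalone sketch is not
taken from the patch:

    import struct

    MUTABLE_HEADER_FMT = ">32s20s32sQQ"
    MUTABLE_HEADER_SIZE = struct.calcsize(MUTABLE_HEADER_FMT)  # 32+20+32+8+8 == 100

    def parse_mutable_container_header(header):
        # field order follows the layout table above
        (magic, write_enabler_nodeid, write_enabler,
         data_length, extra_lease_offset) = struct.unpack(MUTABLE_HEADER_FMT, header)
        return (magic, write_enabler_nodeid, write_enabler,
                data_length, extra_lease_offset)
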
1236hunk ./src/allmydata/storage/backends/disk/mutable.py 37
1237 
1238 
1239-# The struct module doc says that L's are 4 bytes in size., and that Q's are
1240+# The struct module doc says that L's are 4 bytes in size, and that Q's are
1241 # 8 bytes in size. Since compatibility depends upon this, double-check it.
1242 assert struct.calcsize(">L") == 4, struct.calcsize(">L")
1243 assert struct.calcsize(">Q") == 8, struct.calcsize(">Q")
1244hunk ./src/allmydata/storage/backends/disk/mutable.py 42
1245 
1246-class MutableShareFile:
1247+
1248+class MutableDiskShare(object):
1249+    implements(IStoredMutableShare)
1250 
1251     sharetype = "mutable"
1252     DATA_LENGTH_OFFSET = struct.calcsize(">32s20s32s")
1253hunk ./src/allmydata/storage/backends/disk/mutable.py 54
1254     assert LEASE_SIZE == 92
1255     DATA_OFFSET = HEADER_SIZE + 4*LEASE_SIZE
1256     assert DATA_OFFSET == 468, DATA_OFFSET
1257+
1258     # our sharefiles start with a recognizable string, plus some random
1259     # binary data to reduce the chance that a regular text file will look
1260     # like a sharefile.
1261hunk ./src/allmydata/storage/backends/disk/mutable.py 63
1262     MAX_SIZE = 2*1000*1000*1000 # 2GB, kind of arbitrary
1263     # TODO: decide upon a policy for max share size
1264 
1265-    def __init__(self, filename, parent=None):
1266-        self.home = filename
1267-        if os.path.exists(self.home):
1268+    def __init__(self, storageindex, shnum, home, parent=None):
1269+        self._storageindex = storageindex
1270+        self._shnum = shnum
1271+        self._home = home
1272+        if self._home.exists():
1273             # we don't cache anything, just check the magic
1274hunk ./src/allmydata/storage/backends/disk/mutable.py 69
1275-            f = open(self.home, 'rb')
1276-            data = f.read(self.HEADER_SIZE)
1277-            (magic,
1278-             write_enabler_nodeid, write_enabler,
1279-             data_length, extra_least_offset) = \
1280-             struct.unpack(">32s20s32sQQ", data)
1281-            if magic != self.MAGIC:
1282-                msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
1283-                      (filename, magic, self.MAGIC)
1284-                raise UnknownMutableContainerVersionError(msg)
1285+            f = self._home.open('rb')
1286+            try:
1287+                data = f.read(self.HEADER_SIZE)
1288+                (magic,
1289+                 write_enabler_nodeid, write_enabler,
1290+                 data_length, extra_lease_offset) = \
1291+                 struct.unpack(">32s20s32sQQ", data)
1292+                if magic != self.MAGIC:
1293+                    msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
1294+                          (quote_filepath(self._home), magic, self.MAGIC)
1295+                    raise UnknownMutableContainerVersionError(msg)
1296+            finally:
1297+                f.close()
1298         self.parent = parent # for logging
1299 
1300     def log(self, *args, **kwargs):
1301hunk ./src/allmydata/storage/backends/disk/mutable.py 87
1302         return self.parent.log(*args, **kwargs)
1303 
1304-    def create(self, my_nodeid, write_enabler):
1305-        assert not os.path.exists(self.home)
1306+    def create(self, serverid, write_enabler):
1307+        assert not self._home.exists()
1308         data_length = 0
1309         extra_lease_offset = (self.HEADER_SIZE
1310                               + 4 * self.LEASE_SIZE
1311hunk ./src/allmydata/storage/backends/disk/mutable.py 95
1312                               + data_length)
1313         assert extra_lease_offset == self.DATA_OFFSET # true at creation
1314         num_extra_leases = 0
1315-        f = open(self.home, 'wb')
1316-        header = struct.pack(">32s20s32sQQ",
1317-                             self.MAGIC, my_nodeid, write_enabler,
1318-                             data_length, extra_lease_offset,
1319-                             )
1320-        leases = ("\x00"*self.LEASE_SIZE) * 4
1321-        f.write(header + leases)
1322-        # data goes here, empty after creation
1323-        f.write(struct.pack(">L", num_extra_leases))
1324-        # extra leases go here, none at creation
1325-        f.close()
1326+        f = self._home.open('wb')
1327+        try:
1328+            header = struct.pack(">32s20s32sQQ",
1329+                                 self.MAGIC, serverid, write_enabler,
1330+                                 data_length, extra_lease_offset,
1331+                                 )
1332+            leases = ("\x00"*self.LEASE_SIZE) * 4
1333+            f.write(header + leases)
1334+            # data goes here, empty after creation
1335+            f.write(struct.pack(">L", num_extra_leases))
1336+            # extra leases go here, none at creation
1337+        finally:
1338+            f.close()
1339+
1340+    def __repr__(self):
1341+        return ("<MutableDiskShare %s:%r at %s>"
1342+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
1343+
1344+    def get_used_space(self):
1345+        return fileutil.get_used_space(self._home)
1346+
1347+    def get_storage_index(self):
1348+        return self._storageindex
1349+
1350+    def get_shnum(self):
1351+        return self._shnum
1352 
1353     def unlink(self):
1354hunk ./src/allmydata/storage/backends/disk/mutable.py 123
1355-        os.unlink(self.home)
1356+        self._home.remove()
1357 
1358     def _read_data_length(self, f):
1359         f.seek(self.DATA_LENGTH_OFFSET)
1360hunk ./src/allmydata/storage/backends/disk/mutable.py 291
1361 
1362     def get_leases(self):
1363         """Yields a LeaseInfo instance for all leases."""
1364-        f = open(self.home, 'rb')
1365-        for i, lease in self._enumerate_leases(f):
1366-            yield lease
1367-        f.close()
1368+        f = self._home.open('rb')
1369+        try:
1370+            for i, lease in self._enumerate_leases(f):
1371+                yield lease
1372+        finally:
1373+            f.close()
1374 
1375     def _enumerate_leases(self, f):
1376         for i in range(self._get_num_lease_slots(f)):
1377hunk ./src/allmydata/storage/backends/disk/mutable.py 303
1378             try:
1379                 data = self._read_lease_record(f, i)
1380                 if data is not None:
1381-                    yield i,data
1382+                    yield i, data
1383             except IndexError:
1384                 return
1385 
1386hunk ./src/allmydata/storage/backends/disk/mutable.py 307
1387+    # These lease operations are intended for use by disk_backend.py.
1388+    # Other non-test clients should not depend on the fact that the disk
1389+    # backend stores leases in share files.
1390+
1391     def add_lease(self, lease_info):
1392         precondition(lease_info.owner_num != 0) # 0 means "no lease here"
1393hunk ./src/allmydata/storage/backends/disk/mutable.py 313
1394-        f = open(self.home, 'rb+')
1395-        num_lease_slots = self._get_num_lease_slots(f)
1396-        empty_slot = self._get_first_empty_lease_slot(f)
1397-        if empty_slot is not None:
1398-            self._write_lease_record(f, empty_slot, lease_info)
1399-        else:
1400-            self._write_lease_record(f, num_lease_slots, lease_info)
1401-        f.close()
1402+        f = self._home.open('rb+')
1403+        try:
1404+            num_lease_slots = self._get_num_lease_slots(f)
1405+            empty_slot = self._get_first_empty_lease_slot(f)
1406+            if empty_slot is not None:
1407+                self._write_lease_record(f, empty_slot, lease_info)
1408+            else:
1409+                self._write_lease_record(f, num_lease_slots, lease_info)
1410+        finally:
1411+            f.close()
1412 
1413     def renew_lease(self, renew_secret, new_expire_time):
1414         accepting_nodeids = set()
1415hunk ./src/allmydata/storage/backends/disk/mutable.py 326
1416-        f = open(self.home, 'rb+')
1417-        for (leasenum,lease) in self._enumerate_leases(f):
1418-            if constant_time_compare(lease.renew_secret, renew_secret):
1419-                # yup. See if we need to update the owner time.
1420-                if new_expire_time > lease.expiration_time:
1421-                    # yes
1422-                    lease.expiration_time = new_expire_time
1423-                    self._write_lease_record(f, leasenum, lease)
1424-                f.close()
1425-                return
1426-            accepting_nodeids.add(lease.nodeid)
1427-        f.close()
1428+        f = self._home.open('rb+')
1429+        try:
1430+            for (leasenum, lease) in self._enumerate_leases(f):
1431+                if constant_time_compare(lease.renew_secret, renew_secret):
1432+                    # yup. See if we need to update the owner time.
1433+                    if new_expire_time > lease.expiration_time:
1434+                        # yes
1435+                        lease.expiration_time = new_expire_time
1436+                        self._write_lease_record(f, leasenum, lease)
1437+                    return
1438+                accepting_nodeids.add(lease.nodeid)
1439+        finally:
1440+            f.close()
1441         # Return the accepting_nodeids set, to give the client a chance to
1442hunk ./src/allmydata/storage/backends/disk/mutable.py 340
1443-        # update the leases on a share which has been migrated from its
1444+        # update the leases on a share that has been migrated from its
1445         # original server to a new one.
1446         msg = ("Unable to renew non-existent lease. I have leases accepted by"
1447                " nodeids: ")
1448hunk ./src/allmydata/storage/backends/disk/mutable.py 357
1449         except IndexError:
1450             self.add_lease(lease_info)
1451 
1452-    def cancel_lease(self, cancel_secret):
1453-        """Remove any leases with the given cancel_secret. If the last lease
1454-        is cancelled, the file will be removed. Return the number of bytes
1455-        that were freed (by truncating the list of leases, and possibly by
1456-        deleting the file. Raise IndexError if there was no lease with the
1457-        given cancel_secret."""
1458-
1459-        accepting_nodeids = set()
1460-        modified = 0
1461-        remaining = 0
1462-        blank_lease = LeaseInfo(owner_num=0,
1463-                                renew_secret="\x00"*32,
1464-                                cancel_secret="\x00"*32,
1465-                                expiration_time=0,
1466-                                nodeid="\x00"*20)
1467-        f = open(self.home, 'rb+')
1468-        for (leasenum,lease) in self._enumerate_leases(f):
1469-            accepting_nodeids.add(lease.nodeid)
1470-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1471-                self._write_lease_record(f, leasenum, blank_lease)
1472-                modified += 1
1473-            else:
1474-                remaining += 1
1475-        if modified:
1476-            freed_space = self._pack_leases(f)
1477-            f.close()
1478-            if not remaining:
1479-                freed_space += os.stat(self.home)[stat.ST_SIZE]
1480-                self.unlink()
1481-            return freed_space
1482-
1483-        msg = ("Unable to cancel non-existent lease. I have leases "
1484-               "accepted by nodeids: ")
1485-        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
1486-                         for anid in accepting_nodeids])
1487-        msg += " ."
1488-        raise IndexError(msg)
1489-
1490-    def _pack_leases(self, f):
1491-        # TODO: reclaim space from cancelled leases
1492-        return 0
1493-
1494     def _read_write_enabler_and_nodeid(self, f):
1495         f.seek(0)
1496         data = f.read(self.HEADER_SIZE)
1497hunk ./src/allmydata/storage/backends/disk/mutable.py 369
1498 
1499     def readv(self, readv):
1500         datav = []
1501-        f = open(self.home, 'rb')
1502-        for (offset, length) in readv:
1503-            datav.append(self._read_share_data(f, offset, length))
1504-        f.close()
1505+        f = self._home.open('rb')
1506+        try:
1507+            for (offset, length) in readv:
1508+                datav.append(self._read_share_data(f, offset, length))
1509+        finally:
1510+            f.close()
1511         return datav
1512 
1513hunk ./src/allmydata/storage/backends/disk/mutable.py 377
1514-#    def remote_get_length(self):
1515-#        f = open(self.home, 'rb')
1516-#        data_length = self._read_data_length(f)
1517-#        f.close()
1518-#        return data_length
1519+    def get_size(self):
1520+        return self._home.getsize()
1521+
1522+    def get_data_length(self):
1523+        f = self._home.open('rb')
1524+        try:
1525+            data_length = self._read_data_length(f)
1526+        finally:
1527+            f.close()
1528+        return data_length
1529 
1530     def check_write_enabler(self, write_enabler, si_s):
1531hunk ./src/allmydata/storage/backends/disk/mutable.py 389
1532-        f = open(self.home, 'rb+')
1533-        (real_write_enabler, write_enabler_nodeid) = \
1534-                             self._read_write_enabler_and_nodeid(f)
1535-        f.close()
1536+        f = self._home.open('rb+')
1537+        try:
1538+            (real_write_enabler, write_enabler_nodeid) = self._read_write_enabler_and_nodeid(f)
1539+        finally:
1540+            f.close()
1541         # avoid a timing attack
1542         #if write_enabler != real_write_enabler:
1543         if not constant_time_compare(write_enabler, real_write_enabler):
1544hunk ./src/allmydata/storage/backends/disk/mutable.py 410
1545 
1546     def check_testv(self, testv):
1547         test_good = True
1548-        f = open(self.home, 'rb+')
1549-        for (offset, length, operator, specimen) in testv:
1550-            data = self._read_share_data(f, offset, length)
1551-            if not testv_compare(data, operator, specimen):
1552-                test_good = False
1553-                break
1554-        f.close()
1555+        f = self._home.open('rb+')
1556+        try:
1557+            for (offset, length, operator, specimen) in testv:
1558+                data = self._read_share_data(f, offset, length)
1559+                if not testv_compare(data, operator, specimen):
1560+                    test_good = False
1561+                    break
1562+        finally:
1563+            f.close()
1564         return test_good
1565 
1566     def writev(self, datav, new_length):
1567hunk ./src/allmydata/storage/backends/disk/mutable.py 422
1568-        f = open(self.home, 'rb+')
1569-        for (offset, data) in datav:
1570-            self._write_share_data(f, offset, data)
1571-        if new_length is not None:
1572-            cur_length = self._read_data_length(f)
1573-            if new_length < cur_length:
1574-                self._write_data_length(f, new_length)
1575-                # TODO: if we're going to shrink the share file when the
1576-                # share data has shrunk, then call
1577-                # self._change_container_size() here.
1578-        f.close()
1579-
1580-def testv_compare(a, op, b):
1581-    assert op in ("lt", "le", "eq", "ne", "ge", "gt")
1582-    if op == "lt":
1583-        return a < b
1584-    if op == "le":
1585-        return a <= b
1586-    if op == "eq":
1587-        return a == b
1588-    if op == "ne":
1589-        return a != b
1590-    if op == "ge":
1591-        return a >= b
1592-    if op == "gt":
1593-        return a > b
1594-    # never reached
1595+        f = self._home.open('rb+')
1596+        try:
1597+            for (offset, data) in datav:
1598+                self._write_share_data(f, offset, data)
1599+            if new_length is not None:
1600+                cur_length = self._read_data_length(f)
1601+                if new_length < cur_length:
1602+                    self._write_data_length(f, new_length)
1603+                    # TODO: if we're going to shrink the share file when the
1604+                    # share data has shrunk, then call
1605+                    # self._change_container_size() here.
1606+        finally:
1607+            f.close()
1608 
1609hunk ./src/allmydata/storage/backends/disk/mutable.py 436
1610-class EmptyShare:
1611+    def close(self):
1612+        pass
1613 
1614hunk ./src/allmydata/storage/backends/disk/mutable.py 439
1615-    def check_testv(self, testv):
1616-        test_good = True
1617-        for (offset, length, operator, specimen) in testv:
1618-            data = ""
1619-            if not testv_compare(data, operator, specimen):
1620-                test_good = False
1621-                break
1622-        return test_good
1623 
1624hunk ./src/allmydata/storage/backends/disk/mutable.py 440
1625-def create_mutable_sharefile(filename, my_nodeid, write_enabler, parent):
1626-    ms = MutableShareFile(filename, parent)
1627-    ms.create(my_nodeid, write_enabler)
1628+def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent):
1629+    ms = MutableDiskShare(storageindex, shnum, fp, parent)
1630+    ms.create(serverid, write_enabler)
1631     del ms
1632hunk ./src/allmydata/storage/backends/disk/mutable.py 444
1633-    return MutableShareFile(filename, parent)
1634-
1635+    return MutableDiskShare(storageindex, shnum, fp, parent)
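
The test-vector semantics implemented by check_testv above can be sketched
as follows, using the testv_compare helper that this patch moves to
allmydata.storage.backends.base; 'read' stands in for any callable that
returns share data for an (offset, length) pair:

    from allmydata.storage.backends.base import testv_compare

    # each element of a test vector is (offset, length, operator, specimen);
    # a share passes only if every comparison succeeds
    testv = [(0, 5, "eq", "hello"),   # bytes 0-4 must equal "hello"
             (5, 1, "ne", "\x00")]    # byte 5 must be non-zero

    def passes(read, testv):
        for (offset, length, operator, specimen) in testv:
            if not testv_compare(read(offset, length), operator, specimen):
                return False
        return True
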
1636addfile ./src/allmydata/storage/backends/null/__init__.py
1637addfile ./src/allmydata/storage/backends/null/null_backend.py
1638hunk ./src/allmydata/storage/backends/null/null_backend.py 2
1639 
1640+import os, struct
1641+
1642+from zope.interface import implements
1643+
1644+from allmydata.interfaces import IStorageBackend, IShareSet, IStoredShare, IStoredMutableShare
1645+from allmydata.util.assertutil import precondition
1646+from allmydata.util.hashutil import constant_time_compare
1647+from allmydata.storage.backends.base import Backend, ShareSet
1648+from allmydata.storage.bucket import BucketWriter
1649+from allmydata.storage.common import si_b2a
1650+from allmydata.storage.lease import LeaseInfo
1651+
1652+
1653+class NullBackend(Backend):
1654+    implements(IStorageBackend)
1655+
1656+    def __init__(self):
1657+        Backend.__init__(self)
1658+
1659+    def get_available_space(self, reserved_space):
1660+        return None
1661+
1662+    def get_sharesets_for_prefix(self, prefix):
1663+        pass
1664+
1665+    def get_shareset(self, storageindex):
1666+        return NullShareSet(storageindex)
1667+
1668+    def fill_in_space_stats(self, stats):
1669+        pass
1670+
1671+    def set_storage_server(self, ss):
1672+        self.ss = ss
1673+
1674+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
1675+        pass
1676+
1677+
1678+class NullShareSet(ShareSet):
1679+    implements(IShareSet)
1680+
1681+    def __init__(self, storageindex):
1682+        self.storageindex = storageindex
1683+
1684+    def get_overhead(self):
1685+        return 0
1686+
1687+    def get_incoming_shnums(self):
1688+        return frozenset()
1689+
1690+    def get_shares(self):
1691+        pass
1692+
1693+    def get_share(self, shnum):
1694+        return None
1695+
1696+    def get_storage_index(self):
1697+        return self.storageindex
1698+
1699+    def get_storage_index_string(self):
1700+        return si_b2a(self.storageindex)
1701+
1702+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
1703+        immutableshare = ImmutableNullShare()
1704+        return BucketWriter(storageserver, immutableshare, max_space_per_bucket, lease_info, canary)
1705+
1706+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
1707+        return MutableNullShare()
1708+
1709+    def _clean_up_after_unlink(self):
1710+        pass
1711+
1712+
1713+class ImmutableNullShare:
1714+    implements(IStoredShare)
1715+    sharetype = "immutable"
1716+
1717+    def __init__(self):
1718+        """ If max_size is not None then I won't allow more than
1719+        max_size to be written to me. If create=True then max_size
1720+        must not be None. """
1721+        pass
1722+
1723+    def get_shnum(self):
1724+        return self.shnum
1725+
1726+    def unlink(self):
1727+        os.unlink(self.fname)
1728+
1729+    def read_share_data(self, offset, length):
1730+        precondition(offset >= 0)
1731+        # Reads beyond the end of the data are truncated. Reads that start
1732+        # beyond the end of the data return an empty string.
1733+        seekpos = self._data_offset+offset
1734+        fsize = os.path.getsize(self.fname)
1735+        actuallength = max(0, min(length, fsize-seekpos)) # XXX #1528
1736+        if actuallength == 0:
1737+            return ""
1738+        f = open(self.fname, 'rb')
1739+        f.seek(seekpos)
1740+        return f.read(actuallength)
1741+
1742+    def write_share_data(self, offset, data):
1743+        pass
1744+
1745+    def _write_lease_record(self, f, lease_number, lease_info):
1746+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1747+        f.seek(offset)
1748+        assert f.tell() == offset
1749+        f.write(lease_info.to_immutable_data())
1750+
1751+    def _read_num_leases(self, f):
1752+        f.seek(0x08)
1753+        (num_leases,) = struct.unpack(">L", f.read(4))
1754+        return num_leases
1755+
1756+    def _write_num_leases(self, f, num_leases):
1757+        f.seek(0x08)
1758+        f.write(struct.pack(">L", num_leases))
1759+
1760+    def _truncate_leases(self, f, num_leases):
1761+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1762+
1763+    def get_leases(self):
1764+        """Yields a LeaseInfo instance for all leases."""
1765+        f = open(self.fname, 'rb')
1766+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1767+        f.seek(self._lease_offset)
1768+        for i in range(num_leases):
1769+            data = f.read(self.LEASE_SIZE)
1770+            if data:
1771+                yield LeaseInfo().from_immutable_data(data)
1772+
1773+    def add_lease(self, lease):
1774+        pass
1775+
1776+    def renew_lease(self, renew_secret, new_expire_time):
1777+        for i, lease in enumerate(self.get_leases()):
1778+            if constant_time_compare(lease.renew_secret, renew_secret):
1779+                # yup. See if we need to update the owner time.
1780+                if new_expire_time > lease.expiration_time:
1781+                    # yes
1782+                    lease.expiration_time = new_expire_time
1783+                    f = open(self.fname, 'rb+')
1784+                    self._write_lease_record(f, i, lease)
1785+                    f.close()
1786+                return
1787+        raise IndexError("unable to renew non-existent lease")
1788+
1789+    def add_or_renew_lease(self, lease_info):
1790+        try:
1791+            self.renew_lease(lease_info.renew_secret,
1792+                             lease_info.expiration_time)
1793+        except IndexError:
1794+            self.add_lease(lease_info)
1795+
1796+
1797+class MutableNullShare:
1798+    """ XXX: TODO """
1799+    implements(IStoredMutableShare)
1800+    sharetype = "mutable"
1801+
1802addfile ./src/allmydata/storage/bucket.py
1803hunk ./src/allmydata/storage/bucket.py 1
1804+
1805+import time
1806+
1807+from foolscap.api import Referenceable
1808+
1809+from zope.interface import implements
1810+from allmydata.interfaces import RIBucketWriter, RIBucketReader
1811+from allmydata.util import base32, log
1812+from allmydata.util.assertutil import precondition
1813+
1814+
1815+class BucketWriter(Referenceable):
1816+    implements(RIBucketWriter)
1817+
1818+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1819+        self.ss = ss
1820+        self._max_size = max_size # don't allow the client to write more than this
1821+        self._canary = canary
1822+        self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1823+        self.closed = False
1824+        self.throw_out_all_data = False
1825+        self._share = immutableshare
1826+        # also, add our lease to the file now, so that other ones can be
1827+        # added by simultaneous uploaders
1828+        self._share.add_lease(lease_info)
1829+
1830+    def allocated_size(self):
1831+        return self._max_size
1832+
1833+    def remote_write(self, offset, data):
1834+        start = time.time()
1835+        precondition(not self.closed)
1836+        if self.throw_out_all_data:
1837+            return
1838+        self._share.write_share_data(offset, data)
1839+        self.ss.add_latency("write", time.time() - start)
1840+        self.ss.count("write")
1841+
1842+    def remote_close(self):
1843+        precondition(not self.closed)
1844+        start = time.time()
1845+
1846+        self._share.close()
1847+        filelen = self._share.stat()
1848+        self._share = None
1849+
1850+        self.closed = True
1851+        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
1852+
1853+        self.ss.bucket_writer_closed(self, filelen)
1854+        self.ss.add_latency("close", time.time() - start)
1855+        self.ss.count("close")
1856+
1857+    def _disconnected(self):
1858+        if not self.closed:
1859+            self._abort()
1860+
1861+    def remote_abort(self):
1862+        log.msg("storage: aborting write to share %r" % self._share,
1863+                facility="tahoe.storage", level=log.UNUSUAL)
1864+        if not self.closed:
1865+            self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
1866+        self._abort()
1867+        self.ss.count("abort")
1868+
1869+    def _abort(self):
1870+        if self.closed:
1871+            return
1872+        self._share.unlink()
1873+        self._share = None
1874+
1875+        # We are now considered closed for further writing. We must tell
1876+        # the storage server about this so that it stops expecting us to
1877+        # use the space it allocated for us earlier.
1878+        self.closed = True
1879+        self.ss.bucket_writer_closed(self, 0)
1880+
1881+
1882+class BucketReader(Referenceable):
1883+    implements(RIBucketReader)
1884+
1885+    def __init__(self, ss, share):
1886+        self.ss = ss
1887+        self._share = share
1888+        self.storageindex = share.storageindex
1889+        self.shnum = share.shnum
1890+
1891+    def __repr__(self):
1892+        return "<%s %s %s>" % (self.__class__.__name__,
1893+                               base32.b2a_l(self.storageindex[:8], 60),
1894+                               self.shnum)
1895+
1896+    def remote_read(self, offset, length):
1897+        start = time.time()
1898+        data = self._share.read_share_data(offset, length)
1899+        self.ss.add_latency("read", time.time() - start)
1900+        self.ss.count("read")
1901+        return data
1902+
1903+    def remote_advise_corrupt_share(self, reason):
1904+        return self.ss.remote_advise_corrupt_share("immutable",
1905+                                                   self.storageindex,
1906+                                                   self.shnum,
1907+                                                   reason)
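
A rough sketch of the BucketWriter lifecycle defined above, in which ss,
share, lease, and canary stand in for a StorageServer, an IStoredShare, a
LeaseInfo, and a foolscap canary:

    bw = BucketWriter(ss, share, 85, lease, canary)  # the lease is added up front
    bw.remote_write(0, "a")   # passes through to share.write_share_data
    bw.remote_close()         # closes the share and reports its length to ss
    # remote_abort() (or a lost canary) would instead unlink the share and
    # report zero bytes via ss.bucket_writer_closed(bw, 0)
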
1908addfile ./src/allmydata/test/test_backends.py
1909hunk ./src/allmydata/test/test_backends.py 1
1910+import os, stat
1911+from twisted.trial import unittest
1912+from allmydata.util.log import msg
1913+from allmydata.test.common_util import ReallyEqualMixin
1914+import mock
1915+
1916+# This is the code that we're going to be testing.
1917+from allmydata.storage.server import StorageServer
1918+from allmydata.storage.backends.disk.disk_backend import DiskBackend, si_si2dir
1919+from allmydata.storage.backends.null.null_backend import NullBackend
1920+
1921+# The following share file content was generated with
1922+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1923+# with share data == 'a'. The total size of this input
1924+# is 85 bytes.
1925+shareversionnumber = '\x00\x00\x00\x01'
1926+sharedatalength = '\x00\x00\x00\x01'
1927+numberofleases = '\x00\x00\x00\x01'
1928+shareinputdata = 'a'
1929+ownernumber = '\x00\x00\x00\x00'
1930+renewsecret  = 'x'*32
1931+cancelsecret = 'y'*32
1932+expirationtime = '\x00(\xde\x80'
1933+nextlease = ''
1934+containerdata = shareversionnumber + sharedatalength + numberofleases
1935+client_data = shareinputdata + ownernumber + renewsecret + \
1936+    cancelsecret + expirationtime + nextlease
1937+share_data = containerdata + client_data
1938+testnodeid = 'testnodeidxxxxxxxxxx'
1939+
1940+
1941+class MockFileSystem(unittest.TestCase):
1942+    """ I simulate a filesystem that the code under test can use. I simulate
1943+    just the parts of the filesystem that the current implementation of Disk
1944+    backend needs. """
1945+    def setUp(self):
1946+        # Make patcher, patch, and effects for disk-using functions.
1947+        msg( "%s.setUp()" % (self,))
1948+        self.mockedfilepaths = {}
1949+        # keys are pathnames, values are MockFilePath objects. This is necessary because
1950+        # MockFilePath behavior sometimes depends on the filesystem. Where it does,
1951+        # self.mockedfilepaths has the relevant information.
1952+        self.storedir = MockFilePath('teststoredir', self.mockedfilepaths)
1953+        self.basedir = self.storedir.child('shares')
1954+        self.baseincdir = self.basedir.child('incoming')
1955+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
1956+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
1957+        self.shareincomingname = self.sharedirincomingname.child('0')
1958+        self.sharefinalname = self.sharedirfinalname.child('0')
1959+
1960+        # FIXME: these patches won't work; disk_backend no longer imports FilePath, BucketCountingCrawler,
1961+        # or LeaseCheckingCrawler.
1962+
1963+        self.FilePathFake = mock.patch('allmydata.storage.backends.disk.disk_backend.FilePath', new = MockFilePath)
1964+        self.FilePathFake.__enter__()
1965+
1966+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.BucketCountingCrawler')
1967+        FakeBCC = self.BCountingCrawler.__enter__()
1968+        FakeBCC.side_effect = self.call_FakeBCC
1969+
1970+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.LeaseCheckingCrawler')
1971+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
1972+        FakeLCC.side_effect = self.call_FakeLCC
1973+
1974+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
1975+        GetSpace = self.get_available_space.__enter__()
1976+        GetSpace.side_effect = self.call_get_available_space
1977+
1978+        self.statforsize = mock.patch('allmydata.storage.backends.disk.core.filepath.stat')
1979+        getsize = self.statforsize.__enter__()
1980+        getsize.side_effect = self.call_statforsize
1981+
1982+    def call_FakeBCC(self, StateFile):
1983+        return MockBCC()
1984+
1985+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
1986+        return MockLCC()
1987+
1988+    def call_get_available_space(self, storedir, reservedspace):
1989+        # The input vector has an input size of 85.
1990+        return 85 - reservedspace
1991+
1992+    def call_statforsize(self, fakefpname):
1993+        return self.mockedfilepaths[fakefpname].fileobject.size()
1994+
1995+    def tearDown(self):
1996+        msg( "%s.tearDown()" % (self,))
1997+        self.FilePathFake.__exit__()
1998+        self.mockedfilepaths = {}
1999+
2000+
2001+class MockFilePath:
2002+    def __init__(self, pathstring, ffpathsenvironment, existence=False):
2003+        #  I can't just make the values MockFileObjects because they may be directories.
2004+        self.mockedfilepaths = ffpathsenvironment
2005+        self.path = pathstring
2006+        self.existence = existence
2007+        if not self.mockedfilepaths.has_key(self.path):
2008+            #  The first MockFilePath object is special
2009+            self.mockedfilepaths[self.path] = self
2010+            self.fileobject = None
2011+        else:
2012+            self.fileobject = self.mockedfilepaths[self.path].fileobject
2013+        self.spawn = {}
2014+        self.antecedent = os.path.dirname(self.path)
2015+
2016+    def setContent(self, contentstring):
2017+        # This method rewrites the data in the file that corresponds to its path
2018+        # name whether it preexisted or not.
2019+        self.fileobject = MockFileObject(contentstring)
2020+        self.existence = True
2021+        self.mockedfilepaths[self.path].fileobject = self.fileobject
2022+        self.mockedfilepaths[self.path].existence = self.existence
2023+        self.setparents()
2024+
2025+    def create(self):
2026+        # This method chokes if there's a pre-existing file!
2027+        if self.mockedfilepaths[self.path].fileobject:
2028+            raise OSError
2029+        else:
2030+            self.existence = True
2031+            self.mockedfilepaths[self.path].fileobject = self.fileobject
2032+            self.mockedfilepaths[self.path].existence = self.existence
2033+            self.setparents()
2034+
2035+    def open(self, mode='r'):
2036+        # XXX Makes no use of mode.
2037+        if not self.mockedfilepaths[self.path].fileobject:
2038+            # If there's no fileobject there already then make one and put it there.
2039+            self.fileobject = MockFileObject()
2040+            self.existence = True
2041+            self.mockedfilepaths[self.path].fileobject = self.fileobject
2042+            self.mockedfilepaths[self.path].existence = self.existence
2043+        else:
2044+            # Otherwise get a ref to it.
2045+            self.fileobject = self.mockedfilepaths[self.path].fileobject
2046+            self.existence = self.mockedfilepaths[self.path].existence
2047+        return self.fileobject.open(mode)
2048+
2049+    def child(self, childstring):
2050+        arg2child = os.path.join(self.path, childstring)
2051+        child = MockFilePath(arg2child, self.mockedfilepaths)
2052+        return child
2053+
2054+    def children(self):
2055+        childrenfromffs = [ffp for ffp in self.mockedfilepaths.values() if ffp.path.startswith(self.path)]
2056+        childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)]
2057+        childrenfromffs = [ffp for ffp in childrenfromffs if ffp.exists()]
2058+        self.spawn = frozenset(childrenfromffs)
2059+        return self.spawn
2060+
2061+    def parent(self):
2062+        if self.mockedfilepaths.has_key(self.antecedent):
2063+            parent = self.mockedfilepaths[self.antecedent]
2064+        else:
2065+            parent = MockFilePath(self.antecedent, self.mockedfilepaths)
2066+        return parent
2067+
2068+    def parents(self):
2069+        antecedents = []
2070+        def f(fps, antecedents):
2071+            newfps = os.path.split(fps)[0]
2072+            if newfps:
2073+                antecedents.append(newfps)
2074+                f(newfps, antecedents)
2075+        f(self.path, antecedents)
2076+        return antecedents
2077+
2078+    def setparents(self):
2079+        for fps in self.parents():
2080+            if not self.mockedfilepaths.has_key(fps):
2081+                self.mockedfilepaths[fps] = MockFilePath(fps, self.mockedfilepaths, existence=True)
2082+
2083+    def basename(self):
2084+        return os.path.split(self.path)[1]
2085+
2086+    def moveTo(self, newffp):
2087+        #  XXX Makes no distinction between file and directory arguments; this is a deviation from filepath.moveTo.
2088+        if self.mockedfilepaths[newffp.path].exists():
2089+            raise OSError
2090+        else:
2091+            self.mockedfilepaths[newffp.path] = self
2092+            self.path = newffp.path
2093+
2094+    def getsize(self):
2095+        return self.fileobject.getsize()
2096+
2097+    def exists(self):
2098+        return self.existence
2099+
2100+    def isdir(self):
2101+        return True
2102+
2103+    def makedirs(self):
2104+        # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
2105+        pass
2106+
2107+    def remove(self):
2108+        pass
2109+
2110+
2111+class MockFileObject:
2112+    def __init__(self, contentstring=''):
2113+        self.buffer = contentstring
2114+        self.pos = 0
2115+    def open(self, mode='r'):
2116+        return self
2117+    def write(self, instring):
2118+        begin = self.pos
2119+        padlen = begin - len(self.buffer)
2120+        if padlen > 0:
2121+            self.buffer += '\x00' * padlen
2122+        end = self.pos + len(instring)
2123+        self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
2124+        self.pos = end
2125+    def close(self):
2126+        self.pos = 0
2127+    def seek(self, pos):
2128+        self.pos = pos
2129+    def read(self, numberbytes):
2130+        return self.buffer[self.pos:self.pos+numberbytes]
2131+    def tell(self):
2132+        return self.pos
2133+    def size(self):
2134+        # XXX This method A: Is not to be found in a real file B: Is part of a wild-mung-up of filepath.stat!
2135+        # XXX Finally we shall hopefully use a getsize method soon, must consult first though.
2136+        # Hmmm...  perhaps we need to sometimes stat the address when there's not a mockfileobject present?
2137+        return {stat.ST_SIZE:len(self.buffer)}
2138+    def getsize(self):
2139+        return len(self.buffer)
2140+
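
For orientation, the mock filesystem above is meant to be driven along these
lines; the path and contents are arbitrary placeholders:

    fps = {}                               # the shared 'filesystem' state
    root = MockFilePath('teststoredir', fps)
    share = root.child('shares').child('0')
    share.setContent('some share bytes')   # creates the file and its parents
    f = share.open('rb')
    assert f.read(10) == 'some share'
    assert share.getsize() == len('some share bytes')
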
2141+class MockBCC:
2142+    def setServiceParent(self, Parent):
2143+        pass
2144+
2145+
2146+class MockLCC:
2147+    def setServiceParent(self, Parent):
2148+        pass
2149+
2150+
2151+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
2152+    """ NullBackend is just for testing and executable documentation, so
2153+    this test is actually a test of StorageServer in which we're using
2154+    NullBackend as helper code for the test, rather than a test of
2155+    NullBackend. """
2156+    def setUp(self):
2157+        self.ss = StorageServer(testnodeid, NullBackend())
2158+
2159+    @mock.patch('os.mkdir')
2160+    @mock.patch('__builtin__.open')
2161+    @mock.patch('os.listdir')
2162+    @mock.patch('os.path.isdir')
2163+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2164+        """
2165+        Write a new share. This tests that StorageServer's remote_allocate_buckets
2166+        generates the correct return types when given test-vector arguments. That
2167+        bs is of the correct type is verified by attempting to invoke remote_write
2168+        on bs[0].
2169+        """
2170+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2171+        bs[0].remote_write(0, 'a')
2172+        self.failIf(mockisdir.called)
2173+        self.failIf(mocklistdir.called)
2174+        self.failIf(mockopen.called)
2175+        self.failIf(mockmkdir.called)
2176+
2177+
2178+class TestServerConstruction(MockFileSystem, ReallyEqualMixin):
2179+    def test_create_server_disk_backend(self):
2180+        """ This tests whether a server instance can be constructed with a
2181+        filesystem backend. To pass the test, it mustn't use the filesystem
2182+        outside of its configured storedir. """
2183+        StorageServer(testnodeid, DiskBackend(self.storedir))
2184+
2185+
2186+class TestServerAndDiskBackend(MockFileSystem, ReallyEqualMixin):
2187+    """ This tests both the StorageServer and the Disk backend together. """
2188+    def setUp(self):
2189+        MockFileSystem.setUp(self)
2190+        try:
2191+            self.backend = DiskBackend(self.storedir)
2192+            self.ss = StorageServer(testnodeid, self.backend)
2193+
2194+            self.backendwithreserve = DiskBackend(self.storedir, reserved_space = 1)
2195+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
2196+        except:
2197+            MockFileSystem.tearDown(self)
2198+            raise
2199+
2200+    @mock.patch('time.time')
2201+    @mock.patch('allmydata.util.fileutil.get_available_space')
2202+    def test_out_of_space(self, mockget_available_space, mocktime):
2203+        mocktime.return_value = 0
2204+
2205+        def call_get_available_space(dir, reserve):
2206+            return 0
2207+
2208+        mockget_available_space.side_effect = call_get_available_space
2209+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2210+        self.failUnlessReallyEqual(bsc, {})
2211+
2212+    @mock.patch('time.time')
2213+    def test_write_and_read_share(self, mocktime):
2214+        """
2215+        Write a new share, read it, and test the server's (and disk backend's)
2216+        handling of simultaneous and successive attempts to write the same
2217+        share.
2218+        """
2219+        mocktime.return_value = 0
2220+        # Inspect incoming and fail unless it's empty.
2221+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
2222+
2223+        self.failUnlessReallyEqual(incomingset, frozenset())
2224+
2225+        # Populate incoming with the sharenum: 0.
2226+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
2227+
2228+        # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
2229+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
2230+
2233+        # Attempt to create a second share writer with the same sharenum.
2234+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
2235+
2236+        # Show that no sharewriter results from a remote_allocate_buckets
2237+        # with the same si and sharenum, until BucketWriter.remote_close()
2238+        # has been called.
2239+        self.failIf(bsa)
2240+
2241+        # Test allocated size.
2242+        spaceint = self.ss.allocated_size()
2243+        self.failUnlessReallyEqual(spaceint, 1)
2244+
2245+        # Write 'a' to shnum 0. Only tested together with close and read.
2246+        bs[0].remote_write(0, 'a')
2247+
2248+        # Preclose: Inspect final, failUnless nothing there.
2249+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
2250+        bs[0].remote_close()
2251+
2252+        # Postclose: (Omnibus) failUnless written data is in final.
2253+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
2254+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
2255+        contents = sharesinfinal[0].read_share_data(0, 73)
2256+        self.failUnlessReallyEqual(contents, client_data)
2257+
2258+        # Exercise the case that the share we're asking to allocate is
2259+        # already (completely) uploaded.
2260+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2261+
2262+
2263+    def test_read_old_share(self):
2264+        """ This tests whether the code correctly finds and reads
2265+        shares written out by old (Tahoe-LAFS <= v1.8.2)
2266+        servers. There is a similar test in test_download, but that one
2267+        is from the perspective of the client and exercises a deeper
2268+        stack of code. This one is for exercising just the
2269+        StorageServer object. """
2270+        # Construct a file with the appropriate contents in the mockfilesystem.
2271+        datalen = len(share_data)
2272+        finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(0))
2273+        finalhome.setContent(share_data)
2274+
2275+        # Now begin the test.
2276+        bs = self.ss.remote_get_buckets('teststorage_index')
2277+
2278+        self.failUnlessEqual(len(bs), 1)
2279+        b = bs['0']
2280+        # These should match by definition; the next two cases cover reads whose behavior is not completely unambiguous.
2281+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
2282+        # If you try to read past the end, you get as much data as is there.
2283+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
2284+        # If you start reading past the end of the file you get the empty string.
2285+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2286}
2287[Pluggable backends -- all other changes. refs #999
2288david-sarah@jacaranda.org**20110919233256
2289 Ignore-this: 1a77b6b5d178b32a9b914b699ba7e957
2290] {
2291hunk ./src/allmydata/client.py 245
2292             sharetypes.append("immutable")
2293         if self.get_config("storage", "expire.mutable", True, boolean=True):
2294             sharetypes.append("mutable")
2295-        expiration_sharetypes = tuple(sharetypes)
2296 
2297hunk ./src/allmydata/client.py 246
2298+        expiration_policy = {
2299+            'enabled': expire,
2300+            'mode': mode,
2301+            'override_lease_duration': o_l_d,
2302+            'cutoff_date': cutoff_date,
2303+            'sharetypes': tuple(sharetypes),
2304+        }
2305         ss = StorageServer(storedir, self.nodeid,
2306                            reserved_space=reserved,
2307                            discard_storage=discard,
2308hunk ./src/allmydata/client.py 258
2309                            readonly_storage=readonly,
2310                            stats_provider=self.stats_provider,
2311-                           expiration_enabled=expire,
2312-                           expiration_mode=mode,
2313-                           expiration_override_lease_duration=o_l_d,
2314-                           expiration_cutoff_date=cutoff_date,
2315-                           expiration_sharetypes=expiration_sharetypes)
2316+                           expiration_policy=expiration_policy)
2317         self.add_service(ss)
2318 
2319         d = self.when_tub_ready()
2320hunk ./src/allmydata/immutable/offloaded.py 306
2321         if os.path.exists(self._encoding_file):
2322             self.log("ciphertext already present, bypassing fetch",
2323                      level=log.UNUSUAL)
2324+            # XXX the following comment is probably stale, since
2325+            # LocalCiphertextReader.get_plaintext_hashtree_leaves does not exist.
2326+            #
2327             # we'll still need the plaintext hashes (when
2328             # LocalCiphertextReader.get_plaintext_hashtree_leaves() is
2329             # called), and currently the easiest way to get them is to ask
2330hunk ./src/allmydata/immutable/upload.py 765
2331             self._status.set_progress(1, progress)
2332         return cryptdata
2333 
2334-
2335     def get_plaintext_hashtree_leaves(self, first, last, num_segments):
2336hunk ./src/allmydata/immutable/upload.py 766
2337+        """OBSOLETE; Get the leaf nodes of a merkle hash tree over the
2338+        plaintext segments, i.e. get the tagged hashes of the given segments.
2339+        The segment size is expected to be generated by the
2340+        IEncryptedUploadable before any plaintext is read or ciphertext
2341+        produced, so that the segment hashes can be generated with only a
2342+        single pass.
2343+
2344+        This returns a Deferred that fires with a sequence of hashes, using:
2345+
2346+         tuple(segment_hashes[first:last])
2347+
2348+        'num_segments' is used to assert that the number of segments that the
2349+        IEncryptedUploadable handled matches the number of segments that the
2350+        encoder was expecting.
2351+
2352+        This method must not be called until the final byte has been read
2353+        from read_encrypted(). Once this method is called, read_encrypted()
2354+        can never be called again.
2355+        """
2356         # this is currently unused, but will live again when we fix #453
2357         if len(self._plaintext_segment_hashes) < num_segments:
2358             # close out the last one
2359hunk ./src/allmydata/immutable/upload.py 803
2360         return defer.succeed(tuple(self._plaintext_segment_hashes[first:last]))
2361 
2362     def get_plaintext_hash(self):
2363+        """OBSOLETE; Get the hash of the whole plaintext.
2364+
2365+        This returns a Deferred that fires with a tagged SHA-256 hash of the
2366+        whole plaintext, obtained from hashutil.plaintext_hash(data).
2367+        """
2368+        # this is currently unused, but will live again when we fix #453
2369         h = self._plaintext_hasher.digest()
2370         return defer.succeed(h)
2371 
2372hunk ./src/allmydata/interfaces.py 29
2373 Number = IntegerConstraint(8) # 2**(8*8) == 16EiB ~= 18e18 ~= 18 exabytes
2374 Offset = Number
2375 ReadSize = int # the 'int' constraint is 2**31 == 2Gib -- large files are processed in not-so-large increments
2376-WriteEnablerSecret = Hash # used to protect mutable bucket modifications
2377-LeaseRenewSecret = Hash # used to protect bucket lease renewal requests
2378-LeaseCancelSecret = Hash # used to protect bucket lease cancellation requests
2379+WriteEnablerSecret = Hash # used to protect mutable share modifications
2380+LeaseRenewSecret = Hash # used to protect lease renewal requests
2381+LeaseCancelSecret = Hash # used to protect lease cancellation requests
2382 
2383 class RIStubClient(RemoteInterface):
2384     """Each client publishes a service announcement for a dummy object called
2385hunk ./src/allmydata/interfaces.py 106
2386                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2387                          allocated_size=Offset, canary=Referenceable):
2388         """
2389-        @param storage_index: the index of the bucket to be created or
2390+        @param storage_index: the index of the shareset to be created or
2391                               increfed.
2392         @param sharenums: these are the share numbers (probably between 0 and
2393                           99) that the sender is proposing to store on this
2394hunk ./src/allmydata/interfaces.py 111
2395                           server.
2396-        @param renew_secret: This is the secret used to protect bucket refresh
2397+        @param renew_secret: This is the secret used to protect lease renewal.
2398                              This secret is generated by the client and
2399                              stored for later comparison by the server. Each
2400                              server is given a different secret.
2401hunk ./src/allmydata/interfaces.py 115
2402-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2403-        @param canary: If the canary is lost before close(), the bucket is
2404+        @param cancel_secret: ignored
2405+        @param canary: If the canary is lost before close(), the allocation is
2406                        deleted.
2407         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2408                  already have and allocated is what we hereby agree to accept.
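
The tests later in this patch exercise this interface server-side with
placeholder values, along these lines (ss is a StorageServer instance):

    import mock

    canary = mock.Mock()
    alreadygot, bucketwriters = ss.remote_allocate_buckets(
        'teststorage_index',  # storage_index (a binary SI in real use)
        'x'*32,               # renew_secret
        'y'*32,               # cancel_secret, now ignored
        set((0,)),            # sharenums proposed for upload
        1,                    # allocated_size in bytes
        canary)
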
2409hunk ./src/allmydata/interfaces.py 129
2410                   renew_secret=LeaseRenewSecret,
2411                   cancel_secret=LeaseCancelSecret):
2412         """
2413-        Add a new lease on the given bucket. If the renew_secret matches an
2414+        Add a new lease on the given shareset. If the renew_secret matches an
2415         existing lease, that lease will be renewed instead. If there is no
2416hunk ./src/allmydata/interfaces.py 131
2417-        bucket for the given storage_index, return silently. (note that in
2418+        shareset for the given storage_index, return silently. (Note that in
2419         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2420hunk ./src/allmydata/interfaces.py 133
2421-        bucket)
2422+        shareset.)
2423         """
2424         return Any() # returns None now, but future versions might change
2425 
2426hunk ./src/allmydata/interfaces.py 139
2427     def renew_lease(storage_index=StorageIndex, renew_secret=LeaseRenewSecret):
2428         """
2429-        Renew the lease on a given bucket, resetting the timer to 31 days.
2430-        Some networks will use this, some will not. If there is no bucket for
2431+        Renew the lease on a given shareset, resetting the timer to 31 days.
2432+        Some networks will use this, some will not. If there is no shareset for
2433         the given storage_index, IndexError will be raised.
2434 
2435         For mutable shares, if the given renew_secret does not match an
2436hunk ./src/allmydata/interfaces.py 146
2437         existing lease, IndexError will be raised with a note listing the
2438         server-nodeids on the existing leases, so leases on migrated shares
2439-        can be renewed or cancelled. For immutable shares, IndexError
2440-        (without the note) will be raised.
2441+        can be renewed. For immutable shares, IndexError (without the note)
2442+        will be raised.
2443         """
2444         return Any()
2445 
2446hunk ./src/allmydata/interfaces.py 154
2447     def get_buckets(storage_index=StorageIndex):
2448         return DictOf(int, RIBucketReader, maxKeys=MAX_BUCKETS)
2449 
2450-
2451-
2452     def slot_readv(storage_index=StorageIndex,
2453                    shares=ListOf(int), readv=ReadVector):
2454         """Read a vector from the numbered shares associated with the given
2455hunk ./src/allmydata/interfaces.py 163
2456 
2457     def slot_testv_and_readv_and_writev(storage_index=StorageIndex,
2458                                         secrets=TupleOf(WriteEnablerSecret,
2459-                                                        LeaseRenewSecret,
2460-                                                        LeaseCancelSecret),
2461+                                                        LeaseRenewSecret),
2462                                         tw_vectors=TestAndWriteVectorsForShares,
2463                                         r_vector=ReadVector,
2464                                         ):
2465hunk ./src/allmydata/interfaces.py 167
2466-        """General-purpose test-and-set operation for mutable slots. Perform
2467-        a bunch of comparisons against the existing shares. If they all pass,
2468-        then apply a bunch of write vectors to those shares. Then use the
2469-        read vectors to extract data from all the shares and return the data.
2470+        """
2471+        General-purpose atomic test-read-and-set operation for mutable slots.
2472+        Perform a bunch of comparisons against the existing shares. If they
2473+        all pass: use the read vectors to extract data from all the shares,
2474+        then apply a bunch of write vectors to those shares. Return the read
2475+        data, which does not include any modifications made by the writes.
2476 
2477         This method is, um, large. The goal is to allow clients to update all
2478         the shares associated with a mutable file in a single round trip.
2479hunk ./src/allmydata/interfaces.py 177
2480 
2481-        @param storage_index: the index of the bucket to be created or
2482+        @param storage_index: the index of the shareset to be created or
2483                               increfed.
2484         @param write_enabler: a secret that is stored along with the slot.
2485                               Writes are accepted from any caller who can
2486hunk ./src/allmydata/interfaces.py 183
2487                               present the matching secret. A different secret
2488                               should be used for each slot*server pair.
2489-        @param renew_secret: This is the secret used to protect bucket refresh
2490+        @param renew_secret: This is the secret used to protect lease renewal.
2491                              This secret is generated by the client and
2492                              stored for later comparison by the server. Each
2493                              server is given a different secret.
2494hunk ./src/allmydata/interfaces.py 187
2495-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2496+        @param cancel_secret: ignored
2497 
2498hunk ./src/allmydata/interfaces.py 189
2499-        The 'secrets' argument is a tuple of (write_enabler, renew_secret,
2500-        cancel_secret). The first is required to perform any write. The
2501-        latter two are used when allocating new shares. To simply acquire a
2502-        new lease on existing shares, use an empty testv and an empty writev.
2503+        The 'secrets' argument is a tuple with (write_enabler, renew_secret).
2504+        The write_enabler is required to perform any write. The renew_secret
2505+        is used when allocating new shares.
2506 
2507         Each share can have a separate test vector (i.e. a list of
2508         comparisons to perform). If all vectors for all shares pass, then all
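To make the calling convention concrete, here is a hedged client-side sketch (all variable names are placeholders; 'server' is assumed to be a foolscap RemoteReference to an RIStorageServer, and the tuple formats follow the TestVector/DataVector/ReadVector definitions elsewhere in this file):

    # Test that 8 bytes at offset 0 match, overwrite them, and read them back.
    secrets = (write_enabler, renew_secret)   # no cancel_secret, per this patch
    tw_vectors = {
        0: ([(0, 8, 'eq', expected_prefix)],  # test vector
            [(0, new_prefix)],                # write vector
            None),                            # new_length: leave size unchanged
    }
    read_vector = [(0, 8)]
    d = server.callRemote("slot_testv_and_readv_and_writev",
                          storage_index, secrets, tw_vectors, read_vector)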
2509hunk ./src/allmydata/interfaces.py 280
2510         store that on disk.
2511         """
2512 
2513-class IStorageBucketWriter(Interface):
2514+
2515+class IStorageBackend(Interface):
2516     """
2517hunk ./src/allmydata/interfaces.py 283
2518-    Objects of this kind live on the client side.
2519+    Objects of this kind live on the server side and are used by the
2520+    storage server object.
2521     """
2522hunk ./src/allmydata/interfaces.py 286
2523-    def put_block(segmentnum=int, data=ShareData):
2524-        """@param data: For most segments, this data will be 'blocksize'
2525-        bytes in length. The last segment might be shorter.
2526-        @return: a Deferred that fires (with None) when the operation completes
2527+    def get_available_space():
2528+        """
2529+        Returns available space for share storage in bytes, or
2530+        None if this information is not available or if the available
2531+        space is unlimited.
2532+
2533+        If the backend is configured for read-only mode then this will
2534+        return 0.
2535+        """
2536+
2537+    def get_sharesets_for_prefix(prefix):
2538+        """
2539+        Generates IShareSet objects for all storage indices matching the
2540+        given prefix for which this backend holds shares.
2541+        """
2542+
2543+    def get_shareset(storageindex):
2544+        """
2545+        Get an IShareSet object for the given storage index.
2546+        """
2547+
2548+    def advise_corrupt_share(storageindex, sharetype, shnum, reason):
2549+        """
2550+        Clients who discover hash failures in shares that they have
2551+        downloaded from me will use this method to inform me about the
2552+        failures. I will record their concern so that my operator can
2553+        manually inspect the shares in question.
2554+
2555+        'sharetype' is either 'mutable' or 'immutable'. 'shnum' is the integer
2556+        share number. 'reason' is a human-readable explanation of the problem,
2557+        probably including some expected hash values and the computed ones
2558+        that did not match. Corruption advisories for mutable shares should
2559+        include a hash of the public key (the same value that appears in the
2560+        mutable-file verify-cap), since the current share format does not
2561+        store that on disk.
2562+
2563+        @param storageindex=str
2564+        @param sharetype=str
2565+        @param shnum=int
2566+        @param reason=str
2567+        """
2568+
2569+
2570+class IShareSet(Interface):
2571+    def get_storage_index():
2572+        """
2573+        Returns the storage index for this shareset.
2574+        """
2575+
2576+    def get_storage_index_string():
2577+        """
2578+        Returns the base32-encoded storage index for this shareset.
2579+        """
2580+
2581+    def get_overhead():
2582+        """
2583+        Returns the storage overhead, in bytes, of this shareset (exclusive
2584+        of the space used by its shares).
2585+        """
2586+
2587+    def get_shares():
2588+        """
2589+        Generates the IStoredShare objects held in this shareset.
2590+        """
2591+
2592+    def has_incoming(shnum):
2593+        """
2594+        Returns True if this shareset has an incoming (partial) share with this number, otherwise False.
2595+        """
2596+
2597+    def make_bucket_writer(storageserver, shnum, max_space_per_bucket, lease_info, canary):
2598+        """
2599+        Create a bucket writer that can be used to write data to a given share.
2600+
2601+        @param storageserver=RIStorageServer
2602+        @param shnum=int: A share number in this shareset
2603+        @param max_space_per_bucket=int: The maximum space allocated for the
2604+                 share, in bytes
2605+        @param lease_info=LeaseInfo: The initial lease information
2606+        @param canary=Referenceable: If the canary is lost before close(), the
2607+                 bucket is deleted.
2608+        @return: an IStorageBucketWriter for the given share
2609+        """
2610+
2611+    def make_bucket_reader(storageserver, share):
2612+        """
2613+        Create a bucket reader that can be used to read data from a given share.
2614+
2615+        @param storageserver=RIStorageServer
2616+        @param share=IStoredShare
2617+        @return: an IStorageBucketReader for the given share
2618+        """
2619+
2620+    def readv(wanted_shnums, read_vector):
2621+        """
2622+        Read a vector from the numbered shares in this shareset. An empty
2623+        wanted_shnums list means to return data from all known shares.
2624+
2625+        @param wanted_shnums=ListOf(int)
2626+        @param read_vector=ReadVector
2627+        @return: DictOf(int, ReadData): shnum -> results, with one key per share
2628+        """
2629+
2630+    def testv_and_readv_and_writev(storageserver, secrets, test_and_write_vectors, read_vector, expiration_time):
2631+        """
2632+        General-purpose atomic test-read-and-set operation for mutable slots.
2633+        Perform a bunch of comparisons against the existing shares in this
2634+        shareset. If they all pass: use the read vectors to extract data from
2635+        all the shares, then apply a bunch of write vectors to those shares.
2636+        Return the read data, which does not include any modifications made by
2637+        the writes.
2638+
2639+        See the similar method in RIStorageServer for more detail.
2640+
2641+        @param storageserver=RIStorageServer
2642+        @param secrets=TupleOf(WriteEnablerSecret, LeaseRenewSecret[, ...])
2643+        @param test_and_write_vectors=TestAndWriteVectorsForShares
2644+        @param read_vector=ReadVector
2645+        @param expiration_time=int
2646+        @return: TupleOf(bool, DictOf(int, ReadData))
2647+        """
2648+
2649+    def add_or_renew_lease(lease_info):
2650+        """
2651+        Add a new lease on the shares in this shareset. If the renew_secret
2652+        matches an existing lease, that lease will be renewed instead. If
2653+        there are no shares in this shareset, return silently.
2654+
2655+        @param lease_info=LeaseInfo
2656+        """
2657+
2658+    def renew_lease(renew_secret, new_expiration_time):
2659+        """
2660+        Renew a lease on the shares in this shareset, resetting the timer
2661+        to 31 days. Some grids will use this, some will not. If there are no
2662+        shares in this shareset, IndexError will be raised.
2663+
2664+        For mutable shares, if the given renew_secret does not match an
2665+        existing lease, IndexError will be raised with a note listing the
2666+        server-nodeids on the existing leases, so leases on migrated shares
2667+        can be renewed. For immutable shares, IndexError (without the note)
2668+        will be raised.
2669+
2670+        @param renew_secret=LeaseRenewSecret
2671+        """
2672+
2673+
2674+class IStoredShare(Interface):
2675+    """
2676+    This object contains as much as all of the share data.  It is intended
2677+    for lazy evaluation, such that in many use cases substantially less than
2678+    all of the share data will be accessed.
2679+    """
2680+    def close():
2681+        """
2682+        Complete writing to this share.
2683+        """
2684+
2685+    def get_storage_index():
2686+        """
2687+        Returns the storage index.
2688+        """
2689+
2690+    def get_shnum():
2691+        """
2692+        Returns the share number.
2693+        """
2694+
2695+    def get_data_length():
2696+        """
2697+        Returns the data length in bytes.
2698+        """
2699+
2700+    def get_size():
2701+        """
2702+        Returns the size of the share in bytes.
2703+        """
2704+
2705+    def get_used_space():
2706+        """
2707+        Returns the amount of backend storage including overhead, in bytes, used
2708+        by this share.
2709+        """
2710+
2711+    def unlink():
2712+        """
2713+        Signal that this share can be removed from the backend storage. This does
2714+        not guarantee that the share data will be immediately inaccessible, or
2715+        that it will be securely erased.
2716+        """
2717+
2718+    def readv(read_vector):
2719+        """
2720+        Read the given vector of (offset, length) extents and return the results.
2721+        """
2722+
2723+
2724+class IStoredMutableShare(IStoredShare):
2725+    def check_write_enabler(write_enabler, si_s):
2726+        """
2727+        Raise an error unless write_enabler matches this share's write enabler.
2728         """
2729 
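To show how the new interfaces fit together, here is a skeletal in-memory sketch (not part of the patch; every name is hypothetical and most required methods are omitted):

    from zope.interface import implements
    from allmydata.interfaces import IStorageBackend, IShareSet
    from allmydata.storage.common import si_b2a

    class InMemoryShareSet(object):
        implements(IShareSet)
        def __init__(self, storageindex):
            self._storageindex = storageindex
            self._shares = {}  # shnum -> IStoredShare provider

        def get_storage_index(self):
            return self._storageindex

        def get_storage_index_string(self):
            return si_b2a(self._storageindex)

        def get_shares(self):
            for shnum in sorted(self._shares):
                yield self._shares[shnum]

        def has_incoming(self, shnum):
            return False  # this toy backend never holds partial shares

    class InMemoryBackend(object):
        implements(IStorageBackend)
        def __init__(self):
            self._sharesets = {}  # storageindex -> InMemoryShareSet

        def get_available_space(self):
            return None  # unknown / unlimited

        def get_shareset(self, storageindex):
            return self._sharesets.setdefault(storageindex,
                                              InMemoryShareSet(storageindex))

        def get_sharesets_for_prefix(self, prefix):
            for si in sorted(self._sharesets):
                if si_b2a(si).startswith(prefix):
                    yield self._sharesets[si]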
2730hunk ./src/allmydata/interfaces.py 489
2731-    def put_plaintext_hashes(hashes=ListOf(Hash)):
2732+    def check_testv(test_vector):
2733+        """
2734+        Return True if this share's data satisfies the given test vector.
2735+        """
2736+
2737+    def writev(datav, new_length):
2738+        """
2739+        Apply the given data vector; if new_length is not None, set the size.
2740+        """
2741+
2742+
2743+class IStorageBucketWriter(Interface):
2744+    """
2745+    Objects of this kind live on the client side.
2746+    """
2747+    def put_block(segmentnum, data):
2748         """
2749hunk ./src/allmydata/interfaces.py 506
2750+        @param segmentnum=int
2751+        @param data=ShareData: For most segments, this data will be 'blocksize'
2752+        bytes in length. The last segment might be shorter.
2753         @return: a Deferred that fires (with None) when the operation completes
2754         """
2755 
2756hunk ./src/allmydata/interfaces.py 512
2757-    def put_crypttext_hashes(hashes=ListOf(Hash)):
2758+    def put_crypttext_hashes(hashes):
2759         """
2760hunk ./src/allmydata/interfaces.py 514
2761+        @param hashes=ListOf(Hash)
2762         @return: a Deferred that fires (with None) when the operation completes
2763         """
2764 
2765hunk ./src/allmydata/interfaces.py 518
2766-    def put_block_hashes(blockhashes=ListOf(Hash)):
2767+    def put_block_hashes(blockhashes):
2768         """
2769hunk ./src/allmydata/interfaces.py 520
2770+        @param blockhashes=ListOf(Hash)
2771         @return: a Deferred that fires (with None) when the operation completes
2772         """
2773 
2774hunk ./src/allmydata/interfaces.py 524
2775-    def put_share_hashes(sharehashes=ListOf(TupleOf(int, Hash))):
2776+    def put_share_hashes(sharehashes):
2777         """
2778hunk ./src/allmydata/interfaces.py 526
2779+        @param sharehashes=ListOf(TupleOf(int, Hash))
2780         @return: a Deferred that fires (with None) when the operation completes
2781         """
2782 
2783hunk ./src/allmydata/interfaces.py 530
2784-    def put_uri_extension(data=URIExtensionData):
2785+    def put_uri_extension(data):
2786         """This block of data contains integrity-checking information (hashes
2787         of plaintext, crypttext, and shares), as well as encoding parameters
2788         that are necessary to recover the data. This is a serialized dict
2789hunk ./src/allmydata/interfaces.py 535
2790         mapping strings to other strings. The hash of this data is kept in
2791-        the URI and verified before any of the data is used. All buckets for
2792-        a given file contain identical copies of this data.
2793+        the URI and verified before any of the data is used. All share
2794+        containers for a given file contain identical copies of this data.
2795 
2796         The serialization format is specified with the following pseudocode:
2797         for k in sorted(dict.keys()):
2798hunk ./src/allmydata/interfaces.py 543
2799             assert re.match(r'^[a-zA-Z_\-]+$', k)
2800             write(k + ':' + netstring(dict[k]))
2801 
2802+        @param data=URIExtensionData
2803         @return: a Deferred that fires (with None) when the operation completes
2804         """
2805 
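For reference, the pseudocode above corresponds to the following runnable sketch, assuming the '<length>:<bytes>,' netstring encoding used elsewhere in Tahoe-LAFS (serialize_uri_extension is a hypothetical name):

    import re

    def netstring(s):
        return "%d:%s," % (len(s), s)

    def serialize_uri_extension(d):
        out = []
        for k in sorted(d.keys()):
            assert re.match(r'^[a-zA-Z_\-]+$', k)
            out.append(k + ':' + netstring(d[k]))
        return ''.join(out)

    # e.g. serialize_uri_extension({'size': '1234'}) == 'size:4:1234,'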
2806hunk ./src/allmydata/interfaces.py 558
2807 
2808 class IStorageBucketReader(Interface):
2809 
2810-    def get_block_data(blocknum=int, blocksize=int, size=int):
2811+    def get_block_data(blocknum, blocksize, size):
2812         """Most blocks will be the same size. The last block might be shorter
2813         than the others.
2814 
2815hunk ./src/allmydata/interfaces.py 562
2816+        @param blocknum=int
2817+        @param blocksize=int
2818+        @param size=int
2819         @return: ShareData
2820         """
2821 
2822hunk ./src/allmydata/interfaces.py 573
2823         @return: ListOf(Hash)
2824         """
2825 
2826-    def get_block_hashes(at_least_these=SetOf(int)):
2827+    def get_block_hashes(at_least_these=()):
2828         """
2829hunk ./src/allmydata/interfaces.py 575
2830+        @param at_least_these=SetOf(int)
2831         @return: ListOf(Hash)
2832         """
2833 
2834hunk ./src/allmydata/interfaces.py 579
2835-    def get_share_hashes(at_least_these=SetOf(int)):
2836+    def get_share_hashes():
2837         """
2838         @return: ListOf(TupleOf(int, Hash))
2839         """
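As an illustrative (not authoritative) usage sketch: on the client side these methods return Deferreds, so a downloader might combine them as follows (fetch_block is a hypothetical helper):

    from twisted.internet import defer

    @defer.inlineCallbacks
    def fetch_block(reader, blocknum, blocksize, size):
        # Fetch one block, plus the hashes needed to validate it.
        block = yield reader.get_block_data(blocknum, blocksize, size)
        blockhashes = yield reader.get_block_hashes(set([blocknum]))
        sharehashes = yield reader.get_share_hashes()
        defer.returnValue((block, blockhashes, sharehashes))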
2840hunk ./src/allmydata/interfaces.py 611
2841         @return: unicode nickname, or None
2842         """
2843 
2844-    # methods moved from IntroducerClient, need review
2845-    def get_all_connections():
2846-        """Return a frozenset of (nodeid, service_name, rref) tuples, one for
2847-        each active connection we've established to a remote service. This is
2848-        mostly useful for unit tests that need to wait until a certain number
2849-        of connections have been made."""
2850-
2851-    def get_all_connectors():
2852-        """Return a dict that maps from (nodeid, service_name) to a
2853-        RemoteServiceConnector instance for all services that we are actively
2854-        trying to connect to. Each RemoteServiceConnector has the following
2855-        public attributes::
2856-
2857-          service_name: the type of service provided, like 'storage'
2858-          announcement_time: when we first heard about this service
2859-          last_connect_time: when we last established a connection
2860-          last_loss_time: when we last lost a connection
2861-
2862-          version: the peer's version, from the most recent connection
2863-          oldest_supported: the peer's oldest supported version, same
2864-
2865-          rref: the RemoteReference, if connected, otherwise None
2866-          remote_host: the IAddress, if connected, otherwise None
2867-
2868-        This method is intended for monitoring interfaces, such as a web page
2869-        that describes connecting and connected peers.
2870-        """
2871-
2872-    def get_all_peerids():
2873-        """Return a frozenset of all peerids to whom we have a connection (to
2874-        one or more services) established. Mostly useful for unit tests."""
2875-
2876-    def get_all_connections_for(service_name):
2877-        """Return a frozenset of (nodeid, service_name, rref) tuples, one
2878-        for each active connection that provides the given SERVICE_NAME."""
2879-
2880-    def get_permuted_peers(service_name, key):
2881-        """Returns an ordered list of (peerid, rref) tuples, selecting from
2882-        the connections that provide SERVICE_NAME, using a hash-based
2883-        permutation keyed by KEY. This randomizes the service list in a
2884-        repeatable way, to distribute load over many peers.
2885-        """
2886-
2887 
2888 class IMutableSlotWriter(Interface):
2889     """
2890hunk ./src/allmydata/interfaces.py 616
2891     The interface for a writer around a mutable slot on a remote server.
2892     """
2893-    def set_checkstring(checkstring, *args):
2894+    def set_checkstring(seqnum_or_checkstring, root_hash=None, salt=None):
2895         """
2896         Set the checkstring that I will pass to the remote server when
2897         writing.
2898hunk ./src/allmydata/interfaces.py 640
2899         Add a block and salt to the share.
2900         """
2901 
2902-    def put_encprivey(encprivkey):
2903+    def put_encprivkey(encprivkey):
2904         """
2905         Add the encrypted private key to the share.
2906         """
2907hunk ./src/allmydata/interfaces.py 645
2908 
2909-    def put_blockhashes(blockhashes=list):
2910+    def put_blockhashes(blockhashes):
2911         """
2912hunk ./src/allmydata/interfaces.py 647
2913+        @param blockhashes=list
2914         Add the block hash tree to the share.
2915         """
2916 
2917hunk ./src/allmydata/interfaces.py 651
2918-    def put_sharehashes(sharehashes=dict):
2919+    def put_sharehashes(sharehashes):
2920         """
2921hunk ./src/allmydata/interfaces.py 653
2922+        @param sharehashes=dict
2923         Add the share hash chain to the share.
2924         """
2925 
2926hunk ./src/allmydata/interfaces.py 739
2927     def get_extension_params():
2928         """Return the extension parameters in the URI"""
2929 
2930-    def set_extension_params():
2931+    def set_extension_params(params):
2932         """Set the extension parameters that should be in the URI"""
2933 
2934 class IDirectoryURI(Interface):
2935hunk ./src/allmydata/interfaces.py 879
2936         writer-visible data using this writekey.
2937         """
2938 
2939-    # TODO: Can this be overwrite instead of replace?
2940-    def replace(new_contents):
2941-        """Replace the contents of the mutable file, provided that no other
2942+    def overwrite(new_contents):
2943+        """Overwrite the contents of the mutable file, provided that no other
2944         node has published (or is attempting to publish, concurrently) a
2945         newer version of the file than this one.
2946 
2947hunk ./src/allmydata/interfaces.py 1346
2948         is empty, the metadata will be an empty dictionary.
2949         """
2950 
2951-    def set_uri(name, writecap, readcap=None, metadata=None, overwrite=True):
2952+    def set_uri(name, writecap, readcap, metadata=None, overwrite=True):
2953         """I add a child (by writecap+readcap) at the specific name. I return
2954         a Deferred that fires when the operation finishes. If overwrite= is
2955         True, I will replace any existing child of the same name, otherwise
2956hunk ./src/allmydata/interfaces.py 1745
2957     Block Hash, and the encoding parameters, both of which must be included
2958     in the URI.
2959 
2960-    I do not choose shareholders, that is left to the IUploader. I must be
2961-    given a dict of RemoteReferences to storage buckets that are ready and
2962-    willing to receive data.
2963+    I do not choose shareholders, that is left to the IUploader.
2964     """
2965 
2966     def set_size(size):
2967hunk ./src/allmydata/interfaces.py 1752
2968         """Specify the number of bytes that will be encoded. This must be
2969         performed before get_serialized_params() can be called.
2970         """
2971+
2972     def set_params(params):
2973         """Override the default encoding parameters. 'params' is a tuple of
2974         (k,d,n), where 'k' is the number of required shares, 'd' is the
2975hunk ./src/allmydata/interfaces.py 1848
2976     download, validate, decode, and decrypt data from them, writing the
2977     results to an output file.
2978 
2979-    I do not locate the shareholders, that is left to the IDownloader. I must
2980-    be given a dict of RemoteReferences to storage buckets that are ready to
2981-    send data.
2982+    I do not locate the shareholders, that is left to the IDownloader.
2983     """
2984 
2985     def setup(outfile):
2986hunk ./src/allmydata/interfaces.py 1950
2987         resuming an interrupted upload (where we need to compute the
2988         plaintext hashes, but don't need the redundant encrypted data)."""
2989 
2990-    def get_plaintext_hashtree_leaves(first, last, num_segments):
2991-        """OBSOLETE; Get the leaf nodes of a merkle hash tree over the
2992-        plaintext segments, i.e. get the tagged hashes of the given segments.
2993-        The segment size is expected to be generated by the
2994-        IEncryptedUploadable before any plaintext is read or ciphertext
2995-        produced, so that the segment hashes can be generated with only a
2996-        single pass.
2997-
2998-        This returns a Deferred that fires with a sequence of hashes, using:
2999-
3000-         tuple(segment_hashes[first:last])
3001-
3002-        'num_segments' is used to assert that the number of segments that the
3003-        IEncryptedUploadable handled matches the number of segments that the
3004-        encoder was expecting.
3005-
3006-        This method must not be called until the final byte has been read
3007-        from read_encrypted(). Once this method is called, read_encrypted()
3008-        can never be called again.
3009-        """
3010-
3011-    def get_plaintext_hash():
3012-        """OBSOLETE; Get the hash of the whole plaintext.
3013-
3014-        This returns a Deferred that fires with a tagged SHA-256 hash of the
3015-        whole plaintext, obtained from hashutil.plaintext_hash(data).
3016-        """
3017-
3018     def close():
3019         """Just like IUploadable.close()."""
3020 
3021hunk ./src/allmydata/interfaces.py 2144
3022         returns a Deferred that fires with an IUploadResults instance, from
3023         which the URI of the file can be obtained as results.uri ."""
3024 
3025-    def upload_ssk(write_capability, new_version, uploadable):
3026-        """TODO: how should this work?"""
3027-
3028 class ICheckable(Interface):
3029     def check(monitor, verify=False, add_lease=False):
3030         """Check up on my health, optionally repairing any problems.
3031hunk ./src/allmydata/interfaces.py 2505
3032 
3033 class IRepairResults(Interface):
3034     """I contain the results of a repair operation."""
3035-    def get_successful(self):
3036+    def get_successful():
3037         """Returns a boolean: True if the repair made the file healthy, False
3038         if not. Repair failure generally indicates a file that has been
3039         damaged beyond repair."""
3040hunk ./src/allmydata/interfaces.py 2577
3041     Tahoe process will typically have a single NodeMaker, but unit tests may
3042     create simplified/mocked forms for testing purposes.
3043     """
3044-    def create_from_cap(writecap, readcap=None, **kwargs):
3045+    def create_from_cap(writecap, readcap=None, deep_immutable=False, name=u"<unknown name>"):
3046         """I create an IFilesystemNode from the given writecap/readcap. I can
3047         only provide nodes for existing file/directory objects: use my other
3048         methods to create new objects. I return synchronously."""
3049hunk ./src/allmydata/monitor.py 30
3050 
3051     # the following methods are provided for the operation code
3052 
3053-    def is_cancelled(self):
3054+    def is_cancelled():
3055         """Returns True if the operation has been cancelled. If True,
3056         operation code should stop creating new work, and attempt to stop any
3057         work already in progress."""
3058hunk ./src/allmydata/monitor.py 35
3059 
3060-    def raise_if_cancelled(self):
3061+    def raise_if_cancelled():
3062         """Raise OperationCancelledError if the operation has been cancelled.
3063         Operation code that has a robust error-handling path can simply call
3064         this periodically."""
3065hunk ./src/allmydata/monitor.py 40
3066 
3067-    def set_status(self, status):
3068+    def set_status(status):
3069         """Sets the Monitor's 'status' object to an arbitrary value.
3070         Different operations will store different sorts of status information
3071         here. Operation code should use get+modify+set sequences to update
3072hunk ./src/allmydata/monitor.py 46
3073         this."""
3074 
3075-    def get_status(self):
3076+    def get_status():
3077         """Return the status object. If the operation failed, this will be a
3078         Failure instance."""
3079 
3080hunk ./src/allmydata/monitor.py 50
3081-    def finish(self, status):
3082+    def finish(status):
3083         """Call this when the operation is done, successful or not. The
3084         Monitor's lifetime is influenced by the completion of the operation
3085         it is monitoring. The Monitor's 'status' value will be set with the
3086hunk ./src/allmydata/monitor.py 63
3087 
3088     # the following methods are provided for the initiator of the operation
3089 
3090-    def is_finished(self):
3091+    def is_finished():
3092         """Return a boolean, True if the operation is done (whether
3093         successful or failed), False if it is still running."""
3094 
3095hunk ./src/allmydata/monitor.py 67
3096-    def when_done(self):
3097+    def when_done():
3098         """Return a Deferred that fires when the operation is complete. It
3099         will fire with the operation status, the same value as returned by
3100         get_status()."""
3101hunk ./src/allmydata/monitor.py 72
3102 
3103-    def cancel(self):
3104+    def cancel():
3105         """Cancel the operation as soon as possible. is_cancelled() will
3106         start returning True after this is called."""
3107 
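The signature changes above (dropping self) follow the zope.interface convention that methods declared on an Interface omit the implicit self parameter; a minimal illustration with a hypothetical interface:

    from zope.interface import Interface, implements

    class IExample(Interface):
        def cancel():
            """Interface declarations omit 'self'."""

    class Example(object):
        implements(IExample)

        def cancel(self):  # the implementation still takes self
            pass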
3108hunk ./src/allmydata/mutable/filenode.py 753
3109         self._writekey = writekey
3110         self._serializer = defer.succeed(None)
3111 
3112-
3113     def get_sequence_number(self):
3114         """
3115         Get the sequence number of the mutable version that I represent.
3116hunk ./src/allmydata/mutable/filenode.py 759
3117         """
3118         return self._version[0] # verinfo[0] == the sequence number
3119 
3120+    def get_servermap(self):
3121+        return self._servermap
3122 
3123hunk ./src/allmydata/mutable/filenode.py 762
3124-    # TODO: Terminology?
3125     def get_writekey(self):
3126         """
3127         I return a writekey or None if I don't have a writekey.
3128hunk ./src/allmydata/mutable/filenode.py 768
3129         """
3130         return self._writekey
3131 
3132-
3133     def set_downloader_hints(self, hints):
3134         """
3135         I set the downloader hints.
3136hunk ./src/allmydata/mutable/filenode.py 776
3137 
3138         self._downloader_hints = hints
3139 
3140-
3141     def get_downloader_hints(self):
3142         """
3143         I return the downloader hints.
3144hunk ./src/allmydata/mutable/filenode.py 782
3145         """
3146         return self._downloader_hints
3147 
3148-
3149     def overwrite(self, new_contents):
3150         """
3151         I overwrite the contents of this mutable file version with the
3152hunk ./src/allmydata/mutable/filenode.py 791
3153 
3154         return self._do_serialized(self._overwrite, new_contents)
3155 
3156-
3157     def _overwrite(self, new_contents):
3158         assert IMutableUploadable.providedBy(new_contents)
3159         assert self._servermap.last_update_mode == MODE_WRITE
3160hunk ./src/allmydata/mutable/filenode.py 797
3161 
3162         return self._upload(new_contents)
3163 
3164-
3165     def modify(self, modifier, backoffer=None):
3166         """I use a modifier callback to apply a change to the mutable file.
3167         I implement the following pseudocode::
3168hunk ./src/allmydata/mutable/filenode.py 841
3169 
3170         return self._do_serialized(self._modify, modifier, backoffer)
3171 
3172-
3173     def _modify(self, modifier, backoffer):
3174         if backoffer is None:
3175             backoffer = BackoffAgent().delay
3176hunk ./src/allmydata/mutable/filenode.py 846
3177         return self._modify_and_retry(modifier, backoffer, True)
3178 
3179-
3180     def _modify_and_retry(self, modifier, backoffer, first_time):
3181         """
3182         I try to apply modifier to the contents of this version of the
3183hunk ./src/allmydata/mutable/filenode.py 878
3184         d.addErrback(_retry)
3185         return d
3186 
3187-
3188     def _modify_once(self, modifier, first_time):
3189         """
3190         I attempt to apply a modifier to the contents of the mutable
3191hunk ./src/allmydata/mutable/filenode.py 913
3192         d.addCallback(_apply)
3193         return d
3194 
3195-
3196     def is_readonly(self):
3197         """
3198         I return True if this MutableFileVersion provides no write
3199hunk ./src/allmydata/mutable/filenode.py 921
3200         """
3201         return self._writekey is None
3202 
3203-
3204     def is_mutable(self):
3205         """
3206         I return True, since mutable files are always mutable by
3207hunk ./src/allmydata/mutable/filenode.py 928
3208         """
3209         return True
3210 
3211-
3212     def get_storage_index(self):
3213         """
3214         I return the storage index of the reference that I encapsulate.
3215hunk ./src/allmydata/mutable/filenode.py 934
3216         """
3217         return self._storage_index
3218 
3219-
3220     def get_size(self):
3221         """
3222         I return the length, in bytes, of this readable object.
3223hunk ./src/allmydata/mutable/filenode.py 940
3224         """
3225         return self._servermap.size_of_version(self._version)
3226 
3227-
3228     def download_to_data(self, fetch_privkey=False):
3229         """
3230         I return a Deferred that fires with the contents of this
3231hunk ./src/allmydata/mutable/filenode.py 951
3232         d.addCallback(lambda mc: "".join(mc.chunks))
3233         return d
3234 
3235-
3236     def _try_to_download_data(self):
3237         """
3238         I am an unserialized cousin of download_to_data; I am called
3239hunk ./src/allmydata/mutable/filenode.py 963
3240         d.addCallback(lambda mc: "".join(mc.chunks))
3241         return d
3242 
3243-
3244     def read(self, consumer, offset=0, size=None, fetch_privkey=False):
3245         """
3246         I read a portion (possibly all) of the mutable file that I
3247hunk ./src/allmydata/mutable/filenode.py 971
3248         return self._do_serialized(self._read, consumer, offset, size,
3249                                    fetch_privkey)
3250 
3251-
3252     def _read(self, consumer, offset=0, size=None, fetch_privkey=False):
3253         """
3254         I am the serialized companion of read.
3255hunk ./src/allmydata/mutable/filenode.py 981
3256         d = r.download(consumer, offset, size)
3257         return d
3258 
3259-
3260     def _do_serialized(self, cb, *args, **kwargs):
3261         # note: to avoid deadlock, this callable is *not* allowed to invoke
3262         # other serialized methods within this (or any other)
3263hunk ./src/allmydata/mutable/filenode.py 999
3264         self._serializer.addErrback(log.err)
3265         return d
3266 
3267-
3268     def _upload(self, new_contents):
3269         #assert self._pubkey, "update_servermap must be called before publish"
3270         p = Publish(self._node, self._storage_broker, self._servermap)
3271hunk ./src/allmydata/mutable/filenode.py 1009
3272         d.addCallback(self._did_upload, new_contents.get_size())
3273         return d
3274 
3275-
3276     def _did_upload(self, res, size):
3277         self._most_recent_size = size
3278         return res
3279hunk ./src/allmydata/mutable/filenode.py 1029
3280         """
3281         return self._do_serialized(self._update, data, offset)
3282 
3283-
3284     def _update(self, data, offset):
3285         """
3286         I update the mutable file version represented by this particular
3287hunk ./src/allmydata/mutable/filenode.py 1058
3288         d.addCallback(self._build_uploadable_and_finish, data, offset)
3289         return d
3290 
3291-
3292     def _do_modify_update(self, data, offset):
3293         """
3294         I perform a file update by modifying the contents of the file
3295hunk ./src/allmydata/mutable/filenode.py 1073
3296             return new
3297         return self._modify(m, None)
3298 
3299-
3300     def _do_update_update(self, data, offset):
3301         """
3302         I start the Servermap update that gets us the data we need to
3303hunk ./src/allmydata/mutable/filenode.py 1108
3304         return self._update_servermap(update_range=(start_segment,
3305                                                     end_segment))
3306 
3307-
3308     def _decode_and_decrypt_segments(self, ignored, data, offset):
3309         """
3310         After the servermap update, I take the encrypted and encoded
3311hunk ./src/allmydata/mutable/filenode.py 1148
3312         d3 = defer.succeed(blockhashes)
3313         return deferredutil.gatherResults([d1, d2, d3])
3314 
3315-
3316     def _build_uploadable_and_finish(self, segments_and_bht, data, offset):
3317         """
3318         After the process has the plaintext segments, I build the
3319hunk ./src/allmydata/mutable/filenode.py 1163
3320         p = Publish(self._node, self._storage_broker, self._servermap)
3321         return p.update(u, offset, segments_and_bht[2], self._version)
3322 
3323-
3324     def _update_servermap(self, mode=MODE_WRITE, update_range=None):
3325         """
3326         I update the servermap. I return a Deferred that fires when the
3327hunk ./src/allmydata/storage/common.py 1
3328-
3329-import os.path
3330 from allmydata.util import base32
3331 
3332 class DataTooLargeError(Exception):
3333hunk ./src/allmydata/storage/common.py 5
3334     pass
3335+
3336 class UnknownMutableContainerVersionError(Exception):
3337     pass
3338hunk ./src/allmydata/storage/common.py 8
3339+
3340 class UnknownImmutableContainerVersionError(Exception):
3341     pass
3342 
3343hunk ./src/allmydata/storage/common.py 18
3344 
3345 def si_a2b(ascii_storageindex):
3346     return base32.a2b(ascii_storageindex)
3347-
3348-def storage_index_to_dir(storageindex):
3349-    sia = si_b2a(storageindex)
3350-    return os.path.join(sia[:2], sia)
3351hunk ./src/allmydata/storage/crawler.py 2
3352 
3353-import os, time, struct
3354+import time, struct
3355 import cPickle as pickle
3356 from twisted.internet import reactor
3357 from twisted.application import service
3358hunk ./src/allmydata/storage/crawler.py 6
3359+
3360+from allmydata.util.assertutil import precondition
3361+from allmydata.interfaces import IStorageBackend
3362 from allmydata.storage.common import si_b2a
3363hunk ./src/allmydata/storage/crawler.py 10
3364-from allmydata.util import fileutil
3365+
3366 
3367 class TimeSliceExceeded(Exception):
3368     pass
3369hunk ./src/allmydata/storage/crawler.py 15
3370 
3371+
3372 class ShareCrawler(service.MultiService):
3373hunk ./src/allmydata/storage/crawler.py 17
3374-    """A ShareCrawler subclass is attached to a StorageServer, and
3375-    periodically walks all of its shares, processing each one in some
3376-    fashion. This crawl is rate-limited, to reduce the IO burden on the host,
3377-    since large servers can easily have a terabyte of shares, in several
3378-    million files, which can take hours or days to read.
3379+    """
3380+    An instance of a subclass of ShareCrawler is attached to a storage
3381+    backend, and periodically walks the backend's shares, processing them
3382+    in some fashion. This crawl is rate-limited to reduce the I/O burden on
3383+    the host, since large servers can easily have a terabyte of shares in
3384+    several million files, which can take hours or days to read.
3385 
3386     Once the crawler starts a cycle, it will proceed at a rate limited by the
3387     allowed_cpu_percentage= and cpu_slice= parameters: yielding the reactor
3388hunk ./src/allmydata/storage/crawler.py 33
3389     long enough to ensure that 'minimum_cycle_time' elapses between the start
3390     of two consecutive cycles.
3391 
3392-    We assume that the normal upload/download/get_buckets traffic of a tahoe
3393+    We assume that the normal upload/download/DYHB traffic of a Tahoe-LAFS
3394     grid will cause the prefixdir contents to be mostly cached in the kernel,
3395hunk ./src/allmydata/storage/crawler.py 35
3396-    or that the number of buckets in each prefixdir will be small enough to
3397-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
3398-    buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
3399+    or that the number of sharesets in each prefixdir will be small enough to
3400+    load quickly. A 1TB allmydata.com server was measured to have 2.56 million
2401+    sharesets, spread across the 1024 prefixdirs, with about 2500 sharesets per
3402     prefix. On this server, each prefixdir took 130ms-200ms to list the first
3403     time, and 17ms to list the second time.
3404 
3405hunk ./src/allmydata/storage/crawler.py 41
3406-    To use a crawler, create a subclass which implements the process_bucket()
3407-    method. It will be called with a prefixdir and a base32 storage index
3408-    string. process_bucket() must run synchronously. Any keys added to
3409-    self.state will be preserved. Override add_initial_state() to set up
3410-    initial state keys. Override finished_cycle() to perform additional
3411-    processing when the cycle is complete. Any status that the crawler
3412-    produces should be put in the self.state dictionary. Status renderers
3413-    (like a web page which describes the accomplishments of your crawler)
3414-    will use crawler.get_state() to retrieve this dictionary; they can
3415-    present the contents as they see fit.
3416+    To implement a crawler, create a subclass that implements the
3417+    process_shareset() method. It will be called with a prefixdir and an
3418+    object providing the IShareSet interface. process_shareset() must run
3419+    synchronously. Any keys added to self.state will be preserved. Override
3420+    add_initial_state() to set up initial state keys. Override
3421+    finished_cycle() to perform additional processing when the cycle is
3422+    complete. Any status that the crawler produces should be put in the
3423+    self.state dictionary. Status renderers (like a web page describing the
3424+    accomplishments of your crawler) will use crawler.get_state() to retrieve
3425+    this dictionary; they can present the contents as they see fit.
3426 
3427hunk ./src/allmydata/storage/crawler.py 52
3428-    Then create an instance, with a reference to a StorageServer and a
3429-    filename where it can store persistent state. The statefile is used to
3430-    keep track of how far around the ring the process has travelled, as well
3431-    as timing history to allow the pace to be predicted and controlled. The
3432-    statefile will be updated and written to disk after each time slice (just
3433-    before the crawler yields to the reactor), and also after each cycle is
3434-    finished, and also when stopService() is called. Note that this means
3435-    that a crawler which is interrupted with SIGKILL while it is in the
3436-    middle of a time slice will lose progress: the next time the node is
3437-    started, the crawler will repeat some unknown amount of work.
3438+    Then create an instance, with a reference to a backend object providing
3439+    the IStorageBackend interface, and a filename where it can store
3440+    persistent state. The statefile is used to keep track of how far around
3441+    the ring the process has travelled, as well as timing history to allow
3442+    the pace to be predicted and controlled. The statefile will be updated
3443+    and written to disk after each time slice (just before the crawler yields
3444+    to the reactor), and also after each cycle is finished, and also when
3445+    stopService() is called. Note that this means that a crawler that is
3446+    interrupted with SIGKILL while it is in the middle of a time slice will
3447+    lose progress: the next time the node is started, the crawler will repeat
3448+    some unknown amount of work.
3449 
3450     The crawler instance must be started with startService() before it will
3451hunk ./src/allmydata/storage/crawler.py 65
3452-    do any work. To make it stop doing work, call stopService().
3453+    do any work. To make it stop doing work, call stopService(). A crawler
3454+    is usually a child service of a StorageServer, although it should not
3455+    depend on that.
3456+
3457+    For historical reasons, some dictionary key names use the term "bucket"
3458+    for what is now preferably called a "shareset" (the set of shares that a
3459+    server holds under a given storage index).
3460     """
3461 
3462     slow_start = 300 # don't start crawling for 5 minutes after startup
3463hunk ./src/allmydata/storage/crawler.py 80
3464     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
3465     minimum_cycle_time = 300 # don't run a cycle faster than this
3466 
3467-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
3468+    def __init__(self, backend, statefp, allowed_cpu_percentage=None):
3469+        precondition(IStorageBackend.providedBy(backend), backend)
3470         service.MultiService.__init__(self)
3471hunk ./src/allmydata/storage/crawler.py 83
3472+        self.backend = backend
3473+        self.statefp = statefp
3474         if allowed_cpu_percentage is not None:
3475             self.allowed_cpu_percentage = allowed_cpu_percentage
3476hunk ./src/allmydata/storage/crawler.py 87
3477-        self.server = server
3478-        self.sharedir = server.sharedir
3479-        self.statefile = statefile
3480         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
3481                          for i in range(2**10)]
3482         self.prefixes.sort()
3483hunk ./src/allmydata/storage/crawler.py 91
3484         self.timer = None
3485-        self.bucket_cache = (None, [])
3486+        self.shareset_cache = (None, [])
3487         self.current_sleep_time = None
3488         self.next_wake_time = None
3489         self.last_prefix_finished_time = None
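For orientation, the self.prefixes computation above enumerates all 1024 two-character base32 prefixes, each covering the top 10 bits of a storage index; a hedged standalone equivalent (assuming allmydata.util.base32.b2a):

    import struct
    from allmydata.util import base32

    # Each base32 character encodes 5 bits, so two characters cover the
    # top 10 bits of the storage index, giving 2**10 distinct prefixes.
    prefixes = sorted(base32.b2a(struct.pack(">H", i << 6))[:2]
                      for i in range(2**10))
    assert prefixes[0] == "aa" and len(prefixes) == 1024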
3490hunk ./src/allmydata/storage/crawler.py 154
3491                 left = len(self.prefixes) - self.last_complete_prefix_index
3492                 remaining = left * self.last_prefix_elapsed_time
3493                 # TODO: remainder of this prefix: we need to estimate the
3494-                # per-bucket time, probably by measuring the time spent on
3495-                # this prefix so far, divided by the number of buckets we've
3496+                # per-shareset time, probably by measuring the time spent on
3497+                # this prefix so far, divided by the number of sharesets we've
3498                 # processed.
3499             d["estimated-cycle-complete-time-left"] = remaining
3500             # it's possible to call get_progress() from inside a crawler's
3501hunk ./src/allmydata/storage/crawler.py 175
3502         state dictionary.
3503 
3504         If we are not currently sleeping (i.e. get_state() was called from
3505-        inside the process_prefixdir, process_bucket, or finished_cycle()
3506+        inside the process_prefixdir, process_shareset, or finished_cycle()
3507         methods, or if startService has not yet been called on this crawler),
3508         these two keys will be None.
3509 
3510hunk ./src/allmydata/storage/crawler.py 188
3511     def load_state(self):
3512         # we use this to store state for both the crawler's internals and
3513         # anything the subclass-specific code needs. The state is stored
3514-        # after each bucket is processed, after each prefixdir is processed,
3515+        # after each shareset is processed, after each prefixdir is processed,
3516         # and after a cycle is complete. The internal keys we use are:
3517         #  ["version"]: int, always 1
3518         #  ["last-cycle-finished"]: int, or None if we have not yet finished
3519hunk ./src/allmydata/storage/crawler.py 202
3520         #                            are sleeping between cycles, or if we
3521         #                            have not yet finished any prefixdir since
3522         #                            a cycle was started
3523-        #  ["last-complete-bucket"]: str, base32 storage index bucket name
3524-        #                            of the last bucket to be processed, or
3525-        #                            None if we are sleeping between cycles
3526+        #  ["last-complete-bucket"]: str, base32 storage index of the last
3527+        #                            shareset to be processed, or None if we
3528+        #                            are sleeping between cycles
3529         try:
3530hunk ./src/allmydata/storage/crawler.py 206
3531-            f = open(self.statefile, "rb")
3532-            state = pickle.load(f)
3533-            f.close()
3534+            state = pickle.loads(self.statefp.getContent())
3535         except EnvironmentError:
3536             state = {"version": 1,
3537                      "last-cycle-finished": None,
3538hunk ./src/allmydata/storage/crawler.py 242
3539         else:
3540             last_complete_prefix = self.prefixes[lcpi]
3541         self.state["last-complete-prefix"] = last_complete_prefix
3542-        tmpfile = self.statefile + ".tmp"
3543-        f = open(tmpfile, "wb")
3544-        pickle.dump(self.state, f)
3545-        f.close()
3546-        fileutil.move_into_place(tmpfile, self.statefile)
3547+        self.statefp.setContent(pickle.dumps(self.state))
3548 
3549     def startService(self):
3550         # arrange things to look like we were just sleeping, so
3551hunk ./src/allmydata/storage/crawler.py 284
3552         sleep_time = (this_slice / self.allowed_cpu_percentage) - this_slice
3553         # if the math gets weird, or a timequake happens, don't sleep
3554         # forever. Note that this means that, while a cycle is running, we
3555-        # will process at least one bucket every 5 minutes, no matter how
3556-        # long that bucket takes.
3557+        # will process at least one shareset every 5 minutes, no matter how
3558+        # long that shareset takes.
3559         sleep_time = max(0.0, min(sleep_time, 299))
3560         if finished_cycle:
3561             # how long should we sleep between cycles? Don't run faster than
3562hunk ./src/allmydata/storage/crawler.py 315
3563         for i in range(self.last_complete_prefix_index+1, len(self.prefixes)):
3564             # if we want to yield earlier, just raise TimeSliceExceeded()
3565             prefix = self.prefixes[i]
3566-            prefixdir = os.path.join(self.sharedir, prefix)
3567-            if i == self.bucket_cache[0]:
3568-                buckets = self.bucket_cache[1]
3569+            if i == self.shareset_cache[0]:
3570+                sharesets = self.shareset_cache[1]
3571             else:
3572hunk ./src/allmydata/storage/crawler.py 318
3573-                try:
3574-                    buckets = os.listdir(prefixdir)
3575-                    buckets.sort()
3576-                except EnvironmentError:
3577-                    buckets = []
3578-                self.bucket_cache = (i, buckets)
3579-            self.process_prefixdir(cycle, prefix, prefixdir,
3580-                                   buckets, start_slice)
3581+                sharesets = self.backend.get_sharesets_for_prefix(prefix)
3582+                self.shareset_cache = (i, sharesets)
3583+            self.process_prefixdir(cycle, prefix, sharesets, start_slice)
3584             self.last_complete_prefix_index = i
3585 
3586             now = time.time()
3587hunk ./src/allmydata/storage/crawler.py 345
3588         self.finished_cycle(cycle)
3589         self.save_state()
3590 
3591-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
3592-        """This gets a list of bucket names (i.e. storage index strings,
3593+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
3594+        """
3595+        This gets a list of shareset names (i.e. storage index strings,
3596         base32-encoded) in sorted order.
3597 
3598         You can override this if your crawler doesn't care about the actual
3599hunk ./src/allmydata/storage/crawler.py 352
3600         shares, for example a crawler which merely keeps track of how many
3601-        buckets are being managed by this server.
3602+        sharesets are being managed by this server.
3603 
3604hunk ./src/allmydata/storage/crawler.py 354
3605-        Subclasses which *do* care about actual bucket should leave this
3606-        method along, and implement process_bucket() instead.
2607+        Subclasses that *do* care about the actual sharesets should leave this
3608+        method alone, and implement process_shareset() instead.
3609         """
3610 
3611hunk ./src/allmydata/storage/crawler.py 358
3612-        for bucket in buckets:
3613-            if bucket <= self.state["last-complete-bucket"]:
3614+        for shareset in sharesets:
3615+            base32si = shareset.get_storage_index_string()
3616+            if base32si <= self.state["last-complete-bucket"]:
3617                 continue
3618hunk ./src/allmydata/storage/crawler.py 362
3619-            self.process_bucket(cycle, prefix, prefixdir, bucket)
3620-            self.state["last-complete-bucket"] = bucket
3621+            self.process_shareset(cycle, prefix, shareset)
3622+            self.state["last-complete-bucket"] = base32si
3623             if time.time() >= start_slice + self.cpu_slice:
3624                 raise TimeSliceExceeded()
3625 
3626hunk ./src/allmydata/storage/crawler.py 370
3627     # the remaining methods are explictly for subclasses to implement.
3628 
3629     def started_cycle(self, cycle):
3630-        """Notify a subclass that the crawler is about to start a cycle.
3631+        """
3632+        Notify a subclass that the crawler is about to start a cycle.
3633 
3634         This method is for subclasses to override. No upcall is necessary.
3635         """
3636hunk ./src/allmydata/storage/crawler.py 377
3637         pass
3638 
3639-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
3640-        """Examine a single bucket. Subclasses should do whatever they want
3641+    def process_shareset(self, cycle, prefix, shareset):
3642+        """
3643+        Examine a single shareset. Subclasses should do whatever they want
3644         to do to the shares therein, then update self.state as necessary.
3645 
3646         If the crawler is never interrupted by SIGKILL, this method will be
3647hunk ./src/allmydata/storage/crawler.py 383
3648-        called exactly once per share (per cycle). If it *is* interrupted,
3649+        called exactly once per shareset (per cycle). If it *is* interrupted,
3650         then the next time the node is started, some amount of work will be
3651         duplicated, according to when self.save_state() was last called. By
3652         default, save_state() is called at the end of each timeslice, and
3653hunk ./src/allmydata/storage/crawler.py 391
3654 
3655         To reduce the chance of duplicate work (i.e. to avoid adding multiple
3656         records to a database), you can call save_state() at the end of your
3657-        process_bucket() method. This will reduce the maximum duplicated work
3658-        to one bucket per SIGKILL. It will also add overhead, probably 1-20ms
3659-        per bucket (and some disk writes), which will count against your
3660-        allowed_cpu_percentage, and which may be considerable if
3661-        process_bucket() runs quickly.
3662+        process_shareset() method. This will reduce the maximum duplicated
3663+        work to one shareset per SIGKILL. It will also add overhead, probably
3664+        1-20ms per shareset (and some disk writes), which will count against
3665+        your allowed_cpu_percentage, and which may be considerable if
3666+        process_shareset() runs quickly.
3667 
3668         This method is for subclasses to override. No upcall is necessary.
3669         """
3670hunk ./src/allmydata/storage/crawler.py 402
3671         pass
3672 
3673     def finished_prefix(self, cycle, prefix):
3674-        """Notify a subclass that the crawler has just finished processing a
3675-        prefix directory (all buckets with the same two-character/10bit
3676+        """
3677+        Notify a subclass that the crawler has just finished processing a
3678+        prefix directory (all sharesets with the same two-character/10-bit
3679         prefix). To impose a limit on how much work might be duplicated by a
3680         SIGKILL that occurs during a timeslice, you can call
3681         self.save_state() here, but be aware that it may represent a
3682hunk ./src/allmydata/storage/crawler.py 415
3683         pass
3684 
3685     def finished_cycle(self, cycle):
3686-        """Notify subclass that a cycle (one complete traversal of all
3687+        """
3688+        Notify subclass that a cycle (one complete traversal of all
3689         prefixdirs) has just finished. 'cycle' is the number of the cycle
3690         that just finished. This method should perform summary work and
3691         update self.state to publish information to status displays.
3692hunk ./src/allmydata/storage/crawler.py 433
3693         pass
3694 
3695     def yielding(self, sleep_time):
3696-        """The crawler is about to sleep for 'sleep_time' seconds. This
3697+        """
3698+        The crawler is about to sleep for 'sleep_time' seconds. This
3699         method is mostly for the convenience of unit tests.
3700 
3701         This method is for subclasses to override. No upcall is necessary.
3702hunk ./src/allmydata/storage/crawler.py 443
3703 
3704 
3705 class BucketCountingCrawler(ShareCrawler):
3706-    """I keep track of how many buckets are being managed by this server.
3707-    This is equivalent to the number of distributed files and directories for
3708-    which I am providing storage. The actual number of files+directories in
3709-    the full grid is probably higher (especially when there are more servers
3710-    than 'N', the number of generated shares), because some files+directories
3711-    will have shares on other servers instead of me. Also note that the
3712-    number of buckets will differ from the number of shares in small grids,
3713-    when more than one share is placed on a single server.
3714+    """
3715+    I keep track of how many sharesets, each corresponding to a storage index,
3716+    are being managed by this server. This is equivalent to the number of
3717+    distributed files and directories for which I am providing storage. The
3718+    actual number of files and directories in the full grid is probably higher
3719+    (especially when there are more servers than 'N', the number of generated
3720+    shares), because some files and directories will have shares on other
3721+    servers instead of me. Also note that the number of sharesets will differ
3722+    from the number of shares in small grids, when more than one share is
3723+    placed on a single server.
3724     """
3725 
3726     minimum_cycle_time = 60*60 # we don't need this more than once an hour
3727hunk ./src/allmydata/storage/crawler.py 457
3728 
3729-    def __init__(self, server, statefile, num_sample_prefixes=1):
3730-        ShareCrawler.__init__(self, server, statefile)
3731+    def __init__(self, backend, statefp, num_sample_prefixes=1):
3732+        ShareCrawler.__init__(self, backend, statefp)
3733         self.num_sample_prefixes = num_sample_prefixes
3734 
3735     def add_initial_state(self):
3736hunk ./src/allmydata/storage/crawler.py 471
3737         self.state.setdefault("last-complete-bucket-count", None)
3738         self.state.setdefault("storage-index-samples", {})
3739 
3740-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
3741+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
3742         # we override process_prefixdir() because we don't want to look at
3743hunk ./src/allmydata/storage/crawler.py 473
3744-        # the individual buckets. We'll save state after each one. On my
3745+        # the individual sharesets. We'll save state after each one. On my
3746         # laptop, a mostly-empty storage server can process about 70
3747         # prefixdirs in a 1.0s slice.
3748         if cycle not in self.state["bucket-counts"]:
3749hunk ./src/allmydata/storage/crawler.py 478
3750             self.state["bucket-counts"][cycle] = {}
3751-        self.state["bucket-counts"][cycle][prefix] = len(buckets)
3752+        self.state["bucket-counts"][cycle][prefix] = len(sharesets)
3753         if prefix in self.prefixes[:self.num_sample_prefixes]:
3754hunk ./src/allmydata/storage/crawler.py 480
3755-            self.state["storage-index-samples"][prefix] = (cycle, buckets)
3756+            self.state["storage-index-samples"][prefix] = (cycle, sharesets)
3757 
3758     def finished_cycle(self, cycle):
3759         last_counts = self.state["bucket-counts"].get(cycle, [])
3760hunk ./src/allmydata/storage/crawler.py 486
3761         if len(last_counts) == len(self.prefixes):
3762             # great, we have a whole cycle.
3763-            num_buckets = sum(last_counts.values())
3764-            self.state["last-complete-bucket-count"] = num_buckets
3765+            num_sharesets = sum(last_counts.values())
3766+            self.state["last-complete-bucket-count"] = num_sharesets
3767             # get rid of old counts
3768             for old_cycle in list(self.state["bucket-counts"].keys()):
3769                 if old_cycle != cycle:
3770hunk ./src/allmydata/storage/crawler.py 494
3771                     del self.state["bucket-counts"][old_cycle]
3772         # get rid of old samples too
3773         for prefix in list(self.state["storage-index-samples"].keys()):
3774-            old_cycle,buckets = self.state["storage-index-samples"][prefix]
3775+            old_cycle, storage_indices = self.state["storage-index-samples"][prefix]
3776             if old_cycle != cycle:
3777                 del self.state["storage-index-samples"][prefix]
3778hunk ./src/allmydata/storage/crawler.py 497
3779-
3780hunk ./src/allmydata/storage/expirer.py 1
3781-import time, os, pickle, struct
3782+
3783+import time, pickle, struct
3784+from twisted.python import log as twlog
3785+
3786 from allmydata.storage.crawler import ShareCrawler
3787hunk ./src/allmydata/storage/expirer.py 6
3788-from allmydata.storage.shares import get_share_file
3789-from allmydata.storage.common import UnknownMutableContainerVersionError, \
3790+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
3791      UnknownImmutableContainerVersionError
3792hunk ./src/allmydata/storage/expirer.py 8
3793-from twisted.python import log as twlog
3794+
3795 
3796 class LeaseCheckingCrawler(ShareCrawler):
3797     """I examine the leases on all shares, determining which are still valid
3798hunk ./src/allmydata/storage/expirer.py 17
3799     removed.
3800 
3801     I collect statistics on the leases and make these available to a web
3802-    status page, including::
3803+    status page, including:
3804 
3805     Space recovered during this cycle-so-far:
3806      actual (only if expiration_enabled=True):
3807hunk ./src/allmydata/storage/expirer.py 21
3808-      num-buckets, num-shares, sum of share sizes, real disk usage
3809+      num-storage-indices, num-shares, sum of share sizes, real disk usage
3810       ('real disk usage' means we use stat(fn).st_blocks*512 and include any
3811        space used by the directory)
3812      what it would have been with the original lease expiration time
3813hunk ./src/allmydata/storage/expirer.py 32
3814 
3815     Space recovered during the last 10 cycles  <-- saved in separate pickle
3816 
3817-    Shares/buckets examined:
3818+    Shares/storage-indices examined:
3819      this cycle-so-far
3820      prediction of rest of cycle
3821      during last 10 cycles <-- separate pickle
3822hunk ./src/allmydata/storage/expirer.py 42
3823     Histogram of leases-per-share:
3824      this-cycle-to-date
3825      last 10 cycles <-- separate pickle
3826-    Histogram of lease ages, buckets = 1day
3827+    Histogram of lease ages, in one-day bins
3828      cycle-to-date
3829      last 10 cycles <-- separate pickle
3830 
3831hunk ./src/allmydata/storage/expirer.py 53
3832     slow_start = 360 # wait 6 minutes after startup
3833     minimum_cycle_time = 12*60*60 # not more than twice per day
3834 
3835-    def __init__(self, server, statefile, historyfile,
3836-                 expiration_enabled, mode,
3837-                 override_lease_duration, # used if expiration_mode=="age"
3838-                 cutoff_date, # used if expiration_mode=="cutoff-date"
3839-                 sharetypes):
3840-        self.historyfile = historyfile
3841-        self.expiration_enabled = expiration_enabled
3842-        self.mode = mode
3843+    def __init__(self, backend, statefp, historyfp, expiration_policy):
3844+        # ShareCrawler.__init__ will call add_initial_state, so self.historyfp has to be set first.
3845+        self.historyfp = historyfp
3846+        ShareCrawler.__init__(self, backend, statefp)
3847+
3848+        self.expiration_enabled = expiration_policy['enabled']
3849+        self.mode = expiration_policy['mode']
3850         self.override_lease_duration = None
3851         self.cutoff_date = None
3852         if self.mode == "age":
3853hunk ./src/allmydata/storage/expirer.py 63
3854-            assert isinstance(override_lease_duration, (int, type(None)))
3855-            self.override_lease_duration = override_lease_duration # seconds
3856+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
3857+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
3858         elif self.mode == "cutoff-date":
3859hunk ./src/allmydata/storage/expirer.py 66
3860-            assert isinstance(cutoff_date, int) # seconds-since-epoch
3861-            assert cutoff_date is not None
3862-            self.cutoff_date = cutoff_date
3863+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
3864+            self.cutoff_date = expiration_policy['cutoff_date']
3865         else:
3866hunk ./src/allmydata/storage/expirer.py 69
3867-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
3868-        self.sharetypes_to_expire = sharetypes
3869-        ShareCrawler.__init__(self, server, statefile)
3870+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
3871+        self.sharetypes_to_expire = expiration_policy['sharetypes']
3872 
3873     def add_initial_state(self):
3874         # we fill ["cycle-to-date"] here (even though they will be reset in
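
For reference, a caller would now build the policy as a plain dict; a sketch
(the key names are those read above; 'backend' stands for any IStorageBackend
provider, and the FilePath locations and cutoff date are arbitrary):

    from twisted.python.filepath import FilePath

    expiration_policy = {
        'enabled': True,
        'mode': 'cutoff-date',
        'override_lease_duration': None,  # only consulted in "age" mode
        'cutoff_date': 1316476800,        # seconds-since-epoch
        'sharetypes': ('mutable', 'immutable'),
    }
    checker = LeaseCheckingCrawler(backend,
                                   FilePath("lease_checker.state"),
                                   FilePath("lease_checker.history"),
                                   expiration_policy)
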
3875hunk ./src/allmydata/storage/expirer.py 84
3876             self.state["cycle-to-date"].setdefault(k, so_far[k])
3877 
3878         # initialize history
3879-        if not os.path.exists(self.historyfile):
3880+        if not self.historyfp.exists():
3881             history = {} # cyclenum -> dict
3882hunk ./src/allmydata/storage/expirer.py 86
3883-            f = open(self.historyfile, "wb")
3884-            pickle.dump(history, f)
3885-            f.close()
3886+            self.historyfp.setContent(pickle.dumps(history))
3887 
3888     def create_empty_cycle_dict(self):
3889         recovered = self.create_empty_recovered_dict()
3890hunk ./src/allmydata/storage/expirer.py 99
3891 
3892     def create_empty_recovered_dict(self):
3893         recovered = {}
3894+        # "buckets" is ambiguous; here it means the number of sharesets (one per storage index per server)
3895         for a in ("actual", "original", "configured", "examined"):
3896             for b in ("buckets", "shares", "sharebytes", "diskbytes"):
3897                 recovered[a+"-"+b] = 0
3898hunk ./src/allmydata/storage/expirer.py 110
3899     def started_cycle(self, cycle):
3900         self.state["cycle-to-date"] = self.create_empty_cycle_dict()
3901 
3902-    def stat(self, fn):
3903-        return os.stat(fn)
3904-
3905-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
3906-        bucketdir = os.path.join(prefixdir, storage_index_b32)
3907-        s = self.stat(bucketdir)
3908+    def process_storage_index(self, cycle, prefix, container):
3909         would_keep_shares = []
3910         wks = None
3911hunk ./src/allmydata/storage/expirer.py 113
3912+        sharetype = None
3913 
3914hunk ./src/allmydata/storage/expirer.py 115
3915-        for fn in os.listdir(bucketdir):
3916-            try:
3917-                shnum = int(fn)
3918-            except ValueError:
3919-                continue # non-numeric means not a sharefile
3920-            sharefile = os.path.join(bucketdir, fn)
3921+        for share in container.get_shares():
3922+            sharetype = share.sharetype
3923             try:
3924hunk ./src/allmydata/storage/expirer.py 118
3925-                wks = self.process_share(sharefile)
3926+                wks = self.process_share(share)
3927             except (UnknownMutableContainerVersionError,
3928                     UnknownImmutableContainerVersionError,
3929                     struct.error):
3930hunk ./src/allmydata/storage/expirer.py 122
3931-                twlog.msg("lease-checker error processing %s" % sharefile)
3932+                twlog.msg("lease-checker error processing %r" % (share,))
3933                 twlog.err()
3934hunk ./src/allmydata/storage/expirer.py 124
3935-                which = (storage_index_b32, shnum)
3936+                which = (si_b2a(share.storageindex), share.get_shnum())
3937                 self.state["cycle-to-date"]["corrupt-shares"].append(which)
3938                 wks = (1, 1, 1, "unknown")
3939             would_keep_shares.append(wks)
3940hunk ./src/allmydata/storage/expirer.py 129
3941 
3942-        sharetype = None
3943+        container_type = None
3944         if wks:
3945hunk ./src/allmydata/storage/expirer.py 131
3946-            # use the last share's sharetype as the buckettype
3947-            sharetype = wks[3]
3948+            # use the last share's sharetype as the container type
3949+            container_type = wks[3]
3950         rec = self.state["cycle-to-date"]["space-recovered"]
3951         self.increment(rec, "examined-buckets", 1)
3952         if sharetype:
3953hunk ./src/allmydata/storage/expirer.py 136
3954-            self.increment(rec, "examined-buckets-"+sharetype, 1)
3955+            self.increment(rec, "examined-buckets-"+container_type, 1)
3956+
3957+        container_diskbytes = container.get_overhead()
3958 
3959hunk ./src/allmydata/storage/expirer.py 140
3960-        try:
3961-            bucket_diskbytes = s.st_blocks * 512
3962-        except AttributeError:
3963-            bucket_diskbytes = 0 # no stat().st_blocks on windows
3964         if sum([wks[0] for wks in would_keep_shares]) == 0:
3965hunk ./src/allmydata/storage/expirer.py 141
3966-            self.increment_bucketspace("original", bucket_diskbytes, sharetype)
3967+            self.increment_container_space("original", container_diskbytes, sharetype)
3968         if sum([wks[1] for wks in would_keep_shares]) == 0:
3969hunk ./src/allmydata/storage/expirer.py 143
3970-            self.increment_bucketspace("configured", bucket_diskbytes, sharetype)
3971+            self.increment_container_space("configured", container_diskbytes, sharetype)
3972         if sum([wks[2] for wks in would_keep_shares]) == 0:
3973hunk ./src/allmydata/storage/expirer.py 145
3974-            self.increment_bucketspace("actual", bucket_diskbytes, sharetype)
3975+            self.increment_container_space("actual", container_diskbytes, sharetype)
3976 
3977hunk ./src/allmydata/storage/expirer.py 147
3978-    def process_share(self, sharefilename):
3979-        # first, find out what kind of a share it is
3980-        sf = get_share_file(sharefilename)
3981-        sharetype = sf.sharetype
3982+    def process_share(self, share):
3983+        sharetype = share.sharetype
3984         now = time.time()
3985hunk ./src/allmydata/storage/expirer.py 150
3986-        s = self.stat(sharefilename)
3987+        sharebytes = share.get_size()
3988+        diskbytes = share.get_used_space()
3989 
3990         num_leases = 0
3991         num_valid_leases_original = 0
3992hunk ./src/allmydata/storage/expirer.py 158
3993         num_valid_leases_configured = 0
3994         expired_leases_configured = []
3995 
3996-        for li in sf.get_leases():
3997+        for li in share.get_leases():
3998             num_leases += 1
3999             original_expiration_time = li.get_expiration_time()
4000             grant_renew_time = li.get_grant_renew_time_time()
4001hunk ./src/allmydata/storage/expirer.py 171
4002 
4003             #  expired-or-not according to our configured age limit
4004             expired = False
4005-            if self.mode == "age":
4006-                age_limit = original_expiration_time
4007-                if self.override_lease_duration is not None:
4008-                    age_limit = self.override_lease_duration
4009-                if age > age_limit:
4010-                    expired = True
4011-            else:
4012-                assert self.mode == "cutoff-date"
4013-                if grant_renew_time < self.cutoff_date:
4014-                    expired = True
4015-            if sharetype not in self.sharetypes_to_expire:
4016-                expired = False
4017+            if sharetype in self.sharetypes_to_expire:
4018+                if self.mode == "age":
4019+                    age_limit = original_expiration_time
4020+                    if self.override_lease_duration is not None:
4021+                        age_limit = self.override_lease_duration
4022+                    if age > age_limit:
4023+                        expired = True
4024+                else:
4025+                    assert self.mode == "cutoff-date"
4026+                    if grant_renew_time < self.cutoff_date:
4027+                        expired = True
4028 
4029             if expired:
4030                 expired_leases_configured.append(li)
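
Taken out of context, the per-lease decision above reduces to the following
sketch (a standalone function for exposition only; the names mirror the local
variables above):

    def lease_is_expired(mode, age, original_expiration_time,
                         grant_renew_time, override_lease_duration, cutoff_date):
        if mode == "age":
            # Compare the lease's age against a duration limit, which the
            # override (if any) replaces.
            age_limit = original_expiration_time
            if override_lease_duration is not None:
                age_limit = override_lease_duration
            return age > age_limit
        else:
            # "cutoff-date": expire anything last renewed before the cutoff.
            assert mode == "cutoff-date"
            return grant_renew_time < cutoff_date
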
4031hunk ./src/allmydata/storage/expirer.py 190
4032 
4033         so_far = self.state["cycle-to-date"]
4034         self.increment(so_far["leases-per-share-histogram"], num_leases, 1)
4035-        self.increment_space("examined", s, sharetype)
4036+        self.increment_space("examined", diskbytes, sharetype)
4037 
4038         would_keep_share = [1, 1, 1, sharetype]
4039 
4040hunk ./src/allmydata/storage/expirer.py 196
4041         if self.expiration_enabled:
4042             for li in expired_leases_configured:
4043-                sf.cancel_lease(li.cancel_secret)
4044+                share.cancel_lease(li.cancel_secret)
4045 
4046         if num_valid_leases_original == 0:
4047             would_keep_share[0] = 0
4048hunk ./src/allmydata/storage/expirer.py 200
4049-            self.increment_space("original", s, sharetype)
4050+            self.increment_space("original", sharebytes, diskbytes, sharetype)
4051 
4052         if num_valid_leases_configured == 0:
4053             would_keep_share[1] = 0
4054hunk ./src/allmydata/storage/expirer.py 204
4055-            self.increment_space("configured", s, sharetype)
4056+            self.increment_space("configured", sharebytes, diskbytes, sharetype)
4057             if self.expiration_enabled:
4058                 would_keep_share[2] = 0
4059hunk ./src/allmydata/storage/expirer.py 207
4060-                self.increment_space("actual", s, sharetype)
4061+                self.increment_space("actual", sharebytes, diskbytes, sharetype)
4062 
4063         return would_keep_share
4064 
4065hunk ./src/allmydata/storage/expirer.py 211
4066-    def increment_space(self, a, s, sharetype):
4067-        sharebytes = s.st_size
4068-        try:
4069-            # note that stat(2) says that st_blocks is 512 bytes, and that
4070-            # st_blksize is "optimal file sys I/O ops blocksize", which is
4071-            # independent of the block-size that st_blocks uses.
4072-            diskbytes = s.st_blocks * 512
4073-        except AttributeError:
4074-            # the docs say that st_blocks is only on linux. I also see it on
4075-            # MacOS. But it isn't available on windows.
4076-            diskbytes = sharebytes
4077+    def increment_space(self, a, sharebytes, diskbytes, sharetype):
4078         so_far_sr = self.state["cycle-to-date"]["space-recovered"]
4079         self.increment(so_far_sr, a+"-shares", 1)
4080         self.increment(so_far_sr, a+"-sharebytes", sharebytes)
4081hunk ./src/allmydata/storage/expirer.py 221
4082             self.increment(so_far_sr, a+"-sharebytes-"+sharetype, sharebytes)
4083             self.increment(so_far_sr, a+"-diskbytes-"+sharetype, diskbytes)
4084 
4085-    def increment_bucketspace(self, a, bucket_diskbytes, sharetype):
4086+    def increment_container_space(self, a, container_diskbytes, container_type):
4087         rec = self.state["cycle-to-date"]["space-recovered"]
4088hunk ./src/allmydata/storage/expirer.py 223
4089-        self.increment(rec, a+"-diskbytes", bucket_diskbytes)
4090+        self.increment(rec, a+"-diskbytes", container_diskbytes)
4091         self.increment(rec, a+"-buckets", 1)
4092hunk ./src/allmydata/storage/expirer.py 225
4093-        if sharetype:
4094-            self.increment(rec, a+"-diskbytes-"+sharetype, bucket_diskbytes)
4095-            self.increment(rec, a+"-buckets-"+sharetype, 1)
4096+        if container_type:
4097+            self.increment(rec, a+"-diskbytes-"+container_type, container_diskbytes)
4098+            self.increment(rec, a+"-buckets-"+container_type, 1)
4099 
4100     def increment(self, d, k, delta=1):
4101         if k not in d:
4102hunk ./src/allmydata/storage/expirer.py 281
4103         # copy() needs to become a deepcopy
4104         h["space-recovered"] = s["space-recovered"].copy()
4105 
4106-        history = pickle.load(open(self.historyfile, "rb"))
4107+        history = pickle.loads(self.historyfp.getContent())
4108         history[cycle] = h
4109         while len(history) > 10:
4110             oldcycles = sorted(history.keys())
4111hunk ./src/allmydata/storage/expirer.py 286
4112             del history[oldcycles[0]]
4113-        f = open(self.historyfile, "wb")
4114-        pickle.dump(history, f)
4115-        f.close()
4116+        self.historyfp.setContent(pickle.dumps(history))
4117 
4118     def get_state(self):
4119         """In addition to the crawler state described in
4120hunk ./src/allmydata/storage/expirer.py 355
4121         progress = self.get_progress()
4122 
4123         state = ShareCrawler.get_state(self) # does a shallow copy
4124-        history = pickle.load(open(self.historyfile, "rb"))
4125+        history = pickle.loads(self.historyfp.getContent())
4126         state["history"] = history
4127 
4128         if not progress["cycle-in-progress"]:
4129hunk ./src/allmydata/storage/lease.py 3
4130 import struct, time
4131 
4132+
4133+class NonExistentLeaseError(Exception):
4134+    pass
4135+
4136 class LeaseInfo:
4137     def __init__(self, owner_num=None, renew_secret=None, cancel_secret=None,
4138                  expiration_time=None, nodeid=None):
4139hunk ./src/allmydata/storage/lease.py 21
4140 
4141     def get_expiration_time(self):
4142         return self.expiration_time
4143+
4144     def get_grant_renew_time_time(self):
4145         # hack, based upon fixed 31day expiration period
4146         return self.expiration_time - 31*24*60*60
4147hunk ./src/allmydata/storage/lease.py 25
4148+
4149     def get_age(self):
4150         return time.time() - self.get_grant_renew_time_time()
4151 
4152hunk ./src/allmydata/storage/lease.py 36
4153          self.expiration_time) = struct.unpack(">L32s32sL", data)
4154         self.nodeid = None
4155         return self
4156+
4157     def to_immutable_data(self):
4158         return struct.pack(">L32s32sL",
4159                            self.owner_num,
4160hunk ./src/allmydata/storage/lease.py 49
4161                            int(self.expiration_time),
4162                            self.renew_secret, self.cancel_secret,
4163                            self.nodeid)
4164+
4165     def from_mutable_data(self, data):
4166         (self.owner_num,
4167          self.expiration_time,
4168hunk ./src/allmydata/storage/server.py 1
4169-import os, re, weakref, struct, time
4170+import weakref, time
4171 
4172 from foolscap.api import Referenceable
4173 from twisted.application import service
4174hunk ./src/allmydata/storage/server.py 7
4175 
4176 from zope.interface import implements
4177-from allmydata.interfaces import RIStorageServer, IStatsProducer
4178-from allmydata.util import fileutil, idlib, log, time_format
4179+from allmydata.interfaces import RIStorageServer, IStatsProducer, IStorageBackend
4180+from allmydata.util.assertutil import precondition
4181+from allmydata.util import idlib, log
4182 import allmydata # for __full_version__
4183 
4184hunk ./src/allmydata/storage/server.py 12
4185-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
4186-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
4187+from allmydata.storage.common import si_a2b, si_b2a
4188+[si_a2b]  # hush pyflakes
4189 from allmydata.storage.lease import LeaseInfo
4190hunk ./src/allmydata/storage/server.py 15
4191-from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
4192-     create_mutable_sharefile
4193-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
4194-from allmydata.storage.crawler import BucketCountingCrawler
4195 from allmydata.storage.expirer import LeaseCheckingCrawler
4196hunk ./src/allmydata/storage/server.py 16
4197-
4198-# storage/
4199-# storage/shares/incoming
4200-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
4201-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
4202-# storage/shares/$START/$STORAGEINDEX
4203-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
4204-
4205-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
4206-# base-32 chars).
4207-
4208-# $SHARENUM matches this regex:
4209-NUM_RE=re.compile("^[0-9]+$")
4210-
4211+from allmydata.storage.crawler import BucketCountingCrawler
4212 
4213 
4214 class StorageServer(service.MultiService, Referenceable):
4215hunk ./src/allmydata/storage/server.py 21
4216     implements(RIStorageServer, IStatsProducer)
4217+
4218     name = 'storage'
4219     LeaseCheckerClass = LeaseCheckingCrawler
4220hunk ./src/allmydata/storage/server.py 24
4221+    DEFAULT_EXPIRATION_POLICY = {
4222+        'enabled': False,
4223+        'mode': 'age',
4224+        'override_lease_duration': None,
4225+        'cutoff_date': None,
4226+        'sharetypes': ('mutable', 'immutable'),
4227+    }
4228 
4229hunk ./src/allmydata/storage/server.py 32
4230-    def __init__(self, storedir, nodeid, reserved_space=0,
4231-                 discard_storage=False, readonly_storage=False,
4232+    def __init__(self, serverid, backend, statedir,
4233                  stats_provider=None,
4234hunk ./src/allmydata/storage/server.py 34
4235-                 expiration_enabled=False,
4236-                 expiration_mode="age",
4237-                 expiration_override_lease_duration=None,
4238-                 expiration_cutoff_date=None,
4239-                 expiration_sharetypes=("mutable", "immutable")):
4240+                 expiration_policy=None):
4241         service.MultiService.__init__(self)
4242hunk ./src/allmydata/storage/server.py 36
4243-        assert isinstance(nodeid, str)
4244-        assert len(nodeid) == 20
4245-        self.my_nodeid = nodeid
4246-        self.storedir = storedir
4247-        sharedir = os.path.join(storedir, "shares")
4248-        fileutil.make_dirs(sharedir)
4249-        self.sharedir = sharedir
4250-        # we don't actually create the corruption-advisory dir until necessary
4251-        self.corruption_advisory_dir = os.path.join(storedir,
4252-                                                    "corruption-advisories")
4253-        self.reserved_space = int(reserved_space)
4254-        self.no_storage = discard_storage
4255-        self.readonly_storage = readonly_storage
4256+        precondition(IStorageBackend.providedBy(backend), backend)
4257+        precondition(isinstance(serverid, str), serverid)
4258+        precondition(len(serverid) == 20, serverid)
4259+
4260+        self._serverid = serverid
4261         self.stats_provider = stats_provider
4262         if self.stats_provider:
4263             self.stats_provider.register_producer(self)
4264hunk ./src/allmydata/storage/server.py 44
4265-        self.incomingdir = os.path.join(sharedir, 'incoming')
4266-        self._clean_incomplete()
4267-        fileutil.make_dirs(self.incomingdir)
4268         self._active_writers = weakref.WeakKeyDictionary()
4269hunk ./src/allmydata/storage/server.py 45
4270+        self.backend = backend
4271+        self.backend.setServiceParent(self)
4272+        self._statedir = statedir
4273         log.msg("StorageServer created", facility="tahoe.storage")
4274 
4275hunk ./src/allmydata/storage/server.py 50
4276-        if reserved_space:
4277-            if self.get_available_space() is None:
4278-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
4279-                        umin="0wZ27w", level=log.UNUSUAL)
4280-
4281         self.latencies = {"allocate": [], # immutable
4282                           "write": [],
4283                           "close": [],
4284hunk ./src/allmydata/storage/server.py 61
4285                           "renew": [],
4286                           "cancel": [],
4287                           }
4288-        self.add_bucket_counter()
4289-
4290-        statefile = os.path.join(self.storedir, "lease_checker.state")
4291-        historyfile = os.path.join(self.storedir, "lease_checker.history")
4292-        klass = self.LeaseCheckerClass
4293-        self.lease_checker = klass(self, statefile, historyfile,
4294-                                   expiration_enabled, expiration_mode,
4295-                                   expiration_override_lease_duration,
4296-                                   expiration_cutoff_date,
4297-                                   expiration_sharetypes)
4298-        self.lease_checker.setServiceParent(self)
4299+        self._setup_bucket_counter()
4300+        self._setup_lease_checker(expiration_policy or self.DEFAULT_EXPIRATION_POLICY)
4301 
4302     def __repr__(self):
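
Under the new constructor, a caller that wants non-default expiration
behaviour passes its own policy dict; a sketch (serverid, backend, and
statedir are assumed to be a 20-byte id, an IStorageBackend provider, and a
FilePath respectively, as required above):

    policy = dict(StorageServer.DEFAULT_EXPIRATION_POLICY)
    policy['enabled'] = True
    ss = StorageServer(serverid, backend, statedir, expiration_policy=policy)
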
4303hunk ./src/allmydata/storage/server.py 65
4304-        return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
4305+        return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self._serverid),)
4306 
4307hunk ./src/allmydata/storage/server.py 67
4308-    def add_bucket_counter(self):
4309-        statefile = os.path.join(self.storedir, "bucket_counter.state")
4310-        self.bucket_counter = BucketCountingCrawler(self, statefile)
4311+    def _setup_bucket_counter(self):
4312+        statefp = self._statedir.child("bucket_counter.state")
4313+        self.bucket_counter = BucketCountingCrawler(self.backend, statefp)
4314         self.bucket_counter.setServiceParent(self)
4315 
4316hunk ./src/allmydata/storage/server.py 72
4317+    def _setup_lease_checker(self, expiration_policy):
4318+        statefp = self._statedir.child("lease_checker.state")
4319+        historyfp = self._statedir.child("lease_checker.history")
4320+        self.lease_checker = self.LeaseCheckerClass(self.backend, statefp, historyfp, expiration_policy)
4321+        self.lease_checker.setServiceParent(self)
4322+
4323     def count(self, name, delta=1):
4324         if self.stats_provider:
4325             self.stats_provider.count("storage_server." + name, delta)
4326hunk ./src/allmydata/storage/server.py 92
4327         """Return a dict, indexed by category, that contains a dict of
4328         latency numbers for each category. If there are sufficient samples
4329         for unambiguous interpretation, each dict will contain the
4330-        following keys: mean, 01_0_percentile, 10_0_percentile,
4331+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
4332         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
4333         99_0_percentile, 99_9_percentile.  If there are insufficient
4334         samples for a given percentile to be interpreted unambiguously
4335hunk ./src/allmydata/storage/server.py 114
4336             else:
4337                 stats["mean"] = None
4338 
4339-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
4340-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
4341-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
4342+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
4343+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
4344+                             (0.01, "01_0_percentile", 100), (0.99, "99_0_percentile", 100), \
4345                              (0.999, "99_9_percentile", 1000)]
4346 
4347             for percentile, percentilestring, minnumtoobserve in orderstatlist:
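
The third column of orderstatlist is the minimum number of observations
needed before that percentile can be interpreted unambiguously (e.g. 1000
samples for the 99.9th percentile; see the docstring above). A simplified
sketch of that rule, not the exact code in this method:

    def percentile_or_none(samples, quantile, minnumtoobserve):
        # Report an order statistic only when the sample size pins it down.
        if len(samples) < minnumtoobserve:
            return None
        ordered = sorted(samples)
        return ordered[int(quantile * len(ordered))]

    # e.g. percentile_or_none(latencies, 0.999, 1000)
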
4348hunk ./src/allmydata/storage/server.py 133
4349             kwargs["facility"] = "tahoe.storage"
4350         return log.msg(*args, **kwargs)
4351 
4352-    def _clean_incomplete(self):
4353-        fileutil.rm_dir(self.incomingdir)
4354+    def get_serverid(self):
4355+        return self._serverid
4356 
4357     def get_stats(self):
4358         # remember: RIStatsProvider requires that our return dict
4359hunk ./src/allmydata/storage/server.py 138
4360-        # contains numeric values.
4361+        # contains numeric or None values.
4362         stats = { 'storage_server.allocated': self.allocated_size(), }
4363hunk ./src/allmydata/storage/server.py 140
4364-        stats['storage_server.reserved_space'] = self.reserved_space
4365         for category,ld in self.get_latencies().items():
4366             for name,v in ld.items():
4367                 stats['storage_server.latencies.%s.%s' % (category, name)] = v
4368hunk ./src/allmydata/storage/server.py 144
4369 
4370-        try:
4371-            disk = fileutil.get_disk_stats(self.sharedir, self.reserved_space)
4372-            writeable = disk['avail'] > 0
4373-
4374-            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
4375-            stats['storage_server.disk_total'] = disk['total']
4376-            stats['storage_server.disk_used'] = disk['used']
4377-            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
4378-            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
4379-            stats['storage_server.disk_avail'] = disk['avail']
4380-        except AttributeError:
4381-            writeable = True
4382-        except EnvironmentError:
4383-            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
4384-            writeable = False
4385-
4386-        if self.readonly_storage:
4387-            stats['storage_server.disk_avail'] = 0
4388-            writeable = False
4389+        self.backend.fill_in_space_stats(stats)
4390 
4391hunk ./src/allmydata/storage/server.py 146
4392-        stats['storage_server.accepting_immutable_shares'] = int(writeable)
4393         s = self.bucket_counter.get_state()
4394         bucket_count = s.get("last-complete-bucket-count")
4395         if bucket_count:
4396hunk ./src/allmydata/storage/server.py 153
4397         return stats
4398 
4399     def get_available_space(self):
4400-        """Returns available space for share storage in bytes, or None if no
4401-        API to get this information is available."""
4402-
4403-        if self.readonly_storage:
4404-            return 0
4405-        return fileutil.get_available_space(self.sharedir, self.reserved_space)
4406+        return self.backend.get_available_space()
4407 
4408     def allocated_size(self):
4409         space = 0
4410hunk ./src/allmydata/storage/server.py 162
4411         return space
4412 
4413     def remote_get_version(self):
4414-        remaining_space = self.get_available_space()
4415+        remaining_space = self.backend.get_available_space()
4416         if remaining_space is None:
4417             # We're on a platform that has no API to get disk stats.
4418             remaining_space = 2**64
4419hunk ./src/allmydata/storage/server.py 178
4420                     }
4421         return version
4422 
4423-    def remote_allocate_buckets(self, storage_index,
4424+    def remote_allocate_buckets(self, storageindex,
4425                                 renew_secret, cancel_secret,
4426                                 sharenums, allocated_size,
4427                                 canary, owner_num=0):
4428hunk ./src/allmydata/storage/server.py 182
4429+        # cancel_secret is no longer used.
4430         # owner_num is not for clients to set, but rather it should be
4431hunk ./src/allmydata/storage/server.py 184
4432-        # curried into the PersonalStorageServer instance that is dedicated
4433-        # to a particular owner.
4434+        # curried into a StorageServer instance dedicated to a particular
4435+        # owner.
4436         start = time.time()
4437         self.count("allocate")
4438hunk ./src/allmydata/storage/server.py 188
4439-        alreadygot = set()
4440         bucketwriters = {} # k: shnum, v: BucketWriter
4441hunk ./src/allmydata/storage/server.py 189
4442-        si_dir = storage_index_to_dir(storage_index)
4443-        si_s = si_b2a(storage_index)
4444 
4445hunk ./src/allmydata/storage/server.py 190
4446+        si_s = si_b2a(storageindex)
4447         log.msg("storage: allocate_buckets %s" % si_s)
4448 
4449hunk ./src/allmydata/storage/server.py 193
4450-        # in this implementation, the lease information (including secrets)
4451-        # goes into the share files themselves. It could also be put into a
4452-        # separate database. Note that the lease should not be added until
4453-        # the BucketWriter has been closed.
4454+        # Note that the lease should not be added until the BucketWriter
4455+        # has been closed.
4456         expire_time = time.time() + 31*24*60*60
4457hunk ./src/allmydata/storage/server.py 196
4458-        lease_info = LeaseInfo(owner_num,
4459-                               renew_secret, cancel_secret,
4460-                               expire_time, self.my_nodeid)
4461+        lease_info = LeaseInfo(owner_num, renew_secret,
4462+                               expiration_time=expire_time, nodeid=self._serverid)
4463 
4464         max_space_per_bucket = allocated_size
4465 
4466hunk ./src/allmydata/storage/server.py 201
4467-        remaining_space = self.get_available_space()
4468+        remaining_space = self.backend.get_available_space()
4469         limited = remaining_space is not None
4470         if limited:
4471hunk ./src/allmydata/storage/server.py 204
4472-            # this is a bit conservative, since some of this allocated_size()
4473-            # has already been written to disk, where it will show up in
4474+            # This is a bit conservative, since some of this allocated_size()
4475+            # has already been written to the backend, where it will show up in
4476             # get_available_space.
4477             remaining_space -= self.allocated_size()
4478hunk ./src/allmydata/storage/server.py 208
4479-        # self.readonly_storage causes remaining_space <= 0
4480+            # If the backend is read-only, remaining_space will be <= 0.
4481+
4482+        shareset = self.backend.get_shareset(storageindex)
4483 
4484hunk ./src/allmydata/storage/server.py 212
4485-        # fill alreadygot with all shares that we have, not just the ones
4486+        # Fill alreadygot with all shares that we have, not just the ones
4487         # they asked about: this will save them a lot of work. Add or update
4488         # leases for all of them: if they want us to hold shares for this
4489hunk ./src/allmydata/storage/server.py 215
4490-        # file, they'll want us to hold leases for this file.
4491-        for (shnum, fn) in self._get_bucket_shares(storage_index):
4492-            alreadygot.add(shnum)
4493-            sf = ShareFile(fn)
4494-            sf.add_or_renew_lease(lease_info)
4495+        # file, they'll want us to hold leases for all the shares of it.
4496+        #
4497+        # XXX should we be making the assumption here that lease info is
4498+        # duplicated in all shares?
4499+        alreadygot = set()
4500+        for share in shareset.get_shares():
4501+            share.add_or_renew_lease(lease_info)
4502+            alreadygot.add(share.get_shnum())
4503 
4504hunk ./src/allmydata/storage/server.py 224
4505-        for shnum in sharenums:
4506-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
4507-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
4508-            if os.path.exists(finalhome):
4509-                # great! we already have it. easy.
4510-                pass
4511-            elif os.path.exists(incominghome):
4512+        for shnum in sharenums - alreadygot:
4513+            if shareset.has_incoming(shnum):
4514                 # Note that we don't create BucketWriters for shnums that
4515                 # have a partial share (in incoming/), so if a second upload
4516                 # occurs while the first is still in progress, the second
4517hunk ./src/allmydata/storage/server.py 232
4518                 # uploader will use different storage servers.
4519                 pass
4520             elif (not limited) or (remaining_space >= max_space_per_bucket):
4521-                # ok! we need to create the new share file.
4522-                bw = BucketWriter(self, incominghome, finalhome,
4523-                                  max_space_per_bucket, lease_info, canary)
4524-                if self.no_storage:
4525-                    bw.throw_out_all_data = True
4526+                bw = shareset.make_bucket_writer(self, shnum, max_space_per_bucket,
4527+                                                 lease_info, canary)
4528                 bucketwriters[shnum] = bw
4529                 self._active_writers[bw] = 1
4530                 if limited:
4531hunk ./src/allmydata/storage/server.py 239
4532                     remaining_space -= max_space_per_bucket
4533             else:
4534-                # bummer! not enough space to accept this bucket
4535+                # Bummer! Not enough space to accept this share.
4536                 pass
4537 
4538hunk ./src/allmydata/storage/server.py 242
4539-        if bucketwriters:
4540-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
4541-
4542         self.add_latency("allocate", time.time() - start)
4543         return alreadygot, bucketwriters
4544 
4545hunk ./src/allmydata/storage/server.py 245
4546-    def _iter_share_files(self, storage_index):
4547-        for shnum, filename in self._get_bucket_shares(storage_index):
4548-            f = open(filename, 'rb')
4549-            header = f.read(32)
4550-            f.close()
4551-            if header[:32] == MutableShareFile.MAGIC:
4552-                sf = MutableShareFile(filename, self)
4553-                # note: if the share has been migrated, the renew_lease()
4554-                # call will throw an exception, with information to help the
4555-                # client update the lease.
4556-            elif header[:4] == struct.pack(">L", 1):
4557-                sf = ShareFile(filename)
4558-            else:
4559-                continue # non-sharefile
4560-            yield sf
4561-
4562-    def remote_add_lease(self, storage_index, renew_secret, cancel_secret,
4563+    def remote_add_lease(self, storageindex, renew_secret, cancel_secret,
4564                          owner_num=1):
4565hunk ./src/allmydata/storage/server.py 247
4566+        # cancel_secret is no longer used.
4567         start = time.time()
4568         self.count("add-lease")
4569         new_expire_time = time.time() + 31*24*60*60
4570hunk ./src/allmydata/storage/server.py 251
4571-        lease_info = LeaseInfo(owner_num,
4572-                               renew_secret, cancel_secret,
4573-                               new_expire_time, self.my_nodeid)
4574-        for sf in self._iter_share_files(storage_index):
4575-            sf.add_or_renew_lease(lease_info)
4576-        self.add_latency("add-lease", time.time() - start)
4577-        return None
4578+        lease_info = LeaseInfo(owner_num, renew_secret,
4579+                               expiration_time=new_expire_time, nodeid=self._serverid)
4580 
4581hunk ./src/allmydata/storage/server.py 254
4582-    def remote_renew_lease(self, storage_index, renew_secret):
4583+        try:
4584+            self.backend.add_or_renew_lease(lease_info)
4585+        finally:
4586+            self.add_latency("add-lease", time.time() - start)
4587+
4588+    def remote_renew_lease(self, storageindex, renew_secret):
4589         start = time.time()
4590         self.count("renew")
4591hunk ./src/allmydata/storage/server.py 262
4592-        new_expire_time = time.time() + 31*24*60*60
4593-        found_buckets = False
4594-        for sf in self._iter_share_files(storage_index):
4595-            found_buckets = True
4596-            sf.renew_lease(renew_secret, new_expire_time)
4597-        self.add_latency("renew", time.time() - start)
4598-        if not found_buckets:
4599-            raise IndexError("no such lease to renew")
4600+
4601+        try:
4602+            shareset = self.backend.get_shareset(storageindex)
4603+            new_expiration_time = start + 31*24*60*60   # one month from now
4604+            shareset.renew_lease(renew_secret, new_expiration_time)
4605+        finally:
4606+            self.add_latency("renew", time.time() - start)
4607 
4608     def bucket_writer_closed(self, bw, consumed_size):
4609         if self.stats_provider:
4610hunk ./src/allmydata/storage/server.py 275
4611             self.stats_provider.count('storage_server.bytes_added', consumed_size)
4612         del self._active_writers[bw]
4613 
4614-    def _get_bucket_shares(self, storage_index):
4615-        """Return a list of (shnum, pathname) tuples for files that hold
4616-        shares for this storage_index. In each tuple, 'shnum' will always be
4617-        the integer form of the last component of 'pathname'."""
4618-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4619-        try:
4620-            for f in os.listdir(storagedir):
4621-                if NUM_RE.match(f):
4622-                    filename = os.path.join(storagedir, f)
4623-                    yield (int(f), filename)
4624-        except OSError:
4625-            # Commonly caused by there being no buckets at all.
4626-            pass
4627-
4628-    def remote_get_buckets(self, storage_index):
4629+    def remote_get_buckets(self, storageindex):
4630         start = time.time()
4631         self.count("get")
4632hunk ./src/allmydata/storage/server.py 278
4633-        si_s = si_b2a(storage_index)
4634+        si_s = si_b2a(storageindex)
4635         log.msg("storage: get_buckets %s" % si_s)
4636         bucketreaders = {} # k: sharenum, v: BucketReader
4637hunk ./src/allmydata/storage/server.py 281
4638-        for shnum, filename in self._get_bucket_shares(storage_index):
4639-            bucketreaders[shnum] = BucketReader(self, filename,
4640-                                                storage_index, shnum)
4641-        self.add_latency("get", time.time() - start)
4642-        return bucketreaders
4643 
4644hunk ./src/allmydata/storage/server.py 282
4645-    def get_leases(self, storage_index):
4646-        """Provide an iterator that yields all of the leases attached to this
4647-        bucket. Each lease is returned as a LeaseInfo instance.
4648+        try:
4649+            shareset = self.backend.get_shareset(storageindex)
4650+            for share in shareset.get_shares():
4651+                bucketreaders[share.get_shnum()] = shareset.make_bucket_reader(self, share)
4652+            return bucketreaders
4653+        finally:
4654+            self.add_latency("get", time.time() - start)
4655 
4656hunk ./src/allmydata/storage/server.py 290
4657-        This method is not for client use.
4658+    def get_leases(self, storageindex):
4659         """
4660hunk ./src/allmydata/storage/server.py 292
4661+        Provide an iterator that yields all of the leases attached to this
4662+        bucket. Each lease is returned as a LeaseInfo instance.
4663 
4664hunk ./src/allmydata/storage/server.py 295
4665-        # since all shares get the same lease data, we just grab the leases
4666-        # from the first share
4667-        try:
4668-            shnum, filename = self._get_bucket_shares(storage_index).next()
4669-            sf = ShareFile(filename)
4670-            return sf.get_leases()
4671-        except StopIteration:
4672-            return iter([])
4673+        This method is not for client use. XXX do we need it at all?
4674+        """
4675+        return self.backend.get_shareset(storageindex).get_leases()
4676 
4677hunk ./src/allmydata/storage/server.py 299
4678-    def remote_slot_testv_and_readv_and_writev(self, storage_index,
4679+    def remote_slot_testv_and_readv_and_writev(self, storageindex,
4680                                                secrets,
4681                                                test_and_write_vectors,
4682                                                read_vector):
4683hunk ./src/allmydata/storage/server.py 305
4684         start = time.time()
4685         self.count("writev")
4686-        si_s = si_b2a(storage_index)
4687+        si_s = si_b2a(storageindex)
4688         log.msg("storage: slot_writev %s" % si_s)
4689hunk ./src/allmydata/storage/server.py 307
4690-        si_dir = storage_index_to_dir(storage_index)
4691-        (write_enabler, renew_secret, cancel_secret) = secrets
4692-        # shares exist if there is a file for them
4693-        bucketdir = os.path.join(self.sharedir, si_dir)
4694-        shares = {}
4695-        if os.path.isdir(bucketdir):
4696-            for sharenum_s in os.listdir(bucketdir):
4697-                try:
4698-                    sharenum = int(sharenum_s)
4699-                except ValueError:
4700-                    continue
4701-                filename = os.path.join(bucketdir, sharenum_s)
4702-                msf = MutableShareFile(filename, self)
4703-                msf.check_write_enabler(write_enabler, si_s)
4704-                shares[sharenum] = msf
4705-        # write_enabler is good for all existing shares.
4706-
4707-        # Now evaluate test vectors.
4708-        testv_is_good = True
4709-        for sharenum in test_and_write_vectors:
4710-            (testv, datav, new_length) = test_and_write_vectors[sharenum]
4711-            if sharenum in shares:
4712-                if not shares[sharenum].check_testv(testv):
4713-                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
4714-                    testv_is_good = False
4715-                    break
4716-            else:
4717-                # compare the vectors against an empty share, in which all
4718-                # reads return empty strings.
4719-                if not EmptyShare().check_testv(testv):
4720-                    self.log("testv failed (empty): [%d] %r" % (sharenum,
4721-                                                                testv))
4722-                    testv_is_good = False
4723-                    break
4724-
4725-        # now gather the read vectors, before we do any writes
4726-        read_data = {}
4727-        for sharenum, share in shares.items():
4728-            read_data[sharenum] = share.readv(read_vector)
4729-
4730-        ownerid = 1 # TODO
4731-        expire_time = time.time() + 31*24*60*60   # one month
4732-        lease_info = LeaseInfo(ownerid,
4733-                               renew_secret, cancel_secret,
4734-                               expire_time, self.my_nodeid)
4735-
4736-        if testv_is_good:
4737-            # now apply the write vectors
4738-            for sharenum in test_and_write_vectors:
4739-                (testv, datav, new_length) = test_and_write_vectors[sharenum]
4740-                if new_length == 0:
4741-                    if sharenum in shares:
4742-                        shares[sharenum].unlink()
4743-                else:
4744-                    if sharenum not in shares:
4745-                        # allocate a new share
4746-                        allocated_size = 2000 # arbitrary, really
4747-                        share = self._allocate_slot_share(bucketdir, secrets,
4748-                                                          sharenum,
4749-                                                          allocated_size,
4750-                                                          owner_num=0)
4751-                        shares[sharenum] = share
4752-                    shares[sharenum].writev(datav, new_length)
4753-                    # and update the lease
4754-                    shares[sharenum].add_or_renew_lease(lease_info)
4755-
4756-            if new_length == 0:
4757-                # delete empty bucket directories
4758-                if not os.listdir(bucketdir):
4759-                    os.rmdir(bucketdir)
4760 
4761hunk ./src/allmydata/storage/server.py 308
4762+        try:
4763+            shareset = self.backend.get_shareset(storageindex)
4764+            expiration_time = start + 31*24*60*60   # one month from now
4765+            return shareset.testv_and_readv_and_writev(self, secrets, test_and_write_vectors,
4766+                                                       read_vector, expiration_time)
4767+        finally:
4768+            self.add_latency("writev", time.time() - start)
4769 
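
The test-then-read-then-write semantics that shareset.testv_and_readv_and_writev
must preserve (previously inlined in this method) can be summarized by this
sketch, simplified to an in-memory dict of share objects supporting
check_testv/readv/writev, with allocation of new shares and lease renewal
omitted:

    def testv_and_readv_and_writev(shares, test_and_write_vectors, read_vector):
        # Phase 1: every test vector must pass; an absent share is compared
        # against an empty one, whose reads all return empty strings.
        testv_is_good = True
        for shnum, (testv, datav, new_length) in test_and_write_vectors.items():
            share = shares.get(shnum, EmptyShare())
            if not share.check_testv(testv):
                testv_is_good = False
                break

        # Phase 2: gather the read vectors before any writes are applied.
        read_data = dict([(shnum, share.readv(read_vector))
                          for shnum, share in shares.items()])

        # Phase 3: apply the write vectors only if all tests passed.
        if testv_is_good:
            for shnum, (testv, datav, new_length) in test_and_write_vectors.items():
                if shnum in shares:
                    shares[shnum].writev(datav, new_length)
        return (testv_is_good, read_data)
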
4770hunk ./src/allmydata/storage/server.py 316
4771-        # all done
4772-        self.add_latency("writev", time.time() - start)
4773-        return (testv_is_good, read_data)
4774-
4775-    def _allocate_slot_share(self, bucketdir, secrets, sharenum,
4776-                             allocated_size, owner_num=0):
4777-        (write_enabler, renew_secret, cancel_secret) = secrets
4778-        my_nodeid = self.my_nodeid
4779-        fileutil.make_dirs(bucketdir)
4780-        filename = os.path.join(bucketdir, "%d" % sharenum)
4781-        share = create_mutable_sharefile(filename, my_nodeid, write_enabler,
4782-                                         self)
4783-        return share
4784-
4785-    def remote_slot_readv(self, storage_index, shares, readv):
4786+    def remote_slot_readv(self, storageindex, shares, readv):
4787         start = time.time()
4788         self.count("readv")
4789hunk ./src/allmydata/storage/server.py 319
4790-        si_s = si_b2a(storage_index)
4791-        lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
4792-                     facility="tahoe.storage", level=log.OPERATIONAL)
4793-        si_dir = storage_index_to_dir(storage_index)
4794-        # shares exist if there is a file for them
4795-        bucketdir = os.path.join(self.sharedir, si_dir)
4796-        if not os.path.isdir(bucketdir):
4797+        si_s = si_b2a(storageindex)
4798+        log.msg("storage: slot_readv %s %s" % (si_s, shares),
4799+                facility="tahoe.storage", level=log.OPERATIONAL)
4800+
4801+        try:
4802+            shareset = self.backend.get_shareset(storageindex)
4803+            return shareset.readv(self, shares, readv)
4804+        finally:
4805             self.add_latency("readv", time.time() - start)
4806hunk ./src/allmydata/storage/server.py 328
4807-            return {}
4808-        datavs = {}
4809-        for sharenum_s in os.listdir(bucketdir):
4810-            try:
4811-                sharenum = int(sharenum_s)
4812-            except ValueError:
4813-                continue
4814-            if sharenum in shares or not shares:
4815-                filename = os.path.join(bucketdir, sharenum_s)
4816-                msf = MutableShareFile(filename, self)
4817-                datavs[sharenum] = msf.readv(readv)
4818-        log.msg("returning shares %s" % (datavs.keys(),),
4819-                facility="tahoe.storage", level=log.NOISY, parent=lp)
4820-        self.add_latency("readv", time.time() - start)
4821-        return datavs
4822 
4823hunk ./src/allmydata/storage/server.py 329
4824-    def remote_advise_corrupt_share(self, share_type, storage_index, shnum,
4825-                                    reason):
4826-        fileutil.make_dirs(self.corruption_advisory_dir)
4827-        now = time_format.iso_utc(sep="T")
4828-        si_s = si_b2a(storage_index)
4829-        # windows can't handle colons in the filename
4830-        fn = os.path.join(self.corruption_advisory_dir,
4831-                          "%s--%s-%d" % (now, si_s, shnum)).replace(":","")
4832-        f = open(fn, "w")
4833-        f.write("report: Share Corruption\n")
4834-        f.write("type: %s\n" % share_type)
4835-        f.write("storage_index: %s\n" % si_s)
4836-        f.write("share_number: %d\n" % shnum)
4837-        f.write("\n")
4838-        f.write(reason)
4839-        f.write("\n")
4840-        f.close()
4841-        log.msg(format=("client claims corruption in (%(share_type)s) " +
4842-                        "%(si)s-%(shnum)d: %(reason)s"),
4843-                share_type=share_type, si=si_s, shnum=shnum, reason=reason,
4844-                level=log.SCARY, umid="SGx2fA")
4845-        return None
4846+    def remote_advise_corrupt_share(self, share_type, storageindex, shnum, reason):
4847+        self.backend.advise_corrupt_share(share_type, storageindex, shnum, reason)
4848hunk ./src/allmydata/test/common.py 20
4849 from allmydata.mutable.common import CorruptShareError
4850 from allmydata.mutable.layout import unpack_header
4851 from allmydata.mutable.publish import MutableData
4852-from allmydata.storage.mutable import MutableShareFile
4853+from allmydata.storage.backends.disk.mutable import MutableDiskShare
4854 from allmydata.util import hashutil, log, fileutil, pollmixin
4855 from allmydata.util.assertutil import precondition
4856 from allmydata.util.consumer import download_to_data
4857hunk ./src/allmydata/test/common.py 1297
4858 
4859 def _corrupt_mutable_share_data(data, debug=False):
4860     prefix = data[:32]
4861-    assert prefix == MutableShareFile.MAGIC, "This function is designed to corrupt mutable shares of v1, and the magic number doesn't look right: %r vs %r" % (prefix, MutableShareFile.MAGIC)
4862-    data_offset = MutableShareFile.DATA_OFFSET
4863+    assert prefix == MutableDiskShare.MAGIC, "This function is designed to corrupt mutable shares of v1, and the magic number doesn't look right: %r vs %r" % (prefix, MutableDiskShare.MAGIC)
4864+    data_offset = MutableDiskShare.DATA_OFFSET
4865     sharetype = data[data_offset:data_offset+1]
4866     assert sharetype == "\x00", "non-SDMF mutable shares not supported"
4867     (version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize,
4868hunk ./src/allmydata/test/no_network.py 21
4869 from twisted.application import service
4870 from twisted.internet import defer, reactor
4871 from twisted.python.failure import Failure
4872+from twisted.python.filepath import FilePath
4873 from foolscap.api import Referenceable, fireEventually, RemoteException
4874 from base64 import b32encode
4875hunk ./src/allmydata/test/no_network.py 24
4876+
4877 from allmydata import uri as tahoe_uri
4878 from allmydata.client import Client
4879hunk ./src/allmydata/test/no_network.py 27
4880-from allmydata.storage.server import StorageServer, storage_index_to_dir
4881+from allmydata.storage.server import StorageServer
4882+from allmydata.storage.backends.disk.disk_backend import DiskBackend
4883 from allmydata.util import fileutil, idlib, hashutil
4884 from allmydata.util.hashutil import sha1
4885 from allmydata.test.common_web import HTTPClientGETFactory
4886hunk ./src/allmydata/test/no_network.py 155
4887             seed = server.get_permutation_seed()
4888             return sha1(peer_selection_index + seed).digest()
4889         return sorted(self.get_connected_servers(), key=_permuted)
4890+
4891     def get_connected_servers(self):
4892         return self.client._servers
4893hunk ./src/allmydata/test/no_network.py 158
4894+
4895     def get_nickname_for_serverid(self, serverid):
4896         return None
4897 
4898hunk ./src/allmydata/test/no_network.py 162
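+    # The no-network broker only models connected servers, so the 'known'
+    # set is the same as the connected set.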
4899+    def get_known_servers(self):
4900+        return self.get_connected_servers()
4901+
4902+    def get_all_serverids(self):
4903+        return self.client.get_all_serverids()
4904+
4905+
4906 class NoNetworkClient(Client):
4907     def create_tub(self):
4908         pass
4909hunk ./src/allmydata/test/no_network.py 262
4910 
4911     def make_server(self, i, readonly=False):
4912         serverid = hashutil.tagged_hash("serverid", str(i))[:20]
4913-        serverdir = os.path.join(self.basedir, "servers",
4914-                                 idlib.shortnodeid_b2a(serverid), "storage")
4915-        fileutil.make_dirs(serverdir)
4916-        ss = StorageServer(serverdir, serverid, stats_provider=SimpleStats(),
4917-                           readonly_storage=readonly)
4918+        storagedir = FilePath(self.basedir).child("servers").child(idlib.shortnodeid_b2a(serverid)).child("storage")
4919+
4920+        # The backend will make the storage directory and any necessary parents.
4921+        backend = DiskBackend(storagedir, readonly=readonly)
4922+        ss = StorageServer(serverid, backend, storagedir, stats_provider=SimpleStats())
4923         ss._no_network_server_number = i
4924         return ss
4925 
4926hunk ./src/allmydata/test/no_network.py 276
4927         middleman = service.MultiService()
4928         middleman.setServiceParent(self)
4929         ss.setServiceParent(middleman)
4930-        serverid = ss.my_nodeid
4931+        serverid = ss.get_serverid()
4932         self.servers_by_number[i] = ss
4933         wrapper = wrap_storage_server(ss)
4934         self.wrappers_by_id[serverid] = wrapper
4935hunk ./src/allmydata/test/no_network.py 295
4936         # it's enough to remove the server from c._servers (we don't actually
4937         # have to detach and stopService it)
4938         for i,ss in self.servers_by_number.items():
4939-            if ss.my_nodeid == serverid:
4940+            if ss.get_serverid() == serverid:
4941                 del self.servers_by_number[i]
4942                 break
4943         del self.wrappers_by_id[serverid]
4944hunk ./src/allmydata/test/no_network.py 345
4945     def get_clientdir(self, i=0):
4946         return self.g.clients[i].basedir
4947 
4948+    def get_server(self, i):
4949+        return self.g.servers_by_number[i]
4950+
4951     def get_serverdir(self, i):
4952hunk ./src/allmydata/test/no_network.py 349
4953-        return self.g.servers_by_number[i].storedir
4954+        return self.g.servers_by_number[i].backend.storedir
4955+
4956+    def remove_server(self, i):
4957+        self.g.remove_server(self.g.servers_by_number[i].get_serverid())
4958 
4959     def iterate_servers(self):
4960         for i in sorted(self.g.servers_by_number.keys()):
4961hunk ./src/allmydata/test/no_network.py 357
4962             ss = self.g.servers_by_number[i]
4963-            yield (i, ss, ss.storedir)
4964+            yield (i, ss, ss.backend.storedir)
4965 
4966     def find_uri_shares(self, uri):
4967         si = tahoe_uri.from_string(uri).get_storage_index()
4968hunk ./src/allmydata/test/no_network.py 361
4969-        prefixdir = storage_index_to_dir(si)
4970         shares = []
4971         for i,ss in self.g.servers_by_number.items():
4972hunk ./src/allmydata/test/no_network.py 363
4973-            serverid = ss.my_nodeid
4974-            basedir = os.path.join(ss.sharedir, prefixdir)
4975-            if not os.path.exists(basedir):
4976-                continue
4977-            for f in os.listdir(basedir):
4978-                try:
4979-                    shnum = int(f)
4980-                    shares.append((shnum, serverid, os.path.join(basedir, f)))
4981-                except ValueError:
4982-                    pass
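+            # Enumerate shares via the backend's shareset abstraction instead
+            # of walking the on-disk layout directly.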
4983+            for share in ss.backend.get_shareset(si).get_shares():
4984+                shares.append((share.get_shnum(), ss.get_serverid(), share._home))
4985         return sorted(shares)
4986 
4987hunk ./src/allmydata/test/no_network.py 367
4988+    def count_leases(self, uri):
4989+        """Return (filename, leasecount) pairs in arbitrary order."""
4990+        si = tahoe_uri.from_string(uri).get_storage_index()
4991+        lease_counts = []
4992+        for i,ss in self.g.servers_by_number.items():
4993+            for share in ss.backend.get_shareset(si).get_shares():
4994+                num_leases = len(list(share.get_leases()))
4995+                lease_counts.append( (share._home.path, num_leases) )
4996+        return lease_counts
4997+
4998     def copy_shares(self, uri):
4999         shares = {}
5000hunk ./src/allmydata/test/no_network.py 379
5001-        for (shnum, serverid, sharefile) in self.find_uri_shares(uri):
5002-            shares[sharefile] = open(sharefile, "rb").read()
5003+        for (shnum, serverid, sharefp) in self.find_uri_shares(uri):
5004+            shares[sharefp.path] = sharefp.getContent()
5005         return shares
5006 
5007hunk ./src/allmydata/test/no_network.py 383
5008+    def copy_share(self, from_share, uri, to_server):
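+        # Copy a share file into to_server's shareset directory for this SI.
+        # (This reaches into the disk backend's private _sharehomedir, so it
+        # is only expected to work with the disk backend.)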
5009+        si = tahoe_uri.from_string(uri).get_storage_index()
5010+        (i_shnum, i_serverid, i_sharefp) = from_share
5011+        shares_dir = to_server.backend.get_shareset(si)._sharehomedir
+        fileutil.fp_make_dirs(shares_dir)  # ensure the shareset directory exists
5012+        i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
5013+
5014     def restore_all_shares(self, shares):
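+        # 'shares' maps share file path -> contents, as returned by
+        # copy_shares() above.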
5015hunk ./src/allmydata/test/no_network.py 390
5016-        for sharefile, data in shares.items():
5017-            open(sharefile, "wb").write(data)
5018+        for sharepath, data in shares.items():
5019+            FilePath(sharepath).setContent(data)
5020 
5021hunk ./src/allmydata/test/no_network.py 393
5022-    def delete_share(self, (shnum, serverid, sharefile)):
5023-        os.unlink(sharefile)
5024+    def delete_share(self, (shnum, serverid, sharefp)):
5025+        sharefp.remove()
5026 
5027     def delete_shares_numbered(self, uri, shnums):
5028hunk ./src/allmydata/test/no_network.py 397
5029-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5030+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5031             if i_shnum in shnums:
5032hunk ./src/allmydata/test/no_network.py 399
5033-                os.unlink(i_sharefile)
5034+                i_sharefp.remove()
5035 
5036hunk ./src/allmydata/test/no_network.py 401
5037-    def corrupt_share(self, (shnum, serverid, sharefile), corruptor_function):
5038-        sharedata = open(sharefile, "rb").read()
5039-        corruptdata = corruptor_function(sharedata)
5040-        open(sharefile, "wb").write(corruptdata)
5041+    def corrupt_share(self, (shnum, serverid, sharefp), corruptor_function, debug=False):
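+        # corruptor_function takes the full share contents (and a debug flag)
+        # and returns the corrupted contents.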
5042+        sharedata = sharefp.getContent()
5043+        corruptdata = corruptor_function(sharedata, debug=debug)
5044+        sharefp.setContent(corruptdata)
5045 
5046     def corrupt_shares_numbered(self, uri, shnums, corruptor, debug=False):
5047hunk ./src/allmydata/test/no_network.py 407
5048-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5049+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5050             if i_shnum in shnums:
5051hunk ./src/allmydata/test/no_network.py 409
5052-                sharedata = open(i_sharefile, "rb").read()
5053-                corruptdata = corruptor(sharedata, debug=debug)
5054-                open(i_sharefile, "wb").write(corruptdata)
5055+                self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug)
5056 
5057     def corrupt_all_shares(self, uri, corruptor, debug=False):
5058hunk ./src/allmydata/test/no_network.py 412
5059-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5060-            sharedata = open(i_sharefile, "rb").read()
5061-            corruptdata = corruptor(sharedata, debug=debug)
5062-            open(i_sharefile, "wb").write(corruptdata)
5063+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5064+            self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug)
5065 
5066     def GET(self, urlpath, followRedirect=False, return_response=False,
5067             method="GET", clientnum=0, **kwargs):
5068hunk ./src/allmydata/test/test_download.py 6
5069 # a previous run. This asserts that the current code is capable of decoding
5070 # shares from a previous version.
5071 
5072-import os
5073 from twisted.trial import unittest
5074 from twisted.internet import defer, reactor
5075 from allmydata import uri
5076hunk ./src/allmydata/test/test_download.py 9
5077-from allmydata.storage.server import storage_index_to_dir
5078 from allmydata.util import base32, fileutil, spans, log, hashutil
5079 from allmydata.util.consumer import download_to_data, MemoryConsumer
5080 from allmydata.immutable import upload, layout
5081hunk ./src/allmydata/test/test_download.py 85
5082         u = upload.Data(plaintext, None)
5083         d = self.c0.upload(u)
5084         f = open("stored_shares.py", "w")
5085-        def _created_immutable(ur):
5086-            # write the generated shares and URI to a file, which can then be
5087-            # incorporated into this one next time.
5088-            f.write('immutable_uri = "%s"\n' % ur.uri)
5089-            f.write('immutable_shares = {\n')
5090-            si = uri.from_string(ur.uri).get_storage_index()
5091-            si_dir = storage_index_to_dir(si)
5092+
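+        # Write the shares currently held by the servers for 'u' into the
+        # stored_shares.py dict literal being generated, then delete them
+        # from the servers' share directories.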
5093+        def _write_py(u):
5094+            si = uri.from_string(u).get_storage_index()
5095             for (i,ss,ssdir) in self.iterate_servers():
5096hunk ./src/allmydata/test/test_download.py 89
5097-                sharedir = os.path.join(ssdir, "shares", si_dir)
5098                 shares = {}
5099hunk ./src/allmydata/test/test_download.py 90
5100-                for fn in os.listdir(sharedir):
5101-                    shnum = int(fn)
5102-                    sharedata = open(os.path.join(sharedir, fn), "rb").read()
5103-                    shares[shnum] = sharedata
5104-                fileutil.rm_dir(sharedir)
5105+                shareset = ss.backend.get_shareset(si)
5106+                for share in shareset.get_shares():
5107+                    sharedata = share._home.getContent()
5108+                    shares[share.get_shnum()] = sharedata
5109+
5110+                fileutil.fp_remove(shareset._sharehomedir)
5111                 if shares:
5112                     f.write(' %d: { # client[%d]\n' % (i, i))
5113                     for shnum in sorted(shares.keys()):
5114hunk ./src/allmydata/test/test_download.py 103
5115                                 (shnum, base32.b2a(shares[shnum])))
5116                     f.write('    },\n')
5117             f.write('}\n')
5118-            f.write('\n')
5119 
5120hunk ./src/allmydata/test/test_download.py 104
5121+        def _created_immutable(ur):
5122+            # write the generated shares and URI to a file, which can then be
5123+            # incorporated into this one next time.
5124+            f.write('immutable_uri = "%s"\n' % ur.uri)
5125+            f.write('immutable_shares = {\n')
5126+            _write_py(ur.uri)
5127+            f.write('\n')
5128         d.addCallback(_created_immutable)
5129 
5130         d.addCallback(lambda ignored:
5131hunk ./src/allmydata/test/test_download.py 118
5132         def _created_mutable(n):
5133             f.write('mutable_uri = "%s"\n' % n.get_uri())
5134             f.write('mutable_shares = {\n')
5135-            si = uri.from_string(n.get_uri()).get_storage_index()
5136-            si_dir = storage_index_to_dir(si)
5137-            for (i,ss,ssdir) in self.iterate_servers():
5138-                sharedir = os.path.join(ssdir, "shares", si_dir)
5139-                shares = {}
5140-                for fn in os.listdir(sharedir):
5141-                    shnum = int(fn)
5142-                    sharedata = open(os.path.join(sharedir, fn), "rb").read()
5143-                    shares[shnum] = sharedata
5144-                fileutil.rm_dir(sharedir)
5145-                if shares:
5146-                    f.write(' %d: { # client[%d]\n' % (i, i))
5147-                    for shnum in sorted(shares.keys()):
5148-                        f.write('  %d: base32.a2b("%s"),\n' %
5149-                                (shnum, base32.b2a(shares[shnum])))
5150-                    f.write('    },\n')
5151-            f.write('}\n')
5152-
5153-            f.close()
5154+            _write_py(n.get_uri())
5155         d.addCallback(_created_mutable)
5156 
5157         def _done(ignored):
5158hunk ./src/allmydata/test/test_download.py 123
5159             f.close()
5160-        d.addCallback(_done)
5161+        d.addBoth(_done)
5162 
5163         return d
5164 
5165hunk ./src/allmydata/test/test_download.py 127
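+    # Write a {servernum: {shnum: sharedata}} mapping into the corresponding
+    # servers' backend share directories.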
5166+    def _write_shares(self, u, shares):
5167+        si = uri.from_string(u).get_storage_index()
5168+        for i in shares:
5169+            shares_for_server = shares[i]
5170+            share_dir = self.get_server(i).backend.get_shareset(si)._sharehomedir
5171+            fileutil.fp_make_dirs(share_dir)
5172+            for shnum in shares_for_server:
5173+                share_dir.child(str(shnum)).setContent(shares_for_server[shnum])
5174+
5175     def load_shares(self, ignored=None):
5176         # this uses the data generated by create_shares() to populate the
5177         # storage servers with pre-generated shares
5178hunk ./src/allmydata/test/test_download.py 139
5179-        si = uri.from_string(immutable_uri).get_storage_index()
5180-        si_dir = storage_index_to_dir(si)
5181-        for i in immutable_shares:
5182-            shares = immutable_shares[i]
5183-            for shnum in shares:
5184-                dn = os.path.join(self.get_serverdir(i), "shares", si_dir)
5185-                fileutil.make_dirs(dn)
5186-                fn = os.path.join(dn, str(shnum))
5187-                f = open(fn, "wb")
5188-                f.write(shares[shnum])
5189-                f.close()
5190-
5191-        si = uri.from_string(mutable_uri).get_storage_index()
5192-        si_dir = storage_index_to_dir(si)
5193-        for i in mutable_shares:
5194-            shares = mutable_shares[i]
5195-            for shnum in shares:
5196-                dn = os.path.join(self.get_serverdir(i), "shares", si_dir)
5197-                fileutil.make_dirs(dn)
5198-                fn = os.path.join(dn, str(shnum))
5199-                f = open(fn, "wb")
5200-                f.write(shares[shnum])
5201-                f.close()
5202+        self._write_shares(immutable_uri, immutable_shares)
5203+        self._write_shares(mutable_uri, mutable_shares)
5204 
5205     def download_immutable(self, ignored=None):
5206         n = self.c0.create_node_from_uri(immutable_uri)
5207hunk ./src/allmydata/test/test_download.py 183
5208 
5209         self.load_shares()
5210         si = uri.from_string(immutable_uri).get_storage_index()
5211-        si_dir = storage_index_to_dir(si)
5212 
5213         n = self.c0.create_node_from_uri(immutable_uri)
5214         d = download_to_data(n)
5215hunk ./src/allmydata/test/test_download.py 198
5216                 for clientnum in immutable_shares:
5217                     for shnum in immutable_shares[clientnum]:
5218                         if s._shnum == shnum:
5219-                            fn = os.path.join(self.get_serverdir(clientnum),
5220-                                              "shares", si_dir, str(shnum))
5221-                            os.unlink(fn)
5222+                            share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5223+                            share_dir.child(str(shnum)).remove()
5224         d.addCallback(_clobber_some_shares)
5225         d.addCallback(lambda ign: download_to_data(n))
5226         d.addCallback(_got_data)
5227hunk ./src/allmydata/test/test_download.py 212
5228                 for shnum in immutable_shares[clientnum]:
5229                     if shnum == save_me:
5230                         continue
5231-                    fn = os.path.join(self.get_serverdir(clientnum),
5232-                                      "shares", si_dir, str(shnum))
5233-                    if os.path.exists(fn):
5234-                        os.unlink(fn)
5235+                    share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5236+                    fileutil.fp_remove(share_dir.child(str(shnum)))
5237             # now the download should fail with NotEnoughSharesError
5238             return self.shouldFail(NotEnoughSharesError, "1shares", None,
5239                                    download_to_data, n)
5240hunk ./src/allmydata/test/test_download.py 223
5241             # delete the last remaining share
5242             for clientnum in immutable_shares:
5243                 for shnum in immutable_shares[clientnum]:
5244-                    fn = os.path.join(self.get_serverdir(clientnum),
5245-                                      "shares", si_dir, str(shnum))
5246-                    if os.path.exists(fn):
5247-                        os.unlink(fn)
5248+                    share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5249+                    fileutil.fp_remove(share_dir.child(str(shnum)))
5250             # now a new download should fail with NoSharesError. We want a
5251             # new ImmutableFileNode so it will forget about the old shares.
5252             # If we merely called create_node_from_uri() without first
5253hunk ./src/allmydata/test/test_download.py 801
5254         # will report two shares, and the ShareFinder will handle the
5255         # duplicate by attaching both to the same CommonShare instance.
5256         si = uri.from_string(immutable_uri).get_storage_index()
5257-        si_dir = storage_index_to_dir(si)
5258-        sh0_file = [sharefile
5259-                    for (shnum, serverid, sharefile)
5260-                    in self.find_uri_shares(immutable_uri)
5261-                    if shnum == 0][0]
5262-        sh0_data = open(sh0_file, "rb").read()
5263+        sh0_fp = [sharefp for (shnum, serverid, sharefp)
5264+                          in self.find_uri_shares(immutable_uri)
5265+                          if shnum == 0][0]
5266+        sh0_data = sh0_fp.getContent()
5267         for clientnum in immutable_shares:
5268             if 0 in immutable_shares[clientnum]:
5269                 continue
5270hunk ./src/allmydata/test/test_download.py 808
5271-            cdir = self.get_serverdir(clientnum)
5272-            target = os.path.join(cdir, "shares", si_dir, "0")
5273-            outf = open(target, "wb")
5274-            outf.write(sh0_data)
5275-            outf.close()
5276+            cdir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5277+            fileutil.fp_make_dirs(cdir)
5278+            cdir.child("0").setContent(sh0_data)
5279 
5280         d = self.download_immutable()
5281         return d
5282hunk ./src/allmydata/test/test_encode.py 134
5283         d.addCallback(_try)
5284         return d
5285 
5286-    def get_share_hashes(self, at_least_these=()):
5287+    def get_share_hashes(self):
5288         d = self._start()
5289         def _try(unused=None):
5290             if self.mode == "bad sharehash":
5291hunk ./src/allmydata/test/test_hung_server.py 3
5292 # -*- coding: utf-8 -*-
5293 
5294-import os, shutil
5295 from twisted.trial import unittest
5296 from twisted.internet import defer
5297hunk ./src/allmydata/test/test_hung_server.py 5
5298-from allmydata import uri
5299+
5300 from allmydata.util.consumer import download_to_data
5301 from allmydata.immutable import upload
5302 from allmydata.mutable.common import UnrecoverableFileError
5303hunk ./src/allmydata/test/test_hung_server.py 10
5304 from allmydata.mutable.publish import MutableData
5305-from allmydata.storage.common import storage_index_to_dir
5306 from allmydata.test.no_network import GridTestMixin
5307 from allmydata.test.common import ShouldFailMixin
5308 from allmydata.util.pollmixin import PollMixin
5309hunk ./src/allmydata/test/test_hung_server.py 18
5310 immutable_plaintext = "data" * 10000
5311 mutable_plaintext = "muta" * 10000
5312 
5313+
5314 class HungServerDownloadTest(GridTestMixin, ShouldFailMixin, PollMixin,
5315                              unittest.TestCase):
5316     # Many of these tests take around 60 seconds on François's ARM buildslave:
5317hunk ./src/allmydata/test/test_hung_server.py 31
5318     timeout = 240
5319 
5320     def _break(self, servers):
5321-        for (id, ss) in servers:
5322-            self.g.break_server(id)
5323+        for ss in servers:
5324+            self.g.break_server(ss.get_serverid())
5325 
5326     def _hang(self, servers, **kwargs):
5327hunk ./src/allmydata/test/test_hung_server.py 35
5328-        for (id, ss) in servers:
5329-            self.g.hang_server(id, **kwargs)
5330+        for ss in servers:
5331+            self.g.hang_server(ss.get_serverid(), **kwargs)
5332 
5333     def _unhang(self, servers, **kwargs):
5334hunk ./src/allmydata/test/test_hung_server.py 39
5335-        for (id, ss) in servers:
5336-            self.g.unhang_server(id, **kwargs)
5337+        for ss in servers:
5338+            self.g.unhang_server(ss.get_serverid(), **kwargs)
5339 
5340     def _hang_shares(self, shnums, **kwargs):
5341         # hang all servers who are holding the given shares
5342hunk ./src/allmydata/test/test_hung_server.py 52
5343                     hung_serverids.add(i_serverid)
5344 
5345     def _delete_all_shares_from(self, servers):
5346-        serverids = [id for (id, ss) in servers]
5347-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5348+        serverids = [ss.get_serverid() for ss in servers]
5349+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5350             if i_serverid in serverids:
5351hunk ./src/allmydata/test/test_hung_server.py 55
5352-                os.unlink(i_sharefile)
5353+                i_sharefp.remove()
5354 
5355     def _corrupt_all_shares_in(self, servers, corruptor_func):
5356hunk ./src/allmydata/test/test_hung_server.py 58
5357-        serverids = [id for (id, ss) in servers]
5358-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5359+        serverids = [ss.get_serverid() for ss in servers]
5360+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5361             if i_serverid in serverids:
5362hunk ./src/allmydata/test/test_hung_server.py 61
5363-                self._corrupt_share((i_shnum, i_sharefile), corruptor_func)
5364+                self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor_func)
5365 
5366     def _copy_all_shares_from(self, from_servers, to_server):
5367hunk ./src/allmydata/test/test_hung_server.py 64
5368-        serverids = [id for (id, ss) in from_servers]
5369-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5370+        serverids = [ss.get_serverid() for ss in from_servers]
5371+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5372             if i_serverid in serverids:
5373hunk ./src/allmydata/test/test_hung_server.py 67
5374-                self._copy_share((i_shnum, i_sharefile), to_server)
5375+                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
5376 
5377hunk ./src/allmydata/test/test_hung_server.py 69
5378-    def _copy_share(self, share, to_server):
5379-        (sharenum, sharefile) = share
5380-        (id, ss) = to_server
5381-        shares_dir = os.path.join(ss.original.storedir, "shares")
5382-        si = uri.from_string(self.uri).get_storage_index()
5383-        si_dir = os.path.join(shares_dir, storage_index_to_dir(si))
5384-        if not os.path.exists(si_dir):
5385-            os.makedirs(si_dir)
5386-        new_sharefile = os.path.join(si_dir, str(sharenum))
5387-        shutil.copy(sharefile, new_sharefile)
5388         self.shares = self.find_uri_shares(self.uri)
5389hunk ./src/allmydata/test/test_hung_server.py 70
5390-        # Make sure that the storage server has the share.
5391-        self.failUnless((sharenum, ss.original.my_nodeid, new_sharefile)
5392-                        in self.shares)
5393-
5394-    def _corrupt_share(self, share, corruptor_func):
5395-        (sharenum, sharefile) = share
5396-        data = open(sharefile, "rb").read()
5397-        newdata = corruptor_func(data)
5398-        os.unlink(sharefile)
5399-        wf = open(sharefile, "wb")
5400-        wf.write(newdata)
5401-        wf.close()
5402 
5403     def _set_up(self, mutable, testdir, num_clients=1, num_servers=10):
5404         self.mutable = mutable
5405hunk ./src/allmydata/test/test_hung_server.py 82
5406 
5407         self.c0 = self.g.clients[0]
5408         nm = self.c0.nodemaker
5409-        self.servers = sorted([(s.get_serverid(), s.get_rref())
5410-                               for s in nm.storage_broker.get_connected_servers()])
5411+        unsorted = [(s.get_serverid(), s.get_rref()) for s in nm.storage_broker.get_connected_servers()]
5412+        self.servers = [ss for (serverid, ss) in sorted(unsorted)]
5413         self.servers = self.servers[5:] + self.servers[:5]
5414 
5415         if mutable:
5416hunk ./src/allmydata/test/test_hung_server.py 244
5417             # stuck-but-not-overdue, and 4 live requests. All 4 live requests
5418             # will retire before the download is complete and the ShareFinder
5419             # is shut off. That will leave 4 OVERDUE and 1
5420-            # stuck-but-not-overdue, for a total of 5 requests in in
5421+            # stuck-but-not-overdue, for a total of 5 requests in
5422             # _sf.pending_requests
5423             for t in self._sf.overdue_timers.values()[:4]:
5424                 t.reset(-1.0)
5425hunk ./src/allmydata/test/test_mutable.py 21
5426 from foolscap.api import eventually, fireEventually
5427 from foolscap.logging import log
5428 from allmydata.storage_client import StorageFarmBroker
5429-from allmydata.storage.common import storage_index_to_dir
5430 from allmydata.scripts import debug
5431 
5432 from allmydata.mutable.filenode import MutableFileNode, BackoffAgent
5433hunk ./src/allmydata/test/test_mutable.py 3662
5434         # Now execute each assignment by writing the storage.
5435         for (share, servernum) in assignments:
5436             sharedata = base64.b64decode(self.sdmf_old_shares[share])
5437-            storedir = self.get_serverdir(servernum)
5438-            storage_path = os.path.join(storedir, "shares",
5439-                                        storage_index_to_dir(si))
5440-            fileutil.make_dirs(storage_path)
5441-            fileutil.write(os.path.join(storage_path, "%d" % share),
5442-                           sharedata)
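+            # Write the share directly into the server's backend share
+            # directory for this SI (replacing the old storedir/shares/<prefix>
+            # layout).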
5443+            storage_dir = self.get_server(servernum).backend.get_shareset(si)._sharehomedir
5444+            fileutil.fp_make_dirs(storage_dir)
5445+            storage_dir.child("%d" % share).setContent(sharedata)
5446         # ...and verify that the shares are there.
5447         shares = self.find_uri_shares(self.sdmf_old_cap)
5448         assert len(shares) == 10
5449hunk ./src/allmydata/test/test_provisioning.py 13
5450 from nevow import inevow
5451 from zope.interface import implements
5452 
5453-class MyRequest:
5454+class MockRequest:
5455     implements(inevow.IRequest)
5456     pass
5457 
5458hunk ./src/allmydata/test/test_provisioning.py 26
5459     def test_load(self):
5460         pt = provisioning.ProvisioningTool()
5461         self.fields = {}
5462-        #r = MyRequest()
5463+        #r = MockRequest()
5464         #r.fields = self.fields
5465         #ctx = RequestContext()
5466         #unfilled = pt.renderSynchronously(ctx)
5467hunk ./src/allmydata/test/test_repairer.py 537
5468         # happiness setting.
5469         def _delete_some_servers(ignored):
5470             for i in xrange(7):
5471-                self.g.remove_server(self.g.servers_by_number[i].my_nodeid)
5472+                self.remove_server(i)
5473 
5474             assert len(self.g.servers_by_number) == 3
5475 
5476hunk ./src/allmydata/test/test_storage.py 14
5477 from allmydata import interfaces
5478 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
5479 from allmydata.storage.server import StorageServer
5480-from allmydata.storage.mutable import MutableShareFile
5481-from allmydata.storage.immutable import BucketWriter, BucketReader
5482-from allmydata.storage.common import DataTooLargeError, storage_index_to_dir, \
5483+from allmydata.storage.backends.disk.mutable import MutableDiskShare
5484+from allmydata.storage.bucket import BucketWriter, BucketReader
5485+from allmydata.storage.common import DataTooLargeError, \
5486      UnknownMutableContainerVersionError, UnknownImmutableContainerVersionError
5487 from allmydata.storage.lease import LeaseInfo
5488 from allmydata.storage.crawler import BucketCountingCrawler
5489hunk ./src/allmydata/test/test_storage.py 474
5490         w[0].remote_write(0, "\xff"*10)
5491         w[0].remote_close()
5492 
5493-        fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0")
5494-        f = open(fn, "rb+")
5495+        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
5496+        f = fp.open("rb+")
5497         f.seek(0)
5498         f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1
5499         f.close()
5500hunk ./src/allmydata/test/test_storage.py 814
5501     def test_bad_magic(self):
5502         ss = self.create("test_bad_magic")
5503         self.allocate(ss, "si1", "we1", self._lease_secret.next(), set([0]), 10)
5504-        fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0")
5505-        f = open(fn, "rb+")
5506+        fp = ss.backend.get_shareset("si1")._sharehomedir.child("0")
5507+        f = fp.open("rb+")
5508         f.seek(0)
5509         f.write("BAD MAGIC")
5510         f.close()
5511hunk ./src/allmydata/test/test_storage.py 842
5512 
5513         # Trying to make the container too large (by sending a write vector
5514         # whose offset is too high) will raise an exception.
5515-        TOOBIG = MutableShareFile.MAX_SIZE + 10
5516+        TOOBIG = MutableDiskShare.MAX_SIZE + 10
5517         self.failUnlessRaises(DataTooLargeError,
5518                               rstaraw, "si1", secrets,
5519                               {0: ([], [(TOOBIG,data)], None)},
5520hunk ./src/allmydata/test/test_storage.py 1229
5521 
5522         # create a random non-numeric file in the bucket directory, to
5523         # exercise the code that's supposed to ignore those.
5524-        bucket_dir = os.path.join(self.workdir("test_leases"),
5525-                                  "shares", storage_index_to_dir("si1"))
5526-        f = open(os.path.join(bucket_dir, "ignore_me.txt"), "w")
5527-        f.write("you ought to be ignoring me\n")
5528-        f.close()
5529+        bucket_dir = ss.backend.get_shareset("si1")._sharehomedir
5530+        bucket_dir.child("ignore_me.txt").setContent("you ought to be ignoring me\n")
5531 
5532hunk ./src/allmydata/test/test_storage.py 1232
5533-        s0 = MutableShareFile(os.path.join(bucket_dir, "0"))
5534+        s0 = MutableDiskShare(bucket_dir.child("0"))
5535         self.failUnlessEqual(len(list(s0.get_leases())), 1)
5536 
5537         # add-lease on a missing storage index is silently ignored
5538hunk ./src/allmydata/test/test_storage.py 3118
5539         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
5540 
5541         # add a non-sharefile to exercise another code path
5542-        fn = os.path.join(ss.sharedir,
5543-                          storage_index_to_dir(immutable_si_0),
5544-                          "not-a-share")
5545-        f = open(fn, "wb")
5546-        f.write("I am not a share.\n")
5547-        f.close()
5548+        fp = ss.backend.get_shareset(immutable_si_0)._sharehomedir.child("not-a-share")
5549+        fp.setContent("I am not a share.\n")
5550 
5551         # this is before the crawl has started, so we're not in a cycle yet
5552         initial_state = lc.get_state()
5553hunk ./src/allmydata/test/test_storage.py 3282
5554     def test_expire_age(self):
5555         basedir = "storage/LeaseCrawler/expire_age"
5556         fileutil.make_dirs(basedir)
5557-        # setting expiration_time to 2000 means that any lease which is more
5558-        # than 2000s old will be expired.
5559-        ss = InstrumentedStorageServer(basedir, "\x00" * 20,
5560-                                       expiration_enabled=True,
5561-                                       expiration_mode="age",
5562-                                       expiration_override_lease_duration=2000)
5563+        # setting 'override_lease_duration' to 2000 means that any lease that
5564+        # is more than 2000 seconds old will be expired.
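+        # (The individual expiration_* constructor keywords are replaced by a
+        # single expiration_policy dict in the pluggable-backends API.)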
5565+        expiration_policy = {
5566+            'enabled': True,
5567+            'mode': 'age',
5568+            'override_lease_duration': 2000,
5569+            'sharetypes': ('mutable', 'immutable'),
5570+        }
5571+        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
5572         # make it start sooner than usual.
5573         lc = ss.lease_checker
5574         lc.slow_start = 0
5575hunk ./src/allmydata/test/test_storage.py 3423
5576     def test_expire_cutoff_date(self):
5577         basedir = "storage/LeaseCrawler/expire_cutoff_date"
5578         fileutil.make_dirs(basedir)
5579-        # setting cutoff-date to 2000 seconds ago means that any lease which
5580-        # is more than 2000s old will be expired.
5581+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5582+        # is more than 2000 seconds old will be expired.
5583         now = time.time()
5584         then = int(now - 2000)
5585hunk ./src/allmydata/test/test_storage.py 3427
5586-        ss = InstrumentedStorageServer(basedir, "\x00" * 20,
5587-                                       expiration_enabled=True,
5588-                                       expiration_mode="cutoff-date",
5589-                                       expiration_cutoff_date=then)
5590+        expiration_policy = {
5591+            'enabled': True,
5592+            'mode': 'cutoff-date',
5593+            'cutoff_date': then,
5594+            'sharetypes': ('mutable', 'immutable'),
5595+        }
5596+        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
5597         # make it start sooner than usual.
5598         lc = ss.lease_checker
5599         lc.slow_start = 0
5600hunk ./src/allmydata/test/test_storage.py 3575
5601     def test_only_immutable(self):
5602         basedir = "storage/LeaseCrawler/only_immutable"
5603         fileutil.make_dirs(basedir)
5604+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5605+        # is more than 2000 seconds old will be expired.
5606         now = time.time()
5607         then = int(now - 2000)
5608hunk ./src/allmydata/test/test_storage.py 3579
5609-        ss = StorageServer(basedir, "\x00" * 20,
5610-                           expiration_enabled=True,
5611-                           expiration_mode="cutoff-date",
5612-                           expiration_cutoff_date=then,
5613-                           expiration_sharetypes=("immutable",))
5614+        expiration_policy = {
5615+            'enabled': True,
5616+            'mode': 'cutoff-date',
5617+            'cutoff_date': then,
5618+            'sharetypes': ('immutable',),
5619+        }
5620+        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
5621         lc = ss.lease_checker
5622         lc.slow_start = 0
5623         webstatus = StorageStatus(ss)
5624hunk ./src/allmydata/test/test_storage.py 3636
5625     def test_only_mutable(self):
5626         basedir = "storage/LeaseCrawler/only_mutable"
5627         fileutil.make_dirs(basedir)
5628+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5629+        # is more than 2000 seconds old will be expired.
5630         now = time.time()
5631         then = int(now - 2000)
5632hunk ./src/allmydata/test/test_storage.py 3640
5633-        ss = StorageServer(basedir, "\x00" * 20,
5634-                           expiration_enabled=True,
5635-                           expiration_mode="cutoff-date",
5636-                           expiration_cutoff_date=then,
5637-                           expiration_sharetypes=("mutable",))
5638+        expiration_policy = {
5639+            'enabled': True,
5640+            'mode': 'cutoff-date',
5641+            'cutoff_date': then,
5642+            'sharetypes': ('mutable',),
5643+        }
5644+        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
5645         lc = ss.lease_checker
5646         lc.slow_start = 0
5647         webstatus = StorageStatus(ss)
5648hunk ./src/allmydata/test/test_storage.py 3819
5649     def test_no_st_blocks(self):
5650         basedir = "storage/LeaseCrawler/no_st_blocks"
5651         fileutil.make_dirs(basedir)
5652-        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20,
5653-                                        expiration_mode="age",
5654-                                        expiration_override_lease_duration=-1000)
5655-        # a negative expiration_time= means the "configured-"
5656+        # A negative 'override_lease_duration' means that the "configured-"
5657         # space-recovered counts will be non-zero, since all shares will have
5658hunk ./src/allmydata/test/test_storage.py 3821
5659-        # expired by then
5660+        # expired by then.
5661+        expiration_policy = {
5662+            'enabled': True,
5663+            'mode': 'age',
5664+            'override_lease_duration': -1000,
5665+            'sharetypes': ('mutable', 'immutable'),
5666+        }
5667+        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20, expiration_policy)
5668 
5669         # make it start sooner than usual.
5670         lc = ss.lease_checker
5671hunk ./src/allmydata/test/test_storage.py 3877
5672         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
5673         first = min(self.sis)
5674         first_b32 = base32.b2a(first)
5675-        fn = os.path.join(ss.sharedir, storage_index_to_dir(first), "0")
5676-        f = open(fn, "rb+")
5677+        fp = ss.backend.get_shareset(first)._sharehomedir.child("0")
5678+        f = fp.open("rb+")
5679         f.seek(0)
5680         f.write("BAD MAGIC")
5681         f.close()
5682hunk ./src/allmydata/test/test_storage.py 3890
5683 
5684         # also create an empty bucket
5685         empty_si = base32.b2a("\x04"*16)
5686-        empty_bucket_dir = os.path.join(ss.sharedir,
5687-                                        storage_index_to_dir(empty_si))
5688-        fileutil.make_dirs(empty_bucket_dir)
5689+        empty_bucket_dir = ss.backend.get_shareset(empty_si)._sharehomedir
5690+        fileutil.fp_make_dirs(empty_bucket_dir)
5691 
5692         ss.setServiceParent(self.s)
5693 
5694hunk ./src/allmydata/test/test_system.py 10
5695 
5696 import allmydata
5697 from allmydata import uri
5698-from allmydata.storage.mutable import MutableShareFile
5699+from allmydata.storage.backends.disk.mutable import MutableDiskShare
5700 from allmydata.storage.server import si_a2b
5701 from allmydata.immutable import offloaded, upload
5702 from allmydata.immutable.literal import LiteralFileNode
5703hunk ./src/allmydata/test/test_system.py 421
5704         return shares
5705 
5706     def _corrupt_mutable_share(self, filename, which):
5707-        msf = MutableShareFile(filename)
5708+        msf = MutableDiskShare(filename)
5709         datav = msf.readv([ (0, 1000000) ])
5710         final_share = datav[0]
5711         assert len(final_share) < 1000000 # ought to be truncated
5712hunk ./src/allmydata/test/test_upload.py 22
5713 from allmydata.util.happinessutil import servers_of_happiness, \
5714                                          shares_by_server, merge_servers
5715 from allmydata.storage_client import StorageFarmBroker
5716-from allmydata.storage.server import storage_index_to_dir
5717 
5718 MiB = 1024*1024
5719 
5720hunk ./src/allmydata/test/test_upload.py 821
5721 
5722     def _copy_share_to_server(self, share_number, server_number):
5723         ss = self.g.servers_by_number[server_number]
5724-        # Copy share i from the directory associated with the first
5725-        # storage server to the directory associated with this one.
5726-        assert self.g, "I tried to find a grid at self.g, but failed"
5727-        assert self.shares, "I tried to find shares at self.shares, but failed"
5728-        old_share_location = self.shares[share_number][2]
5729-        new_share_location = os.path.join(ss.storedir, "shares")
5730-        si = uri.from_string(self.uri).get_storage_index()
5731-        new_share_location = os.path.join(new_share_location,
5732-                                          storage_index_to_dir(si))
5733-        if not os.path.exists(new_share_location):
5734-            os.makedirs(new_share_location)
5735-        new_share_location = os.path.join(new_share_location,
5736-                                          str(share_number))
5737-        if old_share_location != new_share_location:
5738-            shutil.copy(old_share_location, new_share_location)
5739-        shares = self.find_uri_shares(self.uri)
5740-        # Make sure that the storage server has the share.
5741-        self.failUnless((share_number, ss.my_nodeid, new_share_location)
5742-                        in shares)
5743+        self.copy_share(self.shares[share_number], self.uri, ss)
5744 
5745     def _setup_grid(self):
5746         """
5747hunk ./src/allmydata/test/test_upload.py 1103
5748                 self._copy_share_to_server(i, 2)
5749         d.addCallback(_copy_shares)
5750         # Remove the first server, and add a placeholder with share 0
5751-        d.addCallback(lambda ign:
5752-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5753+        d.addCallback(lambda ign: self.remove_server(0))
5754         d.addCallback(lambda ign:
5755             self._add_server_with_share(server_number=4, share_number=0))
5756         # Now try uploading.
5757hunk ./src/allmydata/test/test_upload.py 1134
5758         d.addCallback(lambda ign:
5759             self._add_server(server_number=4))
5760         d.addCallback(_copy_shares)
5761-        d.addCallback(lambda ign:
5762-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5763+        d.addCallback(lambda ign: self.remove_server(0))
5764         d.addCallback(_reset_encoding_parameters)
5765         d.addCallback(lambda client:
5766             client.upload(upload.Data("data" * 10000, convergence="")))
5767hunk ./src/allmydata/test/test_upload.py 1196
5768                 self._copy_share_to_server(i, 2)
5769         d.addCallback(_copy_shares)
5770         # Remove server 0, and add another in its place
5771-        d.addCallback(lambda ign:
5772-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5773+        d.addCallback(lambda ign: self.remove_server(0))
5774         d.addCallback(lambda ign:
5775             self._add_server_with_share(server_number=4, share_number=0,
5776                                         readonly=True))
5777hunk ./src/allmydata/test/test_upload.py 1237
5778             for i in xrange(1, 10):
5779                 self._copy_share_to_server(i, 2)
5780         d.addCallback(_copy_shares)
5781-        d.addCallback(lambda ign:
5782-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5783+        d.addCallback(lambda ign: self.remove_server(0))
5784         def _reset_encoding_parameters(ign, happy=4):
5785             client = self.g.clients[0]
5786             client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
5787hunk ./src/allmydata/test/test_upload.py 1273
5788         # remove the original server
5789         # (necessary to ensure that the Tahoe2ServerSelector will distribute
5790         #  all the shares)
5791-        def _remove_server(ign):
5792-            server = self.g.servers_by_number[0]
5793-            self.g.remove_server(server.my_nodeid)
5794-        d.addCallback(_remove_server)
5795+        d.addCallback(lambda ign: self.remove_server(0))
5796         # This should succeed; we still have 4 servers, and the
5797         # happiness of the upload is 4.
5798         d.addCallback(lambda ign:
5799hunk ./src/allmydata/test/test_upload.py 1285
5800         d.addCallback(lambda ign:
5801             self._setup_and_upload())
5802         d.addCallback(_do_server_setup)
5803-        d.addCallback(_remove_server)
5804+        d.addCallback(lambda ign: self.remove_server(0))
5805         d.addCallback(lambda ign:
5806             self.shouldFail(UploadUnhappinessError,
5807                             "test_dropped_servers_in_encoder",
5808hunk ./src/allmydata/test/test_upload.py 1307
5809             self._add_server_with_share(4, 7, readonly=True)
5810             self._add_server_with_share(5, 8, readonly=True)
5811         d.addCallback(_do_server_setup_2)
5812-        d.addCallback(_remove_server)
5813+        d.addCallback(lambda ign: self.remove_server(0))
5814         d.addCallback(lambda ign:
5815             self._do_upload_with_broken_servers(1))
5816         d.addCallback(_set_basedir)
5817hunk ./src/allmydata/test/test_upload.py 1314
5818         d.addCallback(lambda ign:
5819             self._setup_and_upload())
5820         d.addCallback(_do_server_setup_2)
5821-        d.addCallback(_remove_server)
5822+        d.addCallback(lambda ign: self.remove_server(0))
5823         d.addCallback(lambda ign:
5824             self.shouldFail(UploadUnhappinessError,
5825                             "test_dropped_servers_in_encoder",
5826hunk ./src/allmydata/test/test_upload.py 1528
5827             for i in xrange(1, 10):
5828                 self._copy_share_to_server(i, 1)
5829         d.addCallback(_copy_shares)
5830-        d.addCallback(lambda ign:
5831-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5832+        d.addCallback(lambda ign: self.remove_server(0))
5833         def _prepare_client(ign):
5834             client = self.g.clients[0]
5835             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
5836hunk ./src/allmydata/test/test_upload.py 1550
5837         def _setup(ign):
5838             for i in xrange(1, 11):
5839                 self._add_server(server_number=i)
5840-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5841+            self.remove_server(0)
5842             c = self.g.clients[0]
5843             # We set happy to an unsatisfiable value so that we can check the
5844             # counting in the exception message. The same progress message
5845hunk ./src/allmydata/test/test_upload.py 1577
5846                 self._add_server(server_number=i)
5847             self._add_server(server_number=11, readonly=True)
5848             self._add_server(server_number=12, readonly=True)
5849-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5850+            self.remove_server(0)
5851             c = self.g.clients[0]
5852             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
5853             return c
5854hunk ./src/allmydata/test/test_upload.py 1605
5855             # the first one that the selector sees.
5856             for i in xrange(10):
5857                 self._copy_share_to_server(i, 9)
5858-            # Remove server 0, and its contents
5859-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5860+            self.remove_server(0)
5861             # Make happiness unsatisfiable
5862             c = self.g.clients[0]
5863             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
5864hunk ./src/allmydata/test/test_upload.py 1625
5865         def _then(ign):
5866             for i in xrange(1, 11):
5867                 self._add_server(server_number=i, readonly=True)
5868-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5869+            self.remove_server(0)
5870             c = self.g.clients[0]
5871             c.DEFAULT_ENCODING_PARAMETERS['k'] = 2
5872             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
5873hunk ./src/allmydata/test/test_upload.py 1661
5874             self._add_server(server_number=4, readonly=True))
5875         d.addCallback(lambda ign:
5876             self._add_server(server_number=5, readonly=True))
5877-        d.addCallback(lambda ign:
5878-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5879+        d.addCallback(lambda ign: self.remove_server(0))
5880         def _reset_encoding_parameters(ign, happy=4):
5881             client = self.g.clients[0]
5882             client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
5883hunk ./src/allmydata/test/test_upload.py 1696
5884         d.addCallback(lambda ign:
5885             self._add_server(server_number=2))
5886         def _break_server_2(ign):
5887-            serverid = self.g.servers_by_number[2].my_nodeid
5888+            serverid = self.get_server(2).get_serverid()
5889             self.g.break_server(serverid)
5890         d.addCallback(_break_server_2)
5891         d.addCallback(lambda ign:
5892hunk ./src/allmydata/test/test_upload.py 1705
5893             self._add_server(server_number=4, readonly=True))
5894         d.addCallback(lambda ign:
5895             self._add_server(server_number=5, readonly=True))
5896-        d.addCallback(lambda ign:
5897-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5898+        d.addCallback(lambda ign: self.remove_server(0))
5899         d.addCallback(_reset_encoding_parameters)
5900         d.addCallback(lambda client:
5901             self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
5902hunk ./src/allmydata/test/test_upload.py 1816
5903             # Copy shares
5904             self._copy_share_to_server(1, 1)
5905             self._copy_share_to_server(2, 1)
5906-            # Remove server 0
5907-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5908+            self.remove_server(0)
5909             client = self.g.clients[0]
5910             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 3
5911             return client
5912hunk ./src/allmydata/test/test_upload.py 1930
5913                                         readonly=True)
5914             self._add_server_with_share(server_number=4, share_number=3,
5915                                         readonly=True)
5916-            # Remove server 0.
5917-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5918+            self.remove_server(0)
5919             # Set the client appropriately
5920             c = self.g.clients[0]
5921             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
5922hunk ./src/allmydata/test/test_util.py 9
5923 from twisted.trial import unittest
5924 from twisted.internet import defer, reactor
5925 from twisted.python.failure import Failure
5926+from twisted.python.filepath import FilePath
5927 from twisted.python import log
5928 from pycryptopp.hash.sha256 import SHA256 as _hash
5929 
5930hunk ./src/allmydata/test/test_util.py 508
5931                 os.chdir(saved_cwd)
5932 
5933     def test_disk_stats(self):
5934-        avail = fileutil.get_available_space('.', 2**14)
5935+        avail = fileutil.get_available_space(FilePath('.'), 2**14)
5936         if avail == 0:
5937             raise unittest.SkipTest("This test will spuriously fail there is no disk space left.")
5938 
5939hunk ./src/allmydata/test/test_util.py 512
5940-        disk = fileutil.get_disk_stats('.', 2**13)
5941+        disk = fileutil.get_disk_stats(FilePath('.'), 2**13)
5942         self.failUnless(disk['total'] > 0, disk['total'])
5943         self.failUnless(disk['used'] > 0, disk['used'])
5944         self.failUnless(disk['free_for_root'] > 0, disk['free_for_root'])
5945hunk ./src/allmydata/test/test_util.py 521
5946 
5947     def test_disk_stats_avail_nonnegative(self):
5948         # This test will spuriously fail if you have more than 2^128
5949-        # bytes of available space on your filesystem.
5950-        disk = fileutil.get_disk_stats('.', 2**128)
5951+        # bytes of available space on your filesystem (lucky you).
5952+        disk = fileutil.get_disk_stats(FilePath('.'), 2**128)
5953         self.failUnlessEqual(disk['avail'], 0)
5954 
5955 class PollMixinTests(unittest.TestCase):
5956hunk ./src/allmydata/test/test_web.py 12
5957 from twisted.python import failure, log
5958 from nevow import rend
5959 from allmydata import interfaces, uri, webish, dirnode
5960-from allmydata.storage.shares import get_share_file
5961 from allmydata.storage_client import StorageFarmBroker
5962 from allmydata.immutable import upload
5963 from allmydata.immutable.downloader.status import DownloadStatus
5964hunk ./src/allmydata/test/test_web.py 4111
5965             good_shares = self.find_uri_shares(self.uris["good"])
5966             self.failUnlessReallyEqual(len(good_shares), 10)
5967             sick_shares = self.find_uri_shares(self.uris["sick"])
5968-            os.unlink(sick_shares[0][2])
5969+            sick_shares[0][2].remove()
5970             dead_shares = self.find_uri_shares(self.uris["dead"])
5971             for i in range(1, 10):
5972hunk ./src/allmydata/test/test_web.py 4114
5973-                os.unlink(dead_shares[i][2])
5974+                dead_shares[i][2].remove()
5975             c_shares = self.find_uri_shares(self.uris["corrupt"])
5976             cso = CorruptShareOptions()
5977             cso.stdout = StringIO()
5978hunk ./src/allmydata/test/test_web.py 4118
5979-            cso.parseOptions([c_shares[0][2]])
5980+            cso.parseOptions([c_shares[0][2].path])
5981             corrupt_share(cso)
5982         d.addCallback(_clobber_shares)
5983 
5984hunk ./src/allmydata/test/test_web.py 4253
5985             good_shares = self.find_uri_shares(self.uris["good"])
5986             self.failUnlessReallyEqual(len(good_shares), 10)
5987             sick_shares = self.find_uri_shares(self.uris["sick"])
5988-            os.unlink(sick_shares[0][2])
5989+            sick_shares[0][2].remove()
5990             dead_shares = self.find_uri_shares(self.uris["dead"])
5991             for i in range(1, 10):
5992hunk ./src/allmydata/test/test_web.py 4256
5993-                os.unlink(dead_shares[i][2])
5994+                dead_shares[i][2].remove()
5995             c_shares = self.find_uri_shares(self.uris["corrupt"])
5996             cso = CorruptShareOptions()
5997             cso.stdout = StringIO()
5998hunk ./src/allmydata/test/test_web.py 4260
5999-            cso.parseOptions([c_shares[0][2]])
6000+            cso.parseOptions([c_shares[0][2].path])
6001             corrupt_share(cso)
6002         d.addCallback(_clobber_shares)
6003 
6004hunk ./src/allmydata/test/test_web.py 4319
6005 
6006         def _clobber_shares(ignored):
6007             sick_shares = self.find_uri_shares(self.uris["sick"])
6008-            os.unlink(sick_shares[0][2])
6009+            sick_shares[0][2].remove()
6010         d.addCallback(_clobber_shares)
6011 
6012         d.addCallback(self.CHECK, "sick", "t=check&repair=true&output=json")
6013hunk ./src/allmydata/test/test_web.py 4811
6014             good_shares = self.find_uri_shares(self.uris["good"])
6015             self.failUnlessReallyEqual(len(good_shares), 10)
6016             sick_shares = self.find_uri_shares(self.uris["sick"])
6017-            os.unlink(sick_shares[0][2])
6018+            sick_shares[0][2].remove()
6019             #dead_shares = self.find_uri_shares(self.uris["dead"])
6020             #for i in range(1, 10):
6021hunk ./src/allmydata/test/test_web.py 4814
6022-            #    os.unlink(dead_shares[i][2])
6023+            #    dead_shares[i][2].remove()
6024 
6025             #c_shares = self.find_uri_shares(self.uris["corrupt"])
6026             #cso = CorruptShareOptions()
6027hunk ./src/allmydata/test/test_web.py 4819
6028             #cso.stdout = StringIO()
6029-            #cso.parseOptions([c_shares[0][2]])
6030+            #cso.parseOptions([c_shares[0][2].path])
6031             #corrupt_share(cso)
6032         d.addCallback(_clobber_shares)
6033 
6034hunk ./src/allmydata/test/test_web.py 4870
6035         d.addErrback(self.explain_web_error)
6036         return d
6037 
6038-    def _count_leases(self, ignored, which):
6039-        u = self.uris[which]
6040-        shares = self.find_uri_shares(u)
6041-        lease_counts = []
6042-        for shnum, serverid, fn in shares:
6043-            sf = get_share_file(fn)
6044-            num_leases = len(list(sf.get_leases()))
6045-            lease_counts.append( (fn, num_leases) )
6046-        return lease_counts
6047-
6048-    def _assert_leasecount(self, lease_counts, expected):
6049+    def _assert_leasecount(self, ignored, which, expected):
6050+        lease_counts = self.count_leases(self.uris[which])
6051         for (fn, num_leases) in lease_counts:
6052             if num_leases != expected:
6053                 self.fail("expected %d leases, have %d, on %s" %
6054hunk ./src/allmydata/test/test_web.py 4903
6055                 self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
6056         d.addCallback(_compute_fileurls)
6057 
6058-        d.addCallback(self._count_leases, "one")
6059-        d.addCallback(self._assert_leasecount, 1)
6060-        d.addCallback(self._count_leases, "two")
6061-        d.addCallback(self._assert_leasecount, 1)
6062-        d.addCallback(self._count_leases, "mutable")
6063-        d.addCallback(self._assert_leasecount, 1)
6064+        d.addCallback(self._assert_leasecount, "one", 1)
6065+        d.addCallback(self._assert_leasecount, "two", 1)
6066+        d.addCallback(self._assert_leasecount, "mutable", 1)
6067 
6068         d.addCallback(self.CHECK, "one", "t=check") # no add-lease
6069         def _got_html_good(res):
6070hunk ./src/allmydata/test/test_web.py 4913
6071             self.failIf("Not Healthy" in res, res)
6072         d.addCallback(_got_html_good)
6073 
6074-        d.addCallback(self._count_leases, "one")
6075-        d.addCallback(self._assert_leasecount, 1)
6076-        d.addCallback(self._count_leases, "two")
6077-        d.addCallback(self._assert_leasecount, 1)
6078-        d.addCallback(self._count_leases, "mutable")
6079-        d.addCallback(self._assert_leasecount, 1)
6080+        d.addCallback(self._assert_leasecount, "one", 1)
6081+        d.addCallback(self._assert_leasecount, "two", 1)
6082+        d.addCallback(self._assert_leasecount, "mutable", 1)
6083 
6084         # this CHECK uses the original client, which uses the same
6085         # lease-secrets, so it will just renew the original lease
6086hunk ./src/allmydata/test/test_web.py 4922
6087         d.addCallback(self.CHECK, "one", "t=check&add-lease=true")
6088         d.addCallback(_got_html_good)
6089 
6090-        d.addCallback(self._count_leases, "one")
6091-        d.addCallback(self._assert_leasecount, 1)
6092-        d.addCallback(self._count_leases, "two")
6093-        d.addCallback(self._assert_leasecount, 1)
6094-        d.addCallback(self._count_leases, "mutable")
6095-        d.addCallback(self._assert_leasecount, 1)
6096+        d.addCallback(self._assert_leasecount, "one", 1)
6097+        d.addCallback(self._assert_leasecount, "two", 1)
6098+        d.addCallback(self._assert_leasecount, "mutable", 1)
6099 
6100         # this CHECK uses an alternate client, which adds a second lease
6101         d.addCallback(self.CHECK, "one", "t=check&add-lease=true", clientnum=1)
6102hunk ./src/allmydata/test/test_web.py 4930
6103         d.addCallback(_got_html_good)
6104 
6105-        d.addCallback(self._count_leases, "one")
6106-        d.addCallback(self._assert_leasecount, 2)
6107-        d.addCallback(self._count_leases, "two")
6108-        d.addCallback(self._assert_leasecount, 1)
6109-        d.addCallback(self._count_leases, "mutable")
6110-        d.addCallback(self._assert_leasecount, 1)
6111+        d.addCallback(self._assert_leasecount, "one", 2)
6112+        d.addCallback(self._assert_leasecount, "two", 1)
6113+        d.addCallback(self._assert_leasecount, "mutable", 1)
6114 
6115         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true")
6116         d.addCallback(_got_html_good)
6117hunk ./src/allmydata/test/test_web.py 4937
6118 
6119-        d.addCallback(self._count_leases, "one")
6120-        d.addCallback(self._assert_leasecount, 2)
6121-        d.addCallback(self._count_leases, "two")
6122-        d.addCallback(self._assert_leasecount, 1)
6123-        d.addCallback(self._count_leases, "mutable")
6124-        d.addCallback(self._assert_leasecount, 1)
6125+        d.addCallback(self._assert_leasecount, "one", 2)
6126+        d.addCallback(self._assert_leasecount, "two", 1)
6127+        d.addCallback(self._assert_leasecount, "mutable", 1)
6128 
6129         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true",
6130                       clientnum=1)
6131hunk ./src/allmydata/test/test_web.py 4945
6132         d.addCallback(_got_html_good)
6133 
6134-        d.addCallback(self._count_leases, "one")
6135-        d.addCallback(self._assert_leasecount, 2)
6136-        d.addCallback(self._count_leases, "two")
6137-        d.addCallback(self._assert_leasecount, 1)
6138-        d.addCallback(self._count_leases, "mutable")
6139-        d.addCallback(self._assert_leasecount, 2)
6140+        d.addCallback(self._assert_leasecount, "one", 2)
6141+        d.addCallback(self._assert_leasecount, "two", 1)
6142+        d.addCallback(self._assert_leasecount, "mutable", 2)
6143 
6144         d.addErrback(self.explain_web_error)
6145         return d
6146hunk ./src/allmydata/test/test_web.py 4989
6147             self.failUnlessReallyEqual(len(units), 4+1)
6148         d.addCallback(_done)
6149 
6150-        d.addCallback(self._count_leases, "root")
6151-        d.addCallback(self._assert_leasecount, 1)
6152-        d.addCallback(self._count_leases, "one")
6153-        d.addCallback(self._assert_leasecount, 1)
6154-        d.addCallback(self._count_leases, "mutable")
6155-        d.addCallback(self._assert_leasecount, 1)
6156+        d.addCallback(self._assert_leasecount, "root", 1)
6157+        d.addCallback(self._assert_leasecount, "one", 1)
6158+        d.addCallback(self._assert_leasecount, "mutable", 1)
6159 
6160         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true")
6161         d.addCallback(_done)
6162hunk ./src/allmydata/test/test_web.py 4996
6163 
6164-        d.addCallback(self._count_leases, "root")
6165-        d.addCallback(self._assert_leasecount, 1)
6166-        d.addCallback(self._count_leases, "one")
6167-        d.addCallback(self._assert_leasecount, 1)
6168-        d.addCallback(self._count_leases, "mutable")
6169-        d.addCallback(self._assert_leasecount, 1)
6170+        d.addCallback(self._assert_leasecount, "root", 1)
6171+        d.addCallback(self._assert_leasecount, "one", 1)
6172+        d.addCallback(self._assert_leasecount, "mutable", 1)
6173 
6174         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true",
6175                       clientnum=1)
6176hunk ./src/allmydata/test/test_web.py 5004
6177         d.addCallback(_done)
6178 
6179-        d.addCallback(self._count_leases, "root")
6180-        d.addCallback(self._assert_leasecount, 2)
6181-        d.addCallback(self._count_leases, "one")
6182-        d.addCallback(self._assert_leasecount, 2)
6183-        d.addCallback(self._count_leases, "mutable")
6184-        d.addCallback(self._assert_leasecount, 2)
6185+        d.addCallback(self._assert_leasecount, "root", 2)
6186+        d.addCallback(self._assert_leasecount, "one", 2)
6187+        d.addCallback(self._assert_leasecount, "mutable", 2)
6188 
6189         d.addErrback(self.explain_web_error)
6190         return d
6191hunk ./src/allmydata/uri.py 829
6192     def is_mutable(self):
6193         return False
6194 
6195+    def is_readonly(self):
6196+        return True
6197+
6198+    def get_readonly(self):
6199+        return self
6200+
6201+
6202 class DirectoryURIVerifier(_DirectoryBaseURI):
6203     implements(IVerifierURI)
6204 
6205hunk ./src/allmydata/uri.py 855
6206     def is_mutable(self):
6207         return False
6208 
6209+    def is_readonly(self):
6210+        return True
6211+
6212+    def get_readonly(self):
6213+        return self
6214+
6215 
6216 class ImmutableDirectoryURIVerifier(DirectoryURIVerifier):
6217     implements(IVerifierURI)
6218hunk ./src/allmydata/util/encodingutil.py 221
6219 def quote_path(path, quotemarks=True):
6220     return quote_output("/".join(map(to_str, path)), quotemarks=quotemarks)
6221 
6222+def quote_filepath(fp, quotemarks=True, encoding=None):
6223+    path = fp.path
6224+    if isinstance(path, str):
6225+        try:
6226+            path = path.decode(filesystem_encoding)
6227+        except UnicodeDecodeError:
6228+            return 'b"%s"' % (ESCAPABLE_8BIT.sub(_str_escape, path),)
6229+
6230+    return quote_output(path, quotemarks=quotemarks, encoding=encoding)
6231+
6232 
6233 def unicode_platform():
6234     """
6235hunk ./src/allmydata/util/fileutil.py 5
6236 Futz with files like a pro.
6237 """
6238 
6239-import sys, exceptions, os, stat, tempfile, time, binascii
6240+import errno, sys, exceptions, os, stat, tempfile, time, binascii
6241+
6242+from allmydata.util.assertutil import precondition
6243 
6244 from twisted.python import log
6245hunk ./src/allmydata/util/fileutil.py 10
6246+from twisted.python.filepath import FilePath, UnlistableError
6247 
6248 from pycryptopp.cipher.aes import AES
6249 
6250hunk ./src/allmydata/util/fileutil.py 189
6251             raise tx
6252         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
6253 
6254-def rm_dir(dirname):
6255+def fp_make_dirs(dirfp):
6256+    """
6257+    An idempotent version of FilePath.makedirs().  If the dir already
6258+    exists, do nothing and return without raising an exception.  If this
6259+    call creates the dir, return without raising an exception.  If there is
6260+    an error that prevents creation or if the directory gets deleted after
6261+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
6262+    exists, raise an exception.
6263+    """
6265+    tx = None
6266+    try:
6267+        dirfp.makedirs()
6268+    except OSError, x:
6269+        tx = x
6270+
6271+    if not dirfp.isdir():
6272+        if tx:
6273+            raise tx
6274+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
6275+
6276+def fp_rmdir_if_empty(dirfp):
6277+    """ Remove the directory if it is empty. """
6278+    try:
6279+        os.rmdir(dirfp.path)
6280+    except OSError, e:
6281+        if e.errno != errno.ENOTEMPTY:
6282+            raise
6283+    else:
6284+        dirfp.changed()
6285+
6286+def rmtree(dirname):
6287     """
6288     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
6289     already gone, do nothing and return without raising an exception.  If this
6290hunk ./src/allmydata/util/fileutil.py 239
6291             else:
6292                 remove(fullname)
6293         os.rmdir(dirname)
6294-    except Exception, le:
6295-        # Ignore "No such file or directory"
6296-        if (not isinstance(le, OSError)) or le.args[0] != 2:
6297+    except EnvironmentError, le:
6298+        # Ignore "No such file or directory" (errno.ENOENT, or the Windows "path not found" code 3); collect any other exception.
6299+        if le.args[0] != errno.ENOENT and le.args[0] != 3:
6300             excs.append(le)
6301hunk ./src/allmydata/util/fileutil.py 243
6302+    except Exception, le:
6303+        excs.append(le)
6304 
6305     # Okay, now we've recursively removed everything, ignoring any "No
6306     # such file or directory" errors, and collecting any other errors.
6307hunk ./src/allmydata/util/fileutil.py 256
6308             raise OSError, "Failed to remove dir for unknown reason."
6309         raise OSError, excs
6310 
6311+def fp_remove(fp):
6312+    """
6313+    An idempotent version of FilePath.remove().  If the file/dir is already
6314+    gone, do nothing and return without raising an exception.  If this call
6315+    removes the file/dir, return without raising an exception.  If there is
6316+    an error that prevents removal, or if a file or directory at the same
6317+    path gets created again by someone else after this deletes it and before
6318+    this checks that it is gone, raise an exception.
6319+    """
6320+    try:
6321+        fp.remove()
6322+    except UnlistableError, e:
6323+        if e.originalException.errno != errno.ENOENT:
6324+            raise
6325+    except OSError, e:
6326+        if e.errno != errno.ENOENT:
6327+            raise
6328+
6329+def rm_dir(dirname):
6330+    # Backward-compatibility alias: rm_dir was renamed to rmtree() to match shutil.rmtree and to avoid confusion with os.rmdir.
6331+    return rmtree(dirname)
6332 
6333 def remove_if_possible(f):
6334     try:
6335hunk ./src/allmydata/util/fileutil.py 387
6336         import traceback
6337         traceback.print_exc()
6338 
6339-def get_disk_stats(whichdir, reserved_space=0):
6340+def get_disk_stats(whichdirfp, reserved_space=0):
6341     """Return disk statistics for the storage disk, in the form of a dict
6342     with the following fields.
6343       total:            total bytes on disk
6344hunk ./src/allmydata/util/fileutil.py 408
6345     you can pass how many bytes you would like to leave unused on this
6346     filesystem as reserved_space.
6347     """
6348+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
6349 
6350     if have_GetDiskFreeSpaceExW:
6351         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
6352hunk ./src/allmydata/util/fileutil.py 419
6353         n_free_for_nonroot = c_ulonglong(0)
6354         n_total            = c_ulonglong(0)
6355         n_free_for_root    = c_ulonglong(0)
6356-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
6357-                                               byref(n_total),
6358-                                               byref(n_free_for_root))
6359+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
6360+                                                      byref(n_total),
6361+                                                      byref(n_free_for_root))
6362         if retval == 0:
6363             raise OSError("Windows error %d attempting to get disk statistics for %r"
6364hunk ./src/allmydata/util/fileutil.py 424
6365-                          % (GetLastError(), whichdir))
6366+                          % (GetLastError(), whichdirfp.path))
6367         free_for_nonroot = n_free_for_nonroot.value
6368         total            = n_total.value
6369         free_for_root    = n_free_for_root.value
6370hunk ./src/allmydata/util/fileutil.py 433
6371         # <http://docs.python.org/library/os.html#os.statvfs>
6372         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
6373         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
6374-        s = os.statvfs(whichdir)
6375+        s = os.statvfs(whichdirfp.path)
6376 
6377         # on my mac laptop:
6378         #  statvfs(2) is a wrapper around statfs(2).
6379hunk ./src/allmydata/util/fileutil.py 460
6380              'avail': avail,
6381            }
6382 
6383-def get_available_space(whichdir, reserved_space):
6384+def get_available_space(whichdirfp, reserved_space):
6385     """Returns available space for share storage in bytes, or None if no
6386     API to get this information is available.
6387 
6388hunk ./src/allmydata/util/fileutil.py 472
6389     you can pass how many bytes you would like to leave unused on this
6390     filesystem as reserved_space.
6391     """
6392+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
6393     try:
6394hunk ./src/allmydata/util/fileutil.py 474
6395-        return get_disk_stats(whichdir, reserved_space)['avail']
6396+        return get_disk_stats(whichdirfp, reserved_space)['avail']
6397     except AttributeError:
6398         return None
6399hunk ./src/allmydata/util/fileutil.py 477
6400-    except EnvironmentError:
6401-        log.msg("OS call to get disk statistics failed")
6402+
6403+
6404+def get_used_space(fp):
6405+    if fp is None:
6406         return 0
6407hunk ./src/allmydata/util/fileutil.py 482
6408+    try:
6409+        s = os.stat(fp.path)
6410+    except EnvironmentError:
6411+        if not fp.exists():
6412+            return 0
6413+        raise
6414+    else:
6415+        # POSIX defines st_blocks (originally a BSDism):
6416+        #   <http://pubs.opengroup.org/onlinepubs/009695399/basedefs/sys/stat.h.html>
6417+        # but does not require stat() to give it a "meaningful value"
6418+        #   <http://pubs.opengroup.org/onlinepubs/009695399/functions/stat.html>
6419+        # and says:
6420+        #   "The unit for the st_blocks member of the stat structure is not defined
6421+        #    within IEEE Std 1003.1-2001. In some implementations it is 512 bytes.
6422+        #    It may differ on a file system basis. There is no correlation between
6423+        #    values of the st_blocks and st_blksize, and the f_bsize (from <sys/statvfs.h>)
6424+        #    structure members."
6425+        #
6426+        # The Linux docs define it as "the number of blocks allocated to the file,
6427+        # [in] 512-byte units." It is also defined that way on MacOS X. Python does
6428+        # not set the attribute on Windows.
6429+        #
6430+        # We consider platforms that define st_blocks but give it a wrong value, or
6431+        # measure it in a unit other than 512 bytes, to be broken. See also
6432+        # <http://bugs.python.org/issue12350>.
6433+
6434+        if hasattr(s, 'st_blocks'):
6435+            return s.st_blocks * 512
6436+        else:
6437+            return s.st_size
6438}
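
(Reviewer note, not part of the patch bundle: the st_blocks accounting used by the new fileutil.get_used_space() above can be seen in isolation with the following minimal sketch. It assumes a POSIX platform and a filesystem that supports sparse files; the helper name used_space is hypothetical and stands in for fileutil.get_used_space.)

import os, tempfile

def used_space(path):
    # Allocated size if the platform defines st_blocks (POSIX; by the
    # Linux / Mac OS X convention the unit is 512 bytes), else apparent size.
    s = os.stat(path)
    if hasattr(s, 'st_blocks'):
        return s.st_blocks * 512
    return s.st_size

fd, path = tempfile.mkstemp()
try:
    os.lseek(fd, 10*1024*1024, os.SEEK_SET)  # leave a 10 MiB hole...
    os.write(fd, "x")                        # ...then write a single byte
    os.close(fd)
    apparent = os.stat(path).st_size         # ~10 MiB + 1
    allocated = used_space(path)             # a few KiB where holes are supported
    print "apparent: %d, allocated: %d" % (apparent, allocated)
finally:
    os.remove(path)
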
6439[Work-in-progress, includes fix to bug involving BucketWriter. refs #999
6440david-sarah@jacaranda.org**20110920033803
6441 Ignore-this: 64e9e019421454e4d08141d10b6e4eed
6442] {
6443hunk ./src/allmydata/client.py 9
6444 from twisted.internet import reactor, defer
6445 from twisted.application import service
6446 from twisted.application.internet import TimerService
6447+from twisted.python.filepath import FilePath
6448 from foolscap.api import Referenceable
6449 from pycryptopp.publickey import rsa
6450 
6451hunk ./src/allmydata/client.py 15
6452 import allmydata
6453 from allmydata.storage.server import StorageServer
6454+from allmydata.storage.backends.disk.disk_backend import DiskBackend
6455 from allmydata import storage_client
6456 from allmydata.immutable.upload import Uploader
6457 from allmydata.immutable.offloaded import Helper
6458hunk ./src/allmydata/client.py 213
6459             return
6460         readonly = self.get_config("storage", "readonly", False, boolean=True)
6461 
6462-        storedir = os.path.join(self.basedir, self.STOREDIR)
6463+        storedir = FilePath(self.basedir).child(self.STOREDIR)
6464 
6465         data = self.get_config("storage", "reserved_space", None)
6466         reserved = None
6467hunk ./src/allmydata/client.py 255
6468             'cutoff_date': cutoff_date,
6469             'sharetypes': tuple(sharetypes),
6470         }
6471-        ss = StorageServer(storedir, self.nodeid,
6472-                           reserved_space=reserved,
6473-                           discard_storage=discard,
6474-                           readonly_storage=readonly,
6475+
6476+        backend = DiskBackend(storedir, readonly=readonly, reserved_space=reserved,
6477+                              discard_storage=discard)
6478+        ss = StorageServer(nodeid, backend, storedir,
6479                            stats_provider=self.stats_provider,
6480                            expiration_policy=expiration_policy)
6481         self.add_service(ss)
6482hunk ./src/allmydata/interfaces.py 348
6483 
6484     def get_shares():
6485         """
6486-        Generates the IStoredShare objects held in this shareset.
6487+        Generates IStoredShare objects for all completed shares in this shareset.
6488         """
6489 
6490     def has_incoming(shnum):
6491hunk ./src/allmydata/storage/backends/base.py 69
6492         # def _create_mutable_share(self, storageserver, shnum, write_enabler):
6493         #     """create a mutable share with the given shnum and write_enabler"""
6494 
6495-        # secrets might be a triple with cancel_secret in secrets[2], but if
6496-        # so we ignore the cancel_secret.
6497         write_enabler = secrets[0]
6498         renew_secret = secrets[1]
6499hunk ./src/allmydata/storage/backends/base.py 71
6500+        cancel_secret = '\x00'*32
6501+        if len(secrets) > 2:
6502+            cancel_secret = secrets[2]
6503 
6504         si_s = self.get_storage_index_string()
6505         shares = {}
6506hunk ./src/allmydata/storage/backends/base.py 110
6507             read_data[shnum] = share.readv(read_vector)
6508 
6509         ownerid = 1 # TODO
6510-        lease_info = LeaseInfo(ownerid, renew_secret,
6511+        lease_info = LeaseInfo(ownerid, renew_secret, cancel_secret,
6512                                expiration_time, storageserver.get_serverid())
6513 
6514         if testv_is_good:
6515hunk ./src/allmydata/storage/backends/disk/disk_backend.py 34
6516     return newfp.child(sia)
6517 
6518 
6519-def get_share(fp):
6520+def get_share(storageindex, shnum, fp):
6521     f = fp.open('rb')
6522     try:
6523         prefix = f.read(32)
6524hunk ./src/allmydata/storage/backends/disk/disk_backend.py 42
6525         f.close()
6526 
6527     if prefix == MutableDiskShare.MAGIC:
6528-        return MutableDiskShare(fp)
6529+        return MutableDiskShare(storageindex, shnum, fp)
6530     else:
6531         # assume it's immutable
6532hunk ./src/allmydata/storage/backends/disk/disk_backend.py 45
6533-        return ImmutableDiskShare(fp)
6534+        return ImmutableDiskShare(storageindex, shnum, fp)
6535 
6536 
6537 class DiskBackend(Backend):
6538hunk ./src/allmydata/storage/backends/disk/disk_backend.py 174
6539                 if not NUM_RE.match(shnumstr):
6540                     continue
6541                 sharehome = self._sharehomedir.child(shnumstr)
6542-                yield self.get_share(sharehome)
6543+                yield get_share(self.get_storage_index(), int(shnumstr), sharehome)
6544         except UnlistableError:
6545             # There is no shares directory at all.
6546             pass
6547hunk ./src/allmydata/storage/backends/disk/disk_backend.py 185
6548         return self._incominghomedir.child(str(shnum)).exists()
6549 
6550     def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
6551-        sharehome = self._sharehomedir.child(str(shnum))
6552+        finalhome = self._sharehomedir.child(str(shnum))
6553         incominghome = self._incominghomedir.child(str(shnum))
6554hunk ./src/allmydata/storage/backends/disk/disk_backend.py 187
6555-        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
6556-                                   max_size=max_space_per_bucket, create=True)
6557+        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, incominghome, finalhome,
6558+                                   max_size=max_space_per_bucket)
6559         bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
6560         if self._discard_storage:
6561             bw.throw_out_all_data = True
6562hunk ./src/allmydata/storage/backends/disk/disk_backend.py 198
6563         fileutil.fp_make_dirs(self._sharehomedir)
6564         sharehome = self._sharehomedir.child(str(shnum))
6565         serverid = storageserver.get_serverid()
6566-        return create_mutable_disk_share(sharehome, serverid, write_enabler, storageserver)
6567+        return create_mutable_disk_share(self.get_storage_index(), shnum, sharehome, serverid, write_enabler, storageserver)
6568 
6569     def _clean_up_after_unlink(self):
6570         fileutil.fp_rmdir_if_empty(self._sharehomedir)
6571hunk ./src/allmydata/storage/backends/disk/immutable.py 48
6572     LEASE_SIZE = struct.calcsize(">L32s32sL")
6573 
6574 
6575-    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
6576-        """ If max_size is not None then I won't allow more than
6577-        max_size to be written to me. If create=True then max_size
6578-        must not be None. """
6579-        precondition((max_size is not None) or (not create), max_size, create)
6580+    def __init__(self, storageindex, shnum, home, finalhome=None, max_size=None):
6581+        """
6582+        If max_size is not None then I won't allow more than max_size to be written to me.
6583+        If finalhome is not None (meaning that we are creating the share) then max_size
6584+        must not be None.
6585+        """
6586+        precondition((max_size is not None) or (finalhome is None), max_size, finalhome)
6587         self._storageindex = storageindex
6588         self._max_size = max_size
6589hunk ./src/allmydata/storage/backends/disk/immutable.py 57
6590-        self._incominghome = incominghome
6591-        self._home = finalhome
6592+
6593+        # If we are creating the share, _finalhome refers to the final path and
6594+        # _home to the incoming path. Otherwise, _finalhome is None.
6595+        self._finalhome = finalhome
6596+        self._home = home
6597         self._shnum = shnum
6598hunk ./src/allmydata/storage/backends/disk/immutable.py 63
6599-        if create:
6600-            # touch the file, so later callers will see that we're working on
6601+
6602+        if self._finalhome is not None:
6603+            # Touch the file, so later callers will see that we're working on
6604             # it. Also construct the metadata.
6605hunk ./src/allmydata/storage/backends/disk/immutable.py 67
6606-            assert not finalhome.exists()
6607-            fp_make_dirs(self._incominghome.parent())
6608+            assert not self._finalhome.exists()
6609+            fp_make_dirs(self._home.parent())
6610             # The second field -- the four-byte share data length -- is no
6611             # longer used as of Tahoe v1.3.0, but we continue to write it in
6612             # there in case someone downgrades a storage server from >=
6613hunk ./src/allmydata/storage/backends/disk/immutable.py 78
6614             # the largest length that can fit into the field. That way, even
6615             # if this does happen, the old < v1.3.0 server will still allow
6616             # clients to read the first part of the share.
6617-            self._incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6618+            self._home.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6619             self._lease_offset = max_size + 0x0c
6620             self._num_leases = 0
6621         else:
6622hunk ./src/allmydata/storage/backends/disk/immutable.py 101
6623                 % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
6624 
6625     def close(self):
6626-        fileutil.fp_make_dirs(self._home.parent())
6627-        self._incominghome.moveTo(self._home)
6628-        try:
6629-            # self._incominghome is like storage/shares/incoming/ab/abcde/4 .
6630-            # We try to delete the parent (.../ab/abcde) to avoid leaving
6631-            # these directories lying around forever, but the delete might
6632-            # fail if we're working on another share for the same storage
6633-            # index (like ab/abcde/5). The alternative approach would be to
6634-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
6635-            # ShareWriter), each of which is responsible for a single
6636-            # directory on disk, and have them use reference counting of
6637-            # their children to know when they should do the rmdir. This
6638-            # approach is simpler, but relies on os.rmdir refusing to delete
6639-            # a non-empty directory. Do *not* use fileutil.fp_remove() here!
6640-            fileutil.fp_rmdir_if_empty(self._incominghome.parent())
6641-            # we also delete the grandparent (prefix) directory, .../ab ,
6642-            # again to avoid leaving directories lying around. This might
6643-            # fail if there is another bucket open that shares a prefix (like
6644-            # ab/abfff).
6645-            fileutil.fp_rmdir_if_empty(self._incominghome.parent().parent())
6646-            # we leave the great-grandparent (incoming/) directory in place.
6647-        except EnvironmentError:
6648-            # ignore the "can't rmdir because the directory is not empty"
6649-            # exceptions, those are normal consequences of the
6650-            # above-mentioned conditions.
6651-            pass
6652-        pass
6653+        fileutil.fp_make_dirs(self._finalhome.parent())
6654+        self._home.moveTo(self._finalhome)
6655+
6656+        # self._home is like storage/shares/incoming/ab/abcde/4 .
6657+        # We try to delete the parent (.../ab/abcde) to avoid leaving
6658+        # these directories lying around forever, but the delete might
6659+        # fail if we're working on another share for the same storage
6660+        # index (like ab/abcde/5). The alternative approach would be to
6661+        # use a hierarchy of objects (PrefixHolder, BucketHolder,
6662+        # ShareWriter), each of which is responsible for a single
6663+        # directory on disk, and have them use reference counting of
6664+        # their children to know when they should do the rmdir. This
6665+        # approach is simpler, but relies on os.rmdir (used by
6666+        # fp_rmdir_if_empty) refusing to delete a non-empty directory.
6667+        # Do *not* use fileutil.fp_remove() here!
6668+        parent = self._home.parent()
6669+        fileutil.fp_rmdir_if_empty(parent)
6670+
6671+        # we also delete the grandparent (prefix) directory, .../ab ,
6672+        # again to avoid leaving directories lying around. This might
6673+        # fail if there is another bucket open that shares a prefix (like
6674+        # ab/abfff).
6675+        fileutil.fp_rmdir_if_empty(parent.parent())
6676+
6677+        # we leave the great-grandparent (incoming/) directory in place.
6678+
6679+        # allow lease changes after closing.
6680+        self._home = self._finalhome
6681+        self._finalhome = None
6682 
6683     def get_used_space(self):
6684hunk ./src/allmydata/storage/backends/disk/immutable.py 132
6685-        return (fileutil.get_used_space(self._home) +
6686-                fileutil.get_used_space(self._incominghome))
6687+        return (fileutil.get_used_space(self._finalhome) +
6688+                fileutil.get_used_space(self._home))
6689 
6690     def get_storage_index(self):
6691         return self._storageindex
6692hunk ./src/allmydata/storage/backends/disk/immutable.py 175
6693         precondition(offset >= 0, offset)
6694         if self._max_size is not None and offset+length > self._max_size:
6695             raise DataTooLargeError(self._max_size, offset, length)
6696-        f = self._incominghome.open(mode='rb+')
6697+        f = self._home.open(mode='rb+')
6698         try:
6699             real_offset = self._data_offset+offset
6700             f.seek(real_offset)
6701hunk ./src/allmydata/storage/backends/disk/immutable.py 205
6702 
6703     # These lease operations are intended for use by disk_backend.py.
6704     # Other clients should not depend on the fact that the disk backend
6705-    # stores leases in share files.
6706+    # stores leases in share files. XXX bucket.py also relies on this.
6707 
6708     def get_leases(self):
6709         """Yields a LeaseInfo instance for all leases."""
6710hunk ./src/allmydata/storage/backends/disk/immutable.py 221
6711             f.close()
6712 
6713     def add_lease(self, lease_info):
6714-        f = self._incominghome.open(mode='rb')
6715+        f = self._home.open(mode='rb+')
6716         try:
6717             num_leases = self._read_num_leases(f)
6718hunk ./src/allmydata/storage/backends/disk/immutable.py 224
6719-        finally:
6720-            f.close()
6721-        f = self._home.open(mode='wb+')
6722-        try:
6723             self._write_lease_record(f, num_leases, lease_info)
6724             self._write_num_leases(f, num_leases+1)
6725         finally:
6726hunk ./src/allmydata/storage/backends/disk/mutable.py 440
6727         pass
6728 
6729 
6730-def create_mutable_disk_share(fp, serverid, write_enabler, parent):
6731-    ms = MutableDiskShare(fp, parent)
6732+def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent):
6733+    ms = MutableDiskShare(storageindex, shnum, fp, parent)
6734     ms.create(serverid, write_enabler)
6735     del ms
6736hunk ./src/allmydata/storage/backends/disk/mutable.py 444
6737-    return MutableDiskShare(fp, parent)
6738+    return MutableDiskShare(storageindex, shnum, fp, parent)
6739hunk ./src/allmydata/storage/bucket.py 44
6740         start = time.time()
6741 
6742         self._share.close()
6743-        filelen = self._share.stat()
6744+        # XXX should this be self._share.get_used_space() ?
6745+        consumed_size = self._share.get_size()
6746         self._share = None
6747 
6748         self.closed = True
6749hunk ./src/allmydata/storage/bucket.py 51
6750         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
6751 
6752-        self.ss.bucket_writer_closed(self, filelen)
6753+        self.ss.bucket_writer_closed(self, consumed_size)
6754         self.ss.add_latency("close", time.time() - start)
6755         self.ss.count("close")
6756 
6757hunk ./src/allmydata/storage/server.py 182
6758                                 renew_secret, cancel_secret,
6759                                 sharenums, allocated_size,
6760                                 canary, owner_num=0):
6761-        # cancel_secret is no longer used.
6762         # owner_num is not for clients to set, but rather it should be
6763         # curried into a StorageServer instance dedicated to a particular
6764         # owner.
6765hunk ./src/allmydata/storage/server.py 195
6766         # Note that the lease should not be added until the BucketWriter
6767         # has been closed.
6768         expire_time = time.time() + 31*24*60*60
6769-        lease_info = LeaseInfo(owner_num, renew_secret,
6770+        lease_info = LeaseInfo(owner_num, renew_secret, cancel_secret,
6771                                expire_time, self._serverid)
6772 
6773         max_space_per_bucket = allocated_size
6774hunk ./src/allmydata/test/no_network.py 349
6775         return self.g.servers_by_number[i]
6776 
6777     def get_serverdir(self, i):
6778-        return self.g.servers_by_number[i].backend.storedir
6779+        return self.g.servers_by_number[i].backend._storedir
6780 
6781     def remove_server(self, i):
6782         self.g.remove_server(self.g.servers_by_number[i].get_serverid())
6783hunk ./src/allmydata/test/no_network.py 357
6784     def iterate_servers(self):
6785         for i in sorted(self.g.servers_by_number.keys()):
6786             ss = self.g.servers_by_number[i]
6787-            yield (i, ss, ss.backend.storedir)
6788+            yield (i, ss, ss.backend._storedir)
6789 
6790     def find_uri_shares(self, uri):
6791         si = tahoe_uri.from_string(uri).get_storage_index()
6792hunk ./src/allmydata/test/no_network.py 384
6793         return shares
6794 
6795     def copy_share(self, from_share, uri, to_server):
6796-        si = uri.from_string(self.uri).get_storage_index()
6797+        si = tahoe_uri.from_string(uri).get_storage_index()
6798         (i_shnum, i_serverid, i_sharefp) = from_share
6799         shares_dir = to_server.backend.get_shareset(si)._sharehomedir
6800         i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
6801hunk ./src/allmydata/test/test_download.py 127
6802 
6803         return d
6804 
6805-    def _write_shares(self, uri, shares):
6806-        si = uri.from_string(uri).get_storage_index()
6807+    def _write_shares(self, fileuri, shares):
6808+        si = uri.from_string(fileuri).get_storage_index()
6809         for i in shares:
6810             shares_for_server = shares[i]
6811             for shnum in shares_for_server:
6812hunk ./src/allmydata/test/test_hung_server.py 36
6813 
6814     def _hang(self, servers, **kwargs):
6815         for ss in servers:
6816-            self.g.hang_server(ss.get_serverid(), **kwargs)
6817+            self.g.hang_server(ss.original.get_serverid(), **kwargs)
6818 
6819     def _unhang(self, servers, **kwargs):
6820         for ss in servers:
6821hunk ./src/allmydata/test/test_hung_server.py 40
6822-            self.g.unhang_server(ss.get_serverid(), **kwargs)
6823+            self.g.unhang_server(ss.original.get_serverid(), **kwargs)
6824 
6825     def _hang_shares(self, shnums, **kwargs):
6826         # hang all servers who are holding the given shares
6827hunk ./src/allmydata/test/test_hung_server.py 52
6828                     hung_serverids.add(i_serverid)
6829 
6830     def _delete_all_shares_from(self, servers):
6831-        serverids = [ss.get_serverid() for ss in servers]
6832+        serverids = [ss.original.get_serverid() for ss in servers]
6833         for (i_shnum, i_serverid, i_sharefp) in self.shares:
6834             if i_serverid in serverids:
6835                 i_sharefp.remove()
6836hunk ./src/allmydata/test/test_hung_server.py 58
6837 
6838     def _corrupt_all_shares_in(self, servers, corruptor_func):
6839-        serverids = [ss.get_serverid() for ss in servers]
6840+        serverids = [ss.original.get_serverid() for ss in servers]
6841         for (i_shnum, i_serverid, i_sharefp) in self.shares:
6842             if i_serverid in serverids:
6843                 self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor_func)
6844hunk ./src/allmydata/test/test_hung_server.py 64
6845 
6846     def _copy_all_shares_from(self, from_servers, to_server):
6847-        serverids = [ss.get_serverid() for ss in from_servers]
6848+        serverids = [ss.original.get_serverid() for ss in from_servers]
6849         for (i_shnum, i_serverid, i_sharefp) in self.shares:
6850             if i_serverid in serverids:
6851                 self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
6852hunk ./src/allmydata/test/test_mutable.py 2983
6853             fso = debug.FindSharesOptions()
6854             storage_index = base32.b2a(n.get_storage_index())
6855             fso.si_s = storage_index
6856-            fso.nodedirs = [unicode(os.path.dirname(os.path.abspath(storedir)))
6857+            fso.nodedirs = [unicode(storedir.parent().path)
6858                             for (i,ss,storedir)
6859                             in self.iterate_servers()]
6860             fso.stdout = StringIO()
6861hunk ./src/allmydata/test/test_upload.py 818
6862         if share_number is not None:
6863             self._copy_share_to_server(share_number, server_number)
6864 
6865-
6866     def _copy_share_to_server(self, share_number, server_number):
6867         ss = self.g.servers_by_number[server_number]
6868hunk ./src/allmydata/test/test_upload.py 820
6869-        self.copy_share(self.shares[share_number], ss)
6870+        self.copy_share(self.shares[share_number], self.uri, ss)
6871 
6872     def _setup_grid(self):
6873         """
6874}
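
(Reviewer note, not part of the patch bundle: the BucketWriter fix above hinges on the share's lifecycle -- an immutable share is written at its incoming path and only moved to its final home by close(), after which lease operations must address the final file. Below is a minimal sketch of that move-and-prune step, using the same twisted FilePath API as the patch; close_share and rmdir_if_empty are hypothetical stand-ins for ImmutableDiskShare.close() and fileutil.fp_rmdir_if_empty.)

import errno, os
from twisted.python.filepath import FilePath

def rmdir_if_empty(fp):
    # Remove a directory, tolerating "directory not empty".
    try:
        os.rmdir(fp.path)
    except OSError, e:
        if e.errno != errno.ENOTEMPTY:
            raise

def close_share(incoming_fp, final_fp):
    # Move the finished share from incoming/ab/abcde/N to shares/ab/abcde/N.
    try:
        final_fp.parent().makedirs()
    except OSError:
        if not final_fp.parent().isdir():  # idempotent, like fp_make_dirs
            raise
    incoming_fp.moveTo(final_fp)
    # Prune the now-possibly-empty bucket (.../ab/abcde) and prefix (.../ab)
    # directories; this legitimately does nothing if another share of the
    # same bucket or prefix is still being written.
    rmdir_if_empty(incoming_fp.parent())
    rmdir_if_empty(incoming_fp.parent().parent())
    # The incoming/ directory itself is left in place. After this point the
    # share object must refer to final_fp (cf. self._home = self._finalhome
    # in the patch), so that add_lease() appends to the final file rather
    # than the vanished incoming one -- the bug this patch fixes.
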
6875
6876Context:
6877
6878[Make platform-detection code tolerate linux-3.0, patch by zooko.
6879Brian Warner <warner@lothar.com>**20110915202620
6880 Ignore-this: af63cf9177ae531984dea7a1cad03762
6881 
6882 Otherwise address-autodetection can't find ifconfig. refs #1536
6883]
6884[test_web.py: fix a bug in _count_leases that was causing us to check only the lease count of one share file, not of all share files as intended.
6885david-sarah@jacaranda.org**20110915185126
6886 Ignore-this: d96632bc48d770b9b577cda1bbd8ff94
6887]
6888[docs: insert a newline at the beginning of known_issues.rst to see if this makes it render more nicely in trac
6889zooko@zooko.com**20110914064728
6890 Ignore-this: aca15190fa22083c5d4114d3965f5d65
6891]
6892[docs: remove the coding: utf-8 declaration at the to of known_issues.rst, since the trac rendering doesn't hide it
6893zooko@zooko.com**20110914055713
6894 Ignore-this: 941ed32f83ead377171aa7a6bd198fcf
6895]
6896[docs: more cleanup of known_issues.rst -- now it passes "rst2html --verbose" without comment
6897zooko@zooko.com**20110914055419
6898 Ignore-this: 5505b3d76934bd97d0312cc59ed53879
6899]
6900[docs: more formatting improvements to known_issues.rst
6901zooko@zooko.com**20110914051639
6902 Ignore-this: 9ae9230ec9a38a312cbacaf370826691
6903]
6904[docs: reformatting of known_issues.rst
6905zooko@zooko.com**20110914050240
6906 Ignore-this: b8be0375079fb478be9d07500f9aaa87
6907]
6908[docs: fix formatting error in docs/known_issues.rst
6909zooko@zooko.com**20110914045909
6910 Ignore-this: f73fe74ad2b9e655aa0c6075acced15a
6911]
6912[merge Tahoe-LAFS v1.8.3 release announcement with trunk
6913zooko@zooko.com**20110913210544
6914 Ignore-this: 163f2c3ddacca387d7308e4b9332516e
6915]
6916[docs: release notes for Tahoe-LAFS v1.8.3
6917zooko@zooko.com**20110913165826
6918 Ignore-this: 84223604985b14733a956d2fbaeb4e9f
6919]
6920[tests: bump up the timeout in this test that fails on FreeStorm's CentOS in order to see if it is just very slow
6921zooko@zooko.com**20110913024255
6922 Ignore-this: 6a86d691e878cec583722faad06fb8e4
6923]
6924[interfaces: document that the 'fills-holes-with-zero-bytes' key should be used to detect whether a storage server has that behavior. refs #1528
6925david-sarah@jacaranda.org**20110913002843
6926 Ignore-this: 1a00a6029d40f6792af48c5578c1fd69
6927]
6928[CREDITS: more CREDITS for Kevan and David-Sarah
6929zooko@zooko.com**20110912223357
6930 Ignore-this: 4ea8f0d6f2918171d2f5359c25ad1ada
6931]
6932[merge NEWS about the mutable file bounds fixes with NEWS about work-in-progress
6933zooko@zooko.com**20110913205521
6934 Ignore-this: 4289a4225f848d6ae6860dd39bc92fa8
6935]
6936[doc: add NEWS item about fixes to potential palimpsest issues in mutable files
6937zooko@zooko.com**20110912223329
6938 Ignore-this: 9d63c95ddf95c7d5453c94a1ba4d406a
6939 ref. #1528
6940]
6941[merge the NEWS about the security fix (#1528) with the work-in-progress NEWS
6942zooko@zooko.com**20110913205153
6943 Ignore-this: 88e88a2ad140238c62010cf7c66953fc
6944]
6945[doc: add NEWS entry about the issue which allows unauthorized deletion of shares
6946zooko@zooko.com**20110912223246
6947 Ignore-this: 77e06d09103d2ef6bb51ea3e5d6e80b0
6948 ref. #1528
6949]
6950[doc: add entry in known_issues.rst about the issue which allows unauthorized deletion of shares
6951zooko@zooko.com**20110912223135
6952 Ignore-this: b26c6ea96b6c8740b93da1f602b5a4cd
6953 ref. #1528
6954]
6955[storage: more paranoid handling of bounds and palimpsests in mutable share files
6956zooko@zooko.com**20110912222655
6957 Ignore-this: a20782fa423779ee851ea086901e1507
6958 * storage server ignores requests to extend shares by sending a new_length
6959 * storage server fills exposed holes (created by sending a write vector whose offset begins after the end of the current data) with 0 to avoid "palimpsest" exposure of previous contents
6960 * storage server zeroes out lease info at the old location when moving it to a new location
6961 ref. #1528
6962]
6963[storage: test that the storage server ignores requests to extend shares by sending a new_length, and that the storage server fills exposed holes with 0 to avoid "palimpsest" exposure of previous contents
6964zooko@zooko.com**20110912222554
6965 Ignore-this: 61ebd7b11250963efdf5b1734a35271
6966 ref. #1528
6967]
6968[immutable: prevent clients from reading past the end of share data, which would allow them to learn the cancellation secret
6969zooko@zooko.com**20110912222458
6970 Ignore-this: da1ebd31433ea052087b75b2e3480c25
6971 Declare explicitly that we prevent this problem in the server's version dict.
6972 fixes #1528 (there are two patches that are each a sufficient fix to #1528 and this is one of them)
6973]
6974[storage: remove the storage server's "remote_cancel_lease" function
6975zooko@zooko.com**20110912222331
6976 Ignore-this: 1c32dee50e0981408576daffad648c50
6977 We're removing this function because it is currently unused, because it is dangerous, and because the bug described in #1528 leaks the cancellation secret, which allows anyone who knows a file's storage index to abuse this function to delete shares of that file.
6978 fixes #1528 (there are two patches that are each a sufficient fix to #1528 and this is one of them)
6979]
6980[storage: test that the storage server does *not* have a "remote_cancel_lease" function
6981zooko@zooko.com**20110912222324
6982 Ignore-this: 21c652009704652d35f34651f98dd403
6983 We're removing this function because it is currently unused, because it is dangerous, and because the bug described in #1528 leaks the cancellation secret, which allows anyone who knows a file's storage index to abuse this function to delete shares of that file.
6984 ref. #1528
6985]
6986[immutable: test whether the server allows clients to read past the end of share data, which would allow them to learn the cancellation secret
6987zooko@zooko.com**20110912221201
6988 Ignore-this: 376e47b346c713d37096531491176349
6989 Also test whether the server explicitly declares that it prevents this problem.
6990 ref #1528
6991]
6992[Retrieve._activate_enough_peers: rewrite Verify logic
6993Brian Warner <warner@lothar.com>**20110909181150
6994 Ignore-this: 9367c11e1eacbf025f75ce034030d717
6995]
6996[Retrieve: implement/test stopProducing
6997Brian Warner <warner@lothar.com>**20110909181150
6998 Ignore-this: 47b2c3df7dc69835e0a066ca12e3c178
6999]
7000[move DownloadStopped from download.common to interfaces
7001Brian Warner <warner@lothar.com>**20110909181150
7002 Ignore-this: 8572acd3bb16e50341dbed8eb1d90a50
7003]
7004[retrieve.py: remove vestigal self._validated_readers
7005Brian Warner <warner@lothar.com>**20110909181150
7006 Ignore-this: faab2ec14e314a53a2ffb714de626e2d
7007]
7008[Retrieve: rewrite flow-control: use a top-level loop() to catch all errors
7009Brian Warner <warner@lothar.com>**20110909181150
7010 Ignore-this: e162d2cd53b3d3144fc6bc757e2c7714
7011 
7012 This ought to close the potential for dropped errors and hanging downloads.
7013 Verify needs to be examined, I may have broken it, although all tests pass.
7014]
7015[Retrieve: merge _validate_active_prefixes into _add_active_peers
7016Brian Warner <warner@lothar.com>**20110909181150
7017 Ignore-this: d3ead31e17e69394ae7058eeb5beaf4c
7018]
7019[Retrieve: remove the initial prefix-is-still-good check
7020Brian Warner <warner@lothar.com>**20110909181150
7021 Ignore-this: da66ee51c894eaa4e862e2dffb458acc
7022 
7023 This check needs to be done with each fetch from the storage server, to
7024 detect when someone has changed the share (i.e. our servermap goes stale).
7025 Doing it just once at the beginning of retrieve isn't enough: a write might
7026 occur after the first segment but before the second, etc.
7027 
7028 _try_to_validate_prefix() was not removed: it will be used by the future
7029 check-with-each-fetch code.
7030 
7031 test_mutable.Roundtrip.test_corrupt_all_seqnum_late was disabled, since it
7032 fails until this check is brought back. (the corruption it applies only
7033 touches the prefix, not the block data, so the check-less retrieve actually
7034 tolerates it). Don't forget to re-enable it once the check is brought back.
7035]
7036[MDMFSlotReadProxy: remove the queue
7037Brian Warner <warner@lothar.com>**20110909181150
7038 Ignore-this: 96673cb8dda7a87a423de2f4897d66d2
7039 
7040 This is a neat trick to reduce Foolscap overhead, but the need for an
7041 explicit flush() complicates the Retrieve path and makes it prone to
7042 lost-progress bugs.
7043 
7044 Also change test_mutable.FakeStorageServer to tolerate multiple reads of the
7045 same share in a row, a limitation exposed by turning off the queue.
7046]
7047[rearrange Retrieve: first step, shouldn't change order of execution
7048Brian Warner <warner@lothar.com>**20110909181149
7049 Ignore-this: e3006368bfd2802b82ea45c52409e8d6
7050]
7051[CLI: test_cli.py -- remove an unnecessary call in test_mkdir_mutable_type. refs #1527
7052david-sarah@jacaranda.org**20110906183730
7053 Ignore-this: 122e2ffbee84861c32eda766a57759cf
7054]
7055[CLI: improve test for 'tahoe mkdir --mutable-type='. refs #1527
7056david-sarah@jacaranda.org**20110906183020
7057 Ignore-this: f1d4598e6c536f0a2b15050b3bc0ef9d
7058]
7059[CLI: make the --mutable-type option value for 'tahoe put' and 'tahoe mkdir' case-insensitive, and change --help for these commands accordingly. fixes #1527
7060david-sarah@jacaranda.org**20110905020922
7061 Ignore-this: 75a6df0a2df9c467d8c010579e9a024e
7062]
7063[cli: make --mutable-type imply --mutable in 'tahoe put'
7064Kevan Carstensen <kevan@isnotajoke.com>**20110903190920
7065 Ignore-this: 23336d3c43b2a9554e40c2a11c675e93
7066]
7067[SFTP: add a comment about a subtle interaction between OverwriteableFileConsumer and GeneralSFTPFile, and test the case it is commenting on.
7068david-sarah@jacaranda.org**20110903222304
7069 Ignore-this: 980c61d4dd0119337f1463a69aeebaf0
7070]
7071[improve the storage/mutable.py asserts even more
7072warner@lothar.com**20110901160543
7073 Ignore-this: 5b2b13c49bc4034f96e6e3aaaa9a9946
7074]
7075[storage/mutable.py: special characters in struct.foo arguments indicate standard as opposed to native sizes, we should be using these characters in these asserts
7076wilcoxjg@gmail.com**20110901084144
7077 Ignore-this: 28ace2b2678642e4d7269ddab8c67f30
7078]
7079[docs/write_coordination.rst: fix formatting and add more specific warning about access via sshfs.
7080david-sarah@jacaranda.org**20110831232148
7081 Ignore-this: cd9c851d3eb4e0a1e088f337c291586c
7082]
7083[test_mutable.Version: consolidate some tests, reduce runtime from 19s to 15s
7084warner@lothar.com**20110831050451
7085 Ignore-this: 64815284d9e536f8f3798b5f44cf580c
7086]
7087[mutable/retrieve: handle the case where self._read_length is 0.
7088Kevan Carstensen <kevan@isnotajoke.com>**20110830210141
7089 Ignore-this: fceafbe485851ca53f2774e5a4fd8d30
7090 
7091 Note that the downloader will still fetch a segment for a zero-length
7092 read, which is wasteful. Fixing that isn't specifically required to fix
7093 #1512, but it should probably be fixed before 1.9.
7094]
7095[NEWS: added summary of all changes since 1.8.2. Needs editing.
7096Brian Warner <warner@lothar.com>**20110830163205
7097 Ignore-this: 273899b37a899fc6919b74572454b8b2
7098]
7099[test_mutable.Update: only upload the files needed for each test. refs #1500
7100Brian Warner <warner@lothar.com>**20110829072717
7101 Ignore-this: 4d2ab4c7523af9054af7ecca9c3d9dc7
7102 
7103 This first step shaves 15% off the runtime: from 139s to 119s on my laptop.
7104 It also fixes a couple of places where a Deferred was being dropped, which
7105 would cause two tests to run in parallel and also confuse error reporting.
7106]
7107[Let Uploader retain History instead of passing it into upload(). Fixes #1079.
7108Brian Warner <warner@lothar.com>**20110829063246
7109 Ignore-this: 3902c58ec12bd4b2d876806248e19f17
7110 
7111 This consistently records all immutable uploads in the Recent Uploads And
7112 Downloads page, regardless of code path. Previously, certain webapi upload
7113 operations (like PUT /uri/$DIRCAP/newchildname) failed to pass the History
7114 object and were left out.
7115]
7116[Fix mutable publish/retrieve timing status displays. Fixes #1505.
7117Brian Warner <warner@lothar.com>**20110828232221
7118 Ignore-this: 4080ce065cf481b2180fd711c9772dd6
7119 
7120 publish:
7121 * encrypt and encode times are cumulative, not just current-segment
7122 
7123 retrieve:
7124 * same for decrypt and decode times
7125 * update "current status" to include segment number
7126 * set status to Finished/Failed when download is complete
7127 * set progress to 1.0 when complete
7128 
7129 More improvements to consider:
7130 * progress is currently 0% or 100%: should calculate how many segments are
7131   involved (remembering retrieve can be less than the whole file) and set it
7132   to a fraction
7133 * "fetch" time is fuzzy: what we want is to know how much of the delay is not
7134   our own fault, but since we do decode/decrypt work while waiting for more
7135   shares, it's not straightforward
7136]
7137[Teach 'tahoe debug catalog-shares about MDMF. Closes #1507.
7138Brian Warner <warner@lothar.com>**20110828080931
7139 Ignore-this: 56ef2951db1a648353d7daac6a04c7d1
7140]
7141[debug.py: remove some dead comments
7142Brian Warner <warner@lothar.com>**20110828074556
7143 Ignore-this: 40e74040dd4d14fd2f4e4baaae506b31
7144]
7145[hush pyflakes
7146Brian Warner <warner@lothar.com>**20110828074254
7147 Ignore-this: bef9d537a969fa82fe4decc4ba2acb09
7148]
7149[MutableFileNode.set_downloader_hints: never depend upon order of dict.values()
7150Brian Warner <warner@lothar.com>**20110828074103
7151 Ignore-this: caaf1aa518dbdde4d797b7f335230faa
7152 
7153 The old code was calculating the "extension parameters" (a list) from the
7154 downloader hints (a dictionary) with hints.values(), which is not stable, and
7155 would result in corrupted filecaps (with the 'k' and 'segsize' hints
7156 occasionally swapped). The new code always uses [k,segsize].
7157]
7158[layout.py: fix MDMF share layout documentation
7159Brian Warner <warner@lothar.com>**20110828073921
7160 Ignore-this: 3f13366fed75b5e31b51ae895450a225
7161]
7162[teach 'tahoe debug dump-share' about MDMF and offsets. refs #1507
7163Brian Warner <warner@lothar.com>**20110828073834
7164 Ignore-this: 3a9d2ef9c47a72bf1506ba41199a1dea
7165]
7166[test_mutable.Version.test_debug: use splitlines() to fix buildslaves
7167Brian Warner <warner@lothar.com>**20110828064728
7168 Ignore-this: c7f6245426fc80b9d1ae901d5218246a
7169 
7170 Any slave running in a directory with spaces in the name was miscounting
7171 shares, causing the test to fail.
7172]
7173[test_mutable.Version: exercise 'tahoe debug find-shares' on MDMF. refs #1507
7174Brian Warner <warner@lothar.com>**20110828005542
7175 Ignore-this: cb20bea1c28bfa50a72317d70e109672
7176 
7177 Also changes NoNetworkGrid to put shares in storage/shares/ .
7178]
7179[test_mutable.py: oops, missed a .todo
7180Brian Warner <warner@lothar.com>**20110828002118
7181 Ignore-this: fda09ae86481352b7a627c278d2a3940
7182]
7183[test_mutable: merge davidsarah's patch with my Version refactorings
7184warner@lothar.com**20110827235707
7185 Ignore-this: b5aaf481c90d99e33827273b5d118fd0
7186]
7187[Make the immutable/read-only constraint checking for MDMF URIs identical to that for SSK URIs. refs #393
7188david-sarah@jacaranda.org**20110823012720
7189 Ignore-this: e1f59d7ff2007c81dbef2aeb14abd721
7190]
7191[Additional tests for MDMF URIs and for zero-length files. refs #393
7192david-sarah@jacaranda.org**20110823011532
7193 Ignore-this: a7cc0c09d1d2d72413f9cd227c47a9d5
7194]
7195[Additional tests for zero-length partial reads and updates to mutable versions. refs #393
7196david-sarah@jacaranda.org**20110822014111
7197 Ignore-this: 5fc6f4d06e11910124e4a277ec8a43ea
7198]
7199[test_mutable.Version: factor out some expensive uploads, save 25% runtime
7200Brian Warner <warner@lothar.com>**20110827232737
7201 Ignore-this: ea37383eb85ea0894b254fe4dfb45544
7202]
7203[SDMF: update filenode with correct k/N after Retrieve. Fixes #1510.
7204Brian Warner <warner@lothar.com>**20110827225031
7205 Ignore-this: b50ae6e1045818c400079f118b4ef48
7206 
7207 Without this, we get a regression when modifying a mutable file that was
7208 created with more shares (larger N) than our current tahoe.cfg . The
7209 modification attempt creates new versions of the (0,1,..,newN-1) shares, but
7210 leaves the old versions of the (newN,..,oldN-1) shares alone (and throws a
7211 assertion error in SDMFSlotWriteProxy.finish_publishing in the process).
7212 
7213 The mixed versions that result (some shares with e.g. N=10, some with N=20,
7214 such that both versions are recoverable) cause problems for the Publish code,
7215 even before MDMF landed. Might be related to refs #1390 and refs #1042.
7216]
7217[layout.py: annotate assertion to figure out 'tahoe backup' failure
7218Brian Warner <warner@lothar.com>**20110827195253
7219 Ignore-this: 9b92b954e3ed0d0f80154fff1ff674e5
7220]
7221[Add 'tahoe debug dump-cap' support for MDMF, DIR2-CHK, DIR2-MDMF. refs #1507.
7222Brian Warner <warner@lothar.com>**20110827195048
7223 Ignore-this: 61c6af5e33fc88e0251e697a50addb2c
7224 
7225 This also adds tests for all those cases, and fixes an omission in uri.py
7226 that broke parsing of DIR2-MDMF-Verifier and DIR2-CHK-Verifier.
7227]
7228[MDMF: more writable/writeable consistentifications
7229warner@lothar.com**20110827190602
7230 Ignore-this: 22492a9e20c1819ddb12091062888b55
7231]
7232[MDMF: s/Writable/Writeable/g, for consistency with existing SDMF code
7233warner@lothar.com**20110827183357
7234 Ignore-this: 9dd312acedbdb2fc2f7bef0d0fb17c0b
7235]
7236[setup.cfg: remove no-longer-supported test_mac_diskimage alias. refs #1479
7237david-sarah@jacaranda.org**20110826230345
7238 Ignore-this: 40e908b8937322a290fb8012bfcad02a
7239]
7240[test_mutable.Update: increase timeout from 120s to 400s, slaves are failing
7241Brian Warner <warner@lothar.com>**20110825230140
7242 Ignore-this: 101b1924a30cdbda9b2e419e95ca15ec
7243]
7244[tests: fix check_memory test
7245zooko@zooko.com**20110825201116
7246 Ignore-this: 4d66299fa8cb61d2ca04b3f45344d835
7247 fixes #1503
7248]
7249[TAG allmydata-tahoe-1.9.0a1
7250warner@lothar.com**20110825161122
7251 Ignore-this: 3cbf49f00dbda58189f893c427f65605
7252]
7253Patch bundle hash:
72544cbbb3fc43ad8c407d84cc16cac65401b2878c77