Ticket #999: passtest_status_bad_disk_stats.darcs.patch

File passtest_status_bad_disk_stats.darcs.patch, 500.1 KB (added by zancas at 2011-09-27T06:37:30Z)

contains changes in v12

Wed Aug 24 17:32:17 PDT 2011  david-sarah@jacaranda.org
  * interfaces.py: 'which -> that' grammar cleanup.

Mon Sep 19 16:29:26 PDT 2011  david-sarah@jacaranda.org
  * Pluggable backends -- new and moved files, changes to moved files. refs #999

Mon Sep 19 16:32:56 PDT 2011  david-sarah@jacaranda.org
  * Pluggable backends -- all other changes. refs #999

Mon Sep 19 20:38:03 PDT 2011  david-sarah@jacaranda.org
  * Work-in-progress, includes fix to bug involving BucketWriter. refs #999

Tue Sep 20 10:17:37 PDT 2011  david-sarah@jacaranda.org
  * docs/backends: document the configuration options for the pluggable backends scheme. refs #999

Tue Sep 20 20:12:07 PDT 2011  david-sarah@jacaranda.org
  * Fix some incorrect attribute accesses. refs #999

Tue Sep 20 20:16:25 PDT 2011  david-sarah@jacaranda.org
  * docs/backends/S3.rst: remove Issues section. refs #999

Tue Sep 20 20:17:05 PDT 2011  david-sarah@jacaranda.org
  * docs/backends/S3.rst, disk.rst: describe type of space settings as 'quantity of space', not 'str'. refs #999

Wed Sep 21 11:46:49 PDT 2011  david-sarah@jacaranda.org
  * More fixes to tests needed for pluggable backends. refs #999

Wed Sep 21 15:14:21 PDT 2011  david-sarah@jacaranda.org
  * Fix more shallow bugs, mainly FilePathification. Also, remove the max_space_per_bucket parameter from BucketWriter since it can be obtained from the _max_size attribute of the share (via a new get_allocated_size() accessor). refs #999

Wed Sep 21 15:20:38 PDT 2011  david-sarah@jacaranda.org
  * uri.py: resolve a conflict between trunk and the pluggable-backends patches. refs #999

Wed Sep 21 21:54:51 PDT 2011  david-sarah@jacaranda.org
  * Fix some more test failures. refs #999

Thu Sep 22 11:30:08 PDT 2011  david-sarah@jacaranda.org
  * Fix most of the crawler tests. refs #999

Thu Sep 22 11:33:23 PDT 2011  david-sarah@jacaranda.org
  * Reinstate the cancel_lease methods of ImmutableDiskShare and MutableDiskShare, since they are needed for lease expiry. refs #999

Thu Sep 22 18:20:44 PDT 2011  david-sarah@jacaranda.org
  * Blank line cleanups.

Thu Sep 22 21:08:25 PDT 2011  david-sarah@jacaranda.org
  * mutable/publish.py: elements should not be removed from a dictionary while it is being iterated over. refs #393

Thu Sep 22 21:10:03 PDT 2011  david-sarah@jacaranda.org
  * A few comment cleanups. refs #999

Thu Sep 22 21:11:15 PDT 2011  david-sarah@jacaranda.org
  * Move advise_corrupt_share to allmydata/storage/backends/base.py, since it will be common to the disk and S3 backends. refs #999

Thu Sep 22 21:13:14 PDT 2011  david-sarah@jacaranda.org
  * Add incomplete S3 backend. refs #999

Fri Sep 23 13:37:23 PDT 2011  david-sarah@jacaranda.org
  * interfaces.py: add fill_in_space_stats method to IStorageBackend. refs #999

Fri Sep 23 13:44:25 PDT 2011  david-sarah@jacaranda.org
  * Remove redundant si_s argument from check_write_enabler. refs #999

Fri Sep 23 13:46:11 PDT 2011  david-sarah@jacaranda.org
  * Implement readv for immutable shares. refs #999

Fri Sep 23 13:49:14 PDT 2011  david-sarah@jacaranda.org
  * The cancel secret needs to be unique, even if it isn't explicitly provided. refs #999

Fri Sep 23 13:49:45 PDT 2011  david-sarah@jacaranda.org
  * Make EmptyShare.check_testv a simple function. refs #999

Fri Sep 23 13:52:19 PDT 2011  david-sarah@jacaranda.org
  * Update the null backend to take into account interface changes. Also, it now records which shares are present, but not their contents. refs #999

Fri Sep 23 13:53:45 PDT 2011  david-sarah@jacaranda.org
  * Update the S3 backend. refs #999

Fri Sep 23 13:55:10 PDT 2011  david-sarah@jacaranda.org
  * Minor cleanup to disk backend. refs #999

Mon Sep 26 23:31:56 PDT 2011  wilcoxjg@gmail.com
  * disk/disk_backends.py:  Modify get_available_space to handle EnvironmentErrors with a return value of 0, indicating that there's an error in the OS call.

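The last entry above is the behavioural change this attachment carries: a failed OS call for disk statistics should be reported as zero available space rather than raised. A minimal sketch of that behaviour, assuming the fileutil.get_disk_stats helper used by the patch below (which can raise EnvironmentError when the underlying statvfs(2)/GetDiskFreeSpaceEx call fails); this is an illustration, not the patch itself:

    from allmydata.util import fileutil

    def get_available_space(sharedir, reserved_space):
        try:
            # 'avail' is the space available to non-root users, minus the
            # operator-configured reserved_space
            return fileutil.get_disk_stats(sharedir, reserved_space)['avail']
        except AttributeError:
            # this platform has no statvfs(2)/GetDiskFreeSpaceEx equivalent
            return None
        except EnvironmentError:
            # the OS call itself failed; report zero available space
            return 0
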
New patches:

[interfaces.py: 'which -> that' grammar cleanup.
david-sarah@jacaranda.org**20110825003217
 Ignore-this: a3e15f3676de1b346ad78aabdfb8cac6
] {
hunk ./src/allmydata/interfaces.py 38
     the StubClient. This object doesn't actually offer any services, but the
     announcement helps the Introducer keep track of which clients are
     subscribed (so the grid admin can keep track of things like the size of
-    the grid and the client versions in use. This is the (empty)
+    the grid and the client versions in use). This is the (empty)
     RemoteInterface for the StubClient."""
 
 class RIBucketWriter(RemoteInterface):
hunk ./src/allmydata/interfaces.py 276
         (binary) storage index string, and 'shnum' is the integer share
         number. 'reason' is a human-readable explanation of the problem,
         probably including some expected hash values and the computed ones
-        which did not match. Corruption advisories for mutable shares should
+        that did not match. Corruption advisories for mutable shares should
         include a hash of the public key (the same value that appears in the
         mutable-file verify-cap), since the current share format does not
         store that on disk.
hunk ./src/allmydata/interfaces.py 413
           remote_host: the IAddress, if connected, otherwise None
 
         This method is intended for monitoring interfaces, such as a web page
-        which describes connecting and connected peers.
+        that describes connecting and connected peers.
         """
 
     def get_all_peerids():
hunk ./src/allmydata/interfaces.py 515
 
     # TODO: rename to get_read_cap()
     def get_readonly():
-        """Return another IURI instance, which represents a read-only form of
+        """Return another IURI instance that represents a read-only form of
         this one. If is_readonly() is True, this returns self."""
 
     def get_verify_cap():
hunk ./src/allmydata/interfaces.py 542
         passing into init_from_string."""
 
 class IDirnodeURI(Interface):
-    """I am a URI which represents a dirnode."""
+    """I am a URI that represents a dirnode."""
 
 class IFileURI(Interface):
hunk ./src/allmydata/interfaces.py 545
-    """I am a URI which represents a filenode."""
+    """I am a URI that represents a filenode."""
     def get_size():
         """Return the length (in bytes) of the file that I represent."""
 
hunk ./src/allmydata/interfaces.py 553
     pass
 
 class IMutableFileURI(Interface):
-    """I am a URI which represents a mutable filenode."""
+    """I am a URI that represents a mutable filenode."""
     def get_extension_params():
         """Return the extension parameters in the URI"""
 
hunk ./src/allmydata/interfaces.py 856
         """
 
 class IFileNode(IFilesystemNode):
-    """I am a node which represents a file: a sequence of bytes. I am not a
+    """I am a node that represents a file: a sequence of bytes. I am not a
     container, like IDirectoryNode."""
     def get_best_readable_version():
         """Return a Deferred that fires with an IReadable for the 'best'
hunk ./src/allmydata/interfaces.py 905
     multiple versions of a file present in the grid, some of which might be
     unrecoverable (i.e. have fewer than 'k' shares). These versions are
     loosely ordered: each has a sequence number and a hash, and any version
-    with seqnum=N was uploaded by a node which has seen at least one version
+    with seqnum=N was uploaded by a node that has seen at least one version
     with seqnum=N-1.
 
     The 'servermap' (an instance of IMutableFileServerMap) is used to
hunk ./src/allmydata/interfaces.py 1014
         as a guide to where the shares are located.
 
         I return a Deferred that fires with the requested contents, or
-        errbacks with UnrecoverableFileError. Note that a servermap which was
+        errbacks with UnrecoverableFileError. Note that a servermap that was
         updated with MODE_ANYTHING or MODE_READ may not know about shares for
         all versions (those modes stop querying servers as soon as they can
         fulfil their goals), so you may want to use MODE_CHECK (which checks
hunk ./src/allmydata/interfaces.py 1073
     """Upload was unable to satisfy 'servers_of_happiness'"""
 
 class UnableToFetchCriticalDownloadDataError(Exception):
-    """I was unable to fetch some piece of critical data which is supposed to
+    """I was unable to fetch some piece of critical data that is supposed to
     be identically present in all shares."""
 
 class NoServersError(Exception):
hunk ./src/allmydata/interfaces.py 1085
     exists, and overwrite= was set to False."""
 
 class NoSuchChildError(Exception):
-    """A directory node was asked to fetch a child which does not exist."""
+    """A directory node was asked to fetch a child that does not exist."""
 
 class ChildOfWrongTypeError(Exception):
     """An operation was attempted on a child of the wrong type (file or directory)."""
hunk ./src/allmydata/interfaces.py 1403
         if you initially thought you were going to use 10 peers, started
         encoding, and then two of the peers dropped out: you could use
         desired_share_ids= to skip the work (both memory and CPU) of
-        producing shares for the peers which are no longer available.
+        producing shares for the peers that are no longer available.
 
         """
 
hunk ./src/allmydata/interfaces.py 1478
         if you initially thought you were going to use 10 peers, started
         encoding, and then two of the peers dropped out: you could use
         desired_share_ids= to skip the work (both memory and CPU) of
-        producing shares for the peers which are no longer available.
+        producing shares for the peers that are no longer available.
 
         For each call, encode() will return a Deferred that fires with two
         lists, one containing shares and the other containing the shareids.
hunk ./src/allmydata/interfaces.py 1535
         required to be of the same length.  The i'th element of their_shareids
         is required to be the shareid of the i'th buffer in some_shares.
 
-        This returns a Deferred which fires with a sequence of buffers. This
+        This returns a Deferred that fires with a sequence of buffers. This
         sequence will contain all of the segments of the original data, in
         order. The sum of the lengths of all of the buffers will be the
         'data_size' value passed into the original ICodecEncode.set_params()
hunk ./src/allmydata/interfaces.py 1582
         Encoding parameters can be set in three ways. 1: The Encoder class
         provides defaults (3/7/10). 2: the Encoder can be constructed with
         an 'options' dictionary, in which the
-        needed_and_happy_and_total_shares' key can be a (k,d,n) tuple. 3:
+        'needed_and_happy_and_total_shares' key can be a (k,d,n) tuple. 3:
         set_params((k,d,n)) can be called.
 
         If you intend to use set_params(), you must call it before
hunk ./src/allmydata/interfaces.py 1780
         produced, so that the segment hashes can be generated with only a
         single pass.
 
-        This returns a Deferred which fires with a sequence of hashes, using:
+        This returns a Deferred that fires with a sequence of hashes, using:
 
          tuple(segment_hashes[first:last])
 
hunk ./src/allmydata/interfaces.py 1796
     def get_plaintext_hash():
         """OBSOLETE; Get the hash of the whole plaintext.
 
-        This returns a Deferred which fires with a tagged SHA-256 hash of the
+        This returns a Deferred that fires with a tagged SHA-256 hash of the
         whole plaintext, obtained from hashutil.plaintext_hash(data).
         """
 
hunk ./src/allmydata/interfaces.py 1856
         be used to encrypt the data. The key will also be hashed to derive
         the StorageIndex.
 
-        Uploadables which want to achieve convergence should hash their file
+        Uploadables that want to achieve convergence should hash their file
         contents and the serialized_encoding_parameters to form the key
         (which of course requires a full pass over the data). Uploadables can
         use the upload.ConvergentUploadMixin class to achieve this
hunk ./src/allmydata/interfaces.py 1862
         automatically.
 
-        Uploadables which do not care about convergence (or do not wish to
+        Uploadables that do not care about convergence (or do not wish to
         make multiple passes over the data) can simply return a
         strongly-random 16 byte string.
 
hunk ./src/allmydata/interfaces.py 1872
 
     def read(length):
         """Return a Deferred that fires with a list of strings (perhaps with
-        only a single element) which, when concatenated together, contain the
+        only a single element) that, when concatenated together, contain the
        next 'length' bytes of data. If EOF is near, this may provide fewer
         than 'length' bytes. The total number of bytes provided by read()
         before it signals EOF must equal the size provided by get_size().
hunk ./src/allmydata/interfaces.py 1919
 
     def read(length):
         """
-        Returns a list of strings which, when concatenated, are the next
+        Returns a list of strings that, when concatenated, are the next
         length bytes of the file, or fewer if there are fewer bytes
         between the current location and the end of the file.
         """
hunk ./src/allmydata/interfaces.py 1932
 
 class IUploadResults(Interface):
     """I am returned by upload() methods. I contain a number of public
-    attributes which can be read to determine the results of the upload. Some
+    attributes that can be read to determine the results of the upload. Some
     of these are functional, some are timing information. All of these may be
     None.
 
hunk ./src/allmydata/interfaces.py 1965
 
 class IDownloadResults(Interface):
     """I am created internally by download() methods. I contain a number of
-    public attributes which contain details about the download process.::
+    public attributes that contain details about the download process.::
 
      .file_size : the size of the file, in bytes
     .servers_used : set of server peerids that were used during download
hunk ./src/allmydata/interfaces.py 1991
 class IUploader(Interface):
     def upload(uploadable):
         """Upload the file. 'uploadable' must impement IUploadable. This
-        returns a Deferred which fires with an IUploadResults instance, from
+        returns a Deferred that fires with an IUploadResults instance, from
        which the URI of the file can be obtained as results.uri .
 
     def upload_ssk(write_capability, new_version, uploadable):
hunk ./src/allmydata/interfaces.py 2041
         kind of lease that is obtained (which account number to claim, etc).
 
         TODO: any problems seen during checking will be reported to the
-        health-manager.furl, a centralized object which is responsible for
+        health-manager.furl, a centralized object that is responsible for
         figuring out why files are unhealthy so corrective action can be
         taken.
         """
hunk ./src/allmydata/interfaces.py 2056
         will be put in the check-and-repair results. The Deferred will not
         fire until the repair is complete.
 
-        This returns a Deferred which fires with an instance of
+        This returns a Deferred that fires with an instance of
         ICheckAndRepairResults."""
 
 class IDeepCheckable(Interface):
hunk ./src/allmydata/interfaces.py 2141
                               that was found to be corrupt. Each share
                               locator is a list of (serverid, storage_index,
                               sharenum).
-         count-incompatible-shares: the number of shares which are of a share
+         count-incompatible-shares: the number of shares that are of a share
                                     format unknown to this checker
          list-incompatible-shares: a list of 'share locators', one for each
                                    share that was found to be of an unknown
hunk ./src/allmydata/interfaces.py 2148
                                    format. Each share locator is a list of
                                    (serverid, storage_index, sharenum).
         servers-responding: list of (binary) storage server identifiers,
-                             one for each server which responded to the share
+                             one for each server that responded to the share
                              query (even if they said they didn't have
                              shares, and even if they said they did have
                              shares but then didn't send them when asked, or
hunk ./src/allmydata/interfaces.py 2345
         will use the data in the checker results to guide the repair process,
         such as which servers provided bad data and should therefore be
         avoided. The ICheckResults object is inside the
-        ICheckAndRepairResults object, which is returned by the
+        ICheckAndRepairResults object that is returned by the
         ICheckable.check() method::
 
          d = filenode.check(repair=False)
hunk ./src/allmydata/interfaces.py 2436
         methods to create new objects. I return synchronously."""
 
     def create_mutable_file(contents=None, keysize=None):
-        """I create a new mutable file, and return a Deferred which will fire
+        """I create a new mutable file, and return a Deferred that will fire
         with the IMutableFileNode instance when it is ready. If contents= is
         provided (a bytestring), it will be used as the initial contents of
         the new file, otherwise the file will contain zero bytes. keysize= is
hunk ./src/allmydata/interfaces.py 2444
         usual."""
 
     def create_new_mutable_directory(initial_children={}):
-        """I create a new mutable directory, and return a Deferred which will
+        """I create a new mutable directory, and return a Deferred that will
         fire with the IDirectoryNode instance when it is ready. If
         initial_children= is provided (a dict mapping unicode child name to
         (childnode, metadata_dict) tuples), the directory will be populated
hunk ./src/allmydata/interfaces.py 2452
 
 class IClientStatus(Interface):
     def list_all_uploads():
-        """Return a list of uploader objects, one for each upload which
+        """Return a list of uploader objects, one for each upload that
         currently has an object available (tracked with weakrefs). This is
         intended for debugging purposes."""
     def list_active_uploads():
hunk ./src/allmydata/interfaces.py 2462
         started uploads."""
 
     def list_all_downloads():
-        """Return a list of downloader objects, one for each download which
+        """Return a list of downloader objects, one for each download that
         currently has an object available (tracked with weakrefs). This is
         intended for debugging purposes."""
     def list_active_downloads():
hunk ./src/allmydata/interfaces.py 2689
 
     def provide(provider=RIStatsProvider, nickname=str):
         """
-        @param provider: a stats collector instance which should be polled
+        @param provider: a stats collector instance that should be polled
                          periodically by the gatherer to collect stats.
         @param nickname: a name useful to identify the provided client
         """
hunk ./src/allmydata/interfaces.py 2722
 
 class IValidatedThingProxy(Interface):
     def start():
-        """ Acquire a thing and validate it. Return a deferred which is
+        """ Acquire a thing and validate it. Return a deferred that is
         eventually fired with self if the thing is valid or errbacked if it
         can't be acquired or validated."""
 
}
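The large patch that follows introduces ShareSet.testv_and_readv_and_writev, which gates writes on "test vectors": lists of (offset, length, operator, specimen) tuples that must all hold against a share's current contents before any write vector is applied (a nonexistent share compares as empty strings). A standalone sketch of that convention, reusing the operator table of testv_compare from backends/base.py below; check_testv here is an illustrative helper generalizing EmptyShare.check_testv to real data, not patch code:

    def testv_compare(a, op, b):
        # same operator table as testv_compare in backends/base.py below
        assert op in ("lt", "le", "eq", "ne", "ge", "gt")
        return {"lt": a < b, "le": a <= b, "eq": a == b,
                "ne": a != b, "ge": a >= b, "gt": a > b}[op]

    def check_testv(share_data, testv):
        # a share passes only if every comparison in the test vector holds
        for (offset, length, operator, specimen) in testv:
            if not testv_compare(share_data[offset:offset+length],
                                 operator, specimen):
                return False
        return True

    # e.g. a conditional write that proceeds only if the share still
    # starts with "abcd":
    #   check_testv(current_contents, [(0, 4, "eq", "abcd")])
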
[Pluggable backends -- new and moved files, changes to moved files. refs #999
david-sarah@jacaranda.org**20110919232926
 Ignore-this: ec5d2d1362a092d919e84327d3092424
] {
adddir ./src/allmydata/storage/backends
adddir ./src/allmydata/storage/backends/disk
move ./src/allmydata/storage/immutable.py ./src/allmydata/storage/backends/disk/immutable.py
move ./src/allmydata/storage/mutable.py ./src/allmydata/storage/backends/disk/mutable.py
adddir ./src/allmydata/storage/backends/null
addfile ./src/allmydata/storage/backends/__init__.py
addfile ./src/allmydata/storage/backends/base.py
hunk ./src/allmydata/storage/backends/base.py 1
+
+from twisted.application import service
+
+from allmydata.storage.common import si_b2a
+from allmydata.storage.lease import LeaseInfo
+from allmydata.storage.bucket import BucketReader
+
+
+class Backend(service.MultiService):
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+
+class ShareSet(object):
+    """
+    This class implements shareset logic that could work for all backends, but
+    might be useful to override for efficiency.
+    """
+
+    def __init__(self, storageindex):
+        self.storageindex = storageindex
+
+    def get_storage_index(self):
+        return self.storageindex
+
+    def get_storage_index_string(self):
+        return si_b2a(self.storageindex)
+
+    def renew_lease(self, renew_secret, new_expiration_time):
+        found_shares = False
+        for share in self.get_shares():
+            found_shares = True
+            share.renew_lease(renew_secret, new_expiration_time)
+
+        if not found_shares:
+            raise IndexError("no such lease to renew")
+
+    def get_leases(self):
+        # Since all shares get the same lease data, we just grab the leases
+        # from the first share.
+        try:
+            sf = self.get_shares().next()
+            return sf.get_leases()
+        except StopIteration:
+            return iter([])
+
+    def add_or_renew_lease(self, lease_info):
+        # This implementation assumes that lease data is duplicated in
+        # all shares of a shareset, which might not be true for all backends.
+        for share in self.get_shares():
+            share.add_or_renew_lease(lease_info)
+
+    def make_bucket_reader(self, storageserver, share):
+        return BucketReader(storageserver, share)
+
+    def testv_and_readv_and_writev(self, storageserver, secrets,
+                                   test_and_write_vectors, read_vector,
+                                   expiration_time):
+        # The implementation here depends on the following helper methods,
+        # which must be provided by subclasses:
+        #
+        # def _clean_up_after_unlink(self):
+        #     """clean up resources associated with the shareset after some
+        #     shares might have been deleted"""
+        #
+        # def _create_mutable_share(self, storageserver, shnum, write_enabler):
+        #     """create a mutable share with the given shnum and write_enabler"""
+
+        # secrets might be a triple with cancel_secret in secrets[2], but if
+        # so we ignore the cancel_secret.
+        write_enabler = secrets[0]
+        renew_secret = secrets[1]
+
+        si_s = self.get_storage_index_string()
+        shares = {}
+        for share in self.get_shares():
+            # XXX is it correct to ignore immutable shares? Maybe get_shares should
+            # have a parameter saying what type it's expecting.
+            if share.sharetype == "mutable":
+                share.check_write_enabler(write_enabler, si_s)
+                shares[share.get_shnum()] = share
+
+        # write_enabler is good for all existing shares
+
+        # now evaluate test vectors
+        testv_is_good = True
+        for sharenum in test_and_write_vectors:
+            (testv, datav, new_length) = test_and_write_vectors[sharenum]
+            if sharenum in shares:
+                if not shares[sharenum].check_testv(testv):
+                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
+                    testv_is_good = False
+                    break
+            else:
+                # compare the vectors against an empty share, in which all
+                # reads return empty strings
+                if not EmptyShare().check_testv(testv):
+                    self.log("testv failed (empty): [%d] %r" % (sharenum,
+                                                                testv))
+                    testv_is_good = False
+                    break
+
+        # gather the read vectors, before we do any writes
+        read_data = {}
+        for shnum, share in shares.items():
+            read_data[shnum] = share.readv(read_vector)
+
+        ownerid = 1 # TODO
+        lease_info = LeaseInfo(ownerid, renew_secret,
+                               expiration_time, storageserver.get_serverid())
+
+        if testv_is_good:
+            # now apply the write vectors
+            for shnum in test_and_write_vectors:
+                (testv, datav, new_length) = test_and_write_vectors[shnum]
+                if new_length == 0:
+                    if shnum in shares:
+                        shares[shnum].unlink()
+                else:
+                    if shnum not in shares:
+                        # allocate a new share
+                        share = self._create_mutable_share(storageserver, shnum, write_enabler)
+                        shares[shnum] = share
+                    shares[shnum].writev(datav, new_length)
+                    # and update the lease
+                    shares[shnum].add_or_renew_lease(lease_info)
+
+            if new_length == 0:
+                self._clean_up_after_unlink()
+
+        return (testv_is_good, read_data)
+
+    def readv(self, wanted_shnums, read_vector):
+        """
+        Read a vector from the numbered shares in this shareset. An empty
+        shares list means to return data from all known shares.
+
+        @param wanted_shnums=ListOf(int)
+        @param read_vector=ReadVector
+        @return DictOf(int, ReadData): shnum -> results, with one key per share
+        """
+        datavs = {}
+        for share in self.get_shares():
+            shnum = share.get_shnum()
+            if not wanted_shnums or shnum in wanted_shnums:
+                datavs[shnum] = share.readv(read_vector)
+
+        return datavs
+
+
+def testv_compare(a, op, b):
+    assert op in ("lt", "le", "eq", "ne", "ge", "gt")
+    if op == "lt":
+        return a < b
+    if op == "le":
+        return a <= b
+    if op == "eq":
+        return a == b
+    if op == "ne":
+        return a != b
+    if op == "ge":
+        return a >= b
+    if op == "gt":
+        return a > b
+    # never reached
+
+
+class EmptyShare:
+    def check_testv(self, testv):
+        test_good = True
+        for (offset, length, operator, specimen) in testv:
+            data = ""
+            if not testv_compare(data, operator, specimen):
+                test_good = False
+                break
+        return test_good
+
addfile ./src/allmydata/storage/backends/disk/__init__.py
addfile ./src/allmydata/storage/backends/disk/disk_backend.py
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 1
+
+import re
+
+from twisted.python.filepath import UnlistableError
+
+from zope.interface import implements
+from allmydata.interfaces import IStorageBackend, IShareSet
+from allmydata.util import fileutil, log, time_format
+from allmydata.storage.common import si_b2a, si_a2b
+from allmydata.storage.bucket import BucketWriter
+from allmydata.storage.backends.base import Backend, ShareSet
+from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
+from allmydata.storage.backends.disk.mutable import MutableDiskShare, create_mutable_disk_share
+
+# storage/
+# storage/shares/incoming
+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
+# storage/shares/$START/$STORAGEINDEX
+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
+
+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
+# base-32 chars).
+# $SHARENUM matches this regex:
+NUM_RE=re.compile("^[0-9]+$")
+
+
+def si_si2dir(startfp, storageindex):
+    sia = si_b2a(storageindex)
+    newfp = startfp.child(sia[:2])
+    return newfp.child(sia)
+
+
+def get_share(fp):
+    f = fp.open('rb')
+    try:
+        prefix = f.read(32)
+    finally:
+        f.close()
+
+    if prefix == MutableDiskShare.MAGIC:
+        return MutableDiskShare(fp)
+    else:
+        # assume it's immutable
+        return ImmutableDiskShare(fp)
+
+
+class DiskBackend(Backend):
+    implements(IStorageBackend)
+
+    def __init__(self, storedir, readonly=False, reserved_space=0, discard_storage=False):
+        Backend.__init__(self)
+        self._setup_storage(storedir, readonly, reserved_space, discard_storage)
+        self._setup_corruption_advisory()
+
+    def _setup_storage(self, storedir, readonly, reserved_space, discard_storage):
+        self._storedir = storedir
+        self._readonly = readonly
+        self._reserved_space = int(reserved_space)
+        self._discard_storage = discard_storage
+        self._sharedir = self._storedir.child("shares")
+        fileutil.fp_make_dirs(self._sharedir)
+        self._incomingdir = self._sharedir.child('incoming')
+        self._clean_incomplete()
+        if self._reserved_space and (self.get_available_space() is None):
+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                    umid="0wZ27w", level=log.UNUSUAL)
+
+    def _clean_incomplete(self):
+        fileutil.fp_remove(self._incomingdir)
+        fileutil.fp_make_dirs(self._incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self._corruption_advisory_dir = self._storedir.child("corruption-advisories")
+
+    def _make_shareset(self, sharehomedir):
+        return self.get_shareset(si_a2b(sharehomedir.basename()))
+
+    def get_sharesets_for_prefix(self, prefix):
+        prefixfp = self._sharedir.child(prefix)
+        try:
+            sharesets = map(self._make_shareset, prefixfp.children())
+            def _by_base32si(b):
+                return b.get_storage_index_string()
+            sharesets.sort(key=_by_base32si)
+        except EnvironmentError:
+            sharesets = []
+        return sharesets
+
+    def get_shareset(self, storageindex):
+        sharehomedir = si_si2dir(self._sharedir, storageindex)
+        incominghomedir = si_si2dir(self._incomingdir, storageindex)
+        return DiskShareSet(storageindex, sharehomedir, incominghomedir, discard_storage=self._discard_storage)
+
+    def fill_in_space_stats(self, stats):
+        stats['storage_server.reserved_space'] = self._reserved_space
+        try:
+            disk = fileutil.get_disk_stats(self._sharedir, self._reserved_space)
+            writeable = disk['avail'] > 0
+
+            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
+            stats['storage_server.disk_total'] = disk['total']
+            stats['storage_server.disk_used'] = disk['used']
+            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
+            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
+            stats['storage_server.disk_avail'] = disk['avail']
+        except AttributeError:
+            writeable = True
+        except EnvironmentError:
+            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
+            writeable = False
+
+        if self._readonly:
+            stats['storage_server.disk_avail'] = 0
+            writeable = False
+
+        stats['storage_server.accepting_immutable_shares'] = int(writeable)
+
+    def get_available_space(self):
+        if self._readonly:
+            return 0
+        return fileutil.get_available_space(self._sharedir, self._reserved_space)
+
+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
+        fileutil.fp_make_dirs(self._corruption_advisory_dir)
+        now = time_format.iso_utc(sep="T")
+        si_s = si_b2a(storageindex)
+
+        # Windows can't handle colons in the filename.
+        name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "")
+        f = self._corruption_advisory_dir.child(name).open("w")
+        try:
+            f.write("report: Share Corruption\n")
+            f.write("type: %s\n" % sharetype)
+            f.write("storage_index: %s\n" % si_s)
+            f.write("share_number: %d\n" % shnum)
+            f.write("\n")
+            f.write(reason)
+            f.write("\n")
+        finally:
+            f.close()
+
+        log.msg(format=("client claims corruption in (%(share_type)s) " +
+                        "%(si)s-%(shnum)d: %(reason)s"),
+                share_type=sharetype, si=si_s, shnum=shnum, reason=reason,
+                level=log.SCARY, umid="SGx2fA")
+
+
+class DiskShareSet(ShareSet):
+    implements(IShareSet)
+
+    def __init__(self, storageindex, sharehomedir, incominghomedir=None, discard_storage=False):
+        ShareSet.__init__(self, storageindex)
+        self._sharehomedir = sharehomedir
+        self._incominghomedir = incominghomedir
+        self._discard_storage = discard_storage
+
+    def get_overhead(self):
+        return (fileutil.get_disk_usage(self._sharehomedir) +
+                fileutil.get_disk_usage(self._incominghomedir))
+
+    def get_shares(self):
+        """
+        Generate IStorageBackendShare objects for shares we have for this storage index.
+        ("Shares we have" means completed ones, excluding incoming ones.)
+        """
+        try:
+            for fp in self._sharehomedir.children():
+                shnumstr = fp.basename()
+                if not NUM_RE.match(shnumstr):
+                    continue
+                sharehome = self._sharehomedir.child(shnumstr)
+                yield self.get_share(sharehome)
+        except UnlistableError:
+            # There is no shares directory at all.
+            pass
+
+    def has_incoming(self, shnum):
+        if self._incominghomedir is None:
+            return False
+        return self._incominghomedir.child(str(shnum)).exists()
+
+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
+        sharehome = self._sharehomedir.child(str(shnum))
+        incominghome = self._incominghomedir.child(str(shnum))
+        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
+                                   max_size=max_space_per_bucket, create=True)
+        bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
+        if self._discard_storage:
+            bw.throw_out_all_data = True
+        return bw
+
+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
+        fileutil.fp_make_dirs(self._sharehomedir)
+        sharehome = self._sharehomedir.child(str(shnum))
+        serverid = storageserver.get_serverid()
+        return create_mutable_disk_share(sharehome, serverid, write_enabler, storageserver)
+
+    def _clean_up_after_unlink(self):
+        fileutil.fp_rmdir_if_empty(self._sharehomedir)
+
hunk ./src/allmydata/storage/backends/disk/immutable.py 1
-import os, stat, struct, time
 
hunk ./src/allmydata/storage/backends/disk/immutable.py 2
-from foolscap.api import Referenceable
+import struct
 
 from zope.interface import implements
hunk ./src/allmydata/storage/backends/disk/immutable.py 5
-from allmydata.interfaces import RIBucketWriter, RIBucketReader
-from allmydata.util import base32, fileutil, log
+
+from allmydata.interfaces import IStoredShare
+from allmydata.util import fileutil
 from allmydata.util.assertutil import precondition
hunk ./src/allmydata/storage/backends/disk/immutable.py 9
+from allmydata.util.fileutil import fp_make_dirs
 from allmydata.util.hashutil import constant_time_compare
hunk ./src/allmydata/storage/backends/disk/immutable.py 11
+from allmydata.util.encodingutil import quote_filepath
+from allmydata.storage.common import si_b2a, UnknownImmutableContainerVersionError, DataTooLargeError
 from allmydata.storage.lease import LeaseInfo
hunk ./src/allmydata/storage/backends/disk/immutable.py 14
-from allmydata.storage.common import UnknownImmutableContainerVersionError, \
-     DataTooLargeError
+
 
 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
 # and share data. The share data is accessed by RIBucketWriter.write and
hunk ./src/allmydata/storage/backends/disk/immutable.py 41
 # then the value stored in this field will be the actual share data length
 # modulo 2**32.
 
-class ShareFile:
-    LEASE_SIZE = struct.calcsize(">L32s32sL")
+class ImmutableDiskShare(object):
+    implements(IStoredShare)
+
     sharetype = "immutable"
hunk ./src/allmydata/storage/backends/disk/immutable.py 45
+    LEASE_SIZE = struct.calcsize(">L32s32sL")
+
 
hunk ./src/allmydata/storage/backends/disk/immutable.py 48
-    def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
hunk ./src/allmydata/storage/backends/disk/immutable.py 53
-        self.home = filename
+        self._storageindex = storageindex
         self._max_size = max_size
hunk ./src/allmydata/storage/backends/disk/immutable.py 55
+        self._incominghome = incominghome
+        self._home = finalhome
+        self._shnum = shnum
         if create:
             # touch the file, so later callers will see that we're working on
             # it. Also construct the metadata.
hunk ./src/allmydata/storage/backends/disk/immutable.py 61
-            assert not os.path.exists(self.home)
-            fileutil.make_dirs(os.path.dirname(self.home))
-            f = open(self.home, 'wb')
+            assert not finalhome.exists()
+            fp_make_dirs(self._incominghome.parent())
             # The second field -- the four-byte share data length -- is no
             # longer used as of Tahoe v1.3.0, but we continue to write it in
             # there in case someone downgrades a storage server from >=
hunk ./src/allmydata/storage/backends/disk/immutable.py 72
             # the largest length that can fit into the field. That way, even
             # if this does happen, the old < v1.3.0 server will still allow
             # clients to read the first part of the share.
-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
-            f.close()
+            self._incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
             self._lease_offset = max_size + 0x0c
             self._num_leases = 0
         else:
hunk ./src/allmydata/storage/backends/disk/immutable.py 76
-            f = open(self.home, 'rb')
-            filesize = os.path.getsize(self.home)
-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-            f.close()
+            f = self._home.open(mode='rb')
+            try:
+                (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            finally:
+                f.close()
+            filesize = self._home.getsize()
             if version != 1:
                 msg = "sharefile %s had version %d but we wanted 1" % \
hunk ./src/allmydata/storage/backends/disk/immutable.py 84
-                      (filename, version)
+                      (self._home, version)
                 raise UnknownImmutableContainerVersionError(msg)
             self._num_leases = num_leases
             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
hunk ./src/allmydata/storage/backends/disk/immutable.py 90
         self._data_offset = 0xc
 
+    def __repr__(self):
+        return ("<ImmutableDiskShare %s:%r at %s>"
+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
+
+    def close(self):
+        fileutil.fp_make_dirs(self._home.parent())
+        self._incominghome.moveTo(self._home)
+        try:
+            # self._incominghome is like storage/shares/incoming/ab/abcde/4 .
+            # We try to delete the parent (.../ab/abcde) to avoid leaving
+            # these directories lying around forever, but the delete might
+            # fail if we're working on another share for the same storage
+            # index (like ab/abcde/5). The alternative approach would be to
+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
+            # ShareWriter), each of which is responsible for a single
+            # directory on disk, and have them use reference counting of
+            # their children to know when they should do the rmdir. This
+            # approach is simpler, but relies on os.rmdir refusing to delete
+            # a non-empty directory. Do *not* use fileutil.fp_remove() here!
+            fileutil.fp_rmdir_if_empty(self._incominghome.parent())
+            # we also delete the grandparent (prefix) directory, .../ab ,
+            # again to avoid leaving directories lying around. This might
+            # fail if there is another bucket open that shares a prefix (like
+            # ab/abfff).
+            fileutil.fp_rmdir_if_empty(self._incominghome.parent().parent())
+            # we leave the great-grandparent (incoming/) directory in place.
+        except EnvironmentError:
+            # ignore the "can't rmdir because the directory is not empty"
+            # exceptions, those are normal consequences of the
+            # above-mentioned conditions.
+            pass
+        pass
+
+    def get_used_space(self):
+        return (fileutil.get_used_space(self._home) +
+                fileutil.get_used_space(self._incominghome))
+
+    def get_storage_index(self):
+        return self._storageindex
+
+    def get_shnum(self):
+        return self._shnum
+
     def unlink(self):
hunk ./src/allmydata/storage/backends/disk/immutable.py 134
-        os.unlink(self.home)
+        self._home.remove()
+
+    def get_size(self):
+        return self._home.getsize()
+
+    def get_data_length(self):
+        return self._lease_offset - self._data_offset
+
+    #def readv(self, read_vector):
+    #    ...
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
hunk ./src/allmydata/storage/backends/disk/immutable.py 147
-        # reads beyond the end of the data are truncated. Reads that start
+
+        # Reads beyond the end of the data are truncated. Reads that start
         # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         actuallength = max(0, min(length, self._lease_offset-seekpos))
hunk ./src/allmydata/storage/backends/disk/immutable.py 154
         if actuallength == 0:
             return ""
-        f = open(self.home, 'rb')
-        f.seek(seekpos)
-        return f.read(actuallength)
+        f = self._home.open(mode='rb')
+        try:
+            f.seek(seekpos)
+            sharedata = f.read(actuallength)
+        finally:
+            f.close()
+        return sharedata
 
     def write_share_data(self, offset, data):
         length = len(data)
hunk ./src/allmydata/storage/backends/disk/immutable.py 167
         precondition(offset >= 0, offset)
         if self._max_size is not None and offset+length > self._max_size:
             raise DataTooLargeError(self._max_size, offset, length)
-        f = open(self.home, 'rb+')
-        real_offset = self._data_offset+offset
-        f.seek(real_offset)
-        assert f.tell() == real_offset
-        f.write(data)
-        f.close()
+        f = self._incominghome.open(mode='rb+')
+        try:
+            real_offset = self._data_offset+offset
+            f.seek(real_offset)
+            assert f.tell() == real_offset
+            f.write(data)
+        finally:
+            f.close()
 
     def _write_lease_record(self, f, lease_number, lease_info):
         offset = self._lease_offset + lease_number * self.LEASE_SIZE
hunk ./src/allmydata/storage/backends/disk/immutable.py 184
 
     def _read_num_leases(self, f):
         f.seek(0x08)
-        (num_leases,) = struct.unpack(">L", f.read(4))
+        ro = f.read(4)
+        (num_leases,) = struct.unpack(">L", ro)
         return num_leases
 
     def _write_num_leases(self, f, num_leases):
hunk ./src/allmydata/storage/backends/disk/immutable.py 195
     def _truncate_leases(self, f, num_leases):
         f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
 
+    # These lease operations are intended for use by disk_backend.py.
+    # Other clients should not depend on the fact that the disk backend
+    # stores leases in share files.
+
     def get_leases(self):
         """Yields a LeaseInfo instance for all leases."""
hunk ./src/allmydata/storage/backends/disk/immutable.py 201
-        f = open(self.home, 'rb')
-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-        f.seek(self._lease_offset)
-        for i in range(num_leases):
-            data = f.read(self.LEASE_SIZE)
-            if data:
-                yield LeaseInfo().from_immutable_data(data)
+        f = self._home.open(mode='rb')
+        try:
+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            f.seek(self._lease_offset)
+            for i in range(num_leases):
+                data = f.read(self.LEASE_SIZE)
+                if data:
+                    yield LeaseInfo().from_immutable_data(data)
+        finally:
+            f.close()
 
     def add_lease(self, lease_info):
hunk ./src/allmydata/storage/backends/disk/immutable.py 213
-        f = open(self.home, 'rb+')
-        num_leases = self._read_num_leases(f)
-        self._write_lease_record(f, num_leases, lease_info)
-        self._write_num_leases(f, num_leases+1)
-        f.close()
+        f = self._incominghome.open(mode='rb')
+        try:
+            num_leases = self._read_num_leases(f)
+        finally:
+            f.close()
+        f = self._home.open(mode='wb+')
+        try:
+            self._write_lease_record(f, num_leases, lease_info)
+            self._write_num_leases(f, num_leases+1)
+        finally:
+            f.close()
 
     def renew_lease(self, renew_secret, new_expire_time):
hunk ./src/allmydata/storage/backends/disk/immutable.py 226
-        for i,lease in enumerate(self.get_leases()):
-            if constant_time_compare(lease.renew_secret, renew_secret):
-                # yup. See if we need to update the owner time.
-                if new_expire_time > lease.expiration_time:
-                    # yes
-                    lease.expiration_time = new_expire_time
-                    f = open(self.home, 'rb+')
-                    self._write_lease_record(f, i, lease)
-                    f.close()
-                return
+        try:
+            for i, lease in enumerate(self.get_leases()):
+                if constant_time_compare(lease.renew_secret, renew_secret):
+                    # yup. See if we need to update the owner time.
+                    if new_expire_time > lease.expiration_time:
+                        # yes
+                        lease.expiration_time = new_expire_time
+                        f = self._home.open('rb+')
+                        try:
+                            self._write_lease_record(f, i, lease)
+                        finally:
+                            f.close()
+                    return
+        except IndexError, e:
+            raise Exception("IndexError: %s" % (e,))
         raise IndexError("unable to renew non-existent lease")
 
     def add_or_renew_lease(self, lease_info):
hunk ./src/allmydata/storage/backends/disk/immutable.py 249
                              lease_info.expiration_time)
         except IndexError:
             self.add_lease(lease_info)
-
-
-    def cancel_lease(self, cancel_secret):
-        """Remove a lease with the given cancel_secret. If the last lease is
-        cancelled, the file will be removed. Return the number of bytes that
-        were freed (by truncating the list of leases, and possibly by
-        deleting the file. Raise IndexError if there was no lease with the
-        given cancel_secret.
-        """
-
-        leases = list(self.get_leases())
-        num_leases_removed = 0
-        for i,lease in enumerate(leases):
-            if constant_time_compare(lease.cancel_secret, cancel_secret):
-                leases[i] = None
-                num_leases_removed += 1
-        if not num_leases_removed:
-            raise IndexError("unable to find matching lease to cancel")
-        if num_leases_removed:
-            # pack and write out the remaining leases. We write these out in
-            # the same order as they were added, so that if we crash while
-            # doing this, we won't lose any non-cancelled leases.
-            leases = [l for l in leases if l] # remove the cancelled leases
-            f = open(self.home, 'rb+')
-            for i,lease in enumerate(leases):
-                self._write_lease_record(f, i, lease)
-            self._write_num_leases(f, len(leases))
-            self._truncate_leases(f, len(leases))
-            f.close()
-        space_freed = self.LEASE_SIZE * num_leases_removed
-        if not len(leases):
-            space_freed += os.stat(self.home)[stat.ST_SIZE]
-            self.unlink()
-        return space_freed
-
-
-class BucketWriter(Referenceable):
-    implements(RIBucketWriter)
-
-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
-        self.ss = ss
-        self.incominghome = incominghome
-        self.finalhome = finalhome
-        self._max_size = max_size # don't allow the client to write more than this
-        self._canary = canary
-        self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
-        self.closed = False
-        self.throw_out_all_data = False
-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
-        # also, add our lease to the file now, so that other ones can be
-        # added by simultaneous uploaders
-        self._sharefile.add_lease(lease_info)
-
-    def allocated_size(self):
-        return self._max_size
-
-    def remote_write(self, offset, data):
-        start = time.time()
-        precondition(not self.closed)
-        if self.throw_out_all_data:
-            return
-        self._sharefile.write_share_data(offset, data)
-        self.ss.add_latency("write", time.time() - start)
-        self.ss.count("write")
-
-    def remote_close(self):
-        precondition(not self.closed)
-        start = time.time()
-
-        fileutil.make_dirs(os.path.dirname(self.finalhome))
-        fileutil.rename(self.incominghome, self.finalhome)
-        try:
-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
-            # We try to delete the parent (.../ab/abcde) to avoid leaving
-            # these directories lying around forever, but the delete might
-            # fail if we're working on another share for the same storage
-            # index (like ab/abcde/5). The alternative approach would be to
-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
-            # ShareWriter), each of which is responsible for a single
-            # directory on disk, and have them use reference counting of
-            # their children to know when they should do the rmdir. This
-            # approach is simpler, but relies on os.rmdir refusing to delete
-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
-            os.rmdir(os.path.dirname(self.incominghome))
-            # we also delete the grandparent (prefix) directory, .../ab ,
-            # again to avoid leaving directories lying around. This might
-            # fail if there is another bucket open that shares a prefix (like
-            # ab/abfff).
-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
-            # we leave the great-grandparent (incoming/) directory in place.
-        except EnvironmentError:
-            # ignore the "can't rmdir because the directory is not empty"
-            # exceptions, those are normal consequences of the
-            # above-mentioned conditions.
-            pass
-        self._sharefile = None
-        self.closed = True
-        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
-
-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
-        self.ss.bucket_writer_closed(self, filelen)
-        self.ss.add_latency("close", time.time() - start)
-        self.ss.count("close")
-
-    def _disconnected(self):
-        if not self.closed:
-            self._abort()
-
-    def remote_abort(self):
-        log.msg("storage: aborting sharefile %s" % self.incominghome,
-                facility="tahoe.storage", level=log.UNUSUAL)
-        if not self.closed:
-            self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
-        self._abort()
-        self.ss.count("abort")
-
-    def _abort(self):
-        if self.closed:
-            return
-
-        os.remove(self.incominghome)
-        # if we were the last share to be moved, remove the incoming/
-        # directory that was our parent
-        parentdir = os.path.split(self.incominghome)[0]
-        if not os.listdir(parentdir):
-            os.rmdir(parentdir)
-        self._sharefile = None
-
-        # We are now considered closed for further writing. We must tell
-        # the storage server about this so that it stops expecting us to
-        # use the space it allocated for us earlier.
-        self.closed = True
-        self.ss.bucket_writer_closed(self, 0)
-
-
-class BucketReader(Referenceable):
-    implements(RIBucketReader)
-
-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
-        self.ss = ss
-        self._share_file = ShareFile(sharefname)
-        self.storage_index = storage_index
-        self.shnum = shnum
-
-    def __repr__(self):
-        return "<%s %s %s>" % (self.__class__.__name__,
-                               base32.b2a_l(self.storage_index[:8], 60),
-                               self.shnum)
-
-    def remote_read(self, offset, length):
-        start = time.time()
-        data = self._share_file.read_share_data(offset, length)
-        self.ss.add_latency("read", time.time() - start)
-        self.ss.count("read")
-        return data
-
-    def remote_advise_corrupt_share(self, reason):
-        return self.ss.remote_advise_corrupt_share("immutable",
-                                                   self.storage_index,
-                                                   self.shnum,
-                                                   reason)
hunk ./src/allmydata/storage/backends/disk/mutable.py 1
-import os, stat, struct
 
hunk ./src/allmydata/storage/backends/disk/mutable.py 2
-from allmydata.interfaces import BadWriteEnablerError
-from allmydata.util import idlib, log
+import struct
+
+from zope.interface import implements
+
+from allmydata.interfaces import IStoredMutableShare, BadWriteEnablerError
+from allmydata.util import fileutil, idlib, log
 from allmydata.util.assertutil import precondition
 from allmydata.util.hashutil import constant_time_compare
hunk ./src/allmydata/storage/backends/disk/mutable.py 10
-from allmydata.storage.lease import LeaseInfo
-from allmydata.storage.common import UnknownMutableContainerVersionError, \
+from allmydata.util.encodingutil import quote_filepath
1282+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
1283      DataTooLargeError
1284hunk ./src/allmydata/storage/backends/disk/mutable.py 13
1285+from allmydata.storage.lease import LeaseInfo
1286+from allmydata.storage.backends.base import testv_compare
1287 
1288hunk ./src/allmydata/storage/backends/disk/mutable.py 16
1289-# the MutableShareFile is like the ShareFile, but used for mutable data. It
1290-# has a different layout. See docs/mutable.txt for more details.
1291+
1292+# The MutableDiskShare is like the ImmutableDiskShare, but used for mutable data.
1293+# It has a different layout. See docs/mutable.rst for more details.
1294 
1295 # #   offset    size    name
1296 # 1   0         32      magic verstr "tahoe mutable container v1" plus binary
1297hunk ./src/allmydata/storage/backends/disk/mutable.py 31
1298 #                        4    4   expiration timestamp
1299 #                        8   32   renewal token
1300 #                        40  32   cancel token
1301-#                        72  20   nodeid which accepted the tokens
1302+#                        72  20   nodeid that accepted the tokens
1303 # 7   468       (a)     data
1304 # 8   ??        4       count of extra leases
1305 # 9   ??        n*92    extra leases
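
For reference, the fixed header that MutableDiskShare reads (fields 1-5 of the table above) decodes as in the following sketch, which mirrors the struct.unpack call in __init__ below; it assumes `f` is a share file opened in binary mode:

    import struct

    HEADER_SIZE = struct.calcsize(">32s20s32sQQ")  # fields 1-5: 100 bytes

    def read_mutable_header(f):
        f.seek(0)
        (magic, write_enabler_nodeid, write_enabler,
         data_length, extra_lease_offset) = struct.unpack(">32s20s32sQQ",
                                                          f.read(HEADER_SIZE))
        # The four initial lease slots follow the header; share data begins
        # at DATA_OFFSET == HEADER_SIZE + 4*LEASE_SIZE == 468.
        return (magic, write_enabler_nodeid, write_enabler,
                data_length, extra_lease_offset)
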
1306hunk ./src/allmydata/storage/backends/disk/mutable.py 37
1307 
1308 
1309-# The struct module doc says that L's are 4 bytes in size., and that Q's are
1310+# The struct module doc says that L's are 4 bytes in size, and that Q's are
1311 # 8 bytes in size. Since compatibility depends upon this, double-check it.
1312 assert struct.calcsize(">L") == 4, struct.calcsize(">L")
1313 assert struct.calcsize(">Q") == 8, struct.calcsize(">Q")
1314hunk ./src/allmydata/storage/backends/disk/mutable.py 42
1315 
1316-class MutableShareFile:
1317+
1318+class MutableDiskShare(object):
1319+    implements(IStoredMutableShare)
1320 
1321     sharetype = "mutable"
1322     DATA_LENGTH_OFFSET = struct.calcsize(">32s20s32s")
1323hunk ./src/allmydata/storage/backends/disk/mutable.py 54
1324     assert LEASE_SIZE == 92
1325     DATA_OFFSET = HEADER_SIZE + 4*LEASE_SIZE
1326     assert DATA_OFFSET == 468, DATA_OFFSET
1327+
1328     # our sharefiles start with a recognizable string, plus some random
1329     # binary data to reduce the chance that a regular text file will look
1330     # like a sharefile.
1331hunk ./src/allmydata/storage/backends/disk/mutable.py 63
1332     MAX_SIZE = 2*1000*1000*1000 # 2GB, kind of arbitrary
1333     # TODO: decide upon a policy for max share size
1334 
1335-    def __init__(self, filename, parent=None):
1336-        self.home = filename
1337-        if os.path.exists(self.home):
1338+    def __init__(self, storageindex, shnum, home, parent=None):
1339+        self._storageindex = storageindex
1340+        self._shnum = shnum
1341+        self._home = home
1342+        if self._home.exists():
1343             # we don't cache anything, just check the magic
1344hunk ./src/allmydata/storage/backends/disk/mutable.py 69
1345-            f = open(self.home, 'rb')
1346-            data = f.read(self.HEADER_SIZE)
1347-            (magic,
1348-             write_enabler_nodeid, write_enabler,
1349-             data_length, extra_least_offset) = \
1350-             struct.unpack(">32s20s32sQQ", data)
1351-            if magic != self.MAGIC:
1352-                msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
1353-                      (filename, magic, self.MAGIC)
1354-                raise UnknownMutableContainerVersionError(msg)
1355+            f = self._home.open('rb')
1356+            try:
1357+                data = f.read(self.HEADER_SIZE)
1358+                (magic,
1359+                 write_enabler_nodeid, write_enabler,
1360+                 data_length, extra_lease_offset) = \
1361+                 struct.unpack(">32s20s32sQQ", data)
1362+                if magic != self.MAGIC:
1363+                    msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
1364+                          (quote_filepath(self._home), magic, self.MAGIC)
1365+                    raise UnknownMutableContainerVersionError(msg)
1366+            finally:
1367+                f.close()
1368         self.parent = parent # for logging
1369 
1370     def log(self, *args, **kwargs):
1371hunk ./src/allmydata/storage/backends/disk/mutable.py 87
1372         return self.parent.log(*args, **kwargs)
1373 
1374-    def create(self, my_nodeid, write_enabler):
1375-        assert not os.path.exists(self.home)
1376+    def create(self, serverid, write_enabler):
1377+        assert not self._home.exists()
1378         data_length = 0
1379         extra_lease_offset = (self.HEADER_SIZE
1380                               + 4 * self.LEASE_SIZE
1381hunk ./src/allmydata/storage/backends/disk/mutable.py 95
1382                               + data_length)
1383         assert extra_lease_offset == self.DATA_OFFSET # true at creation
1384         num_extra_leases = 0
1385-        f = open(self.home, 'wb')
1386-        header = struct.pack(">32s20s32sQQ",
1387-                             self.MAGIC, my_nodeid, write_enabler,
1388-                             data_length, extra_lease_offset,
1389-                             )
1390-        leases = ("\x00"*self.LEASE_SIZE) * 4
1391-        f.write(header + leases)
1392-        # data goes here, empty after creation
1393-        f.write(struct.pack(">L", num_extra_leases))
1394-        # extra leases go here, none at creation
1395-        f.close()
1396+        f = self._home.open('wb')
1397+        try:
1398+            header = struct.pack(">32s20s32sQQ",
1399+                                 self.MAGIC, serverid, write_enabler,
1400+                                 data_length, extra_lease_offset,
1401+                                 )
1402+            leases = ("\x00"*self.LEASE_SIZE) * 4
1403+            f.write(header + leases)
1404+            # data goes here, empty after creation
1405+            f.write(struct.pack(">L", num_extra_leases))
1406+            # extra leases go here, none at creation
1407+        finally:
1408+            f.close()
1409+
1410+    def __repr__(self):
1411+        return ("<MutableDiskShare %s:%r at %s>"
1412+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
1413+
1414+    def get_used_space(self):
1415+        return fileutil.get_used_space(self._home)
1416+
1417+    def get_storage_index(self):
1418+        return self._storageindex
1419+
1420+    def get_shnum(self):
1421+        return self._shnum
1422 
1423     def unlink(self):
1424hunk ./src/allmydata/storage/backends/disk/mutable.py 123
1425-        os.unlink(self.home)
1426+        self._home.remove()
1427 
1428     def _read_data_length(self, f):
1429         f.seek(self.DATA_LENGTH_OFFSET)
1430hunk ./src/allmydata/storage/backends/disk/mutable.py 291
1431 
1432     def get_leases(self):
1433         """Yields a LeaseInfo instance for all leases."""
1434-        f = open(self.home, 'rb')
1435-        for i, lease in self._enumerate_leases(f):
1436-            yield lease
1437-        f.close()
1438+        f = self._home.open('rb')
1439+        try:
1440+            for i, lease in self._enumerate_leases(f):
1441+                yield lease
1442+        finally:
1443+            f.close()
1444 
1445     def _enumerate_leases(self, f):
1446         for i in range(self._get_num_lease_slots(f)):
1447hunk ./src/allmydata/storage/backends/disk/mutable.py 303
1448             try:
1449                 data = self._read_lease_record(f, i)
1450                 if data is not None:
1451-                    yield i,data
1452+                    yield i, data
1453             except IndexError:
1454                 return
1455 
1456hunk ./src/allmydata/storage/backends/disk/mutable.py 307
1457+    # These lease operations are intended for use by disk_backend.py.
1458+    # Other non-test clients should not depend on the fact that the disk
1459+    # backend stores leases in share files.
1460+
1461     def add_lease(self, lease_info):
1462         precondition(lease_info.owner_num != 0) # 0 means "no lease here"
1463hunk ./src/allmydata/storage/backends/disk/mutable.py 313
1464-        f = open(self.home, 'rb+')
1465-        num_lease_slots = self._get_num_lease_slots(f)
1466-        empty_slot = self._get_first_empty_lease_slot(f)
1467-        if empty_slot is not None:
1468-            self._write_lease_record(f, empty_slot, lease_info)
1469-        else:
1470-            self._write_lease_record(f, num_lease_slots, lease_info)
1471-        f.close()
1472+        f = self._home.open('rb+')
1473+        try:
1474+            num_lease_slots = self._get_num_lease_slots(f)
1475+            empty_slot = self._get_first_empty_lease_slot(f)
1476+            if empty_slot is not None:
1477+                self._write_lease_record(f, empty_slot, lease_info)
1478+            else:
1479+                self._write_lease_record(f, num_lease_slots, lease_info)
1480+        finally:
1481+            f.close()
1482 
1483     def renew_lease(self, renew_secret, new_expire_time):
1484         accepting_nodeids = set()
1485hunk ./src/allmydata/storage/backends/disk/mutable.py 326
1486-        f = open(self.home, 'rb+')
1487-        for (leasenum,lease) in self._enumerate_leases(f):
1488-            if constant_time_compare(lease.renew_secret, renew_secret):
1489-                # yup. See if we need to update the owner time.
1490-                if new_expire_time > lease.expiration_time:
1491-                    # yes
1492-                    lease.expiration_time = new_expire_time
1493-                    self._write_lease_record(f, leasenum, lease)
1494-                f.close()
1495-                return
1496-            accepting_nodeids.add(lease.nodeid)
1497-        f.close()
1498+        f = self._home.open('rb+')
1499+        try:
1500+            for (leasenum, lease) in self._enumerate_leases(f):
1501+                if constant_time_compare(lease.renew_secret, renew_secret):
1502+                    # yup. See if we need to update the owner time.
1503+                    if new_expire_time > lease.expiration_time:
1504+                        # yes
1505+                        lease.expiration_time = new_expire_time
1506+                        self._write_lease_record(f, leasenum, lease)
1507+                    return
1508+                accepting_nodeids.add(lease.nodeid)
1509+        finally:
1510+            f.close()
1511         # Return the accepting_nodeids set, to give the client a chance to
1512hunk ./src/allmydata/storage/backends/disk/mutable.py 340
1513-        # update the leases on a share which has been migrated from its
1514+        # update the leases on a share that has been migrated from its
1515         # original server to a new one.
1516         msg = ("Unable to renew non-existent lease. I have leases accepted by"
1517                " nodeids: ")
1518hunk ./src/allmydata/storage/backends/disk/mutable.py 357
1519         except IndexError:
1520             self.add_lease(lease_info)
1521 
1522-    def cancel_lease(self, cancel_secret):
1523-        """Remove any leases with the given cancel_secret. If the last lease
1524-        is cancelled, the file will be removed. Return the number of bytes
1525-        that were freed (by truncating the list of leases, and possibly by
1526-        deleting the file. Raise IndexError if there was no lease with the
1527-        given cancel_secret."""
1528-
1529-        accepting_nodeids = set()
1530-        modified = 0
1531-        remaining = 0
1532-        blank_lease = LeaseInfo(owner_num=0,
1533-                                renew_secret="\x00"*32,
1534-                                cancel_secret="\x00"*32,
1535-                                expiration_time=0,
1536-                                nodeid="\x00"*20)
1537-        f = open(self.home, 'rb+')
1538-        for (leasenum,lease) in self._enumerate_leases(f):
1539-            accepting_nodeids.add(lease.nodeid)
1540-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1541-                self._write_lease_record(f, leasenum, blank_lease)
1542-                modified += 1
1543-            else:
1544-                remaining += 1
1545-        if modified:
1546-            freed_space = self._pack_leases(f)
1547-            f.close()
1548-            if not remaining:
1549-                freed_space += os.stat(self.home)[stat.ST_SIZE]
1550-                self.unlink()
1551-            return freed_space
1552-
1553-        msg = ("Unable to cancel non-existent lease. I have leases "
1554-               "accepted by nodeids: ")
1555-        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
1556-                         for anid in accepting_nodeids])
1557-        msg += " ."
1558-        raise IndexError(msg)
1559-
1560-    def _pack_leases(self, f):
1561-        # TODO: reclaim space from cancelled leases
1562-        return 0
1563-
1564     def _read_write_enabler_and_nodeid(self, f):
1565         f.seek(0)
1566         data = f.read(self.HEADER_SIZE)
1567hunk ./src/allmydata/storage/backends/disk/mutable.py 369
1568 
1569     def readv(self, readv):
1570         datav = []
1571-        f = open(self.home, 'rb')
1572-        for (offset, length) in readv:
1573-            datav.append(self._read_share_data(f, offset, length))
1574-        f.close()
1575+        f = self._home.open('rb')
1576+        try:
1577+            for (offset, length) in readv:
1578+                datav.append(self._read_share_data(f, offset, length))
1579+        finally:
1580+            f.close()
1581         return datav
1582 
1583hunk ./src/allmydata/storage/backends/disk/mutable.py 377
1584-#    def remote_get_length(self):
1585-#        f = open(self.home, 'rb')
1586-#        data_length = self._read_data_length(f)
1587-#        f.close()
1588-#        return data_length
1589+    def get_size(self):
1590+        return self._home.getsize()
1591+
1592+    def get_data_length(self):
1593+        f = self._home.open('rb')
1594+        try:
1595+            data_length = self._read_data_length(f)
1596+        finally:
1597+            f.close()
1598+        return data_length
1599 
1600     def check_write_enabler(self, write_enabler, si_s):
1601hunk ./src/allmydata/storage/backends/disk/mutable.py 389
1602-        f = open(self.home, 'rb+')
1603-        (real_write_enabler, write_enabler_nodeid) = \
1604-                             self._read_write_enabler_and_nodeid(f)
1605-        f.close()
1606+        f = self._home.open('rb+')
1607+        try:
1608+            (real_write_enabler, write_enabler_nodeid) = self._read_write_enabler_and_nodeid(f)
1609+        finally:
1610+            f.close()
1611         # avoid a timing attack
1612         #if write_enabler != real_write_enabler:
1613         if not constant_time_compare(write_enabler, real_write_enabler):
1614hunk ./src/allmydata/storage/backends/disk/mutable.py 410
1615 
1616     def check_testv(self, testv):
1617         test_good = True
1618-        f = open(self.home, 'rb+')
1619-        for (offset, length, operator, specimen) in testv:
1620-            data = self._read_share_data(f, offset, length)
1621-            if not testv_compare(data, operator, specimen):
1622-                test_good = False
1623-                break
1624-        f.close()
1625+        f = self._home.open('rb+')
1626+        try:
1627+            for (offset, length, operator, specimen) in testv:
1628+                data = self._read_share_data(f, offset, length)
1629+                if not testv_compare(data, operator, specimen):
1630+                    test_good = False
1631+                    break
1632+        finally:
1633+            f.close()
1634         return test_good
1635 
1636     def writev(self, datav, new_length):
1637hunk ./src/allmydata/storage/backends/disk/mutable.py 422
1638-        f = open(self.home, 'rb+')
1639-        for (offset, data) in datav:
1640-            self._write_share_data(f, offset, data)
1641-        if new_length is not None:
1642-            cur_length = self._read_data_length(f)
1643-            if new_length < cur_length:
1644-                self._write_data_length(f, new_length)
1645-                # TODO: if we're going to shrink the share file when the
1646-                # share data has shrunk, then call
1647-                # self._change_container_size() here.
1648-        f.close()
1649-
1650-def testv_compare(a, op, b):
1651-    assert op in ("lt", "le", "eq", "ne", "ge", "gt")
1652-    if op == "lt":
1653-        return a < b
1654-    if op == "le":
1655-        return a <= b
1656-    if op == "eq":
1657-        return a == b
1658-    if op == "ne":
1659-        return a != b
1660-    if op == "ge":
1661-        return a >= b
1662-    if op == "gt":
1663-        return a > b
1664-    # never reached
1665+        f = self._home.open('rb+')
1666+        try:
1667+            for (offset, data) in datav:
1668+                self._write_share_data(f, offset, data)
1669+            if new_length is not None:
1670+                cur_length = self._read_data_length(f)
1671+                if new_length < cur_length:
1672+                    self._write_data_length(f, new_length)
1673+                    # TODO: if we're going to shrink the share file when the
1674+                    # share data has shrunk, then call
1675+                    # self._change_container_size() here.
1676+        finally:
1677+            f.close()
1678 
1679hunk ./src/allmydata/storage/backends/disk/mutable.py 436
1680-class EmptyShare:
1681+    def close(self):
1682+        pass
1683 
1684hunk ./src/allmydata/storage/backends/disk/mutable.py 439
1685-    def check_testv(self, testv):
1686-        test_good = True
1687-        for (offset, length, operator, specimen) in testv:
1688-            data = ""
1689-            if not testv_compare(data, operator, specimen):
1690-                test_good = False
1691-                break
1692-        return test_good
1693 
1694hunk ./src/allmydata/storage/backends/disk/mutable.py 440
1695-def create_mutable_sharefile(filename, my_nodeid, write_enabler, parent):
1696-    ms = MutableShareFile(filename, parent)
1697-    ms.create(my_nodeid, write_enabler)
1698+def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent):
1699+    ms = MutableDiskShare(storageindex, shnum, fp, parent)
1700+    ms.create(serverid, write_enabler)
1701     del ms
1702hunk ./src/allmydata/storage/backends/disk/mutable.py 444
1703-    return MutableShareFile(filename, parent)
1704-
1705+    return MutableDiskShare(storageindex, shnum, fp, parent)
1706addfile ./src/allmydata/storage/backends/null/__init__.py
1707addfile ./src/allmydata/storage/backends/null/null_backend.py
1708hunk ./src/allmydata/storage/backends/null/null_backend.py 2
1709 
1710+import os, struct
1711+
1712+from zope.interface import implements
1713+
1714+from allmydata.interfaces import IStorageBackend, IShareSet, IStoredShare, IStoredMutableShare
1715+from allmydata.util.assertutil import precondition
1716+from allmydata.util.hashutil import constant_time_compare
1717+from allmydata.storage.backends.base import Backend, ShareSet
1718+from allmydata.storage.bucket import BucketWriter
1719+from allmydata.storage.common import si_b2a
1720+from allmydata.storage.lease import LeaseInfo
1721+
1722+
1723+class NullBackend(Backend):
1724+    implements(IStorageBackend)
1725+
1726+    def __init__(self):
1727+        Backend.__init__(self)
1728+
1729+    def get_available_space(self, reserved_space):
1730+        return None
1731+
1732+    def get_sharesets_for_prefix(self, prefix):
1733+        pass
1734+
1735+    def get_shareset(self, storageindex):
1736+        return NullShareSet(storageindex)
1737+
1738+    def fill_in_space_stats(self, stats):
1739+        pass
1740+
1741+    def set_storage_server(self, ss):
1742+        self.ss = ss
1743+
1744+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
1745+        pass
1746+
1747+
1748+class NullShareSet(ShareSet):
1749+    implements(IShareSet)
1750+
1751+    def __init__(self, storageindex):
1752+        self.storageindex = storageindex
1753+
1754+    def get_overhead(self):
1755+        return 0
1756+
1757+    def get_incoming_shnums(self):
1758+        return frozenset()
1759+
1760+    def get_shares(self):
1761+        pass
1762+
1763+    def get_share(self, shnum):
1764+        return None
1765+
1766+    def get_storage_index(self):
1767+        return self.storageindex
1768+
1769+    def get_storage_index_string(self):
1770+        return si_b2a(self.storageindex)
1771+
1772+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
1773+        immutableshare = ImmutableNullShare()
1774+        return BucketWriter(storageserver, immutableshare, max_space_per_bucket, lease_info, canary)
1775+
1776+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
1777+        return MutableNullShare()
1778+
1779+    def _clean_up_after_unlink(self):
1780+        pass
1781+
1782+
1783+class ImmutableNullShare:
1784+    implements(IStoredShare)
1785+    sharetype = "immutable"
1786+
1787+    def __init__(self):
1788+        """ If max_size is not None then I won't allow more than
1789+        max_size to be written to me. If create=True then max_size
1790+        must not be None. """
1791+        pass
1792+
1793+    def get_shnum(self):
1794+        return self.shnum
1795+
1796+    def unlink(self):
1797+        os.unlink(self.fname)
1798+
1799+    def read_share_data(self, offset, length):
1800+        precondition(offset >= 0)
1801+        # Reads beyond the end of the data are truncated. Reads that start
1802+        # beyond the end of the data return an empty string.
1803+        seekpos = self._data_offset+offset
1804+        fsize = os.path.getsize(self.fname)
1805+        actuallength = max(0, min(length, fsize-seekpos)) # XXX #1528
1806+        if actuallength == 0:
1807+            return ""
1808+        f = open(self.fname, 'rb')
1809+        f.seek(seekpos)
1810+        return f.read(actuallength)
1811+
1812+    def write_share_data(self, offset, data):
1813+        pass
1814+
1815+    def _write_lease_record(self, f, lease_number, lease_info):
1816+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1817+        f.seek(offset)
1818+        assert f.tell() == offset
1819+        f.write(lease_info.to_immutable_data())
1820+
1821+    def _read_num_leases(self, f):
1822+        f.seek(0x08)
1823+        (num_leases,) = struct.unpack(">L", f.read(4))
1824+        return num_leases
1825+
1826+    def _write_num_leases(self, f, num_leases):
1827+        f.seek(0x08)
1828+        f.write(struct.pack(">L", num_leases))
1829+
1830+    def _truncate_leases(self, f, num_leases):
1831+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1832+
1833+    def get_leases(self):
1834+        """Yields a LeaseInfo instance for all leases."""
1835+        f = open(self.fname, 'rb')
1836+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1837+        f.seek(self._lease_offset)
1838+        for i in range(num_leases):
1839+            data = f.read(self.LEASE_SIZE)
1840+            if data:
1841+                yield LeaseInfo().from_immutable_data(data)
1842+
1843+    def add_lease(self, lease):
1844+        pass
1845+
1846+    def renew_lease(self, renew_secret, new_expire_time):
1847+        for i, lease in enumerate(self.get_leases()):
1848+            if constant_time_compare(lease.renew_secret, renew_secret):
1849+                # yup. See if we need to update the owner time.
1850+                if new_expire_time > lease.expiration_time:
1851+                    # yes
1852+                    lease.expiration_time = new_expire_time
1853+                    f = open(self.fname, 'rb+')
1854+                    self._write_lease_record(f, i, lease)
1855+                    f.close()
1856+                return
1857+        raise IndexError("unable to renew non-existent lease")
1858+
1859+    def add_or_renew_lease(self, lease_info):
1860+        try:
1861+            self.renew_lease(lease_info.renew_secret,
1862+                             lease_info.expiration_time)
1863+        except IndexError:
1864+            self.add_lease(lease_info)
1865+
1866+
1867+class MutableNullShare:
1868+    """ XXX: TODO """
1869+    implements(IStoredMutableShare)
1870+    sharetype = "mutable"
1872addfile ./src/allmydata/storage/bucket.py
1873hunk ./src/allmydata/storage/bucket.py 1
1874+
1875+import time
1876+
1877+from foolscap.api import Referenceable
1878+
1879+from zope.interface import implements
1880+from allmydata.interfaces import RIBucketWriter, RIBucketReader
1881+from allmydata.util import base32, log
1882+from allmydata.util.assertutil import precondition
1883+
1884+
1885+class BucketWriter(Referenceable):
1886+    implements(RIBucketWriter)
1887+
1888+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1889+        self.ss = ss
1890+        self._max_size = max_size # don't allow the client to write more than this
1891+        self._canary = canary
1892+        self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1893+        self.closed = False
1894+        self.throw_out_all_data = False
1895+        self._share = immutableshare
1896+        # also, add our lease to the file now, so that other ones can be
1897+        # added by simultaneous uploaders
1898+        self._share.add_lease(lease_info)
1899+
1900+    def allocated_size(self):
1901+        return self._max_size
1902+
1903+    def remote_write(self, offset, data):
1904+        start = time.time()
1905+        precondition(not self.closed)
1906+        if self.throw_out_all_data:
1907+            return
1908+        self._share.write_share_data(offset, data)
1909+        self.ss.add_latency("write", time.time() - start)
1910+        self.ss.count("write")
1911+
1912+    def remote_close(self):
1913+        precondition(not self.closed)
1914+        start = time.time()
1915+
1916+        self._share.close()
1917+        filelen = self._share.stat()
1918+        self._share = None
1919+
1920+        self.closed = True
1921+        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
1922+
1923+        self.ss.bucket_writer_closed(self, filelen)
1924+        self.ss.add_latency("close", time.time() - start)
1925+        self.ss.count("close")
1926+
1927+    def _disconnected(self):
1928+        if not self.closed:
1929+            self._abort()
1930+
1931+    def remote_abort(self):
1932+        log.msg("storage: aborting write to share %r" % self._share,
1933+                facility="tahoe.storage", level=log.UNUSUAL)
1934+        if not self.closed:
1935+            self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
1936+        self._abort()
1937+        self.ss.count("abort")
1938+
1939+    def _abort(self):
1940+        if self.closed:
1941+            return
1942+        self._share.unlink()
1943+        self._share = None
1944+
1945+        # We are now considered closed for further writing. We must tell
1946+        # the storage server about this so that it stops expecting us to
1947+        # use the space it allocated for us earlier.
1948+        self.closed = True
1949+        self.ss.bucket_writer_closed(self, 0)
1950+
1951+
1952+class BucketReader(Referenceable):
1953+    implements(RIBucketReader)
1954+
1955+    def __init__(self, ss, share):
1956+        self.ss = ss
1957+        self._share = share
1958+        self.storageindex = share.storageindex
1959+        self.shnum = share.shnum
1960+
1961+    def __repr__(self):
1962+        return "<%s %s %s>" % (self.__class__.__name__,
1963+                               base32.b2a_l(self.storageindex[:8], 60),
1964+                               self.shnum)
1965+
1966+    def remote_read(self, offset, length):
1967+        start = time.time()
1968+        data = self._share.read_share_data(offset, length)
1969+        self.ss.add_latency("read", time.time() - start)
1970+        self.ss.count("read")
1971+        return data
1972+
1973+    def remote_advise_corrupt_share(self, reason):
1974+        return self.ss.remote_advise_corrupt_share("immutable",
1975+                                                   self.storageindex,
1976+                                                   self.shnum,
1977+                                                   reason)
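
The life cycle these two classes implement, as exercised by test_write_and_read_share in the tests added below (a sketch; `ss` is a StorageServer and `canary` a Referenceable, e.g. mock.Mock() in the tests):

    alreadygot, bs = ss.remote_allocate_buckets('teststorage_index',
                                                'x'*32, 'y'*32,
                                                frozenset((0,)), 1, canary)
    bs[0].remote_write(0, 'a')   # BucketWriter.remote_write
    bs[0].remote_close()         # finalize the share; the writer is now closed
    readers = ss.remote_get_buckets('teststorage_index')
    data = readers.values()[0].remote_read(0, 1)   # BucketReader.remote_read
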
1978addfile ./src/allmydata/test/test_backends.py
1979hunk ./src/allmydata/test/test_backends.py 1
1980+import os, stat
1981+from twisted.trial import unittest
1982+from allmydata.util.log import msg
1983+from allmydata.test.common_util import ReallyEqualMixin
1984+import mock
1985+
1986+# This is the code that we're going to be testing.
1987+from allmydata.storage.server import StorageServer
1988+from allmydata.storage.backends.disk.disk_backend import DiskBackend, si_si2dir
1989+from allmydata.storage.backends.null.null_backend import NullBackend
1990+
1991+# The following share file content was generated with
1992+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1993+# with share data == 'a'. The total size of this input
1994+# is 85 bytes.
1995+shareversionnumber = '\x00\x00\x00\x01'
1996+sharedatalength = '\x00\x00\x00\x01'
1997+numberofleases = '\x00\x00\x00\x01'
1998+shareinputdata = 'a'
1999+ownernumber = '\x00\x00\x00\x00'
2000+renewsecret  = 'x'*32
2001+cancelsecret = 'y'*32
2002+expirationtime = '\x00(\xde\x80'
2003+nextlease = ''
2004+containerdata = shareversionnumber + sharedatalength + numberofleases
2005+client_data = shareinputdata + ownernumber + renewsecret + \
2006+    cancelsecret + expirationtime + nextlease
2007+share_data = containerdata + client_data
2008+testnodeid = 'testnodeidxxxxxxxxxx'
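
The 85-byte total breaks down as follows (a consistency check, not part of the test vectors themselves):

    # container header: 4 (version) + 4 (data length) + 4 (lease count) = 12
    # client data: 1 ('a') + 4 (owner) + 32 (renew secret) + 32 (cancel secret)
    #              + 4 (expiration) + 0 (next lease) = 73
    assert len(containerdata) == 12
    assert len(client_data) == 73   # matches the read_share_data(0, 73) below
    assert len(share_data) == 85
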
2009+
2010+
2011+class MockFileSystem(unittest.TestCase):
2012+    """ I simulate a filesystem that the code under test can use. I simulate
2013+    just the parts of the filesystem that the current implementation of the
2014+    disk backend needs. """
2015+    def setUp(self):
2016+        # Make patcher, patch, and effects for disk-using functions.
2017+        msg( "%s.setUp()" % (self,))
2018+        self.mockedfilepaths = {}
2019+        # keys are pathnames, values are MockFilePath objects. This is necessary because
2020+        # MockFilePath behavior sometimes depends on the filesystem. Where it does,
2021+        # self.mockedfilepaths has the relevant information.
2022+        self.storedir = MockFilePath('teststoredir', self.mockedfilepaths)
2023+        self.basedir = self.storedir.child('shares')
2024+        self.baseincdir = self.basedir.child('incoming')
2025+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
2026+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
2027+        self.shareincomingname = self.sharedirincomingname.child('0')
2028+        self.sharefinalname = self.sharedirfinalname.child('0')
2029+
2030+        # FIXME: these patches won't work; disk_backend no longer imports FilePath, BucketCountingCrawler,
2031+        # or LeaseCheckingCrawler.
2032+
2033+        self.FilePathFake = mock.patch('allmydata.storage.backends.disk.disk_backend.FilePath', new = MockFilePath)
2034+        self.FilePathFake.__enter__()
2035+
2036+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.BucketCountingCrawler')
2037+        FakeBCC = self.BCountingCrawler.__enter__()
2038+        FakeBCC.side_effect = self.call_FakeBCC
2039+
2040+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.LeaseCheckingCrawler')
2041+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
2042+        FakeLCC.side_effect = self.call_FakeLCC
2043+
2044+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
2045+        GetSpace = self.get_available_space.__enter__()
2046+        GetSpace.side_effect = self.call_get_available_space
2047+
2048+        self.statforsize = mock.patch('allmydata.storage.backends.disk.core.filepath.stat')
2049+        getsize = self.statforsize.__enter__()
2050+        getsize.side_effect = self.call_statforsize
2051+
2052+    def call_FakeBCC(self, StateFile):
2053+        return MockBCC()
2054+
2055+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
2056+        return MockLCC()
2057+
2058+    def call_get_available_space(self, storedir, reservedspace):
2059+        # The test share data defined above has a total size of 85 bytes.
2060+        return 85 - reservedspace
2061+
2062+    def call_statforsize(self, fakefpname):
2063+        return self.mockedfilepaths[fakefpname].fileobject.size()
2064+
2065+    def tearDown(self):
2066+        msg( "%s.tearDown()" % (self,))
2067+        self.FilePathFake.__exit__()
2068+        self.mockedfilepaths = {}
2069+
2070+
2071+class MockFilePath:
2072+    def __init__(self, pathstring, ffpathsenvironment, existence=False):
2073+        #  I can't just make the values MockFileObjects because they may be directories.
2074+        self.mockedfilepaths = ffpathsenvironment
2075+        self.path = pathstring
2076+        self.existence = existence
2077+        if not self.mockedfilepaths.has_key(self.path):
2078+            #  The first MockFilePath object is special
2079+            self.mockedfilepaths[self.path] = self
2080+            self.fileobject = None
2081+        else:
2082+            self.fileobject = self.mockedfilepaths[self.path].fileobject
2083+        self.spawn = {}
2084+        self.antecedent = os.path.dirname(self.path)
2085+
2086+    def setContent(self, contentstring):
2087+        # This method rewrites the data in the file that corresponds to its path
2088+        # name whether it preexisted or not.
2089+        self.fileobject = MockFileObject(contentstring)
2090+        self.existence = True
2091+        self.mockedfilepaths[self.path].fileobject = self.fileobject
2092+        self.mockedfilepaths[self.path].existence = self.existence
2093+        self.setparents()
2094+
2095+    def create(self):
2096+        # This method chokes if there's a pre-existing file!
2097+        if self.mockedfilepaths[self.path].fileobject:
2098+            raise OSError
2099+        else:
2100+            self.existence = True
2101+            self.mockedfilepaths[self.path].fileobject = self.fileobject
2102+            self.mockedfilepaths[self.path].existence = self.existence
2103+            self.setparents()
2104+
2105+    def open(self, mode='r'):
2106+        # XXX Makes no use of mode.
2107+        if not self.mockedfilepaths[self.path].fileobject:
2108+            # If there's no fileobject there already then make one and put it there.
2109+            self.fileobject = MockFileObject()
2110+            self.existence = True
2111+            self.mockedfilepaths[self.path].fileobject = self.fileobject
2112+            self.mockedfilepaths[self.path].existence = self.existence
2113+        else:
2114+            # Otherwise get a ref to it.
2115+            self.fileobject = self.mockedfilepaths[self.path].fileobject
2116+            self.existence = self.mockedfilepaths[self.path].existence
2117+        return self.fileobject.open(mode)
2118+
2119+    def child(self, childstring):
2120+        arg2child = os.path.join(self.path, childstring)
2121+        child = MockFilePath(arg2child, self.mockedfilepaths)
2122+        return child
2123+
2124+    def children(self):
2125+        childrenfromffs = [ffp for ffp in self.mockedfilepaths.values() if ffp.path.startswith(self.path)]
2126+        childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)]
2127+        childrenfromffs = [ffp for ffp in childrenfromffs if ffp.exists()]
2128+        self.spawn = frozenset(childrenfromffs)
2129+        return self.spawn
2130+
2131+    def parent(self):
2132+        if self.mockedfilepaths.has_key(self.antecedent):
2133+            parent = self.mockedfilepaths[self.antecedent]
2134+        else:
2135+            parent = MockFilePath(self.antecedent, self.mockedfilepaths)
2136+        return parent
2137+
2138+    def parents(self):
2139+        antecedents = []
2140+        def f(fps, antecedents):
2141+            newfps = os.path.split(fps)[0]
2142+            if newfps:
2143+                antecedents.append(newfps)
2144+                f(newfps, antecedents)
2145+        f(self.path, antecedents)
2146+        return antecedents
2147+
2148+    def setparents(self):
2149+        for fps in self.parents():
2150+            if not self.mockedfilepaths.has_key(fps):
2151+                self.mockedfilepaths[fps] = MockFilePath(fps, self.mockedfilepaths, existence=True)
2152+
2153+    def basename(self):
2154+        return os.path.split(self.path)[1]
2155+
2156+    def moveTo(self, newffp):
2157+        #  XXX Makes no distinction between file and directory arguments; this deviates from filepath.moveTo.
2158+        if self.mockedfilepaths.has_key(newffp.path) and self.mockedfilepaths[newffp.path].exists():
2159+            raise OSError
2160+        else:
2161+            self.mockedfilepaths[newffp.path] = self
2162+            self.path = newffp.path
2163+
2164+    def getsize(self):
2165+        return self.fileobject.getsize()
2166+
2167+    def exists(self):
2168+        return self.existence
2169+
2170+    def isdir(self):
2171+        return True
2172+
2173+    def makedirs(self):
2174+        # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
2175+        pass
2176+
2177+    def remove(self):
2178+        pass
2179+
2180+
2181+class MockFileObject:
2182+    def __init__(self, contentstring=''):
2183+        self.buffer = contentstring
2184+        self.pos = 0
2185+    def open(self, mode='r'):
2186+        return self
2187+    def write(self, instring):
2188+        begin = self.pos
2189+        padlen = begin - len(self.buffer)
2190+        if padlen > 0:
2191+            self.buffer += '\x00' * padlen
2192+        end = self.pos + len(instring)
2193+        self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
2194+        self.pos = end
2195+    def close(self):
2196+        self.pos = 0
2197+    def seek(self, pos):
2198+        self.pos = pos
2199+    def read(self, numberbytes):
2200+        return self.buffer[self.pos:self.pos+numberbytes]
2201+    def tell(self):
2202+        return self.pos
2203+    def size(self):
2204+        # XXX This method A: Is not to be found in a real file B: Is part of a wild-mung-up of filepath.stat!
2205+        # XXX Finally we shall hopefully use a getsize method soon, must consult first though.
2206+        # Hmmm...  perhaps we need to sometimes stat the address when there's not a mockfileobject present?
2207+        return {stat.ST_SIZE:len(self.buffer)}
2208+    def getsize(self):
2209+        return len(self.buffer)
2210+
2211+class MockBCC:
2212+    def setServiceParent(self, Parent):
2213+        pass
2214+
2215+
2216+class MockLCC:
2217+    def setServiceParent(self, Parent):
2218+        pass
2219+
2220+
2221+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
2222+    """ NullBackend is just for testing and executable documentation, so
2223+    this test is actually a test of StorageServer in which we're using
2224+    NullBackend as helper code for the test, rather than a test of
2225+    NullBackend. """
2226+    def setUp(self):
2227+        self.ss = StorageServer(testnodeid, NullBackend())
2228+
2229+    @mock.patch('os.mkdir')
2230+    @mock.patch('__builtin__.open')
2231+    @mock.patch('os.listdir')
2232+    @mock.patch('os.path.isdir')
2233+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2234+        """
2235+        Write a new share. This tests that StorageServer's remote_allocate_buckets
2236+        generates the correct return types when given test-vector arguments. That
2237+        bs is of the correct type is verified by attempting to invoke remote_write
2238+        on bs[0].
2239+        """
2240+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2241+        bs[0].remote_write(0, 'a')
2242+        self.failIf(mockisdir.called)
2243+        self.failIf(mocklistdir.called)
2244+        self.failIf(mockopen.called)
2245+        self.failIf(mockmkdir.called)
2246+
2247+
2248+class TestServerConstruction(MockFileSystem, ReallyEqualMixin):
2249+    def test_create_server_disk_backend(self):
2250+        """ This tests whether a server instance can be constructed with a
2251+        filesystem backend. To pass the test, it mustn't use the filesystem
2252+        outside of its configured storedir. """
2253+        StorageServer(testnodeid, DiskBackend(self.storedir))
2254+
2255+
2256+class TestServerAndDiskBackend(MockFileSystem, ReallyEqualMixin):
2257+    """ This tests both the StorageServer and the Disk backend together. """
2258+    def setUp(self):
2259+        MockFileSystem.setUp(self)
2260+        try:
2261+            self.backend = DiskBackend(self.storedir)
2262+            self.ss = StorageServer(testnodeid, self.backend)
2263+
2264+            self.backendwithreserve = DiskBackend(self.storedir, reserved_space = 1)
2265+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
2266+        except:
2267+            MockFileSystem.tearDown(self)
2268+            raise
2269+
2270+    @mock.patch('time.time')
2271+    @mock.patch('allmydata.util.fileutil.get_available_space')
2272+    def test_out_of_space(self, mockget_available_space, mocktime):
2273+        mocktime.return_value = 0
2274+
2275+        def call_get_available_space(dir, reserve):
2276+            return 0
2277+
2278+        mockget_available_space.side_effect = call_get_available_space
2279+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2280+        self.failUnlessReallyEqual(bsc, {})
2281+
2282+    @mock.patch('time.time')
2283+    def test_write_and_read_share(self, mocktime):
2284+        """
2285+        Write a new share, read it, and test the server's (and disk backend's)
2286+        handling of simultaneous and successive attempts to write the same
2287+        share.
2288+        """
2289+        mocktime.return_value = 0
2290+        # Inspect incoming and fail unless it's empty.
2291+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
2292+
2293+        self.failUnlessReallyEqual(incomingset, frozenset())
2294+
2295+        # Populate incoming with the sharenum: 0.
2296+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
2297+
2298+        # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
2299+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
2300+
2303+        # Attempt to create a second share writer with the same sharenum.
2304+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
2305+
2306+        # Show that no sharewriter results from a remote_allocate_buckets
2307+        # with the same si and sharenum, until BucketWriter.remote_close()
2308+        # has been called.
2309+        self.failIf(bsa)
2310+
2311+        # Test allocated size.
2312+        spaceint = self.ss.allocated_size()
2313+        self.failUnlessReallyEqual(spaceint, 1)
2314+
2315+        # Write 'a' to shnum 0. Only tested together with close and read.
2316+        bs[0].remote_write(0, 'a')
2317+
2318+        # Preclose: Inspect final, failUnless nothing there.
2319+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
2320+        bs[0].remote_close()
2321+
2322+        # Postclose: (Omnibus) failUnless written data is in final.
2323+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
2324+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
2325+        contents = sharesinfinal[0].read_share_data(0, 73)
2326+        self.failUnlessReallyEqual(contents, client_data)
2327+
2328+        # Exercise the case that the share we're asking to allocate is
2329+        # already (completely) uploaded.
2330+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2331+
2332+
2333+    def test_read_old_share(self):
2334+        """ This tests whether the code correctly finds and reads
2335+        shares written out by old (Tahoe-LAFS <= v1.8.2)
2336+        servers. There is a similar test in test_download, but that one
2337+        is from the perspective of the client and exercises a deeper
2338+        stack of code. This one is for exercising just the
2339+        StorageServer object. """
2340+        # Construct a file with the appropriate contents in the mock filesystem.
2341+        datalen = len(share_data)
2342+        finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(0))
2343+        finalhome.setContent(share_data)
2344+
2345+        # Now begin the test.
2346+        bs = self.ss.remote_get_buckets('teststorage_index')
2347+
2348+        self.failUnlessEqual(len(bs), 1)
2349+        b = bs['0']
2350+        # These should match by definition; the next two cases cover reads whose behavior is not completely unambiguous.
2351+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
2352+        # If you try to read past the end you get as much data as is there.
2353+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
2354+        # If you start reading past the end of the file you get the empty string.
2355+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2356}
2357[Pluggable backends -- all other changes. refs #999
2358david-sarah@jacaranda.org**20110919233256
2359 Ignore-this: 1a77b6b5d178b32a9b914b699ba7e957
2360] {
2361hunk ./src/allmydata/client.py 245
2362             sharetypes.append("immutable")
2363         if self.get_config("storage", "expire.mutable", True, boolean=True):
2364             sharetypes.append("mutable")
2365-        expiration_sharetypes = tuple(sharetypes)
2366 
2367hunk ./src/allmydata/client.py 246
2368+        expiration_policy = {
2369+            'enabled': expire,
2370+            'mode': mode,
2371+            'override_lease_duration': o_l_d,
2372+            'cutoff_date': cutoff_date,
2373+            'sharetypes': tuple(sharetypes),
2374+        }
2375         ss = StorageServer(storedir, self.nodeid,
2376                            reserved_space=reserved,
2377                            discard_storage=discard,
2378hunk ./src/allmydata/client.py 258
2379                            readonly_storage=readonly,
2380                            stats_provider=self.stats_provider,
2381-                           expiration_enabled=expire,
2382-                           expiration_mode=mode,
2383-                           expiration_override_lease_duration=o_l_d,
2384-                           expiration_cutoff_date=cutoff_date,
2385-                           expiration_sharetypes=expiration_sharetypes)
2386+                           expiration_policy=expiration_policy)
2387         self.add_service(ss)
2388 
2389         d = self.when_tub_ready()
2390hunk ./src/allmydata/immutable/offloaded.py 306
2391         if os.path.exists(self._encoding_file):
2392             self.log("ciphertext already present, bypassing fetch",
2393                      level=log.UNUSUAL)
2394+            # XXX the following comment is probably stale, since
2395+            # LocalCiphertextReader.get_plaintext_hashtree_leaves does not exist.
2396+            #
2397             # we'll still need the plaintext hashes (when
2398             # LocalCiphertextReader.get_plaintext_hashtree_leaves() is
2399             # called), and currently the easiest way to get them is to ask
2400hunk ./src/allmydata/immutable/upload.py 765
2401             self._status.set_progress(1, progress)
2402         return cryptdata
2403 
2404-
2405     def get_plaintext_hashtree_leaves(self, first, last, num_segments):
2406hunk ./src/allmydata/immutable/upload.py 766
2407+        """OBSOLETE; Get the leaf nodes of a merkle hash tree over the
2408+        plaintext segments, i.e. get the tagged hashes of the given segments.
2409+        The segment size is expected to be generated by the
2410+        IEncryptedUploadable before any plaintext is read or ciphertext
2411+        produced, so that the segment hashes can be generated with only a
2412+        single pass.
2413+
2414+        This returns a Deferred that fires with a sequence of hashes, using:
2415+
2416+         tuple(segment_hashes[first:last])
2417+
2418+        'num_segments' is used to assert that the number of segments that the
2419+        IEncryptedUploadable handled matches the number of segments that the
2420+        encoder was expecting.
2421+
2422+        This method must not be called until the final byte has been read
2423+        from read_encrypted(). Once this method is called, read_encrypted()
2424+        can never be called again.
2425+        """
2426         # this is currently unused, but will live again when we fix #453
2427         if len(self._plaintext_segment_hashes) < num_segments:
2428             # close out the last one
2429hunk ./src/allmydata/immutable/upload.py 803
2430         return defer.succeed(tuple(self._plaintext_segment_hashes[first:last]))
2431 
2432     def get_plaintext_hash(self):
2433+        """OBSOLETE; Get the hash of the whole plaintext.
2434+
2435+        This returns a Deferred that fires with a tagged SHA-256 hash of the
2436+        whole plaintext, obtained from hashutil.plaintext_hash(data).
2437+        """
2438+        # this is currently unused, but will live again when we fix #453
2439         h = self._plaintext_hasher.digest()
2440         return defer.succeed(h)
2441 
2442hunk ./src/allmydata/interfaces.py 29
2443 Number = IntegerConstraint(8) # 2**(8*8) == 16EiB ~= 18e18 ~= 18 exabytes
2444 Offset = Number
2445 ReadSize = int # the 'int' constraint is 2**31 == 2GiB -- large files are processed in not-so-large increments
2446-WriteEnablerSecret = Hash # used to protect mutable bucket modifications
2447-LeaseRenewSecret = Hash # used to protect bucket lease renewal requests
2448-LeaseCancelSecret = Hash # used to protect bucket lease cancellation requests
2449+WriteEnablerSecret = Hash # used to protect mutable share modifications
2450+LeaseRenewSecret = Hash # used to protect lease renewal requests
2451+LeaseCancelSecret = Hash # used to protect lease cancellation requests
2452 
2453 class RIStubClient(RemoteInterface):
2454     """Each client publishes a service announcement for a dummy object called
2455hunk ./src/allmydata/interfaces.py 106
2456                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2457                          allocated_size=Offset, canary=Referenceable):
2458         """
2459-        @param storage_index: the index of the bucket to be created or
2460+        @param storage_index: the index of the shareset to be created or
2461                               increfed.
2462         @param sharenums: these are the share numbers (probably between 0 and
2463                           99) that the sender is proposing to store on this
2464hunk ./src/allmydata/interfaces.py 111
2465                           server.
2466-        @param renew_secret: This is the secret used to protect bucket refresh
2467+        @param renew_secret: This is the secret used to protect lease renewal.
2468                              This secret is generated by the client and
2469                              stored for later comparison by the server. Each
2470                              server is given a different secret.
2471hunk ./src/allmydata/interfaces.py 115
2472-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2473-        @param canary: If the canary is lost before close(), the bucket is
2474+        @param cancel_secret: ignored
2475+        @param canary: If the canary is lost before close(), the allocation is
2476                        deleted.
2477         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2478                  already have and allocated is what we hereby agree to accept.
2479hunk ./src/allmydata/interfaces.py 129
2480                   renew_secret=LeaseRenewSecret,
2481                   cancel_secret=LeaseCancelSecret):
2482         """
2483-        Add a new lease on the given bucket. If the renew_secret matches an
2484+        Add a new lease on the given shareset. If the renew_secret matches an
2485         existing lease, that lease will be renewed instead. If there is no
2486hunk ./src/allmydata/interfaces.py 131
2487-        bucket for the given storage_index, return silently. (note that in
2488+        shareset for the given storage_index, return silently. (Note that in
2489         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2490hunk ./src/allmydata/interfaces.py 133
2491-        bucket)
2492+        shareset.)
2493         """
2494         return Any() # returns None now, but future versions might change
2495 
2496hunk ./src/allmydata/interfaces.py 139
2497     def renew_lease(storage_index=StorageIndex, renew_secret=LeaseRenewSecret):
2498         """
2499-        Renew the lease on a given bucket, resetting the timer to 31 days.
2500-        Some networks will use this, some will not. If there is no bucket for
2501+        Renew the lease on a given shareset, resetting the timer to 31 days.
2502+        Some networks will use this, some will not. If there is no shareset for
2503         the given storage_index, IndexError will be raised.
2504 
2505         For mutable shares, if the given renew_secret does not match an
2506hunk ./src/allmydata/interfaces.py 146
2507         existing lease, IndexError will be raised with a note listing the
2508         server-nodeids on the existing leases, so leases on migrated shares
2509-        can be renewed or cancelled. For immutable shares, IndexError
2510-        (without the note) will be raised.
2511+        can be renewed. For immutable shares, IndexError (without the note)
2512+        will be raised.
2513         """
2514         return Any()
2515 
2516hunk ./src/allmydata/interfaces.py 154
2517     def get_buckets(storage_index=StorageIndex):
2518         return DictOf(int, RIBucketReader, maxKeys=MAX_BUCKETS)
2519 
2520-
2521-
2522     def slot_readv(storage_index=StorageIndex,
2523                    shares=ListOf(int), readv=ReadVector):
2524         """Read a vector from the numbered shares associated with the given
2525hunk ./src/allmydata/interfaces.py 163
2526 
2527     def slot_testv_and_readv_and_writev(storage_index=StorageIndex,
2528                                         secrets=TupleOf(WriteEnablerSecret,
2529-                                                        LeaseRenewSecret,
2530-                                                        LeaseCancelSecret),
2531+                                                        LeaseRenewSecret),
2532                                         tw_vectors=TestAndWriteVectorsForShares,
2533                                         r_vector=ReadVector,
2534                                         ):
2535hunk ./src/allmydata/interfaces.py 167
2536-        """General-purpose test-and-set operation for mutable slots. Perform
2537-        a bunch of comparisons against the existing shares. If they all pass,
2538-        then apply a bunch of write vectors to those shares. Then use the
2539-        read vectors to extract data from all the shares and return the data.
2540+        """
2541+        General-purpose atomic test-read-and-set operation for mutable slots.
2542+        Perform a bunch of comparisons against the existing shares. If they
2543+        all pass: use the read vectors to extract data from all the shares,
2544+        then apply a bunch of write vectors to those shares. Return the read
2545+        data, which does not include any modifications made by the writes.
2546 
2547         This method is, um, large. The goal is to allow clients to update all
2548         the shares associated with a mutable file in a single round trip.
2549hunk ./src/allmydata/interfaces.py 177
2550 
2551-        @param storage_index: the index of the bucket to be created or
2552+        @param storage_index: the index of the shareset to be created or
2553                               increfed.
2554         @param write_enabler: a secret that is stored along with the slot.
2555                               Writes are accepted from any caller who can
2556hunk ./src/allmydata/interfaces.py 183
2557                               present the matching secret. A different secret
2558                               should be used for each slot*server pair.
2559-        @param renew_secret: This is the secret used to protect bucket refresh
2560+        @param renew_secret: This is the secret used to protect lease renewal.
2561                              This secret is generated by the client and
2562                              stored for later comparison by the server. Each
2563                              server is given a different secret.
2564hunk ./src/allmydata/interfaces.py 187
2565-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2566+        @param cancel_secret: ignored
2567 
2568hunk ./src/allmydata/interfaces.py 189
2569-        The 'secrets' argument is a tuple of (write_enabler, renew_secret,
2570-        cancel_secret). The first is required to perform any write. The
2571-        latter two are used when allocating new shares. To simply acquire a
2572-        new lease on existing shares, use an empty testv and an empty writev.
2573+        The 'secrets' argument is a tuple with (write_enabler, renew_secret).
2574+        The write_enabler is required to perform any write. The renew_secret
2575+        is used when allocating new shares.
2576 
2577         Each share can have a separate test vector (i.e. a list of
2578         comparisons to perform). If all vectors for all shares pass, then all
2579hunk ./src/allmydata/interfaces.py 280
2580         store that on disk.
2581         """
2582 
2583-class IStorageBucketWriter(Interface):
2584+
2585+class IStorageBackend(Interface):
2586     """
2587hunk ./src/allmydata/interfaces.py 283
2588-    Objects of this kind live on the client side.
2589+    Objects of this kind live on the server side and are used by the
2590+    storage server object.
2591     """
2592hunk ./src/allmydata/interfaces.py 286
2593-    def put_block(segmentnum=int, data=ShareData):
2594-        """@param data: For most segments, this data will be 'blocksize'
2595-        bytes in length. The last segment might be shorter.
2596-        @return: a Deferred that fires (with None) when the operation completes
2597+    def get_available_space():
2598+        """
2599+        Returns available space for share storage in bytes, or
2600+        None if this information is not available or if the available
2601+        space is unlimited.
2602+
2603+        If the backend is configured for read-only mode then this will
2604+        return 0.
2605+        """
2606+
2607+    def get_sharesets_for_prefix(prefix):
2608+        """
2609+        Generates IShareSet objects for all storage indices matching the
2610+        given prefix for which this backend holds shares.
2611+        """
2612+
2613+    def get_shareset(storageindex):
2614+        """
2615+        Get an IShareSet object for the given storage index.
2616+        """
2617+
2618+    def advise_corrupt_share(storageindex, sharetype, shnum, reason):
2619+        """
2620+        Clients who discover hash failures in shares that they have
2621+        downloaded from me will use this method to inform me about the
2622+        failures. I will record their concern so that my operator can
2623+        manually inspect the shares in question.
2624+
2625+        'sharetype' is either 'mutable' or 'immutable'. 'shnum' is the integer
2626+        share number. 'reason' is a human-readable explanation of the problem,
2627+        probably including some expected hash values and the computed ones
2628+        that did not match. Corruption advisories for mutable shares should
2629+        include a hash of the public key (the same value that appears in the
2630+        mutable-file verify-cap), since the current share format does not
2631+        store that on disk.
2632+
2633+        @param storageindex=str
2634+        @param sharetype=str
2635+        @param shnum=int
2636+        @param reason=str
2637+        """
2638+
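To make the backend contract concrete, here is a minimal in-memory sketch of
a provider (illustrative only, not part of this patch; a real backend would
enumerate disk or S3 storage)::

    class StubBackend(object):
        # Toy IStorageBackend provider: all sharesets live in a dict.
        def __init__(self):
            self._sharesets = {}  # storageindex -> IShareSet provider

        def get_available_space(self):
            return None  # unknown/unlimited; a read-only backend returns 0

        def get_sharesets_for_prefix(self, prefix):
            for shareset in sorted(self._sharesets.values(),
                                   key=lambda s: s.get_storage_index_string()):
                if shareset.get_storage_index_string().startswith(prefix):
                    yield shareset

        def get_shareset(self, storageindex):
            return self._sharesets[storageindex]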
2639+
2640+class IShareSet(Interface):
2641+    def get_storage_index():
2642+        """
2643+        Returns the storage index for this shareset.
2644+        """
2645+
2646+    def get_storage_index_string():
2647+        """
2648+        Returns the base32-encoded storage index for this shareset.
2649+        """
2650+
2651+    def get_overhead():
2652+        """
2653+        Returns the storage overhead, in bytes, of this shareset (exclusive
2654+        of the space used by its shares).
2655+        """
2656+
2657+    def get_shares():
2658+        """
2659+        Generates the IStoredShare objects held in this shareset.
2660+        """
2661+
2662+    def has_incoming(shnum):
2663+        """
2664+        Returns True if this shareset has an incoming (partial) share with this number, otherwise False.
2665+        """
2666+
2667+    def make_bucket_writer(storageserver, shnum, max_space_per_bucket, lease_info, canary):
2668+        """
2669+        Create a bucket writer that can be used to write data to a given share.
2670+
2671+        @param storageserver=RIStorageServer
2672+        @param shnum=int: A share number in this shareset
2673+        @param max_space_per_bucket=int: The maximum space allocated for the
2674+                 share, in bytes
2675+        @param lease_info=LeaseInfo: The initial lease information
2676+        @param canary=Referenceable: If the canary is lost before close(), the
2677+                 bucket is deleted.
2678+        @return an IStorageBucketWriter for the given share
2679+        """
2680+
2681+    def make_bucket_reader(storageserver, share):
2682+        """
2683+        Create a bucket reader that can be used to read data from a given share.
2684+
2685+        @param storageserver=RIStorageServer
2686+        @param share=IStoredShare
2687+        @return an IStorageBucketReader for the given share
2688+        """
2689+
2690+    def readv(wanted_shnums, read_vector):
2691+        """
2692+        Read a vector from the numbered shares in this shareset. An empty
2693+        wanted_shnums list means to return data from all known shares.
2694+
2695+        @param wanted_shnums=ListOf(int)
2696+        @param read_vector=ReadVector
2697+        @return DictOf(int, ReadData): shnum -> results, with one key per share
2698+        """
2699+
2700+    def testv_and_readv_and_writev(storageserver, secrets, test_and_write_vectors, read_vector, expiration_time):
2701+        """
2702+        General-purpose atomic test-read-and-set operation for mutable slots.
2703+        Perform a bunch of comparisons against the existing shares in this
2704+        shareset. If they all pass: use the read vectors to extract data from
2705+        all the shares, then apply a bunch of write vectors to those shares.
2706+        Return the read data, which does not include any modifications made by
2707+        the writes.
2708+
2709+        See the similar method in RIStorageServer for more detail.
2710+
2711+        @param storageserver=RIStorageServer
2712+        @param secrets=TupleOf(WriteEnablerSecret, LeaseRenewSecret[, ...])
2713+        @param test_and_write_vectors=TestAndWriteVectorsForShares
2714+        @param read_vector=ReadVector
2715+        @param expiration_time=int
2716+        @return TupleOf(bool, DictOf(int, ReadData))
2717+        """
2718+
2719+    def add_or_renew_lease(lease_info):
2720+        """
2721+        Add a new lease on the shares in this shareset. If the renew_secret
2722+        matches an existing lease, that lease will be renewed instead. If
2723+        there are no shares in this shareset, return silently.
2724+
2725+        @param lease_info=LeaseInfo
2726+        """
2727+
2728+    def renew_lease(renew_secret, new_expiration_time):
2729+        """
2730+        Renew a lease on the shares in this shareset, resetting the timer
2731+        to 31 days. Some grids will use this, some will not. If there are no
2732+        shares in this shareset, IndexError will be raised.
2733+
2734+        For mutable shares, if the given renew_secret does not match an
2735+        existing lease, IndexError will be raised with a note listing the
2736+        server-nodeids on the existing leases, so leases on migrated shares
2737+        can be renewed. For immutable shares, IndexError (without the note)
2738+        will be raised.
2739+
2740+        @param renew_secret=LeaseRenewSecret
2741+        """
2742+
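As an illustration of how a storage server might drive an IShareSet during an
immutable upload (a sketch under assumed names, not part of this patch)::

    def allocate_share(backend, storageserver, storage_index, shnum,
                       max_space, lease_info, canary):
        # Find the shareset for this storage index and allocate a writer
        # for one share, unless that share is already being uploaded.
        shareset = backend.get_shareset(storage_index)
        if shareset.has_incoming(shnum):
            return None
        return shareset.make_bucket_writer(storageserver, shnum,
                                           max_space, lease_info, canary)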
2743+
2744+class IStoredShare(Interface):
2745+    """
2746+    This object may contain as much as all of the data of a share. It is
2747+    intended to support lazy evaluation, such that in many use cases
2748+    substantially less than all of the share data will be accessed.
2749+    """
2750+    def close():
2751+        """
2752+        Complete writing to this share.
2753+        """
2754+
2755+    def get_storage_index():
2756+        """
2757+        Returns the storage index.
2758+        """
2759+
2760+    def get_shnum():
2761+        """
2762+        Returns the share number.
2763+        """
2764+
2765+    def get_data_length():
2766+        """
2767+        Returns the data length in bytes.
2768+        """
2769+
2770+    def get_size():
2771+        """
2772+        Returns the size of the share in bytes.
2773+        """
2774+
2775+    def get_used_space():
2776+        """
2777+        Returns the amount of backend storage used by this share, in bytes,
2778+        including overhead.
2779+        """
2780+
2781+    def unlink():
2782+        """
2783+        Signal that this share can be removed from the backend storage. This does
2784+        not guarantee that the share data will be immediately inaccessible, or
2785+        that it will be securely erased.
2786+        """
2787+
2788+    def readv(read_vector):
2789+        """
2790+        Read a vector of (offset, length) ranges; return a list of data strings.
2791+        """
2792+
2793+
2794+class IStoredMutableShare(IStoredShare):
2795+    def check_write_enabler(write_enabler, si_s):
2796+        """
2797+        Raise BadWriteEnablerError if write_enabler does not match this share's.
2798         """
2799 
2800hunk ./src/allmydata/interfaces.py 489
2801-    def put_plaintext_hashes(hashes=ListOf(Hash)):
2802+    def check_testv(test_vector):
2803+        """
2804+        Return True if this share passes the given test vector, else False.
2805+        """
2806+
2807+    def writev(datav, new_length):
2808+        """
2809+        Apply the given write vector; then truncate to new_length if not None.
2810+        """
2811+
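The three mutable-share methods above combine roughly as follows inside a
backend's testv_and_readv_and_writev() (an illustrative per-share sketch,
not part of this patch)::

    def apply_to_share(share, write_enabler, si_s, testv, datav, new_length):
        # Raises if the caller does not hold the write enabler for this slot.
        share.check_write_enabler(write_enabler, si_s)
        if not share.check_testv(testv):
            return False            # a test failed; nothing may be written
        share.writev(datav, new_length)
        return True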
2812+
2813+class IStorageBucketWriter(Interface):
2814+    """
2815+    Objects of this kind live on the client side.
2816+    """
2817+    def put_block(segmentnum, data):
2818         """
2819hunk ./src/allmydata/interfaces.py 506
2820+        @param segmentnum=int
2821+        @param data=ShareData: For most segments, this data will be 'blocksize'
2822+        bytes in length. The last segment might be shorter.
2823         @return: a Deferred that fires (with None) when the operation completes
2824         """
2825 
2826hunk ./src/allmydata/interfaces.py 512
2827-    def put_crypttext_hashes(hashes=ListOf(Hash)):
2828+    def put_crypttext_hashes(hashes):
2829         """
2830hunk ./src/allmydata/interfaces.py 514
2831+        @param hashes=ListOf(Hash)
2832         @return: a Deferred that fires (with None) when the operation completes
2833         """
2834 
2835hunk ./src/allmydata/interfaces.py 518
2836-    def put_block_hashes(blockhashes=ListOf(Hash)):
2837+    def put_block_hashes(blockhashes):
2838         """
2839hunk ./src/allmydata/interfaces.py 520
2840+        @param blockhashes=ListOf(Hash)
2841         @return: a Deferred that fires (with None) when the operation completes
2842         """
2843 
2844hunk ./src/allmydata/interfaces.py 524
2845-    def put_share_hashes(sharehashes=ListOf(TupleOf(int, Hash))):
2846+    def put_share_hashes(sharehashes):
2847         """
2848hunk ./src/allmydata/interfaces.py 526
2849+        @param sharehashes=ListOf(TupleOf(int, Hash))
2850         @return: a Deferred that fires (with None) when the operation completes
2851         """
2852 
2853hunk ./src/allmydata/interfaces.py 530
2854-    def put_uri_extension(data=URIExtensionData):
2855+    def put_uri_extension(data):
2856         """This block of data contains integrity-checking information (hashes
2857         of plaintext, crypttext, and shares), as well as encoding parameters
2858         that are necessary to recover the data. This is a serialized dict
2859hunk ./src/allmydata/interfaces.py 535
2860         mapping strings to other strings. The hash of this data is kept in
2861-        the URI and verified before any of the data is used. All buckets for
2862-        a given file contain identical copies of this data.
2863+        the URI and verified before any of the data is used. All share
2864+        containers for a given file contain identical copies of this data.
2865 
2866         The serialization format is specified with the following pseudocode:
2867         for k in sorted(dict.keys()):
2868hunk ./src/allmydata/interfaces.py 543
2869             assert re.match(r'^[a-zA-Z_\-]+$', k)
2870             write(k + ':' + netstring(dict[k]))
2871 
2872+        @param data=URIExtensionData
2873         @return: a Deferred that fires (with None) when the operation completes
2874         """
2875 
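A runnable form of the serialization pseudocode above (illustrative, not part
of this patch; Tahoe-LAFS has a netstring helper in allmydata.util)::

    import re

    def netstring(s):
        return "%d:%s," % (len(s), s)

    def serialize_uri_extension(d):
        out = []
        for k in sorted(d.keys()):
            assert re.match(r'^[a-zA-Z_\-]+$', k)
            out.append(k + ':' + netstring(d[k]))
        return ''.join(out)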
2876hunk ./src/allmydata/interfaces.py 558
2877 
2878 class IStorageBucketReader(Interface):
2879 
2880-    def get_block_data(blocknum=int, blocksize=int, size=int):
2881+    def get_block_data(blocknum, blocksize, size):
2882         """Most blocks will be the same size. The last block might be shorter
2883         than the others.
2884 
2885hunk ./src/allmydata/interfaces.py 562
2886+        @param blocknum=int
2887+        @param blocksize=int
2888+        @param size=int
2889         @return: ShareData
2890         """
2891 
2892hunk ./src/allmydata/interfaces.py 573
2893         @return: ListOf(Hash)
2894         """
2895 
2896-    def get_block_hashes(at_least_these=SetOf(int)):
2897+    def get_block_hashes(at_least_these=()):
2898         """
2899hunk ./src/allmydata/interfaces.py 575
2900+        @param at_least_these=SetOf(int)
2901         @return: ListOf(Hash)
2902         """
2903 
2904hunk ./src/allmydata/interfaces.py 579
2905-    def get_share_hashes(at_least_these=SetOf(int)):
2906+    def get_share_hashes():
2907         """
2908         @return: ListOf(TupleOf(int, Hash))
2909         """
2910hunk ./src/allmydata/interfaces.py 611
2911         @return: unicode nickname, or None
2912         """
2913 
2914-    # methods moved from IntroducerClient, need review
2915-    def get_all_connections():
2916-        """Return a frozenset of (nodeid, service_name, rref) tuples, one for
2917-        each active connection we've established to a remote service. This is
2918-        mostly useful for unit tests that need to wait until a certain number
2919-        of connections have been made."""
2920-
2921-    def get_all_connectors():
2922-        """Return a dict that maps from (nodeid, service_name) to a
2923-        RemoteServiceConnector instance for all services that we are actively
2924-        trying to connect to. Each RemoteServiceConnector has the following
2925-        public attributes::
2926-
2927-          service_name: the type of service provided, like 'storage'
2928-          announcement_time: when we first heard about this service
2929-          last_connect_time: when we last established a connection
2930-          last_loss_time: when we last lost a connection
2931-
2932-          version: the peer's version, from the most recent connection
2933-          oldest_supported: the peer's oldest supported version, same
2934-
2935-          rref: the RemoteReference, if connected, otherwise None
2936-          remote_host: the IAddress, if connected, otherwise None
2937-
2938-        This method is intended for monitoring interfaces, such as a web page
2939-        that describes connecting and connected peers.
2940-        """
2941-
2942-    def get_all_peerids():
2943-        """Return a frozenset of all peerids to whom we have a connection (to
2944-        one or more services) established. Mostly useful for unit tests."""
2945-
2946-    def get_all_connections_for(service_name):
2947-        """Return a frozenset of (nodeid, service_name, rref) tuples, one
2948-        for each active connection that provides the given SERVICE_NAME."""
2949-
2950-    def get_permuted_peers(service_name, key):
2951-        """Returns an ordered list of (peerid, rref) tuples, selecting from
2952-        the connections that provide SERVICE_NAME, using a hash-based
2953-        permutation keyed by KEY. This randomizes the service list in a
2954-        repeatable way, to distribute load over many peers.
2955-        """
2956-
2957 
2958 class IMutableSlotWriter(Interface):
2959     """
2960hunk ./src/allmydata/interfaces.py 616
2961     The interface for a writer around a mutable slot on a remote server.
2962     """
2963-    def set_checkstring(checkstring, *args):
2964+    def set_checkstring(seqnum_or_checkstring, root_hash=None, salt=None):
2965         """
2966         Set the checkstring that I will pass to the remote server when
2967         writing.
2968hunk ./src/allmydata/interfaces.py 640
2969         Add a block and salt to the share.
2970         """
2971 
2972-    def put_encprivey(encprivkey):
2973+    def put_encprivkey(encprivkey):
2974         """
2975         Add the encrypted private key to the share.
2976         """
2977hunk ./src/allmydata/interfaces.py 645
2978 
2979-    def put_blockhashes(blockhashes=list):
2980+    def put_blockhashes(blockhashes):
2981         """
2982hunk ./src/allmydata/interfaces.py 647
2983+        @param blockhashes=list
2984         Add the block hash tree to the share.
2985         """
2986 
2987hunk ./src/allmydata/interfaces.py 651
2988-    def put_sharehashes(sharehashes=dict):
2989+    def put_sharehashes(sharehashes):
2990         """
2991hunk ./src/allmydata/interfaces.py 653
2992+        @param sharehashes=dict
2993         Add the share hash chain to the share.
2994         """
2995 
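For orientation, a writer providing this interface is typically driven in
roughly this order (an illustrative call sequence, not part of this patch;
block and salt uploads via put_block() are omitted)::

    writer.set_checkstring(seqnum, root_hash, salt)  # or a packed checkstring
    writer.put_encprivkey(encrypted_private_key)
    writer.put_blockhashes(block_hash_tree)          # list of hashes
    writer.put_sharehashes(share_hash_chain)         # dict of shnum -> hash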
2996hunk ./src/allmydata/interfaces.py 739
2997     def get_extension_params():
2998         """Return the extension parameters in the URI"""
2999 
3000-    def set_extension_params():
3001+    def set_extension_params(params):
3002         """Set the extension parameters that should be in the URI"""
3003 
3004 class IDirectoryURI(Interface):
3005hunk ./src/allmydata/interfaces.py 879
3006         writer-visible data using this writekey.
3007         """
3008 
3009-    # TODO: Can this be overwrite instead of replace?
3010-    def replace(new_contents):
3011-        """Replace the contents of the mutable file, provided that no other
3012+    def overwrite(new_contents):
3013+        """Overwrite the contents of the mutable file, provided that no other
3014         node has published (or is attempting to publish, concurrently) a
3015         newer version of the file than this one.
3016 
3017hunk ./src/allmydata/interfaces.py 1346
3018         is empty, the metadata will be an empty dictionary.
3019         """
3020 
3021-    def set_uri(name, writecap, readcap=None, metadata=None, overwrite=True):
3022+    def set_uri(name, writecap, readcap, metadata=None, overwrite=True):
3023         """I add a child (by writecap+readcap) at the specific name. I return
3024         a Deferred that fires when the operation finishes. If overwrite= is
3025         True, I will replace any existing child of the same name, otherwise
3026hunk ./src/allmydata/interfaces.py 1745
3027     Block Hash, and the encoding parameters, both of which must be included
3028     in the URI.
3029 
3030-    I do not choose shareholders, that is left to the IUploader. I must be
3031-    given a dict of RemoteReferences to storage buckets that are ready and
3032-    willing to receive data.
3033+    I do not choose shareholders, that is left to the IUploader.
3034     """
3035 
3036     def set_size(size):
3037hunk ./src/allmydata/interfaces.py 1752
3038         """Specify the number of bytes that will be encoded. This must be
3039         performed before get_serialized_params() can be called.
3040         """
3041+
3042     def set_params(params):
3043         """Override the default encoding parameters. 'params' is a tuple of
3044         (k,d,n), where 'k' is the number of required shares, 'd' is the
3045hunk ./src/allmydata/interfaces.py 1848
3046     download, validate, decode, and decrypt data from them, writing the
3047     results to an output file.
3048 
3049-    I do not locate the shareholders, that is left to the IDownloader. I must
3050-    be given a dict of RemoteReferences to storage buckets that are ready to
3051-    send data.
3052+    I do not locate the shareholders, that is left to the IDownloader.
3053     """
3054 
3055     def setup(outfile):
3056hunk ./src/allmydata/interfaces.py 1950
3057         resuming an interrupted upload (where we need to compute the
3058         plaintext hashes, but don't need the redundant encrypted data)."""
3059 
3060-    def get_plaintext_hashtree_leaves(first, last, num_segments):
3061-        """OBSOLETE; Get the leaf nodes of a merkle hash tree over the
3062-        plaintext segments, i.e. get the tagged hashes of the given segments.
3063-        The segment size is expected to be generated by the
3064-        IEncryptedUploadable before any plaintext is read or ciphertext
3065-        produced, so that the segment hashes can be generated with only a
3066-        single pass.
3067-
3068-        This returns a Deferred that fires with a sequence of hashes, using:
3069-
3070-         tuple(segment_hashes[first:last])
3071-
3072-        'num_segments' is used to assert that the number of segments that the
3073-        IEncryptedUploadable handled matches the number of segments that the
3074-        encoder was expecting.
3075-
3076-        This method must not be called until the final byte has been read
3077-        from read_encrypted(). Once this method is called, read_encrypted()
3078-        can never be called again.
3079-        """
3080-
3081-    def get_plaintext_hash():
3082-        """OBSOLETE; Get the hash of the whole plaintext.
3083-
3084-        This returns a Deferred that fires with a tagged SHA-256 hash of the
3085-        whole plaintext, obtained from hashutil.plaintext_hash(data).
3086-        """
3087-
3088     def close():
3089         """Just like IUploadable.close()."""
3090 
3091hunk ./src/allmydata/interfaces.py 2144
3092         returns a Deferred that fires with an IUploadResults instance, from
3093         which the URI of the file can be obtained as results.uri ."""
3094 
3095-    def upload_ssk(write_capability, new_version, uploadable):
3096-        """TODO: how should this work?"""
3097-
3098 class ICheckable(Interface):
3099     def check(monitor, verify=False, add_lease=False):
3100         """Check up on my health, optionally repairing any problems.
3101hunk ./src/allmydata/interfaces.py 2505
3102 
3103 class IRepairResults(Interface):
3104     """I contain the results of a repair operation."""
3105-    def get_successful(self):
3106+    def get_successful():
3107         """Returns a boolean: True if the repair made the file healthy, False
3108         if not. Repair failure generally indicates a file that has been
3109         damaged beyond repair."""
3110hunk ./src/allmydata/interfaces.py 2577
3111     Tahoe process will typically have a single NodeMaker, but unit tests may
3112     create simplified/mocked forms for testing purposes.
3113     """
3114-    def create_from_cap(writecap, readcap=None, **kwargs):
3115+    def create_from_cap(writecap, readcap=None, deep_immutable=False, name=u"<unknown name>"):
3116         """I create an IFilesystemNode from the given writecap/readcap. I can
3117         only provide nodes for existing file/directory objects: use my other
3118         methods to create new objects. I return synchronously."""
3119hunk ./src/allmydata/monitor.py 30
3120 
3121     # the following methods are provided for the operation code
3122 
3123-    def is_cancelled(self):
3124+    def is_cancelled():
3125         """Returns True if the operation has been cancelled. If True,
3126         operation code should stop creating new work, and attempt to stop any
3127         work already in progress."""
3128hunk ./src/allmydata/monitor.py 35
3129 
3130-    def raise_if_cancelled(self):
3131+    def raise_if_cancelled():
3132         """Raise OperationCancelledError if the operation has been cancelled.
3133         Operation code that has a robust error-handling path can simply call
3134         this periodically."""
3135hunk ./src/allmydata/monitor.py 40
3136 
3137-    def set_status(self, status):
3138+    def set_status(status):
3139         """Sets the Monitor's 'status' object to an arbitrary value.
3140         Different operations will store different sorts of status information
3141         here. Operation code should use get+modify+set sequences to update
3142hunk ./src/allmydata/monitor.py 46
3143         this."""
3144 
3145-    def get_status(self):
3146+    def get_status():
3147         """Return the status object. If the operation failed, this will be a
3148         Failure instance."""
3149 
3150hunk ./src/allmydata/monitor.py 50
3151-    def finish(self, status):
3152+    def finish(status):
3153         """Call this when the operation is done, successful or not. The
3154         Monitor's lifetime is influenced by the completion of the operation
3155         it is monitoring. The Monitor's 'status' value will be set with the
3156hunk ./src/allmydata/monitor.py 63
3157 
3158     # the following methods are provided for the initiator of the operation
3159 
3160-    def is_finished(self):
3161+    def is_finished():
3162         """Return a boolean, True if the operation is done (whether
3163         successful or failed), False if it is still running."""
3164 
3165hunk ./src/allmydata/monitor.py 67
3166-    def when_done(self):
3167+    def when_done():
3168         """Return a Deferred that fires when the operation is complete. It
3169         will fire with the operation status, the same value as returned by
3170         get_status()."""
3171hunk ./src/allmydata/monitor.py 72
3172 
3173-    def cancel(self):
3174+    def cancel():
3175         """Cancel the operation as soon as possible. is_cancelled() will
3176         start returning True after this is called."""
3177 
3178hunk ./src/allmydata/mutable/filenode.py 753
3179         self._writekey = writekey
3180         self._serializer = defer.succeed(None)
3181 
3182-
3183     def get_sequence_number(self):
3184         """
3185         Get the sequence number of the mutable version that I represent.
3186hunk ./src/allmydata/mutable/filenode.py 759
3187         """
3188         return self._version[0] # verinfo[0] == the sequence number
3189 
3190+    def get_servermap(self):
3191+        return self._servermap
3192 
3193hunk ./src/allmydata/mutable/filenode.py 762
3194-    # TODO: Terminology?
3195     def get_writekey(self):
3196         """
3197         I return a writekey or None if I don't have a writekey.
3198hunk ./src/allmydata/mutable/filenode.py 768
3199         """
3200         return self._writekey
3201 
3202-
3203     def set_downloader_hints(self, hints):
3204         """
3205         I set the downloader hints.
3206hunk ./src/allmydata/mutable/filenode.py 776
3207 
3208         self._downloader_hints = hints
3209 
3210-
3211     def get_downloader_hints(self):
3212         """
3213         I return the downloader hints.
3214hunk ./src/allmydata/mutable/filenode.py 782
3215         """
3216         return self._downloader_hints
3217 
3218-
3219     def overwrite(self, new_contents):
3220         """
3221         I overwrite the contents of this mutable file version with the
3222hunk ./src/allmydata/mutable/filenode.py 791
3223 
3224         return self._do_serialized(self._overwrite, new_contents)
3225 
3226-
3227     def _overwrite(self, new_contents):
3228         assert IMutableUploadable.providedBy(new_contents)
3229         assert self._servermap.last_update_mode == MODE_WRITE
3230hunk ./src/allmydata/mutable/filenode.py 797
3231 
3232         return self._upload(new_contents)
3233 
3234-
3235     def modify(self, modifier, backoffer=None):
3236         """I use a modifier callback to apply a change to the mutable file.
3237         I implement the following pseudocode::
3238hunk ./src/allmydata/mutable/filenode.py 841
3239 
3240         return self._do_serialized(self._modify, modifier, backoffer)
3241 
3242-
3243     def _modify(self, modifier, backoffer):
3244         if backoffer is None:
3245             backoffer = BackoffAgent().delay
3246hunk ./src/allmydata/mutable/filenode.py 846
3247         return self._modify_and_retry(modifier, backoffer, True)
3248 
3249-
3250     def _modify_and_retry(self, modifier, backoffer, first_time):
3251         """
3252         I try to apply modifier to the contents of this version of the
3253hunk ./src/allmydata/mutable/filenode.py 878
3254         d.addErrback(_retry)
3255         return d
3256 
3257-
3258     def _modify_once(self, modifier, first_time):
3259         """
3260         I attempt to apply a modifier to the contents of the mutable
3261hunk ./src/allmydata/mutable/filenode.py 913
3262         d.addCallback(_apply)
3263         return d
3264 
3265-
3266     def is_readonly(self):
3267         """
3268         I return True if this MutableFileVersion provides no write
3269hunk ./src/allmydata/mutable/filenode.py 921
3270         """
3271         return self._writekey is None
3272 
3273-
3274     def is_mutable(self):
3275         """
3276         I return True, since mutable files are always mutable by
3277hunk ./src/allmydata/mutable/filenode.py 928
3278         """
3279         return True
3280 
3281-
3282     def get_storage_index(self):
3283         """
3284         I return the storage index of the reference that I encapsulate.
3285hunk ./src/allmydata/mutable/filenode.py 934
3286         """
3287         return self._storage_index
3288 
3289-
3290     def get_size(self):
3291         """
3292         I return the length, in bytes, of this readable object.
3293hunk ./src/allmydata/mutable/filenode.py 940
3294         """
3295         return self._servermap.size_of_version(self._version)
3296 
3297-
3298     def download_to_data(self, fetch_privkey=False):
3299         """
3300         I return a Deferred that fires with the contents of this
3301hunk ./src/allmydata/mutable/filenode.py 951
3302         d.addCallback(lambda mc: "".join(mc.chunks))
3303         return d
3304 
3305-
3306     def _try_to_download_data(self):
3307         """
3308         I am an unserialized cousin of download_to_data; I am called
3309hunk ./src/allmydata/mutable/filenode.py 963
3310         d.addCallback(lambda mc: "".join(mc.chunks))
3311         return d
3312 
3313-
3314     def read(self, consumer, offset=0, size=None, fetch_privkey=False):
3315         """
3316         I read a portion (possibly all) of the mutable file that I
3317hunk ./src/allmydata/mutable/filenode.py 971
3318         return self._do_serialized(self._read, consumer, offset, size,
3319                                    fetch_privkey)
3320 
3321-
3322     def _read(self, consumer, offset=0, size=None, fetch_privkey=False):
3323         """
3324         I am the serialized companion of read.
3325hunk ./src/allmydata/mutable/filenode.py 981
3326         d = r.download(consumer, offset, size)
3327         return d
3328 
3329-
3330     def _do_serialized(self, cb, *args, **kwargs):
3331         # note: to avoid deadlock, this callable is *not* allowed to invoke
3332         # other serialized methods within this (or any other)
3333hunk ./src/allmydata/mutable/filenode.py 999
3334         self._serializer.addErrback(log.err)
3335         return d
3336 
3337-
3338     def _upload(self, new_contents):
3339         #assert self._pubkey, "update_servermap must be called before publish"
3340         p = Publish(self._node, self._storage_broker, self._servermap)
3341hunk ./src/allmydata/mutable/filenode.py 1009
3342         d.addCallback(self._did_upload, new_contents.get_size())
3343         return d
3344 
3345-
3346     def _did_upload(self, res, size):
3347         self._most_recent_size = size
3348         return res
3349hunk ./src/allmydata/mutable/filenode.py 1029
3350         """
3351         return self._do_serialized(self._update, data, offset)
3352 
3353-
3354     def _update(self, data, offset):
3355         """
3356         I update the mutable file version represented by this particular
3357hunk ./src/allmydata/mutable/filenode.py 1058
3358         d.addCallback(self._build_uploadable_and_finish, data, offset)
3359         return d
3360 
3361-
3362     def _do_modify_update(self, data, offset):
3363         """
3364         I perform a file update by modifying the contents of the file
3365hunk ./src/allmydata/mutable/filenode.py 1073
3366             return new
3367         return self._modify(m, None)
3368 
3369-
3370     def _do_update_update(self, data, offset):
3371         """
3372         I start the Servermap update that gets us the data we need to
3373hunk ./src/allmydata/mutable/filenode.py 1108
3374         return self._update_servermap(update_range=(start_segment,
3375                                                     end_segment))
3376 
3377-
3378     def _decode_and_decrypt_segments(self, ignored, data, offset):
3379         """
3380         After the servermap update, I take the encrypted and encoded
3381hunk ./src/allmydata/mutable/filenode.py 1148
3382         d3 = defer.succeed(blockhashes)
3383         return deferredutil.gatherResults([d1, d2, d3])
3384 
3385-
3386     def _build_uploadable_and_finish(self, segments_and_bht, data, offset):
3387         """
3388         After the process has the plaintext segments, I build the
3389hunk ./src/allmydata/mutable/filenode.py 1163
3390         p = Publish(self._node, self._storage_broker, self._servermap)
3391         return p.update(u, offset, segments_and_bht[2], self._version)
3392 
3393-
3394     def _update_servermap(self, mode=MODE_WRITE, update_range=None):
3395         """
3396         I update the servermap. I return a Deferred that fires when the
3397hunk ./src/allmydata/storage/common.py 1
3398-
3399-import os.path
3400 from allmydata.util import base32
3401 
3402 class DataTooLargeError(Exception):
3403hunk ./src/allmydata/storage/common.py 5
3404     pass
3405+
3406 class UnknownMutableContainerVersionError(Exception):
3407     pass
3408hunk ./src/allmydata/storage/common.py 8
3409+
3410 class UnknownImmutableContainerVersionError(Exception):
3411     pass
3412 
3413hunk ./src/allmydata/storage/common.py 18
3414 
3415 def si_a2b(ascii_storageindex):
3416     return base32.a2b(ascii_storageindex)
3417-
3418-def storage_index_to_dir(storageindex):
3419-    sia = si_b2a(storageindex)
3420-    return os.path.join(sia[:2], sia)
3421hunk ./src/allmydata/storage/crawler.py 2
3422 
3423-import os, time, struct
3424+import time, struct
3425 import cPickle as pickle
3426 from twisted.internet import reactor
3427 from twisted.application import service
3428hunk ./src/allmydata/storage/crawler.py 6
3429+
3430+from allmydata.util.assertutil import precondition
3431+from allmydata.interfaces import IStorageBackend
3432 from allmydata.storage.common import si_b2a
3433hunk ./src/allmydata/storage/crawler.py 10
3434-from allmydata.util import fileutil
3435+
3436 
3437 class TimeSliceExceeded(Exception):
3438     pass
3439hunk ./src/allmydata/storage/crawler.py 15
3440 
3441+
3442 class ShareCrawler(service.MultiService):
3443hunk ./src/allmydata/storage/crawler.py 17
3444-    """A ShareCrawler subclass is attached to a StorageServer, and
3445-    periodically walks all of its shares, processing each one in some
3446-    fashion. This crawl is rate-limited, to reduce the IO burden on the host,
3447-    since large servers can easily have a terabyte of shares, in several
3448-    million files, which can take hours or days to read.
3449+    """
3450+    An instance of a subclass of ShareCrawler is attached to a storage
3451+    backend, and periodically walks the backend's shares, processing them
3452+    in some fashion. This crawl is rate-limited to reduce the I/O burden on
3453+    the host, since large servers can easily have a terabyte of shares in
3454+    several million files, which can take hours or days to read.
3455 
3456     Once the crawler starts a cycle, it will proceed at a rate limited by the
3457     allowed_cpu_percentage= and cpu_slice= parameters: yielding the reactor
3458hunk ./src/allmydata/storage/crawler.py 33
3459     long enough to ensure that 'minimum_cycle_time' elapses between the start
3460     of two consecutive cycles.
3461 
3462-    We assume that the normal upload/download/get_buckets traffic of a tahoe
3463+    We assume that the normal upload/download/DYHB traffic of a Tahoe-LAFS
3464     grid will cause the prefixdir contents to be mostly cached in the kernel,
3465hunk ./src/allmydata/storage/crawler.py 35
3466-    or that the number of buckets in each prefixdir will be small enough to
3467-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
3468-    buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
3469+    or that the number of sharesets in each prefixdir will be small enough to
3470+    load quickly. A 1TB allmydata.com server was measured to have 2.56 million
3471+    sharesets, spread into the 1024 prefixdirs, with about 2500 sharesets per
3472     prefix. On this server, each prefixdir took 130ms-200ms to list the first
3473     time, and 17ms to list the second time.
3474 
3475hunk ./src/allmydata/storage/crawler.py 41
3476-    To use a crawler, create a subclass which implements the process_bucket()
3477-    method. It will be called with a prefixdir and a base32 storage index
3478-    string. process_bucket() must run synchronously. Any keys added to
3479-    self.state will be preserved. Override add_initial_state() to set up
3480-    initial state keys. Override finished_cycle() to perform additional
3481-    processing when the cycle is complete. Any status that the crawler
3482-    produces should be put in the self.state dictionary. Status renderers
3483-    (like a web page which describes the accomplishments of your crawler)
3484-    will use crawler.get_state() to retrieve this dictionary; they can
3485-    present the contents as they see fit.
3486+    To implement a crawler, create a subclass that implements the
3487+    process_shareset() method. It will be called with a prefixdir and an
3488+    object providing the IShareSet interface. process_shareset() must run
3489+    synchronously. Any keys added to self.state will be preserved. Override
3490+    add_initial_state() to set up initial state keys. Override
3491+    finished_cycle() to perform additional processing when the cycle is
3492+    complete. Any status that the crawler produces should be put in the
3493+    self.state dictionary. Status renderers (like a web page describing the
3494+    accomplishments of your crawler) will use crawler.get_state() to retrieve
3495+    this dictionary; they can present the contents as they see fit.
3496 
3497hunk ./src/allmydata/storage/crawler.py 52
3498-    Then create an instance, with a reference to a StorageServer and a
3499-    filename where it can store persistent state. The statefile is used to
3500-    keep track of how far around the ring the process has travelled, as well
3501-    as timing history to allow the pace to be predicted and controlled. The
3502-    statefile will be updated and written to disk after each time slice (just
3503-    before the crawler yields to the reactor), and also after each cycle is
3504-    finished, and also when stopService() is called. Note that this means
3505-    that a crawler which is interrupted with SIGKILL while it is in the
3506-    middle of a time slice will lose progress: the next time the node is
3507-    started, the crawler will repeat some unknown amount of work.
3508+    Then create an instance, with a reference to a backend object providing
3509+    the IStorageBackend interface, and a filename where it can store
3510+    persistent state. The statefile is used to keep track of how far around
3511+    the ring the process has travelled, as well as timing history to allow
3512+    the pace to be predicted and controlled. The statefile will be updated
3513+    and written to disk after each time slice (just before the crawler yields
3514+    to the reactor), and also after each cycle is finished, and also when
3515+    stopService() is called. Note that this means that a crawler that is
3516+    interrupted with SIGKILL while it is in the middle of a time slice will
3517+    lose progress: the next time the node is started, the crawler will repeat
3518+    some unknown amount of work.
3519 
3520     The crawler instance must be started with startService() before it will
3521hunk ./src/allmydata/storage/crawler.py 65
3522-    do any work. To make it stop doing work, call stopService().
3523+    do any work. To make it stop doing work, call stopService(). A crawler
3524+    is usually a child service of a StorageServer, although it should not
3525+    depend on that.
3526+
3527+    For historical reasons, some dictionary key names use the term "bucket"
3528+    for what is now preferably called a "shareset" (the set of shares that a
3529+    server holds under a given storage index).
3530     """
3531 
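To make the subclassing contract concrete, the smallest useful crawler might
look like this (an illustrative sketch, not part of this patch)::

    class ShareCountingCrawler(ShareCrawler):
        # Count how many shares this backend holds, once per cycle.
        def add_initial_state(self):
            self.state.setdefault("share-count", 0)

        def started_cycle(self, cycle):
            self.state["share-count"] = 0

        def process_shareset(self, cycle, prefix, shareset):
            # get_shares() generates the IStoredShare objects in the set
            self.state["share-count"] += len(list(shareset.get_shares()))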
3532     slow_start = 300 # don't start crawling for 5 minutes after startup
3533hunk ./src/allmydata/storage/crawler.py 80
3534     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
3535     minimum_cycle_time = 300 # don't run a cycle faster than this
3536 
3537-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
3538+    def __init__(self, backend, statefp, allowed_cpu_percentage=None):
3539+        precondition(IStorageBackend.providedBy(backend), backend)
3540         service.MultiService.__init__(self)
3541hunk ./src/allmydata/storage/crawler.py 83
3542+        self.backend = backend
3543+        self.statefp = statefp
3544         if allowed_cpu_percentage is not None:
3545             self.allowed_cpu_percentage = allowed_cpu_percentage
3546hunk ./src/allmydata/storage/crawler.py 87
3547-        self.server = server
3548-        self.sharedir = server.sharedir
3549-        self.statefile = statefile
3550         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
3551                          for i in range(2**10)]
3552         self.prefixes.sort()
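
As a sanity check on the prefix enumeration above (illustrative, not part of
this patch): two base32 characters carry 10 bits, so there are exactly 2**10
distinct prefixes::

    import struct
    from allmydata.util import base32
    prefixes = set(base32.b2a(struct.pack(">H", i << 6))[:2]
                   for i in range(2**10))
    assert len(prefixes) == 1024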
3553hunk ./src/allmydata/storage/crawler.py 91
3554         self.timer = None
3555-        self.bucket_cache = (None, [])
3556+        self.shareset_cache = (None, [])
3557         self.current_sleep_time = None
3558         self.next_wake_time = None
3559         self.last_prefix_finished_time = None
3560hunk ./src/allmydata/storage/crawler.py 154
3561                 left = len(self.prefixes) - self.last_complete_prefix_index
3562                 remaining = left * self.last_prefix_elapsed_time
3563                 # TODO: remainder of this prefix: we need to estimate the
3564-                # per-bucket time, probably by measuring the time spent on
3565-                # this prefix so far, divided by the number of buckets we've
3566+                # per-shareset time, probably by measuring the time spent on
3567+                # this prefix so far, divided by the number of sharesets we've
3568                 # processed.
3569             d["estimated-cycle-complete-time-left"] = remaining
3570             # it's possible to call get_progress() from inside a crawler's
3571hunk ./src/allmydata/storage/crawler.py 175
3572         state dictionary.
3573 
3574         If we are not currently sleeping (i.e. get_state() was called from
3575-        inside the process_prefixdir, process_bucket, or finished_cycle()
3576+        inside the process_prefixdir, process_shareset, or finished_cycle()
3577         methods, or if startService has not yet been called on this crawler),
3578         these two keys will be None.
3579 
3580hunk ./src/allmydata/storage/crawler.py 188
3581     def load_state(self):
3582         # we use this to store state for both the crawler's internals and
3583         # anything the subclass-specific code needs. The state is stored
3584-        # after each bucket is processed, after each prefixdir is processed,
3585+        # after each shareset is processed, after each prefixdir is processed,
3586         # and after a cycle is complete. The internal keys we use are:
3587         #  ["version"]: int, always 1
3588         #  ["last-cycle-finished"]: int, or None if we have not yet finished
3589hunk ./src/allmydata/storage/crawler.py 202
3590         #                            are sleeping between cycles, or if we
3591         #                            have not yet finished any prefixdir since
3592         #                            a cycle was started
3593-        #  ["last-complete-bucket"]: str, base32 storage index bucket name
3594-        #                            of the last bucket to be processed, or
3595-        #                            None if we are sleeping between cycles
3596+        #  ["last-complete-bucket"]: str, base32 storage index of the last
3597+        #                            shareset to be processed, or None if we
3598+        #                            are sleeping between cycles
3599         try:
3600hunk ./src/allmydata/storage/crawler.py 206
3601-            f = open(self.statefile, "rb")
3602-            state = pickle.load(f)
3603-            f.close()
3604+            state = pickle.loads(self.statefp.getContent())
3605         except EnvironmentError:
3606             state = {"version": 1,
3607                      "last-cycle-finished": None,
3608hunk ./src/allmydata/storage/crawler.py 242
3609         else:
3610             last_complete_prefix = self.prefixes[lcpi]
3611         self.state["last-complete-prefix"] = last_complete_prefix
3612-        tmpfile = self.statefile + ".tmp"
3613-        f = open(tmpfile, "wb")
3614-        pickle.dump(self.state, f)
3615-        f.close()
3616-        fileutil.move_into_place(tmpfile, self.statefile)
3617+        self.statefp.setContent(pickle.dumps(self.state))
3618 
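The explicit tmpfile-and-rename sequence was dropped above because
FilePath.setContent already writes to a temporary sibling file and renames it
into place. An equivalent standalone sketch (the path name is hypothetical)::

    import cPickle as pickle
    from twisted.python.filepath import FilePath

    statefp = FilePath("crawler.state")              # hypothetical location
    statefp.setContent(pickle.dumps({"version": 1}))
    state = pickle.loads(statefp.getContent())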
3619     def startService(self):
3620         # arrange things to look like we were just sleeping, so
3621hunk ./src/allmydata/storage/crawler.py 284
3622         sleep_time = (this_slice / self.allowed_cpu_percentage) - this_slice
3623         # if the math gets weird, or a timequake happens, don't sleep
3624         # forever. Note that this means that, while a cycle is running, we
3625-        # will process at least one bucket every 5 minutes, no matter how
3626-        # long that bucket takes.
3627+        # will process at least one shareset every 5 minutes, no matter how
3628+        # long that shareset takes.
3629         sleep_time = max(0.0, min(sleep_time, 299))
3630         if finished_cycle:
3631             # how long should we sleep between cycles? Don't run faster than
3632hunk ./src/allmydata/storage/crawler.py 315
3633         for i in range(self.last_complete_prefix_index+1, len(self.prefixes)):
3634             # if we want to yield earlier, just raise TimeSliceExceeded()
3635             prefix = self.prefixes[i]
3636-            prefixdir = os.path.join(self.sharedir, prefix)
3637-            if i == self.bucket_cache[0]:
3638-                buckets = self.bucket_cache[1]
3639+            if i == self.shareset_cache[0]:
3640+                sharesets = self.shareset_cache[1]
3641             else:
3642hunk ./src/allmydata/storage/crawler.py 318
3643-                try:
3644-                    buckets = os.listdir(prefixdir)
3645-                    buckets.sort()
3646-                except EnvironmentError:
3647-                    buckets = []
3648-                self.bucket_cache = (i, buckets)
3649-            self.process_prefixdir(cycle, prefix, prefixdir,
3650-                                   buckets, start_slice)
3651+                sharesets = self.backend.get_sharesets_for_prefix(prefix)
3652+                self.shareset_cache = (i, sharesets)
3653+            self.process_prefixdir(cycle, prefix, sharesets, start_slice)
3654             self.last_complete_prefix_index = i
3655 
3656             now = time.time()
3657hunk ./src/allmydata/storage/crawler.py 345
3658         self.finished_cycle(cycle)
3659         self.save_state()
3660 
3661-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
3662-        """This gets a list of bucket names (i.e. storage index strings,
3663+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
3664+        """
3665+        This gets a list of shareset names (i.e. storage index strings,
3666         base32-encoded) in sorted order.
3667 
3668         You can override this if your crawler doesn't care about the actual
3669hunk ./src/allmydata/storage/crawler.py 352
3670         shares, for example a crawler which merely keeps track of how many
3671-        buckets are being managed by this server.
3672+        sharesets are being managed by this server.
3673 
3674hunk ./src/allmydata/storage/crawler.py 354
3675-        Subclasses which *do* care about actual bucket should leave this
3676-        method along, and implement process_bucket() instead.
3677+        Subclasses that *do* care about the actual sharesets should leave this
3678+        method alone, and implement process_shareset() instead.
3679         """
3680 
3681hunk ./src/allmydata/storage/crawler.py 358
3682-        for bucket in buckets:
3683-            if bucket <= self.state["last-complete-bucket"]:
3684+        for shareset in sharesets:
3685+            base32si = shareset.get_storage_index_string()
3686+            if base32si <= self.state["last-complete-bucket"]:
3687                 continue
3688hunk ./src/allmydata/storage/crawler.py 362
3689-            self.process_bucket(cycle, prefix, prefixdir, bucket)
3690-            self.state["last-complete-bucket"] = bucket
3691+            self.process_shareset(cycle, prefix, shareset)
3692+            self.state["last-complete-bucket"] = base32si
3693             if time.time() >= start_slice + self.cpu_slice:
3694                 raise TimeSliceExceeded()
3695 
3696hunk ./src/allmydata/storage/crawler.py 370
3697     # the remaining methods are explicitly for subclasses to implement.
3698 
3699     def started_cycle(self, cycle):
3700-        """Notify a subclass that the crawler is about to start a cycle.
3701+        """
3702+        Notify a subclass that the crawler is about to start a cycle.
3703 
3704         This method is for subclasses to override. No upcall is necessary.
3705         """
3706hunk ./src/allmydata/storage/crawler.py 377
3707         pass
3708 
3709-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
3710-        """Examine a single bucket. Subclasses should do whatever they want
3711+    def process_shareset(self, cycle, prefix, shareset):
3712+        """
3713+        Examine a single shareset. Subclasses should do whatever they want
3714         to do to the shares therein, then update self.state as necessary.
3715 
3716         If the crawler is never interrupted by SIGKILL, this method will be
3717hunk ./src/allmydata/storage/crawler.py 383
3718-        called exactly once per share (per cycle). If it *is* interrupted,
3719+        called exactly once per shareset (per cycle). If it *is* interrupted,
3720         then the next time the node is started, some amount of work will be
3721         duplicated, according to when self.save_state() was last called. By
3722         default, save_state() is called at the end of each timeslice, and
3723hunk ./src/allmydata/storage/crawler.py 391
3724 
3725         To reduce the chance of duplicate work (i.e. to avoid adding multiple
3726         records to a database), you can call save_state() at the end of your
3727-        process_bucket() method. This will reduce the maximum duplicated work
3728-        to one bucket per SIGKILL. It will also add overhead, probably 1-20ms
3729-        per bucket (and some disk writes), which will count against your
3730-        allowed_cpu_percentage, and which may be considerable if
3731-        process_bucket() runs quickly.
3732+        process_shareset() method. This will reduce the maximum duplicated
3733+        work to one shareset per SIGKILL. It will also add overhead, probably
3734+        1-20ms per shareset (and some disk writes), which will count against
3735+        your allowed_cpu_percentage, and which may be considerable if
3736+        process_shareset() runs quickly.
3737 
3738         This method is for subclasses to override. No upcall is necessary.
3739         """
3740hunk ./src/allmydata/storage/crawler.py 402
3741         pass
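
For illustration, a minimal subclass sketch of the trade-off described in the
docstring above, bounding duplicated work to at most one shareset per SIGKILL
(`record_in_database` is a hypothetical helper, not part of this patch):

    class DBLoggingCrawler(ShareCrawler):
        def process_shareset(self, cycle, prefix, shareset):
            # record_in_database is hypothetical; get_storage_index_string()
            # is the shareset accessor used elsewhere in this patch.
            record_in_database(shareset.get_storage_index_string())
            # Persist progress now, at the cost of roughly 1-20ms per shareset.
            self.save_state()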
3742 
3743     def finished_prefix(self, cycle, prefix):
3744-        """Notify a subclass that the crawler has just finished processing a
3745-        prefix directory (all buckets with the same two-character/10bit
3746+        """
3747+        Notify a subclass that the crawler has just finished processing a
3748+        prefix directory (all sharesets with the same two-character/10-bit
3749         prefix). To impose a limit on how much work might be duplicated by a
3750         SIGKILL that occurs during a timeslice, you can call
3751         self.save_state() here, but be aware that it may represent a
3752hunk ./src/allmydata/storage/crawler.py 415
3753         pass
3754 
3755     def finished_cycle(self, cycle):
3756-        """Notify subclass that a cycle (one complete traversal of all
3757+        """
3758+        Notify subclass that a cycle (one complete traversal of all
3759         prefixdirs) has just finished. 'cycle' is the number of the cycle
3760         that just finished. This method should perform summary work and
3761         update self.state to publish information to status displays.
3762hunk ./src/allmydata/storage/crawler.py 433
3763         pass
3764 
3765     def yielding(self, sleep_time):
3766-        """The crawler is about to sleep for 'sleep_time' seconds. This
3767+        """
3768+        The crawler is about to sleep for 'sleep_time' seconds. This
3769         method is mostly for the convenience of unit tests.
3770 
3771         This method is for subclasses to override. No upcall is necessary.
3772hunk ./src/allmydata/storage/crawler.py 443
3773 
3774 
3775 class BucketCountingCrawler(ShareCrawler):
3776-    """I keep track of how many buckets are being managed by this server.
3777-    This is equivalent to the number of distributed files and directories for
3778-    which I am providing storage. The actual number of files+directories in
3779-    the full grid is probably higher (especially when there are more servers
3780-    than 'N', the number of generated shares), because some files+directories
3781-    will have shares on other servers instead of me. Also note that the
3782-    number of buckets will differ from the number of shares in small grids,
3783-    when more than one share is placed on a single server.
3784+    """
3785+    I keep track of how many sharesets, each corresponding to a storage index,
3786+    are being managed by this server. This is equivalent to the number of
3787+    distributed files and directories for which I am providing storage. The
3788+    actual number of files and directories in the full grid is probably higher
3789+    (especially when there are more servers than 'N', the number of generated
3790+    shares), because some files and directories will have shares on other
3791+    servers instead of me. Also note that the number of sharesets will differ
3792+    from the number of shares in small grids, when more than one share is
3793+    placed on a single server.
3794     """
3795 
3796     minimum_cycle_time = 60*60 # we don't need this more than once an hour
3797hunk ./src/allmydata/storage/crawler.py 457
3798 
3799-    def __init__(self, server, statefile, num_sample_prefixes=1):
3800-        ShareCrawler.__init__(self, server, statefile)
3801+    def __init__(self, backend, statefp, num_sample_prefixes=1):
3802+        ShareCrawler.__init__(self, backend, statefp)
3803         self.num_sample_prefixes = num_sample_prefixes
3804 
3805     def add_initial_state(self):
3806hunk ./src/allmydata/storage/crawler.py 471
3807         self.state.setdefault("last-complete-bucket-count", None)
3808         self.state.setdefault("storage-index-samples", {})
3809 
3810-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
3811+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
3812         # we override process_prefixdir() because we don't want to look at
3813hunk ./src/allmydata/storage/crawler.py 473
3814-        # the individual buckets. We'll save state after each one. On my
3815+        # the individual sharesets. We'll save state after each one. On my
3816         # laptop, a mostly-empty storage server can process about 70
3817         # prefixdirs in a 1.0s slice.
3818         if cycle not in self.state["bucket-counts"]:
3819hunk ./src/allmydata/storage/crawler.py 478
3820             self.state["bucket-counts"][cycle] = {}
3821-        self.state["bucket-counts"][cycle][prefix] = len(buckets)
3822+        self.state["bucket-counts"][cycle][prefix] = len(sharesets)
3823         if prefix in self.prefixes[:self.num_sample_prefixes]:
3824hunk ./src/allmydata/storage/crawler.py 480
3825-            self.state["storage-index-samples"][prefix] = (cycle, buckets)
3826+            self.state["storage-index-samples"][prefix] = (cycle, [s.get_storage_index_string() for s in sharesets])
3827 
3828     def finished_cycle(self, cycle):
3829         last_counts = self.state["bucket-counts"].get(cycle, [])
3830hunk ./src/allmydata/storage/crawler.py 486
3831         if len(last_counts) == len(self.prefixes):
3832             # great, we have a whole cycle.
3833-            num_buckets = sum(last_counts.values())
3834-            self.state["last-complete-bucket-count"] = num_buckets
3835+            num_sharesets = sum(last_counts.values())
3836+            self.state["last-complete-bucket-count"] = num_sharesets
3837             # get rid of old counts
3838             for old_cycle in list(self.state["bucket-counts"].keys()):
3839                 if old_cycle != cycle:
3840hunk ./src/allmydata/storage/crawler.py 494
3841                     del self.state["bucket-counts"][old_cycle]
3842         # get rid of old samples too
3843         for prefix in list(self.state["storage-index-samples"].keys()):
3844-            old_cycle,buckets = self.state["storage-index-samples"][prefix]
3845+            old_cycle, storage_indices = self.state["storage-index-samples"][prefix]
3846             if old_cycle != cycle:
3847                 del self.state["storage-index-samples"][prefix]
3848hunk ./src/allmydata/storage/crawler.py 497
3849-
3850hunk ./src/allmydata/storage/expirer.py 1
3851-import time, os, pickle, struct
3852+
3853+import time, pickle, struct
3854+from twisted.python import log as twlog
3855+
3856 from allmydata.storage.crawler import ShareCrawler
3857hunk ./src/allmydata/storage/expirer.py 6
3858-from allmydata.storage.shares import get_share_file
3859-from allmydata.storage.common import UnknownMutableContainerVersionError, \
3860+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
3861      UnknownImmutableContainerVersionError
3862hunk ./src/allmydata/storage/expirer.py 8
3863-from twisted.python import log as twlog
3864+
3865 
3866 class LeaseCheckingCrawler(ShareCrawler):
3867     """I examine the leases on all shares, determining which are still valid
3868hunk ./src/allmydata/storage/expirer.py 17
3869     removed.
3870 
3871     I collect statistics on the leases and make these available to a web
3872-    status page, including::
3873+    status page, including:
3874 
3875     Space recovered during this cycle-so-far:
3876      actual (only if expiration_enabled=True):
3877hunk ./src/allmydata/storage/expirer.py 21
3878-      num-buckets, num-shares, sum of share sizes, real disk usage
3879+      num-storage-indices, num-shares, sum of share sizes, real disk usage
3880       ('real disk usage' means we use stat(fn).st_blocks*512 and include any
3881        space used by the directory)
3882      what it would have been with the original lease expiration time
3883hunk ./src/allmydata/storage/expirer.py 32
3884 
3885     Space recovered during the last 10 cycles  <-- saved in separate pickle
3886 
3887-    Shares/buckets examined:
3888+    Shares/storage-indices examined:
3889      this cycle-so-far
3890      prediction of rest of cycle
3891      during last 10 cycles <-- separate pickle
3892hunk ./src/allmydata/storage/expirer.py 42
3893     Histogram of leases-per-share:
3894      this-cycle-to-date
3895      last 10 cycles <-- separate pickle
3896-    Histogram of lease ages, buckets = 1day
3897+    Histogram of lease ages, in 1-day bins
3898      cycle-to-date
3899      last 10 cycles <-- separate pickle
3900 
3901hunk ./src/allmydata/storage/expirer.py 53
3902     slow_start = 360 # wait 6 minutes after startup
3903     minimum_cycle_time = 12*60*60 # not more than twice per day
3904 
3905-    def __init__(self, server, statefile, historyfile,
3906-                 expiration_enabled, mode,
3907-                 override_lease_duration, # used if expiration_mode=="age"
3908-                 cutoff_date, # used if expiration_mode=="cutoff-date"
3909-                 sharetypes):
3910-        self.historyfile = historyfile
3911-        self.expiration_enabled = expiration_enabled
3912-        self.mode = mode
3913+    def __init__(self, backend, statefp, historyfp, expiration_policy):
3914+        # ShareCrawler.__init__ will call add_initial_state, so self.historyfp has to be set first.
3915+        self.historyfp = historyfp
3916+        ShareCrawler.__init__(self, backend, statefp)
3917+
3918+        self.expiration_enabled = expiration_policy['enabled']
3919+        self.mode = expiration_policy['mode']
3920         self.override_lease_duration = None
3921         self.cutoff_date = None
3922         if self.mode == "age":
3923hunk ./src/allmydata/storage/expirer.py 63
3924-            assert isinstance(override_lease_duration, (int, type(None)))
3925-            self.override_lease_duration = override_lease_duration # seconds
3926+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
3927+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
3928         elif self.mode == "cutoff-date":
3929hunk ./src/allmydata/storage/expirer.py 66
3930-            assert isinstance(cutoff_date, int) # seconds-since-epoch
3931-            assert cutoff_date is not None
3932-            self.cutoff_date = cutoff_date
3933+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
3934+            self.cutoff_date = expiration_policy['cutoff_date']
3935         else:
3936hunk ./src/allmydata/storage/expirer.py 69
3937-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
3938-        self.sharetypes_to_expire = sharetypes
3939-        ShareCrawler.__init__(self, server, statefile)
3940+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
3941+        self.sharetypes_to_expire = expiration_policy['sharetypes']
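
An illustrative expiration_policy dict (an editor's sketch, not taken from this
patch) showing the keys this constructor reads, here in "cutoff-date" mode; the
"age"-mode defaults appear as DEFAULT_EXPIRATION_POLICY in server.py below:

    expiration_policy = {
        'enabled': True,
        'mode': 'cutoff-date',
        'override_lease_duration': None,   # consulted only in "age" mode
        'cutoff_date': 1316995200,         # seconds since epoch, per the assert above
        'sharetypes': ('immutable',),      # expire immutable shares only
    }
    checker = LeaseCheckingCrawler(backend, statefp, historyfp, expiration_policy)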
3942 
3943     def add_initial_state(self):
3944         # we fill ["cycle-to-date"] here (even though they will be reset in
3945hunk ./src/allmydata/storage/expirer.py 84
3946             self.state["cycle-to-date"].setdefault(k, so_far[k])
3947 
3948         # initialize history
3949-        if not os.path.exists(self.historyfile):
3950+        if not self.historyfp.exists():
3951             history = {} # cyclenum -> dict
3952hunk ./src/allmydata/storage/expirer.py 86
3953-            f = open(self.historyfile, "wb")
3954-            pickle.dump(history, f)
3955-            f.close()
3956+            self.historyfp.setContent(pickle.dumps(history))
3957 
3958     def create_empty_cycle_dict(self):
3959         recovered = self.create_empty_recovered_dict()
3960hunk ./src/allmydata/storage/expirer.py 99
3961 
3962     def create_empty_recovered_dict(self):
3963         recovered = {}
3964+        # "buckets" is ambiguous; here it means the number of sharesets (one per storage index per server)
3965         for a in ("actual", "original", "configured", "examined"):
3966             for b in ("buckets", "shares", "sharebytes", "diskbytes"):
3967                 recovered[a+"-"+b] = 0
3968hunk ./src/allmydata/storage/expirer.py 110
3969     def started_cycle(self, cycle):
3970         self.state["cycle-to-date"] = self.create_empty_cycle_dict()
3971 
3972-    def stat(self, fn):
3973-        return os.stat(fn)
3974-
3975-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
3976-        bucketdir = os.path.join(prefixdir, storage_index_b32)
3977-        s = self.stat(bucketdir)
3978+    def process_shareset(self, cycle, prefix, shareset):
3979         would_keep_shares = []
3980         wks = None
3981hunk ./src/allmydata/storage/expirer.py 113
3982+        sharetype = None
3983 
3984hunk ./src/allmydata/storage/expirer.py 115
3985-        for fn in os.listdir(bucketdir):
3986-            try:
3987-                shnum = int(fn)
3988-            except ValueError:
3989-                continue # non-numeric means not a sharefile
3990-            sharefile = os.path.join(bucketdir, fn)
3991+        for share in shareset.get_shares():
3992+            sharetype = share.sharetype
3993             try:
3994hunk ./src/allmydata/storage/expirer.py 118
3995-                wks = self.process_share(sharefile)
3996+                wks = self.process_share(share)
3997             except (UnknownMutableContainerVersionError,
3998                     UnknownImmutableContainerVersionError,
3999                     struct.error):
4000hunk ./src/allmydata/storage/expirer.py 122
4001-                twlog.msg("lease-checker error processing %s" % sharefile)
4002+                twlog.msg("lease-checker error processing %r" % (share,))
4003                 twlog.err()
4004hunk ./src/allmydata/storage/expirer.py 124
4005-                which = (storage_index_b32, shnum)
4006+                which = (si_b2a(share.storageindex), share.get_shnum())
4007                 self.state["cycle-to-date"]["corrupt-shares"].append(which)
4008                 wks = (1, 1, 1, "unknown")
4009             would_keep_shares.append(wks)
4010hunk ./src/allmydata/storage/expirer.py 129
4011 
4012-        sharetype = None
4013+        container_type = None
4014         if wks:
4015hunk ./src/allmydata/storage/expirer.py 131
4016-            # use the last share's sharetype as the buckettype
4017-            sharetype = wks[3]
4018+            # use the last share's sharetype as the container type
4019+            container_type = wks[3]
4020         rec = self.state["cycle-to-date"]["space-recovered"]
4021         self.increment(rec, "examined-buckets", 1)
4022         if sharetype:
4023hunk ./src/allmydata/storage/expirer.py 136
4024-            self.increment(rec, "examined-buckets-"+sharetype, 1)
4025+            self.increment(rec, "examined-buckets-"+container_type, 1)
4026+
4027+        container_diskbytes = shareset.get_overhead()
4028 
4029hunk ./src/allmydata/storage/expirer.py 140
4030-        try:
4031-            bucket_diskbytes = s.st_blocks * 512
4032-        except AttributeError:
4033-            bucket_diskbytes = 0 # no stat().st_blocks on windows
4034         if sum([wks[0] for wks in would_keep_shares]) == 0:
4035hunk ./src/allmydata/storage/expirer.py 141
4036-            self.increment_bucketspace("original", bucket_diskbytes, sharetype)
4037+            self.increment_container_space("original", container_diskbytes, container_type)
4038         if sum([wks[1] for wks in would_keep_shares]) == 0:
4039hunk ./src/allmydata/storage/expirer.py 143
4040-            self.increment_bucketspace("configured", bucket_diskbytes, sharetype)
4041+            self.increment_container_space("configured", container_diskbytes, container_type)
4042         if sum([wks[2] for wks in would_keep_shares]) == 0:
4043hunk ./src/allmydata/storage/expirer.py 145
4044-            self.increment_bucketspace("actual", bucket_diskbytes, sharetype)
4045+            self.increment_container_space("actual", container_diskbytes, container_type)
4046 
4047hunk ./src/allmydata/storage/expirer.py 147
4048-    def process_share(self, sharefilename):
4049-        # first, find out what kind of a share it is
4050-        sf = get_share_file(sharefilename)
4051-        sharetype = sf.sharetype
4052+    def process_share(self, share):
4053+        sharetype = share.sharetype
4054         now = time.time()
4055hunk ./src/allmydata/storage/expirer.py 150
4056-        s = self.stat(sharefilename)
4057+        sharebytes = share.get_size()
4058+        diskbytes = share.get_used_space()
4059 
4060         num_leases = 0
4061         num_valid_leases_original = 0
4062hunk ./src/allmydata/storage/expirer.py 158
4063         num_valid_leases_configured = 0
4064         expired_leases_configured = []
4065 
4066-        for li in sf.get_leases():
4067+        for li in share.get_leases():
4068             num_leases += 1
4069             original_expiration_time = li.get_expiration_time()
4070             grant_renew_time = li.get_grant_renew_time_time()
4071hunk ./src/allmydata/storage/expirer.py 171
4072 
4073             #  expired-or-not according to our configured age limit
4074             expired = False
4075-            if self.mode == "age":
4076-                age_limit = original_expiration_time
4077-                if self.override_lease_duration is not None:
4078-                    age_limit = self.override_lease_duration
4079-                if age > age_limit:
4080-                    expired = True
4081-            else:
4082-                assert self.mode == "cutoff-date"
4083-                if grant_renew_time < self.cutoff_date:
4084-                    expired = True
4085-            if sharetype not in self.sharetypes_to_expire:
4086-                expired = False
4087+            if sharetype in self.sharetypes_to_expire:
4088+                if self.mode == "age":
4089+                    age_limit = original_expiration_time
4090+                    if self.override_lease_duration is not None:
4091+                        age_limit = self.override_lease_duration
4092+                    if age > age_limit:
4093+                        expired = True
4094+                else:
4095+                    assert self.mode == "cutoff-date"
4096+                    if grant_renew_time < self.cutoff_date:
4097+                        expired = True
4098 
4099             if expired:
4100                 expired_leases_configured.append(li)
4101hunk ./src/allmydata/storage/expirer.py 190
4102 
4103         so_far = self.state["cycle-to-date"]
4104         self.increment(so_far["leases-per-share-histogram"], num_leases, 1)
4105-        self.increment_space("examined", s, sharetype)
4106+        self.increment_space("examined", sharebytes, diskbytes, sharetype)
4107 
4108         would_keep_share = [1, 1, 1, sharetype]
4109 
4110hunk ./src/allmydata/storage/expirer.py 196
4111         if self.expiration_enabled:
4112             for li in expired_leases_configured:
4113-                sf.cancel_lease(li.cancel_secret)
4114+                share.cancel_lease(li.cancel_secret)
4115 
4116         if num_valid_leases_original == 0:
4117             would_keep_share[0] = 0
4118hunk ./src/allmydata/storage/expirer.py 200
4119-            self.increment_space("original", s, sharetype)
4120+            self.increment_space("original", sharebytes, diskbytes, sharetype)
4121 
4122         if num_valid_leases_configured == 0:
4123             would_keep_share[1] = 0
4124hunk ./src/allmydata/storage/expirer.py 204
4125-            self.increment_space("configured", s, sharetype)
4126+            self.increment_space("configured", sharebytes, diskbytes, sharetype)
4127             if self.expiration_enabled:
4128                 would_keep_share[2] = 0
4129hunk ./src/allmydata/storage/expirer.py 207
4130-                self.increment_space("actual", s, sharetype)
4131+                self.increment_space("actual", sharebytes, diskbytes, sharetype)
4132 
4133         return would_keep_share
4134 
4135hunk ./src/allmydata/storage/expirer.py 211
4136-    def increment_space(self, a, s, sharetype):
4137-        sharebytes = s.st_size
4138-        try:
4139-            # note that stat(2) says that st_blocks is 512 bytes, and that
4140-            # st_blksize is "optimal file sys I/O ops blocksize", which is
4141-            # independent of the block-size that st_blocks uses.
4142-            diskbytes = s.st_blocks * 512
4143-        except AttributeError:
4144-            # the docs say that st_blocks is only on linux. I also see it on
4145-            # MacOS. But it isn't available on windows.
4146-            diskbytes = sharebytes
4147+    def increment_space(self, a, sharebytes, diskbytes, sharetype):
4148         so_far_sr = self.state["cycle-to-date"]["space-recovered"]
4149         self.increment(so_far_sr, a+"-shares", 1)
4150         self.increment(so_far_sr, a+"-sharebytes", sharebytes)
4151hunk ./src/allmydata/storage/expirer.py 221
4152             self.increment(so_far_sr, a+"-sharebytes-"+sharetype, sharebytes)
4153             self.increment(so_far_sr, a+"-diskbytes-"+sharetype, diskbytes)
4154 
4155-    def increment_bucketspace(self, a, bucket_diskbytes, sharetype):
4156+    def increment_container_space(self, a, container_diskbytes, container_type):
4157         rec = self.state["cycle-to-date"]["space-recovered"]
4158hunk ./src/allmydata/storage/expirer.py 223
4159-        self.increment(rec, a+"-diskbytes", bucket_diskbytes)
4160+        self.increment(rec, a+"-diskbytes", container_diskbytes)
4161         self.increment(rec, a+"-buckets", 1)
4162hunk ./src/allmydata/storage/expirer.py 225
4163-        if sharetype:
4164-            self.increment(rec, a+"-diskbytes-"+sharetype, bucket_diskbytes)
4165-            self.increment(rec, a+"-buckets-"+sharetype, 1)
4166+        if container_type:
4167+            self.increment(rec, a+"-diskbytes-"+container_type, container_diskbytes)
4168+            self.increment(rec, a+"-buckets-"+container_type, 1)
4169 
4170     def increment(self, d, k, delta=1):
4171         if k not in d:
4172hunk ./src/allmydata/storage/expirer.py 281
4173         # copy() needs to become a deepcopy
4174         h["space-recovered"] = s["space-recovered"].copy()
4175 
4176-        history = pickle.load(open(self.historyfile, "rb"))
4177+        history = pickle.loads(self.historyfp.getContent())
4178         history[cycle] = h
4179         while len(history) > 10:
4180             oldcycles = sorted(history.keys())
4181hunk ./src/allmydata/storage/expirer.py 286
4182             del history[oldcycles[0]]
4183-        f = open(self.historyfile, "wb")
4184-        pickle.dump(history, f)
4185-        f.close()
4186+        self.historyfp.setContent(pickle.dumps(history))
4187 
4188     def get_state(self):
4189         """In addition to the crawler state described in
4190hunk ./src/allmydata/storage/expirer.py 355
4191         progress = self.get_progress()
4192 
4193         state = ShareCrawler.get_state(self) # does a shallow copy
4194-        history = pickle.load(open(self.historyfile, "rb"))
4195+        history = pickle.loads(self.historyfp.getContent())
4196         state["history"] = history
4197 
4198         if not progress["cycle-in-progress"]:
4199hunk ./src/allmydata/storage/lease.py 3
4200 import struct, time
4201 
4202+
4203+class NonExistentLeaseError(Exception):
4204+    pass
4205+
4206 class LeaseInfo:
4207     def __init__(self, owner_num=None, renew_secret=None, cancel_secret=None,
4208                  expiration_time=None, nodeid=None):
4209hunk ./src/allmydata/storage/lease.py 21
4210 
4211     def get_expiration_time(self):
4212         return self.expiration_time
4213+
4214     def get_grant_renew_time_time(self):
4215         # hack, based upon fixed 31day expiration period
4216         return self.expiration_time - 31*24*60*60
4217hunk ./src/allmydata/storage/lease.py 25
4218+
4219     def get_age(self):
4220         return time.time() - self.get_grant_renew_time_time()
4221 
4222hunk ./src/allmydata/storage/lease.py 36
4223          self.expiration_time) = struct.unpack(">L32s32sL", data)
4224         self.nodeid = None
4225         return self
4226+
4227     def to_immutable_data(self):
4228         return struct.pack(">L32s32sL",
4229                            self.owner_num,
4230hunk ./src/allmydata/storage/lease.py 49
4231                            int(self.expiration_time),
4232                            self.renew_secret, self.cancel_secret,
4233                            self.nodeid)
4234+
4235     def from_mutable_data(self, data):
4236         (self.owner_num,
4237          self.expiration_time,
4238hunk ./src/allmydata/storage/server.py 1
4239-import os, re, weakref, struct, time
4240+import weakref, time
4241 
4242 from foolscap.api import Referenceable
4243 from twisted.application import service
4244hunk ./src/allmydata/storage/server.py 7
4245 
4246 from zope.interface import implements
4247-from allmydata.interfaces import RIStorageServer, IStatsProducer
4248-from allmydata.util import fileutil, idlib, log, time_format
4249+from allmydata.interfaces import RIStorageServer, IStatsProducer, IStorageBackend
4250+from allmydata.util.assertutil import precondition
4251+from allmydata.util import idlib, log
4252 import allmydata # for __full_version__
4253 
4254hunk ./src/allmydata/storage/server.py 12
4255-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
4256-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
4257+from allmydata.storage.common import si_a2b, si_b2a
4258+[si_a2b]  # hush pyflakes
4259 from allmydata.storage.lease import LeaseInfo
4260hunk ./src/allmydata/storage/server.py 15
4261-from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
4262-     create_mutable_sharefile
4263-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
4264-from allmydata.storage.crawler import BucketCountingCrawler
4265 from allmydata.storage.expirer import LeaseCheckingCrawler
4266hunk ./src/allmydata/storage/server.py 16
4267-
4268-# storage/
4269-# storage/shares/incoming
4270-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
4271-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
4272-# storage/shares/$START/$STORAGEINDEX
4273-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
4274-
4275-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
4276-# base-32 chars).
4277-
4278-# $SHARENUM matches this regex:
4279-NUM_RE=re.compile("^[0-9]+$")
4280-
4281+from allmydata.storage.crawler import BucketCountingCrawler
4282 
4283 
4284 class StorageServer(service.MultiService, Referenceable):
4285hunk ./src/allmydata/storage/server.py 21
4286     implements(RIStorageServer, IStatsProducer)
4287+
4288     name = 'storage'
4289     LeaseCheckerClass = LeaseCheckingCrawler
4290hunk ./src/allmydata/storage/server.py 24
4291+    DEFAULT_EXPIRATION_POLICY = {
4292+        'enabled': False,
4293+        'mode': 'age',
4294+        'override_lease_duration': None,
4295+        'cutoff_date': None,
4296+        'sharetypes': ('mutable', 'immutable'),
4297+    }
4298 
4299hunk ./src/allmydata/storage/server.py 32
4300-    def __init__(self, storedir, nodeid, reserved_space=0,
4301-                 discard_storage=False, readonly_storage=False,
4302+    def __init__(self, serverid, backend, statedir,
4303                  stats_provider=None,
4304hunk ./src/allmydata/storage/server.py 34
4305-                 expiration_enabled=False,
4306-                 expiration_mode="age",
4307-                 expiration_override_lease_duration=None,
4308-                 expiration_cutoff_date=None,
4309-                 expiration_sharetypes=("mutable", "immutable")):
4310+                 expiration_policy=None):
4311         service.MultiService.__init__(self)
4312hunk ./src/allmydata/storage/server.py 36
4313-        assert isinstance(nodeid, str)
4314-        assert len(nodeid) == 20
4315-        self.my_nodeid = nodeid
4316-        self.storedir = storedir
4317-        sharedir = os.path.join(storedir, "shares")
4318-        fileutil.make_dirs(sharedir)
4319-        self.sharedir = sharedir
4320-        # we don't actually create the corruption-advisory dir until necessary
4321-        self.corruption_advisory_dir = os.path.join(storedir,
4322-                                                    "corruption-advisories")
4323-        self.reserved_space = int(reserved_space)
4324-        self.no_storage = discard_storage
4325-        self.readonly_storage = readonly_storage
4326+        precondition(IStorageBackend.providedBy(backend), backend)
4327+        precondition(isinstance(serverid, str), serverid)
4328+        precondition(len(serverid) == 20, serverid)
4329+
4330+        self._serverid = serverid
4331         self.stats_provider = stats_provider
4332         if self.stats_provider:
4333             self.stats_provider.register_producer(self)
4334hunk ./src/allmydata/storage/server.py 44
4335-        self.incomingdir = os.path.join(sharedir, 'incoming')
4336-        self._clean_incomplete()
4337-        fileutil.make_dirs(self.incomingdir)
4338         self._active_writers = weakref.WeakKeyDictionary()
4339hunk ./src/allmydata/storage/server.py 45
4340+        self.backend = backend
4341+        self.backend.setServiceParent(self)
4342+        self._statedir = statedir
4343         log.msg("StorageServer created", facility="tahoe.storage")
4344 
4345hunk ./src/allmydata/storage/server.py 50
4346-        if reserved_space:
4347-            if self.get_available_space() is None:
4348-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
4349-                        umin="0wZ27w", level=log.UNUSUAL)
4350-
4351         self.latencies = {"allocate": [], # immutable
4352                           "write": [],
4353                           "close": [],
4354hunk ./src/allmydata/storage/server.py 61
4355                           "renew": [],
4356                           "cancel": [],
4357                           }
4358-        self.add_bucket_counter()
4359-
4360-        statefile = os.path.join(self.storedir, "lease_checker.state")
4361-        historyfile = os.path.join(self.storedir, "lease_checker.history")
4362-        klass = self.LeaseCheckerClass
4363-        self.lease_checker = klass(self, statefile, historyfile,
4364-                                   expiration_enabled, expiration_mode,
4365-                                   expiration_override_lease_duration,
4366-                                   expiration_cutoff_date,
4367-                                   expiration_sharetypes)
4368-        self.lease_checker.setServiceParent(self)
4369+        self._setup_bucket_counter()
4370+        self._setup_lease_checker(expiration_policy or self.DEFAULT_EXPIRATION_POLICY)
4371 
4372     def __repr__(self):
4373hunk ./src/allmydata/storage/server.py 65
4374-        return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
4375+        return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self._serverid),)
4376 
4377hunk ./src/allmydata/storage/server.py 67
4378-    def add_bucket_counter(self):
4379-        statefile = os.path.join(self.storedir, "bucket_counter.state")
4380-        self.bucket_counter = BucketCountingCrawler(self, statefile)
4381+    def _setup_bucket_counter(self):
4382+        statefp = self._statedir.child("bucket_counter.state")
4383+        self.bucket_counter = BucketCountingCrawler(self.backend, statefp)
4384         self.bucket_counter.setServiceParent(self)
4385 
4386hunk ./src/allmydata/storage/server.py 72
4387+    def _setup_lease_checker(self, expiration_policy):
4388+        statefp = self._statedir.child("lease_checker.state")
4389+        historyfp = self._statedir.child("lease_checker.history")
4390+        self.lease_checker = self.LeaseCheckerClass(self.backend, statefp, historyfp, expiration_policy)
4391+        self.lease_checker.setServiceParent(self)
4392+
4393     def count(self, name, delta=1):
4394         if self.stats_provider:
4395             self.stats_provider.count("storage_server." + name, delta)
4396hunk ./src/allmydata/storage/server.py 92
4397         """Return a dict, indexed by category, that contains a dict of
4398         latency numbers for each category. If there are sufficient samples
4399         for unambiguous interpretation, each dict will contain the
4400-        following keys: mean, 01_0_percentile, 10_0_percentile,
4401+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
4402         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
4403         99_0_percentile, 99_9_percentile.  If there are insufficient
4404         samples for a given percentile to be interpreted unambiguously
4405hunk ./src/allmydata/storage/server.py 114
4406             else:
4407                 stats["mean"] = None
4408 
4409-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
4410-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
4411-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
4412+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
4413+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
4414+                             (0.01, "01_0_percentile", 100),  (0.99, "99_0_percentile", 100),\
4415                              (0.999, "99_9_percentile", 1000)]
4416 
4417             for percentile, percentilestring, minnumtoobserve in orderstatlist:
4418hunk ./src/allmydata/storage/server.py 133
4419             kwargs["facility"] = "tahoe.storage"
4420         return log.msg(*args, **kwargs)
4421 
4422-    def _clean_incomplete(self):
4423-        fileutil.rm_dir(self.incomingdir)
4424+    def get_serverid(self):
4425+        return self._serverid
4426 
4427     def get_stats(self):
4428         # remember: RIStatsProvider requires that our return dict
4429hunk ./src/allmydata/storage/server.py 138
4430-        # contains numeric values.
4431+        # contains numeric or None values.
4432         stats = { 'storage_server.allocated': self.allocated_size(), }
4433hunk ./src/allmydata/storage/server.py 140
4434-        stats['storage_server.reserved_space'] = self.reserved_space
4435         for category,ld in self.get_latencies().items():
4436             for name,v in ld.items():
4437                 stats['storage_server.latencies.%s.%s' % (category, name)] = v
4438hunk ./src/allmydata/storage/server.py 144
4439 
4440-        try:
4441-            disk = fileutil.get_disk_stats(self.sharedir, self.reserved_space)
4442-            writeable = disk['avail'] > 0
4443-
4444-            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
4445-            stats['storage_server.disk_total'] = disk['total']
4446-            stats['storage_server.disk_used'] = disk['used']
4447-            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
4448-            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
4449-            stats['storage_server.disk_avail'] = disk['avail']
4450-        except AttributeError:
4451-            writeable = True
4452-        except EnvironmentError:
4453-            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
4454-            writeable = False
4455-
4456-        if self.readonly_storage:
4457-            stats['storage_server.disk_avail'] = 0
4458-            writeable = False
4459+        self.backend.fill_in_space_stats(stats)
4460 
4461hunk ./src/allmydata/storage/server.py 146
4462-        stats['storage_server.accepting_immutable_shares'] = int(writeable)
4463         s = self.bucket_counter.get_state()
4464         bucket_count = s.get("last-complete-bucket-count")
4465         if bucket_count:
4466hunk ./src/allmydata/storage/server.py 153
4467         return stats
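
For reference, a hedged sketch of what a disk backend's fill_in_space_stats()
might look like, reconstructed from the code removed above (self._sharedir and
self._reserved_space are assumed attribute names; fileutil.get_disk_stats is
the helper the removed code used):

    def fill_in_space_stats(self, stats):
        try:
            disk = fileutil.get_disk_stats(self._sharedir, self._reserved_space)
            writeable = disk['avail'] > 0
            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
            stats['storage_server.disk_total'] = disk['total']
            stats['storage_server.disk_used'] = disk['used']
            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
            stats['storage_server.disk_avail'] = disk['avail']
        except EnvironmentError:
            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
            writeable = False
        stats['storage_server.accepting_immutable_shares'] = int(writeable)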
4468 
4469     def get_available_space(self):
4470-        """Returns available space for share storage in bytes, or None if no
4471-        API to get this information is available."""
4472-
4473-        if self.readonly_storage:
4474-            return 0
4475-        return fileutil.get_available_space(self.sharedir, self.reserved_space)
4476+        return self.backend.get_available_space()
4477 
4478     def allocated_size(self):
4479         space = 0
4480hunk ./src/allmydata/storage/server.py 162
4481         return space
4482 
4483     def remote_get_version(self):
4484-        remaining_space = self.get_available_space()
4485+        remaining_space = self.backend.get_available_space()
4486         if remaining_space is None:
4487             # We're on a platform that has no API to get disk stats.
4488             remaining_space = 2**64
4489hunk ./src/allmydata/storage/server.py 178
4490                     }
4491         return version
4492 
4493-    def remote_allocate_buckets(self, storage_index,
4494+    def remote_allocate_buckets(self, storageindex,
4495                                 renew_secret, cancel_secret,
4496                                 sharenums, allocated_size,
4497                                 canary, owner_num=0):
4498hunk ./src/allmydata/storage/server.py 182
4499+        # cancel_secret is no longer used.
4500         # owner_num is not for clients to set, but rather it should be
4501hunk ./src/allmydata/storage/server.py 184
4502-        # curried into the PersonalStorageServer instance that is dedicated
4503-        # to a particular owner.
4504+        # curried into a StorageServer instance dedicated to a particular
4505+        # owner.
4506         start = time.time()
4507         self.count("allocate")
4508hunk ./src/allmydata/storage/server.py 188
4509-        alreadygot = set()
4510         bucketwriters = {} # k: shnum, v: BucketWriter
4511hunk ./src/allmydata/storage/server.py 189
4512-        si_dir = storage_index_to_dir(storage_index)
4513-        si_s = si_b2a(storage_index)
4514 
4515hunk ./src/allmydata/storage/server.py 190
4516+        si_s = si_b2a(storageindex)
4517         log.msg("storage: allocate_buckets %s" % si_s)
4518 
4519hunk ./src/allmydata/storage/server.py 193
4520-        # in this implementation, the lease information (including secrets)
4521-        # goes into the share files themselves. It could also be put into a
4522-        # separate database. Note that the lease should not be added until
4523-        # the BucketWriter has been closed.
4524+        # Note that the lease should not be added until the BucketWriter
4525+        # has been closed.
4526         expire_time = time.time() + 31*24*60*60
4527hunk ./src/allmydata/storage/server.py 196
4528-        lease_info = LeaseInfo(owner_num,
4529-                               renew_secret, cancel_secret,
4530-                               expire_time, self.my_nodeid)
4531+        lease_info = LeaseInfo(owner_num, renew_secret,
4532+                               expire_time, self._serverid)
4533 
4534         max_space_per_bucket = allocated_size
4535 
4536hunk ./src/allmydata/storage/server.py 201
4537-        remaining_space = self.get_available_space()
4538+        remaining_space = self.backend.get_available_space()
4539         limited = remaining_space is not None
4540         if limited:
4541hunk ./src/allmydata/storage/server.py 204
4542-            # this is a bit conservative, since some of this allocated_size()
4543-            # has already been written to disk, where it will show up in
4544+            # This is a bit conservative, since some of this allocated_size()
4545+            # has already been written to the backend, where it will show up in
4546             # get_available_space.
4547             remaining_space -= self.allocated_size()
4548hunk ./src/allmydata/storage/server.py 208
4549-        # self.readonly_storage causes remaining_space <= 0
4550+            # If the backend is read-only, remaining_space will be <= 0.
4551+
4552+        shareset = self.backend.get_shareset(storageindex)
4553 
4554hunk ./src/allmydata/storage/server.py 212
4555-        # fill alreadygot with all shares that we have, not just the ones
4556+        # Fill alreadygot with all shares that we have, not just the ones
4557         # they asked about: this will save them a lot of work. Add or update
4558         # leases for all of them: if they want us to hold shares for this
4559hunk ./src/allmydata/storage/server.py 215
4560-        # file, they'll want us to hold leases for this file.
4561-        for (shnum, fn) in self._get_bucket_shares(storage_index):
4562-            alreadygot.add(shnum)
4563-            sf = ShareFile(fn)
4564-            sf.add_or_renew_lease(lease_info)
4565+        # file, they'll want us to hold leases for all the shares of it.
4566+        #
4567+        # XXX should we be making the assumption here that lease info is
4568+        # duplicated in all shares?
4569+        alreadygot = set()
4570+        for share in shareset.get_shares():
4571+            share.add_or_renew_lease(lease_info)
4572+            alreadygot.add(share.get_shnum())
4573 
4574hunk ./src/allmydata/storage/server.py 224
4575-        for shnum in sharenums:
4576-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
4577-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
4578-            if os.path.exists(finalhome):
4579-                # great! we already have it. easy.
4580-                pass
4581-            elif os.path.exists(incominghome):
4582+        for shnum in sharenums - alreadygot:
4583+            if shareset.has_incoming(shnum):
4584                 # Note that we don't create BucketWriters for shnums that
4585                 # have a partial share (in incoming/), so if a second upload
4586                 # occurs while the first is still in progress, the second
4587hunk ./src/allmydata/storage/server.py 232
4588                 # uploader will use different storage servers.
4589                 pass
4590             elif (not limited) or (remaining_space >= max_space_per_bucket):
4591-                # ok! we need to create the new share file.
4592-                bw = BucketWriter(self, incominghome, finalhome,
4593-                                  max_space_per_bucket, lease_info, canary)
4594-                if self.no_storage:
4595-                    bw.throw_out_all_data = True
4596+                bw = shareset.make_bucket_writer(self, shnum, max_space_per_bucket,
4597+                                                 lease_info, canary)
4598                 bucketwriters[shnum] = bw
4599                 self._active_writers[bw] = 1
4600                 if limited:
4601hunk ./src/allmydata/storage/server.py 239
4602                     remaining_space -= max_space_per_bucket
4603             else:
4604-                # bummer! not enough space to accept this bucket
4605+                # Bummer! Not enough space to accept this share.
4606                 pass
4607 
4608hunk ./src/allmydata/storage/server.py 242
4609-        if bucketwriters:
4610-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
4611-
4612         self.add_latency("allocate", time.time() - start)
4613         return alreadygot, bucketwriters
4614 
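
A hedged usage sketch of the allocation path above, as the upload code would
drive it (server, si, and the secrets are assumed to be in scope; in real use
the canary is a foolscap Referenceable, and the calls arrive over foolscap):

    alreadygot, writers = server.remote_allocate_buckets(
        si, renew_secret, cancel_secret,
        sharenums=set([0, 1, 2]), allocated_size=1000, canary=canary)
    for shnum, bw in writers.items():
        bw.remote_write(0, "share %d data" % shnum)
        bw.remote_close()   # the lease is added when the BucketWriter is closed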
4615hunk ./src/allmydata/storage/server.py 245
4616-    def _iter_share_files(self, storage_index):
4617-        for shnum, filename in self._get_bucket_shares(storage_index):
4618-            f = open(filename, 'rb')
4619-            header = f.read(32)
4620-            f.close()
4621-            if header[:32] == MutableShareFile.MAGIC:
4622-                sf = MutableShareFile(filename, self)
4623-                # note: if the share has been migrated, the renew_lease()
4624-                # call will throw an exception, with information to help the
4625-                # client update the lease.
4626-            elif header[:4] == struct.pack(">L", 1):
4627-                sf = ShareFile(filename)
4628-            else:
4629-                continue # non-sharefile
4630-            yield sf
4631-
4632-    def remote_add_lease(self, storage_index, renew_secret, cancel_secret,
4633+    def remote_add_lease(self, storageindex, renew_secret, cancel_secret,
4634                          owner_num=1):
4635hunk ./src/allmydata/storage/server.py 247
4636+        # cancel_secret is no longer used.
4637         start = time.time()
4638         self.count("add-lease")
4639         new_expire_time = time.time() + 31*24*60*60
4640hunk ./src/allmydata/storage/server.py 251
4641-        lease_info = LeaseInfo(owner_num,
4642-                               renew_secret, cancel_secret,
4643-                               new_expire_time, self.my_nodeid)
4644-        for sf in self._iter_share_files(storage_index):
4645-            sf.add_or_renew_lease(lease_info)
4646-        self.add_latency("add-lease", time.time() - start)
4647-        return None
4648+        lease_info = LeaseInfo(owner_num, renew_secret,
4649+                               expiration_time=new_expire_time, nodeid=self._serverid)
4650 
4651hunk ./src/allmydata/storage/server.py 254
4652-    def remote_renew_lease(self, storage_index, renew_secret):
4653+        try:
4654+            self.backend.get_shareset(storageindex).add_or_renew_lease(lease_info)
4655+        finally:
4656+            self.add_latency("add-lease", time.time() - start)
4657+
4658+    def remote_renew_lease(self, storageindex, renew_secret):
4659         start = time.time()
4660         self.count("renew")
4661hunk ./src/allmydata/storage/server.py 262
4662-        new_expire_time = time.time() + 31*24*60*60
4663-        found_buckets = False
4664-        for sf in self._iter_share_files(storage_index):
4665-            found_buckets = True
4666-            sf.renew_lease(renew_secret, new_expire_time)
4667-        self.add_latency("renew", time.time() - start)
4668-        if not found_buckets:
4669-            raise IndexError("no such lease to renew")
4670+
4671+        try:
4672+            shareset = self.backend.get_shareset(storageindex)
4673+            new_expiration_time = start + 31*24*60*60   # one month from now
4674+            shareset.renew_lease(renew_secret, new_expiration_time)
4675+        finally:
4676+            self.add_latency("renew", time.time() - start)
4677 
4678     def bucket_writer_closed(self, bw, consumed_size):
4679         if self.stats_provider:
4680hunk ./src/allmydata/storage/server.py 275
4681             self.stats_provider.count('storage_server.bytes_added', consumed_size)
4682         del self._active_writers[bw]
4683 
4684-    def _get_bucket_shares(self, storage_index):
4685-        """Return a list of (shnum, pathname) tuples for files that hold
4686-        shares for this storage_index. In each tuple, 'shnum' will always be
4687-        the integer form of the last component of 'pathname'."""
4688-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4689-        try:
4690-            for f in os.listdir(storagedir):
4691-                if NUM_RE.match(f):
4692-                    filename = os.path.join(storagedir, f)
4693-                    yield (int(f), filename)
4694-        except OSError:
4695-            # Commonly caused by there being no buckets at all.
4696-            pass
4697-
4698-    def remote_get_buckets(self, storage_index):
4699+    def remote_get_buckets(self, storageindex):
4700         start = time.time()
4701         self.count("get")
4702hunk ./src/allmydata/storage/server.py 278
4703-        si_s = si_b2a(storage_index)
4704+        si_s = si_b2a(storageindex)
4705         log.msg("storage: get_buckets %s" % si_s)
4706         bucketreaders = {} # k: sharenum, v: BucketReader
4707hunk ./src/allmydata/storage/server.py 281
4708-        for shnum, filename in self._get_bucket_shares(storage_index):
4709-            bucketreaders[shnum] = BucketReader(self, filename,
4710-                                                storage_index, shnum)
4711-        self.add_latency("get", time.time() - start)
4712-        return bucketreaders
4713 
4714hunk ./src/allmydata/storage/server.py 282
4715-    def get_leases(self, storage_index):
4716-        """Provide an iterator that yields all of the leases attached to this
4717-        bucket. Each lease is returned as a LeaseInfo instance.
4718+        try:
4719+            shareset = self.backend.get_shareset(storageindex)
4720+            for share in shareset.get_shares():
4721+                bucketreaders[share.get_shnum()] = shareset.make_bucket_reader(self, share)
4722+            return bucketreaders
4723+        finally:
4724+            self.add_latency("get", time.time() - start)
4725 
4726hunk ./src/allmydata/storage/server.py 290
4727-        This method is not for client use.
4728+    def get_leases(self, storageindex):
4729         """
4730hunk ./src/allmydata/storage/server.py 292
4731+        Provide an iterator that yields all of the leases attached to this
4732+        bucket. Each lease is returned as a LeaseInfo instance.
4733 
4734hunk ./src/allmydata/storage/server.py 295
4735-        # since all shares get the same lease data, we just grab the leases
4736-        # from the first share
4737-        try:
4738-            shnum, filename = self._get_bucket_shares(storage_index).next()
4739-            sf = ShareFile(filename)
4740-            return sf.get_leases()
4741-        except StopIteration:
4742-            return iter([])
4743+        This method is not for client use. XXX do we need it at all?
4744+        """
4745+        return self.backend.get_shareset(storageindex).get_leases()
4746 
4747hunk ./src/allmydata/storage/server.py 299
4748-    def remote_slot_testv_and_readv_and_writev(self, storage_index,
4749+    def remote_slot_testv_and_readv_and_writev(self, storageindex,
4750                                                secrets,
4751                                                test_and_write_vectors,
4752                                                read_vector):
4753hunk ./src/allmydata/storage/server.py 305
4754         start = time.time()
4755         self.count("writev")
4756-        si_s = si_b2a(storage_index)
4757+        si_s = si_b2a(storageindex)
4758         log.msg("storage: slot_writev %s" % si_s)
4759hunk ./src/allmydata/storage/server.py 307
4760-        si_dir = storage_index_to_dir(storage_index)
4761-        (write_enabler, renew_secret, cancel_secret) = secrets
4762-        # shares exist if there is a file for them
4763-        bucketdir = os.path.join(self.sharedir, si_dir)
4764-        shares = {}
4765-        if os.path.isdir(bucketdir):
4766-            for sharenum_s in os.listdir(bucketdir):
4767-                try:
4768-                    sharenum = int(sharenum_s)
4769-                except ValueError:
4770-                    continue
4771-                filename = os.path.join(bucketdir, sharenum_s)
4772-                msf = MutableShareFile(filename, self)
4773-                msf.check_write_enabler(write_enabler, si_s)
4774-                shares[sharenum] = msf
4775-        # write_enabler is good for all existing shares.
4776-
4777-        # Now evaluate test vectors.
4778-        testv_is_good = True
4779-        for sharenum in test_and_write_vectors:
4780-            (testv, datav, new_length) = test_and_write_vectors[sharenum]
4781-            if sharenum in shares:
4782-                if not shares[sharenum].check_testv(testv):
4783-                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
4784-                    testv_is_good = False
4785-                    break
4786-            else:
4787-                # compare the vectors against an empty share, in which all
4788-                # reads return empty strings.
4789-                if not EmptyShare().check_testv(testv):
4790-                    self.log("testv failed (empty): [%d] %r" % (sharenum,
4791-                                                                testv))
4792-                    testv_is_good = False
4793-                    break
4794-
4795-        # now gather the read vectors, before we do any writes
4796-        read_data = {}
4797-        for sharenum, share in shares.items():
4798-            read_data[sharenum] = share.readv(read_vector)
4799-
4800-        ownerid = 1 # TODO
4801-        expire_time = time.time() + 31*24*60*60   # one month
4802-        lease_info = LeaseInfo(ownerid,
4803-                               renew_secret, cancel_secret,
4804-                               expire_time, self.my_nodeid)
4805-
4806-        if testv_is_good:
4807-            # now apply the write vectors
4808-            for sharenum in test_and_write_vectors:
4809-                (testv, datav, new_length) = test_and_write_vectors[sharenum]
4810-                if new_length == 0:
4811-                    if sharenum in shares:
4812-                        shares[sharenum].unlink()
4813-                else:
4814-                    if sharenum not in shares:
4815-                        # allocate a new share
4816-                        allocated_size = 2000 # arbitrary, really
4817-                        share = self._allocate_slot_share(bucketdir, secrets,
4818-                                                          sharenum,
4819-                                                          allocated_size,
4820-                                                          owner_num=0)
4821-                        shares[sharenum] = share
4822-                    shares[sharenum].writev(datav, new_length)
4823-                    # and update the lease
4824-                    shares[sharenum].add_or_renew_lease(lease_info)
4825-
4826-            if new_length == 0:
4827-                # delete empty bucket directories
4828-                if not os.listdir(bucketdir):
4829-                    os.rmdir(bucketdir)
4830 
4831hunk ./src/allmydata/storage/server.py 308
4832+        try:
4833+            shareset = self.backend.get_shareset(storageindex)
4834+            expiration_time = start + 31*24*60*60   # one month from now
4835+            return shareset.testv_and_readv_and_writev(self, secrets, test_and_write_vectors,
4836+                                                       read_vector, expiration_time)
4837+        finally:
4838+            self.add_latency("writev", time.time() - start)
4839 
4840hunk ./src/allmydata/storage/server.py 316
4841-        # all done
4842-        self.add_latency("writev", time.time() - start)
4843-        return (testv_is_good, read_data)
4844-
4845-    def _allocate_slot_share(self, bucketdir, secrets, sharenum,
4846-                             allocated_size, owner_num=0):
4847-        (write_enabler, renew_secret, cancel_secret) = secrets
4848-        my_nodeid = self.my_nodeid
4849-        fileutil.make_dirs(bucketdir)
4850-        filename = os.path.join(bucketdir, "%d" % sharenum)
4851-        share = create_mutable_sharefile(filename, my_nodeid, write_enabler,
4852-                                         self)
4853-        return share
4854-
4855-    def remote_slot_readv(self, storage_index, shares, readv):
4856+    def remote_slot_readv(self, storageindex, shares, readv):
4857         start = time.time()
4858         self.count("readv")
4859hunk ./src/allmydata/storage/server.py 319
4860-        si_s = si_b2a(storage_index)
4861-        lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
4862-                     facility="tahoe.storage", level=log.OPERATIONAL)
4863-        si_dir = storage_index_to_dir(storage_index)
4864-        # shares exist if there is a file for them
4865-        bucketdir = os.path.join(self.sharedir, si_dir)
4866-        if not os.path.isdir(bucketdir):
4867+        si_s = si_b2a(storageindex)
4868+        log.msg("storage: slot_readv %s %s" % (si_s, shares),
4869+                facility="tahoe.storage", level=log.OPERATIONAL)
4870+
4871+        try:
4872+            shareset = self.backend.get_shareset(storageindex)
4873+            return shareset.readv(self, shares, readv)
4874+        finally:
4875             self.add_latency("readv", time.time() - start)
4876hunk ./src/allmydata/storage/server.py 328
4877-            return {}
4878-        datavs = {}
4879-        for sharenum_s in os.listdir(bucketdir):
4880-            try:
4881-                sharenum = int(sharenum_s)
4882-            except ValueError:
4883-                continue
4884-            if sharenum in shares or not shares:
4885-                filename = os.path.join(bucketdir, sharenum_s)
4886-                msf = MutableShareFile(filename, self)
4887-                datavs[sharenum] = msf.readv(readv)
4888-        log.msg("returning shares %s" % (datavs.keys(),),
4889-                facility="tahoe.storage", level=log.NOISY, parent=lp)
4890-        self.add_latency("readv", time.time() - start)
4891-        return datavs
4892 
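remote_slot_readv now just asks the backend for the shareset and calls its readv; the directory listing and the filtering of non-numeric filenames that used to live here move behind the shareset interface. A rough sketch of what a disk shareset's readv can look like, assuming the same layout of one numbered file per share (names are illustrative, and the sketch reads raw file bytes, ignoring the mutable-share header for brevity):

    import os

    class DiskShareSet(object):
        def __init__(self, sharehomedir):
            self._sharehomedir = sharehomedir

        def readv(self, wanted_shnums, read_vector):
            datavs = {}
            if not os.path.isdir(self._sharehomedir):
                return datavs
            for name in os.listdir(self._sharehomedir):
                try:
                    shnum = int(name)
                except ValueError:
                    continue  # ignore files that are not numbered shares
                if not wanted_shnums or shnum in wanted_shnums:
                    f = open(os.path.join(self._sharehomedir, name), "rb")
                    try:
                        data = []
                        for (offset, length) in read_vector:
                            f.seek(offset)
                            data.append(f.read(length))
                        datavs[shnum] = data
                    finally:
                        f.close()
            return datavs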
4893hunk ./src/allmydata/storage/server.py 329
4894-    def remote_advise_corrupt_share(self, share_type, storage_index, shnum,
4895-                                    reason):
4896-        fileutil.make_dirs(self.corruption_advisory_dir)
4897-        now = time_format.iso_utc(sep="T")
4898-        si_s = si_b2a(storage_index)
4899-        # windows can't handle colons in the filename
4900-        fn = os.path.join(self.corruption_advisory_dir,
4901-                          "%s--%s-%d" % (now, si_s, shnum)).replace(":","")
4902-        f = open(fn, "w")
4903-        f.write("report: Share Corruption\n")
4904-        f.write("type: %s\n" % share_type)
4905-        f.write("storage_index: %s\n" % si_s)
4906-        f.write("share_number: %d\n" % shnum)
4907-        f.write("\n")
4908-        f.write(reason)
4909-        f.write("\n")
4910-        f.close()
4911-        log.msg(format=("client claims corruption in (%(share_type)s) " +
4912-                        "%(si)s-%(shnum)d: %(reason)s"),
4913-                share_type=share_type, si=si_s, shnum=shnum, reason=reason,
4914-                level=log.SCARY, umid="SGx2fA")
4915-        return None
4916+    def remote_advise_corrupt_share(self, share_type, storage_index, shnum, reason):
4917+        self.backend.advise_corrupt_share(share_type, storage_index, shnum, reason)
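The corruption-advisory formatting removed above is not lost; per the earlier patch message it moves into allmydata/storage/backends/base.py, where it is shared by the disk and S3 backends. For reference, the report-writing logic amounts to the following sketch (parameter names are illustrative):

    import os, time

    def write_corruption_advisory(advisory_dir, share_type, si_s, shnum, reason):
        # Windows can't handle colons in filenames, so the timestamp
        # format avoids them.
        now = time.strftime("%Y-%m-%dT%H%M%S", time.gmtime())
        if not os.path.isdir(advisory_dir):
            os.makedirs(advisory_dir)
        fn = os.path.join(advisory_dir, "%s--%s-%d" % (now, si_s, shnum))
        f = open(fn, "w")
        try:
            f.write("report: Share Corruption\n")
            f.write("type: %s\n" % share_type)
            f.write("storage_index: %s\n" % si_s)
            f.write("share_number: %d\n" % shnum)
            f.write("\n")
            f.write(reason)
            f.write("\n")
        finally:
            f.close()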
4918hunk ./src/allmydata/test/common.py 20
4919 from allmydata.mutable.common import CorruptShareError
4920 from allmydata.mutable.layout import unpack_header
4921 from allmydata.mutable.publish import MutableData
4922-from allmydata.storage.mutable import MutableShareFile
4923+from allmydata.storage.backends.disk.mutable import MutableDiskShare
4924 from allmydata.util import hashutil, log, fileutil, pollmixin
4925 from allmydata.util.assertutil import precondition
4926 from allmydata.util.consumer import download_to_data
4927hunk ./src/allmydata/test/common.py 1297
4928 
4929 def _corrupt_mutable_share_data(data, debug=False):
4930     prefix = data[:32]
4931-    assert prefix == MutableShareFile.MAGIC, "This function is designed to corrupt mutable shares of v1, and the magic number doesn't look right: %r vs %r" % (prefix, MutableShareFile.MAGIC)
4932-    data_offset = MutableShareFile.DATA_OFFSET
4933+    assert prefix == MutableDiskShare.MAGIC, "This function is designed to corrupt mutable shares of v1, and the magic number doesn't look right: %r vs %r" % (prefix, MutableDiskShare.MAGIC)
4934+    data_offset = MutableDiskShare.DATA_OFFSET
4935     sharetype = data[data_offset:data_offset+1]
4936     assert sharetype == "\x00", "non-SDMF mutable shares not supported"
4937     (version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize,
4938hunk ./src/allmydata/test/no_network.py 21
4939 from twisted.application import service
4940 from twisted.internet import defer, reactor
4941 from twisted.python.failure import Failure
4942+from twisted.python.filepath import FilePath
4943 from foolscap.api import Referenceable, fireEventually, RemoteException
4944 from base64 import b32encode
4945hunk ./src/allmydata/test/no_network.py 24
4946+
4947 from allmydata import uri as tahoe_uri
4948 from allmydata.client import Client
4949hunk ./src/allmydata/test/no_network.py 27
4950-from allmydata.storage.server import StorageServer, storage_index_to_dir
4951+from allmydata.storage.server import StorageServer
4952+from allmydata.storage.backends.disk.disk_backend import DiskBackend
4953 from allmydata.util import fileutil, idlib, hashutil
4954 from allmydata.util.hashutil import sha1
4955 from allmydata.test.common_web import HTTPClientGETFactory
4956hunk ./src/allmydata/test/no_network.py 155
4957             seed = server.get_permutation_seed()
4958             return sha1(peer_selection_index + seed).digest()
4959         return sorted(self.get_connected_servers(), key=_permuted)
4960+
4961     def get_connected_servers(self):
4962         return self.client._servers
4963hunk ./src/allmydata/test/no_network.py 158
4964+
4965     def get_nickname_for_serverid(self, serverid):
4966         return None
4967 
4968hunk ./src/allmydata/test/no_network.py 162
4969+    def get_known_servers(self):
4970+        return self.get_connected_servers()
4971+
4972+    def get_all_serverids(self):
4973+        return self.client.get_all_serverids()
4974+
4975+
4976 class NoNetworkClient(Client):
4977     def create_tub(self):
4978         pass
4979hunk ./src/allmydata/test/no_network.py 262
4980 
4981     def make_server(self, i, readonly=False):
4982         serverid = hashutil.tagged_hash("serverid", str(i))[:20]
4983-        serverdir = os.path.join(self.basedir, "servers",
4984-                                 idlib.shortnodeid_b2a(serverid), "storage")
4985-        fileutil.make_dirs(serverdir)
4986-        ss = StorageServer(serverdir, serverid, stats_provider=SimpleStats(),
4987-                           readonly_storage=readonly)
4988+        storagedir = FilePath(self.basedir).child("servers").child(idlib.shortnodeid_b2a(serverid)).child("storage")
4989+
4990+        # The backend will make the storage directory and any necessary parents.
4991+        backend = DiskBackend(storagedir, readonly=readonly)
4992+        ss = StorageServer(serverid, backend, storagedir, stats_provider=SimpleStats())
4993         ss._no_network_server_number = i
4994         return ss
4995 
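make_server now builds a server in two steps: a backend bound to a FilePath, then a StorageServer over that backend. Within this branch's tree the same construction looks like the sketch below (the path is illustrative; stats_provider is optional, as the call above shows):

    from twisted.python.filepath import FilePath
    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    storagedir = FilePath("/tmp/example-node/storage")
    backend = DiskBackend(storagedir, readonly=False)  # makes the directory on demand
    ss = StorageServer("\x00" * 20, backend, storagedir)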
4996hunk ./src/allmydata/test/no_network.py 276
4997         middleman = service.MultiService()
4998         middleman.setServiceParent(self)
4999         ss.setServiceParent(middleman)
5000-        serverid = ss.my_nodeid
5001+        serverid = ss.get_serverid()
5002         self.servers_by_number[i] = ss
5003         wrapper = wrap_storage_server(ss)
5004         self.wrappers_by_id[serverid] = wrapper
5005hunk ./src/allmydata/test/no_network.py 295
5006         # it's enough to remove the server from c._servers (we don't actually
5007         # have to detach and stopService it)
5008         for i,ss in self.servers_by_number.items():
5009-            if ss.my_nodeid == serverid:
5010+            if ss.get_serverid() == serverid:
5011                 del self.servers_by_number[i]
5012                 break
5013         del self.wrappers_by_id[serverid]
5014hunk ./src/allmydata/test/no_network.py 345
5015     def get_clientdir(self, i=0):
5016         return self.g.clients[i].basedir
5017 
5018+    def get_server(self, i):
5019+        return self.g.servers_by_number[i]
5020+
5021     def get_serverdir(self, i):
5022hunk ./src/allmydata/test/no_network.py 349
5023-        return self.g.servers_by_number[i].storedir
5024+        return self.g.servers_by_number[i].backend.storedir
5025+
5026+    def remove_server(self, i):
5027+        self.g.remove_server(self.g.servers_by_number[i].get_serverid())
5028 
5029     def iterate_servers(self):
5030         for i in sorted(self.g.servers_by_number.keys()):
5031hunk ./src/allmydata/test/no_network.py 357
5032             ss = self.g.servers_by_number[i]
5033-            yield (i, ss, ss.storedir)
5034+            yield (i, ss, ss.backend.storedir)
5035 
5036     def find_uri_shares(self, uri):
5037         si = tahoe_uri.from_string(uri).get_storage_index()
5038hunk ./src/allmydata/test/no_network.py 361
5039-        prefixdir = storage_index_to_dir(si)
5040         shares = []
5041         for i,ss in self.g.servers_by_number.items():
5042hunk ./src/allmydata/test/no_network.py 363
5043-            serverid = ss.my_nodeid
5044-            basedir = os.path.join(ss.sharedir, prefixdir)
5045-            if not os.path.exists(basedir):
5046-                continue
5047-            for f in os.listdir(basedir):
5048-                try:
5049-                    shnum = int(f)
5050-                    shares.append((shnum, serverid, os.path.join(basedir, f)))
5051-                except ValueError:
5052-                    pass
5053+            for share in ss.backend.get_shareset(si).get_shares():
5054+                shares.append((share.get_shnum(), ss.get_serverid(), share._home))
5055         return sorted(shares)
5056 
5057hunk ./src/allmydata/test/no_network.py 367
5058+    def count_leases(self, uri):
5059+        """Return (share path, lease count) pairs in arbitrary order."""
5060+        si = tahoe_uri.from_string(uri).get_storage_index()
5061+        lease_counts = []
5062+        for i,ss in self.g.servers_by_number.items():
5063+            for share in ss.backend.get_shareset(si).get_shares():
5064+                num_leases = len(list(share.get_leases()))
5065+                lease_counts.append( (share._home.path, num_leases) )
5066+        return lease_counts
5067+
5068     def copy_shares(self, uri):
5069         shares = {}
5070hunk ./src/allmydata/test/no_network.py 379
5071-        for (shnum, serverid, sharefile) in self.find_uri_shares(uri):
5072-            shares[sharefile] = open(sharefile, "rb").read()
5073+        for (shnum, serverid, sharefp) in self.find_uri_shares(uri):
5074+            shares[sharefp.path] = sharefp.getContent()
5075         return shares
5076 
5077hunk ./src/allmydata/test/no_network.py 383
5078+    def copy_share(self, from_share, uri, to_server):
5079+        si = tahoe_uri.from_string(uri).get_storage_index()
5080+        (i_shnum, i_serverid, i_sharefp) = from_share
5081+        shares_dir = to_server.backend.get_shareset(si)._sharehomedir
5082+        i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
5083+
5084     def restore_all_shares(self, shares):
5085hunk ./src/allmydata/test/no_network.py 390
5086-        for sharefile, data in shares.items():
5087-            open(sharefile, "wb").write(data)
5088+        for sharepath, data in shares.items():
5089+            FilePath(sharepath).setContent(data)
5090 
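Together with find_uri_shares, the helpers above give tests a FilePath-based round trip over share data: copy_shares snapshots path -> bytes, and restore_all_shares writes the snapshot back. A typical save/mutate/restore usage inside a GridTestMixin test (cap is an assumed URI string; _corrupt_mutable_share_data is the corruptor from test/common.py):

    shares = self.copy_shares(cap)        # snapshot: share path -> bytes
    self.corrupt_shares_numbered(cap, [0], _corrupt_mutable_share_data)
    # ... exercise the checker/repairer under test ...
    self.restore_all_shares(shares)       # put the saved bytes back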
5091hunk ./src/allmydata/test/no_network.py 393
5092-    def delete_share(self, (shnum, serverid, sharefile)):
5093-        os.unlink(sharefile)
5094+    def delete_share(self, (shnum, serverid, sharefp)):
5095+        sharefp.remove()
5096 
5097     def delete_shares_numbered(self, uri, shnums):
5098hunk ./src/allmydata/test/no_network.py 397
5099-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5100+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5101             if i_shnum in shnums:
5102hunk ./src/allmydata/test/no_network.py 399
5103-                os.unlink(i_sharefile)
5104+                i_sharefp.remove()
5105 
5106hunk ./src/allmydata/test/no_network.py 401
5107-    def corrupt_share(self, (shnum, serverid, sharefile), corruptor_function):
5108-        sharedata = open(sharefile, "rb").read()
5109-        corruptdata = corruptor_function(sharedata)
5110-        open(sharefile, "wb").write(corruptdata)
5111+    def corrupt_share(self, (shnum, serverid, sharefp), corruptor_function, debug=False):
5112+        sharedata = sharefp.getContent()
5113+        corruptdata = corruptor_function(sharedata, debug=debug)
5114+        sharefp.setContent(corruptdata)
5115 
5116     def corrupt_shares_numbered(self, uri, shnums, corruptor, debug=False):
5117hunk ./src/allmydata/test/no_network.py 407
5118-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5119+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5120             if i_shnum in shnums:
5121hunk ./src/allmydata/test/no_network.py 409
5122-                sharedata = open(i_sharefile, "rb").read()
5123-                corruptdata = corruptor(sharedata, debug=debug)
5124-                open(i_sharefile, "wb").write(corruptdata)
5125+                self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug)
5126 
5127     def corrupt_all_shares(self, uri, corruptor, debug=False):
5128hunk ./src/allmydata/test/no_network.py 412
5129-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5130-            sharedata = open(i_sharefile, "rb").read()
5131-            corruptdata = corruptor(sharedata, debug=debug)
5132-            open(i_sharefile, "wb").write(corruptdata)
5133+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5134+            self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug)
5135 
5136     def GET(self, urlpath, followRedirect=False, return_response=False,
5137             method="GET", clientnum=0, **kwargs):
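corrupt_share treats its corruptor as a function from share bytes to share bytes that also accepts a debug flag, which is why corrupt_shares_numbered and corrupt_all_shares can now delegate to it. A trivial corruptor with the expected signature (illustrative only; the real ones, such as _corrupt_mutable_share_data, live in test/common.py):

    def _flip_first_bit(data, debug=False):
        # Flip one bit near the start of the share; enough to break hashes.
        return chr(ord(data[0]) ^ 0x01) + data[1:]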
5138hunk ./src/allmydata/test/test_download.py 6
5139 # a previous run. This asserts that the current code is capable of decoding
5140 # shares from a previous version.
5141 
5142-import os
5143 from twisted.trial import unittest
5144 from twisted.internet import defer, reactor
5145 from allmydata import uri
5146hunk ./src/allmydata/test/test_download.py 9
5147-from allmydata.storage.server import storage_index_to_dir
5148 from allmydata.util import base32, fileutil, spans, log, hashutil
5149 from allmydata.util.consumer import download_to_data, MemoryConsumer
5150 from allmydata.immutable import upload, layout
5151hunk ./src/allmydata/test/test_download.py 85
5152         u = upload.Data(plaintext, None)
5153         d = self.c0.upload(u)
5154         f = open("stored_shares.py", "w")
5155-        def _created_immutable(ur):
5156-            # write the generated shares and URI to a file, which can then be
5157-            # incorporated into this one next time.
5158-            f.write('immutable_uri = "%s"\n' % ur.uri)
5159-            f.write('immutable_shares = {\n')
5160-            si = uri.from_string(ur.uri).get_storage_index()
5161-            si_dir = storage_index_to_dir(si)
5162+
5163+        def _write_py(u):
5164+            si = uri.from_string(u).get_storage_index()
5165             for (i,ss,ssdir) in self.iterate_servers():
5166hunk ./src/allmydata/test/test_download.py 89
5167-                sharedir = os.path.join(ssdir, "shares", si_dir)
5168                 shares = {}
5169hunk ./src/allmydata/test/test_download.py 90
5170-                for fn in os.listdir(sharedir):
5171-                    shnum = int(fn)
5172-                    sharedata = open(os.path.join(sharedir, fn), "rb").read()
5173-                    shares[shnum] = sharedata
5174-                fileutil.rm_dir(sharedir)
5175+                shareset = ss.backend.get_shareset(si)
5176+                for share in shareset.get_shares():
5177+                    sharedata = share._home.getContent()
5178+                    shares[share.get_shnum()] = sharedata
5179+
5180+                fileutil.fp_remove(shareset._sharehomedir)
5181                 if shares:
5182                     f.write(' %d: { # client[%d]\n' % (i, i))
5183                     for shnum in sorted(shares.keys()):
5184hunk ./src/allmydata/test/test_download.py 103
5185                                 (shnum, base32.b2a(shares[shnum])))
5186                     f.write('    },\n')
5187             f.write('}\n')
5188-            f.write('\n')
5189 
5190hunk ./src/allmydata/test/test_download.py 104
5191+        def _created_immutable(ur):
5192+            # write the generated shares and URI to a file, which can then be
5193+            # incorporated into this one next time.
5194+            f.write('immutable_uri = "%s"\n' % ur.uri)
5195+            f.write('immutable_shares = {\n')
5196+            _write_py(ur.uri)
5197+            f.write('\n')
5198         d.addCallback(_created_immutable)
5199 
5200         d.addCallback(lambda ignored:
5201hunk ./src/allmydata/test/test_download.py 118
5202         def _created_mutable(n):
5203             f.write('mutable_uri = "%s"\n' % n.get_uri())
5204             f.write('mutable_shares = {\n')
5205-            si = uri.from_string(n.get_uri()).get_storage_index()
5206-            si_dir = storage_index_to_dir(si)
5207-            for (i,ss,ssdir) in self.iterate_servers():
5208-                sharedir = os.path.join(ssdir, "shares", si_dir)
5209-                shares = {}
5210-                for fn in os.listdir(sharedir):
5211-                    shnum = int(fn)
5212-                    sharedata = open(os.path.join(sharedir, fn), "rb").read()
5213-                    shares[shnum] = sharedata
5214-                fileutil.rm_dir(sharedir)
5215-                if shares:
5216-                    f.write(' %d: { # client[%d]\n' % (i, i))
5217-                    for shnum in sorted(shares.keys()):
5218-                        f.write('  %d: base32.a2b("%s"),\n' %
5219-                                (shnum, base32.b2a(shares[shnum])))
5220-                    f.write('    },\n')
5221-            f.write('}\n')
5222-
5223-            f.close()
5224+            _write_py(n.get_uri())
5225         d.addCallback(_created_mutable)
5226 
5227         def _done(ignored):
5228hunk ./src/allmydata/test/test_download.py 123
5229             f.close()
5230-        d.addCallback(_done)
5231+        d.addBoth(_done)
5232 
5233         return d
5234 
5235hunk ./src/allmydata/test/test_download.py 127
5236+    def _write_shares(self, u, shares):
5237+        si = uri.from_string(u).get_storage_index()
5238+        for i in shares:
5239+            shares_for_server = shares[i]
5240+            for shnum in shares_for_server:
5241+                share_dir = self.get_server(i).backend.get_shareset(si)._sharehomedir
5242+                fileutil.fp_make_dirs(share_dir)
5243+                share_dir.child(str(shnum)).setContent(shares_for_server[shnum])
5244+
5245     def load_shares(self, ignored=None):
5246         # this uses the data generated by create_shares() to populate the
5247         # storage servers with pre-generated shares
5248hunk ./src/allmydata/test/test_download.py 139
5249-        si = uri.from_string(immutable_uri).get_storage_index()
5250-        si_dir = storage_index_to_dir(si)
5251-        for i in immutable_shares:
5252-            shares = immutable_shares[i]
5253-            for shnum in shares:
5254-                dn = os.path.join(self.get_serverdir(i), "shares", si_dir)
5255-                fileutil.make_dirs(dn)
5256-                fn = os.path.join(dn, str(shnum))
5257-                f = open(fn, "wb")
5258-                f.write(shares[shnum])
5259-                f.close()
5260-
5261-        si = uri.from_string(mutable_uri).get_storage_index()
5262-        si_dir = storage_index_to_dir(si)
5263-        for i in mutable_shares:
5264-            shares = mutable_shares[i]
5265-            for shnum in shares:
5266-                dn = os.path.join(self.get_serverdir(i), "shares", si_dir)
5267-                fileutil.make_dirs(dn)
5268-                fn = os.path.join(dn, str(shnum))
5269-                f = open(fn, "wb")
5270-                f.write(shares[shnum])
5271-                f.close()
5272+        self._write_shares(immutable_uri, immutable_shares)
5273+        self._write_shares(mutable_uri, mutable_shares)
5274 
5275     def download_immutable(self, ignored=None):
5276         n = self.c0.create_node_from_uri(immutable_uri)
5277hunk ./src/allmydata/test/test_download.py 183
5278 
5279         self.load_shares()
5280         si = uri.from_string(immutable_uri).get_storage_index()
5281-        si_dir = storage_index_to_dir(si)
5282 
5283         n = self.c0.create_node_from_uri(immutable_uri)
5284         d = download_to_data(n)
5285hunk ./src/allmydata/test/test_download.py 198
5286                 for clientnum in immutable_shares:
5287                     for shnum in immutable_shares[clientnum]:
5288                         if s._shnum == shnum:
5289-                            fn = os.path.join(self.get_serverdir(clientnum),
5290-                                              "shares", si_dir, str(shnum))
5291-                            os.unlink(fn)
5292+                            share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5293+                            share_dir.child(str(shnum)).remove()
5294         d.addCallback(_clobber_some_shares)
5295         d.addCallback(lambda ign: download_to_data(n))
5296         d.addCallback(_got_data)
5297hunk ./src/allmydata/test/test_download.py 212
5298                 for shnum in immutable_shares[clientnum]:
5299                     if shnum == save_me:
5300                         continue
5301-                    fn = os.path.join(self.get_serverdir(clientnum),
5302-                                      "shares", si_dir, str(shnum))
5303-                    if os.path.exists(fn):
5304-                        os.unlink(fn)
5305+                    share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5306+                    fileutil.fp_remove(share_dir.child(str(shnum)))
5307             # now the download should fail with NotEnoughSharesError
5308             return self.shouldFail(NotEnoughSharesError, "1shares", None,
5309                                    download_to_data, n)
5310hunk ./src/allmydata/test/test_download.py 223
5311             # delete the last remaining share
5312             for clientnum in immutable_shares:
5313                 for shnum in immutable_shares[clientnum]:
5314-                    fn = os.path.join(self.get_serverdir(clientnum),
5315-                                      "shares", si_dir, str(shnum))
5316-                    if os.path.exists(fn):
5317-                        os.unlink(fn)
5318+                    share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5319+                    share_dir.child(str(shnum)).remove()
5320             # now a new download should fail with NoSharesError. We want a
5321             # new ImmutableFileNode so it will forget about the old shares.
5322             # If we merely called create_node_from_uri() without first
5323hunk ./src/allmydata/test/test_download.py 801
5324         # will report two shares, and the ShareFinder will handle the
5325         # duplicate by attaching both to the same CommonShare instance.
5326         si = uri.from_string(immutable_uri).get_storage_index()
5327-        si_dir = storage_index_to_dir(si)
5328-        sh0_file = [sharefile
5329-                    for (shnum, serverid, sharefile)
5330-                    in self.find_uri_shares(immutable_uri)
5331-                    if shnum == 0][0]
5332-        sh0_data = open(sh0_file, "rb").read()
5333+        sh0_fp = [sharefp for (shnum, serverid, sharefp)
5334+                          in self.find_uri_shares(immutable_uri)
5335+                          if shnum == 0][0]
5336+        sh0_data = sh0_fp.getContent()
5337         for clientnum in immutable_shares:
5338             if 0 in immutable_shares[clientnum]:
5339                 continue
5340hunk ./src/allmydata/test/test_download.py 808
5341-            cdir = self.get_serverdir(clientnum)
5342-            target = os.path.join(cdir, "shares", si_dir, "0")
5343-            outf = open(target, "wb")
5344-            outf.write(sh0_data)
5345-            outf.close()
5346+            cdir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5347+            fileutil.fp_make_dirs(cdir)
5348+            cdir.child("0").setContent(sh0_data)
5349 
5350         d = self.download_immutable()
5351         return d
5352hunk ./src/allmydata/test/test_encode.py 134
5353         d.addCallback(_try)
5354         return d
5355 
5356-    def get_share_hashes(self, at_least_these=()):
5357+    def get_share_hashes(self):
5358         d = self._start()
5359         def _try(unused=None):
5360             if self.mode == "bad sharehash":
5361hunk ./src/allmydata/test/test_hung_server.py 3
5362 # -*- coding: utf-8 -*-
5363 
5364-import os, shutil
5365 from twisted.trial import unittest
5366 from twisted.internet import defer
5367hunk ./src/allmydata/test/test_hung_server.py 5
5368-from allmydata import uri
5369+
5370 from allmydata.util.consumer import download_to_data
5371 from allmydata.immutable import upload
5372 from allmydata.mutable.common import UnrecoverableFileError
5373hunk ./src/allmydata/test/test_hung_server.py 10
5374 from allmydata.mutable.publish import MutableData
5375-from allmydata.storage.common import storage_index_to_dir
5376 from allmydata.test.no_network import GridTestMixin
5377 from allmydata.test.common import ShouldFailMixin
5378 from allmydata.util.pollmixin import PollMixin
5379hunk ./src/allmydata/test/test_hung_server.py 18
5380 immutable_plaintext = "data" * 10000
5381 mutable_plaintext = "muta" * 10000
5382 
5383+
5384 class HungServerDownloadTest(GridTestMixin, ShouldFailMixin, PollMixin,
5385                              unittest.TestCase):
5386     # Many of these tests take around 60 seconds on François's ARM buildslave:
5387hunk ./src/allmydata/test/test_hung_server.py 31
5388     timeout = 240
5389 
5390     def _break(self, servers):
5391-        for (id, ss) in servers:
5392-            self.g.break_server(id)
5393+        for ss in servers:
5394+            self.g.break_server(ss.get_serverid())
5395 
5396     def _hang(self, servers, **kwargs):
5397hunk ./src/allmydata/test/test_hung_server.py 35
5398-        for (id, ss) in servers:
5399-            self.g.hang_server(id, **kwargs)
5400+        for ss in servers:
5401+            self.g.hang_server(ss.get_serverid(), **kwargs)
5402 
5403     def _unhang(self, servers, **kwargs):
5404hunk ./src/allmydata/test/test_hung_server.py 39
5405-        for (id, ss) in servers:
5406-            self.g.unhang_server(id, **kwargs)
5407+        for ss in servers:
5408+            self.g.unhang_server(ss.get_serverid(), **kwargs)
5409 
5410     def _hang_shares(self, shnums, **kwargs):
5411         # hang all servers who are holding the given shares
5412hunk ./src/allmydata/test/test_hung_server.py 52
5413                     hung_serverids.add(i_serverid)
5414 
5415     def _delete_all_shares_from(self, servers):
5416-        serverids = [id for (id, ss) in servers]
5417-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5418+        serverids = [ss.get_serverid() for ss in servers]
5419+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5420             if i_serverid in serverids:
5421hunk ./src/allmydata/test/test_hung_server.py 55
5422-                os.unlink(i_sharefile)
5423+                i_sharefp.remove()
5424 
5425     def _corrupt_all_shares_in(self, servers, corruptor_func):
5426hunk ./src/allmydata/test/test_hung_server.py 58
5427-        serverids = [id for (id, ss) in servers]
5428-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5429+        serverids = [ss.get_serverid() for ss in servers]
5430+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5431             if i_serverid in serverids:
5432hunk ./src/allmydata/test/test_hung_server.py 61
5433-                self._corrupt_share((i_shnum, i_sharefile), corruptor_func)
5434+                self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor_func)
5435 
5436     def _copy_all_shares_from(self, from_servers, to_server):
5437hunk ./src/allmydata/test/test_hung_server.py 64
5438-        serverids = [id for (id, ss) in from_servers]
5439-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5440+        serverids = [ss.get_serverid() for ss in from_servers]
5441+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5442             if i_serverid in serverids:
5443hunk ./src/allmydata/test/test_hung_server.py 67
5444-                self._copy_share((i_shnum, i_sharefile), to_server)
5445+                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
5446 
5447hunk ./src/allmydata/test/test_hung_server.py 69
5448-    def _copy_share(self, share, to_server):
5449-        (sharenum, sharefile) = share
5450-        (id, ss) = to_server
5451-        shares_dir = os.path.join(ss.original.storedir, "shares")
5452-        si = uri.from_string(self.uri).get_storage_index()
5453-        si_dir = os.path.join(shares_dir, storage_index_to_dir(si))
5454-        if not os.path.exists(si_dir):
5455-            os.makedirs(si_dir)
5456-        new_sharefile = os.path.join(si_dir, str(sharenum))
5457-        shutil.copy(sharefile, new_sharefile)
5458         self.shares = self.find_uri_shares(self.uri)
5459hunk ./src/allmydata/test/test_hung_server.py 70
5460-        # Make sure that the storage server has the share.
5461-        self.failUnless((sharenum, ss.original.my_nodeid, new_sharefile)
5462-                        in self.shares)
5463-
5464-    def _corrupt_share(self, share, corruptor_func):
5465-        (sharenum, sharefile) = share
5466-        data = open(sharefile, "rb").read()
5467-        newdata = corruptor_func(data)
5468-        os.unlink(sharefile)
5469-        wf = open(sharefile, "wb")
5470-        wf.write(newdata)
5471-        wf.close()
5472 
5473     def _set_up(self, mutable, testdir, num_clients=1, num_servers=10):
5474         self.mutable = mutable
5475hunk ./src/allmydata/test/test_hung_server.py 82
5476 
5477         self.c0 = self.g.clients[0]
5478         nm = self.c0.nodemaker
5479-        self.servers = sorted([(s.get_serverid(), s.get_rref())
5480-                               for s in nm.storage_broker.get_connected_servers()])
5481+        unsorted = [(s.get_serverid(), s.get_rref()) for s in nm.storage_broker.get_connected_servers()]
5482+        self.servers = [ss for (id, ss) in sorted(unsorted)]
5483         self.servers = self.servers[5:] + self.servers[:5]
5484 
5485         if mutable:
5486hunk ./src/allmydata/test/test_hung_server.py 244
5487             # stuck-but-not-overdue, and 4 live requests. All 4 live requests
5488             # will retire before the download is complete and the ShareFinder
5489             # is shut off. That will leave 4 OVERDUE and 1
5490-            # stuck-but-not-overdue, for a total of 5 requests in in
5491+            # stuck-but-not-overdue, for a total of 5 requests in
5492             # _sf.pending_requests
5493             for t in self._sf.overdue_timers.values()[:4]:
5494                 t.reset(-1.0)
5495hunk ./src/allmydata/test/test_mutable.py 21
5496 from foolscap.api import eventually, fireEventually
5497 from foolscap.logging import log
5498 from allmydata.storage_client import StorageFarmBroker
5499-from allmydata.storage.common import storage_index_to_dir
5500 from allmydata.scripts import debug
5501 
5502 from allmydata.mutable.filenode import MutableFileNode, BackoffAgent
5503hunk ./src/allmydata/test/test_mutable.py 3669
5504         # Now execute each assignment by writing the storage.
5505         for (share, servernum) in assignments:
5506             sharedata = base64.b64decode(self.sdmf_old_shares[share])
5507-            storedir = self.get_serverdir(servernum)
5508-            storage_path = os.path.join(storedir, "shares",
5509-                                        storage_index_to_dir(si))
5510-            fileutil.make_dirs(storage_path)
5511-            fileutil.write(os.path.join(storage_path, "%d" % share),
5512-                           sharedata)
5513+            storage_dir = self.get_server(servernum).backend.get_shareset(si)._sharehomedir
5514+            fileutil.fp_make_dirs(storage_dir)
5515+            storage_dir.child("%d" % share).setContent(sharedata)
5516         # ...and verify that the shares are there.
5517         shares = self.find_uri_shares(self.sdmf_old_cap)
5518         assert len(shares) == 10
5519hunk ./src/allmydata/test/test_provisioning.py 13
5520 from nevow import inevow
5521 from zope.interface import implements
5522 
5523-class MyRequest:
5524+class MockRequest:
5525     implements(inevow.IRequest)
5526     pass
5527 
5528hunk ./src/allmydata/test/test_provisioning.py 26
5529     def test_load(self):
5530         pt = provisioning.ProvisioningTool()
5531         self.fields = {}
5532-        #r = MyRequest()
5533+        #r = MockRequest()
5534         #r.fields = self.fields
5535         #ctx = RequestContext()
5536         #unfilled = pt.renderSynchronously(ctx)
5537hunk ./src/allmydata/test/test_repairer.py 537
5538         # happiness setting.
5539         def _delete_some_servers(ignored):
5540             for i in xrange(7):
5541-                self.g.remove_server(self.g.servers_by_number[i].my_nodeid)
5542+                self.remove_server(i)
5543 
5544             assert len(self.g.servers_by_number) == 3
5545 
5546hunk ./src/allmydata/test/test_storage.py 14
5547 from allmydata import interfaces
5548 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
5549 from allmydata.storage.server import StorageServer
5550-from allmydata.storage.mutable import MutableShareFile
5551-from allmydata.storage.immutable import BucketWriter, BucketReader
5552-from allmydata.storage.common import DataTooLargeError, storage_index_to_dir, \
5553+from allmydata.storage.backends.disk.mutable import MutableDiskShare
5554+from allmydata.storage.bucket import BucketWriter, BucketReader
5555+from allmydata.storage.common import DataTooLargeError, \
5556      UnknownMutableContainerVersionError, UnknownImmutableContainerVersionError
5557 from allmydata.storage.lease import LeaseInfo
5558 from allmydata.storage.crawler import BucketCountingCrawler
5559hunk ./src/allmydata/test/test_storage.py 474
5560         w[0].remote_write(0, "\xff"*10)
5561         w[0].remote_close()
5562 
5563-        fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0")
5564-        f = open(fn, "rb+")
5565+        fp = ss.backend.get_shareset("si1")._sharehomedir.child("0")
5566+        f = fp.open("rb+")
5567         f.seek(0)
5568         f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1
5569         f.close()
5570hunk ./src/allmydata/test/test_storage.py 814
5571     def test_bad_magic(self):
5572         ss = self.create("test_bad_magic")
5573         self.allocate(ss, "si1", "we1", self._lease_secret.next(), set([0]), 10)
5574-        fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0")
5575-        f = open(fn, "rb+")
5576+        fp = ss.backend.get_shareset("si1")._sharehomedir.child("0")
5577+        f = fp.open("rb+")
5578         f.seek(0)
5579         f.write("BAD MAGIC")
5580         f.close()
5581hunk ./src/allmydata/test/test_storage.py 842
5582 
5583         # Trying to make the container too large (by sending a write vector
5584         # whose offset is too high) will raise an exception.
5585-        TOOBIG = MutableShareFile.MAX_SIZE + 10
5586+        TOOBIG = MutableDiskShare.MAX_SIZE + 10
5587         self.failUnlessRaises(DataTooLargeError,
5588                               rstaraw, "si1", secrets,
5589                               {0: ([], [(TOOBIG,data)], None)},
5590hunk ./src/allmydata/test/test_storage.py 1229
5591 
5592         # create a random non-numeric file in the bucket directory, to
5593         # exercise the code that's supposed to ignore those.
5594-        bucket_dir = os.path.join(self.workdir("test_leases"),
5595-                                  "shares", storage_index_to_dir("si1"))
5596-        f = open(os.path.join(bucket_dir, "ignore_me.txt"), "w")
5597-        f.write("you ought to be ignoring me\n")
5598-        f.close()
5599+        bucket_dir = ss.backend.get_shareset("si1")._sharehomedir
5600+        bucket_dir.child("ignore_me.txt").setContent("you ought to be ignoring me\n")
5601 
5602hunk ./src/allmydata/test/test_storage.py 1232
5603-        s0 = MutableShareFile(os.path.join(bucket_dir, "0"))
5604+        s0 = MutableDiskShare(bucket_dir.child("0"))
5605         self.failUnlessEqual(len(list(s0.get_leases())), 1)
5606 
5607         # add-lease on a missing storage index is silently ignored
5608hunk ./src/allmydata/test/test_storage.py 3118
5609         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
5610 
5611         # add a non-sharefile to exercise another code path
5612-        fn = os.path.join(ss.sharedir,
5613-                          storage_index_to_dir(immutable_si_0),
5614-                          "not-a-share")
5615-        f = open(fn, "wb")
5616-        f.write("I am not a share.\n")
5617-        f.close()
5618+        fp = ss.backend.get_shareset(immutable_si_0)._sharehomedir.child("not-a-share")
5619+        fp.setContent("I am not a share.\n")
5620 
5621         # this is before the crawl has started, so we're not in a cycle yet
5622         initial_state = lc.get_state()
5623hunk ./src/allmydata/test/test_storage.py 3282
5624     def test_expire_age(self):
5625         basedir = "storage/LeaseCrawler/expire_age"
5626         fileutil.make_dirs(basedir)
5627-        # setting expiration_time to 2000 means that any lease which is more
5628-        # than 2000s old will be expired.
5629-        ss = InstrumentedStorageServer(basedir, "\x00" * 20,
5630-                                       expiration_enabled=True,
5631-                                       expiration_mode="age",
5632-                                       expiration_override_lease_duration=2000)
5633+        # setting 'override_lease_duration' to 2000 means that any lease that
5634+        # is more than 2000 seconds old will be expired.
5635+        expiration_policy = {
5636+            'enabled': True,
5637+            'mode': 'age',
5638+            'override_lease_duration': 2000,
5639+            'sharetypes': ('mutable', 'immutable'),
5640+        }
5641+        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
5642         # make it start sooner than usual.
5643         lc = ss.lease_checker
5644         lc.slow_start = 0
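The keyword arguments accepted by the old constructor are folded into a single expiration_policy dict. A small validator showing the shape these tests rely on (an illustrative helper, not part of the patch):

    def check_expiration_policy(policy):
        assert policy['mode'] in ('age', 'cutoff-date'), policy['mode']
        if policy['mode'] == 'age':
            assert 'override_lease_duration' in policy
        else:
            assert 'cutoff_date' in policy
        for sharetype in policy.get('sharetypes', ()):
            assert sharetype in ('mutable', 'immutable'), sharetype
        return policy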
5645hunk ./src/allmydata/test/test_storage.py 3423
5646     def test_expire_cutoff_date(self):
5647         basedir = "storage/LeaseCrawler/expire_cutoff_date"
5648         fileutil.make_dirs(basedir)
5649-        # setting cutoff-date to 2000 seconds ago means that any lease which
5650-        # is more than 2000s old will be expired.
5651+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5652+        # is more than 2000 seconds old will be expired.
5653         now = time.time()
5654         then = int(now - 2000)
5655hunk ./src/allmydata/test/test_storage.py 3427
5656-        ss = InstrumentedStorageServer(basedir, "\x00" * 20,
5657-                                       expiration_enabled=True,
5658-                                       expiration_mode="cutoff-date",
5659-                                       expiration_cutoff_date=then)
5660+        expiration_policy = {
5661+            'enabled': True,
5662+            'mode': 'cutoff-date',
5663+            'cutoff_date': then,
5664+            'sharetypes': ('mutable', 'immutable'),
5665+        }
5666+        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
5667         # make it start sooner than usual.
5668         lc = ss.lease_checker
5669         lc.slow_start = 0
5670hunk ./src/allmydata/test/test_storage.py 3575
5671     def test_only_immutable(self):
5672         basedir = "storage/LeaseCrawler/only_immutable"
5673         fileutil.make_dirs(basedir)
5674+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5675+        # is more than 2000 seconds old will be expired.
5676         now = time.time()
5677         then = int(now - 2000)
5678hunk ./src/allmydata/test/test_storage.py 3579
5679-        ss = StorageServer(basedir, "\x00" * 20,
5680-                           expiration_enabled=True,
5681-                           expiration_mode="cutoff-date",
5682-                           expiration_cutoff_date=then,
5683-                           expiration_sharetypes=("immutable",))
5684+        expiration_policy = {
5685+            'enabled': True,
5686+            'mode': 'cutoff-date',
5687+            'cutoff_date': then,
5688+            'sharetypes': ('immutable',),
5689+        }
5690+        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
5691         lc = ss.lease_checker
5692         lc.slow_start = 0
5693         webstatus = StorageStatus(ss)
5694hunk ./src/allmydata/test/test_storage.py 3636
5695     def test_only_mutable(self):
5696         basedir = "storage/LeaseCrawler/only_mutable"
5697         fileutil.make_dirs(basedir)
5698+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5699+        # is more than 2000 seconds old will be expired.
5700         now = time.time()
5701         then = int(now - 2000)
5702hunk ./src/allmydata/test/test_storage.py 3640
5703-        ss = StorageServer(basedir, "\x00" * 20,
5704-                           expiration_enabled=True,
5705-                           expiration_mode="cutoff-date",
5706-                           expiration_cutoff_date=then,
5707-                           expiration_sharetypes=("mutable",))
5708+        expiration_policy = {
5709+            'enabled': True,
5710+            'mode': 'cutoff-date',
5711+            'cutoff_date': then,
5712+            'sharetypes': ('mutable',),
5713+        }
5714+        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
5715         lc = ss.lease_checker
5716         lc.slow_start = 0
5717         webstatus = StorageStatus(ss)
5718hunk ./src/allmydata/test/test_storage.py 3819
5719     def test_no_st_blocks(self):
5720         basedir = "storage/LeaseCrawler/no_st_blocks"
5721         fileutil.make_dirs(basedir)
5722-        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20,
5723-                                        expiration_mode="age",
5724-                                        expiration_override_lease_duration=-1000)
5725-        # a negative expiration_time= means the "configured-"
5726+        # A negative 'override_lease_duration' means that the "configured-"
5727         # space-recovered counts will be non-zero, since all shares will have
5728hunk ./src/allmydata/test/test_storage.py 3821
5729-        # expired by then
5730+        # expired by then.
5731+        expiration_policy = {
5732+            'enabled': True,
5733+            'mode': 'age',
5734+            'override_lease_duration': -1000,
5735+            'sharetypes': ('mutable', 'immutable'),
5736+        }
5737+        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20, expiration_policy)
5738 
5739         # make it start sooner than usual.
5740         lc = ss.lease_checker
5741hunk ./src/allmydata/test/test_storage.py 3877
5742         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
5743         first = min(self.sis)
5744         first_b32 = base32.b2a(first)
5745-        fn = os.path.join(ss.sharedir, storage_index_to_dir(first), "0")
5746-        f = open(fn, "rb+")
5747+        fp = ss.backend.get_shareset(first)._sharehomedir.child("0")
5748+        f = fp.open("rb+")
5749         f.seek(0)
5750         f.write("BAD MAGIC")
5751         f.close()
5752hunk ./src/allmydata/test/test_storage.py 3890
5753 
5754         # also create an empty bucket
5755         empty_si = base32.b2a("\x04"*16)
5756-        empty_bucket_dir = os.path.join(ss.sharedir,
5757-                                        storage_index_to_dir(empty_si))
5758-        fileutil.make_dirs(empty_bucket_dir)
5759+        empty_bucket_dir = ss.backend.get_shareset(empty_si)._sharehomedir
5760+        fileutil.fp_make_dirs(empty_bucket_dir)
5761 
5762         ss.setServiceParent(self.s)
5763 
5764hunk ./src/allmydata/test/test_system.py 10
5765 
5766 import allmydata
5767 from allmydata import uri
5768-from allmydata.storage.mutable import MutableShareFile
5769+from allmydata.storage.backends.disk.mutable import MutableDiskShare
5770 from allmydata.storage.server import si_a2b
5771 from allmydata.immutable import offloaded, upload
5772 from allmydata.immutable.literal import LiteralFileNode
5773hunk ./src/allmydata/test/test_system.py 421
5774         return shares
5775 
5776     def _corrupt_mutable_share(self, filename, which):
5777-        msf = MutableShareFile(filename)
5778+        msf = MutableDiskShare(filename)
5779         datav = msf.readv([ (0, 1000000) ])
5780         final_share = datav[0]
5781         assert len(final_share) < 1000000 # ought to be truncated
5782hunk ./src/allmydata/test/test_upload.py 22
5783 from allmydata.util.happinessutil import servers_of_happiness, \
5784                                          shares_by_server, merge_servers
5785 from allmydata.storage_client import StorageFarmBroker
5786-from allmydata.storage.server import storage_index_to_dir
5787 
5788 MiB = 1024*1024
5789 
5790hunk ./src/allmydata/test/test_upload.py 821
5791 
5792     def _copy_share_to_server(self, share_number, server_number):
5793         ss = self.g.servers_by_number[server_number]
5794-        # Copy share i from the directory associated with the first
5795-        # storage server to the directory associated with this one.
5796-        assert self.g, "I tried to find a grid at self.g, but failed"
5797-        assert self.shares, "I tried to find shares at self.shares, but failed"
5798-        old_share_location = self.shares[share_number][2]
5799-        new_share_location = os.path.join(ss.storedir, "shares")
5800-        si = uri.from_string(self.uri).get_storage_index()
5801-        new_share_location = os.path.join(new_share_location,
5802-                                          storage_index_to_dir(si))
5803-        if not os.path.exists(new_share_location):
5804-            os.makedirs(new_share_location)
5805-        new_share_location = os.path.join(new_share_location,
5806-                                          str(share_number))
5807-        if old_share_location != new_share_location:
5808-            shutil.copy(old_share_location, new_share_location)
5809-        shares = self.find_uri_shares(self.uri)
5810-        # Make sure that the storage server has the share.
5811-        self.failUnless((share_number, ss.my_nodeid, new_share_location)
5812-                        in shares)
5813+        self.copy_share(self.shares[share_number], self.uri, ss)
5814 
5815     def _setup_grid(self):
5816         """
5817hunk ./src/allmydata/test/test_upload.py 1103
5818                 self._copy_share_to_server(i, 2)
5819         d.addCallback(_copy_shares)
5820         # Remove the first server, and add a placeholder with share 0
5821-        d.addCallback(lambda ign:
5822-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5823+        d.addCallback(lambda ign: self.remove_server(0))
5824         d.addCallback(lambda ign:
5825             self._add_server_with_share(server_number=4, share_number=0))
5826         # Now try uploading.
5827hunk ./src/allmydata/test/test_upload.py 1134
5828         d.addCallback(lambda ign:
5829             self._add_server(server_number=4))
5830         d.addCallback(_copy_shares)
5831-        d.addCallback(lambda ign:
5832-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5833+        d.addCallback(lambda ign: self.remove_server(0))
5834         d.addCallback(_reset_encoding_parameters)
5835         d.addCallback(lambda client:
5836             client.upload(upload.Data("data" * 10000, convergence="")))
5837hunk ./src/allmydata/test/test_upload.py 1196
5838                 self._copy_share_to_server(i, 2)
5839         d.addCallback(_copy_shares)
5840         # Remove server 0, and add another in its place
5841-        d.addCallback(lambda ign:
5842-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5843+        d.addCallback(lambda ign: self.remove_server(0))
5844         d.addCallback(lambda ign:
5845             self._add_server_with_share(server_number=4, share_number=0,
5846                                         readonly=True))
5847hunk ./src/allmydata/test/test_upload.py 1237
5848             for i in xrange(1, 10):
5849                 self._copy_share_to_server(i, 2)
5850         d.addCallback(_copy_shares)
5851-        d.addCallback(lambda ign:
5852-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5853+        d.addCallback(lambda ign: self.remove_server(0))
5854         def _reset_encoding_parameters(ign, happy=4):
5855             client = self.g.clients[0]
5856             client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
5857hunk ./src/allmydata/test/test_upload.py 1273
5858         # remove the original server
5859         # (necessary to ensure that the Tahoe2ServerSelector will distribute
5860         #  all the shares)
5861-        def _remove_server(ign):
5862-            server = self.g.servers_by_number[0]
5863-            self.g.remove_server(server.my_nodeid)
5864-        d.addCallback(_remove_server)
5865+        d.addCallback(lambda ign: self.remove_server(0))
5866         # This should succeed; we still have 4 servers, and the
5867         # happiness of the upload is 4.
5868         d.addCallback(lambda ign:
5869hunk ./src/allmydata/test/test_upload.py 1285
5870         d.addCallback(lambda ign:
5871             self._setup_and_upload())
5872         d.addCallback(_do_server_setup)
5873-        d.addCallback(_remove_server)
5874+        d.addCallback(lambda ign: self.remove_server(0))
5875         d.addCallback(lambda ign:
5876             self.shouldFail(UploadUnhappinessError,
5877                             "test_dropped_servers_in_encoder",
5878hunk ./src/allmydata/test/test_upload.py 1307
5879             self._add_server_with_share(4, 7, readonly=True)
5880             self._add_server_with_share(5, 8, readonly=True)
5881         d.addCallback(_do_server_setup_2)
5882-        d.addCallback(_remove_server)
5883+        d.addCallback(lambda ign: self.remove_server(0))
5884         d.addCallback(lambda ign:
5885             self._do_upload_with_broken_servers(1))
5886         d.addCallback(_set_basedir)
5887hunk ./src/allmydata/test/test_upload.py 1314
5888         d.addCallback(lambda ign:
5889             self._setup_and_upload())
5890         d.addCallback(_do_server_setup_2)
5891-        d.addCallback(_remove_server)
5892+        d.addCallback(lambda ign: self.remove_server(0))
5893         d.addCallback(lambda ign:
5894             self.shouldFail(UploadUnhappinessError,
5895                             "test_dropped_servers_in_encoder",
5896hunk ./src/allmydata/test/test_upload.py 1528
5897             for i in xrange(1, 10):
5898                 self._copy_share_to_server(i, 1)
5899         d.addCallback(_copy_shares)
5900-        d.addCallback(lambda ign:
5901-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5902+        d.addCallback(lambda ign: self.remove_server(0))
5903         def _prepare_client(ign):
5904             client = self.g.clients[0]
5905             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
5906hunk ./src/allmydata/test/test_upload.py 1550
5907         def _setup(ign):
5908             for i in xrange(1, 11):
5909                 self._add_server(server_number=i)
5910-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5911+            self.remove_server(0)
5912             c = self.g.clients[0]
5913             # We set happy to an unsatisfiable value so that we can check the
5914             # counting in the exception message. The same progress message
5915hunk ./src/allmydata/test/test_upload.py 1577
5916                 self._add_server(server_number=i)
5917             self._add_server(server_number=11, readonly=True)
5918             self._add_server(server_number=12, readonly=True)
5919-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5920+            self.remove_server(0)
5921             c = self.g.clients[0]
5922             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
5923             return c
5924hunk ./src/allmydata/test/test_upload.py 1605
5925             # the first one that the selector sees.
5926             for i in xrange(10):
5927                 self._copy_share_to_server(i, 9)
5928-            # Remove server 0, and its contents
5929-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5930+            self.remove_server(0)
5931             # Make happiness unsatisfiable
5932             c = self.g.clients[0]
5933             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
5934hunk ./src/allmydata/test/test_upload.py 1625
5935         def _then(ign):
5936             for i in xrange(1, 11):
5937                 self._add_server(server_number=i, readonly=True)
5938-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5939+            self.remove_server(0)
5940             c = self.g.clients[0]
5941             c.DEFAULT_ENCODING_PARAMETERS['k'] = 2
5942             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
5943hunk ./src/allmydata/test/test_upload.py 1661
5944             self._add_server(server_number=4, readonly=True))
5945         d.addCallback(lambda ign:
5946             self._add_server(server_number=5, readonly=True))
5947-        d.addCallback(lambda ign:
5948-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5949+        d.addCallback(lambda ign: self.remove_server(0))
5950         def _reset_encoding_parameters(ign, happy=4):
5951             client = self.g.clients[0]
5952             client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
5953hunk ./src/allmydata/test/test_upload.py 1696
5954         d.addCallback(lambda ign:
5955             self._add_server(server_number=2))
5956         def _break_server_2(ign):
5957-            serverid = self.g.servers_by_number[2].my_nodeid
5958+            serverid = self.get_server(2).get_serverid()
5959             self.g.break_server(serverid)
5960         d.addCallback(_break_server_2)
5961         d.addCallback(lambda ign:
5962hunk ./src/allmydata/test/test_upload.py 1705
5963             self._add_server(server_number=4, readonly=True))
5964         d.addCallback(lambda ign:
5965             self._add_server(server_number=5, readonly=True))
5966-        d.addCallback(lambda ign:
5967-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5968+        d.addCallback(lambda ign: self.remove_server(0))
5969         d.addCallback(_reset_encoding_parameters)
5970         d.addCallback(lambda client:
5971             self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
5972hunk ./src/allmydata/test/test_upload.py 1816
5973             # Copy shares
5974             self._copy_share_to_server(1, 1)
5975             self._copy_share_to_server(2, 1)
5976-            # Remove server 0
5977-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5978+            self.remove_server(0)
5979             client = self.g.clients[0]
5980             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 3
5981             return client
5982hunk ./src/allmydata/test/test_upload.py 1930
5983                                         readonly=True)
5984             self._add_server_with_share(server_number=4, share_number=3,
5985                                         readonly=True)
5986-            # Remove server 0.
5987-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5988+            self.remove_server(0)
5989             # Set the client appropriately
5990             c = self.g.clients[0]
5991             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
5992hunk ./src/allmydata/test/test_util.py 9
5993 from twisted.trial import unittest
5994 from twisted.internet import defer, reactor
5995 from twisted.python.failure import Failure
5996+from twisted.python.filepath import FilePath
5997 from twisted.python import log
5998 from pycryptopp.hash.sha256 import SHA256 as _hash
5999 
6000hunk ./src/allmydata/test/test_util.py 508
6001                 os.chdir(saved_cwd)
6002 
6003     def test_disk_stats(self):
6004-        avail = fileutil.get_available_space('.', 2**14)
6005+        avail = fileutil.get_available_space(FilePath('.'), 2**14)
6006         if avail == 0:
6007             raise unittest.SkipTest("This test will spuriously fail if there is no disk space left.")
6008 
6009hunk ./src/allmydata/test/test_util.py 512
6010-        disk = fileutil.get_disk_stats('.', 2**13)
6011+        disk = fileutil.get_disk_stats(FilePath('.'), 2**13)
6012         self.failUnless(disk['total'] > 0, disk['total'])
6013         self.failUnless(disk['used'] > 0, disk['used'])
6014         self.failUnless(disk['free_for_root'] > 0, disk['free_for_root'])
6015hunk ./src/allmydata/test/test_util.py 521
6016 
6017     def test_disk_stats_avail_nonnegative(self):
6018         # This test will spuriously fail if you have more than 2^128
6019-        # bytes of available space on your filesystem.
6020-        disk = fileutil.get_disk_stats('.', 2**128)
6021+        # bytes of available space on your filesystem (lucky you).
6022+        disk = fileutil.get_disk_stats(FilePath('.'), 2**128)
6023         self.failUnlessEqual(disk['avail'], 0)
6024 
6025 class PollMixinTests(unittest.TestCase):
6026hunk ./src/allmydata/test/test_web.py 12
6027 from twisted.python import failure, log
6028 from nevow import rend
6029 from allmydata import interfaces, uri, webish, dirnode
6030-from allmydata.storage.shares import get_share_file
6031 from allmydata.storage_client import StorageFarmBroker
6032 from allmydata.immutable import upload
6033 from allmydata.immutable.downloader.status import DownloadStatus
6034hunk ./src/allmydata/test/test_web.py 4111
6035             good_shares = self.find_uri_shares(self.uris["good"])
6036             self.failUnlessReallyEqual(len(good_shares), 10)
6037             sick_shares = self.find_uri_shares(self.uris["sick"])
6038-            os.unlink(sick_shares[0][2])
6039+            sick_shares[0][2].remove()
6040             dead_shares = self.find_uri_shares(self.uris["dead"])
6041             for i in range(1, 10):
6042hunk ./src/allmydata/test/test_web.py 4114
6043-                os.unlink(dead_shares[i][2])
6044+                dead_shares[i][2].remove()
6045             c_shares = self.find_uri_shares(self.uris["corrupt"])
6046             cso = CorruptShareOptions()
6047             cso.stdout = StringIO()
6048hunk ./src/allmydata/test/test_web.py 4118
6049-            cso.parseOptions([c_shares[0][2]])
6050+            cso.parseOptions([c_shares[0][2].path])
6051             corrupt_share(cso)
6052         d.addCallback(_clobber_shares)
6053 
6054hunk ./src/allmydata/test/test_web.py 4253
6055             good_shares = self.find_uri_shares(self.uris["good"])
6056             self.failUnlessReallyEqual(len(good_shares), 10)
6057             sick_shares = self.find_uri_shares(self.uris["sick"])
6058-            os.unlink(sick_shares[0][2])
6059+            sick_shares[0][2].remove()
6060             dead_shares = self.find_uri_shares(self.uris["dead"])
6061             for i in range(1, 10):
6062hunk ./src/allmydata/test/test_web.py 4256
6063-                os.unlink(dead_shares[i][2])
6064+                dead_shares[i][2].remove()
6065             c_shares = self.find_uri_shares(self.uris["corrupt"])
6066             cso = CorruptShareOptions()
6067             cso.stdout = StringIO()
6068hunk ./src/allmydata/test/test_web.py 4260
6069-            cso.parseOptions([c_shares[0][2]])
6070+            cso.parseOptions([c_shares[0][2].path])
6071             corrupt_share(cso)
6072         d.addCallback(_clobber_shares)
6073 
6074hunk ./src/allmydata/test/test_web.py 4319
6075 
6076         def _clobber_shares(ignored):
6077             sick_shares = self.find_uri_shares(self.uris["sick"])
6078-            os.unlink(sick_shares[0][2])
6079+            sick_shares[0][2].remove()
6080         d.addCallback(_clobber_shares)
6081 
6082         d.addCallback(self.CHECK, "sick", "t=check&repair=true&output=json")
6083hunk ./src/allmydata/test/test_web.py 4811
6084             good_shares = self.find_uri_shares(self.uris["good"])
6085             self.failUnlessReallyEqual(len(good_shares), 10)
6086             sick_shares = self.find_uri_shares(self.uris["sick"])
6087-            os.unlink(sick_shares[0][2])
6088+            sick_shares[0][2].remove()
6089             #dead_shares = self.find_uri_shares(self.uris["dead"])
6090             #for i in range(1, 10):
6091hunk ./src/allmydata/test/test_web.py 4814
6092-            #    os.unlink(dead_shares[i][2])
6093+            #    dead_shares[i][2].remove()
6094 
6095             #c_shares = self.find_uri_shares(self.uris["corrupt"])
6096             #cso = CorruptShareOptions()
6097hunk ./src/allmydata/test/test_web.py 4819
6098             #cso.stdout = StringIO()
6099-            #cso.parseOptions([c_shares[0][2]])
6100+            #cso.parseOptions([c_shares[0][2].path])
6101             #corrupt_share(cso)
6102         d.addCallback(_clobber_shares)
6103 
6104hunk ./src/allmydata/test/test_web.py 4870
6105         d.addErrback(self.explain_web_error)
6106         return d
6107 
6108-    def _count_leases(self, ignored, which):
6109-        u = self.uris[which]
6110-        shares = self.find_uri_shares(u)
6111-        lease_counts = []
6112-        for shnum, serverid, fn in shares:
6113-            sf = get_share_file(fn)
6114-            num_leases = len(list(sf.get_leases()))
6115-            lease_counts.append( (fn, num_leases) )
6116-        return lease_counts
6117-
6118-    def _assert_leasecount(self, lease_counts, expected):
6119+    def _assert_leasecount(self, ignored, which, expected):
6120+        lease_counts = self.count_leases(self.uris[which])
6121         for (fn, num_leases) in lease_counts:
6122             if num_leases != expected:
6123                 self.fail("expected %d leases, have %d, on %s" %
6124hunk ./src/allmydata/test/test_web.py 4903
6125                 self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
6126         d.addCallback(_compute_fileurls)
6127 
6128-        d.addCallback(self._count_leases, "one")
6129-        d.addCallback(self._assert_leasecount, 1)
6130-        d.addCallback(self._count_leases, "two")
6131-        d.addCallback(self._assert_leasecount, 1)
6132-        d.addCallback(self._count_leases, "mutable")
6133-        d.addCallback(self._assert_leasecount, 1)
6134+        d.addCallback(self._assert_leasecount, "one", 1)
6135+        d.addCallback(self._assert_leasecount, "two", 1)
6136+        d.addCallback(self._assert_leasecount, "mutable", 1)
6137 
6138         d.addCallback(self.CHECK, "one", "t=check") # no add-lease
6139         def _got_html_good(res):
6140hunk ./src/allmydata/test/test_web.py 4913
6141             self.failIf("Not Healthy" in res, res)
6142         d.addCallback(_got_html_good)
6143 
6144-        d.addCallback(self._count_leases, "one")
6145-        d.addCallback(self._assert_leasecount, 1)
6146-        d.addCallback(self._count_leases, "two")
6147-        d.addCallback(self._assert_leasecount, 1)
6148-        d.addCallback(self._count_leases, "mutable")
6149-        d.addCallback(self._assert_leasecount, 1)
6150+        d.addCallback(self._assert_leasecount, "one", 1)
6151+        d.addCallback(self._assert_leasecount, "two", 1)
6152+        d.addCallback(self._assert_leasecount, "mutable", 1)
6153 
6154         # this CHECK uses the original client, which uses the same
6155         # lease-secrets, so it will just renew the original lease
6156hunk ./src/allmydata/test/test_web.py 4922
6157         d.addCallback(self.CHECK, "one", "t=check&add-lease=true")
6158         d.addCallback(_got_html_good)
6159 
6160-        d.addCallback(self._count_leases, "one")
6161-        d.addCallback(self._assert_leasecount, 1)
6162-        d.addCallback(self._count_leases, "two")
6163-        d.addCallback(self._assert_leasecount, 1)
6164-        d.addCallback(self._count_leases, "mutable")
6165-        d.addCallback(self._assert_leasecount, 1)
6166+        d.addCallback(self._assert_leasecount, "one", 1)
6167+        d.addCallback(self._assert_leasecount, "two", 1)
6168+        d.addCallback(self._assert_leasecount, "mutable", 1)
6169 
6170         # this CHECK uses an alternate client, which adds a second lease
6171         d.addCallback(self.CHECK, "one", "t=check&add-lease=true", clientnum=1)
6172hunk ./src/allmydata/test/test_web.py 4930
6173         d.addCallback(_got_html_good)
6174 
6175-        d.addCallback(self._count_leases, "one")
6176-        d.addCallback(self._assert_leasecount, 2)
6177-        d.addCallback(self._count_leases, "two")
6178-        d.addCallback(self._assert_leasecount, 1)
6179-        d.addCallback(self._count_leases, "mutable")
6180-        d.addCallback(self._assert_leasecount, 1)
6181+        d.addCallback(self._assert_leasecount, "one", 2)
6182+        d.addCallback(self._assert_leasecount, "two", 1)
6183+        d.addCallback(self._assert_leasecount, "mutable", 1)
6184 
6185         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true")
6186         d.addCallback(_got_html_good)
6187hunk ./src/allmydata/test/test_web.py 4937
6188 
6189-        d.addCallback(self._count_leases, "one")
6190-        d.addCallback(self._assert_leasecount, 2)
6191-        d.addCallback(self._count_leases, "two")
6192-        d.addCallback(self._assert_leasecount, 1)
6193-        d.addCallback(self._count_leases, "mutable")
6194-        d.addCallback(self._assert_leasecount, 1)
6195+        d.addCallback(self._assert_leasecount, "one", 2)
6196+        d.addCallback(self._assert_leasecount, "two", 1)
6197+        d.addCallback(self._assert_leasecount, "mutable", 1)
6198 
6199         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true",
6200                       clientnum=1)
6201hunk ./src/allmydata/test/test_web.py 4945
6202         d.addCallback(_got_html_good)
6203 
6204-        d.addCallback(self._count_leases, "one")
6205-        d.addCallback(self._assert_leasecount, 2)
6206-        d.addCallback(self._count_leases, "two")
6207-        d.addCallback(self._assert_leasecount, 1)
6208-        d.addCallback(self._count_leases, "mutable")
6209-        d.addCallback(self._assert_leasecount, 2)
6210+        d.addCallback(self._assert_leasecount, "one", 2)
6211+        d.addCallback(self._assert_leasecount, "two", 1)
6212+        d.addCallback(self._assert_leasecount, "mutable", 2)
6213 
6214         d.addErrback(self.explain_web_error)
6215         return d
6216hunk ./src/allmydata/test/test_web.py 4989
6217             self.failUnlessReallyEqual(len(units), 4+1)
6218         d.addCallback(_done)
6219 
6220-        d.addCallback(self._count_leases, "root")
6221-        d.addCallback(self._assert_leasecount, 1)
6222-        d.addCallback(self._count_leases, "one")
6223-        d.addCallback(self._assert_leasecount, 1)
6224-        d.addCallback(self._count_leases, "mutable")
6225-        d.addCallback(self._assert_leasecount, 1)
6226+        d.addCallback(self._assert_leasecount, "root", 1)
6227+        d.addCallback(self._assert_leasecount, "one", 1)
6228+        d.addCallback(self._assert_leasecount, "mutable", 1)
6229 
6230         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true")
6231         d.addCallback(_done)
6232hunk ./src/allmydata/test/test_web.py 4996
6233 
6234-        d.addCallback(self._count_leases, "root")
6235-        d.addCallback(self._assert_leasecount, 1)
6236-        d.addCallback(self._count_leases, "one")
6237-        d.addCallback(self._assert_leasecount, 1)
6238-        d.addCallback(self._count_leases, "mutable")
6239-        d.addCallback(self._assert_leasecount, 1)
6240+        d.addCallback(self._assert_leasecount, "root", 1)
6241+        d.addCallback(self._assert_leasecount, "one", 1)
6242+        d.addCallback(self._assert_leasecount, "mutable", 1)
6243 
6244         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true",
6245                       clientnum=1)
6246hunk ./src/allmydata/test/test_web.py 5004
6247         d.addCallback(_done)
6248 
6249-        d.addCallback(self._count_leases, "root")
6250-        d.addCallback(self._assert_leasecount, 2)
6251-        d.addCallback(self._count_leases, "one")
6252-        d.addCallback(self._assert_leasecount, 2)
6253-        d.addCallback(self._count_leases, "mutable")
6254-        d.addCallback(self._assert_leasecount, 2)
6255+        d.addCallback(self._assert_leasecount, "root", 2)
6256+        d.addCallback(self._assert_leasecount, "one", 2)
6257+        d.addCallback(self._assert_leasecount, "mutable", 2)
6258 
6259         d.addErrback(self.explain_web_error)
6260         return d
6261merger 0.0 (
6262hunk ./src/allmydata/uri.py 829
6263+    def is_readonly(self):
6264+        return True
6265+
6266+    def get_readonly(self):
6267+        return self
6268+
6269+
6270hunk ./src/allmydata/uri.py 829
6271+    def is_readonly(self):
6272+        return True
6273+
6274+    def get_readonly(self):
6275+        return self
6276+
6277+
6278)
6279merger 0.0 (
6280hunk ./src/allmydata/uri.py 848
6281+    def is_readonly(self):
6282+        return True
6283+
6284+    def get_readonly(self):
6285+        return self
6286+
6287hunk ./src/allmydata/uri.py 848
6288+    def is_readonly(self):
6289+        return True
6290+
6291+    def get_readonly(self):
6292+        return self
6293+
6294)
6295hunk ./src/allmydata/util/encodingutil.py 221
6296 def quote_path(path, quotemarks=True):
6297     return quote_output("/".join(map(to_str, path)), quotemarks=quotemarks)
6298 
6299+def quote_filepath(fp, quotemarks=True, encoding=None):
6300+    path = fp.path
6301+    if isinstance(path, str):
6302+        try:
6303+            path = path.decode(filesystem_encoding)
6304+        except UnicodeDecodeError:
6305+            return 'b"%s"' % (ESCAPABLE_8BIT.sub(_str_escape, path),)
6306+
6307+    return quote_output(path, quotemarks=quotemarks, encoding=encoding)
6308+
6309 
6310 def unicode_platform():
6311     """
6312hunk ./src/allmydata/util/fileutil.py 5
6313 Futz with files like a pro.
6314 """
6315 
6316-import sys, exceptions, os, stat, tempfile, time, binascii
6317+import errno, sys, exceptions, os, stat, tempfile, time, binascii
6318+
6319+from allmydata.util.assertutil import precondition
6320 
6321 from twisted.python import log
6322hunk ./src/allmydata/util/fileutil.py 10
6323+from twisted.python.filepath import FilePath, UnlistableError
6324 
6325 from pycryptopp.cipher.aes import AES
6326 
6327hunk ./src/allmydata/util/fileutil.py 189
6328             raise tx
6329         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
6330 
6331-def rm_dir(dirname):
6332+def fp_make_dirs(dirfp):
6333+    """
6334+    An idempotent version of FilePath.makedirs().  If the dir already
6335+    exists, do nothing and return without raising an exception.  If this
6336+    call creates the dir, return without raising an exception.  If there is
6337+    an error that prevents creation or if the directory gets deleted after
6338+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
6339+    exists, raise an exception.
6340+    """
6341+    log.msg( "xxx 0 %s" % (dirfp,))
6342+    tx = None
6343+    try:
6344+        dirfp.makedirs()
6345+    except OSError, x:
6346+        tx = x
6347+
6348+    if not dirfp.isdir():
6349+        if tx:
6350+            raise tx
6351+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
6352+
6353+def fp_rmdir_if_empty(dirfp):
6354+    """ Remove the directory if it is empty. """
6355+    try:
6356+        os.rmdir(dirfp.path)
6357+    except OSError, e:
6358+        if e.errno != errno.ENOTEMPTY:
6359+            raise
6360+    else:
6361+        dirfp.changed()
6362+
6363+def rmtree(dirname):
6364     """
6365     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
6366     already gone, do nothing and return without raising an exception.  If this
6367hunk ./src/allmydata/util/fileutil.py 239
6368             else:
6369                 remove(fullname)
6370         os.rmdir(dirname)
6371-    except Exception, le:
6372-        # Ignore "No such file or directory"
6373-        if (not isinstance(le, OSError)) or le.args[0] != 2:
6374+    except EnvironmentError, le:
6375+        # Ignore "No such file or directory", collect any other exception.
6376+        if le.args[0] != errno.ENOENT:
6377             excs.append(le)
6378hunk ./src/allmydata/util/fileutil.py 243
6379+    except Exception, le:
6380+        excs.append(le)
6381 
6382     # Okay, now we've recursively removed everything, ignoring any "No
6383     # such file or directory" errors, and collecting any other errors.
6384hunk ./src/allmydata/util/fileutil.py 256
6385             raise OSError, "Failed to remove dir for unknown reason."
6386         raise OSError, excs
6387 
6388+def fp_remove(fp):
6389+    """
6390+    An idempotent version of FilePath.remove().  If the file/dir is already
6391+    gone, do nothing and return without raising an exception.  If this call
6392+    removes the file/dir, return without raising an exception.  If there is
6393+    an error that prevents removal, or if a file or directory at the same
6394+    path gets created again by someone else after this deletes it and before
6395+    this checks that it is gone, raise an exception.
6396+    """
6397+    try:
6398+        fp.remove()
6399+    except UnlistableError, e:
6400+        if e.originalException.errno != errno.ENOENT:
6401+            raise
6402+    except OSError, e:
6403+        if e.errno != errno.ENOENT:
6404+            raise
6405+
6406+def rm_dir(dirname):
6407+    # Renamed to be like shutil.rmtree and unlike rmdir.
6408+    return rmtree(dirname)
6409 
6410 def remove_if_possible(f):
6411     try:
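
Taken together, fp_make_dirs, fp_rmdir_if_empty and fp_remove are the FilePath
counterparts of the old string-path helpers above. A sketch of the calling
pattern they are designed for (the paths are made up for illustration):

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    sharedir = FilePath("/tmp/storage/shares/incoming/ab/abcde")  # hypothetical

    fileutil.fp_make_dirs(sharedir)         # no error if it already exists
    sharedir.child("0").setContent("data")  # write an incoming share
    sharedir.child("0").remove()
    fileutil.fp_rmdir_if_empty(sharedir)    # removed only because it is now empty
    fileutil.fp_remove(sharedir)            # no error even though it is already gone
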
6412hunk ./src/allmydata/util/fileutil.py 387
6413         import traceback
6414         traceback.print_exc()
6415 
6416-def get_disk_stats(whichdir, reserved_space=0):
6417+def get_disk_stats(whichdirfp, reserved_space=0):
6418     """Return disk statistics for the storage disk, in the form of a dict
6419     with the following fields.
6420       total:            total bytes on disk
6421hunk ./src/allmydata/util/fileutil.py 408
6422     you can pass how many bytes you would like to leave unused on this
6423     filesystem as reserved_space.
6424     """
6425+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
6426 
6427     if have_GetDiskFreeSpaceExW:
6428         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
6429hunk ./src/allmydata/util/fileutil.py 419
6430         n_free_for_nonroot = c_ulonglong(0)
6431         n_total            = c_ulonglong(0)
6432         n_free_for_root    = c_ulonglong(0)
6433-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
6434-                                               byref(n_total),
6435-                                               byref(n_free_for_root))
6436+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
6437+                                                      byref(n_total),
6438+                                                      byref(n_free_for_root))
6439         if retval == 0:
6440             raise OSError("Windows error %d attempting to get disk statistics for %r"
6441hunk ./src/allmydata/util/fileutil.py 424
6442-                          % (GetLastError(), whichdir))
6443+                          % (GetLastError(), whichdirfp.path))
6444         free_for_nonroot = n_free_for_nonroot.value
6445         total            = n_total.value
6446         free_for_root    = n_free_for_root.value
6447hunk ./src/allmydata/util/fileutil.py 433
6448         # <http://docs.python.org/library/os.html#os.statvfs>
6449         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
6450         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
6451-        s = os.statvfs(whichdir)
6452+        s = os.statvfs(whichdirfp.path)
6453 
6454         # on my mac laptop:
6455         #  statvfs(2) is a wrapper around statfs(2).
6456hunk ./src/allmydata/util/fileutil.py 460
6457              'avail': avail,
6458            }
6459 
6460-def get_available_space(whichdir, reserved_space):
6461+def get_available_space(whichdirfp, reserved_space):
6462     """Returns available space for share storage in bytes, or None if no
6463     API to get this information is available.
6464 
6465hunk ./src/allmydata/util/fileutil.py 472
6466     you can pass how many bytes you would like to leave unused on this
6467     filesystem as reserved_space.
6468     """
6469+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
6470     try:
6471hunk ./src/allmydata/util/fileutil.py 474
6472-        return get_disk_stats(whichdir, reserved_space)['avail']
6473+        return get_disk_stats(whichdirfp, reserved_space)['avail']
6474     except AttributeError:
6475         return None
6476hunk ./src/allmydata/util/fileutil.py 477
6477-    except EnvironmentError:
6478-        log.msg("OS call to get disk statistics failed")
6479+
6480+
6481+def get_used_space(fp):
6482+    if fp is None:
6483         return 0
6484hunk ./src/allmydata/util/fileutil.py 482
6485+    try:
6486+        s = os.stat(fp.path)
6487+    except EnvironmentError:
6488+        if not fp.exists():
6489+            return 0
6490+        raise
6491+    else:
6492+        # POSIX defines st_blocks (originally a BSDism):
6493+        #   <http://pubs.opengroup.org/onlinepubs/009695399/basedefs/sys/stat.h.html>
6494+        # but does not require stat() to give it a "meaningful value"
6495+        #   <http://pubs.opengroup.org/onlinepubs/009695399/functions/stat.html>
6496+        # and says:
6497+        #   "The unit for the st_blocks member of the stat structure is not defined
6498+        #    within IEEE Std 1003.1-2001. In some implementations it is 512 bytes.
6499+        #    It may differ on a file system basis. There is no correlation between
6500+        #    values of the st_blocks and st_blksize, and the f_bsize (from <sys/statvfs.h>)
6501+        #    structure members."
6502+        #
6503+        # The Linux docs define it as "the number of blocks allocated to the file,
6504+        # [in] 512-byte units." It is also defined that way on MacOS X. Python does
6505+        # not set the attribute on Windows.
6506+        #
6507+        # We consider platforms that define st_blocks but give it a wrong value, or
6508+        # measure it in a unit other than 512 bytes, to be broken. See also
6509+        # <http://bugs.python.org/issue12350>.
6510+
6511+        if hasattr(s, 'st_blocks'):
6512+            return s.st_blocks * 512
6513+        else:
6514+            return s.st_size
6515}
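
With this patch the space-accounting helpers in fileutil take FilePath objects
rather than pathname strings. A small sketch of the revised API, using a
hypothetical storage directory:

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    storedir = FilePath("/tmp/storage")  # hypothetical

    stats = fileutil.get_disk_stats(storedir, reserved_space=2**20)
    print "total=%d used=%d avail=%d" % (stats['total'], stats['used'], stats['avail'])

    # 'avail' already deducts reserved_space; None means no usable platform API.
    print fileutil.get_available_space(storedir, 2**20)

    # get_used_space prefers st_blocks*512 where the platform defines st_blocks,
    # falls back to st_size, and returns 0 for a missing file.
    print fileutil.get_used_space(storedir.child("shares"))
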
6516[Work-in-progress, includes fix to bug involving BucketWriter. refs #999
6517david-sarah@jacaranda.org**20110920033803
6518 Ignore-this: 64e9e019421454e4d08141d10b6e4eed
6519] {
6520hunk ./src/allmydata/client.py 9
6521 from twisted.internet import reactor, defer
6522 from twisted.application import service
6523 from twisted.application.internet import TimerService
6524+from twisted.python.filepath import FilePath
6525 from foolscap.api import Referenceable
6526 from pycryptopp.publickey import rsa
6527 
6528hunk ./src/allmydata/client.py 15
6529 import allmydata
6530 from allmydata.storage.server import StorageServer
6531+from allmydata.storage.backends.disk.disk_backend import DiskBackend
6532 from allmydata import storage_client
6533 from allmydata.immutable.upload import Uploader
6534 from allmydata.immutable.offloaded import Helper
6535hunk ./src/allmydata/client.py 213
6536             return
6537         readonly = self.get_config("storage", "readonly", False, boolean=True)
6538 
6539-        storedir = os.path.join(self.basedir, self.STOREDIR)
6540+        storedir = FilePath(self.basedir).child(self.STOREDIR)
6541 
6542         data = self.get_config("storage", "reserved_space", None)
6543         reserved = None
6544hunk ./src/allmydata/client.py 255
6545             'cutoff_date': cutoff_date,
6546             'sharetypes': tuple(sharetypes),
6547         }
6548-        ss = StorageServer(storedir, self.nodeid,
6549-                           reserved_space=reserved,
6550-                           discard_storage=discard,
6551-                           readonly_storage=readonly,
6552+
6553+        backend = DiskBackend(storedir, readonly=readonly, reserved_space=reserved,
6554+                              discard_storage=discard)
6555+        ss = StorageServer(nodeid, backend, storedir,
6556                            stats_provider=self.stats_provider,
6557                            expiration_policy=expiration_policy)
6558         self.add_service(ss)
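
The storage service is now composed from a backend object plus the StorageServer
front end. A standalone construction sketch mirroring the hunk above (the nodeid
and directory are placeholders; note that the bare `nodeid` argument in this hunk
is corrected to `self.nodeid` by a later patch in this bundle):

    from twisted.python.filepath import FilePath
    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    storedir = FilePath("/tmp/teststore")  # hypothetical
    backend = DiskBackend(storedir, readonly=False, reserved_space=0,
                          discard_storage=False)
    ss = StorageServer("\x00"*20, backend, storedir)  # placeholder 20-byte nodeid
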
6559hunk ./src/allmydata/interfaces.py 348
6560 
6561     def get_shares():
6562         """
6563-        Generates the IStoredShare objects held in this shareset.
6564+        Generates IStoredShare objects for all completed shares in this shareset.
6565         """
6566 
6567     def has_incoming(shnum):
6568hunk ./src/allmydata/storage/backends/base.py 69
6569         # def _create_mutable_share(self, storageserver, shnum, write_enabler):
6570         #     """create a mutable share with the given shnum and write_enabler"""
6571 
6572-        # secrets might be a triple with cancel_secret in secrets[2], but if
6573-        # so we ignore the cancel_secret.
6574         write_enabler = secrets[0]
6575         renew_secret = secrets[1]
6576hunk ./src/allmydata/storage/backends/base.py 71
6577+        cancel_secret = '\x00'*32
6578+        if len(secrets) > 2:
6579+            cancel_secret = secrets[2]
6580 
6581         si_s = self.get_storage_index_string()
6582         shares = {}
6583hunk ./src/allmydata/storage/backends/base.py 110
6584             read_data[shnum] = share.readv(read_vector)
6585 
6586         ownerid = 1 # TODO
6587-        lease_info = LeaseInfo(ownerid, renew_secret,
6588+        lease_info = LeaseInfo(ownerid, renew_secret, cancel_secret,
6589                                expiration_time, storageserver.get_serverid())
6590 
6591         if testv_is_good:
6592hunk ./src/allmydata/storage/backends/disk/disk_backend.py 34
6593     return newfp.child(sia)
6594 
6595 
6596-def get_share(fp):
6597+def get_share(storageindex, shnum, fp):
6598     f = fp.open('rb')
6599     try:
6600         prefix = f.read(32)
6601hunk ./src/allmydata/storage/backends/disk/disk_backend.py 42
6602         f.close()
6603 
6604     if prefix == MutableDiskShare.MAGIC:
6605-        return MutableDiskShare(fp)
6606+        return MutableDiskShare(storageindex, shnum, fp)
6607     else:
6608         # assume it's immutable
6609hunk ./src/allmydata/storage/backends/disk/disk_backend.py 45
6610-        return ImmutableDiskShare(fp)
6611+        return ImmutableDiskShare(storageindex, shnum, fp)
6612 
6613 
6614 class DiskBackend(Backend):
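
get_share now takes the storage index and share number explicitly, since shares
no longer derive them from their own paths; only the 32-byte magic prefix
distinguishes mutable from immutable containers. A usage sketch with placeholder
values:

    from twisted.python.filepath import FilePath
    from allmydata.storage.backends.disk.disk_backend import get_share

    si = "\x00"*16                                        # placeholder storage index
    sharefp = FilePath("/tmp/storage/shares/ab/abcde/3")  # hypothetical share file
    share = get_share(si, 3, sharefp)
    print share.get_shnum(), share.get_storage_index()
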
6615hunk ./src/allmydata/storage/backends/disk/disk_backend.py 174
6616                 if not NUM_RE.match(shnumstr):
6617                     continue
6618                 sharehome = self._sharehomedir.child(shnumstr)
6619-                yield self.get_share(sharehome)
6620+                yield get_share(self.get_storage_index(), int(shnumstr), sharehome)
6621         except UnlistableError:
6622             # There is no shares directory at all.
6623             pass
6624hunk ./src/allmydata/storage/backends/disk/disk_backend.py 185
6625         return self._incominghomedir.child(str(shnum)).exists()
6626 
6627     def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
6628-        sharehome = self._sharehomedir.child(str(shnum))
6629+        finalhome = self._sharehomedir.child(str(shnum))
6630         incominghome = self._incominghomedir.child(str(shnum))
6631hunk ./src/allmydata/storage/backends/disk/disk_backend.py 187
6632-        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
6633-                                   max_size=max_space_per_bucket, create=True)
6634+        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, incominghome, finalhome,
6635+                                   max_size=max_space_per_bucket)
6636         bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
6637         if self._discard_storage:
6638             bw.throw_out_all_data = True
6639hunk ./src/allmydata/storage/backends/disk/disk_backend.py 198
6640         fileutil.fp_make_dirs(self._sharehomedir)
6641         sharehome = self._sharehomedir.child(str(shnum))
6642         serverid = storageserver.get_serverid()
6643-        return create_mutable_disk_share(sharehome, serverid, write_enabler, storageserver)
6644+        return create_mutable_disk_share(self.get_storage_index(), shnum, sharehome, serverid, write_enabler, storageserver)
6645 
6646     def _clean_up_after_unlink(self):
6647         fileutil.fp_rmdir_if_empty(self._sharehomedir)
6648hunk ./src/allmydata/storage/backends/disk/immutable.py 48
6649     LEASE_SIZE = struct.calcsize(">L32s32sL")
6650 
6651 
6652-    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
6653-        """ If max_size is not None then I won't allow more than
6654-        max_size to be written to me. If create=True then max_size
6655-        must not be None. """
6656-        precondition((max_size is not None) or (not create), max_size, create)
6657+    def __init__(self, storageindex, shnum, home, finalhome=None, max_size=None):
6658+        """
6659+        If max_size is not None then I won't allow more than max_size to be written to me.
6660+        If finalhome is not None (meaning that we are creating the share) then max_size
6661+        must not be None.
6662+        """
6663+        precondition((max_size is not None) or (finalhome is None), max_size, finalhome)
6664         self._storageindex = storageindex
6665         self._max_size = max_size
6666hunk ./src/allmydata/storage/backends/disk/immutable.py 57
6667-        self._incominghome = incominghome
6668-        self._home = finalhome
6669+
6670+        # If we are creating the share, _finalhome refers to the final path and
6671+        # _home to the incoming path. Otherwise, _finalhome is None.
6672+        self._finalhome = finalhome
6673+        self._home = home
6674         self._shnum = shnum
6675hunk ./src/allmydata/storage/backends/disk/immutable.py 63
6676-        if create:
6677-            # touch the file, so later callers will see that we're working on
6678+
6679+        if self._finalhome is not None:
6680+            # Touch the file, so later callers will see that we're working on
6681             # it. Also construct the metadata.
6682hunk ./src/allmydata/storage/backends/disk/immutable.py 67
6683-            assert not finalhome.exists()
6684-            fp_make_dirs(self._incominghome.parent())
6685+            assert not self._finalhome.exists()
6686+            fp_make_dirs(self._home.parent())
6687             # The second field -- the four-byte share data length -- is no
6688             # longer used as of Tahoe v1.3.0, but we continue to write it in
6689             # there in case someone downgrades a storage server from >=
6690hunk ./src/allmydata/storage/backends/disk/immutable.py 78
6691             # the largest length that can fit into the field. That way, even
6692             # if this does happen, the old < v1.3.0 server will still allow
6693             # clients to read the first part of the share.
6694-            self._incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6695+            self._home.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6696             self._lease_offset = max_size + 0x0c
6697             self._num_leases = 0
6698         else:
6699hunk ./src/allmydata/storage/backends/disk/immutable.py 101
6700                 % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
6701 
6702     def close(self):
6703-        fileutil.fp_make_dirs(self._home.parent())
6704-        self._incominghome.moveTo(self._home)
6705-        try:
6706-            # self._incominghome is like storage/shares/incoming/ab/abcde/4 .
6707-            # We try to delete the parent (.../ab/abcde) to avoid leaving
6708-            # these directories lying around forever, but the delete might
6709-            # fail if we're working on another share for the same storage
6710-            # index (like ab/abcde/5). The alternative approach would be to
6711-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
6712-            # ShareWriter), each of which is responsible for a single
6713-            # directory on disk, and have them use reference counting of
6714-            # their children to know when they should do the rmdir. This
6715-            # approach is simpler, but relies on os.rmdir refusing to delete
6716-            # a non-empty directory. Do *not* use fileutil.fp_remove() here!
6717-            fileutil.fp_rmdir_if_empty(self._incominghome.parent())
6718-            # we also delete the grandparent (prefix) directory, .../ab ,
6719-            # again to avoid leaving directories lying around. This might
6720-            # fail if there is another bucket open that shares a prefix (like
6721-            # ab/abfff).
6722-            fileutil.fp_rmdir_if_empty(self._incominghome.parent().parent())
6723-            # we leave the great-grandparent (incoming/) directory in place.
6724-        except EnvironmentError:
6725-            # ignore the "can't rmdir because the directory is not empty"
6726-            # exceptions, those are normal consequences of the
6727-            # above-mentioned conditions.
6728-            pass
6729-        pass
6730+        fileutil.fp_make_dirs(self._finalhome.parent())
6731+        self._home.moveTo(self._finalhome)
6732+
6733+        # self._home is like storage/shares/incoming/ab/abcde/4 .
6734+        # We try to delete the parent (.../ab/abcde) to avoid leaving
6735+        # these directories lying around forever, but the delete might
6736+        # fail if we're working on another share for the same storage
6737+        # index (like ab/abcde/5). The alternative approach would be to
6738+        # use a hierarchy of objects (PrefixHolder, BucketHolder,
6739+        # ShareWriter), each of which is responsible for a single
6740+        # directory on disk, and have them use reference counting of
6741+        # their children to know when they should do the rmdir. This
6742+        # approach is simpler, but relies on os.rmdir (used by
6743+        # fp_rmdir_if_empty) refusing to delete a non-empty directory.
6744+        # Do *not* use fileutil.fp_remove() here!
6745+        parent = self._home.parent()
6746+        fileutil.fp_rmdir_if_empty(parent)
6747+
6748+        # we also delete the grandparent (prefix) directory, .../ab ,
6749+        # again to avoid leaving directories lying around. This might
6750+        # fail if there is another bucket open that shares a prefix (like
6751+        # ab/abfff).
6752+        fileutil.fp_rmdir_if_empty(parent.parent())
6753+
6754+        # we leave the great-grandparent (incoming/) directory in place.
6755+
6756+        # allow lease changes after closing.
6757+        self._home = self._finalhome
6758+        self._finalhome = None
6759 
6760     def get_used_space(self):
6761hunk ./src/allmydata/storage/backends/disk/immutable.py 132
6762-        return (fileutil.get_used_space(self._home) +
6763-                fileutil.get_used_space(self._incominghome))
6764+        return (fileutil.get_used_space(self._finalhome) +
6765+                fileutil.get_used_space(self._home))
6766 
6767     def get_storage_index(self):
6768         return self._storageindex
6769hunk ./src/allmydata/storage/backends/disk/immutable.py 175
6770         precondition(offset >= 0, offset)
6771         if self._max_size is not None and offset+length > self._max_size:
6772             raise DataTooLargeError(self._max_size, offset, length)
6773-        f = self._incominghome.open(mode='rb+')
6774+        f = self._home.open(mode='rb+')
6775         try:
6776             real_offset = self._data_offset+offset
6777             f.seek(real_offset)
6778hunk ./src/allmydata/storage/backends/disk/immutable.py 205
6779 
6780     # These lease operations are intended for use by disk_backend.py.
6781     # Other clients should not depend on the fact that the disk backend
6782-    # stores leases in share files.
6783+    # stores leases in share files. XXX bucket.py also relies on this.
6784 
6785     def get_leases(self):
6786         """Yields a LeaseInfo instance for all leases."""
6787hunk ./src/allmydata/storage/backends/disk/immutable.py 221
6788             f.close()
6789 
6790     def add_lease(self, lease_info):
6791-        f = self._incominghome.open(mode='rb')
6792+        f = self._home.open(mode='rb+')
6793         try:
6794             num_leases = self._read_num_leases(f)
6795hunk ./src/allmydata/storage/backends/disk/immutable.py 224
6796-        finally:
6797-            f.close()
6798-        f = self._home.open(mode='wb+')
6799-        try:
6800             self._write_lease_record(f, num_leases, lease_info)
6801             self._write_num_leases(f, num_leases+1)
6802         finally:
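
The revised ImmutableDiskShare constructor encodes the share lifecycle in its
arguments: passing finalhome means the share is being created at the incoming
path, and close() later moves it into place and prunes empty incoming
directories. A sketch of that two-phase flow (the paths and storage index are
placeholders, and the data-writing method name, write_share_data, is assumed
from the pre-existing ShareFile API):

    from twisted.python.filepath import FilePath
    from allmydata.storage.backends.disk.immutable import ImmutableDiskShare

    si = "\x00"*16                                                  # placeholder
    incoming = FilePath("/tmp/storage/shares/incoming/ab/abcde/0")  # hypothetical
    final = FilePath("/tmp/storage/shares/ab/abcde/0")              # hypothetical

    share = ImmutableDiskShare(si, 0, incoming, finalhome=final, max_size=1000)
    share.write_share_data(0, "share data")  # assumed method name
    share.close()  # moves incoming -> final; lease changes are allowed afterwards
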
6803hunk ./src/allmydata/storage/backends/disk/mutable.py 440
6804         pass
6805 
6806 
6807-def create_mutable_disk_share(fp, serverid, write_enabler, parent):
6808-    ms = MutableDiskShare(fp, parent)
6809+def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent):
6810+    ms = MutableDiskShare(storageindex, shnum, fp, parent)
6811     ms.create(serverid, write_enabler)
6812     del ms
6813hunk ./src/allmydata/storage/backends/disk/mutable.py 444
6814-    return MutableDiskShare(fp, parent)
6815+    return MutableDiskShare(storageindex, shnum, fp, parent)
6816hunk ./src/allmydata/storage/bucket.py 44
6817         start = time.time()
6818 
6819         self._share.close()
6820-        filelen = self._share.stat()
6821+        # XXX should this be self._share.get_used_space() ?
6822+        consumed_size = self._share.get_size()
6823         self._share = None
6824 
6825         self.closed = True
6826hunk ./src/allmydata/storage/bucket.py 51
6827         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
6828 
6829-        self.ss.bucket_writer_closed(self, filelen)
6830+        self.ss.bucket_writer_closed(self, consumed_size)
6831         self.ss.add_latency("close", time.time() - start)
6832         self.ss.count("close")
6833 
6834hunk ./src/allmydata/storage/server.py 182
6835                                 renew_secret, cancel_secret,
6836                                 sharenums, allocated_size,
6837                                 canary, owner_num=0):
6838-        # cancel_secret is no longer used.
6839         # owner_num is not for clients to set, but rather it should be
6840         # curried into a StorageServer instance dedicated to a particular
6841         # owner.
6842hunk ./src/allmydata/storage/server.py 195
6843         # Note that the lease should not be added until the BucketWriter
6844         # has been closed.
6845         expire_time = time.time() + 31*24*60*60
6846-        lease_info = LeaseInfo(owner_num, renew_secret,
6847+        lease_info = LeaseInfo(owner_num, renew_secret, cancel_secret,
6848                                expire_time, self._serverid)
6849 
6850         max_space_per_bucket = allocated_size
6851hunk ./src/allmydata/test/no_network.py 349
6852         return self.g.servers_by_number[i]
6853 
6854     def get_serverdir(self, i):
6855-        return self.g.servers_by_number[i].backend.storedir
6856+        return self.g.servers_by_number[i].backend._storedir
6857 
6858     def remove_server(self, i):
6859         self.g.remove_server(self.g.servers_by_number[i].get_serverid())
6860hunk ./src/allmydata/test/no_network.py 357
6861     def iterate_servers(self):
6862         for i in sorted(self.g.servers_by_number.keys()):
6863             ss = self.g.servers_by_number[i]
6864-            yield (i, ss, ss.backend.storedir)
6865+            yield (i, ss, ss.backend._storedir)
6866 
6867     def find_uri_shares(self, uri):
6868         si = tahoe_uri.from_string(uri).get_storage_index()
6869hunk ./src/allmydata/test/no_network.py 384
6870         return shares
6871 
6872     def copy_share(self, from_share, uri, to_server):
6873-        si = uri.from_string(self.uri).get_storage_index()
6874+        si = tahoe_uri.from_string(uri).get_storage_index()
6875         (i_shnum, i_serverid, i_sharefp) = from_share
6876         shares_dir = to_server.backend.get_shareset(si)._sharehomedir
6877         i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
6878hunk ./src/allmydata/test/test_download.py 127
6879 
6880         return d
6881 
6882-    def _write_shares(self, uri, shares):
6883-        si = uri.from_string(uri).get_storage_index()
6884+    def _write_shares(self, fileuri, shares):
6885+        si = uri.from_string(fileuri).get_storage_index()
6886         for i in shares:
6887             shares_for_server = shares[i]
6888             for shnum in shares_for_server:
6889hunk ./src/allmydata/test/test_hung_server.py 36
6890 
6891     def _hang(self, servers, **kwargs):
6892         for ss in servers:
6893-            self.g.hang_server(ss.get_serverid(), **kwargs)
6894+            self.g.hang_server(ss.original.get_serverid(), **kwargs)
6895 
6896     def _unhang(self, servers, **kwargs):
6897         for ss in servers:
6898hunk ./src/allmydata/test/test_hung_server.py 40
6899-            self.g.unhang_server(ss.get_serverid(), **kwargs)
6900+            self.g.unhang_server(ss.original.get_serverid(), **kwargs)
6901 
6902     def _hang_shares(self, shnums, **kwargs):
6903         # hang all servers who are holding the given shares
6904hunk ./src/allmydata/test/test_hung_server.py 52
6905                     hung_serverids.add(i_serverid)
6906 
6907     def _delete_all_shares_from(self, servers):
6908-        serverids = [ss.get_serverid() for ss in servers]
6909+        serverids = [ss.original.get_serverid() for ss in servers]
6910         for (i_shnum, i_serverid, i_sharefp) in self.shares:
6911             if i_serverid in serverids:
6912                 i_sharefp.remove()
6913hunk ./src/allmydata/test/test_hung_server.py 58
6914 
6915     def _corrupt_all_shares_in(self, servers, corruptor_func):
6916-        serverids = [ss.get_serverid() for ss in servers]
6917+        serverids = [ss.original.get_serverid() for ss in servers]
6918         for (i_shnum, i_serverid, i_sharefp) in self.shares:
6919             if i_serverid in serverids:
6920                 self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor_func)
6921hunk ./src/allmydata/test/test_hung_server.py 64
6922 
6923     def _copy_all_shares_from(self, from_servers, to_server):
6924-        serverids = [ss.get_serverid() for ss in from_servers]
6925+        serverids = [ss.original.get_serverid() for ss in from_servers]
6926         for (i_shnum, i_serverid, i_sharefp) in self.shares:
6927             if i_serverid in serverids:
6928                 self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
6929hunk ./src/allmydata/test/test_mutable.py 2990
6930             fso = debug.FindSharesOptions()
6931             storage_index = base32.b2a(n.get_storage_index())
6932             fso.si_s = storage_index
6933-            fso.nodedirs = [unicode(os.path.dirname(os.path.abspath(storedir)))
6934+            fso.nodedirs = [unicode(storedir.parent().path)
6935                             for (i,ss,storedir)
6936                             in self.iterate_servers()]
6937             fso.stdout = StringIO()
6938hunk ./src/allmydata/test/test_upload.py 818
6939         if share_number is not None:
6940             self._copy_share_to_server(share_number, server_number)
6941 
6942-
6943     def _copy_share_to_server(self, share_number, server_number):
6944         ss = self.g.servers_by_number[server_number]
6945hunk ./src/allmydata/test/test_upload.py 820
6946-        self.copy_share(self.shares[share_number], ss)
6947+        self.copy_share(self.shares[share_number], self.uri, ss)
6948 
6949     def _setup_grid(self):
6950         """
6951}
6952[docs/backends: document the configuration options for the pluggable backends scheme. refs #999
6953david-sarah@jacaranda.org**20110920171737
6954 Ignore-this: 5947e864682a43cb04e557334cda7c19
6955] {
6956adddir ./docs/backends
6957addfile ./docs/backends/S3.rst
6958hunk ./docs/backends/S3.rst 1
6959+====================================================
6960+Storing Shares in Amazon Simple Storage Service (S3)
6961+====================================================
6962+
6963+S3 is a commercial storage service provided by Amazon, described at
6964+`<https://aws.amazon.com/s3/>`_.
6965+
6966+The Tahoe-LAFS storage server can be configured to store its shares in
6967+an S3 bucket, rather than on the local filesystem. To enable this, add the
6968+following keys to the server's ``tahoe.cfg`` file:
6969+
6970+``[storage]``
6971+
6972+``backend = s3``
6973+
6974+    This turns off the local filesystem backend and enables use of S3.
6975+
6976+``s3.access_key_id = (string, required)``
6977+``s3.secret_access_key = (string, required)``
6978+
6979+    These two settings give the storage server permission to access your
6980+    Amazon Web Services account, allowing it to upload and download shares
6981+    from S3.
6982+
6983+``s3.bucket = (string, required)``
6984+
6985+    This controls which bucket will be used to hold shares. The Tahoe-LAFS
6986+    storage server will only modify and access objects in the configured S3
6987+    bucket.
6988+
6989+``s3.url = (URL string, optional)``
6990+
6991+    This URL tells the storage server how to access the S3 service. It
6992+    defaults to ``http://s3.amazonaws.com``, but by setting it to something
6993+    else, you may be able to use some other S3-like service if it is
6994+    sufficiently compatible.
6995+
6996+``s3.max_space = (str, optional)``
6997+
6998+    This tells the server to limit how much space can be used in the S3
6999+    bucket. Before each share is uploaded, the server will ask S3 for the
7000+    current bucket usage, and will only accept the share if it does not cause
7001+    the usage to grow above this limit.
7002+
7003+    The string contains a number, with an optional case-insensitive scale
7004+    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
7005+    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
7006+    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
7007+    thing.
7008+
7009+    If ``s3.max_space`` is omitted, the default behavior is to allow
7010+    unlimited usage.
7011+
7012+
7013+Once configured, the WUI "storage server" page will provide information about
7014+how much space is being used and how many shares are being stored.
7015+
7016+
7017+Issues
7018+------
7019+
7020+Objects in an S3 bucket cannot be read for free. As a result, when Tahoe-LAFS
7021+is configured to store shares in S3 rather than on local disk, some common
7022+operations may behave differently:
7023+
7024+* Lease crawling/expiration is not yet implemented. As a result, shares will
7025+  be retained forever, and the Storage Server status web page will not show
7026+  information about the number of mutable/immutable shares present.
7027+
7028+* Enabling ``s3.max_space`` causes an extra S3 usage query to be sent for
7029+  each share upload, causing the upload process to run slightly slower and
7030+  incur more S3 request charges.
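
For concreteness, a hypothetical ``tahoe.cfg`` fragment combining the options
above (the credentials, bucket name and space limit are placeholders, not
working values)::

    [storage]
    enabled = true
    backend = s3
    s3.access_key_id = AKIAEXAMPLEKEY
    s3.secret_access_key = examplesecretexamplesecret
    s3.bucket = example-tahoe-shares
    s3.max_space = 500GB
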
7031addfile ./docs/backends/disk.rst
7032hunk ./docs/backends/disk.rst 1
7033+====================================
7034+Storing Shares on a Local Filesystem
7035+====================================
7036+
7037+The "disk" backend stores shares on the local filesystem. Versions of
7038+Tahoe-LAFS <= 1.9.0 always stored shares in this way.
7039+
7040+``[storage]``
7041+
7042+``backend = disk``
7043+
7044+    This enables use of the disk backend, and is the default.
7045+
7046+``reserved_space = (str, optional)``
7047+
7048+    If provided, this value defines how much disk space is reserved: the
7049+    storage server will not accept any share that causes the amount of free
7050+    disk space to drop below this value. (The free space is measured by a
7051+    call to statvfs(2) on Unix, or GetDiskFreeSpaceEx on Windows, and is the
7052+    space available to the user account under which the storage server runs.)
7053+
7054+    This string contains a number, with an optional case-insensitive scale
7055+    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
7056+    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
7057+    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
7058+    thing.
7059+
7060+    "``tahoe create-node``" generates a tahoe.cfg with
7061+    "``reserved_space=1G``", but you may wish to raise, lower, or remove the
7062+    reservation to suit your needs.
7063+
7064+``expire.enabled =``
7065+
7066+``expire.mode =``
7067+
7068+``expire.override_lease_duration =``
7069+
7070+``expire.cutoff_date =``
7071+
7072+``expire.immutable =``
7073+
7074+``expire.mutable =``
7075+
7076+    These settings control garbage collection, causing the server to
7077+    delete shares that no longer have an up-to-date lease on them. Please
7078+    see `<garbage-collection.rst>`_ for full details.
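
A hypothetical ``tahoe.cfg`` fragment for this backend (the values are
illustrative, and the expire.* settings are detailed in
`<garbage-collection.rst>`_)::

    [storage]
    enabled = true
    backend = disk
    reserved_space = 1G
    expire.enabled = true
    expire.mode = age
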
7079hunk ./docs/configuration.rst 436
7080     <http://tahoe-lafs.org/trac/tahoe-lafs/ticket/390>`_ for the current
7081     status of this bug. The default value is ``False``.
7082 
7083-``reserved_space = (str, optional)``
7084+``backend = (string, optional)``
7085 
7086hunk ./docs/configuration.rst 438
7087-    If provided, this value defines how much disk space is reserved: the
7088-    storage server will not accept any share that causes the amount of free
7089-    disk space to drop below this value. (The free space is measured by a
7090-    call to statvfs(2) on Unix, or GetDiskFreeSpaceEx on Windows, and is the
7091-    space available to the user account under which the storage server runs.)
7092+    Storage servers can store their data in different "backends". Clients
7093+    need not be aware of which backend is used by a server. The default
7094+    value is ``disk``.
7095 
7096hunk ./docs/configuration.rst 442
7097-    This string contains a number, with an optional case-insensitive scale
7098-    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
7099-    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
7100-    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
7101-    thing.
7102+``backend = disk``
7103 
7104hunk ./docs/configuration.rst 444
7105-    "``tahoe create-node``" generates a tahoe.cfg with
7106-    "``reserved_space=1G``", but you may wish to raise, lower, or remove the
7107-    reservation to suit your needs.
7108+    The default is to store shares on the local filesystem (in
7109+    BASEDIR/storage/shares/). For configuration details (including how to
7110+    reserve a minimum amount of free space), see `<backends/disk.rst>`_.
7111 
7112hunk ./docs/configuration.rst 448
7113-``expire.enabled =``
7114+``backend = s3``
7115 
7116hunk ./docs/configuration.rst 450
7117-``expire.mode =``
7118-
7119-``expire.override_lease_duration =``
7120-
7121-``expire.cutoff_date =``
7122-
7123-``expire.immutable =``
7124-
7125-``expire.mutable =``
7126-
7127-    These settings control garbage collection, in which the server will
7128-    delete shares that no longer have an up-to-date lease on them. Please see
7129-    `<garbage-collection.rst>`_ for full details.
7130+    The storage server can store all of its shares in an Amazon Simple
7131+    Storage Service (S3) bucket. For configuration details, see `<backends/S3.rst>`_.
7132 
7133 
7134 Running A Helper
7135}
7136[Fix some incorrect attribute accesses. refs #999
7137david-sarah@jacaranda.org**20110921031207
7138 Ignore-this: f1ea4c3ea191f6d4b719afaebd2b2bcd
7139] {
7140hunk ./src/allmydata/client.py 258
7141 
7142         backend = DiskBackend(storedir, readonly=readonly, reserved_space=reserved,
7143                               discard_storage=discard)
7144-        ss = StorageServer(nodeid, backend, storedir,
7145+        ss = StorageServer(self.nodeid, backend, storedir,
7146                            stats_provider=self.stats_provider,
7147                            expiration_policy=expiration_policy)
7148         self.add_service(ss)
7149hunk ./src/allmydata/interfaces.py 449
7150         Returns the storage index.
7151         """
7152 
7153+    def get_storage_index_string():
7154+        """
7155+        Returns the base32-encoded storage index.
7156+        """
7157+
7158     def get_shnum():
7159         """
7160         Returns the share number.
7161hunk ./src/allmydata/storage/backends/disk/immutable.py 138
7162     def get_storage_index(self):
7163         return self._storageindex
7164 
7165+    def get_storage_index_string(self):
7166+        return si_b2a(self._storageindex)
7167+
7168     def get_shnum(self):
7169         return self._shnum
7170 
7171hunk ./src/allmydata/storage/backends/disk/mutable.py 119
7172     def get_storage_index(self):
7173         return self._storageindex
7174 
7175+    def get_storage_index_string(self):
7176+        return si_b2a(self._storageindex)
7177+
7178     def get_shnum(self):
7179         return self._shnum
7180 
7181hunk ./src/allmydata/storage/bucket.py 86
7182     def __init__(self, ss, share):
7183         self.ss = ss
7184         self._share = share
7185-        self.storageindex = share.storageindex
7186-        self.shnum = share.shnum
7187+        self.storageindex = share.get_storage_index()
7188+        self.shnum = share.get_shnum()
7189 
7190     def __repr__(self):
7191         return "<%s %s %s>" % (self.__class__.__name__,
7192hunk ./src/allmydata/storage/expirer.py 6
7193 from twisted.python import log as twlog
7194 
7195 from allmydata.storage.crawler import ShareCrawler
7196-from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
7197+from allmydata.storage.common import UnknownMutableContainerVersionError, \
7198      UnknownImmutableContainerVersionError
7199 
7200 
7201hunk ./src/allmydata/storage/expirer.py 124
7202                     struct.error):
7203                 twlog.msg("lease-checker error processing %r" % (share,))
7204                 twlog.err()
7205-                which = (si_b2a(share.storageindex), share.get_shnum())
7206+                which = (share.get_storage_index_string(), share.get_shnum())
7207                 self.state["cycle-to-date"]["corrupt-shares"].append(which)
7208                 wks = (1, 1, 1, "unknown")
7209             would_keep_shares.append(wks)
7210hunk ./src/allmydata/storage/server.py 221
7211         alreadygot = set()
7212         for share in shareset.get_shares():
7213             share.add_or_renew_lease(lease_info)
7214-            alreadygot.add(share.shnum)
7215+            alreadygot.add(share.get_shnum())
7216 
7217         for shnum in sharenums - alreadygot:
7218             if shareset.has_incoming(shnum):
7219hunk ./src/allmydata/storage/server.py 324
7220 
7221         try:
7222             shareset = self.backend.get_shareset(storageindex)
7223-            return shareset.readv(self, shares, readv)
7224+            return shareset.readv(shares, readv)
7225         finally:
7226             self.add_latency("readv", time.time() - start)
7227 
7228hunk ./src/allmydata/storage/shares.py 1
7229-#! /usr/bin/python
7230-
7231-from allmydata.storage.mutable import MutableShareFile
7232-from allmydata.storage.immutable import ShareFile
7233-
7234-def get_share_file(filename):
7235-    f = open(filename, "rb")
7236-    prefix = f.read(32)
7237-    f.close()
7238-    if prefix == MutableShareFile.MAGIC:
7239-        return MutableShareFile(filename)
7240-    # otherwise assume it's immutable
7241-    return ShareFile(filename)
7242-
7243rmfile ./src/allmydata/storage/shares.py
7244hunk ./src/allmydata/test/no_network.py 387
7245         si = tahoe_uri.from_string(uri).get_storage_index()
7246         (i_shnum, i_serverid, i_sharefp) = from_share
7247         shares_dir = to_server.backend.get_shareset(si)._sharehomedir
7248+        fileutil.fp_make_dirs(shares_dir)
7249         i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
7250 
7251     def restore_all_shares(self, shares):
7252hunk ./src/allmydata/test/no_network.py 391
7253-        for share, data in shares.items():
7254-            share.home.setContent(data)
7255+        for sharepath, data in shares.items():
7256+            FilePath(sharepath).setContent(data)
7257 
7258     def delete_share(self, (shnum, serverid, sharefp)):
7259         sharefp.remove()
7260hunk ./src/allmydata/test/test_upload.py 744
7261         servertoshnums = {} # k: server, v: set(shnum)
7262 
7263         for i, c in self.g.servers_by_number.iteritems():
7264-            for (dirp, dirns, fns) in os.walk(c.sharedir):
7265+            for (dirp, dirns, fns) in os.walk(c.backend._sharedir.path):
7266                 for fn in fns:
7267                     try:
7268                         sharenum = int(fn)
7269}
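The hunks above replace direct attribute access (.storageindex, .shnum) with accessor methods on share objects. A minimal sketch of the resulting calling convention (editor's illustration, not part of the bundle; leased_shnums is a hypothetical helper, while shareset and lease_info are as in the server.py hunk above):

    def leased_shnums(shareset, lease_info):
        # get_shares() yields share objects; add_or_renew_lease() renews
        # per share, and get_shnum() replaces the removed .shnum attribute.
        got = set()
        for share in shareset.get_shares():
            share.add_or_renew_lease(lease_info)
            got.add(share.get_shnum())
        return got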
7270[docs/backends/S3.rst: remove Issues section. refs #999
7271david-sarah@jacaranda.org**20110921031625
7272 Ignore-this: c83d8f52b790bc32488869e6ee1df8c2
7273] hunk ./docs/backends/S3.rst 57
7274 
7275 Once configured, the WUI "storage server" page will provide information about
7276 how much space is being used and how many shares are being stored.
7277-
7278-
7279-Issues
7280-------
7281-
7282-Objects in an S3 bucket cannot be read for free. As a result, when Tahoe-LAFS
7283-is configured to store shares in S3 rather than on local disk, some common
7284-operations may behave differently:
7285-
7286-* Lease crawling/expiration is not yet implemented. As a result, shares will
7287-  be retained forever, and the Storage Server status web page will not show
7288-  information about the number of mutable/immutable shares present.
7289-
7290-* Enabling ``s3.max_space`` causes an extra S3 usage query to be sent for
7291-  each share upload, causing the upload process to run slightly slower and
7292-  incur more S3 request charges.
7293[docs/backends/S3.rst, disk.rst: describe type of space settings as 'quantity of space', not 'str'. refs #999
7294david-sarah@jacaranda.org**20110921031705
7295 Ignore-this: a74ed8e01b0a1ab5f07a1487d7bf138
7296] {
7297hunk ./docs/backends/S3.rst 38
7298     else, you may be able to use some other S3-like service if it is
7299     sufficiently compatible.
7300 
7301-``s3.max_space = (str, optional)``
7302+``s3.max_space = (quantity of space, optional)``
7303 
7304     This tells the server to limit how much space can be used in the S3
7305     bucket. Before each share is uploaded, the server will ask S3 for the
7306hunk ./docs/backends/disk.rst 14
7307 
7308     This enables use of the disk backend, and is the default.
7309 
7310-``reserved_space = (str, optional)``
7311+``reserved_space = (quantity of space, optional)``
7312 
7313     If provided, this value defines how much disk space is reserved: the
7314     storage server will not accept any share that causes the amount of free
7315}
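For reference, a hedged example of how these quantity-of-space settings would appear in tahoe.cfg. The [storage] section and the suffix forms (10K, 5mB, 78Gb) are as exercised by the test_client.py hunks later in this bundle; placing s3.max_space in the same section is assumed from docs/backends/S3.rst, and the values are illustrative:

    [storage]
    enabled = true
    reserved_space = 5mB        # disk backend; 10K, 5mB, 78Gb forms all parse

    # S3 backend (per docs/backends/S3.rst; other s3.* options elided):
    # s3.max_space = 50Gb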
7316[More fixes to tests needed for pluggable backends. refs #999
7317david-sarah@jacaranda.org**20110921184649
7318 Ignore-this: 9be0d3a98e350fd4e17a07d2c00bb4ca
7319] {
7320hunk ./src/allmydata/scripts/debug.py 8
7321 from twisted.python import usage, failure
7322 from twisted.internet import defer
7323 from twisted.scripts import trial as twisted_trial
7324+from twisted.python.filepath import FilePath
7325 
7326 
7327 class DumpOptions(usage.Options):
7328hunk ./src/allmydata/scripts/debug.py 38
7329         self['filename'] = argv_to_abspath(filename)
7330 
7331 def dump_share(options):
7332-    from allmydata.storage.mutable import MutableShareFile
7333+    from allmydata.storage.backends.disk.disk_backend import get_share
7334     from allmydata.util.encodingutil import quote_output
7335 
7336     out = options.stdout
7337hunk ./src/allmydata/scripts/debug.py 46
7338     # check the version, to see if we have a mutable or immutable share
7339     print >>out, "share filename: %s" % quote_output(options['filename'])
7340 
7341-    f = open(options['filename'], "rb")
7342-    prefix = f.read(32)
7343-    f.close()
7344-    if prefix == MutableShareFile.MAGIC:
7345-        return dump_mutable_share(options)
7346-    # otherwise assume it's immutable
7347-    return dump_immutable_share(options)
7348-
7349-def dump_immutable_share(options):
7350-    from allmydata.storage.immutable import ShareFile
7351+    share = get_share("", 0, fp)
7352+    if share.sharetype == "mutable":
7353+        return dump_mutable_share(options, share)
7354+    else:
7355+        assert share.sharetype == "immutable", share.sharetype
7356+        return dump_immutable_share(options)
7357 
7358hunk ./src/allmydata/scripts/debug.py 53
7359+def dump_immutable_share(options, share):
7360     out = options.stdout
7361hunk ./src/allmydata/scripts/debug.py 55
7362-    f = ShareFile(options['filename'])
7363     if not options["leases-only"]:
7364hunk ./src/allmydata/scripts/debug.py 56
7365-        dump_immutable_chk_share(f, out, options)
7366-    dump_immutable_lease_info(f, out)
7367+        dump_immutable_chk_share(share, out, options)
7368+    dump_immutable_lease_info(share, out)
7369     print >>out
7370     return 0
7371 
7372hunk ./src/allmydata/scripts/debug.py 166
7373     return when
7374 
7375 
7376-def dump_mutable_share(options):
7377-    from allmydata.storage.mutable import MutableShareFile
7378+def dump_mutable_share(options, m):
7379     from allmydata.util import base32, idlib
7380     out = options.stdout
7381hunk ./src/allmydata/scripts/debug.py 169
7382-    m = MutableShareFile(options['filename'])
7383     f = open(options['filename'], "rb")
7384     WE, nodeid = m._read_write_enabler_and_nodeid(f)
7385     num_extra_leases = m._read_num_extra_leases(f)
7386hunk ./src/allmydata/scripts/debug.py 641
7387     /home/warner/testnet/node-1/storage/shares/44k/44kai1tui348689nrw8fjegc8c/9
7388     /home/warner/testnet/node-2/storage/shares/44k/44kai1tui348689nrw8fjegc8c/2
7389     """
7390-    from allmydata.storage.server import si_a2b, storage_index_to_dir
7391-    from allmydata.util.encodingutil import listdir_unicode
7392+    from allmydata.storage.server import si_a2b
7393+    from allmydata.storage.backends.disk_backend import si_si2dir
7394+    from allmydata.util.encodingutil import quote_filepath
7395 
7396     out = options.stdout
7397hunk ./src/allmydata/scripts/debug.py 646
7398-    sharedir = storage_index_to_dir(si_a2b(options.si_s))
7399-    for d in options.nodedirs:
7400-        d = os.path.join(d, "storage/shares", sharedir)
7401-        if os.path.exists(d):
7402-            for shnum in listdir_unicode(d):
7403-                print >>out, os.path.join(d, shnum)
7404+    si = si_a2b(options.si_s)
7405+    for nodedir in options.nodedirs:
7406+        sharedir = si_si2dir(nodedir.child("storage").child("shares"), si)
7407+        if sharedir.exists():
7408+            for sharefp in sharedir.children():
7409+                print >>out, quote_filepath(sharefp, quotemarks=False)
7410 
7411     return 0
7412 
7413hunk ./src/allmydata/scripts/debug.py 878
7414         print >>err, "Error processing %s" % quote_output(si_dir)
7415         failure.Failure().printTraceback(err)
7416 
7417+
7418 class CorruptShareOptions(usage.Options):
7419     def getSynopsis(self):
7420         return "Usage: tahoe debug corrupt-share SHARE_FILENAME"
7421hunk ./src/allmydata/scripts/debug.py 902
7422 Obviously, this command should not be used in normal operation.
7423 """
7424         return t
7425+
7426     def parseArgs(self, filename):
7427         self['filename'] = filename
7428 
7429hunk ./src/allmydata/scripts/debug.py 907
7430 def corrupt_share(options):
7431+    do_corrupt_share(options.stdout, FilePath(options['filename']), options['offset'])
7432+
7433+def do_corrupt_share(out, fp, offset="block-random"):
7434     import random
7435hunk ./src/allmydata/scripts/debug.py 911
7436-    from allmydata.storage.mutable import MutableShareFile
7437-    from allmydata.storage.immutable import ShareFile
7438+    from allmydata.storage.backends.disk.mutable import MutableDiskShare
7439+    from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
7440     from allmydata.mutable.layout import unpack_header
7441     from allmydata.immutable.layout import ReadBucketProxy
7442hunk ./src/allmydata/scripts/debug.py 915
7443-    out = options.stdout
7444-    fn = options['filename']
7445-    assert options["offset"] == "block-random", "other offsets not implemented"
7446+
7447+    assert offset == "block-random", "other offsets not implemented"
7448+
7449     # first, what kind of share is it?
7450 
7451     def flip_bit(start, end):
7452hunk ./src/allmydata/scripts/debug.py 924
7453         offset = random.randrange(start, end)
7454         bit = random.randrange(0, 8)
7455         print >>out, "[%d..%d):  %d.b%d" % (start, end, offset, bit)
7456-        f = open(fn, "rb+")
7457-        f.seek(offset)
7458-        d = f.read(1)
7459-        d = chr(ord(d) ^ 0x01)
7460-        f.seek(offset)
7461-        f.write(d)
7462-        f.close()
7463+        f = fp.open("rb+")
7464+        try:
7465+            f.seek(offset)
7466+            d = f.read(1)
7467+            d = chr(ord(d) ^ 0x01)
7468+            f.seek(offset)
7469+            f.write(d)
7470+        finally:
7471+            f.close()
7472 
7473hunk ./src/allmydata/scripts/debug.py 934
7474-    f = open(fn, "rb")
7475-    prefix = f.read(32)
7476-    f.close()
7477-    if prefix == MutableShareFile.MAGIC:
7478-        # mutable
7479-        m = MutableShareFile(fn)
7480-        f = open(fn, "rb")
7481-        f.seek(m.DATA_OFFSET)
7482-        data = f.read(2000)
7483-        # make sure this slot contains an SMDF share
7484-        assert data[0] == "\x00", "non-SDMF mutable shares not supported"
7485+    f = fp.open("rb")
7486+    try:
7487+        prefix = f.read(32)
7488+    finally:
7489         f.close()
7490hunk ./src/allmydata/scripts/debug.py 939
7491+    if prefix == MutableDiskShare.MAGIC:
7492+        # mutable
7493+        m = MutableDiskShare("", 0, fp)
7494+        f = fp.open("rb")
7495+        try:
7496+            f.seek(m.DATA_OFFSET)
7497+            data = f.read(2000)
7498+            # make sure this slot contains an SDMF share
7499+            assert data[0] == "\x00", "non-SDMF mutable shares not supported"
7500+        finally:
7501+            f.close()
7502 
7503         (version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize,
7504          ig_datalen, offsets) = unpack_header(data)
7505hunk ./src/allmydata/scripts/debug.py 960
7506         flip_bit(start, end)
7507     else:
7508         # otherwise assume it's immutable
7509-        f = ShareFile(fn)
7510+        f = ImmutableDiskShare("", 0, fp)
7511         bp = ReadBucketProxy(None, None, '')
7512         offsets = bp._parse_offsets(f.read_share_data(0, 0x24))
7513         start = f._data_offset + offsets["data"]
7514hunk ./src/allmydata/storage/backends/base.py 92
7515             (testv, datav, new_length) = test_and_write_vectors[sharenum]
7516             if sharenum in shares:
7517                 if not shares[sharenum].check_testv(testv):
7518-                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
7519+                    storageserver.log("testv failed: [%d]: %r" % (sharenum, testv))
7520                     testv_is_good = False
7521                     break
7522             else:
7523hunk ./src/allmydata/storage/backends/base.py 99
7524                 # compare the vectors against an empty share, in which all
7525                 # reads return empty strings
7526                 if not EmptyShare().check_testv(testv):
7527-                    self.log("testv failed (empty): [%d] %r" % (sharenum,
7528-                                                                testv))
7529+                    storageserver.log("testv failed (empty): [%d] %r" % (sharenum, testv))
7530                     testv_is_good = False
7531                     break
7532 
7533hunk ./src/allmydata/test/test_cli.py 2892
7534             # delete one, corrupt a second
7535             shares = self.find_uri_shares(self.uri)
7536             self.failUnlessReallyEqual(len(shares), 10)
7537-            os.unlink(shares[0][2])
7538-            cso = debug.CorruptShareOptions()
7539-            cso.stdout = StringIO()
7540-            cso.parseOptions([shares[1][2]])
7541+            shares[0][2].remove()
7542+            stdout = StringIO()
7543+            sharefile = shares[1][2]
7544             storage_index = uri.from_string(self.uri).get_storage_index()
7545             self._corrupt_share_line = "  server %s, SI %s, shnum %d" % \
7546                                        (base32.b2a(shares[1][1]),
7547hunk ./src/allmydata/test/test_cli.py 2900
7548                                         base32.b2a(storage_index),
7549                                         shares[1][0])
7550-            debug.corrupt_share(cso)
7551+            debug.do_corrupt_share(stdout, sharefile)
7552         d.addCallback(_clobber_shares)
7553 
7554         d.addCallback(lambda ign: self.do_cli("check", "--verify", self.uri))
7555hunk ./src/allmydata/test/test_cli.py 3017
7556         def _clobber_shares(ignored):
7557             shares = self.find_uri_shares(self.uris[u"g\u00F6\u00F6d"])
7558             self.failUnlessReallyEqual(len(shares), 10)
7559-            os.unlink(shares[0][2])
7560+            shares[0][2].remove()
7561 
7562             shares = self.find_uri_shares(self.uris["mutable"])
7563hunk ./src/allmydata/test/test_cli.py 3020
7564-            cso = debug.CorruptShareOptions()
7565-            cso.stdout = StringIO()
7566-            cso.parseOptions([shares[1][2]])
7567+            stdout = StringIO()
7568+            sharefile = shares[1][2]
7569             storage_index = uri.from_string(self.uris["mutable"]).get_storage_index()
7570             self._corrupt_share_line = " corrupt: server %s, SI %s, shnum %d" % \
7571                                        (base32.b2a(shares[1][1]),
7572hunk ./src/allmydata/test/test_cli.py 3027
7573                                         base32.b2a(storage_index),
7574                                         shares[1][0])
7575-            debug.corrupt_share(cso)
7576+            debug.do_corrupt_share(stdout, sharefile)
7577         d.addCallback(_clobber_shares)
7578 
7579         # root
7580hunk ./src/allmydata/test/test_client.py 90
7581                            "enabled = true\n" + \
7582                            "reserved_space = 1000\n")
7583         c = client.Client(basedir)
7584-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 1000)
7585+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 1000)
7586 
7587     def test_reserved_2(self):
7588         basedir = "client.Basic.test_reserved_2"
7589hunk ./src/allmydata/test/test_client.py 101
7590                            "enabled = true\n" + \
7591                            "reserved_space = 10K\n")
7592         c = client.Client(basedir)
7593-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 10*1000)
7594+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 10*1000)
7595 
7596     def test_reserved_3(self):
7597         basedir = "client.Basic.test_reserved_3"
7598hunk ./src/allmydata/test/test_client.py 112
7599                            "enabled = true\n" + \
7600                            "reserved_space = 5mB\n")
7601         c = client.Client(basedir)
7602-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space,
7603+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space,
7604                              5*1000*1000)
7605 
7606     def test_reserved_4(self):
7607hunk ./src/allmydata/test/test_client.py 124
7608                            "enabled = true\n" + \
7609                            "reserved_space = 78Gb\n")
7610         c = client.Client(basedir)
7611-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space,
7612+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space,
7613                              78*1000*1000*1000)
7614 
7615     def test_reserved_bad(self):
7616hunk ./src/allmydata/test/test_client.py 136
7617                            "enabled = true\n" + \
7618                            "reserved_space = bogus\n")
7619         c = client.Client(basedir)
7620-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 0)
7621+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 0)
7622 
7623     def _permute(self, sb, key):
7624         return [ s.get_serverid() for s in sb.get_servers_for_psi(key) ]
7625hunk ./src/allmydata/test/test_crawler.py 7
7626 from twisted.trial import unittest
7627 from twisted.application import service
7628 from twisted.internet import defer
7629+from twisted.python.filepath import FilePath
7630 from foolscap.api import eventually, fireEventually
7631 
7632 from allmydata.util import fileutil, hashutil, pollmixin
7633hunk ./src/allmydata/test/test_crawler.py 13
7634 from allmydata.storage.server import StorageServer, si_b2a
7635 from allmydata.storage.crawler import ShareCrawler, TimeSliceExceeded
7636+from allmydata.storage.backends.disk.disk_backend import DiskBackend
7637 
7638 from allmydata.test.test_storage import FakeCanary
7639 from allmydata.test.common_util import StallMixin
7640hunk ./src/allmydata/test/test_crawler.py 115
7641 
7642     def test_immediate(self):
7643         self.basedir = "crawler/Basic/immediate"
7644-        fileutil.make_dirs(self.basedir)
7645         serverid = "\x00" * 20
7646hunk ./src/allmydata/test/test_crawler.py 116
7647-        ss = StorageServer(self.basedir, serverid)
7648+        fp = FilePath(self.basedir)
7649+        backend = DiskBackend(fp)
7650+        ss = StorageServer(serverid, backend, fp)
7651         ss.setServiceParent(self.s)
7652 
7653         sis = [self.write(i, ss, serverid) for i in range(10)]
7654hunk ./src/allmydata/test/test_crawler.py 122
7655-        statefile = os.path.join(self.basedir, "statefile")
7656+        statefp = fp.child("statefile")
7657 
7658hunk ./src/allmydata/test/test_crawler.py 124
7659-        c = BucketEnumeratingCrawler(ss, statefile, allowed_cpu_percentage=.1)
7660+        c = BucketEnumeratingCrawler(backend, statefp, allowed_cpu_percentage=.1)
7661         c.load_state()
7662 
7663         c.start_current_prefix(time.time())
7664hunk ./src/allmydata/test/test_crawler.py 137
7665         self.failUnlessEqual(sorted(sis), sorted(c.all_buckets))
7666 
7667         # check that a new crawler picks up on the state file properly
7668-        c2 = BucketEnumeratingCrawler(ss, statefile)
7669+        c2 = BucketEnumeratingCrawler(backend, statefp)
7670         c2.load_state()
7671 
7672         c2.start_current_prefix(time.time())
7673hunk ./src/allmydata/test/test_crawler.py 145
7674 
7675     def test_service(self):
7676         self.basedir = "crawler/Basic/service"
7677-        fileutil.make_dirs(self.basedir)
7678         serverid = "\x00" * 20
7679hunk ./src/allmydata/test/test_crawler.py 146
7680-        ss = StorageServer(self.basedir, serverid)
7681+        fp = FilePath(self.basedir)
7682+        backend = DiskBackend(fp)
7683+        ss = StorageServer(serverid, backend, fp)
7684         ss.setServiceParent(self.s)
7685 
7686         sis = [self.write(i, ss, serverid) for i in range(10)]
7687hunk ./src/allmydata/test/test_crawler.py 153
7688 
7689-        statefile = os.path.join(self.basedir, "statefile")
7690-        c = BucketEnumeratingCrawler(ss, statefile)
7691+        statefp = fp.child("statefile")
7692+        c = BucketEnumeratingCrawler(backend, statefp)
7693         c.setServiceParent(self.s)
7694 
7695         # it should be legal to call get_state() and get_progress() right
7696hunk ./src/allmydata/test/test_crawler.py 174
7697 
7698     def test_paced(self):
7699         self.basedir = "crawler/Basic/paced"
7700-        fileutil.make_dirs(self.basedir)
7701         serverid = "\x00" * 20
7702hunk ./src/allmydata/test/test_crawler.py 175
7703-        ss = StorageServer(self.basedir, serverid)
7704+        fp = FilePath(self.basedir)
7705+        backend = DiskBackend(fp)
7706+        ss = StorageServer(serverid, backend, fp)
7707         ss.setServiceParent(self.s)
7708 
7709         # put four buckets in each prefixdir
7710hunk ./src/allmydata/test/test_crawler.py 186
7711             for tail in range(4):
7712                 sis.append(self.write(i, ss, serverid, tail))
7713 
7714-        statefile = os.path.join(self.basedir, "statefile")
7715+        statefp = fp.child("statefile")
7716 
7717hunk ./src/allmydata/test/test_crawler.py 188
7718-        c = PacedCrawler(ss, statefile)
7719+        c = PacedCrawler(backend, statefp)
7720         c.load_state()
7721         try:
7722             c.start_current_prefix(time.time())
7723hunk ./src/allmydata/test/test_crawler.py 213
7724         del c
7725 
7726         # start a new crawler, it should start from the beginning
7727-        c = PacedCrawler(ss, statefile)
7728+        c = PacedCrawler(backend, statefp)
7729         c.load_state()
7730         try:
7731             c.start_current_prefix(time.time())
7732hunk ./src/allmydata/test/test_crawler.py 226
7733         c.cpu_slice = PacedCrawler.cpu_slice
7734 
7735         # a third crawler should pick up from where it left off
7736-        c2 = PacedCrawler(ss, statefile)
7737+        c2 = PacedCrawler(backend, statefp)
7738         c2.all_buckets = c.all_buckets[:]
7739         c2.load_state()
7740         c2.countdown = -1
7741hunk ./src/allmydata/test/test_crawler.py 237
7742 
7743         # now stop it at the end of a bucket (countdown=4), to exercise a
7744         # different place that checks the time
7745-        c = PacedCrawler(ss, statefile)
7746+        c = PacedCrawler(backend, statefp)
7747         c.load_state()
7748         c.countdown = 4
7749         try:
7750hunk ./src/allmydata/test/test_crawler.py 256
7751 
7752         # stop it again at the end of the bucket, check that a new checker
7753         # picks up correctly
7754-        c = PacedCrawler(ss, statefile)
7755+        c = PacedCrawler(backend, statefp)
7756         c.load_state()
7757         c.countdown = 4
7758         try:
7759hunk ./src/allmydata/test/test_crawler.py 266
7760         # that should stop at the end of one of the buckets.
7761         c.save_state()
7762 
7763-        c2 = PacedCrawler(ss, statefile)
7764+        c2 = PacedCrawler(backend, statefp)
7765         c2.all_buckets = c.all_buckets[:]
7766         c2.load_state()
7767         c2.countdown = -1
7768hunk ./src/allmydata/test/test_crawler.py 277
7769 
7770     def test_paced_service(self):
7771         self.basedir = "crawler/Basic/paced_service"
7772-        fileutil.make_dirs(self.basedir)
7773         serverid = "\x00" * 20
7774hunk ./src/allmydata/test/test_crawler.py 278
7775-        ss = StorageServer(self.basedir, serverid)
7776+        fp = FilePath(self.basedir)
7777+        backend = DiskBackend(fp)
7778+        ss = StorageServer(serverid, backend, fp)
7779         ss.setServiceParent(self.s)
7780 
7781         sis = [self.write(i, ss, serverid) for i in range(10)]
7782hunk ./src/allmydata/test/test_crawler.py 285
7783 
7784-        statefile = os.path.join(self.basedir, "statefile")
7785-        c = PacedCrawler(ss, statefile)
7786+        statefp = fp.child("statefile")
7787+        c = PacedCrawler(backend, statefp)
7788 
7789         did_check_progress = [False]
7790         def check_progress():
7791hunk ./src/allmydata/test/test_crawler.py 345
7792         # and read the stdout when it runs.
7793 
7794         self.basedir = "crawler/Basic/cpu_usage"
7795-        fileutil.make_dirs(self.basedir)
7796         serverid = "\x00" * 20
7797hunk ./src/allmydata/test/test_crawler.py 346
7798-        ss = StorageServer(self.basedir, serverid)
7799+        fp = FilePath(self.basedir)
7800+        backend = DiskBackend(fp)
7801+        ss = StorageServer(serverid, backend, fp)
7802         ss.setServiceParent(self.s)
7803 
7804         for i in range(10):
7805hunk ./src/allmydata/test/test_crawler.py 354
7806             self.write(i, ss, serverid)
7807 
7808-        statefile = os.path.join(self.basedir, "statefile")
7809-        c = ConsumingCrawler(ss, statefile)
7810+        statefp = fp.child("statefile")
7811+        c = ConsumingCrawler(backend, statefp)
7812         c.setServiceParent(self.s)
7813 
7814         # this will run as fast as it can, consuming about 50ms per call to
7815hunk ./src/allmydata/test/test_crawler.py 391
7816 
7817     def test_empty_subclass(self):
7818         self.basedir = "crawler/Basic/empty_subclass"
7819-        fileutil.make_dirs(self.basedir)
7820         serverid = "\x00" * 20
7821hunk ./src/allmydata/test/test_crawler.py 392
7822-        ss = StorageServer(self.basedir, serverid)
7823+        fp = FilePath(self.basedir)
7824+        backend = DiskBackend(fp)
7825+        ss = StorageServer(serverid, backend, fp)
7826         ss.setServiceParent(self.s)
7827 
7828         for i in range(10):
7829hunk ./src/allmydata/test/test_crawler.py 400
7830             self.write(i, ss, serverid)
7831 
7832-        statefile = os.path.join(self.basedir, "statefile")
7833-        c = ShareCrawler(ss, statefile)
7834+        statefp = fp.child("statefile")
7835+        c = ShareCrawler(backend, statefp)
7836         c.slow_start = 0
7837         c.setServiceParent(self.s)
7838 
7839hunk ./src/allmydata/test/test_crawler.py 417
7840         d.addCallback(_done)
7841         return d
7842 
7843-
7844     def test_oneshot(self):
7845         self.basedir = "crawler/Basic/oneshot"
7846hunk ./src/allmydata/test/test_crawler.py 419
7847-        fileutil.make_dirs(self.basedir)
7848         serverid = "\x00" * 20
7849hunk ./src/allmydata/test/test_crawler.py 420
7850-        ss = StorageServer(self.basedir, serverid)
7851+        fp = FilePath(self.basedir)
7852+        backend = DiskBackend(fp)
7853+        ss = StorageServer(serverid, backend, fp)
7854         ss.setServiceParent(self.s)
7855 
7856         for i in range(30):
7857hunk ./src/allmydata/test/test_crawler.py 428
7858             self.write(i, ss, serverid)
7859 
7860-        statefile = os.path.join(self.basedir, "statefile")
7861-        c = OneShotCrawler(ss, statefile)
7862+        statefp = fp.child("statefile")
7863+        c = OneShotCrawler(backend, statefp)
7864         c.setServiceParent(self.s)
7865 
7866         d = c.finished_d
7867hunk ./src/allmydata/test/test_crawler.py 447
7868             self.failUnlessEqual(s["current-cycle"], None)
7869         d.addCallback(_check)
7870         return d
7871-
7872hunk ./src/allmydata/test/test_deepcheck.py 23
7873      ShouldFailMixin
7874 from allmydata.test.common_util import StallMixin
7875 from allmydata.test.no_network import GridTestMixin
7876+from allmydata.scripts import debug
7877+
7878 
7879 timeout = 2400 # One of these took 1046.091s on Zandr's ARM box.
7880 
7881hunk ./src/allmydata/test/test_deepcheck.py 905
7882         d.addErrback(self.explain_error)
7883         return d
7884 
7885-
7886-
7887     def set_up_damaged_tree(self):
7888         # 6.4s
7889 
7890hunk ./src/allmydata/test/test_deepcheck.py 989
7891 
7892         return d
7893 
7894-    def _run_cli(self, argv):
7895-        stdout, stderr = StringIO(), StringIO()
7896-        # this can only do synchronous operations
7897-        assert argv[0] == "debug"
7898-        runner.runner(argv, run_by_human=False, stdout=stdout, stderr=stderr)
7899-        return stdout.getvalue()
7900-
7901     def _delete_some_shares(self, node):
7902         self.delete_shares_numbered(node.get_uri(), [0,1])
7903 
7904hunk ./src/allmydata/test/test_deepcheck.py 995
7905     def _corrupt_some_shares(self, node):
7906         for (shnum, serverid, sharefile) in self.find_uri_shares(node.get_uri()):
7907             if shnum in (0,1):
7908-                self._run_cli(["debug", "corrupt-share", sharefile])
7909+                debug.do_corrupt_share(StringIO(), sharefile)
7910 
7911     def _delete_most_shares(self, node):
7912         self.delete_shares_numbered(node.get_uri(), range(1,10))
7913hunk ./src/allmydata/test/test_deepcheck.py 1000
7914 
7915-
7916     def check_is_healthy(self, cr, where):
7917         try:
7918             self.failUnless(ICheckResults.providedBy(cr), (cr, type(cr), where))
7919hunk ./src/allmydata/test/test_download.py 134
7920             for shnum in shares_for_server:
7921                 share_dir = self.get_server(i).backend.get_shareset(si)._sharehomedir
7922                 fileutil.fp_make_dirs(share_dir)
7923-                share_dir.child(str(shnum)).setContent(shares[shnum])
7924+                share_dir.child(str(shnum)).setContent(shares_for_server[shnum])
7925 
7926     def load_shares(self, ignored=None):
7927         # this uses the data generated by create_shares() to populate the
7928hunk ./src/allmydata/test/test_hung_server.py 32
7929 
7930     def _break(self, servers):
7931         for ss in servers:
7932-            self.g.break_server(ss.get_serverid())
7933+            self.g.break_server(ss.original.get_serverid())
7934 
7935     def _hang(self, servers, **kwargs):
7936         for ss in servers:
7937hunk ./src/allmydata/test/test_hung_server.py 67
7938         serverids = [ss.original.get_serverid() for ss in from_servers]
7939         for (i_shnum, i_serverid, i_sharefp) in self.shares:
7940             if i_serverid in serverids:
7941-                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
7942+                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server.original)
7943 
7944         self.shares = self.find_uri_shares(self.uri)
7945 
7946hunk ./src/allmydata/test/test_mutable.py 3669
7947         # Now execute each assignment by writing the storage.
7948         for (share, servernum) in assignments:
7949             sharedata = base64.b64decode(self.sdmf_old_shares[share])
7950-            storage_dir = self.get_server(servernum).backend.get_shareset(si).sharehomedir
7951+            storage_dir = self.get_server(servernum).backend.get_shareset(si)._sharehomedir
7952             fileutil.fp_make_dirs(storage_dir)
7953             storage_dir.child("%d" % share).setContent(sharedata)
7954         # ...and verify that the shares are there.
7955hunk ./src/allmydata/test/test_no_network.py 10
7956 from allmydata.immutable.upload import Data
7957 from allmydata.util.consumer import download_to_data
7958 
7959+
7960 class Harness(unittest.TestCase):
7961     def setUp(self):
7962         self.s = service.MultiService()
7963hunk ./src/allmydata/test/test_storage.py 1
7964-import time, os.path, platform, stat, re, simplejson, struct, shutil
7965+import time, os.path, platform, stat, re, simplejson, struct, shutil, itertools
7966 
7967 import mock
7968 
7969hunk ./src/allmydata/test/test_storage.py 6
7970 from twisted.trial import unittest
7971-
7972 from twisted.internet import defer
7973 from twisted.application import service
7974hunk ./src/allmydata/test/test_storage.py 8
7975+from twisted.python.filepath import FilePath
7976 from foolscap.api import fireEventually
7977hunk ./src/allmydata/test/test_storage.py 10
7978-import itertools
7979+
7980 from allmydata import interfaces
7981 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
7982 from allmydata.storage.server import StorageServer
7983hunk ./src/allmydata/test/test_storage.py 14
7984+from allmydata.storage.backends.disk.disk_backend import DiskBackend
7985 from allmydata.storage.backends.disk.mutable import MutableDiskShare
7986 from allmydata.storage.bucket import BucketWriter, BucketReader
7987 from allmydata.storage.common import DataTooLargeError, \
7988hunk ./src/allmydata/test/test_storage.py 310
7989         return self.sparent.stopService()
7990 
7991     def workdir(self, name):
7992-        basedir = os.path.join("storage", "Server", name)
7993-        return basedir
7994+        return FilePath("storage").child("Server").child(name)
7995 
7996     def create(self, name, reserved_space=0, klass=StorageServer):
7997         workdir = self.workdir(name)
7998hunk ./src/allmydata/test/test_storage.py 314
7999-        ss = klass(workdir, "\x00" * 20, reserved_space=reserved_space,
8000+        backend = DiskBackend(workdir, readonly=False, reserved_space=reserved_space)
8001+        ss = klass("\x00" * 20, backend, workdir,
8002                    stats_provider=FakeStatsProvider())
8003         ss.setServiceParent(self.sparent)
8004         return ss
8005hunk ./src/allmydata/test/test_storage.py 1386
8006 
8007     def tearDown(self):
8008         self.sparent.stopService()
8009-        shutil.rmtree(self.workdir("MDMFProxies storage test server"))
8010+        fileutil.fp_remove(self.workdir("MDMFProxies storage test server"))
8011 
8012 
8013     def write_enabler(self, we_tag):
8014hunk ./src/allmydata/test/test_storage.py 2781
8015         return self.sparent.stopService()
8016 
8017     def workdir(self, name):
8018-        basedir = os.path.join("storage", "Server", name)
8019-        return basedir
8020+        return FilePath("storage").child("Server").child(name)
8021 
8022     def create(self, name):
8023         workdir = self.workdir(name)
8024hunk ./src/allmydata/test/test_storage.py 2785
8025-        ss = StorageServer(workdir, "\x00" * 20)
8026+        backend = DiskBackend(workdir)
8027+        ss = StorageServer("\x00" * 20, backend, workdir)
8028         ss.setServiceParent(self.sparent)
8029         return ss
8030 
8031hunk ./src/allmydata/test/test_storage.py 4061
8032         }
8033 
8034         basedir = "storage/WebStatus/status_right_disk_stats"
8035-        fileutil.make_dirs(basedir)
8036-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=reserved_space)
8037-        expecteddir = ss.sharedir
8038+        fp = FilePath(basedir)
8039+        backend = DiskBackend(fp, readonly=False, reserved_space=reserved_space)
8040+        ss = StorageServer("\x00" * 20, backend, fp)
8041+        expecteddir = backend._sharedir
8042         ss.setServiceParent(self.s)
8043         w = StorageStatus(ss)
8044         html = w.renderSynchronously()
8045hunk ./src/allmydata/test/test_storage.py 4084
8046 
8047     def test_readonly(self):
8048         basedir = "storage/WebStatus/readonly"
8049-        fileutil.make_dirs(basedir)
8050-        ss = StorageServer(basedir, "\x00" * 20, readonly_storage=True)
8051+        fp = FilePath(basedir)
8052+        backend = DiskBackend(fp, readonly=True)
8053+        ss = StorageServer("\x00" * 20, backend, fp)
8054         ss.setServiceParent(self.s)
8055         w = StorageStatus(ss)
8056         html = w.renderSynchronously()
8057hunk ./src/allmydata/test/test_storage.py 4096
8058 
8059     def test_reserved(self):
8060         basedir = "storage/WebStatus/reserved"
8061-        fileutil.make_dirs(basedir)
8062-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=10e6)
8063-        ss.setServiceParent(self.s)
8064-        w = StorageStatus(ss)
8065-        html = w.renderSynchronously()
8066-        self.failUnlessIn("<h1>Storage Server Status</h1>", html)
8067-        s = remove_tags(html)
8068-        self.failUnlessIn("Reserved space: - 10.00 MB (10000000)", s)
8069-
8070-    def test_huge_reserved(self):
8071-        basedir = "storage/WebStatus/reserved"
8072-        fileutil.make_dirs(basedir)
8073-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=10e6)
8074+        fp = FilePath(basedir)
8075+        backend = DiskBackend(fp, readonly=False, reserved_space=10e6)
8076+        ss = StorageServer("\x00" * 20, backend, fp)
8077         ss.setServiceParent(self.s)
8078         w = StorageStatus(ss)
8079         html = w.renderSynchronously()
8080hunk ./src/allmydata/test/test_upload.py 3
8081 # -*- coding: utf-8 -*-
8082 
8083-import os, shutil
8084+import os
8085 from cStringIO import StringIO
8086 from twisted.trial import unittest
8087 from twisted.python.failure import Failure
8088hunk ./src/allmydata/test/test_upload.py 14
8089 from allmydata import uri, monitor, client
8090 from allmydata.immutable import upload, encode
8091 from allmydata.interfaces import FileTooLargeError, UploadUnhappinessError
8092-from allmydata.util import log
8093+from allmydata.util import log, fileutil
8094 from allmydata.util.assertutil import precondition
8095 from allmydata.util.deferredutil import DeferredListShouldSucceed
8096 from allmydata.test.no_network import GridTestMixin
8097hunk ./src/allmydata/test/test_upload.py 972
8098                                         readonly=True))
8099         # Remove the first share from server 0.
8100         def _remove_share_0_from_server_0():
8101-            share_location = self.shares[0][2]
8102-            os.remove(share_location)
8103+            self.shares[0][2].remove()
8104         d.addCallback(lambda ign:
8105             _remove_share_0_from_server_0())
8106         # Set happy = 4 in the client.
8107hunk ./src/allmydata/test/test_upload.py 1847
8108             self._copy_share_to_server(3, 1)
8109             storedir = self.get_serverdir(0)
8110             # remove the storedir, wiping out any existing shares
8111-            shutil.rmtree(storedir)
8112+            fileutil.fp_remove(storedir)
8113             # create an empty storedir to replace the one we just removed
8114hunk ./src/allmydata/test/test_upload.py 1849
8115-            os.mkdir(storedir)
8116+            storedir.mkdir()
8117             client = self.g.clients[0]
8118             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
8119             return client
8120hunk ./src/allmydata/test/test_upload.py 1888
8121             self._copy_share_to_server(3, 1)
8122             storedir = self.get_serverdir(0)
8123             # remove the storedir, wiping out any existing shares
8124-            shutil.rmtree(storedir)
8125+            fileutil.fp_remove(storedir)
8126             # create an empty storedir to replace the one we just removed
8127hunk ./src/allmydata/test/test_upload.py 1890
8128-            os.mkdir(storedir)
8129+            storedir.mkdir()
8130             client = self.g.clients[0]
8131             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
8132             return client
8133hunk ./src/allmydata/test/test_web.py 4870
8134         d.addErrback(self.explain_web_error)
8135         return d
8136 
8137-    def _assert_leasecount(self, ignored, which, expected):
8138+    def _assert_leasecount(self, which, expected):
8139         lease_counts = self.count_leases(self.uris[which])
8140         for (fn, num_leases) in lease_counts:
8141             if num_leases != expected:
8142hunk ./src/allmydata/test/test_web.py 4903
8143                 self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
8144         d.addCallback(_compute_fileurls)
8145 
8146-        d.addCallback(self._assert_leasecount, "one", 1)
8147-        d.addCallback(self._assert_leasecount, "two", 1)
8148-        d.addCallback(self._assert_leasecount, "mutable", 1)
8149+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8150+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8151+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8152 
8153         d.addCallback(self.CHECK, "one", "t=check") # no add-lease
8154         def _got_html_good(res):
8155hunk ./src/allmydata/test/test_web.py 4913
8156             self.failIf("Not Healthy" in res, res)
8157         d.addCallback(_got_html_good)
8158 
8159-        d.addCallback(self._assert_leasecount, "one", 1)
8160-        d.addCallback(self._assert_leasecount, "two", 1)
8161-        d.addCallback(self._assert_leasecount, "mutable", 1)
8162+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8163+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8164+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8165 
8166         # this CHECK uses the original client, which uses the same
8167         # lease-secrets, so it will just renew the original lease
8168hunk ./src/allmydata/test/test_web.py 4922
8169         d.addCallback(self.CHECK, "one", "t=check&add-lease=true")
8170         d.addCallback(_got_html_good)
8171 
8172-        d.addCallback(self._assert_leasecount, "one", 1)
8173-        d.addCallback(self._assert_leasecount, "two", 1)
8174-        d.addCallback(self._assert_leasecount, "mutable", 1)
8175+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8176+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8177+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8178 
8179         # this CHECK uses an alternate client, which adds a second lease
8180         d.addCallback(self.CHECK, "one", "t=check&add-lease=true", clientnum=1)
8181hunk ./src/allmydata/test/test_web.py 4930
8182         d.addCallback(_got_html_good)
8183 
8184-        d.addCallback(self._assert_leasecount, "one", 2)
8185-        d.addCallback(self._assert_leasecount, "two", 1)
8186-        d.addCallback(self._assert_leasecount, "mutable", 1)
8187+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8188+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8189+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8190 
8191         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true")
8192         d.addCallback(_got_html_good)
8193hunk ./src/allmydata/test/test_web.py 4937
8194 
8195-        d.addCallback(self._assert_leasecount, "one", 2)
8196-        d.addCallback(self._assert_leasecount, "two", 1)
8197-        d.addCallback(self._assert_leasecount, "mutable", 1)
8198+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8199+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8200+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8201 
8202         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true",
8203                       clientnum=1)
8204hunk ./src/allmydata/test/test_web.py 4945
8205         d.addCallback(_got_html_good)
8206 
8207-        d.addCallback(self._assert_leasecount, "one", 2)
8208-        d.addCallback(self._assert_leasecount, "two", 1)
8209-        d.addCallback(self._assert_leasecount, "mutable", 2)
8210+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8211+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8212+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 2))
8213 
8214         d.addErrback(self.explain_web_error)
8215         return d
8216hunk ./src/allmydata/test/test_web.py 4989
8217             self.failUnlessReallyEqual(len(units), 4+1)
8218         d.addCallback(_done)
8219 
8220-        d.addCallback(self._assert_leasecount, "root", 1)
8221-        d.addCallback(self._assert_leasecount, "one", 1)
8222-        d.addCallback(self._assert_leasecount, "mutable", 1)
8223+        d.addCallback(lambda ign: self._assert_leasecount("root", 1))
8224+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8225+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8226 
8227         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true")
8228         d.addCallback(_done)
8229hunk ./src/allmydata/test/test_web.py 4996
8230 
8231-        d.addCallback(self._assert_leasecount, "root", 1)
8232-        d.addCallback(self._assert_leasecount, "one", 1)
8233-        d.addCallback(self._assert_leasecount, "mutable", 1)
8234+        d.addCallback(lambda ign: self._assert_leasecount("root", 1))
8235+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8236+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8237 
8238         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true",
8239                       clientnum=1)
8240hunk ./src/allmydata/test/test_web.py 5004
8241         d.addCallback(_done)
8242 
8243-        d.addCallback(self._assert_leasecount, "root", 2)
8244-        d.addCallback(self._assert_leasecount, "one", 2)
8245-        d.addCallback(self._assert_leasecount, "mutable", 2)
8246+        d.addCallback(lambda ign: self._assert_leasecount("root", 2))
8247+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8248+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 2))
8249 
8250         d.addErrback(self.explain_web_error)
8251         return d
8252}
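Taken together, these test changes converge on a single construction pattern for a disk-backed server. A minimal sketch (editor's illustration; the imports and signatures are those used in the hunks above, and the basedir value is made up):

    from twisted.python.filepath import FilePath
    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    fp = FilePath("storage/example")
    backend = DiskBackend(fp, readonly=False, reserved_space=0)
    ss = StorageServer("\x00" * 20, backend, fp)  # serverid, backend, FilePath

    # crawlers now take the backend and a FilePath for their state file:
    # c = BucketEnumeratingCrawler(backend, fp.child("statefile"))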
8253[Fix more shallow bugs, mainly FilePathification. Also, remove the max_space_per_bucket parameter from BucketWriter since it can be obtained from the _max_size attribute of the share (via a new get_allocated_size() accessor). refs #999
8254david-sarah@jacaranda.org**20110921221421
8255 Ignore-this: 600e3ccef8533aa43442fa576c7d88cf
8256] {
8257hunk ./src/allmydata/scripts/debug.py 642
8258     /home/warner/testnet/node-2/storage/shares/44k/44kai1tui348689nrw8fjegc8c/2
8259     """
8260     from allmydata.storage.server import si_a2b
8261-    from allmydata.storage.backends.disk_backend import si_si2dir
8262+    from allmydata.storage.backends.disk.disk_backend import si_si2dir
8263     from allmydata.util.encodingutil import quote_filepath
8264 
8265     out = options.stdout
8266hunk ./src/allmydata/scripts/debug.py 648
8267     si = si_a2b(options.si_s)
8268     for nodedir in options.nodedirs:
8269-        sharedir = si_si2dir(nodedir.child("storage").child("shares"), si)
8270+        sharedir = si_si2dir(FilePath(nodedir).child("storage").child("shares"), si)
8271         if sharedir.exists():
8272             for sharefp in sharedir.children():
8273                 print >>out, quote_filepath(sharefp, quotemarks=False)
8274hunk ./src/allmydata/storage/backends/disk/disk_backend.py 189
8275         incominghome = self._incominghomedir.child(str(shnum))
8276         immsh = ImmutableDiskShare(self.get_storage_index(), shnum, incominghome, finalhome,
8277                                    max_size=max_space_per_bucket)
8278-        bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
8279+        bw = BucketWriter(storageserver, immsh, lease_info, canary)
8280         if self._discard_storage:
8281             bw.throw_out_all_data = True
8282         return bw
8283hunk ./src/allmydata/storage/backends/disk/immutable.py 147
8284     def unlink(self):
8285         self._home.remove()
8286 
8287+    def get_allocated_size(self):
8288+        return self._max_size
8289+
8290     def get_size(self):
8291         return self._home.getsize()
8292 
8293hunk ./src/allmydata/storage/bucket.py 15
8294 class BucketWriter(Referenceable):
8295     implements(RIBucketWriter)
8296 
8297-    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
8298+    def __init__(self, ss, immutableshare, lease_info, canary):
8299         self.ss = ss
8300hunk ./src/allmydata/storage/bucket.py 17
8301-        self._max_size = max_size # don't allow the client to write more than this
8302         self._canary = canary
8303         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
8304         self.closed = False
8305hunk ./src/allmydata/storage/bucket.py 27
8306         self._share.add_lease(lease_info)
8307 
8308     def allocated_size(self):
8309-        return self._max_size
8310+        return self._share.get_allocated_size()
8311 
8312     def remote_write(self, offset, data):
8313         start = time.time()
8314hunk ./src/allmydata/storage/crawler.py 480
8315             self.state["bucket-counts"][cycle] = {}
8316         self.state["bucket-counts"][cycle][prefix] = len(sharesets)
8317         if prefix in self.prefixes[:self.num_sample_prefixes]:
8318-            self.state["storage-index-samples"][prefix] = (cycle, sharesets)
8319+            si_strings = [shareset.get_storage_index_string() for shareset in sharesets]
8320+            self.state["storage-index-samples"][prefix] = (cycle, si_strings)
8321 
8322     def finished_cycle(self, cycle):
8323         last_counts = self.state["bucket-counts"].get(cycle, [])
8324hunk ./src/allmydata/storage/expirer.py 281
8325         # copy() needs to become a deepcopy
8326         h["space-recovered"] = s["space-recovered"].copy()
8327 
8328-        history = pickle.load(self.historyfp.getContent())
8329+        history = pickle.loads(self.historyfp.getContent())
8330         history[cycle] = h
8331         while len(history) > 10:
8332             oldcycles = sorted(history.keys())
8333hunk ./src/allmydata/storage/expirer.py 355
8334         progress = self.get_progress()
8335 
8336         state = ShareCrawler.get_state(self) # does a shallow copy
8337-        history = pickle.load(self.historyfp.getContent())
8338+        history = pickle.loads(self.historyfp.getContent())
8339         state["history"] = history
8340 
8341         if not progress["cycle-in-progress"]:
8342hunk ./src/allmydata/test/test_download.py 199
8343                     for shnum in immutable_shares[clientnum]:
8344                         if s._shnum == shnum:
8345                             share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
8346-                            share_dir.child(str(shnum)).remove()
8347+                            fileutil.fp_remove(share_dir.child(str(shnum)))
8348         d.addCallback(_clobber_some_shares)
8349         d.addCallback(lambda ign: download_to_data(n))
8350         d.addCallback(_got_data)
8351hunk ./src/allmydata/test/test_download.py 224
8352             for clientnum in immutable_shares:
8353                 for shnum in immutable_shares[clientnum]:
8354                     share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
8355-                    share_dir.child(str(shnum)).remove()
8356+                    fileutil.fp_remove(share_dir.child(str(shnum)))
8357             # now a new download should fail with NoSharesError. We want a
8358             # new ImmutableFileNode so it will forget about the old shares.
8359             # If we merely called create_node_from_uri() without first
8360hunk ./src/allmydata/test/test_repairer.py 415
8361         def _test_corrupt(ignored):
8362             olddata = {}
8363             shares = self.find_uri_shares(self.uri)
8364-            for (shnum, serverid, sharefile) in shares:
8365-                olddata[ (shnum, serverid) ] = open(sharefile, "rb").read()
8366+            for (shnum, serverid, sharefp) in shares:
8367+                olddata[ (shnum, serverid) ] = sharefp.getContent()
8368             for sh in shares:
8369                 self.corrupt_share(sh, common._corrupt_uri_extension)
8370hunk ./src/allmydata/test/test_repairer.py 419
8371-            for (shnum, serverid, sharefile) in shares:
8372-                newdata = open(sharefile, "rb").read()
8373+            for (shnum, serverid, sharefp) in shares:
8374+                newdata = sharefp.getContent()
8375                 self.failIfEqual(olddata[ (shnum, serverid) ], newdata)
8376         d.addCallback(_test_corrupt)
8377 
8378hunk ./src/allmydata/test/test_storage.py 63
8379 
8380 class Bucket(unittest.TestCase):
8381     def make_workdir(self, name):
8382-        basedir = os.path.join("storage", "Bucket", name)
8383-        incoming = os.path.join(basedir, "tmp", "bucket")
8384-        final = os.path.join(basedir, "bucket")
8385-        fileutil.make_dirs(basedir)
8386-        fileutil.make_dirs(os.path.join(basedir, "tmp"))
8387+        basedir = FilePath("storage").child("Bucket").child(name)
8388+        tmpdir = basedir.child("tmp")
8389+        tmpdir.makedirs()
8390+        incoming = tmpdir.child("bucket")
8391+        final = basedir.child("bucket")
8392         return incoming, final
8393 
8394     def bucket_writer_closed(self, bw, consumed):
8395hunk ./src/allmydata/test/test_storage.py 87
8396 
8397     def test_create(self):
8398         incoming, final = self.make_workdir("test_create")
8399-        bw = BucketWriter(self, incoming, final, 200, self.make_lease(),
8400-                          FakeCanary())
8401+        share = ImmutableDiskShare("", 0, incoming, final, 200)
8402+        bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
8403         bw.remote_write(0, "a"*25)
8404         bw.remote_write(25, "b"*25)
8405         bw.remote_write(50, "c"*25)
8406hunk ./src/allmydata/test/test_storage.py 97
8407 
8408     def test_readwrite(self):
8409         incoming, final = self.make_workdir("test_readwrite")
8410-        bw = BucketWriter(self, incoming, final, 200, self.make_lease(),
8411-                          FakeCanary())
8412+        share = ImmutableDiskShare("", 0, incoming, final, 200)
8413+        bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
8414         bw.remote_write(0, "a"*25)
8415         bw.remote_write(25, "b"*25)
8416         bw.remote_write(50, "c"*7) # last block may be short
8417hunk ./src/allmydata/test/test_storage.py 140
8418 
8419         incoming, final = self.make_workdir("test_read_past_end_of_share_data")
8420 
8421-        fileutil.write(final, share_file_data)
8422+        final.setContent(share_file_data)
8423 
8424         mockstorageserver = mock.Mock()
8425 
8426hunk ./src/allmydata/test/test_storage.py 179
8427 
8428 class BucketProxy(unittest.TestCase):
8429     def make_bucket(self, name, size):
8430-        basedir = os.path.join("storage", "BucketProxy", name)
8431-        incoming = os.path.join(basedir, "tmp", "bucket")
8432-        final = os.path.join(basedir, "bucket")
8433-        fileutil.make_dirs(basedir)
8434-        fileutil.make_dirs(os.path.join(basedir, "tmp"))
8435-        bw = BucketWriter(self, incoming, final, size, self.make_lease(),
8436-                          FakeCanary())
8437+        basedir = FilePath("storage").child("BucketProxy").child(name)
8438+        tmpdir = basedir.child("tmp")
8439+        tmpdir.makedirs()
8440+        incoming = tmpdir.child("bucket")
8441+        final = basedir.child("bucket")
8442+        share = ImmutableDiskShare("", 0, incoming, final, size)
8443+        bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
8444         rb = RemoteBucket()
8445         rb.target = bw
8446         return bw, rb, final
8447hunk ./src/allmydata/test/test_storage.py 206
8448         pass
8449 
8450     def test_create(self):
8451-        bw, rb, sharefname = self.make_bucket("test_create", 500)
8452+        bw, rb, sharefp = self.make_bucket("test_create", 500)
8453         bp = WriteBucketProxy(rb, None,
8454                               data_size=300,
8455                               block_size=10,
8456hunk ./src/allmydata/test/test_storage.py 237
8457                         for i in (1,9,13)]
8458         uri_extension = "s" + "E"*498 + "e"
8459 
8460-        bw, rb, sharefname = self.make_bucket(name, sharesize)
8461+        bw, rb, sharefp = self.make_bucket(name, sharesize)
8462         bp = wbp_class(rb, None,
8463                        data_size=95,
8464                        block_size=25,
8465hunk ./src/allmydata/test/test_storage.py 258
8466 
8467         # now read everything back
8468         def _start_reading(res):
8469-            br = BucketReader(self, sharefname)
8470+            br = BucketReader(self, sharefp)
8471             rb = RemoteBucket()
8472             rb.target = br
8473             server = NoNetworkServer("abc", None)
8474hunk ./src/allmydata/test/test_storage.py 373
8475         for i, wb in writers.items():
8476             wb.remote_write(0, "%10d" % i)
8477             wb.remote_close()
8478-        storedir = os.path.join(self.workdir("test_dont_overfill_dirs"),
8479-                                "shares")
8480-        children_of_storedir = set(os.listdir(storedir))
8481+        storedir = self.workdir("test_dont_overfill_dirs").child("shares")
8482+        children_of_storedir = sorted([child.basename() for child in storedir.children()])
8483 
8484         # Now store another one under another storageindex that has leading
8485         # chars the same as the first storageindex.
8486hunk ./src/allmydata/test/test_storage.py 382
8487         for i, wb in writers.items():
8488             wb.remote_write(0, "%10d" % i)
8489             wb.remote_close()
8490-        storedir = os.path.join(self.workdir("test_dont_overfill_dirs"),
8491-                                "shares")
8492-        new_children_of_storedir = set(os.listdir(storedir))
8493+        storedir = self.workdir("test_dont_overfill_dirs").child("shares")
8494+        new_children_of_storedir = sorted([child.basename() for child in storedir.children()])
8495         self.failUnlessEqual(children_of_storedir, new_children_of_storedir)
8496 
8497     def test_remove_incoming(self):
8498hunk ./src/allmydata/test/test_storage.py 390
8499         ss = self.create("test_remove_incoming")
8500         already, writers = self.allocate(ss, "vid", range(3), 10)
8501         for i,wb in writers.items():
8502+            incoming_share_home = wb._share._home
8503             wb.remote_write(0, "%10d" % i)
8504             wb.remote_close()
8505hunk ./src/allmydata/test/test_storage.py 393
8506-        incoming_share_dir = wb.incominghome
8507-        incoming_bucket_dir = os.path.dirname(incoming_share_dir)
8508-        incoming_prefix_dir = os.path.dirname(incoming_bucket_dir)
8509-        incoming_dir = os.path.dirname(incoming_prefix_dir)
8510-        self.failIf(os.path.exists(incoming_bucket_dir), incoming_bucket_dir)
8511-        self.failIf(os.path.exists(incoming_prefix_dir), incoming_prefix_dir)
8512-        self.failUnless(os.path.exists(incoming_dir), incoming_dir)
8513+        incoming_bucket_dir = incoming_share_home.parent()
8514+        incoming_prefix_dir = incoming_bucket_dir.parent()
8515+        incoming_dir = incoming_prefix_dir.parent()
8516+        self.failIf(incoming_bucket_dir.exists(), incoming_bucket_dir)
8517+        self.failIf(incoming_prefix_dir.exists(), incoming_prefix_dir)
8518+        self.failUnless(incoming_dir.exists(), incoming_dir)
8519 
8520     def test_abort(self):
8521         # remote_abort, when called on a writer, should make sure that
8522hunk ./src/allmydata/test/test_upload.py 1849
8523             # remove the storedir, wiping out any existing shares
8524             fileutil.fp_remove(storedir)
8525             # create an empty storedir to replace the one we just removed
8526-            storedir.mkdir()
8527+            storedir.makedirs()
8528             client = self.g.clients[0]
8529             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
8530             return client
8531hunk ./src/allmydata/test/test_upload.py 1890
8532             # remove the storedir, wiping out any existing shares
8533             fileutil.fp_remove(storedir)
8534             # create an empty storedir to replace the one we just removed
8535-            storedir.mkdir()
8536+            storedir.makedirs()
8537             client = self.g.clients[0]
8538             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
8539             return client
8540}
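
Note: the storedir.mkdir() -> storedir.makedirs() changes above are likely more than cosmetic: FilePath appears to provide createDirectory() and makedirs() but no mkdir() method, so the old calls would presumably have raised AttributeError once storedir became a FilePath. A minimal sketch of the corrected idiom:

    from twisted.python.filepath import FilePath

    storedir = FilePath("storage").child("example").child("shares")
    # FilePath has no mkdir(); createDirectory() makes a single level
    # (the parent must exist), while makedirs() also creates any
    # missing intermediate directories.
    storedir.makedirs()
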
8541[uri.py: resolve a conflict between trunk and the pluggable-backends patches. refs #999
8542david-sarah@jacaranda.org**20110921222038
8543 Ignore-this: ffeeab60d8e71a6a29a002d024d76fcf
8544] {
8545hunk ./src/allmydata/uri.py 829
8546     def is_mutable(self):
8547         return False
8548 
8549+    def is_readonly(self):
8550+        return True
8551+
8552+    def get_readonly(self):
8553+        return self
8554+
8555+
8556 class DirectoryURIVerifier(_DirectoryBaseURI):
8557     implements(IVerifierURI)
8558 
8559hunk ./src/allmydata/uri.py 855
8560     def is_mutable(self):
8561         return False
8562 
8563+    def is_readonly(self):
8564+        return True
8565+
8566+    def get_readonly(self):
8567+        return self
8568+
8569 
8570 class ImmutableDirectoryURIVerifier(DirectoryURIVerifier):
8571     implements(IVerifierURI)
8572}
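
Note: these additions give verifier URIs the same cap-normalization interface as the other URI classes; a verify cap is already read-only, so get_readonly() returns the object itself. An illustrative sketch of the contract (not the real classes):

    class ExampleVerifierCap(object):
        # a verify cap grants integrity checking only, so it is
        # neither mutable nor capable of further attenuation
        def is_mutable(self):
            return False
        def is_readonly(self):
            return True
        def get_readonly(self):
            return self

    cap = ExampleVerifierCap()
    assert cap.get_readonly() is cap  # callers can normalize any cap uniformly
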
8573[Fix some more test failures. refs #999
8574david-sarah@jacaranda.org**20110922045451
8575 Ignore-this: b726193cbd03a7c3d343f6e4a0f33ee7
8576] {
8577hunk ./src/allmydata/scripts/debug.py 42
8578     from allmydata.util.encodingutil import quote_output
8579 
8580     out = options.stdout
8581+    filename = options['filename']
8582 
8583     # check the version, to see if we have a mutable or immutable share
8584hunk ./src/allmydata/scripts/debug.py 45
8585-    print >>out, "share filename: %s" % quote_output(options['filename'])
8586+    print >>out, "share filename: %s" % quote_output(filename)
8587 
8588hunk ./src/allmydata/scripts/debug.py 47
8589-    share = get_share("", 0, fp)
8590+    share = get_share("", 0, FilePath(filename))
8591     if share.sharetype == "mutable":
8592         return dump_mutable_share(options, share)
8593     else:
8594hunk ./src/allmydata/storage/backends/disk/mutable.py 85
8595         self.parent = parent # for logging
8596 
8597     def log(self, *args, **kwargs):
8598-        return self.parent.log(*args, **kwargs)
8599+        if self.parent:
8600+            return self.parent.log(*args, **kwargs)
8601 
8602     def create(self, serverid, write_enabler):
8603         assert not self._home.exists()
8604hunk ./src/allmydata/storage/common.py 6
8605 class DataTooLargeError(Exception):
8606     pass
8607 
8608-class UnknownMutableContainerVersionError(Exception):
8609+class UnknownContainerVersionError(Exception):
8610     pass
8611 
8612hunk ./src/allmydata/storage/common.py 9
8613-class UnknownImmutableContainerVersionError(Exception):
8614+class UnknownMutableContainerVersionError(UnknownContainerVersionError):
8615+    pass
8616+
8617+class UnknownImmutableContainerVersionError(UnknownContainerVersionError):
8618     pass
8619 
8620 
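
Note: introducing UnknownContainerVersionError as a shared base class lets callers that do not care whether a malformed container was mutable or immutable catch a single exception; the test_bad_magic change later in this patch depends on exactly that. A sketch of the hierarchy in use:

    class UnknownContainerVersionError(Exception):
        pass

    class UnknownMutableContainerVersionError(UnknownContainerVersionError):
        pass

    class UnknownImmutableContainerVersionError(UnknownContainerVersionError):
        pass

    try:
        raise UnknownImmutableContainerVersionError("sharefile had version 0")
    except UnknownContainerVersionError, e:
        # catches either subclass
        print "bad container: %s" % (e,)
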
8621hunk ./src/allmydata/storage/crawler.py 208
8622         try:
8623             state = pickle.loads(self.statefp.getContent())
8624         except EnvironmentError:
8625+            if self.statefp.exists():
8626+                raise
8627             state = {"version": 1,
8628                      "last-cycle-finished": None,
8629                      "current-cycle": None,
8630hunk ./src/allmydata/storage/server.py 24
8631 
8632     name = 'storage'
8633     LeaseCheckerClass = LeaseCheckingCrawler
8634+    BucketCounterClass = BucketCountingCrawler
8635     DEFAULT_EXPIRATION_POLICY = {
8636         'enabled': False,
8637         'mode': 'age',
8638hunk ./src/allmydata/storage/server.py 70
8639 
8640     def _setup_bucket_counter(self):
8641         statefp = self._statedir.child("bucket_counter.state")
8642-        self.bucket_counter = BucketCountingCrawler(self.backend, statefp)
8643+        self.bucket_counter = self.BucketCounterClass(self.backend, statefp)
8644         self.bucket_counter.setServiceParent(self)
8645 
8646     def _setup_lease_checker(self, expiration_policy):
8647hunk ./src/allmydata/storage/server.py 224
8648             share.add_or_renew_lease(lease_info)
8649             alreadygot.add(share.get_shnum())
8650 
8651-        for shnum in sharenums - alreadygot:
8652+        for shnum in set(sharenums) - alreadygot:
8653             if shareset.has_incoming(shnum):
8654                 # Note that we don't create BucketWriters for shnums that
8655                 # have a partial share (in incoming/), so if a second upload
8656hunk ./src/allmydata/storage/server.py 247
8657 
8658     def remote_add_lease(self, storageindex, renew_secret, cancel_secret,
8659                          owner_num=1):
8660-        # cancel_secret is no longer used.
8661         start = time.time()
8662         self.count("add-lease")
8663         new_expire_time = time.time() + 31*24*60*60
8664hunk ./src/allmydata/storage/server.py 250
8665-        lease_info = LeaseInfo(owner_num, renew_secret,
8666+        lease_info = LeaseInfo(owner_num, renew_secret, cancel_secret,
8667                                new_expire_time, self._serverid)
8668 
8669         try:
8670hunk ./src/allmydata/storage/server.py 254
8671-            self.backend.add_or_renew_lease(lease_info)
8672+            shareset = self.backend.get_shareset(storageindex)
8673+            shareset.add_or_renew_lease(lease_info)
8674         finally:
8675             self.add_latency("add-lease", time.time() - start)
8676 
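
Note: the server.py hunks above contain two small correctness fixes: sharenums may arrive as any iterable and must be coerced to a set before subtracting alreadygot, and remote_add_lease once again threads the cancel_secret into LeaseInfo and routes the lease through the shareset. The set coercion in miniature:

    sharenums = [0, 1, 2]                    # e.g. a list off the wire
    alreadygot = set([1])
    remaining = set(sharenums) - alreadygot  # a bare list has no '-' operator
    assert remaining == set([0, 2])
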
8677hunk ./src/allmydata/test/test_crawler.py 3
8678 
8679 import time
8680-import os.path
8681+
8682 from twisted.trial import unittest
8683 from twisted.application import service
8684 from twisted.internet import defer
8685hunk ./src/allmydata/test/test_crawler.py 10
8686 from twisted.python.filepath import FilePath
8687 from foolscap.api import eventually, fireEventually
8688 
8689-from allmydata.util import fileutil, hashutil, pollmixin
8690+from allmydata.util import hashutil, pollmixin
8691 from allmydata.storage.server import StorageServer, si_b2a
8692 from allmydata.storage.crawler import ShareCrawler, TimeSliceExceeded
8693 from allmydata.storage.backends.disk.disk_backend import DiskBackend
8694hunk ./src/allmydata/test/test_mutable.py 3024
8695             cso.stderr = StringIO()
8696             debug.catalog_shares(cso)
8697             shares = cso.stdout.getvalue().splitlines()
8698+            self.failIf(len(shares) < 1, shares)
8699             oneshare = shares[0] # all shares should be MDMF
8700             self.failIf(oneshare.startswith("UNKNOWN"), oneshare)
8701             self.failUnless(oneshare.startswith("MDMF"), oneshare)
8702hunk ./src/allmydata/test/test_storage.py 1
8703-import time, os.path, platform, stat, re, simplejson, struct, shutil, itertools
8704+import time, os.path, platform, re, simplejson, struct, itertools
8705 
8706 import mock
8707 
8708hunk ./src/allmydata/test/test_storage.py 15
8709 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
8710 from allmydata.storage.server import StorageServer
8711 from allmydata.storage.backends.disk.disk_backend import DiskBackend
8712+from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
8713 from allmydata.storage.backends.disk.mutable import MutableDiskShare
8714 from allmydata.storage.bucket import BucketWriter, BucketReader
8715hunk ./src/allmydata/test/test_storage.py 18
8716-from allmydata.storage.common import DataTooLargeError, \
8717+from allmydata.storage.common import DataTooLargeError, UnknownContainerVersionError, \
8718      UnknownMutableContainerVersionError, UnknownImmutableContainerVersionError
8719 from allmydata.storage.lease import LeaseInfo
8720 from allmydata.storage.crawler import BucketCountingCrawler
8721hunk ./src/allmydata/test/test_storage.py 88
8722 
8723     def test_create(self):
8724         incoming, final = self.make_workdir("test_create")
8725-        share = ImmutableDiskShare("", 0, incoming, final, 200)
8726+        share = ImmutableDiskShare("", 0, incoming, final, max_size=200)
8727         bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
8728         bw.remote_write(0, "a"*25)
8729         bw.remote_write(25, "b"*25)
8730hunk ./src/allmydata/test/test_storage.py 98
8731 
8732     def test_readwrite(self):
8733         incoming, final = self.make_workdir("test_readwrite")
8734-        share = ImmutableDiskShare("", 0, incoming, 200)
8735+        share = ImmutableDiskShare("", 0, incoming, final, max_size=200)
8736         bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
8737         bw.remote_write(0, "a"*25)
8738         bw.remote_write(25, "b"*25)
8739hunk ./src/allmydata/test/test_storage.py 106
8740         bw.remote_close()
8741 
8742         # now read from it
8743-        br = BucketReader(self, bw.finalhome)
8744+        br = BucketReader(self, share)
8745         self.failUnlessEqual(br.remote_read(0, 25), "a"*25)
8746         self.failUnlessEqual(br.remote_read(25, 25), "b"*25)
8747         self.failUnlessEqual(br.remote_read(50, 7), "c"*7)
8748hunk ./src/allmydata/test/test_storage.py 131
8749         ownernumber = struct.pack('>L', 0)
8750         renewsecret  = 'THIS LETS ME RENEW YOUR FILE....'
8751         assert len(renewsecret) == 32
8752-        cancelsecret = 'THIS LETS ME KILL YOUR FILE HAHA'
8753+        cancelsecret = 'THIS USED TO LET ME KILL YR FILE'
8754         assert len(cancelsecret) == 32
8755         expirationtime = struct.pack('>L', 60*60*24*31) # 31 days in seconds
8756 
8757hunk ./src/allmydata/test/test_storage.py 142
8758         incoming, final = self.make_workdir("test_read_past_end_of_share_data")
8759 
8760         final.setContent(share_file_data)
8761+        share = ImmutableDiskShare("", 0, final)
8762 
8763         mockstorageserver = mock.Mock()
8764 
8765hunk ./src/allmydata/test/test_storage.py 147
8766         # Now read from it.
8767-        br = BucketReader(mockstorageserver, final)
8768+        br = BucketReader(mockstorageserver, share)
8769 
8770         self.failUnlessEqual(br.remote_read(0, len(share_data)), share_data)
8771 
8772hunk ./src/allmydata/test/test_storage.py 260
8773 
8774         # now read everything back
8775         def _start_reading(res):
8776-            br = BucketReader(self, sharefp)
8777+            share = ImmutableDiskShare("", 0, sharefp)
8778+            br = BucketReader(self, share)
8779             rb = RemoteBucket()
8780             rb.target = br
8781             server = NoNetworkServer("abc", None)
8782hunk ./src/allmydata/test/test_storage.py 346
8783         if 'cygwin' in syslow or 'windows' in syslow or 'darwin' in syslow:
8784             raise unittest.SkipTest("If your filesystem doesn't support efficient sparse files then it is very expensive (Mac OS X and Windows don't support efficient sparse files).")
8785 
8786-        avail = fileutil.get_available_space('.', 512*2**20)
8787+        avail = fileutil.get_available_space(FilePath('.'), 512*2**20)
8788         if avail <= 4*2**30:
8789             raise unittest.SkipTest("This test will spuriously fail if you have less than 4 GiB free on your filesystem.")
8790 
8791hunk ./src/allmydata/test/test_storage.py 476
8792         w[0].remote_write(0, "\xff"*10)
8793         w[0].remote_close()
8794 
8795-        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
8796+        fp = ss.backend.get_shareset("si1")._sharehomedir.child("0")
8797         f = fp.open("rb+")
8798hunk ./src/allmydata/test/test_storage.py 478
8799-        f.seek(0)
8800-        f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1
8801-        f.close()
8802+        try:
8803+            f.seek(0)
8804+            f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1
8805+        finally:
8806+            f.close()
8807 
8808         ss.remote_get_buckets("allocate")
8809 
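
Note: the corruption injected above is a four-byte big-endian zero written over the container's version field; since the minimum version ever written is 1, a zero must be rejected. Self-contained:

    import struct

    header = struct.pack(">L", 0)               # 4-byte big-endian unsigned int
    assert header == "\x00\x00\x00\x00"
    assert struct.unpack(">L", header) == (0,)  # version 0: never valid
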
8810hunk ./src/allmydata/test/test_storage.py 575
8811 
8812     def test_seek(self):
8813         basedir = self.workdir("test_seek_behavior")
8814-        fileutil.make_dirs(basedir)
8815-        filename = os.path.join(basedir, "testfile")
8816-        f = open(filename, "wb")
8817-        f.write("start")
8818-        f.close()
8819+        basedir.makedirs()
8820+        fp = basedir.child("testfile")
8821+        fp.setContent("start")
8822+
8823         # mode="w" allows seeking-to-create-holes, but truncates pre-existing
8824         # files. mode="a" preserves previous contents but does not allow
8825         # seeking-to-create-holes. mode="r+" allows both.
8826hunk ./src/allmydata/test/test_storage.py 582
8827-        f = open(filename, "rb+")
8828-        f.seek(100)
8829-        f.write("100")
8830-        f.close()
8831-        filelen = os.stat(filename)[stat.ST_SIZE]
8832+        f = fp.open("rb+")
8833+        try:
8834+            f.seek(100)
8835+            f.write("100")
8836+        finally:
8837+            f.close()
8838+        fp.restat()
8839+        filelen = fp.getsize()
8840         self.failUnlessEqual(filelen, 100+3)
8841hunk ./src/allmydata/test/test_storage.py 591
8842-        f2 = open(filename, "rb")
8843-        self.failUnlessEqual(f2.read(5), "start")
8844-
8845+        f2 = fp.open("rb")
8846+        try:
8847+            self.failUnlessEqual(f2.read(5), "start")
8848+        finally:
8849+            f2.close()
8850 
8851     def test_leases(self):
8852         ss = self.create("test_leases")
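
Note: the rewritten test_seek depends on seek-past-EOF semantics: writing after seeking beyond the end extends the file with a NUL-filled hole, so the final size is the offset plus the bytes written. A standalone sketch using the same FilePath calls as the hunk (restat() refreshes the cached stat data before getsize()):

    from twisted.python.filepath import FilePath

    fp = FilePath("testfile")
    fp.setContent("start")      # 5 bytes
    f = fp.open("rb+")
    try:
        f.seek(100)
        f.write("100")          # 3 bytes at offset 100
    finally:
        f.close()
    fp.restat()
    assert fp.getsize() == 103  # hole from byte 5 to byte 100
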
8853hunk ./src/allmydata/test/test_storage.py 693
8854 
8855     def test_readonly(self):
8856         workdir = self.workdir("test_readonly")
8857-        ss = StorageServer(workdir, "\x00" * 20, readonly_storage=True)
8858+        backend = DiskBackend(workdir, readonly=True)
8859+        ss = StorageServer("\x00" * 20, backend, workdir)
8860         ss.setServiceParent(self.sparent)
8861 
8862         already,writers = self.allocate(ss, "vid", [0,1,2], 75)
8863hunk ./src/allmydata/test/test_storage.py 710
8864 
8865     def test_discard(self):
8866         # discard is really only used for other tests, but we test it anyways
8867+        # XXX replace this with a null backend test
8868         workdir = self.workdir("test_discard")
8869hunk ./src/allmydata/test/test_storage.py 712
8870-        ss = StorageServer(workdir, "\x00" * 20, discard_storage=True)
8871+        backend = DiskBackend(workdir, readonly=False, discard_storage=True)
8872+        ss = StorageServer("\x00" * 20, backend, workdir)
8873         ss.setServiceParent(self.sparent)
8874 
8875         already,writers = self.allocate(ss, "vid", [0,1,2], 75)
8876hunk ./src/allmydata/test/test_storage.py 731
8877 
8878     def test_advise_corruption(self):
8879         workdir = self.workdir("test_advise_corruption")
8880-        ss = StorageServer(workdir, "\x00" * 20, discard_storage=True)
8881+        backend = DiskBackend(workdir, readonly=False, discard_storage=True)
8882+        ss = StorageServer("\x00" * 20, backend, workdir)
8883         ss.setServiceParent(self.sparent)
8884 
8885         si0_s = base32.b2a("si0")
8886hunk ./src/allmydata/test/test_storage.py 738
8887         ss.remote_advise_corrupt_share("immutable", "si0", 0,
8888                                        "This share smells funny.\n")
8889-        reportdir = os.path.join(workdir, "corruption-advisories")
8890-        reports = os.listdir(reportdir)
8891+        reportdir = workdir.child("corruption-advisories")
8892+        reports = [child.basename() for child in reportdir.children()]
8893         self.failUnlessEqual(len(reports), 1)
8894         report_si0 = reports[0]
8895hunk ./src/allmydata/test/test_storage.py 742
8896-        self.failUnlessIn(si0_s, report_si0)
8897-        f = open(os.path.join(reportdir, report_si0), "r")
8898-        report = f.read()
8899-        f.close()
8900+        self.failUnlessIn(si0_s, str(report_si0))
8901+        report = reportdir.child(report_si0).getContent()
8902+
8903         self.failUnlessIn("type: immutable", report)
8904         self.failUnlessIn("storage_index: %s" % si0_s, report)
8905         self.failUnlessIn("share_number: 0", report)
8906hunk ./src/allmydata/test/test_storage.py 762
8907         self.failUnlessEqual(set(b.keys()), set([1]))
8908         b[1].remote_advise_corrupt_share("This share tastes like dust.\n")
8909 
8910-        reports = os.listdir(reportdir)
8911+        reports = [child.basename() for child in reportdir.children()]
8912         self.failUnlessEqual(len(reports), 2)
8913hunk ./src/allmydata/test/test_storage.py 764
8914-        report_si1 = [r for r in reports if si1_s in r][0]
8915-        f = open(os.path.join(reportdir, report_si1), "r")
8916-        report = f.read()
8917-        f.close()
8918+        report_si1 = [r for r in reports if si1_s in str(r)][0]
8919+        report = reportdir.child(report_si1).getContent()
8920+
8921         self.failUnlessIn("type: immutable", report)
8922         self.failUnlessIn("storage_index: %s" % si1_s, report)
8923         self.failUnlessIn("share_number: 1", report)
8924hunk ./src/allmydata/test/test_storage.py 783
8925         return self.sparent.stopService()
8926 
8927     def workdir(self, name):
8928-        basedir = os.path.join("storage", "MutableServer", name)
8929-        return basedir
8930+        return FilePath("storage").child("MutableServer").child(name)
8931 
8932     def create(self, name):
8933         workdir = self.workdir(name)
8934hunk ./src/allmydata/test/test_storage.py 787
8935-        ss = StorageServer(workdir, "\x00" * 20)
8936+        backend = DiskBackend(workdir)
8937+        ss = StorageServer("\x00" * 20, backend, workdir)
8938         ss.setServiceParent(self.sparent)
8939         return ss
8940 
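
Note: these fixtures show the constructor shape after the backend split: StorageServer no longer takes a directory plus flags, but a serverid, a backend object, and a state directory, with backend-specific options (readonly, discard_storage) moving onto DiskBackend. The pattern repeated throughout these hunks:

    from twisted.python.filepath import FilePath
    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    workdir = FilePath("storage").child("Example").child("test")
    backend = DiskBackend(workdir)   # or DiskBackend(workdir, readonly=True)
    ss = StorageServer("\x00" * 20, backend, workdir)
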
8941hunk ./src/allmydata/test/test_storage.py 810
8942         cancel_secret = self.cancel_secret(lease_tag)
8943         rstaraw = ss.remote_slot_testv_and_readv_and_writev
8944         testandwritev = dict( [ (shnum, ([], [], None) )
8945-                         for shnum in sharenums ] )
8946+                                for shnum in sharenums ] )
8947         readv = []
8948         rc = rstaraw(storage_index,
8949                      (write_enabler, renew_secret, cancel_secret),
8950hunk ./src/allmydata/test/test_storage.py 824
8951     def test_bad_magic(self):
8952         ss = self.create("test_bad_magic")
8953         self.allocate(ss, "si1", "we1", self._lease_secret.next(), set([0]), 10)
8954-        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
8955+        fp = ss.backend.get_shareset("si1")._sharehomedir.child("0")
8956         f = fp.open("rb+")
8957hunk ./src/allmydata/test/test_storage.py 826
8958-        f.seek(0)
8959-        f.write("BAD MAGIC")
8960-        f.close()
8961+        try:
8962+            f.seek(0)
8963+            f.write("BAD MAGIC")
8964+        finally:
8965+            f.close()
8966         read = ss.remote_slot_readv
8967hunk ./src/allmydata/test/test_storage.py 832
8968-        e = self.failUnlessRaises(UnknownMutableContainerVersionError,
8969+
8970+        # This used to test for UnknownMutableContainerVersionError,
8971+        # but the current code raises UnknownImmutableContainerVersionError.
8972+        # (It changed because remote_slot_readv now works with either
8973+        # mutable or immutable shares.) Since the share file doesn't have
8974+        # the mutable magic, the new behaviour is arguably correct.
8975+        # For now, accept either exception.
8976+        e = self.failUnlessRaises(UnknownContainerVersionError,
8977                                   read, "si1", [0], [(0,10)])
8978hunk ./src/allmydata/test/test_storage.py 841
8979-        self.failUnlessIn(" had magic ", str(e))
8980+        self.failUnlessIn(" had ", str(e))
8981         self.failUnlessIn(" but we wanted ", str(e))
8982 
8983     def test_container_size(self):
8984hunk ./src/allmydata/test/test_storage.py 1248
8985 
8986         # create a random non-numeric file in the bucket directory, to
8987         # exercise the code that's supposed to ignore those.
8988-        bucket_dir = ss.backend.get_shareset("si1").sharehomedir
8989+        bucket_dir = ss.backend.get_shareset("si1")._sharehomedir
8990         bucket_dir.child("ignore_me.txt").setContent("you ought to be ignoring me\n")
8991 
8992hunk ./src/allmydata/test/test_storage.py 1251
8993-        s0 = MutableDiskShare(os.path.join(bucket_dir, "0"))
8994+        s0 = MutableDiskShare("", 0, bucket_dir.child("0"))
8995         self.failUnlessEqual(len(list(s0.get_leases())), 1)
8996 
8997         # add-lease on a missing storage index is silently ignored
8998hunk ./src/allmydata/test/test_storage.py 1365
8999         # note: this is a detail of the storage server implementation, and
9000         # may change in the future
9001         prefix = si[:2]
9002-        prefixdir = os.path.join(self.workdir("test_remove"), "shares", prefix)
9003-        bucketdir = os.path.join(prefixdir, si)
9004-        self.failUnless(os.path.exists(prefixdir), prefixdir)
9005-        self.failIf(os.path.exists(bucketdir), bucketdir)
9006+        prefixdir = self.workdir("test_remove").child("shares").child(prefix)
9007+        bucketdir = prefixdir.child(si)
9008+        self.failUnless(prefixdir.exists(), prefixdir)
9009+        self.failIf(bucketdir.exists(), bucketdir)
9010 
9011 
9012 class MDMFProxies(unittest.TestCase, ShouldFailMixin):
9013hunk ./src/allmydata/test/test_storage.py 1420
9014 
9015 
9016     def workdir(self, name):
9017-        basedir = os.path.join("storage", "MutableServer", name)
9018-        return basedir
9019-
9020+        return FilePath("storage").child("MDMFProxies").child(name)
9021 
9022     def create(self, name):
9023         workdir = self.workdir(name)
9024hunk ./src/allmydata/test/test_storage.py 1424
9025-        ss = StorageServer(workdir, "\x00" * 20)
9026+        backend = DiskBackend(workdir)
9027+        ss = StorageServer("\x00" * 20, backend, workdir)
9028         ss.setServiceParent(self.sparent)
9029         return ss
9030 
9031hunk ./src/allmydata/test/test_storage.py 2798
9032         return self.sparent.stopService()
9033 
9034     def workdir(self, name):
9035-        return FilePath("storage").child("Server").child(name)
9036+        return FilePath("storage").child("Stats").child(name)
9037 
9038     def create(self, name):
9039         workdir = self.workdir(name)
9040hunk ./src/allmydata/test/test_storage.py 2886
9041             d.callback(None)
9042 
9043 class MyStorageServer(StorageServer):
9044-    def add_bucket_counter(self):
9045-        statefile = os.path.join(self.storedir, "bucket_counter.state")
9046-        self.bucket_counter = MyBucketCountingCrawler(self, statefile)
9047-        self.bucket_counter.setServiceParent(self)
9048+    BucketCounterClass = MyBucketCountingCrawler
9049+
9050 
9051 class BucketCounter(unittest.TestCase, pollmixin.PollMixin):
9052 
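
Note: BucketCounterClass generalizes the class-attribute hook already used for the lease checker: _setup_bucket_counter() instantiates self.BucketCounterClass, so a test can substitute an instrumented crawler by overriding a single attribute instead of copying the setup method, which is exactly what MyStorageServer now does. In miniature:

    from allmydata.storage.server import StorageServer
    from allmydata.storage.crawler import BucketCountingCrawler

    class MyBucketCountingCrawler(BucketCountingCrawler):
        pass  # instrumentation would go here

    class MyStorageServer(StorageServer):
        BucketCounterClass = MyBucketCountingCrawler  # picked up by _setup_bucket_counter()
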
9053hunk ./src/allmydata/test/test_storage.py 2899
9054 
9055     def test_bucket_counter(self):
9056         basedir = "storage/BucketCounter/bucket_counter"
9057-        fileutil.make_dirs(basedir)
9058-        ss = StorageServer(basedir, "\x00" * 20)
9059+        fp = FilePath(basedir)
9060+        backend = DiskBackend(fp)
9061+        ss = StorageServer("\x00" * 20, backend, fp)
9062+
9063         # to make sure we capture the bucket-counting-crawler in the middle
9064         # of a cycle, we reach in and reduce its maximum slice time to 0. We
9065         # also make it start sooner than usual.
9066hunk ./src/allmydata/test/test_storage.py 2958
9067 
9068     def test_bucket_counter_cleanup(self):
9069         basedir = "storage/BucketCounter/bucket_counter_cleanup"
9070-        fileutil.make_dirs(basedir)
9071-        ss = StorageServer(basedir, "\x00" * 20)
9072+        fp = FilePath(basedir)
9073+        backend = DiskBackend(fp)
9074+        ss = StorageServer("\x00" * 20, backend, fp)
9075+
9076         # to make sure we capture the bucket-counting-crawler in the middle
9077         # of a cycle, we reach in and reduce its maximum slice time to 0.
9078         ss.bucket_counter.slow_start = 0
9079hunk ./src/allmydata/test/test_storage.py 3002
9080 
9081     def test_bucket_counter_eta(self):
9082         basedir = "storage/BucketCounter/bucket_counter_eta"
9083-        fileutil.make_dirs(basedir)
9084-        ss = MyStorageServer(basedir, "\x00" * 20)
9085+        fp = FilePath(basedir)
9086+        backend = DiskBackend(fp)
9087+        ss = MyStorageServer("\x00" * 20, backend, fp)
9088         ss.bucket_counter.slow_start = 0
9089         # these will be fired inside finished_prefix()
9090         hooks = ss.bucket_counter.hook_ds = [defer.Deferred() for i in range(3)]
9091hunk ./src/allmydata/test/test_storage.py 3125
9092 
9093     def test_basic(self):
9094         basedir = "storage/LeaseCrawler/basic"
9095-        fileutil.make_dirs(basedir)
9096-        ss = InstrumentedStorageServer(basedir, "\x00" * 20)
9097+        fp = FilePath(basedir)
9098+        backend = DiskBackend(fp)
9099+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp)
9100+
9101         # make it start sooner than usual.
9102         lc = ss.lease_checker
9103         lc.slow_start = 0
9104hunk ./src/allmydata/test/test_storage.py 3141
9105         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
9106 
9107         # add a non-sharefile to exercise another code path
9108-        fp = ss.backend.get_shareset(immutable_si_0).sharehomedir.child("not-a-share")
9109+        fp = ss.backend.get_shareset(immutable_si_0)._sharehomedir.child("not-a-share")
9110         fp.setContent("I am not a share.\n")
9111 
9112         # this is before the crawl has started, so we're not in a cycle yet
9113hunk ./src/allmydata/test/test_storage.py 3264
9114             self.failUnlessEqual(rec["configured-sharebytes"], 0)
9115 
9116             def _get_sharefile(si):
9117-                return list(ss._iter_share_files(si))[0]
9118+                return list(ss.backend.get_shareset(si).get_shares())[0]
9119             def count_leases(si):
9120                 return len(list(_get_sharefile(si).get_leases()))
9121             self.failUnlessEqual(count_leases(immutable_si_0), 1)
9122hunk ./src/allmydata/test/test_storage.py 3296
9123         for i,lease in enumerate(sf.get_leases()):
9124             if lease.renew_secret == renew_secret:
9125                 lease.expiration_time = new_expire_time
9126-                f = open(sf.home, 'rb+')
9127-                sf._write_lease_record(f, i, lease)
9128-                f.close()
9129+                f = sf._home.open('rb+')
9130+                try:
9131+                    sf._write_lease_record(f, i, lease)
9132+                finally:
9133+                    f.close()
9134                 return
9135         raise IndexError("unable to renew non-existent lease")
9136 
9137hunk ./src/allmydata/test/test_storage.py 3306
9138     def test_expire_age(self):
9139         basedir = "storage/LeaseCrawler/expire_age"
9140-        fileutil.make_dirs(basedir)
9141+        fp = FilePath(basedir)
9142+        backend = DiskBackend(fp)
9143+
9144         # setting 'override_lease_duration' to 2000 means that any lease that
9145         # is more than 2000 seconds old will be expired.
9146         expiration_policy = {
9147hunk ./src/allmydata/test/test_storage.py 3317
9148             'override_lease_duration': 2000,
9149             'sharetypes': ('mutable', 'immutable'),
9150         }
9151-        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
9152+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
9153+
9154         # make it start sooner than usual.
9155         lc = ss.lease_checker
9156         lc.slow_start = 0
9157hunk ./src/allmydata/test/test_storage.py 3330
9158         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
9159 
9160         def count_shares(si):
9161-            return len(list(ss._iter_share_files(si)))
9162+            return len(list(ss.backend.get_shareset(si).get_shares()))
9163         def _get_sharefile(si):
9164hunk ./src/allmydata/test/test_storage.py 3332
9165-            return list(ss._iter_share_files(si))[0]
9166+            return list(ss.backend.get_shareset(si).get_shares())[0]
9167         def count_leases(si):
9168             return len(list(_get_sharefile(si).get_leases()))
9169 
9170hunk ./src/allmydata/test/test_storage.py 3355
9171 
9172         sf0 = _get_sharefile(immutable_si_0)
9173         self.backdate_lease(sf0, self.renew_secrets[0], now - 1000)
9174-        sf0_size = os.stat(sf0.home).st_size
9175+        sf0_size = sf0.get_size()
9176 
9177         # immutable_si_1 gets an extra lease
9178         sf1 = _get_sharefile(immutable_si_1)
9179hunk ./src/allmydata/test/test_storage.py 3363
9180 
9181         sf2 = _get_sharefile(mutable_si_2)
9182         self.backdate_lease(sf2, self.renew_secrets[3], now - 1000)
9183-        sf2_size = os.stat(sf2.home).st_size
9184+        sf2_size = sf2.get_size()
9185 
9186         # mutable_si_3 gets an extra lease
9187         sf3 = _get_sharefile(mutable_si_3)
9188hunk ./src/allmydata/test/test_storage.py 3450
9189 
9190     def test_expire_cutoff_date(self):
9191         basedir = "storage/LeaseCrawler/expire_cutoff_date"
9192-        fileutil.make_dirs(basedir)
9193+        fp = FilePath(basedir)
9194+        backend = DiskBackend(fp)
9195+
9196         # setting 'cutoff_date' to 2000 seconds ago means that any lease that
9197         # is more than 2000 seconds old will be expired.
9198         now = time.time()
9199hunk ./src/allmydata/test/test_storage.py 3463
9200             'cutoff_date': then,
9201             'sharetypes': ('mutable', 'immutable'),
9202         }
9203-        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
9204+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
9205+
9206         # make it start sooner than usual.
9207         lc = ss.lease_checker
9208         lc.slow_start = 0
9209hunk ./src/allmydata/test/test_storage.py 3476
9210         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
9211 
9212         def count_shares(si):
9213-            return len(list(ss._iter_share_files(si)))
9214+            return len(list(ss.backend.get_shareset(si).get_shares()))
9215         def _get_sharefile(si):
9216hunk ./src/allmydata/test/test_storage.py 3478
9217-            return list(ss._iter_share_files(si))[0]
9218+            return list(ss.backend.get_shareset(si).get_shares())[0]
9219         def count_leases(si):
9220             return len(list(_get_sharefile(si).get_leases()))
9221 
9222hunk ./src/allmydata/test/test_storage.py 3505
9223 
9224         sf0 = _get_sharefile(immutable_si_0)
9225         self.backdate_lease(sf0, self.renew_secrets[0], new_expiration_time)
9226-        sf0_size = os.stat(sf0.home).st_size
9227+        sf0_size = sf0.get_size()
9228 
9229         # immutable_si_1 gets an extra lease
9230         sf1 = _get_sharefile(immutable_si_1)
9231hunk ./src/allmydata/test/test_storage.py 3513
9232 
9233         sf2 = _get_sharefile(mutable_si_2)
9234         self.backdate_lease(sf2, self.renew_secrets[3], new_expiration_time)
9235-        sf2_size = os.stat(sf2.home).st_size
9236+        sf2_size = sf2.get_size()
9237 
9238         # mutable_si_3 gets an extra lease
9239         sf3 = _get_sharefile(mutable_si_3)
9240hunk ./src/allmydata/test/test_storage.py 3605
9241 
9242     def test_only_immutable(self):
9243         basedir = "storage/LeaseCrawler/only_immutable"
9244-        fileutil.make_dirs(basedir)
9245+        fp = FilePath(basedir)
9246+        backend = DiskBackend(fp)
9247+
9248         # setting 'cutoff_date' to 2000 seconds ago means that any lease that
9249         # is more than 2000 seconds old will be expired.
9250         now = time.time()
9251hunk ./src/allmydata/test/test_storage.py 3618
9252             'cutoff_date': then,
9253             'sharetypes': ('immutable',),
9254         }
9255-        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
9256+        ss = StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
9257         lc = ss.lease_checker
9258         lc.slow_start = 0
9259         webstatus = StorageStatus(ss)
9260hunk ./src/allmydata/test/test_storage.py 3629
9261         new_expiration_time = now - 3000 + 31*24*60*60
9262 
9263         def count_shares(si):
9264-            return len(list(ss._iter_share_files(si)))
9265+            return len(list(ss.backend.get_shareset(si).get_shares()))
9266         def _get_sharefile(si):
9267hunk ./src/allmydata/test/test_storage.py 3631
9268-            return list(ss._iter_share_files(si))[0]
9269+            return list(ss.backend.get_shareset(si).get_shares())[0]
9270         def count_leases(si):
9271             return len(list(_get_sharefile(si).get_leases()))
9272 
9273hunk ./src/allmydata/test/test_storage.py 3668
9274 
9275     def test_only_mutable(self):
9276         basedir = "storage/LeaseCrawler/only_mutable"
9277-        fileutil.make_dirs(basedir)
9278+        fp = FilePath(basedir)
9279+        backend = DiskBackend(fp)
9280+
9281         # setting 'cutoff_date' to 2000 seconds ago means that any lease that
9282         # is more than 2000 seconds old will be expired.
9283         now = time.time()
9284hunk ./src/allmydata/test/test_storage.py 3681
9285             'cutoff_date': then,
9286             'sharetypes': ('mutable',),
9287         }
9288-        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
9289+        ss = StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
9290         lc = ss.lease_checker
9291         lc.slow_start = 0
9292         webstatus = StorageStatus(ss)
9293hunk ./src/allmydata/test/test_storage.py 3692
9294         new_expiration_time = now - 3000 + 31*24*60*60
9295 
9296         def count_shares(si):
9297-            return len(list(ss._iter_share_files(si)))
9298+            return len(list(ss.backend.get_shareset(si).get_shares()))
9299         def _get_sharefile(si):
9300hunk ./src/allmydata/test/test_storage.py 3694
9301-            return list(ss._iter_share_files(si))[0]
9302+            return list(ss.backend.get_shareset(si).get_shares())[0]
9303         def count_leases(si):
9304             return len(list(_get_sharefile(si).get_leases()))
9305 
9306hunk ./src/allmydata/test/test_storage.py 3731
9307 
9308     def test_bad_mode(self):
9309         basedir = "storage/LeaseCrawler/bad_mode"
9310-        fileutil.make_dirs(basedir)
9311+        fp = FilePath(basedir)
9312+        backend = DiskBackend(fp)
9313+
9314+        expiration_policy = {
9315+            'enabled': True,
9316+            'mode': 'bogus',
9317+            'override_lease_duration': None,
9318+            'cutoff_date': None,
9319+            'sharetypes': ('mutable', 'immutable'),
9320+        }
9321         e = self.failUnlessRaises(ValueError,
9322hunk ./src/allmydata/test/test_storage.py 3742
9323-                                  StorageServer, basedir, "\x00" * 20,
9324-                                  expiration_mode="bogus")
9325+                                  StorageServer, "\x00" * 20, backend, fp,
9326+                                  expiration_policy=expiration_policy)
9327         self.failUnlessIn("GC mode 'bogus' must be 'age' or 'cutoff-date'", str(e))
9328 
9329     def test_parse_duration(self):
9330hunk ./src/allmydata/test/test_storage.py 3767
9331 
9332     def test_limited_history(self):
9333         basedir = "storage/LeaseCrawler/limited_history"
9334-        fileutil.make_dirs(basedir)
9335-        ss = StorageServer(basedir, "\x00" * 20)
9336+        fp = FilePath(basedir)
9337+        backend = DiskBackend(fp)
9338+        ss = StorageServer("\x00" * 20, backend, fp)
9339+
9340         # make it start sooner than usual.
9341         lc = ss.lease_checker
9342         lc.slow_start = 0
9343hunk ./src/allmydata/test/test_storage.py 3801
9344 
9345     def test_unpredictable_future(self):
9346         basedir = "storage/LeaseCrawler/unpredictable_future"
9347-        fileutil.make_dirs(basedir)
9348-        ss = StorageServer(basedir, "\x00" * 20)
9349+        fp = FilePath(basedir)
9350+        backend = DiskBackend(fp)
9351+        ss = StorageServer("\x00" * 20, backend, fp)
9352+
9353         # make it start sooner than usual.
9354         lc = ss.lease_checker
9355         lc.slow_start = 0
9356hunk ./src/allmydata/test/test_storage.py 3866
9357 
9358     def test_no_st_blocks(self):
9359         basedir = "storage/LeaseCrawler/no_st_blocks"
9360-        fileutil.make_dirs(basedir)
9361+        fp = FilePath(basedir)
9362+        backend = DiskBackend(fp)
9363+
9364         # A negative 'override_lease_duration' means that the "configured-"
9365         # space-recovered counts will be non-zero, since all shares will have
9366         # expired by then.
9367hunk ./src/allmydata/test/test_storage.py 3878
9368             'override_lease_duration': -1000,
9369             'sharetypes': ('mutable', 'immutable'),
9370         }
9371-        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20, expiration_policy)
9372+        ss = No_ST_BLOCKS_StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
9373 
9374         # make it start sooner than usual.
9375         lc = ss.lease_checker
9376hunk ./src/allmydata/test/test_storage.py 3911
9377             UnknownImmutableContainerVersionError,
9378             ]
9379         basedir = "storage/LeaseCrawler/share_corruption"
9380-        fileutil.make_dirs(basedir)
9381-        ss = InstrumentedStorageServer(basedir, "\x00" * 20)
9382+        fp = FilePath(basedir)
9383+        backend = DiskBackend(fp)
9384+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp)
9385         w = StorageStatus(ss)
9386         # make it start sooner than usual.
9387         lc = ss.lease_checker
9388hunk ./src/allmydata/test/test_storage.py 3928
9389         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
9390         first = min(self.sis)
9391         first_b32 = base32.b2a(first)
9392-        fp = ss.backend.get_shareset(first).sharehomedir.child("0")
9393+        fp = ss.backend.get_shareset(first)._sharehomedir.child("0")
9394         f = fp.open("rb+")
9395hunk ./src/allmydata/test/test_storage.py 3930
9396-        f.seek(0)
9397-        f.write("BAD MAGIC")
9398-        f.close()
9399+        try:
9400+            f.seek(0)
9401+            f.write("BAD MAGIC")
9402+        finally:
9403+            f.close()
9404         # if get_share_file() doesn't see the correct mutable magic, it
9405         # assumes the file is an immutable share, and then
9406         # immutable.ShareFile sees a bad version. So regardless of which kind
9407hunk ./src/allmydata/test/test_storage.py 3943
9408 
9409         # also create an empty bucket
9410         empty_si = base32.b2a("\x04"*16)
9411-        empty_bucket_dir = ss.backend.get_shareset(empty_si).sharehomedir
9412+        empty_bucket_dir = ss.backend.get_shareset(empty_si)._sharehomedir
9413         fileutil.fp_make_dirs(empty_bucket_dir)
9414 
9415         ss.setServiceParent(self.s)
9416hunk ./src/allmydata/test/test_storage.py 4031
9417 
9418     def test_status(self):
9419         basedir = "storage/WebStatus/status"
9420-        fileutil.make_dirs(basedir)
9421-        ss = StorageServer(basedir, "\x00" * 20)
9422+        fp = FilePath(basedir)
9423+        backend = DiskBackend(fp)
9424+        ss = StorageServer("\x00" * 20, backend, fp)
9425         ss.setServiceParent(self.s)
9426         w = StorageStatus(ss)
9427         d = self.render1(w)
9428hunk ./src/allmydata/test/test_storage.py 4065
9429         # Some platforms may have no disk stats API. Make sure the code can handle that
9430         # (test runs on all platforms).
9431         basedir = "storage/WebStatus/status_no_disk_stats"
9432-        fileutil.make_dirs(basedir)
9433-        ss = StorageServer(basedir, "\x00" * 20)
9434+        fp = FilePath(basedir)
9435+        backend = DiskBackend(fp)
9436+        ss = StorageServer("\x00" * 20, backend, fp)
9437         ss.setServiceParent(self.s)
9438         w = StorageStatus(ss)
9439         html = w.renderSynchronously()
9440hunk ./src/allmydata/test/test_storage.py 4085
9441         # If the API to get disk stats exists but a call to it fails, then the status should
9442         # show that no shares will be accepted, and get_available_space() should be 0.
9443         basedir = "storage/WebStatus/status_bad_disk_stats"
9444-        fileutil.make_dirs(basedir)
9445-        ss = StorageServer(basedir, "\x00" * 20)
9446+        fp = FilePath(basedir)
9447+        backend = DiskBackend(fp)
9448+        ss = StorageServer("\x00" * 20, backend, fp)
9449         ss.setServiceParent(self.s)
9450         w = StorageStatus(ss)
9451         html = w.renderSynchronously()
9452}
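
Note: a recurring element in the hunks above is the expiration_policy dictionary, which replaces the old loose keyword arguments such as expiration_mode. Collecting the keys as they appear in these tests:

    expiration_policy = {
        'enabled': True,
        'mode': 'age',                    # 'age' or 'cutoff-date'; anything else raises ValueError
        'override_lease_duration': 2000,  # seconds; consulted in 'age' mode
        'cutoff_date': None,              # consulted in 'cutoff-date' mode
        'sharetypes': ('mutable', 'immutable'),
    }
    # passed as: StorageServer(serverid, backend, statedir,
    #                          expiration_policy=expiration_policy)
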
9453[Fix most of the crawler tests. refs #999
9454david-sarah@jacaranda.org**20110922183008
9455 Ignore-this: 116c0848008f3989ba78d87c07ec783c
9456] {
9457hunk ./src/allmydata/storage/backends/disk/disk_backend.py 160
9458         self._discard_storage = discard_storage
9459 
9460     def get_overhead(self):
9461-        return (fileutil.get_disk_usage(self._sharehomedir) +
9462-                fileutil.get_disk_usage(self._incominghomedir))
9463+        return (fileutil.get_used_space(self._sharehomedir) +
9464+                fileutil.get_used_space(self._incominghomedir))
9465 
9466     def get_shares(self):
9467         """
9468hunk ./src/allmydata/storage/crawler.py 2
9469 
9470-import time, struct
9471-import cPickle as pickle
9472+import time, pickle, struct
9473 from twisted.internet import reactor
9474 from twisted.application import service
9475 
9476hunk ./src/allmydata/storage/crawler.py 205
9477         #                            shareset to be processed, or None if we
9478         #                            are sleeping between cycles
9479         try:
9480-            state = pickle.loads(self.statefp.getContent())
9481+            pickled = self.statefp.getContent()
9482         except EnvironmentError:
9483             if self.statefp.exists():
9484                 raise
9485hunk ./src/allmydata/storage/crawler.py 215
9486                      "last-complete-prefix": None,
9487                      "last-complete-bucket": None,
9488                      }
9489+        else:
9490+            state = pickle.loads(pickled)
9491+
9492         state.setdefault("current-cycle-start-time", time.time()) # approximate
9493         self.state = state
9494         lcp = state["last-complete-prefix"]
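
Note: the reworked load above separates I/O failure from parse failure: only a genuinely missing state file falls back to defaults, while an existing-but-unreadable file, or a corrupt pickle, now raises. The shape of the pattern, with make_default_state() standing in for the literal dict:

    import pickle

    def load_state(statefp, make_default_state):
        # statefp is assumed to be FilePath-like, as in the hunk above;
        # make_default_state is a stand-in for the caller's default dict.
        try:
            pickled = statefp.getContent()
        except EnvironmentError:
            if statefp.exists():
                raise                    # exists but unreadable: report it
            return make_default_state()  # genuinely missing: start fresh
        else:
            return pickle.loads(pickled) # corrupt pickles propagate
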
9495hunk ./src/allmydata/storage/crawler.py 246
9496         else:
9497             last_complete_prefix = self.prefixes[lcpi]
9498         self.state["last-complete-prefix"] = last_complete_prefix
9499-        self.statefp.setContent(pickle.dumps(self.state))
9500+        pickled = pickle.dumps(self.state)
9501+        self.statefp.setContent(pickled)
9502 
9503     def startService(self):
9504         # arrange things to look like we were just sleeping, so
9505hunk ./src/allmydata/storage/expirer.py 86
9506         # initialize history
9507         if not self.historyfp.exists():
9508             history = {} # cyclenum -> dict
9509-            self.historyfp.setContent(pickle.dumps(history))
9510+            pickled = pickle.dumps(history)
9511+            self.historyfp.setContent(pickled)
9512 
9513     def create_empty_cycle_dict(self):
9514         recovered = self.create_empty_recovered_dict()
9515hunk ./src/allmydata/storage/expirer.py 111
9516     def started_cycle(self, cycle):
9517         self.state["cycle-to-date"] = self.create_empty_cycle_dict()
9518 
9519-    def process_storage_index(self, cycle, prefix, container):
9520+    def process_shareset(self, cycle, prefix, shareset):
9521         would_keep_shares = []
9522         wks = None
9523hunk ./src/allmydata/storage/expirer.py 114
9524-        sharetype = None
9525 
9526hunk ./src/allmydata/storage/expirer.py 115
9527-        for share in container.get_shares():
9528-            sharetype = share.sharetype
9529+        for share in shareset.get_shares():
9530             try:
9531                 wks = self.process_share(share)
9532             except (UnknownMutableContainerVersionError,
9533hunk ./src/allmydata/storage/expirer.py 128
9534                 wks = (1, 1, 1, "unknown")
9535             would_keep_shares.append(wks)
9536 
9537-        container_type = None
9538+        shareset_type = None
9539         if wks:
9540hunk ./src/allmydata/storage/expirer.py 130
9541-            # use the last share's sharetype as the container type
9542-            container_type = wks[3]
9543+            # use the last share's type as the shareset type
9544+            shareset_type = wks[3]
9545         rec = self.state["cycle-to-date"]["space-recovered"]
9546         self.increment(rec, "examined-buckets", 1)
9547hunk ./src/allmydata/storage/expirer.py 134
9548-        if sharetype:
9549-            self.increment(rec, "examined-buckets-"+container_type, 1)
9550+        if shareset_type:
9551+            self.increment(rec, "examined-buckets-"+shareset_type, 1)
9552 
9553hunk ./src/allmydata/storage/expirer.py 137
9554-        container_diskbytes = container.get_overhead()
9555+        shareset_diskbytes = shareset.get_overhead()
9556 
9557         if sum([wks[0] for wks in would_keep_shares]) == 0:
9558hunk ./src/allmydata/storage/expirer.py 140
9559-            self.increment_container_space("original", container_diskbytes, sharetype)
9560+            self.increment_shareset_space("original", shareset_diskbytes, shareset_type)
9561         if sum([wks[1] for wks in would_keep_shares]) == 0:
9562hunk ./src/allmydata/storage/expirer.py 142
9563-            self.increment_container_space("configured", container_diskbytes, sharetype)
9564+            self.increment_shareset_space("configured", shareset_diskbytes, shareset_type)
9565         if sum([wks[2] for wks in would_keep_shares]) == 0:
9566hunk ./src/allmydata/storage/expirer.py 144
9567-            self.increment_container_space("actual", container_diskbytes, sharetype)
9568+            self.increment_shareset_space("actual", shareset_diskbytes, shareset_type)
9569 
9570     def process_share(self, share):
9571         sharetype = share.sharetype
9572hunk ./src/allmydata/storage/expirer.py 189
9573 
9574         so_far = self.state["cycle-to-date"]
9575         self.increment(so_far["leases-per-share-histogram"], num_leases, 1)
9576-        self.increment_space("examined", diskbytes, sharetype)
9577+        self.increment_space("examined", sharebytes, diskbytes, sharetype)
9578 
9579         would_keep_share = [1, 1, 1, sharetype]
9580 
9581hunk ./src/allmydata/storage/expirer.py 220
9582             self.increment(so_far_sr, a+"-sharebytes-"+sharetype, sharebytes)
9583             self.increment(so_far_sr, a+"-diskbytes-"+sharetype, diskbytes)
9584 
9585-    def increment_container_space(self, a, container_diskbytes, container_type):
9586+    def increment_shareset_space(self, a, shareset_diskbytes, shareset_type):
9587         rec = self.state["cycle-to-date"]["space-recovered"]
9588hunk ./src/allmydata/storage/expirer.py 222
9589-        self.increment(rec, a+"-diskbytes", container_diskbytes)
9590+        self.increment(rec, a+"-diskbytes", shareset_diskbytes)
9591         self.increment(rec, a+"-buckets", 1)
9592hunk ./src/allmydata/storage/expirer.py 224
9593-        if container_type:
9594-            self.increment(rec, a+"-diskbytes-"+container_type, container_diskbytes)
9595-            self.increment(rec, a+"-buckets-"+container_type, 1)
9596+        if shareset_type:
9597+            self.increment(rec, a+"-diskbytes-"+shareset_type, shareset_diskbytes)
9598+            self.increment(rec, a+"-buckets-"+shareset_type, 1)
9599 
9600     def increment(self, d, k, delta=1):
9601         if k not in d:
9602hunk ./src/allmydata/storage/expirer.py 280
9603         # copy() needs to become a deepcopy
9604         h["space-recovered"] = s["space-recovered"].copy()
9605 
9606-        history = pickle.loads(self.historyfp.getContent())
9607+        pickled = self.historyfp.getContent()
9608+        history = pickle.loads(pickled)
9609         history[cycle] = h
9610         while len(history) > 10:
9611             oldcycles = sorted(history.keys())
9612hunk ./src/allmydata/storage/expirer.py 286
9613             del history[oldcycles[0]]
9614-        self.historyfp.setContent(pickle.dumps(history))
9615+        repickled = pickle.dumps(history)
9616+        self.historyfp.setContent(repickled)
9617 
9618     def get_state(self):
9619         """In addition to the crawler state described in
9620hunk ./src/allmydata/storage/expirer.py 356
9621         progress = self.get_progress()
9622 
9623         state = ShareCrawler.get_state(self) # does a shallow copy
9624-        history = pickle.loads(self.historyfp.getContent())
9625+        pickled = self.historyfp.getContent()
9626+        history = pickle.loads(pickled)
9627         state["history"] = history
9628 
9629         if not progress["cycle-in-progress"]:
9630hunk ./src/allmydata/test/test_crawler.py 25
9631         ShareCrawler.__init__(self, *args, **kwargs)
9632         self.all_buckets = []
9633         self.finished_d = defer.Deferred()
9634-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
9635-        self.all_buckets.append(storage_index_b32)
9636+
9637+    def process_shareset(self, cycle, prefix, shareset):
9638+        self.all_buckets.append(shareset.get_storage_index_string())
9639+
9640     def finished_cycle(self, cycle):
9641         eventually(self.finished_d.callback, None)
9642 
9643hunk ./src/allmydata/test/test_crawler.py 41
9644         self.all_buckets = []
9645         self.finished_d = defer.Deferred()
9646         self.yield_cb = None
9647-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
9648-        self.all_buckets.append(storage_index_b32)
9649+
9650+    def process_shareset(self, cycle, prefix, shareset):
9651+        self.all_buckets.append(shareset.get_storage_index_string())
9652         self.countdown -= 1
9653         if self.countdown == 0:
9654             # force a timeout. We restore it in yielding()
9655hunk ./src/allmydata/test/test_crawler.py 66
9656         self.accumulated = 0.0
9657         self.cycles = 0
9658         self.last_yield = 0.0
9659-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
9660+
9661+    def process_shareset(self, cycle, prefix, shareset):
9662         start = time.time()
9663         time.sleep(0.05)
9664         elapsed = time.time() - start
9665hunk ./src/allmydata/test/test_crawler.py 85
9666         ShareCrawler.__init__(self, *args, **kwargs)
9667         self.counter = 0
9668         self.finished_d = defer.Deferred()
9669-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
9670+
9671+    def process_shareset(self, cycle, prefix, shareset):
9672         self.counter += 1
9673     def finished_cycle(self, cycle):
9674         self.finished_d.callback(None)
9675hunk ./src/allmydata/test/test_storage.py 3041
9676 
9677 class InstrumentedLeaseCheckingCrawler(LeaseCheckingCrawler):
9678     stop_after_first_bucket = False
9679-    def process_bucket(self, *args, **kwargs):
9680-        LeaseCheckingCrawler.process_bucket(self, *args, **kwargs)
9681+
9682+    def process_shareset(self, cycle, prefix, shareset):
9683+        LeaseCheckingCrawler.process_shareset(self, cycle, prefix, shareset)
9684         if self.stop_after_first_bucket:
9685             self.stop_after_first_bucket = False
9686             self.cpu_slice = -1.0
9687hunk ./src/allmydata/test/test_storage.py 3051
9688         if not self.stop_after_first_bucket:
9689             self.cpu_slice = 500
9690 
9691+class InstrumentedStorageServer(StorageServer):
9692+    LeaseCheckerClass = InstrumentedLeaseCheckingCrawler
9693+
9694+
9695 class BrokenStatResults:
9696     pass
9697 class No_ST_BLOCKS_LeaseCheckingCrawler(LeaseCheckingCrawler):
9698hunk ./src/allmydata/test/test_storage.py 3069
9699             setattr(bsr, attrname, getattr(s, attrname))
9700         return bsr
9701 
9702-class InstrumentedStorageServer(StorageServer):
9703-    LeaseCheckerClass = InstrumentedLeaseCheckingCrawler
9704 class No_ST_BLOCKS_StorageServer(StorageServer):
9705     LeaseCheckerClass = No_ST_BLOCKS_LeaseCheckingCrawler
9706 
9707}
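
Note: the crawler API change in this patch replaces process_bucket(cycle, prefix, prefixdir, storage_index_b32) with process_shareset(cycle, prefix, shareset); subclasses receive the shareset object and ask it for its storage-index string instead of being handed pathnames. A minimal subclass in the style of the test crawlers above:

    from allmydata.storage.crawler import ShareCrawler

    class EnumeratingCrawler(ShareCrawler):
        def __init__(self, *args, **kwargs):
            ShareCrawler.__init__(self, *args, **kwargs)
            self.all_buckets = []

        def process_shareset(self, cycle, prefix, shareset):
            self.all_buckets.append(shareset.get_storage_index_string())
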
9708[Reinstate the cancel_lease methods of ImmutableDiskShare and MutableDiskShare, since they are needed for lease expiry. refs #999
9709david-sarah@jacaranda.org**20110922183323
9710 Ignore-this: a11fb0dd0078ff627cb727fc769ec848
9711] {
9712hunk ./src/allmydata/storage/backends/disk/immutable.py 260
9713         except IndexError:
9714             self.add_lease(lease_info)
9715 
9716+    def cancel_lease(self, cancel_secret):
9717+        """Remove a lease with the given cancel_secret. If the last lease is
9718+        cancelled, the file will be removed. Return the number of bytes that
9719+        were freed (by truncating the list of leases, and possibly by
9720+        deleting the file). Raise IndexError if there was no lease with the
9721+        given cancel_secret.
9722+        """
9723+
9724+        leases = list(self.get_leases())
9725+        num_leases_removed = 0
9726+        for i, lease in enumerate(leases):
9727+            if constant_time_compare(lease.cancel_secret, cancel_secret):
9728+                leases[i] = None
9729+                num_leases_removed += 1
9730+        if not num_leases_removed:
9731+            raise IndexError("unable to find matching lease to cancel")
9732+
9733+        # At this point at least one lease has been removed, so pack and
9734+        # write out the remaining leases. We write these out in the same
9735+        # order as they were added, so that if we crash while doing this,
9736+        # we won't lose any non-cancelled leases.
9737+        leases = [l for l in leases if l] # remove the cancelled leases
9738+        if len(leases) > 0:
9739+            f = self._home.open('rb+')
9740+            try:
9741+                for i, lease in enumerate(leases):
9742+                    self._write_lease_record(f, i, lease)
9743+                self._write_num_leases(f, len(leases))
9744+                self._truncate_leases(f, len(leases))
9745+            finally:
9746+                f.close()
9747+            space_freed = self.LEASE_SIZE * num_leases_removed
9748+        else:
9749+            space_freed = fileutil.get_used_space(self._home)
9750+            self.unlink()
9751+        return space_freed
9753+
9754hunk ./src/allmydata/storage/backends/disk/mutable.py 361
9755         except IndexError:
9756             self.add_lease(lease_info)
9757 
9758+    def cancel_lease(self, cancel_secret):
9759+        """Remove any leases with the given cancel_secret. If the last lease
9760+        is cancelled, the file will be removed. Return the number of bytes
9761+        that were freed (by truncating the list of leases, and possibly by
9762+        deleting the file). Raise IndexError if there was no lease with the
9763+        given cancel_secret."""
9764+
9765+        # XXX can this be more like ImmutableDiskShare.cancel_lease?
9766+
9767+        accepting_nodeids = set()
9768+        modified = 0
9769+        remaining = 0
9770+        blank_lease = LeaseInfo(owner_num=0,
9771+                                renew_secret="\x00"*32,
9772+                                cancel_secret="\x00"*32,
9773+                                expiration_time=0,
9774+                                nodeid="\x00"*20)
9775+        f = self._home.open('rb+')
9776+        try:
9777+            for (leasenum, lease) in self._enumerate_leases(f):
9778+                accepting_nodeids.add(lease.nodeid)
9779+                if constant_time_compare(lease.cancel_secret, cancel_secret):
9780+                    self._write_lease_record(f, leasenum, blank_lease)
9781+                    modified += 1
9782+                else:
9783+                    remaining += 1
9784+            if modified:
9785+                freed_space = self._pack_leases(f)
9786+        finally:
9787+            f.close()
9788+
9789+        if modified > 0:
9790+            if remaining == 0:
9791+                freed_space = fileutil.get_used_space(self._home)
9792+                self.unlink()
9793+            return freed_space
9794+
9795+        msg = ("Unable to cancel non-existent lease. I have leases "
9796+               "accepted by nodeids: ")
9797+        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
9798+                         for anid in accepting_nodeids])
9799+        msg += " ."
9800+        raise IndexError(msg)
9801+
9802+    def _pack_leases(self, f):
9803+        # TODO: reclaim space from cancelled leases
9804+        return 0
9805+
9806     def _read_write_enabler_and_nodeid(self, f):
9807         f.seek(0)
9808         data = f.read(self.HEADER_SIZE)
9809}
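For the immutable share above, the space accounting works out as follows (a sketch of intended usage, not code from this patch):

    try:
        freed = share.cancel_lease(cancel_secret)
    except IndexError:
        freed = 0    # no lease matched the given cancel secret
    # If other leases remain, only the lease records are reclaimed:
    #     freed == ImmutableDiskShare.LEASE_SIZE * num_leases_removed
    # If the last lease was cancelled, the share file is unlinked and
    # freed is the whole used space of the file.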
9810[Blank line cleanups.
9811david-sarah@jacaranda.org**20110923012044
9812 Ignore-this: 8e1c4ecb5b0c65673af35872876a8591
9813] {
9814hunk ./src/allmydata/interfaces.py 33
9815 LeaseRenewSecret = Hash # used to protect lease renewal requests
9816 LeaseCancelSecret = Hash # used to protect lease cancellation requests
9817 
9818+
9819 class RIStubClient(RemoteInterface):
9820     """Each client publishes a service announcement for a dummy object called
9821     the StubClient. This object doesn't actually offer any services, but the
9822hunk ./src/allmydata/interfaces.py 42
9823     the grid and the client versions in use). This is the (empty)
9824     RemoteInterface for the StubClient."""
9825 
9826+
9827 class RIBucketWriter(RemoteInterface):
9828     """ Objects of this kind live on the server side. """
9829     def write(offset=Offset, data=ShareData):
9830hunk ./src/allmydata/interfaces.py 61
9831         """
9832         return None
9833 
9834+
9835 class RIBucketReader(RemoteInterface):
9836     def read(offset=Offset, length=ReadSize):
9837         return ShareData
9838hunk ./src/allmydata/interfaces.py 78
9839         documentation.
9840         """
9841 
9842+
9843 TestVector = ListOf(TupleOf(Offset, ReadSize, str, str))
9844 # elements are (offset, length, operator, specimen)
9845 # operator is one of "lt, le, eq, ne, ge, gt"
9846hunk ./src/allmydata/interfaces.py 95
9847 ReadData = ListOf(ShareData)
9848 # returns data[offset:offset+length] for each element of TestVector
9849 
9850+
9851 class RIStorageServer(RemoteInterface):
9852     __remote_name__ = "RIStorageServer.tahoe.allmydata.com"
9853 
9854hunk ./src/allmydata/interfaces.py 2255
9855 
9856     def get_storage_index():
9857         """Return a string with the (binary) storage index."""
9858+
9859     def get_storage_index_string():
9860         """Return a string with the (printable) abbreviated storage index."""
9861hunk ./src/allmydata/interfaces.py 2258
9862+
9863     def get_uri():
9864         """Return the (string) URI of the object that was checked."""
9865 
9866hunk ./src/allmydata/interfaces.py 2353
9867     def get_report():
9868         """Return a list of strings with more detailed results."""
9869 
9870+
9871 class ICheckAndRepairResults(Interface):
9872     """I contain the detailed results of a check/verify/repair operation.
9873 
9874hunk ./src/allmydata/interfaces.py 2363
9875 
9876     def get_storage_index():
9877         """Return a string with the (binary) storage index."""
9878+
9879     def get_storage_index_string():
9880         """Return a string with the (printable) abbreviated storage index."""
9881hunk ./src/allmydata/interfaces.py 2366
9882+
9883     def get_repair_attempted():
9884         """Return a boolean, True if a repair was attempted. We might not
9885         attempt to repair the file because it was healthy, or healthy enough
9886hunk ./src/allmydata/interfaces.py 2372
9887         (i.e. some shares were missing but not enough to exceed some
9888         threshold), or because we don't know how to repair this object."""
9889+
9890     def get_repair_successful():
9891         """Return a boolean, True if repair was attempted and the file/dir
9892         was fully healthy afterwards. False if no repair was attempted or if
9893hunk ./src/allmydata/interfaces.py 2377
9894         a repair attempt failed."""
9895+
9896     def get_pre_repair_results():
9897         """Return an ICheckResults instance that describes the state of the
9898         file/dir before any repair was attempted."""
9899hunk ./src/allmydata/interfaces.py 2381
9900+
9901     def get_post_repair_results():
9902         """Return an ICheckResults instance that describes the state of the
9903         file/dir after any repair was attempted. If no repair was attempted,
9904hunk ./src/allmydata/interfaces.py 2615
9905         (childnode, metadata_dict) tuples), the directory will be populated
9906         with those children, otherwise it will be empty."""
9907 
9908+
9909 class IClientStatus(Interface):
9910     def list_all_uploads():
9911         """Return a list of uploader objects, one for each upload that
9912hunk ./src/allmydata/interfaces.py 2621
9913         currently has an object available (tracked with weakrefs). This is
9914         intended for debugging purposes."""
9915+
9916     def list_active_uploads():
9917         """Return a list of active IUploadStatus objects."""
9918hunk ./src/allmydata/interfaces.py 2624
9919+
9920     def list_recent_uploads():
9921         """Return a list of IUploadStatus objects for the most recently
9922         started uploads."""
9923hunk ./src/allmydata/interfaces.py 2633
9924         """Return a list of downloader objects, one for each download that
9925         currently has an object available (tracked with weakrefs). This is
9926         intended for debugging purposes."""
9927+
9928     def list_active_downloads():
9929         """Return a list of active IDownloadStatus objects."""
9930hunk ./src/allmydata/interfaces.py 2636
9931+
9932     def list_recent_downloads():
9933         """Return a list of IDownloadStatus objects for the most recently
9934         started downloads."""
9935hunk ./src/allmydata/interfaces.py 2641
9936 
9937+
9938 class IUploadStatus(Interface):
9939     def get_started():
9940         """Return a timestamp (float with seconds since epoch) indicating
9941hunk ./src/allmydata/interfaces.py 2646
9942         when the operation was started."""
9943+
9944     def get_storage_index():
9945         """Return a string with the (binary) storage index in use on this
9946         upload. Returns None if the storage index has not yet been
9947hunk ./src/allmydata/interfaces.py 2651
9948         calculated."""
9949+
9950     def get_size():
9951         """Return an integer with the number of bytes that will eventually
9952         be uploaded for this file. Returns None if the size is not yet known.
9953hunk ./src/allmydata/interfaces.py 2656
9954         """
9955+
9956     def using_helper():
9957         """Return True if this upload is using a Helper, False if not."""
9958hunk ./src/allmydata/interfaces.py 2659
9959+
9960     def get_status():
9961         """Return a string describing the current state of the upload
9962         process."""
9963hunk ./src/allmydata/interfaces.py 2663
9964+
9965     def get_progress():
9966         """Returns a tuple of floats, (chk, ciphertext, encode_and_push),
9967         each from 0.0 to 1.0 . 'chk' describes how much progress has been
9968hunk ./src/allmydata/interfaces.py 2675
9969         process has finished: for helper uploads this is dependent upon the
9970         helper providing progress reports. It might be reasonable to add all
9971         three numbers and report the sum to the user."""
9972+
9973     def get_active():
9974         """Return True if the upload is currently active, False if not."""
9975hunk ./src/allmydata/interfaces.py 2678
9976+
9977     def get_results():
9978         """Return an instance of UploadResults (which contains timing and
9979         sharemap information). Might return None if the upload is not yet
9980hunk ./src/allmydata/interfaces.py 2683
9981         finished."""
9982+
9983     def get_counter():
9984         """Each upload status gets a unique number: this method returns that
9985         number. This provides a handle to this particular upload, so a web
9986hunk ./src/allmydata/interfaces.py 2689
9987         page can generate a suitable hyperlink."""
9988 
9989+
9990 class IDownloadStatus(Interface):
9991     def get_started():
9992         """Return a timestamp (float with seconds since epoch) indicating
9993hunk ./src/allmydata/interfaces.py 2694
9994         when the operation was started."""
9995+
9996     def get_storage_index():
9997         """Return a string with the (binary) storage index in use on this
9998         download. This may be None if there is no storage index (i.e. LIT
9999hunk ./src/allmydata/interfaces.py 2699
10000         files)."""
10001+
10002     def get_size():
10003         """Return an integer with the number of bytes that will eventually be
10004         retrieved for this file. Returns None if the size is not yet known.
10005hunk ./src/allmydata/interfaces.py 2704
10006         """
10007+
10008     def using_helper():
10009         """Return True if this download is using a Helper, False if not."""
10010hunk ./src/allmydata/interfaces.py 2707
10011+
10012     def get_status():
10013         """Return a string describing the current state of the download
10014         process."""
10015hunk ./src/allmydata/interfaces.py 2711
10016+
10017     def get_progress():
10018         """Returns a float (from 0.0 to 1.0) describing the amount of the
10019         download that has completed. This value will remain at 0.0 until the
10020hunk ./src/allmydata/interfaces.py 2716
10021         first byte of plaintext is pushed to the download target."""
10022+
10023     def get_active():
10024         """Return True if the download is currently active, False if not."""
10025hunk ./src/allmydata/interfaces.py 2719
10026+
10027     def get_counter():
10028         """Each download status gets a unique number: this method returns
10029         that number. This provides a handle to this particular download, so a
10030hunk ./src/allmydata/interfaces.py 2725
10031         web page can generate a suitable hyperlink."""
10032 
10033+
10034 class IServermapUpdaterStatus(Interface):
10035     pass
10036hunk ./src/allmydata/interfaces.py 2728
10037+
10038+
10039 class IPublishStatus(Interface):
10040     pass
10041hunk ./src/allmydata/interfaces.py 2732
10042+
10043+
10044 class IRetrieveStatus(Interface):
10045     pass
10046 
10047hunk ./src/allmydata/interfaces.py 2737
10048+
10049 class NotCapableError(Exception):
10050     """You have tried to write to a read-only node."""
10051 
10052hunk ./src/allmydata/interfaces.py 2741
10053+
10054 class BadWriteEnablerError(Exception):
10055     pass
10056 
10057hunk ./src/allmydata/interfaces.py 2745
10058-class RIControlClient(RemoteInterface):
10059 
10060hunk ./src/allmydata/interfaces.py 2746
10061+class RIControlClient(RemoteInterface):
10062     def wait_for_client_connections(num_clients=int):
10063         """Do not return until we have connections to at least NUM_CLIENTS
10064         storage servers.
10065hunk ./src/allmydata/interfaces.py 2801
10066 
10067         return DictOf(str, float)
10068 
10069+
10070 UploadResults = Any() #DictOf(str, str)
10071 
10072hunk ./src/allmydata/interfaces.py 2804
10073+
10074 class RIEncryptedUploadable(RemoteInterface):
10075     __remote_name__ = "RIEncryptedUploadable.tahoe.allmydata.com"
10076 
10077hunk ./src/allmydata/interfaces.py 2877
10078         """
10079         return DictOf(str, DictOf(str, ChoiceOf(float, int, long, None)))
10080 
10081+
10082 class RIStatsGatherer(RemoteInterface):
10083     __remote_name__ = "RIStatsGatherer.tahoe.allmydata.com"
10084     """
10085hunk ./src/allmydata/interfaces.py 2917
10086 class FileTooLargeError(Exception):
10087     pass
10088 
10089+
10090 class IValidatedThingProxy(Interface):
10091     def start():
10092         """ Acquire a thing and validate it. Return a deferred that is
10093hunk ./src/allmydata/interfaces.py 2924
10094         eventually fired with self if the thing is valid or errbacked if it
10095         can't be acquired or validated."""
10096 
10097+
10098 class InsufficientVersionError(Exception):
10099     def __init__(self, needed, got):
10100         self.needed = needed
10101hunk ./src/allmydata/interfaces.py 2933
10102         return "InsufficientVersionError(need '%s', got %s)" % (self.needed,
10103                                                                 self.got)
10104 
10105+
10106 class EmptyPathnameComponentError(Exception):
10107     """The webapi disallows empty pathname components."""
10108hunk ./src/allmydata/test/test_crawler.py 21
10109 class BucketEnumeratingCrawler(ShareCrawler):
10110     cpu_slice = 500 # make sure it can complete in a single slice
10111     slow_start = 0
10112+
10113     def __init__(self, *args, **kwargs):
10114         ShareCrawler.__init__(self, *args, **kwargs)
10115         self.all_buckets = []
10116hunk ./src/allmydata/test/test_crawler.py 33
10117     def finished_cycle(self, cycle):
10118         eventually(self.finished_d.callback, None)
10119 
10120+
10121 class PacedCrawler(ShareCrawler):
10122     cpu_slice = 500 # make sure it can complete in a single slice
10123     slow_start = 0
10124hunk ./src/allmydata/test/test_crawler.py 37
10125+
10126     def __init__(self, *args, **kwargs):
10127         ShareCrawler.__init__(self, *args, **kwargs)
10128         self.countdown = 6
10129hunk ./src/allmydata/test/test_crawler.py 51
10130         if self.countdown == 0:
10131             # force a timeout. We restore it in yielding()
10132             self.cpu_slice = -1.0
10133+
10134     def yielding(self, sleep_time):
10135         self.cpu_slice = 500
10136         if self.yield_cb:
10137hunk ./src/allmydata/test/test_crawler.py 56
10138             self.yield_cb()
10139+
10140     def finished_cycle(self, cycle):
10141         eventually(self.finished_d.callback, None)
10142 
10143hunk ./src/allmydata/test/test_crawler.py 60
10144+
10145 class ConsumingCrawler(ShareCrawler):
10146     cpu_slice = 0.5
10147     allowed_cpu_percentage = 0.5
10148hunk ./src/allmydata/test/test_crawler.py 79
10149         elapsed = time.time() - start
10150         self.accumulated += elapsed
10151         self.last_yield += elapsed
10152+
10153     def finished_cycle(self, cycle):
10154         self.cycles += 1
10155hunk ./src/allmydata/test/test_crawler.py 82
10156+
10157     def yielding(self, sleep_time):
10158         self.last_yield = 0.0
10159 
10160hunk ./src/allmydata/test/test_crawler.py 86
10161+
10162 class OneShotCrawler(ShareCrawler):
10163     cpu_slice = 500 # make sure it can complete in a single slice
10164     slow_start = 0
10165hunk ./src/allmydata/test/test_crawler.py 90
10166+
10167     def __init__(self, *args, **kwargs):
10168         ShareCrawler.__init__(self, *args, **kwargs)
10169         self.counter = 0
10170hunk ./src/allmydata/test/test_crawler.py 98
10171 
10172     def process_shareset(self, cycle, prefix, shareset):
10173         self.counter += 1
10174+
10175     def finished_cycle(self, cycle):
10176         self.finished_d.callback(None)
10177         self.disownServiceParent()
10178hunk ./src/allmydata/test/test_crawler.py 103
10179 
10180+
10181 class Basic(unittest.TestCase, StallMixin, pollmixin.PollMixin):
10182     def setUp(self):
10183         self.s = service.MultiService()
10184hunk ./src/allmydata/test/test_crawler.py 114
10185 
10186     def si(self, i):
10187         return hashutil.storage_index_hash(str(i))
10188+
10189     def rs(self, i, serverid):
10190         return hashutil.bucket_renewal_secret_hash(str(i), serverid)
10191hunk ./src/allmydata/test/test_crawler.py 117
10192+
10193     def cs(self, i, serverid):
10194         return hashutil.bucket_cancel_secret_hash(str(i), serverid)
10195 
10196hunk ./src/allmydata/test/test_storage.py 39
10197 from allmydata.test.no_network import NoNetworkServer
10198 from allmydata.web.storage import StorageStatus, remove_prefix
10199 
10200+
10201 class Marker:
10202     pass
10203hunk ./src/allmydata/test/test_storage.py 42
10204+
10205+
10206 class FakeCanary:
10207     def __init__(self, ignore_disconnectors=False):
10208         self.ignore = ignore_disconnectors
10209hunk ./src/allmydata/test/test_storage.py 59
10210             return
10211         del self.disconnectors[marker]
10212 
10213+
10214 class FakeStatsProvider:
10215     def count(self, name, delta=1):
10216         pass
10217hunk ./src/allmydata/test/test_storage.py 66
10218     def register_producer(self, producer):
10219         pass
10220 
10221+
10222 class Bucket(unittest.TestCase):
10223     def make_workdir(self, name):
10224         basedir = FilePath("storage").child("Bucket").child(name)
10225hunk ./src/allmydata/test/test_storage.py 165
10226         result_of_read = br.remote_read(0, len(share_data)+1)
10227         self.failUnlessEqual(result_of_read, share_data)
10228 
10229+
10230 class RemoteBucket:
10231 
10232     def __init__(self):
10233hunk ./src/allmydata/test/test_storage.py 309
10234         return self._do_test_readwrite("test_readwrite_v2",
10235                                        0x44, WriteBucketProxy_v2, ReadBucketProxy)
10236 
10237+
10238 class Server(unittest.TestCase):
10239 
10240     def setUp(self):
10241hunk ./src/allmydata/test/test_storage.py 780
10242         self.failUnlessIn("This share tastes like dust.", report)
10243 
10244 
10245-
10246 class MutableServer(unittest.TestCase):
10247 
10248     def setUp(self):
10249hunk ./src/allmydata/test/test_storage.py 1407
10250         # header.
10251         self.salt_hash_tree_s = self.serialize_blockhashes(self.salt_hash_tree[1:])
10252 
10253-
10254     def tearDown(self):
10255         self.sparent.stopService()
10256         fileutil.fp_remove(self.workdir("MDMFProxies storage test server"))
10257hunk ./src/allmydata/test/test_storage.py 1411
10258 
10259-
10260     def write_enabler(self, we_tag):
10261         return hashutil.tagged_hash("we_blah", we_tag)
10262 
10263hunk ./src/allmydata/test/test_storage.py 1414
10264-
10265     def renew_secret(self, tag):
10266         return hashutil.tagged_hash("renew_blah", str(tag))
10267 
10268hunk ./src/allmydata/test/test_storage.py 1417
10269-
10270     def cancel_secret(self, tag):
10271         return hashutil.tagged_hash("cancel_blah", str(tag))
10272 
10273hunk ./src/allmydata/test/test_storage.py 1420
10274-
10275     def workdir(self, name):
10276         return FilePath("storage").child("MDMFProxies").child(name)
10277 
10278hunk ./src/allmydata/test/test_storage.py 1430
10279         ss.setServiceParent(self.sparent)
10280         return ss
10281 
10282-
10283     def build_test_mdmf_share(self, tail_segment=False, empty=False):
10284         # Start with the checkstring
10285         data = struct.pack(">BQ32s",
10286hunk ./src/allmydata/test/test_storage.py 1527
10287         data += self.block_hash_tree_s
10288         return data
10289 
10290-
10291     def write_test_share_to_server(self,
10292                                    storage_index,
10293                                    tail_segment=False,
10294hunk ./src/allmydata/test/test_storage.py 1548
10295         results = write(storage_index, self.secrets, tws, readv)
10296         self.failUnless(results[0])
10297 
10298-
10299     def build_test_sdmf_share(self, empty=False):
10300         if empty:
10301             sharedata = ""
10302hunk ./src/allmydata/test/test_storage.py 1598
10303         self.offsets['EOF'] = eof_offset
10304         return final_share
10305 
10306-
10307     def write_sdmf_share_to_server(self,
10308                                    storage_index,
10309                                    empty=False):
10310hunk ./src/allmydata/test/test_storage.py 1613
10311         results = write(storage_index, self.secrets, tws, readv)
10312         self.failUnless(results[0])
10313 
10314-
10315     def test_read(self):
10316         self.write_test_share_to_server("si1")
10317         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10318hunk ./src/allmydata/test/test_storage.py 1682
10319             self.failUnlessEqual(checkstring, checkstring))
10320         return d
10321 
10322-
10323     def test_read_with_different_tail_segment_size(self):
10324         self.write_test_share_to_server("si1", tail_segment=True)
10325         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10326hunk ./src/allmydata/test/test_storage.py 1693
10327         d.addCallback(_check_tail_segment)
10328         return d
10329 
10330-
10331     def test_get_block_with_invalid_segnum(self):
10332         self.write_test_share_to_server("si1")
10333         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10334hunk ./src/allmydata/test/test_storage.py 1703
10335                             mr.get_block_and_salt, 7))
10336         return d
10337 
10338-
10339     def test_get_encoding_parameters_first(self):
10340         self.write_test_share_to_server("si1")
10341         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10342hunk ./src/allmydata/test/test_storage.py 1715
10343         d.addCallback(_check_encoding_parameters)
10344         return d
10345 
10346-
10347     def test_get_seqnum_first(self):
10348         self.write_test_share_to_server("si1")
10349         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10350hunk ./src/allmydata/test/test_storage.py 1723
10351             self.failUnlessEqual(seqnum, 0))
10352         return d
10353 
10354-
10355     def test_get_root_hash_first(self):
10356         self.write_test_share_to_server("si1")
10357         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10358hunk ./src/allmydata/test/test_storage.py 1731
10359             self.failUnlessEqual(root_hash, self.root_hash))
10360         return d
10361 
10362-
10363     def test_get_checkstring_first(self):
10364         self.write_test_share_to_server("si1")
10365         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10366hunk ./src/allmydata/test/test_storage.py 1739
10367             self.failUnlessEqual(checkstring, self.checkstring))
10368         return d
10369 
10370-
10371     def test_write_read_vectors(self):
10372         # When writing for us, the storage server will return to us a
10373         # read vector, along with its result. If a write fails because
10374hunk ./src/allmydata/test/test_storage.py 1777
10375         # The checkstring remains the same for the rest of the process.
10376         return d
10377 
10378-
10379     def test_private_key_after_share_hash_chain(self):
10380         mw = self._make_new_mw("si1", 0)
10381         d = defer.succeed(None)
10382hunk ./src/allmydata/test/test_storage.py 1795
10383                             mw.put_encprivkey, self.encprivkey))
10384         return d
10385 
10386-
10387     def test_signature_after_verification_key(self):
10388         mw = self._make_new_mw("si1", 0)
10389         d = defer.succeed(None)
10390hunk ./src/allmydata/test/test_storage.py 1821
10391                             mw.put_signature, self.signature))
10392         return d
10393 
10394-
10395     def test_uncoordinated_write(self):
10396         # Make two mutable writers, both pointing to the same storage
10397         # server, both at the same storage index, and try writing to the
10398hunk ./src/allmydata/test/test_storage.py 1853
10399         d.addCallback(_check_failure)
10400         return d
10401 
10402-
10403     def test_invalid_salt_size(self):
10404         # Salts need to be 16 bytes in size. Writes that attempt to
10405         # write more or less than this should be rejected.
10406hunk ./src/allmydata/test/test_storage.py 1871
10407                             another_invalid_salt))
10408         return d
10409 
10410-
10411     def test_write_test_vectors(self):
10412         # If we give the write proxy a bogus test vector at
10413         # any point during the process, it should fail to write when we
10414hunk ./src/allmydata/test/test_storage.py 1904
10415         d.addCallback(_check_success)
10416         return d
10417 
10418-
10419     def serialize_blockhashes(self, blockhashes):
10420         return "".join(blockhashes)
10421 
10422hunk ./src/allmydata/test/test_storage.py 1907
10423-
10424     def serialize_sharehashes(self, sharehashes):
10425         ret = "".join([struct.pack(">H32s", i, sharehashes[i])
10426                         for i in sorted(sharehashes.keys())])
10427hunk ./src/allmydata/test/test_storage.py 1912
10428         return ret
10429 
10430-
10431     def test_write(self):
10432         # This translates to a file with 6 6-byte segments, and with 2-byte
10433         # blocks.
10434hunk ./src/allmydata/test/test_storage.py 2043
10435                                 6, datalength)
10436         return mw
10437 
10438-
10439     def test_write_rejected_with_too_many_blocks(self):
10440         mw = self._make_new_mw("si0", 0)
10441 
10442hunk ./src/allmydata/test/test_storage.py 2059
10443                             mw.put_block, self.block, 7, self.salt))
10444         return d
10445 
10446-
10447     def test_write_rejected_with_invalid_salt(self):
10448         # Try writing an invalid salt. Salts are 16 bytes -- any more or
10449         # less should cause an error.
10450hunk ./src/allmydata/test/test_storage.py 2070
10451                             None, mw.put_block, self.block, 7, bad_salt))
10452         return d
10453 
10454-
10455     def test_write_rejected_with_invalid_root_hash(self):
10456         # Try writing an invalid root hash. This should be SHA256d, and
10457         # 32 bytes long as a result.
10458hunk ./src/allmydata/test/test_storage.py 2095
10459                             None, mw.put_root_hash, invalid_root_hash))
10460         return d
10461 
10462-
10463     def test_write_rejected_with_invalid_blocksize(self):
10464         # The blocksize implied by the writer that we get from
10465         # _make_new_mw is 2bytes -- any more or any less than this
10466hunk ./src/allmydata/test/test_storage.py 2128
10467             mw.put_block(valid_block, 5, self.salt))
10468         return d
10469 
10470-
10471     def test_write_enforces_order_constraints(self):
10472         # We require that the MDMFSlotWriteProxy be interacted with in a
10473         # specific way.
10474hunk ./src/allmydata/test/test_storage.py 2213
10475             mw0.put_verification_key(self.verification_key))
10476         return d
10477 
10478-
10479     def test_end_to_end(self):
10480         mw = self._make_new_mw("si1", 0)
10481         # Write a share using the mutable writer, and make sure that the
10482hunk ./src/allmydata/test/test_storage.py 2378
10483             self.failUnlessEqual(root_hash, self.root_hash, root_hash))
10484         return d
10485 
10486-
10487     def test_only_reads_one_segment_sdmf(self):
10488         # SDMF shares have only one segment, so it doesn't make sense to
10489         # read more segments than that. The reader should know this and
10490hunk ./src/allmydata/test/test_storage.py 2395
10491                             mr.get_block_and_salt, 1))
10492         return d
10493 
10494-
10495     def test_read_with_prefetched_mdmf_data(self):
10496         # The MDMFSlotReadProxy will prefill certain fields if you pass
10497         # it data that you have already fetched. This is useful for
10498hunk ./src/allmydata/test/test_storage.py 2459
10499         d.addCallback(_check_block_and_salt)
10500         return d
10501 
10502-
10503     def test_read_with_prefetched_sdmf_data(self):
10504         sdmf_data = self.build_test_sdmf_share()
10505         self.write_sdmf_share_to_server("si1")
10506hunk ./src/allmydata/test/test_storage.py 2522
10507         d.addCallback(_check_block_and_salt)
10508         return d
10509 
10510-
10511     def test_read_with_empty_mdmf_file(self):
10512         # Some tests upload a file with no contents to test things
10513         # unrelated to the actual handling of the content of the file.
10514hunk ./src/allmydata/test/test_storage.py 2550
10515                             mr.get_block_and_salt, 0))
10516         return d
10517 
10518-
10519     def test_read_with_empty_sdmf_file(self):
10520         self.write_sdmf_share_to_server("si1", empty=True)
10521         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10522hunk ./src/allmydata/test/test_storage.py 2575
10523                             mr.get_block_and_salt, 0))
10524         return d
10525 
10526-
10527     def test_verinfo_with_sdmf_file(self):
10528         self.write_sdmf_share_to_server("si1")
10529         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10530hunk ./src/allmydata/test/test_storage.py 2615
10531         d.addCallback(_check_verinfo)
10532         return d
10533 
10534-
10535     def test_verinfo_with_mdmf_file(self):
10536         self.write_test_share_to_server("si1")
10537         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10538hunk ./src/allmydata/test/test_storage.py 2653
10539         d.addCallback(_check_verinfo)
10540         return d
10541 
10542-
10543     def test_sdmf_writer(self):
10544         # Go through the motions of writing an SDMF share to the storage
10545         # server. Then read the storage server to see that the share got
10546hunk ./src/allmydata/test/test_storage.py 2696
10547         d.addCallback(_then)
10548         return d
10549 
10550-
10551     def test_sdmf_writer_preexisting_share(self):
10552         data = self.build_test_sdmf_share()
10553         self.write_sdmf_share_to_server("si1")
10554hunk ./src/allmydata/test/test_storage.py 2839
10555         self.failUnless(output["get"]["99_0_percentile"] is None, output)
10556         self.failUnless(output["get"]["99_9_percentile"] is None, output)
10557 
10558+
10559 def remove_tags(s):
10560     s = re.sub(r'<[^>]*>', ' ', s)
10561     s = re.sub(r'\s+', ' ', s)
10562hunk ./src/allmydata/test/test_storage.py 2845
10563     return s
10564 
10565+
10566 class MyBucketCountingCrawler(BucketCountingCrawler):
10567     def finished_prefix(self, cycle, prefix):
10568         BucketCountingCrawler.finished_prefix(self, cycle, prefix)
10569hunk ./src/allmydata/test/test_storage.py 2974
10570         backend = DiskBackend(fp)
10571         ss = MyStorageServer("\x00" * 20, backend, fp)
10572         ss.bucket_counter.slow_start = 0
10573+
10574         # these will be fired inside finished_prefix()
10575         hooks = ss.bucket_counter.hook_ds = [defer.Deferred() for i in range(3)]
10576         w = StorageStatus(ss)
10577hunk ./src/allmydata/test/test_storage.py 3008
10578         ss.setServiceParent(self.s)
10579         return d
10580 
10581+
10582 class InstrumentedLeaseCheckingCrawler(LeaseCheckingCrawler):
10583     stop_after_first_bucket = False
10584 
10585hunk ./src/allmydata/test/test_storage.py 3017
10586         if self.stop_after_first_bucket:
10587             self.stop_after_first_bucket = False
10588             self.cpu_slice = -1.0
10589+
10590     def yielding(self, sleep_time):
10591         if not self.stop_after_first_bucket:
10592             self.cpu_slice = 500
10593hunk ./src/allmydata/test/test_storage.py 3028
10594 
10595 class BrokenStatResults:
10596     pass
10597+
10598 class No_ST_BLOCKS_LeaseCheckingCrawler(LeaseCheckingCrawler):
10599     def stat(self, fn):
10600         s = os.stat(fn)
10601hunk ./src/allmydata/test/test_storage.py 3044
10602 class No_ST_BLOCKS_StorageServer(StorageServer):
10603     LeaseCheckerClass = No_ST_BLOCKS_LeaseCheckingCrawler
10604 
10605+
10606 class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
10607 
10608     def setUp(self):
10609hunk ./src/allmydata/test/test_storage.py 3891
10610         backend = DiskBackend(fp)
10611         ss = InstrumentedStorageServer("\x00" * 20, backend, fp)
10612         w = StorageStatus(ss)
10613+
10614         # make it start sooner than usual.
10615         lc = ss.lease_checker
10616         lc.stop_after_first_bucket = True
10617hunk ./src/allmydata/util/fileutil.py 460
10618              'avail': avail,
10619            }
10620 
10621+
10622 def get_available_space(whichdirfp, reserved_space):
10623     """Returns available space for share storage in bytes, or None if no
10624     API to get this information is available.
10625}
10626[mutable/publish.py: elements should not be removed from a dictionary while it is being iterated over. refs #393
10627david-sarah@jacaranda.org**20110923040825
10628 Ignore-this: 135da94bd344db6ccd59a576b54901c1
10629] {
10630hunk ./src/allmydata/mutable/publish.py 6
10631 import os, time
10632 from StringIO import StringIO
10633 from itertools import count
10634+from copy import copy
10635 from zope.interface import implements
10636 from twisted.internet import defer
10637 from twisted.python import failure
10638merger 0.0 (
10639hunk ./src/allmydata/mutable/publish.py 868
10640-
10641-        # TODO: Bad, since we remove from this same dict. We need to
10642-        # make a copy, or just use a non-iterated value.
10643-        for (shnum, writer) in self.writers.iteritems():
10644+        for (shnum, writer) in self.writers.copy().iteritems():
10645hunk ./src/allmydata/mutable/publish.py 868
10646-
10647-        # TODO: Bad, since we remove from this same dict. We need to
10648-        # make a copy, or just use a non-iterated value.
10649-        for (shnum, writer) in self.writers.iteritems():
10650+        for (shnum, writer) in copy(self.writers).iteritems():
10651)
10652}
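The bug class fixed here is easy to reproduce in Python 2: mutating a dict while iterating over it raises RuntimeError. A minimal demonstration of the failure and of the copy-based fix used above:

    writers = {0: 'w0', 1: 'w1', 2: 'w2'}

    # Broken: removes entries from the dict being iterated over.
    #   for shnum, writer in writers.iteritems():
    #       del writers[shnum]   # RuntimeError: dictionary changed size
    #                            # during iteration

    # Safe: iterate over a shallow copy, mutate the original.
    for shnum, writer in writers.copy().iteritems():
        del writers[shnum]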
10653[A few comment cleanups. refs #999
10654david-sarah@jacaranda.org**20110923041003
10655 Ignore-this: f574b4a3954b6946016646011ad15edf
10656] {
10657hunk ./src/allmydata/storage/backends/disk/disk_backend.py 17
10658 
10659 # storage/
10660 # storage/shares/incoming
10661-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
10662-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
10663-# storage/shares/$START/$STORAGEINDEX
10664-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
10665+#   incoming/ holds temp dirs named $PREFIX/$STORAGEINDEX/$SHNUM which will
10666+#   be moved to storage/shares/$PREFIX/$STORAGEINDEX/$SHNUM upon success
10667+# storage/shares/$PREFIX/$STORAGEINDEX
10668+# storage/shares/$PREFIX/$STORAGEINDEX/$SHNUM
10669 
10670hunk ./src/allmydata/storage/backends/disk/disk_backend.py 22
10671-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
10672+# Where "$PREFIX" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
10673 # base-32 chars).
10674 # $SHARENUM matches this regex:
10675 NUM_RE=re.compile("^[0-9]+$")
10676hunk ./src/allmydata/storage/backends/disk/immutable.py 16
10677 from allmydata.storage.lease import LeaseInfo
10678 
10679 
10680-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
10681-# and share data. The share data is accessed by RIBucketWriter.write and
10682-# RIBucketReader.read . The lease information is not accessible through these
10683-# interfaces.
10684+# Each share file (in storage/shares/$PREFIX/$STORAGEINDEX/$SHNUM) contains
10685+# lease information and share data. The share data is accessed by
10686+# RIBucketWriter.write and RIBucketReader.read . The lease information is not
10687+# accessible through these remote interfaces.
10688 
10689 # The share file has the following layout:
10690 #  0x00: share file version number, four bytes, current version is 1
10691hunk ./src/allmydata/storage/backends/disk/immutable.py 211
10692 
10693     # These lease operations are intended for use by disk_backend.py.
10694     # Other clients should not depend on the fact that the disk backend
10695-    # stores leases in share files. XXX bucket.py also relies on this.
10696+    # stores leases in share files.
10697+    # XXX BucketWriter in bucket.py also relies on add_lease.
10698 
10699     def get_leases(self):
10700         """Yields a LeaseInfo instance for all leases."""
10701}
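For reference, the $PREFIX described in these cleaned-up comments is derived from the storage index as follows (a sketch using the si_b2a helper the backend code already uses; 2 base-32 characters carry the first 10 bits):

    from allmydata.storage.common import si_b2a

    storageindex = "\x00"*16        # a binary storage index (illustrative)
    si_s = si_b2a(storageindex)     # printable base-32 form
    prefix = si_s[:2]               # $PREFIX: first 10 bits of the SI
    # shares then live at storage/shares/<prefix>/<si_s>/<shnum>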
10702[Move advise_corrupt_share to allmydata/storage/backends/base.py, since it will be common to the disk and S3 backends. refs #999
10703david-sarah@jacaranda.org**20110923041115
10704 Ignore-this: 782b49f243bd98fcb6c249f8e40fd9f
10705] {
10706hunk ./src/allmydata/storage/backends/base.py 4
10707 
10708 from twisted.application import service
10709 
10710+from allmydata.util import fileutil, log, time_format
10711 from allmydata.storage.common import si_b2a
10712 from allmydata.storage.lease import LeaseInfo
10713 from allmydata.storage.bucket import BucketReader
10714hunk ./src/allmydata/storage/backends/base.py 13
10715 class Backend(service.MultiService):
10716     def __init__(self):
10717         service.MultiService.__init__(self)
10718+        self._corruption_advisory_dir = None
10719+
10720+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
10721+        si_s = si_b2a(storageindex)
10722+        if self._corruption_advisory_dir is not None:
10723+            fileutil.fp_make_dirs(self._corruption_advisory_dir)
10724+            now = time_format.iso_utc(sep="T")
10725+
10726+            # Windows can't handle colons in the filename.
10727+            name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "")
10728+            f = self._corruption_advisory_dir.child(name).open("w")
10729+            try:
10730+                f.write("report: Share Corruption\n")
10731+                f.write("type: %s\n" % sharetype)
10732+                f.write("storage_index: %s\n" % si_s)
10733+                f.write("share_number: %d\n" % shnum)
10734+                f.write("\n")
10735+                f.write(reason)
10736+                f.write("\n")
10737+            finally:
10738+                f.close()
10739+
10740+        log.msg(format=("client claims corruption in (%(share_type)s) " +
10741+                        "%(si)s-%(shnum)d: %(reason)s"),
10742+                share_type=sharetype, si=si_s, shnum=shnum, reason=reason,
10743+                level=log.SCARY, umid="2fASGx")
10744 
10745 
10746 class ShareSet(object):
10747hunk ./src/allmydata/storage/backends/disk/disk_backend.py 8
10748 
10749 from zope.interface import implements
10750 from allmydata.interfaces import IStorageBackend, IShareSet
10751-from allmydata.util import fileutil, log, time_format
10752+from allmydata.util import fileutil, log
10753 from allmydata.storage.common import si_b2a, si_a2b
10754 from allmydata.storage.bucket import BucketWriter
10755 from allmydata.storage.backends.base import Backend, ShareSet
10756hunk ./src/allmydata/storage/backends/disk/disk_backend.py 125
10757             return 0
10758         return fileutil.get_available_space(self._sharedir, self._reserved_space)
10759 
10760-    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
10761-        fileutil.fp_make_dirs(self._corruption_advisory_dir)
10762-        now = time_format.iso_utc(sep="T")
10763-        si_s = si_b2a(storageindex)
10764-
10765-        # Windows can't handle colons in the filename.
10766-        name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "")
10767-        f = self._corruption_advisory_dir.child(name).open("w")
10768-        try:
10769-            f.write("report: Share Corruption\n")
10770-            f.write("type: %s\n" % sharetype)
10771-            f.write("storage_index: %s\n" % si_s)
10772-            f.write("share_number: %d\n" % shnum)
10773-            f.write("\n")
10774-            f.write(reason)
10775-            f.write("\n")
10776-        finally:
10777-            f.close()
10778-
10779-        log.msg(format=("client claims corruption in (%(share_type)s) " +
10780-                        "%(si)s-%(shnum)d: %(reason)s"),
10781-                share_type=sharetype, si=si_s, shnum=shnum, reason=reason,
10782-                level=log.SCARY, umid="SGx2fA")
10783-
10784 
10785 class DiskShareSet(ShareSet):
10786     implements(IShareSet)
10787}
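With this move, any backend that sets _corruption_advisory_dir produces the same report format. A sketch of a call and the advisory file it would write (the storage index and timestamp are illustrative):

    backend.advise_corrupt_share("immutable", storageindex, 2,
                                 "block hash tree failure")

    # writes <advisory_dir>/2011-09-23T041115--<base-32 SI>-2 (colons
    # stripped for Windows) containing:
    #
    #   report: Share Corruption
    #   type: immutable
    #   storage_index: <base-32 SI>
    #   share_number: 2
    #
    #   block hash tree failure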
10788[Add incomplete S3 backend. refs #999
10789david-sarah@jacaranda.org**20110923041314
10790 Ignore-this: b48df65699e3926dcbb87b5f755cdbf1
10791] {
10792adddir ./src/allmydata/storage/backends/s3
10793addfile ./src/allmydata/storage/backends/s3/__init__.py
10794addfile ./src/allmydata/storage/backends/s3/immutable.py
10795hunk ./src/allmydata/storage/backends/s3/immutable.py 1
10796+
10797+import struct
10798+
10799+from zope.interface import implements
10800+
10801+from allmydata.interfaces import IStoredShare
10802+from allmydata.util.assertutil import precondition
10803+from allmydata.storage.common import si_b2a, UnknownImmutableContainerVersionError, DataTooLargeError
10804+
10805+
10806+# Each share file (in storage/shares/$PREFIX/$STORAGEINDEX/$SHNUM) contains
10807+# lease information [currently inaccessible] and share data. The share data is
10808+# accessed by RIBucketWriter.write and RIBucketReader.read .
10809+
10810+# The share file has the following layout:
10811+#  0x00: share file version number, four bytes, current version is 1
10812+#  0x04: always zero (was share data length prior to Tahoe-LAFS v1.3.0)
10813+#  0x08: number of leases, four bytes big-endian
10814+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
10815+#  data_length+0x0c: first lease. Each lease record is 72 bytes.
10816+
10817+
10818+class ImmutableS3Share(object):
10819+    implements(IStoredShare)
10820+
10821+    sharetype = "immutable"
10822+    LEASE_SIZE = struct.calcsize(">L32s32sL")  # for compatibility
10823+
10824+
10825+    def __init__(self, storageindex, shnum, s3bucket, create=False, max_size=None):
10826+        """
10827+        If max_size is not None then I won't allow more than max_size to be written to me.
10828+        """
10829+        precondition((max_size is not None) or not create, max_size, create)
10830+        self._storageindex = storageindex
10831+        self._max_size = max_size
10832+
10833+        self._s3bucket = s3bucket
10834+        si_s = si_b2a(storageindex)
10835+        self._key = "storage/shares/%s/%s/%d" % (si_s[:2], si_s, shnum)
10836+        self._shnum = shnum
10837+
10838+        if create:
10839+            # The second field, which was the four-byte share data length in
10840+            # Tahoe-LAFS versions prior to 1.3.0, is not used; we always write 0.
10841+            # We also write 0 for the number of leases.
10842+            self._home.setContent(struct.pack(">LLL", 1, 0, 0))  # XXX leftover disk I/O; should write via self._key to S3
10843+            self._end_offset = max_size + 0x0c
10844+
10845+            # TODO: start write to S3.
10846+        else:
10847+            # TODO: get header
10848+            header = "\x00"*12
10849+            (version, unused, num_leases) = struct.unpack(">LLL", header)
10850+
10851+            if version != 1:
10852+                msg = "sharefile %s had version %d but we wanted 1" % \
10853+                      (self._key, version)
10854+                raise UnknownImmutableContainerVersionError(msg)
10855+
10856+            # We cannot write leases in share files, but allow them to be present
10857+            # in case a share file is copied from a disk backend, or in case we
10858+            # need them in future.
10859+            # TODO: filesize = size of S3 object
10860+            self._end_offset = filesize - (num_leases * self.LEASE_SIZE)
10861+        self._data_offset = 0xc
10862+
10863+    def __repr__(self):
10864+        return ("<ImmutableS3Share %s:%r at %r>"
10865+                % (si_b2a(self._storageindex), self._shnum, self._key))
10866+
10867+    def close(self):
10868+        # TODO: finalize write to S3.
10869+        pass
10870+
10871+    def get_used_space(self):
10872+        return self._size
10873+
10874+    def get_storage_index(self):
10875+        return self._storageindex
10876+
10877+    def get_storage_index_string(self):
10878+        return si_b2a(self._storageindex)
10879+
10880+    def get_shnum(self):
10881+        return self._shnum
10882+
10883+    def unlink(self):
10884+        # TODO: remove the S3 object.
10885+        pass
10886+
10887+    def get_allocated_size(self):
10888+        return self._max_size
10889+
10890+    def get_size(self):
10891+        return self._size
10892+
10893+    def get_data_length(self):
10894+        return self._end_offset - self._data_offset
10895+
10896+    def read_share_data(self, offset, length):
10897+        precondition(offset >= 0)
10898+
10899+        # Reads beyond the end of the data are truncated. Reads that start
10900+        # beyond the end of the data return an empty string.
10901+        seekpos = self._data_offset+offset
10902+        actuallength = max(0, min(length, self._end_offset-seekpos))
10903+        if actuallength == 0:
10904+            return ""
10905+
10906+        # TODO: perform an S3 GET request, possibly with a Content-Range header.
10907+        return "\x00"*actuallength
10908+
10909+    def write_share_data(self, offset, data):
10910+        assert offset >= self._size, "offset = %r, size = %r" % (offset, self._size)
10911+
10912+        # TODO: write data to S3. If offset > self._size, fill the space
10913+        # between with zeroes.
10914+
10915+        self._size = offset + len(data)
10916+
10917+    def add_lease(self, lease_info):
10918+        pass
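The read_share_data TODO above amounts to a ranged HTTP GET. A rough, synchronous sketch of that request using the boto library (boto is an assumption here; this patch series does not commit to a particular S3 client, and a real implementation would need to be asynchronous under Twisted):

    import boto

    def s3_read_range(bucket_name, key_name, seekpos, actuallength):
        # Fetch bytes [seekpos, seekpos+actuallength-1]; the end offset
        # of an HTTP Range header is inclusive.
        conn = boto.connect_s3()
        bucket = conn.get_bucket(bucket_name)
        key = bucket.get_key(key_name)
        headers = {'Range': 'bytes=%d-%d' % (seekpos, seekpos + actuallength - 1)}
        return key.get_contents_as_string(headers=headers)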
10919addfile ./src/allmydata/storage/backends/s3/mutable.py
10920hunk ./src/allmydata/storage/backends/s3/mutable.py 1
10921+
10922+import struct
10923+
10924+from zope.interface import implements
10925+
10926+from allmydata.interfaces import IStoredMutableShare, BadWriteEnablerError
10927+from allmydata.util import fileutil, idlib, log
10928+from allmydata.util.assertutil import precondition
10929+from allmydata.util.hashutil import constant_time_compare
10930+from allmydata.util.encodingutil import quote_filepath
10931+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
10932+     DataTooLargeError
10933+from allmydata.storage.lease import LeaseInfo
10934+from allmydata.storage.backends.base import testv_compare
10935+
10936+
10937+# XXX Still an unconverted copy of the disk backend's MutableDiskShare: like the
10938+# ImmutableDiskShare but for mutable data, with a different layout (see docs/mutable.rst).
10939+
10940+# #   offset    size    name
10941+# 1   0         32      magic verstr "tahoe mutable container v1" plus binary
10942+# 2   32        20      write enabler's nodeid
10943+# 3   52        32      write enabler
10944+# 4   84        8       data size (actual share data present) (a)
10945+# 5   92        8       offset of (8) count of extra leases (after data)
10946+# 6   100       368     four leases, 92 bytes each
10947+#                        0    4   ownerid (0 means "no lease here")
10948+#                        4    4   expiration timestamp
10949+#                        8   32   renewal token
10950+#                        40  32   cancel token
10951+#                        72  20   nodeid that accepted the tokens
10952+# 7   468       (a)     data
10953+# 8   ??        4       count of extra leases
10954+# 9   ??        n*92    extra leases
10955+
10956+
10957+# The struct module doc says that L's are 4 bytes in size, and that Q's are
10958+# 8 bytes in size. Since compatibility depends upon this, double-check it.
10959+assert struct.calcsize(">L") == 4, struct.calcsize(">L")
10960+assert struct.calcsize(">Q") == 8, struct.calcsize(">Q")
10961+
10962+
10963+class MutableDiskShare(object):
10964+    implements(IStoredMutableShare)
10965+
10966+    sharetype = "mutable"
10967+    DATA_LENGTH_OFFSET = struct.calcsize(">32s20s32s")
10968+    EXTRA_LEASE_OFFSET = DATA_LENGTH_OFFSET + 8
10969+    HEADER_SIZE = struct.calcsize(">32s20s32sQQ") # doesn't include leases
10970+    LEASE_SIZE = struct.calcsize(">LL32s32s20s")
10971+    assert LEASE_SIZE == 92
10972+    DATA_OFFSET = HEADER_SIZE + 4*LEASE_SIZE
10973+    assert DATA_OFFSET == 468, DATA_OFFSET
10974+
10975+    # our sharefiles start with a recognizable string, plus some random
10976+    # binary data to reduce the chance that a regular text file will look
10977+    # like a sharefile.
10978+    MAGIC = "Tahoe mutable container v1\n" + "\x75\x09\x44\x03\x8e"
10979+    assert len(MAGIC) == 32
10980+    MAX_SIZE = 2*1000*1000*1000 # 2GB, kind of arbitrary
10981+    # TODO: decide upon a policy for max share size
10982+
10983+    def __init__(self, storageindex, shnum, home, parent=None):
10984+        self._storageindex = storageindex
10985+        self._shnum = shnum
10986+        self._home = home
10987+        if self._home.exists():
10988+            # we don't cache anything, just check the magic
10989+            f = self._home.open('rb')
10990+            try:
10991+                data = f.read(self.HEADER_SIZE)
10992+                (magic,
10993+                 write_enabler_nodeid, write_enabler,
10994+                 data_length, extra_lease_offset) = \
10995+                 struct.unpack(">32s20s32sQQ", data)
10996+                if magic != self.MAGIC:
10997+                    msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
10998+                          (quote_filepath(self._home), magic, self.MAGIC)
10999+                    raise UnknownMutableContainerVersionError(msg)
11000+            finally:
11001+                f.close()
11002+        self.parent = parent # for logging
11003+
11004+    def log(self, *args, **kwargs):
11005+        if self.parent:
11006+            return self.parent.log(*args, **kwargs)
11007+
11008+    def create(self, serverid, write_enabler):
11009+        assert not self._home.exists()
11010+        data_length = 0
11011+        extra_lease_offset = (self.HEADER_SIZE
11012+                              + 4 * self.LEASE_SIZE
11013+                              + data_length)
11014+        assert extra_lease_offset == self.DATA_OFFSET # true at creation
11015+        num_extra_leases = 0
11016+        f = self._home.open('wb')
11017+        try:
11018+            header = struct.pack(">32s20s32sQQ",
11019+                                 self.MAGIC, serverid, write_enabler,
11020+                                 data_length, extra_lease_offset,
11021+                                 )
11022+            leases = ("\x00"*self.LEASE_SIZE) * 4
11023+            f.write(header + leases)
11024+            # data goes here, empty after creation
11025+            f.write(struct.pack(">L", num_extra_leases))
11026+            # extra leases go here, none at creation
11027+        finally:
11028+            f.close()
11029+
11030+    def __repr__(self):
11031+        return ("<MutableDiskShare %s:%r at %s>"
11032+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
11033+
11034+    def get_used_space(self):
11035+        return fileutil.get_used_space(self._home)
11036+
11037+    def get_storage_index(self):
11038+        return self._storageindex
11039+
11040+    def get_storage_index_string(self):
11041+        return si_b2a(self._storageindex)
11042+
11043+    def get_shnum(self):
11044+        return self._shnum
11045+
11046+    def unlink(self):
11047+        self._home.remove()
11048+
11049+    def _read_data_length(self, f):
11050+        f.seek(self.DATA_LENGTH_OFFSET)
11051+        (data_length,) = struct.unpack(">Q", f.read(8))
11052+        return data_length
11053+
11054+    def _write_data_length(self, f, data_length):
11055+        f.seek(self.DATA_LENGTH_OFFSET)
11056+        f.write(struct.pack(">Q", data_length))
11057+
11058+    def _read_share_data(self, f, offset, length):
11059+        precondition(offset >= 0)
11060+        data_length = self._read_data_length(f)
11061+        if offset+length > data_length:
11062+            # reads beyond the end of the data are truncated. Reads that
11063+            # start beyond the end of the data return an empty string.
11064+            length = max(0, data_length-offset)
11065+        if length == 0:
11066+            return ""
11067+        precondition(offset+length <= data_length)
11068+        f.seek(self.DATA_OFFSET+offset)
11069+        data = f.read(length)
11070+        return data
11071+
11072+    def _read_extra_lease_offset(self, f):
11073+        f.seek(self.EXTRA_LEASE_OFFSET)
11074+        (extra_lease_offset,) = struct.unpack(">Q", f.read(8))
11075+        return extra_lease_offset
11076+
11077+    def _write_extra_lease_offset(self, f, offset):
11078+        f.seek(self.EXTRA_LEASE_OFFSET)
11079+        f.write(struct.pack(">Q", offset))
11080+
11081+    def _read_num_extra_leases(self, f):
11082+        offset = self._read_extra_lease_offset(f)
11083+        f.seek(offset)
11084+        (num_extra_leases,) = struct.unpack(">L", f.read(4))
11085+        return num_extra_leases
11086+
11087+    def _write_num_extra_leases(self, f, num_leases):
11088+        extra_lease_offset = self._read_extra_lease_offset(f)
11089+        f.seek(extra_lease_offset)
11090+        f.write(struct.pack(">L", num_leases))
11091+
11092+    def _change_container_size(self, f, new_container_size):
11093+        if new_container_size > self.MAX_SIZE:
11094+            raise DataTooLargeError()
11095+        old_extra_lease_offset = self._read_extra_lease_offset(f)
11096+        new_extra_lease_offset = self.DATA_OFFSET + new_container_size
11097+        if new_extra_lease_offset < old_extra_lease_offset:
11098+            # TODO: allow containers to shrink. For now they remain large.
11099+            return
11100+        num_extra_leases = self._read_num_extra_leases(f)
11101+        f.seek(old_extra_lease_offset)
11102+        leases_size = 4 + num_extra_leases * self.LEASE_SIZE
11103+        extra_lease_data = f.read(leases_size)
11104+
11105+        # Zero out the old lease info (in order to minimize the chance that
11106+        # it could accidentally be exposed to a reader later, re #1528).
11107+        f.seek(old_extra_lease_offset)
11108+        f.write('\x00' * leases_size)
11109+        f.flush()
11110+
11111+        # An interrupt here will corrupt the leases.
11112+
11113+        f.seek(new_extra_lease_offset)
11114+        f.write(extra_lease_data)
11115+        self._write_extra_lease_offset(f, new_extra_lease_offset)
11116+
11117+    def _write_share_data(self, f, offset, data):
11118+        length = len(data)
11119+        precondition(offset >= 0)
11120+        data_length = self._read_data_length(f)
11121+        extra_lease_offset = self._read_extra_lease_offset(f)
11122+
11123+        if offset+length >= data_length:
11124+            # They are expanding their data size.
11125+
11126+            if self.DATA_OFFSET+offset+length > extra_lease_offset:
11127+                # TODO: allow containers to shrink. For now, they remain
11128+                # large.
11129+
11130+                # Their new data won't fit in the current container, so we
11131+                # have to move the leases. With luck, they're expanding it
11132+                # more than the size of the extra lease block, which will
11133+                # minimize the corrupt-the-share window.
11134+                self._change_container_size(f, offset+length)
11135+                extra_lease_offset = self._read_extra_lease_offset(f)
11136+
11137+                # an interrupt here is OK: the container has been enlarged
11138+                # but the data remains untouched
11139+
11140+            assert self.DATA_OFFSET+offset+length <= extra_lease_offset
11141+            # Their data now fits in the current container. We must write
11142+            # their new data and modify the recorded data size.
11143+
11144+            # Fill any newly exposed empty space with 0's.
11145+            if offset > data_length:
11146+                f.seek(self.DATA_OFFSET+data_length)
11147+                f.write('\x00'*(offset - data_length))
11148+                f.flush()
11149+
11150+            new_data_length = offset+length
11151+            self._write_data_length(f, new_data_length)
11152+            # an interrupt here will result in a corrupted share
11153+
11154+        # now all that's left to do is write out their data
11155+        f.seek(self.DATA_OFFSET+offset)
11156+        f.write(data)
11157+        return
11158+
11159+    def _write_lease_record(self, f, lease_number, lease_info):
11160+        extra_lease_offset = self._read_extra_lease_offset(f)
11161+        num_extra_leases = self._read_num_extra_leases(f)
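+        # The first four lease slots live in the fixed-size share header;
+        # further leases go in the extra-lease region, which starts with a
+        # 4-byte count followed by the lease records themselves.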
11162+        if lease_number < 4:
11163+            offset = self.HEADER_SIZE + lease_number * self.LEASE_SIZE
11164+        elif (lease_number-4) < num_extra_leases:
11165+            offset = (extra_lease_offset
11166+                      + 4
11167+                      + (lease_number-4)*self.LEASE_SIZE)
11168+        else:
11169+            # must add an extra lease record
11170+            self._write_num_extra_leases(f, num_extra_leases+1)
11171+            offset = (extra_lease_offset
11172+                      + 4
11173+                      + (lease_number-4)*self.LEASE_SIZE)
11174+        f.seek(offset)
11175+        assert f.tell() == offset
11176+        f.write(lease_info.to_mutable_data())
11177+
11178+    def _read_lease_record(self, f, lease_number):
11179+        # returns a LeaseInfo instance, or None
11180+        extra_lease_offset = self._read_extra_lease_offset(f)
11181+        num_extra_leases = self._read_num_extra_leases(f)
11182+        if lease_number < 4:
11183+            offset = self.HEADER_SIZE + lease_number * self.LEASE_SIZE
11184+        elif (lease_number-4) < num_extra_leases:
11185+            offset = (extra_lease_offset
11186+                      + 4
11187+                      + (lease_number-4)*self.LEASE_SIZE)
11188+        else:
11189+            raise IndexError("No such lease number %d" % lease_number)
11190+        f.seek(offset)
11191+        assert f.tell() == offset
11192+        data = f.read(self.LEASE_SIZE)
11193+        lease_info = LeaseInfo().from_mutable_data(data)
11194+        if lease_info.owner_num == 0:
11195+            return None
11196+        return lease_info
11197+
11198+    def _get_num_lease_slots(self, f):
11199+        # how many places do we have allocated for leases? Not all of them
11200+        # are filled.
11201+        num_extra_leases = self._read_num_extra_leases(f)
11202+        return 4+num_extra_leases
11203+
11204+    def _get_first_empty_lease_slot(self, f):
11205+        # return an int with the index of an empty slot, or None if we do not
11206+        # currently have an empty slot
11207+
11208+        for i in range(self._get_num_lease_slots(f)):
11209+            if self._read_lease_record(f, i) is None:
11210+                return i
11211+        return None
11212+
11213+    def get_leases(self):
11214+        """Yields a LeaseInfo instance for all leases."""
11215+        f = self._home.open('rb')
11216+        try:
11217+            for i, lease in self._enumerate_leases(f):
11218+                yield lease
11219+        finally:
11220+            f.close()
11221+
11222+    def _enumerate_leases(self, f):
11223+        for i in range(self._get_num_lease_slots(f)):
11224+            try:
11225+                data = self._read_lease_record(f, i)
11226+                if data is not None:
11227+                    yield i, data
11228+            except IndexError:
11229+                return
11230+
11231+    # These lease operations are intended for use by disk_backend.py.
11232+    # Other non-test clients should not depend on the fact that the disk
11233+    # backend stores leases in share files.
11234+
11235+    def add_lease(self, lease_info):
11236+        precondition(lease_info.owner_num != 0) # 0 means "no lease here"
11237+        f = self._home.open('rb+')
11238+        try:
11239+            num_lease_slots = self._get_num_lease_slots(f)
11240+            empty_slot = self._get_first_empty_lease_slot(f)
11241+            if empty_slot is not None:
11242+                self._write_lease_record(f, empty_slot, lease_info)
11243+            else:
11244+                self._write_lease_record(f, num_lease_slots, lease_info)
11245+        finally:
11246+            f.close()
11247+
11248+    def renew_lease(self, renew_secret, new_expire_time):
11249+        accepting_nodeids = set()
11250+        f = self._home.open('rb+')
11251+        try:
11252+            for (leasenum, lease) in self._enumerate_leases(f):
11253+                if constant_time_compare(lease.renew_secret, renew_secret):
11254+                    # yup. See if we need to update the owner time.
11255+                    if new_expire_time > lease.expiration_time:
11256+                        # yes
11257+                        lease.expiration_time = new_expire_time
11258+                        self._write_lease_record(f, leasenum, lease)
11259+                    return
11260+                accepting_nodeids.add(lease.nodeid)
11261+        finally:
11262+            f.close()
11263+        # No matching lease was found. Report the accepting nodeids in the
11264+        # exception, to give the client a chance to update the leases on a
11265+        # share that has been migrated from its original server to a new one.
11266+        msg = ("Unable to renew non-existent lease. I have leases accepted by"
11267+               " nodeids: ")
11268+        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
11269+                         for anid in accepting_nodeids])
11270+        msg += " ."
11271+        raise IndexError(msg)
11272+
11273+    def add_or_renew_lease(self, lease_info):
11274+        precondition(lease_info.owner_num != 0) # 0 means "no lease here"
11275+        try:
11276+            self.renew_lease(lease_info.renew_secret,
11277+                             lease_info.expiration_time)
11278+        except IndexError:
11279+            self.add_lease(lease_info)
11280+
11281+    def cancel_lease(self, cancel_secret):
11282+        """Remove any leases with the given cancel_secret. If the last lease
11283+        is cancelled, the file will be removed. Return the number of bytes
11284+        that were freed (by truncating the list of leases, and possibly by
11285+        deleting the file). Raise IndexError if there was no lease with the
11286+        given cancel_secret."""
11287+
11288+        # XXX can this be more like ImmutableDiskShare.cancel_lease?
11289+
11290+        accepting_nodeids = set()
11291+        modified = 0
11292+        remaining = 0
11293+        blank_lease = LeaseInfo(owner_num=0,
11294+                                renew_secret="\x00"*32,
11295+                                cancel_secret="\x00"*32,
11296+                                expiration_time=0,
11297+                                nodeid="\x00"*20)
11298+        f = self._home.open('rb+')
11299+        try:
11300+            for (leasenum, lease) in self._enumerate_leases(f):
11301+                accepting_nodeids.add(lease.nodeid)
11302+                if constant_time_compare(lease.cancel_secret, cancel_secret):
11303+                    self._write_lease_record(f, leasenum, blank_lease)
11304+                    modified += 1
11305+                else:
11306+                    remaining += 1
11307+            if modified:
11308+                freed_space = self._pack_leases(f)
11309+        finally:
11310+            f.close()
11311+
11312+        if modified > 0:
11313+            if remaining == 0:
11314+                freed_space = fileutil.get_used_space(self._home)
11315+                self.unlink()
11316+            return freed_space
11317+
11318+        msg = ("Unable to cancel non-existent lease. I have leases "
11319+               "accepted by nodeids: ")
11320+        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
11321+                         for anid in accepting_nodeids])
11322+        msg += " ."
11323+        raise IndexError(msg)
11324+
11325+    def _pack_leases(self, f):
11326+        # TODO: reclaim space from cancelled leases
11327+        return 0
11328+
11329+    def _read_write_enabler_and_nodeid(self, f):
11330+        f.seek(0)
11331+        data = f.read(self.HEADER_SIZE)
11332+        (magic,
11333+         write_enabler_nodeid, write_enabler,
11334+         data_length, extra_lease_offset) = \
11335+         struct.unpack(">32s20s32sQQ", data)
11336+        assert magic == self.MAGIC
11337+        return (write_enabler, write_enabler_nodeid)
11338+
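+    # A read vector is a list of (offset, length) pairs; readv returns the
+    # corresponding list of data strings. Reads past the end of the data
+    # are truncated (see _read_share_data).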
11339+    def readv(self, readv):
11340+        datav = []
11341+        f = self._home.open('rb')
11342+        try:
11343+            for (offset, length) in readv:
11344+                datav.append(self._read_share_data(f, offset, length))
11345+        finally:
11346+            f.close()
11347+        return datav
11348+
11349+    def get_size(self):
11350+        return self._home.getsize()
11351+
11352+    def get_data_length(self):
11353+        f = self._home.open('rb')
11354+        try:
11355+            data_length = self._read_data_length(f)
11356+        finally:
11357+            f.close()
11358+        return data_length
11359+
11360+    def check_write_enabler(self, write_enabler, si_s):
11361+        f = self._home.open('rb+')
11362+        try:
11363+            (real_write_enabler, write_enabler_nodeid) = self._read_write_enabler_and_nodeid(f)
11364+        finally:
11365+            f.close()
11366+        # avoid a timing attack
11367+        #if write_enabler != real_write_enabler:
11368+        if not constant_time_compare(write_enabler, real_write_enabler):
11369+            # accommodate share migration by reporting the nodeid used for the
11370+            # old write enabler.
11371+            self.log(format="bad write enabler on SI %(si)s,"
11372+                     " recorded by nodeid %(nodeid)s",
11373+                     facility="tahoe.storage",
11374+                     level=log.WEIRD, umid="cE1eBQ",
11375+                     si=si_s, nodeid=idlib.nodeid_b2a(write_enabler_nodeid))
11376+            msg = "The write enabler was recorded by nodeid '%s'." % \
11377+                  (idlib.nodeid_b2a(write_enabler_nodeid),)
11378+            raise BadWriteEnablerError(msg)
11379+
11380+    def check_testv(self, testv):
11381+        test_good = True
11382+        f = self._home.open('rb+')
11383+        try:
11384+            for (offset, length, operator, specimen) in testv:
11385+                data = self._read_share_data(f, offset, length)
11386+                if not testv_compare(data, operator, specimen):
11387+                    test_good = False
11388+                    break
11389+        finally:
11390+            f.close()
11391+        return test_good
11392+
11393+    def writev(self, datav, new_length):
11394+        f = self._home.open('rb+')
11395+        try:
11396+            for (offset, data) in datav:
11397+                self._write_share_data(f, offset, data)
11398+            if new_length is not None:
11399+                cur_length = self._read_data_length(f)
11400+                if new_length < cur_length:
11401+                    self._write_data_length(f, new_length)
11402+                    # TODO: if we're going to shrink the share file when the
11403+                    # share data has shrunk, then call
11404+                    # self._change_container_size() here.
11405+        finally:
11406+            f.close()
11407+
11408+    def close(self):
11409+        pass
11410+
11411+
11412+def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent):
11413+    ms = MutableDiskShare(storageindex, shnum, fp, parent)
11414+    ms.create(serverid, write_enabler)
11415+    del ms
11416+    return MutableDiskShare(storageindex, shnum, fp, parent)
11417addfile ./src/allmydata/storage/backends/s3/s3_backend.py
11418hunk ./src/allmydata/storage/backends/s3/s3_backend.py 1
11419+
11420+from zope.interface import implements
11421+from allmydata.interfaces import IStorageBackend, IShareSet
11422+from allmydata.storage.common import si_b2a, si_a2b
11423+from allmydata.storage.bucket import BucketWriter
11424+from allmydata.storage.backends.base import Backend, ShareSet
11425+from allmydata.storage.backends.s3.immutable import ImmutableS3Share
11426+from allmydata.storage.backends.s3.mutable import MutableS3Share
11427+
11428+# The S3 bucket has keys of the form shares/$STORAGEINDEX/$SHARENUM
11429+
11430+
11431+class S3Backend(Backend):
11432+    implements(IStorageBackend)
11433+
11434+    def __init__(self, s3bucket, readonly=False, max_space=None, corruption_advisory_dir=None):
11435+        Backend.__init__(self)
11436+        self._s3bucket = s3bucket
11437+        self._readonly = readonly
11438+        if max_space is None:
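+            # No maximum was configured; treat available space as unbounded.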
11439+            self._max_space = 2**64
11440+        else:
11441+            self._max_space = int(max_space)
11442+
11443+        # TODO: any set-up for S3?
11444+
11445+        # we don't actually create the corruption-advisory dir until necessary
11446+        self._corruption_advisory_dir = corruption_advisory_dir
11447+
11448+    def get_sharesets_for_prefix(self, prefix):
11449+        # TODO: query S3 for keys matching prefix
11450+        return []
11451+
11452+    def get_shareset(self, storageindex):
11453+        return S3ShareSet(storageindex, self._s3bucket)
11454+
11455+    def fill_in_space_stats(self, stats):
11456+        stats['storage_server.max_space'] = self._max_space
11457+
11458+        # TODO: query space usage of S3 bucket
11459+        stats['storage_server.accepting_immutable_shares'] = int(not self._readonly)
11460+
11461+    def get_available_space(self):
11462+        if self._readonly:
11463+            return 0
11464+        # TODO: query space usage of S3 bucket
11465+        return self._max_space
11466+
11467+
11468+class S3ShareSet(ShareSet):
11469+    implements(IShareSet)
11470+
11471+    def __init__(self, storageindex, s3bucket):
11472+        ShareSet.__init__(self, storageindex)
11473+        self._s3bucket = s3bucket
11474+
11475+    def get_overhead(self):
11476+        return 0
11477+
11478+    def get_shares(self):
11479+        """
11480+        Generate IStorageBackendShare objects for shares we have for this storage index.
11481+        ("Shares we have" means completed ones, excluding incoming ones.)
11482+        """
11483+        pass
11484+
11485+    def has_incoming(self, shnum):
11486+        # TODO: this might need to be more like the disk backend; review callers
11487+        return False
11488+
11489+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
11490+        immsh = ImmutableS3Share(self.get_storage_index(), shnum, self._s3bucket,
11491+                                 max_size=max_space_per_bucket)
11492+        bw = BucketWriter(storageserver, immsh, lease_info, canary)
11493+        return bw
11494+
11495+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
11496+        # TODO
11497+        serverid = storageserver.get_serverid()
11498+        return MutableS3Share(self.get_storage_index(), shnum, self._s3bucket, serverid, write_enabler, storageserver)
11499+
11500+    def _clean_up_after_unlink(self):
11501+        pass
11502+
11503}
11504[interfaces.py: add fill_in_space_stats method to IStorageBackend. refs #999
11505david-sarah@jacaranda.org**20110923203723
11506 Ignore-this: 59371c150532055939794fed6c77dcb6
11507] {
11508hunk ./src/allmydata/interfaces.py 304
11509     def get_sharesets_for_prefix(prefix):
11510         """
11511         Generates IShareSet objects for all storage indices matching the
11512-        given prefix for which this backend holds shares.
11513+        given base-32 prefix for which this backend holds shares.
11514         """
11515 
11516     def get_shareset(storageindex):
11517hunk ./src/allmydata/interfaces.py 312
11518         Get an IShareSet object for the given storage index.
11519         """
11520 
11521+    def fill_in_space_stats(stats):
11522+        """
11523+        Fill in the 'stats' dict with space statistics for this backend, in
11524+        'storage_server.*' keys.
11525+        """
11526+
11527     def advise_corrupt_share(storageindex, sharetype, shnum, reason):
11528         """
11529         Clients who discover hash failures in shares that they have
11530}
11531[Remove redundant si_s argument from check_write_enabler. refs #999
11532david-sarah@jacaranda.org**20110923204425
11533 Ignore-this: 25be760118dbce2eb661137f7d46dd20
11534] {
11535hunk ./src/allmydata/interfaces.py 500
11536 
11537 
11538 class IStoredMutableShare(IStoredShare):
11539-    def check_write_enabler(write_enabler, si_s):
11540+    def check_write_enabler(write_enabler):
11541         """
11542         XXX
11543         """
11544hunk ./src/allmydata/storage/backends/base.py 102
11545         if len(secrets) > 2:
11546             cancel_secret = secrets[2]
11547 
11548-        si_s = self.get_storage_index_string()
11549         shares = {}
11550         for share in self.get_shares():
11551             # XXX is it correct to ignore immutable shares? Maybe get_shares should
11552hunk ./src/allmydata/storage/backends/base.py 107
11553             # have a parameter saying what type it's expecting.
11554             if share.sharetype == "mutable":
11555-                share.check_write_enabler(write_enabler, si_s)
11556+                share.check_write_enabler(write_enabler)
11557                 shares[share.get_shnum()] = share
11558 
11559         # write_enabler is good for all existing shares
11560hunk ./src/allmydata/storage/backends/disk/mutable.py 440
11561             f.close()
11562         return data_length
11563 
11564-    def check_write_enabler(self, write_enabler, si_s):
11565+    def check_write_enabler(self, write_enabler):
11566         f = self._home.open('rb+')
11567         try:
11568             (real_write_enabler, write_enabler_nodeid) = self._read_write_enabler_and_nodeid(f)
11569hunk ./src/allmydata/storage/backends/disk/mutable.py 447
11570         finally:
11571             f.close()
11572         # avoid a timing attack
11573-        #if write_enabler != real_write_enabler:
11574         if not constant_time_compare(write_enabler, real_write_enabler):
11575             # accommodate share migration by reporting the nodeid used for the
11576             # old write enabler.
11577hunk ./src/allmydata/storage/backends/disk/mutable.py 454
11578                      " recorded by nodeid %(nodeid)s",
11579                      facility="tahoe.storage",
11580                      level=log.WEIRD, umid="cE1eBQ",
11581-                     si=si_s, nodeid=idlib.nodeid_b2a(write_enabler_nodeid))
11582+                     si=self.get_storage_index_string(),
11583+                     nodeid=idlib.nodeid_b2a(write_enabler_nodeid))
11584             msg = "The write enabler was recorded by nodeid '%s'." % \
11585                   (idlib.nodeid_b2a(write_enabler_nodeid),)
11586             raise BadWriteEnablerError(msg)
11587hunk ./src/allmydata/storage/backends/s3/mutable.py 440
11588             f.close()
11589         return data_length
11590 
11591-    def check_write_enabler(self, write_enabler, si_s):
11592+    def check_write_enabler(self, write_enabler):
11593         f = self._home.open('rb+')
11594         try:
11595             (real_write_enabler, write_enabler_nodeid) = self._read_write_enabler_and_nodeid(f)
11596hunk ./src/allmydata/storage/backends/s3/mutable.py 447
11597         finally:
11598             f.close()
11599         # avoid a timing attack
11600-        #if write_enabler != real_write_enabler:
11601         if not constant_time_compare(write_enabler, real_write_enabler):
11602             # accommodate share migration by reporting the nodeid used for the
11603             # old write enabler.
11604hunk ./src/allmydata/storage/backends/s3/mutable.py 454
11605                      " recorded by nodeid %(nodeid)s",
11606                      facility="tahoe.storage",
11607                      level=log.WEIRD, umid="cE1eBQ",
11608-                     si=si_s, nodeid=idlib.nodeid_b2a(write_enabler_nodeid))
11609+                     si=self.get_storage_index_string(),
11610+                     nodeid=idlib.nodeid_b2a(write_enabler_nodeid))
11611             msg = "The write enabler was recorded by nodeid '%s'." % \
11612                   (idlib.nodeid_b2a(write_enabler_nodeid),)
11613             raise BadWriteEnablerError(msg)
11614}
11615[Implement readv for immutable shares. refs #999
11616david-sarah@jacaranda.org**20110923204611
11617 Ignore-this: 24f14b663051169d66293020e40c5a05
11618] {
11619hunk ./src/allmydata/storage/backends/disk/immutable.py 156
11620     def get_data_length(self):
11621         return self._lease_offset - self._data_offset
11622 
11623-    #def readv(self, read_vector):
11624-    #    ...
11625+    def readv(self, readv):
11626+        datav = []
11627+        f = self._home.open('rb')
11628+        try:
11629+            for (offset, length) in readv:
11630+                datav.append(self._read_share_data(f, offset, length))
11631+        finally:
11632+            f.close()
11633+        return datav
11634 
11635hunk ./src/allmydata/storage/backends/disk/immutable.py 166
11636-    def read_share_data(self, offset, length):
11637+    def _read_share_data(self, f, offset, length):
11638         precondition(offset >= 0)
11639 
11640         # Reads beyond the end of the data are truncated. Reads that start
11641hunk ./src/allmydata/storage/backends/disk/immutable.py 175
11642         actuallength = max(0, min(length, self._lease_offset-seekpos))
11643         if actuallength == 0:
11644             return ""
11645+        f.seek(seekpos)
11646+        return f.read(actuallength)
11647+
11648+    def read_share_data(self, offset, length):
11649         f = self._home.open(mode='rb')
11650         try:
11651hunk ./src/allmydata/storage/backends/disk/immutable.py 181
11652-            f.seek(seekpos)
11653-            sharedata = f.read(actuallength)
11654+            return self._read_share_data(f, offset, length)
11655         finally:
11656             f.close()
11657hunk ./src/allmydata/storage/backends/disk/immutable.py 184
11658-        return sharedata
11659 
11660     def write_share_data(self, offset, data):
11661         length = len(data)
11662hunk ./src/allmydata/storage/backends/null/null_backend.py 89
11663         return self.shnum
11664 
11665     def unlink(self):
11666-        os.unlink(self.fname)
11667+        pass
11668+
11669+    def readv(self, readv):
11670+        datav = []
11671+        for (offset, length) in readv:
11672+            datav.append("")
11673+        return datav
11674 
11675     def read_share_data(self, offset, length):
11676         precondition(offset >= 0)
11677hunk ./src/allmydata/storage/backends/s3/immutable.py 101
11678     def get_data_length(self):
11679         return self._end_offset - self._data_offset
11680 
11681+    def readv(self, readv):
11682+        datav = []
11683+        for (offset, length) in readv:
11684+            datav.append(self.read_share_data(offset, length))
11685+        return datav
11686+
11687     def read_share_data(self, offset, length):
11688         precondition(offset >= 0)
11689 
11690}
11691[The cancel secret needs to be unique, even if it isn't explicitly provided. refs #999
11692david-sarah@jacaranda.org**20110923204914
11693 Ignore-this: 6c44bb908dd4c0cdc59506b2d87a47b0
11694] {
11695hunk ./src/allmydata/storage/backends/base.py 98
11696 
11697         write_enabler = secrets[0]
11698         renew_secret = secrets[1]
11699-        cancel_secret = '\x00'*32
11700         if len(secrets) > 2:
11701             cancel_secret = secrets[2]
11702hunk ./src/allmydata/storage/backends/base.py 100
11703+        else:
11704+            cancel_secret = renew_secret
11705 
11706         shares = {}
11707         for share in self.get_shares():
11708}
11709[Make EmptyShare.check_testv a simple function. refs #999
11710david-sarah@jacaranda.org**20110923204945
11711 Ignore-this: d0132c085f40c39815fa920b77fc39ab
11712] {
11713hunk ./src/allmydata/storage/backends/base.py 125
11714             else:
11715                 # compare the vectors against an empty share, in which all
11716                 # reads return empty strings
11717-                if not EmptyShare().check_testv(testv):
11718+                if not empty_check_testv(testv):
11719                     storageserver.log("testv failed (empty): [%d] %r" % (sharenum, testv))
11720                     testv_is_good = False
11721                     break
11722hunk ./src/allmydata/storage/backends/base.py 195
11723     # never reached
11724 
11725 
11726-class EmptyShare:
11727-    def check_testv(self, testv):
11728-        test_good = True
11729-        for (offset, length, operator, specimen) in testv:
11730-            data = ""
11731-            if not testv_compare(data, operator, specimen):
11732-                test_good = False
11733-                break
11734-        return test_good
11735+def empty_check_testv(testv):
11736+    test_good = True
11737+    for (offset, length, operator, specimen) in testv:
11738+        data = ""
11739+        if not testv_compare(data, operator, specimen):
11740+            test_good = False
11741+            break
11742+    return test_good
11743 
11744}
11745[Update the null backend to take into account interface changes. Also, it now records which shares are present, but not their contents. refs #999
11746david-sarah@jacaranda.org**20110923205219
11747 Ignore-this: 42a23d7e253255003dc63facea783251
11748] {
11749hunk ./src/allmydata/storage/backends/null/null_backend.py 2
11750 
11751-import os, struct
11752-
11753 from zope.interface import implements
11754 
11755 from allmydata.interfaces import IStorageBackend, IShareSet, IStoredShare, IStoredMutableShare
11756hunk ./src/allmydata/storage/backends/null/null_backend.py 6
11757 from allmydata.util.assertutil import precondition
11758-from allmydata.util.hashutil import constant_time_compare
11759-from allmydata.storage.backends.base import Backend, ShareSet
11760-from allmydata.storage.bucket import BucketWriter
11761+from allmydata.storage.backends.base import Backend, empty_check_testv
11762+from allmydata.storage.bucket import BucketWriter, BucketReader
11763 from allmydata.storage.common import si_b2a
11764hunk ./src/allmydata/storage/backends/null/null_backend.py 9
11765-from allmydata.storage.lease import LeaseInfo
11766 
11767 
11768 class NullBackend(Backend):
11769hunk ./src/allmydata/storage/backends/null/null_backend.py 13
11770+    """
11771+    I am a test backend that records (in memory) which shares exist, but
11772+    not their contents, leases, or write-enablers.
11773+    """
11774     implements(IStorageBackend)
11775 
11776     def __init__(self):
11777         Backend.__init__(self)
11778hunk ./src/allmydata/storage/backends/null/null_backend.py 20
11779+        # mapping from storageindex to NullShareSet
11780+        self._sharesets = {}
11781 
11782hunk ./src/allmydata/storage/backends/null/null_backend.py 23
11783-    def get_available_space(self, reserved_space):
11784+    def get_available_space(self):
11785         return None
11786 
11787     def get_sharesets_for_prefix(self, prefix):
11788hunk ./src/allmydata/storage/backends/null/null_backend.py 27
11789-        pass
11790+        sharesets = []
11791+        for (si, shareset) in self._sharesets.iteritems():
11792+            if si_b2a(si).startswith(prefix):
11793+                sharesets.append(shareset)
11794+
11795+        def _by_base32si(b):
11796+            return b.get_storage_index_string()
11797+        sharesets.sort(key=_by_base32si)
11798+        return sharesets
11799 
11800     def get_shareset(self, storageindex):
11801hunk ./src/allmydata/storage/backends/null/null_backend.py 38
11802-        return NullShareSet(storageindex)
11803+        shareset = self._sharesets.get(storageindex, None)
11804+        if shareset is None:
11805+            shareset = NullShareSet(storageindex)
11806+            self._sharesets[storageindex] = shareset
11807+        return shareset
11808 
11809     def fill_in_space_stats(self, stats):
11810         pass
11811hunk ./src/allmydata/storage/backends/null/null_backend.py 47
11812 
11813-    def set_storage_server(self, ss):
11814-        self.ss = ss
11815 
11816hunk ./src/allmydata/storage/backends/null/null_backend.py 48
11817-    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
11818-        pass
11819-
11820-
11821-class NullShareSet(ShareSet):
11822+class NullShareSet(object):
11823     implements(IShareSet)
11824 
11825     def __init__(self, storageindex):
11826hunk ./src/allmydata/storage/backends/null/null_backend.py 53
11827         self.storageindex = storageindex
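+        # Track only which share numbers exist; share contents, leases and
+        # write-enablers are never stored.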
11828+        self._incoming_shnums = set()
11829+        self._immutable_shnums = set()
11830+        self._mutable_shnums = set()
11831+
11832+    def close_shnum(self, shnum):
11833+        self._incoming_shnums.remove(shnum)
11834+        self._immutable_shnums.add(shnum)
11835 
11836     def get_overhead(self):
11837         return 0
11838hunk ./src/allmydata/storage/backends/null/null_backend.py 64
11839 
11840-    def get_incoming_shnums(self):
11841-        return frozenset()
11842-
11843     def get_shares(self):
11844hunk ./src/allmydata/storage/backends/null/null_backend.py 65
11845+        for shnum in self._immutable_shnums:
11846+            yield ImmutableNullShare(self, shnum)
11847+        for shnum in self._mutable_shnums:
11848+            yield MutableNullShare(self, shnum)
11849+
11850+    def renew_lease(self, renew_secret, new_expiration_time):
11851+        raise IndexError("no such lease to renew")
11852+
11853+    def get_leases(self):
11854         pass
11855 
11856hunk ./src/allmydata/storage/backends/null/null_backend.py 76
11857-    def get_share(self, shnum):
11858-        return None
11859+    def add_or_renew_lease(self, lease_info):
11860+        pass
11861+
11862+    def has_incoming(self, shnum):
11863+        return shnum in self._incoming_shnums
11864 
11865     def get_storage_index(self):
11866         return self.storageindex
11867hunk ./src/allmydata/storage/backends/null/null_backend.py 89
11868         return si_b2a(self.storageindex)
11869 
11870     def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
11871-        immutableshare = ImmutableNullShare()
11872-        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
11873+        self._incoming_shnums.add(shnum)
11874+        immutableshare = ImmutableNullShare(self, shnum)
11875+        bw = BucketWriter(storageserver, immutableshare, lease_info, canary)
11876+        bw.throw_out_all_data = True
11877+        return bw
11878 
11879hunk ./src/allmydata/storage/backends/null/null_backend.py 95
11880-    def _create_mutable_share(self, storageserver, shnum, write_enabler):
11881-        return MutableNullShare()
11882+    def make_bucket_reader(self, storageserver, share):
11883+        return BucketReader(storageserver, share)
11884 
11885hunk ./src/allmydata/storage/backends/null/null_backend.py 98
11886-    def _clean_up_after_unlink(self):
11887-        pass
11888+    def testv_and_readv_and_writev(self, storageserver, secrets,
11889+                                   test_and_write_vectors, read_vector,
11890+                                   expiration_time):
11891+        # evaluate test vectors
11892+        testv_is_good = True
11893+        for sharenum in test_and_write_vectors:
11894+            # compare the vectors against an empty share, in which all
11895+            # reads return empty strings
11896+            (testv, datav, new_length) = test_and_write_vectors[sharenum]
11897+            if not empty_check_testv(testv):
11898+                storageserver.log("testv failed (empty): [%d] %r" % (sharenum, testv))
11899+                testv_is_good = False
11900+                break
11901 
11902hunk ./src/allmydata/storage/backends/null/null_backend.py 112
11903+        # gather the read vectors
11904+        read_data = {}
11905+        for shnum in self._mutable_shnums:
11906+            read_data[shnum] = ""
11907 
11908hunk ./src/allmydata/storage/backends/null/null_backend.py 117
11909-class ImmutableNullShare:
11910-    implements(IStoredShare)
11911-    sharetype = "immutable"
11912+        if testv_is_good:
11913+            # now apply the write vectors
11914+            for shnum in test_and_write_vectors:
11915+                (testv, datav, new_length) = test_and_write_vectors[shnum]
11916+                if new_length == 0:
11917+                    self._mutable_shnums.discard(shnum)  # discard: shnum may not be present
11918+                else:
11919+                    self._mutable_shnums.add(shnum)
11920 
11921hunk ./src/allmydata/storage/backends/null/null_backend.py 126
11922-    def __init__(self):
11923-        """ If max_size is not None then I won't allow more than
11924-        max_size to be written to me. If create=True then max_size
11925-        must not be None. """
11926-        pass
11927+        return (testv_is_good, read_data)
11928+
11929+    def readv(self, wanted_shnums, read_vector):
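+        # No share contents are stored, so there is nothing to read.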
11930+        return {}
11931+
11932+
11933+class NullShareBase(object):
11934+    def __init__(self, shareset, shnum):
11935+        self.shareset = shareset
11936+        self.shnum = shnum
11937+
11938+    def get_storage_index(self):
11939+        return self.shareset.get_storage_index()
11940+
11941+    def get_storage_index_string(self):
11942+        return self.shareset.get_storage_index_string()
11943 
11944     def get_shnum(self):
11945         return self.shnum
11946hunk ./src/allmydata/storage/backends/null/null_backend.py 146
11947 
11948+    def get_data_length(self):
11949+        return 0
11950+
11951+    def get_size(self):
11952+        return 0
11953+
11954+    def get_used_space(self):
11955+        return 0
11956+
11957     def unlink(self):
11958         pass
11959 
11960hunk ./src/allmydata/storage/backends/null/null_backend.py 166
11961 
11962     def read_share_data(self, offset, length):
11963         precondition(offset >= 0)
11964-        # Reads beyond the end of the data are truncated. Reads that start
11965-        # beyond the end of the data return an empty string.
11966-        seekpos = self._data_offset+offset
11967-        fsize = os.path.getsize(self.fname)
11968-        actuallength = max(0, min(length, fsize-seekpos)) # XXX #1528
11969-        if actuallength == 0:
11970-            return ""
11971-        f = open(self.fname, 'rb')
11972-        f.seek(seekpos)
11973-        return f.read(actuallength)
11974+        return ""
11975 
11976     def write_share_data(self, offset, data):
11977         pass
11978hunk ./src/allmydata/storage/backends/null/null_backend.py 171
11979 
11980-    def _write_lease_record(self, f, lease_number, lease_info):
11981-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
11982-        f.seek(offset)
11983-        assert f.tell() == offset
11984-        f.write(lease_info.to_immutable_data())
11985-
11986-    def _read_num_leases(self, f):
11987-        f.seek(0x08)
11988-        (num_leases,) = struct.unpack(">L", f.read(4))
11989-        return num_leases
11990-
11991-    def _write_num_leases(self, f, num_leases):
11992-        f.seek(0x08)
11993-        f.write(struct.pack(">L", num_leases))
11994-
11995-    def _truncate_leases(self, f, num_leases):
11996-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
11997-
11998     def get_leases(self):
11999hunk ./src/allmydata/storage/backends/null/null_backend.py 172
12000-        """Yields a LeaseInfo instance for all leases."""
12001-        f = open(self.fname, 'rb')
12002-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
12003-        f.seek(self._lease_offset)
12004-        for i in range(num_leases):
12005-            data = f.read(self.LEASE_SIZE)
12006-            if data:
12007-                yield LeaseInfo().from_immutable_data(data)
12008+        pass
12009 
12010     def add_lease(self, lease):
12011         pass
12012hunk ./src/allmydata/storage/backends/null/null_backend.py 178
12013 
12014     def renew_lease(self, renew_secret, new_expire_time):
12015-        for i,lease in enumerate(self.get_leases()):
12016-            if constant_time_compare(lease.renew_secret, renew_secret):
12017-                # yup. See if we need to update the owner time.
12018-                if new_expire_time > lease.expiration_time:
12019-                    # yes
12020-                    lease.expiration_time = new_expire_time
12021-                    f = open(self.fname, 'rb+')
12022-                    self._write_lease_record(f, i, lease)
12023-                    f.close()
12024-                return
12025         raise IndexError("unable to renew non-existent lease")
12026 
12027     def add_or_renew_lease(self, lease_info):
12028hunk ./src/allmydata/storage/backends/null/null_backend.py 181
12029-        try:
12030-            self.renew_lease(lease_info.renew_secret,
12031-                             lease_info.expiration_time)
12032-        except IndexError:
12033-            self.add_lease(lease_info)
12034+        pass
12035 
12036 
12037hunk ./src/allmydata/storage/backends/null/null_backend.py 184
12038-class MutableNullShare:
12039+class ImmutableNullShare(NullShareBase):
12040+    implements(IStoredShare)
12041+    sharetype = "immutable"
12042+
12043+    def close(self):
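+        # Closing moves this share from the incoming set to the immutable set.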
12044+        self.shareset.close_shnum(self.shnum)
12045+
12046+
12047+class MutableNullShare(NullShareBase):
12048     implements(IStoredMutableShare)
12049     sharetype = "mutable"
12050hunk ./src/allmydata/storage/backends/null/null_backend.py 195
12051+
12052+    def check_write_enabler(self, write_enabler):
12053+        # Null backend doesn't check write enablers.
12054+        pass
12055+
12056+    def check_testv(self, testv):
12057+        return empty_check_testv(testv)
12058+
12059+    def writev(self, datav, new_length):
12060+        pass
12061+
12062+    def close(self):
12063+        pass
12064 
12065hunk ./src/allmydata/storage/backends/null/null_backend.py 209
12066-    """ XXX: TODO """
12067}
12068[Update the S3 backend. refs #999
12069david-sarah@jacaranda.org**20110923205345
12070 Ignore-this: 5ca623a17e09ddad4cab2f51b49aec0a
12071] {
12072hunk ./src/allmydata/storage/backends/s3/immutable.py 11
12073 from allmydata.storage.common import si_b2a, UnknownImmutableContainerVersionError, DataTooLargeError
12074 
12075 
12076-# Each share file (in storage/shares/$PREFIX/$STORAGEINDEX/$SHNUM) contains
12077+# Each share file (with key 'shares/$PREFIX/$STORAGEINDEX/$SHNUM') contains
12078 # lease information [currently inaccessible] and share data. The share data is
12079 # accessed by RIBucketWriter.write and RIBucketReader.read .
12080 
12081hunk ./src/allmydata/storage/backends/s3/immutable.py 65
12082             # in case a share file is copied from a disk backend, or in case we
12083             # need them in future.
12084             # TODO: filesize = size of S3 object
12085+            filesize = 0
12086             self._end_offset = filesize - (num_leases * self.LEASE_SIZE)
12087         self._data_offset = 0xc
12088 
12089hunk ./src/allmydata/storage/backends/s3/immutable.py 122
12090         return "\x00"*actuallength
12091 
12092     def write_share_data(self, offset, data):
12093-        assert offset >= self._size, "offset = %r, size = %r" % (offset, self._size)
12094+        length = len(data)
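+        # Writes must be sequential (append-only), since an S3 object cannot
+        # be modified in place.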
12095+        precondition(offset >= self._size, "offset = %r, size = %r" % (offset, self._size))
12096+        if self._max_size is not None and offset+length > self._max_size:
12097+            raise DataTooLargeError(self._max_size, offset, length)
12098 
12099         # TODO: write data to S3. If offset > self._size, fill the space
12100         # between with zeroes.
12101hunk ./src/allmydata/storage/backends/s3/mutable.py 17
12102 from allmydata.storage.backends.base import testv_compare
12103 
12104 
12105-# The MutableDiskShare is like the ImmutableDiskShare, but used for mutable data.
12106+# The MutableS3Share is like the ImmutableS3Share, but used for mutable data.
12107 # It has a different layout. See docs/mutable.rst for more details.
12108 
12109 # #   offset    size    name
12110hunk ./src/allmydata/storage/backends/s3/mutable.py 43
12111 assert struct.calcsize(">Q") == 8, struct.calcsize(">Q")
12112 
12113 
12114-class MutableDiskShare(object):
12115+class MutableS3Share(object):
12116     implements(IStoredMutableShare)
12117 
12118     sharetype = "mutable"
12119hunk ./src/allmydata/storage/backends/s3/mutable.py 111
12120             f.close()
12121 
12122     def __repr__(self):
12123-        return ("<MutableDiskShare %s:%r at %s>"
12124+        return ("<MutableS3Share %s:%r at %s>"
12125                 % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
12126 
12127     def get_used_space(self):
12128hunk ./src/allmydata/storage/backends/s3/mutable.py 311
12129             except IndexError:
12130                 return
12131 
12132-    # These lease operations are intended for use by disk_backend.py.
12133-    # Other non-test clients should not depend on the fact that the disk
12134-    # backend stores leases in share files.
12135-
12136-    def add_lease(self, lease_info):
12137-        precondition(lease_info.owner_num != 0) # 0 means "no lease here"
12138-        f = self._home.open('rb+')
12139-        try:
12140-            num_lease_slots = self._get_num_lease_slots(f)
12141-            empty_slot = self._get_first_empty_lease_slot(f)
12142-            if empty_slot is not None:
12143-                self._write_lease_record(f, empty_slot, lease_info)
12144-            else:
12145-                self._write_lease_record(f, num_lease_slots, lease_info)
12146-        finally:
12147-            f.close()
12148-
12149-    def renew_lease(self, renew_secret, new_expire_time):
12150-        accepting_nodeids = set()
12151-        f = self._home.open('rb+')
12152-        try:
12153-            for (leasenum, lease) in self._enumerate_leases(f):
12154-                if constant_time_compare(lease.renew_secret, renew_secret):
12155-                    # yup. See if we need to update the owner time.
12156-                    if new_expire_time > lease.expiration_time:
12157-                        # yes
12158-                        lease.expiration_time = new_expire_time
12159-                        self._write_lease_record(f, leasenum, lease)
12160-                    return
12161-                accepting_nodeids.add(lease.nodeid)
12162-        finally:
12163-            f.close()
12164-        # No matching lease was found. Report the accepting nodeids in the
12165-        # exception, to give the client a chance to update the leases on a
12166-        # share that has been migrated from its original server to a new one.
12167-        msg = ("Unable to renew non-existent lease. I have leases accepted by"
12168-               " nodeids: ")
12169-        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
12170-                         for anid in accepting_nodeids])
12171-        msg += " ."
12172-        raise IndexError(msg)
12173-
12174-    def add_or_renew_lease(self, lease_info):
12175-        precondition(lease_info.owner_num != 0) # 0 means "no lease here"
12176-        try:
12177-            self.renew_lease(lease_info.renew_secret,
12178-                             lease_info.expiration_time)
12179-        except IndexError:
12180-            self.add_lease(lease_info)
12181-
12182-    def cancel_lease(self, cancel_secret):
12183-        """Remove any leases with the given cancel_secret. If the last lease
12184-        is cancelled, the file will be removed. Return the number of bytes
12185-        that were freed (by truncating the list of leases, and possibly by
12186-        deleting the file). Raise IndexError if there was no lease with the
12187-        given cancel_secret."""
12188-
12189-        # XXX can this be more like ImmutableDiskShare.cancel_lease?
12190-
12191-        accepting_nodeids = set()
12192-        modified = 0
12193-        remaining = 0
12194-        blank_lease = LeaseInfo(owner_num=0,
12195-                                renew_secret="\x00"*32,
12196-                                cancel_secret="\x00"*32,
12197-                                expiration_time=0,
12198-                                nodeid="\x00"*20)
12199-        f = self._home.open('rb+')
12200-        try:
12201-            for (leasenum, lease) in self._enumerate_leases(f):
12202-                accepting_nodeids.add(lease.nodeid)
12203-                if constant_time_compare(lease.cancel_secret, cancel_secret):
12204-                    self._write_lease_record(f, leasenum, blank_lease)
12205-                    modified += 1
12206-                else:
12207-                    remaining += 1
12208-            if modified:
12209-                freed_space = self._pack_leases(f)
12210-        finally:
12211-            f.close()
12212-
12213-        if modified > 0:
12214-            if remaining == 0:
12215-                freed_space = fileutil.get_used_space(self._home)
12216-                self.unlink()
12217-            return freed_space
12218-
12219-        msg = ("Unable to cancel non-existent lease. I have leases "
12220-               "accepted by nodeids: ")
12221-        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
12222-                         for anid in accepting_nodeids])
12223-        msg += " ."
12224-        raise IndexError(msg)
12225-
12226-    def _pack_leases(self, f):
12227-        # TODO: reclaim space from cancelled leases
12228-        return 0
12229-
12230     def _read_write_enabler_and_nodeid(self, f):
12231         f.seek(0)
12232         data = f.read(self.HEADER_SIZE)
12233hunk ./src/allmydata/storage/backends/s3/mutable.py 394
12234         pass
12235 
12236 
12237-def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent):
12238-    ms = MutableDiskShare(storageindex, shnum, fp, parent)
12239+def create_mutable_s3_share(storageindex, shnum, fp, serverid, write_enabler, parent):
12240+    ms = MutableS3Share(storageindex, shnum, fp, parent)
12241     ms.create(serverid, write_enabler)
12242     del ms
12243hunk ./src/allmydata/storage/backends/s3/mutable.py 398
12244-    return MutableDiskShare(storageindex, shnum, fp, parent)
12245+    return MutableS3Share(storageindex, shnum, fp, parent)
12246hunk ./src/allmydata/storage/backends/s3/s3_backend.py 10
12247 from allmydata.storage.backends.s3.immutable import ImmutableS3Share
12248 from allmydata.storage.backends.s3.mutable import MutableS3Share
12249 
12250-# The S3 bucket has keys of the form shares/$STORAGEINDEX/$SHARENUM
12251-
12252+# The S3 bucket has keys of the form shares/$PREFIX/$STORAGEINDEX/$SHNUM .
12253 
12254 class S3Backend(Backend):
12255     implements(IStorageBackend)
12256}
12257[Minor cleanup to disk backend. refs #999
12258david-sarah@jacaranda.org**20110923205510
12259 Ignore-this: 79f92d7c2edb14cfedb167247c3f0d08
12260] {
12261hunk ./src/allmydata/storage/backends/disk/immutable.py 87
12262                 (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
12263             finally:
12264                 f.close()
12265-            filesize = self._home.getsize()
12266             if version != 1:
12267                 msg = "sharefile %s had version %d but we wanted 1" % \
12268                       (self._home, version)
12269hunk ./src/allmydata/storage/backends/disk/immutable.py 91
12270                 raise UnknownImmutableContainerVersionError(msg)
12271+
12272+            filesize = self._home.getsize()
12273             self._num_leases = num_leases
12274             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
12275         self._data_offset = 0xc
12276}
[disk/disk_backend.py: modify get_available_space to handle EnvironmentError by returning 0, indicating that the OS call failed.
12278wilcoxjg@gmail.com**20110927063156
12279 Ignore-this: 95a3b66e3b59ea50ede0aad140e5acf0
12280] hunk ./src/allmydata/storage/backends/disk/disk_backend.py 123
12281     def get_available_space(self):
12282         if self._readonly:
12283             return 0
12284-        return fileutil.get_available_space(self._sharedir, self._reserved_space)
12285-
12286+        try:
12287+            return fileutil.get_available_space(self._sharedir, self._reserved_space)
12288+        except EnvironmentError:
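+            # The OS call failed; report no available space.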
12289+            return 0
12290 
12291 class DiskShareSet(ShareSet):
12292     implements(IShareSet)
12293
12294Context:
12295
12296[docs/configuration.rst: add section about the types of node, and clarify when setting web.port enables web-API service. fixes #1444
12297zooko@zooko.com**20110926203801
12298 Ignore-this: ab94d470c68e720101a7ff3c207a719e
12299]
12300[TAG allmydata-tahoe-1.9.0a2
12301warner@lothar.com**20110925234811
12302 Ignore-this: e9649c58f9c9017a7d55008938dba64f
12303]
12304Patch bundle hash:
12305884913a238bd27726459ca57deba435384c57795