Ticket #999: pluggable-backends-davidsarah-v13a.darcs.patch

File pluggable-backends-davidsarah-v13a.darcs.patch, 532.1 KB (added by davidsarah at 2011-09-28T01:45:53Z)

This does not include the asyncification changes from v14, but does include a couple of fixes for failures in test_system.
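
For reference, a darcs patch bundle like this one is applied to a trunk checkout with darcs itself. A minimal sketch, assuming darcs is installed and the bundle has been downloaded next to the checkout (the repository URL is the one named in the bundle header below; the directory name tahoe-trunk is arbitrary):

  darcs get http://tahoe-lafs.org/source/tahoe/trunk tahoe-trunk
  cd tahoe-trunk
  darcs apply ../pluggable-backends-davidsarah-v13a.darcs.patch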

39 patches for repository http://tahoe-lafs.org/source/tahoe/trunk:

Thu Aug 25 01:32:17 BST 2011  david-sarah@jacaranda.org
  * interfaces.py: 'which -> that' grammar cleanup.

Tue Sep 20 00:29:26 BST 2011  david-sarah@jacaranda.org
  * Pluggable backends -- new and moved files, changes to moved files. refs #999

Tue Sep 20 00:32:56 BST 2011  david-sarah@jacaranda.org
  * Pluggable backends -- all other changes. refs #999

Tue Sep 20 04:38:03 BST 2011  david-sarah@jacaranda.org
  * Work-in-progress, includes fix to bug involving BucketWriter. refs #999

Tue Sep 20 18:17:37 BST 2011  david-sarah@jacaranda.org
  * docs/backends: document the configuration options for the pluggable backends scheme. refs #999

Wed Sep 21 04:12:07 BST 2011  david-sarah@jacaranda.org
  * Fix some incorrect attribute accesses. refs #999

Wed Sep 21 04:16:25 BST 2011  david-sarah@jacaranda.org
  * docs/backends/S3.rst: remove Issues section. refs #999

Wed Sep 21 04:17:05 BST 2011  david-sarah@jacaranda.org
  * docs/backends/S3.rst, disk.rst: describe type of space settings as 'quantity of space', not 'str'. refs #999

Wed Sep 21 19:46:49 BST 2011  david-sarah@jacaranda.org
  * More fixes to tests needed for pluggable backends. refs #999

Wed Sep 21 23:14:21 BST 2011  david-sarah@jacaranda.org
  * Fix more shallow bugs, mainly FilePathification. Also, remove the max_space_per_bucket parameter from BucketWriter since it can be obtained from the _max_size attribute of the share (via a new get_allocated_size() accessor). refs #999

Wed Sep 21 23:20:38 BST 2011  david-sarah@jacaranda.org
  * uri.py: resolve a conflict between trunk and the pluggable-backends patches. refs #999

Thu Sep 22 05:54:51 BST 2011  david-sarah@jacaranda.org
  * Fix some more test failures. refs #999

Thu Sep 22 19:30:08 BST 2011  david-sarah@jacaranda.org
  * Fix most of the crawler tests. refs #999

Thu Sep 22 19:33:23 BST 2011  david-sarah@jacaranda.org
  * Reinstate the cancel_lease methods of ImmutableDiskShare and MutableDiskShare, since they are needed for lease expiry. refs #999

Fri Sep 23 02:20:44 BST 2011  david-sarah@jacaranda.org
  * Blank line cleanups.

Fri Sep 23 05:08:25 BST 2011  david-sarah@jacaranda.org
  * mutable/publish.py: elements should not be removed from a dictionary while it is being iterated over. refs #393

Fri Sep 23 05:10:03 BST 2011  david-sarah@jacaranda.org
  * A few comment cleanups. refs #999

Fri Sep 23 05:11:15 BST 2011  david-sarah@jacaranda.org
  * Move advise_corrupt_share to allmydata/storage/backends/base.py, since it will be common to the disk and S3 backends. refs #999

Fri Sep 23 05:13:14 BST 2011  david-sarah@jacaranda.org
  * Add incomplete S3 backend. refs #999

Fri Sep 23 21:37:23 BST 2011  david-sarah@jacaranda.org
  * interfaces.py: add fill_in_space_stats method to IStorageBackend. refs #999

Fri Sep 23 21:44:25 BST 2011  david-sarah@jacaranda.org
  * Remove redundant si_s argument from check_write_enabler. refs #999

Fri Sep 23 21:46:11 BST 2011  david-sarah@jacaranda.org
  * Implement readv for immutable shares. refs #999

Fri Sep 23 21:49:14 BST 2011  david-sarah@jacaranda.org
  * The cancel secret needs to be unique, even if it isn't explicitly provided. refs #999

Fri Sep 23 21:49:45 BST 2011  david-sarah@jacaranda.org
  * Make EmptyShare.check_testv a simple function. refs #999

Fri Sep 23 21:52:19 BST 2011  david-sarah@jacaranda.org
  * Update the null backend to take into account interface changes. Also, it now records which shares are present, but not their contents. refs #999

Fri Sep 23 21:53:45 BST 2011  david-sarah@jacaranda.org
  * Update the S3 backend. refs #999

Fri Sep 23 21:55:10 BST 2011  david-sarah@jacaranda.org
  * Minor cleanup to disk backend. refs #999

Fri Sep 23 23:09:35 BST 2011  david-sarah@jacaranda.org
  * Add 'has-immutable-readv' to server version information. refs #999

Tue Sep 27 08:09:47 BST 2011  david-sarah@jacaranda.org
  * util/deferredutil.py: add some utilities for asynchronous iteration. refs #999

Tue Sep 27 08:14:03 BST 2011  david-sarah@jacaranda.org
  * test_storage.py: fix test_status_bad_disk_stats. refs #999

Tue Sep 27 08:15:44 BST 2011  david-sarah@jacaranda.org
  * Cleanups to disk backend. refs #999

Tue Sep 27 08:18:55 BST 2011  david-sarah@jacaranda.org
  * Cleanups to S3 backend (not including Deferred changes). refs #999

Tue Sep 27 08:28:48 BST 2011  david-sarah@jacaranda.org
  * test_storage.py: fix test_no_st_blocks. refs #999

Tue Sep 27 08:35:30 BST 2011  david-sarah@jacaranda.org
  * mutable/publish.py: resolve conflicting patches. refs #999

Wed Sep 28 02:37:29 BST 2011  david-sarah@jacaranda.org
  * Undo an incompatible change to RIStorageServer. refs #999

Wed Sep 28 02:38:57 BST 2011  david-sarah@jacaranda.org
  * test_system.py: incorrect arguments were being passed to the constructor for MutableDiskShare. refs #999

Wed Sep 28 02:40:19 BST 2011  david-sarah@jacaranda.org
  * test_system.py: more debug output for a failing check in test_filesystem. refs #999

Wed Sep 28 02:40:49 BST 2011  david-sarah@jacaranda.org
  * scripts/debug.py: fix incorrect arguments to dump_immutable_share. refs #999

Wed Sep 28 02:41:26 BST 2011  david-sarah@jacaranda.org
  * mutable/publish.py: don't crash if there are no writers in _report_verinfo. refs #999

New patches:

[interfaces.py: 'which -> that' grammar cleanup.
david-sarah@jacaranda.org**20110825003217
 Ignore-this: a3e15f3676de1b346ad78aabdfb8cac6
] {
hunk ./src/allmydata/interfaces.py 38
     the StubClient. This object doesn't actually offer any services, but the
     announcement helps the Introducer keep track of which clients are
     subscribed (so the grid admin can keep track of things like the size of
-    the grid and the client versions in use. This is the (empty)
+    the grid and the client versions in use). This is the (empty)
     RemoteInterface for the StubClient."""
 
 class RIBucketWriter(RemoteInterface):
hunk ./src/allmydata/interfaces.py 276
         (binary) storage index string, and 'shnum' is the integer share
         number. 'reason' is a human-readable explanation of the problem,
         probably including some expected hash values and the computed ones
-        which did not match. Corruption advisories for mutable shares should
+        that did not match. Corruption advisories for mutable shares should
         include a hash of the public key (the same value that appears in the
         mutable-file verify-cap), since the current share format does not
         store that on disk.
hunk ./src/allmydata/interfaces.py 413
           remote_host: the IAddress, if connected, otherwise None
 
         This method is intended for monitoring interfaces, such as a web page
-        which describes connecting and connected peers.
+        that describes connecting and connected peers.
         """
 
     def get_all_peerids():
hunk ./src/allmydata/interfaces.py 515
 
     # TODO: rename to get_read_cap()
     def get_readonly():
-        """Return another IURI instance, which represents a read-only form of
+        """Return another IURI instance that represents a read-only form of
         this one. If is_readonly() is True, this returns self."""
 
     def get_verify_cap():
hunk ./src/allmydata/interfaces.py 542
         passing into init_from_string."""
 
 class IDirnodeURI(Interface):
-    """I am a URI which represents a dirnode."""
+    """I am a URI that represents a dirnode."""
 
 class IFileURI(Interface):
hunk ./src/allmydata/interfaces.py 545
-    """I am a URI which represents a filenode."""
+    """I am a URI that represents a filenode."""
     def get_size():
         """Return the length (in bytes) of the file that I represent."""
 
hunk ./src/allmydata/interfaces.py 553
     pass
 
 class IMutableFileURI(Interface):
-    """I am a URI which represents a mutable filenode."""
+    """I am a URI that represents a mutable filenode."""
     def get_extension_params():
         """Return the extension parameters in the URI"""
 
hunk ./src/allmydata/interfaces.py 856
         """
 
 class IFileNode(IFilesystemNode):
-    """I am a node which represents a file: a sequence of bytes. I am not a
+    """I am a node that represents a file: a sequence of bytes. I am not a
     container, like IDirectoryNode."""
     def get_best_readable_version():
         """Return a Deferred that fires with an IReadable for the 'best'
hunk ./src/allmydata/interfaces.py 905
     multiple versions of a file present in the grid, some of which might be
     unrecoverable (i.e. have fewer than 'k' shares). These versions are
     loosely ordered: each has a sequence number and a hash, and any version
-    with seqnum=N was uploaded by a node which has seen at least one version
+    with seqnum=N was uploaded by a node that has seen at least one version
     with seqnum=N-1.
 
     The 'servermap' (an instance of IMutableFileServerMap) is used to
hunk ./src/allmydata/interfaces.py 1014
         as a guide to where the shares are located.
 
         I return a Deferred that fires with the requested contents, or
-        errbacks with UnrecoverableFileError. Note that a servermap which was
+        errbacks with UnrecoverableFileError. Note that a servermap that was
         updated with MODE_ANYTHING or MODE_READ may not know about shares for
         all versions (those modes stop querying servers as soon as they can
         fulfil their goals), so you may want to use MODE_CHECK (which checks
hunk ./src/allmydata/interfaces.py 1073
     """Upload was unable to satisfy 'servers_of_happiness'"""
 
 class UnableToFetchCriticalDownloadDataError(Exception):
-    """I was unable to fetch some piece of critical data which is supposed to
+    """I was unable to fetch some piece of critical data that is supposed to
     be identically present in all shares."""
 
 class NoServersError(Exception):
hunk ./src/allmydata/interfaces.py 1085
     exists, and overwrite= was set to False."""
 
 class NoSuchChildError(Exception):
-    """A directory node was asked to fetch a child which does not exist."""
+    """A directory node was asked to fetch a child that does not exist."""
 
 class ChildOfWrongTypeError(Exception):
     """An operation was attempted on a child of the wrong type (file or directory)."""
hunk ./src/allmydata/interfaces.py 1403
         if you initially thought you were going to use 10 peers, started
         encoding, and then two of the peers dropped out: you could use
         desired_share_ids= to skip the work (both memory and CPU) of
-        producing shares for the peers which are no longer available.
+        producing shares for the peers that are no longer available.
 
         """
 
hunk ./src/allmydata/interfaces.py 1478
         if you initially thought you were going to use 10 peers, started
         encoding, and then two of the peers dropped out: you could use
         desired_share_ids= to skip the work (both memory and CPU) of
-        producing shares for the peers which are no longer available.
+        producing shares for the peers that are no longer available.
 
         For each call, encode() will return a Deferred that fires with two
         lists, one containing shares and the other containing the shareids.
hunk ./src/allmydata/interfaces.py 1535
         required to be of the same length.  The i'th element of their_shareids
         is required to be the shareid of the i'th buffer in some_shares.
 
-        This returns a Deferred which fires with a sequence of buffers. This
+        This returns a Deferred that fires with a sequence of buffers. This
         sequence will contain all of the segments of the original data, in
         order. The sum of the lengths of all of the buffers will be the
         'data_size' value passed into the original ICodecEncode.set_params()
hunk ./src/allmydata/interfaces.py 1582
         Encoding parameters can be set in three ways. 1: The Encoder class
         provides defaults (3/7/10). 2: the Encoder can be constructed with
         an 'options' dictionary, in which the
-        needed_and_happy_and_total_shares' key can be a (k,d,n) tuple. 3:
+        'needed_and_happy_and_total_shares' key can be a (k,d,n) tuple. 3:
         set_params((k,d,n)) can be called.
 
         If you intend to use set_params(), you must call it before
hunk ./src/allmydata/interfaces.py 1780
         produced, so that the segment hashes can be generated with only a
         single pass.
 
-        This returns a Deferred which fires with a sequence of hashes, using:
+        This returns a Deferred that fires with a sequence of hashes, using:
 
          tuple(segment_hashes[first:last])
 
hunk ./src/allmydata/interfaces.py 1796
     def get_plaintext_hash():
         """OBSOLETE; Get the hash of the whole plaintext.
 
-        This returns a Deferred which fires with a tagged SHA-256 hash of the
+        This returns a Deferred that fires with a tagged SHA-256 hash of the
         whole plaintext, obtained from hashutil.plaintext_hash(data).
         """
 
hunk ./src/allmydata/interfaces.py 1856
         be used to encrypt the data. The key will also be hashed to derive
         the StorageIndex.
 
-        Uploadables which want to achieve convergence should hash their file
+        Uploadables that want to achieve convergence should hash their file
         contents and the serialized_encoding_parameters to form the key
         (which of course requires a full pass over the data). Uploadables can
         use the upload.ConvergentUploadMixin class to achieve this
hunk ./src/allmydata/interfaces.py 1862
         automatically.
 
-        Uploadables which do not care about convergence (or do not wish to
+        Uploadables that do not care about convergence (or do not wish to
         make multiple passes over the data) can simply return a
         strongly-random 16 byte string.
 
hunk ./src/allmydata/interfaces.py 1872
 
     def read(length):
         """Return a Deferred that fires with a list of strings (perhaps with
-        only a single element) which, when concatenated together, contain the
+        only a single element) that, when concatenated together, contain the
         next 'length' bytes of data. If EOF is near, this may provide fewer
         than 'length' bytes. The total number of bytes provided by read()
         before it signals EOF must equal the size provided by get_size().
hunk ./src/allmydata/interfaces.py 1919
 
     def read(length):
         """
-        Returns a list of strings which, when concatenated, are the next
+        Returns a list of strings that, when concatenated, are the next
        length bytes of the file, or fewer if there are fewer bytes
         between the current location and the end of the file.
         """
hunk ./src/allmydata/interfaces.py 1932
 
 class IUploadResults(Interface):
     """I am returned by upload() methods. I contain a number of public
-    attributes which can be read to determine the results of the upload. Some
+    attributes that can be read to determine the results of the upload. Some
     of these are functional, some are timing information. All of these may be
     None.
 
hunk ./src/allmydata/interfaces.py 1965
 
 class IDownloadResults(Interface):
     """I am created internally by download() methods. I contain a number of
-    public attributes which contain details about the download process.::
+    public attributes that contain details about the download process.::
 
      .file_size : the size of the file, in bytes
      .servers_used : set of server peerids that were used during download
hunk ./src/allmydata/interfaces.py 1991
 class IUploader(Interface):
     def upload(uploadable):
         """Upload the file. 'uploadable' must impement IUploadable. This
-        returns a Deferred which fires with an IUploadResults instance, from
+        returns a Deferred that fires with an IUploadResults instance, from
         which the URI of the file can be obtained as results.uri ."""
 
     def upload_ssk(write_capability, new_version, uploadable):
hunk ./src/allmydata/interfaces.py 2041
         kind of lease that is obtained (which account number to claim, etc).
 
         TODO: any problems seen during checking will be reported to the
-        health-manager.furl, a centralized object which is responsible for
+        health-manager.furl, a centralized object that is responsible for
         figuring out why files are unhealthy so corrective action can be
         taken.
         """
hunk ./src/allmydata/interfaces.py 2056
         will be put in the check-and-repair results. The Deferred will not
         fire until the repair is complete.
 
-        This returns a Deferred which fires with an instance of
+        This returns a Deferred that fires with an instance of
         ICheckAndRepairResults."""
 
 class IDeepCheckable(Interface):
hunk ./src/allmydata/interfaces.py 2141
                               that was found to be corrupt. Each share
                               locator is a list of (serverid, storage_index,
                               sharenum).
-         count-incompatible-shares: the number of shares which are of a share
+         count-incompatible-shares: the number of shares that are of a share
                                     format unknown to this checker
          list-incompatible-shares: a list of 'share locators', one for each
                                    share that was found to be of an unknown
hunk ./src/allmydata/interfaces.py 2148
                                    format. Each share locator is a list of
                                    (serverid, storage_index, sharenum).
         servers-responding: list of (binary) storage server identifiers,
-                             one for each server which responded to the share
+                             one for each server that responded to the share
                              query (even if they said they didn't have
                              shares, and even if they said they did have
                              shares but then didn't send them when asked, or
hunk ./src/allmydata/interfaces.py 2345
         will use the data in the checker results to guide the repair process,
         such as which servers provided bad data and should therefore be
         avoided. The ICheckResults object is inside the
-        ICheckAndRepairResults object, which is returned by the
+        ICheckAndRepairResults object that is returned by the
         ICheckable.check() method::
 
          d = filenode.check(repair=False)
hunk ./src/allmydata/interfaces.py 2436
         methods to create new objects. I return synchronously."""
 
     def create_mutable_file(contents=None, keysize=None):
-        """I create a new mutable file, and return a Deferred which will fire
+        """I create a new mutable file, and return a Deferred that will fire
         with the IMutableFileNode instance when it is ready. If contents= is
         provided (a bytestring), it will be used as the initial contents of
         the new file, otherwise the file will contain zero bytes. keysize= is
hunk ./src/allmydata/interfaces.py 2444
         usual."""
 
     def create_new_mutable_directory(initial_children={}):
-        """I create a new mutable directory, and return a Deferred which will
+        """I create a new mutable directory, and return a Deferred that will
         fire with the IDirectoryNode instance when it is ready. If
         initial_children= is provided (a dict mapping unicode child name to
         (childnode, metadata_dict) tuples), the directory will be populated
hunk ./src/allmydata/interfaces.py 2452
 
 class IClientStatus(Interface):
     def list_all_uploads():
-        """Return a list of uploader objects, one for each upload which
+        """Return a list of uploader objects, one for each upload that
         currently has an object available (tracked with weakrefs). This is
         intended for debugging purposes."""
     def list_active_uploads():
hunk ./src/allmydata/interfaces.py 2462
         started uploads."""
 
     def list_all_downloads():
-        """Return a list of downloader objects, one for each download which
+        """Return a list of downloader objects, one for each download that
         currently has an object available (tracked with weakrefs). This is
         intended for debugging purposes."""
     def list_active_downloads():
hunk ./src/allmydata/interfaces.py 2689
 
     def provide(provider=RIStatsProvider, nickname=str):
         """
-        @param provider: a stats collector instance which should be polled
+        @param provider: a stats collector instance that should be polled
                          periodically by the gatherer to collect stats.
         @param nickname: a name useful to identify the provided client
         """
hunk ./src/allmydata/interfaces.py 2722
 
 class IValidatedThingProxy(Interface):
     def start():
-        """ Acquire a thing and validate it. Return a deferred which is
+        """ Acquire a thing and validate it. Return a deferred that is
         eventually fired with self if the thing is valid or errbacked if it
         can't be acquired or validated."""
 
}
[Pluggable backends -- new and moved files, changes to moved files. refs #999
david-sarah@jacaranda.org**20110919232926
 Ignore-this: ec5d2d1362a092d919e84327d3092424
] {
adddir ./src/allmydata/storage/backends
adddir ./src/allmydata/storage/backends/disk
move ./src/allmydata/storage/immutable.py ./src/allmydata/storage/backends/disk/immutable.py
move ./src/allmydata/storage/mutable.py ./src/allmydata/storage/backends/disk/mutable.py
adddir ./src/allmydata/storage/backends/null
addfile ./src/allmydata/storage/backends/__init__.py
addfile ./src/allmydata/storage/backends/base.py
hunk ./src/allmydata/storage/backends/base.py 1
+
+from twisted.application import service
+
+from allmydata.storage.common import si_b2a
+from allmydata.storage.lease import LeaseInfo
+from allmydata.storage.bucket import BucketReader
+
+
+class Backend(service.MultiService):
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+
+class ShareSet(object):
+    """
+    This class implements shareset logic that could work for all backends, but
+    might be useful to override for efficiency.
+    """
+
+    def __init__(self, storageindex):
+        self.storageindex = storageindex
+
+    def get_storage_index(self):
+        return self.storageindex
+
+    def get_storage_index_string(self):
+        return si_b2a(self.storageindex)
+
+    def renew_lease(self, renew_secret, new_expiration_time):
+        found_shares = False
+        for share in self.get_shares():
+            found_shares = True
+            share.renew_lease(renew_secret, new_expiration_time)
+
+        if not found_shares:
+            raise IndexError("no such lease to renew")
+
+    def get_leases(self):
+        # Since all shares get the same lease data, we just grab the leases
+        # from the first share.
+        try:
+            sf = self.get_shares().next()
+            return sf.get_leases()
+        except StopIteration:
+            return iter([])
+
+    def add_or_renew_lease(self, lease_info):
+        # This implementation assumes that lease data is duplicated in
+        # all shares of a shareset, which might not be true for all backends.
+        for share in self.get_shares():
+            share.add_or_renew_lease(lease_info)
+
+    def make_bucket_reader(self, storageserver, share):
+        return BucketReader(storageserver, share)
+
+    def testv_and_readv_and_writev(self, storageserver, secrets,
+                                   test_and_write_vectors, read_vector,
+                                   expiration_time):
+        # The implementation here depends on the following helper methods,
+        # which must be provided by subclasses:
+        #
+        # def _clean_up_after_unlink(self):
+        #     """clean up resources associated with the shareset after some
+        #     shares might have been deleted"""
+        #
+        # def _create_mutable_share(self, storageserver, shnum, write_enabler):
+        #     """create a mutable share with the given shnum and write_enabler"""
+
+        # secrets might be a triple with cancel_secret in secrets[2], but if
+        # so we ignore the cancel_secret.
+        write_enabler = secrets[0]
+        renew_secret = secrets[1]
+
+        si_s = self.get_storage_index_string()
+        shares = {}
+        for share in self.get_shares():
+            # XXX is it correct to ignore immutable shares? Maybe get_shares should
+            # have a parameter saying what type it's expecting.
+            if share.sharetype == "mutable":
+                share.check_write_enabler(write_enabler, si_s)
+                shares[share.get_shnum()] = share
+
+        # write_enabler is good for all existing shares
+
+        # now evaluate test vectors
+        testv_is_good = True
+        for sharenum in test_and_write_vectors:
+            (testv, datav, new_length) = test_and_write_vectors[sharenum]
+            if sharenum in shares:
+                if not shares[sharenum].check_testv(testv):
+                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
+                    testv_is_good = False
+                    break
+            else:
+                # compare the vectors against an empty share, in which all
+                # reads return empty strings
+                if not EmptyShare().check_testv(testv):
+                    self.log("testv failed (empty): [%d] %r" % (sharenum,
+                                                                testv))
+                    testv_is_good = False
+                    break
+
+        # gather the read vectors, before we do any writes
+        read_data = {}
+        for shnum, share in shares.items():
+            read_data[shnum] = share.readv(read_vector)
+
+        ownerid = 1 # TODO
+        lease_info = LeaseInfo(ownerid, renew_secret,
+                               expiration_time, storageserver.get_serverid())
+
+        if testv_is_good:
+            # now apply the write vectors
+            for shnum in test_and_write_vectors:
+                (testv, datav, new_length) = test_and_write_vectors[shnum]
+                if new_length == 0:
+                    if shnum in shares:
+                        shares[shnum].unlink()
+                else:
+                    if shnum not in shares:
+                        # allocate a new share
+                        share = self._create_mutable_share(storageserver, shnum, write_enabler)
+                        shares[shnum] = share
+                    shares[shnum].writev(datav, new_length)
+                    # and update the lease
+                    shares[shnum].add_or_renew_lease(lease_info)
+
+            if new_length == 0:
+                self._clean_up_after_unlink()
+
+        return (testv_is_good, read_data)
+
+    def readv(self, wanted_shnums, read_vector):
+        """
+        Read a vector from the numbered shares in this shareset. An empty
+        shares list means to return data from all known shares.
+
+        @param wanted_shnums=ListOf(int)
+        @param read_vector=ReadVector
+        @return DictOf(int, ReadData): shnum -> results, with one key per share
+        """
+        datavs = {}
+        for share in self.get_shares():
+            shnum = share.get_shnum()
+            if not wanted_shnums or shnum in wanted_shnums:
+                datavs[shnum] = share.readv(read_vector)
+
+        return datavs
+
+
+def testv_compare(a, op, b):
+    assert op in ("lt", "le", "eq", "ne", "ge", "gt")
+    if op == "lt":
+        return a < b
+    if op == "le":
+        return a <= b
+    if op == "eq":
+        return a == b
+    if op == "ne":
+        return a != b
+    if op == "ge":
+        return a >= b
+    if op == "gt":
+        return a > b
+    # never reached
+
+
+class EmptyShare:
+    def check_testv(self, testv):
+        test_good = True
+        for (offset, length, operator, specimen) in testv:
+            data = ""
+            if not testv_compare(data, operator, specimen):
+                test_good = False
+                break
+        return test_good
+
addfile ./src/allmydata/storage/backends/disk/__init__.py
addfile ./src/allmydata/storage/backends/disk/disk_backend.py
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 1
+
+import re
+
+from twisted.python.filepath import UnlistableError
+
+from zope.interface import implements
+from allmydata.interfaces import IStorageBackend, IShareSet
+from allmydata.util import fileutil, log, time_format
+from allmydata.storage.common import si_b2a, si_a2b
+from allmydata.storage.bucket import BucketWriter
+from allmydata.storage.backends.base import Backend, ShareSet
+from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
+from allmydata.storage.backends.disk.mutable import MutableDiskShare, create_mutable_disk_share
+
+# storage/
+# storage/shares/incoming
+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
+# storage/shares/$START/$STORAGEINDEX
+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
+
+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
+# base-32 chars).
+# $SHARENUM matches this regex:
+NUM_RE=re.compile("^[0-9]+$")
+
+
+def si_si2dir(startfp, storageindex):
+    sia = si_b2a(storageindex)
+    newfp = startfp.child(sia[:2])
+    return newfp.child(sia)
+
+
+def get_share(fp):
+    f = fp.open('rb')
+    try:
+        prefix = f.read(32)
+    finally:
+        f.close()
+
+    if prefix == MutableDiskShare.MAGIC:
+        return MutableDiskShare(fp)
+    else:
+        # assume it's immutable
+        return ImmutableDiskShare(fp)
+
+
+class DiskBackend(Backend):
+    implements(IStorageBackend)
+
+    def __init__(self, storedir, readonly=False, reserved_space=0, discard_storage=False):
+        Backend.__init__(self)
+        self._setup_storage(storedir, readonly, reserved_space, discard_storage)
+        self._setup_corruption_advisory()
+
+    def _setup_storage(self, storedir, readonly, reserved_space, discard_storage):
+        self._storedir = storedir
+        self._readonly = readonly
+        self._reserved_space = int(reserved_space)
+        self._discard_storage = discard_storage
+        self._sharedir = self._storedir.child("shares")
+        fileutil.fp_make_dirs(self._sharedir)
+        self._incomingdir = self._sharedir.child('incoming')
+        self._clean_incomplete()
+        if self._reserved_space and (self.get_available_space() is None):
+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                    umid="0wZ27w", level=log.UNUSUAL)
+
+    def _clean_incomplete(self):
+        fileutil.fp_remove(self._incomingdir)
+        fileutil.fp_make_dirs(self._incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self._corruption_advisory_dir = self._storedir.child("corruption-advisories")
+
+    def _make_shareset(self, sharehomedir):
+        return self.get_shareset(si_a2b(sharehomedir.basename()))
+
+    def get_sharesets_for_prefix(self, prefix):
+        prefixfp = self._sharedir.child(prefix)
+        try:
+            sharesets = map(self._make_shareset, prefixfp.children())
+            def _by_base32si(b):
+                return b.get_storage_index_string()
+            sharesets.sort(key=_by_base32si)
+        except EnvironmentError:
+            sharesets = []
+        return sharesets
+
+    def get_shareset(self, storageindex):
+        sharehomedir = si_si2dir(self._sharedir, storageindex)
+        incominghomedir = si_si2dir(self._incomingdir, storageindex)
+        return DiskShareSet(storageindex, sharehomedir, incominghomedir, discard_storage=self._discard_storage)
+
+    def fill_in_space_stats(self, stats):
+        stats['storage_server.reserved_space'] = self._reserved_space
+        try:
+            disk = fileutil.get_disk_stats(self._sharedir, self._reserved_space)
+            writeable = disk['avail'] > 0
+
+            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
+            stats['storage_server.disk_total'] = disk['total']
+            stats['storage_server.disk_used'] = disk['used']
+            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
+            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
+            stats['storage_server.disk_avail'] = disk['avail']
+        except AttributeError:
+            writeable = True
+        except EnvironmentError:
+            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
+            writeable = False
+
+        if self._readonly:
+            stats['storage_server.disk_avail'] = 0
+            writeable = False
+
+        stats['storage_server.accepting_immutable_shares'] = int(writeable)
+
+    def get_available_space(self):
+        if self._readonly:
+            return 0
+        return fileutil.get_available_space(self._sharedir, self._reserved_space)
+
+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
+        fileutil.fp_make_dirs(self._corruption_advisory_dir)
+        now = time_format.iso_utc(sep="T")
+        si_s = si_b2a(storageindex)
+
+        # Windows can't handle colons in the filename.
+        name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "")
+        f = self._corruption_advisory_dir.child(name).open("w")
+        try:
+            f.write("report: Share Corruption\n")
+            f.write("type: %s\n" % sharetype)
+            f.write("storage_index: %s\n" % si_s)
+            f.write("share_number: %d\n" % shnum)
+            f.write("\n")
+            f.write(reason)
+            f.write("\n")
+        finally:
+            f.close()
+
+        log.msg(format=("client claims corruption in (%(share_type)s) " +
+                        "%(si)s-%(shnum)d: %(reason)s"),
+                share_type=sharetype, si=si_s, shnum=shnum, reason=reason,
+                level=log.SCARY, umid="SGx2fA")
+
+
+class DiskShareSet(ShareSet):
+    implements(IShareSet)
+
+    def __init__(self, storageindex, sharehomedir, incominghomedir=None, discard_storage=False):
+        ShareSet.__init__(self, storageindex)
+        self._sharehomedir = sharehomedir
+        self._incominghomedir = incominghomedir
+        self._discard_storage = discard_storage
+
+    def get_overhead(self):
+        return (fileutil.get_disk_usage(self._sharehomedir) +
+                fileutil.get_disk_usage(self._incominghomedir))
+
+    def get_shares(self):
+        """
+        Generate IStorageBackendShare objects for shares we have for this storage index.
+        ("Shares we have" means completed ones, excluding incoming ones.)
+        """
+        try:
+            for fp in self._sharehomedir.children():
+                shnumstr = fp.basename()
+                if not NUM_RE.match(shnumstr):
+                    continue
+                sharehome = self._sharehomedir.child(shnumstr)
+                yield self.get_share(sharehome)
+        except UnlistableError:
+            # There is no shares directory at all.
+            pass
+
+    def has_incoming(self, shnum):
+        if self._incominghomedir is None:
+            return False
+        return self._incominghomedir.child(str(shnum)).exists()
+
+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
+        sharehome = self._sharehomedir.child(str(shnum))
+        incominghome = self._incominghomedir.child(str(shnum))
+        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
+                                   max_size=max_space_per_bucket, create=True)
+        bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
+        if self._discard_storage:
+            bw.throw_out_all_data = True
+        return bw
+
+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
+        fileutil.fp_make_dirs(self._sharehomedir)
+        sharehome = self._sharehomedir.child(str(shnum))
+        serverid = storageserver.get_serverid()
+        return create_mutable_disk_share(sharehome, serverid, write_enabler, storageserver)
+
+    def _clean_up_after_unlink(self):
+        fileutil.fp_rmdir_if_empty(self._sharehomedir)
+
hunk ./src/allmydata/storage/backends/disk/immutable.py 1
-import os, stat, struct, time
 
hunk ./src/allmydata/storage/backends/disk/immutable.py 2
-from foolscap.api import Referenceable
+import struct
 
 from zope.interface import implements
hunk ./src/allmydata/storage/backends/disk/immutable.py 5
-from allmydata.interfaces import RIBucketWriter, RIBucketReader
-from allmydata.util import base32, fileutil, log
+
+from allmydata.interfaces import IStoredShare
+from allmydata.util import fileutil
 from allmydata.util.assertutil import precondition
hunk ./src/allmydata/storage/backends/disk/immutable.py 9
+from allmydata.util.fileutil import fp_make_dirs
 from allmydata.util.hashutil import constant_time_compare
hunk ./src/allmydata/storage/backends/disk/immutable.py 11
+from allmydata.util.encodingutil import quote_filepath
+from allmydata.storage.common import si_b2a, UnknownImmutableContainerVersionError, DataTooLargeError
 from allmydata.storage.lease import LeaseInfo
hunk ./src/allmydata/storage/backends/disk/immutable.py 14
-from allmydata.storage.common import UnknownImmutableContainerVersionError, \
-     DataTooLargeError
+
 
 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
 # and share data. The share data is accessed by RIBucketWriter.write and
hunk ./src/allmydata/storage/backends/disk/immutable.py 41
 # then the value stored in this field will be the actual share data length
 # modulo 2**32.
 
-class ShareFile:
-    LEASE_SIZE = struct.calcsize(">L32s32sL")
+class ImmutableDiskShare(object):
+    implements(IStoredShare)
+
     sharetype = "immutable"
hunk ./src/allmydata/storage/backends/disk/immutable.py 45
+    LEASE_SIZE = struct.calcsize(">L32s32sL")
+
 
hunk ./src/allmydata/storage/backends/disk/immutable.py 48
-    def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
hunk ./src/allmydata/storage/backends/disk/immutable.py 53
-        self.home = filename
+        self._storageindex = storageindex
         self._max_size = max_size
hunk ./src/allmydata/storage/backends/disk/immutable.py 55
+        self._incominghome = incominghome
+        self._home = finalhome
+        self._shnum = shnum
         if create:
             # touch the file, so later callers will see that we're working on
             # it. Also construct the metadata.
hunk ./src/allmydata/storage/backends/disk/immutable.py 61
-            assert not os.path.exists(self.home)
-            fileutil.make_dirs(os.path.dirname(self.home))
-            f = open(self.home, 'wb')
+            assert not finalhome.exists()
+            fp_make_dirs(self._incominghome.parent())
             # The second field -- the four-byte share data length -- is no
             # longer used as of Tahoe v1.3.0, but we continue to write it in
             # there in case someone downgrades a storage server from >=
hunk ./src/allmydata/storage/backends/disk/immutable.py 72
             # the largest length that can fit into the field. That way, even
             # if this does happen, the old < v1.3.0 server will still allow
             # clients to read the first part of the share.
-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
-            f.close()
+            self._incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
            self._lease_offset = max_size + 0x0c
             self._num_leases = 0
         else:
hunk ./src/allmydata/storage/backends/disk/immutable.py 76
-            f = open(self.home, 'rb')
-            filesize = os.path.getsize(self.home)
-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-            f.close()
+            f = self._home.open(mode='rb')
+            try:
+                (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            finally:
+                f.close()
+            filesize = self._home.getsize()
             if version != 1:
                 msg = "sharefile %s had version %d but we wanted 1" % \
-                      (filename, version)
+                      (self._home, version)
                 raise UnknownImmutableContainerVersionError(msg)
             self._num_leases = num_leases
             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
hunk ./src/allmydata/storage/backends/disk/immutable.py 90
         self._data_offset = 0xc
 
+    def __repr__(self):
+        return ("<ImmutableDiskShare %s:%r at %s>"
+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
+
+    def close(self):
+        fileutil.fp_make_dirs(self._home.parent())
+        self._incominghome.moveTo(self._home)
+        try:
+            # self._incominghome is like storage/shares/incoming/ab/abcde/4 .
+            # We try to delete the parent (.../ab/abcde) to avoid leaving
+            # these directories lying around forever, but the delete might
+            # fail if we're working on another share for the same storage
+            # index (like ab/abcde/5). The alternative approach would be to
+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
+            # ShareWriter), each of which is responsible for a single
+            # directory on disk, and have them use reference counting of
+            # their children to know when they should do the rmdir. This
+            # approach is simpler, but relies on os.rmdir refusing to delete
+            # a non-empty directory. Do *not* use fileutil.fp_remove() here!
+            fileutil.fp_rmdir_if_empty(self._incominghome.parent())
+            # we also delete the grandparent (prefix) directory, .../ab ,
+            # again to avoid leaving directories lying around. This might
+            # fail if there is another bucket open that shares a prefix (like
+            # ab/abfff).
+            fileutil.fp_rmdir_if_empty(self._incominghome.parent().parent())
+            # we leave the great-grandparent (incoming/) directory in place.
+        except EnvironmentError:
+            # ignore the "can't rmdir because the directory is not empty"
+            # exceptions, those are normal consequences of the
+            # above-mentioned conditions.
+            pass
+        pass
+
+    def get_used_space(self):
+        return (fileutil.get_used_space(self._home) +
+                fileutil.get_used_space(self._incominghome))
+
+    def get_storage_index(self):
+        return self._storageindex
+
+    def get_shnum(self):
+        return self._shnum
+
     def unlink(self):
hunk ./src/allmydata/storage/backends/disk/immutable.py 134
-        os.unlink(self.home)
+        self._home.remove()
+
+    def get_size(self):
+        return self._home.getsize()
+
+    def get_data_length(self):
+        return self._lease_offset - self._data_offset
+
+    #def readv(self, read_vector):
+    #    ...
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
hunk ./src/allmydata/storage/backends/disk/immutable.py 147
-        # reads beyond the end of the data are truncated. Reads that start
+
+        # Reads beyond the end of the data are truncated. Reads that start
         # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         actuallength = max(0, min(length, self._lease_offset-seekpos))
hunk ./src/allmydata/storage/backends/disk/immutable.py 154
         if actuallength == 0:
             return ""
-        f = open(self.home, 'rb')
-        f.seek(seekpos)
-        return f.read(actuallength)
+        f = self._home.open(mode='rb')
+        try:
+            f.seek(seekpos)
+            sharedata = f.read(actuallength)
+        finally:
+            f.close()
+        return sharedata
 
     def write_share_data(self, offset, data):
         length = len(data)
hunk ./src/allmydata/storage/backends/disk/immutable.py 167
         precondition(offset >= 0, offset)
         if self._max_size is not None and offset+length > self._max_size:
             raise DataTooLargeError(self._max_size, offset, length)
-        f = open(self.home, 'rb+')
-        real_offset = self._data_offset+offset
-        f.seek(real_offset)
-        assert f.tell() == real_offset
-        f.write(data)
-        f.close()
+        f = self._incominghome.open(mode='rb+')
+        try:
+            real_offset = self._data_offset+offset
+            f.seek(real_offset)
+            assert f.tell() == real_offset
+            f.write(data)
+        finally:
+            f.close()
 
     def _write_lease_record(self, f, lease_number, lease_info):
         offset = self._lease_offset + lease_number * self.LEASE_SIZE
hunk ./src/allmydata/storage/backends/disk/immutable.py 184
 
     def _read_num_leases(self, f):
         f.seek(0x08)
-        (num_leases,) = struct.unpack(">L", f.read(4))
+        ro = f.read(4)
+        (num_leases,) = struct.unpack(">L", ro)
         return num_leases
 
     def _write_num_leases(self, f, num_leases):
hunk ./src/allmydata/storage/backends/disk/immutable.py 195
     def _truncate_leases(self, f, num_leases):
         f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
 
+    # These lease operations are intended for use by disk_backend.py.
+    # Other clients should not depend on the fact that the disk backend
+    # stores leases in share files.
+
     def get_leases(self):
         """Yields a LeaseInfo instance for all leases."""
hunk ./src/allmydata/storage/backends/disk/immutable.py 201
-        f = open(self.home, 'rb')
-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-        f.seek(self._lease_offset)
-        for i in range(num_leases):
-            data = f.read(self.LEASE_SIZE)
-            if data:
-                yield LeaseInfo().from_immutable_data(data)
+        f = self._home.open(mode='rb')
+        try:
+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            f.seek(self._lease_offset)
+            for i in range(num_leases):
+                data = f.read(self.LEASE_SIZE)
+                if data:
+                    yield LeaseInfo().from_immutable_data(data)
+        finally:
+            f.close()
 
     def add_lease(self, lease_info):
hunk ./src/allmydata/storage/backends/disk/immutable.py 213
-        f = open(self.home, 'rb+')
-        num_leases = self._read_num_leases(f)
-        self._write_lease_record(f, num_leases, lease_info)
-        self._write_num_leases(f, num_leases+1)
-        f.close()
+        f = self._incominghome.open(mode='rb')
+        try:
+            num_leases = self._read_num_leases(f)
+        finally:
+            f.close()
+        f = self._home.open(mode='wb+')
+        try:
+            self._write_lease_record(f, num_leases, lease_info)
+            self._write_num_leases(f, num_leases+1)
+        finally:
+            f.close()
 
     def renew_lease(self, renew_secret, new_expire_time):
hunk ./src/allmydata/storage/backends/disk/immutable.py 226
-        for i,lease in enumerate(self.get_leases()):
-            if constant_time_compare(lease.renew_secret, renew_secret):
-                # yup. See if we need to update the owner time.
-                if new_expire_time > lease.expiration_time:
-                    # yes
-                    lease.expiration_time = new_expire_time
-                    f = open(self.home, 'rb+')
-                    self._write_lease_record(f, i, lease)
-                    f.close()
-                return
+        try:
+            for i, lease in enumerate(self.get_leases()):
+                if constant_time_compare(lease.renew_secret, renew_secret):
+                    # yup. See if we need to update the owner time.
+                    if new_expire_time > lease.expiration_time:
+                        # yes
+                        lease.expiration_time = new_expire_time
+                        f = self._home.open('rb+')
+                        try:
+                            self._write_lease_record(f, i, lease)
+                        finally:
+                            f.close()
+                    return
+        except IndexError, e:
+            raise Exception("IndexError: %s" % (e,))
         raise IndexError("unable to renew non-existent lease")
 
     def add_or_renew_lease(self, lease_info):
hunk ./src/allmydata/storage/backends/disk/immutable.py 249
                              lease_info.expiration_time)
         except IndexError:
             self.add_lease(lease_info)
-
-
-    def cancel_lease(self, cancel_secret):
-        """Remove a lease with the given cancel_secret. If the last lease is
-        cancelled, the file will be removed. Return the number of bytes that
-        were freed (by truncating the list of leases, and possibly by
-        deleting the file. Raise IndexError if there was no lease with the
-        given cancel_secret.
-        """
-
-        leases = list(self.get_leases())
-        num_leases_removed = 0
-        for i,lease in enumerate(leases):
-            if constant_time_compare(lease.cancel_secret, cancel_secret):
-                leases[i] = None
-                num_leases_removed += 1
-        if not num_leases_removed:
-            raise IndexError("unable to find matching lease to cancel")
-        if num_leases_removed:
-            # pack and write out the remaining leases. We write these out in
-            # the same order as they were added, so that if we crash while
-            # doing this, we won't lose any non-cancelled leases.
-            leases = [l for l in leases if l] # remove the cancelled leases
-            f = open(self.home, 'rb+')
-            for i,lease in enumerate(leases):
-                self._write_lease_record(f, i, lease)
-            self._write_num_leases(f, len(leases))
-            self._truncate_leases(f, len(leases))
-            f.close()
-        space_freed = self.LEASE_SIZE * num_leases_removed
-        if not len(leases):
-            space_freed += os.stat(self.home)[stat.ST_SIZE]
-            self.unlink()
-        return space_freed
-
-
-class BucketWriter(Referenceable):
-    implements(RIBucketWriter)
-
-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
-        self.ss = ss
-        self.incominghome = incominghome
-        self.finalhome = finalhome
-        self._max_size = max_size # don't allow the client to write more than this
-        self._canary = canary
-        self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
-        self.closed = False
-        self.throw_out_all_data = False
-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
-        # also, add our lease to the file now, so that other ones can be
-        # added by simultaneous uploaders
-        self._sharefile.add_lease(lease_info)
-
-    def allocated_size(self):
-        return self._max_size
-
-    def remote_write(self, offset, data):
-        start = time.time()
-        precondition(not self.closed)
-        if self.throw_out_all_data:
-            return
-        self._sharefile.write_share_data(offset, data)
-        self.ss.add_latency("write", time.time() - start)
-        self.ss.count("write")
-
-    def remote_close(self):
-        precondition(not self.closed)
-        start = time.time()
-
-        fileutil.make_dirs(os.path.dirname(self.finalhome))
-        fileutil.rename(self.incominghome, self.finalhome)
-        try:
-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
-            # We try to delete the parent (.../ab/abcde) to avoid leaving
-            # these directories lying around forever, but the delete might
-            # fail if we're working on another share for the same storage
-            # index (like ab/abcde/5). The alternative approach would be to
-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
-            # ShareWriter), each of which is responsible for a single
-            # directory on disk, and have them use reference counting of
-            # their children to know when they should do the rmdir. This
-            # approach is simpler, but relies on os.rmdir refusing to delete
-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
-            os.rmdir(os.path.dirname(self.incominghome))
-            # we also delete the grandparent (prefix) directory, .../ab ,
-            # again to avoid leaving directories lying around. This might
-            # fail if there is another bucket open that shares a prefix (like
-            # ab/abfff).
-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
-            # we leave the great-grandparent (incoming/) directory in place.
-        except EnvironmentError:
-            # ignore the "can't rmdir because the directory is not empty"
-            # exceptions, those are normal consequences of the
-            # above-mentioned conditions.
-            pass
-        self._sharefile = None
-        self.closed = True
-        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
-
-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
-        self.ss.bucket_writer_closed(self, filelen)
-        self.ss.add_latency("close", time.time() - start)
-        self.ss.count("close")
-
-    def _disconnected(self):
-        if not self.closed:
-            self._abort()
-
-    def remote_abort(self):
-        log.msg("storage: aborting sharefile %s" % self.incominghome,
-                facility="tahoe.storage", level=log.UNUSUAL)
-        if not self.closed:
-            self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
-        self._abort()
-        self.ss.count("abort")
-
-    def _abort(self):
-        if self.closed:
-            return
-
-        os.remove(self.incominghome)
-        # if we were the last share to be moved, remove the incoming/
-        # directory that was our parent
-        parentdir = os.path.split(self.incominghome)[0]
-        if not os.listdir(parentdir):
-            os.rmdir(parentdir)
-        self._sharefile = None
-
-        # We are now considered closed for further writing. We must tell
-        # the storage server about this so that it stops expecting us to
-        # use the space it allocated for us earlier.
-        self.closed = True
-        self.ss.bucket_writer_closed(self, 0)
-
-
-class BucketReader(Referenceable):
-    implements(RIBucketReader)
-
-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
-        self.ss = ss
-        self._share_file = ShareFile(sharefname)
-        self.storage_index = storage_index
-        self.shnum = shnum
-
-    def __repr__(self):
1283-        return "<%s %s %s>" % (self.__class__.__name__,
1284-                               base32.b2a_l(self.storage_index[:8], 60),
1285-                               self.shnum)
1286-
1287-    def remote_read(self, offset, length):
1288-        start = time.time()
1289-        data = self._share_file.read_share_data(offset, length)
1290-        self.ss.add_latency("read", time.time() - start)
1291-        self.ss.count("read")
1292-        return data
1293-
1294-    def remote_advise_corrupt_share(self, reason):
1295-        return self.ss.remote_advise_corrupt_share("immutable",
1296-                                                   self.storage_index,
1297-                                                   self.shnum,
1298-                                                   reason)
1299hunk ./src/allmydata/storage/backends/disk/mutable.py 1
1300-import os, stat, struct
1301 
1302hunk ./src/allmydata/storage/backends/disk/mutable.py 2
1303-from allmydata.interfaces import BadWriteEnablerError
1304-from allmydata.util import idlib, log
1305+import struct
1306+
1307+from zope.interface import implements
1308+
1309+from allmydata.interfaces import IStoredMutableShare, BadWriteEnablerError
1310+from allmydata.util import fileutil, idlib, log
1311 from allmydata.util.assertutil import precondition
1312 from allmydata.util.hashutil import constant_time_compare
1313hunk ./src/allmydata/storage/backends/disk/mutable.py 10
1314-from allmydata.storage.lease import LeaseInfo
1315-from allmydata.storage.common import UnknownMutableContainerVersionError, \
1316+from allmydata.util.encodingutil import quote_filepath
1317+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
1318      DataTooLargeError
1319hunk ./src/allmydata/storage/backends/disk/mutable.py 13
1320+from allmydata.storage.lease import LeaseInfo
1321+from allmydata.storage.backends.base import testv_compare
1322 
1323hunk ./src/allmydata/storage/backends/disk/mutable.py 16
1324-# the MutableShareFile is like the ShareFile, but used for mutable data. It
1325-# has a different layout. See docs/mutable.txt for more details.
1326+
1327+# The MutableDiskShare is like the ImmutableDiskShare, but used for mutable data.
1328+# It has a different layout. See docs/mutable.rst for more details.
1329 
1330 # #   offset    size    name
1331 # 1   0         32      magic verstr "tahoe mutable container v1" plus binary
1332hunk ./src/allmydata/storage/backends/disk/mutable.py 31
1333 #                        4    4   expiration timestamp
1334 #                        8   32   renewal token
1335 #                        40  32   cancel token
1336-#                        72  20   nodeid which accepted the tokens
1337+#                        72  20   nodeid that accepted the tokens
1338 # 7   468       (a)     data
1339 # 8   ??        4       count of extra leases
1340 # 9   ??        n*92    extra leases
1341hunk ./src/allmydata/storage/backends/disk/mutable.py 37
1342 
1343 
1344-# The struct module doc says that L's are 4 bytes in size., and that Q's are
1345+# The struct module doc says that L's are 4 bytes in size, and that Q's are
1346 # 8 bytes in size. Since compatibility depends upon this, double-check it.
1347 assert struct.calcsize(">L") == 4, struct.calcsize(">L")
1348 assert struct.calcsize(">Q") == 8, struct.calcsize(">Q")
1349hunk ./src/allmydata/storage/backends/disk/mutable.py 42
1350 
1351-class MutableShareFile:
1352+
1353+class MutableDiskShare(object):
1354+    implements(IStoredMutableShare)
1355 
1356     sharetype = "mutable"
1357     DATA_LENGTH_OFFSET = struct.calcsize(">32s20s32s")
1358hunk ./src/allmydata/storage/backends/disk/mutable.py 54
1359     assert LEASE_SIZE == 92
1360     DATA_OFFSET = HEADER_SIZE + 4*LEASE_SIZE
1361     assert DATA_OFFSET == 468, DATA_OFFSET
1362+
1363     # our sharefiles share with a recognizable string, plus some random
1364     # binary data to reduce the chance that a regular text file will look
1365     # like a sharefile.
1366hunk ./src/allmydata/storage/backends/disk/mutable.py 63
1367     MAX_SIZE = 2*1000*1000*1000 # 2GB, kind of arbitrary
1368     # TODO: decide upon a policy for max share size
1369 
1370-    def __init__(self, filename, parent=None):
1371-        self.home = filename
1372-        if os.path.exists(self.home):
1373+    def __init__(self, storageindex, shnum, home, parent=None):
1374+        self._storageindex = storageindex
1375+        self._shnum = shnum
1376+        self._home = home
1377+        if self._home.exists():
1378             # we don't cache anything, just check the magic
1379hunk ./src/allmydata/storage/backends/disk/mutable.py 69
1380-            f = open(self.home, 'rb')
1381-            data = f.read(self.HEADER_SIZE)
1382-            (magic,
1383-             write_enabler_nodeid, write_enabler,
1384-             data_length, extra_least_offset) = \
1385-             struct.unpack(">32s20s32sQQ", data)
1386-            if magic != self.MAGIC:
1387-                msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
1388-                      (filename, magic, self.MAGIC)
1389-                raise UnknownMutableContainerVersionError(msg)
1390+            f = self._home.open('rb')
1391+            try:
1392+                data = f.read(self.HEADER_SIZE)
1393+                (magic,
1394+                 write_enabler_nodeid, write_enabler,
1395+                 data_length, extra_lease_offset) = \
1396+                 struct.unpack(">32s20s32sQQ", data)
1397+                if magic != self.MAGIC:
1398+                    msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
1399+                          (quote_filepath(self._home), magic, self.MAGIC)
1400+                    raise UnknownMutableContainerVersionError(msg)
1401+            finally:
1402+                f.close()
1403         self.parent = parent # for logging
1404 
1405     def log(self, *args, **kwargs):
1406hunk ./src/allmydata/storage/backends/disk/mutable.py 87
1407         return self.parent.log(*args, **kwargs)
1408 
1409-    def create(self, my_nodeid, write_enabler):
1410-        assert not os.path.exists(self.home)
1411+    def create(self, serverid, write_enabler):
1412+        assert not self._home.exists()
1413         data_length = 0
1414         extra_lease_offset = (self.HEADER_SIZE
1415                               + 4 * self.LEASE_SIZE
1416hunk ./src/allmydata/storage/backends/disk/mutable.py 95
1417                               + data_length)
1418         assert extra_lease_offset == self.DATA_OFFSET # true at creation
1419         num_extra_leases = 0
1420-        f = open(self.home, 'wb')
1421-        header = struct.pack(">32s20s32sQQ",
1422-                             self.MAGIC, my_nodeid, write_enabler,
1423-                             data_length, extra_lease_offset,
1424-                             )
1425-        leases = ("\x00"*self.LEASE_SIZE) * 4
1426-        f.write(header + leases)
1427-        # data goes here, empty after creation
1428-        f.write(struct.pack(">L", num_extra_leases))
1429-        # extra leases go here, none at creation
1430-        f.close()
1431+        f = self._home.open('wb')
1432+        try:
1433+            header = struct.pack(">32s20s32sQQ",
1434+                                 self.MAGIC, serverid, write_enabler,
1435+                                 data_length, extra_lease_offset,
1436+                                 )
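+            # Four empty lease slots are preallocated between the header and
+            # the data (DATA_OFFSET == HEADER_SIZE + 4*LEASE_SIZE).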
1437+            leases = ("\x00"*self.LEASE_SIZE) * 4
1438+            f.write(header + leases)
1439+            # data goes here, empty after creation
1440+            f.write(struct.pack(">L", num_extra_leases))
1441+            # extra leases go here, none at creation
1442+        finally:
1443+            f.close()
1444+
1445+    def __repr__(self):
1446+        return ("<MutableDiskShare %s:%r at %s>"
1447+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
1448+
1449+    def get_used_space(self):
1450+        return fileutil.get_used_space(self._home)
1451+
1452+    def get_storage_index(self):
1453+        return self._storageindex
1454+
1455+    def get_shnum(self):
1456+        return self._shnum
1457 
1458     def unlink(self):
1459hunk ./src/allmydata/storage/backends/disk/mutable.py 123
1460-        os.unlink(self.home)
1461+        self._home.remove()
1462 
1463     def _read_data_length(self, f):
1464         f.seek(self.DATA_LENGTH_OFFSET)
1465hunk ./src/allmydata/storage/backends/disk/mutable.py 291
1466 
1467     def get_leases(self):
1468         """Yields a LeaseInfo instance for all leases."""
1469-        f = open(self.home, 'rb')
1470-        for i, lease in self._enumerate_leases(f):
1471-            yield lease
1472-        f.close()
1473+        f = self._home.open('rb')
1474+        try:
1475+            for i, lease in self._enumerate_leases(f):
1476+                yield lease
1477+        finally:
1478+            f.close()
1479 
1480     def _enumerate_leases(self, f):
1481         for i in range(self._get_num_lease_slots(f)):
1482hunk ./src/allmydata/storage/backends/disk/mutable.py 303
1483             try:
1484                 data = self._read_lease_record(f, i)
1485                 if data is not None:
1486-                    yield i,data
1487+                    yield i, data
1488             except IndexError:
1489                 return
1490 
1491hunk ./src/allmydata/storage/backends/disk/mutable.py 307
1492+    # These lease operations are intended for use by disk_backend.py.
1493+    # Other non-test clients should not depend on the fact that the disk
1494+    # backend stores leases in share files.
1495+
1496     def add_lease(self, lease_info):
1497         precondition(lease_info.owner_num != 0) # 0 means "no lease here"
1498hunk ./src/allmydata/storage/backends/disk/mutable.py 313
1499-        f = open(self.home, 'rb+')
1500-        num_lease_slots = self._get_num_lease_slots(f)
1501-        empty_slot = self._get_first_empty_lease_slot(f)
1502-        if empty_slot is not None:
1503-            self._write_lease_record(f, empty_slot, lease_info)
1504-        else:
1505-            self._write_lease_record(f, num_lease_slots, lease_info)
1506-        f.close()
1507+        f = self._home.open('rb+')
1508+        try:
1509+            num_lease_slots = self._get_num_lease_slots(f)
1510+            empty_slot = self._get_first_empty_lease_slot(f)
1511+            if empty_slot is not None:
1512+                self._write_lease_record(f, empty_slot, lease_info)
1513+            else:
1514+                self._write_lease_record(f, num_lease_slots, lease_info)
1515+        finally:
1516+            f.close()
1517 
1518     def renew_lease(self, renew_secret, new_expire_time):
1519         accepting_nodeids = set()
1520hunk ./src/allmydata/storage/backends/disk/mutable.py 326
1521-        f = open(self.home, 'rb+')
1522-        for (leasenum,lease) in self._enumerate_leases(f):
1523-            if constant_time_compare(lease.renew_secret, renew_secret):
1524-                # yup. See if we need to update the owner time.
1525-                if new_expire_time > lease.expiration_time:
1526-                    # yes
1527-                    lease.expiration_time = new_expire_time
1528-                    self._write_lease_record(f, leasenum, lease)
1529-                f.close()
1530-                return
1531-            accepting_nodeids.add(lease.nodeid)
1532-        f.close()
1533+        f = self._home.open('rb+')
1534+        try:
1535+            for (leasenum, lease) in self._enumerate_leases(f):
1536+                if constant_time_compare(lease.renew_secret, renew_secret):
1537+                    # yup. See if we need to update the owner time.
1538+                    if new_expire_time > lease.expiration_time:
1539+                        # yes
1540+                        lease.expiration_time = new_expire_time
1541+                        self._write_lease_record(f, leasenum, lease)
1542+                    return
1543+                accepting_nodeids.add(lease.nodeid)
1544+        finally:
1545+            f.close()
1546         # Return the accepting_nodeids set, to give the client a chance to
1547hunk ./src/allmydata/storage/backends/disk/mutable.py 340
1548-        # update the leases on a share which has been migrated from its
1549+        # update the leases on a share that has been migrated from its
1550         # original server to a new one.
1551         msg = ("Unable to renew non-existent lease. I have leases accepted by"
1552                " nodeids: ")
1553hunk ./src/allmydata/storage/backends/disk/mutable.py 357
1554         except IndexError:
1555             self.add_lease(lease_info)
1556 
1557-    def cancel_lease(self, cancel_secret):
1558-        """Remove any leases with the given cancel_secret. If the last lease
1559-        is cancelled, the file will be removed. Return the number of bytes
1560-        that were freed (by truncating the list of leases, and possibly by
1561-        deleting the file. Raise IndexError if there was no lease with the
1562-        given cancel_secret."""
1563-
1564-        accepting_nodeids = set()
1565-        modified = 0
1566-        remaining = 0
1567-        blank_lease = LeaseInfo(owner_num=0,
1568-                                renew_secret="\x00"*32,
1569-                                cancel_secret="\x00"*32,
1570-                                expiration_time=0,
1571-                                nodeid="\x00"*20)
1572-        f = open(self.home, 'rb+')
1573-        for (leasenum,lease) in self._enumerate_leases(f):
1574-            accepting_nodeids.add(lease.nodeid)
1575-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1576-                self._write_lease_record(f, leasenum, blank_lease)
1577-                modified += 1
1578-            else:
1579-                remaining += 1
1580-        if modified:
1581-            freed_space = self._pack_leases(f)
1582-            f.close()
1583-            if not remaining:
1584-                freed_space += os.stat(self.home)[stat.ST_SIZE]
1585-                self.unlink()
1586-            return freed_space
1587-
1588-        msg = ("Unable to cancel non-existent lease. I have leases "
1589-               "accepted by nodeids: ")
1590-        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
1591-                         for anid in accepting_nodeids])
1592-        msg += " ."
1593-        raise IndexError(msg)
1594-
1595-    def _pack_leases(self, f):
1596-        # TODO: reclaim space from cancelled leases
1597-        return 0
1598-
1599     def _read_write_enabler_and_nodeid(self, f):
1600         f.seek(0)
1601         data = f.read(self.HEADER_SIZE)
1602hunk ./src/allmydata/storage/backends/disk/mutable.py 369
1603 
1604     def readv(self, readv):
1605         datav = []
1606-        f = open(self.home, 'rb')
1607-        for (offset, length) in readv:
1608-            datav.append(self._read_share_data(f, offset, length))
1609-        f.close()
1610+        f = self._home.open('rb')
1611+        try:
1612+            for (offset, length) in readv:
1613+                datav.append(self._read_share_data(f, offset, length))
1614+        finally:
1615+            f.close()
1616         return datav
1617 
1618hunk ./src/allmydata/storage/backends/disk/mutable.py 377
1619-#    def remote_get_length(self):
1620-#        f = open(self.home, 'rb')
1621-#        data_length = self._read_data_length(f)
1622-#        f.close()
1623-#        return data_length
1624+    def get_size(self):
1625+        return self._home.getsize()
1626+
1627+    def get_data_length(self):
1628+        f = self._home.open('rb')
1629+        try:
1630+            data_length = self._read_data_length(f)
1631+        finally:
1632+            f.close()
1633+        return data_length
1634 
1635     def check_write_enabler(self, write_enabler, si_s):
1636hunk ./src/allmydata/storage/backends/disk/mutable.py 389
1637-        f = open(self.home, 'rb+')
1638-        (real_write_enabler, write_enabler_nodeid) = \
1639-                             self._read_write_enabler_and_nodeid(f)
1640-        f.close()
1641+        f = self._home.open('rb+')
1642+        try:
1643+            (real_write_enabler, write_enabler_nodeid) = self._read_write_enabler_and_nodeid(f)
1644+        finally:
1645+            f.close()
1646         # avoid a timing attack
1647         #if write_enabler != real_write_enabler:
1648         if not constant_time_compare(write_enabler, real_write_enabler):
1649hunk ./src/allmydata/storage/backends/disk/mutable.py 410
1650 
1651     def check_testv(self, testv):
1652         test_good = True
1653-        f = open(self.home, 'rb+')
1654-        for (offset, length, operator, specimen) in testv:
1655-            data = self._read_share_data(f, offset, length)
1656-            if not testv_compare(data, operator, specimen):
1657-                test_good = False
1658-                break
1659-        f.close()
1660+        f = self._home.open('rb+')
1661+        try:
1662+            for (offset, length, operator, specimen) in testv:
1663+                data = self._read_share_data(f, offset, length)
1664+                if not testv_compare(data, operator, specimen):
1665+                    test_good = False
1666+                    break
1667+        finally:
1668+            f.close()
1669         return test_good
1670 
1671     def writev(self, datav, new_length):
1672hunk ./src/allmydata/storage/backends/disk/mutable.py 422
1673-        f = open(self.home, 'rb+')
1674-        for (offset, data) in datav:
1675-            self._write_share_data(f, offset, data)
1676-        if new_length is not None:
1677-            cur_length = self._read_data_length(f)
1678-            if new_length < cur_length:
1679-                self._write_data_length(f, new_length)
1680-                # TODO: if we're going to shrink the share file when the
1681-                # share data has shrunk, then call
1682-                # self._change_container_size() here.
1683-        f.close()
1684-
1685-def testv_compare(a, op, b):
1686-    assert op in ("lt", "le", "eq", "ne", "ge", "gt")
1687-    if op == "lt":
1688-        return a < b
1689-    if op == "le":
1690-        return a <= b
1691-    if op == "eq":
1692-        return a == b
1693-    if op == "ne":
1694-        return a != b
1695-    if op == "ge":
1696-        return a >= b
1697-    if op == "gt":
1698-        return a > b
1699-    # never reached
1700+        f = self._home.open('rb+')
1701+        try:
1702+            for (offset, data) in datav:
1703+                self._write_share_data(f, offset, data)
1704+            if new_length is not None:
1705+                cur_length = self._read_data_length(f)
1706+                if new_length < cur_length:
1707+                    self._write_data_length(f, new_length)
1708+                    # TODO: if we're going to shrink the share file when the
1709+                    # share data has shrunk, then call
1710+                    # self._change_container_size() here.
1711+        finally:
1712+            f.close()
1713 
1714hunk ./src/allmydata/storage/backends/disk/mutable.py 436
1715-class EmptyShare:
1716+    def close(self):
1717+        pass
1718 
1719hunk ./src/allmydata/storage/backends/disk/mutable.py 439
1720-    def check_testv(self, testv):
1721-        test_good = True
1722-        for (offset, length, operator, specimen) in testv:
1723-            data = ""
1724-            if not testv_compare(data, operator, specimen):
1725-                test_good = False
1726-                break
1727-        return test_good
1728 
1729hunk ./src/allmydata/storage/backends/disk/mutable.py 440
1730-def create_mutable_sharefile(filename, my_nodeid, write_enabler, parent):
1731-    ms = MutableShareFile(filename, parent)
1732-    ms.create(my_nodeid, write_enabler)
1733+def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent):
1734+    ms = MutableDiskShare(storageindex, shnum, fp, parent)
1735+    ms.create(serverid, write_enabler)
1736     del ms
1737hunk ./src/allmydata/storage/backends/disk/mutable.py 444
1738-    return MutableShareFile(filename, parent)
1739-
1740+    return MutableDiskShare(storageindex, shnum, fp, parent)
1741addfile ./src/allmydata/storage/backends/null/__init__.py
1742addfile ./src/allmydata/storage/backends/null/null_backend.py
1743hunk ./src/allmydata/storage/backends/null/null_backend.py 2
1744 
1745+import os, struct
1746+
1747+from zope.interface import implements
1748+
1749+from allmydata.interfaces import IStorageBackend, IShareSet, IStoredShare, IStoredMutableShare
1750+from allmydata.util.assertutil import precondition
1751+from allmydata.util.hashutil import constant_time_compare
1752+from allmydata.storage.backends.base import Backend, ShareSet
1753+from allmydata.storage.bucket import BucketWriter
1754+from allmydata.storage.common import si_b2a
1755+from allmydata.storage.lease import LeaseInfo
1756+
1757+
1758+class NullBackend(Backend):
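+    """ I am a backend that discards all shares. I exist for testing and as
+    executable documentation of the storage backend interface. """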
1759+    implements(IStorageBackend)
1760+
1761+    def __init__(self):
1762+        Backend.__init__(self)
1763+
1764+    def get_available_space(self, reserved_space):
1765+        return None
1766+
1767+    def get_sharesets_for_prefix(self, prefix):
1768+        pass
1769+
1770+    def get_shareset(self, storageindex):
1771+        return NullShareSet(storageindex)
1772+
1773+    def fill_in_space_stats(self, stats):
1774+        pass
1775+
1776+    def set_storage_server(self, ss):
1777+        self.ss = ss
1778+
1779+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
1780+        pass
1781+
1782+
1783+class NullShareSet(ShareSet):
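+    """ I am the shareset for the null backend; writes to my shares are
+    accepted, but the data is discarded. """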
1784+    implements(IShareSet)
1785+
1786+    def __init__(self, storageindex):
1787+        self.storageindex = storageindex
1788+
1789+    def get_overhead(self):
1790+        return 0
1791+
1792+    def get_incoming_shnums(self):
1793+        return frozenset()
1794+
1795+    def get_shares(self):
1796+        pass
1797+
1798+    def get_share(self, shnum):
1799+        return None
1800+
1801+    def get_storage_index(self):
1802+        return self.storageindex
1803+
1804+    def get_storage_index_string(self):
1805+        return si_b2a(self.storageindex)
1806+
1807+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
1808+        immutableshare = ImmutableNullShare()
1809+        return BucketWriter(storageserver, immutableshare, max_space_per_bucket, lease_info, canary)
1810+
1811+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
1812+        return MutableNullShare()
1813+
1814+    def _clean_up_after_unlink(self):
1815+        pass
1816+
1817+
1818+class ImmutableNullShare:
1819+    implements(IStoredShare)
1820+    sharetype = "immutable"
1821+
1822+    def __init__(self):
1823+        """ If max_size is not None then I won't allow more than
1824+        max_size to be written to me. If create=True then max_size
1825+        must not be None. """
1826+        pass
1827+
1828+    def get_shnum(self):
1829+        return self.shnum
1830+
1831+    def unlink(self):
1832+        os.unlink(self.fname)
1833+
1834+    def read_share_data(self, offset, length):
1835+        precondition(offset >= 0)
1836+        # Reads beyond the end of the data are truncated. Reads that start
1837+        # beyond the end of the data return an empty string.
1838+        seekpos = self._data_offset+offset
1839+        fsize = os.path.getsize(self.fname)
1840+        actuallength = max(0, min(length, fsize-seekpos)) # XXX #1528
1841+        if actuallength == 0:
1842+            return ""
1843+        f = open(self.fname, 'rb')
1844+        try:
+            f.seek(seekpos)
+            return f.read(actuallength)
+        finally:
+            f.close()
1846+
1847+    def write_share_data(self, offset, data):
1848+        pass
1849+
1850+    def _write_lease_record(self, f, lease_number, lease_info):
1851+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1852+        f.seek(offset)
1853+        assert f.tell() == offset
1854+        f.write(lease_info.to_immutable_data())
1855+
1856+    def _read_num_leases(self, f):
1857+        f.seek(0x08)
1858+        (num_leases,) = struct.unpack(">L", f.read(4))
1859+        return num_leases
1860+
1861+    def _write_num_leases(self, f, num_leases):
1862+        f.seek(0x08)
1863+        f.write(struct.pack(">L", num_leases))
1864+
1865+    def _truncate_leases(self, f, num_leases):
1866+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1867+
1868+    def get_leases(self):
1869+        """Yields a LeaseInfo instance for all leases."""
1870+        f = open(self.fname, 'rb')
1871+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1872+        f.seek(self._lease_offset)
1873+        for i in range(num_leases):
1874+            data = f.read(self.LEASE_SIZE)
1875+            if data:
1876+                yield LeaseInfo().from_immutable_data(data)
1877+
1878+    def add_lease(self, lease):
1879+        pass
1880+
1881+    def renew_lease(self, renew_secret, new_expire_time):
1882+        for i,lease in enumerate(self.get_leases()):
1883+            if constant_time_compare(lease.renew_secret, renew_secret):
1884+                # yup. See if we need to update the owner time.
1885+                if new_expire_time > lease.expiration_time:
1886+                    # yes
1887+                    lease.expiration_time = new_expire_time
1888+                    f = open(self.fname, 'rb+')
1889+                    self._write_lease_record(f, i, lease)
1890+                    f.close()
1891+                return
1892+        raise IndexError("unable to renew non-existent lease")
1893+
1894+    def add_or_renew_lease(self, lease_info):
1895+        try:
1896+            self.renew_lease(lease_info.renew_secret,
1897+                             lease_info.expiration_time)
1898+        except IndexError:
1899+            self.add_lease(lease_info)
1900+
1901+
1902+class MutableNullShare:
1903+    """ XXX: TODO """
1904+    implements(IStoredMutableShare)
1905+    sharetype = "mutable"
1907addfile ./src/allmydata/storage/bucket.py
1908hunk ./src/allmydata/storage/bucket.py 1
1909+
1910+import time
1911+
1912+from foolscap.api import Referenceable
1913+
1914+from zope.interface import implements
1915+from allmydata.interfaces import RIBucketWriter, RIBucketReader
1916+from allmydata.util import base32, log
1917+from allmydata.util.assertutil import precondition
1918+
1919+
1920+class BucketWriter(Referenceable):
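+    """ I write the data of a single immutable share to a backend-specific
+    share object, and add the client's lease to it when I am created. """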
1921+    implements(RIBucketWriter)
1922+
1923+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1924+        self.ss = ss
1925+        self._max_size = max_size # don't allow the client to write more than this
1926+        self._canary = canary
1927+        self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1928+        self.closed = False
1929+        self.throw_out_all_data = False
1930+        self._share = immutableshare
1931+        # also, add our lease to the file now, so that other ones can be
1932+        # added by simultaneous uploaders
1933+        self._share.add_lease(lease_info)
1934+
1935+    def allocated_size(self):
1936+        return self._max_size
1937+
1938+    def remote_write(self, offset, data):
1939+        start = time.time()
1940+        precondition(not self.closed)
1941+        if self.throw_out_all_data:
1942+            return
1943+        self._share.write_share_data(offset, data)
1944+        self.ss.add_latency("write", time.time() - start)
1945+        self.ss.count("write")
1946+
1947+    def remote_close(self):
1948+        precondition(not self.closed)
1949+        start = time.time()
1950+
1951+        self._share.close()
1952+        filelen = self._share.stat()
1953+        self._share = None
1954+
1955+        self.closed = True
1956+        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
1957+
1958+        self.ss.bucket_writer_closed(self, filelen)
1959+        self.ss.add_latency("close", time.time() - start)
1960+        self.ss.count("close")
1961+
1962+    def _disconnected(self):
1963+        if not self.closed:
1964+            self._abort()
1965+
1966+    def remote_abort(self):
1967+        log.msg("storage: aborting write to share %r" % self._share,
1968+                facility="tahoe.storage", level=log.UNUSUAL)
1969+        if not self.closed:
1970+            self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
1971+        self._abort()
1972+        self.ss.count("abort")
1973+
1974+    def _abort(self):
1975+        if self.closed:
1976+            return
1977+        self._share.unlink()
1978+        self._share = None
1979+
1980+        # We are now considered closed for further writing. We must tell
1981+        # the storage server about this so that it stops expecting us to
1982+        # use the space it allocated for us earlier.
1983+        self.closed = True
1984+        self.ss.bucket_writer_closed(self, 0)
1985+
1986+
1987+class BucketReader(Referenceable):
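+    """ I read the data of a single immutable share from a backend-specific
+    share object. """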
1988+    implements(RIBucketReader)
1989+
1990+    def __init__(self, ss, share):
1991+        self.ss = ss
1992+        self._share = share
1993+        self.storageindex = share.get_storage_index()
1994+        self.shnum = share.get_shnum()
1995+
1996+    def __repr__(self):
1997+        return "<%s %s %s>" % (self.__class__.__name__,
1998+                               base32.b2a_l(self.storageindex[:8], 60),
1999+                               self.shnum)
2000+
2001+    def remote_read(self, offset, length):
2002+        start = time.time()
2003+        data = self._share.read_share_data(offset, length)
2004+        self.ss.add_latency("read", time.time() - start)
2005+        self.ss.count("read")
2006+        return data
2007+
2008+    def remote_advise_corrupt_share(self, reason):
2009+        return self.ss.remote_advise_corrupt_share("immutable",
2010+                                                   self.storageindex,
2011+                                                   self.shnum,
2012+                                                   reason)
2013addfile ./src/allmydata/test/test_backends.py
2014hunk ./src/allmydata/test/test_backends.py 1
2015+import os, stat
2016+from twisted.trial import unittest
2017+from allmydata.util.log import msg
2018+from allmydata.test.common_util import ReallyEqualMixin
2019+import mock
2020+
2021+# This is the code that we're going to be testing.
2022+from allmydata.storage.server import StorageServer
2023+from allmydata.storage.backends.disk.disk_backend import DiskBackend, si_si2dir
2024+from allmydata.storage.backends.null.null_backend import NullBackend
2025+
2026+# The following share file content was generated with
2027+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2028+# with share data == 'a'. The total size of this input
2029+# is 85 bytes.
2030+shareversionnumber = '\x00\x00\x00\x01'
2031+sharedatalength = '\x00\x00\x00\x01'
2032+numberofleases = '\x00\x00\x00\x01'
2033+shareinputdata = 'a'
2034+ownernumber = '\x00\x00\x00\x00'
2035+renewsecret  = 'x'*32
2036+cancelsecret = 'y'*32
2037+expirationtime = '\x00(\xde\x80'
2038+nextlease = ''
2039+containerdata = shareversionnumber + sharedatalength + numberofleases
2040+client_data = shareinputdata + ownernumber + renewsecret + \
2041+    cancelsecret + expirationtime + nextlease
2042+share_data = containerdata + client_data
2043+testnodeid = 'testnodeidxxxxxxxxxx'
2044+
2045+
2046+class MockFileSystem(unittest.TestCase):
2047+    """ I simulate a filesystem that the code under test can use. I simulate
2048+    just the parts of the filesystem that the current implementation of the
2049+    disk backend needs. """
2050+    def setUp(self):
2051+        # Set up patchers and side effects for the disk-using functions.
2052+        msg( "%s.setUp()" % (self,))
2053+        self.mockedfilepaths = {}
2054+        # keys are pathnames, values are MockFilePath objects. This is necessary because
2055+        # MockFilePath behavior sometimes depends on the filesystem. Where it does,
2056+        # self.mockedfilepaths has the relevant information.
2057+        self.storedir = MockFilePath('teststoredir', self.mockedfilepaths)
2058+        self.basedir = self.storedir.child('shares')
2059+        self.baseincdir = self.basedir.child('incoming')
2060+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
2061+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
2062+        self.shareincomingname = self.sharedirincomingname.child('0')
2063+        self.sharefinalname = self.sharedirfinalname.child('0')
2064+
2065+        # FIXME: these patches won't work; disk_backend no longer imports FilePath, BucketCountingCrawler,
2066+        # or LeaseCheckingCrawler.
2067+
2068+        self.FilePathFake = mock.patch('allmydata.storage.backends.disk.disk_backend.FilePath', new = MockFilePath)
2069+        self.FilePathFake.__enter__()
2070+
2071+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.BucketCountingCrawler')
2072+        FakeBCC = self.BCountingCrawler.__enter__()
2073+        FakeBCC.side_effect = self.call_FakeBCC
2074+
2075+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.LeaseCheckingCrawler')
2076+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
2077+        FakeLCC.side_effect = self.call_FakeLCC
2078+
2079+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
2080+        GetSpace = self.get_available_space.__enter__()
2081+        GetSpace.side_effect = self.call_get_available_space
2082+
2083+        self.statforsize = mock.patch('allmydata.storage.backends.disk.core.filepath.stat')
2084+        getsize = self.statforsize.__enter__()
2085+        getsize.side_effect = self.call_statforsize
2086+
2087+    def call_FakeBCC(self, StateFile):
2088+        return MockBCC()
2089+
2090+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
2091+        return MockLCC()
2092+
2093+    def call_get_available_space(self, storedir, reservedspace):
2094+        # The test input share (share_data above) is 85 bytes in size.
2095+        return 85 - reservedspace
2096+
2097+    def call_statforsize(self, fakefpname):
2098+        return self.mockedfilepaths[fakefpname].fileobject.size()
2099+
2100+    def tearDown(self):
2101+        msg( "%s.tearDown()" % (self,))
2102+        self.FilePathFake.__exit__()
2103+        self.mockedfilepaths = {}
2104+
2105+
2106+class MockFilePath:
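+    """ I simulate a twisted.python.filepath.FilePath object, using the
+    shared mockedfilepaths dictionary in place of a real filesystem. """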
2107+    def __init__(self, pathstring, ffpathsenvironment, existence=False):
2108+        #  I can't just make the values MockFileObjects because they may be directories.
2109+        self.mockedfilepaths = ffpathsenvironment
2110+        self.path = pathstring
2111+        self.existence = existence
2112+        if not self.mockedfilepaths.has_key(self.path):
2113+            #  The first MockFilePath object is special
2114+            self.mockedfilepaths[self.path] = self
2115+            self.fileobject = None
2116+        else:
2117+            self.fileobject = self.mockedfilepaths[self.path].fileobject
2118+        self.spawn = {}
2119+        self.antecedent = os.path.dirname(self.path)
2120+
2121+    def setContent(self, contentstring):
2122+        # This method rewrites the data in the file that corresponds to its path
2123+        # name whether it preexisted or not.
2124+        self.fileobject = MockFileObject(contentstring)
2125+        self.existence = True
2126+        self.mockedfilepaths[self.path].fileobject = self.fileobject
2127+        self.mockedfilepaths[self.path].existence = self.existence
2128+        self.setparents()
2129+
2130+    def create(self):
2131+        # This method chokes if there's a pre-existing file!
2132+        if self.mockedfilepaths[self.path].fileobject:
2133+            raise OSError
2134+        else:
2135+            self.existence = True
2136+            self.mockedfilepaths[self.path].fileobject = self.fileobject
2137+            self.mockedfilepaths[self.path].existence = self.existence
2138+            self.setparents()
2139+
2140+    def open(self, mode='r'):
2141+        # XXX Makes no use of mode.
2142+        if not self.mockedfilepaths[self.path].fileobject:
2143+            # If there's no fileobject there already then make one and put it there.
2144+            self.fileobject = MockFileObject()
2145+            self.existence = True
2146+            self.mockedfilepaths[self.path].fileobject = self.fileobject
2147+            self.mockedfilepaths[self.path].existence = self.existence
2148+        else:
2149+            # Otherwise get a ref to it.
2150+            self.fileobject = self.mockedfilepaths[self.path].fileobject
2151+            self.existence = self.mockedfilepaths[self.path].existence
2152+        return self.fileobject.open(mode)
2153+
2154+    def child(self, childstring):
2155+        arg2child = os.path.join(self.path, childstring)
2156+        child = MockFilePath(arg2child, self.mockedfilepaths)
2157+        return child
2158+
2159+    def children(self):
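+        # Note: this returns every existing path that starts with self.path,
+        # i.e. all descendants, not just the immediate children that a real
+        # FilePath.children() would return.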
2160+        childrenfromffs = [ffp for ffp in self.mockedfilepaths.values() if ffp.path.startswith(self.path)]
2161+        childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)]
2162+        childrenfromffs = [ffp for ffp in childrenfromffs if ffp.exists()]
2163+        self.spawn = frozenset(childrenfromffs)
2164+        return self.spawn
2165+
2166+    def parent(self):
2167+        if self.mockedfilepaths.has_key(self.antecedent):
2168+            parent = self.mockedfilepaths[self.antecedent]
2169+        else:
2170+            parent = MockFilePath(self.antecedent, self.mockedfilepaths)
2171+        return parent
2172+
2173+    def parents(self):
2174+        antecedents = []
2175+        def f(fps, antecedents):
2176+            newfps = os.path.split(fps)[0]
2177+            if newfps:
2178+                antecedents.append(newfps)
2179+                f(newfps, antecedents)
2180+        f(self.path, antecedents)
2181+        return antecedents
2182+
2183+    def setparents(self):
2184+        for fps in self.parents():
2185+            if not self.mockedfilepaths.has_key(fps):
2186+                self.mockedfilepaths[fps] = MockFilePath(fps, self.mockedfilepaths, existence=True)
2187+
2188+    def basename(self):
2189+        return os.path.split(self.path)[1]
2190+
2191+    def moveTo(self, newffp):
2192+        #  XXX Makes no distinction between file and directory arguments; this is a deviation from FilePath.moveTo.
2193+        if self.mockedfilepaths[newffp.path].exists():
2194+            raise OSError
2195+        else:
2196+            self.mockedfilepaths[newffp.path] = self
2197+            self.path = newffp.path
2198+
2199+    def getsize(self):
2200+        return self.fileobject.getsize()
2201+
2202+    def exists(self):
2203+        return self.existence
2204+
2205+    def isdir(self):
2206+        return True
2207+
2208+    def makedirs(self):
2209+        # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
2210+        pass
2211+
2212+    def remove(self):
2213+        pass
2214+
2215+
2216+class MockFileObject:
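+    """ I simulate a file object, supporting just the write/read/seek/tell/
+    close subset of the file API that the code under test uses. """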
2217+    def __init__(self, contentstring=''):
2218+        self.buffer = contentstring
2219+        self.pos = 0
2220+    def open(self, mode='r'):
2221+        return self
2222+    def write(self, instring):
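+        # Emulate file semantics: writing past the current end zero-pads the
+        # gap, and overlapping writes replace existing bytes in place.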
2223+        begin = self.pos
2224+        padlen = begin - len(self.buffer)
2225+        if padlen > 0:
2226+            self.buffer += '\x00' * padlen
2227+        end = self.pos + len(instring)
2228+        self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
2229+        self.pos = end
2230+    def close(self):
2231+        self.pos = 0
2232+    def seek(self, pos):
2233+        self.pos = pos
2234+    def read(self, numberbytes):
2235+        return self.buffer[self.pos:self.pos+numberbytes]
2236+    def tell(self):
2237+        return self.pos
2238+    def size(self):
2239+        # XXX This method A: Is not to be found in a real file B: Is part of a wild-mung-up of filepath.stat!
2240+        # XXX Finally we shall hopefully use a getsize method soon, must consult first though.
2241+        # Hmmm...  perhaps we need to sometimes stat the address when there's not a mockfileobject present?
2242+        return {stat.ST_SIZE:len(self.buffer)}
2243+    def getsize(self):
2244+        return len(self.buffer)
2245+
2246+class MockBCC:
2247+    def setServiceParent(self, Parent):
2248+        pass
2249+
2250+
2251+class MockLCC:
2252+    def setServiceParent(self, Parent):
2253+        pass
2254+
2255+
2256+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
2257+    """ NullBackend is just for testing and executable documentation, so
2258+    this test is actually a test of StorageServer in which we're using
2259+    NullBackend as helper code for the test, rather than a test of
2260+    NullBackend. """
2261+    def setUp(self):
2262+        self.ss = StorageServer(testnodeid, NullBackend())
2263+
2264+    @mock.patch('os.mkdir')
2265+    @mock.patch('__builtin__.open')
2266+    @mock.patch('os.listdir')
2267+    @mock.patch('os.path.isdir')
2268+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2269+        """
2270+        Write a new share. This tests that StorageServer's remote_allocate_buckets
2271+        generates the correct return types when given test-vector arguments. That
2272+        bs is of the correct type is verified by attempting to invoke remote_write
2273+        on bs[0].
2274+        """
2275+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2276+        bs[0].remote_write(0, 'a')
2277+        self.failIf(mockisdir.called)
2278+        self.failIf(mocklistdir.called)
2279+        self.failIf(mockopen.called)
2280+        self.failIf(mockmkdir.called)
2281+
2282+
2283+class TestServerConstruction(MockFileSystem, ReallyEqualMixin):
2284+    def test_create_server_disk_backend(self):
2285+        """ This tests whether a server instance can be constructed with a
2286+        disk backend. To pass the test, it mustn't use the filesystem
2287+        outside of its configured storedir. """
2288+        StorageServer(testnodeid, DiskBackend(self.storedir))
2289+
2290+
2291+class TestServerAndDiskBackend(MockFileSystem, ReallyEqualMixin):
2292+    """ This tests both the StorageServer and the Disk backend together. """
2293+    def setUp(self):
2294+        MockFileSystem.setUp(self)
2295+        try:
2296+            self.backend = DiskBackend(self.storedir)
2297+            self.ss = StorageServer(testnodeid, self.backend)
2298+
2299+            self.backendwithreserve = DiskBackend(self.storedir, reserved_space=1)
2300+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
2301+        except:
2302+            MockFileSystem.tearDown(self)
2303+            raise
2304+
2305+    @mock.patch('time.time')
2306+    @mock.patch('allmydata.util.fileutil.get_available_space')
2307+    def test_out_of_space(self, mockget_available_space, mocktime):
2308+        mocktime.return_value = 0
2309+
2310+        def call_get_available_space(dir, reserve):
2311+            return 0
2312+
2313+        mockget_available_space.side_effect = call_get_available_space
2314+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2315+        self.failUnlessReallyEqual(bsc, {})
2316+
2317+    @mock.patch('time.time')
2318+    def test_write_and_read_share(self, mocktime):
2319+        """
2320+        Write a new share, read it, and test the server's (and disk backend's)
2321+        handling of simultaneous and successive attempts to write the same
2322+        share.
2323+        """
2324+        mocktime.return_value = 0
2325+        # Inspect incoming and fail unless it's empty.
2326+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
2327+
2328+        self.failUnlessReallyEqual(incomingset, frozenset())
2329+
2330+        # Populate incoming with sharenum 0.
2331+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
2332+
2333+        # This is a transparent-box test: inspect incoming and fail unless sharenum 0 is listed there.
2334+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
2335+
2336+
2338+        # Attempt to create a second share writer with the same sharenum.
2339+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
2340+
2341+        # Show that no sharewriter results from a remote_allocate_buckets
2342+        # with the same si and sharenum, until BucketWriter.remote_close()
2343+        # has been called.
2344+        self.failIf(bsa)
2345+
2346+        # Test allocated size.
2347+        spaceint = self.ss.allocated_size()
2348+        self.failUnlessReallyEqual(spaceint, 1)
2349+
2350+        # Write 'a' to shnum 0. Only tested together with close and read.
2351+        bs[0].remote_write(0, 'a')
2352+
2353+        # Preclose: Inspect final, failUnless nothing there.
2354+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
2355+        bs[0].remote_close()
2356+
2357+        # Postclose: (Omnibus) failUnless written data is in final.
2358+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
2359+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
2360+        contents = sharesinfinal[0].read_share_data(0, 73)
2361+        self.failUnlessReallyEqual(contents, client_data)
2362+
2363+        # Exercise the case that the share we're asking to allocate is
2364+        # already (completely) uploaded.
2365+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2366+
2367+
2368+    def test_read_old_share(self):
2369+        """ This tests whether the code correctly finds and reads
2370+        shares written out by old (Tahoe-LAFS <= v1.8.2)
2371+        servers. There is a similar test in test_download, but that one
2372+        is from the perspective of the client and exercises a deeper
2373+        stack of code. This one is for exercising just the
2374+        StorageServer object. """
2375+        # Construct a file with the appropriate contents in the mock filesystem.
2376+        datalen = len(share_data)
2377+        finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(0))
2378+        finalhome.setContent(share_data)
2379+
2380+        # Now begin the test.
2381+        bs = self.ss.remote_get_buckets('teststorage_index')
2382+
2383+        self.failUnlessEqual(len(bs), 1)
2384+        b = bs['0']
2385+        # These should match by definition; the next two cases cover behavior that is not completely unambiguous.
2386+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
2387+        # If you try to read past the end, you get as much data as is there.
2388+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
2389+        # If you start reading past the end of the file you get the empty string.
2390+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2391}
2392[Pluggable backends -- all other changes. refs #999
2393david-sarah@jacaranda.org**20110919233256
2394 Ignore-this: 1a77b6b5d178b32a9b914b699ba7e957
2395] {
2396hunk ./src/allmydata/client.py 245
2397             sharetypes.append("immutable")
2398         if self.get_config("storage", "expire.mutable", True, boolean=True):
2399             sharetypes.append("mutable")
2400-        expiration_sharetypes = tuple(sharetypes)
2401 
2402hunk ./src/allmydata/client.py 246
2403+        expiration_policy = {
2404+            'enabled': expire,
2405+            'mode': mode,
2406+            'override_lease_duration': o_l_d,
2407+            'cutoff_date': cutoff_date,
2408+            'sharetypes': tuple(sharetypes),
2409+        }
2410         ss = StorageServer(storedir, self.nodeid,
2411                            reserved_space=reserved,
2412                            discard_storage=discard,
2413hunk ./src/allmydata/client.py 258
2414                            readonly_storage=readonly,
2415                            stats_provider=self.stats_provider,
2416-                           expiration_enabled=expire,
2417-                           expiration_mode=mode,
2418-                           expiration_override_lease_duration=o_l_d,
2419-                           expiration_cutoff_date=cutoff_date,
2420-                           expiration_sharetypes=expiration_sharetypes)
2421+                           expiration_policy=expiration_policy)
2422         self.add_service(ss)
2423 
2424         d = self.when_tub_ready()
2425hunk ./src/allmydata/immutable/offloaded.py 306
2426         if os.path.exists(self._encoding_file):
2427             self.log("ciphertext already present, bypassing fetch",
2428                      level=log.UNUSUAL)
2429+            # XXX the following comment is probably stale, since
2430+            # LocalCiphertextReader.get_plaintext_hashtree_leaves does not exist.
2431+            #
2432             # we'll still need the plaintext hashes (when
2433             # LocalCiphertextReader.get_plaintext_hashtree_leaves() is
2434             # called), and currently the easiest way to get them is to ask
2435hunk ./src/allmydata/immutable/upload.py 765
2436             self._status.set_progress(1, progress)
2437         return cryptdata
2438 
2439-
2440     def get_plaintext_hashtree_leaves(self, first, last, num_segments):
2441hunk ./src/allmydata/immutable/upload.py 766
2442+        """OBSOLETE; Get the leaf nodes of a merkle hash tree over the
2443+        plaintext segments, i.e. get the tagged hashes of the given segments.
2444+        The segment size is expected to be generated by the
2445+        IEncryptedUploadable before any plaintext is read or ciphertext
2446+        produced, so that the segment hashes can be generated with only a
2447+        single pass.
2448+
2449+        This returns a Deferred that fires with a sequence of hashes, using:
2450+
2451+         tuple(segment_hashes[first:last])
2452+
2453+        'num_segments' is used to assert that the number of segments that the
2454+        IEncryptedUploadable handled matches the number of segments that the
2455+        encoder was expecting.
2456+
2457+        This method must not be called until the final byte has been read
2458+        from read_encrypted(). Once this method is called, read_encrypted()
2459+        can never be called again.
2460+        """
2461         # this is currently unused, but will live again when we fix #453
2462         if len(self._plaintext_segment_hashes) < num_segments:
2463             # close out the last one
2464hunk ./src/allmydata/immutable/upload.py 803
2465         return defer.succeed(tuple(self._plaintext_segment_hashes[first:last]))
2466 
2467     def get_plaintext_hash(self):
2468+        """OBSOLETE; Get the hash of the whole plaintext.
2469+
2470+        This returns a Deferred that fires with a tagged SHA-256 hash of the
2471+        whole plaintext, obtained from hashutil.plaintext_hash(data).
2472+        """
2473+        # this is currently unused, but will live again when we fix #453
2474         h = self._plaintext_hasher.digest()
2475         return defer.succeed(h)
2476 
2477hunk ./src/allmydata/interfaces.py 29
2478 Number = IntegerConstraint(8) # 2**(8*8) == 16EiB ~= 18e18 ~= 18 exabytes
2479 Offset = Number
2480 ReadSize = int # the 'int' constraint is 2**31 == 2Gib -- large files are processed in not-so-large increments
2481-WriteEnablerSecret = Hash # used to protect mutable bucket modifications
2482-LeaseRenewSecret = Hash # used to protect bucket lease renewal requests
2483-LeaseCancelSecret = Hash # used to protect bucket lease cancellation requests
2484+WriteEnablerSecret = Hash # used to protect mutable share modifications
2485+LeaseRenewSecret = Hash # used to protect lease renewal requests
2486+LeaseCancelSecret = Hash # used to protect lease cancellation requests
2487 
2488 class RIStubClient(RemoteInterface):
2489     """Each client publishes a service announcement for a dummy object called
2490hunk ./src/allmydata/interfaces.py 106
2491                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2492                          allocated_size=Offset, canary=Referenceable):
2493         """
2494-        @param storage_index: the index of the bucket to be created or
2495+        @param storage_index: the index of the shareset to be created or
2496                               increfed.
2497         @param sharenums: these are the share numbers (probably between 0 and
2498                           99) that the sender is proposing to store on this
2499hunk ./src/allmydata/interfaces.py 111
2500                           server.
2501-        @param renew_secret: This is the secret used to protect bucket refresh
2502+        @param renew_secret: This is the secret used to protect lease renewal.
2503                              This secret is generated by the client and
2504                              stored for later comparison by the server. Each
2505                              server is given a different secret.
2506hunk ./src/allmydata/interfaces.py 115
2507-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2508-        @param canary: If the canary is lost before close(), the bucket is
2509+        @param cancel_secret: ignored
2510+        @param canary: If the canary is lost before close(), the allocation is
2511                        deleted.
2512         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2513                  already have and allocated is what we hereby agree to accept.
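For illustration, a sketch of the client-side call, assuming `rref` is a
foolscap RemoteReference to this server and the three secrets have already
been derived for it:

    d = rref.callRemote("allocate_buckets", storage_index,
                        renew_secret, cancel_secret,
                        set([0, 1, 2]),     # sharenums
                        allocated_size, canary)
    def _got(res):
        alreadygot, bucketwriters = res
        # alreadygot: set of share numbers this server already holds;
        # bucketwriters: dict of shnum -> RIBucketWriter for the rest
        return bucketwriters
    d.addCallback(_got)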
2514hunk ./src/allmydata/interfaces.py 129
2515                   renew_secret=LeaseRenewSecret,
2516                   cancel_secret=LeaseCancelSecret):
2517         """
2518-        Add a new lease on the given bucket. If the renew_secret matches an
2519+        Add a new lease on the given shareset. If the renew_secret matches an
2520         existing lease, that lease will be renewed instead. If there is no
2521hunk ./src/allmydata/interfaces.py 131
2522-        bucket for the given storage_index, return silently. (note that in
2523+        shareset for the given storage_index, return silently. (Note that in
2524         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2525hunk ./src/allmydata/interfaces.py 133
2526-        bucket)
2527+        shareset.)
2528         """
2529         return Any() # returns None now, but future versions might change
2530 
2531hunk ./src/allmydata/interfaces.py 139
2532     def renew_lease(storage_index=StorageIndex, renew_secret=LeaseRenewSecret):
2533         """
2534-        Renew the lease on a given bucket, resetting the timer to 31 days.
2535-        Some networks will use this, some will not. If there is no bucket for
2536+        Renew the lease on a given shareset, resetting the timer to 31 days.
2537+        Some networks will use this, some will not. If there is no shareset for
2538         the given storage_index, IndexError will be raised.
2539 
2540         For mutable shares, if the given renew_secret does not match an
2541hunk ./src/allmydata/interfaces.py 146
2542         existing lease, IndexError will be raised with a note listing the
2543         server-nodeids on the existing leases, so leases on migrated shares
2544-        can be renewed or cancelled. For immutable shares, IndexError
2545-        (without the note) will be raised.
2546+        can be renewed. For immutable shares, IndexError (without the note)
2547+        will be raised.
2548         """
2549         return Any()
2550 
2551hunk ./src/allmydata/interfaces.py 154
2552     def get_buckets(storage_index=StorageIndex):
2553         return DictOf(int, RIBucketReader, maxKeys=MAX_BUCKETS)
2554 
2555-
2556-
2557     def slot_readv(storage_index=StorageIndex,
2558                    shares=ListOf(int), readv=ReadVector):
2559         """Read a vector from the numbered shares associated with the given
2560hunk ./src/allmydata/interfaces.py 163
2561 
2562     def slot_testv_and_readv_and_writev(storage_index=StorageIndex,
2563                                         secrets=TupleOf(WriteEnablerSecret,
2564-                                                        LeaseRenewSecret,
2565-                                                        LeaseCancelSecret),
2566+                                                        LeaseRenewSecret),
2567                                         tw_vectors=TestAndWriteVectorsForShares,
2568                                         r_vector=ReadVector,
2569                                         ):
2570hunk ./src/allmydata/interfaces.py 167
2571-        """General-purpose test-and-set operation for mutable slots. Perform
2572-        a bunch of comparisons against the existing shares. If they all pass,
2573-        then apply a bunch of write vectors to those shares. Then use the
2574-        read vectors to extract data from all the shares and return the data.
2575+        """
2576+        General-purpose atomic test-read-and-set operation for mutable slots.
2577+        Perform a bunch of comparisons against the existing shares. If they
2578+        all pass: use the read vectors to extract data from all the shares,
2579+        then apply a bunch of write vectors to those shares. Return the read
2580+        data, which does not include any modifications made by the writes.
2581 
2582         This method is, um, large. The goal is to allow clients to update all
2583         the shares associated with a mutable file in a single round trip.
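As a sketch of the call and vector formats (the concrete offsets and data
values here are assumptions for illustration):

    secrets = (write_enabler, renew_secret)
    testv = [(0, 32, "eq", expected_bytes)]   # (offset, length, operator, specimen)
    writev = [(0, replacement_bytes)]         # (offset, data)
    tw_vectors = {0: (testv, writev, None)}   # shnum -> (testv, writev, new_length)
    read_vector = [(0, 32)]
    d = rref.callRemote("slot_testv_and_readv_and_writev",
                        storage_index, secrets, tw_vectors, read_vector)
    # fires with (wrote, read_data): wrote is a bool, and read_data maps
    # shnum -> list of strings, one per read_vector entry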
2584hunk ./src/allmydata/interfaces.py 177
2585 
2586-        @param storage_index: the index of the bucket to be created or
2587+        @param storage_index: the index of the shareset to be created or
2588                               increfed.
2589         @param write_enabler: a secret that is stored along with the slot.
2590                               Writes are accepted from any caller who can
2591hunk ./src/allmydata/interfaces.py 183
2592                               present the matching secret. A different secret
2593                               should be used for each slot*server pair.
2594-        @param renew_secret: This is the secret used to protect bucket refresh
2595+        @param renew_secret: This is the secret used to protect lease renewal.
2596                              This secret is generated by the client and
2597                              stored for later comparison by the server. Each
2598                              server is given a different secret.
2599hunk ./src/allmydata/interfaces.py 187
2600-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2601+        @param cancel_secret: ignored
2602 
2603hunk ./src/allmydata/interfaces.py 189
2604-        The 'secrets' argument is a tuple of (write_enabler, renew_secret,
2605-        cancel_secret). The first is required to perform any write. The
2606-        latter two are used when allocating new shares. To simply acquire a
2607-        new lease on existing shares, use an empty testv and an empty writev.
2608+        The 'secrets' argument is a tuple with (write_enabler, renew_secret).
2609+        The write_enabler is required to perform any write. The renew_secret
2610+        is used when allocating new shares.
2611 
2612         Each share can have a separate test vector (i.e. a list of
2613         comparisons to perform). If all vectors for all shares pass, then all
2614hunk ./src/allmydata/interfaces.py 280
2615         store that on disk.
2616         """
2617 
2618-class IStorageBucketWriter(Interface):
2619+
2620+class IStorageBackend(Interface):
2621     """
2622hunk ./src/allmydata/interfaces.py 283
2623-    Objects of this kind live on the client side.
2624+    Objects of this kind live on the server side and are used by the
2625+    storage server object.
2626     """
2627hunk ./src/allmydata/interfaces.py 286
2628-    def put_block(segmentnum=int, data=ShareData):
2629-        """@param data: For most segments, this data will be 'blocksize'
2630-        bytes in length. The last segment might be shorter.
2631-        @return: a Deferred that fires (with None) when the operation completes
2632+    def get_available_space():
2633+        """
2634+        Returns available space for share storage in bytes, or
2635+        None if this information is not available or if the available
2636+        space is unlimited.
2637+
2638+        If the backend is configured for read-only mode then this will
2639+        return 0.
2640+        """
2641+
2642+    def get_sharesets_for_prefix(prefix):
2643+        """
2644+        Generates IShareSet objects for all storage indices matching the
2645+        given prefix for which this backend holds shares.
2646+        """
2647+
2648+    def get_shareset(storageindex):
2649+        """
2650+        Get an IShareSet object for the given storage index.
2651+        """
2652+
2653+    def advise_corrupt_share(storageindex, sharetype, shnum, reason):
2654+        """
2655+        Clients who discover hash failures in shares that they have
2656+        downloaded from me will use this method to inform me about the
2657+        failures. I will record their concern so that my operator can
2658+        manually inspect the shares in question.
2659+
2660+        'sharetype' is either 'mutable' or 'immutable'. 'shnum' is the integer
2661+        share number. 'reason' is a human-readable explanation of the problem,
2662+        probably including some expected hash values and the computed ones
2663+        that did not match. Corruption advisories for mutable shares should
2664+        include a hash of the public key (the same value that appears in the
2665+        mutable-file verify-cap), since the current share format does not
2666+        store that on disk.
2667+
2668+        @param storageindex=str
2669+        @param sharetype=str
2670+        @param shnum=int
2671+        @param reason=str
2672+        """
2673+
2674+
2675+class IShareSet(Interface):
2676+    def get_storage_index():
2677+        """
2678+        Returns the storage index for this shareset.
2679+        """
2680+
2681+    def get_storage_index_string():
2682+        """
2683+        Returns the base32-encoded storage index for this shareset.
2684+        """
2685+
2686+    def get_overhead():
2687+        """
2688+        Returns the storage overhead, in bytes, of this shareset (exclusive
2689+        of the space used by its shares).
2690+        """
2691+
2692+    def get_shares():
2693+        """
2694+        Generates the IStoredShare objects held in this shareset.
2695+        """
2696+
2697+    def has_incoming(shnum):
2698+        """
2699+        Returns True if this shareset has an incoming (partial) share with this number, otherwise False.
2700+        """
2701+
2702+    def make_bucket_writer(storageserver, shnum, max_space_per_bucket, lease_info, canary):
2703+        """
2704+        Create a bucket writer that can be used to write data to a given share.
2705+
2706+        @param storageserver=RIStorageServer
2707+        @param shnum=int: A share number in this shareset
2708+        @param max_space_per_bucket=int: The maximum space allocated for the
2709+                 share, in bytes
2710+        @param lease_info=LeaseInfo: The initial lease information
2711+        @param canary=Referenceable: If the canary is lost before close(), the
2712+                 bucket is deleted.
2713+        @return an IStorageBucketWriter for the given share
2714+        """
2715+
2716+    def make_bucket_reader(storageserver, share):
2717+        """
2718+        Create a bucket reader that can be used to read data from a given share.
2719+
2720+        @param storageserver=RIStorageServer
2721+        @param share=IStoredShare
2722+        @return an IStorageBucketReader for the given share
2723+        """
2724+
2725+    def readv(wanted_shnums, read_vector):
2726+        """
2727+        Read a vector from the numbered shares in this shareset. An empty
2728+        wanted_shnums list means to return data from all known shares.
2729+
2730+        @param wanted_shnums=ListOf(int)
2731+        @param read_vector=ReadVector
2732+        @return DictOf(int, ReadData): shnum -> results, with one key per share
2733+        """
2734+
2735+    def testv_and_readv_and_writev(storageserver, secrets, test_and_write_vectors, read_vector, expiration_time):
2736+        """
2737+        General-purpose atomic test-read-and-set operation for mutable slots.
2738+        Perform a bunch of comparisons against the existing shares in this
2739+        shareset. If they all pass: use the read vectors to extract data from
2740+        all the shares, then apply a bunch of write vectors to those shares.
2741+        Return the read data, which does not include any modifications made by
2742+        the writes.
2743+
2744+        See the similar method in RIStorageServer for more detail.
2745+
2746+        @param storageserver=RIStorageServer
2747+        @param secrets=TupleOf(WriteEnablerSecret, LeaseRenewSecret[, ...])
2748+        @param test_and_write_vectors=TestAndWriteVectorsForShares
2749+        @param read_vector=ReadVector
2750+        @param expiration_time=int
2751+        @return TupleOf(bool, DictOf(int, ReadData))
2752+        """
2753+
2754+    def add_or_renew_lease(lease_info):
2755+        """
2756+        Add a new lease on the shares in this shareset. If the renew_secret
2757+        matches an existing lease, that lease will be renewed instead. If
2758+        there are no shares in this shareset, return silently.
2759+
2760+        @param lease_info=LeaseInfo
2761+        """
2762+
2763+    def renew_lease(renew_secret, new_expiration_time):
2764+        """
2765+        Renew a lease on the shares in this shareset, resetting the timer
2766+        to 31 days. Some grids will use this, some will not. If there are no
2767+        shares in this shareset, IndexError will be raised.
2768+
2769+        For mutable shares, if the given renew_secret does not match an
2770+        existing lease, IndexError will be raised with a note listing the
2771+        server-nodeids on the existing leases, so leases on migrated shares
2772+        can be renewed. For immutable shares, IndexError (without the note)
2773+        will be raised.
2774+
2775+        @param renew_secret=LeaseRenewSecret
2776+        """
2777+
2778+
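A sketch of how a server-side caller might drive IStorageBackend and
IShareSet during share allocation, assuming `backend`, `storageserver`,
`sharenums`, `max_space_per_bucket`, `lease_info`, and `canary` are in scope:

    shareset = backend.get_shareset(storage_index)
    alreadygot = set(share.get_shnum() for share in shareset.get_shares())
    bucketwriters = {}
    for shnum in sharenums:
        if shnum in alreadygot or shareset.has_incoming(shnum):
            continue  # already stored, or another upload is in progress
        bucketwriters[shnum] = shareset.make_bucket_writer(
            storageserver, shnum, max_space_per_bucket, lease_info, canary)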
2779+class IStoredShare(Interface):
2780+    """
2781+    This object may hold anything up to all of the share data. It is
2782+    intended to support lazy evaluation, such that in many use cases
2783+    substantially less than all of the share data will be accessed.
2784+    """
2785+    def close():
2786+        """
2787+        Complete writing to this share.
2788+        """
2789+
2790+    def get_storage_index():
2791+        """
2792+        Returns the storage index.
2793+        """
2794+
2795+    def get_shnum():
2796+        """
2797+        Returns the share number.
2798+        """
2799+
2800+    def get_data_length():
2801+        """
2802+        Returns the data length in bytes.
2803+        """
2804+
2805+    def get_size():
2806+        """
2807+        Returns the size of the share in bytes.
2808+        """
2809+
2810+    def get_used_space():
2811+        """
2812+        Returns the amount of backend storage used by this share, in bytes,
2813+        including overhead.
2814+        """
2815+
2816+    def unlink():
2817+        """
2818+        Signal that this share can be removed from the backend storage. This does
2819+        not guarantee that the share data will be immediately inaccessible, or
2820+        that it will be securely erased.
2821+        """
2822+
2823+    def readv(read_vector):
2824+        """
2825+        Read the given vector from this share and return the resulting data.
2826+        """
2827+
2828+
2829+class IStoredMutableShare(IStoredShare):
2830+    def check_write_enabler(write_enabler, si_s):
2831+        """
2832+        Check that the given write_enabler matches the one stored in this share; si_s is the base32-encoded storage index, used in error messages.
2833         """
2834 
2835hunk ./src/allmydata/interfaces.py 489
2836-    def put_plaintext_hashes(hashes=ListOf(Hash)):
2837+    def check_testv(test_vector):
2838+        """
2839+        Return True if this share satisfies the given test vector, otherwise False.
2840+        """
2841+
2842+    def writev(datav, new_length):
2843+        """
2844+        Apply the given write vector to this share, and truncate or extend the share to new_length if new_length is not None.
2845+        """
2846+
2847+
2848+class IStorageBucketWriter(Interface):
2849+    """
2850+    Objects of this kind live on the client side.
2851+    """
2852+    def put_block(segmentnum, data):
2853         """
2854hunk ./src/allmydata/interfaces.py 506
2855+        @param segmentnum=int
2856+        @param data=ShareData: For most segments, this data will be 'blocksize'
2857+        bytes in length. The last segment might be shorter.
2858         @return: a Deferred that fires (with None) when the operation completes
2859         """
2860 
2861hunk ./src/allmydata/interfaces.py 512
2862-    def put_crypttext_hashes(hashes=ListOf(Hash)):
2863+    def put_crypttext_hashes(hashes):
2864         """
2865hunk ./src/allmydata/interfaces.py 514
2866+        @param hashes=ListOf(Hash)
2867         @return: a Deferred that fires (with None) when the operation completes
2868         """
2869 
2870hunk ./src/allmydata/interfaces.py 518
2871-    def put_block_hashes(blockhashes=ListOf(Hash)):
2872+    def put_block_hashes(blockhashes):
2873         """
2874hunk ./src/allmydata/interfaces.py 520
2875+        @param blockhashes=ListOf(Hash)
2876         @return: a Deferred that fires (with None) when the operation completes
2877         """
2878 
2879hunk ./src/allmydata/interfaces.py 524
2880-    def put_share_hashes(sharehashes=ListOf(TupleOf(int, Hash))):
2881+    def put_share_hashes(sharehashes):
2882         """
2883hunk ./src/allmydata/interfaces.py 526
2884+        @param sharehashes=ListOf(TupleOf(int, Hash))
2885         @return: a Deferred that fires (with None) when the operation completes
2886         """
2887 
2888hunk ./src/allmydata/interfaces.py 530
2889-    def put_uri_extension(data=URIExtensionData):
2890+    def put_uri_extension(data):
2891         """This block of data contains integrity-checking information (hashes
2892         of plaintext, crypttext, and shares), as well as encoding parameters
2893         that are necessary to recover the data. This is a serialized dict
2894hunk ./src/allmydata/interfaces.py 535
2895         mapping strings to other strings. The hash of this data is kept in
2896-        the URI and verified before any of the data is used. All buckets for
2897-        a given file contain identical copies of this data.
2898+        the URI and verified before any of the data is used. All share
2899+        containers for a given file contain identical copies of this data.
2900 
2901         The serialization format is specified with the following pseudocode:
2902         for k in sorted(dict.keys()):
2903hunk ./src/allmydata/interfaces.py 543
2904             assert re.match(r'^[a-zA-Z_\-]+$', k)
2905             write(k + ':' + netstring(dict[k]))
2906 
2907+        @param data=URIExtensionData
2908         @return: a Deferred that fires (with None) when the operation completes
2909         """
2910 
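The pseudocode above corresponds directly to the following Python sketch,
assuming netstring framing of "<length>:<bytes>,":

    import re

    def netstring(s):
        return "%d:%s," % (len(s), s)

    def serialize_uri_extension(d):
        out = []
        for k in sorted(d.keys()):
            assert re.match(r'^[a-zA-Z_\-]+$', k)
            out.append(k + ':' + netstring(d[k]))
        return ''.join(out)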
2911hunk ./src/allmydata/interfaces.py 558
2912 
2913 class IStorageBucketReader(Interface):
2914 
2915-    def get_block_data(blocknum=int, blocksize=int, size=int):
2916+    def get_block_data(blocknum, blocksize, size):
2917         """Most blocks will be the same size. The last block might be shorter
2918         than the others.
2919 
2920hunk ./src/allmydata/interfaces.py 562
2921+        @param blocknum=int
2922+        @param blocksize=int
2923+        @param size=int
2924         @return: ShareData
2925         """
2926 
2927hunk ./src/allmydata/interfaces.py 573
2928         @return: ListOf(Hash)
2929         """
2930 
2931-    def get_block_hashes(at_least_these=SetOf(int)):
2932+    def get_block_hashes(at_least_these=()):
2933         """
2934hunk ./src/allmydata/interfaces.py 575
2935+        @param at_least_these=SetOf(int)
2936         @return: ListOf(Hash)
2937         """
2938 
2939hunk ./src/allmydata/interfaces.py 579
2940-    def get_share_hashes(at_least_these=SetOf(int)):
2941+    def get_share_hashes():
2942         """
2943         @return: ListOf(TupleOf(int, Hash))
2944         """
2945hunk ./src/allmydata/interfaces.py 611
2946         @return: unicode nickname, or None
2947         """
2948 
2949-    # methods moved from IntroducerClient, need review
2950-    def get_all_connections():
2951-        """Return a frozenset of (nodeid, service_name, rref) tuples, one for
2952-        each active connection we've established to a remote service. This is
2953-        mostly useful for unit tests that need to wait until a certain number
2954-        of connections have been made."""
2955-
2956-    def get_all_connectors():
2957-        """Return a dict that maps from (nodeid, service_name) to a
2958-        RemoteServiceConnector instance for all services that we are actively
2959-        trying to connect to. Each RemoteServiceConnector has the following
2960-        public attributes::
2961-
2962-          service_name: the type of service provided, like 'storage'
2963-          announcement_time: when we first heard about this service
2964-          last_connect_time: when we last established a connection
2965-          last_loss_time: when we last lost a connection
2966-
2967-          version: the peer's version, from the most recent connection
2968-          oldest_supported: the peer's oldest supported version, same
2969-
2970-          rref: the RemoteReference, if connected, otherwise None
2971-          remote_host: the IAddress, if connected, otherwise None
2972-
2973-        This method is intended for monitoring interfaces, such as a web page
2974-        that describes connecting and connected peers.
2975-        """
2976-
2977-    def get_all_peerids():
2978-        """Return a frozenset of all peerids to whom we have a connection (to
2979-        one or more services) established. Mostly useful for unit tests."""
2980-
2981-    def get_all_connections_for(service_name):
2982-        """Return a frozenset of (nodeid, service_name, rref) tuples, one
2983-        for each active connection that provides the given SERVICE_NAME."""
2984-
2985-    def get_permuted_peers(service_name, key):
2986-        """Returns an ordered list of (peerid, rref) tuples, selecting from
2987-        the connections that provide SERVICE_NAME, using a hash-based
2988-        permutation keyed by KEY. This randomizes the service list in a
2989-        repeatable way, to distribute load over many peers.
2990-        """
2991-
2992 
2993 class IMutableSlotWriter(Interface):
2994     """
2995hunk ./src/allmydata/interfaces.py 616
2996     The interface for a writer around a mutable slot on a remote server.
2997     """
2998-    def set_checkstring(checkstring, *args):
2999+    def set_checkstring(seqnum_or_checkstring, root_hash=None, salt=None):
3000         """
3001         Set the checkstring that I will pass to the remote server when
3002         writing.
3003hunk ./src/allmydata/interfaces.py 640
3004         Add a block and salt to the share.
3005         """
3006 
3007-    def put_encprivey(encprivkey):
3008+    def put_encprivkey(encprivkey):
3009         """
3010         Add the encrypted private key to the share.
3011         """
3012hunk ./src/allmydata/interfaces.py 645
3013 
3014-    def put_blockhashes(blockhashes=list):
3015+    def put_blockhashes(blockhashes):
3016         """
3017hunk ./src/allmydata/interfaces.py 647
3018+        @param blockhashes=list
3019         Add the block hash tree to the share.
3020         """
3021 
3022hunk ./src/allmydata/interfaces.py 651
3023-    def put_sharehashes(sharehashes=dict):
3024+    def put_sharehashes(sharehashes):
3025         """
3026hunk ./src/allmydata/interfaces.py 653
3027+        @param sharehashes=dict
3028         Add the share hash chain to the share.
3029         """
3030 
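A sketch of a typical call sequence on a slot writer; the variable names are
assumptions, standing for values computed by the publisher:

    writer.set_checkstring(seqnum, root_hash, salt)
    writer.put_encprivkey(encrypted_private_key)
    writer.put_blockhashes(block_hash_tree)      # a list of hashes
    writer.put_sharehashes(share_hash_chain)     # a dict of {shnum: hash}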
3031hunk ./src/allmydata/interfaces.py 739
3032     def get_extension_params():
3033         """Return the extension parameters in the URI"""
3034 
3035-    def set_extension_params():
3036+    def set_extension_params(params):
3037         """Set the extension parameters that should be in the URI"""
3038 
3039 class IDirectoryURI(Interface):
3040hunk ./src/allmydata/interfaces.py 879
3041         writer-visible data using this writekey.
3042         """
3043 
3044-    # TODO: Can this be overwrite instead of replace?
3045-    def replace(new_contents):
3046-        """Replace the contents of the mutable file, provided that no other
3047+    def overwrite(new_contents):
3048+        """Overwrite the contents of the mutable file, provided that no other
3049         node has published (or is attempting to publish, concurrently) a
3050         newer version of the file than this one.
3051 
3052hunk ./src/allmydata/interfaces.py 1346
3053         is empty, the metadata will be an empty dictionary.
3054         """
3055 
3056-    def set_uri(name, writecap, readcap=None, metadata=None, overwrite=True):
3057+    def set_uri(name, writecap, readcap, metadata=None, overwrite=True):
3058         """I add a child (by writecap+readcap) at the specific name. I return
3059         a Deferred that fires when the operation finishes. If overwrite= is
3060         True, I will replace any existing child of the same name, otherwise
3061hunk ./src/allmydata/interfaces.py 1745
3062     Block Hash, and the encoding parameters, both of which must be included
3063     in the URI.
3064 
3065-    I do not choose shareholders, that is left to the IUploader. I must be
3066-    given a dict of RemoteReferences to storage buckets that are ready and
3067-    willing to receive data.
3068+    I do not choose shareholders, that is left to the IUploader.
3069     """
3070 
3071     def set_size(size):
3072hunk ./src/allmydata/interfaces.py 1752
3073         """Specify the number of bytes that will be encoded. This must be
3074         performed before get_serialized_params() can be called.
3075         """
3076+
3077     def set_params(params):
3078         """Override the default encoding parameters. 'params' is a tuple of
3079         (k,d,n), where 'k' is the number of required shares, 'd' is the
3080hunk ./src/allmydata/interfaces.py 1848
3081     download, validate, decode, and decrypt data from them, writing the
3082     results to an output file.
3083 
3084-    I do not locate the shareholders, that is left to the IDownloader. I must
3085-    be given a dict of RemoteReferences to storage buckets that are ready to
3086-    send data.
3087+    I do not locate the shareholders, that is left to the IDownloader.
3088     """
3089 
3090     def setup(outfile):
3091hunk ./src/allmydata/interfaces.py 1950
3092         resuming an interrupted upload (where we need to compute the
3093         plaintext hashes, but don't need the redundant encrypted data)."""
3094 
3095-    def get_plaintext_hashtree_leaves(first, last, num_segments):
3096-        """OBSOLETE; Get the leaf nodes of a merkle hash tree over the
3097-        plaintext segments, i.e. get the tagged hashes of the given segments.
3098-        The segment size is expected to be generated by the
3099-        IEncryptedUploadable before any plaintext is read or ciphertext
3100-        produced, so that the segment hashes can be generated with only a
3101-        single pass.
3102-
3103-        This returns a Deferred that fires with a sequence of hashes, using:
3104-
3105-         tuple(segment_hashes[first:last])
3106-
3107-        'num_segments' is used to assert that the number of segments that the
3108-        IEncryptedUploadable handled matches the number of segments that the
3109-        encoder was expecting.
3110-
3111-        This method must not be called until the final byte has been read
3112-        from read_encrypted(). Once this method is called, read_encrypted()
3113-        can never be called again.
3114-        """
3115-
3116-    def get_plaintext_hash():
3117-        """OBSOLETE; Get the hash of the whole plaintext.
3118-
3119-        This returns a Deferred that fires with a tagged SHA-256 hash of the
3120-        whole plaintext, obtained from hashutil.plaintext_hash(data).
3121-        """
3122-
3123     def close():
3124         """Just like IUploadable.close()."""
3125 
3126hunk ./src/allmydata/interfaces.py 2144
3127         returns a Deferred that fires with an IUploadResults instance, from
3128         which the URI of the file can be obtained as results.uri ."""
3129 
3130-    def upload_ssk(write_capability, new_version, uploadable):
3131-        """TODO: how should this work?"""
3132-
3133 class ICheckable(Interface):
3134     def check(monitor, verify=False, add_lease=False):
3135         """Check up on my health, optionally repairing any problems.
3136hunk ./src/allmydata/interfaces.py 2505
3137 
3138 class IRepairResults(Interface):
3139     """I contain the results of a repair operation."""
3140-    def get_successful(self):
3141+    def get_successful():
3142         """Returns a boolean: True if the repair made the file healthy, False
3143         if not. Repair failure generally indicates a file that has been
3144         damaged beyond repair."""
3145hunk ./src/allmydata/interfaces.py 2577
3146     Tahoe process will typically have a single NodeMaker, but unit tests may
3147     create simplified/mocked forms for testing purposes.
3148     """
3149-    def create_from_cap(writecap, readcap=None, **kwargs):
3150+    def create_from_cap(writecap, readcap=None, deep_immutable=False, name=u"<unknown name>"):
3151         """I create an IFilesystemNode from the given writecap/readcap. I can
3152         only provide nodes for existing file/directory objects: use my other
3153         methods to create new objects. I return synchronously."""
3154hunk ./src/allmydata/monitor.py 30
3155 
3156     # the following methods are provided for the operation code
3157 
3158-    def is_cancelled(self):
3159+    def is_cancelled():
3160         """Returns True if the operation has been cancelled. If True,
3161         operation code should stop creating new work, and attempt to stop any
3162         work already in progress."""
3163hunk ./src/allmydata/monitor.py 35
3164 
3165-    def raise_if_cancelled(self):
3166+    def raise_if_cancelled():
3167         """Raise OperationCancelledError if the operation has been cancelled.
3168         Operation code that has a robust error-handling path can simply call
3169         this periodically."""
3170hunk ./src/allmydata/monitor.py 40
3171 
3172-    def set_status(self, status):
3173+    def set_status(status):
3174         """Sets the Monitor's 'status' object to an arbitrary value.
3175         Different operations will store different sorts of status information
3176         here. Operation code should use get+modify+set sequences to update
3177hunk ./src/allmydata/monitor.py 46
3178         this."""
3179 
3180-    def get_status(self):
3181+    def get_status():
3182         """Return the status object. If the operation failed, this will be a
3183         Failure instance."""
3184 
3185hunk ./src/allmydata/monitor.py 50
3186-    def finish(self, status):
3187+    def finish(status):
3188         """Call this when the operation is done, successful or not. The
3189         Monitor's lifetime is influenced by the completion of the operation
3190         it is monitoring. The Monitor's 'status' value will be set with the
3191hunk ./src/allmydata/monitor.py 63
3192 
3193     # the following methods are provided for the initiator of the operation
3194 
3195-    def is_finished(self):
3196+    def is_finished():
3197         """Return a boolean, True if the operation is done (whether
3198         successful or failed), False if it is still running."""
3199 
3200hunk ./src/allmydata/monitor.py 67
3201-    def when_done(self):
3202+    def when_done():
3203         """Return a Deferred that fires when the operation is complete. It
3204         will fire with the operation status, the same value as returned by
3205         get_status()."""
3206hunk ./src/allmydata/monitor.py 72
3207 
3208-    def cancel(self):
3209+    def cancel():
3210         """Cancel the operation as soon as possible. is_cancelled() will
3211         start returning True after this is called."""
3212 
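A sketch of how operation code might use a Monitor, where process_one()
stands in for the real per-item work:

    def run_operation(monitor, items):
        results = []
        for item in items:
            monitor.raise_if_cancelled()
            results.append(process_one(item))
            monitor.set_status("processed %d items" % len(results))
        monitor.finish(results)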
3213hunk ./src/allmydata/mutable/filenode.py 753
3214         self._writekey = writekey
3215         self._serializer = defer.succeed(None)
3216 
3217-
3218     def get_sequence_number(self):
3219         """
3220         Get the sequence number of the mutable version that I represent.
3221hunk ./src/allmydata/mutable/filenode.py 759
3222         """
3223         return self._version[0] # verinfo[0] == the sequence number
3224 
3225+    def get_servermap(self):
3226+        return self._servermap
3227 
3228hunk ./src/allmydata/mutable/filenode.py 762
3229-    # TODO: Terminology?
3230     def get_writekey(self):
3231         """
3232         I return a writekey or None if I don't have a writekey.
3233hunk ./src/allmydata/mutable/filenode.py 768
3234         """
3235         return self._writekey
3236 
3237-
3238     def set_downloader_hints(self, hints):
3239         """
3240         I set the downloader hints.
3241hunk ./src/allmydata/mutable/filenode.py 776
3242 
3243         self._downloader_hints = hints
3244 
3245-
3246     def get_downloader_hints(self):
3247         """
3248         I return the downloader hints.
3249hunk ./src/allmydata/mutable/filenode.py 782
3250         """
3251         return self._downloader_hints
3252 
3253-
3254     def overwrite(self, new_contents):
3255         """
3256         I overwrite the contents of this mutable file version with the
3257hunk ./src/allmydata/mutable/filenode.py 791
3258 
3259         return self._do_serialized(self._overwrite, new_contents)
3260 
3261-
3262     def _overwrite(self, new_contents):
3263         assert IMutableUploadable.providedBy(new_contents)
3264         assert self._servermap.last_update_mode == MODE_WRITE
3265hunk ./src/allmydata/mutable/filenode.py 797
3266 
3267         return self._upload(new_contents)
3268 
3269-
3270     def modify(self, modifier, backoffer=None):
3271         """I use a modifier callback to apply a change to the mutable file.
3272         I implement the following pseudocode::
3273hunk ./src/allmydata/mutable/filenode.py 841
3274 
3275         return self._do_serialized(self._modify, modifier, backoffer)
3276 
3277-
3278     def _modify(self, modifier, backoffer):
3279         if backoffer is None:
3280             backoffer = BackoffAgent().delay
3281hunk ./src/allmydata/mutable/filenode.py 846
3282         return self._modify_and_retry(modifier, backoffer, True)
3283 
3284-
3285     def _modify_and_retry(self, modifier, backoffer, first_time):
3286         """
3287         I try to apply modifier to the contents of this version of the
3288hunk ./src/allmydata/mutable/filenode.py 878
3289         d.addErrback(_retry)
3290         return d
3291 
3292-
3293     def _modify_once(self, modifier, first_time):
3294         """
3295         I attempt to apply a modifier to the contents of the mutable
3296hunk ./src/allmydata/mutable/filenode.py 913
3297         d.addCallback(_apply)
3298         return d
3299 
3300-
3301     def is_readonly(self):
3302         """
3303         I return True if this MutableFileVersion provides no write
3304hunk ./src/allmydata/mutable/filenode.py 921
3305         """
3306         return self._writekey is None
3307 
3308-
3309     def is_mutable(self):
3310         """
3311         I return True, since mutable files are always mutable by
3312hunk ./src/allmydata/mutable/filenode.py 928
3313         """
3314         return True
3315 
3316-
3317     def get_storage_index(self):
3318         """
3319         I return the storage index of the reference that I encapsulate.
3320hunk ./src/allmydata/mutable/filenode.py 934
3321         """
3322         return self._storage_index
3323 
3324-
3325     def get_size(self):
3326         """
3327         I return the length, in bytes, of this readable object.
3328hunk ./src/allmydata/mutable/filenode.py 940
3329         """
3330         return self._servermap.size_of_version(self._version)
3331 
3332-
3333     def download_to_data(self, fetch_privkey=False):
3334         """
3335         I return a Deferred that fires with the contents of this
3336hunk ./src/allmydata/mutable/filenode.py 951
3337         d.addCallback(lambda mc: "".join(mc.chunks))
3338         return d
3339 
3340-
3341     def _try_to_download_data(self):
3342         """
3343         I am an unserialized cousin of download_to_data; I am called
3344hunk ./src/allmydata/mutable/filenode.py 963
3345         d.addCallback(lambda mc: "".join(mc.chunks))
3346         return d
3347 
3348-
3349     def read(self, consumer, offset=0, size=None, fetch_privkey=False):
3350         """
3351         I read a portion (possibly all) of the mutable file that I
3352hunk ./src/allmydata/mutable/filenode.py 971
3353         return self._do_serialized(self._read, consumer, offset, size,
3354                                    fetch_privkey)
3355 
3356-
3357     def _read(self, consumer, offset=0, size=None, fetch_privkey=False):
3358         """
3359         I am the serialized companion of read.
3360hunk ./src/allmydata/mutable/filenode.py 981
3361         d = r.download(consumer, offset, size)
3362         return d
3363 
3364-
3365     def _do_serialized(self, cb, *args, **kwargs):
3366         # note: to avoid deadlock, this callable is *not* allowed to invoke
3367         # other serialized methods within this (or any other)
3368hunk ./src/allmydata/mutable/filenode.py 999
3369         self._serializer.addErrback(log.err)
3370         return d
3371 
3372-
3373     def _upload(self, new_contents):
3374         #assert self._pubkey, "update_servermap must be called before publish"
3375         p = Publish(self._node, self._storage_broker, self._servermap)
3376hunk ./src/allmydata/mutable/filenode.py 1009
3377         d.addCallback(self._did_upload, new_contents.get_size())
3378         return d
3379 
3380-
3381     def _did_upload(self, res, size):
3382         self._most_recent_size = size
3383         return res
3384hunk ./src/allmydata/mutable/filenode.py 1029
3385         """
3386         return self._do_serialized(self._update, data, offset)
3387 
3388-
3389     def _update(self, data, offset):
3390         """
3391         I update the mutable file version represented by this particular
3392hunk ./src/allmydata/mutable/filenode.py 1058
3393         d.addCallback(self._build_uploadable_and_finish, data, offset)
3394         return d
3395 
3396-
3397     def _do_modify_update(self, data, offset):
3398         """
3399         I perform a file update by modifying the contents of the file
3400hunk ./src/allmydata/mutable/filenode.py 1073
3401             return new
3402         return self._modify(m, None)
3403 
3404-
3405     def _do_update_update(self, data, offset):
3406         """
3407         I start the Servermap update that gets us the data we need to
3408hunk ./src/allmydata/mutable/filenode.py 1108
3409         return self._update_servermap(update_range=(start_segment,
3410                                                     end_segment))
3411 
3412-
3413     def _decode_and_decrypt_segments(self, ignored, data, offset):
3414         """
3415         After the servermap update, I take the encrypted and encoded
3416hunk ./src/allmydata/mutable/filenode.py 1148
3417         d3 = defer.succeed(blockhashes)
3418         return deferredutil.gatherResults([d1, d2, d3])
3419 
3420-
3421     def _build_uploadable_and_finish(self, segments_and_bht, data, offset):
3422         """
3423         After the process has the plaintext segments, I build the
3424hunk ./src/allmydata/mutable/filenode.py 1163
3425         p = Publish(self._node, self._storage_broker, self._servermap)
3426         return p.update(u, offset, segments_and_bht[2], self._version)
3427 
3428-
3429     def _update_servermap(self, mode=MODE_WRITE, update_range=None):
3430         """
3431         I update the servermap. I return a Deferred that fires when the
3432hunk ./src/allmydata/storage/common.py 1
3433-
3434-import os.path
3435 from allmydata.util import base32
3436 
3437 class DataTooLargeError(Exception):
3438hunk ./src/allmydata/storage/common.py 5
3439     pass
3440+
3441 class UnknownMutableContainerVersionError(Exception):
3442     pass
3443hunk ./src/allmydata/storage/common.py 8
3444+
3445 class UnknownImmutableContainerVersionError(Exception):
3446     pass
3447 
3448hunk ./src/allmydata/storage/common.py 18
3449 
3450 def si_a2b(ascii_storageindex):
3451     return base32.a2b(ascii_storageindex)
3452-
3453-def storage_index_to_dir(storageindex):
3454-    sia = si_b2a(storageindex)
3455-    return os.path.join(sia[:2], sia)
3456hunk ./src/allmydata/storage/crawler.py 2
3457 
3458-import os, time, struct
3459+import time, struct
3460 import cPickle as pickle
3461 from twisted.internet import reactor
3462 from twisted.application import service
3463hunk ./src/allmydata/storage/crawler.py 6
3464+
3465+from allmydata.util.assertutil import precondition
3466+from allmydata.interfaces import IStorageBackend
3467 from allmydata.storage.common import si_b2a
3468hunk ./src/allmydata/storage/crawler.py 10
3469-from allmydata.util import fileutil
3470+
3471 
3472 class TimeSliceExceeded(Exception):
3473     pass
3474hunk ./src/allmydata/storage/crawler.py 15
3475 
3476+
3477 class ShareCrawler(service.MultiService):
3478hunk ./src/allmydata/storage/crawler.py 17
3479-    """A ShareCrawler subclass is attached to a StorageServer, and
3480-    periodically walks all of its shares, processing each one in some
3481-    fashion. This crawl is rate-limited, to reduce the IO burden on the host,
3482-    since large servers can easily have a terabyte of shares, in several
3483-    million files, which can take hours or days to read.
3484+    """
3485+    An instance of a subclass of ShareCrawler is attached to a storage
3486+    backend, and periodically walks the backend's shares, processing them
3487+    in some fashion. This crawl is rate-limited to reduce the I/O burden on
3488+    the host, since large servers can easily have a terabyte of shares in
3489+    several million files, which can take hours or days to read.
3490 
3491     Once the crawler starts a cycle, it will proceed at a rate limited by the
3492     allowed_cpu_percentage= and cpu_slice= parameters: yielding the reactor
3493hunk ./src/allmydata/storage/crawler.py 33
3494     long enough to ensure that 'minimum_cycle_time' elapses between the start
3495     of two consecutive cycles.
3496 
3497-    We assume that the normal upload/download/get_buckets traffic of a tahoe
3498+    We assume that the normal upload/download/DYHB traffic of a Tahoe-LAFS
3499     grid will cause the prefixdir contents to be mostly cached in the kernel,
3500hunk ./src/allmydata/storage/crawler.py 35
3501-    or that the number of buckets in each prefixdir will be small enough to
3502-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
3503-    buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
3504+    or that the number of sharesets in each prefixdir will be small enough to
3505+    load quickly. A 1TB allmydata.com server was measured to have 2.56 million
3506+    sharesets, spread into the 1024 prefixdirs, with about 2500 sharesets per
3507     prefix. On this server, each prefixdir took 130ms-200ms to list the first
3508     time, and 17ms to list the second time.
3509 
3510hunk ./src/allmydata/storage/crawler.py 41
3511-    To use a crawler, create a subclass which implements the process_bucket()
3512-    method. It will be called with a prefixdir and a base32 storage index
3513-    string. process_bucket() must run synchronously. Any keys added to
3514-    self.state will be preserved. Override add_initial_state() to set up
3515-    initial state keys. Override finished_cycle() to perform additional
3516-    processing when the cycle is complete. Any status that the crawler
3517-    produces should be put in the self.state dictionary. Status renderers
3518-    (like a web page which describes the accomplishments of your crawler)
3519-    will use crawler.get_state() to retrieve this dictionary; they can
3520-    present the contents as they see fit.
3521+    To implement a crawler, create a subclass that implements the
3522+    process_shareset() method. It will be called with a prefixdir and an
3523+    object providing the IShareSet interface. process_shareset() must run
3524+    synchronously. Any keys added to self.state will be preserved. Override
3525+    add_initial_state() to set up initial state keys. Override
3526+    finished_cycle() to perform additional processing when the cycle is
3527+    complete. Any status that the crawler produces should be put in the
3528+    self.state dictionary. Status renderers (like a web page describing the
3529+    accomplishments of your crawler) will use crawler.get_state() to retrieve
3530+    this dictionary; they can present the contents as they see fit.
3531 
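For illustration, a minimal subclass along these lines (the class name and
state keys are assumptions):

    class ShareCountingCrawler(ShareCrawler):
        minimum_cycle_time = 60*60  # at most one full cycle per hour

        def add_initial_state(self):
            self.state.setdefault("share-count", 0)

        def process_shareset(self, cycle, prefix, shareset):
            self.state["share-count"] += len(list(shareset.get_shares()))

        def finished_cycle(self, cycle):
            self.state["last-cycle-share-count"] = self.state["share-count"]
            self.state["share-count"] = 0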
3532hunk ./src/allmydata/storage/crawler.py 52
3533-    Then create an instance, with a reference to a StorageServer and a
3534-    filename where it can store persistent state. The statefile is used to
3535-    keep track of how far around the ring the process has travelled, as well
3536-    as timing history to allow the pace to be predicted and controlled. The
3537-    statefile will be updated and written to disk after each time slice (just
3538-    before the crawler yields to the reactor), and also after each cycle is
3539-    finished, and also when stopService() is called. Note that this means
3540-    that a crawler which is interrupted with SIGKILL while it is in the
3541-    middle of a time slice will lose progress: the next time the node is
3542-    started, the crawler will repeat some unknown amount of work.
3543+    Then create an instance, with a reference to a backend object providing
3544+    the IStorageBackend interface, and a FilePath where it can store
3545+    persistent state. The statefile is used to keep track of how far around
3546+    the ring the process has travelled, as well as timing history to allow
3547+    the pace to be predicted and controlled. The statefile will be updated
3548+    and written to disk after each time slice (just before the crawler yields
3549+    to the reactor), and also after each cycle is finished, and also when
3550+    stopService() is called. Note that this means that a crawler that is
3551+    interrupted with SIGKILL while it is in the middle of a time slice will
3552+    lose progress: the next time the node is started, the crawler will repeat
3553+    some unknown amount of work.
3554 
3555     The crawler instance must be started with startService() before it will
3556hunk ./src/allmydata/storage/crawler.py 65
3557-    do any work. To make it stop doing work, call stopService().
3558+    do any work. To make it stop doing work, call stopService(). A crawler
3559+    is usually a child service of a StorageServer, although it should not
3560+    depend on that.
3561+
3562+    For historical reasons, some dictionary key names use the term "bucket"
3563+    for what is now preferably called a "shareset" (the set of shares that a
3564+    server holds under a given storage index).
3565     """
3566 
3567     slow_start = 300 # don't start crawling for 5 minutes after startup
3568hunk ./src/allmydata/storage/crawler.py 80
3569     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
3570     minimum_cycle_time = 300 # don't run a cycle faster than this
3571 
3572-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
3573+    def __init__(self, backend, statefp, allowed_cpu_percentage=None):
3574+        precondition(IStorageBackend.providedBy(backend), backend)
3575         service.MultiService.__init__(self)
3576hunk ./src/allmydata/storage/crawler.py 83
3577+        self.backend = backend
3578+        self.statefp = statefp
3579         if allowed_cpu_percentage is not None:
3580             self.allowed_cpu_percentage = allowed_cpu_percentage
3581hunk ./src/allmydata/storage/crawler.py 87
3582-        self.server = server
3583-        self.sharedir = server.sharedir
3584-        self.statefile = statefile
3585         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
3586                          for i in range(2**10)]
3587         self.prefixes.sort()
3588hunk ./src/allmydata/storage/crawler.py 91
3589         self.timer = None
3590-        self.bucket_cache = (None, [])
3591+        self.shareset_cache = (None, [])
3592         self.current_sleep_time = None
3593         self.next_wake_time = None
3594         self.last_prefix_finished_time = None
3595hunk ./src/allmydata/storage/crawler.py 154
3596                 left = len(self.prefixes) - self.last_complete_prefix_index
3597                 remaining = left * self.last_prefix_elapsed_time
3598                 # TODO: remainder of this prefix: we need to estimate the
3599-                # per-bucket time, probably by measuring the time spent on
3600-                # this prefix so far, divided by the number of buckets we've
3601+                # per-shareset time, probably by measuring the time spent on
3602+                # this prefix so far, divided by the number of sharesets we've
3603                 # processed.
3604             d["estimated-cycle-complete-time-left"] = remaining
3605             # it's possible to call get_progress() from inside a crawler's
3606hunk ./src/allmydata/storage/crawler.py 175
3607         state dictionary.
3608 
3609         If we are not currently sleeping (i.e. get_state() was called from
3610-        inside the process_prefixdir, process_bucket, or finished_cycle()
3611+        inside the process_prefixdir, process_shareset, or finished_cycle()
3612         methods, or if startService has not yet been called on this crawler),
3613         these two keys will be None.
3614 
3615hunk ./src/allmydata/storage/crawler.py 188
3616     def load_state(self):
3617         # we use this to store state for both the crawler's internals and
3618         # anything the subclass-specific code needs. The state is stored
3619-        # after each bucket is processed, after each prefixdir is processed,
3620+        # after each shareset is processed, after each prefixdir is processed,
3621         # and after a cycle is complete. The internal keys we use are:
3622         #  ["version"]: int, always 1
3623         #  ["last-cycle-finished"]: int, or None if we have not yet finished
3624hunk ./src/allmydata/storage/crawler.py 202
3625         #                            are sleeping between cycles, or if we
3626         #                            have not yet finished any prefixdir since
3627         #                            a cycle was started
3628-        #  ["last-complete-bucket"]: str, base32 storage index bucket name
3629-        #                            of the last bucket to be processed, or
3630-        #                            None if we are sleeping between cycles
3631+        #  ["last-complete-bucket"]: str, base32 storage index of the last
3632+        #                            shareset to be processed, or None if we
3633+        #                            are sleeping between cycles
3634         try:
3635hunk ./src/allmydata/storage/crawler.py 206
3636-            f = open(self.statefile, "rb")
3637-            state = pickle.load(f)
3638-            f.close()
3639+            state = pickle.loads(self.statefp.getContent())
3640         except EnvironmentError:
3641             state = {"version": 1,
3642                      "last-cycle-finished": None,
3643hunk ./src/allmydata/storage/crawler.py 242
3644         else:
3645             last_complete_prefix = self.prefixes[lcpi]
3646         self.state["last-complete-prefix"] = last_complete_prefix
3647-        tmpfile = self.statefile + ".tmp"
3648-        f = open(tmpfile, "wb")
3649-        pickle.dump(self.state, f)
3650-        f.close()
3651-        fileutil.move_into_place(tmpfile, self.statefile)
3652+        self.statefp.setContent(pickle.dumps(self.state))
3653 
3654     def startService(self):
3655         # arrange things to look like we were just sleeping, so
3656hunk ./src/allmydata/storage/crawler.py 284
3657         sleep_time = (this_slice / self.allowed_cpu_percentage) - this_slice
3658         # if the math gets weird, or a timequake happens, don't sleep
3659         # forever. Note that this means that, while a cycle is running, we
3660-        # will process at least one bucket every 5 minutes, no matter how
3661-        # long that bucket takes.
3662+        # will process at least one shareset every 5 minutes, no matter how
3663+        # long that shareset takes.
3664         sleep_time = max(0.0, min(sleep_time, 299))
3665         if finished_cycle:
3666             # how long should we sleep between cycles? Don't run faster than
3667hunk ./src/allmydata/storage/crawler.py 315
3668         for i in range(self.last_complete_prefix_index+1, len(self.prefixes)):
3669             # if we want to yield earlier, just raise TimeSliceExceeded()
3670             prefix = self.prefixes[i]
3671-            prefixdir = os.path.join(self.sharedir, prefix)
3672-            if i == self.bucket_cache[0]:
3673-                buckets = self.bucket_cache[1]
3674+            if i == self.shareset_cache[0]:
3675+                sharesets = self.shareset_cache[1]
3676             else:
3677hunk ./src/allmydata/storage/crawler.py 318
3678-                try:
3679-                    buckets = os.listdir(prefixdir)
3680-                    buckets.sort()
3681-                except EnvironmentError:
3682-                    buckets = []
3683-                self.bucket_cache = (i, buckets)
3684-            self.process_prefixdir(cycle, prefix, prefixdir,
3685-                                   buckets, start_slice)
3686+                sharesets = self.backend.get_sharesets_for_prefix(prefix)
3687+                self.shareset_cache = (i, sharesets)
3688+            self.process_prefixdir(cycle, prefix, sharesets, start_slice)
3689             self.last_complete_prefix_index = i
3690 
3691             now = time.time()
3692hunk ./src/allmydata/storage/crawler.py 345
3693         self.finished_cycle(cycle)
3694         self.save_state()
3695 
3696-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
3697-        """This gets a list of bucket names (i.e. storage index strings,
3698+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
3699+        """
3700+        This gets a list of shareset names (i.e. storage index strings,
3701         base32-encoded) in sorted order.
3702 
3703         You can override this if your crawler doesn't care about the actual
3704hunk ./src/allmydata/storage/crawler.py 352
3705         shares, for example a crawler which merely keeps track of how many
3706-        buckets are being managed by this server.
3707+        sharesets are being managed by this server.
3708 
3709hunk ./src/allmydata/storage/crawler.py 354
3710-        Subclasses which *do* care about actual bucket should leave this
3711-        method along, and implement process_bucket() instead.
3712+        Subclasses that *do* care about the actual sharesets should leave
3713+        this method alone, and implement process_shareset() instead.
3714         """
3715 
3716hunk ./src/allmydata/storage/crawler.py 358
3717-        for bucket in buckets:
3718-            if bucket <= self.state["last-complete-bucket"]:
3719+        for shareset in sharesets:
3720+            base32si = shareset.get_storage_index_string()
3721+            if base32si <= self.state["last-complete-bucket"]:
3722                 continue
3723hunk ./src/allmydata/storage/crawler.py 362
3724-            self.process_bucket(cycle, prefix, prefixdir, bucket)
3725-            self.state["last-complete-bucket"] = bucket
3726+            self.process_shareset(cycle, prefix, shareset)
3727+            self.state["last-complete-bucket"] = base32si
3728             if time.time() >= start_slice + self.cpu_slice:
3729                 raise TimeSliceExceeded()
3730 
3731hunk ./src/allmydata/storage/crawler.py 370
3732     # the remaining methods are explicitly for subclasses to implement.
3733 
3734     def started_cycle(self, cycle):
3735-        """Notify a subclass that the crawler is about to start a cycle.
3736+        """
3737+        Notify a subclass that the crawler is about to start a cycle.
3738 
3739         This method is for subclasses to override. No upcall is necessary.
3740         """
3741hunk ./src/allmydata/storage/crawler.py 377
3742         pass
3743 
3744-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
3745-        """Examine a single bucket. Subclasses should do whatever they want
3746+    def process_shareset(self, cycle, prefix, shareset):
3747+        """
3748+        Examine a single shareset. Subclasses should do whatever they want
3749         to do to the shares therein, then update self.state as necessary.
3750 
3751         If the crawler is never interrupted by SIGKILL, this method will be
3752hunk ./src/allmydata/storage/crawler.py 383
3753-        called exactly once per share (per cycle). If it *is* interrupted,
3754+        called exactly once per shareset (per cycle). If it *is* interrupted,
3755         then the next time the node is started, some amount of work will be
3756         duplicated, according to when self.save_state() was last called. By
3757         default, save_state() is called at the end of each timeslice, and
3758hunk ./src/allmydata/storage/crawler.py 391
3759 
3760         To reduce the chance of duplicate work (i.e. to avoid adding multiple
3761         records to a database), you can call save_state() at the end of your
3762-        process_bucket() method. This will reduce the maximum duplicated work
3763-        to one bucket per SIGKILL. It will also add overhead, probably 1-20ms
3764-        per bucket (and some disk writes), which will count against your
3765-        allowed_cpu_percentage, and which may be considerable if
3766-        process_bucket() runs quickly.
3767+        process_shareset() method. This will reduce the maximum duplicated
3768+        work to one shareset per SIGKILL. It will also add overhead, probably
3769+        1-20ms per shareset (and some disk writes), which will count against
3770+        your allowed_cpu_percentage, and which may be considerable if
3771+        process_shareset() runs quickly.
3772 
3773         This method is for subclasses to override. No upcall is necessary.
3774         """
3775hunk ./src/allmydata/storage/crawler.py 402
3776         pass
3777 
3778     def finished_prefix(self, cycle, prefix):
3779-        """Notify a subclass that the crawler has just finished processing a
3780-        prefix directory (all buckets with the same two-character/10bit
3781+        """
3782+        Notify a subclass that the crawler has just finished processing a
3783+        prefix directory (all sharesets with the same two-character/10-bit
3784         prefix). To impose a limit on how much work might be duplicated by a
3785         SIGKILL that occurs during a timeslice, you can call
3786         self.save_state() here, but be aware that it may represent a
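
To make the hook concrete: a minimal (hypothetical) subclass that only
records each shareset it visits might look like the following. It uses only
names visible in this patch (add_initial_state, save_state,
get_storage_index_string); calling save_state() per shareset is the
bounded-duplication tradeoff described in the docstring above.

    from allmydata.storage.crawler import ShareCrawler

    class SharesetLoggingCrawler(ShareCrawler):
        """Sketch: append the storage index of every shareset seen."""

        def add_initial_state(self):
            self.state.setdefault("seen", [])

        def process_shareset(self, cycle, prefix, shareset):
            self.state["seen"].append(shareset.get_storage_index_string())
            # Checkpoint now: at most one shareset is re-processed after a
            # SIGKILL, at the cost of a state write per shareset.
            self.save_state()
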
3787hunk ./src/allmydata/storage/crawler.py 415
3788         pass
3789 
3790     def finished_cycle(self, cycle):
3791-        """Notify subclass that a cycle (one complete traversal of all
3792+        """
3793+        Notify subclass that a cycle (one complete traversal of all
3794         prefixdirs) has just finished. 'cycle' is the number of the cycle
3795         that just finished. This method should perform summary work and
3796         update self.state to publish information to status displays.
3797hunk ./src/allmydata/storage/crawler.py 433
3798         pass
3799 
3800     def yielding(self, sleep_time):
3801-        """The crawler is about to sleep for 'sleep_time' seconds. This
3802+        """
3803+        The crawler is about to sleep for 'sleep_time' seconds. This
3804         method is mostly for the convenience of unit tests.
3805 
3806         This method is for subclasses to override. No upcall is necessary.
3807hunk ./src/allmydata/storage/crawler.py 443
3808 
3809 
3810 class BucketCountingCrawler(ShareCrawler):
3811-    """I keep track of how many buckets are being managed by this server.
3812-    This is equivalent to the number of distributed files and directories for
3813-    which I am providing storage. The actual number of files+directories in
3814-    the full grid is probably higher (especially when there are more servers
3815-    than 'N', the number of generated shares), because some files+directories
3816-    will have shares on other servers instead of me. Also note that the
3817-    number of buckets will differ from the number of shares in small grids,
3818-    when more than one share is placed on a single server.
3819+    """
3820+    I keep track of how many sharesets, each corresponding to a storage index,
3821+    are being managed by this server. This is equivalent to the number of
3822+    distributed files and directories for which I am providing storage. The
3823+    actual number of files and directories in the full grid is probably higher
3824+    (especially when there are more servers than 'N', the number of generated
3825+    shares), because some files and directories will have shares on other
3826+    servers instead of me. Also note that the number of sharesets will differ
3827+    from the number of shares in small grids, when more than one share is
3828+    placed on a single server.
3829     """
3830 
3831     minimum_cycle_time = 60*60 # we don't need this more than once an hour
3832hunk ./src/allmydata/storage/crawler.py 457
3833 
3834-    def __init__(self, server, statefile, num_sample_prefixes=1):
3835-        ShareCrawler.__init__(self, server, statefile)
3836+    def __init__(self, backend, statefp, num_sample_prefixes=1):
3837+        ShareCrawler.__init__(self, backend, statefp)
3838         self.num_sample_prefixes = num_sample_prefixes
3839 
3840     def add_initial_state(self):
3841hunk ./src/allmydata/storage/crawler.py 471
3842         self.state.setdefault("last-complete-bucket-count", None)
3843         self.state.setdefault("storage-index-samples", {})
3844 
3845-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
3846+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
3847         # we override process_prefixdir() because we don't want to look at
3848hunk ./src/allmydata/storage/crawler.py 473
3849-        # the individual buckets. We'll save state after each one. On my
3850+        # the individual sharesets. We'll save state after each one. On my
3851         # laptop, a mostly-empty storage server can process about 70
3852         # prefixdirs in a 1.0s slice.
3853         if cycle not in self.state["bucket-counts"]:
3854hunk ./src/allmydata/storage/crawler.py 478
3855             self.state["bucket-counts"][cycle] = {}
3856-        self.state["bucket-counts"][cycle][prefix] = len(buckets)
3857+        self.state["bucket-counts"][cycle][prefix] = len(sharesets)
3858         if prefix in self.prefixes[:self.num_sample_prefixes]:
3859hunk ./src/allmydata/storage/crawler.py 480
3860-            self.state["storage-index-samples"][prefix] = (cycle, buckets)
3861+            self.state["storage-index-samples"][prefix] = (cycle, sharesets)
3862 
3863     def finished_cycle(self, cycle):
3864         last_counts = self.state["bucket-counts"].get(cycle, [])
3865hunk ./src/allmydata/storage/crawler.py 486
3866         if len(last_counts) == len(self.prefixes):
3867             # great, we have a whole cycle.
3868-            num_buckets = sum(last_counts.values())
3869-            self.state["last-complete-bucket-count"] = num_buckets
3870+            num_sharesets = sum(last_counts.values())
3871+            self.state["last-complete-bucket-count"] = num_sharesets
3872             # get rid of old counts
3873             for old_cycle in list(self.state["bucket-counts"].keys()):
3874                 if old_cycle != cycle:
3875hunk ./src/allmydata/storage/crawler.py 494
3876                     del self.state["bucket-counts"][old_cycle]
3877         # get rid of old samples too
3878         for prefix in list(self.state["storage-index-samples"].keys()):
3879-            old_cycle,buckets = self.state["storage-index-samples"][prefix]
3880+            old_cycle, storage_indices = self.state["storage-index-samples"][prefix]
3881             if old_cycle != cycle:
3882                 del self.state["storage-index-samples"][prefix]
3883hunk ./src/allmydata/storage/crawler.py 497
3884-
3885hunk ./src/allmydata/storage/expirer.py 1
3886-import time, os, pickle, struct
3887+
3888+import time, pickle, struct
3889+from twisted.python import log as twlog
3890+
3891 from allmydata.storage.crawler import ShareCrawler
3892hunk ./src/allmydata/storage/expirer.py 6
3893-from allmydata.storage.shares import get_share_file
3894-from allmydata.storage.common import UnknownMutableContainerVersionError, \
3895+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
3896      UnknownImmutableContainerVersionError
3897hunk ./src/allmydata/storage/expirer.py 8
3898-from twisted.python import log as twlog
3899+
3900 
3901 class LeaseCheckingCrawler(ShareCrawler):
3902     """I examine the leases on all shares, determining which are still valid
3903hunk ./src/allmydata/storage/expirer.py 17
3904     removed.
3905 
3906     I collect statistics on the leases and make these available to a web
3907-    status page, including::
3908+    status page, including:
3909 
3910     Space recovered during this cycle-so-far:
3911      actual (only if expiration_enabled=True):
3912hunk ./src/allmydata/storage/expirer.py 21
3913-      num-buckets, num-shares, sum of share sizes, real disk usage
3914+      num-storage-indices, num-shares, sum of share sizes, real disk usage
3915       ('real disk usage' means we use stat(fn).st_blocks*512 and include any
3916        space used by the directory)
3917      what it would have been with the original lease expiration time
3918hunk ./src/allmydata/storage/expirer.py 32
3919 
3920     Space recovered during the last 10 cycles  <-- saved in separate pickle
3921 
3922-    Shares/buckets examined:
3923+    Shares/storage-indices examined:
3924      this cycle-so-far
3925      prediction of rest of cycle
3926      during last 10 cycles <-- separate pickle
3927hunk ./src/allmydata/storage/expirer.py 42
3928     Histogram of leases-per-share:
3929      this-cycle-to-date
3930      last 10 cycles <-- separate pickle
3931-    Histogram of lease ages, buckets = 1day
3932+    Histogram of lease ages, in bins of 1 day
3933      cycle-to-date
3934      last 10 cycles <-- separate pickle
3935 
3936hunk ./src/allmydata/storage/expirer.py 53
3937     slow_start = 360 # wait 6 minutes after startup
3938     minimum_cycle_time = 12*60*60 # not more than twice per day
3939 
3940-    def __init__(self, server, statefile, historyfile,
3941-                 expiration_enabled, mode,
3942-                 override_lease_duration, # used if expiration_mode=="age"
3943-                 cutoff_date, # used if expiration_mode=="cutoff-date"
3944-                 sharetypes):
3945-        self.historyfile = historyfile
3946-        self.expiration_enabled = expiration_enabled
3947-        self.mode = mode
3948+    def __init__(self, backend, statefp, historyfp, expiration_policy):
3949+        # ShareCrawler.__init__ will call add_initial_state, so self.historyfp has to be set first.
3950+        self.historyfp = historyfp
3951+        ShareCrawler.__init__(self, backend, statefp)
3952+
3953+        self.expiration_enabled = expiration_policy['enabled']
3954+        self.mode = expiration_policy['mode']
3955         self.override_lease_duration = None
3956         self.cutoff_date = None
3957         if self.mode == "age":
3958hunk ./src/allmydata/storage/expirer.py 63
3959-            assert isinstance(override_lease_duration, (int, type(None)))
3960-            self.override_lease_duration = override_lease_duration # seconds
3961+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
3962+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
3963         elif self.mode == "cutoff-date":
3964hunk ./src/allmydata/storage/expirer.py 66
3965-            assert isinstance(cutoff_date, int) # seconds-since-epoch
3966-            assert cutoff_date is not None
3967-            self.cutoff_date = cutoff_date
3968+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
3969+            self.cutoff_date = expiration_policy['cutoff_date']
3970         else:
3971hunk ./src/allmydata/storage/expirer.py 69
3972-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
3973-        self.sharetypes_to_expire = sharetypes
3974-        ShareCrawler.__init__(self, server, statefile)
3975+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
3976+        self.sharetypes_to_expire = expiration_policy['sharetypes']
3977 
3978     def add_initial_state(self):
3979         # we fill ["cycle-to-date"] here (even though they will be reset in
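
For illustration, a caller would now build the expiration policy as a dict
with exactly the keys consulted above (the backend/statefp/historyfp values
here are placeholders, not from this patch):

    expiration_policy = {
        'enabled': True,
        'mode': 'cutoff-date',
        'override_lease_duration': None,  # only consulted when mode == "age"
        'cutoff_date': 1316390400,        # int seconds-since-epoch, as asserted
        'sharetypes': ('mutable', 'immutable'),
    }
    checker = LeaseCheckingCrawler(backend, statefp, historyfp,
                                   expiration_policy)
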
3980hunk ./src/allmydata/storage/expirer.py 84
3981             self.state["cycle-to-date"].setdefault(k, so_far[k])
3982 
3983         # initialize history
3984-        if not os.path.exists(self.historyfile):
3985+        if not self.historyfp.exists():
3986             history = {} # cyclenum -> dict
3987hunk ./src/allmydata/storage/expirer.py 86
3988-            f = open(self.historyfile, "wb")
3989-            pickle.dump(history, f)
3990-            f.close()
3991+            self.historyfp.setContent(pickle.dumps(history))
3992 
3993     def create_empty_cycle_dict(self):
3994         recovered = self.create_empty_recovered_dict()
3995hunk ./src/allmydata/storage/expirer.py 99
3996 
3997     def create_empty_recovered_dict(self):
3998         recovered = {}
3999+        # "buckets" is ambiguous; here it means the number of sharesets (one per storage index per server)
4000         for a in ("actual", "original", "configured", "examined"):
4001             for b in ("buckets", "shares", "sharebytes", "diskbytes"):
4002                 recovered[a+"-"+b] = 0
4003hunk ./src/allmydata/storage/expirer.py 110
4004     def started_cycle(self, cycle):
4005         self.state["cycle-to-date"] = self.create_empty_cycle_dict()
4006 
4007-    def stat(self, fn):
4008-        return os.stat(fn)
4009-
4010-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
4011-        bucketdir = os.path.join(prefixdir, storage_index_b32)
4012-        s = self.stat(bucketdir)
4013+    def process_shareset(self, cycle, prefix, container):
4014         would_keep_shares = []
4015         wks = None
4016hunk ./src/allmydata/storage/expirer.py 113
4017+        sharetype = None
4018 
4019hunk ./src/allmydata/storage/expirer.py 115
4020-        for fn in os.listdir(bucketdir):
4021-            try:
4022-                shnum = int(fn)
4023-            except ValueError:
4024-                continue # non-numeric means not a sharefile
4025-            sharefile = os.path.join(bucketdir, fn)
4026+        for share in container.get_shares():
4027+            sharetype = share.sharetype
4028             try:
4029hunk ./src/allmydata/storage/expirer.py 118
4030-                wks = self.process_share(sharefile)
4031+                wks = self.process_share(share)
4032             except (UnknownMutableContainerVersionError,
4033                     UnknownImmutableContainerVersionError,
4034                     struct.error):
4035hunk ./src/allmydata/storage/expirer.py 122
4036-                twlog.msg("lease-checker error processing %s" % sharefile)
4037+                twlog.msg("lease-checker error processing %r" % (share,))
4038                 twlog.err()
4039hunk ./src/allmydata/storage/expirer.py 124
4040-                which = (storage_index_b32, shnum)
4041+                which = (si_b2a(share.storageindex), share.get_shnum())
4042                 self.state["cycle-to-date"]["corrupt-shares"].append(which)
4043                 wks = (1, 1, 1, "unknown")
4044             would_keep_shares.append(wks)
4045hunk ./src/allmydata/storage/expirer.py 129
4046 
4047-        sharetype = None
4048+        container_type = None
4049         if wks:
4050hunk ./src/allmydata/storage/expirer.py 131
4051-            # use the last share's sharetype as the buckettype
4052-            sharetype = wks[3]
4053+            # use the last share's sharetype as the container type
4054+            container_type = wks[3]
4055         rec = self.state["cycle-to-date"]["space-recovered"]
4056         self.increment(rec, "examined-buckets", 1)
4057         if sharetype:
4058hunk ./src/allmydata/storage/expirer.py 136
4059-            self.increment(rec, "examined-buckets-"+sharetype, 1)
4060+            self.increment(rec, "examined-buckets-"+container_type, 1)
4061+
4062+        container_diskbytes = container.get_overhead()
4063 
4064hunk ./src/allmydata/storage/expirer.py 140
4065-        try:
4066-            bucket_diskbytes = s.st_blocks * 512
4067-        except AttributeError:
4068-            bucket_diskbytes = 0 # no stat().st_blocks on windows
4069         if sum([wks[0] for wks in would_keep_shares]) == 0:
4070hunk ./src/allmydata/storage/expirer.py 141
4071-            self.increment_bucketspace("original", bucket_diskbytes, sharetype)
4072+            self.increment_container_space("original", container_diskbytes, sharetype)
4073         if sum([wks[1] for wks in would_keep_shares]) == 0:
4074hunk ./src/allmydata/storage/expirer.py 143
4075-            self.increment_bucketspace("configured", bucket_diskbytes, sharetype)
4076+            self.increment_container_space("configured", container_diskbytes, sharetype)
4077         if sum([wks[2] for wks in would_keep_shares]) == 0:
4078hunk ./src/allmydata/storage/expirer.py 145
4079-            self.increment_bucketspace("actual", bucket_diskbytes, sharetype)
4080+            self.increment_container_space("actual", container_diskbytes, sharetype)
4081 
4082hunk ./src/allmydata/storage/expirer.py 147
4083-    def process_share(self, sharefilename):
4084-        # first, find out what kind of a share it is
4085-        sf = get_share_file(sharefilename)
4086-        sharetype = sf.sharetype
4087+    def process_share(self, share):
4088+        sharetype = share.sharetype
4089         now = time.time()
4090hunk ./src/allmydata/storage/expirer.py 150
4091-        s = self.stat(sharefilename)
4092+        sharebytes = share.get_size()
4093+        diskbytes = share.get_used_space()
4094 
4095         num_leases = 0
4096         num_valid_leases_original = 0
4097hunk ./src/allmydata/storage/expirer.py 158
4098         num_valid_leases_configured = 0
4099         expired_leases_configured = []
4100 
4101-        for li in sf.get_leases():
4102+        for li in share.get_leases():
4103             num_leases += 1
4104             original_expiration_time = li.get_expiration_time()
4105             grant_renew_time = li.get_grant_renew_time_time()
4106hunk ./src/allmydata/storage/expirer.py 171
4107 
4108             #  expired-or-not according to our configured age limit
4109             expired = False
4110-            if self.mode == "age":
4111-                age_limit = original_expiration_time
4112-                if self.override_lease_duration is not None:
4113-                    age_limit = self.override_lease_duration
4114-                if age > age_limit:
4115-                    expired = True
4116-            else:
4117-                assert self.mode == "cutoff-date"
4118-                if grant_renew_time < self.cutoff_date:
4119-                    expired = True
4120-            if sharetype not in self.sharetypes_to_expire:
4121-                expired = False
4122+            if sharetype in self.sharetypes_to_expire:
4123+                if self.mode == "age":
4124+                    age_limit = original_expiration_time
4125+                    if self.override_lease_duration is not None:
4126+                        age_limit = self.override_lease_duration
4127+                    if age > age_limit:
4128+                        expired = True
4129+                else:
4130+                    assert self.mode == "cutoff-date"
4131+                    if grant_renew_time < self.cutoff_date:
4132+                        expired = True
4133 
4134             if expired:
4135                 expired_leases_configured.append(li)
4136hunk ./src/allmydata/storage/expirer.py 190
4137 
4138         so_far = self.state["cycle-to-date"]
4139         self.increment(so_far["leases-per-share-histogram"], num_leases, 1)
4140-        self.increment_space("examined", s, sharetype)
4141+        self.increment_space("examined", diskbytes, sharetype)
4142 
4143         would_keep_share = [1, 1, 1, sharetype]
4144 
4145hunk ./src/allmydata/storage/expirer.py 196
4146         if self.expiration_enabled:
4147             for li in expired_leases_configured:
4148-                sf.cancel_lease(li.cancel_secret)
4149+                share.cancel_lease(li.cancel_secret)
4150 
4151         if num_valid_leases_original == 0:
4152             would_keep_share[0] = 0
4153hunk ./src/allmydata/storage/expirer.py 200
4154-            self.increment_space("original", s, sharetype)
4155+            self.increment_space("original", sharebytes, diskbytes, sharetype)
4156 
4157         if num_valid_leases_configured == 0:
4158             would_keep_share[1] = 0
4159hunk ./src/allmydata/storage/expirer.py 204
4160-            self.increment_space("configured", s, sharetype)
4161+            self.increment_space("configured", sharebytes, diskbytes, sharetype)
4162             if self.expiration_enabled:
4163                 would_keep_share[2] = 0
4164hunk ./src/allmydata/storage/expirer.py 207
4165-                self.increment_space("actual", s, sharetype)
4166+                self.increment_space("actual", sharebytes, diskbytes, sharetype)
4167 
4168         return would_keep_share
4169 
4170hunk ./src/allmydata/storage/expirer.py 211
4171-    def increment_space(self, a, s, sharetype):
4172-        sharebytes = s.st_size
4173-        try:
4174-            # note that stat(2) says that st_blocks is 512 bytes, and that
4175-            # st_blksize is "optimal file sys I/O ops blocksize", which is
4176-            # independent of the block-size that st_blocks uses.
4177-            diskbytes = s.st_blocks * 512
4178-        except AttributeError:
4179-            # the docs say that st_blocks is only on linux. I also see it on
4180-            # MacOS. But it isn't available on windows.
4181-            diskbytes = sharebytes
4182+    def increment_space(self, a, sharebytes, diskbytes, sharetype):
4183         so_far_sr = self.state["cycle-to-date"]["space-recovered"]
4184         self.increment(so_far_sr, a+"-shares", 1)
4185         self.increment(so_far_sr, a+"-sharebytes", sharebytes)
4186hunk ./src/allmydata/storage/expirer.py 221
4187             self.increment(so_far_sr, a+"-sharebytes-"+sharetype, sharebytes)
4188             self.increment(so_far_sr, a+"-diskbytes-"+sharetype, diskbytes)
4189 
4190-    def increment_bucketspace(self, a, bucket_diskbytes, sharetype):
4191+    def increment_container_space(self, a, container_diskbytes, container_type):
4192         rec = self.state["cycle-to-date"]["space-recovered"]
4193hunk ./src/allmydata/storage/expirer.py 223
4194-        self.increment(rec, a+"-diskbytes", bucket_diskbytes)
4195+        self.increment(rec, a+"-diskbytes", container_diskbytes)
4196         self.increment(rec, a+"-buckets", 1)
4197hunk ./src/allmydata/storage/expirer.py 225
4198-        if sharetype:
4199-            self.increment(rec, a+"-diskbytes-"+sharetype, bucket_diskbytes)
4200-            self.increment(rec, a+"-buckets-"+sharetype, 1)
4201+        if container_type:
4202+            self.increment(rec, a+"-diskbytes-"+container_type, container_diskbytes)
4203+            self.increment(rec, a+"-buckets-"+container_type, 1)
4204 
4205     def increment(self, d, k, delta=1):
4206         if k not in d:
4207hunk ./src/allmydata/storage/expirer.py 281
4208         # copy() needs to become a deepcopy
4209         h["space-recovered"] = s["space-recovered"].copy()
4210 
4211-        history = pickle.load(open(self.historyfile, "rb"))
4212+        history = pickle.loads(self.historyfp.getContent())
4213         history[cycle] = h
4214         while len(history) > 10:
4215             oldcycles = sorted(history.keys())
4216hunk ./src/allmydata/storage/expirer.py 286
4217             del history[oldcycles[0]]
4218-        f = open(self.historyfile, "wb")
4219-        pickle.dump(history, f)
4220-        f.close()
4221+        self.historyfp.setContent(pickle.dumps(history))
4222 
4223     def get_state(self):
4224         """In addition to the crawler state described in
4225hunk ./src/allmydata/storage/expirer.py 355
4226         progress = self.get_progress()
4227 
4228         state = ShareCrawler.get_state(self) # does a shallow copy
4229-        history = pickle.load(open(self.historyfile, "rb"))
4230+        history = pickle.loads(self.historyfp.getContent())
4231         state["history"] = history
4232 
4233         if not progress["cycle-in-progress"]:
4234hunk ./src/allmydata/storage/lease.py 3
4235 import struct, time
4236 
4237+
4238+class NonExistentLeaseError(Exception):
4239+    pass
4240+
4241 class LeaseInfo:
4242     def __init__(self, owner_num=None, renew_secret=None, cancel_secret=None,
4243                  expiration_time=None, nodeid=None):
4244hunk ./src/allmydata/storage/lease.py 21
4245 
4246     def get_expiration_time(self):
4247         return self.expiration_time
4248+
4249     def get_grant_renew_time_time(self):
4250         # hack, based upon fixed 31day expiration period
4251         return self.expiration_time - 31*24*60*60
4252hunk ./src/allmydata/storage/lease.py 25
4253+
4254     def get_age(self):
4255         return time.time() - self.get_grant_renew_time_time()
4256 
4257hunk ./src/allmydata/storage/lease.py 36
4258          self.expiration_time) = struct.unpack(">L32s32sL", data)
4259         self.nodeid = None
4260         return self
4261+
4262     def to_immutable_data(self):
4263         return struct.pack(">L32s32sL",
4264                            self.owner_num,
4265hunk ./src/allmydata/storage/lease.py 49
4266                            int(self.expiration_time),
4267                            self.renew_secret, self.cancel_secret,
4268                            self.nodeid)
4269+
4270     def from_mutable_data(self, data):
4271         (self.owner_num,
4272          self.expiration_time,
4273hunk ./src/allmydata/storage/server.py 1
4274-import os, re, weakref, struct, time
4275+import weakref, time
4276 
4277 from foolscap.api import Referenceable
4278 from twisted.application import service
4279hunk ./src/allmydata/storage/server.py 7
4280 
4281 from zope.interface import implements
4282-from allmydata.interfaces import RIStorageServer, IStatsProducer
4283-from allmydata.util import fileutil, idlib, log, time_format
4284+from allmydata.interfaces import RIStorageServer, IStatsProducer, IStorageBackend
4285+from allmydata.util.assertutil import precondition
4286+from allmydata.util import idlib, log
4287 import allmydata # for __full_version__
4288 
4289hunk ./src/allmydata/storage/server.py 12
4290-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
4291-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
4292+from allmydata.storage.common import si_a2b, si_b2a
4293+[si_a2b]  # hush pyflakes
4294 from allmydata.storage.lease import LeaseInfo
4295hunk ./src/allmydata/storage/server.py 15
4296-from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
4297-     create_mutable_sharefile
4298-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
4299-from allmydata.storage.crawler import BucketCountingCrawler
4300 from allmydata.storage.expirer import LeaseCheckingCrawler
4301hunk ./src/allmydata/storage/server.py 16
4302-
4303-# storage/
4304-# storage/shares/incoming
4305-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
4306-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
4307-# storage/shares/$START/$STORAGEINDEX
4308-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
4309-
4310-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
4311-# base-32 chars).
4312-
4313-# $SHARENUM matches this regex:
4314-NUM_RE=re.compile("^[0-9]+$")
4315-
4316+from allmydata.storage.crawler import BucketCountingCrawler
4317 
4318 
4319 class StorageServer(service.MultiService, Referenceable):
4320hunk ./src/allmydata/storage/server.py 21
4321     implements(RIStorageServer, IStatsProducer)
4322+
4323     name = 'storage'
4324     LeaseCheckerClass = LeaseCheckingCrawler
4325hunk ./src/allmydata/storage/server.py 24
4326+    DEFAULT_EXPIRATION_POLICY = {
4327+        'enabled': False,
4328+        'mode': 'age',
4329+        'override_lease_duration': None,
4330+        'cutoff_date': None,
4331+        'sharetypes': ('mutable', 'immutable'),
4332+    }
4333 
4334hunk ./src/allmydata/storage/server.py 32
4335-    def __init__(self, storedir, nodeid, reserved_space=0,
4336-                 discard_storage=False, readonly_storage=False,
4337+    def __init__(self, serverid, backend, statedir,
4338                  stats_provider=None,
4339hunk ./src/allmydata/storage/server.py 34
4340-                 expiration_enabled=False,
4341-                 expiration_mode="age",
4342-                 expiration_override_lease_duration=None,
4343-                 expiration_cutoff_date=None,
4344-                 expiration_sharetypes=("mutable", "immutable")):
4345+                 expiration_policy=None):
4346         service.MultiService.__init__(self)
4347hunk ./src/allmydata/storage/server.py 36
4348-        assert isinstance(nodeid, str)
4349-        assert len(nodeid) == 20
4350-        self.my_nodeid = nodeid
4351-        self.storedir = storedir
4352-        sharedir = os.path.join(storedir, "shares")
4353-        fileutil.make_dirs(sharedir)
4354-        self.sharedir = sharedir
4355-        # we don't actually create the corruption-advisory dir until necessary
4356-        self.corruption_advisory_dir = os.path.join(storedir,
4357-                                                    "corruption-advisories")
4358-        self.reserved_space = int(reserved_space)
4359-        self.no_storage = discard_storage
4360-        self.readonly_storage = readonly_storage
4361+        precondition(IStorageBackend.providedBy(backend), backend)
4362+        precondition(isinstance(serverid, str), serverid)
4363+        precondition(len(serverid) == 20, serverid)
4364+
4365+        self._serverid = serverid
4366         self.stats_provider = stats_provider
4367         if self.stats_provider:
4368             self.stats_provider.register_producer(self)
4369hunk ./src/allmydata/storage/server.py 44
4370-        self.incomingdir = os.path.join(sharedir, 'incoming')
4371-        self._clean_incomplete()
4372-        fileutil.make_dirs(self.incomingdir)
4373         self._active_writers = weakref.WeakKeyDictionary()
4374hunk ./src/allmydata/storage/server.py 45
4375+        self.backend = backend
4376+        self.backend.setServiceParent(self)
4377+        self._statedir = statedir
4378         log.msg("StorageServer created", facility="tahoe.storage")
4379 
4380hunk ./src/allmydata/storage/server.py 50
4381-        if reserved_space:
4382-            if self.get_available_space() is None:
4383-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
4384-                        umin="0wZ27w", level=log.UNUSUAL)
4385-
4386         self.latencies = {"allocate": [], # immutable
4387                           "write": [],
4388                           "close": [],
4389hunk ./src/allmydata/storage/server.py 61
4390                           "renew": [],
4391                           "cancel": [],
4392                           }
4393-        self.add_bucket_counter()
4394-
4395-        statefile = os.path.join(self.storedir, "lease_checker.state")
4396-        historyfile = os.path.join(self.storedir, "lease_checker.history")
4397-        klass = self.LeaseCheckerClass
4398-        self.lease_checker = klass(self, statefile, historyfile,
4399-                                   expiration_enabled, expiration_mode,
4400-                                   expiration_override_lease_duration,
4401-                                   expiration_cutoff_date,
4402-                                   expiration_sharetypes)
4403-        self.lease_checker.setServiceParent(self)
4404+        self._setup_bucket_counter()
4405+        self._setup_lease_checker(expiration_policy or self.DEFAULT_EXPIRATION_POLICY)
4406 
4407     def __repr__(self):
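
Putting the new constructor together: a storage server is now wired up from
a serverid, a backend, and a statedir. A sketch (the DiskBackend import path
and its constructor signature are assumptions about other parts of this
patch series, not shown here):

    from twisted.python.filepath import FilePath
    from allmydata.storage.server import StorageServer
    # hypothetical; the disk backend is added by other hunks in this series
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    storedir = FilePath("storage")
    backend = DiskBackend(storedir)              # signature is an assumption
    serverid = "\x00" * 20                       # must be a 20-byte nodeid
    server = StorageServer(serverid, backend, storedir,
                           expiration_policy=None)  # None -> DEFAULT_EXPIRATION_POLICY
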
4408hunk ./src/allmydata/storage/server.py 65
4409-        return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
4410+        return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self._serverid),)
4411 
4412hunk ./src/allmydata/storage/server.py 67
4413-    def add_bucket_counter(self):
4414-        statefile = os.path.join(self.storedir, "bucket_counter.state")
4415-        self.bucket_counter = BucketCountingCrawler(self, statefile)
4416+    def _setup_bucket_counter(self):
4417+        statefp = self._statedir.child("bucket_counter.state")
4418+        self.bucket_counter = BucketCountingCrawler(self.backend, statefp)
4419         self.bucket_counter.setServiceParent(self)
4420 
4421hunk ./src/allmydata/storage/server.py 72
4422+    def _setup_lease_checker(self, expiration_policy):
4423+        statefp = self._statedir.child("lease_checker.state")
4424+        historyfp = self._statedir.child("lease_checker.history")
4425+        self.lease_checker = self.LeaseCheckerClass(self.backend, statefp, historyfp, expiration_policy)
4426+        self.lease_checker.setServiceParent(self)
4427+
4428     def count(self, name, delta=1):
4429         if self.stats_provider:
4430             self.stats_provider.count("storage_server." + name, delta)
4431hunk ./src/allmydata/storage/server.py 92
4432         """Return a dict, indexed by category, that contains a dict of
4433         latency numbers for each category. If there are sufficient samples
4434         for unambiguous interpretation, each dict will contain the
4435-        following keys: mean, 01_0_percentile, 10_0_percentile,
4436+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
4437         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
4438         99_0_percentile, 99_9_percentile.  If there are insufficient
4439         samples for a given percentile to be interpreted unambiguously
4440hunk ./src/allmydata/storage/server.py 114
4441             else:
4442                 stats["mean"] = None
4443 
4444-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
4445-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
4446-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
4447+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
4448+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
4449+                             (0.01, "01_0_percentile", 100), (0.99, "99_0_percentile", 100),\
4450                              (0.999, "99_9_percentile", 1000)]
4451 
4452             for percentile, percentilestring, minnumtoobserve in orderstatlist:
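
The minnumtoobserve column is the sample-count threshold for reporting each
percentile; one conventional way to apply it (the actual selection code is
outside this hunk, so treat this as an assumed sketch) is:

    samples = sorted(self.latencies[category])
    for percentile, percentilestring, minnumtoobserve in orderstatlist:
        if len(samples) >= minnumtoobserve:
            # e.g. the 99.9th percentile needs at least 1000 samples
            stats[percentilestring] = samples[int(percentile * len(samples))]
        else:
            stats[percentilestring] = None
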
4453hunk ./src/allmydata/storage/server.py 133
4454             kwargs["facility"] = "tahoe.storage"
4455         return log.msg(*args, **kwargs)
4456 
4457-    def _clean_incomplete(self):
4458-        fileutil.rm_dir(self.incomingdir)
4459+    def get_serverid(self):
4460+        return self._serverid
4461 
4462     def get_stats(self):
4463         # remember: RIStatsProvider requires that our return dict
4464hunk ./src/allmydata/storage/server.py 138
4465-        # contains numeric values.
4466+        # contains numeric or None values.
4467         stats = { 'storage_server.allocated': self.allocated_size(), }
4468hunk ./src/allmydata/storage/server.py 140
4469-        stats['storage_server.reserved_space'] = self.reserved_space
4470         for category,ld in self.get_latencies().items():
4471             for name,v in ld.items():
4472                 stats['storage_server.latencies.%s.%s' % (category, name)] = v
4473hunk ./src/allmydata/storage/server.py 144
4474 
4475-        try:
4476-            disk = fileutil.get_disk_stats(self.sharedir, self.reserved_space)
4477-            writeable = disk['avail'] > 0
4478-
4479-            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
4480-            stats['storage_server.disk_total'] = disk['total']
4481-            stats['storage_server.disk_used'] = disk['used']
4482-            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
4483-            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
4484-            stats['storage_server.disk_avail'] = disk['avail']
4485-        except AttributeError:
4486-            writeable = True
4487-        except EnvironmentError:
4488-            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
4489-            writeable = False
4490-
4491-        if self.readonly_storage:
4492-            stats['storage_server.disk_avail'] = 0
4493-            writeable = False
4494+        self.backend.fill_in_space_stats(stats)
4495 
4496hunk ./src/allmydata/storage/server.py 146
4497-        stats['storage_server.accepting_immutable_shares'] = int(writeable)
4498         s = self.bucket_counter.get_state()
4499         bucket_count = s.get("last-complete-bucket-count")
4500         if bucket_count:
4501hunk ./src/allmydata/storage/server.py 153
4502         return stats
4503 
4504     def get_available_space(self):
4505-        """Returns available space for share storage in bytes, or None if no
4506-        API to get this information is available."""
4507-
4508-        if self.readonly_storage:
4509-            return 0
4510-        return fileutil.get_available_space(self.sharedir, self.reserved_space)
4511+        return self.backend.get_available_space()
4512 
4513     def allocated_size(self):
4514         space = 0
4515hunk ./src/allmydata/storage/server.py 162
4516         return space
4517 
4518     def remote_get_version(self):
4519-        remaining_space = self.get_available_space()
4520+        remaining_space = self.backend.get_available_space()
4521         if remaining_space is None:
4522             # We're on a platform that has no API to get disk stats.
4523             remaining_space = 2**64
4524hunk ./src/allmydata/storage/server.py 178
4525                     }
4526         return version
4527 
4528-    def remote_allocate_buckets(self, storage_index,
4529+    def remote_allocate_buckets(self, storageindex,
4530                                 renew_secret, cancel_secret,
4531                                 sharenums, allocated_size,
4532                                 canary, owner_num=0):
4533hunk ./src/allmydata/storage/server.py 182
4534+        # cancel_secret is no longer used.
4535         # owner_num is not for clients to set, but rather it should be
4536hunk ./src/allmydata/storage/server.py 184
4537-        # curried into the PersonalStorageServer instance that is dedicated
4538-        # to a particular owner.
4539+        # curried into a StorageServer instance dedicated to a particular
4540+        # owner.
4541         start = time.time()
4542         self.count("allocate")
4543hunk ./src/allmydata/storage/server.py 188
4544-        alreadygot = set()
4545         bucketwriters = {} # k: shnum, v: BucketWriter
4546hunk ./src/allmydata/storage/server.py 189
4547-        si_dir = storage_index_to_dir(storage_index)
4548-        si_s = si_b2a(storage_index)
4549 
4550hunk ./src/allmydata/storage/server.py 190
4551+        si_s = si_b2a(storageindex)
4552         log.msg("storage: allocate_buckets %s" % si_s)
4553 
4554hunk ./src/allmydata/storage/server.py 193
4555-        # in this implementation, the lease information (including secrets)
4556-        # goes into the share files themselves. It could also be put into a
4557-        # separate database. Note that the lease should not be added until
4558-        # the BucketWriter has been closed.
4559+        # Note that the lease should not be added until the BucketWriter
4560+        # has been closed.
4561         expire_time = time.time() + 31*24*60*60
4562hunk ./src/allmydata/storage/server.py 196
4563-        lease_info = LeaseInfo(owner_num,
4564-                               renew_secret, cancel_secret,
4565-                               expire_time, self.my_nodeid)
4566+        lease_info = LeaseInfo(owner_num, renew_secret,
4567+                               expiration_time=expire_time, nodeid=self._serverid)
4568 
4569         max_space_per_bucket = allocated_size
4570 
4571hunk ./src/allmydata/storage/server.py 201
4572-        remaining_space = self.get_available_space()
4573+        remaining_space = self.backend.get_available_space()
4574         limited = remaining_space is not None
4575         if limited:
4576hunk ./src/allmydata/storage/server.py 204
4577-            # this is a bit conservative, since some of this allocated_size()
4578-            # has already been written to disk, where it will show up in
4579+            # This is a bit conservative, since some of this allocated_size()
4580+            # has already been written to the backend, where it will show up in
4581             # get_available_space.
4582             remaining_space -= self.allocated_size()
4583hunk ./src/allmydata/storage/server.py 208
4584-        # self.readonly_storage causes remaining_space <= 0
4585+            # If the backend is read-only, remaining_space will be <= 0.
4586+
4587+        shareset = self.backend.get_shareset(storageindex)
4588 
4589hunk ./src/allmydata/storage/server.py 212
4590-        # fill alreadygot with all shares that we have, not just the ones
4591+        # Fill alreadygot with all shares that we have, not just the ones
4592         # they asked about: this will save them a lot of work. Add or update
4593         # leases for all of them: if they want us to hold shares for this
4594hunk ./src/allmydata/storage/server.py 215
4595-        # file, they'll want us to hold leases for this file.
4596-        for (shnum, fn) in self._get_bucket_shares(storage_index):
4597-            alreadygot.add(shnum)
4598-            sf = ShareFile(fn)
4599-            sf.add_or_renew_lease(lease_info)
4600+        # file, they'll want us to hold leases for all the shares of it.
4601+        #
4602+        # XXX should we be making the assumption here that lease info is
4603+        # duplicated in all shares?
4604+        alreadygot = set()
4605+        for share in shareset.get_shares():
4606+            share.add_or_renew_lease(lease_info)
4607+            alreadygot.add(share.shnum)
4608 
4609hunk ./src/allmydata/storage/server.py 224
4610-        for shnum in sharenums:
4611-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
4612-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
4613-            if os.path.exists(finalhome):
4614-                # great! we already have it. easy.
4615-                pass
4616-            elif os.path.exists(incominghome):
4617+        for shnum in sharenums - alreadygot:
4618+            if shareset.has_incoming(shnum):
4619                 # Note that we don't create BucketWriters for shnums that
4620                 # have a partial share (in incoming/), so if a second upload
4621                 # occurs while the first is still in progress, the second
4622hunk ./src/allmydata/storage/server.py 232
4623                 # uploader will use different storage servers.
4624                 pass
4625             elif (not limited) or (remaining_space >= max_space_per_bucket):
4626-                # ok! we need to create the new share file.
4627-                bw = BucketWriter(self, incominghome, finalhome,
4628-                                  max_space_per_bucket, lease_info, canary)
4629-                if self.no_storage:
4630-                    bw.throw_out_all_data = True
4631+                bw = shareset.make_bucket_writer(self, shnum, max_space_per_bucket,
4632+                                                 lease_info, canary)
4633                 bucketwriters[shnum] = bw
4634                 self._active_writers[bw] = 1
4635                 if limited:
4636hunk ./src/allmydata/storage/server.py 239
4637                     remaining_space -= max_space_per_bucket
4638             else:
4639-                # bummer! not enough space to accept this bucket
4640+                # Bummer! Not enough space to accept this share.
4641                 pass
4642 
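
The allocation path above relies on just three shareset operations:
get_shares() (to compute alreadygot and renew leases), has_incoming(shnum)
(to skip partially-uploaded shares), and make_bucket_writer(...). A skeletal
stand-in showing only those names, as a sketch of the contract rather than a
real backend:

    class SketchShareSet(object):
        """Bare minimum that remote_allocate_buckets calls."""

        def get_shares(self):
            # Each share must expose .shnum and add_or_renew_lease(lease_info).
            return []

        def has_incoming(self, shnum):
            # True if a partial upload for shnum is already in progress.
            return False

        def make_bucket_writer(self, storageserver, shnum, max_space,
                               lease_info, canary):
            # Return a BucketWriter-like object for a new share.
            raise NotImplementedError
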
4643hunk ./src/allmydata/storage/server.py 242
4644-        if bucketwriters:
4645-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
4646-
4647         self.add_latency("allocate", time.time() - start)
4648         return alreadygot, bucketwriters
4649 
4650hunk ./src/allmydata/storage/server.py 245
4651-    def _iter_share_files(self, storage_index):
4652-        for shnum, filename in self._get_bucket_shares(storage_index):
4653-            f = open(filename, 'rb')
4654-            header = f.read(32)
4655-            f.close()
4656-            if header[:32] == MutableShareFile.MAGIC:
4657-                sf = MutableShareFile(filename, self)
4658-                # note: if the share has been migrated, the renew_lease()
4659-                # call will throw an exception, with information to help the
4660-                # client update the lease.
4661-            elif header[:4] == struct.pack(">L", 1):
4662-                sf = ShareFile(filename)
4663-            else:
4664-                continue # non-sharefile
4665-            yield sf
4666-
4667-    def remote_add_lease(self, storage_index, renew_secret, cancel_secret,
4668+    def remote_add_lease(self, storageindex, renew_secret, cancel_secret,
4669                          owner_num=1):
4670hunk ./src/allmydata/storage/server.py 247
4671+        # cancel_secret is no longer used.
4672         start = time.time()
4673         self.count("add-lease")
4674         new_expire_time = time.time() + 31*24*60*60
4675hunk ./src/allmydata/storage/server.py 251
4676-        lease_info = LeaseInfo(owner_num,
4677-                               renew_secret, cancel_secret,
4678-                               new_expire_time, self.my_nodeid)
4679-        for sf in self._iter_share_files(storage_index):
4680-            sf.add_or_renew_lease(lease_info)
4681-        self.add_latency("add-lease", time.time() - start)
4682-        return None
4683+        lease_info = LeaseInfo(owner_num, renew_secret,
4684+                               expiration_time=new_expire_time, nodeid=self._serverid)
4685 
4686hunk ./src/allmydata/storage/server.py 254
4687-    def remote_renew_lease(self, storage_index, renew_secret):
4688+        try:
4689+            self.backend.add_or_renew_lease(lease_info)
4690+        finally:
4691+            self.add_latency("add-lease", time.time() - start)
4692+
4693+    def remote_renew_lease(self, storageindex, renew_secret):
4694         start = time.time()
4695         self.count("renew")
4696hunk ./src/allmydata/storage/server.py 262
4697-        new_expire_time = time.time() + 31*24*60*60
4698-        found_buckets = False
4699-        for sf in self._iter_share_files(storage_index):
4700-            found_buckets = True
4701-            sf.renew_lease(renew_secret, new_expire_time)
4702-        self.add_latency("renew", time.time() - start)
4703-        if not found_buckets:
4704-            raise IndexError("no such lease to renew")
4705+
4706+        try:
4707+            shareset = self.backend.get_shareset(storageindex)
4708+            new_expiration_time = start + 31*24*60*60   # one month from now
4709+            shareset.renew_lease(renew_secret, new_expiration_time)
4710+        finally:
4711+            self.add_latency("renew", time.time() - start)
4712 
4713     def bucket_writer_closed(self, bw, consumed_size):
4714         if self.stats_provider:
4715hunk ./src/allmydata/storage/server.py 275
4716             self.stats_provider.count('storage_server.bytes_added', consumed_size)
4717         del self._active_writers[bw]
4718 
4719-    def _get_bucket_shares(self, storage_index):
4720-        """Return a list of (shnum, pathname) tuples for files that hold
4721-        shares for this storage_index. In each tuple, 'shnum' will always be
4722-        the integer form of the last component of 'pathname'."""
4723-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4724-        try:
4725-            for f in os.listdir(storagedir):
4726-                if NUM_RE.match(f):
4727-                    filename = os.path.join(storagedir, f)
4728-                    yield (int(f), filename)
4729-        except OSError:
4730-            # Commonly caused by there being no buckets at all.
4731-            pass
4732-
4733-    def remote_get_buckets(self, storage_index):
4734+    def remote_get_buckets(self, storageindex):
4735         start = time.time()
4736         self.count("get")
4737hunk ./src/allmydata/storage/server.py 278
4738-        si_s = si_b2a(storage_index)
4739+        si_s = si_b2a(storageindex)
4740         log.msg("storage: get_buckets %s" % si_s)
4741         bucketreaders = {} # k: sharenum, v: BucketReader
4742hunk ./src/allmydata/storage/server.py 281
4743-        for shnum, filename in self._get_bucket_shares(storage_index):
4744-            bucketreaders[shnum] = BucketReader(self, filename,
4745-                                                storage_index, shnum)
4746-        self.add_latency("get", time.time() - start)
4747-        return bucketreaders
4748 
4749hunk ./src/allmydata/storage/server.py 282
4750-    def get_leases(self, storage_index):
4751-        """Provide an iterator that yields all of the leases attached to this
4752-        bucket. Each lease is returned as a LeaseInfo instance.
4753+        try:
4754+            shareset = self.backend.get_shareset(storageindex)
4755+            for share in shareset.get_shares():
4756+                bucketreaders[share.get_shnum()] = shareset.make_bucket_reader(self, share)
4757+            return bucketreaders
4758+        finally:
4759+            self.add_latency("get", time.time() - start)
4760 
4761hunk ./src/allmydata/storage/server.py 290
4762-        This method is not for client use.
4763+    def get_leases(self, storageindex):
4764         """
4765hunk ./src/allmydata/storage/server.py 292
4766+        Provide an iterator that yields all of the leases attached to this
4767+        bucket. Each lease is returned as a LeaseInfo instance.
4768 
4769hunk ./src/allmydata/storage/server.py 295
4770-        # since all shares get the same lease data, we just grab the leases
4771-        # from the first share
4772-        try:
4773-            shnum, filename = self._get_bucket_shares(storage_index).next()
4774-            sf = ShareFile(filename)
4775-            return sf.get_leases()
4776-        except StopIteration:
4777-            return iter([])
4778+        This method is not for client use. XXX do we need it at all?
4779+        """
4780+        return self.backend.get_shareset(storageindex).get_leases()
4781 
4782hunk ./src/allmydata/storage/server.py 299
4783-    def remote_slot_testv_and_readv_and_writev(self, storage_index,
4784+    def remote_slot_testv_and_readv_and_writev(self, storageindex,
4785                                                secrets,
4786                                                test_and_write_vectors,
4787                                                read_vector):
4788hunk ./src/allmydata/storage/server.py 305
4789         start = time.time()
4790         self.count("writev")
4791-        si_s = si_b2a(storage_index)
4792+        si_s = si_b2a(storageindex)
4793         log.msg("storage: slot_writev %s" % si_s)
4794hunk ./src/allmydata/storage/server.py 307
4795-        si_dir = storage_index_to_dir(storage_index)
4796-        (write_enabler, renew_secret, cancel_secret) = secrets
4797-        # shares exist if there is a file for them
4798-        bucketdir = os.path.join(self.sharedir, si_dir)
4799-        shares = {}
4800-        if os.path.isdir(bucketdir):
4801-            for sharenum_s in os.listdir(bucketdir):
4802-                try:
4803-                    sharenum = int(sharenum_s)
4804-                except ValueError:
4805-                    continue
4806-                filename = os.path.join(bucketdir, sharenum_s)
4807-                msf = MutableShareFile(filename, self)
4808-                msf.check_write_enabler(write_enabler, si_s)
4809-                shares[sharenum] = msf
4810-        # write_enabler is good for all existing shares.
4811-
4812-        # Now evaluate test vectors.
4813-        testv_is_good = True
4814-        for sharenum in test_and_write_vectors:
4815-            (testv, datav, new_length) = test_and_write_vectors[sharenum]
4816-            if sharenum in shares:
4817-                if not shares[sharenum].check_testv(testv):
4818-                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
4819-                    testv_is_good = False
4820-                    break
4821-            else:
4822-                # compare the vectors against an empty share, in which all
4823-                # reads return empty strings.
4824-                if not EmptyShare().check_testv(testv):
4825-                    self.log("testv failed (empty): [%d] %r" % (sharenum,
4826-                                                                testv))
4827-                    testv_is_good = False
4828-                    break
4829-
4830-        # now gather the read vectors, before we do any writes
4831-        read_data = {}
4832-        for sharenum, share in shares.items():
4833-            read_data[sharenum] = share.readv(read_vector)
4834-
4835-        ownerid = 1 # TODO
4836-        expire_time = time.time() + 31*24*60*60   # one month
4837-        lease_info = LeaseInfo(ownerid,
4838-                               renew_secret, cancel_secret,
4839-                               expire_time, self.my_nodeid)
4840-
4841-        if testv_is_good:
4842-            # now apply the write vectors
4843-            for sharenum in test_and_write_vectors:
4844-                (testv, datav, new_length) = test_and_write_vectors[sharenum]
4845-                if new_length == 0:
4846-                    if sharenum in shares:
4847-                        shares[sharenum].unlink()
4848-                else:
4849-                    if sharenum not in shares:
4850-                        # allocate a new share
4851-                        allocated_size = 2000 # arbitrary, really
4852-                        share = self._allocate_slot_share(bucketdir, secrets,
4853-                                                          sharenum,
4854-                                                          allocated_size,
4855-                                                          owner_num=0)
4856-                        shares[sharenum] = share
4857-                    shares[sharenum].writev(datav, new_length)
4858-                    # and update the lease
4859-                    shares[sharenum].add_or_renew_lease(lease_info)
4860-
4861-            if new_length == 0:
4862-                # delete empty bucket directories
4863-                if not os.listdir(bucketdir):
4864-                    os.rmdir(bucketdir)
4865 
4866hunk ./src/allmydata/storage/server.py 308
4867+        try:
4868+            shareset = self.backend.get_shareset(storageindex)
4869+            expiration_time = start + 31*24*60*60   # one month from now
4870+            return shareset.testv_and_readv_and_writev(self, secrets, test_and_write_vectors,
4871+                                                       read_vector, expiration_time)
4872+        finally:
4873+            self.add_latency("writev", time.time() - start)
4874 
4875hunk ./src/allmydata/storage/server.py 316
4876-        # all done
4877-        self.add_latency("writev", time.time() - start)
4878-        return (testv_is_good, read_data)
4879-
4880-    def _allocate_slot_share(self, bucketdir, secrets, sharenum,
4881-                             allocated_size, owner_num=0):
4882-        (write_enabler, renew_secret, cancel_secret) = secrets
4883-        my_nodeid = self.my_nodeid
4884-        fileutil.make_dirs(bucketdir)
4885-        filename = os.path.join(bucketdir, "%d" % sharenum)
4886-        share = create_mutable_sharefile(filename, my_nodeid, write_enabler,
4887-                                         self)
4888-        return share
4889-
4890-    def remote_slot_readv(self, storage_index, shares, readv):
4891+    def remote_slot_readv(self, storageindex, shares, readv):
4892         start = time.time()
4893         self.count("readv")
4894hunk ./src/allmydata/storage/server.py 319
4895-        si_s = si_b2a(storage_index)
4896-        lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
4897-                     facility="tahoe.storage", level=log.OPERATIONAL)
4898-        si_dir = storage_index_to_dir(storage_index)
4899-        # shares exist if there is a file for them
4900-        bucketdir = os.path.join(self.sharedir, si_dir)
4901-        if not os.path.isdir(bucketdir):
4902+        si_s = si_b2a(storageindex)
4903+        log.msg("storage: slot_readv %s %s" % (si_s, shares),
4904+                facility="tahoe.storage", level=log.OPERATIONAL)
4905+
4906+        try:
4907+            shareset = self.backend.get_shareset(storageindex)
4908+            return shareset.readv(self, shares, readv)
4909+        finally:
4910             self.add_latency("readv", time.time() - start)
4911hunk ./src/allmydata/storage/server.py 328
4912-            return {}
4913-        datavs = {}
4914-        for sharenum_s in os.listdir(bucketdir):
4915-            try:
4916-                sharenum = int(sharenum_s)
4917-            except ValueError:
4918-                continue
4919-            if sharenum in shares or not shares:
4920-                filename = os.path.join(bucketdir, sharenum_s)
4921-                msf = MutableShareFile(filename, self)
4922-                datavs[sharenum] = msf.readv(readv)
4923-        log.msg("returning shares %s" % (datavs.keys(),),
4924-                facility="tahoe.storage", level=log.NOISY, parent=lp)
4925-        self.add_latency("readv", time.time() - start)
4926-        return datavs
4927 
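Likewise, readv becomes an iteration over the shareset's shares instead of a bucket-directory listing. A sketch, assuming get_shares() yields share objects with the get_shnum()/readv() accessors used by the test changes below:

    def _readv(shareset, wanted_shnums, read_vector):
        # an empty list of wanted share numbers means "all shares"
        datavs = {}
        for share in shareset.get_shares():
            shnum = share.get_shnum()
            if not wanted_shnums or shnum in wanted_shnums:
                datavs[shnum] = share.readv(read_vector)
        return datavs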
4928hunk ./src/allmydata/storage/server.py 329
4929-    def remote_advise_corrupt_share(self, share_type, storage_index, shnum,
4930-                                    reason):
4931-        fileutil.make_dirs(self.corruption_advisory_dir)
4932-        now = time_format.iso_utc(sep="T")
4933-        si_s = si_b2a(storage_index)
4934-        # windows can't handle colons in the filename
4935-        fn = os.path.join(self.corruption_advisory_dir,
4936-                          "%s--%s-%d" % (now, si_s, shnum)).replace(":","")
4937-        f = open(fn, "w")
4938-        f.write("report: Share Corruption\n")
4939-        f.write("type: %s\n" % share_type)
4940-        f.write("storage_index: %s\n" % si_s)
4941-        f.write("share_number: %d\n" % shnum)
4942-        f.write("\n")
4943-        f.write(reason)
4944-        f.write("\n")
4945-        f.close()
4946-        log.msg(format=("client claims corruption in (%(share_type)s) " +
4947-                        "%(si)s-%(shnum)d: %(reason)s"),
4948-                share_type=share_type, si=si_s, shnum=shnum, reason=reason,
4949-                level=log.SCARY, umid="SGx2fA")
4950-        return None
4951+    def remote_advise_corrupt_share(self, share_type, storageindex, shnum, reason):
4952+        self.backend.advise_corrupt_share(share_type, storageindex, shnum, reason)
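The corruption-advisory file written by the deleted code is now the backend's job. A FilePath-based sketch that keeps the same report format (the _corruption_advisory_dir attribute name and the free-standing form are assumptions of this sketch; the log.msg call is omitted):

    from allmydata.util import fileutil, time_format
    from allmydata.storage.common import si_b2a

    def advise_corrupt_share(backend, share_type, storageindex, shnum, reason):
        fileutil.fp_make_dirs(backend._corruption_advisory_dir)
        now = time_format.iso_utc(sep="T")
        si_s = si_b2a(storageindex)
        # windows can't handle colons in the filename
        name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "")
        f = backend._corruption_advisory_dir.child(name).open("w")
        try:
            f.write("report: Share Corruption\n")
            f.write("type: %s\n" % (share_type,))
            f.write("storage_index: %s\n" % (si_s,))
            f.write("share_number: %d\n\n%s\n" % (shnum, reason))
        finally:
            f.close()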
4953hunk ./src/allmydata/test/common.py 20
4954 from allmydata.mutable.common import CorruptShareError
4955 from allmydata.mutable.layout import unpack_header
4956 from allmydata.mutable.publish import MutableData
4957-from allmydata.storage.mutable import MutableShareFile
4958+from allmydata.storage.backends.disk.mutable import MutableDiskShare
4959 from allmydata.util import hashutil, log, fileutil, pollmixin
4960 from allmydata.util.assertutil import precondition
4961 from allmydata.util.consumer import download_to_data
4962hunk ./src/allmydata/test/common.py 1297
4963 
4964 def _corrupt_mutable_share_data(data, debug=False):
4965     prefix = data[:32]
4966-    assert prefix == MutableShareFile.MAGIC, "This function is designed to corrupt mutable shares of v1, and the magic number doesn't look right: %r vs %r" % (prefix, MutableShareFile.MAGIC)
4967-    data_offset = MutableShareFile.DATA_OFFSET
4968+    assert prefix == MutableDiskShare.MAGIC, "This function is designed to corrupt mutable shares of v1, and the magic number doesn't look right: %r vs %r" % (prefix, MutableDiskShare.MAGIC)
4969+    data_offset = MutableDiskShare.DATA_OFFSET
4970     sharetype = data[data_offset:data_offset+1]
4971     assert sharetype == "\x00", "non-SDMF mutable shares not supported"
4972     (version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize,
4973hunk ./src/allmydata/test/no_network.py 21
4974 from twisted.application import service
4975 from twisted.internet import defer, reactor
4976 from twisted.python.failure import Failure
4977+from twisted.python.filepath import FilePath
4978 from foolscap.api import Referenceable, fireEventually, RemoteException
4979 from base64 import b32encode
4980hunk ./src/allmydata/test/no_network.py 24
4981+
4982 from allmydata import uri as tahoe_uri
4983 from allmydata.client import Client
4984hunk ./src/allmydata/test/no_network.py 27
4985-from allmydata.storage.server import StorageServer, storage_index_to_dir
4986+from allmydata.storage.server import StorageServer
4987+from allmydata.storage.backends.disk.disk_backend import DiskBackend
4988 from allmydata.util import fileutil, idlib, hashutil
4989 from allmydata.util.hashutil import sha1
4990 from allmydata.test.common_web import HTTPClientGETFactory
4991hunk ./src/allmydata/test/no_network.py 155
4992             seed = server.get_permutation_seed()
4993             return sha1(peer_selection_index + seed).digest()
4994         return sorted(self.get_connected_servers(), key=_permuted)
4995+
4996     def get_connected_servers(self):
4997         return self.client._servers
4998hunk ./src/allmydata/test/no_network.py 158
4999+
5000     def get_nickname_for_serverid(self, serverid):
5001         return None
5002 
5003hunk ./src/allmydata/test/no_network.py 162
5004+    def get_known_servers(self):
5005+        return self.get_connected_servers()
5006+
5007+    def get_all_serverids(self):
5008+        return self.client.get_all_serverids()
5009+
5010+
5011 class NoNetworkClient(Client):
5012     def create_tub(self):
5013         pass
5014hunk ./src/allmydata/test/no_network.py 262
5015 
5016     def make_server(self, i, readonly=False):
5017         serverid = hashutil.tagged_hash("serverid", str(i))[:20]
5018-        serverdir = os.path.join(self.basedir, "servers",
5019-                                 idlib.shortnodeid_b2a(serverid), "storage")
5020-        fileutil.make_dirs(serverdir)
5021-        ss = StorageServer(serverdir, serverid, stats_provider=SimpleStats(),
5022-                           readonly_storage=readonly)
5023+        storagedir = FilePath(self.basedir).child("servers").child(idlib.shortnodeid_b2a(serverid)).child("storage")
5024+
5025+        # The backend will make the storage directory and any necessary parents.
5026+        backend = DiskBackend(storagedir, readonly=readonly)
5027+        ss = StorageServer(serverid, backend, storagedir, stats_provider=SimpleStats())
5028         ss._no_network_server_number = i
5029         return ss
5030 
5031hunk ./src/allmydata/test/no_network.py 276
5032         middleman = service.MultiService()
5033         middleman.setServiceParent(self)
5034         ss.setServiceParent(middleman)
5035-        serverid = ss.my_nodeid
5036+        serverid = ss.get_serverid()
5037         self.servers_by_number[i] = ss
5038         wrapper = wrap_storage_server(ss)
5039         self.wrappers_by_id[serverid] = wrapper
5040hunk ./src/allmydata/test/no_network.py 295
5041         # it's enough to remove the server from c._servers (we don't actually
5042         # have to detach and stopService it)
5043         for i,ss in self.servers_by_number.items():
5044-            if ss.my_nodeid == serverid:
5045+            if ss.get_serverid() == serverid:
5046                 del self.servers_by_number[i]
5047                 break
5048         del self.wrappers_by_id[serverid]
5049hunk ./src/allmydata/test/no_network.py 345
5050     def get_clientdir(self, i=0):
5051         return self.g.clients[i].basedir
5052 
5053+    def get_server(self, i):
5054+        return self.g.servers_by_number[i]
5055+
5056     def get_serverdir(self, i):
5057hunk ./src/allmydata/test/no_network.py 349
5058-        return self.g.servers_by_number[i].storedir
5059+        return self.g.servers_by_number[i].backend.storedir
5060+
5061+    def remove_server(self, i):
5062+        self.g.remove_server(self.g.servers_by_number[i].get_serverid())
5063 
5064     def iterate_servers(self):
5065         for i in sorted(self.g.servers_by_number.keys()):
5066hunk ./src/allmydata/test/no_network.py 357
5067             ss = self.g.servers_by_number[i]
5068-            yield (i, ss, ss.storedir)
5069+            yield (i, ss, ss.backend.storedir)
5070 
5071     def find_uri_shares(self, uri):
5072         si = tahoe_uri.from_string(uri).get_storage_index()
5073hunk ./src/allmydata/test/no_network.py 361
5074-        prefixdir = storage_index_to_dir(si)
5075         shares = []
5076         for i,ss in self.g.servers_by_number.items():
5077hunk ./src/allmydata/test/no_network.py 363
5078-            serverid = ss.my_nodeid
5079-            basedir = os.path.join(ss.sharedir, prefixdir)
5080-            if not os.path.exists(basedir):
5081-                continue
5082-            for f in os.listdir(basedir):
5083-                try:
5084-                    shnum = int(f)
5085-                    shares.append((shnum, serverid, os.path.join(basedir, f)))
5086-                except ValueError:
5087-                    pass
5088+            for share in ss.backend.get_shareset(si).get_shares():
5089+                shares.append((share.get_shnum(), ss.get_serverid(), share._home))
5090         return sorted(shares)
5091 
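find_uri_shares now returns (shnum, serverid, FilePath) triples rather than path strings, so callers read and write share contents through the FilePath API, e.g. (a sketch; some_uri stands for any cap string):

    # read the bytes of share 0, using the new triple format
    for (shnum, serverid, sharefp) in self.find_uri_shares(some_uri):
        if shnum == 0:
            sh0_data = sharefp.getContent()   # replaces open(path, "rb").read()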
5092hunk ./src/allmydata/test/no_network.py 367
5093+    def count_leases(self, uri):
5094+        """Return (filename, leasecount) pairs in arbitrary order."""
5095+        si = tahoe_uri.from_string(uri).get_storage_index()
5096+        lease_counts = []
5097+        for i,ss in self.g.servers_by_number.items():
5098+            for share in ss.backend.get_shareset(si).get_shares():
5099+                num_leases = len(list(share.get_leases()))
5100+                lease_counts.append( (share._home.path, num_leases) )
5101+        return lease_counts
5102+
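A typical use of count_leases in a test, assuming exactly one lease per share after a fresh upload:

    for (sharepath, leasecount) in self.count_leases(self.uri):
        self.failUnlessEqual(leasecount, 1)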
5103     def copy_shares(self, uri):
5104         shares = {}
5105hunk ./src/allmydata/test/no_network.py 379
5106-        for (shnum, serverid, sharefile) in self.find_uri_shares(uri):
5107-            shares[sharefile] = open(sharefile, "rb").read()
5108+        for (shnum, serverid, sharefp) in self.find_uri_shares(uri):
5109+            shares[sharefp.path] = sharefp.getContent()
5110         return shares
5111 
5112hunk ./src/allmydata/test/no_network.py 383
5113+    def copy_share(self, from_share, uri, to_server):
5114+        si = tahoe_uri.from_string(uri).get_storage_index()
5115+        (i_shnum, i_serverid, i_sharefp) = from_share
5116+        shares_dir = to_server.backend.get_shareset(si)._sharehomedir
5117+        i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
5118+
5119     def restore_all_shares(self, shares):
5120hunk ./src/allmydata/test/no_network.py 390
5121-        for sharefile, data in shares.items():
5122-            open(sharefile, "wb").write(data)
5123+        for sharepath, data in shares.items():
5124+            FilePath(sharepath).setContent(data)
5125 
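copy_shares keys its snapshot by path string, so restore_all_shares rebuilds a FilePath from each key before rewriting. A typical round trip (sketch; _flip_bit stands for any corruptor, as sketched below):

    saved = self.copy_shares(self.uri)            # {sharepath: bytes}
    self.corrupt_all_shares(self.uri, _flip_bit)  # damage every share
    self.restore_all_shares(saved)                # put the originals back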
5126hunk ./src/allmydata/test/no_network.py 393
5127-    def delete_share(self, (shnum, serverid, sharefile)):
5128-        os.unlink(sharefile)
5129+    def delete_share(self, (shnum, serverid, sharefp)):
5130+        sharefp.remove()
5131 
5132     def delete_shares_numbered(self, uri, shnums):
5133hunk ./src/allmydata/test/no_network.py 397
5134-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5135+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5136             if i_shnum in shnums:
5137hunk ./src/allmydata/test/no_network.py 399
5138-                os.unlink(i_sharefile)
5139+                i_sharefp.remove()
5140 
5141hunk ./src/allmydata/test/no_network.py 401
5142-    def corrupt_share(self, (shnum, serverid, sharefile), corruptor_function):
5143-        sharedata = open(sharefile, "rb").read()
5144-        corruptdata = corruptor_function(sharedata)
5145-        open(sharefile, "wb").write(corruptdata)
5146+    def corrupt_share(self, (shnum, serverid, sharefp), corruptor_function, debug=False):
5147+        sharedata = sharefp.getContent()
5148+        corruptdata = corruptor_function(sharedata, debug=debug)
5149+        sharefp.setContent(corruptdata)
5150 
5151     def corrupt_shares_numbered(self, uri, shnums, corruptor, debug=False):
5152hunk ./src/allmydata/test/no_network.py 407
5153-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5154+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5155             if i_shnum in shnums:
5156hunk ./src/allmydata/test/no_network.py 409
5157-                sharedata = open(i_sharefile, "rb").read()
5158-                corruptdata = corruptor(sharedata, debug=debug)
5159-                open(i_sharefile, "wb").write(corruptdata)
5160+                self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug)
5161 
5162     def corrupt_all_shares(self, uri, corruptor, debug=False):
5163hunk ./src/allmydata/test/no_network.py 412
5164-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
5165-            sharedata = open(i_sharefile, "rb").read()
5166-            corruptdata = corruptor(sharedata, debug=debug)
5167-            open(i_sharefile, "wb").write(corruptdata)
5168+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
5169+            self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug)
5170 
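Corruptor functions passed to corrupt_share, corrupt_shares_numbered and corrupt_all_shares follow a (data, debug=False) -> data convention. A minimal example that flips one bit (a sketch, not part of the patch):

    def _flip_bit(data, debug=False):
        # flip the low bit of the last byte; enough to break an integrity check
        last = len(data) - 1
        return data[:last] + chr(ord(data[last]) ^ 0x01)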
5171     def GET(self, urlpath, followRedirect=False, return_response=False,
5172             method="GET", clientnum=0, **kwargs):
5173hunk ./src/allmydata/test/test_download.py 6
5174 # a previous run. This asserts that the current code is capable of decoding
5175 # shares from a previous version.
5176 
5177-import os
5178 from twisted.trial import unittest
5179 from twisted.internet import defer, reactor
5180 from allmydata import uri
5181hunk ./src/allmydata/test/test_download.py 9
5182-from allmydata.storage.server import storage_index_to_dir
5183 from allmydata.util import base32, fileutil, spans, log, hashutil
5184 from allmydata.util.consumer import download_to_data, MemoryConsumer
5185 from allmydata.immutable import upload, layout
5186hunk ./src/allmydata/test/test_download.py 85
5187         u = upload.Data(plaintext, None)
5188         d = self.c0.upload(u)
5189         f = open("stored_shares.py", "w")
5190-        def _created_immutable(ur):
5191-            # write the generated shares and URI to a file, which can then be
5192-            # incorporated into this one next time.
5193-            f.write('immutable_uri = "%s"\n' % ur.uri)
5194-            f.write('immutable_shares = {\n')
5195-            si = uri.from_string(ur.uri).get_storage_index()
5196-            si_dir = storage_index_to_dir(si)
5197+
5198+        def _write_py(u):
5199+            si = uri.from_string(u).get_storage_index()
5200             for (i,ss,ssdir) in self.iterate_servers():
5201hunk ./src/allmydata/test/test_download.py 89
5202-                sharedir = os.path.join(ssdir, "shares", si_dir)
5203                 shares = {}
5204hunk ./src/allmydata/test/test_download.py 90
5205-                for fn in os.listdir(sharedir):
5206-                    shnum = int(fn)
5207-                    sharedata = open(os.path.join(sharedir, fn), "rb").read()
5208-                    shares[shnum] = sharedata
5209-                fileutil.rm_dir(sharedir)
5210+                shareset = ss.backend.get_shareset(si)
5211+                for share in shareset.get_shares():
5212+                    sharedata = share._home.getContent()
5213+                    shares[share.get_shnum()] = sharedata
5214+
5215+                fileutil.fp_remove(shareset._sharehomedir)
5216                 if shares:
5217                     f.write(' %d: { # client[%d]\n' % (i, i))
5218                     for shnum in sorted(shares.keys()):
5219hunk ./src/allmydata/test/test_download.py 103
5220                                 (shnum, base32.b2a(shares[shnum])))
5221                     f.write('    },\n')
5222             f.write('}\n')
5223-            f.write('\n')
5224 
5225hunk ./src/allmydata/test/test_download.py 104
5226+        def _created_immutable(ur):
5227+            # write the generated shares and URI to a file, which can then be
5228+            # incorporated into this one next time.
5229+            f.write('immutable_uri = "%s"\n' % ur.uri)
5230+            f.write('immutable_shares = {\n')
5231+            _write_py(ur.uri)
5232+            f.write('\n')
5233         d.addCallback(_created_immutable)
5234 
5235         d.addCallback(lambda ignored:
5236hunk ./src/allmydata/test/test_download.py 118
5237         def _created_mutable(n):
5238             f.write('mutable_uri = "%s"\n' % n.get_uri())
5239             f.write('mutable_shares = {\n')
5240-            si = uri.from_string(n.get_uri()).get_storage_index()
5241-            si_dir = storage_index_to_dir(si)
5242-            for (i,ss,ssdir) in self.iterate_servers():
5243-                sharedir = os.path.join(ssdir, "shares", si_dir)
5244-                shares = {}
5245-                for fn in os.listdir(sharedir):
5246-                    shnum = int(fn)
5247-                    sharedata = open(os.path.join(sharedir, fn), "rb").read()
5248-                    shares[shnum] = sharedata
5249-                fileutil.rm_dir(sharedir)
5250-                if shares:
5251-                    f.write(' %d: { # client[%d]\n' % (i, i))
5252-                    for shnum in sorted(shares.keys()):
5253-                        f.write('  %d: base32.a2b("%s"),\n' %
5254-                                (shnum, base32.b2a(shares[shnum])))
5255-                    f.write('    },\n')
5256-            f.write('}\n')
5257-
5258-            f.close()
5259+            _write_py(n.get_uri())
5260         d.addCallback(_created_mutable)
5261 
5262         def _done(ignored):
5263hunk ./src/allmydata/test/test_download.py 123
5264             f.close()
5265-        d.addCallback(_done)
5266+        d.addBoth(_done)
5267 
5268         return d
5269 
5270hunk ./src/allmydata/test/test_download.py 127
5271+    def _write_shares(self, u, shares):
5272+        si = uri.from_string(u).get_storage_index()
5273+        for i in shares:
5274+            shares_for_server = shares[i]
5275+            for shnum in shares_for_server:
5276+                share_dir = self.get_server(i).backend.get_shareset(si)._sharehomedir
5277+                fileutil.fp_make_dirs(share_dir)
5278+                share_dir.child(str(shnum)).setContent(shares_for_server[shnum])
5279+
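fileutil.fp_make_dirs and fileutil.fp_remove, used here and above, are the FilePath analogues of make_dirs and rm_dir. Written from scratch they would amount to roughly this (a sketch, not the patch's own definitions):

    from errno import EEXIST, ENOENT

    def fp_make_dirs(dirfp):
        # succeed if the directory already exists
        try:
            dirfp.makedirs()
        except OSError, e:
            if e.errno != EEXIST:
                raise

    def fp_remove(fp):
        # remove a file or directory tree; succeed if it is already gone
        try:
            fp.remove()
        except OSError, e:
            if e.errno != ENOENT:
                raise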
5280     def load_shares(self, ignored=None):
5281         # this uses the data generated by create_shares() to populate the
5282         # storage servers with pre-generated shares
5283hunk ./src/allmydata/test/test_download.py 139
5284-        si = uri.from_string(immutable_uri).get_storage_index()
5285-        si_dir = storage_index_to_dir(si)
5286-        for i in immutable_shares:
5287-            shares = immutable_shares[i]
5288-            for shnum in shares:
5289-                dn = os.path.join(self.get_serverdir(i), "shares", si_dir)
5290-                fileutil.make_dirs(dn)
5291-                fn = os.path.join(dn, str(shnum))
5292-                f = open(fn, "wb")
5293-                f.write(shares[shnum])
5294-                f.close()
5295-
5296-        si = uri.from_string(mutable_uri).get_storage_index()
5297-        si_dir = storage_index_to_dir(si)
5298-        for i in mutable_shares:
5299-            shares = mutable_shares[i]
5300-            for shnum in shares:
5301-                dn = os.path.join(self.get_serverdir(i), "shares", si_dir)
5302-                fileutil.make_dirs(dn)
5303-                fn = os.path.join(dn, str(shnum))
5304-                f = open(fn, "wb")
5305-                f.write(shares[shnum])
5306-                f.close()
5307+        self._write_shares(immutable_uri, immutable_shares)
5308+        self._write_shares(mutable_uri, mutable_shares)
5309 
5310     def download_immutable(self, ignored=None):
5311         n = self.c0.create_node_from_uri(immutable_uri)
5312hunk ./src/allmydata/test/test_download.py 183
5313 
5314         self.load_shares()
5315         si = uri.from_string(immutable_uri).get_storage_index()
5316-        si_dir = storage_index_to_dir(si)
5317 
5318         n = self.c0.create_node_from_uri(immutable_uri)
5319         d = download_to_data(n)
5320hunk ./src/allmydata/test/test_download.py 198
5321                 for clientnum in immutable_shares:
5322                     for shnum in immutable_shares[clientnum]:
5323                         if s._shnum == shnum:
5324-                            fn = os.path.join(self.get_serverdir(clientnum),
5325-                                              "shares", si_dir, str(shnum))
5326-                            os.unlink(fn)
5327+                            share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5328+                            share_dir.child(str(shnum)).remove()
5329         d.addCallback(_clobber_some_shares)
5330         d.addCallback(lambda ign: download_to_data(n))
5331         d.addCallback(_got_data)
5332hunk ./src/allmydata/test/test_download.py 212
5333                 for shnum in immutable_shares[clientnum]:
5334                     if shnum == save_me:
5335                         continue
5336-                    fn = os.path.join(self.get_serverdir(clientnum),
5337-                                      "shares", si_dir, str(shnum))
5338-                    if os.path.exists(fn):
5339-                        os.unlink(fn)
5340+                    share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5341+                    fileutil.fp_remove(share_dir.child(str(shnum)))
5342             # now the download should fail with NotEnoughSharesError
5343             return self.shouldFail(NotEnoughSharesError, "1shares", None,
5344                                    download_to_data, n)
5345hunk ./src/allmydata/test/test_download.py 223
5346             # delete the last remaining share
5347             for clientnum in immutable_shares:
5348                 for shnum in immutable_shares[clientnum]:
5349-                    fn = os.path.join(self.get_serverdir(clientnum),
5350-                                      "shares", si_dir, str(shnum))
5351-                    if os.path.exists(fn):
5352-                        os.unlink(fn)
5353+                    share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5354+                    share_dir.child(str(shnum)).remove()
5355             # now a new download should fail with NoSharesError. We want a
5356             # new ImmutableFileNode so it will forget about the old shares.
5357             # If we merely called create_node_from_uri() without first
5358hunk ./src/allmydata/test/test_download.py 801
5359         # will report two shares, and the ShareFinder will handle the
5360         # duplicate by attaching both to the same CommonShare instance.
5361         si = uri.from_string(immutable_uri).get_storage_index()
5362-        si_dir = storage_index_to_dir(si)
5363-        sh0_file = [sharefile
5364-                    for (shnum, serverid, sharefile)
5365-                    in self.find_uri_shares(immutable_uri)
5366-                    if shnum == 0][0]
5367-        sh0_data = open(sh0_file, "rb").read()
5368+        sh0_fp = [sharefp for (shnum, serverid, sharefp)
5369+                          in self.find_uri_shares(immutable_uri)
5370+                          if shnum == 0][0]
5371+        sh0_data = sh0_fp.getContent()
5372         for clientnum in immutable_shares:
5373             if 0 in immutable_shares[clientnum]:
5374                 continue
5375hunk ./src/allmydata/test/test_download.py 808
5376-            cdir = self.get_serverdir(clientnum)
5377-            target = os.path.join(cdir, "shares", si_dir, "0")
5378-            outf = open(target, "wb")
5379-            outf.write(sh0_data)
5380-            outf.close()
5381+            cdir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
5382+            fileutil.fp_make_dirs(cdir)
5383+            cdir.child("0").setContent(sh0_data)
5384 
5385         d = self.download_immutable()
5386         return d
5387hunk ./src/allmydata/test/test_encode.py 134
5388         d.addCallback(_try)
5389         return d
5390 
5391-    def get_share_hashes(self, at_least_these=()):
5392+    def get_share_hashes(self):
5393         d = self._start()
5394         def _try(unused=None):
5395             if self.mode == "bad sharehash":
5396hunk ./src/allmydata/test/test_hung_server.py 3
5397 # -*- coding: utf-8 -*-
5398 
5399-import os, shutil
5400 from twisted.trial import unittest
5401 from twisted.internet import defer
5402hunk ./src/allmydata/test/test_hung_server.py 5
5403-from allmydata import uri
5404+
5405 from allmydata.util.consumer import download_to_data
5406 from allmydata.immutable import upload
5407 from allmydata.mutable.common import UnrecoverableFileError
5408hunk ./src/allmydata/test/test_hung_server.py 10
5409 from allmydata.mutable.publish import MutableData
5410-from allmydata.storage.common import storage_index_to_dir
5411 from allmydata.test.no_network import GridTestMixin
5412 from allmydata.test.common import ShouldFailMixin
5413 from allmydata.util.pollmixin import PollMixin
5414hunk ./src/allmydata/test/test_hung_server.py 18
5415 immutable_plaintext = "data" * 10000
5416 mutable_plaintext = "muta" * 10000
5417 
5418+
5419 class HungServerDownloadTest(GridTestMixin, ShouldFailMixin, PollMixin,
5420                              unittest.TestCase):
5421     # Many of these tests take around 60 seconds on François's ARM buildslave:
5422hunk ./src/allmydata/test/test_hung_server.py 31
5423     timeout = 240
5424 
5425     def _break(self, servers):
5426-        for (id, ss) in servers:
5427-            self.g.break_server(id)
5428+        for ss in servers:
5429+            self.g.break_server(ss.get_serverid())
5430 
5431     def _hang(self, servers, **kwargs):
5432hunk ./src/allmydata/test/test_hung_server.py 35
5433-        for (id, ss) in servers:
5434-            self.g.hang_server(id, **kwargs)
5435+        for ss in servers:
5436+            self.g.hang_server(ss.get_serverid(), **kwargs)
5437 
5438     def _unhang(self, servers, **kwargs):
5439hunk ./src/allmydata/test/test_hung_server.py 39
5440-        for (id, ss) in servers:
5441-            self.g.unhang_server(id, **kwargs)
5442+        for ss in servers:
5443+            self.g.unhang_server(ss.get_serverid(), **kwargs)
5444 
5445     def _hang_shares(self, shnums, **kwargs):
5446         # hang all servers who are holding the given shares
5447hunk ./src/allmydata/test/test_hung_server.py 52
5448                     hung_serverids.add(i_serverid)
5449 
5450     def _delete_all_shares_from(self, servers):
5451-        serverids = [id for (id, ss) in servers]
5452-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5453+        serverids = [ss.get_serverid() for ss in servers]
5454+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5455             if i_serverid in serverids:
5456hunk ./src/allmydata/test/test_hung_server.py 55
5457-                os.unlink(i_sharefile)
5458+                i_sharefp.remove()
5459 
5460     def _corrupt_all_shares_in(self, servers, corruptor_func):
5461hunk ./src/allmydata/test/test_hung_server.py 58
5462-        serverids = [id for (id, ss) in servers]
5463-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5464+        serverids = [ss.get_serverid() for ss in servers]
5465+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5466             if i_serverid in serverids:
5467hunk ./src/allmydata/test/test_hung_server.py 61
5468-                self._corrupt_share((i_shnum, i_sharefile), corruptor_func)
5469+                self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor_func)
5470 
5471     def _copy_all_shares_from(self, from_servers, to_server):
5472hunk ./src/allmydata/test/test_hung_server.py 64
5473-        serverids = [id for (id, ss) in from_servers]
5474-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
5475+        serverids = [ss.get_serverid() for ss in from_servers]
5476+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
5477             if i_serverid in serverids:
5478hunk ./src/allmydata/test/test_hung_server.py 67
5479-                self._copy_share((i_shnum, i_sharefile), to_server)
5480+                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
5481 
5482hunk ./src/allmydata/test/test_hung_server.py 69
5483-    def _copy_share(self, share, to_server):
5484-        (sharenum, sharefile) = share
5485-        (id, ss) = to_server
5486-        shares_dir = os.path.join(ss.original.storedir, "shares")
5487-        si = uri.from_string(self.uri).get_storage_index()
5488-        si_dir = os.path.join(shares_dir, storage_index_to_dir(si))
5489-        if not os.path.exists(si_dir):
5490-            os.makedirs(si_dir)
5491-        new_sharefile = os.path.join(si_dir, str(sharenum))
5492-        shutil.copy(sharefile, new_sharefile)
5493         self.shares = self.find_uri_shares(self.uri)
5494hunk ./src/allmydata/test/test_hung_server.py 70
5495-        # Make sure that the storage server has the share.
5496-        self.failUnless((sharenum, ss.original.my_nodeid, new_sharefile)
5497-                        in self.shares)
5498-
5499-    def _corrupt_share(self, share, corruptor_func):
5500-        (sharenum, sharefile) = share
5501-        data = open(sharefile, "rb").read()
5502-        newdata = corruptor_func(data)
5503-        os.unlink(sharefile)
5504-        wf = open(sharefile, "wb")
5505-        wf.write(newdata)
5506-        wf.close()
5507 
5508     def _set_up(self, mutable, testdir, num_clients=1, num_servers=10):
5509         self.mutable = mutable
5510hunk ./src/allmydata/test/test_hung_server.py 82
5511 
5512         self.c0 = self.g.clients[0]
5513         nm = self.c0.nodemaker
5514-        self.servers = sorted([(s.get_serverid(), s.get_rref())
5515-                               for s in nm.storage_broker.get_connected_servers()])
5516+        unsorted = [(s.get_serverid(), s.get_rref()) for s in nm.storage_broker.get_connected_servers()]
5517+        self.servers = [ss for (id, ss) in sorted(unsorted)]
5518         self.servers = self.servers[5:] + self.servers[:5]
5519 
5520         if mutable:
5521hunk ./src/allmydata/test/test_hung_server.py 244
5522             # stuck-but-not-overdue, and 4 live requests. All 4 live requests
5523             # will retire before the download is complete and the ShareFinder
5524             # is shut off. That will leave 4 OVERDUE and 1
5525-            # stuck-but-not-overdue, for a total of 5 requests in in
5526+            # stuck-but-not-overdue, for a total of 5 requests in
5527             # _sf.pending_requests
5528             for t in self._sf.overdue_timers.values()[:4]:
5529                 t.reset(-1.0)
5530hunk ./src/allmydata/test/test_mutable.py 21
5531 from foolscap.api import eventually, fireEventually
5532 from foolscap.logging import log
5533 from allmydata.storage_client import StorageFarmBroker
5534-from allmydata.storage.common import storage_index_to_dir
5535 from allmydata.scripts import debug
5536 
5537 from allmydata.mutable.filenode import MutableFileNode, BackoffAgent
5538hunk ./src/allmydata/test/test_mutable.py 3669
5539         # Now execute each assignment by writing the storage.
5540         for (share, servernum) in assignments:
5541             sharedata = base64.b64decode(self.sdmf_old_shares[share])
5542-            storedir = self.get_serverdir(servernum)
5543-            storage_path = os.path.join(storedir, "shares",
5544-                                        storage_index_to_dir(si))
5545-            fileutil.make_dirs(storage_path)
5546-            fileutil.write(os.path.join(storage_path, "%d" % share),
5547-                           sharedata)
5548+            storage_dir = self.get_server(servernum).backend.get_shareset(si)._sharehomedir
5549+            fileutil.fp_make_dirs(storage_dir)
5550+            storage_dir.child("%d" % share).setContent(sharedata)
5551         # ...and verify that the shares are there.
5552         shares = self.find_uri_shares(self.sdmf_old_cap)
5553         assert len(shares) == 10
5554hunk ./src/allmydata/test/test_provisioning.py 13
5555 from nevow import inevow
5556 from zope.interface import implements
5557 
5558-class MyRequest:
5559+class MockRequest:
5560     implements(inevow.IRequest)
5561     pass
5562 
5563hunk ./src/allmydata/test/test_provisioning.py 26
5564     def test_load(self):
5565         pt = provisioning.ProvisioningTool()
5566         self.fields = {}
5567-        #r = MyRequest()
5568+        #r = MockRequest()
5569         #r.fields = self.fields
5570         #ctx = RequestContext()
5571         #unfilled = pt.renderSynchronously(ctx)
5572hunk ./src/allmydata/test/test_repairer.py 537
5573         # happiness setting.
5574         def _delete_some_servers(ignored):
5575             for i in xrange(7):
5576-                self.g.remove_server(self.g.servers_by_number[i].my_nodeid)
5577+                self.remove_server(i)
5578 
5579             assert len(self.g.servers_by_number) == 3
5580 
5581hunk ./src/allmydata/test/test_storage.py 14
5582 from allmydata import interfaces
5583 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
5584 from allmydata.storage.server import StorageServer
5585-from allmydata.storage.mutable import MutableShareFile
5586-from allmydata.storage.immutable import BucketWriter, BucketReader
5587-from allmydata.storage.common import DataTooLargeError, storage_index_to_dir, \
5588+from allmydata.storage.backends.disk.mutable import MutableDiskShare
5589+from allmydata.storage.bucket import BucketWriter, BucketReader
5590+from allmydata.storage.common import DataTooLargeError, \
5591      UnknownMutableContainerVersionError, UnknownImmutableContainerVersionError
5592 from allmydata.storage.lease import LeaseInfo
5593 from allmydata.storage.crawler import BucketCountingCrawler
5594hunk ./src/allmydata/test/test_storage.py 474
5595         w[0].remote_write(0, "\xff"*10)
5596         w[0].remote_close()
5597 
5598-        fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0")
5599-        f = open(fn, "rb+")
5600+        fp = ss.backend.get_shareset("si1")._sharehomedir.child("0")
5601+        f = fp.open("rb+")
5602         f.seek(0)
5603         f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1
5604         f.close()
5605hunk ./src/allmydata/test/test_storage.py 814
5606     def test_bad_magic(self):
5607         ss = self.create("test_bad_magic")
5608         self.allocate(ss, "si1", "we1", self._lease_secret.next(), set([0]), 10)
5609-        fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0")
5610-        f = open(fn, "rb+")
5611+        fp = ss.backend.get_shareset("si1")._sharehomedir.child("0")
5612+        f = fp.open("rb+")
5613         f.seek(0)
5614         f.write("BAD MAGIC")
5615         f.close()
5616hunk ./src/allmydata/test/test_storage.py 842
5617 
5618         # Trying to make the container too large (by sending a write vector
5619         # whose offset is too high) will raise an exception.
5620-        TOOBIG = MutableShareFile.MAX_SIZE + 10
5621+        TOOBIG = MutableDiskShare.MAX_SIZE + 10
5622         self.failUnlessRaises(DataTooLargeError,
5623                               rstaraw, "si1", secrets,
5624                               {0: ([], [(TOOBIG,data)], None)},
5625hunk ./src/allmydata/test/test_storage.py 1229
5626 
5627         # create a random non-numeric file in the bucket directory, to
5628         # exercise the code that's supposed to ignore those.
5629-        bucket_dir = os.path.join(self.workdir("test_leases"),
5630-                                  "shares", storage_index_to_dir("si1"))
5631-        f = open(os.path.join(bucket_dir, "ignore_me.txt"), "w")
5632-        f.write("you ought to be ignoring me\n")
5633-        f.close()
5634+        bucket_dir = ss.backend.get_shareset("si1")._sharehomedir
5635+        bucket_dir.child("ignore_me.txt").setContent("you ought to be ignoring me\n")
5636 
5637hunk ./src/allmydata/test/test_storage.py 1232
5638-        s0 = MutableShareFile(os.path.join(bucket_dir, "0"))
5639+        s0 = MutableDiskShare(bucket_dir.child("0"))
5640         self.failUnlessEqual(len(list(s0.get_leases())), 1)
5641 
5642         # add-lease on a missing storage index is silently ignored
5643hunk ./src/allmydata/test/test_storage.py 3118
5644         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
5645 
5646         # add a non-sharefile to exercise another code path
5647-        fn = os.path.join(ss.sharedir,
5648-                          storage_index_to_dir(immutable_si_0),
5649-                          "not-a-share")
5650-        f = open(fn, "wb")
5651-        f.write("I am not a share.\n")
5652-        f.close()
5653+        fp = ss.backend.get_shareset(immutable_si_0)._sharehomedir.child("not-a-share")
5654+        fp.setContent("I am not a share.\n")
5655 
5656         # this is before the crawl has started, so we're not in a cycle yet
5657         initial_state = lc.get_state()
5658hunk ./src/allmydata/test/test_storage.py 3282
5659     def test_expire_age(self):
5660         basedir = "storage/LeaseCrawler/expire_age"
5661         fileutil.make_dirs(basedir)
5662-        # setting expiration_time to 2000 means that any lease which is more
5663-        # than 2000s old will be expired.
5664-        ss = InstrumentedStorageServer(basedir, "\x00" * 20,
5665-                                       expiration_enabled=True,
5666-                                       expiration_mode="age",
5667-                                       expiration_override_lease_duration=2000)
5668+        # setting 'override_lease_duration' to 2000 means that any lease that
5669+        # is more than 2000 seconds old will be expired.
5670+        expiration_policy = {
5671+            'enabled': True,
5672+            'mode': 'age',
5673+            'override_lease_duration': 2000,
5674+            'sharetypes': ('mutable', 'immutable'),
5675+        }
5676+        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
5677         # make it start sooner than usual.
5678         lc = ss.lease_checker
5679         lc.slow_start = 0
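Throughout these tests, the old expiration_* keyword arguments collapse into a single expiration_policy dict. The keys used in this patch are 'enabled', 'mode' ('age' or 'cutoff-date'), 'override_lease_duration' (age mode) or 'cutoff_date' (cutoff-date mode), and 'sharetypes'. For example, the cutoff-date form used below:

    import time

    # expire any lease older than 2000 seconds, on immutable shares only
    expiration_policy = {
        'enabled': True,
        'mode': 'cutoff-date',
        'cutoff_date': int(time.time() - 2000),
        'sharetypes': ('immutable',),
    }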
5680hunk ./src/allmydata/test/test_storage.py 3423
5681     def test_expire_cutoff_date(self):
5682         basedir = "storage/LeaseCrawler/expire_cutoff_date"
5683         fileutil.make_dirs(basedir)
5684-        # setting cutoff-date to 2000 seconds ago means that any lease which
5685-        # is more than 2000s old will be expired.
5686+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5687+        # is more than 2000 seconds old will be expired.
5688         now = time.time()
5689         then = int(now - 2000)
5690hunk ./src/allmydata/test/test_storage.py 3427
5691-        ss = InstrumentedStorageServer(basedir, "\x00" * 20,
5692-                                       expiration_enabled=True,
5693-                                       expiration_mode="cutoff-date",
5694-                                       expiration_cutoff_date=then)
5695+        expiration_policy = {
5696+            'enabled': True,
5697+            'mode': 'cutoff-date',
5698+            'cutoff_date': then,
5699+            'sharetypes': ('mutable', 'immutable'),
5700+        }
5701+        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
5702         # make it start sooner than usual.
5703         lc = ss.lease_checker
5704         lc.slow_start = 0
5705hunk ./src/allmydata/test/test_storage.py 3575
5706     def test_only_immutable(self):
5707         basedir = "storage/LeaseCrawler/only_immutable"
5708         fileutil.make_dirs(basedir)
5709+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5710+        # is more than 2000 seconds old will be expired.
5711         now = time.time()
5712         then = int(now - 2000)
5713hunk ./src/allmydata/test/test_storage.py 3579
5714-        ss = StorageServer(basedir, "\x00" * 20,
5715-                           expiration_enabled=True,
5716-                           expiration_mode="cutoff-date",
5717-                           expiration_cutoff_date=then,
5718-                           expiration_sharetypes=("immutable",))
5719+        expiration_policy = {
5720+            'enabled': True,
5721+            'mode': 'cutoff-date',
5722+            'cutoff_date': then,
5723+            'sharetypes': ('immutable',),
5724+        }
5725+        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
5726         lc = ss.lease_checker
5727         lc.slow_start = 0
5728         webstatus = StorageStatus(ss)
5729hunk ./src/allmydata/test/test_storage.py 3636
5730     def test_only_mutable(self):
5731         basedir = "storage/LeaseCrawler/only_mutable"
5732         fileutil.make_dirs(basedir)
5733+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5734+        # is more than 2000 seconds old will be expired.
5735         now = time.time()
5736         then = int(now - 2000)
5737hunk ./src/allmydata/test/test_storage.py 3640
5738-        ss = StorageServer(basedir, "\x00" * 20,
5739-                           expiration_enabled=True,
5740-                           expiration_mode="cutoff-date",
5741-                           expiration_cutoff_date=then,
5742-                           expiration_sharetypes=("mutable",))
5743+        expiration_policy = {
5744+            'enabled': True,
5745+            'mode': 'cutoff-date',
5746+            'cutoff_date': then,
5747+            'sharetypes': ('mutable',),
5748+        }
5749+        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
5750         lc = ss.lease_checker
5751         lc.slow_start = 0
5752         webstatus = StorageStatus(ss)
5753hunk ./src/allmydata/test/test_storage.py 3819
5754     def test_no_st_blocks(self):
5755         basedir = "storage/LeaseCrawler/no_st_blocks"
5756         fileutil.make_dirs(basedir)
5757-        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20,
5758-                                        expiration_mode="age",
5759-                                        expiration_override_lease_duration=-1000)
5760-        # a negative expiration_time= means the "configured-"
5761+        # A negative 'override_lease_duration' means that the "configured-"
5762         # space-recovered counts will be non-zero, since all shares will have
5763hunk ./src/allmydata/test/test_storage.py 3821
5764-        # expired by then
5765+        # expired by then.
5766+        expiration_policy = {
5767+            'enabled': True,
5768+            'mode': 'age',
5769+            'override_lease_duration': -1000,
5770+            'sharetypes': ('mutable', 'immutable'),
5771+        }
5772+        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20, expiration_policy)
5773 
5774         # make it start sooner than usual.
5775         lc = ss.lease_checker
5776hunk ./src/allmydata/test/test_storage.py 3877
5777         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
5778         first = min(self.sis)
5779         first_b32 = base32.b2a(first)
5780-        fn = os.path.join(ss.sharedir, storage_index_to_dir(first), "0")
5781-        f = open(fn, "rb+")
5782+        fp = ss.backend.get_shareset(first)._sharehomedir.child("0")
5783+        f = fp.open("rb+")
5784         f.seek(0)
5785         f.write("BAD MAGIC")
5786         f.close()
5787hunk ./src/allmydata/test/test_storage.py 3890
5788 
5789         # also create an empty bucket
5790         empty_si = base32.b2a("\x04"*16)
5791-        empty_bucket_dir = os.path.join(ss.sharedir,
5792-                                        storage_index_to_dir(empty_si))
5793-        fileutil.make_dirs(empty_bucket_dir)
5794+        empty_bucket_dir = ss.backend.get_shareset(empty_si)._sharehomedir
5795+        fileutil.fp_make_dirs(empty_bucket_dir)
5796 
5797         ss.setServiceParent(self.s)
5798 
5799hunk ./src/allmydata/test/test_system.py 10
5800 
5801 import allmydata
5802 from allmydata import uri
5803-from allmydata.storage.mutable import MutableShareFile
5804+from allmydata.storage.backends.disk.mutable import MutableDiskShare
5805 from allmydata.storage.server import si_a2b
5806 from allmydata.immutable import offloaded, upload
5807 from allmydata.immutable.literal import LiteralFileNode
5808hunk ./src/allmydata/test/test_system.py 421
5809         return shares
5810 
5811     def _corrupt_mutable_share(self, filename, which):
5812-        msf = MutableShareFile(filename)
5813+        msf = MutableDiskShare(filename)
5814         datav = msf.readv([ (0, 1000000) ])
5815         final_share = datav[0]
5816         assert len(final_share) < 1000000 # ought to be truncated
5817hunk ./src/allmydata/test/test_upload.py 22
5818 from allmydata.util.happinessutil import servers_of_happiness, \
5819                                          shares_by_server, merge_servers
5820 from allmydata.storage_client import StorageFarmBroker
5821-from allmydata.storage.server import storage_index_to_dir
5822 
5823 MiB = 1024*1024
5824 
5825hunk ./src/allmydata/test/test_upload.py 821
5826 
5827     def _copy_share_to_server(self, share_number, server_number):
5828         ss = self.g.servers_by_number[server_number]
5829-        # Copy share i from the directory associated with the first
5830-        # storage server to the directory associated with this one.
5831-        assert self.g, "I tried to find a grid at self.g, but failed"
5832-        assert self.shares, "I tried to find shares at self.shares, but failed"
5833-        old_share_location = self.shares[share_number][2]
5834-        new_share_location = os.path.join(ss.storedir, "shares")
5835-        si = uri.from_string(self.uri).get_storage_index()
5836-        new_share_location = os.path.join(new_share_location,
5837-                                          storage_index_to_dir(si))
5838-        if not os.path.exists(new_share_location):
5839-            os.makedirs(new_share_location)
5840-        new_share_location = os.path.join(new_share_location,
5841-                                          str(share_number))
5842-        if old_share_location != new_share_location:
5843-            shutil.copy(old_share_location, new_share_location)
5844-        shares = self.find_uri_shares(self.uri)
5845-        # Make sure that the storage server has the share.
5846-        self.failUnless((share_number, ss.my_nodeid, new_share_location)
5847-                        in shares)
5848+        self.copy_share(self.shares[share_number], self.uri, ss)
5849 
5850     def _setup_grid(self):
5851         """
5852hunk ./src/allmydata/test/test_upload.py 1103
5853                 self._copy_share_to_server(i, 2)
5854         d.addCallback(_copy_shares)
5855         # Remove the first server, and add a placeholder with share 0
5856-        d.addCallback(lambda ign:
5857-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5858+        d.addCallback(lambda ign: self.remove_server(0))
5859         d.addCallback(lambda ign:
5860             self._add_server_with_share(server_number=4, share_number=0))
5861         # Now try uploading.
5862hunk ./src/allmydata/test/test_upload.py 1134
5863         d.addCallback(lambda ign:
5864             self._add_server(server_number=4))
5865         d.addCallback(_copy_shares)
5866-        d.addCallback(lambda ign:
5867-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5868+        d.addCallback(lambda ign: self.remove_server(0))
5869         d.addCallback(_reset_encoding_parameters)
5870         d.addCallback(lambda client:
5871             client.upload(upload.Data("data" * 10000, convergence="")))
5872hunk ./src/allmydata/test/test_upload.py 1196
5873                 self._copy_share_to_server(i, 2)
5874         d.addCallback(_copy_shares)
5875         # Remove server 0, and add another in its place
5876-        d.addCallback(lambda ign:
5877-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5878+        d.addCallback(lambda ign: self.remove_server(0))
5879         d.addCallback(lambda ign:
5880             self._add_server_with_share(server_number=4, share_number=0,
5881                                         readonly=True))
5882hunk ./src/allmydata/test/test_upload.py 1237
5883             for i in xrange(1, 10):
5884                 self._copy_share_to_server(i, 2)
5885         d.addCallback(_copy_shares)
5886-        d.addCallback(lambda ign:
5887-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5888+        d.addCallback(lambda ign: self.remove_server(0))
5889         def _reset_encoding_parameters(ign, happy=4):
5890             client = self.g.clients[0]
5891             client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
5892hunk ./src/allmydata/test/test_upload.py 1273
5893         # remove the original server
5894         # (necessary to ensure that the Tahoe2ServerSelector will distribute
5895         #  all the shares)
5896-        def _remove_server(ign):
5897-            server = self.g.servers_by_number[0]
5898-            self.g.remove_server(server.my_nodeid)
5899-        d.addCallback(_remove_server)
5900+        d.addCallback(lambda ign: self.remove_server(0))
5901         # This should succeed; we still have 4 servers, and the
5902         # happiness of the upload is 4.
5903         d.addCallback(lambda ign:
5904hunk ./src/allmydata/test/test_upload.py 1285
5905         d.addCallback(lambda ign:
5906             self._setup_and_upload())
5907         d.addCallback(_do_server_setup)
5908-        d.addCallback(_remove_server)
5909+        d.addCallback(lambda ign: self.remove_server(0))
5910         d.addCallback(lambda ign:
5911             self.shouldFail(UploadUnhappinessError,
5912                             "test_dropped_servers_in_encoder",
5913hunk ./src/allmydata/test/test_upload.py 1307
5914             self._add_server_with_share(4, 7, readonly=True)
5915             self._add_server_with_share(5, 8, readonly=True)
5916         d.addCallback(_do_server_setup_2)
5917-        d.addCallback(_remove_server)
5918+        d.addCallback(lambda ign: self.remove_server(0))
5919         d.addCallback(lambda ign:
5920             self._do_upload_with_broken_servers(1))
5921         d.addCallback(_set_basedir)
5922hunk ./src/allmydata/test/test_upload.py 1314
5923         d.addCallback(lambda ign:
5924             self._setup_and_upload())
5925         d.addCallback(_do_server_setup_2)
5926-        d.addCallback(_remove_server)
5927+        d.addCallback(lambda ign: self.remove_server(0))
5928         d.addCallback(lambda ign:
5929             self.shouldFail(UploadUnhappinessError,
5930                             "test_dropped_servers_in_encoder",
5931hunk ./src/allmydata/test/test_upload.py 1528
5932             for i in xrange(1, 10):
5933                 self._copy_share_to_server(i, 1)
5934         d.addCallback(_copy_shares)
5935-        d.addCallback(lambda ign:
5936-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5937+        d.addCallback(lambda ign: self.remove_server(0))
5938         def _prepare_client(ign):
5939             client = self.g.clients[0]
5940             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
5941hunk ./src/allmydata/test/test_upload.py 1550
5942         def _setup(ign):
5943             for i in xrange(1, 11):
5944                 self._add_server(server_number=i)
5945-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5946+            self.remove_server(0)
5947             c = self.g.clients[0]
5948             # We set happy to an unsatisfiable value so that we can check the
5949             # counting in the exception message. The same progress message
5950hunk ./src/allmydata/test/test_upload.py 1577
5951                 self._add_server(server_number=i)
5952             self._add_server(server_number=11, readonly=True)
5953             self._add_server(server_number=12, readonly=True)
5954-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5955+            self.remove_server(0)
5956             c = self.g.clients[0]
5957             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
5958             return c
5959hunk ./src/allmydata/test/test_upload.py 1605
5960             # the first one that the selector sees.
5961             for i in xrange(10):
5962                 self._copy_share_to_server(i, 9)
5963-            # Remove server 0, and its contents
5964-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5965+            self.remove_server(0)
5966             # Make happiness unsatisfiable
5967             c = self.g.clients[0]
5968             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
5969hunk ./src/allmydata/test/test_upload.py 1625
5970         def _then(ign):
5971             for i in xrange(1, 11):
5972                 self._add_server(server_number=i, readonly=True)
5973-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
5974+            self.remove_server(0)
5975             c = self.g.clients[0]
5976             c.DEFAULT_ENCODING_PARAMETERS['k'] = 2
5977             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
5978hunk ./src/allmydata/test/test_upload.py 1661
5979             self._add_server(server_number=4, readonly=True))
5980         d.addCallback(lambda ign:
5981             self._add_server(server_number=5, readonly=True))
5982-        d.addCallback(lambda ign:
5983-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
5984+        d.addCallback(lambda ign: self.remove_server(0))
5985         def _reset_encoding_parameters(ign, happy=4):
5986             client = self.g.clients[0]
5987             client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
5988hunk ./src/allmydata/test/test_upload.py 1696
5989         d.addCallback(lambda ign:
5990             self._add_server(server_number=2))
5991         def _break_server_2(ign):
5992-            serverid = self.g.servers_by_number[2].my_nodeid
5993+            serverid = self.get_server(2).get_serverid()
5994             self.g.break_server(serverid)
5995         d.addCallback(_break_server_2)
5996         d.addCallback(lambda ign:
5997hunk ./src/allmydata/test/test_upload.py 1705
5998             self._add_server(server_number=4, readonly=True))
5999         d.addCallback(lambda ign:
6000             self._add_server(server_number=5, readonly=True))
6001-        d.addCallback(lambda ign:
6002-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
6003+        d.addCallback(lambda ign: self.remove_server(0))
6004         d.addCallback(_reset_encoding_parameters)
6005         d.addCallback(lambda client:
6006             self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
6007hunk ./src/allmydata/test/test_upload.py 1816
6008             # Copy shares
6009             self._copy_share_to_server(1, 1)
6010             self._copy_share_to_server(2, 1)
6011-            # Remove server 0
6012-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
6013+            self.remove_server(0)
6014             client = self.g.clients[0]
6015             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 3
6016             return client
6017hunk ./src/allmydata/test/test_upload.py 1930
6018                                         readonly=True)
6019             self._add_server_with_share(server_number=4, share_number=3,
6020                                         readonly=True)
6021-            # Remove server 0.
6022-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
6023+            self.remove_server(0)
6024             # Set the client appropriately
6025             c = self.g.clients[0]
6026             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
6027hunk ./src/allmydata/test/test_util.py 9
6028 from twisted.trial import unittest
6029 from twisted.internet import defer, reactor
6030 from twisted.python.failure import Failure
6031+from twisted.python.filepath import FilePath
6032 from twisted.python import log
6033 from pycryptopp.hash.sha256 import SHA256 as _hash
6034 
6035hunk ./src/allmydata/test/test_util.py 508
6036                 os.chdir(saved_cwd)
6037 
6038     def test_disk_stats(self):
6039-        avail = fileutil.get_available_space('.', 2**14)
6040+        avail = fileutil.get_available_space(FilePath('.'), 2**14)
6041         if avail == 0:
6042             raise unittest.SkipTest("This test will spuriously fail if there is no disk space left.")
6043 
6044hunk ./src/allmydata/test/test_util.py 512
6045-        disk = fileutil.get_disk_stats('.', 2**13)
6046+        disk = fileutil.get_disk_stats(FilePath('.'), 2**13)
6047         self.failUnless(disk['total'] > 0, disk['total'])
6048         self.failUnless(disk['used'] > 0, disk['used'])
6049         self.failUnless(disk['free_for_root'] > 0, disk['free_for_root'])
6050hunk ./src/allmydata/test/test_util.py 521
6051 
6052     def test_disk_stats_avail_nonnegative(self):
6053         # This test will spuriously fail if you have more than 2^128
6054-        # bytes of available space on your filesystem.
6055-        disk = fileutil.get_disk_stats('.', 2**128)
6056+        # bytes of available space on your filesystem (lucky you).
6057+        disk = fileutil.get_disk_stats(FilePath('.'), 2**128)
6058         self.failUnlessEqual(disk['avail'], 0)
6059 
6060 class PollMixinTests(unittest.TestCase):
6061hunk ./src/allmydata/test/test_web.py 12
6062 from twisted.python import failure, log
6063 from nevow import rend
6064 from allmydata import interfaces, uri, webish, dirnode
6065-from allmydata.storage.shares import get_share_file
6066 from allmydata.storage_client import StorageFarmBroker
6067 from allmydata.immutable import upload
6068 from allmydata.immutable.downloader.status import DownloadStatus
6069hunk ./src/allmydata/test/test_web.py 4111
6070             good_shares = self.find_uri_shares(self.uris["good"])
6071             self.failUnlessReallyEqual(len(good_shares), 10)
6072             sick_shares = self.find_uri_shares(self.uris["sick"])
6073-            os.unlink(sick_shares[0][2])
6074+            sick_shares[0][2].remove()
6075             dead_shares = self.find_uri_shares(self.uris["dead"])
6076             for i in range(1, 10):
6077hunk ./src/allmydata/test/test_web.py 4114
6078-                os.unlink(dead_shares[i][2])
6079+                dead_shares[i][2].remove()
6080             c_shares = self.find_uri_shares(self.uris["corrupt"])
6081             cso = CorruptShareOptions()
6082             cso.stdout = StringIO()
6083hunk ./src/allmydata/test/test_web.py 4118
6084-            cso.parseOptions([c_shares[0][2]])
6085+            cso.parseOptions([c_shares[0][2].path])
6086             corrupt_share(cso)
6087         d.addCallback(_clobber_shares)
6088 
6089hunk ./src/allmydata/test/test_web.py 4253
6090             good_shares = self.find_uri_shares(self.uris["good"])
6091             self.failUnlessReallyEqual(len(good_shares), 10)
6092             sick_shares = self.find_uri_shares(self.uris["sick"])
6093-            os.unlink(sick_shares[0][2])
6094+            sick_shares[0][2].remove()
6095             dead_shares = self.find_uri_shares(self.uris["dead"])
6096             for i in range(1, 10):
6097hunk ./src/allmydata/test/test_web.py 4256
6098-                os.unlink(dead_shares[i][2])
6099+                dead_shares[i][2].remove()
6100             c_shares = self.find_uri_shares(self.uris["corrupt"])
6101             cso = CorruptShareOptions()
6102             cso.stdout = StringIO()
6103hunk ./src/allmydata/test/test_web.py 4260
6104-            cso.parseOptions([c_shares[0][2]])
6105+            cso.parseOptions([c_shares[0][2].path])
6106             corrupt_share(cso)
6107         d.addCallback(_clobber_shares)
6108 
6109hunk ./src/allmydata/test/test_web.py 4319
6110 
6111         def _clobber_shares(ignored):
6112             sick_shares = self.find_uri_shares(self.uris["sick"])
6113-            os.unlink(sick_shares[0][2])
6114+            sick_shares[0][2].remove()
6115         d.addCallback(_clobber_shares)
6116 
6117         d.addCallback(self.CHECK, "sick", "t=check&repair=true&output=json")
6118hunk ./src/allmydata/test/test_web.py 4811
6119             good_shares = self.find_uri_shares(self.uris["good"])
6120             self.failUnlessReallyEqual(len(good_shares), 10)
6121             sick_shares = self.find_uri_shares(self.uris["sick"])
6122-            os.unlink(sick_shares[0][2])
6123+            sick_shares[0][2].remove()
6124             #dead_shares = self.find_uri_shares(self.uris["dead"])
6125             #for i in range(1, 10):
6126hunk ./src/allmydata/test/test_web.py 4814
6127-            #    os.unlink(dead_shares[i][2])
6128+            #    dead_shares[i][2].remove()
6129 
6130             #c_shares = self.find_uri_shares(self.uris["corrupt"])
6131             #cso = CorruptShareOptions()
6132hunk ./src/allmydata/test/test_web.py 4819
6133             #cso.stdout = StringIO()
6134-            #cso.parseOptions([c_shares[0][2]])
6135+            #cso.parseOptions([c_shares[0][2].path])
6136             #corrupt_share(cso)
6137         d.addCallback(_clobber_shares)
6138 
6139hunk ./src/allmydata/test/test_web.py 4870
6140         d.addErrback(self.explain_web_error)
6141         return d
6142 
6143-    def _count_leases(self, ignored, which):
6144-        u = self.uris[which]
6145-        shares = self.find_uri_shares(u)
6146-        lease_counts = []
6147-        for shnum, serverid, fn in shares:
6148-            sf = get_share_file(fn)
6149-            num_leases = len(list(sf.get_leases()))
6150-            lease_counts.append( (fn, num_leases) )
6151-        return lease_counts
6152-
6153-    def _assert_leasecount(self, lease_counts, expected):
6154+    def _assert_leasecount(self, ignored, which, expected):
6155+        lease_counts = self.count_leases(self.uris[which])
6156         for (fn, num_leases) in lease_counts:
6157             if num_leases != expected:
6158                 self.fail("expected %d leases, have %d, on %s" %
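The refactored `_assert_leasecount` above relies on `Deferred.addCallback` passing extra positional arguments through to the callback, which is what lets the separate `_count_leases` step be folded away. A minimal standalone sketch of that pattern (not part of the patch; names are illustrative):

    from twisted.internet import defer

    def _assert_equal(ignored, which, expected):
        # 'ignored' receives the result of the previous callback in the
        # chain; 'which' and 'expected' come from the addCallback call.
        assert (which, expected) in [("one", 1), ("two", 1)]

    d = defer.succeed(None)
    d.addCallback(_assert_equal, "one", 1)
    d.addCallback(_assert_equal, "two", 1)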
6159hunk ./src/allmydata/test/test_web.py 4903
6160                 self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
6161         d.addCallback(_compute_fileurls)
6162 
6163-        d.addCallback(self._count_leases, "one")
6164-        d.addCallback(self._assert_leasecount, 1)
6165-        d.addCallback(self._count_leases, "two")
6166-        d.addCallback(self._assert_leasecount, 1)
6167-        d.addCallback(self._count_leases, "mutable")
6168-        d.addCallback(self._assert_leasecount, 1)
6169+        d.addCallback(self._assert_leasecount, "one", 1)
6170+        d.addCallback(self._assert_leasecount, "two", 1)
6171+        d.addCallback(self._assert_leasecount, "mutable", 1)
6172 
6173         d.addCallback(self.CHECK, "one", "t=check") # no add-lease
6174         def _got_html_good(res):
6175hunk ./src/allmydata/test/test_web.py 4913
6176             self.failIf("Not Healthy" in res, res)
6177         d.addCallback(_got_html_good)
6178 
6179-        d.addCallback(self._count_leases, "one")
6180-        d.addCallback(self._assert_leasecount, 1)
6181-        d.addCallback(self._count_leases, "two")
6182-        d.addCallback(self._assert_leasecount, 1)
6183-        d.addCallback(self._count_leases, "mutable")
6184-        d.addCallback(self._assert_leasecount, 1)
6185+        d.addCallback(self._assert_leasecount, "one", 1)
6186+        d.addCallback(self._assert_leasecount, "two", 1)
6187+        d.addCallback(self._assert_leasecount, "mutable", 1)
6188 
6189         # this CHECK uses the original client, which uses the same
6190         # lease-secrets, so it will just renew the original lease
6191hunk ./src/allmydata/test/test_web.py 4922
6192         d.addCallback(self.CHECK, "one", "t=check&add-lease=true")
6193         d.addCallback(_got_html_good)
6194 
6195-        d.addCallback(self._count_leases, "one")
6196-        d.addCallback(self._assert_leasecount, 1)
6197-        d.addCallback(self._count_leases, "two")
6198-        d.addCallback(self._assert_leasecount, 1)
6199-        d.addCallback(self._count_leases, "mutable")
6200-        d.addCallback(self._assert_leasecount, 1)
6201+        d.addCallback(self._assert_leasecount, "one", 1)
6202+        d.addCallback(self._assert_leasecount, "two", 1)
6203+        d.addCallback(self._assert_leasecount, "mutable", 1)
6204 
6205         # this CHECK uses an alternate client, which adds a second lease
6206         d.addCallback(self.CHECK, "one", "t=check&add-lease=true", clientnum=1)
6207hunk ./src/allmydata/test/test_web.py 4930
6208         d.addCallback(_got_html_good)
6209 
6210-        d.addCallback(self._count_leases, "one")
6211-        d.addCallback(self._assert_leasecount, 2)
6212-        d.addCallback(self._count_leases, "two")
6213-        d.addCallback(self._assert_leasecount, 1)
6214-        d.addCallback(self._count_leases, "mutable")
6215-        d.addCallback(self._assert_leasecount, 1)
6216+        d.addCallback(self._assert_leasecount, "one", 2)
6217+        d.addCallback(self._assert_leasecount, "two", 1)
6218+        d.addCallback(self._assert_leasecount, "mutable", 1)
6219 
6220         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true")
6221         d.addCallback(_got_html_good)
6222hunk ./src/allmydata/test/test_web.py 4937
6223 
6224-        d.addCallback(self._count_leases, "one")
6225-        d.addCallback(self._assert_leasecount, 2)
6226-        d.addCallback(self._count_leases, "two")
6227-        d.addCallback(self._assert_leasecount, 1)
6228-        d.addCallback(self._count_leases, "mutable")
6229-        d.addCallback(self._assert_leasecount, 1)
6230+        d.addCallback(self._assert_leasecount, "one", 2)
6231+        d.addCallback(self._assert_leasecount, "two", 1)
6232+        d.addCallback(self._assert_leasecount, "mutable", 1)
6233 
6234         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true",
6235                       clientnum=1)
6236hunk ./src/allmydata/test/test_web.py 4945
6237         d.addCallback(_got_html_good)
6238 
6239-        d.addCallback(self._count_leases, "one")
6240-        d.addCallback(self._assert_leasecount, 2)
6241-        d.addCallback(self._count_leases, "two")
6242-        d.addCallback(self._assert_leasecount, 1)
6243-        d.addCallback(self._count_leases, "mutable")
6244-        d.addCallback(self._assert_leasecount, 2)
6245+        d.addCallback(self._assert_leasecount, "one", 2)
6246+        d.addCallback(self._assert_leasecount, "two", 1)
6247+        d.addCallback(self._assert_leasecount, "mutable", 2)
6248 
6249         d.addErrback(self.explain_web_error)
6250         return d
6251hunk ./src/allmydata/test/test_web.py 4989
6252             self.failUnlessReallyEqual(len(units), 4+1)
6253         d.addCallback(_done)
6254 
6255-        d.addCallback(self._count_leases, "root")
6256-        d.addCallback(self._assert_leasecount, 1)
6257-        d.addCallback(self._count_leases, "one")
6258-        d.addCallback(self._assert_leasecount, 1)
6259-        d.addCallback(self._count_leases, "mutable")
6260-        d.addCallback(self._assert_leasecount, 1)
6261+        d.addCallback(self._assert_leasecount, "root", 1)
6262+        d.addCallback(self._assert_leasecount, "one", 1)
6263+        d.addCallback(self._assert_leasecount, "mutable", 1)
6264 
6265         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true")
6266         d.addCallback(_done)
6267hunk ./src/allmydata/test/test_web.py 4996
6268 
6269-        d.addCallback(self._count_leases, "root")
6270-        d.addCallback(self._assert_leasecount, 1)
6271-        d.addCallback(self._count_leases, "one")
6272-        d.addCallback(self._assert_leasecount, 1)
6273-        d.addCallback(self._count_leases, "mutable")
6274-        d.addCallback(self._assert_leasecount, 1)
6275+        d.addCallback(self._assert_leasecount, "root", 1)
6276+        d.addCallback(self._assert_leasecount, "one", 1)
6277+        d.addCallback(self._assert_leasecount, "mutable", 1)
6278 
6279         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true",
6280                       clientnum=1)
6281hunk ./src/allmydata/test/test_web.py 5004
6282         d.addCallback(_done)
6283 
6284-        d.addCallback(self._count_leases, "root")
6285-        d.addCallback(self._assert_leasecount, 2)
6286-        d.addCallback(self._count_leases, "one")
6287-        d.addCallback(self._assert_leasecount, 2)
6288-        d.addCallback(self._count_leases, "mutable")
6289-        d.addCallback(self._assert_leasecount, 2)
6290+        d.addCallback(self._assert_leasecount, "root", 2)
6291+        d.addCallback(self._assert_leasecount, "one", 2)
6292+        d.addCallback(self._assert_leasecount, "mutable", 2)
6293 
6294         d.addErrback(self.explain_web_error)
6295         return d
6296merger 0.0 (
6297hunk ./src/allmydata/uri.py 829
6298+    def is_readonly(self):
6299+        return True
6300+
6301+    def get_readonly(self):
6302+        return self
6303+
6304+
6305hunk ./src/allmydata/uri.py 829
6306+    def is_readonly(self):
6307+        return True
6308+
6309+    def get_readonly(self):
6310+        return self
6311+
6312+
6313)
6314merger 0.0 (
6315hunk ./src/allmydata/uri.py 848
6316+    def is_readonly(self):
6317+        return True
6318+
6319+    def get_readonly(self):
6320+        return self
6321+
6322hunk ./src/allmydata/uri.py 848
6323+    def is_readonly(self):
6324+        return True
6325+
6326+    def get_readonly(self):
6327+        return self
6328+
6329)
6330hunk ./src/allmydata/util/encodingutil.py 221
6331 def quote_path(path, quotemarks=True):
6332     return quote_output("/".join(map(to_str, path)), quotemarks=quotemarks)
6333 
6334+def quote_filepath(fp, quotemarks=True, encoding=None):
6335+    path = fp.path
6336+    if isinstance(path, str):
6337+        try:
6338+            path = path.decode(filesystem_encoding)
6339+        except UnicodeDecodeError:
6340+            return 'b"%s"' % (ESCAPABLE_8BIT.sub(_str_escape, path),)
6341+
6342+    return quote_output(path, quotemarks=quotemarks, encoding=encoding)
6343+
6344 
6345 def unicode_platform():
6346     """
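A small usage sketch for the new `quote_filepath` helper (assuming the surrounding `encodingutil` definitions are importable; the path is hypothetical):

    from twisted.python.filepath import FilePath
    from allmydata.util.encodingutil import quote_filepath

    fp = FilePath("/tmp/share data")   # hypothetical path
    # Yields a quoted, printable form such as '"/tmp/share data"'; byte
    # paths that cannot be decoded fall back to a b"..." escaped form.
    msg = "removing %s" % quote_filepath(fp)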
6347hunk ./src/allmydata/util/fileutil.py 5
6348 Futz with files like a pro.
6349 """
6350 
6351-import sys, exceptions, os, stat, tempfile, time, binascii
6352+import errno, sys, exceptions, os, stat, tempfile, time, binascii
6353+
6354+from allmydata.util.assertutil import precondition
6355 
6356 from twisted.python import log
6357hunk ./src/allmydata/util/fileutil.py 10
6358+from twisted.python.filepath import FilePath, UnlistableError
6359 
6360 from pycryptopp.cipher.aes import AES
6361 
6362hunk ./src/allmydata/util/fileutil.py 189
6363             raise tx
6364         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
6365 
6366-def rm_dir(dirname):
6367+def fp_make_dirs(dirfp):
6368+    """
6369+    An idempotent version of FilePath.makedirs().  If the dir already
6370+    exists, do nothing and return without raising an exception.  If this
6371+    call creates the dir, return without raising an exception.  If there is
6372+    an error that prevents creation or if the directory gets deleted after
6373+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
6374+    exists, raise an exception.
6375+    """
6377+    tx = None
6378+    try:
6379+        dirfp.makedirs()
6380+    except OSError, x:
6381+        tx = x
6382+
6383+    if not dirfp.isdir():
6384+        if tx:
6385+            raise tx
6386+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
6387+
6388+def fp_rmdir_if_empty(dirfp):
6389+    """ Remove the directory if it is empty. """
6390+    try:
6391+        os.rmdir(dirfp.path)
6392+    except OSError, e:
6393+        if e.errno != errno.ENOTEMPTY:
6394+            raise
6395+    else:
6396+        dirfp.changed()
6397+
6398+def rmtree(dirname):
6399     """
6400     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
6401     already gone, do nothing and return without raising an exception.  If this
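Taken together, the two helpers above are meant to be safe to call without first checking the filesystem state. A rough illustration of the intended semantics (directory path hypothetical):

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    d = FilePath("/tmp/example-shares")   # hypothetical directory
    fileutil.fp_make_dirs(d)        # creates the directory
    fileutil.fp_make_dirs(d)        # idempotent: second call is a no-op
    fileutil.fp_rmdir_if_empty(d)   # removed, since nothing was written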
6402hunk ./src/allmydata/util/fileutil.py 239
6403             else:
6404                 remove(fullname)
6405         os.rmdir(dirname)
6406-    except Exception, le:
6407-        # Ignore "No such file or directory"
6408-        if (not isinstance(le, OSError)) or le.args[0] != 2:
6409+    except EnvironmentError, le:
6410+        # Ignore "No such file or directory", collect any other exception.
6411+        if le.args[0] != errno.ENOENT:
6412             excs.append(le)
6413hunk ./src/allmydata/util/fileutil.py 243
6414+    except Exception, le:
6415+        excs.append(le)
6416 
6417     # Okay, now we've recursively removed everything, ignoring any "No
6418     # such file or directory" errors, and collecting any other errors.
6419hunk ./src/allmydata/util/fileutil.py 256
6420             raise OSError, "Failed to remove dir for unknown reason."
6421         raise OSError, excs
6422 
6423+def fp_remove(fp):
6424+    """
6425+    An idempotent version of shutil.rmtree().  If the file/dir is already
6426+    gone, do nothing and return without raising an exception.  If this call
6427+    removes the file/dir, return without raising an exception.  If there is
6428+    an error that prevents removal, or if a file or directory at the same
6429+    path gets created again by someone else after this deletes it and before
6430+    this checks that it is gone, raise an exception.
6431+    """
6432+    try:
6433+        fp.remove()
6434+    except UnlistableError, e:
6435+        if e.originalException.errno != errno.ENOENT:
6436+            raise
6437+    except OSError, e:
6438+        if e.errno != errno.ENOENT:
6439+            raise
6440+
6441+def rm_dir(dirname):
6442+    # Renamed to be like shutil.rmtree and unlike rmdir.
6443+    return rmtree(dirname)
6444 
6445 def remove_if_possible(f):
6446     try:
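`fp_remove` plays the same role for `FilePath`-based code that `rmtree` plays for string paths: removal of a file or directory tree that tolerates the target already being gone. A sketch, under the same assumptions as above:

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    fp = FilePath("/tmp/example-shares")   # hypothetical path
    fileutil.fp_remove(fp)   # removes the file or directory tree
    fileutil.fp_remove(fp)   # idempotent: already gone, returns quietly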
6447hunk ./src/allmydata/util/fileutil.py 387
6448         import traceback
6449         traceback.print_exc()
6450 
6451-def get_disk_stats(whichdir, reserved_space=0):
6452+def get_disk_stats(whichdirfp, reserved_space=0):
6453     """Return disk statistics for the storage disk, in the form of a dict
6454     with the following fields.
6455       total:            total bytes on disk
6456hunk ./src/allmydata/util/fileutil.py 408
6457     you can pass how many bytes you would like to leave unused on this
6458     filesystem as reserved_space.
6459     """
6460+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
6461 
6462     if have_GetDiskFreeSpaceExW:
6463         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
6464hunk ./src/allmydata/util/fileutil.py 419
6465         n_free_for_nonroot = c_ulonglong(0)
6466         n_total            = c_ulonglong(0)
6467         n_free_for_root    = c_ulonglong(0)
6468-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
6469-                                               byref(n_total),
6470-                                               byref(n_free_for_root))
6471+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
6472+                                                      byref(n_total),
6473+                                                      byref(n_free_for_root))
6474         if retval == 0:
6475             raise OSError("Windows error %d attempting to get disk statistics for %r"
6476hunk ./src/allmydata/util/fileutil.py 424
6477-                          % (GetLastError(), whichdir))
6478+                          % (GetLastError(), whichdirfp.path))
6479         free_for_nonroot = n_free_for_nonroot.value
6480         total            = n_total.value
6481         free_for_root    = n_free_for_root.value
6482hunk ./src/allmydata/util/fileutil.py 433
6483         # <http://docs.python.org/library/os.html#os.statvfs>
6484         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
6485         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
6486-        s = os.statvfs(whichdir)
6487+        s = os.statvfs(whichdirfp.path)
6488 
6489         # on my mac laptop:
6490         #  statvfs(2) is a wrapper around statfs(2).
6491hunk ./src/allmydata/util/fileutil.py 460
6492              'avail': avail,
6493            }
6494 
6495-def get_available_space(whichdir, reserved_space):
6496+def get_available_space(whichdirfp, reserved_space):
6497     """Returns available space for share storage in bytes, or None if no
6498     API to get this information is available.
6499 
6500hunk ./src/allmydata/util/fileutil.py 472
6501     you can pass how many bytes you would like to leave unused on this
6502     filesystem as reserved_space.
6503     """
6504+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
6505     try:
6506hunk ./src/allmydata/util/fileutil.py 474
6507-        return get_disk_stats(whichdir, reserved_space)['avail']
6508+        return get_disk_stats(whichdirfp, reserved_space)['avail']
6509     except AttributeError:
6510         return None
6511hunk ./src/allmydata/util/fileutil.py 477
6512-    except EnvironmentError:
6513-        log.msg("OS call to get disk statistics failed")
6514+
6515+
6516+def get_used_space(fp):
6517+    if fp is None:
6518         return 0
6519hunk ./src/allmydata/util/fileutil.py 482
6520+    try:
6521+        s = os.stat(fp.path)
6522+    except EnvironmentError:
6523+        if not fp.exists():
6524+            return 0
6525+        raise
6526+    else:
6527+        # POSIX defines st_blocks (originally a BSDism):
6528+        #   <http://pubs.opengroup.org/onlinepubs/009695399/basedefs/sys/stat.h.html>
6529+        # but does not require stat() to give it a "meaningful value"
6530+        #   <http://pubs.opengroup.org/onlinepubs/009695399/functions/stat.html>
6531+        # and says:
6532+        #   "The unit for the st_blocks member of the stat structure is not defined
6533+        #    within IEEE Std 1003.1-2001. In some implementations it is 512 bytes.
6534+        #    It may differ on a file system basis. There is no correlation between
6535+        #    values of the st_blocks and st_blksize, and the f_bsize (from <sys/statvfs.h>)
6536+        #    structure members."
6537+        #
6538+        # The Linux docs define it as "the number of blocks allocated to the file,
6539+        # [in] 512-byte units." It is also defined that way on MacOS X. Python does
6540+        # not set the attribute on Windows.
6541+        #
6542+        # We consider platforms that define st_blocks but give it a wrong value, or
6543+        # measure it in a unit other than 512 bytes, to be broken. See also
6544+        # <http://bugs.python.org/issue12350>.
6545+
6546+        if hasattr(s, 'st_blocks'):
6547+            return s.st_blocks * 512
6548+        else:
6549+            return s.st_size
6550}
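The `st_blocks` discussion above matters mostly for sparse files, where allocated space and apparent size diverge. A sketch of the difference (hypothetical file; assumes a POSIX filesystem with sparse-file support):

    import os

    f = open("/tmp/sparse-demo", "wb")   # hypothetical file
    f.seek(2**30 - 1)
    f.write("\x00")                      # 1 GiB apparent size, almost no data
    f.close()

    s = os.stat("/tmp/sparse-demo")
    # s.st_size is 2**30, but s.st_blocks * 512 is only a few KiB, so
    # get_used_space() reports the (much smaller) space actually
    # allocated on disk.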
6551[Work-in-progress, includes fix to bug involving BucketWriter. refs #999
6552david-sarah@jacaranda.org**20110920033803
6553 Ignore-this: 64e9e019421454e4d08141d10b6e4eed
6554] {
6555hunk ./src/allmydata/client.py 9
6556 from twisted.internet import reactor, defer
6557 from twisted.application import service
6558 from twisted.application.internet import TimerService
6559+from twisted.python.filepath import FilePath
6560 from foolscap.api import Referenceable
6561 from pycryptopp.publickey import rsa
6562 
6563hunk ./src/allmydata/client.py 15
6564 import allmydata
6565 from allmydata.storage.server import StorageServer
6566+from allmydata.storage.backends.disk.disk_backend import DiskBackend
6567 from allmydata import storage_client
6568 from allmydata.immutable.upload import Uploader
6569 from allmydata.immutable.offloaded import Helper
6570hunk ./src/allmydata/client.py 213
6571             return
6572         readonly = self.get_config("storage", "readonly", False, boolean=True)
6573 
6574-        storedir = os.path.join(self.basedir, self.STOREDIR)
6575+        storedir = FilePath(self.basedir).child(self.STOREDIR)
6576 
6577         data = self.get_config("storage", "reserved_space", None)
6578         reserved = None
6579hunk ./src/allmydata/client.py 255
6580             'cutoff_date': cutoff_date,
6581             'sharetypes': tuple(sharetypes),
6582         }
6583-        ss = StorageServer(storedir, self.nodeid,
6584-                           reserved_space=reserved,
6585-                           discard_storage=discard,
6586-                           readonly_storage=readonly,
6587+
6588+        backend = DiskBackend(storedir, readonly=readonly, reserved_space=reserved,
6589+                              discard_storage=discard)
6590+        ss = StorageServer(nodeid, backend, storedir,
6591                            stats_provider=self.stats_provider,
6592                            expiration_policy=expiration_policy)
6593         self.add_service(ss)
6594hunk ./src/allmydata/interfaces.py 348
6595 
6596     def get_shares():
6597         """
6598-        Generates the IStoredShare objects held in this shareset.
6599+        Generates IStoredShare objects for all completed shares in this shareset.
6600         """
6601 
6602     def has_incoming(shnum):
6603hunk ./src/allmydata/storage/backends/base.py 69
6604         # def _create_mutable_share(self, storageserver, shnum, write_enabler):
6605         #     """create a mutable share with the given shnum and write_enabler"""
6606 
6607-        # secrets might be a triple with cancel_secret in secrets[2], but if
6608-        # so we ignore the cancel_secret.
6609         write_enabler = secrets[0]
6610         renew_secret = secrets[1]
6611hunk ./src/allmydata/storage/backends/base.py 71
6612+        cancel_secret = '\x00'*32
6613+        if len(secrets) > 2:
6614+            cancel_secret = secrets[2]
6615 
6616         si_s = self.get_storage_index_string()
6617         shares = {}
6618hunk ./src/allmydata/storage/backends/base.py 110
6619             read_data[shnum] = share.readv(read_vector)
6620 
6621         ownerid = 1 # TODO
6622-        lease_info = LeaseInfo(ownerid, renew_secret,
6623+        lease_info = LeaseInfo(ownerid, renew_secret, cancel_secret,
6624                                expiration_time, storageserver.get_serverid())
6625 
6626         if testv_is_good:
6627hunk ./src/allmydata/storage/backends/disk/disk_backend.py 34
6628     return newfp.child(sia)
6629 
6630 
6631-def get_share(fp):
6632+def get_share(storageindex, shnum, fp):
6633     f = fp.open('rb')
6634     try:
6635         prefix = f.read(32)
6636hunk ./src/allmydata/storage/backends/disk/disk_backend.py 42
6637         f.close()
6638 
6639     if prefix == MutableDiskShare.MAGIC:
6640-        return MutableDiskShare(fp)
6641+        return MutableDiskShare(storageindex, shnum, fp)
6642     else:
6643         # assume it's immutable
6644hunk ./src/allmydata/storage/backends/disk/disk_backend.py 45
6645-        return ImmutableDiskShare(fp)
6646+        return ImmutableDiskShare(storageindex, shnum, fp)
6647 
6648 
6649 class DiskBackend(Backend):
6650hunk ./src/allmydata/storage/backends/disk/disk_backend.py 174
6651                 if not NUM_RE.match(shnumstr):
6652                     continue
6653                 sharehome = self._sharehomedir.child(shnumstr)
6654-                yield self.get_share(sharehome)
6655+                yield get_share(self.get_storage_index(), int(shnumstr), sharehome)
6656         except UnlistableError:
6657             # There is no shares directory at all.
6658             pass
6659hunk ./src/allmydata/storage/backends/disk/disk_backend.py 185
6660         return self._incominghomedir.child(str(shnum)).exists()
6661 
6662     def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
6663-        sharehome = self._sharehomedir.child(str(shnum))
6664+        finalhome = self._sharehomedir.child(str(shnum))
6665         incominghome = self._incominghomedir.child(str(shnum))
6666hunk ./src/allmydata/storage/backends/disk/disk_backend.py 187
6667-        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
6668-                                   max_size=max_space_per_bucket, create=True)
6669+        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, incominghome, finalhome,
6670+                                   max_size=max_space_per_bucket)
6671         bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
6672         if self._discard_storage:
6673             bw.throw_out_all_data = True
6674hunk ./src/allmydata/storage/backends/disk/disk_backend.py 198
6675         fileutil.fp_make_dirs(self._sharehomedir)
6676         sharehome = self._sharehomedir.child(str(shnum))
6677         serverid = storageserver.get_serverid()
6678-        return create_mutable_disk_share(sharehome, serverid, write_enabler, storageserver)
6679+        return create_mutable_disk_share(self.get_storage_index(), shnum, sharehome, serverid, write_enabler, storageserver)
6680 
6681     def _clean_up_after_unlink(self):
6682         fileutil.fp_rmdir_if_empty(self._sharehomedir)
6683hunk ./src/allmydata/storage/backends/disk/immutable.py 48
6684     LEASE_SIZE = struct.calcsize(">L32s32sL")
6685 
6686 
6687-    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
6688-        """ If max_size is not None then I won't allow more than
6689-        max_size to be written to me. If create=True then max_size
6690-        must not be None. """
6691-        precondition((max_size is not None) or (not create), max_size, create)
6692+    def __init__(self, storageindex, shnum, home, finalhome=None, max_size=None):
6693+        """
6694+        If max_size is not None then I won't allow more than max_size to be written to me.
6695+        If finalhome is not None (meaning that we are creating the share) then max_size
6696+        must not be None.
6697+        """
6698+        precondition((max_size is not None) or (finalhome is None), max_size, finalhome)
6699         self._storageindex = storageindex
6700         self._max_size = max_size
6701hunk ./src/allmydata/storage/backends/disk/immutable.py 57
6702-        self._incominghome = incominghome
6703-        self._home = finalhome
6704+
6705+        # If we are creating the share, _finalhome refers to the final path and
6706+        # _home to the incoming path. Otherwise, _finalhome is None.
6707+        self._finalhome = finalhome
6708+        self._home = home
6709         self._shnum = shnum
6710hunk ./src/allmydata/storage/backends/disk/immutable.py 63
6711-        if create:
6712-            # touch the file, so later callers will see that we're working on
6713+
6714+        if self._finalhome is not None:
6715+            # Touch the file, so later callers will see that we're working on
6716             # it. Also construct the metadata.
6717hunk ./src/allmydata/storage/backends/disk/immutable.py 67
6718-            assert not finalhome.exists()
6719-            fp_make_dirs(self._incominghome.parent())
6720+            assert not self._finalhome.exists()
6721+            fp_make_dirs(self._home.parent())
6722             # The second field -- the four-byte share data length -- is no
6723             # longer used as of Tahoe v1.3.0, but we continue to write it in
6724             # there in case someone downgrades a storage server from >=
6725hunk ./src/allmydata/storage/backends/disk/immutable.py 78
6726             # the largest length that can fit into the field. That way, even
6727             # if this does happen, the old < v1.3.0 server will still allow
6728             # clients to read the first part of the share.
6729-            self._incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6730+            self._home.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6731             self._lease_offset = max_size + 0x0c
6732             self._num_leases = 0
6733         else:
6734hunk ./src/allmydata/storage/backends/disk/immutable.py 101
6735                 % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
6736 
6737     def close(self):
6738-        fileutil.fp_make_dirs(self._home.parent())
6739-        self._incominghome.moveTo(self._home)
6740-        try:
6741-            # self._incominghome is like storage/shares/incoming/ab/abcde/4 .
6742-            # We try to delete the parent (.../ab/abcde) to avoid leaving
6743-            # these directories lying around forever, but the delete might
6744-            # fail if we're working on another share for the same storage
6745-            # index (like ab/abcde/5). The alternative approach would be to
6746-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
6747-            # ShareWriter), each of which is responsible for a single
6748-            # directory on disk, and have them use reference counting of
6749-            # their children to know when they should do the rmdir. This
6750-            # approach is simpler, but relies on os.rmdir refusing to delete
6751-            # a non-empty directory. Do *not* use fileutil.fp_remove() here!
6752-            fileutil.fp_rmdir_if_empty(self._incominghome.parent())
6753-            # we also delete the grandparent (prefix) directory, .../ab ,
6754-            # again to avoid leaving directories lying around. This might
6755-            # fail if there is another bucket open that shares a prefix (like
6756-            # ab/abfff).
6757-            fileutil.fp_rmdir_if_empty(self._incominghome.parent().parent())
6758-            # we leave the great-grandparent (incoming/) directory in place.
6759-        except EnvironmentError:
6760-            # ignore the "can't rmdir because the directory is not empty"
6761-            # exceptions, those are normal consequences of the
6762-            # above-mentioned conditions.
6763-            pass
6764-        pass
6765+        fileutil.fp_make_dirs(self._finalhome.parent())
6766+        self._home.moveTo(self._finalhome)
6767+
6768+        # self._home is like storage/shares/incoming/ab/abcde/4 .
6769+        # We try to delete the parent (.../ab/abcde) to avoid leaving
6770+        # these directories lying around forever, but the delete might
6771+        # fail if we're working on another share for the same storage
6772+        # index (like ab/abcde/5). The alternative approach would be to
6773+        # use a hierarchy of objects (PrefixHolder, BucketHolder,
6774+        # ShareWriter), each of which is responsible for a single
6775+        # directory on disk, and have them use reference counting of
6776+        # their children to know when they should do the rmdir. This
6777+        # approach is simpler, but relies on os.rmdir (used by
6778+        # fp_rmdir_if_empty) refusing to delete a non-empty directory.
6779+        # Do *not* use fileutil.fp_remove() here!
6780+        parent = self._home.parent()
6781+        fileutil.fp_rmdir_if_empty(parent)
6782+
6783+        # we also delete the grandparent (prefix) directory, .../ab ,
6784+        # again to avoid leaving directories lying around. This might
6785+        # fail if there is another bucket open that shares a prefix (like
6786+        # ab/abfff).
6787+        fileutil.fp_rmdir_if_empty(parent.parent())
6788+
6789+        # we leave the great-grandparent (incoming/) directory in place.
6790+
6791+        # allow lease changes after closing.
6792+        self._home = self._finalhome
6793+        self._finalhome = None
6794 
6795     def get_used_space(self):
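The comments above describe a two-phase lifecycle for immutable shares: while being written, the share lives under incoming/ (with `_finalhome` pointing at its eventual location), and `close()` moves it into place and prunes any now-empty incoming directories. Schematically (paths and sizes hypothetical):

    from twisted.python.filepath import FilePath
    from allmydata.storage.backends.disk.immutable import ImmutableDiskShare

    si = "x" * 16                               # hypothetical storage index
    incoming = FilePath("/tmp/incoming-share")  # hypothetical paths
    final = FilePath("/tmp/final-share")

    share = ImmutableDiskShare(si, 0, incoming, finalhome=final, max_size=1000)
    # ... the BucketWriter writes the share data under the incoming path ...
    share.close()   # moves the file to its final home, prunes empty
                    # incoming/<prefix>/<si> directories, and re-enables
                    # lease operations on the share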
6796hunk ./src/allmydata/storage/backends/disk/immutable.py 132
6797-        return (fileutil.get_used_space(self._home) +
6798-                fileutil.get_used_space(self._incominghome))
6799+        return (fileutil.get_used_space(self._finalhome) +
6800+                fileutil.get_used_space(self._home))
6801 
6802     def get_storage_index(self):
6803         return self._storageindex
6804hunk ./src/allmydata/storage/backends/disk/immutable.py 175
6805         precondition(offset >= 0, offset)
6806         if self._max_size is not None and offset+length > self._max_size:
6807             raise DataTooLargeError(self._max_size, offset, length)
6808-        f = self._incominghome.open(mode='rb+')
6809+        f = self._home.open(mode='rb+')
6810         try:
6811             real_offset = self._data_offset+offset
6812             f.seek(real_offset)
6813hunk ./src/allmydata/storage/backends/disk/immutable.py 205
6814 
6815     # These lease operations are intended for use by disk_backend.py.
6816     # Other clients should not depend on the fact that the disk backend
6817-    # stores leases in share files.
6818+    # stores leases in share files. XXX bucket.py also relies on this.
6819 
6820     def get_leases(self):
6821         """Yields a LeaseInfo instance for all leases."""
6822hunk ./src/allmydata/storage/backends/disk/immutable.py 221
6823             f.close()
6824 
6825     def add_lease(self, lease_info):
6826-        f = self._incominghome.open(mode='rb')
6827+        f = self._home.open(mode='rb+')
6828         try:
6829             num_leases = self._read_num_leases(f)
6830hunk ./src/allmydata/storage/backends/disk/immutable.py 224
6831-        finally:
6832-            f.close()
6833-        f = self._home.open(mode='wb+')
6834-        try:
6835             self._write_lease_record(f, num_leases, lease_info)
6836             self._write_num_leases(f, num_leases+1)
6837         finally:
6838hunk ./src/allmydata/storage/backends/disk/mutable.py 440
6839         pass
6840 
6841 
6842-def create_mutable_disk_share(fp, serverid, write_enabler, parent):
6843-    ms = MutableDiskShare(fp, parent)
6844+def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent):
6845+    ms = MutableDiskShare(storageindex, shnum, fp, parent)
6846     ms.create(serverid, write_enabler)
6847     del ms
6848hunk ./src/allmydata/storage/backends/disk/mutable.py 444
6849-    return MutableDiskShare(fp, parent)
6850+    return MutableDiskShare(storageindex, shnum, fp, parent)
6851hunk ./src/allmydata/storage/bucket.py 44
6852         start = time.time()
6853 
6854         self._share.close()
6855-        filelen = self._share.stat()
6856+        # XXX should this be self._share.get_used_space() ?
6857+        consumed_size = self._share.get_size()
6858         self._share = None
6859 
6860         self.closed = True
6861hunk ./src/allmydata/storage/bucket.py 51
6862         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
6863 
6864-        self.ss.bucket_writer_closed(self, filelen)
6865+        self.ss.bucket_writer_closed(self, consumed_size)
6866         self.ss.add_latency("close", time.time() - start)
6867         self.ss.count("close")
6868 
6869hunk ./src/allmydata/storage/server.py 182
6870                                 renew_secret, cancel_secret,
6871                                 sharenums, allocated_size,
6872                                 canary, owner_num=0):
6873-        # cancel_secret is no longer used.
6874         # owner_num is not for clients to set, but rather it should be
6875         # curried into a StorageServer instance dedicated to a particular
6876         # owner.
6877hunk ./src/allmydata/storage/server.py 195
6878         # Note that the lease should not be added until the BucketWriter
6879         # has been closed.
6880         expire_time = time.time() + 31*24*60*60
6881-        lease_info = LeaseInfo(owner_num, renew_secret,
6882+        lease_info = LeaseInfo(owner_num, renew_secret, cancel_secret,
6883                                expire_time, self._serverid)
6884 
6885         max_space_per_bucket = allocated_size
6886hunk ./src/allmydata/test/no_network.py 349
6887         return self.g.servers_by_number[i]
6888 
6889     def get_serverdir(self, i):
6890-        return self.g.servers_by_number[i].backend.storedir
6891+        return self.g.servers_by_number[i].backend._storedir
6892 
6893     def remove_server(self, i):
6894         self.g.remove_server(self.g.servers_by_number[i].get_serverid())
6895hunk ./src/allmydata/test/no_network.py 357
6896     def iterate_servers(self):
6897         for i in sorted(self.g.servers_by_number.keys()):
6898             ss = self.g.servers_by_number[i]
6899-            yield (i, ss, ss.backend.storedir)
6900+            yield (i, ss, ss.backend._storedir)
6901 
6902     def find_uri_shares(self, uri):
6903         si = tahoe_uri.from_string(uri).get_storage_index()
6904hunk ./src/allmydata/test/no_network.py 384
6905         return shares
6906 
6907     def copy_share(self, from_share, uri, to_server):
6908-        si = uri.from_string(self.uri).get_storage_index()
6909+        si = tahoe_uri.from_string(uri).get_storage_index()
6910         (i_shnum, i_serverid, i_sharefp) = from_share
6911         shares_dir = to_server.backend.get_shareset(si)._sharehomedir
6912         i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
6913hunk ./src/allmydata/test/test_download.py 127
6914 
6915         return d
6916 
6917-    def _write_shares(self, uri, shares):
6918-        si = uri.from_string(uri).get_storage_index()
6919+    def _write_shares(self, fileuri, shares):
6920+        si = uri.from_string(fileuri).get_storage_index()
6921         for i in shares:
6922             shares_for_server = shares[i]
6923             for shnum in shares_for_server:
6924hunk ./src/allmydata/test/test_hung_server.py 36
6925 
6926     def _hang(self, servers, **kwargs):
6927         for ss in servers:
6928-            self.g.hang_server(ss.get_serverid(), **kwargs)
6929+            self.g.hang_server(ss.original.get_serverid(), **kwargs)
6930 
6931     def _unhang(self, servers, **kwargs):
6932         for ss in servers:
6933hunk ./src/allmydata/test/test_hung_server.py 40
6934-            self.g.unhang_server(ss.get_serverid(), **kwargs)
6935+            self.g.unhang_server(ss.original.get_serverid(), **kwargs)
6936 
6937     def _hang_shares(self, shnums, **kwargs):
6938         # hang all servers who are holding the given shares
6939hunk ./src/allmydata/test/test_hung_server.py 52
6940                     hung_serverids.add(i_serverid)
6941 
6942     def _delete_all_shares_from(self, servers):
6943-        serverids = [ss.get_serverid() for ss in servers]
6944+        serverids = [ss.original.get_serverid() for ss in servers]
6945         for (i_shnum, i_serverid, i_sharefp) in self.shares:
6946             if i_serverid in serverids:
6947                 i_sharefp.remove()
6948hunk ./src/allmydata/test/test_hung_server.py 58
6949 
6950     def _corrupt_all_shares_in(self, servers, corruptor_func):
6951-        serverids = [ss.get_serverid() for ss in servers]
6952+        serverids = [ss.original.get_serverid() for ss in servers]
6953         for (i_shnum, i_serverid, i_sharefp) in self.shares:
6954             if i_serverid in serverids:
6955                 self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor_func)
6956hunk ./src/allmydata/test/test_hung_server.py 64
6957 
6958     def _copy_all_shares_from(self, from_servers, to_server):
6959-        serverids = [ss.get_serverid() for ss in from_servers]
6960+        serverids = [ss.original.get_serverid() for ss in from_servers]
6961         for (i_shnum, i_serverid, i_sharefp) in self.shares:
6962             if i_serverid in serverids:
6963                 self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
6964hunk ./src/allmydata/test/test_mutable.py 2990
6965             fso = debug.FindSharesOptions()
6966             storage_index = base32.b2a(n.get_storage_index())
6967             fso.si_s = storage_index
6968-            fso.nodedirs = [unicode(os.path.dirname(os.path.abspath(storedir)))
6969+            fso.nodedirs = [unicode(storedir.parent().path)
6970                             for (i,ss,storedir)
6971                             in self.iterate_servers()]
6972             fso.stdout = StringIO()
6973hunk ./src/allmydata/test/test_upload.py 818
6974         if share_number is not None:
6975             self._copy_share_to_server(share_number, server_number)
6976 
6977-
6978     def _copy_share_to_server(self, share_number, server_number):
6979         ss = self.g.servers_by_number[server_number]
6980hunk ./src/allmydata/test/test_upload.py 820
6981-        self.copy_share(self.shares[share_number], ss)
6982+        self.copy_share(self.shares[share_number], self.uri, ss)
6983 
6984     def _setup_grid(self):
6985         """
6986}
6987[docs/backends: document the configuration options for the pluggable backends scheme. refs #999
6988david-sarah@jacaranda.org**20110920171737
6989 Ignore-this: 5947e864682a43cb04e557334cda7c19
6990] {
6991adddir ./docs/backends
6992addfile ./docs/backends/S3.rst
6993hunk ./docs/backends/S3.rst 1
6994+====================================================
6995+Storing Shares in Amazon Simple Storage Service (S3)
6996+====================================================
6997+
6998+S3 is a commercial storage service provided by Amazon, described at
6999+`<https://aws.amazon.com/s3/>`_.
7000+
7001+The Tahoe-LAFS storage server can be configured to store its shares in
7002+an S3 bucket, rather than on the local filesystem. To enable this, add the
7003+following keys to the server's ``tahoe.cfg`` file:
7004+
7005+``[storage]``
7006+
7007+``backend = s3``
7008+
7009+    This turns off the local filesystem backend and enables use of S3.
7010+
7011+``s3.access_key_id = (string, required)``
7012+``s3.secret_access_key = (string, required)``
7013+
7014+    These two give the storage server permission to access your Amazon
7015+    Web Services account, allowing it to upload and download shares
7016+    from S3.
7017+
7018+``s3.bucket = (string, required)``
7019+
7020+    This controls which bucket will be used to hold shares. The Tahoe-LAFS
7021+    storage server will only modify and access objects in the configured S3
7022+    bucket.
7023+
7024+``s3.url = (URL string, optional)``
7025+
7026+    This URL tells the storage server how to access the S3 service. It
7027+    defaults to ``http://s3.amazonaws.com``, but by setting it to something
7028+    else, you may be able to use some other S3-like service if it is
7029+    sufficiently compatible.
7030+
7031+``s3.max_space = (str, optional)``
7032+
7033+    This tells the server to limit how much space can be used in the S3
7034+    bucket. Before each share is uploaded, the server will ask S3 for the
7035+    current bucket usage, and will only accept the share if it does not cause
7036+    the usage to grow above this limit.
7037+
7038+    The string contains a number, with an optional case-insensitive scale
7039+    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
7040+    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
7041+    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
7042+    thing.
7043+
7044+    If ``s3.max_space`` is omitted, the default behavior is to allow
7045+    unlimited usage.
7046+
7047+
7048+Once configured, the WUI "storage server" page will provide information about
7049+how much space is being used and how many shares are being stored.
7050+
7051+
7052+Issues
7053+------
7054+
7055+Objects in an S3 bucket cannot be read for free. As a result, when Tahoe-LAFS
7056+is configured to store shares in S3 rather than on local disk, some common
7057+operations may behave differently:
7058+
7059+* Lease crawling/expiration is not yet implemented. As a result, shares will
7060+  be retained forever, and the Storage Server status web page will not show
7061+  information about the number of mutable/immutable shares present.
7062+
7063+* Enabling ``s3.max_space`` causes an extra S3 usage query to be sent for
7064+  each share upload, causing the upload process to run slightly slower and
7065+  incur more S3 request charges.
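The "quantity of space" syntax described for ``s3.max_space`` (and for ``reserved_space`` in the disk backend below) can be parsed in a few lines of Python. This is an illustrative re-implementation of the stated rule, not the parser Tahoe-LAFS actually uses:

    import re

    def parse_space_quantity(s):
        # A number, an optional scale suffix (K/M/G/T), and an optional
        # "B" or "iB"; "iB" selects binary (1024-based) units.
        m = re.match(r"^\s*(\d+)\s*([kKmMgGtT]?)(i?[bB]?)\s*$", s)
        if not m:
            raise ValueError("not a quantity of space: %r" % (s,))
        number, scale, byte_suffix = m.groups()
        base = 1024 if byte_suffix.lower() == "ib" else 1000
        exponent = "kmgt".index(scale.lower()) + 1 if scale else 0
        return int(number) * (base ** exponent)

    # parse_space_quantity("100MB") == parse_space_quantity("100000kb")
    # == 100000000, and parse_space_quantity("1MiB") == 1048576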
7066addfile ./docs/backends/disk.rst
7067hunk ./docs/backends/disk.rst 1
7068+====================================
7069+Storing Shares on a Local Filesystem
7070+====================================
7071+
7072+The "disk" backend stores shares on the local filesystem. Versions of
7073+Tahoe-LAFS <= 1.9.0 always stored shares in this way.
7074+
7075+``[storage]``
7076+
7077+``backend = disk``
7078+
7079+    This enables use of the disk backend, and is the default.
7080+
7081+``reserved_space = (str, optional)``
7082+
7083+    If provided, this value defines how much disk space is reserved: the
7084+    storage server will not accept any share that causes the amount of free
7085+    disk space to drop below this value. (The free space is measured by a
7086+    call to statvfs(2) on Unix, or GetDiskFreeSpaceEx on Windows, and is the
7087+    space available to the user account under which the storage server runs.)
7088+
7089+    This string contains a number, with an optional case-insensitive scale
7090+    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
7091+    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
7092+    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
7093+    thing.
7094+
7095+    "``tahoe create-node``" generates a tahoe.cfg with
7096+    "``reserved_space=1G``", but you may wish to raise, lower, or remove the
7097+    reservation to suit your needs.
7098+
7099+``expire.enabled =``
7100+
7101+``expire.mode =``
7102+
7103+``expire.override_lease_duration =``
7104+
7105+``expire.cutoff_date =``
7106+
7107+``expire.immutable =``
7108+
7109+``expire.mutable =``
7110+
7111+    These settings control garbage collection, causing the server to
7112+    delete shares that no longer have an up-to-date lease on them. Please
7113+    see `<garbage-collection.rst>`_ for full details.
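Putting the options above together, a minimal ``[storage]`` section for the disk backend might look like the following (values are illustrative only):

    [storage]
    enabled = true
    backend = disk
    reserved_space = 1G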
7114hunk ./docs/configuration.rst 436
7115     <http://tahoe-lafs.org/trac/tahoe-lafs/ticket/390>`_ for the current
7116     status of this bug. The default value is ``False``.
7117 
7118-``reserved_space = (str, optional)``
7119+``backend = (string, optional)``
7120 
7121hunk ./docs/configuration.rst 438
7122-    If provided, this value defines how much disk space is reserved: the
7123-    storage server will not accept any share that causes the amount of free
7124-    disk space to drop below this value. (The free space is measured by a
7125-    call to statvfs(2) on Unix, or GetDiskFreeSpaceEx on Windows, and is the
7126-    space available to the user account under which the storage server runs.)
7127+    Storage servers can store the data into different "backends". Clients
7128+    Storage servers can store their shares in different "backends". Clients
7129+    value is ``disk``.
7130 
7131hunk ./docs/configuration.rst 442
7132-    This string contains a number, with an optional case-insensitive scale
7133-    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
7134-    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
7135-    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
7136-    thing.
7137+``backend = disk``
7138 
7139hunk ./docs/configuration.rst 444
7140-    "``tahoe create-node``" generates a tahoe.cfg with
7141-    "``reserved_space=1G``", but you may wish to raise, lower, or remove the
7142-    reservation to suit your needs.
7143+    The default is to store shares on the local filesystem (in
7144+    BASEDIR/storage/shares/). For configuration details (including how to
7145+    reserve a minimum amount of free space), see `<backends/disk.rst>`_.
7146 
7147hunk ./docs/configuration.rst 448
7148-``expire.enabled =``
7149+``backend = S3``
7150 
7151hunk ./docs/configuration.rst 450
7152-``expire.mode =``
7153-
7154-``expire.override_lease_duration =``
7155-
7156-``expire.cutoff_date =``
7157-
7158-``expire.immutable =``
7159-
7160-``expire.mutable =``
7161-
7162-    These settings control garbage collection, in which the server will
7163-    delete shares that no longer have an up-to-date lease on them. Please see
7164-    `<garbage-collection.rst>`_ for full details.
7165+    The storage server can store all shares in an Amazon Simple Storage
7166+    Service (S3) bucket. For configuration details, see `<backends/S3.rst>`_.
7167 
7168 
7169 Running A Helper
7170}
7171[Fix some incorrect attribute accesses. refs #999
7172david-sarah@jacaranda.org**20110921031207
7173 Ignore-this: f1ea4c3ea191f6d4b719afaebd2b2bcd
7174] {
7175hunk ./src/allmydata/client.py 258
7176 
7177         backend = DiskBackend(storedir, readonly=readonly, reserved_space=reserved,
7178                               discard_storage=discard)
7179-        ss = StorageServer(nodeid, backend, storedir,
7180+        ss = StorageServer(self.nodeid, backend, storedir,
7181                            stats_provider=self.stats_provider,
7182                            expiration_policy=expiration_policy)
7183         self.add_service(ss)
7184hunk ./src/allmydata/interfaces.py 449
7185         Returns the storage index.
7186         """
7187 
7188+    def get_storage_index_string():
7189+        """
7190+        Returns the base32-encoded storage index.
7191+        """
7192+
7193     def get_shnum():
7194         """
7195         Returns the share number.
7196hunk ./src/allmydata/storage/backends/disk/immutable.py 138
7197     def get_storage_index(self):
7198         return self._storageindex
7199 
7200+    def get_storage_index_string(self):
7201+        return si_b2a(self._storageindex)
7202+
7203     def get_shnum(self):
7204         return self._shnum
7205 
7206hunk ./src/allmydata/storage/backends/disk/mutable.py 119
7207     def get_storage_index(self):
7208         return self._storageindex
7209 
7210+    def get_storage_index_string(self):
7211+        return si_b2a(self._storageindex)
7212+
7213     def get_shnum(self):
7214         return self._shnum
7215 
7216hunk ./src/allmydata/storage/bucket.py 86
7217     def __init__(self, ss, share):
7218         self.ss = ss
7219         self._share = share
7220-        self.storageindex = share.storageindex
7221-        self.shnum = share.shnum
7222+        self.storageindex = share.get_storage_index()
7223+        self.shnum = share.get_shnum()
7224 
7225     def __repr__(self):
7226         return "<%s %s %s>" % (self.__class__.__name__,
7227hunk ./src/allmydata/storage/expirer.py 6
7228 from twisted.python import log as twlog
7229 
7230 from allmydata.storage.crawler import ShareCrawler
7231-from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
7232+from allmydata.storage.common import UnknownMutableContainerVersionError, \
7233      UnknownImmutableContainerVersionError
7234 
7235 
7236hunk ./src/allmydata/storage/expirer.py 124
7237                     struct.error):
7238                 twlog.msg("lease-checker error processing %r" % (share,))
7239                 twlog.err()
7240-                which = (si_b2a(share.storageindex), share.get_shnum())
7241+                which = (share.get_storage_index_string(), share.get_shnum())
7242                 self.state["cycle-to-date"]["corrupt-shares"].append(which)
7243                 wks = (1, 1, 1, "unknown")
7244             would_keep_shares.append(wks)
7245hunk ./src/allmydata/storage/server.py 221
7246         alreadygot = set()
7247         for share in shareset.get_shares():
7248             share.add_or_renew_lease(lease_info)
7249-            alreadygot.add(share.shnum)
7250+            alreadygot.add(share.get_shnum())
7251 
7252         for shnum in sharenums - alreadygot:
7253             if shareset.has_incoming(shnum):
7254hunk ./src/allmydata/storage/server.py 324
7255 
7256         try:
7257             shareset = self.backend.get_shareset(storageindex)
7258-            return shareset.readv(self, shares, readv)
7259+            return shareset.readv(shares, readv)
7260         finally:
7261             self.add_latency("readv", time.time() - start)
7262 
7263hunk ./src/allmydata/storage/shares.py 1
7264-#! /usr/bin/python
7265-
7266-from allmydata.storage.mutable import MutableShareFile
7267-from allmydata.storage.immutable import ShareFile
7268-
7269-def get_share_file(filename):
7270-    f = open(filename, "rb")
7271-    prefix = f.read(32)
7272-    f.close()
7273-    if prefix == MutableShareFile.MAGIC:
7274-        return MutableShareFile(filename)
7275-    # otherwise assume it's immutable
7276-    return ShareFile(filename)
7277-
7278rmfile ./src/allmydata/storage/shares.py
7279hunk ./src/allmydata/test/no_network.py 387
7280         si = tahoe_uri.from_string(uri).get_storage_index()
7281         (i_shnum, i_serverid, i_sharefp) = from_share
7282         shares_dir = to_server.backend.get_shareset(si)._sharehomedir
7283+        fileutil.fp_make_dirs(shares_dir)
7284         i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
7285 
7286     def restore_all_shares(self, shares):
7287hunk ./src/allmydata/test/no_network.py 391
7288-        for share, data in shares.items():
7289-            share.home.setContent(data)
7290+        for sharepath, data in shares.items():
7291+            FilePath(sharepath).setContent(data)
7292 
7293     def delete_share(self, (shnum, serverid, sharefp)):
7294         sharefp.remove()
7295hunk ./src/allmydata/test/test_upload.py 744
7296         servertoshnums = {} # k: server, v: set(shnum)
7297 
7298         for i, c in self.g.servers_by_number.iteritems():
7299-            for (dirp, dirns, fns) in os.walk(c.sharedir):
7300+            for (dirp, dirns, fns) in os.walk(c.backend._sharedir.path):
7301                 for fn in fns:
7302                     try:
7303                         sharenum = int(fn)
7304}
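
A note on the pattern above: callers stop reaching into private share state
(share.shnum, share.storageindex) and go through accessors instead, with the
new get_storage_index_string() centralizing the base32 encoding that
expirer.py previously did inline via si_b2a. A minimal sketch of the
resulting interface; base64.b32encode stands in for Tahoe's own base32
alphabet, so the encoding here is only an approximation:

    import base64

    class ShareStub(object):
        """Illustrative share object exposing the new accessors."""
        def __init__(self, storageindex, shnum):
            self._storageindex = storageindex   # 16-byte binary storage index
            self._shnum = shnum

        def get_storage_index(self):
            return self._storageindex

        def get_storage_index_string(self):
            # the real shares call si_b2a from allmydata.storage.common
            return base64.b32encode(self._storageindex).lower()

        def get_shnum(self):
            return self._shnum

    share = ShareStub("\x00" * 16, 3)
    which = (share.get_storage_index_string(), share.get_shnum())
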
7305[docs/backends/S3.rst: remove Issues section. refs #999
7306david-sarah@jacaranda.org**20110921031625
7307 Ignore-this: c83d8f52b790bc32488869e6ee1df8c2
7308] hunk ./docs/backends/S3.rst 57
7309 
7310 Once configured, the WUI "storage server" page will provide information about
7311 how much space is being used and how many shares are being stored.
7312-
7313-
7314-Issues
7315-------
7316-
7317-Objects in an S3 bucket cannot be read for free. As a result, when Tahoe-LAFS
7318-is configured to store shares in S3 rather than on local disk, some common
7319-operations may behave differently:
7320-
7321-* Lease crawling/expiration is not yet implemented. As a result, shares will
7322-  be retained forever, and the Storage Server status web page will not show
7323-  information about the number of mutable/immutable shares present.
7324-
7325-* Enabling ``s3.max_space`` causes an extra S3 usage query to be sent for
7326-  each share upload, causing the upload process to run slightly slower and
7327-  incur more S3 request charges.
7328[docs/backends/S3.rst, disk.rst: describe type of space settings as 'quantity of space', not 'str'. refs #999
7329david-sarah@jacaranda.org**20110921031705
7330 Ignore-this: a74ed8e01b0a1ab5f07a1487d7bf138
7331] {
7332hunk ./docs/backends/S3.rst 38
7333     else, you may be able to use some other S3-like service if it is
7334     sufficiently compatible.
7335 
7336-``s3.max_space = (str, optional)``
7337+``s3.max_space = (quantity of space, optional)``
7338 
7339     This tells the server to limit how much space can be used in the S3
7340     bucket. Before each share is uploaded, the server will ask S3 for the
7341hunk ./docs/backends/disk.rst 14
7342 
7343     This enables use of the disk backend, and is the default.
7344 
7345-``reserved_space = (str, optional)``
7346+``reserved_space = (quantity of space, optional)``
7347 
7348     If provided, this value defines how much disk space is reserved: the
7349     storage server will not accept any share that causes the amount of free
7350}
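
For what a 'quantity of space' accepts, the test_client hunks later in this
bundle pin the semantics: plain byte counts plus decimal suffixes (10K ->
10,000; 5mB -> 5,000,000; 78Gb -> 78,000,000,000), with an unparseable value
falling back to 0 on the client side. Tahoe has its own parser in
allmydata.util; the sketch below is only a hypothetical restatement of those
expectations:

    import re

    _MULTIPLIERS = {"": 1, "K": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}

    def parse_space(s):
        m = re.match(r"^\s*(\d+)\s*([KMGT]?)B?\s*$", s, re.IGNORECASE)
        if m is None:
            raise ValueError("not a quantity of space: %r" % (s,))
        return int(m.group(1)) * _MULTIPLIERS[m.group(2).upper()]

    assert parse_space("1000") == 1000
    assert parse_space("10K") == 10*1000
    assert parse_space("5mB") == 5*1000*1000
    assert parse_space("78Gb") == 78*1000*1000*1000
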
7351[More fixes to tests needed for pluggable backends. refs #999
7352david-sarah@jacaranda.org**20110921184649
7353 Ignore-this: 9be0d3a98e350fd4e17a07d2c00bb4ca
7354] {
7355hunk ./src/allmydata/scripts/debug.py 8
7356 from twisted.python import usage, failure
7357 from twisted.internet import defer
7358 from twisted.scripts import trial as twisted_trial
7359+from twisted.python.filepath import FilePath
7360 
7361 
7362 class DumpOptions(usage.Options):
7363hunk ./src/allmydata/scripts/debug.py 38
7364         self['filename'] = argv_to_abspath(filename)
7365 
7366 def dump_share(options):
7367-    from allmydata.storage.mutable import MutableShareFile
7368+    from allmydata.storage.backends.disk.disk_backend import get_share
7369     from allmydata.util.encodingutil import quote_output
7370 
7371     out = options.stdout
7372hunk ./src/allmydata/scripts/debug.py 46
7373     # check the version, to see if we have a mutable or immutable share
7374     print >>out, "share filename: %s" % quote_output(options['filename'])
7375 
7376-    f = open(options['filename'], "rb")
7377-    prefix = f.read(32)
7378-    f.close()
7379-    if prefix == MutableShareFile.MAGIC:
7380-        return dump_mutable_share(options)
7381-    # otherwise assume it's immutable
7382-    return dump_immutable_share(options)
7383-
7384-def dump_immutable_share(options):
7385-    from allmydata.storage.immutable import ShareFile
7386+    share = get_share("", 0, FilePath(options['filename']))
7387+    if share.sharetype == "mutable":
7388+        return dump_mutable_share(options, share)
7389+    else:
7390+        assert share.sharetype == "immutable", share.sharetype
7391+        return dump_immutable_share(options, share)
7392 
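
The rewritten dump_share no longer sniffs the container header itself:
get_share reads it and returns a share object whose sharetype attribute
drives the dispatch. A rough equivalent of the inline check it replaces,
assuming (consistent with the corrupt-share hunk below) that a mutable
container starts with MutableDiskShare.MAGIC:

    from allmydata.storage.backends.disk.mutable import MutableDiskShare

    def sharetype_of(fp):
        f = fp.open("rb")
        try:
            prefix = f.read(32)
        finally:
            f.close()
        if prefix == MutableDiskShare.MAGIC:
            return "mutable"
        # otherwise assume it's immutable, as the old code did
        return "immutable"
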
7393hunk ./src/allmydata/scripts/debug.py 53
7394+def dump_immutable_share(options, share):
7395     out = options.stdout
7396hunk ./src/allmydata/scripts/debug.py 55
7397-    f = ShareFile(options['filename'])
7398     if not options["leases-only"]:
7399hunk ./src/allmydata/scripts/debug.py 56
7400-        dump_immutable_chk_share(f, out, options)
7401-    dump_immutable_lease_info(f, out)
7402+        dump_immutable_chk_share(share, out, options)
7403+    dump_immutable_lease_info(share, out)
7404     print >>out
7405     return 0
7406 
7407hunk ./src/allmydata/scripts/debug.py 166
7408     return when
7409 
7410 
7411-def dump_mutable_share(options):
7412-    from allmydata.storage.mutable import MutableShareFile
7413+def dump_mutable_share(options, m):
7414     from allmydata.util import base32, idlib
7415     out = options.stdout
7416hunk ./src/allmydata/scripts/debug.py 169
7417-    m = MutableShareFile(options['filename'])
7418     f = open(options['filename'], "rb")
7419     WE, nodeid = m._read_write_enabler_and_nodeid(f)
7420     num_extra_leases = m._read_num_extra_leases(f)
7421hunk ./src/allmydata/scripts/debug.py 641
7422     /home/warner/testnet/node-1/storage/shares/44k/44kai1tui348689nrw8fjegc8c/9
7423     /home/warner/testnet/node-2/storage/shares/44k/44kai1tui348689nrw8fjegc8c/2
7424     """
7425-    from allmydata.storage.server import si_a2b, storage_index_to_dir
7426-    from allmydata.util.encodingutil import listdir_unicode
7427+    from allmydata.storage.server import si_a2b
7428+    from allmydata.storage.backends.disk_backend import si_si2dir
7429+    from allmydata.util.encodingutil import quote_filepath
7430 
7431     out = options.stdout
7432hunk ./src/allmydata/scripts/debug.py 646
7433-    sharedir = storage_index_to_dir(si_a2b(options.si_s))
7434-    for d in options.nodedirs:
7435-        d = os.path.join(d, "storage/shares", sharedir)
7436-        if os.path.exists(d):
7437-            for shnum in listdir_unicode(d):
7438-                print >>out, os.path.join(d, shnum)
7439+    si = si_a2b(options.si_s)
7440+    for nodedir in options.nodedirs:
7441+        sharedir = si_si2dir(nodedir.child("storage").child("shares"), si)
7442+        if sharedir.exists():
7443+            for sharefp in sharedir.children():
7444+                print >>out, quote_filepath(sharefp, quotemarks=False)
7445 
7446     return 0
7447 
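
The locate-shares rewrite above swaps os.path string joins for FilePath
traversal, with si_si2dir mapping a storage index to its two-level prefix/SI
directory. A hedged sketch of the same traversal; the helper below merely
approximates the real si_si2dir, since Tahoe's base32 alphabet differs from
RFC 3548:

    import base64
    from twisted.python.filepath import FilePath

    def si_si2dir_approx(sharesdir, si):
        si_s = base64.b32encode(si).lower()
        return sharesdir.child(si_s[:2]).child(si_s)

    nodedir = FilePath("/tmp/example-node")    # hypothetical node directory
    sharedir = si_si2dir_approx(nodedir.child("storage").child("shares"),
                                "\x00" * 16)
    if sharedir.exists():
        for sharefp in sharedir.children():
            print sharefp.path
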
7448hunk ./src/allmydata/scripts/debug.py 878
7449         print >>err, "Error processing %s" % quote_output(si_dir)
7450         failure.Failure().printTraceback(err)
7451 
7452+
7453 class CorruptShareOptions(usage.Options):
7454     def getSynopsis(self):
7455         return "Usage: tahoe debug corrupt-share SHARE_FILENAME"
7456hunk ./src/allmydata/scripts/debug.py 902
7457 Obviously, this command should not be used in normal operation.
7458 """
7459         return t
7460+
7461     def parseArgs(self, filename):
7462         self['filename'] = filename
7463 
7464hunk ./src/allmydata/scripts/debug.py 907
7465 def corrupt_share(options):
7466+    do_corrupt_share(options.stdout, FilePath(options['filename']), options['offset'])
7467+
7468+def do_corrupt_share(out, fp, offset="block-random"):
7469     import random
7470hunk ./src/allmydata/scripts/debug.py 911
7471-    from allmydata.storage.mutable import MutableShareFile
7472-    from allmydata.storage.immutable import ShareFile
7473+    from allmydata.storage.backends.disk.mutable import MutableDiskShare
7474+    from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
7475     from allmydata.mutable.layout import unpack_header
7476     from allmydata.immutable.layout import ReadBucketProxy
7477hunk ./src/allmydata/scripts/debug.py 915
7478-    out = options.stdout
7479-    fn = options['filename']
7480-    assert options["offset"] == "block-random", "other offsets not implemented"
7481+
7482+    assert offset == "block-random", "other offsets not implemented"
7483+
7484     # first, what kind of share is it?
7485 
7486     def flip_bit(start, end):
7487hunk ./src/allmydata/scripts/debug.py 924
7488         offset = random.randrange(start, end)
7489         bit = random.randrange(0, 8)
7490         print >>out, "[%d..%d):  %d.b%d" % (start, end, offset, bit)
7491-        f = open(fn, "rb+")
7492-        f.seek(offset)
7493-        d = f.read(1)
7494-        d = chr(ord(d) ^ 0x01)
7495-        f.seek(offset)
7496-        f.write(d)
7497-        f.close()
7498+        f = fp.open("rb+")
7499+        try:
7500+            f.seek(offset)
7501+            d = f.read(1)
7502+            d = chr(ord(d) ^ 0x01)
7503+            f.seek(offset)
7504+            f.write(d)
7505+        finally:
7506+            f.close()
7507 
7508hunk ./src/allmydata/scripts/debug.py 934
7509-    f = open(fn, "rb")
7510-    prefix = f.read(32)
7511-    f.close()
7512-    if prefix == MutableShareFile.MAGIC:
7513-        # mutable
7514-        m = MutableShareFile(fn)
7515-        f = open(fn, "rb")
7516-        f.seek(m.DATA_OFFSET)
7517-        data = f.read(2000)
7518-        # make sure this slot contains an SMDF share
7519-        assert data[0] == "\x00", "non-SDMF mutable shares not supported"
7520+    f = fp.open("rb")
7521+    try:
7522+        prefix = f.read(32)
7523+    finally:
7524         f.close()
7525hunk ./src/allmydata/scripts/debug.py 939
7526+    if prefix == MutableDiskShare.MAGIC:
7527+        # mutable
7528+        m = MutableDiskShare("", 0, fp)
7529+        f = fp.open("rb")
7530+        try:
7531+            f.seek(m.DATA_OFFSET)
7532+            data = f.read(2000)
7533+            # make sure this slot contains an SDMF share
7534+            assert data[0] == "\x00", "non-SDMF mutable shares not supported"
7535+        finally:
7536+            f.close()
7537 
7538         (version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize,
7539          ig_datalen, offsets) = unpack_header(data)
7540hunk ./src/allmydata/scripts/debug.py 960
7541         flip_bit(start, end)
7542     else:
7543         # otherwise assume it's immutable
7544-        f = ShareFile(fn)
7545+        f = ImmutableDiskShare("", 0, fp)
7546         bp = ReadBucketProxy(None, None, '')
7547         offsets = bp._parse_offsets(f.read_share_data(0, 0x24))
7548         start = f._data_offset + offsets["data"]
7549hunk ./src/allmydata/storage/backends/base.py 92
7550             (testv, datav, new_length) = test_and_write_vectors[sharenum]
7551             if sharenum in shares:
7552                 if not shares[sharenum].check_testv(testv):
7553-                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
7554+                    storageserver.log("testv failed: [%d]: %r" % (sharenum, testv))
7555                     testv_is_good = False
7556                     break
7557             else:
7558hunk ./src/allmydata/storage/backends/base.py 99
7559                 # compare the vectors against an empty share, in which all
7560                 # reads return empty strings
7561                 if not EmptyShare().check_testv(testv):
7562-                    self.log("testv failed (empty): [%d] %r" % (sharenum,
7563-                                                                testv))
7564+                    storageserver.log("testv failed (empty): [%d] %r" % (sharenum, testv))
7565                     testv_is_good = False
7566                     break
7567 
7568hunk ./src/allmydata/test/test_cli.py 2892
7569             # delete one, corrupt a second
7570             shares = self.find_uri_shares(self.uri)
7571             self.failUnlessReallyEqual(len(shares), 10)
7572-            os.unlink(shares[0][2])
7573-            cso = debug.CorruptShareOptions()
7574-            cso.stdout = StringIO()
7575-            cso.parseOptions([shares[1][2]])
7576+            shares[0][2].remove()
7577+            stdout = StringIO()
7578+            sharefile = shares[1][2]
7579             storage_index = uri.from_string(self.uri).get_storage_index()
7580             self._corrupt_share_line = "  server %s, SI %s, shnum %d" % \
7581                                        (base32.b2a(shares[1][1]),
7582hunk ./src/allmydata/test/test_cli.py 2900
7583                                         base32.b2a(storage_index),
7584                                         shares[1][0])
7585-            debug.corrupt_share(cso)
7586+            debug.do_corrupt_share(stdout, sharefile)
7587         d.addCallback(_clobber_shares)
7588 
7589         d.addCallback(lambda ign: self.do_cli("check", "--verify", self.uri))
7590hunk ./src/allmydata/test/test_cli.py 3017
7591         def _clobber_shares(ignored):
7592             shares = self.find_uri_shares(self.uris[u"g\u00F6\u00F6d"])
7593             self.failUnlessReallyEqual(len(shares), 10)
7594-            os.unlink(shares[0][2])
7595+            shares[0][2].remove()
7596 
7597             shares = self.find_uri_shares(self.uris["mutable"])
7598hunk ./src/allmydata/test/test_cli.py 3020
7599-            cso = debug.CorruptShareOptions()
7600-            cso.stdout = StringIO()
7601-            cso.parseOptions([shares[1][2]])
7602+            stdout = StringIO()
7603+            sharefile = shares[1][2]
7604             storage_index = uri.from_string(self.uris["mutable"]).get_storage_index()
7605             self._corrupt_share_line = " corrupt: server %s, SI %s, shnum %d" % \
7606                                        (base32.b2a(shares[1][1]),
7607hunk ./src/allmydata/test/test_cli.py 3027
7608                                         base32.b2a(storage_index),
7609                                         shares[1][0])
7610-            debug.corrupt_share(cso)
7611+            debug.do_corrupt_share(stdout, sharefile)
7612         d.addCallback(_clobber_shares)
7613 
7614         # root
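
The effect of the do_corrupt_share refactoring on the tests above: no
CorruptShareOptions, no option parsing, just a writable stream and a
FilePath. A usage sketch (the corrupting call itself is left commented,
since it expects a real share file):

    from StringIO import StringIO
    from twisted.python.filepath import FilePath

    stdout = StringIO()
    sharefile = FilePath("example-share")    # hypothetical share file
    # debug.do_corrupt_share(stdout, sharefile) flips one bit in place
    # and reports the chosen offset on stdout.
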
7615hunk ./src/allmydata/test/test_client.py 90
7616                            "enabled = true\n" + \
7617                            "reserved_space = 1000\n")
7618         c = client.Client(basedir)
7619-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 1000)
7620+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 1000)
7621 
7622     def test_reserved_2(self):
7623         basedir = "client.Basic.test_reserved_2"
7624hunk ./src/allmydata/test/test_client.py 101
7625                            "enabled = true\n" + \
7626                            "reserved_space = 10K\n")
7627         c = client.Client(basedir)
7628-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 10*1000)
7629+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 10*1000)
7630 
7631     def test_reserved_3(self):
7632         basedir = "client.Basic.test_reserved_3"
7633hunk ./src/allmydata/test/test_client.py 112
7634                            "enabled = true\n" + \
7635                            "reserved_space = 5mB\n")
7636         c = client.Client(basedir)
7637-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space,
7638+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space,
7639                              5*1000*1000)
7640 
7641     def test_reserved_4(self):
7642hunk ./src/allmydata/test/test_client.py 124
7643                            "enabled = true\n" + \
7644                            "reserved_space = 78Gb\n")
7645         c = client.Client(basedir)
7646-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space,
7647+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space,
7648                              78*1000*1000*1000)
7649 
7650     def test_reserved_bad(self):
7651hunk ./src/allmydata/test/test_client.py 136
7652                            "enabled = true\n" + \
7653                            "reserved_space = bogus\n")
7654         c = client.Client(basedir)
7655-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 0)
7656+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 0)
7657 
7658     def _permute(self, sb, key):
7659         return [ s.get_serverid() for s in sb.get_servers_for_psi(key) ]
7660hunk ./src/allmydata/test/test_crawler.py 7
7661 from twisted.trial import unittest
7662 from twisted.application import service
7663 from twisted.internet import defer
7664+from twisted.python.filepath import FilePath
7665 from foolscap.api import eventually, fireEventually
7666 
7667 from allmydata.util import fileutil, hashutil, pollmixin
7668hunk ./src/allmydata/test/test_crawler.py 13
7669 from allmydata.storage.server import StorageServer, si_b2a
7670 from allmydata.storage.crawler import ShareCrawler, TimeSliceExceeded
7671+from allmydata.storage.backends.disk.disk_backend import DiskBackend
7672 
7673 from allmydata.test.test_storage import FakeCanary
7674 from allmydata.test.common_util import StallMixin
7675hunk ./src/allmydata/test/test_crawler.py 115
7676 
7677     def test_immediate(self):
7678         self.basedir = "crawler/Basic/immediate"
7679-        fileutil.make_dirs(self.basedir)
7680         serverid = "\x00" * 20
7681hunk ./src/allmydata/test/test_crawler.py 116
7682-        ss = StorageServer(self.basedir, serverid)
7683+        fp = FilePath(self.basedir)
7684+        backend = DiskBackend(fp)
7685+        ss = StorageServer(serverid, backend, fp)
7686         ss.setServiceParent(self.s)
7687 
7688         sis = [self.write(i, ss, serverid) for i in range(10)]
7689hunk ./src/allmydata/test/test_crawler.py 122
7690-        statefile = os.path.join(self.basedir, "statefile")
7691+        statefp = fp.child("statefile")
7692 
7693hunk ./src/allmydata/test/test_crawler.py 124
7694-        c = BucketEnumeratingCrawler(ss, statefile, allowed_cpu_percentage=.1)
7695+        c = BucketEnumeratingCrawler(backend, statefp, allowed_cpu_percentage=.1)
7696         c.load_state()
7697 
7698         c.start_current_prefix(time.time())
7699hunk ./src/allmydata/test/test_crawler.py 137
7700         self.failUnlessEqual(sorted(sis), sorted(c.all_buckets))
7701 
7702         # check that a new crawler picks up on the state file properly
7703-        c2 = BucketEnumeratingCrawler(ss, statefile)
7704+        c2 = BucketEnumeratingCrawler(backend, statefp)
7705         c2.load_state()
7706 
7707         c2.start_current_prefix(time.time())
7708hunk ./src/allmydata/test/test_crawler.py 145
7709 
7710     def test_service(self):
7711         self.basedir = "crawler/Basic/service"
7712-        fileutil.make_dirs(self.basedir)
7713         serverid = "\x00" * 20
7714hunk ./src/allmydata/test/test_crawler.py 146
7715-        ss = StorageServer(self.basedir, serverid)
7716+        fp = FilePath(self.basedir)
7717+        backend = DiskBackend(fp)
7718+        ss = StorageServer(serverid, backend, fp)
7719         ss.setServiceParent(self.s)
7720 
7721         sis = [self.write(i, ss, serverid) for i in range(10)]
7722hunk ./src/allmydata/test/test_crawler.py 153
7723 
7724-        statefile = os.path.join(self.basedir, "statefile")
7725-        c = BucketEnumeratingCrawler(ss, statefile)
7726+        statefp = fp.child("statefile")
7727+        c = BucketEnumeratingCrawler(backend, statefp)
7728         c.setServiceParent(self.s)
7729 
7730         # it should be legal to call get_state() and get_progress() right
7731hunk ./src/allmydata/test/test_crawler.py 174
7732 
7733     def test_paced(self):
7734         self.basedir = "crawler/Basic/paced"
7735-        fileutil.make_dirs(self.basedir)
7736         serverid = "\x00" * 20
7737hunk ./src/allmydata/test/test_crawler.py 175
7738-        ss = StorageServer(self.basedir, serverid)
7739+        fp = FilePath(self.basedir)
7740+        backend = DiskBackend(fp)
7741+        ss = StorageServer(serverid, backend, fp)
7742         ss.setServiceParent(self.s)
7743 
7744         # put four buckets in each prefixdir
7745hunk ./src/allmydata/test/test_crawler.py 186
7746             for tail in range(4):
7747                 sis.append(self.write(i, ss, serverid, tail))
7748 
7749-        statefile = os.path.join(self.basedir, "statefile")
7750+        statefp = fp.child("statefile")
7751 
7752hunk ./src/allmydata/test/test_crawler.py 188
7753-        c = PacedCrawler(ss, statefile)
7754+        c = PacedCrawler(backend, statefp)
7755         c.load_state()
7756         try:
7757             c.start_current_prefix(time.time())
7758hunk ./src/allmydata/test/test_crawler.py 213
7759         del c
7760 
7761         # start a new crawler, it should start from the beginning
7762-        c = PacedCrawler(ss, statefile)
7763+        c = PacedCrawler(backend, statefp)
7764         c.load_state()
7765         try:
7766             c.start_current_prefix(time.time())
7767hunk ./src/allmydata/test/test_crawler.py 226
7768         c.cpu_slice = PacedCrawler.cpu_slice
7769 
7770         # a third crawler should pick up from where it left off
7771-        c2 = PacedCrawler(ss, statefile)
7772+        c2 = PacedCrawler(backend, statefp)
7773         c2.all_buckets = c.all_buckets[:]
7774         c2.load_state()
7775         c2.countdown = -1
7776hunk ./src/allmydata/test/test_crawler.py 237
7777 
7778         # now stop it at the end of a bucket (countdown=4), to exercise a
7779         # different place that checks the time
7780-        c = PacedCrawler(ss, statefile)
7781+        c = PacedCrawler(backend, statefp)
7782         c.load_state()
7783         c.countdown = 4
7784         try:
7785hunk ./src/allmydata/test/test_crawler.py 256
7786 
7787         # stop it again at the end of the bucket, check that a new checker
7788         # picks up correctly
7789-        c = PacedCrawler(ss, statefile)
7790+        c = PacedCrawler(backend, statefp)
7791         c.load_state()
7792         c.countdown = 4
7793         try:
7794hunk ./src/allmydata/test/test_crawler.py 266
7795         # that should stop at the end of one of the buckets.
7796         c.save_state()
7797 
7798-        c2 = PacedCrawler(ss, statefile)
7799+        c2 = PacedCrawler(backend, statefp)
7800         c2.all_buckets = c.all_buckets[:]
7801         c2.load_state()
7802         c2.countdown = -1
7803hunk ./src/allmydata/test/test_crawler.py 277
7804 
7805     def test_paced_service(self):
7806         self.basedir = "crawler/Basic/paced_service"
7807-        fileutil.make_dirs(self.basedir)
7808         serverid = "\x00" * 20
7809hunk ./src/allmydata/test/test_crawler.py 278
7810-        ss = StorageServer(self.basedir, serverid)
7811+        fp = FilePath(self.basedir)
7812+        backend = DiskBackend(fp)
7813+        ss = StorageServer(serverid, backend, fp)
7814         ss.setServiceParent(self.s)
7815 
7816         sis = [self.write(i, ss, serverid) for i in range(10)]
7817hunk ./src/allmydata/test/test_crawler.py 285
7818 
7819-        statefile = os.path.join(self.basedir, "statefile")
7820-        c = PacedCrawler(ss, statefile)
7821+        statefp = fp.child("statefile")
7822+        c = PacedCrawler(backend, statefp)
7823 
7824         did_check_progress = [False]
7825         def check_progress():
7826hunk ./src/allmydata/test/test_crawler.py 345
7827         # and read the stdout when it runs.
7828 
7829         self.basedir = "crawler/Basic/cpu_usage"
7830-        fileutil.make_dirs(self.basedir)
7831         serverid = "\x00" * 20
7832hunk ./src/allmydata/test/test_crawler.py 346
7833-        ss = StorageServer(self.basedir, serverid)
7834+        fp = FilePath(self.basedir)
7835+        backend = DiskBackend(fp)
7836+        ss = StorageServer(serverid, backend, fp)
7837         ss.setServiceParent(self.s)
7838 
7839         for i in range(10):
7840hunk ./src/allmydata/test/test_crawler.py 354
7841             self.write(i, ss, serverid)
7842 
7843-        statefile = os.path.join(self.basedir, "statefile")
7844-        c = ConsumingCrawler(ss, statefile)
7845+        statefp = fp.child("statefile")
7846+        c = ConsumingCrawler(backend, statefp)
7847         c.setServiceParent(self.s)
7848 
7849         # this will run as fast as it can, consuming about 50ms per call to
7850hunk ./src/allmydata/test/test_crawler.py 391
7851 
7852     def test_empty_subclass(self):
7853         self.basedir = "crawler/Basic/empty_subclass"
7854-        fileutil.make_dirs(self.basedir)
7855         serverid = "\x00" * 20
7856hunk ./src/allmydata/test/test_crawler.py 392
7857-        ss = StorageServer(self.basedir, serverid)
7858+        fp = FilePath(self.basedir)
7859+        backend = DiskBackend(fp)
7860+        ss = StorageServer(serverid, backend, fp)
7861         ss.setServiceParent(self.s)
7862 
7863         for i in range(10):
7864hunk ./src/allmydata/test/test_crawler.py 400
7865             self.write(i, ss, serverid)
7866 
7867-        statefile = os.path.join(self.basedir, "statefile")
7868-        c = ShareCrawler(ss, statefile)
7869+        statefp = fp.child("statefile")
7870+        c = ShareCrawler(backend, statefp)
7871         c.slow_start = 0
7872         c.setServiceParent(self.s)
7873 
7874hunk ./src/allmydata/test/test_crawler.py 417
7875         d.addCallback(_done)
7876         return d
7877 
7878-
7879     def test_oneshot(self):
7880         self.basedir = "crawler/Basic/oneshot"
7881hunk ./src/allmydata/test/test_crawler.py 419
7882-        fileutil.make_dirs(self.basedir)
7883         serverid = "\x00" * 20
7884hunk ./src/allmydata/test/test_crawler.py 420
7885-        ss = StorageServer(self.basedir, serverid)
7886+        fp = FilePath(self.basedir)
7887+        backend = DiskBackend(fp)
7888+        ss = StorageServer(serverid, backend, fp)
7889         ss.setServiceParent(self.s)
7890 
7891         for i in range(30):
7892hunk ./src/allmydata/test/test_crawler.py 428
7893             self.write(i, ss, serverid)
7894 
7895-        statefile = os.path.join(self.basedir, "statefile")
7896-        c = OneShotCrawler(ss, statefile)
7897+        statefp = fp.child("statefile")
7898+        c = OneShotCrawler(backend, statefp)
7899         c.setServiceParent(self.s)
7900 
7901         d = c.finished_d
7902hunk ./src/allmydata/test/test_crawler.py 447
7903             self.failUnlessEqual(s["current-cycle"], None)
7904         d.addCallback(_check)
7905         return d
7906-
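
A condensed restatement of the wiring every crawler test above now uses,
with names taken directly from the hunks (treat it as a sketch, not a
guaranteed-minimal setup):

    from twisted.python.filepath import FilePath
    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    fp = FilePath("crawler/Basic/example")   # hypothetical test directory
    backend = DiskBackend(fp)
    ss = StorageServer("\x00" * 20, backend, fp)  # serverid, backend, storedir
    statefp = fp.child("statefile")
    # crawlers are now constructed as SomeCrawler(backend, statefp)
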
7907hunk ./src/allmydata/test/test_deepcheck.py 23
7908      ShouldFailMixin
7909 from allmydata.test.common_util import StallMixin
7910 from allmydata.test.no_network import GridTestMixin
7911+from allmydata.scripts import debug
7912+
7913 
7914 timeout = 2400 # One of these took 1046.091s on Zandr's ARM box.
7915 
7916hunk ./src/allmydata/test/test_deepcheck.py 905
7917         d.addErrback(self.explain_error)
7918         return d
7919 
7920-
7921-
7922     def set_up_damaged_tree(self):
7923         # 6.4s
7924 
7925hunk ./src/allmydata/test/test_deepcheck.py 989
7926 
7927         return d
7928 
7929-    def _run_cli(self, argv):
7930-        stdout, stderr = StringIO(), StringIO()
7931-        # this can only do synchronous operations
7932-        assert argv[0] == "debug"
7933-        runner.runner(argv, run_by_human=False, stdout=stdout, stderr=stderr)
7934-        return stdout.getvalue()
7935-
7936     def _delete_some_shares(self, node):
7937         self.delete_shares_numbered(node.get_uri(), [0,1])
7938 
7939hunk ./src/allmydata/test/test_deepcheck.py 995
7940     def _corrupt_some_shares(self, node):
7941         for (shnum, serverid, sharefile) in self.find_uri_shares(node.get_uri()):
7942             if shnum in (0,1):
7943-                self._run_cli(["debug", "corrupt-share", sharefile])
7944+                debug.do_corrupt_share(StringIO(), sharefile)
7945 
7946     def _delete_most_shares(self, node):
7947         self.delete_shares_numbered(node.get_uri(), range(1,10))
7948hunk ./src/allmydata/test/test_deepcheck.py 1000
7949 
7950-
7951     def check_is_healthy(self, cr, where):
7952         try:
7953             self.failUnless(ICheckResults.providedBy(cr), (cr, type(cr), where))
7954hunk ./src/allmydata/test/test_download.py 134
7955             for shnum in shares_for_server:
7956                 share_dir = self.get_server(i).backend.get_shareset(si)._sharehomedir
7957                 fileutil.fp_make_dirs(share_dir)
7958-                share_dir.child(str(shnum)).setContent(shares[shnum])
7959+                share_dir.child(str(shnum)).setContent(shares_for_server[shnum])
7960 
7961     def load_shares(self, ignored=None):
7962         # this uses the data generated by create_shares() to populate the
7963hunk ./src/allmydata/test/test_hung_server.py 32
7964 
7965     def _break(self, servers):
7966         for ss in servers:
7967-            self.g.break_server(ss.get_serverid())
7968+            self.g.break_server(ss.original.get_serverid())
7969 
7970     def _hang(self, servers, **kwargs):
7971         for ss in servers:
7972hunk ./src/allmydata/test/test_hung_server.py 67
7973         serverids = [ss.original.get_serverid() for ss in from_servers]
7974         for (i_shnum, i_serverid, i_sharefp) in self.shares:
7975             if i_serverid in serverids:
7976-                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
7977+                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server.original)
7978 
7979         self.shares = self.find_uri_shares(self.uri)
7980 
7981hunk ./src/allmydata/test/test_mutable.py 3669
7982         # Now execute each assignment by writing the storage.
7983         for (share, servernum) in assignments:
7984             sharedata = base64.b64decode(self.sdmf_old_shares[share])
7985-            storage_dir = self.get_server(servernum).backend.get_shareset(si).sharehomedir
7986+            storage_dir = self.get_server(servernum).backend.get_shareset(si)._sharehomedir
7987             fileutil.fp_make_dirs(storage_dir)
7988             storage_dir.child("%d" % share).setContent(sharedata)
7989         # ...and verify that the shares are there.
7990hunk ./src/allmydata/test/test_no_network.py 10
7991 from allmydata.immutable.upload import Data
7992 from allmydata.util.consumer import download_to_data
7993 
7994+
7995 class Harness(unittest.TestCase):
7996     def setUp(self):
7997         self.s = service.MultiService()
7998hunk ./src/allmydata/test/test_storage.py 1
7999-import time, os.path, platform, stat, re, simplejson, struct, shutil
8000+import time, os.path, platform, stat, re, simplejson, struct, shutil, itertools
8001 
8002 import mock
8003 
8004hunk ./src/allmydata/test/test_storage.py 6
8005 from twisted.trial import unittest
8006-
8007 from twisted.internet import defer
8008 from twisted.application import service
8009hunk ./src/allmydata/test/test_storage.py 8
8010+from twisted.python.filepath import FilePath
8011 from foolscap.api import fireEventually
8012hunk ./src/allmydata/test/test_storage.py 10
8013-import itertools
8014+
8015 from allmydata import interfaces
8016 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
8017 from allmydata.storage.server import StorageServer
8018hunk ./src/allmydata/test/test_storage.py 14
8019+from allmydata.storage.backends.disk.disk_backend import DiskBackend
8020 from allmydata.storage.backends.disk.mutable import MutableDiskShare
8021 from allmydata.storage.bucket import BucketWriter, BucketReader
8022 from allmydata.storage.common import DataTooLargeError, \
8023hunk ./src/allmydata/test/test_storage.py 310
8024         return self.sparent.stopService()
8025 
8026     def workdir(self, name):
8027-        basedir = os.path.join("storage", "Server", name)
8028-        return basedir
8029+        return FilePath("storage").child("Server").child(name)
8030 
8031     def create(self, name, reserved_space=0, klass=StorageServer):
8032         workdir = self.workdir(name)
8033hunk ./src/allmydata/test/test_storage.py 314
8034-        ss = klass(workdir, "\x00" * 20, reserved_space=reserved_space,
8035+        backend = DiskBackend(workdir, readonly=False, reserved_space=reserved_space)
8036+        ss = klass("\x00" * 20, backend, workdir,
8037                    stats_provider=FakeStatsProvider())
8038         ss.setServiceParent(self.sparent)
8039         return ss
8040hunk ./src/allmydata/test/test_storage.py 1386
8041 
8042     def tearDown(self):
8043         self.sparent.stopService()
8044-        shutil.rmtree(self.workdir("MDMFProxies storage test server"))
8045+        fileutil.fp_remove(self.workdir("MDMFProxies storage test server"))
8046 
8047 
8048     def write_enabler(self, we_tag):
8049hunk ./src/allmydata/test/test_storage.py 2781
8050         return self.sparent.stopService()
8051 
8052     def workdir(self, name):
8053-        basedir = os.path.join("storage", "Server", name)
8054-        return basedir
8055+        return FilePath("storage").child("Server").child(name)
8056 
8057     def create(self, name):
8058         workdir = self.workdir(name)
8059hunk ./src/allmydata/test/test_storage.py 2785
8060-        ss = StorageServer(workdir, "\x00" * 20)
8061+        backend = DiskBackend(workdir)
8062+        ss = StorageServer("\x00" * 20, backend, workdir)
8063         ss.setServiceParent(self.sparent)
8064         return ss
8065 
8066hunk ./src/allmydata/test/test_storage.py 4061
8067         }
8068 
8069         basedir = "storage/WebStatus/status_right_disk_stats"
8070-        fileutil.make_dirs(basedir)
8071-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=reserved_space)
8072-        expecteddir = ss.sharedir
8073+        fp = FilePath(basedir)
8074+        backend = DiskBackend(fp, readonly=False, reserved_space=reserved_space)
8075+        ss = StorageServer("\x00" * 20, backend, fp)
8076+        expecteddir = backend._sharedir
8077         ss.setServiceParent(self.s)
8078         w = StorageStatus(ss)
8079         html = w.renderSynchronously()
8080hunk ./src/allmydata/test/test_storage.py 4084
8081 
8082     def test_readonly(self):
8083         basedir = "storage/WebStatus/readonly"
8084-        fileutil.make_dirs(basedir)
8085-        ss = StorageServer(basedir, "\x00" * 20, readonly_storage=True)
8086+        fp = FilePath(basedir)
8087+        backend = DiskBackend(fp, readonly=True)
8088+        ss = StorageServer("\x00" * 20, backend, fp)
8089         ss.setServiceParent(self.s)
8090         w = StorageStatus(ss)
8091         html = w.renderSynchronously()
8092hunk ./src/allmydata/test/test_storage.py 4096
8093 
8094     def test_reserved(self):
8095         basedir = "storage/WebStatus/reserved"
8096-        fileutil.make_dirs(basedir)
8097-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=10e6)
8098-        ss.setServiceParent(self.s)
8099-        w = StorageStatus(ss)
8100-        html = w.renderSynchronously()
8101-        self.failUnlessIn("<h1>Storage Server Status</h1>", html)
8102-        s = remove_tags(html)
8103-        self.failUnlessIn("Reserved space: - 10.00 MB (10000000)", s)
8104-
8105-    def test_huge_reserved(self):
8106-        basedir = "storage/WebStatus/reserved"
8107-        fileutil.make_dirs(basedir)
8108-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=10e6)
8109+        fp = FilePath(basedir)
8110+        backend = DiskBackend(fp, readonly=False, reserved_space=10e6)
8111+        ss = StorageServer("\x00" * 20, backend, fp)
8112         ss.setServiceParent(self.s)
8113         w = StorageStatus(ss)
8114         html = w.renderSynchronously()
8115hunk ./src/allmydata/test/test_upload.py 3
8116 # -*- coding: utf-8 -*-
8117 
8118-import os, shutil
8119+import os
8120 from cStringIO import StringIO
8121 from twisted.trial import unittest
8122 from twisted.python.failure import Failure
8123hunk ./src/allmydata/test/test_upload.py 14
8124 from allmydata import uri, monitor, client
8125 from allmydata.immutable import upload, encode
8126 from allmydata.interfaces import FileTooLargeError, UploadUnhappinessError
8127-from allmydata.util import log
8128+from allmydata.util import log, fileutil
8129 from allmydata.util.assertutil import precondition
8130 from allmydata.util.deferredutil import DeferredListShouldSucceed
8131 from allmydata.test.no_network import GridTestMixin
8132hunk ./src/allmydata/test/test_upload.py 972
8133                                         readonly=True))
8134         # Remove the first share from server 0.
8135         def _remove_share_0_from_server_0():
8136-            share_location = self.shares[0][2]
8137-            os.remove(share_location)
8138+            self.shares[0][2].remove()
8139         d.addCallback(lambda ign:
8140             _remove_share_0_from_server_0())
8141         # Set happy = 4 in the client.
8142hunk ./src/allmydata/test/test_upload.py 1847
8143             self._copy_share_to_server(3, 1)
8144             storedir = self.get_serverdir(0)
8145             # remove the storedir, wiping out any existing shares
8146-            shutil.rmtree(storedir)
8147+            fileutil.fp_remove(storedir)
8148             # create an empty storedir to replace the one we just removed
8149hunk ./src/allmydata/test/test_upload.py 1849
8150-            os.mkdir(storedir)
8151+            storedir.mkdir()
8152             client = self.g.clients[0]
8153             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
8154             return client
8155hunk ./src/allmydata/test/test_upload.py 1888
8156             self._copy_share_to_server(3, 1)
8157             storedir = self.get_serverdir(0)
8158             # remove the storedir, wiping out any existing shares
8159-            shutil.rmtree(storedir)
8160+            fileutil.fp_remove(storedir)
8161             # create an empty storedir to replace the one we just removed
8162hunk ./src/allmydata/test/test_upload.py 1890
8163-            os.mkdir(storedir)
8164+            storedir.mkdir()
8165             client = self.g.clients[0]
8166             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
8167             return client
8168hunk ./src/allmydata/test/test_web.py 4870
8169         d.addErrback(self.explain_web_error)
8170         return d
8171 
8172-    def _assert_leasecount(self, ignored, which, expected):
8173+    def _assert_leasecount(self, which, expected):
8174         lease_counts = self.count_leases(self.uris[which])
8175         for (fn, num_leases) in lease_counts:
8176             if num_leases != expected:
8177hunk ./src/allmydata/test/test_web.py 4903
8178                 self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
8179         d.addCallback(_compute_fileurls)
8180 
8181-        d.addCallback(self._assert_leasecount, "one", 1)
8182-        d.addCallback(self._assert_leasecount, "two", 1)
8183-        d.addCallback(self._assert_leasecount, "mutable", 1)
8184+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8185+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8186+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8187 
8188         d.addCallback(self.CHECK, "one", "t=check") # no add-lease
8189         def _got_html_good(res):
8190hunk ./src/allmydata/test/test_web.py 4913
8191             self.failIf("Not Healthy" in res, res)
8192         d.addCallback(_got_html_good)
8193 
8194-        d.addCallback(self._assert_leasecount, "one", 1)
8195-        d.addCallback(self._assert_leasecount, "two", 1)
8196-        d.addCallback(self._assert_leasecount, "mutable", 1)
8197+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8198+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8199+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8200 
8201         # this CHECK uses the original client, which uses the same
8202         # lease-secrets, so it will just renew the original lease
8203hunk ./src/allmydata/test/test_web.py 4922
8204         d.addCallback(self.CHECK, "one", "t=check&add-lease=true")
8205         d.addCallback(_got_html_good)
8206 
8207-        d.addCallback(self._assert_leasecount, "one", 1)
8208-        d.addCallback(self._assert_leasecount, "two", 1)
8209-        d.addCallback(self._assert_leasecount, "mutable", 1)
8210+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8211+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8212+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8213 
8214         # this CHECK uses an alternate client, which adds a second lease
8215         d.addCallback(self.CHECK, "one", "t=check&add-lease=true", clientnum=1)
8216hunk ./src/allmydata/test/test_web.py 4930
8217         d.addCallback(_got_html_good)
8218 
8219-        d.addCallback(self._assert_leasecount, "one", 2)
8220-        d.addCallback(self._assert_leasecount, "two", 1)
8221-        d.addCallback(self._assert_leasecount, "mutable", 1)
8222+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8223+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8224+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8225 
8226         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true")
8227         d.addCallback(_got_html_good)
8228hunk ./src/allmydata/test/test_web.py 4937
8229 
8230-        d.addCallback(self._assert_leasecount, "one", 2)
8231-        d.addCallback(self._assert_leasecount, "two", 1)
8232-        d.addCallback(self._assert_leasecount, "mutable", 1)
8233+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8234+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8235+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8236 
8237         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true",
8238                       clientnum=1)
8239hunk ./src/allmydata/test/test_web.py 4945
8240         d.addCallback(_got_html_good)
8241 
8242-        d.addCallback(self._assert_leasecount, "one", 2)
8243-        d.addCallback(self._assert_leasecount, "two", 1)
8244-        d.addCallback(self._assert_leasecount, "mutable", 2)
8245+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8246+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
8247+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 2))
8248 
8249         d.addErrback(self.explain_web_error)
8250         return d
8251hunk ./src/allmydata/test/test_web.py 4989
8252             self.failUnlessReallyEqual(len(units), 4+1)
8253         d.addCallback(_done)
8254 
8255-        d.addCallback(self._assert_leasecount, "root", 1)
8256-        d.addCallback(self._assert_leasecount, "one", 1)
8257-        d.addCallback(self._assert_leasecount, "mutable", 1)
8258+        d.addCallback(lambda ign: self._assert_leasecount("root", 1))
8259+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8260+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8261 
8262         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true")
8263         d.addCallback(_done)
8264hunk ./src/allmydata/test/test_web.py 4996
8265 
8266-        d.addCallback(self._assert_leasecount, "root", 1)
8267-        d.addCallback(self._assert_leasecount, "one", 1)
8268-        d.addCallback(self._assert_leasecount, "mutable", 1)
8269+        d.addCallback(lambda ign: self._assert_leasecount("root", 1))
8270+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
8271+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
8272 
8273         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true",
8274                       clientnum=1)
8275hunk ./src/allmydata/test/test_web.py 5004
8276         d.addCallback(_done)
8277 
8278-        d.addCallback(self._assert_leasecount, "root", 2)
8279-        d.addCallback(self._assert_leasecount, "one", 2)
8280-        d.addCallback(self._assert_leasecount, "mutable", 2)
8281+        d.addCallback(lambda ign: self._assert_leasecount("root", 2))
8282+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
8283+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 2))
8284 
8285         d.addErrback(self.explain_web_error)
8286         return d
8287}
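
One mechanical change recurs throughout the test_web hunks above and is
worth spelling out: addCallback passes the previous Deferred result as the
first positional argument, so once _assert_leasecount dropped its unused
'ignored' parameter, each call site needs a lambda to swallow that result.
A minimal illustration:

    from twisted.internet import defer

    def assert_leasecount(which, expected):
        print "checking %s: expecting %d leases" % (which, expected)

    d = defer.succeed(None)
    d.addCallback(lambda ign: assert_leasecount("one", 1))
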
8288[Fix more shallow bugs, mainly FilePathification. Also, remove the max_space_per_bucket parameter from BucketWriter since it can be obtained from the _max_size attribute of the share (via a new get_allocated_size() accessor). refs #999
8289david-sarah@jacaranda.org**20110921221421
8290 Ignore-this: 600e3ccef8533aa43442fa576c7d88cf
8291] {
8292hunk ./src/allmydata/scripts/debug.py 642
8293     /home/warner/testnet/node-2/storage/shares/44k/44kai1tui348689nrw8fjegc8c/2
8294     """
8295     from allmydata.storage.server import si_a2b
8296-    from allmydata.storage.backends.disk_backend import si_si2dir
8297+    from allmydata.storage.backends.disk.disk_backend import si_si2dir
8298     from allmydata.util.encodingutil import quote_filepath
8299 
8300     out = options.stdout
8301hunk ./src/allmydata/scripts/debug.py 648
8302     si = si_a2b(options.si_s)
8303     for nodedir in options.nodedirs:
8304-        sharedir = si_si2dir(nodedir.child("storage").child("shares"), si)
8305+        sharedir = si_si2dir(FilePath(nodedir).child("storage").child("shares"), si)
8306         if sharedir.exists():
8307             for sharefp in sharedir.children():
8308                 print >>out, quote_filepath(sharefp, quotemarks=False)
8309hunk ./src/allmydata/storage/backends/disk/disk_backend.py 189
8310         incominghome = self._incominghomedir.child(str(shnum))
8311         immsh = ImmutableDiskShare(self.get_storage_index(), shnum, incominghome, finalhome,
8312                                    max_size=max_space_per_bucket)
8313-        bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
8314+        bw = BucketWriter(storageserver, immsh, lease_info, canary)
8315         if self._discard_storage:
8316             bw.throw_out_all_data = True
8317         return bw
8318hunk ./src/allmydata/storage/backends/disk/immutable.py 147
8319     def unlink(self):
8320         self._home.remove()
8321 
8322+    def get_allocated_size(self):
8323+        return self._max_size
8324+
8325     def get_size(self):
8326         return self._home.getsize()
8327 
8328hunk ./src/allmydata/storage/bucket.py 15
8329 class BucketWriter(Referenceable):
8330     implements(RIBucketWriter)
8331 
8332-    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
8333+    def __init__(self, ss, immutableshare, lease_info, canary):
8334         self.ss = ss
8335hunk ./src/allmydata/storage/bucket.py 17
8336-        self._max_size = max_size # don't allow the client to write more than this
8337         self._canary = canary
8338         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
8339         self.closed = False
8340hunk ./src/allmydata/storage/bucket.py 27
8341         self._share.add_lease(lease_info)
8342 
8343     def allocated_size(self):
8344-        return self._max_size
8345+        return self._share.get_allocated_size()
8346 
8347     def remote_write(self, offset, data):
8348         start = time.time()
8349hunk ./src/allmydata/storage/crawler.py 480
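
The slimmed-down BucketWriter no longer caches max_size itself; it asks the
share through the new get_allocated_size() accessor. A minimal sketch of the
delegation (stub classes, not the real ones):

    class _ShareWithSize(object):
        def __init__(self, max_size):
            self._max_size = max_size        # set when the share is created
        def get_allocated_size(self):
            return self._max_size

    class _Writer(object):
        def __init__(self, share):
            self._share = share
        def allocated_size(self):
            # delegate rather than storing the limit a second time
            return self._share.get_allocated_size()

    assert _Writer(_ShareWithSize(200)).allocated_size() == 200
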
8350             self.state["bucket-counts"][cycle] = {}
8351         self.state["bucket-counts"][cycle][prefix] = len(sharesets)
8352         if prefix in self.prefixes[:self.num_sample_prefixes]:
8353-            self.state["storage-index-samples"][prefix] = (cycle, sharesets)
8354+            si_strings = [shareset.get_storage_index_string() for shareset in sharesets]
8355+            self.state["storage-index-samples"][prefix] = (cycle, si_strings)
8356 
8357     def finished_cycle(self, cycle):
8358         last_counts = self.state["bucket-counts"].get(cycle, [])
8359hunk ./src/allmydata/storage/expirer.py 281
8360         # copy() needs to become a deepcopy
8361         h["space-recovered"] = s["space-recovered"].copy()
8362 
8363-        history = pickle.load(self.historyfp.getContent())
8364+        history = pickle.loads(self.historyfp.getContent())
8365         history[cycle] = h
8366         while len(history) > 10:
8367             oldcycles = sorted(history.keys())
8368hunk ./src/allmydata/storage/expirer.py 355
8369         progress = self.get_progress()
8370 
8371         state = ShareCrawler.get_state(self) # does a shallow copy
8372-        history = pickle.load(self.historyfp.getContent())
8373+        history = pickle.loads(self.historyfp.getContent())
8374         state["history"] = history
8375 
8376         if not progress["cycle-in-progress"]:
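
The load -> loads change above matters because FilePath.getContent() returns
a byte string, while pickle.load expects an open file object. A two-line
demonstration:

    import pickle

    data = pickle.dumps({"cycle": 1})        # like historyfp.getContent()
    assert pickle.loads(data) == {"cycle": 1}
    # pickle.load(data) would raise, since 'data' is a str, not a file.
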
8377hunk ./src/allmydata/test/test_download.py 199
8378                     for shnum in immutable_shares[clientnum]:
8379                         if s._shnum == shnum:
8380                             share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
8381-                            share_dir.child(str(shnum)).remove()
8382+                            fileutil.fp_remove(share_dir.child(str(shnum)))
8383         d.addCallback(_clobber_some_shares)
8384         d.addCallback(lambda ign: download_to_data(n))
8385         d.addCallback(_got_data)
8386hunk ./src/allmydata/test/test_download.py 224
8387             for clientnum in immutable_shares:
8388                 for shnum in immutable_shares[clientnum]:
8389                     share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
8390-                    share_dir.child(str(shnum)).remove()
8391+                    fileutil.fp_remove(share_dir.child(str(shnum)))
8392             # now a new download should fail with NoSharesError. We want a
8393             # new ImmutableFileNode so it will forget about the old shares.
8394             # If we merely called create_node_from_uri() without first
8395hunk ./src/allmydata/test/test_repairer.py 415
8396         def _test_corrupt(ignored):
8397             olddata = {}
8398             shares = self.find_uri_shares(self.uri)
8399-            for (shnum, serverid, sharefile) in shares:
8400-                olddata[ (shnum, serverid) ] = open(sharefile, "rb").read()
8401+            for (shnum, serverid, sharefp) in shares:
8402+                olddata[ (shnum, serverid) ] = sharefp.getContent()
8403             for sh in shares:
8404                 self.corrupt_share(sh, common._corrupt_uri_extension)
8405hunk ./src/allmydata/test/test_repairer.py 419
8406-            for (shnum, serverid, sharefile) in shares:
8407-                newdata = open(sharefile, "rb").read()
8408+            for (shnum, serverid, sharefp) in shares:
8409+                newdata = sharefp.getContent()
8410                 self.failIfEqual(olddata[ (shnum, serverid) ], newdata)
8411         d.addCallback(_test_corrupt)
8412 
8413hunk ./src/allmydata/test/test_storage.py 63
8414 
8415 class Bucket(unittest.TestCase):
8416     def make_workdir(self, name):
8417-        basedir = os.path.join("storage", "Bucket", name)
8418-        incoming = os.path.join(basedir, "tmp", "bucket")
8419-        final = os.path.join(basedir, "bucket")
8420-        fileutil.make_dirs(basedir)
8421-        fileutil.make_dirs(os.path.join(basedir, "tmp"))
8422+        basedir = FilePath("storage").child("Bucket").child(name)
8423+        tmpdir = basedir.child("tmp")
8424+        tmpdir.makedirs()
8425+        incoming = tmpdir.child("bucket")
8426+        final = basedir.child("bucket")
8427         return incoming, final
8428 
8429     def bucket_writer_closed(self, bw, consumed):
8430hunk ./src/allmydata/test/test_storage.py 87
8431 
8432     def test_create(self):
8433         incoming, final = self.make_workdir("test_create")
8434-        bw = BucketWriter(self, incoming, final, 200, self.make_lease(),
8435-                          FakeCanary())
8436+        share = ImmutableDiskShare("", 0, incoming, final, 200)
8437+        bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
8438         bw.remote_write(0, "a"*25)
8439         bw.remote_write(25, "b"*25)
8440         bw.remote_write(50, "c"*25)
8441hunk ./src/allmydata/test/test_storage.py 97
8442 
8443     def test_readwrite(self):
8444         incoming, final = self.make_workdir("test_readwrite")
8445-        bw = BucketWriter(self, incoming, final, 200, self.make_lease(),
8446-                          FakeCanary())
8447+        share = ImmutableDiskShare("", 0, incoming, final, 200)
8448+        bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
8449         bw.remote_write(0, "a"*25)
8450         bw.remote_write(25, "b"*25)
8451         bw.remote_write(50, "c"*7) # last block may be short
8452hunk ./src/allmydata/test/test_storage.py 140
8453 
8454         incoming, final = self.make_workdir("test_read_past_end_of_share_data")
8455 
8456-        fileutil.write(final, share_file_data)
8457+        final.setContent(share_file_data)
8458 
8459         mockstorageserver = mock.Mock()
8460 
8461hunk ./src/allmydata/test/test_storage.py 179
8462 
8463 class BucketProxy(unittest.TestCase):
8464     def make_bucket(self, name, size):
8465-        basedir = os.path.join("storage", "BucketProxy", name)
8466-        incoming = os.path.join(basedir, "tmp", "bucket")
8467-        final = os.path.join(basedir, "bucket")
8468-        fileutil.make_dirs(basedir)
8469-        fileutil.make_dirs(os.path.join(basedir, "tmp"))
8470-        bw = BucketWriter(self, incoming, final, size, self.make_lease(),
8471-                          FakeCanary())
8472+        basedir = FilePath("storage").child("BucketProxy").child(name)
8473+        tmpdir = basedir.child("tmp")
8474+        tmpdir.makedirs()
8475+        incoming = tmpdir.child("bucket")
8476+        final = basedir.child("bucket")
8477+        share = ImmutableDiskShare("", 0, incoming, final, size)
8478+        bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
8479         rb = RemoteBucket()
8480         rb.target = bw
8481         return bw, rb, final
8482hunk ./src/allmydata/test/test_storage.py 206
8483         pass
8484 
8485     def test_create(self):
8486-        bw, rb, sharefname = self.make_bucket("test_create", 500)
8487+        bw, rb, sharefp = self.make_bucket("test_create", 500)
8488         bp = WriteBucketProxy(rb, None,
8489                               data_size=300,
8490                               block_size=10,
8491hunk ./src/allmydata/test/test_storage.py 237
8492                         for i in (1,9,13)]
8493         uri_extension = "s" + "E"*498 + "e"
8494 
8495-        bw, rb, sharefname = self.make_bucket(name, sharesize)
8496+        bw, rb, sharefp = self.make_bucket(name, sharesize)
8497         bp = wbp_class(rb, None,
8498                        data_size=95,
8499                        block_size=25,
8500hunk ./src/allmydata/test/test_storage.py 258
8501 
8502         # now read everything back
8503         def _start_reading(res):
8504-            br = BucketReader(self, sharefname)
8505+            br = BucketReader(self, sharefp)
8506             rb = RemoteBucket()
8507             rb.target = br
8508             server = NoNetworkServer("abc", None)
8509hunk ./src/allmydata/test/test_storage.py 373
8510         for i, wb in writers.items():
8511             wb.remote_write(0, "%10d" % i)
8512             wb.remote_close()
8513-        storedir = os.path.join(self.workdir("test_dont_overfill_dirs"),
8514-                                "shares")
8515-        children_of_storedir = set(os.listdir(storedir))
8516+        storedir = self.workdir("test_dont_overfill_dirs").child("shares")
8517+        children_of_storedir = sorted([child.basename() for child in storedir.children()])
8518 
8519         # Now store another one under another storageindex that has leading
8520         # chars the same as the first storageindex.
8521hunk ./src/allmydata/test/test_storage.py 382
8522         for i, wb in writers.items():
8523             wb.remote_write(0, "%10d" % i)
8524             wb.remote_close()
8525-        storedir = os.path.join(self.workdir("test_dont_overfill_dirs"),
8526-                                "shares")
8527-        new_children_of_storedir = set(os.listdir(storedir))
8528+        storedir = self.workdir("test_dont_overfill_dirs").child("shares")
8529+        new_children_of_storedir = sorted([child.basename() for child in storedir.children()])
8530         self.failUnlessEqual(children_of_storedir, new_children_of_storedir)
8531 
8532     def test_remove_incoming(self):
8533hunk ./src/allmydata/test/test_storage.py 390
8534         ss = self.create("test_remove_incoming")
8535         already, writers = self.allocate(ss, "vid", range(3), 10)
8536         for i,wb in writers.items():
8537+            incoming_share_home = wb._share._home
8538             wb.remote_write(0, "%10d" % i)
8539             wb.remote_close()
8540hunk ./src/allmydata/test/test_storage.py 393
8541-        incoming_share_dir = wb.incominghome
8542-        incoming_bucket_dir = os.path.dirname(incoming_share_dir)
8543-        incoming_prefix_dir = os.path.dirname(incoming_bucket_dir)
8544-        incoming_dir = os.path.dirname(incoming_prefix_dir)
8545-        self.failIf(os.path.exists(incoming_bucket_dir), incoming_bucket_dir)
8546-        self.failIf(os.path.exists(incoming_prefix_dir), incoming_prefix_dir)
8547-        self.failUnless(os.path.exists(incoming_dir), incoming_dir)
8548+        incoming_bucket_dir = incoming_share_home.parent()
8549+        incoming_prefix_dir = incoming_bucket_dir.parent()
8550+        incoming_dir = incoming_prefix_dir.parent()
8551+        self.failIf(incoming_bucket_dir.exists(), incoming_bucket_dir)
8552+        self.failIf(incoming_prefix_dir.exists(), incoming_prefix_dir)
8553+        self.failUnless(incoming_dir.exists(), incoming_dir)
8554 
8555     def test_abort(self):
8556         # remote_abort, when called on a writer, should make sure that
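A sketch of the FilePath idiom the hunk above relies on: os.path.dirname() on a path string becomes .parent() on a FilePath object, so walking up the incoming/<prefix>/<bucket>/<share> layout is a chain of parent() calls (the concrete path is illustrative):

    from twisted.python.filepath import FilePath

    share_home = FilePath("storage/shares/incoming/ab/abcdefgh/0")
    bucket_dir = share_home.parent()     # .../incoming/ab/abcdefgh
    prefix_dir = bucket_dir.parent()     # .../incoming/ab
    incoming_dir = prefix_dir.parent()   # .../incoming
    assert incoming_dir.basename() == "incoming"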
8557hunk ./src/allmydata/test/test_upload.py 1849
8558             # remove the storedir, wiping out any existing shares
8559             fileutil.fp_remove(storedir)
8560             # create an empty storedir to replace the one we just removed
8561-            storedir.mkdir()
8562+            storedir.makedirs()
8563             client = self.g.clients[0]
8564             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
8565             return client
8566hunk ./src/allmydata/test/test_upload.py 1890
8567             # remove the storedir, wiping out any existing shares
8568             fileutil.fp_remove(storedir)
8569             # create an empty storedir to replace the one we just removed
8570-            storedir.mkdir()
8571+            storedir.makedirs()
8572             client = self.g.clients[0]
8573             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
8574             return client
8575}
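The two test_upload hunks above are the same one-line fix: Twisted's FilePath has no mkdir() method, so the old call could never have worked; its directory-creation methods are createDirectory() (one level, like os.mkdir) and makedirs() (creates missing parents, like os.makedirs). A sketch with an illustrative path:

    from twisted.python.filepath import FilePath

    storedir = FilePath("storage/example/shares")
    storedir.makedirs()   # like os.makedirs(): creates missing parents too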
8576[uri.py: resolve a conflict between trunk and the pluggable-backends patches. refs #999
8577david-sarah@jacaranda.org**20110921222038
8578 Ignore-this: ffeeab60d8e71a6a29a002d024d76fcf
8579] {
8580hunk ./src/allmydata/uri.py 829
8581     def is_mutable(self):
8582         return False
8583 
8584+    def is_readonly(self):
8585+        return True
8586+
8587+    def get_readonly(self):
8588+        return self
8589+
8590+
8591 class DirectoryURIVerifier(_DirectoryBaseURI):
8592     implements(IVerifierURI)
8593 
8594hunk ./src/allmydata/uri.py 855
8595     def is_mutable(self):
8596         return False
8597 
8598+    def is_readonly(self):
8599+        return True
8600+
8601+    def get_readonly(self):
8602+        return self
8603+
8604 
8605 class ImmutableDirectoryURIVerifier(DirectoryURIVerifier):
8606     implements(IVerifierURI)
8607}
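Both hunks add the same three-method pattern: a verify cap is immutable and grants no read authority, so its read-only form is the cap itself. A condensed restatement (the mixin class is hypothetical; the patch writes the methods out on each verifier class):

    class _VerifierMixin:
        def is_mutable(self):
            return False
        def is_readonly(self):
            return True
        def get_readonly(self):
            # diminishing a verify cap is a no-op: it is already the
            # least-authority form of the cap
            return self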
8608[Fix some more test failures. refs #999
8609david-sarah@jacaranda.org**20110922045451
8610 Ignore-this: b726193cbd03a7c3d343f6e4a0f33ee7
8611] {
8612hunk ./src/allmydata/scripts/debug.py 42
8613     from allmydata.util.encodingutil import quote_output
8614 
8615     out = options.stdout
8616+    filename = options['filename']
8617 
8618     # check the version, to see if we have a mutable or immutable share
8619hunk ./src/allmydata/scripts/debug.py 45
8620-    print >>out, "share filename: %s" % quote_output(options['filename'])
8621+    print >>out, "share filename: %s" % quote_output(filename)
8622 
8623hunk ./src/allmydata/scripts/debug.py 47
8624-    share = get_share("", 0, fp)
8625+    share = get_share("", 0, FilePath(filename))
8626     if share.sharetype == "mutable":
8627         return dump_mutable_share(options, share)
8628     else:
8629hunk ./src/allmydata/storage/backends/disk/mutable.py 85
8630         self.parent = parent # for logging
8631 
8632     def log(self, *args, **kwargs):
8633-        return self.parent.log(*args, **kwargs)
8634+        if self.parent:
8635+            return self.parent.log(*args, **kwargs)
8636 
8637     def create(self, serverid, write_enabler):
8638         assert not self._home.exists()
8639hunk ./src/allmydata/storage/common.py 6
8640 class DataTooLargeError(Exception):
8641     pass
8642 
8643-class UnknownMutableContainerVersionError(Exception):
8644+class UnknownContainerVersionError(Exception):
8645     pass
8646 
8647hunk ./src/allmydata/storage/common.py 9
8648-class UnknownImmutableContainerVersionError(Exception):
8649+class UnknownMutableContainerVersionError(UnknownContainerVersionError):
8650+    pass
8651+
8652+class UnknownImmutableContainerVersionError(UnknownContainerVersionError):
8653     pass
8654 
8655 
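The common.py hunk interposes UnknownContainerVersionError as a shared base class, so callers that only care that some container failed its version check can catch one exception type instead of two. A sketch (read_share_header is a hypothetical stand-in for the real header parsing):

    from allmydata.storage.common import UnknownContainerVersionError, \
         UnknownImmutableContainerVersionError

    def read_share_header(version):
        if version != 1:
            raise UnknownImmutableContainerVersionError(
                "sharefile had version %d but we wanted 1" % version)

    try:
        read_share_header(0)
    except UnknownContainerVersionError, e:
        print "container rejected:", e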
8656hunk ./src/allmydata/storage/crawler.py 208
8657         try:
8658             state = pickle.loads(self.statefp.getContent())
8659         except EnvironmentError:
8660+            if self.statefp.exists():
8661+                raise
8662             state = {"version": 1,
8663                      "last-cycle-finished": None,
8664                      "current-cycle": None,
8665hunk ./src/allmydata/storage/server.py 24
8666 
8667     name = 'storage'
8668     LeaseCheckerClass = LeaseCheckingCrawler
8669+    BucketCounterClass = BucketCountingCrawler
8670     DEFAULT_EXPIRATION_POLICY = {
8671         'enabled': False,
8672         'mode': 'age',
8673hunk ./src/allmydata/storage/server.py 70
8674 
8675     def _setup_bucket_counter(self):
8676         statefp = self._statedir.child("bucket_counter.state")
8677-        self.bucket_counter = BucketCountingCrawler(self.backend, statefp)
8678+        self.bucket_counter = self.BucketCounterClass(self.backend, statefp)
8679         self.bucket_counter.setServiceParent(self)
8680 
8681     def _setup_lease_checker(self, expiration_policy):
8682hunk ./src/allmydata/storage/server.py 224
8683             share.add_or_renew_lease(lease_info)
8684             alreadygot.add(share.get_shnum())
8685 
8686-        for shnum in sharenums - alreadygot:
8687+        for shnum in set(sharenums) - alreadygot:
8688             if shareset.has_incoming(shnum):
8689                 # Note that we don't create BucketWriters for shnums that
8690                 # have a partial share (in incoming/), so if a second upload
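The small change above is a type fix: sharenums arrives from the remote caller and may be a list or tuple, and Python's ``-`` operator requires sets on both sides, so the left operand is coerced first. For example:

    sharenums = [0, 1, 2, 3]            # as received: not necessarily a set
    alreadygot = set([1, 3])
    still_needed = set(sharenums) - alreadygot
    assert still_needed == set([0, 2])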
8691hunk ./src/allmydata/storage/server.py 247
8692 
8693     def remote_add_lease(self, storageindex, renew_secret, cancel_secret,
8694                          owner_num=1):
8695-        # cancel_secret is no longer used.
8696         start = time.time()
8697         self.count("add-lease")
8698         new_expire_time = time.time() + 31*24*60*60
8699hunk ./src/allmydata/storage/server.py 250
8700-        lease_info = LeaseInfo(owner_num, renew_secret,
8701+        lease_info = LeaseInfo(owner_num, renew_secret, cancel_secret,
8702                                new_expire_time, self._serverid)
8703 
8704         try:
8705hunk ./src/allmydata/storage/server.py 254
8706-            self.backend.add_or_renew_lease(lease_info)
8707+            shareset = self.backend.get_shareset(storageindex)
8708+            shareset.add_or_renew_lease(lease_info)
8709         finally:
8710             self.add_latency("add-lease", time.time() - start)
8711 
8712hunk ./src/allmydata/test/test_crawler.py 3
8713 
8714 import time
8715-import os.path
8716+
8717 from twisted.trial import unittest
8718 from twisted.application import service
8719 from twisted.internet import defer
8720hunk ./src/allmydata/test/test_crawler.py 10
8721 from twisted.python.filepath import FilePath
8722 from foolscap.api import eventually, fireEventually
8723 
8724-from allmydata.util import fileutil, hashutil, pollmixin
8725+from allmydata.util import hashutil, pollmixin
8726 from allmydata.storage.server import StorageServer, si_b2a
8727 from allmydata.storage.crawler import ShareCrawler, TimeSliceExceeded
8728 from allmydata.storage.backends.disk.disk_backend import DiskBackend
8729hunk ./src/allmydata/test/test_mutable.py 3024
8730             cso.stderr = StringIO()
8731             debug.catalog_shares(cso)
8732             shares = cso.stdout.getvalue().splitlines()
8733+            self.failIf(len(shares) < 1, shares)
8734             oneshare = shares[0] # all shares should be MDMF
8735             self.failIf(oneshare.startswith("UNKNOWN"), oneshare)
8736             self.failUnless(oneshare.startswith("MDMF"), oneshare)
8737hunk ./src/allmydata/test/test_storage.py 1
8738-import time, os.path, platform, stat, re, simplejson, struct, shutil, itertools
8739+import time, os.path, platform, re, simplejson, struct, itertools
8740 
8741 import mock
8742 
8743hunk ./src/allmydata/test/test_storage.py 15
8744 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
8745 from allmydata.storage.server import StorageServer
8746 from allmydata.storage.backends.disk.disk_backend import DiskBackend
8747+from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
8748 from allmydata.storage.backends.disk.mutable import MutableDiskShare
8749 from allmydata.storage.bucket import BucketWriter, BucketReader
8750hunk ./src/allmydata/test/test_storage.py 18
8751-from allmydata.storage.common import DataTooLargeError, \
8752+from allmydata.storage.common import DataTooLargeError, UnknownContainerVersionError, \
8753      UnknownMutableContainerVersionError, UnknownImmutableContainerVersionError
8754 from allmydata.storage.lease import LeaseInfo
8755 from allmydata.storage.crawler import BucketCountingCrawler
8756hunk ./src/allmydata/test/test_storage.py 88
8757 
8758     def test_create(self):
8759         incoming, final = self.make_workdir("test_create")
8760-        share = ImmutableDiskShare("", 0, incoming, final, 200)
8761+        share = ImmutableDiskShare("", 0, incoming, final, max_size=200)
8762         bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
8763         bw.remote_write(0, "a"*25)
8764         bw.remote_write(25, "b"*25)
8765hunk ./src/allmydata/test/test_storage.py 98
8766 
8767     def test_readwrite(self):
8768         incoming, final = self.make_workdir("test_readwrite")
8769-        share = ImmutableDiskShare("", 0, incoming, 200)
8770+        share = ImmutableDiskShare("", 0, incoming, final, max_size=200)
8771         bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
8772         bw.remote_write(0, "a"*25)
8773         bw.remote_write(25, "b"*25)
8774hunk ./src/allmydata/test/test_storage.py 106
8775         bw.remote_close()
8776 
8777         # now read from it
8778-        br = BucketReader(self, bw.finalhome)
8779+        br = BucketReader(self, share)
8780         self.failUnlessEqual(br.remote_read(0, 25), "a"*25)
8781         self.failUnlessEqual(br.remote_read(25, 25), "b"*25)
8782         self.failUnlessEqual(br.remote_read(50, 7), "c"*7)
8783hunk ./src/allmydata/test/test_storage.py 131
8784         ownernumber = struct.pack('>L', 0)
8785         renewsecret  = 'THIS LETS ME RENEW YOUR FILE....'
8786         assert len(renewsecret) == 32
8787-        cancelsecret = 'THIS LETS ME KILL YOUR FILE HAHA'
8788+        cancelsecret = 'THIS USED TO LET ME KILL YR FILE'
8789         assert len(cancelsecret) == 32
8790         expirationtime = struct.pack('>L', 60*60*24*31) # 31 days in seconds
8791 
8792hunk ./src/allmydata/test/test_storage.py 142
8793         incoming, final = self.make_workdir("test_read_past_end_of_share_data")
8794 
8795         final.setContent(share_file_data)
8796+        share = ImmutableDiskShare("", 0, final)
8797 
8798         mockstorageserver = mock.Mock()
8799 
8800hunk ./src/allmydata/test/test_storage.py 147
8801         # Now read from it.
8802-        br = BucketReader(mockstorageserver, final)
8803+        br = BucketReader(mockstorageserver, share)
8804 
8805         self.failUnlessEqual(br.remote_read(0, len(share_data)), share_data)
8806 
8807hunk ./src/allmydata/test/test_storage.py 260
8808 
8809         # now read everything back
8810         def _start_reading(res):
8811-            br = BucketReader(self, sharefp)
8812+            share = ImmutableDiskShare("", 0, sharefp)
8813+            br = BucketReader(self, share)
8814             rb = RemoteBucket()
8815             rb.target = br
8816             server = NoNetworkServer("abc", None)
8817hunk ./src/allmydata/test/test_storage.py 346
8818         if 'cygwin' in syslow or 'windows' in syslow or 'darwin' in syslow:
8819             raise unittest.SkipTest("If your filesystem doesn't support efficient sparse files then it is very expensive (Mac OS X and Windows don't support efficient sparse files).")
8820 
8821-        avail = fileutil.get_available_space('.', 512*2**20)
8822+        avail = fileutil.get_available_space(FilePath('.'), 512*2**20)
8823         if avail <= 4*2**30:
8824             raise unittest.SkipTest("This test will spuriously fail if you have less than 4 GiB free on your filesystem.")
8825 
8826hunk ./src/allmydata/test/test_storage.py 476
8827         w[0].remote_write(0, "\xff"*10)
8828         w[0].remote_close()
8829 
8830-        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
8831+        fp = ss.backend.get_shareset("si1")._sharehomedir.child("0")
8832         f = fp.open("rb+")
8833hunk ./src/allmydata/test/test_storage.py 478
8834-        f.seek(0)
8835-        f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1
8836-        f.close()
8837+        try:
8838+            f.seek(0)
8839+            f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1
8840+        finally:
8841+            f.close()
8842 
8843         ss.remote_get_buckets("allocate")
8844 
8845hunk ./src/allmydata/test/test_storage.py 575
8846 
8847     def test_seek(self):
8848         basedir = self.workdir("test_seek_behavior")
8849-        fileutil.make_dirs(basedir)
8850-        filename = os.path.join(basedir, "testfile")
8851-        f = open(filename, "wb")
8852-        f.write("start")
8853-        f.close()
8854+        basedir.makedirs()
8855+        fp = basedir.child("testfile")
8856+        fp.setContent("start")
8857+
8858         # mode="w" allows seeking-to-create-holes, but truncates pre-existing
8859         # files. mode="a" preserves previous contents but does not allow
8860         # seeking-to-create-holes. mode="r+" allows both.
8861hunk ./src/allmydata/test/test_storage.py 582
8862-        f = open(filename, "rb+")
8863-        f.seek(100)
8864-        f.write("100")
8865-        f.close()
8866-        filelen = os.stat(filename)[stat.ST_SIZE]
8867+        f = fp.open("rb+")
8868+        try:
8869+            f.seek(100)
8870+            f.write("100")
8871+        finally:
8872+            f.close()
8873+        fp.restat()
8874+        filelen = fp.getsize()
8875         self.failUnlessEqual(filelen, 100+3)
8876hunk ./src/allmydata/test/test_storage.py 591
8877-        f2 = open(filename, "rb")
8878-        self.failUnlessEqual(f2.read(5), "start")
8879-
8880+        f2 = fp.open("rb")
8881+        try:
8882+            self.failUnlessEqual(f2.read(5), "start")
8883+        finally:
8884+            f2.close()
8885 
8886     def test_leases(self):
8887         ss = self.create("test_leases")
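The mode comment in the hunk above describes standard file semantics rather than anything Tahoe-specific: an "r+" file keeps its contents and allows seeking past EOF, and writing there extends the file (sparsely, where the filesystem supports holes). A standalone sketch:

    import os

    f = open("testfile", "wb")
    f.write("start")
    f.close()

    f = open("testfile", "rb+")   # keeps contents, allows seek-past-EOF writes
    try:
        f.seek(100)
        f.write("100")
    finally:
        f.close()

    assert os.stat("testfile").st_size == 103   # 100 + len("100")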
8888hunk ./src/allmydata/test/test_storage.py 693
8889 
8890     def test_readonly(self):
8891         workdir = self.workdir("test_readonly")
8892-        ss = StorageServer(workdir, "\x00" * 20, readonly_storage=True)
8893+        backend = DiskBackend(workdir, readonly=True)
8894+        ss = StorageServer("\x00" * 20, backend, workdir)
8895         ss.setServiceParent(self.sparent)
8896 
8897         already,writers = self.allocate(ss, "vid", [0,1,2], 75)
8898hunk ./src/allmydata/test/test_storage.py 710
8899 
8900     def test_discard(self):
8901         # discard is really only used for other tests, but we test it anyways
8902+        # XXX replace this with a null backend test
8903         workdir = self.workdir("test_discard")
8904hunk ./src/allmydata/test/test_storage.py 712
8905-        ss = StorageServer(workdir, "\x00" * 20, discard_storage=True)
8906+        backend = DiskBackend(workdir, readonly=False, discard_storage=True)
8907+        ss = StorageServer("\x00" * 20, backend, workdir)
8908         ss.setServiceParent(self.sparent)
8909 
8910         already,writers = self.allocate(ss, "vid", [0,1,2], 75)
8911hunk ./src/allmydata/test/test_storage.py 731
8912 
8913     def test_advise_corruption(self):
8914         workdir = self.workdir("test_advise_corruption")
8915-        ss = StorageServer(workdir, "\x00" * 20, discard_storage=True)
8916+        backend = DiskBackend(workdir, readonly=False, discard_storage=True)
8917+        ss = StorageServer("\x00" * 20, backend, workdir)
8918         ss.setServiceParent(self.sparent)
8919 
8920         si0_s = base32.b2a("si0")
8921hunk ./src/allmydata/test/test_storage.py 738
8922         ss.remote_advise_corrupt_share("immutable", "si0", 0,
8923                                        "This share smells funny.\n")
8924-        reportdir = os.path.join(workdir, "corruption-advisories")
8925-        reports = os.listdir(reportdir)
8926+        reportdir = workdir.child("corruption-advisories")
8927+        reports = [child.basename() for child in reportdir.children()]
8928         self.failUnlessEqual(len(reports), 1)
8929         report_si0 = reports[0]
8930hunk ./src/allmydata/test/test_storage.py 742
8931-        self.failUnlessIn(si0_s, report_si0)
8932-        f = open(os.path.join(reportdir, report_si0), "r")
8933-        report = f.read()
8934-        f.close()
8935+        self.failUnlessIn(si0_s, str(report_si0))
8936+        report = reportdir.child(report_si0).getContent()
8937+
8938         self.failUnlessIn("type: immutable", report)
8939         self.failUnlessIn("storage_index: %s" % si0_s, report)
8940         self.failUnlessIn("share_number: 0", report)
8941hunk ./src/allmydata/test/test_storage.py 762
8942         self.failUnlessEqual(set(b.keys()), set([1]))
8943         b[1].remote_advise_corrupt_share("This share tastes like dust.\n")
8944 
8945-        reports = os.listdir(reportdir)
8946+        reports = [child.basename() for child in reportdir.children()]
8947         self.failUnlessEqual(len(reports), 2)
8948hunk ./src/allmydata/test/test_storage.py 764
8949-        report_si1 = [r for r in reports if si1_s in r][0]
8950-        f = open(os.path.join(reportdir, report_si1), "r")
8951-        report = f.read()
8952-        f.close()
8953+        report_si1 = [r for r in reports if si1_s in str(r)][0]
8954+        report = reportdir.child(report_si1).getContent()
8955+
8956         self.failUnlessIn("type: immutable", report)
8957         self.failUnlessIn("storage_index: %s" % si1_s, report)
8958         self.failUnlessIn("share_number: 1", report)
8959hunk ./src/allmydata/test/test_storage.py 783
8960         return self.sparent.stopService()
8961 
8962     def workdir(self, name):
8963-        basedir = os.path.join("storage", "MutableServer", name)
8964-        return basedir
8965+        return FilePath("storage").child("MutableServer").child(name)
8966 
8967     def create(self, name):
8968         workdir = self.workdir(name)
8969hunk ./src/allmydata/test/test_storage.py 787
8970-        ss = StorageServer(workdir, "\x00" * 20)
8971+        backend = DiskBackend(workdir)
8972+        ss = StorageServer("\x00" * 20, backend, workdir)
8973         ss.setServiceParent(self.sparent)
8974         return ss
8975 
8976hunk ./src/allmydata/test/test_storage.py 810
8977         cancel_secret = self.cancel_secret(lease_tag)
8978         rstaraw = ss.remote_slot_testv_and_readv_and_writev
8979         testandwritev = dict( [ (shnum, ([], [], None) )
8980-                         for shnum in sharenums ] )
8981+                                for shnum in sharenums ] )
8982         readv = []
8983         rc = rstaraw(storage_index,
8984                      (write_enabler, renew_secret, cancel_secret),
8985hunk ./src/allmydata/test/test_storage.py 824
8986     def test_bad_magic(self):
8987         ss = self.create("test_bad_magic")
8988         self.allocate(ss, "si1", "we1", self._lease_secret.next(), set([0]), 10)
8989-        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
8990+        fp = ss.backend.get_shareset("si1")._sharehomedir.child("0")
8991         f = fp.open("rb+")
8992hunk ./src/allmydata/test/test_storage.py 826
8993-        f.seek(0)
8994-        f.write("BAD MAGIC")
8995-        f.close()
8996+        try:
8997+            f.seek(0)
8998+            f.write("BAD MAGIC")
8999+        finally:
9000+            f.close()
9001         read = ss.remote_slot_readv
9002hunk ./src/allmydata/test/test_storage.py 832
9003-        e = self.failUnlessRaises(UnknownMutableContainerVersionError,
9004+
9005+        # This used to test for UnknownMutableContainerVersionError,
9006+        # but the current code raises UnknownImmutableContainerVersionError.
9007+        # (It changed because remote_slot_readv now works with either
9008+        # mutable or immutable shares.) Since the share file doesn't have
9009+        # the mutable magic, it's not clear that the new behaviour is wrong.
9009+        # the mutable magic, it's not clear that the new behaviour is wrong.
9010+        # For now, accept either exception.
9011+        e = self.failUnlessRaises(UnknownContainerVersionError,
9012                                   read, "si1", [0], [(0,10)])
9013hunk ./src/allmydata/test/test_storage.py 841
9014-        self.failUnlessIn(" had magic ", str(e))
9015+        self.failUnlessIn(" had ", str(e))
9016         self.failUnlessIn(" but we wanted ", str(e))
9017 
9018     def test_container_size(self):
9019hunk ./src/allmydata/test/test_storage.py 1248
9020 
9021         # create a random non-numeric file in the bucket directory, to
9022         # exercise the code that's supposed to ignore those.
9023-        bucket_dir = ss.backend.get_shareset("si1").sharehomedir
9024+        bucket_dir = ss.backend.get_shareset("si1")._sharehomedir
9025         bucket_dir.child("ignore_me.txt").setContent("you ought to be ignoring me\n")
9026 
9027hunk ./src/allmydata/test/test_storage.py 1251
9028-        s0 = MutableDiskShare(os.path.join(bucket_dir, "0"))
9029+        s0 = MutableDiskShare("", 0, bucket_dir.child("0"))
9030         self.failUnlessEqual(len(list(s0.get_leases())), 1)
9031 
9032         # add-lease on a missing storage index is silently ignored
9033hunk ./src/allmydata/test/test_storage.py 1365
9034         # note: this is a detail of the storage server implementation, and
9035         # may change in the future
9036         prefix = si[:2]
9037-        prefixdir = os.path.join(self.workdir("test_remove"), "shares", prefix)
9038-        bucketdir = os.path.join(prefixdir, si)
9039-        self.failUnless(os.path.exists(prefixdir), prefixdir)
9040-        self.failIf(os.path.exists(bucketdir), bucketdir)
9041+        prefixdir = self.workdir("test_remove").child("shares").child(prefix)
9042+        bucketdir = prefixdir.child(si)
9043+        self.failUnless(prefixdir.exists(), prefixdir)
9044+        self.failIf(bucketdir.exists(), bucketdir)
9045 
9046 
9047 class MDMFProxies(unittest.TestCase, ShouldFailMixin):
9048hunk ./src/allmydata/test/test_storage.py 1420
9049 
9050 
9051     def workdir(self, name):
9052-        basedir = os.path.join("storage", "MutableServer", name)
9053-        return basedir
9054-
9055+        return FilePath("storage").child("MDMFProxies").child(name)
9056 
9057     def create(self, name):
9058         workdir = self.workdir(name)
9059hunk ./src/allmydata/test/test_storage.py 1424
9060-        ss = StorageServer(workdir, "\x00" * 20)
9061+        backend = DiskBackend(workdir)
9062+        ss = StorageServer("\x00" * 20, backend, workdir)
9063         ss.setServiceParent(self.sparent)
9064         return ss
9065 
9066hunk ./src/allmydata/test/test_storage.py 2798
9067         return self.sparent.stopService()
9068 
9069     def workdir(self, name):
9070-        return FilePath("storage").child("Server").child(name)
9071+        return FilePath("storage").child("Stats").child(name)
9072 
9073     def create(self, name):
9074         workdir = self.workdir(name)
9075hunk ./src/allmydata/test/test_storage.py 2886
9076             d.callback(None)
9077 
9078 class MyStorageServer(StorageServer):
9079-    def add_bucket_counter(self):
9080-        statefile = os.path.join(self.storedir, "bucket_counter.state")
9081-        self.bucket_counter = MyBucketCountingCrawler(self, statefile)
9082-        self.bucket_counter.setServiceParent(self)
9083+    BucketCounterClass = MyBucketCountingCrawler
9084+
9085 
9086 class BucketCounter(unittest.TestCase, pollmixin.PollMixin):
9087 
9088hunk ./src/allmydata/test/test_storage.py 2899
9089 
9090     def test_bucket_counter(self):
9091         basedir = "storage/BucketCounter/bucket_counter"
9092-        fileutil.make_dirs(basedir)
9093-        ss = StorageServer(basedir, "\x00" * 20)
9094+        fp = FilePath(basedir)
9095+        backend = DiskBackend(fp)
9096+        ss = StorageServer("\x00" * 20, backend, fp)
9097+
9098         # to make sure we capture the bucket-counting-crawler in the middle
9099         # of a cycle, we reach in and reduce its maximum slice time to 0. We
9100         # also make it start sooner than usual.
9101hunk ./src/allmydata/test/test_storage.py 2958
9102 
9103     def test_bucket_counter_cleanup(self):
9104         basedir = "storage/BucketCounter/bucket_counter_cleanup"
9105-        fileutil.make_dirs(basedir)
9106-        ss = StorageServer(basedir, "\x00" * 20)
9107+        fp = FilePath(basedir)
9108+        backend = DiskBackend(fp)
9109+        ss = StorageServer("\x00" * 20, backend, fp)
9110+
9111         # to make sure we capture the bucket-counting-crawler in the middle
9112         # of a cycle, we reach in and reduce its maximum slice time to 0.
9113         ss.bucket_counter.slow_start = 0
9114hunk ./src/allmydata/test/test_storage.py 3002
9115 
9116     def test_bucket_counter_eta(self):
9117         basedir = "storage/BucketCounter/bucket_counter_eta"
9118-        fileutil.make_dirs(basedir)
9119-        ss = MyStorageServer(basedir, "\x00" * 20)
9120+        fp = FilePath(basedir)
9121+        backend = DiskBackend(fp)
9122+        ss = MyStorageServer("\x00" * 20, backend, fp)
9123         ss.bucket_counter.slow_start = 0
9124         # these will be fired inside finished_prefix()
9125         hooks = ss.bucket_counter.hook_ds = [defer.Deferred() for i in range(3)]
9126hunk ./src/allmydata/test/test_storage.py 3125
9127 
9128     def test_basic(self):
9129         basedir = "storage/LeaseCrawler/basic"
9130-        fileutil.make_dirs(basedir)
9131-        ss = InstrumentedStorageServer(basedir, "\x00" * 20)
9132+        fp = FilePath(basedir)
9133+        backend = DiskBackend(fp)
9134+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp)
9135+
9136         # make it start sooner than usual.
9137         lc = ss.lease_checker
9138         lc.slow_start = 0
9139hunk ./src/allmydata/test/test_storage.py 3141
9140         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
9141 
9142         # add a non-sharefile to exercise another code path
9143-        fp = ss.backend.get_shareset(immutable_si_0).sharehomedir.child("not-a-share")
9144+        fp = ss.backend.get_shareset(immutable_si_0)._sharehomedir.child("not-a-share")
9145         fp.setContent("I am not a share.\n")
9146 
9147         # this is before the crawl has started, so we're not in a cycle yet
9148hunk ./src/allmydata/test/test_storage.py 3264
9149             self.failUnlessEqual(rec["configured-sharebytes"], 0)
9150 
9151             def _get_sharefile(si):
9152-                return list(ss._iter_share_files(si))[0]
9153+                return list(ss.backend.get_shareset(si).get_shares())[0]
9154             def count_leases(si):
9155                 return len(list(_get_sharefile(si).get_leases()))
9156             self.failUnlessEqual(count_leases(immutable_si_0), 1)
9157hunk ./src/allmydata/test/test_storage.py 3296
9158         for i,lease in enumerate(sf.get_leases()):
9159             if lease.renew_secret == renew_secret:
9160                 lease.expiration_time = new_expire_time
9161-                f = open(sf.home, 'rb+')
9162-                sf._write_lease_record(f, i, lease)
9163-                f.close()
9164+                f = sf._home.open('rb+')
9165+                try:
9166+                    sf._write_lease_record(f, i, lease)
9167+                finally:
9168+                    f.close()
9169                 return
9170         raise IndexError("unable to renew non-existent lease")
9171 
9172hunk ./src/allmydata/test/test_storage.py 3306
9173     def test_expire_age(self):
9174         basedir = "storage/LeaseCrawler/expire_age"
9175-        fileutil.make_dirs(basedir)
9176+        fp = FilePath(basedir)
9177+        backend = DiskBackend(fp)
9178+
9179         # setting 'override_lease_duration' to 2000 means that any lease that
9180         # is more than 2000 seconds old will be expired.
9181         expiration_policy = {
9182hunk ./src/allmydata/test/test_storage.py 3317
9183             'override_lease_duration': 2000,
9184             'sharetypes': ('mutable', 'immutable'),
9185         }
9186-        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
9187+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
9188+
9189         # make it start sooner than usual.
9190         lc = ss.lease_checker
9191         lc.slow_start = 0
9192hunk ./src/allmydata/test/test_storage.py 3330
9193         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
9194 
9195         def count_shares(si):
9196-            return len(list(ss._iter_share_files(si)))
9197+            return len(list(ss.backend.get_shareset(si).get_shares()))
9198         def _get_sharefile(si):
9199hunk ./src/allmydata/test/test_storage.py 3332
9200-            return list(ss._iter_share_files(si))[0]
9201+            return list(ss.backend.get_shareset(si).get_shares())[0]
9202         def count_leases(si):
9203             return len(list(_get_sharefile(si).get_leases()))
9204 
9205hunk ./src/allmydata/test/test_storage.py 3355
9206 
9207         sf0 = _get_sharefile(immutable_si_0)
9208         self.backdate_lease(sf0, self.renew_secrets[0], now - 1000)
9209-        sf0_size = os.stat(sf0.home).st_size
9210+        sf0_size = sf0.get_size()
9211 
9212         # immutable_si_1 gets an extra lease
9213         sf1 = _get_sharefile(immutable_si_1)
9214hunk ./src/allmydata/test/test_storage.py 3363
9215 
9216         sf2 = _get_sharefile(mutable_si_2)
9217         self.backdate_lease(sf2, self.renew_secrets[3], now - 1000)
9218-        sf2_size = os.stat(sf2.home).st_size
9219+        sf2_size = sf2.get_size()
9220 
9221         # mutable_si_3 gets an extra lease
9222         sf3 = _get_sharefile(mutable_si_3)
9223hunk ./src/allmydata/test/test_storage.py 3450
9224 
9225     def test_expire_cutoff_date(self):
9226         basedir = "storage/LeaseCrawler/expire_cutoff_date"
9227-        fileutil.make_dirs(basedir)
9228+        fp = FilePath(basedir)
9229+        backend = DiskBackend(fp)
9230+
9231         # setting 'cutoff_date' to 2000 seconds ago means that any lease that
9232         # is more than 2000 seconds old will be expired.
9233         now = time.time()
9234hunk ./src/allmydata/test/test_storage.py 3463
9235             'cutoff_date': then,
9236             'sharetypes': ('mutable', 'immutable'),
9237         }
9238-        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
9239+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
9240+
9241         # make it start sooner than usual.
9242         lc = ss.lease_checker
9243         lc.slow_start = 0
9244hunk ./src/allmydata/test/test_storage.py 3476
9245         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
9246 
9247         def count_shares(si):
9248-            return len(list(ss._iter_share_files(si)))
9249+            return len(list(ss.backend.get_shareset(si).get_shares()))
9250         def _get_sharefile(si):
9251hunk ./src/allmydata/test/test_storage.py 3478
9252-            return list(ss._iter_share_files(si))[0]
9253+            return list(ss.backend.get_shareset(si).get_shares())[0]
9254         def count_leases(si):
9255             return len(list(_get_sharefile(si).get_leases()))
9256 
9257hunk ./src/allmydata/test/test_storage.py 3505
9258 
9259         sf0 = _get_sharefile(immutable_si_0)
9260         self.backdate_lease(sf0, self.renew_secrets[0], new_expiration_time)
9261-        sf0_size = os.stat(sf0.home).st_size
9262+        sf0_size = sf0.get_size()
9263 
9264         # immutable_si_1 gets an extra lease
9265         sf1 = _get_sharefile(immutable_si_1)
9266hunk ./src/allmydata/test/test_storage.py 3513
9267 
9268         sf2 = _get_sharefile(mutable_si_2)
9269         self.backdate_lease(sf2, self.renew_secrets[3], new_expiration_time)
9270-        sf2_size = os.stat(sf2.home).st_size
9271+        sf2_size = sf2.get_size()
9272 
9273         # mutable_si_3 gets an extra lease
9274         sf3 = _get_sharefile(mutable_si_3)
9275hunk ./src/allmydata/test/test_storage.py 3605
9276 
9277     def test_only_immutable(self):
9278         basedir = "storage/LeaseCrawler/only_immutable"
9279-        fileutil.make_dirs(basedir)
9280+        fp = FilePath(basedir)
9281+        backend = DiskBackend(fp)
9282+
9283         # setting 'cutoff_date' to 2000 seconds ago means that any lease that
9284         # is more than 2000 seconds old will be expired.
9285         now = time.time()
9286hunk ./src/allmydata/test/test_storage.py 3618
9287             'cutoff_date': then,
9288             'sharetypes': ('immutable',),
9289         }
9290-        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
9291+        ss = StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
9292         lc = ss.lease_checker
9293         lc.slow_start = 0
9294         webstatus = StorageStatus(ss)
9295hunk ./src/allmydata/test/test_storage.py 3629
9296         new_expiration_time = now - 3000 + 31*24*60*60
9297 
9298         def count_shares(si):
9299-            return len(list(ss._iter_share_files(si)))
9300+            return len(list(ss.backend.get_shareset(si).get_shares()))
9301         def _get_sharefile(si):
9302hunk ./src/allmydata/test/test_storage.py 3631
9303-            return list(ss._iter_share_files(si))[0]
9304+            return list(ss.backend.get_shareset(si).get_shares())[0]
9305         def count_leases(si):
9306             return len(list(_get_sharefile(si).get_leases()))
9307 
9308hunk ./src/allmydata/test/test_storage.py 3668
9309 
9310     def test_only_mutable(self):
9311         basedir = "storage/LeaseCrawler/only_mutable"
9312-        fileutil.make_dirs(basedir)
9313+        fp = FilePath(basedir)
9314+        backend = DiskBackend(fp)
9315+
9316         # setting 'cutoff_date' to 2000 seconds ago means that any lease that
9317         # is more than 2000 seconds old will be expired.
9318         now = time.time()
9319hunk ./src/allmydata/test/test_storage.py 3681
9320             'cutoff_date': then,
9321             'sharetypes': ('mutable',),
9322         }
9323-        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
9324+        ss = StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
9325         lc = ss.lease_checker
9326         lc.slow_start = 0
9327         webstatus = StorageStatus(ss)
9328hunk ./src/allmydata/test/test_storage.py 3692
9329         new_expiration_time = now - 3000 + 31*24*60*60
9330 
9331         def count_shares(si):
9332-            return len(list(ss._iter_share_files(si)))
9333+            return len(list(ss.backend.get_shareset(si).get_shares()))
9334         def _get_sharefile(si):
9335hunk ./src/allmydata/test/test_storage.py 3694
9336-            return list(ss._iter_share_files(si))[0]
9337+            return list(ss.backend.get_shareset(si).get_shares())[0]
9338         def count_leases(si):
9339             return len(list(_get_sharefile(si).get_leases()))
9340 
9341hunk ./src/allmydata/test/test_storage.py 3731
9342 
9343     def test_bad_mode(self):
9344         basedir = "storage/LeaseCrawler/bad_mode"
9345-        fileutil.make_dirs(basedir)
9346+        fp = FilePath(basedir)
9347+        backend = DiskBackend(fp)
9348+
9349+        expiration_policy = {
9350+            'enabled': True,
9351+            'mode': 'bogus',
9352+            'override_lease_duration': None,
9353+            'cutoff_date': None,
9354+            'sharetypes': ('mutable', 'immutable'),
9355+        }
9356         e = self.failUnlessRaises(ValueError,
9357hunk ./src/allmydata/test/test_storage.py 3742
9358-                                  StorageServer, basedir, "\x00" * 20,
9359-                                  expiration_mode="bogus")
9360+                                  StorageServer, "\x00" * 20, backend, fp,
9361+                                  expiration_policy=expiration_policy)
9362         self.failUnlessIn("GC mode 'bogus' must be 'age' or 'cutoff-date'", str(e))
9363 
9364     def test_parse_duration(self):
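The expiration_policy dicts constructed throughout these hunks all share the same five keys. Assembled from the values the tests above actually pass (with backend and fp built as in those hunks), a valid 'age'-mode policy looks like:

    expiration_policy = {
        'enabled': True,
        'mode': 'age',                    # the other accepted mode is 'cutoff-date'
        'override_lease_duration': 2000,  # seconds; consulted in 'age' mode
        'cutoff_date': None,              # a timestamp; consulted in 'cutoff-date' mode
        'sharetypes': ('mutable', 'immutable'),
    }
    ss = StorageServer("\x00" * 20, backend, fp,
                       expiration_policy=expiration_policy)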
9365hunk ./src/allmydata/test/test_storage.py 3767
9366 
9367     def test_limited_history(self):
9368         basedir = "storage/LeaseCrawler/limited_history"
9369-        fileutil.make_dirs(basedir)
9370-        ss = StorageServer(basedir, "\x00" * 20)
9371+        fp = FilePath(basedir)
9372+        backend = DiskBackend(fp)
9373+        ss = StorageServer("\x00" * 20, backend, fp)
9374+
9375         # make it start sooner than usual.
9376         lc = ss.lease_checker
9377         lc.slow_start = 0
9378hunk ./src/allmydata/test/test_storage.py 3801
9379 
9380     def test_unpredictable_future(self):
9381         basedir = "storage/LeaseCrawler/unpredictable_future"
9382-        fileutil.make_dirs(basedir)
9383-        ss = StorageServer(basedir, "\x00" * 20)
9384+        fp = FilePath(basedir)
9385+        backend = DiskBackend(fp)
9386+        ss = StorageServer("\x00" * 20, backend, fp)
9387+
9388         # make it start sooner than usual.
9389         lc = ss.lease_checker
9390         lc.slow_start = 0
9391hunk ./src/allmydata/test/test_storage.py 3866
9392 
9393     def test_no_st_blocks(self):
9394         basedir = "storage/LeaseCrawler/no_st_blocks"
9395-        fileutil.make_dirs(basedir)
9396+        fp = FilePath(basedir)
9397+        backend = DiskBackend(fp)
9398+
9399         # A negative 'override_lease_duration' means that the "configured-"
9400         # space-recovered counts will be non-zero, since all shares will have
9401         # expired by then.
9402hunk ./src/allmydata/test/test_storage.py 3878
9403             'override_lease_duration': -1000,
9404             'sharetypes': ('mutable', 'immutable'),
9405         }
9406-        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20, expiration_policy)
9407+        ss = No_ST_BLOCKS_StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
9408 
9409         # make it start sooner than usual.
9410         lc = ss.lease_checker
9411hunk ./src/allmydata/test/test_storage.py 3911
9412             UnknownImmutableContainerVersionError,
9413             ]
9414         basedir = "storage/LeaseCrawler/share_corruption"
9415-        fileutil.make_dirs(basedir)
9416-        ss = InstrumentedStorageServer(basedir, "\x00" * 20)
9417+        fp = FilePath(basedir)
9418+        backend = DiskBackend(fp)
9419+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp)
9420         w = StorageStatus(ss)
9421         # make it start sooner than usual.
9422         lc = ss.lease_checker
9423hunk ./src/allmydata/test/test_storage.py 3928
9424         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
9425         first = min(self.sis)
9426         first_b32 = base32.b2a(first)
9427-        fp = ss.backend.get_shareset(first).sharehomedir.child("0")
9428+        fp = ss.backend.get_shareset(first)._sharehomedir.child("0")
9429         f = fp.open("rb+")
9430hunk ./src/allmydata/test/test_storage.py 3930
9431-        f.seek(0)
9432-        f.write("BAD MAGIC")
9433-        f.close()
9434+        try:
9435+            f.seek(0)
9436+            f.write("BAD MAGIC")
9437+        finally:
9438+            f.close()
9439         # if get_share_file() doesn't see the correct mutable magic, it
9440         # assumes the file is an immutable share, and then
9441         # immutable.ShareFile sees a bad version. So regardless of which kind
9442hunk ./src/allmydata/test/test_storage.py 3943
9443 
9444         # also create an empty bucket
9445         empty_si = base32.b2a("\x04"*16)
9446-        empty_bucket_dir = ss.backend.get_shareset(empty_si).sharehomedir
9447+        empty_bucket_dir = ss.backend.get_shareset(empty_si)._sharehomedir
9448         fileutil.fp_make_dirs(empty_bucket_dir)
9449 
9450         ss.setServiceParent(self.s)
9451hunk ./src/allmydata/test/test_storage.py 4031
9452 
9453     def test_status(self):
9454         basedir = "storage/WebStatus/status"
9455-        fileutil.make_dirs(basedir)
9456-        ss = StorageServer(basedir, "\x00" * 20)
9457+        fp = FilePath(basedir)
9458+        backend = DiskBackend(fp)
9459+        ss = StorageServer("\x00" * 20, backend, fp)
9460         ss.setServiceParent(self.s)
9461         w = StorageStatus(ss)
9462         d = self.render1(w)
9463hunk ./src/allmydata/test/test_storage.py 4065
9464         # Some platforms may have no disk stats API. Make sure the code can handle that
9465         # (test runs on all platforms).
9466         basedir = "storage/WebStatus/status_no_disk_stats"
9467-        fileutil.make_dirs(basedir)
9468-        ss = StorageServer(basedir, "\x00" * 20)
9469+        fp = FilePath(basedir)
9470+        backend = DiskBackend(fp)
9471+        ss = StorageServer("\x00" * 20, backend, fp)
9472         ss.setServiceParent(self.s)
9473         w = StorageStatus(ss)
9474         html = w.renderSynchronously()
9475hunk ./src/allmydata/test/test_storage.py 4085
9476         # If the API to get disk stats exists but a call to it fails, then the status should
9477         # show that no shares will be accepted, and get_available_space() should be 0.
9478         basedir = "storage/WebStatus/status_bad_disk_stats"
9479-        fileutil.make_dirs(basedir)
9480-        ss = StorageServer(basedir, "\x00" * 20)
9481+        fp = FilePath(basedir)
9482+        backend = DiskBackend(fp)
9483+        ss = StorageServer("\x00" * 20, backend, fp)
9484         ss.setServiceParent(self.s)
9485         w = StorageStatus(ss)
9486         html = w.renderSynchronously()
9487}
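Nearly every test touched by this patch performs the same conversion: the old StorageServer(basedir, serverid) construction becomes an explicit backend object plus a FilePath for server state. Collected in one place (the path is illustrative):

    from twisted.python.filepath import FilePath
    from allmydata.storage.backends.disk.disk_backend import DiskBackend
    from allmydata.storage.server import StorageServer

    fp = FilePath("storage/example")
    backend = DiskBackend(fp)                     # or DiskBackend(fp, readonly=True)
    ss = StorageServer("\x00" * 20, backend, fp)  # serverid, backend, state directory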
9488[Fix most of the crawler tests. refs #999
9489david-sarah@jacaranda.org**20110922183008
9490 Ignore-this: 116c0848008f3989ba78d87c07ec783c
9491] {
9492hunk ./src/allmydata/storage/backends/disk/disk_backend.py 160
9493         self._discard_storage = discard_storage
9494 
9495     def get_overhead(self):
9496-        return (fileutil.get_disk_usage(self._sharehomedir) +
9497-                fileutil.get_disk_usage(self._incominghomedir))
9498+        return (fileutil.get_used_space(self._sharehomedir) +
9499+                fileutil.get_used_space(self._incominghomedir))
9500 
9501     def get_shares(self):
9502         """
9503hunk ./src/allmydata/storage/crawler.py 2
9504 
9505-import time, struct
9506-import cPickle as pickle
9507+import time, pickle, struct
9508 from twisted.internet import reactor
9509 from twisted.application import service
9510 
9511hunk ./src/allmydata/storage/crawler.py 205
9512         #                            shareset to be processed, or None if we
9513         #                            are sleeping between cycles
9514         try:
9515-            state = pickle.loads(self.statefp.getContent())
9516+            pickled = self.statefp.getContent()
9517         except EnvironmentError:
9518             if self.statefp.exists():
9519                 raise
9520hunk ./src/allmydata/storage/crawler.py 215
9521                      "last-complete-prefix": None,
9522                      "last-complete-bucket": None,
9523                      }
9524+        else:
9525+            state = pickle.loads(pickled)
9526+
9527         state.setdefault("current-cycle-start-time", time.time()) # approximate
9528         self.state = state
9529         lcp = state["last-complete-prefix"]
9530hunk ./src/allmydata/storage/crawler.py 246
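Together with the crawler.py 208 hunk in the previous patch, load_state now distinguishes three cases: a missing state file means "first run" and gets fresh defaults, any other I/O error propagates, and only a successful read is unpickled. The assembled pattern, as it reads after both hunks:

    try:
        pickled = self.statefp.getContent()
    except EnvironmentError:
        if self.statefp.exists():
            raise        # the file is present, so this is a real I/O error
        state = {"version": 1,
                 "last-cycle-finished": None,
                 "current-cycle": None,
                 "last-complete-prefix": None,
                 "last-complete-bucket": None,
                 }
    else:
        state = pickle.loads(pickled)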
9531         else:
9532             last_complete_prefix = self.prefixes[lcpi]
9533         self.state["last-complete-prefix"] = last_complete_prefix
9534-        self.statefp.setContent(pickle.dumps(self.state))
9535+        pickled = pickle.dumps(self.state)
9536+        self.statefp.setContent(pickled)
9537 
9538     def startService(self):
9539         # arrange things to look like we were just sleeping, so
9540hunk ./src/allmydata/storage/expirer.py 86
9541         # initialize history
9542         if not self.historyfp.exists():
9543             history = {} # cyclenum -> dict
9544-            self.historyfp.setContent(pickle.dumps(history))
9545+            pickled = pickle.dumps(history)
9546+            self.historyfp.setContent(pickled)
9547 
9548     def create_empty_cycle_dict(self):
9549         recovered = self.create_empty_recovered_dict()
9550hunk ./src/allmydata/storage/expirer.py 111
9551     def started_cycle(self, cycle):
9552         self.state["cycle-to-date"] = self.create_empty_cycle_dict()
9553 
9554-    def process_storage_index(self, cycle, prefix, container):
9555+    def process_shareset(self, cycle, prefix, shareset):
9556         would_keep_shares = []
9557         wks = None
9558hunk ./src/allmydata/storage/expirer.py 114
9559-        sharetype = None
9560 
9561hunk ./src/allmydata/storage/expirer.py 115
9562-        for share in container.get_shares():
9563-            sharetype = share.sharetype
9564+        for share in shareset.get_shares():
9565             try:
9566                 wks = self.process_share(share)
9567             except (UnknownMutableContainerVersionError,
9568hunk ./src/allmydata/storage/expirer.py 128
9569                 wks = (1, 1, 1, "unknown")
9570             would_keep_shares.append(wks)
9571 
9572-        container_type = None
9573+        shareset_type = None
9574         if wks:
9575hunk ./src/allmydata/storage/expirer.py 130
9576-            # use the last share's sharetype as the container type
9577-            container_type = wks[3]
9578+            # use the last share's type as the shareset type
9579+            shareset_type = wks[3]
9580         rec = self.state["cycle-to-date"]["space-recovered"]
9581         self.increment(rec, "examined-buckets", 1)
9582hunk ./src/allmydata/storage/expirer.py 134
9583-        if sharetype:
9584-            self.increment(rec, "examined-buckets-"+container_type, 1)
9585+        if shareset_type:
9586+            self.increment(rec, "examined-buckets-"+shareset_type, 1)
9587 
9588hunk ./src/allmydata/storage/expirer.py 137
9589-        container_diskbytes = container.get_overhead()
9590+        shareset_diskbytes = shareset.get_overhead()
9591 
9592         if sum([wks[0] for wks in would_keep_shares]) == 0:
9593hunk ./src/allmydata/storage/expirer.py 140
9594-            self.increment_container_space("original", container_diskbytes, sharetype)
9595+            self.increment_shareset_space("original", shareset_diskbytes, shareset_type)
9596         if sum([wks[1] for wks in would_keep_shares]) == 0:
9597hunk ./src/allmydata/storage/expirer.py 142
9598-            self.increment_container_space("configured", container_diskbytes, sharetype)
9599+            self.increment_shareset_space("configured", shareset_diskbytes, shareset_type)
9600         if sum([wks[2] for wks in would_keep_shares]) == 0:
9601hunk ./src/allmydata/storage/expirer.py 144
9602-            self.increment_container_space("actual", container_diskbytes, sharetype)
9603+            self.increment_shareset_space("actual", shareset_diskbytes, shareset_type)
9604 
9605     def process_share(self, share):
9606         sharetype = share.sharetype
9607hunk ./src/allmydata/storage/expirer.py 189
9608 
9609         so_far = self.state["cycle-to-date"]
9610         self.increment(so_far["leases-per-share-histogram"], num_leases, 1)
9611-        self.increment_space("examined", diskbytes, sharetype)
9612+        self.increment_space("examined", sharebytes, diskbytes, sharetype)
9613 
9614         would_keep_share = [1, 1, 1, sharetype]
9615 
9616hunk ./src/allmydata/storage/expirer.py 220
9617             self.increment(so_far_sr, a+"-sharebytes-"+sharetype, sharebytes)
9618             self.increment(so_far_sr, a+"-diskbytes-"+sharetype, diskbytes)
9619 
9620-    def increment_container_space(self, a, container_diskbytes, container_type):
9621+    def increment_shareset_space(self, a, shareset_diskbytes, shareset_type):
9622         rec = self.state["cycle-to-date"]["space-recovered"]
9623hunk ./src/allmydata/storage/expirer.py 222
9624-        self.increment(rec, a+"-diskbytes", container_diskbytes)
9625+        self.increment(rec, a+"-diskbytes", shareset_diskbytes)
9626         self.increment(rec, a+"-buckets", 1)
9627hunk ./src/allmydata/storage/expirer.py 224
9628-        if container_type:
9629-            self.increment(rec, a+"-diskbytes-"+container_type, container_diskbytes)
9630-            self.increment(rec, a+"-buckets-"+container_type, 1)
9631+        if shareset_type:
9632+            self.increment(rec, a+"-diskbytes-"+shareset_type, shareset_diskbytes)
9633+            self.increment(rec, a+"-buckets-"+shareset_type, 1)
9634 
9635     def increment(self, d, k, delta=1):
9636         if k not in d:
9637hunk ./src/allmydata/storage/expirer.py 280
9638         # copy() needs to become a deepcopy
9639         h["space-recovered"] = s["space-recovered"].copy()
9640 
9641-        history = pickle.loads(self.historyfp.getContent())
9642+        pickled = self.historyfp.getContent()
9643+        history = pickle.loads(pickled)
9644         history[cycle] = h
9645         while len(history) > 10:
9646             oldcycles = sorted(history.keys())
9647hunk ./src/allmydata/storage/expirer.py 286
9648             del history[oldcycles[0]]
9649-        self.historyfp.setContent(pickle.dumps(history))
9650+        repickled = pickle.dumps(history)
9651+        self.historyfp.setContent(repickled)
9652 
9653     def get_state(self):
9654         """In addition to the crawler state described in
9655hunk ./src/allmydata/storage/expirer.py 356
9656         progress = self.get_progress()
9657 
9658         state = ShareCrawler.get_state(self) # does a shallow copy
9659-        history = pickle.loads(self.historyfp.getContent())
9660+        pickled = self.historyfp.getContent()
9661+        history = pickle.loads(pickled)
9662         state["history"] = history
9663 
9664         if not progress["cycle-in-progress"]:
9665hunk ./src/allmydata/test/test_crawler.py 25
9666         ShareCrawler.__init__(self, *args, **kwargs)
9667         self.all_buckets = []
9668         self.finished_d = defer.Deferred()
9669-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
9670-        self.all_buckets.append(storage_index_b32)
9671+
9672+    def process_shareset(self, cycle, prefix, shareset):
9673+        self.all_buckets.append(shareset.get_storage_index_string())
9674+
9675     def finished_cycle(self, cycle):
9676         eventually(self.finished_d.callback, None)
9677 
9678hunk ./src/allmydata/test/test_crawler.py 41
9679         self.all_buckets = []
9680         self.finished_d = defer.Deferred()
9681         self.yield_cb = None
9682-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
9683-        self.all_buckets.append(storage_index_b32)
9684+
9685+    def process_shareset(self, cycle, prefix, shareset):
9686+        self.all_buckets.append(shareset.get_storage_index_string())
9687         self.countdown -= 1
9688         if self.countdown == 0:
9689             # force a timeout. We restore it in yielding()
9690hunk ./src/allmydata/test/test_crawler.py 66
9691         self.accumulated = 0.0
9692         self.cycles = 0
9693         self.last_yield = 0.0
9694-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
9695+
9696+    def process_shareset(self, cycle, prefix, shareset):
9697         start = time.time()
9698         time.sleep(0.05)
9699         elapsed = time.time() - start
9700hunk ./src/allmydata/test/test_crawler.py 85
9701         ShareCrawler.__init__(self, *args, **kwargs)
9702         self.counter = 0
9703         self.finished_d = defer.Deferred()
9704-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
9705+
9706+    def process_shareset(self, cycle, prefix, shareset):
9707         self.counter += 1
9708     def finished_cycle(self, cycle):
9709         self.finished_d.callback(None)
9710hunk ./src/allmydata/test/test_storage.py 3041
9711 
9712 class InstrumentedLeaseCheckingCrawler(LeaseCheckingCrawler):
9713     stop_after_first_bucket = False
9714-    def process_bucket(self, *args, **kwargs):
9715-        LeaseCheckingCrawler.process_bucket(self, *args, **kwargs)
9716+
9717+    def process_shareset(self, cycle, prefix, shareset):
9718+        LeaseCheckingCrawler.process_shareset(self, cycle, prefix, shareset)
9719         if self.stop_after_first_bucket:
9720             self.stop_after_first_bucket = False
9721             self.cpu_slice = -1.0
9722hunk ./src/allmydata/test/test_storage.py 3051
9723         if not self.stop_after_first_bucket:
9724             self.cpu_slice = 500
9725 
9726+class InstrumentedStorageServer(StorageServer):
9727+    LeaseCheckerClass = InstrumentedLeaseCheckingCrawler
9728+
9729+
9730 class BrokenStatResults:
9731     pass
9732 class No_ST_BLOCKS_LeaseCheckingCrawler(LeaseCheckingCrawler):
9733hunk ./src/allmydata/test/test_storage.py 3069
9734             setattr(bsr, attrname, getattr(s, attrname))
9735         return bsr
9736 
9737-class InstrumentedStorageServer(StorageServer):
9738-    LeaseCheckerClass = InstrumentedLeaseCheckingCrawler
9739 class No_ST_BLOCKS_StorageServer(StorageServer):
9740     LeaseCheckerClass = No_ST_BLOCKS_LeaseCheckingCrawler
9741 
9742}
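All four crawler hooks in the test hunks above migrate the same way: process_bucket(cycle, prefix, prefixdir, storage_index_b32) becomes process_shareset(cycle, prefix, shareset), with the base-32 storage index now obtained from the shareset itself. A minimal subclass in the new style (the class name is hypothetical):

    from allmydata.storage.crawler import ShareCrawler

    class EnumeratingCrawler(ShareCrawler):
        def __init__(self, *args, **kwargs):
            ShareCrawler.__init__(self, *args, **kwargs)
            self.all_buckets = []

        def process_shareset(self, cycle, prefix, shareset):
            # the shareset replaces the old (prefixdir, storage_index_b32) pair
            self.all_buckets.append(shareset.get_storage_index_string())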
9743[Reinstate the cancel_lease methods of ImmutableDiskShare and MutableDiskShare, since they are needed for lease expiry. refs #999
9744david-sarah@jacaranda.org**20110922183323
9745 Ignore-this: a11fb0dd0078ff627cb727fc769ec848
9746] {
9747hunk ./src/allmydata/storage/backends/disk/immutable.py 260
9748         except IndexError:
9749             self.add_lease(lease_info)
9750 
9751+    def cancel_lease(self, cancel_secret):
9752+        """Remove a lease with the given cancel_secret. If the last lease is
9753+        cancelled, the file will be removed. Return the number of bytes that
9754+        were freed (by truncating the list of leases, and possibly by
9755+        deleting the file). Raise IndexError if there was no lease with the
9756+        given cancel_secret.
9757+        """
9758+
9759+        leases = list(self.get_leases())
9760+        num_leases_removed = 0
9761+        for i, lease in enumerate(leases):
9762+            if constant_time_compare(lease.cancel_secret, cancel_secret):
9763+                leases[i] = None
9764+                num_leases_removed += 1
9765+        if not num_leases_removed:
9766+            raise IndexError("unable to find matching lease to cancel")
9767+
9768+        space_freed = 0
9769+        if num_leases_removed:
9770+            # pack and write out the remaining leases. We write these out in
9771+            # the same order as they were added, so that if we crash while
9772+            # doing this, we won't lose any non-cancelled leases.
9773+            leases = [l for l in leases if l] # remove the cancelled leases
9774+            if len(leases) > 0:
9775+                f = self._home.open('rb+')
9776+                try:
9777+                    for i, lease in enumerate(leases):
9778+                        self._write_lease_record(f, i, lease)
9779+                    self._write_num_leases(f, len(leases))
9780+                    self._truncate_leases(f, len(leases))
9781+                finally:
9782+                    f.close()
9783+                space_freed = self.LEASE_SIZE * num_leases_removed
9784+            else:
9785+                space_freed = fileutil.get_used_space(self._home)
9786+                self.unlink()
9787+        return space_freed
9788+
9789hunk ./src/allmydata/storage/backends/disk/mutable.py 361
9790         except IndexError:
9791             self.add_lease(lease_info)
9792 
9793+    def cancel_lease(self, cancel_secret):
9794+        """Remove any leases with the given cancel_secret. If the last lease
9795+        is cancelled, the file will be removed. Return the number of bytes
9796+        that were freed (by truncating the list of leases, and possibly by
9797+        deleting the file). Raise IndexError if there was no lease with the
9798+        given cancel_secret."""
9799+
9800+        # XXX can this be more like ImmutableDiskShare.cancel_lease?
9801+
9802+        accepting_nodeids = set()
9803+        modified = 0
9804+        remaining = 0
9805+        blank_lease = LeaseInfo(owner_num=0,
9806+                                renew_secret="\x00"*32,
9807+                                cancel_secret="\x00"*32,
9808+                                expiration_time=0,
9809+                                nodeid="\x00"*20)
9810+        f = self._home.open('rb+')
9811+        try:
9812+            for (leasenum, lease) in self._enumerate_leases(f):
9813+                accepting_nodeids.add(lease.nodeid)
9814+                if constant_time_compare(lease.cancel_secret, cancel_secret):
9815+                    self._write_lease_record(f, leasenum, blank_lease)
9816+                    modified += 1
9817+                else:
9818+                    remaining += 1
9819+            if modified:
9820+                freed_space = self._pack_leases(f)
9821+        finally:
9822+            f.close()
9823+
9824+        if modified > 0:
9825+            if remaining == 0:
9826+                freed_space = fileutil.get_used_space(self._home)
9827+                self.unlink()
9828+            return freed_space
9829+
9830+        msg = ("Unable to cancel non-existent lease. I have leases "
9831+               "accepted by nodeids: ")
9832+        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
9833+                         for anid in accepting_nodeids])
9834+        msg += " ."
9835+        raise IndexError(msg)
9836+
9837+    def _pack_leases(self, f):
9838+        # TODO: reclaim space from cancelled leases
9839+        return 0
9840+
9841     def _read_write_enabler_and_nodeid(self, f):
9842         f.seek(0)
9843         data = f.read(self.HEADER_SIZE)
9844}
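Both cancel_lease methods follow the contract stated in their docstrings: they return the number of bytes freed, remove the share container when the last lease is cancelled, and raise IndexError when no lease matches. A hedged sketch of how lease-expiry code might consume that contract; the helper name is illustrative, not from the patch:

    def try_cancel_lease(share, cancel_secret):
        # Returns bytes freed, or 0 if this share had no matching lease.
        # cancel_lease() may unlink the share's container as a side effect.
        try:
            return share.cancel_lease(cancel_secret)
        except IndexError:
            return 0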
9845[Blank line cleanups.
9846david-sarah@jacaranda.org**20110923012044
9847 Ignore-this: 8e1c4ecb5b0c65673af35872876a8591
9848] {
9849hunk ./src/allmydata/interfaces.py 33
9850 LeaseRenewSecret = Hash # used to protect lease renewal requests
9851 LeaseCancelSecret = Hash # used to protect lease cancellation requests
9852 
9853+
9854 class RIStubClient(RemoteInterface):
9855     """Each client publishes a service announcement for a dummy object called
9856     the StubClient. This object doesn't actually offer any services, but the
9857hunk ./src/allmydata/interfaces.py 42
9858     the grid and the client versions in use). This is the (empty)
9859     RemoteInterface for the StubClient."""
9860 
9861+
9862 class RIBucketWriter(RemoteInterface):
9863     """ Objects of this kind live on the server side. """
9864     def write(offset=Offset, data=ShareData):
9865hunk ./src/allmydata/interfaces.py 61
9866         """
9867         return None
9868 
9869+
9870 class RIBucketReader(RemoteInterface):
9871     def read(offset=Offset, length=ReadSize):
9872         return ShareData
9873hunk ./src/allmydata/interfaces.py 78
9874         documentation.
9875         """
9876 
9877+
9878 TestVector = ListOf(TupleOf(Offset, ReadSize, str, str))
9879 # elements are (offset, length, operator, specimen)
9880 # operator is one of "lt, le, eq, ne, ge, gt"
9881hunk ./src/allmydata/interfaces.py 95
9882 ReadData = ListOf(ShareData)
9883 # returns data[offset:offset+length] for each element of TestVector
9884 
9885+
9886 class RIStorageServer(RemoteInterface):
9887     __remote_name__ = "RIStorageServer.tahoe.allmydata.com"
9888 
9889hunk ./src/allmydata/interfaces.py 2255
9890 
9891     def get_storage_index():
9892         """Return a string with the (binary) storage index."""
9893+
9894     def get_storage_index_string():
9895         """Return a string with the (printable) abbreviated storage index."""
9896hunk ./src/allmydata/interfaces.py 2258
9897+
9898     def get_uri():
9899         """Return the (string) URI of the object that was checked."""
9900 
9901hunk ./src/allmydata/interfaces.py 2353
9902     def get_report():
9903         """Return a list of strings with more detailed results."""
9904 
9905+
9906 class ICheckAndRepairResults(Interface):
9907     """I contain the detailed results of a check/verify/repair operation.
9908 
9909hunk ./src/allmydata/interfaces.py 2363
9910 
9911     def get_storage_index():
9912         """Return a string with the (binary) storage index."""
9913+
9914     def get_storage_index_string():
9915         """Return a string with the (printable) abbreviated storage index."""
9916hunk ./src/allmydata/interfaces.py 2366
9917+
9918     def get_repair_attempted():
9919         """Return a boolean, True if a repair was attempted. We might not
9920         attempt to repair the file because it was healthy, or healthy enough
9921hunk ./src/allmydata/interfaces.py 2372
9922         (i.e. some shares were missing but not enough to exceed some
9923         threshold), or because we don't know how to repair this object."""
9924+
9925     def get_repair_successful():
9926         """Return a boolean, True if repair was attempted and the file/dir
9927         was fully healthy afterwards. False if no repair was attempted or if
9928hunk ./src/allmydata/interfaces.py 2377
9929         a repair attempt failed."""
9930+
9931     def get_pre_repair_results():
9932         """Return an ICheckResults instance that describes the state of the
9933         file/dir before any repair was attempted."""
9934hunk ./src/allmydata/interfaces.py 2381
9935+
9936     def get_post_repair_results():
9937         """Return an ICheckResults instance that describes the state of the
9938         file/dir after any repair was attempted. If no repair was attempted,
9939hunk ./src/allmydata/interfaces.py 2615
9940         (childnode, metadata_dict) tuples), the directory will be populated
9941         with those children, otherwise it will be empty."""
9942 
9943+
9944 class IClientStatus(Interface):
9945     def list_all_uploads():
9946         """Return a list of uploader objects, one for each upload that
9947hunk ./src/allmydata/interfaces.py 2621
9948         currently has an object available (tracked with weakrefs). This is
9949         intended for debugging purposes."""
9950+
9951     def list_active_uploads():
9952         """Return a list of active IUploadStatus objects."""
9953hunk ./src/allmydata/interfaces.py 2624
9954+
9955     def list_recent_uploads():
9956         """Return a list of IUploadStatus objects for the most recently
9957         started uploads."""
9958hunk ./src/allmydata/interfaces.py 2633
9959         """Return a list of downloader objects, one for each download that
9960         currently has an object available (tracked with weakrefs). This is
9961         intended for debugging purposes."""
9962+
9963     def list_active_downloads():
9964         """Return a list of active IDownloadStatus objects."""
9965hunk ./src/allmydata/interfaces.py 2636
9966+
9967     def list_recent_downloads():
9968         """Return a list of IDownloadStatus objects for the most recently
9969         started downloads."""
9970hunk ./src/allmydata/interfaces.py 2641
9971 
9972+
9973 class IUploadStatus(Interface):
9974     def get_started():
9975         """Return a timestamp (float with seconds since epoch) indicating
9976hunk ./src/allmydata/interfaces.py 2646
9977         when the operation was started."""
9978+
9979     def get_storage_index():
9980         """Return a string with the (binary) storage index in use on this
9981         upload. Returns None if the storage index has not yet been
9982hunk ./src/allmydata/interfaces.py 2651
9983         calculated."""
9984+
9985     def get_size():
9986         """Return an integer with the number of bytes that will eventually
9987         be uploaded for this file. Returns None if the size is not yet known.
9988hunk ./src/allmydata/interfaces.py 2656
9989         """
9990+
9991     def using_helper():
9992         """Return True if this upload is using a Helper, False if not."""
9993hunk ./src/allmydata/interfaces.py 2659
9994+
9995     def get_status():
9996         """Return a string describing the current state of the upload
9997         process."""
9998hunk ./src/allmydata/interfaces.py 2663
9999+
10000     def get_progress():
10001         """Returns a tuple of floats, (chk, ciphertext, encode_and_push),
10002         each from 0.0 to 1.0 . 'chk' describes how much progress has been
10003hunk ./src/allmydata/interfaces.py 2675
10004         process has finished: for helper uploads this is dependent upon the
10005         helper providing progress reports. It might be reasonable to add all
10006         three numbers and report the sum to the user."""
10007+
10008     def get_active():
10009         """Return True if the upload is currently active, False if not."""
10010hunk ./src/allmydata/interfaces.py 2678
10011+
10012     def get_results():
10013         """Return an instance of UploadResults (which contains timing and
10014         sharemap information). Might return None if the upload is not yet
10015hunk ./src/allmydata/interfaces.py 2683
10016         finished."""
10017+
10018     def get_counter():
10019         """Each upload status gets a unique number: this method returns that
10020         number. This provides a handle to this particular upload, so a web
10021hunk ./src/allmydata/interfaces.py 2689
10022         page can generate a suitable hyperlink."""
10023 
10024+
10025 class IDownloadStatus(Interface):
10026     def get_started():
10027         """Return a timestamp (float with seconds since epoch) indicating
10028hunk ./src/allmydata/interfaces.py 2694
10029         when the operation was started."""
10030+
10031     def get_storage_index():
10032         """Return a string with the (binary) storage index in use on this
10033         download. This may be None if there is no storage index (i.e. LIT
10034hunk ./src/allmydata/interfaces.py 2699
10035         files)."""
10036+
10037     def get_size():
10038         """Return an integer with the number of bytes that will eventually be
10039         retrieved for this file. Returns None if the size is not yet known.
10040hunk ./src/allmydata/interfaces.py 2704
10041         """
10042+
10043     def using_helper():
10044         """Return True if this download is using a Helper, False if not."""
10045hunk ./src/allmydata/interfaces.py 2707
10046+
10047     def get_status():
10048         """Return a string describing the current state of the download
10049         process."""
10050hunk ./src/allmydata/interfaces.py 2711
10051+
10052     def get_progress():
10053         """Returns a float (from 0.0 to 1.0) describing the amount of the
10054         download that has completed. This value will remain at 0.0 until the
10055hunk ./src/allmydata/interfaces.py 2716
10056         first byte of plaintext is pushed to the download target."""
10057+
10058     def get_active():
10059         """Return True if the download is currently active, False if not."""
10060hunk ./src/allmydata/interfaces.py 2719
10061+
10062     def get_counter():
10063         """Each download status gets a unique number: this method returns
10064         that number. This provides a handle to this particular download, so a
10065hunk ./src/allmydata/interfaces.py 2725
10066         web page can generate a suitable hyperlink."""
10067 
10068+
10069 class IServermapUpdaterStatus(Interface):
10070     pass
10071hunk ./src/allmydata/interfaces.py 2728
10072+
10073+
10074 class IPublishStatus(Interface):
10075     pass
10076hunk ./src/allmydata/interfaces.py 2732
10077+
10078+
10079 class IRetrieveStatus(Interface):
10080     pass
10081 
10082hunk ./src/allmydata/interfaces.py 2737
10083+
10084 class NotCapableError(Exception):
10085     """You have tried to write to a read-only node."""
10086 
10087hunk ./src/allmydata/interfaces.py 2741
10088+
10089 class BadWriteEnablerError(Exception):
10090     pass
10091 
10092hunk ./src/allmydata/interfaces.py 2745
10093-class RIControlClient(RemoteInterface):
10094 
10095hunk ./src/allmydata/interfaces.py 2746
10096+class RIControlClient(RemoteInterface):
10097     def wait_for_client_connections(num_clients=int):
10098         """Do not return until we have connections to at least NUM_CLIENTS
10099         storage servers.
10100hunk ./src/allmydata/interfaces.py 2801
10101 
10102         return DictOf(str, float)
10103 
10104+
10105 UploadResults = Any() #DictOf(str, str)
10106 
10107hunk ./src/allmydata/interfaces.py 2804
10108+
10109 class RIEncryptedUploadable(RemoteInterface):
10110     __remote_name__ = "RIEncryptedUploadable.tahoe.allmydata.com"
10111 
10112hunk ./src/allmydata/interfaces.py 2877
10113         """
10114         return DictOf(str, DictOf(str, ChoiceOf(float, int, long, None)))
10115 
10116+
10117 class RIStatsGatherer(RemoteInterface):
10118     __remote_name__ = "RIStatsGatherer.tahoe.allmydata.com"
10119     """
10120hunk ./src/allmydata/interfaces.py 2917
10121 class FileTooLargeError(Exception):
10122     pass
10123 
10124+
10125 class IValidatedThingProxy(Interface):
10126     def start():
10127         """ Acquire a thing and validate it. Return a deferred that is
10128hunk ./src/allmydata/interfaces.py 2924
10129         eventually fired with self if the thing is valid or errbacked if it
10130         can't be acquired or validated."""
10131 
10132+
10133 class InsufficientVersionError(Exception):
10134     def __init__(self, needed, got):
10135         self.needed = needed
10136hunk ./src/allmydata/interfaces.py 2933
10137         return "InsufficientVersionError(need '%s', got %s)" % (self.needed,
10138                                                                 self.got)
10139 
10140+
10141 class EmptyPathnameComponentError(Exception):
10142     """The webapi disallows empty pathname components."""
10143hunk ./src/allmydata/test/test_crawler.py 21
10144 class BucketEnumeratingCrawler(ShareCrawler):
10145     cpu_slice = 500 # make sure it can complete in a single slice
10146     slow_start = 0
10147+
10148     def __init__(self, *args, **kwargs):
10149         ShareCrawler.__init__(self, *args, **kwargs)
10150         self.all_buckets = []
10151hunk ./src/allmydata/test/test_crawler.py 33
10152     def finished_cycle(self, cycle):
10153         eventually(self.finished_d.callback, None)
10154 
10155+
10156 class PacedCrawler(ShareCrawler):
10157     cpu_slice = 500 # make sure it can complete in a single slice
10158     slow_start = 0
10159hunk ./src/allmydata/test/test_crawler.py 37
10160+
10161     def __init__(self, *args, **kwargs):
10162         ShareCrawler.__init__(self, *args, **kwargs)
10163         self.countdown = 6
10164hunk ./src/allmydata/test/test_crawler.py 51
10165         if self.countdown == 0:
10166             # force a timeout. We restore it in yielding()
10167             self.cpu_slice = -1.0
10168+
10169     def yielding(self, sleep_time):
10170         self.cpu_slice = 500
10171         if self.yield_cb:
10172hunk ./src/allmydata/test/test_crawler.py 56
10173             self.yield_cb()
10174+
10175     def finished_cycle(self, cycle):
10176         eventually(self.finished_d.callback, None)
10177 
10178hunk ./src/allmydata/test/test_crawler.py 60
10179+
10180 class ConsumingCrawler(ShareCrawler):
10181     cpu_slice = 0.5
10182     allowed_cpu_percentage = 0.5
10183hunk ./src/allmydata/test/test_crawler.py 79
10184         elapsed = time.time() - start
10185         self.accumulated += elapsed
10186         self.last_yield += elapsed
10187+
10188     def finished_cycle(self, cycle):
10189         self.cycles += 1
10190hunk ./src/allmydata/test/test_crawler.py 82
10191+
10192     def yielding(self, sleep_time):
10193         self.last_yield = 0.0
10194 
10195hunk ./src/allmydata/test/test_crawler.py 86
10196+
10197 class OneShotCrawler(ShareCrawler):
10198     cpu_slice = 500 # make sure it can complete in a single slice
10199     slow_start = 0
10200hunk ./src/allmydata/test/test_crawler.py 90
10201+
10202     def __init__(self, *args, **kwargs):
10203         ShareCrawler.__init__(self, *args, **kwargs)
10204         self.counter = 0
10205hunk ./src/allmydata/test/test_crawler.py 98
10206 
10207     def process_shareset(self, cycle, prefix, shareset):
10208         self.counter += 1
10209+
10210     def finished_cycle(self, cycle):
10211         self.finished_d.callback(None)
10212         self.disownServiceParent()
10213hunk ./src/allmydata/test/test_crawler.py 103
10214 
10215+
10216 class Basic(unittest.TestCase, StallMixin, pollmixin.PollMixin):
10217     def setUp(self):
10218         self.s = service.MultiService()
10219hunk ./src/allmydata/test/test_crawler.py 114
10220 
10221     def si(self, i):
10222         return hashutil.storage_index_hash(str(i))
10223+
10224     def rs(self, i, serverid):
10225         return hashutil.bucket_renewal_secret_hash(str(i), serverid)
10226hunk ./src/allmydata/test/test_crawler.py 117
10227+
10228     def cs(self, i, serverid):
10229         return hashutil.bucket_cancel_secret_hash(str(i), serverid)
10230 
10231hunk ./src/allmydata/test/test_storage.py 39
10232 from allmydata.test.no_network import NoNetworkServer
10233 from allmydata.web.storage import StorageStatus, remove_prefix
10234 
10235+
10236 class Marker:
10237     pass
10238hunk ./src/allmydata/test/test_storage.py 42
10239+
10240+
10241 class FakeCanary:
10242     def __init__(self, ignore_disconnectors=False):
10243         self.ignore = ignore_disconnectors
10244hunk ./src/allmydata/test/test_storage.py 59
10245             return
10246         del self.disconnectors[marker]
10247 
10248+
10249 class FakeStatsProvider:
10250     def count(self, name, delta=1):
10251         pass
10252hunk ./src/allmydata/test/test_storage.py 66
10253     def register_producer(self, producer):
10254         pass
10255 
10256+
10257 class Bucket(unittest.TestCase):
10258     def make_workdir(self, name):
10259         basedir = FilePath("storage").child("Bucket").child(name)
10260hunk ./src/allmydata/test/test_storage.py 165
10261         result_of_read = br.remote_read(0, len(share_data)+1)
10262         self.failUnlessEqual(result_of_read, share_data)
10263 
10264+
10265 class RemoteBucket:
10266 
10267     def __init__(self):
10268hunk ./src/allmydata/test/test_storage.py 309
10269         return self._do_test_readwrite("test_readwrite_v2",
10270                                        0x44, WriteBucketProxy_v2, ReadBucketProxy)
10271 
10272+
10273 class Server(unittest.TestCase):
10274 
10275     def setUp(self):
10276hunk ./src/allmydata/test/test_storage.py 780
10277         self.failUnlessIn("This share tastes like dust.", report)
10278 
10279 
10280-
10281 class MutableServer(unittest.TestCase):
10282 
10283     def setUp(self):
10284hunk ./src/allmydata/test/test_storage.py 1407
10285         # header.
10286         self.salt_hash_tree_s = self.serialize_blockhashes(self.salt_hash_tree[1:])
10287 
10288-
10289     def tearDown(self):
10290         self.sparent.stopService()
10291         fileutil.fp_remove(self.workdir("MDMFProxies storage test server"))
10292hunk ./src/allmydata/test/test_storage.py 1411
10293 
10294-
10295     def write_enabler(self, we_tag):
10296         return hashutil.tagged_hash("we_blah", we_tag)
10297 
10298hunk ./src/allmydata/test/test_storage.py 1414
10299-
10300     def renew_secret(self, tag):
10301         return hashutil.tagged_hash("renew_blah", str(tag))
10302 
10303hunk ./src/allmydata/test/test_storage.py 1417
10304-
10305     def cancel_secret(self, tag):
10306         return hashutil.tagged_hash("cancel_blah", str(tag))
10307 
10308hunk ./src/allmydata/test/test_storage.py 1420
10309-
10310     def workdir(self, name):
10311         return FilePath("storage").child("MDMFProxies").child(name)
10312 
10313hunk ./src/allmydata/test/test_storage.py 1430
10314         ss.setServiceParent(self.sparent)
10315         return ss
10316 
10317-
10318     def build_test_mdmf_share(self, tail_segment=False, empty=False):
10319         # Start with the checkstring
10320         data = struct.pack(">BQ32s",
10321hunk ./src/allmydata/test/test_storage.py 1527
10322         data += self.block_hash_tree_s
10323         return data
10324 
10325-
10326     def write_test_share_to_server(self,
10327                                    storage_index,
10328                                    tail_segment=False,
10329hunk ./src/allmydata/test/test_storage.py 1548
10330         results = write(storage_index, self.secrets, tws, readv)
10331         self.failUnless(results[0])
10332 
10333-
10334     def build_test_sdmf_share(self, empty=False):
10335         if empty:
10336             sharedata = ""
10337hunk ./src/allmydata/test/test_storage.py 1598
10338         self.offsets['EOF'] = eof_offset
10339         return final_share
10340 
10341-
10342     def write_sdmf_share_to_server(self,
10343                                    storage_index,
10344                                    empty=False):
10345hunk ./src/allmydata/test/test_storage.py 1613
10346         results = write(storage_index, self.secrets, tws, readv)
10347         self.failUnless(results[0])
10348 
10349-
10350     def test_read(self):
10351         self.write_test_share_to_server("si1")
10352         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10353hunk ./src/allmydata/test/test_storage.py 1682
10354             self.failUnlessEqual(checkstring, checkstring))
10355         return d
10356 
10357-
10358     def test_read_with_different_tail_segment_size(self):
10359         self.write_test_share_to_server("si1", tail_segment=True)
10360         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10361hunk ./src/allmydata/test/test_storage.py 1693
10362         d.addCallback(_check_tail_segment)
10363         return d
10364 
10365-
10366     def test_get_block_with_invalid_segnum(self):
10367         self.write_test_share_to_server("si1")
10368         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10369hunk ./src/allmydata/test/test_storage.py 1703
10370                             mr.get_block_and_salt, 7))
10371         return d
10372 
10373-
10374     def test_get_encoding_parameters_first(self):
10375         self.write_test_share_to_server("si1")
10376         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10377hunk ./src/allmydata/test/test_storage.py 1715
10378         d.addCallback(_check_encoding_parameters)
10379         return d
10380 
10381-
10382     def test_get_seqnum_first(self):
10383         self.write_test_share_to_server("si1")
10384         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10385hunk ./src/allmydata/test/test_storage.py 1723
10386             self.failUnlessEqual(seqnum, 0))
10387         return d
10388 
10389-
10390     def test_get_root_hash_first(self):
10391         self.write_test_share_to_server("si1")
10392         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10393hunk ./src/allmydata/test/test_storage.py 1731
10394             self.failUnlessEqual(root_hash, self.root_hash))
10395         return d
10396 
10397-
10398     def test_get_checkstring_first(self):
10399         self.write_test_share_to_server("si1")
10400         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10401hunk ./src/allmydata/test/test_storage.py 1739
10402             self.failUnlessEqual(checkstring, self.checkstring))
10403         return d
10404 
10405-
10406     def test_write_read_vectors(self):
10407         # When writing for us, the storage server will return to us a
10408         # read vector, along with its result. If a write fails because
10409hunk ./src/allmydata/test/test_storage.py 1777
10410         # The checkstring remains the same for the rest of the process.
10411         return d
10412 
10413-
10414     def test_private_key_after_share_hash_chain(self):
10415         mw = self._make_new_mw("si1", 0)
10416         d = defer.succeed(None)
10417hunk ./src/allmydata/test/test_storage.py 1795
10418                             mw.put_encprivkey, self.encprivkey))
10419         return d
10420 
10421-
10422     def test_signature_after_verification_key(self):
10423         mw = self._make_new_mw("si1", 0)
10424         d = defer.succeed(None)
10425hunk ./src/allmydata/test/test_storage.py 1821
10426                             mw.put_signature, self.signature))
10427         return d
10428 
10429-
10430     def test_uncoordinated_write(self):
10431         # Make two mutable writers, both pointing to the same storage
10432         # server, both at the same storage index, and try writing to the
10433hunk ./src/allmydata/test/test_storage.py 1853
10434         d.addCallback(_check_failure)
10435         return d
10436 
10437-
10438     def test_invalid_salt_size(self):
10439         # Salts need to be 16 bytes in size. Writes that attempt to
10440         # write more or less than this should be rejected.
10441hunk ./src/allmydata/test/test_storage.py 1871
10442                             another_invalid_salt))
10443         return d
10444 
10445-
10446     def test_write_test_vectors(self):
10447         # If we give the write proxy a bogus test vector at
10448         # any point during the process, it should fail to write when we
10449hunk ./src/allmydata/test/test_storage.py 1904
10450         d.addCallback(_check_success)
10451         return d
10452 
10453-
10454     def serialize_blockhashes(self, blockhashes):
10455         return "".join(blockhashes)
10456 
10457hunk ./src/allmydata/test/test_storage.py 1907
10458-
10459     def serialize_sharehashes(self, sharehashes):
10460         ret = "".join([struct.pack(">H32s", i, sharehashes[i])
10461                         for i in sorted(sharehashes.keys())])
10462hunk ./src/allmydata/test/test_storage.py 1912
10463         return ret
10464 
10465-
10466     def test_write(self):
10467         # This translates to a file with 6 6-byte segments, and with 2-byte
10468         # blocks.
10469hunk ./src/allmydata/test/test_storage.py 2043
10470                                 6, datalength)
10471         return mw
10472 
10473-
10474     def test_write_rejected_with_too_many_blocks(self):
10475         mw = self._make_new_mw("si0", 0)
10476 
10477hunk ./src/allmydata/test/test_storage.py 2059
10478                             mw.put_block, self.block, 7, self.salt))
10479         return d
10480 
10481-
10482     def test_write_rejected_with_invalid_salt(self):
10483         # Try writing an invalid salt. Salts are 16 bytes -- any more or
10484         # less should cause an error.
10485hunk ./src/allmydata/test/test_storage.py 2070
10486                             None, mw.put_block, self.block, 7, bad_salt))
10487         return d
10488 
10489-
10490     def test_write_rejected_with_invalid_root_hash(self):
10491         # Try writing an invalid root hash. This should be SHA256d, and
10492         # 32 bytes long as a result.
10493hunk ./src/allmydata/test/test_storage.py 2095
10494                             None, mw.put_root_hash, invalid_root_hash))
10495         return d
10496 
10497-
10498     def test_write_rejected_with_invalid_blocksize(self):
10499         # The blocksize implied by the writer that we get from
10500         # _make_new_mw is 2bytes -- any more or any less than this
10501hunk ./src/allmydata/test/test_storage.py 2128
10502             mw.put_block(valid_block, 5, self.salt))
10503         return d
10504 
10505-
10506     def test_write_enforces_order_constraints(self):
10507         # We require that the MDMFSlotWriteProxy be interacted with in a
10508         # specific way.
10509hunk ./src/allmydata/test/test_storage.py 2213
10510             mw0.put_verification_key(self.verification_key))
10511         return d
10512 
10513-
10514     def test_end_to_end(self):
10515         mw = self._make_new_mw("si1", 0)
10516         # Write a share using the mutable writer, and make sure that the
10517hunk ./src/allmydata/test/test_storage.py 2378
10518             self.failUnlessEqual(root_hash, self.root_hash, root_hash))
10519         return d
10520 
10521-
10522     def test_only_reads_one_segment_sdmf(self):
10523         # SDMF shares have only one segment, so it doesn't make sense to
10524         # read more segments than that. The reader should know this and
10525hunk ./src/allmydata/test/test_storage.py 2395
10526                             mr.get_block_and_salt, 1))
10527         return d
10528 
10529-
10530     def test_read_with_prefetched_mdmf_data(self):
10531         # The MDMFSlotReadProxy will prefill certain fields if you pass
10532         # it data that you have already fetched. This is useful for
10533hunk ./src/allmydata/test/test_storage.py 2459
10534         d.addCallback(_check_block_and_salt)
10535         return d
10536 
10537-
10538     def test_read_with_prefetched_sdmf_data(self):
10539         sdmf_data = self.build_test_sdmf_share()
10540         self.write_sdmf_share_to_server("si1")
10541hunk ./src/allmydata/test/test_storage.py 2522
10542         d.addCallback(_check_block_and_salt)
10543         return d
10544 
10545-
10546     def test_read_with_empty_mdmf_file(self):
10547         # Some tests upload a file with no contents to test things
10548         # unrelated to the actual handling of the content of the file.
10549hunk ./src/allmydata/test/test_storage.py 2550
10550                             mr.get_block_and_salt, 0))
10551         return d
10552 
10553-
10554     def test_read_with_empty_sdmf_file(self):
10555         self.write_sdmf_share_to_server("si1", empty=True)
10556         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10557hunk ./src/allmydata/test/test_storage.py 2575
10558                             mr.get_block_and_salt, 0))
10559         return d
10560 
10561-
10562     def test_verinfo_with_sdmf_file(self):
10563         self.write_sdmf_share_to_server("si1")
10564         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10565hunk ./src/allmydata/test/test_storage.py 2615
10566         d.addCallback(_check_verinfo)
10567         return d
10568 
10569-
10570     def test_verinfo_with_mdmf_file(self):
10571         self.write_test_share_to_server("si1")
10572         mr = MDMFSlotReadProxy(self.rref, "si1", 0)
10573hunk ./src/allmydata/test/test_storage.py 2653
10574         d.addCallback(_check_verinfo)
10575         return d
10576 
10577-
10578     def test_sdmf_writer(self):
10579         # Go through the motions of writing an SDMF share to the storage
10580         # server. Then read the storage server to see that the share got
10581hunk ./src/allmydata/test/test_storage.py 2696
10582         d.addCallback(_then)
10583         return d
10584 
10585-
10586     def test_sdmf_writer_preexisting_share(self):
10587         data = self.build_test_sdmf_share()
10588         self.write_sdmf_share_to_server("si1")
10589hunk ./src/allmydata/test/test_storage.py 2839
10590         self.failUnless(output["get"]["99_0_percentile"] is None, output)
10591         self.failUnless(output["get"]["99_9_percentile"] is None, output)
10592 
10593+
10594 def remove_tags(s):
10595     s = re.sub(r'<[^>]*>', ' ', s)
10596     s = re.sub(r'\s+', ' ', s)
10597hunk ./src/allmydata/test/test_storage.py 2845
10598     return s
10599 
10600+
10601 class MyBucketCountingCrawler(BucketCountingCrawler):
10602     def finished_prefix(self, cycle, prefix):
10603         BucketCountingCrawler.finished_prefix(self, cycle, prefix)
10604hunk ./src/allmydata/test/test_storage.py 2974
10605         backend = DiskBackend(fp)
10606         ss = MyStorageServer("\x00" * 20, backend, fp)
10607         ss.bucket_counter.slow_start = 0
10608+
10609         # these will be fired inside finished_prefix()
10610         hooks = ss.bucket_counter.hook_ds = [defer.Deferred() for i in range(3)]
10611         w = StorageStatus(ss)
10612hunk ./src/allmydata/test/test_storage.py 3008
10613         ss.setServiceParent(self.s)
10614         return d
10615 
10616+
10617 class InstrumentedLeaseCheckingCrawler(LeaseCheckingCrawler):
10618     stop_after_first_bucket = False
10619 
10620hunk ./src/allmydata/test/test_storage.py 3017
10621         if self.stop_after_first_bucket:
10622             self.stop_after_first_bucket = False
10623             self.cpu_slice = -1.0
10624+
10625     def yielding(self, sleep_time):
10626         if not self.stop_after_first_bucket:
10627             self.cpu_slice = 500
10628hunk ./src/allmydata/test/test_storage.py 3028
10629 
10630 class BrokenStatResults:
10631     pass
10632+
10633 class No_ST_BLOCKS_LeaseCheckingCrawler(LeaseCheckingCrawler):
10634     def stat(self, fn):
10635         s = os.stat(fn)
10636hunk ./src/allmydata/test/test_storage.py 3044
10637 class No_ST_BLOCKS_StorageServer(StorageServer):
10638     LeaseCheckerClass = No_ST_BLOCKS_LeaseCheckingCrawler
10639 
10640+
10641 class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
10642 
10643     def setUp(self):
10644hunk ./src/allmydata/test/test_storage.py 3891
10645         backend = DiskBackend(fp)
10646         ss = InstrumentedStorageServer("\x00" * 20, backend, fp)
10647         w = StorageStatus(ss)
10648+
10649         # make it start sooner than usual.
10650         lc = ss.lease_checker
10651         lc.stop_after_first_bucket = True
10652hunk ./src/allmydata/util/fileutil.py 460
10653              'avail': avail,
10654            }
10655 
10656+
10657 def get_available_space(whichdirfp, reserved_space):
10658     """Returns available space for share storage in bytes, or None if no
10659     API to get this information is available.
10660}
10661[mutable/publish.py: elements should not be removed from a dictionary while it is being iterated over. refs #393
10662david-sarah@jacaranda.org**20110923040825
10663 Ignore-this: 135da94bd344db6ccd59a576b54901c1
10664] {
10665hunk ./src/allmydata/mutable/publish.py 6
10666 import os, time
10667 from StringIO import StringIO
10668 from itertools import count
10669+from copy import copy
10670 from zope.interface import implements
10671 from twisted.internet import defer
10672 from twisted.python import failure
10673merger 0.0 (
10674hunk ./src/allmydata/mutable/publish.py 868
10675-
10676-        # TODO: Bad, since we remove from this same dict. We need to
10677-        # make a copy, or just use a non-iterated value.
10678-        for (shnum, writer) in self.writers.iteritems():
10679+        for (shnum, writer) in self.writers.copy().iteritems():
10680hunk ./src/allmydata/mutable/publish.py 868
10681-
10682-        # TODO: Bad, since we remove from this same dict. We need to
10683-        # make a copy, or just use a non-iterated value.
10684-        for (shnum, writer) in self.writers.iteritems():
10685+        for (shnum, writer) in copy(self.writers).iteritems():
10686)
10687}
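The bug being fixed above is the classic dict-mutation-during-iteration pitfall: in CPython 2, removing entries from self.writers while iterating over writers.iteritems() raises RuntimeError ("dictionary changed size during iteration"). Both merged alternatives snapshot the dict first. A minimal illustration with made-up data:

    writers = {0: "w0", 1: "w1", 2: "w2"}

    # Unsafe: 'del writers[shnum]' inside a loop over writers.iteritems()
    # raises RuntimeError because the dict changes size mid-iteration.

    # Safe: iterate over a snapshot, mutate the original.
    for (shnum, writer) in writers.copy().iteritems():
        if shnum != 1:
            del writers[shnum]

    assert writers.keys() == [1]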
10688[A few comment cleanups. refs #999
10689david-sarah@jacaranda.org**20110923041003
10690 Ignore-this: f574b4a3954b6946016646011ad15edf
10691] {
10692hunk ./src/allmydata/storage/backends/disk/disk_backend.py 17
10693 
10694 # storage/
10695 # storage/shares/incoming
10696-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
10697-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
10698-# storage/shares/$START/$STORAGEINDEX
10699-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
10700+#   incoming/ holds temp dirs named $PREFIX/$STORAGEINDEX/$SHNUM which will
10701+#   be moved to storage/shares/$PREFIX/$STORAGEINDEX/$SHNUM upon success
10702+# storage/shares/$PREFIX/$STORAGEINDEX
10703+# storage/shares/$PREFIX/$STORAGEINDEX/$SHNUM
10704 
10705hunk ./src/allmydata/storage/backends/disk/disk_backend.py 22
10706-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
10707+# Where "$PREFIX" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
10708 # base-32 chars).
10709 # $SHARENUM matches this regex:
10710 NUM_RE=re.compile("^[0-9]+$")
10711hunk ./src/allmydata/storage/backends/disk/immutable.py 16
10712 from allmydata.storage.lease import LeaseInfo
10713 
10714 
10715-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
10716-# and share data. The share data is accessed by RIBucketWriter.write and
10717-# RIBucketReader.read . The lease information is not accessible through these
10718-# interfaces.
10719+# Each share file (in storage/shares/$PREFIX/$STORAGEINDEX/$SHNUM) contains
10720+# lease information and share data. The share data is accessed by
10721+# RIBucketWriter.write and RIBucketReader.read . The lease information is not
10722+# accessible through these remote interfaces.
10723 
10724 # The share file has the following layout:
10725 #  0x00: share file version number, four bytes, current version is 1
10726hunk ./src/allmydata/storage/backends/disk/immutable.py 211
10727 
10728     # These lease operations are intended for use by disk_backend.py.
10729     # Other clients should not depend on the fact that the disk backend
10730-    # stores leases in share files. XXX bucket.py also relies on this.
10731+    # stores leases in share files.
10732+    # XXX BucketWriter in bucket.py also relies on add_lease.
10733 
10734     def get_leases(self):
10735         """Yields a LeaseInfo instance for all leases."""
10736}
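As the cleaned-up comments say, $PREFIX is the first two base-32 characters (the first 10 bits) of the printable storage index. A sketch of the path derivation those comments describe; si_b2a is the real helper from allmydata.storage.common, while the function itself is illustrative:

    from allmydata.storage.common import si_b2a

    def share_path_segments(storageindex, shnum):
        si_s = si_b2a(storageindex)   # printable base-32 storage index
        prefix = si_s[:2]             # $PREFIX: first 10 bits, 2 chars
        return ["shares", prefix, si_s, "%d" % shnum]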
10737[Move advise_corrupt_share to allmydata/storage/backends/base.py, since it will be common to the disk and S3 backends. refs #999
10738david-sarah@jacaranda.org**20110923041115
10739 Ignore-this: 782b49f243bd98fcb6c249f8e40fd9f
10740] {
10741hunk ./src/allmydata/storage/backends/base.py 4
10742 
10743 from twisted.application import service
10744 
10745+from allmydata.util import fileutil, log, time_format
10746 from allmydata.storage.common import si_b2a
10747 from allmydata.storage.lease import LeaseInfo
10748 from allmydata.storage.bucket import BucketReader
10749hunk ./src/allmydata/storage/backends/base.py 13
10750 class Backend(service.MultiService):
10751     def __init__(self):
10752         service.MultiService.__init__(self)
10753+        self._corruption_advisory_dir = None
10754+
10755+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
10756+        si_s = si_b2a(storageindex)  # needed by the log.msg call below in any case
10757+        if self._corruption_advisory_dir is not None:
10758+            fileutil.fp_make_dirs(self._corruption_advisory_dir)
10759+            now = time_format.iso_utc(sep="T")
10760+
10761+            # Windows can't handle colons in the filename.
10762+            name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "")
10763+            f = self._corruption_advisory_dir.child(name).open("w")
10764+            try:
10765+                f.write("report: Share Corruption\n")
10766+                f.write("type: %s\n" % sharetype)
10767+                f.write("storage_index: %s\n" % si_s)
10768+                f.write("share_number: %d\n" % shnum)
10769+                f.write("\n")
10770+                f.write(reason)
10771+                f.write("\n")
10772+            finally:
10773+                f.close()
10774+
10775+        log.msg(format=("client claims corruption in (%(share_type)s) " +
10776+                        "%(si)s-%(shnum)d: %(reason)s"),
10777+                share_type=sharetype, si=si_s, shnum=shnum, reason=reason,
10778+                level=log.SCARY, umid="2fASGx")
10779 
10780 
10781 class ShareSet(object):
10782hunk ./src/allmydata/storage/backends/disk/disk_backend.py 8
10783 
10784 from zope.interface import implements
10785 from allmydata.interfaces import IStorageBackend, IShareSet
10786-from allmydata.util import fileutil, log, time_format
10787+from allmydata.util import fileutil, log
10788 from allmydata.storage.common import si_b2a, si_a2b
10789 from allmydata.storage.bucket import BucketWriter
10790 from allmydata.storage.backends.base import Backend, ShareSet
10791hunk ./src/allmydata/storage/backends/disk/disk_backend.py 125
10792             return 0
10793         return fileutil.get_available_space(self._sharedir, self._reserved_space)
10794 
10795-    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
10796-        fileutil.fp_make_dirs(self._corruption_advisory_dir)
10797-        now = time_format.iso_utc(sep="T")
10798-        si_s = si_b2a(storageindex)
10799-
10800-        # Windows can't handle colons in the filename.
10801-        name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "")
10802-        f = self._corruption_advisory_dir.child(name).open("w")
10803-        try:
10804-            f.write("report: Share Corruption\n")
10805-            f.write("type: %s\n" % sharetype)
10806-            f.write("storage_index: %s\n" % si_s)
10807-            f.write("share_number: %d\n" % shnum)
10808-            f.write("\n")
10809-            f.write(reason)
10810-            f.write("\n")
10811-        finally:
10812-            f.close()
10813-
10814-        log.msg(format=("client claims corruption in (%(share_type)s) " +
10815-                        "%(si)s-%(shnum)d: %(reason)s"),
10816-                share_type=sharetype, si=si_s, shnum=shnum, reason=reason,
10817-                level=log.SCARY, umid="SGx2fA")
10818-
10819 
10820 class DiskShareSet(ShareSet):
10821     implements(IShareSet)
10822}
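For reference, the advisory filename built above is the ISO-8601 UTC timestamp, storage index, and share number joined with "--" and "-", with colons stripped because Windows cannot handle them in filenames. A standalone sketch of the same naming rule (time_format.iso_utc is approximated here with time.strftime):

    import time

    def advisory_name(si_s, shnum):
        now = time.strftime("%Y-%m-%dT%H:%M:%S", time.gmtime())
        return ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "")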
10823[Add incomplete S3 backend. refs #999
10824david-sarah@jacaranda.org**20110923041314
10825 Ignore-this: b48df65699e3926dcbb87b5f755cdbf1
10826] {
10827adddir ./src/allmydata/storage/backends/s3
10828addfile ./src/allmydata/storage/backends/s3/__init__.py
10829addfile ./src/allmydata/storage/backends/s3/immutable.py
10830hunk ./src/allmydata/storage/backends/s3/immutable.py 1
10831+
10832+import struct
10833+
10834+from zope.interface import implements
10835+
10836+from allmydata.interfaces import IStoredShare
10837+from allmydata.util.assertutil import precondition
10838+from allmydata.storage.common import si_b2a, UnknownImmutableContainerVersionError, DataTooLargeError
10839+
10840+
10841+# Each share file (in storage/shares/$PREFIX/$STORAGEINDEX/$SHNUM) contains
10842+# lease information [currently inaccessible] and share data. The share data is
10843+# accessed by RIBucketWriter.write and RIBucketReader.read .
10844+
10845+# The share file has the following layout:
10846+#  0x00: share file version number, four bytes, current version is 1
10847+#  0x04: always zero (was share data length prior to Tahoe-LAFS v1.3.0)
10848+#  0x08: number of leases, four bytes big-endian
10849+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
10850+#  data_length+0x0c: first lease. Each lease record is 72 bytes.
10851+
10852+
10853+class ImmutableS3Share(object):
10854+    implements(IStoredShare)
10855+
10856+    sharetype = "immutable"
10857+    LEASE_SIZE = struct.calcsize(">L32s32sL")  # for compatibility
10858+
10859+
10860+    def __init__(self, storageindex, shnum, s3bucket, create=False, max_size=None):
10861+        """
10862+        If max_size is not None then I won't allow more than max_size to be written to me.
10863+        """
10864+        precondition((max_size is not None) or not create, max_size, create)
10865+        self._storageindex = storageindex
10866+        self._max_size = max_size
10867+
10868+        self._s3bucket = s3bucket
10869+        si_s = si_b2a(storageindex)
10870+        self._key = "storage/shares/%s/%s/%d" % (si_s[:2], si_s, shnum)
10871+        self._shnum = shnum
10872+
10873+        if create:
10874+            # The second field, which was the four-byte share data length in
10875+            # Tahoe-LAFS versions prior to 1.3.0, is not used; we always write 0.
10876+            # We also write 0 for the number of leases.
10877+            self._home.setContent(struct.pack(">LLL", 1, 0, 0))  # XXX leftover from the disk backend; the actual S3 write is the TODO below
10878+            self._end_offset = max_size + 0x0c
10879+
10880+            # TODO: start write to S3.
10881+        else:
10882+            # TODO: get header
10883+            header = "\x00"*12
10884+            (version, unused, num_leases) = struct.unpack(">LLL", header)
10885+
10886+            if version != 1:
10887+                msg = "sharefile %s had version %d but we wanted 1" % \
10888+                      (self._key, version)
10889+                raise UnknownImmutableContainerVersionError(msg)
10890+
10891+            # We cannot write leases in share files, but allow them to be present
10892+            # in case a share file is copied from a disk backend, or in case we
10893+            # need them in future.
10894+            # TODO: filesize = size of S3 object
10895+            self._end_offset = filesize - (num_leases * self.LEASE_SIZE)
10896+        self._data_offset = 0xc
10897+
10898+    def __repr__(self):
10899+        return ("<ImmutableS3Share %s:%r at %r>"
10900+                % (si_b2a(self._storageindex), self._shnum, self._key))
10901+
10902+    def close(self):
10903+        # TODO: finalize write to S3.
10904+        pass
10905+
10906+    def get_used_space(self):
10907+        return self._size
10908+
10909+    def get_storage_index(self):
10910+        return self._storageindex
10911+
10912+    def get_storage_index_string(self):
10913+        return si_b2a(self._storageindex)
10914+
10915+    def get_shnum(self):
10916+        return self._shnum
10917+
10918+    def unlink(self):
10919+        # TODO: remove the S3 object.
10920+        pass
10921+
10922+    def get_allocated_size(self):
10923+        return self._max_size
10924+
10925+    def get_size(self):
10926+        return self._size
10927+
10928+    def get_data_length(self):
10929+        return self._end_offset - self._data_offset
10930+
10931+    def read_share_data(self, offset, length):
10932+        precondition(offset >= 0)
10933+
10934+        # Reads beyond the end of the data are truncated. Reads that start
10935+        # beyond the end of the data return an empty string.
10936+        seekpos = self._data_offset+offset
10937+        actuallength = max(0, min(length, self._end_offset-seekpos))
10938+        if actuallength == 0:
10939+            return ""
10940+
10941+        # TODO: perform an S3 GET request, possibly with a Content-Range header.
10942+        return "\x00"*actuallength
10943+
10944+    def write_share_data(self, offset, data):
10945+        assert offset >= self._size, "offset = %r, size = %r" % (offset, self._size)
10946+
10947+        # TODO: write data to S3. If offset > self._size, fill the space
10948+        # between with zeroes.
10949+
10950+        self._size = offset + len(data)
10951+
10952+    def add_lease(self, lease_info):
10953+        pass
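The header layout described in the comments above is three big-endian 4-byte words: the version (1), an always-zero field, and the lease count. A small struct round-trip sketch, independent of any S3 plumbing (names are illustrative):

    import struct

    HEADER = struct.Struct(">LLL")  # version, unused (always 0), num_leases

    def parse_immutable_header(header_bytes):
        (version, _unused, num_leases) = HEADER.unpack(header_bytes[:12])
        if version != 1:
            raise ValueError("unknown container version %d" % version)
        return num_leases

    # round-trips with the header written on create:
    assert parse_immutable_header(struct.pack(">LLL", 1, 0, 0)) == 0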
10954addfile ./src/allmydata/storage/backends/s3/mutable.py
10955hunk ./src/allmydata/storage/backends/s3/mutable.py 1
10956+
10957+import struct
10958+
10959+from zope.interface import implements
10960+
10961+from allmydata.interfaces import IStoredMutableShare, BadWriteEnablerError
10962+from allmydata.util import fileutil, idlib, log
10963+from allmydata.util.assertutil import precondition
10964+from allmydata.util.hashutil import constant_time_compare
10965+from allmydata.util.encodingutil import quote_filepath
10966+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
10967+     DataTooLargeError
10968+from allmydata.storage.lease import LeaseInfo
10969+from allmydata.storage.backends.base import testv_compare
10970+
10971+
10972+# The MutableDiskShare is like the ImmutableDiskShare, but used for mutable data.
10973+# It has a different layout. See docs/mutable.rst for more details.
10974+
10975+# #   offset    size    name
10976+# 1   0         32      magic verstr "tahoe mutable container v1" plus binary
10977+# 2   32        20      write enabler's nodeid
10978+# 3   52        32      write enabler
10979+# 4   84        8       data size (actual share data present) (a)
10980+# 5   92        8       offset of (8) count of extra leases (after data)
10981+# 6   100       368     four leases, 92 bytes each
10982+#                        0    4   ownerid (0 means "no lease here")
10983+#                        4    4   expiration timestamp
10984+#                        8   32   renewal token
10985+#                        40  32   cancel token
10986+#                        72  20   nodeid that accepted the tokens
10987+# 7   468       (a)     data
10988+# 8   ??        4       count of extra leases
10989+# 9   ??        n*92    extra leases
10990+
10991+
10992+# The struct module doc says that L's are 4 bytes in size, and that Q's are
10993+# 8 bytes in size. Since compatibility depends upon this, double-check it.
10994+assert struct.calcsize(">L") == 4, struct.calcsize(">L")
10995+assert struct.calcsize(">Q") == 8, struct.calcsize(">Q")
10996+
10997+
10998+class MutableDiskShare(object):
10999+    implements(IStoredMutableShare)
11000+
11001+    sharetype = "mutable"
11002+    DATA_LENGTH_OFFSET = struct.calcsize(">32s20s32s")
11003+    EXTRA_LEASE_OFFSET = DATA_LENGTH_OFFSET + 8
11004+    HEADER_SIZE = struct.calcsize(">32s20s32sQQ") # doesn't include leases
11005+    LEASE_SIZE = struct.calcsize(">LL32s32s20s")
11006+    assert LEASE_SIZE == 92
11007+    DATA_OFFSET = HEADER_SIZE + 4*LEASE_SIZE
11008+    assert DATA_OFFSET == 468, DATA_OFFSET
11009+
11010+    # our sharefiles start with a recognizable string, plus some random
11011+    # binary data to reduce the chance that a regular text file will look
11012+    # like a sharefile.
11013+    MAGIC = "Tahoe mutable container v1\n" + "\x75\x09\x44\x03\x8e"
11014+    assert len(MAGIC) == 32
11015+    MAX_SIZE = 2*1000*1000*1000 # 2GB, kind of arbitrary
11016+    # TODO: decide upon a policy for max share size
11017+
11018+    def __init__(self, storageindex, shnum, home, parent=None):
11019+        self._storageindex = storageindex
11020+        self._shnum = shnum
11021+        self._home = home
11022+        if self._home.exists():
11023+            # we don't cache anything, just check the magic
11024+            f = self._home.open('rb')
11025+            try:
11026+                data = f.read(self.HEADER_SIZE)
11027+                (magic,
11028+                 write_enabler_nodeid, write_enabler,
11029+                 data_length, extra_lease_offset) = \
11030+                 struct.unpack(">32s20s32sQQ", data)
11031+                if magic != self.MAGIC:
11032+                    msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
11033+                          (quote_filepath(self._home), magic, self.MAGIC)
11034+                    raise UnknownMutableContainerVersionError(msg)
11035+            finally:
11036+                f.close()
11037+        self.parent = parent # for logging
11038+
11039+    def log(self, *args, **kwargs):
11040+        if self.parent:
11041+            return self.parent.log(*args, **kwargs)
11042+
11043+    def create(self, serverid, write_enabler):
11044+        assert not self._home.exists()
11045+        data_length = 0
11046+        extra_lease_offset = (self.HEADER_SIZE
11047+                              + 4 * self.LEASE_SIZE
11048+                              + data_length)
11049+        assert extra_lease_offset == self.DATA_OFFSET # true at creation
11050+        num_extra_leases = 0
11051+        f = self._home.open('wb')
11052+        try:
11053+            header = struct.pack(">32s20s32sQQ",
11054+                                 self.MAGIC, serverid, write_enabler,
11055+                                 data_length, extra_lease_offset,
11056+                                 )
11057+            leases = ("\x00"*self.LEASE_SIZE) * 4
11058+            f.write(header + leases)
11059+            # data goes here, empty after creation
11060+            f.write(struct.pack(">L", num_extra_leases))
11061+            # extra leases go here, none at creation
11062+        finally:
11063+            f.close()
11064+
11065+    def __repr__(self):
11066+        return ("<MutableDiskShare %s:%r at %s>"
11067+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
11068+
11069+    def get_used_space(self):
11070+        return fileutil.get_used_space(self._home)
11071+
11072+    def get_storage_index(self):
11073+        return self._storageindex
11074+
11075+    def get_storage_index_string(self):
11076+        return si_b2a(self._storageindex)
11077+
11078+    def get_shnum(self):
11079+        return self._shnum
11080+
11081+    def unlink(self):
11082+        self._home.remove()
11083+
11084+    def _read_data_length(self, f):
11085+        f.seek(self.DATA_LENGTH_OFFSET)
11086+        (data_length,) = struct.unpack(">Q", f.read(8))
11087+        return data_length
11088+
11089+    def _write_data_length(self, f, data_length):
11090+        f.seek(self.DATA_LENGTH_OFFSET)
11091+        f.write(struct.pack(">Q", data_length))
11092+
11093+    def _read_share_data(self, f, offset, length):
11094+        precondition(offset >= 0)
11095+        data_length = self._read_data_length(f)
11096+        if offset+length > data_length:
11097+            # reads beyond the end of the data are truncated. Reads that
11098+            # start beyond the end of the data return an empty string.
11099+            length = max(0, data_length-offset)
11100+        if length == 0:
11101+            return ""
11102+        precondition(offset+length <= data_length)
11103+        f.seek(self.DATA_OFFSET+offset)
11104+        data = f.read(length)
11105+        return data
11106+
11107+    def _read_extra_lease_offset(self, f):
11108+        f.seek(self.EXTRA_LEASE_OFFSET)
11109+        (extra_lease_offset,) = struct.unpack(">Q", f.read(8))
11110+        return extra_lease_offset
11111+
11112+    def _write_extra_lease_offset(self, f, offset):
11113+        f.seek(self.EXTRA_LEASE_OFFSET)
11114+        f.write(struct.pack(">Q", offset))
11115+
11116+    def _read_num_extra_leases(self, f):
11117+        offset = self._read_extra_lease_offset(f)
11118+        f.seek(offset)
11119+        (num_extra_leases,) = struct.unpack(">L", f.read(4))
11120+        return num_extra_leases
11121+
11122+    def _write_num_extra_leases(self, f, num_leases):
11123+        extra_lease_offset = self._read_extra_lease_offset(f)
11124+        f.seek(extra_lease_offset)
11125+        f.write(struct.pack(">L", num_leases))
11126+
11127+    def _change_container_size(self, f, new_container_size):
11128+        if new_container_size > self.MAX_SIZE:
11129+            raise DataTooLargeError()
11130+        old_extra_lease_offset = self._read_extra_lease_offset(f)
11131+        new_extra_lease_offset = self.DATA_OFFSET + new_container_size
11132+        if new_extra_lease_offset < old_extra_lease_offset:
11133+            # TODO: allow containers to shrink. For now they remain large.
11134+            return
11135+        num_extra_leases = self._read_num_extra_leases(f)
11136+        f.seek(old_extra_lease_offset)
11137+        leases_size = 4 + num_extra_leases * self.LEASE_SIZE
11138+        extra_lease_data = f.read(leases_size)
11139+
11140+        # Zero out the old lease info (in order to minimize the chance that
11141+        # it could accidentally be exposed to a reader later, re #1528).
11142+        f.seek(old_extra_lease_offset)
11143+        f.write('\x00' * leases_size)
11144+        f.flush()
11145+
11146+        # An interrupt here will corrupt the leases.
11147+
11148+        f.seek(new_extra_lease_offset)
11149+        f.write(extra_lease_data)
11150+        self._write_extra_lease_offset(f, new_extra_lease_offset)
11151+
11152+    def _write_share_data(self, f, offset, data):
11153+        length = len(data)
11154+        precondition(offset >= 0)
11155+        data_length = self._read_data_length(f)
11156+        extra_lease_offset = self._read_extra_lease_offset(f)
11157+
11158+        if offset+length >= data_length:
11159+            # They are expanding their data size.
11160+
11161+            if self.DATA_OFFSET+offset+length > extra_lease_offset:
11162+                # TODO: allow containers to shrink. For now, they remain
11163+                # large.
11164+
11165+                # Their new data won't fit in the current container, so we
11166+                # have to move the leases. With luck, they're expanding it
11167+                # more than the size of the extra lease block, which will
11168+                # minimize the corrupt-the-share window
11169+                self._change_container_size(f, offset+length)
11170+                extra_lease_offset = self._read_extra_lease_offset(f)
11171+
11172+                # an interrupt here is OK; the container has been enlarged
11173+                # but the data remains untouched
11174+
11175+            assert self.DATA_OFFSET+offset+length <= extra_lease_offset
11176+            # Their data now fits in the current container. We must write
11177+            # their new data and modify the recorded data size.
11178+
11179+            # Fill any newly exposed empty space with 0's.
11180+            if offset > data_length:
11181+                f.seek(self.DATA_OFFSET+data_length)
11182+                f.write('\x00'*(offset - data_length))
11183+                f.flush()
11184+
11185+            new_data_length = offset+length
11186+            self._write_data_length(f, new_data_length)
11187+            # an interrupt here will result in a corrupted share
11188+
11189+        # now all that's left to do is write out their data
11190+        f.seek(self.DATA_OFFSET+offset)
11191+        f.write(data)
11192+        return
11193+
11194+    def _write_lease_record(self, f, lease_number, lease_info):
11195+        extra_lease_offset = self._read_extra_lease_offset(f)
11196+        num_extra_leases = self._read_num_extra_leases(f)
11197+        if lease_number < 4:
11198+            offset = self.HEADER_SIZE + lease_number * self.LEASE_SIZE
11199+        elif (lease_number-4) < num_extra_leases:
11200+            offset = (extra_lease_offset
11201+                      + 4
11202+                      + (lease_number-4)*self.LEASE_SIZE)
11203+        else:
11204+            # must add an extra lease record
11205+            self._write_num_extra_leases(f, num_extra_leases+1)
11206+            offset = (extra_lease_offset
11207+                      + 4
11208+                      + (lease_number-4)*self.LEASE_SIZE)
11209+        f.seek(offset)
11210+        assert f.tell() == offset
11211+        f.write(lease_info.to_mutable_data())
11212+
11213+    def _read_lease_record(self, f, lease_number):
11214+        # returns a LeaseInfo instance, or None
11215+        extra_lease_offset = self._read_extra_lease_offset(f)
11216+        num_extra_leases = self._read_num_extra_leases(f)
11217+        if lease_number < 4:
11218+            offset = self.HEADER_SIZE + lease_number * self.LEASE_SIZE
11219+        elif (lease_number-4) < num_extra_leases:
11220+            offset = (extra_lease_offset
11221+                      + 4
11222+                      + (lease_number-4)*self.LEASE_SIZE)
11223+        else:
11224+            raise IndexError("No such lease number %d" % lease_number)
11225+        f.seek(offset)
11226+        assert f.tell() == offset
11227+        data = f.read(self.LEASE_SIZE)
11228+        lease_info = LeaseInfo().from_mutable_data(data)
11229+        if lease_info.owner_num == 0:
11230+            return None
11231+        return lease_info
11232+
11233+    def _get_num_lease_slots(self, f):
11234+        # how many places do we have allocated for leases? Not all of them
11235+        # are filled.
11236+        num_extra_leases = self._read_num_extra_leases(f)
11237+        return 4+num_extra_leases
11238+
11239+    def _get_first_empty_lease_slot(self, f):
11240+        # return an int with the index of an empty slot, or None if we do not
11241+        # currently have an empty slot
11242+
11243+        for i in range(self._get_num_lease_slots(f)):
11244+            if self._read_lease_record(f, i) is None:
11245+                return i
11246+        return None
11247+
11248+    def get_leases(self):
11249+        """Yields a LeaseInfo instance for each lease."""
11250+        f = self._home.open('rb')
11251+        try:
11252+            for i, lease in self._enumerate_leases(f):
11253+                yield lease
11254+        finally:
11255+            f.close()
11256+
11257+    def _enumerate_leases(self, f):
11258+        for i in range(self._get_num_lease_slots(f)):
11259+            try:
11260+                data = self._read_lease_record(f, i)
11261+                if data is not None:
11262+                    yield i, data
11263+            except IndexError:
11264+                return
11265+
11266+    # These lease operations are intended for use by disk_backend.py.
11267+    # Other non-test clients should not depend on the fact that the disk
11268+    # backend stores leases in share files.
11269+
11270+    def add_lease(self, lease_info):
11271+        precondition(lease_info.owner_num != 0) # 0 means "no lease here"
11272+        f = self._home.open('rb+')
11273+        try:
11274+            num_lease_slots = self._get_num_lease_slots(f)
11275+            empty_slot = self._get_first_empty_lease_slot(f)
11276+            if empty_slot is not None:
11277+                self._write_lease_record(f, empty_slot, lease_info)
11278+            else:
11279+                self._write_lease_record(f, num_lease_slots, lease_info)
11280+        finally:
11281+            f.close()
11282+
11283+    def renew_lease(self, renew_secret, new_expire_time):
11284+        accepting_nodeids = set()
11285+        f = self._home.open('rb+')
11286+        try:
11287+            for (leasenum, lease) in self._enumerate_leases(f):
11288+                if constant_time_compare(lease.renew_secret, renew_secret):
11289+                    # yup. See if we need to update the owner time.
11290+                    if new_expire_time > lease.expiration_time:
11291+                        # yes
11292+                        lease.expiration_time = new_expire_time
11293+                        self._write_lease_record(f, leasenum, lease)
11294+                    return
11295+                accepting_nodeids.add(lease.nodeid)
11296+        finally:
11297+            f.close()
11298+        # We didn't find the lease; raise IndexError, listing the nodeids
11299+        # that accepted leases, so the client can update the leases on a
11300+        # share that has been migrated from its original server to a new one.
11301+        msg = ("Unable to renew non-existent lease. I have leases accepted by"
11302+               " nodeids: ")
11303+        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
11304+                         for anid in accepting_nodeids])
11305+        msg += " ."
11306+        raise IndexError(msg)
11307+
11308+    def add_or_renew_lease(self, lease_info):
11309+        precondition(lease_info.owner_num != 0) # 0 means "no lease here"
11310+        try:
11311+            self.renew_lease(lease_info.renew_secret,
11312+                             lease_info.expiration_time)
11313+        except IndexError:
11314+            self.add_lease(lease_info)
11315+
11316+    def cancel_lease(self, cancel_secret):
11317+        """Remove any leases with the given cancel_secret. If the last lease
11318+        is cancelled, the file will be removed. Return the number of bytes
11319+        that were freed (by truncating the list of leases, and possibly by
11320+        deleting the file). Raise IndexError if there was no lease with the
11321+        given cancel_secret."""
11322+
11323+        # XXX can this be more like ImmutableDiskShare.cancel_lease?
11324+
11325+        accepting_nodeids = set()
11326+        modified = 0
11327+        remaining = 0
11328+        blank_lease = LeaseInfo(owner_num=0,
11329+                                renew_secret="\x00"*32,
11330+                                cancel_secret="\x00"*32,
11331+                                expiration_time=0,
11332+                                nodeid="\x00"*20)
11333+        f = self._home.open('rb+')
11334+        try:
11335+            for (leasenum, lease) in self._enumerate_leases(f):
11336+                accepting_nodeids.add(lease.nodeid)
11337+                if constant_time_compare(lease.cancel_secret, cancel_secret):
11338+                    self._write_lease_record(f, leasenum, blank_lease)
11339+                    modified += 1
11340+                else:
11341+                    remaining += 1
11342+            if modified:
11343+                freed_space = self._pack_leases(f)
11344+        finally:
11345+            f.close()
11346+
11347+        if modified > 0:
11348+            if remaining == 0:
11349+                freed_space = fileutil.get_used_space(self._home)
11350+                self.unlink()
11351+            return freed_space
11352+
11353+        msg = ("Unable to cancel non-existent lease. I have leases "
11354+               "accepted by nodeids: ")
11355+        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
11356+                         for anid in accepting_nodeids])
11357+        msg += " ."
11358+        raise IndexError(msg)
11359+
11360+    def _pack_leases(self, f):
11361+        # TODO: reclaim space from cancelled leases
11362+        return 0
11363+
11364+    def _read_write_enabler_and_nodeid(self, f):
11365+        f.seek(0)
11366+        data = f.read(self.HEADER_SIZE)
11367+        (magic,
11368+         write_enabler_nodeid, write_enabler,
11369+         data_length, extra_lease_offset) = \
11370+         struct.unpack(">32s20s32sQQ", data)
11371+        assert magic == self.MAGIC
11372+        return (write_enabler, write_enabler_nodeid)
11373+
11374+    def readv(self, readv):
11375+        datav = []
11376+        f = self._home.open('rb')
11377+        try:
11378+            for (offset, length) in readv:
11379+                datav.append(self._read_share_data(f, offset, length))
11380+        finally:
11381+            f.close()
11382+        return datav
11383+
11384+    def get_size(self):
11385+        return self._home.getsize()
11386+
11387+    def get_data_length(self):
11388+        f = self._home.open('rb')
11389+        try:
11390+            data_length = self._read_data_length(f)
11391+        finally:
11392+            f.close()
11393+        return data_length
11394+
11395+    def check_write_enabler(self, write_enabler, si_s):
11396+        f = self._home.open('rb+')
11397+        try:
11398+            (real_write_enabler, write_enabler_nodeid) = self._read_write_enabler_and_nodeid(f)
11399+        finally:
11400+            f.close()
11401+        # avoid a timing attack
11402+        #if write_enabler != real_write_enabler:
11403+        if not constant_time_compare(write_enabler, real_write_enabler):
11404+            # accommodate share migration by reporting the nodeid used for the
11405+            # old write enabler.
11406+            self.log(format="bad write enabler on SI %(si)s,"
11407+                     " recorded by nodeid %(nodeid)s",
11408+                     facility="tahoe.storage",
11409+                     level=log.WEIRD, umid="cE1eBQ",
11410+                     si=si_s, nodeid=idlib.nodeid_b2a(write_enabler_nodeid))
11411+            msg = "The write enabler was recorded by nodeid '%s'." % \
11412+                  (idlib.nodeid_b2a(write_enabler_nodeid),)
11413+            raise BadWriteEnablerError(msg)
11414+
11415+    def check_testv(self, testv):
11416+        test_good = True
11417+        f = self._home.open('rb+')
11418+        try:
11419+            for (offset, length, operator, specimen) in testv:
11420+                data = self._read_share_data(f, offset, length)
11421+                if not testv_compare(data, operator, specimen):
11422+                    test_good = False
11423+                    break
11424+        finally:
11425+            f.close()
11426+        return test_good
11427+
11428+    def writev(self, datav, new_length):
11429+        f = self._home.open('rb+')
11430+        try:
11431+            for (offset, data) in datav:
11432+                self._write_share_data(f, offset, data)
11433+            if new_length is not None:
11434+                cur_length = self._read_data_length(f)
11435+                if new_length < cur_length:
11436+                    self._write_data_length(f, new_length)
11437+                    # TODO: if we're going to shrink the share file when the
11438+                    # share data has shrunk, then call
11439+                    # self._change_container_size() here.
11440+        finally:
11441+            f.close()
11442+
11443+    def close(self):
11444+        pass
11445+
11446+
11447+def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent):
11448+    ms = MutableDiskShare(storageindex, shnum, fp, parent)
11449+    ms.create(serverid, write_enabler)
11450+    del ms
11451+    return MutableDiskShare(storageindex, shnum, fp, parent)
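
For reference, the container layout written by create() above can be sanity-checked in isolation. A minimal sketch, assuming four embedded lease slots (as in the header-plus-leases write above) and assuming the mutable lease record is packed as ">LL32s32s20s" (owner number, expiration time, renew secret, cancel secret, nodeid):

    import struct

    HEADER = ">32s20s32sQQ"   # magic, serverid, write enabler, data length, extra lease offset
    LEASE = ">LL32s32s20s"    # assumed mutable lease record layout
    HEADER_SIZE = struct.calcsize(HEADER)      # 100 bytes
    LEASE_SIZE = struct.calcsize(LEASE)        # 92 bytes
    DATA_OFFSET = HEADER_SIZE + 4*LEASE_SIZE   # share data starts after 4 lease slots

    # At creation, the extra-lease offset equals DATA_OFFSET, matching the
    # assertion in create() above.
    assert (HEADER_SIZE, LEASE_SIZE, DATA_OFFSET) == (100, 92, 468)
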
11452addfile ./src/allmydata/storage/backends/s3/s3_backend.py
11453hunk ./src/allmydata/storage/backends/s3/s3_backend.py 1
11454+
11455+from zope.interface import implements
11456+from allmydata.interfaces import IStorageBackend, IShareSet
11457+from allmydata.storage.common import si_b2a, si_a2b
11458+from allmydata.storage.bucket import BucketWriter
11459+from allmydata.storage.backends.base import Backend, ShareSet
11460+from allmydata.storage.backends.s3.immutable import ImmutableS3Share
11461+from allmydata.storage.backends.s3.mutable import MutableS3Share
11462+
11463+# The S3 bucket has keys of the form shares/$STORAGEINDEX/$SHARENUM
11464+
11465+
11466+class S3Backend(Backend):
11467+    implements(IStorageBackend)
11468+
11469+    def __init__(self, s3bucket, readonly=False, max_space=None, corruption_advisory_dir=None):
11470+        Backend.__init__(self)
11471+        self._s3bucket = s3bucket
11472+        self._readonly = readonly
11473+        if max_space is None:
11474+            self._max_space = 2**64
11475+        else:
11476+            self._max_space = int(max_space)
11477+
11478+        # TODO: any set-up for S3?
11479+
11480+        # we don't actually create the corruption-advisory dir until necessary
11481+        self._corruption_advisory_dir = corruption_advisory_dir
11482+
11483+    def get_sharesets_for_prefix(self, prefix):
11484+        # TODO: query S3 for keys matching prefix
11485+        return []
11486+
11487+    def get_shareset(self, storageindex):
11488+        return S3ShareSet(storageindex, self._s3bucket)
11489+
11490+    def fill_in_space_stats(self, stats):
11491+        stats['storage_server.max_space'] = self._max_space
11492+
11493+        # TODO: query space usage of S3 bucket
11494+        stats['storage_server.accepting_immutable_shares'] = int(not self._readonly)
11495+
11496+    def get_available_space(self):
11497+        if self._readonly:
11498+            return 0
11499+        # TODO: query space usage of S3 bucket
11500+        return self._max_space
11501+
11502+
11503+class S3ShareSet(ShareSet):
11504+    implements(IShareSet)
11505+
11506+    def __init__(self, storageindex, s3bucket):
11507+        ShareSet.__init__(self, storageindex)
11508+        self._s3bucket = s3bucket
11509+
11510+    def get_overhead(self):
11511+        return 0
11512+
11513+    def get_shares(self):
11514+        """
11515+        Generate IStorageBackendShare objects for shares we have for this storage index.
11516+        ("Shares we have" means completed ones, excluding incoming ones.)
11517+        """
11518+        return []  # TODO: enumerate S3 objects for this storage index
11519+
11520+    def has_incoming(self, shnum):
11521+        # TODO: this might need to be more like the disk backend; review callers
11522+        return False
11523+
11524+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
11525+        immsh = ImmutableS3Share(self.get_storage_index(), shnum, self._s3bucket,
11526+                                 max_size=max_space_per_bucket)
11527+        bw = BucketWriter(storageserver, immsh, lease_info, canary)
11528+        return bw
11529+
11530+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
11531+        # TODO
11532+        serverid = storageserver.get_serverid()
11533+        return MutableS3Share(self.get_storage_index(), shnum, self._s3bucket, serverid, write_enabler, storageserver)
11534+
11535+    def _clean_up_after_unlink(self):
11536+        pass
11537+
11538}
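
The comment above fixes the S3 key naming convention for shares. A hypothetical helper (get_s3_key is not part of this patch) that derives a share's key from it:

    from allmydata.storage.common import si_b2a

    def get_s3_key(storageindex, shnum):
        # 'shares/$STORAGEINDEX/$SHARENUM', with the storage index rendered
        # in base-32 as elsewhere in the storage code.
        return "shares/%s/%d" % (si_b2a(storageindex), shnum)
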
11539[interfaces.py: add fill_in_space_stats method to IStorageBackend. refs #999
11540david-sarah@jacaranda.org**20110923203723
11541 Ignore-this: 59371c150532055939794fed6c77dcb6
11542] {
11543hunk ./src/allmydata/interfaces.py 304
11544     def get_sharesets_for_prefix(prefix):
11545         """
11546         Generates IShareSet objects for all storage indices matching the
11547-        given prefix for which this backend holds shares.
11548+        given base-32 prefix for which this backend holds shares.
11549         """
11550 
11551     def get_shareset(storageindex):
11552hunk ./src/allmydata/interfaces.py 312
11553         Get an IShareSet object for the given storage index.
11554         """
11555 
11556+    def fill_in_space_stats(stats):
11557+        """
11558+        Fill in the 'stats' dict with space statistics for this backend, in
11559+        'storage_server.*' keys.
11560+        """
11561+
11562     def advise_corrupt_share(storageindex, sharetype, shnum, reason):
11563         """
11564         Clients who discover hash failures in shares that they have
11565}
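
To illustrate the new interface method: the storage server owns the stats dict and the backend fills in 'storage_server.*' keys, as S3Backend.fill_in_space_stats does above. A minimal sketch of the calling side (the 'backend' name is illustrative):

    stats = {}
    backend.fill_in_space_stats(stats)
    # For the S3 backend above with no max_space configured, this yields:
    #   {'storage_server.max_space': 2**64,
    #    'storage_server.accepting_immutable_shares': 1}
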
11566[Remove redundant si_s argument from check_write_enabler. refs #999
11567david-sarah@jacaranda.org**20110923204425
11568 Ignore-this: 25be760118dbce2eb661137f7d46dd20
11569] {
11570hunk ./src/allmydata/interfaces.py 500
11571 
11572 
11573 class IStoredMutableShare(IStoredShare):
11574-    def check_write_enabler(write_enabler, si_s):
11575+    def check_write_enabler(write_enabler):
11576         """
11577         XXX
11578         """
11579hunk ./src/allmydata/storage/backends/base.py 102
11580         if len(secrets) > 2:
11581             cancel_secret = secrets[2]
11582 
11583-        si_s = self.get_storage_index_string()
11584         shares = {}
11585         for share in self.get_shares():
11586             # XXX is it correct to ignore immutable shares? Maybe get_shares should
11587hunk ./src/allmydata/storage/backends/base.py 107
11588             # have a parameter saying what type it's expecting.
11589             if share.sharetype == "mutable":
11590-                share.check_write_enabler(write_enabler, si_s)
11591+                share.check_write_enabler(write_enabler)
11592                 shares[share.get_shnum()] = share
11593 
11594         # write_enabler is good for all existing shares
11595hunk ./src/allmydata/storage/backends/disk/mutable.py 440
11596             f.close()
11597         return data_length
11598 
11599-    def check_write_enabler(self, write_enabler, si_s):
11600+    def check_write_enabler(self, write_enabler):
11601         f = self._home.open('rb+')
11602         try:
11603             (real_write_enabler, write_enabler_nodeid) = self._read_write_enabler_and_nodeid(f)
11604hunk ./src/allmydata/storage/backends/disk/mutable.py 447
11605         finally:
11606             f.close()
11607         # avoid a timing attack
11608-        #if write_enabler != real_write_enabler:
11609         if not constant_time_compare(write_enabler, real_write_enabler):
11610             # accommodate share migration by reporting the nodeid used for the
11611             # old write enabler.
11612hunk ./src/allmydata/storage/backends/disk/mutable.py 454
11613                      " recorded by nodeid %(nodeid)s",
11614                      facility="tahoe.storage",
11615                      level=log.WEIRD, umid="cE1eBQ",
11616-                     si=si_s, nodeid=idlib.nodeid_b2a(write_enabler_nodeid))
11617+                     si=self.get_storage_index_string(),
11618+                     nodeid=idlib.nodeid_b2a(write_enabler_nodeid))
11619             msg = "The write enabler was recorded by nodeid '%s'." % \
11620                   (idlib.nodeid_b2a(write_enabler_nodeid),)
11621             raise BadWriteEnablerError(msg)
11622hunk ./src/allmydata/storage/backends/s3/mutable.py 440
11623             f.close()
11624         return data_length
11625 
11626-    def check_write_enabler(self, write_enabler, si_s):
11627+    def check_write_enabler(self, write_enabler):
11628         f = self._home.open('rb+')
11629         try:
11630             (real_write_enabler, write_enabler_nodeid) = self._read_write_enabler_and_nodeid(f)
11631hunk ./src/allmydata/storage/backends/s3/mutable.py 447
11632         finally:
11633             f.close()
11634         # avoid a timing attack
11635             # accommodate share migration by reporting the nodeid used for the
11636         if not constant_time_compare(write_enabler, real_write_enabler):
11637             # accomodate share migration by reporting the nodeid used for the
11638             # old write enabler.
11639hunk ./src/allmydata/storage/backends/s3/mutable.py 454
11640                      " recorded by nodeid %(nodeid)s",
11641                      facility="tahoe.storage",
11642                      level=log.WEIRD, umid="cE1eBQ",
11643-                     si=si_s, nodeid=idlib.nodeid_b2a(write_enabler_nodeid))
11644+                     si=self.get_storage_index_string(),
11645+                     nodeid=idlib.nodeid_b2a(write_enabler_nodeid))
11646             msg = "The write enabler was recorded by nodeid '%s'." % \
11647                   (idlib.nodeid_b2a(write_enabler_nodeid),)
11648             raise BadWriteEnablerError(msg)
11649}
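
The "avoid a timing attack" comment refers to comparing the submitted write enabler against the stored one in constant time, so that an attacker cannot recover the secret byte-by-byte from response latency. The real helper is allmydata.util.hashutil.constant_time_compare; the following is only an illustrative sketch of the technique, not that function's implementation:

    def constant_time_compare_sketch(a, b):
        # XOR each byte pair and OR the results together, so the running
        # time does not depend on where (or whether) the strings differ.
        if len(a) != len(b):
            return False
        result = 0
        for (x, y) in zip(a, b):
            result |= ord(x) ^ ord(y)
        return result == 0
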
11650[Implement readv for immutable shares. refs #999
11651david-sarah@jacaranda.org**20110923204611
11652 Ignore-this: 24f14b663051169d66293020e40c5a05
11653] {
11654hunk ./src/allmydata/storage/backends/disk/immutable.py 156
11655     def get_data_length(self):
11656         return self._lease_offset - self._data_offset
11657 
11658-    #def readv(self, read_vector):
11659-    #    ...
11660+    def readv(self, readv):
11661+        datav = []
11662+        f = self._home.open('rb')
11663+        try:
11664+            for (offset, length) in readv:
11665+                datav.append(self._read_share_data(f, offset, length))
11666+        finally:
11667+            f.close()
11668+        return datav
11669 
11670hunk ./src/allmydata/storage/backends/disk/immutable.py 166
11671-    def read_share_data(self, offset, length):
11672+    def _read_share_data(self, f, offset, length):
11673         precondition(offset >= 0)
11674 
11675         # Reads beyond the end of the data are truncated. Reads that start
11676hunk ./src/allmydata/storage/backends/disk/immutable.py 175
11677         actuallength = max(0, min(length, self._lease_offset-seekpos))
11678         if actuallength == 0:
11679             return ""
11680+        f.seek(seekpos)
11681+        return f.read(actuallength)
11682+
11683+    def read_share_data(self, offset, length):
11684         f = self._home.open(mode='rb')
11685         try:
11686hunk ./src/allmydata/storage/backends/disk/immutable.py 181
11687-            f.seek(seekpos)
11688-            sharedata = f.read(actuallength)
11689+            return self._read_share_data(f, offset, length)
11690         finally:
11691             f.close()
11692hunk ./src/allmydata/storage/backends/disk/immutable.py 184
11693-        return sharedata
11694 
11695     def write_share_data(self, offset, data):
11696         length = len(data)
11697hunk ./src/allmydata/storage/backends/null/null_backend.py 89
11698         return self.shnum
11699 
11700     def unlink(self):
11701-        os.unlink(self.fname)
11702+        pass
11703+
11704+    def readv(self, readv):
11705+        datav = []
11706+        for (offset, length) in readv:
11707+            datav.append("")
11708+        return datav
11709 
11710     def read_share_data(self, offset, length):
11711         precondition(offset >= 0)
11712hunk ./src/allmydata/storage/backends/s3/immutable.py 101
11713     def get_data_length(self):
11714         return self._end_offset - self._data_offset
11715 
11716+    def readv(self, readv):
11717+        datav = []
11718+        for (offset, length) in readv:
11719+            datav.append(self.read_share_data(offset, length))
11720+        return datav
11721+
11722     def read_share_data(self, offset, length):
11723         precondition(offset >= 0)
11724 
11725}
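
The readv contract added here is the vectorized read used by the storage protocol: each (offset, length) pair is answered independently, reads past the end of the data are truncated, and reads starting past the end return the empty string. Modeled on a plain string (illustrative only):

    def model_readv(share_data, read_vector):
        # Python slicing gives exactly the truncate-at-end behaviour of
        # _read_share_data above.
        return [share_data[offset:offset+length]
                for (offset, length) in read_vector]

    assert model_readv("abcdef", [(0, 3), (4, 10), (99, 5)]) == ["abc", "ef", ""]
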
11726[The cancel secret needs to be unique, even if it isn't explicitly provided. refs #999
11727david-sarah@jacaranda.org**20110923204914
11728 Ignore-this: 6c44bb908dd4c0cdc59506b2d87a47b0
11729] {
11730hunk ./src/allmydata/storage/backends/base.py 98
11731 
11732         write_enabler = secrets[0]
11733         renew_secret = secrets[1]
11734-        cancel_secret = '\x00'*32
11735         if len(secrets) > 2:
11736             cancel_secret = secrets[2]
11737hunk ./src/allmydata/storage/backends/base.py 100
11738+        else:
11739+            cancel_secret = renew_secret
11740 
11741         shares = {}
11742         for share in self.get_shares():
11743}
11744[Make EmptyShare.check_testv a simple function. refs #999
11745david-sarah@jacaranda.org**20110923204945
11746 Ignore-this: d0132c085f40c39815fa920b77fc39ab
11747] {
11748hunk ./src/allmydata/storage/backends/base.py 125
11749             else:
11750                 # compare the vectors against an empty share, in which all
11751                 # reads return empty strings
11752-                if not EmptyShare().check_testv(testv):
11753+                if not empty_check_testv(testv):
11754                     storageserver.log("testv failed (empty): [%d] %r" % (sharenum, testv))
11755                     testv_is_good = False
11756                     break
11757hunk ./src/allmydata/storage/backends/base.py 195
11758     # never reached
11759 
11760 
11761-class EmptyShare:
11762-    def check_testv(self, testv):
11763-        test_good = True
11764-        for (offset, length, operator, specimen) in testv:
11765-            data = ""
11766-            if not testv_compare(data, operator, specimen):
11767-                test_good = False
11768-                break
11769-        return test_good
11770+def empty_check_testv(testv):
11771+    test_good = True
11772+    for (offset, length, operator, specimen) in testv:
11773+        data = ""
11774+        if not testv_compare(data, operator, specimen):
11775+            test_good = False
11776+            break
11777+    return test_good
11778 
11779}
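
empty_check_testv evaluates test vectors against a share in which every read returns the empty string. For example, assuming testv_compare's "eq" operator compares the read data with the specimen:

    from allmydata.storage.backends.base import empty_check_testv

    # A test vector is a list of (offset, length, operator, specimen) tuples.
    assert empty_check_testv([(0, 0, "eq", "")])           # "" == ""
    assert not empty_check_testv([(0, 5, "eq", "hello")])  # "" != "hello"
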
11780[Update the null backend to take into account interface changes. Also, it now records which shares are present, but not their contents. refs #999
11781david-sarah@jacaranda.org**20110923205219
11782 Ignore-this: 42a23d7e253255003dc63facea783251
11783] {
11784hunk ./src/allmydata/storage/backends/null/null_backend.py 2
11785 
11786-import os, struct
11787-
11788 from zope.interface import implements
11789 
11790 from allmydata.interfaces import IStorageBackend, IShareSet, IStoredShare, IStoredMutableShare
11791hunk ./src/allmydata/storage/backends/null/null_backend.py 6
11792 from allmydata.util.assertutil import precondition
11793-from allmydata.util.hashutil import constant_time_compare
11794-from allmydata.storage.backends.base import Backend, ShareSet
11795-from allmydata.storage.bucket import BucketWriter
11796+from allmydata.storage.backends.base import Backend, empty_check_testv
11797+from allmydata.storage.bucket import BucketWriter, BucketReader
11798 from allmydata.storage.common import si_b2a
11799hunk ./src/allmydata/storage/backends/null/null_backend.py 9
11800-from allmydata.storage.lease import LeaseInfo
11801 
11802 
11803 class NullBackend(Backend):
11804hunk ./src/allmydata/storage/backends/null/null_backend.py 13
11805     implements(IStorageBackend)
11806+    """
11807+    I am a test backend that records (in memory) which shares exist, but not their contents, leases,
11808+    or write-enablers.
11809+    """
11810 
11811     def __init__(self):
11812         Backend.__init__(self)
11813hunk ./src/allmydata/storage/backends/null/null_backend.py 20
11814+        # mapping from storageindex to NullShareSet
11815+        self._sharesets = {}
11816 
11817hunk ./src/allmydata/storage/backends/null/null_backend.py 23
11818-    def get_available_space(self, reserved_space):
11819+    def get_available_space(self):
11820         return None
11821 
11822     def get_sharesets_for_prefix(self, prefix):
11823hunk ./src/allmydata/storage/backends/null/null_backend.py 27
11824-        pass
11825+        sharesets = []
11826+        for (si, shareset) in self._sharesets.iteritems():
11827+            if si_b2a(si).startswith(prefix):
11828+                sharesets.append(shareset)
11829+
11830+        def _by_base32si(b):
11831+            return b.get_storage_index_string()
11832+        sharesets.sort(key=_by_base32si)
11833+        return sharesets
11834 
11835     def get_shareset(self, storageindex):
11836hunk ./src/allmydata/storage/backends/null/null_backend.py 38
11837-        return NullShareSet(storageindex)
11838+        shareset = self._sharesets.get(storageindex, None)
11839+        if shareset is None:
11840+            shareset = NullShareSet(storageindex)
11841+            self._sharesets[storageindex] = shareset
11842+        return shareset
11843 
11844     def fill_in_space_stats(self, stats):
11845         pass
11846hunk ./src/allmydata/storage/backends/null/null_backend.py 47
11847 
11848-    def set_storage_server(self, ss):
11849-        self.ss = ss
11850 
11851hunk ./src/allmydata/storage/backends/null/null_backend.py 48
11852-    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
11853-        pass
11854-
11855-
11856-class NullShareSet(ShareSet):
11857+class NullShareSet(object):
11858     implements(IShareSet)
11859 
11860     def __init__(self, storageindex):
11861hunk ./src/allmydata/storage/backends/null/null_backend.py 53
11862         self.storageindex = storageindex
11863+        self._incoming_shnums = set()
11864+        self._immutable_shnums = set()
11865+        self._mutable_shnums = set()
11866+
11867+    def close_shnum(self, shnum):
11868+        self._incoming_shnums.remove(shnum)
11869+        self._immutable_shnums.add(shnum)
11870 
11871     def get_overhead(self):
11872         return 0
11873hunk ./src/allmydata/storage/backends/null/null_backend.py 64
11874 
11875-    def get_incoming_shnums(self):
11876-        return frozenset()
11877-
11878     def get_shares(self):
11879hunk ./src/allmydata/storage/backends/null/null_backend.py 65
11880+        for shnum in self._immutable_shnums:
11881+            yield ImmutableNullShare(self, shnum)
11882+        for shnum in self._mutable_shnums:
11883+            yield MutableNullShare(self, shnum)
11884+
11885+    def renew_lease(self, renew_secret, new_expiration_time):
11886+        raise IndexError("no such lease to renew")
11887+
11888+    def get_leases(self):
11889         pass
11890 
11891hunk ./src/allmydata/storage/backends/null/null_backend.py 76
11892-    def get_share(self, shnum):
11893-        return None
11894+    def add_or_renew_lease(self, lease_info):
11895+        pass
11896+
11897+    def has_incoming(self, shnum):
11898+        return shnum in self._incoming_shnums
11899 
11900     def get_storage_index(self):
11901         return self.storageindex
11902hunk ./src/allmydata/storage/backends/null/null_backend.py 89
11903         return si_b2a(self.storageindex)
11904 
11905     def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
11906-        immutableshare = ImmutableNullShare()
11907-        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
11908+        self._incoming_shnums.add(shnum)
11909+        immutableshare = ImmutableNullShare(self, shnum)
11910+        bw = BucketWriter(storageserver, immutableshare, lease_info, canary)
11911+        bw.throw_out_all_data = True
11912+        return bw
11913 
11914hunk ./src/allmydata/storage/backends/null/null_backend.py 95
11915-    def _create_mutable_share(self, storageserver, shnum, write_enabler):
11916-        return MutableNullShare()
11917+    def make_bucket_reader(self, storageserver, share):
11918+        return BucketReader(storageserver, share)
11919 
11920hunk ./src/allmydata/storage/backends/null/null_backend.py 98
11921-    def _clean_up_after_unlink(self):
11922-        pass
11923+    def testv_and_readv_and_writev(self, storageserver, secrets,
11924+                                   test_and_write_vectors, read_vector,
11925+                                   expiration_time):
11926+        # evaluate test vectors
11927+        testv_is_good = True
11928+        for sharenum in test_and_write_vectors:
11929+            # compare the vectors against an empty share, in which all
11930+            # reads return empty strings
11931+            (testv, datav, new_length) = test_and_write_vectors[sharenum]
11932+            if not empty_check_testv(testv):
11933+                storageserver.log("testv failed (empty): [%d] %r" % (sharenum, testv))
11934+                testv_is_good = False
11935+                break
11936 
11937hunk ./src/allmydata/storage/backends/null/null_backend.py 112
11938+        # gather the read vectors
11939+        read_data = {}
11940+        for shnum in self._mutable_shnums:
11941+            read_data[shnum] = ""
11942 
11943hunk ./src/allmydata/storage/backends/null/null_backend.py 117
11944-class ImmutableNullShare:
11945-    implements(IStoredShare)
11946-    sharetype = "immutable"
11947+        if testv_is_good:
11948+            # now apply the write vectors
11949+            for shnum in test_and_write_vectors:
11950+                (testv, datav, new_length) = test_and_write_vectors[shnum]
11951+                if new_length == 0:
11952+                    self._mutable_shnums.remove(shnum)
11953+                else:
11954+                    self._mutable_shnums.add(shnum)
11955 
11956hunk ./src/allmydata/storage/backends/null/null_backend.py 126
11957-    def __init__(self):
11958-        """ If max_size is not None then I won't allow more than
11959-        max_size to be written to me. If create=True then max_size
11960-        must not be None. """
11961-        pass
11962+        return (testv_is_good, read_data)
11963+
11964+    def readv(self, wanted_shnums, read_vector):
11965+        return {}
11966+
11967+
11968+class NullShareBase(object):
11969+    def __init__(self, shareset, shnum):
11970+        self.shareset = shareset
11971+        self.shnum = shnum
11972+
11973+    def get_storage_index(self):
11974+        return self.shareset.get_storage_index()
11975+
11976+    def get_storage_index_string(self):
11977+        return self.shareset.get_storage_index_string()
11978 
11979     def get_shnum(self):
11980         return self.shnum
11981hunk ./src/allmydata/storage/backends/null/null_backend.py 146
11982 
11983+    def get_data_length(self):
11984+        return 0
11985+
11986+    def get_size(self):
11987+        return 0
11988+
11989+    def get_used_space(self):
11990+        return 0
11991+
11992     def unlink(self):
11993         pass
11994 
11995hunk ./src/allmydata/storage/backends/null/null_backend.py 166
11996 
11997     def read_share_data(self, offset, length):
11998         precondition(offset >= 0)
11999-        # Reads beyond the end of the data are truncated. Reads that start
12000-        # beyond the end of the data return an empty string.
12001-        seekpos = self._data_offset+offset
12002-        fsize = os.path.getsize(self.fname)
12003-        actuallength = max(0, min(length, fsize-seekpos)) # XXX #1528
12004-        if actuallength == 0:
12005-            return ""
12006-        f = open(self.fname, 'rb')
12007-        f.seek(seekpos)
12008-        return f.read(actuallength)
12009+        return ""
12010 
12011     def write_share_data(self, offset, data):
12012         pass
12013hunk ./src/allmydata/storage/backends/null/null_backend.py 171
12014 
12015-    def _write_lease_record(self, f, lease_number, lease_info):
12016-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
12017-        f.seek(offset)
12018-        assert f.tell() == offset
12019-        f.write(lease_info.to_immutable_data())
12020-
12021-    def _read_num_leases(self, f):
12022-        f.seek(0x08)
12023-        (num_leases,) = struct.unpack(">L", f.read(4))
12024-        return num_leases
12025-
12026-    def _write_num_leases(self, f, num_leases):
12027-        f.seek(0x08)
12028-        f.write(struct.pack(">L", num_leases))
12029-
12030-    def _truncate_leases(self, f, num_leases):
12031-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
12032-
12033     def get_leases(self):
12034hunk ./src/allmydata/storage/backends/null/null_backend.py 172
12035-        """Yields a LeaseInfo instance for all leases."""
12036-        f = open(self.fname, 'rb')
12037-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
12038-        f.seek(self._lease_offset)
12039-        for i in range(num_leases):
12040-            data = f.read(self.LEASE_SIZE)
12041-            if data:
12042-                yield LeaseInfo().from_immutable_data(data)
12043+        pass
12044 
12045     def add_lease(self, lease):
12046         pass
12047hunk ./src/allmydata/storage/backends/null/null_backend.py 178
12048 
12049     def renew_lease(self, renew_secret, new_expire_time):
12050-        for i,lease in enumerate(self.get_leases()):
12051-            if constant_time_compare(lease.renew_secret, renew_secret):
12052-                # yup. See if we need to update the owner time.
12053-                if new_expire_time > lease.expiration_time:
12054-                    # yes
12055-                    lease.expiration_time = new_expire_time
12056-                    f = open(self.fname, 'rb+')
12057-                    self._write_lease_record(f, i, lease)
12058-                    f.close()
12059-                return
12060         raise IndexError("unable to renew non-existent lease")
12061 
12062     def add_or_renew_lease(self, lease_info):
12063hunk ./src/allmydata/storage/backends/null/null_backend.py 181
12064-        try:
12065-            self.renew_lease(lease_info.renew_secret,
12066-                             lease_info.expiration_time)
12067-        except IndexError:
12068-            self.add_lease(lease_info)
12069+        pass
12070 
12071 
12072hunk ./src/allmydata/storage/backends/null/null_backend.py 184
12073-class MutableNullShare:
12074+class ImmutableNullShare(NullShareBase):
12075+    implements(IStoredShare)
12076+    sharetype = "immutable"
12077+
12078+    def close(self):
12079+        self.shareset.close_shnum(self.shnum)
12080+
12081+
12082+class MutableNullShare(NullShareBase):
12083     implements(IStoredMutableShare)
12084     sharetype = "mutable"
12085hunk ./src/allmydata/storage/backends/null/null_backend.py 195
12086+
12087+    def check_write_enabler(self, write_enabler):
12088+        # Null backend doesn't check write enablers.
12089+        pass
12090+
12091+    def check_testv(self, testv):
12092+        return empty_check_testv(testv)
12093+
12094+    def writev(self, datav, new_length):
12095+        pass
12096+
12097+    def close(self):
12098+        pass
12099 
12100hunk ./src/allmydata/storage/backends/null/null_backend.py 209
12101-    """ XXX: TODO """
12102}
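
A hypothetical smoke test for the updated null backend, exercising only behaviour defined above (and assuming Backend.__init__ takes no further arguments):

    from allmydata.storage.backends.null.null_backend import NullBackend

    backend = NullBackend()
    shareset = backend.get_shareset("\x00"*16)
    assert backend.get_shareset("\x00"*16) is shareset  # sharesets are cached
    assert not shareset.has_incoming(0)
    assert list(shareset.get_shares()) == []            # nothing stored yet
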
12103[Update the S3 backend. refs #999
12104david-sarah@jacaranda.org**20110923205345
12105 Ignore-this: 5ca623a17e09ddad4cab2f51b49aec0a
12106] {
12107hunk ./src/allmydata/storage/backends/s3/immutable.py 11
12108 from allmydata.storage.common import si_b2a, UnknownImmutableContainerVersionError, DataTooLargeError
12109 
12110 
12111-# Each share file (in storage/shares/$PREFIX/$STORAGEINDEX/$SHNUM) contains
12112+# Each share file (with key 'shares/$PREFIX/$STORAGEINDEX/$SHNUM') contains
12113 # lease information [currently inaccessible] and share data. The share data is
12114 # accessed by RIBucketWriter.write and RIBucketReader.read .
12115 
12116hunk ./src/allmydata/storage/backends/s3/immutable.py 65
12117             # in case a share file is copied from a disk backend, or in case we
12118             # need them in future.
12119             # TODO: filesize = size of S3 object
12120+            filesize = 0
12121             self._end_offset = filesize - (num_leases * self.LEASE_SIZE)
12122         self._data_offset = 0xc
12123 
12124hunk ./src/allmydata/storage/backends/s3/immutable.py 122
12125         return "\x00"*actuallength
12126 
12127     def write_share_data(self, offset, data):
12128-        assert offset >= self._size, "offset = %r, size = %r" % (offset, self._size)
12129+        length = len(data)
12130+        precondition(offset >= self._size, "offset = %r, size = %r" % (offset, self._size))
12131+        if self._max_size is not None and offset+length > self._max_size:
12132+            raise DataTooLargeError(self._max_size, offset, length)
12133 
12134         # TODO: write data to S3. If offset > self._size, fill the space
12135         # between with zeroes.
12136hunk ./src/allmydata/storage/backends/s3/mutable.py 17
12137 from allmydata.storage.backends.base import testv_compare
12138 
12139 
12140-# The MutableDiskShare is like the ImmutableDiskShare, but used for mutable data.
12141+# The MutableS3Share is like the ImmutableS3Share, but used for mutable data.
12142 # It has a different layout. See docs/mutable.rst for more details.
12143 
12144 # #   offset    size    name
12145hunk ./src/allmydata/storage/backends/s3/mutable.py 43
12146 assert struct.calcsize(">Q") == 8, struct.calcsize(">Q")
12147 
12148 
12149-class MutableDiskShare(object):
12150+class MutableS3Share(object):
12151     implements(IStoredMutableShare)
12152 
12153     sharetype = "mutable"
12154hunk ./src/allmydata/storage/backends/s3/mutable.py 111
12155             f.close()
12156 
12157     def __repr__(self):
12158-        return ("<MutableDiskShare %s:%r at %s>"
12159+        return ("<MutableS3Share %s:%r at %s>"
12160                 % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
12161 
12162     def get_used_space(self):
12163hunk ./src/allmydata/storage/backends/s3/mutable.py 311
12164             except IndexError:
12165                 return
12166 
12167-    # These lease operations are intended for use by disk_backend.py.
12168-    # Other non-test clients should not depend on the fact that the disk
12169-    # backend stores leases in share files.
12170-
12171-    def add_lease(self, lease_info):
12172-        precondition(lease_info.owner_num != 0) # 0 means "no lease here"
12173-        f = self._home.open('rb+')
12174-        try:
12175-            num_lease_slots = self._get_num_lease_slots(f)
12176-            empty_slot = self._get_first_empty_lease_slot(f)
12177-            if empty_slot is not None:
12178-                self._write_lease_record(f, empty_slot, lease_info)
12179-            else:
12180-                self._write_lease_record(f, num_lease_slots, lease_info)
12181-        finally:
12182-            f.close()
12183-
12184-    def renew_lease(self, renew_secret, new_expire_time):
12185-        accepting_nodeids = set()
12186-        f = self._home.open('rb+')
12187-        try:
12188-            for (leasenum, lease) in self._enumerate_leases(f):
12189-                if constant_time_compare(lease.renew_secret, renew_secret):
12190-                    # yup. See if we need to update the owner time.
12191-                    if new_expire_time > lease.expiration_time:
12192-                        # yes
12193-                        lease.expiration_time = new_expire_time
12194-                        self._write_lease_record(f, leasenum, lease)
12195-                    return
12196-                accepting_nodeids.add(lease.nodeid)
12197-        finally:
12198-            f.close()
12199-        # Return the accepting_nodeids set, to give the client a chance to
12200-        # update the leases on a share that has been migrated from its
12201-        # original server to a new one.
12202-        msg = ("Unable to renew non-existent lease. I have leases accepted by"
12203-               " nodeids: ")
12204-        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
12205-                         for anid in accepting_nodeids])
12206-        msg += " ."
12207-        raise IndexError(msg)
12208-
12209-    def add_or_renew_lease(self, lease_info):
12210-        precondition(lease_info.owner_num != 0) # 0 means "no lease here"
12211-        try:
12212-            self.renew_lease(lease_info.renew_secret,
12213-                             lease_info.expiration_time)
12214-        except IndexError:
12215-            self.add_lease(lease_info)
12216-
12217-    def cancel_lease(self, cancel_secret):
12218-        """Remove any leases with the given cancel_secret. If the last lease
12219-        is cancelled, the file will be removed. Return the number of bytes
12220-        that were freed (by truncating the list of leases, and possibly by
12221-        deleting the file). Raise IndexError if there was no lease with the
12222-        given cancel_secret."""
12223-
12224-        # XXX can this be more like ImmutableDiskShare.cancel_lease?
12225-
12226-        accepting_nodeids = set()
12227-        modified = 0
12228-        remaining = 0
12229-        blank_lease = LeaseInfo(owner_num=0,
12230-                                renew_secret="\x00"*32,
12231-                                cancel_secret="\x00"*32,
12232-                                expiration_time=0,
12233-                                nodeid="\x00"*20)
12234-        f = self._home.open('rb+')
12235-        try:
12236-            for (leasenum, lease) in self._enumerate_leases(f):
12237-                accepting_nodeids.add(lease.nodeid)
12238-                if constant_time_compare(lease.cancel_secret, cancel_secret):
12239-                    self._write_lease_record(f, leasenum, blank_lease)
12240-                    modified += 1
12241-                else:
12242-                    remaining += 1
12243-            if modified:
12244-                freed_space = self._pack_leases(f)
12245-        finally:
12246-            f.close()
12247-
12248-        if modified > 0:
12249-            if remaining == 0:
12250-                freed_space = fileutil.get_used_space(self._home)
12251-                self.unlink()
12252-            return freed_space
12253-
12254-        msg = ("Unable to cancel non-existent lease. I have leases "
12255-               "accepted by nodeids: ")
12256-        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
12257-                         for anid in accepting_nodeids])
12258-        msg += " ."
12259-        raise IndexError(msg)
12260-
12261-    def _pack_leases(self, f):
12262-        # TODO: reclaim space from cancelled leases
12263-        return 0
12264-
12265     def _read_write_enabler_and_nodeid(self, f):
12266         f.seek(0)
12267         data = f.read(self.HEADER_SIZE)
12268hunk ./src/allmydata/storage/backends/s3/mutable.py 394
12269         pass
12270 
12271 
12272-def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent):
12273-    ms = MutableDiskShare(storageindex, shnum, fp, parent)
12274+def create_mutable_s3_share(storageindex, shnum, fp, serverid, write_enabler, parent):
12275+    ms = MutableS3Share(storageindex, shnum, fp, parent)
12276     ms.create(serverid, write_enabler)
12277     del ms
12278hunk ./src/allmydata/storage/backends/s3/mutable.py 398
12279-    return MutableDiskShare(storageindex, shnum, fp, parent)
12280+    return MutableS3Share(storageindex, shnum, fp, parent)
12281hunk ./src/allmydata/storage/backends/s3/s3_backend.py 10
12282 from allmydata.storage.backends.s3.immutable import ImmutableS3Share
12283 from allmydata.storage.backends.s3.mutable import MutableS3Share
12284 
12285-# The S3 bucket has keys of the form shares/$STORAGEINDEX/$SHARENUM
12286-
12287+# The S3 bucket has keys of the form shares/$PREFIX/$STORAGEINDEX/$SHNUM .
12288 
12289 class S3Backend(Backend):
12290     implements(IStorageBackend)
12291}
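
The key convention now includes a prefix component. Updating the earlier hypothetical get_s3_key helper to match, assuming (as in the disk backend layout) that $PREFIX is the first two characters of the base-32 storage index:

    from allmydata.storage.common import si_b2a

    def get_s3_key(storageindex, shnum):
        si_s = si_b2a(storageindex)
        return "shares/%s/%s/%d" % (si_s[:2], si_s, shnum)
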
12292[Minor cleanup to disk backend. refs #999
12293david-sarah@jacaranda.org**20110923205510
12294 Ignore-this: 79f92d7c2edb14cfedb167247c3f0d08
12295] {
12296hunk ./src/allmydata/storage/backends/disk/immutable.py 87
12297                 (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
12298             finally:
12299                 f.close()
12300-            filesize = self._home.getsize()
12301             if version != 1:
12302                 msg = "sharefile %s had version %d but we wanted 1" % \
12303                       (self._home, version)
12304hunk ./src/allmydata/storage/backends/disk/immutable.py 91
12305                 raise UnknownImmutableContainerVersionError(msg)
12306+
12307+            filesize = self._home.getsize()
12308             self._num_leases = num_leases
12309             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
12310         self._data_offset = 0xc
12311}
12312[Add 'has-immutable-readv' to server version information. refs #999
12313david-sarah@jacaranda.org**20110923220935
12314 Ignore-this: c3c4358f2ab8ac503f99c968ace8efcf
12315] {
12316hunk ./src/allmydata/storage/server.py 174
12317                       "delete-mutable-shares-with-zero-length-writev": True,
12318                       "fills-holes-with-zero-bytes": True,
12319                       "prevents-read-past-end-of-share-data": True,
12320+                      "has-immutable-readv": True,
12321                       },
12322                     "application-version": str(allmydata.__full_version__),
12323                     }
12324hunk ./src/allmydata/test/test_storage.py 339
12325         sv1 = ver['http://allmydata.org/tahoe/protocols/storage/v1']
12326         self.failUnless(sv1.get('prevents-read-past-end-of-share-data'), sv1)
12327 
12328+    def test_has_immutable_readv(self):
12329+        ss = self.create("test_has_immutable_readv")
12330+        ver = ss.remote_get_version()
12331+        sv1 = ver['http://allmydata.org/tahoe/protocols/storage/v1']
12332+        self.failUnless(sv1.get('has-immutable-readv'), sv1)
12333+
12334+        # TODO: test that we actually support it
12335+
12336     def allocate(self, ss, storage_index, sharenums, size, canary=None):
12337         renew_secret = hashutil.tagged_hash("blah", "%d" % self._lease_secret.next())
12338         cancel_secret = hashutil.tagged_hash("blah", "%d" % self._lease_secret.next())
12339}
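
A client can gate its use of vectorized immutable reads on the new flag, mirroring the test above (the 'server' name is illustrative):

    ver = server.remote_get_version()
    sv1 = ver['http://allmydata.org/tahoe/protocols/storage/v1']
    if sv1.get('has-immutable-readv', False):
        pass  # safe to issue readv requests for immutable shares
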
12340[util/deferredutil.py: add some utilities for asynchronous iteration. refs #999
12341david-sarah@jacaranda.org**20110927070947
12342 Ignore-this: ac4946c1e5779ea64b85a1a420d34c9e
12343] {
12344hunk ./src/allmydata/util/deferredutil.py 1
12345+
12346+from foolscap.api import fireEventually
12347 from twisted.internet import defer
12348 
12349 # utility wrapper for DeferredList
12350hunk ./src/allmydata/util/deferredutil.py 38
12351     d.addCallbacks(_parseDListResult, _unwrapFirstError)
12352     return d
12353 
12354+
12355+def async_accumulate(accumulator, body):
12356+    """
12357+    I execute an asynchronous loop in which, for each iteration, I eventually
12358+    call 'body' with the current value of an accumulator. 'body' should return a
12359+    (possibly deferred) pair: (result, should_continue). If should_continue is
12360+    true, the loop will continue with result as the new accumulator;
12361+    otherwise it will terminate.
12362+
12363+    I return a Deferred that fires with the final result, or that fails with
12364+    the first failure of 'body'.
12365+    """
12366+    d = defer.succeed(accumulator)
12367+    d.addCallback(body)
12368+    def _iterate((result, should_continue)):
12369+        if not should_continue:
12370+            return result
12371+        d2 = fireEventually(result)
12372+        d2.addCallback(async_accumulate, body)
12373+        return d2
12374+    d.addCallback(_iterate)
12375+    return d
12376+
12377+def async_iterate(process, iterable):
12378+    """
12379+    I iterate over the elements of 'iterable' (which may be deferred), eventually
12380+    applying 'process' to each one. 'process' should return a (possibly deferred)
12381+    boolean: True to continue the iteration, False to stop.
12382+
12383+    I return a Deferred that fires with True if all elements of the iterable
12384+    were processed (i.e. 'process' only returned True values); with False if
12385+    the iteration was stopped by 'process' returning False; or that fails with
12386+    the first failure of either 'process' or the iterator.
12387+    """
12388+    iterator = iter(iterable)
12389+
12390+    def _body(accumulator):
12391+        d = defer.maybeDeferred(iterator.next)
12392+        def _cb(item):
12393+            d2 = defer.maybeDeferred(process, item)
12394+            d2.addCallback(lambda res: (res, res))
12395+            return d2
12396+        def _eb(f):
12397+            if f.trap(StopIteration):
12398+                return (True, False)
12399+        d.addCallbacks(_cb, _eb)
12400+        return d
12401+
12402+    return async_accumulate(False, _body)
12403+
12404+def async_foldl(process, unit, iterable):
12405+    """
12406+    I perform an asynchronous left fold, similar to Haskell 'foldl process unit iterable'.
12407+    Each call to process is eventual.
12408+
12409+    I return a Deferred that fires with the result of the fold, or that fails with
12410+    the first failure of either 'process' or the iterator.
12411+    """
12412+    iterator = iter(iterable)
12413+
12414+    def _body(accumulator):
12415+        d = defer.maybeDeferred(iterator.next)
12416+        def _cb(item):
12417+            d2 = defer.maybeDeferred(process, accumulator, item)
12418+            d2.addCallback(lambda res: (res, True))
12419+            return d2
12420+        def _eb(f):
12421+            if f.trap(StopIteration):
12422+                return (accumulator, False)
12423+        d.addCallbacks(_cb, _eb)
12424+        return d
12425+
12426+    return async_accumulate(unit, _body)
12427}
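
Hypothetical usage of the new helpers (the loop schedules each step with foolscap's fireEventually, so a running reactor is required):

    from twisted.internet import defer
    from allmydata.util.deferredutil import async_foldl, async_iterate

    def add(acc, item):
        return defer.succeed(acc + item)   # steps may also return plain values

    d = async_foldl(add, 0, [1, 2, 3])     # eventually fires with 6

    def small_enough(item):
        return item < 10                   # returning False stops the loop

    d2 = async_iterate(small_enough, [1, 5, 20])  # eventually fires with False
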
12428[test_storage.py: fix test_status_bad_disk_stats. refs #999
12429david-sarah@jacaranda.org**20110927071403
12430 Ignore-this: 6108fee69a60962be2df2ad11b483a11
12431] hunk ./src/allmydata/storage/backends/disk/disk_backend.py 123
12432     def get_available_space(self):
12433         if self._readonly:
12434             return 0
12435-        return fileutil.get_available_space(self._sharedir, self._reserved_space)
12436+        try:
12437+            return fileutil.get_available_space(self._sharedir, self._reserved_space)
12438+        except EnvironmentError:
12439+            return 0
12440 
12441 
12442 class DiskShareSet(ShareSet):
12443[Cleanups to disk backend. refs #999
12444david-sarah@jacaranda.org**20110927071544
12445 Ignore-this: e9d3fd0e85aaf301c04342fffdc8f26
12446] {
12447hunk ./src/allmydata/storage/backends/disk/immutable.py 46
12448 
12449     sharetype = "immutable"
12450     LEASE_SIZE = struct.calcsize(">L32s32sL")
12451-
12452+    HEADER = ">LLL"
12453+    HEADER_SIZE = struct.calcsize(HEADER)
12454 
12455     def __init__(self, storageindex, shnum, home, finalhome=None, max_size=None):
12456         """
12457hunk ./src/allmydata/storage/backends/disk/immutable.py 79
12458             # the largest length that can fit into the field. That way, even
12459             # if this does happen, the old < v1.3.0 server will still allow
12460             # clients to read the first part of the share.
12461-            self._home.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
12462-            self._lease_offset = max_size + 0x0c
12463+            self._home.setContent(struct.pack(self.HEADER, 1, min(2**32-1, max_size), 0) )
12464+            self._lease_offset = self.HEADER_SIZE + max_size
12465             self._num_leases = 0
12466         else:
12467             f = self._home.open(mode='rb')
12468hunk ./src/allmydata/storage/backends/disk/immutable.py 85
12469             try:
12470-                (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
12471+                (version, unused, num_leases) = struct.unpack(self.HEADER, f.read(self.HEADER_SIZE))
12472             finally:
12473                 f.close()
12474             if version != 1:
12475hunk ./src/allmydata/storage/backends/disk/immutable.py 229
12476         """Yields a LeaseInfo instance for all leases."""
12477         f = self._home.open(mode='rb')
12478         try:
12479-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
12480+            (version, unused, num_leases) = struct.unpack(self.HEADER, f.read(self.HEADER_SIZE))
12481             f.seek(self._lease_offset)
12482             for i in range(num_leases):
12483                 data = f.read(self.LEASE_SIZE)
12484}
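
The HEADER/HEADER_SIZE constants introduced above replace the hard-coded ">LLL" format
strings and the 0x0c offset. A quick check of the layout they encode (plain Python,
independent of the share classes):

    import struct

    HEADER = ">LLL"    # version, obsolete data-length field, num_leases
    HEADER_SIZE = struct.calcsize(HEADER)
    assert HEADER_SIZE == 12   # the 0x0c that the literals used to hard-code

    # Round-trip the v1 header as written on create by the disk backend:
    max_size = 1000
    header = struct.pack(HEADER, 1, min(2**32 - 1, max_size), 0)
    (version, unused, num_leases) = struct.unpack(HEADER, header)
    assert (version, num_leases) == (1, 0)
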
12485[Cleanups to S3 backend (not including Deferred changes). refs #999
12486david-sarah@jacaranda.org**20110927071855
12487 Ignore-this: f0dca788190d92b1edb1ee1498fb34dc
12488] {
12489hunk ./src/allmydata/storage/backends/s3/immutable.py 7
12490 from zope.interface import implements
12491 
12492 from allmydata.interfaces import IStoredShare
12493+
12494 from allmydata.util.assertutil import precondition
12495 from allmydata.storage.common import si_b2a, UnknownImmutableContainerVersionError, DataTooLargeError
12496 
12497hunk ./src/allmydata/storage/backends/s3/immutable.py 29
12498 
12499     sharetype = "immutable"
12500     LEASE_SIZE = struct.calcsize(">L32s32sL")  # for compatibility
12501+    HEADER = ">LLL"
12502+    HEADER_SIZE = struct.calcsize(HEADER)
12503 
12504hunk ./src/allmydata/storage/backends/s3/immutable.py 32
12505-
12506-    def __init__(self, storageindex, shnum, s3bucket, create=False, max_size=None):
12507+    def __init__(self, storageindex, shnum, s3bucket, max_size=None, data=None):
12508         """
12509         If max_size is not None then I won't allow more than max_size to be written to me.
12510         """
12511hunk ./src/allmydata/storage/backends/s3/immutable.py 36
12512-        precondition((max_size is not None) or not create, max_size, create)
12513+        precondition((max_size is not None) or (data is not None), max_size, data)
12514         self._storageindex = storageindex
12515hunk ./src/allmydata/storage/backends/s3/immutable.py 38
12516+        self._shnum = shnum
12517+        self._s3bucket = s3bucket
12518         self._max_size = max_size
12519hunk ./src/allmydata/storage/backends/s3/immutable.py 41
12520+        self._data = data
12521 
12522hunk ./src/allmydata/storage/backends/s3/immutable.py 43
12523-        self._s3bucket = s3bucket
12524-        si_s = si_b2a(storageindex)
12525-        self._key = "storage/shares/%s/%s/%d" % (si_s[:2], si_s, shnum)
12526-        self._shnum = shnum
12527+        sistr = self.get_storage_index_string()
12528+        self._key = "shares/%s/%s/%d" % (sistr[:2], sistr, shnum)
12529 
12530hunk ./src/allmydata/storage/backends/s3/immutable.py 46
12531-        if create:
12532+        if data is None:  # creating share
12533             # The second field, which was the four-byte share data length in
12534             # Tahoe-LAFS versions prior to 1.3.0, is not used; we always write 0.
12535             # We also write 0 for the number of leases.
12536hunk ./src/allmydata/storage/backends/s3/immutable.py 50
12537-            self._home.setContent(struct.pack(">LLL", 1, 0, 0) )
12538-            self._end_offset = max_size + 0x0c
12539-
12540-            # TODO: start write to S3.
12541+            # An S3 share has no local home file; buffer the header for upload.
12542+            self._writes = [struct.pack(self.HEADER, 1, 0, 0)]
12543+            self._end_offset = self.HEADER_SIZE + max_size
12544+            self._size = self.HEADER_SIZE
12545         else:
12546hunk ./src/allmydata/storage/backends/s3/immutable.py 55
12547-            # TODO: get header
12548-            header = "\x00"*12
12549-            (version, unused, num_leases) = struct.unpack(">LLL", header)
12550+            (version, unused, num_leases) = struct.unpack(self.HEADER, data[:self.HEADER_SIZE])
12551 
12552             if version != 1:
12553hunk ./src/allmydata/storage/backends/s3/immutable.py 58
12554-                msg = "sharefile %s had version %d but we wanted 1" % \
12555-                      (self._home, version)
12556+                msg = "%r had version %d but we wanted 1" % (self, version)
12557                 raise UnknownImmutableContainerVersionError(msg)
12558 
12559             # We cannot write leases in share files, but allow them to be present
12560hunk ./src/allmydata/storage/backends/s3/immutable.py 64
12561             # in case a share file is copied from a disk backend, or in case we
12562             # need them in future.
12563-            # TODO: filesize = size of S3 object
12564-            filesize = 0
12565-            self._end_offset = filesize - (num_leases * self.LEASE_SIZE)
12566-        self._data_offset = 0xc
12567+            self._size = len(data)
12568+            self._end_offset = self._size - (num_leases * self.LEASE_SIZE)
12569+        self._data_offset = self.HEADER_SIZE
12570 
12571     def __repr__(self):
12572hunk ./src/allmydata/storage/backends/s3/immutable.py 69
12573-        return ("<ImmutableS3Share %s:%r at %r>"
12574-                % (si_b2a(self._storageindex), self._shnum, self._key))
12575+        return ("<ImmutableS3Share at %r>" % (self._key,))
12576 
12577     def close(self):
12578         # TODO: finalize write to S3.
12579hunk ./src/allmydata/storage/backends/s3/immutable.py 88
12580         return self._shnum
12581 
12582     def unlink(self):
12583-        # TODO: remove the S3 object.
12584-        pass
12585+        self._data = None
12586+        self._writes = None
12587+        return self._s3bucket.delete_object(self._key)
12588 
12589     def get_allocated_size(self):
12590         return self._max_size
12591hunk ./src/allmydata/storage/backends/s3/immutable.py 126
12592         if self._max_size is not None and offset+length > self._max_size:
12593             raise DataTooLargeError(self._max_size, offset, length)
12594 
12595-        # TODO: write data to S3. If offset > self._size, fill the space
12596-        # between with zeroes.
12597-
12598+        if offset > self._size:
12599+            self._writes.append("\x00" * (offset - self._size))
12600+        self._writes.append(data)
12601         self._size = offset + len(data)
12602 
12603     def add_lease(self, lease_info):
12604hunk ./src/allmydata/storage/backends/s3/s3_backend.py 2
12605 
12606-from zope.interface import implements
12607+import re
12608+
12609+from zope.interface import implements, Interface
12610 from allmydata.interfaces import IStorageBackend, IShareSet
12611hunk ./src/allmydata/storage/backends/s3/s3_backend.py 6
12612-from allmydata.storage.common import si_b2a, si_a2b
12613+
12614+from allmydata.storage.common import si_a2b
12615 from allmydata.storage.bucket import BucketWriter
12616 from allmydata.storage.backends.base import Backend, ShareSet
12617 from allmydata.storage.backends.s3.immutable import ImmutableS3Share
12618hunk ./src/allmydata/storage/backends/s3/s3_backend.py 15
12619 
12620 # The S3 bucket has keys of the form shares/$PREFIX/$STORAGEINDEX/$SHNUM .
12621 
12622+NUM_RE = re.compile("^[0-9]+$")
12623+
12624+
12625+class IS3Bucket(Interface):
12626+    """
12627+    I represent an S3 bucket.
12628+    """
12629+    def create():
12630+        """
12631+        Create this bucket.
12632+        """
12633+
12634+    def delete():
12635+        """
12636+        Delete this bucket.
12637+        The bucket must be empty before it can be deleted.
12638+        """
12639+
12640+    def list_objects(prefix=""):
12641+        """
12642+        Get a list of all the objects in this bucket whose object names start with
12643+        the given prefix.
12644+        """
12645+
12646+    def put_object(object_name, data, content_type=None, metadata={}):
12647+        """
12648+        Put an object in this bucket.
12649+        Any existing object of the same name will be replaced.
12650+        """
12651+
12652+    def get_object(object_name):
12653+        """
12654+        Get an object from this bucket.
12655+        """
12656+
12657+    def head_object(object_name):
12658+        """
12659+        Retrieve object metadata only.
12660+        """
12661+
12662+    def delete_object(object_name):
12663+        """
12664+        Delete an object from this bucket.
12665+        Once deleted, there is no method to restore or undelete an object.
12666+        """
12667+
12668+
12669 class S3Backend(Backend):
12670     implements(IStorageBackend)
12671 
12672hunk ./src/allmydata/storage/backends/s3/s3_backend.py 74
12673         else:
12674             self._max_space = int(max_space)
12675 
12676-        # TODO: any set-up for S3?
12677-
12678         # we don't actually create the corruption-advisory dir until necessary
12679         self._corruption_advisory_dir = corruption_advisory_dir
12680 
12681hunk ./src/allmydata/storage/backends/s3/s3_backend.py 103
12682     def __init__(self, storageindex, s3bucket):
12683         ShareSet.__init__(self, storageindex)
12684         self._s3bucket = s3bucket
12685+        sistr = self.get_storage_index_string()
12686+        self._key = 'shares/%s/%s/' % (sistr[:2], sistr)
12687 
12688     def get_overhead(self):
12689         return 0
12690hunk ./src/allmydata/storage/backends/s3/s3_backend.py 129
12691     def _create_mutable_share(self, storageserver, shnum, write_enabler):
12692         # TODO
12693         serverid = storageserver.get_serverid()
12694-        return MutableS3Share(self.get_storage_index(), shnum, self._s3bucket, serverid, write_enabler, storageserver)
12695+        return MutableS3Share(self.get_storage_index(), shnum, self._s3bucket, serverid,
12696+                              write_enabler, storageserver)
12697 
12698     def _clean_up_after_unlink(self):
12699         pass
12700}
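
With these changes, ImmutableS3Share.write_share_data no longer does filesystem I/O:
each write is appended to the self._writes list, any gap between the current end of the
data and the write offset is filled with zero bytes, and the buffered pieces are to be
joined and uploaded when the share is closed. A minimal standalone model of that
gap-filling rule (the WriteBuffer class is illustrative, not part of the patch):

    class WriteBuffer(object):
        def __init__(self):
            self._writes = []
            self._size = 0

        def write(self, offset, data):
            if offset > self._size:
                # Zero-fill the hole between the current end and 'offset'.
                self._writes.append("\x00" * (offset - self._size))
            self._writes.append(data)
            self._size = offset + len(data)

        def getvalue(self):
            return "".join(self._writes)

    buf = WriteBuffer()
    buf.write(0, "abc")
    buf.write(5, "xy")      # bytes 3..4 become "\x00\x00"
    assert buf.getvalue() == "abc\x00\x00xy"

Like the patch, this assumes writes arrive in order and never overlap data already
buffered.
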
12701[test_storage.py: fix test_no_st_blocks. refs #999
12702david-sarah@jacaranda.org**20110927072848
12703 Ignore-this: 5f12b784920f87d09c97c676d0afa6f8
12704] {
12705hunk ./src/allmydata/test/test_storage.py 3034
12706     LeaseCheckerClass = InstrumentedLeaseCheckingCrawler
12707 
12708 
12709-class BrokenStatResults:
12710-    pass
12711-
12712-class No_ST_BLOCKS_LeaseCheckingCrawler(LeaseCheckingCrawler):
12713-    def stat(self, fn):
12714-        s = os.stat(fn)
12715-        bsr = BrokenStatResults()
12716-        for attrname in dir(s):
12717-            if attrname.startswith("_"):
12718-                continue
12719-            if attrname == "st_blocks":
12720-                continue
12721-            setattr(bsr, attrname, getattr(s, attrname))
12722-        return bsr
12723-
12724-class No_ST_BLOCKS_StorageServer(StorageServer):
12725-    LeaseCheckerClass = No_ST_BLOCKS_LeaseCheckingCrawler
12726-
12727-
12728 class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
12729 
12730     def setUp(self):
12731hunk ./src/allmydata/test/test_storage.py 3830
12732         return d
12733 
12734     def test_no_st_blocks(self):
12735-        basedir = "storage/LeaseCrawler/no_st_blocks"
12736-        fp = FilePath(basedir)
12737-        backend = DiskBackend(fp)
12738+        # TODO: replace with @patch that supports Deferreds.
12739 
12740hunk ./src/allmydata/test/test_storage.py 3832
12741-        # A negative 'override_lease_duration' means that the "configured-"
12742-        # space-recovered counts will be non-zero, since all shares will have
12743-        # expired by then.
12744-        expiration_policy = {
12745-            'enabled': True,
12746-            'mode': 'age',
12747-            'override_lease_duration': -1000,
12748-            'sharetypes': ('mutable', 'immutable'),
12749-        }
12750-        ss = No_ST_BLOCKS_StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
12751+        class BrokenStatResults:
12752+            pass
12753 
12754hunk ./src/allmydata/test/test_storage.py 3835
12755-        # make it start sooner than usual.
12756-        lc = ss.lease_checker
12757-        lc.slow_start = 0
12758+        def call_stat(fn):
12759+            s = self.old_os_stat(fn)
12760+            bsr = BrokenStatResults()
12761+            for attrname in dir(s):
12762+                if attrname.startswith("_"):
12763+                    continue
12764+                if attrname == "st_blocks":
12765+                    continue
12766+                setattr(bsr, attrname, getattr(s, attrname))
12767+            return bsr
12768 
12769hunk ./src/allmydata/test/test_storage.py 3846
12770-        self.make_shares(ss)
12771-        ss.setServiceParent(self.s)
12772-        def _wait():
12773-            return bool(lc.get_state()["last-cycle-finished"] is not None)
12774-        d = self.poll(_wait)
12775+        def _cleanup(res):
12776+            os.stat = self.old_os_stat
12777+            return res
12778 
12779hunk ./src/allmydata/test/test_storage.py 3850
12780-        def _check(ignored):
12781-            s = lc.get_state()
12782-            last = s["history"][0]
12783-            rec = last["space-recovered"]
12784-            self.failUnlessEqual(rec["configured-buckets"], 4)
12785-            self.failUnlessEqual(rec["configured-shares"], 4)
12786-            self.failUnless(rec["configured-sharebytes"] > 0,
12787-                            rec["configured-sharebytes"])
12788-            # without the .st_blocks field in os.stat() results, we should be
12789-            # reporting diskbytes==sharebytes
12790-            self.failUnlessEqual(rec["configured-sharebytes"],
12791-                                 rec["configured-diskbytes"])
12792-        d.addCallback(_check)
12793-        return d
12794+        self.old_os_stat = os.stat
12795+        try:
12796+            os.stat = call_stat
12797+
12798+            basedir = "storage/LeaseCrawler/no_st_blocks"
12799+            fp = FilePath(basedir)
12800+            backend = DiskBackend(fp)
12801+
12802+            # A negative 'override_lease_duration' means that the "configured-"
12803+            # space-recovered counts will be non-zero, since all shares will have
12804+            # expired by then.
12805+            expiration_policy = {
12806+                'enabled': True,
12807+                'mode': 'age',
12808+                'override_lease_duration': -1000,
12809+                'sharetypes': ('mutable', 'immutable'),
12810+            }
12811+            ss = StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
12812+
12813+            # make it start sooner than usual.
12814+            lc = ss.lease_checker
12815+            lc.slow_start = 0
12816+
12817+            d = defer.succeed(None)
12818+            d.addCallback(lambda ign: self.make_shares(ss))
12819+            d.addCallback(lambda ign: ss.setServiceParent(self.s))
12820+            def _wait():
12821+                return bool(lc.get_state()["last-cycle-finished"] is not None)
12822+            d.addCallback(lambda ign: self.poll(_wait))
12823+
12824+            def _check(ignored):
12825+                s = lc.get_state()
12826+                last = s["history"][0]
12827+                rec = last["space-recovered"]
12828+                self.failUnlessEqual(rec["configured-buckets"], 4)
12829+                self.failUnlessEqual(rec["configured-shares"], 4)
12830+                self.failUnless(rec["configured-sharebytes"] > 0,
12831+                                rec["configured-sharebytes"])
12832+                # without the .st_blocks field in os.stat() results, we should be
12833+                # reporting diskbytes==sharebytes
12834+                self.failUnlessEqual(rec["configured-sharebytes"],
12835+                                     rec["configured-diskbytes"])
12836+            d.addCallback(_check)
12837+            d.addBoth(_cleanup)
12838+            return d
12839+        except Exception:
12840+            _cleanup(None); raise
12841 
12842     def test_share_corruption(self):
12843         self._poll_should_ignore_these_errors = [
12844}
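
The rewritten test patches the global os.stat instead of subclassing the lease-checking
crawler, restoring the original via an addBoth so that restoration happens only after
the asynchronous crawl finishes, with a synchronous except path for failures during
setup. A stripped-down sketch of that save/patch/restore-around-a-Deferred pattern (the
helper name is illustrative):

    import os
    from twisted.internet import defer

    def run_with_patched_stat(replacement, async_body):
        original = os.stat
        os.stat = replacement
        def _restore(res):
            os.stat = original
            return res
        try:
            d = defer.maybeDeferred(async_body)
            d.addBoth(_restore)
            return d
        except Exception:
            _restore(None)
            raise
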
12845[mutable/publish.py: resolve conflicting patches. refs #999
12846david-sarah@jacaranda.org**20110927073530
12847 Ignore-this: 6154a113723dc93148151288bd032439
12848] {
12849hunk ./src/allmydata/mutable/publish.py 6
12850 import os, time
12851 from StringIO import StringIO
12852 from itertools import count
12853-from copy import copy
12854 from zope.interface import implements
12855 from twisted.internet import defer
12856 from twisted.python import failure
12857hunk ./src/allmydata/mutable/publish.py 867
12858         ds = []
12859         verification_key = self._pubkey.serialize()
12860 
12861-
12862-        # TODO: Bad, since we remove from this same dict. We need to
12863-        # make a copy, or just use a non-iterated value.
12864-        for (shnum, writer) in self.writers.iteritems():
12865+        for (shnum, writer) in self.writers.copy().iteritems():
12866             writer.put_verification_key(verification_key)
12867             self.num_outstanding += 1
12868             def _no_longer_outstanding(res):
12869}
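
The resolved version keeps the .copy() workaround from the #393 fix: in Python 2,
removing entries from a dict while iterating over it raises RuntimeError, and writers
are removed from self.writers as their puts complete. A short demonstration of the
hazard and the fix:

    writers = {0: "w0", 1: "w1"}

    # Hazard: mutating the dict during iteritems() raises
    # "RuntimeError: dictionary changed size during iteration".
    try:
        for shnum, w in writers.iteritems():
            del writers[shnum]
    except RuntimeError:
        pass

    # Fix, as in publish.py: iterate over a snapshot instead.
    writers = {0: "w0", 1: "w1"}
    for shnum, w in writers.copy().iteritems():
        del writers[shnum]
    assert writers == {}
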
12870[Undo an incompatible change to RIStorageServer. refs #999
12871david-sarah@jacaranda.org**20110928013729
12872 Ignore-this: bea4c0f6cb71202fab942cd846eab693
12873] {
12874hunk ./src/allmydata/interfaces.py 168
12875 
12876     def slot_testv_and_readv_and_writev(storage_index=StorageIndex,
12877                                         secrets=TupleOf(WriteEnablerSecret,
12878-                                                        LeaseRenewSecret),
12879+                                                        LeaseRenewSecret,
12880+                                                        LeaseCancelSecret),
12881                                         tw_vectors=TestAndWriteVectorsForShares,
12882                                         r_vector=ReadVector,
12883                                         ):
12884hunk ./src/allmydata/interfaces.py 193
12885                              This secret is generated by the client and
12886                              stored for later comparison by the server. Each
12887                              server is given a different secret.
12888-        @param cancel_secret: ignored
12889+        @param cancel_secret: This no longer allows lease cancellation, but
12890+                              must still be a unique value identifying the
12891+                              lease. XXX stop relying on it to be unique.
12892 
12893         The 'secrets' argument is a tuple with (write_enabler, renew_secret).
12894         The write_enabler is required to perform any write. The renew_secret
12895hunk ./src/allmydata/storage/backends/base.py 96
12896         # def _create_mutable_share(self, storageserver, shnum, write_enabler):
12897         #     """create a mutable share with the given shnum and write_enabler"""
12898 
12899-        write_enabler = secrets[0]
12900-        renew_secret = secrets[1]
12901-        if len(secrets) > 2:
12902-            cancel_secret = secrets[2]
12903-        else:
12904-            cancel_secret = renew_secret
12905+        (write_enabler, renew_secret, cancel_secret) = secrets
12906 
12907         shares = {}
12908         for share in self.get_shares():
12909}
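
Restoring LeaseCancelSecret to the schema means every caller passes a full
(write_enabler, renew_secret, cancel_secret) triple again, so base.py can use a strict
tuple unpack in place of the length-checking fallback it replaces. The strict unpack
also catches malformed calls early; for illustration (placeholder secret values):

    secrets = ("\x01" * 32, "\x02" * 32)    # missing cancel_secret
    try:
        (write_enabler, renew_secret, cancel_secret) = secrets
    except ValueError:
        pass    # a two-element tuple is now rejected instead of defaulted
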
12910[test_system.py: incorrect arguments were being passed to the constructor for MutableDiskShare. refs #999
12911david-sarah@jacaranda.org**20110928013857
12912 Ignore-this: e9719f74e7e073e37537f9a71614b8a0
12913] {
12914hunk ./src/allmydata/test/test_system.py 7
12915 from twisted.trial import unittest
12916 from twisted.internet import defer
12917 from twisted.internet import threads # CLI tests use deferToThread
12918+from twisted.python.filepath import FilePath
12919 
12920 import allmydata
12921 from allmydata import uri
12922hunk ./src/allmydata/test/test_system.py 421
12923             self.fail("unable to find any share files in %s" % basedir)
12924         return shares
12925 
12926-    def _corrupt_mutable_share(self, filename, which):
12927-        msf = MutableDiskShare(filename)
12928+    def _corrupt_mutable_share(self, what, which):
12929+        (storageindex, filename, shnum) = what
12930+        msf = MutableDiskShare(storageindex, shnum, FilePath(filename))
12931         datav = msf.readv([ (0, 1000000) ])
12932         final_share = datav[0]
12933         assert len(final_share) < 1000000 # ought to be truncated
12934hunk ./src/allmydata/test/test_system.py 504
12935             output = out.getvalue()
12936             self.failUnlessEqual(rc, 0)
12937             try:
12938-                self.failUnless("Mutable slot found:\n" in output)
12939-                self.failUnless("share_type: SDMF\n" in output)
12940+                self.failUnlessIn("Mutable slot found:\n", output)
12941+                self.failUnlessIn("share_type: SDMF\n", output)
12942                 peerid = idlib.nodeid_b2a(self.clients[client_num].nodeid)
12943hunk ./src/allmydata/test/test_system.py 507
12944-                self.failUnless(" WE for nodeid: %s\n" % peerid in output)
12945-                self.failUnless(" num_extra_leases: 0\n" in output)
12946-                self.failUnless("  secrets are for nodeid: %s\n" % peerid
12947-                                in output)
12948-                self.failUnless(" SDMF contents:\n" in output)
12949-                self.failUnless("  seqnum: 1\n" in output)
12950-                self.failUnless("  required_shares: 3\n" in output)
12951-                self.failUnless("  total_shares: 10\n" in output)
12952-                self.failUnless("  segsize: 27\n" in output, (output, filename))
12953-                self.failUnless("  datalen: 25\n" in output)
12954+                self.failUnlessIn(" WE for nodeid: %s\n" % peerid, output)
12955+                self.failUnlessIn(" num_extra_leases: 0\n", output)
12956+                self.failUnlessIn("  secrets are for nodeid: %s\n" % peerid, output)
12957+                self.failUnlessIn(" SDMF contents:\n", output)
12958+                self.failUnlessIn("  seqnum: 1\n", output)
12959+                self.failUnlessIn("  required_shares: 3\n", output)
12960+                self.failUnlessIn("  total_shares: 10\n", output)
12961+                self.failUnlessIn("  segsize: 27\n", output)
12962+                self.failUnlessIn("  datalen: 25\n", output)
12963                 # the exact share_hash_chain nodes depends upon the sharenum,
12964                 # and is more of a hassle to compute than I want to deal with
12965                 # now
12966hunk ./src/allmydata/test/test_system.py 519
12967-                self.failUnless("  share_hash_chain: " in output)
12968-                self.failUnless("  block_hash_tree: 1 nodes\n" in output)
12969+                self.failUnlessIn("  share_hash_chain: ", output)
12970+                self.failUnlessIn("  block_hash_tree: 1 nodes\n", output)
12971                 expected = ("  verify-cap: URI:SSK-Verifier:%s:" %
12972                             base32.b2a(storage_index))
12973                 self.failUnless(expected in output)
12974hunk ./src/allmydata/test/test_system.py 596
12975             shares = self._find_all_shares(self.basedir)
12976             ## sort by share number
12977             #shares.sort( lambda a,b: cmp(a[3], b[3]) )
12978-            where = dict([ (shnum, filename)
12979-                           for (client_num, storage_index, filename, shnum)
12980+            where = dict([ (shnum, (storageindex, filename, shnum))
12981+                           for (client_num, storageindex, filename, shnum)
12982                            in shares ])
12983             assert len(where) == 10 # this test is designed for 3-of-10
12984hunk ./src/allmydata/test/test_system.py 600
12985-            for shnum, filename in where.items():
12986+            for shnum, what in where.items():
12987                 # shares 7,8,9 are left alone. read will check
12988                 # (share_hash_chain, block_hash_tree, share_data). New
12989                 # seqnum+R pairs will trigger a check of (seqnum, R, IV,
12990hunk ./src/allmydata/test/test_system.py 608
12991                 if shnum == 0:
12992                     # read: this will trigger "pubkey doesn't match
12993                     # fingerprint".
12994-                    self._corrupt_mutable_share(filename, "pubkey")
12995-                    self._corrupt_mutable_share(filename, "encprivkey")
12996+                    self._corrupt_mutable_share(what, "pubkey")
12997+                    self._corrupt_mutable_share(what, "encprivkey")
12998                 elif shnum == 1:
12999                     # triggers "signature is invalid"
13000hunk ./src/allmydata/test/test_system.py 612
13001-                    self._corrupt_mutable_share(filename, "seqnum")
13002+                    self._corrupt_mutable_share(what, "seqnum")
13003                 elif shnum == 2:
13004                     # triggers "signature is invalid"
13005hunk ./src/allmydata/test/test_system.py 615
13006-                    self._corrupt_mutable_share(filename, "R")
13007+                    self._corrupt_mutable_share(what, "R")
13008                 elif shnum == 3:
13009                     # triggers "signature is invalid"
13010hunk ./src/allmydata/test/test_system.py 618
13011-                    self._corrupt_mutable_share(filename, "segsize")
13012+                    self._corrupt_mutable_share(what, "segsize")
13013                 elif shnum == 4:
13014hunk ./src/allmydata/test/test_system.py 620
13015-                    self._corrupt_mutable_share(filename, "share_hash_chain")
13016+                    self._corrupt_mutable_share(what, "share_hash_chain")
13017                 elif shnum == 5:
13018hunk ./src/allmydata/test/test_system.py 622
13019-                    self._corrupt_mutable_share(filename, "block_hash_tree")
13020+                    self._corrupt_mutable_share(what, "block_hash_tree")
13021                 elif shnum == 6:
13022hunk ./src/allmydata/test/test_system.py 624
13023-                    self._corrupt_mutable_share(filename, "share_data")
13024+                    self._corrupt_mutable_share(what, "share_data")
13025                 # other things to correct: IV, signature
13026                 # 7,8,9 are left alone
13027 
13028}
13029[test_system.py: more debug output for a failing check in test_filesystem. refs #999
13030david-sarah@jacaranda.org**20110928014019
13031 Ignore-this: e8bb77b8f7db12db7cd69efb6e0ed130
13032] hunk ./src/allmydata/test/test_system.py 1371
13033         self.failUnlessEqual(rc, 0)
13034         out.seek(0)
13035         descriptions = [sfn.strip() for sfn in out.readlines()]
13036-        self.failUnlessEqual(len(descriptions), 30)
13037+        self.failUnlessEqual(len(descriptions), 30, repr((cmd, descriptions)))
13038         matching = [line
13039                     for line in descriptions
13040                     if line.startswith("CHK %s " % storage_index_s)]
13041[scripts/debug.py: fix incorrect arguments to dump_immutable_share. refs #999
13042david-sarah@jacaranda.org**20110928014049
13043 Ignore-this: 1078ee3f06a2f36b29e0cf694d2851cd
13044] hunk ./src/allmydata/scripts/debug.py 52
13045         return dump_mutable_share(options, share)
13046     else:
13047         assert share.sharetype == "immutable", share.sharetype
13048-        return dump_immutable_share(options)
13049+        return dump_immutable_share(options, share)
13050 
13051 def dump_immutable_share(options, share):
13052     out = options.stdout
13053[mutable/publish.py: don't crash if there are no writers in _report_verinfo. refs #999
13054david-sarah@jacaranda.org**20110928014126
13055 Ignore-this: 9999c82bb3057f755a6e86baeafb8a39
13056] hunk ./src/allmydata/mutable/publish.py 885
13057 
13058 
13059     def _record_verinfo(self):
13060-        self.versioninfo = self.writers.values()[0].get_verinfo()
13061+        writers = self.writers.values()
13062+        if len(writers) > 0:
13063+            self.versioninfo = writers[0].get_verinfo()
13064 
13065 
13066     def _connection_problem(self, f, writer):
13067
13068Context:
13069
13070[test/test_runner.py: BinTahoe.test_path has rare nondeterministic failures; this patch probably fixes a problem where the actual cause of failure is masked by a string conversion error.
13071david-sarah@jacaranda.org**20110927225336
13072 Ignore-this: 6f1ad68004194cc9cea55ace3745e4af
13073]
13074[docs/configuration.rst: add section about the types of node, and clarify when setting web.port enables web-API service. fixes #1444
13075zooko@zooko.com**20110926203801
13076 Ignore-this: ab94d470c68e720101a7ff3c207a719e
13077]
13078[TAG allmydata-tahoe-1.9.0a2
13079warner@lothar.com**20110925234811
13080 Ignore-this: e9649c58f9c9017a7d55008938dba64f
13081]
13082Patch bundle hash:
13083b73ceded030348491c32592b283cc35f3a053481