Ticket #999: pluggable-backends-davidsarah-v3.darcs.patch

File pluggable-backends-davidsarah-v3.darcs.patch, 338.4 KB (added by davidsarah at 2011-09-19T20:33:29Z)

Bleeding edge pluggable backends code from David-Sarah. refs #999
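
The attachment is a darcs patch bundle; it would typically be applied to a darcs checkout of the 1.9alpha repository with "darcs apply pluggable-backends-davidsarah-v3.darcs.patch".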

2 patches for repository /home/davidsarah/tahoe/1.9alpha:

Sat Sep 17 03:00:04 BST 2011  david-sarah@jacaranda.org
  * Work-in-progress patch for pluggable backends. Still fails many tests. refs #999

Mon Sep 19 21:27:15 BST 2011  david-sarah@jacaranda.org
  * Bleeding edge pluggable backends code from David-Sarah. refs #999

New patches:

[Work-in-progress patch for pluggable backends. Still fails many tests. refs #999
david-sarah@jacaranda.org**20110917020004
 Ignore-this: b2a0d7c8e20037c690e0be02e81d37fe
] {
adddir ./src/allmydata/storage/backends
adddir ./src/allmydata/storage/backends/disk
move ./src/allmydata/storage/immutable.py ./src/allmydata/storage/backends/disk/immutable.py
move ./src/allmydata/storage/mutable.py ./src/allmydata/storage/backends/disk/mutable.py
adddir ./src/allmydata/storage/backends/null
hunk ./docs/garbage-collection.rst 177
     use this parameter to implement it.
 
     This key is only valid when age-based expiration is in use (i.e. when
-    ``expire.mode = age`` is used). It will be rejected if cutoff-date
+    ``expire.mode = age`` is used). It will be ignored if cutoff-date
     expiration is in use.
 
 ``expire.cutoff_date = (date string, required if mode=cutoff-date)``
hunk ./docs/garbage-collection.rst 196
     the last renewal time and the cutoff date.
 
     This key is only valid when cutoff-based expiration is in use (i.e. when
-    "expire.mode = cutoff-date"). It will be rejected if age-based expiration
+    "expire.mode = cutoff-date"). It will be ignored if age-based expiration
     is in use.
 
   expire.immutable = (boolean, optional)
hunk ./src/allmydata/client.py 245
             sharetypes.append("immutable")
         if self.get_config("storage", "expire.mutable", True, boolean=True):
             sharetypes.append("mutable")
-        expiration_sharetypes = tuple(sharetypes)
 
hunk ./src/allmydata/client.py 246
+        expiration_policy = {
+            'enabled': expire,
+            'mode': mode,
+            'override_lease_duration': o_l_d,
+            'cutoff_date': cutoff_date,
+            'sharetypes': tuple(sharetypes),
+        }
         ss = StorageServer(storedir, self.nodeid,
                            reserved_space=reserved,
                            discard_storage=discard,
hunk ./src/allmydata/client.py 258
                            readonly_storage=readonly,
                            stats_provider=self.stats_provider,
-                           expiration_enabled=expire,
-                           expiration_mode=mode,
-                           expiration_override_lease_duration=o_l_d,
-                           expiration_cutoff_date=cutoff_date,
-                           expiration_sharetypes=expiration_sharetypes)
+                           expiration_policy=expiration_policy)
         self.add_service(ss)
 
         d = self.when_tub_ready()
hunk ./src/allmydata/immutable/offloaded.py 306
         if os.path.exists(self._encoding_file):
             self.log("ciphertext already present, bypassing fetch",
                      level=log.UNUSUAL)
+            # XXX the following comment is probably stale, since
+            # LocalCiphertextReader.get_plaintext_hashtree_leaves does not exist.
+            #
             # we'll still need the plaintext hashes (when
             # LocalCiphertextReader.get_plaintext_hashtree_leaves() is
             # called), and currently the easiest way to get them is to ask
hunk ./src/allmydata/immutable/upload.py 765
             self._status.set_progress(1, progress)
         return cryptdata
 
-
     def get_plaintext_hashtree_leaves(self, first, last, num_segments):
hunk ./src/allmydata/immutable/upload.py 766
+        """OBSOLETE; Get the leaf nodes of a merkle hash tree over the
+        plaintext segments, i.e. get the tagged hashes of the given segments.
+        The segment size is expected to be generated by the
+        IEncryptedUploadable before any plaintext is read or ciphertext
+        produced, so that the segment hashes can be generated with only a
+        single pass.
+
+        This returns a Deferred that fires with a sequence of hashes, using:
+
+         tuple(segment_hashes[first:last])
+
+        'num_segments' is used to assert that the number of segments that the
+        IEncryptedUploadable handled matches the number of segments that the
+        encoder was expecting.
+
+        This method must not be called until the final byte has been read
+        from read_encrypted(). Once this method is called, read_encrypted()
+        can never be called again.
+        """
         # this is currently unused, but will live again when we fix #453
         if len(self._plaintext_segment_hashes) < num_segments:
             # close out the last one
hunk ./src/allmydata/immutable/upload.py 803
         return defer.succeed(tuple(self._plaintext_segment_hashes[first:last]))
 
     def get_plaintext_hash(self):
+        """OBSOLETE; Get the hash of the whole plaintext.
+
+        This returns a Deferred that fires with a tagged SHA-256 hash of the
+        whole plaintext, obtained from hashutil.plaintext_hash(data).
+        """
+        # this is currently unused, but will live again when we fix #453
         h = self._plaintext_hasher.digest()
         return defer.succeed(h)
 
hunk ./src/allmydata/interfaces.py 29
 Number = IntegerConstraint(8) # 2**(8*8) == 16EiB ~= 18e18 ~= 18 exabytes
 Offset = Number
 ReadSize = int # the 'int' constraint is 2**31 == 2Gib -- large files are processed in not-so-large increments
-WriteEnablerSecret = Hash # used to protect mutable bucket modifications
-LeaseRenewSecret = Hash # used to protect bucket lease renewal requests
-LeaseCancelSecret = Hash # used to protect bucket lease cancellation requests
+WriteEnablerSecret = Hash # used to protect mutable share modifications
+LeaseRenewSecret = Hash # used to protect lease renewal requests
+LeaseCancelSecret = Hash # used to protect lease cancellation requests
 
 class RIStubClient(RemoteInterface):
     """Each client publishes a service announcement for a dummy object called
hunk ./src/allmydata/interfaces.py 106
                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
                          allocated_size=Offset, canary=Referenceable):
         """
-        @param storage_index: the index of the bucket to be created or
+        @param storage_index: the index of the shareset to be created or
                               increfed.
         @param sharenums: these are the share numbers (probably between 0 and
                           99) that the sender is proposing to store on this
hunk ./src/allmydata/interfaces.py 111
                           server.
-        @param renew_secret: This is the secret used to protect bucket refresh
+        @param renew_secret: This is the secret used to protect lease renewal.
                              This secret is generated by the client and
                              stored for later comparison by the server. Each
                              server is given a different secret.
hunk ./src/allmydata/interfaces.py 115
-        @param cancel_secret: Like renew_secret, but protects bucket decref.
-        @param canary: If the canary is lost before close(), the bucket is
+        @param cancel_secret: ignored
+        @param canary: If the canary is lost before close(), the allocation is
                        deleted.
         @return: tuple of (alreadygot, allocated), where alreadygot is what we
                  already have and allocated is what we hereby agree to accept.
hunk ./src/allmydata/interfaces.py 129
                   renew_secret=LeaseRenewSecret,
                   cancel_secret=LeaseCancelSecret):
         """
-        Add a new lease on the given bucket. If the renew_secret matches an
+        Add a new lease on the given shareset. If the renew_secret matches an
         existing lease, that lease will be renewed instead. If there is no
hunk ./src/allmydata/interfaces.py 131
-        bucket for the given storage_index, return silently. (note that in
+        shareset for the given storage_index, return silently. (Note that in
         tahoe-1.3.0 and earlier, IndexError was raised if there was no
hunk ./src/allmydata/interfaces.py 133
-        bucket)
+        shareset.)
         """
         return Any() # returns None now, but future versions might change
 
hunk ./src/allmydata/interfaces.py 139
     def renew_lease(storage_index=StorageIndex, renew_secret=LeaseRenewSecret):
         """
-        Renew the lease on a given bucket, resetting the timer to 31 days.
-        Some networks will use this, some will not. If there is no bucket for
+        Renew the lease on a given shareset, resetting the timer to 31 days.
+        Some networks will use this, some will not. If there is no shareset for
         the given storage_index, IndexError will be raised.
 
         For mutable shares, if the given renew_secret does not match an
hunk ./src/allmydata/interfaces.py 146
         existing lease, IndexError will be raised with a note listing the
         server-nodeids on the existing leases, so leases on migrated shares
-        can be renewed or cancelled. For immutable shares, IndexError
-        (without the note) will be raised.
+        can be renewed. For immutable shares, IndexError (without the note)
+        will be raised.
         """
         return Any()
 
hunk ./src/allmydata/interfaces.py 154
     def get_buckets(storage_index=StorageIndex):
         return DictOf(int, RIBucketReader, maxKeys=MAX_BUCKETS)
 
-
-
     def slot_readv(storage_index=StorageIndex,
                    shares=ListOf(int), readv=ReadVector):
         """Read a vector from the numbered shares associated with the given
hunk ./src/allmydata/interfaces.py 163
 
     def slot_testv_and_readv_and_writev(storage_index=StorageIndex,
                                         secrets=TupleOf(WriteEnablerSecret,
-                                                        LeaseRenewSecret,
-                                                        LeaseCancelSecret),
+                                                        LeaseRenewSecret),
                                         tw_vectors=TestAndWriteVectorsForShares,
                                         r_vector=ReadVector,
                                         ):
hunk ./src/allmydata/interfaces.py 167
-        """General-purpose test-and-set operation for mutable slots. Perform
-        a bunch of comparisons against the existing shares. If they all pass,
-        then apply a bunch of write vectors to those shares. Then use the
-        read vectors to extract data from all the shares and return the data.
+        """
+        General-purpose atomic test-read-and-set operation for mutable slots.
+        Perform a bunch of comparisons against the existing shares. If they
+        all pass: use the read vectors to extract data from all the shares,
+        then apply a bunch of write vectors to those shares. Return the read
+        data, which does not include any modifications made by the writes.
 
         This method is, um, large. The goal is to allow clients to update all
         the shares associated with a mutable file in a single round trip.
hunk ./src/allmydata/interfaces.py 177
 
-        @param storage_index: the index of the bucket to be created or
+        @param storage_index: the index of the shareset to be created or
                               increfed.
         @param write_enabler: a secret that is stored along with the slot.
                               Writes are accepted from any caller who can
hunk ./src/allmydata/interfaces.py 183
                               present the matching secret. A different secret
                               should be used for each slot*server pair.
-        @param renew_secret: This is the secret used to protect bucket refresh
+        @param renew_secret: This is the secret used to protect lease renewal.
                              This secret is generated by the client and
                              stored for later comparison by the server. Each
                              server is given a different secret.
hunk ./src/allmydata/interfaces.py 187
-        @param cancel_secret: Like renew_secret, but protects bucket decref.
+        @param cancel_secret: ignored
 
hunk ./src/allmydata/interfaces.py 189
-        The 'secrets' argument is a tuple of (write_enabler, renew_secret,
-        cancel_secret). The first is required to perform any write. The
-        latter two are used when allocating new shares. To simply acquire a
-        new lease on existing shares, use an empty testv and an empty writev.
+        The 'secrets' argument is a tuple with (write_enabler, renew_secret).
+        The write_enabler is required to perform any write. The renew_secret
+        is used when allocating new shares.
 
         Each share can have a separate test vector (i.e. a list of
         comparisons to perform). If all vectors for all shares pass, then all
hunk ./src/allmydata/interfaces.py 280
         store that on disk.
         """
 
-class IStorageBucketWriter(Interface):
+
+class IStorageBackend(Interface):
     """
hunk ./src/allmydata/interfaces.py 283
-    Objects of this kind live on the client side.
+    Objects of this kind live on the server side and are used by the
+    storage server object.
     """
hunk ./src/allmydata/interfaces.py 286
-    def put_block(segmentnum=int, data=ShareData):
-        """@param data: For most segments, this data will be 'blocksize'
-        bytes in length. The last segment might be shorter.
-        @return: a Deferred that fires (with None) when the operation completes
+    def get_available_space():
+        """
+        Returns available space for share storage in bytes, or
+        None if this information is not available or if the available
+        space is unlimited.
+
+        If the backend is configured for read-only mode then this will
+        return 0.
+        """
+
+    def get_sharesets_for_prefix(prefix):
+        """
+        Generates IShareSet objects for all storage indices matching the
+        given prefix for which this backend holds shares.
+        """
+
+    def get_shareset(storageindex):
+        """
+        Get an IShareSet object for the given storage index.
+        """
+
+    def advise_corrupt_share(storageindex, sharetype, shnum, reason):
+        """
+        Clients who discover hash failures in shares that they have
+        downloaded from me will use this method to inform me about the
+        failures. I will record their concern so that my operator can
+        manually inspect the shares in question.
+
+        'sharetype' is either 'mutable' or 'immutable'. 'shnum' is the integer
+        share number. 'reason' is a human-readable explanation of the problem,
+        probably including some expected hash values and the computed ones
+        that did not match. Corruption advisories for mutable shares should
+        include a hash of the public key (the same value that appears in the
+        mutable-file verify-cap), since the current share format does not
+        store that on disk.
+
+        @param storageindex=str
+        @param sharetype=str
+        @param shnum=int
+        @param reason=str
+        """
+
+
+class IShareSet(Interface):
+    def get_storage_index():
+        """
+        Returns the storage index for this shareset.
+        """
+
+    def get_storage_index_string():
+        """
+        Returns the base32-encoded storage index for this shareset.
+        """
+
+    def get_overhead():
+        """
+        Returns the storage overhead, in bytes, of this shareset (exclusive
+        of the space used by its shares).
+        """
+
+    def get_shares():
+        """
+        Generates the IStoredShare objects held in this shareset.
+        """
+
+    def get_incoming_shnums():
+        """
+        Return a frozenset of the shnums (as ints) of incoming shares.
+        """
+
+    def make_bucket_writer(storageserver, shnum, max_space_per_bucket, lease_info, canary):
+        """
+        Create a bucket writer that can be used to write data to a given share.
+
+        @param storageserver=RIStorageServer
+        @param shnum=int: A share number in this shareset
+        @param max_space_per_bucket=int: The maximum space allocated for the
+                 share, in bytes
+        @param lease_info=LeaseInfo: The initial lease information
+        @param canary=Referenceable: If the canary is lost before close(), the
+                 bucket is deleted.
+        @return an IStorageBucketWriter for the given share
+        """
+
+    def make_bucket_reader(storageserver, share):
+        """
+        Create a bucket reader that can be used to read data from a given share.
+
+        @param storageserver=RIStorageServer
+        @param share=IStoredShare
+        @return an IStorageBucketReader for the given share
         """
 
hunk ./src/allmydata/interfaces.py 379
-    def put_plaintext_hashes(hashes=ListOf(Hash)):
+    def readv(wanted_shnums, read_vector):
         """
hunk ./src/allmydata/interfaces.py 381
+        Read a vector from the numbered shares in this shareset. An empty
+        wanted_shnums list means to return data from all known shares.
+
+        @param wanted_shnums=ListOf(int)
+        @param read_vector=ReadVector
+        @return DictOf(int, ReadData): shnum -> results, with one key per share
+        """
+
+    def testv_and_readv_and_writev(storageserver, secrets, test_and_write_vectors, read_vector, expiration_time):
+        """
+        General-purpose atomic test-read-and-set operation for mutable slots.
+        Perform a bunch of comparisons against the existing shares in this
+        shareset. If they all pass: use the read vectors to extract data from
+        all the shares, then apply a bunch of write vectors to those shares.
+        Return the read data, which does not include any modifications made by
+        the writes.
+
+        See the similar method in RIStorageServer for more detail.
+
+        @param storageserver=RIStorageServer
+        @param secrets=TupleOf(WriteEnablerSecret, LeaseRenewSecret[, ...])
+        @param test_and_write_vectors=TestAndWriteVectorsForShares
+        @param read_vector=ReadVector
+        @param expiration_time=int
+        @return TupleOf(bool, DictOf(int, ReadData))
+        """
+
+    def add_or_renew_lease(lease_info):
+        """
+        Add a new lease on the shares in this shareset. If the renew_secret
+        matches an existing lease, that lease will be renewed instead. If
+        there are no shares in this shareset, return silently. (Note that
+        in Tahoe-LAFS v1.3.0 and earlier, IndexError was raised if there were
+        no shares with this shareset's storage index.)
+
+        @param lease_info=LeaseInfo
+        """
+
+    def renew_lease(renew_secret, new_expiration_time):
+        """
+        Renew a lease on the shares in this shareset, resetting the timer
+        to 31 days. Some grids will use this, some will not. If there are no
+        shares in this shareset, IndexError will be raised.
+
+        For mutable shares, if the given renew_secret does not match an
+        existing lease, IndexError will be raised with a note listing the
+        server-nodeids on the existing leases, so leases on migrated shares
+        can be renewed. For immutable shares, IndexError (without the note)
+        will be raised.
+
+        @param renew_secret=LeaseRenewSecret
+        """
+
+
+class IStoredShare(Interface):
+    """
+    This object contains as much as all of the share data.  It is intended
+    for lazy evaluation, such that in many use cases substantially less than
+    all of the share data will be accessed.
+    """
+    def close():
+        """
+        Complete writing to this share.
+        """
+
+    def get_storage_index():
+        """
+        Returns the storage index.
+        """
+
+    def get_shnum():
+        """
+        Returns the share number.
+        """
+
+    def get_data_length():
+        """
+        Returns the data length in bytes.
+        """
+
+    def get_size():
+        """
+        Returns the size of the share in bytes.
+        """
+
+    def get_used_space():
+        """
+        Returns the amount of backend storage including overhead, in bytes, used
+        by this share.
+        """
+
+    def unlink():
+        """
+        Signal that this share can be removed from the backend storage. This does
+        not guarantee that the share data will be immediately inaccessible, or
+        that it will be securely erased.
+        """
+
+    def readv(read_vector):
+        """
+        XXX
+        """
+
+
+class IStoredMutableShare(IStoredShare):
+    def check_write_enabler(write_enabler, si_s):
+        """
+        XXX
+        """
+
+    def check_testv(test_vector):
+        """
+        XXX
+        """
+
+    def writev(datav, new_length):
+        """
+        XXX
+        """
+
+
+class IStorageBucketWriter(Interface):
+    """
+    Objects of this kind live on the client side.
+    """
+    def put_block(segmentnum, data):
+        """
+        @param segmentnum=int
+        @param data=ShareData: For most segments, this data will be 'blocksize'
+        bytes in length. The last segment might be shorter.
         @return: a Deferred that fires (with None) when the operation completes
         """
 
hunk ./src/allmydata/interfaces.py 514
-    def put_crypttext_hashes(hashes=ListOf(Hash)):
+    def put_crypttext_hashes(hashes):
         """
hunk ./src/allmydata/interfaces.py 516
+        @param hashes=ListOf(Hash)
         @return: a Deferred that fires (with None) when the operation completes
         """
 
hunk ./src/allmydata/interfaces.py 520
-    def put_block_hashes(blockhashes=ListOf(Hash)):
+    def put_block_hashes(blockhashes):
         """
hunk ./src/allmydata/interfaces.py 522
+        @param blockhashes=ListOf(Hash)
         @return: a Deferred that fires (with None) when the operation completes
         """
 
hunk ./src/allmydata/interfaces.py 526
-    def put_share_hashes(sharehashes=ListOf(TupleOf(int, Hash))):
+    def put_share_hashes(sharehashes):
         """
hunk ./src/allmydata/interfaces.py 528
+        @param sharehashes=ListOf(TupleOf(int, Hash))
         @return: a Deferred that fires (with None) when the operation completes
         """
 
hunk ./src/allmydata/interfaces.py 532
-    def put_uri_extension(data=URIExtensionData):
+    def put_uri_extension(data):
         """This block of data contains integrity-checking information (hashes
         of plaintext, crypttext, and shares), as well as encoding parameters
         that are necessary to recover the data. This is a serialized dict
hunk ./src/allmydata/interfaces.py 537
         mapping strings to other strings. The hash of this data is kept in
-        the URI and verified before any of the data is used. All buckets for
-        a given file contain identical copies of this data.
+        the URI and verified before any of the data is used. All share
+        containers for a given file contain identical copies of this data.
 
         The serialization format is specified with the following pseudocode:
         for k in sorted(dict.keys()):
hunk ./src/allmydata/interfaces.py 545
             assert re.match(r'^[a-zA-Z_\-]+$', k)
             write(k + ':' + netstring(dict[k]))
 
+        @param data=URIExtensionData
         @return: a Deferred that fires (with None) when the operation completes
         """
 
hunk ./src/allmydata/interfaces.py 560
 
 class IStorageBucketReader(Interface):
 
-    def get_block_data(blocknum=int, blocksize=int, size=int):
+    def get_block_data(blocknum, blocksize, size):
         """Most blocks will be the same size. The last block might be shorter
         than the others.
 
hunk ./src/allmydata/interfaces.py 564
+        @param blocknum=int
+        @param blocksize=int
+        @param size=int
         @return: ShareData
         """
 
hunk ./src/allmydata/interfaces.py 575
         @return: ListOf(Hash)
         """
 
-    def get_block_hashes(at_least_these=SetOf(int)):
+    def get_block_hashes(at_least_these=()):
         """
hunk ./src/allmydata/interfaces.py 577
+        @param at_least_these=SetOf(int)
         @return: ListOf(Hash)
         """
 
hunk ./src/allmydata/interfaces.py 581
-    def get_share_hashes(at_least_these=SetOf(int)):
+    def get_share_hashes():
         """
         @return: ListOf(TupleOf(int, Hash))
         """
hunk ./src/allmydata/interfaces.py 613
         @return: unicode nickname, or None
         """
 
-    # methods moved from IntroducerClient, need review
-    def get_all_connections():
-        """Return a frozenset of (nodeid, service_name, rref) tuples, one for
-        each active connection we've established to a remote service. This is
-        mostly useful for unit tests that need to wait until a certain number
-        of connections have been made."""
-
-    def get_all_connectors():
-        """Return a dict that maps from (nodeid, service_name) to a
-        RemoteServiceConnector instance for all services that we are actively
-        trying to connect to. Each RemoteServiceConnector has the following
-        public attributes::
-
-          service_name: the type of service provided, like 'storage'
-          announcement_time: when we first heard about this service
-          last_connect_time: when we last established a connection
-          last_loss_time: when we last lost a connection
-
-          version: the peer's version, from the most recent connection
-          oldest_supported: the peer's oldest supported version, same
-
-          rref: the RemoteReference, if connected, otherwise None
-          remote_host: the IAddress, if connected, otherwise None
-
-        This method is intended for monitoring interfaces, such as a web page
-        that describes connecting and connected peers.
-        """
-
-    def get_all_peerids():
-        """Return a frozenset of all peerids to whom we have a connection (to
-        one or more services) established. Mostly useful for unit tests."""
-
-    def get_all_connections_for(service_name):
-        """Return a frozenset of (nodeid, service_name, rref) tuples, one
-        for each active connection that provides the given SERVICE_NAME."""
-
-    def get_permuted_peers(service_name, key):
-        """Returns an ordered list of (peerid, rref) tuples, selecting from
-        the connections that provide SERVICE_NAME, using a hash-based
-        permutation keyed by KEY. This randomizes the service list in a
-        repeatable way, to distribute load over many peers.
-        """
-
 
 class IMutableSlotWriter(Interface):
     """
hunk ./src/allmydata/interfaces.py 618
     The interface for a writer around a mutable slot on a remote server.
     """
-    def set_checkstring(checkstring, checkstring_or_seqnum, root_hash=None, salt=None):
+    def set_checkstring(seqnum_or_checkstring, root_hash=None, salt=None):
         """
         Set the checkstring that I will pass to the remote server when
         writing.
hunk ./src/allmydata/interfaces.py 642
         Add a block and salt to the share.
         """
 
-    def put_encprivey(encprivkey):
+    def put_encprivkey(encprivkey):
         """
         Add the encrypted private key to the share.
         """
hunk ./src/allmydata/interfaces.py 881
         writer-visible data using this writekey.
         """
 
-    # TODO: Can this be overwrite instead of replace?
-    def replace(new_contents):
-        """Replace the contents of the mutable file, provided that no other
+    def overwrite(new_contents):
+        """Overwrite the contents of the mutable file, provided that no other
         node has published (or is attempting to publish, concurrently) a
         newer version of the file than this one.
 
hunk ./src/allmydata/interfaces.py 1348
         is empty, the metadata will be an empty dictionary.
         """
 
-    def set_uri(name, writecap, readcap=None, metadata=None, overwrite=True):
+    def set_uri(name, writecap, readcap, metadata=None, overwrite=True):
         """I add a child (by writecap+readcap) at the specific name. I return
         a Deferred that fires when the operation finishes. If overwrite= is
         True, I will replace any existing child of the same name, otherwise
hunk ./src/allmydata/interfaces.py 1747
     Block Hash, and the encoding parameters, both of which must be included
     in the URI.
 
-    I do not choose shareholders, that is left to the IUploader. I must be
-    given a dict of RemoteReferences to storage buckets that are ready and
-    willing to receive data.
+    I do not choose shareholders, that is left to the IUploader.
     """
 
     def set_size(size):
hunk ./src/allmydata/interfaces.py 1754
         """Specify the number of bytes that will be encoded. This must be
         peformed before get_serialized_params() can be called.
         """
+
     def set_params(params):
         """Override the default encoding parameters. 'params' is a tuple of
         (k,d,n), where 'k' is the number of required shares, 'd' is the
hunk ./src/allmydata/interfaces.py 1850
     download, validate, decode, and decrypt data from them, writing the
     results to an output file.
 
-    I do not locate the shareholders, that is left to the IDownloader. I must
-    be given a dict of RemoteReferences to storage buckets that are ready to
-    send data.
+    I do not locate the shareholders, that is left to the IDownloader.
     """
 
     def setup(outfile):
hunk ./src/allmydata/interfaces.py 1952
         resuming an interrupted upload (where we need to compute the
         plaintext hashes, but don't need the redundant encrypted data)."""
 
-    def get_plaintext_hashtree_leaves(first, last, num_segments):
-        """OBSOLETE; Get the leaf nodes of a merkle hash tree over the
-        plaintext segments, i.e. get the tagged hashes of the given segments.
-        The segment size is expected to be generated by the
-        IEncryptedUploadable before any plaintext is read or ciphertext
-        produced, so that the segment hashes can be generated with only a
-        single pass.
-
-        This returns a Deferred that fires with a sequence of hashes, using:
-
-         tuple(segment_hashes[first:last])
-
-        'num_segments' is used to assert that the number of segments that the
-        IEncryptedUploadable handled matches the number of segments that the
-        encoder was expecting.
-
-        This method must not be called until the final byte has been read
-        from read_encrypted(). Once this method is called, read_encrypted()
-        can never be called again.
-        """
-
-    def get_plaintext_hash():
-        """OBSOLETE; Get the hash of the whole plaintext.
-
-        This returns a Deferred that fires with a tagged SHA-256 hash of the
-        whole plaintext, obtained from hashutil.plaintext_hash(data).
-        """
-
     def close():
         """Just like IUploadable.close()."""
 
hunk ./src/allmydata/interfaces.py 2579
     Tahoe process will typically have a single NodeMaker, but unit tests may
     create simplified/mocked forms for testing purposes.
     """
-    def create_from_cap(writecap, readcap=None, **kwargs):
+    def create_from_cap(writecap, readcap=None, deep_immutable=False, name=u"<unknown name>"):
         """I create an IFilesystemNode from the given writecap/readcap. I can
         only provide nodes for existing file/directory objects: use my other
         methods to create new objects. I return synchronously."""
hunk ./src/allmydata/mutable/filenode.py 753
         self._writekey = writekey
         self._serializer = defer.succeed(None)
 
-
     def get_sequence_number(self):
         """
         Get the sequence number of the mutable version that I represent.
hunk ./src/allmydata/mutable/filenode.py 759
         """
         return self._version[0] # verinfo[0] == the sequence number
 
+    def get_servermap(self):
+        return self._servermap
+
hunk ./src/allmydata/mutable/filenode.py 762
-    # TODO: Terminology?
     def get_writekey(self):
         """
         I return a writekey or None if I don't have a writekey.
hunk ./src/allmydata/mutable/filenode.py 768
         """
         return self._writekey
 
-
    def set_downloader_hints(self, hints):
         """
         I set the downloader hints.
hunk ./src/allmydata/mutable/filenode.py 776
 
         self._downloader_hints = hints
 
-
     def get_downloader_hints(self):
         """
         I return the downloader hints.
hunk ./src/allmydata/mutable/filenode.py 782
         """
         return self._downloader_hints
 
-
     def overwrite(self, new_contents):
         """
         I overwrite the contents of this mutable file version with the
hunk ./src/allmydata/mutable/filenode.py 791
 
         return self._do_serialized(self._overwrite, new_contents)
 
-
     def _overwrite(self, new_contents):
         assert IMutableUploadable.providedBy(new_contents)
         assert self._servermap.last_update_mode == MODE_WRITE
hunk ./src/allmydata/mutable/filenode.py 797
 
         return self._upload(new_contents)
 
-
     def modify(self, modifier, backoffer=None):
         """I use a modifier callback to apply a change to the mutable file.
         I implement the following pseudocode::
hunk ./src/allmydata/mutable/filenode.py 841
 
         return self._do_serialized(self._modify, modifier, backoffer)
 
-
     def _modify(self, modifier, backoffer):
         if backoffer is None:
             backoffer = BackoffAgent().delay
hunk ./src/allmydata/mutable/filenode.py 846
         return self._modify_and_retry(modifier, backoffer, True)
 
-
     def _modify_and_retry(self, modifier, backoffer, first_time):
         """
         I try to apply modifier to the contents of this version of the
hunk ./src/allmydata/mutable/filenode.py 878
         d.addErrback(_retry)
         return d
 
-
     def _modify_once(self, modifier, first_time):
         """
         I attempt to apply a modifier to the contents of the mutable
hunk ./src/allmydata/mutable/filenode.py 913
         d.addCallback(_apply)
         return d
 
-
     def is_readonly(self):
         """
         I return True if this MutableFileVersion provides no write
hunk ./src/allmydata/mutable/filenode.py 921
         """
         return self._writekey is None
 
-
     def is_mutable(self):
         """
         I return True, since mutable files are always mutable by
hunk ./src/allmydata/mutable/filenode.py 928
         """
         return True
 
-
     def get_storage_index(self):
         """
         I return the storage index of the reference that I encapsulate.
hunk ./src/allmydata/mutable/filenode.py 934
         """
         return self._storage_index
 
-
     def get_size(self):
         """
         I return the length, in bytes, of this readable object.
hunk ./src/allmydata/mutable/filenode.py 940
         """
         return self._servermap.size_of_version(self._version)
 
-
     def download_to_data(self, fetch_privkey=False):
         """
         I return a Deferred that fires with the contents of this
hunk ./src/allmydata/mutable/filenode.py 951
         d.addCallback(lambda mc: "".join(mc.chunks))
         return d
 
-
     def _try_to_download_data(self):
         """
         I am an unserialized cousin of download_to_data; I am called
hunk ./src/allmydata/mutable/filenode.py 963
         d.addCallback(lambda mc: "".join(mc.chunks))
         return d
 
-
     def read(self, consumer, offset=0, size=None, fetch_privkey=False):
         """
         I read a portion (possibly all) of the mutable file that I
hunk ./src/allmydata/mutable/filenode.py 971
         return self._do_serialized(self._read, consumer, offset, size,
                                    fetch_privkey)
 
-
     def _read(self, consumer, offset=0, size=None, fetch_privkey=False):
         """
         I am the serialized companion of read.
hunk ./src/allmydata/mutable/filenode.py 981
         d = r.download(consumer, offset, size)
         return d
 
-
     def _do_serialized(self, cb, *args, **kwargs):
         # note: to avoid deadlock, this callable is *not* allowed to invoke
         # other serialized methods within this (or any other)
hunk ./src/allmydata/mutable/filenode.py 999
         self._serializer.addErrback(log.err)
         return d
 
-
     def _upload(self, new_contents):
         #assert self._pubkey, "update_servermap must be called before publish"
         p = Publish(self._node, self._storage_broker, self._servermap)
hunk ./src/allmydata/mutable/filenode.py 1009
         d.addCallback(self._did_upload, new_contents.get_size())
         return d
 
-
     def _did_upload(self, res, size):
         self._most_recent_size = size
         return res
hunk ./src/allmydata/mutable/filenode.py 1029
         """
         return self._do_serialized(self._update, data, offset)
 
-
     def _update(self, data, offset):
         """
         I update the mutable file version represented by this particular
hunk ./src/allmydata/mutable/filenode.py 1058
         d.addCallback(self._build_uploadable_and_finish, data, offset)
         return d
 
-
     def _do_modify_update(self, data, offset):
         """
         I perform a file update by modifying the contents of the file
hunk ./src/allmydata/mutable/filenode.py 1073
             return new
         return self._modify(m, None)
 
-
     def _do_update_update(self, data, offset):
         """
         I start the Servermap update that gets us the data we need to
hunk ./src/allmydata/mutable/filenode.py 1108
         return self._update_servermap(update_range=(start_segment,
                                                     end_segment))
 
-
     def _decode_and_decrypt_segments(self, ignored, data, offset):
         """
         After the servermap update, I take the encrypted and encoded
hunk ./src/allmydata/mutable/filenode.py 1148
         d3 = defer.succeed(blockhashes)
         return deferredutil.gatherResults([d1, d2, d3])
 
-
     def _build_uploadable_and_finish(self, segments_and_bht, data, offset):
         """
         After the process has the plaintext segments, I build the
hunk ./src/allmydata/mutable/filenode.py 1163
         p = Publish(self._node, self._storage_broker, self._servermap)
         return p.update(u, offset, segments_and_bht[2], self._version)
 
-
     def _update_servermap(self, mode=MODE_WRITE, update_range=None):
         """
         I update the servermap. I return a Deferred that fires when the
addfile ./src/allmydata/storage/backends/__init__.py
addfile ./src/allmydata/storage/backends/base.py
hunk ./src/allmydata/storage/backends/base.py 1
+
+from twisted.application import service
+
+from allmydata.storage.common import si_b2a
+from allmydata.storage.lease import LeaseInfo
+from allmydata.storage.bucket import BucketReader
+
+
+class Backend(service.MultiService):
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+
+class ShareSet(object):
+    """
+    This class implements shareset logic that could work for all backends, but
+    might be useful to override for efficiency.
+    """
+
+    def __init__(self, storageindex):
+        self.storageindex = storageindex
+
+    def get_storage_index(self):
+        return self.storageindex
+
+    def get_storage_index_string(self):
+        return si_b2a(self.storageindex)
+
+    def renew_lease(self, renew_secret, new_expiration_time):
+        found_buckets = False
+        for share in self.get_shares():
+            found_buckets = True
+            share.renew_lease(renew_secret, new_expiration_time)
+
+        if not found_buckets:
+            raise IndexError("no such lease to renew")
+
+    def get_leases(self):
+        # Since all shares get the same lease data, we just grab the leases
+        # from the first share.
+        try:
+            sf = self.get_shares().next()
+            return sf.get_leases()
+        except StopIteration:
+            return iter([])
+
+    def add_or_renew_lease(self, lease_info):
+        # This implementation assumes that lease data is duplicated in
+        # all shares of a shareset, which might not be true for all backends.
+        for share in self.get_shares():
+            share.add_or_renew_lease(lease_info)
+
+    def make_bucket_reader(self, storageserver, share):
+        return BucketReader(storageserver, share)
+
+    def testv_and_readv_and_writev(self, storageserver, secrets,
+                                   test_and_write_vectors, read_vector,
+                                   expiration_time):
+        # The implementation here depends on the following helper methods,
+        # which must be provided by subclasses:
+        #
+        # def _clean_up_after_unlink(self):
+        #     """clean up resources associated with the shareset after some
+        #     shares might have been deleted"""
+        #
+        # def _create_mutable_share(self, storageserver, shnum, write_enabler):
+        #     """create a mutable share with the given shnum and write_enabler"""
+
+        # This previously had to be a triple with cancel_secret in secrets[2],
+        # but we now allow the cancel_secret to be omitted.
+        write_enabler = secrets[0]
+        renew_secret = secrets[1]
+
+        si_s = self.get_storage_index_string()
+        shares = {}
+        for share in self.get_shares():
+            # XXX is ignoring immutable shares correct? Maybe get_shares should
+            # have a parameter saying what type it's expecting.
+            if share.sharetype == "mutable":
+                share.check_write_enabler(write_enabler, si_s)
+                shares[share.get_shnum()] = share
+
+        # write_enabler is good for all existing shares.
+
+        # Now evaluate test vectors.
+        testv_is_good = True
+        for sharenum in test_and_write_vectors:
+            (testv, datav, new_length) = test_and_write_vectors[sharenum]
+            if sharenum in shares:
+                if not shares[sharenum].check_testv(testv):
+                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
+                    testv_is_good = False
+                    break
+            else:
+                # compare the vectors against an empty share, in which all
+                # reads return empty strings.
+                if not EmptyShare().check_testv(testv):
+                    self.log("testv failed (empty): [%d] %r" % (sharenum,
+                                                                testv))
+                    testv_is_good = False
+                    break
+
+        # now gather the read vectors, before we do any writes
+        read_data = {}
+        for shnum, share in shares.items():
+            read_data[shnum] = share.readv(read_vector)
+
+        ownerid = 1 # TODO
+        lease_info = LeaseInfo(ownerid, renew_secret,
+                               expiration_time, storageserver.get_serverid())
+
+        if testv_is_good:
+            # now apply the write vectors
+            for shnum in test_and_write_vectors:
+                (testv, datav, new_length) = test_and_write_vectors[shnum]
+                if new_length == 0:
+                    if shnum in shares:
+                        shares[shnum].unlink()
+                else:
+                    if shnum not in shares:
+                        # allocate a new share
+                        share = self._create_mutable_share(storageserver, shnum, write_enabler)
+                        shares[shnum] = share
+                    shares[shnum].writev(datav, new_length)
+                    # and update the lease
+                    shares[shnum].add_or_renew_lease(lease_info)
+
+            if new_length == 0:
+                self._clean_up_after_unlink()
+
+        return (testv_is_good, read_data)
+
+    def readv(self, wanted_shnums, read_vector):
+        """
+        Read a vector from the numbered shares in this shareset. An empty
+        shares list means to return data from all known shares.
+
+        @param wanted_shnums=ListOf(int)
+        @param read_vector=ReadVector
+        @return DictOf(int, ReadData): shnum -> results, with one key per share
+        """
+        datavs = {}
+        for share in self.get_shares():
+            # XXX is ignoring immutable shares correct? Maybe get_shares should
+            # have a parameter saying what type it's expecting.
+            shnum = share.get_shnum()
+            if share.sharetype == "mutable" and (not wanted_shnums or shnum in wanted_shnums):
+                datavs[shnum] = share.readv(read_vector)
+
+        return datavs
+
+
+def testv_compare(a, op, b):
+    assert op in ("lt", "le", "eq", "ne", "ge", "gt")
+    if op == "lt":
+        return a < b
+    if op == "le":
+        return a <= b
+    if op == "eq":
+        return a == b
+    if op == "ne":
+        return a != b
+    if op == "ge":
+        return a >= b
+    if op == "gt":
+        return a > b
+    # never reached
+
+
+class EmptyShare:
+    def check_testv(self, testv):
+        test_good = True
+        for (offset, length, operator, specimen) in testv:
+            data = ""
+            if not testv_compare(data, operator, specimen):
+                test_good = False
+                break
+        return test_good
+
1152addfile ./src/allmydata/storage/backends/disk/__init__.py
1153addfile ./src/allmydata/storage/backends/disk/disk_backend.py
1154hunk ./src/allmydata/storage/backends/disk/disk_backend.py 1
1155+
1156+import re
1157+
1158+from twisted.python.filepath import FilePath, UnlistableError
1159+
1160+from zope.interface import implements
1161+from allmydata.interfaces import IStorageBackend, IShareSet
1162+from allmydata.util import fileutil, log, time_format
1163+from allmydata.util.assertutil import precondition
1164+from allmydata.storage.common import si_b2a, si_a2b
1165+from allmydata.storage.bucket import BucketWriter
1166+from allmydata.storage.backends.base import Backend, ShareSet
1167+from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
1168+from allmydata.storage.backends.disk.mutable import MutableDiskShare, create_mutable_disk_share
1169+
1170+# storage/
1171+# storage/shares/incoming
1172+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1173+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
1174+# storage/shares/$START/$STORAGEINDEX
1175+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
1176+
1177+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
1178+# base-32 chars).
1179+# $SHARENUM matches this regex:
1180+NUM_RE=re.compile("^[0-9]+$")
1181+
1182+
1183+def si_si2dir(startfp, storageindex):
1184+    sia = si_b2a(storageindex)
1185+    newfp = startfp.child(sia[:2])
1186+    return newfp.child(sia)
1187+
1188+
1189+def get_share(fp):
1190+    f = fp.open('rb')
1191+    try:
1192+        prefix = f.read(32)
1193+    finally:
1194+        f.close()
1195+
1196+    if prefix == MutableDiskShare.MAGIC:
1197+        return MutableDiskShare(fp)
1198+    else:
1199+        # assume it's immutable
1200+        return ImmutableDiskShare(fp)
1201+
1202+
1203+class DiskBackend(Backend):
1204+    implements(IStorageBackend)
1205+
1206+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1207+        Backend.__init__(self)
1208+        self._setup_storage(storedir, readonly, reserved_space)
1209+        self._setup_corruption_advisory()
1210+
1211+    def _setup_storage(self, storedir, readonly, reserved_space):
1212+        precondition(isinstance(storedir, FilePath), storedir, FilePath)
1213+        self.storedir = storedir
1214+        self.readonly = readonly
1215+        self.reserved_space = int(reserved_space)
1216+        self.sharedir = self.storedir.child("shares")
1217+        fileutil.fp_make_dirs(self.sharedir)
1218+        self.incomingdir = self.sharedir.child('incoming')
1219+        self._clean_incomplete()
1220+        if self.reserved_space and (self.get_available_space() is None):
1221+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1222+                    umid="0wZ27w", level=log.UNUSUAL)
1223+
1224+    def _clean_incomplete(self):
1225+        fileutil.fp_remove(self.incomingdir)
1226+        fileutil.fp_make_dirs(self.incomingdir)
1227+
1228+    def _setup_corruption_advisory(self):
1229+        # we don't actually create the corruption-advisory dir until necessary
1230+        self.corruption_advisory_dir = self.storedir.child("corruption-advisories")
1231+
1232+    def _make_shareset(self, sharehomedir):
1233+        return self.get_shareset(si_a2b(sharehomedir.basename()))
1234+
1235+    def get_sharesets_for_prefix(self, prefix):
1236+        prefixfp = self.sharedir.child(prefix)
1237+        try:
1238+            sharesets = map(self._make_shareset, prefixfp.children())
1239+            def _by_base32si(b):
1240+                return b.get_storage_index_string()
1241+            sharesets.sort(key=_by_base32si)
1242+        except EnvironmentError:
1243+            sharesets = []
1244+        return sharesets
1245+
1246+    def get_shareset(self, storageindex):
1247+        sharehomedir = si_si2dir(self.sharedir, storageindex)
1248+        incominghomedir = si_si2dir(self.incomingdir, storageindex)
1249+        return DiskShareSet(storageindex, sharehomedir, incominghomedir)
1250+
1251+    def fill_in_space_stats(self, stats):
1252+        try:
1253+            disk = fileutil.get_disk_stats(self.sharedir, self.reserved_space)
1254+            writeable = disk['avail'] > 0
1255+
1256+            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
1257+            stats['storage_server.disk_total'] = disk['total']
1258+            stats['storage_server.disk_used'] = disk['used']
1259+            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
1260+            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
1261+            stats['storage_server.disk_avail'] = disk['avail']
1262+        except AttributeError:
1263+            writeable = True
1264+        except EnvironmentError:
1265+            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
1266+            writeable = False
1267+
1268+        if self.readonly_storage:
1269+            stats['storage_server.disk_avail'] = 0
1270+            writeable = False
1271+
1272+        stats['storage_server.accepting_immutable_shares'] = int(writeable)
1273+
1274+    def get_available_space(self):
1275+        if self.readonly:
1276+            return 0
1277+        return fileutil.get_available_space(self.sharedir, self.reserved_space)
1278+
1279+    #def set_storage_server(self, ss):
1280+    #    self.ss = ss
1281+
1282+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
1283+        fileutil.fp_make_dirs(self.corruption_advisory_dir)
1284+        now = time_format.iso_utc(sep="T")
1285+        si_s = si_b2a(storageindex)
1286+
1287+        # Windows can't handle colons in the filename.
1288+        name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "")
1289+        f = self.corruption_advisory_dir.child(name).open("w")
1290+        try:
1291+            f.write("report: Share Corruption\n")
1292+            f.write("type: %s\n" % sharetype)
1293+            f.write("storage_index: %s\n" % si_s)
1294+            f.write("share_number: %d\n" % shnum)
1295+            f.write("\n")
1296+            f.write(reason)
1297+            f.write("\n")
1298+        finally:
1299+            f.close()
1300+
1301+        log.msg(format=("client claims corruption in (%(share_type)s) " +
1302+                        "%(si)s-%(shnum)d: %(reason)s"),
1303+                share_type=sharetype, si=si_s, shnum=shnum, reason=reason,
1304+                level=log.SCARY, umid="SGx2fA")
1305+
1306+
1307+class DiskShareSet(ShareSet):
1308+    implements(IShareSet)
1309+
1310+    def __init__(self, storageindex, sharehomedir, incominghomedir=None):
1311+        ShareSet.__init__(storageindex)
1312+        self._sharehomedir = sharehomedir
1313+        self._incominghomedir = incominghomedir
1314+
1315+    def get_overhead(self):
1316+        return (fileutil.get_disk_usage(self._sharehomedir) +
1317+                fileutil.get_disk_usage(self._incominghomedir))
1318+
1319+    def get_shares(self):
1320+        """
1321+        Generate IStorageBackendShare objects for shares we have for this storage index.
1322+        ("Shares we have" means completed ones, excluding incoming ones.)
1323+        """
1324+        try:
1325+            for fp in self._sharehomedir.children():
1326+                shnumstr = fp.basename()
1327+                if not NUM_RE.match(shnumstr):
1328+                    continue
1330+                yield self.get_share(fp)
1331+        except UnlistableError:
1332+            # There is no shares directory at all.
1333+            pass
1334+
1335+    def get_incoming_shnums(self):
1336+        """
1337+        Return a frozenset of the shnums (as ints) of incoming shares.
1338+        """
1339+        if self._incominghomedir is None:
1340+            return frozenset()
1341+        try:
1342+            childfps = [fp for fp in self._incominghomedir.children() if NUM_RE.match(fp.basename())]
1343+            shnums = [int(fp.basename()) for fp in childfps]
1344+            return frozenset(shnums)
1345+        except UnlistableError:
1346+            # There is no incoming directory at all.
1347+            return frozenset()
1348+
1349+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
1350+        sharehome = self._sharehomedir.child(str(shnum))
1351+        incominghome = self._incominghomedir.child(str(shnum))
1352+        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
1353+                                   max_size=max_space_per_bucket, create=True)
1354+        bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
1355+        return bw
1356+
1357+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
1358+        fileutil.fp_make_dirs(self._sharehomedir)
1359+        sharehome = self._sharehomedir.child(str(shnum))
1360+        nodeid = storageserver.get_nodeid()
1361+        return create_mutable_disk_share(self.get_storage_index(), shnum, sharehome,
1362+                                         nodeid, write_enabler, storageserver)
1362+
1363+    def _clean_up_after_unlink(self):
1364+        fileutil.fp_rmdir_if_empty(self._sharehomedir)
1365+
1366hunk ./src/allmydata/storage/backends/disk/immutable.py 1
1367-import os, stat, struct, time
1368 
1369hunk ./src/allmydata/storage/backends/disk/immutable.py 2
1370-from foolscap.api import Referenceable
1371+import struct
1372 
1373 from zope.interface import implements
1374hunk ./src/allmydata/storage/backends/disk/immutable.py 5
1375-from allmydata.interfaces import RIBucketWriter, RIBucketReader
1376-from allmydata.util import base32, fileutil, log
1377+
1378+from allmydata.interfaces import IStoredShare
1379+from allmydata.util import fileutil
1380 from allmydata.util.assertutil import precondition
1381hunk ./src/allmydata/storage/backends/disk/immutable.py 9
1382+from allmydata.util.fileutil import fp_make_dirs
1383 from allmydata.util.hashutil import constant_time_compare
1384hunk ./src/allmydata/storage/backends/disk/immutable.py 11
1385+from allmydata.util.encodingutil import quote_filepath
1386+from allmydata.storage.common import si_b2a, UnknownImmutableContainerVersionError, DataTooLargeError
1387 from allmydata.storage.lease import LeaseInfo
1388hunk ./src/allmydata/storage/backends/disk/immutable.py 14
1389-from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1390-     DataTooLargeError
1391+
1392 
1393 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
1394 # and share data. The share data is accessed by RIBucketWriter.write and
1395hunk ./src/allmydata/storage/backends/disk/immutable.py 41
1396 # then the value stored in this field will be the actual share data length
1397 # modulo 2**32.
1398 
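As a standalone illustration of this container format (not part of the patch): the first twelve bytes are three big-endian unsigned longs holding the version, the capped share-data length (unused since v1.3.0), and the lease count:

    import struct

    # Build and re-parse a v1 immutable container header.
    max_size = 4096
    header = struct.pack(">LLL", 1, min(2**32 - 1, max_size), 0)
    (version, data_length, num_leases) = struct.unpack(">LLL", header)
    assert (version, data_length, num_leases) == (1, 4096, 0)
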
1399-class ShareFile:
1400-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1401+class ImmutableDiskShare(object):
1402+    implements(IStoredShare)
1403+
1404     sharetype = "immutable"
1405hunk ./src/allmydata/storage/backends/disk/immutable.py 45
1406+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1407+
1408 
1409hunk ./src/allmydata/storage/backends/disk/immutable.py 48
1410-    def __init__(self, filename, max_size=None, create=False):
1411-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
1412+    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
1413+        """ If max_size is not None then I won't allow more than
1414+        max_size to be written to me. If create=True then max_size
1415+        must not be None. """
1416         precondition((max_size is not None) or (not create), max_size, create)
1417hunk ./src/allmydata/storage/backends/disk/immutable.py 53
1418-        self.home = filename
1419+        self._storageindex = storageindex
1420         self._max_size = max_size
1421hunk ./src/allmydata/storage/backends/disk/immutable.py 55
1422+        self._incominghome = incominghome
1423+        self._home = finalhome
1424+        self._shnum = shnum
1425         if create:
1426             # touch the file, so later callers will see that we're working on
1427             # it. Also construct the metadata.
1428hunk ./src/allmydata/storage/backends/disk/immutable.py 61
1429-            assert not os.path.exists(self.home)
1430-            fileutil.make_dirs(os.path.dirname(self.home))
1431-            f = open(self.home, 'wb')
1432+            assert not finalhome.exists()
1433+            fp_make_dirs(self._incominghome.parent())
1434             # The second field -- the four-byte share data length -- is no
1435             # longer used as of Tahoe v1.3.0, but we continue to write it in
1436             # there in case someone downgrades a storage server from >=
1437hunk ./src/allmydata/storage/backends/disk/immutable.py 72
1438             # the largest length that can fit into the field. That way, even
1439             # if this does happen, the old < v1.3.0 server will still allow
1440             # clients to read the first part of the share.
1441-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1442-            f.close()
1443+            self._incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1444             self._lease_offset = max_size + 0x0c
1445             self._num_leases = 0
1446         else:
1447hunk ./src/allmydata/storage/backends/disk/immutable.py 76
1448-            f = open(self.home, 'rb')
1449-            filesize = os.path.getsize(self.home)
1450-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1451-            f.close()
1452+            f = self._home.open(mode='rb')
1453+            try:
1454+                (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1455+            finally:
1456+                f.close()
1457+            filesize = self._home.getsize()
1458             if version != 1:
1459                 msg = "sharefile %s had version %d but we wanted 1" % \
1460hunk ./src/allmydata/storage/backends/disk/immutable.py 84
1461-                      (filename, version)
1462+                      (self._home, version)
1463                 raise UnknownImmutableContainerVersionError(msg)
1464             self._num_leases = num_leases
1465             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1466hunk ./src/allmydata/storage/backends/disk/immutable.py 90
1467         self._data_offset = 0xc
1468 
1469+    def __repr__(self):
1470+        return ("<ImmutableDiskShare %s:%r at %s>"
1471+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
1472+
1473+    def close(self):
1474+        fileutil.fp_make_dirs(self._home.parent())
1475+        self._incominghome.moveTo(self._home)
1476+        try:
1477+            # self._incominghome is like storage/shares/incoming/ab/abcde/4 .
1478+            # We try to delete the parent (.../ab/abcde) to avoid leaving
1479+            # these directories lying around forever, but the delete might
1480+            # fail if we're working on another share for the same storage
1481+            # index (like ab/abcde/5). The alternative approach would be to
1482+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
1483+            # ShareWriter), each of which is responsible for a single
1484+            # directory on disk, and have them use reference counting of
1485+            # their children to know when they should do the rmdir. This
1486+            # approach is simpler, but relies on os.rmdir refusing to delete
1487+            # a non-empty directory. Do *not* use fileutil.fp_remove() here!
1488+            fileutil.fp_rmdir_if_empty(self._incominghome.parent())
1489+            # we also delete the grandparent (prefix) directory, .../ab ,
1490+            # again to avoid leaving directories lying around. This might
1491+            # fail if there is another bucket open that shares a prefix (like
1492+            # ab/abfff).
1493+            fileutil.fp_rmdir_if_empty(self._incominghome.parent().parent())
1494+            # we leave the great-grandparent (incoming/) directory in place.
1495+        except EnvironmentError:
1496+            # ignore the "can't rmdir because the directory is not empty"
1497+            # exceptions, those are normal consequences of the
1498+            # above-mentioned conditions.
1499+            pass
1501+
1502+    def get_used_space(self):
1503+        return (fileutil.get_used_space(self._home) +
1504+                fileutil.get_used_space(self._incominghome))
1505+
1506+    def get_storage_index(self):
1507+        return self._storageindex
1508+
1509+    def get_shnum(self):
1510+        return self._shnum
1511+
1512     def unlink(self):
1513hunk ./src/allmydata/storage/backends/disk/immutable.py 134
1514-        os.unlink(self.home)
1515+        if self._incominghome is not None and self._incominghome.exists():
1516+            self._incominghome.remove()  # aborted upload, not yet moved to _home
1517+        else:
1518+            self._home.remove()
1516+
1517+    def get_size(self):
1518+        return self._home.getsize()
1519+
1520+    def get_data_length(self):
1521+        return self._lease_offset - self._data_offset
1522+
1523+    #def readv(self, read_vector):
1524+    #    ...
1525 
1526     def read_share_data(self, offset, length):
1527         precondition(offset >= 0)
1528hunk ./src/allmydata/storage/backends/disk/immutable.py 147
1529-        # reads beyond the end of the data are truncated. Reads that start
1530+
1531+        # Reads beyond the end of the data are truncated. Reads that start
1532         # beyond the end of the data return an empty string.
1533         seekpos = self._data_offset+offset
1534         actuallength = max(0, min(length, self._lease_offset-seekpos))
1535hunk ./src/allmydata/storage/backends/disk/immutable.py 154
1536         if actuallength == 0:
1537             return ""
1538-        f = open(self.home, 'rb')
1539-        f.seek(seekpos)
1540-        return f.read(actuallength)
1541+        f = self._home.open(mode='rb')
1542+        try:
1543+            f.seek(seekpos)
1544+            sharedata = f.read(actuallength)
1545+        finally:
1546+            f.close()
1547+        return sharedata
1548 
1549     def write_share_data(self, offset, data):
1550         length = len(data)
1551hunk ./src/allmydata/storage/backends/disk/immutable.py 167
1552         precondition(offset >= 0, offset)
1553         if self._max_size is not None and offset+length > self._max_size:
1554             raise DataTooLargeError(self._max_size, offset, length)
1555-        f = open(self.home, 'rb+')
1556-        real_offset = self._data_offset+offset
1557-        f.seek(real_offset)
1558-        assert f.tell() == real_offset
1559-        f.write(data)
1560-        f.close()
1561+        f = self._incominghome.open(mode='rb+')
1562+        try:
1563+            real_offset = self._data_offset+offset
1564+            f.seek(real_offset)
1565+            assert f.tell() == real_offset
1566+            f.write(data)
1567+        finally:
1568+            f.close()
1569 
1570     def _write_lease_record(self, f, lease_number, lease_info):
1571         offset = self._lease_offset + lease_number * self.LEASE_SIZE
1572hunk ./src/allmydata/storage/backends/disk/immutable.py 184
1573 
1574     def _read_num_leases(self, f):
1575         f.seek(0x08)
1576-        (num_leases,) = struct.unpack(">L", f.read(4))
1577+        data = f.read(4)
1578+        (num_leases,) = struct.unpack(">L", data)
1579         return num_leases
1580 
1581     def _write_num_leases(self, f, num_leases):
1582hunk ./src/allmydata/storage/backends/disk/immutable.py 195
1583     def _truncate_leases(self, f, num_leases):
1584         f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1585 
1586+    # These lease operations are intended for use by disk_backend.py.
1587+    # Other clients should not depend on the fact that the disk backend
1588+    # stores leases in share files.
1589+
1590     def get_leases(self):
1591         """Yields a LeaseInfo instance for all leases."""
1592hunk ./src/allmydata/storage/backends/disk/immutable.py 201
1593-        f = open(self.home, 'rb')
1594-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1595-        f.seek(self._lease_offset)
1596-        for i in range(num_leases):
1597-            data = f.read(self.LEASE_SIZE)
1598-            if data:
1599-                yield LeaseInfo().from_immutable_data(data)
1600+        f = self._home.open(mode='rb')
1601+        try:
1602+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1603+            f.seek(self._lease_offset)
1604+            for i in range(num_leases):
1605+                data = f.read(self.LEASE_SIZE)
1606+                if data:
1607+                    yield LeaseInfo().from_immutable_data(data)
1608+        finally:
1609+            f.close()
1610 
1611     def add_lease(self, lease_info):
1612hunk ./src/allmydata/storage/backends/disk/immutable.py 213
1613-        f = open(self.home, 'rb+')
1614-        num_leases = self._read_num_leases(f)
1615-        self._write_lease_record(f, num_leases, lease_info)
1616-        self._write_num_leases(f, num_leases+1)
1617-        f.close()
1618+        if self._home.exists():
1619+            f = self._home.open(mode='rb+')
1620+        else:
1621+            # the share is still at _incominghome until close() moves it
1622+            f = self._incominghome.open(mode='rb+')
1623+        try:
1624+            num_leases = self._read_num_leases(f)
1625+            self._write_lease_record(f, num_leases, lease_info)
1626+            self._write_num_leases(f, num_leases+1)
1627+        finally:
1628+            f.close()
1625 
1626     def renew_lease(self, renew_secret, new_expire_time):
1627hunk ./src/allmydata/storage/backends/disk/immutable.py 222
1628-        for i,lease in enumerate(self.get_leases()):
1629+        for i, lease in enumerate(self.get_leases()):
1630             if constant_time_compare(lease.renew_secret, renew_secret):
1631                 # yup. See if we need to update the owner time.
1632                 if new_expire_time > lease.expiration_time:
1633hunk ./src/allmydata/storage/backends/disk/immutable.py 228
1634                     # yes
1635                     lease.expiration_time = new_expire_time
1636-                    f = open(self.home, 'rb+')
1637-                    self._write_lease_record(f, i, lease)
1638-                    f.close()
1639+                    f = self._home.open('rb+')
1640+                    try:
1641+                        self._write_lease_record(f, i, lease)
1642+                    finally:
1643+                        f.close()
1644                 return
1645         raise IndexError("unable to renew non-existent lease")
1646 
1647hunk ./src/allmydata/storage/backends/disk/immutable.py 242
1648                              lease_info.expiration_time)
1649         except IndexError:
1650             self.add_lease(lease_info)
1651-
1652-
1653-    def cancel_lease(self, cancel_secret):
1654-        """Remove a lease with the given cancel_secret. If the last lease is
1655-        cancelled, the file will be removed. Return the number of bytes that
1656-        were freed (by truncating the list of leases, and possibly by
1657-        deleting the file. Raise IndexError if there was no lease with the
1658-        given cancel_secret.
1659-        """
1660-
1661-        leases = list(self.get_leases())
1662-        num_leases_removed = 0
1663-        for i,lease in enumerate(leases):
1664-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1665-                leases[i] = None
1666-                num_leases_removed += 1
1667-        if not num_leases_removed:
1668-            raise IndexError("unable to find matching lease to cancel")
1669-        if num_leases_removed:
1670-            # pack and write out the remaining leases. We write these out in
1671-            # the same order as they were added, so that if we crash while
1672-            # doing this, we won't lose any non-cancelled leases.
1673-            leases = [l for l in leases if l] # remove the cancelled leases
1674-            f = open(self.home, 'rb+')
1675-            for i,lease in enumerate(leases):
1676-                self._write_lease_record(f, i, lease)
1677-            self._write_num_leases(f, len(leases))
1678-            self._truncate_leases(f, len(leases))
1679-            f.close()
1680-        space_freed = self.LEASE_SIZE * num_leases_removed
1681-        if not len(leases):
1682-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1683-            self.unlink()
1684-        return space_freed
1685-
1686-
1687-class BucketWriter(Referenceable):
1688-    implements(RIBucketWriter)
1689-
1690-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1691-        self.ss = ss
1692-        self.incominghome = incominghome
1693-        self.finalhome = finalhome
1694-        self._max_size = max_size # don't allow the client to write more than this
1695-        self._canary = canary
1696-        self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1697-        self.closed = False
1698-        self.throw_out_all_data = False
1699-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1700-        # also, add our lease to the file now, so that other ones can be
1701-        # added by simultaneous uploaders
1702-        self._sharefile.add_lease(lease_info)
1703-
1704-    def allocated_size(self):
1705-        return self._max_size
1706-
1707-    def remote_write(self, offset, data):
1708-        start = time.time()
1709-        precondition(not self.closed)
1710-        if self.throw_out_all_data:
1711-            return
1712-        self._sharefile.write_share_data(offset, data)
1713-        self.ss.add_latency("write", time.time() - start)
1714-        self.ss.count("write")
1715-
1716-    def remote_close(self):
1717-        precondition(not self.closed)
1718-        start = time.time()
1719-
1720-        fileutil.make_dirs(os.path.dirname(self.finalhome))
1721-        fileutil.rename(self.incominghome, self.finalhome)
1722-        try:
1723-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
1724-            # We try to delete the parent (.../ab/abcde) to avoid leaving
1725-            # these directories lying around forever, but the delete might
1726-            # fail if we're working on another share for the same storage
1727-            # index (like ab/abcde/5). The alternative approach would be to
1728-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
1729-            # ShareWriter), each of which is responsible for a single
1730-            # directory on disk, and have them use reference counting of
1731-            # their children to know when they should do the rmdir. This
1732-            # approach is simpler, but relies on os.rmdir refusing to delete
1733-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
1734-            os.rmdir(os.path.dirname(self.incominghome))
1735-            # we also delete the grandparent (prefix) directory, .../ab ,
1736-            # again to avoid leaving directories lying around. This might
1737-            # fail if there is another bucket open that shares a prefix (like
1738-            # ab/abfff).
1739-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
1740-            # we leave the great-grandparent (incoming/) directory in place.
1741-        except EnvironmentError:
1742-            # ignore the "can't rmdir because the directory is not empty"
1743-            # exceptions, those are normal consequences of the
1744-            # above-mentioned conditions.
1745-            pass
1746-        self._sharefile = None
1747-        self.closed = True
1748-        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
1749-
1750-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
1751-        self.ss.bucket_writer_closed(self, filelen)
1752-        self.ss.add_latency("close", time.time() - start)
1753-        self.ss.count("close")
1754-
1755-    def _disconnected(self):
1756-        if not self.closed:
1757-            self._abort()
1758-
1759-    def remote_abort(self):
1760-        log.msg("storage: aborting sharefile %s" % self.incominghome,
1761-                facility="tahoe.storage", level=log.UNUSUAL)
1762-        if not self.closed:
1763-            self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
1764-        self._abort()
1765-        self.ss.count("abort")
1766-
1767-    def _abort(self):
1768-        if self.closed:
1769-            return
1770-
1771-        os.remove(self.incominghome)
1772-        # if we were the last share to be moved, remove the incoming/
1773-        # directory that was our parent
1774-        parentdir = os.path.split(self.incominghome)[0]
1775-        if not os.listdir(parentdir):
1776-            os.rmdir(parentdir)
1777-        self._sharefile = None
1778-
1779-        # We are now considered closed for further writing. We must tell
1780-        # the storage server about this so that it stops expecting us to
1781-        # use the space it allocated for us earlier.
1782-        self.closed = True
1783-        self.ss.bucket_writer_closed(self, 0)
1784-
1785-
1786-class BucketReader(Referenceable):
1787-    implements(RIBucketReader)
1788-
1789-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
1790-        self.ss = ss
1791-        self._share_file = ShareFile(sharefname)
1792-        self.storage_index = storage_index
1793-        self.shnum = shnum
1794-
1795-    def __repr__(self):
1796-        return "<%s %s %s>" % (self.__class__.__name__,
1797-                               base32.b2a_l(self.storage_index[:8], 60),
1798-                               self.shnum)
1799-
1800-    def remote_read(self, offset, length):
1801-        start = time.time()
1802-        data = self._share_file.read_share_data(offset, length)
1803-        self.ss.add_latency("read", time.time() - start)
1804-        self.ss.count("read")
1805-        return data
1806-
1807-    def remote_advise_corrupt_share(self, reason):
1808-        return self.ss.remote_advise_corrupt_share("immutable",
1809-                                                   self.storage_index,
1810-                                                   self.shnum,
1811-                                                   reason)
1812hunk ./src/allmydata/storage/backends/disk/mutable.py 1
1813-import os, stat, struct
1814 
1815hunk ./src/allmydata/storage/backends/disk/mutable.py 2
1816-from allmydata.interfaces import BadWriteEnablerError
1817-from allmydata.util import idlib, log
1818+import struct
1819+
1820+from zope.interface import implements
1821+
1822+from allmydata.interfaces import IStoredMutableShare, BadWriteEnablerError
1823+from allmydata.util import fileutil, idlib, log
1824 from allmydata.util.assertutil import precondition
1825 from allmydata.util.hashutil import constant_time_compare
1826hunk ./src/allmydata/storage/backends/disk/mutable.py 10
1827-from allmydata.storage.lease import LeaseInfo
1828-from allmydata.storage.common import UnknownMutableContainerVersionError, \
1829+from allmydata.util.encodingutil import quote_filepath
1830+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
1831      DataTooLargeError
1832hunk ./src/allmydata/storage/backends/disk/mutable.py 13
1833+from allmydata.storage.lease import LeaseInfo
1834+from allmydata.storage.backends.base import testv_compare
1835 
1836hunk ./src/allmydata/storage/backends/disk/mutable.py 16
1837-# the MutableShareFile is like the ShareFile, but used for mutable data. It
1838-# has a different layout. See docs/mutable.txt for more details.
1839+
1840+# The MutableDiskShare is like the ImmutableDiskShare, but used for mutable data.
1841+# It has a different layout. See docs/mutable.rst for more details.
1842 
1843 # #   offset    size    name
1844 # 1   0         32      magic verstr "tahoe mutable container v1" plus binary
1845hunk ./src/allmydata/storage/backends/disk/mutable.py 31
1846 #                        4    4   expiration timestamp
1847 #                        8   32   renewal token
1848 #                        40  32   cancel token
1849-#                        72  20   nodeid which accepted the tokens
1850+#                        72  20   nodeid that accepted the tokens
1851 # 7   468       (a)     data
1852 # 8   ??        4       count of extra leases
1853 # 9   ??        n*92    extra leases
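The offsets in this layout can be cross-checked with the struct module. This is an illustrative computation mirroring the class constants and assertions below; the lease format string is inferred from the 4+4+32+32+20 byte fields in the comment:

    import struct

    HEADER_SIZE = struct.calcsize(">32s20s32sQQ")  # magic, nodeid, write enabler, two offsets
    LEASE_SIZE = struct.calcsize(">LL32s32s20s")   # ownerid, expiration, renew, cancel, nodeid
    assert HEADER_SIZE == 100
    assert LEASE_SIZE == 92
    DATA_OFFSET = HEADER_SIZE + 4 * LEASE_SIZE     # four lease slots precede the data
    assert DATA_OFFSET == 468
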
1854hunk ./src/allmydata/storage/backends/disk/mutable.py 37
1855 
1856 
1857-# The struct module doc says that L's are 4 bytes in size., and that Q's are
1858+# The struct module doc says that L's are 4 bytes in size, and that Q's are
1859 # 8 bytes in size. Since compatibility depends upon this, double-check it.
1860 assert struct.calcsize(">L") == 4, struct.calcsize(">L")
1861 assert struct.calcsize(">Q") == 8, struct.calcsize(">Q")
1862hunk ./src/allmydata/storage/backends/disk/mutable.py 42
1863 
1864-class MutableShareFile:
1865+
1866+class MutableDiskShare(object):
1867+    implements(IStoredMutableShare)
1868 
1869     sharetype = "mutable"
1870     DATA_LENGTH_OFFSET = struct.calcsize(">32s20s32s")
1871hunk ./src/allmydata/storage/backends/disk/mutable.py 54
1872     assert LEASE_SIZE == 92
1873     DATA_OFFSET = HEADER_SIZE + 4*LEASE_SIZE
1874     assert DATA_OFFSET == 468, DATA_OFFSET
1875+
1876     # our sharefiles start with a recognizable string, plus some random
1877     # binary data to reduce the chance that a regular text file will look
1878     # like a sharefile.
1879hunk ./src/allmydata/storage/backends/disk/mutable.py 63
1880     MAX_SIZE = 2*1000*1000*1000 # 2GB, kind of arbitrary
1881     # TODO: decide upon a policy for max share size
1882 
1883-    def __init__(self, filename, parent=None):
1884-        self.home = filename
1885-        if os.path.exists(self.home):
1886+    def __init__(self, storageindex, shnum, home, parent=None):
1887+        self._storageindex = storageindex
1888+        self._shnum = shnum
1889+        self._home = home
1890+        if self._home.exists():
1891             # we don't cache anything, just check the magic
1892hunk ./src/allmydata/storage/backends/disk/mutable.py 69
1893-            f = open(self.home, 'rb')
1894-            data = f.read(self.HEADER_SIZE)
1895-            (magic,
1896-             write_enabler_nodeid, write_enabler,
1897-             data_length, extra_least_offset) = \
1898-             struct.unpack(">32s20s32sQQ", data)
1899-            if magic != self.MAGIC:
1900-                msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
1901-                      (filename, magic, self.MAGIC)
1902-                raise UnknownMutableContainerVersionError(msg)
1903+            f = self._home.open('rb')
1904+            try:
1905+                data = f.read(self.HEADER_SIZE)
1906+                (magic,
1907+                 write_enabler_nodeid, write_enabler,
1908+                 data_length, extra_lease_offset) = \
1909+                 struct.unpack(">32s20s32sQQ", data)
1910+                if magic != self.MAGIC:
1911+                    msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
1912+                          (quote_filepath(self._home), magic, self.MAGIC)
1913+                    raise UnknownMutableContainerVersionError(msg)
1914+            finally:
1915+                f.close()
1916         self.parent = parent # for logging
1917 
1918     def log(self, *args, **kwargs):
1919hunk ./src/allmydata/storage/backends/disk/mutable.py 88
1920         return self.parent.log(*args, **kwargs)
1921 
1922     def create(self, my_nodeid, write_enabler):
1923-        assert not os.path.exists(self.home)
1924+        assert not self._home.exists()
1925         data_length = 0
1926         extra_lease_offset = (self.HEADER_SIZE
1927                               + 4 * self.LEASE_SIZE
1928hunk ./src/allmydata/storage/backends/disk/mutable.py 95
1929                               + data_length)
1930         assert extra_lease_offset == self.DATA_OFFSET # true at creation
1931         num_extra_leases = 0
1932-        f = open(self.home, 'wb')
1933-        header = struct.pack(">32s20s32sQQ",
1934-                             self.MAGIC, my_nodeid, write_enabler,
1935-                             data_length, extra_lease_offset,
1936-                             )
1937-        leases = ("\x00"*self.LEASE_SIZE) * 4
1938-        f.write(header + leases)
1939-        # data goes here, empty after creation
1940-        f.write(struct.pack(">L", num_extra_leases))
1941-        # extra leases go here, none at creation
1942-        f.close()
1943+        f = self._home.open('wb')
1944+        try:
1945+            header = struct.pack(">32s20s32sQQ",
1946+                                 self.MAGIC, my_nodeid, write_enabler,
1947+                                 data_length, extra_lease_offset,
1948+                                 )
1949+            leases = ("\x00"*self.LEASE_SIZE) * 4
1950+            f.write(header + leases)
1951+            # data goes here, empty after creation
1952+            f.write(struct.pack(">L", num_extra_leases))
1953+            # extra leases go here, none at creation
1954+        finally:
1955+            f.close()
1956+
1957+    def __repr__(self):
1958+        return ("<MutableDiskShare %s:%r at %s>"
1959+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
1960+
1961+    def get_used_space(self):
1962+        return fileutil.get_used_space(self._home)
1963+
1964+    def get_storage_index(self):
1965+        return self._storageindex
1966+
1967+    def get_shnum(self):
1968+        return self._shnum
1969 
1970     def unlink(self):
1971hunk ./src/allmydata/storage/backends/disk/mutable.py 123
1972-        os.unlink(self.home)
1973+        self._home.remove()
1974 
1975     def _read_data_length(self, f):
1976         f.seek(self.DATA_LENGTH_OFFSET)
1977hunk ./src/allmydata/storage/backends/disk/mutable.py 291
1978 
1979     def get_leases(self):
1980         """Yields a LeaseInfo instance for all leases."""
1981-        f = open(self.home, 'rb')
1982-        for i, lease in self._enumerate_leases(f):
1983-            yield lease
1984-        f.close()
1985+        f = self._home.open('rb')
1986+        try:
1987+            for i, lease in self._enumerate_leases(f):
1988+                yield lease
1989+        finally:
1990+            f.close()
1991 
1992     def _enumerate_leases(self, f):
1993         for i in range(self._get_num_lease_slots(f)):
1994hunk ./src/allmydata/storage/backends/disk/mutable.py 303
1995             try:
1996                 data = self._read_lease_record(f, i)
1997                 if data is not None:
1998-                    yield i,data
1999+                    yield i, data
2000             except IndexError:
2001                 return
2002 
2003hunk ./src/allmydata/storage/backends/disk/mutable.py 307
2004+    # These lease operations are intended for use by disk_backend.py.
2005+    # Other non-test clients should not depend on the fact that the disk
2006+    # backend stores leases in share files.
2007+
2008     def add_lease(self, lease_info):
2009         precondition(lease_info.owner_num != 0) # 0 means "no lease here"
2010hunk ./src/allmydata/storage/backends/disk/mutable.py 313
2011-        f = open(self.home, 'rb+')
2012-        num_lease_slots = self._get_num_lease_slots(f)
2013-        empty_slot = self._get_first_empty_lease_slot(f)
2014-        if empty_slot is not None:
2015-            self._write_lease_record(f, empty_slot, lease_info)
2016-        else:
2017-            self._write_lease_record(f, num_lease_slots, lease_info)
2018-        f.close()
2019+        f = self._home.open('rb+')
2020+        try:
2021+            num_lease_slots = self._get_num_lease_slots(f)
2022+            empty_slot = self._get_first_empty_lease_slot(f)
2023+            if empty_slot is not None:
2024+                self._write_lease_record(f, empty_slot, lease_info)
2025+            else:
2026+                self._write_lease_record(f, num_lease_slots, lease_info)
2027+        finally:
2028+            f.close()
2029 
2030     def renew_lease(self, renew_secret, new_expire_time):
2031         accepting_nodeids = set()
2032hunk ./src/allmydata/storage/backends/disk/mutable.py 326
2033-        f = open(self.home, 'rb+')
2034-        for (leasenum,lease) in self._enumerate_leases(f):
2035-            if constant_time_compare(lease.renew_secret, renew_secret):
2036-                # yup. See if we need to update the owner time.
2037-                if new_expire_time > lease.expiration_time:
2038-                    # yes
2039-                    lease.expiration_time = new_expire_time
2040-                    self._write_lease_record(f, leasenum, lease)
2041-                f.close()
2042-                return
2043-            accepting_nodeids.add(lease.nodeid)
2044-        f.close()
2045+        f = self._home.open('rb+')
2046+        try:
2047+            for (leasenum, lease) in self._enumerate_leases(f):
2048+                if constant_time_compare(lease.renew_secret, renew_secret):
2049+                    # yup. See if we need to update the owner time.
2050+                    if new_expire_time > lease.expiration_time:
2051+                        # yes
2052+                        lease.expiration_time = new_expire_time
2053+                        self._write_lease_record(f, leasenum, lease)
2054+                    return
2055+                accepting_nodeids.add(lease.nodeid)
2056+        finally:
2057+            f.close()
2058         # Return the accepting_nodeids set, to give the client a chance to
2059hunk ./src/allmydata/storage/backends/disk/mutable.py 340
2060-        # update the leases on a share which has been migrated from its
2061+        # update the leases on a share that has been migrated from its
2062         # original server to a new one.
2063         msg = ("Unable to renew non-existent lease. I have leases accepted by"
2064                " nodeids: ")
2065hunk ./src/allmydata/storage/backends/disk/mutable.py 357
2066         except IndexError:
2067             self.add_lease(lease_info)
2068 
2069-    def cancel_lease(self, cancel_secret):
2070-        """Remove any leases with the given cancel_secret. If the last lease
2071-        is cancelled, the file will be removed. Return the number of bytes
2072-        that were freed (by truncating the list of leases, and possibly by
2073-        deleting the file. Raise IndexError if there was no lease with the
2074-        given cancel_secret."""
2075-
2076-        accepting_nodeids = set()
2077-        modified = 0
2078-        remaining = 0
2079-        blank_lease = LeaseInfo(owner_num=0,
2080-                                renew_secret="\x00"*32,
2081-                                cancel_secret="\x00"*32,
2082-                                expiration_time=0,
2083-                                nodeid="\x00"*20)
2084-        f = open(self.home, 'rb+')
2085-        for (leasenum,lease) in self._enumerate_leases(f):
2086-            accepting_nodeids.add(lease.nodeid)
2087-            if constant_time_compare(lease.cancel_secret, cancel_secret):
2088-                self._write_lease_record(f, leasenum, blank_lease)
2089-                modified += 1
2090-            else:
2091-                remaining += 1
2092-        if modified:
2093-            freed_space = self._pack_leases(f)
2094-            f.close()
2095-            if not remaining:
2096-                freed_space += os.stat(self.home)[stat.ST_SIZE]
2097-                self.unlink()
2098-            return freed_space
2099-
2100-        msg = ("Unable to cancel non-existent lease. I have leases "
2101-               "accepted by nodeids: ")
2102-        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
2103-                         for anid in accepting_nodeids])
2104-        msg += " ."
2105-        raise IndexError(msg)
2106-
2107-    def _pack_leases(self, f):
2108-        # TODO: reclaim space from cancelled leases
2109-        return 0
2110-
2111     def _read_write_enabler_and_nodeid(self, f):
2112         f.seek(0)
2113         data = f.read(self.HEADER_SIZE)
2114hunk ./src/allmydata/storage/backends/disk/mutable.py 369
2115 
2116     def readv(self, readv):
2117         datav = []
2118-        f = open(self.home, 'rb')
2119-        for (offset, length) in readv:
2120-            datav.append(self._read_share_data(f, offset, length))
2121-        f.close()
2122+        f = self._home.open('rb')
2123+        try:
2124+            for (offset, length) in readv:
2125+                datav.append(self._read_share_data(f, offset, length))
2126+        finally:
2127+            f.close()
2128         return datav
2129 
2130hunk ./src/allmydata/storage/backends/disk/mutable.py 377
2131-#    def remote_get_length(self):
2132-#        f = open(self.home, 'rb')
2133-#        data_length = self._read_data_length(f)
2134-#        f.close()
2135-#        return data_length
2136+    def get_size(self):
2137+        return self._home.getsize()
2138+
2139+    def get_data_length(self):
2140+        f = self._home.open('rb')
2141+        try:
2142+            data_length = self._read_data_length(f)
2143+        finally:
2144+            f.close()
2145+        return data_length
2146 
2147     def check_write_enabler(self, write_enabler, si_s):
2148hunk ./src/allmydata/storage/backends/disk/mutable.py 389
2149-        f = open(self.home, 'rb+')
2150-        (real_write_enabler, write_enabler_nodeid) = \
2151-                             self._read_write_enabler_and_nodeid(f)
2152-        f.close()
2153+        f = self._home.open('rb+')
2154+        try:
2155+            (real_write_enabler, write_enabler_nodeid) = self._read_write_enabler_and_nodeid(f)
2156+        finally:
2157+            f.close()
2158         # avoid a timing attack
2159         #if write_enabler != real_write_enabler:
2160         if not constant_time_compare(write_enabler, real_write_enabler):
2161hunk ./src/allmydata/storage/backends/disk/mutable.py 410
2162 
2163     def check_testv(self, testv):
2164         test_good = True
2165-        f = open(self.home, 'rb+')
2166-        for (offset, length, operator, specimen) in testv:
2167-            data = self._read_share_data(f, offset, length)
2168-            if not testv_compare(data, operator, specimen):
2169-                test_good = False
2170-                break
2171-        f.close()
2172+        f = self._home.open('rb+')
2173+        try:
2174+            for (offset, length, operator, specimen) in testv:
2175+                data = self._read_share_data(f, offset, length)
2176+                if not testv_compare(data, operator, specimen):
2177+                    test_good = False
2178+                    break
2179+        finally:
2180+            f.close()
2181         return test_good
2182 
2183     def writev(self, datav, new_length):
2184hunk ./src/allmydata/storage/backends/disk/mutable.py 422
2185-        f = open(self.home, 'rb+')
2186-        for (offset, data) in datav:
2187-            self._write_share_data(f, offset, data)
2188-        if new_length is not None:
2189-            cur_length = self._read_data_length(f)
2190-            if new_length < cur_length:
2191-                self._write_data_length(f, new_length)
2192-                # TODO: if we're going to shrink the share file when the
2193-                # share data has shrunk, then call
2194-                # self._change_container_size() here.
2195-        f.close()
2196-
2197-def testv_compare(a, op, b):
2198-    assert op in ("lt", "le", "eq", "ne", "ge", "gt")
2199-    if op == "lt":
2200-        return a < b
2201-    if op == "le":
2202-        return a <= b
2203-    if op == "eq":
2204-        return a == b
2205-    if op == "ne":
2206-        return a != b
2207-    if op == "ge":
2208-        return a >= b
2209-    if op == "gt":
2210-        return a > b
2211-    # never reached
2212+        f = self._home.open('rb+')
2213+        try:
2214+            for (offset, data) in datav:
2215+                self._write_share_data(f, offset, data)
2216+            if new_length is not None:
2217+                cur_length = self._read_data_length(f)
2218+                if new_length < cur_length:
2219+                    self._write_data_length(f, new_length)
2220+                    # TODO: if we're going to shrink the share file when the
2221+                    # share data has shrunk, then call
2222+                    # self._change_container_size() here.
2223+        finally:
2224+            f.close()
2225 
2226hunk ./src/allmydata/storage/backends/disk/mutable.py 436
2227-class EmptyShare:
2228+    def close(self):
2229+        pass
2230 
2231hunk ./src/allmydata/storage/backends/disk/mutable.py 439
2232-    def check_testv(self, testv):
2233-        test_good = True
2234-        for (offset, length, operator, specimen) in testv:
2235-            data = ""
2236-            if not testv_compare(data, operator, specimen):
2237-                test_good = False
2238-                break
2239-        return test_good
2240 
2241hunk ./src/allmydata/storage/backends/disk/mutable.py 440
2242-def create_mutable_sharefile(filename, my_nodeid, write_enabler, parent):
2243-    ms = MutableShareFile(filename, parent)
2244-    ms.create(my_nodeid, write_enabler)
2245+def create_mutable_disk_share(storageindex, shnum, home, nodeid, write_enabler, parent):
2246+    ms = MutableDiskShare(storageindex, shnum, home, parent)
2247+    ms.create(nodeid, write_enabler)
2248     del ms
2249hunk ./src/allmydata/storage/backends/disk/mutable.py 444
2250-    return MutableShareFile(filename, parent)
2251-
2252+    return MutableDiskShare(storageindex, shnum, home, parent)
2253addfile ./src/allmydata/storage/backends/null/__init__.py
2254addfile ./src/allmydata/storage/backends/null/null_backend.py
2255hunk ./src/allmydata/storage/backends/null/null_backend.py 2
2256 
2257+from zope.interface import implements
2258+
2259+from allmydata.interfaces import IStorageBackend, IShareSet, IStoredShare, IStoredMutableShare
2260+from allmydata.util.assertutil import precondition
2261+from allmydata.storage.backends.base import Backend, ShareSet
2262+from allmydata.storage.bucket import BucketWriter
2263+from allmydata.storage.common import si_b2a
2268+
2269+
2270+class NullBackend(Backend):
2271+    implements(IStorageBackend)
2272+
2273+    def __init__(self):
2274+        Backend.__init__(self)
2275+
2276+    def get_available_space(self):
2277+        return None
2278+
2279+    def get_sharesets_for_prefix(self, prefix):
2280+        return []
2281+
2282+    def get_shareset(self, storageindex):
2283+        return NullShareSet(storageindex)
2284+
2285+    def fill_in_space_stats(self, stats):
2286+        pass
2287+
2288+    def set_storage_server(self, ss):
2289+        self.ss = ss
2290+
2291+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
2292+        pass
2293+
2294+
2295+class NullShareSet(ShareSet):
2296+    implements(IShareSet)
2297+
2298+    def __init__(self, storageindex):
2299+        self.storageindex = storageindex
2300+
2301+    def get_overhead(self):
2302+        return 0
2303+
2304+    def get_incoming_shnums(self):
2305+        return frozenset()
2306+
2307+    def get_shares(self):
2308+        return []
2309+
2310+    def get_share(self, shnum):
2311+        return None
2312+
2313+    def get_storage_index(self):
2314+        return self.storageindex
2315+
2316+    def get_storage_index_string(self):
2317+        return si_b2a(self.storageindex)
2318+
2319+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
2320+        immutableshare = ImmutableNullShare(shnum)
2321+        return BucketWriter(storageserver, immutableshare, max_space_per_bucket, lease_info, canary)
2322+
2323+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
2324+        return MutableNullShare()
2325+
2326+    def _clean_up_after_unlink(self):
2327+        pass
2328+
2329+
2330+class ImmutableNullShare:
2331+    implements(IStoredShare)
2332+    sharetype = "immutable"
2333+
2334+    def __init__(self, shnum):
2335+        # The null backend discards all share data, so there is no
2336+        # container file; we only remember the share number.
2337+        self.shnum = shnum
2338+
2339+    def get_shnum(self):
2340+        return self.shnum
2341+
2342+    def get_size(self):
2343+        return 0
2344+
2345+    def get_data_length(self):
2346+        return 0
2347+
2348+    def unlink(self):
2349+        pass
2350+
2351+    def read_share_data(self, offset, length):
2352+        precondition(offset >= 0)
2353+        # Nothing was stored, so there is nothing to read.
2354+        return ""
2355+
2356+    def write_share_data(self, offset, data):
2357+        pass
2358+
2359+    def close(self):
2360+        pass
2361+
2362+    def get_leases(self):
2363+        return iter([])
2364+
2365+    def add_lease(self, lease_info):
2366+        pass
2367+
2368+    def add_or_renew_lease(self, lease_info):
2369+        pass
2370+
2371+    def renew_lease(self, renew_secret, new_expire_time):
2372+        raise IndexError("unable to renew non-existent lease")
2373+
2374+
2375+class MutableNullShare:
2376+    implements(IStoredMutableShare)
2377+    sharetype = "mutable"
2378+
2379+    # XXX: TODO
2419addfile ./src/allmydata/storage/bucket.py
2420hunk ./src/allmydata/storage/bucket.py 1
2421+
2422+import time
2423+
2424+from foolscap.api import Referenceable
2425+
2426+from zope.interface import implements
2427+from allmydata.interfaces import RIBucketWriter, RIBucketReader
2428+from allmydata.util import base32, log
2429+from allmydata.util.assertutil import precondition
2430+
2431+
2432+class BucketWriter(Referenceable):
2433+    implements(RIBucketWriter)
2434+
2435+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
2436+        self.ss = ss
2437+        self._max_size = max_size # don't allow the client to write more than this
2438+        self._canary = canary
2439+        self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
2440+        self.closed = False
2441+        self.throw_out_all_data = False
2442+        self._share = immutableshare
2443+        # also, add our lease to the file now, so that other ones can be
2444+        # added by simultaneous uploaders
2445+        self._share.add_lease(lease_info)
2446+
2447+    def allocated_size(self):
2448+        return self._max_size
2449+
2450+    def remote_write(self, offset, data):
2451+        start = time.time()
2452+        precondition(not self.closed)
2453+        if self.throw_out_all_data:
2454+            return
2455+        self._share.write_share_data(offset, data)
2456+        self.ss.add_latency("write", time.time() - start)
2457+        self.ss.count("write")
2458+
2459+    def remote_close(self):
2460+        precondition(not self.closed)
2461+        start = time.time()
2462+
2463+        self._share.close()
2464+        filelen = self._share.get_size()
2465+        self._share = None
2466+
2467+        self.closed = True
2468+        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2469+
2470+        self.ss.bucket_writer_closed(self, filelen)
2471+        self.ss.add_latency("close", time.time() - start)
2472+        self.ss.count("close")
2473+
2474+    def _disconnected(self):
2475+        if not self.closed:
2476+            self._abort()
2477+
2478+    def remote_abort(self):
2479+        log.msg("storage: aborting write to share %r" % self._share,
2480+                facility="tahoe.storage", level=log.UNUSUAL)
2481+        if not self.closed:
2482+            self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2483+        self._abort()
2484+        self.ss.count("abort")
2485+
2486+    def _abort(self):
2487+        if self.closed:
2488+            return
2489+        self._share.unlink()
2490+        self._share = None
2491+
2492+        # We are now considered closed for further writing. We must tell
2493+        # the storage server about this so that it stops expecting us to
2494+        # use the space it allocated for us earlier.
2495+        self.closed = True
2496+        self.ss.bucket_writer_closed(self, 0)
2497+
2498+
2499+class BucketReader(Referenceable):
2500+    implements(RIBucketReader)
2501+
2502+    def __init__(self, ss, share):
2503+        self.ss = ss
2504+        self._share = share
2505+        self.storageindex = share.get_storage_index()
2506+        self.shnum = share.get_shnum()
2507+
2508+    def __repr__(self):
2509+        return "<%s %s %s>" % (self.__class__.__name__,
2510+                               base32.b2a_l(self.storageindex[:8], 60),
2511+                               self.shnum)
2512+
2513+    def remote_read(self, offset, length):
2514+        start = time.time()
2515+        data = self._share.read_share_data(offset, length)
2516+        self.ss.add_latency("read", time.time() - start)
2517+        self.ss.count("read")
2518+        return data
2519+
2520+    def remote_advise_corrupt_share(self, reason):
2521+        return self.ss.remote_advise_corrupt_share("immutable",
2522+                                                   self.storageindex,
2523+                                                   self.shnum,
2524+                                                   reason)
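For context, here is the expected BucketWriter lifecycle, sketched as hypothetical driver code (the shareset, storage server, lease_info, and canary arguments are assumed to come from the surrounding storage-server machinery):

    def upload_one_share(shareset, storageserver, lease_info, canary, data):
        # Hypothetical helper, not part of this patch.
        bw = shareset.make_bucket_writer(storageserver, 0, len(data),
                                         lease_info, canary)
        bw.remote_write(0, data)  # may be called repeatedly at increasing offsets
        bw.remote_close()         # moves the share from incoming/ to its final home
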
2525hunk ./src/allmydata/storage/common.py 1
2526-
2527-import os.path
2528 from allmydata.util import base32
2529 
2530 class DataTooLargeError(Exception):
2531hunk ./src/allmydata/storage/common.py 5
2532     pass
2533+
2534 class UnknownMutableContainerVersionError(Exception):
2535     pass
2536hunk ./src/allmydata/storage/common.py 8
2537+
2538 class UnknownImmutableContainerVersionError(Exception):
2539     pass
2540 
2541hunk ./src/allmydata/storage/common.py 18
2542 
2543 def si_a2b(ascii_storageindex):
2544     return base32.a2b(ascii_storageindex)
2545-
2546-def storage_index_to_dir(storageindex):
2547-    sia = si_b2a(storageindex)
2548-    return os.path.join(sia[:2], sia)
2549hunk ./src/allmydata/storage/crawler.py 2
2550 
2551-import os, time, struct
2552+import time, struct
2553 import cPickle as pickle
2554 from twisted.internet import reactor
2555 from twisted.application import service
2556hunk ./src/allmydata/storage/crawler.py 7
2557 from allmydata.storage.common import si_b2a
2558-from allmydata.util import fileutil
2559+
2560 
2561 class TimeSliceExceeded(Exception):
2562     pass
2563hunk ./src/allmydata/storage/crawler.py 12
2564 
2565+
2566 class ShareCrawler(service.MultiService):
2567hunk ./src/allmydata/storage/crawler.py 14
2568-    """A ShareCrawler subclass is attached to a StorageServer, and
2569-    periodically walks all of its shares, processing each one in some
2570-    fashion. This crawl is rate-limited, to reduce the IO burden on the host,
2571-    since large servers can easily have a terabyte of shares, in several
2572-    million files, which can take hours or days to read.
2573+    """
2574+    An instance of a subclass of ShareCrawler is attached to a storage
2575+    backend, and periodically walks the backend's shares, processing them
2576+    in some fashion. This crawl is rate-limited to reduce the I/O burden on
2577+    the host, since large servers can easily have a terabyte of shares in
2578+    several million files, which can take hours or days to read.
2579 
2580     Once the crawler starts a cycle, it will proceed at a rate limited by the
2581     allowed_cpu_percentage= and cpu_slice= parameters: yielding the reactor
2582hunk ./src/allmydata/storage/crawler.py 30
2583     long enough to ensure that 'minimum_cycle_time' elapses between the start
2584     of two consecutive cycles.
2585 
2586-    We assume that the normal upload/download/get_buckets traffic of a tahoe
2587+    We assume that the normal upload/download/DYHB traffic of a Tahoe-LAFS
2588     grid will cause the prefixdir contents to be mostly cached in the kernel,
2589hunk ./src/allmydata/storage/crawler.py 32
2590-    or that the number of buckets in each prefixdir will be small enough to
2591-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
2592-    buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
2593+    or that the number of sharesets in each prefixdir will be small enough to
2594+    load quickly. A 1TB allmydata.com server was measured to have 2.56 million
2595+    sharesets, spread into the 1024 prefixdirs, with about 2500 sharesets per
2596     prefix. On this server, each prefixdir took 130ms-200ms to list the first
2597     time, and 17ms to list the second time.
2598 
2599hunk ./src/allmydata/storage/crawler.py 38
2600-    To use a crawler, create a subclass which implements the process_bucket()
2601-    method. It will be called with a prefixdir and a base32 storage index
2602-    string. process_bucket() must run synchronously. Any keys added to
2603-    self.state will be preserved. Override add_initial_state() to set up
2604-    initial state keys. Override finished_cycle() to perform additional
2605-    processing when the cycle is complete. Any status that the crawler
2606-    produces should be put in the self.state dictionary. Status renderers
2607-    (like a web page which describes the accomplishments of your crawler)
2608-    will use crawler.get_state() to retrieve this dictionary; they can
2609-    present the contents as they see fit.
2610+    To implement a crawler, create a subclass that implements the
2611+    process_shareset() method. It will be called with a prefixdir and an
2612+    object providing the IShareSet interface. process_shareset() must run
2613+    synchronously. Any keys added to self.state will be preserved. Override
2614+    add_initial_state() to set up initial state keys. Override
2615+    finished_cycle() to perform additional processing when the cycle is
2616+    complete. Any status that the crawler produces should be put in the
2617+    self.state dictionary. Status renderers (like a web page describing the
2618+    accomplishments of your crawler) will use crawler.get_state() to retrieve
2619+    this dictionary; they can present the contents as they see fit.
2620 
2621hunk ./src/allmydata/storage/crawler.py 49
2622-    Then create an instance, with a reference to a StorageServer and a
2623-    filename where it can store persistent state. The statefile is used to
2624-    keep track of how far around the ring the process has travelled, as well
2625-    as timing history to allow the pace to be predicted and controlled. The
2626-    statefile will be updated and written to disk after each time slice (just
2627-    before the crawler yields to the reactor), and also after each cycle is
2628-    finished, and also when stopService() is called. Note that this means
2629-    that a crawler which is interrupted with SIGKILL while it is in the
2630-    middle of a time slice will lose progress: the next time the node is
2631-    started, the crawler will repeat some unknown amount of work.
2632+    Then create an instance, with a reference to a backend object providing
2633+    the IStorageBackend interface, and a filename where it can store
2634+    persistent state. The statefile is used to keep track of how far around
2635+    the ring the process has travelled, as well as timing history to allow
2636+    the pace to be predicted and controlled. The statefile will be updated
2637+    and written to disk after each time slice (just before the crawler yields
2638+    to the reactor), and also after each cycle is finished, and also when
2639+    stopService() is called. Note that this means that a crawler that is
2640+    interrupted with SIGKILL while it is in the middle of a time slice will
2641+    lose progress: the next time the node is started, the crawler will repeat
2642+    some unknown amount of work.
2643 
2644     The crawler instance must be started with startService() before it will
2645hunk ./src/allmydata/storage/crawler.py 62
2646-    do any work. To make it stop doing work, call stopService().
2647+    do any work. To make it stop doing work, call stopService(). A crawler
2648+    is usually a child service of a StorageServer, although it should not
2649+    depend on that.
2650+
2651+    For historical reasons, some dictionary key names use the term "bucket"
2652+    for what is now preferably called a "shareset" (the set of shares that a
2653+    server holds under a given storage index).
2654     """
2655 
2656     slow_start = 300 # don't start crawling for 5 minutes after startup
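
For illustration, a minimal subclass might look like the following sketch,
assuming a `backend` object providing IStorageBackend and a `statefp`
FilePath (the class name and state key here are hypothetical)::

    class ShareCountingCrawler(ShareCrawler):
        """Count how many sharesets this crawler has visited."""
        def add_initial_state(self):
            self.state.setdefault("sharesets-seen", 0)

        def process_shareset(self, cycle, prefix, shareset):
            self.state["sharesets-seen"] += 1

    crawler = ShareCountingCrawler(backend, statefp)
    crawler.setServiceParent(storage_server)  # begins crawling after slow_start seconds
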
2657hunk ./src/allmydata/storage/crawler.py 77
2658     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
2659     minimum_cycle_time = 300 # don't run a cycle faster than this
2660 
2661-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
2662+    def __init__(self, backend, statefp, allowed_cpu_percentage=None):
2663         service.MultiService.__init__(self)
2664hunk ./src/allmydata/storage/crawler.py 79
2665+        self.backend = backend
2666+        self.statefp = statefp
2667         if allowed_cpu_percentage is not None:
2668             self.allowed_cpu_percentage = allowed_cpu_percentage
2669hunk ./src/allmydata/storage/crawler.py 83
2670-        self.server = server
2671-        self.sharedir = server.sharedir
2672-        self.statefile = statefile
2673         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
2674                          for i in range(2**10)]
2675         self.prefixes.sort()
2676hunk ./src/allmydata/storage/crawler.py 87
2677         self.timer = None
2678-        self.bucket_cache = (None, [])
2679+        self.shareset_cache = (None, [])
2680         self.current_sleep_time = None
2681         self.next_wake_time = None
2682         self.last_prefix_finished_time = None
2683hunk ./src/allmydata/storage/crawler.py 150
2684                 left = len(self.prefixes) - self.last_complete_prefix_index
2685                 remaining = left * self.last_prefix_elapsed_time
2686                 # TODO: remainder of this prefix: we need to estimate the
2687-                # per-bucket time, probably by measuring the time spent on
2688-                # this prefix so far, divided by the number of buckets we've
2689+                # per-shareset time, probably by measuring the time spent on
2690+                # this prefix so far, divided by the number of sharesets we've
2691                 # processed.
2692             d["estimated-cycle-complete-time-left"] = remaining
2693             # it's possible to call get_progress() from inside a crawler's
2694hunk ./src/allmydata/storage/crawler.py 171
2695         state dictionary.
2696 
2697         If we are not currently sleeping (i.e. get_state() was called from
2698-        inside the process_prefixdir, process_bucket, or finished_cycle()
2699+        inside the process_prefixdir, process_shareset, or finished_cycle()
2700         methods, or if startService has not yet been called on this crawler),
2701         these two keys will be None.
2702 
2703hunk ./src/allmydata/storage/crawler.py 184
2704     def load_state(self):
2705         # we use this to store state for both the crawler's internals and
2706         # anything the subclass-specific code needs. The state is stored
2707-        # after each bucket is processed, after each prefixdir is processed,
2708+        # after each shareset is processed, after each prefixdir is processed,
2709         # and after a cycle is complete. The internal keys we use are:
2710         #  ["version"]: int, always 1
2711         #  ["last-cycle-finished"]: int, or None if we have not yet finished
2712hunk ./src/allmydata/storage/crawler.py 198
2713         #                            are sleeping between cycles, or if we
2714         #                            have not yet finished any prefixdir since
2715         #                            a cycle was started
2716-        #  ["last-complete-bucket"]: str, base32 storage index bucket name
2717-        #                            of the last bucket to be processed, or
2718-        #                            None if we are sleeping between cycles
2719+        #  ["last-complete-bucket"]: str, base32 storage index of the last
2720+        #                            shareset to be processed, or None if we
2721+        #                            are sleeping between cycles
2722         try:
2723hunk ./src/allmydata/storage/crawler.py 202
2724-            f = open(self.statefile, "rb")
2725-            state = pickle.load(f)
2726-            f.close()
2727+            state = pickle.loads(self.statefp.getContent())
2728         except EnvironmentError:
2729             state = {"version": 1,
2730                      "last-cycle-finished": None,
2731hunk ./src/allmydata/storage/crawler.py 238
2732         else:
2733             last_complete_prefix = self.prefixes[lcpi]
2734         self.state["last-complete-prefix"] = last_complete_prefix
2735-        tmpfile = self.statefile + ".tmp"
2736-        f = open(tmpfile, "wb")
2737-        pickle.dump(self.state, f)
2738-        f.close()
2739-        fileutil.move_into_place(tmpfile, self.statefile)
2740+        self.statefp.setContent(pickle.dumps(self.state))
2741 
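
Note that the persistence above goes through Twisted's FilePath API:
getContent() returns the file's bytes, so the pickle must be revived with
pickle.loads(), not pickle.load(). A round-trip sketch (the path name is
illustrative)::

    from twisted.python.filepath import FilePath
    import cPickle as pickle

    statefp = FilePath("lease_checker.state")          # illustrative path
    statefp.setContent(pickle.dumps({"version": 1}))   # replaces the file's contents
    state = pickle.loads(statefp.getContent())         # loads(), since we have bytes
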
2742     def startService(self):
2743         # arrange things to look like we were just sleeping, so
2744hunk ./src/allmydata/storage/crawler.py 280
2745         sleep_time = (this_slice / self.allowed_cpu_percentage) - this_slice
2746         # if the math gets weird, or a timequake happens, don't sleep
2747         # forever. Note that this means that, while a cycle is running, we
2748-        # will process at least one bucket every 5 minutes, no matter how
2749-        # long that bucket takes.
2750+        # will process at least one shareset every 5 minutes, no matter how
2751+        # long that shareset takes.
2752         sleep_time = max(0.0, min(sleep_time, 299))
2753         if finished_cycle:
2754             # how long should we sleep between cycles? Don't run faster than
2755hunk ./src/allmydata/storage/crawler.py 311
2756         for i in range(self.last_complete_prefix_index+1, len(self.prefixes)):
2757             # if we want to yield earlier, just raise TimeSliceExceeded()
2758             prefix = self.prefixes[i]
2759-            prefixdir = os.path.join(self.sharedir, prefix)
2760-            if i == self.bucket_cache[0]:
2761-                buckets = self.bucket_cache[1]
2762+            if i == self.shareset_cache[0]:
2763+                sharesets = self.shareset_cache[1]
2764             else:
2765hunk ./src/allmydata/storage/crawler.py 314
2766-                try:
2767-                    buckets = os.listdir(prefixdir)
2768-                    buckets.sort()
2769-                except EnvironmentError:
2770-                    buckets = []
2771-                self.bucket_cache = (i, buckets)
2772-            self.process_prefixdir(cycle, prefix, prefixdir,
2773-                                   buckets, start_slice)
2774+                sharesets = self.backend.get_sharesets_for_prefix(prefix)
2775+                self.shareset_cache = (i, sharesets)
2776+            self.process_prefixdir(cycle, prefix, sharesets, start_slice)
2777             self.last_complete_prefix_index = i
2778 
2779             now = time.time()
2780hunk ./src/allmydata/storage/crawler.py 341
2781         self.finished_cycle(cycle)
2782         self.save_state()
2783 
2784-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
2785-        """This gets a list of bucket names (i.e. storage index strings,
2786+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
2787+        """
2788+        This gets a list of IShareSet objects, sorted by their base32-encoded
2789         storage index strings.
2790 
2791         You can override this if your crawler doesn't care about the actual
2792hunk ./src/allmydata/storage/crawler.py 348
2793         shares, for example a crawler which merely keeps track of how many
2794-        buckets are being managed by this server.
2795+        sharesets are being managed by this server.
2796 
2797hunk ./src/allmydata/storage/crawler.py 350
2798-        Subclasses which *do* care about actual bucket should leave this
2799-        method along, and implement process_bucket() instead.
2800+    Subclasses that *do* care about the actual sharesets should leave this
2801+        method alone, and implement process_shareset() instead.
2802         """
2803 
2804hunk ./src/allmydata/storage/crawler.py 354
2805-        for bucket in buckets:
2806-            if bucket <= self.state["last-complete-bucket"]:
2807+        for shareset in sharesets:
2808+            base32si = shareset.get_storage_index_string()
2809+            if base32si <= self.state["last-complete-bucket"]:
2810                 continue
2811hunk ./src/allmydata/storage/crawler.py 358
2812-            self.process_bucket(cycle, prefix, prefixdir, bucket)
2813-            self.state["last-complete-bucket"] = bucket
2814+            self.process_shareset(cycle, prefix, shareset)
2815+            self.state["last-complete-bucket"] = base32si
2816             if time.time() >= start_slice + self.cpu_slice:
2817                 raise TimeSliceExceeded()
2818 
2819hunk ./src/allmydata/storage/crawler.py 366
2820     # the remaining methods are explicitly for subclasses to implement.
2821 
2822     def started_cycle(self, cycle):
2823-        """Notify a subclass that the crawler is about to start a cycle.
2824+        """
2825+        Notify a subclass that the crawler is about to start a cycle.
2826 
2827         This method is for subclasses to override. No upcall is necessary.
2828         """
2829hunk ./src/allmydata/storage/crawler.py 373
2830         pass
2831 
2832-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
2833-        """Examine a single bucket. Subclasses should do whatever they want
2834+    def process_shareset(self, cycle, prefix, shareset):
2835+        """
2836+        Examine a single shareset. Subclasses should do whatever they want
2837         to do to the shares therein, then update self.state as necessary.
2838 
2839         If the crawler is never interrupted by SIGKILL, this method will be
2840hunk ./src/allmydata/storage/crawler.py 379
2841-        called exactly once per share (per cycle). If it *is* interrupted,
2842+        called exactly once per shareset (per cycle). If it *is* interrupted,
2843         then the next time the node is started, some amount of work will be
2844         duplicated, according to when self.save_state() was last called. By
2845         default, save_state() is called at the end of each timeslice, and
2846hunk ./src/allmydata/storage/crawler.py 387
2847 
2848         To reduce the chance of duplicate work (i.e. to avoid adding multiple
2849         records to a database), you can call save_state() at the end of your
2850-        process_bucket() method. This will reduce the maximum duplicated work
2851-        to one bucket per SIGKILL. It will also add overhead, probably 1-20ms
2852-        per bucket (and some disk writes), which will count against your
2853-        allowed_cpu_percentage, and which may be considerable if
2854-        process_bucket() runs quickly.
2855+        process_shareset() method. This will reduce the maximum duplicated
2856+        work to one shareset per SIGKILL. It will also add overhead, probably
2857+        1-20ms per shareset (and some disk writes), which will count against
2858+        your allowed_cpu_percentage, and which may be considerable if
2859+        process_shareset() runs quickly.
2860 
2861         This method is for subclasses to override. No upcall is necessary.
2862         """
2863hunk ./src/allmydata/storage/crawler.py 398
2864         pass
2865 
2866     def finished_prefix(self, cycle, prefix):
2867-        """Notify a subclass that the crawler has just finished processing a
2868-        prefix directory (all buckets with the same two-character/10bit
2869+        """
2870+        Notify a subclass that the crawler has just finished processing a
2871+        prefix directory (all sharesets with the same two-character/10-bit
2872         prefix). To impose a limit on how much work might be duplicated by a
2873         SIGKILL that occurs during a timeslice, you can call
2874         self.save_state() here, but be aware that it may represent a
2875hunk ./src/allmydata/storage/crawler.py 411
2876         pass
2877 
2878     def finished_cycle(self, cycle):
2879-        """Notify subclass that a cycle (one complete traversal of all
2880+        """
2881+        Notify subclass that a cycle (one complete traversal of all
2882         prefixdirs) has just finished. 'cycle' is the number of the cycle
2883         that just finished. This method should perform summary work and
2884         update self.state to publish information to status displays.
2885hunk ./src/allmydata/storage/crawler.py 429
2886         pass
2887 
2888     def yielding(self, sleep_time):
2889-        """The crawler is about to sleep for 'sleep_time' seconds. This
2890+        """
2891+        The crawler is about to sleep for 'sleep_time' seconds. This
2892         method is mostly for the convenience of unit tests.
2893 
2894         This method is for subclasses to override. No upcall is necessary.
2895hunk ./src/allmydata/storage/crawler.py 439
2896 
2897 
2898 class BucketCountingCrawler(ShareCrawler):
2899-    """I keep track of how many buckets are being managed by this server.
2900-    This is equivalent to the number of distributed files and directories for
2901-    which I am providing storage. The actual number of files+directories in
2902-    the full grid is probably higher (especially when there are more servers
2903-    than 'N', the number of generated shares), because some files+directories
2904-    will have shares on other servers instead of me. Also note that the
2905-    number of buckets will differ from the number of shares in small grids,
2906-    when more than one share is placed on a single server.
2907+    """
2908+    I keep track of how many sharesets, each corresponding to a storage index,
2909+    are being managed by this server. This is equivalent to the number of
2910+    distributed files and directories for which I am providing storage. The
2911+    actual number of files and directories in the full grid is probably higher
2912+    (especially when there are more servers than 'N', the number of generated
2913+    shares), because some files and directories will have shares on other
2914+    servers instead of me. Also note that the number of sharesets will differ
2915+    from the number of shares in small grids, when more than one share is
2916+    placed on a single server.
2917     """
2918 
2919     minimum_cycle_time = 60*60 # we don't need this more than once an hour
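
As a usage sketch (with `backend` and `statefp` as in ShareCrawler above),
the published count can be read back out of the crawler's state::

    counter = BucketCountingCrawler(backend, statefp)
    counter.setServiceParent(storage_server)
    # ... then, after at least one complete cycle:
    count = counter.get_state()["last-complete-bucket-count"]
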
2920hunk ./src/allmydata/storage/crawler.py 453
2921 
2922-    def __init__(self, server, statefile, num_sample_prefixes=1):
2923-        ShareCrawler.__init__(self, server, statefile)
2924+    def __init__(self, backend, statefp, num_sample_prefixes=1):
2925+        ShareCrawler.__init__(self, backend, statefp)
2926         self.num_sample_prefixes = num_sample_prefixes
2927 
2928     def add_initial_state(self):
2929hunk ./src/allmydata/storage/crawler.py 467
2930         self.state.setdefault("last-complete-bucket-count", None)
2931         self.state.setdefault("storage-index-samples", {})
2932 
2933-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
2934+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
2935         # we override process_prefixdir() because we don't want to look at
2936hunk ./src/allmydata/storage/crawler.py 469
2937-        # the individual buckets. We'll save state after each one. On my
2938+        # the individual sharesets. We'll save state after each one. On my
2939         # laptop, a mostly-empty storage server can process about 70
2940         # prefixdirs in a 1.0s slice.
2941         if cycle not in self.state["bucket-counts"]:
2942hunk ./src/allmydata/storage/crawler.py 474
2943             self.state["bucket-counts"][cycle] = {}
2944-        self.state["bucket-counts"][cycle][prefix] = len(buckets)
2945+        self.state["bucket-counts"][cycle][prefix] = len(sharesets)
2946         if prefix in self.prefixes[:self.num_sample_prefixes]:
2947hunk ./src/allmydata/storage/crawler.py 476
2948-            self.state["storage-index-samples"][prefix] = (cycle, buckets)
2949+            self.state["storage-index-samples"][prefix] = (cycle, [s.get_storage_index_string() for s in sharesets])
2950 
2951     def finished_cycle(self, cycle):
2952         last_counts = self.state["bucket-counts"].get(cycle, [])
2953hunk ./src/allmydata/storage/crawler.py 482
2954         if len(last_counts) == len(self.prefixes):
2955             # great, we have a whole cycle.
2956-            num_buckets = sum(last_counts.values())
2957-            self.state["last-complete-bucket-count"] = num_buckets
2958+            num_sharesets = sum(last_counts.values())
2959+            self.state["last-complete-bucket-count"] = num_sharesets
2960             # get rid of old counts
2961             for old_cycle in list(self.state["bucket-counts"].keys()):
2962                 if old_cycle != cycle:
2963hunk ./src/allmydata/storage/crawler.py 490
2964                     del self.state["bucket-counts"][old_cycle]
2965         # get rid of old samples too
2966         for prefix in list(self.state["storage-index-samples"].keys()):
2967-            old_cycle,buckets = self.state["storage-index-samples"][prefix]
2968+            old_cycle, storage_indices = self.state["storage-index-samples"][prefix]
2969             if old_cycle != cycle:
2970                 del self.state["storage-index-samples"][prefix]
2971hunk ./src/allmydata/storage/crawler.py 493
2972-
2973hunk ./src/allmydata/storage/expirer.py 1
2974-import time, os, pickle, struct
2975+
2976+import time, pickle, struct
2977+from twisted.python import log as twlog
2978+
2979 from allmydata.storage.crawler import ShareCrawler
2980hunk ./src/allmydata/storage/expirer.py 6
2981-from allmydata.storage.shares import get_share_file
2982-from allmydata.storage.common import UnknownMutableContainerVersionError, \
2983+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
2984      UnknownImmutableContainerVersionError
2985hunk ./src/allmydata/storage/expirer.py 8
2986-from twisted.python import log as twlog
2987+
2988 
2989 class LeaseCheckingCrawler(ShareCrawler):
2990     """I examine the leases on all shares, determining which are still valid
2991hunk ./src/allmydata/storage/expirer.py 17
2992     removed.
2993 
2994     I collect statistics on the leases and make these available to a web
2995-    status page, including::
2996+    status page, including:
2997 
2998     Space recovered during this cycle-so-far:
2999      actual (only if expiration_enabled=True):
3000hunk ./src/allmydata/storage/expirer.py 21
3001-      num-buckets, num-shares, sum of share sizes, real disk usage
3002+      num-storage-indices, num-shares, sum of share sizes, real disk usage
3003       ('real disk usage' means we use stat(fn).st_blocks*512 and include any
3004        space used by the directory)
3005      what it would have been with the original lease expiration time
3006hunk ./src/allmydata/storage/expirer.py 32
3007 
3008     Space recovered during the last 10 cycles  <-- saved in separate pickle
3009 
3010-    Shares/buckets examined:
3011+    Shares/storage-indices examined:
3012      this cycle-so-far
3013      prediction of rest of cycle
3014      during last 10 cycles <-- separate pickle
3015hunk ./src/allmydata/storage/expirer.py 42
3016     Histogram of leases-per-share:
3017      this-cycle-to-date
3018      last 10 cycles <-- separate pickle
3019-    Histogram of lease ages, buckets = 1day
3020+    Histogram of lease ages, in 1-day bins
3021      cycle-to-date
3022      last 10 cycles <-- separate pickle
3023 
3024hunk ./src/allmydata/storage/expirer.py 53
3025     slow_start = 360 # wait 6 minutes after startup
3026     minimum_cycle_time = 12*60*60 # not more than twice per day
3027 
3028-    def __init__(self, server, statefile, historyfile,
3029-                 expiration_enabled, mode,
3030-                 override_lease_duration, # used if expiration_mode=="age"
3031-                 cutoff_date, # used if expiration_mode=="cutoff-date"
3032-                 sharetypes):
3033-        self.historyfile = historyfile
3034-        self.expiration_enabled = expiration_enabled
3035-        self.mode = mode
3036+    def __init__(self, backend, statefp, historyfp, expiration_policy):
3037+        ShareCrawler.__init__(self, backend, statefp)
3038+        self.historyfp = historyfp
3039+        self.expiration_enabled = expiration_policy['enabled']
3040+        self.mode = expiration_policy['mode']
3041         self.override_lease_duration = None
3042         self.cutoff_date = None
3043         if self.mode == "age":
3044hunk ./src/allmydata/storage/expirer.py 61
3045-            assert isinstance(override_lease_duration, (int, type(None)))
3046-            self.override_lease_duration = override_lease_duration # seconds
3047+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
3048+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
3049         elif self.mode == "cutoff-date":
3050hunk ./src/allmydata/storage/expirer.py 64
3051-            assert isinstance(cutoff_date, int) # seconds-since-epoch
3052-            assert cutoff_date is not None
3053-            self.cutoff_date = cutoff_date
3054+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
3055+            self.cutoff_date = expiration_policy['cutoff_date']
3056         else:
3057hunk ./src/allmydata/storage/expirer.py 67
3058-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
3059-        self.sharetypes_to_expire = sharetypes
3060-        ShareCrawler.__init__(self, server, statefile)
3061+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
3062+        self.sharetypes_to_expire = expiration_policy['sharetypes']
3063 
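
For instance, a policy dict for cutoff-date expiration might look like this
sketch (the cutoff value is illustrative; `backend`, `statefp`, and
`historyfp` are as in ShareCrawler)::

    policy = {
        'enabled': True,
        'mode': 'cutoff-date',
        'override_lease_duration': None,
        'cutoff_date': 1316390400,   # seconds since epoch, example value
        'sharetypes': ('mutable', 'immutable'),
    }
    checker = LeaseCheckingCrawler(backend, statefp, historyfp, policy)
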
3064     def add_initial_state(self):
3065         # we fill ["cycle-to-date"] here (even though they will be reset in
3066hunk ./src/allmydata/storage/expirer.py 82
3067             self.state["cycle-to-date"].setdefault(k, so_far[k])
3068 
3069         # initialize history
3070-        if not os.path.exists(self.historyfile):
3071+        if not self.historyfp.exists():
3072             history = {} # cyclenum -> dict
3073hunk ./src/allmydata/storage/expirer.py 84
3074-            f = open(self.historyfile, "wb")
3075-            pickle.dump(history, f)
3076-            f.close()
3077+            self.historyfp.setContent(pickle.dumps(history))
3078 
3079     def create_empty_cycle_dict(self):
3080         recovered = self.create_empty_recovered_dict()
3081hunk ./src/allmydata/storage/expirer.py 97
3082 
3083     def create_empty_recovered_dict(self):
3084         recovered = {}
3085+        # "buckets" is ambiguous; here it means the number of sharesets (one per storage index per server)
3086         for a in ("actual", "original", "configured", "examined"):
3087             for b in ("buckets", "shares", "sharebytes", "diskbytes"):
3088                 recovered[a+"-"+b] = 0
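
The double loop above initializes 4*4 = 16 counters to zero, for example
"examined-buckets" (sharesets examined this cycle) and "actual-diskbytes"
(disk bytes actually recovered); sharetype-suffixed variants such as
"examined-buckets-immutable" are created on demand by increment().
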
3089hunk ./src/allmydata/storage/expirer.py 108
3090     def started_cycle(self, cycle):
3091         self.state["cycle-to-date"] = self.create_empty_cycle_dict()
3092 
3093-    def stat(self, fn):
3094-        return os.stat(fn)
3095-
3096-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
3097-        bucketdir = os.path.join(prefixdir, storage_index_b32)
3098-        s = self.stat(bucketdir)
3099+    def process_shareset(self, cycle, prefix, container):
3100         would_keep_shares = []
3101         wks = None
3102hunk ./src/allmydata/storage/expirer.py 111
3103+        sharetype = None
3104 
3105hunk ./src/allmydata/storage/expirer.py 113
3106-        for fn in os.listdir(bucketdir):
3107-            try:
3108-                shnum = int(fn)
3109-            except ValueError:
3110-                continue # non-numeric means not a sharefile
3111-            sharefile = os.path.join(bucketdir, fn)
3112+        for share in container.get_shares():
3113+            sharetype = share.sharetype
3114             try:
3115hunk ./src/allmydata/storage/expirer.py 116
3116-                wks = self.process_share(sharefile)
3117+                wks = self.process_share(share)
3118             except (UnknownMutableContainerVersionError,
3119                     UnknownImmutableContainerVersionError,
3120                     struct.error):
3121hunk ./src/allmydata/storage/expirer.py 120
3122-                twlog.msg("lease-checker error processing %s" % sharefile)
3123+                twlog.msg("lease-checker error processing %r" % (share,))
3124                 twlog.err()
3125hunk ./src/allmydata/storage/expirer.py 122
3126-                which = (storage_index_b32, shnum)
3127+                which = (si_b2a(share.storageindex), share.get_shnum())
3128                 self.state["cycle-to-date"]["corrupt-shares"].append(which)
3129                 wks = (1, 1, 1, "unknown")
3130             would_keep_shares.append(wks)
3131hunk ./src/allmydata/storage/expirer.py 127
3132 
3133-        sharetype = None
3134+        container_type = None
3135         if wks:
3136hunk ./src/allmydata/storage/expirer.py 129
3137-            # use the last share's sharetype as the buckettype
3138-            sharetype = wks[3]
3139+            # use the last share's sharetype as the container type
3140+            container_type = wks[3]
3141         rec = self.state["cycle-to-date"]["space-recovered"]
3142         self.increment(rec, "examined-buckets", 1)
3143         if sharetype:
3144hunk ./src/allmydata/storage/expirer.py 134
3145-            self.increment(rec, "examined-buckets-"+sharetype, 1)
3146+            self.increment(rec, "examined-buckets-"+container_type, 1)
3147+
3148+        container_diskbytes = container.get_overhead()
3149 
3150hunk ./src/allmydata/storage/expirer.py 138
3151-        try:
3152-            bucket_diskbytes = s.st_blocks * 512
3153-        except AttributeError:
3154-            bucket_diskbytes = 0 # no stat().st_blocks on windows
3155         if sum([wks[0] for wks in would_keep_shares]) == 0:
3156hunk ./src/allmydata/storage/expirer.py 139
3157-            self.increment_bucketspace("original", bucket_diskbytes, sharetype)
3158+            self.increment_container_space("original", container_diskbytes, sharetype)
3159         if sum([wks[1] for wks in would_keep_shares]) == 0:
3160hunk ./src/allmydata/storage/expirer.py 141
3161-            self.increment_bucketspace("configured", bucket_diskbytes, sharetype)
3162+            self.increment_container_space("configured", container_diskbytes, sharetype)
3163         if sum([wks[2] for wks in would_keep_shares]) == 0:
3164hunk ./src/allmydata/storage/expirer.py 143
3165-            self.increment_bucketspace("actual", bucket_diskbytes, sharetype)
3166+            self.increment_container_space("actual", container_diskbytes, sharetype)
3167 
3168hunk ./src/allmydata/storage/expirer.py 145
3169-    def process_share(self, sharefilename):
3170-        # first, find out what kind of a share it is
3171-        sf = get_share_file(sharefilename)
3172-        sharetype = sf.sharetype
3173+    def process_share(self, share):
3174+        sharetype = share.sharetype
3175         now = time.time()
3176hunk ./src/allmydata/storage/expirer.py 148
3177-        s = self.stat(sharefilename)
3178+        sharebytes = share.get_size()
3179+        diskbytes = share.get_used_space()
3180 
3181         num_leases = 0
3182         num_valid_leases_original = 0
3183hunk ./src/allmydata/storage/expirer.py 156
3184         num_valid_leases_configured = 0
3185         expired_leases_configured = []
3186 
3187-        for li in sf.get_leases():
3188+        for li in share.get_leases():
3189             num_leases += 1
3190             original_expiration_time = li.get_expiration_time()
3191             grant_renew_time = li.get_grant_renew_time_time()
3192hunk ./src/allmydata/storage/expirer.py 169
3193 
3194             #  expired-or-not according to our configured age limit
3195             expired = False
3196-            if self.mode == "age":
3197-                age_limit = original_expiration_time
3198-                if self.override_lease_duration is not None:
3199-                    age_limit = self.override_lease_duration
3200-                if age > age_limit:
3201-                    expired = True
3202-            else:
3203-                assert self.mode == "cutoff-date"
3204-                if grant_renew_time < self.cutoff_date:
3205-                    expired = True
3206-            if sharetype not in self.sharetypes_to_expire:
3207-                expired = False
3208+            if sharetype in self.sharetypes_to_expire:
3209+                if self.mode == "age":
3210+                    age_limit = original_expiration_time
3211+                    if self.override_lease_duration is not None:
3212+                        age_limit = self.override_lease_duration
3213+                    if age > age_limit:
3214+                        expired = True
3215+                else:
3216+                    assert self.mode == "cutoff-date"
3217+                    if grant_renew_time < self.cutoff_date:
3218+                        expired = True
3219 
3220             if expired:
3221                 expired_leases_configured.append(li)
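
Distilled into a standalone sketch (names hypothetical), the per-lease
decision above is::

    def lease_is_expired(mode, sharetype, sharetypes_to_expire, age,
                         original_expiration_time, override_lease_duration,
                         grant_renew_time, cutoff_date):
        if sharetype not in sharetypes_to_expire:
            return False
        if mode == "age":
            age_limit = original_expiration_time
            if override_lease_duration is not None:
                age_limit = override_lease_duration
            return age > age_limit
        assert mode == "cutoff-date"
        return grant_renew_time < cutoff_date
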
3222hunk ./src/allmydata/storage/expirer.py 188
3223 
3224         so_far = self.state["cycle-to-date"]
3225         self.increment(so_far["leases-per-share-histogram"], num_leases, 1)
3226-        self.increment_space("examined", s, sharetype)
3227+        self.increment_space("examined", diskbytes, sharetype)
3228 
3229         would_keep_share = [1, 1, 1, sharetype]
3230 
3231hunk ./src/allmydata/storage/expirer.py 194
3232         if self.expiration_enabled:
3233             for li in expired_leases_configured:
3234-                sf.cancel_lease(li.cancel_secret)
3235+                share.cancel_lease(li.cancel_secret)
3236 
3237         if num_valid_leases_original == 0:
3238             would_keep_share[0] = 0
3239hunk ./src/allmydata/storage/expirer.py 198
3240-            self.increment_space("original", s, sharetype)
3241+            self.increment_space("original", sharebytes, diskbytes, sharetype)
3242 
3243         if num_valid_leases_configured == 0:
3244             would_keep_share[1] = 0
3245hunk ./src/allmydata/storage/expirer.py 202
3246-            self.increment_space("configured", s, sharetype)
3247+            self.increment_space("configured", sharebytes, diskbytes, sharetype)
3248             if self.expiration_enabled:
3249                 would_keep_share[2] = 0
3250hunk ./src/allmydata/storage/expirer.py 205
3251-                self.increment_space("actual", s, sharetype)
3252+                self.increment_space("actual", sharebytes, diskbytes, sharetype)
3253 
3254         return would_keep_share
3255 
3256hunk ./src/allmydata/storage/expirer.py 209
3257-    def increment_space(self, a, s, sharetype):
3258-        sharebytes = s.st_size
3259-        try:
3260-            # note that stat(2) says that st_blocks is 512 bytes, and that
3261-            # st_blksize is "optimal file sys I/O ops blocksize", which is
3262-            # independent of the block-size that st_blocks uses.
3263-            diskbytes = s.st_blocks * 512
3264-        except AttributeError:
3265-            # the docs say that st_blocks is only on linux. I also see it on
3266-            # MacOS. But it isn't available on windows.
3267-            diskbytes = sharebytes
3268+    def increment_space(self, a, sharebytes, diskbytes, sharetype):
3269         so_far_sr = self.state["cycle-to-date"]["space-recovered"]
3270         self.increment(so_far_sr, a+"-shares", 1)
3271         self.increment(so_far_sr, a+"-sharebytes", sharebytes)
3272hunk ./src/allmydata/storage/expirer.py 219
3273             self.increment(so_far_sr, a+"-sharebytes-"+sharetype, sharebytes)
3274             self.increment(so_far_sr, a+"-diskbytes-"+sharetype, diskbytes)
3275 
3276-    def increment_bucketspace(self, a, bucket_diskbytes, sharetype):
3277+    def increment_container_space(self, a, container_diskbytes, container_type):
3278         rec = self.state["cycle-to-date"]["space-recovered"]
3279hunk ./src/allmydata/storage/expirer.py 221
3280-        self.increment(rec, a+"-diskbytes", bucket_diskbytes)
3281+        self.increment(rec, a+"-diskbytes", container_diskbytes)
3282         self.increment(rec, a+"-buckets", 1)
3283hunk ./src/allmydata/storage/expirer.py 223
3284-        if sharetype:
3285-            self.increment(rec, a+"-diskbytes-"+sharetype, bucket_diskbytes)
3286-            self.increment(rec, a+"-buckets-"+sharetype, 1)
3287+        if container_type:
3288+            self.increment(rec, a+"-diskbytes-"+container_type, container_diskbytes)
3289+            self.increment(rec, a+"-buckets-"+container_type, 1)
3290 
3291     def increment(self, d, k, delta=1):
3292         if k not in d:
3293hunk ./src/allmydata/storage/expirer.py 279
3294         # copy() needs to become a deepcopy
3295         h["space-recovered"] = s["space-recovered"].copy()
3296 
3297-        history = pickle.load(open(self.historyfile, "rb"))
3298+        history = pickle.loads(self.historyfp.getContent())
3299         history[cycle] = h
3300         while len(history) > 10:
3301             oldcycles = sorted(history.keys())
3302hunk ./src/allmydata/storage/expirer.py 284
3303             del history[oldcycles[0]]
3304-        f = open(self.historyfile, "wb")
3305-        pickle.dump(history, f)
3306-        f.close()
3307+        self.historyfp.setContent(pickle.dumps(history))
3308 
3309     def get_state(self):
3310         """In addition to the crawler state described in
3311hunk ./src/allmydata/storage/expirer.py 353
3312         progress = self.get_progress()
3313 
3314         state = ShareCrawler.get_state(self) # does a shallow copy
3315-        history = pickle.load(open(self.historyfile, "rb"))
3316+        history = pickle.loads(self.historyfp.getContent())
3317         state["history"] = history
3318 
3319         if not progress["cycle-in-progress"]:
3320hunk ./src/allmydata/storage/lease.py 17
3321 
3322     def get_expiration_time(self):
3323         return self.expiration_time
3324+
3325     def get_grant_renew_time_time(self):
3326         # hack, based upon fixed 31day expiration period
3327         return self.expiration_time - 31*24*60*60
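
Concretely, 31*24*60*60 = 2678400 seconds: a lease whose expiration_time is T
is assumed to have been granted or last renewed at T - 2678400, and get_age()
below reports time.time() minus that estimate.
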
3328hunk ./src/allmydata/storage/lease.py 21
3329+
3330     def get_age(self):
3331         return time.time() - self.get_grant_renew_time_time()
3332 
3333hunk ./src/allmydata/storage/lease.py 32
3334          self.expiration_time) = struct.unpack(">L32s32sL", data)
3335         self.nodeid = None
3336         return self
3337+
3338     def to_immutable_data(self):
3339         return struct.pack(">L32s32sL",
3340                            self.owner_num,
3341hunk ./src/allmydata/storage/lease.py 45
3342                            int(self.expiration_time),
3343                            self.renew_secret, self.cancel_secret,
3344                            self.nodeid)
3345+
3346     def from_mutable_data(self, data):
3347         (self.owner_num,
3348          self.expiration_time,
3349hunk ./src/allmydata/storage/server.py 1
3350-import os, re, weakref, struct, time
3351+import weakref, time
3352 
3353 from foolscap.api import Referenceable
3354 from twisted.application import service
3355hunk ./src/allmydata/storage/server.py 8
3356 
3357 from zope.interface import implements
3358 from allmydata.interfaces import RIStorageServer, IStatsProducer
3359-from allmydata.util import fileutil, idlib, log, time_format
3360+from allmydata.util import idlib, log
3361 import allmydata # for __full_version__
3362 
3363hunk ./src/allmydata/storage/server.py 11
3364-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
3365-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
3366+from allmydata.storage.common import si_a2b, si_b2a
3367+[si_a2b]  # hush pyflakes
3368 from allmydata.storage.lease import LeaseInfo
3369hunk ./src/allmydata/storage/server.py 14
3370-from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
3371-     create_mutable_sharefile
3372-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
3373-from allmydata.storage.crawler import BucketCountingCrawler
3374 from allmydata.storage.expirer import LeaseCheckingCrawler
3375hunk ./src/allmydata/storage/server.py 15
3376-
3377-# storage/
3378-# storage/shares/incoming
3379-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3380-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3381-# storage/shares/$START/$STORAGEINDEX
3382-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3383-
3384-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3385-# base-32 chars).
3386-
3387-# $SHARENUM matches this regex:
3388-NUM_RE=re.compile("^[0-9]+$")
3389-
3390+from allmydata.storage.crawler import BucketCountingCrawler
3391 
3392 
3393 class StorageServer(service.MultiService, Referenceable):
3394hunk ./src/allmydata/storage/server.py 20
3395     implements(RIStorageServer, IStatsProducer)
3396+
3397     name = 'storage'
3398     LeaseCheckerClass = LeaseCheckingCrawler
3399hunk ./src/allmydata/storage/server.py 23
3400+    DEFAULT_EXPIRATION_POLICY = {
3401+        'enabled': False,
3402+        'mode': 'age',
3403+        'override_lease_duration': None,
3404+        'cutoff_date': None,
3405+        'sharetypes': ('mutable', 'immutable'),
3406+    }
3407 
3408hunk ./src/allmydata/storage/server.py 31
3409-    def __init__(self, storedir, nodeid, reserved_space=0,
3410-                 discard_storage=False, readonly_storage=False,
3411+    def __init__(self, nodeid, backend, reserved_space=0,
3412+                 readonly_storage=False,
3413                  stats_provider=None,
3414hunk ./src/allmydata/storage/server.py 34
3415-                 expiration_enabled=False,
3416-                 expiration_mode="age",
3417-                 expiration_override_lease_duration=None,
3418-                 expiration_cutoff_date=None,
3419-                 expiration_sharetypes=("mutable", "immutable")):
3420+                 expiration_policy=None):
3421         service.MultiService.__init__(self)
3422         assert isinstance(nodeid, str)
3423         assert len(nodeid) == 20
3424hunk ./src/allmydata/storage/server.py 39
3425         self.my_nodeid = nodeid
3426-        self.storedir = storedir
3427-        sharedir = os.path.join(storedir, "shares")
3428-        fileutil.make_dirs(sharedir)
3429-        self.sharedir = sharedir
3430-        # we don't actually create the corruption-advisory dir until necessary
3431-        self.corruption_advisory_dir = os.path.join(storedir,
3432-                                                    "corruption-advisories")
3433-        self.reserved_space = int(reserved_space)
3434-        self.no_storage = discard_storage
3435-        self.readonly_storage = readonly_storage
3436         self.stats_provider = stats_provider
3437         if self.stats_provider:
3438             self.stats_provider.register_producer(self)
3439hunk ./src/allmydata/storage/server.py 42
3440-        self.incomingdir = os.path.join(sharedir, 'incoming')
3441-        self._clean_incomplete()
3442-        fileutil.make_dirs(self.incomingdir)
3443         self._active_writers = weakref.WeakKeyDictionary()
3444hunk ./src/allmydata/storage/server.py 43
3445+        self.backend = backend
3446+        self.backend.setServiceParent(self)
3447+        self.backend.set_storage_server(self)
3448         log.msg("StorageServer created", facility="tahoe.storage")
3449 
3450hunk ./src/allmydata/storage/server.py 48
3451-        if reserved_space:
3452-            if self.get_available_space() is None:
3453-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
3454-                        umin="0wZ27w", level=log.UNUSUAL)
3455-
3456         self.latencies = {"allocate": [], # immutable
3457                           "write": [],
3458                           "close": [],
3459hunk ./src/allmydata/storage/server.py 59
3460                           "renew": [],
3461                           "cancel": [],
3462                           }
3463-        self.add_bucket_counter()
3464-
3465-        statefile = os.path.join(self.storedir, "lease_checker.state")
3466-        historyfile = os.path.join(self.storedir, "lease_checker.history")
3467-        klass = self.LeaseCheckerClass
3468-        self.lease_checker = klass(self, statefile, historyfile,
3469-                                   expiration_enabled, expiration_mode,
3470-                                   expiration_override_lease_duration,
3471-                                   expiration_cutoff_date,
3472-                                   expiration_sharetypes)
3473-        self.lease_checker.setServiceParent(self)
3474+        self._setup_bucket_counter()
3475+        self._setup_lease_checker(expiration_policy or self.DEFAULT_EXPIRATION_POLICY)
3476 
3477     def __repr__(self):
3478         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
3479hunk ./src/allmydata/storage/server.py 65
3480 
3481-    def add_bucket_counter(self):
3482-        statefile = os.path.join(self.storedir, "bucket_counter.state")
3483-        self.bucket_counter = BucketCountingCrawler(self, statefile)
3484+    def _setup_bucket_counter(self):
3485+        statefp = self.storedir.child("bucket_counter.state")
3486+        self.bucket_counter = BucketCountingCrawler(self.backend, statefp)
3487         self.bucket_counter.setServiceParent(self)
3488 
3489hunk ./src/allmydata/storage/server.py 70
3490+    def _setup_lease_checker(self, expiration_policy):
3491+        statefp = self.storedir.child("lease_checker.state")
3492+        historyfp = self.storedir.child("lease_checker.history")
3493+        self.lease_checker = self.LeaseCheckerClass(self.backend, statefp, historyfp, expiration_policy)
3494+        self.lease_checker.setServiceParent(self)
3495+
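
Putting these pieces together, constructing a server against a pluggable
backend is a sketch like the following (assuming `backend` is a service
providing IStorageBackend)::

    ss = StorageServer(nodeid, backend,
                       expiration_policy={'enabled': True,
                                          'mode': 'age',
                                          'override_lease_duration': 90*24*60*60,
                                          'cutoff_date': None,
                                          'sharetypes': ('mutable', 'immutable')})
    ss.setServiceParent(parent)  # starts the bucket counter and lease checker
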
3496     def count(self, name, delta=1):
3497         if self.stats_provider:
3498             self.stats_provider.count("storage_server." + name, delta)
3499hunk ./src/allmydata/storage/server.py 90
3500         """Return a dict, indexed by category, that contains a dict of
3501         latency numbers for each category. If there are sufficient samples
3502         for unambiguous interpretation, each dict will contain the
3503-        following keys: mean, 01_0_percentile, 10_0_percentile,
3504+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
3505         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
3506         99_0_percentile, 99_9_percentile.  If there are insufficient
3507         samples for a given percentile to be interpreted unambiguously
3508hunk ./src/allmydata/storage/server.py 112
3509             else:
3510                 stats["mean"] = None
3511 
3512-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
3513-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
3514-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
3515+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
3516+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
3517+                             (0.01, "01_0_percentile", 100),  (0.99, "99_0_percentile", 100),\
3518                              (0.999, "99_9_percentile", 1000)]
3519 
3520             for percentile, percentilestring, minnumtoobserve in orderstatlist:
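
The percentile extraction itself amounts to indexing into the sorted sample
list; roughly (a sketch for the median only, with hypothetical names)::

    samples = sorted(latencies)
    if len(samples) >= 10:          # minnumtoobserve for the median
        median = samples[int(0.5 * len(samples))]
    else:
        median = None               # too few samples to report unambiguously
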
3521hunk ./src/allmydata/storage/server.py 131
3522             kwargs["facility"] = "tahoe.storage"
3523         return log.msg(*args, **kwargs)
3524 
3525-    def _clean_incomplete(self):
3526-        fileutil.rm_dir(self.incomingdir)
3527+    def get_serverid(self):
3528+        return self.my_nodeid
3529 
3530     def get_stats(self):
3531         # remember: RIStatsProvider requires that our return dict
3532hunk ./src/allmydata/storage/server.py 136
3533-        # contains numeric values.
3534+        # contains only numeric or None values.
3535         stats = { 'storage_server.allocated': self.allocated_size(), }
3536         stats['storage_server.reserved_space'] = self.reserved_space
3537         for category,ld in self.get_latencies().items():
3538hunk ./src/allmydata/storage/server.py 143
3539             for name,v in ld.items():
3540                 stats['storage_server.latencies.%s.%s' % (category, name)] = v
3541 
3542-        try:
3543-            disk = fileutil.get_disk_stats(self.sharedir, self.reserved_space)
3544-            writeable = disk['avail'] > 0
3545+        self.backend.fill_in_space_stats(stats)
3546 
3547hunk ./src/allmydata/storage/server.py 145
3548-            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
3549-            stats['storage_server.disk_total'] = disk['total']
3550-            stats['storage_server.disk_used'] = disk['used']
3551-            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
3552-            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
3553-            stats['storage_server.disk_avail'] = disk['avail']
3554-        except AttributeError:
3555-            writeable = True
3556-        except EnvironmentError:
3557-            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
3558-            writeable = False
3559-
3560-        if self.readonly_storage:
3561-            stats['storage_server.disk_avail'] = 0
3562-            writeable = False
3563-
3564-        stats['storage_server.accepting_immutable_shares'] = int(writeable)
3565         s = self.bucket_counter.get_state()
3566         bucket_count = s.get("last-complete-bucket-count")
3567         if bucket_count:
3568hunk ./src/allmydata/storage/server.py 152
3569         return stats
3570 
3571     def get_available_space(self):
3572-        """Returns available space for share storage in bytes, or None if no
3573-        API to get this information is available."""
3574-
3575-        if self.readonly_storage:
3576-            return 0
3577-        return fileutil.get_available_space(self.sharedir, self.reserved_space)
3578+        return self.backend.get_available_space()
3579 
3580     def allocated_size(self):
3581         space = 0
3582hunk ./src/allmydata/storage/server.py 161
3583         return space
3584 
3585     def remote_get_version(self):
3586-        remaining_space = self.get_available_space()
3587+        remaining_space = self.backend.get_available_space()
3588         if remaining_space is None:
3589             # We're on a platform that has no API to get disk stats.
3590             remaining_space = 2**64
3591hunk ./src/allmydata/storage/server.py 177
3592                     }
3593         return version
3594 
3595-    def remote_allocate_buckets(self, storage_index,
3596+    def remote_allocate_buckets(self, storageindex,
3597                                 renew_secret, cancel_secret,
3598                                 sharenums, allocated_size,
3599                                 canary, owner_num=0):
3600hunk ./src/allmydata/storage/server.py 181
3601+        # cancel_secret is no longer used.
3602         # owner_num is not for clients to set, but rather it should be
3603hunk ./src/allmydata/storage/server.py 183
3604-        # curried into the PersonalStorageServer instance that is dedicated
3605-        # to a particular owner.
3606+        # curried into a StorageServer instance dedicated to a particular
3607+        # owner.
3608         start = time.time()
3609         self.count("allocate")
3610hunk ./src/allmydata/storage/server.py 187
3611-        alreadygot = set()
3613         bucketwriters = {} # k: shnum, v: BucketWriter
3614hunk ./src/allmydata/storage/server.py 189
3615-        si_dir = storage_index_to_dir(storage_index)
3616-        si_s = si_b2a(storage_index)
3617 
3618hunk ./src/allmydata/storage/server.py 190
3619+        si_s = si_b2a(storageindex)
3620         log.msg("storage: allocate_buckets %s" % si_s)
3621 
3622hunk ./src/allmydata/storage/server.py 193
3623-        # in this implementation, the lease information (including secrets)
3624-        # goes into the share files themselves. It could also be put into a
3625-        # separate database. Note that the lease should not be added until
3626-        # the BucketWriter has been closed.
3627+        # Note that the lease should not be added until the BucketWriter
3628+        # has been closed.
3629         expire_time = time.time() + 31*24*60*60
3630hunk ./src/allmydata/storage/server.py 196
3631-        lease_info = LeaseInfo(owner_num,
3632-                               renew_secret, cancel_secret,
3633+        lease_info = LeaseInfo(owner_num, renew_secret,
3634                                expire_time, self.my_nodeid)
3635 
3636         max_space_per_bucket = allocated_size
3637hunk ./src/allmydata/storage/server.py 201
3638 
3639-        remaining_space = self.get_available_space()
3640+        remaining_space = self.backend.get_available_space()
3641         limited = remaining_space is not None
3642         if limited:
3643hunk ./src/allmydata/storage/server.py 204
3644-            # this is a bit conservative, since some of this allocated_size()
3645-            # has already been written to disk, where it will show up in
3646+            # This is a bit conservative, since some of this allocated_size()
3647+            # has already been written to the backend, where it will show up in
3648             # get_available_space.
3649             remaining_space -= self.allocated_size()
3650         # self.readonly_storage causes remaining_space <= 0
3651hunk ./src/allmydata/storage/server.py 210
3652 
3653-        # fill alreadygot with all shares that we have, not just the ones
3654+        # Fill alreadygot with all shares that we have, not just the ones
3655         # they asked about: this will save them a lot of work. Add or update
3656         # leases for all of them: if they want us to hold shares for this
3657hunk ./src/allmydata/storage/server.py 213
3658-        # file, they'll want us to hold leases for this file.
3659-        for (shnum, fn) in self._get_bucket_shares(storage_index):
3660-            alreadygot.add(shnum)
3661-            sf = ShareFile(fn)
3662-            sf.add_or_renew_lease(lease_info)
3663+        # file, they'll want us to hold leases for all the shares of it.
3664+        #
3665+        # XXX should we be making the assumption here that lease info is
3666+        # duplicated in all shares?
3667+        alreadygot = set()
3668+        for share in self.backend.get_shares(storageindex):
3669+            share.add_or_renew_lease(lease_info)
3670+            alreadygot.add(share.shnum)
3671 
3672hunk ./src/allmydata/storage/server.py 222
3673-        for shnum in sharenums:
3674-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3675-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
3676-            if os.path.exists(finalhome):
3677-                # great! we already have it. easy.
3678-                pass
3679-            elif os.path.exists(incominghome):
3680-                # Note that we don't create BucketWriters for shnums that
3681-                # have a partial share (in incoming/), so if a second upload
3682-                # occurs while the first is still in progress, the second
3683-                # uploader will use different storage servers.
3684-                pass
3685-            elif (not limited) or (remaining_space >= max_space_per_bucket):
3686-                # ok! we need to create the new share file.
3687-                bw = BucketWriter(self, incominghome, finalhome,
3688-                                  max_space_per_bucket, lease_info, canary)
3689-                if self.no_storage:
3690-                    bw.throw_out_all_data = True
3691+        # all share numbers that are incoming
3692+        incoming = self.backend.get_incoming_shnums(storageindex)
3693+
3694+        for shnum in ((sharenums - alreadygot) - incoming):
3695+            if (not limited) or (remaining_space >= max_space_per_bucket):
3696+                bw = self.backend.make_bucket_writer(storageindex, shnum, max_space_per_bucket,
3697+                                                     lease_info, canary)
3698                 bucketwriters[shnum] = bw
3699                 self._active_writers[bw] = 1
3700                 if limited:
3701hunk ./src/allmydata/storage/server.py 234
3702                     remaining_space -= max_space_per_bucket
3703             else:
3704-                # bummer! not enough space to accept this bucket
3705+                # Bummer not enough space to accept this share.
3706                 pass
3707 
3708hunk ./src/allmydata/storage/server.py 237
3709-        if bucketwriters:
3710-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
3711-
3712         self.add_latency("allocate", time.time() - start)
3713         return alreadygot, bucketwriters
3714 
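
The share selection above is plain set arithmetic; for example::

    sharenums  = set([0, 1, 2, 3])   # shares the client asked to allocate
    alreadygot = set([1])            # already stored by the backend
    incoming   = set([2])            # another upload still in progress
    assert (sharenums - alreadygot) - incoming == set([0, 3])
    # BucketWriters are created only for shares 0 and 3
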
3715hunk ./src/allmydata/storage/server.py 240
3716-    def _iter_share_files(self, storage_index):
3717-        for shnum, filename in self._get_bucket_shares(storage_index):
3718-            f = open(filename, 'rb')
3719-            header = f.read(32)
3720-            f.close()
3721-            if header[:32] == MutableShareFile.MAGIC:
3722-                sf = MutableShareFile(filename, self)
3723-                # note: if the share has been migrated, the renew_lease()
3724-                # call will throw an exception, with information to help the
3725-                # client update the lease.
3726-            elif header[:4] == struct.pack(">L", 1):
3727-                sf = ShareFile(filename)
3728-            else:
3729-                continue # non-sharefile
3730-            yield sf
3731-
3732-    def remote_add_lease(self, storage_index, renew_secret, cancel_secret,
3733+    def remote_add_lease(self, storageindex, renew_secret, cancel_secret,
3734                          owner_num=1):
3735hunk ./src/allmydata/storage/server.py 242
3736+        # cancel_secret is no longer used.
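+        # It is accepted only for compatibility with the remote interface.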
3737         start = time.time()
3738         self.count("add-lease")
3739         new_expire_time = time.time() + 31*24*60*60
3740hunk ./src/allmydata/storage/server.py 246
3741-        lease_info = LeaseInfo(owner_num,
3742-                               renew_secret, cancel_secret,
3743+        lease_info = LeaseInfo(owner_num, renew_secret,
3744                                new_expire_time, self.my_nodeid)
3745hunk ./src/allmydata/storage/server.py 248
3746-        for sf in self._iter_share_files(storage_index):
3747-            sf.add_or_renew_lease(lease_info)
3748-        self.add_latency("add-lease", time.time() - start)
3749-        return None
3750 
3751hunk ./src/allmydata/storage/server.py 249
3752-    def remote_renew_lease(self, storage_index, renew_secret):
3753+        try:
3754+            self.backend.get_shareset(storageindex).add_or_renew_lease(lease_info)
3755+        finally:
3756+            self.add_latency("add-lease", time.time() - start)
3757+
3758+    def remote_renew_lease(self, storageindex, renew_secret):
3759         start = time.time()
3760         self.count("renew")
3761hunk ./src/allmydata/storage/server.py 257
3762-        new_expire_time = time.time() + 31*24*60*60
3763-        found_buckets = False
3764-        for sf in self._iter_share_files(storage_index):
3765-            found_buckets = True
3766-            sf.renew_lease(renew_secret, new_expire_time)
3767-        self.add_latency("renew", time.time() - start)
3768-        if not found_buckets:
3769-            raise IndexError("no such lease to renew")
3770+
3771+        try:
3772+            shareset = self.backend.get_shareset(storageindex)
3773+            new_expiration_time = start + 31*24*60*60   # one month from now
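+            # renew_lease is expected to raise IndexError if there is no
+            # lease with the given renew_secret, preserving the old
+            # "no such lease to renew" behaviour.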
3774+            shareset.renew_lease(renew_secret, new_expiration_time)
3775+        finally:
3776+            self.add_latency("renew", time.time() - start)
3777 
3778     def bucket_writer_closed(self, bw, consumed_size):
3779         if self.stats_provider:
3780hunk ./src/allmydata/storage/server.py 270
3781             self.stats_provider.count('storage_server.bytes_added', consumed_size)
3782         del self._active_writers[bw]
3783 
3784-    def _get_bucket_shares(self, storage_index):
3785-        """Return a list of (shnum, pathname) tuples for files that hold
3786-        shares for this storage_index. In each tuple, 'shnum' will always be
3787-        the integer form of the last component of 'pathname'."""
3788-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
3789-        try:
3790-            for f in os.listdir(storagedir):
3791-                if NUM_RE.match(f):
3792-                    filename = os.path.join(storagedir, f)
3793-                    yield (int(f), filename)
3794-        except OSError:
3795-            # Commonly caused by there being no buckets at all.
3796-            pass
3797-
3798-    def remote_get_buckets(self, storage_index):
3799+    def remote_get_buckets(self, storageindex):
3800         start = time.time()
3801         self.count("get")
3802hunk ./src/allmydata/storage/server.py 273
3803-        si_s = si_b2a(storage_index)
3804+        si_s = si_b2a(storageindex)
3805         log.msg("storage: get_buckets %s" % si_s)
3806         bucketreaders = {} # k: sharenum, v: BucketReader
3807hunk ./src/allmydata/storage/server.py 276
3808-        for shnum, filename in self._get_bucket_shares(storage_index):
3809-            bucketreaders[shnum] = BucketReader(self, filename,
3810-                                                storage_index, shnum)
3811-        self.add_latency("get", time.time() - start)
3812-        return bucketreaders
3813 
3814hunk ./src/allmydata/storage/server.py 277
3815-    def get_leases(self, storage_index):
3816-        """Provide an iterator that yields all of the leases attached to this
3817-        bucket. Each lease is returned as a LeaseInfo instance.
3818+        try:
3819+            shareset = self.backend.get_shareset(storageindex)
3820+            for share in shareset.get_shares():
3821+                bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(self, share)
3822+            return bucketreaders
3823+        finally:
3824+            self.add_latency("get", time.time() - start)
3825 
3826hunk ./src/allmydata/storage/server.py 285
3827-        This method is not for client use.
3828+    def get_leases(self, storageindex):
3829         """
3830hunk ./src/allmydata/storage/server.py 287
3831+        Provide an iterator that yields all of the leases attached to this
3832+        bucket. Each lease is returned as a LeaseInfo instance.
3833 
3834hunk ./src/allmydata/storage/server.py 290
3835-        # since all shares get the same lease data, we just grab the leases
3836-        # from the first share
3837-        try:
3838-            shnum, filename = self._get_bucket_shares(storage_index).next()
3839-            sf = ShareFile(filename)
3840-            return sf.get_leases()
3841-        except StopIteration:
3842-            return iter([])
3843+        This method is not for client use. XXX do we need it at all?
3844+        """
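+        # Note: like the previous implementation, which took the leases
+        # from the first share file, this assumes that all shares of a
+        # shareset carry the same lease data.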
3845+        return self.backend.get_shareset(storageindex).get_leases()
3846 
3847hunk ./src/allmydata/storage/server.py 294
3848-    def remote_slot_testv_and_readv_and_writev(self, storage_index,
3849+    def remote_slot_testv_and_readv_and_writev(self, storageindex,
3850                                                secrets,
3851                                                test_and_write_vectors,
3852                                                read_vector):
3853hunk ./src/allmydata/storage/server.py 300
3854         start = time.time()
3855         self.count("writev")
3856-        si_s = si_b2a(storage_index)
3857+        si_s = si_b2a(storageindex)
3858         log.msg("storage: slot_writev %s" % si_s)
3859hunk ./src/allmydata/storage/server.py 302
3860-        si_dir = storage_index_to_dir(storage_index)
3861-        (write_enabler, renew_secret, cancel_secret) = secrets
3862-        # shares exist if there is a file for them
3863-        bucketdir = os.path.join(self.sharedir, si_dir)
3864-        shares = {}
3865-        if os.path.isdir(bucketdir):
3866-            for sharenum_s in os.listdir(bucketdir):
3867-                try:
3868-                    sharenum = int(sharenum_s)
3869-                except ValueError:
3870-                    continue
3871-                filename = os.path.join(bucketdir, sharenum_s)
3872-                msf = MutableShareFile(filename, self)
3873-                msf.check_write_enabler(write_enabler, si_s)
3874-                shares[sharenum] = msf
3875-        # write_enabler is good for all existing shares.
3876-
3877-        # Now evaluate test vectors.
3878-        testv_is_good = True
3879-        for sharenum in test_and_write_vectors:
3880-            (testv, datav, new_length) = test_and_write_vectors[sharenum]
3881-            if sharenum in shares:
3882-                if not shares[sharenum].check_testv(testv):
3883-                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
3884-                    testv_is_good = False
3885-                    break
3886-            else:
3887-                # compare the vectors against an empty share, in which all
3888-                # reads return empty strings.
3889-                if not EmptyShare().check_testv(testv):
3890-                    self.log("testv failed (empty): [%d] %r" % (sharenum,
3891-                                                                testv))
3892-                    testv_is_good = False
3893-                    break
3894 
3895hunk ./src/allmydata/storage/server.py 303
3896-        # now gather the read vectors, before we do any writes
3897-        read_data = {}
3898-        for sharenum, share in shares.items():
3899-            read_data[sharenum] = share.readv(read_vector)
3900-
3901-        ownerid = 1 # TODO
3902-        expire_time = time.time() + 31*24*60*60   # one month
3903-        lease_info = LeaseInfo(ownerid,
3904-                               renew_secret, cancel_secret,
3905-                               expire_time, self.my_nodeid)
3906-
3907-        if testv_is_good:
3908-            # now apply the write vectors
3909-            for sharenum in test_and_write_vectors:
3910-                (testv, datav, new_length) = test_and_write_vectors[sharenum]
3911-                if new_length == 0:
3912-                    if sharenum in shares:
3913-                        shares[sharenum].unlink()
3914-                else:
3915-                    if sharenum not in shares:
3916-                        # allocate a new share
3917-                        allocated_size = 2000 # arbitrary, really
3918-                        share = self._allocate_slot_share(bucketdir, secrets,
3919-                                                          sharenum,
3920-                                                          allocated_size,
3921-                                                          owner_num=0)
3922-                        shares[sharenum] = share
3923-                    shares[sharenum].writev(datav, new_length)
3924-                    # and update the lease
3925-                    shares[sharenum].add_or_renew_lease(lease_info)
3926-
3927-            if new_length == 0:
3928-                # delete empty bucket directories
3929-                if not os.listdir(bucketdir):
3930-                    os.rmdir(bucketdir)
3931-
3932-
3933-        # all done
3934-        self.add_latency("writev", time.time() - start)
3935-        return (testv_is_good, read_data)
3936-
3937-    def _allocate_slot_share(self, bucketdir, secrets, sharenum,
3938-                             allocated_size, owner_num=0):
3939-        (write_enabler, renew_secret, cancel_secret) = secrets
3940-        my_nodeid = self.my_nodeid
3941-        fileutil.make_dirs(bucketdir)
3942-        filename = os.path.join(bucketdir, "%d" % sharenum)
3943-        share = create_mutable_sharefile(filename, my_nodeid, write_enabler,
3944-                                         self)
3945-        return share
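+        # The shareset now encapsulates the write-enabler check, test-vector
+        # evaluation, reads, writes, and lease renewal that were previously
+        # done inline here.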
3946+        try:
3947+            shareset = self.backend.get_shareset(storageindex)
3948+            expiration_time = start + 31*24*60*60   # one month from now
3949+            return shareset.testv_and_readv_and_writev(self, secrets, test_and_write_vectors,
3950+                                                       read_vector, expiration_time)
3951+        finally:
3952+            self.add_latency("writev", time.time() - start)
3953 
3954hunk ./src/allmydata/storage/server.py 311
3955-    def remote_slot_readv(self, storage_index, shares, readv):
3956+    def remote_slot_readv(self, storageindex, shares, readv):
3957         start = time.time()
3958         self.count("readv")
3959hunk ./src/allmydata/storage/server.py 314
3960-        si_s = si_b2a(storage_index)
3961-        lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
3962-                     facility="tahoe.storage", level=log.OPERATIONAL)
3963-        si_dir = storage_index_to_dir(storage_index)
3964-        # shares exist if there is a file for them
3965-        bucketdir = os.path.join(self.sharedir, si_dir)
3966-        if not os.path.isdir(bucketdir):
3967+        si_s = si_b2a(storageindex)
3968+        log.msg("storage: slot_readv %s %s" % (si_s, shares),
3969+                facility="tahoe.storage", level=log.OPERATIONAL)
3970+
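+        # An empty 'shares' argument means read from all shares, as in the
+        # old 'if sharenum in shares or not shares' logic.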
3971+        try:
3972+            shareset = self.backend.get_shareset(storageindex)
3973+            return shareset.readv(self, shares, readv)
3974+        finally:
3975             self.add_latency("readv", time.time() - start)
3976hunk ./src/allmydata/storage/server.py 323
3977-            return {}
3978-        datavs = {}
3979-        for sharenum_s in os.listdir(bucketdir):
3980-            try:
3981-                sharenum = int(sharenum_s)
3982-            except ValueError:
3983-                continue
3984-            if sharenum in shares or not shares:
3985-                filename = os.path.join(bucketdir, sharenum_s)
3986-                msf = MutableShareFile(filename, self)
3987-                datavs[sharenum] = msf.readv(readv)
3988-        log.msg("returning shares %s" % (datavs.keys(),),
3989-                facility="tahoe.storage", level=log.NOISY, parent=lp)
3990-        self.add_latency("readv", time.time() - start)
3991-        return datavs
3992 
3993hunk ./src/allmydata/storage/server.py 324
3994-    def remote_advise_corrupt_share(self, share_type, storage_index, shnum,
3995-                                    reason):
3996-        fileutil.make_dirs(self.corruption_advisory_dir)
3997-        now = time_format.iso_utc(sep="T")
3998-        si_s = si_b2a(storage_index)
3999-        # windows can't handle colons in the filename
4000-        fn = os.path.join(self.corruption_advisory_dir,
4001-                          "%s--%s-%d" % (now, si_s, shnum)).replace(":","")
4002-        f = open(fn, "w")
4003-        f.write("report: Share Corruption\n")
4004-        f.write("type: %s\n" % share_type)
4005-        f.write("storage_index: %s\n" % si_s)
4006-        f.write("share_number: %d\n" % shnum)
4007-        f.write("\n")
4008-        f.write(reason)
4009-        f.write("\n")
4010-        f.close()
4011-        log.msg(format=("client claims corruption in (%(share_type)s) " +
4012-                        "%(si)s-%(shnum)d: %(reason)s"),
4013-                share_type=share_type, si=si_s, shnum=shnum, reason=reason,
4014-                level=log.SCARY, umid="SGx2fA")
4015-        return None
4016+    def remote_advise_corrupt_share(self, share_type, storage_index, shnum, reason):
4017+        self.backend.advise_corrupt_share(share_type, storage_index, shnum, reason)
4018hunk ./src/allmydata/storage/shares.py 1
4019-#! /usr/bin/python
4020-
4021-from allmydata.storage.mutable import MutableShareFile
4022-from allmydata.storage.immutable import ShareFile
4023-
4024-def get_share_file(filename):
4025-    f = open(filename, "rb")
4026-    prefix = f.read(32)
4027-    f.close()
4028-    if prefix == MutableShareFile.MAGIC:
4029-        return MutableShareFile(filename)
4030-    # otherwise assume it's immutable
4031-    return ShareFile(filename)
4032-
4033rmfile ./src/allmydata/storage/shares.py
4034hunk ./src/allmydata/test/common.py 20
4035 from allmydata.mutable.common import CorruptShareError
4036 from allmydata.mutable.layout import unpack_header
4037 from allmydata.mutable.publish import MutableData
4038-from allmydata.storage.mutable import MutableShareFile
4039+from allmydata.storage.backends.disk.mutable import MutableShareFile
4040 from allmydata.util import hashutil, log, fileutil, pollmixin
4041 from allmydata.util.assertutil import precondition
4042 from allmydata.util.consumer import download_to_data
4043replace ./src/allmydata/test/common.py [A-Za-z_0-9] MutableShareFile MutableDiskShare
4044hunk ./src/allmydata/test/no_network.py 25
4045 from base64 import b32encode
4046 from allmydata import uri as tahoe_uri
4047 from allmydata.client import Client
4048-from allmydata.storage.server import StorageServer, storage_index_to_dir
4049+from allmydata.storage.server import StorageServer
4050 from allmydata.util import fileutil, idlib, hashutil
4051 from allmydata.util.hashutil import sha1
4052 from allmydata.test.common_web import HTTPClientGETFactory
4053hunk ./src/allmydata/test/no_network.py 152
4054             seed = server.get_permutation_seed()
4055             return sha1(peer_selection_index + seed).digest()
4056         return sorted(self.get_connected_servers(), key=_permuted)
4057+
4058     def get_connected_servers(self):
4059         return self.client._servers
4060hunk ./src/allmydata/test/no_network.py 155
4061+
4062     def get_nickname_for_serverid(self, serverid):
4063         return None
4064 
4065hunk ./src/allmydata/test/no_network.py 159
4066+    def get_known_servers(self):
4067+        return self.get_connected_servers()
4068+
4069+    def get_all_serverids(self):
4070+        return self.client.get_all_serverids()
4071+
4072+
4073 class NoNetworkClient(Client):
4074     def create_tub(self):
4075         pass
4076hunk ./src/allmydata/test/no_network.py 342
4077     def get_clientdir(self, i=0):
4078         return self.g.clients[i].basedir
4079 
4080+    def get_server(self, i):
4081+        return self.g.servers_by_number[i]
4082+
4083     def get_serverdir(self, i):
4084hunk ./src/allmydata/test/no_network.py 346
4085-        return self.g.servers_by_number[i].storedir
4086+        return self.g.servers_by_number[i].backend.storedir
4087 
4088     def iterate_servers(self):
4089         for i in sorted(self.g.servers_by_number.keys()):
4090hunk ./src/allmydata/test/no_network.py 351
4091             ss = self.g.servers_by_number[i]
4092-            yield (i, ss, ss.storedir)
4093+            yield (i, ss, ss.backend.storedir)
4094 
4095     def find_uri_shares(self, uri):
4096         si = tahoe_uri.from_string(uri).get_storage_index()
4097hunk ./src/allmydata/test/no_network.py 355
4098-        prefixdir = storage_index_to_dir(si)
4099         shares = []
4100         for i,ss in self.g.servers_by_number.items():
4101hunk ./src/allmydata/test/no_network.py 357
4102-            serverid = ss.my_nodeid
4103-            basedir = os.path.join(ss.sharedir, prefixdir)
4104-            if not os.path.exists(basedir):
4105-                continue
4106-            for f in os.listdir(basedir):
4107-                try:
4108-                    shnum = int(f)
4109-                    shares.append((shnum, serverid, os.path.join(basedir, f)))
4110-                except ValueError:
4111-                    pass
4112+            for share in ss.backend.get_shareset(si).get_shares():
4113+                shares.append((share.get_shnum(), ss.get_serverid(), share._home))
4114         return sorted(shares)
4115 
4116hunk ./src/allmydata/test/no_network.py 361
4117+    def count_leases(self, uri):
4118+        """Return (filename, leasecount) pairs in arbitrary order."""
4119+        si = tahoe_uri.from_string(uri).get_storage_index()
4120+        lease_counts = []
4121+        for i,ss in self.g.servers_by_number.items():
4122+            for share in ss.backend.get_shareset(si).get_shares():
4123+                num_leases = len(list(share.get_leases()))
4124+                lease_counts.append((share._home.path, num_leases))
4125+        return lease_counts
4126+
4127     def copy_shares(self, uri):
4128         shares = {}
4129hunk ./src/allmydata/test/no_network.py 373
4130-        for (shnum, serverid, sharefile) in self.find_uri_shares(uri):
4131-            shares[sharefile] = open(sharefile, "rb").read()
4132+        for (shnum, serverid, sharefp) in self.find_uri_shares(uri):
4133+            shares[sharefp] = sharefp.getContent()
4134         return shares
4135 
4136hunk ./src/allmydata/test/no_network.py 377
4137+    def copy_share(self, from_share, uri, to_server):
4138+        si = tahoe_uri.from_string(uri).get_storage_index()
4139+        (i_shnum, i_serverid, i_sharefp) = from_share
4140+        shares_dir = to_server.backend.get_shareset(si)._sharehomedir
4141+        i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
4142+
4143     def restore_all_shares(self, shares):
4144hunk ./src/allmydata/test/no_network.py 384
4145-        for sharefile, data in shares.items():
4146-            open(sharefile, "wb").write(data)
4147+        for sharefp, data in shares.items():
4148+            sharefp.setContent(data)
4149 
4150hunk ./src/allmydata/test/no_network.py 387
4151-    def delete_share(self, (shnum, serverid, sharefile)):
4152-        os.unlink(sharefile)
4153+    def delete_share(self, (shnum, serverid, sharefp)):
4154+        sharefp.remove()
4155 
4156     def delete_shares_numbered(self, uri, shnums):
4157hunk ./src/allmydata/test/no_network.py 391
4158-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
4159+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
4160             if i_shnum in shnums:
4161hunk ./src/allmydata/test/no_network.py 393
4162-                os.unlink(i_sharefile)
4163+                i_sharefp.remove()
4164 
4165hunk ./src/allmydata/test/no_network.py 395
4166-    def corrupt_share(self, (shnum, serverid, sharefile), corruptor_function):
4167-        sharedata = open(sharefile, "rb").read()
4168-        corruptdata = corruptor_function(sharedata)
4169-        open(sharefile, "wb").write(corruptdata)
4170+    def corrupt_share(self, (shnum, serverid, sharefp), corruptor_function, debug=False):
4171+        sharedata = sharefp.getContent()
4172+        corruptdata = corruptor_function(sharedata, debug=debug)
4173+        sharefp.setContent(corruptdata)
4174 
4175     def corrupt_shares_numbered(self, uri, shnums, corruptor, debug=False):
4176hunk ./src/allmydata/test/no_network.py 401
4177-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
4178+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
4179             if i_shnum in shnums:
4180hunk ./src/allmydata/test/no_network.py 403
4181-                sharedata = open(i_sharefile, "rb").read()
4182-                corruptdata = corruptor(sharedata, debug=debug)
4183-                open(i_sharefile, "wb").write(corruptdata)
4184+                self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug)
4185 
4186     def corrupt_all_shares(self, uri, corruptor, debug=False):
4187hunk ./src/allmydata/test/no_network.py 406
4188-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
4189-            sharedata = open(i_sharefile, "rb").read()
4190-            corruptdata = corruptor(sharedata, debug=debug)
4191-            open(i_sharefile, "wb").write(corruptdata)
4192+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
4193+            self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug)
4194 
4195     def GET(self, urlpath, followRedirect=False, return_response=False,
4196             method="GET", clientnum=0, **kwargs):
4197addfile ./src/allmydata/test/test_backends.py
4198hunk ./src/allmydata/test/test_backends.py 1
4199+import os, stat
4200+from twisted.trial import unittest
4201+from allmydata.util.log import msg
4202+from allmydata.test.common_util import ReallyEqualMixin
4203+import mock
4204+
4205+# This is the code that we're going to be testing.
4206+from allmydata.storage.server import StorageServer
4207+from allmydata.storage.backends.disk.disk_backend import DiskBackend, si_si2dir
4208+from allmydata.storage.backends.null.null_backend import NullBackend
4209+
4210+# The following share file content was generated with
4211+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
4212+# with share data == 'a'. The total size of this input
4213+# is 85 bytes.
4214+shareversionnumber = '\x00\x00\x00\x01'
4215+sharedatalength = '\x00\x00\x00\x01'
4216+numberofleases = '\x00\x00\x00\x01'
4217+shareinputdata = 'a'
4218+ownernumber = '\x00\x00\x00\x00'
4219+renewsecret  = 'x'*32
4220+cancelsecret = 'y'*32
4221+expirationtime = '\x00(\xde\x80'
4222+nextlease = ''
4223+containerdata = shareversionnumber + sharedatalength + numberofleases
4224+client_data = shareinputdata + ownernumber + renewsecret + \
4225+    cancelsecret + expirationtime + nextlease
4226+share_data = containerdata + client_data
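+# For reference, the v1 immutable share layout is: a 12-byte container
+# header (4-byte version, 4-byte share data length, 4-byte lease count),
+# the share data itself, and one 72-byte lease record (4-byte owner
+# number, 32-byte renew secret, 32-byte cancel secret, 4-byte expiration
+# time); 12 + 1 + 72 = 85 bytes, matching the input size above.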
4227+testnodeid = 'testnodeidxxxxxxxxxx'
4228+
4229+
4230+class MockFileSystem(unittest.TestCase):
4231+    """ I simulate a filesystem that the code under test can use. I simulate
4232+    just the parts of the filesystem that the current implementation of Disk
4233+    backend needs. """
4234+    def setUp(self):
4235+        # Make patcher, patch, and effects for disk-using functions.
4236+        msg("%s.setUp()" % (self,))
4237+        self.mockedfilepaths = {}
4238+        # keys are pathnames, values are MockFilePath objects. This is necessary because
4239+        # MockFilePath behavior sometimes depends on the filesystem. Where it does,
4240+        # self.mockedfilepaths has the relevant information.
4241+        self.storedir = MockFilePath('teststoredir', self.mockedfilepaths)
4242+        self.basedir = self.storedir.child('shares')
4243+        self.baseincdir = self.basedir.child('incoming')
4244+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4245+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4246+        self.shareincomingname = self.sharedirincomingname.child('0')
4247+        self.sharefinalname = self.sharedirfinalname.child('0')
4248+
4249+        self.FilePathFake = mock.patch('allmydata.storage.backends.disk.core.FilePath', new=MockFilePath)
4250+        self.FilePathFake.__enter__()
4251+
4252+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.disk.core.BucketCountingCrawler')
4253+        FakeBCC = self.BCountingCrawler.__enter__()
4254+        FakeBCC.side_effect = self.call_FakeBCC
4255+
4256+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.disk.core.LeaseCheckingCrawler')
4257+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
4258+        FakeLCC.side_effect = self.call_FakeLCC
4259+
4260+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
4261+        GetSpace = self.get_available_space.__enter__()
4262+        GetSpace.side_effect = self.call_get_available_space
4263+
4264+        self.statforsize = mock.patch('allmydata.storage.backends.disk.core.filepath.stat')
4265+        getsize = self.statforsize.__enter__()
4266+        getsize.side_effect = self.call_statforsize
4267+
4268+    def call_FakeBCC(self, StateFile):
4269+        return MockBCC()
4270+
4271+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
4272+        return MockLCC()
4273+
4274+    def call_get_available_space(self, storedir, reservedspace):
4275+        # The input vector has an input size of 85.
4276+        return 85 - reservedspace
4277+
4278+    def call_statforsize(self, fakefpname):
4279+        return self.mockedfilepaths[fakefpname].fileobject.size()
4280+
4281+    def tearDown(self):
4282+        msg("%s.tearDown()" % (self,))
4283+        self.FilePathFake.__exit__()
4284+        self.mockedfilepaths = {}
4285+
4286+
4287+class MockFilePath:
4288+    def __init__(self, pathstring, ffpathsenvironment, existence=False):
4289+        #  I can't just make the values MockFileObjects because they may be directories.
4290+        self.mockedfilepaths = ffpathsenvironment
4291+        self.path = pathstring
4292+        self.existence = existence
4293+        if self.path not in self.mockedfilepaths:
4294+            #  The first MockFilePath object is special
4295+            self.mockedfilepaths[self.path] = self
4296+            self.fileobject = None
4297+        else:
4298+            self.fileobject = self.mockedfilepaths[self.path].fileobject
4299+        self.spawn = {}
4300+        self.antecedent = os.path.dirname(self.path)
4301+
4302+    def setContent(self, contentstring):
4303+        # This method rewrites the data in the file that corresponds to its path
4304+        # name whether it preexisted or not.
4305+        self.fileobject = MockFileObject(contentstring)
4306+        self.existence = True
4307+        self.mockedfilepaths[self.path].fileobject = self.fileobject
4308+        self.mockedfilepaths[self.path].existence = self.existence
4309+        self.setparents()
4310+
4311+    def create(self):
4312+        # This method chokes if there's a pre-existing file!
4313+        if self.mockedfilepaths[self.path].fileobject:
4314+            raise OSError
4315+        else:
4316+            self.existence = True
4317+            self.mockedfilepaths[self.path].fileobject = self.fileobject
4318+            self.mockedfilepaths[self.path].existence = self.existence
4319+            self.setparents()
4320+
4321+    def open(self, mode='r'):
4322+        # XXX Makes no use of mode.
4323+        if not self.mockedfilepaths[self.path].fileobject:
4324+            # If there's no fileobject there already then make one and put it there.
4325+            self.fileobject = MockFileObject()
4326+            self.existence = True
4327+            self.mockedfilepaths[self.path].fileobject = self.fileobject
4328+            self.mockedfilepaths[self.path].existence = self.existence
4329+        else:
4330+            # Otherwise get a ref to it.
4331+            self.fileobject = self.mockedfilepaths[self.path].fileobject
4332+            self.existence = self.mockedfilepaths[self.path].existence
4333+        return self.fileobject.open(mode)
4334+
4335+    def child(self, childstring):
4336+        arg2child = os.path.join(self.path, childstring)
4337+        child = MockFilePath(arg2child, self.mockedfilepaths)
4338+        return child
4339+
4340+    def children(self):
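+        # Children are all mocked paths that extend self.path (by string
+        # prefix, so this includes all descendants, not just immediate
+        # children) and exist, excluding self.path itself.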
4341+        childrenfromffs = [ffp for ffp in self.mockedfilepaths.values() if ffp.path.startswith(self.path)]
4342+        childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)]
4343+        childrenfromffs = [ffp for ffp in childrenfromffs if ffp.exists()]
4344+        self.spawn = frozenset(childrenfromffs)
4345+        return self.spawn
4346+
4347+    def parent(self):
4348+        if self.antecedent in self.mockedfilepaths:
4349+            parent = self.mockedfilepaths[self.antecedent]
4350+        else:
4351+            parent = MockFilePath(self.antecedent, self.mockedfilepaths)
4352+        return parent
4353+
4354+    def parents(self):
4355+        antecedents = []
4356+        def f(fps, antecedents):
4357+            newfps = os.path.split(fps)[0]
4358+            if newfps:
4359+                antecedents.append(newfps)
4360+                f(newfps, antecedents)
4361+        f(self.path, antecedents)
4362+        return antecedents
4363+
4364+    def setparents(self):
4365+        for fps in self.parents():
4366+            if fps not in self.mockedfilepaths:
4367+                self.mockedfilepaths[fps] = MockFilePath(fps, self.mockedfilepaths, existence=True)
4368+
4369+    def basename(self):
4370+        return os.path.split(self.path)[1]
4371+
4372+    def moveTo(self, newffp):
4373+        #  XXX Makes no distinction between file and directory arguments; this is a deviation from FilePath.moveTo.
4374+        if self.mockedfilepaths[newffp.path].exists():
4375+            raise OSError
4376+        else:
4377+            self.mockedfilepaths[newffp.path] = self
4378+            self.path = newffp.path
4379+
4380+    def getsize(self):
4381+        return self.fileobject.getsize()
4382+
4383+    def exists(self):
4384+        return self.existence
4385+
4386+    def isdir(self):
4387+        return True
4388+
4389+    def makedirs(self):
4390+        # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
4391+        pass
4392+
4393+    def remove(self):
4394+        pass
4395+
4396+
4397+class MockFileObject:
4398+    def __init__(self, contentstring=''):
4399+        self.buffer = contentstring
4400+        self.pos = 0
4401+    def open(self, mode='r'):
4402+        return self
4403+    def write(self, instring):
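+        # Emulate file.write() at the current position, zero-padding any
+        # gap between the end of the buffer and the current position.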
4404+        begin = self.pos
4405+        padlen = begin - len(self.buffer)
4406+        if padlen > 0:
4407+            self.buffer += '\x00' * padlen
4408+        end = self.pos + len(instring)
4409+        self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
4410+        self.pos = end
4411+    def close(self):
4412+        self.pos = 0
4413+    def seek(self, pos):
4414+        self.pos = pos
4415+    def read(self, numberbytes):
4416+        return self.buffer[self.pos:self.pos+numberbytes]
4417+    def tell(self):
4418+        return self.pos
4419+    def size(self):
4420+        # XXX This method does not exist on a real file object; it is part of our
4421+        # rough mock of filepath.stat. We should move to a getsize method soon,
4422+        # and perhaps stat the path when there is no MockFileObject present?
4423+        return {stat.ST_SIZE:len(self.buffer)}
4424+    def getsize(self):
4425+        return len(self.buffer)
4426+
4427+class MockBCC:
4428+    def setServiceParent(self, Parent):
4429+        pass
4430+
4431+
4432+class MockLCC:
4433+    def setServiceParent(self, Parent):
4434+        pass
4435+
4436+
4437+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
4438+    """ NullBackend is just for testing and executable documentation, so
4439+    this test is actually a test of StorageServer in which we're using
4440+    NullBackend as helper code for the test, rather than a test of
4441+    NullBackend. """
4442+    def setUp(self):
4443+        self.ss = StorageServer(testnodeid, NullBackend())
4444+
4445+    @mock.patch('os.mkdir')
4446+    @mock.patch('__builtin__.open')
4447+    @mock.patch('os.listdir')
4448+    @mock.patch('os.path.isdir')
4449+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
4450+        """
4451+        Write a new share. This tests that StorageServer's remote_allocate_buckets
4452+        generates the correct return types when given test-vector arguments. That
4453+        bs is of the correct type is verified by attempting to invoke remote_write
4454+        on bs[0].
4455+        """
4456+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4457+        bs[0].remote_write(0, 'a')
4458+        self.failIf(mockisdir.called)
4459+        self.failIf(mocklistdir.called)
4460+        self.failIf(mockopen.called)
4461+        self.failIf(mockmkdir.called)
4462+
4463+
4464+class TestServerConstruction(MockFileSystem, ReallyEqualMixin):
4465+    def test_create_server_disk_backend(self):
4466+        """ This tests whether a server instance can be constructed with a
4467+        filesystem backend. To pass the test, it mustn't use the filesystem
4468+        outside of its configured storedir. """
4469+        StorageServer(testnodeid, DiskBackend(self.storedir))
4470+
4471+
4472+class TestServerAndDiskBackend(MockFileSystem, ReallyEqualMixin):
4473+    """ This tests both the StorageServer and the Disk backend together. """
4474+    def setUp(self):
4475+        MockFileSystem.setUp(self)
4476+        try:
4477+            self.backend = DiskBackend(self.storedir)
4478+            self.ss = StorageServer(testnodeid, self.backend)
4479+
4480+            self.backendwithreserve = DiskBackend(self.storedir, reserved_space = 1)
4481+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
4482+        except:
4483+            MockFileSystem.tearDown(self)
4484+            raise
4485+
4486+    @mock.patch('time.time')
4487+    @mock.patch('allmydata.util.fileutil.get_available_space')
4488+    def test_out_of_space(self, mockget_available_space, mocktime):
4489+        mocktime.return_value = 0
4490+
4491+        def call_get_available_space(dir, reserve):
4492+            return 0
4493+
4494+        mockget_available_space.side_effect = call_get_available_space
4495+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4496+        self.failUnlessReallyEqual(bsc, {})
4497+
4498+    @mock.patch('time.time')
4499+    def test_write_and_read_share(self, mocktime):
4500+        """
4501+        Write a new share, read it, and test the server's (and disk backend's)
4502+        handling of simultaneous and successive attempts to write the same
4503+        share.
4504+        """
4505+        mocktime.return_value = 0
4506+        # Inspect incoming and fail unless it's empty.
4507+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
4508+
4509+        self.failUnlessReallyEqual(incomingset, frozenset())
4510+
4511+        # Populate incoming with the sharenum: 0.
4512+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
4513+
4514+        # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
4515+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
4516+
4519+        # Attempt to create a second share writer with the same sharenum.
4520+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
4521+
4522+        # Show that no sharewriter results from a remote_allocate_buckets
4523+        # with the same si and sharenum, until BucketWriter.remote_close()
4524+        # has been called.
4525+        self.failIf(bsa)
4526+
4527+        # Test allocated size.
4528+        spaceint = self.ss.allocated_size()
4529+        self.failUnlessReallyEqual(spaceint, 1)
4530+
4531+        # Write 'a' to shnum 0. Only tested together with close and read.
4532+        bs[0].remote_write(0, 'a')
4533+
4534+        # Preclose: Inspect final, failUnless nothing there.
4535+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4536+        bs[0].remote_close()
4537+
4538+        # Postclose: (Omnibus) failUnless written data is in final.
4539+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4540+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
4541+        contents = sharesinfinal[0].read_share_data(0, 73)
4542+        self.failUnlessReallyEqual(contents, client_data)
4543+
4544+        # Exercise the case that the share we're asking to allocate is
4545+        # already (completely) uploaded.
4546+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4547+
4548+
4549+    def test_read_old_share(self):
4550+        """ This tests whether the code correctly finds and reads
4551+        shares written out by old (Tahoe-LAFS <= v1.8.2)
4552+        servers. There is a similar test in test_download, but that one
4553+        is from the perspective of the client and exercises a deeper
4554+        stack of code. This one is for exercising just the
4555+        StorageServer object. """
4556+        # Construct a file with the appropriate contents in the mock filesystem.
4557+        datalen = len(share_data)
4558+        finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(0))
4559+        finalhome.setContent(share_data)
4560+
4561+        # Now begin the test.
4562+        bs = self.ss.remote_get_buckets('teststorage_index')
4563+
4564+        self.failUnlessEqual(len(bs), 1)
4565+        b = bs[0]
4566+        # These should match by definition; the next two cases cover reads whose behavior is not completely unambiguous.
4567+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4568+        # If you try to read past the end you get as much data as is there.
4569+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
4570+        # If you start reading past the end of the file you get the empty string.
4571+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4572hunk ./src/allmydata/test/test_download.py 6
4573 # a previous run. This asserts that the current code is capable of decoding
4574 # shares from a previous version.
4575 
4576-import os
4577 from twisted.trial import unittest
4578 from twisted.internet import defer, reactor
4579 from allmydata import uri
4580hunk ./src/allmydata/test/test_download.py 9
4581-from allmydata.storage.server import storage_index_to_dir
4582 from allmydata.util import base32, fileutil, spans, log, hashutil
4583 from allmydata.util.consumer import download_to_data, MemoryConsumer
4584 from allmydata.immutable import upload, layout
4585hunk ./src/allmydata/test/test_download.py 85
4586         u = upload.Data(plaintext, None)
4587         d = self.c0.upload(u)
4588         f = open("stored_shares.py", "w")
4589-        def _created_immutable(ur):
4590-            # write the generated shares and URI to a file, which can then be
4591-            # incorporated into this one next time.
4592-            f.write('immutable_uri = "%s"\n' % ur.uri)
4593-            f.write('immutable_shares = {\n')
4594-            si = uri.from_string(ur.uri).get_storage_index()
4595-            si_dir = storage_index_to_dir(si)
4596+
4597+        def _write_py(uri_s):
4598+            si = uri.from_string(uri_s).get_storage_index()
4599             for (i,ss,ssdir) in self.iterate_servers():
4600hunk ./src/allmydata/test/test_download.py 89
4601-                sharedir = os.path.join(ssdir, "shares", si_dir)
4602                 shares = {}
4603hunk ./src/allmydata/test/test_download.py 90
4604-                for fn in os.listdir(sharedir):
4605-                    shnum = int(fn)
4606-                    sharedata = open(os.path.join(sharedir, fn), "rb").read()
4607-                    shares[shnum] = sharedata
4608-                fileutil.rm_dir(sharedir)
4609+                shareset = ss.backend.get_shareset(si)
4610+                for share in shareset.get_shares():
4611+                    sharedata = share._home.getContent()
4612+                    shares[share.get_shnum()] = sharedata
4613+
4614+                fileutil.fp_remove(shareset._sharehomedir)
4615                 if shares:
4616                     f.write(' %d: { # client[%d]\n' % (i, i))
4617                     for shnum in sorted(shares.keys()):
4618hunk ./src/allmydata/test/test_download.py 103
4619                                 (shnum, base32.b2a(shares[shnum])))
4620                     f.write('    },\n')
4621             f.write('}\n')
4622-            f.write('\n')
4623 
4624hunk ./src/allmydata/test/test_download.py 104
4625+        def _created_immutable(ur):
4626+            # write the generated shares and URI to a file, which can then be
4627+            # incorporated into this one next time.
4628+            f.write('immutable_uri = "%s"\n' % ur.uri)
4629+            f.write('immutable_shares = {\n')
4630+            _write_py(ur.uri)
4631+            f.write('\n')
4632         d.addCallback(_created_immutable)
4633 
4634         d.addCallback(lambda ignored:
4635hunk ./src/allmydata/test/test_download.py 118
4636         def _created_mutable(n):
4637             f.write('mutable_uri = "%s"\n' % n.get_uri())
4638             f.write('mutable_shares = {\n')
4639-            si = uri.from_string(n.get_uri()).get_storage_index()
4640-            si_dir = storage_index_to_dir(si)
4641-            for (i,ss,ssdir) in self.iterate_servers():
4642-                sharedir = os.path.join(ssdir, "shares", si_dir)
4643-                shares = {}
4644-                for fn in os.listdir(sharedir):
4645-                    shnum = int(fn)
4646-                    sharedata = open(os.path.join(sharedir, fn), "rb").read()
4647-                    shares[shnum] = sharedata
4648-                fileutil.rm_dir(sharedir)
4649-                if shares:
4650-                    f.write(' %d: { # client[%d]\n' % (i, i))
4651-                    for shnum in sorted(shares.keys()):
4652-                        f.write('  %d: base32.a2b("%s"),\n' %
4653-                                (shnum, base32.b2a(shares[shnum])))
4654-                    f.write('    },\n')
4655-            f.write('}\n')
4656-
4657-            f.close()
4658+            _write_py(n.get_uri())
4659         d.addCallback(_created_mutable)
4660 
4661         def _done(ignored):
4662hunk ./src/allmydata/test/test_download.py 123
4663             f.close()
4664-        d.addCallback(_done)
4665+        d.addBoth(_done)
4666 
4667         return d
4668 
4669hunk ./src/allmydata/test/test_download.py 127
4670+    def _write_shares(self, uri_s, shares):
4671+        si = uri.from_string(uri_s).get_storage_index()
4672+        for i in shares:
4673+            shares_for_server = shares[i]
4674+            for shnum in shares_for_server:
4675+                share_dir = self.get_server(i).backend.get_shareset(si)._sharehomedir
4676+                fileutil.fp_make_dirs(share_dir)
4677+                share_dir.child(str(shnum)).setContent(shares_for_server[shnum])
4678+
4679     def load_shares(self, ignored=None):
4680         # this uses the data generated by create_shares() to populate the
4681         # storage servers with pre-generated shares
4682hunk ./src/allmydata/test/test_download.py 139
4683-        si = uri.from_string(immutable_uri).get_storage_index()
4684-        si_dir = storage_index_to_dir(si)
4685-        for i in immutable_shares:
4686-            shares = immutable_shares[i]
4687-            for shnum in shares:
4688-                dn = os.path.join(self.get_serverdir(i), "shares", si_dir)
4689-                fileutil.make_dirs(dn)
4690-                fn = os.path.join(dn, str(shnum))
4691-                f = open(fn, "wb")
4692-                f.write(shares[shnum])
4693-                f.close()
4694-
4695-        si = uri.from_string(mutable_uri).get_storage_index()
4696-        si_dir = storage_index_to_dir(si)
4697-        for i in mutable_shares:
4698-            shares = mutable_shares[i]
4699-            for shnum in shares:
4700-                dn = os.path.join(self.get_serverdir(i), "shares", si_dir)
4701-                fileutil.make_dirs(dn)
4702-                fn = os.path.join(dn, str(shnum))
4703-                f = open(fn, "wb")
4704-                f.write(shares[shnum])
4705-                f.close()
4706+        self._write_shares(immutable_uri, immutable_shares)
4707+        self._write_shares(mutable_uri, mutable_shares)
4708 
4709     def download_immutable(self, ignored=None):
4710         n = self.c0.create_node_from_uri(immutable_uri)
4711hunk ./src/allmydata/test/test_download.py 183
4712 
4713         self.load_shares()
4714         si = uri.from_string(immutable_uri).get_storage_index()
4715-        si_dir = storage_index_to_dir(si)
4716 
4717         n = self.c0.create_node_from_uri(immutable_uri)
4718         d = download_to_data(n)
4719hunk ./src/allmydata/test/test_download.py 198
4720                 for clientnum in immutable_shares:
4721                     for shnum in immutable_shares[clientnum]:
4722                         if s._shnum == shnum:
4723-                            fn = os.path.join(self.get_serverdir(clientnum),
4724-                                              "shares", si_dir, str(shnum))
4725-                            os.unlink(fn)
4726+                            share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
4727+                            share_dir.child(str(shnum)).remove()
4728         d.addCallback(_clobber_some_shares)
4729         d.addCallback(lambda ign: download_to_data(n))
4730         d.addCallback(_got_data)
4731hunk ./src/allmydata/test/test_download.py 212
4732                 for shnum in immutable_shares[clientnum]:
4733                     if shnum == save_me:
4734                         continue
4735-                    fn = os.path.join(self.get_serverdir(clientnum),
4736-                                      "shares", si_dir, str(shnum))
4737-                    if os.path.exists(fn):
4738-                        os.unlink(fn)
4739+                    share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
4740+                    fileutil.fp_remove(share_dir.child(str(shnum)))
4741             # now the download should fail with NotEnoughSharesError
4742             return self.shouldFail(NotEnoughSharesError, "1shares", None,
4743                                    download_to_data, n)
4744hunk ./src/allmydata/test/test_download.py 223
4745             # delete the last remaining share
4746             for clientnum in immutable_shares:
4747                 for shnum in immutable_shares[clientnum]:
4748-                    fn = os.path.join(self.get_serverdir(clientnum),
4749-                                      "shares", si_dir, str(shnum))
4750-                    if os.path.exists(fn):
4751-                        os.unlink(fn)
4752+                    share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
4753+                    share_dir.child(str(shnum)).remove()
4754             # now a new download should fail with NoSharesError. We want a
4755             # new ImmutableFileNode so it will forget about the old shares.
4756             # If we merely called create_node_from_uri() without first
4757hunk ./src/allmydata/test/test_download.py 801
4758         # will report two shares, and the ShareFinder will handle the
4759         # duplicate by attaching both to the same CommonShare instance.
4760         si = uri.from_string(immutable_uri).get_storage_index()
4761-        si_dir = storage_index_to_dir(si)
4762-        sh0_file = [sharefile
4763-                    for (shnum, serverid, sharefile)
4764-                    in self.find_uri_shares(immutable_uri)
4765-                    if shnum == 0][0]
4766-        sh0_data = open(sh0_file, "rb").read()
4767+        sh0_fp = [sharefp for (shnum, serverid, sharefp)
4768+                          in self.find_uri_shares(immutable_uri)
4769+                          if shnum == 0][0]
4770+        sh0_data = sh0_fp.getContent()
4771         for clientnum in immutable_shares:
4772             if 0 in immutable_shares[clientnum]:
4773                 continue
4774hunk ./src/allmydata/test/test_download.py 808
4775-            cdir = self.get_serverdir(clientnum)
4776-            target = os.path.join(cdir, "shares", si_dir, "0")
4777-            outf = open(target, "wb")
4778-            outf.write(sh0_data)
4779-            outf.close()
4780+            cdir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
4781+            fileutil.fp_make_dirs(cdir)
4782+            cdir.child("0").setContent(sh0_data)
4783 
4784         d = self.download_immutable()
4785         return d
4786hunk ./src/allmydata/test/test_encode.py 134
4787         d.addCallback(_try)
4788         return d
4789 
4790-    def get_share_hashes(self, at_least_these=()):
4791+    def get_share_hashes(self):
4792         d = self._start()
4793         def _try(unused=None):
4794             if self.mode == "bad sharehash":
4795hunk ./src/allmydata/test/test_hung_server.py 3
4796 # -*- coding: utf-8 -*-
4797 
4798-import os, shutil
4799 from twisted.trial import unittest
4800 from twisted.internet import defer
4801hunk ./src/allmydata/test/test_hung_server.py 5
4802-from allmydata import uri
4803+
4804 from allmydata.util.consumer import download_to_data
4805 from allmydata.immutable import upload
4806 from allmydata.mutable.common import UnrecoverableFileError
4807hunk ./src/allmydata/test/test_hung_server.py 10
4808 from allmydata.mutable.publish import MutableData
4809-from allmydata.storage.common import storage_index_to_dir
4810 from allmydata.test.no_network import GridTestMixin
4811 from allmydata.test.common import ShouldFailMixin
4812 from allmydata.util.pollmixin import PollMixin
4813hunk ./src/allmydata/test/test_hung_server.py 18
4814 immutable_plaintext = "data" * 10000
4815 mutable_plaintext = "muta" * 10000
4816 
4817+
4818 class HungServerDownloadTest(GridTestMixin, ShouldFailMixin, PollMixin,
4819                              unittest.TestCase):
4820     # Many of these tests take around 60 seconds on François's ARM buildslave:
4821hunk ./src/allmydata/test/test_hung_server.py 31
4822     timeout = 240
4823 
4824     def _break(self, servers):
4825-        for (id, ss) in servers:
4826-            self.g.break_server(id)
4827+        for ss in servers:
4828+            self.g.break_server(ss.get_serverid())
4829 
4830     def _hang(self, servers, **kwargs):
4831hunk ./src/allmydata/test/test_hung_server.py 35
4832-        for (id, ss) in servers:
4833-            self.g.hang_server(id, **kwargs)
4834+        for ss in servers:
4835+            self.g.hang_server(ss.get_serverid(), **kwargs)
4836 
4837     def _unhang(self, servers, **kwargs):
4838hunk ./src/allmydata/test/test_hung_server.py 39
4839-        for (id, ss) in servers:
4840-            self.g.unhang_server(id, **kwargs)
4841+        for ss in servers:
4842+            self.g.unhang_server(ss.get_serverid(), **kwargs)
4843 
4844     def _hang_shares(self, shnums, **kwargs):
4845         # hang all servers who are holding the given shares
4846hunk ./src/allmydata/test/test_hung_server.py 52
4847                     hung_serverids.add(i_serverid)
4848 
4849     def _delete_all_shares_from(self, servers):
4850-        serverids = [id for (id, ss) in servers]
4851-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
4852+        serverids = [ss.get_serverid() for ss in servers]
4853+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
4854             if i_serverid in serverids:
4855hunk ./src/allmydata/test/test_hung_server.py 55
4856-                os.unlink(i_sharefile)
4857+                i_sharefp.remove()
4858 
4859     def _corrupt_all_shares_in(self, servers, corruptor_func):
4860hunk ./src/allmydata/test/test_hung_server.py 58
4861-        serverids = [id for (id, ss) in servers]
4862-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
4863+        serverids = [ss.get_serverid() for ss in servers]
4864+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
4865             if i_serverid in serverids:
4866hunk ./src/allmydata/test/test_hung_server.py 61
4867-                self._corrupt_share((i_shnum, i_sharefile), corruptor_func)
4868+                self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor_func)
4869 
4870     def _copy_all_shares_from(self, from_servers, to_server):
4871hunk ./src/allmydata/test/test_hung_server.py 64
4872-        serverids = [id for (id, ss) in from_servers]
4873-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
4874+        serverids = [ss.get_serverid() for ss in from_servers]
4875+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
4876             if i_serverid in serverids:
4877hunk ./src/allmydata/test/test_hung_server.py 67
4878-                self._copy_share((i_shnum, i_sharefile), to_server)
4879+                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
4880 
4881hunk ./src/allmydata/test/test_hung_server.py 69
4882-    def _copy_share(self, share, to_server):
4883-        (sharenum, sharefile) = share
4884-        (id, ss) = to_server
4885-        shares_dir = os.path.join(ss.original.storedir, "shares")
4886-        si = uri.from_string(self.uri).get_storage_index()
4887-        si_dir = os.path.join(shares_dir, storage_index_to_dir(si))
4888-        if not os.path.exists(si_dir):
4889-            os.makedirs(si_dir)
4890-        new_sharefile = os.path.join(si_dir, str(sharenum))
4891-        shutil.copy(sharefile, new_sharefile)
4892         self.shares = self.find_uri_shares(self.uri)
4893hunk ./src/allmydata/test/test_hung_server.py 70
4894-        # Make sure that the storage server has the share.
4895-        self.failUnless((sharenum, ss.original.my_nodeid, new_sharefile)
4896-                        in self.shares)
4897-
4898-    def _corrupt_share(self, share, corruptor_func):
4899-        (sharenum, sharefile) = share
4900-        data = open(sharefile, "rb").read()
4901-        newdata = corruptor_func(data)
4902-        os.unlink(sharefile)
4903-        wf = open(sharefile, "wb")
4904-        wf.write(newdata)
4905-        wf.close()
4906 
4907     def _set_up(self, mutable, testdir, num_clients=1, num_servers=10):
4908         self.mutable = mutable
4909hunk ./src/allmydata/test/test_hung_server.py 82
4910 
4911         self.c0 = self.g.clients[0]
4912         nm = self.c0.nodemaker
4913-        self.servers = sorted([(s.get_serverid(), s.get_rref())
4914-                               for s in nm.storage_broker.get_connected_servers()])
4915+        unsorted = [(s.get_serverid(), s.get_rref()) for s in nm.storage_broker.get_connected_servers()]
4916+        self.servers = [ss for (id, ss) in sorted(unsorted)]
4917         self.servers = self.servers[5:] + self.servers[:5]
4918 
4919         if mutable:
4920hunk ./src/allmydata/test/test_hung_server.py 244
4921             # stuck-but-not-overdue, and 4 live requests. All 4 live requests
4922             # will retire before the download is complete and the ShareFinder
4923             # is shut off. That will leave 4 OVERDUE and 1
4924-            # stuck-but-not-overdue, for a total of 5 requests in in
4925+            # stuck-but-not-overdue, for a total of 5 requests in
4926             # _sf.pending_requests
4927             for t in self._sf.overdue_timers.values()[:4]:
4928                 t.reset(-1.0)
4929hunk ./src/allmydata/test/test_mutable.py 21
4930 from foolscap.api import eventually, fireEventually
4931 from foolscap.logging import log
4932 from allmydata.storage_client import StorageFarmBroker
4933-from allmydata.storage.common import storage_index_to_dir
4934 from allmydata.scripts import debug
4935 
4936 from allmydata.mutable.filenode import MutableFileNode, BackoffAgent
4937hunk ./src/allmydata/test/test_mutable.py 3662
4938         # Now execute each assignment by writing the storage.
4939         for (share, servernum) in assignments:
4940             sharedata = base64.b64decode(self.sdmf_old_shares[share])
4941-            storedir = self.get_serverdir(servernum)
4942-            storage_path = os.path.join(storedir, "shares",
4943-                                        storage_index_to_dir(si))
4944-            fileutil.make_dirs(storage_path)
4945-            fileutil.write(os.path.join(storage_path, "%d" % share),
4946-                           sharedata)
4947+            storage_dir = self.get_server(servernum).backend.get_shareset(si).sharehomedir
4948+            fileutil.fp_make_dirs(storage_dir)
4949+            storage_dir.child("%d" % share).setContent(sharedata)
4950         # ...and verify that the shares are there.
4951         shares = self.find_uri_shares(self.sdmf_old_cap)
4952         assert len(shares) == 10
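The hunk above shows the new idiom for locating a share's container on disk: rather than composing paths with storage_index_to_dir(), callers traverse the pluggable backend. A minimal sketch of that idiom, assuming the get_shareset()/sharehomedir API this patch introduces (write_share and its arguments are hypothetical names):

    from allmydata.util import fileutil

    def write_share(server, si, shnum, sharedata):
        # sharehomedir is a twisted.python.filepath.FilePath naming the
        # shareset's directory under the backend's "shares" area.
        storage_dir = server.backend.get_shareset(si).sharehomedir
        fileutil.fp_make_dirs(storage_dir)   # idempotent, like "mkdir -p"
        storage_dir.child("%d" % shnum).setContent(sharedata)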
4953replace ./src/allmydata/test/test_mutable.py [A-Za-z_0-9] MutableShareFile MutableDiskShare
4954replace ./src/allmydata/test/test_provisioning.py [A-Za-z_0-9] MyRequest MockRequest
4955hunk ./src/allmydata/test/test_storage.py 14
4956 from allmydata import interfaces
4957 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
4958 from allmydata.storage.server import StorageServer
4959-from allmydata.storage.mutable import MutableShareFile
4960-from allmydata.storage.immutable import BucketWriter, BucketReader
4961-from allmydata.storage.common import DataTooLargeError, storage_index_to_dir, \
4962+from allmydata.storage.backends.disk.mutable import MutableShareFile
4963+from allmydata.storage.bucket import BucketWriter, BucketReader
4964+from allmydata.storage.common import DataTooLargeError, \
4965      UnknownMutableContainerVersionError, UnknownImmutableContainerVersionError
4966 from allmydata.storage.lease import LeaseInfo
4967 from allmydata.storage.crawler import BucketCountingCrawler
4968hunk ./src/allmydata/test/test_storage.py 474
4969         w[0].remote_write(0, "\xff"*10)
4970         w[0].remote_close()
4971 
4972-        fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0")
4973-        f = open(fn, "rb+")
4974+        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
4975+        f = fp.open("rb+")
4976         f.seek(0)
4977         f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1
4978         f.close()
4979hunk ./src/allmydata/test/test_storage.py 814
4980     def test_bad_magic(self):
4981         ss = self.create("test_bad_magic")
4982         self.allocate(ss, "si1", "we1", self._lease_secret.next(), set([0]), 10)
4983-        fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0")
4984-        f = open(fn, "rb+")
4985+        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
4986+        f = fp.open("rb+")
4987         f.seek(0)
4988         f.write("BAD MAGIC")
4989         f.close()
4990hunk ./src/allmydata/test/test_storage.py 1229
4991 
4992         # create a random non-numeric file in the bucket directory, to
4993         # exercise the code that's supposed to ignore those.
4994-        bucket_dir = os.path.join(self.workdir("test_leases"),
4995-                                  "shares", storage_index_to_dir("si1"))
4996-        f = open(os.path.join(bucket_dir, "ignore_me.txt"), "w")
4997-        f.write("you ought to be ignoring me\n")
4998-        f.close()
4999+        bucket_dir = ss.backend.get_shareset("si1").sharehomedir
5000+        bucket_dir.child("ignore_me.txt").setContent("you ought to be ignoring me\n")
5001 
5002         s0 = MutableShareFile(os.path.join(bucket_dir, "0"))
5003         self.failUnlessEqual(len(list(s0.get_leases())), 1)
5004hunk ./src/allmydata/test/test_storage.py 3118
5005         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
5006 
5007         # add a non-sharefile to exercise another code path
5008-        fn = os.path.join(ss.sharedir,
5009-                          storage_index_to_dir(immutable_si_0),
5010-                          "not-a-share")
5011-        f = open(fn, "wb")
5012-        f.write("I am not a share.\n")
5013-        f.close()
5014+        fp = ss.backend.get_shareset(immutable_si_0).sharehomedir.child("not-a-share")
5015+        fp.setContent("I am not a share.\n")
5016 
5017         # this is before the crawl has started, so we're not in a cycle yet
5018         initial_state = lc.get_state()
5019hunk ./src/allmydata/test/test_storage.py 3282
5020     def test_expire_age(self):
5021         basedir = "storage/LeaseCrawler/expire_age"
5022         fileutil.make_dirs(basedir)
5023-        # setting expiration_time to 2000 means that any lease which is more
5024-        # than 2000s old will be expired.
5025-        ss = InstrumentedStorageServer(basedir, "\x00" * 20,
5026-                                       expiration_enabled=True,
5027-                                       expiration_mode="age",
5028-                                       expiration_override_lease_duration=2000)
5029+        # setting 'override_lease_duration' to 2000 means that any lease that
5030+        # is more than 2000 seconds old will be expired.
5031+        expiration_policy = {
5032+            'enabled': True,
5033+            'mode': 'age',
5034+            'override_lease_duration': 2000,
5035+            'sharetypes': ('mutable', 'immutable'),
5036+        }
5037+        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
5038         # make it start sooner than usual.
5039         lc = ss.lease_checker
5040         lc.slow_start = 0
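Throughout this patch the old expiration_* keyword arguments are collapsed into a single expiration_policy dict. Collecting the keys as they appear across these hunks (a sketch; the values are illustrative):

    expiration_policy = {
        'enabled': True,                  # whether the lease checker expires leases at all
        'mode': 'age',                    # 'age' or 'cutoff-date'
        'override_lease_duration': 2000,  # seconds; consulted only when mode == 'age'
        'cutoff_date': None,              # timestamp; consulted only when mode == 'cutoff-date'
        'sharetypes': ('mutable', 'immutable'),  # share types the policy applies to
    }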
5041hunk ./src/allmydata/test/test_storage.py 3423
5042     def test_expire_cutoff_date(self):
5043         basedir = "storage/LeaseCrawler/expire_cutoff_date"
5044         fileutil.make_dirs(basedir)
5045-        # setting cutoff-date to 2000 seconds ago means that any lease which
5046-        # is more than 2000s old will be expired.
5047+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5048+        # is more than 2000 seconds old will be expired.
5049         now = time.time()
5050         then = int(now - 2000)
5051hunk ./src/allmydata/test/test_storage.py 3427
5052-        ss = InstrumentedStorageServer(basedir, "\x00" * 20,
5053-                                       expiration_enabled=True,
5054-                                       expiration_mode="cutoff-date",
5055-                                       expiration_cutoff_date=then)
5056+        expiration_policy = {
5057+            'enabled': True,
5058+            'mode': 'cutoff-date',
5059+            'cutoff_date': then,
5060+            'sharetypes': ('mutable', 'immutable'),
5061+        }
5062+        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
5063         # make it start sooner than usual.
5064         lc = ss.lease_checker
5065         lc.slow_start = 0
5066hunk ./src/allmydata/test/test_storage.py 3575
5067     def test_only_immutable(self):
5068         basedir = "storage/LeaseCrawler/only_immutable"
5069         fileutil.make_dirs(basedir)
5070+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5071+        # is more than 2000 seconds old will be expired.
5072         now = time.time()
5073         then = int(now - 2000)
5074hunk ./src/allmydata/test/test_storage.py 3579
5075-        ss = StorageServer(basedir, "\x00" * 20,
5076-                           expiration_enabled=True,
5077-                           expiration_mode="cutoff-date",
5078-                           expiration_cutoff_date=then,
5079-                           expiration_sharetypes=("immutable",))
5080+        expiration_policy = {
5081+            'enabled': True,
5082+            'mode': 'cutoff-date',
5083+            'cutoff_date': then,
5084+            'sharetypes': ('immutable',),
5085+        }
5086+        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
5087         lc = ss.lease_checker
5088         lc.slow_start = 0
5089         webstatus = StorageStatus(ss)
5090hunk ./src/allmydata/test/test_storage.py 3636
5091     def test_only_mutable(self):
5092         basedir = "storage/LeaseCrawler/only_mutable"
5093         fileutil.make_dirs(basedir)
5094+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
5095+        # is more than 2000 seconds old will be expired.
5096         now = time.time()
5097         then = int(now - 2000)
5098hunk ./src/allmydata/test/test_storage.py 3640
5099-        ss = StorageServer(basedir, "\x00" * 20,
5100-                           expiration_enabled=True,
5101-                           expiration_mode="cutoff-date",
5102-                           expiration_cutoff_date=then,
5103-                           expiration_sharetypes=("mutable",))
5104+        expiration_policy = {
5105+            'enabled': True,
5106+            'mode': 'cutoff-date',
5107+            'cutoff_date': then,
5108+            'sharetypes': ('mutable',),
5109+        }
5110+        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
5111         lc = ss.lease_checker
5112         lc.slow_start = 0
5113         webstatus = StorageStatus(ss)
5114hunk ./src/allmydata/test/test_storage.py 3819
5115     def test_no_st_blocks(self):
5116         basedir = "storage/LeaseCrawler/no_st_blocks"
5117         fileutil.make_dirs(basedir)
5118-        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20,
5119-                                        expiration_mode="age",
5120-                                        expiration_override_lease_duration=-1000)
5121-        # a negative expiration_time= means the "configured-"
5122+        # A negative 'override_lease_duration' means that the "configured-"
5123         # space-recovered counts will be non-zero, since all shares will have
5124hunk ./src/allmydata/test/test_storage.py 3821
5125-        # expired by then
5126+        # expired by then.
5127+        expiration_policy = {
5128+            'enabled': True,
5129+            'mode': 'age',
5130+            'override_lease_duration': -1000,
5131+            'sharetypes': ('mutable', 'immutable'),
5132+        }
5133+        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20, expiration_policy)
5134 
5135         # make it start sooner than usual.
5136         lc = ss.lease_checker
5137hunk ./src/allmydata/test/test_storage.py 3877
5138         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
5139         first = min(self.sis)
5140         first_b32 = base32.b2a(first)
5141-        fn = os.path.join(ss.sharedir, storage_index_to_dir(first), "0")
5142-        f = open(fn, "rb+")
5143+        fp = ss.backend.get_shareset(first).sharehomedir.child("0")
5144+        f = fp.open("rb+")
5145         f.seek(0)
5146         f.write("BAD MAGIC")
5147         f.close()
5148hunk ./src/allmydata/test/test_storage.py 3890
5149 
5150         # also create an empty bucket
5151         empty_si = base32.b2a("\x04"*16)
5152-        empty_bucket_dir = os.path.join(ss.sharedir,
5153-                                        storage_index_to_dir(empty_si))
5154-        fileutil.make_dirs(empty_bucket_dir)
5155+        empty_bucket_dir = ss.backend.get_shareset(empty_si).sharehomedir
5156+        fileutil.fp_make_dirs(empty_bucket_dir)
5157 
5158         ss.setServiceParent(self.s)
5159 
5160replace ./src/allmydata/test/test_storage.py [A-Za-z_0-9] MutableShareFile MutableDiskShare
5161hunk ./src/allmydata/test/test_system.py 10
5162 
5163 import allmydata
5164 from allmydata import uri
5165-from allmydata.storage.mutable import MutableShareFile
5166+from allmydata.storage.backends.disk.mutable import MutableShareFile
5167 from allmydata.storage.server import si_a2b
5168 from allmydata.immutable import offloaded, upload
5169 from allmydata.immutable.literal import LiteralFileNode
5170replace ./src/allmydata/test/test_system.py [A-Za-z_0-9] MutableShareFile MutableDiskShare
5171hunk ./src/allmydata/test/test_upload.py 22
5172 from allmydata.util.happinessutil import servers_of_happiness, \
5173                                          shares_by_server, merge_servers
5174 from allmydata.storage_client import StorageFarmBroker
5175-from allmydata.storage.server import storage_index_to_dir
5176 
5177 MiB = 1024*1024
5178 
5179hunk ./src/allmydata/test/test_upload.py 821
5180 
5181     def _copy_share_to_server(self, share_number, server_number):
5182         ss = self.g.servers_by_number[server_number]
5183-        # Copy share i from the directory associated with the first
5184-        # storage server to the directory associated with this one.
5185-        assert self.g, "I tried to find a grid at self.g, but failed"
5186-        assert self.shares, "I tried to find shares at self.shares, but failed"
5187-        old_share_location = self.shares[share_number][2]
5188-        new_share_location = os.path.join(ss.storedir, "shares")
5189-        si = uri.from_string(self.uri).get_storage_index()
5190-        new_share_location = os.path.join(new_share_location,
5191-                                          storage_index_to_dir(si))
5192-        if not os.path.exists(new_share_location):
5193-            os.makedirs(new_share_location)
5194-        new_share_location = os.path.join(new_share_location,
5195-                                          str(share_number))
5196-        if old_share_location != new_share_location:
5197-            shutil.copy(old_share_location, new_share_location)
5198-        shares = self.find_uri_shares(self.uri)
5199-        # Make sure that the storage server has the share.
5200-        self.failUnless((share_number, ss.my_nodeid, new_share_location)
5201-                        in shares)
5202+        self.copy_share(self.shares[share_number], ss)
5203 
5204     def _setup_grid(self):
5205         """
5206hunk ./src/allmydata/test/test_web.py 12
5207 from twisted.python import failure, log
5208 from nevow import rend
5209 from allmydata import interfaces, uri, webish, dirnode
5210-from allmydata.storage.shares import get_share_file
5211 from allmydata.storage_client import StorageFarmBroker
5212 from allmydata.immutable import upload
5213 from allmydata.immutable.downloader.status import DownloadStatus
5214hunk ./src/allmydata/test/test_web.py 4111
5215             good_shares = self.find_uri_shares(self.uris["good"])
5216             self.failUnlessReallyEqual(len(good_shares), 10)
5217             sick_shares = self.find_uri_shares(self.uris["sick"])
5218-            os.unlink(sick_shares[0][2])
5219+            sick_shares[0][2].remove()
5220             dead_shares = self.find_uri_shares(self.uris["dead"])
5221             for i in range(1, 10):
5222hunk ./src/allmydata/test/test_web.py 4114
5223-                os.unlink(dead_shares[i][2])
5224+                dead_shares[i][2].remove()
5225             c_shares = self.find_uri_shares(self.uris["corrupt"])
5226             cso = CorruptShareOptions()
5227             cso.stdout = StringIO()
5228hunk ./src/allmydata/test/test_web.py 4118
5229-            cso.parseOptions([c_shares[0][2]])
5230+            cso.parseOptions([c_shares[0][2].path])
5231             corrupt_share(cso)
5232         d.addCallback(_clobber_shares)
5233 
5234hunk ./src/allmydata/test/test_web.py 4253
5235             good_shares = self.find_uri_shares(self.uris["good"])
5236             self.failUnlessReallyEqual(len(good_shares), 10)
5237             sick_shares = self.find_uri_shares(self.uris["sick"])
5238-            os.unlink(sick_shares[0][2])
5239+            sick_shares[0][2].remove()
5240             dead_shares = self.find_uri_shares(self.uris["dead"])
5241             for i in range(1, 10):
5242hunk ./src/allmydata/test/test_web.py 4256
5243-                os.unlink(dead_shares[i][2])
5244+                dead_shares[i][2].remove()
5245             c_shares = self.find_uri_shares(self.uris["corrupt"])
5246             cso = CorruptShareOptions()
5247             cso.stdout = StringIO()
5248hunk ./src/allmydata/test/test_web.py 4260
5249-            cso.parseOptions([c_shares[0][2]])
5250+            cso.parseOptions([c_shares[0][2].path])
5251             corrupt_share(cso)
5252         d.addCallback(_clobber_shares)
5253 
5254hunk ./src/allmydata/test/test_web.py 4319
5255 
5256         def _clobber_shares(ignored):
5257             sick_shares = self.find_uri_shares(self.uris["sick"])
5258-            os.unlink(sick_shares[0][2])
5259+            sick_shares[0][2].remove()
5260         d.addCallback(_clobber_shares)
5261 
5262         d.addCallback(self.CHECK, "sick", "t=check&repair=true&output=json")
5263hunk ./src/allmydata/test/test_web.py 4811
5264             good_shares = self.find_uri_shares(self.uris["good"])
5265             self.failUnlessReallyEqual(len(good_shares), 10)
5266             sick_shares = self.find_uri_shares(self.uris["sick"])
5267-            os.unlink(sick_shares[0][2])
5268+            sick_shares[0][2].remove()
5269             #dead_shares = self.find_uri_shares(self.uris["dead"])
5270             #for i in range(1, 10):
5271hunk ./src/allmydata/test/test_web.py 4814
5272-            #    os.unlink(dead_shares[i][2])
5273+            #    dead_shares[i][2].remove()
5274 
5275             #c_shares = self.find_uri_shares(self.uris["corrupt"])
5276             #cso = CorruptShareOptions()
5277hunk ./src/allmydata/test/test_web.py 4819
5278             #cso.stdout = StringIO()
5279-            #cso.parseOptions([c_shares[0][2]])
5280+            #cso.parseOptions([c_shares[0][2].path])
5281             #corrupt_share(cso)
5282         d.addCallback(_clobber_shares)
5283 
5284hunk ./src/allmydata/test/test_web.py 4870
5285         d.addErrback(self.explain_web_error)
5286         return d
5287 
5288-    def _count_leases(self, ignored, which):
5289-        u = self.uris[which]
5290-        shares = self.find_uri_shares(u)
5291-        lease_counts = []
5292-        for shnum, serverid, fn in shares:
5293-            sf = get_share_file(fn)
5294-            num_leases = len(list(sf.get_leases()))
5295-            lease_counts.append( (fn, num_leases) )
5296-        return lease_counts
5297-
5298-    def _assert_leasecount(self, lease_counts, expected):
5299+    def _assert_leasecount(self, ignored, which, expected):
5300+        lease_counts = self.count_leases(self.uris[which])
5301         for (fn, num_leases) in lease_counts:
5302             if num_leases != expected:
5303                 self.fail("expected %d leases, have %d, on %s" %
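The refactoring above folds the old two-step _count_leases/_assert_leasecount pair into a single callback by using Deferred.addCallback's extra-argument passing: the preceding callback's result arrives as the first parameter, and any extra positional arguments given to addCallback() follow it. A minimal sketch of the idiom (the function name is hypothetical):

    from twisted.internet import defer

    def _assert_something(ignored, which, expected):
        # 'ignored' is the previous callback's result; 'which' and
        # 'expected' come from the addCallback() call below.
        return (which, expected)

    d = defer.succeed(None)
    d.addCallback(_assert_something, "one", 1)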
5304hunk ./src/allmydata/test/test_web.py 4903
5305                 self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
5306         d.addCallback(_compute_fileurls)
5307 
5308-        d.addCallback(self._count_leases, "one")
5309-        d.addCallback(self._assert_leasecount, 1)
5310-        d.addCallback(self._count_leases, "two")
5311-        d.addCallback(self._assert_leasecount, 1)
5312-        d.addCallback(self._count_leases, "mutable")
5313-        d.addCallback(self._assert_leasecount, 1)
5314+        d.addCallback(self._assert_leasecount, "one", 1)
5315+        d.addCallback(self._assert_leasecount, "two", 1)
5316+        d.addCallback(self._assert_leasecount, "mutable", 1)
5317 
5318         d.addCallback(self.CHECK, "one", "t=check") # no add-lease
5319         def _got_html_good(res):
5320hunk ./src/allmydata/test/test_web.py 4913
5321             self.failIf("Not Healthy" in res, res)
5322         d.addCallback(_got_html_good)
5323 
5324-        d.addCallback(self._count_leases, "one")
5325-        d.addCallback(self._assert_leasecount, 1)
5326-        d.addCallback(self._count_leases, "two")
5327-        d.addCallback(self._assert_leasecount, 1)
5328-        d.addCallback(self._count_leases, "mutable")
5329-        d.addCallback(self._assert_leasecount, 1)
5330+        d.addCallback(self._assert_leasecount, "one", 1)
5331+        d.addCallback(self._assert_leasecount, "two", 1)
5332+        d.addCallback(self._assert_leasecount, "mutable", 1)
5333 
5334         # this CHECK uses the original client, which uses the same
5335         # lease-secrets, so it will just renew the original lease
5336hunk ./src/allmydata/test/test_web.py 4922
5337         d.addCallback(self.CHECK, "one", "t=check&add-lease=true")
5338         d.addCallback(_got_html_good)
5339 
5340-        d.addCallback(self._count_leases, "one")
5341-        d.addCallback(self._assert_leasecount, 1)
5342-        d.addCallback(self._count_leases, "two")
5343-        d.addCallback(self._assert_leasecount, 1)
5344-        d.addCallback(self._count_leases, "mutable")
5345-        d.addCallback(self._assert_leasecount, 1)
5346+        d.addCallback(self._assert_leasecount, "one", 1)
5347+        d.addCallback(self._assert_leasecount, "two", 1)
5348+        d.addCallback(self._assert_leasecount, "mutable", 1)
5349 
5350         # this CHECK uses an alternate client, which adds a second lease
5351         d.addCallback(self.CHECK, "one", "t=check&add-lease=true", clientnum=1)
5352hunk ./src/allmydata/test/test_web.py 4930
5353         d.addCallback(_got_html_good)
5354 
5355-        d.addCallback(self._count_leases, "one")
5356-        d.addCallback(self._assert_leasecount, 2)
5357-        d.addCallback(self._count_leases, "two")
5358-        d.addCallback(self._assert_leasecount, 1)
5359-        d.addCallback(self._count_leases, "mutable")
5360-        d.addCallback(self._assert_leasecount, 1)
5361+        d.addCallback(self._assert_leasecount, "one", 2)
5362+        d.addCallback(self._assert_leasecount, "two", 1)
5363+        d.addCallback(self._assert_leasecount, "mutable", 1)
5364 
5365         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true")
5366         d.addCallback(_got_html_good)
5367hunk ./src/allmydata/test/test_web.py 4937
5368 
5369-        d.addCallback(self._count_leases, "one")
5370-        d.addCallback(self._assert_leasecount, 2)
5371-        d.addCallback(self._count_leases, "two")
5372-        d.addCallback(self._assert_leasecount, 1)
5373-        d.addCallback(self._count_leases, "mutable")
5374-        d.addCallback(self._assert_leasecount, 1)
5375+        d.addCallback(self._assert_leasecount, "one", 2)
5376+        d.addCallback(self._assert_leasecount, "two", 1)
5377+        d.addCallback(self._assert_leasecount, "mutable", 1)
5378 
5379         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true",
5380                       clientnum=1)
5381hunk ./src/allmydata/test/test_web.py 4945
5382         d.addCallback(_got_html_good)
5383 
5384-        d.addCallback(self._count_leases, "one")
5385-        d.addCallback(self._assert_leasecount, 2)
5386-        d.addCallback(self._count_leases, "two")
5387-        d.addCallback(self._assert_leasecount, 1)
5388-        d.addCallback(self._count_leases, "mutable")
5389-        d.addCallback(self._assert_leasecount, 2)
5390+        d.addCallback(self._assert_leasecount, "one", 2)
5391+        d.addCallback(self._assert_leasecount, "two", 1)
5392+        d.addCallback(self._assert_leasecount, "mutable", 2)
5393 
5394         d.addErrback(self.explain_web_error)
5395         return d
5396hunk ./src/allmydata/test/test_web.py 4989
5397             self.failUnlessReallyEqual(len(units), 4+1)
5398         d.addCallback(_done)
5399 
5400-        d.addCallback(self._count_leases, "root")
5401-        d.addCallback(self._assert_leasecount, 1)
5402-        d.addCallback(self._count_leases, "one")
5403-        d.addCallback(self._assert_leasecount, 1)
5404-        d.addCallback(self._count_leases, "mutable")
5405-        d.addCallback(self._assert_leasecount, 1)
5406+        d.addCallback(self._assert_leasecount, "root", 1)
5407+        d.addCallback(self._assert_leasecount, "one", 1)
5408+        d.addCallback(self._assert_leasecount, "mutable", 1)
5409 
5410         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true")
5411         d.addCallback(_done)
5412hunk ./src/allmydata/test/test_web.py 4996
5413 
5414-        d.addCallback(self._count_leases, "root")
5415-        d.addCallback(self._assert_leasecount, 1)
5416-        d.addCallback(self._count_leases, "one")
5417-        d.addCallback(self._assert_leasecount, 1)
5418-        d.addCallback(self._count_leases, "mutable")
5419-        d.addCallback(self._assert_leasecount, 1)
5420+        d.addCallback(self._assert_leasecount, "root", 1)
5421+        d.addCallback(self._assert_leasecount, "one", 1)
5422+        d.addCallback(self._assert_leasecount, "mutable", 1)
5423 
5424         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true",
5425                       clientnum=1)
5426hunk ./src/allmydata/test/test_web.py 5004
5427         d.addCallback(_done)
5428 
5429-        d.addCallback(self._count_leases, "root")
5430-        d.addCallback(self._assert_leasecount, 2)
5431-        d.addCallback(self._count_leases, "one")
5432-        d.addCallback(self._assert_leasecount, 2)
5433-        d.addCallback(self._count_leases, "mutable")
5434-        d.addCallback(self._assert_leasecount, 2)
5435+        d.addCallback(self._assert_leasecount, "root", 2)
5436+        d.addCallback(self._assert_leasecount, "one", 2)
5437+        d.addCallback(self._assert_leasecount, "mutable", 2)
5438 
5439         d.addErrback(self.explain_web_error)
5440         return d
5441hunk ./src/allmydata/util/encodingutil.py 221
5442 def quote_path(path, quotemarks=True):
5443     return quote_output("/".join(map(to_str, path)), quotemarks=quotemarks)
5444 
5445+def quote_filepath(fp, quotemarks=True, encoding=None):
5446+    path = fp.path
5447+    if isinstance(path, str):
5448+        try:
5449+            path = path.decode(filesystem_encoding)
5450+        except UnicodeDecodeError:
5451+            return 'b"%s"' % (ESCAPABLE_8BIT.sub(_str_escape, path),)
5452+
5453+    return quote_output(path, quotemarks=quotemarks, encoding=encoding)
5454+
5455 
5456 def unicode_platform():
5457     """
5458hunk ./src/allmydata/util/fileutil.py 5
5459 Futz with files like a pro.
5460 """
5461 
5462-import sys, exceptions, os, stat, tempfile, time, binascii
5463+import errno, sys, exceptions, os, stat, tempfile, time, binascii
5464+
5465+from allmydata.util.assertutil import precondition
5466 
5467 from twisted.python import log
5468hunk ./src/allmydata/util/fileutil.py 10
5469+from twisted.python.filepath import FilePath, UnlistableError
5470 
5471 from pycryptopp.cipher.aes import AES
5472 
5473hunk ./src/allmydata/util/fileutil.py 189
5474             raise tx
5475         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5476 
5477-def rm_dir(dirname):
5478+def fp_make_dirs(dirfp):
5479+    """
5480+    An idempotent version of FilePath.makedirs().  If the dir already
5481+    exists, do nothing and return without raising an exception.  If this
5482+    call creates the dir, return without raising an exception.  If there is
5483+    an error that prevents creation or if the directory gets deleted after
5484+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
5485+    exists, raise an exception.
5486+    """
5487+    log.msg( "xxx 0 %s" % (dirfp,))
5488+    tx = None
5489+    try:
5490+        dirfp.makedirs()
5491+    except OSError, x:
5492+        tx = x
5493+
5494+    if not dirfp.isdir():
5495+        if tx:
5496+            raise tx
5497+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5498+
5499+def fp_rmdir_if_empty(dirfp):
5500+    """ Remove the directory if it is empty. """
5501+    try:
5502+        os.rmdir(dirfp.path)
5503+    except OSError, e:
5504+        if e.errno != errno.ENOTEMPTY:
5505+            raise
5506+    else:
5507+        dirfp.changed()
5508+
5509+def rmtree(dirname):
5510     """
5511     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
5512     already gone, do nothing and return without raising an exception.  If this
5513hunk ./src/allmydata/util/fileutil.py 239
5514             else:
5515                 remove(fullname)
5516         os.rmdir(dirname)
5517-    except Exception, le:
5518-        # Ignore "No such file or directory"
5519-        if (not isinstance(le, OSError)) or le.args[0] != 2:
5520+    except EnvironmentError, le:
5521+        # Ignore "No such file or directory", collect any other exception.
5522+        if le.args[0] != errno.ENOENT:
5523             excs.append(le)
5524hunk ./src/allmydata/util/fileutil.py 243
5525+    except Exception, le:
5526+        excs.append(le)
5527 
5528     # Okay, now we've recursively removed everything, ignoring any "No
5529     # such file or directory" errors, and collecting any other errors.
5530hunk ./src/allmydata/util/fileutil.py 256
5531             raise OSError, "Failed to remove dir for unknown reason."
5532         raise OSError, excs
5533 
5534+def fp_remove(fp):
5535+    """
5536+    An idempotent version of shutil.rmtree().  If the file/dir is already
5537+    gone, do nothing and return without raising an exception.  If this call
5538+    removes the file/dir, return without raising an exception.  If there is
5539+    an error that prevents removal, or if a file or directory at the same
5540+    path gets created again by someone else after this deletes it and before
5541+    this checks that it is gone, raise an exception.
5542+    """
5543+    try:
5544+        fp.remove()
5545+    except UnlistableError, e:
5546+        if e.originalException.errno != errno.ENOENT:
5547+            raise
5548+    except OSError, e:
5549+        if e.errno != errno.ENOENT:
5550+            raise
5551+
5552+def rm_dir(dirname):
5553+    # Renamed to be like shutil.rmtree and unlike rmdir.
5554+    return rmtree(dirname)
5555 
5556 def remove_if_possible(f):
5557     try:
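A short sketch exercising the new FilePath-based helpers added above (the path is illustrative):

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    d = FilePath("/tmp/example-dir")
    fileutil.fp_make_dirs(d)        # creates it (and any missing parents)
    fileutil.fp_make_dirs(d)        # idempotent: the second call is a no-op
    fileutil.fp_rmdir_if_empty(d)   # removes it only because it is empty
    fileutil.fp_remove(d)           # idempotent: already gone, so no error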
5558hunk ./src/allmydata/util/fileutil.py 387
5559         import traceback
5560         traceback.print_exc()
5561 
5562-def get_disk_stats(whichdir, reserved_space=0):
5563+def get_disk_stats(whichdirfp, reserved_space=0):
5564     """Return disk statistics for the storage disk, in the form of a dict
5565     with the following fields.
5566       total:            total bytes on disk
5567hunk ./src/allmydata/util/fileutil.py 408
5568     you can pass how many bytes you would like to leave unused on this
5569     filesystem as reserved_space.
5570     """
5571+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5572 
5573     if have_GetDiskFreeSpaceExW:
5574         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
5575hunk ./src/allmydata/util/fileutil.py 419
5576         n_free_for_nonroot = c_ulonglong(0)
5577         n_total            = c_ulonglong(0)
5578         n_free_for_root    = c_ulonglong(0)
5579-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
5580+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
5581                                                byref(n_total),
5582                                                byref(n_free_for_root))
5583         if retval == 0:
5584hunk ./src/allmydata/util/fileutil.py 424
5585             raise OSError("Windows error %d attempting to get disk statistics for %r"
5586-                          % (GetLastError(), whichdir))
5587+                          % (GetLastError(), whichdirfp.path))
5588         free_for_nonroot = n_free_for_nonroot.value
5589         total            = n_total.value
5590         free_for_root    = n_free_for_root.value
5591hunk ./src/allmydata/util/fileutil.py 433
5592         # <http://docs.python.org/library/os.html#os.statvfs>
5593         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
5594         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
5595-        s = os.statvfs(whichdir)
5596+        s = os.statvfs(whichdirfp.path)
5597 
5598         # on my mac laptop:
5599         #  statvfs(2) is a wrapper around statfs(2).
5600hunk ./src/allmydata/util/fileutil.py 460
5601              'avail': avail,
5602            }
5603 
5604-def get_available_space(whichdir, reserved_space):
5605+def get_available_space(whichdirfp, reserved_space):
5606     """Returns available space for share storage in bytes, or None if no
5607     API to get this information is available.
5608 
5609hunk ./src/allmydata/util/fileutil.py 472
5610     you can pass how many bytes you would like to leave unused on this
5611     filesystem as reserved_space.
5612     """
5613+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5614     try:
5615hunk ./src/allmydata/util/fileutil.py 474
5616-        return get_disk_stats(whichdir, reserved_space)['avail']
5617+        return get_disk_stats(whichdirfp, reserved_space)['avail']
5618     except AttributeError:
5619         return None
5620hunk ./src/allmydata/util/fileutil.py 477
5621-    except EnvironmentError:
5622-        log.msg("OS call to get disk statistics failed")
5623+
5624+
5625+def get_used_space(fp):
5626+    if fp is None:
5627         return 0
5628hunk ./src/allmydata/util/fileutil.py 482
5629+    try:
5630+        s = os.stat(fp.path)
5631+    except EnvironmentError:
5632+        if not fp.exists():
5633+            return 0
5634+        raise
5635+    else:
5636+        # POSIX defines st_blocks (originally a BSDism):
5637+        #   <http://pubs.opengroup.org/onlinepubs/009695399/basedefs/sys/stat.h.html>
5638+        # but does not require stat() to give it a "meaningful value"
5639+        #   <http://pubs.opengroup.org/onlinepubs/009695399/functions/stat.html>
5640+        # and says:
5641+        #   "The unit for the st_blocks member of the stat structure is not defined
5642+        #    within IEEE Std 1003.1-2001. In some implementations it is 512 bytes.
5643+        #    It may differ on a file system basis. There is no correlation between
5644+        #    values of the st_blocks and st_blksize, and the f_bsize (from <sys/statvfs.h>)
5645+        #    structure members."
5646+        #
5647+        # The Linux docs define it as "the number of blocks allocated to the file,
5648+        # [in] 512-byte units." It is also defined that way on MacOS X. Python does
5649+        # not set the attribute on Windows.
5650+        #
5651+        # We consider platforms that define st_blocks but give it a wrong value, or
5652+        # measure it in a unit other than 512 bytes, to be broken. See also
5653+        # <http://bugs.python.org/issue12350>.
5654+
5655+        if hasattr(s, 'st_blocks'):
5656+            return s.st_blocks * 512
5657+        else:
5658+            return s.st_size
5659}
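The st_blocks discussion above is easiest to see with a sparse file, where the apparent size (st_size) and the allocated size (st_blocks * 512) diverge. A sketch, assuming a POSIX filesystem with sparse-file support:

    from twisted.python.filepath import FilePath
    from allmydata.util.fileutil import get_used_space

    fp = FilePath("/tmp/sparse-example")
    f = open(fp.path, "wb")
    f.truncate(10 * 1024 * 1024)   # 10 MiB apparent size, no data written
    f.close()
    # get_used_space() reports allocated space (st_blocks * 512), so this
    # should be far below 10 MiB here; on platforms without st_blocks it
    # falls back to st_size.
    print get_used_space(fp)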
5660[Bleeding edge pluggable backends code from David-Sarah. refs #999
5661david-sarah@jacaranda.org**20110919202715
5662 Ignore-this: dd5d1a7ca745389239c189f82246a99c
5663] {
5664hunk ./src/allmydata/interfaces.py 351
5665         Generates the IStoredShare objects held in this shareset.
5666         """
5667 
5668-    def get_incoming_shnums():
5669+    def has_incoming(shnum):
5670         """
5671hunk ./src/allmydata/interfaces.py 353
5672-        Return a frozenset of the shnums (as ints) of incoming shares.
5673+        Returns True if this shareset has an incoming (partial) share with this number, otherwise False.
5674         """
5675 
5676     def make_bucket_writer(storageserver, shnum, max_space_per_bucket, lease_info, canary):
5677hunk ./src/allmydata/interfaces.py 412
5678         """
5679         Add a new lease on the shares in this shareset. If the renew_secret
5680         matches an existing lease, that lease will be renewed instead. If
5681-        there are no shares in this shareset, return silently. (Note that
5682-        in Tahoe-LAFS v1.3.0 and earlier, IndexError was raised if there were
5683-        no shares with this shareset's storage index.)
5684+        there are no shares in this shareset, return silently.
5685 
5686         @param lease_info=LeaseInfo
5687         """
5688hunk ./src/allmydata/storage/backends/base.py 30
5689         return si_b2a(self.storageindex)
5690 
5691     def renew_lease(self, renew_secret, new_expiration_time):
5692-        found_buckets = False
5693+        found_shares = False
5694         for share in self.get_shares():
5695hunk ./src/allmydata/storage/backends/base.py 32
5696-            found_buckets = True
5697+            found_shares = True
5698             share.renew_lease(renew_secret, new_expiration_time)
5699 
5700hunk ./src/allmydata/storage/backends/base.py 35
5701-        if not found_buckets:
5702+        if not found_shares:
5703             raise IndexError("no such lease to renew")
5704 
5705     def get_leases(self):
5706hunk ./src/allmydata/storage/backends/base.py 69
5707         # def _create_mutable_share(self, storageserver, shnum, write_enabler):
5708         #     """create a mutable share with the given shnum and write_enabler"""
5709 
5710-        # This previously had to be a triple with cancel_secret in secrets[2],
5711-        # but we now allow the cancel_secret to be omitted.
5712+        # secrets might be a triple with cancel_secret in secrets[2], but if
5713+        # so we ignore the cancel_secret.
5714         write_enabler = secrets[0]
5715         renew_secret = secrets[1]
5716 
5717hunk ./src/allmydata/storage/backends/base.py 77
5718         si_s = self.get_storage_index_string()
5719         shares = {}
5720         for share in self.get_shares():
5721-            # XXX is ignoring immutable shares correct? Maybe get_shares should
5722+            # XXX is it correct to ignore immutable shares? Maybe get_shares should
5723             # have a parameter saying what type it's expecting.
5724             if share.sharetype == "mutable":
5725                 share.check_write_enabler(write_enabler, si_s)
5726hunk ./src/allmydata/storage/backends/base.py 83
5727                 shares[share.get_shnum()] = share
5728 
5729-        # write_enabler is good for all existing shares.
5730+        # write_enabler is good for all existing shares
5731 
5732hunk ./src/allmydata/storage/backends/base.py 85
5733-        # Now evaluate test vectors.
5734+        # now evaluate test vectors
5735         testv_is_good = True
5736         for sharenum in test_and_write_vectors:
5737             (testv, datav, new_length) = test_and_write_vectors[sharenum]
5738hunk ./src/allmydata/storage/backends/base.py 96
5739                     break
5740             else:
5741                 # compare the vectors against an empty share, in which all
5742-                # reads return empty strings.
5743+                # reads return empty strings
5744                 if not EmptyShare().check_testv(testv):
5745                     self.log("testv failed (empty): [%d] %r" % (sharenum,
5746                                                                 testv))
5747hunk ./src/allmydata/storage/backends/base.py 103
5748                     testv_is_good = False
5749                     break
5750 
5751-        # now gather the read vectors, before we do any writes
5752+        # gather the read vectors, before we do any writes
5753         read_data = {}
5754         for shnum, share in shares.items():
5755             read_data[shnum] = share.readv(read_vector)
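For context on the test-vector loop above: a test vector is a list of (offset, length, operator, specimen) tuples, and each read of length bytes at offset must compare true against specimen under operator (e.g. "eq") for the write to proceed; this is the format used elsewhere in Tahoe's mutable-share protocol. Since every read against an empty share returns the empty string, a vector like the following passes only if the share has not been written yet:

    # hypothetical example: "write only if the share does not exist yet"
    testv = [(0, 1, "eq", "")]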
5756hunk ./src/allmydata/storage/backends/base.py 144
5757         """
5758         datavs = {}
5759         for share in self.get_shares():
5760-            # XXX is ignoring immutable shares correct? Maybe get_shares should
5761-            # have a parameter saying what type it's expecting.
5762             shnum = share.get_shnum()
5763hunk ./src/allmydata/storage/backends/base.py 145
5764-            if share.sharetype == "mutable" and (not wanted_shnums or shnum in wanted_shnums):
5765+            if not wanted_shnums or shnum in wanted_shnums:
5766                 datavs[shnum] = share.readv(read_vector)
5767 
5768         return datavs
5769hunk ./src/allmydata/storage/backends/disk/disk_backend.py 4
5770 
5771 import re
5772 
5773-from twisted.python.filepath import FilePath, UnlistableError
5774+from twisted.python.filepath import UnlistableError
5775 
5776 from zope.interface import implements
5777 from allmydata.interfaces import IStorageBackend, IShareSet
5778hunk ./src/allmydata/storage/backends/disk/disk_backend.py 9
5779 from allmydata.util import fileutil, log, time_format
5780-from allmydata.util.assertutil import precondition
5781 from allmydata.storage.common import si_b2a, si_a2b
5782 from allmydata.storage.bucket import BucketWriter
5783 from allmydata.storage.backends.base import Backend, ShareSet
5784hunk ./src/allmydata/storage/backends/disk/disk_backend.py 51
5785 class DiskBackend(Backend):
5786     implements(IStorageBackend)
5787 
5788-    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
5789+    def __init__(self, storedir, readonly=False, reserved_space=0, discard_storage=False):
5790         Backend.__init__(self)
5791hunk ./src/allmydata/storage/backends/disk/disk_backend.py 53
5792-        self._setup_storage(storedir, readonly, reserved_space)
5793+        self._setup_storage(storedir, readonly, reserved_space, discard_storage)
5794         self._setup_corruption_advisory()
5795 
5796hunk ./src/allmydata/storage/backends/disk/disk_backend.py 56
5797-    def _setup_storage(self, storedir, readonly, reserved_space):
5798-        precondition(isinstance(storedir, FilePath), storedir, FilePath)
5799-        self.storedir = storedir
5800-        self.readonly = readonly
5801-        self.reserved_space = int(reserved_space)
5802-        self.sharedir = self.storedir.child("shares")
5803-        fileutil.fp_make_dirs(self.sharedir)
5804-        self.incomingdir = self.sharedir.child('incoming')
5805+    def _setup_storage(self, storedir, readonly, reserved_space, discard_storage):
5806+        self._storedir = storedir
5807+        self._readonly = readonly
5808+        self._reserved_space = int(reserved_space)
5809+        self._discard_storage = discard_storage
5810+        self._sharedir = self._storedir.child("shares")
5811+        fileutil.fp_make_dirs(self._sharedir)
5812+        self._incomingdir = self._sharedir.child('incoming')
5813         self._clean_incomplete()
5814hunk ./src/allmydata/storage/backends/disk/disk_backend.py 65
5815-        if self.reserved_space and (self.get_available_space() is None):
5816+        if self._reserved_space and (self.get_available_space() is None):
5817             log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5818                     umid="0wZ27w", level=log.UNUSUAL)
5819 
5820hunk ./src/allmydata/storage/backends/disk/disk_backend.py 70
5821     def _clean_incomplete(self):
5822-        fileutil.fp_remove(self.incomingdir)
5823-        fileutil.fp_make_dirs(self.incomingdir)
5824+        fileutil.fp_remove(self._incomingdir)
5825+        fileutil.fp_make_dirs(self._incomingdir)
5826 
5827     def _setup_corruption_advisory(self):
5828         # we don't actually create the corruption-advisory dir until necessary
5829hunk ./src/allmydata/storage/backends/disk/disk_backend.py 75
5830-        self.corruption_advisory_dir = self.storedir.child("corruption-advisories")
5831+        self._corruption_advisory_dir = self._storedir.child("corruption-advisories")
5832 
5833     def _make_shareset(self, sharehomedir):
5834         return self.get_shareset(si_a2b(sharehomedir.basename()))
5835hunk ./src/allmydata/storage/backends/disk/disk_backend.py 81
5836 
5837     def get_sharesets_for_prefix(self, prefix):
5838-        prefixfp = self.sharedir.child(prefix)
5839+        prefixfp = self._sharedir.child(prefix)
5840         try:
5841             sharesets = map(self._make_shareset, prefixfp.children())
5842             def _by_base32si(b):
5843hunk ./src/allmydata/storage/backends/disk/disk_backend.py 92
5844         return sharesets
5845 
5846     def get_shareset(self, storageindex):
5847-        sharehomedir = si_si2dir(self.sharedir, storageindex)
5848-        incominghomedir = si_si2dir(self.incomingdir, storageindex)
5849-        return DiskShareSet(storageindex, sharehomedir, incominghomedir)
5850+        sharehomedir = si_si2dir(self._sharedir, storageindex)
5851+        incominghomedir = si_si2dir(self._incomingdir, storageindex)
5852+        return DiskShareSet(storageindex, sharehomedir, incominghomedir, discard_storage=self._discard_storage)
5853 
5854     def fill_in_space_stats(self, stats):
5855hunk ./src/allmydata/storage/backends/disk/disk_backend.py 97
5856+        stats['storage_server.reserved_space'] = self._reserved_space
5857         try:
5858hunk ./src/allmydata/storage/backends/disk/disk_backend.py 99
5859-            disk = fileutil.get_disk_stats(self.sharedir, self.reserved_space)
5860+            disk = fileutil.get_disk_stats(self._sharedir, self._reserved_space)
5861             writeable = disk['avail'] > 0
5862 
5863             # spacetime predictors should use disk_avail / (d(disk_used)/dt)
5864hunk ./src/allmydata/storage/backends/disk/disk_backend.py 114
5865             log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
5866             writeable = False
5867 
5868-        if self.readonly_storage:
5869+        if self._readonly:
5870             stats['storage_server.disk_avail'] = 0
5871             writeable = False
5872 
5873hunk ./src/allmydata/storage/backends/disk/disk_backend.py 121
5874         stats['storage_server.accepting_immutable_shares'] = int(writeable)
5875 
5876     def get_available_space(self):
5877-        if self.readonly:
5878+        if self._readonly:
5879             return 0
5880hunk ./src/allmydata/storage/backends/disk/disk_backend.py 123
5881-        return fileutil.get_available_space(self.sharedir, self.reserved_space)
5882-
5883-    #def set_storage_server(self, ss):
5884-    #    self.ss = ss
5885+        return fileutil.get_available_space(self._sharedir, self._reserved_space)
5886 
5887     def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
5888hunk ./src/allmydata/storage/backends/disk/disk_backend.py 126
5889-        fileutil.fp_make_dirs(self.corruption_advisory_dir)
5890+        fileutil.fp_make_dirs(self._corruption_advisory_dir)
5891         now = time_format.iso_utc(sep="T")
5892         si_s = si_b2a(storageindex)
5893 
5894hunk ./src/allmydata/storage/backends/disk/disk_backend.py 132
5895         # Windows can't handle colons in the filename.
5896         name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "")
5897-        f = self.corruption_advisory_dir.child(name).open("w")
5898+        f = self._corruption_advisory_dir.child(name).open("w")
5899         try:
5900             f.write("report: Share Corruption\n")
5901             f.write("type: %s\n" % sharetype)
5902hunk ./src/allmydata/storage/backends/disk/disk_backend.py 153
5903 class DiskShareSet(ShareSet):
5904     implements(IShareSet)
5905 
5906-    def __init__(self, storageindex, sharehomedir, incominghomedir=None):
5907-        ShareSet.__init__(storageindex)
5908+    def __init__(self, storageindex, sharehomedir, incominghomedir=None, discard_storage=False):
5909+        ShareSet.__init__(self, storageindex)
5910         self._sharehomedir = sharehomedir
5911         self._incominghomedir = incominghomedir
5912hunk ./src/allmydata/storage/backends/disk/disk_backend.py 157
5913+        self._discard_storage = discard_storage
5914 
5915     def get_overhead(self):
5916         return (fileutil.get_disk_usage(self._sharehomedir) +
5917hunk ./src/allmydata/storage/backends/disk/disk_backend.py 179
5918             # There is no shares directory at all.
5919             pass
5920 
5921-    def get_incoming_shnums(self):
5922-        """
5923-        Return a frozenset of the shnum (as ints) of incoming shares.
5924-        """
5925+    def has_incoming(self, shnum):
5926         if self._incominghomedir is None:
5927hunk ./src/allmydata/storage/backends/disk/disk_backend.py 181
5928-            return frozenset()
5929-        try:
5930-            childfps = [ fp for fp in self._incominghomedir.children() if NUM_RE.match(fp.basename()) ]
5931-            shnums = [ int(fp.basename()) for fp in childfps]
5932-            return frozenset(shnums)
5933-        except UnlistableError:
5934-            # There is no incoming directory at all.
5935-            return frozenset()
5936+            return False
5937+        return self._incominghomedir.child(str(shnum)).exists()
5938 
5939     def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
5940         sharehome = self._sharehomedir.child(str(shnum))
5941hunk ./src/allmydata/storage/backends/disk/disk_backend.py 190
5942         immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
5943                                    max_size=max_space_per_bucket, create=True)
5944         bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
5945+        if self._discard_storage:
5946+            bw.throw_out_all_data = True
5947         return bw
5948 
5949     def _create_mutable_share(self, storageserver, shnum, write_enabler):
5950hunk ./src/allmydata/storage/backends/disk/disk_backend.py 195
5951-        fileutil.fp_make_dirs(self.sharehomedir)
5952-        sharehome = self.sharehomedir.child(str(shnum))
5953-        nodeid = storageserver.get_nodeid()
5954-        return create_mutable_disk_share(sharehome, nodeid, write_enabler, storageserver)
5955+        fileutil.fp_make_dirs(self._sharehomedir)
5956+        sharehome = self._sharehomedir.child(str(shnum))
5957+        serverid = storageserver.get_serverid()
5958+        return create_mutable_disk_share(sharehome, serverid, write_enabler, storageserver)
5959 
5960     def _clean_up_after_unlink(self):
5961         fileutil.fp_rmdir_if_empty(self._sharehomedir)
5962hunk ./src/allmydata/storage/backends/disk/immutable.py 213
5963             f.close()
5964 
5965     def add_lease(self, lease_info):
5966-        num_leases = self._read_num_leases(self._incominghome)
5967+        f = self._incominghome.open(mode='rb')
5968+        try:
5969+            num_leases = self._read_num_leases(f)
5970+        finally:
5971+            f.close()
5972         f = self._home.open(mode='wb+')
5973         try:
5974             self._write_lease_record(f, num_leases, lease_info)
5975hunk ./src/allmydata/storage/backends/disk/immutable.py 226
5976             f.close()
5977 
5978     def renew_lease(self, renew_secret, new_expire_time):
5979-        for i, lease in enumerate(self.get_leases()):
5980-            if constant_time_compare(lease.renew_secret, renew_secret):
5981-                # yup. See if we need to update the owner time.
5982-                if new_expire_time > lease.expiration_time:
5983-                    # yes
5984-                    lease.expiration_time = new_expire_time
5985-                    f = self._home.open('rb+')
5986-                    try:
5987-                        self._write_lease_record(f, i, lease)
5988-                    finally:
5989-                        f.close()
5990-                return
5991+        try:
5992+            for i, lease in enumerate(self.get_leases()):
5993+                if constant_time_compare(lease.renew_secret, renew_secret):
5994+                    # yup. See if we need to update the owner time.
5995+                    if new_expire_time > lease.expiration_time:
5996+                        # yes
5997+                        lease.expiration_time = new_expire_time
5998+                        f = self._home.open('rb+')
5999+                        try:
6000+                            self._write_lease_record(f, i, lease)
6001+                        finally:
6002+                            f.close()
6003+                    return
6004+        except IndexError, e:
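+            # Editorial note (assumption): this re-raise looks like a
+            # debugging aid, converting an unexpected IndexError from
+            # get_leases() into a generic Exception so that it cannot be
+            # mistaken for the deliberate IndexError raised below.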
6005+            raise Exception("IndexError: %s" % (e,))
6006         raise IndexError("unable to renew non-existent lease")
6007 
6008     def add_or_renew_lease(self, lease_info):
6009hunk ./src/allmydata/storage/backends/disk/mutable.py 87
6010     def log(self, *args, **kwargs):
6011         return self.parent.log(*args, **kwargs)
6012 
6013-    def create(self, my_nodeid, write_enabler):
6014+    def create(self, serverid, write_enabler):
6015         assert not self._home.exists()
6016         data_length = 0
6017         extra_lease_offset = (self.HEADER_SIZE
6018hunk ./src/allmydata/storage/backends/disk/mutable.py 98
6019         f = self._home.open('wb')
6020         try:
6021             header = struct.pack(">32s20s32sQQ",
6022-                                 self.MAGIC, my_nodeid, write_enabler,
6023+                                 self.MAGIC, serverid, write_enabler,
6024                                  data_length, extra_lease_offset,
6025                                  )
6026             leases = ("\x00"*self.LEASE_SIZE) * 4
6027hunk ./src/allmydata/storage/backends/disk/mutable.py 440
6028         pass
6029 
6030 
6031-def create_mutable_disk_share(fp, nodeid, write_enabler, parent):
6032+def create_mutable_disk_share(fp, serverid, write_enabler, parent):
6033     ms = MutableDiskShare(fp, parent)
6034hunk ./src/allmydata/storage/backends/disk/mutable.py 442
6035-    ms.create(nodeid, write_enabler)
6036+    ms.create(serverid, write_enabler)
6037     del ms
6038     return MutableDiskShare(fp, parent)
6039hunk ./src/allmydata/storage/crawler.py 6
6040 import cPickle as pickle
6041 from twisted.internet import reactor
6042 from twisted.application import service
6043+
6044+from allmydata.util.assertutil import precondition
6045+from allmydata.interfaces import IStorageBackend
6046 from allmydata.storage.common import si_b2a
6047 
6048 
6049hunk ./src/allmydata/storage/crawler.py 81
6050     minimum_cycle_time = 300 # don't run a cycle faster than this
6051 
6052     def __init__(self, backend, statefp, allowed_cpu_percentage=None):
6053+        precondition(IStorageBackend.providedBy(backend), backend)
6054         service.MultiService.__init__(self)
6055         self.backend = backend
6056         self.statefp = statefp
6057hunk ./src/allmydata/storage/expirer.py 54
6058     minimum_cycle_time = 12*60*60 # not more than twice per day
6059 
6060     def __init__(self, backend, statefp, historyfp, expiration_policy):
6061-        ShareCrawler.__init__(self, backend, statefp)
6062+        # ShareCrawler.__init__ will call add_initial_state, so self.historyfp has to be set first.
6063         self.historyfp = historyfp
6064hunk ./src/allmydata/storage/expirer.py 56
6065+        ShareCrawler.__init__(self, backend, statefp)
6066+
6067         self.expiration_enabled = expiration_policy['enabled']
6068         self.mode = expiration_policy['mode']
6069         self.override_lease_duration = None
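Note: the comment above records a real ordering constraint: ShareCrawler.__init__ calls add_initial_state, which needs self.historyfp, so that attribute must be bound before the base constructor runs. A minimal, self-contained illustration of the pitfall (class names invented):

    class BaseCrawler(object):
        def __init__(self):
            # Template-method pattern: the base constructor calls back
            # into a hook that subclasses may override.
            self.add_initial_state()

        def add_initial_state(self):
            pass

    class HistoryCrawler(BaseCrawler):
        def __init__(self, historyfp):
            self.historyfp = historyfp     # must happen first ...
            BaseCrawler.__init__(self)     # ... because this calls the hook

        def add_initial_state(self):
            assert self.historyfp is not None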
6070hunk ./src/allmydata/storage/lease.py 3
6071 import struct, time
6072 
6073+
6074+class NonExistentLeaseError(Exception):
6075+    pass
6076+
6077 class LeaseInfo:
6078     def __init__(self, owner_num=None, renew_secret=None, cancel_secret=None,
6079                  expiration_time=None, nodeid=None):
6080hunk ./src/allmydata/storage/server.py 7
6081 from twisted.application import service
6082 
6083 from zope.interface import implements
6084-from allmydata.interfaces import RIStorageServer, IStatsProducer
6085+from allmydata.interfaces import RIStorageServer, IStatsProducer, IStorageBackend
6086+from allmydata.util.assertutil import precondition
6087 from allmydata.util import idlib, log
6088 import allmydata # for __full_version__
6089 
6090hunk ./src/allmydata/storage/server.py 32
6091         'sharetypes': ('mutable', 'immutable'),
6092     }
6093 
6094-    def __init__(self, nodeid, backend, reserved_space=0,
6095-                 readonly_storage=False,
6096+    def __init__(self, serverid, backend, statedir,
6097                  stats_provider=None,
6098                  expiration_policy=None):
6099         service.MultiService.__init__(self)
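Note: under the new signature the caller constructs the backend and passes it in together with a state directory; reserved space and read-only behaviour are now the backend's business rather than StorageServer's. A hedged construction sketch (the path and serverid are placeholders, and the policy keys shown are just the ones read by the hunks in this patch):

    from twisted.python.filepath import FilePath
    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    storagedir = FilePath("example-node/storage")   # placeholder path
    backend = DiskBackend(storagedir, readonly=False)
    serverid = "\x00" * 20                          # placeholder 20-byte id
    ss = StorageServer(serverid, backend, storagedir,
                       expiration_policy={
                           'enabled': False,
                           'mode': 'age',
                           'override_lease_duration': None,
                           'sharetypes': ('mutable', 'immutable'),
                       })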
6100hunk ./src/allmydata/storage/server.py 36
6101-        assert isinstance(nodeid, str)
6102-        assert len(nodeid) == 20
6103-        self.my_nodeid = nodeid
6104+        precondition(IStorageBackend.providedBy(backend), backend)
6105+        precondition(isinstance(serverid, str), serverid)
6106+        precondition(len(serverid) == 20, serverid)
6107+
6108+        self.my_nodeid = serverid
6109         self.stats_provider = stats_provider
6110         if self.stats_provider:
6111             self.stats_provider.register_producer(self)
6112hunk ./src/allmydata/storage/server.py 47
6113         self._active_writers = weakref.WeakKeyDictionary()
6114         self.backend = backend
6115         self.backend.setServiceParent(self)
6116-        self.backend.set_storage_server(self)
6117+        self._statedir = statedir
6118         log.msg("StorageServer created", facility="tahoe.storage")
6119 
6120         self.latencies = {"allocate": [], # immutable
6121hunk ./src/allmydata/storage/server.py 68
6122         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
6123 
6124     def _setup_bucket_counter(self):
6125-        statefp = self.storedir.child("bucket_counter.state")
6126-        self.bucket_counter = BucketCountingCrawler(statefp)
6127+        statefp = self._statedir.child("bucket_counter.state")
6128+        self.bucket_counter = BucketCountingCrawler(self.backend, statefp)
6129         self.bucket_counter.setServiceParent(self)
6130 
6131     def _setup_lease_checker(self, expiration_policy):
6132hunk ./src/allmydata/storage/server.py 73
6133-        statefp = self.storedir.child("lease_checker.state")
6134-        historyfp = self.storedir.child("lease_checker.history")
6135-        self.lease_checker = self.LeaseCheckerClass(statefp, historyfp, expiration_policy)
6136+        statefp = self._statedir.child("lease_checker.state")
6137+        historyfp = self._statedir.child("lease_checker.history")
6138+        self.lease_checker = self.LeaseCheckerClass(self.backend, statefp, historyfp, expiration_policy)
6139         self.lease_checker.setServiceParent(self)
6140 
6141     def count(self, name, delta=1):
6142hunk ./src/allmydata/storage/server.py 140
6143         # remember: RIStatsProvider requires that our return dict
6144         # contains numeric, or None values.
6145         stats = { 'storage_server.allocated': self.allocated_size(), }
6146-        stats['storage_server.reserved_space'] = self.reserved_space
6147         for category,ld in self.get_latencies().items():
6148             for name,v in ld.items():
6149                 stats['storage_server.latencies.%s.%s' % (category, name)] = v
6150hunk ./src/allmydata/storage/server.py 188
6151         # owner.
6152         start = time.time()
6153         self.count("allocate")
6154-        incoming = set()
6155         bucketwriters = {} # k: shnum, v: BucketWriter
6156 
6157         si_s = si_b2a(storageindex)
6158hunk ./src/allmydata/storage/server.py 208
6159             # has already been written to the backend, where it will show up in
6160             # get_available_space.
6161             remaining_space -= self.allocated_size()
6162-        # self.readonly_storage causes remaining_space <= 0
6163+            # If the backend is read-only, remaining_space will be <= 0.
6164+
6165+        shareset = self.backend.get_shareset(storageindex)
6166 
6167         # Fill alreadygot with all shares that we have, not just the ones
6168         # they asked about: this will save them a lot of work. Add or update
6169hunk ./src/allmydata/storage/server.py 220
6170         # XXX should we be making the assumption here that lease info is
6171         # duplicated in all shares?
6172         alreadygot = set()
6173-        for share in self.backend.get_shares(storageindex):
6174+        for share in shareset.get_shares():
6175             share.add_or_renew_lease(lease_info)
6176             alreadygot.add(share.shnum)
6177 
6178hunk ./src/allmydata/storage/server.py 224
6179-        # all share numbers that are incoming
6180-        incoming = self.backend.get_incoming_shnums(storageindex)
6181-
6182-        for shnum in ((sharenums - alreadygot) - incoming):
6183-            if (not limited) or (remaining_space >= max_space_per_bucket):
6184-                bw = self.backend.make_bucket_writer(storageindex, shnum, max_space_per_bucket,
6185-                                                     lease_info, canary)
6186+        for shnum in sharenums - alreadygot:
6187+            if shareset.has_incoming(shnum):
6188+                # Note that we don't create BucketWriters for shnums that
6189+                # have a partial share (in incoming/), so if a second upload
6190+                # occurs while the first is still in progress, the second
6191+                # uploader will use different storage servers.
6192+                pass
6193+            elif (not limited) or (remaining_space >= max_space_per_bucket):
6194+                bw = shareset.make_bucket_writer(self, shnum, max_space_per_bucket,
6195+                                                 lease_info, canary)
6196                 bucketwriters[shnum] = bw
6197                 self._active_writers[bw] = 1
6198                 if limited:
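Note: the has_incoming check lets allocate_buckets skip share numbers whose upload is already in flight, which is what pushes a concurrent second uploader onto different servers. For a disk-style backend, one plausible sketch of the check is simply probing for the placeholder under incoming/ (hypothetical class, not the real shareset implementation):

    import os

    class ShareSetSketch(object):
        def __init__(self, incomingdir):
            self._incomingdir = incomingdir

        def has_incoming(self, shnum):
            # A share number is "incoming" if a partially-written
            # share file for it exists under incoming/.
            return os.path.exists(os.path.join(self._incomingdir, str(shnum)))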
6199hunk ./src/allmydata/storage/server.py 284
6200 
6201         try:
6202             shareset = self.backend.get_shareset(storageindex)
6203-            for share in shareset.get_shares(storageindex):
6204-                bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(self, share)
6205+            for share in shareset.get_shares():
6206+                bucketreaders[share.get_shnum()] = shareset.make_bucket_reader(self, share)
6207             return bucketreaders
6208         finally:
6209             self.add_latency("get", time.time() - start)
6210replace ./src/allmydata/storage/server.py [A-Za-z_0-9] my_nodeid _serverid
6211hunk ./src/allmydata/test/no_network.py 21
6212 from twisted.application import service
6213 from twisted.internet import defer, reactor
6214 from twisted.python.failure import Failure
6215+from twisted.python.filepath import FilePath
6216 from foolscap.api import Referenceable, fireEventually, RemoteException
6217 from base64 import b32encode
6218hunk ./src/allmydata/test/no_network.py 24
6219+
6220 from allmydata import uri as tahoe_uri
6221 from allmydata.client import Client
6222 from allmydata.storage.server import StorageServer
6223hunk ./src/allmydata/test/no_network.py 28
6224+from allmydata.storage.backends.disk.disk_backend import DiskBackend
6225 from allmydata.util import fileutil, idlib, hashutil
6226 from allmydata.util.hashutil import sha1
6227 from allmydata.test.common_web import HTTPClientGETFactory
6228hunk ./src/allmydata/test/no_network.py 262
6229 
6230     def make_server(self, i, readonly=False):
6231         serverid = hashutil.tagged_hash("serverid", str(i))[:20]
6232-        serverdir = os.path.join(self.basedir, "servers",
6233-                                 idlib.shortnodeid_b2a(serverid), "storage")
6234-        fileutil.make_dirs(serverdir)
6235-        ss = StorageServer(serverdir, serverid, stats_provider=SimpleStats(),
6236-                           readonly_storage=readonly)
6237+        storagedir = FilePath(self.basedir).child("servers").child(idlib.shortnodeid_b2a(serverid)).child("storage")
6238+
6239+        # The backend will make the storage directory and any necessary parents.
6240+        backend = DiskBackend(storagedir, readonly=readonly)
6241+        ss = StorageServer(serverid, backend, storagedir, stats_provider=SimpleStats())
6242         ss._no_network_server_number = i
6243         return ss
6244 
6245hunk ./src/allmydata/test/no_network.py 276
6246         middleman = service.MultiService()
6247         middleman.setServiceParent(self)
6248         ss.setServiceParent(middleman)
6249-        serverid = ss.my_nodeid
6250+        serverid = ss.get_serverid()
6251         self.servers_by_number[i] = ss
6252         wrapper = wrap_storage_server(ss)
6253         self.wrappers_by_id[serverid] = wrapper
6254hunk ./src/allmydata/test/no_network.py 295
6255         # it's enough to remove the server from c._servers (we don't actually
6256         # have to detach and stopService it)
6257         for i,ss in self.servers_by_number.items():
6258-            if ss.my_nodeid == serverid:
6259+            if ss.get_serverid() == serverid:
6260                 del self.servers_by_number[i]
6261                 break
6262         del self.wrappers_by_id[serverid]
6263hunk ./src/allmydata/test/no_network.py 351
6264     def get_serverdir(self, i):
6265         return self.g.servers_by_number[i].backend.storedir
6266 
6267+    def remove_server(self, i):
6268+        self.g.remove_server(self.g.servers_by_number[i].get_serverid())
6269+
6270     def iterate_servers(self):
6271         for i in sorted(self.g.servers_by_number.keys()):
6272             ss = self.g.servers_by_number[i]
6273hunk ./src/allmydata/test/test_backends.py 51
6274         self.shareincomingname = self.sharedirincomingname.child('0')
6275         self.sharefinalname = self.sharedirfinalname.child('0')
6276 
6277-        self.FilePathFake = mock.patch('allmydata.storage.backends.disk.core.FilePath', new = MockFilePath)
6278+        # FIXME: these patches won't work; disk_backend no longer imports FilePath, BucketCountingCrawler,
6279+        # or LeaseCheckingCrawler.
6280+
6281+        self.FilePathFake = mock.patch('allmydata.storage.backends.disk.disk_backend.FilePath', new = MockFilePath)
6282         self.FilePathFake.__enter__()
6283 
6284hunk ./src/allmydata/test/test_backends.py 57
6285-        self.BCountingCrawler = mock.patch('allmydata.storage.backends.disk.core.BucketCountingCrawler')
6286+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.BucketCountingCrawler')
6287         FakeBCC = self.BCountingCrawler.__enter__()
6288         FakeBCC.side_effect = self.call_FakeBCC
6289 
6290hunk ./src/allmydata/test/test_backends.py 61
6291-        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.disk.core.LeaseCheckingCrawler')
6292+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.LeaseCheckingCrawler')
6293         FakeLCC = self.LeaseCheckingCrawler.__enter__()
6294         FakeLCC.side_effect = self.call_FakeLCC
6295 
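Note: the FIXME above is an instance of a general mock.patch rule: patch the name in the namespace that looks it up, not where it is defined. Since disk_backend no longer imports FilePath, BucketCountingCrawler, or LeaseCheckingCrawler, there is no attribute at those dotted paths left to replace. A small runnable sketch of the rule, using os.path.exists as a stand-in:

    import os.path
    import mock

    # mock.patch swaps the attribute in the namespace you name for the
    # duration of the block. Code that looks the name up through that
    # namespace sees the fake; code that bound its own reference earlier
    # (e.g. via "from os.path import exists") does not.
    with mock.patch('os.path.exists') as fake_exists:
        fake_exists.return_value = True
        assert os.path.exists('/nonexistent/path')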
6296hunk ./src/allmydata/test/test_repairer.py 537
6297         # happiness setting.
6298         def _delete_some_servers(ignored):
6299             for i in xrange(7):
6300-                self.g.remove_server(self.g.servers_by_number[i].my_nodeid)
6301+                self.remove_server(i)
6302 
6303             assert len(self.g.servers_by_number) == 3
6304 
6305hunk ./src/allmydata/test/test_upload.py 1103
6306                 self._copy_share_to_server(i, 2)
6307         d.addCallback(_copy_shares)
6308         # Remove the first server, and add a placeholder with share 0
6309-        d.addCallback(lambda ign:
6310-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
6311+        d.addCallback(lambda ign: self.remove_server(0))
6312         d.addCallback(lambda ign:
6313             self._add_server_with_share(server_number=4, share_number=0))
6314         # Now try uploading.
6315hunk ./src/allmydata/test/test_upload.py 1134
6316         d.addCallback(lambda ign:
6317             self._add_server(server_number=4))
6318         d.addCallback(_copy_shares)
6319-        d.addCallback(lambda ign:
6320-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
6321+        d.addCallback(lambda ign: self.remove_server(0))
6322         d.addCallback(_reset_encoding_parameters)
6323         d.addCallback(lambda client:
6324             client.upload(upload.Data("data" * 10000, convergence="")))
6325hunk ./src/allmydata/test/test_upload.py 1196
6326                 self._copy_share_to_server(i, 2)
6327         d.addCallback(_copy_shares)
6328         # Remove server 0, and add another in its place
6329-        d.addCallback(lambda ign:
6330-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
6331+        d.addCallback(lambda ign: self.remove_server(0))
6332         d.addCallback(lambda ign:
6333             self._add_server_with_share(server_number=4, share_number=0,
6334                                         readonly=True))
6335hunk ./src/allmydata/test/test_upload.py 1237
6336             for i in xrange(1, 10):
6337                 self._copy_share_to_server(i, 2)
6338         d.addCallback(_copy_shares)
6339-        d.addCallback(lambda ign:
6340-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
6341+        d.addCallback(lambda ign: self.remove_server(0))
6342         def _reset_encoding_parameters(ign, happy=4):
6343             client = self.g.clients[0]
6344             client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
6345hunk ./src/allmydata/test/test_upload.py 1273
6346         # remove the original server
6347         # (necessary to ensure that the Tahoe2ServerSelector will distribute
6348         #  all the shares)
6349-        def _remove_server(ign):
6350-            server = self.g.servers_by_number[0]
6351-            self.g.remove_server(server.my_nodeid)
6352-        d.addCallback(_remove_server)
6353+        d.addCallback(lambda ign: self.remove_server(0))
6354         # This should succeed; we still have 4 servers, and the
6355         # happiness of the upload is 4.
6356         d.addCallback(lambda ign:
6357hunk ./src/allmydata/test/test_upload.py 1285
6358         d.addCallback(lambda ign:
6359             self._setup_and_upload())
6360         d.addCallback(_do_server_setup)
6361-        d.addCallback(_remove_server)
6362+        d.addCallback(lambda ign: self.remove_server(0))
6363         d.addCallback(lambda ign:
6364             self.shouldFail(UploadUnhappinessError,
6365                             "test_dropped_servers_in_encoder",
6366hunk ./src/allmydata/test/test_upload.py 1307
6367             self._add_server_with_share(4, 7, readonly=True)
6368             self._add_server_with_share(5, 8, readonly=True)
6369         d.addCallback(_do_server_setup_2)
6370-        d.addCallback(_remove_server)
6371+        d.addCallback(lambda ign: self.remove_server(0))
6372         d.addCallback(lambda ign:
6373             self._do_upload_with_broken_servers(1))
6374         d.addCallback(_set_basedir)
6375hunk ./src/allmydata/test/test_upload.py 1314
6376         d.addCallback(lambda ign:
6377             self._setup_and_upload())
6378         d.addCallback(_do_server_setup_2)
6379-        d.addCallback(_remove_server)
6380+        d.addCallback(lambda ign: self.remove_server(0))
6381         d.addCallback(lambda ign:
6382             self.shouldFail(UploadUnhappinessError,
6383                             "test_dropped_servers_in_encoder",
6384hunk ./src/allmydata/test/test_upload.py 1528
6385             for i in xrange(1, 10):
6386                 self._copy_share_to_server(i, 1)
6387         d.addCallback(_copy_shares)
6388-        d.addCallback(lambda ign:
6389-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
6390+        d.addCallback(lambda ign: self.remove_server(0))
6391         def _prepare_client(ign):
6392             client = self.g.clients[0]
6393             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
6394hunk ./src/allmydata/test/test_upload.py 1550
6395         def _setup(ign):
6396             for i in xrange(1, 11):
6397                 self._add_server(server_number=i)
6398-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
6399+            self.remove_server(0)
6400             c = self.g.clients[0]
6401             # We set happy to an unsatisfiable value so that we can check the
6402             # counting in the exception message. The same progress message
6403hunk ./src/allmydata/test/test_upload.py 1577
6404                 self._add_server(server_number=i)
6405             self._add_server(server_number=11, readonly=True)
6406             self._add_server(server_number=12, readonly=True)
6407-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
6408+            self.remove_server(0)
6409             c = self.g.clients[0]
6410             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
6411             return c
6412hunk ./src/allmydata/test/test_upload.py 1605
6413             # the first one that the selector sees.
6414             for i in xrange(10):
6415                 self._copy_share_to_server(i, 9)
6416-            # Remove server 0, and its contents
6417-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
6418+            self.remove_server(0)
6419             # Make happiness unsatisfiable
6420             c = self.g.clients[0]
6421             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
6422hunk ./src/allmydata/test/test_upload.py 1625
6423         def _then(ign):
6424             for i in xrange(1, 11):
6425                 self._add_server(server_number=i, readonly=True)
6426-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
6427+            self.remove_server(0)
6428             c = self.g.clients[0]
6429             c.DEFAULT_ENCODING_PARAMETERS['k'] = 2
6430             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
6431hunk ./src/allmydata/test/test_upload.py 1661
6432             self._add_server(server_number=4, readonly=True))
6433         d.addCallback(lambda ign:
6434             self._add_server(server_number=5, readonly=True))
6435-        d.addCallback(lambda ign:
6436-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
6437+        d.addCallback(lambda ign: self.remove_server(0))
6438         def _reset_encoding_parameters(ign, happy=4):
6439             client = self.g.clients[0]
6440             client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
6441hunk ./src/allmydata/test/test_upload.py 1696
6442         d.addCallback(lambda ign:
6443             self._add_server(server_number=2))
6444         def _break_server_2(ign):
6445-            serverid = self.g.servers_by_number[2].my_nodeid
6446+            serverid = self.get_server(2).get_serverid()
6447             self.g.break_server(serverid)
6448         d.addCallback(_break_server_2)
6449         d.addCallback(lambda ign:
6450hunk ./src/allmydata/test/test_upload.py 1705
6451             self._add_server(server_number=4, readonly=True))
6452         d.addCallback(lambda ign:
6453             self._add_server(server_number=5, readonly=True))
6454-        d.addCallback(lambda ign:
6455-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
6456+        d.addCallback(lambda ign: self.remove_server(0))
6457         d.addCallback(_reset_encoding_parameters)
6458         d.addCallback(lambda client:
6459             self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
6460hunk ./src/allmydata/test/test_upload.py 1816
6461             # Copy shares
6462             self._copy_share_to_server(1, 1)
6463             self._copy_share_to_server(2, 1)
6464-            # Remove server 0
6465-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
6466+            self.remove_server(0)
6467             client = self.g.clients[0]
6468             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 3
6469             return client
6470hunk ./src/allmydata/test/test_upload.py 1930
6471                                         readonly=True)
6472             self._add_server_with_share(server_number=4, share_number=3,
6473                                         readonly=True)
6474-            # Remove server 0.
6475-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
6476+            self.remove_server(0)
6477             # Set the client appropriately
6478             c = self.g.clients[0]
6479             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
6480hunk ./src/allmydata/test/test_util.py 9
6481 from twisted.trial import unittest
6482 from twisted.internet import defer, reactor
6483 from twisted.python.failure import Failure
6484+from twisted.python.filepath import FilePath
6485 from twisted.python import log
6486 from pycryptopp.hash.sha256 import SHA256 as _hash
6487 
6488hunk ./src/allmydata/test/test_util.py 508
6489                 os.chdir(saved_cwd)
6490 
6491     def test_disk_stats(self):
6492-        avail = fileutil.get_available_space('.', 2**14)
6493+        avail = fileutil.get_available_space(FilePath('.'), 2**14)
6494         if avail == 0:
6495             raise unittest.SkipTest("This test will spuriously fail if there is no disk space left.")
6496 
6497hunk ./src/allmydata/test/test_util.py 512
6498-        disk = fileutil.get_disk_stats('.', 2**13)
6499+        disk = fileutil.get_disk_stats(FilePath('.'), 2**13)
6500         self.failUnless(disk['total'] > 0, disk['total'])
6501         self.failUnless(disk['used'] > 0, disk['used'])
6502         self.failUnless(disk['free_for_root'] > 0, disk['free_for_root'])
6503hunk ./src/allmydata/test/test_util.py 521
6504 
6505     def test_disk_stats_avail_nonnegative(self):
6506         # This test will spuriously fail if you have more than 2^128
6507-        # bytes of available space on your filesystem.
6508-        disk = fileutil.get_disk_stats('.', 2**128)
6509+        # bytes of available space on your filesystem (lucky you).
6510+        disk = fileutil.get_disk_stats(FilePath('.'), 2**128)
6511         self.failUnlessEqual(disk['avail'], 0)
6512 
6513 class PollMixinTests(unittest.TestCase):
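Note: these fileutil helpers now take a Twisted FilePath instead of a string path; the second argument is a reservation, in bytes, subtracted from the reported figures. A brief usage sketch (the keys shown are the ones the tests above inspect):

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    here = FilePath('.')
    avail = fileutil.get_available_space(here, 2**20)   # reserve 1 MiB
    disk = fileutil.get_disk_stats(here, 2**20)
    print disk['total'], disk['used'], disk['free_for_root'], disk['avail']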
6514hunk ./src/allmydata/util/fileutil.py 420
6515         n_total            = c_ulonglong(0)
6516         n_free_for_root    = c_ulonglong(0)
6517         retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
6518-                                               byref(n_total),
6519-                                               byref(n_free_for_root))
6520+                                                      byref(n_total),
6521+                                                      byref(n_free_for_root))
6522         if retval == 0:
6523             raise OSError("Windows error %d attempting to get disk statistics for %r"
6524                           % (GetLastError(), whichdirfp.path))
6525}
6526
6527Context:
6528
6529[misc/coding_tools/check_interfaces.py: report all violations rather than only one for a given class, by including a forked version of verifyClass. refs #1474
6530david-sarah@jacaranda.org**20110916223450
6531 Ignore-this: 927efeecf4d12588316826a4b3479aa9
6532]
6533[misc/coding_tools/check_interfaces.py: use os.walk instead of FilePath, since this script shouldn't really depend on Twisted. refs #1474
6534david-sarah@jacaranda.org**20110916212633
6535 Ignore-this: 46eeb4236b34375227dac71ef53f5428
6536]
6537[misc/coding_tools/check-interfaces.py: reduce false-positives by adding Dummy* to the set of excluded classnames, and bench-* to the set of excluded basenames. refs #1474
6538david-sarah@jacaranda.org**20110916212624
6539 Ignore-this: 4e78f6e6fe6c0e9be9df826a0e206804
6540]
6541[Make platform-detection code tolerate linux-3.0, patch by zooko.
6542Brian Warner <warner@lothar.com>**20110915202620
6543 Ignore-this: af63cf9177ae531984dea7a1cad03762
6544 
6545 Otherwise address-autodetection can't find ifconfig. refs #1536
6546]
6547[test_web.py: fix a bug in _count_leases that was causing us to check only the lease count of one share file, not of all share files as intended.
6548david-sarah@jacaranda.org**20110915185126
6549 Ignore-this: d96632bc48d770b9b577cda1bbd8ff94
6550]
6551[Add a script 'misc/coding_tools/check-interfaces.py' that checks whether zope interfaces are enforced. Also add 'check-interfaces', 'version-and-path', and 'code-checks' targets to the Makefile. fixes #1474
6552david-sarah@jacaranda.org**20110915161532
6553 Ignore-this: 32d9bdc5bc4a86d21e927724560ad4b4
6554]
6555[interfaces.py: 'which -> that' grammar cleanup.
6556david-sarah@jacaranda.org**20110825003217
6557 Ignore-this: a3e15f3676de1b346ad78aabdfb8cac6
6558]
6559[Fix interfaces related to MDMF. refs #393
6560david-sarah@jacaranda.org**20110825013046
6561 Ignore-this: ee510c7261f8b328f0db218d71208ca3
6562]
6563[tests: bump up the timeout in this test that fails on FreeStorm's CentOS in order to see if it is just very slow
6564zooko@zooko.com**20110913024255
6565 Ignore-this: 6a86d691e878cec583722faad06fb8e4
6566]
6567[interfaces: document that the 'fills-holes-with-zero-bytes' key should be used to detect whether a storage server has that behavior. refs #1528
6568david-sarah@jacaranda.org**20110913002843
6569 Ignore-this: 1a00a6029d40f6792af48c5578c1fd69
6570]
6571[CREDITS: more CREDITS for Kevan and David-Sarah
6572zooko@zooko.com**20110912223357
6573 Ignore-this: 4ea8f0d6f2918171d2f5359c25ad1ada
6574]
6575[merge NEWS about the mutable file bounds fixes with NEWS about work-in-progress
6576zooko@zooko.com**20110913205521
6577 Ignore-this: 4289a4225f848d6ae6860dd39bc92fa8
6578]
6579[doc: add NEWS item about fixes to potential palimpsest issues in mutable files
6580zooko@zooko.com**20110912223329
6581 Ignore-this: 9d63c95ddf95c7d5453c94a1ba4d406a
6582 ref. #1528
6583]
6584[merge the NEWS about the security fix (#1528) with the work-in-progress NEWS
6585zooko@zooko.com**20110913205153
6586 Ignore-this: 88e88a2ad140238c62010cf7c66953fc
6587]
6588[doc: add NEWS entry about the issue which allows unauthorized deletion of shares
6589zooko@zooko.com**20110912223246
6590 Ignore-this: 77e06d09103d2ef6bb51ea3e5d6e80b0
6591 ref. #1528
6592]
6593[doc: add entry in known_issues.rst about the issue which allows unauthorized deletion of shares
6594zooko@zooko.com**20110912223135
6595 Ignore-this: b26c6ea96b6c8740b93da1f602b5a4cd
6596 ref. #1528
6597]
6598[storage: more paranoid handling of bounds and palimpsests in mutable share files
6599zooko@zooko.com**20110912222655
6600 Ignore-this: a20782fa423779ee851ea086901e1507
6601 * storage server ignores requests to extend shares by sending a new_length
6602 * storage server fills exposed holes (created by sending a write vector whose offset begins after the end of the current data) with 0 to avoid "palimpsest" exposure of previous contents
6603 * storage server zeroes out lease info at the old location when moving it to a new location
6604 ref. #1528
6605]
6606[storage: test that the storage server ignores requests to extend shares by sending a new_length, and that the storage server fills exposed holes with 0 to avoid "palimpsest" exposure of previous contents
6607zooko@zooko.com**20110912222554
6608 Ignore-this: 61ebd7b11250963efdf5b1734a35271
6609 ref. #1528
6610]
6611[immutable: prevent clients from reading past the end of share data, which would allow them to learn the cancellation secret
6612zooko@zooko.com**20110912222458
6613 Ignore-this: da1ebd31433ea052087b75b2e3480c25
6614 Declare explicitly that we prevent this problem in the server's version dict.
6615 fixes #1528 (there are two patches that are each a sufficient fix to #1528 and this is one of them)
6616]
6617[storage: remove the storage server's "remote_cancel_lease" function
6618zooko@zooko.com**20110912222331
6619 Ignore-this: 1c32dee50e0981408576daffad648c50
6620 We're removing this function because it is currently unused, because it is dangerous, and because the bug described in #1528 leaks the cancellation secret, which allows anyone who knows a file's storage index to abuse this function to delete shares of that file.
6621 fixes #1528 (there are two patches that are each a sufficient fix to #1528 and this is one of them)
6622]
6623[storage: test that the storage server does *not* have a "remote_cancel_lease" function
6624zooko@zooko.com**20110912222324
6625 Ignore-this: 21c652009704652d35f34651f98dd403
6626 We're removing this function because it is currently unused, because it is dangerous, and because the bug described in #1528 leaks the cancellation secret, which allows anyone who knows a file's storage index to abuse this function to delete shares of that file.
6627 ref. #1528
6628]
6629[immutable: test whether the server allows clients to read past the end of share data, which would allow them to learn the cancellation secret
6630zooko@zooko.com**20110912221201
6631 Ignore-this: 376e47b346c713d37096531491176349
6632 Also test whether the server explicitly declares that it prevents this problem.
6633 ref #1528
6634]
6635[Retrieve._activate_enough_peers: rewrite Verify logic
6636Brian Warner <warner@lothar.com>**20110909181150
6637 Ignore-this: 9367c11e1eacbf025f75ce034030d717
6638]
6639[Retrieve: implement/test stopProducing
6640Brian Warner <warner@lothar.com>**20110909181150
6641 Ignore-this: 47b2c3df7dc69835e0a066ca12e3c178
6642]
6643[move DownloadStopped from download.common to interfaces
6644Brian Warner <warner@lothar.com>**20110909181150
6645 Ignore-this: 8572acd3bb16e50341dbed8eb1d90a50
6646]
6647[retrieve.py: remove vestigal self._validated_readers
6648Brian Warner <warner@lothar.com>**20110909181150
6649 Ignore-this: faab2ec14e314a53a2ffb714de626e2d
6650]
6651[Retrieve: rewrite flow-control: use a top-level loop() to catch all errors
6652Brian Warner <warner@lothar.com>**20110909181150
6653 Ignore-this: e162d2cd53b3d3144fc6bc757e2c7714
6654 
6655 This ought to close the potential for dropped errors and hanging downloads.
6656 Verify needs to be examined, I may have broken it, although all tests pass.
6657]
6658[Retrieve: merge _validate_active_prefixes into _add_active_peers
6659Brian Warner <warner@lothar.com>**20110909181150
6660 Ignore-this: d3ead31e17e69394ae7058eeb5beaf4c
6661]
6662[Retrieve: remove the initial prefix-is-still-good check
6663Brian Warner <warner@lothar.com>**20110909181150
6664 Ignore-this: da66ee51c894eaa4e862e2dffb458acc
6665 
6666 This check needs to be done with each fetch from the storage server, to
6667 detect when someone has changed the share (i.e. our servermap goes stale).
6668 Doing it just once at the beginning of retrieve isn't enough: a write might
6669 occur after the first segment but before the second, etc.
6670 
6671 _try_to_validate_prefix() was not removed: it will be used by the future
6672 check-with-each-fetch code.
6673 
6674 test_mutable.Roundtrip.test_corrupt_all_seqnum_late was disabled, since it
6675 fails until this check is brought back. (the corruption it applies only
6676 touches the prefix, not the block data, so the check-less retrieve actually
6677 tolerates it). Don't forget to re-enable it once the check is brought back.
6678]
6679[MDMFSlotReadProxy: remove the queue
6680Brian Warner <warner@lothar.com>**20110909181150
6681 Ignore-this: 96673cb8dda7a87a423de2f4897d66d2
6682 
6683 This is a neat trick to reduce Foolscap overhead, but the need for an
6684 explicit flush() complicates the Retrieve path and makes it prone to
6685 lost-progress bugs.
6686 
6687 Also change test_mutable.FakeStorageServer to tolerate multiple reads of the
6688 same share in a row, a limitation exposed by turning off the queue.
6689]
6690[rearrange Retrieve: first step, shouldn't change order of execution
6691Brian Warner <warner@lothar.com>**20110909181149
6692 Ignore-this: e3006368bfd2802b82ea45c52409e8d6
6693]
6694[CLI: test_cli.py -- remove an unnecessary call in test_mkdir_mutable_type. refs #1527
6695david-sarah@jacaranda.org**20110906183730
6696 Ignore-this: 122e2ffbee84861c32eda766a57759cf
6697]
6698[CLI: improve test for 'tahoe mkdir --mutable-type='. refs #1527
6699david-sarah@jacaranda.org**20110906183020
6700 Ignore-this: f1d4598e6c536f0a2b15050b3bc0ef9d
6701]
6702[CLI: make the --mutable-type option value for 'tahoe put' and 'tahoe mkdir' case-insensitive, and change --help for these commands accordingly. fixes #1527
6703david-sarah@jacaranda.org**20110905020922
6704 Ignore-this: 75a6df0a2df9c467d8c010579e9a024e
6705]
6706[cli: make --mutable-type imply --mutable in 'tahoe put'
6707Kevan Carstensen <kevan@isnotajoke.com>**20110903190920
6708 Ignore-this: 23336d3c43b2a9554e40c2a11c675e93
6709]
6710[SFTP: add a comment about a subtle interaction between OverwriteableFileConsumer and GeneralSFTPFile, and test the case it is commenting on.
6711david-sarah@jacaranda.org**20110903222304
6712 Ignore-this: 980c61d4dd0119337f1463a69aeebaf0
6713]
6714[improve the storage/mutable.py asserts even more
6715warner@lothar.com**20110901160543
6716 Ignore-this: 5b2b13c49bc4034f96e6e3aaaa9a9946
6717]
6718[storage/mutable.py: special characters in struct.foo arguments indicate standard as opposed to native sizes, we should be using these characters in these asserts
6719wilcoxjg@gmail.com**20110901084144
6720 Ignore-this: 28ace2b2678642e4d7269ddab8c67f30
6721]
6722[docs/write_coordination.rst: fix formatting and add more specific warning about access via sshfs.
6723david-sarah@jacaranda.org**20110831232148
6724 Ignore-this: cd9c851d3eb4e0a1e088f337c291586c
6725]
6726[test_mutable.Version: consolidate some tests, reduce runtime from 19s to 15s
6727warner@lothar.com**20110831050451
6728 Ignore-this: 64815284d9e536f8f3798b5f44cf580c
6729]
6730[mutable/retrieve: handle the case where self._read_length is 0.
6731Kevan Carstensen <kevan@isnotajoke.com>**20110830210141
6732 Ignore-this: fceafbe485851ca53f2774e5a4fd8d30
6733 
6734 Note that the downloader will still fetch a segment for a zero-length
6735 read, which is wasteful. Fixing that isn't specifically required to fix
6736 #1512, but it should probably be fixed before 1.9.
6737]
6738[NEWS: added summary of all changes since 1.8.2. Needs editing.
6739Brian Warner <warner@lothar.com>**20110830163205
6740 Ignore-this: 273899b37a899fc6919b74572454b8b2
6741]
6742[test_mutable.Update: only upload the files needed for each test. refs #1500
6743Brian Warner <warner@lothar.com>**20110829072717
6744 Ignore-this: 4d2ab4c7523af9054af7ecca9c3d9dc7
6745 
6746 This first step shaves 15% off the runtime: from 139s to 119s on my laptop.
6747 It also fixes a couple of places where a Deferred was being dropped, which
6748 would cause two tests to run in parallel and also confuse error reporting.
6749]
6750[Let Uploader retain History instead of passing it into upload(). Fixes #1079.
6751Brian Warner <warner@lothar.com>**20110829063246
6752 Ignore-this: 3902c58ec12bd4b2d876806248e19f17
6753 
6754 This consistently records all immutable uploads in the Recent Uploads And
6755 Downloads page, regardless of code path. Previously, certain webapi upload
6756 operations (like PUT /uri/$DIRCAP/newchildname) failed to pass the History
6757 object and were left out.
6758]
6759[Fix mutable publish/retrieve timing status displays. Fixes #1505.
6760Brian Warner <warner@lothar.com>**20110828232221
6761 Ignore-this: 4080ce065cf481b2180fd711c9772dd6
6762 
6763 publish:
6764 * encrypt and encode times are cumulative, not just current-segment
6765 
6766 retrieve:
6767 * same for decrypt and decode times
6768 * update "current status" to include segment number
6769 * set status to Finished/Failed when download is complete
6770 * set progress to 1.0 when complete
6771 
6772 More improvements to consider:
6773 * progress is currently 0% or 100%: should calculate how many segments are
6774   involved (remembering retrieve can be less than the whole file) and set it
6775   to a fraction
6776 * "fetch" time is fuzzy: what we want is to know how much of the delay is not
6777   our own fault, but since we do decode/decrypt work while waiting for more
6778   shares, it's not straightforward
6779]
6780[Teach 'tahoe debug catalog-shares about MDMF. Closes #1507.
6781Brian Warner <warner@lothar.com>**20110828080931
6782 Ignore-this: 56ef2951db1a648353d7daac6a04c7d1
6783]
6784[debug.py: remove some dead comments
6785Brian Warner <warner@lothar.com>**20110828074556
6786 Ignore-this: 40e74040dd4d14fd2f4e4baaae506b31
6787]
6788[hush pyflakes
6789Brian Warner <warner@lothar.com>**20110828074254
6790 Ignore-this: bef9d537a969fa82fe4decc4ba2acb09
6791]
6792[MutableFileNode.set_downloader_hints: never depend upon order of dict.values()
6793Brian Warner <warner@lothar.com>**20110828074103
6794 Ignore-this: caaf1aa518dbdde4d797b7f335230faa
6795 
6796 The old code was calculating the "extension parameters" (a list) from the
6797 downloader hints (a dictionary) with hints.values(), which is not stable, and
6798 would result in corrupted filecaps (with the 'k' and 'segsize' hints
6799 occasionally swapped). The new code always uses [k,segsize].
6800]
6801[layout.py: fix MDMF share layout documentation
6802Brian Warner <warner@lothar.com>**20110828073921
6803 Ignore-this: 3f13366fed75b5e31b51ae895450a225
6804]
6805[teach 'tahoe debug dump-share' about MDMF and offsets. refs #1507
6806Brian Warner <warner@lothar.com>**20110828073834
6807 Ignore-this: 3a9d2ef9c47a72bf1506ba41199a1dea
6808]
6809[test_mutable.Version.test_debug: use splitlines() to fix buildslaves
6810Brian Warner <warner@lothar.com>**20110828064728
6811 Ignore-this: c7f6245426fc80b9d1ae901d5218246a
6812 
6813 Any slave running in a directory with spaces in the name was miscounting
6814 shares, causing the test to fail.
6815]
6816[test_mutable.Version: exercise 'tahoe debug find-shares' on MDMF. refs #1507
6817Brian Warner <warner@lothar.com>**20110828005542
6818 Ignore-this: cb20bea1c28bfa50a72317d70e109672
6819 
6820 Also changes NoNetworkGrid to put shares in storage/shares/ .
6821]
6822[test_mutable.py: oops, missed a .todo
6823Brian Warner <warner@lothar.com>**20110828002118
6824 Ignore-this: fda09ae86481352b7a627c278d2a3940
6825]
6826[test_mutable: merge davidsarah's patch with my Version refactorings
6827warner@lothar.com**20110827235707
6828 Ignore-this: b5aaf481c90d99e33827273b5d118fd0
6829]
6830[Make the immutable/read-only constraint checking for MDMF URIs identical to that for SSK URIs. refs #393
6831david-sarah@jacaranda.org**20110823012720
6832 Ignore-this: e1f59d7ff2007c81dbef2aeb14abd721
6833]
6834[Additional tests for MDMF URIs and for zero-length files. refs #393
6835david-sarah@jacaranda.org**20110823011532
6836 Ignore-this: a7cc0c09d1d2d72413f9cd227c47a9d5
6837]
6838[Additional tests for zero-length partial reads and updates to mutable versions. refs #393
6839david-sarah@jacaranda.org**20110822014111
6840 Ignore-this: 5fc6f4d06e11910124e4a277ec8a43ea
6841]
6842[test_mutable.Version: factor out some expensive uploads, save 25% runtime
6843Brian Warner <warner@lothar.com>**20110827232737
6844 Ignore-this: ea37383eb85ea0894b254fe4dfb45544
6845]
6846[SDMF: update filenode with correct k/N after Retrieve. Fixes #1510.
6847Brian Warner <warner@lothar.com>**20110827225031
6848 Ignore-this: b50ae6e1045818c400079f118b4ef48
6849 
6850 Without this, we get a regression when modifying a mutable file that was
6851 created with more shares (larger N) than our current tahoe.cfg . The
6852 modification attempt creates new versions of the (0,1,..,newN-1) shares, but
6853 leaves the old versions of the (newN,..,oldN-1) shares alone (and throws a
6854 assertion error in SDMFSlotWriteProxy.finish_publishing in the process).
6855 
6856 The mixed versions that result (some shares with e.g. N=10, some with N=20,
6857 such that both versions are recoverable) cause problems for the Publish code,
6858 even before MDMF landed. Might be related to refs #1390 and refs #1042.
6859]
6860[layout.py: annotate assertion to figure out 'tahoe backup' failure
6861Brian Warner <warner@lothar.com>**20110827195253
6862 Ignore-this: 9b92b954e3ed0d0f80154fff1ff674e5
6863]
6864[Add 'tahoe debug dump-cap' support for MDMF, DIR2-CHK, DIR2-MDMF. refs #1507.
6865Brian Warner <warner@lothar.com>**20110827195048
6866 Ignore-this: 61c6af5e33fc88e0251e697a50addb2c
6867 
6868 This also adds tests for all those cases, and fixes an omission in uri.py
6869 that broke parsing of DIR2-MDMF-Verifier and DIR2-CHK-Verifier.
6870]
6871[MDMF: more writable/writeable consistentifications
6872warner@lothar.com**20110827190602
6873 Ignore-this: 22492a9e20c1819ddb12091062888b55
6874]
6875[MDMF: s/Writable/Writeable/g, for consistency with existing SDMF code
6876warner@lothar.com**20110827183357
6877 Ignore-this: 9dd312acedbdb2fc2f7bef0d0fb17c0b
6878]
6879[setup.cfg: remove no-longer-supported test_mac_diskimage alias. refs #1479
6880david-sarah@jacaranda.org**20110826230345
6881 Ignore-this: 40e908b8937322a290fb8012bfcad02a
6882]
6883[test_mutable.Update: increase timeout from 120s to 400s, slaves are failing
6884Brian Warner <warner@lothar.com>**20110825230140
6885 Ignore-this: 101b1924a30cdbda9b2e419e95ca15ec
6886]
6887[tests: fix check_memory test
6888zooko@zooko.com**20110825201116
6889 Ignore-this: 4d66299fa8cb61d2ca04b3f45344d835
6890 fixes #1503
6891]
6892[TAG allmydata-tahoe-1.9.0a1
6893warner@lothar.com**20110825161122
6894 Ignore-this: 3cbf49f00dbda58189f893c427f65605
6895]
6896[touch NEWS to trigger buildslaves
6897warner@lothar.com**20110825161026
6898 Ignore-this: 3d444737d005a9051780d15604166401
6899]
6900[test_mutable.Update: remove .timeout overrides, otherwise tests ERROR
6901Brian Warner <warner@lothar.com>**20110825022455
6902 Ignore-this: 140ea1f7207ffd68be40e112f6e3d310
6903]
6904[blacklist.py: add read() method too, for completeness
6905warner@lothar.com**20110825021902
6906 Ignore-this: c79a429f311b01732eba2a71119e84
6907]
6908[Implementation, tests and docs for blacklists. This version allows listing directories containing a blacklisted child. Inclusion of blacklist.py fixed. fixes #1425
6909david-sarah@jacaranda.org**20110824155928
6910 Ignore-this: a306f36bb6640eaf046e66dc4beeb11c
6911]
6912[mutable/layout.py: fix unused import. refs #393
6913david-sarah@jacaranda.org**20110816225043
6914 Ignore-this: 7c9d6d91521ceb9a7abd14b2c60c0604
6915]
6916[mutable/retrieve.py: cosmetics and remove a stale comment. refs #393
6917david-sarah@jacaranda.org**20110816214612
6918 Ignore-this: 916e60c9dff1ef85595822e609ff34b7
6919]
6920[mutable/filenode.py: don't fetch more segments than necesasry to update the file
6921Kevan Carstensen <kevan@isnotajoke.com>**20110813210005
6922 Ignore-this: 2b0ad0533baa6f19f18851317dfc9f15
6923]
6924[test/test_mutable: test for incorrect div_ceil equations
6925Kevan Carstensen <kevan@isnotajoke.com>**20110813183936
6926 Ignore-this: 74e6061ab2ec5e706a1235611f87d5d6
6927]
6928[mutable/retrieve.py: use floor division to calculate segment boundaries, don't fetch more segments than necessary
6929Kevan Carstensen <kevan@isnotajoke.com>**20110813183833
6930 Ignore-this: 3e272249107afd3fbc1dd30c6a4f1e31
6931]
6932[mdmf: clean up boolean expressions, correct typos, remove self._paused, and don't unconditionally initialize block hash trees, all as suggested by davidsarahs' review comments
6933Kevan Carstensen <kevan@isnotajoke.com>**20110813183710
6934 Ignore-this: cc6ad9f98b64f379151aa58b77b6c4e5
6935]
6936[now that tests pass with full-size keys, return test-keys to normal (522bit)
6937warner@lothar.com**20110811175418
6938 Ignore-this: dbce8a6699ba9a90d91cffbc8aa87900
6939]
6940[fix SHARE_HASH_CHAIN_SIZE computation
6941warner@lothar.com**20110811175350
6942 Ignore-this: 4508359d2207c8c1b7552b546697264
6943]
6944[More idiomatic resolution of the conflict between ticket393-MDMF-2 and trunk. refs #393
6945david-sarah@jacaranda.org**20110810202942
6946 Ignore-this: 7fc54a30ab0bc6ce75b7d819800c1182
6947]
6948[Replace the hard-coded 522-bit RSA key size used for tests with a TEST_RSA_KEY_SIZE constant defined in test/common.py (part 2). refs #393
6949david-sarah@jacaranda.org**20110810202310
6950 Ignore-this: 7fbd4d004279599bbcb10f7b31fb010f
6951]
6952[Replace the hard-coded 522-bit RSA key size used for tests with a TEST_RSA_KEY_SIZE constant defined in test/common.py (part 1). refs #393
6953david-sarah@jacaranda.org**20110810202243
6954 Ignore-this: c58d8130a2f383ff4421c632499b027b
6955]
6956[merge some minor conflicts in test code from the 393-2 branch and trunk
6957zooko@zooko.com**20110810172139
6958 Ignore-this: 4a16f13eeae585c7c1dbe18c67072c90
6959]
6960[doc: eliminate the phrase "rootcap" from doc/frontends/FTP-and-SFTP.rst
6961zooko@zooko.com**20110809132601
6962 Ignore-this: f7e1dd212daa65c81fb57977bce24304
6963 Two different people have asked me for help, saying they couldn't figure out what a "rootcap" is. Hopefully just calling it a "cap" will make it easier for them to find out from the other docs what it is.
6964]
6965[test_web.py: fix a test failure dependent on whether simplejson.loads returns a unicode or str object.
6966david-sarah@jacaranda.org**20110808213925
6967 Ignore-this: f7b267be8be56fcabc968e3c89999490
6968]
6969[immutable/filenode: fix pyflakes warnings
6970Kevan Carstensen <kevan@isnotajoke.com>**20110807004514
6971 Ignore-this: e8d875bf8b1c5571e31b0eff42ecf64c
6972]
6973[test: fix assorted tests broken by MDMF changes
6974Kevan Carstensen <kevan@isnotajoke.com>**20110807004459
6975 Ignore-this: 9a0dc7e5c74bfe840a9fce278619a103
6976]
6977[uri: add MDMF and MDMF directory caps, add extension hint support
6978Kevan Carstensen <kevan@isnotajoke.com>**20110807004436
6979 Ignore-this: 6486b7d4dc0e849c6b1e9cdfb6318eac
6980]
6981[test/test_mutable: tests for MDMF
6982Kevan Carstensen <kevan@isnotajoke.com>**20110807004414
6983 Ignore-this: 29f9c3a806d67df0ed09c4f0d857d347
6984 
6985 These are their own patch because they cut across a lot of the changes
6986 I've made in implementing MDMF in such a way as to make it difficult to
6987 split them up into the other patches.
6988]
6989[webapi changes for MDMF
6990Kevan Carstensen <kevan@isnotajoke.com>**20110807004348
6991 Ignore-this: d6d4dac680baa4c99b05882b3828796c
6992 
6993     - Learn how to create MDMF files and directories through the
6994       mutable-type argument.
6995     - Operate with the interface changes associated with MDMF and #993.
6996     - Learn how to do partial updates of mutable files.
6997]
6998[mutable/servermap: Rework the servermap to work with MDMF mutable files
6999Kevan Carstensen <kevan@isnotajoke.com>**20110807004259
7000 Ignore-this: 154b987fa0af716c41185b88ff7ee2e1
7001]
7002[dirnode: teach dirnode to make MDMF directories
7003Kevan Carstensen <kevan@isnotajoke.com>**20110807004224
7004 Ignore-this: 765ccd6a07ff752bf6057a3dab9e5abd
7005]
7006[Fix some test failures caused by #393 patch.
7007david-sarah@jacaranda.org**20110802032810
7008 Ignore-this: 7f65e5adb5c859af289cea7011216fef
7009]
7010[docs: amend configuration, webapi documentation to talk about MDMF
7011Kevan Carstensen <kevan@isnotajoke.com>**20110802022056
7012 Ignore-this: 4cab9b7e4ab79cc1efdabe2d457f27a6
7013]
7014[cli: teach CLI how to create MDMF mutable files
7015Kevan Carstensen <kevan@isnotajoke.com>**20110802021613
7016 Ignore-this: 18d0ff98e75be231eed3c53319e76936
7017 
7018 Specifically, 'tahoe mkdir' and 'tahoe put' now take a --mutable-type
7019 argument.
7020]
7021[frontends/sftpd: Resolve incompatibilities between SFTP frontend and MDMF changes
7022Kevan Carstensen <kevan@isnotajoke.com>**20110802021207
7023 Ignore-this: 5e0f6e961048f71d4eed6d30210ffd2e
7024]
7025[mutable/layout: Define MDMF share format, write tools for working with MDMF share format
7026Kevan Carstensen <kevan@isnotajoke.com>**20110802021120
7027 Ignore-this: fa76ef4800939e19ba3cbc22a2eab4e
7028 
7029 The changes in layout.py are mostly concerned with the MDMF share
7030 format. In particular, we define read and write proxy objects used by
7031 retrieval, publishing, and other code to write and read the MDMF share
7032 format. We create equivalent proxies for SDMF objects so that these
7033 objects can be suitably general.
7034]
7035[immutable/filenode: implement unified filenode interface
7036Kevan Carstensen <kevan@isnotajoke.com>**20110802020905
7037 Ignore-this: d9a442fc285157f134f5d1b4607c6a48
7038]
7039[immutable/literal.py: Implement interface changes in literal nodes.
7040Kevan Carstensen <kevan@isnotajoke.com>**20110802020814
7041 Ignore-this: 4371e71a50e65ce2607c4d67d3a32171
7042]
7043[test/common: Alter common test code to work with MDMF.
7044Kevan Carstensen <kevan@isnotajoke.com>**20110802015643
7045 Ignore-this: e564403182d0030439b168dd9f8726fa
7046 
7047 This mostly has to do with making the test code implement the new
7048 unified filenode interfaces.
7049]
7050[mutable: train checker and repairer to work with MDMF mutable files
7051Kevan Carstensen <kevan@isnotajoke.com>**20110802015140
7052 Ignore-this: 8b1928925bed63708b71ab0de8d4306f
7053]
7054[nodemaker: teach nodemaker about MDMF caps
7055Kevan Carstensen <kevan@isnotajoke.com>**20110802014926
7056 Ignore-this: 430c73121b6883b99626cfd652fc65c4
7057]
7058[client: teach client how to create and work with MDMF files
7059Kevan Carstensen <kevan@isnotajoke.com>**20110802014811
7060 Ignore-this: d72fbc4c2ca63f00d9ab9dc2919098ff
7061]
7062[mutable/filenode: Modify mutable filenodes for use with MDMF
7063Kevan Carstensen <kevan@isnotajoke.com>**20110802014501
7064 Ignore-this: 3c230bb0ebe60a94c667b0ee0c3b28e0
7065 
7066 In particular:
7067     - Break MutableFileNode and MutableFileVersion into distinct classes.
7068     - Implement the interface modifications made for MDMF.
7069     - Be aware of MDMF caps.
7070     - Learn how to create and work with MDMF files.
7071]
7072[nodemaker: teach nodemaker how to create MDMF mutable files
7073Kevan Carstensen <kevan@isnotajoke.com>**20110802014258
7074 Ignore-this: 2bf1fd4f8c1d1ad0e855c678347b76c2
7075]
7076[interfaces: change interfaces to work with MDMF
7077Kevan Carstensen <kevan@isnotajoke.com>**20110802014119
7078 Ignore-this: 2f441022cf888c044bc9e6dd609db139
7079 
7080 A lot of this work concerns #993, in that it unifies (to an extent) the
7081 interfaces of mutable and immutable files.
7082]
7083[mutable/publish: teach the publisher how to publish MDMF mutable files
7084Kevan Carstensen <kevan@isnotajoke.com>**20110802013931
7085 Ignore-this: 115217ec2b289452ec774cb725da8a86
7086 
7087 Like the downloader, the publisher needs some substantial changes to handle multiple segment mutable files.
7088]
7089[mutable/retrieve: rework the mutable downloader to handle multiple-segment files
7090Kevan Carstensen <kevan@isnotajoke.com>**20110802013524
7091 Ignore-this: 398d11b5cb993b50e5e4fa6e7a3856dc
7092 
7093 The downloader needs substantial reworking to handle multiple segment
7094 mutable files, which it needs to handle for MDMF.
7095]
7096[Fix repeated 'the' in license text.
7097david-sarah@jacaranda.org**20110819204836
7098 Ignore-this: b3bd4e9ec22029fe15533ad2a60003ad
7099]
7100[Remove Non-Profit Open Software License from the set of 'added permission' licenses. Although it actually does qualify as an Open Source license (because it allows relicensing under plain OSL), its wording is unclear and could easily be misunderstood, and it contributes to incompatible license proliferation.
7101david-sarah@jacaranda.org**20110819204742
7102 Ignore-this: 7373819a6b5367581356728ea62cabb1
7103]
7104[docs: change links that pointed to COPYING.TGPPL.html to point to COPYING.TGPPL.rst instead
7105zooko@zooko.com**20110819060142
7106 Ignore-this: 301652554fd7ab4bfa5aa8f8a2863a9e
7107]
7108[docs: formatting: reflow to fill-column 77
7109zooko@zooko.com**20110819060110
7110 Ignore-this: ed1317c126f07c63b944bd2fa6aa2d21
7111]
7112[docs: formatting: M-x whitespace-cleanup
7113zooko@zooko.com**20110819060041
7114 Ignore-this: 8554b16a25067094d0dc4dc71e1b3950
7115]
7116[licensing: add to the list of licenses that we grant the added permission for
7117zooko@zooko.com**20110819054656
7118 Ignore-this: eb1490416ac6b7414a27f150a8a8a047
7119 Added: most of the ones listed on the FSF's "List of Free Software, GPL Incompatible Licenses", plus the Non-Profit Open Software License.
7120]
7121[docs: reflow the added text at the top of COPYING.GPL to fill-column 77
7122zooko@zooko.com**20110819053059
7123 Ignore-this: e994ed6ffbcc12656406f11cb862ce99
7124]
7125[docs: reformat COPYING.TGPPL.html to COPYING.TGPPL.rst
7126zooko@zooko.com**20110819052753
7127 Ignore-this: 34ddf623e0a6de008ba859ca9c92b2fd
7128]
7129[docs: reflow docs/logging.rst to fill-column 77
7130zooko@zooko.com**20110819044103
7131 Ignore-this: a6901f2244995f278ddf8d75d29410bf
7132]
7133[doc: fix formatting error in docs/logging.rst
7134zooko@zooko.com**20110819043946
7135 Ignore-this: fa182dbbe7f4fda15e0a8bfcf7f00051
7136]
7137[Cleanups for suppression of UserWarnings. refs #1435
7138david-sarah@jacaranda.org**20110818040749
7139 Ignore-this: 3863ef399c1c382a1365d51f000d314c
7140]
7141[suppress warning emitted by newer zope.interface with Nevow 0.10
7142zooko@zooko.com**20110817203134
7143 Ignore-this: b86d4ce0ed1c0da76d1f9eaf8d08d9c4
7144 refs #1435
7145]
7146[doc: formatting: reflow to fill-column=77
7147zooko@zooko.com**20110809132510
7148 Ignore-this: 2d6d2e203d52925968b4451f36364792
7149]
7150[_auto_deps.py: change the requirement for zope.interface to <= 3.6.2, >= 3.6.6. fixes #1435
7151david-sarah@jacaranda.org**20110815025347
7152 Ignore-this: 17a88c0f6573f044fbcd6b666667bd37
7153]
7154[allmydata/__init__.py, test_version.py: make version parsing understand '<=', with test. refs #1435
7155david-sarah@jacaranda.org**20110815035153
7156 Ignore-this: 8c3a75f4a2b42b56bac48b5053c5e9c2
7157]
7158[Makefile and setup.py: remove setup.py commands that we no longer need, and their uses in the Makefile. Delete a stale and incorrect comment about updating _version.py. Also fix some coding style checks in the Makefile to operate on all source files.
7159david-sarah@jacaranda.org**20110801031952
7160 Ignore-this: 80a435dee3bc6e29058d4b37ff579922
7161]
7162[remove misc/debian[_helpers], rely upon official packaging instead. fixes #1454
7163warner@lothar.com**20110811182705
7164 Ignore-this: 79673cafc7c108db49b5ab908d7b4668
7165]
7166[Makefile: remove targets that used misc/debian[_helpers] which no longer exist. Also change docs/debian.rst to reflect the fact that we no longer support building .debs using those targets. refs #1454
7167david-sarah@jacaranda.org**20110801031857
7168 Ignore-this: 347cbeff45757db630ce34d0cfb84f92
7169]
7170[replace tabs with spaces in the #1441 'tahoe debug' synopsis
7171warner@lothar.com**20110811173704
7172 Ignore-this: 513fbfb18a3dd93119ea3700118df7ee
7173]
7174[Correct the information printed by '/usr/bin/tahoe debug --help' on Debian/Ubuntu. fixes #1441
7175david-sarah@jacaranda.org**20110724162530
7176 Ignore-this: 30d4b8c20e420e9a9d1b73eba1113ae
7177]
7178[doc: edit the explanation of K-of-N tradeoffs
7179zooko@zooko.com**20110804193409
7180 Ignore-this: ab6f4e35a995c2099340b5c9c5d30f40
7181]
7182[doc: clean up formatting of doc/configuration.rst
7183zooko@zooko.com**20110804192722
7184 Ignore-this: 7a98a3a8afb7e5441ff1f534211199ba
7185 reflow to 77 chars line width, M-x white-space cleanup, blank link between name and definition
]
[Add test for webopen. fixes #1149
david-sarah@jacaranda.org**20110724211659
 Ignore-this: 1e22853f7eb05e24c3141d56a513f661
]
[test_client.py: relax a check in test_create_drop_uploader so that it should pass on Python 2.4.x. refs #1429
david-sarah@jacaranda.org**20110810052504
 Ignore-this: 1380749ceaf33c30e26c50d57476616c
]
[test/common_util.py: correct fix to mkdir_nonascii. refs #1472
david-sarah@jacaranda.org**20110810051906
 Ignore-this: 93c0c33370bc47d95c26c4cce8e05290
]
[test/common_util.py: fix a typo. refs #1472
david-sarah@jacaranda.org**20110810044235
 Ignore-this: f88643d7c82cb3577686d77bbff9e2bc
]
[test_client.py, test_drop_upload.py: fix pyflakes warnings.
david-sarah@jacaranda.org**20110810034505
 Ignore-this: 1e2d71bf2f43d63cbb423d32a6f96793
]
[Factor out methods dealing with non-ASCII directories and filenames from test_drop_upload.py into common_util.py. refs #1429, #1472
david-sarah@jacaranda.org**20110810031558
 Ignore-this: 3de8f945fa7a58fc318a1184bad0fd1a
]
[test_client.py: add a test that the drop-uploader is initialized correctly by client.py. Also give the DropUploader service a name, which is necessary for the test. refs #1429
david-sarah@jacaranda.org**20110810030538
 Ignore-this: 13d511ea9bbe9da2dcffe4a91ce94eae
]
[drop-upload: rename 'start' method to 'startService', which is what you're supposed to use to start a Service. refs #1429
david-sarah@jacaranda.org**20110810030345
 Ignore-this: d1f5e5c63937ea37be37324e2f1ae99d
]
[test_drop_upload.py: add comment explaining why we don't use FilePath.setContent. refs #1429
david-sarah@jacaranda.org**20110810025942
 Ignore-this: b95358030b63cb467d1d7f1b9a9b6978
]
[test_drop_upload.py: fix some grammatical and spelling nits. refs #1429
david-sarah@jacaranda.org**20110809221231
 Ignore-this: fd331acddd9f754173f274a34fe62f03
]
[drop-upload: report the configured local directory being absent differently from it being a file
zooko@zooko.com**20110809220930
 Ignore-this: a08879100f5f20e609be3f0ffa3b25cc
 refs #1429
]
[drop-upload: rename the 'upload.uri' parameter to 'upload.dircap', and a couple of cleanups to error messages. refs #1429
zooko@zooko.com**20110809220508
 Ignore-this: 4846368cbe331e8653bdce1f314e276b
 I rerecorded this patch, originally by David-Sarah, to use "darcs replace" instead of editing to do the renames. This uncovered one missed rename in Client.init_drop_uploader. (Which also means that code isn't exercised by the current unit tests.)
 refs #1429
]
[drop-upload test for non-existent local dir separately from test for non-directory local dir
zooko@zooko.com**20110809220115
 Ignore-this: cd85f345c02f5cb71b1c1527bd4ebddc
 A candidate patch for #1429 has a bug: it uses FilePath.is_dir() to detect whether the configured local dir exists and is a directory, but FilePath.is_dir() raises an exception, instead of returning False, if the thing doesn't exist. This test is to make sure that DropUploader.__init__ raises different exceptions for those two cases.
 refs #1429
]
[drop-upload: unit tests for the configuration options being named "cap" instead of "uri"
zooko@zooko.com**20110809215913
 Ignore-this: 958c78fffb3d76b3e4817647f824e7f9
 This is a subset of a patch that David-Sarah attached to #1429. This is just the unit-tests part of that patch, and uses darcs record instead of hunks to change the names.
 refs #1429
]
[src/allmydata/storage/server.py: use the filesystem of storage/shares/, rather than storage/, to calculate remaining space. fixes #1384
david-sarah@jacaranda.org**20110719022752
 Ignore-this: a4781043cfd453dbb66ae4f108d80bea
]
[test_storage.py: test that we are using the filesystem of storage/shares/, rather than storage/, to calculate remaining space, and that the HTML status output reflects the values returned by fileutil.get_disk_stats. This version works with older versions of the mock library. refs #1384
david-sarah@jacaranda.org**20110809190722
 Ignore-this: db447caca37a459ca49563efa58db58c
]
[Work around ref #1472 by having test_drop_upload delete the non-ASCII directories it creates.
david-sarah@jacaranda.org**20110809012334
 Ignore-this: 5881fd5db419ba8ad12e0b2a82f6c4f0
]
[Remove all trailing whitespace from .py files.
david-sarah@jacaranda.org**20110809001117
 Ignore-this: d2658b5ce44af70cc606ae4d3085b7cc
]
[test_drop_upload.py: fix unused imports. refs #1429
david-sarah@jacaranda.org**20110808235422
 Ignore-this: 834f6b946bfea699d7d8c743edd66671
]
[Documentation for drop-upload frontend. refs #1429
david-sarah@jacaranda.org**20110808182146
 Ignore-this: b33110834e586c0b784d1736c2af5779
]
[Drop-upload frontend, rerecorded for 1.9 beta (and correcting a minor mistake). Includes some fixes for Windows but not the Windows inotify implementation. fixes #1429
david-sarah@jacaranda.org**20110808234049
 Ignore-this: 67f824c7f554e9a3a85f9fd2e1123d97
]
[node.py: ensure that client and introducer nodes record their port number and use that port on the next restart, fixing a regression caused by #1385. fixes #1469.
david-sarah@jacaranda.org**20110806221934
 Ignore-this: 1aa9d340b6570320ab2f9edc89c9e0a8
]
[test_runner.py: fix a race condition in the test when NODE_URL_FILE is written before PORTNUM_FILE. refs #1469
david-sarah@jacaranda.org**20110806231842
 Ignore-this: ab01ae7cec3a073e29eec473e64052a0
]
[test_runner.py: cleanups of HOTLINE_FILE writing and removal.
david-sarah@jacaranda.org**20110806231652
 Ignore-this: 25f5c5d6f5d8faebb26a4ce80110a335
]
[test_runner.py: remove an unused constant.
david-sarah@jacaranda.org**20110806221416
 Ignore-this: eade2695cbabbea9cafeaa8debe410bb
]
[node.py: fix the error path for a missing config option so that it works for a Unicode base directory.
david-sarah@jacaranda.org**20110806221007
 Ignore-this: 4eb9cc04b2ce05182a274a0d69dafaf3
]
[test_runner.py: test that client and introducer nodes record their port number and use that port on the next restart. This tests for a regression caused by ref #1385.
david-sarah@jacaranda.org**20110806220635
 Ignore-this: 40a0c040b142dbddd47e69b3c3712f5
]
[test_runner.py: fix a bug in CreateNode.do_create introduced in changeset [5114] when the tahoe.cfg file has been written with CRLF line endings. refs #1385
david-sarah@jacaranda.org**20110804003032
 Ignore-this: 7b7afdcf99da6671afac2d42828883eb
]
[test_client.py: repair Basic.test_error_on_old_config_files. refs #1385
david-sarah@jacaranda.org**20110803235036
 Ignore-this: 31e2a9c3febe55948de7e144353663e
]
[test_checker.py: increase timeout for TooParallel.test_immutable again. The ARM buildslave took 38 seconds, so 40 seconds is too close to the edge; make it 80.
david-sarah@jacaranda.org**20110803214042
 Ignore-this: 2d8026a6b25534e01738f78d6c7495cb
]
[test_runner.py: fix RunNode.test_introducer to not rely on the mtime of introducer.furl to detect when the node has restarted. Instead we detect when node.url has been written. refs #1385
david-sarah@jacaranda.org**20110803180917
 Ignore-this: 11ddc43b107beca42cb78af88c5c394c
]
[Further improve error message about old config files. refs #1385
david-sarah@jacaranda.org**20110803174546
 Ignore-this: 9d6cc3c288d9863dce58faafb3855917
]
[Slightly improve error message about old config files (avoid unnecessary Unicode escaping). refs #1385
david-sarah@jacaranda.org**20110803163848
 Ignore-this: a3e3930fba7ccf90b8db3d2ed5829df4
]
[test_checker.py: increase timeout for TooParallel.test_immutable (was consistently failing on ARM buildslave).
david-sarah@jacaranda.org**20110803163213
 Ignore-this: d0efceaf12628e8791862b80c85b5d56
]
[Fix the bug that prevents an introducer from starting when introducer.furl already exists. Also remove some dead code that used to read old config files, and rename 'warn_about_old_config_files' to reflect that it's not a warning. refs #1385
david-sarah@jacaranda.org**20110803013212
 Ignore-this: 2d6cd14bd06a7493b26f2027aff78f4d
]
[test_runner.py: modify RunNode.test_introducer to test that starting an introducer works when the introducer.furl file already exists. refs #1385
david-sarah@jacaranda.org**20110803012704
 Ignore-this: 8cf7f27ac4bfbb5ad8ca4a974106d437
]
[verifier: correct a bug introduced in changeset [5106] that caused us to only verify the first block of a file. refs #1395
david-sarah@jacaranda.org**20110802172437
 Ignore-this: 87fb77854a839ff217dce73544775b11
]
[test_repairer: add a deterministic test of share data corruption that always flips the bits of the last byte of the share data. refs #1395
david-sarah@jacaranda.org**20110802175841
 Ignore-this: 72f54603785007e88220c8d979e08be7
]
[verifier: serialize the fetching of blocks within a share so that we don't use too much RAM
zooko@zooko.com**20110802063703
 Ignore-this: debd9bac07dcbb6803f835a9e2eabaa1
 
 Shares are still verified in parallel, but within a share, don't request a
 block until the previous block has been verified and the memory we used to hold
 it has been freed up.
 
 Patch originally due to Brian. This version has a mockery-patchery-style test
 which is "low tech" (it implements the patching inline in the test code instead
 of using an extension of the mock.patch() function from the mock library) and
 which unpatches in case of exception.
 
 fixes #1395
]
[add docs about timing-channel attacks
Brian Warner <warner@lothar.com>**20110802044541
 Ignore-this: 73114d5f5ed9ce252597b707dba3a194
]
['test-coverage' now needs PYTHONPATH=. to find TOP/twisted/plugins/
Brian Warner <warner@lothar.com>**20110802041952
 Ignore-this: d40f1f4cb426ea1c362fc961baedde2
]
[remove nodeid from WriteBucketProxy classes and customers
warner@lothar.com**20110801224317
 Ignore-this: e55334bb0095de11711eeb3af827e8e8
 refs #1363
]
[remove get_serverid() from ReadBucketProxy and customers, including Checker
warner@lothar.com**20110801224307
 Ignore-this: 837aba457bc853e4fd413ab1a94519cb
 and debug.py dump-share commands
 refs #1363
]
[reject old-style (pre-Tahoe-LAFS-v1.3) configuration files
zooko@zooko.com**20110801232423
 Ignore-this: b58218fcc064cc75ad8f05ed0c38902b
 Check for the existence of any of them, and if any are found, raise an exception which will abort the startup of the node.
 This is a backwards-incompatible change for anyone who is still using old-style configuration files.
 fixes #1385
]
[whitespace-cleanup
zooko@zooko.com**20110725015546
 Ignore-this: 442970d0545183b97adc7bd66657876c
]
[tests: use fileutil.write() instead of open() to ensure timely close even without CPython-style reference counting
zooko@zooko.com**20110331145427
 Ignore-this: 75aae4ab8e5fa0ad698f998aaa1888ce
 Some of these already had an explicit close() but I went ahead and replaced them with fileutil.write() as well for the sake of uniformity.
]
[Address Kevan's comment in #776 about Options classes missed when adding 'self.command_name'. refs #776, #1359
david-sarah@jacaranda.org**20110801221317
 Ignore-this: 8881d42cf7e6a1d15468291b0cb8fab9
]
[docs/frontends/webapi.rst: change some more instances of 'delete' or 'remove' to 'unlink', change some section titles, and use two blank lines between all sections. refs #776, #1104
david-sarah@jacaranda.org**20110801220919
 Ignore-this: 572327591137bb05c24c44812d4b163f
]
[cleanup: implement rm as a synonym for unlink rather than vice-versa. refs #776
david-sarah@jacaranda.org**20110801220108
 Ignore-this: 598dcbed870f4f6bb9df62de9111b343
]
[docs/webapi.rst: address Kevan's comments about use of 'delete' on ref #1104
david-sarah@jacaranda.org**20110801205356
 Ignore-this: 4fbf03864934753c951ddeff64392491
]
[docs: some changes of 'delete' or 'rm' to 'unlink'. refs #1104
david-sarah@jacaranda.org**20110713002722
 Ignore-this: 304d2a330d5e6e77d5f1feed7814b21c
]
[WUI: change the label of the button to unlink a file from 'del' to 'unlink'. Also change some internal names to 'unlink', and allow 't=unlink' as a synonym for 't=delete' in the web-API interface. Incidentally, improve a test to check for the rename button as well as the unlink button. fixes #1104
david-sarah@jacaranda.org**20110713001218
 Ignore-this: 3eef6b3f81b94a9c0020a38eb20aa069
]
[src/allmydata/web/filenode.py: delete a stale comment that was made incorrect by changeset [3133].
david-sarah@jacaranda.org**20110801203009
 Ignore-this: b3912e95a874647027efdc97822dd10e
]
[fix typo introduced during rebasing of 'remove get_serverid from
Brian Warner <warner@lothar.com>**20110801200341
 Ignore-this: 4235b0f585c0533892193941dbbd89a8
 DownloadStatus.add_dyhb_request and customers' patch, to fix test failure.
]
[remove get_serverid from DownloadStatus.add_dyhb_request and customers
zooko@zooko.com**20110801185401
 Ignore-this: db188c18566d2d0ab39a80c9dc8f6be6
 This patch is a rebase of a patch originally written by Brian. I didn't change any of the intent of Brian's patch, just ported it to current trunk.
 refs #1363
]
[remove get_serverid from DownloadStatus.add_block_request and customers
zooko@zooko.com**20110801185344
 Ignore-this: 8bfa8201d6147f69b0fbe31beea9c1e
 This is a rebase of a patch Brian originally wrote. I haven't changed the intent of that patch, just ported it to trunk.
 refs #1363
]
[apply zooko's advice: storage_client get_known_servers() returns a frozenset, caller sorts
warner@lothar.com**20110801174452
 Ignore-this: 2aa13ea6cbed4e9084bd604bf8633692
 refs #1363
]
[test_immutable.Test: rewrite to use NoNetworkGrid, now takes 2.7s not 97s
warner@lothar.com**20110801174444
 Ignore-this: 54f30b5d7461d2b3514e2a0172f3a98c
 remove now-unused ShareManglingMixin
 refs #1363
]
[DownloadStatus.add_known_share wants to be used by Finder, web.status
warner@lothar.com**20110801174436
 Ignore-this: 1433bcd73099a579abe449f697f35f9
 refs #1363
]
[replace IServer.name() with get_name(), and get_longname()
warner@lothar.com**20110801174428
 Ignore-this: e5a6f7f6687fd7732ddf41cfdd7c491b
 
 This patch was originally written by Brian, but was re-recorded by Zooko to use
 darcs replace instead of hunks for any file in which it would result in fewer
 total hunks.
 refs #1363
]
[upload.py: apply David-Sarah's advice: rename (un)contacted(2) trackers to first_pass/second_pass/next_pass
zooko@zooko.com**20110801174143
 Ignore-this: e36e1420bba0620a0107bd90032a5198
 This patch was written by Brian but was re-recorded by Zooko (with David-Sarah looking on) to use darcs replace instead of editing to rename the three variables to their new names.
 refs #1363
]
[Coalesce multiple Share.loop() calls, make downloads faster. Closes #1268.
Brian Warner <warner@lothar.com>**20110801151834
 Ignore-this: 48530fce36c01c0ff708f61c2de7e67a
]
[src/allmydata/_auto_deps.py: 'i686' is another way of spelling x86.
david-sarah@jacaranda.org**20110801034035
 Ignore-this: 6971e0621db2fba794d86395b4d51038
]
[tahoe_rm.py: better error message when there is no path. refs #1292
david-sarah@jacaranda.org**20110122064212
 Ignore-this: ff3bb2c9f376250e5fd77eb009e09018
]
[test_cli.py: Test for error message when 'tahoe rm' is invoked without a path. refs #1292
david-sarah@jacaranda.org**20110104105108
 Ignore-this: 29ec2f2e0251e446db96db002ad5dd7d
]
[src/allmydata/__init__.py: suppress a spurious warning from 'bin/tahoe --version[-and-path]' about twisted-web and twisted-core packages.
david-sarah@jacaranda.org**20110801005209
 Ignore-this: 50e7cd53cca57b1870d9df0361c7c709
]
[test_cli.py: use to_str on fields loaded using simplejson.loads in new tests. refs #1304
david-sarah@jacaranda.org**20110730032521
 Ignore-this: d1d6dfaefd1b4e733181bf127c79c00b
]
[cli: make 'tahoe cp' overwrite mutable files in-place
Kevan Carstensen <kevan@isnotajoke.com>**20110729202039
 Ignore-this: b2ad21a19439722f05c49bfd35b01855
]
[SFTP: write an error message to standard error for unrecognized shell commands. Change the existing message for shell sessions to be written to standard error, and refactor some duplicated code. Also change the lines of the error messages to end in CRLF, and take into account Kevan's review comments. fixes #1442, #1446
david-sarah@jacaranda.org**20110729233102
 Ignore-this: d2f2bb4664f25007d1602bf7333e2cdd
]
[src/allmydata/scripts/cli.py: fix pyflakes warning.
david-sarah@jacaranda.org**20110728021402
 Ignore-this: 94050140ddb99865295973f49927c509
]
[Fix the help synopses of CLI commands to include [options] in the right place. fixes #1359, fixes #636
david-sarah@jacaranda.org**20110724225440
 Ignore-this: 2a8e488a5f63dabfa9db9efd83768a5
]
[encodingutil: argv and output encodings are always the same on all platforms. Lose the unnecessary generality of them being different. fixes #1120
david-sarah@jacaranda.org**20110629185356
 Ignore-this: 5ebacbe6903dfa83ffd3ff8436a97787
]
[docs/man/tahoe.1: add man page. fixes #1420
david-sarah@jacaranda.org**20110724171728
 Ignore-this: fc7601ec7f25494288d6141d0ae0004c
]
[Update the dependency on zope.interface to fix an incompatibility between Nevow and zope.interface 3.6.4. fixes #1435
david-sarah@jacaranda.org**20110721234941
 Ignore-this: 2ff3fcfc030fca1a4d4c7f1fed0f2aa9
]
[frontends/ftpd.py: remove the check for IWriteFile.close since we're now guaranteed to be using Twisted >= 10.1 which has it.
david-sarah@jacaranda.org**20110722000320
 Ignore-this: 55cd558b791526113db3f83c00ec328a
]
[Update the dependency on Twisted to >= 10.1. This allows us to simplify some documentation: it's no longer necessary to install pywin32 on Windows, or apply a patch to Twisted in order to use the FTP frontend. fixes #1274, #1438. refs #1429
david-sarah@jacaranda.org**20110721233658
 Ignore-this: 81b41745477163c9b39c0b59db91cc62
]
[misc/build_helpers/run_trial.py: undo change to block pywin32 (it didn't work because run_trial.py is no longer used). refs #1334
david-sarah@jacaranda.org**20110722035402
 Ignore-this: 5d03f544c4154f088e26c7107494bf39
]
[misc/build_helpers/run_trial.py: ensure that pywin32 is not on the sys.path when running the test suite. Includes some temporary debugging printouts that will be removed. refs #1334
david-sarah@jacaranda.org**20110722024907
 Ignore-this: 5141a9f83a4085ed4ca21f0bbb20bb9c
]
[docs/running.rst: use 'tahoe run ~/.tahoe' instead of 'tahoe run' (the default is the current directory, unlike 'tahoe start').
david-sarah@jacaranda.org**20110718005949
 Ignore-this: 81837fbce073e93d88a3e7ae3122458c
]
[docs/running.rst: say to put the introducer.furl in tahoe.cfg.
david-sarah@jacaranda.org**20110717194315
 Ignore-this: 954cc4c08e413e8c62685d58ff3e11f3
]
[README.txt: say that quickstart.rst is in the docs directory.
david-sarah@jacaranda.org**20110717192400
 Ignore-this: bc6d35a85c496b77dbef7570677ea42a
]
[setup: remove the dependency on foolscap's "secure_connections" extra, add a dependency on pyOpenSSL
zooko@zooko.com**20110717114226
 Ignore-this: df222120d41447ce4102616921626c82
 fixes #1383
]
[test_sftp.py cleanup: remove a redundant definition of failUnlessReallyEqual.
david-sarah@jacaranda.org**20110716181813
 Ignore-this: 50113380b368c573f07ac6fe2eb1e97f
]
[docs: add missing link in NEWS.rst
zooko@zooko.com**20110712153307
 Ignore-this: be7b7eb81c03700b739daa1027d72b35
]
[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
zooko@zooko.com**20110712153229
 Ignore-this: 723c4f9e2211027c79d711715d972c5
 Also remove a couple of vestigial references to figleaf, which is long gone.
 fixes #1409 (remove contrib/fuse)
]
[add Protovis.js-based download-status timeline visualization
Brian Warner <warner@lothar.com>**20110629222606
 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
 
 provide status overlap info on the webapi t=json output, add decode/decrypt
 rate tooltips, add zoomin/zoomout buttons
]
[add more download-status data, fix tests
Brian Warner <warner@lothar.com>**20110629222555
 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
]
[prepare for viz: improve DownloadStatus events
Brian Warner <warner@lothar.com>**20110629222542
 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
 
 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
]
[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
zooko@zooko.com**20110629185711
 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
]
[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
david-sarah@jacaranda.org**20110130235809
 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
]
[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
david-sarah@jacaranda.org**20110626054124
 Ignore-this: abb864427a1b91bd10d5132b4589fd90
]
[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
david-sarah@jacaranda.org**20110623205528
 Ignore-this: c63e23146c39195de52fb17c7c49b2da
]
[Rename test_package_initialization.py to (much shorter) test_import.py .
Brian Warner <warner@lothar.com>**20110611190234
 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
 
 The former name was making my 'ls' listings hard to read, by forcing them
 down to just two columns.
]
[tests: fix tests to accommodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
zooko@zooko.com**20110611163741
 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
 Apparently none of the two authors (stercor, terrell), the three reviewers (warner, davidsarah, terrell), or the one committer (me) actually ran the tests. This is presumably due to #20.
 fixes #1412
]
[wui: right-align the size column in the WUI
zooko@zooko.com**20110611153758
 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
 fixes #1412
]
[docs: three minor fixes
zooko@zooko.com**20110610121656
 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
 CREDITS for arc for stats tweak
 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
 English usage tweak
]
[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
david-sarah@jacaranda.org**20110609223719
 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
]
[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
wilcoxjg@gmail.com**20110527120135
 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
 NEWS.rst, stats.py: documentation of change to get_latencies
 stats.rst: now documents percentile modification in get_latencies
 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
 fixes #1392
]
[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
david-sarah@jacaranda.org**20110517011214
 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
]
[docs: convert NEWS to NEWS.rst and change all references to it.
david-sarah@jacaranda.org**20110517010255
 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
]
[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
david-sarah@jacaranda.org**20110512140559
 Ignore-this: 784548fc5367fac5450df1c46890876d
]
[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
david-sarah@jacaranda.org**20110130164923
 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
]
[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
zooko@zooko.com**20110128142006
 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
]
[M-x whitespace-cleanup
zooko@zooko.com**20110510193653
 Ignore-this: dea02f831298c0f65ad096960e7df5c7
]
[docs: fix typo in running.rst, thanks to arch_o_median
zooko@zooko.com**20110510193633
 Ignore-this: ca06de166a46abbc61140513918e79e8
]
[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
david-sarah@jacaranda.org**20110204204902
 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
]
[relnotes.txt: forseeable -> foreseeable. refs #1342
david-sarah@jacaranda.org**20110204204116
 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
]
[replace remaining .html docs with .rst docs
zooko@zooko.com**20110510191650
 Ignore-this: d557d960a986d4ac8216d1677d236399
 Remove install.html (long since deprecated).
 Also replace some obsolete references to install.html with references to quickstart.rst.
 Fix some broken internal references within docs/historical/historical_known_issues.txt.
 Thanks to Ravi Pinjala and Patrick McDonald.
 refs #1227
]
[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
zooko@zooko.com**20110428055232
 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
]
[munin tahoe_files plugin: fix incorrect file count
francois@ctrlaltdel.ch**20110428055312
 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
 fixes #1391
]
[corrected "k must never be smaller than N" to "k must never be greater than N"
secorp@allmydata.org**20110425010308
 Ignore-this: 233129505d6c70860087f22541805eac
]
[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
david-sarah@jacaranda.org**20110411190738
 Ignore-this: 7847d26bc117c328c679f08a7baee519
]
[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
david-sarah@jacaranda.org**20110410155844
 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
]
[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
david-sarah@jacaranda.org**20110410155705
 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
]
[remove unused variable detected by pyflakes
zooko@zooko.com**20110407172231
 Ignore-this: 7344652d5e0720af822070d91f03daf9
]
[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
david-sarah@jacaranda.org**20110401202750
 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
]
[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
Brian Warner <warner@lothar.com>**20110325232511
 Ignore-this: d5307faa6900f143193bfbe14e0f01a
]
[control.py: remove all uses of s.get_serverid()
warner@lothar.com**20110227011203
 Ignore-this: f80a787953bd7fa3d40e828bde00e855
]
[web: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011159
 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
]
[immutable/downloader/fetcher.py: remove all get_serverid() calls
warner@lothar.com**20110227011156
 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
]
[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
warner@lothar.com**20110227011153
 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
 
 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
 _shares_from_server dict was being popped incorrectly (using shnum as the
 index instead of serverid). I'm still thinking through the consequences of
 this bug. It was probably benign and really hard to detect. I think it would
 cause us to incorrectly believe that we're pulling too many shares from a
 server, and thus prefer a different server rather than asking for a second
 share from the first server. The diversity code is intended to spread out the
 number of shares simultaneously being requested from each server, but with
 this bug, it might be spreading out the total number of shares requested at
 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
 segment, so the effect doesn't last very long).
]
[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
warner@lothar.com**20110227011150
 Ignore-this: d8d56dd8e7b280792b40105e13664554
 
 test_download.py: create+check MyShare instances better, make sure they share
 Server objects, now that finder.py cares
]
[immutable/downloader/finder.py: reduce use of get_serverid(), one left
warner@lothar.com**20110227011146
 Ignore-this: 5785be173b491ae8a78faf5142892020
]
[immutable/offloaded.py: reduce use of get_serverid() a bit more
warner@lothar.com**20110227011142
 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
]
[immutable/upload.py: reduce use of get_serverid()
warner@lothar.com**20110227011138
 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
]
[immutable/checker.py: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011134
 Ignore-this: e480a37efa9e94e8016d826c492f626e
]
[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
warner@lothar.com**20110227011132
 Ignore-this: 6078279ddf42b179996a4b53bee8c421
 MockIServer stubs
]
[upload.py: rearrange _make_trackers a bit, no behavior changes
warner@lothar.com**20110227011128
 Ignore-this: 296d4819e2af452b107177aef6ebb40f
]
[happinessutil.py: finally rename merge_peers to merge_servers
warner@lothar.com**20110227011124
 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
]
[test_upload.py: factor out FakeServerTracker
warner@lothar.com**20110227011120
 Ignore-this: 6c182cba90e908221099472cc159325b
]
[test_upload.py: server-vs-tracker cleanup
warner@lothar.com**20110227011115
 Ignore-this: 2915133be1a3ba456e8603885437e03
]
[happinessutil.py: server-vs-tracker cleanup
warner@lothar.com**20110227011111
 Ignore-this: b856c84033562d7d718cae7cb01085a9
]
[upload.py: more tracker-vs-server cleanup
warner@lothar.com**20110227011107
 Ignore-this: bb75ed2afef55e47c085b35def2de315
]
[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
warner@lothar.com**20110227011103
 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
]
[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
warner@lothar.com**20110227011100
 Ignore-this: 7ea858755cbe5896ac212a925840fe68
 
 No behavioral changes, just updating variable/method names and log messages.
 The effects outside these three files should be minimal: some exception
 messages changed (to say "server" instead of "peer"), and some internal class
 names were changed. A few things still use "peer" to minimize external
 changes, like UploadResults.timings["peer_selection"] and
 happinessutil.merge_peers, which can be changed later.
]
[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
warner@lothar.com**20110227011056
 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
]
[test_client.py, upload.py: remove KiB/MiB/etc constants, and other dead code
warner@lothar.com**20110227011051
 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
]
[test: increase timeout on a network test because Francois's ARM machine hit that timeout
zooko@zooko.com**20110317165909
 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number, then if it still times out we can be even more certain that the test was never going to finish.
]
[docs/configuration.rst: add a "Frontend Configuration" section
Brian Warner <warner@lothar.com>**20110222014323
 Ignore-this: 657018aa501fe4f0efef9851628444ca
 
 this points to docs/frontends/*.rst, which were previously underlinked
]
[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
"Brian Warner <warner@lothar.com>"**20110221061544
 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
]
[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
david-sarah@jacaranda.org**20110221015817
 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
]
[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
david-sarah@jacaranda.org**20110221020125
 Ignore-this: b0744ed58f161bf188e037bad077fc48
]
[Refactor StorageFarmBroker handling of servers
Brian Warner <warner@lothar.com>**20110221015804
 Ignore-this: 842144ed92f5717699b8f580eab32a51
 
 Pass around IServer instance instead of (peerid, rref) tuple. Replace
 "descriptor" with "server". Other replacements:
 
  get_all_servers -> get_connected_servers/get_known_servers
  get_servers_for_index -> get_servers_for_psi (now returns IServers)
 
 This change still needs to be pushed further down: lots of code is now
 getting the IServer and then distributing (peerid, rref) internally.
 Instead, it ought to distribute the IServer internally and delay
 extracting a serverid or rref until the last moment.
 
 no_network.py was updated to retain parallelism.
]
[TAG allmydata-tahoe-1.8.2
warner@lothar.com**20110131020101]
Patch bundle hash:
2aea57155aba2ab93a5b7818c09dacd176448143