Ticket #393: 393status3.dpatch

File 393status3.dpatch, 243.6 KB (added by kevan, at 2010-06-14T22:58:49Z)
Sun May 30 18:43:46 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * Code cleanup
 
      - Change 'readv' to 'readvs' in remote_slot_readv in the storage
        server, to more adequately convey what the argument is.

Fri Jun  4 12:48:04 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * Add a notion of the mutable file version number to interfaces.py

Fri Jun  4 12:52:17 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * Add a salt hasher for MDMF uploads

Fri Jun  4 12:55:27 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * Add MDMF and SDMF version numbers to interfaces.py

Fri Jun  4 13:49:33 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * Tell NodeMaker and MutableFileNode about the distinction between SDMF and MDMF

Fri Jun 11 12:17:29 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * Alter the mutable file servermap to read MDMF files

Fri Jun 11 12:21:50 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * Add tests for new MDMF proxies

Fri Jun 11 12:26:21 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * A first stab at a segmented uploader
 
  This uploader uploads MDMF files segment by segment. It does so only
  if it believes that the filenode it is uploading represents an MDMF
  file; otherwise, it uploads the file as SDMF. (A rough sketch of the
  segment loop appears just below.)
 
  My TODO list so far:
      - More robust peer selection; we'll want to use something like
        servers of happiness to reason about which servers are reliable.
      - Clean up.

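  As a rough illustration of the flow described above (not the
  uploader's actual code, which drives this loop with Deferreds):
  push_segment and encode_and_encrypt_segment are hypothetical
  stand-ins, and SDMF_VERSION is the constant added to interfaces.py
  later in this bundle:

      def upload(self, data, segment_size, version):
          if version == SDMF_VERSION:
              # hypothetical single-shot SDMF path
              return self.upload_sdmf(data)
          for offset in xrange(0, len(data), segment_size):
              segment = data[offset:offset + segment_size]
              # each MDMF segment gets its own salt (see the salt
              # hasher patch below)
              block, salt = self.encode_and_encrypt_segment(segment)
              self.push_segment(block, salt)
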
Mon Jun 14 14:29:13 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * Assorted servermap fixes
 
  - Check for failure when setting the private key
  - Check for failure when setting other things

Mon Jun 14 14:34:59 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * Alter MDMF proxy tests to reflect the new form of caching

Mon Jun 14 14:37:21 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * Add tests and support functions for servermap tests

Mon Jun 14 15:26:23 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * Add objects for MDMF shares in support of a new segmented uploader
 
  This patch adds the following:
      - MDMFSlotWriteProxy, which can write MDMF shares to the storage
        server in the new format.
      - MDMFSlotReadProxy, which can read both SDMF and MDMF shares from
        the storage server.

New patches:

[Code cleanup
Kevan Carstensen <kevan@isnotajoke.com>**20100531014346
 Ignore-this: 697378037e83290267f108a4a88b8776
 
     - Change 'readv' to 'readvs' in remote_slot_readv in the storage
       server, to more adequately convey what the argument is.
] {
hunk ./src/allmydata/storage/server.py 569
                                          self)
         return share
 
-    def remote_slot_readv(self, storage_index, shares, readv):
+    def remote_slot_readv(self, storage_index, shares, readvs):
         start = time.time()
         self.count("readv")
         si_s = si_b2a(storage_index)
hunk ./src/allmydata/storage/server.py 590
             if sharenum in shares or not shares:
                 filename = os.path.join(bucketdir, sharenum_s)
                 msf = MutableShareFile(filename, self)
-                datavs[sharenum] = msf.readv(readvs)
+                datavs[sharenum] = msf.readv(readvs)
         log.msg("returning shares %s" % (datavs.keys(),),
                 facility="tahoe.storage", level=log.NOISY, parent=lp)
         self.add_latency("readv", time.time() - start)
}
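
To make the rename concrete: the argument is a list of read vectors,
i.e. (offset, length) pairs, applied to each selected share. A hedged
usage sketch (the server reference, storage index, and share numbers
are made up):

    # Read bytes [0:100) and [500:600) from shares 0 and 3 of one slot.
    readvs = [(0, 100), (500, 100)]
    datavs = ss.remote_slot_readv(storage_index, [0, 3], readvs)
    # datavs maps each share number found to a list of data strings,
    # one per read vector; an empty share list means "all shares".
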
[Add a notion of the mutable file version number to interfaces.py
Kevan Carstensen <kevan@isnotajoke.com>**20100604194804
 Ignore-this: fd767043437c3cd694807687e6dc677
] hunk ./src/allmydata/interfaces.py 807
         writer-visible data using this writekey.
         """
 
+    def set_version(version):
+        """Tahoe-LAFS supports SDMF and MDMF mutable files. By default,
+        we upload in SDMF for reasons of compatibility. If you want to
+        change this, set_version will let you do that.
+
+        To say that this file should be uploaded in SDMF, pass in a 0. To
+        say that the file should be uploaded as MDMF, pass in a 1.
+        """
+
+    def get_version():
+        """Returns the mutable file protocol version."""
+
 class NotEnoughSharesError(Exception):
     """Download was unable to get enough shares"""
 
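A hedged sketch of how a caller would use these methods, assuming the
SDMF_VERSION/MDMF_VERSION constants added later in this bundle and an
IMutableFileNode provider called node:

    from allmydata.interfaces import SDMF_VERSION, MDMF_VERSION

    node.set_version(MDMF_VERSION)  # upload this file in the new format
    assert node.get_version() == MDMF_VERSION
    node.set_version(SDMF_VERSION)  # or stay compatible with old clients
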
[Add a salt hasher for MDMF uploads
Kevan Carstensen <kevan@isnotajoke.com>**20100604195217
 Ignore-this: 3072f4c4e75efa078f31aac3a56d36b2
] {
hunk ./src/allmydata/util/hashutil.py 90
 MUTABLE_READKEY_TAG = "allmydata_mutable_writekey_to_readkey_v1"
 MUTABLE_DATAKEY_TAG = "allmydata_mutable_readkey_to_datakey_v1"
 MUTABLE_STORAGEINDEX_TAG = "allmydata_mutable_readkey_to_storage_index_v1"
+MUTABLE_SALT_TAG = "allmydata_mutable_segment_salt_v1"
 
 # dirnodes
 DIRNODE_CHILD_WRITECAP_TAG = "allmydata_mutable_writekey_and_salt_to_dirnode_child_capkey_v1"
hunk ./src/allmydata/util/hashutil.py 134
 def plaintext_segment_hasher():
     return tagged_hasher(PLAINTEXT_SEGMENT_TAG)
 
+def mutable_salt_hash(data):
+    return tagged_hash(MUTABLE_SALT_TAG, data)
+def mutable_salt_hasher():
+    return tagged_hasher(MUTABLE_SALT_TAG)
+
 KEYLEN = 16
 IVLEN = 16
 
}
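
Both helpers follow the existing tagged-hash conventions in hashutil. A
usage sketch (the input string is illustrative; what actually gets
hashed into a segment's salt is up to the uploader):

    from allmydata.util import hashutil

    # One-shot:
    salt = hashutil.mutable_salt_hash("some per-segment input")

    # Incrementally, assuming the hasher object exposes the same
    # update()/digest() interface as the other tagged hashers:
    hasher = hashutil.mutable_salt_hasher()
    hasher.update("some per-")
    hasher.update("segment input")
    salt = hasher.digest()
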
[Add MDMF and SDMF version numbers to interfaces.py
Kevan Carstensen <kevan@isnotajoke.com>**20100604195527
 Ignore-this: 5736d229076ea432b9cf40fcee9b4749
] hunk ./src/allmydata/interfaces.py 8
 
 HASH_SIZE=32
 
+SDMF_VERSION=0
+MDMF_VERSION=1
+
 Hash = StringConstraint(maxLength=HASH_SIZE,
                         minLength=HASH_SIZE)# binary format 32-byte SHA256 hash
 Nodeid = StringConstraint(maxLength=20,
[Tell NodeMaker and MutableFileNode about the distinction between SDMF and MDMF
Kevan Carstensen <kevan@isnotajoke.com>**20100604204933
 Ignore-this: b7f3682ef38b5077342ca3cdf17ff8f0
] {
hunk ./src/allmydata/mutable/filenode.py 8
 from twisted.internet import defer, reactor
 from foolscap.api import eventually
 from allmydata.interfaces import IMutableFileNode, \
-     ICheckable, ICheckResults, NotEnoughSharesError
+     ICheckable, ICheckResults, NotEnoughSharesError, MDMF_VERSION, SDMF_VERSION
 from allmydata.util import hashutil, log
 from allmydata.util.assertutil import precondition
 from allmydata.uri import WriteableSSKFileURI, ReadonlySSKFileURI
hunk ./src/allmydata/mutable/filenode.py 67
         self._sharemap = {} # known shares, shnum-to-[nodeids]
         self._cache = ResponseCache()
         self._most_recent_size = None
+        # filled in after __init__ if we're being created for the first time;
+        # filled in by the servermap updater before publishing, otherwise.
+        # set to this default value in case neither of those things happen,
+        # or in case the servermap can't find any shares to tell us what
+        # to publish as.
+        # TODO: Set this back to None, and find out why the tests fail
+        #       with it set to None.
+        self._protocol_version = SDMF_VERSION
 
         # all users of this MutableFileNode go through the serializer. This
         # takes advantage of the fact that Deferreds discard the callbacks
hunk ./src/allmydata/mutable/filenode.py 472
     def _did_upload(self, res, size):
         self._most_recent_size = size
         return res
+
+
+    def set_version(self, version):
+        # I can be set in two ways:
+        #  1. When the node is created.
+        #  2. (for an existing share) when the Servermap is updated
+        #     before I am read.
+        assert version in (MDMF_VERSION, SDMF_VERSION)
+        self._protocol_version = version
+
+
+    def get_version(self):
+        return self._protocol_version
hunk ./src/allmydata/nodemaker.py 4
 import weakref
 from zope.interface import implements
 from allmydata.util.assertutil import precondition
-from allmydata.interfaces import INodeMaker, MustBeDeepImmutableError
+from allmydata.interfaces import INodeMaker, MustBeDeepImmutableError, \
+                                 SDMF_VERSION, MDMF_VERSION
 from allmydata.immutable.filenode import ImmutableFileNode, LiteralFileNode
 from allmydata.immutable.upload import Data
 from allmydata.mutable.filenode import MutableFileNode
hunk ./src/allmydata/nodemaker.py 92
             return self._create_dirnode(filenode)
         return None
 
-    def create_mutable_file(self, contents=None, keysize=None):
+    def create_mutable_file(self, contents=None, keysize=None,
+                            version=SDMF_VERSION):
         n = MutableFileNode(self.storage_broker, self.secret_holder,
                             self.default_encoding_parameters, self.history)
hunk ./src/allmydata/nodemaker.py 96
+        n.set_version(version)
         d = self.key_generator.generate(keysize)
         d.addCallback(n.create_with_keys, contents)
         d.addCallback(lambda res: n)
hunk ./src/allmydata/nodemaker.py 102
         return d
 
-    def create_new_mutable_directory(self, initial_children={}):
+    def create_new_mutable_directory(self, initial_children={},
+                                     version=SDMF_VERSION):
         # initial_children must have metadata (i.e. {} instead of None)
         for (name, (node, metadata)) in initial_children.iteritems():
             precondition(isinstance(metadata, dict),
hunk ./src/allmydata/nodemaker.py 110
                          "create_new_mutable_directory requires metadata to be a dict, not None", metadata)
             node.raise_error()
         d = self.create_mutable_file(lambda n:
-                                     pack_children(n, initial_children))
+                                     pack_children(n, initial_children),
+                                     version=version)
         d.addCallback(self._create_dirnode)
         return d
 
}
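
With this change, callers choose the mutable file format at creation
time. A hedged sketch (nodemaker stands in for a real NodeMaker
instance; note that version must be passed by keyword, since keysize
is the second positional argument):

    from allmydata.interfaces import MDMF_VERSION

    d = nodemaker.create_mutable_file(contents="initial contents",
                                      version=MDMF_VERSION)
    def _created(node):
        assert node.get_version() == MDMF_VERSION
        return node
    d.addCallback(_created)
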
[Alter the mutable file servermap to read MDMF files
Kevan Carstensen <kevan@isnotajoke.com>**20100611191729
 Ignore-this: f05748597749f07b16cdbb711fae92e5
] {
hunk ./src/allmydata/mutable/servermap.py 7
 from itertools import count
 from twisted.internet import defer
 from twisted.python import failure
-from foolscap.api import DeadReferenceError, RemoteException, eventually
+from foolscap.api import DeadReferenceError, RemoteException, eventually, \
+                         fireEventually
 from allmydata.util import base32, hashutil, idlib, log
 from allmydata.storage.server import si_b2a
 from allmydata.interfaces import IServermapUpdaterStatus
hunk ./src/allmydata/mutable/servermap.py 17
 from allmydata.mutable.common import MODE_CHECK, MODE_ANYTHING, MODE_WRITE, MODE_READ, \
      DictOfSets, CorruptShareError, NeedMoreDataError
 from allmydata.mutable.layout import unpack_prefix_and_signature, unpack_header, unpack_share, \
-     SIGNED_PREFIX_LENGTH
+     SIGNED_PREFIX_LENGTH, MDMFSlotReadProxy
 
 class UpdateStatus:
     implements(IServermapUpdaterStatus)
hunk ./src/allmydata/mutable/servermap.py 254
         """Return a set of versionids, one for each version that is currently
         recoverable."""
         versionmap = self.make_versionmap()
-
         recoverable_versions = set()
         for (verinfo, shares) in versionmap.items():
             (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
hunk ./src/allmydata/mutable/servermap.py 366
         self._servers_responded = set()
 
         # how much data should we read?
+        # SDMF:
         #  * if we only need the checkstring, then [0:75]
         #  * if we need to validate the checkstring sig, then [543ish:799ish]
         #  * if we need the verification key, then [107:436ish]
hunk ./src/allmydata/mutable/servermap.py 374
         #  * if we need the encrypted private key, we want [-1216ish:]
         #   * but we can't read from negative offsets
         #   * the offset table tells us the 'ish', also the positive offset
-        # A future version of the SMDF slot format should consider using
-        # fixed-size slots so we can retrieve less data. For now, we'll just
-        # read 2000 bytes, which also happens to read enough actual data to
-        # pre-fetch a 9-entry dirnode.
+        # MDMF:
+        #  * Checkstring? [0:72]
+        #  * If we want to validate the checkstring, then [0:72], [143:?] --
+        #    the offset table will tell us for sure.
+        #  * If we need the verification key, we have to consult the offset
+        #    table as well.
+        # At this point, we don't know which we are. Our filenode can
+        # tell us, but it might be lying -- in some cases, we're
+        # responsible for telling it which kind of file it is.
         self._read_size = 4000
         if mode == MODE_CHECK:
             # we use unpack_prefix_and_signature, so we need 1k
hunk ./src/allmydata/mutable/servermap.py 432
         self._queries_completed = 0
 
         sb = self._storage_broker
+        # All of the peers, permuted by the storage index, as usual.
         full_peerlist = sb.get_servers_for_index(self._storage_index)
         self.full_peerlist = full_peerlist # for use later, immutable
         self.extra_peers = full_peerlist[:] # peers are removed as we use them
hunk ./src/allmydata/mutable/servermap.py 439
         self._good_peers = set() # peers who had some shares
         self._empty_peers = set() # peers who don't have any shares
         self._bad_peers = set() # peers to whom our queries failed
+        self._readers = {} # peerid -> dict(sharewriters), filled in
+                           # after responses come in.
 
         k = self._node.get_required_shares()
hunk ./src/allmydata/mutable/servermap.py 443
+        # For what cases can these conditions work?
         if k is None:
             # make a guess
             k = 3
hunk ./src/allmydata/mutable/servermap.py 456
         self.num_peers_to_query = k + self.EPSILON
 
         if self.mode == MODE_CHECK:
+            # We want to query all of the peers.
             initial_peers_to_query = dict(full_peerlist)
             must_query = set(initial_peers_to_query.keys())
             self.extra_peers = []
hunk ./src/allmydata/mutable/servermap.py 464
             # we're planning to replace all the shares, so we want a good
             # chance of finding them all. We will keep searching until we've
             # seen epsilon that don't have a share.
+            # We don't query all of the peers because that could take a while.
             self.num_peers_to_query = N + self.EPSILON
             initial_peers_to_query, must_query = self._build_initial_querylist()
             self.required_num_empty_peers = self.EPSILON
hunk ./src/allmydata/mutable/servermap.py 474
             # might also avoid the round trip required to read the encrypted
             # private key.
 
-        else:
+        else: # MODE_READ, MODE_ANYTHING
+            # 2k peers is good enough.
             initial_peers_to_query, must_query = self._build_initial_querylist()
 
         # this is a set of peers that we are required to get responses from:
hunk ./src/allmydata/mutable/servermap.py 485
         # set as we get responses.
         self._must_query = must_query
 
+        # This tells the done check whether requests are still being
+        # processed. We should wait to return until all of the responses
+        # we have received are processed (and connection errors handled).
+        self._processing = 0
+
         # now initial_peers_to_query contains the peers that we should ask,
         # self.must_query contains the peers that we must have heard from
         # before we can consider ourselves finished, and self.extra_peers
hunk ./src/allmydata/mutable/servermap.py 495
         # contains the overflow (peers that we should tap if we don't get
         # enough responses)
+        # I guess that self._must_query is a subset of
+        # initial_peers_to_query?
+        assert set(must_query).issubset(set(initial_peers_to_query))
 
         self._send_initial_requests(initial_peers_to_query)
         self._status.timings["initial_queries"] = time.time() - self._started
hunk ./src/allmydata/mutable/servermap.py 554
         # errors that aren't handled by _query_failed (and errors caused by
         # _query_failed) get logged, but we still want to check for doneness.
         d.addErrback(log.err)
-        d.addBoth(self._check_for_done)
         d.addErrback(self._fatal_error)
         return d
 
hunk ./src/allmydata/mutable/servermap.py 584
         self._servermap.reachable_peers.add(peerid)
         self._must_query.discard(peerid)
         self._queries_completed += 1
+        # self._processing counts the number of queries that have
+        # completed, but are still processing. We wait until all queries
+        # are done processing before returning a result to the client.
+        # TODO: Should we do this? A response to the initial query means
+        # that we may not have to query the server for anything else,
+        # but if we're dealing with an MDMF share, we'll probably have
+        # to ask it for its signature, unless we cache those someplace,
+        # and even then.
+        self._processing += 1
         if not self._running:
             self.log("but we're not running, so we'll ignore it", parent=lp,
                      level=log.NOISY)
hunk ./src/allmydata/mutable/servermap.py 605
         else:
             self._empty_peers.add(peerid)
 
-        last_verinfo = None
-        last_shnum = None
+        ss, storage_index = stuff
+        ds = []
+
+
+        def _tattle(ignored, status):
+            print status
+            print ignored
+            return ignored
+
+        def _cache(verinfo, shnum, now, data):
+            self._node._add_to_cache(verinfo, shnum, 0, data, now)
+            return shnum, verinfo
+
+        def _corrupt(e, shnum, data):
+            # This gets raised when there was something wrong with
+            # the remote server. Specifically, when there was an
+            # error unpacking the remote data from the server, or
+            # when the signature is invalid.
+            print e
+            f = failure.Failure()
+            self.log(format="bad share: %(f_value)s", f_value=str(f.value),
+                     failure=f, parent=lp, level=log.WEIRD, umid="h5llHg")
+            # Notify the server that its share is corrupt.
+            self.notify_server_corruption(peerid, shnum, str(e))
+            # By flagging this as a bad peer, we won't count any of
+            # the other shares on that peer as valid, though if we
+            # happen to find a valid version string amongst those
+            # shares, we'll keep track of it so that we don't need
+            # to validate the signature on those again.
+            self._bad_peers.add(peerid)
+            self._last_failure = f
+            # 393CHANGE: Use the reader for this.
+            checkstring = data[:SIGNED_PREFIX_LENGTH]
+            self._servermap.mark_bad_share(peerid, shnum, checkstring)
+            self._servermap.problems.append(f)
+
         for shnum,datav in datavs.items():
             data = datav[0]
hunk ./src/allmydata/mutable/servermap.py 644
-            try:
-                verinfo = self._got_results_one_share(shnum, data, peerid, lp)
-                last_verinfo = verinfo
-                last_shnum = shnum
-                self._node._add_to_cache(verinfo, shnum, 0, data, now)
-            except CorruptShareError, e:
-                # log it and give the other shares a chance to be processed
-                f = failure.Failure()
-                self.log(format="bad share: %(f_value)s", f_value=str(f.value),
-                         failure=f, parent=lp, level=log.WEIRD, umid="h5llHg")
-                self.notify_server_corruption(peerid, shnum, str(e))
-                self._bad_peers.add(peerid)
-                self._last_failure = f
-                checkstring = data[:SIGNED_PREFIX_LENGTH]
-                self._servermap.mark_bad_share(peerid, shnum, checkstring)
-                self._servermap.problems.append(f)
-                pass
-
-        self._status.timings["cumulative_verify"] += (time.time() - now)
+            reader = MDMFSlotReadProxy(ss,
+                                       storage_index,
+                                       shnum,
+                                       data)
+            self._readers.setdefault(peerid, dict())[shnum] = reader
+            # our goal, with each response, is to validate the version
+            # information and share data as best we can at this point --
+            # we do this by validating the signature. To do this, we
+            # need to do the following:
+            #   - If we don't already have the public key, fetch the
+            #     public key. We use this to validate the signature.
+            friendly_peer = idlib.shortnodeid_b2a(peerid)
+            if not self._node.get_pubkey():
+                # fetch and set the public key.
+                d = reader.get_verification_key()
+                d.addCallback(self._try_to_set_pubkey, peerid, shnum)
+            else:
+                # we already have the public key.
+                d = defer.succeed(None)
+            # Neither of these two branches returns anything of
+            # consequence, so the first entry in our DeferredList will
+            # be None.
 
hunk ./src/allmydata/mutable/servermap.py 667
-        if self._need_privkey and last_verinfo:
-            # send them a request for the privkey. We send one request per
-            # server.
-            lp2 = self.log("sending privkey request",
-                           parent=lp, level=log.NOISY)
-            (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
-             offsets_tuple) = last_verinfo
-            o = dict(offsets_tuple)
+            # - Next, we need the version information. We almost
+            #   certainly got this by reading the first thousand or so
+            #   bytes of the share on the storage server, so we
+            #   shouldn't need to fetch anything at this step.
+            d2 = reader.get_verinfo()
+            # - Next, we need the signature. For an SDMF share, it is
+            #   likely that we fetched this when doing our initial fetch
+            #   to get the version information. In MDMF, this lives at
+            #   the end of the share, so unless the file is quite small,
+            #   we'll need to do a remote fetch to get it.
+            d3 = reader.get_signature()
+            #  Once we have all three of these responses, we can move on
+            #  to validating the signature
 
hunk ./src/allmydata/mutable/servermap.py 681
-            self._queries_outstanding.add(peerid)
-            readv = [ (o['enc_privkey'], (o['EOF'] - o['enc_privkey'])) ]
-            ss = self._servermap.connections[peerid]
-            privkey_started = time.time()
-            d = self._do_read(ss, peerid, self._storage_index,
-                              [last_shnum], readv)
-            d.addCallback(self._got_privkey_results, peerid, last_shnum,
-                          privkey_started, lp2)
-            d.addErrback(self._privkey_query_failed, peerid, last_shnum, lp2)
-            d.addErrback(log.err)
-            d.addCallback(self._check_for_done)
-            d.addErrback(self._fatal_error)
+            # Does the node already have a privkey? If not, we'll try to
+            # fetch it here.
+            if not self._node.get_privkey():
+                d4 = reader.get_encprivkey()
+                d4.addCallback(lambda results, shnum=shnum, peerid=peerid:
+                    self._try_to_validate_privkey(results, peerid, shnum, lp))
+            else:
+                d4 = defer.succeed(None)
 
hunk ./src/allmydata/mutable/servermap.py 690
+            dl = defer.DeferredList([d, d2, d3, d4])
+            dl.addCallback(lambda results, shnum=shnum, peerid=peerid:
+                self._got_signature_one_share(results, shnum, peerid, lp))
+            dl.addErrback(lambda error, shnum=shnum, data=data:
+               _corrupt(error, shnum, data))
+            ds.append(dl)
+        # dl is a deferred list that will fire when all of the shares
+        # that we found on this peer are done processing. When dl fires,
+        # we know that processing is done, so we can decrement the
+        # semaphore-like thing that we incremented earlier.
+        dl = defer.DeferredList(ds)
+        def _done_processing(ignored):
+            self._processing -= 1
+            return ignored
+        dl.addCallback(_done_processing)
+        # Are we done? Done means that there are no more queries to
+        # send, that there are no outstanding queries, and that we
+        # haven't received any queries that are still processing. If we
+        # are done, self._check_for_done will cause the done deferred
+        # that we returned to our caller to fire, which tells them that
+        # they have a complete servermap, and that we won't be touching
+        # the servermap anymore.
+        dl.addBoth(self._check_for_done)
+        dl.addErrback(self._fatal_error)
         # all done!
hunk ./src/allmydata/mutable/servermap.py 715
+        return dl
         self.log("_got_results done", parent=lp, level=log.NOISY)
 
hunk ./src/allmydata/mutable/servermap.py 718
+    def _try_to_set_pubkey(self, pubkey_s, peerid, shnum):
+        if self._node.get_pubkey():
+            return # don't go through this again if we don't have to
+        fingerprint = hashutil.ssk_pubkey_fingerprint_hash(pubkey_s)
+        assert len(fingerprint) == 32
+        if fingerprint != self._node.get_fingerprint():
+            raise CorruptShareError(peerid, shnum,
+                                "pubkey doesn't match fingerprint")
+        self._node._populate_pubkey(self._deserialize_pubkey(pubkey_s))
+        assert self._node.get_pubkey()
+
+
     def notify_server_corruption(self, peerid, shnum, reason):
         ss = self._servermap.connections[peerid]
         ss.callRemoteOnly("advise_corrupt_share",
hunk ./src/allmydata/mutable/servermap.py 735
                           "mutable", self._storage_index, shnum, reason)
 
-    def _got_results_one_share(self, shnum, data, peerid, lp):
+
+    def _got_signature_one_share(self, results, shnum, peerid, lp):
+        # It is our job to give versioninfo to our caller. We need to
+        # raise CorruptShareError if the share is corrupt for any
+        # reason, something that our caller will handle.
         self.log(format="_got_results: got shnum #%(shnum)d from peerid %(peerid)s",
                  shnum=shnum,
                  peerid=idlib.shortnodeid_b2a(peerid),
hunk ./src/allmydata/mutable/servermap.py 745
                  level=log.NOISY,
                  parent=lp)
-
-        # this might raise NeedMoreDataError, if the pubkey and signature
-        # live at some weird offset. That shouldn't happen, so I'm going to
-        # treat it as a bad share.
-        (seqnum, root_hash, IV, k, N, segsize, datalength,
-         pubkey_s, signature, prefix) = unpack_prefix_and_signature(data)
-
-        if not self._node.get_pubkey():
-            fingerprint = hashutil.ssk_pubkey_fingerprint_hash(pubkey_s)
-            assert len(fingerprint) == 32
-            if fingerprint != self._node.get_fingerprint():
-                raise CorruptShareError(peerid, shnum,
-                                        "pubkey doesn't match fingerprint")
-            self._node._populate_pubkey(self._deserialize_pubkey(pubkey_s))
-
-        if self._need_privkey:
-            self._try_to_extract_privkey(data, peerid, shnum, lp)
-
-        (ig_version, ig_seqnum, ig_root_hash, ig_IV, ig_k, ig_N,
-         ig_segsize, ig_datalen, offsets) = unpack_header(data)
+        _, verinfo, signature, __ = results
+        (seqnum,
+         root_hash,
+         saltish,
+         segsize,
+         datalen,
+         k,
+         n,
+         prefix,
+         offsets) = verinfo[1]
         offsets_tuple = tuple( [(key,value) for key,value in offsets.items()] )
 
hunk ./src/allmydata/mutable/servermap.py 757
-        verinfo = (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
+        # XXX: This should be done for us in the method, so
+        # presumably you can go in there and fix it.
+        verinfo = (seqnum,
+                   root_hash,
+                   saltish,
+                   segsize,
+                   datalen,
+                   k,
+                   n,
+                   prefix,
                    offsets_tuple)
hunk ./src/allmydata/mutable/servermap.py 768
+        # This tuple uniquely identifies a share on the grid; we use it
+        # to keep track of the ones that we've already seen.
 
         if verinfo not in self._valid_versions:
hunk ./src/allmydata/mutable/servermap.py 772
-            # it's a new pair. Verify the signature.
-            valid = self._node.get_pubkey().verify(prefix, signature)
+            # This is a new version tuple, and we need to validate it
+            # against the public key before keeping track of it.
+            valid = self._node.get_pubkey().verify(prefix, signature[1])
             if not valid:
hunk ./src/allmydata/mutable/servermap.py 776
-                raise CorruptShareError(peerid, shnum, "signature is invalid")
+                raise CorruptShareError(peerid, shnum,
+                                        "signature is invalid")
 
hunk ./src/allmydata/mutable/servermap.py 779
-            # ok, it's a valid verinfo. Add it to the list of validated
-            # versions.
-            self.log(" found valid version %d-%s from %s-sh%d: %d-%d/%d/%d"
-                     % (seqnum, base32.b2a(root_hash)[:4],
-                        idlib.shortnodeid_b2a(peerid), shnum,
-                        k, N, segsize, datalength),
-                     parent=lp)
-            self._valid_versions.add(verinfo)
-        # We now know that this is a valid candidate verinfo.
+        # ok, it's a valid verinfo. Add it to the list of validated
+        # versions.
+        self.log(" found valid version %d-%s from %s-sh%d: %d-%d/%d/%d"
+                 % (seqnum, base32.b2a(root_hash)[:4],
+                    idlib.shortnodeid_b2a(peerid), shnum,
+                    k, n, segsize, datalen),
+                    parent=lp)
+        self._valid_versions.add(verinfo)
+        # We now know that this is a valid candidate verinfo. Whether or
+        # not this instance of it is valid is a matter for the next
+        # statement; at this point, we just know that if we see this
+        # version info again, that its signature checks out and that
+        # we're okay to skip the signature-checking step.
 
hunk ./src/allmydata/mutable/servermap.py 793
+        # (peerid, shnum) are bound in the method invocation.
         if (peerid, shnum) in self._servermap.bad_shares:
             # we've been told that the rest of the data in this share is
             # unusable, so don't add it to the servermap.
hunk ./src/allmydata/mutable/servermap.py 808
         self.versionmap.add(verinfo, (shnum, peerid, timestamp))
         return verinfo
 
+
     def _deserialize_pubkey(self, pubkey_s):
         verifier = rsa.create_verifying_key_from_string(pubkey_s)
         return verifier
hunk ./src/allmydata/mutable/servermap.py 813
 
-    def _try_to_extract_privkey(self, data, peerid, shnum, lp):
-        try:
-            r = unpack_share(data)
-        except NeedMoreDataError, e:
-            # this share won't help us. oh well.
-            offset = e.encprivkey_offset
-            length = e.encprivkey_length
-            self.log("shnum %d on peerid %s: share was too short (%dB) "
-                     "to get the encprivkey; [%d:%d] ought to hold it" %
-                     (shnum, idlib.shortnodeid_b2a(peerid), len(data),
-                      offset, offset+length),
-                     parent=lp)
-            # NOTE: if uncoordinated writes are taking place, someone might
-            # change the share (and most probably move the encprivkey) before
-            # we get a chance to do one of these reads and fetch it. This
-            # will cause us to see a NotEnoughSharesError(unable to fetch
-            # privkey) instead of an UncoordinatedWriteError . This is a
-            # nuisance, but it will go away when we move to DSA-based mutable
-            # files (since the privkey will be small enough to fit in the
-            # write cap).
-
-            return
-
-        (seqnum, root_hash, IV, k, N, segsize, datalen,
-         pubkey, signature, share_hash_chain, block_hash_tree,
-         share_data, enc_privkey) = r
-
-        return self._try_to_validate_privkey(enc_privkey, peerid, shnum, lp)
 
     def _try_to_validate_privkey(self, enc_privkey, peerid, shnum, lp):
hunk ./src/allmydata/mutable/servermap.py 815
-
+        """
+        Given a writekey from a remote server, I validate it against the
+        writekey stored in my node. If it is valid, then I set the
+        privkey and encprivkey properties of the node.
+        """
         alleged_privkey_s = self._node._decrypt_privkey(enc_privkey)
         alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s)
         if alleged_writekey != self._node.get_writekey():
hunk ./src/allmydata/mutable/servermap.py 925
         #  return self._send_more_queries(outstanding) : send some more queries
         #  return self._done() : all done
         #  return : keep waiting, no new queries
-
         lp = self.log(format=("_check_for_done, mode is '%(mode)s', "
                               "%(outstanding)d queries outstanding, "
                               "%(extra)d extra peers available, "
hunk ./src/allmydata/mutable/servermap.py 943
             self.log("but we're not running", parent=lp, level=log.NOISY)
             return
 
+        if self._processing > 0:
+            # wait until more results are done before returning.
+            return
+
         if self._must_query:
             # we are still waiting for responses from peers that used to have
             # a share, so we must continue to wait. No additional queries are
hunk ./src/allmydata/mutable/servermap.py 1134
         self._servermap.last_update_time = self._started
         # the servermap will not be touched after this
         self.log("servermap: %s" % self._servermap.summarize_versions())
+
         eventually(self._done_deferred.callback, self._servermap)
 
     def _fatal_error(self, f):
}
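
The per-share control flow added above is a standard Twisted
gather-then-validate pattern. A stripped-down sketch of its shape
(validate_share is a hypothetical stand-in for the
_got_signature_one_share callback):

    from twisted.internet import defer

    def process_share(reader, node):
        d1 = (defer.succeed(None) if node.get_pubkey()
              else reader.get_verification_key())
        d2 = reader.get_verinfo()
        d3 = reader.get_signature()
        d4 = (defer.succeed(None) if node.get_privkey()
              else reader.get_encprivkey())
        dl = defer.DeferredList([d1, d2, d3, d4])
        # fires with a list of (success, result) pairs, one per fetch
        dl.addCallback(validate_share)
        return dl
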
[Add tests for new MDMF proxies
Kevan Carstensen <kevan@isnotajoke.com>**20100611192150
 Ignore-this: 986d2cb867cbd4477b131cd951cd9eac
] {
hunk ./src/allmydata/test/test_storage.py 2
 
-import time, os.path, stat, re, simplejson, struct
+import time, os.path, stat, re, simplejson, struct, shutil
 
 from twisted.trial import unittest
 
hunk ./src/allmydata/test/test_storage.py 22
 from allmydata.storage.expirer import LeaseCheckingCrawler
 from allmydata.immutable.layout import WriteBucketProxy, WriteBucketProxy_v2, \
      ReadBucketProxy
-from allmydata.interfaces import BadWriteEnablerError
-from allmydata.test.common import LoggingServiceParent
+from allmydata.mutable.layout import MDMFSlotWriteProxy, MDMFSlotReadProxy, \
+                                     LayoutInvalid
+from allmydata.interfaces import BadWriteEnablerError, MDMF_VERSION, \
+                                 SDMF_VERSION
+from allmydata.test.common import LoggingServiceParent, ShouldFailMixin
 from allmydata.test.common_web import WebRenderingMixin
 from allmydata.web.storage import StorageStatus, remove_prefix
 
hunk ./src/allmydata/test/test_storage.py 1286
         self.failUnless(os.path.exists(prefixdir))
         self.failIf(os.path.exists(bucketdir))
 
+
+class MDMFProxies(unittest.TestCase, ShouldFailMixin):
+    def setUp(self):
+        self.sparent = LoggingServiceParent()
+        self._lease_secret = itertools.count()
+        self.ss = self.create("MDMFProxies storage test server")
+        self.rref = RemoteBucket()
+        self.rref.target = self.ss
+        self.secrets = (self.write_enabler("we_secret"),
+                        self.renew_secret("renew_secret"),
+                        self.cancel_secret("cancel_secret"))
+        self.segment = "aaaaaa"
+        self.block = "aa"
+        self.salt = "a" * 16
+        self.block_hash = "a" * 32
+        self.block_hash_tree = [self.block_hash for i in xrange(6)]
+        self.share_hash = self.block_hash
+        self.share_hash_chain = dict([(i, self.share_hash) for i in xrange(6)])
+        self.signature = "foobarbaz"
+        self.verification_key = "vvvvvv"
+        self.encprivkey = "private"
+        self.root_hash = self.block_hash
+        self.salt_hash = self.root_hash
+        self.block_hash_tree_s = self.serialize_blockhashes(self.block_hash_tree)
+        self.share_hash_chain_s = self.serialize_sharehashes(self.share_hash_chain)
+
+
+    def tearDown(self):
+        self.sparent.stopService()
+        shutil.rmtree(self.workdir("MDMFProxies storage test server"))
+
+
+    def write_enabler(self, we_tag):
+        return hashutil.tagged_hash("we_blah", we_tag)
+
+
+    def renew_secret(self, tag):
+        return hashutil.tagged_hash("renew_blah", str(tag))
+
+
+    def cancel_secret(self, tag):
+        return hashutil.tagged_hash("cancel_blah", str(tag))
+
+
+    def workdir(self, name):
+        basedir = os.path.join("storage", "MutableServer", name)
+        return basedir
+
+
+    def create(self, name):
+        workdir = self.workdir(name)
+        ss = StorageServer(workdir, "\x00" * 20)
+        ss.setServiceParent(self.sparent)
+        return ss
+
+
+    def build_test_mdmf_share(self, tail_segment=False, empty=False):
+        # Start with the checkstring
+        data = struct.pack(">BQ32s32s",
+                           1,
+                           0,
+                           self.root_hash,
+                           self.salt_hash)
+        self.checkstring = data
+        # Next, the encoding parameters
+        if tail_segment:
+            data += struct.pack(">BBQQ",
+                                3,
+                                10,
+                                6,
+                                33)
+        elif empty:
+            data += struct.pack(">BBQQ",
+                                3,
+                                10,
+                                0,
+                                0)
+        else:
+            data += struct.pack(">BBQQ",
+                                3,
+                                10,
+                                6,
+                                36)
+        # Now we'll build the offsets.
+        # The header -- everything up to the salts -- is 143 bytes long.
+        # The shares come after the salts.
+        if empty:
+            salts = ""
+        else:
+            salts = self.salt * 6
+        share_offset = 143 + len(salts)
+        if tail_segment:
+            sharedata = self.block * 6
+        elif empty:
+            sharedata = ""
+        else:
+            sharedata = self.block * 6 + "a"
+        # The encrypted private key comes after the shares
+        encrypted_private_key_offset = share_offset + len(sharedata)
+        # The blockhashes come after the private key
+        blockhashes_offset = encrypted_private_key_offset + len(self.encprivkey)
+        # The sharehashes come after the blockhashes
+        sharehashes_offset = blockhashes_offset + len(self.block_hash_tree_s)
+        # The signature comes after the share hash chain
+        signature_offset = sharehashes_offset + len(self.share_hash_chain_s)
+        # The verification key comes after the signature
+        verification_offset = signature_offset + len(self.signature)
+        # The EOF comes after the verification key
+        eof_offset = verification_offset + len(self.verification_key)
+        data += struct.pack(">LQQQQQQ",
+                            share_offset,
+                            encrypted_private_key_offset,
+                            blockhashes_offset,
+                            sharehashes_offset,
+                            signature_offset,
+                            verification_offset,
+                            eof_offset)
+        self.offsets = {}
+        self.offsets['share_data'] = share_offset
+        self.offsets['enc_privkey'] = encrypted_private_key_offset
+        self.offsets['block_hash_tree'] = blockhashes_offset
+        self.offsets['share_hash_chain'] = sharehashes_offset
+        self.offsets['signature'] = signature_offset
+        self.offsets['verification_key'] = verification_offset
+        self.offsets['EOF'] = eof_offset
+        # Next, we'll add in the salts,
+        data += salts
+        # the share data,
+        data += sharedata
+        # the private key,
+        data += self.encprivkey
+        # the block hash tree,
+        data += self.block_hash_tree_s
+        # the share hash chain,
+        data += self.share_hash_chain_s
+        # the signature,
+        data += self.signature
+        # and the verification key
+        data += self.verification_key
+        return data
+
+
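
The method above packs the header by hand, which doubles as
documentation of the layout: a 73-byte checkstring, 18 bytes of
encoding parameters, and a 52-byte offset table, 143 bytes in all
(matching the comment in the method). A hedged sketch that reverses
those same struct formats:

    import struct

    def parse_mdmf_header(data):
        # Checkstring: version byte, sequence number, root hash, salt hash.
        (version, seqnum,
         root_hash, salt_hash) = struct.unpack(">BQ32s32s", data[:73])
        # Encoding parameters: k, N, segment size, data length.
        k, n, segsize, datalen = struct.unpack(">BBQQ", data[73:91])
        # Offset table: share data, encrypted private key, block hashes,
        # share hashes, signature, verification key, EOF.
        offsets = struct.unpack(">LQQQQQQ", data[91:143])
        return (version, seqnum, root_hash, salt_hash,
                k, n, segsize, datalen, offsets)
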
+    def write_test_share_to_server(self,
+                                   storage_index,
+                                   tail_segment=False,
+                                   empty=False):
+        """
+        I write some data for the read tests to read to self.ss
+
+        If tail_segment=True, then I will write a share that has a
+        smaller tail segment than other segments.
+        """
+        write = self.ss.remote_slot_testv_and_readv_and_writev
+        data = self.build_test_mdmf_share(tail_segment, empty)
+        # Finally, we write the whole thing to the storage server in one
+        # pass.
+        testvs = [(0, 1, "eq", "")]
+        tws = {}
+        tws[0] = (testvs, [(0, data)], None)
+        readv = [(0, 1)]
+        results = write(storage_index, self.secrets, tws, readv)
+        self.failUnless(results[0])
+
+
+    def build_test_sdmf_share(self, empty=False):
+        if empty:
+            sharedata = ""
+        else:
+            sharedata = self.segment * 6
+        blocksize = len(sharedata) / 3
+        block = sharedata[:blocksize]
+        prefix = struct.pack(">BQ32s16s BBQQ",
+                             0, # version,
+                             0,
+                             self.root_hash,
+                             self.salt,
+                             3,
+                             10,
+                             len(sharedata),
+                             len(sharedata),
+                            )
+        post_offset = struct.calcsize(">BQ32s16sBBQQLLLLQQ")
+        signature_offset = post_offset + len(self.verification_key)
+        sharehashes_offset = signature_offset + len(self.signature)
+        blockhashes_offset = sharehashes_offset + len(self.share_hash_chain_s)
+        sharedata_offset = blockhashes_offset + len(self.block_hash_tree_s)
+        encprivkey_offset = sharedata_offset + len(block)
+        eof_offset = encprivkey_offset + len(self.encprivkey)
+        offsets = struct.pack(">LLLLQQ",
+                              signature_offset,
+                              sharehashes_offset,
+                              blockhashes_offset,
+                              sharedata_offset,
+                              encprivkey_offset,
+                              eof_offset)
+        final_share = "".join([prefix,
+                           offsets,
+                           self.verification_key,
+                           self.signature,
+                           self.share_hash_chain_s,
+                           self.block_hash_tree_s,
+                           block,
+                           self.encprivkey])
+        self.offsets = {}
+        self.offsets['signature'] = signature_offset
+        self.offsets['share_hash_chain'] = sharehashes_offset
+        self.offsets['block_hash_tree'] = blockhashes_offset
+        self.offsets['share_data'] = sharedata_offset
+        self.offsets['enc_privkey'] = encprivkey_offset
+        self.offsets['EOF'] = eof_offset
+        return final_share
+
+
+    def write_sdmf_share_to_server(self,
+                                   storage_index,
+                                   empty=False):
+        # Some tests need SDMF shares to verify that we can still read
+        # them. This method writes one that resembles, but is not, a
+        # real SDMF share.
+        assert self.rref
+        write = self.ss.remote_slot_testv_and_readv_and_writev
+        share = self.build_test_sdmf_share(empty)
+        testvs = [(0, 1, "eq", "")]
+        tws = {}
+        tws[0] = (testvs, [(0, share)], None)
+        readv = []
+        results = write(storage_index, self.secrets, tws, readv)
+        self.failUnless(results[0])
+
+
+    def test_read(self):
+        self.write_test_share_to_server("si1")
+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
+        # Check that every getter method returns what we expect it to.
+        d = defer.succeed(None)
+        def _check_block_and_salt((block, salt)):
+            self.failUnlessEqual(block, self.block)
+            self.failUnlessEqual(salt, self.salt)
+
+        for i in xrange(6):
+            d.addCallback(lambda ignored, i=i:
+                mr.get_block_and_salt(i))
+            d.addCallback(_check_block_and_salt)
+
+        d.addCallback(lambda ignored:
+            mr.get_encprivkey())
+        d.addCallback(lambda encprivkey:
+            self.failUnlessEqual(self.encprivkey, encprivkey))
+
+        d.addCallback(lambda ignored:
+            mr.get_blockhashes())
+        d.addCallback(lambda blockhashes:
+            self.failUnlessEqual(self.block_hash_tree, blockhashes))
+
+        d.addCallback(lambda ignored:
+            mr.get_sharehashes())
+        d.addCallback(lambda sharehashes:
+            self.failUnlessEqual(self.share_hash_chain, sharehashes))
+
+        d.addCallback(lambda ignored:
+            mr.get_signature())
+        d.addCallback(lambda signature:
+            self.failUnlessEqual(signature, self.signature))
+
+        d.addCallback(lambda ignored:
+            mr.get_verification_key())
+        d.addCallback(lambda verification_key:
+            self.failUnlessEqual(verification_key, self.verification_key))
+
+        d.addCallback(lambda ignored:
+            mr.get_seqnum())
+        d.addCallback(lambda seqnum:
+            self.failUnlessEqual(seqnum, 0))
+
+        d.addCallback(lambda ignored:
+            mr.get_root_hash())
+        d.addCallback(lambda root_hash:
+            self.failUnlessEqual(self.root_hash, root_hash))
+
+        d.addCallback(lambda ignored:
+            mr.get_salt_hash())
+        d.addCallback(lambda salt_hash:
+            self.failUnlessEqual(self.salt_hash, salt_hash))
+
+        d.addCallback(lambda ignored:
+            mr.get_seqnum())
+        d.addCallback(lambda seqnum:
+            self.failUnlessEqual(0, seqnum))
+
+        d.addCallback(lambda ignored:
+            mr.get_encoding_parameters())
+        def _check_encoding_parameters((k, n, segsize, datalen)):
+            self.failUnlessEqual(k, 3)
+            self.failUnlessEqual(n, 10)
+            self.failUnlessEqual(segsize, 6)
+            self.failUnlessEqual(datalen, 36)
+        d.addCallback(_check_encoding_parameters)
+
+        d.addCallback(lambda ignored:
+            mr.get_checkstring())
+        d.addCallback(lambda checkstring:
+            self.failUnlessEqual(checkstring, self.checkstring))
1079+        return d
1080+
1081+
1082+    def test_read_with_different_tail_segment_size(self):
1083+        self.write_test_share_to_server("si1", tail_segment=True)
1084+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
1085+        d = mr.get_block_and_salt(5)
1086+        def _check_tail_segment(results):
1087+            block, salt = results
1088+            self.failUnlessEqual(len(block), 1)
1089+            self.failUnlessEqual(block, "a")
1090+        d.addCallback(_check_tail_segment)
1091+        return d
1092+
1093+
1094+    def test_get_block_with_invalid_segnum(self):
1095+        self.write_test_share_to_server("si1")
1096+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
1097+        d = defer.succeed(None)
1098+        d.addCallback(lambda ignored:
1099+            self.shouldFail(LayoutInvalid, "test invalid segnum",
1100+                            None,
1101+                            mr.get_block_and_salt, 7))
1102+        return d
1103+
1104+
1105+    def test_get_encoding_parameters_first(self):
1106+        self.write_test_share_to_server("si1")
1107+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
1108+        d = mr.get_encoding_parameters()
1109+        def _check_encoding_parameters((k, n, segment_size, datalen)):
1110+            self.failUnlessEqual(k, 3)
1111+            self.failUnlessEqual(n, 10)
1112+            self.failUnlessEqual(segment_size, 6)
1113+            self.failUnlessEqual(datalen, 36)
1114+        d.addCallback(_check_encoding_parameters)
1115+        return d
1116+
1117+
1118+    def test_get_seqnum_first(self):
1119+        self.write_test_share_to_server("si1")
1120+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
1121+        d = mr.get_seqnum()
1122+        d.addCallback(lambda seqnum:
1123+            self.failUnlessEqual(seqnum, 0))
1124+        return d
1125+
1126+
1127+    def test_get_root_hash_first(self):
1128+        self.write_test_share_to_server("si1")
1129+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
1130+        d = mr.get_root_hash()
1131+        d.addCallback(lambda root_hash:
1132+            self.failUnlessEqual(root_hash, self.root_hash))
1133+        return d
1134+
1135+
1136+    def test_get_salt_hash_first(self):
1137+        self.write_test_share_to_server("si1")
1138+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
1139+        d = mr.get_salt_hash()
1140+        d.addCallback(lambda salt_hash:
1141+            self.failUnlessEqual(salt_hash, self.salt_hash))
1142+        return d
1143+
1144+
1145+    def test_get_checkstring_first(self):
1146+        self.write_test_share_to_server("si1")
1147+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
1148+        d = mr.get_checkstring()
1149+        d.addCallback(lambda checkstring:
1150+            self.failUnlessEqual(checkstring, self.checkstring))
1151+        return d
1152+
1153+
1154+    def test_write_read_vectors(self):
1155+        # When writing for us, the storage server will return to us a
1156+        # read vector, along with its result. If a write fails because
1157+        # the test vectors failed, this read vector can help us to
1158+        # diagnose the problem. This test ensures that the read vector
1159+        # is working appropriately.
1160+        mw = self._make_new_mw("si1", 0)
1161+        d = defer.succeed(None)
1162+
1163+        # Write one share. This should return a checkstring of nothing,
1164+        # since there is no data there.
1165+        d.addCallback(lambda ignored:
1166+            mw.put_block(self.block, 0, self.salt))
1167+        def _check_first_write(results):
1168+            result, readvs = results
1169+            self.failUnless(result)
1170+            self.failIf(readvs)
1171+        d.addCallback(_check_first_write)
1172+        # Now, there should be a different checkstring returned when
1173+        # we write other shares
1174+        d.addCallback(lambda ignored:
1175+            mw.put_block(self.block, 1, self.salt))
1176+        def _check_next_write(results):
1177+            result, readvs = results
1178+            self.failUnless(result)
1179+            self.expected_checkstring = mw.get_checkstring()
1180+            self.failUnlessIn(0, readvs)
1181+            self.failUnlessEqual(readvs[0][0], self.expected_checkstring)
1182+        d.addCallback(_check_next_write)
1183+        # Add the other four shares
1184+        for i in xrange(2, 6):
1185+            d.addCallback(lambda ignored, i=i:
1186+                mw.put_block(self.block, i, self.salt))
1187+            d.addCallback(_check_next_write)
1188+        # Add the encrypted private key
1189+        d.addCallback(lambda ignored:
1190+            mw.put_encprivkey(self.encprivkey))
1191+        d.addCallback(_check_next_write)
1192+        # Add the block hash tree and share hash tree
1193+        d.addCallback(lambda ignored:
1194+            mw.put_blockhashes(self.block_hash_tree))
1195+        d.addCallback(_check_next_write)
1196+        d.addCallback(lambda ignored:
1197+            mw.put_sharehashes(self.share_hash_chain))
1198+        d.addCallback(_check_next_write)
1199+        # Add the root hash and the salt hash. This should change the
1200+        # checkstring, but not in a way that we'll be able to see right
1201+        # now, since the read vectors are applied before the write
1202+        # vectors.
1203+        d.addCallback(lambda ignored:
1204+            mw.put_root_and_salt_hashes(self.root_hash, self.salt_hash))
1205+        def _check_old_testv_after_new_one_is_written(results):
1206+            result, readvs = results
1207+            self.failUnless(result)
1208+            self.failUnlessIn(0, readvs)
1209+            self.failUnlessEqual(self.expected_checkstring,
1210+                                 readvs[0][0])
1211+            new_checkstring = mw.get_checkstring()
1212+            self.failIfEqual(new_checkstring,
1213+                             readvs[0][0])
1214+        d.addCallback(_check_old_testv_after_new_one_is_written)
1215+        # Now add the signature. This should succeed, meaning that the
1216+        # data gets written and the read vector matches what the writer
1217+        # thinks should be there.
1218+        d.addCallback(lambda ignored:
1219+            mw.put_signature(self.signature))
1220+        d.addCallback(_check_next_write)
1221+        # The checkstring remains the same for the rest of the process.
1222+        return d
1223+
1224+
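+    # For reference: each mw.put_* call above fires its Deferred with the
+    # raw result of the storage server's test-and-set write, a two-tuple.
+    # A minimal sketch of its shape, as read off the assertions above
+    # (illustrative, not an API definition):
+    #
+    #   result, readvs = results
+    #   # result: bool, True iff the test vectors matched and the write
+    #   #         was applied
+    #   # readvs: dict mapping share number -> list of strings read by
+    #   #         the read vector (here, the on-disk checkstring)
+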
1225+    def test_blockhashes_after_share_hash_chain(self):
1226+        mw = self._make_new_mw("si1", 0)
1227+        d = defer.succeed(None)
1228+        # Put everything up to and including the share hash chain
1229+        for i in xrange(6):
1230+            d.addCallback(lambda ignored, i=i:
1231+                mw.put_block(self.block, i, self.salt))
1232+        d.addCallback(lambda ignored:
1233+            mw.put_encprivkey(self.encprivkey))
1234+        d.addCallback(lambda ignored:
1235+            mw.put_blockhashes(self.block_hash_tree))
1236+        d.addCallback(lambda ignored:
1237+            mw.put_sharehashes(self.share_hash_chain))
1238+        # Now try to put a block hash tree after the share hash chain.
1239+        # This won't necessarily overwrite the share hash chain, but it
1240+        # is a bad idea in general -- if we write one that is anything
1241+        # other than the exact size of the initial one, we will either
1242+        # overwrite the share hash chain, or give the reader (who uses
1243+        # the offset of the share hash chain as an end boundary) a
1244+        # shorter tree than they expect, which will result in them
1245+        # reading junk. There is little reason to support it as a use
1246+        # case, so we should disallow it altogether.
1247+        d.addCallback(lambda ignored:
1248+            self.shouldFail(LayoutInvalid, "test same blockhashes",
1249+                            None,
1250+                            mw.put_blockhashes, self.block_hash_tree))
1251+        return d
1252+
1253+
1254+    def test_encprivkey_after_blockhashes(self):
1255+        mw = self._make_new_mw("si1", 0)
1256+        d = defer.succeed(None)
1257+        # Put everything up to and including the block hash tree
1258+        for i in xrange(6):
1259+            d.addCallback(lambda ignored, i=i:
1260+                mw.put_block(self.block, i, self.salt))
1261+        d.addCallback(lambda ignored:
1262+            mw.put_encprivkey(self.encprivkey))
1263+        d.addCallback(lambda ignored:
1264+            mw.put_blockhashes(self.block_hash_tree))
1265+        d.addCallback(lambda ignored:
1266+            self.shouldFail(LayoutInvalid, "out of order private key",
1267+                            None,
1268+                            mw.put_encprivkey, self.encprivkey))
1269+        return d
1270+
1271+
1272+    def test_share_hash_chain_after_signature(self):
1273+        mw = self._make_new_mw("si1", 0)
1274+        d = defer.succeed(None)
1275+        # Put everything up to and including the signature
1276+        for i in xrange(6):
1277+            d.addCallback(lambda ignored, i=i:
1278+                mw.put_block(self.block, i, self.salt))
1279+        d.addCallback(lambda ignored:
1280+            mw.put_encprivkey(self.encprivkey))
1281+        d.addCallback(lambda ignored:
1282+            mw.put_blockhashes(self.block_hash_tree))
1283+        d.addCallback(lambda ignored:
1284+            mw.put_sharehashes(self.share_hash_chain))
1285+        d.addCallback(lambda ignored:
1286+            mw.put_root_and_salt_hashes(self.root_hash, self.salt_hash))
1287+        d.addCallback(lambda ignored:
1288+            mw.put_signature(self.signature))
1289+        # Now try to put the share hash chain again. This should fail
1290+        d.addCallback(lambda ignored:
1291+            self.shouldFail(LayoutInvalid, "out of order share hash chain",
1292+                            None,
1293+                            mw.put_sharehashes, self.share_hash_chain))
1294+        return d
1295+
1296+
1297+    def test_signature_after_verification_key(self):
1298+        mw = self._make_new_mw("si1", 0)
1299+        d = defer.succeed(None)
1300+        # Put everything up to and including the verification key.
1301+        for i in xrange(6):
1302+            d.addCallback(lambda ignored, i=i:
1303+                mw.put_block(self.block, i, self.salt))
1304+        d.addCallback(lambda ignored:
1305+            mw.put_encprivkey(self.encprivkey))
1306+        d.addCallback(lambda ignored:
1307+            mw.put_blockhashes(self.block_hash_tree))
1308+        d.addCallback(lambda ignored:
1309+            mw.put_sharehashes(self.share_hash_chain))
1310+        d.addCallback(lambda ignored:
1311+            mw.put_root_and_salt_hashes(self.root_hash, self.salt_hash))
1312+        d.addCallback(lambda ignored:
1313+            mw.put_signature(self.signature))
1314+        d.addCallback(lambda ignored:
1315+            mw.put_verification_key(self.verification_key))
1316+        # Now try to put the signature again. This should fail
1317+        d.addCallback(lambda ignored:
1318+            self.shouldFail(LayoutInvalid, "signature after verification",
1319+                            None,
1320+                            mw.put_signature, self.signature))
1321+        return d
1322+
1323+
1324+    def test_uncoordinated_write(self):
1325+        # Make two mutable writers, both pointing to the same storage
1326+        # server, both at the same storage index, and try writing to the
1327+        # same share.
1328+        mw1 = self._make_new_mw("si1", 0)
1329+        mw2 = self._make_new_mw("si1", 0)
1330+        d = defer.succeed(None)
1331+        def _check_success(results):
1332+            result, readvs = results
1333+            self.failUnless(result)
1334+
1335+        def _check_failure(results):
1336+            result, readvs = results
1337+            self.failIf(result)
1338+
1339+        d.addCallback(lambda ignored:
1340+            mw1.put_block(self.block, 0, self.salt))
1341+        d.addCallback(_check_success)
1342+        d.addCallback(lambda ignored:
1343+            mw2.put_block(self.block, 0, self.salt))
1344+        d.addCallback(_check_failure)
1345+        return d
1346+
1347+
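+    # (A note on why the second write fails: mw2's initial test vector
+    # expects to find an empty slot -- as in test_write_test_vectors
+    # below, an empty checkstring means "no share is there" -- but mw1's
+    # write has already populated the slot, so the server rejects mw2's
+    # write via the same test-and-set mechanism.)
+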
1348+    def test_invalid_salt_size(self):
1349+        # Salts need to be 16 bytes in size. Writes that attempt to
1350+        # write more or less than this should be rejected.
1351+        mw = self._make_new_mw("si1", 0)
1352+        invalid_salt = "a" * 17 # 17 bytes
1353+        another_invalid_salt = "b" * 15 # 15 bytes
1354+        d = defer.succeed(None)
1355+        d.addCallback(lambda ignored:
1356+            self.shouldFail(LayoutInvalid, "salt too big",
1357+                            None,
1358+                            mw.put_block, self.block, 0, invalid_salt))
1359+        d.addCallback(lambda ignored:
1360+            self.shouldFail(LayoutInvalid, "salt too small",
1361+                            None,
1362+                            mw.put_block, self.block, 0,
1363+                            another_invalid_salt))
1364+        return d
1365+
1366+
1367+    def test_write_test_vectors(self):
1368+        # If we give the write proxy a bogus test vector at
1369+        # any point during the process, it should fail to write.
1370+        mw = self._make_new_mw("si1", 0)
1371+        mw.set_checkstring("this is a lie")
1372+        # The initial write should be expecting to find the improbable
1373+        # checkstring above in place; finding nothing, it should fail.
1374+        d = defer.succeed(None)
1375+        d.addCallback(lambda ignored:
1376+            mw.put_block(self.block, 0, self.salt))
1377+        def _check_failure(results):
1378+            result, readv = results
1379+            self.failIf(result)
1380+        d.addCallback(_check_failure)
1381+        # Now set the checkstring to the empty string, which
1382+        # indicates that no share is there.
1383+        d.addCallback(lambda ignored:
1384+            mw.set_checkstring(""))
1385+        d.addCallback(lambda ignored:
1386+            mw.put_block(self.block, 0, self.salt))
1387+        def _check_success(results):
1388+            result, readv = results
1389+            self.failUnless(result)
1390+        d.addCallback(_check_success)
1391+        # Now set the checkstring to something wrong
1392+        d.addCallback(lambda ignored:
1393+            mw.set_checkstring("something wrong"))
1394+        # This should fail to do anything
1395+        d.addCallback(lambda ignored:
1396+            mw.put_block(self.block, 1, self.salt))
1397+        d.addCallback(_check_failure)
1398+        # Now set it back to what it should be.
1399+        d.addCallback(lambda ignored:
1400+            mw.set_checkstring(mw.get_checkstring()))
1401+        for i in xrange(1, 6):
1402+            d.addCallback(lambda ignored, i=i:
1403+                mw.put_block(self.block, i, self.salt))
1404+            d.addCallback(_check_success)
1405+        d.addCallback(lambda ignored:
1406+            mw.put_encprivkey(self.encprivkey))
1407+        d.addCallback(_check_success)
1408+        d.addCallback(lambda ignored:
1409+            mw.put_blockhashes(self.block_hash_tree))
1410+        d.addCallback(_check_success)
1411+        d.addCallback(lambda ignored:
1412+            mw.put_sharehashes(self.share_hash_chain))
1413+        d.addCallback(_check_success)
1414+        def _keep_old_checkstring(ignored):
1415+            self.old_checkstring = mw.get_checkstring()
1416+            mw.set_checkstring("foobarbaz")
1417+        d.addCallback(_keep_old_checkstring)
1418+        d.addCallback(lambda ignored:
1419+            mw.put_root_and_salt_hashes(self.root_hash, self.salt_hash))
1420+        d.addCallback(_check_failure)
1421+        d.addCallback(lambda ignored:
1422+            self.failUnlessEqual(self.old_checkstring, mw.get_checkstring()))
1423+        def _restore_old_checkstring(ignored):
1424+            mw.set_checkstring(self.old_checkstring)
1425+        d.addCallback(_restore_old_checkstring)
1426+        d.addCallback(lambda ignored:
1427+            mw.put_root_and_salt_hashes(self.root_hash, self.salt_hash))
1428+        # The checkstring should have been set appropriately for us on
1429+        # the last write; if we try to change it to something else,
1430+        # that change should cause the signature step to fail.
1431+        d.addCallback(lambda ignored:
1432+            mw.set_checkstring("something else"))
1433+        d.addCallback(lambda ignored:
1434+            mw.put_signature(self.signature))
1435+        d.addCallback(_check_failure)
1436+        d.addCallback(lambda ignored:
1437+            mw.set_checkstring(mw.get_checkstring()))
1438+        d.addCallback(lambda ignored:
1439+            mw.put_signature(self.signature))
1440+        d.addCallback(_check_success)
1441+        d.addCallback(lambda ignored:
1442+            mw.put_verification_key(self.verification_key))
1443+        d.addCallback(_check_success)
1444+        return d
1445+
1446+
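+    # A note on checkstrings: set_checkstring() installs the test vector
+    # that the server compares against the start of the share before it
+    # applies a write. A sketch of what a well-formed one looks like,
+    # assuming it is the packed (version, seqnum, root hash, salt hash)
+    # header prefix -- the authoritative format lives in layout.py:
+    #
+    #   checkstring = struct.pack(">BQ32s32s",
+    #                             1,          # MDMF version byte
+    #                             0,          # sequence number
+    #                             root_hash,  # 32-byte root hash
+    #                             salt_hash)  # 32-byte flat salt hash
+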
1447+    def test_offset_only_set_on_success(self):
1448+        # The write proxy should be smart enough to detect when a write
1449+        # has failed, and should only advance its offsets (its notion
1450+        # of progress) when a write actually succeeds.
1451+        mw = self._make_new_mw("si1", 0)
1452+        d = defer.succeed(None)
1453+        for i in xrange(1, 6):
1454+            d.addCallback(lambda ignored, i=i:
1455+                mw.put_block(self.block, i, self.salt))
1456+        def _break_checkstring(ignored):
1457+            self._old_checkstring = mw.get_checkstring()
1458+            mw.set_checkstring("foobarbaz")
1459+
1460+        def _fix_checkstring(ignored):
1461+            mw.set_checkstring(self._old_checkstring)
1462+
1463+        d.addCallback(_break_checkstring)
1464+
1465+        # Setting the encrypted private key shouldn't work now, which is
1466+        # to be expected and is tested elsewhere. We also want to make
1467+        # sure that we can't add the block hash tree after a failed
1468+        # write of this sort.
1469+        d.addCallback(lambda ignored:
1470+            mw.put_encprivkey(self.encprivkey))
1471+        d.addCallback(lambda ignored:
1472+            self.shouldFail(LayoutInvalid, "test out-of-order blockhashes",
1473+                            None,
1474+                            mw.put_blockhashes, self.block_hash_tree))
1475+        d.addCallback(_fix_checkstring)
1476+        d.addCallback(lambda ignored:
1477+            mw.put_encprivkey(self.encprivkey))
1478+        d.addCallback(_break_checkstring)
1479+        d.addCallback(lambda ignored:
1480+            mw.put_blockhashes(self.block_hash_tree))
1481+        d.addCallback(lambda ignored:
1482+            self.shouldFail(LayoutInvalid, "test out-of-order sharehashes",
1483+                            None,
1484+                            mw.put_sharehashes, self.share_hash_chain))
1485+        d.addCallback(_fix_checkstring)
1486+        d.addCallback(lambda ignored:
1487+            mw.put_blockhashes(self.block_hash_tree))
1488+        d.addCallback(_break_checkstring)
1489+        d.addCallback(lambda ignored:
1490+            mw.put_sharehashes(self.share_hash_chain))
1491+        d.addCallback(lambda ignored:
1492+            self.shouldFail(LayoutInvalid, "out-of-order root hash",
1493+                            None,
1494+                            mw.put_root_and_salt_hashes,
1495+                            self.root_hash, self.salt_hash))
1496+        d.addCallback(_fix_checkstring)
1497+        d.addCallback(lambda ignored:
1498+            mw.put_sharehashes(self.share_hash_chain))
1499+        d.addCallback(_break_checkstring)
1500+        d.addCallback(lambda ignored:
1501+            mw.put_root_and_salt_hashes(self.root_hash, self.salt_hash))
1502+        d.addCallback(lambda ignored:
1503+            self.shouldFail(LayoutInvalid, "out-of-order signature",
1504+                            None,
1505+                            mw.put_signature, self.signature))
1506+        d.addCallback(_fix_checkstring)
1507+        d.addCallback(lambda ignored:
1508+            mw.put_root_and_salt_hashes(self.root_hash, self.salt_hash))
1509+        d.addCallback(_break_checkstring)
1510+        d.addCallback(lambda ignored:
1511+            mw.put_signature(self.signature))
1512+        d.addCallback(lambda ignored:
1513+            self.shouldFail(LayoutInvalid, "out-of-order verification key",
1514+                            None,
1515+                            mw.put_verification_key,
1516+                            self.verification_key))
1517+        d.addCallback(_fix_checkstring)
1518+        d.addCallback(lambda ignored:
1519+            mw.put_signature(self.signature))
1520+        d.addCallback(_break_checkstring)
1521+        d.addCallback(lambda ignored:
1522+            mw.put_verification_key(self.verification_key))
1523+        d.addCallback(lambda ignored:
1524+            self.shouldFail(LayoutInvalid, "out-of-order finish",
1525+                            None,
1526+                            mw.finish_publishing))
1527+        return d
1528+
1529+
1530+    def serialize_blockhashes(self, blockhashes):
1531+        return "".join(blockhashes)
1532+
1533+
1534+    def serialize_sharehashes(self, sharehashes):
1535+        ret = "".join([struct.pack(">H32s", i, sharehashes[i])
1536+                        for i in sorted(sharehashes.keys())])
1537+        return ret
1538+
1539+
1540+    def test_write(self):
1541+        # This translates to a file with six 6-byte segments, each
1542+        # encoded into 2-byte blocks.
1543+        mw = self._make_new_mw("si1", 0)
1544+        mw2 = self._make_new_mw("si1", 1)
1545+        # Test writing some blocks.
1546+        read = self.ss.remote_slot_readv
1547+        def _check_block_write(i, share):
1548+            self.failUnlessEqual(read("si1", [share], [(239 + (i * 2), 2)]),
1549+                                {share: [self.block]})
1550+            self.failUnlessEqual(read("si1", [share], [(143 + (i * 16), 16)]),
1551+                                 {share: [self.salt]})
1552+        d = defer.succeed(None)
1553+        for i in xrange(6):
1554+            d.addCallback(lambda ignored, i=i:
1555+                mw.put_block(self.block, i, self.salt))
1556+            d.addCallback(lambda ignored, i=i:
1557+                _check_block_write(i, 0))
1558+        # Now try the same thing, but with share 1 instead of share 0.
1559+        for i in xrange(6):
1560+            d.addCallback(lambda ignored, i=i:
1561+                mw2.put_block(self.block, i, self.salt))
1562+            d.addCallback(lambda ignored, i=i:
1563+                _check_block_write(i, 1))
1564+
1569+        # Next, we make a fake encrypted private key, and put it onto the
1570+        # storage server.
1571+        d.addCallback(lambda ignored:
1572+            mw.put_encprivkey(self.encprivkey))
1573+        # So far, we have:
1574+        #  header:  143 bytes
1575+        #  salts:   16 * 6 = 96 bytes
1576+        #  blocks:  2 * 6 = 12 bytes
1577+        #   = 251 bytes
1578+        expected_private_key_offset = 251
1579+        self.failUnlessEqual(len(self.encprivkey), 7)
1580+        d.addCallback(lambda ignored:
1581+            self.failUnlessEqual(read("si1", [0], [(251, 7)]),
1582+                                 {0: [self.encprivkey]}))
1583+
1584+        # Next, we put a fake block hash tree.
1585+        d.addCallback(lambda ignored:
1586+            mw.put_blockhashes(self.block_hash_tree))
1587+        # The block hash tree got inserted at:
1588+        #  header + salts + blocks: 251 bytes
1589+        #  encrypted private key:   7 bytes
1590+        #       = 258 bytes
1591+        expected_block_hash_offset = 258
1592+        self.failUnlessEqual(len(self.block_hash_tree_s), 32 * 6)
1593+        d.addCallback(lambda ignored:
1594+            self.failUnlessEqual(read("si1", [0], [(expected_block_hash_offset, 32 * 6)]),
1595+                                 {0: [self.block_hash_tree_s]}))
1596+
1597+        # Next, put a fake share hash chain
1598+        d.addCallback(lambda ignored:
1599+            mw.put_sharehashes(self.share_hash_chain))
1600+        # The share hash chain got inserted at:
1601+        # header + salts + blocks + private key = 258 bytes
1602+        # block hash tree:                        32 * 6 = 192 bytes
1603+        #   = 450 bytes
1604+        expected_share_hash_offset = 450
1605+        d.addCallback(lambda ignored:
1606+            self.failUnlessEqual(read("si1", [0],[(expected_share_hash_offset, (32 + 2) * 6)]),
1607+                                 {0: [self.share_hash_chain_s]}))
1608+
1609+        # Next, we put what is supposed to be the root hash of
1610+        # our share hash tree but isn't, along with the flat hash
1611+        # of all the salts.
1612+        d.addCallback(lambda ignored:
1613+            mw.put_root_and_salt_hashes(self.root_hash, self.salt_hash))
1614+        # The root hash gets inserted at byte 9 (its position is in the
1615+        # header, and is fixed). The salt hash is right after it, at byte 41.
1616+        def _check(ignored):
1617+            self.failUnlessEqual(read("si1", [0], [(9, 32)]),
1618+                                 {0: [self.root_hash]})
1619+            self.failUnlessEqual(read("si1", [0], [(41, 32)]),
1620+                                 {0: [self.salt_hash]})
1621+        d.addCallback(_check)
1622+
1623+        # Next, we put a signature of the header block.
1624+        d.addCallback(lambda ignored:
1625+            mw.put_signature(self.signature))
1626+        # The signature gets written right after the share hash chain:
1627+        #   450 (share hash chain offset) + (32 + 2) * 6 = 654 bytes
1628+        expected_signature_offset = 654
1629+        self.failUnlessEqual(len(self.signature), 9)
1630+        d.addCallback(lambda ignored:
1631+            self.failUnlessEqual(read("si1", [0], [(expected_signature_offset, 9)]),
1632+                                 {0: [self.signature]}))
1633+
1634+        # Next, we put the verification key
1635+        d.addCallback(lambda ignored:
1636+            mw.put_verification_key(self.verification_key))
1637+        # The verification key gets written to:
1638+        #   654 + 9 = 663 bytes
1639+        expected_verification_key_offset = 663
1640+        self.failUnlessEqual(len(self.verification_key), 6)
1641+        d.addCallback(lambda ignored:
1642+            self.failUnlessEqual(read("si1", [0], [(expected_verification_key_offset, 6)]),
1643+                                 {0: [self.verification_key]}))
1644+
1645+        def _check_signable(ignored):
1646+            # Make sure that the signable is what we think it should be.
1647+            signable = mw.get_signable()
1648+            verno, seq, roothash, salthash, k, n, segsize, datalen = \
1649+                                            struct.unpack(">BQ32s32sBBQQ",
1650+                                                          signable)
1651+            self.failUnlessEqual(verno, 1)
1652+            self.failUnlessEqual(seq, 0)
1653+            self.failUnlessEqual(roothash, self.root_hash)
1654+            self.failUnlessEqual(salthash, self.salt_hash)
1655+            self.failUnlessEqual(k, 3)
1656+            self.failUnlessEqual(n, 10)
1657+            self.failUnlessEqual(segsize, 6)
1658+            self.failUnlessEqual(datalen, 36)
1659+        d.addCallback(_check_signable)
1660+        # Next, we cause the offset table to be published.
1661+        d.addCallback(lambda ignored:
1662+            mw.finish_publishing())
1663+        expected_eof_offset = 669
1664+
1665+        # The offset table starts at byte 91. Happily, we have already
1666+        # worked out most of these offsets above, but we want to make
1667+        # sure that the representation on disk agrees with what we've
1668+        # calculated.
1669+        #
1670+        # (we don't have an explicit offset for the AES salts, because
1671+        # we know that they start right after the header)
1672+        def _check_offsets(ignored):
1673+            # Check the version number to make sure that it is correct.
1674+            expected_version_number = struct.pack(">B", 1)
1675+            self.failUnlessEqual(read("si1", [0], [(0, 1)]),
1676+                                 {0: [expected_version_number]})
1677+            # Check the sequence number to make sure that it is correct
1678+            expected_sequence_number = struct.pack(">Q", 0)
1679+            self.failUnlessEqual(read("si1", [0], [(1, 8)]),
1680+                                 {0: [expected_sequence_number]})
1681+            # Check that the encoding parameters (k, N, segment size, data
1682+            # length) are what they should be. These are 3, 10, 6, and 36.
1683+            expected_k = struct.pack(">B", 3)
1684+            self.failUnlessEqual(read("si1", [0], [(73, 1)]),
1685+                                 {0: [expected_k]})
1686+            expected_n = struct.pack(">B", 10)
1687+            self.failUnlessEqual(read("si1", [0], [(74, 1)]),
1688+                                 {0: [expected_n]})
1689+            expected_segment_size = struct.pack(">Q", 6)
1690+            self.failUnlessEqual(read("si1", [0], [(75, 8)]),
1691+                                 {0: [expected_segment_size]})
1692+            expected_data_length = struct.pack(">Q", 36)
1693+            self.failUnlessEqual(read("si1", [0], [(83, 8)]),
1694+                                 {0: [expected_data_length]})
1695+            # 91          4           The offset of the share data
1696+            expected_offset = struct.pack(">L", 239)
1697+            self.failUnlessEqual(read("si1", [0], [(91, 4)]),
1698+                                 {0: [expected_offset]})
1699+            # 95          8           The offset of the encrypted private key
1700+            expected_offset = struct.pack(">Q", expected_private_key_offset)
1701+            self.failUnlessEqual(read("si1", [0], [(95, 8)]),
1702+                                 {0: [expected_offset]})
1703+            # 103         8           The offset of the block hash tree
1704+            expected_offset = struct.pack(">Q", expected_block_hash_offset)
1705+            self.failUnlessEqual(read("si1", [0], [(103, 8)]),
1706+                                 {0: [expected_offset]})
1707+            # 111         8           The offset of the share hash chain
1708+            expected_offset = struct.pack(">Q", expected_share_hash_offset)
1709+            self.failUnlessEqual(read("si1", [0], [(111, 8)]),
1710+                                 {0: [expected_offset]})
1711+            # 119         8           The offset of the signature
1712+            expected_offset = struct.pack(">Q", expected_signature_offset)
1713+            self.failUnlessEqual(read("si1", [0], [(119, 8)]),
1714+                                 {0: [expected_offset]})
1715+            # 127         8           The offset of the verification key
1716+            expected_offset = struct.pack(">Q", expected_verification_key_offset)
1717+            self.failUnlessEqual(read("si1", [0], [(127, 8)]),
1718+                                 {0: [expected_offset]})
1719+            # 135         8           The offset of the EOF
1720+            expected_offset = struct.pack(">Q", expected_eof_offset)
1721+            self.failUnlessEqual(read("si1", [0], [(135, 8)]),
1722+                                 {0: [expected_offset]})
1723+            # = 143 bytes in total.
1724+        d.addCallback(_check_offsets)
1725+        return d
1726+
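+    # Pulling the arithmetic from test_write together, the on-disk layout
+    # of this k=3, N=10, 36-byte test share works out to (offsets and
+    # sizes in bytes; the signature and key sizes are the fake test
+    # values):
+    #
+    #     0   (1)  version number       143  (96)  salts (16 * 6)
+    #     1   (8)  sequence number      239  (12)  share data (2 * 6)
+    #     9  (32)  root hash            251   (7)  encrypted private key
+    #    41  (32)  salt hash            258 (192)  block hash tree
+    #    73   (1)  k                    450 (204)  share hash chain
+    #    74   (1)  N                    654   (9)  signature
+    #    75   (8)  segment size         663   (6)  verification key
+    #    83   (8)  data length          669        EOF
+    #    91  (52)  offset table (4 + 8 * 6)
+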
1727+    def _make_new_mw(self, si, share, datalength=36):
1728+        # This is a file of size 36 bytes. Since it has a segment
1729+        # size of 6, we know that it has six 6-byte segments, each of
1730+        # which will be split into 2-byte blocks because our FEC k
1731+        # parameter is 3.
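+        # (Reading the positional arguments off this call: share number,
+        # server reference, storage index, write-enabler secrets,
+        # sequence number, k (required shares), N (total shares),
+        # segment size, and data length.)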
1732+        mw = MDMFSlotWriteProxy(share, self.rref, si, self.secrets, 0, 3, 10,
1733+                                6, datalength)
1734+        return mw
1735+
1736+
1737+    def test_write_rejected_with_too_many_blocks(self):
1738+        mw = self._make_new_mw("si0", 0)
1739+
1740+        # Try writing too many blocks. We should not be able to write
1741+        # more than 6 blocks into each share.
1743+        d = defer.succeed(None)
1744+        for i in xrange(6):
1745+            d.addCallback(lambda ignored, i=i:
1746+                mw.put_block(self.block, i, self.salt))
1747+        d.addCallback(lambda ignored:
1748+            self.shouldFail(LayoutInvalid, "too many blocks",
1749+                            None,
1750+                            mw.put_block, self.block, 7, self.salt))
1751+        return d
1752+
1753+
1754+    def test_write_rejected_with_invalid_salt(self):
1755+        # Try writing an invalid salt. Salts are 16 bytes -- any more or
1756+        # less should cause an error.
1757+        mw = self._make_new_mw("si1", 0)
1758+        bad_salt = "a" * 17 # 17 bytes
1759+        d = defer.succeed(None)
1760+        d.addCallback(lambda ignored:
1761+            self.shouldFail(LayoutInvalid, "test_invalid_salt",
1762+                            None, mw.put_block, self.block, 7, bad_salt))
1763+        return d
1764+
1765+
1766+    def test_write_rejected_with_invalid_salt_hash(self):
1767+        # Try writing an invalid salt hash. These should be SHA256d, and
1768+        # 32 bytes long as a result.
1769+        mw = self._make_new_mw("si2", 0)
1770+        invalid_salt_hash = "b" * 31
1771+        d = defer.succeed(None)
1772+        # Before this test can work, we need to put some blocks + salts,
1773+        # a block hash tree, and a share hash tree. Otherwise, we'll see
1774+        # failures that match what we are looking for, but are caused by
1775+        # the constraints imposed on operation ordering.
1776+        for i in xrange(6):
1777+            d.addCallback(lambda ignored, i=i:
1778+                mw.put_block(self.block, i, self.salt))
1779+        d.addCallback(lambda ignored:
1780+            mw.put_encprivkey(self.encprivkey))
1781+        d.addCallback(lambda ignored:
1782+            mw.put_blockhashes(self.block_hash_tree))
1783+        d.addCallback(lambda ignored:
1784+            mw.put_sharehashes(self.share_hash_chain))
1785+        d.addCallback(lambda ignored:
1786+            self.shouldFail(LayoutInvalid, "invalid root hash",
1787+                            None, mw.put_root_and_salt_hashes,
1788+                            self.root_hash, invalid_salt_hash))
1789+        return d
1790+
1791+
1792+    def test_write_rejected_with_invalid_root_hash(self):
1793+        # Try writing an invalid root hash. This should be SHA256d, and
1794+        # 32 bytes long as a result.
1795+        mw = self._make_new_mw("si2", 0)
1796+        # 17 bytes != 32 bytes
1797+        invalid_root_hash = "a" * 17
1798+        d = defer.succeed(None)
1799+        # Before this test can work, we need to put some blocks + salts,
1800+        # a block hash tree, and a share hash tree. Otherwise, we'll see
1801+        # failures that match what we are looking for, but are caused by
1802+        # the constraints imposed on operation ordering.
1803+        for i in xrange(6):
1804+            d.addCallback(lambda ignored, i=i:
1805+                mw.put_block(self.block, i, self.salt))
1806+        d.addCallback(lambda ignored:
1807+            mw.put_encprivkey(self.encprivkey))
1808+        d.addCallback(lambda ignored:
1809+            mw.put_blockhashes(self.block_hash_tree))
1810+        d.addCallback(lambda ignored:
1811+            mw.put_sharehashes(self.share_hash_chain))
1812+        d.addCallback(lambda ignored:
1813+            self.shouldFail(LayoutInvalid, "invalid root hash",
1814+                            None, mw.put_root_and_salt_hashes,
1815+                            invalid_root_hash, self.salt_hash))
1816+        return d
1817+
1818+
1819+    def test_write_rejected_with_invalid_blocksize(self):
1820+        # The blocksize implied by the writer that we get from
1821+        # _make_new_mw is 2 bytes -- any more or any less than this
1822+        # should cause a failure, unless the block belongs to the tail
1823+        # segment, which may legitimately be shorter.
1824+        invalid_block = "a"
1825+        mw = self._make_new_mw("si3", 0, 33) # implies a tail segment with
1826+                                             # one byte blocks
1827+        # 1 bytes != 2 bytes
1828+        d = defer.succeed(None)
1829+        d.addCallback(lambda ignored, invalid_block=invalid_block:
1830+            self.shouldFail(LayoutInvalid, "test blocksize too small",
1831+                            None, mw.put_block, invalid_block, 0,
1832+                            self.salt))
1833+        invalid_block = invalid_block * 3
1834+        # 3 bytes != 2 bytes
1835+        d.addCallback(lambda ignored:
1836+            self.shouldFail(LayoutInvalid, "test blocksize too large",
1837+                            None,
1838+                            mw.put_block, invalid_block, 0, self.salt))
1839+        for i in xrange(5):
1840+            d.addCallback(lambda ignored, i=i:
1841+                mw.put_block(self.block, i, self.salt))
1842+        # Try to put an invalid tail segment
1843+        d.addCallback(lambda ignored:
1844+            self.shouldFail(LayoutInvalid, "test invalid tail segment",
1845+                            None,
1846+                            mw.put_block, self.block, 5, self.salt))
1847+        valid_block = "a"
1848+        d.addCallback(lambda ignored:
1849+            mw.put_block(valid_block, 5, self.salt))
1850+        return d
1851+
1852+
1853+    def test_write_enforces_order_constraints(self):
1854+        # We require that the MDMFSlotWriteProxy be interacted with in a
1855+        # specific way.
1856+        # That way is:
1857+        # 0: __init__
1858+        # 1: Write blocks and salts
1859+        # 2: Write the encrypted private key
1860+        # 3: Write the block hashes
1861+        # 4: Write the share hashes
1862+        # 5: Write the root hash and salt hash
1863+        # 6: Write the signature and verification key
1864+        # 7: Finish publishing (write the offset table)
1865+        #
1866+        # Some of these can be performed out-of-order, and some can't.
1867+        # The dependencies that I want to test here are:
1868+        #  - Private key before block hashes
1869+        #  - share hashes and block hashes before root hash
1870+        #  - root hash before signature
1871+        #  - signature before verification key
1872+        mw0 = self._make_new_mw("si0", 0)
1873+        # Write some shares
1874+        d = defer.succeed(None)
1875+        for i in xrange(6):
1876+            d.addCallback(lambda ignored, i=i:
1877+                mw0.put_block(self.block, i, self.salt))
1878+        # Try to write the block hashes before writing the encrypted
1879+        # private key
1880+        d.addCallback(lambda ignored:
1881+            self.shouldFail(LayoutInvalid, "block hashes before key",
1882+                            None, mw0.put_blockhashes,
1883+                            self.block_hash_tree))
1884+
1885+        # Write the private key.
1886+        d.addCallback(lambda ignored:
1887+            mw0.put_encprivkey(self.encprivkey))
1888+
1889+
1890+        # Try to write the share hash chain without writing the block
1891+        # hash tree
1892+        d.addCallback(lambda ignored:
1893+            self.shouldFail(LayoutInvalid, "share hash chain before "
1894+                                           "block hash tree",
1895+                            None,
1896+                            mw0.put_sharehashes, self.share_hash_chain))
1897+
1898+        # Try to write the root hash and salt hash without writing either the
1899+        # block hashes or the share hashes
1900+        d.addCallback(lambda ignored:
1901+            self.shouldFail(LayoutInvalid, "root hash before share hashes",
1902+                            None,
1903+                            mw0.put_root_and_salt_hashes,
1904+                            self.root_hash, self.salt_hash))
1905+
1906+        # Now write the block hashes and try again
1907+        d.addCallback(lambda ignored:
1908+            mw0.put_blockhashes(self.block_hash_tree))
1909+        d.addCallback(lambda ignored:
1910+            self.shouldFail(LayoutInvalid, "root hash before share hashes",
1911+                            None, mw0.put_root_and_salt_hashes,
1912+                            self.root_hash, self.salt_hash))
1913+
1914+        # We haven't yet put the root hash on the share, so we shouldn't
1915+        # be able to sign it.
1916+        d.addCallback(lambda ignored:
1917+            self.shouldFail(LayoutInvalid, "signature before root hash",
1918+                            None, mw0.put_signature, self.signature))
1919+
1920+        d.addCallback(lambda ignored:
1921+            self.failUnlessRaises(LayoutInvalid, mw0.get_signable))
1922+
1923+        # ...and, since that fails, we also shouldn't be able to put the
1924+        # verification key.
1925+        d.addCallback(lambda ignored:
1926+            self.shouldFail(LayoutInvalid, "key before signature",
1927+                            None, mw0.put_verification_key,
1928+                            self.verification_key))
1929+
1930+        # Now write the share hashes and verify that it works.
1931+        d.addCallback(lambda ignored:
1932+            mw0.put_sharehashes(self.share_hash_chain))
1933+
1934+        # We should still be unable to sign the header
1935+        d.addCallback(lambda ignored:
1936+            self.shouldFail(LayoutInvalid, "signature before hashes",
1937+                            None,
1938+                            mw0.put_signature, self.signature))
1939+
1940+        # We should be able to write the root hash now too
1941+        d.addCallback(lambda ignored:
1942+            mw0.put_root_and_salt_hashes(self.root_hash, self.salt_hash))
1943+
1944+        # We should still be unable to put the verification key
1945+        d.addCallback(lambda ignored:
1946+            self.shouldFail(LayoutInvalid, "key before signature",
1947+                            None, mw0.put_verification_key,
1948+                            self.verification_key))
1949+
1950+        d.addCallback(lambda ignored:
1951+            mw0.put_signature(self.signature))
1952+
1953+        # We shouldn't be able to write the offsets to the remote server
1954+        # until the offset table is finished; IOW, until we have written
1955+        # the verification key.
1956+        d.addCallback(lambda ignored:
1957+            self.shouldFail(LayoutInvalid, "offsets before verification key",
1958+                            None,
1959+                            mw0.finish_publishing))
1960+
1961+        d.addCallback(lambda ignored:
1962+            mw0.put_verification_key(self.verification_key))
1963+        return d
1964+
1965+
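+    # The dependencies exercised above, flattened into one data sketch
+    # (each value must be written before its key):
+    #
+    #   WRITE_DEPENDENCIES = {
+    #       "blockhashes":       ["encprivkey"],
+    #       "sharehashes":       ["blockhashes"],
+    #       "root+salt hashes":  ["blockhashes", "sharehashes"],
+    #       "signature":         ["root+salt hashes"],
+    #       "verification key":  ["signature"],
+    #       "finish_publishing": ["verification key"],
+    #   }
+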
1966+    def test_end_to_end(self):
1967+        mw = self._make_new_mw("si1", 0)
1968+        # Write a share using the mutable writer, and make sure that the
1969+        # reader knows how to read everything back to us.
1970+        d = defer.succeed(None)
1971+        for i in xrange(6):
1972+            d.addCallback(lambda ignored, i=i:
1973+                mw.put_block(self.block, i, self.salt))
1974+        d.addCallback(lambda ignored:
1975+            mw.put_encprivkey(self.encprivkey))
1976+        d.addCallback(lambda ignored:
1977+            mw.put_blockhashes(self.block_hash_tree))
1978+        d.addCallback(lambda ignored:
1979+            mw.put_sharehashes(self.share_hash_chain))
1980+        d.addCallback(lambda ignored:
1981+            mw.put_root_and_salt_hashes(self.root_hash, self.salt_hash))
1982+        d.addCallback(lambda ignored:
1983+            mw.put_signature(self.signature))
1984+        d.addCallback(lambda ignored:
1985+            mw.put_verification_key(self.verification_key))
1986+        d.addCallback(lambda ignored:
1987+            mw.finish_publishing())
1988+
1989+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
1990+        def _check_block_and_salt((block, salt)):
1991+            self.failUnlessEqual(block, self.block)
1992+            self.failUnlessEqual(salt, self.salt)
1993+
1994+        for i in xrange(6):
1995+            d.addCallback(lambda ignored, i=i:
1996+                mr.get_block_and_salt(i))
1997+            d.addCallback(_check_block_and_salt)
1998+
1999+        d.addCallback(lambda ignored:
2000+            mr.get_encprivkey())
2001+        d.addCallback(lambda encprivkey:
2002+            self.failUnlessEqual(self.encprivkey, encprivkey))
2003+
2004+        d.addCallback(lambda ignored:
2005+            mr.get_blockhashes())
2006+        d.addCallback(lambda blockhashes:
2007+            self.failUnlessEqual(self.block_hash_tree, blockhashes))
2008+
2009+        d.addCallback(lambda ignored:
2010+            mr.get_sharehashes())
2011+        d.addCallback(lambda sharehashes:
2012+            self.failUnlessEqual(self.share_hash_chain, sharehashes))
2013+
2014+        d.addCallback(lambda ignored:
2015+            mr.get_signature())
2016+        d.addCallback(lambda signature:
2017+            self.failUnlessEqual(signature, self.signature))
2018+
2019+        d.addCallback(lambda ignored:
2020+            mr.get_verification_key())
2021+        d.addCallback(lambda verification_key:
2022+            self.failUnlessEqual(verification_key, self.verification_key))
2023+
2024+        d.addCallback(lambda ignored:
2025+            mr.get_seqnum())
2026+        d.addCallback(lambda seqnum:
2027+            self.failUnlessEqual(seqnum, 0))
2028+
2029+        d.addCallback(lambda ignored:
2030+            mr.get_root_hash())
2031+        d.addCallback(lambda root_hash:
2032+            self.failUnlessEqual(self.root_hash, root_hash))
2033+
2034+        d.addCallback(lambda ignored:
2035+            mr.get_salt_hash())
2036+        d.addCallback(lambda salt_hash:
2037+            self.failUnlessEqual(self.salt_hash, salt_hash))
2038+
2039+        d.addCallback(lambda ignored:
2040+            mr.get_encoding_parameters())
2041+        def _check_encoding_parameters((k, n, segsize, datalen)):
2042+            self.failUnlessEqual(k, 3)
2043+            self.failUnlessEqual(n, 10)
2044+            self.failUnlessEqual(segsize, 6)
2045+            self.failUnlessEqual(datalen, 36)
2046+        d.addCallback(_check_encoding_parameters)
2047+
2048+        d.addCallback(lambda ignored:
2049+            mr.get_checkstring())
2050+        d.addCallback(lambda checkstring:
2051+            self.failUnlessEqual(checkstring, mw.get_checkstring()))
2052+        return d
2053+
2054+
2055+    def test_is_sdmf(self):
2056+        # The MDMFSlotReadProxy should also know how to read SDMF files,
2057+        # since it will encounter them on the grid. Callers use the
2058+        # is_sdmf method to test this.
2059+        self.write_sdmf_share_to_server("si1")
2060+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
2061+        d = mr.is_sdmf()
2062+        d.addCallback(lambda issdmf:
2063+            self.failUnless(issdmf))
2064+        return d
2065+
2066+
2067+    def test_reads_sdmf(self):
2068+        # The slot read proxy should, naturally, know how to tell us
2069+        # about data in the SDMF format
2070+        self.write_sdmf_share_to_server("si1")
2071+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
2072+        d = defer.succeed(None)
2073+        d.addCallback(lambda ignored:
2074+            mr.is_sdmf())
2075+        d.addCallback(lambda issdmf:
2076+            self.failUnless(issdmf))
2077+
2078+        # What do we need to read?
2079+        #  - The sharedata
2080+        #  - The salt
2081+        d.addCallback(lambda ignored:
2082+            mr.get_block_and_salt(0))
2083+        def _check_block_and_salt(results):
2084+            block, salt = results
2085+            self.failUnlessEqual(block, self.block * 6)
2086+            self.failUnlessEqual(salt, self.salt)
2087+        d.addCallback(_check_block_and_salt)
2088+
2089+        #  - The blockhashes
2090+        d.addCallback(lambda ignored:
2091+            mr.get_blockhashes())
2092+        d.addCallback(lambda blockhashes:
2093+            self.failUnlessEqual(self.block_hash_tree,
2094+                                 blockhashes,
2095+                                 blockhashes))
2096+        #  - The sharehashes
2097+        d.addCallback(lambda ignored:
2098+            mr.get_sharehashes())
2099+        d.addCallback(lambda sharehashes:
2100+            self.failUnlessEqual(self.share_hash_chain,
2101+                                 sharehashes))
2102+        #  - The keys
2103+        d.addCallback(lambda ignored:
2104+            mr.get_encprivkey())
2105+        d.addCallback(lambda encprivkey:
2106+            self.failUnlessEqual(encprivkey, self.encprivkey, encprivkey))
2107+        d.addCallback(lambda ignored:
2108+            mr.get_verification_key())
2109+        d.addCallback(lambda verification_key:
2110+            self.failUnlessEqual(verification_key,
2111+                                 self.verification_key,
2112+                                 verification_key))
2113+        #  - The signature
2114+        d.addCallback(lambda ignored:
2115+            mr.get_signature())
2116+        d.addCallback(lambda signature:
2117+            self.failUnlessEqual(signature, self.signature, signature))
2118+
2119+        #  - The sequence number
2120+        d.addCallback(lambda ignored:
2121+            mr.get_seqnum())
2122+        d.addCallback(lambda seqnum:
2123+            self.failUnlessEqual(seqnum, 0, seqnum))
2124+
2125+        #  - The root hash
2126+        #  - The salt hash (to verify that it is None)
2127+        d.addCallback(lambda ignored:
2128+            mr.get_root_hash())
2129+        d.addCallback(lambda root_hash:
2130+            self.failUnlessEqual(root_hash, self.root_hash, root_hash))
2131+        d.addCallback(lambda ignored:
2132+            mr.get_salt_hash())
2133+        d.addCallback(lambda salt_hash:
2134+            self.failIf(salt_hash))
2135+        return d
2136+
2137+
2138+    def test_only_reads_one_segment_sdmf(self):
2139+        # SDMF shares have only one segment, so it doesn't make sense to
2140+        # read more segments than that. The reader should know this and
2141+        # complain if we try to do that.
2142+        self.write_sdmf_share_to_server("si1")
2143+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
2144+        d = defer.succeed(None)
2145+        d.addCallback(lambda ignored:
2146+            mr.is_sdmf())
2147+        d.addCallback(lambda issdmf:
2148+            self.failUnless(issdmf))
2149+        d.addCallback(lambda ignored:
2150+            self.shouldFail(LayoutInvalid, "test bad segment",
2151+                            None,
2152+                            mr.get_block_and_salt, 1))
2153+        return d
2154+
2155+
2156+    def test_read_with_prefetched_mdmf_data(self):
2157+        # The MDMFSlotReadProxy will prefill certain fields if you pass
2158+        # it data that you have already fetched. This is useful for
2159+        # cases like the Servermap, which prefetches ~2KiB of data while
2160+        # finding out which shares are on the remote peer, so that it
2161+        # doesn't waste round trips.
2162+        mdmf_data = self.build_test_mdmf_share()
2163+        # We're telling it enough to figure out whether it is SDMF or
2164+        # MDMF.
2165+        mr = MDMFSlotReadProxy(self.rref, "si1", 0, mdmf_data[:1])
2166+        self.failUnlessEqual(mr._version_number, MDMF_VERSION)
2167+
2168+        # Now we're telling it more, but still not enough to flesh out
2169+        # the rest of the encoding parameters, so none of them should
2170+        # be set.
2171+        mr = MDMFSlotReadProxy(self.rref, "si1", 0, mdmf_data[:10])
2172+        self.failUnlessEqual(mr._version_number, MDMF_VERSION)
2173+        self.failIf(mr._sequence_number)
2174+
2175+        # This should be enough to flesh out the encoding parameters of
2176+        # an MDMF file.
2177+        mr = MDMFSlotReadProxy(self.rref, "si1", 0, mdmf_data[:91])
2178+        self.failUnlessEqual(mr._version_number, MDMF_VERSION)
2179+        self.failUnlessEqual(mr._root_hash, self.root_hash)
2180+        self.failUnlessEqual(mr._sequence_number, 0)
2181+        self.failUnlessEqual(mr._required_shares, 3)
2182+        self.failUnlessEqual(mr._total_shares, 10)
2183+
2184+        # This should be enough to fill in the encoding parameters and
2185+        # a little more, but not enough to complete the offset table.
2186+        mr = MDMFSlotReadProxy(self.rref, "si1", 0, mdmf_data[:100])
2187+        self.failUnlessEqual(mr._version_number, MDMF_VERSION)
2188+        self.failUnlessEqual(mr._root_hash, self.root_hash)
2189+        self.failUnlessEqual(mr._sequence_number, 0)
2190+        self.failUnlessEqual(mr._required_shares, 3)
2191+        self.failUnlessEqual(mr._total_shares, 10)
2192+        self.failIf(mr._offsets)
2193+
2194+        # This should be enough to fill in both the encoding parameters
2195+        # and the table of offsets
2196+        mr = MDMFSlotReadProxy(self.rref, "si1", 0, mdmf_data[:143])
2197+        self.failUnlessEqual(mr._version_number, MDMF_VERSION)
2198+        self.failUnlessEqual(mr._root_hash, self.root_hash)
2199+        self.failUnlessEqual(mr._sequence_number, 0)
2200+        self.failUnlessEqual(mr._required_shares, 3)
2201+        self.failUnlessEqual(mr._total_shares, 10)
2202+        self.failUnless(mr._offsets)
2203+
2204+
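+    # The prefix-length thresholds exercised above match the MDMF header
+    # layout: below 91 bytes only the version number is guaranteed to be
+    # populated; at 91 bytes the full verinfo prefix (seqnum, root hash,
+    # salt hash, k, N, segment size, data length) is available; and at
+    # 143 bytes the offset table is complete as well.
+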
2205+    def test_read_with_prefetched_sdmf_data(self):
2206+        sdmf_data = self.build_test_sdmf_share()
2207+        # Feed it just enough data to check the share type
2208+        mr = MDMFSlotReadProxy(self.rref, "si1", 0, sdmf_data[:1])
2209+        self.failUnlessEqual(mr._version_number, SDMF_VERSION)
2210+        self.failIf(mr._sequence_number)
2211+
2212+        # Now feed it more data, but not enough data to populate the
2213+        # encoding parameters. The results should be exactly the same as
2214+        # before.
2215+        mr = MDMFSlotReadProxy(self.rref, "si1", 0, sdmf_data[:10])
2216+        self.failUnlessEqual(mr._version_number, SDMF_VERSION)
2217+        self.failIf(mr._sequence_number)
2218+
2219+        # Now feed it enough data to populate the encoding parameters
2220+        mr = MDMFSlotReadProxy(self.rref, "si1", 0, sdmf_data[:75])
2221+        self.failUnlessEqual(mr._version_number, SDMF_VERSION)
2222+        self.failUnlessEqual(mr._sequence_number, 0)
2223+        self.failUnlessEqual(mr._root_hash, self.root_hash)
2224+        self.failUnlessEqual(mr._required_shares, 3)
2225+        self.failUnlessEqual(mr._total_shares, 10)
2226+
2227+        # Now feed it enough data to populate the encoding parameters
2228+        # and then some, but not enough to fill in the offset table.
2229+        mr = MDMFSlotReadProxy(self.rref, "si1", 0, sdmf_data[:100])
2230+        self.failUnlessEqual(mr._version_number, SDMF_VERSION)
2231+        self.failUnlessEqual(mr._sequence_number, 0)
2232+        self.failUnlessEqual(mr._root_hash, self.root_hash)
2233+        self.failUnlessEqual(mr._required_shares, 3)
2234+        self.failUnlessEqual(mr._total_shares, 10)
2235+        self.failIf(mr._offsets)
2236+
2237+        # Now fill in the offset table.
2238+        mr = MDMFSlotReadProxy(self.rref, "si1", 0, sdmf_data[:107])
2239+        self.failUnlessEqual(mr._version_number, SDMF_VERSION)
2240+        self.failUnlessEqual(mr._sequence_number, 0)
2241+        self.failUnlessEqual(mr._root_hash, self.root_hash)
2242+        self.failUnlessEqual(mr._required_shares, 3)
2243+        self.failUnlessEqual(mr._total_shares, 10)
2244+        self.failUnless(mr._offsets)
2245+
2246+
2247+    def test_read_with_prefetched_bogus_data(self):
2248+        bogus_data = "kjkasdlkjsjkdjksajdjsadjsajdskaj"
2249+        # The bogus data should not be mistaken for either share format.
2250+        mr = MDMFSlotReadProxy(self.rref, "si1", 0, bogus_data)
2251+        self.failIf(mr._version_number)
2252+
2253+
2254+    def test_read_with_empty_mdmf_file(self):
2255+        # Some tests upload a file with no contents to test things
2256+        # unrelated to the actual handling of the content of the file.
2257+        # The reader should behave intelligently in these cases.
2258+        self.write_test_share_to_server("si1", empty=True)
2259+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
2260+        # We should be able to get the encoding parameters, and they
2261+        # should be correct.
2262+        d = defer.succeed(None)
2263+        d.addCallback(lambda ignored:
2264+            mr.get_encoding_parameters())
2265+        def _check_encoding_parameters(params):
2266+            self.failUnlessEqual(len(params), 4)
2267+            k, n, segsize, datalen = params
2268+            self.failUnlessEqual(k, 3)
2269+            self.failUnlessEqual(n, 10)
2270+            self.failUnlessEqual(segsize, 0)
2271+            self.failUnlessEqual(datalen, 0)
2272+        d.addCallback(_check_encoding_parameters)
2273+
2274+        # We should not be able to fetch a block, since there are no
2275+        # blocks to fetch
2276+        d.addCallback(lambda ignored:
2277+            self.shouldFail(LayoutInvalid, "get block on empty file",
2278+                            None,
2279+                            mr.get_block_and_salt, 0))
2280+        return d
2281+
2282+
2283+    def test_read_with_empty_sdmf_file(self):
2284+        self.write_sdmf_share_to_server("si1", empty=True)
2285+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
2286+        # We should be able to get the encoding parameters, and they
2287+        # should be correct
2288+        d = defer.succeed(None)
2289+        d.addCallback(lambda ignored:
2290+            mr.get_encoding_parameters())
2291+        def _check_encoding_parameters(params):
2292+            self.failUnlessEqual(len(params), 4)
2293+            k, n, segsize, datalen = params
2294+            self.failUnlessEqual(k, 3)
2295+            self.failUnlessEqual(n, 10)
2296+            self.failUnlessEqual(segsize, 0)
2297+            self.failUnlessEqual(datalen, 0)
2298+        d.addCallback(_check_encoding_parameters)
2299+
2300+        # It does not make sense to get a block in this format, so we
2301+        # should not be able to.
2302+        d.addCallback(lambda ignored:
2303+            self.shouldFail(LayoutInvalid, "get block on an empty file",
2304+                            None,
2305+                            mr.get_block_and_salt, 0))
2306+        return d
2307+
2308+
2309+    def test_verinfo_with_sdmf_file(self):
2310+        self.write_sdmf_share_to_server("si1")
2311+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
2312+        # We should be able to get the version information.
2313+        d = defer.succeed(None)
2314+        d.addCallback(lambda ignored:
2315+            mr.get_verinfo())
2316+        def _check_verinfo(verinfo):
2317+            self.failUnless(verinfo)
2318+            self.failUnlessEqual(len(verinfo), 9)
2319+            (seqnum,
2320+             root_hash,
2321+             salt,
2322+             segsize,
2323+             datalen,
2324+             k,
2325+             n,
2326+             prefix,
2327+             offsets) = verinfo
2328+            self.failUnlessEqual(seqnum, 0)
2329+            self.failUnlessEqual(root_hash, self.root_hash)
2330+            self.failUnlessEqual(salt, self.salt)
2331+            self.failUnlessEqual(segsize, 36)
2332+            self.failUnlessEqual(datalen, 36)
2333+            self.failUnlessEqual(k, 3)
2334+            self.failUnlessEqual(n, 10)
2335+            expected_prefix = struct.pack(">BQ32s16s BBQQ",
2336+                                          0,
2337+                                          seqnum,
2338+                                          root_hash,
2339+                                          salt,
2340+                                          k,
2341+                                          n,
2342+                                          segsize,
2343+                                          datalen)
2344+            self.failUnlessEqual(prefix, expected_prefix)
2345+            self.failUnlessEqual(offsets, self.offsets)
2346+        d.addCallback(_check_verinfo)
2347+        return d
2348+
2349+
2350+    def test_verinfo_with_mdmf_file(self):
2351+        self.write_test_share_to_server("si1")
2352+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
2353+        d = defer.succeed(None)
2354+        d.addCallback(lambda ignored:
2355+            mr.get_verinfo())
2356+        def _check_verinfo(verinfo):
2357+            self.failUnless(verinfo)
2358+            self.failUnlessEqual(len(verinfo), 9)
2359+            (seqnum,
2360+             root_hash,
2361+             salt_hash,
2362+             segsize,
2363+             datalen,
2364+             k,
2365+             n,
2366+             prefix,
2367+             offsets) = verinfo
2368+            self.failUnlessEqual(seqnum, 0)
2369+            self.failUnlessEqual(root_hash, self.root_hash)
2370+            self.failUnlessEqual(salt_hash, self.salt_hash)
2371+            self.failUnlessEqual(segsize, 6)
2372+            self.failUnlessEqual(datalen, 36)
2373+            self.failUnlessEqual(k, 3)
2374+            self.failUnlessEqual(n, 10)
2375+            expected_prefix = struct.pack(">BQ32s32s BBQQ",
2376+                                          1,
2377+                                          seqnum,
2378+                                          root_hash,
2379+                                          salt_hash,
2380+                                          k,
2381+                                          n,
2382+                                          segsize,
2383+                                          datalen)
2384+            self.failUnlessEqual(prefix, expected_prefix)
2385+            self.failUnlessEqual(offsets, self.offsets)
2386+        d.addCallback(_check_verinfo)
2387+        return d
2388+
2389+
2390 class Stats(unittest.TestCase):
2391 
2392     def setUp(self):
2393}
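
The verinfo tests above check the signed prefix against a struct format built by hand. As a minimal sketch (not part of the patch) of the byte accounting involved: the SDMF and MDMF signed prefixes differ only in the version byte and in carrying a literal 16-byte salt versus a 32-byte hash of the per-segment salts.

    import struct

    SDMF_SIGNED_PREFIX = ">BQ32s16s BBQQ"  # verbyte 0, literal 16-byte salt
    MDMF_SIGNED_PREFIX = ">BQ32s32s BBQQ"  # verbyte 1, 32-byte salt hash

    # 1 + 8 + 32 + 16 + 1 + 1 + 8 + 8 = 75 bytes
    assert struct.calcsize(SDMF_SIGNED_PREFIX) == 75
    # 1 + 8 + 32 + 32 + 1 + 1 + 8 + 8 = 91 bytes
    assert struct.calcsize(MDMF_SIGNED_PREFIX) == 91
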
2394[A first stab at a segmented uploader
2395Kevan Carstensen <kevan@isnotajoke.com>**20100611192621
2396 Ignore-this: cb23fe66b4d642996e8e09e474e41c5e
2397 
2398 This uploader will upload, segment-by-segment, MDMF files. It will only
2399 do this if it thinks that the filenode that it is uploading represents
2400 an MDMF file; otherwise, it uploads the file as SDMF.
2401 
2402 My TODO list so far:
2403     - More robust peer selection; we'll want to use something like
2404       servers of happiness to figure out reliability and unreliability.
2405     - Clean up.
2406] {
2407hunk ./src/allmydata/mutable/publish.py 8
2408 from zope.interface import implements
2409 from twisted.internet import defer
2410 from twisted.python import failure
2411-from allmydata.interfaces import IPublishStatus
2412+from allmydata.interfaces import IPublishStatus, SDMF_VERSION, MDMF_VERSION
2413 from allmydata.util import base32, hashutil, mathutil, idlib, log
2414 from allmydata import hashtree, codec
2415 from allmydata.storage.server import si_b2a
2416hunk ./src/allmydata/mutable/publish.py 19
2417      UncoordinatedWriteError, NotEnoughServersError
2418 from allmydata.mutable.servermap import ServerMap
2419 from allmydata.mutable.layout import pack_prefix, pack_share, unpack_header, pack_checkstring, \
2420-     unpack_checkstring, SIGNED_PREFIX
2421+     unpack_checkstring, SIGNED_PREFIX, MDMFSlotWriteProxy
2422+
2423+KiB = 1024
2424+DEFAULT_MAX_SEGMENT_SIZE = 128 * KiB
2425 
2426 class PublishStatus:
2427     implements(IPublishStatus)
2428hunk ./src/allmydata/mutable/publish.py 112
2429         self._status.set_helper(False)
2430         self._status.set_progress(0.0)
2431         self._status.set_active(True)
2432+        # We use this to control how the file is written.
2433+        version = self._node.get_version()
2434+        assert version in (SDMF_VERSION, MDMF_VERSION)
2435+        self._version = version
2436 
2437     def get_status(self):
2438         return self._status
2439hunk ./src/allmydata/mutable/publish.py 134
2440         simultaneous write.
2441         """
2442 
2443-        # 1: generate shares (SDMF: files are small, so we can do it in RAM)
2444-        # 2: perform peer selection, get candidate servers
2445-        #  2a: send queries to n+epsilon servers, to determine current shares
2446-        #  2b: based upon responses, create target map
2447-        # 3: send slot_testv_and_readv_and_writev messages
2448-        # 4: as responses return, update share-dispatch table
2449-        # 4a: may need to run recovery algorithm
2450-        # 5: when enough responses are back, we're done
2451+        # 0. Setup encoding parameters, encoder, and other such things.
2452+        # 1. Encrypt, encode, and publish segments.
2453 
2454         self.log("starting publish, datalen is %s" % len(newdata))
2455         self._status.set_size(len(newdata))
2456hunk ./src/allmydata/mutable/publish.py 187
2457         self.bad_peers = set() # peerids who have errbacked/refused requests
2458 
2459         self.newdata = newdata
2460-        self.salt = os.urandom(16)
2461 
2462hunk ./src/allmydata/mutable/publish.py 188
2463+        # This will set self.segment_size, self.num_segments, and
2464+        # self.fec.
2465         self.setup_encoding_parameters()
2466 
2467         # if we experience any surprises (writes which were rejected because
2468hunk ./src/allmydata/mutable/publish.py 238
2469             self.bad_share_checkstrings[key] = old_checkstring
2470             self.connections[peerid] = self._servermap.connections[peerid]
2471 
2472-        # create the shares. We'll discard these as they are delivered. SDMF:
2473-        # we're allowed to hold everything in memory.
2474+        # Now the process forks -- if this is an SDMF file, we need
2475+        # to write an SDMF file. Otherwise, we need to write an MDMF
2476+        # file.
2477+        if self._version == MDMF_VERSION:
2478+            return self._publish_mdmf()
2479+        else:
2480+            return self._publish_sdmf()
2481+        #return self.done_deferred
2482+
2483+    def _publish_mdmf(self):
2484+        # Next, we find homes for all of the shares that we don't have
2485+        # homes for yet.
2486+        # TODO: Make this part do peer selection.
2487+        self.update_goal()
2488+        self.writers = {}
2489+        # For each (peerid, shnum) in self.goal, we make an
2490+        # MDMFSlotWriteProxy for that peer. We'll use this to write
2491+        # shares to the peer.
2492+        for key in self.goal:
2493+            peerid, shnum = key
2494+            write_enabler = self._node.get_write_enabler(peerid)
2495+            renew_secret = self._node.get_renewal_secret(peerid)
2496+            cancel_secret = self._node.get_cancel_secret(peerid)
2497+            secrets = (write_enabler, renew_secret, cancel_secret)
2498+
2499+            self.writers[shnum] =  MDMFSlotWriteProxy(shnum,
2500+                                                      self.connections[peerid],
2501+                                                      self._storage_index,
2502+                                                      secrets,
2503+                                                      self._new_seqnum,
2504+                                                      self.required_shares,
2505+                                                      self.total_shares,
2506+                                                      self.segment_size,
2507+                                                      len(self.newdata))
2508+            if (peerid, shnum) in self._servermap.servermap:
2509+                old_versionid, old_timestamp = self._servermap.servermap[key]
2510+                (old_seqnum, old_root_hash, old_salt, old_segsize,
2511+                 old_datalength, old_k, old_N, old_prefix,
2512+                 old_offsets_tuple) = old_versionid
2513+                old_checkstring = pack_checkstring(old_seqnum,
2514+                                                   old_root_hash,
2515+                                                   old_salt, 1)
2516+                self.writers[shnum].set_checkstring(old_checkstring)
2517+
2518+        # Now, we start pushing shares.
2519+        self._status.timings["setup"] = time.time() - self._started
2520+        def _start_pushing(res):
2521+            self._started_pushing = time.time()
2522+            return res
2523+
2524+        # First, we encrypt, encode, and publish each segment of the
2525+        # file, generating and pushing the shares as we go.
2526+
2527+        # This will eventually hold the block hash chain for each share
2528+        # that we publish. We define it this way so that empty publishes
2529+        # will still have something to write to the remote slot.
2530+        self.blockhashes = dict([(i, []) for i in xrange(self.total_shares)])
2531+        self.sharehash_leaves = None # eventually [sharehashes]
2532+        self.sharehashes = {} # shnum -> [sharehash leaves necessary to
2533+                              # validate the share]
2534 
2535hunk ./src/allmydata/mutable/publish.py 299
2536+        d = defer.succeed(None)
2537+        self.log("Starting push")
2538+        for i in xrange(self.num_segments - 1):
2539+            d.addCallback(lambda ignored, i=i:
2540+                self.push_segment(i))
2541+            d.addCallback(self._turn_barrier)
2542+        # If we have at least one segment, we will have a tail segment
2543+        if self.num_segments > 0:
2544+            d.addCallback(lambda ignored:
2545+                self.push_tail_segment())
2546+
2547+        d.addCallback(lambda ignored:
2548+            self.push_encprivkey())
2549+        d.addCallback(lambda ignored:
2550+            self.push_blockhashes())
2551+        d.addCallback(lambda ignored:
2552+            self.push_sharehashes())
2553+        d.addCallback(lambda ignored:
2554+            self.push_toplevel_hashes_and_signature())
2555+        d.addCallback(lambda ignored:
2556+            self.finish_publishing())
2557+        return d
2558+
2559+
2560+    def _publish_sdmf(self):
2561         self._status.timings["setup"] = time.time() - self._started
2562hunk ./src/allmydata/mutable/publish.py 325
2563+        self.salt = os.urandom(16)
2564+
2565         d = self._encrypt_and_encode()
2566         d.addCallback(self._generate_shares)
2567         def _start_pushing(res):
2568hunk ./src/allmydata/mutable/publish.py 338
2569 
2570         return self.done_deferred
2571 
2572+
2573     def setup_encoding_parameters(self):
2574hunk ./src/allmydata/mutable/publish.py 340
2575-        segment_size = len(self.newdata)
2576+        if self._version == MDMF_VERSION:
2577+            segment_size = DEFAULT_MAX_SEGMENT_SIZE # 128 KiB by default
2578+        else:
2579+            segment_size = len(self.newdata) # SDMF is only one segment
2580         # this must be a multiple of self.required_shares
2581         segment_size = mathutil.next_multiple(segment_size,
2582                                               self.required_shares)
2583hunk ./src/allmydata/mutable/publish.py 353
2584                                                   segment_size)
2585         else:
2586             self.num_segments = 0
2587-        assert self.num_segments in [0, 1,] # SDMF restrictions
2588+        if self._version == SDMF_VERSION:
2589+            assert self.num_segments in (0, 1) # SDMF
2590+            return
2591+        # calculate the tail segment size.
2592+        self.tail_segment_size = len(self.newdata) % segment_size
2593+
2594+        if self.tail_segment_size == 0:
2595+            # The tail segment is the same size as the other segments.
2596+            self.tail_segment_size = segment_size
2597+
2598+        # We'll make an encoder ahead-of-time for the normal-sized
2599+        # segments (defined as any segment of segment_size bytes).
2600+        # (the part of the code that puts the tail segment will make its
2601+        #  own encoder for that part)
2602+        fec = codec.CRSEncoder()
2603+        fec.set_params(self.segment_size,
2604+                       self.required_shares, self.total_shares)
2605+        self.piece_size = fec.get_block_size()
2606+        self.fec = fec
2607+        # This is not technically part of the encoding parameters, but
2608+        # the fact that we are setting up the encoder and parameters is
2609+        # a good indicator that we will soon need it.
2610+        self.salt_hasher = hashutil.mutable_salt_hasher()
2611+
2612+
2613+    def push_segment(self, segnum):
2614+        started = time.time()
2615+        segsize = self.segment_size
2616+        self.log("Pushing segment %d of %d" % (segnum + 1, self.num_segments))
2617+        data = self.newdata[segsize * segnum:segsize*(segnum + 1)]
2618+        assert len(data) == segsize
2619+
2620+        salt = os.urandom(16)
2621+        self.salt_hasher.update(salt)
2622+
2623+        key = hashutil.ssk_readkey_data_hash(salt, self.readkey)
2624+        enc = AES(key)
2625+        crypttext = enc.process(data)
2626+        assert len(crypttext) == len(data)
2627+
2628+        now = time.time()
2629+        self._status.timings["encrypt"] = now - started
2630+        started = now
2631+
2632+        # now apply FEC
2633+
2634+        self._status.set_status("Encoding")
2635+        crypttext_pieces = [None] * self.required_shares
2636+        piece_size = self.piece_size
2637+        for i in range(len(crypttext_pieces)):
2638+            offset = i * piece_size
2639+            piece = crypttext[offset:offset+piece_size]
2640+            piece = piece + "\x00"*(piece_size - len(piece)) # padding
2641+            crypttext_pieces[i] = piece
2642+            assert len(piece) == piece_size
2643+        d = self.fec.encode(crypttext_pieces)
2644+        def _done_encoding(res):
2645+            elapsed = time.time() - started
2646+            self._status.timings["encode"] = elapsed
2647+            return res
2648+        d.addCallback(_done_encoding)
2649+
2650+        def _push_shares_and_salt(results):
2651+            shares, shareids = results
2652+            dl = []
2653+            for i in xrange(len(shares)):
2654+                sharedata = shares[i]
2655+                shareid = shareids[i]
2656+                block_hash = hashutil.block_hash(sharedata)
2657+                self.blockhashes[shareid].append(block_hash)
2658+
2659+                # find the writer for this share
2660+                d = self.writers[shareid].put_block(sharedata, segnum, salt)
2661+                dl.append(d)
2662+            # TODO: Naturally, we need to check on the results of these.
2663+            return defer.DeferredList(dl)
2664+        d.addCallback(_push_shares_and_salt)
2665+        return d
2666+
2667+
2668+    def push_tail_segment(self):
2669+        # This is essentially the same as push_segment, except that we
2670+        # don't use the cached encoder that we use elsewhere.
2671+        self.log("Pushing tail segment")
2672+        started = time.time()
2673+        segsize = self.segment_size
2674+        data = self.newdata[segsize * (self.num_segments-1):]
2675+        assert len(data) == self.tail_segment_size
2676+        salt = os.urandom(16)
2677+        self.salt_hasher.update(salt)
2678+
2679+        key = hashutil.ssk_readkey_data_hash(salt, self.readkey)
2680+        enc = AES(key)
2681+        crypttext = enc.process(data)
2682+        assert len(crypttext) == len(data)
2683+
2684+        now = time.time()
2685+        self._status.timings['encrypt'] = now - started
2686+        started = now
2687+
2688+        self._status.set_status("Encoding")
2689+        tail_fec = codec.CRSEncoder()
2690+        tail_fec.set_params(self.tail_segment_size,
2691+                            self.required_shares,
2692+                            self.total_shares)
2693+
2694+        crypttext_pieces = [None] * self.required_shares
2695+        piece_size = tail_fec.get_block_size()
2696+        for i in range(len(crypttext_pieces)):
2697+            offset = i * piece_size
2698+            piece = crypttext[offset:offset+piece_size]
2699+            piece = piece + "\x00"*(piece_size - len(piece)) # padding
2700+            crypttext_pieces[i] = piece
2701+            assert len(piece) == piece_size
2702+        d = tail_fec.encode(crypttext_pieces)
2703+        def _push_shares_and_salt(results):
2704+            shares, shareids = results
2705+            dl = []
2706+            for i in xrange(len(shares)):
2707+                sharedata = shares[i]
2708+                shareid = shareids[i]
2709+                block_hash = hashutil.block_hash(sharedata)
2710+                self.blockhashes[shareid].append(block_hash)
2711+                # find the writer for this share
2712+                d = self.writers[shareid].put_block(sharedata,
2713+                                                    self.num_segments - 1,
2714+                                                    salt)
2715+                dl.append(d)
2716+            # TODO: Naturally, we need to check on the results of these.
2717+            return defer.DeferredList(dl)
2718+        d.addCallback(_push_shares_and_salt)
2719+        return d
2720+
2721+
2722+    def push_encprivkey(self):
2723+        started = time.time()
2724+        encprivkey = self._encprivkey
2725+        dl = []
2726+        def _spy_on_writer(results):
2727+            print results
2728+            return results
2729+        for shnum, writer in self.writers.iteritems():
2730+            d = writer.put_encprivkey(encprivkey)
2731+            dl.append(d)
2732+        d = defer.DeferredList(dl)
2733+        return d
2734+
2735+
2736+    def push_blockhashes(self):
2737+        started = time.time()
2738+        dl = []
2739+        def _spy_on_results(results):
2740+            print results
2741+            return results
2742+        self.sharehash_leaves = [None] * len(self.blockhashes)
2743+        for shnum, blockhashes in self.blockhashes.iteritems():
2744+            t = hashtree.HashTree(blockhashes)
2745+            self.blockhashes[shnum] = list(t)
2746+            # set the leaf for future use.
2747+            self.sharehash_leaves[shnum] = t[0]
2748+            d = self.writers[shnum].put_blockhashes(self.blockhashes[shnum])
2749+            dl.append(d)
2750+        d = defer.DeferredList(dl)
2751+        return d
2752+
2753+
2754+    def push_sharehashes(self):
2755+        share_hash_tree = hashtree.HashTree(self.sharehash_leaves)
2756+        share_hash_chain = {}
2757+        ds = []
2758+        def _spy_on_results(results):
2759+            print results
2760+            return results
2761+        for shnum in xrange(len(self.sharehash_leaves)):
2762+            needed_hashes = share_hash_tree.needed_hashes(shnum)
2763+            self.sharehashes[shnum] = dict( [ (i, share_hash_tree[i])
2764+                                             for i in needed_hashes ] )
2765+            d = self.writers[shnum].put_sharehashes(self.sharehashes[shnum])
2766+            ds.append(d)
2767+        self.root_hash = share_hash_tree[0]
2768+        d = defer.DeferredList(ds)
2769+        return d
2770+
2771+
2772+    def push_toplevel_hashes_and_signature(self):
2773+        # We need to do three things here:
2774+        #   - Push the root hash and salt hash
2775+        #   - Get the checkstring of the resulting layout; sign that.
2776+        #   - Push the signature
2777+        salt_hash = self.salt_hasher.digest()
2778+        ds = []
2779+        def _spy_on_results(results):
2780+            print results
2781+            return results
2782+        for shnum in xrange(self.total_shares):
2783+            d = self.writers[shnum].put_root_and_salt_hashes(self.root_hash,
2784+                                                             salt_hash)
2785+            ds.append(d)
2786+        d = defer.DeferredList(ds)
2787+        def _make_and_place_signature(ignored):
2788+            signable = self.writers[0].get_signable()
2789+            self.signature = self._privkey.sign(signable)
2790+
2791+            ds = []
2792+            for (shnum, writer) in self.writers.iteritems():
2793+                d = writer.put_signature(self.signature)
2794+                ds.append(d)
2795+            return defer.DeferredList(ds)
2796+        d.addCallback(_make_and_place_signature)
2797+        return d
2798+
2799+
2800+    def finish_publishing(self):
2801+        # We're almost done -- we just need to put the verification key
2802+        # and the offsets
2803+        ds = []
2804+        verification_key = self._pubkey.serialize()
2805+
2806+        def _spy_on_results(results):
2807+            print results
2808+            return results
2809+        for (shnum, writer) in self.writers.iteritems():
2810+            d = writer.put_verification_key(verification_key)
2811+            d.addCallback(lambda ignored, writer=writer:
2812+                writer.finish_publishing())
2813+            ds.append(d)
2814+        return defer.DeferredList(ds)
2815+
2816+
2817+    def _turn_barrier(self, res):
2818+        # putting this method in a Deferred chain imposes a guaranteed
2819+        # reactor turn between the pre- and post- portions of that chain.
2820+        # This can be useful to limit memory consumption: since Deferreds do
2821+        # not do tail recursion, code which uses defer.succeed(result) for
2822+        # consistency will cause objects to live for longer than you might
2823+        # normally expect.
2824+        return fireEventually(res)
2825+
2826 
2827     def _fatal_error(self, f):
2828         self.log("error during loop", failure=f, level=log.UNUSUAL)
2829hunk ./src/allmydata/mutable/publish.py 727
2830             self.log_goal(self.goal, "after update: ")
2831 
2832 
2833-
2834     def _encrypt_and_encode(self):
2835         # this returns a Deferred that fires with a list of (sharedata,
2836         # sharenum) tuples. TODO: cache the ciphertext, only produce the
2837hunk ./src/allmydata/mutable/publish.py 768
2838         d.addCallback(_done_encoding)
2839         return d
2840 
2841+
2842     def _generate_shares(self, shares_and_shareids):
2843         # this sets self.shares and self.root_hash
2844         self.log("_generate_shares")
2845hunk ./src/allmydata/mutable/publish.py 1156
2846             self._status.set_progress(1.0)
2847         eventually(self.done_deferred.callback, res)
2848 
2849-
2850}
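
To make the encoding-parameter setup above concrete, here is a sketch (not part of the patch) of the segmentation arithmetic that setup_encoding_parameters performs for MDMF: the segment size is rounded up to a multiple of k so FEC can split each segment evenly, and the tail segment holds whatever remains.

    import math

    KiB = 1024
    DEFAULT_MAX_SEGMENT_SIZE = 128 * KiB

    def mdmf_segmentation(datalen, k=3):
        # next_multiple: round the segment size up to a multiple of k.
        segment_size = int(math.ceil(DEFAULT_MAX_SEGMENT_SIZE / float(k))) * k
        if datalen:
            num_segments = int(math.ceil(datalen / float(segment_size)))
        else:
            num_segments = 0
        tail = datalen % segment_size
        if tail == 0:
            tail = segment_size  # the tail segment is full-sized
        return segment_size, num_segments, tail

    # 300 KiB of data: 128 KiB rounds up to 131073 (a multiple of 3),
    # giving two full segments and a 45054-byte tail.
    assert mdmf_segmentation(300 * KiB) == (131073, 3, 45054)
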
2851[Assorted servermap fixes
2852Kevan Carstensen <kevan@isnotajoke.com>**20100614212913
2853 Ignore-this: ac56de7aac2f3466aed9dd7f917c8aa
2854 
2855 - Check for failure when setting the private key
2856 - Check for failure when setting other things
2857] {
2858hunk ./src/allmydata/mutable/servermap.py 624
2859             # the remote server. Specifically, when there was an
2860             # error unpacking the remote data from the server, or
2861             # when the signature is invalid.
2862-            print e
2863-            f = failure.Failure()
2864+            f = failure.Failure(e)
2865             self.log(format="bad share: %(f_value)s", f_value=str(f.value),
2866                      failure=f, parent=lp, level=log.WEIRD, umid="h5llHg")
2867             # Notify the server that its share is corrupt.
2868hunk ./src/allmydata/mutable/servermap.py 658
2869             if not self._node.get_pubkey():
2870                 # fetch and set the public key.
2871                 d = reader.get_verification_key()
2872-                d.addCallback(self._try_to_set_pubkey)
2873+                d.addCallback(lambda results, shnum=shnum, peerid=peerid:
2874+                    self._try_to_set_pubkey(results, peerid, shnum, lp))
2875+                d.addErrback(lambda error, shnum=shnum, peerid=peerid:
2876+                    _corrupt(error, shnum, data))
2877             else:
2878                 # we already have the public key.
2879                 d = defer.succeed(None)
2880hunk ./src/allmydata/mutable/servermap.py 674
2881             #   bytes of the share on the storage server, so we
2882             #   shouldn't need to fetch anything at this step.
2883             d2 = reader.get_verinfo()
2884+            d2.addErrback(lambda error, shnum=shnum, peerid=peerid:
2885+                _corrupt(error, shnum, data))
2886             # - Next, we need the signature. For an SDMF share, it is
2887             #   likely that we fetched this when doing our initial fetch
2888             #   to get the version information. In MDMF, this lives at
2889hunk ./src/allmydata/mutable/servermap.py 682
2890             #   the end of the share, so unless the file is quite small,
2891             #   we'll need to do a remote fetch to get it.
2892             d3 = reader.get_signature()
2893+            d3.addErrback(lambda error, shnum=shnum, peerid=peerid:
2894+                _corrupt(error, shnum, data))
2895             #  Once we have all three of these responses, we can move on
2896             #  to validating the signature
2897 
2898hunk ./src/allmydata/mutable/servermap.py 689
2899             # Does the node already have a privkey? If not, we'll try to
2900             # fetch it here.
2901-            if not self._node.get_privkey():
2902+            if self._need_privkey:
2903                 d4 = reader.get_encprivkey()
2904                 d4.addCallback(lambda results, shnum=shnum, peerid=peerid:
2905                     self._try_to_validate_privkey(results, peerid, shnum, lp))
2906hunk ./src/allmydata/mutable/servermap.py 693
2907+                d4.addErrback(lambda error, shnum=shnum, peerid=peerid:
2908+                    self._privkey_query_failed(error, shnum, data, lp))
2909             else:
2910                 d4 = defer.succeed(None)
2911 
2912hunk ./src/allmydata/mutable/servermap.py 726
2913         return dl
2914         self.log("_got_results done", parent=lp, level=log.NOISY)
2915 
2916-    def _try_to_set_pubkey(self, pubkey_s):
2917+    def _try_to_set_pubkey(self, pubkey_s, peerid, shnum, lp):
2918         if self._node.get_pubkey():
2919             return # don't go through this again if we don't have to
2920         fingerprint = hashutil.ssk_pubkey_fingerprint_hash(pubkey_s)
2921hunk ./src/allmydata/mutable/servermap.py 782
2922         if verinfo not in self._valid_versions:
2923             # This is a new version tuple, and we need to validate it
2924             # against the public key before keeping track of it.
2925+            assert self._node.get_pubkey()
2926             valid = self._node.get_pubkey().verify(prefix, signature[1])
2927             if not valid:
2928                 raise CorruptShareError(peerid, shnum,
2929}
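
The errbacks added above are attached inside a loop, so each lambda pins the current shnum and peerid via default arguments. A small sketch (hypothetical names, not from the patch) of why that idiom matters with Deferreds:

    from twisted.internet import defer

    def attach_errbacks(readers, handle_error):
        # Without shnum=shnum, every lambda would close over the loop
        # variable itself and see only its final value by the time an
        # errback actually fires.
        ds = []
        for shnum, reader in readers.items():
            d = reader.get_verinfo()
            d.addErrback(lambda f, shnum=shnum: handle_error(f, shnum))
            ds.append(d)
        return defer.DeferredList(ds)
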
2930[Alter MDMF proxy tests to reflect the new form of caching
2931Kevan Carstensen <kevan@isnotajoke.com>**20100614213459
2932 Ignore-this: 3e84dbd1b6ea103be36e0e98babe79d4
2933] {
2934hunk ./src/allmydata/test/test_storage.py 23
2935 from allmydata.immutable.layout import WriteBucketProxy, WriteBucketProxy_v2, \
2936      ReadBucketProxy
2937 from allmydata.mutable.layout import MDMFSlotWriteProxy, MDMFSlotReadProxy, \
2938-                                     LayoutInvalid
2939+                                     LayoutInvalid, MDMFSIGNABLEHEADER, \
2940+                                     SIGNED_PREFIX
2941 from allmydata.interfaces import BadWriteEnablerError, MDMF_VERSION, \
2942                                  SDMF_VERSION
2943 from allmydata.test.common import LoggingServiceParent, ShouldFailMixin
2944hunk ./src/allmydata/test/test_storage.py 105
2945 
2946 class RemoteBucket:
2947 
2948+    def __init__(self):
2949+        self.read_count = 0
2950+        self.write_count = 0
2951+
2952     def callRemote(self, methname, *args, **kwargs):
2953         def _call():
2954             meth = getattr(self.target, "remote_" + methname)
2955hunk ./src/allmydata/test/test_storage.py 113
2956             return meth(*args, **kwargs)
2957+
2958+        if methname == "slot_readv":
2959+            self.read_count += 1
2960+        if methname == "slot_writev":
2961+            self.write_count += 1
2962+
2963         return defer.maybeDeferred(_call)
2964 
2965hunk ./src/allmydata/test/test_storage.py 121
2966+
2967 class BucketProxy(unittest.TestCase):
2968     def make_bucket(self, name, size):
2969         basedir = os.path.join("storage", "BucketProxy", name)
2970hunk ./src/allmydata/test/test_storage.py 2605
2971             mr.get_block_and_salt(0))
2972         def _check_block_and_salt(results):
2973             block, salt = results
2974+            # Our original file is 36 bytes long, so each share is 12
2975+            # bytes in size. The share is composed entirely of the
2976+            # letter 'a'. self.block contains two 'a's, so 6 * self.block
2977+            # is what we are looking for.
2978             self.failUnlessEqual(block, self.block * 6)
2979             self.failUnlessEqual(salt, self.salt)
2980         d.addCallback(_check_block_and_salt)
2981hunk ./src/allmydata/test/test_storage.py 2687
2982         # finding out which shares are on the remote peer so that it
2983         # doesn't waste round trips.
2984         mdmf_data = self.build_test_mdmf_share()
2985-        # We're telling it enough to figure out whether it is SDMF or
2986-        # MDMF.
2987-        mr = MDMFSlotReadProxy(self.rref, "si1", 0, mdmf_data[:1])
2988-        self.failUnlessEqual(mr._version_number, MDMF_VERSION)
2989-
2990-        # Now we're telling it more, but still not enough to flesh out
2991-        # the rest of the encoding parameter, so none of them should be
2992-        # set.
2993-        mr = MDMFSlotReadProxy(self.rref, "si1", 0, mdmf_data[:10])
2994-        self.failUnlessEqual(mr._version_number, MDMF_VERSION)
2995-        self.failIf(mr._sequence_number)
2996-
2997-        # This should be enough to flesh out the encoding parameters of
2998-        # an MDMF file.
2999-        mr = MDMFSlotReadProxy(self.rref, "si1", 0, mdmf_data[:91])
3000-        self.failUnlessEqual(mr._version_number, MDMF_VERSION)
3001-        self.failUnlessEqual(mr._root_hash, self.root_hash),
3002-        self.failUnlessEqual(mr._sequence_number, 0)
3003-        self.failUnlessEqual(mr._required_shares, 3)
3004-        self.failUnlessEqual(mr._total_shares, 10)
3005-
3006-        # This should be enough to fill in the encoding parameters and
3007-        # a little more, but not enough to complete the offset table.
3008-        mr = MDMFSlotReadProxy(self.rref, "si1", 0, mdmf_data[:100])
3009-        self.failUnlessEqual(mr._version_number, MDMF_VERSION)
3010-        self.failUnlessEqual(mr._root_hash, self.root_hash)
3011-        self.failUnlessEqual(mr._sequence_number, 0)
3012-        self.failUnlessEqual(mr._required_shares, 3)
3013-        self.failUnlessEqual(mr._total_shares, 10)
3014-        self.failIf(mr._offsets)
3015+        self.write_test_share_to_server("si1")
3016+        def _make_mr(ignored, length):
3017+            mr = MDMFSlotReadProxy(self.rref, "si1", 0, mdmf_data[:length])
3018+            return mr
3019 
3020hunk ./src/allmydata/test/test_storage.py 2692
3021+        d = defer.succeed(None)
3022         # This should be enough to fill in both the encoding parameters
3023hunk ./src/allmydata/test/test_storage.py 2694
3024-        # and the table of offsets
3025-        mr = MDMFSlotReadProxy(self.rref, "si1", 0, mdmf_data[:143])
3026-        self.failUnlessEqual(mr._version_number, MDMF_VERSION)
3027-        self.failUnlessEqual(mr._root_hash, self.root_hash)
3028-        self.failUnlessEqual(mr._sequence_number, 0)
3029-        self.failUnlessEqual(mr._required_shares, 3)
3030-        self.failUnlessEqual(mr._total_shares, 10)
3031-        self.failUnless(mr._offsets)
3032+        # and the table of offsets, which will complete the version
3033+        # information tuple.
3034+        d.addCallback(_make_mr, 143)
3035+        d.addCallback(lambda mr:
3036+            mr.get_verinfo())
3037+        def _check_verinfo(verinfo):
3038+            self.failUnless(verinfo)
3039+            self.failUnlessEqual(len(verinfo), 9)
3040+            (seqnum,
3041+             root_hash,
3042+             salt_hash,
3043+             segsize,
3044+             datalen,
3045+             k,
3046+             n,
3047+             prefix,
3048+             offsets) = verinfo
3049+            self.failUnlessEqual(seqnum, 0)
3050+            self.failUnlessEqual(root_hash, self.root_hash)
3051+            self.failUnlessEqual(salt_hash, self.salt_hash)
3052+            self.failUnlessEqual(segsize, 6)
3053+            self.failUnlessEqual(datalen, 36)
3054+            self.failUnlessEqual(k, 3)
3055+            self.failUnlessEqual(n, 10)
3056+            expected_prefix = struct.pack(MDMFSIGNABLEHEADER,
3057+                                          1,
3058+                                          seqnum,
3059+                                          root_hash,
3060+                                          salt_hash,
3061+                                          k,
3062+                                          n,
3063+                                          segsize,
3064+                                          datalen)
3065+            self.failUnlessEqual(expected_prefix, prefix)
3066+            self.failUnlessEqual(self.rref.read_count, 0)
3067+        d.addCallback(_check_verinfo)
3068+        # This is not enough data to read a block and a share, so the
3069+        # wrapper should attempt to read this from the remote server.
3070+        d.addCallback(_make_mr, 143)
3071+        d.addCallback(lambda mr:
3072+            mr.get_block_and_salt(0))
3073+        def _check_block_and_salt((block, salt)):
3074+            self.failUnlessEqual(block, self.block)
3075+            self.failUnlessEqual(salt, self.salt)
3076+            self.failUnlessEqual(self.rref.read_count, 1)
3077+        # The file that we're playing with has 6 segments. Then there
3078+        # are 6 * 16 = 96 bytes of salts before we can write shares.
3079+        # Each block has two bytes, so 143 + 96 + 2 = 241 bytes should
3080+        # be enough to read one block.
3081+        d.addCallback(_make_mr, 241)
3082+        d.addCallback(lambda mr:
3083+            mr.get_block_and_salt(0))
3084+        d.addCallback(_check_block_and_salt)
3085+        return d
3086 
3087 
3088     def test_read_with_prefetched_sdmf_data(self):
3089hunk ./src/allmydata/test/test_storage.py 2752
3090         sdmf_data = self.build_test_sdmf_share()
3091-        # Feed it just enough data to check the share type
3092-        mr = MDMFSlotReadProxy(self.rref, "si1", 0, sdmf_data[:1])
3093-        self.failUnlessEqual(mr._version_number, SDMF_VERSION)
3094-        self.failIf(mr._sequence_number)
3095-
3096-        # Now feed it more data, but not enough data to populate the
3097-        # encoding parameters. The results should be exactly the same as
3098-        # before.
3099-        mr = MDMFSlotReadProxy(self.rref, "si1", 0, sdmf_data[:10])
3100-        self.failUnlessEqual(mr._version_number, SDMF_VERSION)
3101-        self.failIf(mr._sequence_number)
3102-
3103-        # Now feed it enough data to populate the encoding parameters
3104-        mr = MDMFSlotReadProxy(self.rref, "si1", 0, sdmf_data[:75])
3105-        self.failUnlessEqual(mr._version_number, SDMF_VERSION)
3106-        self.failUnlessEqual(mr._sequence_number, 0)
3107-        self.failUnlessEqual(mr._root_hash, self.root_hash)
3108-        self.failUnlessEqual(mr._required_shares, 3)
3109-        self.failUnlessEqual(mr._total_shares, 10)
3110-
3111-        # Now feed it enough data to populate the encoding parameters
3112-        # and then some, but not enough to fill in the offset table.
3113-        mr = MDMFSlotReadProxy(self.rref, "si1", 0, sdmf_data[:100])
3114-        self.failUnlessEqual(mr._version_number, SDMF_VERSION)
3115-        self.failUnlessEqual(mr._sequence_number, 0)
3116-        self.failUnlessEqual(mr._root_hash, self.root_hash)
3117-        self.failUnlessEqual(mr._required_shares, 3)
3118-        self.failUnlessEqual(mr._total_shares, 10)
3119-        self.failIf(mr._offsets)
3120-
3121-        # Now fill in the offset table.
3122-        mr = MDMFSlotReadProxy(self.rref, "si1", 0, sdmf_data[:107])
3123-        self.failUnlessEqual(mr._version_number, SDMF_VERSION)
3124-        self.failUnlessEqual(mr._sequence_number, 0)
3125-        self.failUnlessEqual(mr._root_hash, self.root_hash)
3126-        self.failUnlessEqual(mr._required_shares, 3)
3127-        self.failUnlessEqual(mr._total_shares, 10)
3128-        self.failUnless(mr._offsets)
3129+        self.write_sdmf_share_to_server("si1")
3130+        def _make_mr(ignored, length):
3131+            mr = MDMFSlotReadProxy(self.rref, "si1", 0, sdmf_data[:length])
3132+            return mr
3133 
3134hunk ./src/allmydata/test/test_storage.py 2757
3135+        d = defer.succeed(None)
3136+        # This should be enough to get us the encoding parameters,
3137+        # offset table, and everything else we need to build a verinfo
3138+        # string.
3139+        d.addCallback(_make_mr, 107)
3140+        d.addCallback(lambda mr:
3141+            mr.get_verinfo())
3142+        def _check_verinfo(verinfo):
3143+            self.failUnless(verinfo)
3144+            self.failUnlessEqual(len(verinfo), 9)
3145+            (seqnum,
3146+             root_hash,
3147+             salt,
3148+             segsize,
3149+             datalen,
3150+             k,
3151+             n,
3152+             prefix,
3153+             offsets) = verinfo
3154+            self.failUnlessEqual(seqnum, 0)
3155+            self.failUnlessEqual(root_hash, self.root_hash)
3156+            self.failUnlessEqual(salt, self.salt)
3157+            self.failUnlessEqual(segsize, 36)
3158+            self.failUnlessEqual(datalen, 36)
3159+            self.failUnlessEqual(k, 3)
3160+            self.failUnlessEqual(n, 10)
3161+            expected_prefix = struct.pack(SIGNED_PREFIX,
3162+                                          0,
3163+                                          seqnum,
3164+                                          root_hash,
3165+                                          salt,
3166+                                          k,
3167+                                          n,
3168+                                          segsize,
3169+                                          datalen)
3170+            self.failUnlessEqual(expected_prefix, prefix)
3171+            self.failUnlessEqual(self.rref.read_count, 0)
3172+        d.addCallback(_check_verinfo)
3173+        # This shouldn't be enough to read any share data.
3174+        d.addCallback(_make_mr, 107)
3175+        d.addCallback(lambda mr:
3176+            mr.get_block_and_salt(0))
3177+        def _check_block_and_salt((block, salt)):
3178+            self.failUnlessEqual(block, self.block * 6)
3179+            self.failUnlessEqual(salt, self.salt)
3180+            # TODO: Fix the read routine so that it reads only the data
3181+            #       that it has cached if it can't read all of it.
3182+            self.failUnlessEqual(self.rref.read_count, 2)
3183 
3184hunk ./src/allmydata/test/test_storage.py 2806
3185-    def test_read_with_prefetched_bogus_data(self):
3186-        bogus_data = "kjkasdlkjsjkdjksajdjsadjsajdskaj"
3187-        # This shouldn't do anything.
3188-        mr = MDMFSlotReadProxy(self.rref, "si1", 0, bogus_data)
3189-        self.failIf(mr._version_number)
3190+        # This should be enough to read share data.
3191+        d.addCallback(_make_mr, self.offsets['share_data'])
3192+        d.addCallback(lambda mr:
3193+            mr.get_block_and_salt(0))
3194+        d.addCallback(_check_block_and_salt)
3195+        return d
3196 
3197 
3198     def test_read_with_empty_mdmf_file(self):
3199}
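
The caching assertions above lean on the read_count instrumentation added to RemoteBucket: if the proxy can answer from its prefetched bytes, no slot_readv ever reaches the fake server. A stripped-down sketch of that counting wrapper (not the patch's exact class):

    class CountingRef:
        def __init__(self, target):
            self.target = target
            self.read_count = 0

        def callRemote(self, methname, *args, **kwargs):
            # Tally remote reads so a test can assert read_count == 0
            # when the proxy answered entirely from cached data.
            if methname == "slot_readv":
                self.read_count += 1
            return getattr(self.target, "remote_" + methname)(*args, **kwargs)
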
3200[Add tests and support functions for servermap tests
3201Kevan Carstensen <kevan@isnotajoke.com>**20100614213721
3202 Ignore-this: 583734d2f728fc80637b5c0c0f4c0fc
3203] {
3204hunk ./src/allmydata/test/test_mutable.py 103
3205         d = fireEventually()
3206         d.addCallback(lambda res: _call())
3207         return d
3208+
3209     def callRemoteOnly(self, methname, *args, **kwargs):
3210         d = self.callRemote(methname, *args, **kwargs)
3211         d.addBoth(lambda ignore: None)
3212hunk ./src/allmydata/test/test_mutable.py 152
3213             chr(ord(original[byte_offset]) ^ 0x01) +
3214             original[byte_offset+1:])
3215 
3216+def add_two(original, byte_offset):
3217+    # It isn't enough to simply flip the bit for the version number,
3218+    # because 1 is a valid version number. So we XOR with 0x02, which adds two to either valid verbyte.
3219+    return (original[:byte_offset] +
3220+            chr(ord(original[byte_offset]) ^ 0x02) +
3221+            original[byte_offset+1:])
3222+
3223 def corrupt(res, s, offset, shnums_to_corrupt=None, offset_offset=0):
3224     # if shnums_to_corrupt is None, corrupt all shares. Otherwise it is a
3225     # list of shnums to corrupt.
3226hunk ./src/allmydata/test/test_mutable.py 188
3227                 real_offset = offset1
3228             real_offset = int(real_offset) + offset2 + offset_offset
3229             assert isinstance(real_offset, int), offset
3230-            shares[shnum] = flip_bit(data, real_offset)
3231+            if offset1 == 0: # verbyte
3232+                f = add_two
3233+            else:
3234+                f = flip_bit
3235+            shares[shnum] = f(data, real_offset)
3236     return res
3237 
3238 def make_storagebroker(s=None, num_peers=10):
3239hunk ./src/allmydata/test/test_mutable.py 625
3240         d.addCallback(_created)
3241         return d
3242 
3243-    def publish_multiple(self):
3244+    def publish_mdmf(self):
3245+        # like publish_one, except that the result is guaranteed to be
3246+        # an MDMF file.
3247+        # self.CONTENTS should have more than one segment.
3248+        self.CONTENTS = "This is an MDMF file" * 100000
3249+        self._storage = FakeStorage()
3250+        self._nodemaker = make_nodemaker(self._storage)
3251+        self._storage_broker = self._nodemaker.storage_broker
3252+        d = self._nodemaker.create_mutable_file(self.CONTENTS, version=1)
3253+        def _created(node):
3254+            self._fn = node
3255+            self._fn2 = self._nodemaker.create_from_cap(node.get_uri())
3256+        d.addCallback(_created)
3257+        return d
3258+
3259+
3260+    def publish_sdmf(self):
3261+        # like publish_one, except that the result is guaranteed to be
3262+        # an SDMF file
3263+        self.CONTENTS = "This is an SDMF file" * 1000
3264+        self._storage = FakeStorage()
3265+        self._nodemaker = make_nodemaker(self._storage)
3266+        self._storage_broker = self._nodemaker.storage_broker
3267+        d = self._nodemaker.create_mutable_file(self.CONTENTS, version=0)
3268+        def _created(node):
3269+            self._fn = node
3270+            self._fn2 = self._nodemaker.create_from_cap(node.get_uri())
3271+        d.addCallback(_created)
3272+        return d
3273+
3274+
3275+    def publish_multiple(self, version=0):
3276         self.CONTENTS = ["Contents 0",
3277                          "Contents 1",
3278                          "Contents 2",
3279hunk ./src/allmydata/test/test_mutable.py 665
3280         self._copied_shares = {}
3281         self._storage = FakeStorage()
3282         self._nodemaker = make_nodemaker(self._storage)
3283-        d = self._nodemaker.create_mutable_file(self.CONTENTS[0]) # seqnum=1
3284+        d = self._nodemaker.create_mutable_file(self.CONTENTS[0], version=version) # seqnum=1
3285         def _created(node):
3286             self._fn = node
3287             # now create multiple versions of the same file, and accumulate
3288hunk ./src/allmydata/test/test_mutable.py 689
3289         d.addCallback(_created)
3290         return d
3291 
3292+
3293     def _copy_shares(self, ignored, index):
3294         shares = self._storage._peers
3295         # we need a deep copy
3296hunk ./src/allmydata/test/test_mutable.py 842
3297         self._storage._peers = {} # delete all shares
3298         ms = self.make_servermap
3299         d = defer.succeed(None)
3300-
3301+#
3302         d.addCallback(lambda res: ms(mode=MODE_CHECK))
3303         d.addCallback(lambda sm: self.failUnlessNoneRecoverable(sm))
3304 
3305hunk ./src/allmydata/test/test_mutable.py 894
3306         return d
3307 
3308 
3309+    def test_servermapupdater_finds_mdmf_files(self):
3310+        # setUp already published an MDMF file for us. We just need to
3311+        # make sure that when we run the ServermapUpdater, the file is
3312+        # reported to have one recoverable version.
3313+        d = defer.succeed(None)
3314+        d.addCallback(lambda ignored:
3315+            self.publish_mdmf())
3316+        d.addCallback(lambda ignored:
3317+            self.make_servermap(mode=MODE_CHECK))
3318+        # Calling make_servermap also updates the servermap in the mode
3319+        # that we specify, so we just need to see what it says.
3320+        def _check_servermap(sm):
3321+            self.failUnlessEqual(len(sm.recoverable_versions()), 1)
3322+        d.addCallback(_check_servermap)
3323+        # Now, we upload more versions
3324+        d.addCallback(lambda ignored:
3325+            self.publish_multiple(version=1))
3326+        d.addCallback(lambda ignored:
3327+            self.make_servermap(mode=MODE_CHECK))
3328+        def _check_servermap_multiple(sm):
3329+            v = sm.recoverable_versions()
3330+            i = sm.unrecoverable_versions()
3331+        d.addCallback(_check_servermap_multiple)
3332+        return d
3333+    test_servermapupdater_finds_mdmf_files.todo = ("I don't know how to "
3334+                                                   "write this yet")
3335+
3336+
3337+    def test_servermapupdater_finds_sdmf_files(self):
3338+        d = defer.succeed(None)
3339+        d.addCallback(lambda ignored:
3340+            self.publish_sdmf())
3341+        d.addCallback(lambda ignored:
3342+            self.make_servermap(mode=MODE_CHECK))
3343+        d.addCallback(lambda servermap:
3344+            self.failUnlessEqual(len(servermap.recoverable_versions()), 1))
3345+        return d
3346+
3347 
3348 class Roundtrip(unittest.TestCase, testutil.ShouldFailMixin, PublishMixin):
3349     def setUp(self):
3350hunk ./src/allmydata/test/test_mutable.py 1084
3351         return d
3352 
3353     def test_corrupt_all_verbyte(self):
3354-        # when the version byte is not 0, we hit an UnknownVersionError error
3355-        # in unpack_share().
3356+        # when the version byte is not 0 or 1, we hit an UnknownVersionError
3357+        # error in unpack_share().
3358         d = self._test_corrupt_all(0, "UnknownVersionError")
3359         def _check_servermap(servermap):
3360             # and the dump should mention the problems
3361hunk ./src/allmydata/test/test_mutable.py 1091
3362             s = StringIO()
3363             dump = servermap.dump(s).getvalue()
3364-            self.failUnless("10 PROBLEMS" in dump, dump)
3365+            self.failUnless("30 PROBLEMS" in dump, dump)
3366         d.addCallback(_check_servermap)
3367         return d
3368 
3369hunk ./src/allmydata/test/test_mutable.py 2153
3370         self.basedir = "mutable/Problems/test_privkey_query_missing"
3371         self.set_up_grid(num_servers=20)
3372         nm = self.g.clients[0].nodemaker
3373-        LARGE = "These are Larger contents" * 2000 # about 50KB
3374+        LARGE = "These are Larger contents" * 2000 # about 50KiB
3375         nm._node_cache = DevNullDictionary() # disable the nodecache
3376 
3377         d = nm.create_mutable_file(LARGE)
3378}
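
The add_two helper above exists because the verbyte corruption test would otherwise be a no-op half the time. A quick demonstration (not from the patch) of why flipping the low bit is not enough once both 0 and 1 are valid version numbers:

    # Flipping bit 0 just swaps the two valid verbytes:
    assert 0 ^ 0x01 == 1   # SDMF becomes valid MDMF
    assert 1 ^ 0x01 == 0   # MDMF becomes valid SDMF
    # XORing with 0x02 adds two to either valid verbyte, so the result
    # is always out of range and triggers UnknownVersionError:
    assert 0 ^ 0x02 == 2
    assert 1 ^ 0x02 == 3
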
3379[Add objects for MDMF shares in support of a new segmented uploader
3380Kevan Carstensen <kevan@isnotajoke.com>**20100614222623
3381 Ignore-this: 2b1c7f0b6db2dc7147104ed23805a492
3382 
3383 This patch adds the following:
3384     - MDMFSlotWriteProxy, which can write MDMF shares to the storage
3385       server in the new format.
3386     - MDMFSlotReadProxy, which can read both SDMF and MDMF shares from
3387       the storage server.
3388] {
3389hunk ./src/allmydata/interfaces.py 7
3390      ChoiceOf, IntegerConstraint, Any, RemoteInterface, Referenceable
3391 
3392 HASH_SIZE=32
3393+SALT_SIZE=16
3394 
3395 SDMF_VERSION=0
3396 MDMF_VERSION=1
3397hunk ./src/allmydata/mutable/layout.py 4
3398 
3399 import struct
3400 from allmydata.mutable.common import NeedMoreDataError, UnknownVersionError
3401+from allmydata.interfaces import HASH_SIZE, SALT_SIZE, SDMF_VERSION, \
3402+                                 MDMF_VERSION
3403+from allmydata.util import mathutil
3404+from twisted.python import failure
3405+from twisted.internet import defer
3406+
3407+
3408+# These strings describe the format of the packed structs they help process
3409+# Here's what they mean:
3410+#
3411+#  PREFIX:
3412+#    >: Big-endian byte order; the most significant byte is first (leftmost).
3413+#    B: The version information; an 8 bit version identifier. Stored as
3414+#       an unsigned char. This is currently 0 (SDMF); our modifications
3415+#       will turn it into 1 (MDMF).
3416+#    Q: The sequence number; this is sort of like a revision history for
3417+#       mutable files; they start at 1 and increase as they are changed after
3418+#       being uploaded. Stored as an unsigned long long, which is 8 bytes in
3419+#       length.
3420+#  32s: The root hash of the share hash tree. We use sha-256d, so we use 32
3421+#       characters = 32 bytes to store the value.
3422+#  16s: The salt for the readkey. This is a 16-byte random value, stored as
3423+#       16 characters.
3424+#
3425+#  SIGNED_PREFIX additions, things that are covered by the signature:
3426+#    B: The "k" encoding parameter. We store this as an 8-bit character,
3427+#       which is convenient because our erasure coding scheme cannot
3428+#       encode if you ask for more than 255 pieces.
3429+#    B: The "N" encoding parameter. Stored as an 8-bit character for the
3430+#       same reasons as above.
3431+#    Q: The segment size of the uploaded file. This will essentially be the
3432+#       length of the file in SDMF. An unsigned long long, so we can store
3433+#       files of quite large size.
3434+#    Q: The data length of the uploaded file. Modulo padding, this will be
3435+#       the same as the segment size field. Like the segment size field, it is
3436+#       an unsigned long long and can be quite large.
3437+#
3438+#   HEADER additions:
3439+#     L: The offset of the signature. An unsigned long.
3440+#     L: The offset of the share hash chain. An unsigned long.
3441+#     L: The offset of the block hash tree. An unsigned long.
3442+#     L: The offset of the share data. An unsigned long.
3443+#     Q: The offset of the encrypted private key. An unsigned long long, to
3444+#        account for the possibility of a lot of share data.
3445+#     Q: The offset of the EOF. An unsigned long long, to account for the
3446+#        possibility of a lot of share data.
3447+#
3448+#  After all of these, we have the following:
3449+#    - The verification key: Occupies the space between the end of the header
3450+#      and the start of the signature (i.e. data[HEADER_LENGTH:o['signature']]).
3451+#    - The signature, which goes from the signature offset to the share hash
3452+#      chain offset.
3453+#    - The share hash chain, which goes from the share hash chain offset to
3454+#      the block hash tree offset.
3455+#    - The share data, which goes from the share data offset to the encrypted
3456+#      private key offset.
3457+#    - The encrypted private key, which goes until the end of the file.
3458+#
3459+#  The block hash tree in this encoding has only one hash, so the offset of
3460+#  the share data will be 32 bytes more than the offset of the block hash tree.
3461+#  Given this, we may need to check to see how many bytes a reasonably sized
3462+#  block hash tree will take up.
3463 
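
For orientation, the field list above fixes the sizes of the SDMF prefix and header. A quick sketch (not part of the patch) of the byte accounting, which is also what lets MDMFSlotReadProxy answer get_verinfo() from 107 prefetched bytes in the tests:

    import struct

    assert struct.calcsize(">BQ32s16s") == 57               # PREFIX
    assert struct.calcsize(">BQ32s16s BBQQ") == 75          # SIGNED_PREFIX
    # SIGNED_PREFIX plus the four L and two Q offsets described above:
    assert struct.calcsize(">BQ32s16s BBQQ LLLLQQ") == 107  # full SDMF header
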
3464 PREFIX = ">BQ32s16s" # each version has a different prefix
3465 SIGNED_PREFIX = ">BQ32s16s BBQQ" # this is covered by the signature
3466hunk ./src/allmydata/mutable/layout.py 191
3467     return (share_hash_chain, block_hash_tree, share_data)
3468 
3469 
3470-def pack_checkstring(seqnum, root_hash, IV):
3471+def pack_checkstring(seqnum, root_hash, IV, version=0):
3472     return struct.pack(PREFIX,
3473hunk ./src/allmydata/mutable/layout.py 193
3474-                       0, # version,
3475+                       version,
3476                        seqnum,
3477                        root_hash,
3478                        IV)
3479hunk ./src/allmydata/mutable/layout.py 266
3480                            encprivkey])
3481     return final_share
3482 
3483+def pack_prefix(seqnum, root_hash, IV,
3484+                required_shares, total_shares,
3485+                segment_size, data_length):
3486+    prefix = struct.pack(SIGNED_PREFIX,
3487+                         0, # version,
3488+                         seqnum,
3489+                         root_hash,
3490+                         IV,
3491+                         required_shares,
3492+                         total_shares,
3493+                         segment_size,
3494+                         data_length,
3495+                         )
3496+    return prefix
3497+
3498+
3499+MDMFHEADER = ">BQ32s32sBBQQ LQQQQQQ"
3500+MDMFHEADERWITHOUTOFFSETS = ">BQ32s32sBBQQ"
3501+MDMFHEADERSIZE = struct.calcsize(MDMFHEADER)
3502+MDMFCHECKSTRING = ">BQ32s32s"
3503+MDMFSIGNABLEHEADER = ">BQ32s32sBBQQ"
3504+MDMFOFFSETS = ">LQQQQQQ"
3505+
3506+class MDMFSlotWriteProxy:
3507+    #implements(IMutableSlotWriter) TODO
3508+
3509+    """
3510+    I represent a remote write slot for an MDMF mutable file.
3511+
3512+    I abstract away from my caller the details of block and salt
3513+    management, and the implementation of the on-disk format for MDMF
3514+    shares.
3515+    """
3516+
3517+    # Expected layout, MDMF:
3518+    # offset:     size:       name:
3519+    #-- signed part --
3520+    # 0           1           version number (01)
3521+    # 1           8           sequence number
3522+    # 9           32          share tree root hash
3523+    # 41          32          concatenated salts hash
3524+    # 73          1           The "k" encoding parameter
3525+    # 74          1           The "N" encoding parameter
3526+    # 75          8           The segment size of the uploaded file
3527+    # 83          8           The data length of the uploaded file
3528+    #-- end signed part --
3529+    # 91          4           The offset of the share data
3530+    # 95          8           The offset of the encrypted private key
3531+    # 103         8           The offset of the block hash tree
3532+    # 111         8           The offset of the share hash chain
3533+    # 119         8           The offset of the signature
3534+    # 127         8           The offset of the verification key
3535+    # 135         8           offset of the EOF
3536+    #
3537+    # followed by salts, share data, the encrypted private key, the
3538+    # block hash tree, the share hash chain, a signature over the first
3539+    # eight fields, and a verification key.
3540+    #
3541+    # The checkstring is the first four fields -- the version number,
3542+    # sequence number, root hash and salt hash. This is consistent in
3543+    # meaning with what we have in SDMF files, except now instead of
3544+    # using the literal salt, we use a value derived from all of the
3545+    # salts.
3546+    #
3547+    # The ordering of the offsets is different to reflect the dependencies
3548+    # that we'll run into with an MDMF file. The expected write flow is
3549+    # something like this:
3550+    #
3551+    #   0: Initialize with the sequence number, encoding
3552+    #      parameters and data length. From this, we can deduce the
3553+    #      number of segments, and from that we can deduce the size of
3554+    #      the AES salt field, telling us where to write AES salts, and
3555+    #      where to write share data. We can also figure out where the
3556+    #      encrypted private key should go, because we can figure out
3557+    #      how big the share data will be.
3558+    #
3559+    #   1: Encrypt, encode, and upload the file in chunks. Do something
3560+    #      like
3561+    #
3562+    #       put_block(data, segnum, salt)
3563+    #
3564+    #      to write a block and a salt to the disk. We can do both of
3565+    #      these operations now because we have enough of the offsets to
3566+    #      know where to put them.
3567+    #
3568+    #   2: Put the encrypted private key. Use:
3569+    #
3570+    #        put_encprivkey(encprivkey)
3571+    #
3572+    #      Now that we know the length of the private key, we can fill
3573+    #      in the offset for the block hash tree.
3574+    #
3575+    #   3: We're now in a position to upload the block hash tree for
3576+    #      a share. Put that using something like:
3577+    #       
3578+    #        put_blockhashes(block_hash_tree)
3579+    #
3580+    #      Note that block_hash_tree is a list of hashes -- we'll take
3581+    #      care of the details of serializing that appropriately. When
3582+    #      we get the block hash tree, we are also in a position to
3583+    #      calculate the offset for the share hash chain, and fill that
3584+    #      into the offsets table.
3585+    #
3586+    #   4: We're now in a position to upload the share hash chain for
3587+    #      a share. Do that with something like:
3588+    #     
3589+    #        put_sharehashes(share_hash_chain)
3590+    #
3591+    #      share_hash_chain should be a dictionary mapping shnums to
3592+    #      32-byte hashes -- the wrapper handles serialization.
3593+    #      We'll know where to put the signature at this point, also,
3594+    #      but, for various reasons, will not allow clients to do that
3595+    #      until after they've put the flat salt hash and the root hash
3596+    #      in the next step.
3597+    #
3598+    #   5: Put the root hash and the flat salt hash. Use:
3599+    #
3600+    #        put_root_and_salt_hashes(root_hash, salt_hash)
3601+    #
3602+    #      These must both be 32-byte values. Since they have fixed
3603+    #      offsets in the header, we could conceivably put them whenever
3604+    #      we want to, but it makes sense enough to put them only after
3605+    #      putting the share hash chain, since having a root hash
3606+    #      implies that we have a share hash chain.
3607+    #
3608+    #      After this step, callers can call my get_signable method,
3609+    #      which returns a packed representation of the data that they
3610+    #      need to sign for the signature field, which is the next one
3611+    #      to be placed.
3612+    #
3613+    #   6: With the root hash put, we can now sign the header. Use:
3614+    #       
3615+    #        put_signature(signature)
3616+    #
3617+    #   7: Add the verification key, and finish. Do:
3618+    #
3619+    #        put_verification_key(key)
3620+    #
3621+    #      and
3622+    #
3623+    #        finish_publishing()
3624+    #
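+    # A condensed sketch of that flow (illustrative only; "writer" is an
+    # MDMFSlotWriteProxy, "sign" stands in for whatever RSA signing
+    # primitive the caller has, and the Deferred chaining between steps
+    # is elided):
+    #
+    #     for segnum, (block, salt) in enumerate(blocks_and_salts):
+    #         writer.put_block(block, segnum, salt)
+    #     writer.put_encprivkey(encprivkey)
+    #     writer.put_blockhashes(block_hash_tree)
+    #     writer.put_sharehashes(share_hash_chain)
+    #     writer.put_root_and_salt_hashes(root_hash, salt_hash)
+    #     writer.put_signature(sign(writer.get_signable()))
+    #     writer.put_verification_key(verification_key)
+    #     writer.finish_publishing()
+    #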
3625+    # Checkstring management:
3626+    #
3627+    # To write to a mutable slot, we have to provide test vectors to ensure
3628+    # that we are writing to the same data that we think we are. These
3629+    # vectors allow us to detect uncoordinated writes; that is, writes
3630+    # where both we and some other shareholder are writing to the
3631+    # mutable slot, and to report those back to the parts of the program
3632+    # doing the writing.
3633+    #
3634+    # With SDMF, this was easy -- all of the share data was written in
3635+    # one go, so it was easy to detect uncoordinated writes, and we only
3636+    # had to do it once. With MDMF, not all of the file is written at
3637+    # once.
3638+    #
3639+    # If a share is new, we write out as much of the header as we can
3640+    # before writing out anything else. This gives other writers a
3641+    # canary that they can use to detect uncoordinated writes, and, if
3642+    # they do the same thing, gives us the same canary. We then update
3643+    # the share. We won't be able to write out two fields of the header
3644+    # -- the root hash and the salt hash -- until we finish writing out
3645+    # the share. We only require the writer to provide the initial
3646+    # checkstring; we keep track ourselves of what it should be after
3647+    # updates.
3648+    #
3649+    # If we haven't written anything yet, then on the first write (which
3650+    # will probably be a block + salt of a share), we'll also write out
3651+    # the header. On subsequent passes, we'll expect to see the header.
3652+    # The header changes in two places:
3653+    #
3654+    #   - When we write out the salt hash
3655+    #   - When we write out the root of the share hash tree
3656+    #
3657+    # since these values will change the header. To minimize disruption,
3658+    # put_root_and_salt_hashes writes both of them in a single operation.
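+    #
+    # For reference, each test vector is an (offset, length, operator,
+    # data) tuple, and the only operator we use is "eq". A minimal
+    # sketch of the two cases described above:
+    #
+    #     # expect that no share exists yet:
+    #     testv = (0, 1, "eq", "")
+    #     # expect a specific checkstring (version number, sequence
+    #     # number, root hash, salt hash) at the start of the share:
+    #     testv = (0, len(checkstring), "eq", checkstring)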
3660+    def __init__(self,
3661+                 shnum,
3662+                 rref, # a remote reference to a storage server
3663+                 storage_index,
3664+                 secrets, # (write_enabler, renew_secret, cancel_secret)
3665+                 seqnum, # the sequence number of the mutable file
3666+                 required_shares,
3667+                 total_shares,
3668+                 segment_size,
3669+                 data_length): # the length of the original file
3670+        self._shnum = shnum
3671+        self._rref = rref
3672+        self._storage_index = storage_index
3673+        self._seqnum = seqnum
3674+        self._required_shares = required_shares
3675+        assert self._shnum >= 0 and self._shnum < total_shares
3676+        self._total_shares = total_shares
3677+        # We build up the offset table as we write things. It is the
3678+        # last thing we write to the remote server.
3679+        self._offsets = {}
3680+        self._testvs = []
3681+        self._secrets = secrets
3682+        # The segment size needs to be a multiple of the k parameter --
3683+        # any padding should have been carried out by the publisher
3684+        # already.
3685+        assert segment_size % required_shares == 0
3686+        self._segment_size = segment_size
3687+        self._data_length = data_length
3688+
3689+        # These are set later -- we define them here so that we can
3690+        # check for their existence easily
3691+        self._root_hash = None
3692+        self._salt_hash = None
3693+
3694+        # We haven't yet written anything to the remote bucket. By
3695+        # setting this, we tell the _write method as much. The write
3696+        # method will then know that it also needs to add a write vector
3697+        # for the checkstring (or what we have of it) to the first write
3698+        # request. We'll then record that value for future use.  If
3699+        # we're expecting something to be there already, we need to call
3700+        # set_checkstring before we write anything to tell the first
3701+        # write about that.
3702+        self._written = False
3703+
3704+        # When writing data to the storage servers, we get a read vector
3705+        # for free. We'll read the checkstring, which will help us
3706+        # figure out what's gone wrong if a write fails.
3707+        self._readv = [(0, struct.calcsize(MDMFCHECKSTRING))]
3708+
3709+        # We calculate the number of segments because it tells us
3710+        # where the salts end and the share data begins,
3711+        # and also because it provides a useful amount of bounds checking.
3712+        self._num_segments = mathutil.div_ceil(self._data_length,
3713+                                               self._segment_size)
3714+        self._block_size = self._segment_size / self._required_shares
3715+        # We also calculate the tail block size, to help us with block
3716+        # size constraints later.
3717+        tail_size = self._data_length % self._segment_size
3718+        if not tail_size:
3719+            self._tail_block_size = self._block_size
3720+        else:
3721+            self._tail_block_size = mathutil.next_multiple(tail_size,
3722+                                                           self._required_shares)
3723+            self._tail_block_size /= self._required_shares
3724+
3725+        # We already know where the AES salts start; right after the end
3726+        # of the header (which is defined as the signable part + the offsets)
3727+        # We need to calculate where the share data starts, since we're
3728+        # responsible (after this method) for being able to write it.
3729+        self._offsets['share-data'] = MDMFHEADERSIZE
3730+        self._offsets['share-data'] += self._num_segments * SALT_SIZE
3731+        # We can also calculate where the encrypted private key begins
3732+        # from what we now know.
3733+        self._offsets['enc_privkey'] = self._offsets['share-data']
3734+        self._offsets['enc_privkey'] += self._block_size * self._num_segments
3735+        # We'll wait for the rest. Callers can now call my "put_block" and
3736+        # "set_checkstring" methods.
3737+
3738+
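+    # A worked example of the offset arithmetic above, with toy numbers:
+    # given data_length=36, segment_size=6 and required_shares=3, we get
+    # num_segments=6 and block_size=2, so
+    #
+    #     offsets['share-data']  = MDMFHEADERSIZE + 6 * SALT_SIZE
+    #     offsets['enc_privkey'] = offsets['share-data'] + 2 * 6
+    #
+    # Everything after the private key is filled in as it is written.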
3739+    def set_checkstring(self, checkstring):
3740+        """
3741+        Set the checkstring for the given shnum.
3742+
3743+        By default, I assume that I am writing new shares to the grid.
3744+        If you don't explicitly set your own checkstring, I will use
3745+        one that requires that the remote share not exist.
3746+        """
3747+        # You're allowed to overwrite checkstrings with this method;
3748+        # I assume that users know what they are doing when they call
3749+        # it.
3750+        if checkstring == "":
3751+            # An empty checkstring means "the remote share should not
3752+            # exist yet"; we can't say that with a zero-length "eq" test
3753+            # vector, so we clear self._testvs and let _write install
3754+            # the empty-share test vector (0, 1, "eq", "") instead.
3755+            self._testvs = []
3756+        else:
3757+            self._testvs = []
3758+            self._testvs.append((0, len(checkstring), "eq", checkstring))
3759+
3760+
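+    # Usage sketch: when modifying a share that already exists, seed me
+    # with the checkstring last read from the server before the first
+    # write; otherwise I fall back to the default "share must not exist"
+    # test vector.
+    #
+    #     writer.set_checkstring(checkstring_read_from_server)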
3761+    def __repr__(self):
3762+        return "MDMFSlotWriteProxy for share %d" % self._shnum
3763+
3764+
3765+    def get_checkstring(self):
3766+        """
3767+        Given a share number, I return a representation of what the
3768+        checkstring for that share on the server will look like.
3769+        """
3770+        if self._root_hash:
3771+            roothash = self._root_hash
3772+        else:
3773+            roothash = "\x00" * 32
3774+        if self._salt_hash:
3775+            salthash = self._salt_hash
3776+        else:
3777+            salthash = "\x00" * 32
3778+        checkstring = struct.pack(MDMFCHECKSTRING,
3779+                                  1,
3780+                                  self._seqnum,
3781+                                  roothash,
3782+                                  salthash)
3783+        return checkstring
3784+
3785+
3786+    def put_block(self, data, segnum, salt):
3787+        """
3788+        Put the encrypted-and-encoded data segment in the slot, along
3789+        with the salt.
3790+        """
3791+        if segnum >= self._num_segments:
3792+            raise LayoutInvalid("I won't overwrite the private key")
3793+        if len(salt) != SALT_SIZE:
3794+            raise LayoutInvalid("I was given a salt of size %d, but I "
3795+                                "wanted a salt of size %d" % (len(salt), SALT_SIZE))
3796+        if segnum + 1 == self._num_segments:
3797+            if len(data) != self._tail_block_size:
3798+                raise LayoutInvalid("I was given the wrong size block to write")
3799+        elif len(data) != self._block_size:
3800+            raise LayoutInvalid("I was given the wrong size block to write")
3801+
3802+        # We want to write at offsets['share-data'] + segnum * block_size.
3803+        assert self._offsets
3804+        assert self._offsets['share-data']
3805+
3806+        offset = self._offsets['share-data'] + segnum * self._block_size
3807+        datavs = [tuple([offset, data])]
3808+        # We also have to write the salt. This is at:
3809+        salt_offset = MDMFHEADERSIZE + SALT_SIZE * segnum
3810+        datavs.append(tuple([salt_offset, salt]))
3811+        return self._write(datavs)
3812+
3813+
3814+    def put_encprivkey(self, encprivkey):
3815+        """
3816+        Put the encrypted private key in the remote slot.
3817+        """
3818+        assert self._offsets
3819+        assert self._offsets['enc_privkey']
3820+        # You shouldn't re-write the encprivkey after the block hash
3821+        # tree is written, since that could cause the private key to run
3822+        # into the block hash tree. Before it writes the block hash
3823+        # tree, the block hash tree writing method writes the offset of
3824+        # the share hash chain. So that's a good indicator of
3825+        # whether or not the block hash tree has been written.
3826+        if "share_hash_chain" in self._offsets:
3827+            raise LayoutInvalid("You must write this before the block hash tree")
3828+
3829+        self._offsets['block_hash_tree'] = self._offsets['enc_privkey'] + len(encprivkey)
3830+        datavs = [(tuple([self._offsets['enc_privkey'], encprivkey]))]
3831+        def _on_failure():
3832+            del(self._offsets['block_hash_tree'])
3833+        return self._write(datavs, on_failure=_on_failure)
3834+
3835+
3836+    def put_blockhashes(self, blockhashes):
3837+        """
3838+        Put the block hash tree in the remote slot.
3839+
3840+        The encrypted private key must be put before the block hash
3841+        tree, since we need to know how large it is to know where the
3842+        block hash tree should go. The block hash tree must be put
3843+        before the share hash chain, since its size determines the
3844+        offset of the share hash chain.
3845+        """
3846+        assert self._offsets
3847+        assert isinstance(blockhashes, list)
3848+        if "block_hash_tree" not in self._offsets:
3849+            raise LayoutInvalid("You must put the encrypted private key "
3850+                                "before you put the block hash tree")
3851+        # If written, the share hash chain causes the signature offset
3852+        # to be defined.
3853+        if "signature" in self._offsets:
3854+            raise LayoutInvalid("You must put the block hash tree before "
3855+                                "you put the share hash chain")
3856+        blockhashes_s = "".join(blockhashes)
3857+        self._offsets['share_hash_chain'] = self._offsets['block_hash_tree'] + len(blockhashes_s)
3858+        datavs = []
3859+        datavs.append(tuple([self._offsets['block_hash_tree'], blockhashes_s]))
3860+        def _on_failure():
3861+            del(self._offsets['share_hash_chain'])
3862+        return self._write(datavs, on_failure=_on_failure)
3863+
3864+
3865+    def put_sharehashes(self, sharehashes):
3866+        """
3867+        Put the share hash chain in the remote slot.
3868+
3869+        The block hash tree must be put before the share hash chain,
3870+        since we need to know where the block hash tree ends before we
3871+        can know where the share hash chain starts. The share hash chain
3872+        must be put before the signature, since the length of the packed
3873+        share hash chain determines the offset of the signature.
3874+        """
3875+        assert isinstance(sharehashes, dict)
3876+        if "share_hash_chain" not in self._offsets:
3877+            raise LayoutInvalid("You need to put the block hashes before "
3878+                                "you can put the share hash chain")
3879+        # The signature comes after the share hash chain. If the
3880+        # signature has already been written, we must not write another
3881+        # share hash chain. The signature writes the verification key
3882+        # offset when it gets sent to the remote server, so we look for
3883+        # that.
3884+        if "verification_key" in self._offsets:
3885+            raise LayoutInvalid("You must write the share hash chain "
3886+                                "before you write the signature")
3887+        datavs = []
3888+        sharehashes_s = "".join([struct.pack(">H32s", i, sharehashes[i])
3889+                                  for i in sorted(sharehashes.keys())])
3890+        self._offsets['signature'] = self._offsets['share_hash_chain'] + len(sharehashes_s)
3891+        datavs.append(tuple([self._offsets['share_hash_chain'], sharehashes_s]))
3892+        def _on_failure():
3893+            del(self._offsets['signature'])
3894+        return self._write(datavs, on_failure=_on_failure)
3895+
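+    # A note on the wire format used above: each entry is packed as
+    # ">H32s" (a two-byte share number followed by a 32-byte hash), and
+    # entries are concatenated in ascending shnum order, so a chain of N
+    # entries occupies exactly 34 * N bytes. For example:
+    #
+    #     put_sharehashes({1: "\x01" * 32, 3: "\x03" * 32})
+    #     # writes pack(">H32s", 1, ...) + pack(">H32s", 3, ...)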
3896+
3897+    def put_root_and_salt_hashes(self, roothash, salthash):
3898+        """
3899+        Put the root hash (the root of the share hash tree) and the
3900+        flat salt hash in the remote slot.
3901+        """
3902+        # It does not make sense to be able to put the root and salt
3903+        # hashes without first putting the share hashes, since you need
3904+        # the share hashes to generate the root hash.
3905+        #
3906+        # Signature is defined by the routine that places the share hash
3907+        # chain, so it's a good thing to look for in finding out whether
3908+        # or not the share hash chain exists on the remote server.
3909+        if "signature" not in self._offsets:
3910+            raise LayoutInvalid("You need to put the share hash chain "
3911+                                "before you can put the root hash")
3912+        if len(roothash) != HASH_SIZE or len(salthash) != HASH_SIZE:
3913+            raise LayoutInvalid("hashes and salts must be exactly %d bytes"
3914+                                 % HASH_SIZE)
3915+        datavs = []
3916+        self._root_hash = roothash
3917+        self._salt_hash = salthash
3918+        checkstring = self.get_checkstring()
3919+        datavs.append(tuple([0, checkstring]))
3920+        # This write, if successful, changes the checkstring, so we need
3921+        # to update our internal checkstring to be consistent with the
3922+        # one on the server.
3923+        def _on_success():
3924+            self._testvs = [(0, len(checkstring), "eq", checkstring)]
3925+        def _on_failure():
3926+            self._root_hash = None
3927+            self._salt_hash = None
3928+        return self._write(datavs,
3929+                           on_success=_on_success,
3930+                           on_failure=_on_failure)
3931+
3932+
3933+    def get_signable(self):
3934+        """
3935+        Get the first eight fields of the mutable file; the parts that
3936+        are signed.
3937+        """
3938+        if not self._root_hash or not self._salt_hash:
3939+            raise LayoutInvalid("You need to set the root hash and the "
3940+                                "salt hash before getting something to "
3941+                                "sign")
3942+        return struct.pack(MDMFSIGNABLEHEADER,
3943+                           1,
3944+                           self._seqnum,
3945+                           self._root_hash,
3946+                           self._salt_hash,
3947+                           self._required_shares,
3948+                           self._total_shares,
3949+                           self._segment_size,
3950+                           self._data_length)
3951+
3952+
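+    # A sketch of how get_signable is meant to be used (privkey.sign is
+    # a placeholder for whatever RSA primitive the caller has):
+    #
+    #     signable = writer.get_signable()
+    #     signature = privkey.sign(signable)
+    #     d = writer.put_signature(signature)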
3953+    def put_signature(self, signature):
3954+        """
3955+        Put the signature field in the remote slot.
3956+
3957+        I require that the root hash and share hash chain have been put
3958+        to the grid before I will write the signature to the grid.
3959+        """
3960+        # It does not make sense to put a signature without first
3961+        # putting the root hash and the salt hash (since otherwise
3962+        # the signature would be incomplete), so we don't allow that.
3963+        if "signature" not in self._offsets:
3964+            raise LayoutInvalid("You must put the share hash chain "
3965+                                "before putting the signature")
3966+        if not self._root_hash:
3967+            raise LayoutInvalid("You must complete the signed prefix "
3968+                                "before computing a signature")
3969+        # If we put the signature after we put the verification key, we
3970+        # could end up running into the verification key, and will
3971+        # probably screw up the offsets as well. So we don't allow that.
3972+        # The method that writes the verification key defines the EOF
3973+        # offset before writing the verification key, so look for that.
3974+        if "EOF" in self._offsets:
3975+            raise LayoutInvalid("You must write the signature before the verification key")
3976+
3977+        self._offsets['verification_key'] = self._offsets['signature'] + len(signature)
3978+        datavs = []
3979+        datavs.append(tuple([self._offsets['signature'], signature]))
3980+        def _on_failure():
3981+            del(self._offsets['verification_key'])
3982+        return self._write(datavs, on_failure=_on_failure)
3983+
3984+
3985+    def put_verification_key(self, verification_key):
3986+        """
3987+        Put the verification key into the remote slot.
3988+
3989+        I require that the signature have been written to the storage
3990+        server before I allow the verification key to be written to the
3991+        remote server.
3992+        """
3993+        if "verification_key" not in self._offsets:
3994+            raise LayoutInvalid("You must put the signature before you "
3995+                                "can put the verification key")
3996+        self._offsets['EOF'] = self._offsets['verification_key'] + len(verification_key)
3997+        datavs = []
3998+        datavs.append(tuple([self._offsets['verification_key'], verification_key]))
3999+        def _on_failure():
4000+            del(self._offsets['EOF'])
4001+        return self._write(datavs, on_failure=_on_failure)
4002+
4003+
4004+    def finish_publishing(self):
4005+        """
4006+        Write the offset table and encoding parameters to the remote
4007+        slot, since that's the only thing we have yet to publish at this
4008+        point.
4009+        """
4010+        if "EOF" not in self._offsets:
4011+            raise LayoutInvalid("You must put the verification key before "
4012+                                "you can publish the offsets")
4013+        offsets_offset = struct.calcsize(MDMFHEADERWITHOUTOFFSETS)
4014+        offsets = struct.pack(MDMFOFFSETS,
4015+                              self._offsets['share-data'],
4016+                              self._offsets['enc_privkey'],
4017+                              self._offsets['block_hash_tree'],
4018+                              self._offsets['share_hash_chain'],
4019+                              self._offsets['signature'],
4020+                              self._offsets['verification_key'],
4021+                              self._offsets['EOF'])
4022+        datavs = []
4023+        datavs.append(tuple([offsets_offset, offsets]))
4024+        encoding_parameters_offset = struct.calcsize(MDMFCHECKSTRING)
4025+        params = struct.pack(">BBQQ",
4026+                             self._required_shares,
4027+                             self._total_shares,
4028+                             self._segment_size,
4029+                             self._data_length)
4030+        datavs.append(tuple([encoding_parameters_offset, params]))
4031+        return self._write(datavs)
4032+
4033+
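+    # Note that the two header writes above are consistent with the
+    # layout described at the top of this class: the encoding parameters
+    # (">BBQQ", 18 bytes) land right after the checkstring
+    # (calcsize(MDMFCHECKSTRING) == 1 + 8 + 32 + 32 == 73), ending at
+    # byte 91 == calcsize(MDMFHEADERWITHOUTOFFSETS), which is where the
+    # offset table (MDMFOFFSETS, ">LQQQQQQ", 52 bytes) begins -- for a
+    # total header size of 143 bytes.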
4034+    def _write(self, datavs, on_failure=None, on_success=None):
4035+        """I write the data vectors in datavs to the remote slot."""
4036+        tw_vectors = {}
4037+        new_share = False
4038+        if not self._testvs:
4039+            self._testvs = []
4040+            self._testvs.append(tuple([0, 1, "eq", ""]))
4041+            new_share = True
4042+        if not self._written:
4043+            # Write a new checkstring to the share when we write it, so
4044+            # that we have something to check later.
4045+            new_checkstring = self.get_checkstring()
4046+            datavs.append((0, new_checkstring))
4047+            def _first_write():
4048+                self._written = True
4049+                self._testvs = [(0, len(new_checkstring), "eq", new_checkstring)]
4050+            on_success = _first_write
4051+        tw_vectors[self._shnum] = (self._testvs, datavs, None)
4052+        datalength = sum([len(x[1]) for x in datavs])
4053+        d = self._rref.callRemote("slot_testv_and_readv_and_writev",
4054+                                  self._storage_index,
4055+                                  self._secrets,
4056+                                  tw_vectors,
4057+                                  self._readv)
4058+        def _result(results):
4059+            if isinstance(results, failure.Failure) or not results[0]:
4060+                # Do nothing; the write was unsuccessful.
4061+                if on_failure: on_failure()
4062+            else:
4063+                if on_success: on_success()
4064+            return results
4065+        d.addCallback(_result)
4066+        return d
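+    # For reference, the tw_vectors argument above has the shape
+    # { shnum: (test_vectors, write_vectors, new_length) }, e.g.
+    #
+    #     { 0: ([(0, len(checkstring), "eq", checkstring)],  # must match
+    #           [(offset, data), ...],       # (offset, data) write pairs
+    #           None) }                      # None: don't truncate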
4067+
4068+
4069+class MDMFSlotReadProxy:
4070+    """
4071+    I read from a mutable slot filled with data written in the MDMF data
4072+    format (which is described above).
4073+
4074+    I can be initialized with some amount of data, which I will use (if
4075+    it is valid) to eliminate some of the need to fetch it from servers.
4076+    """
4077+    def __init__(self,
4078+                 rref,
4079+                 storage_index,
4080+                 shnum,
4081+                 data=""):
4082+        # Start the initialization process.
4083+        self._rref = rref
4084+        self._storage_index = storage_index
4085+        self._shnum = shnum
4086+
4087+        # Before doing anything, the reader is probably going to want to
4088+        # verify that the signature is correct. To do that, they'll need
4089+        # the verification key, and the signature. To get those, we'll
4090+        # need the offset table. So fetch the offset table on the
4091+        # assumption that that will be the first thing that a reader is
4092+        # going to do.
4093+
4094+        # The fact that these encoding parameters are None tells us
4095+        # that we haven't yet fetched them from the remote share, so we
4096+        # should. We could just not set them, but the checks will be
4097+        # easier to read if we don't have to use hasattr.
4098+        self._version_number = None
4099+        self._sequence_number = None
4100+        self._root_hash = None
4101+        self._salt_hash = None
4102+        self._salt = None
4103+        self._required_shares = None
4104+        self._total_shares = None
4105+        self._segment_size = None
4106+        self._data_length = None
4107+        self._offsets = None
4108+
4109+        # If the user has chosen to initialize us with some data, we'll
4110+        # try to satisfy subsequent data requests with that data before
4111+        # asking the storage server for it.
4112+        self._data = data
+        # (Processing of prefetched data is left disabled here for now;
+        # see _process_prefetched_data below.)
4113+        #if self._data:
4114+        #    self._process_prefetched_data()
4115+
4116+
4117+    def _process_prefetched_data(self):
4118+        # If data is at least one byte long, we get the version
4119+        # number (MDMF or SDMF)
4120+        data = self._data
4121+        if len(data) >= 1:
4122+            (verno, ) = struct.unpack(">B", data[:1])
4123+            if verno in (SDMF_VERSION, MDMF_VERSION):
4124+                self._version_number = verno
4125+            else:
4126+                # We don't know how to process this data, so we'll just
4127+                # fetch it ourselves at another time
4128+                return
4129+
4130+        # Now what happens depends on whether or not we're MDMF
4131+        if self._version_number == MDMF_VERSION:
4132+            # If we have at least 91 bytes of data, we have the encoding
4133+            # parameters.
4134+            if len(data) >= 91:
4135+                self._process_encoding_parameters({self._shnum:[data[:91]]})
4136+            # If we have at least 143 bytes of data, we have the offset
4137+            # table.
4138+            if len(data) >= 143:
4139+                self._process_offsets({self._shnum:[data[91:143]]})
4140+            # TODO: Other caching that we're not quite equipped to do
4141+            # yet
4142+
4143+        else:
4144+            # If we have at least 75 bytes of data, we have the encoding
4145+            # parameters.
4146+            if len(data) >= 75:
4147+                self._process_encoding_parameters({self._shnum:[data]})
4148+            # If we have at least 107 bytes of data, we have the offset
4149+            # table.
4150+            if len(data) >= 107:
4151+                self._process_offsets({self._shnum:[data[75:107]]})
4152+            # TODO: Other caching that we're not quite equipped to do
4153+            # yet.
4154+
4155+
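+    # For reference, the magic numbers above come straight from the
+    # struct formats used elsewhere in this file:
+    #
+    #     MDMF: calcsize(MDMFHEADERWITHOUTOFFSETS) == 91
+    #           91 + calcsize(MDMFOFFSETS) == 91 + 52 == 143
+    #     SDMF: calcsize(">BQ32s16sBBQQ") == 75
+    #           75 + calcsize(">LLLLQQ") == 75 + 32 == 107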
4156+    def _maybe_fetch_offsets_and_header(self):
4157+        """
4158+        I fetch the offset table and the header from the remote slot if
4159+        I don't already have them. If I do have them, I do nothing and
4160+        return an empty Deferred.
4161+        """
4162+        if self._offsets:
4163+            return defer.succeed(None)
4164+        # At this point, we may be either SDMF or MDMF. Fetching 91
4165+        # bytes will be enough to get information for both SDMF and
4166+        # MDMF, though we'll be left with 16 more bytes than we
4167+        # need if this ends up being SDMF. We could just fetch the first
4168+        # byte, which would save the extra bytes at the cost of an
4169+        # additional roundtrip after we parse the result.
4170+        readvs = [(0, 91)]
4171+        d = self._read(readvs)
4172+        d.addCallback(self._process_encoding_parameters)
4173+
4174+        # Now, we have the encoding parameters, which will tell us
4175+        # where we need to look for the offset table.
4176+        def _fetch_offsets(ignored):
4177+            if self._version_number == 0:
4178+                # In SDMF, the offset table starts at byte 75, and
4179+                # extends for 32 bytes
4180+                readv = [(75, 32)] # struct.calcsize(">LLLLQQ") == 32
4181+
4182+            elif self._version_number == 1:
4183+                # In MDMF, the offset table starts at byte 91 and
4184+                # extends for 52 bytes
4185+                readv = [(91, 52)] # struct.calcsize(">LQQQQQQ") == 52
4186+            else:
4187+                raise LayoutInvalid("I only understand SDMF and MDMF")
4188+            return readv
4189+
4190+        d.addCallback(_fetch_offsets)
4191+        d.addCallback(lambda readv:
4192+            self._read(readv))
4193+        d.addCallback(self._process_offsets)
4194+        return d
4195+
4196+
4197+    def _process_encoding_parameters(self, encoding_parameters):
4198+        assert self._shnum in encoding_parameters
4199+        encoding_parameters = encoding_parameters[self._shnum][0]
4200+        # The first byte is the version number. It will tell us what
4201+        # to do next.
4202+        (verno,) = struct.unpack(">B", encoding_parameters[:1])
4203+        if verno == MDMF_VERSION:
4204+            (verno,
4205+             seqnum,
4206+             root_hash,
4207+             salt_hash,
4208+             k,
4209+             n,
4210+             segsize,
4211+             datalen) = struct.unpack(MDMFHEADERWITHOUTOFFSETS,
4212+                                      encoding_parameters)
4213+            self._salt_hash = salt_hash
4214+            if segsize == 0 and datalen == 0:
4215+                # Empty file, no segments.
4216+                self._num_segments = 0
4217+            else:
4218+                self._num_segments = mathutil.div_ceil(datalen, segsize)
4219+
4220+        elif verno == SDMF_VERSION:
4221+            (verno,
4222+             seqnum,
4223+             root_hash,
4224+             salt,
4225+             k,
4226+             n,
4227+             segsize,
4228+             datalen) = struct.unpack(">BQ32s16s BBQQ",
4229+                                      encoding_parameters[:75])
4230+            self._salt = salt
4231+            if segsize == 0 and datalen == 0:
4232+                # empty file
4233+                self._num_segments = 0
4234+            else:
4235+                # non-empty SDMF files have one segment.
4236+                self._num_segments = 1
4237+        else:
4238+            raise UnknownVersionError("You asked me to read mutable file "
4239+                                      "version %d, but I only understand "
4240+                                      "%d and %d" % (verno, SDMF_VERSION,
4241+                                                     MDMF_VERSION))
4242+
4243+        self._version_number = verno
4244+        self._sequence_number = seqnum
4245+        self._root_hash = root_hash
4246+        self._required_shares = k
4247+        self._total_shares = n
4248+        self._segment_size = segsize
4249+        self._data_length = datalen
4250+
4251+        self._block_size = self._segment_size / self._required_shares
4252+        # We can upload empty files, and need to account for this fact
4253+        # so as to avoid zero-division and zero-modulo errors.
4254+        if datalen > 0:
4255+            tail_size = self._data_length % self._segment_size
4256+        else:
4257+            tail_size = 0
4258+        if not tail_size:
4259+            self._tail_block_size = self._block_size
4260+        else:
4261+            self._tail_block_size = mathutil.next_multiple(tail_size,
4262+                                                    self._required_shares)
4263+            self._tail_block_size /= self._required_shares
4264+
4265+
4266+    def _process_offsets(self, offsets):
4267+        assert self._shnum in offsets
4268+        offsets = offsets[self._shnum][0]
4269+        if self._version_number == 0:
4270+            (signature,
4271+             share_hash_chain,
4272+             block_hash_tree,
4273+             share_data,
4274+             enc_privkey,
4275+             EOF) = struct.unpack(">LLLLQQ", offsets)
4276+            self._offsets = {}
4277+            self._offsets['signature'] = signature
4278+            self._offsets['share_data'] = share_data
4279+            self._offsets['block_hash_tree'] = block_hash_tree
4280+            self._offsets['share_hash_chain'] = share_hash_chain
4281+            self._offsets['enc_privkey'] = enc_privkey
4282+            self._offsets['EOF'] = EOF
4283+        elif self._version_number == 1:
4284+            (share_data,
4285+             encprivkey,
4286+             blockhashes,
4287+             sharehashes,
4288+             signature,
4289+             verification_key,
4290+             eof) = struct.unpack(MDMFOFFSETS, offsets)
4291+            self._offsets = {}
4292+            self._offsets['share_data'] = share_data
4293+            self._offsets['enc_privkey'] = encprivkey
4294+            self._offsets['block_hash_tree'] = blockhashes
4295+            self._offsets['share_hash_chain'] = sharehashes
4296+            self._offsets['signature'] = signature
4297+            self._offsets['verification_key'] = verification_key
4298+            self._offsets['EOF'] = eof
4299+
4300+
4301+    def get_block_and_salt(self, segnum):
4302+        """
4303+        I return (block, salt), where block is the block data and
4304+        salt is the salt used to encrypt that segment.
4305+        """
4306+        d = self._maybe_fetch_offsets_and_header()
4307+        def _then(ignored):
4308+            base_share_offset = self._offsets['share_data']
4309+            if self._version_number == 1:
4310+                base_salt_offset = struct.calcsize(MDMFHEADER)
4311+                salt_offset = base_salt_offset + SALT_SIZE * segnum
4312+            else:
4313+                salt_offset = None # no per-segment salts in SDMF
4314+            return base_share_offset, salt_offset
4315+
4316+        d.addCallback(_then)
4317+
4318+        def _calculate_share_offset(share_and_salt_offset):
4319+            base_share_offset, salt_offset = share_and_salt_offset
4320+            if segnum + 1 > self._num_segments:
4321+                raise LayoutInvalid("Not a valid segment number")
4322+
4323+            share_offset = base_share_offset + self._block_size * segnum
4324+            if segnum + 1 == self._num_segments:
4325+                data = self._tail_block_size
4326+            else:
4327+                data = self._block_size
4328+            readvs = [(share_offset, data)]
4329+            if salt_offset is not None:
4330+                readvs.insert(0, (salt_offset, SALT_SIZE))
4331+            return readvs
4332+
4333+        d.addCallback(_calculate_share_offset)
4334+        d.addCallback(lambda readvs:
4335+            self._read(readvs))
4336+        def _process_results(results):
4337+            assert self._shnum in results
4338+            if self._version_number == 0:
4339+                # We only read the share data, but we know the salt from
4340+                # when we fetched the header
4341+                data = results[self._shnum][0]
4342+                salt = self._salt
4343+            else:
4344+                salt, data = results[self._shnum]
4345+            return data, salt
4346+        d.addCallback(_process_results)
4347+        return d
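+    # A sketch of the readvs this builds for an MDMF share, reusing the
+    # toy numbers from the writer's example (block_size=2) and taking
+    # SALT_SIZE and the 143-byte header size as given:
+    #
+    #     salt_offset  = 143 + SALT_SIZE * segnum
+    #     share_offset = offsets['share_data'] + 2 * segnum
+    #     readvs = [(salt_offset, SALT_SIZE), (share_offset, 2)]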
4348+
4349+
4350+    def get_blockhashes(self):
4351+        """
4352+        I return the block hash tree
4353+        """
4354+        # TODO: Return only the parts of the block hash tree necessary
4355+        # to validate the blocknum provided?
4356+        d = self._maybe_fetch_offsets_and_header()
4357+        def _then(ignored):
4358+            blockhashes_offset = self._offsets['block_hash_tree']
4359+            if self._version_number == 1:
4360+                blockhashes_length = self._offsets['share_hash_chain'] - blockhashes_offset
4361+            else:
4362+                blockhashes_length = self._offsets['share_data'] - blockhashes_offset
4363+            readvs = [(blockhashes_offset, blockhashes_length)]
4364+            return readvs
4365+        d.addCallback(_then)
4366+        d.addCallback(lambda readvs:
4367+            self._read(readvs))
4368+        def _build_block_hash_tree(results):
4369+            assert self._shnum in results
4370+
4371+            rawhashes = results[self._shnum][0]
4372+            results = [rawhashes[i:i+HASH_SIZE]
4373+                       for i in range(0, len(rawhashes), HASH_SIZE)]
4374+            return results
4375+        d.addCallback(_build_block_hash_tree)
4376+        return d
4377+
4378+
4379+    def get_sharehashes(self):
4380+        """
4381+        I return the part of the share hash chain placed to validate
4382+        this share.
4383+        """
4384+        d = self._maybe_fetch_offsets_and_header()
4385+
4386+        def _make_readvs(ignored):
4387+            sharehashes_offset = self._offsets['share_hash_chain']
4388+            if self._version_number == 0:
4389+                sharehashes_length = self._offsets['block_hash_tree'] - sharehashes_offset
4390+            else:
4391+                sharehashes_length = self._offsets['signature'] - sharehashes_offset
4392+            readvs = [(sharehashes_offset, sharehashes_length)]
4393+            return readvs
4394+        d.addCallback(_make_readvs)
4395+        d.addCallback(lambda readvs:
4396+            self._read(readvs))
4397+        def _build_share_hash_chain(results):
4398+            assert self._shnum in results
4399+
4400+            sharehashes = results[self._shnum][0]
4401+            results = [sharehashes[i:i+(HASH_SIZE + 2)]
4402+                       for i in range(0, len(sharehashes), HASH_SIZE + 2)]
4403+            results = dict([struct.unpack(">H32s", data)
4404+                            for data in results])
4405+            return results
4406+        d.addCallback(_build_share_hash_chain)
4407+        return d
4408+
4409+
4410+    def get_encprivkey(self):
4411+        """
4412+        I return the encrypted private key.
4413+        """
4414+        d = self._maybe_fetch_offsets_and_header()
4415+
4416+        def _make_readvs(ignored):
4417+            privkey_offset = self._offsets['enc_privkey']
4418+            if self._version_number == 0:
4419+                privkey_length = self._offsets['EOF'] - privkey_offset
4420+            else:
4421+                privkey_length = self._offsets['block_hash_tree'] - privkey_offset
4422+            readvs = [(privkey_offset, privkey_length)]
4423+            return readvs
4424+        d.addCallback(_make_readvs)
4425+        d.addCallback(lambda readvs:
4426+            self._read(readvs))
4427+        def _process_results(results):
4428+            assert self._shnum in results
4429+            privkey = results[self._shnum][0]
4430+            return privkey
4431+        d.addCallback(_process_results)
4432+        return d
4433+
4434+
4435+    def get_signature(self):
4436+        """
4437+        I return the signature of my share.
4438+        """
4439+        d = self._maybe_fetch_offsets_and_header()
4440+
4441+        def _make_readvs(ignored):
4442+            signature_offset = self._offsets['signature']
4443+            if self._version_number == 1:
4444+                signature_length = self._offsets['verification_key'] - signature_offset
4445+            else:
4446+                signature_length = self._offsets['share_hash_chain'] - signature_offset
4447+            readvs = [(signature_offset, signature_length)]
4448+            return readvs
4449+        d.addCallback(_make_readvs)
4450+        d.addCallback(lambda readvs:
4451+            self._read(readvs))
4452+        def _process_results(results):
4453+            assert self._shnum in results
4454+            signature = results[self._shnum][0]
4455+            return signature
4456+        d.addCallback(_process_results)
4457+        return d
4458+
4459+
4460+    def get_verification_key(self):
4461+        """
4462+        I return the verification key.
4463+        """
4464+        d = self._maybe_fetch_offsets_and_header()
4465+
4466+        def _make_readvs(ignored):
4467+            if self._version_number == 1:
4468+                vk_offset = self._offsets['verification_key']
4469+                vk_length = self._offsets['EOF'] - vk_offset
4470+            else:
4471+                vk_offset = struct.calcsize(">BQ32s16sBBQQLLLLQQ")
4472+                vk_length = self._offsets['signature'] - vk_offset
4473+            readvs = [(vk_offset, vk_length)]
4474+            return readvs
4475+        d.addCallback(_make_readvs)
4476+        d.addCallback(lambda readvs:
4477+            self._read(readvs))
4478+        def _process_results(results):
4479+            assert self._shnum in results
4480+            verification_key = results[self._shnum][0]
4481+            return verification_key
4482+        d.addCallback(_process_results)
4483+        return d
4484+
4485+
4486+    def get_encoding_parameters(self):
4487+        """
4488+        I return (k, n, segsize, datalen)
4489+        """
4490+        d = self._maybe_fetch_offsets_and_header()
4491+        d.addCallback(lambda ignored:
4492+            (self._required_shares,
4493+             self._total_shares,
4494+             self._segment_size,
4495+             self._data_length))
4496+        return d
4497+
4498+
4499+    def get_seqnum(self):
4500+        """
4501+        I return the sequence number for this share.
4502+        """
4503+        d = self._maybe_fetch_offsets_and_header()
4504+        d.addCallback(lambda ignored:
4505+            self._sequence_number)
4506+        return d
4507+
4508+
4509+    def get_root_hash(self):
4510+        """
4511+        I return the root of the block hash tree
4512+        """
4513+        d = self._maybe_fetch_offsets_and_header()
4514+        d.addCallback(lambda ignored: self._root_hash)
4515+        return d
4516+
4517+
4518+    def get_salt_hash(self):
4519+        """
4520+        I return the flat salt hash
4521+        """
4522+        d = self._maybe_fetch_offsets_and_header()
4523+        d.addCallback(lambda ignored: self._salt_hash)
4524+        return d
4525+
4526+
4527+    def get_checkstring(self):
4528+        """
4529+        I return the packed representation of the following:
4530+
4531+            - version number
4532+            - sequence number
4533+            - root hash
4534+            - salt hash
4535+
4536+        which my users use as a checkstring to detect other writers.
4537+        """
4538+        d = self._maybe_fetch_offsets_and_header()
4539+        def _build_checkstring(ignored):
4540+            checkstring = struct.pack(MDMFCHECKSTRING,
4541+                                      self._version_number,
4542+                                      self._sequence_number,
4543+                                      self._root_hash,
4544+                                      self._salt_hash)
4545+            return checkstring
4546+        d.addCallback(_build_checkstring)
4547+        return d
4548+
4549+
4550+    def _get_prefix(self):
4551+        # The prefix is another name for the part of the remote share
4552+        # that gets signed. It consists of everything up to and
4553+        # including the datalength, packed by struct.
4554+        if self._version_number == SDMF_VERSION:
4555+            format_string = SIGNED_PREFIX
4556+            salt_to_use = self._salt
4557+        else:
4558+            format_string = MDMFSIGNABLEHEADER
4559+            salt_to_use = self._salt_hash
4560+        return struct.pack(format_string,
4561+                           self._version_number,
4562+                           self._sequence_number,
4563+                           self._root_hash,
4564+                           salt_to_use,
4565+                           self._required_shares,
4566+                           self._total_shares,
4567+                           self._segment_size,
4568+                           self._data_length)
4569+
4570+
4571+    def _get_offsets_tuple(self):
4572+        # The offsets tuple is another component of the version
4573+        # information tuple: our offsets dictionary, itemized into a
4574+        # tuple so that it is hashable (verinfo tuples are used as
4575+        # dictionary keys, and a dict inside them would break that).
+        return tuple(sorted(self._offsets.items()))
4576+
4577+
4578+    def get_verinfo(self):
4579+        """
4580+        I return my verinfo tuple. This is used by the ServermapUpdater
4581+        to keep track of versions of mutable files.
4582+
4583+        The verinfo tuple for MDMF files contains:
4584+            - seqnum
4585+            - root hash
4586+            - salt hash
4587+            - segsize
4588+            - datalen
4589+            - k
4590+            - n
4591+            - prefix (the thing that you sign)
4592+            - a tuple of offsets
4593+
4594+        The verinfo tuple for SDMF files is the same, but contains a
4595+        16-byte IV instead of a hash of salts.
4596+        """
4597+        d = self._maybe_fetch_offsets_and_header()
4598+        def _build_verinfo(ignored):
4599+            if self._version_number == SDMF_VERSION:
4600+                salt_to_use = self._salt
4601+            else:
4602+                salt_to_use = self._salt_hash
4603+            return (self._sequence_number,
4604+                    self._root_hash,
4605+                    salt_to_use,
4606+                    self._segment_size,
4607+                    self._data_length,
4608+                    self._required_shares,
4609+                    self._total_shares,
4610+                    self._get_prefix(),
4611+                    self._get_offsets_tuple())
4612+        d.addCallback(_build_verinfo)
4613+        return d
4614+
4615+
4616+    def _read(self, readvs):
4617+        unsatisfiable = filter(lambda x: x[0] + x[1] > len(self._data), readvs)
4618+        # TODO: It's entirely possible to tweak this so that it just
4619+        # fulfills the requests that it can, and not demand that all
4620+        # requests are satisfiable before running it.
4621+        if not unsatisfiable:
4622+            results = [self._data[offset:offset+length]
4623+                       for (offset, length) in readvs]
4624+            results = {self._shnum: results}
4625+            d = defer.succeed(results)
4626+        else:
4627+            d = self._rref.callRemote("slot_readv",
4628+                                      self._storage_index,
4629+                                      [self._shnum],
4630+                                      readvs)
4631+        return d
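+    # Note that both branches above produce the same shape of result --
+    # a dict mapping shnums to a list of strings, one per requested
+    # (offset, length) pair, in request order:
+    #
+    #     proxy._read([(0, 91), (91, 52)])
+    #     # => a Deferred firing with {shnum: [header_bytes, offsets_bytes]}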
4632+
4633+
4634+    def is_sdmf(self):
4635+        """I tell my caller whether or not my remote file is SDMF or MDMF
4636+        """
4637+        d = self._maybe_fetch_offsets_and_header()
4638+        d.addCallback(lambda ignored:
4639+            self._version_number == 0)
4640+        return d
4641+
4642+
4643+class LayoutInvalid(Exception):
4644+    """
4645+    The share data (or the sequence of operations on it) doesn't conform to the MDMF layout.
4646+    """
4647}
4648
4649Context:
4650
4651[SFTP: remove a dubious use of 'pragma: no cover'.
4652david-sarah@jacaranda.org**20100613164356
4653 Ignore-this: 8f96a81b1196017ed6cfa1d914e56fa5
4654] 
4655[SFTP: test that renaming onto a just-opened file fails.
4656david-sarah@jacaranda.org**20100612033709
4657 Ignore-this: 9b14147ad78b16a5ab0e0e4813491414
4658] 
4659[SFTP: further small improvements to test coverage. Also ensure that after a test failure, later tests don't fail spuriously due to the checks for heisenfile leaks.
4660david-sarah@jacaranda.org**20100612030737
4661 Ignore-this: 4ec1dd3d7542be42007987a2f51508e7
4662] 
4663[SFTP: further improve test coverage (paths containing '.', bad data for posix-rename extension, and error in test of openShell).
4664david-sarah@jacaranda.org**20100611213142
4665 Ignore-this: 956f9df7f9e8a66b506ca58dd9a5dbe7
4666] 
4667[SFTP: improve test coverage for no-write on mutable files, and check for heisenfile table leaks in all relevant tests. Delete test_memory_leak since it is now redundant.
4668david-sarah@jacaranda.org**20100611205752
4669 Ignore-this: 88be1cf323c10dd534a4b8fdac121e31
4670] 
4671[CLI.txt: introduce 'create-alias' before 'add-alias', document Unicode argument support, and other minor updates.
4672david-sarah@jacaranda.org**20100610225547
4673 Ignore-this: de7326e98d79291cdc15aed86ae61fe8
4674] 
4675[SFTP: add test for extension of file opened with FXF_APPEND.
4676david-sarah@jacaranda.org**20100610182647
4677 Ignore-this: c0216d26453ce3cb4b92eef37d218fb4
4678] 
4679[NEWS: add UTF-8 coding declaration.
4680david-sarah@jacaranda.org**20100609234851
4681 Ignore-this: 3e6ef125b278e0a982c88d23180a78ae
4682] 
4683[tests: bump up the timeout on this iputil test from 2s to 4s
4684zooko@zooko.com**20100609143017
4685 Ignore-this: 786b7f7bbc85d45cdf727a6293750798
4686] 
4687[docs: a few tweaks to NEWS and CREDITS and make quickstart.html point to 1.7.0β!
4688zooko@zooko.com**20100609142927
4689 Ignore-this: f8097d3062f41f06c4420a7c84a56481
4690] 
4691[docs: Update NEWS file with new features and bugfixes in 1.7.0
4692francois@ctrlaltdel.ch**20100609091120
4693 Ignore-this: 8c1014e4469ef530e5ff48d7d6ae71c5
4694] 
4695[docs: wording fix, thanks to Jeremy Visser, fix #987
4696francois@ctrlaltdel.ch**20100609081103
4697 Ignore-this: 6d2e627e0f1cd58c0e1394e193287a4b
4698] 
4699[SFTP: fix most significant memory leak described in #1045 (due to a file being added to all_heisenfiles under more than one direntry when renamed).
4700david-sarah@jacaranda.org**20100609080003
4701 Ignore-this: 490b4c14207f6725d0dd32c395fbcefa
4702] 
4703[test_stringutils.py: Fix test failure on CentOS builder, possibly Python 2.4.3-related.
4704david-sarah@jacaranda.org**20100609065056
4705 Ignore-this: 503b561b213baf1b92ae641f2fdf080a
4706] 
4707[Fix for Unicode-related test failures on Zooko's OS X 10.6 machine.
4708david-sarah@jacaranda.org**20100609055448
4709 Ignore-this: 395ad16429e56623edfa74457a121190
4710] 
4711[docs: update relnote.txt for Tahoe-LAFS v1.7.0β
4712zooko@zooko.com**20100609054602
4713 Ignore-this: 52e1bf86a91d45315960fb8806b7a479
4714] 
4715[stringutils.py, sftpd.py: Portability fixes for Python <= 2.5.
4716david-sarah@jacaranda.org**20100609013302
4717 Ignore-this: 9d9ce476ee1b96796e0f48cc5338f852
4718] 
4719[setup: move the mock library from install_requires to tests_require (re: #1016)
4720zooko@zooko.com**20100609050542
4721 Ignore-this: c51a4ff3e19ed630755be752d2233db4
4722] 
4723[Back out Windows-specific Unicode argument support for v1.7.
4724david-sarah@jacaranda.org**20100609000803
4725 Ignore-this: b230ffe6fdaf9a0d85dfe745b37b42fb
4726] 
4727[_auto_deps.py: allow Python 2.4.3 on Redhat-based distributions.
4728david-sarah@jacaranda.org**20100609003646
4729 Ignore-this: ad3cafdff200caf963024873d0ebff3c
4730] 
4731[setup: show-tool-versions.py: print out the output from the unix command "locale" and re-arrange encoding data a little bit
4732zooko@zooko.com**20100609040714
4733 Ignore-this: 69382719b462d13ff940fcd980776004
4734] 
4735[setup: add zope.interface to the packages described by show-tool-versions.py
4736zooko@zooko.com**20100609034915
4737 Ignore-this: b5262b2af5c953a5f68a60bd48dcaa75
4738] 
4739[CREDITS: update François's Description
4740zooko@zooko.com**20100608155513
4741 Ignore-this: a266b438d25ca2cb28eafff75aa4b2a
4742] 
4743[CREDITS: jsgf
4744zooko@zooko.com**20100608143052
4745 Ignore-this: 10abe06d40b88e22a9107d30f1b84810
4746] 
4747[setup: rename the setuptools_trial .egg that comes bundled in the base dir to not have "-py2.6" in its name, since it works with other versions of python as well
4748zooko@zooko.com**20100608041607
4749 Ignore-this: 64fe386d2e5fba0ab441116e74dad5a3
4750] 
4751[setup: rename the darcsver .egg that comes bundled in the base dir to not have "-py2.6" in its name, since it works with other versions of python as well
4752zooko@zooko.com**20100608041534
4753 Ignore-this: 53f925f160256409cf01b76d2583f83f
4754] 
4755[SFTP: suppress NoSuchChildError if heisenfile attributes have been updated in setAttrs, in the case where the parent is available.
4756david-sarah@jacaranda.org**20100608063753
4757 Ignore-this: 8c72a5a9c15934f8fe4594ba3ee50ddd
4758] 
4759[SFTP: ignore permissions when opening a file (needed for sshfs interoperability).
4760david-sarah@jacaranda.org**20100608055700
4761 Ignore-this: f87f6a430f629326a324ddd94426c797
4762] 
4763[test_web.py: fix pyflakes warnings introduced by byterange patch.
4764david-sarah@jacaranda.org**20100608042012
4765 Ignore-this: a7612724893b51d1154dec4372e0508
4766] 
4767[Improve HTTP/1.1 byterange handling
4768Jeremy Fitzhardinge <jeremy@goop.org>**20100310025913
4769 Ignore-this: 6d69e694973d618f0dc65983735cd9be
4770 
4771 Fix parsing of a Range: header to support:
4772  - multiple ranges (parsed, but not returned)
4773  - suffix byte ranges ("-2139")
4774  - correct handling of incorrectly formatted range headers
4775    (correct behaviour is to ignore the header and return the full
4776     file)
4777  - return appropriate error for ranges outside the file
4778 
4779 Multiple ranges are parsed, but only the first range is returned.
4780 Returning multiple ranges requires using the multipart/byterange
4781 content type.
4782 
4783] 
4784[tests: bump up the timeout on these tests; MM's buildslave is sometimes extremely slow on tests, but it will complete them if given enough time. MM is working on making that buildslave more predictable in how long it takes to run tests.
4785zooko@zooko.com**20100608033754
4786 Ignore-this: 98dc27692c5ace1e4b0650b6680629d7
4787] 
4788[test_cli.py: remove invalid 'test_listdir_unicode_bad' test.
4789david-sarah@jacaranda.org**20100607183730
4790 Ignore-this: fadfe87980dc1862f349bfcc21b2145f
4791] 
4792[check_memory.py: adapt to servers-of-happiness changes.
4793david-sarah@jacaranda.org**20100608013528
4794 Ignore-this: c6b28411c543d1aea2f148a955f7998
4795] 
4796[show-tool-versions.py: platform.linux_distribution() is not always available
4797david-sarah@jacaranda.org**20100608004523
4798 Ignore-this: 793fb4050086723af05d06bed8b1b92a
4799] 
4800[show-tool-versions.py: show platform.linux_distribution()
4801david-sarah@jacaranda.org**20100608003829
4802 Ignore-this: 81cb5e5fc6324044f0fc6d82903c8223
4803] 
4804[Remove the 'tahoe debug consolidate' subcommand.
4805david-sarah@jacaranda.org**20100607183757
4806 Ignore-this: 4b14daa3ae557cea07d6e119d25dafe9
4807] 
4808[common_http.py, tahoe_cp.py: Fix an error in calling the superclass constructor in HTTPError and MissingSourceError (introduced by the Unicode fixes).
4809david-sarah@jacaranda.org**20100607174714
4810 Ignore-this: 1a118d593d81c918a4717c887f033aec
4811] 
4812[tests: drastically increase timeout of this very time-consuming test in honor of François's ARM box
4813zooko@zooko.com**20100607115929
4814 Ignore-this: bf1bb52ffb6b5ccae71d4dde14621bc8
4815] 
4816[setup: update authorship, datestamp, licensing, and add special exceptions to allow combination with Eclipse- and QPL- licensed code
4817zooko@zooko.com**20100607062329
4818 Ignore-this: 5a1d7b12dfafd61283ea65a245416381
4819] 
4820[FTP-and-SFTP.txt: minor technical correction to doc for 'no-write' flag.
4821david-sarah@jacaranda.org**20100607061600
4822 Ignore-this: 66aee0c1b6c00538602d08631225e114
4823] 
4824[test_stringutils.py: trivial error in exception message for skipped test.
4825david-sarah@jacaranda.org**20100607061455
4826 Ignore-this: f261a5d4e2b8fe3bcc37e02539ba1ae2
4827] 
4828[More Unicode test fixes.
4829david-sarah@jacaranda.org**20100607053358
4830 Ignore-this: 6a271fb77c31f28cb7bdba63b26a2dd2
4831] 
4832[Unicode fixes for platforms with non-native-Unicode filesystems.
4833david-sarah@jacaranda.org**20100607043238
4834 Ignore-this: 2134dc1793c4f8e50350bd749c4c98c2
4835] 
4836[Unicode fixes.
4837david-sarah@jacaranda.org**20100607010215
4838 Ignore-this: d58727b5cd2ce00e6b6dae3166030138
4839] 
4840[setup: organize misc/ scripts and tools and remove obsolete ones
4841zooko@zooko.com**20100607051618
4842 Ignore-this: 161db1158c6b7be8365b0b3dee2e0b28
4843 This is for ticket #1068.
4844] 
4845[quickstart.html: link to snapshots page, sorted with most recent first.
4846david-sarah@jacaranda.org**20100606221127
4847 Ignore-this: 93ea7e6ee47acc66f6daac9cabffed2d
4848] 
4849[quickstart.html: We haven't released 1.7beta yet.
4850david-sarah@jacaranda.org**20100606220301
4851 Ignore-this: 4e18898cfdb08cc3ddd1ff94d43fdda7
4852] 
4853[setup: loosen the Desert Island test to allow it to check the network for new packages as long as it doesn't actually download any
4854zooko@zooko.com**20100606175717
4855 Ignore-this: e438a8eb3c1b0e68080711ec6ff93ffa
4856 (You can look but don't touch.)
4857] 
4858[Raise Python version requirement to 2.4.4 for non-UCS-2 builds, to avoid a critical Python security bug.
4859david-sarah@jacaranda.org**20100605031713
4860 Ignore-this: 2df2b6d620c5d8191c79eefe655059e2
4861] 
4862[setup: have the buildbots print out locale.getpreferredencoding(), locale.getdefaultlocale(), locale.getlocale(), and os.path.supports_unicode_filenames
4863zooko@zooko.com**20100605162932
4864 Ignore-this: 85e31e0e0e1364e9215420e272d58116
4865 Even though that latter one is completely useless, I'm curious.
4866] 
4867[unicode tests: fix missing import
4868zooko@zooko.com**20100604142630
4869 Ignore-this: db437fe8009971882aaea9de05e2bc3
4870] 
4871[unicode: make test_cli test a non-ascii argument, and make the fallback term encoding be locale.getpreferredencoding()
4872zooko@zooko.com**20100604141251
4873 Ignore-this: b2bfc07942f69141811e59891842bd8c
4874] 
4875[unicode: always decode json manifest as utf-8 then encode for stdout
4876zooko@zooko.com**20100604084840
4877 Ignore-this: ac481692315fae870a0f3562bd7db48e
4878 pyflakes pointed out that the exception handler fallback called an un-imported function, showing that the fallback wasn't being exercised.
4879 I'm not 100% sure that this patch is right and would appreciate François or someone reviewing it.
4880] 
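(For illustration only: the decode-then-encode shape described above,
with an assumed helper name; not the code from this patch.)

    import sys, locale

    def write_unicode_to_stdout(u):
        # The manifest is UTF-8 on the wire; the caller decodes it with
        # u = json_bytes.decode("utf-8"), and this helper re-encodes for
        # the terminal, falling back to the locale's preferred encoding
        # when sys.stdout.encoding is None (e.g. when output is piped).
        encoding = sys.stdout.encoding or locale.getpreferredencoding()
        sys.stdout.write(u.encode(encoding, "replace") + "\n")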
4881[fix flakes
4882zooko@zooko.com**20100604075845
4883 Ignore-this: 3e6a84b78771b0ad519e771a13605f0
4884] 
4885[fix syntax of assertion handling that isn't portable to older versions of Python
4886zooko@zooko.com**20100604075805
4887 Ignore-this: 3a12b293aad25883fb17230266eb04ec
4888] 
4889[test_stringutils.py: Skip test test_listdir_unicode_good if filesystem supports only ASCII filenames
4890Francois Deppierraz <francois@ctrlaltdel.ch>**20100521160839
4891 Ignore-this: f2ccdbd04c8d9f42f1efb0eb80018257
4892] 
4893[test_stringutils.py: Skip test_listdir_unicode on mocked platform which cannot store non-ASCII filenames
4894Francois Deppierraz <francois@ctrlaltdel.ch>**20100521160559
4895 Ignore-this: b93fde736a8904712b506e799250a600
4896] 
4897[test_stringutils.py: Add a test class for OpenBSD 4.1 with LANG=C
4898Francois Deppierraz <francois@ctrlaltdel.ch>**20100521140053
4899 Ignore-this: 63f568aec259cef0e807752fc8150b73
4900] 
4901[test_stringutils.py: Mock the open() call in test_open_unicode
4902Francois Deppierraz <francois@ctrlaltdel.ch>**20100521135817
4903 Ignore-this: d8be4e56a6eefe7d60f97f01ea20ac67
4904 
 4905 This test ensures that open(a_unicode_string) is used on Unicode platforms
 4906 (Windows or Mac OS X) and that open(a_correctly_encoded_bytestring) is used
 4907 on other platforms such as Unix.
4908 
4909] 
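(For illustration only: the shape of such a mocked test. The
open_unicode() helper defined here is hypothetical, not stringutils'
real code.)

    import sys
    import mock

    def open_unicode(path, mode):
        # hypothetical helper under test: pass unicode through on
        # Unicode platforms, encode to the filesystem encoding elsewhere
        if sys.platform in ('win32', 'darwin'):
            return open(path, mode)
        return open(path.encode(sys.getfilesystemencoding()), mode)

    @mock.patch('__builtin__.open')
    def check_open_unicode(mock_open):
        open_unicode(u'a\xe9b', 'r')
        if sys.platform in ('win32', 'darwin'):
            # Unicode platforms: the name is passed through unchanged.
            mock_open.assert_called_with(u'a\xe9b', 'r')
        else:
            # Unix: the name is encoded to the filesystem encoding first.
            encoded = u'a\xe9b'.encode(sys.getfilesystemencoding())
            mock_open.assert_called_with(encoded, 'r')

    check_open_unicode()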
4910[test_stringutils.py: Fix a trivial Python 2.4 syntax incompatibility
4911Francois Deppierraz <francois@ctrlaltdel.ch>**20100521093345
4912 Ignore-this: 9297e3d14a0dd37d0c1a4c6954fd59d3
4913] 
4914[test_cli.py: Fix tests when sys.stdout.encoding=None and refactor this code into functions
4915Francois Deppierraz <francois@ctrlaltdel.ch>**20100520084447
4916 Ignore-this: cf2286e225aaa4d7b1927c78c901477f
4917] 
4918[Fix handling of correctly encoded unicode filenames (#534)
4919Francois Deppierraz <francois@ctrlaltdel.ch>**20100520004356
4920 Ignore-this: 8a3a7df214a855f5a12dc0eeab6f2e39
4921 
4922 Tahoe CLI commands working on local files, for instance 'tahoe cp' or 'tahoe
4923 backup', have been improved to correctly handle filenames containing non-ASCII
4924 characters.
4925   
 4926 When Tahoe encounters a filename which cannot be decoded using the system
 4927 encoding, an error is returned and the operation fails.  Under Linux, this
 4928 typically happens when the filesystem contains filenames in an encoding other
 4929 than the system locale's, for instance latin1 filenames on a UTF-8 system.  In
 4930 such cases, you'll need to fix your system with tools such as 'convmv' before
 4931 using the Tahoe CLI.
 4932   
 4933 All CLI commands have been improved to support non-ASCII parameters such as
 4934 filenames and aliases on all supported operating systems except, for now,
 4935 Windows.
4936] 
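(For illustration only: one plausible shape of the decode-or-fail rule
described above; the exception and helper names are assumptions.)

    import os, sys

    class FilenameEncodingError(Exception):
        pass

    def listdir_unicode_fallback(path):
        # Decode each child name with the filesystem encoding; a name
        # that cannot be decoded (e.g. latin1 bytes on a UTF-8 system)
        # raises an error instead of silently producing mangled data.
        encoding = sys.getfilesystemencoding() or 'utf-8'
        children = []
        for fn in os.listdir(path):
            try:
                children.append(fn.decode(encoding))
            except UnicodeDecodeError:
                raise FilenameEncodingError(fn)
        return children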
4937[stringutils.py: Unicode helper functions + associated tests
4938Francois Deppierraz <francois@ctrlaltdel.ch>**20100520004105
4939 Ignore-this: 7a73fc31de2fd39d437d6abd278bfa9a
4940 
 4941 This file contains a bunch of helper functions which convert unicode
 4942 strings to and from argv, filenames, and stdout.
4943] 
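(For illustration only: the argv direction of such helpers; the function
name and fallback policy are assumptions, not necessarily stringutils'.)

    import sys, locale

    def argv_to_unicode(s):
        # Under Python 2, sys.argv entries are bytes in whatever encoding
        # the terminal chose; decode with the terminal's encoding,
        # falling back to the locale's preferred encoding.
        encoding = sys.stdin.encoding or locale.getpreferredencoding()
        return unicode(s, encoding)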
4944[Add dependency on Michael Foord's mock library
4945Francois Deppierraz <francois@ctrlaltdel.ch>**20100519233325
4946 Ignore-this: 9bb01bf1e4780f6b98ed394c3b772a80
4947] 
4948[Resolve merge conflict for sftpd.py
4949david-sarah@jacaranda.org**20100603182537
4950 Ignore-this: ba8b543e51312ac949798eb8f5bd9d9c
4951] 
4952[SFTP: possible fix for metadata times being shown as the epoch.
4953david-sarah@jacaranda.org**20100602234514
4954 Ignore-this: bdd7dfccf34eff818ff88aa4f3d28790
4955] 
4956[SFTP: further improvements to test coverage.
4957david-sarah@jacaranda.org**20100602234422
4958 Ignore-this: 87eeee567e8d7562659442ea491e187c
4959] 
4960[SFTP: improve test coverage. Also make creating a directory fail when permissions are read-only (rather than ignoring the permissions).
4961david-sarah@jacaranda.org**20100602041934
4962 Ignore-this: a5e9d9081677bc7f3ddb18ca7a1f531f
4963] 
4964[dirnode.py: fix a bug in the no-write change for Adder, and improve test coverage. Add a 'metadata' argument to create_subdirectory, with documentation. Also update some comments in test_dirnode.py made stale by the ctime/mtime change.
4965david-sarah@jacaranda.org**20100602032641
4966 Ignore-this: 48817b54cd63f5422cb88214c053b03b
4967] 
4968[SFTP: fix a bug that caused the temporary files underlying EncryptedTemporaryFiles not to be closed.
4969david-sarah@jacaranda.org**20100601055310
4970 Ignore-this: 44fee4cfe222b2b1690f4c5e75083a52
4971] 
4972[SFTP: changes for #1063 ('no-write' field) including comment:1 (clearing owner write permission diminishes to a read cap). Includes documentation changes, but not tests for the new behaviour.
4973david-sarah@jacaranda.org**20100601051139
4974 Ignore-this: eff7c08bd47fd52bfe2b844dabf02558
4975] 
4976[SFTP: the same bug as in _sync_heisenfiles also occurred in two other places.
4977david-sarah@jacaranda.org**20100530060127
4978 Ignore-this: 8d137658fc6e4596fa42697476c39aa3
4979] 
4980[SFTP: another try at fixing the _sync_heisenfiles bug.
4981david-sarah@jacaranda.org**20100530055254
4982 Ignore-this: c15f76f32a60083a6b7de6ca0e917934
4983] 
4984[SFTP: fix silly bug in _sync_heisenfiles ('f is not ignore' vs 'not (f is ignore)').
4985david-sarah@jacaranda.org**20100530053807
4986 Ignore-this: 71c4bc62613bf8fef835886d8eb61c27
4987] 
4988[SFTP: log when a sync completes.
4989david-sarah@jacaranda.org**20100530051840
4990 Ignore-this: d99765663ceb673c8a693dfcf88c25ea
4991] 
4992[SFTP: fix bug in previous logging patch.
4993david-sarah@jacaranda.org**20100530050000
4994 Ignore-this: 613e4c115f03fe2d04c621b510340817
4995] 
4996[SFTP: more logging to track down OpenOffice hang.
4997david-sarah@jacaranda.org**20100530040809
4998 Ignore-this: 6c11f2d1eac9f62e2d0f04f006476a03
4999] 
5000[SFTP: avoid blocking close on a heisenfile that has been abandoned or never changed. Also, improve the logging to help track down a case where OpenOffice hangs on opening a file with FXF_READ|FXF_WRITE.
5001david-sarah@jacaranda.org**20100530025544
5002 Ignore-this: 9919dddd446fff64de4031ad51490d1c
5003] 
5004[Move suppression of DeprecationWarning about BaseException.message from sftpd.py to main __init__.py. Also, remove the global suppression of the 'integer argument expected, got float' warning, which turned out to be a bug.
5005david-sarah@jacaranda.org**20100529050537
5006 Ignore-this: 87648afa0dec0d2e73614007de102a16
5007] 
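(For illustration only: a targeted filter of this kind; the exact
message pattern installed by the patch is an assumption.)

    import warnings

    # Suppress only the one known-noisy message; a blanket
    # DeprecationWarning filter (like the float one removed above)
    # can mask real bugs.
    warnings.filterwarnings("ignore",
                            message="BaseException.message has been deprecated",
                            category=DeprecationWarning)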
5008[SFTP: cater to clients that assume a file is created as soon as they have made an open request; also, fix some race conditions associated with closing a file at about the same time as renaming or removing it.
5009david-sarah@jacaranda.org**20100529045253
5010 Ignore-this: 2404076b2154ff2659e2b10e0b9e813c
5011] 
5012[SFTP: 'sync' any open files at a direntry before opening any new file at that direntry. This works around the sshfs misbehaviour of returning success to clients immediately on close.
5013david-sarah@jacaranda.org**20100525230257
5014 Ignore-this: 63245d6d864f8f591c86170864d7c57f
5015] 
5016[SFTP: handle removing a file while it is open. Also some simplifications of the logout handling.
5017david-sarah@jacaranda.org**20100525184210
5018 Ignore-this: 660ee80be6ecab783c60452a9da896de
5019] 
5020[SFTP: a posix-rename response should actually return an FXP_STATUS reply, not an FXP_EXTENDED_REPLY as Twisted Conch assumes. Work around this by raising an SFTPError with code FX_OK.
5021david-sarah@jacaranda.org**20100525033323
5022 Ignore-this: fe2914d3ef7f5194bbeaf3f2dda2ad7d
5023] 
5024[SFTP: fix problem with posix-rename code returning a Deferred for the renamed filenode, not for the result of the request (an empty string).
5025david-sarah@jacaranda.org**20100525020209
5026 Ignore-this: 69f7491df2a8f7ea92d999a6d9f0581d
5027] 
5028[SFTP: fix time handling to make sure floats are not passed into twisted.conch, and to print times in the future less ambiguously in directory listings.
5029david-sarah@jacaranda.org**20100524230412
5030 Ignore-this: eb1a3fb72492fa2fb19667b6e4300440
5031] 
5032[SFTP: name of the POSIX rename extension should be 'posix-rename@openssh.com', not 'extposix-rename@openssh.com'.
5033david-sarah@jacaranda.org**20100524021156
5034 Ignore-this: f90eb1ff9560176635386ee797a3fdc7
5035] 
5036[SFTP: avoid race condition where .write could be called on an OverwriteableFileConsumer after it had been closed.
5037david-sarah@jacaranda.org**20100523233830
5038 Ignore-this: 55d381064a15bd64381163341df4d09f
5039] 
5040[SFTP: log tracebacks for RAISEd exceptions.
5041david-sarah@jacaranda.org**20100523221535
5042 Ignore-this: c76a7852df099b358642f0631237cc89
5043] 
5044[SFTP: more logging to investigate behaviour of getAttrs(path).
5045david-sarah@jacaranda.org**20100523204236
5046 Ignore-this: e58fd35dc9015316e16a9f49f19bb469
5047] 
5048[SFTP: fix pyflakes warnings; drop 'noisy' versions of eventually_callback and eventually_errback; robustify conversion of exception messages to UTF-8.
5049david-sarah@jacaranda.org**20100523140905
5050 Ignore-this: 420196fc58646b05bbc9c3732b6eb314
5051] 
5052[SFTP: fixes and test cases for renaming of open files.
5053david-sarah@jacaranda.org**20100523032549
5054 Ignore-this: 32e0726be0fc89335f3035157e202c68
5055] 
5056[SFTP: Increase test_sftp timeout to cater for francois' ARM buildslave.
5057david-sarah@jacaranda.org**20100522191639
5058 Ignore-this: a5acf9660d304677048ab4dd72908ad8
5059] 
5060[SFTP: Fix error in support for getAttrs on an open file, to index open files by directory entry rather than path. Extend that support to renaming open files. Also, implement the extposix-rename@openssh.org extension, and some other minor refactoring.
5061david-sarah@jacaranda.org**20100522035836
5062 Ignore-this: 8ef93a828e927cce2c23b805250b81a4
5063] 
5064[SFTP tests: fix test_openDirectory_and_attrs that was failing in timezones west of UTC.
5065david-sarah@jacaranda.org**20100520181027
5066 Ignore-this: 9beaf602beef437c11c7e97f54ce2599
5067] 
5068[SFTP: allow getAttrs to succeed on a file that has been opened for creation but not yet uploaded or linked (part of #1050).
5069david-sarah@jacaranda.org**20100520035613
5070 Ignore-this: 2f59107d60d5476edac19361ccf6cf94
5071] 
5072[SFTP: improve logging so that results of requests are (usually) logged.
5073david-sarah@jacaranda.org**20100520003652
5074 Ignore-this: 3f59eeee374a3eba71db9be31d5a95
5075] 
5076[SFTP: add tests for more combinations of open flags.
5077david-sarah@jacaranda.org**20100519053933
5078 Ignore-this: b97ee351b1e8ecfecabac70698060665
5079] 
5080[SFTP: allow FXF_WRITE | FXF_TRUNC (#1050).
5081david-sarah@jacaranda.org**20100519043240
5082 Ignore-this: bd70009f11d07ac6e9fd0d1e3fa87a9b
5083] 
5084[SFTP: remove another case where we were logging data.
5085david-sarah@jacaranda.org**20100519012713
5086 Ignore-this: 83115daf3a90278fed0e3fc267607584
5087] 
5088[SFTP: avoid logging all data passed to callbacks.
5089david-sarah@jacaranda.org**20100519000651
5090 Ignore-this: ade6d69a473ada50acef6389fc7fdf69
5091] 
5092[SFTP: fixes related to reporting of permissions (needed for sshfs).
5093david-sarah@jacaranda.org**20100518054521
5094 Ignore-this: c51f8a5d0dc76b80d33ffef9b0541325
5095] 
5096[SFTP: change error code returned for ExistingChildError to FX_FAILURE (fixes gvfs with some picky programs such as gedit).
5097david-sarah@jacaranda.org**20100518004205
5098 Ignore-this: c194c2c9aaf3edba7af84b7413cec375
5099] 
5100[SFTP: fixed bugs that caused hangs during write (#1037).
5101david-sarah@jacaranda.org**20100517044228
5102 Ignore-this: b8b95e82c4057367388a1e6baada993b
5103] 
5104[SFTP: work around a probable bug in twisted.conch.ssh.session:loseConnection(). Also some minor error handling cleanups.
5105david-sarah@jacaranda.org**20100517012606
5106 Ignore-this: 5d3da7c4219cb0c14547e7fd70c74204
5107] 
5108[SFTP: Support statvfs extensions, avoid logging actual data, and decline shell sessions politely.
5109david-sarah@jacaranda.org**20100516154347
5110 Ignore-this: 9d05d23ba77693c03a61accd348ccbe5
5111] 
5112[SFTP: fix error in SFTPUserHandler arguments introduced by execCommand patch.
5113david-sarah@jacaranda.org**20100516014045
5114 Ignore-this: f5ee494dc6ad6aa536cc8144bd2e3d19
5115] 
5116[SFTP: implement execCommand to interoperate with clients that issue a 'df -P -k /' command. Also eliminate use of Zope adaptation.
5117david-sarah@jacaranda.org**20100516012754
5118 Ignore-this: 2d0ed28b759f67f83875b1eaf5778992
5119] 
5120[sftpd.py: 'log.OPERATIONAL' should be just 'OPERATIONAL'.
5121david-sarah@jacaranda.org**20100515155533
5122 Ignore-this: f2347cb3301bbccc086356f6edc685
5123] 
5124[Attempt to fix #1040 by making SFTPUser implement ISession.
5125david-sarah@jacaranda.org**20100515005719
5126 Ignore-this: b3baaf088ba567e861e61e347195dfc4
5127] 
5128[Eliminate Windows newlines from sftpd.py.
5129david-sarah@jacaranda.org**20100515005656
5130 Ignore-this: cd54fd25beb957887514ae76e08c277
5131] 
5132[Update SFTP implementation and tests: fix #1038 and switch to foolscap logging; also some code reorganization.
5133david-sarah@jacaranda.org**20100514043113
5134 Ignore-this: 262f76d953dcd4317210789f2b2bf5da
5135] 
5136[Tests for new SFTP implementation
5137david-sarah@jacaranda.org**20100512060552
5138 Ignore-this: 20308d4a59b3ebc868aad55ae0a7a981
5139] 
5140[New SFTP implementation: mutable files, read/write support, streaming download, Unicode filenames, and more
5141david-sarah@jacaranda.org**20100512055407
5142 Ignore-this: 906f51c48d974ba9cf360c27845c55eb
5143] 
5144[setup: adjust make clean target to ignore our bundled build tools
5145zooko@zooko.com**20100604051250
5146 Ignore-this: d24d2a3b849000790cfbfab69237454e
5147] 
5148[setup: bundle a copy of setuptools_trial as an unzipped egg in the base dir of the Tahoe-LAFS source tree
5149zooko@zooko.com**20100604044648
5150 Ignore-this: a4736e9812b4dab2d5a2bc4bfc5c3b28
5151 This is to work-around this Distribute issue:
5152 http://bitbucket.org/tarek/distribute/issue/55/revision-control-plugin-automatically-installed-as-a-build-dependency-is-not-present-when-another-build-dependency-is-being
5153] 
5154[setup: bundle a copy of darcsver in unzipped egg form in the root of the Tahoe-LAFS source tree
5155zooko@zooko.com**20100604044146
5156 Ignore-this: a51a52e82dd3a39225657ffa27decae2
5157 This is to work-around this Distribute issue:
5158 http://bitbucket.org/tarek/distribute/issue/55/revision-control-plugin-automatically-installed-as-a-build-dependency-is-not-present-when-another-build-dependency-is-being
5159] 
5160[quickstart.html: warn against installing Python at a path containing spaces.
5161david-sarah@jacaranda.org**20100604032413
5162 Ignore-this: c7118332573abd7762d9a897e650bc6a
5163] 
5164[setup: undo the previous patch to quote the executable in scripts
5165zooko@zooko.com**20100604025204
5166 Ignore-this: beda3b951c49d1111478618b8cabe005
5167 The problem isn't in the script, it is in the cli.exe script that is built by setuptools. This might be related to
5168 http://bugs.python.org/issue6792
5169 and
5170 http://bugs.python.org/setuptools/issue2
5171 Or it might be a separate issue involving the launcher.c code e.g. http://tahoe-lafs.org/trac/zetuptoolz/browser/launcher.c?rev=576#L210 and its handling of the interpreter name.
5172] 
5173[setup: put quotes around the path to executable in case it has spaces in it, when building a tahoe.exe for win32
5174zooko@zooko.com**20100604020836
5175 Ignore-this: 478684843169c94a9c14726fedeeed7d
5176] 
5177 Add must_exist, must_be_directory, and must_be_file arguments to DirectoryNode.delete. This will be used to fix a minor condition in the SFTP frontend.
5178david-sarah@jacaranda.org**20100527194529
5179 Ignore-this: 6d8114cef4450c52c57639f82852716f
5180] 
5181[Fix test failures in test_web caused by changes to web page titles in #1062. Also, change a 'target' field to '_blank' instead of 'blank' in welcome.xhtml.
5182david-sarah@jacaranda.org**20100603232105
5183 Ignore-this: 6e2cc63f42b07e2a3b2d1a857abc50a6
5184] 
5185 misc/show-tool-versions.py: Display additional Python interpreter encoding information (stdout, stdin and filesystem)
5186Francois Deppierraz <francois@ctrlaltdel.ch>**20100521094313
5187 Ignore-this: 3ae9b0b07fd1d53fb632ef169f7c5d26
5188] 
5189 dirnode.py: Fix a bug that caused the 'tahoe' fields 'ctime' and 'mtime' not to be updated when new metadata is present.
5190david-sarah@jacaranda.org**20100602014644
5191 Ignore-this: 5bac95aa897b68f2785d481e49b6a66
5192] 
5193[dirnode.py: Fix #1034 (MetadataSetter does not enforce restriction on setting 'tahoe' subkeys), and expose the metadata updater for use by SFTP. Also, support diminishing a child cap to read-only if 'no-write' is set in the metadata.
5194david-sarah@jacaranda.org**20100601045428
5195 Ignore-this: 14f26e17e58db97fad0dcfd350b38e95
5196] 
5197[Change doc comments in interfaces.py to take into account unknown nodes.
5198david-sarah@jacaranda.org**20100528171922
5199 Ignore-this: d2fde6890b3bca9c7275775f64fbff56
5200] 
5201[Trivial whitespace changes.
5202david-sarah@jacaranda.org**20100527194114
5203 Ignore-this: 98d611bc54ee20b01a5f6b334ff61b2d
5204] 
5205[Suppress 'integer argument expected, got float' DeprecationWarning everywhere
5206david-sarah@jacaranda.org**20100523221157
5207 Ignore-this: 80efd7e27798f5d2ad66c7a53e7048e5
5208] 
5209[Change shouldFail to avoid Unicode errors when converting Failure to str
5210david-sarah@jacaranda.org**20100512060754
5211 Ignore-this: 86ed419d332d9c33090aae2cde1dc5df
5212] 
5213[SFTP: relax pyasn1 version dependency to >= 0.0.8a.
5214david-sarah@jacaranda.org**20100520181437
5215 Ignore-this: 2c7b3dee7b7e14ba121d3118193a386a
5216] 
5217[SFTP: add pyasn1 as dependency, needed if we are using Twisted >= 9.0.0.
5218david-sarah@jacaranda.org**20100516193710
5219 Ignore-this: 76fd92e8a950bb1983a90a09e89c54d3
5220] 
5221[allmydata.org -> tahoe-lafs.org in __init__.py
5222david-sarah@jacaranda.org**20100603063530
5223 Ignore-this: f7d82331d5b4a3c4c0938023409335af
5224] 
5225[small change to CREDITS
5226david-sarah@jacaranda.org**20100603062421
5227 Ignore-this: 2909cdbedc19da5573dec810fc23243
5228] 
5229[Resolve conflict in patch to change imports to absolute.
5230david-sarah@jacaranda.org**20100603054608
5231 Ignore-this: 15aa1caa88e688ffa6dc53bed7dcca7d
5232] 
5233[Minor documentation tweaks.
5234david-sarah@jacaranda.org**20100603054458
5235 Ignore-this: e30ae407b0039dfa5b341d8f88e7f959
5236] 
5237[title_rename_xhtml.dpatch.txt
5238freestorm77@gmail.com**20100529172542
5239 Ignore-this: d2846afcc9ea72ac443a62ecc23d121b
5240 
5241 - Renamed xhtml Title from "Allmydata - Tahoe" to "Tahoe-LAFS"
5242 - Renamed Tahoe to Tahoe-LAFS in page content
5243 - Changed Tahoe-LAFS home page link to http://tahoe-lafs.org (added target="blank")
5244 - Deleted commented css script in info.xhtml
5245 
5246 
5247] 
5248[tests: refactor test_web.py to have less duplication of literal caps-from-the-future
5249zooko@zooko.com**20100519055146
5250 Ignore-this: 49e5412e6cc4566ca67f069ffd850af6
5251 This is a prelude to a patch which will add tests of caps from the future which have non-ascii chars in them.
5252] 
5253[doc_reformat_stats.txt
5254freestorm77@gmail.com**20100424114615
5255 Ignore-this: af315db5f7e3a17219ff8fb39bcfcd60
5256 
5257 
5262› 5258    - Added heading format beginning and ending with "=="
5263› 5259    - Added Index
5264› 5260    - Added Title
5265› 5261           
5266› 5262    Note: No changes are made to paragraph content
5263 
5264 
5270 
5271 This patch contains the following changes:
5272 
5273 M ./docs/stats.txt -2 +2
5274] 
5275[doc_reformat_performance.txt
5276freestorm77@gmail.com**20100424114444
5277 Ignore-this: 55295ff5cd8a5b67034eb661a5b0699d
5278 
5279    - Added heading format beginning and ending with "=="
5280    - Added Index
5281    - Added Title
5282         
5283    Note: No changes are made to paragraph content
5284 
5285 
5286] 
[doc_reformat_logging.txt
5288freestorm77@gmail.com**20100424114316
5289 Ignore-this: 593f0f9914516bf1924dfa6eee74e35f
5290 
5291    - Added heading format beginning and ending with "=="
5292    - Added Index
5293    - Added Title
5294         
5295    Note: No changes are made to paragraph content
5296 
5297] 
5298[doc_reformat_known_issues.txt
5299freestorm77@gmail.com**20100424114118
5300 Ignore-this: 9577c3965d77b7ac18698988cfa06049
5301 
5302     - Added heading format beginning and ending with "=="
5303     - Added Index
5304     - Added Title
5305           
5306     Note: No changes are made to paragraph content
5307   
5308 
5309] 
5310[doc_reformat_helper.txt
5311freestorm77@gmail.com**20100424120649
5312 Ignore-this: de2080d6152ae813b20514b9908e37fb
5313 
5314 
5315    - Added heading format beginning and ending with "=="
5316    - Added Index
5317    - Added Title
5318             
5319    Note: No changes are made to paragraph content
5320 
5321] 
5322[doc_reformat_garbage-collection.txt
5323freestorm77@gmail.com**20100424120830
5324 Ignore-this: aad3e4c99670871b66467062483c977d
5325 
5326 
5327    - Added heading format beginning and ending with "=="
5328    - Added Index
5329    - Added Title
5330             
5331    Note: No changes are made to paragraph content
5332 
5333] 
5334[doc_reformat_FTP-and-SFTP.txt
5335freestorm77@gmail.com**20100424121334
5336 Ignore-this: 3736b3d8f9a542a3521fbb566d44c7cf
5337 
5338 
5339    - Added heading format beginning and ending with "=="
5340    - Added Index
5341    - Added Title
5342           
5343    Note: No changes are made to paragraph content
5344 
5345] 
5346[doc_reformat_debian.txt
5347freestorm77@gmail.com**20100424120537
5348 Ignore-this: 45fe4355bb869e55e683405070f47eff
5349 
5350 
5351    - Added heading format beginning and ending with "=="
5352    - Added Index
5353    - Added Title
5354             
5355    Note: No changes are made to paragraph content
5356 
5357] 
5358[doc_reformat_configuration.txt
5359freestorm77@gmail.com**20100424104903
5360 Ignore-this: 4fbabc51b8122fec69ce5ad1672e79f2
5361 
5362 
5363 - Added heading format beginning and ending with "=="
5364 - Added Index
5365 - Added Title
5366 
5367 Note: No changes are made to paragraph content
5368 
5369] 
5370[doc_reformat_CLI.txt
5371freestorm77@gmail.com**20100424121512
5372 Ignore-this: 2d3a59326810adcb20ea232cea405645
5373 
5374      - Added heading format beginning and ending with "=="
5375      - Added Index
5376      - Added Title
5377           
5378      Note: No changes are made to paragraph content
5379 
5380] 
5381[doc_reformat_backupdb.txt
5382freestorm77@gmail.com**20100424120416
5383 Ignore-this: fed696530e9d2215b6f5058acbedc3ab
5384 
5385 
5386    - Added heading format beginning and ending with "=="
5387    - Added Index
5388    - Added Title
5389             
5390    Note: No changes are made to paragraph content
5391 
5392] 
5393[doc_reformat_architecture.txt
5394freestorm77@gmail.com**20100424120133
5395 Ignore-this: 6e2cab4635080369f2b8cadf7b2f58e
5396 
5397 
5398     - Added heading format beginning and ending with "=="
5399     - Added Index
5400     - Added Title
5401             
5402     Note: No changes are made to paragraph content
5403 
5404 
5405] 
5406[Correct harmless indentation errors found by pylint
5407david-sarah@jacaranda.org**20100226052151
5408 Ignore-this: 41335bce830700b18b80b6e00b45aef5
5409] 
5410[Change relative imports to absolute
5411david-sarah@jacaranda.org**20100226071433
5412 Ignore-this: 32e6ce1a86e2ffaaba1a37d9a1a5de0e
5413] 
5414[Document reason for the trialcoverage version requirement being 0.3.3.
5415david-sarah@jacaranda.org**20100525004444
5416 Ignore-this: 2f9f1df6882838b000c063068f258aec
5417] 
5418[Downgrade version requirement for trialcoverage to 0.3.3 (from 0.3.10), to avoid needing to compile coveragepy on Windows.
5419david-sarah@jacaranda.org**20100524233707
5420 Ignore-this: 9c397a374c8b8017e2244b8a686432a8
5421] 
5422[Suppress deprecation warning for twisted.web.error.NoResource when using Twisted >= 9.0.0.
5423david-sarah@jacaranda.org**20100516205625
5424 Ignore-this: 2361a3023cd3db86bde5e1af759ed01
5425] 
5426[docs: CREDITS for Jeremy Visser
5427zooko@zooko.com**20100524081829
5428 Ignore-this: d7c1465fd8d4e25b8d46d38a1793465b
5429] 
5430[test: show stdout and stderr in case of non-zero exit code from "tahoe" command
5431zooko@zooko.com**20100524073348
5432 Ignore-this: 695e81cd6683f4520229d108846cd551
5433] 
5434[setup: upgrade bundled zetuptoolz to zetuptoolz-0.6c15dev and make it unpacked and directly loaded by setup.py
5435zooko@zooko.com**20100523205228
5436 Ignore-this: 24fb32aaee3904115a93d1762f132c7
5437 Also fix the relevant "make clean" target behavior.
5438] 
5439[setup: remove bundled zipfile egg of setuptools
5440zooko@zooko.com**20100523205120
5441 Ignore-this: c68b5f2635bb93d1c1fa7b613a026f9e
5442 We're about to replace it with bundled unpacked source code of setuptools, which is much nicer for debugging and evolving under revision control.
5443] 
5444[setup: remove bundled copy of setuptools_trial-0.5.2.tar
5445zooko@zooko.com**20100522221539
5446 Ignore-this: 140f90eb8fb751a509029c4b24afe647
5447 Hopefully it will get installed automatically as needed and we won't bundle it anymore.
5448] 
5449[setup: remove bundled setuptools_darcs-1.2.8.tar
5450zooko@zooko.com**20100522015333
5451 Ignore-this: 378b1964b513ae7fe22bae2d3478285d
5452 This version of setuptools_darcs had a bug when used on Windows which has been fixed in setuptools_darcs-1.2.9. Hopefully we will not need to bundle a copy of setuptools_darcs-1.2.9 in with Tahoe-LAFS and can instead rely on it to be downloaded from PyPI or bundled in the "tahoe deps" separate tarball.
5453] 
5454[tests: fix pyflakes warnings in bench_dirnode.py
5455zooko@zooko.com**20100521202511
5456 Ignore-this: f23d55b4ed05e52865032c65a15753c4
5457] 
5458[setup: if the string '--reporter=bwverbose-coverage' appears on sys.argv then you need trialcoverage
5459zooko@zooko.com**20100521122226
5460 Ignore-this: e760c45dcfb5a43c1dc1e8a27346bdc2
5461] 
5462[tests: don't let bench_dirnode.py do stuff and have side-effects at import time (unless __name__ == '__main__')
5463zooko@zooko.com**20100521122052
5464 Ignore-this: 96144a412250d9bbb5fccbf83b8753b8
5465] 
5466[tests: increase timeout to give François's ARM buildslave a chance to complete the tests
5467zooko@zooko.com**20100520134526
5468 Ignore-this: 3dd399fdc8b91149c82b52f955b50833
5469] 
5470[run_trial.darcspath
5471freestorm77@gmail.com**20100510232829
5472 Ignore-this: 5ebb4df74e9ea8a4bdb22b65373d1ff2
5473] 
5474[docs: line-wrap README.txt
5475zooko@zooko.com**20100518174240
5476 Ignore-this: 670a02d360df7de51ebdcf4fae752577
5477] 
5478[Hush pyflakes warnings
5479Kevan Carstensen <kevan@isnotajoke.com>**20100515184344
5480 Ignore-this: fd602c3bba115057770715c36a87b400
5481] 
5482[setup: new improved misc/show-tool-versions.py
5483zooko@zooko.com**20100516050122
5484 Ignore-this: ce9b1de1b35b07d733e6cf823b66335a
5485] 
5486[Improve code coverage of the Tahoe2PeerSelector tests.
5487Kevan Carstensen <kevan@isnotajoke.com>**20100515032913
5488 Ignore-this: 793151b63ffa65fdae6915db22d9924a
5489] 
5490[Remove a comment that no longer makes sense.
5491Kevan Carstensen <kevan@isnotajoke.com>**20100514203516
5492 Ignore-this: 956983c7e7c7e4477215494dfce8f058
5493] 
5494[docs: update docs/architecture.txt to more fully and correctly explain the upload procedure
5495zooko@zooko.com**20100514043458
5496 Ignore-this: 538b6ea256a49fed837500342092efa3
5497] 
5498[Fix up the behavior of #778, per reviewers' comments
5499Kevan Carstensen <kevan@isnotajoke.com>**20100514004917
5500 Ignore-this: 9c20b60716125278b5456e8feb396bff
5501 
5502   - Make some important utility functions clearer and more thoroughly
5503     documented.
5504   - Assert in upload.servers_of_happiness that the buckets attributes
5505     of PeerTrackers passed to it are mutually disjoint.
5506   - Get rid of some silly non-Pythonisms that I didn't see when I first
5507     wrote these patches.
5508   - Make sure that should_add_server returns true when queried about a
5509     shnum that it doesn't know about yet.
5510   - Change Tahoe2PeerSelector.preexisting_shares to map a shareid to a set
5511     of peerids, alter dependencies to deal with that.
5512   - Remove upload.should_add_servers, because it is no longer necessary
5513   - Move upload.shares_of_happiness and upload.shares_by_server to a utility
5514     file.
5515   - Change some points in Tahoe2PeerSelector.
5516   - Compute servers_of_happiness using a bipartite matching algorithm that
5517     we know is optimal instead of an ad-hoc greedy algorithm that isn't (see the sketch after this entry).
5518   - Change servers_of_happiness to just take a sharemap as an argument,
5519     change its callers to merge existing_shares and used_peers before
5520     calling it.
5521   - Change an error message in the encoder to be more appropriate for
5522     servers of happiness.
5523   - Clarify the wording of an error message in immutable/upload.py
5524   - Refactor a happiness failure message to happinessutil.py, and make
5525     immutable/upload.py and immutable/encode.py use it.
5526   - Move the word "only" as far to the right as possible in failure
5527     messages.
5528   - Use a better definition of progress during peer selection.
5529   - Do read-only peer share detection queries in parallel, not sequentially.
5530   - Clean up logging semantics; print the query statistics whenever an
5531     upload is unsuccessful, not just in one case.
5532 
5533] 
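(For illustration only: the matching idea referred to above, as a plain
augmenting-path maximum bipartite matching; this is a sketch of the
technique, not Tahoe's actual happinessutil code.)

    def servers_of_happiness_sketch(sharemap):
        # sharemap: shnum -> set of peerids holding (or about to hold)
        # that share. Happiness is the size of a maximum matching in the
        # bipartite share/server graph: the largest set of (share,
        # server) pairs in which each share and each server appears at
        # most once.
        matched_share_for = {}  # peerid -> shnum currently matched to it

        def try_assign(shnum, visited):
            for peerid in sharemap[shnum]:
                if peerid in visited:
                    continue
                visited.add(peerid)
                if (peerid not in matched_share_for or
                    try_assign(matched_share_for[peerid], visited)):
                    matched_share_for[peerid] = shnum
                    return True
            return False

        happiness = 0
        for shnum in sharemap:
            if try_assign(shnum, set()):
                happiness += 1
        return happiness

    # e.g. two shares held only by server A can contribute at most 1:
    #   servers_of_happiness_sketch({0: set(['A']), 1: set(['A'])}) == 1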
5534[Alter the error message when an upload fails, per some comments in #778.
5535Kevan Carstensen <kevan@isnotajoke.com>**20091230210344
5536 Ignore-this: ba97422b2f9737c46abeb828727beb1
5537 
5538 When I first implemented #778, I just altered the error messages to refer to
5539 servers where they referred to shares. The resulting error messages weren't
5540 very good. These are a bit better.
5541] 
5542[Change "UploadHappinessError" to "UploadUnhappinessError"
5543Kevan Carstensen <kevan@isnotajoke.com>**20091205043037
5544 Ignore-this: 236b64ab19836854af4993bb5c1b221a
5545] 
5546[Alter the error message returned when peer selection fails
5547Kevan Carstensen <kevan@isnotajoke.com>**20091123002405
5548 Ignore-this: b2a7dc163edcab8d9613bfd6907e5166
5549 
5550 The Tahoe2PeerSelector returned either NoSharesError or NotEnoughSharesError
5551 for a variety of error conditions that those names didn't informatively describe.
5552 This patch creates a new error, UploadHappinessError, replaces uses of
5553 NoSharesError and NotEnoughSharesError with it, and alters the error message
5554 raised with the errors to be more in line with the new servers_of_happiness
5555 behavior. See ticket #834 for more information.
5556] 
5557 Eliminate overcounting of servers_of_happiness in Tahoe2PeerSelector; also reorganize some things.
5558Kevan Carstensen <kevan@isnotajoke.com>**20091118014542
5559 Ignore-this: a6cb032cbff74f4f9d4238faebd99868
5560] 
5561[Change stray "shares_of_happiness" to "servers_of_happiness"
5562Kevan Carstensen <kevan@isnotajoke.com>**20091116212459
5563 Ignore-this: 1c971ba8c3c4d2e7ba9f020577b28b73
5564] 
5565[Alter Tahoe2PeerSelector to make sure that it recognizes existing shares on readonly servers, fixing an issue in #778
5566Kevan Carstensen <kevan@isnotajoke.com>**20091116192805
5567 Ignore-this: 15289f4d709e03851ed0587b286fd955
5568] 
5569[Alter 'immutable/encode.py' and 'immutable/upload.py' to use servers_of_happiness instead of shares_of_happiness.
5570Kevan Carstensen <kevan@isnotajoke.com>**20091104111222
5571 Ignore-this: abb3283314820a8bbf9b5d0cbfbb57c8
5572] 
5573[Alter the signature of set_shareholders in IEncoder to add a 'servermap' parameter, which gives IEncoders enough information to perform a sane check for servers_of_happiness.
5574Kevan Carstensen <kevan@isnotajoke.com>**20091104033241
5575 Ignore-this: b3a6649a8ac66431beca1026a31fed94
5576] 
5577[Alter CiphertextDownloader to work with servers_of_happiness
5578Kevan Carstensen <kevan@isnotajoke.com>**20090924041932
5579 Ignore-this: e81edccf0308c2d3bedbc4cf217da197
5580] 
5581[Revisions of the #778 tests, per reviewers' comments
5582Kevan Carstensen <kevan@isnotajoke.com>**20100514012542
5583 Ignore-this: 735bbc7f663dce633caeb3b66a53cf6e
5584 
5585 - Fix comments and confusing naming.
5586 - Add tests for the new error messages suggested by David-Sarah
5587   and Zooko.
5588 - Alter existing tests for new error messages.
5589 - Make sure that the tests continue to work with the trunk.
5590 - Add a test for a mutual disjointedness assertion that I added to
5591   upload.servers_of_happiness.
5592 - Fix the comments to correctly reflect read-onlyness
5593 - Add a test for an edge case in should_add_server
5594 - Add an assertion to make sure that share redistribution works as it
5595   should
5596 - Alter tests to work with revised servers_of_happiness semantics
5597 - Remove tests for should_add_server, since that function no longer exists.
5598 - Alter tests to know about merge_peers, and to use it before calling
5599   servers_of_happiness.
5600 - Add tests for merge_peers.
5601 - Add Zooko's puzzles to the tests.
5602 - Edit encoding tests to expect the new kind of failure message.
5603 - Edit tests to expect error messages with the word "only" moved as far
5604   to the right as possible.
5605 - Extended and cleaned up some helper functions.
5606 - Changed some tests to call more appropriate helper functions.
5607 - Added a test for the failing redistribution algorithm
5608 - Added a test for the progress message
5609 - Added a test for the upper bound on readonly peer share discovery.
5610 
5611] 
5612[Alter various unit tests to work with the new happy behavior
5613Kevan Carstensen <kevan@isnotajoke.com>**20100107181325
5614 Ignore-this: 132032bbf865e63a079f869b663be34a
5615] 
5616[Replace "UploadHappinessError" with "UploadUnhappinessError" in tests.
5617Kevan Carstensen <kevan@isnotajoke.com>**20091205043453
5618 Ignore-this: 83f4bc50c697d21b5f4e2a4cd91862ca
5619] 
5620[Add tests for the behavior described in #834.
5621Kevan Carstensen <kevan@isnotajoke.com>**20091123012008
5622 Ignore-this: d8e0aa0f3f7965ce9b5cea843c6d6f9f
5623] 
5624[Re-work 'test_upload.py' to be more readable; add more tests for #778
5625Kevan Carstensen <kevan@isnotajoke.com>**20091116192334
5626 Ignore-this: 7e8565f92fe51dece5ae28daf442d659
5627] 
5628 Test Tahoe2PeerSelector to make sure that it recognizes existing shares on readonly servers
5629Kevan Carstensen <kevan@isnotajoke.com>**20091109003735
5630 Ignore-this: 12f9b4cff5752fca7ed32a6ebcff6446
5631] 
5632[Add more tests for comment:53 in ticket #778
5633Kevan Carstensen <kevan@isnotajoke.com>**20091104112849
5634 Ignore-this: 3bb2edd299a944cc9586e14d5d83ec8c
5635] 
5636[Add a test for upload.shares_by_server
5637Kevan Carstensen <kevan@isnotajoke.com>**20091104111324
5638 Ignore-this: f9802e82d6982a93e00f92e0b276f018
5639] 
5640[Minor tweak to an existing test -- make the first server read-write, instead of read-only
5641Kevan Carstensen <kevan@isnotajoke.com>**20091104034232
5642 Ignore-this: a951a46c93f7f58dd44d93d8623b2aee
5643] 
5644[Alter tests to use the new form of set_shareholders
5645Kevan Carstensen <kevan@isnotajoke.com>**20091104033602
5646 Ignore-this: 3deac11fc831618d11441317463ef830
5647] 
5648[Refactor some behavior into a mixin, and add tests for the behavior described in #778
5649"Kevan Carstensen" <kevan@isnotajoke.com>**20091030091908
5650 Ignore-this: a6f9797057ca135579b249af3b2b66ac
5651] 
5652[Alter NoNetworkGrid to allow the creation of readonly servers for testing purposes.
5653Kevan Carstensen <kevan@isnotajoke.com>**20091018013013
5654 Ignore-this: e12cd7c4ddeb65305c5a7e08df57c754
5655] 
5656[Update 'docs/architecture.txt' to reflect readonly share discovery
5657kevan@isnotajoke.com**20100514003852
5658 Ignore-this: 7ead71b34df3b1ecfdcfd3cb2882e4f9
5659] 
5660[Alter the wording in docs/architecture.txt to more accurately describe the servers_of_happiness behavior.
5661Kevan Carstensen <kevan@isnotajoke.com>**20100428002455
5662 Ignore-this: 6eff7fa756858a1c6f73728d989544cc
5663] 
5664[Alter wording in 'interfaces.py' to be correct wrt #778
5665"Kevan Carstensen" <kevan@isnotajoke.com>**20091205034005
5666 Ignore-this: c9913c700ac14e7a63569458b06980e0
5667] 
5668[Update 'docs/configuration.txt' to reflect the servers_of_happiness behavior.
5669Kevan Carstensen <kevan@isnotajoke.com>**20091205033813
5670 Ignore-this: 5e1cb171f8239bfb5b565d73c75ac2b8
5671] 
5672[Clarify quickstart instructions for installing pywin32
5673david-sarah@jacaranda.org**20100511180300
5674 Ignore-this: d4668359673600d2acbc7cd8dd44b93c
5675] 
5676[web: add a simple test that you can load directory.xhtml
5677zooko@zooko.com**20100510063729
5678 Ignore-this: e49b25fa3c67b3c7a56c8b1ae01bb463
5679] 
5680[setup: fix typos in misc/show-tool-versions.py
5681zooko@zooko.com**20100510063615
5682 Ignore-this: 2181b1303a0e288e7a9ebd4c4855628
5683] 
5684[setup: show code-coverage tool versions in show-tools-versions.py
5685zooko@zooko.com**20100510062955
5686 Ignore-this: 4b4c68eb3780b762c8dbbd22b39df7cf
5687] 
5688[docs: update README, mv it to README.txt, update setup.py
5689zooko@zooko.com**20100504094340
5690 Ignore-this: 40e28ca36c299ea1fd12d3b91e5b421c
5691] 
5692[Dependency on Windmill test framework is not needed yet.
5693david-sarah@jacaranda.org**20100504161043
5694 Ignore-this: be088712bec650d4ef24766c0026ebc8
5695] 
5696[tests: pass z to tar so that BSD tar will know to ungzip
5697zooko@zooko.com**20100504090628
5698 Ignore-this: 1339e493f255e8fc0b01b70478f23a09
5699] 
5700[setup: update comments and URLs in setup.cfg
5701zooko@zooko.com**20100504061653
5702 Ignore-this: f97692807c74bcab56d33100c899f829
5703] 
5704[setup: reorder and extend the show-tool-versions script, the better to glean information about our new buildslaves
5705zooko@zooko.com**20100504045643
5706 Ignore-this: 836084b56b8d4ee8f1de1f4efb706d36
5707] 
5708[CLI: Support for https url in option --node-url
5709Francois Deppierraz <francois@ctrlaltdel.ch>**20100430185609
5710 Ignore-this: 1717176b4d27c877e6bc67a944d9bf34
5711 
5712 This patch modifies the regular expression used to validate the '--node-url'
5713 parameter.  Support for accessing a Tahoe gateway over HTTPS was already
5714 present, thanks to Python's urllib.
5715 
5716] 
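(For illustration only: one plausible shape for the loosened check; this
is not necessarily the patch's exact expression.)

    import re

    # The scheme may now be either http or https.
    NODEURL_RE = re.compile(r"http(s?)://[^:/]+(:[0-9]+)?")

    assert NODEURL_RE.match("http://127.0.0.1:3456")
    assert NODEURL_RE.match("https://example.org:3456")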
5717[backupdb.did_create_directory: use REPLACE INTO, not INSERT INTO + ignore error
5718Brian Warner <warner@lothar.com>**20100428050803
5719 Ignore-this: 1fca7b8f364a21ae413be8767161e32f
5720 
5721 This handles the case where we upload a new tahoe directory for a
5722 previously-processed local directory, possibly creating a new dircap (if the
5723 metadata had changed). Now we replace the old dirhash->dircap record. The
5724 previous behavior left the old record in place (with the old dircap and
5725 timestamps), so we'd never stop creating new directories and never converge
5726 on a null backup.
5727] 
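(For illustration only: the REPLACE INTO idea in sqlite terms; the table
and column names here are assumptions, not backupdb's actual schema.)

    import sqlite3

    db = sqlite3.connect("backupdb.sqlite")
    c = db.cursor()
    c.execute("CREATE TABLE IF NOT EXISTS directories"
              " (dirhash TEXT PRIMARY KEY, dircap TEXT)")
    # REPLACE INTO deletes any existing row with the same primary key
    # before inserting, so a re-uploaded directory's new dircap
    # overwrites the stale record instead of leaving it in place.
    c.execute("REPLACE INTO directories (dirhash, dircap) VALUES (?, ?)",
              ("dirhash-123", "URI:DIR2:example"))
    db.commit()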
5728["tahoe webopen": add --info flag, to get ?t=info
5729Brian Warner <warner@lothar.com>**20100424233003
5730 Ignore-this: 126b0bb6db340fabacb623d295eb45fa
5731 
5732 Also fix some trailing whitespace.
5733] 
5734[docs: install.html http-equiv refresh to quickstart.html
5735zooko@zooko.com**20100421165708
5736 Ignore-this: 52b4b619f9dde5886ae2cd7f1f3b734b
5737] 
5738[docs: install.html -> quickstart.html
5739zooko@zooko.com**20100421155757
5740 Ignore-this: 6084e203909306bed93efb09d0e6181d
5741 It is not called "installing" because that implies that it is going to change the configuration of your operating system. It is not called "building" because that implies that you need developer tools like a compiler. Also I added a stern warning against looking at the "InstallDetails" wiki page, which I have renamed to "AdvancedInstall".
5742] 
5743[Fix another typo in tahoe_storagespace munin plugin
5744david-sarah@jacaranda.org**20100416220935
5745 Ignore-this: ad1f7aa66b554174f91dfb2b7a3ea5f3
5746] 
5747[Add dependency on windmill >= 1.3
5748david-sarah@jacaranda.org**20100416190404
5749 Ignore-this: 4437a7a464e92d6c9012926b18676211
5750] 
5751[licensing: phrase the OpenSSL-exemption in the vocabulary of copyright instead of computer technology, and replicate the exemption from the GPL to the TGPPL
5752zooko@zooko.com**20100414232521
5753 Ignore-this: a5494b2f582a295544c6cad3f245e91
5754] 
5755[munin-tahoe_storagespace
5756freestorm77@gmail.com**20100221203626
5757 Ignore-this: 14d6d6a587afe1f8883152bf2e46b4aa
5758 
5759 Plugin configuration rename
5760 
5761] 
5762[setup: add licensing declaration for setuptools (noticed by the FSF compliance folks)
5763zooko@zooko.com**20100309184415
5764 Ignore-this: 2dfa7d812d65fec7c72ddbf0de609ccb
5765] 
5766[setup: fix error in licensing declaration from Shawn Willden, as noted by the FSF compliance division
5767zooko@zooko.com**20100309163736
5768 Ignore-this: c0623d27e469799d86cabf67921a13f8
5769] 
5770[CREDITS to Jacob Appelbaum
5771zooko@zooko.com**20100304015616
5772 Ignore-this: 70db493abbc23968fcc8db93f386ea54
5773] 
5774[desert-island-build-with-proper-versions
5775jacob@appelbaum.net**20100304013858] 
5776[docs: a few small edits to try to guide newcomers through the docs
5777zooko@zooko.com**20100303231902
5778 Ignore-this: a6aab44f5bf5ad97ea73e6976bc4042d
5779 These edits were suggested by my watching over Jake Appelbaum's shoulder as he completely ignored/skipped/missed install.html and also as he decided that debian.txt wouldn't help him with basic installation. Then I threw in a few docs edits that have been sitting around in my sandbox asking to be committed for months.
5780] 
5781[TAG allmydata-tahoe-1.6.1
5782david-sarah@jacaranda.org**20100228062314
5783 Ignore-this: eb5f03ada8ea953ee7780e7fe068539
5784] 
5785Patch bundle hash:
57869683f777a7a589992fbf3acbc79d8fa375dca986