Ticket #393: fix-test-failures-393.darcs.patch

File fix-test-failures-393.darcs.patch, 787.7 KB (added by davidsarah at 2011-08-02T04:31:40Z)

Fix for test failures in test_immutable.py caused by 393status47.dpatch

22 patches for repository http://tahoe-lafs.org/source/tahoe-lafs/trunk:

Tue Aug  2 02:35:24 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/retrieve: rework the mutable downloader to handle multiple-segment files
 
  The downloader needs substantial reworking to handle multiple segment
  mutable files, which it needs to handle for MDMF.

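The multi-segment bookkeeping this patch introduces (visible in the `_setup_encoding_parameters` and `download` hunks further down) reduces to a little integer arithmetic. A standalone sketch, assuming nothing beyond what the hunks show; `segment_bounds` is an illustrative helper, the real code keeps these values in instance attributes and uses `allmydata.util.mathutil`:

```python
# Sketch of the segment arithmetic the reworked downloader uses.
# segment_bounds() is illustrative, not an actual Retrieve method.

def div_ceil(n, d):
    """Integer ceiling division, like allmydata.util.mathutil.div_ceil."""
    return (n + d - 1) // d

def segment_bounds(datalength, segsize, offset=0, read_length=None):
    """Return (num_segments, tail_data_size, start_seg, last_seg)."""
    if datalength and segsize:
        num_segments = div_ceil(datalength, segsize)
        # an exactly-full tail is segsize bytes, not zero
        tail_data_size = datalength % segsize or segsize
    else:
        num_segments, tail_data_size = 0, 0
    # first segment to download: the one holding byte `offset`
    start = div_ceil(offset, segsize) - 1 if offset else 0
    # last segment: the one holding the final requested byte
    if read_length:
        last = div_ceil(offset + read_length, segsize) - 1
    else:
        last = num_segments - 1
    return num_segments, tail_data_size, start, last
```

For example, in a 25-byte file with 10-byte segments, a 5-byte read at offset 12 touches only segment 1, which is what lets partial reads skip most of a large file.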
Tue Aug  2 02:39:31 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/publish: teach the publisher how to publish MDMF mutable files
 
  Like the downloader, the publisher needs some substantial changes to handle multiple segment mutable files.

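One detail publisher and downloader have to agree on is the tail segment: it is padded up to a multiple of k so FEC can split it evenly, a calculation visible in the retrieve.py hunks below via `mathutil.next_multiple`. A minimal sketch (function names mirror `allmydata.util.mathutil`; `tail_segment_size` is illustrative):

```python
# Sketch of the tail-segment padding: the tail's data is padded up to a
# multiple of k (the number of required shares) before FEC encoding.

def next_multiple(n, k):
    """Smallest multiple of k that is >= n, like mathutil.next_multiple."""
    return ((n + k - 1) // k) * k

def tail_segment_size(datalength, segsize, k):
    """Size of the padded tail segment (illustrative helper)."""
    tail_data = datalength % segsize or segsize
    return next_multiple(tail_data, k)
```

When the padded tail size equals the regular segment size, the hunks below reuse the regular segment decoder instead of building a second one.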
Tue Aug  2 02:40:18 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/servermap: Rework the servermap to work with MDMF mutable files

Tue Aug  2 02:41:19 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * interfaces: change interfaces to work with MDMF
 
  A lot of this work concerns #993, in that it unifies (to an extent) the
  interfaces of mutable and immutable files.

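Part of the unified interface is that `download()` now takes an IConsumer and registers the Retrieve object as a streaming IPushProducer: while the consumer has the producer paused, writes are parked behind a Deferred (see `_check_for_paused` in the hunks below). A Twisted-free sketch of that gate, with invented names, to show the control flow only:

```python
# Plain-Python stand-in for the pause/resume gate the downloader grows
# (the real code parks each write behind a twisted Deferred instead).

class PauseGate:
    def __init__(self):
        self._paused = False
        self._waiting = []   # (callback, data) pairs parked until resume

    def pause(self):
        self._paused = True

    def resume(self):
        self._paused = False
        waiting, self._waiting = self._waiting, []
        for cb, data in waiting:
            cb(data)

    def write_through(self, consumer_write, data):
        # analogous to _check_for_paused: deliver now, or after resume
        if self._paused:
            self._waiting.append((consumer_write, data))
        else:
            consumer_write(data)
```

Parked writes are flushed in arrival order on resume, which is what keeps the consumer's byte stream contiguous across a pause.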
Tue Aug  2 02:42:58 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * nodemaker: teach nodemaker how to create MDMF mutable files

Tue Aug  2 02:45:01 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/filenode: Modify mutable filenodes for use with MDMF
 
  In particular:
      - Break MutableFileNode and MutableFileVersion into distinct classes.
      - Implement the interface modifications made for MDMF.
      - Be aware of MDMF caps.
      - Learn how to create and work with MDMF files.

Tue Aug  2 02:48:11 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * client: teach client how to create and work with MDMF files

Tue Aug  2 02:49:26 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * nodemaker: teach nodemaker about MDMF caps

Tue Aug  2 02:51:40 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable: train checker and repairer to work with MDMF mutable files

Tue Aug  2 02:56:43 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test/common: Alter common test code to work with MDMF.
 
  This mostly has to do with making the test code implement the new
  unified filenode interfaces.

Tue Aug  2 03:05:11 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * dirnode: teach dirnode to make MDMF directories

Tue Aug  2 03:08:14 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * immutable/literal.py: Implement interface changes in literal nodes.

Tue Aug  2 03:09:05 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * immutable/filenode: implement unified filenode interface

Tue Aug  2 03:09:24 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test/test_mutable: tests for MDMF
 
  These are their own patch because they cut across a lot of the changes
  I've made in implementing MDMF in such a way as to make it difficult to
  split them up into the other patches.

Tue Aug  2 03:11:20 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/layout: Define MDMF share format, write tools for working with MDMF share format
 
  The changes in layout.py are mostly concerned with the MDMF share
  format. In particular, we define read and write proxy objects used by
  retrieval, publishing, and other code to write and read the MDMF share
  format. We create equivalent proxies for SDMF objects so that these
  objects can be suitably general.

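The read-proxy idea can be sketched in miniature. In the retrieve.py hunks below, each `MDMFSlotReadProxy` is seeded with whatever the servermap update already cached, and only goes to the wire for reads it cannot satisfy locally. The class and `fetch` callback here are hypothetical, not the actual MDMFSlotReadProxy signature:

```python
# Illustrative sketch of a share read proxy seeded with cached data.
# (Hypothetical API; the real proxy speaks the remote slot_readv protocol.)

class CachingShareReader:
    def __init__(self, fetch, cached=b""):
        self._fetch = fetch      # callable(offset, length) -> bytes
        self._cached = cached    # bytes already fetched, starting at offset 0

    def read(self, offset, length):
        # serve from the prefetched prefix when possible
        if offset + length <= len(self._cached):
            return self._cached[offset:offset + length]
        # otherwise fall back to a remote fetch
        return self._fetch(offset, length)
```

This is why the hunks below hand the proxy up to 1 KiB of servermap-cached data: small header reads then cost no extra round trips.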
Tue Aug  2 03:12:07 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * frontends/sftpd: Resolve incompatibilities between SFTP frontend and MDMF changes

Tue Aug  2 03:12:33 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * uri: add MDMF and MDMF directory caps, add extension hint support

Tue Aug  2 03:13:11 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * webapi changes for MDMF
 
      - Learn how to create MDMF files and directories through the
        mutable-type argument.
      - Operate with the interface changes associated with MDMF and #993.
      - Learn how to do partial updates of mutable files.

Tue Aug  2 03:14:38 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test: fix assorted tests broken by MDMF changes

Tue Aug  2 03:16:13 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * cli: teach CLI how to create MDMF mutable files
 
  Specifically, 'tahoe mkdir' and 'tahoe put' now take a --mutable-type
  argument.

Tue Aug  2 03:20:56 BST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * docs: amend configuration, webapi documentation to talk about MDMF

Tue Aug  2 04:28:10 BST 2011  david-sarah@jacaranda.org
  * Fix some test failures caused by #393 patch.

New patches:

[mutable/retrieve: rework the mutable downloader to handle multiple-segment files
Kevan Carstensen <kevan@isnotajoke.com>**20110802013524
 Ignore-this: 398d11b5cb993b50e5e4fa6e7a3856dc
 
 The downloader needs substantial reworking to handle multiple segment
 mutable files, which it needs to handle for MDMF.
] {
hunk ./src/allmydata/mutable/retrieve.py 2
 
-import struct, time
+import time
 from itertools import count
 from zope.interface import implements
 from twisted.internet import defer
hunk ./src/allmydata/mutable/retrieve.py 7
 from twisted.python import failure
-from foolscap.api import DeadReferenceError, eventually, fireEventually
-from allmydata.interfaces import IRetrieveStatus, NotEnoughSharesError
-from allmydata.util import hashutil, idlib, log
+from twisted.internet.interfaces import IPushProducer, IConsumer
+from foolscap.api import eventually, fireEventually
+from allmydata.interfaces import IRetrieveStatus, NotEnoughSharesError, \
+                                 MDMF_VERSION, SDMF_VERSION
+from allmydata.util import hashutil, log, mathutil
 from allmydata.util.dictutil import DictOfSets
 from allmydata import hashtree, codec
 from allmydata.storage.server import si_b2a
hunk ./src/allmydata/mutable/retrieve.py 19
 from pycryptopp.publickey import rsa
 
 from allmydata.mutable.common import CorruptShareError, UncoordinatedWriteError
-from allmydata.mutable.layout import SIGNED_PREFIX, unpack_share_data
+from allmydata.mutable.layout import MDMFSlotReadProxy
 
 class RetrieveStatus:
     implements(IRetrieveStatus)
hunk ./src/allmydata/mutable/retrieve.py 86
     # times, and each will have a separate response chain. However the
     # Retrieve object will remain tied to a specific version of the file, and
     # will use a single ServerMap instance.
+    implements(IPushProducer)
 
hunk ./src/allmydata/mutable/retrieve.py 88
-    def __init__(self, filenode, servermap, verinfo, fetch_privkey=False):
+    def __init__(self, filenode, servermap, verinfo, fetch_privkey=False,
+                 verify=False):
         self._node = filenode
         assert self._node.get_pubkey()
         self._storage_index = filenode.get_storage_index()
hunk ./src/allmydata/mutable/retrieve.py 107
         self.verinfo = verinfo
         # during repair, we may be called upon to grab the private key, since
         # it wasn't picked up during a verify=False checker run, and we'll
-        # need it for repair to generate the a new version.
-        self._need_privkey = fetch_privkey
-        if self._node.get_privkey():
+        # need it for repair to generate a new version.
+        self._need_privkey = fetch_privkey or verify
+        if self._node.get_privkey() and not verify:
             self._need_privkey = False
 
hunk ./src/allmydata/mutable/retrieve.py 112
+        if self._need_privkey:
+            # TODO: Evaluate the need for this. We'll use it if we want
+            # to limit how many queries are on the wire for the privkey
+            # at once.
+            self._privkey_query_markers = [] # one Marker for each time we've
+                                             # tried to get the privkey.
+
+        # verify means that we are using the downloader logic to verify all
+        # of our shares. This tells the downloader a few things.
+        #
+        # 1. We need to download all of the shares.
+        # 2. We don't need to decode or decrypt the shares, since our
+        #    caller doesn't care about the plaintext, only the
+        #    information about which shares are or are not valid.
+        # 3. When we are validating readers, we need to validate the
+        #    signature on the prefix. Do we? We already do this in the
+        #    servermap update?
+        self._verify = False
+        if verify:
+            self._verify = True
+
         self._status = RetrieveStatus()
         self._status.set_storage_index(self._storage_index)
         self._status.set_helper(False)
hunk ./src/allmydata/mutable/retrieve.py 142
          offsets_tuple) = self.verinfo
         self._status.set_size(datalength)
         self._status.set_encoding(k, N)
+        self.readers = {}
+        self._paused = False
+        self._paused_deferred = None
+        self._offset = None
+        self._read_length = None
+        self.log("got seqnum %d" % self.verinfo[0])
+
 
     def get_status(self):
         return self._status
hunk ./src/allmydata/mutable/retrieve.py 160
             kwargs["facility"] = "tahoe.mutable.retrieve"
         return log.msg(*args, **kwargs)
 
-    def download(self):
+
+    ###################
+    # IPushProducer
+
+    def pauseProducing(self):
+        """
+        I am called by my download target if we have produced too much
+        data for it to handle. I make the downloader stop producing new
+        data until my resumeProducing method is called.
+        """
+        if self._paused:
+            return
+
+        # fired when the download is unpaused.
+        self._old_status = self._status.get_status()
+        self._status.set_status("Paused")
+
+        self._pause_deferred = defer.Deferred()
+        self._paused = True
+
+
+    def resumeProducing(self):
+        """
+        I am called by my download target once it is ready to begin
+        receiving data again.
+        """
+        if not self._paused:
+            return
+
+        self._paused = False
+        p = self._pause_deferred
+        self._pause_deferred = None
+        self._status.set_status(self._old_status)
+
+        eventually(p.callback, None)
+
+
+    def _check_for_paused(self, res):
+        """
+        I am called just before a write to the consumer. I return a
+        Deferred that eventually fires with the data that is to be
+        written to the consumer. If the download has not been paused,
+        the Deferred fires immediately. Otherwise, the Deferred fires
+        when the downloader is unpaused.
+        """
+        if self._paused:
+            d = defer.Deferred()
+            self._pause_deferred.addCallback(lambda ignored: d.callback(res))
+            return d
+        return defer.succeed(res)
+
+
+    def download(self, consumer=None, offset=0, size=None):
+        assert IConsumer.providedBy(consumer) or self._verify
+
+        if consumer:
+            self._consumer = consumer
+            # we provide IPushProducer, so streaming=True, per
+            # IConsumer.
+            self._consumer.registerProducer(self, streaming=True)
+
         self._done_deferred = defer.Deferred()
         self._started = time.time()
         self._status.set_status("Retrieving Shares")
hunk ./src/allmydata/mutable/retrieve.py 225
 
+        self._offset = offset
+        self._read_length = size
+
         # first, which servers can we use?
         versionmap = self.servermap.make_versionmap()
         shares = versionmap[self.verinfo]
hunk ./src/allmydata/mutable/retrieve.py 235
         self.remaining_sharemap = DictOfSets()
         for (shnum, peerid, timestamp) in shares:
             self.remaining_sharemap.add(shnum, peerid)
+            # If the servermap update fetched anything, it fetched at least 1
+            # KiB, so we ask for that much.
+            # TODO: Change the cache methods to allow us to fetch all of the
+            # data that they have, then change this method to do that.
+            any_cache = self._node._read_from_cache(self.verinfo, shnum,
+                                                    0, 1000)
+            ss = self.servermap.connections[peerid]
+            reader = MDMFSlotReadProxy(ss,
+                                       self._storage_index,
+                                       shnum,
+                                       any_cache)
+            reader.peerid = peerid
+            self.readers[shnum] = reader
+
 
         self.shares = {} # maps shnum to validated blocks
hunk ./src/allmydata/mutable/retrieve.py 251
+        self._active_readers = [] # list of active readers for this dl.
+        self._validated_readers = set() # set of readers that we have
+                                        # validated the prefix of
+        self._block_hash_trees = {} # shnum => hashtree
 
         # how many shares do we need?
hunk ./src/allmydata/mutable/retrieve.py 257
-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
+        (seqnum,
+         root_hash,
+         IV,
+         segsize,
+         datalength,
+         k,
+         N,
+         prefix,
          offsets_tuple) = self.verinfo
hunk ./src/allmydata/mutable/retrieve.py 266
+
+
+        # We need one share hash tree for the entire file; its leaves
+        # are the roots of the block hash trees for the shares that
+        # comprise it, and its root is in the verinfo.
+        self.share_hash_tree = hashtree.IncompleteHashTree(N)
+        self.share_hash_tree.set_hashes({0: root_hash})
+
+        # This will set up both the segment decoder and the tail segment
+        # decoder, as well as a variety of other instance variables that
+        # the download process will use.
+        self._setup_encoding_parameters()
         assert len(self.remaining_sharemap) >= k
hunk ./src/allmydata/mutable/retrieve.py 279
-        # we start with the lowest shnums we have available, since FEC is
-        # faster if we're using "primary shares"
-        self.active_shnums = set(sorted(self.remaining_sharemap.keys())[:k])
-        for shnum in self.active_shnums:
-            # we use an arbitrary peer who has the share. If shares are
-            # doubled up (more than one share per peer), we could make this
-            # run faster by spreading the load among multiple peers. But the
-            # algorithm to do that is more complicated than I want to write
-            # right now, and a well-provisioned grid shouldn't have multiple
-            # shares per peer.
-            peerid = list(self.remaining_sharemap[shnum])[0]
-            self.get_data(shnum, peerid)
 
hunk ./src/allmydata/mutable/retrieve.py 280
-        # control flow beyond this point: state machine. Receiving responses
-        # from queries is the input. We might send out more queries, or we
-        # might produce a result.
+        self.log("starting download")
+        self._paused = False
+        self._started_fetching = time.time()
 
hunk ./src/allmydata/mutable/retrieve.py 284
+        self._add_active_peers()
+        # The download process beyond this is a state machine.
+        # _add_active_peers will select the peers that we want to use
+        # for the download, and then attempt to start downloading. After
+        # each segment, it will check for doneness, reacting to broken
+        # peers and corrupt shares as necessary. If it runs out of good
+        # peers before downloading all of the segments, _done_deferred
+        # will errback.  Otherwise, it will eventually callback with the
+        # contents of the mutable file.
         return self._done_deferred
 
hunk ./src/allmydata/mutable/retrieve.py 295
-    def get_data(self, shnum, peerid):
-        self.log(format="sending sh#%(shnum)d request to [%(peerid)s]",
-                 shnum=shnum,
-                 peerid=idlib.shortnodeid_b2a(peerid),
-                 level=log.NOISY)
-        ss = self.servermap.connections[peerid]
-        started = time.time()
-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
+
+    def decode(self, blocks_and_salts, segnum):
+        """
+        I am a helper method that the mutable file update process uses
+        as a shortcut to decode and decrypt the segments that it needs
+        to fetch in order to perform a file update. I take in a
+        collection of blocks and salts, and pick some of those to make a
+        segment with. I return the plaintext associated with that
+        segment.
+        """
+        # shnum => block hash tree. Unusued, but setup_encoding_parameters will
+        # want to set this.
+        # XXX: Make it so that it won't set this if we're just decoding.
+        self._block_hash_trees = {}
+        self._setup_encoding_parameters()
+        # This is the form expected by decode.
+        blocks_and_salts = blocks_and_salts.items()
+        blocks_and_salts = [(True, [d]) for d in blocks_and_salts]
+
+        d = self._decode_blocks(blocks_and_salts, segnum)
+        d.addCallback(self._decrypt_segment)
+        return d
+
+
+    def _setup_encoding_parameters(self):
+        """
+        I set up the encoding parameters, including k, n, the number
+        of segments associated with this file, and the segment decoder.
+        """
+        (seqnum,
+         root_hash,
+         IV,
+         segsize,
+         datalength,
+         k,
+         n,
+         known_prefix,
          offsets_tuple) = self.verinfo
hunk ./src/allmydata/mutable/retrieve.py 333
-        offsets = dict(offsets_tuple)
+        self._required_shares = k
+        self._total_shares = n
+        self._segment_size = segsize
+        self._data_length = datalength
+
+        if not IV:
+            self._version = MDMF_VERSION
+        else:
+            self._version = SDMF_VERSION
 
hunk ./src/allmydata/mutable/retrieve.py 343
-        # we read the checkstring, to make sure that the data we grab is from
-        # the right version.
-        readv = [ (0, struct.calcsize(SIGNED_PREFIX)) ]
+        if datalength and segsize:
+            self._num_segments = mathutil.div_ceil(datalength, segsize)
+            self._tail_data_size = datalength % segsize
+        else:
+            self._num_segments = 0
+            self._tail_data_size = 0
 
hunk ./src/allmydata/mutable/retrieve.py 350
-        # We also read the data, and the hashes necessary to validate them
-        # (share_hash_chain, block_hash_tree, share_data). We don't read the
-        # signature or the pubkey, since that was handled during the
-        # servermap phase, and we'll be comparing the share hash chain
-        # against the roothash that was validated back then.
+        self._segment_decoder = codec.CRSDecoder()
+        self._segment_decoder.set_params(segsize, k, n)
 
hunk ./src/allmydata/mutable/retrieve.py 353
-        readv.append( (offsets['share_hash_chain'],
-                       offsets['enc_privkey'] - offsets['share_hash_chain'] ) )
+        if  not self._tail_data_size:
+            self._tail_data_size = segsize
 
hunk ./src/allmydata/mutable/retrieve.py 356
-        # if we need the private key (for repair), we also fetch that
-        if self._need_privkey:
-            readv.append( (offsets['enc_privkey'],
-                           offsets['EOF'] - offsets['enc_privkey']) )
+        self._tail_segment_size = mathutil.next_multiple(self._tail_data_size,
+                                                         self._required_shares)
+        if self._tail_segment_size == self._segment_size:
+            self._tail_decoder = self._segment_decoder
+        else:
+            self._tail_decoder = codec.CRSDecoder()
+            self._tail_decoder.set_params(self._tail_segment_size,
+                                          self._required_shares,
+                                          self._total_shares)
+
+        self.log("got encoding parameters: "
+                 "k: %d "
+                 "n: %d "
+                 "%d segments of %d bytes each (%d byte tail segment)" % \
+                 (k, n, self._num_segments, self._segment_size,
+                  self._tail_segment_size))
 
hunk ./src/allmydata/mutable/retrieve.py 373
-        m = Marker()
-        self._outstanding_queries[m] = (peerid, shnum, started)
+        for i in xrange(self._total_shares):
+            # So we don't have to do this later.
+            self._block_hash_trees[i] = hashtree.IncompleteHashTree(self._num_segments)
 
hunk ./src/allmydata/mutable/retrieve.py 377
-        # ask the cache first
-        got_from_cache = False
-        datavs = []
-        for (offset, length) in readv:
-            data = self._node._read_from_cache(self.verinfo, shnum, offset, length)
-            if data is not None:
-                datavs.append(data)
-        if len(datavs) == len(readv):
-            self.log("got data from cache")
-            got_from_cache = True
-            d = fireEventually({shnum: datavs})
-            # datavs is a dict mapping shnum to a pair of strings
+        # Our last task is to tell the downloader where to start and
+        # where to stop. We use three parameters for that:
+        #   - self._start_segment: the segment that we need to start
+        #     downloading from.
+        #   - self._current_segment: the next segment that we need to
+        #     download.
+        #   - self._last_segment: The last segment that we were asked to
+        #     download.
+        #
+        #  We say that the download is complete when
+        #  self._current_segment > self._last_segment. We use
+        #  self._start_segment and self._last_segment to know when to
+        #  strip things off of segments, and how much to strip.
+        if self._offset:
+            self.log("got offset: %d" % self._offset)
+            # our start segment is the first segment containing the
+            # offset we were given.
+            start = mathutil.div_ceil(self._offset,
+                                      self._segment_size)
+            # this gets us the first segment after self._offset. Then
+            # our start segment is the one before it.
+            start -= 1
+
+            assert start < self._num_segments
+            self._start_segment = start
+            self.log("got start segment: %d" % self._start_segment)
         else:
hunk ./src/allmydata/mutable/retrieve.py 404
-            d = self._do_read(ss, peerid, self._storage_index, [shnum], readv)
-        self.remaining_sharemap.discard(shnum, peerid)
+            self._start_segment = 0
 
hunk ./src/allmydata/mutable/retrieve.py 406
-        d.addCallback(self._got_results, m, peerid, started, got_from_cache)
-        d.addErrback(self._query_failed, m, peerid)
-        # errors that aren't handled by _query_failed (and errors caused by
-        # _query_failed) get logged, but we still want to check for doneness.
-        def _oops(f):
-            self.log(format="problem in _query_failed for sh#%(shnum)d to %(peerid)s",
-                     shnum=shnum,
-                     peerid=idlib.shortnodeid_b2a(peerid),
-                     failure=f,
-                     level=log.WEIRD, umid="W0xnQA")
-        d.addErrback(_oops)
-        d.addBoth(self._check_for_done)
-        # any error during _check_for_done means the download fails. If the
-        # download is successful, _check_for_done will fire _done by itself.
-        d.addErrback(self._done)
-        d.addErrback(log.err)
-        return d # purely for testing convenience
 
hunk ./src/allmydata/mutable/retrieve.py 407
-    def _do_read(self, ss, peerid, storage_index, shnums, readv):
-        # isolate the callRemote to a separate method, so tests can subclass
-        # Publish and override it
-        d = ss.callRemote("slot_readv", storage_index, shnums, readv)
-        return d
+        if self._read_length:
+            # our end segment is the last segment containing part of the
+            # segment that we were asked to read.
+            self.log("got read length %d" % self._read_length)
+            end_data = self._offset + self._read_length
+            end = mathutil.div_ceil(end_data,
+                                    self._segment_size)
+            end -= 1
+            assert end < self._num_segments
+            self._last_segment = end
+            self.log("got end segment: %d" % self._last_segment)
+        else:
+            self._last_segment = self._num_segments - 1
 
hunk ./src/allmydata/mutable/retrieve.py 421
-    def remove_peer(self, peerid):
-        for shnum in list(self.remaining_sharemap.keys()):
-            self.remaining_sharemap.discard(shnum, peerid)
+        self._current_segment = self._start_segment
 
hunk ./src/allmydata/mutable/retrieve.py 423
-    def _got_results(self, datavs, marker, peerid, started, got_from_cache):
-        now = time.time()
-        elapsed = now - started
-        if not got_from_cache:
-            self._status.add_fetch_timing(peerid, elapsed)
-        self.log(format="got results (%(shares)d shares) from [%(peerid)s]",
-                 shares=len(datavs),
-                 peerid=idlib.shortnodeid_b2a(peerid),
-                 level=log.NOISY)
-        self._outstanding_queries.pop(marker, None)
-        if not self._running:
-            return
+    def _add_active_peers(self):
+        """
+        I populate self._active_readers with enough active readers to
+        retrieve the contents of this mutable file. I am called before
+        downloading starts, and (eventually) after each validation
+        error, connection error, or other problem in the download.
+        """
+        # TODO: It would be cool to investigate other heuristics for
+        # reader selection. For instance, the cost (in time the user
+        # spends waiting for their file) of selecting a really slow peer
+        # that happens to have a primary share is probably more than
+        # selecting a really fast peer that doesn't have a primary
+        # share. Maybe the servermap could be extended to provide this
+        # information; it could keep track of latency information while
+        # it gathers more important data, and then this routine could
+        # use that to select active readers.
+        #
+        # (these and other questions would be easier to answer with a
+        #  robust, configurable tahoe-lafs simulator, which modeled node
+        #  failures, differences in node speed, and other characteristics
+        #  that we expect storage servers to have.  You could have
+        #  presets for really stable grids (like allmydata.com),
+        #  friendnets, make it easy to configure your own settings, and
+        #  then simulate the effect of big changes on these use cases
+        #  instead of just reasoning about what the effect might be. Out
+        #  of scope for MDMF, though.)
 
613hunk ./src/allmydata/mutable/retrieve.py 450
614-        # note that we only ask for a single share per query, so we only
615-        # expect a single share back. On the other hand, we use the extra
616-        # shares if we get them.. seems better than an assert().
617+        # We need at least self._required_shares readers to download a
618+        # segment.
619+        if self._verify:
620+            needed = self._total_shares
621+        else:
622+            needed = self._required_shares - len(self._active_readers)
623+        # XXX: Why don't format= log messages work here?
624+        self.log("adding %d peers to the active peers list" % needed)
625 
626hunk ./src/allmydata/mutable/retrieve.py 459
627-        for shnum,datav in datavs.items():
628-            (prefix, hash_and_data) = datav[:2]
629-            try:
630-                self._got_results_one_share(shnum, peerid,
631-                                            prefix, hash_and_data)
632-            except CorruptShareError, e:
633-                # log it and give the other shares a chance to be processed
634-                f = failure.Failure()
635-                self.log(format="bad share: %(f_value)s",
636-                         f_value=str(f.value), failure=f,
637-                         level=log.WEIRD, umid="7fzWZw")
638-                self.notify_server_corruption(peerid, shnum, str(e))
639-                self.remove_peer(peerid)
640-                self.servermap.mark_bad_share(peerid, shnum, prefix)
641-                self._bad_shares.add( (peerid, shnum) )
642-                self._status.problems[peerid] = f
643-                self._last_failure = f
644-                pass
645-            if self._need_privkey and len(datav) > 2:
646-                lp = None
647-                self._try_to_validate_privkey(datav[2], peerid, shnum, lp)
648-        # all done!
649+        # We favor lower numbered shares, since FEC is faster with
650+        # primary shares than with other shares, and lower-numbered
651+        # shares are more likely to be primary than higher numbered
652+        # shares.
653+        active_shnums = set(sorted(self.remaining_sharemap.keys()))
654+        # We shouldn't consider adding shares that we already have; this
655+        # will cause problems later.
656+        active_shnums -= set([reader.shnum for reader in self._active_readers])
657+        active_shnums = list(active_shnums)[:needed]
658+        if len(active_shnums) < needed and not self._verify:
659+            # We don't have enough readers to retrieve the file; fail.
660+            return self._failed()
661 
662hunk ./src/allmydata/mutable/retrieve.py 472
663-    def notify_server_corruption(self, peerid, shnum, reason):
664-        ss = self.servermap.connections[peerid]
665-        ss.callRemoteOnly("advise_corrupt_share",
666-                          "mutable", self._storage_index, shnum, reason)
667+        for shnum in active_shnums:
668+            self._active_readers.append(self.readers[shnum])
669+            self.log("added reader for share %d" % shnum)
670+        assert len(self._active_readers) >= self._required_shares
671+        # Conceptually, this is part of the _add_active_peers step. It
672+        # validates the prefixes of newly added readers to make sure
673+        # that they match what we are expecting for self.verinfo. If
674+        # validation is successful, _validate_active_prefixes will call
675+        # _download_current_segment for us. If validation is
676+        # unsuccessful, then _validate_prefixes will remove the peer and
677+        # call _add_active_peers again, where we will attempt to rectify
678+        # the problem by choosing another peer.
679+        return self._validate_active_prefixes()
680 
681hunk ./src/allmydata/mutable/retrieve.py 486
682-    def _got_results_one_share(self, shnum, peerid,
683-                               got_prefix, got_hash_and_data):
684-        self.log("_got_results: got shnum #%d from peerid %s"
685-                 % (shnum, idlib.shortnodeid_b2a(peerid)))
686-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
687+
688+    def _validate_active_prefixes(self):
689+        """
690+        I check to make sure that the prefixes on the peers that I am
691+        currently reading from match the prefix that we expect to see,
692+        as recorded in self.verinfo.
693+
694+        If I find that all of the active peers have acceptable prefixes,
695+        I pass control to _download_current_segment, which will use
696+        those peers to do cool things. If I find that some of the active
697+        peers have unacceptable prefixes, I will remove them from active
698+        peers (and from further consideration) and call
699+        _add_active_peers to attempt to rectify the situation. I keep
700+        track of which peers I have already validated so that I don't
701+        need to do so again.
702+        """
703+        assert self._active_readers, "No more active readers"
704+
705+        ds = []
706+        new_readers = set(self._active_readers) - self._validated_readers
707+        self.log('validating %d newly-added active readers' % len(new_readers))
708+
709+        for reader in new_readers:
710+            # We force a remote read here -- otherwise, we are relying
711+            # on cached data that we already verified as valid, and we
712+            # won't detect an uncoordinated write that has occurred
713+            # since the last servermap update.
714+            d = reader.get_prefix(force_remote=True)
715+            d.addCallback(self._try_to_validate_prefix, reader)
716+            ds.append(d)
717+        dl = defer.DeferredList(ds, consumeErrors=True)
718+        def _check_results(results):
719+            # Each result in results will be of the form (success, msg).
720+            # We don't care about msg, but success will tell us whether
721+            # or not the checkstring validated. If it didn't, we need to
722+            # remove the offending (peer,share) from our active readers,
723+            # and ensure that active readers is again populated.
724+            bad_readers = []
725+            for i, result in enumerate(results):
726+                if not result[0]:
727+                    reader = self._active_readers[i]
728+                    f = result[1]
729+                    assert isinstance(f, failure.Failure)
730+
731+                    self.log("The reader %s failed to "
732+                             "properly validate: %s" % \
733+                             (reader, str(f.value)))
734+                    bad_readers.append((reader, f))
735+                else:
736+                    reader = self._active_readers[i]
737+                    self.log("the reader %s checks out, so we'll use it" % \
738+                             reader)
739+                    self._validated_readers.add(reader)
740+                    # Each time we validate a reader, we check to see if
741+                    # we need the private key. If we do, we politely ask
742+                    # for it and then continue computing. If we find
743+                    # that we haven't gotten it at the end of
744+                    # segment decoding, then we'll take more drastic
745+                    # measures.
746+                    if self._need_privkey and not self._node.is_readonly():
747+                        d = reader.get_encprivkey()
748+                        d.addCallback(self._try_to_validate_privkey, reader)
749+            if bad_readers:
750+                # We do them all at once, or else we screw up list indexing.
751+                for (reader, f) in bad_readers:
752+                    self._mark_bad_share(reader, f)
753+                if self._verify:
754+                    if len(self._active_readers) >= self._required_shares:
755+                        return self._download_current_segment()
756+                    else:
757+                        return self._failed()
758+                else:
759+                    return self._add_active_peers()
760+            else:
761+                return self._download_current_segment()
764+        dl.addCallback(_check_results)
765+        return dl
766+
767+
768+    def _try_to_validate_prefix(self, prefix, reader):
769+        """
770+        I check that the prefix returned by a candidate server for
771+        retrieval matches the prefix that the servermap knows about
772+        (and, hence, the prefix that was validated earlier). If it does,
773+        I return True, which means that I approve of the use of the
774+        candidate server for segment retrieval. If it doesn't, I return
775+        False, which means that another server must be chosen.
776+        """
777+        (seqnum,
778+         root_hash,
779+         IV,
780+         segsize,
781+         datalength,
782+         k,
783+         N,
784+         known_prefix,
785          offsets_tuple) = self.verinfo
786hunk ./src/allmydata/mutable/retrieve.py 585
787-        assert len(got_prefix) == len(prefix), (len(got_prefix), len(prefix))
788-        if got_prefix != prefix:
789-            msg = "someone wrote to the data since we read the servermap: prefix changed"
790-            raise UncoordinatedWriteError(msg)
791-        (share_hash_chain, block_hash_tree,
792-         share_data) = unpack_share_data(self.verinfo, got_hash_and_data)
793+        if known_prefix != prefix:
794+            self.log("prefix from share %d doesn't match" % reader.shnum)
795+            raise UncoordinatedWriteError("Mismatched prefix -- this could "
796+                                          "indicate an uncoordinated write")
797+        # Otherwise, we're okay -- no issues.
798 
799hunk ./src/allmydata/mutable/retrieve.py 591
800-        assert isinstance(share_data, str)
801-        # build the block hash tree. SDMF has only one leaf.
802-        leaves = [hashutil.block_hash(share_data)]
803-        t = hashtree.HashTree(leaves)
804-        if list(t) != block_hash_tree:
805-            raise CorruptShareError(peerid, shnum, "block hash tree failure")
806-        share_hash_leaf = t[0]
807-        t2 = hashtree.IncompleteHashTree(N)
808-        # root_hash was checked by the signature
809-        t2.set_hashes({0: root_hash})
810-        try:
811-            t2.set_hashes(hashes=share_hash_chain,
812-                          leaves={shnum: share_hash_leaf})
813-        except (hashtree.BadHashError, hashtree.NotEnoughHashesError,
814-                IndexError), e:
815-            msg = "corrupt hashes: %s" % (e,)
816-            raise CorruptShareError(peerid, shnum, msg)
817-        self.log(" data valid! len=%d" % len(share_data))
818-        # each query comes down to this: placing validated share data into
819-        # self.shares
820-        self.shares[shnum] = share_data
821 
822hunk ./src/allmydata/mutable/retrieve.py 592
823-    def _try_to_validate_privkey(self, enc_privkey, peerid, shnum, lp):
824+    def _remove_reader(self, reader):
825+        """
826+        At various points, we will wish to remove a peer from
827+        consideration and/or use. These include, but are not necessarily
828+        limited to:
829 
830hunk ./src/allmydata/mutable/retrieve.py 598
831-        alleged_privkey_s = self._node._decrypt_privkey(enc_privkey)
832-        alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s)
833-        if alleged_writekey != self._node.get_writekey():
834-            self.log("invalid privkey from %s shnum %d" %
835-                     (idlib.nodeid_b2a(peerid)[:8], shnum),
836-                     parent=lp, level=log.WEIRD, umid="YIw4tA")
837-            return
838+            - A connection error.
839+            - A mismatched prefix (that is, a prefix that does not match
840+              our conception of the version information string).
841+            - A failing block hash, salt hash, or share hash, which can
842+              indicate disk failure/bit flips, or network trouble.
843 
844hunk ./src/allmydata/mutable/retrieve.py 604
845-        # it's good
846-        self.log("got valid privkey from shnum %d on peerid %s" %
847-                 (shnum, idlib.shortnodeid_b2a(peerid)),
848-                 parent=lp)
849-        privkey = rsa.create_signing_key_from_string(alleged_privkey_s)
850-        self._node._populate_encprivkey(enc_privkey)
851-        self._node._populate_privkey(privkey)
852-        self._need_privkey = False
853+        This method will do that. I will make sure that the
854+        (shnum,reader) combination represented by my reader argument is
855+        not used for anything else during this download. I will not
856+        advise the reader of any corruption, something that my callers
857+        may wish to do on their own.
858+        """
859+        # TODO: When you're done writing this, see if this is ever
860+        # actually used for something that _mark_bad_share isn't. I have
861+        # a feeling that they will be used for very similar things, and
862+        # that having them both here is just going to be an epic amount
863+        # of code duplication.
864+        #
865+        # (well, okay, not epic, but meaningful)
866+        self.log("removing reader %s" % reader)
867+        # Remove the reader from _active_readers
868+        self._active_readers.remove(reader)
869+        # TODO: self.readers.remove(reader)?
870+        for shnum in list(self.remaining_sharemap.keys()):
871+            self.remaining_sharemap.discard(shnum, reader.peerid)
872 
873hunk ./src/allmydata/mutable/retrieve.py 624
874-    def _query_failed(self, f, marker, peerid):
875-        self.log(format="query to [%(peerid)s] failed",
876-                 peerid=idlib.shortnodeid_b2a(peerid),
877-                 level=log.NOISY)
878-        self._status.problems[peerid] = f
879-        self._outstanding_queries.pop(marker, None)
880-        if not self._running:
881-            return
882+
883+    def _mark_bad_share(self, reader, f):
884+        """
885+        I mark the (peerid, shnum) encapsulated by my reader argument as
886+        a bad share, which means that it will not be used anywhere else.
887+
888+        There are several reasons to want to mark something as a bad
889+        share. These include:
890+
891+            - A connection error to the peer.
892+            - A mismatched prefix (that is, a prefix that does not match
893+              our local conception of the version information string).
894+            - A failing block hash, salt hash, share hash, or other
895+              integrity check.
896+
897+        This method will ensure that readers that we wish to mark bad
898+        (for these reasons or other reasons) are not used for the rest
899+        of the download. Additionally, it will attempt to tell the
900+        remote peer (with no guarantee of success) that its share is
901+        corrupt.
902+        """
903+        self.log("marking share %d on server %s as bad" % \
904+                 (reader.shnum, reader))
905+        prefix = self.verinfo[-2]
906+        self.servermap.mark_bad_share(reader.peerid,
907+                                      reader.shnum,
908+                                      prefix)
909+        self._remove_reader(reader)
910+        self._bad_shares.add((reader.peerid, reader.shnum, f))
911+        self._status.problems[reader.peerid] = f
912         self._last_failure = f
913hunk ./src/allmydata/mutable/retrieve.py 655
914-        self.remove_peer(peerid)
915-        level = log.WEIRD
916-        if f.check(DeadReferenceError):
917-            level = log.UNUSUAL
918-        self.log(format="error during query: %(f_value)s",
919-                 f_value=str(f.value), failure=f, level=level, umid="gOJB5g")
920+        self.notify_server_corruption(reader.peerid, reader.shnum,
921+                                      str(f.value))
922 
923hunk ./src/allmydata/mutable/retrieve.py 658
924-    def _check_for_done(self, res):
925-        # exit paths:
926-        #  return : keep waiting, no new queries
927-        #  return self._send_more_queries(outstanding) : send some more queries
928-        #  fire self._done(plaintext) : download successful
929-        #  raise exception : download fails
930 
931hunk ./src/allmydata/mutable/retrieve.py 659
932-        self.log(format="_check_for_done: running=%(running)s, decoding=%(decoding)s",
933-                 running=self._running, decoding=self._decoding,
934-                 level=log.NOISY)
935-        if not self._running:
936-            return
937-        if self._decoding:
938-            return
939-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
940-         offsets_tuple) = self.verinfo
941+    def _download_current_segment(self):
942+        """
943+        I download, validate, decode, decrypt, and assemble the segment
944+        that this Retrieve is currently responsible for downloading.
945+        """
946+        assert len(self._active_readers) >= self._required_shares
947+        if self._current_segment <= self._last_segment:
948+            d = self._process_segment(self._current_segment)
949+        else:
950+            d = defer.succeed(None)
951+        d.addBoth(self._turn_barrier)
952+        d.addCallback(self._check_for_done)
953+        return d
954 
955hunk ./src/allmydata/mutable/retrieve.py 673
956-        if len(self.shares) < k:
957-            # we don't have enough shares yet
958-            return self._maybe_send_more_queries(k)
959-        if self._need_privkey:
960-            # we got k shares, but none of them had a valid privkey. TODO:
961-            # look further. Adding code to do this is a bit complicated, and
962-            # I want to avoid that complication, and this should be pretty
963-            # rare (k shares with bitflips in the enc_privkey but not in the
964-            # data blocks). If we actually do get here, the subsequent repair
965-            # will fail for lack of a privkey.
966-            self.log("got k shares but still need_privkey, bummer",
967-                     level=log.WEIRD, umid="MdRHPA")
968 
969hunk ./src/allmydata/mutable/retrieve.py 674
970-        # we have enough to finish. All the shares have had their hashes
971-        # checked, so if something fails at this point, we don't know how
972-        # to fix it, so the download will fail.
973+    def _turn_barrier(self, result):
974+        """
975+        I help the download process avoid the recursion limit issues
976+        discussed in #237.
977+        """
978+        return fireEventually(result)
979 
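`_turn_barrier` leans on `fireEventually`, which fires its Deferred on a later reactor turn so that a long chain of synchronously-firing callbacks doesn't grow the Python stack. Outside Twisted the same idea is a trampoline: do one step per "turn" in a driver loop instead of recursing. A minimal stdlib sketch of why the barrier matters (the function names are illustrative, not from the patch):

```python
def process_segments_recursive(n):
    """Naive tail recursion: one stack frame per segment."""
    if n == 0:
        return "done"
    return process_segments_recursive(n - 1)

def process_segments_trampolined(n):
    """Turn-barrier style: each turn does one step and returns
    control to the driver loop, so stack depth stays constant."""
    while n > 0:
        n -= 1
    return "done"

# The naive version blows past the interpreter's recursion limit
# long before 100,000 segments; the trampolined one does not.
try:
    process_segments_recursive(100000)
except RecursionError:
    print("recursion limit hit")
print(process_segments_trampolined(100000))  # → done
```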
980hunk ./src/allmydata/mutable/retrieve.py 681
981-        self._decoding = True # avoid reentrancy
982-        self._status.set_status("decoding")
983-        now = time.time()
984-        elapsed = now - self._started
985-        self._status.timings["fetch"] = elapsed
986 
987hunk ./src/allmydata/mutable/retrieve.py 682
988-        d = defer.maybeDeferred(self._decode)
989-        d.addCallback(self._decrypt, IV, self._node.get_readkey())
990-        d.addBoth(self._done)
991-        return d # purely for test convenience
992+    def _process_segment(self, segnum):
993+        """
994+        I download, validate, decode, and decrypt one segment of the
995+        file that this Retrieve is retrieving. This means coordinating
996+        the process of getting k blocks of that file, validating them,
997+        assembling them into one segment with the decoder, and then
998+        decrypting them.
999+        """
1000+        self.log("processing segment %d" % segnum)
1001 
1002hunk ./src/allmydata/mutable/retrieve.py 692
1003-    def _maybe_send_more_queries(self, k):
1004-        # we don't have enough shares yet. Should we send out more queries?
1005-        # There are some number of queries outstanding, each for a single
1006-        # share. If we can generate 'needed_shares' additional queries, we do
1007-        # so. If we can't, then we know this file is a goner, and we raise
1008-        # NotEnoughSharesError.
1009-        self.log(format=("_maybe_send_more_queries, have=%(have)d, k=%(k)d, "
1010-                         "outstanding=%(outstanding)d"),
1011-                 have=len(self.shares), k=k,
1012-                 outstanding=len(self._outstanding_queries),
1013-                 level=log.NOISY)
1014+        # TODO: The old code uses a marker. Should this code do that
1015+        # too? What did the Marker do?
1016+        assert len(self._active_readers) >= self._required_shares
1017 
1018hunk ./src/allmydata/mutable/retrieve.py 696
1019-        remaining_shares = k - len(self.shares)
1020-        needed = remaining_shares - len(self._outstanding_queries)
1021-        if not needed:
1022-            # we have enough queries in flight already
1023+        # We need to ask each of our active readers for its block and
1024+        # salt. We will then validate those. If validation is
1025+        # successful, we will assemble the results into plaintext.
1026+        ds = []
1027+        for reader in self._active_readers:
1028+            started = time.time()
1029+            d = reader.get_block_and_salt(segnum, queue=True)
1030+            d2 = self._get_needed_hashes(reader, segnum)
1031+            dl = defer.DeferredList([d, d2], consumeErrors=True)
1032+            dl.addCallback(self._validate_block, segnum, reader, started)
1033+            dl.addErrback(self._validation_or_decoding_failed, [reader])
1034+            ds.append(dl)
1035+            reader.flush()
1036+        dl = defer.DeferredList(ds)
1037+        if self._verify:
1038+            dl.addCallback(lambda ignored: "")
1039+            dl.addCallback(self._set_segment)
1040+        else:
1041+            dl.addCallback(self._maybe_decode_and_decrypt_segment, segnum)
1042+        return dl
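The `DeferredList(..., consumeErrors=True)` pattern above collects a `(success, result-or-failure)` pair per fetch instead of aborting on the first error, so one bad reader doesn't sink the whole segment. A rough stdlib analogue of that shape, using `concurrent.futures` in place of Twisted and a hypothetical `fetch_block` standing in for `reader.get_block_and_salt`:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_block(shnum):
    """Hypothetical stand-in for a remote block-and-salt fetch."""
    if shnum == 4:
        raise IOError("connection lost")
    return ("block%d" % shnum, "salt%d" % shnum)

def gather(shnums):
    """Collect (success, result_or_error) pairs, DeferredList-style,
    rather than failing fast on the first error."""
    results = []
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(fetch_block, s) for s in shnums]
        for f in futures:
            try:
                results.append((True, f.result()))
            except Exception as e:
                results.append((False, e))
    return results

res = gather([0, 1, 4])
print([ok for ok, _ in res])  # → [True, True, False]
```

The caller can then decide, as `_maybe_decode_and_decrypt_segment` does, whether enough fetches succeeded to proceed.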
1043 
1044hunk ./src/allmydata/mutable/retrieve.py 717
1045-            # TODO: but if they've been in flight for a long time, and we
1046-            # have reason to believe that new queries might respond faster
1047-            # (i.e. we've seen other queries come back faster, then consider
1048-            # sending out new queries. This could help with peers which have
1049-            # silently gone away since the servermap was updated, for which
1050-            # we're still waiting for the 15-minute TCP disconnect to happen.
1051-            self.log("enough queries are in flight, no more are needed",
1052-                     level=log.NOISY)
1053-            return
1054 
1055hunk ./src/allmydata/mutable/retrieve.py 718
1056-        outstanding_shnums = set([shnum
1057-                                  for (peerid, shnum, started)
1058-                                  in self._outstanding_queries.values()])
1059-        # prefer low-numbered shares, they are more likely to be primary
1060-        available_shnums = sorted(self.remaining_sharemap.keys())
1061-        for shnum in available_shnums:
1062-            if shnum in outstanding_shnums:
1063-                # skip ones that are already in transit
1064-                continue
1065-            if shnum not in self.remaining_sharemap:
1066-                # no servers for that shnum. note that DictOfSets removes
1067-                # empty sets from the dict for us.
1068-                continue
1069-            peerid = list(self.remaining_sharemap[shnum])[0]
1070-            # get_data will remove that peerid from the sharemap, and add the
1071-            # query to self._outstanding_queries
1072-            self._status.set_status("Retrieving More Shares")
1073-            self.get_data(shnum, peerid)
1074-            needed -= 1
1075-            if not needed:
1076+    def _maybe_decode_and_decrypt_segment(self, blocks_and_salts, segnum):
1077+        """
1078+        I take the results of fetching and validating the blocks from a
1079+        callback chain in another method. If the results are such that
1080+        they tell me that validation and fetching succeeded without
1081+        incident, I will proceed with decoding and decryption.
1082+        Otherwise, I will do nothing.
1083+        """
1084+        self.log("trying to decode and decrypt segment %d" % segnum)
1085+        failures = False
1086+        for block_and_salt in blocks_and_salts:
1087+            if not block_and_salt[0] or block_and_salt[1] is None:
1088+                self.log("some validation operations failed; not proceeding")
1089+                failures = True
1090                 break
1091hunk ./src/allmydata/mutable/retrieve.py 733
1092+        if not failures:
1093+            self.log("everything looks ok, building segment %d" % segnum)
1094+            d = self._decode_blocks(blocks_and_salts, segnum)
1095+            d.addCallback(self._decrypt_segment)
1096+            d.addErrback(self._validation_or_decoding_failed,
1097+                         self._active_readers)
1098+            # check to see whether we've been paused before writing
1099+            # anything.
1100+            d.addCallback(self._check_for_paused)
1101+            d.addCallback(self._set_segment)
1102+            return d
1103+        else:
1104+            return defer.succeed(None)
1105 
1106hunk ./src/allmydata/mutable/retrieve.py 747
1107-        # at this point, we have as many outstanding queries as we can. If
1108-        # needed!=0 then we might not have enough to recover the file.
1109-        if needed:
1110-            format = ("ran out of peers: "
1111-                      "have %(have)d shares (k=%(k)d), "
1112-                      "%(outstanding)d queries in flight, "
1113-                      "need %(need)d more, "
1114-                      "found %(bad)d bad shares")
1115-            args = {"have": len(self.shares),
1116-                    "k": k,
1117-                    "outstanding": len(self._outstanding_queries),
1118-                    "need": needed,
1119-                    "bad": len(self._bad_shares),
1120-                    }
1121-            self.log(format=format,
1122-                     level=log.WEIRD, umid="ezTfjw", **args)
1123-            err = NotEnoughSharesError("%s, last failure: %s" %
1124-                                      (format % args, self._last_failure))
1125-            if self._bad_shares:
1126-                self.log("We found some bad shares this pass. You should "
1127-                         "update the servermap and try again to check "
1128-                         "more peers",
1129-                         level=log.WEIRD, umid="EFkOlA")
1130-                err.servermap = self.servermap
1131-            raise err
1132 
1133hunk ./src/allmydata/mutable/retrieve.py 748
1134+    def _set_segment(self, segment):
1135+        """
1136+        Given a plaintext segment, I register that segment with the
1137+        target that is handling the file download.
1138+        """
1139+        self.log("got plaintext for segment %d" % self._current_segment)
1140+        if self._current_segment == self._start_segment:
1141+            # We're on the first segment. It's possible that we want
1142+            # only some part of the end of this segment, and that we
1143+            # just downloaded the whole thing to get that part. If so,
1144+            # we need to account for that and give the reader just the
1145+            # data that they want.
1146+            n = self._offset % self._segment_size
1147+            self.log("stripping %d bytes off of the first segment" % n)
1148+            self.log("original segment length: %d" % len(segment))
1149+            segment = segment[n:]
1150+            self.log("new segment length: %d" % len(segment))
1151+
1152+        if self._current_segment == self._last_segment and self._read_length is not None:
1153+            # We're on the last segment. It's possible that we only want
1154+            # part of the beginning of this segment, and that we
1155+            # downloaded the whole thing anyway. Make sure to give the
1156+            # caller only the portion of the segment that they want to
1157+            # receive.
1158+            extra = self._read_length
1159+            if self._start_segment != self._last_segment:
1160+                extra -= self._segment_size - \
1161+                            (self._offset % self._segment_size)
1162+            extra %= self._segment_size
1163+            self.log("original segment length: %d" % len(segment))
1164+            segment = segment[:extra]
1165+            self.log("new segment length: %d" % len(segment))
1166+            self.log("only taking %d bytes of the last segment" % extra)
1167+
1168+        if not self._verify:
1169+            self._consumer.write(segment)
1170+        else:
1171+            # we don't care about the plaintext if we are doing a verify.
1172+            segment = None
1173+        self._current_segment += 1
1174+
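The head/tail trimming in `_set_segment` can be checked with a small worked example. This sketch re-derives the same arithmetic (strip `offset % segsize` from the first segment, keep `extra` bytes of the last) as a standalone function; it is a simplified reimplementation, not the patch's code, and it guards the `extra == 0` boundary case where a read ends exactly on a segment edge:

```python
def trim(segments, segsize, offset, read_length):
    """Given whole decrypted segments, return exactly the requested
    (offset, read_length) byte range."""
    start_seg = offset // segsize
    last_seg = (offset + read_length - 1) // segsize
    out = []
    for segnum in range(start_seg, last_seg + 1):
        seg = segments[segnum]
        if segnum == start_seg:
            seg = seg[offset % segsize:]       # strip unwanted head
        if segnum == last_seg:
            extra = read_length
            if start_seg != last_seg:
                extra -= segsize - (offset % segsize)
            extra %= segsize
            if extra:                          # 0 means "keep it all"
                seg = seg[:extra]              # strip unwanted tail
        out.append(seg)
    return b"".join(out)

data = b"abcdefghijklmnopqrstuvwxyz"
segs = [data[i:i + 8] for i in range(0, len(data), 8)]  # segsize 8
print(trim(segs, 8, 5, 10))  # → b"fghijklmno"
```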
1175+
1176+    def _validation_or_decoding_failed(self, f, readers):
1177+        """
1178+        I am called when a block or a salt fails to correctly validate, or when
1179+        the decryption or decoding operation fails for some reason.  I react to
1180+        this failure by notifying the remote server of corruption, and then
1181+        removing the remote peer from further activity.
1182+        """
1183+        assert isinstance(readers, list)
1184+        bad_shnums = [reader.shnum for reader in readers]
1185+
1186+        self.log("validation or decoding failed on share(s) %s, peer(s) %s, "
1187+                 "segment %d: %s" % \
1188+                 (bad_shnums, readers, self._current_segment, str(f)))
1189+        for reader in readers:
1190+            self._mark_bad_share(reader, f)
1191         return
1192 
1193hunk ./src/allmydata/mutable/retrieve.py 807
1194-    def _decode(self):
1195-        started = time.time()
1196-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
1197-         offsets_tuple) = self.verinfo
1198 
1199hunk ./src/allmydata/mutable/retrieve.py 808
1200-        # shares_dict is a dict mapping shnum to share data, but the codec
1201-        # wants two lists.
1202-        shareids = []; shares = []
1203-        for shareid, share in self.shares.items():
1204+    def _validate_block(self, results, segnum, reader, started):
1205+        """
1206+        I validate a block from one share on a remote server.
1207+        """
1208+        # Grab the part of the block hash tree that is necessary to
1209+        # validate this block, then generate the block hash root.
1210+        self.log("validating share %d for segment %d" % (reader.shnum,
1211+                                                             segnum))
1212+        self._status.add_fetch_timing(reader.peerid, started)
1213+        self._status.set_status("Validating blocks for segment %d" % segnum)
1214+        # Did we fail to fetch either of the things that we were
1215+        # supposed to? Fail if so.
1216+        if not results[0][0] or not results[1][0]:
1217+            # handled by the errback handler.
1218+
1219+            # Both fetches were batched into one query, so their
1220+            # failures should be the same; use whichever one failed.
1221+            f = results[0][1] if not results[0][0] else results[1][1]
1222+            assert isinstance(f, failure.Failure)
1223+
1225+            raise CorruptShareError(reader.peerid,
1226+                                    reader.shnum,
1227+                                    "Connection error: %s" % str(f))
1228+
1229+        block_and_salt, block_and_sharehashes = results
1230+        block, salt = block_and_salt[1]
1231+        blockhashes, sharehashes = block_and_sharehashes[1]
1232+
1233+        blockhashes = dict(enumerate(blockhashes[1]))
1234+        self.log("the reader gave me the following blockhashes: %s" % \
1235+                 blockhashes.keys())
1236+        self.log("the reader gave me the following sharehashes: %s" % \
1237+                 sharehashes[1].keys())
1238+        bht = self._block_hash_trees[reader.shnum]
1239+
1240+        if bht.needed_hashes(segnum, include_leaf=True):
1241+            try:
1242+                bht.set_hashes(blockhashes)
1243+            except (hashtree.BadHashError, hashtree.NotEnoughHashesError, \
1244+                    IndexError), e:
1245+                raise CorruptShareError(reader.peerid,
1246+                                        reader.shnum,
1247+                                        "block hash tree failure: %s" % e)
1248+
1249+        if self._version == MDMF_VERSION:
1250+            blockhash = hashutil.block_hash(salt + block)
1251+        else:
1252+            blockhash = hashutil.block_hash(block)
1253+        # If this works without an error, then validation is
1254+        # successful.
1255+        try:
1256+            bht.set_hashes(leaves={segnum: blockhash})
1257+        except (hashtree.BadHashError, hashtree.NotEnoughHashesError, \
1258+                IndexError), e:
1259+            raise CorruptShareError(reader.peerid,
1260+                                    reader.shnum,
1261+                                    "block hash tree failure: %s" % e)
1262+
1263+        # Reaching this point means that we know that this segment
1264+        # is correct. Now we need to check to see whether the share
1265+        # hash chain is also correct.
1266+        # SDMF wrote share hash chains that didn't contain the
1267+        # leaves, which would be produced from the block hash tree.
1268+        # So we need to validate the block hash tree first. If
1269+        # successful, then bht[0] will contain the root for the
1270+        # shnum, which will be a leaf in the share hash tree, which
1271+        # will allow us to validate the rest of the tree.
1272+        if self.share_hash_tree.needed_hashes(reader.shnum,
1273+                                              include_leaf=True) or \
1274+                                              self._verify:
1275+            try:
1276+                self.share_hash_tree.set_hashes(hashes=sharehashes[1],
1277+                                            leaves={reader.shnum: bht[0]})
1278+            except (hashtree.BadHashError, hashtree.NotEnoughHashesError, \
1279+                    IndexError), e:
1280+                raise CorruptShareError(reader.peerid,
1281+                                        reader.shnum,
1282+                                        "corrupt hashes: %s" % e)
1283+
1284+        self.log('share %d is valid for segment %d' % (reader.shnum,
1285+                                                       segnum))
1286+        return {reader.shnum: (block, salt)}
1287+
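The validation flow above hangs everything off Merkle trees: each block hashes to a leaf of the per-share block hash tree, whose root is in turn a leaf of the share hash tree. A toy `hashlib` sketch of that idea (plain SHA-256 with no tagging, unlike `allmydata.hashutil`, and rebuilding the whole tree where the real code checks a partial hash chain):

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree over already-hashed leaves
    (a simplified stand-in for allmydata.hashtree)."""
    layer = list(leaves)
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])        # pad odd layers
        layer = [h(layer[i] + layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]

# Publisher side: hash each block, commit to the root.
blocks = [b"block0", b"block1", b"block2", b"block3"]
root = merkle_root([h(b) for b in blocks])

# Retriever side: a single flipped block changes the recomputed root,
# so the corruption is caught before the segment is decoded.
tampered = list(blocks)
tampered[2] = b"bitflip"
print(merkle_root([h(b) for b in tampered]) == root)  # → False
```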
1288+
1289+    def _get_needed_hashes(self, reader, segnum):
1290+        """
1291+        I get the hashes needed to validate segnum from the reader, then return
1292+        to my caller when this is done.
1293+        """
1294+        bht = self._block_hash_trees[reader.shnum]
1295+        needed = bht.needed_hashes(segnum, include_leaf=True)
1296+        # The root of the block hash tree is also a leaf in the share
1297+        # hash tree. So we don't need to fetch it from the remote
1298+        # server. In the case of files with one segment, this means that
1299+        # we won't fetch any block hash tree from the remote server,
1300+        # since the hash of each share's single block is the entire block
1301+        # hash tree, and is a leaf in the share hash tree. This is fine,
1302+        # since any share corruption will be detected in the share hash
1303+        # tree.
1304+        #needed.discard(0)
1305+        self.log("getting blockhashes for segment %d, share %d: %s" % \
1306+                 (segnum, reader.shnum, str(needed)))
1307+        d1 = reader.get_blockhashes(needed, queue=True, force_remote=True)
1308+        if self.share_hash_tree.needed_hashes(reader.shnum):
1309+            need = self.share_hash_tree.needed_hashes(reader.shnum)
1310+            self.log("also need sharehashes for share %d: %s" % (reader.shnum,
1311+                                                                 str(need)))
1312+            d2 = reader.get_sharehashes(need, queue=True, force_remote=True)
1313+        else:
1314+            d2 = defer.succeed({}) # the logic in the next method
1315+                                   # expects a dict
1316+        dl = defer.DeferredList([d1, d2], consumeErrors=True)
1317+        return dl
1318+
1319+
1320+    def _decode_blocks(self, blocks_and_salts, segnum):
1321+        """
1322+        I take a list of k blocks and salts, and decode that into a
1323+        single encrypted segment.
1324+        """
1325+        d = {}
1326+        # We want to merge our dictionaries to the form
1327+        # {shnum: blocks_and_salts}
1328+        #
1329+        # The dictionaries come back from _validate_block in that
1330+        # form, so we just need to merge them.
1331+        for block_and_salt in blocks_and_salts:
1332+            d.update(block_and_salt[1])
1333+
1334+        # All of these blocks should have the same salt; in SDMF, it is
1335+        # the file-wide IV, while in MDMF it is the per-segment salt. In
1336+        # either case, we just need to get one of them and use it.
1337+        #
1338+        # d.items()[0] is like (shnum, (block, salt))
1339+        # d.items()[0][1] is like (block, salt)
1340+        # d.items()[0][1][1] is the salt.
1341+        salt = d.items()[0][1][1]
1342+        # Next, extract just the blocks from the dict. We'll use the
1343+        # salt in the next step.
1344+        share_and_shareids = [(k, v[0]) for k, v in d.items()]
1345+        d2 = dict(share_and_shareids)
1346+        shareids = []
1347+        shares = []
1348+        for shareid, share in d2.items():
1349             shareids.append(shareid)
1350             shares.append(share)
1351 
1352hunk ./src/allmydata/mutable/retrieve.py 956
1353-        assert len(shareids) >= k, len(shareids)
1354+        self._status.set_status("Decoding")
1355+        started = time.time()
1356+        assert len(shareids) >= self._required_shares, len(shareids)
1357         # zfec really doesn't want extra shares
1358hunk ./src/allmydata/mutable/retrieve.py 960
1359-        shareids = shareids[:k]
1360-        shares = shares[:k]
1361-
1362-        fec = codec.CRSDecoder()
1363-        fec.set_params(segsize, k, N)
1364-
1365-        self.log("params %s, we have %d shares" % ((segsize, k, N), len(shares)))
1366-        self.log("about to decode, shareids=%s" % (shareids,))
1367-        d = defer.maybeDeferred(fec.decode, shares, shareids)
1368-        def _done(buffers):
1369-            self._status.timings["decode"] = time.time() - started
1370-            self.log(" decode done, %d buffers" % len(buffers))
1371+        shareids = shareids[:self._required_shares]
1372+        shares = shares[:self._required_shares]
1373+        self.log("decoding segment %d" % segnum)
1374+        if segnum == self._num_segments - 1:
1375+            d = defer.maybeDeferred(self._tail_decoder.decode, shares, shareids)
1376+        else:
1377+            d = defer.maybeDeferred(self._segment_decoder.decode, shares, shareids)
1378+        def _process(buffers):
1379             segment = "".join(buffers)
1380hunk ./src/allmydata/mutable/retrieve.py 969
1381+            self.log(format="now decoding segment %(segnum)s of %(numsegs)s",
1382+                     segnum=segnum,
1383+                     numsegs=self._num_segments,
1384+                     level=log.NOISY)
1385             self.log(" joined length %d, datalength %d" %
1386hunk ./src/allmydata/mutable/retrieve.py 974
1387-                     (len(segment), datalength))
1388-            segment = segment[:datalength]
1389+                     (len(segment), self._data_length))
1390+            if segnum == self._num_segments - 1:
1391+                size_to_use = self._tail_data_size
1392+            else:
1393+                size_to_use = self._segment_size
1394+            segment = segment[:size_to_use]
1395             self.log(" segment len=%d" % len(segment))
1396hunk ./src/allmydata/mutable/retrieve.py 981
1397-            return segment
1398-        def _err(f):
1399-            self.log(" decode failed: %s" % f)
1400-            return f
1401-        d.addCallback(_done)
1402-        d.addErrback(_err)
1403+            self._status.timings.setdefault("decode", 0)
1404+            self._status.timings['decode'] = time.time() - started
1405+            return segment, salt
1406+        d.addCallback(_process)
1407         return d
1408 
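`_decode_blocks` merges the per-share `{shnum: (block, salt)}` dicts, peels off one salt (they are all identical for a given segment), and truncates the decoded tail segment to its true size. A sketch of that bookkeeping in Python 3 syntax (the patch is Python 2, where `d.items()[0]` indexes directly; function names here are hypothetical):

```python
def merge_blocks_and_salts(results):
    """results mimics a DeferredList result: [(success, {shnum: (block, salt)}), ...]."""
    d = {}
    for success, share_dict in results:
        if success:
            d.update(share_dict)
    # Every block of a segment carries the same salt (the file-wide IV
    # for SDMF, the per-segment salt for MDMF), so any one will do.
    salt = next(iter(d.values()))[1]
    shareids = list(d.keys())
    shares = [block for block, _ in d.values()]
    return shareids, shares, salt

def segment_size_to_use(segnum, num_segments, segment_size, tail_data_size):
    # zfec pads the tail segment, so decoded bytes must be truncated
    # back to the tail's real size on the last segment.
    return tail_data_size if segnum == num_segments - 1 else segment_size

results = [(True, {0: (b"b0", b"SALT")}), (True, {1: (b"b1", b"SALT")})]
shareids, shares, salt = merge_blocks_and_salts(results)
# shareids == [0, 1], shares == [b"b0", b"b1"], salt == b"SALT"
```

After this step the k shares and their shareids are handed to the zfec decoder (`self._segment_decoder` or `self._tail_decoder`), which only wants exactly k shares.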
1409hunk ./src/allmydata/mutable/retrieve.py 987
1410-    def _decrypt(self, crypttext, IV, readkey):
1411+
1412+    def _decrypt_segment(self, segment_and_salt):
1413+        """
1414+        I take a single segment and its salt, and decrypt it. I return
1415+        the plaintext of the segment that is in my argument.
1416+        """
1417+        segment, salt = segment_and_salt
1418         self._status.set_status("decrypting")
1419hunk ./src/allmydata/mutable/retrieve.py 995
1420+        self.log("decrypting segment %d" % self._current_segment)
1421         started = time.time()
1422hunk ./src/allmydata/mutable/retrieve.py 997
1423-        key = hashutil.ssk_readkey_data_hash(IV, readkey)
1424+        key = hashutil.ssk_readkey_data_hash(salt, self._node.get_readkey())
1425         decryptor = AES(key)
1426hunk ./src/allmydata/mutable/retrieve.py 999
1427-        plaintext = decryptor.process(crypttext)
1428-        self._status.timings["decrypt"] = time.time() - started
1429+        plaintext = decryptor.process(segment)
1430+        self._status.timings.setdefault("decrypt", 0)
1431+        self._status.timings['decrypt'] = time.time() - started
1432         return plaintext
1433 
1434hunk ./src/allmydata/mutable/retrieve.py 1004
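`_decrypt_segment` derives each segment's AES key from its salt and the node's readkey via `hashutil.ssk_readkey_data_hash(salt, readkey)`. The following toy sketch shows the shape of salt-keyed symmetric decryption; it substitutes a SHA-256 keystream for the real AES counter-mode cipher, and the derivation tag is an assumption, not Tahoe's actual tagged hash:

```python
import hashlib

def derive_segment_key(salt: bytes, readkey: bytes) -> bytes:
    # Stand-in for hashutil.ssk_readkey_data_hash (which uses a tagged hash).
    return hashlib.sha256(b"segment-key:" + salt + readkey).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256-based keystream (NOT AES-CTR)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

readkey = b"R" * 16
salt = b"S" * 16
key = derive_segment_key(salt, readkey)
ciphertext = xor_stream(key, b"segment plaintext")
plaintext = xor_stream(key, ciphertext)   # same operation decrypts
```

Because the key is salted per segment in MDMF, identical plaintext segments encrypt to different ciphertext; SDMF gets one file-wide IV instead.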
1435-    def _done(self, res):
1436-        if not self._running:
1437+
1438+    def notify_server_corruption(self, peerid, shnum, reason):
1439+        ss = self.servermap.connections[peerid]
1440+        ss.callRemoteOnly("advise_corrupt_share",
1441+                          "mutable", self._storage_index, shnum, reason)
1442+
1443+
1444+    def _try_to_validate_privkey(self, enc_privkey, reader):
1445+        alleged_privkey_s = self._node._decrypt_privkey(enc_privkey)
1446+        alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s)
1447+        if alleged_writekey != self._node.get_writekey():
1448+            self.log("invalid privkey from %s shnum %d" %
1449+                     (reader, reader.shnum),
1450+                     level=log.WEIRD, umid="YIw4tA")
1451+            if self._verify:
1452+                self.servermap.mark_bad_share(reader.peerid, reader.shnum,
1453+                                              self.verinfo[-2])
1454+                e = CorruptShareError(reader.peerid,
1455+                                      reader.shnum,
1456+                                      "invalid privkey")
1457+                f = failure.Failure(e)
1458+                self._bad_shares.add((reader.peerid, reader.shnum, f))
1459             return
1460hunk ./src/allmydata/mutable/retrieve.py 1027
1461+
1462+        # it's good
1463+        self.log("got valid privkey from shnum %d on reader %s" %
1464+                 (reader.shnum, reader))
1465+        privkey = rsa.create_signing_key_from_string(alleged_privkey_s)
1466+        self._node._populate_encprivkey(enc_privkey)
1467+        self._node._populate_privkey(privkey)
1468+        self._need_privkey = False
1469+
1470+
1471+    def _check_for_done(self, res):
1472+        """
1473+        I check to see if this Retrieve object has successfully finished
1474+        its work.
1475+
1476+        I can exit in the following ways:
1477+            - If there are no more segments to download, then I exit by
1478+              causing self._done_deferred to fire with the plaintext
1479+              content requested by the caller.
1480+            - If there are still segments to be downloaded, and there
1481+              are enough active readers (readers which have not broken
1482+              and have not given us corrupt data) to continue
1483+              downloading, I send control back to
1484+              _download_current_segment.
1485+            - If there are still segments to be downloaded but there are
1486+              not enough active peers to download them, I ask
1487+              _add_active_peers to add more peers. If it is successful,
1488+              it will call _download_current_segment. If there are not
1489+              enough peers to retrieve the file, then that will cause
1490+              _done_deferred to errback.
1491+        """
1492+        self.log("checking for doneness")
1493+        if self._current_segment > self._last_segment:
1494+            # No more segments to download, we're done.
1495+            self.log("got plaintext, done")
1496+            return self._done()
1497+
1498+        if len(self._active_readers) >= self._required_shares:
1499+            # More segments to download, but we have enough good peers
1500+            # in self._active_readers that we can do that without issue,
1501+            # so go nab the next segment.
1502+            self.log("not done yet: on segment %d of %d" % \
1503+                     (self._current_segment + 1, self._num_segments))
1504+            return self._download_current_segment()
1505+
1506+        self.log("not done yet: on segment %d of %d, need to add peers" % \
1507+                 (self._current_segment + 1, self._num_segments))
1508+        return self._add_active_peers()
1509+
1510+
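The control flow described in the docstring above reduces to a three-way decision on each pass. A pure-function restatement (names hypothetical, following the docstring's three exits):

```python
def next_action(current_segment, last_segment, active_readers, required_shares):
    """Mirror the three exits of _check_for_done."""
    if current_segment > last_segment:
        return "done"          # fire _done_deferred with the plaintext
    if active_readers >= required_shares:
        return "download"      # enough good readers: _download_current_segment
    return "add_peers"         # _add_active_peers; may errback _done_deferred

# With a 3-of-10 encoding and 4 segments (0..3):
assert next_action(4, 3, 3, 3) == "done"
assert next_action(2, 3, 3, 3) == "download"
assert next_action(2, 3, 2, 3) == "add_peers"
```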
1511+    def _done(self):
1512+        """
1513+        I am called by _check_for_done when the download process has
1514+        finished successfully. After making some useful logging
1515+        statements, I return the decrypted contents to the owner of this
1516+        Retrieve object through self._done_deferred.
1517+        """
1518         self._running = False
1519         self._status.set_active(False)
1520hunk ./src/allmydata/mutable/retrieve.py 1086
1521-        self._status.timings["total"] = time.time() - self._started
1522-        # res is either the new contents, or a Failure
1523-        if isinstance(res, failure.Failure):
1524-            self.log("Retrieve done, with failure", failure=res,
1525-                     level=log.UNUSUAL)
1526-            self._status.set_status("Failed")
1527+        now = time.time()
1528+        self._status.timings['total'] = now - self._started
1529+        self._status.timings['fetch'] = now - self._started_fetching
1530+
1531+        if self._verify:
1532+            ret = list(self._bad_shares)
1533+            self.log("done verifying, found %d bad shares" % len(ret))
1534         else:
1535hunk ./src/allmydata/mutable/retrieve.py 1094
1536-            self.log("Retrieve done, success!")
1537-            self._status.set_status("Finished")
1538-            self._status.set_progress(1.0)
1539-            # remember the encoding parameters, use them again next time
1540-            (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
1541-             offsets_tuple) = self.verinfo
1542-            self._node._populate_required_shares(k)
1543-            self._node._populate_total_shares(N)
1544-        eventually(self._done_deferred.callback, res)
1545+            # TODO: upload status here?
1546+            ret = self._consumer
1547+            self._consumer.unregisterProducer()
1548+        eventually(self._done_deferred.callback, ret)
1549+
1550+
1551+    def _failed(self):
1552+        """
1553+        I am called by _add_active_peers when there are not enough
1554+        active peers left to complete the download. After making some
1555+        useful logging statements, I fire self._done_deferred with a
1556+        NotEnoughSharesError (or, when verifying, with the list of bad
1557+        shares found so far).
1558+        """
1559+        self._running = False
1560+        self._status.set_active(False)
1561+        now = time.time()
1562+        self._status.timings['total'] = now - self._started
1563+        self._status.timings['fetch'] = now - self._started_fetching
1564 
1565hunk ./src/allmydata/mutable/retrieve.py 1114
1566+        if self._verify:
1567+            ret = list(self._bad_shares)
1568+        else:
1569+            format = ("ran out of peers: "
1570+                      "have %(have)d of %(total)d segments "
1571+                      "found %(bad)d bad shares "
1572+                      "encoding %(k)d-of-%(n)d")
1573+            args = {"have": self._current_segment,
1574+                    "total": self._num_segments,
1575+                    "need": self._last_segment,
1576+                    "k": self._required_shares,
1577+                    "n": self._total_shares,
1578+                    "bad": len(self._bad_shares)}
1579+            e = NotEnoughSharesError("%s, last failure: %s" % \
1580+                                     (format % args, str(self._last_failure)))
1581+            f = failure.Failure(e)
1582+            ret = f
1583+        eventually(self._done_deferred.callback, ret)
1584}
1585[mutable/publish: teach the publisher how to publish MDMF mutable files
1586Kevan Carstensen <kevan@isnotajoke.com>**20110802013931
1587 Ignore-this: 115217ec2b289452ec774cb725da8a86
1588 
1589 Like the downloader, the publisher needs some substantial changes to handle multiple segment mutable files.
1590] {
1591hunk ./src/allmydata/mutable/publish.py 3
1592 
1593 
1594-import os, struct, time
1595+import os, time
1596+from StringIO import StringIO
1597 from itertools import count
1598 from zope.interface import implements
1599 from twisted.internet import defer
1600hunk ./src/allmydata/mutable/publish.py 9
1601 from twisted.python import failure
1602-from allmydata.interfaces import IPublishStatus
1603+from allmydata.interfaces import IPublishStatus, SDMF_VERSION, MDMF_VERSION, \
1604+                                 IMutableUploadable
1605 from allmydata.util import base32, hashutil, mathutil, idlib, log
1606 from allmydata.util.dictutil import DictOfSets
1607 from allmydata import hashtree, codec
1608hunk ./src/allmydata/mutable/publish.py 21
1609 from allmydata.mutable.common import MODE_WRITE, MODE_CHECK, \
1610      UncoordinatedWriteError, NotEnoughServersError
1611 from allmydata.mutable.servermap import ServerMap
1612-from allmydata.mutable.layout import pack_prefix, pack_share, unpack_header, pack_checkstring, \
1613-     unpack_checkstring, SIGNED_PREFIX
1614+from allmydata.mutable.layout import unpack_checkstring, MDMFSlotWriteProxy, \
1615+                                     SDMFSlotWriteProxy
1616+
1617+KiB = 1024
1618+DEFAULT_MAX_SEGMENT_SIZE = 128 * KiB
1619+PUSHING_BLOCKS_STATE = 0
1620+PUSHING_EVERYTHING_ELSE_STATE = 1
1621+DONE_STATE = 2
1622 
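`DEFAULT_MAX_SEGMENT_SIZE` above caps MDMF segments at 128 KiB; the publisher later derives the segment count and tail size from the data length with `mathutil.div_ceil`. A minimal sketch of that arithmetic (`div_ceil` reimplemented here; the tail computation is an assumption modeled on the downloader's `_tail_data_size`):

```python
KiB = 1024
DEFAULT_MAX_SEGMENT_SIZE = 128 * KiB

def div_ceil(n, d):
    # Equivalent to allmydata.util.mathutil.div_ceil for positive ints.
    return (n + d - 1) // d

def segmentation(datalength, segment_size=DEFAULT_MAX_SEGMENT_SIZE):
    """Return (num_segments, tail_data_size) for a file of datalength bytes."""
    num_segments = div_ceil(datalength, segment_size)
    tail = datalength - (num_segments - 1) * segment_size
    return num_segments, tail

# A 300 KiB file becomes three segments: 128 + 128 + 44 KiB.
assert segmentation(300 * KiB) == (3, 44 * KiB)
```

SDMF, by contrast, treats the whole file as one segment, which is why the pre-MDMF publisher could hold everything in RAM.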
1623 class PublishStatus:
1624     implements(IPublishStatus)
1625hunk ./src/allmydata/mutable/publish.py 112
1626         self._log_number = num
1627         self._running = True
1628         self._first_write_error = None
1629+        self._last_failure = None
1630 
1631         self._status = PublishStatus()
1632         self._status.set_storage_index(self._storage_index)
1633hunk ./src/allmydata/mutable/publish.py 119
1634         self._status.set_helper(False)
1635         self._status.set_progress(0.0)
1636         self._status.set_active(True)
1637+        self._version = self._node.get_version()
1638+        assert self._version in (SDMF_VERSION, MDMF_VERSION)
1639+
1640 
1641     def get_status(self):
1642         return self._status
1643hunk ./src/allmydata/mutable/publish.py 133
1644             kwargs["facility"] = "tahoe.mutable.publish"
1645         return log.msg(*args, **kwargs)
1646 
1647+
1648+    def update(self, data, offset, blockhashes, version):
1649+        """
1650+        I replace the contents of this file with the contents of data,
1651+        starting at offset. I return a Deferred that fires with None
1652+        when the replacement has been completed, or with an error if
1653+        something went wrong during the process.
1654+
1655+        Note that this process will not upload new shares. If the file
1656+        being updated is in need of repair, callers will have to repair
1657+        it on their own.
1658+        """
1659+        # How this works:
1660+        # 1. Make peer assignments. We'll assign each share that we know
1661+        # about on the grid to the peer that currently holds that
1662+        # share, and will not place any new shares.
1663+        # 2. Set up encoding parameters. Most of these will stay the
1664+        # same -- datalength will change, as will some of the offsets.
1665+        # 3. Upload the new segments.
1666+        # 4. Be done.
1667+        assert IMutableUploadable.providedBy(data)
1668+
1669+        self.data = data
1670+
1671+        # XXX: Use the MutableFileVersion instead.
1672+        self.datalength = self._node.get_size()
1673+        if data.get_size() > self.datalength:
1674+            self.datalength = data.get_size()
1675+
1676+        self.log("starting update")
1677+        self.log("adding new data of length %d at offset %d" % \
1678+                    (data.get_size(), offset))
1679+        self.log("new data length is %d" % self.datalength)
1680+        self._status.set_size(self.datalength)
1681+        self._status.set_status("Started")
1682+        self._started = time.time()
1683+
1684+        self.done_deferred = defer.Deferred()
1685+
1686+        self._writekey = self._node.get_writekey()
1687+        assert self._writekey, "need write capability to publish"
1688+
1689+        # first, which servers will we publish to? We require that the
1690+        # servermap was updated in MODE_WRITE, so we can depend upon the
1691+        # peerlist computed by that process instead of computing our own.
1692+        assert self._servermap
1693+        assert self._servermap.last_update_mode in (MODE_WRITE, MODE_CHECK)
1694+        # we will push a version that is one larger than anything present
1695+        # in the grid, according to the servermap.
1696+        self._new_seqnum = self._servermap.highest_seqnum() + 1
1697+        self._status.set_servermap(self._servermap)
1698+
1699+        self.log(format="new seqnum will be %(seqnum)d",
1700+                 seqnum=self._new_seqnum, level=log.NOISY)
1701+
1702+        # We're updating an existing file, so all of the following
1703+        # should be available.
1704+        self.readkey = self._node.get_readkey()
1705+        self.required_shares = self._node.get_required_shares()
1706+        assert self.required_shares is not None
1707+        self.total_shares = self._node.get_total_shares()
1708+        assert self.total_shares is not None
1709+        self._status.set_encoding(self.required_shares, self.total_shares)
1710+
1711+        self._pubkey = self._node.get_pubkey()
1712+        assert self._pubkey
1713+        self._privkey = self._node.get_privkey()
1714+        assert self._privkey
1715+        self._encprivkey = self._node.get_encprivkey()
1716+
1717+        sb = self._storage_broker
1718+        full_peerlist = [(s.get_serverid(), s.get_rref())
1719+                         for s in sb.get_servers_for_psi(self._storage_index)]
1720+        self.full_peerlist = full_peerlist # for use later, immutable
1721+        self.bad_peers = set() # peerids who have errbacked/refused requests
1722+
1723+        # This will set self.segment_size, self.num_segments, and
1724+        # self.fec. TODO: Does it know how to do the offset? Probably
1725+        # not. So do that part next.
1726+        self.setup_encoding_parameters(offset=offset)
1727+
1728+        # if we experience any surprises (writes which were rejected because
1729+        # our test vector did not match, or shares which we didn't expect to
1730+        # see), we set this flag and report an UncoordinatedWriteError at the
1731+        # end of the publish process.
1732+        self.surprised = False
1733+
1734+        # we keep track of three tables. The first is our goal: which share
1735+        # we want to see on which servers. This is initially populated by the
1736+        # existing servermap.
1737+        self.goal = set() # pairs of (peerid, shnum) tuples
1738+
1739+        # the second table is our list of outstanding queries: those which
1740+        # are in flight and may or may not be delivered, accepted, or
1741+        # acknowledged. Items are added to this table when the request is
1742+        # sent, and removed when the response returns (or errbacks).
1743+        self.outstanding = set() # (peerid, shnum) tuples
1744+
1745+        # the third is a table of successes: share which have actually been
1746+        # placed. These are populated when responses come back with success.
1747+        # When self.placed == self.goal, we're done.
1748+        self.placed = set() # (peerid, shnum) tuples
1749+
1750+        # we also keep a mapping from peerid to RemoteReference. Each time we
1751+        # pull a connection out of the full peerlist, we add it to this for
1752+        # use later.
1753+        self.connections = {}
1754+
1755+        self.bad_share_checkstrings = {}
1756+
1757+        # This is set at the last step of the publishing process.
1758+        self.versioninfo = ""
1759+
1760+        # we use the servermap to populate the initial goal: this way we will
1761+        # try to update each existing share in place. Since we're
1762+        # updating, we ignore damaged and missing shares -- callers must
1763+        # do a repair to repair and recreate these.
1764+        for (peerid, shnum) in self._servermap.servermap:
1765+            self.goal.add( (peerid, shnum) )
1766+            self.connections[peerid] = self._servermap.connections[peerid]
1767+        self.writers = {}
1768+
1769+        # SDMF files are updated differently.
1770+        self._version = MDMF_VERSION
1771+        writer_class = MDMFSlotWriteProxy
1772+
1773+        # For each (peerid, shnum) in self.goal, we make a
1774+        # write proxy for that peer. We'll use this to write
1775+        # shares to the peer.
1776+        for key in self.goal:
1777+            peerid, shnum = key
1778+            write_enabler = self._node.get_write_enabler(peerid)
1779+            renew_secret = self._node.get_renewal_secret(peerid)
1780+            cancel_secret = self._node.get_cancel_secret(peerid)
1781+            secrets = (write_enabler, renew_secret, cancel_secret)
1782+
1783+            self.writers[shnum] =  writer_class(shnum,
1784+                                                self.connections[peerid],
1785+                                                self._storage_index,
1786+                                                secrets,
1787+                                                self._new_seqnum,
1788+                                                self.required_shares,
1789+                                                self.total_shares,
1790+                                                self.segment_size,
1791+                                                self.datalength)
1792+            self.writers[shnum].peerid = peerid
1793+            assert (peerid, shnum) in self._servermap.servermap
1794+            old_versionid, old_timestamp = self._servermap.servermap[key]
1795+            (old_seqnum, old_root_hash, old_salt, old_segsize,
1796+             old_datalength, old_k, old_N, old_prefix,
1797+             old_offsets_tuple) = old_versionid
1798+            self.writers[shnum].set_checkstring(old_seqnum,
1799+                                                old_root_hash,
1800+                                                old_salt)
1801+
1802+        # Our remote shares will not have a complete checkstring until
1803+        # after we are done writing share data and have started to write
1804+        # blocks. In the meantime, we need to know what to look for when
1805+        # writing, so that we can detect UncoordinatedWriteErrors.
1806+        self._checkstring = self.writers.values()[0].get_checkstring()
1807+
1808+        # Now, we start pushing shares.
1809+        self._status.timings["setup"] = time.time() - self._started
1810+        # First, we encrypt, encode, and publish the shares that we need
1811+        # to encrypt, encode, and publish.
1812+
1813+        # Our update process fetched these for us. We need to update
1814+        # them in place as publishing happens.
1815+        self.blockhashes = {} # (shnum, [blockhashes])
1816+        for (i, bht) in blockhashes.iteritems():
1817+            # We need to extract the leaves from our old hash tree.
1818+            old_segcount = mathutil.div_ceil(version[4],
1819+                                             version[3])
1820+            h = hashtree.IncompleteHashTree(old_segcount)
1821+            bht = dict(enumerate(bht))
1822+            h.set_hashes(bht)
1823+            leaves = h[h.get_leaf_index(0):]
1824+            for j in xrange(self.num_segments - len(leaves)):
1825+                leaves.append(None)
1826+
1827+            assert len(leaves) >= self.num_segments
1828+            self.blockhashes[i] = leaves
1829+            # This list will now be the leaves that were set during the
1830+            # initial upload + enough empty hashes to make it a
1831+            # power-of-two. If we exceed a power of two boundary, we
1832+            # should be encoding the file over again, and should not be
1833+            # here. So, we have
1834+            #assert len(self.blockhashes[i]) == \
1835+            #    hashtree.roundup_pow2(self.num_segments), \
1836+            #        len(self.blockhashes[i])
1837+            # XXX: Except this doesn't work. Figure out why.
1838+
1839+        # These are filled in later, after we've modified the block hash
1840+        # tree suitably.
1841+        self.sharehash_leaves = None # eventually [sharehashes]
1842+        self.sharehashes = {} # shnum -> [sharehash leaves necessary to
1843+                              # validate the share]
1844+
1845+        self.log("Starting push")
1846+
1847+        self._state = PUSHING_BLOCKS_STATE
1848+        self._push()
1849+
1850+        return self.done_deferred
1851+
1852+
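Each writer above is primed with the old `(seqnum, root_hash, salt)` checkstring so that the server-side test vector rejects any writer whose view of the slot is stale, surfacing an `UncoordinatedWriteError`. A toy model of that test-and-set discipline (the class and method here are simplified hypotheticals, not the real `slot_testv_and_readv_and_writev` protocol):

```python
class SlotServer:
    """Toy test-vector-and-write: the write succeeds only if the stored
    checkstring still matches what the writer expects to find."""
    def __init__(self, checkstring):
        self.checkstring = checkstring

    def testv_and_writev(self, expected, new_checkstring):
        if self.checkstring != expected:
            return False          # surprise -> UncoordinatedWriteError
        self.checkstring = new_checkstring
        return True

old = (3, b"roothash-v3", b"salt-v3")
server = SlotServer(old)

# The first writer, holding the current checkstring, wins:
assert server.testv_and_writev(old, (4, b"roothash-v4", b"salt-v4"))
# A second writer still holding the old checkstring is now rejected:
assert not server.testv_and_writev(old, (4, b"other-root", b"other-salt"))
```

This is why the code keeps `self._checkstring` from `self.writers.values()[0]` around during the push: it is the value to watch for when deciding whether a write was surprised.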
1853     def publish(self, newdata):
1854         """Publish the filenode's current contents.  Returns a Deferred that
1855         fires (with None) when the publish has done as much work as it's ever
1856hunk ./src/allmydata/mutable/publish.py 346
1857         simultaneous write.
1858         """
1859 
1860-        # 1: generate shares (SDMF: files are small, so we can do it in RAM)
1861-        # 2: perform peer selection, get candidate servers
1862-        #  2a: send queries to n+epsilon servers, to determine current shares
1863-        #  2b: based upon responses, create target map
1864-        # 3: send slot_testv_and_readv_and_writev messages
1865-        # 4: as responses return, update share-dispatch table
1866-        # 4a: may need to run recovery algorithm
1867-        # 5: when enough responses are back, we're done
1868+        # 0. Setup encoding parameters, encoder, and other such things.
1869+        # 1. Encrypt, encode, and publish segments.
1870+        assert IMutableUploadable.providedBy(newdata)
1871 
1872hunk ./src/allmydata/mutable/publish.py 350
1873-        self.log("starting publish, datalen is %s" % len(newdata))
1874-        self._status.set_size(len(newdata))
1875+        self.data = newdata
1876+        self.datalength = newdata.get_size()
1877+        #if self.datalength >= DEFAULT_MAX_SEGMENT_SIZE:
1878+        #    self._version = MDMF_VERSION
1879+        #else:
1880+        #    self._version = SDMF_VERSION
1881+
1882+        self.log("starting publish, datalen is %s" % self.datalength)
1883+        self._status.set_size(self.datalength)
1884         self._status.set_status("Started")
1885         self._started = time.time()
1886 
1887hunk ./src/allmydata/mutable/publish.py 407
1888         self.full_peerlist = full_peerlist # for use later, immutable
1889         self.bad_peers = set() # peerids who have errbacked/refused requests
1890 
1891-        self.newdata = newdata
1892-        self.salt = os.urandom(16)
1893-
1894+        # This will set self.segment_size, self.num_segments, and
1895+        # self.fec.
1896         self.setup_encoding_parameters()
1897 
1898         # if we experience any surprises (writes which were rejected because
1899hunk ./src/allmydata/mutable/publish.py 417
1900         # end of the publish process.
1901         self.surprised = False
1902 
1903-        # as a failsafe, refuse to iterate through self.loop more than a
1904-        # thousand times.
1905-        self.looplimit = 1000
1906-
1907         # we keep track of three tables. The first is our goal: which share
1908         # we want to see on which servers. This is initially populated by the
1909         # existing servermap.
1910hunk ./src/allmydata/mutable/publish.py 440
1911 
1912         self.bad_share_checkstrings = {}
1913 
1914+        # This is set at the last step of the publishing process.
1915+        self.versioninfo = ""
1916+
1917         # we use the servermap to populate the initial goal: this way we will
1918         # try to update each existing share in place.
1919         for (peerid, shnum) in self._servermap.servermap:
1920hunk ./src/allmydata/mutable/publish.py 456
1921             self.bad_share_checkstrings[key] = old_checkstring
1922             self.connections[peerid] = self._servermap.connections[peerid]
1923 
1924-        # create the shares. We'll discard these as they are delivered. SDMF:
1925-        # we're allowed to hold everything in memory.
1926+        # TODO: Make this part do peer selection.
1927+        self.update_goal()
1928+        self.writers = {}
1929+        if self._version == MDMF_VERSION:
1930+            writer_class = MDMFSlotWriteProxy
1931+        else:
1932+            writer_class = SDMFSlotWriteProxy
1933+
1934+        # For each (peerid, shnum) in self.goal, we make a
1935+        # write proxy for that peer. We'll use this to write
1936+        # shares to the peer.
1937+        for key in self.goal:
1938+            peerid, shnum = key
1939+            write_enabler = self._node.get_write_enabler(peerid)
1940+            renew_secret = self._node.get_renewal_secret(peerid)
1941+            cancel_secret = self._node.get_cancel_secret(peerid)
1942+            secrets = (write_enabler, renew_secret, cancel_secret)
1943 
1944hunk ./src/allmydata/mutable/publish.py 474
1945+            self.writers[shnum] =  writer_class(shnum,
1946+                                                self.connections[peerid],
1947+                                                self._storage_index,
1948+                                                secrets,
1949+                                                self._new_seqnum,
1950+                                                self.required_shares,
1951+                                                self.total_shares,
1952+                                                self.segment_size,
1953+                                                self.datalength)
1954+            self.writers[shnum].peerid = peerid
1955+            if (peerid, shnum) in self._servermap.servermap:
1956+                old_versionid, old_timestamp = self._servermap.servermap[key]
1957+                (old_seqnum, old_root_hash, old_salt, old_segsize,
1958+                 old_datalength, old_k, old_N, old_prefix,
1959+                 old_offsets_tuple) = old_versionid
1960+                self.writers[shnum].set_checkstring(old_seqnum,
1961+                                                    old_root_hash,
1962+                                                    old_salt)
1963+            elif (peerid, shnum) in self.bad_share_checkstrings:
1964+                old_checkstring = self.bad_share_checkstrings[(peerid, shnum)]
1965+                self.writers[shnum].set_checkstring(old_checkstring)
1966+
1967+        # Our remote shares will not have a complete checkstring until
1968+        # after we are done writing share data and have started to write
1969+        # blocks. In the meantime, we need to know what to look for when
1970+        # writing, so that we can detect UncoordinatedWriteErrors.
1971+        self._checkstring = self.writers.values()[0].get_checkstring()
1972+
1973+        # Now, we start pushing shares.
1974         self._status.timings["setup"] = time.time() - self._started
1975hunk ./src/allmydata/mutable/publish.py 504
1976-        d = self._encrypt_and_encode()
1977-        d.addCallback(self._generate_shares)
1978-        def _start_pushing(res):
1979-            self._started_pushing = time.time()
1980-            return res
1981-        d.addCallback(_start_pushing)
1982-        d.addCallback(self.loop) # trigger delivery
1983-        d.addErrback(self._fatal_error)
1984+        # First, we encrypt, encode, and publish the share data
1985+        # for the new version of the file.
1986+
1987+        # This will eventually hold the block hash chain for each share
1988+        # that we publish. We define it this way so that empty publishes
1989+        # will still have something to write to the remote slot.
1990+        self.blockhashes = dict([(i, []) for i in xrange(self.total_shares)])
1991+        for i in xrange(self.total_shares):
1992+            blocks = self.blockhashes[i]
1993+            for j in xrange(self.num_segments):
1994+                blocks.append(None)
1995+        self.sharehash_leaves = None # eventually [sharehashes]
1996+        self.sharehashes = {} # shnum -> [sharehash leaves necessary to
1997+                              # validate the share]
1998+
1999+        self.log("Starting push")
2000+
2001+        self._state = PUSHING_BLOCKS_STATE
2002+        self._push()
2003 
2004         return self.done_deferred
2005 
2006hunk ./src/allmydata/mutable/publish.py 526
2007-    def setup_encoding_parameters(self):
2008-        segment_size = len(self.newdata)
2009+
2010+    def _update_status(self):
2011+        self._status.set_status("Sending Shares: %d placed out of %d, "
2012+                                "%d messages outstanding" %
2013+                                (len(self.placed),
2014+                                 len(self.goal),
2015+                                 len(self.outstanding)))
2016+        self._status.set_progress(1.0 * len(self.placed) / len(self.goal))
2017+
2018+
2019+    def setup_encoding_parameters(self, offset=0):
2020+        if self._version == MDMF_VERSION:
2021+            segment_size = DEFAULT_MAX_SEGMENT_SIZE # 128 KiB by default
2022+        else:
2023+            segment_size = self.datalength # SDMF is only one segment
2024         # this must be a multiple of self.required_shares
2025         segment_size = mathutil.next_multiple(segment_size,
2026                                               self.required_shares)
2027hunk ./src/allmydata/mutable/publish.py 545
2028         self.segment_size = segment_size
2029+
2030+        # Calculate the starting segment for the upload.
2031         if segment_size:
2032hunk ./src/allmydata/mutable/publish.py 548
2033-            self.num_segments = mathutil.div_ceil(len(self.newdata),
2034+            # We use div_ceil instead of integer division here because
2035+            # it is semantically correct.
2036+            # If datalength isn't an even multiple of segment_size but
2037+            # is larger than segment_size, integer division
2038+            # (datalength // segment_size) rounds down, counting only
2039+            # the full segments and ignoring the partial tail segment
2040+            # at the end. div_ceil rounds up, which gives us the
2041+            # right number of segments for the data that we're
2042+            # given.
2043+            self.num_segments = mathutil.div_ceil(self.datalength,
2044                                                   segment_size)
2045hunk ./src/allmydata/mutable/publish.py 559
2046+
2047+            self.starting_segment = offset // segment_size
2048+
2049         else:
2050             self.num_segments = 0
2051hunk ./src/allmydata/mutable/publish.py 564
2052-        assert self.num_segments in [0, 1,] # SDMF restrictions
2053+            self.starting_segment = 0
2054 
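[editorial aside, not part of the patch: the rounding behavior that the div_ceil comment above describes, as a minimal standalone sketch with hypothetical helper names]

```python
def div_ceil(n, d):
    # Round n / d up to the nearest integer, like mathutil.div_ceil.
    return (n + d - 1) // d

# A 300-byte file with 128-byte segments needs 3 segments: two full
# segments plus a 44-byte tail. Plain integer division reports only
# the 2 full segments and would drop the tail.
num_segments = div_ceil(300, 128)   # -> 3
truncated = 300 // 128              # -> 2, loses the tail data
```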
2055hunk ./src/allmydata/mutable/publish.py 566
2056-    def _fatal_error(self, f):
2057-        self.log("error during loop", failure=f, level=log.UNUSUAL)
2058-        self._done(f)
2059 
2060hunk ./src/allmydata/mutable/publish.py 567
2061-    def _update_status(self):
2062-        self._status.set_status("Sending Shares: %d placed out of %d, "
2063-                                "%d messages outstanding" %
2064-                                (len(self.placed),
2065-                                 len(self.goal),
2066-                                 len(self.outstanding)))
2067-        self._status.set_progress(1.0 * len(self.placed) / len(self.goal))
2068+        self.log("building encoding parameters for file")
2069+        self.log("got segsize %d" % self.segment_size)
2070+        self.log("got %d segments" % self.num_segments)
2071 
2072hunk ./src/allmydata/mutable/publish.py 571
2073-    def loop(self, ignored=None):
2074-        self.log("entering loop", level=log.NOISY)
2075-        if not self._running:
2076-            return
2077+        if self._version == SDMF_VERSION:
2078+            assert self.num_segments in (0, 1) # SDMF
2079+        # calculate the tail segment size.
2080 
2081hunk ./src/allmydata/mutable/publish.py 575
2082-        self.looplimit -= 1
2083-        if self.looplimit <= 0:
2084-            raise LoopLimitExceededError("loop limit exceeded")
2085+        if segment_size and self.datalength:
2086+            self.tail_segment_size = self.datalength % segment_size
2087+            self.log("got tail segment size %d" % self.tail_segment_size)
2088+        else:
2089+            self.tail_segment_size = 0
2090 
2091hunk ./src/allmydata/mutable/publish.py 581
2092-        if self.surprised:
2093-            # don't send out any new shares, just wait for the outstanding
2094-            # ones to be retired.
2095-            self.log("currently surprised, so don't send any new shares",
2096-                     level=log.NOISY)
2097+        if self.tail_segment_size == 0 and segment_size:
2098+            # The tail segment is the same size as the other segments.
2099+            self.tail_segment_size = segment_size
2100+
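[editorial aside, not part of the patch: the tail-size logic above as a standalone sketch with a hypothetical helper name]

```python
def tail_segment_size(datalength, segment_size):
    # The last segment holds whatever data is left over after the
    # full-sized segments.
    if segment_size and datalength:
        tail = datalength % segment_size
    else:
        tail = 0
    if tail == 0 and segment_size:
        # The data divides evenly, so the tail segment is the same
        # size as the other segments.
        tail = segment_size
    return tail
```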
2101+        # Make FEC encoders
2102+        fec = codec.CRSEncoder()
2103+        fec.set_params(self.segment_size,
2104+                       self.required_shares, self.total_shares)
2105+        self.piece_size = fec.get_block_size()
2106+        self.fec = fec
2107+
2108+        if self.tail_segment_size == self.segment_size:
2109+            self.tail_fec = self.fec
2110         else:
2111hunk ./src/allmydata/mutable/publish.py 595
2112-            self.update_goal()
2113-            # how far are we from our goal?
2114-            needed = self.goal - self.placed - self.outstanding
2115-            self._update_status()
2116+            tail_fec = codec.CRSEncoder()
2117+            tail_fec.set_params(self.tail_segment_size,
2118+                                self.required_shares,
2119+                                self.total_shares)
2120+            self.tail_fec = tail_fec
2121 
2122hunk ./src/allmydata/mutable/publish.py 601
2123-            if needed:
2124-                # we need to send out new shares
2125-                self.log(format="need to send %(needed)d new shares",
2126-                         needed=len(needed), level=log.NOISY)
2127-                self._send_shares(needed)
2128-                return
2129+        self._current_segment = self.starting_segment
2130+        self.end_segment = self.num_segments - 1
2131+        # Now figure out where the last segment should be.
2132+        if self.data.get_size() != self.datalength:
2133+            # We're updating a few segments in the middle of a mutable
2134+            # file, so we don't want to republish the whole thing.
2135+            # (we don't have enough data to do that even if we wanted
2136+            # to)
2137+            end = self.data.get_size()
2138+            self.end_segment = end // segment_size
2139+            if end % segment_size == 0:
2140+                self.end_segment -= 1
2141 
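[editorial aside, not part of the patch: the end-segment arithmetic above as a standalone sketch with a hypothetical helper name]

```python
def last_segment_index(update_size, segment_size):
    # Index of the last segment touched when only update_size bytes
    # of the file are being republished. When the update ends exactly
    # on a segment boundary, the quotient overshoots by one.
    end_segment = update_size // segment_size
    if update_size % segment_size == 0:
        end_segment -= 1
    return end_segment
```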
2142hunk ./src/allmydata/mutable/publish.py 614
2143-        if self.outstanding:
2144-            # queries are still pending, keep waiting
2145-            self.log(format="%(outstanding)d queries still outstanding",
2146-                     outstanding=len(self.outstanding),
2147-                     level=log.NOISY)
2148-            return
2149+        self.log("got start segment %d" % self.starting_segment)
2150+        self.log("got end segment %d" % self.end_segment)
2151+
2152+
2153+    def _push(self, ignored=None):
2154+        """
2155+        I manage state transitions. In particular, I check that we
2156+        still have enough working writers to complete the upload
2157+        successfully.
2158+        """
2159+        # Can we still successfully publish this file?
2160+        # TODO: Keep track of outstanding queries before aborting the
2161+        #       process.
2162+        if len(self.writers) < self.required_shares or self.surprised:
2163+            return self._failure()
2164+
2165+        # Figure out what we need to do next. Each of these needs to
2166+        # return a deferred so that we don't block execution when this
2167+        # is first called in the upload method.
2168+        if self._state == PUSHING_BLOCKS_STATE:
2169+            return self.push_segment(self._current_segment)
2170+
2171+        elif self._state == PUSHING_EVERYTHING_ELSE_STATE:
2172+            return self.push_everything_else()
2173+
2174+        # If we make it to this point, we were successful in placing the
2175+        # file.
2176+        return self._done()
2177+
2178+
2179+    def push_segment(self, segnum):
2180+        if self.num_segments == 0 and self._version == SDMF_VERSION:
2181+            self._add_dummy_salts()
2182+
2183+        if segnum > self.end_segment:
2184+            # We don't have any more segments to push.
2185+            self._state = PUSHING_EVERYTHING_ELSE_STATE
2186+            return self._push()
2187+
2188+        d = self._encode_segment(segnum)
2189+        d.addCallback(self._push_segment, segnum)
2190+        def _increment_segnum(ign):
2191+            self._current_segment += 1
2192+        # XXX: I don't think we need to do addBoth here -- any errBacks
2193+        # should be handled within push_segment.
2194+        d.addCallback(_increment_segnum)
2195+        d.addCallback(self._turn_barrier)
2196+        d.addCallback(self._push)
2197+        d.addErrback(self._failure)
2198+
2199+
2200+    def _turn_barrier(self, result):
2201+        """
2202+        I help the publish process avoid the recursion limit issues
2203+        described in #237.
2204+        """
2205+        return fireEventually(result)
2206+
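[editorial aside, not part of the patch and not using foolscap: fireEventually reschedules the continuation on a later reactor turn instead of recursing synchronously. A minimal queue-based trampoline, with hypothetical names, showing why such a turn barrier keeps a many-segment upload from hitting Python's recursion limit (the issue ticket #237 describes)]

```python
import collections

class Trampoline:
    """A toy stand-in for an event loop's 'run this on the next turn'."""
    def __init__(self):
        self._queue = collections.deque()
    def call_soon(self, fn, *args):
        self._queue.append((fn, args))
    def run(self):
        while self._queue:
            fn, args = self._queue.popleft()
            fn(*args)

results = []
loop = Trampoline()

def push_segment(segnum, total):
    if segnum == total:
        return
    results.append(segnum)
    # Turn barrier: continue on the next turn instead of recursing,
    # so the stack depth stays constant no matter how many segments.
    loop.call_soon(push_segment, segnum + 1, total)

push_segment(0, 10000)   # far beyond the default recursion limit
loop.run()
```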
2207+
2208+    def _add_dummy_salts(self):
2209+        """
2210+        SDMF files need a salt even if they're empty, or the signature
2211+        won't make sense. This method adds a dummy salt to each of our
2212+        SDMF writers so that they can write the signature later.
2213+        """
2214+        salt = os.urandom(16)
2215+        assert self._version == SDMF_VERSION
2216+
2217+        for writer in self.writers.itervalues():
2218+            writer.put_salt(salt)
2219+
2220+
2221+    def _encode_segment(self, segnum):
2222+        """
2223+        I encrypt and encode the segment segnum.
2224+        """
2225+        started = time.time()
2226+
2227+        if segnum + 1 == self.num_segments:
2228+            segsize = self.tail_segment_size
2229+        else:
2230+            segsize = self.segment_size
2231+
2232+
2233+        self.log("Pushing segment %d of %d" % (segnum + 1, self.num_segments))
2234+        data = self.data.read(segsize)
2235+        # XXX: This is dumb. Why return a list?
2236+        data = "".join(data)
2237+
2238+        assert len(data) == segsize, len(data)
2239+
2240+        salt = os.urandom(16)
2241+
2242+        key = hashutil.ssk_readkey_data_hash(salt, self.readkey)
2243+        self._status.set_status("Encrypting")
2244+        enc = AES(key)
2245+        crypttext = enc.process(data)
2246+        assert len(crypttext) == len(data)
2247 
2248hunk ./src/allmydata/mutable/publish.py 713
2249-        # no queries outstanding, no placements needed: we're done
2250-        self.log("no queries outstanding, no placements needed: done",
2251-                 level=log.OPERATIONAL)
2252         now = time.time()
2253hunk ./src/allmydata/mutable/publish.py 714
2254-        elapsed = now - self._started_pushing
2255-        self._status.timings["push"] = elapsed
2256-        return self._done(None)
2257+        self._status.timings["encrypt"] = now - started
2258+        started = now
2259+
2260+        # now apply FEC
2261+        if segnum + 1 == self.num_segments:
2262+            fec = self.tail_fec
2263+        else:
2264+            fec = self.fec
2265+
2266+        self._status.set_status("Encoding")
2267+        crypttext_pieces = [None] * self.required_shares
2268+        piece_size = fec.get_block_size()
2269+        for i in range(len(crypttext_pieces)):
2270+            offset = i * piece_size
2271+            piece = crypttext[offset:offset+piece_size]
2272+            piece = piece + "\x00"*(piece_size - len(piece)) # padding
2273+            crypttext_pieces[i] = piece
2274+            assert len(piece) == piece_size
2275+        d = fec.encode(crypttext_pieces)
2276+        def _done_encoding(res):
2277+            elapsed = time.time() - started
2278+            self._status.timings["encode"] = elapsed
2279+            return (res, salt)
2280+        d.addCallback(_done_encoding)
2281+        return d
2282+
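[editorial aside, not part of the patch: the split-and-pad step above as a standalone sketch with a hypothetical helper name. The ciphertext is cut into required_shares pieces and the last piece is zero-padded so every FEC input block has the same size]

```python
def pad_pieces(crypttext, required_shares, piece_size):
    # Split ciphertext into required_shares pieces, zero-padding the
    # final short piece up to piece_size, as the encoder expects
    # equal-sized inputs.
    pieces = []
    for i in range(required_shares):
        piece = crypttext[i * piece_size:(i + 1) * piece_size]
        piece = piece + b"\x00" * (piece_size - len(piece))
        pieces.append(piece)
    return pieces
```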
2283+
2284+    def _push_segment(self, encoded_and_salt, segnum):
2285+        """
2286+        I push (data, salt) as segment number segnum.
2287+        """
2288+        results, salt = encoded_and_salt
2289+        shares, shareids = results
2290+        self._status.set_status("Pushing segment")
2291+        for i in xrange(len(shares)):
2292+            sharedata = shares[i]
2293+            shareid = shareids[i]
2294+            if self._version == MDMF_VERSION:
2295+                hashed = salt + sharedata
2296+            else:
2297+                hashed = sharedata
2298+            block_hash = hashutil.block_hash(hashed)
2299+            self.blockhashes[shareid][segnum] = block_hash
2300+            # find the writer for this share
2301+            writer = self.writers[shareid]
2302+            writer.put_block(sharedata, segnum, salt)
2303+
2304+
2305+    def push_everything_else(self):
2306+        """
2307+        I put everything else associated with a share.
2308+        """
2309+        self._pack_started = time.time()
2310+        self.push_encprivkey()
2311+        self.push_blockhashes()
2312+        self.push_sharehashes()
2313+        self.push_toplevel_hashes_and_signature()
2314+        d = self.finish_publishing()
2315+        def _change_state(ignored):
2316+            self._state = DONE_STATE
2317+        d.addCallback(_change_state)
2318+        d.addCallback(self._push)
2319+        return d
2320+
2321+
2322+    def push_encprivkey(self):
2323+        encprivkey = self._encprivkey
2324+        self._status.set_status("Pushing encrypted private key")
2325+        for writer in self.writers.itervalues():
2326+            writer.put_encprivkey(encprivkey)
2327+
2328+
2329+    def push_blockhashes(self):
2330+        self.sharehash_leaves = [None] * len(self.blockhashes)
2331+        self._status.set_status("Building and pushing block hash tree")
2332+        for shnum, blockhashes in self.blockhashes.iteritems():
2333+            t = hashtree.HashTree(blockhashes)
2334+            self.blockhashes[shnum] = list(t)
2335+            # set the leaf for future use.
2336+            self.sharehash_leaves[shnum] = t[0]
2337+
2338+            writer = self.writers[shnum]
2339+            writer.put_blockhashes(self.blockhashes[shnum])
2340+
2341+
2342+    def push_sharehashes(self):
2343+        self._status.set_status("Building and pushing share hash chain")
2344+        share_hash_tree = hashtree.HashTree(self.sharehash_leaves)
2345+        for shnum in xrange(len(self.sharehash_leaves)):
2346+            needed_indices = share_hash_tree.needed_hashes(shnum)
2347+            self.sharehashes[shnum] = dict( [ (i, share_hash_tree[i])
2348+                                             for i in needed_indices] )
2349+            writer = self.writers[shnum]
2350+            writer.put_sharehashes(self.sharehashes[shnum])
2351+        self.root_hash = share_hash_tree[0]
2352+
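[editorial aside, not part of the patch: allmydata.hashtree uses its own tagged encoding, so this generic sketch is not byte-compatible with it, but it shows the pair-and-hash construction behind the share hash tree, whose root ends up signed in the prefix]

```python
import hashlib

def merkle_root(leaves):
    # Hash the leaves, then pair and hash level by level until one
    # root remains, duplicating the last node on odd-sized levels.
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```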
2353+
2354+    def push_toplevel_hashes_and_signature(self):
2355+        # We need to do three things here:
2356+        #   - Push the root hash and salt hash
2357+        #   - Get the checkstring of the resulting layout; sign that.
2358+        #   - Push the signature
2359+        self._status.set_status("Pushing root hashes and signature")
2360+        for shnum in xrange(self.total_shares):
2361+            writer = self.writers[shnum]
2362+            writer.put_root_hash(self.root_hash)
2363+        self._update_checkstring()
2364+        self._make_and_place_signature()
2365+
2366+
2367+    def _update_checkstring(self):
2368+        """
2369+        After putting the root hash, MDMF files will have the
2370+        checkstring written to the storage server. This means that we
2371+        can update our copy of the checkstring so we can detect
2372+        uncoordinated writes. SDMF files will have the same checkstring,
2373+        so we need not do anything.
2374+        """
2375+        self._checkstring = self.writers.values()[0].get_checkstring()
2376+
2377+
2378+    def _make_and_place_signature(self):
2379+        """
2380+        I create and place the signature.
2381+        """
2382+        started = time.time()
2383+        self._status.set_status("Signing prefix")
2384+        signable = self.writers[0].get_signable()
2385+        self.signature = self._privkey.sign(signable)
2386+
2387+        for (shnum, writer) in self.writers.iteritems():
2388+            writer.put_signature(self.signature)
2389+        self._status.timings['sign'] = time.time() - started
2390+
2391+
2392+    def finish_publishing(self):
2393+        # We're almost done -- we just need to put the verification key
2394+        # and the offsets
2395+        started = time.time()
2396+        self._status.set_status("Pushing shares")
2397+        self._started_pushing = started
2398+        ds = []
2399+        verification_key = self._pubkey.serialize()
2400+
2401+
2402+        # TODO: Bad, since we remove from this same dict. We need to
2403+        # make a copy, or just use a non-iterated value.
2404+        for (shnum, writer) in self.writers.iteritems():
2405+            writer.put_verification_key(verification_key)
2406+            d = writer.finish_publishing()
2407+            # Add the (peerid, shnum) tuple to our list of outstanding
2408+            # queries. This gets used by _loop if some of our queries
2409+            # fail to place shares.
2410+            self.outstanding.add((writer.peerid, writer.shnum))
2411+            d.addCallback(self._got_write_answer, writer, started)
2412+            d.addErrback(self._connection_problem, writer)
2413+            ds.append(d)
2414+        self._record_verinfo()
2415+        self._status.timings['pack'] = time.time() - started
2416+        return defer.DeferredList(ds)
2417+
2418+
2419+    def _record_verinfo(self):
2420+        self.versioninfo = self.writers.values()[0].get_verinfo()
2421+
2422+
2423+    def _connection_problem(self, f, writer):
2424+        """
2425+        We ran into a connection problem while working with writer, and
2426+        need to deal with that.
2427+        """
2428+        self.log("found problem: %s" % str(f))
2429+        self._last_failure = f
2430+        del(self.writers[writer.shnum])
2431+
2432 
2433     def log_goal(self, goal, message=""):
2434         logmsg = [message]
2435hunk ./src/allmydata/mutable/publish.py 971
2436             self.log_goal(self.goal, "after update: ")
2437 
2438 
2439+    def _got_write_answer(self, answer, writer, started):
2440+        if not answer:
2441+            # SDMF writers only pretend to write when callers set their
2442+            # blocks, salts, and so on -- they actually just write once,
2443+            # at the end of the upload process. In fake writes, they
2444+            # return defer.succeed(None). If we see that, we shouldn't
2445+            # bother checking it.
2446+            return
2447 
2448hunk ./src/allmydata/mutable/publish.py 980
2449-    def _encrypt_and_encode(self):
2450-        # this returns a Deferred that fires with a list of (sharedata,
2451-        # sharenum) tuples. TODO: cache the ciphertext, only produce the
2452-        # shares that we care about.
2453-        self.log("_encrypt_and_encode")
2454-
2455-        self._status.set_status("Encrypting")
2456-        started = time.time()
2457-
2458-        key = hashutil.ssk_readkey_data_hash(self.salt, self.readkey)
2459-        enc = AES(key)
2460-        crypttext = enc.process(self.newdata)
2461-        assert len(crypttext) == len(self.newdata)
2462+        peerid = writer.peerid
2463+        lp = self.log("_got_write_answer from %s, share %d" %
2464+                      (idlib.shortnodeid_b2a(peerid), writer.shnum))
2465 
2466         now = time.time()
2467hunk ./src/allmydata/mutable/publish.py 985
2468-        self._status.timings["encrypt"] = now - started
2469-        started = now
2470-
2471-        # now apply FEC
2472-
2473-        self._status.set_status("Encoding")
2474-        fec = codec.CRSEncoder()
2475-        fec.set_params(self.segment_size,
2476-                       self.required_shares, self.total_shares)
2477-        piece_size = fec.get_block_size()
2478-        crypttext_pieces = [None] * self.required_shares
2479-        for i in range(len(crypttext_pieces)):
2480-            offset = i * piece_size
2481-            piece = crypttext[offset:offset+piece_size]
2482-            piece = piece + "\x00"*(piece_size - len(piece)) # padding
2483-            crypttext_pieces[i] = piece
2484-            assert len(piece) == piece_size
2485-
2486-        d = fec.encode(crypttext_pieces)
2487-        def _done_encoding(res):
2488-            elapsed = time.time() - started
2489-            self._status.timings["encode"] = elapsed
2490-            return res
2491-        d.addCallback(_done_encoding)
2492-        return d
2493-
2494-    def _generate_shares(self, shares_and_shareids):
2495-        # this sets self.shares and self.root_hash
2496-        self.log("_generate_shares")
2497-        self._status.set_status("Generating Shares")
2498-        started = time.time()
2499-
2500-        # we should know these by now
2501-        privkey = self._privkey
2502-        encprivkey = self._encprivkey
2503-        pubkey = self._pubkey
2504-
2505-        (shares, share_ids) = shares_and_shareids
2506-
2507-        assert len(shares) == len(share_ids)
2508-        assert len(shares) == self.total_shares
2509-        all_shares = {}
2510-        block_hash_trees = {}
2511-        share_hash_leaves = [None] * len(shares)
2512-        for i in range(len(shares)):
2513-            share_data = shares[i]
2514-            shnum = share_ids[i]
2515-            all_shares[shnum] = share_data
2516-
2517-            # build the block hash tree. SDMF has only one leaf.
2518-            leaves = [hashutil.block_hash(share_data)]
2519-            t = hashtree.HashTree(leaves)
2520-            block_hash_trees[shnum] = list(t)
2521-            share_hash_leaves[shnum] = t[0]
2522-        for leaf in share_hash_leaves:
2523-            assert leaf is not None
2524-        share_hash_tree = hashtree.HashTree(share_hash_leaves)
2525-        share_hash_chain = {}
2526-        for shnum in range(self.total_shares):
2527-            needed_hashes = share_hash_tree.needed_hashes(shnum)
2528-            share_hash_chain[shnum] = dict( [ (i, share_hash_tree[i])
2529-                                              for i in needed_hashes ] )
2530-        root_hash = share_hash_tree[0]
2531-        assert len(root_hash) == 32
2532-        self.log("my new root_hash is %s" % base32.b2a(root_hash))
2533-        self._new_version_info = (self._new_seqnum, root_hash, self.salt)
2534-
2535-        prefix = pack_prefix(self._new_seqnum, root_hash, self.salt,
2536-                             self.required_shares, self.total_shares,
2537-                             self.segment_size, len(self.newdata))
2538-
2539-        # now pack the beginning of the share. All shares are the same up
2540-        # to the signature, then they have divergent share hash chains,
2541-        # then completely different block hash trees + salt + share data,
2542-        # then they all share the same encprivkey at the end. The sizes
2543-        # of everything are the same for all shares.
2544-
2545-        sign_started = time.time()
2546-        signature = privkey.sign(prefix)
2547-        self._status.timings["sign"] = time.time() - sign_started
2548-
2549-        verification_key = pubkey.serialize()
2550-
2551-        final_shares = {}
2552-        for shnum in range(self.total_shares):
2553-            final_share = pack_share(prefix,
2554-                                     verification_key,
2555-                                     signature,
2556-                                     share_hash_chain[shnum],
2557-                                     block_hash_trees[shnum],
2558-                                     all_shares[shnum],
2559-                                     encprivkey)
2560-            final_shares[shnum] = final_share
2561-        elapsed = time.time() - started
2562-        self._status.timings["pack"] = elapsed
2563-        self.shares = final_shares
2564-        self.root_hash = root_hash
2565-
2566-        # we also need to build up the version identifier for what we're
2567-        # pushing. Extract the offsets from one of our shares.
2568-        assert final_shares
2569-        offsets = unpack_header(final_shares.values()[0])[-1]
2570-        offsets_tuple = tuple( [(key,value) for key,value in offsets.items()] )
2571-        verinfo = (self._new_seqnum, root_hash, self.salt,
2572-                   self.segment_size, len(self.newdata),
2573-                   self.required_shares, self.total_shares,
2574-                   prefix, offsets_tuple)
2575-        self.versioninfo = verinfo
2576-
2577-
2578-
2579-    def _send_shares(self, needed):
2580-        self.log("_send_shares")
2581-
2582-        # we're finally ready to send out our shares. If we encounter any
2583-        # surprises here, it's because somebody else is writing at the same
2584-        # time. (Note: in the future, when we remove the _query_peers() step
2585-        # and instead speculate about [or remember] which shares are where,
2586-        # surprises here are *not* indications of UncoordinatedWriteError,
2587-        # and we'll need to respond to them more gracefully.)
2588-
2589-        # needed is a set of (peerid, shnum) tuples. The first thing we do is
2590-        # organize it by peerid.
2591-
2592-        peermap = DictOfSets()
2593-        for (peerid, shnum) in needed:
2594-            peermap.add(peerid, shnum)
2595-
2596-        # the next thing is to build up a bunch of test vectors. The
2597-        # semantics of Publish are that we perform the operation if the world
2598-        # hasn't changed since the ServerMap was constructed (more or less).
2599-        # For every share we're trying to place, we create a test vector that
2600-        # tests to see if the server*share still corresponds to the
2601-        # map.
2602-
2603-        all_tw_vectors = {} # maps peerid to tw_vectors
2604-        sm = self._servermap.servermap
2605-
2606-        for key in needed:
2607-            (peerid, shnum) = key
2608-
2609-            if key in sm:
2610-                # an old version of that share already exists on the
2611-                # server, according to our servermap. We will create a
2612-                # request that attempts to replace it.
2613-                old_versionid, old_timestamp = sm[key]
2614-                (old_seqnum, old_root_hash, old_salt, old_segsize,
2615-                 old_datalength, old_k, old_N, old_prefix,
2616-                 old_offsets_tuple) = old_versionid
2617-                old_checkstring = pack_checkstring(old_seqnum,
2618-                                                   old_root_hash,
2619-                                                   old_salt)
2620-                testv = (0, len(old_checkstring), "eq", old_checkstring)
2621-
2622-            elif key in self.bad_share_checkstrings:
2623-                old_checkstring = self.bad_share_checkstrings[key]
2624-                testv = (0, len(old_checkstring), "eq", old_checkstring)
2625-
2626-            else:
2627-                # add a testv that requires the share not exist
2628-
2629-                # Unfortunately, foolscap-0.2.5 has a bug in the way inbound
2630-                # constraints are handled. If the same object is referenced
2631-                # multiple times inside the arguments, foolscap emits a
2632-                # 'reference' token instead of a distinct copy of the
2633-                # argument. The bug is that these 'reference' tokens are not
2634-                # accepted by the inbound constraint code. To work around
2635-                # this, we need to prevent python from interning the
2636-                # (constant) tuple, by creating a new copy of this vector
2637-                # each time.
2638-
2639-                # This bug is fixed in foolscap-0.2.6, and even though this
2640-                # version of Tahoe requires foolscap-0.3.1 or newer, we are
2641-                # supposed to be able to interoperate with older versions of
2642-                # Tahoe which are allowed to use older versions of foolscap,
2643-                # including foolscap-0.2.5 . In addition, I've seen other
2644-                # foolscap problems triggered by 'reference' tokens (see #541
2645-                # for details). So we must keep this workaround in place.
2646-
2647-                #testv = (0, 1, 'eq', "")
2648-                testv = tuple([0, 1, 'eq', ""])
2649-
2650-            testvs = [testv]
2651-            # the write vector is simply the share
2652-            writev = [(0, self.shares[shnum])]
2653-
2654-            if peerid not in all_tw_vectors:
2655-                all_tw_vectors[peerid] = {}
2656-                # maps shnum to (testvs, writevs, new_length)
2657-            assert shnum not in all_tw_vectors[peerid]
2658-
2659-            all_tw_vectors[peerid][shnum] = (testvs, writev, None)
2660-
2661-        # we read the checkstring back from each share, however we only use
2662-        # it to detect whether there was a new share that we didn't know
2663-        # about. The success or failure of the write will tell us whether
2664-        # there was a collision or not. If there is a collision, the first
2665-        # thing we'll do is update the servermap, which will find out what
2666-        # happened. We could conceivably reduce a roundtrip by using the
2667-        # readv checkstring to populate the servermap, but really we'd have
2668-        # to read enough data to validate the signatures too, so it wouldn't
2669-        # be an overall win.
2670-        read_vector = [(0, struct.calcsize(SIGNED_PREFIX))]
2671-
2672-        # ok, send the messages!
2673-        self.log("sending %d shares" % len(all_tw_vectors), level=log.NOISY)
2674-        started = time.time()
2675-        for (peerid, tw_vectors) in all_tw_vectors.items():
2676-
2677-            write_enabler = self._node.get_write_enabler(peerid)
2678-            renew_secret = self._node.get_renewal_secret(peerid)
2679-            cancel_secret = self._node.get_cancel_secret(peerid)
2680-            secrets = (write_enabler, renew_secret, cancel_secret)
2681-            shnums = tw_vectors.keys()
2682-
2683-            for shnum in shnums:
2684-                self.outstanding.add( (peerid, shnum) )
2685-
2686-            d = self._do_testreadwrite(peerid, secrets,
2687-                                       tw_vectors, read_vector)
2688-            d.addCallbacks(self._got_write_answer, self._got_write_error,
2689-                           callbackArgs=(peerid, shnums, started),
2690-                           errbackArgs=(peerid, shnums, started))
2691-            # tolerate immediate errback, like with DeadReferenceError
2692-            d.addBoth(fireEventually)
2693-            d.addCallback(self.loop)
2694-            d.addErrback(self._fatal_error)
2695-
2696-        self._update_status()
2697-        self.log("%d shares sent" % len(all_tw_vectors), level=log.NOISY)
2698+        elapsed = now - started
2699 
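The workaround removed above hinges on a CPython detail: a tuple literal made only of constants is folded into the function's constant pool and reused on every call, while `tuple([...])` builds a fresh object at runtime. A minimal standalone sketch of the difference (the test-vector values are taken from the code above; the identity behavior of the literal is CPython-specific):

```python
def constant_testv():
    # A literal tuple of constants is folded into the function's
    # co_consts, so CPython typically returns the same object each call.
    return (0, 1, 'eq', "")

def fresh_testv():
    # Building the tuple at runtime forces a distinct object every
    # time, which is what the foolscap workaround relies on.
    return tuple([0, 1, 'eq', ""])

a, b = fresh_testv(), fresh_testv()
# Equal values, but never the same object -- so foolscap serializes
# two full copies instead of emitting a 'reference' token.
assert a == b and a is not b
assert a == constant_testv()
```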
2700hunk ./src/allmydata/mutable/publish.py 987
2701-    def _do_testreadwrite(self, peerid, secrets,
2702-                          tw_vectors, read_vector):
2703-        storage_index = self._storage_index
2704-        ss = self.connections[peerid]
2705+        self._status.add_per_server_time(peerid, elapsed)
2706 
2707hunk ./src/allmydata/mutable/publish.py 989
2708-        #print "SS[%s] is %s" % (idlib.shortnodeid_b2a(peerid), ss), ss.tracker.interfaceName
2709-        d = ss.callRemote("slot_testv_and_readv_and_writev",
2710-                          storage_index,
2711-                          secrets,
2712-                          tw_vectors,
2713-                          read_vector)
2714-        return d
2715+        wrote, read_data = answer
2716 
2717hunk ./src/allmydata/mutable/publish.py 991
2718-    def _got_write_answer(self, answer, peerid, shnums, started):
2719-        lp = self.log("_got_write_answer from %s" %
2720-                      idlib.shortnodeid_b2a(peerid))
2721-        for shnum in shnums:
2722-            self.outstanding.discard( (peerid, shnum) )
2723+        surprise_shares = set(read_data.keys()) - set([writer.shnum])
2724 
2725hunk ./src/allmydata/mutable/publish.py 993
2726-        now = time.time()
2727-        elapsed = now - started
2728-        self._status.add_per_server_time(peerid, elapsed)
2729+        # We need to remove from surprise_shares any shares that we know
2730+        # we are also writing to that peer via other writers.
2731 
2732hunk ./src/allmydata/mutable/publish.py 996
2733-        wrote, read_data = answer
2734+        # TODO: Precompute this.
2735+        known_shnums = [x.shnum for x in self.writers.values()
2736+                        if x.peerid == peerid]
2737+        surprise_shares -= set(known_shnums)
2738+        self.log("found the following surprise shares: %s" %
2739+                 str(surprise_shares))
2740 
2741hunk ./src/allmydata/mutable/publish.py 1003
2742-        surprise_shares = set(read_data.keys()) - set(shnums)
2743+        # Now surprise_shares contains all of the shares that we did not
2744+        # expect to be there.
2745 
2746         surprised = False
2747         for shnum in surprise_shares:
2748hunk ./src/allmydata/mutable/publish.py 1010
2749             # read_data is a dict mapping shnum to checkstring (SIGNED_PREFIX)
2750             checkstring = read_data[shnum][0]
2751-            their_version_info = unpack_checkstring(checkstring)
2752-            if their_version_info == self._new_version_info:
2753+            # What we want to do here is to see if their (seqnum,
2754+            # roothash, salt) is the same as our (seqnum, roothash,
2755+            # salt), or the equivalent for MDMF. The best way to do this
2756+            # is to store a packed representation of our checkstring
2757+            # somewhere, then not bother unpacking the other
2758+            # checkstring.
2759+            if checkstring == self._checkstring:
2760                 # they have the right share, somehow
2761 
2762                 if (peerid,shnum) in self.goal:
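The change above compares packed checkstrings byte-for-byte instead of unpacking the peer's version info first. A sketch of what that looks like, assuming an SDMF-style checkstring layout of a 1-byte version, big-endian 8-byte sequence number, 32-byte root hash, and 16-byte salt/IV (the exact format string here is an assumption for illustration):

```python
import struct

# Assumed SDMF-style checkstring layout: version, seqnum, roothash, salt.
CHECKSTRING_FMT = ">BQ32s16s"

def pack_checkstring(seqnum, root_hash, salt, version=0):
    return struct.pack(CHECKSTRING_FMT, version, seqnum, root_hash, salt)

ours = pack_checkstring(5, b"\x11" * 32, b"\x22" * 16)
theirs = pack_checkstring(5, b"\x11" * 32, b"\x22" * 16)

# If the packed bytes match, the (seqnum, roothash, salt) triples match,
# so there is no need to unpack the other side's checkstring at all.
assert ours == theirs
assert len(ours) == struct.calcsize(CHECKSTRING_FMT)
```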
2763hunk ./src/allmydata/mutable/publish.py 1095
2764             self.log("our testv failed, so the write did not happen",
2765                      parent=lp, level=log.WEIRD, umid="8sc26g")
2766             self.surprised = True
2767-            self.bad_peers.add(peerid) # don't ask them again
2768+            self.bad_peers.add(writer) # don't ask them again
2769             # use the checkstring to add information to the log message
2770             for (shnum,readv) in read_data.items():
2771                 checkstring = readv[0]
2772hunk ./src/allmydata/mutable/publish.py 1117
2773                 # if expected_version==None, then we didn't expect to see a
2774                 # share on that peer, and the 'surprise_shares' clause above
2775                 # will have logged it.
2776-            # self.loop() will take care of finding new homes
2777             return
2778 
2779hunk ./src/allmydata/mutable/publish.py 1119
2780-        for shnum in shnums:
2781-            self.placed.add( (peerid, shnum) )
2782-            # and update the servermap
2783-            self._servermap.add_new_share(peerid, shnum,
2784+        # and update the servermap
2785+        # self.versioninfo is set during the last phase of publishing.
2786+        # If we get there, we know that responses correspond to placed
2787+        # shares, and can safely execute these statements.
2788+        if self.versioninfo:
2789+            self.log("wrote successfully: adding new share to servermap")
2790+            self._servermap.add_new_share(peerid, writer.shnum,
2791                                           self.versioninfo, started)
2792hunk ./src/allmydata/mutable/publish.py 1127
2793-
2794-        # self.loop() will take care of checking to see if we're done
2795-        return
2796-
2797-    def _got_write_error(self, f, peerid, shnums, started):
2798-        for shnum in shnums:
2799-            self.outstanding.discard( (peerid, shnum) )
2800-        self.bad_peers.add(peerid)
2801-        if self._first_write_error is None:
2802-            self._first_write_error = f
2803-        self.log(format="error while writing shares %(shnums)s to peerid %(peerid)s",
2804-                 shnums=list(shnums), peerid=idlib.shortnodeid_b2a(peerid),
2805-                 failure=f,
2806-                 level=log.UNUSUAL)
2807-        # self.loop() will take care of checking to see if we're done
2808+            self.placed.add( (peerid, writer.shnum) )
2809+        self._update_status()
2810+        # the next method in the deferred chain will check to see if
2811+        # we're done and successful.
2812         return
2813 
2814 
2815hunk ./src/allmydata/mutable/publish.py 1134
2816-    def _done(self, res):
2817+    def _done(self):
2818         if not self._running:
2819             return
2820         self._running = False
2821hunk ./src/allmydata/mutable/publish.py 1140
2822         now = time.time()
2823         self._status.timings["total"] = now - self._started
2824+
2825+        elapsed = now - self._started_pushing
2826+        self._status.timings['push'] = elapsed
2827+
2828         self._status.set_active(False)
2829hunk ./src/allmydata/mutable/publish.py 1145
2830-        if isinstance(res, failure.Failure):
2831-            self.log("Publish done, with failure", failure=res,
2832-                     level=log.WEIRD, umid="nRsR9Q")
2833-            self._status.set_status("Failed")
2834-        elif self.surprised:
2835-            self.log("Publish done, UncoordinatedWriteError", level=log.UNUSUAL)
2836-            self._status.set_status("UncoordinatedWriteError")
2837-            # deliver a failure
2838-            res = failure.Failure(UncoordinatedWriteError())
2839-            # TODO: recovery
2840+        self.log("Publish done, success")
2841+        self._status.set_status("Finished")
2842+        self._status.set_progress(1.0)
2843+        # Get k and segsize, then give them to the caller.
2844+        hints = {}
2845+        hints['segsize'] = self.segment_size
2846+        hints['k'] = self.required_shares
2847+        self._node.set_downloader_hints(hints)
2848+        eventually(self.done_deferred.callback, None)
2849+
2850+    def _failure(self, f=None):
2851+        if f:
2852+            self._last_failure = f
2853+
2854+        if not self.surprised:
2855+            # We ran out of servers
2856+            msg = "Publish ran out of good servers"
2857+            if self._last_failure:
2858+                msg += ", last failure was: %s" % str(self._last_failure)
2859+            self.log(msg)
2860+            e = NotEnoughServersError(msg)
2861+
2862+        else:
2863+            # We ran into shares that we didn't recognize, which means
2864+            # that we need to return an UncoordinatedWriteError.
2865+            self.log("Publish failed with UncoordinatedWriteError")
2866+            e = UncoordinatedWriteError()
2867+        f = failure.Failure(e)
2868+        eventually(self.done_deferred.callback, f)
2869+
2870+
2871+class MutableFileHandle:
2872+    """
2873+    I am a mutable uploadable built around a filehandle-like object,
2874+    usually either a StringIO instance or a handle to an actual file.
2875+    """
2876+    implements(IMutableUploadable)
2877+
2878+    def __init__(self, filehandle):
2879+        # The filehandle is defined as a generally file-like object that
2880+        # has these two methods. We don't care beyond that.
2881+        assert hasattr(filehandle, "read")
2882+        assert hasattr(filehandle, "close")
2883+
2884+        self._filehandle = filehandle
2885+        # We must start reading at the beginning of the file, or we risk
2886+        # encountering errors when the data read does not match the size
2887+        # reported to the uploader.
2888+        self._filehandle.seek(0)
2889+
2890+        # We have not yet read anything, so our position is 0.
2891+        self._marker = 0
2892+
2893+
2894+    def get_size(self):
2895+        """
2896+        I return the amount of data in my filehandle.
2897+        """
2898+        if not hasattr(self, "_size"):
2899+            old_position = self._filehandle.tell()
2900+            # Seek to the end of the file by seeking 0 bytes from the
2901+            # file's end
2902+            self._filehandle.seek(0, 2) # 2 == os.SEEK_END in 2.5+
2903+            self._size = self._filehandle.tell()
2904+            # Restore the previous position, in case this was called
2905+            # after a read.
2906+            self._filehandle.seek(old_position)
2907+            assert self._filehandle.tell() == old_position
2908+
2909+        assert hasattr(self, "_size")
2910+        return self._size
2911+
2912+
2913+    def pos(self):
2914+        """
2915+        I return the position of my read marker -- i.e., how much data I
2916+        have already read and returned to callers.
2917+        """
2918+        return self._marker
2919+
2920+
2921+    def read(self, length):
2922+        """
2923+        I return some data (up to length bytes) from my filehandle.
2924+
2925+        In most cases, I return length bytes, but sometimes I won't --
2926+        for example, if I am asked to read beyond the end of a file, or
2927+        an error occurs.
2928+        """
2929+        results = self._filehandle.read(length)
2930+        self._marker += len(results)
2931+        return [results]
2932+
2933+
2934+    def close(self):
2935+        """
2936+        I close the underlying filehandle. Any further operations on the
2937+        filehandle fail at this point.
2938+        """
2939+        self._filehandle.close()
2940+
2941+
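`MutableFileHandle.get_size` above measures a filehandle by seeking to the end and then restoring the caller's position. The same trick in isolation, as a small sketch (using `io.BytesIO` as the stand-in file-like object):

```python
import io

def filehandle_size(fh):
    # Remember the caller's read position, jump to the end to learn
    # the total size, then put the position back where it was.
    old_position = fh.tell()
    fh.seek(0, io.SEEK_END)   # same as seek(0, 2) in the patch
    size = fh.tell()
    fh.seek(old_position)
    return size

fh = io.BytesIO(b"hello mutable world")
fh.read(6)                         # move the read marker mid-stream
assert filehandle_size(fh) == 19   # full size, regardless of marker
assert fh.tell() == 6              # position restored after sizing
```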
2942+class MutableData(MutableFileHandle):
2943+    """
2944+    I am a mutable uploadable built around a string, which I then cast
2945+    into a StringIO and treat as a filehandle.
2946+    """
2947+
2948+    def __init__(self, s):
2949+        # Take a string and return a file-like uploadable.
2950+        assert isinstance(s, str)
2951+
2952+        MutableFileHandle.__init__(self, StringIO(s))
2953+
2954+
2955+class TransformingUploadable:
2956+    """
2957+    I am an IMutableUploadable that wraps another IMutableUploadable,
2958+    and some segments that are already on the grid. When I am called to
2959+    read, I handle merging of boundary segments.
2960+    """
2961+    implements(IMutableUploadable)
2962+
2963+
2964+    def __init__(self, data, offset, segment_size, start, end):
2965+        assert IMutableUploadable.providedBy(data)
2966+
2967+        self._newdata = data
2968+        self._offset = offset
2969+        self._segment_size = segment_size
2970+        self._start = start
2971+        self._end = end
2972+
2973+        self._read_marker = 0
2974+
2975+        self._first_segment_offset = offset % segment_size
2976+
2977+        num = self.log("TransformingUploadable: starting", parent=None)
2978+        self._log_number = num
2979+        self.log("got fso: %d" % self._first_segment_offset)
2980+        self.log("got offset: %d" % self._offset)
2981+
2982+
2983+    def log(self, *args, **kwargs):
2984+        if 'parent' not in kwargs:
2985+            kwargs['parent'] = self._log_number
2986+        if "facility" not in kwargs:
2987+            kwargs["facility"] = "tahoe.mutable.transforminguploadable"
2988+        return log.msg(*args, **kwargs)
2989+
2990+
2991+    def get_size(self):
2992+        return self._offset + self._newdata.get_size()
2993+
2994+
2995+    def read(self, length):
2996+        # We can get data from 3 sources here.
2997+        #   1. The first of the segments provided to us.
2998+        #   2. The data that we're replacing things with.
2999+        #   3. The last of the segments provided to us.
3000+
3001+        # are we in state 0?
3002+        self.log("reading %d bytes" % length)
3003+
3004+        old_start_data = ""
3005+        old_data_length = self._first_segment_offset - self._read_marker
3006+        if old_data_length > 0:
3007+            if old_data_length > length:
3008+                old_data_length = length
3009+            self.log("returning %d bytes of old start data" % old_data_length)
3010+
3011+            old_data_end = old_data_length + self._read_marker
3012+            old_start_data = self._start[self._read_marker:old_data_end]
3013+            length -= old_data_length
3014         else:
3015hunk ./src/allmydata/mutable/publish.py 1320
3016-            self.log("Publish done, success")
3017-            self._status.set_status("Finished")
3018-            self._status.set_progress(1.0)
3019-        eventually(self.done_deferred.callback, res)
3020+            # otherwise calculations later get screwed up.
3021+            old_data_length = 0
3022+
3023+        # Is there enough new data to satisfy this read? If not, we need
3024+        # to pad the end of the data with data from our last segment.
3025+        old_end_length = length - \
3026+            (self._newdata.get_size() - self._newdata.pos())
3027+        old_end_data = ""
3028+        if old_end_length > 0:
3029+            self.log("reading %d bytes of old end data" % old_end_length)
3030+
3031+            # TODO: We're not explicitly checking for tail segment size
3032+            # here. Is that a problem?
3033+            old_data_offset = (length - old_end_length + \
3034+                               old_data_length) % self._segment_size
3035+            self.log("reading at offset %d" % old_data_offset)
3036+            old_end = old_data_offset + old_end_length
3037+            old_end_data = self._end[old_data_offset:old_end]
3038+            length -= old_end_length
3039+            assert length == self._newdata.get_size() - self._newdata.pos()
3040+
3041+        self.log("reading %d bytes of new data" % length)
3042+        new_data = self._newdata.read(length)
3043+        new_data = "".join(new_data)
3044+
3045+        self._read_marker += len(old_start_data + new_data + old_end_data)
3046 
3047hunk ./src/allmydata/mutable/publish.py 1347
3048+        return old_start_data + new_data + old_end_data
3049 
3050hunk ./src/allmydata/mutable/publish.py 1349
3051+    def close(self):
3052+        pass
3053}
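`TransformingUploadable.read` above stitches together three sources: old bytes from the first affected segment, the replacement data, and old bytes from the last affected segment. Ignoring segmentation, the net effect on the file is a plain splice; a simplified byte-level sketch of that semantics (`overwrite` is a hypothetical helper, not part of the patch):

```python
def overwrite(old, new, offset):
    # Keep the old bytes before the write offset, splice in the new
    # data, and keep any old bytes past the end of the write. Writing
    # past the old end simply extends the file.
    end = offset + len(new)
    return old[:offset] + new + old[end:]

assert overwrite(b"AAAABBBBCCCC", b"xx", 5) == b"AAAABxxBCCCC"
assert overwrite(b"AAAA", b"zzzzzz", 2) == b"AAzzzzzz"  # extends the file
assert overwrite(b"AAAA", b"z", 0) == b"zAAA"
```

TransformingUploadable produces the same result, but lazily and segment-by-segment, so the publisher never has to hold the whole merged file in memory.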
3054[mutable/servermap: Rework the servermap to work with MDMF mutable files
3055Kevan Carstensen <kevan@isnotajoke.com>**20110802014018
3056 Ignore-this: 4d74b1fd4f03096c84d5d90dd4a33598
3057] {
3058hunk ./src/allmydata/mutable/servermap.py 2
3059 
3060-import sys, time
3061+import sys, time, struct
3062 from zope.interface import implements
3063 from itertools import count
3064 from twisted.internet import defer
3065hunk ./src/allmydata/mutable/servermap.py 7
3066 from twisted.python import failure
3067-from foolscap.api import DeadReferenceError, RemoteException, eventually
3068-from allmydata.util import base32, hashutil, idlib, log
3069+from foolscap.api import DeadReferenceError, RemoteException, eventually, \
3070+                         fireEventually
3071+from allmydata.util import base32, hashutil, idlib, log, deferredutil
3072 from allmydata.util.dictutil import DictOfSets
3073 from allmydata.storage.server import si_b2a
3074 from allmydata.interfaces import IServermapUpdaterStatus
3075hunk ./src/allmydata/mutable/servermap.py 16
3076 from pycryptopp.publickey import rsa
3077 
3078 from allmydata.mutable.common import MODE_CHECK, MODE_ANYTHING, MODE_WRITE, MODE_READ, \
3079-     CorruptShareError, NeedMoreDataError
3080-from allmydata.mutable.layout import unpack_prefix_and_signature, unpack_header, unpack_share, \
3081-     SIGNED_PREFIX_LENGTH
3082+     CorruptShareError
3083+from allmydata.mutable.layout import SIGNED_PREFIX_LENGTH, MDMFSlotReadProxy
3084 
3085 class UpdateStatus:
3086     implements(IServermapUpdaterStatus)
3087hunk ./src/allmydata/mutable/servermap.py 124
3088         self.bad_shares = {} # maps (peerid,shnum) to old checkstring
3089         self.last_update_mode = None
3090         self.last_update_time = 0
3091+        self.update_data = {} # (verinfo,shnum) => data
3092 
3093     def copy(self):
3094         s = ServerMap()
3095hunk ./src/allmydata/mutable/servermap.py 255
3096         """Return a set of versionids, one for each version that is currently
3097         recoverable."""
3098         versionmap = self.make_versionmap()
3099-
3100         recoverable_versions = set()
3101         for (verinfo, shares) in versionmap.items():
3102             (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
3103hunk ./src/allmydata/mutable/servermap.py 340
3104         return False
3105 
3106 
3107+    def get_update_data_for_share_and_verinfo(self, shnum, verinfo):
3108+        """
3109+        I return the update data for the given shnum and verinfo.
3110+        """
3111+        update_data = self.update_data[shnum]
3112+        update_datum = [i[1] for i in update_data if i[0] == verinfo][0]
3113+        return update_datum
3114+
3115+
3116+    def set_update_data_for_share_and_verinfo(self, shnum, verinfo, data):
3117+        """
3118+        I record the block hash tree for the given shnum.
3119+        """
3120+        self.update_data.setdefault(shnum , []).append((verinfo, data))
3121+
3122+
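The two accessors above keep per-share update data as a mapping from share number to a list of `(verinfo, data)` pairs. A minimal standalone sketch of that bookkeeping (class and method names here are illustrative shorthand for the servermap methods in the patch):

```python
class UpdateDataMap:
    """Sketch of the servermap's update_data bookkeeping: each share
    number maps to a list of (verinfo, data) pairs, so the same share
    can carry update data for several versions at once."""

    def __init__(self):
        self.update_data = {}  # shnum -> [(verinfo, data), ...]

    def set_update_data(self, shnum, verinfo, data):
        self.update_data.setdefault(shnum, []).append((verinfo, data))

    def get_update_data(self, shnum, verinfo):
        # First datum recorded for this (shnum, verinfo) pair.
        return [d for (v, d) in self.update_data[shnum] if v == verinfo][0]

m = UpdateDataMap()
m.set_update_data(0, "verinfo-a", "blockhashes-for-a")
m.set_update_data(0, "verinfo-b", "blockhashes-for-b")
assert m.get_update_data(0, "verinfo-a") == "blockhashes-for-a"
assert m.get_update_data(0, "verinfo-b") == "blockhashes-for-b"
```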
3123 class ServermapUpdater:
3124     def __init__(self, filenode, storage_broker, monitor, servermap,
3125hunk ./src/allmydata/mutable/servermap.py 358
3126-                 mode=MODE_READ, add_lease=False):
3127+                 mode=MODE_READ, add_lease=False, update_range=None):
3128         """I update a servermap, locating a sufficient number of useful
3129         shares and remembering where they are located.
3130 
3131hunk ./src/allmydata/mutable/servermap.py 383
3132         self._servers_responded = set()
3133 
3134         # how much data should we read?
3135+        # SDMF:
3136         #  * if we only need the checkstring, then [0:75]
3137         #  * if we need to validate the checkstring sig, then [543ish:799ish]
3138         #  * if we need the verification key, then [107:436ish]
3139hunk ./src/allmydata/mutable/servermap.py 391
3140         #  * if we need the encrypted private key, we want [-1216ish:]
3141         #   * but we can't read from negative offsets
3142         #   * the offset table tells us the 'ish', also the positive offset
3143-        # A future version of the SMDF slot format should consider using
3144-        # fixed-size slots so we can retrieve less data. For now, we'll just
3145-        # read 4000 bytes, which also happens to read enough actual data to
3146-        # pre-fetch an 18-entry dirnode.
3147+        # MDMF:
3148+        #  * Checkstring? [0:72]
3149+        #  * If we want to validate the checkstring, then [0:72], [143:?] --
3150+        #    the offset table will tell us for sure.
3151+        #  * If we need the verification key, we have to consult the offset
3152+        #    table as well.
3153+        # At this point, we don't know which we are. Our filenode can
3154+        # tell us, but it might be lying -- in some cases, we're
3155+        # responsible for telling it which kind of file it is.
3156         self._read_size = 4000
3157         if mode == MODE_CHECK:
3158             # we use unpack_prefix_and_signature, so we need 1k
3159hunk ./src/allmydata/mutable/servermap.py 405
3160             self._read_size = 1000
3161         self._need_privkey = False
3162+
3163         if mode == MODE_WRITE and not self._node.get_privkey():
3164             self._need_privkey = True
3165         # check+repair: repair requires the privkey, so if we didn't happen
3166hunk ./src/allmydata/mutable/servermap.py 412
3167         # to ask for it during the check, we'll have problems doing the
3168         # publish.
3169 
3170+        self.fetch_update_data = False
3171+        if mode == MODE_WRITE and update_range:
3172+            # We're updating the servermap in preparation for an
3173+            # in-place file update, so we need to fetch some additional
3174+            # data from each share that we find.
3175+            assert len(update_range) == 2
3176+
3177+            self.start_segment = update_range[0]
3178+            self.end_segment = update_range[1]
3179+            self.fetch_update_data = True
3180+
3181         prefix = si_b2a(self._storage_index)[:5]
3182         self._log_number = log.msg(format="SharemapUpdater(%(si)s): starting (%(mode)s)",
3183                                    si=prefix, mode=mode)
3184hunk ./src/allmydata/mutable/servermap.py 461
3185         self._queries_completed = 0
3186 
3187         sb = self._storage_broker
3188+        # All of the peers, permuted by the storage index, as usual.
3189         full_peerlist = [(s.get_serverid(), s.get_rref())
3190                          for s in sb.get_servers_for_psi(self._storage_index)]
3191         self.full_peerlist = full_peerlist # for use later, immutable
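"Permuted by the storage index" above means every client derives the same per-file ordering of servers from a hash of the storage index and server id, with no coordination. A hypothetical sketch of that idea (the real Tahoe permutation hash differs in detail; `sha256(storage_index + serverid)` here is an assumption for illustration):

```python
import hashlib

def permuted_peers(serverids, storage_index):
    # Rank each server by a hash of (storage index + serverid). Every
    # client computes the same per-file ordering, which spreads files
    # across servers while keeping share placement predictable.
    return sorted(serverids,
                  key=lambda sid: hashlib.sha256(storage_index + sid).digest())

servers = [b"server-%d" % i for i in range(5)]
a = permuted_peers(servers, b"storage-index-1")
b = permuted_peers(servers, b"storage-index-1")
c = permuted_peers(servers, b"storage-index-2")
assert a == b                  # deterministic for a given file
assert set(c) == set(servers)  # a permutation, not a filter
```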
3192hunk ./src/allmydata/mutable/servermap.py 469
3193         self._good_peers = set() # peers who had some shares
3194         self._empty_peers = set() # peers who don't have any shares
3195         self._bad_peers = set() # peers to whom our queries failed
3196+        self._readers = {} # peerid -> dict(sharewriters), filled in
3197+                           # after responses come in.
3198 
3199         k = self._node.get_required_shares()
3200hunk ./src/allmydata/mutable/servermap.py 473
3201+        # For what cases can these conditions work?
3202         if k is None:
3203             # make a guess
3204             k = 3
3205hunk ./src/allmydata/mutable/servermap.py 486
3206         self.num_peers_to_query = k + self.EPSILON
3207 
3208         if self.mode == MODE_CHECK:
3209+            # We want to query all of the peers.
3210             initial_peers_to_query = dict(full_peerlist)
3211             must_query = set(initial_peers_to_query.keys())
3212             self.extra_peers = []
3213hunk ./src/allmydata/mutable/servermap.py 494
3214             # we're planning to replace all the shares, so we want a good
3215             # chance of finding them all. We will keep searching until we've
3216             # seen epsilon that don't have a share.
3217+            # We don't query all of the peers because that could take a while.
3218             self.num_peers_to_query = N + self.EPSILON
3219             initial_peers_to_query, must_query = self._build_initial_querylist()
3220             self.required_num_empty_peers = self.EPSILON
3221hunk ./src/allmydata/mutable/servermap.py 504
3222             # might also avoid the round trip required to read the encrypted
3223             # private key.
3224 
3225-        else:
3226+        else: # MODE_READ, MODE_ANYTHING
3227+            # 2k peers is good enough.
3228             initial_peers_to_query, must_query = self._build_initial_querylist()
3229 
3230         # this is a set of peers that we are required to get responses from:
3231hunk ./src/allmydata/mutable/servermap.py 520
3232         # before we can consider ourselves finished, and self.extra_peers
3233         # contains the overflow (peers that we should tap if we don't get
3234         # enough responses)
3235+        # I guess that self._must_query is a subset of
3236+        # initial_peers_to_query?
3237+        assert set(must_query).issubset(set(initial_peers_to_query))
3238 
3239         self._send_initial_requests(initial_peers_to_query)
3240         self._status.timings["initial_queries"] = time.time() - self._started
3241hunk ./src/allmydata/mutable/servermap.py 579
3242         # errors that aren't handled by _query_failed (and errors caused by
3243         # _query_failed) get logged, but we still want to check for doneness.
3244         d.addErrback(log.err)
3245-        d.addBoth(self._check_for_done)
3246         d.addErrback(self._fatal_error)
3247hunk ./src/allmydata/mutable/servermap.py 580
3248+        d.addCallback(self._check_for_done)
3249         return d
3250 
3251     def _do_read(self, ss, peerid, storage_index, shnums, readv):
3252hunk ./src/allmydata/mutable/servermap.py 599
3253         d = ss.callRemote("slot_readv", storage_index, shnums, readv)
3254         return d
3255 
3256+
3257+    def _got_corrupt_share(self, e, shnum, peerid, data, lp):
3258+        """
3259+        I am called when a remote server returns a corrupt share in
3260+        response to one of our queries. By corrupt, I mean a share
3261+        without a valid signature. I then record the failure, notify the
3262+        server of the corruption, and record the share as bad.
3263+        """
3264+        f = failure.Failure(e)
3265+        self.log(format="bad share: %(f_value)s", f_value=str(f),
3266+                 failure=f, parent=lp, level=log.WEIRD, umid="h5llHg")
3267+        # Notify the server that its share is corrupt.
3268+        self.notify_server_corruption(peerid, shnum, str(e))
3269+        # By flagging this as a bad peer, we won't count any of
3270+        # the other shares on that peer as valid, though if we
3271+        # happen to find a valid version string amongst those
3272+        # shares, we'll keep track of it so that we don't need
3273+        # to validate the signature on those again.
3274+        self._bad_peers.add(peerid)
3275+        self._last_failure = f
3276+        # XXX: Use the reader for this?
3277+        checkstring = data[:SIGNED_PREFIX_LENGTH]
3278+        self._servermap.mark_bad_share(peerid, shnum, checkstring)
3279+        self._servermap.problems.append(f)
3280+
3281+
3282+    def _cache_good_sharedata(self, verinfo, shnum, now, data):
3283+        """
3284+        If one of my queries returns successfully (which means that we
3285+        were able to and successfully did validate the signature), I
3286+        cache the data that we initially fetched from the storage
3287+        server. This will help reduce the number of roundtrips that need
3288+        to occur when the file is downloaded, or when the file is
3289+        updated.
3290+        """
3291+        if verinfo:
3292+            self._node._add_to_cache(verinfo, shnum, 0, data)
3293+
3294+
3295     def _got_results(self, datavs, peerid, readsize, stuff, started):
3296         lp = self.log(format="got result from [%(peerid)s], %(numshares)d shares",
3297                       peerid=idlib.shortnodeid_b2a(peerid),
3298hunk ./src/allmydata/mutable/servermap.py 641
3299-                      numshares=len(datavs),
3300-                      level=log.NOISY)
3301+                      numshares=len(datavs))
3302         now = time.time()
3303         elapsed = now - started
3304hunk ./src/allmydata/mutable/servermap.py 644
3305-        self._queries_outstanding.discard(peerid)
3306-        self._servermap.reachable_peers.add(peerid)
3307-        self._must_query.discard(peerid)
3308-        self._queries_completed += 1
3309+        def _done_processing(ignored=None):
3310+            self._queries_outstanding.discard(peerid)
3311+            self._servermap.reachable_peers.add(peerid)
3312+            self._must_query.discard(peerid)
3313+            self._queries_completed += 1
3314         if not self._running:
3315hunk ./src/allmydata/mutable/servermap.py 650
3316-            self.log("but we're not running, so we'll ignore it", parent=lp,
3317-                     level=log.NOISY)
3318+            self.log("but we're not running, so we'll ignore it", parent=lp)
3319+            _done_processing()
3320             self._status.add_per_server_time(peerid, "late", started, elapsed)
3321             return
3322         self._status.add_per_server_time(peerid, "query", started, elapsed)
3323hunk ./src/allmydata/mutable/servermap.py 661
3324         else:
3325             self._empty_peers.add(peerid)
3326 
3327-        last_verinfo = None
3328-        last_shnum = None
3329+        ss, storage_index = stuff
3330+        ds = []
3331+
3332         for shnum,datav in datavs.items():
3333             data = datav[0]
3334hunk ./src/allmydata/mutable/servermap.py 666
3335-            try:
3336-                verinfo = self._got_results_one_share(shnum, data, peerid, lp)
3337-                last_verinfo = verinfo
3338-                last_shnum = shnum
3339-                self._node._add_to_cache(verinfo, shnum, 0, data)
3340-            except CorruptShareError, e:
3341-                # log it and give the other shares a chance to be processed
3342-                f = failure.Failure()
3343-                self.log(format="bad share: %(f_value)s", f_value=str(f.value),
3344-                         failure=f, parent=lp, level=log.WEIRD, umid="h5llHg")
3345-                self.notify_server_corruption(peerid, shnum, str(e))
3346-                self._bad_peers.add(peerid)
3347-                self._last_failure = f
3348-                checkstring = data[:SIGNED_PREFIX_LENGTH]
3349-                self._servermap.mark_bad_share(peerid, shnum, checkstring)
3350-                self._servermap.problems.append(f)
3351-                pass
3352+            reader = MDMFSlotReadProxy(ss,
3353+                                       storage_index,
3354+                                       shnum,
3355+                                       data)
3356+            self._readers.setdefault(peerid, dict())[shnum] = reader
3357+            # our goal, with each response, is to validate the version
3358+            # information and share data as best we can at this point --
3359+            # we do this by validating the signature. To do this, we
3360+            # need to do the following:
3361+            #   - If we don't already have the public key, fetch the
3362+            #     public key. We use this to validate the signature.
3363+            if not self._node.get_pubkey():
3364+                # fetch and set the public key.
3365+                d = reader.get_verification_key(queue=True)
3366+                d.addCallback(lambda results, shnum=shnum, peerid=peerid:
3367+                    self._try_to_set_pubkey(results, peerid, shnum, lp))
3368+                # XXX: Make self._pubkey_query_failed?
3369+                d.addErrback(lambda error, shnum=shnum, peerid=peerid:
3370+                    self._got_corrupt_share(error, shnum, peerid, data, lp))
3371+            else:
3372+                # we already have the public key.
3373+                d = defer.succeed(None)
3374 
3375hunk ./src/allmydata/mutable/servermap.py 689
3376-        self._status.timings["cumulative_verify"] += (time.time() - now)
3377+            # Neither of these two branches return anything of
3378+            # consequence, so the first entry in our deferredlist will
3379+            # be None.
3380 
3381hunk ./src/allmydata/mutable/servermap.py 693
3382-        if self._need_privkey and last_verinfo:
3383-            # send them a request for the privkey. We send one request per
3384-            # server.
3385-            lp2 = self.log("sending privkey request",
3386-                           parent=lp, level=log.NOISY)
3387-            (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
3388-             offsets_tuple) = last_verinfo
3389-            o = dict(offsets_tuple)
3390+            # - Next, we need the version information. We almost
3391+            #   certainly got this by reading the first thousand or so
3392+            #   bytes of the share on the storage server, so we
3393+            #   shouldn't need to fetch anything at this step.
3394+            d2 = reader.get_verinfo()
3395+            d2.addErrback(lambda error, shnum=shnum, peerid=peerid:
3396+                self._got_corrupt_share(error, shnum, peerid, data, lp))
3397+            # - Next, we need the signature. For an SDMF share, it is
3398+            #   likely that we fetched this when doing our initial fetch
3399+            #   to get the version information. In MDMF, this lives at
3400+            #   the end of the share, so unless the file is quite small,
3401+            #   we'll need to do a remote fetch to get it.
3402+            d3 = reader.get_signature(queue=True)
3403+            d3.addErrback(lambda error, shnum=shnum, peerid=peerid:
3404+                self._got_corrupt_share(error, shnum, peerid, data, lp))
3405+            #  Once we have all three of these responses, we can move on
3406+            #  to validating the signature
3407+
3408+            # Does the node already have a privkey? If not, we'll try to
3409+            # fetch it here.
3410+            if self._need_privkey:
3411+                d4 = reader.get_encprivkey(queue=True)
3412+                d4.addCallback(lambda results, shnum=shnum, peerid=peerid:
3413+                    self._try_to_validate_privkey(results, peerid, shnum, lp))
3414+                d4.addErrback(lambda error, shnum=shnum, peerid=peerid:
3415+                    self._privkey_query_failed(error, shnum, data, lp))
3416+            else:
3417+                d4 = defer.succeed(None)
3418 
3419hunk ./src/allmydata/mutable/servermap.py 722
3420-            self._queries_outstanding.add(peerid)
3421-            readv = [ (o['enc_privkey'], (o['EOF'] - o['enc_privkey'])) ]
3422-            ss = self._servermap.connections[peerid]
3423-            privkey_started = time.time()
3424-            d = self._do_read(ss, peerid, self._storage_index,
3425-                              [last_shnum], readv)
3426-            d.addCallback(self._got_privkey_results, peerid, last_shnum,
3427-                          privkey_started, lp2)
3428-            d.addErrback(self._privkey_query_failed, peerid, last_shnum, lp2)
3429-            d.addErrback(log.err)
3430-            d.addCallback(self._check_for_done)
3431-            d.addErrback(self._fatal_error)
3432 
3433hunk ./src/allmydata/mutable/servermap.py 723
3434+            if self.fetch_update_data:
3435+                # fetch the block hash tree and first + last segment, as
3436+                # configured earlier.
3437+                # Then set them in wherever we happen to want to set
3438+                # them.
3439+                ds = []
3440+                # XXX: We do this above, too. Is there a good way to
3441+                # make the two routines share the value without
3442+                # introducing more roundtrips?
3443+                ds.append(reader.get_verinfo())
3444+                ds.append(reader.get_blockhashes(queue=True))
3445+                ds.append(reader.get_block_and_salt(self.start_segment,
3446+                                                    queue=True))
3447+                ds.append(reader.get_block_and_salt(self.end_segment,
3448+                                                    queue=True))
3449+                d5 = deferredutil.gatherResults(ds)
3450+                d5.addCallback(self._got_update_results_one_share, shnum)
3451+            else:
3452+                d5 = defer.succeed(None)
3453+
3454+            dl = defer.DeferredList([d, d2, d3, d4, d5])
3455+            dl.addBoth(self._turn_barrier)
3456+            reader.flush()
3457+            dl.addCallback(lambda results, shnum=shnum, peerid=peerid:
3458+                self._got_signature_one_share(results, shnum, peerid, lp))
3459+            dl.addErrback(lambda error, shnum=shnum, data=data:
3460+               self._got_corrupt_share(error, shnum, peerid, data, lp))
3461+            dl.addCallback(lambda verinfo, shnum=shnum, peerid=peerid, data=data:
3462+                self._cache_good_sharedata(verinfo, shnum, now, data))
3463+            ds.append(dl)
3464+        # dl is a deferred list that will fire when all of the shares
3465+        # that we found on this peer are done processing. When dl fires,
3466+        # we know that processing is done, so we can decrement the
3467+        # semaphore-like thing that we incremented earlier.
3468+        dl = defer.DeferredList(ds, fireOnOneErrback=True)
3469+        # Are we done? Done means that there are no more queries to
3470+        # send, that there are no outstanding queries, and that we
3471+        # haven't received any queries that are still processing. If we
3472+        # are done, self._check_for_done will cause the done deferred
3473+        # that we returned to our caller to fire, which tells them that
3474+        # they have a complete servermap, and that we won't be touching
3475+        # the servermap anymore.
3476+        dl.addCallback(_done_processing)
3477+        dl.addCallback(self._check_for_done)
3478+        dl.addErrback(self._fatal_error)
3479         # all done!
3480         self.log("_got_results done", parent=lp, level=log.NOISY)
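[editor's note: the per-share deferreds collected in `ds` and gathered with `DeferredList(fireOnOneErrback=True)` act as a completion barrier: `_done_processing` and `_check_for_done` may only run once every share from this peer has finished its pipeline. A rough stdlib sketch of that pattern (asyncio standing in for Twisted; all names hypothetical):]

```python
import asyncio

async def process_share(shnum):
    # stand-in for the per-share pipeline above (pubkey, verinfo,
    # signature, optional privkey and update-data fetches)
    await asyncio.sleep(0)
    return ("verinfo", shnum)

async def got_results(shnums):
    # gather() without return_exceptions mirrors
    # DeferredList(fireOnOneErrback=True): the first failure aborts the batch
    results = await asyncio.gather(*(process_share(s) for s in shnums))
    # the _done_processing / _check_for_done equivalents run only here,
    # after every share from this peer has been processed
    return len(results)

assert asyncio.run(got_results([0, 1, 2])) == 3
```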
3481hunk ./src/allmydata/mutable/servermap.py 770
3482+        return dl
3483+
3484+
3485+    def _turn_barrier(self, result):
3486+        """
3487+        I help the servermap updater avoid the recursion limit issues
3488+        discussed in #237.
3489+        """
3490+        return fireEventually(result)
3491+
3492+
3493+    def _try_to_set_pubkey(self, pubkey_s, peerid, shnum, lp):
3494+        if self._node.get_pubkey():
3495+            return # don't go through this again if we don't have to
3496+        fingerprint = hashutil.ssk_pubkey_fingerprint_hash(pubkey_s)
3497+        assert len(fingerprint) == 32
3498+        if fingerprint != self._node.get_fingerprint():
3499+            raise CorruptShareError(peerid, shnum,
3500+                                "pubkey doesn't match fingerprint")
3501+        self._node._populate_pubkey(self._deserialize_pubkey(pubkey_s))
3502+        assert self._node.get_pubkey()
3503+
3504 
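[editor's note: `_try_to_set_pubkey` above accepts a server-supplied key only when its 32-byte hash matches the fingerprint already held by the node (i.e. baked into the cap). A minimal self-contained sketch of that check, with plain SHA-256 standing in for Tahoe's tagged `hashutil.ssk_pubkey_fingerprint_hash`:]

```python
import hashlib

def fingerprint(pubkey_s: bytes) -> bytes:
    # stand-in for hashutil.ssk_pubkey_fingerprint_hash, which is a
    # tagged hash in Tahoe; plain SHA-256 keeps this sketch self-contained
    return hashlib.sha256(pubkey_s).digest()

def try_to_set_pubkey(pubkey_s, expected_fingerprint):
    fp = fingerprint(pubkey_s)
    assert len(fp) == 32
    if fp != expected_fingerprint:
        # the real code raises CorruptShareError(peerid, shnum, ...)
        raise ValueError("pubkey doesn't match fingerprint")
    return pubkey_s  # the caller would deserialize and cache the key

key = b"fake-pubkey-bytes"
assert try_to_set_pubkey(key, hashlib.sha256(key).digest()) == key
```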
3505     def notify_server_corruption(self, peerid, shnum, reason):
3506         ss = self._servermap.connections[peerid]
3507hunk ./src/allmydata/mutable/servermap.py 798
3508         ss.callRemoteOnly("advise_corrupt_share",
3509                           "mutable", self._storage_index, shnum, reason)
3510 
3511-    def _got_results_one_share(self, shnum, data, peerid, lp):
3512+
3513+    def _got_signature_one_share(self, results, shnum, peerid, lp):
3514+        # It is our job to give versioninfo to our caller. We need to
3515+        # raise CorruptShareError if the share is corrupt for any
3516+        # reason, something that our caller will handle.
3517         self.log(format="_got_results: got shnum #%(shnum)d from peerid %(peerid)s",
3518                  shnum=shnum,
3519                  peerid=idlib.shortnodeid_b2a(peerid),
3520hunk ./src/allmydata/mutable/servermap.py 808
3521                  level=log.NOISY,
3522                  parent=lp)
3523+        if not self._running:
3524+            # We can't process the results, since we can't touch the
3525+            # servermap anymore.
3526+            self.log("but we're not running anymore.")
3527+            return None
3528 
3529hunk ./src/allmydata/mutable/servermap.py 814
3530-        # this might raise NeedMoreDataError, if the pubkey and signature
3531-        # live at some weird offset. That shouldn't happen, so I'm going to
3532-        # treat it as a bad share.
3533-        (seqnum, root_hash, IV, k, N, segsize, datalength,
3534-         pubkey_s, signature, prefix) = unpack_prefix_and_signature(data)
3535-
3536-        if not self._node.get_pubkey():
3537-            fingerprint = hashutil.ssk_pubkey_fingerprint_hash(pubkey_s)
3538-            assert len(fingerprint) == 32
3539-            if fingerprint != self._node.get_fingerprint():
3540-                raise CorruptShareError(peerid, shnum,
3541-                                        "pubkey doesn't match fingerprint")
3542-            self._node._populate_pubkey(self._deserialize_pubkey(pubkey_s))
3543-
3544-        if self._need_privkey:
3545-            self._try_to_extract_privkey(data, peerid, shnum, lp)
3546-
3547-        (ig_version, ig_seqnum, ig_root_hash, ig_IV, ig_k, ig_N,
3548-         ig_segsize, ig_datalen, offsets) = unpack_header(data)
3549+        _, verinfo, signature, __, ___ = results
3550+        (seqnum,
3551+         root_hash,
3552+         saltish,
3553+         segsize,
3554+         datalen,
3555+         k,
3556+         n,
3557+         prefix,
3558+         offsets) = verinfo[1]
3559         offsets_tuple = tuple( [(key,value) for key,value in offsets.items()] )
3560 
3561hunk ./src/allmydata/mutable/servermap.py 826
3562-        verinfo = (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
3563+        # XXX: This should be done for us in the method, so
3564+        # presumably you can go in there and fix it.
3565+        verinfo = (seqnum,
3566+                   root_hash,
3567+                   saltish,
3568+                   segsize,
3569+                   datalen,
3570+                   k,
3571+                   n,
3572+                   prefix,
3573                    offsets_tuple)
3574hunk ./src/allmydata/mutable/servermap.py 837
3575+        # This tuple uniquely identifies a share on the grid; we use it
3576+        # to keep track of the ones that we've already seen.
3577 
3578         if verinfo not in self._valid_versions:
3579hunk ./src/allmydata/mutable/servermap.py 841
3580-            # it's a new pair. Verify the signature.
3581-            valid = self._node.get_pubkey().verify(prefix, signature)
3582+            # This is a new version tuple, and we need to validate it
3583+            # against the public key before keeping track of it.
3584+            assert self._node.get_pubkey()
3585+            valid = self._node.get_pubkey().verify(prefix, signature[1])
3586             if not valid:
3587hunk ./src/allmydata/mutable/servermap.py 846
3588-                raise CorruptShareError(peerid, shnum, "signature is invalid")
3589+                raise CorruptShareError(peerid, shnum,
3590+                                        "signature is invalid")
3591 
3592hunk ./src/allmydata/mutable/servermap.py 849
3593-            # ok, it's a valid verinfo. Add it to the list of validated
3594-            # versions.
3595-            self.log(" found valid version %d-%s from %s-sh%d: %d-%d/%d/%d"
3596-                     % (seqnum, base32.b2a(root_hash)[:4],
3597-                        idlib.shortnodeid_b2a(peerid), shnum,
3598-                        k, N, segsize, datalength),
3599-                     parent=lp)
3600-            self._valid_versions.add(verinfo)
3601-        # We now know that this is a valid candidate verinfo.
3602+        # ok, it's a valid verinfo. Add it to the list of validated
3603+        # versions.
3604+        self.log(" found valid version %d-%s from %s-sh%d: %d-%d/%d/%d"
3605+                 % (seqnum, base32.b2a(root_hash)[:4],
3606+                    idlib.shortnodeid_b2a(peerid), shnum,
3607+                    k, n, segsize, datalen),
3608+                    parent=lp)
3609+        self._valid_versions.add(verinfo)
3610+        # We now know that this is a valid candidate verinfo. Whether or
3611+        # not this instance of it is valid is a matter for the next
3612+        # statement; at this point, we just know that if we see this
3613+        # version info again, that its signature checks out and that
3614+        # we're okay to skip the signature-checking step.
3615 
3616hunk ./src/allmydata/mutable/servermap.py 863
3617+        # (peerid, shnum) are bound in the method invocation.
3618         if (peerid, shnum) in self._servermap.bad_shares:
3619             # we've been told that the rest of the data in this share is
3620             # unusable, so don't add it to the servermap.
3621hunk ./src/allmydata/mutable/servermap.py 876
3622         self._servermap.add_new_share(peerid, shnum, verinfo, timestamp)
3623         # and the versionmap
3624         self.versionmap.add(verinfo, (shnum, peerid, timestamp))
3625+
3626         return verinfo
3627 
3628hunk ./src/allmydata/mutable/servermap.py 879
3629-    def _deserialize_pubkey(self, pubkey_s):
3630-        verifier = rsa.create_verifying_key_from_string(pubkey_s)
3631-        return verifier
3632 
3633hunk ./src/allmydata/mutable/servermap.py 880
3634-    def _try_to_extract_privkey(self, data, peerid, shnum, lp):
3635-        try:
3636-            r = unpack_share(data)
3637-        except NeedMoreDataError, e:
3638-            # this share won't help us. oh well.
3639-            offset = e.encprivkey_offset
3640-            length = e.encprivkey_length
3641-            self.log("shnum %d on peerid %s: share was too short (%dB) "
3642-                     "to get the encprivkey; [%d:%d] ought to hold it" %
3643-                     (shnum, idlib.shortnodeid_b2a(peerid), len(data),
3644-                      offset, offset+length),
3645-                     parent=lp)
3646-            # NOTE: if uncoordinated writes are taking place, someone might
3647-            # change the share (and most probably move the encprivkey) before
3648-            # we get a chance to do one of these reads and fetch it. This
3649-            # will cause us to see a NotEnoughSharesError(unable to fetch
3650-            # privkey) instead of an UncoordinatedWriteError . This is a
3651-            # nuisance, but it will go away when we move to DSA-based mutable
3652-            # files (since the privkey will be small enough to fit in the
3653-            # write cap).
3654+    def _got_update_results_one_share(self, results, share):
3655+        """
3656+        I record the update results for this share in the servermap.
3657+        """
3658+        assert len(results) == 4
3659+        verinfo, blockhashes, start, end = results
3660+        (seqnum,
3661+         root_hash,
3662+         saltish,
3663+         segsize,
3664+         datalen,
3665+         k,
3666+         n,
3667+         prefix,
3668+         offsets) = verinfo
3669+        offsets_tuple = tuple( [(key,value) for key,value in offsets.items()] )
3670 
3671hunk ./src/allmydata/mutable/servermap.py 897
3672-            return
3673+        # XXX: This should be done for us in the method, so
3674+        # presumably you can go in there and fix it.
3675+        verinfo = (seqnum,
3676+                   root_hash,
3677+                   saltish,
3678+                   segsize,
3679+                   datalen,
3680+                   k,
3681+                   n,
3682+                   prefix,
3683+                   offsets_tuple)
3684 
3685hunk ./src/allmydata/mutable/servermap.py 909
3686-        (seqnum, root_hash, IV, k, N, segsize, datalen,
3687-         pubkey, signature, share_hash_chain, block_hash_tree,
3688-         share_data, enc_privkey) = r
3689+        update_data = (blockhashes, start, end)
3690+        self._servermap.set_update_data_for_share_and_verinfo(share,
3691+                                                              verinfo,
3692+                                                              update_data)
3693 
3694hunk ./src/allmydata/mutable/servermap.py 914
3695-        return self._try_to_validate_privkey(enc_privkey, peerid, shnum, lp)
3696 
3697hunk ./src/allmydata/mutable/servermap.py 915
3698-    def _try_to_validate_privkey(self, enc_privkey, peerid, shnum, lp):
3699+    def _deserialize_pubkey(self, pubkey_s):
3700+        verifier = rsa.create_verifying_key_from_string(pubkey_s)
3701+        return verifier
3702 
3703hunk ./src/allmydata/mutable/servermap.py 919
3704+
3705+    def _try_to_validate_privkey(self, enc_privkey, peerid, shnum, lp):
3706+        """
3707+        Given a writekey from a remote server, I validate it against the
3708+        writekey stored in my node. If it is valid, then I set the
3709+        privkey and encprivkey properties of the node.
3710+        """
3711         alleged_privkey_s = self._node._decrypt_privkey(enc_privkey)
3712         alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s)
3713         if alleged_writekey != self._node.get_writekey():
3714hunk ./src/allmydata/mutable/servermap.py 998
3715         self._queries_completed += 1
3716         self._last_failure = f
3717 
3718-    def _got_privkey_results(self, datavs, peerid, shnum, started, lp):
3719-        now = time.time()
3720-        elapsed = now - started
3721-        self._status.add_per_server_time(peerid, "privkey", started, elapsed)
3722-        self._queries_outstanding.discard(peerid)
3723-        if not self._need_privkey:
3724-            return
3725-        if shnum not in datavs:
3726-            self.log("privkey wasn't there when we asked it",
3727-                     level=log.WEIRD, umid="VA9uDQ")
3728-            return
3729-        datav = datavs[shnum]
3730-        enc_privkey = datav[0]
3731-        self._try_to_validate_privkey(enc_privkey, peerid, shnum, lp)
3732 
3733     def _privkey_query_failed(self, f, peerid, shnum, lp):
3734         self._queries_outstanding.discard(peerid)
3735hunk ./src/allmydata/mutable/servermap.py 1012
3736         self._servermap.problems.append(f)
3737         self._last_failure = f
3738 
3739+
3740     def _check_for_done(self, res):
3741         # exit paths:
3742         #  return self._send_more_queries(outstanding) : send some more queries
3743hunk ./src/allmydata/mutable/servermap.py 1018
3744         #  return self._done() : all done
3745         #  return : keep waiting, no new queries
3746-
3747         lp = self.log(format=("_check_for_done, mode is '%(mode)s', "
3748                               "%(outstanding)d queries outstanding, "
3749                               "%(extra)d extra peers available, "
3750hunk ./src/allmydata/mutable/servermap.py 1209
3751 
3752     def _done(self):
3753         if not self._running:
3754+            self.log("not running; we're already done")
3755             return
3756         self._running = False
3757         now = time.time()
3758hunk ./src/allmydata/mutable/servermap.py 1224
3759         self._servermap.last_update_time = self._started
3760         # the servermap will not be touched after this
3761         self.log("servermap: %s" % self._servermap.summarize_versions())
3762+
3763         eventually(self._done_deferred.callback, self._servermap)
3764 
3765     def _fatal_error(self, f):
3766}
3767[interfaces: change interfaces to work with MDMF
3768Kevan Carstensen <kevan@isnotajoke.com>**20110802014119
3769 Ignore-this: 2f441022cf888c044bc9e6dd609db139
3770 
3771 A lot of this work concerns #993, in that it unifies (to an extent) the
3772 interfaces of mutable and immutable files.
3773] {
3774hunk ./src/allmydata/interfaces.py 7
3775      ChoiceOf, IntegerConstraint, Any, RemoteInterface, Referenceable
3776 
3777 HASH_SIZE=32
3778+SALT_SIZE=16
3779+
3780+SDMF_VERSION=0
3781+MDMF_VERSION=1
3782 
3783 Hash = StringConstraint(maxLength=HASH_SIZE,
3784                         minLength=HASH_SIZE)# binary format 32-byte SHA256 hash
3785hunk ./src/allmydata/interfaces.py 424
3786         """
3787 
3788 
3789+class IMutableSlotWriter(Interface):
3790+    """
3791+    The interface for a writer around a mutable slot on a remote server.
3792+    """
3793+    def set_checkstring(checkstring, *args):
3794+        """
3795+        Set the checkstring that I will pass to the remote server when
3796+        writing.
3797+
3798+            @param checkstring A packed checkstring to use.
3799+
3800+        Note that implementations can differ in which semantics they
3801+        wish to support for set_checkstring -- they can, for example,
3802+        build the checkstring themselves from its constituent parts
3803+        instead of accepting it pre-packed.
3804+        """
3805+
3806+    def get_checkstring():
3807+        """
3808+        Get the checkstring that I think currently exists on the remote
3809+        server.
3810+        """
3811+
3812+    def put_block(data, segnum, salt):
3813+        """
3814+        Add a block and salt to the share.
3815+        """
3816+
3817+    def put_encprivkey(encprivkey):
3818+        """
3819+        Add the encrypted private key to the share.
3820+        """
3821+
3822+    def put_blockhashes(blockhashes=list):
3823+        """
3824+        Add the block hash tree to the share.
3825+        """
3826+
3827+    def put_sharehashes(sharehashes=dict):
3828+        """
3829+        Add the share hash chain to the share.
3830+        """
3831+
3832+    def get_signable():
3833+        """
3834+        Return the part of the share that needs to be signed.
3835+        """
3836+
3837+    def put_signature(signature):
3838+        """
3839+        Add the signature to the share.
3840+        """
3841+
3842+    def put_verification_key(verification_key):
3843+        """
3844+        Add the verification key to the share.
3845+        """
3846+
3847+    def finish_publishing():
3848+        """
3849+        Do anything necessary to finish writing the share to a remote
3850+        server. I require that no further publishing needs to take place
3851+        after this method has been called.
3852+        """
3853+
3854+
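[editor's note: the IMutableSlotWriter methods above are intended to be invoked in a particular order during publish. A toy recorder illustrating that sequence (not a real writer; the signing step and all values are hypothetical):]

```python
class RecordingSlotWriter:
    """Toy IMutableSlotWriter that just records the publish sequence."""
    def __init__(self):
        self.calls = []
    def set_checkstring(self, checkstring):
        self.calls.append("set_checkstring")
    def put_block(self, data, segnum, salt):
        self.calls.append("put_block[%d]" % segnum)
    def put_encprivkey(self, encprivkey):
        self.calls.append("put_encprivkey")
    def put_blockhashes(self, blockhashes):
        self.calls.append("put_blockhashes")
    def put_sharehashes(self, sharehashes):
        self.calls.append("put_sharehashes")
    def get_signable(self):
        self.calls.append("get_signable")
        return b"signable-prefix"
    def put_signature(self, signature):
        self.calls.append("put_signature")
    def put_verification_key(self, verification_key):
        self.calls.append("put_verification_key")
    def finish_publishing(self):
        self.calls.append("finish_publishing")

w = RecordingSlotWriter()
w.set_checkstring(b"\x00" * 32)
for segnum in range(2):                      # one put_block per segment
    w.put_block(b"block", segnum, b"\x01" * 16)
w.put_encprivkey(b"encrypted-private-key")
w.put_blockhashes([b"\x02" * 32])
w.put_sharehashes({0: b"\x03" * 32})
signature = b"sig-over-" + w.get_signable()  # caller signs, writer stores
w.put_signature(signature)
w.put_verification_key(b"pubkey")
w.finish_publishing()                        # nothing may be written after this
assert w.calls[0] == "set_checkstring"
assert w.calls[-1] == "finish_publishing"
```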
3855 class IURI(Interface):
3856     def init_from_string(uri):
3857         """Accept a string (as created by my to_string() method) and populate
3858hunk ./src/allmydata/interfaces.py 546
3859 
3860 class IMutableFileURI(Interface):
3861     """I am a URI which represents a mutable filenode."""
3862+    def get_extension_params():
3863+        """Return the extension parameters in the URI"""
3864+
3865+    def set_extension_params():
3866+        """Set the extension parameters that should be in the URI"""
3867 
3868 class IDirectoryURI(Interface):
3869     pass
3870hunk ./src/allmydata/interfaces.py 574
3871 class MustNotBeUnknownRWError(CapConstraintError):
3872     """Cannot add an unknown child cap specified in a rw_uri field."""
3873 
3874+
3875+class IReadable(Interface):
3876+    """I represent a readable object -- either an immutable file, or a
3877+    specific version of a mutable file.
3878+    """
3879+
3880+    def is_readonly():
3881+        """Return True if this reference provides mutable access to the given
3882+        file or directory (i.e. if you can modify it), or False if not. Note
3883+        that even if this reference is read-only, someone else may hold a
3884+        read-write reference to it.
3885+
3886+        For an IReadable returned by get_best_readable_version(), this will
3887+        always return True, but for instances of subinterfaces such as
3888+        IMutableFileVersion, it may return False."""
3889+
3890+    def is_mutable():
3891+        """Return True if this file or directory is mutable (by *somebody*,
3892+        not necessarily you), False if it is immutable. Note that a file
3893+        might be mutable overall, but your reference to it might be
3894+        read-only. On the other hand, all references to an immutable file
3895+        will be read-only; there are no read-write references to an immutable
3896+        file."""
3897+
3898+    def get_storage_index():
3899+        """Return the storage index of the file."""
3900+
3901+    def get_size():
3902+        """Return the length (in bytes) of this readable object."""
3903+
3904+    def download_to_data():
3905+        """Download all of the file contents. I return a Deferred that fires
3906+        with the contents as a byte string."""
3907+
3908+    def read(consumer, offset=0, size=None):
3909+        """Download a portion (possibly all) of the file's contents, making
3910+        them available to the given IConsumer. Return a Deferred that fires
3911+        (with the consumer) when the consumer is unregistered (either because
3912+        the last byte has been given to it, or because the consumer threw an
3913+        exception during write(), possibly because it no longer wants to
3914+        receive data). The portion downloaded will start at 'offset' and
3915+        contain 'size' bytes (or the remainder of the file if size==None).
3916+
3917+        The consumer will be used in non-streaming mode: an IPullProducer
3918+        will be attached to it.
3919+
3920+        The consumer will not receive data right away: several network trips
3921+        must occur first. The order of events will be::
3922+
3923+         consumer.registerProducer(p, streaming)
3924+          (if streaming == False)::
3925+           consumer does p.resumeProducing()
3926+            consumer.write(data)
3927+           consumer does p.resumeProducing()
3928+            consumer.write(data).. (repeat until all data is written)
3929+         consumer.unregisterProducer()
3930+         deferred.callback(consumer)
3931+
3932+        If a download error occurs, or an exception is raised by
3933+        consumer.registerProducer() or consumer.write(), I will call
3934+        consumer.unregisterProducer() and then deliver the exception via
3935+        deferred.errback(). To cancel the download, the consumer should call
3936+        p.stopProducing(), which will result in an exception being delivered
3937+        via deferred.errback().
3938+
3939+        See src/allmydata/util/consumer.py for an example of a simple
3940+        download-to-memory consumer.
3941+        """
3942+
3943+
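[editor's note: the read() contract above can be exercised with a toy in-memory consumer/producer pair. These classes are illustrative only; the real download-to-memory helper lives in src/allmydata/util/consumer.py, and the real producer is driven by the downloader:]

```python
class MemoryConsumer:
    """Collects written data; a toy download-to-memory consumer."""
    def __init__(self):
        self.chunks = []
        self.done = False
    def registerProducer(self, producer, streaming):
        assert streaming is False   # IReadable.read uses non-streaming mode
        # non-streaming: the consumer keeps pulling until unregistered
        while not self.done:
            producer.resumeProducing()
    def write(self, data):
        self.chunks.append(data)
    def unregisterProducer(self):
        self.done = True

class ChunkedPullProducer:
    """Hands the consumer one chunk per resumeProducing() call."""
    def __init__(self, consumer, chunks):
        self.consumer = consumer
        self.chunks = list(chunks)
    def resumeProducing(self):
        if self.chunks:
            self.consumer.write(self.chunks.pop(0))
        else:
            self.consumer.unregisterProducer()

c = MemoryConsumer()
p = ChunkedPullProducer(c, [b"hello ", b"world"])
c.registerProducer(p, streaming=False)
assert b"".join(c.chunks) == b"hello world"
assert c.done
```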
3944+class IWritable(Interface):
3945+    """
3946+    I define methods that callers can use to update SDMF and MDMF
3947+    mutable files on a Tahoe-LAFS grid.
3948+    """
3949+    # XXX: For the moment, we have only this. It is possible that we
3950+    #      want to move overwrite() and modify() in here too.
3951+    def update(data, offset):
3952+        """
3953+        I write the data from my data argument to the MDMF file,
3954+        starting at offset. I continue writing data until my data
3955+        argument is exhausted, appending data to the file as necessary.
3956+        """
3957+        # assert IMutableUploadable.providedBy(data)
3958+        # to append data: offset=node.get_size_of_best_version()
3959+        # do we want to support compacting MDMF?
3960+        # for an MDMF file, this can be done with O(data.get_size())
3961+        # memory. For an SDMF file, any modification takes
3962+        # O(node.get_size_of_best_version()).
3963+
3964+
3965+class IMutableFileVersion(IReadable):
3966+    """I provide access to a particular version of a mutable file. The
3967+    access is read/write if I was obtained from a filenode derived from
3968+    a write cap, or read-only if the filenode was derived from a read cap.
3969+    """
3970+
3971+    def get_sequence_number():
3972+        """Return the sequence number of this version."""
3973+
3974+    def get_servermap():
3975+        """Return the IMutableFileServerMap instance that was used to create
3976+        this object.
3977+        """
3978+
3979+    def get_writekey():
3980+        """Return this filenode's writekey, or None if the node does not have
3981+        write-capability. This may be used to assist with data structures
3982+        that need to make certain data available only to writers, such as the
3983+        read-write child caps in dirnodes. The recommended process is to have
3984+        reader-visible data be submitted to the filenode in the clear (where
3985+        it will be encrypted by the filenode using the readkey), but encrypt
3986+        writer-visible data using this writekey.
3987+        """
3988+
3989+    # TODO: Can this be overwrite instead of replace?
3990+    def replace(new_contents):
3991+        """Replace the contents of the mutable file, provided that no other
3992+        node has published (or is attempting to publish, concurrently) a
3993+        newer version of the file than this one.
3994+
3995+        I will avoid modifying any share that is different than the version
3996+        given by get_sequence_number(). However, if another node is writing
3997+        to the file at the same time as me, I may manage to update some shares
3998+        while they update others. If I see any evidence of this, I will signal
3999+        UncoordinatedWriteError, and the file will be left in an inconsistent
4000+        state (possibly the version you provided, possibly the old version,
4001+        possibly somebody else's version, and possibly a mix of shares from
4002+        all of these).
4003+
4004+        The recommended response to UncoordinatedWriteError is to either
4005+        return it to the caller (since they failed to coordinate their
4006+        writes), or to attempt some sort of recovery. It may be sufficient to
4007+        wait a random interval (with exponential backoff) and repeat your
4008+        operation. If I do not signal UncoordinatedWriteError, then I was
4009+        able to write the new version without incident.
4010+
4011+        I return a Deferred that fires (with a PublishStatus object) when the
4012+        update has completed.
4013+        """
4014+
4015+    def modify(modifier_cb):
4016+        """Modify the contents of the file, by downloading this version,
4017+        applying the modifier function (or bound method), then uploading
4018+        the new version. This will succeed as long as no other node
4019+        publishes a version between the download and the upload.
4020+        I return a Deferred that fires (with a PublishStatus object) when
4021+        the update is complete.
4022+
4023+        The modifier callable will be given three arguments: a string (with
4024+        the old contents), a 'first_time' boolean, and a servermap. As with
4025+        download_to_data(), the old contents will be from this version,
4026+        but the modifier can use the servermap to make other decisions
4027+        (such as refusing to apply the delta if there are multiple parallel
4028+        versions, or if there is evidence of a newer unrecoverable version).
4029+        'first_time' will be True the first time the modifier is called,
4030+        and False on any subsequent calls.
4031+
4032+        The callable should return a string with the new contents. The
4033+        callable must be prepared to be called multiple times, and must
4034+        examine the input string to see if the change that it wants to make
4035+        is already present in the old version. If it does not need to make
4036+        any changes, it can either return None, or return its input string.
4037+
4038+        If the modifier raises an exception, it will be returned in the
4039+        errback.
4040+        """
4041+
4042+
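The recommended response to UncoordinatedWriteError described above (wait a random interval with exponential backoff, then repeat the operation) can be sketched as a plain retry loop. This is only an illustration: `UncoordinatedWriteError` is redefined locally, `replace_fn` stands in for a call to the node's `replace()`, and the random delays are recorded rather than slept so the sketch stays deterministic.

```python
import random

class UncoordinatedWriteError(Exception):
    """Local stand-in for allmydata.mutable.common.UncoordinatedWriteError."""

def replace_with_backoff(replace_fn, new_contents, max_tries=5, base_delay=0.1):
    """Retry replace_fn on UncoordinatedWriteError, choosing a random
    wait interval with exponential backoff between attempts. Returns
    the value of the first successful call, or re-raises after the
    final attempt fails."""
    delays = []  # recorded instead of slept, to keep the sketch testable
    for attempt in range(max_tries):
        try:
            return replace_fn(new_contents)
        except UncoordinatedWriteError:
            if attempt == max_tries - 1:
                raise
            # random interval in [0, base_delay * 2**attempt)
            delays.append(random.uniform(0, base_delay * (2 ** attempt)))
```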
4043 # The hierarchy looks like this:
4044 #  IFilesystemNode
4045 #   IFileNode
4046hunk ./src/allmydata/interfaces.py 833
4047     def raise_error():
4048         """Raise any error associated with this node."""
4049 
4050+    # XXX: These may not be appropriate outside the context of an IReadable.
4051     def get_size():
4052         """Return the length (in bytes) of the data this node represents. For
4053         directory nodes, I return the size of the backing store. I return
4054hunk ./src/allmydata/interfaces.py 850
4055 class IFileNode(IFilesystemNode):
4056     """I am a node which represents a file: a sequence of bytes. I am not a
4057     container, like IDirectoryNode."""
4058+    def get_best_readable_version():
4059+        """Return a Deferred that fires with an IReadable for the 'best'
4060+        available version of the file. The IReadable provides only read
4061+        access, even if this filenode was derived from a write cap.
4062 
4063hunk ./src/allmydata/interfaces.py 855
4064-class IImmutableFileNode(IFileNode):
4065-    def read(consumer, offset=0, size=None):
4066-        """Download a portion (possibly all) of the file's contents, making
4067-        them available to the given IConsumer. Return a Deferred that fires
4068-        (with the consumer) when the consumer is unregistered (either because
4069-        the last byte has been given to it, or because the consumer threw an
4070-        exception during write(), possibly because it no longer wants to
4071-        receive data). The portion downloaded will start at 'offset' and
4072-        contain 'size' bytes (or the remainder of the file if size==None).
4073-
4074-        The consumer will be used in non-streaming mode: an IPullProducer
4075-        will be attached to it.
4076+        For an immutable file, there is only one version. For a mutable
4077+        file, the 'best' version is the recoverable version with the
4078+        highest sequence number. If no uncoordinated writes have occurred,
4079+        and if enough shares are available, then this will be the most
4080+        recent version that has been uploaded. If no version is recoverable,
4081+        the Deferred will errback with an UnrecoverableFileError.
4082+        """
4083 
4084hunk ./src/allmydata/interfaces.py 863
4085-        The consumer will not receive data right away: several network trips
4086-        must occur first. The order of events will be::
4087+    def download_best_version():
4088+        """Download the contents of the version that would be returned
4089+        by get_best_readable_version(). This is equivalent to calling
4090+        download_to_data() on the IReadable given by that method.
4091 
4092hunk ./src/allmydata/interfaces.py 868
4093-         consumer.registerProducer(p, streaming)
4094-          (if streaming == False)::
4095-           consumer does p.resumeProducing()
4096-            consumer.write(data)
4097-           consumer does p.resumeProducing()
4098-            consumer.write(data).. (repeat until all data is written)
4099-         consumer.unregisterProducer()
4100-         deferred.callback(consumer)
4101+        I return a Deferred that fires with a byte string when the file
4102+        has been fully downloaded. To support streaming download, use
4103+        the 'read' method of IReadable. If no version is recoverable,
4104+        the Deferred will errback with an UnrecoverableFileError.
4105+        """
4106 
4107hunk ./src/allmydata/interfaces.py 874
4108-        If a download error occurs, or an exception is raised by
4109-        consumer.registerProducer() or consumer.write(), I will call
4110-        consumer.unregisterProducer() and then deliver the exception via
4111-        deferred.errback(). To cancel the download, the consumer should call
4112-        p.stopProducing(), which will result in an exception being delivered
4113-        via deferred.errback().
4114+    def get_size_of_best_version():
4115+        """Find the size of the version that would be returned by
4116+        get_best_readable_version().
4117 
4118hunk ./src/allmydata/interfaces.py 878
4119-        See src/allmydata/util/consumer.py for an example of a simple
4120-        download-to-memory consumer.
4121+        I return a Deferred that fires with an integer. If no version
4122+        is recoverable, the Deferred will errback with an
4123+        UnrecoverableFileError.
4124         """
4125 
4126hunk ./src/allmydata/interfaces.py 883
4127+
4128+class IImmutableFileNode(IFileNode, IReadable):
4129+    """I am a node representing an immutable file. Immutable files have
4130+    only one version."""
4131+
4132+
4133 class IMutableFileNode(IFileNode):
4134     """I provide access to a 'mutable file', which retains its identity
4135     regardless of what contents are put in it.
4136hunk ./src/allmydata/interfaces.py 948
4137     only be retrieved and updated all-at-once, as a single big string. Future
4138     versions of our mutable files will remove this restriction.
4139     """
4140-
4141-    def download_best_version():
4142-        """Download the 'best' available version of the file, meaning one of
4143-        the recoverable versions with the highest sequence number. If no
4144+    def get_best_mutable_version():
4145+        """Return a Deferred that fires with an IMutableFileVersion for
4146+        the 'best' available version of the file. The best version is
4147+        the recoverable version with the highest sequence number. If no
4148         uncoordinated writes have occurred, and if enough shares are
4149hunk ./src/allmydata/interfaces.py 953
4150-        available, then this will be the most recent version that has been
4151-        uploaded.
4152+        available, then this will be the most recent version that has
4153+        been uploaded.
4154 
4155hunk ./src/allmydata/interfaces.py 956
4156-        I update an internal servermap with MODE_READ, determine which
4157-        version of the file is indicated by
4158-        servermap.best_recoverable_version(), and return a Deferred that
4159-        fires with its contents. If no version is recoverable, the Deferred
4160-        will errback with UnrecoverableFileError.
4161-        """
4162-
4163-    def get_size_of_best_version():
4164-        """Find the size of the version that would be downloaded with
4165-        download_best_version(), without actually downloading the whole file.
4166-
4167-        I return a Deferred that fires with an integer.
4168+        If no version is recoverable, the Deferred will errback with an
4169+        UnrecoverableFileError.
4170         """
4171 
4172     def overwrite(new_contents):
4173hunk ./src/allmydata/interfaces.py 996
4174         errback.
4175         """
4176 
4177-
4178     def get_servermap(mode):
4179         """Return a Deferred that fires with an IMutableFileServerMap
4180         instance, updated using the given mode.
4181hunk ./src/allmydata/interfaces.py 1049
4182         writer-visible data using this writekey.
4183         """
4184 
4185+    def get_version():
4186+        """Returns the mutable file protocol version."""
4187+
4188 class NotEnoughSharesError(Exception):
4189     """Download was unable to get enough shares"""
4190 
4191hunk ./src/allmydata/interfaces.py 1888
4192         """The upload is finished, and whatever filehandle was in use may be
4193         closed."""
4194 
4195+
4196+class IMutableUploadable(Interface):
4197+    """
4198+    I represent content that is due to be uploaded to a mutable filecap.
4199+    """
4200+    # This is somewhat simpler than the IUploadable interface above
4201+    # because mutable files do not need to be concerned with possibly
4202+    # generating a CHK, nor with per-file keys. It is a subset of the
4203+    # methods in IUploadable, though, so we could just as well implement
4204+    # the mutable uploadables as IUploadables that don't happen to use
4205+    # those methods (with the understanding that the unused methods will
4206+    # never be called on such objects)
4207+    def get_size():
4208+        """
4209+        Returns a Deferred that fires with the size of the content held
4210+        by the uploadable.
4211+        """
4212+
4213+    def read(length):
4214+        """
4215+        Returns a list of strings which, when concatenated, are the next
4216+        length bytes of the file, or fewer if there are fewer bytes
4217+        between the current location and the end of the file.
4218+        """
4219+
4220+    def close():
4221+        """
4222+        The process that used the Uploadable is finished using it, so
4223+        the uploadable may be closed.
4224+        """
4225+
4226 class IUploadResults(Interface):
4227     """I am returned by upload() methods. I contain a number of public
4228     attributes which can be read to determine the results of the upload. Some
4229}
4230[nodemaker: teach nodemaker how to create MDMF mutable files
4231Kevan Carstensen <kevan@isnotajoke.com>**20110802014258
4232 Ignore-this: 2bf1fd4f8c1d1ad0e855c678347b76c2
4233] {
4234hunk ./src/allmydata/nodemaker.py 3
4235 import weakref
4236 from zope.interface import implements
4237-from allmydata.interfaces import INodeMaker
4238+from allmydata.util.assertutil import precondition
4239+from allmydata.interfaces import INodeMaker, SDMF_VERSION
4240 from allmydata.immutable.literal import LiteralFileNode
4241 from allmydata.immutable.filenode import ImmutableFileNode, CiphertextFileNode
4242 from allmydata.immutable.upload import Data
4243hunk ./src/allmydata/nodemaker.py 9
4244 from allmydata.mutable.filenode import MutableFileNode
4245+from allmydata.mutable.publish import MutableData
4246 from allmydata.dirnode import DirectoryNode, pack_children
4247 from allmydata.unknown import UnknownNode
4248 from allmydata import uri
4249hunk ./src/allmydata/nodemaker.py 92
4250             return self._create_dirnode(filenode)
4251         return None
4252 
4253-    def create_mutable_file(self, contents=None, keysize=None):
4254+    def create_mutable_file(self, contents=None, keysize=None,
4255+                            version=SDMF_VERSION):
4256         n = MutableFileNode(self.storage_broker, self.secret_holder,
4257                             self.default_encoding_parameters, self.history)
4258         d = self.key_generator.generate(keysize)
4259hunk ./src/allmydata/nodemaker.py 97
4260-        d.addCallback(n.create_with_keys, contents)
4261+        d.addCallback(n.create_with_keys, contents, version=version)
4262         d.addCallback(lambda res: n)
4263         return d
4264 
4265hunk ./src/allmydata/nodemaker.py 101
4266-    def create_new_mutable_directory(self, initial_children={}):
4267+    def create_new_mutable_directory(self, initial_children={},
4268+                                     version=SDMF_VERSION):
4269+        # initial_children must have metadata (i.e. {} instead of None)
4270+        for (name, (node, metadata)) in initial_children.iteritems():
4271+            precondition(isinstance(metadata, dict),
4272+                         "create_new_mutable_directory requires metadata to be a dict, not None", metadata)
4273+            node.raise_error()
4274         d = self.create_mutable_file(lambda n:
4275hunk ./src/allmydata/nodemaker.py 109
4276-                                     pack_children(initial_children, n.get_writekey()))
4277+                                     MutableData(pack_children(initial_children,
4278+                                                    n.get_writekey())),
4279+                                     version=version)
4280         d.addCallback(self._create_dirnode)
4281         return d
4282 
4283}
4284[mutable/filenode: Modify mutable filenodes for use with MDMF
4285Kevan Carstensen <kevan@isnotajoke.com>**20110802014501
4286 Ignore-this: 3c230bb0ebe60a94c667b0ee0c3b28e0
4287 
4288 In particular:
4289     - Break MutableFileNode and MutableFileVersion into distinct classes.
4290     - Implement the interface modifications made for MDMF.
4291     - Be aware of MDMF caps.
4292     - Learn how to create and work with MDMF files.
4293] {
4294hunk ./src/allmydata/mutable/filenode.py 7
4295 from zope.interface import implements
4296 from twisted.internet import defer, reactor
4297 from foolscap.api import eventually
4298-from allmydata.interfaces import IMutableFileNode, \
4299-     ICheckable, ICheckResults, NotEnoughSharesError
4300-from allmydata.util import hashutil, log
4301+from allmydata.interfaces import IMutableFileNode, ICheckable, ICheckResults, \
4302+     NotEnoughSharesError, MDMF_VERSION, SDMF_VERSION, IMutableUploadable, \
4303+     IMutableFileVersion, IWritable
4304+from allmydata.util import hashutil, log, consumer, deferredutil, mathutil
4305 from allmydata.util.assertutil import precondition
4306hunk ./src/allmydata/mutable/filenode.py 12
4307-from allmydata.uri import WriteableSSKFileURI, ReadonlySSKFileURI
4308+from allmydata.uri import WriteableSSKFileURI, ReadonlySSKFileURI, \
4309+                          WritableMDMFFileURI, ReadonlyMDMFFileURI
4310 from allmydata.monitor import Monitor
4311 from pycryptopp.cipher.aes import AES
4312 
4313hunk ./src/allmydata/mutable/filenode.py 17
4314-from allmydata.mutable.publish import Publish
4315-from allmydata.mutable.common import MODE_READ, MODE_WRITE, UnrecoverableFileError, \
4316+from allmydata.mutable.publish import Publish, MutableData,\
4317+                                      TransformingUploadable
4318+from allmydata.mutable.common import MODE_READ, MODE_WRITE, MODE_CHECK, UnrecoverableFileError, \
4319      ResponseCache, UncoordinatedWriteError
4320 from allmydata.mutable.servermap import ServerMap, ServermapUpdater
4321 from allmydata.mutable.retrieve import Retrieve
4322hunk ./src/allmydata/mutable/filenode.py 70
4323         self._sharemap = {} # known shares, shnum-to-[nodeids]
4324         self._cache = ResponseCache()
4325         self._most_recent_size = None
4326+        # filled in after __init__ if we're being created for the first time;
4327+        # filled in by the servermap updater before publishing, otherwise.
4328+        # set to this default value in case neither of those things happen,
4329+        # or in case the servermap can't find any shares to tell us what
4330+        # to publish as.
4331+        self._protocol_version = None
4332 
4333         # all users of this MutableFileNode go through the serializer. This
4334         # takes advantage of the fact that Deferreds discard the callbacks
4335hunk ./src/allmydata/mutable/filenode.py 83
4336         # forever without consuming more and more memory.
4337         self._serializer = defer.succeed(None)
4338 
4339+        # Starting with MDMF, we can get these from caps if they're
4340+        # there. Leave them alone for now; they'll be filled in by my
4341+        # init_from_cap method if necessary.
4342+        self._downloader_hints = {}
4343+
4344     def __repr__(self):
4345         if hasattr(self, '_uri'):
4346             return "<%s %x %s %s>" % (self.__class__.__name__, id(self), self.is_readonly() and 'RO' or 'RW', self._uri.abbrev())
4347hunk ./src/allmydata/mutable/filenode.py 99
4348         # verification key, nor things like 'k' or 'N'. If and when someone
4349         # wants to get our contents, we'll pull from shares and fill those
4350         # in.
4351-        assert isinstance(filecap, (ReadonlySSKFileURI, WriteableSSKFileURI))
4352+        if isinstance(filecap, (WritableMDMFFileURI, ReadonlyMDMFFileURI)):
4353+            self._protocol_version = MDMF_VERSION
4354+        elif isinstance(filecap, (ReadonlySSKFileURI, WriteableSSKFileURI)):
4355+            self._protocol_version = SDMF_VERSION
4356+
4357         self._uri = filecap
4358         self._writekey = None
4359hunk ./src/allmydata/mutable/filenode.py 106
4360-        if isinstance(filecap, WriteableSSKFileURI):
4361+
4362+        if not filecap.is_readonly() and filecap.is_mutable():
4363             self._writekey = self._uri.writekey
4364         self._readkey = self._uri.readkey
4365         self._storage_index = self._uri.storage_index
4366hunk ./src/allmydata/mutable/filenode.py 120
4367         # if possible, otherwise by the first peer that Publish talks to.
4368         self._privkey = None
4369         self._encprivkey = None
4370+
4371+        # Starting with MDMF caps, we allowed arbitrary extensions in
4372+        # caps. If we were initialized with a cap that had extensions,
4373+        # we want to remember them so we can tell MutableFileVersions
4374+        # about them.
4375+        extensions = self._uri.get_extension_params()
4376+        if extensions:
4377+            extensions = map(int, extensions)
4378+            suspected_k, suspected_segsize = extensions
4379+            self._downloader_hints['k'] = suspected_k
4380+            self._downloader_hints['segsize'] = suspected_segsize
4381+
4382         return self
4383 
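The cap-extension handling in init_from_cap above (two integer extension fields read as k and segsize hints for the downloader) reduces to a small pure function. `parse_downloader_hints` is a hypothetical stand-in for the inline logic; the two-field (k, segsize) layout is taken from the code above.

```python
def parse_downloader_hints(extension_params):
    """Turn an MDMF cap's extension fields into downloader hints.
    An empty extension list yields no hints."""
    hints = {}
    if extension_params:
        suspected_k, suspected_segsize = [int(e) for e in extension_params]
        hints["k"] = suspected_k
        hints["segsize"] = suspected_segsize
    return hints
```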
4384hunk ./src/allmydata/mutable/filenode.py 134
4385-    def create_with_keys(self, (pubkey, privkey), contents):
4386+    def create_with_keys(self, (pubkey, privkey), contents,
4387+                         version=SDMF_VERSION):
4388         """Call this to create a brand-new mutable file. It will create the
4389         shares, find homes for them, and upload the initial contents (created
4390         with the same rules as IClient.create_mutable_file() ). Returns a
4391hunk ./src/allmydata/mutable/filenode.py 148
4392         self._writekey = hashutil.ssk_writekey_hash(privkey_s)
4393         self._encprivkey = self._encrypt_privkey(self._writekey, privkey_s)
4394         self._fingerprint = hashutil.ssk_pubkey_fingerprint_hash(pubkey_s)
4395-        self._uri = WriteableSSKFileURI(self._writekey, self._fingerprint)
4396+        if version == MDMF_VERSION:
4397+            self._uri = WritableMDMFFileURI(self._writekey, self._fingerprint)
4398+            self._protocol_version = version
4399+        elif version == SDMF_VERSION:
4400+            self._uri = WriteableSSKFileURI(self._writekey, self._fingerprint)
4401+            self._protocol_version = version
4402         self._readkey = self._uri.readkey
4403         self._storage_index = self._uri.storage_index
4404         initial_contents = self._get_initial_contents(contents)
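The version dispatch in create_with_keys above (MDMF files get an MDMF write cap, SDMF files keep the traditional SSK write cap) can be sketched as follows. The constant values 0 and 1 are assumed stand-ins for the real SDMF_VERSION/MDMF_VERSION in interfaces.py, and strings stand in for the URI classes.

```python
SDMF_VERSION, MDMF_VERSION = 0, 1  # assumed values; see interfaces.py

def cap_class_for_version(version):
    """Mirror the dispatch in create_with_keys: choose the write-cap
    class by mutable file protocol version."""
    if version == MDMF_VERSION:
        return "WritableMDMFFileURI"
    elif version == SDMF_VERSION:
        return "WriteableSSKFileURI"
    raise ValueError("unknown mutable file protocol version: %r" % (version,))
```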
4405hunk ./src/allmydata/mutable/filenode.py 160
4406         return self._upload(initial_contents, None)
4407 
4408     def _get_initial_contents(self, contents):
4409+        if contents is None:
4410+            return MutableData("")
4411+
4412         if isinstance(contents, str):
4413hunk ./src/allmydata/mutable/filenode.py 164
4414+            return MutableData(contents)
4415+
4416+        if IMutableUploadable.providedBy(contents):
4417             return contents
4418hunk ./src/allmydata/mutable/filenode.py 168
4419-        if contents is None:
4420-            return ""
4421+
4422         assert callable(contents), "%s should be callable, not %s" % \
4423                (contents, type(contents))
4424         return contents(self)
4425hunk ./src/allmydata/mutable/filenode.py 238
4426 
4427     def get_size(self):
4428         return self._most_recent_size
4429+
4430     def get_current_size(self):
4431         d = self.get_size_of_best_version()
4432         d.addCallback(self._stash_size)
4433hunk ./src/allmydata/mutable/filenode.py 243
4434         return d
4435+
4436     def _stash_size(self, size):
4437         self._most_recent_size = size
4438         return size
4439hunk ./src/allmydata/mutable/filenode.py 302
4440             return cmp(self.__class__, them.__class__)
4441         return cmp(self._uri, them._uri)
4442 
4443-    def _do_serialized(self, cb, *args, **kwargs):
4444-        # note: to avoid deadlock, this callable is *not* allowed to invoke
4445-        # other serialized methods within this (or any other)
4446-        # MutableFileNode. The callable should be a bound method of this same
4447-        # MFN instance.
4448-        d = defer.Deferred()
4449-        self._serializer.addCallback(lambda ignore: cb(*args, **kwargs))
4450-        # we need to put off d.callback until this Deferred is finished being
4451-        # processed. Otherwise the caller's subsequent activities (like,
4452-        # doing other things with this node) can cause reentrancy problems in
4453-        # the Deferred code itself
4454-        self._serializer.addBoth(lambda res: eventually(d.callback, res))
4455-        # add a log.err just in case something really weird happens, because
4456-        # self._serializer stays around forever, therefore we won't see the
4457-        # usual Unhandled Error in Deferred that would give us a hint.
4458-        self._serializer.addErrback(log.err)
4459-        return d
4460 
4461     #################################
4462     # ICheckable
4463hunk ./src/allmydata/mutable/filenode.py 327
4464 
4465 
4466     #################################
4467-    # IMutableFileNode
4468+    # IFileNode
4469+
4470+    def get_best_readable_version(self):
4471+        """
4472+        I return a Deferred that fires with a MutableFileVersion
4473+        representing the best readable version of the file that I
4474+        represent
4475+        """
4476+        return self.get_readable_version()
4477+
4478+
4479+    def get_readable_version(self, servermap=None, version=None):
4480+        """
4481+        I return a Deferred that fires with a MutableFileVersion for my
4482+        version argument, if there is a recoverable file of that version
4483+        on the grid. If there is no recoverable version, I fire with an
4484+        UnrecoverableFileError.
4485+
4486+        If a servermap is provided, I look in there for the requested
4487+        version. If no servermap is provided, I create and update a new
4488+        one.
4489+
4490+        If no version is provided, then I return a MutableFileVersion
4491+        representing the best recoverable version of the file.
4492+        """
4493+        d = self._get_version_from_servermap(MODE_READ, servermap, version)
4494+        def _build_version((servermap, their_version)):
4495+            assert their_version in servermap.recoverable_versions()
4496+            assert their_version in servermap.make_versionmap()
4497+
4498+            mfv = MutableFileVersion(self,
4499+                                     servermap,
4500+                                     their_version,
4501+                                     self._storage_index,
4502+                                     self._storage_broker,
4503+                                     self._readkey,
4504+                                     history=self._history)
4505+            assert mfv.is_readonly()
4506+            mfv.set_downloader_hints(self._downloader_hints)
4507+            # our caller can use this to download the contents of the
4508+            # mutable file.
4509+            return mfv
4510+        return d.addCallback(_build_version)
4511+
4512+
4513+    def _get_version_from_servermap(self,
4514+                                    mode,
4515+                                    servermap=None,
4516+                                    version=None):
4517+        """
4518+        I return a Deferred that fires with (servermap, version).
4519+
4520+        This function performs validation and a servermap update. If it
4521+        returns (servermap, version), the caller can assume that:
4522+            - servermap was last updated in mode.
4523+            - version is recoverable, and corresponds to the servermap.
4524+
4525+        If version and servermap are provided to me, I will validate
4526+        that version exists in the servermap, and that the servermap was
4527+        updated correctly.
4528+
4529+        If version is not provided, but servermap is, I will validate
4530+        the servermap and return the best recoverable version that I can
4531+        find in the servermap.
4532+
4533+        If the version is provided but the servermap isn't, I will
4534+        obtain a servermap that has been updated in the correct mode and
4535+        validate that version is found and recoverable.
4536+
4537+        If neither servermap nor version are provided, I will obtain a
4538+        servermap updated in the correct mode, and return the best
4539+        recoverable version that I can find in there.
4540+        """
4541+        # XXX: wording ^^^^
4542+        if servermap and servermap.last_update_mode == mode:
4543+            d = defer.succeed(servermap)
4544+        else:
4545+            d = self._get_servermap(mode)
4546+
4547+        def _get_version(servermap, v):
4548+            if v and v not in servermap.recoverable_versions():
4549+                v = None
4550+            elif not v:
4551+                v = servermap.best_recoverable_version()
4552+            if not v:
4553+                raise UnrecoverableFileError("no recoverable versions")
4554+
4555+            return (servermap, v)
4556+        return d.addCallback(_get_version, version)
4557+
4558 
4559     def download_best_version(self):
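The version-selection step inside _get_version_from_servermap above reduces to a pure function. Note the subtlety it preserves: a requested version that is not recoverable is an error, not a fallback to the best version. Here the servermap is reduced to a set of recoverable versions, and ValueError stands in for UnrecoverableFileError.

```python
def choose_version(recoverable, best, requested=None):
    """Mirror _get_version: honor an explicit version only if it is
    recoverable; with no request, take the best recoverable version;
    raise if nothing usable remains."""
    if requested is not None and requested not in recoverable:
        v = None          # requested but unrecoverable: error below
    elif requested is None:
        v = best          # no request: best recoverable version
    else:
        v = requested
    if v is None:
        raise ValueError("no recoverable versions")
    return v
```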
4560hunk ./src/allmydata/mutable/filenode.py 419
4561+        """
4562+        I return a Deferred that fires with the contents of the best
4563+        version of this mutable file.
4564+        """
4565         return self._do_serialized(self._download_best_version)
4566hunk ./src/allmydata/mutable/filenode.py 424
4567+
4568+
4569     def _download_best_version(self):
4570hunk ./src/allmydata/mutable/filenode.py 427
4571-        servermap = ServerMap()
4572-        d = self._try_once_to_download_best_version(servermap, MODE_READ)
4573-        def _maybe_retry(f):
4574-            f.trap(NotEnoughSharesError)
4575-            # the download is worth retrying once. Make sure to use the
4576-            # old servermap, since it is what remembers the bad shares,
4577-            # but use MODE_WRITE to make it look for even more shares.
4578-            # TODO: consider allowing this to retry multiple times.. this
4579-            # approach will let us tolerate about 8 bad shares, I think.
4580-            return self._try_once_to_download_best_version(servermap,
4581-                                                           MODE_WRITE)
4582+        """
4583+        I am the serialized sibling of download_best_version.
4584+        """
4585+        d = self.get_best_readable_version()
4586+        d.addCallback(self._record_size)
4587+        d.addCallback(lambda version: version.download_to_data())
4588+
4589+        # It is possible that the download will fail because there
4590+        # aren't enough shares to be had. If so, we will try again after
4591+        # updating the servermap in MODE_WRITE, which may find more
4592+        # shares than updating in MODE_READ, as we just did. We can do
4593+        # this by getting the best mutable version and downloading from
4594+        # that -- the best mutable version will be a MutableFileVersion
4595+        # with a servermap that was last updated in MODE_WRITE, as we
4596+        # want. If this fails, then we give up.
4597+        def _maybe_retry(failure):
4598+            failure.trap(NotEnoughSharesError)
4599+
4600+            d = self.get_best_mutable_version()
4601+            d.addCallback(self._record_size)
4602+            d.addCallback(lambda version: version.download_to_data())
4603+            return d
4604+
4605         d.addErrback(_maybe_retry)
4606         return d
4607hunk ./src/allmydata/mutable/filenode.py 452
4608-    def _try_once_to_download_best_version(self, servermap, mode):
4609-        d = self._update_servermap(servermap, mode)
4610-        d.addCallback(self._once_updated_download_best_version, servermap)
4611-        return d
4612-    def _once_updated_download_best_version(self, ignored, servermap):
4613-        goal = servermap.best_recoverable_version()
4614-        if not goal:
4615-            raise UnrecoverableFileError("no recoverable versions")
4616-        return self._try_once_to_download_version(servermap, goal)
4617+
4618+
4619+    def _record_size(self, mfv):
4620+        """
4621+        I record the size of a mutable file version.
4622+        """
4623+        self._most_recent_size = mfv.get_size()
4624+        return mfv
4625+
4626 
4627     def get_size_of_best_version(self):
4628hunk ./src/allmydata/mutable/filenode.py 463
4629-        d = self.get_servermap(MODE_READ)
4630-        def _got_servermap(smap):
4631-            ver = smap.best_recoverable_version()
4632-            if not ver:
4633-                raise UnrecoverableFileError("no recoverable version")
4634-            return smap.size_of_version(ver)
4635-        d.addCallback(_got_servermap)
4636-        return d
4637+        """
4638+        I return the size of the best version of this mutable file.
4639+
4640+        This is equivalent to calling get_size() on the result of
4641+        get_best_readable_version().
4642+        """
4643+        d = self.get_best_readable_version()
4644+        return d.addCallback(lambda mfv: mfv.get_size())
4645+
4646+
4647+    #################################
4648+    # IMutableFileNode
4649+
4650+    def get_best_mutable_version(self, servermap=None):
4651+        """
4652+        I return a Deferred that fires with a MutableFileVersion
4653+        representing the best readable version of the file that I
4654+        represent. I am like get_best_readable_version, except that I
4655+        will try to make a writable version if I can.
4656+        """
4657+        return self.get_mutable_version(servermap=servermap)
4658+
4659+
4660+    def get_mutable_version(self, servermap=None, version=None):
4661+        """
4662+        I return a version of this mutable file. I return a Deferred
4663+        that fires with a MutableFileVersion.
4664+
4665+        If version is provided, the Deferred will fire with a
4666+        MutableFileVersion initialized with that version. Otherwise, it
4667+        will fire with the best version that I can recover.
4668+
4669+        If servermap is provided, I will use that to find versions
4670+        instead of performing my own servermap update.
4671+        """
4672+        if self.is_readonly():
4673+            return self.get_readable_version(servermap=servermap,
4674+                                             version=version)
4675+
4676+        # get_mutable_version => write intent, so we require that the
4677+        # servermap is updated in MODE_WRITE
4678+        d = self._get_version_from_servermap(MODE_WRITE, servermap, version)
4679+        def _build_version((servermap, smap_version)):
4680+            # these should have been set by the servermap update.
4681+            assert self._secret_holder
4682+            assert self._writekey
4683+
4684+            mfv = MutableFileVersion(self,
4685+                                     servermap,
4686+                                     smap_version,
4687+                                     self._storage_index,
4688+                                     self._storage_broker,
4689+                                     self._readkey,
4690+                                     self._writekey,
4691+                                     self._secret_holder,
4692+                                     history=self._history)
4693+            assert not mfv.is_readonly()
4694+            mfv.set_downloader_hints(self._downloader_hints)
4695+            return mfv
4696+
4697+        return d.addCallback(_build_version)
4698 
4699hunk ./src/allmydata/mutable/filenode.py 525
4700+
4701+    # XXX: I'm uncomfortable with the difference between upload and
4702+    #      overwrite, which, FWICT, is basically that you don't have to
4703+    #      do a servermap update before you overwrite. We split them up
4704+    #      that way anyway, so I guess there's no real difficulty in
4705+    #      offering both ways to callers, but it also makes the
4706+    #      public-facing API cluttery, and makes it hard to discern the
4707+    #      right way of doing things.
4708+
4709+    # In general, we leave it to callers to ensure that they aren't
4710+    # going to cause UncoordinatedWriteErrors when working with
4711+    # MutableFileVersions. We know that the next three operations
4712+    # (upload, overwrite, and modify) will all operate on the same
4713+    # version, so we say that only one of them can be going on at once,
4714+    # and serialize them to ensure that that actually happens; as the
4715+    # caller in this situation, that is our job.
4716     def overwrite(self, new_contents):
4717hunk ./src/allmydata/mutable/filenode.py 542
4718+        """
4719+        I overwrite the contents of the best recoverable version of this
4720+        mutable file with new_contents. This is equivalent to calling
4721+        overwrite on the result of get_best_mutable_version with
4722+        new_contents as an argument. I return a Deferred that eventually
4723+        fires with the results of my replacement process.
4724+        """
4725+        # TODO: Update downloader hints.
4726         return self._do_serialized(self._overwrite, new_contents)
4727hunk ./src/allmydata/mutable/filenode.py 551
4728+
4729+
4730     def _overwrite(self, new_contents):
4731hunk ./src/allmydata/mutable/filenode.py 554
4732+        """
4733+        I am the serialized sibling of overwrite.
4734+        """
4735+        d = self.get_best_mutable_version()
4736+        d.addCallback(lambda mfv: mfv.overwrite(new_contents))
4737+        d.addCallback(self._did_upload, new_contents.get_size())
4738+        return d
4739+
4740+
4741+    def upload(self, new_contents, servermap):
4742+        """
4743+        I overwrite the contents of the best recoverable version of this
4744+        mutable file with new_contents, using servermap instead of
4745+        creating/updating our own servermap. I return a Deferred that
4746+        fires with the results of my upload.
4747+        """
4748+        # TODO: Update downloader hints
4749+        return self._do_serialized(self._upload, new_contents, servermap)
4750+
4751+
4752+    def modify(self, modifier, backoffer=None):
4753+        """
4754+        I modify the contents of the best recoverable version of this
4755+        mutable file with the modifier. This is equivalent to calling
4756+        modify on the result of get_best_mutable_version. I return a
4757+        Deferred that eventually fires with an UploadResults instance
4758+        describing this process.
4759+        """
4760+        # TODO: Update downloader hints.
4761+        return self._do_serialized(self._modify, modifier, backoffer)
4762+
4763+
4764+    def _modify(self, modifier, backoffer):
4765+        """
4766+        I am the serialized sibling of modify.
4767+        """
4768+        d = self.get_best_mutable_version()
4769+        d.addCallback(lambda mfv: mfv.modify(modifier, backoffer))
4770+        return d
4771+
4772+
4773+    def download_version(self, servermap, version, fetch_privkey=False):
4774+        """
4775+        Download the specified version of this mutable file. I return a
4776+        Deferred that fires with the contents of the specified version
4777+        as a bytestring, or errbacks if the file is not recoverable.
4778+        """
4779+        d = self.get_readable_version(servermap, version)
4780+        return d.addCallback(lambda mfv: mfv.download_to_data(fetch_privkey))
4781+
4782+
4783+    def get_servermap(self, mode):
4784+        """
4785+        I return a servermap that has been updated in mode.
4786+
4787+        mode should be one of MODE_READ, MODE_WRITE, MODE_CHECK or
4788+        MODE_ANYTHING. See servermap.py for more on what these mean.
4789+        """
4790+        return self._do_serialized(self._get_servermap, mode)
4791+
4792+
4793+    def _get_servermap(self, mode):
4794+        """
4795+        I am a serialized twin to get_servermap.
4796+        """
4797         servermap = ServerMap()
4798hunk ./src/allmydata/mutable/filenode.py 620
4799-        d = self._update_servermap(servermap, mode=MODE_WRITE)
4800-        d.addCallback(lambda ignored: self._upload(new_contents, servermap))
4801+        d = self._update_servermap(servermap, mode)
4802+        # The servermap will tell us about the most recent size of the
4803+        # file, so we may as well record it so that callers can get
4804+        # more data about us.
4805+        if not self._most_recent_size:
4806+            d.addCallback(self._get_size_from_servermap)
4807+        return d
4808+
4809+
4810+    def _get_size_from_servermap(self, servermap):
4811+        """
4812+        I extract the size of the best version of this file and record
4813+        it in self._most_recent_size. I return the servermap that I was
4814+        given.
4815+        """
4816+        if servermap.recoverable_versions():
4817+            v = servermap.best_recoverable_version()
4818+            size = v[4] # verinfo[4] == size
4819+            self._most_recent_size = size
4820+        return servermap
4821+
4822+
4823+    def _update_servermap(self, servermap, mode):
4824+        u = ServermapUpdater(self, self._storage_broker, Monitor(), servermap,
4825+                             mode)
4826+        if self._history:
4827+            self._history.notify_mapupdate(u.get_status())
4828+        return u.update()
4829+
4830+
4831+    #def set_version(self, version):
4832+        # I can be set in two ways:
4833+        #  1. When the node is created.
4834+        #  2. (for an existing share) when the Servermap is updated
4835+        #     before I am read.
4836+    #    assert version in (MDMF_VERSION, SDMF_VERSION)
4837+    #    self._protocol_version = version
4838+
4839+
4840+    def get_version(self):
4841+        return self._protocol_version
4842+
4843+
4844+    def _do_serialized(self, cb, *args, **kwargs):
4845+        # note: to avoid deadlock, this callable is *not* allowed to invoke
4846+        # other serialized methods within this (or any other)
4847+        # MutableFileNode. The callable should be a bound method of this same
4848+        # MFN instance.
4849+        d = defer.Deferred()
4850+        self._serializer.addCallback(lambda ignore: cb(*args, **kwargs))
4851+        # we need to put off d.callback until this Deferred is finished being
4852+        # processed. Otherwise the caller's subsequent activities (like,
4853+        # doing other things with this node) can cause reentrancy problems in
4854+        # the Deferred code itself
4855+        self._serializer.addBoth(lambda res: eventually(d.callback, res))
4856+        # add a log.err just in case something really weird happens, because
4857+        # self._serializer stays around forever, therefore we won't see the
4858+        # usual Unhandled Error in Deferred that would give us a hint.
4859+        self._serializer.addErrback(log.err)
4860+        return d
4861+
4862+
4863+    def _upload(self, new_contents, servermap):
4864+        """
4865+        A MutableFileNode still has to have some way of getting
4866+        published initially, which is what I am here for. After that,
4867+        all publishing, updating, modifying and so on happens through
4868+        MutableFileVersions.
4869+        """
4870+        assert self._pubkey, "update_servermap must be called before publish"
4871+
4872+        # Define IPublishInvoker with a set_downloader_hints method?
4873+        # Then have the publisher call that method when it's done publishing?
4874+        p = Publish(self, self._storage_broker, servermap)
4875+        if self._history:
4876+            self._history.notify_publish(p.get_status(),
4877+                                         new_contents.get_size())
4878+        d = p.publish(new_contents)
4879+        d.addCallback(self._did_upload, new_contents.get_size())
4880         return d
4881 
4882 
4883hunk ./src/allmydata/mutable/filenode.py 702
4884+    def set_downloader_hints(self, hints):
4885+        self._downloader_hints = hints
4886+        extensions = hints.values()
4887+        self._uri.set_extension_params(extensions)
4888+
4889+
4890+    def _did_upload(self, res, size):
4891+        self._most_recent_size = size
4892+        return res
4893+
4894+
4895+class MutableFileVersion:
4896+    """
4897+    I represent a specific version (most likely the best version) of a
4898+    mutable file.
4899+
4900+    Since I implement IReadable, instances which hold a
4901+    reference to an instance of me are guaranteed the ability (absent
4902+    connection difficulties or unrecoverable versions) to read the file
4903+    that I represent. Depending on whether I was initialized with a
4904+    write capability or not, I may also provide callers the ability to
4905+    overwrite or modify the contents of the mutable file that I
4906+    reference.
4907+    """
4908+    implements(IMutableFileVersion, IWritable)
4909+
4910+    def __init__(self,
4911+                 node,
4912+                 servermap,
4913+                 version,
4914+                 storage_index,
4915+                 storage_broker,
4916+                 readcap,
4917+                 writekey=None,
4918+                 write_secrets=None,
4919+                 history=None):
4920+
4921+        self._node = node
4922+        self._servermap = servermap
4923+        self._version = version
4924+        self._storage_index = storage_index
4925+        self._write_secrets = write_secrets
4926+        self._history = history
4927+        self._storage_broker = storage_broker
4928+
4929+        #assert isinstance(readcap, IURI)
4930+        self._readcap = readcap
4931+
4932+        self._writekey = writekey
4933+        self._serializer = defer.succeed(None)
4934+
4935+
4936+    def get_sequence_number(self):
4937+        """
4938+        Get the sequence number of the mutable version that I represent.
4939+        """
4940+        return self._version[0] # verinfo[0] == the sequence number
4941+
4942+
4943+    # TODO: Terminology?
4944+    def get_writekey(self):
4945+        """
4946+        I return a writekey or None if I don't have a writekey.
4947+        """
4948+        return self._writekey
4949+
4950+
4951+    def set_downloader_hints(self, hints):
4952+        """
4953+        I set the downloader hints.
4954+        """
4955+        assert isinstance(hints, dict)
4956+
4957+        self._downloader_hints = hints
4958+
4959+
4960+    def get_downloader_hints(self):
4961+        """
4962+        I return the downloader hints.
4963+        """
4964+        return self._downloader_hints
4965+
4966+
4967+    def overwrite(self, new_contents):
4968+        """
4969+        I overwrite the contents of this mutable file version with the
4970+        data in new_contents.
4971+        """
4972+        assert not self.is_readonly()
4973+
4974+        return self._do_serialized(self._overwrite, new_contents)
4975+
4976+
4977+    def _overwrite(self, new_contents):
4978+        assert IMutableUploadable.providedBy(new_contents)
4979+        assert self._servermap.last_update_mode == MODE_WRITE
4980+
4981+        return self._upload(new_contents)
4982+
4983+
4984     def modify(self, modifier, backoffer=None):
4985         """I use a modifier callback to apply a change to the mutable file.
4986         I implement the following pseudocode::
4987hunk ./src/allmydata/mutable/filenode.py 842
4988         backoffer should not invoke any methods on this MutableFileNode
4989         instance, and it needs to be highly conscious of deadlock issues.
4990         """
4991+        assert not self.is_readonly()
4992+
4993         return self._do_serialized(self._modify, modifier, backoffer)
4994hunk ./src/allmydata/mutable/filenode.py 845
4995+
4996+
4997     def _modify(self, modifier, backoffer):
4998hunk ./src/allmydata/mutable/filenode.py 848
4999-        servermap = ServerMap()
5000         if backoffer is None:
5001             backoffer = BackoffAgent().delay
5002hunk ./src/allmydata/mutable/filenode.py 850
5003-        return self._modify_and_retry(servermap, modifier, backoffer, True)
5004-    def _modify_and_retry(self, servermap, modifier, backoffer, first_time):
5005-        d = self._modify_once(servermap, modifier, first_time)
5006+        return self._modify_and_retry(modifier, backoffer, True)
5007+
5008+
5009+    def _modify_and_retry(self, modifier, backoffer, first_time):
5010+        """
5011+        I try to apply modifier to the contents of this version of the
5012+        mutable file. If I succeed, I return an UploadResults instance
5013+        describing my success. If I fail, I try again after waiting for
5014+        a little bit.
5015+        """
5016+        log.msg("doing modify")
5017+        if first_time:
5018+            d = self._update_servermap()
5019+        else:
5020+            # We ran into trouble; do MODE_CHECK so we're a little more
5021+            # careful on subsequent tries.
5022+            d = self._update_servermap(mode=MODE_CHECK)
5023+
5024+        d.addCallback(lambda ignored:
5025+            self._modify_once(modifier, first_time))
5026         def _retry(f):
5027             f.trap(UncoordinatedWriteError)
5028hunk ./src/allmydata/mutable/filenode.py 872
5029+            # Uh oh, it broke. We're allowed to trust the servermap for our
5030+            # first try, but after that we need to update it. It's
5031+            # possible that we've failed due to a race with another
5032+            # uploader, and if the race is to converge correctly, we
5033+            # need to know about that upload.
5034             d2 = defer.maybeDeferred(backoffer, self, f)
5035             d2.addCallback(lambda ignored:
5036hunk ./src/allmydata/mutable/filenode.py 879
5037-                           self._modify_and_retry(servermap, modifier,
5038+                           self._modify_and_retry(modifier,
5039                                                   backoffer, False))
5040             return d2
5041         d.addErrback(_retry)
5042hunk ./src/allmydata/mutable/filenode.py 884
5043         return d
5044-    def _modify_once(self, servermap, modifier, first_time):
5045-        d = self._update_servermap(servermap, MODE_WRITE)
5046-        d.addCallback(self._once_updated_download_best_version, servermap)
5047+
5048+
5049+    def _modify_once(self, modifier, first_time):
5050+        """
5051+        I attempt to apply a modifier to the contents of the mutable
5052+        file.
5053+        """
5054+        assert self._servermap.last_update_mode != MODE_READ
5055+
5056+        # download_to_data is serialized, so we have to call this to
5057+        # avoid deadlock.
5058+        d = self._try_to_download_data()
5059         def _apply(old_contents):
5060hunk ./src/allmydata/mutable/filenode.py 897
5061-            new_contents = modifier(old_contents, servermap, first_time)
5062+            new_contents = modifier(old_contents, self._servermap, first_time)
5063+            precondition((isinstance(new_contents, str) or
5064+                          new_contents is None),
5065+                         "Modifier function must return a string "
5066+                         "or None")
5067+
5068             if new_contents is None or new_contents == old_contents:
5069hunk ./src/allmydata/mutable/filenode.py 904
5070+                log.msg("no changes")
5071                 # no changes need to be made
5072                 if first_time:
5073                     return
5074hunk ./src/allmydata/mutable/filenode.py 912
5075                 # recovery when it observes UCWE, we need to do a second
5076                 # publish. See #551 for details. We'll basically loop until
5077                 # we managed an uncontested publish.
5078-                new_contents = old_contents
5079-            precondition(isinstance(new_contents, str),
5080-                         "Modifier function must return a string or None")
5081-            return self._upload(new_contents, servermap)
5082+                old_uploadable = MutableData(old_contents)
5083+                new_contents = old_uploadable
5084+            else:
5085+                new_contents = MutableData(new_contents)
5086+
5087+            return self._upload(new_contents)
5088         d.addCallback(_apply)
5089         return d
5090 
5091hunk ./src/allmydata/mutable/filenode.py 921
5092-    def get_servermap(self, mode):
5093-        return self._do_serialized(self._get_servermap, mode)
5094-    def _get_servermap(self, mode):
5095-        servermap = ServerMap()
5096-        return self._update_servermap(servermap, mode)
5097-    def _update_servermap(self, servermap, mode):
5098-        u = ServermapUpdater(self, self._storage_broker, Monitor(), servermap,
5099-                             mode)
5100-        if self._history:
5101-            self._history.notify_mapupdate(u.get_status())
5102-        return u.update()
5103 
5104hunk ./src/allmydata/mutable/filenode.py 922
5105-    def download_version(self, servermap, version, fetch_privkey=False):
5106-        return self._do_serialized(self._try_once_to_download_version,
5107-                                   servermap, version, fetch_privkey)
5108-    def _try_once_to_download_version(self, servermap, version,
5109-                                      fetch_privkey=False):
5110-        r = Retrieve(self, servermap, version, fetch_privkey)
5111+    def is_readonly(self):
5112+        """
5113+        I return True if this MutableFileVersion provides no write
5114+        access to the file that it encapsulates, and False if it
5115+        provides the ability to modify the file.
5116+        """
5117+        return self._writekey is None
5118+
5119+
5120+    def is_mutable(self):
5121+        """
5122+        I return True, since mutable files are always mutable by
5123+        somebody.
5124+        """
5125+        return True
5126+
5127+
5128+    def get_storage_index(self):
5129+        """
5130+        I return the storage index of the reference that I encapsulate.
5131+        """
5132+        return self._storage_index
5133+
5134+
5135+    def get_size(self):
5136+        """
5137+        I return the length, in bytes, of this readable object.
5138+        """
5139+        return self._servermap.size_of_version(self._version)
5140+
5141+
5142+    def download_to_data(self, fetch_privkey=False):
5143+        """
5144+        I return a Deferred that fires with the contents of this
5145+        readable object as a byte string.
5146+
5147+        """
5148+        c = consumer.MemoryConsumer()
5149+        d = self.read(c, fetch_privkey=fetch_privkey)
5150+        d.addCallback(lambda mc: "".join(mc.chunks))
5151+        return d
5152+
5153+
5154+    def _try_to_download_data(self):
5155+        """
5156+        I am an unserialized cousin of download_to_data; I am called
5157+        from the children of modify() to download the data associated
5158+        with this mutable version.
5159+        """
5160+        c = consumer.MemoryConsumer()
5161+        # modify will almost certainly write, so we need the privkey.
5162+        d = self._read(c, fetch_privkey=True)
5163+        d.addCallback(lambda mc: "".join(mc.chunks))
5164+        return d
5165+
5166+
5167+    def read(self, consumer, offset=0, size=None, fetch_privkey=False):
5168+        """
5169+        I read a portion (possibly all) of the mutable file that I
5170+        reference into consumer.
5171+        """
5172+        return self._do_serialized(self._read, consumer, offset, size,
5173+                                   fetch_privkey)
5174+
5175+
5176+    def _read(self, consumer, offset=0, size=None, fetch_privkey=False):
5177+        """
5178+        I am the serialized companion of read.
5179+        """
5180+        r = Retrieve(self._node, self._servermap, self._version, fetch_privkey)
5181         if self._history:
5182             self._history.notify_retrieve(r.get_status())
5183hunk ./src/allmydata/mutable/filenode.py 994
5184-        d = r.download()
5185-        d.addCallback(self._downloaded_version)
5186+        d = r.download(consumer, offset, size)
5187         return d
5188hunk ./src/allmydata/mutable/filenode.py 996
5189-    def _downloaded_version(self, data):
5190-        self._most_recent_size = len(data)
5191-        return data
5192 
5193hunk ./src/allmydata/mutable/filenode.py 997
5194-    def upload(self, new_contents, servermap):
5195-        return self._do_serialized(self._upload, new_contents, servermap)
5196-    def _upload(self, new_contents, servermap):
5197-        assert self._pubkey, "update_servermap must be called before publish"
5198-        p = Publish(self, self._storage_broker, servermap)
5199+
5200+    def _do_serialized(self, cb, *args, **kwargs):
5201+        # note: to avoid deadlock, this callable is *not* allowed to invoke
5202+        # other serialized methods within this (or any other)
5203+        # MutableFileNode. The callable should be a bound method of this same
5204+        # MFN instance.
5205+        d = defer.Deferred()
5206+        self._serializer.addCallback(lambda ignore: cb(*args, **kwargs))
5207+        # we need to put off d.callback until this Deferred is finished being
5208+        # processed. Otherwise the caller's subsequent activities (like,
5209+        # doing other things with this node) can cause reentrancy problems in
5210+        # the Deferred code itself
5211+        self._serializer.addBoth(lambda res: eventually(d.callback, res))
5212+        # add a log.err just in case something really weird happens, because
5213+        # self._serializer stays around forever, therefore we won't see the
5214+        # usual Unhandled Error in Deferred that would give us a hint.
5215+        self._serializer.addErrback(log.err)
5216+        return d
5217+
5218+
5219+    def _upload(self, new_contents):
5220+        #assert self._pubkey, "update_servermap must be called before publish"
5221+        p = Publish(self._node, self._storage_broker, self._servermap)
5222         if self._history:
5223hunk ./src/allmydata/mutable/filenode.py 1021
5224-            self._history.notify_publish(p.get_status(), len(new_contents))
5225+            self._history.notify_publish(p.get_status(),
5226+                                         new_contents.get_size())
5227         d = p.publish(new_contents)
5228hunk ./src/allmydata/mutable/filenode.py 1024
5229-        d.addCallback(self._did_upload, len(new_contents))
5230+        d.addCallback(self._did_upload, new_contents.get_size())
5231         return d
5232hunk ./src/allmydata/mutable/filenode.py 1026
5233+
5234+
5235     def _did_upload(self, res, size):
5236         self._most_recent_size = size
5237         return res
5238hunk ./src/allmydata/mutable/filenode.py 1031
5239+
5240+    def update(self, data, offset):
5241+        """
5242+        Do an update of this mutable file version by inserting data at
5243+        offset within the file. If offset is the EOF, this is an append
5244+        operation. I return a Deferred that fires with the results of
5245+        the update operation when it has completed.
5246+
5247+        In cases where update does not append any data, or where it does
5248+        not append so many blocks that the block count crosses a
5249+        power-of-two boundary, this operation will use roughly
5250+        O(data.get_size()) memory/bandwidth/CPU to perform the update.
5251+        Otherwise, it must download, re-encode, and upload the entire
5252+        file again, which will use O(filesize) resources.
5253+        """
5254+        return self._do_serialized(self._update, data, offset)
5255+
5256+
5257+    def _update(self, data, offset):
5258+        """
5259+        I update the mutable file version represented by this particular
5260+        IMutableVersion by inserting the data in data at the offset
5261+        offset. I return a Deferred that fires when this has been
5262+        completed.
5263+        """
5264+        new_size = data.get_size() + offset
5265+        old_size = self.get_size()
5266+        segment_size = self._version[3]
5267+        num_old_segments = mathutil.div_ceil(old_size,
5268+                                             segment_size)
5269+        num_new_segments = mathutil.div_ceil(new_size,
5270+                                             segment_size)
5271+        log.msg("got %d old segments, %d new segments" % \
5272+                        (num_old_segments, num_new_segments))
5273+
5274+        # We do a whole file re-encode if the file is an SDMF file.
5275+        if self._version[2]: # version[2] == SDMF salt, which MDMF lacks
5276+            log.msg("doing re-encode instead of in-place update")
5277+            return self._do_modify_update(data, offset)
5278+
5279+        # Otherwise, we can replace just the parts that are changing.
5280+        log.msg("updating in place")
5281+        d = self._do_update_update(data, offset)
5282+        d.addCallback(self._decode_and_decrypt_segments, data, offset)
5283+        d.addCallback(self._build_uploadable_and_finish, data, offset)
5284+        return d
5285+
5286+
5287+    def _do_modify_update(self, data, offset):
5288+        """
5289+        I perform a file update by modifying the contents of the file
5290+        after downloading it, then reuploading it. I am less efficient
5291+        than _do_update_update, but am necessary for certain updates.
5292+        """
5293+        def m(old, servermap, first_time):
5294+            start = offset
5295+            rest = offset + data.get_size()
5296+            new = old[:start]
5297+            new += "".join(data.read(data.get_size()))
5298+            new += old[rest:]
5299+            return new
5300+        return self._modify(m, None)
5301+
5302+
5303+    def _do_update_update(self, data, offset):
5304+        """
5305+        I start the Servermap update that gets us the data we need to
5306+        continue the update process. I return a Deferred that fires when
5307+        the servermap update is done.
5308+        """
5309+        assert IMutableUploadable.providedBy(data)
5310+        assert self.is_mutable()
5311+        # offset == self.get_size() is valid and means that we are
5312+        # appending data to the file.
5313+        assert offset <= self.get_size()
5314+
5315+        segsize = self._version[3]
5316+        # We'll need the segment that the data starts in, regardless of
5317+        # what we'll do later.
5318+        start_segment = offset // segsize
5319+
5320+        # We only need the end segment if the data we append does not go
5321+        # beyond the current end-of-file.
5322+        end_segment = start_segment
5323+        if offset + data.get_size() < self.get_size():
5324+            end_data = offset + data.get_size()
5325+            end_segment = end_data // segsize
5326+
5327+        self._start_segment = start_segment
5328+        self._end_segment = end_segment
5329+
5330+        # Now ask for the servermap to be updated in MODE_WRITE with
5331+        # this update range.
5332+        return self._update_servermap(update_range=(start_segment,
5333+                                                    end_segment))
5334+
5335+
5336+    def _decode_and_decrypt_segments(self, ignored, data, offset):
5337+        """
5338+        After the servermap update, I take the encrypted and encoded
5339+        data that the servermap fetched while doing its update and
5340+        transform it into decoded-and-decrypted plaintext that can be
5341+        used by the new uploadable. I return a Deferred that fires with
5342+        the segments.
5343+        """
5344+        r = Retrieve(self._node, self._servermap, self._version)
5345+        # decode: takes in our blocks and salts from the servermap,
5346+        # returns a Deferred that fires with the corresponding plaintext
5347+        # segments. Does not download -- simply takes advantage of
5348+        # existing infrastructure within the Retrieve class to avoid
5349+        # duplicating code.
5350+        sm = self._servermap
5351+        # XXX: If the methods in the servermap don't work as
5352+        # abstractions, you should rewrite them instead of going around
5353+        # them.
5354+        update_data = sm.update_data
5355+        start_segments = {} # shnum -> start segment
5356+        end_segments = {} # shnum -> end segment
5357+        blockhashes = {} # shnum -> blockhash tree
5358+        for (shnum, data) in update_data.iteritems():
5359+            data = [d[1] for d in data if d[0] == self._version]
5360+
5361+            # Every data entry in our list should now be the update
5362+            # data for share shnum of a particular version of the
5363+            # mutable file, so all of the entries should be identical.
5364+            datum = data[0]
5365+            assert filter(lambda x: x != datum, data) == []
5366+
5367+            blockhashes[shnum] = datum[0]
5368+            start_segments[shnum] = datum[1]
5369+            end_segments[shnum] = datum[2]
5370+
5371+        d1 = r.decode(start_segments, self._start_segment)
5372+        d2 = r.decode(end_segments, self._end_segment)
5373+        d3 = defer.succeed(blockhashes)
5374+        return deferredutil.gatherResults([d1, d2, d3])
5375+
5376+
5377+    def _build_uploadable_and_finish(self, segments_and_bht, data, offset):
5378+        """
5379+        After the process has the plaintext segments, I build the
5380+        TransformingUploadable that the publisher will eventually
5381+        re-upload to the grid. I then invoke the publisher with that
5382+        uploadable, and return a Deferred that fires when the publish
5383+        operation has completed without issue.
5384+        """
5385+        u = TransformingUploadable(data, offset,
5386+                                   self._version[3],
5387+                                   segments_and_bht[0],
5388+                                   segments_and_bht[1])
5389+        p = Publish(self._node, self._storage_broker, self._servermap)
5390+        return p.update(u, offset, segments_and_bht[2], self._version)
5391+
5392+
5393+    def _update_servermap(self, mode=MODE_WRITE, update_range=None):
5394+        """
5395+        I update the servermap. I return a Deferred that fires when the
5396+        servermap update is done.
5397+        """
5398+        if update_range:
5399+            u = ServermapUpdater(self._node, self._storage_broker, Monitor(),
5400+                                 self._servermap,
5401+                                 mode=mode,
5402+                                 update_range=update_range)
5403+        else:
5404+            u = ServermapUpdater(self._node, self._storage_broker, Monitor(),
5405+                                 self._servermap,
5406+                                 mode=mode)
5407+        return u.update()
5408}
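The update path above only re-decodes the segments at the boundaries of the modified range (`start_segments` / `end_segments`), splicing the new data in between rather than re-downloading the whole file. As a rough standalone sketch of that arithmetic (hypothetical helper names, not Tahoe-LAFS APIs):

```python
def touched_segments(offset, length, segsize):
    """Return (start_segment, end_segment) indices for an update of
    `length` bytes starting at `offset`, given `segsize`-byte segments."""
    start = offset // segsize
    end = (offset + length - 1) // segsize if length else start
    return start, end

def splice(old, new_data, offset):
    """Apply an in-place overwrite at `offset`, extending the file if
    the new data runs past the old end."""
    head = old[:offset]
    tail = old[offset + len(new_data):]
    return head + new_data + tail
```

Only the two boundary segments need plaintext; every whole segment strictly inside the range is simply replaced.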
5409[client: teach client how to create and work with MDMF files
5410Kevan Carstensen <kevan@isnotajoke.com>**20110802014811
5411 Ignore-this: d72fbc4c2ca63f00d9ab9dc2919098ff
5412] {
5413hunk ./src/allmydata/client.py 25
5414 from allmydata.util.time_format import parse_duration, parse_date
5415 from allmydata.stats import StatsProvider
5416 from allmydata.history import History
5417-from allmydata.interfaces import IStatsProducer, RIStubClient
5418+from allmydata.interfaces import IStatsProducer, RIStubClient, \
5419+                                 SDMF_VERSION, MDMF_VERSION
5420 from allmydata.nodemaker import NodeMaker
5421 
5422 
5423hunk ./src/allmydata/client.py 357
5424                                    self.terminator,
5425                                    self.get_encoding_parameters(),
5426                                    self._key_generator)
5427+        default = self.get_config("client", "mutable.format", default="sdmf")
5428+        if default == "mdmf":
5429+            self.mutable_file_default = MDMF_VERSION
5430+        else:
5431+            self.mutable_file_default = SDMF_VERSION
5432 
5433     def get_history(self):
5434         return self.history
5435hunk ./src/allmydata/client.py 493
5436         # may get an opaque node if there were any problems.
5437         return self.nodemaker.create_from_cap(write_uri, read_uri, deep_immutable=deep_immutable, name=name)
5438 
5439-    def create_dirnode(self, initial_children={}):
5440-        d = self.nodemaker.create_new_mutable_directory(initial_children)
5441+    def create_dirnode(self, initial_children={}, version=SDMF_VERSION):
5442+        d = self.nodemaker.create_new_mutable_directory(initial_children, version=version)
5443         return d
5444 
5445     def create_immutable_dirnode(self, children, convergence=None):
5446hunk ./src/allmydata/client.py 500
5447         return self.nodemaker.create_immutable_directory(children, convergence)
5448 
5449-    def create_mutable_file(self, contents=None, keysize=None):
5450-        return self.nodemaker.create_mutable_file(contents, keysize)
5451+    def create_mutable_file(self, contents=None, keysize=None, version=None):
5452+        if not version:
5453+            version = self.mutable_file_default
5454+        return self.nodemaker.create_mutable_file(contents, keysize,
5455+                                                  version=version)
5456 
5457     def upload(self, uploadable):
5458         uploader = self.getServiceNamed("uploader")
5459}
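The client hunk above reads `mutable.format` from the `[client]` section and falls back to SDMF for any value other than `"mdmf"`. A simplified standalone sketch of that selection logic (placeholder constants; the real client reads its own config object, not configparser directly):

```python
import configparser

SDMF_VERSION, MDMF_VERSION = 0, 1  # placeholder constants

def mutable_format_default(config_text):
    """Pick the default mutable-file format from a tahoe.cfg-style
    [client] section; anything other than "mdmf" falls back to SDMF."""
    cp = configparser.ConfigParser()
    cp.read_string(config_text)
    fmt = cp.get("client", "mutable.format", fallback="sdmf")
    return MDMF_VERSION if fmt == "mdmf" else SDMF_VERSION
```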
5460[nodemaker: teach nodemaker about MDMF caps
5461Kevan Carstensen <kevan@isnotajoke.com>**20110802014926
5462 Ignore-this: 430c73121b6883b99626cfd652fc65c4
5463] {
5464hunk ./src/allmydata/nodemaker.py 82
5465             return self._create_immutable(cap)
5466         if isinstance(cap, uri.CHKFileVerifierURI):
5467             return self._create_immutable_verifier(cap)
5468-        if isinstance(cap, (uri.ReadonlySSKFileURI, uri.WriteableSSKFileURI)):
5469+        if isinstance(cap, (uri.ReadonlySSKFileURI, uri.WriteableSSKFileURI,
5470+                            uri.WritableMDMFFileURI, uri.ReadonlyMDMFFileURI)):
5471             return self._create_mutable(cap)
5472         if isinstance(cap, (uri.DirectoryURI,
5473                             uri.ReadonlyDirectoryURI,
5474hunk ./src/allmydata/nodemaker.py 88
5475                             uri.ImmutableDirectoryURI,
5476-                            uri.LiteralDirectoryURI)):
5477+                            uri.LiteralDirectoryURI,
5478+                            uri.MDMFDirectoryURI,
5479+                            uri.ReadonlyMDMFDirectoryURI)):
5480             filenode = self._create_from_single_cap(cap.get_filenode_cap())
5481             return self._create_dirnode(filenode)
5482         return None
5483}
5484[mutable: train checker and repairer to work with MDMF mutable files
5485Kevan Carstensen <kevan@isnotajoke.com>**20110802015140
5486 Ignore-this: 8b1928925bed63708b71ab0de8d4306f
5487] {
5488hunk ./src/allmydata/mutable/checker.py 2
5489 
5490-from twisted.internet import defer
5491-from twisted.python import failure
5492-from allmydata import hashtree
5493 from allmydata.uri import from_string
5494hunk ./src/allmydata/mutable/checker.py 3
5495-from allmydata.util import hashutil, base32, idlib, log
5496+from allmydata.util import base32, idlib, log
5497 from allmydata.check_results import CheckAndRepairResults, CheckResults
5498 
5499 from allmydata.mutable.common import MODE_CHECK, CorruptShareError
5500hunk ./src/allmydata/mutable/checker.py 8
5501 from allmydata.mutable.servermap import ServerMap, ServermapUpdater
5502-from allmydata.mutable.layout import unpack_share, SIGNED_PREFIX_LENGTH
5503+from allmydata.mutable.retrieve import Retrieve # for verifying
5504 
5505 class MutableChecker:
5506 
5507hunk ./src/allmydata/mutable/checker.py 25
5508 
5509     def check(self, verify=False, add_lease=False):
5510         servermap = ServerMap()
5511+        # Updating the servermap in MODE_CHECK will stand a good chance
5512+        # of finding all of the shares, and getting a good idea of
5513+        # recoverability, etc, without verifying.
5514         u = ServermapUpdater(self._node, self._storage_broker, self._monitor,
5515                              servermap, MODE_CHECK, add_lease=add_lease)
5516         if self._history:
5517hunk ./src/allmydata/mutable/checker.py 51
5518         if num_recoverable:
5519             self.best_version = servermap.best_recoverable_version()
5520 
5521+        # The file is unhealthy and needs to be repaired if:
5522+        # - There are unrecoverable versions.
5523         if servermap.unrecoverable_versions():
5524             self.need_repair = True
5525hunk ./src/allmydata/mutable/checker.py 55
5526+        # - There isn't a recoverable version.
5527         if num_recoverable != 1:
5528             self.need_repair = True
5529hunk ./src/allmydata/mutable/checker.py 58
5530+        # - The best recoverable version is missing some shares.
5531         if self.best_version:
5532             available_shares = servermap.shares_available()
5533             (num_distinct_shares, k, N) = available_shares[self.best_version]
5534hunk ./src/allmydata/mutable/checker.py 69
5535 
5536     def _verify_all_shares(self, servermap):
5537         # read every byte of each share
5538+        #
5539+        # This logic is going to be very nearly the same as the
5540+        # downloader. I bet we could pass the downloader a flag that
5541+        # makes it do this, and piggyback onto that instead of
5542+        # duplicating a bunch of code.
5543+        #
5544+        # Like:
5545+        #  r = Retrieve(blah, blah, blah, verify=True)
5546+        #  d = r.download()
5547+        #  (wait, wait, wait, d.callback)
5548+        # 
5549+        #  Then, when it has finished, we can check the servermap (which
5550+        #  we provided to Retrieve) to figure out which shares are bad,
5551+        #  since the Retrieve process will have updated the servermap as
5552+        #  it went along.
5553+        #
5554+        #  By passing the verify=True flag to the constructor, we are
5555+        #  telling the downloader a few things.
5556+        #
5557+        #  1. It needs to download all N shares, not just K shares.
5558+        #  2. It doesn't need to decrypt or decode the shares, only
5559+        #     verify them.
5560         if not self.best_version:
5561             return
5562hunk ./src/allmydata/mutable/checker.py 93
5563-        versionmap = servermap.make_versionmap()
5564-        shares = versionmap[self.best_version]
5565-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
5566-         offsets_tuple) = self.best_version
5567-        offsets = dict(offsets_tuple)
5568-        readv = [ (0, offsets["EOF"]) ]
5569-        dl = []
5570-        for (shnum, peerid, timestamp) in shares:
5571-            ss = servermap.connections[peerid]
5572-            d = self._do_read(ss, peerid, self._storage_index, [shnum], readv)
5573-            d.addCallback(self._got_answer, peerid, servermap)
5574-            dl.append(d)
5575-        return defer.DeferredList(dl, fireOnOneErrback=True, consumeErrors=True)
5576 
5577hunk ./src/allmydata/mutable/checker.py 94
5578-    def _do_read(self, ss, peerid, storage_index, shnums, readv):
5579-        # isolate the callRemote to a separate method, so tests can subclass
5580-        # Publish and override it
5581-        d = ss.callRemote("slot_readv", storage_index, shnums, readv)
5582+        r = Retrieve(self._node, servermap, self.best_version, verify=True)
5583+        d = r.download()
5584+        d.addCallback(self._process_bad_shares)
5585         return d
5586 
5587hunk ./src/allmydata/mutable/checker.py 99
5588-    def _got_answer(self, datavs, peerid, servermap):
5589-        for shnum,datav in datavs.items():
5590-            data = datav[0]
5591-            try:
5592-                self._got_results_one_share(shnum, peerid, data)
5593-            except CorruptShareError:
5594-                f = failure.Failure()
5595-                self.need_repair = True
5596-                self.bad_shares.append( (peerid, shnum, f) )
5597-                prefix = data[:SIGNED_PREFIX_LENGTH]
5598-                servermap.mark_bad_share(peerid, shnum, prefix)
5599-                ss = servermap.connections[peerid]
5600-                self.notify_server_corruption(ss, shnum, str(f.value))
5601-
5602-    def check_prefix(self, peerid, shnum, data):
5603-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
5604-         offsets_tuple) = self.best_version
5605-        got_prefix = data[:SIGNED_PREFIX_LENGTH]
5606-        if got_prefix != prefix:
5607-            raise CorruptShareError(peerid, shnum,
5608-                                    "prefix mismatch: share changed while we were reading it")
5609-
5610-    def _got_results_one_share(self, shnum, peerid, data):
5611-        self.check_prefix(peerid, shnum, data)
5612-
5613-        # the [seqnum:signature] pieces are validated by _compare_prefix,
5614-        # which checks their signature against the pubkey known to be
5615-        # associated with this file.
5616 
5617hunk ./src/allmydata/mutable/checker.py 100
5618-        (seqnum, root_hash, IV, k, N, segsize, datalen, pubkey, signature,
5619-         share_hash_chain, block_hash_tree, share_data,
5620-         enc_privkey) = unpack_share(data)
5621-
5622-        # validate [share_hash_chain,block_hash_tree,share_data]
5623-
5624-        leaves = [hashutil.block_hash(share_data)]
5625-        t = hashtree.HashTree(leaves)
5626-        if list(t) != block_hash_tree:
5627-            raise CorruptShareError(peerid, shnum, "block hash tree failure")
5628-        share_hash_leaf = t[0]
5629-        t2 = hashtree.IncompleteHashTree(N)
5630-        # root_hash was checked by the signature
5631-        t2.set_hashes({0: root_hash})
5632-        try:
5633-            t2.set_hashes(hashes=share_hash_chain,
5634-                          leaves={shnum: share_hash_leaf})
5635-        except (hashtree.BadHashError, hashtree.NotEnoughHashesError,
5636-                IndexError), e:
5637-            msg = "corrupt hashes: %s" % (e,)
5638-            raise CorruptShareError(peerid, shnum, msg)
5639-
5640-        # validate enc_privkey: only possible if we have a write-cap
5641-        if not self._node.is_readonly():
5642-            alleged_privkey_s = self._node._decrypt_privkey(enc_privkey)
5643-            alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s)
5644-            if alleged_writekey != self._node.get_writekey():
5645-                raise CorruptShareError(peerid, shnum, "invalid privkey")
5646+    def _process_bad_shares(self, bad_shares):
5647+        if bad_shares:
5648+            self.need_repair = True
5649+        self.bad_shares = bad_shares
5650 
5651hunk ./src/allmydata/mutable/checker.py 105
5652-    def notify_server_corruption(self, ss, shnum, reason):
5653-        ss.callRemoteOnly("advise_corrupt_share",
5654-                          "mutable", self._storage_index, shnum, reason)
5655 
5656     def _count_shares(self, smap, version):
5657         available_shares = smap.shares_available()
5658hunk ./src/allmydata/mutable/repairer.py 5
5659 from zope.interface import implements
5660 from twisted.internet import defer
5661 from allmydata.interfaces import IRepairResults, ICheckResults
5662+from allmydata.mutable.publish import MutableData
5663 
5664 class RepairResults:
5665     implements(IRepairResults)
5666hunk ./src/allmydata/mutable/repairer.py 108
5667             raise RepairRequiresWritecapError("Sorry, repair currently requires a writecap, to set the write-enabler properly.")
5668 
5669         d = self.node.download_version(smap, best_version, fetch_privkey=True)
5670+        d.addCallback(lambda data:
5671+            MutableData(data))
5672         d.addCallback(self.node.upload, smap)
5673         d.addCallback(self.get_results, smap)
5674         return d
5675}
5676[test/common: Alter common test code to work with MDMF.
5677Kevan Carstensen <kevan@isnotajoke.com>**20110802015643
5678 Ignore-this: e564403182d0030439b168dd9f8726fa
5679 
5680 This mostly has to do with making the test code implement the new
5681 unified filenode interfaces.
5682] {
5683hunk ./src/allmydata/test/common.py 11
5684 from foolscap.api import flushEventualQueue, fireEventually
5685 from allmydata import uri, dirnode, client
5686 from allmydata.introducer.server import IntroducerNode
5687-from allmydata.interfaces import IMutableFileNode, IImmutableFileNode, \
5688-     FileTooLargeError, NotEnoughSharesError, ICheckable
5689+from allmydata.interfaces import IMutableFileNode, IImmutableFileNode,\
5690+                                 NotEnoughSharesError, ICheckable, \
5691+                                 IMutableUploadable, SDMF_VERSION, \
5692+                                 MDMF_VERSION
5693 from allmydata.check_results import CheckResults, CheckAndRepairResults, \
5694      DeepCheckResults, DeepCheckAndRepairResults
5695 from allmydata.mutable.common import CorruptShareError
5696hunk ./src/allmydata/test/common.py 19
5697 from allmydata.mutable.layout import unpack_header
5698+from allmydata.mutable.publish import MutableData
5699+from allmydata.storage.server import storage_index_to_dir
5700 from allmydata.storage.mutable import MutableShareFile
5701 from allmydata.util import hashutil, log, fileutil, pollmixin
5702 from allmydata.util.assertutil import precondition
5703hunk ./src/allmydata/test/common.py 152
5704         consumer.write(data[start:end])
5705         return consumer
5706 
5707+
5708+    def get_best_readable_version(self):
5709+        return defer.succeed(self)
5710+
5711+
5712+    def download_to_data(self):
5713+        return download_to_data(self)
5714+
5715+
5716+    download_best_version = download_to_data
5717+
5718+
5719+    def get_size_of_best_version(self):
5720+        return defer.succeed(self.get_size)
5721+
5722+
5723 def make_chk_file_cap(size):
5724     return uri.CHKFileURI(key=os.urandom(16),
5725                           uri_extension_hash=os.urandom(32),
5726hunk ./src/allmydata/test/common.py 192
5727     MUTABLE_SIZELIMIT = 10000
5728     all_contents = {}
5729     bad_shares = {}
5730+    file_types = {} # storage index => MDMF_VERSION or SDMF_VERSION
5731 
5732     def __init__(self, storage_broker, secret_holder,
5733                  default_encoding_parameters, history):
5734hunk ./src/allmydata/test/common.py 197
5735         self.init_from_cap(make_mutable_file_cap())
5736-    def create(self, contents, key_generator=None, keysize=None):
5737+        self._k = default_encoding_parameters['k']
5738+        self._segsize = default_encoding_parameters['max_segment_size']
5739+    def create(self, contents, key_generator=None, keysize=None,
5740+               version=SDMF_VERSION):
5741+        if version == MDMF_VERSION and \
5742+            isinstance(self.my_uri, (uri.ReadonlySSKFileURI,
5743+                                 uri.WriteableSSKFileURI)):
5744+            self.init_from_cap(make_mdmf_mutable_file_cap())
5745+        self.file_types[self.storage_index] = version
5746         initial_contents = self._get_initial_contents(contents)
5747hunk ./src/allmydata/test/common.py 207
5748-        if len(initial_contents) > self.MUTABLE_SIZELIMIT:
5749-            raise FileTooLargeError("SDMF is limited to one segment, and "
5750-                                    "%d > %d" % (len(initial_contents),
5751-                                                 self.MUTABLE_SIZELIMIT))
5752-        self.all_contents[self.storage_index] = initial_contents
5753+        data = initial_contents.read(initial_contents.get_size())
5754+        data = "".join(data)
5755+        self.all_contents[self.storage_index] = data
5756+        self.my_uri.set_extension_params([self._k, self._segsize])
5757         return defer.succeed(self)
5758     def _get_initial_contents(self, contents):
5759hunk ./src/allmydata/test/common.py 213
5760-        if isinstance(contents, str):
5761-            return contents
5762         if contents is None:
5763hunk ./src/allmydata/test/common.py 214
5764-            return ""
5765+            return MutableData("")
5766+
5767+        if IMutableUploadable.providedBy(contents):
5768+            return contents
5769+
5770         assert callable(contents), "%s should be callable, not %s" % \
5771                (contents, type(contents))
5772         return contents(self)
5773hunk ./src/allmydata/test/common.py 224
5774     def init_from_cap(self, filecap):
5775         assert isinstance(filecap, (uri.WriteableSSKFileURI,
5776-                                    uri.ReadonlySSKFileURI))
5777+                                    uri.ReadonlySSKFileURI,
5778+                                    uri.WritableMDMFFileURI,
5779+                                    uri.ReadonlyMDMFFileURI))
5780         self.my_uri = filecap
5781         self.storage_index = self.my_uri.get_storage_index()
5782hunk ./src/allmydata/test/common.py 229
5783+        if isinstance(filecap, (uri.WritableMDMFFileURI,
5784+                                uri.ReadonlyMDMFFileURI)):
5785+            self.file_types[self.storage_index] = MDMF_VERSION
5786+
5787+        else:
5788+            self.file_types[self.storage_index] = SDMF_VERSION
5789+
5790         return self
5791     def get_cap(self):
5792         return self.my_uri
5793hunk ./src/allmydata/test/common.py 253
5794         return self.my_uri.get_readonly().to_string()
5795     def get_verify_cap(self):
5796         return self.my_uri.get_verify_cap()
5797+    def get_repair_cap(self):
5798+        if self.my_uri.is_readonly():
5799+            return None
5800+        return self.my_uri
5801     def is_readonly(self):
5802         return self.my_uri.is_readonly()
5803     def is_mutable(self):
5804hunk ./src/allmydata/test/common.py 279
5805     def get_storage_index(self):
5806         return self.storage_index
5807 
5808+    def get_servermap(self, mode):
5809+        return defer.succeed(None)
5810+
5811+    def get_version(self):
5812+        assert self.storage_index in self.file_types
5813+        return self.file_types[self.storage_index]
5814+
5815     def check(self, monitor, verify=False, add_lease=False):
5816         r = CheckResults(self.my_uri, self.storage_index)
5817         is_bad = self.bad_shares.get(self.storage_index, None)
5818hunk ./src/allmydata/test/common.py 344
5819         return d
5820 
5821     def download_best_version(self):
5822+        return defer.succeed(self._download_best_version())
5823+
5824+
5825+    def _download_best_version(self, ignored=None):
5826         if isinstance(self.my_uri, uri.LiteralFileURI):
5827hunk ./src/allmydata/test/common.py 349
5828-            return defer.succeed(self.my_uri.data)
5829+            return self.my_uri.data
5830         if self.storage_index not in self.all_contents:
5831hunk ./src/allmydata/test/common.py 351
5832-            return defer.fail(NotEnoughSharesError(None, 0, 3))
5833-        return defer.succeed(self.all_contents[self.storage_index])
5834+            raise NotEnoughSharesError(None, 0, 3)
5835+        return self.all_contents[self.storage_index]
5836+
5837 
5838     def overwrite(self, new_contents):
5839hunk ./src/allmydata/test/common.py 356
5840-        if len(new_contents) > self.MUTABLE_SIZELIMIT:
5841-            raise FileTooLargeError("SDMF is limited to one segment, and "
5842-                                    "%d > %d" % (len(new_contents),
5843-                                                 self.MUTABLE_SIZELIMIT))
5844         assert not self.is_readonly()
5845hunk ./src/allmydata/test/common.py 357
5846-        self.all_contents[self.storage_index] = new_contents
5847+        new_data = new_contents.read(new_contents.get_size())
5848+        new_data = "".join(new_data)
5849+        self.all_contents[self.storage_index] = new_data
5850+        self.my_uri.set_extension_params([self._k, self._segsize])
5851         return defer.succeed(None)
5852     def modify(self, modifier):
5853         # this does not implement FileTooLargeError, but the real one does
5854hunk ./src/allmydata/test/common.py 368
5855     def _modify(self, modifier):
5856         assert not self.is_readonly()
5857         old_contents = self.all_contents[self.storage_index]
5858-        self.all_contents[self.storage_index] = modifier(old_contents, None, True)
5859+        new_data = modifier(old_contents, None, True)
5860+        self.all_contents[self.storage_index] = new_data
5861+        self.my_uri.set_extension_params([self._k, self._segsize])
5862         return None
5863 
5864hunk ./src/allmydata/test/common.py 373
5865+    # As actually implemented, MutableFilenode and MutableFileVersion
5866+    # are distinct. However, nothing in the webapi uses (yet) that
5867+    # distinction -- it just uses the unified download interface
5868+    # provided by get_best_readable_version and read. When we start
5869+    # doing cooler things like LDMF, we will want to revise this code to
5870+    # be less simplistic.
5871+    def get_best_readable_version(self):
5872+        return defer.succeed(self)
5873+
5874+
5875+    def get_best_mutable_version(self):
5876+        return defer.succeed(self)
5877+
5878+    # Ditto for this, which is an implementation of IWritable.
5879+    # XXX: Declare that IWritable is implemented.
5880+    def update(self, data, offset):
5881+        assert not self.is_readonly()
5882+        def modifier(old, servermap, first_time):
5883+            new = old[:offset] + "".join(data.read(data.get_size()))
5884+            new += old[len(new):]
5885+            return new
5886+        return self.modify(modifier)
5887+
5888+
5889+    def read(self, consumer, offset=0, size=None):
5890+        data = self._download_best_version()
5891+        if size:
5892+            data = data[offset:offset+size]
5893+        consumer.write(data)
5894+        return defer.succeed(consumer)
5895+
5896+
5897 def make_mutable_file_cap():
5898     return uri.WriteableSSKFileURI(writekey=os.urandom(16),
5899                                    fingerprint=os.urandom(32))
5900hunk ./src/allmydata/test/common.py 408
5901-def make_mutable_file_uri():
5902-    return make_mutable_file_cap().to_string()
5903+
5904+def make_mdmf_mutable_file_cap():
5905+    return uri.WritableMDMFFileURI(writekey=os.urandom(16),
5906+                                   fingerprint=os.urandom(32))
5907+
5908+def make_mutable_file_uri(mdmf=False):
5909+    if mdmf:
5910+        uri = make_mdmf_mutable_file_cap()
5911+    else:
5912+        uri = make_mutable_file_cap()
5913+
5914+    return uri.to_string()
5915 
5916 def make_verifier_uri():
5917     return uri.SSKVerifierURI(storage_index=os.urandom(16),
5918hunk ./src/allmydata/test/common.py 425
5919                               fingerprint=os.urandom(32)).to_string()
5920 
5921+def create_mutable_filenode(contents, mdmf=False):
5922+    # XXX: All of these arguments are kind of stupid.
5923+    if mdmf:
5924+        cap = make_mdmf_mutable_file_cap()
5925+    else:
5926+        cap = make_mutable_file_cap()
5927+
5928+    encoding_params = {}
5929+    encoding_params['k'] = 3
5930+    encoding_params['max_segment_size'] = 128*1024
5931+
5932+    filenode = FakeMutableFileNode(None, None, encoding_params, None)
5933+    filenode.init_from_cap(cap)
5934+    if mdmf:
5935+        filenode.create(MutableData(contents), version=MDMF_VERSION)
5936+    else:
5937+        filenode.create(MutableData(contents), version=SDMF_VERSION)
5938+    return filenode
5939+
5940+
5941 class FakeDirectoryNode(dirnode.DirectoryNode):
5942     """This offers IDirectoryNode, but uses a FakeMutableFileNode for the
5943     backing store, so it doesn't go to the grid. The child data is still
5944}
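The test changes above stop passing raw strings into `create`/`overwrite` and instead wrap contents in `MutableData`, which serves them through the `IMutableUploadable` read/get_size interface. A minimal stand-in illustrating that interface (not the real `allmydata.mutable.publish.MutableData`):

```python
class MutableData:
    """Minimal stand-in for an IMutableUploadable: wraps a byte string
    and serves it through get_size()/read() like a file-ish object."""
    def __init__(self, data):
        self._data = data
        self._position = 0

    def get_size(self):
        return len(self._data)

    def read(self, length):
        # Like the interface the fakes exercise, return a list of
        # chunks, so callers must join the result themselves.
        chunk = self._data[self._position:self._position + length]
        self._position += length
        return [chunk]
```

This is why the fake filenode does `"".join(contents.read(contents.get_size()))` before storing the data.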
5945[dirnode: teach dirnode to make MDMF directories
5946Kevan Carstensen <kevan@isnotajoke.com>**20110802020511
5947 Ignore-this: 143631400a6136467eb82455487df525
5948] {
5949hunk ./src/allmydata/dirnode.py 14
5950 from allmydata.interfaces import IFilesystemNode, IDirectoryNode, IFileNode, \
5951      IImmutableFileNode, IMutableFileNode, \
5952      ExistingChildError, NoSuchChildError, ICheckable, IDeepCheckable, \
5953-     MustBeDeepImmutableError, CapConstraintError, ChildOfWrongTypeError
5954+     MustBeDeepImmutableError, CapConstraintError, ChildOfWrongTypeError, \
5955+     SDMF_VERSION, MDMF_VERSION
5956 from allmydata.check_results import DeepCheckResults, \
5957      DeepCheckAndRepairResults
5958 from allmydata.monitor import Monitor
5959hunk ./src/allmydata/dirnode.py 617
5960         d.addCallback(lambda res: deleter.old_child)
5961         return d
5962 
5963+    # XXX: Too many arguments? Worthwhile to break into mutable/immutable?
5964     def create_subdirectory(self, namex, initial_children={}, overwrite=True,
5965hunk ./src/allmydata/dirnode.py 619
5966-                            mutable=True, metadata=None):
5967+                            mutable=True, mutable_version=None, metadata=None):
5968         name = normalize(namex)
5969         if self.is_readonly():
5970             return defer.fail(NotWriteableError())
5971hunk ./src/allmydata/dirnode.py 624
5972         if mutable:
5973-            d = self._nodemaker.create_new_mutable_directory(initial_children)
5974+            if mutable_version:
5975+                d = self._nodemaker.create_new_mutable_directory(initial_children,
5976+                                                                 version=mutable_version)
5977+            else:
5978+                d = self._nodemaker.create_new_mutable_directory(initial_children)
5979         else:
5980hunk ./src/allmydata/dirnode.py 630
5981+            # mutable_version doesn't make sense for immutable directories.
5982+            assert mutable_version is None
5983             d = self._nodemaker.create_immutable_directory(initial_children)
5984         def _created(child):
5985             entries = {name: (child, metadata)}
5986hunk ./src/allmydata/test/test_dirnode.py 14
5987 from allmydata.interfaces import IImmutableFileNode, IMutableFileNode, \
5988      ExistingChildError, NoSuchChildError, MustNotBeUnknownRWError, \
5989      MustBeDeepImmutableError, MustBeReadonlyError, \
5990-     IDeepCheckResults, IDeepCheckAndRepairResults
5991+     IDeepCheckResults, IDeepCheckAndRepairResults, \
5992+     MDMF_VERSION, SDMF_VERSION
5993 from allmydata.mutable.filenode import MutableFileNode
5994 from allmydata.mutable.common import UncoordinatedWriteError
5995 from allmydata.util import hashutil, base32
5996hunk ./src/allmydata/test/test_dirnode.py 61
5997               testutil.ReallyEqualMixin, testutil.ShouldFailMixin, testutil.StallMixin, ErrorMixin):
5998     timeout = 480 # It occasionally takes longer than 240 seconds on Francois's arm box.
5999 
6000-    def test_basic(self):
6001-        self.basedir = "dirnode/Dirnode/test_basic"
6002-        self.set_up_grid()
6003+    def _do_create_test(self, mdmf=False):
6004         c = self.g.clients[0]
6005hunk ./src/allmydata/test/test_dirnode.py 63
6006-        d = c.create_dirnode()
6007-        def _done(res):
6008-            self.failUnless(isinstance(res, dirnode.DirectoryNode))
6009-            self.failUnless(res.is_mutable())
6010-            self.failIf(res.is_readonly())
6011-            self.failIf(res.is_unknown())
6012-            self.failIf(res.is_allowed_in_immutable_directory())
6013-            res.raise_error()
6014-            rep = str(res)
6015-            self.failUnless("RW-MUT" in rep)
6016-        d.addCallback(_done)
6017+
6018+        self.expected_manifest = []
6019+        self.expected_verifycaps = set()
6020+        self.expected_storage_indexes = set()
6021+
6022+        d = None
6023+        if mdmf:
6024+            d = c.create_dirnode(version=MDMF_VERSION)
6025+        else:
6026+            d = c.create_dirnode()
6027+        def _then(n):
6028+            # /
6029+            self.rootnode = n
6030+            backing_node = n._node
6031+            if mdmf:
6032+                self.failUnlessEqual(backing_node.get_version(),
6033+                                     MDMF_VERSION)
6034+            else:
6035+                self.failUnlessEqual(backing_node.get_version(),
6036+                                     SDMF_VERSION)
6037+            self.failUnless(n.is_mutable())
6038+            u = n.get_uri()
6039+            self.failUnless(u)
6040+            cap_formats = []
6041+            if mdmf:
6042+                cap_formats = ["URI:DIR2-MDMF:",
6043+                               "URI:DIR2-MDMF-RO:",
6044+                               "URI:DIR2-MDMF-Verifier:"]
6045+            else:
6046+                cap_formats = ["URI:DIR2:",
6047+                               "URI:DIR2-RO",
6048+                               "URI:DIR2-Verifier:"]
6049+            rw, ro, v = cap_formats
6050+            self.failUnless(u.startswith(rw), u)
6051+            u_ro = n.get_readonly_uri()
6052+            self.failUnless(u_ro.startswith(ro), u_ro)
6053+            u_v = n.get_verify_cap().to_string()
6054+            self.failUnless(u_v.startswith(v), u_v)
6055+            u_r = n.get_repair_cap().to_string()
6056+            self.failUnlessReallyEqual(u_r, u)
6057+            self.expected_manifest.append( ((), u) )
6058+            self.expected_verifycaps.add(u_v)
6059+            si = n.get_storage_index()
6060+            self.expected_storage_indexes.add(base32.b2a(si))
6061+            expected_si = n._uri.get_storage_index()
6062+            self.failUnlessReallyEqual(si, expected_si)
6063+
6064+            d = n.list()
6065+            d.addCallback(lambda res: self.failUnlessEqual(res, {}))
6066+            d.addCallback(lambda res: n.has_child(u"missing"))
6067+            d.addCallback(lambda res: self.failIf(res))
6068+
6069+            fake_file_uri = make_mutable_file_uri()
6070+            other_file_uri = make_mutable_file_uri()
6071+            m = c.nodemaker.create_from_cap(fake_file_uri)
6072+            ffu_v = m.get_verify_cap().to_string()
6073+            self.expected_manifest.append( ((u"child",) , m.get_uri()) )
6074+            self.expected_verifycaps.add(ffu_v)
6075+            self.expected_storage_indexes.add(base32.b2a(m.get_storage_index()))
6076+            d.addCallback(lambda res: n.set_uri(u"child",
6077+                                                fake_file_uri, fake_file_uri))
6078+            d.addCallback(lambda res:
6079+                          self.shouldFail(ExistingChildError, "set_uri-no",
6080+                                          "child 'child' already exists",
6081+                                          n.set_uri, u"child",
6082+                                          other_file_uri, other_file_uri,
6083+                                          overwrite=False))
6084+            # /
6085+            # /child = mutable
6086+
6087+            d.addCallback(lambda res: n.create_subdirectory(u"subdir"))
6088+
6089+            # /
6090+            # /child = mutable
6091+            # /subdir = directory
6092+            def _created(subdir):
6093+                self.failUnless(isinstance(subdir, dirnode.DirectoryNode))
6094+                self.subdir = subdir
6095+                new_v = subdir.get_verify_cap().to_string()
6096+                assert isinstance(new_v, str)
6097+                self.expected_manifest.append( ((u"subdir",), subdir.get_uri()) )
6098+                self.expected_verifycaps.add(new_v)
6099+                si = subdir.get_storage_index()
6100+                self.expected_storage_indexes.add(base32.b2a(si))
6101+            d.addCallback(_created)
6102+
6103+            d.addCallback(lambda res:
6104+                          self.shouldFail(ExistingChildError, "mkdir-no",
6105+                                          "child 'subdir' already exists",
6106+                                          n.create_subdirectory, u"subdir",
6107+                                          overwrite=False))
6108+
6109+            d.addCallback(lambda res: n.list())
6110+            d.addCallback(lambda children:
6111+                          self.failUnlessReallyEqual(set(children.keys()),
6112+                                                     set([u"child", u"subdir"])))
6113+
6114+            d.addCallback(lambda res: n.start_deep_stats().when_done())
6115+            def _check_deepstats(stats):
6116+                self.failUnless(isinstance(stats, dict))
6117+                expected = {"count-immutable-files": 0,
6118+                            "count-mutable-files": 1,
6119+                            "count-literal-files": 0,
6120+                            "count-files": 1,
6121+                            "count-directories": 2,
6122+                            "size-immutable-files": 0,
6123+                            "size-literal-files": 0,
6124+                            #"size-directories": 616, # varies
6125+                            #"largest-directory": 616,
6126+                            "largest-directory-children": 2,
6127+                            "largest-immutable-file": 0,
6128+                            }
6129+                for k,v in expected.iteritems():
6130+                    self.failUnlessReallyEqual(stats[k], v,
6131+                                               "stats[%s] was %s, not %s" %
6132+                                               (k, stats[k], v))
6133+                self.failUnless(stats["size-directories"] > 500,
6134+                                stats["size-directories"])
6135+                self.failUnless(stats["largest-directory"] > 500,
6136+                                stats["largest-directory"])
6137+                self.failUnlessReallyEqual(stats["size-files-histogram"], [])
6138+            d.addCallback(_check_deepstats)
6139+
6140+            d.addCallback(lambda res: n.build_manifest().when_done())
6141+            def _check_manifest(res):
6142+                manifest = res["manifest"]
6143+                self.failUnlessReallyEqual(sorted(manifest),
6144+                                           sorted(self.expected_manifest))
6145+                stats = res["stats"]
6146+                _check_deepstats(stats)
6147+                self.failUnlessReallyEqual(self.expected_verifycaps,
6148+                                           res["verifycaps"])
6149+                self.failUnlessReallyEqual(self.expected_storage_indexes,
6150+                                           res["storage-index"])
6151+            d.addCallback(_check_manifest)
6152+
6153+            def _add_subsubdir(res):
6154+                return self.subdir.create_subdirectory(u"subsubdir")
6155+            d.addCallback(_add_subsubdir)
6156+            # /
6157+            # /child = mutable
6158+            # /subdir = directory
6159+            # /subdir/subsubdir = directory
6160+            d.addCallback(lambda res: n.get_child_at_path(u"subdir/subsubdir"))
6161+            d.addCallback(lambda subsubdir:
6162+                          self.failUnless(isinstance(subsubdir,
6163+                                                     dirnode.DirectoryNode)))
6164+            d.addCallback(lambda res: n.get_child_at_path(u""))
6165+            d.addCallback(lambda res: self.failUnlessReallyEqual(res.get_uri(),
6166+                                                                 n.get_uri()))
6167+
6168+            d.addCallback(lambda res: n.get_metadata_for(u"child"))
6169+            d.addCallback(lambda metadata:
6170+                          self.failUnlessEqual(set(metadata.keys()),
6171+                                               set(["tahoe"])))
6172+
6173+            d.addCallback(lambda res:
6174+                          self.shouldFail(NoSuchChildError, "gcamap-no",
6175+                                          "nope",
6176+                                          n.get_child_and_metadata_at_path,
6177+                                          u"subdir/nope"))
6178+            d.addCallback(lambda res:
6179+                          n.get_child_and_metadata_at_path(u""))
6180+            def _check_child_and_metadata1(res):
6181+                child, metadata = res
6182+                self.failUnless(isinstance(child, dirnode.DirectoryNode))
6183+                # edge-metadata needs at least one path segment
6184+                self.failUnlessEqual(set(metadata.keys()), set([]))
6185+            d.addCallback(_check_child_and_metadata1)
6186+            d.addCallback(lambda res:
6187+                          n.get_child_and_metadata_at_path(u"child"))
6188+
6189+            def _check_child_and_metadata2(res):
6190+                child, metadata = res
6191+                self.failUnlessReallyEqual(child.get_uri(),
6192+                                           fake_file_uri)
6193+                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
6194+            d.addCallback(_check_child_and_metadata2)
6195+
6196+            d.addCallback(lambda res:
6197+                          n.get_child_and_metadata_at_path(u"subdir/subsubdir"))
6198+            def _check_child_and_metadata3(res):
6199+                child, metadata = res
6200+                self.failUnless(isinstance(child, dirnode.DirectoryNode))
6201+                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
6202+            d.addCallback(_check_child_and_metadata3)
6203+
6204+            # set_uri + metadata
6205+            # it should be possible to add a child without any metadata
6206+            d.addCallback(lambda res: n.set_uri(u"c2",
6207+                                                fake_file_uri, fake_file_uri,
6208+                                                {}))
6209+            d.addCallback(lambda res: n.get_metadata_for(u"c2"))
6210+            d.addCallback(lambda metadata:
6211+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6212+
6213+            # You can't override the link timestamps.
6214+            d.addCallback(lambda res: n.set_uri(u"c2",
6215+                                                fake_file_uri, fake_file_uri,
6216+                                                { 'tahoe': {'linkcrtime': "bogus"}}))
6217+            d.addCallback(lambda res: n.get_metadata_for(u"c2"))
6218+            def _has_good_linkcrtime(metadata):
6219+                self.failUnless(metadata.has_key('tahoe'))
6220+                self.failUnless(metadata['tahoe'].has_key('linkcrtime'))
6221+                self.failIfEqual(metadata['tahoe']['linkcrtime'], 'bogus')
6222+            d.addCallback(_has_good_linkcrtime)
6223+
6224+            # if we don't set any defaults, the child should get timestamps
6225+            d.addCallback(lambda res: n.set_uri(u"c3",
6226+                                                fake_file_uri, fake_file_uri))
6227+            d.addCallback(lambda res: n.get_metadata_for(u"c3"))
6228+            d.addCallback(lambda metadata:
6229+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6230+
6231+            # we can also add specific metadata at set_uri() time
6232+            d.addCallback(lambda res: n.set_uri(u"c4",
6233+                                                fake_file_uri, fake_file_uri,
6234+                                                {"key": "value"}))
6235+            d.addCallback(lambda res: n.get_metadata_for(u"c4"))
6236+            d.addCallback(lambda metadata:
6237+                              self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
6238+                                              (metadata['key'] == "value"), metadata))
6239+
6240+            d.addCallback(lambda res: n.delete(u"c2"))
6241+            d.addCallback(lambda res: n.delete(u"c3"))
6242+            d.addCallback(lambda res: n.delete(u"c4"))
6243+
6244+            # set_node + metadata
6245+            # it should be possible to add a child without any metadata except for timestamps
6246+            d.addCallback(lambda res: n.set_node(u"d2", n, {}))
6247+            d.addCallback(lambda res: c.create_dirnode())
6248+            d.addCallback(lambda n2:
6249+                          self.shouldFail(ExistingChildError, "set_node-no",
6250+                                          "child 'd2' already exists",
6251+                                          n.set_node, u"d2", n2,
6252+                                          overwrite=False))
6253+            d.addCallback(lambda res: n.get_metadata_for(u"d2"))
6254+            d.addCallback(lambda metadata:
6255+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6256+
6257+            # if we don't set any defaults, the child should get timestamps
6258+            d.addCallback(lambda res: n.set_node(u"d3", n))
6259+            d.addCallback(lambda res: n.get_metadata_for(u"d3"))
6260+            d.addCallback(lambda metadata:
6261+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6262+
6263+            # we can also add specific metadata at set_node() time
6264+            d.addCallback(lambda res: n.set_node(u"d4", n,
6265+                                                {"key": "value"}))
6266+            d.addCallback(lambda res: n.get_metadata_for(u"d4"))
6267+            d.addCallback(lambda metadata:
6268+                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
6269+                                          (metadata["key"] == "value"), metadata))
6270+
6271+            d.addCallback(lambda res: n.delete(u"d2"))
6272+            d.addCallback(lambda res: n.delete(u"d3"))
6273+            d.addCallback(lambda res: n.delete(u"d4"))
6274+
6275+            # metadata through set_children()
6276+            d.addCallback(lambda res:
6277+                          n.set_children({
6278+                              u"e1": (fake_file_uri, fake_file_uri),
6279+                              u"e2": (fake_file_uri, fake_file_uri, {}),
6280+                              u"e3": (fake_file_uri, fake_file_uri,
6281+                                      {"key": "value"}),
6282+                              }))
6283+            d.addCallback(lambda n2: self.failUnlessIdentical(n2, n))
6284+            d.addCallback(lambda res:
6285+                          self.shouldFail(ExistingChildError, "set_children-no",
6286+                                          "child 'e1' already exists",
6287+                                          n.set_children,
6288+                                          { u"e1": (other_file_uri,
6289+                                                    other_file_uri),
6290+                                            u"new": (other_file_uri,
6291+                                                     other_file_uri),
6292+                                            },
6293+                                          overwrite=False))
6294+            # and 'new' should not have been created
6295+            d.addCallback(lambda res: n.list())
6296+            d.addCallback(lambda children: self.failIf(u"new" in children))
6297+            d.addCallback(lambda res: n.get_metadata_for(u"e1"))
6298+            d.addCallback(lambda metadata:
6299+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6300+            d.addCallback(lambda res: n.get_metadata_for(u"e2"))
6301+            d.addCallback(lambda metadata:
6302+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6303+            d.addCallback(lambda res: n.get_metadata_for(u"e3"))
6304+            d.addCallback(lambda metadata:
6305+                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
6306+                                          (metadata["key"] == "value"), metadata))
6307+
6308+            d.addCallback(lambda res: n.delete(u"e1"))
6309+            d.addCallback(lambda res: n.delete(u"e2"))
6310+            d.addCallback(lambda res: n.delete(u"e3"))
6311+
6312+            # metadata through set_nodes()
6313+            d.addCallback(lambda res:
6314+                          n.set_nodes({ u"f1": (n, None),
6315+                                        u"f2": (n, {}),
6316+                                        u"f3": (n, {"key": "value"}),
6317+                                        }))
6318+            d.addCallback(lambda n2: self.failUnlessIdentical(n2, n))
6319+            d.addCallback(lambda res:
6320+                          self.shouldFail(ExistingChildError, "set_nodes-no",
6321+                                          "child 'f1' already exists",
6322+                                          n.set_nodes, { u"f1": (n, None),
6323+                                                         u"new": (n, None), },
6324+                                          overwrite=False))
6325+            # and 'new' should not have been created
6326+            d.addCallback(lambda res: n.list())
6327+            d.addCallback(lambda children: self.failIf(u"new" in children))
6328+            d.addCallback(lambda res: n.get_metadata_for(u"f1"))
6329+            d.addCallback(lambda metadata:
6330+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6331+            d.addCallback(lambda res: n.get_metadata_for(u"f2"))
6332+            d.addCallback(lambda metadata:
6333+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6334+            d.addCallback(lambda res: n.get_metadata_for(u"f3"))
6335+            d.addCallback(lambda metadata:
6336+                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
6337+                                          (metadata["key"] == "value"), metadata))
6338+
6339+            d.addCallback(lambda res: n.delete(u"f1"))
6340+            d.addCallback(lambda res: n.delete(u"f2"))
6341+            d.addCallback(lambda res: n.delete(u"f3"))
6342+
6343+
6344+            d.addCallback(lambda res:
6345+                          n.set_metadata_for(u"child",
6346+                                             {"tags": ["web2.0-compatible"], "tahoe": {"bad": "mojo"}}))
6347+            d.addCallback(lambda n1: n1.get_metadata_for(u"child"))
6348+            d.addCallback(lambda metadata:
6349+                          self.failUnless((set(metadata.keys()) == set(["tags", "tahoe"])) and
6350+                                          metadata["tags"] == ["web2.0-compatible"] and
6351+                                          "bad" not in metadata["tahoe"], metadata))
6352+
6353+            d.addCallback(lambda res:
6354+                          self.shouldFail(NoSuchChildError, "set_metadata_for-nosuch", "",
6355+                                          n.set_metadata_for, u"nosuch", {}))
6356+
6357+
6358+            def _start(res):
6359+                self._start_timestamp = time.time()
6360+            d.addCallback(_start)
6361+            # simplejson-1.7.1 (as shipped on Ubuntu 'gutsy') rounds all
6362+            # floats to hundredths (it uses str(num) instead of repr(num)).
6363+            # simplejson-1.7.3 does not have this bug. To prevent this bug
6364+            # from causing the test to fail, stall for more than a few
6365+            # hundredths of a second.
6366+            d.addCallback(self.stall, 0.1)
6367+            d.addCallback(lambda res: n.add_file(u"timestamps",
6368+                                                 upload.Data("stamp me", convergence="some convergence string")))
6369+            d.addCallback(self.stall, 0.1)
6370+            def _stop(res):
6371+                self._stop_timestamp = time.time()
6372+            d.addCallback(_stop)
6373+
6374+            d.addCallback(lambda res: n.get_metadata_for(u"timestamps"))
6375+            def _check_timestamp1(metadata):
6376+                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
6377+                tahoe_md = metadata["tahoe"]
6378+                self.failUnlessEqual(set(tahoe_md.keys()), set(["linkcrtime", "linkmotime"]))
6379+
6380+                self.failUnlessGreaterOrEqualThan(tahoe_md["linkcrtime"],
6381+                                                  self._start_timestamp)
6382+                self.failUnlessGreaterOrEqualThan(self._stop_timestamp,
6383+                                                  tahoe_md["linkcrtime"])
6384+                self.failUnlessGreaterOrEqualThan(tahoe_md["linkmotime"],
6385+                                                  self._start_timestamp)
6386+                self.failUnlessGreaterOrEqualThan(self._stop_timestamp,
6387+                                                  tahoe_md["linkmotime"])
6388+                # Our current timestamp rules say that replacing an existing
6389+                # child should preserve the 'linkcrtime' but update the
6390+                # 'linkmotime'
6391+                self._old_linkcrtime = tahoe_md["linkcrtime"]
6392+                self._old_linkmotime = tahoe_md["linkmotime"]
6393+            d.addCallback(_check_timestamp1)
6394+            d.addCallback(self.stall, 2.0) # accommodate low-res timestamps
6395+            d.addCallback(lambda res: n.set_node(u"timestamps", n))
6396+            d.addCallback(lambda res: n.get_metadata_for(u"timestamps"))
6397+            def _check_timestamp2(metadata):
6398+                self.failUnlessIn("tahoe", metadata)
6399+                tahoe_md = metadata["tahoe"]
6400+                self.failUnlessEqual(set(tahoe_md.keys()), set(["linkcrtime", "linkmotime"]))
6401+
6402+                self.failUnlessReallyEqual(tahoe_md["linkcrtime"], self._old_linkcrtime)
6403+                self.failUnlessGreaterThan(tahoe_md["linkmotime"], self._old_linkmotime)
6404+                return n.delete(u"timestamps")
6405+            d.addCallback(_check_timestamp2)
6406+
6407+            d.addCallback(lambda res: n.delete(u"subdir"))
6408+            d.addCallback(lambda old_child:
6409+                          self.failUnlessReallyEqual(old_child.get_uri(),
6410+                                                     self.subdir.get_uri()))
6411+
6412+            d.addCallback(lambda res: n.list())
6413+            d.addCallback(lambda children:
6414+                          self.failUnlessReallyEqual(set(children.keys()),
6415+                                                     set([u"child"])))
6416+
6417+            uploadable1 = upload.Data("some data", convergence="converge")
6418+            d.addCallback(lambda res: n.add_file(u"newfile", uploadable1))
6419+            d.addCallback(lambda newnode:
6420+                          self.failUnless(IImmutableFileNode.providedBy(newnode)))
6421+            uploadable2 = upload.Data("some data", convergence="stuff")
6422+            d.addCallback(lambda res:
6423+                          self.shouldFail(ExistingChildError, "add_file-no",
6424+                                          "child 'newfile' already exists",
6425+                                          n.add_file, u"newfile",
6426+                                          uploadable2,
6427+                                          overwrite=False))
6428+            d.addCallback(lambda res: n.list())
6429+            d.addCallback(lambda children:
6430+                          self.failUnlessReallyEqual(set(children.keys()),
6431+                                                     set([u"child", u"newfile"])))
6432+            d.addCallback(lambda res: n.get_metadata_for(u"newfile"))
6433+            d.addCallback(lambda metadata:
6434+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6435+
6436+            uploadable3 = upload.Data("some data", convergence="converge")
6437+            d.addCallback(lambda res: n.add_file(u"newfile-metadata",
6438+                                                 uploadable3,
6439+                                                 {"key": "value"}))
6440+            d.addCallback(lambda newnode:
6441+                          self.failUnless(IImmutableFileNode.providedBy(newnode)))
6442+            d.addCallback(lambda res: n.get_metadata_for(u"newfile-metadata"))
6443+            d.addCallback(lambda metadata:
6444+                              self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
6445+                                              (metadata['key'] == "value"), metadata))
6446+            d.addCallback(lambda res: n.delete(u"newfile-metadata"))
6447+
6448+            d.addCallback(lambda res: n.create_subdirectory(u"subdir2"))
6449+            def _created2(subdir2):
6450+                self.subdir2 = subdir2
6451+                # put something in the way, to make sure it gets overwritten
6452+                return subdir2.add_file(u"child", upload.Data("overwrite me",
6453+                                                              "converge"))
6454+            d.addCallback(_created2)
6455+
6456+            d.addCallback(lambda res:
6457+                          n.move_child_to(u"child", self.subdir2))
6458+            d.addCallback(lambda res: n.list())
6459+            d.addCallback(lambda children:
6460+                          self.failUnlessReallyEqual(set(children.keys()),
6461+                                                     set([u"newfile", u"subdir2"])))
6462+            d.addCallback(lambda res: self.subdir2.list())
6463+            d.addCallback(lambda children:
6464+                          self.failUnlessReallyEqual(set(children.keys()),
6465+                                                     set([u"child"])))
6466+            d.addCallback(lambda res: self.subdir2.get(u"child"))
6467+            d.addCallback(lambda child:
6468+                          self.failUnlessReallyEqual(child.get_uri(),
6469+                                                     fake_file_uri))
6470+
6471+            # move it back, using new_child_name=
6472+            d.addCallback(lambda res:
6473+                          self.subdir2.move_child_to(u"child", n, u"newchild"))
6474+            d.addCallback(lambda res: n.list())
6475+            d.addCallback(lambda children:
6476+                          self.failUnlessReallyEqual(set(children.keys()),
6477+                                                     set([u"newchild", u"newfile",
6478+                                                          u"subdir2"])))
6479+            d.addCallback(lambda res: self.subdir2.list())
6480+            d.addCallback(lambda children:
6481+                          self.failUnlessReallyEqual(set(children.keys()), set([])))
6482+
6483+            # now make sure that we honor overwrite=False
6484+            d.addCallback(lambda res:
6485+                          self.subdir2.set_uri(u"newchild",
6486+                                               other_file_uri, other_file_uri))
6487+
6488+            d.addCallback(lambda res:
6489+                          self.shouldFail(ExistingChildError, "move_child_to-no",
6490+                                          "child 'newchild' already exists",
6491+                                          n.move_child_to, u"newchild",
6492+                                          self.subdir2,
6493+                                          overwrite=False))
6494+            d.addCallback(lambda res: self.subdir2.get(u"newchild"))
6495+            d.addCallback(lambda child:
6496+                          self.failUnlessReallyEqual(child.get_uri(),
6497+                                                     other_file_uri))
6498+
6499+
6500+            # Setting the no-write field should diminish a mutable cap to read-only
6501+            # (for both files and directories).
6502+
6503+            d.addCallback(lambda ign: n.set_uri(u"mutable", other_file_uri, other_file_uri))
6504+            d.addCallback(lambda ign: n.get(u"mutable"))
6505+            d.addCallback(lambda mutable: self.failIf(mutable.is_readonly(), mutable))
6506+            d.addCallback(lambda ign: n.set_metadata_for(u"mutable", {"no-write": True}))
6507+            d.addCallback(lambda ign: n.get(u"mutable"))
6508+            d.addCallback(lambda mutable: self.failUnless(mutable.is_readonly(), mutable))
6509+            d.addCallback(lambda ign: n.set_metadata_for(u"mutable", {"no-write": True}))
6510+            d.addCallback(lambda ign: n.get(u"mutable"))
6511+            d.addCallback(lambda mutable: self.failUnless(mutable.is_readonly(), mutable))
6512+
6513+            d.addCallback(lambda ign: n.get(u"subdir2"))
6514+            d.addCallback(lambda subdir2: self.failIf(subdir2.is_readonly()))
6515+            d.addCallback(lambda ign: n.set_metadata_for(u"subdir2", {"no-write": True}))
6516+            d.addCallback(lambda ign: n.get(u"subdir2"))
6517+            d.addCallback(lambda subdir2: self.failUnless(subdir2.is_readonly(), subdir2))
6518+
6519+            d.addCallback(lambda ign: n.set_uri(u"mutable_ro", other_file_uri, other_file_uri,
6520+                                                metadata={"no-write": True}))
6521+            d.addCallback(lambda ign: n.get(u"mutable_ro"))
6522+            d.addCallback(lambda mutable_ro: self.failUnless(mutable_ro.is_readonly(), mutable_ro))
6523+
6524+            d.addCallback(lambda ign: n.create_subdirectory(u"subdir_ro", metadata={"no-write": True}))
6525+            d.addCallback(lambda ign: n.get(u"subdir_ro"))
6526+            d.addCallback(lambda subdir_ro: self.failUnless(subdir_ro.is_readonly(), subdir_ro))
6527+
6528+            return d
6529+
6530+        d.addCallback(_then)
6531+
6532+        d.addErrback(self.explain_error)
6533         return d
6534 
6535hunk ./src/allmydata/test/test_dirnode.py 581
6536-    def test_initial_children(self):
6537-        self.basedir = "dirnode/Dirnode/test_initial_children"
6538-        self.set_up_grid()
6539+
6540+    def _do_initial_children_test(self, mdmf=False):
6541         c = self.g.clients[0]
6542         nm = c.nodemaker
6543 
6544hunk ./src/allmydata/test/test_dirnode.py 597
6545                 u"empty_litdir": (nm.create_from_cap(empty_litdir_uri), {}),
6546                 u"tiny_litdir": (nm.create_from_cap(tiny_litdir_uri), {}),
6547                 }
6548-        d = c.create_dirnode(kids)
6549-       
6550+        d = None
6551+        if mdmf:
6552+            d = c.create_dirnode(kids, version=MDMF_VERSION)
6553+        else:
6554+            d = c.create_dirnode(kids)
6555         def _created(dn):
6556             self.failUnless(isinstance(dn, dirnode.DirectoryNode))
6557hunk ./src/allmydata/test/test_dirnode.py 604
6558+            backing_node = dn._node
6559+            if mdmf:
6560+                self.failUnlessEqual(backing_node.get_version(),
6561+                                     MDMF_VERSION)
6562+            else:
6563+                self.failUnlessEqual(backing_node.get_version(),
6564+                                     SDMF_VERSION)
6565             self.failUnless(dn.is_mutable())
6566             self.failIf(dn.is_readonly())
6567             self.failIf(dn.is_unknown())
6568hunk ./src/allmydata/test/test_dirnode.py 619
6569             rep = str(dn)
6570             self.failUnless("RW-MUT" in rep)
6571             return dn.list()
6572-        d.addCallback(_created)
6573-       
6574+
6575         def _check_kids(children):
6576             self.failUnlessReallyEqual(set(children.keys()),
6577                                        set([one_nfc, u"two", u"mut", u"fut", u"fro",
6578hunk ./src/allmydata/test/test_dirnode.py 623
6579-                                            u"fut-unic", u"fro-unic", u"empty_litdir", u"tiny_litdir"]))
6580+                                        u"fut-unic", u"fro-unic", u"empty_litdir", u"tiny_litdir"]))
6581             one_node, one_metadata = children[one_nfc]
6582             two_node, two_metadata = children[u"two"]
6583             mut_node, mut_metadata = children[u"mut"]
6584hunk ./src/allmydata/test/test_dirnode.py 683
6585             d2.addCallback(lambda children: children[u"short"][0].read(MemAccum()))
6586             d2.addCallback(lambda accum: self.failUnlessReallyEqual(accum.data, "The end."))
6587             return d2
6588-
6589+        d.addCallback(_created)
6590         d.addCallback(_check_kids)
6591 
6592         d.addCallback(lambda ign: nm.create_new_mutable_directory(kids))
6593hunk ./src/allmydata/test/test_dirnode.py 707
6594                                       bad_kids2))
6595         return d
6596 
6597+    def _do_basic_test(self, mdmf=False):
6598+        c = self.g.clients[0]
6599+        d = None
6600+        if mdmf:
6601+            d = c.create_dirnode(version=MDMF_VERSION)
6602+        else:
6603+            d = c.create_dirnode()
6604+        def _done(res):
6605+            self.failUnless(isinstance(res, dirnode.DirectoryNode))
6606+            self.failUnless(res.is_mutable())
6607+            self.failIf(res.is_readonly())
6608+            self.failIf(res.is_unknown())
6609+            self.failIf(res.is_allowed_in_immutable_directory())
6610+            res.raise_error()
6611+            rep = str(res)
6612+            self.failUnless("RW-MUT" in rep)
6613+        d.addCallback(_done)
6614+        return d
6615+
6616+    def test_basic(self):
6617+        self.basedir = "dirnode/Dirnode/test_basic"
6618+        self.set_up_grid()
6619+        return self._do_basic_test()
6620+
6621+    def test_basic_mdmf(self):
6622+        self.basedir = "dirnode/Dirnode/test_basic_mdmf"
6623+        self.set_up_grid()
6624+        return self._do_basic_test(mdmf=True)
6625+
6626+    def test_initial_children(self):
6627+        self.basedir = "dirnode/Dirnode/test_initial_children"
6628+        self.set_up_grid()
6629+        return self._do_initial_children_test()
6630+
6631     def test_immutable(self):
6632         self.basedir = "dirnode/Dirnode/test_immutable"
6633         self.set_up_grid()
6634hunk ./src/allmydata/test/test_dirnode.py 1025
6635         d.addCallback(_done)
6636         return d
6637 
6638-    def _test_deepcheck_create(self):
6639+    def _test_deepcheck_create(self, version=SDMF_VERSION):
6640         # create a small tree with a loop, and some non-directories
6641         #  root/
6642         #  root/subdir/
6643hunk ./src/allmydata/test/test_dirnode.py 1033
6644         #  root/subdir/link -> root
6645         #  root/rodir
6646         c = self.g.clients[0]
6647-        d = c.create_dirnode()
6648+        d = c.create_dirnode(version=version)
6649         def _created_root(rootnode):
6650             self._rootnode = rootnode
6651hunk ./src/allmydata/test/test_dirnode.py 1036
6652+            self.failUnlessEqual(rootnode._node.get_version(), version)
6653             return rootnode.create_subdirectory(u"subdir")
6654         d.addCallback(_created_root)
6655         def _created_subdir(subdir):
6656hunk ./src/allmydata/test/test_dirnode.py 1075
6657         d.addCallback(_check_results)
6658         return d
6659 
6660+    def test_deepcheck_mdmf(self):
6661+        self.basedir = "dirnode/Dirnode/test_deepcheck_mdmf"
6662+        self.set_up_grid()
6663+        d = self._test_deepcheck_create(MDMF_VERSION)
6664+        d.addCallback(lambda rootnode: rootnode.start_deep_check().when_done())
6665+        def _check_results(r):
6666+            self.failUnless(IDeepCheckResults.providedBy(r))
6667+            c = r.get_counters()
6668+            self.failUnlessReallyEqual(c,
6669+                                       {"count-objects-checked": 4,
6670+                                        "count-objects-healthy": 4,
6671+                                        "count-objects-unhealthy": 0,
6672+                                        "count-objects-unrecoverable": 0,
6673+                                        "count-corrupt-shares": 0,
6674+                                        })
6675+            self.failIf(r.get_corrupt_shares())
6676+            self.failUnlessReallyEqual(len(r.get_all_results()), 4)
6677+        d.addCallback(_check_results)
6678+        return d
6679+
6680     def test_deepcheck_and_repair(self):
6681         self.basedir = "dirnode/Dirnode/test_deepcheck_and_repair"
6682         self.set_up_grid()
6683hunk ./src/allmydata/test/test_dirnode.py 1124
6684         d.addCallback(_check_results)
6685         return d
6686 
6687+    def test_deepcheck_and_repair_mdmf(self):
6688+        self.basedir = "dirnode/Dirnode/test_deepcheck_and_repair_mdmf"
6689+        self.set_up_grid()
6690+        d = self._test_deepcheck_create(version=MDMF_VERSION)
6691+        d.addCallback(lambda rootnode:
6692+                      rootnode.start_deep_check_and_repair().when_done())
6693+        def _check_results(r):
6694+            self.failUnless(IDeepCheckAndRepairResults.providedBy(r))
6695+            c = r.get_counters()
6696+            self.failUnlessReallyEqual(c,
6697+                                       {"count-objects-checked": 4,
6698+                                        "count-objects-healthy-pre-repair": 4,
6699+                                        "count-objects-unhealthy-pre-repair": 0,
6700+                                        "count-objects-unrecoverable-pre-repair": 0,
6701+                                        "count-corrupt-shares-pre-repair": 0,
6702+                                        "count-objects-healthy-post-repair": 4,
6703+                                        "count-objects-unhealthy-post-repair": 0,
6704+                                        "count-objects-unrecoverable-post-repair": 0,
6705+                                        "count-corrupt-shares-post-repair": 0,
6706+                                        "count-repairs-attempted": 0,
6707+                                        "count-repairs-successful": 0,
6708+                                        "count-repairs-unsuccessful": 0,
6709+                                        })
6710+            self.failIf(r.get_corrupt_shares())
6711+            self.failIf(r.get_remaining_corrupt_shares())
6712+            self.failUnlessReallyEqual(len(r.get_all_results()), 4)
6713+        d.addCallback(_check_results)
6714+        return d
6715+
6716     def _mark_file_bad(self, rootnode):
6717         self.delete_shares_numbered(rootnode.get_uri(), [0])
6718         return rootnode
6719hunk ./src/allmydata/test/test_dirnode.py 1176
6720         d.addCallback(_check_results)
6721         return d
6722 
6723-    def test_readonly(self):
6724-        self.basedir = "dirnode/Dirnode/test_readonly"
6725+    def test_deepcheck_problems_mdmf(self):
6726+        self.basedir = "dirnode/Dirnode/test_deepcheck_problems_mdmf"
6727         self.set_up_grid()
6728hunk ./src/allmydata/test/test_dirnode.py 1179
6729+        d = self._test_deepcheck_create(version=MDMF_VERSION)
6730+        d.addCallback(lambda rootnode: self._mark_file_bad(rootnode))
6731+        d.addCallback(lambda rootnode: rootnode.start_deep_check().when_done())
6732+        def _check_results(r):
6733+            c = r.get_counters()
6734+            self.failUnlessReallyEqual(c,
6735+                                       {"count-objects-checked": 4,
6736+                                        "count-objects-healthy": 3,
6737+                                        "count-objects-unhealthy": 1,
6738+                                        "count-objects-unrecoverable": 0,
6739+                                        "count-corrupt-shares": 0,
6740+                                        })
6741+            #self.failUnlessReallyEqual(len(r.get_problems()), 1) # TODO
6742+        d.addCallback(_check_results)
6743+        return d
6744+
6745+    def _do_readonly_test(self, version=SDMF_VERSION):
6746         c = self.g.clients[0]
6747         nm = c.nodemaker
6748         filecap = make_chk_file_uri(1234)
6749hunk ./src/allmydata/test/test_dirnode.py 1202
6750         filenode = nm.create_from_cap(filecap)
6751         uploadable = upload.Data("some data", convergence="some convergence string")
6752 
6753-        d = c.create_dirnode()
6754+        d = c.create_dirnode(version=version)
6755         def _created(rw_dn):
6756hunk ./src/allmydata/test/test_dirnode.py 1204
6757+            backing_node = rw_dn._node
6758+            self.failUnlessEqual(backing_node.get_version(), version)
6759             d2 = rw_dn.set_uri(u"child", filecap, filecap)
6760             d2.addCallback(lambda res: rw_dn)
6761             return d2
6762hunk ./src/allmydata/test/test_dirnode.py 1245
6763         d.addCallback(_listed)
6764         return d
6765 
6766+    def test_readonly(self):
6767+        self.basedir = "dirnode/Dirnode/test_readonly"
6768+        self.set_up_grid()
6769+        return self._do_readonly_test()
6770+
6771+    def test_readonly_mdmf(self):
6772+        self.basedir = "dirnode/Dirnode/test_readonly_mdmf"
6773+        self.set_up_grid()
6774+        return self._do_readonly_test(version=MDMF_VERSION)
6775+
6776     def failUnlessGreaterThan(self, a, b):
6777         self.failUnless(a > b, "%r should be > %r" % (a, b))
6778 
6779hunk ./src/allmydata/test/test_dirnode.py 1264
6780     def test_create(self):
6781         self.basedir = "dirnode/Dirnode/test_create"
6782         self.set_up_grid()
6783-        c = self.g.clients[0]
6784-
6785-        self.expected_manifest = []
6786-        self.expected_verifycaps = set()
6787-        self.expected_storage_indexes = set()
6788-
6789-        d = c.create_dirnode()
6790-        def _then(n):
6791-            # /
6792-            self.rootnode = n
6793-            self.failUnless(n.is_mutable())
6794-            u = n.get_uri()
6795-            self.failUnless(u)
6796-            self.failUnless(u.startswith("URI:DIR2:"), u)
6797-            u_ro = n.get_readonly_uri()
6798-            self.failUnless(u_ro.startswith("URI:DIR2-RO:"), u_ro)
6799-            u_v = n.get_verify_cap().to_string()
6800-            self.failUnless(u_v.startswith("URI:DIR2-Verifier:"), u_v)
6801-            u_r = n.get_repair_cap().to_string()
6802-            self.failUnlessReallyEqual(u_r, u)
6803-            self.expected_manifest.append( ((), u) )
6804-            self.expected_verifycaps.add(u_v)
6805-            si = n.get_storage_index()
6806-            self.expected_storage_indexes.add(base32.b2a(si))
6807-            expected_si = n._uri.get_storage_index()
6808-            self.failUnlessReallyEqual(si, expected_si)
6809-
6810-            d = n.list()
6811-            d.addCallback(lambda res: self.failUnlessEqual(res, {}))
6812-            d.addCallback(lambda res: n.has_child(u"missing"))
6813-            d.addCallback(lambda res: self.failIf(res))
6814-
6815-            fake_file_uri = make_mutable_file_uri()
6816-            other_file_uri = make_mutable_file_uri()
6817-            m = c.nodemaker.create_from_cap(fake_file_uri)
6818-            ffu_v = m.get_verify_cap().to_string()
6819-            self.expected_manifest.append( ((u"child",) , m.get_uri()) )
6820-            self.expected_verifycaps.add(ffu_v)
6821-            self.expected_storage_indexes.add(base32.b2a(m.get_storage_index()))
6822-            d.addCallback(lambda res: n.set_uri(u"child",
6823-                                                fake_file_uri, fake_file_uri))
6824-            d.addCallback(lambda res:
6825-                          self.shouldFail(ExistingChildError, "set_uri-no",
6826-                                          "child 'child' already exists",
6827-                                          n.set_uri, u"child",
6828-                                          other_file_uri, other_file_uri,
6829-                                          overwrite=False))
6830-            # /
6831-            # /child = mutable
6832-
6833-            d.addCallback(lambda res: n.create_subdirectory(u"subdir"))
6834-
6835-            # /
6836-            # /child = mutable
6837-            # /subdir = directory
6838-            def _created(subdir):
6839-                self.failUnless(isinstance(subdir, dirnode.DirectoryNode))
6840-                self.subdir = subdir
6841-                new_v = subdir.get_verify_cap().to_string()
6842-                assert isinstance(new_v, str)
6843-                self.expected_manifest.append( ((u"subdir",), subdir.get_uri()) )
6844-                self.expected_verifycaps.add(new_v)
6845-                si = subdir.get_storage_index()
6846-                self.expected_storage_indexes.add(base32.b2a(si))
6847-            d.addCallback(_created)
6848-
6849-            d.addCallback(lambda res:
6850-                          self.shouldFail(ExistingChildError, "mkdir-no",
6851-                                          "child 'subdir' already exists",
6852-                                          n.create_subdirectory, u"subdir",
6853-                                          overwrite=False))
6854-
6855-            d.addCallback(lambda res: n.list())
6856-            d.addCallback(lambda children:
6857-                          self.failUnlessReallyEqual(set(children.keys()),
6858-                                                     set([u"child", u"subdir"])))
6859-
6860-            d.addCallback(lambda res: n.start_deep_stats().when_done())
6861-            def _check_deepstats(stats):
6862-                self.failUnless(isinstance(stats, dict))
6863-                expected = {"count-immutable-files": 0,
6864-                            "count-mutable-files": 1,
6865-                            "count-literal-files": 0,
6866-                            "count-files": 1,
6867-                            "count-directories": 2,
6868-                            "size-immutable-files": 0,
6869-                            "size-literal-files": 0,
6870-                            #"size-directories": 616, # varies
6871-                            #"largest-directory": 616,
6872-                            "largest-directory-children": 2,
6873-                            "largest-immutable-file": 0,
6874-                            }
6875-                for k,v in expected.iteritems():
6876-                    self.failUnlessReallyEqual(stats[k], v,
6877-                                               "stats[%s] was %s, not %s" %
6878-                                               (k, stats[k], v))
6879-                self.failUnless(stats["size-directories"] > 500,
6880-                                stats["size-directories"])
6881-                self.failUnless(stats["largest-directory"] > 500,
6882-                                stats["largest-directory"])
6883-                self.failUnlessReallyEqual(stats["size-files-histogram"], [])
6884-            d.addCallback(_check_deepstats)
6885-
6886-            d.addCallback(lambda res: n.build_manifest().when_done())
6887-            def _check_manifest(res):
6888-                manifest = res["manifest"]
6889-                self.failUnlessReallyEqual(sorted(manifest),
6890-                                           sorted(self.expected_manifest))
6891-                stats = res["stats"]
6892-                _check_deepstats(stats)
6893-                self.failUnlessReallyEqual(self.expected_verifycaps,
6894-                                           res["verifycaps"])
6895-                self.failUnlessReallyEqual(self.expected_storage_indexes,
6896-                                           res["storage-index"])
6897-            d.addCallback(_check_manifest)
6898-
6899-            def _add_subsubdir(res):
6900-                return self.subdir.create_subdirectory(u"subsubdir")
6901-            d.addCallback(_add_subsubdir)
6902-            # /
6903-            # /child = mutable
6904-            # /subdir = directory
6905-            # /subdir/subsubdir = directory
6906-            d.addCallback(lambda res: n.get_child_at_path(u"subdir/subsubdir"))
6907-            d.addCallback(lambda subsubdir:
6908-                          self.failUnless(isinstance(subsubdir,
6909-                                                     dirnode.DirectoryNode)))
6910-            d.addCallback(lambda res: n.get_child_at_path(u""))
6911-            d.addCallback(lambda res: self.failUnlessReallyEqual(res.get_uri(),
6912-                                                                 n.get_uri()))
6913-
6914-            d.addCallback(lambda res: n.get_metadata_for(u"child"))
6915-            d.addCallback(lambda metadata:
6916-                          self.failUnlessEqual(set(metadata.keys()),
6917-                                               set(["tahoe"])))
6918-
6919-            d.addCallback(lambda res:
6920-                          self.shouldFail(NoSuchChildError, "gcamap-no",
6921-                                          "nope",
6922-                                          n.get_child_and_metadata_at_path,
6923-                                          u"subdir/nope"))
6924-            d.addCallback(lambda res:
6925-                          n.get_child_and_metadata_at_path(u""))
6926-            def _check_child_and_metadata1(res):
6927-                child, metadata = res
6928-                self.failUnless(isinstance(child, dirnode.DirectoryNode))
6929-                # edge-metadata needs at least one path segment
6930-                self.failUnlessEqual(set(metadata.keys()), set([]))
6931-            d.addCallback(_check_child_and_metadata1)
6932-            d.addCallback(lambda res:
6933-                          n.get_child_and_metadata_at_path(u"child"))
6934-
6935-            def _check_child_and_metadata2(res):
6936-                child, metadata = res
6937-                self.failUnlessReallyEqual(child.get_uri(),
6938-                                           fake_file_uri)
6939-                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
6940-            d.addCallback(_check_child_and_metadata2)
6941-
6942-            d.addCallback(lambda res:
6943-                          n.get_child_and_metadata_at_path(u"subdir/subsubdir"))
6944-            def _check_child_and_metadata3(res):
6945-                child, metadata = res
6946-                self.failUnless(isinstance(child, dirnode.DirectoryNode))
6947-                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
6948-            d.addCallback(_check_child_and_metadata3)
6949-
6950-            # set_uri + metadata
6951-            # it should be possible to add a child without any metadata
6952-            d.addCallback(lambda res: n.set_uri(u"c2",
6953-                                                fake_file_uri, fake_file_uri,
6954-                                                {}))
6955-            d.addCallback(lambda res: n.get_metadata_for(u"c2"))
6956-            d.addCallback(lambda metadata:
6957-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6958-
6959-            # You can't override the link timestamps.
6960-            d.addCallback(lambda res: n.set_uri(u"c2",
6961-                                                fake_file_uri, fake_file_uri,
6962-                                                { 'tahoe': {'linkcrtime': "bogus"}}))
6963-            d.addCallback(lambda res: n.get_metadata_for(u"c2"))
6964-            def _has_good_linkcrtime(metadata):
6965-                self.failUnless(metadata.has_key('tahoe'))
6966-                self.failUnless(metadata['tahoe'].has_key('linkcrtime'))
6967-                self.failIfEqual(metadata['tahoe']['linkcrtime'], 'bogus')
6968-            d.addCallback(_has_good_linkcrtime)
6969-
6970-            # if we don't set any defaults, the child should get timestamps
6971-            d.addCallback(lambda res: n.set_uri(u"c3",
6972-                                                fake_file_uri, fake_file_uri))
6973-            d.addCallback(lambda res: n.get_metadata_for(u"c3"))
6974-            d.addCallback(lambda metadata:
6975-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6976-
6977-            # we can also add specific metadata at set_uri() time
6978-            d.addCallback(lambda res: n.set_uri(u"c4",
6979-                                                fake_file_uri, fake_file_uri,
6980-                                                {"key": "value"}))
6981-            d.addCallback(lambda res: n.get_metadata_for(u"c4"))
6982-            d.addCallback(lambda metadata:
6983-                              self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
6984-                                              (metadata['key'] == "value"), metadata))
6985-
6986-            d.addCallback(lambda res: n.delete(u"c2"))
6987-            d.addCallback(lambda res: n.delete(u"c3"))
6988-            d.addCallback(lambda res: n.delete(u"c4"))
6989-
6990-            # set_node + metadata
6991-            # it should be possible to add a child without any metadata except for timestamps
6992-            d.addCallback(lambda res: n.set_node(u"d2", n, {}))
6993-            d.addCallback(lambda res: c.create_dirnode())
6994-            d.addCallback(lambda n2:
6995-                          self.shouldFail(ExistingChildError, "set_node-no",
6996-                                          "child 'd2' already exists",
6997-                                          n.set_node, u"d2", n2,
6998-                                          overwrite=False))
6999-            d.addCallback(lambda res: n.get_metadata_for(u"d2"))
7000-            d.addCallback(lambda metadata:
7001-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
7002-
7003-            # if we don't set any defaults, the child should get timestamps
7004-            d.addCallback(lambda res: n.set_node(u"d3", n))
7005-            d.addCallback(lambda res: n.get_metadata_for(u"d3"))
7006-            d.addCallback(lambda metadata:
7007-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
7008-
7009-            # we can also add specific metadata at set_node() time
7010-            d.addCallback(lambda res: n.set_node(u"d4", n,
7011-                                                {"key": "value"}))
7012-            d.addCallback(lambda res: n.get_metadata_for(u"d4"))
7013-            d.addCallback(lambda metadata:
7014-                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
7015-                                          (metadata["key"] == "value"), metadata))
7016-
7017-            d.addCallback(lambda res: n.delete(u"d2"))
7018-            d.addCallback(lambda res: n.delete(u"d3"))
7019-            d.addCallback(lambda res: n.delete(u"d4"))
7020-
7021-            # metadata through set_children()
7022-            d.addCallback(lambda res:
7023-                          n.set_children({
7024-                              u"e1": (fake_file_uri, fake_file_uri),
7025-                              u"e2": (fake_file_uri, fake_file_uri, {}),
7026-                              u"e3": (fake_file_uri, fake_file_uri,
7027-                                      {"key": "value"}),
7028-                              }))
7029-            d.addCallback(lambda n2: self.failUnlessIdentical(n2, n))
7030-            d.addCallback(lambda res:
7031-                          self.shouldFail(ExistingChildError, "set_children-no",
7032-                                          "child 'e1' already exists",
7033-                                          n.set_children,
7034-                                          { u"e1": (other_file_uri,
7035-                                                    other_file_uri),
7036-                                            u"new": (other_file_uri,
7037-                                                     other_file_uri),
7038-                                            },
7039-                                          overwrite=False))
7040-            # and 'new' should not have been created
7041-            d.addCallback(lambda res: n.list())
7042-            d.addCallback(lambda children: self.failIf(u"new" in children))
7043-            d.addCallback(lambda res: n.get_metadata_for(u"e1"))
7044-            d.addCallback(lambda metadata:
7045-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
7046-            d.addCallback(lambda res: n.get_metadata_for(u"e2"))
7047-            d.addCallback(lambda metadata:
7048-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
7049-            d.addCallback(lambda res: n.get_metadata_for(u"e3"))
7050-            d.addCallback(lambda metadata:
7051-                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
7052-                                          (metadata["key"] == "value"), metadata))
7053-
7054-            d.addCallback(lambda res: n.delete(u"e1"))
7055-            d.addCallback(lambda res: n.delete(u"e2"))
7056-            d.addCallback(lambda res: n.delete(u"e3"))
7057-
7058-            # metadata through set_nodes()
7059-            d.addCallback(lambda res:
7060-                          n.set_nodes({ u"f1": (n, None),
7061-                                        u"f2": (n, {}),
7062-                                        u"f3": (n, {"key": "value"}),
7063-                                        }))
7064-            d.addCallback(lambda n2: self.failUnlessIdentical(n2, n))
7065-            d.addCallback(lambda res:
7066-                          self.shouldFail(ExistingChildError, "set_nodes-no",
7067-                                          "child 'f1' already exists",
7068-                                          n.set_nodes, { u"f1": (n, None),
7069-                                                         u"new": (n, None), },
7070-                                          overwrite=False))
7071-            # and 'new' should not have been created
7072-            d.addCallback(lambda res: n.list())
7073-            d.addCallback(lambda children: self.failIf(u"new" in children))
7074-            d.addCallback(lambda res: n.get_metadata_for(u"f1"))
7075-            d.addCallback(lambda metadata:
7076-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
7077-            d.addCallback(lambda res: n.get_metadata_for(u"f2"))
7078-            d.addCallback(lambda metadata:
7079-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
7080-            d.addCallback(lambda res: n.get_metadata_for(u"f3"))
7081-            d.addCallback(lambda metadata:
7082-                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
7083-                                          (metadata["key"] == "value"), metadata))
7084-
7085-            d.addCallback(lambda res: n.delete(u"f1"))
7086-            d.addCallback(lambda res: n.delete(u"f2"))
7087-            d.addCallback(lambda res: n.delete(u"f3"))
7088-
7089-
7090-            d.addCallback(lambda res:
7091-                          n.set_metadata_for(u"child",
7092-                                             {"tags": ["web2.0-compatible"], "tahoe": {"bad": "mojo"}}))
7093-            d.addCallback(lambda n1: n1.get_metadata_for(u"child"))
7094-            d.addCallback(lambda metadata:
7095-                          self.failUnless((set(metadata.keys()) == set(["tags", "tahoe"])) and
7096-                                          metadata["tags"] == ["web2.0-compatible"] and
7097-                                          "bad" not in metadata["tahoe"], metadata))
7098-
7099-            d.addCallback(lambda res:
7100-                          self.shouldFail(NoSuchChildError, "set_metadata_for-nosuch", "",
7101-                                          n.set_metadata_for, u"nosuch", {}))
7102-
7103-
7104-            def _start(res):
7105-                self._start_timestamp = time.time()
7106-            d.addCallback(_start)
7107-            # simplejson-1.7.1 (as shipped on Ubuntu 'gutsy') rounds all
7108-            # floats to hundredeths (it uses str(num) instead of repr(num)).
7109-            # simplejson-1.7.3 does not have this bug. To prevent this bug
7110-            # from causing the test to fail, stall for more than a few
7111-            # hundrededths of a second.
7112-            d.addCallback(self.stall, 0.1)
7113-            d.addCallback(lambda res: n.add_file(u"timestamps",
7114-                                                 upload.Data("stamp me", convergence="some convergence string")))
7115-            d.addCallback(self.stall, 0.1)
7116-            def _stop(res):
7117-                self._stop_timestamp = time.time()
7118-            d.addCallback(_stop)
7119-
7120-            d.addCallback(lambda res: n.get_metadata_for(u"timestamps"))
7121-            def _check_timestamp1(metadata):
7122-                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
7123-                tahoe_md = metadata["tahoe"]
7124-                self.failUnlessEqual(set(tahoe_md.keys()), set(["linkcrtime", "linkmotime"]))
7125-
7126-                self.failUnlessGreaterOrEqualThan(tahoe_md["linkcrtime"],
7127-                                                  self._start_timestamp)
7128-                self.failUnlessGreaterOrEqualThan(self._stop_timestamp,
7129-                                                  tahoe_md["linkcrtime"])
7130-                self.failUnlessGreaterOrEqualThan(tahoe_md["linkmotime"],
7131-                                                  self._start_timestamp)
7132-                self.failUnlessGreaterOrEqualThan(self._stop_timestamp,
7133-                                                  tahoe_md["linkmotime"])
7134-                # Our current timestamp rules say that replacing an existing
7135-                # child should preserve the 'linkcrtime' but update the
7136-                # 'linkmotime'
7137-                self._old_linkcrtime = tahoe_md["linkcrtime"]
7138-                self._old_linkmotime = tahoe_md["linkmotime"]
7139-            d.addCallback(_check_timestamp1)
7140-            d.addCallback(self.stall, 2.0) # accomodate low-res timestamps
7141-            d.addCallback(lambda res: n.set_node(u"timestamps", n))
7142-            d.addCallback(lambda res: n.get_metadata_for(u"timestamps"))
7143-            def _check_timestamp2(metadata):
7144-                self.failUnlessIn("tahoe", metadata)
7145-                tahoe_md = metadata["tahoe"]
7146-                self.failUnlessEqual(set(tahoe_md.keys()), set(["linkcrtime", "linkmotime"]))
7147-
7148-                self.failUnlessReallyEqual(tahoe_md["linkcrtime"], self._old_linkcrtime)
7149-                self.failUnlessGreaterThan(tahoe_md["linkmotime"], self._old_linkmotime)
7150-                return n.delete(u"timestamps")
7151-            d.addCallback(_check_timestamp2)
7152-
7153-            d.addCallback(lambda res: n.delete(u"subdir"))
7154-            d.addCallback(lambda old_child:
7155-                          self.failUnlessReallyEqual(old_child.get_uri(),
7156-                                                     self.subdir.get_uri()))
7157-
7158-            d.addCallback(lambda res: n.list())
7159-            d.addCallback(lambda children:
7160-                          self.failUnlessReallyEqual(set(children.keys()),
7161-                                                     set([u"child"])))
7162-
7163-            uploadable1 = upload.Data("some data", convergence="converge")
7164-            d.addCallback(lambda res: n.add_file(u"newfile", uploadable1))
7165-            d.addCallback(lambda newnode:
7166-                          self.failUnless(IImmutableFileNode.providedBy(newnode)))
7167-            uploadable2 = upload.Data("some data", convergence="stuff")
7168-            d.addCallback(lambda res:
7169-                          self.shouldFail(ExistingChildError, "add_file-no",
7170-                                          "child 'newfile' already exists",
7171-                                          n.add_file, u"newfile",
7172-                                          uploadable2,
7173-                                          overwrite=False))
7174-            d.addCallback(lambda res: n.list())
7175-            d.addCallback(lambda children:
7176-                          self.failUnlessReallyEqual(set(children.keys()),
7177-                                                     set([u"child", u"newfile"])))
7178-            d.addCallback(lambda res: n.get_metadata_for(u"newfile"))
7179-            d.addCallback(lambda metadata:
7180-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
7181-
7182-            uploadable3 = upload.Data("some data", convergence="converge")
7183-            d.addCallback(lambda res: n.add_file(u"newfile-metadata",
7184-                                                 uploadable3,
7185-                                                 {"key": "value"}))
7186-            d.addCallback(lambda newnode:
7187-                          self.failUnless(IImmutableFileNode.providedBy(newnode)))
7188-            d.addCallback(lambda res: n.get_metadata_for(u"newfile-metadata"))
7189-            d.addCallback(lambda metadata:
7190-                              self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
7191-                                              (metadata['key'] == "value"), metadata))
7192-            d.addCallback(lambda res: n.delete(u"newfile-metadata"))
7193-
7194-            d.addCallback(lambda res: n.create_subdirectory(u"subdir2"))
7195-            def _created2(subdir2):
7196-                self.subdir2 = subdir2
7197-                # put something in the way, to make sure it gets overwritten
7198-                return subdir2.add_file(u"child", upload.Data("overwrite me",
7199-                                                              "converge"))
7200-            d.addCallback(_created2)
7201-
7202-            d.addCallback(lambda res:
7203-                          n.move_child_to(u"child", self.subdir2))
7204-            d.addCallback(lambda res: n.list())
7205-            d.addCallback(lambda children:
7206-                          self.failUnlessReallyEqual(set(children.keys()),
7207-                                                     set([u"newfile", u"subdir2"])))
7208-            d.addCallback(lambda res: self.subdir2.list())
7209-            d.addCallback(lambda children:
7210-                          self.failUnlessReallyEqual(set(children.keys()),
7211-                                                     set([u"child"])))
7212-            d.addCallback(lambda res: self.subdir2.get(u"child"))
7213-            d.addCallback(lambda child:
7214-                          self.failUnlessReallyEqual(child.get_uri(),
7215-                                                     fake_file_uri))
7216-
7217-            # move it back, using new_child_name=
7218-            d.addCallback(lambda res:
7219-                          self.subdir2.move_child_to(u"child", n, u"newchild"))
7220-            d.addCallback(lambda res: n.list())
7221-            d.addCallback(lambda children:
7222-                          self.failUnlessReallyEqual(set(children.keys()),
7223-                                                     set([u"newchild", u"newfile",
7224-                                                          u"subdir2"])))
7225-            d.addCallback(lambda res: self.subdir2.list())
7226-            d.addCallback(lambda children:
7227-                          self.failUnlessReallyEqual(set(children.keys()), set([])))
7228-
7229-            # now make sure that we honor overwrite=False
7230-            d.addCallback(lambda res:
7231-                          self.subdir2.set_uri(u"newchild",
7232-                                               other_file_uri, other_file_uri))
7233-
7234-            d.addCallback(lambda res:
7235-                          self.shouldFail(ExistingChildError, "move_child_to-no",
7236-                                          "child 'newchild' already exists",
7237-                                          n.move_child_to, u"newchild",
7238-                                          self.subdir2,
7239-                                          overwrite=False))
7240-            d.addCallback(lambda res: self.subdir2.get(u"newchild"))
7241-            d.addCallback(lambda child:
7242-                          self.failUnlessReallyEqual(child.get_uri(),
7243-                                                     other_file_uri))
7244-
7245-
7246-            # Setting the no-write field should diminish a mutable cap to read-only
7247-            # (for both files and directories).
7248-
7249-            d.addCallback(lambda ign: n.set_uri(u"mutable", other_file_uri, other_file_uri))
7250-            d.addCallback(lambda ign: n.get(u"mutable"))
7251-            d.addCallback(lambda mutable: self.failIf(mutable.is_readonly(), mutable))
7252-            d.addCallback(lambda ign: n.set_metadata_for(u"mutable", {"no-write": True}))
7253-            d.addCallback(lambda ign: n.get(u"mutable"))
7254-            d.addCallback(lambda mutable: self.failUnless(mutable.is_readonly(), mutable))
7255-            d.addCallback(lambda ign: n.set_metadata_for(u"mutable", {"no-write": True}))
7256-            d.addCallback(lambda ign: n.get(u"mutable"))
7257-            d.addCallback(lambda mutable: self.failUnless(mutable.is_readonly(), mutable))
7258-
7259-            d.addCallback(lambda ign: n.get(u"subdir2"))
7260-            d.addCallback(lambda subdir2: self.failIf(subdir2.is_readonly()))
7261-            d.addCallback(lambda ign: n.set_metadata_for(u"subdir2", {"no-write": True}))
7262-            d.addCallback(lambda ign: n.get(u"subdir2"))
7263-            d.addCallback(lambda subdir2: self.failUnless(subdir2.is_readonly(), subdir2))
7264-
7265-            d.addCallback(lambda ign: n.set_uri(u"mutable_ro", other_file_uri, other_file_uri,
7266-                                                metadata={"no-write": True}))
7267-            d.addCallback(lambda ign: n.get(u"mutable_ro"))
7268-            d.addCallback(lambda mutable_ro: self.failUnless(mutable_ro.is_readonly(), mutable_ro))
7269-
7270-            d.addCallback(lambda ign: n.create_subdirectory(u"subdir_ro", metadata={"no-write": True}))
7271-            d.addCallback(lambda ign: n.get(u"subdir_ro"))
7272-            d.addCallback(lambda subdir_ro: self.failUnless(subdir_ro.is_readonly(), subdir_ro))
7273-
7274-            return d
7275-
7276-        d.addCallback(_then)
7277-
7278-        d.addErrback(self.explain_error)
7279-        return d
7280+        return self._do_create_test()
7281 
7282     def test_update_metadata(self):
7283         (t1, t2, t3) = (626644800.0, 634745640.0, 892226160.0)
7284hunk ./src/allmydata/test/test_dirnode.py 1283
7285         self.failUnlessEqual(md4, {"bool": True, "number": 42,
7286                                    "tahoe":{"linkcrtime": t1, "linkmotime": t1}})
7287 
7288-    def test_create_subdirectory(self):
7289-        self.basedir = "dirnode/Dirnode/test_create_subdirectory"
7290-        self.set_up_grid()
7291+    def _do_create_subdirectory_test(self, version=SDMF_VERSION):
7292         c = self.g.clients[0]
7293         nm = c.nodemaker
7294 
7295hunk ./src/allmydata/test/test_dirnode.py 1287
7296-        d = c.create_dirnode()
7297+        d = c.create_dirnode(version=version)
7298         def _then(n):
7299             # /
7300             self.rootnode = n
7301hunk ./src/allmydata/test/test_dirnode.py 1297
7302             kids = {u"kid1": (nm.create_from_cap(fake_file_uri), {}),
7303                     u"kid2": (nm.create_from_cap(other_file_uri), md),
7304                     }
7305-            d = n.create_subdirectory(u"subdir", kids)
7306+            d = n.create_subdirectory(u"subdir", kids,
7307+                                      mutable_version=version)
7308             def _check(sub):
7309                 d = n.get_child_at_path(u"subdir")
7310                 d.addCallback(lambda sub2: self.failUnlessReallyEqual(sub2.get_uri(),
7311hunk ./src/allmydata/test/test_dirnode.py 1314
7312         d.addCallback(_then)
7313         return d
7314 
7315+    def test_create_subdirectory(self):
7316+        self.basedir = "dirnode/Dirnode/test_create_subdirectory"
7317+        self.set_up_grid()
7318+        return self._do_create_subdirectory_test()
7319+
7320+    def test_create_subdirectory_mdmf(self):
7321+        self.basedir = "dirnode/Dirnode/test_create_subdirectory_mdmf"
7322+        self.set_up_grid()
7323+        return self._do_create_subdirectory_test(version=MDMF_VERSION)
7324+
7325+    def test_create_mdmf(self):
7326+        self.basedir = "dirnode/Dirnode/test_mdmf"
7327+        self.set_up_grid()
7328+        return self._do_create_test(mdmf=True)
7329+
7330+    def test_mdmf_initial_children(self):
7331+        self.basedir = "dirnode/Dirnode/test_mdmf"
7332+        self.set_up_grid()
7333+        return self._do_initial_children_test(mdmf=True)
7334+
7335 class MinimalFakeMutableFile:
7336     def get_writekey(self):
7337         return "writekey"
7338hunk ./src/allmydata/test/test_dirnode.py 1452
7339     implements(IMutableFileNode)
7340     counter = 0
7341     def __init__(self, initial_contents=""):
7342-        self.data = self._get_initial_contents(initial_contents)
7343+        data = self._get_initial_contents(initial_contents)
7344+        self.data = data.read(data.get_size())
7345+        self.data = "".join(self.data)
7346+
7347         counter = FakeMutableFile.counter
7348         FakeMutableFile.counter += 1
7349         writekey = hashutil.ssk_writekey_hash(str(counter))
7350hunk ./src/allmydata/test/test_dirnode.py 1502
7351         pass
7352 
7353     def modify(self, modifier):
7354-        self.data = modifier(self.data, None, True)
7355+        data = modifier(self.data, None, True)
7356+        self.data = data
7357         return defer.succeed(None)
7358 
7359 class FakeNodeMaker(NodeMaker):
7360hunk ./src/allmydata/test/test_dirnode.py 1507
7361-    def create_mutable_file(self, contents="", keysize=None):
7362+    def create_mutable_file(self, contents="", keysize=None, version=None):
7363         return defer.succeed(FakeMutableFile(contents))
7364 
7365 class FakeClient2(Client):
7366hunk ./src/allmydata/test/test_dirnode.py 1706
7367             self.failUnless(n.get_readonly_uri().startswith("imm."), i)
7368 
7369 
7370+
7371 class DeepStats(testutil.ReallyEqualMixin, unittest.TestCase):
7372     timeout = 240 # It takes longer than 120 seconds on Francois's arm box.
7373     def test_stats(self):
7374}
7375[immutable/literal.py: Implement interface changes in literal nodes.
7376Kevan Carstensen <kevan@isnotajoke.com>**20110802020814
7377 Ignore-this: 4371e71a50e65ce2607c4d67d3a32171
7378] {
7379hunk ./src/allmydata/immutable/literal.py 106
7380         d.addCallback(lambda lastSent: consumer)
7381         return d
7382 
7383+    # IReadable, IFileNode, IFilesystemNode
7384+    def get_best_readable_version(self):
7385+        return defer.succeed(self)
7386+
7387+
7388+    def download_best_version(self):
7389+        return defer.succeed(self.u.data)
7390+
7391+
7392+    download_to_data = download_best_version
7393+    get_size_of_best_version = get_current_size
7394+
7395hunk ./src/allmydata/test/test_filenode.py 98
7396         def _check_segment(res):
7397             self.failUnlessEqual(res, DATA[1:1+5])
7398         d.addCallback(_check_segment)
7399+        d.addCallback(lambda ignored: fn1.get_best_readable_version())
7400+        d.addCallback(lambda fn2: self.failUnlessEqual(fn1, fn2))
7401+        d.addCallback(lambda ignored:
7402+            fn1.get_size_of_best_version())
7403+        d.addCallback(lambda size:
7404+            self.failUnlessEqual(size, len(DATA)))
7405+        d.addCallback(lambda ignored:
7406+            fn1.download_to_data())
7407+        d.addCallback(lambda data:
7408+            self.failUnlessEqual(data, DATA))
7409+        d.addCallback(lambda ignored:
7410+            fn1.download_best_version())
7411+        d.addCallback(lambda data:
7412+            self.failUnlessEqual(data, DATA))
7413 
7414         return d
7415 
7416}
7417[immutable/filenode: implement unified filenode interface
7418Kevan Carstensen <kevan@isnotajoke.com>**20110802020905
7419 Ignore-this: d9a442fc285157f134f5d1b4607c6a48
7420] {
7421hunk ./src/allmydata/immutable/filenode.py 8
7422 now = time.time
7423 from zope.interface import implements
7424 from twisted.internet import defer
7425-from twisted.internet.interfaces import IConsumer
7426 
7427hunk ./src/allmydata/immutable/filenode.py 9
7428-from allmydata.interfaces import IImmutableFileNode, IUploadResults
7429 from allmydata import uri
7430hunk ./src/allmydata/immutable/filenode.py 10
7431+from twisted.internet.interfaces import IConsumer
7432+from twisted.protocols import basic
7433+from foolscap.api import eventually
7434+from allmydata.interfaces import IImmutableFileNode, ICheckable, \
7435+     IDownloadTarget, IUploadResults
7436+from allmydata.util import dictutil, log, base32, consumer
7437+from allmydata.immutable.checker import Checker
7438 from allmydata.check_results import CheckResults, CheckAndRepairResults
7439 from allmydata.util.dictutil import DictOfSets
7440 from pycryptopp.cipher.aes import AES
7441hunk ./src/allmydata/immutable/filenode.py 285
7442         return self._cnode.check_and_repair(monitor, verify, add_lease)
7443     def check(self, monitor, verify=False, add_lease=False):
7444         return self._cnode.check(monitor, verify, add_lease)
7445+
7446+    def get_best_readable_version(self):
7447+        """
7448+        Return an IReadable of the best version of this file. Since
7449+        immutable files can have only one version, we just return the
7450+        current filenode.
7451+        """
7452+        return defer.succeed(self)
7453+
7454+
7455+    def download_best_version(self):
7456+        """
7457+        Download the best version of this file, returning its contents
7458+        as a bytestring. Since there is only one version of an immutable
7459+        file, we download and return the contents of this file.
7460+        """
7461+        d = consumer.download_to_data(self)
7462+        return d
7463+
7464+    # for an immutable file, download_to_data (specified in IReadable)
7465+    # is the same as download_best_version (specified in IFileNode). For
7466+    # mutable files, the difference is more meaningful, since they can
7467+    # have multiple versions.
7468+    download_to_data = download_best_version
7469+
7470+
7471+    # get_size() (IReadable), get_current_size() (IFilesystemNode), and
7472+    # get_size_of_best_version(IFileNode) are all the same for immutable
7473+    # files.
7474+    get_size_of_best_version = get_current_size
7475hunk ./src/allmydata/test/test_immutable.py 290
7476         d.addCallback(_try_download)
7477         return d
7478 
7479+    def test_download_to_data(self):
7480+        d = self.n.download_to_data()
7481+        d.addCallback(lambda data:
7482+            self.failUnlessEqual(data, common.TEST_DATA))
7483+        return d
7484+
7485+
7486+    def test_download_best_version(self):
7487+        d = self.n.download_best_version()
7488+        d.addCallback(lambda data:
7489+            self.failUnlessEqual(data, common.TEST_DATA))
7490+        return d
7491+
7492+
7493+    def test_get_best_readable_version(self):
7494+        d = self.n.get_best_readable_version()
7495+        d.addCallback(lambda n2:
7496+            self.failUnlessEqual(n2, self.n))
7497+        return d
7498+
7499+    def test_get_size_of_best_version(self):
7500+        d = self.n.get_size_of_best_version()
7501+        d.addCallback(lambda size:
7502+            self.failUnlessEqual(size, len(common.TEST_DATA)))
7503+        return d
7504+
7505 
7506 # XXX extend these tests to show bad behavior of various kinds from servers:
7507 # raising exception from each remove_foo() method, for example
7508}
7509[test/test_mutable: tests for MDMF
7510Kevan Carstensen <kevan@isnotajoke.com>**20110802020924
7511 Ignore-this: 6b5269849b3f987aa6e266a57ee01041
7512 
7513 These are their own patch because they cut across a lot of the changes
7514 I've made in implementing MDMF in such a way as to make it difficult to
7515 split them up into the other patches.
7516] {
7517hunk ./src/allmydata/test/test_mutable.py 2
7518 
7519-import struct
7520+import os, re, base64
7521 from cStringIO import StringIO
7522 from twisted.trial import unittest
7523 from twisted.internet import defer, reactor
7524hunk ./src/allmydata/test/test_mutable.py 6
7525+from twisted.internet.interfaces import IConsumer
7526+from zope.interface import implements
7527 from allmydata import uri, client
7528 from allmydata.nodemaker import NodeMaker
7529hunk ./src/allmydata/test/test_mutable.py 10
7530-from allmydata.util import base32
7531+from allmydata.util import base32, consumer, fileutil
7532 from allmydata.util.hashutil import tagged_hash, ssk_writekey_hash, \
7533      ssk_pubkey_fingerprint_hash
7534hunk ./src/allmydata/test/test_mutable.py 13
7535+from allmydata.util.deferredutil import gatherResults
7536 from allmydata.interfaces import IRepairResults, ICheckAndRepairResults, \
7537hunk ./src/allmydata/test/test_mutable.py 15
7538-     NotEnoughSharesError
7539+     NotEnoughSharesError, SDMF_VERSION, MDMF_VERSION
7540 from allmydata.monitor import Monitor
7541 from allmydata.test.common import ShouldFailMixin
7542 from allmydata.test.no_network import GridTestMixin
7543hunk ./src/allmydata/test/test_mutable.py 22
7544 from foolscap.api import eventually, fireEventually
7545 from foolscap.logging import log
7546 from allmydata.storage_client import StorageFarmBroker
7547+from allmydata.storage.common import storage_index_to_dir, si_b2a
7548 
7549 from allmydata.mutable.filenode import MutableFileNode, BackoffAgent
7550 from allmydata.mutable.common import ResponseCache, \
7551hunk ./src/allmydata/test/test_mutable.py 30
7552      NeedMoreDataError, UnrecoverableFileError, UncoordinatedWriteError, \
7553      NotEnoughServersError, CorruptShareError
7554 from allmydata.mutable.retrieve import Retrieve
7555-from allmydata.mutable.publish import Publish
7556+from allmydata.mutable.publish import Publish, MutableFileHandle, \
7557+                                      MutableData, \
7558+                                      DEFAULT_MAX_SEGMENT_SIZE
7559 from allmydata.mutable.servermap import ServerMap, ServermapUpdater
7560hunk ./src/allmydata/test/test_mutable.py 34
7561-from allmydata.mutable.layout import unpack_header, unpack_share
7562+from allmydata.mutable.layout import unpack_header, MDMFSlotReadProxy
7563 from allmydata.mutable.repairer import MustForceRepairError
7564 
7565 import allmydata.test.common_util as testutil
7566hunk ./src/allmydata/test/test_mutable.py 103
7567         self.storage = storage
7568         self.queries = 0
7569     def callRemote(self, methname, *args, **kwargs):
7570+        self.queries += 1
7571         def _call():
7572             meth = getattr(self, methname)
7573             return meth(*args, **kwargs)
7574hunk ./src/allmydata/test/test_mutable.py 110
7575         d = fireEventually()
7576         d.addCallback(lambda res: _call())
7577         return d
7578+
7579     def callRemoteOnly(self, methname, *args, **kwargs):
7580hunk ./src/allmydata/test/test_mutable.py 112
7581+        self.queries += 1
7582         d = self.callRemote(methname, *args, **kwargs)
7583         d.addBoth(lambda ignore: None)
7584         pass
7585hunk ./src/allmydata/test/test_mutable.py 160
7586             chr(ord(original[byte_offset]) ^ 0x01) +
7587             original[byte_offset+1:])
7588 
7589+def add_two(original, byte_offset):
7590+    # It isn't enough to simply flip the bit for the version number,
7591+    # because 1 is a valid version number. So we add two instead.
7592+    return (original[:byte_offset] +
7593+            chr(ord(original[byte_offset]) ^ 0x02) +
7594+            original[byte_offset+1:])
7595+
7596 def corrupt(res, s, offset, shnums_to_corrupt=None, offset_offset=0):
7597     # if shnums_to_corrupt is None, corrupt all shares. Otherwise it is a
7598     # list of shnums to corrupt.
7599hunk ./src/allmydata/test/test_mutable.py 170
7600+    ds = []
7601     for peerid in s._peers:
7602         shares = s._peers[peerid]
7603         for shnum in shares:
7604hunk ./src/allmydata/test/test_mutable.py 178
7605                 and shnum not in shnums_to_corrupt):
7606                 continue
7607             data = shares[shnum]
7608-            (version,
7609-             seqnum,
7610-             root_hash,
7611-             IV,
7612-             k, N, segsize, datalen,
7613-             o) = unpack_header(data)
7614-            if isinstance(offset, tuple):
7615-                offset1, offset2 = offset
7616-            else:
7617-                offset1 = offset
7618-                offset2 = 0
7619-            if offset1 == "pubkey":
7620-                real_offset = 107
7621-            elif offset1 in o:
7622-                real_offset = o[offset1]
7623-            else:
7624-                real_offset = offset1
7625-            real_offset = int(real_offset) + offset2 + offset_offset
7626-            assert isinstance(real_offset, int), offset
7627-            shares[shnum] = flip_bit(data, real_offset)
7628-    return res
7629+            # We're feeding the reader all of the share data, so it
7630+            # won't need to use the rref that we didn't provide, nor the
7631+            # storage index that we didn't provide. We do this because
7632+            # the reader will work for both MDMF and SDMF.
7633+            reader = MDMFSlotReadProxy(None, None, shnum, data)
7634+            # We need to get the offsets for the next part.
7635+            d = reader.get_verinfo()
7636+            def _do_corruption(verinfo, data, shnum):
7637+                (seqnum,
7638+                 root_hash,
7639+                 IV,
7640+                 segsize,
7641+                 datalen,
7642+                 k, n, prefix, o) = verinfo
7643+                if isinstance(offset, tuple):
7644+                    offset1, offset2 = offset
7645+                else:
7646+                    offset1 = offset
7647+                    offset2 = 0
7648+                if offset1 == "pubkey" and IV:
7649+                    real_offset = 107
7650+                elif offset1 in o:
7651+                    real_offset = o[offset1]
7652+                else:
7653+                    real_offset = offset1
7654+                real_offset = int(real_offset) + offset2 + offset_offset
7655+                assert isinstance(real_offset, int), offset
7656+                if offset1 == 0: # verbyte
7657+                    f = add_two
7658+                else:
7659+                    f = flip_bit
7660+                shares[shnum] = f(data, real_offset)
7661+            d.addCallback(_do_corruption, data, shnum)
7662+            ds.append(d)
7663+    dl = defer.DeferredList(ds)
7664+    dl.addCallback(lambda ignored: res)
7665+    return dl
7666 
7667 def make_storagebroker(s=None, num_peers=10):
7668     if not s:
7669hunk ./src/allmydata/test/test_mutable.py 257
7670             self.failUnlessEqual(len(shnums), 1)
7671         d.addCallback(_created)
7672         return d
7673+    test_create.timeout = 15
7674+
7675+
7676+    def test_create_mdmf(self):
7677+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7678+        def _created(n):
7679+            self.failUnless(isinstance(n, MutableFileNode))
7680+            self.failUnlessEqual(n.get_storage_index(), n._storage_index)
7681+            sb = self.nodemaker.storage_broker
7682+            peer0 = sorted(sb.get_all_serverids())[0]
7683+            shnums = self._storage._peers[peer0].keys()
7684+            self.failUnlessEqual(len(shnums), 1)
7685+        d.addCallback(_created)
7686+        return d
7687+
7688+    def test_single_share(self):
7689+        # Make sure that we tolerate publishing a single share.
7690+        self.nodemaker.default_encoding_parameters['k'] = 1
7691+        self.nodemaker.default_encoding_parameters['happy'] = 1
7692+        self.nodemaker.default_encoding_parameters['n'] = 1
7693+        d = defer.succeed(None)
7694+        for v in (SDMF_VERSION, MDMF_VERSION):
7695+            d.addCallback(lambda ignored:
7696+                self.nodemaker.create_mutable_file(version=v))
7697+            def _created(n):
7698+                self.failUnless(isinstance(n, MutableFileNode))
7699+                self._node = n
7700+                return n
7701+            d.addCallback(_created)
7702+            d.addCallback(lambda n:
7703+                n.overwrite(MutableData("Contents" * 50000)))
7704+            d.addCallback(lambda ignored:
7705+                self._node.download_best_version())
7706+            d.addCallback(lambda contents:
7707+                self.failUnlessEqual(contents, "Contents" * 50000))
7708+        return d
7709+
7710+    def test_max_shares(self):
7711+        self.nodemaker.default_encoding_parameters['n'] = 255
7712+        d = self.nodemaker.create_mutable_file(version=SDMF_VERSION)
7713+        def _created(n):
7714+            self.failUnless(isinstance(n, MutableFileNode))
7715+            self.failUnlessEqual(n.get_storage_index(), n._storage_index)
7716+            sb = self.nodemaker.storage_broker
7717+            num_shares = sum([len(self._storage._peers[x].keys()) for x \
7718+                              in sb.get_all_serverids()])
7719+            self.failUnlessEqual(num_shares, 255)
7720+            self._node = n
7721+            return n
7722+        d.addCallback(_created)
7723+        # Now we upload some contents
7724+        d.addCallback(lambda n:
7725+            n.overwrite(MutableData("contents" * 50000)))
7726+        # ...then download contents
7727+        d.addCallback(lambda ignored:
7728+            self._node.download_best_version())
7729+        # ...and check to make sure everything went okay.
7730+        d.addCallback(lambda contents:
7731+            self.failUnlessEqual("contents" * 50000, contents))
7732+        return d
7733+
7734+    def test_max_shares_mdmf(self):
7735+        # Test how files behave when there are 255 shares.
7736+        self.nodemaker.default_encoding_parameters['n'] = 255
7737+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7738+        def _created(n):
7739+            self.failUnless(isinstance(n, MutableFileNode))
7740+            self.failUnlessEqual(n.get_storage_index(), n._storage_index)
7741+            sb = self.nodemaker.storage_broker
7742+            num_shares = sum([len(self._storage._peers[x].keys()) for x \
7743+                              in sb.get_all_serverids()])
7744+            self.failUnlessEqual(num_shares, 255)
7745+            self._node = n
7746+            return n
7747+        d.addCallback(_created)
7748+        d.addCallback(lambda n:
7749+            n.overwrite(MutableData("contents" * 50000)))
7750+        d.addCallback(lambda ignored:
7751+            self._node.download_best_version())
7752+        d.addCallback(lambda contents:
7753+            self.failUnlessEqual(contents, "contents" * 50000))
7754+        return d
7755+
7756+    def test_mdmf_filenode_cap(self):
7757+        # Test that an MDMF filenode, once created, returns an MDMF URI.
7758+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7759+        def _created(n):
7760+            self.failUnless(isinstance(n, MutableFileNode))
7761+            cap = n.get_cap()
7762+            self.failUnless(isinstance(cap, uri.WritableMDMFFileURI))
7763+            rcap = n.get_readcap()
7764+            self.failUnless(isinstance(rcap, uri.ReadonlyMDMFFileURI))
7765+            vcap = n.get_verify_cap()
7766+            self.failUnless(isinstance(vcap, uri.MDMFVerifierURI))
7767+        d.addCallback(_created)
7768+        return d
7769+
7770+
7771+    def test_create_from_mdmf_writecap(self):
7772+        # Test that the nodemaker is capable of creating an MDMF
7773+        # filenode given an MDMF cap.
7774+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7775+        def _created(n):
7776+            self.failUnless(isinstance(n, MutableFileNode))
7777+            s = n.get_uri()
7778+            self.failUnless(s.startswith("URI:MDMF"))
7779+            n2 = self.nodemaker.create_from_cap(s)
7780+            self.failUnless(isinstance(n2, MutableFileNode))
7781+            self.failUnlessEqual(n.get_storage_index(), n2.get_storage_index())
7782+            self.failUnlessEqual(n.get_uri(), n2.get_uri())
7783+        d.addCallback(_created)
7784+        return d
7785+
7786+
7787+    def test_create_from_mdmf_writecap_with_extensions(self):
7788+        # Test that the nodemaker is capable of creating an MDMF
7789+        # filenode when given a writecap with extension parameters in
7790+        # them.
7791+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7792+        def _created(n):
7793+            self.failUnless(isinstance(n, MutableFileNode))
7794+            s = n.get_uri()
7795+            # We need to cheat a little and delete the nodemaker's
7796+            # cache, otherwise we'll get the same node instance back.
7797+            self.failUnlessIn(":3:131073", s)
7798+            n2 = self.nodemaker.create_from_cap(s)
7799+
7800+            self.failUnlessEqual(n2.get_storage_index(), n.get_storage_index())
7801+            self.failUnlessEqual(n.get_writekey(), n2.get_writekey())
7802+            hints = n2._downloader_hints
7803+            self.failUnlessEqual(hints['k'], 3)
7804+            self.failUnlessEqual(hints['segsize'], 131073)
7805+        d.addCallback(_created)
7806+        return d
7807+
7808+
7809+    def test_create_from_mdmf_readcap(self):
7810+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7811+        def _created(n):
7812+            self.failUnless(isinstance(n, MutableFileNode))
7813+            s = n.get_readonly_uri()
7814+            n2 = self.nodemaker.create_from_cap(s)
7815+            self.failUnless(isinstance(n2, MutableFileNode))
7816+
7817+            # Check that it's a readonly node
7818+            self.failUnless(n2.is_readonly())
7819+        d.addCallback(_created)
7820+        return d
7821+
7822+
7823+    def test_create_from_mdmf_readcap_with_extensions(self):
7824+        # We should be able to create an MDMF filenode with the
7825+        # extension parameters without it breaking.
7826+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7827+        def _created(n):
7828+            self.failUnless(isinstance(n, MutableFileNode))
7829+            s = n.get_readonly_uri()
7830+            self.failUnlessIn(":3:131073", s)
7831+
7832+            n2 = self.nodemaker.create_from_cap(s)
7833+            self.failUnless(isinstance(n2, MutableFileNode))
7834+            self.failUnless(n2.is_readonly())
7835+            self.failUnlessEqual(n.get_storage_index(), n2.get_storage_index())
7836+            hints = n2._downloader_hints
7837+            self.failUnlessEqual(hints["k"], 3)
7838+            self.failUnlessEqual(hints["segsize"], 131073)
7839+        d.addCallback(_created)
7840+        return d
7841+
7842+
7843+    def test_internal_version_from_cap(self):
7844+        # MutableFileNodes and MutableFileVersions have an internal
7845+        # switch that tells them whether they're dealing with an SDMF or
7846+        # MDMF mutable file when they start doing stuff. We want to make
7847+        # sure that this is set appropriately given an MDMF cap.
7848+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7849+        def _created(n):
7850+            self.uri = n.get_uri()
7851+            self.failUnlessEqual(n._protocol_version, MDMF_VERSION)
7852+
7853+            n2 = self.nodemaker.create_from_cap(self.uri)
7854+            self.failUnlessEqual(n2._protocol_version, MDMF_VERSION)
7855+        d.addCallback(_created)
7856+        return d
7857+
7858 
7859     def test_serialize(self):
7860         n = MutableFileNode(None, None, {"k": 3, "n": 10}, None)
7861hunk ./src/allmydata/test/test_mutable.py 472
7862             d.addCallback(lambda smap: smap.dump(StringIO()))
7863             d.addCallback(lambda sio:
7864                           self.failUnless("3-of-10" in sio.getvalue()))
7865-            d.addCallback(lambda res: n.overwrite("contents 1"))
7866+            d.addCallback(lambda res: n.overwrite(MutableData("contents 1")))
7867             d.addCallback(lambda res: self.failUnlessIdentical(res, None))
7868             d.addCallback(lambda res: n.download_best_version())
7869             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1"))
7870hunk ./src/allmydata/test/test_mutable.py 479
7871             d.addCallback(lambda res: n.get_size_of_best_version())
7872             d.addCallback(lambda size:
7873                           self.failUnlessEqual(size, len("contents 1")))
7874-            d.addCallback(lambda res: n.overwrite("contents 2"))
7875+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
7876             d.addCallback(lambda res: n.download_best_version())
7877             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
7878             d.addCallback(lambda res: n.get_servermap(MODE_WRITE))
7879hunk ./src/allmydata/test/test_mutable.py 483
7880-            d.addCallback(lambda smap: n.upload("contents 3", smap))
7881+            d.addCallback(lambda smap: n.upload(MutableData("contents 3"), smap))
7882             d.addCallback(lambda res: n.download_best_version())
7883             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 3"))
7884             d.addCallback(lambda res: n.get_servermap(MODE_ANYTHING))
7885hunk ./src/allmydata/test/test_mutable.py 495
7886             # mapupdate-to-retrieve data caching (i.e. make the shares larger
7887             # than the default readsize, which is 2000 bytes). A 15kB file
7888             # will have 5kB shares.
7889-            d.addCallback(lambda res: n.overwrite("large size file" * 1000))
7890+            d.addCallback(lambda res: n.overwrite(MutableData("large size file" * 1000)))
7891             d.addCallback(lambda res: n.download_best_version())
7892             d.addCallback(lambda res:
7893                           self.failUnlessEqual(res, "large size file" * 1000))
7894hunk ./src/allmydata/test/test_mutable.py 503
7895         d.addCallback(_created)
7896         return d
7897 
7898+
7899+    def test_upload_and_download_mdmf(self):
7900+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7901+        def _created(n):
7902+            d = defer.succeed(None)
7903+            d.addCallback(lambda ignored:
7904+                n.get_servermap(MODE_READ))
7905+            def _then(servermap):
7906+                dumped = servermap.dump(StringIO())
7907+                self.failUnlessIn("3-of-10", dumped.getvalue())
7908+            d.addCallback(_then)
7909+            # Now overwrite the contents with some new contents. We want
7910+            # to make them big enough to force the file to be uploaded
7911+            # in more than one segment.
7912+            big_contents = "contents1" * 100000 # about 900 KiB
7913+            big_contents_uploadable = MutableData(big_contents)
7914+            d.addCallback(lambda ignored:
7915+                n.overwrite(big_contents_uploadable))
7916+            d.addCallback(lambda ignored:
7917+                n.download_best_version())
7918+            d.addCallback(lambda data:
7919+                self.failUnlessEqual(data, big_contents))
7920+            # Overwrite the contents again with some new contents. As
7921+            # before, they need to be big enough to force multiple
7922+            # segments, so that we make the downloader deal with
7923+            # multiple segments.
7924+            bigger_contents = "contents2" * 1000000 # about 9MiB
7925+            bigger_contents_uploadable = MutableData(bigger_contents)
7926+            d.addCallback(lambda ignored:
7927+                n.overwrite(bigger_contents_uploadable))
7928+            d.addCallback(lambda ignored:
7929+                n.download_best_version())
7930+            d.addCallback(lambda data:
7931+                self.failUnlessEqual(data, bigger_contents))
7932+            return d
7933+        d.addCallback(_created)
7934+        return d
7935+
7936+
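Editor's note: the sizes chosen above (900 KB and 9 MB payloads) are picked to exceed MDMF's 128 KiB default segment size. A hedged, standalone sketch of the segment arithmetic these tests rely on (the function name and the standalone framing are illustrative, not Tahoe's actual API):

```python
# Toy sketch: how a payload of a given length splits into segments,
# assuming the 128 KiB default segment size seen in these tests.
# segment_count() is a made-up helper, not part of Tahoe-LAFS.
import math

DEFAULT_MAX_SEGMENT_SIZE = 128 * 1024  # 131072 bytes

def segment_count(datalen, segsize=DEFAULT_MAX_SEGMENT_SIZE):
    # An empty file still occupies one (empty) segment.
    if datalen == 0:
        return 1
    return math.ceil(datalen / segsize)

# "contents1" * 100000 is 900,000 bytes, so the downloader must
# exercise its multi-segment path:
segment_count(len("contents1") * 100000)  # -> 7
```

Any payload over 131072 bytes therefore forces the multi-segment upload and download code paths that this test targets.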
7937+    def test_retrieve_pause(self):
7938+        # We should make sure that the retriever is able to pause
7939+        # correctly.
7940+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7941+        def _created(node):
7942+            self.node = node
7943+
7944+            return node.overwrite(MutableData("contents1" * 100000))
7945+        d.addCallback(_created)
7946+        # Now we'll retrieve it into a pausing consumer.
7947+        d.addCallback(lambda ignored:
7948+            self.node.get_best_mutable_version())
7949+        def _got_version(version):
7950+            self.c = PausingConsumer()
7951+            return version.read(self.c)
7952+        d.addCallback(_got_version)
7953+        d.addCallback(lambda ignored:
7954+            self.failUnlessEqual(self.c.data, "contents1" * 100000))
7955+        return d
7956+    test_retrieve_pause.timeout = 25
7957+
7958+
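Editor's note: the pause test above depends on Twisted's producer/consumer contract. A hedged, Twisted-free toy (all class names here are invented for illustration) showing the invariant being checked, namely that a mid-stream pause followed by a resume still delivers every byte:

```python
# Toy sketch of the pause/resume contract exercised by
# test_retrieve_pause. ToyProducer and ToyPausingConsumer are
# illustrative stand-ins, not Twisted or Tahoe classes.

class ToyProducer:
    def __init__(self, consumer, segments):
        self.consumer = consumer
        self.segments = list(segments)
        self.paused = False
        consumer.registerProducer(self)

    def resumeProducing(self):
        self.paused = False
        while self.segments and not self.paused:
            self.consumer.write(self.segments.pop(0))

    def pauseProducing(self):
        self.paused = True

class ToyPausingConsumer:
    def __init__(self):
        self.data = b""
        self.already_paused = False

    def registerProducer(self, producer):
        self.producer = producer
        producer.resumeProducing()

    def write(self, data):
        self.data += data
        if not self.already_paused:
            self.already_paused = True
            self.producer.pauseProducing()
            # The real test resumes via reactor.callLater; here we
            # resume synchronously after the single pause.
            self.producer.resumeProducing()

c = ToyPausingConsumer()
ToyProducer(c, [b"seg1", b"seg2", b"seg3"])
# c.data now contains all three segments despite the mid-stream pause.
```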
7959+    def test_download_from_mdmf_cap(self):
7960+        # We should be able to download an MDMF file given its cap
7961+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7962+        def _created(node):
7963+            self.uri = node.get_uri()
7964+
7965+            return node.overwrite(MutableData("contents1" * 100000))
7966+        def _then(ignored):
7967+            node = self.nodemaker.create_from_cap(self.uri)
7968+            return node.download_best_version()
7969+        def _downloaded(data):
7970+            self.failUnlessEqual(data, "contents1" * 100000)
7971+        d.addCallback(_created)
7972+        d.addCallback(_then)
7973+        d.addCallback(_downloaded)
7974+        return d
7975+
7976+
7977+    def test_create_and_download_from_bare_mdmf_cap(self):
7978+        # MDMF caps have extension parameters on them by default. We
7979+        # need to make sure that they work without extension parameters.
7980+        contents = MutableData("contents" * 100000)
7981+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION,
7982+                                               contents=contents)
7983+        def _created(node):
7984+            uri = node.get_uri()
7985+            self._created = node
7986+            self.failUnlessIn(":3:131073", uri)
7987+            # Now strip that off the end of the uri, then try creating
7988+            # and downloading the node again.
7989+            bare_uri = uri.replace(":3:131073", "")
7990+            assert ":3:131073" not in bare_uri
7991+
7992+            return self.nodemaker.create_from_cap(bare_uri)
7993+        d.addCallback(_created)
7994+        def _created_bare(node):
7995+            self.failUnlessEqual(node.get_writekey(),
7996+                                 self._created.get_writekey())
7997+            self.failUnlessEqual(node.get_readkey(),
7998+                                 self._created.get_readkey())
7999+            self.failUnlessEqual(node.get_storage_index(),
8000+                                 self._created.get_storage_index())
8001+            return node.download_best_version()
8002+        d.addCallback(_created_bare)
8003+        d.addCallback(lambda data:
8004+            self.failUnlessEqual(data, "contents" * 100000))
8005+        return d
8006+
8007+
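Editor's note: the "bare cap" test above strips a trailing ":3:131073" (k and segsize hints) from an MDMF cap. A hedged sketch of that split; the cap body is a made-up placeholder and `split_extensions` is an illustrative helper, not Tahoe's cap parser:

```python
# Toy sketch: separating optional trailing ":k:segsize" hint fields
# from an MDMF cap string. The cap below is a fake placeholder.

def split_extensions(cap):
    """Return (bare_cap, hints); hints is {} when the cap is bare."""
    parts = cap.split(":")
    # Assume a cap carrying hints ends in two integer fields: k, segsize.
    if len(parts) >= 2 and parts[-1].isdigit() and parts[-2].isdigit():
        return ":".join(parts[:-2]), {"k": int(parts[-2]),
                                      "segsize": int(parts[-1])}
    return cap, {}

cap = "URI:MDMF:fakewritekey:fakefingerprint:3:131073"
bare, hints = split_extensions(cap)
# bare  -> "URI:MDMF:fakewritekey:fakefingerprint"
# hints -> {"k": 3, "segsize": 131073}
```

The hints are only an optimization; as the test asserts, the bare cap must still yield the same writekey, readkey, and storage index.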
8008+    def test_mdmf_write_count(self):
8009+        # Publishing an MDMF file should only cause one write for each
8010+        # share that is to be published. Otherwise, we introduce
8011+        # undesirable semantics that are a regression from SDMF
8012+        upload = MutableData("MDMF" * 100000) # about 400 KiB
8013+        d = self.nodemaker.create_mutable_file(upload,
8014+                                               version=MDMF_VERSION)
8015+        def _check_server_write_counts(ignored):
8016+            sb = self.nodemaker.storage_broker
8017+            peers = sb.test_servers.values()
8018+            for peer in peers:
8019+                self.failUnlessEqual(peer.queries, 1)
8020+        d.addCallback(_check_server_write_counts)
8021+        return d
8022+
8023+
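Editor's note: the write-count test above asserts one storage query per server, regardless of how many shares or segments that server holds. A hedged toy of that batching property (FakeServer and publish here are invented for illustration, not the test's actual fakes):

```python
# Toy sketch of the one-query-per-server property checked by
# test_mdmf_write_count: a publisher batches all of a server's shares
# into a single write, so each server's query counter is exactly 1.

class FakeServer:
    def __init__(self):
        self.queries = 0
        self.shares = {}

    def write_shares(self, shares):
        self.queries += 1
        self.shares.update(shares)

def publish(servers, shares_by_server):
    for server, shares in zip(servers, shares_by_server):
        # One combined query per server, not one per share or segment.
        server.write_shares(shares)

servers = [FakeServer() for _ in range(3)]
publish(servers, [{0: b"s0", 1: b"s1"}, {2: b"s2"}, {3: b"s3", 4: b"s4"}])
# [s.queries for s in servers] -> [1, 1, 1]
```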
8024     def test_create_with_initial_contents(self):
8025hunk ./src/allmydata/test/test_mutable.py 630
8026-        d = self.nodemaker.create_mutable_file("contents 1")
8027+        upload1 = MutableData("contents 1")
8028+        d = self.nodemaker.create_mutable_file(upload1)
8029         def _created(n):
8030             d = n.download_best_version()
8031             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1"))
8032hunk ./src/allmydata/test/test_mutable.py 635
8033-            d.addCallback(lambda res: n.overwrite("contents 2"))
8034+            upload2 = MutableData("contents 2")
8035+            d.addCallback(lambda res: n.overwrite(upload2))
8036             d.addCallback(lambda res: n.download_best_version())
8037             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
8038             return d
8039hunk ./src/allmydata/test/test_mutable.py 642
8040         d.addCallback(_created)
8041         return d
8042+    test_create_with_initial_contents.timeout = 15
8043+
8044+
8045+    def test_create_mdmf_with_initial_contents(self):
8046+        initial_contents = "foobarbaz" * 131072 # about 1.1 MiB
8047+        initial_contents_uploadable = MutableData(initial_contents)
8048+        d = self.nodemaker.create_mutable_file(initial_contents_uploadable,
8049+                                               version=MDMF_VERSION)
8050+        def _created(n):
8051+            d = n.download_best_version()
8052+            d.addCallback(lambda data:
8053+                self.failUnlessEqual(data, initial_contents))
8054+            uploadable2 = MutableData(initial_contents + "foobarbaz")
8055+            d.addCallback(lambda ignored:
8056+                n.overwrite(uploadable2))
8057+            d.addCallback(lambda ignored:
8058+                n.download_best_version())
8059+            d.addCallback(lambda data:
8060+                self.failUnlessEqual(data, initial_contents +
8061+                                           "foobarbaz"))
8062+            return d
8063+        d.addCallback(_created)
8064+        return d
8065+    test_create_mdmf_with_initial_contents.timeout = 20
8066+
8067 
8068     def test_response_cache_memory_leak(self):
8069         d = self.nodemaker.create_mutable_file("contents")
8070hunk ./src/allmydata/test/test_mutable.py 693
8071             key = n.get_writekey()
8072             self.failUnless(isinstance(key, str), key)
8073             self.failUnlessEqual(len(key), 16) # AES key size
8074-            return data
8075+            return MutableData(data)
8076         d = self.nodemaker.create_mutable_file(_make_contents)
8077         def _created(n):
8078             return n.download_best_version()
8079hunk ./src/allmydata/test/test_mutable.py 701
8080         d.addCallback(lambda data2: self.failUnlessEqual(data2, data))
8081         return d
8082 
8083+
8084+    def test_create_mdmf_with_initial_contents_function(self):
8085+        data = "initial contents" * 100000
8086+        def _make_contents(n):
8087+            self.failUnless(isinstance(n, MutableFileNode))
8088+            key = n.get_writekey()
8089+            self.failUnless(isinstance(key, str), key)
8090+            self.failUnlessEqual(len(key), 16)
8091+            return MutableData(data)
8092+        d = self.nodemaker.create_mutable_file(_make_contents,
8093+                                               version=MDMF_VERSION)
8094+        d.addCallback(lambda n:
8095+            n.download_best_version())
8096+        d.addCallback(lambda data2:
8097+            self.failUnlessEqual(data2, data))
8098+        return d
8099+
8100+
8101     def test_create_with_too_large_contents(self):
8102         BIG = "a" * (self.OLD_MAX_SEGMENT_SIZE + 1)
8103hunk ./src/allmydata/test/test_mutable.py 721
8104-        d = self.nodemaker.create_mutable_file(BIG)
8105+        BIG_uploadable = MutableData(BIG)
8106+        d = self.nodemaker.create_mutable_file(BIG_uploadable)
8107         def _created(n):
8108hunk ./src/allmydata/test/test_mutable.py 724
8109-            d = n.overwrite(BIG)
8110+            other_BIG_uploadable = MutableData(BIG)
8111+            d = n.overwrite(other_BIG_uploadable)
8112             return d
8113         d.addCallback(_created)
8114         return d
8115hunk ./src/allmydata/test/test_mutable.py 739
8116 
8117     def test_modify(self):
8118         def _modifier(old_contents, servermap, first_time):
8119-            return old_contents + "line2"
8120+            new_contents = old_contents + "line2"
8121+            return new_contents
8122         def _non_modifier(old_contents, servermap, first_time):
8123             return old_contents
8124         def _none_modifier(old_contents, servermap, first_time):
8125hunk ./src/allmydata/test/test_mutable.py 748
8126         def _error_modifier(old_contents, servermap, first_time):
8127             raise ValueError("oops")
8128         def _toobig_modifier(old_contents, servermap, first_time):
8129-            return "b" * (self.OLD_MAX_SEGMENT_SIZE+1)
8130+            new_content = "b" * (self.OLD_MAX_SEGMENT_SIZE + 1)
8131+            return new_content
8132         calls = []
8133         def _ucw_error_modifier(old_contents, servermap, first_time):
8134             # simulate an UncoordinatedWriteError once
8135hunk ./src/allmydata/test/test_mutable.py 756
8136             calls.append(1)
8137             if len(calls) <= 1:
8138                 raise UncoordinatedWriteError("simulated")
8139-            return old_contents + "line3"
8140+            new_contents = old_contents + "line3"
8141+            return new_contents
8142         def _ucw_error_non_modifier(old_contents, servermap, first_time):
8143             # simulate an UncoordinatedWriteError once, and don't actually
8144             # modify the contents on subsequent invocations
8145hunk ./src/allmydata/test/test_mutable.py 766
8146                 raise UncoordinatedWriteError("simulated")
8147             return old_contents
8148 
8149-        d = self.nodemaker.create_mutable_file("line1")
8150+        initial_contents = "line1"
8151+        d = self.nodemaker.create_mutable_file(MutableData(initial_contents))
8152         def _created(n):
8153             d = n.modify(_modifier)
8154             d.addCallback(lambda res: n.download_best_version())
8155hunk ./src/allmydata/test/test_mutable.py 824
8156             return d
8157         d.addCallback(_created)
8158         return d
8159+    test_modify.timeout = 15
8160+
8161 
8162     def test_modify_backoffer(self):
8163         def _modifier(old_contents, servermap, first_time):
8164hunk ./src/allmydata/test/test_mutable.py 851
8165         giveuper._delay = 0.1
8166         giveuper.factor = 1
8167 
8168-        d = self.nodemaker.create_mutable_file("line1")
8169+        d = self.nodemaker.create_mutable_file(MutableData("line1"))
8170         def _created(n):
8171             d = n.modify(_modifier)
8172             d.addCallback(lambda res: n.download_best_version())
8173hunk ./src/allmydata/test/test_mutable.py 901
8174             d.addCallback(lambda smap: smap.dump(StringIO()))
8175             d.addCallback(lambda sio:
8176                           self.failUnless("3-of-10" in sio.getvalue()))
8177-            d.addCallback(lambda res: n.overwrite("contents 1"))
8178+            d.addCallback(lambda res: n.overwrite(MutableData("contents 1")))
8179             d.addCallback(lambda res: self.failUnlessIdentical(res, None))
8180             d.addCallback(lambda res: n.download_best_version())
8181             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1"))
8182hunk ./src/allmydata/test/test_mutable.py 905
8183-            d.addCallback(lambda res: n.overwrite("contents 2"))
8184+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
8185             d.addCallback(lambda res: n.download_best_version())
8186             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
8187             d.addCallback(lambda res: n.get_servermap(MODE_WRITE))
8188hunk ./src/allmydata/test/test_mutable.py 909
8189-            d.addCallback(lambda smap: n.upload("contents 3", smap))
8190+            d.addCallback(lambda smap: n.upload(MutableData("contents 3"), smap))
8191             d.addCallback(lambda res: n.download_best_version())
8192             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 3"))
8193             d.addCallback(lambda res: n.get_servermap(MODE_ANYTHING))
8194hunk ./src/allmydata/test/test_mutable.py 922
8195         return d
8196 
8197 
8198-class MakeShares(unittest.TestCase):
8199-    def test_encrypt(self):
8200-        nm = make_nodemaker()
8201-        CONTENTS = "some initial contents"
8202-        d = nm.create_mutable_file(CONTENTS)
8203-        def _created(fn):
8204-            p = Publish(fn, nm.storage_broker, None)
8205-            p.salt = "SALT" * 4
8206-            p.readkey = "\x00" * 16
8207-            p.newdata = CONTENTS
8208-            p.required_shares = 3
8209-            p.total_shares = 10
8210-            p.setup_encoding_parameters()
8211-            return p._encrypt_and_encode()
8212+    def test_size_after_servermap_update(self):
8213+        # a mutable file node should have something to say about how big
8214+        # it is after a servermap update is performed, since this tells
8215+        # us how large the best version of that mutable file is.
8216+        d = self.nodemaker.create_mutable_file()
8217+        def _created(n):
8218+            self.n = n
8219+            return n.get_servermap(MODE_READ)
8220         d.addCallback(_created)
8221hunk ./src/allmydata/test/test_mutable.py 931
8222-        def _done(shares_and_shareids):
8223-            (shares, share_ids) = shares_and_shareids
8224-            self.failUnlessEqual(len(shares), 10)
8225-            for sh in shares:
8226-                self.failUnless(isinstance(sh, str))
8227-                self.failUnlessEqual(len(sh), 7)
8228-            self.failUnlessEqual(len(share_ids), 10)
8229-        d.addCallback(_done)
8230-        return d
8231-
8232-    def test_generate(self):
8233-        nm = make_nodemaker()
8234-        CONTENTS = "some initial contents"
8235-        d = nm.create_mutable_file(CONTENTS)
8236-        def _created(fn):
8237-            self._fn = fn
8238-            p = Publish(fn, nm.storage_broker, None)
8239-            self._p = p
8240-            p.newdata = CONTENTS
8241-            p.required_shares = 3
8242-            p.total_shares = 10
8243-            p.setup_encoding_parameters()
8244-            p._new_seqnum = 3
8245-            p.salt = "SALT" * 4
8246-            # make some fake shares
8247-            shares_and_ids = ( ["%07d" % i for i in range(10)], range(10) )
8248-            p._privkey = fn.get_privkey()
8249-            p._encprivkey = fn.get_encprivkey()
8250-            p._pubkey = fn.get_pubkey()
8251-            return p._generate_shares(shares_and_ids)
8252+        d.addCallback(lambda ignored:
8253+            self.failUnlessEqual(self.n.get_size(), 0))
8254+        d.addCallback(lambda ignored:
8255+            self.n.overwrite(MutableData("foobarbaz")))
8256+        d.addCallback(lambda ignored:
8257+            self.failUnlessEqual(self.n.get_size(), 9))
8258+        d.addCallback(lambda ignored:
8259+            self.nodemaker.create_mutable_file(MutableData("foobarbaz")))
8260         d.addCallback(_created)
8261hunk ./src/allmydata/test/test_mutable.py 940
8262-        def _generated(res):
8263-            p = self._p
8264-            final_shares = p.shares
8265-            root_hash = p.root_hash
8266-            self.failUnlessEqual(len(root_hash), 32)
8267-            self.failUnless(isinstance(final_shares, dict))
8268-            self.failUnlessEqual(len(final_shares), 10)
8269-            self.failUnlessEqual(sorted(final_shares.keys()), range(10))
8270-            for i,sh in final_shares.items():
8271-                self.failUnless(isinstance(sh, str))
8272-                # feed the share through the unpacker as a sanity-check
8273-                pieces = unpack_share(sh)
8274-                (u_seqnum, u_root_hash, IV, k, N, segsize, datalen,
8275-                 pubkey, signature, share_hash_chain, block_hash_tree,
8276-                 share_data, enc_privkey) = pieces
8277-                self.failUnlessEqual(u_seqnum, 3)
8278-                self.failUnlessEqual(u_root_hash, root_hash)
8279-                self.failUnlessEqual(k, 3)
8280-                self.failUnlessEqual(N, 10)
8281-                self.failUnlessEqual(segsize, 21)
8282-                self.failUnlessEqual(datalen, len(CONTENTS))
8283-                self.failUnlessEqual(pubkey, p._pubkey.serialize())
8284-                sig_material = struct.pack(">BQ32s16s BBQQ",
8285-                                           0, p._new_seqnum, root_hash, IV,
8286-                                           k, N, segsize, datalen)
8287-                self.failUnless(p._pubkey.verify(sig_material, signature))
8288-                #self.failUnlessEqual(signature, p._privkey.sign(sig_material))
8289-                self.failUnless(isinstance(share_hash_chain, dict))
8290-                self.failUnlessEqual(len(share_hash_chain), 4) # ln2(10)++
8291-                for shnum,share_hash in share_hash_chain.items():
8292-                    self.failUnless(isinstance(shnum, int))
8293-                    self.failUnless(isinstance(share_hash, str))
8294-                    self.failUnlessEqual(len(share_hash), 32)
8295-                self.failUnless(isinstance(block_hash_tree, list))
8296-                self.failUnlessEqual(len(block_hash_tree), 1) # very small tree
8297-                self.failUnlessEqual(IV, "SALT"*4)
8298-                self.failUnlessEqual(len(share_data), len("%07d" % 1))
8299-                self.failUnlessEqual(enc_privkey, self._fn.get_encprivkey())
8300-        d.addCallback(_generated)
8301+        d.addCallback(lambda ignored:
8302+            self.failUnlessEqual(self.n.get_size(), 9))
8303         return d
8304 
8305hunk ./src/allmydata/test/test_mutable.py 944
8306-    # TODO: when we publish to 20 peers, we should get one share per peer on 10
8307-    # when we publish to 3 peers, we should get either 3 or 4 shares per peer
8308-    # when we publish to zero peers, we should get a NotEnoughSharesError
8309 
8310 class PublishMixin:
8311     def publish_one(self):
8312hunk ./src/allmydata/test/test_mutable.py 950
8313         # publish a file and create shares, which can then be manipulated
8314         # later.
8315         self.CONTENTS = "New contents go here" * 1000
8316+        self.uploadable = MutableData(self.CONTENTS)
8317+        self._storage = FakeStorage()
8318+        self._nodemaker = make_nodemaker(self._storage)
8319+        self._storage_broker = self._nodemaker.storage_broker
8320+        d = self._nodemaker.create_mutable_file(self.uploadable)
8321+        def _created(node):
8322+            self._fn = node
8323+            self._fn2 = self._nodemaker.create_from_cap(node.get_uri())
8324+        d.addCallback(_created)
8325+        return d
8326+
8327+    def publish_mdmf(self):
8328+        # like publish_one, except that the result is guaranteed to be
8329+        # an MDMF file.
8330+        # self.CONTENTS should have more than one segment.
8331+        self.CONTENTS = "This is an MDMF file" * 100000
8332+        self.uploadable = MutableData(self.CONTENTS)
8333+        self._storage = FakeStorage()
8334+        self._nodemaker = make_nodemaker(self._storage)
8335+        self._storage_broker = self._nodemaker.storage_broker
8336+        d = self._nodemaker.create_mutable_file(self.uploadable, version=MDMF_VERSION)
8337+        def _created(node):
8338+            self._fn = node
8339+            self._fn2 = self._nodemaker.create_from_cap(node.get_uri())
8340+        d.addCallback(_created)
8341+        return d
8342+
8343+
8344+    def publish_sdmf(self):
8345+        # like publish_one, except that the result is guaranteed to be
8346+        # an SDMF file
8347+        self.CONTENTS = "This is an SDMF file" * 1000
8348+        self.uploadable = MutableData(self.CONTENTS)
8349         self._storage = FakeStorage()
8350         self._nodemaker = make_nodemaker(self._storage)
8351         self._storage_broker = self._nodemaker.storage_broker
8352hunk ./src/allmydata/test/test_mutable.py 986
8353-        d = self._nodemaker.create_mutable_file(self.CONTENTS)
8354+        d = self._nodemaker.create_mutable_file(self.uploadable, version=SDMF_VERSION)
8355         def _created(node):
8356             self._fn = node
8357             self._fn2 = self._nodemaker.create_from_cap(node.get_uri())
8358hunk ./src/allmydata/test/test_mutable.py 993
8359         d.addCallback(_created)
8360         return d
8361 
8362-    def publish_multiple(self):
8363+
8364+    def publish_multiple(self, version=0):
8365         self.CONTENTS = ["Contents 0",
8366                          "Contents 1",
8367                          "Contents 2",
8368hunk ./src/allmydata/test/test_mutable.py 1000
8369                          "Contents 3a",
8370                          "Contents 3b"]
8371+        self.uploadables = [MutableData(d) for d in self.CONTENTS]
8372         self._copied_shares = {}
8373         self._storage = FakeStorage()
8374         self._nodemaker = make_nodemaker(self._storage)
8375hunk ./src/allmydata/test/test_mutable.py 1004
8376-        d = self._nodemaker.create_mutable_file(self.CONTENTS[0]) # seqnum=1
8377+        d = self._nodemaker.create_mutable_file(self.uploadables[0], version=version) # seqnum=1
8378         def _created(node):
8379             self._fn = node
8380             # now create multiple versions of the same file, and accumulate
8381hunk ./src/allmydata/test/test_mutable.py 1011
8382             # their shares, so we can mix and match them later.
8383             d = defer.succeed(None)
8384             d.addCallback(self._copy_shares, 0)
8385-            d.addCallback(lambda res: node.overwrite(self.CONTENTS[1])) #s2
8386+            d.addCallback(lambda res: node.overwrite(self.uploadables[1])) #s2
8387             d.addCallback(self._copy_shares, 1)
8388hunk ./src/allmydata/test/test_mutable.py 1013
8389-            d.addCallback(lambda res: node.overwrite(self.CONTENTS[2])) #s3
8390+            d.addCallback(lambda res: node.overwrite(self.uploadables[2])) #s3
8391             d.addCallback(self._copy_shares, 2)
8392hunk ./src/allmydata/test/test_mutable.py 1015
8393-            d.addCallback(lambda res: node.overwrite(self.CONTENTS[3])) #s4a
8394+            d.addCallback(lambda res: node.overwrite(self.uploadables[3])) #s4a
8395             d.addCallback(self._copy_shares, 3)
8396             # now we replace all the shares with version s3, and upload a new
8397             # version to get s4b.
8398hunk ./src/allmydata/test/test_mutable.py 1021
8399             rollback = dict([(i,2) for i in range(10)])
8400             d.addCallback(lambda res: self._set_versions(rollback))
8401-            d.addCallback(lambda res: node.overwrite(self.CONTENTS[4])) #s4b
8402+            d.addCallback(lambda res: node.overwrite(self.uploadables[4])) #s4b
8403             d.addCallback(self._copy_shares, 4)
8404             # we leave the storage in state 4
8405             return d
8406hunk ./src/allmydata/test/test_mutable.py 1028
8407         d.addCallback(_created)
8408         return d
8409 
8410+
8411     def _copy_shares(self, ignored, index):
8412         shares = self._storage._peers
8413         # we need a deep copy
8414hunk ./src/allmydata/test/test_mutable.py 1051
8415                     index = versionmap[shnum]
8416                     shares[peerid][shnum] = oldshares[index][peerid][shnum]
8417 
8418+class PausingConsumer:
8419+    implements(IConsumer)
8420+    def __init__(self):
8421+        self.data = ""
8422+        self.already_paused = False
8423+
8424+    def registerProducer(self, producer, streaming):
8425+        self.producer = producer
8426+        self.producer.resumeProducing()
8427+
8428+    def unregisterProducer(self):
8429+        self.producer = None
8430+
8431+    def _unpause(self, ignored):
8432+        self.producer.resumeProducing()
8433+
8434+    def write(self, data):
8435+        self.data += data
8436+        if not self.already_paused:
8437+            self.producer.pauseProducing()
8438+            self.already_paused = True
8439+            reactor.callLater(15, self._unpause, None)
8440+
8441 
8442 class Servermap(unittest.TestCase, PublishMixin):
8443     def setUp(self):
8444hunk ./src/allmydata/test/test_mutable.py 1079
8445         return self.publish_one()
8446 
8447-    def make_servermap(self, mode=MODE_CHECK, fn=None, sb=None):
8448+    def make_servermap(self, mode=MODE_CHECK, fn=None, sb=None,
8449+                       update_range=None):
8450         if fn is None:
8451             fn = self._fn
8452         if sb is None:
8453hunk ./src/allmydata/test/test_mutable.py 1086
8454             sb = self._storage_broker
8455         smu = ServermapUpdater(fn, sb, Monitor(),
8456-                               ServerMap(), mode)
8457+                               ServerMap(), mode, update_range=update_range)
8458         d = smu.update()
8459         return d
8460 
8461hunk ./src/allmydata/test/test_mutable.py 1152
8462         # create a new file, which is large enough to knock the privkey out
8463         # of the early part of the file
8464         LARGE = "These are Larger contents" * 200 # about 5KB
8465-        d.addCallback(lambda res: self._nodemaker.create_mutable_file(LARGE))
8466+        LARGE_uploadable = MutableData(LARGE)
8467+        d.addCallback(lambda res: self._nodemaker.create_mutable_file(LARGE_uploadable))
8468         def _created(large_fn):
8469             large_fn2 = self._nodemaker.create_from_cap(large_fn.get_uri())
8470             return self.make_servermap(MODE_WRITE, large_fn2)
8471hunk ./src/allmydata/test/test_mutable.py 1161
8472         d.addCallback(lambda sm: self.failUnlessOneRecoverable(sm, 10))
8473         return d
8474 
8475+
8476     def test_mark_bad(self):
8477         d = defer.succeed(None)
8478         ms = self.make_servermap
8479hunk ./src/allmydata/test/test_mutable.py 1207
8480         self._storage._peers = {} # delete all shares
8481         ms = self.make_servermap
8482         d = defer.succeed(None)
8483-
8484+#
8485         d.addCallback(lambda res: ms(mode=MODE_CHECK))
8486         d.addCallback(lambda sm: self.failUnlessNoneRecoverable(sm))
8487 
8488hunk ./src/allmydata/test/test_mutable.py 1259
8489         return d
8490 
8491 
8492+    def test_servermapupdater_finds_mdmf_files(self):
8493+        # setUp already published an MDMF file for us. We just need to
8494+        # make sure that when we run the ServermapUpdater, the file is
8495+        # reported to have one recoverable version.
8496+        d = defer.succeed(None)
8497+        d.addCallback(lambda ignored:
8498+            self.publish_mdmf())
8499+        d.addCallback(lambda ignored:
8500+            self.make_servermap(mode=MODE_CHECK))
8501+        # Calling make_servermap also updates the servermap in the mode
8502+        # that we specify, so we just need to see what it says.
8503+        def _check_servermap(sm):
8504+            self.failUnlessEqual(len(sm.recoverable_versions()), 1)
8505+        d.addCallback(_check_servermap)
8506+        return d
8507+
8508+
8509+    def test_fetch_update(self):
8510+        d = defer.succeed(None)
8511+        d.addCallback(lambda ignored:
8512+            self.publish_mdmf())
8513+        d.addCallback(lambda ignored:
8514+            self.make_servermap(mode=MODE_WRITE, update_range=(1, 2)))
8515+        def _check_servermap(sm):
8516+            # 10 shares
8517+            self.failUnlessEqual(len(sm.update_data), 10)
8518+            # one version
8519+            for data in sm.update_data.itervalues():
8520+                self.failUnlessEqual(len(data), 1)
8521+        d.addCallback(_check_servermap)
8522+        return d
8523+
8524+
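Editor's note: test_fetch_update passes `update_range=(1, 2)` so the servermap update also fetches the data needed for an in-place write over that byte range. A hedged sketch of mapping a byte range onto segment indices, assuming a fixed segment size; `segments_for_range` is illustrative, not the servermap's actual method:

```python
# Toy sketch: which segments an update over bytes [offset, end) must
# touch, assuming a fixed 128 KiB segment size.

SEGSIZE = 131072

def segments_for_range(offset, end, segsize=SEGSIZE):
    first = offset // segsize
    last = max(first, (end - 1) // segsize)
    return first, last

segments_for_range(1, 2)            # -> (0, 0): both ends in segment 0
segments_for_range(131071, 131073)  # -> (0, 1): straddles two segments
```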
8525+    def test_servermapupdater_finds_sdmf_files(self):
8526+        d = defer.succeed(None)
8527+        d.addCallback(lambda ignored:
8528+            self.publish_sdmf())
8529+        d.addCallback(lambda ignored:
8530+            self.make_servermap(mode=MODE_CHECK))
8531+        d.addCallback(lambda servermap:
8532+            self.failUnlessEqual(len(servermap.recoverable_versions()), 1))
8533+        return d
8534+
8535 
8536 class Roundtrip(unittest.TestCase, testutil.ShouldFailMixin, PublishMixin):
8537     def setUp(self):
8538hunk ./src/allmydata/test/test_mutable.py 1342
8539         if version is None:
8540             version = servermap.best_recoverable_version()
8541         r = Retrieve(self._fn, servermap, version)
8542-        return r.download()
8543+        c = consumer.MemoryConsumer()
8544+        d = r.download(consumer=c)
8545+        d.addCallback(lambda mc: "".join(mc.chunks))
8546+        return d
8547+
8548 
8549     def test_basic(self):
8550         d = self.make_servermap()
8551hunk ./src/allmydata/test/test_mutable.py 1423
8552         return d
8553     test_no_servers_download.timeout = 15
8554 
8555+
8556     def _test_corrupt_all(self, offset, substring,
8557hunk ./src/allmydata/test/test_mutable.py 1425
8558-                          should_succeed=False, corrupt_early=True,
8559-                          failure_checker=None):
8560+                          should_succeed=False,
8561+                          corrupt_early=True,
8562+                          failure_checker=None,
8563+                          fetch_privkey=False):
8564         d = defer.succeed(None)
8565         if corrupt_early:
8566             d.addCallback(corrupt, self._storage, offset)
8567hunk ./src/allmydata/test/test_mutable.py 1445
8568                     self.failUnlessIn(substring, "".join(allproblems))
8569                 return servermap
8570             if should_succeed:
8571-                d1 = self._fn.download_version(servermap, ver)
8572+                d1 = self._fn.download_version(servermap, ver,
8573+                                               fetch_privkey)
8574                 d1.addCallback(lambda new_contents:
8575                                self.failUnlessEqual(new_contents, self.CONTENTS))
8576             else:
8577hunk ./src/allmydata/test/test_mutable.py 1453
8578                 d1 = self.shouldFail(NotEnoughSharesError,
8579                                      "_corrupt_all(offset=%s)" % (offset,),
8580                                      substring,
8581-                                     self._fn.download_version, servermap, ver)
8582+                                     self._fn.download_version, servermap,
8583+                                                                ver,
8584+                                                                fetch_privkey)
8585             if failure_checker:
8586                 d1.addCallback(failure_checker)
8587             d1.addCallback(lambda res: servermap)
8588hunk ./src/allmydata/test/test_mutable.py 1464
8589         return d
8590 
8591     def test_corrupt_all_verbyte(self):
8592-        # when the version byte is not 0, we hit an UnknownVersionError error
8593-        # in unpack_share().
8594+        # when the version byte is not 0 or 1, we hit an UnknownVersionError
8595+        # error in unpack_share().
8596         d = self._test_corrupt_all(0, "UnknownVersionError")
8597         def _check_servermap(servermap):
8598             # and the dump should mention the problems
8599hunk ./src/allmydata/test/test_mutable.py 1471
8600             s = StringIO()
8601             dump = servermap.dump(s).getvalue()
8602-            self.failUnless("10 PROBLEMS" in dump, dump)
8603+            self.failUnless("30 PROBLEMS" in dump, dump)
8604         d.addCallback(_check_servermap)
8605         return d
8606 
8607hunk ./src/allmydata/test/test_mutable.py 1541
8608         return self._test_corrupt_all("enc_privkey", None, should_succeed=True)
8609 
8610 
8611+    def test_corrupt_all_encprivkey_late(self):
8612+        # this should work for the same reason as above, but we corrupt
8613+        # after the servermap update to exercise the error handling
8614+        # code.
8615+        # We need to remove the privkey from the node, or the retrieve
8616+        # process won't know to update it.
8617+        self._fn._privkey = None
8618+        return self._test_corrupt_all("enc_privkey",
8619+                                      None, # this shouldn't fail
8620+                                      should_succeed=True,
8621+                                      corrupt_early=False,
8622+                                      fetch_privkey=True)
8623+
8624+
8625     def test_corrupt_all_seqnum_late(self):
8626         # corrupting the seqnum between mapupdate and retrieve should result
8627         # in NotEnoughSharesError, since each share will look invalid
8628hunk ./src/allmydata/test/test_mutable.py 1561
8629         def _check(res):
8630             f = res[0]
8631             self.failUnless(f.check(NotEnoughSharesError))
8632-            self.failUnless("someone wrote to the data since we read the servermap" in str(f))
8633+            self.failUnless("uncoordinated write" in str(f))
8634         return self._test_corrupt_all(1, "ran out of peers",
8635                                       corrupt_early=False,
8636                                       failure_checker=_check)
8637hunk ./src/allmydata/test/test_mutable.py 1605
8638                             in str(servermap.problems[0]))
8639             ver = servermap.best_recoverable_version()
8640             r = Retrieve(self._fn, servermap, ver)
8641-            return r.download()
8642+            c = consumer.MemoryConsumer()
8643+            return r.download(c)
8644         d.addCallback(_do_retrieve)
8645hunk ./src/allmydata/test/test_mutable.py 1608
8646+        d.addCallback(lambda mc: "".join(mc.chunks))
8647         d.addCallback(lambda new_contents:
8648                       self.failUnlessEqual(new_contents, self.CONTENTS))
8649         return d
8650hunk ./src/allmydata/test/test_mutable.py 1613
8651 
8652-    def test_corrupt_some(self):
8653-        # corrupt the data of first five shares (so the servermap thinks
8654-        # they're good but retrieve marks them as bad), so that the
8655-        # MODE_READ set of 6 will be insufficient, forcing node.download to
8656-        # retry with more servers.
8657-        corrupt(None, self._storage, "share_data", range(5))
8658-        d = self.make_servermap()
8659+
8660+    def _test_corrupt_some(self, offset, mdmf=False):
8661+        if mdmf:
8662+            d = self.publish_mdmf()
8663+        else:
8664+            d = defer.succeed(None)
8665+        d.addCallback(lambda ignored:
8666+            corrupt(None, self._storage, offset, range(5)))
8667+        d.addCallback(lambda ignored:
8668+            self.make_servermap())
8669         def _do_retrieve(servermap):
8670             ver = servermap.best_recoverable_version()
8671             self.failUnless(ver)
8672hunk ./src/allmydata/test/test_mutable.py 1629
8673             return self._fn.download_best_version()
8674         d.addCallback(_do_retrieve)
8675         d.addCallback(lambda new_contents:
8676-                      self.failUnlessEqual(new_contents, self.CONTENTS))
8677+            self.failUnlessEqual(new_contents, self.CONTENTS))
8678         return d
8679 
8680hunk ./src/allmydata/test/test_mutable.py 1632
8681+
8682+    def test_corrupt_some(self):
8683+        # corrupt the data of first five shares (so the servermap thinks
8684+        # they're good but retrieve marks them as bad), so that the
8685+        # MODE_READ set of 6 will be insufficient, forcing node.download to
8686+        # retry with more servers.
8687+        return self._test_corrupt_some("share_data")
8688+
8689+
8690     def test_download_fails(self):
8691hunk ./src/allmydata/test/test_mutable.py 1642
8692-        corrupt(None, self._storage, "signature")
8693-        d = self.shouldFail(UnrecoverableFileError, "test_download_anyway",
8694+        d = corrupt(None, self._storage, "signature")
8695+        d.addCallback(lambda ignored:
8696+            self.shouldFail(UnrecoverableFileError, "test_download_anyway",
8697                             "no recoverable versions",
8698hunk ./src/allmydata/test/test_mutable.py 1646
8699-                            self._fn.download_best_version)
8700+                            self._fn.download_best_version))
8701+        return d
8702+
8703+
8704+
8705+    def test_corrupt_mdmf_block_hash_tree(self):
8706+        d = self.publish_mdmf()
8707+        d.addCallback(lambda ignored:
8708+            self._test_corrupt_all(("block_hash_tree", 12 * 32),
8709+                                   "block hash tree failure",
8710+                                   corrupt_early=True,
8711+                                   should_succeed=False))
8712         return d
8713 
8714 
8715hunk ./src/allmydata/test/test_mutable.py 1661
8716+    def test_corrupt_mdmf_block_hash_tree_late(self):
8717+        d = self.publish_mdmf()
8718+        d.addCallback(lambda ignored:
8719+            self._test_corrupt_all(("block_hash_tree", 12 * 32),
8720+                                   "block hash tree failure",
8721+                                   corrupt_early=False,
8722+                                   should_succeed=False))
8723+        return d
8724+
8725+
8726+    def test_corrupt_mdmf_share_data(self):
8727+        d = self.publish_mdmf()
8728+        d.addCallback(lambda ignored:
8729+            # TODO: Find out what the block size is and corrupt a
8730+            # specific block, rather than just guessing.
8731+            self._test_corrupt_all(("share_data", 12 * 40),
8732+                                    "block hash tree failure",
8733+                                    corrupt_early=True,
8734+                                    should_succeed=False))
8735+        return d
8736+
8737+
8738+    def test_corrupt_some_mdmf(self):
8739+        return self._test_corrupt_some(("share_data", 12 * 40),
8740+                                       mdmf=True)
8741+
8742+
8743 class CheckerMixin:
8744     def check_good(self, r, where):
8745         self.failUnless(r.is_healthy(), where)
8746hunk ./src/allmydata/test/test_mutable.py 1718
8747         d.addCallback(self.check_good, "test_check_good")
8748         return d
8749 
8750+    def test_check_mdmf_good(self):
8751+        d = self.publish_mdmf()
8752+        d.addCallback(lambda ignored:
8753+            self._fn.check(Monitor()))
8754+        d.addCallback(self.check_good, "test_check_mdmf_good")
8755+        return d
8756+
8757     def test_check_no_shares(self):
8758         for shares in self._storage._peers.values():
8759             shares.clear()
8760hunk ./src/allmydata/test/test_mutable.py 1732
8761         d.addCallback(self.check_bad, "test_check_no_shares")
8762         return d
8763 
8764+    def test_check_mdmf_no_shares(self):
8765+        d = self.publish_mdmf()
8766+        def _then(ignored):
8767+            for shares in self._storage._peers.values():
8768+                shares.clear()
8769+        d.addCallback(_then)
8770+        d.addCallback(lambda ignored:
8771+            self._fn.check(Monitor()))
8772+        d.addCallback(self.check_bad, "test_check_mdmf_no_shares")
8773+        return d
8774+
8775     def test_check_not_enough_shares(self):
8776         for shares in self._storage._peers.values():
8777             for shnum in shares.keys():
8778hunk ./src/allmydata/test/test_mutable.py 1752
8779         d.addCallback(self.check_bad, "test_check_not_enough_shares")
8780         return d
8781 
8782+    def test_check_mdmf_not_enough_shares(self):
8783+        d = self.publish_mdmf()
8784+        def _then(ignored):
8785+            for shares in self._storage._peers.values():
8786+                for shnum in shares.keys():
8787+                    if shnum > 0:
8788+                        del shares[shnum]
8789+        d.addCallback(_then)
8790+        d.addCallback(lambda ignored:
8791+            self._fn.check(Monitor()))
8792+            d.addCallback(self.check_bad, "test_check_mdmf_not_enough_shares")
8793+        return d
8794+
8795+
8796     def test_check_all_bad_sig(self):
8797hunk ./src/allmydata/test/test_mutable.py 1767
8798-        corrupt(None, self._storage, 1) # bad sig
8799-        d = self._fn.check(Monitor())
8800+        d = corrupt(None, self._storage, 1) # bad sig
8801+        d.addCallback(lambda ignored:
8802+            self._fn.check(Monitor()))
8803         d.addCallback(self.check_bad, "test_check_all_bad_sig")
8804         return d
8805 
8806hunk ./src/allmydata/test/test_mutable.py 1773
8807+    def test_check_mdmf_all_bad_sig(self):
8808+        d = self.publish_mdmf()
8809+        d.addCallback(lambda ignored:
8810+            corrupt(None, self._storage, 1))
8811+        d.addCallback(lambda ignored:
8812+            self._fn.check(Monitor()))
8813+        d.addCallback(self.check_bad, "test_check_mdmf_all_bad_sig")
8814+        return d
8815+
8816     def test_check_all_bad_blocks(self):
8817hunk ./src/allmydata/test/test_mutable.py 1783
8818-        corrupt(None, self._storage, "share_data", [9]) # bad blocks
8819+        d = corrupt(None, self._storage, "share_data", [9]) # bad blocks
8820         # the Checker won't notice this.. it doesn't look at actual data
8821hunk ./src/allmydata/test/test_mutable.py 1785
8822-        d = self._fn.check(Monitor())
8823+        d.addCallback(lambda ignored:
8824+            self._fn.check(Monitor()))
8825         d.addCallback(self.check_good, "test_check_all_bad_blocks")
8826         return d
8827 
8828hunk ./src/allmydata/test/test_mutable.py 1790
8829+
8830+    def test_check_mdmf_all_bad_blocks(self):
8831+        d = self.publish_mdmf()
8832+        d.addCallback(lambda ignored:
8833+            corrupt(None, self._storage, "share_data"))
8834+        d.addCallback(lambda ignored:
8835+            self._fn.check(Monitor()))
8836+        d.addCallback(self.check_good, "test_check_mdmf_all_bad_blocks")
8837+        return d
8838+
8839     def test_verify_good(self):
8840         d = self._fn.check(Monitor(), verify=True)
8841         d.addCallback(self.check_good, "test_verify_good")
8842hunk ./src/allmydata/test/test_mutable.py 1804
8843         return d
8844+    test_verify_good.timeout = 15
8845 
8846     def test_verify_all_bad_sig(self):
8847hunk ./src/allmydata/test/test_mutable.py 1807
8848-        corrupt(None, self._storage, 1) # bad sig
8849-        d = self._fn.check(Monitor(), verify=True)
8850+        d = corrupt(None, self._storage, 1) # bad sig
8851+        d.addCallback(lambda ignored:
8852+            self._fn.check(Monitor(), verify=True))
8853         d.addCallback(self.check_bad, "test_verify_all_bad_sig")
8854         return d
8855 
8856hunk ./src/allmydata/test/test_mutable.py 1814
8857     def test_verify_one_bad_sig(self):
8858-        corrupt(None, self._storage, 1, [9]) # bad sig
8859-        d = self._fn.check(Monitor(), verify=True)
8860+        d = corrupt(None, self._storage, 1, [9]) # bad sig
8861+        d.addCallback(lambda ignored:
8862+            self._fn.check(Monitor(), verify=True))
8863         d.addCallback(self.check_bad, "test_verify_one_bad_sig")
8864         return d
8865 
8866hunk ./src/allmydata/test/test_mutable.py 1821
8867     def test_verify_one_bad_block(self):
8868-        corrupt(None, self._storage, "share_data", [9]) # bad blocks
8869+        d = corrupt(None, self._storage, "share_data", [9]) # bad blocks
8870         # the Verifier *will* notice this, since it examines every byte
8871hunk ./src/allmydata/test/test_mutable.py 1823
8872-        d = self._fn.check(Monitor(), verify=True)
8873+        d.addCallback(lambda ignored:
8874+            self._fn.check(Monitor(), verify=True))
8875         d.addCallback(self.check_bad, "test_verify_one_bad_block")
8876         d.addCallback(self.check_expected_failure,
8877                       CorruptShareError, "block hash tree failure",
8878hunk ./src/allmydata/test/test_mutable.py 1832
8879         return d
8880 
8881     def test_verify_one_bad_sharehash(self):
8882-        corrupt(None, self._storage, "share_hash_chain", [9], 5)
8883-        d = self._fn.check(Monitor(), verify=True)
8884+        d = corrupt(None, self._storage, "share_hash_chain", [9], 5)
8885+        d.addCallback(lambda ignored:
8886+            self._fn.check(Monitor(), verify=True))
8887         d.addCallback(self.check_bad, "test_verify_one_bad_sharehash")
8888         d.addCallback(self.check_expected_failure,
8889                       CorruptShareError, "corrupt hashes",
8890hunk ./src/allmydata/test/test_mutable.py 1842
8891         return d
8892 
8893     def test_verify_one_bad_encprivkey(self):
8894-        corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey
8895-        d = self._fn.check(Monitor(), verify=True)
8896+        d = corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey
8897+        d.addCallback(lambda ignored:
8898+            self._fn.check(Monitor(), verify=True))
8899         d.addCallback(self.check_bad, "test_verify_one_bad_encprivkey")
8900         d.addCallback(self.check_expected_failure,
8901                       CorruptShareError, "invalid privkey",
8902hunk ./src/allmydata/test/test_mutable.py 1852
8903         return d
8904 
8905     def test_verify_one_bad_encprivkey_uncheckable(self):
8906-        corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey
8907+        d = corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey
8908         readonly_fn = self._fn.get_readonly()
8909         # a read-only node has no way to validate the privkey
8910hunk ./src/allmydata/test/test_mutable.py 1855
8911-        d = readonly_fn.check(Monitor(), verify=True)
8912+        d.addCallback(lambda ignored:
8913+            readonly_fn.check(Monitor(), verify=True))
8914         d.addCallback(self.check_good,
8915                       "test_verify_one_bad_encprivkey_uncheckable")
8916         return d
8917hunk ./src/allmydata/test/test_mutable.py 1861
8918 
8919+
8920+    def test_verify_mdmf_good(self):
8921+        d = self.publish_mdmf()
8922+        d.addCallback(lambda ignored:
8923+            self._fn.check(Monitor(), verify=True))
8924+        d.addCallback(self.check_good, "test_verify_mdmf_good")
8925+        return d
8926+
8927+
8928+    def test_verify_mdmf_one_bad_block(self):
8929+        d = self.publish_mdmf()
8930+        d.addCallback(lambda ignored:
8931+            corrupt(None, self._storage, "share_data", [1]))
8932+        d.addCallback(lambda ignored:
8933+            self._fn.check(Monitor(), verify=True))
8934+        # We should find one bad block here
8935+        d.addCallback(self.check_bad, "test_verify_mdmf_one_bad_block")
8936+        d.addCallback(self.check_expected_failure,
8937+                      CorruptShareError, "block hash tree failure",
8938+                      "test_verify_mdmf_one_bad_block")
8939+        return d
8940+
8941+
8942+    def test_verify_mdmf_bad_encprivkey(self):
8943+        d = self.publish_mdmf()
8944+        d.addCallback(lambda ignored:
8945+            corrupt(None, self._storage, "enc_privkey", [0]))
8946+        d.addCallback(lambda ignored:
8947+            self._fn.check(Monitor(), verify=True))
8948+        d.addCallback(self.check_bad, "test_verify_mdmf_bad_encprivkey")
8949+        d.addCallback(self.check_expected_failure,
8950+                      CorruptShareError, "privkey",
8951+                      "test_verify_mdmf_bad_encprivkey")
8952+        return d
8953+
8954+
8955+    def test_verify_mdmf_bad_sig(self):
8956+        d = self.publish_mdmf()
8957+        d.addCallback(lambda ignored:
8958+            corrupt(None, self._storage, 1, [1]))
8959+        d.addCallback(lambda ignored:
8960+            self._fn.check(Monitor(), verify=True))
8961+        d.addCallback(self.check_bad, "test_verify_mdmf_bad_sig")
8962+        return d
8963+
8964+
8965+    def test_verify_mdmf_bad_encprivkey_uncheckable(self):
8966+        d = self.publish_mdmf()
8967+        d.addCallback(lambda ignored:
8968+            corrupt(None, self._storage, "enc_privkey", [1]))
8969+        d.addCallback(lambda ignored:
8970+            self._fn.get_readonly())
8971+        d.addCallback(lambda fn:
8972+            fn.check(Monitor(), verify=True))
8973+        d.addCallback(self.check_good,
8974+                      "test_verify_mdmf_bad_encprivkey_uncheckable")
8975+        return d
8976+
8977+
8978 class Repair(unittest.TestCase, PublishMixin, ShouldFailMixin):
8979 
8980     def get_shares(self, s):
8981hunk ./src/allmydata/test/test_mutable.py 1985
8982         current_shares = self.old_shares[-1]
8983         self.failUnlessEqual(old_shares, current_shares)
8984 
8985+
8986     def test_unrepairable_0shares(self):
8987         d = self.publish_one()
8988         def _delete_all_shares(ign):
8989hunk ./src/allmydata/test/test_mutable.py 2000
8990         d.addCallback(_check)
8991         return d
8992 
8993+    def test_mdmf_unrepairable_0shares(self):
8994+        d = self.publish_mdmf()
8995+        def _delete_all_shares(ign):
8996+            shares = self._storage._peers
8997+            for peerid in shares:
8998+                shares[peerid] = {}
8999+        d.addCallback(_delete_all_shares)
9000+        d.addCallback(lambda ign: self._fn.check(Monitor()))
9001+        d.addCallback(lambda check_results: self._fn.repair(check_results))
9002+        d.addCallback(lambda crr: self.failIf(crr.get_successful()))
9003+        return d
9004+
9005+
9006     def test_unrepairable_1share(self):
9007         d = self.publish_one()
9008         def _delete_all_shares(ign):
9009hunk ./src/allmydata/test/test_mutable.py 2029
9010         d.addCallback(_check)
9011         return d
9012 
9013+    def test_mdmf_unrepairable_1share(self):
9014+        d = self.publish_mdmf()
9015+        def _delete_all_shares(ign):
9016+            shares = self._storage._peers
9017+            for peerid in shares:
9018+                for shnum in list(shares[peerid]):
9019+                    if shnum > 0:
9020+                        del shares[peerid][shnum]
9021+        d.addCallback(_delete_all_shares)
9022+        d.addCallback(lambda ign: self._fn.check(Monitor()))
9023+        d.addCallback(lambda check_results: self._fn.repair(check_results))
9024+        def _check(crr):
9025+            self.failUnlessEqual(crr.get_successful(), False)
9026+        d.addCallback(_check)
9027+        return d
9028+
9029+    def test_repairable_5shares(self):
9030+        d = self.publish_mdmf()
9031+        def _delete_some_shares(ign):
9032+            shares = self._storage._peers
9033+            for peerid in shares:
9034+                for shnum in list(shares[peerid]):
9035+                    if shnum > 4:
9036+                        del shares[peerid][shnum]
9037+        d.addCallback(_delete_some_shares)
9038+        d.addCallback(lambda ign: self._fn.check(Monitor()))
9039+        d.addCallback(lambda check_results: self._fn.repair(check_results))
9040+        def _check(crr):
9041+            self.failUnlessEqual(crr.get_successful(), True)
9042+        d.addCallback(_check)
9043+        return d
9044+
9045+    def test_mdmf_repairable_5shares(self):
9046+        d = self.publish_mdmf()
9047+        def _delete_some_shares(ign):
9048+            shares = self._storage._peers
9049+            for peerid in shares:
9050+                for shnum in list(shares[peerid]):
9051+                    if shnum > 5:
9052+                        del shares[peerid][shnum]
9053+        d.addCallback(_delete_some_shares)
9054+        d.addCallback(lambda ign: self._fn.check(Monitor()))
9055+        def _check(cr):
9056+            self.failIf(cr.is_healthy())
9057+            self.failUnless(cr.is_recoverable())
9058+            return cr
9059+        d.addCallback(_check)
9060+        d.addCallback(lambda check_results: self._fn.repair(check_results))
9061+        def _check1(crr):
9062+            self.failUnlessEqual(crr.get_successful(), True)
9063+        d.addCallback(_check1)
9064+        return d
9065+
9066+
9067     def test_merge(self):
9068         self.old_shares = []
9069         d = self.publish_multiple()
9070hunk ./src/allmydata/test/test_mutable.py 2197
9071 class MultipleEncodings(unittest.TestCase):
9072     def setUp(self):
9073         self.CONTENTS = "New contents go here"
9074+        self.uploadable = MutableData(self.CONTENTS)
9075         self._storage = FakeStorage()
9076         self._nodemaker = make_nodemaker(self._storage, num_peers=20)
9077         self._storage_broker = self._nodemaker.storage_broker
9078hunk ./src/allmydata/test/test_mutable.py 2201
9079-        d = self._nodemaker.create_mutable_file(self.CONTENTS)
9080+        d = self._nodemaker.create_mutable_file(self.uploadable)
9081         def _created(node):
9082             self._fn = node
9083         d.addCallback(_created)
9084hunk ./src/allmydata/test/test_mutable.py 2207
9085         return d
9086 
9087-    def _encode(self, k, n, data):
9088+    def _encode(self, k, n, data, version=SDMF_VERSION):
9089         # encode 'data' into a peerid->shares dict.
9090 
9091         fn = self._fn
9092hunk ./src/allmydata/test/test_mutable.py 2227
9093         s = self._storage
9094         s._peers = {} # clear existing storage
9095         p2 = Publish(fn2, self._storage_broker, None)
9096-        d = p2.publish(data)
9097+        uploadable = MutableData(data)
9098+        d = p2.publish(uploadable)
9099         def _published(res):
9100             shares = s._peers
9101             s._peers = {}
9102hunk ./src/allmydata/test/test_mutable.py 2495
9103         self.basedir = "mutable/Problems/test_publish_surprise"
9104         self.set_up_grid()
9105         nm = self.g.clients[0].nodemaker
9106-        d = nm.create_mutable_file("contents 1")
9107+        d = nm.create_mutable_file(MutableData("contents 1"))
9108         def _created(n):
9109             d = defer.succeed(None)
9110             d.addCallback(lambda res: n.get_servermap(MODE_WRITE))
9111hunk ./src/allmydata/test/test_mutable.py 2505
9112             d.addCallback(_got_smap1)
9113             # then modify the file, leaving the old map untouched
9114             d.addCallback(lambda res: log.msg("starting winning write"))
9115-            d.addCallback(lambda res: n.overwrite("contents 2"))
9116+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
9117             # now attempt to modify the file with the old servermap. This
9118             # will look just like an uncoordinated write, in which every
9119             # single share got updated between our mapupdate and our publish
9120hunk ./src/allmydata/test/test_mutable.py 2514
9121                           self.shouldFail(UncoordinatedWriteError,
9122                                           "test_publish_surprise", None,
9123                                           n.upload,
9124-                                          "contents 2a", self.old_map))
9125+                                          MutableData("contents 2a"), self.old_map))
9126             return d
9127         d.addCallback(_created)
9128         return d
9129hunk ./src/allmydata/test/test_mutable.py 2523
9130         self.basedir = "mutable/Problems/test_retrieve_surprise"
9131         self.set_up_grid()
9132         nm = self.g.clients[0].nodemaker
9133-        d = nm.create_mutable_file("contents 1")
9134+        d = nm.create_mutable_file(MutableData("contents 1"))
9135         def _created(n):
9136             d = defer.succeed(None)
9137             d.addCallback(lambda res: n.get_servermap(MODE_READ))
9138hunk ./src/allmydata/test/test_mutable.py 2533
9139             d.addCallback(_got_smap1)
9140             # then modify the file, leaving the old map untouched
9141             d.addCallback(lambda res: log.msg("starting winning write"))
9142-            d.addCallback(lambda res: n.overwrite("contents 2"))
9143+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
9144             # now attempt to retrieve the old version with the old servermap.
9145             # This will look like someone has changed the file since we
9146             # updated the servermap.
9147hunk ./src/allmydata/test/test_mutable.py 2542
9148             d.addCallback(lambda res:
9149                           self.shouldFail(NotEnoughSharesError,
9150                                           "test_retrieve_surprise",
9151-                                          "ran out of peers: have 0 shares (k=3)",
9152+                                          "ran out of peers: have 0 of 1",
9153                                           n.download_version,
9154                                           self.old_map,
9155                                           self.old_map.best_recoverable_version(),
9156hunk ./src/allmydata/test/test_mutable.py 2551
9157         d.addCallback(_created)
9158         return d
9159 
9160+
9161     def test_unexpected_shares(self):
9162         # upload the file, take a servermap, shut down one of the servers,
9163         # upload it again (causing shares to appear on a new server), then
9164hunk ./src/allmydata/test/test_mutable.py 2561
9165         self.basedir = "mutable/Problems/test_unexpected_shares"
9166         self.set_up_grid()
9167         nm = self.g.clients[0].nodemaker
9168-        d = nm.create_mutable_file("contents 1")
9169+        d = nm.create_mutable_file(MutableData("contents 1"))
9170         def _created(n):
9171             d = defer.succeed(None)
9172             d.addCallback(lambda res: n.get_servermap(MODE_WRITE))
9173hunk ./src/allmydata/test/test_mutable.py 2573
9174                 self.g.remove_server(peer0)
9175                 # then modify the file, leaving the old map untouched
9176                 log.msg("starting winning write")
9177-                return n.overwrite("contents 2")
9178+                return n.overwrite(MutableData("contents 2"))
9179             d.addCallback(_got_smap1)
9180             # now attempt to modify the file with the old servermap. This
9181             # will look just like an uncoordinated write, in which every
9182hunk ./src/allmydata/test/test_mutable.py 2583
9183                           self.shouldFail(UncoordinatedWriteError,
9184                                           "test_surprise", None,
9185                                           n.upload,
9186-                                          "contents 2a", self.old_map))
9187+                                          MutableData("contents 2a"), self.old_map))
9188             return d
9189         d.addCallback(_created)
9190         return d
9191hunk ./src/allmydata/test/test_mutable.py 2587
9192+    test_unexpected_shares.timeout = 15
9193 
9194     def test_bad_server(self):
9195         # Break one server, then create the file: the initial publish should
9196hunk ./src/allmydata/test/test_mutable.py 2621
9197         d.addCallback(_break_peer0)
9198         # now "create" the file, using the pre-established key, and let the
9199         # initial publish finally happen
9200-        d.addCallback(lambda res: nm.create_mutable_file("contents 1"))
9201+        d.addCallback(lambda res: nm.create_mutable_file(MutableData("contents 1")))
9202         # that ought to work
9203         def _got_node(n):
9204             d = n.download_best_version()
9205hunk ./src/allmydata/test/test_mutable.py 2630
9206             def _break_peer1(res):
9207                 self.g.break_server(self.server1.get_serverid())
9208             d.addCallback(_break_peer1)
9209-            d.addCallback(lambda res: n.overwrite("contents 2"))
9210+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
9211             # that ought to work too
9212             d.addCallback(lambda res: n.download_best_version())
9213             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
9214hunk ./src/allmydata/test/test_mutable.py 2662
9215         peerids = [s.get_serverid() for s in sb.get_connected_servers()]
9216         self.g.break_server(peerids[0])
9217 
9218-        d = nm.create_mutable_file("contents 1")
9219+        d = nm.create_mutable_file(MutableData("contents 1"))
9220         def _created(n):
9221             d = n.download_best_version()
9222             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1"))
9223hunk ./src/allmydata/test/test_mutable.py 2670
9224             def _break_second_server(res):
9225                 self.g.break_server(peerids[1])
9226             d.addCallback(_break_second_server)
9227-            d.addCallback(lambda res: n.overwrite("contents 2"))
9228+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
9229             # that ought to work too
9230             d.addCallback(lambda res: n.download_best_version())
9231             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
9232hunk ./src/allmydata/test/test_mutable.py 2688
9233 
9234         d = self.shouldFail(NotEnoughServersError,
9235                             "test_publish_all_servers_bad",
9236-                            "Ran out of non-bad servers",
9237-                            nm.create_mutable_file, "contents")
9238+                            "ran out of good servers",
9239+                            nm.create_mutable_file, MutableData("contents"))
9240         return d
9241 
9242     def test_publish_no_servers(self):
9243hunk ./src/allmydata/test/test_mutable.py 2701
9244         d = self.shouldFail(NotEnoughServersError,
9245                             "test_publish_no_servers",
9246                             "Ran out of non-bad servers",
9247-                            nm.create_mutable_file, "contents")
9248+                            nm.create_mutable_file, MutableData("contents"))
9249         return d
9250     test_publish_no_servers.timeout = 30
9251 
9252hunk ./src/allmydata/test/test_mutable.py 2719
9253         # we need some contents that are large enough to push the privkey out
9254         # of the early part of the file
9255         LARGE = "These are Larger contents" * 2000 # about 50KB
9256-        d = nm.create_mutable_file(LARGE)
9257+        LARGE_uploadable = MutableData(LARGE)
9258+        d = nm.create_mutable_file(LARGE_uploadable)
9259         def _created(n):
9260             self.uri = n.get_uri()
9261             self.n2 = nm.create_from_cap(self.uri)
9262hunk ./src/allmydata/test/test_mutable.py 2755
9263         self.basedir = "mutable/Problems/test_privkey_query_missing"
9264         self.set_up_grid(num_servers=20)
9265         nm = self.g.clients[0].nodemaker
9266-        LARGE = "These are Larger contents" * 2000 # about 50KB
9267+        LARGE = "These are Larger contents" * 2000 # about 50KiB
9268+        LARGE_uploadable = MutableData(LARGE)
9269         nm._node_cache = DevNullDictionary() # disable the nodecache
9270 
9271hunk ./src/allmydata/test/test_mutable.py 2759
9272-        d = nm.create_mutable_file(LARGE)
9273+        d = nm.create_mutable_file(LARGE_uploadable)
9274         def _created(n):
9275             self.uri = n.get_uri()
9276             self.n2 = nm.create_from_cap(self.uri)
9277hunk ./src/allmydata/test/test_mutable.py 2769
9278         d.addCallback(_created)
9279         d.addCallback(lambda res: self.n2.get_servermap(MODE_WRITE))
9280         return d
9281+
9282+
9283+    def test_block_and_hash_query_error(self):
9284+        # This tests for what happens when a query to a remote server
9285+        # fails in either the hash validation step or the block getting
9286+        # step (because of batching, this is the same actual query).
9287+        # We need to have the storage server persist up until the point
9288+        # that its prefix is validated, then suddenly die. This
9289+        # exercises some exception handling code in Retrieve.
9290+        self.basedir = "mutable/Problems/test_block_and_hash_query_error"
9291+        self.set_up_grid(num_servers=20)
9292+        nm = self.g.clients[0].nodemaker
9293+        CONTENTS = "contents" * 2000
9294+        CONTENTS_uploadable = MutableData(CONTENTS)
9295+        d = nm.create_mutable_file(CONTENTS_uploadable)
9296+        def _created(node):
9297+            self._node = node
9298+        d.addCallback(_created)
9299+        d.addCallback(lambda ignored:
9300+            self._node.get_servermap(MODE_READ))
9301+        def _then(servermap):
9302+            # we have our servermap. Now we set up the servers like the
9303+            # tests above -- the first one that gets a read call should
9304+            # start throwing errors, but only after returning its prefix
9305+            # for validation. Since we'll download without fetching the
9306+            # private key, the next query to the remote server will be
9307+            # for either a block and salt or for hashes, either of which
9308+            # will exercise the error handling code.
9309+            killer = FirstServerGetsKilled()
9310+            for s in nm.storage_broker.get_connected_servers():
9311+                s.get_rref().post_call_notifier = killer.notify
9312+            ver = servermap.best_recoverable_version()
9313+            assert ver
9314+            return self._node.download_version(servermap, ver)
9315+        d.addCallback(_then)
9316+        d.addCallback(lambda data:
9317+            self.failUnlessEqual(data, CONTENTS))
9318+        return d
9319+
9320+
9321+class FileHandle(unittest.TestCase):
9322+    def setUp(self):
9323+        self.test_data = "Test Data" * 50000
9324+        self.sio = StringIO(self.test_data)
9325+        self.uploadable = MutableFileHandle(self.sio)
9326+
9327+
9328+    def test_filehandle_read(self):
9329+        self.basedir = "mutable/FileHandle/test_filehandle_read"
9330+        chunk_size = 10
9331+        for i in xrange(0, len(self.test_data), chunk_size):
9332+            data = self.uploadable.read(chunk_size)
9333+            data = "".join(data)
9334+            start = i
9335+            end = i + chunk_size
9336+            self.failUnlessEqual(data, self.test_data[start:end])
9337+
9338+
9339+    def test_filehandle_get_size(self):
9340+        self.basedir = "mutable/FileHandle/test_filehandle_get_size"
9341+        actual_size = len(self.test_data)
9342+        size = self.uploadable.get_size()
9343+        self.failUnlessEqual(size, actual_size)
9344+
9345+
9346+    def test_filehandle_get_size_out_of_order(self):
9347+        # We should be able to call get_size whenever we want without
9348+        # disturbing the location of the seek pointer.
9349+        chunk_size = 100
9350+        data = self.uploadable.read(chunk_size)
9351+        self.failUnlessEqual("".join(data), self.test_data[:chunk_size])
9352+
9353+        # Now get the size.
9354+        size = self.uploadable.get_size()
9355+        self.failUnlessEqual(size, len(self.test_data))
9356+
9357+        # Now get more data. We should be right where we left off.
9358+        more_data = self.uploadable.read(chunk_size)
9359+        start = chunk_size
9360+        end = chunk_size * 2
9361+        self.failUnlessEqual("".join(more_data), self.test_data[start:end])
9362+
9363+
9364+    def test_filehandle_file(self):
9365+        # Make sure that the MutableFileHandle works on a file as well
9366+        # as a StringIO object, since in some cases it will be asked to
9367+        # deal with files.
9368+        self.basedir = self.mktemp()
9369+        # mktemp() returns a path without creating the directory.
9370+        os.mkdir(self.basedir)
9371+        f_path = os.path.join(self.basedir, "test_file")
9372+        f = open(f_path, "w")
9373+        f.write(self.test_data)
9374+        f.close()
9375+        f = open(f_path, "r")
9376+
9377+        uploadable = MutableFileHandle(f)
9378+
9379+        data = uploadable.read(len(self.test_data))
9380+        self.failUnlessEqual("".join(data), self.test_data)
9381+        size = uploadable.get_size()
9382+        self.failUnlessEqual(size, len(self.test_data))
9383+
9384+
9385+    def test_close(self):
9386+        # Make sure that the MutableFileHandle closes its handle when
9387+        # told to do so.
9388+        self.uploadable.close()
9389+        self.failUnless(self.sio.closed)
9390+
9391+
9392+class DataHandle(unittest.TestCase):
9393+    def setUp(self):
9394+        self.test_data = "Test Data" * 50000
9395+        self.uploadable = MutableData(self.test_data)
9396+
9397+
9398+    def test_datahandle_read(self):
9399+        chunk_size = 10
9400+        for i in xrange(0, len(self.test_data), chunk_size):
9401+            data = self.uploadable.read(chunk_size)
9402+            data = "".join(data)
9403+            start = i
9404+            end = i + chunk_size
9405+            self.failUnlessEqual(data, self.test_data[start:end])
9406+
9407+
9408+    def test_datahandle_get_size(self):
9409+        actual_size = len(self.test_data)
9410+        size = self.uploadable.get_size()
9411+        self.failUnlessEqual(size, actual_size)
9412+
9413+
9414+    def test_datahandle_get_size_out_of_order(self):
9415+        # We should be able to call get_size whenever we want without
9416+        # disturbing the location of the seek pointer.
9417+        chunk_size = 100
9418+        data = self.uploadable.read(chunk_size)
9419+        self.failUnlessEqual("".join(data), self.test_data[:chunk_size])
9420+
9421+        # Now get the size.
9422+        size = self.uploadable.get_size()
9423+        self.failUnlessEqual(size, len(self.test_data))
9424+
9425+        # Now get more data. We should be right where we left off.
9426+        more_data = self.uploadable.read(chunk_size)
9427+        start = chunk_size
9428+        end = chunk_size * 2
9429+        self.failUnlessEqual("".join(more_data), self.test_data[start:end])
9430+
9431+
9432+class Version(GridTestMixin, unittest.TestCase, testutil.ShouldFailMixin, \
9433+              PublishMixin):
9434+    def setUp(self):
9435+        GridTestMixin.setUp(self)
9436+        self.basedir = self.mktemp()
9437+        self.set_up_grid()
9438+        self.c = self.g.clients[0]
9439+        self.nm = self.c.nodemaker
9440+        self.data = "test data" * 100000 # about 900 KiB; MDMF
9441+        self.small_data = "test data" * 10 # about 90 B; SDMF
9442+        return self.do_upload()
9443+
9444+
9445+    def do_upload(self):
9446+        d1 = self.nm.create_mutable_file(MutableData(self.data),
9447+                                         version=MDMF_VERSION)
9448+        d2 = self.nm.create_mutable_file(MutableData(self.small_data))
9449+        dl = gatherResults([d1, d2])
9450+        def _then((n1, n2)):
9451+            assert isinstance(n1, MutableFileNode)
9452+            assert isinstance(n2, MutableFileNode)
9453+
9454+            self.mdmf_node = n1
9455+            self.sdmf_node = n2
9456+        dl.addCallback(_then)
9457+        return dl
9458+
9459+
9460+    def test_get_readonly_mutable_version(self):
9461+        # Attempting to get a mutable version of a mutable file from a
9462+        # filenode initialized with a readcap should return a readonly
9463+        # version of that same node.
9464+        ro = self.mdmf_node.get_readonly()
9465+        d = ro.get_best_mutable_version()
9466+        d.addCallback(lambda version:
9467+            self.failUnless(version.is_readonly()))
9468+        d.addCallback(lambda ignored:
9469+            self.sdmf_node.get_readonly())
9470+        d.addCallback(lambda version:
9471+            self.failUnless(version.is_readonly()))
9472+        return d
9473+
9474+
9475+    def test_get_sequence_number(self):
9476+        d = self.mdmf_node.get_best_readable_version()
9477+        d.addCallback(lambda bv:
9478+            self.failUnlessEqual(bv.get_sequence_number(), 1))
9479+        d.addCallback(lambda ignored:
9480+            self.sdmf_node.get_best_readable_version())
9481+        d.addCallback(lambda bv:
9482+            self.failUnlessEqual(bv.get_sequence_number(), 1))
9483+        # Now update. After the update, the sequence number in both
9484+        # cases should be 2.
9485+        def _do_update(ignored):
9486+            new_data = MutableData("foo bar baz" * 100000)
9487+            new_small_data = MutableData("foo bar baz" * 10)
9488+            d1 = self.mdmf_node.overwrite(new_data)
9489+            d2 = self.sdmf_node.overwrite(new_small_data)
9490+            dl = gatherResults([d1, d2])
9491+            return dl
9492+        d.addCallback(_do_update)
9493+        d.addCallback(lambda ignored:
9494+            self.mdmf_node.get_best_readable_version())
9495+        d.addCallback(lambda bv:
9496+            self.failUnlessEqual(bv.get_sequence_number(), 2))
9497+        d.addCallback(lambda ignored:
9498+            self.sdmf_node.get_best_readable_version())
9499+        d.addCallback(lambda bv:
9500+            self.failUnlessEqual(bv.get_sequence_number(), 2))
9501+        return d
9502+
9503+
9504+    def test_version_extension_api(self):
9505+        # We need to define an API by which an uploader can set the
9506+        # extension parameters, and by which a downloader can retrieve
9507+        # extensions.
9508+        d = self.mdmf_node.get_best_mutable_version()
9509+        def _got_version(version):
9510+            hints = version.get_downloader_hints()
9511+            # Should contain the k and segsize hints set at upload time.
9512+            self.failUnlessIn("k", hints)
9513+            self.failUnlessEqual(hints['k'], 3)
9514+            self.failUnlessIn('segsize', hints)
9515+            self.failUnlessEqual(hints['segsize'], 131073)
9516+        d.addCallback(_got_version)
9517+        return d
9518+
9519+
9520+    def test_extensions_from_cap(self):
9521+        # If we initialize a mutable file with a cap that has extension
9522+        # parameters in it and then grab the extension parameters using
9523+        # our API, we should see that they're set correctly.
9524+        mdmf_uri = self.mdmf_node.get_uri()
9525+        new_node = self.nm.create_from_cap(mdmf_uri)
9526+        d = new_node.get_best_mutable_version()
9527+        def _got_version(version):
9528+            hints = version.get_downloader_hints()
9529+            self.failUnlessIn("k", hints)
9530+            self.failUnlessEqual(hints["k"], 3)
9531+            self.failUnlessIn("segsize", hints)
9532+            self.failUnlessEqual(hints["segsize"], 131073)
9533+        d.addCallback(_got_version)
9534+        return d
9535+
9536+
9537+    def test_extensions_from_upload(self):
9538+        # If we create a new mutable file with some contents, we should
9539+        # get back an MDMF cap with the right hints in place.
9540+        contents = "foo bar baz" * 100000
9541+        d = self.nm.create_mutable_file(contents, version=MDMF_VERSION)
9542+        def _got_mutable_file(n):
9543+            rw_uri = n.get_uri()
9544+            expected_k = str(self.c.DEFAULT_ENCODING_PARAMETERS['k'])
9545+            self.failUnlessIn(expected_k, rw_uri)
9546+            # XXX: Get this more intelligently.
9547+            self.failUnlessIn("131073", rw_uri)
9548+
9549+            ro_uri = n.get_readonly_uri()
9550+            self.failUnlessIn(expected_k, ro_uri)
9551+            self.failUnlessIn("131073", ro_uri)
9552+        d.addCallback(_got_mutable_file)
9553+        return d
9554+
9555+
9556+    def test_cap_after_upload(self):
9557+        # If we create a new mutable file and upload things to it, and
9558+        # it's an MDMF file, we should get an MDMF cap back from that
9559+        # file and should be able to use that.
9560+        # That's essentially what MDMF node is, so just check that.
9561+        mdmf_uri = self.mdmf_node.get_uri()
9562+        cap = uri.from_string(mdmf_uri)
9563+        self.failUnless(isinstance(cap, uri.WritableMDMFFileURI))
9564+        readonly_mdmf_uri = self.mdmf_node.get_readonly_uri()
9565+        cap = uri.from_string(readonly_mdmf_uri)
9566+        self.failUnless(isinstance(cap, uri.ReadonlyMDMFFileURI))
9567+
9568+
9569+    def test_get_writekey(self):
9570+        d = self.mdmf_node.get_best_mutable_version()
9571+        d.addCallback(lambda bv:
9572+            self.failUnlessEqual(bv.get_writekey(),
9573+                                 self.mdmf_node.get_writekey()))
9574+        d.addCallback(lambda ignored:
9575+            self.sdmf_node.get_best_mutable_version())
9576+        d.addCallback(lambda bv:
9577+            self.failUnlessEqual(bv.get_writekey(),
9578+                                 self.sdmf_node.get_writekey()))
9579+        return d
9580+
9581+
9582+    def test_get_storage_index(self):
9583+        d = self.mdmf_node.get_best_mutable_version()
9584+        d.addCallback(lambda bv:
9585+            self.failUnlessEqual(bv.get_storage_index(),
9586+                                 self.mdmf_node.get_storage_index()))
9587+        d.addCallback(lambda ignored:
9588+            self.sdmf_node.get_best_mutable_version())
9589+        d.addCallback(lambda bv:
9590+            self.failUnlessEqual(bv.get_storage_index(),
9591+                                 self.sdmf_node.get_storage_index()))
9592+        return d
9593+
9594+
9595+    def test_get_readonly_version(self):
9596+        d = self.mdmf_node.get_best_readable_version()
9597+        d.addCallback(lambda bv:
9598+            self.failUnless(bv.is_readonly()))
9599+        d.addCallback(lambda ignored:
9600+            self.sdmf_node.get_best_readable_version())
9601+        d.addCallback(lambda bv:
9602+            self.failUnless(bv.is_readonly()))
9603+        return d
9604+
9605+
9606+    def test_get_mutable_version(self):
9607+        d = self.mdmf_node.get_best_mutable_version()
9608+        d.addCallback(lambda bv:
9609+            self.failIf(bv.is_readonly()))
9610+        d.addCallback(lambda ignored:
9611+            self.sdmf_node.get_best_mutable_version())
9612+        d.addCallback(lambda bv:
9613+            self.failIf(bv.is_readonly()))
9614+        return d
9615+
9616+
9617+    def test_toplevel_overwrite(self):
9618+        new_data = MutableData("foo bar baz" * 100000)
9619+        new_small_data = MutableData("foo bar baz" * 10)
9620+        d = self.mdmf_node.overwrite(new_data)
9621+        d.addCallback(lambda ignored:
9622+            self.mdmf_node.download_best_version())
9623+        d.addCallback(lambda data:
9624+            self.failUnlessEqual(data, "foo bar baz" * 100000))
9625+        d.addCallback(lambda ignored:
9626+            self.sdmf_node.overwrite(new_small_data))
9627+        d.addCallback(lambda ignored:
9628+            self.sdmf_node.download_best_version())
9629+        d.addCallback(lambda data:
9630+            self.failUnlessEqual(data, "foo bar baz" * 10))
9631+        return d
9632+
9633+
9634+    def test_toplevel_modify(self):
9635+        def modifier(old_contents, servermap, first_time):
9636+            return old_contents + "modified"
9637+        d = self.mdmf_node.modify(modifier)
9638+        d.addCallback(lambda ignored:
9639+            self.mdmf_node.download_best_version())
9640+        d.addCallback(lambda data:
9641+            self.failUnlessIn("modified", data))
9642+        d.addCallback(lambda ignored:
9643+            self.sdmf_node.modify(modifier))
9644+        d.addCallback(lambda ignored:
9645+            self.sdmf_node.download_best_version())
9646+        d.addCallback(lambda data:
9647+            self.failUnlessIn("modified", data))
9648+        return d
9649+
9650+
9651+    def test_version_modify(self):
9652+        # TODO: When we can publish multiple versions, alter this test
9653+        # to modify a version other than the best usable version, then
9654+        # check that the modified version becomes the best recoverable one.
9655+        def modifier(old_contents, servermap, first_time):
9656+            return old_contents + "modified"
9657+        d = self.mdmf_node.modify(modifier)
9658+        d.addCallback(lambda ignored:
9659+            self.mdmf_node.download_best_version())
9660+        d.addCallback(lambda data:
9661+            self.failUnlessIn("modified", data))
9662+        d.addCallback(lambda ignored:
9663+            self.sdmf_node.modify(modifier))
9664+        d.addCallback(lambda ignored:
9665+            self.sdmf_node.download_best_version())
9666+        d.addCallback(lambda data:
9667+            self.failUnlessIn("modified", data))
9668+        return d
9669+
9670+
9671+    def test_download_version(self):
9672+        d = self.publish_multiple()
9673+        # We want to have two recoverable versions on the grid.
9674+        d.addCallback(lambda res:
9675+                      self._set_versions({0:0,2:0,4:0,6:0,8:0,
9676+                                          1:1,3:1,5:1,7:1,9:1}))
9677+        # Now try to download each version. We should get the plaintext
9678+        # associated with that version.
9679+        d.addCallback(lambda ignored:
9680+            self._fn.get_servermap(mode=MODE_READ))
9681+        def _got_servermap(smap):
9682+            versions = smap.recoverable_versions()
9683+            assert len(versions) == 2
9684+
9685+            self.servermap = smap
9686+            self.version1, self.version2 = versions
9687+            assert self.version1 != self.version2
9688+
9689+            self.version1_seqnum = self.version1[0]
9690+            self.version2_seqnum = self.version2[0]
9691+            self.version1_index = self.version1_seqnum - 1
9692+            self.version2_index = self.version2_seqnum - 1
9693+
9694+        d.addCallback(_got_servermap)
9695+        d.addCallback(lambda ignored:
9696+            self._fn.download_version(self.servermap, self.version1))
9697+        d.addCallback(lambda results:
9698+            self.failUnlessEqual(self.CONTENTS[self.version1_index],
9699+                                 results))
9700+        d.addCallback(lambda ignored:
9701+            self._fn.download_version(self.servermap, self.version2))
9702+        d.addCallback(lambda results:
9703+            self.failUnlessEqual(self.CONTENTS[self.version2_index],
9704+                                 results))
9705+        return d
9706+
9707+
9708+    def test_download_nonexistent_version(self):
9709+        d = self.mdmf_node.get_servermap(mode=MODE_WRITE)
9710+        def _set_servermap(servermap):
9711+            self.servermap = servermap
9712+        d.addCallback(_set_servermap)
9713+        d.addCallback(lambda ignored:
9714+           self.shouldFail(UnrecoverableFileError, "nonexistent version",
9715+                           None,
9716+                           self.mdmf_node.download_version, self.servermap,
9717+                           "not a version"))
9718+        return d
9719+
9720+
9721+    def test_partial_read(self):
9722+        # read only a few bytes at a time, and see that the results are
9723+        # what we expect.
9724+        d = self.mdmf_node.get_best_readable_version()
9725+        def _read_data(version):
9726+            c = consumer.MemoryConsumer()
9727+            d2 = defer.succeed(None)
9728+            for i in xrange(0, len(self.data), 10000):
9729+                d2.addCallback(lambda ignored, i=i: version.read(c, i, 10000))
9730+            d2.addCallback(lambda ignored:
9731+                self.failUnlessEqual(self.data, "".join(c.chunks)))
9732+            return d2
9733+        d.addCallback(_read_data)
9734+        return d
9735+
9736+
9737+    def test_read(self):
9738+        d = self.mdmf_node.get_best_readable_version()
9739+        def _read_data(version):
9740+            c = consumer.MemoryConsumer()
9741+            d2 = defer.succeed(None)
9742+            d2.addCallback(lambda ignored: version.read(c))
9743+            d2.addCallback(lambda ignored:
9744+                self.failUnlessEqual("".join(c.chunks), self.data))
9745+            return d2
9746+        d.addCallback(_read_data)
9747+        return d
9748+
9749+
9750+    def test_download_best_version(self):
9751+        d = self.mdmf_node.download_best_version()
9752+        d.addCallback(lambda data:
9753+            self.failUnlessEqual(data, self.data))
9754+        d.addCallback(lambda ignored:
9755+            self.sdmf_node.download_best_version())
9756+        d.addCallback(lambda data:
9757+            self.failUnlessEqual(data, self.small_data))
9758+        return d
9759+
9760+
9761+class Update(GridTestMixin, unittest.TestCase, testutil.ShouldFailMixin):
9762+    def setUp(self):
9763+        GridTestMixin.setUp(self)
9764+        self.basedir = self.mktemp()
9765+        self.set_up_grid()
9766+        self.c = self.g.clients[0]
9767+        self.nm = self.c.nodemaker
9768+        self.data = "testdata " * 100000 # about 900 KiB; MDMF
9769+        self.small_data = "test data" * 10 # about 90 B; SDMF
9770+        return self.do_upload()
9771+
9772+
9773+    def do_upload(self):
9774+        d1 = self.nm.create_mutable_file(MutableData(self.data),
9775+                                         version=MDMF_VERSION)
9776+        d2 = self.nm.create_mutable_file(MutableData(self.small_data))
9777+        dl = gatherResults([d1, d2])
9778+        def _then((n1, n2)):
9779+            assert isinstance(n1, MutableFileNode)
9780+            assert isinstance(n2, MutableFileNode)
9781+
9782+            self.mdmf_node = n1
9783+            self.sdmf_node = n2
9784+        dl.addCallback(_then)
9785+        # Make SDMF and MDMF mutable file nodes that have 255 shares.
9786+        def _make_max_shares(ign):
9787+            self.nm.default_encoding_parameters['n'] = 255
9788+            self.nm.default_encoding_parameters['k'] = 127
9789+            d1 = self.nm.create_mutable_file(MutableData(self.data),
9790+                                             version=MDMF_VERSION)
9791+            d2 = \
9792+                self.nm.create_mutable_file(MutableData(self.small_data))
9793+            return gatherResults([d1, d2])
9794+        dl.addCallback(_make_max_shares)
9795+        def _stash((n1, n2)):
9796+            assert isinstance(n1, MutableFileNode)
9797+            assert isinstance(n2, MutableFileNode)
9798+
9799+            self.mdmf_max_shares_node = n1
9800+            self.sdmf_max_shares_node = n2
9801+        dl.addCallback(_stash)
9802+        return dl
9803+
9804+    def test_append(self):
9805+        # We should be able to append data to the end of a mutable
9806+        # file and get what we expect.
9807+        new_data = self.data + "appended"
9808+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
9809+            d = node.get_best_mutable_version()
9810+            d.addCallback(lambda mv:
9811+                mv.update(MutableData("appended"), len(self.data)))
9812+            d.addCallback(lambda ignored, node=node:
9813+                node.download_best_version())
9814+            d.addCallback(lambda results:
9815+                self.failUnlessEqual(results, new_data))
9816+        return d
9817+
9818+    def test_replace(self):
9819+        # We should be able to replace data in the middle of a mutable
9820+        # file and get what we expect back.
9821+        new_data = self.data[:100]
9822+        new_data += "appended"
9823+        new_data += self.data[108:]
9824+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
9825+            d = node.get_best_mutable_version()
9826+            d.addCallback(lambda mv:
9827+                mv.update(MutableData("appended"), 100))
9828+            d.addCallback(lambda ignored, node=node:
9829+                node.download_best_version())
9830+            d.addCallback(lambda results:
9831+                self.failUnlessEqual(results, new_data))
9832+        return d
9833+
9834+    def test_replace_beginning(self):
9835+        # We should be able to replace data at the beginning of the file
9836+        # without truncating the file
9837+        B = "beginning"
9838+        new_data = B + self.data[len(B):]
9839+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
9840+            d = node.get_best_mutable_version()
9841+            d.addCallback(lambda mv: mv.update(MutableData(B), 0))
9842+            d.addCallback(lambda ignored, node=node:
9843+                node.download_best_version())
9844+            d.addCallback(lambda results: self.failUnlessEqual(results, new_data))
9845+        return d
9846+
9847+    def test_replace_segstart1(self):
9848+        offset = 128*1024+1
9849+        new_data = "NNNN"
9850+        expected = self.data[:offset]+new_data+self.data[offset+4:]
9851+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
9852+            d = node.get_best_mutable_version()
9853+            d.addCallback(lambda mv:
9854+                mv.update(MutableData(new_data), offset))
9855+            # close over 'node'.
9856+            d.addCallback(lambda ignored, node=node:
9857+                node.download_best_version())
9858+            def _check(results):
9859+                if results != expected:
9860+                    print
9861+                    print "got: %s ... %s" % (results[:20], results[-20:])
9862+                    print "exp: %s ... %s" % (expected[:20], expected[-20:])
9863+                    self.fail("results != expected")
9864+            d.addCallback(_check)
9865+        return d
9866+
9867+    def _check_differences(self, got, expected):
9868+        # displaying arbitrary file corruption is tricky for a
9869+        # 1MB file of repeating data, so look for likely places
9870+        # with problems and display them separately
9871+        gotmods = [mo.span() for mo in re.finditer('([A-Z]+)', got)]
9872+        expmods = [mo.span() for mo in re.finditer('([A-Z]+)', expected)]
9873+        gotspans = ["%d:%d=%s" % (start,end,got[start:end])
9874+                    for (start,end) in gotmods]
9875+        expspans = ["%d:%d=%s" % (start,end,expected[start:end])
9876+                    for (start,end) in expmods]
9877+        #print "expecting: %s" % expspans
9878+
9879+        SEGSIZE = 128*1024
9880+        if got != expected:
9881+            print "differences:"
9882+            for segnum in range(len(expected)//SEGSIZE):
9883+                start = segnum * SEGSIZE
9884+                end = (segnum+1) * SEGSIZE
9885+                got_ends = "%s .. %s" % (got[start:start+20], got[end-20:end])
9886+                exp_ends = "%s .. %s" % (expected[start:start+20], expected[end-20:end])
9887+                if got_ends != exp_ends:
9888+                    print "expected[%d]: %s" % (start, exp_ends)
9889+                    print "got     [%d]: %s" % (start, got_ends)
9890+            if expspans != gotspans:
9891+                print "expected: %s" % expspans
9892+                print "got     : %s" % gotspans
9893+            open("EXPECTED","wb").write(expected)
9894+            open("GOT","wb").write(got)
9895+            print "wrote data to EXPECTED and GOT"
9896+            self.fail("didn't get expected data")
9897+
9898+
9899+    def test_replace_locations(self):
9900+        # exercise fencepost conditions
9901+        expected = self.data
9902+        SEGSIZE = 128*1024
9903+        suspects = range(SEGSIZE-3, SEGSIZE+1)+range(2*SEGSIZE-3, 2*SEGSIZE+1)
9904+        letters = iter("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
9905+        d = defer.succeed(None)
9906+        for offset in suspects:
9907+            new_data = letters.next()*2 # "AA", then "BB", etc
9908+            expected = expected[:offset]+new_data+expected[offset+2:]
9909+            d.addCallback(lambda ign:
9910+                          self.mdmf_node.get_best_mutable_version())
9911+            def _modify(mv, offset=offset, new_data=new_data):
9912+                # capture the current 'offset','new_data' via default args
9913+                md = MutableData(new_data)
9914+                return mv.update(md, offset)
9915+            d.addCallback(_modify)
9916+            d.addCallback(lambda ignored:
9917+                          self.mdmf_node.download_best_version())
9918+            d.addCallback(self._check_differences, expected)
9919+        return d
9920+
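The suspect offsets above are chosen to straddle the 128 KiB segment boundaries. A minimal sketch (function name is illustrative, not from the codebase) of which of those two-byte writes actually span two segments:

```python
SEGSIZE = 128 * 1024

def crosses_boundary(offset, length=2, segsize=SEGSIZE):
    # A write spans two segments when its first and last bytes
    # fall in different segments.
    return offset // segsize != (offset + length - 1) // segsize

suspects = list(range(SEGSIZE - 3, SEGSIZE + 1)) + \
           list(range(2 * SEGSIZE - 3, 2 * SEGSIZE + 1))
crossing = [off for off in suspects if crosses_boundary(off)]
# Only the writes starting one byte before each boundary span two
# segments; the rest sit entirely inside one segment or the next.
```

So out of eight suspect offsets, exactly two (SEGSIZE-1 and 2*SEGSIZE-1) force the updater to touch two segments at once, which is the fencepost case the test is after.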
9921+    def test_replace_locations_max_shares(self):
9922+        # exercise fencepost conditions
9923+        expected = self.data
9924+        SEGSIZE = 128*1024
9925+        suspects = range(SEGSIZE-3, SEGSIZE+1)+range(2*SEGSIZE-3, 2*SEGSIZE+1)
9926+        letters = iter("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
9927+        d = defer.succeed(None)
9928+        for offset in suspects:
9929+            new_data = letters.next()*2 # "AA", then "BB", etc
9930+            expected = expected[:offset]+new_data+expected[offset+2:]
9931+            d.addCallback(lambda ign:
9932+                          self.mdmf_max_shares_node.get_best_mutable_version())
9933+            def _modify(mv, offset=offset, new_data=new_data):
9934+                # capture the current 'offset','new_data' via default args
9935+                md = MutableData(new_data)
9936+                return mv.update(md, offset)
9937+            d.addCallback(_modify)
9938+            d.addCallback(lambda ignored:
9939+                          self.mdmf_max_shares_node.download_best_version())
9940+            d.addCallback(self._check_differences, expected)
9941+        return d
9942+
9943+    def test_replace_and_extend(self):
9944+        # We should be able to replace data in the middle of a mutable
9945+        # file and extend that mutable file and get what we expect.
9946+        new_data = self.data[:100]
9947+        new_data += "modified " * 100000
9948+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
9949+            d = node.get_best_mutable_version()
9950+            d.addCallback(lambda mv:
9951+                mv.update(MutableData("modified " * 100000), 100))
9952+            d.addCallback(lambda ignored, node=node:
9953+                node.download_best_version())
9954+            d.addCallback(lambda results:
9955+                self.failUnlessEqual(results, new_data))
9956+        return d
9957+
9958+
9959+    def test_append_power_of_two(self):
9960+        # If we attempt to extend a mutable file so that its segment
9961+        # count crosses a power-of-two boundary, the update operation
9962+        # should know how to reencode the file.
9963+
9964+        # Note that the data populating self.mdmf_node is about 900 KB
9965+        # long -- that is 7 segments at the default segment size. So we
9966+        # need to add 2 segments worth of data to push it over a
9967+        # power-of-two boundary.
9968+        segment = "a" * DEFAULT_MAX_SEGMENT_SIZE
9969+        new_data = self.data + (segment * 2)
9970+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
9971+            d = node.get_best_mutable_version()
9972+            d.addCallback(lambda mv:
9973+                mv.update(MutableData(segment * 2), len(self.data)))
9974+            d.addCallback(lambda ignored, node=node:
9975+                node.download_best_version())
9976+            d.addCallback(lambda results:
9977+                self.failUnlessEqual(results, new_data))
9978+        return d
9979+    test_append_power_of_two.timeout = 15
9980+
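The boundary arithmetic the comment describes can be checked directly. The 900000-byte length below is an assumed stand-in for the size of self.data, and 128 KiB for DEFAULT_MAX_SEGMENT_SIZE; both are guesses consistent with the comment, not values taken from this patch:

```python
SEGSIZE = 128 * 1024  # assumed value of DEFAULT_MAX_SEGMENT_SIZE

def num_segments(size, segsize=SEGSIZE):
    # Ceiling division: a partial tail segment still counts.
    return max(1, -(-size // segsize))

data_len = 900000              # hypothetical size of self.data (~900 KB)
before = num_segments(data_len)
after = num_segments(data_len + 2 * SEGSIZE)
# Appending two full segments pushes 7 segments past the 8-segment
# (power-of-two) boundary to 9, forcing the update to reencode.
```

Crossing a power-of-two segment count matters because the share hash tree's shape depends on it, so an append that crosses the boundary cannot be done as a pure in-place write.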
9981+
9982+    def test_update_sdmf(self):
9983+        # Running update on a single-segment file should still work.
9984+        new_data = self.small_data + "appended"
9985+        for node in (self.sdmf_node, self.sdmf_max_shares_node):
9986+            d = node.get_best_mutable_version()
9987+            d.addCallback(lambda mv:
9988+                mv.update(MutableData("appended"), len(self.small_data)))
9989+            d.addCallback(lambda ignored, node=node:
9990+                node.download_best_version())
9991+            d.addCallback(lambda results:
9992+                self.failUnlessEqual(results, new_data))
9993+        return d
9994+
9995+    def test_replace_in_last_segment(self):
9996+        # The wrapper should know how to handle the tail segment
9997+        # appropriately.
9998+        replace_offset = len(self.data) - 100
9999+        new_data = self.data[:replace_offset] + "replaced"
10000+        rest_offset = replace_offset + len("replaced")
10001+        new_data += self.data[rest_offset:]
10002+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
10003+            d = node.get_best_mutable_version()
10004+            d.addCallback(lambda mv:
10005+                mv.update(MutableData("replaced"), replace_offset))
10006+            d.addCallback(lambda ignored, node=node:
10007+                node.download_best_version())
10008+            d.addCallback(lambda results:
10009+                self.failUnlessEqual(results, new_data))
10010+        return d
10011+
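The expected-data construction above amounts to a length-preserving overwrite near the tail. A small sketch of that splice (the helper name is hypothetical, not part of the codebase):

```python
def splice(data, offset, new_data):
    # Overwrite len(new_data) bytes of data at offset; the result
    # has the same length as the input (no extension).
    return data[:offset] + new_data + data[offset + len(new_data):]

data = "x" * 1000
out = splice(data, len(data) - 100, "replaced")
# "replaced" now sits 100 bytes before the end; overall length unchanged.
```

Because the write starts 100 bytes before EOF and "replaced" is only 8 bytes, the whole operation lands inside the tail segment, which is exactly the case the wrapper has to pad and re-encode correctly.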
10012+
10013+    def test_multiple_segment_replace(self):
10014+        replace_offset = 2 * DEFAULT_MAX_SEGMENT_SIZE
10015+        new_data = self.data[:replace_offset]
10016+        new_segment = "a" * DEFAULT_MAX_SEGMENT_SIZE
10017+        new_data += 2 * new_segment
10018+        new_data += "replaced"
10019+        rest_offset = len(new_data)
10020+        new_data += self.data[rest_offset:]
10021+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
10022+            d = node.get_best_mutable_version()
10023+            d.addCallback(lambda mv:
10024+                mv.update(MutableData((2 * new_segment) + "replaced"),
10025+                          replace_offset))
10026+            d.addCallback(lambda ignored, node=node:
10027+                node.download_best_version())
10028+            d.addCallback(lambda results:
10029+                self.failUnlessEqual(results, new_data))
10030+        return d
10031+
10032+class Interoperability(GridTestMixin, unittest.TestCase, testutil.ShouldFailMixin):
10033+    sdmf_old_shares = {}
10034+    sdmf_old_shares[0] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcABOOLy8EETxh7h7/z9d62EiPu9CNpRrCOLxUhn+JUS+DuAAhgcAb/adrQFrhlrRNoRpvjDuxmFebA4F0qCyqWssm61AAQ/EX4eC/1+hGOQ/h4EiKUkqxdsfzdcPlDvd11SGWZ0VHsUclZChTzuBAU2zLTXm+cG8IFhO50ly6Ey/DB44NtMKVaVzO0nU8DE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10035+    sdmf_old_shares[1] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcABOOLy8EETxh7h7/z9d62EiPu9CNpRrCOLxUhn+JUS+DuAAhgcAb/adrQFrhlrRNoRpvjDuxmFebA4F0qCyqWssm61AAP7FHJWQoU87gQFNsy015vnBvCBYTudJcuhMvwweODbTD8Rfh4L/X6EY5D+HgSIpSSrF2x/N1w+UO93XVIZZnRUeePDXEwhqYDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10036+    sdmf_old_shares[2] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcABOOLy8EETxh7h7/z9d62EiPu9CNpRrCOLxUhn+JUS+DuAAd8jdiCodW233N1acXhZGnulDKR3hiNsMdEIsijRPemewASoSCFpVj4utEE+eVFM146xfgC6DX39GaQ2zT3YKsWX3GiLwKtGffwqV7IlZIcBEVqMfTXSTZsY+dZm1MxxCZH0Zd33VY0yggDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10037+    sdmf_old_shares[3] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcABOOLy8EETxh7h7/z9d62EiPu9CNpRrCOLxUhn+JUS+DuAAd8jdiCodW233N1acXhZGnulDKR3hiNsMdEIsijRPemewARoi8CrRn38KleyJWSHARFajH010k2bGPnWZtTMcQmR9GhIIWlWPi60QT55UUzXjrF+ALoNff0ZpDbNPdgqxZfcSNSplrHqtsDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10038+    sdmf_old_shares[4] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcAA6dlE140Fc7FgB77PeM5Phv+bypQEYtyfLQHxd+OxlG3AAoIM8M4XulprmLd4gGMobS2Bv9CmwB5LpK/ySHE1QWjdwAUMA7/aVz7Mb1em0eks+biC8ZuVUhuAEkTVOAF4YulIjE8JlfW0dS1XKk62u0586QxiN38NTsluUDx8EAPTL66yRsfb1f3rRIDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10039+    sdmf_old_shares[5] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcAA6dlE140Fc7FgB77PeM5Phv+bypQEYtyfLQHxd+OxlG3AAoIM8M4XulprmLd4gGMobS2Bv9CmwB5LpK/ySHE1QWjdwATPCZX1tHUtVypOtrtOfOkMYjd/DU7JblA8fBAD0y+uskwDv9pXPsxvV6bR6Sz5uILxm5VSG4ASRNU4AXhi6UiMUKZHBmcmEgDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10040+    sdmf_old_shares[6] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcAA6dlE140Fc7FgB77PeM5Phv+bypQEYtyfLQHxd+OxlG3AAlyHZU7RfTJjbHu1gjabWZsTu+7nAeRVG6/ZSd4iMQ1ZgAWDSFSPvKzcFzRcuRlVgKUf0HBce1MCF8SwpUbPPEyfVJty4xLZ7DvNU/Eh/R6BarsVAagVXdp+GtEu0+fok7nilT4LchmHo8DE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10041+    sdmf_old_shares[7] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcAA6dlE140Fc7FgB77PeM5Phv+bypQEYtyfLQHxd+OxlG3AAlyHZU7RfTJjbHu1gjabWZsTu+7nAeRVG6/ZSd4iMQ1ZgAVbcuMS2ew7zVPxIf0egWq7FQGoFV3afhrRLtPn6JO54oNIVI+8rNwXNFy5GVWApR/QcFx7UwIXxLClRs88TJ9UtLnNF4/mM0DE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10042+    sdmf_old_shares[8] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgABUSzNKiMx0E91q51/WH6ASL0fDEOLef9oxuyBX5F5cpoABojmWkDX3k3FKfgNHIeptE3lxB8HHzxDfSD250psyfNCAAwGsKbMxbmI2NpdTozZ3SICrySwgGkatA1gsDOJmOnTzgAYmqKY7A9vQChuYa17fYSyKerIb3682jxiIneQvCMWCK5WcuI4PMeIsUAj8yxdxHvV+a9vtSCEsDVvymrrooDKX1GK98t37yoDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10043+    sdmf_old_shares[9] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgABUSzNKiMx0E91q51/WH6ASL0fDEOLef9oxuyBX5F5cpoABojmWkDX3k3FKfgNHIeptE3lxB8HHzxDfSD250psyfNCAAwGsKbMxbmI2NpdTozZ3SICrySwgGkatA1gsDOJmOnTzgAXVnLiODzHiLFAI/MsXcR71fmvb7UghLA1b8pq66KAyl+aopjsD29AKG5hrXt9hLIp6shvfrzaPGIid5C8IxYIrjgBj1YohGgDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10044+    sdmf_old_cap = "URI:SSK:gmjgofw6gan57gwpsow6gtrz3e:5adm6fayxmu3e4lkmfvt6lkkfix34ai2wop2ioqr4bgvvhiol3kq"
10045+    sdmf_old_contents = "This is a test file.\n"
10046+    def copy_sdmf_shares(self):
10047+        # We'll basically be short-circuiting the upload process.
10048+        servernums = self.g.servers_by_number.keys()
10049+        assert len(servernums) == 10
10050+
10051+        assignments = zip(self.sdmf_old_shares.keys(), servernums)
10052+        # Get the storage index.
10053+        cap = uri.from_string(self.sdmf_old_cap)
10054+        si = cap.get_storage_index()
10055+
10056+        # Now execute each assignment by writing the storage.
10057+        for (share, servernum) in assignments:
10058+            sharedata = base64.b64decode(self.sdmf_old_shares[share])
10059+            storedir = self.get_serverdir(servernum)
10060+            storage_path = os.path.join(storedir, "shares",
10061+                                        storage_index_to_dir(si))
10062+            fileutil.make_dirs(storage_path)
10063+            fileutil.write(os.path.join(storage_path, "%d" % share),
10064+                           sharedata)
10065+        # ...and verify that the shares are there.
10066+        shares = self.find_uri_shares(self.sdmf_old_cap)
10067+        assert len(shares) == 10
10068+
10069+    def test_new_downloader_can_read_old_shares(self):
10070+        self.basedir = "mutable/Interoperability/new_downloader_can_read_old_shares"
10071+        self.set_up_grid()
10072+        self.copy_sdmf_shares()
10073+        nm = self.g.clients[0].nodemaker
10074+        n = nm.create_from_cap(self.sdmf_old_cap)
10075+        d = n.download_best_version()
10076+        d.addCallback(self.failUnlessEqual, self.sdmf_old_contents)
10077+        return d
10078}
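The test above short-circuits the upload process by writing canned shares straight into each server's storage directory. A minimal standalone sketch of the same idea, using plain `os` paths in place of the test-grid helpers (`place_shares` is our illustrative name; `get_serverdir` and `storage_index_to_dir` from the real harness are not used here):

```python
import base64, os, tempfile

def place_shares(shares_b64, storedir, si_dir):
    """Decode base64 shares and write them under shares/<si_dir>/<shnum>."""
    storage_path = os.path.join(storedir, "shares", si_dir)
    os.makedirs(storage_path, exist_ok=True)
    for shnum, share_b64 in shares_b64.items():
        sharedata = base64.b64decode(share_b64)
        with open(os.path.join(storage_path, "%d" % shnum), "wb") as f:
            f.write(sharedata)
    return storage_path

# place one fake share on a throwaway "server" directory
path = place_shares({0: base64.b64encode(b"fake share").decode()},
                    tempfile.mkdtemp(), "aa/aaaaaaaa")
with open(os.path.join(path, "0"), "rb") as f:
    assert f.read() == b"fake share"
```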
10079[mutable/layout: Define MDMF share format, write tools for working with MDMF share format
10080Kevan Carstensen <kevan@isnotajoke.com>**20110802021120
10081 Ignore-this: fa76ef4800939e19ba3cbc22a2eab4e
10082 
10083 The changes in layout.py are mostly concerned with the MDMF share
10084 format. In particular, we define read and write proxy objects used by
10085 retrieval, publishing, and other code to write and read the MDMF share
10086 format. We create equivalent proxies for SDMF objects so that these
10087 objects can be suitably general.
10088] {
10089hunk ./src/allmydata/mutable/layout.py 2
10090 
10091-import struct
10092+import struct, math
10093 from allmydata.mutable.common import NeedMoreDataError, UnknownVersionError
10094hunk ./src/allmydata/mutable/layout.py 4
10095+from allmydata.interfaces import HASH_SIZE, SALT_SIZE, SDMF_VERSION, \
10096+                                 MDMF_VERSION, IMutableSlotWriter
10097+from allmydata.util import mathutil, observer
10098+from twisted.python import failure
10099+from twisted.internet import defer
10100+from zope.interface import implements
10101+
10102+
10103+# These strings describe the format of the packed structs they help process
10104+# Here's what they mean:
10105+#
10106+#  PREFIX:
10107+#    >: Big-endian byte order; the most significant byte is first (leftmost).
10108+#    B: The version information; an 8 bit version identifier. Stored as
10109+#       an unsigned char. This is currently 0 (SDMF); our modifications
10110+#       add version 1 (MDMF).
10111+#    Q: The sequence number; this is sort of like a revision history for
10112+#       mutable files; they start at 1 and increase as they are changed after
10113+#       being uploaded. Stored as an unsigned long long, which is 8 bytes in
10114+#       length.
10115+#  32s: The root hash of the share hash tree. We use sha-256d, so we use 32
10116+#       characters = 32 bytes to store the value.
10117+#  16s: The salt for the readkey. This is a 16-byte random value, stored as
10118+#       16 characters.
10119+#
10120+#  SIGNED_PREFIX additions, things that are covered by the signature:
10121+#    B: The "k" encoding parameter. We store this as an 8-bit character,
10122+#       which is convenient because our erasure coding scheme cannot
10123+#       encode if you ask for more than 255 pieces.
10124+#    B: The "N" encoding parameter. Stored as an 8-bit character for the
10125+#       same reasons as above.
10126+#    Q: The segment size of the uploaded file. This will essentially be the
10127+#       length of the file in SDMF. An unsigned long long, so we can store
10128+#       files of quite large size.
10129+#    Q: The data length of the uploaded file. Modulo padding, this will be
10130+#       the same as the segment size field. Like the segment size field, it
10131+#       is an unsigned long long and can be quite large.
10132+#
10133+#   HEADER additions:
10134+#     L: The offset of the signature of this. An unsigned long.
10135+#     L: The offset of the share hash chain. An unsigned long.
10136+#     L: The offset of the block hash tree. An unsigned long.
10137+#     L: The offset of the share data. An unsigned long.
10138+#     Q: The offset of the encrypted private key. An unsigned long long, to
10139+#        account for the possibility of a lot of share data.
10140+#     Q: The offset of the EOF. An unsigned long long, to account for the
10141+#        possibility of a lot of share data.
10142+#
10143+#  After all of these, we have the following:
10144+#    - The verification key: Occupies the space between the end of the header
10145+#      and the start of the signature (i.e.: data[HEADER_LENGTH:o['signature']]).
10146+#    - The signature, which goes from the signature offset to the share hash
10147+#      chain offset.
10148+#    - The share hash chain, which goes from the share hash chain offset to
10149+#      the block hash tree offset.
10150+#    - The share data, which goes from the share data offset to the encrypted
10151+#      private key offset.
10152+#    - The encrypted private key, which goes from its offset until the end of the file.
10153+#
10154+#  The block hash tree in this encoding has only one leaf (SDMF files have
10155+#  a single segment), so the offset of the share data will be 32 bytes more
10156+#  than the offset of the block hash tree. Given this, we may need to check
10157+#  to see how many bytes a reasonably sized block hash tree will take up.
10158 
10159 PREFIX = ">BQ32s16s" # each version has a different prefix
10160 SIGNED_PREFIX = ">BQ32s16s BBQQ" # this is covered by the signature
10161hunk ./src/allmydata/mutable/layout.py 73
10162 SIGNED_PREFIX_LENGTH = struct.calcsize(SIGNED_PREFIX)
10163 HEADER = ">BQ32s16s BBQQ LLLLQQ" # includes offsets
10164 HEADER_LENGTH = struct.calcsize(HEADER)
10165+OFFSETS = ">LLLLQQ"
10166+OFFSETS_LENGTH = struct.calcsize(OFFSETS)
10167 
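The format strings above pin down the fixed-size part of an SDMF share. As a quick sanity check using only the standard `struct` module, the field sizes add up as follows (no Tahoe code involved):

```python
import struct

PREFIX = ">BQ32s16s"              # version, seqnum, root hash, IV/salt
SIGNED_PREFIX = ">BQ32s16s BBQQ"  # adds k, N, segsize, datalen
OFFSETS = ">LLLLQQ"               # the six offset fields
HEADER = ">BQ32s16s BBQQ LLLLQQ"  # signed prefix followed by offsets

assert struct.calcsize(PREFIX) == 1 + 8 + 32 + 16            # 57 bytes
assert struct.calcsize(SIGNED_PREFIX) == 57 + 1 + 1 + 8 + 8  # 75 bytes
assert struct.calcsize(OFFSETS) == 4 * 4 + 2 * 8             # 32 bytes
assert struct.calcsize(HEADER) == 75 + 32                    # 107 bytes
```

Big-endian formats (`>`) have no alignment padding, so the header is exactly the sum of its fields.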
10168hunk ./src/allmydata/mutable/layout.py 76
10169+# These are still used for some tests.
10170 def unpack_header(data):
10171     o = {}
10172     (version,
10173hunk ./src/allmydata/mutable/layout.py 92
10174      o['EOF']) = struct.unpack(HEADER, data[:HEADER_LENGTH])
10175     return (version, seqnum, root_hash, IV, k, N, segsize, datalen, o)
10176 
10177-def unpack_prefix_and_signature(data):
10178-    assert len(data) >= HEADER_LENGTH, len(data)
10179-    prefix = data[:SIGNED_PREFIX_LENGTH]
10180-
10181-    (version,
10182-     seqnum,
10183-     root_hash,
10184-     IV,
10185-     k, N, segsize, datalen,
10186-     o) = unpack_header(data)
10187-
10188-    if version != 0:
10189-        raise UnknownVersionError("got mutable share version %d, but I only understand version 0" % version)
10190-
10191-    if len(data) < o['share_hash_chain']:
10192-        raise NeedMoreDataError(o['share_hash_chain'],
10193-                                o['enc_privkey'], o['EOF']-o['enc_privkey'])
10194-
10195-    pubkey_s = data[HEADER_LENGTH:o['signature']]
10196-    signature = data[o['signature']:o['share_hash_chain']]
10197-
10198-    return (seqnum, root_hash, IV, k, N, segsize, datalen,
10199-            pubkey_s, signature, prefix)
10200-
10201 def unpack_share(data):
10202     assert len(data) >= HEADER_LENGTH
10203     o = {}
10204hunk ./src/allmydata/mutable/layout.py 139
10205             pubkey, signature, share_hash_chain, block_hash_tree,
10206             share_data, enc_privkey)
10207 
10208-def unpack_share_data(verinfo, hash_and_data):
10209-    (seqnum, root_hash, IV, segsize, datalength, k, N, prefix, o_t) = verinfo
10210-
10211-    # hash_and_data starts with the share_hash_chain, so figure out what the
10212-    # offsets really are
10213-    o = dict(o_t)
10214-    o_share_hash_chain = 0
10215-    o_block_hash_tree = o['block_hash_tree'] - o['share_hash_chain']
10216-    o_share_data = o['share_data'] - o['share_hash_chain']
10217-    o_enc_privkey = o['enc_privkey'] - o['share_hash_chain']
10218-
10219-    share_hash_chain_s = hash_and_data[o_share_hash_chain:o_block_hash_tree]
10220-    share_hash_format = ">H32s"
10221-    hsize = struct.calcsize(share_hash_format)
10222-    assert len(share_hash_chain_s) % hsize == 0, len(share_hash_chain_s)
10223-    share_hash_chain = []
10224-    for i in range(0, len(share_hash_chain_s), hsize):
10225-        chunk = share_hash_chain_s[i:i+hsize]
10226-        (hid, h) = struct.unpack(share_hash_format, chunk)
10227-        share_hash_chain.append( (hid, h) )
10228-    share_hash_chain = dict(share_hash_chain)
10229-    block_hash_tree_s = hash_and_data[o_block_hash_tree:o_share_data]
10230-    assert len(block_hash_tree_s) % 32 == 0, len(block_hash_tree_s)
10231-    block_hash_tree = []
10232-    for i in range(0, len(block_hash_tree_s), 32):
10233-        block_hash_tree.append(block_hash_tree_s[i:i+32])
10234-
10235-    share_data = hash_and_data[o_share_data:o_enc_privkey]
10236-
10237-    return (share_hash_chain, block_hash_tree, share_data)
10238-
10239-
10240-def pack_checkstring(seqnum, root_hash, IV):
10241-    return struct.pack(PREFIX,
10242-                       0, # version,
10243-                       seqnum,
10244-                       root_hash,
10245-                       IV)
10246-
10247 def unpack_checkstring(checkstring):
10248     cs_len = struct.calcsize(PREFIX)
10249     version, seqnum, root_hash, IV = struct.unpack(PREFIX, checkstring[:cs_len])
10250hunk ./src/allmydata/mutable/layout.py 146
10251         raise UnknownVersionError("got mutable share version %d, but I only understand version 0" % version)
10252     return (seqnum, root_hash, IV)
10253 
10254-def pack_prefix(seqnum, root_hash, IV,
10255-                required_shares, total_shares,
10256-                segment_size, data_length):
10257-    prefix = struct.pack(SIGNED_PREFIX,
10258-                         0, # version,
10259-                         seqnum,
10260-                         root_hash,
10261-                         IV,
10262-
10263-                         required_shares,
10264-                         total_shares,
10265-                         segment_size,
10266-                         data_length,
10267-                         )
10268-    return prefix
10269 
10270 def pack_offsets(verification_key_length, signature_length,
10271                  share_hash_chain_length, block_hash_tree_length,
10272hunk ./src/allmydata/mutable/layout.py 192
10273                            encprivkey])
10274     return final_share
10275 
10276+def pack_prefix(seqnum, root_hash, IV,
10277+                required_shares, total_shares,
10278+                segment_size, data_length):
10279+    prefix = struct.pack(SIGNED_PREFIX,
10280+                         0, # version,
10281+                         seqnum,
10282+                         root_hash,
10283+                         IV,
10284+                         required_shares,
10285+                         total_shares,
10286+                         segment_size,
10287+                         data_length,
10288+                         )
10289+    return prefix
10290+
10291+
10292+class SDMFSlotWriteProxy:
10293+    implements(IMutableSlotWriter)
10294+    """
10295+    I represent a remote write slot for an SDMF mutable file. I build a
10296+    share in memory, and then write it in one piece to the remote
10297+    server. This mimics how SDMF shares were built before MDMF (and the
10298+    new MDMF uploader), but provides that functionality in a way that
10299+    allows the MDMF uploader to be built without much special-casing for
10300+    file format, which makes the uploader code more readable.
10301+    """
10302+    def __init__(self,
10303+                 shnum,
10304+                 rref, # a remote reference to a storage server
10305+                 storage_index,
10306+                 secrets, # (write_enabler, renew_secret, cancel_secret)
10307+                 seqnum, # the sequence number of the mutable file
10308+                 required_shares,
10309+                 total_shares,
10310+                 segment_size,
10311+                 data_length): # the length of the original file
10312+        self.shnum = shnum
10313+        self._rref = rref
10314+        self._storage_index = storage_index
10315+        self._secrets = secrets
10316+        self._seqnum = seqnum
10317+        self._required_shares = required_shares
10318+        self._total_shares = total_shares
10319+        self._segment_size = segment_size
10320+        self._data_length = data_length
10321+
10322+        # This is an SDMF file, so it should have only one segment, so,
10323+        # modulo padding of the data length, the segment size and the
10324+        # data length should be the same.
10325+        expected_segment_size = mathutil.next_multiple(data_length,
10326+                                                       self._required_shares)
10327+        assert expected_segment_size == segment_size
10328+
10329+        self._block_size = self._segment_size / self._required_shares
10330+
10331+        # This is meant to mimic how SDMF files were built before MDMF
10332+        # entered the picture: we generate each share in its entirety,
10333+        # then push it off to the storage server in one write. When
10334+        # callers call set_*, they are just populating this dict.
10335+        # finish_publishing will stitch these pieces together into a
10336+        # coherent share, and then write the coherent share to the
10337+        # storage server.
10338+        self._share_pieces = {}
10339+
10340+        # This tells the write logic what checkstring to use when
10341+        # writing remote shares.
10342+        self._testvs = []
10343+
10344+        self._readvs = [(0, struct.calcsize(PREFIX))]
10345+
10346+
10347+    def set_checkstring(self, checkstring_or_seqnum,
10348+                              root_hash=None,
10349+                              salt=None):
10350+        """
10351+        Set the checkstring that I will pass to the remote server when
10352+        writing.
10353+
10354+            @param checkstring_or_seqnum: A packed checkstring to use, or a
10355+                   seqnum from which to build one (with root_hash and salt).
10356+
10357+        Note that implementations can differ in which semantics they
10358+        wish to support for set_checkstring -- they can, for example,
10359+        build the checkstring themselves from its constituents, or
10360+        some other thing.
10361+        """
10362+        if root_hash and salt:
10363+            checkstring = struct.pack(PREFIX,
10364+                                      0,
10365+                                      checkstring_or_seqnum,
10366+                                      root_hash,
10367+                                      salt)
10368+        else:
10369+            checkstring = checkstring_or_seqnum
10370+        self._testvs = [(0, len(checkstring), "eq", checkstring)]
10371+
10372+
10373+    def get_checkstring(self):
10374+        """
10375+        Get the checkstring that I think currently exists on the remote
10376+        server.
10377+        """
10378+        if self._testvs:
10379+            return self._testvs[0][3]
10380+        return ""
10381+
10382+
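A checkstring, as built by set_checkstring above, is just the packed `PREFIX`: a version byte (0 for SDMF), the sequence number, the root hash, and the salt/IV. A minimal sketch of building and unpacking one (`make_checkstring` is our illustrative name, not a Tahoe API):

```python
import struct

PREFIX = ">BQ32s16s"  # version, seqnum, root hash, IV/salt

def make_checkstring(seqnum, root_hash, salt):
    # version byte is 0 for SDMF
    return struct.pack(PREFIX, 0, seqnum, root_hash, salt)

cs = make_checkstring(3, b"\x11" * 32, b"\x22" * 16)
version, seqnum, root_hash, salt = struct.unpack(PREFIX, cs)
assert (version, seqnum) == (0, 3)
assert root_hash == b"\x11" * 32 and salt == b"\x22" * 16
```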
10383+    def put_block(self, data, segnum, salt):
10384+        """
10385+        Add a block and salt to the share.
10386+        """
10387+        # SDMF files have only one segment
10388+        assert segnum == 0
10389+        assert len(data) == self._block_size
10390+        assert len(salt) == SALT_SIZE
10391+
10392+        self._share_pieces['sharedata'] = data
10393+        self._share_pieces['salt'] = salt
10394+
10395+        # TODO: Figure out something intelligent to return.
10396+        return defer.succeed(None)
10397+
10398+
10399+    def put_encprivkey(self, encprivkey):
10400+        """
10401+        Add the encrypted private key to the share.
10402+        """
10403+        self._share_pieces['encprivkey'] = encprivkey
10404+
10405+        return defer.succeed(None)
10406+
10407+
10408+    def put_blockhashes(self, blockhashes):
10409+        """
10410+        Add the block hash tree to the share.
10411+        """
10412+        assert isinstance(blockhashes, list)
10413+        for h in blockhashes:
10414+            assert len(h) == HASH_SIZE
10415+
10416+        # serialize the blockhashes, then set them.
10417+        blockhashes_s = "".join(blockhashes)
10418+        self._share_pieces['block_hash_tree'] = blockhashes_s
10419+
10420+        return defer.succeed(None)
10421+
10422+
10423+    def put_sharehashes(self, sharehashes):
10424+        """
10425+        Add the share hash chain to the share.
10426+        """
10427+        assert isinstance(sharehashes, dict)
10428+        for h in sharehashes.itervalues():
10429+            assert len(h) == HASH_SIZE
10430+
10431+        # serialize the sharehashes, then set them.
10432+        sharehashes_s = "".join([struct.pack(">H32s", i, sharehashes[i])
10433+                                 for i in sorted(sharehashes.keys())])
10434+        self._share_pieces['share_hash_chain'] = sharehashes_s
10435+
10436+        return defer.succeed(None)
10437+
10438+
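put_sharehashes serializes the chain as concatenated `>H32s` records: a 2-byte node index followed by a 32-byte hash, sorted by index. A standalone sketch of that serialization (helper name is illustrative):

```python
import struct

def pack_share_hash_chain(sharehashes):
    # one ">H32s" record per entry: 2-byte node index, 32-byte hash,
    # emitted in ascending index order
    return b"".join(struct.pack(">H32s", i, sharehashes[i])
                    for i in sorted(sharehashes))

chain = pack_share_hash_chain({2: b"\xbb" * 32, 1: b"\xaa" * 32})
assert len(chain) == 2 * (2 + 32)
idx, h = struct.unpack(">H32s", chain[:34])
assert idx == 1 and h == b"\xaa" * 32  # lowest index comes first
```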
10439+    def put_root_hash(self, root_hash):
10440+        """
10441+        Add the root hash to the share.
10442+        """
10443+        assert len(root_hash) == HASH_SIZE
10444+
10445+        self._share_pieces['root_hash'] = root_hash
10446+
10447+        return defer.succeed(None)
10448+
10449+
10450+    def put_salt(self, salt):
10451+        """
10452+        Add a salt to an empty SDMF file.
10453+        """
10454+        assert len(salt) == SALT_SIZE
10455+
10456+        self._share_pieces['salt'] = salt
10457+        self._share_pieces['sharedata'] = ""
10458+
10459+
10460+    def get_signable(self):
10461+        """
10462+        Return the part of the share that needs to be signed.
10463+
10464+        SDMF writers need to sign the packed representation of the
10465+        first eight fields of the remote share, that is:
10466+            - version number (0)
10467+            - sequence number
10468+            - root of the share hash tree
10469+            - salt
10470+            - k
10471+            - n
10472+            - segsize
10473+            - datalen
10474+
10475+        This method is responsible for returning that to callers.
10476+        """
10477+        return struct.pack(SIGNED_PREFIX,
10478+                           0,
10479+                           self._seqnum,
10480+                           self._share_pieces['root_hash'],
10481+                           self._share_pieces['salt'],
10482+                           self._required_shares,
10483+                           self._total_shares,
10484+                           self._segment_size,
10485+                           self._data_length)
10486+
10487+
10488+    def put_signature(self, signature):
10489+        """
10490+        Add the signature to the share.
10491+        """
10492+        self._share_pieces['signature'] = signature
10493+
10494+        return defer.succeed(None)
10495+
10496+
10497+    def put_verification_key(self, verification_key):
10498+        """
10499+        Add the verification key to the share.
10500+        """
10501+        self._share_pieces['verification_key'] = verification_key
10502+
10503+        return defer.succeed(None)
10504+
10505+
10506+    def get_verinfo(self):
10507+        """
10508+        I return my verinfo tuple. This is used by the ServermapUpdater
10509+        to keep track of versions of mutable files.
10510+
10511+        The verinfo tuple for MDMF files contains:
10512+            - seqnum
10513+            - root hash
10514+            - a blank (nothing)
10515+            - segsize
10516+            - datalen
10517+            - k
10518+            - n
10519+            - prefix (the thing that you sign)
10520+            - a tuple of offsets
10521+
10522+        We include the blank salt field in the MDMF verinfo to simplify
10523+        processing of version information tuples.
10524+
10525+        The verinfo tuple for SDMF files is the same, but contains a
10526+        16-byte IV instead of a hash of salts.
10527+        """
10528+        return (self._seqnum,
10529+                self._share_pieces['root_hash'],
10530+                self._share_pieces['salt'],
10531+                self._segment_size,
10532+                self._data_length,
10533+                self._required_shares,
10534+                self._total_shares,
10535+                self.get_signable(),
10536+                self._get_offsets_tuple())
10537+
10538+    def _get_offsets_dict(self):
10539+        post_offset = HEADER_LENGTH
10540+        offsets = {}
10541+
10542+        verification_key_length = len(self._share_pieces['verification_key'])
10543+        o1 = offsets['signature'] = post_offset + verification_key_length
10544+
10545+        signature_length = len(self._share_pieces['signature'])
10546+        o2 = offsets['share_hash_chain'] = o1 + signature_length
10547+
10548+        share_hash_chain_length = len(self._share_pieces['share_hash_chain'])
10549+        o3 = offsets['block_hash_tree'] = o2 + share_hash_chain_length
10550+
10551+        block_hash_tree_length = len(self._share_pieces['block_hash_tree'])
10552+        o4 = offsets['share_data'] = o3 + block_hash_tree_length
10553+
10554+        share_data_length = len(self._share_pieces['sharedata'])
10555+        o5 = offsets['enc_privkey'] = o4 + share_data_length
10556+
10557+        encprivkey_length = len(self._share_pieces['encprivkey'])
10558+        offsets['EOF'] = o5 + encprivkey_length
10559+        return offsets
10560+
10561+
10562+    def _get_offsets_tuple(self):
10563+        offsets = self._get_offsets_dict()
10564+        return tuple([(key, value) for key, value in offsets.items()])
10565+
10566+
10567+    def _pack_offsets(self):
10568+        offsets = self._get_offsets_dict()
10569+        return struct.pack(">LLLLQQ",
10570+                           offsets['signature'],
10571+                           offsets['share_hash_chain'],
10572+                           offsets['block_hash_tree'],
10573+                           offsets['share_data'],
10574+                           offsets['enc_privkey'],
10575+                           offsets['EOF'])
10576+
10577+
10578+    def finish_publishing(self):
10579+        """
10580+        Do anything necessary to finish writing the share to a remote
10581+        server. I require that no further publishing needs to take place
10582+        after this method has been called.
10583+        """
10584+        for k in ["sharedata", "encprivkey", "signature", "verification_key",
10585+                  "share_hash_chain", "block_hash_tree"]:
10586+            assert k in self._share_pieces
10587+        # This is the only method that actually writes something to the
10588+        # remote server.
10589+        # First, we need to pack the share into data that we can write
10590+        # to the remote server in one write.
10591+        offsets = self._pack_offsets()
10592+        prefix = self.get_signable()
10593+        final_share = "".join([prefix,
10594+                               offsets,
10595+                               self._share_pieces['verification_key'],
10596+                               self._share_pieces['signature'],
10597+                               self._share_pieces['share_hash_chain'],
10598+                               self._share_pieces['block_hash_tree'],
10599+                               self._share_pieces['sharedata'],
10600+                               self._share_pieces['encprivkey']])
10601+
10602+        # Our only data vector is going to be writing the final share,
10603+        # in its entirety.
10604+        datavs = [(0, final_share)]
10605+
10606+        if not self._testvs:
10607+            # Our caller has not provided us with another checkstring
10608+            # yet, so we assume that we are writing a new share, and set
10609+            # a test vector that will allow a new share to be written.
10610+            self._testvs = []
10611+            self._testvs.append(tuple([0, 1, "eq", ""]))
10612+
10613+        tw_vectors = {}
10614+        tw_vectors[self.shnum] = (self._testvs, datavs, None)
10615+        return self._rref.callRemote("slot_testv_and_readv_and_writev",
10616+                                     self._storage_index,
10617+                                     self._secrets,
10618+                                     tw_vectors,
10619+                                     # TODO is it useful to read something?
10620+                                     self._readvs)
10621+
10622+
10623+MDMFHEADER = ">BQ32sBBQQ QQQQQQQQ"
10624+MDMFHEADERWITHOUTOFFSETS = ">BQ32sBBQQ"
10625+MDMFHEADERSIZE = struct.calcsize(MDMFHEADER)
10626+MDMFHEADERWITHOUTOFFSETSSIZE = struct.calcsize(MDMFHEADERWITHOUTOFFSETS)
10627+MDMFCHECKSTRING = ">BQ32s"
10628+MDMFSIGNABLEHEADER = ">BQ32sBBQQ"
10629+MDMFOFFSETS = ">QQQQQQQQ"
10630+MDMFOFFSETS_LENGTH = struct.calcsize(MDMFOFFSETS)
10631+
10632+PRIVATE_KEY_SIZE = 1220
10633+SIGNATURE_SIZE = 260
10634+VERIFICATION_KEY_SIZE = 292
10635+# We know we won't have more than 256 shares, so we won't need more than
10636+# lg 256 share hash chain nodes to validate; each node is a 2-byte share
10637+# number plus a HASH_SIZE-byte hash. We add 1 to the int cast to round up.
10638+SHARE_HASH_CHAIN_SIZE = (int(math.log(256, 2)) + 1) * (2 + HASH_SIZE)
10639+
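The header arithmetic implied by these format strings can be sanity-checked with `struct.calcsize`; this sketch (not part of the patch) verifies that the checkstring, signable part, offsets table, and full header sizes agree with the layout comments below.

```python
import struct

# Format strings copied from the patch above.
MDMFCHECKSTRING = ">BQ32s"                # version, seqnum, root hash
MDMFHEADERWITHOUTOFFSETS = ">BQ32sBBQQ"   # + k, N, segment size, data length
MDMFOFFSETS = ">QQQQQQQQ"                 # eight 8-byte offsets
MDMFHEADER = ">BQ32sBBQQ QQQQQQQQ"        # whitespace is ignored by struct

checkstring_size = struct.calcsize(MDMFCHECKSTRING)        # 1 + 8 + 32 = 41
signable_size = struct.calcsize(MDMFHEADERWITHOUTOFFSETS)  # 41 + 1 + 1 + 8 + 8 = 59
offsets_size = struct.calcsize(MDMFOFFSETS)                # 8 * 8 = 64
header_size = struct.calcsize(MDMFHEADER)                  # 59 + 64 = 123

print(checkstring_size, signable_size, offsets_size, header_size)
```

The signable size (59) is exactly where the offsets table starts, matching the layout comment in `MDMFSlotWriteProxy`.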
10640+class MDMFSlotWriteProxy:
10641+    implements(IMutableSlotWriter)
10642+
10643+    """
10644+    I represent a remote write slot for an MDMF mutable file.
10645+
10646+    I abstract away from my caller the details of block and salt
10647+    management, and the implementation of the on-disk format for MDMF
10648+    shares.
10649+    """
10650+    # Expected layout, MDMF:
10651+    # offset:     size:       name:
10652+    #-- signed part --
10653+    # 0           1           version number (01)
10654+    # 1           8           sequence number
10655+    # 9           32          share tree root hash
10656+    # 41          1           The "k" encoding parameter
10657+    # 42          1           The "N" encoding parameter
10658+    # 43          8           The segment size of the uploaded file
10659+    # 51          8           The data length of the original plaintext
10660+    #-- end signed part --
10661+    # 59          8           The offset of the encrypted private key
10662+    # 67          8           The offset of the share hash chain
10663+    # 75          8           The offset of the signature
10664+    # 83          8           The offset of the verification key
10665+    # 91          8           The offset of the end of the verification key
10666+    # 99          8           The offset of the share data
10667+    # 107         8           The offset of the block hash tree
10668+    # 115         8           The offset of EOF
10669+    #
10670+    # followed by the encrypted private key, share hash chain, signature,
10671+    # verification key, share data, and block hash tree. We order the
10672+    # fields that way to make smart downloaders -- downloaders which
10673+    # preemptively read a big part of the share -- possible.
10674+    #
10675+    # The checkstring is the first three fields -- the version number,
10676+    # sequence number, and root hash. This is consistent in meaning with
10677+    # what we have for SDMF files, except that now, instead of using the
10678+    # literal salt, we use a value derived from all of the salts -- the
10679+    # share hash root.
10680+    #
10681+    # The salt is stored before the block for each segment. The block
10682+    # hash tree is computed over the combination of block and salt for
10683+    # each segment. In this way, we get integrity checking for both
10684+    # block and salt with the current block hash tree arrangement.
10685+    #
10686+    # The ordering of the offsets is different to reflect the dependencies
10687+    # that we'll run into with an MDMF file. The expected write flow is
10688+    # something like this:
10689+    #
10690+    #   0: Initialize with the sequence number, encoding parameters and
10691+    #      data length. From this, we can deduce the number of segments,
10692+    #      and where they should go. We can also figure out where the
10693+    #      encrypted private key should go, because we can figure out how
10694+    #      big the share data will be.
10695+    #
10696+    #   1: Encrypt, encode, and upload the file in chunks. Do something
10697+    #      like
10698+    #
10699+    #       put_block(data, segnum, salt)
10700+    #
10701+    #      to write a block and a salt to the disk. We can do both of
10702+    #      these operations now because we have enough of the offsets to
10703+    #      know where to put them.
10704+    #
10705+    #   2: Put the encrypted private key. Use:
10706+    #
10707+    #        put_encprivkey(encprivkey)
10708+    #
10709+    #      Now that we know the length of the private key, we can fill
10710+    #      in the offset for the block hash tree.
10711+    #
10712+    #   3: We're now in a position to upload the block hash tree for
10713+    #      a share. Put that using something like:
10714+    #       
10715+    #        put_blockhashes(block_hash_tree)
10716+    #
10717+    #      Note that block_hash_tree is a list of hashes -- we'll take
10718+    #      care of the details of serializing that appropriately. When
10719+    #      we get the block hash tree, we are also in a position to
10720+    #      calculate the offset for the share hash chain, and fill that
10721+    #      into the offsets table.
10722+    #
10723+    #   4: We're now in a position to upload the share hash chain for
10724+    #      a share. Do that with something like:
10725+    #     
10726+    #        put_sharehashes(share_hash_chain)
10727+    #
10728+    #      share_hash_chain should be a dictionary mapping shnums to
10729+    #      32-byte hashes -- the wrapper handles serialization.
10730+    #      We'll know where to put the signature at this point, also.
10731+    #      The root of this tree will be put explicitly in the next
10732+    #      step.
10733+    #
10734+    #   5: Before putting the signature, we must first put the
10735+    #      root_hash. Do this with:
10736+    #
10737+    #        put_root_hash(root_hash).
10738+    #     
10739+    #      In terms of knowing where to put this value, it was always
10740+    #      possible to place it, but it makes sense semantically to
10741+    #      place it after the share hash tree, so that's why you do it
10742+    #      in this order.
10743+    #
10744+    #   6: With the root hash put, we can now sign the header. Use:
10745+    #
10746+    #        get_signable()
10747+    #
10748+    #      to get the part of the header that you want to sign, and use:
10749+    #       
10750+    #        put_signature(signature)
10751+    #
10752+    #      to write your signature to the remote server.
10753+    #
10754+    #   7: Add the verification key, and finish. Do:
10755+    #
10756+    #        put_verification_key(key)
10757+    #
10758+    #      and
10759+    #
10760+    #        finish_publish()
10761+    #
10762+    # Checkstring management:
10763+    #
10764+    # To write to a mutable slot, we have to provide test vectors to ensure
10765+    # that we are writing to the same data that we think we are. These
10766+    # vectors allow us to detect uncoordinated writes; that is, writes
10767+    # where both we and some other shareholder are writing to the
10768+    # mutable slot, and to report those back to the parts of the program
10769+    # doing the writing.
10770+    #
10771+    # With SDMF, this was easy -- all of the share data was written in
10772+    # one go, so it was easy to detect uncoordinated writes, and we only
10773+    # had to do it once. With MDMF, not all of the file is written at
10774+    # once.
10775+    #
10776+    # If a share is new, we write out as much of the header as we can
10777+    # before writing out anything else. This gives other writers a
10778+    # canary that they can use to detect uncoordinated writes, and, if
10779+    # they do the same thing, gives us the same canary. We then update
10780+    # the share. We won't be able to write out two fields of the header
10781+    # -- the share tree hash and the salt hash -- until we finish
10782+    # writing out the share. We only require the writer to provide the
10783+    # initial checkstring, and keep track of what it should be after
10784+    # updates ourselves.
10785+    #
10786+    # If we haven't written anything yet, then on the first write (which
10787+    # will probably be a block + salt of a share), we'll also write out
10788+    # the header. On subsequent passes, we'll expect to see the header.
10789+    # The header changes in two places:
10790+    #
10791+    #   - When we write out the salt hash
10792+    #   - When we write out the root of the share hash tree
10793+    #
10794+    # since these values will change the header. It is possible that we
10795+    # can just make those be written in one operation to minimize
10796+    # disruption.
10797+    def __init__(self,
10798+                 shnum,
10799+                 rref, # a remote reference to a storage server
10800+                 storage_index,
10801+                 secrets, # (write_enabler, renew_secret, cancel_secret)
10802+                 seqnum, # the sequence number of the mutable file
10803+                 required_shares,
10804+                 total_shares,
10805+                 segment_size,
10806+                 data_length): # the length of the original file
10807+        self.shnum = shnum
10808+        self._rref = rref
10809+        self._storage_index = storage_index
10810+        self._seqnum = seqnum
10811+        self._required_shares = required_shares
10812+        assert self.shnum >= 0 and self.shnum < total_shares
10813+        self._total_shares = total_shares
10814+        # We build up the offset table as we write things. It is the
10815+        # last thing we write to the remote server.
10816+        self._offsets = {}
10817+        self._testvs = []
10818+        # This is a list of write vectors that will be sent to our
10819+        # remote server once we are directed to write things there.
10820+        self._writevs = []
10821+        self._secrets = secrets
10822+        # The segment size needs to be a multiple of the k parameter --
10823+        # any padding should have been carried out by the publisher
10824+        # already.
10825+        assert segment_size % required_shares == 0
10826+        self._segment_size = segment_size
10827+        self._data_length = data_length
10828+
10829+        # These are set later -- we define them here so that we can
10830+        # check for their existence easily
10831+
10832+        # This is the root of the share hash tree -- the Merkle tree
10833+        # over the roots of the block hash trees computed for shares in
10834+        # this upload.
10835+        self._root_hash = None
10836+
10837+        # We haven't yet written anything to the remote bucket. By
10838+        # setting this, we tell the _write method as much. The write
10839+        # method will then know that it also needs to add a write vector
10840+        # for the checkstring (or what we have of it) to the first write
10841+        # request. We'll then record that value for future use.  If
10842+        # we're expecting something to be there already, we need to call
10843+        # set_checkstring before we write anything to tell the first
10844+        # write about that.
10845+        self._written = False
10846+
10847+        # When writing data to the storage servers, we get a read vector
10848+        # for free. We'll read the checkstring, which will help us
10849+        # figure out what's gone wrong if a write fails.
10850+        self._readv = [(0, struct.calcsize(MDMFCHECKSTRING))]
10851+
10852+        # We calculate the number of segments because it tells us
10853+        # where the salt for each segment ends and its block data begins,
10854+        # and also because it provides a useful amount of bounds checking.
10855+        self._num_segments = mathutil.div_ceil(self._data_length,
10856+                                               self._segment_size)
10857+        self._block_size = self._segment_size / self._required_shares
10858+        # We also calculate the share size, to help us with block
10859+        # constraints later.
10860+        tail_size = self._data_length % self._segment_size
10861+        if not tail_size:
10862+            self._tail_block_size = self._block_size
10863+        else:
10864+            self._tail_block_size = mathutil.next_multiple(tail_size,
10865+                                                           self._required_shares)
10866+            self._tail_block_size /= self._required_shares
10867+
10868+        # We already know where the encrypted private key starts: right
10869+        # after the end of the header (which is defined as the signable
10870+        # part + the offsets). We can also calculate where the share data
10871+        # begins from what we now know.
10872+        self._actual_block_size = self._block_size + SALT_SIZE
10873+        data_size = self._actual_block_size * (self._num_segments - 1)
10874+        data_size += self._tail_block_size
10875+        data_size += SALT_SIZE
10876+        self._offsets['enc_privkey'] = MDMFHEADERSIZE
10877+
10878+        # We don't define offsets for these because we want them to be
10879+        # tightly packed -- this allows us to ignore the responsibility
10880+        # of padding individual values, and of removing that padding
10881+        # later. So nonconstant_start is where we start writing
10882+        # nonconstant data.
10883+        nonconstant_start = self._offsets['enc_privkey']
10884+        nonconstant_start += PRIVATE_KEY_SIZE
10885+        nonconstant_start += SIGNATURE_SIZE
10886+        nonconstant_start += VERIFICATION_KEY_SIZE
10887+        nonconstant_start += SHARE_HASH_CHAIN_SIZE
10888+
10889+        self._offsets['share_data'] = nonconstant_start
10890+
10891+        # Finally, we know how big the share data will be, so we can
10892+        # figure out where the block hash tree needs to go.
10893+        # XXX: But this will go away if Zooko wants to make it so that
10894+        # you don't need to know the size of the file before you start
10895+        # uploading it.
10896+        self._offsets['block_hash_tree'] = self._offsets['share_data'] + \
10897+                    data_size
10898+
10899+        # Done. We can now start writing.
10900+
10901+
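The size arithmetic in `__init__` can be reproduced standalone. In this sketch, `div_ceil` and `next_multiple` are assumed stand-ins for the `allmydata.util.mathutil` helpers, the encoding parameters (k=4, 128 KiB segments, a 300000-byte file) are hypothetical, and the share-hash-chain reservation is sized from the `">H32s"` node format that `put_sharehashes` uses.

```python
import math
import struct

# Assumed equivalents of the allmydata.util.mathutil helpers.
def div_ceil(n, d):
    return (n + d - 1) // d

def next_multiple(n, k):
    return div_ceil(n, k) * k

HASH_SIZE = 32
SALT_SIZE = 16
PRIVATE_KEY_SIZE = 1220
SIGNATURE_SIZE = 260
VERIFICATION_KEY_SIZE = 292
# lg(256) + 1 nodes, each a 2-byte share number plus a 32-byte hash.
SHARE_HASH_CHAIN_SIZE = (int(math.log(256, 2)) + 1) * (2 + HASH_SIZE)
MDMFHEADERSIZE = struct.calcsize(">BQ32sBBQQ QQQQQQQQ")  # 123

# Hypothetical parameters: the third segment is a short tail.
k, segment_size, data_length = 4, 131072, 300000

num_segments = div_ceil(data_length, segment_size)       # 3
block_size = segment_size // k                           # 32768
tail_size = data_length % segment_size
if not tail_size:
    tail_block_size = block_size
else:
    tail_block_size = next_multiple(tail_size, k) // k   # 9464

# Mirror of the offset computation: privkey right after the header,
# then fixed-size reserved regions, then the salted blocks.
offsets = {'enc_privkey': MDMFHEADERSIZE}
nonconstant_start = (offsets['enc_privkey'] + PRIVATE_KEY_SIZE +
                     SIGNATURE_SIZE + VERIFICATION_KEY_SIZE +
                     SHARE_HASH_CHAIN_SIZE)
offsets['share_data'] = nonconstant_start
data_size = ((block_size + SALT_SIZE) * (num_segments - 1)
             + tail_block_size + SALT_SIZE)
offsets['block_hash_tree'] = offsets['share_data'] + data_size
```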
10902+    def set_checkstring(self,
10903+                        seqnum_or_checkstring,
10904+                        root_hash=None,
10905+                        salt=None):
10906+        """
10907+        Set the checkstring for the given shnum.
10908+
10909+        This can be invoked in one of two ways.
10910+
10911+        With one argument, I assume that you are giving me a literal
10912+        checkstring -- e.g., the output of get_checkstring. I will then
10913+        set that checkstring as it is. This form is used by unit tests.
10914+
10915+        With two arguments, I assume that you are giving me a sequence
10916+        number and root hash to make a checkstring from. In that case, I
10917+        will build a checkstring and set it for you. This form is used
10918+        by the publisher.
10919+
10920+        By default, I assume that I am writing new shares to the grid.
10921+        If you don't explicitly set your own checkstring, I will use
10922+        one that requires that the remote share not exist. You will want
10923+        to use this method if you are updating a share in-place;
10924+        otherwise, writes will fail.
10925+        """
10926+        # You're allowed to overwrite checkstrings with this method;
10927+        # I assume that users know what they are doing when they call
10928+        # it.
10929+        if root_hash:
10930+            checkstring = struct.pack(MDMFCHECKSTRING,
10931+                                      1,
10932+                                      seqnum_or_checkstring,
10933+                                      root_hash)
10934+        else:
10935+            checkstring = seqnum_or_checkstring
10936+
10937+        if checkstring == "":
10938+            # We special-case this, since len("") = 0, but we need
10939+            # length of 1 for the case of an empty share to work on the
10940+            # storage server, which is what a checkstring that is the
10941+            # empty string means.
10942+            self._testvs = []
10943+        else:
10944+            self._testvs = []
10945+            self._testvs.append((0, len(checkstring), "eq", checkstring))
10946+
10947+
10948+    def __repr__(self):
10949+        return "MDMFSlotWriteProxy for share %d" % self.shnum
10950+
10951+
10952+    def get_checkstring(self):
10953+        """
10954+        Given a share number, I return a representation of what the
10955+        checkstring for that share on the server will look like.
10956+
10957+        I am mostly used for tests.
10958+        """
10959+        if self._root_hash:
10960+            roothash = self._root_hash
10961+        else:
10962+            roothash = "\x00" * 32
10963+        return struct.pack(MDMFCHECKSTRING,
10964+                           1,
10965+                           self._seqnum,
10966+                           roothash)
10967+
10968+
10969+    def put_block(self, data, segnum, salt):
10970+        """
10971+        I queue a write vector for the data, salt, and segment number
10972+        provided to me. I return None, as I do not actually cause
10973+        anything to be written yet.
10974+        """
10975+        if segnum >= self._num_segments:
10976+            raise LayoutInvalid("I won't overwrite the block hash tree")
10977+        if len(salt) != SALT_SIZE:
10978+            raise LayoutInvalid("I was given a salt of size %d, but I "
10979+                                "wanted a salt of size %d" % (len(salt), SALT_SIZE))
10980+        if segnum + 1 == self._num_segments:
10981+            if len(data) != self._tail_block_size:
10982+                raise LayoutInvalid("I was given the wrong size block to write")
10983+        elif len(data) != self._block_size:
10984+            raise LayoutInvalid("I was given the wrong size block to write")
10985+
10986+        # We want to write at len(MDMFHEADER) + segnum * block_size.
10987+        offset = self._offsets['share_data'] + \
10988+            (self._actual_block_size * segnum)
10989+        data = salt + data
10990+
10991+        self._writevs.append(tuple([offset, data]))
10992+
10993+
10994+    def put_encprivkey(self, encprivkey):
10995+        """
10996+        I queue a write vector for the encrypted private key provided to
10997+        me.
10998+        """
10999+        assert self._offsets
11000+        assert self._offsets['enc_privkey']
11001+        # You shouldn't re-write the encprivkey after the share hash
11002+        # chain is written, since the share hash chain starts directly
11003+        # after the private key, and a longer key would run into it.
11004+        # put_sharehashes records the signature offset when it runs, so
11005+        # the presence of "signature" in the offsets table tells us
11006+        # whether the share hash chain has been written.
11007+        if "signature" in self._offsets:
11008+            raise LayoutInvalid("You can't put the encrypted private key "
11009+                                "after putting the share hash chain")
11010+
11011+        self._offsets['share_hash_chain'] = self._offsets['enc_privkey'] + \
11012+                len(encprivkey)
11013+
11014+        self._writevs.append(tuple([self._offsets['enc_privkey'], encprivkey]))
11015+
11016+
11017+    def put_blockhashes(self, blockhashes):
11018+        """
11019+        I queue a write vector to put the block hash tree in blockhashes
11020+        onto the remote server.
11021+
11022+        The encrypted private key must be queued before the block hash
11023+        tree, since we need to know how large it is to know where the
11024+        block hash tree should go. The block hash tree must be put
11025+        before the share hash chain, since its size determines the
11026+        offset of the share hash chain.
11027+        """
11028+        assert self._offsets
11029+        assert "block_hash_tree" in self._offsets
11030+
11031+        assert isinstance(blockhashes, list)
11032+
11033+        blockhashes_s = "".join(blockhashes)
11034+        self._offsets['EOF'] = self._offsets['block_hash_tree'] + len(blockhashes_s)
11035+
11036+        self._writevs.append(tuple([self._offsets['block_hash_tree'],
11037+                                  blockhashes_s]))
11038+
11039+
11040+    def put_sharehashes(self, sharehashes):
11041+        """
11042+        I queue a write vector to put the share hash chain in my
11043+        argument onto the remote server.
11044+
11045+        The block hash tree must be queued before the share hash chain,
11046+        since we need to know where the block hash tree ends before we
11047+        can know where the share hash chain starts. The share hash chain
11048+        must be put before the signature, since the length of the packed
11049+        share hash chain determines the offset of the signature. Also,
11050+        semantically, you must know what the root of the block hash tree
11051+        is before you can generate a valid signature.
11052+        """
11053+        assert isinstance(sharehashes, dict)
11054+        assert self._offsets
11055+        if "share_hash_chain" not in self._offsets:
11056+            raise LayoutInvalid("You must put the block hash tree before "
11057+                                "putting the share hash chain")
11058+
11059+        # The signature comes after the share hash chain. If the
11060+        # signature has already been written, we must not write another
11061+        # share hash chain. The signature writes the verification key
11062+        # offset when it gets sent to the remote server, so we look for
11063+        # that.
11064+        if "verification_key" in self._offsets:
11065+            raise LayoutInvalid("You must write the share hash chain "
11066+                                "before you write the signature")
11067+        sharehashes_s = "".join([struct.pack(">H32s", i, sharehashes[i])
11068+                                  for i in sorted(sharehashes.keys())])
11069+        self._offsets['signature'] = self._offsets['share_hash_chain'] + \
11070+            len(sharehashes_s)
11071+        self._writevs.append(tuple([self._offsets['share_hash_chain'],
11072+                            sharehashes_s]))
11073+
11074+
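The serialization that `put_sharehashes` performs can be sketched on its own: each chain node is packed big-endian as a 2-byte share number plus its 32-byte hash, sorted by share number. The two-node chain here is hypothetical.

```python
import struct

# Hypothetical two-node share hash chain: shnum -> 32-byte hash.
sharehashes = {3: b"\xaa" * 32, 1: b"\xbb" * 32}

# Same packing as put_sharehashes: ">H32s" per node, sorted by shnum.
sharehashes_s = b"".join(struct.pack(">H32s", i, sharehashes[i])
                         for i in sorted(sharehashes))

assert len(sharehashes_s) == 2 * (2 + 32)   # 34 bytes per node
# The signature offset is then share_hash_chain offset + len(sharehashes_s).
```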
11075+    def put_root_hash(self, roothash):
11076+        """
11077+        Put the root hash (the root of the share hash tree) in the
11078+        remote slot.
11079+        """
11080+        # It does not make sense to be able to put the root
11081+        # hash without first putting the share hashes, since you need
11082+        # the share hashes to generate the root hash.
11083+        #
11084+        # Signature is defined by the routine that places the share hash
11085+        # chain, so it's a good thing to look for in finding out whether
11086+        # or not the share hash chain exists on the remote server.
11087+        if len(roothash) != HASH_SIZE:
11088+            raise LayoutInvalid("hashes and salts must be exactly %d bytes"
11089+                                 % HASH_SIZE)
11090+        self._root_hash = roothash
11091+        # To write this value, we update the checkstring on the remote
11092+        # server, which includes it.
11093+        checkstring = self.get_checkstring()
11094+        self._writevs.append(tuple([0, checkstring]))
11095+        # This write, if successful, changes the checkstring, so we need
11096+        # to update our internal checkstring to be consistent with the
11097+        # one on the server.
11098+
11099+
11100+    def get_signable(self):
11101+        """
11102+        Get the first seven fields of the mutable file; the parts that
11103+        are signed.
11104+        """
11105+        if not self._root_hash:
11106+            raise LayoutInvalid("You need to set the root hash "
11107+                                "before getting something to "
11108+                                "sign")
11109+        return struct.pack(MDMFSIGNABLEHEADER,
11110+                           1,
11111+                           self._seqnum,
11112+                           self._root_hash,
11113+                           self._required_shares,
11114+                           self._total_shares,
11115+                           self._segment_size,
11116+                           self._data_length)
11117+
11118+
11119+    def put_signature(self, signature):
11120+        """
11121+        I queue a write vector for the signature of the MDMF share.
11122+
11123+        I require that the root hash and share hash chain have been put
11124+        to the grid before I will write the signature to the grid.
11125+        """
11126+        # It does not make sense to put a signature without first
11127+        # putting the root hash (since otherwise the signature would
11128+        # be incomplete), so we don't allow that.
11129+        if "signature" not in self._offsets:
11130+            raise LayoutInvalid("You must put the share hash chain "
11131+                                "before putting the signature")
11132+        if not self._root_hash:
11133+            raise LayoutInvalid("You must complete the signed prefix "
11134+                                "before computing a signature")
11135+        # If we put the signature after we put the verification key, we
11136+        # could end up running into the verification key, and will
11137+        # probably screw up the offsets as well. So we don't allow that.
11138+        if "verification_key_end" in self._offsets:
11139+            raise LayoutInvalid("You can't put the signature after the "
11140+                                "verification key")
11141+        # put_verification_key records the verification_key_end offset,
11142+        # so its presence tells us the key has been written; see above.
11143+        self._offsets['verification_key'] = self._offsets['signature'] +\
11144+            len(signature)
11145+        self._writevs.append(tuple([self._offsets['signature'], signature]))
11146+
11147+
11148+    def put_verification_key(self, verification_key):
11149+        """
11150+        I queue a write vector for the verification key.
11151+
11152+        I require that the signature have been written to the storage
11153+        server before I allow the verification key to be written to the
11154+        remote server.
11155+        """
11156+        if "verification_key" not in self._offsets:
11157+            raise LayoutInvalid("You must put the signature before you "
11158+                                "can put the verification key")
11159+
11160+        self._offsets['verification_key_end'] = \
11161+            self._offsets['verification_key'] + len(verification_key)
11162+        assert self._offsets['verification_key_end'] <= self._offsets['share_data']
11163+        self._writevs.append(tuple([self._offsets['verification_key'],
11164+                            verification_key]))
11165+
11166+
11167+    def _get_offsets_tuple(self):
11168+        return tuple([(key, value) for key, value in self._offsets.items()])
11169+
11170+
11171+    def get_verinfo(self):
11172+        return (self._seqnum,
11173+                self._root_hash,
11174+                self._required_shares,
11175+                self._total_shares,
11176+                self._segment_size,
11177+                self._data_length,
11178+                self.get_signable(),
11179+                self._get_offsets_tuple())
11180+
11181+
11182+    def finish_publishing(self):
11183+        """
11184+        I add a write vector for the offsets table, and then cause all
11185+        of the write vectors that I've dealt with so far to be published
11186+        to the remote server, ending the write process.
11187+        """
11188+        if "verification_key_end" not in self._offsets:
11189+            raise LayoutInvalid("You must put the verification key before "
11190+                                "you can publish the offsets")
11191+        offsets_offset = struct.calcsize(MDMFHEADERWITHOUTOFFSETS)
11192+        offsets = struct.pack(MDMFOFFSETS,
11193+                              self._offsets['enc_privkey'],
11194+                              self._offsets['share_hash_chain'],
11195+                              self._offsets['signature'],
11196+                              self._offsets['verification_key'],
11197+                              self._offsets['verification_key_end'],
11198+                              self._offsets['share_data'],
11199+                              self._offsets['block_hash_tree'],
11200+                              self._offsets['EOF'])
11201+        self._writevs.append(tuple([offsets_offset, offsets]))
11202+        encoding_parameters_offset = struct.calcsize(MDMFCHECKSTRING)
11203+        params = struct.pack(">BBQQ",
11204+                             self._required_shares,
11205+                             self._total_shares,
11206+                             self._segment_size,
11207+                             self._data_length)
11208+        self._writevs.append(tuple([encoding_parameters_offset, params]))
11209+        return self._write(self._writevs)
11210+
11211+
11212+    def _write(self, datavs, on_failure=None, on_success=None):
11213+        """I write the data vectors in datavs to the remote slot."""
11214+        tw_vectors = {}
11215+        if not self._testvs:
11216+            self._testvs = []
11217+            self._testvs.append(tuple([0, 1, "eq", ""]))
11218+        if not self._written:
11219+            # Write a new checkstring to the share when we write it, so
11220+            # that we have something to check later.
11221+            new_checkstring = self.get_checkstring()
11222+            datavs.append((0, new_checkstring))
11223+            def _first_write():
11224+                self._written = True
11225+                self._testvs = [(0, len(new_checkstring), "eq", new_checkstring)]
11226+            on_success = _first_write
11227+        tw_vectors[self.shnum] = (self._testvs, datavs, None)
11228+        d = self._rref.callRemote("slot_testv_and_readv_and_writev",
11229+                                  self._storage_index,
11230+                                  self._secrets,
11231+                                  tw_vectors,
11232+                                  self._readv)
11233+        def _result(results):
11234+            if isinstance(results, failure.Failure) or not results[0]:
11235+                # Do nothing; the write was unsuccessful.
11236+                if on_failure: on_failure()
11237+            else:
11238+                if on_success: on_success()
11239+            return results
11240+        d.addCallback(_result)
11241+        return d
11242+
11243+
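The test-vector convention that `_write` relies on can be sketched without a server: `(0, 1, "eq", "")` asserts that reading one byte at offset 0 yields nothing, i.e. the share does not exist yet, and after the first write the test vector becomes the 41-byte checkstring. The seqnum and root hash below are placeholders; only the shape of the `slot_testv_and_readv_and_writev` argument is illustrated.

```python
import struct

# Placeholder checkstring: version 1, seqnum 5, all-zero root hash.
shnum = 0
checkstring = struct.pack(">BQ32s", 1, 5, b"\x00" * 32)

# New share: one byte at offset 0 must compare equal to the empty
# string, i.e. nothing may be there yet.
testvs = [(0, 1, "eq", b"")]
datavs = [(0, checkstring)]        # the first write also lays down the checkstring
tw_vectors = {shnum: (testvs, datavs, None)}

# After a successful first write, later writes test against the
# checkstring itself, detecting uncoordinated writers.
testvs_after = [(0, len(checkstring), "eq", checkstring)]
```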
11244+class MDMFSlotReadProxy:
11245+    """
11246+    I read from a mutable slot filled with data written in the MDMF data
11247+    format (which is described above).
11248+
11249+    I can be initialized with some amount of data, which I will use (if
11250+    it is valid) to eliminate some of the need to fetch it from servers.
11251+    """
11252+    def __init__(self,
11253+                 rref,
11254+                 storage_index,
11255+                 shnum,
11256+                 data=""):
11257+        # Start the initialization process.
11258+        self._rref = rref
11259+        self._storage_index = storage_index
11260+        self.shnum = shnum
11261+
11262+        # Before doing anything, the reader is probably going to want to
11263+        # verify that the signature is correct. To do that, they'll need
11264+        # the verification key, and the signature. To get those, we'll
11265+        # need the offset table. So fetch the offset table on the
11266+        # assumption that that will be the first thing that a reader is
11267+        # going to do.
11268+
11269+        # The fact that these encoding parameters are None tells us
11270+        # that we haven't yet fetched them from the remote share, so we
11271+        # should. We could just not set them, but the checks will be
11272+        # easier to read if we don't have to use hasattr.
11273+        self._version_number = None
11274+        self._sequence_number = None
11275+        self._root_hash = None
11276+        # Filled in if we're dealing with an SDMF file. Unused
11277+        # otherwise.
11278+        self._salt = None
11279+        self._required_shares = None
11280+        self._total_shares = None
11281+        self._segment_size = None
11282+        self._data_length = None
11283+        self._offsets = None
11284+
11285+        # If the user has chosen to initialize us with some data, we'll
11286+        # try to satisfy subsequent data requests with that data before
11287+        # asking the storage server for it.
11288+        self._data = data
11289+        # The filenode's cache returns None if there isn't any cached
11290+        # data, but the way we index the cached data requires a string,
11291+        # so convert None to "".
11292+        if self._data is None:
11293+            self._data = ""
11294+
11295+        self._queue_observers = observer.ObserverList()
11296+        self._queue_errbacks = observer.ObserverList()
11297+        self._readvs = []
11298+
11299+
11300+    def _maybe_fetch_offsets_and_header(self, force_remote=False):
11301+        """
11302+        I fetch the offset table and the header from the remote slot if
11303+        I don't already have them. If I do have them, I do nothing and
11304+        return an empty Deferred.
11305+        """
11306+        if self._offsets:
11307+            return defer.succeed(None)
11308+        # At this point, we may be either SDMF or MDMF. Fetching 123
11309+        # bytes will be enough to get the header and offsets for both
11310+        # SDMF and MDMF, though we'll be left with 16 more bytes than
11311+        # we need if this ends up being SDMF. This is probably less
11312+        # expensive than the cost of a second roundtrip.
11313+        readvs = [(0, 123)]
11314+        d = self._read(readvs, force_remote)
11315+        d.addCallback(self._process_encoding_parameters)
11316+        d.addCallback(self._process_offsets)
11317+        return d
11318+
11319+
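(Reviewer's note, outside the patch proper.) The single 123-byte read above covers the header plus the complete offset table of both formats. A minimal sketch confirming the arithmetic; the struct formats are reconstructed from the `unpack` calls in this patch, and the eight-`Q` layout for `MDMFOFFSETS` is an assumption based on the eight offsets unpacked in `_process_offsets`:

```python
import struct

# Formats reconstructed from this patch's unpack calls (names local to
# this sketch, not the module's constants).
MDMF_HEADER_WITHOUT_OFFSETS = ">BQ32sBBQQ"  # verno, seqnum, root hash, k, n, segsize, datalen
MDMF_OFFSETS = ">QQQQQQQQ"                  # eight 8-byte offsets (assumed layout)
SDMF_SIGNED_PREFIX = ">BQ32s16sBBQQ"        # same fields plus the 16-byte salt
SDMF_OFFSETS = ">LLLLQQ"                    # four 4-byte and two 8-byte offsets

mdmf_header = struct.calcsize(MDMF_HEADER_WITHOUT_OFFSETS) + struct.calcsize(MDMF_OFFSETS)
sdmf_header = struct.calcsize(SDMF_SIGNED_PREFIX) + struct.calcsize(SDMF_OFFSETS)
print(mdmf_header, sdmf_header)  # 123 107
```

So one 123-byte read always suffices, at the cost of 16 unused bytes when the share turns out to be SDMF.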
11320+    def _process_encoding_parameters(self, encoding_parameters):
11321+        assert self.shnum in encoding_parameters
11322+        encoding_parameters = encoding_parameters[self.shnum][0]
11323+        # The first byte is the version number. It will tell us what
11324+        # to do next.
11325+        (verno,) = struct.unpack(">B", encoding_parameters[:1])
11326+        if verno == MDMF_VERSION:
11327+            read_size = MDMFHEADERWITHOUTOFFSETSSIZE
11328+            (verno,
11329+             seqnum,
11330+             root_hash,
11331+             k,
11332+             n,
11333+             segsize,
11334+             datalen) = struct.unpack(MDMFHEADERWITHOUTOFFSETS,
11335+                                      encoding_parameters[:read_size])
11336+            if segsize == 0 and datalen == 0:
11337+                # Empty file, no segments.
11338+                self._num_segments = 0
11339+            else:
11340+                self._num_segments = mathutil.div_ceil(datalen, segsize)
11341+
11342+        elif verno == SDMF_VERSION:
11343+            read_size = SIGNED_PREFIX_LENGTH
11344+            (verno,
11345+             seqnum,
11346+             root_hash,
11347+             salt,
11348+             k,
11349+             n,
11350+             segsize,
11351+             datalen) = struct.unpack(">BQ32s16s BBQQ",
11352+                                encoding_parameters[:SIGNED_PREFIX_LENGTH])
11353+            self._salt = salt
11354+            if segsize == 0 and datalen == 0:
11355+                # empty file
11356+                self._num_segments = 0
11357+            else:
11358+                # non-empty SDMF files have one segment.
11359+                self._num_segments = 1
11360+        else:
11361+            raise UnknownVersionError("You asked me to read mutable file "
11362+                                      "version %d, but I only understand "
11363+                                      "%d and %d" % (verno, SDMF_VERSION,
11364+                                                     MDMF_VERSION))
11365+
11366+        self._version_number = verno
11367+        self._sequence_number = seqnum
11368+        self._root_hash = root_hash
11369+        self._required_shares = k
11370+        self._total_shares = n
11371+        self._segment_size = segsize
11372+        self._data_length = datalen
11373+
11374+        self._block_size = self._segment_size / self._required_shares
11375+        # We can upload empty files, and need to account for this fact
11376+        # so as to avoid zero-division and zero-modulo errors.
11377+        if datalen > 0:
11378+            tail_size = self._data_length % self._segment_size
11379+        else:
11380+            tail_size = 0
11381+        if not tail_size:
11382+            self._tail_block_size = self._block_size
11383+        else:
11384+            self._tail_block_size = mathutil.next_multiple(tail_size,
11385+                                                    self._required_shares)
11386+            self._tail_block_size /= self._required_shares
11387+
11388+        return encoding_parameters
11389+
11390+
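(Reviewer's note.) The tail-block computation above rounds the final partial segment up to a multiple of k before dividing. A standalone sketch of the same arithmetic, using plain integer math in place of `allmydata.util.mathutil` (the helper name is local to this sketch):

```python
def tail_block_size(datalen, segsize, k):
    # Empty files and files whose length is an exact multiple of the
    # segment size have a tail block equal to a full block.
    if datalen == 0 or datalen % segsize == 0:
        return segsize // k
    tail = datalen % segsize
    # Round the tail segment up to a multiple of k (mathutil.next_multiple),
    # then divide by k to get the per-share tail block size.
    padded = ((tail + k - 1) // k) * k
    return padded // k

assert tail_block_size(36, 6, 3) == 2   # exact multiple: full block
assert tail_block_size(33, 6, 3) == 1   # 33 % 6 == 3, padded to 3, // 3
assert tail_block_size(0, 6, 3) == 2    # empty file: same as a full block
```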
11391+    def _process_offsets(self, offsets):
11392+        if self._version_number == 0:
11393+            read_size = OFFSETS_LENGTH
11394+            read_offset = SIGNED_PREFIX_LENGTH
11395+            end = read_size + read_offset
11396+            (signature,
11397+             share_hash_chain,
11398+             block_hash_tree,
11399+             share_data,
11400+             enc_privkey,
11401+             EOF) = struct.unpack(">LLLLQQ",
11402+                                  offsets[read_offset:end])
11403+            self._offsets = {}
11404+            self._offsets['signature'] = signature
11405+            self._offsets['share_data'] = share_data
11406+            self._offsets['block_hash_tree'] = block_hash_tree
11407+            self._offsets['share_hash_chain'] = share_hash_chain
11408+            self._offsets['enc_privkey'] = enc_privkey
11409+            self._offsets['EOF'] = EOF
11410+
11411+        elif self._version_number == 1:
11412+            read_offset = MDMFHEADERWITHOUTOFFSETSSIZE
11413+            read_length = MDMFOFFSETS_LENGTH
11414+            end = read_offset + read_length
11415+            (encprivkey,
11416+             sharehashes,
11417+             signature,
11418+             verification_key,
11419+             verification_key_end,
11420+             sharedata,
11421+             blockhashes,
11422+             eof) = struct.unpack(MDMFOFFSETS,
11423+                                  offsets[read_offset:end])
11424+            self._offsets = {}
11425+            self._offsets['enc_privkey'] = encprivkey
11426+            self._offsets['block_hash_tree'] = blockhashes
11427+            self._offsets['share_hash_chain'] = sharehashes
11428+            self._offsets['signature'] = signature
11429+            self._offsets['verification_key'] = verification_key
11430+            self._offsets['verification_key_end']= \
11431+                verification_key_end
11432+            self._offsets['EOF'] = eof
11433+            self._offsets['share_data'] = sharedata
11434+
11435+
11436+    def get_block_and_salt(self, segnum, queue=False):
11437+        """
11438+        I return (block, salt), where block is the block data and
11439+        salt is the salt used to encrypt that segment.
11440+        """
11441+        d = self._maybe_fetch_offsets_and_header()
11442+        def _then(ignored):
11443+            base_share_offset = self._offsets['share_data']
11444+
11445+            if segnum + 1 > self._num_segments:
11446+                raise LayoutInvalid("Not a valid segment number")
11447+
11448+            if self._version_number == 0:
11449+                share_offset = base_share_offset + self._block_size * segnum
11450+            else:
11451+                share_offset = base_share_offset + (self._block_size + \
11452+                                                    SALT_SIZE) * segnum
11453+            if segnum + 1 == self._num_segments:
11454+                data = self._tail_block_size
11455+            else:
11456+                data = self._block_size
11457+
11458+            if self._version_number == 1:
11459+                data += SALT_SIZE
11460+
11461+            readvs = [(share_offset, data)]
11462+            return readvs
11463+        d.addCallback(_then)
11464+        d.addCallback(lambda readvs:
11465+            self._read(readvs, queue=queue))
11466+        def _process_results(results):
11467+            assert self.shnum in results
11468+            if self._version_number == 0:
11469+                # We only read the share data, but we know the salt from
11470+                # when we fetched the header
11471+                data = results[self.shnum]
11472+                if not data:
11473+                    data = ""
11474+                else:
11475+                    assert len(data) == 1
11476+                    data = data[0]
11477+                salt = self._salt
11478+            else:
11479+                data = results[self.shnum]
11480+                if not data:
11481+                    salt = data = ""
11482+                else:
11483+                    salt_and_data = results[self.shnum][0]
11484+                    salt = salt_and_data[:SALT_SIZE]
11485+                    data = salt_and_data[SALT_SIZE:]
11486+            return data, salt
11487+        d.addCallback(_process_results)
11488+        return d
11489+
11490+
11491+    def get_blockhashes(self, needed=None, queue=False, force_remote=False):
11492+        """
11493+        I return the block hash tree
11494+
11495+        I take an optional argument, needed, which is a set of indices
11496+        that correspond to hashes that I should fetch. If this argument is
11497+        missing, I will fetch the entire block hash tree; otherwise, I
11498+        may attempt to fetch fewer hashes, based on what needed says
11499+        that I should do. Note that I may fetch as many hashes as I
11500+        want, so long as the set of hashes that I do fetch is a superset
11501+        of the ones that I am asked for, so callers should be prepared
11502+        to tolerate additional hashes.
11503+        """
11504+        # TODO: Return only the parts of the block hash tree necessary
11505+        # to validate the blocknum provided?
11506+        # This is a good idea, but it is hard to implement correctly. It
11507+        # is bad to fetch any one block hash more than once, so we
11508+        # probably just want to fetch the whole thing at once and then
11509+        # serve it.
11510+        if needed == set([]):
11511+            return defer.succeed([])
11512+        d = self._maybe_fetch_offsets_and_header()
11513+        def _then(ignored):
11514+            blockhashes_offset = self._offsets['block_hash_tree']
11515+            if self._version_number == 1:
11516+                blockhashes_length = self._offsets['EOF'] - blockhashes_offset
11517+            else:
11518+                blockhashes_length = self._offsets['share_data'] - blockhashes_offset
11519+            readvs = [(blockhashes_offset, blockhashes_length)]
11520+            return readvs
11521+        d.addCallback(_then)
11522+        d.addCallback(lambda readvs:
11523+            self._read(readvs, queue=queue, force_remote=force_remote))
11524+        def _build_block_hash_tree(results):
11525+            assert self.shnum in results
11526+
11527+            rawhashes = results[self.shnum][0]
11528+            results = [rawhashes[i:i+HASH_SIZE]
11529+                       for i in range(0, len(rawhashes), HASH_SIZE)]
11530+            return results
11531+        d.addCallback(_build_block_hash_tree)
11532+        return d
11533+
11534+
11535+    def get_sharehashes(self, needed=None, queue=False, force_remote=False):
11536+        """
11537+        I return the part of the share hash chain needed to validate
11538+        this share.
11539+
11540+        I take an optional argument, needed. Needed is a set of indices
11541+        that correspond to the hashes that I should fetch. If needed is
11542+        not present, I will fetch and return the entire share hash
11543+        chain. Otherwise, I may fetch and return any part of the share
11544+        hash chain that is a superset of the part that I am asked to
11545+        fetch. Callers should be prepared to deal with more hashes than
11546+        they've asked for.
11547+        """
11548+        if needed == set([]):
11549+            return defer.succeed([])
11550+        d = self._maybe_fetch_offsets_and_header()
11551+
11552+        def _make_readvs(ignored):
11553+            sharehashes_offset = self._offsets['share_hash_chain']
11554+            if self._version_number == 0:
11555+                sharehashes_length = self._offsets['block_hash_tree'] - sharehashes_offset
11556+            else:
11557+                sharehashes_length = self._offsets['signature'] - sharehashes_offset
11558+            readvs = [(sharehashes_offset, sharehashes_length)]
11559+            return readvs
11560+        d.addCallback(_make_readvs)
11561+        d.addCallback(lambda readvs:
11562+            self._read(readvs, queue=queue, force_remote=force_remote))
11563+        def _build_share_hash_chain(results):
11564+            assert self.shnum in results
11565+
11566+            sharehashes = results[self.shnum][0]
11567+            results = [sharehashes[i:i+(HASH_SIZE + 2)]
11568+                       for i in range(0, len(sharehashes), HASH_SIZE + 2)]
11569+            results = dict([struct.unpack(">H32s", data)
11570+                            for data in results])
11571+            return results
11572+        d.addCallback(_build_share_hash_chain)
11573+        return d
11574+
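(Reviewer's note.) The share hash chain is a flat sequence of (2-byte share number, 32-byte hash) records, split and unpacked as above. A minimal sketch of that parsing step; the helper name is local to this sketch:

```python
import struct

HASH_SIZE = 32

def parse_share_hash_chain(raw):
    # Mirror of _build_share_hash_chain: split the raw bytes into
    # (2 + 32)-byte records, then unpack each into a (shnum, hash) pair.
    records = [raw[i:i + HASH_SIZE + 2]
               for i in range(0, len(raw), HASH_SIZE + 2)]
    return dict(struct.unpack(">H32s", r) for r in records)

# Build a chain for share numbers 1, 3, 5 with distinguishable hashes.
chain = b"".join(struct.pack(">H32s", i, bytes([i]) * HASH_SIZE)
                 for i in (1, 3, 5))
parsed = parse_share_hash_chain(chain)
assert sorted(parsed) == [1, 3, 5]
assert parsed[5] == b"\x05" * HASH_SIZE
```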
11575+
11576+    def get_encprivkey(self, queue=False):
11577+        """
11578+        I return the encrypted private key.
11579+        """
11580+        d = self._maybe_fetch_offsets_and_header()
11581+
11582+        def _make_readvs(ignored):
11583+            privkey_offset = self._offsets['enc_privkey']
11584+            if self._version_number == 0:
11585+                privkey_length = self._offsets['EOF'] - privkey_offset
11586+            else:
11587+                privkey_length = self._offsets['share_hash_chain'] - privkey_offset
11588+            readvs = [(privkey_offset, privkey_length)]
11589+            return readvs
11590+        d.addCallback(_make_readvs)
11591+        d.addCallback(lambda readvs:
11592+            self._read(readvs, queue=queue))
11593+        def _process_results(results):
11594+            assert self.shnum in results
11595+            privkey = results[self.shnum][0]
11596+            return privkey
11597+        d.addCallback(_process_results)
11598+        return d
11599+
11600+
11601+    def get_signature(self, queue=False):
11602+        """
11603+        I return the signature of my share.
11604+        """
11605+        d = self._maybe_fetch_offsets_and_header()
11606+
11607+        def _make_readvs(ignored):
11608+            signature_offset = self._offsets['signature']
11609+            if self._version_number == 1:
11610+                signature_length = self._offsets['verification_key'] - signature_offset
11611+            else:
11612+                signature_length = self._offsets['share_hash_chain'] - signature_offset
11613+            readvs = [(signature_offset, signature_length)]
11614+            return readvs
11615+        d.addCallback(_make_readvs)
11616+        d.addCallback(lambda readvs:
11617+            self._read(readvs, queue=queue))
11618+        def _process_results(results):
11619+            assert self.shnum in results
11620+            signature = results[self.shnum][0]
11621+            return signature
11622+        d.addCallback(_process_results)
11623+        return d
11624+
11625+
11626+    def get_verification_key(self, queue=False):
11627+        """
11628+        I return the verification key.
11629+        """
11630+        d = self._maybe_fetch_offsets_and_header()
11631+
11632+        def _make_readvs(ignored):
11633+            if self._version_number == 1:
11634+                vk_offset = self._offsets['verification_key']
11635+                vk_length = self._offsets['verification_key_end'] - vk_offset
11636+            else:
11637+                vk_offset = struct.calcsize(">BQ32s16sBBQQLLLLQQ")
11638+                vk_length = self._offsets['signature'] - vk_offset
11639+            readvs = [(vk_offset, vk_length)]
11640+            return readvs
11641+        d.addCallback(_make_readvs)
11642+        d.addCallback(lambda readvs:
11643+            self._read(readvs, queue=queue))
11644+        def _process_results(results):
11645+            assert self.shnum in results
11646+            verification_key = results[self.shnum][0]
11647+            return verification_key
11648+        d.addCallback(_process_results)
11649+        return d
11650+
11651+
11652+    def get_encoding_parameters(self):
11653+        """
11654+        I return (k, n, segsize, datalen)
11655+        """
11656+        d = self._maybe_fetch_offsets_and_header()
11657+        d.addCallback(lambda ignored:
11658+            (self._required_shares,
11659+             self._total_shares,
11660+             self._segment_size,
11661+             self._data_length))
11662+        return d
11663+
11664+
11665+    def get_seqnum(self):
11666+        """
11667+        I return the sequence number for this share.
11668+        """
11669+        d = self._maybe_fetch_offsets_and_header()
11670+        d.addCallback(lambda ignored:
11671+            self._sequence_number)
11672+        return d
11673+
11674+
11675+    def get_root_hash(self):
11676+        """
11677+        I return the root of the block hash tree
11678+        """
11679+        d = self._maybe_fetch_offsets_and_header()
11680+        d.addCallback(lambda ignored: self._root_hash)
11681+        return d
11682+
11683+
11684+    def get_checkstring(self):
11685+        """
11686+        I return the packed representation of the following:
11687+
11688+            - version number
11689+            - sequence number
11690+            - root hash
11691+            - salt (SDMF only)
11692+
11693+        which my users use as a checkstring to detect other writers.
11694+        """
11695+        d = self._maybe_fetch_offsets_and_header()
11696+        def _build_checkstring(ignored):
11697+            if self._salt:
11698+                checkstring = struct.pack(PREFIX,
11699+                                          self._version_number,
11700+                                          self._sequence_number,
11701+                                          self._root_hash,
11702+                                          self._salt)
11703+            else:
11704+                checkstring = struct.pack(MDMFCHECKSTRING,
11705+                                          self._version_number,
11706+                                          self._sequence_number,
11707+                                          self._root_hash)
11708+
11709+            return checkstring
11710+        d.addCallback(_build_checkstring)
11711+        return d
11712+
11713+
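(Reviewer's note.) The checkstring is just the signed prefix truncated after the root hash (plus the salt for SDMF). A sketch of both packings; the constant names here are local to this sketch, not the module's:

```python
import struct

MDMF_CHECKSTRING = ">BQ32s"    # version, seqnum, root hash
SDMF_CHECKSTRING = ">BQ32s16s" # version, seqnum, root hash, 16-byte salt

mdmf_cs = struct.pack(MDMF_CHECKSTRING, 1, 7, b"\x11" * 32)
sdmf_cs = struct.pack(SDMF_CHECKSTRING, 0, 7, b"\x11" * 32, b"\x22" * 16)
print(len(mdmf_cs), len(sdmf_cs))  # 41 57
```

A writer compares this value against what is on the server (via a test vector) to detect concurrent writers before clobbering a share.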
11714+    def get_prefix(self, force_remote):
11715+        d = self._maybe_fetch_offsets_and_header(force_remote)
11716+        d.addCallback(lambda ignored:
11717+            self._build_prefix())
11718+        return d
11719+
11720+
11721+    def _build_prefix(self):
11722+        # The prefix is another name for the part of the remote share
11723+        # that gets signed. It consists of everything up to and
11724+        # including the datalength, packed by struct.
11725+        if self._version_number == SDMF_VERSION:
11726+            return struct.pack(SIGNED_PREFIX,
11727+                           self._version_number,
11728+                           self._sequence_number,
11729+                           self._root_hash,
11730+                           self._salt,
11731+                           self._required_shares,
11732+                           self._total_shares,
11733+                           self._segment_size,
11734+                           self._data_length)
11735+
11736+        else:
11737+            return struct.pack(MDMFSIGNABLEHEADER,
11738+                           self._version_number,
11739+                           self._sequence_number,
11740+                           self._root_hash,
11741+                           self._required_shares,
11742+                           self._total_shares,
11743+                           self._segment_size,
11744+                           self._data_length)
11745+
11746+
11747+    def _get_offsets_tuple(self):
11748+        # The offsets tuple is another component of the version
11749+        # information tuple. It is basically our offsets dictionary,
11750+        # itemized and in a tuple.
11751+        return self._offsets.copy()
11752+
11753+
11754+    def get_verinfo(self):
11755+        """
11756+        I return my verinfo tuple. This is used by the ServermapUpdater
11757+        to keep track of versions of mutable files.
11758+
11759+        The verinfo tuple for MDMF files contains:
11760+            - seqnum
11761+            - root hash
11762+            - a blank salt entry (None)
11763+            - segsize
11764+            - datalen
11765+            - k
11766+            - n
11767+            - prefix (the thing that you sign)
11768+            - a tuple of offsets
11769+
11770+        We include the blank salt entry in MDMF to simplify processing
11771+        of version information tuples.
11772+
11773+        The verinfo tuple for SDMF files is the same, but contains the
11774+        16-byte IV (salt) in place of the blank entry.
11775+        """
11776+        d = self._maybe_fetch_offsets_and_header()
11777+        def _build_verinfo(ignored):
11778+            if self._version_number == SDMF_VERSION:
11779+                salt_to_use = self._salt
11780+            else:
11781+                salt_to_use = None
11782+            return (self._sequence_number,
11783+                    self._root_hash,
11784+                    salt_to_use,
11785+                    self._segment_size,
11786+                    self._data_length,
11787+                    self._required_shares,
11788+                    self._total_shares,
11789+                    self._build_prefix(),
11790+                    self._get_offsets_tuple())
11791+        d.addCallback(_build_verinfo)
11792+        return d
11793+
11794+
11795+    def flush(self):
11796+        """
11797+        I flush my queue of read vectors.
11798+        """
11799+        d = self._read(self._readvs)
11800+        def _then(results):
11801+            self._readvs = []
11802+            if isinstance(results, failure.Failure):
11803+                self._queue_errbacks.notify(results)
11804+            else:
11805+                self._queue_observers.notify(results)
11806+            self._queue_observers = observer.ObserverList()
11807+            self._queue_errbacks = observer.ObserverList()
11808+        d.addBoth(_then)
11809+
11810+
11811+    def _read(self, readvs, force_remote=False, queue=False):
11812+        unsatisfiable = filter(lambda x: x[0] + x[1] > len(self._data), readvs)
11813+        # TODO: It's entirely possible to tweak this so that it just
11814+        # fulfills the requests that it can, and not demand that all
11815+        # requests are satisfiable before running it.
11816+        if not unsatisfiable and not force_remote:
11817+            results = [self._data[offset:offset+length]
11818+                       for (offset, length) in readvs]
11819+            results = {self.shnum: results}
11820+            return defer.succeed(results)
11821+        else:
11822+            if queue:
11823+                start = len(self._readvs)
11824+                self._readvs += readvs
11825+                end = len(self._readvs)
11826+                def _get_results(results, start, end):
11827+                    if not self.shnum in results:
11828+                        return {self.shnum: [""]}
11829+                    return {self.shnum: results[self.shnum][start:end]}
11830+                d = defer.Deferred()
11831+                d.addCallback(_get_results, start, end)
11832+                self._queue_observers.subscribe(d.callback)
11833+                self._queue_errbacks.subscribe(d.errback)
11834+                return d
11835+            return self._rref.callRemote("slot_readv",
11836+                                         self._storage_index,
11837+                                         [self.shnum],
11838+                                         readvs)
11839+
11840+
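(Reviewer's note.) The local path of `_read` serves a read vector from the cached prefix only when every (offset, length) pair falls entirely inside it; otherwise everything goes to the server. A standalone sketch of that decision, with a hypothetical helper name:

```python
def satisfy_from_cache(cached, readvs):
    # A readv is unsatisfiable if any requested range extends past the
    # end of the cached prefix (mirrors the filter in _read).
    unsatisfiable = [(o, l) for (o, l) in readvs if o + l > len(cached)]
    if unsatisfiable:
        return None  # caller must fall back to a remote slot_readv
    return [cached[o:o + l] for (o, l) in readvs]

data = b"0123456789"
assert satisfy_from_cache(data, [(0, 4), (5, 3)]) == [b"0123", b"567"]
assert satisfy_from_cache(data, [(8, 5)]) is None
```

As the TODO in the patch notes, a finer-grained version could serve the satisfiable ranges locally and fetch only the rest.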
11841+    def is_sdmf(self):
11842+        """I tell my caller whether my remote file is SDMF or MDMF.
11843+        """
11844+        d = self._maybe_fetch_offsets_and_header()
11845+        d.addCallback(lambda ignored:
11846+            self._version_number == 0)
11847+        return d
11848+
11849+
11850+class LayoutInvalid(Exception):
11851+    """
11852+    This isn't a valid MDMF mutable file
11853+    """
11854hunk ./src/allmydata/test/test_storage.py 1
11855-import time, os.path, platform, stat, re, simplejson, struct
11856+import time, os.path, platform, stat, re, simplejson, struct, shutil
11857 
11858 import mock
11859 
11860hunk ./src/allmydata/test/test_storage.py 23
11861 from allmydata.storage.expirer import LeaseCheckingCrawler
11862 from allmydata.immutable.layout import WriteBucketProxy, WriteBucketProxy_v2, \
11863      ReadBucketProxy
11864+from allmydata.mutable.layout import MDMFSlotWriteProxy, MDMFSlotReadProxy, \
11865+                                     LayoutInvalid, MDMFSIGNABLEHEADER, \
11866+                                     SIGNED_PREFIX, MDMFHEADER, \
11867+                                     MDMFOFFSETS, SDMFSlotWriteProxy, \
11868+                                     PRIVATE_KEY_SIZE, \
11869+                                     SIGNATURE_SIZE, \
11870+                                     VERIFICATION_KEY_SIZE, \
11871+                                     SHARE_HASH_CHAIN_SIZE
11872 from allmydata.interfaces import BadWriteEnablerError
11873hunk ./src/allmydata/test/test_storage.py 32
11874-from allmydata.test.common import LoggingServiceParent
11875+from allmydata.test.common import LoggingServiceParent, ShouldFailMixin
11876 from allmydata.test.common_web import WebRenderingMixin
11877 from allmydata.test.no_network import NoNetworkServer
11878 from allmydata.web.storage import StorageStatus, remove_prefix
11879hunk ./src/allmydata/test/test_storage.py 111
11880 
11881 class RemoteBucket:
11882 
11883+    def __init__(self):
11884+        self.read_count = 0
11885+        self.write_count = 0
11886+
11887     def callRemote(self, methname, *args, **kwargs):
11888         def _call():
11889             meth = getattr(self.target, "remote_" + methname)
11890hunk ./src/allmydata/test/test_storage.py 119
11891             return meth(*args, **kwargs)
11892+
11893+        if methname == "slot_readv":
11894+            self.read_count += 1
11895+        if "writev" in methname:
11896+            self.write_count += 1
11897+
11898         return defer.maybeDeferred(_call)
11899 
11900hunk ./src/allmydata/test/test_storage.py 127
11901+
11902 class BucketProxy(unittest.TestCase):
11903     def make_bucket(self, name, size):
11904         basedir = os.path.join("storage", "BucketProxy", name)
11905hunk ./src/allmydata/test/test_storage.py 1310
11906         self.failUnless(os.path.exists(prefixdir), prefixdir)
11907         self.failIf(os.path.exists(bucketdir), bucketdir)
11908 
11909+
11910+class MDMFProxies(unittest.TestCase, ShouldFailMixin):
11911+    def setUp(self):
11912+        self.sparent = LoggingServiceParent()
11913+        self._lease_secret = itertools.count()
11914+        self.ss = self.create("MDMFProxies storage test server")
11915+        self.rref = RemoteBucket()
11916+        self.rref.target = self.ss
11917+        self.secrets = (self.write_enabler("we_secret"),
11918+                        self.renew_secret("renew_secret"),
11919+                        self.cancel_secret("cancel_secret"))
11920+        self.segment = "aaaaaa"
11921+        self.block = "aa"
11922+        self.salt = "a" * 16
11923+        self.block_hash = "a" * 32
11924+        self.block_hash_tree = [self.block_hash for i in xrange(6)]
11925+        self.share_hash = self.block_hash
11926+        self.share_hash_chain = dict([(i, self.share_hash) for i in xrange(6)])
11927+        self.signature = "foobarbaz"
11928+        self.verification_key = "vvvvvv"
11929+        self.encprivkey = "private"
11930+        self.root_hash = self.block_hash
11931+        self.salt_hash = self.root_hash
11932+        self.salt_hash_tree = [self.salt_hash for i in xrange(6)]
11933+        self.block_hash_tree_s = self.serialize_blockhashes(self.block_hash_tree)
11934+        self.share_hash_chain_s = self.serialize_sharehashes(self.share_hash_chain)
11935+        # blockhashes and salt hashes are serialized in the same way,
11936+        # only we lop off the first element and store that in the
11937+        # header.
11938+        self.salt_hash_tree_s = self.serialize_blockhashes(self.salt_hash_tree[1:])
11939+
11940+
11941+    def tearDown(self):
11942+        self.sparent.stopService()
11943+        shutil.rmtree(self.workdir("MDMFProxies storage test server"))
11944+
11945+
11946+    def write_enabler(self, we_tag):
11947+        return hashutil.tagged_hash("we_blah", we_tag)
11948+
11949+
11950+    def renew_secret(self, tag):
11951+        return hashutil.tagged_hash("renew_blah", str(tag))
11952+
11953+
11954+    def cancel_secret(self, tag):
11955+        return hashutil.tagged_hash("cancel_blah", str(tag))
11956+
11957+
11958+    def workdir(self, name):
11959+        basedir = os.path.join("storage", "MutableServer", name)
11960+        return basedir
11961+
11962+
11963+    def create(self, name):
11964+        workdir = self.workdir(name)
11965+        ss = StorageServer(workdir, "\x00" * 20)
11966+        ss.setServiceParent(self.sparent)
11967+        return ss
11968+
11969+
11970+    def build_test_mdmf_share(self, tail_segment=False, empty=False):
11971+        # Start with the checkstring
11972+        data = struct.pack(">BQ32s",
11973+                           1,
11974+                           0,
11975+                           self.root_hash)
11976+        self.checkstring = data
11977+        # Next, the encoding parameters
11978+        if tail_segment:
11979+            data += struct.pack(">BBQQ",
11980+                                3,
11981+                                10,
11982+                                6,
11983+                                33)
11984+        elif empty:
11985+            data += struct.pack(">BBQQ",
11986+                                3,
11987+                                10,
11988+                                0,
11989+                                0)
11990+        else:
11991+            data += struct.pack(">BBQQ",
11992+                                3,
11993+                                10,
11994+                                6,
11995+                                36)
11996+        # Now we'll build the share data, then the offsets.
11997+        sharedata = ""
11998+        if not tail_segment and not empty:
11999+            for i in xrange(6):
12000+                sharedata += self.salt + self.block
12001+        elif tail_segment:
12002+            for i in xrange(5):
12003+                sharedata += self.salt + self.block
12004+            sharedata += self.salt + "a"
12005+
12006+        # The encrypted private key comes after the shares + salts
12007+        offset_size = struct.calcsize(MDMFOFFSETS)
12008+        encrypted_private_key_offset = len(data) + offset_size
12009+        # The share hash chain comes after the private key
12010+        sharehashes_offset = encrypted_private_key_offset + \
12011+            len(self.encprivkey)
12012+
12013+        # The signature comes after the share hash chain.
12014+        signature_offset = sharehashes_offset + len(self.share_hash_chain_s)
12015+
12016+        verification_key_offset = signature_offset + len(self.signature)
12017+        verification_key_end = verification_key_offset + \
12018+            len(self.verification_key)
12019+
12020+        share_data_offset = offset_size
12021+        share_data_offset += PRIVATE_KEY_SIZE
12022+        share_data_offset += SIGNATURE_SIZE
12023+        share_data_offset += VERIFICATION_KEY_SIZE
12024+        share_data_offset += SHARE_HASH_CHAIN_SIZE
12025+
12026+        blockhashes_offset = share_data_offset + len(sharedata)
12027+        eof_offset = blockhashes_offset + len(self.block_hash_tree_s)
12028+
12029+        data += struct.pack(MDMFOFFSETS,
12030+                            encrypted_private_key_offset,
12031+                            sharehashes_offset,
12032+                            signature_offset,
12033+                            verification_key_offset,
12034+                            verification_key_end,
12035+                            share_data_offset,
12036+                            blockhashes_offset,
12037+                            eof_offset)
12038+
12039+        self.offsets = {}
12040+        self.offsets['enc_privkey'] = encrypted_private_key_offset
12041+        self.offsets['block_hash_tree'] = blockhashes_offset
12042+        self.offsets['share_hash_chain'] = sharehashes_offset
12043+        self.offsets['signature'] = signature_offset
12044+        self.offsets['verification_key'] = verification_key_offset
12045+        self.offsets['share_data'] = share_data_offset
12046+        self.offsets['verification_key_end'] = verification_key_end
12047+        self.offsets['EOF'] = eof_offset
12048+
12049+        # the private key,
12050+        data += self.encprivkey
12051+        # the sharehashes
12052+        data += self.share_hash_chain_s
12053+        # the signature,
12054+        data += self.signature
12055+        # and the verification key
12056+        data += self.verification_key
12057+        # Then we'll add in gibberish until we get to the right point.
12058+        nulls = " " * (share_data_offset - len(data))
12059+        data += nulls
12060+
12061+        # Then the share data
12062+        data += sharedata
12063+        # the blockhashes
12064+        data += self.block_hash_tree_s
12065+        return data
12066+
12067+
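For reference, the MDMF header layout that `build_test_mdmf_share` assembles can be sanity-checked with `struct` alone. This is a sketch, not the patch's own layout module: the format strings below are inferred from the `pack` calls here and from the fixed header offsets (59, 67, ..., 115) that `test_write` reads back, so treat the names as illustrative stand-ins for the real `MDMFHEADER`/`MDMFOFFSETS` constants.

```python
import struct

# Inferred MDMF (version 1) header layout -- a sketch, assuming the
# field order shown in the pack calls above:
MDMF_CHECKSTRING = ">BQ32s"   # version, sequence number, root hash
MDMF_ENCODING = "BBQQ"        # k, N, segment size, data length
MDMF_OFFSETS = ">QQQQQQQQ"    # eight 8-byte offset fields

# The checkstring is 41 bytes, the encoding parameters end at byte 59
# (where test_write reads the first offset field), and the full header
# is 123 bytes -- which is why calcsize(MDMFHEADER) doubles as the
# encrypted private key's offset in the writer tests.
assert struct.calcsize(MDMF_CHECKSTRING) == 41
assert struct.calcsize(MDMF_CHECKSTRING + MDMF_ENCODING) == 59
assert struct.calcsize(MDMF_OFFSETS) == 64
assert 59 + struct.calcsize(MDMF_OFFSETS) == 123
```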
12068+    def write_test_share_to_server(self,
12069+                                   storage_index,
12070+                                   tail_segment=False,
12071+                                   empty=False):
12072+        """
12073+        I write some data for the read tests to read to self.ss
12074+
12075+        If tail_segment=True, then I will write a share that has a
12076+        smaller tail segment than other segments.
12077+        """
12078+        write = self.ss.remote_slot_testv_and_readv_and_writev
12079+        data = self.build_test_mdmf_share(tail_segment, empty)
12080+        # Finally, we write the whole thing to the storage server in one
12081+        # pass.
12082+        testvs = [(0, 1, "eq", "")]
12083+        tws = {}
12084+        tws[0] = (testvs, [(0, data)], None)
12085+        readv = [(0, 1)]
12086+        results = write(storage_index, self.secrets, tws, readv)
12087+        self.failUnless(results[0])
12088+
12089+
12090+    def build_test_sdmf_share(self, empty=False):
12091+        if empty:
12092+            sharedata = ""
12093+        else:
12094+            sharedata = self.segment * 6
12095+        self.sharedata = sharedata
12096+        blocksize = len(sharedata) / 3
12097+        block = sharedata[:blocksize]
12098+        self.blockdata = block
12099+        prefix = struct.pack(">BQ32s16s BBQQ",
12100+                             0, # version,
12101+                             0,
12102+                             self.root_hash,
12103+                             self.salt,
12104+                             3,
12105+                             10,
12106+                             len(sharedata),
12107+                             len(sharedata),
12108+                            )
12109+        post_offset = struct.calcsize(">BQ32s16sBBQQLLLLQQ")
12110+        signature_offset = post_offset + len(self.verification_key)
12111+        sharehashes_offset = signature_offset + len(self.signature)
12112+        blockhashes_offset = sharehashes_offset + len(self.share_hash_chain_s)
12113+        sharedata_offset = blockhashes_offset + len(self.block_hash_tree_s)
12114+        encprivkey_offset = sharedata_offset + len(block)
12115+        eof_offset = encprivkey_offset + len(self.encprivkey)
12116+        offsets = struct.pack(">LLLLQQ",
12117+                              signature_offset,
12118+                              sharehashes_offset,
12119+                              blockhashes_offset,
12120+                              sharedata_offset,
12121+                              encprivkey_offset,
12122+                              eof_offset)
12123+        final_share = "".join([prefix,
12124+                           offsets,
12125+                           self.verification_key,
12126+                           self.signature,
12127+                           self.share_hash_chain_s,
12128+                           self.block_hash_tree_s,
12129+                           block,
12130+                           self.encprivkey])
12131+        self.offsets = {}
12132+        self.offsets['signature'] = signature_offset
12133+        self.offsets['share_hash_chain'] = sharehashes_offset
12134+        self.offsets['block_hash_tree'] = blockhashes_offset
12135+        self.offsets['share_data'] = sharedata_offset
12136+        self.offsets['enc_privkey'] = encprivkey_offset
12137+        self.offsets['EOF'] = eof_offset
12138+        return final_share
12139+
12140+
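The SDMF (version 0) layout built above differs from MDMF in that four of its six offset fields are 4-byte values and the offset table follows a salt-bearing prefix. A sketch of the size arithmetic, inferred from the `pack` format strings in `build_test_sdmf_share` rather than taken from the real layout constants:

```python
import struct

# Inferred SDMF layout pieces (illustrative names, not the real constants):
SDMF_PREFIX = ">BQ32s16sBBQQ"  # version, seqnum, root hash, salt, k, N, segsize, datalen
SDMF_OFFSETS = ">LLLLQQ"       # signature, sharehashes, blockhashes, sharedata, encprivkey, EOF

assert struct.calcsize(SDMF_PREFIX) == 75
assert struct.calcsize(SDMF_OFFSETS) == 32
# post_offset above: the verification key starts right after the
# 107-byte header (75-byte prefix + 32-byte offset table).
assert struct.calcsize(">BQ32s16sBBQQLLLLQQ") == 107
```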
12141+    def write_sdmf_share_to_server(self,
12142+                                   storage_index,
12143+                                   empty=False):
12144+        # Some tests need SDMF shares to verify that we can still read
12145+        # them. This method writes one that resembles, but is not necessarily identical to, a share written by the real SDMF publisher.
12146+        assert self.rref
12147+        write = self.ss.remote_slot_testv_and_readv_and_writev
12148+        share = self.build_test_sdmf_share(empty)
12149+        testvs = [(0, 1, "eq", "")]
12150+        tws = {}
12151+        tws[0] = (testvs, [(0, share)], None)
12152+        readv = []
12153+        results = write(storage_index, self.secrets, tws, readv)
12154+        self.failUnless(results[0])
12155+
12156+
12157+    def test_read(self):
12158+        self.write_test_share_to_server("si1")
12159+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12160+        # Check that every method equals what we expect it to.
12161+        d = defer.succeed(None)
12162+        def _check_block_and_salt((block, salt)):
12163+            self.failUnlessEqual(block, self.block)
12164+            self.failUnlessEqual(salt, self.salt)
12165+
12166+        for i in xrange(6):
12167+            d.addCallback(lambda ignored, i=i:
12168+                mr.get_block_and_salt(i))
12169+            d.addCallback(_check_block_and_salt)
12170+
12171+        d.addCallback(lambda ignored:
12172+            mr.get_encprivkey())
12173+        d.addCallback(lambda encprivkey:
12174+            self.failUnlessEqual(self.encprivkey, encprivkey))
12175+
12176+        d.addCallback(lambda ignored:
12177+            mr.get_blockhashes())
12178+        d.addCallback(lambda blockhashes:
12179+            self.failUnlessEqual(self.block_hash_tree, blockhashes))
12180+
12181+        d.addCallback(lambda ignored:
12182+            mr.get_sharehashes())
12183+        d.addCallback(lambda sharehashes:
12184+            self.failUnlessEqual(self.share_hash_chain, sharehashes))
12185+
12186+        d.addCallback(lambda ignored:
12187+            mr.get_signature())
12188+        d.addCallback(lambda signature:
12189+            self.failUnlessEqual(signature, self.signature))
12190+
12191+        d.addCallback(lambda ignored:
12192+            mr.get_verification_key())
12193+        d.addCallback(lambda verification_key:
12194+            self.failUnlessEqual(verification_key, self.verification_key))
12195+
12196+        d.addCallback(lambda ignored:
12197+            mr.get_seqnum())
12198+        d.addCallback(lambda seqnum:
12199+            self.failUnlessEqual(seqnum, 0))
12200+
12201+        d.addCallback(lambda ignored:
12202+            mr.get_root_hash())
12203+        d.addCallback(lambda root_hash:
12204+            self.failUnlessEqual(self.root_hash, root_hash))
12205+
12206+        d.addCallback(lambda ignored:
12207+            mr.get_seqnum())
12208+        d.addCallback(lambda seqnum:
12209+            self.failUnlessEqual(0, seqnum))
12210+
12211+        d.addCallback(lambda ignored:
12212+            mr.get_encoding_parameters())
12213+        def _check_encoding_parameters((k, n, segsize, datalen)):
12214+            self.failUnlessEqual(k, 3)
12215+            self.failUnlessEqual(n, 10)
12216+            self.failUnlessEqual(segsize, 6)
12217+            self.failUnlessEqual(datalen, 36)
12218+        d.addCallback(_check_encoding_parameters)
12219+
12220+        d.addCallback(lambda ignored:
12221+            mr.get_checkstring())
12222+        d.addCallback(lambda checkstring:
12223+            self.failUnlessEqual(checkstring, self.checkstring))
12224+        return d
12225+
12226+
12227+    def test_read_with_different_tail_segment_size(self):
12228+        self.write_test_share_to_server("si1", tail_segment=True)
12229+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12230+        d = mr.get_block_and_salt(5)
12231+        def _check_tail_segment(results):
12232+            block, salt = results
12233+            self.failUnlessEqual(len(block), 1)
12234+            self.failUnlessEqual(block, "a")
12235+        d.addCallback(_check_tail_segment)
12236+        return d
12237+
12238+
12239+    def test_get_block_with_invalid_segnum(self):
12240+        self.write_test_share_to_server("si1")
12241+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12242+        d = defer.succeed(None)
12243+        d.addCallback(lambda ignored:
12244+            self.shouldFail(LayoutInvalid, "test invalid segnum",
12245+                            None,
12246+                            mr.get_block_and_salt, 7))
12247+        return d
12248+
12249+
12250+    def test_get_encoding_parameters_first(self):
12251+        self.write_test_share_to_server("si1")
12252+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12253+        d = mr.get_encoding_parameters()
12254+        def _check_encoding_parameters((k, n, segment_size, datalen)):
12255+            self.failUnlessEqual(k, 3)
12256+            self.failUnlessEqual(n, 10)
12257+            self.failUnlessEqual(segment_size, 6)
12258+            self.failUnlessEqual(datalen, 36)
12259+        d.addCallback(_check_encoding_parameters)
12260+        return d
12261+
12262+
12263+    def test_get_seqnum_first(self):
12264+        self.write_test_share_to_server("si1")
12265+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12266+        d = mr.get_seqnum()
12267+        d.addCallback(lambda seqnum:
12268+            self.failUnlessEqual(seqnum, 0))
12269+        return d
12270+
12271+
12272+    def test_get_root_hash_first(self):
12273+        self.write_test_share_to_server("si1")
12274+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12275+        d = mr.get_root_hash()
12276+        d.addCallback(lambda root_hash:
12277+            self.failUnlessEqual(root_hash, self.root_hash))
12278+        return d
12279+
12280+
12281+    def test_get_checkstring_first(self):
12282+        self.write_test_share_to_server("si1")
12283+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12284+        d = mr.get_checkstring()
12285+        d.addCallback(lambda checkstring:
12286+            self.failUnlessEqual(checkstring, self.checkstring))
12287+        return d
12288+
12289+
12290+    def test_write_read_vectors(self):
12291+        # When writing for us, the storage server will return to us a
12292+        # read vector, along with its result. If a write fails because
12293+        # the test vectors failed, this read vector can help us to
12294+        # diagnose the problem. This test ensures that the read vector
12295+        # is working appropriately.
12296+        mw = self._make_new_mw("si1", 0)
12297+
12298+        for i in xrange(6):
12299+            mw.put_block(self.block, i, self.salt)
12300+        mw.put_encprivkey(self.encprivkey)
12301+        mw.put_blockhashes(self.block_hash_tree)
12302+        mw.put_sharehashes(self.share_hash_chain)
12303+        mw.put_root_hash(self.root_hash)
12304+        mw.put_signature(self.signature)
12305+        mw.put_verification_key(self.verification_key)
12306+        d = mw.finish_publishing()
12307+        def _then(results):
12308+            self.failUnlessEqual(len(results), 2)
12309+            result, readv = results
12310+            self.failUnless(result)
12311+            self.failIf(readv)
12312+            self.old_checkstring = mw.get_checkstring()
12313+            mw.set_checkstring("")
12314+        d.addCallback(_then)
12315+        d.addCallback(lambda ignored:
12316+            mw.finish_publishing())
12317+        def _then_again(results):
12318+            self.failUnlessEqual(len(results), 2)
12319+            result, readvs = results
12320+            self.failIf(result)
12321+            self.failUnlessIn(0, readvs)
12322+            readv = readvs[0][0]
12323+            self.failUnlessEqual(readv, self.old_checkstring)
12324+        d.addCallback(_then_again)
12325+        # The checkstring remains the same for the rest of the process.
12326+        return d
12327+
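The test above exercises the diagnostic value of the returned read vector: on a failed write, the server reads back the current checkstring for each share it holds, which tells the writer whether someone else got there first. A hedged sketch of how a caller might interpret the `(result, readvs)` pair that `finish_publishing` resolves to (the function name and return strings are illustrative, not part of the patch):

```python
# Interpret a (success, readvs) pair from a test-and-set write, where
# readvs maps share number -> list of data read back (here, the
# checkstring currently stored in the slot).
def interpret_writev_result(result, readvs, my_checkstring):
    if result:
        return "write accepted"
    for sharenum, vectors in readvs.items():
        current = vectors[0]
        if current != my_checkstring:
            return "share %d modified by another writer" % sharenum
    return "test vector mismatch for an unknown reason"
```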
12328+
12329+    def test_private_key_after_share_hash_chain(self):
12330+        mw = self._make_new_mw("si1", 0)
12331+        d = defer.succeed(None)
12332+        for i in xrange(6):
12333+            d.addCallback(lambda ignored, i=i:
12334+                mw.put_block(self.block, i, self.salt))
12335+        d.addCallback(lambda ignored:
12336+            mw.put_encprivkey(self.encprivkey))
12337+        d.addCallback(lambda ignored:
12338+            mw.put_sharehashes(self.share_hash_chain))
12339+
12340+        # Now try to put the private key again.
12341+        d.addCallback(lambda ignored:
12342+            self.shouldFail(LayoutInvalid, "test repeat private key",
12343+                            None,
12344+                            mw.put_encprivkey, self.encprivkey))
12345+        return d
12346+
12347+
12348+    def test_signature_after_verification_key(self):
12349+        mw = self._make_new_mw("si1", 0)
12350+        d = defer.succeed(None)
12351+        # Put everything up to and including the verification key.
12352+        for i in xrange(6):
12353+            d.addCallback(lambda ignored, i=i:
12354+                mw.put_block(self.block, i, self.salt))
12355+        d.addCallback(lambda ignored:
12356+            mw.put_encprivkey(self.encprivkey))
12357+        d.addCallback(lambda ignored:
12358+            mw.put_blockhashes(self.block_hash_tree))
12359+        d.addCallback(lambda ignored:
12360+            mw.put_sharehashes(self.share_hash_chain))
12361+        d.addCallback(lambda ignored:
12362+            mw.put_root_hash(self.root_hash))
12363+        d.addCallback(lambda ignored:
12364+            mw.put_signature(self.signature))
12365+        d.addCallback(lambda ignored:
12366+            mw.put_verification_key(self.verification_key))
12367+        # Now try to put the signature again. This should fail
12368+        d.addCallback(lambda ignored:
12369+            self.shouldFail(LayoutInvalid, "signature after verification",
12370+                            None,
12371+                            mw.put_signature, self.signature))
12372+        return d
12373+
12374+
12375+    def test_uncoordinated_write(self):
12376+        # Make two mutable writers, both pointing to the same storage
12377+        # server, both at the same storage index, and try writing to the
12378+        # same share.
12379+        mw1 = self._make_new_mw("si1", 0)
12380+        mw2 = self._make_new_mw("si1", 0)
12381+
12382+        def _check_success(results):
12383+            result, readvs = results
12384+            self.failUnless(result)
12385+
12386+        def _check_failure(results):
12387+            result, readvs = results
12388+            self.failIf(result)
12389+
12390+        def _write_share(mw):
12391+            for i in xrange(6):
12392+                mw.put_block(self.block, i, self.salt)
12393+            mw.put_encprivkey(self.encprivkey)
12394+            mw.put_blockhashes(self.block_hash_tree)
12395+            mw.put_sharehashes(self.share_hash_chain)
12396+            mw.put_root_hash(self.root_hash)
12397+            mw.put_signature(self.signature)
12398+            mw.put_verification_key(self.verification_key)
12399+            return mw.finish_publishing()
12400+        d = _write_share(mw1)
12401+        d.addCallback(_check_success)
12402+        d.addCallback(lambda ignored:
12403+            _write_share(mw2))
12404+        d.addCallback(_check_failure)
12405+        return d
12406+
12407+
12408+    def test_invalid_salt_size(self):
12409+        # Salts need to be 16 bytes in size. Writes that attempt to
12410+        # write more or less than this should be rejected.
12411+        mw = self._make_new_mw("si1", 0)
12412+        invalid_salt = "a" * 17 # 17 bytes
12413+        another_invalid_salt = "b" * 15 # 15 bytes
12414+        d = defer.succeed(None)
12415+        d.addCallback(lambda ignored:
12416+            self.shouldFail(LayoutInvalid, "salt too big",
12417+                            None,
12418+                            mw.put_block, self.block, 0, invalid_salt))
12419+        d.addCallback(lambda ignored:
12420+            self.shouldFail(LayoutInvalid, "salt too small",
12421+                            None,
12422+                            mw.put_block, self.block, 0,
12423+                            another_invalid_salt))
12424+        return d
12425+
12426+
12427+    def test_write_test_vectors(self):
12428+        # If we give the write proxy a bogus test vector at
12429+        # any point during the process, it should fail to write when we
12430+        # tell it to write.
12431+        def _check_failure(results):
12432+            self.failUnlessEqual(len(results), 2)
12433+            res, readv = results
12434+            self.failIf(res)
12435+
12436+        def _check_success(results):
12437+            self.failUnlessEqual(len(results), 2)
12438+            res, readv = results
12439+            self.failUnless(res)
12440+
12441+        mw = self._make_new_mw("si1", 0)
12442+        mw.set_checkstring("this is a lie")
12443+        for i in xrange(6):
12444+            mw.put_block(self.block, i, self.salt)
12445+        mw.put_encprivkey(self.encprivkey)
12446+        mw.put_blockhashes(self.block_hash_tree)
12447+        mw.put_sharehashes(self.share_hash_chain)
12448+        mw.put_root_hash(self.root_hash)
12449+        mw.put_signature(self.signature)
12450+        mw.put_verification_key(self.verification_key)
12451+        d = mw.finish_publishing()
12452+        d.addCallback(_check_failure)
12453+        d.addCallback(lambda ignored:
12454+            mw.set_checkstring(""))
12455+        d.addCallback(lambda ignored:
12456+            mw.finish_publishing())
12457+        d.addCallback(_check_success)
12458+        return d
12459+
12460+
12461+    def serialize_blockhashes(self, blockhashes):
12462+        return "".join(blockhashes)
12463+
12464+
12465+    def serialize_sharehashes(self, sharehashes):
12466+        ret = "".join([struct.pack(">H32s", i, sharehashes[i])
12467+                        for i in sorted(sharehashes.keys())])
12468+        return ret
12469+
12470+
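The `serialize_sharehashes` helper above packs each chain entry as a 2-byte big-endian index followed by the 32-byte hash, sorted by index. A Python 3 restatement of the same serialization (bytes literals added; otherwise it mirrors the method in this patch):

```python
import struct

# Pack a {index: 32-byte hash} chain as sorted (">H32s") entries,
# 34 bytes each, exactly as serialize_sharehashes does above.
def serialize_sharehashes(sharehashes):
    return b"".join(struct.pack(">H32s", i, sharehashes[i])
                    for i in sorted(sharehashes))

chain = {3: b"\x03" * 32, 1: b"\x01" * 32}
packed = serialize_sharehashes(chain)
assert len(packed) == 2 * (2 + 32)          # 34 bytes per entry
assert packed[:2] == struct.pack(">H", 1)   # entries come out index-sorted
```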
12471+    def test_write(self):
12472+        # This translates to a file with 6 6-byte segments, and with 2-byte
12473+        # blocks.
12474+        mw = self._make_new_mw("si1", 0)
12475+        # Test writing some blocks.
12476+        read = self.ss.remote_slot_readv
12477+        expected_private_key_offset = struct.calcsize(MDMFHEADER)
12478+        expected_sharedata_offset = struct.calcsize(MDMFHEADER) + \
12479+                                    PRIVATE_KEY_SIZE + \
12480+                                    SIGNATURE_SIZE + \
12481+                                    VERIFICATION_KEY_SIZE + \
12482+                                    SHARE_HASH_CHAIN_SIZE
12483+        written_block_size = 2 + len(self.salt)
12484+        written_block = self.block + self.salt
12485+        for i in xrange(6):
12486+            mw.put_block(self.block, i, self.salt)
12487+
12488+        mw.put_encprivkey(self.encprivkey)
12489+        mw.put_blockhashes(self.block_hash_tree)
12490+        mw.put_sharehashes(self.share_hash_chain)
12491+        mw.put_root_hash(self.root_hash)
12492+        mw.put_signature(self.signature)
12493+        mw.put_verification_key(self.verification_key)
12494+        d = mw.finish_publishing()
12495+        def _check_publish(results):
12496+            self.failUnlessEqual(len(results), 2)
12497+            result, ign = results
12498+            self.failUnless(result, "publish failed")
12499+            for i in xrange(6):
12500+                self.failUnlessEqual(read("si1", [0], [(expected_sharedata_offset + (i * written_block_size), written_block_size)]),
12501+                                {0: [written_block]})
12502+
12503+            self.failUnlessEqual(len(self.encprivkey), 7)
12504+            self.failUnlessEqual(read("si1", [0], [(expected_private_key_offset, 7)]),
12505+                                 {0: [self.encprivkey]})
12506+
12507+            expected_block_hash_offset = expected_sharedata_offset + \
12508+                        (6 * written_block_size)
12509+            self.failUnlessEqual(len(self.block_hash_tree_s), 32 * 6)
12510+            self.failUnlessEqual(read("si1", [0], [(expected_block_hash_offset, 32 * 6)]),
12511+                                 {0: [self.block_hash_tree_s]})
12512+
12513+            expected_share_hash_offset = expected_private_key_offset + len(self.encprivkey)
12514+            self.failUnlessEqual(read("si1", [0],[(expected_share_hash_offset, (32 + 2) * 6)]),
12515+                                 {0: [self.share_hash_chain_s]})
12516+
12517+            self.failUnlessEqual(read("si1", [0], [(9, 32)]),
12518+                                 {0: [self.root_hash]})
12519+            expected_signature_offset = expected_share_hash_offset + \
12520+                len(self.share_hash_chain_s)
12521+            self.failUnlessEqual(len(self.signature), 9)
12522+            self.failUnlessEqual(read("si1", [0], [(expected_signature_offset, 9)]),
12523+                                 {0: [self.signature]})
12524+
12525+            expected_verification_key_offset = expected_signature_offset + len(self.signature)
12526+            self.failUnlessEqual(len(self.verification_key), 6)
12527+            self.failUnlessEqual(read("si1", [0], [(expected_verification_key_offset, 6)]),
12528+                                 {0: [self.verification_key]})
12529+
12530+            signable = mw.get_signable()
12531+            verno, seq, roothash, k, n, segsize, datalen = \
12532+                                            struct.unpack(">BQ32sBBQQ",
12533+                                                          signable)
12534+            self.failUnlessEqual(verno, 1)
12535+            self.failUnlessEqual(seq, 0)
12536+            self.failUnlessEqual(roothash, self.root_hash)
12537+            self.failUnlessEqual(k, 3)
12538+            self.failUnlessEqual(n, 10)
12539+            self.failUnlessEqual(segsize, 6)
12540+            self.failUnlessEqual(datalen, 36)
12541+            expected_eof_offset = expected_block_hash_offset + \
12542+                len(self.block_hash_tree_s)
12543+
12544+            # Check the version number to make sure that it is correct.
12545+            expected_version_number = struct.pack(">B", 1)
12546+            self.failUnlessEqual(read("si1", [0], [(0, 1)]),
12547+                                 {0: [expected_version_number]})
12548+            # Check the sequence number to make sure that it is correct
12549+            expected_sequence_number = struct.pack(">Q", 0)
12550+            self.failUnlessEqual(read("si1", [0], [(1, 8)]),
12551+                                 {0: [expected_sequence_number]})
12552+            # Check that the encoding parameters (k, N, segment size, data
12553+            # length) are what they should be. These are 3, 10, 6, 36.
12554+            expected_k = struct.pack(">B", 3)
12555+            self.failUnlessEqual(read("si1", [0], [(41, 1)]),
12556+                                 {0: [expected_k]})
12557+            expected_n = struct.pack(">B", 10)
12558+            self.failUnlessEqual(read("si1", [0], [(42, 1)]),
12559+                                 {0: [expected_n]})
12560+            expected_segment_size = struct.pack(">Q", 6)
12561+            self.failUnlessEqual(read("si1", [0], [(43, 8)]),
12562+                                 {0: [expected_segment_size]})
12563+            expected_data_length = struct.pack(">Q", 36)
12564+            self.failUnlessEqual(read("si1", [0], [(51, 8)]),
12565+                                 {0: [expected_data_length]})
12566+            expected_offset = struct.pack(">Q", expected_private_key_offset)
12567+            self.failUnlessEqual(read("si1", [0], [(59, 8)]),
12568+                                 {0: [expected_offset]})
12569+            expected_offset = struct.pack(">Q", expected_share_hash_offset)
12570+            self.failUnlessEqual(read("si1", [0], [(67, 8)]),
12571+                                 {0: [expected_offset]})
12572+            expected_offset = struct.pack(">Q", expected_signature_offset)
12573+            self.failUnlessEqual(read("si1", [0], [(75, 8)]),
12574+                                 {0: [expected_offset]})
12575+            expected_offset = struct.pack(">Q", expected_verification_key_offset)
12576+            self.failUnlessEqual(read("si1", [0], [(83, 8)]),
12577+                                 {0: [expected_offset]})
12578+            expected_offset = struct.pack(">Q", expected_verification_key_offset + len(self.verification_key))
12579+            self.failUnlessEqual(read("si1", [0], [(91, 8)]),
12580+                                 {0: [expected_offset]})
12581+            expected_offset = struct.pack(">Q", expected_sharedata_offset)
12582+            self.failUnlessEqual(read("si1", [0], [(99, 8)]),
12583+                                 {0: [expected_offset]})
12584+            expected_offset = struct.pack(">Q", expected_block_hash_offset)
12585+            self.failUnlessEqual(read("si1", [0], [(107, 8)]),
12586+                                 {0: [expected_offset]})
12587+            expected_offset = struct.pack(">Q", expected_eof_offset)
12588+            self.failUnlessEqual(read("si1", [0], [(115, 8)]),
12589+                                 {0: [expected_offset]})
12590+        d.addCallback(_check_publish)
12591+        return d
12592+
12593+    def _make_new_mw(self, si, share, datalength=36):
12594+        # This is a file of size 36 bytes. Since it has a segment
12595+        # size of 6, we know that it has 6 byte segments, which will
12596+        # be split into blocks of 2 bytes because our FEC k
12597+        # parameter is 3.
12598+        mw = MDMFSlotWriteProxy(share, self.rref, si, self.secrets, 0, 3, 10,
12599+                                6, datalength)
12600+        return mw
12601+
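The arithmetic behind `_make_new_mw`'s comment (and behind the 33-byte tail-segment case used later) can be made explicit. This is a sketch of the segment/block math implied by the tests, assuming ceiling division for the tail, not code from the patch itself:

```python
# Given a file size, segment size, and FEC parameter k, compute the
# number of segments, the full-segment block size, and the tail block size.
def encoding_shape(datalength, segment_size, k):
    num_segments = -(-datalength // segment_size) if datalength else 0  # ceil
    block_size = segment_size // k
    tail = datalength - (num_segments - 1) * segment_size if num_segments else 0
    tail_block_size = -(-tail // k) if num_segments else 0  # ceil
    return (num_segments, block_size, tail_block_size)

# 36 bytes, 6-byte segments, k=3: six segments of 2-byte blocks.
assert encoding_shape(36, 6, 3) == (6, 2, 2)
# 33 bytes: five full segments plus a 3-byte tail with 1-byte blocks,
# matching the test_write_rejected_with_invalid_blocksize setup.
assert encoding_shape(33, 6, 3) == (6, 2, 1)
```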
12602+
12603+    def test_write_rejected_with_too_many_blocks(self):
12604+        mw = self._make_new_mw("si0", 0)
12605+
12606+        # Try writing too many blocks. We should not be able to write
12607+        # more than 6 blocks into each share.
12609+        d = defer.succeed(None)
12610+        for i in xrange(6):
12611+            d.addCallback(lambda ignored, i=i:
12612+                mw.put_block(self.block, i, self.salt))
12613+        d.addCallback(lambda ignored:
12614+            self.shouldFail(LayoutInvalid, "too many blocks",
12615+                            None,
12616+                            mw.put_block, self.block, 7, self.salt))
12617+        return d
12618+
12619+
12620+    def test_write_rejected_with_invalid_salt(self):
12621+        # Try writing an invalid salt. Salts are 16 bytes -- any more or
12622+        # less should cause an error.
12623+        mw = self._make_new_mw("si1", 0)
12624+        bad_salt = "a" * 17 # 17 bytes
12625+        d = defer.succeed(None)
12626+        d.addCallback(lambda ignored:
12627+            self.shouldFail(LayoutInvalid, "test_invalid_salt",
12628+                            None, mw.put_block, self.block, 0, bad_salt))
12629+        return d
12630+
12631+
12632+    def test_write_rejected_with_invalid_root_hash(self):
12633+        # Try writing an invalid root hash. This should be SHA256d, and
12634+        # 32 bytes long as a result.
12635+        mw = self._make_new_mw("si2", 0)
12636+        # 17 bytes != 32 bytes
12637+        invalid_root_hash = "a" * 17
12638+        d = defer.succeed(None)
12639+        # Before this test can work, we need to put some blocks + salts,
12640+        # a block hash tree, and a share hash tree. Otherwise, we'll see
12641+        # failures that match what we are looking for, but are caused by
12642+        # the constraints imposed on operation ordering.
12643+        for i in xrange(6):
12644+            d.addCallback(lambda ignored, i=i:
12645+                mw.put_block(self.block, i, self.salt))
12646+        d.addCallback(lambda ignored:
12647+            mw.put_encprivkey(self.encprivkey))
12648+        d.addCallback(lambda ignored:
12649+            mw.put_blockhashes(self.block_hash_tree))
12650+        d.addCallback(lambda ignored:
12651+            mw.put_sharehashes(self.share_hash_chain))
12652+        d.addCallback(lambda ignored:
12653+            self.shouldFail(LayoutInvalid, "invalid root hash",
12654+                            None, mw.put_root_hash, invalid_root_hash))
12655+        return d
12656+
12657+
12658+    def test_write_rejected_with_invalid_blocksize(self):
12659+        # The blocksize implied by the writer that we get from
12660+        # _make_new_mw is 2 bytes -- any more or any less than this
12661+        # should cause a failure, unless it is the tail segment, in
12662+        # which case a shorter block may be valid.
12663+        invalid_block = "a"
12664+        mw = self._make_new_mw("si3", 0, 33) # implies a tail segment with
12665+                                             # one byte blocks
12666+        # 1 byte != 2 bytes
12667+        d = defer.succeed(None)
12668+        d.addCallback(lambda ignored, invalid_block=invalid_block:
12669+            self.shouldFail(LayoutInvalid, "test blocksize too small",
12670+                            None, mw.put_block, invalid_block, 0,
12671+                            self.salt))
12672+        invalid_block = invalid_block * 3
12673+        # 3 bytes != 2 bytes
12674+        d.addCallback(lambda ignored:
12675+            self.shouldFail(LayoutInvalid, "test blocksize too large",
12676+                            None,
12677+                            mw.put_block, invalid_block, 0, self.salt))
12678+        for i in xrange(5):
12679+            d.addCallback(lambda ignored, i=i:
12680+                mw.put_block(self.block, i, self.salt))
12681+        # Try to put an invalid tail segment
12682+        d.addCallback(lambda ignored:
12683+            self.shouldFail(LayoutInvalid, "test invalid tail segment",
12684+                            None,
12685+                            mw.put_block, self.block, 5, self.salt))
12686+        valid_block = "a"
12687+        d.addCallback(lambda ignored:
12688+            mw.put_block(valid_block, 5, self.salt))
12689+        return d
12690+
12691+
12692+    def test_write_enforces_order_constraints(self):
12693+        # We require that the MDMFSlotWriteProxy be interacted with in a
12694+        # specific way.
12695+        # That way is:
12696+        # 0: __init__
12697+        # 1: write blocks and salts
12698+        # 2: Write the encrypted private key
12699+        # 3: Write the block hashes
12700+        # 4: Write the share hashes
12701+        # 5: Write the root hash and salt hash
12702+        # 6: Write the signature and verification key
12703+        # 7: Write the file.
12704+        #
12705+        # Some of these can be performed out-of-order, and some can't.
12706+        # The dependencies that I want to test here are:
12707+        #  - Private key before block hashes
12708+        #  - share hashes and block hashes before root hash
12709+        #  - root hash before signature
12710+        #  - signature before verification key
12711+        mw0 = self._make_new_mw("si0", 0)
12712+        # Write some shares
12713+        d = defer.succeed(None)
12714+        for i in xrange(6):
12715+            d.addCallback(lambda ignored, i=i:
12716+                mw0.put_block(self.block, i, self.salt))
12717+
12718+        # Try to write the share hash chain without writing the
12719+        # encrypted private key
12720+        d.addCallback(lambda ignored:
12721+            self.shouldFail(LayoutInvalid, "share hash chain before "
12722+                                           "private key",
12723+                            None,
12724+                            mw0.put_sharehashes, self.share_hash_chain))
12725+        # Write the private key.
12726+        d.addCallback(lambda ignored:
12727+            mw0.put_encprivkey(self.encprivkey))
12728+
12729+        # Now write the block hashes and try again
12730+        d.addCallback(lambda ignored:
12731+            mw0.put_blockhashes(self.block_hash_tree))
12732+
12733+        # We haven't yet put the root hash on the share, so we shouldn't
12734+        # be able to sign it.
12735+        d.addCallback(lambda ignored:
12736+            self.shouldFail(LayoutInvalid, "signature before root hash",
12737+                            None, mw0.put_signature, self.signature))
12738+
12739+        d.addCallback(lambda ignored:
12740+            self.failUnlessRaises(LayoutInvalid, mw0.get_signable))
12741+
12742+        # ..and, since that fails, we also shouldn't be able to put the
12743+        # verification key.
12744+        d.addCallback(lambda ignored:
12745+            self.shouldFail(LayoutInvalid, "key before signature",
12746+                            None, mw0.put_verification_key,
12747+                            self.verification_key))
12748+
12749+        # Now write the share hashes.
12750+        d.addCallback(lambda ignored:
12751+            mw0.put_sharehashes(self.share_hash_chain))
12752+        # We should be able to write the root hash now too
12753+        d.addCallback(lambda ignored:
12754+            mw0.put_root_hash(self.root_hash))
12755+
12756+        # We should still be unable to put the verification key
12757+        d.addCallback(lambda ignored:
12758+            self.shouldFail(LayoutInvalid, "key before signature",
12759+                            None, mw0.put_verification_key,
12760+                            self.verification_key))
12761+
12762+        d.addCallback(lambda ignored:
12763+            mw0.put_signature(self.signature))
12764+
12765+        # We shouldn't be able to write the offsets to the remote server
12766+        # until the offset table is finished; IOW, until we have written
12767+        # the verification key.
12768+        d.addCallback(lambda ignored:
12769+            self.shouldFail(LayoutInvalid, "offsets before verification key",
12770+                            None,
12771+                            mw0.finish_publishing))
12772+
12773+        d.addCallback(lambda ignored:
12774+            mw0.put_verification_key(self.verification_key))
12775+        return d
12776+
12777+
12778+    def test_end_to_end(self):
12779+        mw = self._make_new_mw("si1", 0)
12780+        # Write a share using the mutable writer, and make sure that the
12781+        # reader knows how to read everything back to us.
12782+        d = defer.succeed(None)
12783+        for i in xrange(6):
12784+            d.addCallback(lambda ignored, i=i:
12785+                mw.put_block(self.block, i, self.salt))
12786+        d.addCallback(lambda ignored:
12787+            mw.put_encprivkey(self.encprivkey))
12788+        d.addCallback(lambda ignored:
12789+            mw.put_blockhashes(self.block_hash_tree))
12790+        d.addCallback(lambda ignored:
12791+            mw.put_sharehashes(self.share_hash_chain))
12792+        d.addCallback(lambda ignored:
12793+            mw.put_root_hash(self.root_hash))
12794+        d.addCallback(lambda ignored:
12795+            mw.put_signature(self.signature))
12796+        d.addCallback(lambda ignored:
12797+            mw.put_verification_key(self.verification_key))
12798+        d.addCallback(lambda ignored:
12799+            mw.finish_publishing())
12800+
12801+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12802+        def _check_block_and_salt((block, salt)):
12803+            self.failUnlessEqual(block, self.block)
12804+            self.failUnlessEqual(salt, self.salt)
12805+
12806+        for i in xrange(6):
12807+            d.addCallback(lambda ignored, i=i:
12808+                mr.get_block_and_salt(i))
12809+            d.addCallback(_check_block_and_salt)
12810+
12811+        d.addCallback(lambda ignored:
12812+            mr.get_encprivkey())
12813+        d.addCallback(lambda encprivkey:
12814+            self.failUnlessEqual(self.encprivkey, encprivkey))
12815+
12816+        d.addCallback(lambda ignored:
12817+            mr.get_blockhashes())
12818+        d.addCallback(lambda blockhashes:
12819+            self.failUnlessEqual(self.block_hash_tree, blockhashes))
12820+
12821+        d.addCallback(lambda ignored:
12822+            mr.get_sharehashes())
12823+        d.addCallback(lambda sharehashes:
12824+            self.failUnlessEqual(self.share_hash_chain, sharehashes))
12825+
12826+        d.addCallback(lambda ignored:
12827+            mr.get_signature())
12828+        d.addCallback(lambda signature:
12829+            self.failUnlessEqual(signature, self.signature))
12830+
12831+        d.addCallback(lambda ignored:
12832+            mr.get_verification_key())
12833+        d.addCallback(lambda verification_key:
12834+            self.failUnlessEqual(verification_key, self.verification_key))
12835+
12836+        d.addCallback(lambda ignored:
12837+            mr.get_seqnum())
12838+        d.addCallback(lambda seqnum:
12839+            self.failUnlessEqual(seqnum, 0))
12840+
12841+        d.addCallback(lambda ignored:
12842+            mr.get_root_hash())
12843+        d.addCallback(lambda root_hash:
12844+            self.failUnlessEqual(self.root_hash, root_hash))
12845+
12846+        d.addCallback(lambda ignored:
12847+            mr.get_encoding_parameters())
12848+        def _check_encoding_parameters((k, n, segsize, datalen)):
12849+            self.failUnlessEqual(k, 3)
12850+            self.failUnlessEqual(n, 10)
12851+            self.failUnlessEqual(segsize, 6)
12852+            self.failUnlessEqual(datalen, 36)
12853+        d.addCallback(_check_encoding_parameters)
12854+
12855+        d.addCallback(lambda ignored:
12856+            mr.get_checkstring())
12857+        d.addCallback(lambda checkstring:
12858+            self.failUnlessEqual(checkstring, mw.get_checkstring()))
12859+        return d
12860+
12861+
12862+    def test_is_sdmf(self):
12863+        # The MDMFSlotReadProxy should also know how to read SDMF files,
12864+        # since it will encounter them on the grid. Callers use the
12865+        # is_sdmf method to test this.
12866+        self.write_sdmf_share_to_server("si1")
12867+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12868+        d = mr.is_sdmf()
12869+        d.addCallback(lambda issdmf:
12870+            self.failUnless(issdmf))
12871+        return d
12872+
12873+
12874+    def test_reads_sdmf(self):
12875+        # The slot read proxy should, naturally, know how to tell us
12876+        # about data in the SDMF format
12877+        self.write_sdmf_share_to_server("si1")
12878+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12879+        d = defer.succeed(None)
12880+        d.addCallback(lambda ignored:
12881+            mr.is_sdmf())
12882+        d.addCallback(lambda issdmf:
12883+            self.failUnless(issdmf))
12884+
12885+        # What do we need to read?
12886+        #  - The sharedata
12887+        #  - The salt
12888+        d.addCallback(lambda ignored:
12889+            mr.get_block_and_salt(0))
12890+        def _check_block_and_salt(results):
12891+            block, salt = results
12892+            # Our original file is 36 bytes long. Then each share is 12
12893+            # bytes in size. The share is composed entirely of the
12894+            # letter a. self.block contains 2 a's, so 6 * self.block is
12895+            # what we are looking for.
12896+            self.failUnlessEqual(block, self.block * 6)
12897+            self.failUnlessEqual(salt, self.salt)
12898+        d.addCallback(_check_block_and_salt)
12899+
12900+        #  - The blockhashes
12901+        d.addCallback(lambda ignored:
12902+            mr.get_blockhashes())
12903+        d.addCallback(lambda blockhashes:
12904+            self.failUnlessEqual(self.block_hash_tree,
12905+                                 blockhashes,
12906+                                 blockhashes))
12907+        #  - The sharehashes
12908+        d.addCallback(lambda ignored:
12909+            mr.get_sharehashes())
12910+        d.addCallback(lambda sharehashes:
12911+            self.failUnlessEqual(self.share_hash_chain,
12912+                                 sharehashes))
12913+        #  - The keys
12914+        d.addCallback(lambda ignored:
12915+            mr.get_encprivkey())
12916+        d.addCallback(lambda encprivkey:
12917+            self.failUnlessEqual(encprivkey, self.encprivkey, encprivkey))
12918+        d.addCallback(lambda ignored:
12919+            mr.get_verification_key())
12920+        d.addCallback(lambda verification_key:
12921+            self.failUnlessEqual(verification_key,
12922+                                 self.verification_key,
12923+                                 verification_key))
12924+        #  - The signature
12925+        d.addCallback(lambda ignored:
12926+            mr.get_signature())
12927+        d.addCallback(lambda signature:
12928+            self.failUnlessEqual(signature, self.signature, signature))
12929+
12930+        #  - The sequence number
12931+        d.addCallback(lambda ignored:
12932+            mr.get_seqnum())
12933+        d.addCallback(lambda seqnum:
12934+            self.failUnlessEqual(seqnum, 0, seqnum))
12935+
12936+        #  - The root hash
12937+        d.addCallback(lambda ignored:
12938+            mr.get_root_hash())
12939+        d.addCallback(lambda root_hash:
12940+            self.failUnlessEqual(root_hash, self.root_hash, root_hash))
12941+        return d
12942+
12943+
12944+    def test_only_reads_one_segment_sdmf(self):
12945+        # SDMF shares have only one segment, so it doesn't make sense to
12946+        # read more segments than that. The reader should know this and
12947+        # complain if we try to do that.
12948+        self.write_sdmf_share_to_server("si1")
12949+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12950+        d = defer.succeed(None)
12951+        d.addCallback(lambda ignored:
12952+            mr.is_sdmf())
12953+        d.addCallback(lambda issdmf:
12954+            self.failUnless(issdmf))
12955+        d.addCallback(lambda ignored:
12956+            self.shouldFail(LayoutInvalid, "test bad segment",
12957+                            None,
12958+                            mr.get_block_and_salt, 1))
12959+        return d
12960+
12961+
12962+    def test_read_with_prefetched_mdmf_data(self):
12963+        # The MDMFSlotReadProxy will prefill certain fields if you pass
12964+        # it data that you have already fetched. This is useful for
12965+        # cases like the Servermap, which prefetches ~2kb of data while
12966+        # finding out which shares are on the remote peer so that it
12967+        # doesn't waste round trips.
12968+        mdmf_data = self.build_test_mdmf_share()
12969+        self.write_test_share_to_server("si1")
12970+        def _make_mr(ignored, length):
12971+            mr = MDMFSlotReadProxy(self.rref, "si1", 0, mdmf_data[:length])
12972+            return mr
12973+
12974+        d = defer.succeed(None)
12975+        # This should be enough to fill in both the encoding parameters
12976+        # and the table of offsets, which will complete the version
12977+        # information tuple.
12978+        d.addCallback(_make_mr, 123)
12979+        d.addCallback(lambda mr:
12980+            mr.get_verinfo())
12981+        def _check_verinfo(verinfo):
12982+            self.failUnless(verinfo)
12983+            self.failUnlessEqual(len(verinfo), 9)
12984+            (seqnum,
12985+             root_hash,
12986+             salt_hash,
12987+             segsize,
12988+             datalen,
12989+             k,
12990+             n,
12991+             prefix,
12992+             offsets) = verinfo
12993+            self.failUnlessEqual(seqnum, 0)
12994+            self.failUnlessEqual(root_hash, self.root_hash)
12995+            self.failUnlessEqual(segsize, 6)
12996+            self.failUnlessEqual(datalen, 36)
12997+            self.failUnlessEqual(k, 3)
12998+            self.failUnlessEqual(n, 10)
12999+            expected_prefix = struct.pack(MDMFSIGNABLEHEADER,
13000+                                          1,
13001+                                          seqnum,
13002+                                          root_hash,
13003+                                          k,
13004+                                          n,
13005+                                          segsize,
13006+                                          datalen)
13007+            self.failUnlessEqual(expected_prefix, prefix)
13008+            self.failUnlessEqual(self.rref.read_count, 0)
13009+        d.addCallback(_check_verinfo)
13010+        # This is not enough data to read a block and a salt, so the
13011+        # wrapper should attempt to read this from the remote server.
13012+        d.addCallback(_make_mr, 123)
13013+        d.addCallback(lambda mr:
13014+            mr.get_block_and_salt(0))
13015+        def _check_block_and_salt((block, salt)):
13016+            self.failUnlessEqual(block, self.block)
13017+            self.failUnlessEqual(salt, self.salt)
13018+            self.failUnlessEqual(self.rref.read_count, 1)
13019+        # This should be enough data to read one block.
13020+        d.addCallback(_make_mr, 123 + PRIVATE_KEY_SIZE + SIGNATURE_SIZE + VERIFICATION_KEY_SIZE + SHARE_HASH_CHAIN_SIZE + 140)
13021+        d.addCallback(lambda mr:
13022+            mr.get_block_and_salt(0))
13023+        d.addCallback(_check_block_and_salt)
13024+        return d
13025+
13026+
13027+    def test_read_with_prefetched_sdmf_data(self):
13028+        sdmf_data = self.build_test_sdmf_share()
13029+        self.write_sdmf_share_to_server("si1")
13030+        def _make_mr(ignored, length):
13031+            mr = MDMFSlotReadProxy(self.rref, "si1", 0, sdmf_data[:length])
13032+            return mr
13033+
13034+        d = defer.succeed(None)
13035+        # This should be enough to get us the encoding parameters,
13036+        # offset table, and everything else we need to build a verinfo
13037+        # string.
13038+        d.addCallback(_make_mr, 123)
13039+        d.addCallback(lambda mr:
13040+            mr.get_verinfo())
13041+        def _check_verinfo(verinfo):
13042+            self.failUnless(verinfo)
13043+            self.failUnlessEqual(len(verinfo), 9)
13044+            (seqnum,
13045+             root_hash,
13046+             salt,
13047+             segsize,
13048+             datalen,
13049+             k,
13050+             n,
13051+             prefix,
13052+             offsets) = verinfo
13053+            self.failUnlessEqual(seqnum, 0)
13054+            self.failUnlessEqual(root_hash, self.root_hash)
13055+            self.failUnlessEqual(salt, self.salt)
13056+            self.failUnlessEqual(segsize, 36)
13057+            self.failUnlessEqual(datalen, 36)
13058+            self.failUnlessEqual(k, 3)
13059+            self.failUnlessEqual(n, 10)
13060+            expected_prefix = struct.pack(SIGNED_PREFIX,
13061+                                          0,
13062+                                          seqnum,
13063+                                          root_hash,
13064+                                          salt,
13065+                                          k,
13066+                                          n,
13067+                                          segsize,
13068+                                          datalen)
13069+            self.failUnlessEqual(expected_prefix, prefix)
13070+            self.failUnlessEqual(self.rref.read_count, 0)
13071+        d.addCallback(_check_verinfo)
13072+        # This shouldn't be enough to read any share data.
13073+        d.addCallback(_make_mr, 123)
13074+        d.addCallback(lambda mr:
13075+            mr.get_block_and_salt(0))
13076+        def _check_block_and_salt((block, salt)):
13077+            self.failUnlessEqual(block, self.block * 6)
13078+            self.failUnlessEqual(salt, self.salt)
13079+            # TODO: Fix the read routine so that it reads only the data
13080+            #       that it has cached if it can't read all of it.
13081+            self.failUnlessEqual(self.rref.read_count, 2)
13082+
13083+        # This should be enough to read share data.
13084+        d.addCallback(_make_mr, self.offsets['share_data'])
13085+        d.addCallback(lambda mr:
13086+            mr.get_block_and_salt(0))
13087+        d.addCallback(_check_block_and_salt)
13088+        return d
13089+
13090+
13091+    def test_read_with_empty_mdmf_file(self):
13092+        # Some tests upload a file with no contents to test things
13093+        # unrelated to the actual handling of the content of the file.
13094+        # The reader should behave intelligently in these cases.
13095+        self.write_test_share_to_server("si1", empty=True)
13096+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
13097+        # We should be able to get the encoding parameters, and they
13098+        # should be correct.
13099+        d = defer.succeed(None)
13100+        d.addCallback(lambda ignored:
13101+            mr.get_encoding_parameters())
13102+        def _check_encoding_parameters(params):
13103+            self.failUnlessEqual(len(params), 4)
13104+            k, n, segsize, datalen = params
13105+            self.failUnlessEqual(k, 3)
13106+            self.failUnlessEqual(n, 10)
13107+            self.failUnlessEqual(segsize, 0)
13108+            self.failUnlessEqual(datalen, 0)
13109+        d.addCallback(_check_encoding_parameters)
13110+
13111+        # We should not be able to fetch a block, since there are no
13112+        # blocks to fetch
13113+        d.addCallback(lambda ignored:
13114+            self.shouldFail(LayoutInvalid, "get block on empty file",
13115+                            None,
13116+                            mr.get_block_and_salt, 0))
13117+        return d
13118+
13119+
13120+    def test_read_with_empty_sdmf_file(self):
13121+        self.write_sdmf_share_to_server("si1", empty=True)
13122+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
13123+        # We should be able to get the encoding parameters, and they
13124+        # should be correct
13125+        d = defer.succeed(None)
13126+        d.addCallback(lambda ignored:
13127+            mr.get_encoding_parameters())
13128+        def _check_encoding_parameters(params):
13129+            self.failUnlessEqual(len(params), 4)
13130+            k, n, segsize, datalen = params
13131+            self.failUnlessEqual(k, 3)
13132+            self.failUnlessEqual(n, 10)
13133+            self.failUnlessEqual(segsize, 0)
13134+            self.failUnlessEqual(datalen, 0)
13135+        d.addCallback(_check_encoding_parameters)
13136+
13137+        # It does not make sense to get a block in this format, so we
13138+        # should not be able to.
13139+        d.addCallback(lambda ignored:
13140+            self.shouldFail(LayoutInvalid, "get block on an empty file",
13141+                            None,
13142+                            mr.get_block_and_salt, 0))
13143+        return d
13144+
13145+
13146+    def test_verinfo_with_sdmf_file(self):
13147+        self.write_sdmf_share_to_server("si1")
13148+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
13149+        # We should be able to get the version information.
13150+        d = defer.succeed(None)
13151+        d.addCallback(lambda ignored:
13152+            mr.get_verinfo())
13153+        def _check_verinfo(verinfo):
13154+            self.failUnless(verinfo)
13155+            self.failUnlessEqual(len(verinfo), 9)
13156+            (seqnum,
13157+             root_hash,
13158+             salt,
13159+             segsize,
13160+             datalen,
13161+             k,
13162+             n,
13163+             prefix,
13164+             offsets) = verinfo
13165+            self.failUnlessEqual(seqnum, 0)
13166+            self.failUnlessEqual(root_hash, self.root_hash)
13167+            self.failUnlessEqual(salt, self.salt)
13168+            self.failUnlessEqual(segsize, 36)
13169+            self.failUnlessEqual(datalen, 36)
13170+            self.failUnlessEqual(k, 3)
13171+            self.failUnlessEqual(n, 10)
13172+            expected_prefix = struct.pack(">BQ32s16s BBQQ",
13173+                                          0,
13174+                                          seqnum,
13175+                                          root_hash,
13176+                                          salt,
13177+                                          k,
13178+                                          n,
13179+                                          segsize,
13180+                                          datalen)
13181+            self.failUnlessEqual(prefix, expected_prefix)
13182+            self.failUnlessEqual(offsets, self.offsets)
13183+        d.addCallback(_check_verinfo)
13184+        return d
13185+
13186+
13187+    def test_verinfo_with_mdmf_file(self):
13188+        self.write_test_share_to_server("si1")
13189+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
13190+        d = defer.succeed(None)
13191+        d.addCallback(lambda ignored:
13192+            mr.get_verinfo())
13193+        def _check_verinfo(verinfo):
13194+            self.failUnless(verinfo)
13195+            self.failUnlessEqual(len(verinfo), 9)
13196+            (seqnum,
13197+             root_hash,
13198+             IV,
13199+             segsize,
13200+             datalen,
13201+             k,
13202+             n,
13203+             prefix,
13204+             offsets) = verinfo
13205+            self.failUnlessEqual(seqnum, 0)
13206+            self.failUnlessEqual(root_hash, self.root_hash)
13207+            self.failIf(IV)
13208+            self.failUnlessEqual(segsize, 6)
13209+            self.failUnlessEqual(datalen, 36)
13210+            self.failUnlessEqual(k, 3)
13211+            self.failUnlessEqual(n, 10)
13212+            expected_prefix = struct.pack(">BQ32s BBQQ",
13213+                                          1,
13214+                                          seqnum,
13215+                                          root_hash,
13216+                                          k,
13217+                                          n,
13218+                                          segsize,
13219+                                          datalen)
13220+            self.failUnlessEqual(prefix, expected_prefix)
13221+            self.failUnlessEqual(offsets, self.offsets)
13222+        d.addCallback(_check_verinfo)
13223+        return d
13224+
13225+
13226+    def test_reader_queue(self):
13227+        self.write_test_share_to_server('si1')
13228+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
13229+        d1 = mr.get_block_and_salt(0, queue=True)
13230+        d2 = mr.get_blockhashes(queue=True)
13231+        d3 = mr.get_sharehashes(queue=True)
13232+        d4 = mr.get_signature(queue=True)
13233+        d5 = mr.get_verification_key(queue=True)
13234+        dl = defer.DeferredList([d1, d2, d3, d4, d5])
13235+        mr.flush()
13236+        def _print(results):
13237+            self.failUnlessEqual(len(results), 5)
13238+            # We have one read for version information and offsets, and
13239+            # one for everything else.
13240+            self.failUnlessEqual(self.rref.read_count, 2)
13241+            block, salt = results[0][1] # each result is a (success,
13242+                                           # value) pair; [0] is a boolean
13243+                                           # that says whether or not the
13244+                                           # operation worked.
13244+            self.failUnlessEqual(self.block, block)
13245+            self.failUnlessEqual(self.salt, salt)
13246+
13247+            blockhashes = results[1][1]
13248+            self.failUnlessEqual(self.block_hash_tree, blockhashes)
13249+
13250+            sharehashes = results[2][1]
13251+            self.failUnlessEqual(self.share_hash_chain, sharehashes)
13252+
13253+            signature = results[3][1]
13254+            self.failUnlessEqual(self.signature, signature)
13255+
13256+            verification_key = results[4][1]
13257+            self.failUnlessEqual(self.verification_key, verification_key)
13258+        dl.addCallback(_print)
13259+        return dl
13260+
13261+
13262+    def test_sdmf_writer(self):
13263+        # Go through the motions of writing an SDMF share to the storage
13264+        # server. Then read the storage server to see that the share got
13265+        # written in the way that we think it should have.
13266+
13267+        # We do this first so that the necessary instance variables get
13268+        # set the way we want them for the tests below.
13269+        data = self.build_test_sdmf_share()
13270+        sdmfr = SDMFSlotWriteProxy(0,
13271+                                   self.rref,
13272+                                   "si1",
13273+                                   self.secrets,
13274+                                   0, 3, 10, 36, 36)
13275+        # Put the block and salt.
13276+        sdmfr.put_block(self.blockdata, 0, self.salt)
13277+
13278+        # Put the encprivkey
13279+        sdmfr.put_encprivkey(self.encprivkey)
13280+
13281+        # Put the block and share hash chains
13282+        sdmfr.put_blockhashes(self.block_hash_tree)
13283+        sdmfr.put_sharehashes(self.share_hash_chain)
13284+        sdmfr.put_root_hash(self.root_hash)
13285+
13286+        # Put the signature
13287+        sdmfr.put_signature(self.signature)
13288+
13289+        # Put the verification key
13290+        sdmfr.put_verification_key(self.verification_key)
13291+
13292+        # Now check to make sure that nothing has been written yet.
13293+        self.failUnlessEqual(self.rref.write_count, 0)
13294+
13295+        # Now finish publishing
13296+        d = sdmfr.finish_publishing()
13297+        def _then(ignored):
13298+            self.failUnlessEqual(self.rref.write_count, 1)
13299+            read = self.ss.remote_slot_readv
13300+            self.failUnlessEqual(read("si1", [0], [(0, len(data))]),
13301+                                 {0: [data]})
13302+        d.addCallback(_then)
13303+        return d
13304+
13305+
13306+    def test_sdmf_writer_preexisting_share(self):
13307+        data = self.build_test_sdmf_share()
13308+        self.write_sdmf_share_to_server("si1")
13309+
13310+        # Now there is a share on the storage server. To successfully
13311+        # write, we need to set the checkstring correctly. When we
13312+        # don't, no write should occur.
13313+        sdmfw = SDMFSlotWriteProxy(0,
13314+                                   self.rref,
13315+                                   "si1",
13316+                                   self.secrets,
13317+                                   1, 3, 10, 36, 36)
13318+        sdmfw.put_block(self.blockdata, 0, self.salt)
13319+
13320+        # Put the encprivkey
13321+        sdmfw.put_encprivkey(self.encprivkey)
13322+
13323+        # Put the block and share hash chains
13324+        sdmfw.put_blockhashes(self.block_hash_tree)
13325+        sdmfw.put_sharehashes(self.share_hash_chain)
13326+
13327+        # Put the root hash
13328+        sdmfw.put_root_hash(self.root_hash)
13329+
13330+        # Put the signature
13331+        sdmfw.put_signature(self.signature)
13332+
13333+        # Put the verification key
13334+        sdmfw.put_verification_key(self.verification_key)
13335+
13336+        # We shouldn't have a checkstring yet
13337+        self.failUnlessEqual(sdmfw.get_checkstring(), "")
13338+
13339+        d = sdmfw.finish_publishing()
13340+        def _then(results):
13341+            self.failIf(results[0])
13342+            # this is the correct checkstring
13343+            self._expected_checkstring = results[1][0][0]
13344+            return self._expected_checkstring
13345+
13346+        d.addCallback(_then)
13347+        d.addCallback(sdmfw.set_checkstring)
13348+        d.addCallback(lambda ignored:
13349+            sdmfw.get_checkstring())
13350+        d.addCallback(lambda checkstring:
13351+            self.failUnlessEqual(checkstring, self._expected_checkstring))
13352+        d.addCallback(lambda ignored:
13353+            sdmfw.finish_publishing())
13354+        def _then_again(results):
13355+            self.failUnless(results[0])
13356+            read = self.ss.remote_slot_readv
13357+            self.failUnlessEqual(read("si1", [0], [(1, 8)]),
13358+                                 {0: [struct.pack(">Q", 1)]})
13359+            self.failUnlessEqual(read("si1", [0], [(9, len(data) - 9)]),
13360+                                 {0: [data[9:]]})
13361+        d.addCallback(_then_again)
13362+        return d
13363+
13364+
13365 class Stats(unittest.TestCase):
13366 
13367     def setUp(self):
13368}
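The readv assertions in the hunk above check the first bytes of the published SDMF share: offset 0 holds a one-byte structural version, and bytes [1:9] hold the 8-byte big-endian sequence number (1 for a fresh write). A minimal standalone sketch of that header layout (the version byte value is assumed from the SDMF share format, not taken from this patch):

```python
import struct

# Sketch of the SDMF share header the test's readv calls inspect:
# one version byte, then an 8-byte big-endian sequence number.
version_byte = struct.pack(">B", 0)   # SDMF structural version (assumed 0)
seqnum = struct.pack(">Q", 1)         # sequence number of a fresh write
header = version_byte + seqnum

# Offset 1, length 8 is exactly what read("si1", [0], [(1, 8)]) fetches.
assert header[1:9] == struct.pack(">Q", 1)
assert struct.unpack(">Q", header[1:9])[0] == 1
```

This is why the test compares the readv result against `struct.pack(">Q", 1)` rather than a literal string: the seqnum is stored in network byte order.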
13369[frontends/sftpd: Resolve incompatibilities between SFTP frontend and MDMF changes
13370Kevan Carstensen <kevan@isnotajoke.com>**20110802021207
13371 Ignore-this: 5e0f6e961048f71d4eed6d30210ffd2e
13372] {
13373hunk ./src/allmydata/frontends/sftpd.py 33
13374 from allmydata.interfaces import IFileNode, IDirectoryNode, ExistingChildError, \
13375      NoSuchChildError, ChildOfWrongTypeError
13376 from allmydata.mutable.common import NotWriteableError
13377+from allmydata.mutable.publish import MutableFileHandle
13378 from allmydata.immutable.upload import FileHandle
13379 from allmydata.dirnode import update_metadata
13380 from allmydata.util.fileutil import EncryptedTemporaryFile
13381hunk ./src/allmydata/frontends/sftpd.py 667
13382         else:
13383             assert IFileNode.providedBy(filenode), filenode
13384 
13385-            if filenode.is_mutable():
13386-                self.async.addCallback(lambda ign: filenode.download_best_version())
13387-                def _downloaded(data):
13388-                    self.consumer = OverwriteableFileConsumer(len(data), tempfile_maker)
13389-                    self.consumer.write(data)
13390-                    self.consumer.finish()
13391-                    return None
13392-                self.async.addCallback(_downloaded)
13393-            else:
13394-                download_size = filenode.get_size()
13395-                assert download_size is not None, "download_size is None"
13396+            self.async.addCallback(lambda ignored: filenode.get_best_readable_version())
13397+
13398+            def _read(version):
13399+                if noisy: self.log("_read", level=NOISY)
13400+                download_size = version.get_size()
13401+                assert download_size is not None
13402+
13403                 self.consumer = OverwriteableFileConsumer(download_size, tempfile_maker)
13404hunk ./src/allmydata/frontends/sftpd.py 675
13405-                def _read(ign):
13406-                    if noisy: self.log("_read immutable", level=NOISY)
13407-                    filenode.read(self.consumer, 0, None)
13408-                self.async.addCallback(_read)
13409+
13410+                version.read(self.consumer, 0, None)
13411+            self.async.addCallback(_read)
13412 
13413         eventually(self.async.callback, None)
13414 
13415hunk ./src/allmydata/frontends/sftpd.py 821
13416                     assert parent and childname, (parent, childname, self.metadata)
13417                     d2.addCallback(lambda ign: parent.set_metadata_for(childname, self.metadata))
13418 
13419-                d2.addCallback(lambda ign: self.consumer.get_current_size())
13420-                d2.addCallback(lambda size: self.consumer.read(0, size))
13421-                d2.addCallback(lambda new_contents: self.filenode.overwrite(new_contents))
13422+                d2.addCallback(lambda ign: self.filenode.overwrite(MutableFileHandle(self.consumer.get_file())))
13423             else:
13424                 def _add_file(ign):
13425                     self.log("_add_file childname=%r" % (childname,), level=OPERATIONAL)
13426hunk ./src/allmydata/test/test_sftp.py 32
13427 
13428 from allmydata.util.consumer import download_to_data
13429 from allmydata.immutable import upload
13430+from allmydata.mutable import publish
13431 from allmydata.test.no_network import GridTestMixin
13432 from allmydata.test.common import ShouldFailMixin
13433 from allmydata.test.common_util import ReallyEqualMixin
13434hunk ./src/allmydata/test/test_sftp.py 80
13435         return d
13436 
13437     def _set_up_tree(self):
13438-        d = self.client.create_mutable_file("mutable file contents")
13439+        u = publish.MutableData("mutable file contents")
13440+        d = self.client.create_mutable_file(u)
13441         d.addCallback(lambda node: self.root.set_node(u"mutable", node))
13442         def _created_mutable(n):
13443             self.mutable = n
13444}
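The sftpd hunk above replaces "read the whole consumer into a string, then overwrite" with handing the filenode a file-like handle (`MutableFileHandle(self.consumer.get_file())`) that the publisher can read one segment at a time. A minimal illustrative stand-in for that pattern (the real `MutableFileHandle` API may differ; the class and method names here are hypothetical):

```python
import io

class FileHandleSource:
    """Wrap a seekable file-like object so an uploader can pull data
    in segments instead of materializing the whole contents at once."""
    def __init__(self, filehandle):
        self._f = filehandle

    def get_size(self):
        # Measure the file without disturbing the caller's read position.
        pos = self._f.tell()
        self._f.seek(0, io.SEEK_END)
        size = self._f.tell()
        self._f.seek(pos)
        return size

    def read(self, length):
        # Callers request one segment at a time.
        return self._f.read(length)

f = io.BytesIO(b"x" * 10)
src = FileHandleSource(f)
assert src.get_size() == 10
assert src.read(4) == b"xxxx"
assert src.read(100) == b"xxxxxx"
```

Streaming through a handle is what lets multi-segment MDMF uploads avoid holding the entire file in memory, which the old string-based overwrite required.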
13445[uri: add MDMF and MDMF directory caps, add extension hint support
13446Kevan Carstensen <kevan@isnotajoke.com>**20110802021233
13447 Ignore-this: 525f98d5dcb7a6afad601c27dba59e84
13448] {
13449hunk ./src/allmydata/test/test_cli.py 1238
13450         d.addCallback(_check)
13451         return d
13452 
13453+    def _create_directory_structure(self):
13454+        # Create a simple directory structure that we can use for MDMF,
13455+        # SDMF, and immutable testing.
13456+        assert self.g
13457+
13458+        client = self.g.clients[0]
13459+        # Create a dirnode
13460+        d = client.create_dirnode()
13461+        def _got_rootnode(n):
13462+            # Add a few nodes.
13463+            self._dircap = n.get_uri()
13464+            nm = n._nodemaker
13465+            # The uploaders may run at the same time, so we need two
13466+            # MutableData instances or they'll fight over offsets &c and
13467+            # break.
13468+            mutable_data = MutableData("data" * 100000)
13469+            mutable_data2 = MutableData("data" * 100000)
13470+            # Add both kinds of mutable node.
13471+            d1 = nm.create_mutable_file(mutable_data,
13472+                                        version=MDMF_VERSION)
13473+            d2 = nm.create_mutable_file(mutable_data2,
13474+                                        version=SDMF_VERSION)
13475+            # Add an immutable node. We do this through the directory,
13476+            # with add_file.
13477+            immutable_data = upload.Data("immutable data" * 100000,
13478+                                         convergence="")
13479+            d3 = n.add_file(u"immutable", immutable_data)
13480+            ds = [d1, d2, d3]
13481+            dl = defer.DeferredList(ds)
13482+            def _made_files((r1, r2, r3)):
13483+                self.failUnless(r1[0])
13484+                self.failUnless(r2[0])
13485+                self.failUnless(r3[0])
13486+
13487+                # r1, r2, and r3 contain nodes.
13488+                mdmf_node = r1[1]
13489+                sdmf_node = r2[1]
13490+                imm_node = r3[1]
13491+
13492+                self._mdmf_uri = mdmf_node.get_uri()
13493+                self._mdmf_readonly_uri = mdmf_node.get_readonly_uri()
13494+                self._sdmf_uri = sdmf_node.get_uri()
13495+                self._sdmf_readonly_uri = sdmf_node.get_readonly_uri()
13496+                self._imm_uri = imm_node.get_uri()
13497+
13498+                d1 = n.set_node(u"mdmf", mdmf_node)
13499+                d2 = n.set_node(u"sdmf", sdmf_node)
13500+                return defer.DeferredList([d1, d2])
13501+            # We can now list the directory by listing self._dircap.
13502+            dl.addCallback(_made_files)
13503+            return dl
13504+        d.addCallback(_got_rootnode)
13505+        return d
13506+
13507+    def test_list_mdmf(self):
13508+        # 'tahoe ls' should include MDMF files.
13509+        self.basedir = "cli/List/list_mdmf"
13510+        self.set_up_grid()
13511+        d = self._create_directory_structure()
13512+        d.addCallback(lambda ignored:
13513+            self.do_cli("ls", self._dircap))
13514+        def _got_ls((rc, out, err)):
13515+            self.failUnlessEqual(rc, 0)
13516+            self.failUnlessEqual(err, "")
13517+            self.failUnlessIn("immutable", out)
13518+            self.failUnlessIn("mdmf", out)
13519+            self.failUnlessIn("sdmf", out)
13520+        d.addCallback(_got_ls)
13521+        return d
13522+
13523+    def test_list_mdmf_json(self):
13524+        # 'tahoe ls --json' should include MDMF caps in its
13525+        # output.
13526+        self.basedir = "cli/List/list_mdmf_json"
13527+        self.set_up_grid()
13528+        d = self._create_directory_structure()
13529+        d.addCallback(lambda ignored:
13530+            self.do_cli("ls", "--json", self._dircap))
13531+        def _got_json((rc, out, err)):
13532+            self.failUnlessEqual(rc, 0)
13533+            self.failUnlessEqual(err, "")
13534+            self.failUnlessIn(self._mdmf_uri, out)
13535+            self.failUnlessIn(self._mdmf_readonly_uri, out)
13536+            self.failUnlessIn(self._sdmf_uri, out)
13537+            self.failUnlessIn(self._sdmf_readonly_uri, out)
13538+            self.failUnlessIn(self._imm_uri, out)
13539+            self.failUnlessIn('"mutable-type": "sdmf"', out)
13540+            self.failUnlessIn('"mutable-type": "mdmf"', out)
13541+        d.addCallback(_got_json)
13542+        return d
13543+
13544 
13545 class Mv(GridTestMixin, CLITestMixin, unittest.TestCase):
13546     def test_mv_behavior(self):
13547hunk ./src/allmydata/test/test_uri.py 2
13548 
13549+import re
13550 from twisted.trial import unittest
13551 from allmydata import uri
13552 from allmydata.util import hashutil, base32
13553hunk ./src/allmydata/test/test_uri.py 259
13554         uri.CHKFileURI.init_from_string(fileURI)
13555 
13556 class Mutable(testutil.ReallyEqualMixin, unittest.TestCase):
13557-    def test_pack(self):
13558-        writekey = "\x01" * 16
13559-        fingerprint = "\x02" * 32
13560+    def setUp(self):
13561+        self.writekey = "\x01" * 16
13562+        self.fingerprint = "\x02" * 32
13563+        self.readkey = hashutil.ssk_readkey_hash(self.writekey)
13564+        self.storage_index = hashutil.ssk_storage_index_hash(self.readkey)
13565 
13566hunk ./src/allmydata/test/test_uri.py 265
13567-        u = uri.WriteableSSKFileURI(writekey, fingerprint)
13568-        self.failUnlessReallyEqual(u.writekey, writekey)
13569-        self.failUnlessReallyEqual(u.fingerprint, fingerprint)
13570+    def test_pack(self):
13571+        u = uri.WriteableSSKFileURI(self.writekey, self.fingerprint)
13572+        self.failUnlessReallyEqual(u.writekey, self.writekey)
13573+        self.failUnlessReallyEqual(u.fingerprint, self.fingerprint)
13574         self.failIf(u.is_readonly())
13575         self.failUnless(u.is_mutable())
13576         self.failUnless(IURI.providedBy(u))
13577hunk ./src/allmydata/test/test_uri.py 281
13578         self.failUnlessReallyEqual(u, u_h)
13579 
13580         u2 = uri.from_string(u.to_string())
13581-        self.failUnlessReallyEqual(u2.writekey, writekey)
13582-        self.failUnlessReallyEqual(u2.fingerprint, fingerprint)
13583+        self.failUnlessReallyEqual(u2.writekey, self.writekey)
13584+        self.failUnlessReallyEqual(u2.fingerprint, self.fingerprint)
13585         self.failIf(u2.is_readonly())
13586         self.failUnless(u2.is_mutable())
13587         self.failUnless(IURI.providedBy(u2))
13588hunk ./src/allmydata/test/test_uri.py 297
13589         self.failUnless(isinstance(u2imm, uri.UnknownURI), u2imm)
13590 
13591         u3 = u2.get_readonly()
13592-        readkey = hashutil.ssk_readkey_hash(writekey)
13593-        self.failUnlessReallyEqual(u3.fingerprint, fingerprint)
13594+        readkey = hashutil.ssk_readkey_hash(self.writekey)
13595+        self.failUnlessReallyEqual(u3.fingerprint, self.fingerprint)
13596         self.failUnlessReallyEqual(u3.readkey, readkey)
13597         self.failUnless(u3.is_readonly())
13598         self.failUnless(u3.is_mutable())
13599hunk ./src/allmydata/test/test_uri.py 317
13600         u3_h = uri.ReadonlySSKFileURI.init_from_human_encoding(he)
13601         self.failUnlessReallyEqual(u3, u3_h)
13602 
13603-        u4 = uri.ReadonlySSKFileURI(readkey, fingerprint)
13604-        self.failUnlessReallyEqual(u4.fingerprint, fingerprint)
13605+        u4 = uri.ReadonlySSKFileURI(readkey, self.fingerprint)
13606+        self.failUnlessReallyEqual(u4.fingerprint, self.fingerprint)
13607         self.failUnlessReallyEqual(u4.readkey, readkey)
13608         self.failUnless(u4.is_readonly())
13609         self.failUnless(u4.is_mutable())
13610hunk ./src/allmydata/test/test_uri.py 350
13611         self.failUnlessReallyEqual(u5, u5_h)
13612 
13613 
13614+    def test_writable_mdmf_cap(self):
13615+        u1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
13616+        cap = u1.to_string()
13617+        u = uri.WritableMDMFFileURI.init_from_string(cap)
13618+
13619+        self.failUnless(IMutableFileURI.providedBy(u))
13620+        self.failUnlessReallyEqual(u.fingerprint, self.fingerprint)
13621+        self.failUnlessReallyEqual(u.writekey, self.writekey)
13622+        self.failUnless(u.is_mutable())
13623+        self.failIf(u.is_readonly())
13624+        self.failUnlessEqual(cap, u.to_string())
13625+
13626+        # Now get a readonly cap from the writable cap, and test that it
13627+        # degrades gracefully.
13628+        ru = u.get_readonly()
13629+        self.failUnlessReallyEqual(self.readkey, ru.readkey)
13630+        self.failUnlessReallyEqual(self.fingerprint, ru.fingerprint)
13631+        self.failUnless(ru.is_mutable())
13632+        self.failUnless(ru.is_readonly())
13633+
13634+        # Now get a verifier cap.
13635+        vu = ru.get_verify_cap()
13636+        self.failUnlessReallyEqual(self.storage_index, vu.storage_index)
13637+        self.failUnlessReallyEqual(self.fingerprint, vu.fingerprint)
13638+        self.failUnless(IVerifierURI.providedBy(vu))
13639+
13640+    def test_readonly_mdmf_cap(self):
13641+        u1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
13642+        cap = u1.to_string()
13643+        u2 = uri.ReadonlyMDMFFileURI.init_from_string(cap)
13644+
13645+        self.failUnlessReallyEqual(u2.fingerprint, self.fingerprint)
13646+        self.failUnlessReallyEqual(u2.readkey, self.readkey)
13647+        self.failUnless(u2.is_readonly())
13648+        self.failUnless(u2.is_mutable())
13649+
13650+        vu = u2.get_verify_cap()
13651+        self.failUnlessEqual(vu.storage_index, self.storage_index)
13652+        self.failUnlessEqual(vu.fingerprint, self.fingerprint)
13653+
13654+    def test_create_writable_mdmf_cap_from_readcap(self):
13655+        # we shouldn't be able to create a writable MDMF cap given only a
13656+        # readcap.
13657+        u1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
13658+        cap = u1.to_string()
13659+        self.failUnlessRaises(uri.BadURIError,
13660+                              uri.WritableMDMFFileURI.init_from_string,
13661+                              cap)
13662+
13663+    def test_create_writable_mdmf_cap_from_verifycap(self):
13664+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
13665+        cap = u1.to_string()
13666+        self.failUnlessRaises(uri.BadURIError,
13667+                              uri.WritableMDMFFileURI.init_from_string,
13668+                              cap)
13669+
13670+    def test_create_readonly_mdmf_cap_from_verifycap(self):
13671+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
13672+        cap = u1.to_string()
13673+        self.failUnlessRaises(uri.BadURIError,
13674+                              uri.ReadonlyMDMFFileURI.init_from_string,
13675+                              cap)
13676+
13677+    def test_mdmf_verifier_cap(self):
13678+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
13679+        self.failUnless(u1.is_readonly())
13680+        self.failIf(u1.is_mutable())
13681+        self.failUnlessReallyEqual(self.storage_index, u1.storage_index)
13682+        self.failUnlessReallyEqual(self.fingerprint, u1.fingerprint)
13683+
13684+        cap = u1.to_string()
13685+        u2 = uri.MDMFVerifierURI.init_from_string(cap)
13686+        self.failUnless(u2.is_readonly())
13687+        self.failIf(u2.is_mutable())
13688+        self.failUnlessReallyEqual(self.storage_index, u2.storage_index)
13689+        self.failUnlessReallyEqual(self.fingerprint, u2.fingerprint)
13690+
13691+        u3 = u2.get_readonly()
13692+        self.failUnlessReallyEqual(u3, u2)
13693+
13694+        u4 = u2.get_verify_cap()
13695+        self.failUnlessReallyEqual(u4, u2)
13696+
13697+    def test_mdmf_cap_extra_information(self):
13698+        # MDMF caps can be arbitrarily extended after the fingerprint
13699+        # and key/storage index fields.
13700+        u1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
13701+        self.failUnlessEqual([], u1.get_extension_params())
13702+
13703+        cap = u1.to_string()
13704+        # Now let's append some fields. Say, 131073 (the segment size)
13705+        # and 3 (the "k" encoding parameter).
13706+        expected_extensions = []
13707+        for e in ('131073', '3'):
13708+            cap += (":%s" % e)
13709+            expected_extensions.append(e)
13710+
13711+            u2 = uri.WritableMDMFFileURI.init_from_string(cap)
13712+            self.failUnlessReallyEqual(self.writekey, u2.writekey)
13713+            self.failUnlessReallyEqual(self.fingerprint, u2.fingerprint)
13714+            self.failIf(u2.is_readonly())
13715+            self.failUnless(u2.is_mutable())
13716+
13717+            c2 = u2.to_string()
13718+            u2n = uri.WritableMDMFFileURI.init_from_string(c2)
13719+            self.failUnlessReallyEqual(u2, u2n)
13720+
13721+            # We should get the extra back when we ask for it.
13722+            self.failUnlessEqual(expected_extensions, u2.get_extension_params())
13723+
13724+            # These should be preserved through cap attenuation, too.
13725+            u3 = u2.get_readonly()
13726+            self.failUnlessReallyEqual(self.readkey, u3.readkey)
13727+            self.failUnlessReallyEqual(self.fingerprint, u3.fingerprint)
13728+            self.failUnless(u3.is_readonly())
13729+            self.failUnless(u3.is_mutable())
13730+            self.failUnlessEqual(expected_extensions, u3.get_extension_params())
13731+
13732+            c3 = u3.to_string()
13733+            u3n = uri.ReadonlyMDMFFileURI.init_from_string(c3)
13734+            self.failUnlessReallyEqual(u3, u3n)
13735+
13736+            u4 = u3.get_verify_cap()
13737+            self.failUnlessReallyEqual(self.storage_index, u4.storage_index)
13738+            self.failUnlessReallyEqual(self.fingerprint, u4.fingerprint)
13739+            self.failUnless(u4.is_readonly())
13740+            self.failIf(u4.is_mutable())
13741+
13742+            c4 = u4.to_string()
13743+            u4n = uri.MDMFVerifierURI.init_from_string(c4)
13744+            self.failUnlessReallyEqual(u4n, u4)
13745+
13746+            self.failUnlessEqual(expected_extensions, u4.get_extension_params())
13747+
13748+
13749+    def test_sdmf_cap_extra_information(self):
13750+        # For interface consistency, we define a method to get
13751+        # extensions for SDMF files as well. This method must always
13752+        # return no extensions, since SDMF files were not created with
13753+        # extensions and cannot be modified to include extensions
13754+        # without breaking older clients.
13755+        u1 = uri.WriteableSSKFileURI(self.writekey, self.fingerprint)
13756+        cap = u1.to_string()
13757+        u2 = uri.WriteableSSKFileURI.init_from_string(cap)
13758+        self.failUnlessEqual([], u2.get_extension_params())
13759+
13760+    def test_extension_character_range(self):
13761+        # As written now, we shouldn't put things other than numbers in
13762+        # the extension fields.
13763+        writecap = uri.WritableMDMFFileURI(self.writekey, self.fingerprint).to_string()
13764+        readcap  = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint).to_string()
13765+        vcap     = uri.MDMFVerifierURI(self.storage_index, self.fingerprint).to_string()
13766+        self.failUnlessRaises(uri.BadURIError,
13767+                              uri.WritableMDMFFileURI.init_from_string,
13768+                              ("%s:invalid" % writecap))
13769+        self.failUnlessRaises(uri.BadURIError,
13770+                              uri.ReadonlyMDMFFileURI.init_from_string,
13771+                              ("%s:invalid" % readcap))
13772+        self.failUnlessRaises(uri.BadURIError,
13773+                              uri.MDMFVerifierURI.init_from_string,
13774+                              ("%s:invalid" % vcap))
13775+
13776+
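The tests above exercise a convention where extension fields are appended to a serialized cap as `:`-separated numeric strings, and anything non-numeric is rejected with `BadURIError`. A minimal standalone sketch of that append/validate round-trip (these helper names are illustrative, not the real uri.py API):

```python
class BadURIError(Exception):
    pass

def add_extensions(cap, params):
    # Extensions ride on the end of the cap, ':'-separated.
    for p in params:
        if not p.isdigit():
            raise BadURIError("extension fields must be numeric: %r" % (p,))
        cap += ":%s" % p
    return cap

def split_extensions(cap, n_fixed_fields):
    # The first n_fixed_fields ':'-separated pieces are the cap proper;
    # anything after that is extension data, which must be numeric.
    pieces = cap.split(":")
    fixed, extra = pieces[:n_fixed_fields], pieces[n_fixed_fields:]
    for p in extra:
        if not p.isdigit():
            raise BadURIError("extension fields must be numeric: %r" % (p,))
    return ":".join(fixed), extra

cap = add_extensions("URI:MDMF:writekey:fingerprint", ["131073", "3"])
base, exts = split_extensions(cap, 4)
assert base == "URI:MDMF:writekey:fingerprint"
assert exts == ["131073", "3"]
```

Keeping extensions purely numeric is what makes `test_extension_character_range` pass: a suffix like `:invalid` cannot be mistaken for part of the cap body.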
13777+    def test_mdmf_valid_human_encoding(self):
13778+        # What's a human encoding? Well, it's of the form:
13779+        base = "https://127.0.0.1:3456/uri/"
13780+        # With a cap on the end. For each of the cap types, we need to
13781+        # test that a valid cap (with and without the traditional
13782+        # separators) is recognized and accepted by the classes.
13783+        w1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
13784+        w2 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint,
13785+                                     ['131073', '3'])
13786+        r1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
13787+        r2 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint,
13788+                                     ['131073', '3'])
13789+        v1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
13790+        v2 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint,
13791+                                 ['131073', '3'])
13792+
13793+        # These will yield six different caps.
13794+        for o in (w1, w2, r1, r2, v1, v2):
13795+            url = base + o.to_string()
13796+            o1 = o.__class__.init_from_human_encoding(url)
13797+            self.failUnlessReallyEqual(o1, o)
13798+
13799+            # Note that our cap will, by default, have : as separators.
13800+            # But it's expected that users from, e.g., the WUI, will
13801+            # have %3A as a separator. We need to make sure that the
13802+            # initialization routine handles that, too.
13803+            cap = o.to_string()
13804+            cap = re.sub(":", "%3A", cap)
13805+            url = base + cap
13806+            o2 = o.__class__.init_from_human_encoding(url)
13807+            self.failUnlessReallyEqual(o2, o)
13808+
13809+
13810+    def test_mdmf_human_encoding_invalid_base(self):
13811+        # A human encoding should end in /uri/ followed by a cap;
13812+        # this base does not:
13813+        base = "https://127.0.0.1:3456/foo/bar/bazuri/"
13814+        # For each of the cap types, we need to test that a cap
13815+        # appended to an invalid base is rejected by the classes.
13816+        w1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
13817+        w2 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint,
13818+                                     ['131073', '3'])
13819+        r1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
13820+        r2 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint,
13821+                                     ['131073', '3'])
13822+        v1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
13823+        v2 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint,
13824+                                 ['131073', '3'])
13825+
13826+        # These will yield six different caps.
13827+        for o in (w1, w2, r1, r2, v1, v2):
13828+            url = base + o.to_string()
13829+            self.failUnlessRaises(uri.BadURIError,
13830+                                  o.__class__.init_from_human_encoding,
13831+                                  url)
13832+
13833+    def test_mdmf_human_encoding_invalid_cap(self):
13834+        base = "https://127.0.0.1:3456/uri/"
13835+        # For each of the cap types, we need to test that a
13836+        # corrupted cap appended to a valid base is rejected
13837+        # by the classes.
13838+        w1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
13839+        w2 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint,
13840+                                     ['131073', '3'])
13841+        r1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
13842+        r2 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint,
13843+                                     ['131073', '3'])
13844+        v1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
13845+        v2 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint,
13846+                                 ['131073', '3'])
13847+
13848+        # These will yield six different caps.
13849+        for o in (w1, w2, r1, r2, v1, v2):
13850+            # not exhaustive, obviously...
13851+            url = base + o.to_string() + "foobarbaz"
13852+            url2 = base + "foobarbaz" + o.to_string()
13853+            url3 = base + o.to_string()[:25] + "foo" + o.to_string()[25:]
13854+            for u in (url, url2, url3):
13855+                self.failUnlessRaises(uri.BadURIError,
13856+                                      o.__class__.init_from_human_encoding,
13857+                                      u)
13858+
13859+    def test_mdmf_from_string(self):
13860+        # Make sure that the from_string utility function works with
13861+        # MDMF caps.
13862+        u1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
13863+        cap = u1.to_string()
13864+        self.failUnless(uri.is_uri(cap))
13865+        u2 = uri.from_string(cap)
13866+        self.failUnlessReallyEqual(u1, u2)
13867+        u3 = uri.from_string_mutable_filenode(cap)
13868+        self.failUnlessEqual(u3, u1)
13869+
13870+        # XXX: We should refactor the extension field into setUp
13871+        u1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint,
13872+                                     ['131073', '3'])
13873+        cap = u1.to_string()
13874+        self.failUnless(uri.is_uri(cap))
13875+        u2 = uri.from_string(cap)
13876+        self.failUnlessReallyEqual(u1, u2)
13877+        u3 = uri.from_string_mutable_filenode(cap)
13878+        self.failUnlessEqual(u3, u1)
13879+
13880+        u1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
13881+        cap = u1.to_string()
13882+        self.failUnless(uri.is_uri(cap))
13883+        u2 = uri.from_string(cap)
13884+        self.failUnlessReallyEqual(u1, u2)
13885+        u3 = uri.from_string_mutable_filenode(cap)
13886+        self.failUnlessEqual(u3, u1)
13887+
13888+        u1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint,
13889+                                     ['131073', '3'])
13890+        cap = u1.to_string()
13891+        self.failUnless(uri.is_uri(cap))
13892+        u2 = uri.from_string(cap)
13893+        self.failUnlessReallyEqual(u1, u2)
13894+        u3 = uri.from_string_mutable_filenode(cap)
13895+        self.failUnlessEqual(u3, u1)
13896+
13897+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
13898+        cap = u1.to_string()
13899+        self.failUnless(uri.is_uri(cap))
13900+        u2 = uri.from_string(cap)
13901+        self.failUnlessReallyEqual(u1, u2)
13902+        u3 = uri.from_string_verifier(cap)
13903+        self.failUnlessEqual(u3, u1)
13904+
13905+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint,
13906+                                 ['131073', '3'])
13907+        cap = u1.to_string()
13908+        self.failUnless(uri.is_uri(cap))
13909+        u2 = uri.from_string(cap)
13910+        self.failUnlessReallyEqual(u1, u2)
13911+        u3 = uri.from_string_verifier(cap)
13912+        self.failUnlessEqual(u3, u1)
13913+
13914+
13915 class Dirnode(testutil.ReallyEqualMixin, unittest.TestCase):
13916     def test_pack(self):
13917         writekey = "\x01" * 16
13918hunk ./src/allmydata/test/test_uri.py 794
13919         self.failUnlessReallyEqual(u1.get_verify_cap(), None)
13920         self.failUnlessReallyEqual(u1.get_storage_index(), None)
13921         self.failUnlessReallyEqual(u1.abbrev_si(), "<LIT>")
13922+
13923+    def test_mdmf(self):
13924+        writekey = "\x01" * 16
13925+        fingerprint = "\x02" * 32
13926+        uri1 = uri.WritableMDMFFileURI(writekey, fingerprint)
13927+        d1 = uri.MDMFDirectoryURI(uri1)
13928+        self.failIf(d1.is_readonly())
13929+        self.failUnless(d1.is_mutable())
13930+        self.failUnless(IURI.providedBy(d1))
13931+        self.failUnless(IDirnodeURI.providedBy(d1))
13932+        d1_uri = d1.to_string()
13933+
13934+        d2 = uri.from_string(d1_uri)
13935+        self.failUnlessIsInstance(d2, uri.MDMFDirectoryURI)
13936+        self.failIf(d2.is_readonly())
13937+        self.failUnless(d2.is_mutable())
13938+        self.failUnless(IURI.providedBy(d2))
13939+        self.failUnless(IDirnodeURI.providedBy(d2))
13940+
13941+        # It doesn't make sense to ask for a deep immutable URI for a
13942+        # mutable directory, and we should get back a result to that
13943+        # effect.
13944+        d3 = uri.from_string(d2.to_string(), deep_immutable=True)
13945+        self.failUnlessIsInstance(d3, uri.UnknownURI)
13946+
13947+    def test_mdmf_with_extensions(self):
13948+        writekey = "\x01" * 16
13949+        fingerprint = "\x02" * 32
13950+        uri1 = uri.WritableMDMFFileURI(writekey, fingerprint)
13951+        d1 = uri.MDMFDirectoryURI(uri1)
13952+        d1_uri = d1.to_string()
13953+        # Add some extensions, verify that the URI is interpreted
13954+        # correctly.
13955+        d1_uri += ":3:131073"
13956+        uri2 = uri.from_string(d1_uri)
13957+        self.failUnlessIsInstance(uri2, uri.MDMFDirectoryURI)
13958+        self.failUnless(IURI.providedBy(uri2))
13959+        self.failUnless(IDirnodeURI.providedBy(uri2))
13960+        self.failUnless(uri2.is_mutable())
13961+        self.failIf(uri2.is_readonly())
13962+
13963+        d2_uri = uri2.to_string()
13964+        self.failUnlessIn(":3:131073", d2_uri)
13965+
13966+        # Now attenuate, verify that the extensions persist
13967+        ro_uri = uri2.get_readonly()
13968+        self.failUnlessIsInstance(ro_uri, uri.ReadonlyMDMFDirectoryURI)
13969+        self.failUnless(ro_uri.is_mutable())
13970+        self.failUnless(ro_uri.is_readonly())
13971+        self.failUnless(IURI.providedBy(ro_uri))
13972+        self.failUnless(IDirnodeURI.providedBy(ro_uri))
13973+        ro_uri_str = ro_uri.to_string()
13974+        self.failUnlessIn(":3:131073", ro_uri_str)
13975+
13976+    def test_mdmf_attenuation(self):
13977+        writekey = "\x01" * 16
13978+        fingerprint = "\x02" * 32
13979+
13980+        uri1 = uri.WritableMDMFFileURI(writekey, fingerprint)
13981+        d1 = uri.MDMFDirectoryURI(uri1)
13982+        self.failUnless(d1.is_mutable())
13983+        self.failIf(d1.is_readonly())
13984+        self.failUnless(IURI.providedBy(d1))
13985+        self.failUnless(IDirnodeURI.providedBy(d1))
13986+
13987+        d1_uri = d1.to_string()
13988+        d1_uri_from_fn = uri.MDMFDirectoryURI(d1.get_filenode_cap()).to_string()
13989+        self.failUnlessEqual(d1_uri_from_fn, d1_uri)
13990+
13991+        uri2 = uri.from_string(d1_uri)
13992+        self.failUnlessIsInstance(uri2, uri.MDMFDirectoryURI)
13993+        self.failUnless(IURI.providedBy(uri2))
13994+        self.failUnless(IDirnodeURI.providedBy(uri2))
13995+        self.failUnless(uri2.is_mutable())
13996+        self.failIf(uri2.is_readonly())
13997+
13998+        ro = uri2.get_readonly()
13999+        self.failUnlessIsInstance(ro, uri.ReadonlyMDMFDirectoryURI)
14000+        self.failUnless(ro.is_mutable())
14001+        self.failUnless(ro.is_readonly())
14002+        self.failUnless(IURI.providedBy(ro))
14003+        self.failUnless(IDirnodeURI.providedBy(ro))
14004+
14005+        ro_uri = ro.to_string()
14006+        n = uri.from_string(ro_uri, deep_immutable=True)
14007+        self.failUnlessIsInstance(n, uri.UnknownURI)
14008+
14009+        fn_cap = ro.get_filenode_cap()
14010+        fn_ro_cap = fn_cap.get_readonly()
14011+        d3 = uri.ReadonlyMDMFDirectoryURI(fn_ro_cap)
14012+        self.failUnlessEqual(ro.to_string(), d3.to_string())
14013+        self.failUnless(ro.is_mutable())
14014+        self.failUnless(ro.is_readonly())
14015+
14016+    def test_mdmf_verifier(self):
14017+        # Verifier caps should be the same whether derived before or
14018+        # after attenuation.
14018+        writekey = "\x01" * 16
14019+        fingerprint = "\x02" * 32
14020+        uri1 = uri.WritableMDMFFileURI(writekey, fingerprint)
14021+        d1 = uri.MDMFDirectoryURI(uri1)
14022+        v1 = d1.get_verify_cap()
14023+        self.failUnlessIsInstance(v1, uri.MDMFDirectoryURIVerifier)
14024+        self.failIf(v1.is_mutable())
14025+
14026+        d2 = uri.from_string(d1.to_string())
14027+        v2 = d2.get_verify_cap()
14028+        self.failUnlessIsInstance(v2, uri.MDMFDirectoryURIVerifier)
14029+        self.failIf(v2.is_mutable())
14030+        self.failUnlessEqual(v2.to_string(), v1.to_string())
14031+
14032+        # Now attenuate and make sure that works correctly.
14033+        r3 = d2.get_readonly()
14034+        v3 = r3.get_verify_cap()
14035+        self.failUnlessIsInstance(v3, uri.MDMFDirectoryURIVerifier)
14036+        self.failIf(v3.is_mutable())
14037+        self.failUnlessEqual(v3.to_string(), v1.to_string())
14038+        r4 = uri.from_string(r3.to_string())
14039+        v4 = r4.get_verify_cap()
14040+        self.failUnlessIsInstance(v4, uri.MDMFDirectoryURIVerifier)
14041+        self.failIf(v4.is_mutable())
14042+        self.failUnlessEqual(v4.to_string(), v3.to_string())
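The attenuation invariants these tests exercise (write cap → read-only cap → verify cap, with both paths converging on the same verifier) can be sketched standalone. This is a simplified model, not the patched classes: `hashlib` stands in for `allmydata.util.hashutil` and the derivation tags are made up for the sketch.

```python
import hashlib

def derive(tag, key, length=16):
    # One-way derivation; stands in for hashutil.ssk_readkey_hash /
    # ssk_storage_index_hash (the exact tags are an assumption here).
    return hashlib.sha256(tag + key).digest()[:length]

class WriteCap:
    """Simplified analogue of WritableMDMFFileURI."""
    def __init__(self, writekey, fingerprint):
        self.writekey = writekey
        self.readkey = derive(b"read", writekey)
        self.storage_index = derive(b"si", self.readkey)
        self.fingerprint = fingerprint
    def is_readonly(self):
        return False
    def is_mutable(self):
        return True
    def get_readonly(self):
        return ReadCap(self.readkey, self.fingerprint)
    def get_verify_cap(self):
        return VerifyCap(self.storage_index, self.fingerprint)

class ReadCap:
    """Simplified analogue of ReadonlyMDMFFileURI."""
    def __init__(self, readkey, fingerprint):
        self.readkey = readkey
        self.storage_index = derive(b"si", readkey)
        self.fingerprint = fingerprint
    def is_readonly(self):
        return True
    def is_mutable(self):
        # the underlying file is mutable even though this cap cannot write it
        return True
    def get_readonly(self):
        return self
    def get_verify_cap(self):
        return VerifyCap(self.storage_index, self.fingerprint)

class VerifyCap:
    """Simplified analogue of MDMFVerifierURI."""
    def __init__(self, storage_index, fingerprint):
        self.storage_index = storage_index
        self.fingerprint = fingerprint
    def is_readonly(self):
        return True
    def is_mutable(self):
        return False

w = WriteCap(b"\x01" * 16, b"\x02" * 32)
ro = w.get_readonly()
assert ro.is_readonly() and ro.is_mutable()
# attenuating first does not change the verify cap
assert w.get_verify_cap().storage_index == ro.get_verify_cap().storage_index
assert not w.get_verify_cap().is_mutable()
```

The key design point the tests check: the read cap derives the same storage index as the write cap, so a verifier obtained before or after attenuation is identical.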
14043hunk ./src/allmydata/uri.py 31
14044 SEP='(?::|%3A)'
14045 NUMBER='([0-9]+)'
14046 NUMBER_IGNORE='(?:[0-9]+)'
14047+OPTIONAL_EXTENSION_FIELD = '(' + SEP + '[0-9' + SEP + ']+|)'
14048 
14049 # "human-encoded" URIs are allowed to come with a leading
14050 # 'http://127.0.0.1:(8123|3456)/uri/' that will be ignored.
14051hunk ./src/allmydata/uri.py 297
14052     def get_verify_cap(self):
14053         return SSKVerifierURI(self.storage_index, self.fingerprint)
14054 
14055+    def get_extension_params(self):
14056+        return []
14057+
14058+    def set_extension_params(self, params):
14059+        pass
14060 
14061 class ReadonlySSKFileURI(_BaseURI):
14062     implements(IURI, IMutableFileURI)
14063hunk ./src/allmydata/uri.py 357
14064     def get_verify_cap(self):
14065         return SSKVerifierURI(self.storage_index, self.fingerprint)
14066 
14067+    def get_extension_params(self):
14068+        return []
14069+
14070+    def set_extension_params(self, params):
14071+        pass
14072 
14073 class SSKVerifierURI(_BaseURI):
14074     implements(IVerifierURI)
14075hunk ./src/allmydata/uri.py 407
14076     def get_verify_cap(self):
14077         return self
14078 
14079+    def get_extension_params(self):
14080+        return []
14081+
14082+    def set_extension_params(self, params):
14083+        pass
14084+
14085+class WritableMDMFFileURI(_BaseURI):
14086+    implements(IURI, IMutableFileURI)
14087+
14088+    BASE_STRING='URI:MDMF:'
14089+    STRING_RE=re.compile('^'+BASE_STRING+BASE32STR_128bits+':'+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
14090+    HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'MDMF'+SEP+BASE32STR_128bits+SEP+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
14091+
14092+    def __init__(self, writekey, fingerprint, params=None):
14093+        self.writekey = writekey
14094+        self.readkey = hashutil.ssk_readkey_hash(writekey)
14095+        self.storage_index = hashutil.ssk_storage_index_hash(self.readkey)
14096+        assert len(self.storage_index) == 16
14097+        self.fingerprint = fingerprint
14098+        self.extension = params or []
14099+
14100+    @classmethod
14101+    def init_from_human_encoding(cls, uri):
14102+        mo = cls.HUMAN_RE.search(uri)
14103+        if not mo:
14104+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
14105+        params = filter(lambda x: x != '', re.split(SEP, mo.group(3)))
14106+        return cls(base32.a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
14107+
14108+    @classmethod
14109+    def init_from_string(cls, uri):
14110+        mo = cls.STRING_RE.search(uri)
14111+        if not mo:
14112+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
14113+        params = mo.group(3)
14114+        params = filter(lambda x: x != '', params.split(":"))
14115+        return cls(base32.a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
14116+
14117+    def to_string(self):
14118+        assert isinstance(self.writekey, str)
14119+        assert isinstance(self.fingerprint, str)
14120+        ret = 'URI:MDMF:%s:%s' % (base32.b2a(self.writekey),
14121+                                  base32.b2a(self.fingerprint))
14122+        if self.extension:
14123+            ret += ":"
14124+            ret += ":".join(self.extension)
14125+
14126+        return ret
14127+
14128+    def __repr__(self):
14129+        return "<%s %s>" % (self.__class__.__name__, self.abbrev())
14130+
14131+    def abbrev(self):
14132+        return base32.b2a(self.writekey[:5])
14133+
14134+    def abbrev_si(self):
14135+        return base32.b2a(self.storage_index)[:5]
14136+
14137+    def is_readonly(self):
14138+        return False
14139+
14140+    def is_mutable(self):
14141+        return True
14142+
14143+    def get_readonly(self):
14144+        return ReadonlyMDMFFileURI(self.readkey, self.fingerprint, self.extension)
14145+
14146+    def get_verify_cap(self):
14147+        return MDMFVerifierURI(self.storage_index, self.fingerprint, self.extension)
14148+
14149+    def get_extension_params(self):
14150+        return self.extension
14151+
14152+    def set_extension_params(self, params):
14153+        params = map(str, params)
14154+        self.extension = params
14155+
14156+class ReadonlyMDMFFileURI(_BaseURI):
14157+    implements(IURI, IMutableFileURI)
14158+
14159+    BASE_STRING='URI:MDMF-RO:'
14160+    STRING_RE=re.compile('^'+BASE_STRING+BASE32STR_128bits+':'+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
14161+    HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'MDMF-RO'+SEP+BASE32STR_128bits+SEP+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
14162+
14163+    def __init__(self, readkey, fingerprint, params=None):
14164+        self.readkey = readkey
14165+        self.storage_index = hashutil.ssk_storage_index_hash(self.readkey)
14166+        assert len(self.storage_index) == 16
14167+        self.fingerprint = fingerprint
14168+        self.extension = params or []
14169+
14170+    @classmethod
14171+    def init_from_human_encoding(cls, uri):
14172+        mo = cls.HUMAN_RE.search(uri)
14173+        if not mo:
14174+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
14175+        params = mo.group(3)
14176+        params = filter(lambda x: x != '', re.split(SEP, params))
14177+        return cls(base32.a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
14178+
14179+    @classmethod
14180+    def init_from_string(cls, uri):
14181+        mo = cls.STRING_RE.search(uri)
14182+        if not mo:
14183+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
14184+
14185+        params = mo.group(3)
14186+        params = filter(lambda x: x != '', params.split(":"))
14187+        return cls(base32.a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
14188+
14189+    def to_string(self):
14190+        assert isinstance(self.readkey, str)
14191+        assert isinstance(self.fingerprint, str)
14192+        ret = 'URI:MDMF-RO:%s:%s' % (base32.b2a(self.readkey),
14193+                                     base32.b2a(self.fingerprint))
14194+        if self.extension:
14195+            ret += ":"
14196+            ret += ":".join(self.extension)
14197+
14198+        return ret
14199+
14200+    def __repr__(self):
14201+        return "<%s %s>" % (self.__class__.__name__, self.abbrev())
14202+
14203+    def abbrev(self):
14204+        return base32.b2a(self.readkey[:5])
14205+
14206+    def abbrev_si(self):
14207+        return base32.b2a(self.storage_index)[:5]
14208+
14209+    def is_readonly(self):
14210+        return True
14211+
14212+    def is_mutable(self):
14213+        return True
14214+
14215+    def get_readonly(self):
14216+        return self
14217+
14218+    def get_verify_cap(self):
14219+        return MDMFVerifierURI(self.storage_index, self.fingerprint, self.extension)
14220+
14221+    def get_extension_params(self):
14222+        return self.extension
14223+
14224+    def set_extension_params(self, params):
14225+        params = map(str, params)
14226+        self.extension = params
14227+
14228+class MDMFVerifierURI(_BaseURI):
14229+    implements(IVerifierURI)
14230+
14231+    BASE_STRING='URI:MDMF-Verifier:'
14232+    STRING_RE=re.compile('^'+BASE_STRING+BASE32STR_128bits+':'+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
14233+    HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'MDMF-Verifier'+SEP+BASE32STR_128bits+SEP+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
14234+
14235+    def __init__(self, storage_index, fingerprint, params=None):
14236+        assert len(storage_index) == 16
14237+        self.storage_index = storage_index
14238+        self.fingerprint = fingerprint
14239+        self.extension = params or []
14240+
14241+    @classmethod
14242+    def init_from_human_encoding(cls, uri):
14243+        mo = cls.HUMAN_RE.search(uri)
14244+        if not mo:
14245+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
14246+        params = mo.group(3)
14247+        params = filter(lambda x: x != '', re.split(SEP, params))
14248+        return cls(si_a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
14249+
14250+    @classmethod
14251+    def init_from_string(cls, uri):
14252+        mo = cls.STRING_RE.search(uri)
14253+        if not mo:
14254+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
14255+        params = mo.group(3)
14256+        params = filter(lambda x: x != '', params.split(":"))
14257+        return cls(si_a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
14258+
14259+    def to_string(self):
14260+        assert isinstance(self.storage_index, str)
14261+        assert isinstance(self.fingerprint, str)
14262+        ret = 'URI:MDMF-Verifier:%s:%s' % (si_b2a(self.storage_index),
14263+                                           base32.b2a(self.fingerprint))
14264+        if self.extension:
14265+            ret += ':'
14266+            ret += ":".join(self.extension)
14267+
14268+        return ret
14269+
14270+    def is_readonly(self):
14271+        return True
14272+
14273+    def is_mutable(self):
14274+        return False
14275+
14276+    def get_readonly(self):
14277+        return self
14278+
14279+    def get_verify_cap(self):
14280+        return self
14281+
14282+    def get_extension_params(self):
14283+        return self.extension
14284+
14285 class _DirectoryBaseURI(_BaseURI):
14286     implements(IURI, IDirnodeURI)
14287     def __init__(self, filenode_uri=None):
14288hunk ./src/allmydata/uri.py 750
14289         return None
14290 
14291 
14292+class MDMFDirectoryURI(_DirectoryBaseURI):
14293+    implements(IDirectoryURI)
14294+
14295+    BASE_STRING='URI:DIR2-MDMF:'
14296+    BASE_STRING_RE=re.compile('^'+BASE_STRING)
14297+    BASE_HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'DIR2-MDMF'+SEP)
14298+    INNER_URI_CLASS=WritableMDMFFileURI
14299+
14300+    def __init__(self, filenode_uri=None):
14301+        if filenode_uri:
14302+            assert not filenode_uri.is_readonly()
14303+        _DirectoryBaseURI.__init__(self, filenode_uri)
14304+
14305+    def is_readonly(self):
14306+        return False
14307+
14308+    def get_readonly(self):
14309+        return ReadonlyMDMFDirectoryURI(self._filenode_uri.get_readonly())
14310+
14311+    def get_verify_cap(self):
14312+        return MDMFDirectoryURIVerifier(self._filenode_uri.get_verify_cap())
14313+
14314+
14315+class ReadonlyMDMFDirectoryURI(_DirectoryBaseURI):
14316+    implements(IReadonlyDirectoryURI)
14317+
14318+    BASE_STRING='URI:DIR2-MDMF-RO:'
14319+    BASE_STRING_RE=re.compile('^'+BASE_STRING)
14320+    BASE_HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'DIR2-MDMF-RO'+SEP)
14321+    INNER_URI_CLASS=ReadonlyMDMFFileURI
14322+
14323+    def __init__(self, filenode_uri=None):
14324+        if filenode_uri:
14325+            assert filenode_uri.is_readonly()
14326+        _DirectoryBaseURI.__init__(self, filenode_uri)
14327+
14328+    def is_readonly(self):
14329+        return True
14330+
14331+    def get_readonly(self):
14332+        return self
14333+
14334+    def get_verify_cap(self):
14335+        return MDMFDirectoryURIVerifier(self._filenode_uri.get_verify_cap())
14336+
14337 def wrap_dirnode_cap(filecap):
14338     if isinstance(filecap, WriteableSSKFileURI):
14339         return DirectoryURI(filecap)
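The directory caps added above do not attenuate themselves; they delegate to the filenode cap they wrap (`get_readonly` wraps the inner cap's read-only form, with assertions enforcing the writability invariant at construction). A minimal sketch of that delegation pattern, using hypothetical class names rather than the real URI classes:

```python
class FileWriteCap:
    def is_readonly(self):
        return False
    def get_readonly(self):
        return FileReadCap()

class FileReadCap:
    def is_readonly(self):
        return True
    def get_readonly(self):
        return self

class DirWriteCap:
    """Simplified analogue of MDMFDirectoryURI."""
    def __init__(self, filenode_cap):
        assert not filenode_cap.is_readonly()
        self._filenode_uri = filenode_cap
    def is_readonly(self):
        return False
    def get_readonly(self):
        # attenuation is delegated to the wrapped filenode cap
        return DirReadCap(self._filenode_uri.get_readonly())

class DirReadCap:
    """Simplified analogue of ReadonlyMDMFDirectoryURI."""
    def __init__(self, filenode_cap):
        assert filenode_cap.is_readonly()
        self._filenode_uri = filenode_cap
    def is_readonly(self):
        return True
    def get_readonly(self):
        return self

ro = DirWriteCap(FileWriteCap()).get_readonly()
assert ro.is_readonly()
assert ro.get_readonly() is ro   # attenuation is idempotent
```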
14340hunk ./src/allmydata/uri.py 804
14341         return ImmutableDirectoryURI(filecap)
14342     if isinstance(filecap, LiteralFileURI):
14343         return LiteralDirectoryURI(filecap)
14344+    if isinstance(filecap, WritableMDMFFileURI):
14345+        return MDMFDirectoryURI(filecap)
14346+    if isinstance(filecap, ReadonlyMDMFFileURI):
14347+        return ReadonlyMDMFDirectoryURI(filecap)
14348     assert False, "cannot interpret as a directory cap: %s" % filecap.__class__
14349 
14350hunk ./src/allmydata/uri.py 810
14351+class MDMFDirectoryURIVerifier(_DirectoryBaseURI):
14352+    implements(IVerifierURI)
14353+
14354+    BASE_STRING='URI:DIR2-MDMF-Verifier:'
14355+    BASE_STRING_RE=re.compile('^'+BASE_STRING)
14356+    BASE_HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'DIR2-MDMF-Verifier'+SEP)
14357+    INNER_URI_CLASS=MDMFVerifierURI
14358+
14359+    def __init__(self, filenode_uri=None):
14360+        if filenode_uri:
14361+            assert IVerifierURI.providedBy(filenode_uri)
14362+        self._filenode_uri = filenode_uri
14363+
14364+    def get_filenode_cap(self):
14365+        return self._filenode_uri
14366+
14367+    def is_mutable(self):
14368+        return False
14369 
14370 class DirectoryURIVerifier(_DirectoryBaseURI):
14371     implements(IVerifierURI)
14372hunk ./src/allmydata/uri.py 915
14373             kind = "URI:SSK-RO readcap to a mutable file"
14374         elif s.startswith('URI:SSK-Verifier:'):
14375             return SSKVerifierURI.init_from_string(s)
14376+        elif s.startswith('URI:MDMF:'):
14377+            return WritableMDMFFileURI.init_from_string(s)
14378+        elif s.startswith('URI:MDMF-RO:'):
14379+            return ReadonlyMDMFFileURI.init_from_string(s)
14380+        elif s.startswith('URI:MDMF-Verifier:'):
14381+            return MDMFVerifierURI.init_from_string(s)
14382         elif s.startswith('URI:DIR2:'):
14383             if can_be_writeable:
14384                 return DirectoryURI.init_from_string(s)
14385hunk ./src/allmydata/uri.py 935
14386             return ImmutableDirectoryURI.init_from_string(s)
14387         elif s.startswith('URI:DIR2-LIT:'):
14388             return LiteralDirectoryURI.init_from_string(s)
14389+        elif s.startswith('URI:DIR2-MDMF:'):
14390+            if can_be_writeable:
14391+                return MDMFDirectoryURI.init_from_string(s)
14392+            kind = "URI:DIR2-MDMF directory writecap"
14393+        elif s.startswith('URI:DIR2-MDMF-RO:'):
14394+            if can_be_mutable:
14395+                return ReadonlyMDMFDirectoryURI.init_from_string(s)
14396+            kind = "URI:DIR2-MDMF-RO readcap to a mutable directory"
14397         elif s.startswith('x-tahoe-future-test-writeable:') and not can_be_writeable:
14398             # For testing how future writeable caps would behave in read-only contexts.
14399             kind = "x-tahoe-future-test-writeable: testing cap"
14400}
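The cap string format this patch introduces is the base `URI:MDMF:<writekey>:<fingerprint>` with optional colon-joined extension parameters appended, as `to_string` above builds it. The sketch below mirrors that serialization; `b2a` is only shape-compatible with `allmydata.util.base32.b2a` (Tahoe uses its own lowercase, unpadded alphabet), and `mdmf_cap` is a hypothetical helper:

```python
from base64 import b32encode

def b2a(data):
    # Stand-in for allmydata.util.base32.b2a (assumption: stdlib base32,
    # lowercased and unpadded, is close enough for illustration).
    return b32encode(data).decode('ascii').rstrip('=').lower()

def mdmf_cap(writekey, fingerprint, extension=()):
    # Mirrors WritableMDMFFileURI.to_string: base fields, then optional
    # colon-joined extension parameters.
    ret = 'URI:MDMF:%s:%s' % (b2a(writekey), b2a(fingerprint))
    if extension:
        ret += ':' + ':'.join(extension)
    return ret

cap = mdmf_cap(b'\x01' * 16, b'\x02' * 32, ('3', '131073'))
fields = cap.split(':')
# six fields in total, matching the len(cap_elements) == 6 checks in the
# webapi tests: URI, MDMF, writekey, fingerprint, and two extensions
assert len(fields) == 6
assert fields[:2] == ['URI', 'MDMF']
assert fields[4:] == ['3', '131073']

# without extensions the cap is "bare": just the four base fields
assert len(mdmf_cap(b'\x01' * 16, b'\x02' * 32).split(':')) == 4
```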
14401[webapi changes for MDMF
14402Kevan Carstensen <kevan@isnotajoke.com>**20110802021311
14403 Ignore-this: cf8a873b621c654b23c394c78b1036b6
14404 
14405     - Learn how to create MDMF files and directories through the
14406       mutable-type argument.
14407     - Operate with the interface changes associated with MDMF and #993.
14408     - Learn how to do partial updates of mutable files.
14409] {
14410hunk ./src/allmydata/test/test_web.py 27
14411 from allmydata.util.netstring import split_netstring
14412 from allmydata.util.encodingutil import to_str
14413 from allmydata.test.common import FakeCHKFileNode, FakeMutableFileNode, \
14414-     create_chk_filenode, WebErrorMixin, ShouldFailMixin, make_mutable_file_uri
14415-from allmydata.interfaces import IMutableFileNode
14416+     create_chk_filenode, WebErrorMixin, ShouldFailMixin, \
14417+     make_mutable_file_uri, create_mutable_filenode
14418+from allmydata.interfaces import IMutableFileNode, SDMF_VERSION, MDMF_VERSION
14419 from allmydata.mutable import servermap, publish, retrieve
14420 import allmydata.test.common_util as testutil
14421 from allmydata.test.no_network import GridTestMixin
14422hunk ./src/allmydata/test/test_web.py 52
14423         return stats
14424 
14425 class FakeNodeMaker(NodeMaker):
14426+    encoding_params = {
14427+        'k': 3,
14428+        'n': 10,
14429+        'happy': 7,
14430+        'max_segment_size': 128*1024 # 128 KiB
14431+    }
14432     def _create_lit(self, cap):
14433         return FakeCHKFileNode(cap)
14434     def _create_immutable(self, cap):
14435hunk ./src/allmydata/test/test_web.py 63
14436         return FakeCHKFileNode(cap)
14437     def _create_mutable(self, cap):
14438-        return FakeMutableFileNode(None, None, None, None).init_from_cap(cap)
14439-    def create_mutable_file(self, contents="", keysize=None):
14440-        n = FakeMutableFileNode(None, None, None, None)
14441-        return n.create(contents)
14442+        return FakeMutableFileNode(None,
14443+                                   None,
14444+                                   self.encoding_params, None).init_from_cap(cap)
14445+    def create_mutable_file(self, contents="", keysize=None,
14446+                            version=SDMF_VERSION):
14447+        n = FakeMutableFileNode(None, None, self.encoding_params, None)
14448+        return n.create(contents, version=version)
14449 
14450 class FakeUploader(service.Service):
14451     name = "uploader"
14452hunk ./src/allmydata/test/test_web.py 177
14453         self.nodemaker = FakeNodeMaker(None, self._secret_holder, None,
14454                                        self.uploader, None,
14455                                        None, None)
14456+        self.mutable_file_default = SDMF_VERSION
14457 
14458     def startService(self):
14459         return service.MultiService.startService(self)
14460hunk ./src/allmydata/test/test_web.py 222
14461             foo.set_uri(u"bar.txt", self._bar_txt_uri, self._bar_txt_uri)
14462             self._bar_txt_verifycap = n.get_verify_cap().to_string()
14463 
14464+            # sdmf
14465+            # XXX: Do we ever use this?
14466+            self.BAZ_CONTENTS, n, self._baz_txt_uri, self._baz_txt_readonly_uri = self.makefile_mutable(0)
14467+
14468+            foo.set_uri(u"baz.txt", self._baz_txt_uri, self._baz_txt_readonly_uri)
14469+
14470+            # mdmf
14471+            self.QUUX_CONTENTS, n, self._quux_txt_uri, self._quux_txt_readonly_uri = self.makefile_mutable(0, mdmf=True)
14472+            assert self._quux_txt_uri.startswith("URI:MDMF")
14473+            foo.set_uri(u"quux.txt", self._quux_txt_uri, self._quux_txt_readonly_uri)
14474+
14475             foo.set_uri(u"empty", res[3][1].get_uri(),
14476                         res[3][1].get_readonly_uri())
14477             sub_uri = res[4][1].get_uri()
14478hunk ./src/allmydata/test/test_web.py 264
14479             # public/
14480             # public/foo/
14481             # public/foo/bar.txt
14482+            # public/foo/baz.txt
14483+            # public/foo/quux.txt
14484             # public/foo/blockingfile
14485             # public/foo/empty/
14486             # public/foo/sub/
14487hunk ./src/allmydata/test/test_web.py 286
14488         n = create_chk_filenode(contents)
14489         return contents, n, n.get_uri()
14490 
14491+    def makefile_mutable(self, number, mdmf=False):
14492+        contents = "contents of mutable file %s\n" % number
14493+        n = create_mutable_filenode(contents, mdmf)
14494+        return contents, n, n.get_uri(), n.get_readonly_uri()
14495+
14496     def tearDown(self):
14497         return self.s.stopService()
14498 
14499hunk ./src/allmydata/test/test_web.py 297
14500     def failUnlessIsBarDotTxt(self, res):
14501         self.failUnlessReallyEqual(res, self.BAR_CONTENTS, res)
14502 
14503+    def failUnlessIsQuuxDotTxt(self, res):
14504+        self.failUnlessReallyEqual(res, self.QUUX_CONTENTS, res)
14505+
14506+    def failUnlessIsBazDotTxt(self, res):
14507+        self.failUnlessReallyEqual(res, self.BAZ_CONTENTS, res)
14508+
14509     def failUnlessIsBarJSON(self, res):
14510         data = simplejson.loads(res)
14511         self.failUnless(isinstance(data, list))
14512hunk ./src/allmydata/test/test_web.py 314
14513         self.failUnlessReallyEqual(to_str(data[1]["verify_uri"]), self._bar_txt_verifycap)
14514         self.failUnlessReallyEqual(data[1]["size"], len(self.BAR_CONTENTS))
14515 
14516+    def failUnlessIsQuuxJSON(self, res, readonly=False):
14517+        data = simplejson.loads(res)
14518+        self.failUnless(isinstance(data, list))
14519+        self.failUnlessEqual(data[0], "filenode")
14520+        self.failUnless(isinstance(data[1], dict))
14521+        metadata = data[1]
14522+        return self.failUnlessIsQuuxDotTxtMetadata(metadata, readonly)
14523+
14524+    def failUnlessIsQuuxDotTxtMetadata(self, metadata, readonly):
14525+        self.failUnless(metadata['mutable'])
14526+        if readonly:
14527+            self.failIf("rw_uri" in metadata)
14528+        else:
14529+            self.failUnless("rw_uri" in metadata)
14530+            self.failUnlessEqual(metadata['rw_uri'], self._quux_txt_uri)
14531+        self.failUnless("ro_uri" in metadata)
14532+        self.failUnlessEqual(metadata['ro_uri'], self._quux_txt_readonly_uri)
14533+        self.failUnlessReallyEqual(metadata['size'], len(self.QUUX_CONTENTS))
14534+
14535     def failUnlessIsFooJSON(self, res):
14536         data = simplejson.loads(res)
14537         self.failUnless(isinstance(data, list))
14538hunk ./src/allmydata/test/test_web.py 346
14539 
14540         kidnames = sorted([unicode(n) for n in data[1]["children"]])
14541         self.failUnlessEqual(kidnames,
14542-                             [u"bar.txt", u"blockingfile", u"empty",
14543-                              u"n\u00fc.txt", u"sub"])
14544+                             [u"bar.txt", u"baz.txt", u"blockingfile",
14545+                              u"empty", u"n\u00fc.txt", u"quux.txt", u"sub"])
14546         kids = dict( [(unicode(name),value)
14547                       for (name,value)
14548                       in data[1]["children"].iteritems()] )
14549hunk ./src/allmydata/test/test_web.py 368
14550                                    self._bar_txt_metadata["tahoe"]["linkcrtime"])
14551         self.failUnlessReallyEqual(to_str(kids[u"n\u00fc.txt"][1]["ro_uri"]),
14552                                    self._bar_txt_uri)
14553+        self.failUnlessIn("quux.txt", kids)
14554+        self.failUnlessReallyEqual(kids[u"quux.txt"][1]["rw_uri"],
14555+                                   self._quux_txt_uri)
14556+        self.failUnlessReallyEqual(kids[u"quux.txt"][1]["ro_uri"],
14557+                                   self._quux_txt_readonly_uri)
14558 
14559     def GET(self, urlpath, followRedirect=False, return_response=False,
14560             **kwargs):
14561hunk ./src/allmydata/test/test_web.py 845
14562                              self.PUT, base + "/@@name=/blah.txt", "")
14563         return d
14564 
14565+
14566     def test_GET_DIRURL_named_bad(self):
14567         base = "/file/%s" % urllib.quote(self._foo_uri)
14568         d = self.shouldFail2(error.Error, "test_PUT_DIRURL_named_bad",
14569hunk ./src/allmydata/test/test_web.py 888
14570         d.addCallback(self.failUnlessIsBarDotTxt)
14571         return d
14572 
14573+    def test_GET_FILE_URI_mdmf(self):
14574+        base = "/uri/%s" % urllib.quote(self._quux_txt_uri)
14575+        d = self.GET(base)
14576+        d.addCallback(self.failUnlessIsQuuxDotTxt)
14577+        return d
14578+
14579+    def test_GET_FILE_URI_mdmf_extensions(self):
14580+        base = "/uri/%s" % urllib.quote("%s:3:131073" % self._quux_txt_uri)
14581+        d = self.GET(base)
14582+        d.addCallback(self.failUnlessIsQuuxDotTxt)
14583+        return d
14584+
14585+    def test_GET_FILE_URI_mdmf_bare_cap(self):
14586+        cap_elements = self._quux_txt_uri.split(":")
14587+        # Expect 6 colon-separated fields: URI, MDMF, writekey,
14588+        # fingerprint, and two extension parameters.
14588+        self.failUnlessEqual(len(cap_elements), 6)
14589+
14590+        # Now lop off the extension parameters and stitch everything
14591+        # back together
14592+        quux_uri = ":".join(cap_elements[:len(cap_elements) - 2])
14593+
14594+        # Now GET that. We should get back quux.
14595+        base = "/uri/%s" % urllib.quote(quux_uri)
14596+        d = self.GET(base)
14597+        d.addCallback(self.failUnlessIsQuuxDotTxt)
14598+        return d
14599+
14600+    def test_GET_FILE_URI_mdmf_readonly(self):
14601+        base = "/uri/%s" % urllib.quote(self._quux_txt_readonly_uri)
14602+        d = self.GET(base)
14603+        d.addCallback(self.failUnlessIsQuuxDotTxt)
14604+        return d
14605+
14606     def test_GET_FILE_URI_badchild(self):
14607         base = "/uri/%s/boguschild" % urllib.quote(self._bar_txt_uri)
14608         errmsg = "Files have no children, certainly not named 'boguschild'"
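The "bare cap" tests above lop the two extension parameters off the cap string and expect the server to handle the remainder. The manipulation is plain string surgery, sketched here with a hypothetical helper (the placeholder field values are not real base32):

```python
def strip_extensions(cap, n_extensions=2):
    # Drop the trailing extension parameters, as the bare-cap tests do;
    # n_extensions=2 matches the two numeric parameters the tests append.
    elements = cap.split(':')
    return ':'.join(elements[:len(elements) - n_extensions])

full = 'URI:MDMF:writekeyb32:fingerprintb32:3:131073'
assert strip_extensions(full) == 'URI:MDMF:writekeyb32:fingerprintb32'
```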
14609hunk ./src/allmydata/test/test_web.py 937
14610                              self.PUT, base, "")
14611         return d
14612 
14613+    def test_PUT_FILE_URI_mdmf(self):
14614+        base = "/uri/%s" % urllib.quote(self._quux_txt_uri)
14615+        self._quux_new_contents = "new_contents"
14616+        d = self.GET(base)
14617+        d.addCallback(lambda res:
14618+            self.failUnlessIsQuuxDotTxt(res))
14619+        d.addCallback(lambda ignored:
14620+            self.PUT(base, self._quux_new_contents))
14621+        d.addCallback(lambda ignored:
14622+            self.GET(base))
14623+        d.addCallback(lambda res:
14624+            self.failUnlessReallyEqual(res, self._quux_new_contents))
14625+        return d
14626+
14627+    def test_PUT_FILE_URI_mdmf_extensions(self):
14628+        base = "/uri/%s" % urllib.quote("%s:3:131073" % self._quux_txt_uri)
14629+        self._quux_new_contents = "new_contents"
14630+        d = self.GET(base)
14631+        d.addCallback(lambda res: self.failUnlessIsQuuxDotTxt(res))
14632+        d.addCallback(lambda ignored: self.PUT(base, self._quux_new_contents))
14633+        d.addCallback(lambda ignored: self.GET(base))
14634+        d.addCallback(lambda res: self.failUnlessEqual(self._quux_new_contents,
14635+                                                       res))
14636+        return d
14637+
14638+    def test_PUT_FILE_URI_mdmf_bare_cap(self):
14639+        elements = self._quux_txt_uri.split(":")
14640+        self.failUnlessEqual(len(elements), 6)
14641+
14642+        quux_uri = ":".join(elements[:len(elements) - 2])
14643+        base = "/uri/%s" % urllib.quote(quux_uri)
14644+        self._quux_new_contents = "new_contents" * 50000
14645+
14646+        d = self.GET(base)
14647+        d.addCallback(self.failUnlessIsQuuxDotTxt)
14648+        d.addCallback(lambda ignored: self.PUT(base, self._quux_new_contents))
14649+        d.addCallback(lambda ignored: self.GET(base))
14650+        d.addCallback(lambda res:
14651+            self.failUnlessEqual(res, self._quux_new_contents))
14652+        return d
14653+
14654+    def test_PUT_FILE_URI_mdmf_readonly(self):
14655+        # We're not allowed to PUT things to a readonly cap.
14656+        base = "/uri/%s" % self._quux_txt_readonly_uri
14657+        d = self.GET(base)
14658+        d.addCallback(lambda res:
14659+            self.failUnlessIsQuuxDotTxt(res))
14660+        # What should we get here? We get a 500 error now; that's not right.
14661+        d.addCallback(lambda ignored:
14662+            self.shouldFail2(error.Error, "test_PUT_FILE_URI_mdmf_readonly",
14663+                             "400 Bad Request", "read-only cap",
14664+                             self.PUT, base, "new data"))
14665+        return d
14666+
14667+    def test_PUT_FILE_URI_sdmf_readonly(self):
14668+        # We're not allowed to PUT things to a readonly cap.
14669+        base = "/uri/%s" % self._baz_txt_readonly_uri
14670+        d = self.GET(base)
14671+        d.addCallback(lambda res:
14672+            self.failUnlessIsBazDotTxt(res))
14673+        d.addCallback(lambda ignored:
14674+            self.shouldFail2(error.Error, "test_PUT_FILE_URI_sdmf_readonly",
14675+                             "400 Bad Request", "read-only cap",
14676+                             self.PUT, base, "new_data"))
14677+        return d
14678+
14679     # TODO: version of this with a Unicode filename
14680     def test_GET_FILEURL_save(self):
14681         d = self.GET(self.public_url + "/foo/bar.txt?filename=bar.txt&save=true",
14682hunk ./src/allmydata/test/test_web.py 1019
14683         d.addBoth(self.should404, "test_GET_FILEURL_missing")
14684         return d
14685 
14686+    def test_GET_FILEURL_info_mdmf(self):
14687+        d = self.GET("/uri/%s?t=info" % self._quux_txt_uri)
14688+        def _got(res):
14689+            self.failUnlessIn("mutable file (mdmf)", res)
14690+            self.failUnlessIn(self._quux_txt_uri, res)
14691+            self.failUnlessIn(self._quux_txt_readonly_uri, res)
14692+        d.addCallback(_got)
14693+        return d
14694+
14695+    def test_GET_FILEURL_info_mdmf_readonly(self):
14696+        d = self.GET("/uri/%s?t=info" % self._quux_txt_readonly_uri)
14697+        def _got(res):
14698+            self.failUnlessIn("mutable file (mdmf)", res)
14699+            self.failIfIn(self._quux_txt_uri, res)
14700+            self.failUnlessIn(self._quux_txt_readonly_uri, res)
14701+        d.addCallback(_got)
14702+        return d
14703+
14704+    def test_GET_FILEURL_info_sdmf(self):
14705+        d = self.GET("/uri/%s?t=info" % self._baz_txt_uri)
14706+        def _got(res):
14707+            self.failUnlessIn("mutable file (sdmf)", res)
14708+            self.failUnlessIn(self._baz_txt_uri, res)
14709+        d.addCallback(_got)
14710+        return d
14711+
14712+    def test_GET_FILEURL_info_mdmf_extensions(self):
14713+        d = self.GET("/uri/%s:3:131073?t=info" % self._quux_txt_uri)
14714+        def _got(res):
14715+            self.failUnlessIn("mutable file (mdmf)", res)
14716+            self.failUnlessIn(self._quux_txt_uri, res)
14717+            self.failUnlessIn(self._quux_txt_readonly_uri, res)
14718+        d.addCallback(_got)
14719+        return d
14720+
14721+    def test_GET_FILEURL_info_mdmf_bare_cap(self):
14722+        elements = self._quux_txt_uri.split(":")
14723+        self.failUnlessEqual(len(elements), 6)
14724+
14725+        quux_uri = ":".join(elements[:len(elements) - 2])
14726+        base = "/uri/%s?t=info" % urllib.quote(quux_uri)
14727+        d = self.GET(base)
14728+        def _got(res):
14729+            self.failUnlessIn("mutable file (mdmf)", res)
14730+            self.failUnlessIn(quux_uri, res)
14731+        d.addCallback(_got)
14732+        return d
14733+
14734     def test_PUT_overwrite_only_files(self):
14735         # create a directory, put a file in that directory.
14736         contents, n, filecap = self.makefile(8)
14737hunk ./src/allmydata/test/test_web.py 1108
14738                                                       self.NEWFILE_CONTENTS))
14739         return d
14740 
14741+    def test_PUT_NEWFILEURL_unlinked_mdmf(self):
14742+        # this should get us a few segments of an MDMF mutable file,
14743+        # which we can then test for.
14744+        contents = self.NEWFILE_CONTENTS * 300000
14745+        d = self.PUT("/uri?mutable=true&mutable-type=mdmf",
14746+                     contents)
14747+        def _got_filecap(filecap):
14748+            self.failUnless(filecap.startswith("URI:MDMF"))
14749+            return filecap
14750+        d.addCallback(_got_filecap)
14751+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
14752+        d.addCallback(lambda json: self.failUnlessIn("mdmf", json))
14753+        return d
14754+
14755+    def test_PUT_NEWFILEURL_unlinked_sdmf(self):
14756+        contents = self.NEWFILE_CONTENTS * 300000
14757+        d = self.PUT("/uri?mutable=true&mutable-type=sdmf",
14758+                     contents)
14759+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
14760+        d.addCallback(lambda json: self.failUnlessIn("sdmf", json))
14761+        return d
14762+
14763+    def test_PUT_NEWFILEURL_unlinked_bad_mutable_type(self):
14764+        contents = self.NEWFILE_CONTENTS * 300000
14765+        return self.shouldHTTPError("test bad mutable type",
14766+                                    400, "Bad Request", "Unknown type: foo",
14767+                                    self.PUT, "/uri?mutable=true&mutable-type=foo",
14768+                                    contents)
14769+
14770     def test_PUT_NEWFILEURL_range_bad(self):
14771         headers = {"content-range": "bytes 1-10/%d" % len(self.NEWFILE_CONTENTS)}
14772         target = self.public_url + "/foo/new.txt"
14773hunk ./src/allmydata/test/test_web.py 1169
14774         return d
14775 
14776     def test_PUT_NEWFILEURL_mutable_toobig(self):
14777-        d = self.shouldFail2(error.Error, "test_PUT_NEWFILEURL_mutable_toobig",
14778-                             "413 Request Entity Too Large",
14779-                             "SDMF is limited to one segment, and 10001 > 10000",
14780-                             self.PUT,
14781-                             self.public_url + "/foo/new.txt?mutable=true",
14782-                             "b" * (self.s.MUTABLE_SIZELIMIT+1))
14783+        # It is okay to upload large mutable files, so we should be able
14784+        # to do that.
14785+        d = self.PUT(self.public_url + "/foo/new.txt?mutable=true",
14786+                     "b" * (self.s.MUTABLE_SIZELIMIT + 1))
14787         return d
14788 
14789     def test_PUT_NEWFILEURL_replace(self):
14790hunk ./src/allmydata/test/test_web.py 1267
14791         d.addCallback(_check1)
14792         return d
14793 
14794+    def test_GET_FILEURL_json_mutable_type(self):
14795+        # The JSON should include mutable-type, which says whether the
14796+        # file is SDMF or MDMF
14797+        d = self.PUT("/uri?mutable=true&mutable-type=mdmf",
14798+                     self.NEWFILE_CONTENTS * 300000)
14799+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
14800+        def _got_json(json, version):
14801+            data = simplejson.loads(json)
14802+            assert "filenode" == data[0]
14803+            data = data[1]
14804+            assert isinstance(data, dict)
14805+
14806+            self.failUnlessIn("mutable-type", data)
14807+            self.failUnlessEqual(data['mutable-type'], version)
14808+
14809+        d.addCallback(_got_json, "mdmf")
14810+        # Now make an SDMF file and check that it is reported correctly.
14811+        d.addCallback(lambda ignored:
14812+            self.PUT("/uri?mutable=true&mutable-type=sdmf",
14813+                      self.NEWFILE_CONTENTS * 300000))
14814+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
14815+        d.addCallback(_got_json, "sdmf")
14816+        return d
14817+
14818+    def test_GET_FILEURL_json_mdmf_extensions(self):
14819+        # A GET invoked against a URL that includes an MDMF cap with
14820+        # extensions should fetch the same JSON information as a GET
14821+        # invoked against a bare cap.
14822+        self._quux_txt_uri = "%s:3:131073" % self._quux_txt_uri
14823+        self._quux_txt_readonly_uri = "%s:3:131073" % self._quux_txt_readonly_uri
14824+        d = self.GET("/uri/%s?t=json" % urllib.quote(self._quux_txt_uri))
14825+        d.addCallback(self.failUnlessIsQuuxJSON)
14826+        return d
14827+
14828+    def test_GET_FILEURL_json_mdmf_bare_cap(self):
14829+        elements = self._quux_txt_uri.split(":")
14830+        self.failUnlessEqual(len(elements), 6)
14831+
14832+        quux_uri = ":".join(elements[:len(elements) - 2])
14833+        # so failUnlessIsQuuxJSON will work.
14834+        self._quux_txt_uri = quux_uri
14835+
14836+        # we need to alter the readonly URI in the same way, again so
14837+        # failUnlessIsQuuxJSON will work
14838+        elements = self._quux_txt_readonly_uri.split(":")
14839+        self.failUnlessEqual(len(elements), 6)
14840+        quux_ro_uri = ":".join(elements[:len(elements) - 2])
14841+        self._quux_txt_readonly_uri = quux_ro_uri
14842+
14843+        base = "/uri/%s?t=json" % urllib.quote(quux_uri)
14844+        d = self.GET(base)
14845+        d.addCallback(self.failUnlessIsQuuxJSON)
14846+        return d
14847+
14848+    def test_GET_FILEURL_json_mdmf_bare_readonly_cap(self):
14849+        elements = self._quux_txt_readonly_uri.split(":")
14850+        self.failUnlessEqual(len(elements), 6)
14851+
14852+        quux_readonly_uri = ":".join(elements[:len(elements) - 2])
14853+        # so failUnlessIsQuuxJSON will work
14854+        self._quux_txt_readonly_uri = quux_readonly_uri
14855+        base = "/uri/%s?t=json" % quux_readonly_uri
14856+        d = self.GET(base)
14857+        # XXX: We may need a method that knows how to check for
14858+        # readonly JSON, or else teach failUnlessIsQuuxJSON how to
14859+        # do that.
14860+        d.addCallback(self.failUnlessIsQuuxJSON, readonly=True)
14861+        return d
14862+
14863+    def test_GET_FILEURL_json_mdmf(self):
14864+        d = self.GET("/uri/%s?t=json" % urllib.quote(self._quux_txt_uri))
14865+        d.addCallback(self.failUnlessIsQuuxJSON)
14866+        return d
14867+
14868     def test_GET_FILEURL_json_missing(self):
14869         d = self.GET(self.public_url + "/foo/missing?json")
14870         d.addBoth(self.should404, "test_GET_FILEURL_json_missing")
14871hunk ./src/allmydata/test/test_web.py 1373
14872             self.failUnless(CSS_STYLE.search(res), res)
14873         d.addCallback(_check)
14874         return d
14875-   
14876+
14877     def test_GET_FILEURL_uri_missing(self):
14878         d = self.GET(self.public_url + "/foo/missing?t=uri")
14879         d.addBoth(self.should404, "test_GET_FILEURL_uri_missing")
14880hunk ./src/allmydata/test/test_web.py 1379
14881         return d
14882 
14883-    def test_GET_DIRECTORY_html_banner(self):
14884+    def test_GET_DIRECTORY_html(self):
14885         d = self.GET(self.public_url + "/foo", followRedirect=True)
14886         def _check(res):
14887             self.failUnlessIn('<div class="toolbar-item"><a href="../../..">Return to Welcome page</a></div>',res)
14888hunk ./src/allmydata/test/test_web.py 1383
14889+            # These are radio buttons that allow a user to toggle
14890+            # whether a particular mutable file is SDMF or MDMF.
14891+            self.failUnlessIn("mutable-type-mdmf", res)
14892+            self.failUnlessIn("mutable-type-sdmf", res)
14893+            # Similarly, these toggle whether a particular directory
14894+            # should be MDMF or SDMF.
14895+            self.failUnlessIn("mutable-directory-mdmf", res)
14896+            self.failUnlessIn("mutable-directory-sdmf", res)
14897+            self.failUnlessIn("quux", res)
14898         d.addCallback(_check)
14899         return d
14900 
14901hunk ./src/allmydata/test/test_web.py 1395
14902+    def test_GET_root_html(self):
14903+        # make sure that we have the option to upload an unlinked
14904+        # mutable file in SDMF and MDMF formats.
14905+        d = self.GET("/")
14906+        def _got_html(html):
14907+            # These are radio buttons that allow the user to toggle
14908+            # whether a particular mutable file is MDMF or SDMF.
14909+            self.failUnlessIn("mutable-type-mdmf", html)
14910+            self.failUnlessIn("mutable-type-sdmf", html)
14911+            # We should also have the ability to create a mutable directory.
14912+            self.failUnlessIn("mkdir", html)
14913+            # ...and we should have the ability to say whether that's an
14914+            # MDMF or SDMF directory
14915+            self.failUnlessIn("mutable-directory-mdmf", html)
14916+            self.failUnlessIn("mutable-directory-sdmf", html)
14917+        d.addCallback(_got_html)
14918+        return d
14919+
14920+    def test_mutable_type_defaults(self):
14921+        # The checked="checked" attribute of the inputs corresponding to
14922+        # the mutable-type parameter should change as expected with the
14923+        # value configured in tahoe.cfg.
14924+        #
14925+        # By default, the value configured with the client is
14926+        # SDMF_VERSION, so that should be checked.
14927+        assert self.s.mutable_file_default == SDMF_VERSION
14928+
14929+        d = self.GET("/")
14930+        def _got_html(html, value):
14931+            i = 'input checked="checked" type="radio" id="mutable-type-%s"'
14932+            self.failUnlessIn(i % value, html)
14933+        d.addCallback(_got_html, "sdmf")
14934+        d.addCallback(lambda ignored:
14935+            self.GET(self.public_url + "/foo", followRedirect=True))
14936+        d.addCallback(_got_html, "sdmf")
14937+        # Now switch the configuration value to MDMF. The MDMF radio
14938+        # buttons should now be checked on these pages.
14939+        def _swap_values(ignored):
14940+            self.s.mutable_file_default = MDMF_VERSION
14941+        d.addCallback(_swap_values)
14942+        d.addCallback(lambda ignored: self.GET("/"))
14943+        d.addCallback(_got_html, "mdmf")
14944+        d.addCallback(lambda ignored:
14945+            self.GET(self.public_url + "/foo", followRedirect=True))
14946+        d.addCallback(_got_html, "mdmf")
14947+        return d
14948+
14949     def test_GET_DIRURL(self):
14950         # the addSlash means we get a redirect here
14951         # from /uri/$URI/foo/ , we need ../../../ to get back to the root
14952hunk ./src/allmydata/test/test_web.py 1535
14953         d.addCallback(self.failUnlessIsFooJSON)
14954         return d
14955 
14956+    def test_GET_DIRURL_json_mutable_type(self):
14957+        d = self.PUT(self.public_url + \
14958+                     "/foo/sdmf.txt?mutable=true&mutable-type=sdmf",
14959+                     self.NEWFILE_CONTENTS * 300000)
14960+        d.addCallback(lambda ignored:
14961+            self.PUT(self.public_url + \
14962+                     "/foo/mdmf.txt?mutable=true&mutable-type=mdmf",
14963+                     self.NEWFILE_CONTENTS * 300000))
14964+        # Now we have an MDMF and SDMF file in the directory. If we GET
14965+        # its JSON, we should see their encodings.
14966+        d.addCallback(lambda ignored:
14967+            self.GET(self.public_url + "/foo?t=json"))
14968+        def _got_json(json):
14969+            data = simplejson.loads(json)
14970+            assert data[0] == "dirnode"
14971+
14972+            data = data[1]
14973+            kids = data['children']
14974+
14975+            mdmf_data = kids['mdmf.txt'][1]
14976+            self.failUnlessIn("mutable-type", mdmf_data)
14977+            self.failUnlessEqual(mdmf_data['mutable-type'], "mdmf")
14978+
14979+            sdmf_data = kids['sdmf.txt'][1]
14980+            self.failUnlessIn("mutable-type", sdmf_data)
14981+            self.failUnlessEqual(sdmf_data['mutable-type'], "sdmf")
14982+        d.addCallback(_got_json)
14983+        return d
14984+
14985 
14986     def test_POST_DIRURL_manifest_no_ophandle(self):
14987         d = self.shouldFail2(error.Error,
14988hunk ./src/allmydata/test/test_web.py 1659
14989         d.addCallback(self.get_operation_results, "127", "json")
14990         def _got_json(stats):
14991             expected = {"count-immutable-files": 3,
14992-                        "count-mutable-files": 0,
14993+                        "count-mutable-files": 2,
14994                         "count-literal-files": 0,
14995hunk ./src/allmydata/test/test_web.py 1661
14996-                        "count-files": 3,
14997+                        "count-files": 5,
14998                         "count-directories": 3,
14999                         "size-immutable-files": 57,
15000                         "size-literal-files": 0,
15001hunk ./src/allmydata/test/test_web.py 1667
15002                         #"size-directories": 1912, # varies
15003                         #"largest-directory": 1590,
15004-                        "largest-directory-children": 5,
15005+                        "largest-directory-children": 7,
15006                         "largest-immutable-file": 19,
15007                         }
15008             for k,v in expected.iteritems():
15009hunk ./src/allmydata/test/test_web.py 1684
15010         def _check(res):
15011             self.failUnless(res.endswith("\n"))
15012             units = [simplejson.loads(t) for t in res[:-1].split("\n")]
15013-            self.failUnlessReallyEqual(len(units), 7)
15014+            self.failUnlessReallyEqual(len(units), 9)
15015             self.failUnlessEqual(units[-1]["type"], "stats")
15016             first = units[0]
15017             self.failUnlessEqual(first["path"], [])
15018hunk ./src/allmydata/test/test_web.py 1695
15019             self.failIfEqual(baz["storage-index"], None)
15020             self.failIfEqual(baz["verifycap"], None)
15021             self.failIfEqual(baz["repaircap"], None)
15022+            # XXX: Add quux to this test.
15023             return
15024         d.addCallback(_check)
15025         return d
15026hunk ./src/allmydata/test/test_web.py 1722
15027         d.addCallback(self.failUnlessNodeKeysAre, [])
15028         return d
15029 
15030+    def test_PUT_NEWDIRURL_mdmf(self):
15031+        d = self.PUT(self.public_url + "/foo/newdir?t=mkdir&mutable-type=mdmf", "")
15032+        d.addCallback(lambda res:
15033+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
15034+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15035+        d.addCallback(lambda node:
15036+            self.failUnlessEqual(node._node.get_version(), MDMF_VERSION))
15037+        return d
15038+
15039+    def test_PUT_NEWDIRURL_sdmf(self):
15040+        d = self.PUT(self.public_url + "/foo/newdir?t=mkdir&mutable-type=sdmf",
15041+                     "")
15042+        d.addCallback(lambda res:
15043+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
15044+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15045+        d.addCallback(lambda node:
15046+            self.failUnlessEqual(node._node.get_version(), SDMF_VERSION))
15047+        return d
15048+
15049+    def test_PUT_NEWDIRURL_bad_mutable_type(self):
15050+        return self.shouldHTTPError("test bad mutable type",
15051+                             400, "Bad Request", "Unknown type: foo",
15052+                             self.PUT, self.public_url + \
15053+                             "/foo/newdir?t=mkdir&mutable-type=foo", "")
15054+
15055     def test_POST_NEWDIRURL(self):
15056         d = self.POST2(self.public_url + "/foo/newdir?t=mkdir", "")
15057         d.addCallback(lambda res:
15058hunk ./src/allmydata/test/test_web.py 1755
15059         d.addCallback(self.failUnlessNodeKeysAre, [])
15060         return d
15061 
15062+    def test_POST_NEWDIRURL_mdmf(self):
15063+        d = self.POST2(self.public_url + "/foo/newdir?t=mkdir&mutable-type=mdmf", "")
15064+        d.addCallback(lambda res:
15065+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
15066+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15067+        d.addCallback(lambda node:
15068+            self.failUnlessEqual(node._node.get_version(), MDMF_VERSION))
15069+        return d
15070+
15071+    def test_POST_NEWDIRURL_sdmf(self):
15072+        d = self.POST2(self.public_url + "/foo/newdir?t=mkdir&mutable-type=sdmf", "")
15073+        d.addCallback(lambda res:
15074+            self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
15075+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15076+        d.addCallback(lambda node:
15077+            self.failUnlessEqual(node._node.get_version(), SDMF_VERSION))
15078+        return d
15079+
15080+    def test_POST_NEWDIRURL_bad_mutable_type(self):
15081+        return self.shouldHTTPError("test bad mutable type",
15082+                                    400, "Bad Request", "Unknown type: foo",
15083+                                    self.POST2, self.public_url + \
15084+                                    "/foo/newdir?t=mkdir&mutable-type=foo", "")
15085+
15086     def test_POST_NEWDIRURL_emptyname(self):
15087         # an empty pathname component (i.e. a double-slash) is disallowed
15088         d = self.shouldFail2(error.Error, "test_POST_NEWDIRURL_emptyname",
15089hunk ./src/allmydata/test/test_web.py 1787
15090                              self.POST, self.public_url + "//?t=mkdir")
15091         return d
15092 
15093-    def test_POST_NEWDIRURL_initial_children(self):
15094+    def _do_POST_NEWDIRURL_initial_children_test(self, version=None):
15095         (newkids, caps) = self._create_initial_children()
15096hunk ./src/allmydata/test/test_web.py 1789
15097-        d = self.POST2(self.public_url + "/foo/newdir?t=mkdir-with-children",
15098+        query = "/foo/newdir?t=mkdir-with-children"
15099+        if version == MDMF_VERSION:
15100+            query += "&mutable-type=mdmf"
15101+        elif version == SDMF_VERSION:
15102+            query += "&mutable-type=sdmf"
15103+        else:
15104+            version = SDMF_VERSION # for later
15105+        d = self.POST2(self.public_url + query,
15106                        simplejson.dumps(newkids))
15107         def _check(uri):
15108             n = self.s.create_node_from_uri(uri.strip())
15109hunk ./src/allmydata/test/test_web.py 1801
15110             d2 = self.failUnlessNodeKeysAre(n, newkids.keys())
15111+            self.failUnlessEqual(n._node.get_version(), version)
15112             d2.addCallback(lambda ign:
15113                            self.failUnlessROChildURIIs(n, u"child-imm",
15114                                                        caps['filecap1']))
15115hunk ./src/allmydata/test/test_web.py 1839
15116         d.addCallback(self.failUnlessROChildURIIs, u"child-imm", caps['filecap1'])
15117         return d
15118 
15119+    def test_POST_NEWDIRURL_initial_children(self):
15120+        return self._do_POST_NEWDIRURL_initial_children_test()
15121+
15122+    def test_POST_NEWDIRURL_initial_children_mdmf(self):
15123+        return self._do_POST_NEWDIRURL_initial_children_test(MDMF_VERSION)
15124+
15125+    def test_POST_NEWDIRURL_initial_children_sdmf(self):
15126+        return self._do_POST_NEWDIRURL_initial_children_test(SDMF_VERSION)
15127+
15128+    def test_POST_NEWDIRURL_initial_children_bad_mutable_type(self):
15129+        (newkids, caps) = self._create_initial_children()
15130+        return self.shouldHTTPError("test bad mutable type",
15131+                                    400, "Bad Request", "Unknown type: foo",
15132+                                    self.POST2, self.public_url + \
15133+                                    "/foo/newdir?t=mkdir-with-children&mutable-type=foo",
15134+                                    simplejson.dumps(newkids))
15135+
15136     def test_POST_NEWDIRURL_immutable(self):
15137         (newkids, caps) = self._create_immutable_children()
15138         d = self.POST2(self.public_url + "/foo/newdir?t=mkdir-immutable",
15139hunk ./src/allmydata/test/test_web.py 1956
15140         d.addCallback(self.failUnlessNodeKeysAre, [])
15141         return d
15142 
15143+    def test_PUT_NEWDIRURL_mkdirs_mdmf(self):
15144+        d = self.PUT(self.public_url + "/foo/subdir/newdir?t=mkdir&mutable-type=mdmf", "")
15145+        d.addCallback(lambda ignored:
15146+            self.failUnlessNodeHasChild(self._foo_node, u"subdir"))
15147+        d.addCallback(lambda ignored:
15148+            self.failIfNodeHasChild(self._foo_node, u"newdir"))
15149+        d.addCallback(lambda ignored:
15150+            self._foo_node.get_child_at_path(u"subdir"))
15151+        def _got_subdir(subdir):
15152+            # XXX: What we want?
15153+            #self.failUnlessEqual(subdir._node.get_version(), MDMF_VERSION)
15154+            self.failUnlessNodeHasChild(subdir, u"newdir")
15155+            return subdir.get_child_at_path(u"newdir")
15156+        d.addCallback(_got_subdir)
15157+        d.addCallback(lambda newdir:
15158+            self.failUnlessEqual(newdir._node.get_version(), MDMF_VERSION))
15159+        return d
15160+
15161+    def test_PUT_NEWDIRURL_mkdirs_sdmf(self):
15162+        d = self.PUT(self.public_url + "/foo/subdir/newdir?t=mkdir&mutable-type=sdmf", "")
15163+        d.addCallback(lambda ignored:
15164+            self.failUnlessNodeHasChild(self._foo_node, u"subdir"))
15165+        d.addCallback(lambda ignored:
15166+            self.failIfNodeHasChild(self._foo_node, u"newdir"))
15167+        d.addCallback(lambda ignored:
15168+            self._foo_node.get_child_at_path(u"subdir"))
15169+        def _got_subdir(subdir):
15170+            # XXX: What we want?
15171+            #self.failUnlessEqual(subdir._node.get_version(), MDMF_VERSION)
15172+            self.failUnlessNodeHasChild(subdir, u"newdir")
15173+            return subdir.get_child_at_path(u"newdir")
15174+        d.addCallback(_got_subdir)
15175+        d.addCallback(lambda newdir:
15176+            self.failUnlessEqual(newdir._node.get_version(), SDMF_VERSION))
15177+        return d
15178+
15179+    def test_PUT_NEWDIRURL_mkdirs_bad_mutable_type(self):
15180+        return self.shouldHTTPError("test bad mutable type",
15181+                                    400, "Bad Request", "Unknown type: foo",
15182+                                    self.PUT, self.public_url + \
15183+                                    "/foo/subdir/newdir?t=mkdir&mutable-type=foo",
15184+                                    "")
15185+
15186     def test_DELETE_DIRURL(self):
15187         d = self.DELETE(self.public_url + "/foo")
15188         d.addCallback(lambda res:
15189hunk ./src/allmydata/test/test_web.py 2236
15190         return d
15191 
15192     def test_POST_upload_no_link_mutable_toobig(self):
15193-        d = self.shouldFail2(error.Error,
15194-                             "test_POST_upload_no_link_mutable_toobig",
15195-                             "413 Request Entity Too Large",
15196-                             "SDMF is limited to one segment, and 10001 > 10000",
15197-                             self.POST,
15198-                             "/uri", t="upload", mutable="true",
15199-                             file=("new.txt",
15200-                                   "b" * (self.s.MUTABLE_SIZELIMIT+1)) )
15201+        # The SDMF size limit is no longer in place, so we should be
15202+        # able to upload mutable files that are as large as we want them
15203+        # to be.
15204+        d = self.POST("/uri", t="upload", mutable="true",
15205+                      file=("new.txt", "b" * (self.s.MUTABLE_SIZELIMIT + 1)))
15206+        return d
15207+
15208+
15209+    def test_POST_upload_mutable_type_unlinked(self):
15210+        d = self.POST("/uri?t=upload&mutable=true&mutable-type=sdmf",
15211+                      file=("sdmf.txt", self.NEWFILE_CONTENTS * 300000))
15212+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
15213+        def _got_json(json, version):
15214+            data = simplejson.loads(json)
15215+            data = data[1]
15216+
15217+            self.failUnlessIn("mutable-type", data)
15218+            self.failUnlessEqual(data['mutable-type'], version)
15219+        d.addCallback(_got_json, "sdmf")
15220+        d.addCallback(lambda ignored:
15221+            self.POST("/uri?t=upload&mutable=true&mutable-type=mdmf",
15222+                      file=('mdmf.txt', self.NEWFILE_CONTENTS * 300000)))
15223+        def _got_filecap(filecap):
15224+            self.failUnless(filecap.startswith("URI:MDMF"))
15225+            return filecap
15226+        d.addCallback(_got_filecap)
15227+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
15228+        d.addCallback(_got_json, "mdmf")
15229+        return d
15230+
15231+    def test_POST_upload_mutable_type_unlinked_bad_mutable_type(self):
15232+        return self.shouldHTTPError("test bad mutable type",
15233+                                    400, "Bad Request", "Unknown type: foo",
15234+                                    self.POST,
15235+                                    "/uri?t=upload&mutable=true&mutable-type=foo",
15236+                                    file=("foo.txt", self.NEWFILE_CONTENTS * 300000))
15237+
15238+    def test_POST_upload_mutable_type(self):
15239+        d = self.POST(self.public_url + \
15240+                      "/foo?t=upload&mutable=true&mutable-type=sdmf",
15241+                      file=("sdmf.txt", self.NEWFILE_CONTENTS * 300000))
15242+        fn = self._foo_node
15243+        def _got_cap(filecap, filename):
15244+            filenameu = unicode(filename)
15245+            self.failUnlessURIMatchesRWChild(filecap, fn, filenameu)
15246+            return self.GET(self.public_url + "/foo/%s?t=json" % filename)
15247+        def _got_mdmf_cap(filecap):
15248+            self.failUnless(filecap.startswith("URI:MDMF"))
15249+            return filecap
15250+        d.addCallback(_got_cap, "sdmf.txt")
15251+        def _got_json(json, version):
15252+            data = simplejson.loads(json)
15253+            data = data[1]
15254+
15255+            self.failUnlessIn("mutable-type", data)
15256+            self.failUnlessEqual(data['mutable-type'], version)
15257+        d.addCallback(_got_json, "sdmf")
15258+        d.addCallback(lambda ignored:
15259+            self.POST(self.public_url + \
15260+                      "/foo?t=upload&mutable=true&mutable-type=mdmf",
15261+                      file=("mdmf.txt", self.NEWFILE_CONTENTS * 300000)))
15262+        d.addCallback(_got_mdmf_cap)
15263+        d.addCallback(_got_cap, "mdmf.txt")
15264+        d.addCallback(_got_json, "mdmf")
15265         return d
15266 
15267hunk ./src/allmydata/test/test_web.py 2302
15268+    def test_POST_upload_bad_mutable_type(self):
15269+        return self.shouldHTTPError("test bad mutable type",
15270+                                    400, "Bad Request", "Unknown type: foo",
15271+                                    self.POST, self.public_url + \
15272+                                    "/foo?t=upload&mutable=true&mutable-type=foo",
15273+                                    file=("foo.txt", self.NEWFILE_CONTENTS * 300000))
15274+
15275     def test_POST_upload_mutable(self):
15276         # this creates a mutable file
15277         d = self.POST(self.public_url + "/foo", t="upload", mutable="true",
15278hunk ./src/allmydata/test/test_web.py 2433
15279             self.failUnlessReallyEqual(headers["content-type"], ["text/plain"])
15280         d.addCallback(_got_headers)
15281 
15282-        # make sure that size errors are displayed correctly for overwrite
15283-        d.addCallback(lambda res:
15284-                      self.shouldFail2(error.Error,
15285-                                       "test_POST_upload_mutable-toobig",
15286-                                       "413 Request Entity Too Large",
15287-                                       "SDMF is limited to one segment, and 10001 > 10000",
15288-                                       self.POST,
15289-                                       self.public_url + "/foo", t="upload",
15290-                                       mutable="true",
15291-                                       file=("new.txt",
15292-                                             "b" * (self.s.MUTABLE_SIZELIMIT+1)),
15293-                                       ))
15294-
15295+        # make sure that outdated size limits aren't enforced anymore.
15296+        d.addCallback(lambda ignored:
15297+            self.POST(self.public_url + "/foo", t="upload",
15298+                      mutable="true",
15299+                      file=("new.txt",
15300+                            "b" * (self.s.MUTABLE_SIZELIMIT+1))))
15301         d.addErrback(self.dump_error)
15302         return d
15303 
15304hunk ./src/allmydata/test/test_web.py 2443
15305     def test_POST_upload_mutable_toobig(self):
15306-        d = self.shouldFail2(error.Error,
15307-                             "test_POST_upload_mutable_toobig",
15308-                             "413 Request Entity Too Large",
15309-                             "SDMF is limited to one segment, and 10001 > 10000",
15310-                             self.POST,
15311-                             self.public_url + "/foo",
15312-                             t="upload", mutable="true",
15313-                             file=("new.txt",
15314-                                   "b" * (self.s.MUTABLE_SIZELIMIT+1)) )
15315+        # SDMF had a size limit that was removed a while ago. MDMF has
15316+        # never had a size limit. Make sure that we do not encounter
15317+        # errors when trying to upload large mutable files, since
15318+        # nothing in the code should prohibit large mutable files
15319+        # anymore.
15320+        d = self.POST(self.public_url + "/foo",
15321+                      t="upload", mutable="true",
15322+                      file=("new.txt", "b" * (self.s.MUTABLE_SIZELIMIT + 1)))
15323         return d
15324 
15325     def dump_error(self, f):
15326hunk ./src/allmydata/test/test_web.py 2538
15327         # make sure that nothing was added
15328         d.addCallback(lambda res:
15329                       self.failUnlessNodeKeysAre(self._foo_node,
15330-                                                 [u"bar.txt", u"blockingfile",
15331-                                                  u"empty", u"n\u00fc.txt",
15332+                                                 [u"bar.txt", u"baz.txt", u"blockingfile",
15333+                                                  u"empty", u"n\u00fc.txt", u"quux.txt",
15334                                                   u"sub"]))
15335         return d
15336 
15337hunk ./src/allmydata/test/test_web.py 2661
15338         d.addCallback(_check3)
15339         return d
15340 
15341+    def test_POST_FILEURL_mdmf_check(self):
15342+        quux_url = "/uri/%s" % urllib.quote(self._quux_txt_uri)
15343+        d = self.POST(quux_url, t="check")
15344+        def _check(res):
15345+            self.failUnlessIn("Healthy", res)
15346+        d.addCallback(_check)
15347+        quux_extension_url = "/uri/%s" % urllib.quote("%s:3:131073" % self._quux_txt_uri)
15348+        d.addCallback(lambda ignored:
15349+            self.POST(quux_extension_url, t="check"))
15350+        d.addCallback(_check)
15351+        return d
15352+
15353+    def test_POST_FILEURL_mdmf_check_and_repair(self):
15354+        quux_url = "/uri/%s" % urllib.quote(self._quux_txt_uri)
15355+        d = self.POST(quux_url, t="check", repair="true")
15356+        def _check(res):
15357+            self.failUnlessIn("Healthy", res)
15358+        d.addCallback(_check)
15359+        quux_extension_url = "/uri/%s" % \
15360+            urllib.quote("%s:3:131073" % self._quux_txt_uri)
15361+        d.addCallback(lambda ignored:
15362+            self.POST(quux_extension_url, t="check", repair="true"))
15363+        d.addCallback(_check)
15364+        return d
15365+
15366     def wait_for_operation(self, ignored, ophandle):
15367         url = "/operations/" + ophandle
15368         url += "?t=status&output=JSON"
15369hunk ./src/allmydata/test/test_web.py 2731
15370         d.addCallback(self.wait_for_operation, "123")
15371         def _check_json(data):
15372             self.failUnlessReallyEqual(data["finished"], True)
15373-            self.failUnlessReallyEqual(data["count-objects-checked"], 8)
15374-            self.failUnlessReallyEqual(data["count-objects-healthy"], 8)
15375+            self.failUnlessReallyEqual(data["count-objects-checked"], 10)
15376+            self.failUnlessReallyEqual(data["count-objects-healthy"], 10)
15377         d.addCallback(_check_json)
15378         d.addCallback(self.get_operation_results, "123", "html")
15379         def _check_html(res):
15380hunk ./src/allmydata/test/test_web.py 2736
15381-            self.failUnless("Objects Checked: <span>8</span>" in res)
15382-            self.failUnless("Objects Healthy: <span>8</span>" in res)
15383+            self.failUnless("Objects Checked: <span>10</span>" in res)
15384+            self.failUnless("Objects Healthy: <span>10</span>" in res)
15385         d.addCallback(_check_html)
15386 
15387         d.addCallback(lambda res:
15388hunk ./src/allmydata/test/test_web.py 2766
15389         d.addCallback(self.wait_for_operation, "124")
15390         def _check_json(data):
15391             self.failUnlessReallyEqual(data["finished"], True)
15392-            self.failUnlessReallyEqual(data["count-objects-checked"], 8)
15393-            self.failUnlessReallyEqual(data["count-objects-healthy-pre-repair"], 8)
15394+            self.failUnlessReallyEqual(data["count-objects-checked"], 10)
15395+            self.failUnlessReallyEqual(data["count-objects-healthy-pre-repair"], 10)
15396             self.failUnlessReallyEqual(data["count-objects-unhealthy-pre-repair"], 0)
15397             self.failUnlessReallyEqual(data["count-corrupt-shares-pre-repair"], 0)
15398             self.failUnlessReallyEqual(data["count-repairs-attempted"], 0)
15399hunk ./src/allmydata/test/test_web.py 2773
15400             self.failUnlessReallyEqual(data["count-repairs-successful"], 0)
15401             self.failUnlessReallyEqual(data["count-repairs-unsuccessful"], 0)
15402-            self.failUnlessReallyEqual(data["count-objects-healthy-post-repair"], 8)
15403+            self.failUnlessReallyEqual(data["count-objects-healthy-post-repair"], 10)
15404             self.failUnlessReallyEqual(data["count-objects-unhealthy-post-repair"], 0)
15405             self.failUnlessReallyEqual(data["count-corrupt-shares-post-repair"], 0)
15406         d.addCallback(_check_json)
15407hunk ./src/allmydata/test/test_web.py 2779
15408         d.addCallback(self.get_operation_results, "124", "html")
15409         def _check_html(res):
15410-            self.failUnless("Objects Checked: <span>8</span>" in res)
15411+            self.failUnless("Objects Checked: <span>10</span>" in res)
15412 
15413hunk ./src/allmydata/test/test_web.py 2781
15414-            self.failUnless("Objects Healthy (before repair): <span>8</span>" in res)
15415+            self.failUnless("Objects Healthy (before repair): <span>10</span>" in res)
15416             self.failUnless("Objects Unhealthy (before repair): <span>0</span>" in res)
15417             self.failUnless("Corrupt Shares (before repair): <span>0</span>" in res)
15418 
15419hunk ./src/allmydata/test/test_web.py 2789
15420             self.failUnless("Repairs Successful: <span>0</span>" in res)
15421             self.failUnless("Repairs Unsuccessful: <span>0</span>" in res)
15422 
15423-            self.failUnless("Objects Healthy (after repair): <span>8</span>" in res)
15424+            self.failUnless("Objects Healthy (after repair): <span>10</span>" in res)
15425             self.failUnless("Objects Unhealthy (after repair): <span>0</span>" in res)
15426             self.failUnless("Corrupt Shares (after repair): <span>0</span>" in res)
15427         d.addCallback(_check_html)
15428hunk ./src/allmydata/test/test_web.py 2808
15429         d.addCallback(self.failUnlessNodeKeysAre, [])
15430         return d
15431 
15432+    def test_POST_mkdir_mdmf(self):
15433+        d = self.POST(self.public_url + "/foo?t=mkdir&name=newdir&mutable-type=mdmf")
15434+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15435+        d.addCallback(lambda node:
15436+            self.failUnlessEqual(node._node.get_version(), MDMF_VERSION))
15437+        return d
15438+
15439+    def test_POST_mkdir_sdmf(self):
15440+        d = self.POST(self.public_url + "/foo?t=mkdir&name=newdir&mutable-type=sdmf")
15441+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15442+        d.addCallback(lambda node:
15443+            self.failUnlessEqual(node._node.get_version(), SDMF_VERSION))
15444+        return d
15445+
15446+    def test_POST_mkdir_bad_mutable_type(self):
15447+        return self.shouldHTTPError("test bad mutable type",
15448+                                    400, "Bad Request", "Unknown type: foo",
15449+                                    self.POST, self.public_url + \
15450+                                    "/foo?t=mkdir&name=newdir&mutable-type=foo")
15451+
15452     def test_POST_mkdir_initial_children(self):
15453         (newkids, caps) = self._create_initial_children()
15454         d = self.POST2(self.public_url +
15455hunk ./src/allmydata/test/test_web.py 2841
15456         d.addCallback(self.failUnlessROChildURIIs, u"child-imm", caps['filecap1'])
15457         return d
15458 
15459+    def test_POST_mkdir_initial_children_mdmf(self):
15460+        (newkids, caps) = self._create_initial_children()
15461+        d = self.POST2(self.public_url +
15462+                       "/foo?t=mkdir-with-children&name=newdir&mutable-type=mdmf",
15463+                       simplejson.dumps(newkids))
15464+        d.addCallback(lambda res:
15465+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
15466+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15467+        d.addCallback(lambda node:
15468+            self.failUnlessEqual(node._node.get_version(), MDMF_VERSION))
15469+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15470+        d.addCallback(self.failUnlessROChildURIIs, u"child-imm",
15471+                       caps['filecap1'])
15472+        return d
15473+
15474+    # XXX: Duplication.
15475+    def test_POST_mkdir_initial_children_sdmf(self):
15476+        (newkids, caps) = self._create_initial_children()
15477+        d = self.POST2(self.public_url +
15478+                       "/foo?t=mkdir-with-children&name=newdir&mutable-type=sdmf",
15479+                       simplejson.dumps(newkids))
15480+        d.addCallback(lambda res:
15481+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
15482+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15483+        d.addCallback(lambda node:
15484+            self.failUnlessEqual(node._node.get_version(), SDMF_VERSION))
15485+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15486+        d.addCallback(self.failUnlessROChildURIIs, u"child-imm",
15487+                       caps['filecap1'])
15488+        return d
15489+
15490+    def test_POST_mkdir_initial_children_bad_mutable_type(self):
15491+        (newkids, caps) = self._create_initial_children()
15492+        return self.shouldHTTPError("test bad mutable type",
15493+                                    400, "Bad Request", "Unknown type: foo",
15494+                                    self.POST, self.public_url + \
15495+                                    "/foo?t=mkdir-with-children&name=newdir&mutable-type=foo",
15496+                                    simplejson.dumps(newkids))
15497+
15498     def test_POST_mkdir_immutable(self):
15499         (newkids, caps) = self._create_immutable_children()
15500         d = self.POST2(self.public_url +
15501hunk ./src/allmydata/test/test_web.py 2936
15502         d.addCallback(_after_mkdir)
15503         return d
15504 
15505+    def test_POST_mkdir_no_parentdir_noredirect_mdmf(self):
15506+        d = self.POST("/uri?t=mkdir&mutable-type=mdmf")
15507+        def _after_mkdir(res):
15508+            u = uri.from_string(res)
15509+            # Check that this is an MDMF writecap
15510+            self.failUnlessIsInstance(u, uri.MDMFDirectoryURI)
15511+        d.addCallback(_after_mkdir)
15512+        return d
15513+
15514+    def test_POST_mkdir_no_parentdir_noredirect_sdmf(self):
15515+        d = self.POST("/uri?t=mkdir&mutable-type=sdmf")
15516+        def _after_mkdir(res):
15517+            u = uri.from_string(res)
15518+            self.failUnlessIsInstance(u, uri.DirectoryURI)
15519+        d.addCallback(_after_mkdir)
15520+        return d
15521+
15522+    def test_POST_mkdir_no_parentdir_noredirect_bad_mutable_type(self):
15523+        return self.shouldHTTPError("test bad mutable type",
15524+                                    400, "Bad Request", "Unknown type: foo",
15525+                                    self.POST,
15526+                                    "/uri?t=mkdir&mutable-type=foo")
15527+
15528     def test_POST_mkdir_no_parentdir_noredirect2(self):
15529         # make sure form-based arguments (as on the welcome page) still work
15530         d = self.POST("/uri", t="mkdir")
15531hunk ./src/allmydata/test/test_web.py 3001
15532         filecap3 = node3.get_readonly_uri()
15533         node4 = self.s.create_node_from_uri(make_mutable_file_uri())
15534         dircap = DirectoryNode(node4, None, None).get_uri()
15535+        mdmfcap = make_mutable_file_uri(mdmf=True)
15536         litdircap = "URI:DIR2-LIT:ge3dumj2mewdcotyfqydulbshj5x2lbm"
15537         emptydircap = "URI:DIR2-LIT:"
15538         newkids = {u"child-imm":        ["filenode", {"rw_uri": filecap1,
15539hunk ./src/allmydata/test/test_web.py 3018
15540                                                       "ro_uri": self._make_readonly(dircap)}],
15541                    u"dirchild-lit":     ["dirnode",  {"ro_uri": litdircap}],
15542                    u"dirchild-empty":   ["dirnode",  {"ro_uri": emptydircap}],
15543+                   u"child-mutable-mdmf": ["filenode", {"rw_uri": mdmfcap,
15544+                                                        "ro_uri": self._make_readonly(mdmfcap)}],
15545                    }
15546         return newkids, {'filecap1': filecap1,
15547                          'filecap2': filecap2,
15548hunk ./src/allmydata/test/test_web.py 3029
15549                          'unknown_immcap': unknown_immcap,
15550                          'dircap': dircap,
15551                          'litdircap': litdircap,
15552-                         'emptydircap': emptydircap}
15553+                         'emptydircap': emptydircap,
15554+                         'mdmfcap': mdmfcap}
15555 
15556     def _create_immutable_children(self):
15557         contents, n, filecap1 = self.makefile(12)
15558hunk ./src/allmydata/test/test_web.py 3571
15559                                                       contents))
15560         return d
15561 
15562+    def test_PUT_NEWFILEURL_mdmf(self):
15563+        new_contents = self.NEWFILE_CONTENTS * 300000
15564+        d = self.PUT(self.public_url + \
15565+                     "/foo/mdmf.txt?mutable=true&mutable-type=mdmf",
15566+                     new_contents)
15567+        d.addCallback(lambda ignored:
15568+            self.GET(self.public_url + "/foo/mdmf.txt?t=json"))
15569+        def _got_json(json):
15570+            data = simplejson.loads(json)
15571+            data = data[1]
15572+            self.failUnlessIn("mutable-type", data)
15573+            self.failUnlessEqual(data['mutable-type'], "mdmf")
15574+            self.failUnless(data['rw_uri'].startswith("URI:MDMF"))
15575+            self.failUnless(data['ro_uri'].startswith("URI:MDMF"))
15576+        d.addCallback(_got_json)
15577+        return d
15578+
15579+    def test_PUT_NEWFILEURL_sdmf(self):
15580+        new_contents = self.NEWFILE_CONTENTS * 300000
15581+        d = self.PUT(self.public_url + \
15582+                     "/foo/sdmf.txt?mutable=true&mutable-type=sdmf",
15583+                     new_contents)
15584+        d.addCallback(lambda ignored:
15585+            self.GET(self.public_url + "/foo/sdmf.txt?t=json"))
15586+        def _got_json(json):
15587+            data = simplejson.loads(json)
15588+            data = data[1]
15589+            self.failUnlessIn("mutable-type", data)
15590+            self.failUnlessEqual(data['mutable-type'], "sdmf")
15591+        d.addCallback(_got_json)
15592+        return d
15593+
15594+    def test_PUT_NEWFILEURL_bad_mutable_type(self):
15595+        new_contents = self.NEWFILE_CONTENTS * 300000
15596+        return self.shouldHTTPError("test bad mutable type",
15597+                                    400, "Bad Request", "Unknown type: foo",
15598+                                    self.PUT, self.public_url + \
15599+                                    "/foo/foo.txt?mutable=true&mutable-type=foo",
15600+                                    new_contents)
15601+
15602     def test_PUT_NEWFILEURL_uri_replace(self):
15603         contents, n, new_uri = self.makefile(8)
15604         d = self.PUT(self.public_url + "/foo/bar.txt?t=uri", new_uri)
15605hunk ./src/allmydata/test/test_web.py 3720
15606         d.addCallback(self.failUnlessIsEmptyJSON)
15607         return d
15608 
15609+    def test_PUT_mkdir_mdmf(self):
15610+        d = self.PUT("/uri?t=mkdir&mutable-type=mdmf", "")
15611+        def _got(res):
15612+            u = uri.from_string(res)
15613+            # Check that this is an MDMF writecap
15614+            self.failUnlessIsInstance(u, uri.MDMFDirectoryURI)
15615+        d.addCallback(_got)
15616+        return d
15617+
15618+    def test_PUT_mkdir_sdmf(self):
15619+        d = self.PUT("/uri?t=mkdir&mutable-type=sdmf", "")
15620+        def _got(res):
15621+            u = uri.from_string(res)
15622+            self.failUnlessIsInstance(u, uri.DirectoryURI)
15623+        d.addCallback(_got)
15624+        return d
15625+
15626+    def test_PUT_mkdir_bad_mutable_type(self):
15627+        return self.shouldHTTPError("bad mutable type",
15628+                                    400, "Bad Request", "Unknown type: foo",
15629+                                    self.PUT, "/uri?t=mkdir&mutable-type=foo",
15630+                                    "")
15631+
15632     def test_POST_check(self):
15633         d = self.POST(self.public_url + "/foo", t="check", name="bar.txt")
15634         def _done(res):
15635hunk ./src/allmydata/test/test_web.py 3755
15636         d.addCallback(_done)
15637         return d
15638 
15639+
15640+    def test_PUT_update_at_offset(self):
15641+        file_contents = "test file" * 100000 # about 900 KiB
15642+        d = self.PUT("/uri?mutable=true", file_contents)
15643+        def _then(filecap):
15644+            self.filecap = filecap
15645+            new_data = file_contents[:100]
15646+            new = "replaced and so on"
15647+            new_data += new
15648+            new_data += file_contents[len(new_data):]
15649+            assert len(new_data) == len(file_contents)
15650+            self.new_data = new_data
15651+        d.addCallback(_then)
15652+        d.addCallback(lambda ignored:
15653+            self.PUT("/uri/%s?replace=True&offset=100" % self.filecap,
15654+                     "replaced and so on"))
15655+        def _get_data(filecap):
15656+            n = self.s.create_node_from_uri(filecap)
15657+            return n.download_best_version()
15658+        d.addCallback(_get_data)
15659+        d.addCallback(lambda results:
15660+            self.failUnlessEqual(results, self.new_data))
15661+        # Now try appending things to the file
15662+        d.addCallback(lambda ignored:
15663+            self.PUT("/uri/%s?offset=%d" % (self.filecap, len(self.new_data)),
15664+                     "puppies" * 100))
15665+        d.addCallback(_get_data)
15666+        d.addCallback(lambda results:
15667+            self.failUnlessEqual(results, self.new_data + ("puppies" * 100)))
15668+        # and try replacing the beginning of the file
15669+        d.addCallback(lambda ignored:
15670+            self.PUT("/uri/%s?offset=0" % self.filecap, "begin"))
15671+        d.addCallback(_get_data)
15672+        d.addCallback(lambda results:
15673+            self.failUnlessEqual(results, "begin"+self.new_data[len("begin"):]+("puppies"*100)))
15674+        return d
15675+
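The test above pins down the splice semantics that the webapi's `offset=` parameter is expected to provide: writing at an offset overwrites bytes in place, and a write past the current end extends the file. A standalone sketch of that contract (the helper name `update_at_offset` is hypothetical, not part of the patch):

```python
# Hypothetical sketch of the splice semantics that
# test_PUT_update_at_offset exercises: writing `new` at `offset`
# overwrites len(new) bytes in place, extending the contents if the
# write runs past the current end. Negative offsets are rejected, as
# test_PUT_update_at_invalid_offset expects.
def update_at_offset(contents, new, offset):
    if offset < 0:
        raise ValueError("Invalid offset")
    return contents[:offset] + new + contents[offset + len(new):]
```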
15676+    def test_PUT_update_at_invalid_offset(self):
15677+        file_contents = "test file" * 100000 # about 900 KiB
15678+        d = self.PUT("/uri?mutable=true", file_contents)
15679+        def _then(filecap):
15680+            self.filecap = filecap
15681+        d.addCallback(_then)
15682+        # Negative offsets should cause an error.
15683+        d.addCallback(lambda ignored:
15684+            self.shouldHTTPError("test mutable invalid offset negative",
15685+                                 400, "Bad Request",
15686+                                 "Invalid offset",
15687+                                 self.PUT,
15688+                                 "/uri/%s?offset=-1" % self.filecap,
15689+                                 "foo"))
15690+        return d
15691+
15692+    def test_PUT_update_at_offset_immutable(self):
15693+        file_contents = "Test file" * 100000
15694+        d = self.PUT("/uri", file_contents)
15695+        def _then(filecap):
15696+            self.filecap = filecap
15697+        d.addCallback(_then)
15698+        d.addCallback(lambda ignored:
15699+            self.shouldHTTPError("test immutable update",
15700+                                 400, "Bad Request",
15701+                                 "immutable",
15702+                                 self.PUT,
15703+                                 "/uri/%s?offset=50" % self.filecap,
15704+                                 "foo"))
15705+        return d
15706+
15707+
15708     def test_bad_method(self):
15709         url = self.webish_url + self.public_url + "/foo/bar.txt"
15710         d = self.shouldHTTPError("test_bad_method",
15711hunk ./src/allmydata/test/test_web.py 4093
15712         def _stash_mutable_uri(n, which):
15713             self.uris[which] = n.get_uri()
15714             assert isinstance(self.uris[which], str)
15715-        d.addCallback(lambda ign: c0.create_mutable_file(DATA+"3"))
15716+        d.addCallback(lambda ign:
15717+            c0.create_mutable_file(publish.MutableData(DATA+"3")))
15718         d.addCallback(_stash_mutable_uri, "corrupt")
15719         d.addCallback(lambda ign:
15720                       c0.upload(upload.Data("literal", convergence="")))
15721hunk ./src/allmydata/test/test_web.py 4240
15722         def _stash_mutable_uri(n, which):
15723             self.uris[which] = n.get_uri()
15724             assert isinstance(self.uris[which], str)
15725-        d.addCallback(lambda ign: c0.create_mutable_file(DATA+"3"))
15726+        d.addCallback(lambda ign:
15727+            c0.create_mutable_file(publish.MutableData(DATA+"3")))
15728         d.addCallback(_stash_mutable_uri, "corrupt")
15729 
15730         def _compute_fileurls(ignored):
15731hunk ./src/allmydata/test/test_web.py 4903
15732         def _stash_mutable_uri(n, which):
15733             self.uris[which] = n.get_uri()
15734             assert isinstance(self.uris[which], str)
15735-        d.addCallback(lambda ign: c0.create_mutable_file(DATA+"2"))
15736+        d.addCallback(lambda ign:
15737+            c0.create_mutable_file(publish.MutableData(DATA+"2")))
15738         d.addCallback(_stash_mutable_uri, "mutable")
15739 
15740         def _compute_fileurls(ignored):
15741hunk ./src/allmydata/test/test_web.py 5003
15742                                                         convergence="")))
15743         d.addCallback(_stash_uri, "small")
15744 
15745-        d.addCallback(lambda ign: c0.create_mutable_file("mutable"))
15746+        d.addCallback(lambda ign:
15747+            c0.create_mutable_file(publish.MutableData("mutable")))
15748         d.addCallback(lambda fn: self.rootnode.set_node(u"mutable", fn))
15749         d.addCallback(_stash_uri, "mutable")
15750 
15751hunk ./src/allmydata/web/common.py 12
15752 from allmydata.interfaces import ExistingChildError, NoSuchChildError, \
15753      FileTooLargeError, NotEnoughSharesError, NoSharesError, \
15754      EmptyPathnameComponentError, MustBeDeepImmutableError, \
15755-     MustBeReadonlyError, MustNotBeUnknownRWError
15756+     MustBeReadonlyError, MustNotBeUnknownRWError, SDMF_VERSION, MDMF_VERSION
15757 from allmydata.mutable.common import UnrecoverableFileError
15758 from allmydata.util import abbreviate
15759 from allmydata.util.encodingutil import to_str, quote_output
15760hunk ./src/allmydata/web/common.py 35
15761     else:
15762         return boolean_of_arg(replace)
15763 
15764+
15765+def parse_mutable_type_arg(arg):
15766+    if not arg:
15767+        return None # interpreted by the caller as "let the nodemaker decide"
15768+
15769+    arg = arg.lower()
15770+    if arg == "mdmf":
15771+        return MDMF_VERSION
15772+    elif arg == "sdmf":
15773+        return SDMF_VERSION
15774+
15775+    return "invalid"
15776+
15777+
15778+def parse_offset_arg(offset):
15779+    # XXX: This will raise a ValueError when invoked on something that
15780+    # is not an integer. Is that okay? Or do we want a better error
15781+    # message? Since this call is going to be used by programmers and
15782+    # their tools rather than by users (through the WUI), raising a
15783+    # bare ValueError seems acceptable for now.
15784+    if offset is not None:
15785+        offset = int(offset)
15786+
15787+    return offset
15788+
15789+
15790 def get_root(ctx_or_req):
15791     req = IRequest(ctx_or_req)
15792     # the addSlash=True gives us one extra (empty) segment
15793hunk ./src/allmydata/web/directory.py 19
15794 from allmydata.uri import from_string_dirnode
15795 from allmydata.interfaces import IDirectoryNode, IFileNode, IFilesystemNode, \
15796      IImmutableFileNode, IMutableFileNode, ExistingChildError, \
15797-     NoSuchChildError, EmptyPathnameComponentError
15798+     NoSuchChildError, EmptyPathnameComponentError, SDMF_VERSION, MDMF_VERSION
15799 from allmydata.monitor import Monitor, OperationCancelledError
15800 from allmydata import dirnode
15801 from allmydata.web.common import text_plain, WebError, \
15802hunk ./src/allmydata/web/directory.py 26
15803      IOpHandleTable, NeedOperationHandleError, \
15804      boolean_of_arg, get_arg, get_root, parse_replace_arg, \
15805      should_create_intermediate_directories, \
15806-     getxmlfile, RenderMixin, humanize_failure, convert_children_json
15807+     getxmlfile, RenderMixin, humanize_failure, convert_children_json, \
15808+     parse_mutable_type_arg
15809 from allmydata.web.filenode import ReplaceMeMixin, \
15810      FileNodeHandler, PlaceHolderNodeHandler
15811 from allmydata.web.check_results import CheckResults, \
15812hunk ./src/allmydata/web/directory.py 112
15813                     mutable = True
15814                     if t == "mkdir-immutable":
15815                         mutable = False
15816+
15817+                    mt = None
15818+                    if mutable:
15819+                        arg = get_arg(req, "mutable-type", None)
15820+                        mt = parse_mutable_type_arg(arg)
15821+                        if mt == "invalid":
15822+                            raise WebError("Unknown type: %s" % arg,
15823+                                           http.BAD_REQUEST)
15824                     d = self.node.create_subdirectory(name, kids,
15825hunk ./src/allmydata/web/directory.py 121
15826-                                                      mutable=mutable)
15827+                                                      mutable=mutable,
15828+                                                      mutable_version=mt)
15829                     d.addCallback(make_handler_for,
15830                                   self.client, self.node, name)
15831                     return d
15832hunk ./src/allmydata/web/directory.py 163
15833         if not t:
15834             # render the directory as HTML, using the docFactory and Nevow's
15835             # whole templating thing.
15836-            return DirectoryAsHTML(self.node)
15837+            return DirectoryAsHTML(self.node,
15838+                                   self.client.mutable_file_default)
15839 
15840         if t == "json":
15841             return DirectoryJSONMetadata(ctx, self.node)
15842hunk ./src/allmydata/web/directory.py 253
15843         name = name.decode("utf-8")
15844         replace = boolean_of_arg(get_arg(req, "replace", "true"))
15845         kids = {}
15846-        d = self.node.create_subdirectory(name, kids, overwrite=replace)
15847+        arg = get_arg(req, "mutable-type", None)
15848+        mt = parse_mutable_type_arg(arg)
15849+        if mt == "invalid":
15850+            raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
15851+        elif mt is not None:
15852+            d = self.node.create_subdirectory(name, kids, overwrite=replace,
15853+                                              mutable_version=mt)
15854+        else:
15855+            d = self.node.create_subdirectory(name, kids, overwrite=replace)
15856         d.addCallback(lambda child: child.get_uri()) # TODO: urlencode
15857         return d
15858 
15859hunk ./src/allmydata/web/directory.py 277
15860         req.content.seek(0)
15861         kids_json = req.content.read()
15862         kids = convert_children_json(self.client.nodemaker, kids_json)
15863-        d = self.node.create_subdirectory(name, kids, overwrite=False)
15864+        arg = get_arg(req, "mutable-type", None)
15865+        mt = parse_mutable_type_arg(arg)
15866+        if mt == "invalid":
15867+            raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
15868+        elif mt is not None:
15869+            d = self.node.create_subdirectory(name, kids, overwrite=False,
15870+                                              mutable_version=mt)
15871+        else:
15872+            d = self.node.create_subdirectory(name, kids, overwrite=False)
15873         d.addCallback(lambda child: child.get_uri()) # TODO: urlencode
15874         return d
15875 
15876hunk ./src/allmydata/web/directory.py 582
15877     docFactory = getxmlfile("directory.xhtml")
15878     addSlash = True
15879 
15880-    def __init__(self, node):
15881+    def __init__(self, node, default_mutable_format):
15882         rend.Page.__init__(self)
15883         self.node = node
15884 
15885hunk ./src/allmydata/web/directory.py 586
15886+        assert default_mutable_format in (MDMF_VERSION, SDMF_VERSION)
15887+        self.default_mutable_format = default_mutable_format
15888+
15889     def beforeRender(self, ctx):
15890         # attempt to get the dirnode's children, stashing them (or the
15891         # failure that results) for later use
15892hunk ./src/allmydata/web/directory.py 786
15893 
15894         return ctx.tag
15895 
15896+    # XXX: Duplicated from root.py.
15897     def render_forms(self, ctx, data):
15898         forms = []
15899 
15900hunk ./src/allmydata/web/directory.py 795
15901         if self.dirnode_children is None:
15902             return T.div["No upload forms: directory is unreadable"]
15903 
15904+        mdmf_directory_input = T.input(type='radio', name='mutable-type',
15905+                                       id='mutable-directory-mdmf',
15906+                                       value='mdmf')
15907+        sdmf_directory_input = T.input(type='radio', name='mutable-type',
15908+                                       id='mutable-directory-sdmf',
15909+                                       value='sdmf', checked='checked')
15910         mkdir = T.form(action=".", method="post",
15911                        enctype="multipart/form-data")[
15912             T.fieldset[
15913hunk ./src/allmydata/web/directory.py 809
15914             T.legend(class_="freeform-form-label")["Create a new directory in this directory"],
15915             "New directory name: ",
15916             T.input(type="text", name="name"), " ",
15917+            T.label(for_='mutable-directory-sdmf')["SDMF"],
15918+            sdmf_directory_input,
15919+            T.label(for_='mutable-directory-mdmf')["MDMF"],
15920+            mdmf_directory_input,
15921             T.input(type="submit", value="Create"),
15922             ]]
15923         forms.append(T.div(class_="freeform-form")[mkdir])
15924hunk ./src/allmydata/web/directory.py 817
15925 
15926+        # Build input elements for mutable file type. We do this outside
15927+        # of the list so we can check the appropriate format, based on
15928+        # the default configured in the client (which reflects the
15929+        # default configured in tahoe.cfg)
15930+        if self.default_mutable_format == MDMF_VERSION:
15931+            mdmf_input = T.input(type='radio', name='mutable-type',
15932+                                 id='mutable-type-mdmf', value='mdmf',
15933+                                 checked='checked')
15934+        else:
15935+            mdmf_input = T.input(type='radio', name='mutable-type',
15936+                                 id='mutable-type-mdmf', value='mdmf')
15937+
15938+        if self.default_mutable_format == SDMF_VERSION:
15939+            sdmf_input = T.input(type='radio', name='mutable-type',
15940+                                 id='mutable-type-sdmf', value='sdmf',
15941+                                 checked="checked")
15942+        else:
15943+            sdmf_input = T.input(type='radio', name='mutable-type',
15944+                                 id='mutable-type-sdmf', value='sdmf')
15945+
15946         upload = T.form(action=".", method="post",
15947                         enctype="multipart/form-data")[
15948             T.fieldset[
15949hunk ./src/allmydata/web/directory.py 849
15950             T.input(type="submit", value="Upload"),
15951             " Mutable?:",
15952             T.input(type="checkbox", name="mutable"),
15953+            sdmf_input, T.label(for_="mutable-type-sdmf")["SDMF"],
15954+            mdmf_input,
15955+            T.label(for_="mutable-type-mdmf")["MDMF (experimental)"],
15956             ]]
15957         forms.append(T.div(class_="freeform-form")[upload])
15958 
15959hunk ./src/allmydata/web/directory.py 887
15960                 kiddata = ("filenode", {'size': childnode.get_size(),
15961                                         'mutable': childnode.is_mutable(),
15962                                         })
15963+                if childnode.is_mutable() and \
15964+                    childnode.get_version() is not None:
15965+                    mutable_type = childnode.get_version()
15966+                    assert mutable_type in (SDMF_VERSION, MDMF_VERSION)
15967+
15968+                    if mutable_type == MDMF_VERSION:
15969+                        mutable_type = "mdmf"
15970+                    else:
15971+                        mutable_type = "sdmf"
15972+                    kiddata[1]['mutable-type'] = mutable_type
15973+
15974             elif IDirectoryNode.providedBy(childnode):
15975                 kiddata = ("dirnode", {'mutable': childnode.is_mutable()})
15976             else:
15977hunk ./src/allmydata/web/filenode.py 9
15978 from nevow import url, rend
15979 from nevow.inevow import IRequest
15980 
15981-from allmydata.interfaces import ExistingChildError
15982+from allmydata.interfaces import ExistingChildError, SDMF_VERSION, MDMF_VERSION
15983 from allmydata.monitor import Monitor
15984 from allmydata.immutable.upload import FileHandle
15985hunk ./src/allmydata/web/filenode.py 12
15986+from allmydata.mutable.publish import MutableFileHandle
15987+from allmydata.mutable.common import MODE_READ
15988 from allmydata.util import log, base32
15989 
15990 from allmydata.web.common import text_plain, WebError, RenderMixin, \
15991hunk ./src/allmydata/web/filenode.py 18
15992      boolean_of_arg, get_arg, should_create_intermediate_directories, \
15993-     MyExceptionHandler, parse_replace_arg
15994+     MyExceptionHandler, parse_replace_arg, parse_offset_arg, \
15995+     parse_mutable_type_arg
15996 from allmydata.web.check_results import CheckResults, \
15997      CheckAndRepairResults, LiteralCheckResults
15998 from allmydata.web.info import MoreInfo
15999hunk ./src/allmydata/web/filenode.py 29
16000         # a new file is being uploaded in our place.
16001         mutable = boolean_of_arg(get_arg(req, "mutable", "false"))
16002         if mutable:
16003-            req.content.seek(0)
16004-            data = req.content.read()
16005-            d = client.create_mutable_file(data)
16006+            arg = get_arg(req, "mutable-type", None)
16007+            mutable_type = parse_mutable_type_arg(arg)
16008+            if mutable_type == "invalid":
16009+                raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
16010+
16011+            data = MutableFileHandle(req.content)
16012+            d = client.create_mutable_file(data, version=mutable_type)
16013             def _uploaded(newnode):
16014                 d2 = self.parentnode.set_node(self.name, newnode,
16015                                               overwrite=replace)
16016hunk ./src/allmydata/web/filenode.py 68
16017         d.addCallback(lambda res: childnode.get_uri())
16018         return d
16019 
16020-    def _read_data_from_formpost(self, req):
16021-        # SDMF: files are small, and we can only upload data, so we read
16022-        # the whole file into memory before uploading.
16023-        contents = req.fields["file"]
16024-        contents.file.seek(0)
16025-        data = contents.file.read()
16026-        return data
16027 
16028     def replace_me_with_a_formpost(self, req, client, replace):
16029         # create a new file, maybe mutable, maybe immutable
16030hunk ./src/allmydata/web/filenode.py 73
16031         mutable = boolean_of_arg(get_arg(req, "mutable", "false"))
16032 
16033+        # grab the file contents from the formpost; used by both branches
16034+        contents = req.fields["file"]
16035         if mutable:
16036hunk ./src/allmydata/web/filenode.py 76
16037-            data = self._read_data_from_formpost(req)
16038-            d = client.create_mutable_file(data)
16039+            arg = get_arg(req, "mutable-type", None)
16040+            mutable_type = parse_mutable_type_arg(arg)
16041+            if mutable_type == "invalid":
16042+                raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
16043+            uploadable = MutableFileHandle(contents.file)
16044+            d = client.create_mutable_file(uploadable, version=mutable_type)
16045             def _uploaded(newnode):
16046                 d2 = self.parentnode.set_node(self.name, newnode,
16047                                               overwrite=replace)
16048hunk ./src/allmydata/web/filenode.py 89
16049                 return d2
16050             d.addCallback(_uploaded)
16051             return d
16052-        # create an immutable file
16053-        contents = req.fields["file"]
16054+
16055         uploadable = FileHandle(contents.file, convergence=client.convergence)
16056         d = self.parentnode.add_file(self.name, uploadable, overwrite=replace)
16057         d.addCallback(lambda newnode: newnode.get_uri())
16058hunk ./src/allmydata/web/filenode.py 95
16059         return d
16060 
16061+
16062 class PlaceHolderNodeHandler(RenderMixin, rend.Page, ReplaceMeMixin):
16063     def __init__(self, client, parentnode, name):
16064         rend.Page.__init__(self)
16065hunk ./src/allmydata/web/filenode.py 178
16066             # properly. So we assume that at least the browser will agree
16067             # with itself, and echo back the same bytes that we were given.
16068             filename = get_arg(req, "filename", self.name) or "unknown"
16069-            if self.node.is_mutable():
16070-                # some day: d = self.node.get_best_version()
16071-                d = makeMutableDownloadable(self.node)
16072-            else:
16073-                d = defer.succeed(self.node)
16074+            d = self.node.get_best_readable_version()
16075             d.addCallback(lambda dn: FileDownloader(dn, filename))
16076             return d
16077         if t == "json":
16078hunk ./src/allmydata/web/filenode.py 182
16079-            if self.parentnode and self.name:
16080-                d = self.parentnode.get_metadata_for(self.name)
16081+            # We do this to make sure that fields like size and
16082+            # mutable-type (which depend on the file on the grid and not
16083+            # just on the cap) are filled in. The latter gets used in
16084+            # tests, in particular.
16085+            #
16086+            # TODO: Make it so that the servermap knows how to update in
16087+            # a mode specifically designed to fill in these fields, and
16088+            # then update it in that mode.
16089+            if self.node.is_mutable():
16090+                d = self.node.get_servermap(MODE_READ)
16091             else:
16092                 d = defer.succeed(None)
16093hunk ./src/allmydata/web/filenode.py 194
16094+            if self.parentnode and self.name:
16095+                d.addCallback(lambda ignored:
16096+                    self.parentnode.get_metadata_for(self.name))
16097+            else:
16098+                d.addCallback(lambda ignored: None)
16099             d.addCallback(lambda md: FileJSONMetadata(ctx, self.node, md))
16100             return d
16101         if t == "info":
16102hunk ./src/allmydata/web/filenode.py 215
16103         if t:
16104             raise WebError("GET file: bad t=%s" % t)
16105         filename = get_arg(req, "filename", self.name) or "unknown"
16106-        if self.node.is_mutable():
16107-            # some day: d = self.node.get_best_version()
16108-            d = makeMutableDownloadable(self.node)
16109-        else:
16110-            d = defer.succeed(self.node)
16111+        d = self.node.get_best_readable_version()
16112         d.addCallback(lambda dn: FileDownloader(dn, filename))
16113         return d
16114 
16115hunk ./src/allmydata/web/filenode.py 223
16116         req = IRequest(ctx)
16117         t = get_arg(req, "t", "").strip()
16118         replace = parse_replace_arg(get_arg(req, "replace", "true"))
16119+        offset = parse_offset_arg(get_arg(req, "offset", None))
16120 
16121         if not t:
16122hunk ./src/allmydata/web/filenode.py 226
16123-            if self.node.is_mutable():
16124-                return self.replace_my_contents(req)
16125             if not replace:
16126                 # this is the early trap: if someone else modifies the
16127                 # directory while we're uploading, the add_file(overwrite=)
16128hunk ./src/allmydata/web/filenode.py 231
16129                 # call in replace_me_with_a_child will do the late trap.
16130                 raise ExistingChildError()
16131-            assert self.parentnode and self.name
16132-            return self.replace_me_with_a_child(req, self.client, replace)
16133+
16134+            if self.node.is_mutable():
16135+                # Are we a readonly filenode? We shouldn't allow callers
16136+                # to try to replace us if we are.
16137+                if self.node.is_readonly():
16138+                    raise WebError("PUT to a mutable file: replace or update"
16139+                                   " requested with read-only cap")
16140+                if offset is None:
16141+                    return self.replace_my_contents(req)
16142+
16143+                if offset >= 0:
16144+                    return self.update_my_contents(req, offset)
16145+
16146+                raise WebError("PUT to a mutable file: Invalid offset")
16147+
16148+            else:
16149+                if offset is not None:
16150+                    raise WebError("PUT to a file: append operation invoked "
16151+                                   "on an immutable cap")
16152+
16153+                assert self.parentnode and self.name
16154+                return self.replace_me_with_a_child(req, self.client, replace)
16155+
16156         if t == "uri":
16157             if not replace:
16158                 raise ExistingChildError()
16159hunk ./src/allmydata/web/filenode.py 314
16160 
16161     def replace_my_contents(self, req):
16162         req.content.seek(0)
16163-        new_contents = req.content.read()
16164+        new_contents = MutableFileHandle(req.content)
16165         d = self.node.overwrite(new_contents)
16166         d.addCallback(lambda res: self.node.get_uri())
16167         return d
16168hunk ./src/allmydata/web/filenode.py 319
16169 
16170+
16171+    def update_my_contents(self, req, offset):
16172+        req.content.seek(0)
16173+        added_contents = MutableFileHandle(req.content)
16174+
16175+        d = self.node.get_best_mutable_version()
16176+        d.addCallback(lambda mv:
16177+            mv.update(added_contents, offset))
16178+        d.addCallback(lambda ignored:
16179+            self.node.get_uri())
16180+        return d
16181+
16182+
16183     def replace_my_contents_with_a_formpost(self, req):
16184         # we have a mutable file. Get the data from the formpost, and replace
16185         # the mutable file's contents with it.
16186hunk ./src/allmydata/web/filenode.py 335
16187-        new_contents = self._read_data_from_formpost(req)
16188+        new_contents = req.fields['file']
16189+        new_contents = MutableFileHandle(new_contents.file)
16190+
16191         d = self.node.overwrite(new_contents)
16192         d.addCallback(lambda res: self.node.get_uri())
16193         return d
16194hunk ./src/allmydata/web/filenode.py 342
16195 
16196-class MutableDownloadable:
16197-    #implements(IDownloadable)
16198-    def __init__(self, size, node):
16199-        self.size = size
16200-        self.node = node
16201-    def get_size(self):
16202-        return self.size
16203-    def is_mutable(self):
16204-        return True
16205-    def read(self, consumer, offset=0, size=None):
16206-        d = self.node.download_best_version()
16207-        d.addCallback(self._got_data, consumer, offset, size)
16208-        return d
16209-    def _got_data(self, contents, consumer, offset, size):
16210-        start = offset
16211-        if size is not None:
16212-            end = offset+size
16213-        else:
16214-            end = self.size
16215-        # SDMF: we can write the whole file in one big chunk
16216-        consumer.write(contents[start:end])
16217-        return consumer
16218-
16219-def makeMutableDownloadable(n):
16220-    d = defer.maybeDeferred(n.get_size_of_best_version)
16221-    d.addCallback(MutableDownloadable, n)
16222-    return d
16223 
16224 class FileDownloader(rend.Page):
16225     def __init__(self, filenode, filename):
16226hunk ./src/allmydata/web/filenode.py 516
16227     data[1]['mutable'] = filenode.is_mutable()
16228     if edge_metadata is not None:
16229         data[1]['metadata'] = edge_metadata
16230+
16231+    if filenode.is_mutable() and filenode.get_version() is not None:
16232+        mutable_type = filenode.get_version()
16233+        assert mutable_type in (MDMF_VERSION, SDMF_VERSION)
16234+        if mutable_type == MDMF_VERSION:
16235+            mutable_type = "mdmf"
16236+        else:
16237+            mutable_type = "sdmf"
16238+        data[1]['mutable-type'] = mutable_type
16239+
16240     return text_plain(simplejson.dumps(data, indent=1) + "\n", ctx)
16241 
16242 def FileURI(ctx, filenode):
16243hunk ./src/allmydata/web/info.py 8
16244 from nevow.inevow import IRequest
16245 
16246 from allmydata.util import base32
16247-from allmydata.interfaces import IDirectoryNode, IFileNode
16248+from allmydata.interfaces import IDirectoryNode, IFileNode, MDMF_VERSION, SDMF_VERSION
16249 from allmydata.web.common import getxmlfile
16250 from allmydata.mutable.common import UnrecoverableFileError # TODO: move
16251 
16252hunk ./src/allmydata/web/info.py 31
16253             si = node.get_storage_index()
16254             if si:
16255                 if node.is_mutable():
16256-                    return "mutable file"
16257+                    ret = "mutable file"
16258+                    if node.get_version() == MDMF_VERSION:
16259+                        ret += " (mdmf)"
16260+                    else:
16261+                        ret += " (sdmf)"
16262+                    return ret
16263                 return "immutable file"
16264             return "immutable LIT file"
16265         return "unknown"
16266hunk ./src/allmydata/web/root.py 15
16267 from allmydata import get_package_versions_string
16268 from allmydata import provisioning
16269 from allmydata.util import idlib, log
16270-from allmydata.interfaces import IFileNode
16271+from allmydata.interfaces import IFileNode, MDMF_VERSION, SDMF_VERSION
16272 from allmydata.web import filenode, directory, unlinked, status, operations
16273 from allmydata.web import reliability, storage
16274 from allmydata.web.common import abbreviate_size, getxmlfile, WebError, \
16275hunk ./src/allmydata/web/root.py 19
16276-     get_arg, RenderMixin, boolean_of_arg
16277+     get_arg, RenderMixin, boolean_of_arg, parse_mutable_type_arg
16278 
16279 
16280 class URIHandler(RenderMixin, rend.Page):
16281hunk ./src/allmydata/web/root.py 50
16282         if t == "":
16283             mutable = boolean_of_arg(get_arg(req, "mutable", "false").strip())
16284             if mutable:
16285-                return unlinked.PUTUnlinkedSSK(req, self.client)
16286+                arg = get_arg(req, "mutable-type", None)
16287+                version = parse_mutable_type_arg(arg)
16288+                if version == "invalid":
16289+                    errmsg = "Unknown type: %s" % arg
16290+                    raise WebError(errmsg, http.BAD_REQUEST)
16291+
16292+                return unlinked.PUTUnlinkedSSK(req, self.client, version)
16293             else:
16294                 return unlinked.PUTUnlinkedCHK(req, self.client)
16295         if t == "mkdir":
16296hunk ./src/allmydata/web/root.py 74
16297         if t in ("", "upload"):
16298             mutable = bool(get_arg(req, "mutable", "").strip())
16299             if mutable:
16300-                return unlinked.POSTUnlinkedSSK(req, self.client)
16301+                arg = get_arg(req, "mutable-type", None)
16302+                version = parse_mutable_type_arg(arg)
16303+                if version == "invalid":
16304+                    raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
16305+                return unlinked.POSTUnlinkedSSK(req, self.client, version)
16306             else:
16307                 return unlinked.POSTUnlinkedCHK(req, self.client)
16308         if t == "mkdir":
16309hunk ./src/allmydata/web/root.py 335
16310 
16311     def render_upload_form(self, ctx, data):
16312         # this is a form where users can upload unlinked files
16313+        #
16314+        # for mutable files, users can choose the format by selecting
16315+        # MDMF or SDMF from a radio button. They can also configure a
16316+        # default format in tahoe.cfg, which they rightly expect us to
16317+        # obey. We show that we are honoring that choice by making
16318+        # sure that the format they've chosen is pre-selected in the
16319+        # interface.
16320+        if self.client.mutable_file_default == MDMF_VERSION:
16321+            mdmf_input = T.input(type='radio', name='mutable-type',
16322+                                 value='mdmf', id='mutable-type-mdmf',
16323+                                 checked='checked')
16324+        else:
16325+            mdmf_input = T.input(type='radio', name='mutable-type',
16326+                                 value='mdmf', id='mutable-type-mdmf')
16327+
16328+        if self.client.mutable_file_default == SDMF_VERSION:
16329+            sdmf_input = T.input(type='radio', name='mutable-type',
16330+                                 value='sdmf', id='mutable-type-sdmf',
16331+                                 checked='checked')
16332+        else:
16333+            sdmf_input = T.input(type='radio', name='mutable-type',
16334+                                 value='sdmf', id='mutable-type-sdmf')
16335+
16336+
16337         form = T.form(action="uri", method="post",
16338                       enctype="multipart/form-data")[
16339             T.fieldset[
16340hunk ./src/allmydata/web/root.py 367
16341                   T.input(type="file", name="file", class_="freeform-input-file")],
16342             T.input(type="hidden", name="t", value="upload"),
16343             T.div[T.input(type="checkbox", name="mutable"), T.label(for_="mutable")["Create mutable file"],
16344+                  sdmf_input, T.label(for_="mutable-type-sdmf")["SDMF"],
16345+                  mdmf_input,
16346+                  T.label(for_='mutable-type-mdmf')['MDMF (experimental)'],
16347                   " ", T.input(type="submit", value="Upload!")],
16348             ]]
16349         return T.div[form]
16350hunk ./src/allmydata/web/root.py 376
16351 
16352     def render_mkdir_form(self, ctx, data):
16353         # this is a form where users can create new directories
16354+        mdmf_input = T.input(type='radio', name='mutable-type',
16355+                             value='mdmf', id='mutable-directory-mdmf')
16356+        sdmf_input = T.input(type='radio', name='mutable-type',
16357+                             value='sdmf', id='mutable-directory-sdmf',
16358+                             checked='checked')
16359         form = T.form(action="uri", method="post",
16360                       enctype="multipart/form-data")[
16361             T.fieldset[
16362hunk ./src/allmydata/web/root.py 385
16363             T.legend(class_="freeform-form-label")["Create a directory"],
16364+            T.label(for_='mutable-directory-sdmf')["SDMF"],
16365+            sdmf_input,
16366+            T.label(for_='mutable-directory-mdmf')["MDMF"],
16367+            mdmf_input,
16368             T.input(type="hidden", name="t", value="mkdir"),
16369             T.input(type="hidden", name="redirect_to_result", value="true"),
16370             T.input(type="submit", value="Create a directory"),
16371hunk ./src/allmydata/web/unlinked.py 7
16372 from twisted.internet import defer
16373 from nevow import rend, url, tags as T
16374 from allmydata.immutable.upload import FileHandle
16375+from allmydata.mutable.publish import MutableFileHandle
16376 from allmydata.web.common import getxmlfile, get_arg, boolean_of_arg, \
16377hunk ./src/allmydata/web/unlinked.py 9
16378-     convert_children_json, WebError
16379+     convert_children_json, WebError, parse_mutable_type_arg
16380 from allmydata.web import status
16381 
16382 def PUTUnlinkedCHK(req, client):
16383hunk ./src/allmydata/web/unlinked.py 20
16384     # that fires with the URI of the new file
16385     return d
16386 
16387-def PUTUnlinkedSSK(req, client):
16388+def PUTUnlinkedSSK(req, client, version):
16389     # SDMF: files are small, and we can only upload data
16390     req.content.seek(0)
16391hunk ./src/allmydata/web/unlinked.py 23
16392-    data = req.content.read()
16393-    d = client.create_mutable_file(data)
16394+    data = MutableFileHandle(req.content)
16395+    d = client.create_mutable_file(data, version=version)
16396     d.addCallback(lambda n: n.get_uri())
16397     return d
16398 
16399hunk ./src/allmydata/web/unlinked.py 30
16400 def PUTUnlinkedCreateDirectory(req, client):
16401     # "PUT /uri?t=mkdir", to create an unlinked directory.
16402-    d = client.create_dirnode()
16403+    arg = get_arg(req, "mutable-type", None)
16404+    mt = parse_mutable_type_arg(arg)
16405+    if mt is not None and mt != "invalid":
16406+        d = client.create_dirnode(version=mt)
16407+    elif mt == "invalid":
16408+        msg = "Unknown type: %s" % arg
16409+        raise WebError(msg, http.BAD_REQUEST)
16410+    else:
16411+        d = client.create_dirnode()
16412     d.addCallback(lambda dirnode: dirnode.get_uri())
16413     # XXX add redirect_to_result
16414     return d
16415hunk ./src/allmydata/web/unlinked.py 91
16416                       ["/uri/" + res.uri])
16417         return d
16418 
16419-def POSTUnlinkedSSK(req, client):
16420+def POSTUnlinkedSSK(req, client, version):
16421     # "POST /uri", to create an unlinked file.
16422     # SDMF: files are small, and we can only upload data
16423hunk ./src/allmydata/web/unlinked.py 94
16424-    contents = req.fields["file"]
16425-    contents.file.seek(0)
16426-    data = contents.file.read()
16427-    d = client.create_mutable_file(data)
16428+    contents = req.fields["file"].file
16429+    data = MutableFileHandle(contents)
16430+    d = client.create_mutable_file(data, version=version)
16431     d.addCallback(lambda n: n.get_uri())
16432     return d
16433 
16434hunk ./src/allmydata/web/unlinked.py 115
16435             raise WebError("t=mkdir does not accept children=, "
16436                            "try t=mkdir-with-children instead",
16437                            http.BAD_REQUEST)
16438-    d = client.create_dirnode()
16439+    arg = get_arg(req, "mutable-type", None)
16440+    mt = parse_mutable_type_arg(arg)
16441+    if mt is not None and mt != "invalid":
16442+        d = client.create_dirnode(version=mt)
16443+    elif mt == "invalid":
16444+        msg = "Unknown type: %s" % arg
16445+        raise WebError(msg, http.BAD_REQUEST)
16446+    else:
16447+        d = client.create_dirnode()
16448     redirect = get_arg(req, "redirect_to_result", "false")
16449     if boolean_of_arg(redirect):
16450         def _then_redir(res):
16451}
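The web-layer changes in the patch above repeatedly call `parse_mutable_type_arg` and treat the string `"invalid"` as a sentinel for an unrecognized format. A minimal sketch of that contract follows; the real helper lives in `allmydata.web.common` and may differ in detail, and the version constants here are stand-in values rather than the ones defined in `allmydata.interfaces`:

```python
# Hypothetical sketch of the ?mutable-type= parsing contract assumed by
# the web code above. SDMF_VERSION/MDMF_VERSION are stand-in values.
SDMF_VERSION = 0
MDMF_VERSION = 1

def parse_mutable_type_arg(arg):
    # None means the argument was absent, so the caller falls back to
    # the configured default format.
    if arg is None:
        return None
    arg = arg.lower()
    if arg == "mdmf":
        return MDMF_VERSION
    if arg == "sdmf":
        return SDMF_VERSION
    # Anything else is reported as the sentinel string "invalid",
    # which callers turn into an HTTP 400 WebError.
    return "invalid"
```

Note that callers should compare the sentinel with `==`, not `is`: identity comparison against a string literal only works by accident of interning.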
16452[test: fix assorted tests broken by MDMF changes
16453Kevan Carstensen <kevan@isnotajoke.com>**20110802021438
16454 Ignore-this: d6ca88ef20ce52de9c2527b893e25fa4
16455] {
16456hunk ./src/allmydata/test/test_checker.py 11
16457 from allmydata.test.no_network import GridTestMixin
16458 from allmydata.immutable.upload import Data
16459 from allmydata.test.common_web import WebRenderingMixin
16460+from allmydata.mutable.publish import MutableData
16461 
16462 class FakeClient:
16463     def get_storage_broker(self):
16464hunk ./src/allmydata/test/test_checker.py 291
16465         def _stash_immutable(ur):
16466             self.imm = c0.create_node_from_uri(ur.uri)
16467         d.addCallback(_stash_immutable)
16468-        d.addCallback(lambda ign: c0.create_mutable_file("contents"))
16469+        d.addCallback(lambda ign:
16470+            c0.create_mutable_file(MutableData("contents")))
16471         def _stash_mutable(node):
16472             self.mut = node
16473         d.addCallback(_stash_mutable)
16474hunk ./src/allmydata/test/test_cli.py 13
16475 from allmydata.util import fileutil, hashutil, base32
16476 from allmydata import uri
16477 from allmydata.immutable import upload
16478+from allmydata.interfaces import MDMF_VERSION, SDMF_VERSION
16479+from allmydata.mutable.publish import MutableData
16480 from allmydata.dirnode import normalize
16481 
16482 # Test that the scripts can be imported.
16483hunk ./src/allmydata/test/test_cli.py 2009
16484             self.do_cli("cp", replacement_file_path, "tahoe:test_file.txt"))
16485         def _check_error_message((rc, out, err)):
16486             self.failUnlessEqual(rc, 1)
16487-            self.failUnlessIn("need write capability to publish", err)
16488+            self.failUnlessIn("replace or update requested with read-only cap", err)
16489         d.addCallback(_check_error_message)
16490         # Make extra sure that that didn't work.
16491         d.addCallback(lambda ignored:
16492hunk ./src/allmydata/test/test_cli.py 2571
16493         self.set_up_grid()
16494         c0 = self.g.clients[0]
16495         DATA = "data" * 100
16496-        d = c0.create_mutable_file(DATA)
16497+        DATA_uploadable = MutableData(DATA)
16498+        d = c0.create_mutable_file(DATA_uploadable)
16499         def _stash_uri(n):
16500             self.uri = n.get_uri()
16501         d.addCallback(_stash_uri)
16502hunk ./src/allmydata/test/test_cli.py 2673
16503                                            upload.Data("literal",
16504                                                         convergence="")))
16505         d.addCallback(_stash_uri, "small")
16506-        d.addCallback(lambda ign: c0.create_mutable_file(DATA+"1"))
16507+        d.addCallback(lambda ign:
16508+            c0.create_mutable_file(MutableData(DATA+"1")))
16509         d.addCallback(lambda fn: self.rootnode.set_node(u"mutable", fn))
16510         d.addCallback(_stash_uri, "mutable")
16511 
16512hunk ./src/allmydata/test/test_deepcheck.py 9
16513 from twisted.internet import threads # CLI tests use deferToThread
16514 from allmydata.immutable import upload
16515 from allmydata.mutable.common import UnrecoverableFileError
16516+from allmydata.mutable.publish import MutableData
16517 from allmydata.util import idlib
16518 from allmydata.util import base32
16519 from allmydata.scripts import runner
16520hunk ./src/allmydata/test/test_deepcheck.py 38
16521         self.basedir = "deepcheck/MutableChecker/good"
16522         self.set_up_grid()
16523         CONTENTS = "a little bit of data"
16524-        d = self.g.clients[0].create_mutable_file(CONTENTS)
16525+        CONTENTS_uploadable = MutableData(CONTENTS)
16526+        d = self.g.clients[0].create_mutable_file(CONTENTS_uploadable)
16527         def _created(node):
16528             self.node = node
16529             self.fileurl = "uri/" + urllib.quote(node.get_uri())
16530hunk ./src/allmydata/test/test_deepcheck.py 61
16531         self.basedir = "deepcheck/MutableChecker/corrupt"
16532         self.set_up_grid()
16533         CONTENTS = "a little bit of data"
16534-        d = self.g.clients[0].create_mutable_file(CONTENTS)
16535+        CONTENTS_uploadable = MutableData(CONTENTS)
16536+        d = self.g.clients[0].create_mutable_file(CONTENTS_uploadable)
16537         def _stash_and_corrupt(node):
16538             self.node = node
16539             self.fileurl = "uri/" + urllib.quote(node.get_uri())
16540hunk ./src/allmydata/test/test_deepcheck.py 99
16541         self.basedir = "deepcheck/MutableChecker/delete_share"
16542         self.set_up_grid()
16543         CONTENTS = "a little bit of data"
16544-        d = self.g.clients[0].create_mutable_file(CONTENTS)
16545+        CONTENTS_uploadable = MutableData(CONTENTS)
16546+        d = self.g.clients[0].create_mutable_file(CONTENTS_uploadable)
16547         def _stash_and_delete(node):
16548             self.node = node
16549             self.fileurl = "uri/" + urllib.quote(node.get_uri())
16550hunk ./src/allmydata/test/test_deepcheck.py 223
16551             self.root = n
16552             self.root_uri = n.get_uri()
16553         d.addCallback(_created_root)
16554-        d.addCallback(lambda ign: c0.create_mutable_file("mutable file contents"))
16555+        d.addCallback(lambda ign:
16556+            c0.create_mutable_file(MutableData("mutable file contents")))
16557         d.addCallback(lambda n: self.root.set_node(u"mutable", n))
16558         def _created_mutable(n):
16559             self.mutable = n
16560hunk ./src/allmydata/test/test_deepcheck.py 965
16561     def create_mangled(self, ignored, name):
16562         nodetype, mangletype = name.split("-", 1)
16563         if nodetype == "mutable":
16564-            d = self.g.clients[0].create_mutable_file("mutable file contents")
16565+            mutable_uploadable = MutableData("mutable file contents")
16566+            d = self.g.clients[0].create_mutable_file(mutable_uploadable)
16567             d.addCallback(lambda n: self.root.set_node(unicode(name), n))
16568         elif nodetype == "large":
16569             large = upload.Data("Lots of data\n" * 1000 + name + "\n", None)
16570hunk ./src/allmydata/test/test_hung_server.py 10
16571 from allmydata.util.consumer import download_to_data
16572 from allmydata.immutable import upload
16573 from allmydata.mutable.common import UnrecoverableFileError
16574+from allmydata.mutable.publish import MutableData
16575 from allmydata.storage.common import storage_index_to_dir
16576 from allmydata.test.no_network import GridTestMixin
16577 from allmydata.test.common import ShouldFailMixin
16578hunk ./src/allmydata/test/test_hung_server.py 110
16579         self.servers = self.servers[5:] + self.servers[:5]
16580 
16581         if mutable:
16582-            d = nm.create_mutable_file(mutable_plaintext)
16583+            uploadable = MutableData(mutable_plaintext)
16584+            d = nm.create_mutable_file(uploadable)
16585             def _uploaded_mutable(node):
16586                 self.uri = node.get_uri()
16587                 self.shares = self.find_uri_shares(self.uri)
16588hunk ./src/allmydata/test/test_system.py 26
16589 from allmydata.monitor import Monitor
16590 from allmydata.mutable.common import NotWriteableError
 from allmydata.mutable import layout as mutable_layout
+from allmydata.mutable.publish import MutableData
 from foolscap.api import DeadReferenceError
 from twisted.python.failure import Failure
 from twisted.web.client import getPage
hunk ./src/allmydata/test/test_system.py 467
     def test_mutable(self):
         self.basedir = "system/SystemTest/test_mutable"
         DATA = "initial contents go here."  # 25 bytes % 3 != 0
+        DATA_uploadable = MutableData(DATA)
         NEWDATA = "new contents yay"
hunk ./src/allmydata/test/test_system.py 469
+        NEWDATA_uploadable = MutableData(NEWDATA)
         NEWERDATA = "this is getting old"
hunk ./src/allmydata/test/test_system.py 471
+        NEWERDATA_uploadable = MutableData(NEWERDATA)
 
         d = self.set_up_nodes(use_key_generator=True)
 
hunk ./src/allmydata/test/test_system.py 478
         def _create_mutable(res):
             c = self.clients[0]
             log.msg("starting create_mutable_file")
-            d1 = c.create_mutable_file(DATA)
+            d1 = c.create_mutable_file(DATA_uploadable)
             def _done(res):
                 log.msg("DONE: %s" % (res,))
                 self._mutable_node_1 = res
hunk ./src/allmydata/test/test_system.py 565
             self.failUnlessEqual(res, DATA)
             # replace the data
             log.msg("starting replace1")
-            d1 = newnode.overwrite(NEWDATA)
+            d1 = newnode.overwrite(NEWDATA_uploadable)
             d1.addCallback(lambda res: newnode.download_best_version())
             return d1
         d.addCallback(_check_download_3)
hunk ./src/allmydata/test/test_system.py 579
             newnode2 = self.clients[3].create_node_from_uri(uri)
             self._newnode3 = self.clients[3].create_node_from_uri(uri)
             log.msg("starting replace2")
-            d1 = newnode1.overwrite(NEWERDATA)
+            d1 = newnode1.overwrite(NEWERDATA_uploadable)
             d1.addCallback(lambda res: newnode2.download_best_version())
             return d1
         d.addCallback(_check_download_4)
hunk ./src/allmydata/test/test_system.py 649
         def _check_empty_file(res):
             # make sure we can create empty files, this usually screws up the
             # segsize math
-            d1 = self.clients[2].create_mutable_file("")
+            d1 = self.clients[2].create_mutable_file(MutableData(""))
             d1.addCallback(lambda newnode: newnode.download_best_version())
             d1.addCallback(lambda res: self.failUnlessEqual("", res))
             return d1
hunk ./src/allmydata/test/test_system.py 680
                                  self.key_generator_svc.key_generator.pool_size + size_delta)
 
         d.addCallback(check_kg_poolsize, 0)
-        d.addCallback(lambda junk: self.clients[3].create_mutable_file('hello, world'))
+        d.addCallback(lambda junk:
+            self.clients[3].create_mutable_file(MutableData('hello, world')))
         d.addCallback(check_kg_poolsize, -1)
         d.addCallback(lambda junk: self.clients[3].create_dirnode())
         d.addCallback(check_kg_poolsize, -2)
}
[cli: teach CLI how to create MDMF mutable files
Kevan Carstensen <kevan@isnotajoke.com>**20110802021613
 Ignore-this: 18d0ff98e75be231eed3c53319e76936
 
 Specifically, 'tahoe mkdir' and 'tahoe put' now take a --mutable-type
 argument.
] {
hunk ./src/allmydata/scripts/cli.py 53
 
 
 class MakeDirectoryOptions(VDriveOptions):
+    optParameters = [
+        ("mutable-type", None, False, "Create a mutable file in the given format. Valid formats are 'sdmf' for SDMF and 'mdmf' for MDMF"),
+        ]
+
     def parseArgs(self, where=""):
         self.where = argv_to_unicode(where)
 
hunk ./src/allmydata/scripts/cli.py 60
+        if self['mutable-type'] and self['mutable-type'] not in ("sdmf", "mdmf"):
+            raise usage.UsageError("%s is an invalid format" % self['mutable-type'])
+
     def getSynopsis(self):
         return "Usage:  %s mkdir [options] [REMOTE_DIR]" % (self.command_name,)
 
hunk ./src/allmydata/scripts/cli.py 174
     optFlags = [
         ("mutable", "m", "Create a mutable file instead of an immutable one."),
         ]
+    optParameters = [
+        ("mutable-type", None, False, "Create a mutable file in the given format. Valid formats are 'sdmf' for SDMF and 'mdmf' for MDMF"),
+        ]
 
     def parseArgs(self, arg1=None, arg2=None):
         # see Examples below
hunk ./src/allmydata/scripts/cli.py 193
         if self.from_file == u"-":
             self.from_file = None
 
+        if self['mutable-type'] and self['mutable-type'] not in ("sdmf", "mdmf"):
+            raise usage.UsageError("%s is an invalid format" % self['mutable-type'])
+
+
     def getSynopsis(self):
         return "Usage:  %s put [options] LOCAL_FILE REMOTE_FILE" % (self.command_name,)
 
hunk ./src/allmydata/scripts/tahoe_mkdir.py 25
     if not where or not path:
         # create a new unlinked directory
         url = nodeurl + "uri?t=mkdir"
+        if options["mutable-type"]:
+            url += "&mutable-type=%s" % urllib.quote(options['mutable-type'])
         resp = do_http("POST", url)
         rc = check_http_error(resp, stderr)
         if rc:
hunk ./src/allmydata/scripts/tahoe_mkdir.py 42
     # path must be "/".join([s.encode("utf-8") for s in segments])
     url = nodeurl + "uri/%s/%s?t=mkdir" % (urllib.quote(rootcap),
                                            urllib.quote(path))
+    if options['mutable-type']:
+        url += "&mutable-type=%s" % urllib.quote(options['mutable-type'])
+
     resp = do_http("POST", url)
     check_http_error(resp, stderr)
     new_uri = resp.read().strip()
hunk ./src/allmydata/scripts/tahoe_put.py 21
     from_file = options.from_file
     to_file = options.to_file
     mutable = options['mutable']
+    mutable_type = False
+
+    if mutable:
+        mutable_type = options['mutable-type']
     if options['quiet']:
         verbosity = 0
     else:
hunk ./src/allmydata/scripts/tahoe_put.py 49
         #  DIRCAP:./subdir/foo : DIRCAP/subdir/foo
         #  MUTABLE-FILE-WRITECAP : filecap
 
-        # FIXME: this shouldn't rely on a particular prefix.
-        if to_file.startswith("URI:SSK:"):
+        # FIXME: don't hardcode cap format.
+        if to_file.startswith("URI:MDMF:") or to_file.startswith("URI:SSK:"):
             url = nodeurl + "uri/%s" % urllib.quote(to_file)
         else:
             try:
hunk ./src/allmydata/scripts/tahoe_put.py 71
         url = nodeurl + "uri"
     if mutable:
         url += "?mutable=true"
+    if mutable_type:
+        assert mutable
+        url += "&mutable-type=%s" % mutable_type
+
     if from_file:
         infileobj = open(os.path.expanduser(from_file), "rb")
     else:
hunk ./src/allmydata/test/test_cli.py 33
 from allmydata.test.common_util import StallMixin, ReallyEqualMixin
 from allmydata.test.no_network import GridTestMixin
 from twisted.internet import threads # CLI tests use deferToThread
+from twisted.internet import defer # List uses a DeferredList in one place.
 from twisted.python import usage
 
 from allmydata.util.assertutil import precondition
hunk ./src/allmydata/test/test_cli.py 1014
         d.addCallback(lambda (rc,out,err): self.failUnlessReallyEqual(out, DATA2))
         return d
 
+    def _check_mdmf_json(self, (rc, json, err)):
+         self.failUnlessEqual(rc, 0)
+         self.failUnlessEqual(err, "")
+         self.failUnlessIn('"mutable-type": "mdmf"', json)
+         # We also want a valid MDMF cap to be in the json.
+         self.failUnlessIn("URI:MDMF", json)
+         self.failUnlessIn("URI:MDMF-RO", json)
+         self.failUnlessIn("URI:MDMF-Verifier", json)
+
+    def _check_sdmf_json(self, (rc, json, err)):
+        self.failUnlessEqual(rc, 0)
+        self.failUnlessEqual(err, "")
+        self.failUnlessIn('"mutable-type": "sdmf"', json)
+        # We also want to see the appropriate SDMF caps.
+        self.failUnlessIn("URI:SSK", json)
+        self.failUnlessIn("URI:SSK-RO", json)
+        self.failUnlessIn("URI:SSK-Verifier", json)
+
+    def test_mutable_type(self):
+        self.basedir = "cli/Put/mutable_type"
+        self.set_up_grid()
+        data = "data" * 100000
+        fn1 = os.path.join(self.basedir, "data")
+        fileutil.write(fn1, data)
+        d = self.do_cli("create-alias", "tahoe")
+        d.addCallback(lambda ignored:
+            self.do_cli("put", "--mutable", "--mutable-type=mdmf",
+                        fn1, "tahoe:uploaded.txt"))
+        d.addCallback(lambda ignored:
+            self.do_cli("ls", "--json", "tahoe:uploaded.txt"))
+        d.addCallback(self._check_mdmf_json)
+        d.addCallback(lambda ignored:
+            self.do_cli("put", "--mutable", "--mutable-type=sdmf",
+                        fn1, "tahoe:uploaded2.txt"))
+        d.addCallback(lambda ignored:
+            self.do_cli("ls", "--json", "tahoe:uploaded2.txt"))
+        d.addCallback(self._check_sdmf_json)
+        return d
+
+    def test_mutable_type_unlinked(self):
+        self.basedir = "cli/Put/mutable_type_unlinked"
+        self.set_up_grid()
+        data = "data" * 100000
+        fn1 = os.path.join(self.basedir, "data")
+        fileutil.write(fn1, data)
+        d = self.do_cli("put", "--mutable", "--mutable-type=mdmf", fn1)
+        d.addCallback(lambda (rc, cap, err):
+            self.do_cli("ls", "--json", cap))
+        d.addCallback(self._check_mdmf_json)
+        d.addCallback(lambda ignored:
+            self.do_cli("put", "--mutable", "--mutable-type=sdmf", fn1))
+        d.addCallback(lambda (rc, cap, err):
+            self.do_cli("ls", "--json", cap))
+        d.addCallback(self._check_sdmf_json)
+        return d
+
+    def test_put_to_mdmf_cap(self):
+        self.basedir = "cli/Put/put_to_mdmf_cap"
+        self.set_up_grid()
+        data = "data" * 100000
+        fn1 = os.path.join(self.basedir, "data")
+        fileutil.write(fn1, data)
+        d = self.do_cli("put", "--mutable", "--mutable-type=mdmf", fn1)
+        def _got_cap((rc, out, err)):
+            self.failUnlessEqual(rc, 0)
+            self.cap = out
+        d.addCallback(_got_cap)
+        # Now try to write something to the cap using put.
+        data2 = "data2" * 100000
+        fn2 = os.path.join(self.basedir, "data2")
+        fileutil.write(fn2, data2)
+        d.addCallback(lambda ignored:
+            self.do_cli("put", fn2, self.cap))
+        def _got_put((rc, out, err)):
+            self.failUnlessEqual(rc, 0)
+            self.failUnlessIn(self.cap, out)
+        d.addCallback(_got_put)
+        # Now get the cap. We should see the data we just put there.
+        d.addCallback(lambda ignored:
+            self.do_cli("get", self.cap))
+        def _got_data((rc, out, err)):
+            self.failUnlessEqual(rc, 0)
+            self.failUnlessEqual(out, data2)
+        d.addCallback(_got_data)
+        # Now strip the extension information off of the cap and try
+        # to put something to it.
+        def _make_bare_cap(ignored):
+            cap = self.cap.split(":")
+            cap = ":".join(cap[:len(cap) - 2])
+            self.cap = cap
+        d.addCallback(_make_bare_cap)
+        data3 = "data3" * 100000
+        fn3 = os.path.join(self.basedir, "data3")
+        fileutil.write(fn3, data3)
+        d.addCallback(lambda ignored:
+            self.do_cli("put", fn3, self.cap))
+        d.addCallback(lambda ignored:
+            self.do_cli("get", self.cap))
+        def _got_data3((rc, out, err)):
+            self.failUnlessEqual(rc, 0)
+            self.failUnlessEqual(out, data3)
+        d.addCallback(_got_data3)
+        return d
+
+    def test_put_to_sdmf_cap(self):
+        self.basedir = "cli/Put/put_to_sdmf_cap"
+        self.set_up_grid()
+        data = "data" * 100000
+        fn1 = os.path.join(self.basedir, "data")
+        fileutil.write(fn1, data)
+        d = self.do_cli("put", "--mutable", "--mutable-type=sdmf", fn1)
+        def _got_cap((rc, out, err)):
+            self.failUnlessEqual(rc, 0)
+            self.cap = out
+        d.addCallback(_got_cap)
+        # Now try to write something to the cap using put.
+        data2 = "data2" * 100000
+        fn2 = os.path.join(self.basedir, "data2")
+        fileutil.write(fn2, data2)
+        d.addCallback(lambda ignored:
+            self.do_cli("put", fn2, self.cap))
+        def _got_put((rc, out, err)):
+            self.failUnlessEqual(rc, 0)
+            self.failUnlessIn(self.cap, out)
+        d.addCallback(_got_put)
+        # Now get the cap. We should see the data we just put there.
+        d.addCallback(lambda ignored:
+            self.do_cli("get", self.cap))
+        def _got_data((rc, out, err)):
+            self.failUnlessEqual(rc, 0)
+            self.failUnlessEqual(out, data2)
+        d.addCallback(_got_data)
+        return d
+
+    def test_mutable_type_invalid_format(self):
+        o = cli.PutOptions()
+        self.failUnlessRaises(usage.UsageError,
+                              o.parseOptions,
+                              ["--mutable", "--mutable-type=ldmf"])
+
     def test_put_with_nonexistent_alias(self):
         # when invoked with an alias that doesn't exist, 'tahoe put'
         # should output a useful error message, not a stack trace
hunk ./src/allmydata/test/test_cli.py 3147
 
         return d
 
+    def test_mkdir_mutable_type(self):
+        self.basedir = os.path.dirname(self.mktemp())
+        self.set_up_grid()
+        d = self.do_cli("create-alias", "tahoe")
+        d.addCallback(lambda ignored:
+            self.do_cli("mkdir", "--mutable-type=sdmf", "tahoe:foo"))
+        def _check((rc, out, err), st):
+            self.failUnlessReallyEqual(rc, 0)
+            self.failUnlessReallyEqual(err, "")
+            self.failUnlessIn(st, out)
+            return out
+        def _stash_dircap(cap):
+            self._dircap = cap
+            u = uri.from_string(cap)
+            fn_uri = u.get_filenode_cap()
+            self._filecap = fn_uri.to_string()
+        d.addCallback(_check, "URI:DIR2")
+        d.addCallback(_stash_dircap)
+        d.addCallback(lambda ignored:
+            self.do_cli("ls", "--json", "tahoe:foo"))
+        d.addCallback(_check, "URI:DIR2")
+        d.addCallback(lambda ignored:
+            self.do_cli("ls", "--json", self._filecap))
+        d.addCallback(_check, '"mutable-type": "sdmf"')
+        d.addCallback(lambda ignored:
+            self.do_cli("mkdir", "--mutable-type=mdmf", "tahoe:bar"))
+        d.addCallback(_check, "URI:DIR2-MDMF")
+        d.addCallback(_stash_dircap)
+        d.addCallback(lambda ignored:
+            self.do_cli("ls", "--json", "tahoe:bar"))
+        d.addCallback(_check, "URI:DIR2-MDMF")
+        d.addCallback(lambda ignored:
+            self.do_cli("ls", "--json", self._filecap))
+        d.addCallback(_check, '"mutable-type": "mdmf"')
+        return d
+
+    def test_mkdir_mutable_type_unlinked(self):
+        self.basedir = os.path.dirname(self.mktemp())
+        self.set_up_grid()
+        d = self.do_cli("mkdir", "--mutable-type=sdmf")
+        def _check((rc, out, err), st):
+            self.failUnlessReallyEqual(rc, 0)
+            self.failUnlessReallyEqual(err, "")
+            self.failUnlessIn(st, out)
+            return out
+        d.addCallback(_check, "URI:DIR2")
+        def _stash_dircap(cap):
+            self._dircap = cap
+            # Now we're going to feed the cap into uri.from_string...
+            u = uri.from_string(cap)
+            # ...grab the underlying filenode uri.
+            fn_uri = u.get_filenode_cap()
+            # ...and stash that.
+            self._filecap = fn_uri.to_string()
+        d.addCallback(_stash_dircap)
+        d.addCallback(lambda res: self.do_cli("ls", "--json",
+                                              self._filecap))
+        d.addCallback(_check, '"mutable-type": "sdmf"')
+        d.addCallback(lambda res: self.do_cli("mkdir", "--mutable-type=mdmf"))
+        d.addCallback(_check, "URI:DIR2-MDMF")
+        d.addCallback(_stash_dircap)
+        d.addCallback(lambda res: self.do_cli("ls", "--json",
+                                              self._filecap))
+        d.addCallback(_check, '"mutable-type": "mdmf"')
+        return d
+
+    def test_mkdir_bad_mutable_type(self):
+        o = cli.MakeDirectoryOptions()
+        self.failUnlessRaises(usage.UsageError,
+                              o.parseOptions,
+                              ["--mutable", "--mutable-type=ldmf"])
+
     def test_mkdir_unicode(self):
         self.basedir = os.path.dirname(self.mktemp())
         self.set_up_grid()
}
[docs: amend configuration, webapi documentation to talk about MDMF
Kevan Carstensen <kevan@isnotajoke.com>**20110802022056
 Ignore-this: 4cab9b7e4ab79cc1efdabe2d457f27a6
] {
hunk ./docs/configuration.rst 328
     (Mutable files use a different share placement algorithm that does not
     currently consider this parameter.)
 
+``mutable.format = sdmf or mdmf``
+
+    This value tells Tahoe-LAFS what the default mutable file format should
+    be. If ``mutable.format=sdmf``, then newly created mutable files will be
+    in the old SDMF format. This is desirable for clients that operate on
+    grids where some peers run older versions of Tahoe-LAFS, as these older
+    versions cannot read the new MDMF mutable file format. If
+    ``mutable.format`` is ``mdmf``, then newly created mutable files will use
+    the new MDMF format, which supports efficient in-place modification and
+    streaming downloads. You can overwrite this value using a special
+    mutable-type parameter in the webapi. If you do not specify a value here,
+    Tahoe-LAFS will use SDMF for all newly-created mutable files.
+
+    Note that this parameter only applies to mutable files. Mutable
+    directories, which are stored as mutable files, are not controlled by
+    this parameter and will always use SDMF. We may revisit this decision
+    in future versions of Tahoe-LAFS.
 
 Frontend Configuration
 ======================
hunk ./docs/frontends/webapi.rst 368
  To use the /uri/$FILECAP form, $FILECAP must be a write-cap for a mutable file.
 
  In the /uri/$DIRCAP/[SUBDIRS../]FILENAME form, if the target file is a
- writeable mutable file, that file's contents will be overwritten in-place. If
- it is a read-cap for a mutable file, an error will occur. If it is an
- immutable file, the old file will be discarded, and a new one will be put in
- its place.
+ writeable mutable file, that file's contents will be overwritten
+ in-place. If it is a read-cap for a mutable file, an error will occur.
+ If it is an immutable file, the old file will be discarded, and a new
+ one will be put in its place. If the target file is a writable mutable
+ file, you may also specify an "offset" parameter -- a byte offset that
+ determines where in the mutable file the data from the HTTP request
+ body is placed. This operation is relatively efficient for MDMF mutable
+ files, and is relatively inefficient (but still supported) for SDMF
+ mutable files. If no offset parameter is specified, then the entire
+ file is replaced with the data from the HTTP request body. For an
+ immutable file, the "offset" parameter is not valid.
 
  When creating a new file, if "mutable=true" is in the query arguments, the
  operation will create a mutable file instead of an immutable one.
hunk ./docs/frontends/webapi.rst 399
 
  If "mutable=true" is in the query arguments, the operation will create a
  mutable file, and return its write-cap in the HTTP respose. The default is
- to create an immutable file, returning the read-cap as a response.
+ to create an immutable file, returning the read-cap as a response. If
+ you create a mutable file, you can also use the "mutable-type" query
+ parameter. If "mutable-type=sdmf", then the mutable file will be created
+ in the old SDMF mutable file format. This is desirable for files that
+ need to be read by old clients. If "mutable-type=mdmf", then the file
+ will be created in the new MDMF mutable file format. MDMF mutable files
+ can be downloaded more efficiently, and modified in-place efficiently,
+ but are not compatible with older versions of Tahoe-LAFS. If no
+ "mutable-type" argument is given, the file is created in whatever
+ format was configured in tahoe.cfg.
 
 
 Creating A New Directory
hunk ./docs/frontends/webapi.rst 1101
  If a "mutable=true" argument is provided, the operation will create a
  mutable file, and the response body will contain the write-cap instead of
  the upload results page. The default is to create an immutable file,
- returning the upload results page as a response.
+ returning the upload results page as a response. If you create a
+ mutable file, you may choose to specify the format of that mutable file
+ with the "mutable-type" parameter. If "mutable-type=mdmf", then the
+ file will be created as an MDMF mutable file. If "mutable-type=sdmf",
+ then the file will be created as an SDMF mutable file. If no value is
+ specified, the file will be created in whatever format is specified in
+ tahoe.cfg.
 
 
 ``POST /uri/$DIRCAP/[SUBDIRS../]?t=upload``
}
[Fix some test failures caused by #393 patch.
david-sarah@jacaranda.org**20110802032810
 Ignore-this: 7f65e5adb5c859af289cea7011216fef
] {
hunk ./src/allmydata/test/test_immutable.py 291
         return d
 
     def test_download_to_data(self):
-        d = self.n.download_to_data()
+        d = self.startup("download_to_data")
+        d.addCallback(lambda ign: self.filenode.download_to_data())
         d.addCallback(lambda data:
             self.failUnlessEqual(data, common.TEST_DATA))
         return d
hunk ./src/allmydata/test/test_immutable.py 299
 
 
     def test_download_best_version(self):
-        d = self.n.download_best_version()
+        d = self.startup("download_best_version")
+        d.addCallback(lambda ign: self.filenode.download_best_version())
         d.addCallback(lambda data:
             self.failUnlessEqual(data, common.TEST_DATA))
         return d
hunk ./src/allmydata/test/test_immutable.py 307
 
 
     def test_get_best_readable_version(self):
-        d = self.n.get_best_readable_version()
+        d = self.startup("get_best_readable_version")
+        d.addCallback(lambda ign: self.filenode.get_best_readable_version())
         d.addCallback(lambda n2:
hunk ./src/allmydata/test/test_immutable.py 310
-            self.failUnlessEqual(n2, self.n))
+            self.failUnlessEqual(n2, self.filenode))
         return d
 
     def test_get_size_of_best_version(self):
hunk ./src/allmydata/test/test_immutable.py 314
-        d = self.n.get_size_of_best_version()
+        d = self.startup("get_size_of_best_version")
+        d.addCallback(lambda ign: self.filenode.get_size_of_best_version())
         d.addCallback(lambda size:
             self.failUnlessEqual(size, len(common.TEST_DATA)))
         return d
}

Context:

[remove nodeid from WriteBucketProxy classes and customers
warner@lothar.com**20110801224317
 Ignore-this: e55334bb0095de11711eeb3af827e8e8
 refs #1363
]
[remove get_serverid() from ReadBucketProxy and customers, including Checker
warner@lothar.com**20110801224307
 Ignore-this: 837aba457bc853e4fd413ab1a94519cb
 and debug.py dump-share commands
 refs #1363
]
[reject old-style (pre-Tahoe-LAFS-v1.3) configuration files
zooko@zooko.com**20110801232423
 Ignore-this: b58218fcc064cc75ad8f05ed0c38902b
 Check for the existence of any of them and if any are found raise exception which will abort the startup of the node.
 This is a backwards-incompatible change for anyone who is still using old-style configuration files.
 fixes #1385
]
[whitespace-cleanup
zooko@zooko.com**20110725015546
 Ignore-this: 442970d0545183b97adc7bd66657876c
]
[tests: use fileutil.write() instead of open() to ensure timely close even without CPython-style reference counting
zooko@zooko.com**20110331145427
 Ignore-this: 75aae4ab8e5fa0ad698f998aaa1888ce
 Some of these already had an explicit close() but I went ahead and replaced them with fileutil.write() as well for the sake of uniformity.
]
[Address Kevan's comment in #776 about Options classes missed when adding 'self.command_name'. refs #776, #1359
david-sarah@jacaranda.org**20110801221317
 Ignore-this: 8881d42cf7e6a1d15468291b0cb8fab9
]
[docs/frontends/webapi.rst: change some more instances of 'delete' or 'remove' to 'unlink', change some section titles, and use two blank lines between all sections. refs #776, #1104
david-sarah@jacaranda.org**20110801220919
 Ignore-this: 572327591137bb05c24c44812d4b163f
]
[cleanup: implement rm as a synonym for unlink rather than vice-versa. refs #776
david-sarah@jacaranda.org**20110801220108
 Ignore-this: 598dcbed870f4f6bb9df62de9111b343
]
[docs/webapi.rst: address Kevan's comments about use of 'delete' on ref #1104
david-sarah@jacaranda.org**20110801205356
 Ignore-this: 4fbf03864934753c951ddeff64392491
]
[docs: some changes of 'delete' or 'rm' to 'unlink'. refs #1104
david-sarah@jacaranda.org**20110713002722
 Ignore-this: 304d2a330d5e6e77d5f1feed7814b21c
]
[WUI: change the label of the button to unlink a file from 'del' to 'unlink'. Also change some internal names to 'unlink', and allow 't=unlink' as a synonym for 't=delete' in the web-API interface. Incidentally, improve a test to check for the rename button as well as the unlink button. fixes #1104
david-sarah@jacaranda.org**20110713001218
 Ignore-this: 3eef6b3f81b94a9c0020a38eb20aa069
]
[src/allmydata/web/filenode.py: delete a stale comment that was made incorrect by changeset [3133].
david-sarah@jacaranda.org**20110801203009
 Ignore-this: b3912e95a874647027efdc97822dd10e
]
[fix typo introduced during rebasing of 'remove get_serverid from
Brian Warner <warner@lothar.com>**20110801200341
 Ignore-this: 4235b0f585c0533892193941dbbd89a8
 DownloadStatus.add_dyhb_request and customers' patch, to fix test failure.
]
[remove get_serverid from DownloadStatus.add_dyhb_request and customers
zooko@zooko.com**20110801185401
 Ignore-this: db188c18566d2d0ab39a80c9dc8f6be6
 This patch is a rebase of a patch originally written by Brian. I didn't change any of the intent of Brian's patch, just ported it to current trunk.
 refs #1363
]
[remove get_serverid from DownloadStatus.add_block_request and customers
zooko@zooko.com**20110801185344
 Ignore-this: 8bfa8201d6147f69b0fbe31beea9c1e
 This is a rebase of a patch Brian originally wrote. I haven't changed the intent of that patch, just ported it to trunk.
 refs #1363
]
[apply zooko's advice: storage_client get_known_servers() returns a frozenset, caller sorts
warner@lothar.com**20110801174452
 Ignore-this: 2aa13ea6cbed4e9084bd604bf8633692
 refs #1363
]
[test_immutable.Test: rewrite to use NoNetworkGrid, now takes 2.7s not 97s
warner@lothar.com**20110801174444
 Ignore-this: 54f30b5d7461d2b3514e2a0172f3a98c
 remove now-unused ShareManglingMixin
 refs #1363
]
[DownloadStatus.add_known_share wants to be used by Finder, web.status
warner@lothar.com**20110801174436
 Ignore-this: 1433bcd73099a579abe449f697f35f9
 refs #1363
]
[replace IServer.name() with get_name(), and get_longname()
warner@lothar.com**20110801174428
 Ignore-this: e5a6f7f6687fd7732ddf41cfdd7c491b
 
 This patch was originally written by Brian, but was re-recorded by Zooko to use
 darcs replace instead of hunks for any file in which it would result in fewer
 total hunks.
 refs #1363
]
[upload.py: apply David-Sarah's advice rename (un)contacted(2) trackers to first_pass/second_pass/next_pass
zooko@zooko.com**20110801174143
 Ignore-this: e36e1420bba0620a0107bd90032a5198
 This patch was written by Brian but was re-recorded by Zooko (with David-Sarah looking on) to use darcs replace instead of editing to rename the three variables to their new names.
 refs #1363
]
[Coalesce multiple Share.loop() calls, make downloads faster. Closes #1268.
Brian Warner <warner@lothar.com>**20110801151834
 Ignore-this: 48530fce36c01c0ff708f61c2de7e67a
]
[src/allmydata/_auto_deps.py: 'i686' is another way of spelling x86.
david-sarah@jacaranda.org**20110801034035
 Ignore-this: 6971e0621db2fba794d86395b4d51038
]
[tahoe_rm.py: better error message when there is no path. refs #1292
david-sarah@jacaranda.org**20110122064212
 Ignore-this: ff3bb2c9f376250e5fd77eb009e09018
]
[test_cli.py: Test for error message when 'tahoe rm' is invoked without a path. refs #1292
david-sarah@jacaranda.org**20110104105108
 Ignore-this: 29ec2f2e0251e446db96db002ad5dd7d
]
[src/allmydata/__init__.py: suppress a spurious warning from 'bin/tahoe --version[-and-path]' about twisted-web and twisted-core packages.
david-sarah@jacaranda.org**20110801005209
 Ignore-this: 50e7cd53cca57b1870d9df0361c7c709
]
[test_cli.py: use to_str on fields loaded using simplejson.loads in new tests. refs #1304
david-sarah@jacaranda.org**20110730032521
 Ignore-this: d1d6dfaefd1b4e733181bf127c79c00b
]
[cli: make 'tahoe cp' overwrite mutable files in-place
Kevan Carstensen <kevan@isnotajoke.com>**20110729202039
 Ignore-this: b2ad21a19439722f05c49bfd35b01855
]
[SFTP: write an error message to standard error for unrecognized shell commands. Change the existing message for shell sessions to be written to standard error, and refactor some duplicated code. Also change the lines of the error messages to end in CRLF, and take into account Kevan's review comments. fixes #1442, #1446
david-sarah@jacaranda.org**20110729233102
 Ignore-this: d2f2bb4664f25007d1602bf7333e2cdd
]
[src/allmydata/scripts/cli.py: fix pyflakes warning.
david-sarah@jacaranda.org**20110728021402
 Ignore-this: 94050140ddb99865295973f49927c509
]
[Fix the help synopses of CLI commands to include [options] in the right place. fixes #1359, fixes #636
david-sarah@jacaranda.org**20110724225440
 Ignore-this: 2a8e488a5f63dabfa9db9efd83768a5
]
[encodingutil: argv and output encodings are always the same on all platforms. Lose the unnecessary generality of them being different. fixes #1120
david-sarah@jacaranda.org**20110629185356
 Ignore-this: 5ebacbe6903dfa83ffd3ff8436a97787
]
[docs/man/tahoe.1: add man page. fixes #1420
david-sarah@jacaranda.org**20110724171728
 Ignore-this: fc7601ec7f25494288d6141d0ae0004c
]
[Update the dependency on zope.interface to fix an incompatiblity between Nevow and zope.interface 3.6.4. fixes #1435
david-sarah@jacaranda.org**20110721234941
 Ignore-this: 2ff3fcfc030fca1a4d4c7f1fed0f2aa9
]
[frontends/ftpd.py: remove the check for IWriteFile.close since we're now guaranteed to be using Twisted >= 10.1 which has it.
david-sarah@jacaranda.org**20110722000320
 Ignore-this: 55cd558b791526113db3f83c00ec328a
]
[Update the dependency on Twisted to >= 10.1. This allows us to simplify some documentation: it's no longer necessary to install pywin32 on Windows, or apply a patch to Twisted in order to use the FTP frontend. fixes #1274, #1438. refs #1429
david-sarah@jacaranda.org**20110721233658
 Ignore-this: 81b41745477163c9b39c0b59db91cc62
]
[misc/build_helpers/run_trial.py: undo change to block pywin32 (it didn't work because run_trial.py is no longer used). refs #1334
david-sarah@jacaranda.org**20110722035402
 Ignore-this: 5d03f544c4154f088e26c7107494bf39
]
[misc/build_helpers/run_trial.py: ensure that pywin32 is not on the sys.path when running the test suite. Includes some temporary debugging printouts that will be removed. refs #1334
david-sarah@jacaranda.org**20110722024907
 Ignore-this: 5141a9f83a4085ed4ca21f0bbb20bb9c
]
[docs/running.rst: use 'tahoe run ~/.tahoe' instead of 'tahoe run' (the default is the current directory, unlike 'tahoe start').
david-sarah@jacaranda.org**20110718005949
 Ignore-this: 81837fbce073e93d88a3e7ae3122458c
]
[docs/running.rst: say to put the introducer.furl in tahoe.cfg.
david-sarah@jacaranda.org**20110717194315
 Ignore-this: 954cc4c08e413e8c62685d58ff3e11f3
]
[README.txt: say that quickstart.rst is in the docs directory.
david-sarah@jacaranda.org**20110717192400
 Ignore-this: bc6d35a85c496b77dbef7570677ea42a
]
[setup: remove the dependency on foolscap's "secure_connections" extra, add a dependency on pyOpenSSL
zooko@zooko.com**20110717114226
 Ignore-this: df222120d41447ce4102616921626c82
 fixes #1383
]
[test_sftp.py cleanup: remove a redundant definition of failUnlessReallyEqual.
david-sarah@jacaranda.org**20110716181813
 Ignore-this: 50113380b368c573f07ac6fe2eb1e97f
]
[docs: add missing link in NEWS.rst
zooko@zooko.com**20110712153307
17317 Ignore-this: be7b7eb81c03700b739daa1027d72b35
17318]
17319[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
17320zooko@zooko.com**20110712153229
17321 Ignore-this: 723c4f9e2211027c79d711715d972c5
17322 Also remove a couple of vestigial references to figleaf, which is long gone.
17323 fixes #1409 (remove contrib/fuse)
17324]
17325[add Protovis.js-based download-status timeline visualization
17326Brian Warner <warner@lothar.com>**20110629222606
17327 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
17328 
17329 provide status overlap info on the webapi t=json output, add decode/decrypt
17330 rate tooltips, add zoomin/zoomout buttons
17331]
17332[add more download-status data, fix tests
17333Brian Warner <warner@lothar.com>**20110629222555
17334 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
17335]
17336[prepare for viz: improve DownloadStatus events
17337Brian Warner <warner@lothar.com>**20110629222542
17338 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
17339 
17340 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
17341]
17342[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
17343zooko@zooko.com**20110629185711
17344 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
17345]
17346[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
17347david-sarah@jacaranda.org**20110130235809
17348 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
17349]
17350[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
17351david-sarah@jacaranda.org**20110626054124
17352 Ignore-this: abb864427a1b91bd10d5132b4589fd90
17353]
17354[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
17355david-sarah@jacaranda.org**20110623205528
17356 Ignore-this: c63e23146c39195de52fb17c7c49b2da
17357]
17358[Rename test_package_initialization.py to (much shorter) test_import.py .
17359Brian Warner <warner@lothar.com>**20110611190234
17360 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
17361 
17362 The former name was making my 'ls' listings hard to read, by forcing them
17363 down to just two columns.
17364]
[tests: fix tests to accommodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
17366zooko@zooko.com**20110611163741
17367 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
 Apparently neither of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), nor the one committer (me) actually ran the tests. This is presumably due to #20.
17369 fixes #1412
17370]
17371[wui: right-align the size column in the WUI
17372zooko@zooko.com**20110611153758
17373 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
17374 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
17375 fixes #1412
17376]
17377[docs: three minor fixes
17378zooko@zooko.com**20110610121656
17379 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
17380 CREDITS for arc for stats tweak
17381 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
17382 English usage tweak
17383]
17384[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
17385david-sarah@jacaranda.org**20110609223719
17386 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
17387]
17388[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
17389wilcoxjg@gmail.com**20110527120135
17390 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
17391 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
17392 NEWS.rst, stats.py: documentation of change to get_latencies
17393 stats.rst: now documents percentile modification in get_latencies
17394 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
17395 fixes #1392
17396]
17397[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
17398david-sarah@jacaranda.org**20110517011214
17399 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
17400]
17401[docs: convert NEWS to NEWS.rst and change all references to it.
17402david-sarah@jacaranda.org**20110517010255
17403 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
17404]
17405[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
17406david-sarah@jacaranda.org**20110512140559
17407 Ignore-this: 784548fc5367fac5450df1c46890876d
17408]
17409[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
17410david-sarah@jacaranda.org**20110130164923
17411 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
17412]
17413[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
17414zooko@zooko.com**20110128142006
17415 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
17416 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
17417]
17418[M-x whitespace-cleanup
17419zooko@zooko.com**20110510193653
17420 Ignore-this: dea02f831298c0f65ad096960e7df5c7
17421]
17422[docs: fix typo in running.rst, thanks to arch_o_median
17423zooko@zooko.com**20110510193633
17424 Ignore-this: ca06de166a46abbc61140513918e79e8
17425]
17426[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
17427david-sarah@jacaranda.org**20110204204902
17428 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
17429]
17430[relnotes.txt: forseeable -> foreseeable. refs #1342
17431david-sarah@jacaranda.org**20110204204116
17432 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
17433]
17434[replace remaining .html docs with .rst docs
17435zooko@zooko.com**20110510191650
17436 Ignore-this: d557d960a986d4ac8216d1677d236399
17437 Remove install.html (long since deprecated).
17438 Also replace some obsolete references to install.html with references to quickstart.rst.
17439 Fix some broken internal references within docs/historical/historical_known_issues.txt.
17440 Thanks to Ravi Pinjala and Patrick McDonald.
17441 refs #1227
17442]
17443[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
17444zooko@zooko.com**20110428055232
17445 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
17446]
17447[munin tahoe_files plugin: fix incorrect file count
17448francois@ctrlaltdel.ch**20110428055312
17449 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
17450 fixes #1391
17451]
17452[corrected "k must never be smaller than N" to "k must never be greater than N"
17453secorp@allmydata.org**20110425010308
17454 Ignore-this: 233129505d6c70860087f22541805eac
17455]
17456[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
17457david-sarah@jacaranda.org**20110411190738
17458 Ignore-this: 7847d26bc117c328c679f08a7baee519
17459]
17460[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
17461david-sarah@jacaranda.org**20110410155844
17462 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
17463]
17464[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
17465david-sarah@jacaranda.org**20110410155705
17466 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
17467]
17468[remove unused variable detected by pyflakes
17469zooko@zooko.com**20110407172231
17470 Ignore-this: 7344652d5e0720af822070d91f03daf9
17471]
17472[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
17473david-sarah@jacaranda.org**20110401202750
17474 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
17475]
17476[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
17477Brian Warner <warner@lothar.com>**20110325232511
17478 Ignore-this: d5307faa6900f143193bfbe14e0f01a
17479]
17480[control.py: remove all uses of s.get_serverid()
17481warner@lothar.com**20110227011203
17482 Ignore-this: f80a787953bd7fa3d40e828bde00e855
17483]
17484[web: remove some uses of s.get_serverid(), not all
17485warner@lothar.com**20110227011159
17486 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
17487]
17488[immutable/downloader/fetcher.py: remove all get_serverid() calls
17489warner@lothar.com**20110227011156
17490 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
17491]
17492[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
17493warner@lothar.com**20110227011153
17494 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
17495 
17496 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
17497 _shares_from_server dict was being popped incorrectly (using shnum as the
17498 index instead of serverid). I'm still thinking through the consequences of
17499 this bug. It was probably benign and really hard to detect. I think it would
17500 cause us to incorrectly believe that we're pulling too many shares from a
17501 server, and thus prefer a different server rather than asking for a second
17502 share from the first server. The diversity code is intended to spread out the
17503 number of shares simultaneously being requested from each server, but with
17504 this bug, it might be spreading out the total number of shares requested at
17505 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
17506 segment, so the effect doesn't last very long).
17507]
17508[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
17509warner@lothar.com**20110227011150
17510 Ignore-this: d8d56dd8e7b280792b40105e13664554
17511 
17512 test_download.py: create+check MyShare instances better, make sure they share
17513 Server objects, now that finder.py cares
17514]
17515[immutable/downloader/finder.py: reduce use of get_serverid(), one left
17516warner@lothar.com**20110227011146
17517 Ignore-this: 5785be173b491ae8a78faf5142892020
17518]
17519[immutable/offloaded.py: reduce use of get_serverid() a bit more
17520warner@lothar.com**20110227011142
17521 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
17522]
17523[immutable/upload.py: reduce use of get_serverid()
17524warner@lothar.com**20110227011138
17525 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
17526]
17527[immutable/checker.py: remove some uses of s.get_serverid(), not all
17528warner@lothar.com**20110227011134
17529 Ignore-this: e480a37efa9e94e8016d826c492f626e
17530]
17531[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
17532warner@lothar.com**20110227011132
17533 Ignore-this: 6078279ddf42b179996a4b53bee8c421
17534 MockIServer stubs
17535]
17536[upload.py: rearrange _make_trackers a bit, no behavior changes
17537warner@lothar.com**20110227011128
17538 Ignore-this: 296d4819e2af452b107177aef6ebb40f
17539]
17540[happinessutil.py: finally rename merge_peers to merge_servers
17541warner@lothar.com**20110227011124
17542 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
17543]
17544[test_upload.py: factor out FakeServerTracker
17545warner@lothar.com**20110227011120
17546 Ignore-this: 6c182cba90e908221099472cc159325b
17547]
17548[test_upload.py: server-vs-tracker cleanup
17549warner@lothar.com**20110227011115
17550 Ignore-this: 2915133be1a3ba456e8603885437e03
17551]
17552[happinessutil.py: server-vs-tracker cleanup
17553warner@lothar.com**20110227011111
17554 Ignore-this: b856c84033562d7d718cae7cb01085a9
17555]
17556[upload.py: more tracker-vs-server cleanup
17557warner@lothar.com**20110227011107
17558 Ignore-this: bb75ed2afef55e47c085b35def2de315
17559]
17560[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
17561warner@lothar.com**20110227011103
17562 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
17563]
17564[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
17565warner@lothar.com**20110227011100
17566 Ignore-this: 7ea858755cbe5896ac212a925840fe68
17567 
17568 No behavioral changes, just updating variable/method names and log messages.
17569 The effects outside these three files should be minimal: some exception
17570 messages changed (to say "server" instead of "peer"), and some internal class
17571 names were changed. A few things still use "peer" to minimize external
17572 changes, like UploadResults.timings["peer_selection"] and
17573 happinessutil.merge_peers, which can be changed later.
17574]
17575[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
17576warner@lothar.com**20110227011056
17577 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
17578]
[test_client.py, upload.py: remove KiB/MiB/etc constants, and other dead code
17580warner@lothar.com**20110227011051
17581 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
17582]
17583[test: increase timeout on a network test because Francois's ARM machine hit that timeout
17584zooko@zooko.com**20110317165909
17585 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
17586 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
17587]
17588[docs/configuration.rst: add a "Frontend Configuration" section
17589Brian Warner <warner@lothar.com>**20110222014323
17590 Ignore-this: 657018aa501fe4f0efef9851628444ca
17591 
17592 this points to docs/frontends/*.rst, which were previously underlinked
17593]
17594[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
17595"Brian Warner <warner@lothar.com>"**20110221061544
17596 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
17597]
17598[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
17599david-sarah@jacaranda.org**20110221015817
17600 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
17601]
17602[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
17603david-sarah@jacaranda.org**20110221020125
17604 Ignore-this: b0744ed58f161bf188e037bad077fc48
17605]
17606[Refactor StorageFarmBroker handling of servers
17607Brian Warner <warner@lothar.com>**20110221015804
17608 Ignore-this: 842144ed92f5717699b8f580eab32a51
17609 
17610 Pass around IServer instance instead of (peerid, rref) tuple. Replace
17611 "descriptor" with "server". Other replacements:
17612 
17613  get_all_servers -> get_connected_servers/get_known_servers
17614  get_servers_for_index -> get_servers_for_psi (now returns IServers)
17615 
17616 This change still needs to be pushed further down: lots of code is now
17617 getting the IServer and then distributing (peerid, rref) internally.
17618 Instead, it ought to distribute the IServer internally and delay
17619 extracting a serverid or rref until the last moment.
17620 
17621 no_network.py was updated to retain parallelism.
17622]
17623[TAG allmydata-tahoe-1.8.2
17624warner@lothar.com**20110131020101]
17625Patch bundle hash:
17626b7909d271f66188636c64f12aa0be28a66eac072