Ticket #393: 393status47.dpatch

File 393status47.dpatch, 785.8 KB (added by kevan, at 2011-08-02T02:58:13Z)

rework #393 patches to be more comprehensible

Mon Aug  1 18:35:24 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/retrieve: rework the mutable downloader to handle multiple-segment files
 
  The downloader needs substantial reworking to handle multiple segment
  mutable files, which it needs to handle for MDMF.

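The segment bookkeeping this rework introduces can be sketched roughly as follows. This is a minimal standalone sketch: the helper names mirror `allmydata.util.mathutil`, and the boundary arithmetic is simplified relative to what the patch below actually does.

```python
def div_ceil(n, d):
    # integer ceiling division, as in allmydata.util.mathutil.div_ceil
    return (n + d - 1) // d

def encoding_geometry(datalength, segsize, k):
    """How many segments a mutable file has, and how large the
    (share-alignment padded) tail segment is."""
    if not (datalength and segsize):
        return 0, 0
    num_segments = div_ceil(datalength, segsize)
    # the tail holds the leftover bytes; a full-sized tail if none left over
    tail_data = datalength % segsize or segsize
    # pad the tail up to a multiple of k so FEC can split it evenly
    tail_segsize = div_ceil(tail_data, k) * k
    return num_segments, tail_segsize

def segment_range(offset, read_length, segsize):
    """First and last segments touched by reading
    [offset, offset + read_length)."""
    first = offset // segsize
    last = (offset + read_length - 1) // segsize
    return first, last
```

For example, a 10-byte file with 4-byte segments and k=3 has 3 segments with a 2-byte tail padded to 3 bytes, and reading bytes 5 through 8 touches segments 1 and 2.
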
Mon Aug  1 18:39:31 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/publish: teach the publisher how to publish MDMF mutable files
 
  Like the downloader, the publisher needs some substantial changes to handle multiple segment mutable files.

Mon Aug  1 18:40:18 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/servermap: Rework the servermap to work with MDMF mutable files

Mon Aug  1 18:41:19 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * interfaces: change interfaces to work with MDMF
 
  A lot of this work concerns #993, in that it unifies (to an extent) the
  interfaces of mutable and immutable files.

Mon Aug  1 18:42:58 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * nodemaker: teach nodemaker how to create MDMF mutable files

Mon Aug  1 18:45:01 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/filenode: Modify mutable filenodes for use with MDMF
 
  In particular:
      - Break MutableFileNode and MutableFileVersion into distinct classes.
      - Implement the interface modifications made for MDMF.
      - Be aware of MDMF caps.
      - Learn how to create and work with MDMF files.

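At the string level, "being aware of MDMF caps" might look something like the sketch below. The `URI:MDMF:` prefix and the toy dispatch are assumptions for illustration; the real parsing lives in the `uri` module patched later in this bundle.

```python
# Illustrative only: classify a cap string by its prefix. The MDMF prefix
# shown here is an assumption for this sketch, placed alongside the
# existing SDMF write-cap prefix "URI:SSK:".
def cap_format(cap):
    if cap.startswith("URI:MDMF:"):
        return "mdmf"
    if cap.startswith("URI:SSK:"):
        return "sdmf"
    return "unknown"
```
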
Mon Aug  1 18:48:11 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * client: teach client how to create and work with MDMF files

Mon Aug  1 18:49:26 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * nodemaker: teach nodemaker about MDMF caps

Mon Aug  1 18:51:40 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable: train checker and repairer to work with MDMF mutable files

Mon Aug  1 18:56:43 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test/common: Alter common test code to work with MDMF.
 
  This mostly has to do with making the test code implement the new
  unified filenode interfaces.

Mon Aug  1 19:05:11 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * dirnode: teach dirnode to make MDMF directories

Mon Aug  1 19:08:14 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * immutable/literal.py: Implement interface changes in literal nodes.

Mon Aug  1 19:09:05 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * immutable/filenode: implement unified filenode interface

Mon Aug  1 19:09:24 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test/test_mutable: tests for MDMF
 
  These are their own patch because they cut across a lot of the changes
  I've made in implementing MDMF in such a way as to make it difficult to
  split them up into the other patches.

Mon Aug  1 19:11:20 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/layout: Define MDMF share format, write tools for working with MDMF share format
 
  The changes in layout.py are mostly concerned with the MDMF share
  format. In particular, we define read and write proxy objects used by
  retrieval, publishing, and other code to write and read the MDMF share
  format. We create equivalent proxies for SDMF objects so that these
  objects can be suitably general.

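The proxy idea can be illustrated with a toy example. The class and field names here are invented for the sketch; the real MDMFSlotReadProxy talks to a remote storage server and knows the actual share layout.

```python
class ShareReadProxy:
    """Toy read proxy: callers ask for named fields and stay ignorant of
    where those fields live inside the share, so the same caller code can
    work against different share layouts."""
    def __init__(self, share_bytes, offsets):
        self._data = share_bytes
        self._offsets = offsets  # field name -> (offset, length)

    def get_field(self, name):
        offset, length = self._offsets[name]
        return self._data[offset:offset + length]

# Two layouts could expose the same interface by supplying different
# offset tables; here is one layout as an example.
sdmf_like = ShareReadProxy(b"HDRsigDATA",
                           {"signature": (3, 3), "data": (6, 4)})
```
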
Mon Aug  1 19:12:07 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * frontends/sftpd: Resolve incompatibilities between SFTP frontend and MDMF changes

Mon Aug  1 19:12:33 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * uri: add MDMF and MDMF directory caps, add extension hint support

Mon Aug  1 19:13:11 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * webapi changes for MDMF
 
      - Learn how to create MDMF files and directories through the
        mutable-type argument.
      - Operate with the interface changes associated with MDMF and #993.
      - Learn how to do partial updates of mutable files.

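A hedged sketch of driving the mutable-type argument over the webapi: the endpoint shape and parameter names follow the description above, but treat the exact URL as an assumption rather than the documented API.

```python
from urllib.parse import urlencode

def mdmf_put_url(node_url):
    # A PUT to this URL with the file contents as the body would ask the
    # gateway to create an MDMF mutable file (parameter names assumed).
    query = urlencode({"mutable": "true", "mutable-type": "mdmf"})
    return "%s/uri?%s" % (node_url.rstrip("/"), query)
```
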
Mon Aug  1 19:14:38 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test: fix assorted tests broken by MDMF changes

Mon Aug  1 19:16:13 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * cli: teach CLI how to create MDMF mutable files
 
  Specifically, 'tahoe mkdir' and 'tahoe put' now take a --mutable-type
  argument.

Mon Aug  1 19:20:56 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * docs: amend configuration, webapi documentation to talk about MDMF

New patches:

[mutable/retrieve: rework the mutable downloader to handle multiple-segment files
Kevan Carstensen <kevan@isnotajoke.com>**20110802013524
 Ignore-this: 398d11b5cb993b50e5e4fa6e7a3856dc
 
 The downloader needs substantial reworking to handle multiple segment
 mutable files, which it needs to handle for MDMF.
] {
hunk ./src/allmydata/mutable/retrieve.py 2
 
-import struct, time
+import time
 from itertools import count
 from zope.interface import implements
 from twisted.internet import defer
hunk ./src/allmydata/mutable/retrieve.py 7
 from twisted.python import failure
-from foolscap.api import DeadReferenceError, eventually, fireEventually
-from allmydata.interfaces import IRetrieveStatus, NotEnoughSharesError
-from allmydata.util import hashutil, idlib, log
+from twisted.internet.interfaces import IPushProducer, IConsumer
+from foolscap.api import eventually, fireEventually
+from allmydata.interfaces import IRetrieveStatus, NotEnoughSharesError, \
+                                 MDMF_VERSION, SDMF_VERSION
+from allmydata.util import hashutil, log, mathutil
 from allmydata.util.dictutil import DictOfSets
 from allmydata import hashtree, codec
 from allmydata.storage.server import si_b2a
hunk ./src/allmydata/mutable/retrieve.py 19
 from pycryptopp.publickey import rsa
 
 from allmydata.mutable.common import CorruptShareError, UncoordinatedWriteError
-from allmydata.mutable.layout import SIGNED_PREFIX, unpack_share_data
+from allmydata.mutable.layout import MDMFSlotReadProxy
 
 class RetrieveStatus:
     implements(IRetrieveStatus)
hunk ./src/allmydata/mutable/retrieve.py 86
     # times, and each will have a separate response chain. However the
     # Retrieve object will remain tied to a specific version of the file, and
     # will use a single ServerMap instance.
+    implements(IPushProducer)
 
hunk ./src/allmydata/mutable/retrieve.py 88
-    def __init__(self, filenode, servermap, verinfo, fetch_privkey=False):
+    def __init__(self, filenode, servermap, verinfo, fetch_privkey=False,
+                 verify=False):
         self._node = filenode
         assert self._node.get_pubkey()
         self._storage_index = filenode.get_storage_index()
hunk ./src/allmydata/mutable/retrieve.py 107
         self.verinfo = verinfo
         # during repair, we may be called upon to grab the private key, since
         # it wasn't picked up during a verify=False checker run, and we'll
-        # need it for repair to generate the a new version.
-        self._need_privkey = fetch_privkey
-        if self._node.get_privkey():
+        # need it for repair to generate a new version.
+        self._need_privkey = fetch_privkey or verify
+        if self._node.get_privkey() and not verify:
             self._need_privkey = False
 
hunk ./src/allmydata/mutable/retrieve.py 112
+        if self._need_privkey:
+            # TODO: Evaluate the need for this. We'll use it if we want
+            # to limit how many queries are on the wire for the privkey
+            # at once.
+            self._privkey_query_markers = [] # one Marker for each time we've
+                                             # tried to get the privkey.
+
+        # verify means that we are using the downloader logic to verify all
+        # of our shares. This tells the downloader a few things.
+        #
+        # 1. We need to download all of the shares.
+        # 2. We don't need to decode or decrypt the shares, since our
+        #    caller doesn't care about the plaintext, only the
+        #    information about which shares are or are not valid.
+        # 3. When we are validating readers, we need to validate the
+        #    signature on the prefix. Do we? We already do this in the
+        #    servermap update?
+        self._verify = False
+        if verify:
+            self._verify = True
+
         self._status = RetrieveStatus()
         self._status.set_storage_index(self._storage_index)
         self._status.set_helper(False)
hunk ./src/allmydata/mutable/retrieve.py 142
          offsets_tuple) = self.verinfo
         self._status.set_size(datalength)
         self._status.set_encoding(k, N)
+        self.readers = {}
+        self._paused = False
+        self._paused_deferred = None
+        self._offset = None
+        self._read_length = None
+        self.log("got seqnum %d" % self.verinfo[0])
+
 
     def get_status(self):
         return self._status
hunk ./src/allmydata/mutable/retrieve.py 160
             kwargs["facility"] = "tahoe.mutable.retrieve"
         return log.msg(*args, **kwargs)
 
-    def download(self):
+
+    ###################
+    # IPushProducer
+
+    def pauseProducing(self):
+        """
+        I am called by my download target if we have produced too much
+        data for it to handle. I make the downloader stop producing new
+        data until my resumeProducing method is called.
+        """
+        if self._paused:
+            return
+
+        # fired when the download is unpaused.
+        self._old_status = self._status.get_status()
+        self._status.set_status("Paused")
+
+        self._pause_deferred = defer.Deferred()
+        self._paused = True
+
+
+    def resumeProducing(self):
+        """
+        I am called by my download target once it is ready to begin
+        receiving data again.
+        """
+        if not self._paused:
+            return
+
+        self._paused = False
+        p = self._pause_deferred
+        self._pause_deferred = None
+        self._status.set_status(self._old_status)
+
+        eventually(p.callback, None)
+
+
+    def _check_for_paused(self, res):
+        """
+        I am called just before a write to the consumer. I return a
+        Deferred that eventually fires with the data that is to be
+        written to the consumer. If the download has not been paused,
+        the Deferred fires immediately. Otherwise, the Deferred fires
+        when the downloader is unpaused.
+        """
+        if self._paused:
+            d = defer.Deferred()
+            self._pause_deferred.addCallback(lambda ignored: d.callback(res))
+            return d
+        return defer.succeed(res)
+
+
+    def download(self, consumer=None, offset=0, size=None):
+        assert IConsumer.providedBy(consumer) or self._verify
+
+        if consumer:
+            self._consumer = consumer
+            # we provide IPushProducer, so streaming=True, per
+            # IConsumer.
+            self._consumer.registerProducer(self, streaming=True)
+
         self._done_deferred = defer.Deferred()
         self._started = time.time()
         self._status.set_status("Retrieving Shares")
hunk ./src/allmydata/mutable/retrieve.py 225
 
+        self._offset = offset
+        self._read_length = size
+
         # first, which servers can we use?
         versionmap = self.servermap.make_versionmap()
         shares = versionmap[self.verinfo]
hunk ./src/allmydata/mutable/retrieve.py 235
         self.remaining_sharemap = DictOfSets()
         for (shnum, peerid, timestamp) in shares:
             self.remaining_sharemap.add(shnum, peerid)
+            # If the servermap update fetched anything, it fetched at least 1
+            # KiB, so we ask for that much.
+            # TODO: Change the cache methods to allow us to fetch all of the
+            # data that they have, then change this method to do that.
+            any_cache = self._node._read_from_cache(self.verinfo, shnum,
+                                                    0, 1000)
+            ss = self.servermap.connections[peerid]
+            reader = MDMFSlotReadProxy(ss,
+                                       self._storage_index,
+                                       shnum,
+                                       any_cache)
+            reader.peerid = peerid
+            self.readers[shnum] = reader
+
 
         self.shares = {} # maps shnum to validated blocks
hunk ./src/allmydata/mutable/retrieve.py 251
+        self._active_readers = [] # list of active readers for this dl.
+        self._validated_readers = set() # set of readers that we have
+                                        # validated the prefix of
+        self._block_hash_trees = {} # shnum => hashtree
 
         # how many shares do we need?
hunk ./src/allmydata/mutable/retrieve.py 257
-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
+        (seqnum,
+         root_hash,
+         IV,
+         segsize,
+         datalength,
+         k,
+         N,
+         prefix,
          offsets_tuple) = self.verinfo
hunk ./src/allmydata/mutable/retrieve.py 266
+
+
+        # We need one share hash tree for the entire file; its leaves
+        # are the roots of the block hash trees for the shares that
+        # comprise it, and its root is in the verinfo.
+        self.share_hash_tree = hashtree.IncompleteHashTree(N)
+        self.share_hash_tree.set_hashes({0: root_hash})
+
+        # This will set up both the segment decoder and the tail segment
+        # decoder, as well as a variety of other instance variables that
+        # the download process will use.
+        self._setup_encoding_parameters()
         assert len(self.remaining_sharemap) >= k
hunk ./src/allmydata/mutable/retrieve.py 279
-        # we start with the lowest shnums we have available, since FEC is
-        # faster if we're using "primary shares"
-        self.active_shnums = set(sorted(self.remaining_sharemap.keys())[:k])
-        for shnum in self.active_shnums:
-            # we use an arbitrary peer who has the share. If shares are
-            # doubled up (more than one share per peer), we could make this
-            # run faster by spreading the load among multiple peers. But the
-            # algorithm to do that is more complicated than I want to write
-            # right now, and a well-provisioned grid shouldn't have multiple
-            # shares per peer.
-            peerid = list(self.remaining_sharemap[shnum])[0]
-            self.get_data(shnum, peerid)
 
hunk ./src/allmydata/mutable/retrieve.py 280
-        # control flow beyond this point: state machine. Receiving responses
-        # from queries is the input. We might send out more queries, or we
-        # might produce a result.
+        self.log("starting download")
+        self._paused = False
+        self._started_fetching = time.time()
 
hunk ./src/allmydata/mutable/retrieve.py 284
+        self._add_active_peers()
+        # The download process beyond this is a state machine.
+        # _add_active_peers will select the peers that we want to use
+        # for the download, and then attempt to start downloading. After
+        # each segment, it will check for doneness, reacting to broken
+        # peers and corrupt shares as necessary. If it runs out of good
+        # peers before downloading all of the segments, _done_deferred
+        # will errback.  Otherwise, it will eventually callback with the
+        # contents of the mutable file.
        return self._done_deferred
 
hunk ./src/allmydata/mutable/retrieve.py 295
-    def get_data(self, shnum, peerid):
-        self.log(format="sending sh#%(shnum)d request to [%(peerid)s]",
-                 shnum=shnum,
-                 peerid=idlib.shortnodeid_b2a(peerid),
-                 level=log.NOISY)
-        ss = self.servermap.connections[peerid]
-        started = time.time()
-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
+
+    def decode(self, blocks_and_salts, segnum):
+        """
+        I am a helper method that the mutable file update process uses
+        as a shortcut to decode and decrypt the segments that it needs
+        to fetch in order to perform a file update. I take in a
+        collection of blocks and salts, and pick some of those to make a
+        segment with. I return the plaintext associated with that
+        segment.
+        """
+        # shnum => block hash tree. Unusued, but setup_encoding_parameters will
+        # want to set this.
+        # XXX: Make it so that it won't set this if we're just decoding.
+        self._block_hash_trees = {}
+        self._setup_encoding_parameters()
+        # This is the form expected by decode.
+        blocks_and_salts = blocks_and_salts.items()
+        blocks_and_salts = [(True, [d]) for d in blocks_and_salts]
+
+        d = self._decode_blocks(blocks_and_salts, segnum)
+        d.addCallback(self._decrypt_segment)
+        return d
+
+
+    def _setup_encoding_parameters(self):
+        """
+        I set up the encoding parameters, including k, n, the number
+        of segments associated with this file, and the segment decoder.
+        """
+        (seqnum,
+         root_hash,
+         IV,
+         segsize,
+         datalength,
+         k,
+         n,
+         known_prefix,
          offsets_tuple) = self.verinfo
hunk ./src/allmydata/mutable/retrieve.py 333
-        offsets = dict(offsets_tuple)
+        self._required_shares = k
+        self._total_shares = n
+        self._segment_size = segsize
+        self._data_length = datalength
+
+        if not IV:
+            self._version = MDMF_VERSION
+        else:
+            self._version = SDMF_VERSION
 
hunk ./src/allmydata/mutable/retrieve.py 343
-        # we read the checkstring, to make sure that the data we grab is from
-        # the right version.
-        readv = [ (0, struct.calcsize(SIGNED_PREFIX)) ]
+        if datalength and segsize:
+            self._num_segments = mathutil.div_ceil(datalength, segsize)
+            self._tail_data_size = datalength % segsize
+        else:
+            self._num_segments = 0
+            self._tail_data_size = 0
 
hunk ./src/allmydata/mutable/retrieve.py 350
-        # We also read the data, and the hashes necessary to validate them
-        # (share_hash_chain, block_hash_tree, share_data). We don't read the
-        # signature or the pubkey, since that was handled during the
-        # servermap phase, and we'll be comparing the share hash chain
-        # against the roothash that was validated back then.
+        self._segment_decoder = codec.CRSDecoder()
+        self._segment_decoder.set_params(segsize, k, n)
 
hunk ./src/allmydata/mutable/retrieve.py 353
-        readv.append( (offsets['share_hash_chain'],
-                       offsets['enc_privkey'] - offsets['share_hash_chain'] ) )
+        if  not self._tail_data_size:
+            self._tail_data_size = segsize
 
hunk ./src/allmydata/mutable/retrieve.py 356
-        # if we need the private key (for repair), we also fetch that
-        if self._need_privkey:
-            readv.append( (offsets['enc_privkey'],
-                           offsets['EOF'] - offsets['enc_privkey']) )
+        self._tail_segment_size = mathutil.next_multiple(self._tail_data_size,
+                                                         self._required_shares)
+        if self._tail_segment_size == self._segment_size:
+            self._tail_decoder = self._segment_decoder
+        else:
+            self._tail_decoder = codec.CRSDecoder()
+            self._tail_decoder.set_params(self._tail_segment_size,
+                                          self._required_shares,
+                                          self._total_shares)
+
+        self.log("got encoding parameters: "
+                 "k: %d "
+                 "n: %d "
+                 "%d segments of %d bytes each (%d byte tail segment)" % \
+                 (k, n, self._num_segments, self._segment_size,
+                  self._tail_segment_size))
 
hunk ./src/allmydata/mutable/retrieve.py 373
-        m = Marker()
-        self._outstanding_queries[m] = (peerid, shnum, started)
+        for i in xrange(self._total_shares):
+            # So we don't have to do this later.
+            self._block_hash_trees[i] = hashtree.IncompleteHashTree(self._num_segments)
 
hunk ./src/allmydata/mutable/retrieve.py 377
-        # ask the cache first
-        got_from_cache = False
-        datavs = []
-        for (offset, length) in readv:
-            data = self._node._read_from_cache(self.verinfo, shnum, offset, length)
-            if data is not None:
-                datavs.append(data)
-        if len(datavs) == len(readv):
-            self.log("got data from cache")
-            got_from_cache = True
-            d = fireEventually({shnum: datavs})
-            # datavs is a dict mapping shnum to a pair of strings
+        # Our last task is to tell the downloader where to start and
+        # where to stop. We use three parameters for that:
+        #   - self._start_segment: the segment that we need to start
+        #     downloading from.
+        #   - self._current_segment: the next segment that we need to
+        #     download.
+        #   - self._last_segment: The last segment that we were asked to
+        #     download.
+        #
+        #  We say that the download is complete when
+        #  self._current_segment > self._last_segment. We use
+        #  self._start_segment and self._last_segment to know when to
+        #  strip things off of segments, and how much to strip.
+        if self._offset:
+            self.log("got offset: %d" % self._offset)
+            # our start segment is the first segment containing the
+            # offset we were given.
+            start = mathutil.div_ceil(self._offset,
+                                      self._segment_size)
+            # this gets us the first segment after self._offset. Then
+            # our start segment is the one before it.
+            start -= 1
+
+            assert start < self._num_segments
+            self._start_segment = start
+            self.log("got start segment: %d" % self._start_segment)
         else:
hunk ./src/allmydata/mutable/retrieve.py 404
-            d = self._do_read(ss, peerid, self._storage_index, [shnum], readv)
-        self.remaining_sharemap.discard(shnum, peerid)
+            self._start_segment = 0
 
hunk ./src/allmydata/mutable/retrieve.py 406
-        d.addCallback(self._got_results, m, peerid, started, got_from_cache)
-        d.addErrback(self._query_failed, m, peerid)
-        # errors that aren't handled by _query_failed (and errors caused by
-        # _query_failed) get logged, but we still want to check for doneness.
-        def _oops(f):
-            self.log(format="problem in _query_failed for sh#%(shnum)d to %(peerid)s",
-                     shnum=shnum,
-                     peerid=idlib.shortnodeid_b2a(peerid),
-                     failure=f,
-                     level=log.WEIRD, umid="W0xnQA")
-        d.addErrback(_oops)
-        d.addBoth(self._check_for_done)
-        # any error during _check_for_done means the download fails. If the
-        # download is successful, _check_for_done will fire _done by itself.
-        d.addErrback(self._done)
-        d.addErrback(log.err)
-        return d # purely for testing convenience
 
hunk ./src/allmydata/mutable/retrieve.py 407
-    def _do_read(self, ss, peerid, storage_index, shnums, readv):
-        # isolate the callRemote to a separate method, so tests can subclass
-        # Publish and override it
-        d = ss.callRemote("slot_readv", storage_index, shnums, readv)
-        return d
+        if self._read_length:
+            # our end segment is the last segment containing part of the
+            # segment that we were asked to read.
+            self.log("got read length %d" % self._read_length)
+            end_data = self._offset + self._read_length
+            end = mathutil.div_ceil(end_data,
+                                    self._segment_size)
+            end -= 1
+            assert end < self._num_segments
+            self._last_segment = end
+            self.log("got end segment: %d" % self._last_segment)
+        else:
+            self._last_segment = self._num_segments - 1
 
hunk ./src/allmydata/mutable/retrieve.py 421
-    def remove_peer(self, peerid):
-        for shnum in list(self.remaining_sharemap.keys()):
-            self.remaining_sharemap.discard(shnum, peerid)
+        self._current_segment = self._start_segment
 
hunk ./src/allmydata/mutable/retrieve.py 423
-    def _got_results(self, datavs, marker, peerid, started, got_from_cache):
-        now = time.time()
-        elapsed = now - started
-        if not got_from_cache:
-            self._status.add_fetch_timing(peerid, elapsed)
-        self.log(format="got results (%(shares)d shares) from [%(peerid)s]",
-                 shares=len(datavs),
-                 peerid=idlib.shortnodeid_b2a(peerid),
-                 level=log.NOISY)
-        self._outstanding_queries.pop(marker, None)
-        if not self._running:
-            return
+    def _add_active_peers(self):
+        """
+        I populate self._active_readers with enough active readers to
+        retrieve the contents of this mutable file. I am called before
+        downloading starts, and (eventually) after each validation
+        error, connection error, or other problem in the download.
+        """
+        # TODO: It would be cool to investigate other heuristics for
+        # reader selection. For instance, the cost (in time the user
+        # spends waiting for their file) of selecting a really slow peer
+        # that happens to have a primary share is probably more than
+        # selecting a really fast peer that doesn't have a primary
+        # share. Maybe the servermap could be extended to provide this
+        # information; it could keep track of latency information while
+        # it gathers more important data, and then this routine could
+        # use that to select active readers.
+        #
+        # (these and other questions would be easier to answer with a
+        #  robust, configurable tahoe-lafs simulator, which modeled node
+        #  failures, differences in node speed, and other characteristics
+        #  that we expect storage servers to have.  You could have
+        #  presets for really stable grids (like allmydata.com),
+        #  friendnets, make it easy to configure your own settings, and
+        #  then simulate the effect of big changes on these use cases
+        #  instead of just reasoning about what the effect might be. Out
+        #  of scope for MDMF, though.)
 
hunk ./src/allmydata/mutable/retrieve.py 450
-        # note that we only ask for a single share per query, so we only
-        # expect a single share back. On the other hand, we use the extra
-        # shares if we get them.. seems better than an assert().
+        # We need at least self._required_shares readers to download a
+        # segment.
+        if self._verify:
+            needed = self._total_shares
+        else:
+            needed = self._required_shares - len(self._active_readers)
+        # XXX: Why don't format= log messages work here?
+        self.log("adding %d peers to the active peers list" % needed)
 
hunk ./src/allmydata/mutable/retrieve.py 459
-        for shnum,datav in datavs.items():
-            (prefix, hash_and_data) = datav[:2]
-            try:
-                self._got_results_one_share(shnum, peerid,
-                                            prefix, hash_and_data)
-            except CorruptShareError, e:
-                # log it and give the other shares a chance to be processed
-                f = failure.Failure()
-                self.log(format="bad share: %(f_value)s",
-                         f_value=str(f.value), failure=f,
-                         level=log.WEIRD, umid="7fzWZw")
-                self.notify_server_corruption(peerid, shnum, str(e))
-                self.remove_peer(peerid)
-                self.servermap.mark_bad_share(peerid, shnum, prefix)
-                self._bad_shares.add( (peerid, shnum) )
-                self._status.problems[peerid] = f
-                self._last_failure = f
-                pass
-            if self._need_privkey and len(datav) > 2:
-                lp = None
-                self._try_to_validate_privkey(datav[2], peerid, shnum, lp)
-        # all done!
+        # We favor lower numbered shares, since FEC is faster with
+        # primary shares than with other shares, and lower-numbered
+        # shares are more likely to be primary than higher numbered
+        # shares.
+        active_shnums = set(sorted(self.remaining_sharemap.keys()))
+        # We shouldn't consider adding shares that we already have; this
+        # will cause problems later.
+        active_shnums -= set([reader.shnum for reader in self._active_readers])
+        active_shnums = list(active_shnums)[:needed]
+        if len(active_shnums) < needed and not self._verify:
+            # We don't have enough readers to retrieve the file; fail.
+            return self._failed()
 
hunk ./src/allmydata/mutable/retrieve.py 472
-    def notify_server_corruption(self, peerid, shnum, reason):
-        ss = self.servermap.connections[peerid]
-        ss.callRemoteOnly("advise_corrupt_share",
-                          "mutable", self._storage_index, shnum, reason)
+        for shnum in active_shnums:
+            self._active_readers.append(self.readers[shnum])
+            self.log("added reader for share %d" % shnum)
+        assert len(self._active_readers) >= self._required_shares
+        # Conceptually, this is part of the _add_active_peers step. It
+        # validates the prefixes of newly added readers to make sure
+        # that they match what we are expecting for self.verinfo. If
+        # validation is successful, _validate_active_prefixes will call
+        # _download_current_segment for us. If validation is
+        # unsuccessful, then _validate_prefixes will remove the peer and
+        # call _add_active_peers again, where we will attempt to rectify
+        # the problem by choosing another peer.
+        return self._validate_active_prefixes()
 
hunk ./src/allmydata/mutable/retrieve.py 486
-    def _got_results_one_share(self, shnum, peerid,
-                               got_prefix, got_hash_and_data):
-        self.log("_got_results: got shnum #%d from peerid %s"
-                 % (shnum, idlib.shortnodeid_b2a(peerid)))
-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
+
+    def _validate_active_prefixes(self):
+        """
+        I check to make sure that the prefixes on the peers that I am
+        currently reading from match the prefix that we want to see, as
+        said in self.verinfo.
+
+        If I find that all of the active peers have acceptable prefixes,
+        I pass control to _download_current_segment, which will use
+        those peers to do cool things. If I find that some of the active
+        peers have unacceptable prefixes, I will remove them from active
+        peers (and from further consideration) and call
+        _add_active_peers to attempt to rectify the situation. I keep
+        track of which peers I have already validated so that I don't
+        need to do so again.
+        """
+        assert self._active_readers, "No more active readers"
+
+        ds = []
+        new_readers = set(self._active_readers) - self._validated_readers
+        self.log('validating %d newly-added active readers' % len(new_readers))
+
+        for reader in new_readers:
+            # We force a remote read here -- otherwise, we are relying
+            # on cached data that we already verified as valid, and we
+            # won't detect an uncoordinated write that has occurred
708+            # since the last servermap update.
709+            d = reader.get_prefix(force_remote=True)
710+            d.addCallback(self._try_to_validate_prefix, reader)
711+            ds.append(d)
712+        dl = defer.DeferredList(ds, consumeErrors=True)
713+        def _check_results(results):
714+            # Each result in results will be of the form (success, msg).
715+            # We don't care about msg, but success will tell us whether
716+            # or not the checkstring validated. If it didn't, we need to
717+            # remove the offending (peer,share) from our active readers,
718+            # and ensure that active readers is again populated.
719+            bad_readers = []
720+            for i, result in enumerate(results):
721+                if not result[0]:
722+                    reader = self._active_readers[i]
723+                    f = result[1]
724+                    assert isinstance(f, failure.Failure)
725+
726+                    self.log("The reader %s failed to "
727+                             "properly validate: %s" % \
728+                             (reader, str(f.value)))
729+                    bad_readers.append((reader, f))
730+                else:
731+                    reader = self._active_readers[i]
732+                    self.log("the reader %s checks out, so we'll use it" % \
733+                             reader)
734+                    self._validated_readers.add(reader)
735+                    # Each time we validate a reader, we check to see if
736+                    # we need the private key. If we do, we politely ask
737+                    # for it and then continue computing. If we find
738+                    # that we haven't gotten it at the end of
739+                    # segment decoding, then we'll take more drastic
740+                    # measures.
741+                    if self._need_privkey and not self._node.is_readonly():
742+                        d = reader.get_encprivkey()
743+                        d.addCallback(self._try_to_validate_privkey, reader)
744+            if bad_readers:
745+                # We do them all at once, or else we screw up list indexing.
746+                for (reader, f) in bad_readers:
747+                    self._mark_bad_share(reader, f)
748+                if self._verify:
749+                    if len(self._active_readers) >= self._required_shares:
750+                        return self._download_current_segment()
751+                    else:
752+                        return self._failed()
753+                else:
754+                    return self._add_active_peers()
755+            else:
756+                return self._download_current_segment()
757+            # The next step will assert that it has enough active
758+            # readers to fetch shares; we just need to remove it.
759+        dl.addCallback(_check_results)
760+        return dl
761+
762+
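`_check_results` above depends on the shape of `DeferredList` results: with `consumeErrors=True`, the list fires with `(success, value)` pairs in the same order as the input deferreds, which is what lets the code map a failure at index `i` back to `self._active_readers[i]`. A minimal sketch of that bookkeeping, with the Deferred machinery stubbed out as plain pairs (names are illustrative):

```python
def classify_readers(results, readers):
    """Split readers into good and bad using DeferredList-style
    (success, value) pairs; results[i] corresponds to readers[i]."""
    good, bad = [], []
    for i, (success, value) in enumerate(results):
        if success:
            good.append(readers[i])
        else:
            # value is a Failure in the real code; keep it for reporting.
            bad.append((readers[i], value))
    return good, bad

# One reader hit a mismatched prefix; the other two validated.
results = [(True, "prefix"), (False, "UncoordinatedWriteError"), (True, "prefix")]
good, bad = classify_readers(results, ["sh0", "sh3", "sh5"])
assert good == ["sh0", "sh5"]
```

As in the patch, the bad readers are collected first and removed in a second pass, since removing entries from `self._active_readers` while indexing into it would shift the indices.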
763+    def _try_to_validate_prefix(self, prefix, reader):
764+        """
765+        I check that the prefix returned by a candidate server for
766+        retrieval matches the prefix that the servermap knows about
767+        (and, hence, the prefix that was validated earlier). If it does,
768+        I return True, which means that I approve of the use of the
769+        candidate server for segment retrieval. If it doesn't, I return
770+        False, which means that another server must be chosen.
771+        """
772+        (seqnum,
773+         root_hash,
774+         IV,
775+         segsize,
776+         datalength,
777+         k,
778+         N,
779+         known_prefix,
780          offsets_tuple) = self.verinfo
781hunk ./src/allmydata/mutable/retrieve.py 585
782-        assert len(got_prefix) == len(prefix), (len(got_prefix), len(prefix))
783-        if got_prefix != prefix:
784-            msg = "someone wrote to the data since we read the servermap: prefix changed"
785-            raise UncoordinatedWriteError(msg)
786-        (share_hash_chain, block_hash_tree,
787-         share_data) = unpack_share_data(self.verinfo, got_hash_and_data)
788+        if known_prefix != prefix:
789+            self.log("prefix from share %d doesn't match" % reader.shnum)
790+            raise UncoordinatedWriteError("Mismatched prefix -- this could "
791+                                          "indicate an uncoordinated write")
792+        # Otherwise, we're okay -- no issues.
793 
794hunk ./src/allmydata/mutable/retrieve.py 591
795-        assert isinstance(share_data, str)
796-        # build the block hash tree. SDMF has only one leaf.
797-        leaves = [hashutil.block_hash(share_data)]
798-        t = hashtree.HashTree(leaves)
799-        if list(t) != block_hash_tree:
800-            raise CorruptShareError(peerid, shnum, "block hash tree failure")
801-        share_hash_leaf = t[0]
802-        t2 = hashtree.IncompleteHashTree(N)
803-        # root_hash was checked by the signature
804-        t2.set_hashes({0: root_hash})
805-        try:
806-            t2.set_hashes(hashes=share_hash_chain,
807-                          leaves={shnum: share_hash_leaf})
808-        except (hashtree.BadHashError, hashtree.NotEnoughHashesError,
809-                IndexError), e:
810-            msg = "corrupt hashes: %s" % (e,)
811-            raise CorruptShareError(peerid, shnum, msg)
812-        self.log(" data valid! len=%d" % len(share_data))
813-        # each query comes down to this: placing validated share data into
814-        # self.shares
815-        self.shares[shnum] = share_data
816 
817hunk ./src/allmydata/mutable/retrieve.py 592
818-    def _try_to_validate_privkey(self, enc_privkey, peerid, shnum, lp):
819+    def _remove_reader(self, reader):
820+        """
821+        At various points, we will wish to remove a peer from
822+        consideration and/or use. These include, but are not necessarily
823+        limited to:
824 
825hunk ./src/allmydata/mutable/retrieve.py 598
826-        alleged_privkey_s = self._node._decrypt_privkey(enc_privkey)
827-        alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s)
828-        if alleged_writekey != self._node.get_writekey():
829-            self.log("invalid privkey from %s shnum %d" %
830-                     (idlib.nodeid_b2a(peerid)[:8], shnum),
831-                     parent=lp, level=log.WEIRD, umid="YIw4tA")
832-            return
833+            - A connection error.
834+            - A mismatched prefix (that is, a prefix that does not match
835+              our conception of the version information string).
836+            - A failing block hash, salt hash, or share hash, which can
837+              indicate disk failure/bit flips, or network trouble.
838 
839hunk ./src/allmydata/mutable/retrieve.py 604
840-        # it's good
841-        self.log("got valid privkey from shnum %d on peerid %s" %
842-                 (shnum, idlib.shortnodeid_b2a(peerid)),
843-                 parent=lp)
844-        privkey = rsa.create_signing_key_from_string(alleged_privkey_s)
845-        self._node._populate_encprivkey(enc_privkey)
846-        self._node._populate_privkey(privkey)
847-        self._need_privkey = False
848+        This method will do that. I will make sure that the
849+        (shnum,reader) combination represented by my reader argument is
850+        not used for anything else during this download. I will not
851+        advise the reader of any corruption, something that my callers
852+        may wish to do on their own.
853+        """
854+        # TODO: When you're done writing this, see if this is ever
855+        # actually used for something that _mark_bad_share isn't. I have
856+        # a feeling that they will be used for very similar things, and
857+        # that having them both here is just going to be an epic amount
858+        # of code duplication.
859+        #
860+        # (well, okay, not epic, but meaningful)
861+        self.log("removing reader %s" % reader)
862+        # Remove the reader from _active_readers
863+        self._active_readers.remove(reader)
864+        # TODO: self.readers.remove(reader)?
865+        for shnum in list(self.remaining_sharemap.keys()):
866+            self.remaining_sharemap.discard(shnum, reader.peerid)
867 
868hunk ./src/allmydata/mutable/retrieve.py 624
869-    def _query_failed(self, f, marker, peerid):
870-        self.log(format="query to [%(peerid)s] failed",
871-                 peerid=idlib.shortnodeid_b2a(peerid),
872-                 level=log.NOISY)
873-        self._status.problems[peerid] = f
874-        self._outstanding_queries.pop(marker, None)
875-        if not self._running:
876-            return
877+
878+    def _mark_bad_share(self, reader, f):
879+        """
880+        I mark the (peerid, shnum) encapsulated by my reader argument as
881+        a bad share, which means that it will not be used anywhere else.
882+
883+        There are several reasons to want to mark something as a bad
884+        share. These include:
885+
886+            - A connection error to the peer.
887+            - A mismatched prefix (that is, a prefix that does not match
888+              our local conception of the version information string).
889+            - A failing block hash, salt hash, share hash, or other
890+              integrity check.
891+
892+        This method will ensure that readers that we wish to mark bad
893+        (for these reasons or other reasons) are not used for the rest
894+        of the download. Additionally, it will attempt to tell the
895+        remote peer (with no guarantee of success) that its share is
896+        corrupt.
897+        """
898+        self.log("marking share %d on server %s as bad" % \
899+                 (reader.shnum, reader))
900+        prefix = self.verinfo[-2]
901+        self.servermap.mark_bad_share(reader.peerid,
902+                                      reader.shnum,
903+                                      prefix)
904+        self._remove_reader(reader)
905+        self._bad_shares.add((reader.peerid, reader.shnum, f))
906+        self._status.problems[reader.peerid] = f
907         self._last_failure = f
908hunk ./src/allmydata/mutable/retrieve.py 655
909-        self.remove_peer(peerid)
910-        level = log.WEIRD
911-        if f.check(DeadReferenceError):
912-            level = log.UNUSUAL
913-        self.log(format="error during query: %(f_value)s",
914-                 f_value=str(f.value), failure=f, level=level, umid="gOJB5g")
915+        self.notify_server_corruption(reader.peerid, reader.shnum,
916+                                      str(f.value))
917 
918hunk ./src/allmydata/mutable/retrieve.py 658
919-    def _check_for_done(self, res):
920-        # exit paths:
921-        #  return : keep waiting, no new queries
922-        #  return self._send_more_queries(outstanding) : send some more queries
923-        #  fire self._done(plaintext) : download successful
924-        #  raise exception : download fails
925 
926hunk ./src/allmydata/mutable/retrieve.py 659
927-        self.log(format="_check_for_done: running=%(running)s, decoding=%(decoding)s",
928-                 running=self._running, decoding=self._decoding,
929-                 level=log.NOISY)
930-        if not self._running:
931-            return
932-        if self._decoding:
933-            return
934-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
935-         offsets_tuple) = self.verinfo
936+    def _download_current_segment(self):
937+        """
938+        I download, validate, decode, decrypt, and assemble the segment
939+        that this Retrieve is currently responsible for downloading.
940+        """
941+        assert len(self._active_readers) >= self._required_shares
942+        if self._current_segment <= self._last_segment:
943+            d = self._process_segment(self._current_segment)
944+        else:
945+            d = defer.succeed(None)
946+        d.addBoth(self._turn_barrier)
947+        d.addCallback(self._check_for_done)
948+        return d
949 
950hunk ./src/allmydata/mutable/retrieve.py 673
951-        if len(self.shares) < k:
952-            # we don't have enough shares yet
953-            return self._maybe_send_more_queries(k)
954-        if self._need_privkey:
955-            # we got k shares, but none of them had a valid privkey. TODO:
956-            # look further. Adding code to do this is a bit complicated, and
957-            # I want to avoid that complication, and this should be pretty
958-            # rare (k shares with bitflips in the enc_privkey but not in the
959-            # data blocks). If we actually do get here, the subsequent repair
960-            # will fail for lack of a privkey.
961-            self.log("got k shares but still need_privkey, bummer",
962-                     level=log.WEIRD, umid="MdRHPA")
963 
964hunk ./src/allmydata/mutable/retrieve.py 674
965-        # we have enough to finish. All the shares have had their hashes
966-        # checked, so if something fails at this point, we don't know how
967-        # to fix it, so the download will fail.
968+    def _turn_barrier(self, result):
969+        """
970+        I help the download process avoid the recursion limit issues
971+        discussed in #237.
972+        """
973+        return fireEventually(result)
974 
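`fireEventually` returns a Deferred that fires on a later reactor turn, so each segment's processing starts with a fresh stack instead of recursing through the callback chain (the #237 failure mode). The scheduling idea can be sketched with a toy event loop; nothing below is Tahoe or Twisted API, just the mechanism:

```python
from collections import deque

def fire_eventually(loop, callback, value=None):
    """Run callback(value) on a later turn of the event loop instead of
    calling it synchronously; each turn starts with a fresh stack."""
    loop.append(lambda: callback(value))

def download_segments(loop, num_segments, done):
    state = {"segnum": 0}
    def process_next(_):
        if state["segnum"] == num_segments:
            done.append("finished")
            return
        state["segnum"] += 1
        # Without this turn barrier, each segment would synchronously
        # invoke the next, and a many-segment file would blow the stack.
        fire_eventually(loop, process_next)
    process_next(None)

loop, done = deque(), []
download_segments(loop, 50000, done)   # far beyond the recursion limit
while loop:                            # drive the toy reactor
    loop.popleft()()
```

Each queued thunk returns before the next one runs, so stack depth stays constant no matter how many segments the file has.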
975hunk ./src/allmydata/mutable/retrieve.py 681
976-        self._decoding = True # avoid reentrancy
977-        self._status.set_status("decoding")
978-        now = time.time()
979-        elapsed = now - self._started
980-        self._status.timings["fetch"] = elapsed
981 
982hunk ./src/allmydata/mutable/retrieve.py 682
983-        d = defer.maybeDeferred(self._decode)
984-        d.addCallback(self._decrypt, IV, self._node.get_readkey())
985-        d.addBoth(self._done)
986-        return d # purely for test convenience
987+    def _process_segment(self, segnum):
988+        """
989+        I download, validate, decode, and decrypt one segment of the
990+        file that this Retrieve is retrieving. This means coordinating
991+        the process of getting k blocks of that file, validating them,
992+        assembling them into one segment with the decoder, and then
993+        decrypting them.
994+        """
995+        self.log("processing segment %d" % segnum)
996 
997hunk ./src/allmydata/mutable/retrieve.py 692
998-    def _maybe_send_more_queries(self, k):
999-        # we don't have enough shares yet. Should we send out more queries?
1000-        # There are some number of queries outstanding, each for a single
1001-        # share. If we can generate 'needed_shares' additional queries, we do
1002-        # so. If we can't, then we know this file is a goner, and we raise
1003-        # NotEnoughSharesError.
1004-        self.log(format=("_maybe_send_more_queries, have=%(have)d, k=%(k)d, "
1005-                         "outstanding=%(outstanding)d"),
1006-                 have=len(self.shares), k=k,
1007-                 outstanding=len(self._outstanding_queries),
1008-                 level=log.NOISY)
1009+        # TODO: The old code uses a marker. Should this code do that
1010+        # too? What did the Marker do?
1011+        assert len(self._active_readers) >= self._required_shares
1012 
1013hunk ./src/allmydata/mutable/retrieve.py 696
1014-        remaining_shares = k - len(self.shares)
1015-        needed = remaining_shares - len(self._outstanding_queries)
1016-        if not needed:
1017-            # we have enough queries in flight already
1018+        # We need to ask each of our active readers for its block and
1019+        # salt. We will then validate those. If validation is
1020+        # successful, we will assemble the results into plaintext.
1021+        ds = []
1022+        for reader in self._active_readers:
1023+            started = time.time()
1024+            d = reader.get_block_and_salt(segnum, queue=True)
1025+            d2 = self._get_needed_hashes(reader, segnum)
1026+            dl = defer.DeferredList([d, d2], consumeErrors=True)
1027+            dl.addCallback(self._validate_block, segnum, reader, started)
1028+            dl.addErrback(self._validation_or_decoding_failed, [reader])
1029+            ds.append(dl)
1030+            reader.flush()
1031+        dl = defer.DeferredList(ds)
1032+        if self._verify:
1033+            dl.addCallback(lambda ignored: "")
1034+            dl.addCallback(self._set_segment)
1035+        else:
1036+            dl.addCallback(self._maybe_decode_and_decrypt_segment, segnum)
1037+        return dl
1038 
1039hunk ./src/allmydata/mutable/retrieve.py 717
1040-            # TODO: but if they've been in flight for a long time, and we
1041-            # have reason to believe that new queries might respond faster
1042-            # (i.e. we've seen other queries come back faster, then consider
1043-            # sending out new queries. This could help with peers which have
1044-            # silently gone away since the servermap was updated, for which
1045-            # we're still waiting for the 15-minute TCP disconnect to happen.
1046-            self.log("enough queries are in flight, no more are needed",
1047-                     level=log.NOISY)
1048-            return
1049 
1050hunk ./src/allmydata/mutable/retrieve.py 718
1051-        outstanding_shnums = set([shnum
1052-                                  for (peerid, shnum, started)
1053-                                  in self._outstanding_queries.values()])
1054-        # prefer low-numbered shares, they are more likely to be primary
1055-        available_shnums = sorted(self.remaining_sharemap.keys())
1056-        for shnum in available_shnums:
1057-            if shnum in outstanding_shnums:
1058-                # skip ones that are already in transit
1059-                continue
1060-            if shnum not in self.remaining_sharemap:
1061-                # no servers for that shnum. note that DictOfSets removes
1062-                # empty sets from the dict for us.
1063-                continue
1064-            peerid = list(self.remaining_sharemap[shnum])[0]
1065-            # get_data will remove that peerid from the sharemap, and add the
1066-            # query to self._outstanding_queries
1067-            self._status.set_status("Retrieving More Shares")
1068-            self.get_data(shnum, peerid)
1069-            needed -= 1
1070-            if not needed:
1071+    def _maybe_decode_and_decrypt_segment(self, blocks_and_salts, segnum):
1072+        """
1073+        I take the results of fetching and validating the blocks from a
1074+        callback chain in another method. If the results are such that
1075+        they tell me that validation and fetching succeeded without
1076+        incident, I will proceed with decoding and decryption.
1077+        Otherwise, I will do nothing.
1078+        """
1079+        self.log("trying to decode and decrypt segment %d" % segnum)
1080+        failures = False
1081+        for block_and_salt in blocks_and_salts:
1082+            if not block_and_salt[0] or block_and_salt[1] is None:
1083+                self.log("some validation operations failed; not proceeding")
1084+                failures = True
1085                 break
1086hunk ./src/allmydata/mutable/retrieve.py 733
1087+        if not failures:
1088+            self.log("everything looks ok, building segment %d" % segnum)
1089+            d = self._decode_blocks(blocks_and_salts, segnum)
1090+            d.addCallback(self._decrypt_segment)
1091+            d.addErrback(self._validation_or_decoding_failed,
1092+                         self._active_readers)
1093+            # check to see whether we've been paused before writing
1094+            # anything.
1095+            d.addCallback(self._check_for_paused)
1096+            d.addCallback(self._set_segment)
1097+            return d
1098+        else:
1099+            return defer.succeed(None)
1100 
1101hunk ./src/allmydata/mutable/retrieve.py 747
1102-        # at this point, we have as many outstanding queries as we can. If
1103-        # needed!=0 then we might not have enough to recover the file.
1104-        if needed:
1105-            format = ("ran out of peers: "
1106-                      "have %(have)d shares (k=%(k)d), "
1107-                      "%(outstanding)d queries in flight, "
1108-                      "need %(need)d more, "
1109-                      "found %(bad)d bad shares")
1110-            args = {"have": len(self.shares),
1111-                    "k": k,
1112-                    "outstanding": len(self._outstanding_queries),
1113-                    "need": needed,
1114-                    "bad": len(self._bad_shares),
1115-                    }
1116-            self.log(format=format,
1117-                     level=log.WEIRD, umid="ezTfjw", **args)
1118-            err = NotEnoughSharesError("%s, last failure: %s" %
1119-                                      (format % args, self._last_failure))
1120-            if self._bad_shares:
1121-                self.log("We found some bad shares this pass. You should "
1122-                         "update the servermap and try again to check "
1123-                         "more peers",
1124-                         level=log.WEIRD, umid="EFkOlA")
1125-                err.servermap = self.servermap
1126-            raise err
1127 
1128hunk ./src/allmydata/mutable/retrieve.py 748
1129+    def _set_segment(self, segment):
1130+        """
1131+        Given a plaintext segment, I register that segment with the
1132+        target that is handling the file download.
1133+        """
1134+        self.log("got plaintext for segment %d" % self._current_segment)
1135+        if self._current_segment == self._start_segment:
1136+            # We're on the first segment. It's possible that we want
1137+            # only some part of the end of this segment, and that we
1138+            # just downloaded the whole thing to get that part. If so,
1139+            # we need to account for that and give the reader just the
1140+            # data that they want.
1141+            n = self._offset % self._segment_size
1142+            self.log("stripping %d bytes off of the first segment" % n)
1143+            self.log("original segment length: %d" % len(segment))
1144+            segment = segment[n:]
1145+            self.log("new segment length: %d" % len(segment))
1146+
1147+        if self._current_segment == self._last_segment and self._read_length is not None:
1148+            # We're on the last segment. It's possible that we only want
1149+            # part of the beginning of this segment, and that we
1150+            # downloaded the whole thing anyway. Make sure to give the
1151+            # caller only the portion of the segment that they want to
1152+            # receive.
1153+            extra = self._read_length
1154+            if self._start_segment != self._last_segment:
1155+                extra -= self._segment_size - \
1156+                            (self._offset % self._segment_size)
1157+            extra %= self._segment_size
1158+            self.log("original segment length: %d" % len(segment))
1159+            segment = segment[:extra]
1160+            self.log("new segment length: %d" % len(segment))
1161+            self.log("only taking %d bytes of the last segment" % extra)
1162+
1163+        if not self._verify:
1164+            self._consumer.write(segment)
1165+        else:
1166+            # we don't care about the plaintext if we are doing a verify.
1167+            segment = None
1168+        self._current_segment += 1
1169+
1170+
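The first/last-segment trimming above reduces to simple arithmetic: drop `offset % segsize` leading bytes from the start segment, then cap the total at `read_length`. A simplified equivalent, expressed as a running byte budget instead of the patch's modular `extra` computation (names hypothetical; it expects the segments actually downloaded, starting with the one containing `offset`):

```python
def trim_segments(segments, segsize, offset, read_length):
    """Turn whole downloaded segments into exactly the requested byte
    range.  segments[0] must be the segment containing offset;
    read_length=None means 'to the end of the file'."""
    out = []
    remaining = read_length
    for i, seg in enumerate(segments):
        if i == 0:
            seg = seg[offset % segsize:]     # strip the leading slack
        if remaining is not None:
            seg = seg[:remaining]            # cap the trailing slack
            remaining -= len(seg)
        out.append(seg)
    return b"".join(out)

data = b"0123456789abcdef"                   # four 4-byte segments
offset, read_length = 5, 7                   # caller wants data[5:12]
downloaded = [data[4:8], data[8:12], data[12:16]]   # segments 1..3
assert trim_segments(downloaded, 4, offset, read_length) == data[5:12]
```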
1171+    def _validation_or_decoding_failed(self, f, readers):
1172+        """
1173+        I am called when a block or a salt fails to correctly validate, or when
1174+        the decryption or decoding operation fails for some reason.  I react to
1175+        this failure by notifying the remote server of corruption, and then
1176+        removing the remote peer from further activity.
1177+        """
1178+        assert isinstance(readers, list)
1179+        bad_shnums = [reader.shnum for reader in readers]
1180+
1181+        self.log("validation or decoding failed on share(s) %s, peer(s) %s, "
1182+                 "segment %d: %s" % \
1183+                 (bad_shnums, readers, self._current_segment, str(f)))
1184+        for reader in readers:
1185+            self._mark_bad_share(reader, f)
1186         return
1187 
1188hunk ./src/allmydata/mutable/retrieve.py 807
1189-    def _decode(self):
1190-        started = time.time()
1191-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
1192-         offsets_tuple) = self.verinfo
1193 
1194hunk ./src/allmydata/mutable/retrieve.py 808
1195-        # shares_dict is a dict mapping shnum to share data, but the codec
1196-        # wants two lists.
1197-        shareids = []; shares = []
1198-        for shareid, share in self.shares.items():
1199+    def _validate_block(self, results, segnum, reader, started):
1200+        """
1201+        I validate a block from one share on a remote server.
1202+        """
1203+        # Grab the part of the block hash tree that is necessary to
1204+        # validate this block, then generate the block hash root.
1205+        self.log("validating share %d for segment %d" % (reader.shnum,
1206+                                                             segnum))
1207+        self._status.add_fetch_timing(reader.peerid, started)
1208+        self._status.set_status("Validating blocks for segment %d" % segnum)
1209+        # Did we fail to fetch either of the things that we were
1210+        # supposed to? Fail if so.
1211+        if not results[0][0] or not results[1][0]:
1212+            # handled by the errback handler.
1213+
1214+            # These all get batched into one query, so the resulting
1215+            # failure should be the same for all of them, so we can just
1216+            # use the first one.
1217+            assert isinstance(results[0][1], failure.Failure)
1218+
1219+            f = results[0][1]
1220+            raise CorruptShareError(reader.peerid,
1221+                                    reader.shnum,
1222+                                    "Connection error: %s" % str(f))
1223+
1224+        block_and_salt, block_and_sharehashes = results
1225+        block, salt = block_and_salt[1]
1226+        blockhashes, sharehashes = block_and_sharehashes[1]
1227+
1228+        blockhashes = dict(enumerate(blockhashes[1]))
1229+        self.log("the reader gave me the following blockhashes: %s" % \
1230+                 blockhashes.keys())
1231+        self.log("the reader gave me the following sharehashes: %s" % \
1232+                 sharehashes[1].keys())
1233+        bht = self._block_hash_trees[reader.shnum]
1234+
1235+        if bht.needed_hashes(segnum, include_leaf=True):
1236+            try:
1237+                bht.set_hashes(blockhashes)
1238+            except (hashtree.BadHashError, hashtree.NotEnoughHashesError, \
1239+                    IndexError), e:
1240+                raise CorruptShareError(reader.peerid,
1241+                                        reader.shnum,
1242+                                        "block hash tree failure: %s" % e)
1243+
1244+        if self._version == MDMF_VERSION:
1245+            blockhash = hashutil.block_hash(salt + block)
1246+        else:
1247+            blockhash = hashutil.block_hash(block)
1248+        # If this works without an error, then validation is
1249+        # successful.
1250+        try:
1251+            bht.set_hashes(leaves={segnum: blockhash})
1252+        except (hashtree.BadHashError, hashtree.NotEnoughHashesError, \
1253+                IndexError), e:
1254+            raise CorruptShareError(reader.peerid,
1255+                                    reader.shnum,
1256+                                    "block hash tree failure: %s" % e)
1257+
1258+        # Reaching this point means that we know that this segment
1259+        # is correct. Now we need to check to see whether the share
1260+        # hash chain is also correct.
1261+        # SDMF wrote share hash chains that didn't contain the
1262+        # leaves, which would be produced from the block hash tree.
1263+        # So we need to validate the block hash tree first. If
1264+        # successful, then bht[0] will contain the root for the
1265+        # shnum, which will be a leaf in the share hash tree, which
1266+        # will allow us to validate the rest of the tree.
1267+        if self.share_hash_tree.needed_hashes(reader.shnum,
1268+                                              include_leaf=True) or \
1269+           self._verify:
1270+            try:
1271+                self.share_hash_tree.set_hashes(hashes=sharehashes[1],
1272+                                            leaves={reader.shnum: bht[0]})
1273+            except (hashtree.BadHashError, hashtree.NotEnoughHashesError, \
1274+                    IndexError), e:
1275+                raise CorruptShareError(reader.peerid,
1276+                                        reader.shnum,
1277+                                        "corrupt hashes: %s" % e)
1278+
1279+        self.log('share %d is valid for segment %d' % (reader.shnum,
1280+                                                       segnum))
1281+        return {reader.shnum: (block, salt)}
1282+
1283+
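The validation above recomputes one leaf of the block hash tree from the fetched block (hashing `salt + block` for MDMF, the bare block for SDMF) and checks it, together with server-supplied sibling hashes, against the already-trusted root. The real code uses `allmydata.hashtree` with tagged hashes and incomplete trees; the shape of the check can be sketched with plain SHA-256 over a two-segment share:

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def block_hash(salt, block):
    # MDMF hashes salt + block per segment; SDMF hashes the block alone.
    return h(salt + block)

def merkle_root(leaf_hashes):
    """Root of a complete binary hash tree (leaf count a power of two)."""
    level = leaf_hashes
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Two-segment share: the trusted root is anchored by the share's signature.
leaves = [block_hash(b"s0", b"block zero"), block_hash(b"s1", b"block one")]
root = merkle_root(leaves)

# To validate segment 0, recompute its leaf locally and combine it with
# the sibling hash the server sent alongside the block.
fetched_leaf = block_hash(b"s0", b"block zero")
sibling = leaves[1]
assert merkle_root([fetched_leaf, sibling]) == root             # validates
assert merkle_root([block_hash(b"s0", b"x"), sibling]) != root  # corruption caught
```

A flipped bit in either the block or the salt changes the recomputed leaf, so the root comparison fails and the share can be marked bad.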
1284+    def _get_needed_hashes(self, reader, segnum):
1285+        """
1286+        I get the hashes needed to validate segnum from the reader, then return
1287+        to my caller when this is done.
1288+        """
1289+        bht = self._block_hash_trees[reader.shnum]
1290+        needed = bht.needed_hashes(segnum, include_leaf=True)
1291+        # The root of the block hash tree is also a leaf in the share
1292+        # hash tree. So we don't need to fetch it from the remote
1293+        # server. In the case of files with one segment, this means that
1294+        # we won't fetch any block hash tree from the remote server,
1295+        # since the hash of each share's single block is the entire block
1296+        # hash tree, and is a leaf in the share hash tree. This is fine,
1297+        # since any share corruption will be detected in the share hash
1298+        # tree.
1299+        #needed.discard(0)
1300+        self.log("getting blockhashes for segment %d, share %d: %s" % \
1301+                 (segnum, reader.shnum, str(needed)))
1302+        d1 = reader.get_blockhashes(needed, queue=True, force_remote=True)
1303+        if self.share_hash_tree.needed_hashes(reader.shnum):
1304+            need = self.share_hash_tree.needed_hashes(reader.shnum)
1305+            self.log("also need sharehashes for share %d: %s" % (reader.shnum,
1306+                                                                 str(need)))
1307+            d2 = reader.get_sharehashes(need, queue=True, force_remote=True)
1308+        else:
1309+            d2 = defer.succeed({}) # the logic in the next method
1310+                                   # expects a dict
1311+        dl = defer.DeferredList([d1, d2], consumeErrors=True)
1312+        return dl
1313+
1314+
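The `needed_hashes` queries above return, roughly, the sibling hash on each level of the path from the requested leaf up to the root. A rough sketch of that computation for a complete binary tree in heap order (Tahoe's `hashtree` module also handles non-power-of-two leaf counts, so this is illustrative only):

```python
def needed_hashes(leaf_index, num_leaves):
    """Indices (heap order, root=0) of the sibling hashes needed to
    recompute the root from one leaf of a complete binary tree."""
    # Leaves occupy indices num_leaves-1 .. 2*num_leaves-2 in heap order.
    node = num_leaves - 1 + leaf_index
    needed = set()
    while node > 0:
        sibling = node + 1 if node % 2 == 1 else node - 1
        needed.add(sibling)
        node = (node - 1) // 2   # parent
    return needed

# For 4 leaves (heap indices 3..6), validating leaf 2 (node 5)
# needs its sibling (node 6) and the other subtree's root (node 1).
print(sorted(needed_hashes(2, 4)))   # → [1, 6]
```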
1315+    def _decode_blocks(self, blocks_and_salts, segnum):
1316+        """
1317+        I take a list of k blocks and salts, and decode that into a
1318+        single encrypted segment.
1319+        """
1320+        d = {}
1321+        # We want to merge our dictionaries to the form
1322+        # {shnum: blocks_and_salts}
1323+        #
1324+        # The dictionaries come from _validate_block that way, so we just
1325+        # need to merge them.
1326+        for block_and_salt in blocks_and_salts:
1327+            d.update(block_and_salt[1])
1328+
1329+        # All of these blocks should have the same salt; in SDMF, it is
1330+        # the file-wide IV, while in MDMF it is the per-segment salt. In
1331+        # either case, we just need to get one of them and use it.
1332+        #
1333+        # d.items()[0] is like (shnum, (block, salt))
1334+        # d.items()[0][1] is like (block, salt)
1335+        # d.items()[0][1][1] is the salt.
1336+        salt = d.items()[0][1][1]
1337+        # Next, extract just the blocks from the dict. We'll use the
1338+        # salt in the next step.
1339+        share_and_shareids = [(k, v[0]) for k, v in d.items()]
1340+        d2 = dict(share_and_shareids)
1341+        shareids = []
1342+        shares = []
1343+        for shareid, share in d2.items():
1344             shareids.append(shareid)
1345             shares.append(share)
1346 
1347hunk ./src/allmydata/mutable/retrieve.py 956
1348-        assert len(shareids) >= k, len(shareids)
1349+        self._status.set_status("Decoding")
1350+        started = time.time()
1351+        assert len(shareids) >= self._required_shares, len(shareids)
1352         # zfec really doesn't want extra shares
1353hunk ./src/allmydata/mutable/retrieve.py 960
1354-        shareids = shareids[:k]
1355-        shares = shares[:k]
1356-
1357-        fec = codec.CRSDecoder()
1358-        fec.set_params(segsize, k, N)
1359-
1360-        self.log("params %s, we have %d shares" % ((segsize, k, N), len(shares)))
1361-        self.log("about to decode, shareids=%s" % (shareids,))
1362-        d = defer.maybeDeferred(fec.decode, shares, shareids)
1363-        def _done(buffers):
1364-            self._status.timings["decode"] = time.time() - started
1365-            self.log(" decode done, %d buffers" % len(buffers))
1366+        shareids = shareids[:self._required_shares]
1367+        shares = shares[:self._required_shares]
1368+        self.log("decoding segment %d" % segnum)
1369+        if segnum == self._num_segments - 1:
1370+            d = defer.maybeDeferred(self._tail_decoder.decode, shares, shareids)
1371+        else:
1372+            d = defer.maybeDeferred(self._segment_decoder.decode, shares, shareids)
1373+        def _process(buffers):
1374             segment = "".join(buffers)
1375hunk ./src/allmydata/mutable/retrieve.py 969
1376+            self.log(format="now decoding segment %(segnum)s of %(numsegs)s",
1377+                     segnum=segnum,
1378+                     numsegs=self._num_segments,
1379+                     level=log.NOISY)
1380             self.log(" joined length %d, datalength %d" %
1381hunk ./src/allmydata/mutable/retrieve.py 974
1382-                     (len(segment), datalength))
1383-            segment = segment[:datalength]
1384+                     (len(segment), self._data_length))
1385+            if segnum == self._num_segments - 1:
1386+                size_to_use = self._tail_data_size
1387+            else:
1388+                size_to_use = self._segment_size
1389+            segment = segment[:size_to_use]
1390             self.log(" segment len=%d" % len(segment))
1391hunk ./src/allmydata/mutable/retrieve.py 981
1392-            return segment
1393-        def _err(f):
1394-            self.log(" decode failed: %s" % f)
1395-            return f
1396-        d.addCallback(_done)
1397-        d.addErrback(_err)
1398+            self._status.timings.setdefault("decode", 0)
1399+            self._status.timings['decode'] = time.time() - started
1400+            return segment, salt
1401+        d.addCallback(_process)
1402         return d
1403 
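The dict juggling at the top of `_decode_blocks` is easier to see in isolation. Here is a Python 3 rendition of the merge-and-split step (the original is Python 2, hence `d.items()[0][1][1]`), with the zfec decode itself elided and k assumed to be 2:

```python
# Each validator returns {shnum: (block, salt)}; the DeferredList
# hands us (success, result) pairs, which we merge into one dict.
results = [(True, {0: (b"blkA", b"salt")}),
           (True, {1: (b"blkB", b"salt")}),
           (True, {3: (b"blkC", b"salt")})]
d = {}
for ok, share_dict in results:
    d.update(share_dict)

# All blocks in one segment carry the same salt (file-wide IV for
# SDMF, per-segment salt for MDMF), so any one of them will do.
salt = next(iter(d.values()))[1]

# zfec really doesn't want extra shares: truncate to exactly k.
shareids = list(d)[:2]
shares = [d[i][0] for i in shareids]
assert salt == b"salt" and shares == [b"blkA", b"blkB"]
```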
1404hunk ./src/allmydata/mutable/retrieve.py 987
1405-    def _decrypt(self, crypttext, IV, readkey):
1406+
1407+    def _decrypt_segment(self, segment_and_salt):
1408+        """
1409+        I take a single segment and its salt, and decrypt it. I return
1410+        the plaintext of the segment that is in my argument.
1411+        """
1412+        segment, salt = segment_and_salt
1413         self._status.set_status("decrypting")
1414hunk ./src/allmydata/mutable/retrieve.py 995
1415+        self.log("decrypting segment %d" % self._current_segment)
1416         started = time.time()
1417hunk ./src/allmydata/mutable/retrieve.py 997
1418-        key = hashutil.ssk_readkey_data_hash(IV, readkey)
1419+        key = hashutil.ssk_readkey_data_hash(salt, self._node.get_readkey())
1420         decryptor = AES(key)
1421hunk ./src/allmydata/mutable/retrieve.py 999
1422-        plaintext = decryptor.process(crypttext)
1423-        self._status.timings["decrypt"] = time.time() - started
1424+        plaintext = decryptor.process(segment)
1425+        self._status.timings.setdefault("decrypt", 0)
1426+        self._status.timings['decrypt'] = time.time() - started
1427         return plaintext
1428 
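`_decrypt_segment` derives a fresh decryption key for every segment from that segment's salt plus the file-wide readkey, which is what lets MDMF decrypt segments independently. A sketch of the idea, with hashlib standing in for `ssk_readkey_data_hash` and a toy XOR stream standing in for the AES counter mode used by the real code:

```python
import hashlib

def segment_key(salt, readkey):
    # Stand-in for hashutil.ssk_readkey_data_hash (assumption: the
    # real tagged-hash construction differs).
    return hashlib.sha256(b"segment-key:" + salt + readkey).digest()

def xor_stream(key, data):
    # Toy symmetric stream cipher standing in for AES-CTR.
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

readkey = b"R" * 16
salt0, salt1 = b"\x00" * 16, b"\x01" * 16
ct = xor_stream(segment_key(salt0, readkey), b"segment zero text")

# Decrypting with the right per-segment key round-trips...
assert xor_stream(segment_key(salt0, readkey), ct) == b"segment zero text"
# ...while another segment's salt yields a different key entirely.
assert segment_key(salt1, readkey) != segment_key(salt0, readkey)
```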
1429hunk ./src/allmydata/mutable/retrieve.py 1004
1430-    def _done(self, res):
1431-        if not self._running:
1432+
1433+    def notify_server_corruption(self, peerid, shnum, reason):
1434+        ss = self.servermap.connections[peerid]
1435+        ss.callRemoteOnly("advise_corrupt_share",
1436+                          "mutable", self._storage_index, shnum, reason)
1437+
1438+
1439+    def _try_to_validate_privkey(self, enc_privkey, reader):
1440+        alleged_privkey_s = self._node._decrypt_privkey(enc_privkey)
1441+        alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s)
1442+        if alleged_writekey != self._node.get_writekey():
1443+            self.log("invalid privkey from %s shnum %d" %
1444+                     (reader, reader.shnum),
1445+                     level=log.WEIRD, umid="YIw4tA")
1446+            if self._verify:
1447+                self.servermap.mark_bad_share(reader.peerid, reader.shnum,
1448+                                              self.verinfo[-2])
1449+                e = CorruptShareError(reader.peerid,
1450+                                      reader.shnum,
1451+                                      "invalid privkey")
1452+                f = failure.Failure(e)
1453+                self._bad_shares.add((reader.peerid, reader.shnum, f))
1454             return
1455hunk ./src/allmydata/mutable/retrieve.py 1027
1456+
1457+        # it's good
1458+        self.log("got valid privkey from shnum %d on reader %s" %
1459+                 (reader.shnum, reader))
1460+        privkey = rsa.create_signing_key_from_string(alleged_privkey_s)
1461+        self._node._populate_encprivkey(enc_privkey)
1462+        self._node._populate_privkey(privkey)
1463+        self._need_privkey = False
1464+
1465+
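The privkey check above relies on the cap derivation chain: the writekey is a hash of the signing key, so a server cannot substitute a different private key without detection. A hashlib sketch of the comparison (the real `ssk_writekey_hash` uses Tahoe's tagged hashes; the key strings here are placeholders):

```python
import hashlib

def writekey_hash(privkey_s):
    # Stand-in for hashutil.ssk_writekey_hash (assumption).
    return hashlib.sha256(b"writekey:" + privkey_s).digest()[:16]

# The node already knows its writekey, derived from the real key.
node_writekey = writekey_hash(b"the real RSA signing key (DER)")

def validate_privkey(alleged_privkey_s):
    # A server-supplied key is only accepted if it hashes back to
    # the writekey embedded in our write cap.
    return writekey_hash(alleged_privkey_s) == node_writekey

assert validate_privkey(b"the real RSA signing key (DER)")
assert not validate_privkey(b"an attacker-substituted key")
```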
1466+    def _check_for_done(self, res):
1467+        """
1468+        I check to see if this Retrieve object has successfully finished
1469+        its work.
1470+
1471+        I can exit in the following ways:
1472+            - If there are no more segments to download, then I exit by
1473+              causing self._done_deferred to fire with the plaintext
1474+              content requested by the caller.
1475+            - If there are still segments to be downloaded, and there
1476+              are enough active readers (readers which have not broken
1477+              and have not given us corrupt data) to continue
1478+              downloading, I send control back to
1479+              _download_current_segment.
1480+            - If there are still segments to be downloaded but there are
1481+              not enough active peers to download them, I ask
1482+              _add_active_peers to add more peers. If it is successful,
1483+              it will call _download_current_segment. If there are not
1484+              enough peers to retrieve the file, then that will cause
1485+              _done_deferred to errback.
1486+        """
1487+        self.log("checking for doneness")
1488+        if self._current_segment > self._last_segment:
1489+            # No more segments to download, we're done.
1490+            self.log("got plaintext, done")
1491+            return self._done()
1492+
1493+        if len(self._active_readers) >= self._required_shares:
1494+            # More segments to download, but we have enough good peers
1495+            # in self._active_readers that we can do that without issue,
1496+            # so go nab the next segment.
1497+            self.log("not done yet: on segment %d of %d" % \
1498+                     (self._current_segment + 1, self._num_segments))
1499+            return self._download_current_segment()
1500+
1501+        self.log("not done yet: on segment %d of %d, need to add peers" % \
1502+                 (self._current_segment + 1, self._num_segments))
1503+        return self._add_active_peers()
1504+
1505+
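The three exits described in the `_check_for_done` docstring amount to a small decision function; a condensed sketch (illustrative names, not the actual method):

```python
def next_action(current_seg, last_seg, active_readers, k):
    if current_seg > last_seg:
        return "done"          # fire _done_deferred with the plaintext
    if len(active_readers) >= k:
        return "download"      # _download_current_segment
    return "add_peers"         # _add_active_peers, which may errback

assert next_action(4, 3, ["a", "b", "c"], 3) == "done"
assert next_action(2, 3, ["a", "b", "c"], 3) == "download"
assert next_action(2, 3, ["a"], 3) == "add_peers"
```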
1506+    def _done(self):
1507+        """
1508+        I am called by _check_for_done when the download process has
1509+        finished successfully. After making some useful logging
1510+        statements, I return the decrypted contents to the owner of this
1511+        Retrieve object through self._done_deferred.
1512+        """
1513         self._running = False
1514         self._status.set_active(False)
1515hunk ./src/allmydata/mutable/retrieve.py 1086
1516-        self._status.timings["total"] = time.time() - self._started
1517-        # res is either the new contents, or a Failure
1518-        if isinstance(res, failure.Failure):
1519-            self.log("Retrieve done, with failure", failure=res,
1520-                     level=log.UNUSUAL)
1521-            self._status.set_status("Failed")
1522+        now = time.time()
1523+        self._status.timings['total'] = now - self._started
1524+        self._status.timings['fetch'] = now - self._started_fetching
1525+
1526+        if self._verify:
1527+            ret = list(self._bad_shares)
1528+            self.log("done verifying, found %d bad shares" % len(ret))
1529         else:
1530hunk ./src/allmydata/mutable/retrieve.py 1094
1531-            self.log("Retrieve done, success!")
1532-            self._status.set_status("Finished")
1533-            self._status.set_progress(1.0)
1534-            # remember the encoding parameters, use them again next time
1535-            (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
1536-             offsets_tuple) = self.verinfo
1537-            self._node._populate_required_shares(k)
1538-            self._node._populate_total_shares(N)
1539-        eventually(self._done_deferred.callback, res)
1540+            # TODO: upload status here?
1541+            ret = self._consumer
1542+            self._consumer.unregisterProducer()
1543+        eventually(self._done_deferred.callback, ret)
1544+
1545+
1546+    def _failed(self):
1547+        """
1548+        I am called by _add_active_peers when there are not enough
1549+        active peers left to complete the download. After making some
1550+        useful logging statements, I return an exception to that effect
1551+        to the caller of this Retrieve object through
1552+        self._done_deferred.
1553+        """
1554+        self._running = False
1555+        self._status.set_active(False)
1556+        now = time.time()
1557+        self._status.timings['total'] = now - self._started
1558+        self._status.timings['fetch'] = now - self._started_fetching
1559 
1560hunk ./src/allmydata/mutable/retrieve.py 1114
1561+        if self._verify:
1562+            ret = list(self._bad_shares)
1563+        else:
1564+            format = ("ran out of peers: "
1565+                      "have %(have)d of %(total)d segments "
1566+                      "found %(bad)d bad shares "
1567+                      "encoding %(k)d-of-%(n)d")
1568+            args = {"have": self._current_segment,
1569+                    "total": self._num_segments,
1570+                    "need": self._last_segment,
1571+                    "k": self._required_shares,
1572+                    "n": self._total_shares,
1573+                    "bad": len(self._bad_shares)}
1574+            e = NotEnoughSharesError("%s, last failure: %s" % \
1575+                                     (format % args, str(self._last_failure)))
1576+            f = failure.Failure(e)
1577+            ret = f
1578+        eventually(self._done_deferred.callback, ret)
1579}
1580[mutable/publish: teach the publisher how to publish MDMF mutable files
1581Kevan Carstensen <kevan@isnotajoke.com>**20110802013931
1582 Ignore-this: 115217ec2b289452ec774cb725da8a86
1583 
1584 Like the downloader, the publisher needs some substantial changes to handle multiple segment mutable files.
1585] {
1586hunk ./src/allmydata/mutable/publish.py 3
1587 
1588 
1589-import os, struct, time
1590+import os, time
1591+from StringIO import StringIO
1592 from itertools import count
1593 from zope.interface import implements
1594 from twisted.internet import defer
1595hunk ./src/allmydata/mutable/publish.py 9
1596 from twisted.python import failure
1597-from allmydata.interfaces import IPublishStatus
1598+from allmydata.interfaces import IPublishStatus, SDMF_VERSION, MDMF_VERSION, \
1599+                                 IMutableUploadable
1600 from allmydata.util import base32, hashutil, mathutil, idlib, log
1601 from allmydata.util.dictutil import DictOfSets
1602 from allmydata import hashtree, codec
1603hunk ./src/allmydata/mutable/publish.py 21
1604 from allmydata.mutable.common import MODE_WRITE, MODE_CHECK, \
1605      UncoordinatedWriteError, NotEnoughServersError
1606 from allmydata.mutable.servermap import ServerMap
1607-from allmydata.mutable.layout import pack_prefix, pack_share, unpack_header, pack_checkstring, \
1608-     unpack_checkstring, SIGNED_PREFIX
1609+from allmydata.mutable.layout import unpack_checkstring, MDMFSlotWriteProxy, \
1610+                                     SDMFSlotWriteProxy
1611+
1612+KiB = 1024
1613+DEFAULT_MAX_SEGMENT_SIZE = 128 * KiB
1614+PUSHING_BLOCKS_STATE = 0
1615+PUSHING_EVERYTHING_ELSE_STATE = 1
1616+DONE_STATE = 2
1617 
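These three constants drive the publisher's push loop: all segments' blocks go out first, then the hashes, signature, and other trailing fields, then we are done. A hypothetical condensation of the transitions (not the actual `_push` implementation):

```python
PUSHING_BLOCKS_STATE = 0
PUSHING_EVERYTHING_ELSE_STATE = 1
DONE_STATE = 2

def step(state, segments_left):
    # Advance the publisher state machine by one push iteration.
    if state == PUSHING_BLOCKS_STATE:
        return state if segments_left else PUSHING_EVERYTHING_ELSE_STATE
    return DONE_STATE

state = PUSHING_BLOCKS_STATE
for remaining in (2, 1, 0):        # push three segments' blocks
    state = step(state, remaining)
assert state == PUSHING_EVERYTHING_ELSE_STATE
assert step(state, 0) == DONE_STATE
```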
1618 class PublishStatus:
1619     implements(IPublishStatus)
1620hunk ./src/allmydata/mutable/publish.py 112
1621         self._log_number = num
1622         self._running = True
1623         self._first_write_error = None
1624+        self._last_failure = None
1625 
1626         self._status = PublishStatus()
1627         self._status.set_storage_index(self._storage_index)
1628hunk ./src/allmydata/mutable/publish.py 119
1629         self._status.set_helper(False)
1630         self._status.set_progress(0.0)
1631         self._status.set_active(True)
1632+        self._version = self._node.get_version()
1633+        assert self._version in (SDMF_VERSION, MDMF_VERSION)
1634+
1635 
1636     def get_status(self):
1637         return self._status
1638hunk ./src/allmydata/mutable/publish.py 133
1639             kwargs["facility"] = "tahoe.mutable.publish"
1640         return log.msg(*args, **kwargs)
1641 
1642+
1643+    def update(self, data, offset, blockhashes, version):
1644+        """
1645+        I replace the contents of this file with the contents of data,
1646+        starting at offset. I return a Deferred that fires with None
1647+        when the replacement has been completed, or with an error if
1648+        something went wrong during the process.
1649+
1650+        Note that this process will not upload new shares. If the file
1651+        being updated is in need of repair, callers will have to repair
1652+        it on their own.
1653+        """
1654+        # How this works:
1655+        # 1: Make peer assignments. We'll assign each share that we know
1656+        # about on the grid to that peer that currently holds that
1657+        # share, and will not place any new shares.
1658+        # 2: Setup encoding parameters. Most of these will stay the same
1659+        # -- datalength will change, as will some of the offsets.
1660+        # 3. Upload the new segments.
1661+        # 4. Be done.
1662+        assert IMutableUploadable.providedBy(data)
1663+
1664+        self.data = data
1665+
1666+        # XXX: Use the MutableFileVersion instead.
1667+        self.datalength = self._node.get_size()
1668+        if data.get_size() > self.datalength:
1669+            self.datalength = data.get_size()
1670+
1671+        self.log("starting update")
1672+        self.log("adding new data of length %d at offset %d" % \
1673+                    (data.get_size(), offset))
1674+        self.log("new data length is %d" % self.datalength)
1675+        self._status.set_size(self.datalength)
1676+        self._status.set_status("Started")
1677+        self._started = time.time()
1678+
1679+        self.done_deferred = defer.Deferred()
1680+
1681+        self._writekey = self._node.get_writekey()
1682+        assert self._writekey, "need write capability to publish"
1683+
1684+        # first, which servers will we publish to? We require that the
1685+        # servermap was updated in MODE_WRITE, so we can depend upon the
1686+        # peerlist computed by that process instead of computing our own.
1687+        assert self._servermap
1688+        assert self._servermap.last_update_mode in (MODE_WRITE, MODE_CHECK)
1689+        # we will push a version that is one larger than anything present
1690+        # in the grid, according to the servermap.
1691+        self._new_seqnum = self._servermap.highest_seqnum() + 1
1692+        self._status.set_servermap(self._servermap)
1693+
1694+        self.log(format="new seqnum will be %(seqnum)d",
1695+                 seqnum=self._new_seqnum, level=log.NOISY)
1696+
1697+        # We're updating an existing file, so all of the following
1698+        # should be available.
1699+        self.readkey = self._node.get_readkey()
1700+        self.required_shares = self._node.get_required_shares()
1701+        assert self.required_shares is not None
1702+        self.total_shares = self._node.get_total_shares()
1703+        assert self.total_shares is not None
1704+        self._status.set_encoding(self.required_shares, self.total_shares)
1705+
1706+        self._pubkey = self._node.get_pubkey()
1707+        assert self._pubkey
1708+        self._privkey = self._node.get_privkey()
1709+        assert self._privkey
1710+        self._encprivkey = self._node.get_encprivkey()
1711+
1712+        sb = self._storage_broker
1713+        full_peerlist = [(s.get_serverid(), s.get_rref())
1714+                         for s in sb.get_servers_for_psi(self._storage_index)]
1715+        self.full_peerlist = full_peerlist # for use later, immutable
1716+        self.bad_peers = set() # peerids who have errbacked/refused requests
1717+
1718+        # This will set self.segment_size, self.num_segments, and
1719+        # self.fec. TODO: Does it know how to do the offset? Probably
1720+        # not. So do that part next.
1721+        self.setup_encoding_parameters(offset=offset)
1722+
1723+        # if we experience any surprises (writes which were rejected because
1724+        # our test vector did not match, or shares which we didn't expect to
1725+        # see), we set this flag and report an UncoordinatedWriteError at the
1726+        # end of the publish process.
1727+        self.surprised = False
1728+
1729+        # we keep track of three tables. The first is our goal: which share
1730+        # we want to see on which servers. This is initially populated by the
1731+        # existing servermap.
1732+        self.goal = set() # pairs of (peerid, shnum) tuples
1733+
1734+        # the second table is our list of outstanding queries: those which
1735+        # are in flight and may or may not be delivered, accepted, or
1736+        # acknowledged. Items are added to this table when the request is
1737+        # sent, and removed when the response returns (or errbacks).
1738+        self.outstanding = set() # (peerid, shnum) tuples
1739+
1740+        # the third is a table of successes: share which have actually been
1741+        # placed. These are populated when responses come back with success.
1742+        # When self.placed == self.goal, we're done.
1743+        self.placed = set() # (peerid, shnum) tuples
1744+
1745+        # we also keep a mapping from peerid to RemoteReference. Each time we
1746+        # pull a connection out of the full peerlist, we add it to this for
1747+        # use later.
1748+        self.connections = {}
1749+
1750+        self.bad_share_checkstrings = {}
1751+
1752+        # This is set at the last step of the publishing process.
1753+        self.versioninfo = ""
1754+
1755+        # we use the servermap to populate the initial goal: this way we will
1756+        # try to update each existing share in place. Since we're
1757+        # updating, we ignore damaged and missing shares -- callers must
1758+        # do a repair to repair and recreate these.
1759+        for (peerid, shnum) in self._servermap.servermap:
1760+            self.goal.add( (peerid, shnum) )
1761+            self.connections[peerid] = self._servermap.connections[peerid]
1762+        self.writers = {}
1763+
1764+        # SDMF files are updated differently.
1765+        self._version = MDMF_VERSION
1766+        writer_class = MDMFSlotWriteProxy
1767+
1768+        # For each (peerid, shnum) in self.goal, we make a
1769+        # write proxy for that peer. We'll use this to write
1770+        # shares to the peer.
1771+        for key in self.goal:
1772+            peerid, shnum = key
1773+            write_enabler = self._node.get_write_enabler(peerid)
1774+            renew_secret = self._node.get_renewal_secret(peerid)
1775+            cancel_secret = self._node.get_cancel_secret(peerid)
1776+            secrets = (write_enabler, renew_secret, cancel_secret)
1777+
1778+            self.writers[shnum] =  writer_class(shnum,
1779+                                                self.connections[peerid],
1780+                                                self._storage_index,
1781+                                                secrets,
1782+                                                self._new_seqnum,
1783+                                                self.required_shares,
1784+                                                self.total_shares,
1785+                                                self.segment_size,
1786+                                                self.datalength)
1787+            self.writers[shnum].peerid = peerid
1788+            assert (peerid, shnum) in self._servermap.servermap
1789+            old_versionid, old_timestamp = self._servermap.servermap[key]
1790+            (old_seqnum, old_root_hash, old_salt, old_segsize,
1791+             old_datalength, old_k, old_N, old_prefix,
1792+             old_offsets_tuple) = old_versionid
1793+            self.writers[shnum].set_checkstring(old_seqnum,
1794+                                                old_root_hash,
1795+                                                old_salt)
1796+
1797+        # Our remote shares will not have a complete checkstring until
1798+        # after we are done writing share data and have started to write
1799+        # blocks. In the meantime, we need to know what to look for when
1800+        # writing, so that we can detect UncoordinatedWriteErrors.
1801+        self._checkstring = self.writers.values()[0].get_checkstring()
1802+
1803+        # Now, we start pushing shares.
1804+        self._status.timings["setup"] = time.time() - self._started
1805+        # First, we encrypt, encode, and publish the shares that we need
1806+        # to encrypt, encode, and publish.
1807+
1808+        # Our update process fetched these for us. We need to update
1809+        # them in place as publishing happens.
1810+        self.blockhashes = {} # (shnum, [blockhashes])
1811+        for (i, bht) in blockhashes.iteritems():
1812+            # We need to extract the leaves from our old hash tree.
1813+            old_segcount = mathutil.div_ceil(version[4],
1814+                                             version[3])
1815+            h = hashtree.IncompleteHashTree(old_segcount)
1816+            bht = dict(enumerate(bht))
1817+            h.set_hashes(bht)
1818+            leaves = h[h.get_leaf_index(0):]
1819+            for j in xrange(self.num_segments - len(leaves)):
1820+                leaves.append(None)
1821+
1822+            assert len(leaves) >= self.num_segments
1823+            self.blockhashes[i] = leaves
1824+            # This list will now be the leaves that were set during the
1825+            # initial upload + enough empty hashes to make it a
1826+            # power-of-two. If we exceed a power of two boundary, we
1827+            # should be encoding the file over again, and should not be
1828+            # here. So, we have
1829+            #assert len(self.blockhashes[i]) == \
1830+            #    hashtree.roundup_pow2(self.num_segments), \
1831+            #        len(self.blockhashes[i])
1832+            # XXX: Except this doesn't work. Figure out why.
1833+
1834+        # These are filled in later, after we've modified the block hash
1835+        # tree suitably.
1836+        self.sharehash_leaves = None # eventually [sharehashes]
1837+        self.sharehashes = {} # shnum -> [sharehash leaves necessary to
1838+                              # validate the share]
1839+
1840+        self.log("Starting push")
1841+
1842+        self._state = PUSHING_BLOCKS_STATE
1843+        self._push()
1844+
1845+        return self.done_deferred
1846+
1847+
1848     def publish(self, newdata):
1849         """Publish the filenode's current contents.  Returns a Deferred that
1850         fires (with None) when the publish has done as much work as it's ever
1851hunk ./src/allmydata/mutable/publish.py 346
1852         simultaneous write.
1853         """
1854 
1855-        # 1: generate shares (SDMF: files are small, so we can do it in RAM)
1856-        # 2: perform peer selection, get candidate servers
1857-        #  2a: send queries to n+epsilon servers, to determine current shares
1858-        #  2b: based upon responses, create target map
1859-        # 3: send slot_testv_and_readv_and_writev messages
1860-        # 4: as responses return, update share-dispatch table
1861-        # 4a: may need to run recovery algorithm
1862-        # 5: when enough responses are back, we're done
1863+        # 0. Setup encoding parameters, encoder, and other such things.
1864+        # 1. Encrypt, encode, and publish segments.
1865+        assert IMutableUploadable.providedBy(newdata)
1866 
1867hunk ./src/allmydata/mutable/publish.py 350
1868-        self.log("starting publish, datalen is %s" % len(newdata))
1869-        self._status.set_size(len(newdata))
1870+        self.data = newdata
1871+        self.datalength = newdata.get_size()
1872+        #if self.datalength >= DEFAULT_MAX_SEGMENT_SIZE:
1873+        #    self._version = MDMF_VERSION
1874+        #else:
1875+        #    self._version = SDMF_VERSION
1876+
1877+        self.log("starting publish, datalen is %s" % self.datalength)
1878+        self._status.set_size(self.datalength)
1879         self._status.set_status("Started")
1880         self._started = time.time()
1881 
1882hunk ./src/allmydata/mutable/publish.py 407
1883         self.full_peerlist = full_peerlist # for use later, immutable
1884         self.bad_peers = set() # peerids who have errbacked/refused requests
1885 
1886-        self.newdata = newdata
1887-        self.salt = os.urandom(16)
1888-
1889+        # This will set self.segment_size, self.num_segments, and
1890+        # self.fec.
1891         self.setup_encoding_parameters()
1892 
1893         # if we experience any surprises (writes which were rejected because
1894hunk ./src/allmydata/mutable/publish.py 417
1895         # end of the publish process.
1896         self.surprised = False
1897 
1898-        # as a failsafe, refuse to iterate through self.loop more than a
1899-        # thousand times.
1900-        self.looplimit = 1000
1901-
1902         # we keep track of three tables. The first is our goal: which share
1903         # we want to see on which servers. This is initially populated by the
1904         # existing servermap.
1905hunk ./src/allmydata/mutable/publish.py 440
1906 
1907         self.bad_share_checkstrings = {}
1908 
1909+        # This is set at the last step of the publishing process.
1910+        self.versioninfo = ""
1911+
1912         # we use the servermap to populate the initial goal: this way we will
1913         # try to update each existing share in place.
1914         for (peerid, shnum) in self._servermap.servermap:
1915hunk ./src/allmydata/mutable/publish.py 456
1916             self.bad_share_checkstrings[key] = old_checkstring
1917             self.connections[peerid] = self._servermap.connections[peerid]
1918 
1919-        # create the shares. We'll discard these as they are delivered. SDMF:
1920-        # we're allowed to hold everything in memory.
1921+        # TODO: Make this part do peer selection.
1922+        self.update_goal()
1923+        self.writers = {}
1924+        if self._version == MDMF_VERSION:
1925+            writer_class = MDMFSlotWriteProxy
1926+        else:
1927+            writer_class = SDMFSlotWriteProxy
1928+
1929+        # For each (peerid, shnum) in self.goal, we make a
1930+        # write proxy for that peer. We'll use this to write
1931+        # shares to the peer.
1932+        for key in self.goal:
1933+            peerid, shnum = key
1934+            write_enabler = self._node.get_write_enabler(peerid)
1935+            renew_secret = self._node.get_renewal_secret(peerid)
1936+            cancel_secret = self._node.get_cancel_secret(peerid)
1937+            secrets = (write_enabler, renew_secret, cancel_secret)
1938 
1939hunk ./src/allmydata/mutable/publish.py 474
1940+            self.writers[shnum] =  writer_class(shnum,
1941+                                                self.connections[peerid],
1942+                                                self._storage_index,
1943+                                                secrets,
1944+                                                self._new_seqnum,
1945+                                                self.required_shares,
1946+                                                self.total_shares,
1947+                                                self.segment_size,
1948+                                                self.datalength)
1949+            self.writers[shnum].peerid = peerid
1950+            if (peerid, shnum) in self._servermap.servermap:
1951+                old_versionid, old_timestamp = self._servermap.servermap[key]
1952+                (old_seqnum, old_root_hash, old_salt, old_segsize,
1953+                 old_datalength, old_k, old_N, old_prefix,
1954+                 old_offsets_tuple) = old_versionid
1955+                self.writers[shnum].set_checkstring(old_seqnum,
1956+                                                    old_root_hash,
1957+                                                    old_salt)
1958+            elif (peerid, shnum) in self.bad_share_checkstrings:
1959+                old_checkstring = self.bad_share_checkstrings[(peerid, shnum)]
1960+                self.writers[shnum].set_checkstring(old_checkstring)
1961+
1962+        # Our remote shares will not have a complete checkstring until
1963+        # after we are done writing share data and have started to write
1964+        # blocks. In the meantime, we need to know what to look for when
1965+        # writing, so that we can detect UncoordinatedWriteErrors.
1966+        self._checkstring = self.writers.values()[0].get_checkstring()
1967+
1968+        # Now, we start pushing shares.
1969         self._status.timings["setup"] = time.time() - self._started
1970hunk ./src/allmydata/mutable/publish.py 504
1971-        d = self._encrypt_and_encode()
1972-        d.addCallback(self._generate_shares)
1973-        def _start_pushing(res):
1974-            self._started_pushing = time.time()
1975-            return res
1976-        d.addCallback(_start_pushing)
1977-        d.addCallback(self.loop) # trigger delivery
1978-        d.addErrback(self._fatal_error)
1979+        # First, we encrypt, encode, and publish the shares that we need
1980+        # to encrypt, encode, and publish.
1981+
1982+        # This will eventually hold the block hash chain for each share
1983+        # that we publish. We define it this way so that empty publishes
1984+        # will still have something to write to the remote slot.
1985+        self.blockhashes = dict([(i, []) for i in xrange(self.total_shares)])
1986+        for i in xrange(self.total_shares):
1987+            blocks = self.blockhashes[i]
1988+            for j in xrange(self.num_segments):
1989+                blocks.append(None)
1990+        self.sharehash_leaves = None # eventually [sharehashes]
1991+        self.sharehashes = {} # shnum -> [sharehash leaves necessary to
1992+                              # validate the share]
1993+
1994+        self.log("Starting push")
1995+
1996+        self._state = PUSHING_BLOCKS_STATE
1997+        self._push()
1998 
1999         return self.done_deferred
2000 
2001hunk ./src/allmydata/mutable/publish.py 526
2002-    def setup_encoding_parameters(self):
2003-        segment_size = len(self.newdata)
2004+
2005+    def _update_status(self):
2006+        self._status.set_status("Sending Shares: %d placed out of %d, "
2007+                                "%d messages outstanding" %
2008+                                (len(self.placed),
2009+                                 len(self.goal),
2010+                                 len(self.outstanding)))
2011+        self._status.set_progress(1.0 * len(self.placed) / len(self.goal))
2012+
2013+
2014+    def setup_encoding_parameters(self, offset=0):
2015+        if self._version == MDMF_VERSION:
2016+            segment_size = DEFAULT_MAX_SEGMENT_SIZE # 128 KiB by default
2017+        else:
2018+            segment_size = self.datalength # SDMF is only one segment
2019         # this must be a multiple of self.required_shares
2020         segment_size = mathutil.next_multiple(segment_size,
2021                                               self.required_shares)
2022hunk ./src/allmydata/mutable/publish.py 545
2023         self.segment_size = segment_size
2024+
2025+        # Calculate the starting segment for the upload.
2026         if segment_size:
2027hunk ./src/allmydata/mutable/publish.py 548
2028-            self.num_segments = mathutil.div_ceil(len(self.newdata),
2029+            # We use div_ceil instead of integer division here because
2030+            # it is semantically correct.
2031+            # If datalength isn't an even multiple of segment_size, but
2032+            # is larger than segment_size, then integer division rounds
2033+            # down: datalength // segment_size counts only the full
2034+            # segments and ignores the partial tail segment. That's not
2035+            # what we want, because it drops the extra data. div_ceil
2036+            # rounds up, giving us the right number of segments for the
2037+            # data that we're given.
2038+            self.num_segments = mathutil.div_ceil(self.datalength,
2039                                                   segment_size)
2040hunk ./src/allmydata/mutable/publish.py 559
2041+
2042+            self.starting_segment = offset // segment_size
2043+
2044         else:
2045             self.num_segments = 0
2046hunk ./src/allmydata/mutable/publish.py 564
2047-        assert self.num_segments in [0, 1,] # SDMF restrictions
2048+            self.starting_segment = 0
2049 
2050hunk ./src/allmydata/mutable/publish.py 566
2051-    def _fatal_error(self, f):
2052-        self.log("error during loop", failure=f, level=log.UNUSUAL)
2053-        self._done(f)
2054 
2055hunk ./src/allmydata/mutable/publish.py 567
2056-    def _update_status(self):
2057-        self._status.set_status("Sending Shares: %d placed out of %d, "
2058-                                "%d messages outstanding" %
2059-                                (len(self.placed),
2060-                                 len(self.goal),
2061-                                 len(self.outstanding)))
2062-        self._status.set_progress(1.0 * len(self.placed) / len(self.goal))
2063+        self.log("building encoding parameters for file")
2064+        self.log("got segsize %d" % self.segment_size)
2065+        self.log("got %d segments" % self.num_segments)
2066 
2067hunk ./src/allmydata/mutable/publish.py 571
2068-    def loop(self, ignored=None):
2069-        self.log("entering loop", level=log.NOISY)
2070-        if not self._running:
2071-            return
2072+        if self._version == SDMF_VERSION:
2073+            assert self.num_segments in (0, 1) # SDMF
2074+        # calculate the tail segment size.
2075 
2076hunk ./src/allmydata/mutable/publish.py 575
2077-        self.looplimit -= 1
2078-        if self.looplimit <= 0:
2079-            raise LoopLimitExceededError("loop limit exceeded")
2080+        if segment_size and self.datalength:
2081+            self.tail_segment_size = self.datalength % segment_size
2082+            self.log("got tail segment size %d" % self.tail_segment_size)
2083+        else:
2084+            self.tail_segment_size = 0
2085 
2086hunk ./src/allmydata/mutable/publish.py 581
2087-        if self.surprised:
2088-            # don't send out any new shares, just wait for the outstanding
2089-            # ones to be retired.
2090-            self.log("currently surprised, so don't send any new shares",
2091-                     level=log.NOISY)
2092+        if self.tail_segment_size == 0 and segment_size:
2093+            # The tail segment is the same size as the other segments.
2094+            self.tail_segment_size = segment_size
2095+
2096+        # Make FEC encoders
2097+        fec = codec.CRSEncoder()
2098+        fec.set_params(self.segment_size,
2099+                       self.required_shares, self.total_shares)
2100+        self.piece_size = fec.get_block_size()
2101+        self.fec = fec
2102+
2103+        if self.tail_segment_size == self.segment_size:
2104+            self.tail_fec = self.fec
2105         else:
2106hunk ./src/allmydata/mutable/publish.py 595
2107-            self.update_goal()
2108-            # how far are we from our goal?
2109-            needed = self.goal - self.placed - self.outstanding
2110-            self._update_status()
2111+            tail_fec = codec.CRSEncoder()
2112+            tail_fec.set_params(self.tail_segment_size,
2113+                                self.required_shares,
2114+                                self.total_shares)
2115+            self.tail_fec = tail_fec
2116 
2117hunk ./src/allmydata/mutable/publish.py 601
2118-            if needed:
2119-                # we need to send out new shares
2120-                self.log(format="need to send %(needed)d new shares",
2121-                         needed=len(needed), level=log.NOISY)
2122-                self._send_shares(needed)
2123-                return
2124+        self._current_segment = self.starting_segment
2125+        self.end_segment = self.num_segments - 1
2126+        # Now figure out where the last segment should be.
2127+        if self.data.get_size() != self.datalength:
2128+            # We're updating a few segments in the middle of a mutable
2129+            # file, so we don't want to republish the whole thing.
2130+            # (we don't have enough data to do that even if we wanted
2131+            # to)
2132+            end = self.data.get_size()
2133+            self.end_segment = end // segment_size
2134+            if end % segment_size == 0:
2135+                self.end_segment -= 1
2136 
2137hunk ./src/allmydata/mutable/publish.py 614
2138-        if self.outstanding:
2139-            # queries are still pending, keep waiting
2140-            self.log(format="%(outstanding)d queries still outstanding",
2141-                     outstanding=len(self.outstanding),
2142-                     level=log.NOISY)
2143-            return
2144+        self.log("got start segment %d" % self.starting_segment)
2145+        self.log("got end segment %d" % self.end_segment)
2146+
2147+
2148+    def _push(self, ignored=None):
2149+        """
2150+        I manage state transitions. In particular, I check that we
2151+        still have enough writers remaining to complete the upload
2152+        successfully.
2153+        """
2154+        # Can we still successfully publish this file?
2155+        # TODO: Keep track of outstanding queries before aborting the
2156+        #       process.
2157+        if len(self.writers) < self.required_shares or self.surprised:
2158+            return self._failure()
2159+
2160+        # Figure out what we need to do next. Each of these needs to
2161+        # return a deferred so that we don't block execution when this
2162+        # is first called in the upload method.
2163+        if self._state == PUSHING_BLOCKS_STATE:
2164+            return self.push_segment(self._current_segment)
2165+
2166+        elif self._state == PUSHING_EVERYTHING_ELSE_STATE:
2167+            return self.push_everything_else()
2168+
2169+        # If we make it to this point, we were successful in placing the
2170+        # file.
2171+        return self._done()
2172+
2173+
2174+    def push_segment(self, segnum):
2175+        if self.num_segments == 0 and self._version == SDMF_VERSION:
2176+            self._add_dummy_salts()
2177+
2178+        if segnum > self.end_segment:
2179+            # We don't have any more segments to push.
2180+            self._state = PUSHING_EVERYTHING_ELSE_STATE
2181+            return self._push()
2182+
2183+        d = self._encode_segment(segnum)
2184+        d.addCallback(self._push_segment, segnum)
2185+        def _increment_segnum(ign):
2186+            self._current_segment += 1
2187+        # XXX: I don't think we need to do addBoth here -- any errbacks
2188+        # should be handled within push_segment.
2189+        d.addCallback(_increment_segnum)
2190+        d.addCallback(self._turn_barrier)
2191+        d.addCallback(self._push)
2192+        d.addErrback(self._failure)
2193+
2194+
2195+    def _turn_barrier(self, result):
2196+        """
2197+        I help the publish process avoid the recursion limit issues
2198+        described in #237.
2199+        """
2200+        return fireEventually(result)
2201+
2202+
2203+    def _add_dummy_salts(self):
2204+        """
2205+        SDMF files need a salt even if they're empty, or the signature
2206+        won't make sense. This method adds a dummy salt to each of our
2207+        SDMF writers so that they can write the signature later.
2208+        """
2209+        salt = os.urandom(16)
2210+        assert self._version == SDMF_VERSION
2211+
2212+        for writer in self.writers.itervalues():
2213+            writer.put_salt(salt)
2214+
2215+
2216+    def _encode_segment(self, segnum):
2217+        """
2218+        I encrypt and encode the segment segnum.
2219+        """
2220+        started = time.time()
2221+
2222+        if segnum + 1 == self.num_segments:
2223+            segsize = self.tail_segment_size
2224+        else:
2225+            segsize = self.segment_size
2226+
2227+
2228+        self.log("Pushing segment %d of %d" % (segnum + 1, self.num_segments))
2229+        data = self.data.read(segsize)
2230+        # XXX: This is dumb. Why return a list?
2231+        data = "".join(data)
2232+
2233+        assert len(data) == segsize, len(data)
2234+
2235+        salt = os.urandom(16)
2236+
2237+        key = hashutil.ssk_readkey_data_hash(salt, self.readkey)
2238+        self._status.set_status("Encrypting")
2239+        enc = AES(key)
2240+        crypttext = enc.process(data)
2241+        assert len(crypttext) == len(data)
2242 
2243hunk ./src/allmydata/mutable/publish.py 713
2244-        # no queries outstanding, no placements needed: we're done
2245-        self.log("no queries outstanding, no placements needed: done",
2246-                 level=log.OPERATIONAL)
2247         now = time.time()
2248hunk ./src/allmydata/mutable/publish.py 714
2249-        elapsed = now - self._started_pushing
2250-        self._status.timings["push"] = elapsed
2251-        return self._done(None)
2252+        self._status.timings["encrypt"] = now - started
2253+        started = now
2254+
2255+        # now apply FEC
2256+        if segnum + 1 == self.num_segments:
2257+            fec = self.tail_fec
2258+        else:
2259+            fec = self.fec
2260+
2261+        self._status.set_status("Encoding")
2262+        crypttext_pieces = [None] * self.required_shares
2263+        piece_size = fec.get_block_size()
2264+        for i in range(len(crypttext_pieces)):
2265+            offset = i * piece_size
2266+            piece = crypttext[offset:offset+piece_size]
2267+            piece = piece + "\x00"*(piece_size - len(piece)) # padding
2268+            crypttext_pieces[i] = piece
2269+            assert len(piece) == piece_size
2270+        d = fec.encode(crypttext_pieces)
2271+        def _done_encoding(res):
2272+            elapsed = time.time() - started
2273+            self._status.timings["encode"] = elapsed
2274+            return (res, salt)
2275+        d.addCallback(_done_encoding)
2276+        return d
2277+
2278+
2279+    def _push_segment(self, encoded_and_salt, segnum):
2280+        """
2281+        I push (data, salt) as segment number segnum.
2282+        """
2283+        results, salt = encoded_and_salt
2284+        shares, shareids = results
2285+        self._status.set_status("Pushing segment")
2286+        for i in xrange(len(shares)):
2287+            sharedata = shares[i]
2288+            shareid = shareids[i]
2289+            if self._version == MDMF_VERSION:
2290+                hashed = salt + sharedata
2291+            else:
2292+                hashed = sharedata
2293+            block_hash = hashutil.block_hash(hashed)
2294+            self.blockhashes[shareid][segnum] = block_hash
2295+            # find the writer for this share
2296+            writer = self.writers[shareid]
2297+            writer.put_block(sharedata, segnum, salt)
2298+
2299+
2300+    def push_everything_else(self):
2301+        """
2302+        I put everything else associated with a share.
2303+        """
2304+        self._pack_started = time.time()
2305+        self.push_encprivkey()
2306+        self.push_blockhashes()
2307+        self.push_sharehashes()
2308+        self.push_toplevel_hashes_and_signature()
2309+        d = self.finish_publishing()
2310+        def _change_state(ignored):
2311+            self._state = DONE_STATE
2312+        d.addCallback(_change_state)
2313+        d.addCallback(self._push)
2314+        return d
2315+
2316+
2317+    def push_encprivkey(self):
2318+        encprivkey = self._encprivkey
2319+        self._status.set_status("Pushing encrypted private key")
2320+        for writer in self.writers.itervalues():
2321+            writer.put_encprivkey(encprivkey)
2322+
2323+
2324+    def push_blockhashes(self):
2325+        self.sharehash_leaves = [None] * len(self.blockhashes)
2326+        self._status.set_status("Building and pushing block hash tree")
2327+        for shnum, blockhashes in self.blockhashes.iteritems():
2328+            t = hashtree.HashTree(blockhashes)
2329+            self.blockhashes[shnum] = list(t)
2330+            # set the leaf for future use.
2331+            self.sharehash_leaves[shnum] = t[0]
2332+
2333+            writer = self.writers[shnum]
2334+            writer.put_blockhashes(self.blockhashes[shnum])
2335+
2336+
2337+    def push_sharehashes(self):
2338+        self._status.set_status("Building and pushing share hash chain")
2339+        share_hash_tree = hashtree.HashTree(self.sharehash_leaves)
2340+        for shnum in xrange(len(self.sharehash_leaves)):
2341+            needed_indices = share_hash_tree.needed_hashes(shnum)
2342+            self.sharehashes[shnum] = dict( [ (i, share_hash_tree[i])
2343+                                             for i in needed_indices] )
2344+            writer = self.writers[shnum]
2345+            writer.put_sharehashes(self.sharehashes[shnum])
2346+        self.root_hash = share_hash_tree[0]
2347+
2348+
2349+    def push_toplevel_hashes_and_signature(self):
2350+        # We need to do three things here:
2351+        #   - Push the root hash and salt hash
2352+        #   - Get the checkstring of the resulting layout; sign that.
2353+        #   - Push the signature
2354+        self._status.set_status("Pushing root hashes and signature")
2355+        for shnum in xrange(self.total_shares):
2356+            writer = self.writers[shnum]
2357+            writer.put_root_hash(self.root_hash)
2358+        self._update_checkstring()
2359+        self._make_and_place_signature()
2360+
2361+
2362+    def _update_checkstring(self):
2363+        """
2364+        After putting the root hash, MDMF files will have the
2365+        checkstring written to the storage server. This means that we
2366+        can update our copy of the checkstring so we can detect
2367+        uncoordinated writes. SDMF files will have the same checkstring,
2368+        so we need not do anything.
2369+        """
2370+        self._checkstring = self.writers.values()[0].get_checkstring()
2371+
2372+
2373+    def _make_and_place_signature(self):
2374+        """
2375+        I create and place the signature.
2376+        """
2377+        started = time.time()
2378+        self._status.set_status("Signing prefix")
2379+        signable = self.writers[0].get_signable()
2380+        self.signature = self._privkey.sign(signable)
2381+
2382+        for (shnum, writer) in self.writers.iteritems():
2383+            writer.put_signature(self.signature)
2384+        self._status.timings['sign'] = time.time() - started
2385+
2386+
2387+    def finish_publishing(self):
2388+        # We're almost done -- we just need to put the verification key
2389+        # and the offsets
2390+        started = time.time()
2391+        self._status.set_status("Pushing shares")
2392+        self._started_pushing = started
2393+        ds = []
2394+        verification_key = self._pubkey.serialize()
2395+
2396+
2397+        # TODO: Bad, since we remove from this same dict. We need to
2398+        # make a copy, or just use a non-iterated value.
2399+        for (shnum, writer) in self.writers.iteritems():
2400+            writer.put_verification_key(verification_key)
2401+            d = writer.finish_publishing()
2402+            # Add the (peerid, shnum) tuple to our list of outstanding
2403+            # queries. This gets used by _loop if some of our queries
2404+            # fail to place shares.
2405+            self.outstanding.add((writer.peerid, writer.shnum))
2406+            d.addCallback(self._got_write_answer, writer, started)
2407+            d.addErrback(self._connection_problem, writer)
2408+            ds.append(d)
2409+        self._record_verinfo()
2410+        self._status.timings['pack'] = time.time() - started
2411+        return defer.DeferredList(ds)
2412+
2413+
2414+    def _record_verinfo(self):
2415+        self.versioninfo = self.writers.values()[0].get_verinfo()
2416+
2417+
2418+    def _connection_problem(self, f, writer):
2419+        """
2420+        We ran into a connection problem while working with writer, and
2421+        need to deal with that.
2422+        """
2423+        self.log("found problem: %s" % str(f))
2424+        self._last_failure = f
2425+        del(self.writers[writer.shnum])
2426+
2427 
2428     def log_goal(self, goal, message=""):
2429         logmsg = [message]
2430hunk ./src/allmydata/mutable/publish.py 971
2431             self.log_goal(self.goal, "after update: ")
2432 
2433 
2434+    def _got_write_answer(self, answer, writer, started):
2435+        if not answer:
2436+            # SDMF writers only pretend to write when callers set their
2437+            # blocks, salts, and so on -- they actually just write once,
2438+            # at the end of the upload process. In fake writes, they
2439+            # return defer.succeed(None). If we see that, we shouldn't
2440+            # bother checking it.
2441+            return
2442 
2443hunk ./src/allmydata/mutable/publish.py 980
2444-    def _encrypt_and_encode(self):
2445-        # this returns a Deferred that fires with a list of (sharedata,
2446-        # sharenum) tuples. TODO: cache the ciphertext, only produce the
2447-        # shares that we care about.
2448-        self.log("_encrypt_and_encode")
2449-
2450-        self._status.set_status("Encrypting")
2451-        started = time.time()
2452-
2453-        key = hashutil.ssk_readkey_data_hash(self.salt, self.readkey)
2454-        enc = AES(key)
2455-        crypttext = enc.process(self.newdata)
2456-        assert len(crypttext) == len(self.newdata)
2457+        peerid = writer.peerid
2458+        lp = self.log("_got_write_answer from %s, share %d" %
2459+                      (idlib.shortnodeid_b2a(peerid), writer.shnum))
2460 
2461         now = time.time()
2462hunk ./src/allmydata/mutable/publish.py 985
2463-        self._status.timings["encrypt"] = now - started
2464-        started = now
2465-
2466-        # now apply FEC
2467-
2468-        self._status.set_status("Encoding")
2469-        fec = codec.CRSEncoder()
2470-        fec.set_params(self.segment_size,
2471-                       self.required_shares, self.total_shares)
2472-        piece_size = fec.get_block_size()
2473-        crypttext_pieces = [None] * self.required_shares
2474-        for i in range(len(crypttext_pieces)):
2475-            offset = i * piece_size
2476-            piece = crypttext[offset:offset+piece_size]
2477-            piece = piece + "\x00"*(piece_size - len(piece)) # padding
2478-            crypttext_pieces[i] = piece
2479-            assert len(piece) == piece_size
2480-
2481-        d = fec.encode(crypttext_pieces)
2482-        def _done_encoding(res):
2483-            elapsed = time.time() - started
2484-            self._status.timings["encode"] = elapsed
2485-            return res
2486-        d.addCallback(_done_encoding)
2487-        return d
2488-
2489-    def _generate_shares(self, shares_and_shareids):
2490-        # this sets self.shares and self.root_hash
2491-        self.log("_generate_shares")
2492-        self._status.set_status("Generating Shares")
2493-        started = time.time()
2494-
2495-        # we should know these by now
2496-        privkey = self._privkey
2497-        encprivkey = self._encprivkey
2498-        pubkey = self._pubkey
2499-
2500-        (shares, share_ids) = shares_and_shareids
2501-
2502-        assert len(shares) == len(share_ids)
2503-        assert len(shares) == self.total_shares
2504-        all_shares = {}
2505-        block_hash_trees = {}
2506-        share_hash_leaves = [None] * len(shares)
2507-        for i in range(len(shares)):
2508-            share_data = shares[i]
2509-            shnum = share_ids[i]
2510-            all_shares[shnum] = share_data
2511-
2512-            # build the block hash tree. SDMF has only one leaf.
2513-            leaves = [hashutil.block_hash(share_data)]
2514-            t = hashtree.HashTree(leaves)
2515-            block_hash_trees[shnum] = list(t)
2516-            share_hash_leaves[shnum] = t[0]
2517-        for leaf in share_hash_leaves:
2518-            assert leaf is not None
2519-        share_hash_tree = hashtree.HashTree(share_hash_leaves)
2520-        share_hash_chain = {}
2521-        for shnum in range(self.total_shares):
2522-            needed_hashes = share_hash_tree.needed_hashes(shnum)
2523-            share_hash_chain[shnum] = dict( [ (i, share_hash_tree[i])
2524-                                              for i in needed_hashes ] )
2525-        root_hash = share_hash_tree[0]
2526-        assert len(root_hash) == 32
2527-        self.log("my new root_hash is %s" % base32.b2a(root_hash))
2528-        self._new_version_info = (self._new_seqnum, root_hash, self.salt)
2529-
2530-        prefix = pack_prefix(self._new_seqnum, root_hash, self.salt,
2531-                             self.required_shares, self.total_shares,
2532-                             self.segment_size, len(self.newdata))
2533-
2534-        # now pack the beginning of the share. All shares are the same up
2535-        # to the signature, then they have divergent share hash chains,
2536-        # then completely different block hash trees + salt + share data,
2537-        # then they all share the same encprivkey at the end. The sizes
2538-        # of everything are the same for all shares.
2539-
2540-        sign_started = time.time()
2541-        signature = privkey.sign(prefix)
2542-        self._status.timings["sign"] = time.time() - sign_started
2543-
2544-        verification_key = pubkey.serialize()
2545-
2546-        final_shares = {}
2547-        for shnum in range(self.total_shares):
2548-            final_share = pack_share(prefix,
2549-                                     verification_key,
2550-                                     signature,
2551-                                     share_hash_chain[shnum],
2552-                                     block_hash_trees[shnum],
2553-                                     all_shares[shnum],
2554-                                     encprivkey)
2555-            final_shares[shnum] = final_share
2556-        elapsed = time.time() - started
2557-        self._status.timings["pack"] = elapsed
2558-        self.shares = final_shares
2559-        self.root_hash = root_hash
2560-
2561-        # we also need to build up the version identifier for what we're
2562-        # pushing. Extract the offsets from one of our shares.
2563-        assert final_shares
2564-        offsets = unpack_header(final_shares.values()[0])[-1]
2565-        offsets_tuple = tuple( [(key,value) for key,value in offsets.items()] )
2566-        verinfo = (self._new_seqnum, root_hash, self.salt,
2567-                   self.segment_size, len(self.newdata),
2568-                   self.required_shares, self.total_shares,
2569-                   prefix, offsets_tuple)
2570-        self.versioninfo = verinfo
2571-
2572-
2573-
2574-    def _send_shares(self, needed):
2575-        self.log("_send_shares")
2576-
2577-        # we're finally ready to send out our shares. If we encounter any
2578-        # surprises here, it's because somebody else is writing at the same
2579-        # time. (Note: in the future, when we remove the _query_peers() step
2580-        # and instead speculate about [or remember] which shares are where,
2581-        # surprises here are *not* indications of UncoordinatedWriteError,
2582-        # and we'll need to respond to them more gracefully.)
2583-
2584-        # needed is a set of (peerid, shnum) tuples. The first thing we do is
2585-        # organize it by peerid.
2586-
2587-        peermap = DictOfSets()
2588-        for (peerid, shnum) in needed:
2589-            peermap.add(peerid, shnum)
2590-
2591-        # the next thing is to build up a bunch of test vectors. The
2592-        # semantics of Publish are that we perform the operation if the world
2593-        # hasn't changed since the ServerMap was constructed (more or less).
2594-        # For every share we're trying to place, we create a test vector that
2595-        # tests to see if the server*share still corresponds to the
2596-        # map.
2597-
2598-        all_tw_vectors = {} # maps peerid to tw_vectors
2599-        sm = self._servermap.servermap
2600-
2601-        for key in needed:
2602-            (peerid, shnum) = key
2603-
2604-            if key in sm:
2605-                # an old version of that share already exists on the
2606-                # server, according to our servermap. We will create a
2607-                # request that attempts to replace it.
2608-                old_versionid, old_timestamp = sm[key]
2609-                (old_seqnum, old_root_hash, old_salt, old_segsize,
2610-                 old_datalength, old_k, old_N, old_prefix,
2611-                 old_offsets_tuple) = old_versionid
2612-                old_checkstring = pack_checkstring(old_seqnum,
2613-                                                   old_root_hash,
2614-                                                   old_salt)
2615-                testv = (0, len(old_checkstring), "eq", old_checkstring)
2616-
2617-            elif key in self.bad_share_checkstrings:
2618-                old_checkstring = self.bad_share_checkstrings[key]
2619-                testv = (0, len(old_checkstring), "eq", old_checkstring)
2620-
2621-            else:
2622-                # add a testv that requires the share not exist
2623-
2624-                # Unfortunately, foolscap-0.2.5 has a bug in the way inbound
2625-                # constraints are handled. If the same object is referenced
2626-                # multiple times inside the arguments, foolscap emits a
2627-                # 'reference' token instead of a distinct copy of the
2628-                # argument. The bug is that these 'reference' tokens are not
2629-                # accepted by the inbound constraint code. To work around
2630-                # this, we need to prevent python from interning the
2631-                # (constant) tuple, by creating a new copy of this vector
2632-                # each time.
2633-
2634-                # This bug is fixed in foolscap-0.2.6, and even though this
2635-                # version of Tahoe requires foolscap-0.3.1 or newer, we are
2636-                # supposed to be able to interoperate with older versions of
2637-                # Tahoe which are allowed to use older versions of foolscap,
2638-                # including foolscap-0.2.5 . In addition, I've seen other
2639-                # foolscap problems triggered by 'reference' tokens (see #541
2640-                # for details). So we must keep this workaround in place.
2641-
2642-                #testv = (0, 1, 'eq', "")
2643-                testv = tuple([0, 1, 'eq', ""])
2644-
2645-            testvs = [testv]
2646-            # the write vector is simply the share
2647-            writev = [(0, self.shares[shnum])]
2648-
2649-            if peerid not in all_tw_vectors:
2650-                all_tw_vectors[peerid] = {}
2651-                # maps shnum to (testvs, writevs, new_length)
2652-            assert shnum not in all_tw_vectors[peerid]
2653-
2654-            all_tw_vectors[peerid][shnum] = (testvs, writev, None)
2655-
2656-        # we read the checkstring back from each share, however we only use
2657-        # it to detect whether there was a new share that we didn't know
2658-        # about. The success or failure of the write will tell us whether
2659-        # there was a collision or not. If there is a collision, the first
2660-        # thing we'll do is update the servermap, which will find out what
2661-        # happened. We could conceivably reduce a roundtrip by using the
2662-        # readv checkstring to populate the servermap, but really we'd have
2663-        # to read enough data to validate the signatures too, so it wouldn't
2664-        # be an overall win.
2665-        read_vector = [(0, struct.calcsize(SIGNED_PREFIX))]
2666-
2667-        # ok, send the messages!
2668-        self.log("sending %d shares" % len(all_tw_vectors), level=log.NOISY)
2669-        started = time.time()
2670-        for (peerid, tw_vectors) in all_tw_vectors.items():
2671-
2672-            write_enabler = self._node.get_write_enabler(peerid)
2673-            renew_secret = self._node.get_renewal_secret(peerid)
2674-            cancel_secret = self._node.get_cancel_secret(peerid)
2675-            secrets = (write_enabler, renew_secret, cancel_secret)
2676-            shnums = tw_vectors.keys()
2677-
2678-            for shnum in shnums:
2679-                self.outstanding.add( (peerid, shnum) )
2680-
2681-            d = self._do_testreadwrite(peerid, secrets,
2682-                                       tw_vectors, read_vector)
2683-            d.addCallbacks(self._got_write_answer, self._got_write_error,
2684-                           callbackArgs=(peerid, shnums, started),
2685-                           errbackArgs=(peerid, shnums, started))
2686-            # tolerate immediate errback, like with DeadReferenceError
2687-            d.addBoth(fireEventually)
2688-            d.addCallback(self.loop)
2689-            d.addErrback(self._fatal_error)
2690-
2691-        self._update_status()
2692-        self.log("%d shares sent" % len(all_tw_vectors), level=log.NOISY)
2693+        elapsed = now - started
2694 
2695hunk ./src/allmydata/mutable/publish.py 987
2696-    def _do_testreadwrite(self, peerid, secrets,
2697-                          tw_vectors, read_vector):
2698-        storage_index = self._storage_index
2699-        ss = self.connections[peerid]
2700+        self._status.add_per_server_time(peerid, elapsed)
2701 
2702hunk ./src/allmydata/mutable/publish.py 989
2703-        #print "SS[%s] is %s" % (idlib.shortnodeid_b2a(peerid), ss), ss.tracker.interfaceName
2704-        d = ss.callRemote("slot_testv_and_readv_and_writev",
2705-                          storage_index,
2706-                          secrets,
2707-                          tw_vectors,
2708-                          read_vector)
2709-        return d
2710+        wrote, read_data = answer
2711 
2712hunk ./src/allmydata/mutable/publish.py 991
2713-    def _got_write_answer(self, answer, peerid, shnums, started):
2714-        lp = self.log("_got_write_answer from %s" %
2715-                      idlib.shortnodeid_b2a(peerid))
2716-        for shnum in shnums:
2717-            self.outstanding.discard( (peerid, shnum) )
2718+        surprise_shares = set(read_data.keys()) - set([writer.shnum])
2719 
2720hunk ./src/allmydata/mutable/publish.py 993
2721-        now = time.time()
2722-        elapsed = now - started
2723-        self._status.add_per_server_time(peerid, elapsed)
2724+        # We need to remove from surprise_shares any shares that we are
2725+        # knowingly also writing to that peer from other writers.
2726 
2727hunk ./src/allmydata/mutable/publish.py 996
2728-        wrote, read_data = answer
2729+        # TODO: Precompute this.
2730+        known_shnums = [x.shnum for x in self.writers.values()
2731+                        if x.peerid == peerid]
2732+        surprise_shares -= set(known_shnums)
2733+        self.log("found the following surprise shares: %s" %
2734+                 str(surprise_shares))
2735 
2736hunk ./src/allmydata/mutable/publish.py 1003
2737-        surprise_shares = set(read_data.keys()) - set(shnums)
2738+        # Now surprise shares contains all of the shares that we did not
2739+        # expect to be there.
2740 
2741         surprised = False
2742         for shnum in surprise_shares:
2743hunk ./src/allmydata/mutable/publish.py 1010
2744             # read_data is a dict mapping shnum to checkstring (SIGNED_PREFIX)
2745             checkstring = read_data[shnum][0]
2746-            their_version_info = unpack_checkstring(checkstring)
2747-            if their_version_info == self._new_version_info:
2748+            # What we want to do here is to see if their (seqnum,
2749+            # roothash, salt) is the same as our (seqnum, roothash,
2750+            # salt), or the equivalent for MDMF. The best way to do this
2751+            # is to store a packed representation of our checkstring
2752+            # somewhere, then not bother unpacking the other
2753+            # checkstring.
2754+            if checkstring == self._checkstring:
2755                 # they have the right share, somehow
2756 
2757                 if (peerid,shnum) in self.goal:
2758hunk ./src/allmydata/mutable/publish.py 1095
2759             self.log("our testv failed, so the write did not happen",
2760                      parent=lp, level=log.WEIRD, umid="8sc26g")
2761             self.surprised = True
2762-            self.bad_peers.add(peerid) # don't ask them again
2763+            self.bad_peers.add(writer) # don't ask them again
2764             # use the checkstring to add information to the log message
2765             for (shnum,readv) in read_data.items():
2766                 checkstring = readv[0]
2767hunk ./src/allmydata/mutable/publish.py 1117
2768                 # if expected_version==None, then we didn't expect to see a
2769                 # share on that peer, and the 'surprise_shares' clause above
2770                 # will have logged it.
2771-            # self.loop() will take care of finding new homes
2772             return
2773 
2774hunk ./src/allmydata/mutable/publish.py 1119
2775-        for shnum in shnums:
2776-            self.placed.add( (peerid, shnum) )
2777-            # and update the servermap
2778-            self._servermap.add_new_share(peerid, shnum,
2779+        # and update the servermap
2780+        # self.versioninfo is set during the last phase of publishing.
2781+        # If we get there, we know that responses correspond to placed
2782+        # shares, and can safely execute these statements.
2783+        if self.versioninfo:
2784+            self.log("wrote successfully: adding new share to servermap")
2785+            self._servermap.add_new_share(peerid, writer.shnum,
2786                                           self.versioninfo, started)
2787hunk ./src/allmydata/mutable/publish.py 1127
2788-
2789-        # self.loop() will take care of checking to see if we're done
2790-        return
2791-
2792-    def _got_write_error(self, f, peerid, shnums, started):
2793-        for shnum in shnums:
2794-            self.outstanding.discard( (peerid, shnum) )
2795-        self.bad_peers.add(peerid)
2796-        if self._first_write_error is None:
2797-            self._first_write_error = f
2798-        self.log(format="error while writing shares %(shnums)s to peerid %(peerid)s",
2799-                 shnums=list(shnums), peerid=idlib.shortnodeid_b2a(peerid),
2800-                 failure=f,
2801-                 level=log.UNUSUAL)
2802-        # self.loop() will take care of checking to see if we're done
2803+            self.placed.add( (peerid, writer.shnum) )
2804+        self._update_status()
2805+        # the next method in the deferred chain will check to see if
2806+        # we're done and successful.
2807         return
2808 
2809 
2810hunk ./src/allmydata/mutable/publish.py 1134
2811-    def _done(self, res):
2812+    def _done(self):
2813         if not self._running:
2814             return
2815         self._running = False
2816hunk ./src/allmydata/mutable/publish.py 1140
2817         now = time.time()
2818         self._status.timings["total"] = now - self._started
2819+
2820+        elapsed = now - self._started_pushing
2821+        self._status.timings['push'] = elapsed
2822+
2823         self._status.set_active(False)
2824hunk ./src/allmydata/mutable/publish.py 1145
2825-        if isinstance(res, failure.Failure):
2826-            self.log("Publish done, with failure", failure=res,
2827-                     level=log.WEIRD, umid="nRsR9Q")
2828-            self._status.set_status("Failed")
2829-        elif self.surprised:
2830-            self.log("Publish done, UncoordinatedWriteError", level=log.UNUSUAL)
2831-            self._status.set_status("UncoordinatedWriteError")
2832-            # deliver a failure
2833-            res = failure.Failure(UncoordinatedWriteError())
2834-            # TODO: recovery
2835+        self.log("Publish done, success")
2836+        self._status.set_status("Finished")
2837+        self._status.set_progress(1.0)
2838+        # Get k and segsize, then give them to the caller.
2839+        hints = {}
2840+        hints['segsize'] = self.segment_size
2841+        hints['k'] = self.required_shares
2842+        self._node.set_downloader_hints(hints)
2843+        eventually(self.done_deferred.callback, None)
2844+
2845+    def _failure(self, f=None):
2846+        if f:
2847+            self._last_failure = f
2848+
2849+        if not self.surprised:
2850+            # We ran out of servers
2851+            msg = "Publish ran out of good servers"
2852+            if self._last_failure:
2853+                msg += ", last failure was: %s" % str(self._last_failure)
2854+            self.log(msg)
2855+            e = NotEnoughServersError(msg)
2856+
2857+        else:
2858+            # We ran into shares that we didn't recognize, which means
2859+            # that we need to return an UncoordinatedWriteError.
2860+            self.log("Publish failed with UncoordinatedWriteError")
2861+            e = UncoordinatedWriteError()
2862+        f = failure.Failure(e)
2863+        eventually(self.done_deferred.callback, f)
2864+
2865+
2866+class MutableFileHandle:
2867+    """
2868+    I am a mutable uploadable built around a filehandle-like object,
2869+    usually either a StringIO instance or a handle to an actual file.
2870+    """
2871+    implements(IMutableUploadable)
2872+
2873+    def __init__(self, filehandle):
2874+        # The filehandle is defined as a generally file-like object that
2875+        # has these two methods. We don't care beyond that.
2876+        assert hasattr(filehandle, "read")
2877+        assert hasattr(filehandle, "close")
2878+
2879+        self._filehandle = filehandle
2880+        # We must start reading at the beginning of the file, or we risk
2881+        # encountering errors when the data read does not match the size
2882+        # reported to the uploader.
2883+        self._filehandle.seek(0)
2884+
2885+        # We have not yet read anything, so our position is 0.
2886+        self._marker = 0
2887+
2888+
2889+    def get_size(self):
2890+        """
2891+        I return the amount of data in my filehandle.
2892+        """
2893+        if not hasattr(self, "_size"):
2894+            old_position = self._filehandle.tell()
2895+            # Seek to the end of the file by seeking 0 bytes from the
2896+            # file's end
2897+            self._filehandle.seek(0, 2) # 2 == os.SEEK_END in 2.5+
2898+            self._size = self._filehandle.tell()
2899+            # Restore the previous position, in case this was called
2900+            # after a read.
2901+            self._filehandle.seek(old_position)
2902+            assert self._filehandle.tell() == old_position
2903+
2904+        assert hasattr(self, "_size")
2905+        return self._size
2906+
2907+
2908+    def pos(self):
2909+        """
2910+        I return the position of my read marker -- i.e., how much data I
2911+        have already read and returned to callers.
2912+        """
2913+        return self._marker
2914+
2915+
2916+    def read(self, length):
2917+        """
2918+        I return some data (up to length bytes) from my filehandle.
2919+
2920+        In most cases, I return length bytes, but sometimes I won't --
2921+        for example, if I am asked to read beyond the end of a file, or
2922+        an error occurs.
2923+        """
2924+        results = self._filehandle.read(length)
2925+        self._marker += len(results)
2926+        return [results]
2927+
2928+
2929+    def close(self):
2930+        """
2931+        I close the underlying filehandle. Any further operations on the
2932+        filehandle fail at this point.
2933+        """
2934+        self._filehandle.close()
2935+
2936+
2937+class MutableData(MutableFileHandle):
2938+    """
2939+    I am a mutable uploadable built around a string, which I then cast
2940+    into a StringIO and treat as a filehandle.
2941+    """
2942+
2943+    def __init__(self, s):
2944+        # Take a string and return a file-like uploadable.
2945+        assert isinstance(s, str)
2946+
2947+        MutableFileHandle.__init__(self, StringIO(s))
2948+
2949+
2950+class TransformingUploadable:
2951+    """
2952+    I am an IMutableUploadable that wraps another IMutableUploadable,
2953+    and some segments that are already on the grid. When I am called to
2954+    read, I handle merging of boundary segments.
2955+    """
2956+    implements(IMutableUploadable)
2957+
2958+
2959+    def __init__(self, data, offset, segment_size, start, end):
2960+        assert IMutableUploadable.providedBy(data)
2961+
2962+        self._newdata = data
2963+        self._offset = offset
2964+        self._segment_size = segment_size
2965+        self._start = start
2966+        self._end = end
2967+
2968+        self._read_marker = 0
2969+
2970+        self._first_segment_offset = offset % segment_size
2971+
2972+        num = self.log("TransformingUploadable: starting", parent=None)
2973+        self._log_number = num
2974+        self.log("got fso: %d" % self._first_segment_offset)
2975+        self.log("got offset: %d" % self._offset)
2976+
2977+
2978+    def log(self, *args, **kwargs):
2979+        if 'parent' not in kwargs:
2980+            kwargs['parent'] = self._log_number
2981+        if "facility" not in kwargs:
2982+            kwargs["facility"] = "tahoe.mutable.transforminguploadable"
2983+        return log.msg(*args, **kwargs)
2984+
2985+
2986+    def get_size(self):
2987+        return self._offset + self._newdata.get_size()
2988+
2989+
2990+    def read(self, length):
2991+        # We can get data from 3 sources here.
2992+        #   1. The first of the segments provided to us.
2993+        #   2. The data that we're replacing things with.
2994+        #   3. The last of the segments provided to us.
2995+
2996+        # Are we in source 1 (the old start data)?
2997+        self.log("reading %d bytes" % length)
2998+
2999+        old_start_data = ""
3000+        old_data_length = self._first_segment_offset - self._read_marker
3001+        if old_data_length > 0:
3002+            if old_data_length > length:
3003+                old_data_length = length
3004+            self.log("returning %d bytes of old start data" % old_data_length)
3005+
3006+            old_data_end = old_data_length + self._read_marker
3007+            old_start_data = self._start[self._read_marker:old_data_end]
3008+            length -= old_data_length
3009         else:
3010hunk ./src/allmydata/mutable/publish.py 1320
3011-            self.log("Publish done, success")
3012-            self._status.set_status("Finished")
3013-            self._status.set_progress(1.0)
3014-        eventually(self.done_deferred.callback, res)
3015+            # otherwise calculations later get screwed up.
3016+            old_data_length = 0
3017+
3018+        # Is there enough new data to satisfy this read? If not, we need
3019+        # to pad the end of the data with data from our last segment.
3020+        old_end_length = length - \
3021+            (self._newdata.get_size() - self._newdata.pos())
3022+        old_end_data = ""
3023+        if old_end_length > 0:
3024+            self.log("reading %d bytes of old end data" % old_end_length)
3025+
3026+            # TODO: We're not explicitly checking for tail segment size
3027+            # here. Is that a problem?
3028+            old_data_offset = (length - old_end_length + \
3029+                               old_data_length) % self._segment_size
3030+            self.log("reading at offset %d" % old_data_offset)
3031+            old_end = old_data_offset + old_end_length
3032+            old_end_data = self._end[old_data_offset:old_end]
3033+            length -= old_end_length
3034+            assert length == self._newdata.get_size() - self._newdata.pos()
3035+
3036+        self.log("reading %d bytes of new data" % length)
3037+        new_data = self._newdata.read(length)
3038+        new_data = "".join(new_data)
3039+
3040+        self._read_marker += len(old_start_data + new_data + old_end_data)
3041 
3042hunk ./src/allmydata/mutable/publish.py 1347
3043+        return old_start_data + new_data + old_end_data
3044 
3045hunk ./src/allmydata/mutable/publish.py 1349
3046+    def close(self):
3047+        pass
3048}
3049[mutable/servermap: Rework the servermap to work with MDMF mutable files
3050Kevan Carstensen <kevan@isnotajoke.com>**20110802014018
3051 Ignore-this: 4d74b1fd4f03096c84d5d90dd4a33598
3052] {
3053hunk ./src/allmydata/mutable/servermap.py 2
3054 
3055-import sys, time
3056+import sys, time, struct
3057 from zope.interface import implements
3058 from itertools import count
3059 from twisted.internet import defer
3060hunk ./src/allmydata/mutable/servermap.py 7
3061 from twisted.python import failure
3062-from foolscap.api import DeadReferenceError, RemoteException, eventually
3063-from allmydata.util import base32, hashutil, idlib, log
3064+from foolscap.api import DeadReferenceError, RemoteException, eventually, \
3065+                         fireEventually
3066+from allmydata.util import base32, hashutil, idlib, log, deferredutil
3067 from allmydata.util.dictutil import DictOfSets
3068 from allmydata.storage.server import si_b2a
3069 from allmydata.interfaces import IServermapUpdaterStatus
3070hunk ./src/allmydata/mutable/servermap.py 16
3071 from pycryptopp.publickey import rsa
3072 
3073 from allmydata.mutable.common import MODE_CHECK, MODE_ANYTHING, MODE_WRITE, MODE_READ, \
3074-     CorruptShareError, NeedMoreDataError
3075-from allmydata.mutable.layout import unpack_prefix_and_signature, unpack_header, unpack_share, \
3076-     SIGNED_PREFIX_LENGTH
3077+     CorruptShareError
3078+from allmydata.mutable.layout import SIGNED_PREFIX_LENGTH, MDMFSlotReadProxy
3079 
3080 class UpdateStatus:
3081     implements(IServermapUpdaterStatus)
3082hunk ./src/allmydata/mutable/servermap.py 124
3083         self.bad_shares = {} # maps (peerid,shnum) to old checkstring
3084         self.last_update_mode = None
3085         self.last_update_time = 0
3086+        self.update_data = {} # (verinfo,shnum) => data
3087 
3088     def copy(self):
3089         s = ServerMap()
3090hunk ./src/allmydata/mutable/servermap.py 255
3091         """Return a set of versionids, one for each version that is currently
3092         recoverable."""
3093         versionmap = self.make_versionmap()
3094-
3095         recoverable_versions = set()
3096         for (verinfo, shares) in versionmap.items():
3097             (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
3098hunk ./src/allmydata/mutable/servermap.py 340
3099         return False
3100 
3101 
3102+    def get_update_data_for_share_and_verinfo(self, shnum, verinfo):
3103+        """
3104+        I return the update data for the given shnum and verinfo.
3105+        """
3106+        update_data = self.update_data[shnum]
3107+        update_datum = [i[1] for i in update_data if i[0] == verinfo][0]
3108+        return update_datum
3109+
3110+
3111+    def set_update_data_for_share_and_verinfo(self, shnum, verinfo, data):
3112+        """
3113+        I record the update data for the given shnum and verinfo.
3114+        """
3115+        self.update_data.setdefault(shnum, []).append((verinfo, data))
3116+
3117+
3118 class ServermapUpdater:
3119     def __init__(self, filenode, storage_broker, monitor, servermap,
3120hunk ./src/allmydata/mutable/servermap.py 358
3121-                 mode=MODE_READ, add_lease=False):
3122+                 mode=MODE_READ, add_lease=False, update_range=None):
3123         """I update a servermap, locating a sufficient number of useful
3124         shares and remembering where they are located.
3125 
3126hunk ./src/allmydata/mutable/servermap.py 383
3127         self._servers_responded = set()
3128 
3129         # how much data should we read?
3130+        # SDMF:
3131         #  * if we only need the checkstring, then [0:75]
3132         #  * if we need to validate the checkstring sig, then [543ish:799ish]
3133         #  * if we need the verification key, then [107:436ish]
3134hunk ./src/allmydata/mutable/servermap.py 391
3135         #  * if we need the encrypted private key, we want [-1216ish:]
3136         #   * but we can't read from negative offsets
3137         #   * the offset table tells us the 'ish', also the positive offset
3138-        # A future version of the SMDF slot format should consider using
3139-        # fixed-size slots so we can retrieve less data. For now, we'll just
3140-        # read 4000 bytes, which also happens to read enough actual data to
3141-        # pre-fetch an 18-entry dirnode.
3142+        # MDMF:
3143+        #  * Checkstring? [0:72]
3144+        #  * If we want to validate the checkstring, then [0:72], [143:?] --
3145+        #    the offset table will tell us for sure.
3146+        #  * If we need the verification key, we have to consult the offset
3147+        #    table as well.
3148+        # At this point, we don't know which we are. Our filenode can
3149+        # tell us, but it might be lying -- in some cases, we're
3150+        # responsible for telling it which kind of file it is.
3151         self._read_size = 4000
3152         if mode == MODE_CHECK:
3153             # we use unpack_prefix_and_signature, so we need 1k
3154hunk ./src/allmydata/mutable/servermap.py 405
3155             self._read_size = 1000
3156         self._need_privkey = False
3157+
3158         if mode == MODE_WRITE and not self._node.get_privkey():
3159             self._need_privkey = True
3160         # check+repair: repair requires the privkey, so if we didn't happen
3161hunk ./src/allmydata/mutable/servermap.py 412
3162         # to ask for it during the check, we'll have problems doing the
3163         # publish.
3164 
3165+        self.fetch_update_data = False
3166+        if mode == MODE_WRITE and update_range:
3167+            # We're updating the servermap in preparation for an
3168+            # in-place file update, so we need to fetch some additional
3169+            # data from each share that we find.
3170+            assert len(update_range) == 2
3171+
3172+            self.start_segment = update_range[0]
3173+            self.end_segment = update_range[1]
3174+            self.fetch_update_data = True
3175+
3176         prefix = si_b2a(self._storage_index)[:5]
3177         self._log_number = log.msg(format="SharemapUpdater(%(si)s): starting (%(mode)s)",
3178                                    si=prefix, mode=mode)
3179hunk ./src/allmydata/mutable/servermap.py 461
3180         self._queries_completed = 0
3181 
3182         sb = self._storage_broker
3183+        # All of the peers, permuted by the storage index, as usual.
3184         full_peerlist = [(s.get_serverid(), s.get_rref())
3185                          for s in sb.get_servers_for_psi(self._storage_index)]
3186         self.full_peerlist = full_peerlist # for use later, immutable
3187hunk ./src/allmydata/mutable/servermap.py 469
3188         self._good_peers = set() # peers who had some shares
3189         self._empty_peers = set() # peers who don't have any shares
3190         self._bad_peers = set() # peers to whom our queries failed
3191+        self._readers = {} # peerid -> dict(sharewriters), filled in
3192+                           # after responses come in.
3193 
3194         k = self._node.get_required_shares()
3195hunk ./src/allmydata/mutable/servermap.py 473
3196+        # For what cases can these conditions work?
3197         if k is None:
3198             # make a guess
3199             k = 3
3200hunk ./src/allmydata/mutable/servermap.py 486
3201         self.num_peers_to_query = k + self.EPSILON
3202 
3203         if self.mode == MODE_CHECK:
3204+            # We want to query all of the peers.
3205             initial_peers_to_query = dict(full_peerlist)
3206             must_query = set(initial_peers_to_query.keys())
3207             self.extra_peers = []
3208hunk ./src/allmydata/mutable/servermap.py 494
3209             # we're planning to replace all the shares, so we want a good
3210             # chance of finding them all. We will keep searching until we've
3211             # seen epsilon that don't have a share.
3212+            # We don't query all of the peers because that could take a while.
3213             self.num_peers_to_query = N + self.EPSILON
3214             initial_peers_to_query, must_query = self._build_initial_querylist()
3215             self.required_num_empty_peers = self.EPSILON
3216hunk ./src/allmydata/mutable/servermap.py 504
3217             # might also avoid the round trip required to read the encrypted
3218             # private key.
3219 
3220-        else:
3221+        else: # MODE_READ, MODE_ANYTHING
3222+            # 2k peers is good enough.
3223             initial_peers_to_query, must_query = self._build_initial_querylist()
3224 
3225         # this is a set of peers that we are required to get responses from:
3226hunk ./src/allmydata/mutable/servermap.py 520
3227         # before we can consider ourselves finished, and self.extra_peers
3228         # contains the overflow (peers that we should tap if we don't get
3229         # enough responses)
3230+        # I guess that self._must_query is a subset of
3231+        # initial_peers_to_query?
3232+        assert set(must_query).issubset(set(initial_peers_to_query))
3233 
3234         self._send_initial_requests(initial_peers_to_query)
3235         self._status.timings["initial_queries"] = time.time() - self._started
3236hunk ./src/allmydata/mutable/servermap.py 579
3237         # errors that aren't handled by _query_failed (and errors caused by
3238         # _query_failed) get logged, but we still want to check for doneness.
3239         d.addErrback(log.err)
3240-        d.addBoth(self._check_for_done)
3241         d.addErrback(self._fatal_error)
3242hunk ./src/allmydata/mutable/servermap.py 580
3243+        d.addCallback(self._check_for_done)
3244         return d
3245 
3246     def _do_read(self, ss, peerid, storage_index, shnums, readv):
3247hunk ./src/allmydata/mutable/servermap.py 599
3248         d = ss.callRemote("slot_readv", storage_index, shnums, readv)
3249         return d
3250 
3251+
3252+    def _got_corrupt_share(self, e, shnum, peerid, data, lp):
3253+        """
3254+        I am called when a remote server returns a corrupt share in
3255+        response to one of our queries. By corrupt, I mean a share
3256+        without a valid signature. I then record the failure, notify the
3257+        server of the corruption, and record the share as bad.
3258+        """
3259+        f = failure.Failure(e)
3260+        self.log(format="bad share: %(f_value)s", f_value=str(f),
3261+                 failure=f, parent=lp, level=log.WEIRD, umid="h5llHg")
3262+        # Notify the server that its share is corrupt.
3263+        self.notify_server_corruption(peerid, shnum, str(e))
3264+        # By flagging this as a bad peer, we won't count any of
3265+        # the other shares on that peer as valid, though if we
3266+        # happen to find a valid version string amongst those
3267+        # shares, we'll keep track of it so that we don't need
3268+        # to validate the signature on those again.
3269+        self._bad_peers.add(peerid)
3270+        self._last_failure = f
3271+        # XXX: Use the reader for this?
3272+        checkstring = data[:SIGNED_PREFIX_LENGTH]
3273+        self._servermap.mark_bad_share(peerid, shnum, checkstring)
3274+        self._servermap.problems.append(f)
3275+
3276+
3277+    def _cache_good_sharedata(self, verinfo, shnum, now, data):
3278+        """
3279+        If one of my queries returns successfully (which means that we
3280+        were able to and successfully did validate the signature), I
3281+        cache the data that we initially fetched from the storage
3282+        server. This will help reduce the number of roundtrips that need
3283+        to occur when the file is downloaded, or when the file is
3284+        updated.
3285+        """
3286+        if verinfo:
3287+            self._node._add_to_cache(verinfo, shnum, 0, data)
3288+
3289+
3290     def _got_results(self, datavs, peerid, readsize, stuff, started):
3291         lp = self.log(format="got result from [%(peerid)s], %(numshares)d shares",
3292                       peerid=idlib.shortnodeid_b2a(peerid),
3293hunk ./src/allmydata/mutable/servermap.py 641
3294-                      numshares=len(datavs),
3295-                      level=log.NOISY)
3296+                      numshares=len(datavs))
3297         now = time.time()
3298         elapsed = now - started
3299hunk ./src/allmydata/mutable/servermap.py 644
3300-        self._queries_outstanding.discard(peerid)
3301-        self._servermap.reachable_peers.add(peerid)
3302-        self._must_query.discard(peerid)
3303-        self._queries_completed += 1
3304+        def _done_processing(ignored=None):
3305+            self._queries_outstanding.discard(peerid)
3306+            self._servermap.reachable_peers.add(peerid)
3307+            self._must_query.discard(peerid)
3308+            self._queries_completed += 1
3309         if not self._running:
3310hunk ./src/allmydata/mutable/servermap.py 650
3311-            self.log("but we're not running, so we'll ignore it", parent=lp,
3312-                     level=log.NOISY)
3313+            self.log("but we're not running, so we'll ignore it", parent=lp)
3314+            _done_processing()
3315             self._status.add_per_server_time(peerid, "late", started, elapsed)
3316             return
3317         self._status.add_per_server_time(peerid, "query", started, elapsed)
3318hunk ./src/allmydata/mutable/servermap.py 661
3319         else:
3320             self._empty_peers.add(peerid)
3321 
3322-        last_verinfo = None
3323-        last_shnum = None
3324+        ss, storage_index = stuff
3325+        ds = []
3326+
3327         for shnum,datav in datavs.items():
3328             data = datav[0]
3329hunk ./src/allmydata/mutable/servermap.py 666
3330-            try:
3331-                verinfo = self._got_results_one_share(shnum, data, peerid, lp)
3332-                last_verinfo = verinfo
3333-                last_shnum = shnum
3334-                self._node._add_to_cache(verinfo, shnum, 0, data)
3335-            except CorruptShareError, e:
3336-                # log it and give the other shares a chance to be processed
3337-                f = failure.Failure()
3338-                self.log(format="bad share: %(f_value)s", f_value=str(f.value),
3339-                         failure=f, parent=lp, level=log.WEIRD, umid="h5llHg")
3340-                self.notify_server_corruption(peerid, shnum, str(e))
3341-                self._bad_peers.add(peerid)
3342-                self._last_failure = f
3343-                checkstring = data[:SIGNED_PREFIX_LENGTH]
3344-                self._servermap.mark_bad_share(peerid, shnum, checkstring)
3345-                self._servermap.problems.append(f)
3346-                pass
3347+            reader = MDMFSlotReadProxy(ss,
3348+                                       storage_index,
3349+                                       shnum,
3350+                                       data)
3351+            self._readers.setdefault(peerid, dict())[shnum] = reader
3352+            # our goal, with each response, is to validate the version
3353+            # information and share data as best we can at this point --
3354+            # we do this by validating the signature. To do this, we
3355+            # need to do the following:
3356+            #   - If we don't already have the public key, fetch the
3357+            #     public key. We use this to validate the signature.
3358+            if not self._node.get_pubkey():
3359+                # fetch and set the public key.
3360+                d = reader.get_verification_key(queue=True)
3361+                d.addCallback(lambda results, shnum=shnum, peerid=peerid:
3362+                    self._try_to_set_pubkey(results, peerid, shnum, lp))
3363+                # XXX: Make self._pubkey_query_failed?
3364+                d.addErrback(lambda error, shnum=shnum, peerid=peerid:
3365+                    self._got_corrupt_share(error, shnum, peerid, data, lp))
3366+            else:
3367+                # we already have the public key.
3368+                d = defer.succeed(None)
3369 
3370hunk ./src/allmydata/mutable/servermap.py 689
3371-        self._status.timings["cumulative_verify"] += (time.time() - now)
3372+            # Neither of these two branches returns anything of
3373+            # consequence, so the first entry in our deferredlist will
3374+            # be None.
3375 
3376hunk ./src/allmydata/mutable/servermap.py 693
3377-        if self._need_privkey and last_verinfo:
3378-            # send them a request for the privkey. We send one request per
3379-            # server.
3380-            lp2 = self.log("sending privkey request",
3381-                           parent=lp, level=log.NOISY)
3382-            (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
3383-             offsets_tuple) = last_verinfo
3384-            o = dict(offsets_tuple)
3385+            # - Next, we need the version information. We almost
3386+            #   certainly got this by reading the first thousand or so
3387+            #   bytes of the share on the storage server, so we
3388+            #   shouldn't need to fetch anything at this step.
3389+            d2 = reader.get_verinfo()
3390+            d2.addErrback(lambda error, shnum=shnum, peerid=peerid:
3391+                self._got_corrupt_share(error, shnum, peerid, data, lp))
3392+            # - Next, we need the signature. For an SDMF share, it is
3393+            #   likely that we fetched this when doing our initial fetch
3394+            #   to get the version information. In MDMF, this lives at
3395+            #   the end of the share, so unless the file is quite small,
3396+            #   we'll need to do a remote fetch to get it.
3397+            d3 = reader.get_signature(queue=True)
3398+            d3.addErrback(lambda error, shnum=shnum, peerid=peerid:
3399+                self._got_corrupt_share(error, shnum, peerid, data, lp))
3400+            #  Once we have all three of these responses, we can move on
3401+            #  to validating the signature
3402+
3403+            # Does the node already have a privkey? If not, we'll try to
3404+            # fetch it here.
3405+            if self._need_privkey:
3406+                d4 = reader.get_encprivkey(queue=True)
3407+                d4.addCallback(lambda results, shnum=shnum, peerid=peerid:
3408+                    self._try_to_validate_privkey(results, peerid, shnum, lp))
3409+                d4.addErrback(lambda error, shnum=shnum, peerid=peerid:
3410+                    self._privkey_query_failed(error, shnum, data, lp))
3411+            else:
3412+                d4 = defer.succeed(None)
3413 
3414hunk ./src/allmydata/mutable/servermap.py 722
3415-            self._queries_outstanding.add(peerid)
3416-            readv = [ (o['enc_privkey'], (o['EOF'] - o['enc_privkey'])) ]
3417-            ss = self._servermap.connections[peerid]
3418-            privkey_started = time.time()
3419-            d = self._do_read(ss, peerid, self._storage_index,
3420-                              [last_shnum], readv)
3421-            d.addCallback(self._got_privkey_results, peerid, last_shnum,
3422-                          privkey_started, lp2)
3423-            d.addErrback(self._privkey_query_failed, peerid, last_shnum, lp2)
3424-            d.addErrback(log.err)
3425-            d.addCallback(self._check_for_done)
3426-            d.addErrback(self._fatal_error)
3427 
3428hunk ./src/allmydata/mutable/servermap.py 723
3429+            if self.fetch_update_data:
3430+                # fetch the block hash tree and first + last segment, as
3431+                # configured earlier.
3432+                # Then set them in wherever we happen to want to set
3433+                # them.
3434+                ds = []
3435+                # XXX: We do this above, too. Is there a good way to
3436+                # make the two routines share the value without
3437+                # introducing more roundtrips?
3438+                ds.append(reader.get_verinfo())
3439+                ds.append(reader.get_blockhashes(queue=True))
3440+                ds.append(reader.get_block_and_salt(self.start_segment,
3441+                                                    queue=True))
3442+                ds.append(reader.get_block_and_salt(self.end_segment,
3443+                                                    queue=True))
3444+                d5 = deferredutil.gatherResults(ds)
3445+                d5.addCallback(self._got_update_results_one_share, shnum)
3446+            else:
3447+                d5 = defer.succeed(None)
3448+
3449+            dl = defer.DeferredList([d, d2, d3, d4, d5])
3450+            dl.addBoth(self._turn_barrier)
3451+            reader.flush()
3452+            dl.addCallback(lambda results, shnum=shnum, peerid=peerid:
3453+                self._got_signature_one_share(results, shnum, peerid, lp))
3454+            dl.addErrback(lambda error, shnum=shnum, data=data:
3455+                self._got_corrupt_share(error, shnum, peerid, data, lp))
3456+            dl.addCallback(lambda verinfo, shnum=shnum, peerid=peerid, data=data:
3457+                self._cache_good_sharedata(verinfo, shnum, now, data))
3458+            ds.append(dl)
3459+        # dl is a deferred list that will fire when all of the shares
3460+        # that we found on this peer are done processing. When dl fires,
3461+        # we know that processing is done, so we can decrement the
3462+        # semaphore-like thing that we incremented earlier.
3463+        dl = defer.DeferredList(ds, fireOnOneErrback=True)
3464+        # Are we done? Done means that there are no more queries to
3465+        # send, that there are no outstanding queries, and that we
3466+        # have no responses that are still being processed. If we
3467+        # are done, self._check_for_done will cause the done deferred
3468+        # that we returned to our caller to fire, which tells them that
3469+        # they have a complete servermap, and that we won't be touching
3470+        # the servermap anymore.
3471+        dl.addCallback(_done_processing)
3472+        dl.addCallback(self._check_for_done)
3473+        dl.addErrback(self._fatal_error)
3474         # all done!
3475         self.log("_got_results done", parent=lp, level=log.NOISY)
3476hunk ./src/allmydata/mutable/servermap.py 770
3477+        return dl
3478+
3479+
3480+    def _turn_barrier(self, result):
3481+        """
3482+        I help the servermap updater avoid the recursion limit issues
3483+        discussed in #237.
3484+        """
3485+        return fireEventually(result)
3486+
3487+
3488+    def _try_to_set_pubkey(self, pubkey_s, peerid, shnum, lp):
3489+        if self._node.get_pubkey():
3490+            return # don't go through this again if we don't have to
3491+        fingerprint = hashutil.ssk_pubkey_fingerprint_hash(pubkey_s)
3492+        assert len(fingerprint) == 32
3493+        if fingerprint != self._node.get_fingerprint():
3494+            raise CorruptShareError(peerid, shnum,
3495+                                "pubkey doesn't match fingerprint")
3496+        self._node._populate_pubkey(self._deserialize_pubkey(pubkey_s))
3497+        assert self._node.get_pubkey()
3498+
3499 
3500     def notify_server_corruption(self, peerid, shnum, reason):
3501         ss = self._servermap.connections[peerid]
3502hunk ./src/allmydata/mutable/servermap.py 798
3503         ss.callRemoteOnly("advise_corrupt_share",
3504                           "mutable", self._storage_index, shnum, reason)
3505 
3506-    def _got_results_one_share(self, shnum, data, peerid, lp):
3507+
3508+    def _got_signature_one_share(self, results, shnum, peerid, lp):
3509+        # It is our job to give versioninfo to our caller. We need to
3510+        # raise CorruptShareError if the share is corrupt for any
3511+        # reason, something that our caller will handle.
3512         self.log(format="_got_results: got shnum #%(shnum)d from peerid %(peerid)s",
3513                  shnum=shnum,
3514                  peerid=idlib.shortnodeid_b2a(peerid),
3515hunk ./src/allmydata/mutable/servermap.py 808
3516                  level=log.NOISY,
3517                  parent=lp)
3518+        if not self._running:
3519+            # We can't process the results, since we can't touch the
3520+            # servermap anymore.
3521+            self.log("but we're not running anymore.")
3522+            return None
3523 
3524hunk ./src/allmydata/mutable/servermap.py 814
3525-        # this might raise NeedMoreDataError, if the pubkey and signature
3526-        # live at some weird offset. That shouldn't happen, so I'm going to
3527-        # treat it as a bad share.
3528-        (seqnum, root_hash, IV, k, N, segsize, datalength,
3529-         pubkey_s, signature, prefix) = unpack_prefix_and_signature(data)
3530-
3531-        if not self._node.get_pubkey():
3532-            fingerprint = hashutil.ssk_pubkey_fingerprint_hash(pubkey_s)
3533-            assert len(fingerprint) == 32
3534-            if fingerprint != self._node.get_fingerprint():
3535-                raise CorruptShareError(peerid, shnum,
3536-                                        "pubkey doesn't match fingerprint")
3537-            self._node._populate_pubkey(self._deserialize_pubkey(pubkey_s))
3538-
3539-        if self._need_privkey:
3540-            self._try_to_extract_privkey(data, peerid, shnum, lp)
3541-
3542-        (ig_version, ig_seqnum, ig_root_hash, ig_IV, ig_k, ig_N,
3543-         ig_segsize, ig_datalen, offsets) = unpack_header(data)
3544+        _, verinfo, signature, __, ___ = results
3545+        (seqnum,
3546+         root_hash,
3547+         saltish,
3548+         segsize,
3549+         datalen,
3550+         k,
3551+         n,
3552+         prefix,
3553+         offsets) = verinfo[1]
3554         offsets_tuple = tuple( [(key,value) for key,value in offsets.items()] )
3555 
3556hunk ./src/allmydata/mutable/servermap.py 826
3557-        verinfo = (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
3558+        # XXX: This should be done for us in the method, so
3559+        # presumably you can go in there and fix it.
3560+        verinfo = (seqnum,
3561+                   root_hash,
3562+                   saltish,
3563+                   segsize,
3564+                   datalen,
3565+                   k,
3566+                   n,
3567+                   prefix,
3568                    offsets_tuple)
3569hunk ./src/allmydata/mutable/servermap.py 837
3570+        # This tuple uniquely identifies a version of the file; we use
3571+        # it to keep track of the versions we've already validated.
3572 
3573         if verinfo not in self._valid_versions:
3574hunk ./src/allmydata/mutable/servermap.py 841
3575-            # it's a new pair. Verify the signature.
3576-            valid = self._node.get_pubkey().verify(prefix, signature)
3577+            # This is a new version tuple, and we need to validate it
3578+            # against the public key before keeping track of it.
3579+            assert self._node.get_pubkey()
3580+            valid = self._node.get_pubkey().verify(prefix, signature[1])
3581             if not valid:
3582hunk ./src/allmydata/mutable/servermap.py 846
3583-                raise CorruptShareError(peerid, shnum, "signature is invalid")
3584+                raise CorruptShareError(peerid, shnum,
3585+                                        "signature is invalid")
3586 
3587hunk ./src/allmydata/mutable/servermap.py 849
3588-            # ok, it's a valid verinfo. Add it to the list of validated
3589-            # versions.
3590-            self.log(" found valid version %d-%s from %s-sh%d: %d-%d/%d/%d"
3591-                     % (seqnum, base32.b2a(root_hash)[:4],
3592-                        idlib.shortnodeid_b2a(peerid), shnum,
3593-                        k, N, segsize, datalength),
3594-                     parent=lp)
3595-            self._valid_versions.add(verinfo)
3596-        # We now know that this is a valid candidate verinfo.
3597+        # ok, it's a valid verinfo. Add it to the list of validated
3598+        # versions.
3599+        self.log(" found valid version %d-%s from %s-sh%d: %d-%d/%d/%d"
3600+                 % (seqnum, base32.b2a(root_hash)[:4],
3601+                    idlib.shortnodeid_b2a(peerid), shnum,
3602+                    k, n, segsize, datalen),
3603+                    parent=lp)
3604+        self._valid_versions.add(verinfo)
3605+        # We now know that this is a valid candidate verinfo. Whether or
3606+        # not this instance of it is valid is a matter for the next
3607+        # statement; at this point, we just know that its signature
3608+        # checks out, so if we see this version info again we're okay
3609+        # to skip the signature-checking step.
3610 
3611hunk ./src/allmydata/mutable/servermap.py 863
3612+        # (peerid, shnum) are bound in the method invocation.
3613         if (peerid, shnum) in self._servermap.bad_shares:
3614             # we've been told that the rest of the data in this share is
3615             # unusable, so don't add it to the servermap.
3616hunk ./src/allmydata/mutable/servermap.py 876
3617         self._servermap.add_new_share(peerid, shnum, verinfo, timestamp)
3618         # and the versionmap
3619         self.versionmap.add(verinfo, (shnum, peerid, timestamp))
3620+
3621         return verinfo
3622 
3623hunk ./src/allmydata/mutable/servermap.py 879
3624-    def _deserialize_pubkey(self, pubkey_s):
3625-        verifier = rsa.create_verifying_key_from_string(pubkey_s)
3626-        return verifier
3627 
3628hunk ./src/allmydata/mutable/servermap.py 880
3629-    def _try_to_extract_privkey(self, data, peerid, shnum, lp):
3630-        try:
3631-            r = unpack_share(data)
3632-        except NeedMoreDataError, e:
3633-            # this share won't help us. oh well.
3634-            offset = e.encprivkey_offset
3635-            length = e.encprivkey_length
3636-            self.log("shnum %d on peerid %s: share was too short (%dB) "
3637-                     "to get the encprivkey; [%d:%d] ought to hold it" %
3638-                     (shnum, idlib.shortnodeid_b2a(peerid), len(data),
3639-                      offset, offset+length),
3640-                     parent=lp)
3641-            # NOTE: if uncoordinated writes are taking place, someone might
3642-            # change the share (and most probably move the encprivkey) before
3643-            # we get a chance to do one of these reads and fetch it. This
3644-            # will cause us to see a NotEnoughSharesError(unable to fetch
3645-            # privkey) instead of an UncoordinatedWriteError . This is a
3646-            # nuisance, but it will go away when we move to DSA-based mutable
3647-            # files (since the privkey will be small enough to fit in the
3648-            # write cap).
3649+    def _got_update_results_one_share(self, results, share):
3650+        """
3651+        I record the update results in the servermap.
3652+        """
3653+        assert len(results) == 4
3654+        verinfo, blockhashes, start, end = results
3655+        (seqnum,
3656+         root_hash,
3657+         saltish,
3658+         segsize,
3659+         datalen,
3660+         k,
3661+         n,
3662+         prefix,
3663+         offsets) = verinfo
3664+        offsets_tuple = tuple( [(key,value) for key,value in offsets.items()] )
3665 
3666hunk ./src/allmydata/mutable/servermap.py 897
3667-            return
3668+        # XXX: This should be done for us in the method, so
3669+        # presumably you can go in there and fix it.
3670+        verinfo = (seqnum,
3671+                   root_hash,
3672+                   saltish,
3673+                   segsize,
3674+                   datalen,
3675+                   k,
3676+                   n,
3677+                   prefix,
3678+                   offsets_tuple)
3679 
3680hunk ./src/allmydata/mutable/servermap.py 909
3681-        (seqnum, root_hash, IV, k, N, segsize, datalen,
3682-         pubkey, signature, share_hash_chain, block_hash_tree,
3683-         share_data, enc_privkey) = r
3684+        update_data = (blockhashes, start, end)
3685+        self._servermap.set_update_data_for_share_and_verinfo(share,
3686+                                                              verinfo,
3687+                                                              update_data)
3688 
3689hunk ./src/allmydata/mutable/servermap.py 914
3690-        return self._try_to_validate_privkey(enc_privkey, peerid, shnum, lp)
3691 
3692hunk ./src/allmydata/mutable/servermap.py 915
3693-    def _try_to_validate_privkey(self, enc_privkey, peerid, shnum, lp):
3694+    def _deserialize_pubkey(self, pubkey_s):
3695+        verifier = rsa.create_verifying_key_from_string(pubkey_s)
3696+        return verifier
3697 
3698hunk ./src/allmydata/mutable/servermap.py 919
3699+
3700+    def _try_to_validate_privkey(self, enc_privkey, peerid, shnum, lp):
3701+        """
3702+        Given an encrypted private key from a remote server, I validate the
3703+        writekey derived from it against the writekey stored in my node.
3704+        If it is valid, I set the privkey and encprivkey properties of the node.
3705+        """
3706         alleged_privkey_s = self._node._decrypt_privkey(enc_privkey)
3707         alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s)
3708         if alleged_writekey != self._node.get_writekey():
3709hunk ./src/allmydata/mutable/servermap.py 998
3710         self._queries_completed += 1
3711         self._last_failure = f
3712 
3713-    def _got_privkey_results(self, datavs, peerid, shnum, started, lp):
3714-        now = time.time()
3715-        elapsed = now - started
3716-        self._status.add_per_server_time(peerid, "privkey", started, elapsed)
3717-        self._queries_outstanding.discard(peerid)
3718-        if not self._need_privkey:
3719-            return
3720-        if shnum not in datavs:
3721-            self.log("privkey wasn't there when we asked it",
3722-                     level=log.WEIRD, umid="VA9uDQ")
3723-            return
3724-        datav = datavs[shnum]
3725-        enc_privkey = datav[0]
3726-        self._try_to_validate_privkey(enc_privkey, peerid, shnum, lp)
3727 
3728     def _privkey_query_failed(self, f, peerid, shnum, lp):
3729         self._queries_outstanding.discard(peerid)
3730hunk ./src/allmydata/mutable/servermap.py 1012
3731         self._servermap.problems.append(f)
3732         self._last_failure = f
3733 
3734+
3735     def _check_for_done(self, res):
3736         # exit paths:
3737         #  return self._send_more_queries(outstanding) : send some more queries
3738hunk ./src/allmydata/mutable/servermap.py 1018
3739         #  return self._done() : all done
3740         #  return : keep waiting, no new queries
3741-
3742         lp = self.log(format=("_check_for_done, mode is '%(mode)s', "
3743                               "%(outstanding)d queries outstanding, "
3744                               "%(extra)d extra peers available, "
3745hunk ./src/allmydata/mutable/servermap.py 1209
3746 
3747     def _done(self):
3748         if not self._running:
3749+            self.log("not running; we're already done")
3750             return
3751         self._running = False
3752         now = time.time()
3753hunk ./src/allmydata/mutable/servermap.py 1224
3754         self._servermap.last_update_time = self._started
3755         # the servermap will not be touched after this
3756         self.log("servermap: %s" % self._servermap.summarize_versions())
3757+
3758         eventually(self._done_deferred.callback, self._servermap)
3759 
3760     def _fatal_error(self, f):
3761}
3762[interfaces: change interfaces to work with MDMF
3763Kevan Carstensen <kevan@isnotajoke.com>**20110802014119
3764 Ignore-this: 2f441022cf888c044bc9e6dd609db139
3765 
3766 A lot of this work concerns #993, in that it unifies (to an extent) the
3767 interfaces of mutable and immutable files.
3768] {
3769hunk ./src/allmydata/interfaces.py 7
3770      ChoiceOf, IntegerConstraint, Any, RemoteInterface, Referenceable
3771 
3772 HASH_SIZE=32
3773+SALT_SIZE=16
3774+
3775+SDMF_VERSION=0
3776+MDMF_VERSION=1
3777 
3778 Hash = StringConstraint(maxLength=HASH_SIZE,
3779                         minLength=HASH_SIZE)# binary format 32-byte SHA256 hash
3780hunk ./src/allmydata/interfaces.py 424
3781         """
3782 
3783 
3784+class IMutableSlotWriter(Interface):
3785+    """
3786+    The interface for a writer around a mutable slot on a remote server.
3787+    """
3788+    def set_checkstring(checkstring, *args):
3789+        """
3790+        Set the checkstring that I will pass to the remote server when
3791+        writing.
3792+
3793+            @param checkstring A packed checkstring to use.
3794+
3795+        Note that implementations can differ in which semantics they
3796+        wish to support for set_checkstring -- they can, for example,
3797+        build the checkstring themselves from constituent values passed
3798+        as separate arguments.
3799+        """
3800+
3801+    def get_checkstring():
3802+        """
3803+        Get the checkstring that I think currently exists on the remote
3804+        server.
3805+        """
3806+
3807+    def put_block(data, segnum, salt):
3808+        """
3809+        Add a block and salt to the share.
3810+        """
3811+
3812+    def put_encprivkey(encprivkey):
3813+        """
3814+        Add the encrypted private key to the share.
3815+        """
3816+
3817+    def put_blockhashes(blockhashes=list):
3818+        """
3819+        Add the block hash tree to the share.
3820+        """
3821+
3822+    def put_sharehashes(sharehashes=dict):
3823+        """
3824+        Add the share hash chain to the share.
3825+        """
3826+
3827+    def get_signable():
3828+        """
3829+        Return the part of the share that needs to be signed.
3830+        """
3831+
3832+    def put_signature(signature):
3833+        """
3834+        Add the signature to the share.
3835+        """
3836+
3837+    def put_verification_key(verification_key):
3838+        """
3839+        Add the verification key to the share.
3840+        """
3841+
3842+    def finish_publishing():
3843+        """
3844+        Do anything necessary to finish writing the share to a remote
3845+        server. I require that no further publishing needs to take place
3846+        after this method has been called.
3847+        """
3848+
3849+
3850 class IURI(Interface):
3851     def init_from_string(uri):
3852         """Accept a string (as created by my to_string() method) and populate
3853hunk ./src/allmydata/interfaces.py 546
3854 
3855 class IMutableFileURI(Interface):
3856     """I am a URI which represents a mutable filenode."""
3857+    def get_extension_params():
3858+        """Return the extension parameters in the URI"""
3859+
3860+    def set_extension_params():
3861+        """Set the extension parameters that should be in the URI"""
3862 
3863 class IDirectoryURI(Interface):
3864     pass
3865hunk ./src/allmydata/interfaces.py 574
3866 class MustNotBeUnknownRWError(CapConstraintError):
3867     """Cannot add an unknown child cap specified in a rw_uri field."""
3868 
3869+
3870+class IReadable(Interface):
3871+    """I represent a readable object -- either an immutable file, or a
3872+    specific version of a mutable file.
3873+    """
3874+
3875+    def is_readonly():
3876+        """Return True if this reference provides mutable access to the given
3877+        file or directory (i.e. if you can modify it), or False if not. Note
3878+        that even if this reference is read-only, someone else may hold a
3879+        read-write reference to it.
3880+
3881+        For an IReadable returned by get_best_readable_version(), this will
3882+        always return True, but for instances of subinterfaces such as
3883+        IMutableFileVersion, it may return False."""
3884+
3885+    def is_mutable():
3886+        """Return True if this file or directory is mutable (by *somebody*,
3887+        not necessarily you), False if it is immutable. Note that a file
3888+        might be mutable overall, but your reference to it might be
3889+        read-only. On the other hand, all references to an immutable file
3890+        will be read-only; there are no read-write references to an immutable
3891+        file."""
3892+
3893+    def get_storage_index():
3894+        """Return the storage index of the file."""
3895+
3896+    def get_size():
3897+        """Return the length (in bytes) of this readable object."""
3898+
3899+    def download_to_data():
3900+        """Download all of the file contents. I return a Deferred that fires
3901+        with the contents as a byte string."""
3902+
3903+    def read(consumer, offset=0, size=None):
3904+        """Download a portion (possibly all) of the file's contents, making
3905+        them available to the given IConsumer. Return a Deferred that fires
3906+        (with the consumer) when the consumer is unregistered (either because
3907+        the last byte has been given to it, or because the consumer threw an
3908+        exception during write(), possibly because it no longer wants to
3909+        receive data). The portion downloaded will start at 'offset' and
3910+        contain 'size' bytes (or the remainder of the file if size==None).
3911+
3912+        The consumer will be used in non-streaming mode: an IPullProducer
3913+        will be attached to it.
3914+
3915+        The consumer will not receive data right away: several network trips
3916+        must occur first. The order of events will be::
3917+
3918+         consumer.registerProducer(p, streaming)
3919+          (if streaming == False)::
3920+           consumer does p.resumeProducing()
3921+            consumer.write(data)
3922+           consumer does p.resumeProducing()
3923+            consumer.write(data).. (repeat until all data is written)
3924+         consumer.unregisterProducer()
3925+         deferred.callback(consumer)
3926+
3927+        If a download error occurs, or an exception is raised by
3928+        consumer.registerProducer() or consumer.write(), I will call
3929+        consumer.unregisterProducer() and then deliver the exception via
3930+        deferred.errback(). To cancel the download, the consumer should call
3931+        p.stopProducing(), which will result in an exception being delivered
3932+        via deferred.errback().
3933+
3934+        See src/allmydata/util/consumer.py for an example of a simple
3935+        download-to-memory consumer.
3936+        """
3937+
3938+
3939+class IWritable(Interface):
3940+    """
3941+    I define methods that callers can use to update SDMF and MDMF
3942+    mutable files on a Tahoe-LAFS grid.
3943+    """
3944+    # XXX: For the moment, we have only this. It is possible that we
3945+    #      want to move overwrite() and modify() in here too.
3946+    def update(data, offset):
3947+        """
3948+        I write the data from my data argument to the MDMF file,
3949+        starting at offset. I continue writing data until my data
3950+        argument is exhausted, appending data to the file as necessary.
3951+        """
3952+        # assert IMutableUploadable.providedBy(data)
3953+        # to append data: offset=node.get_size_of_best_version()
3954+        # do we want to support compacting MDMF?
3955+        # for an MDMF file, this can be done with O(data.get_size())
3956+        # memory. For an SDMF file, any modification takes
3957+        # O(node.get_size_of_best_version()).
3958+
3959+
3960+class IMutableFileVersion(IReadable):
3961+    """I provide access to a particular version of a mutable file. The
3962+    access is read/write if I was obtained from a filenode derived from
3963+    a write cap, or read-only if the filenode was derived from a read cap.
3964+    """
3965+
3966+    def get_sequence_number():
3967+        """Return the sequence number of this version."""
3968+
3969+    def get_servermap():
3970+        """Return the IMutableFileServerMap instance that was used to create
3971+        this object.
3972+        """
3973+
3974+    def get_writekey():
3975+        """Return this filenode's writekey, or None if the node does not have
3976+        write-capability. This may be used to assist with data structures
3977+        that need to make certain data available only to writers, such as the
3978+        read-write child caps in dirnodes. The recommended process is to have
3979+        reader-visible data be submitted to the filenode in the clear (where
3980+        it will be encrypted by the filenode using the readkey), but encrypt
3981+        writer-visible data using this writekey.
3982+        """
3983+
3984+    # TODO: Can this be overwrite instead of replace?
3985+    def replace(new_contents):
3986+        """Replace the contents of the mutable file, provided that no other
3987+        node has published (or is attempting to publish, concurrently) a
3988+        newer version of the file than this one.
3989+
3990+        I will avoid modifying any share that is different than the version
3991+        given by get_sequence_number(). However, if another node is writing
3992+        to the file at the same time as me, I may manage to update some shares
3993+        while they update others. If I see any evidence of this, I will signal
3994+        UncoordinatedWriteError, and the file will be left in an inconsistent
3995+        state (possibly the version you provided, possibly the old version,
3996+        possibly somebody else's version, and possibly a mix of shares from
3997+        all of these).
3998+
3999+        The recommended response to UncoordinatedWriteError is to either
4000+        return it to the caller (since they failed to coordinate their
4001+        writes), or to attempt some sort of recovery. It may be sufficient to
4002+        wait a random interval (with exponential backoff) and repeat your
4003+        operation. If I do not signal UncoordinatedWriteError, then I was
4004+        able to write the new version without incident.
4005+
4006+        I return a Deferred that fires (with a PublishStatus object) when the
4007+        update has completed.
4008+        """
4009+
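The recommended backoff response described above can be sketched as a small pure function (illustrative only; the interface does not specify base, factor, or attempt count, so those parameters are assumptions):

```python
import random

def backoff_delays(base=1.0, factor=2.0, attempts=5):
    """Sketch of the suggested response to UncoordinatedWriteError:
    wait a random interval, with exponential backoff, before retrying
    the operation. The parameters are illustrative, not from the
    interface."""
    delays = []
    ceiling = base
    for _ in range(attempts):
        # Each retry waits a random interval, with the upper bound
        # growing geometrically.
        delays.append(random.uniform(0, ceiling))
        ceiling *= factor
    return delays
```

A caller would sleep for each successive delay before reattempting replace(), giving up after the last attempt.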
4010+    def modify(modifier_cb):
4011+        """Modify the contents of the file, by downloading this version,
4012+        applying the modifier function (or bound method), then uploading
4013+        the new version. This will succeed as long as no other node
4014+        publishes a version between the download and the upload.
4015+        I return a Deferred that fires (with a PublishStatus object) when
4016+        the update is complete.
4017+
4018+        The modifier callable will be given three arguments: a string (with
4019+        the old contents), a 'first_time' boolean, and a servermap. As with
4020+        download_to_data(), the old contents will be from this version,
4021+        but the modifier can use the servermap to make other decisions
4022+        (such as refusing to apply the delta if there are multiple parallel
4023+        versions, or if there is evidence of a newer unrecoverable version).
4024+        'first_time' will be True the first time the modifier is called,
4025+        and False on any subsequent calls.
4026+
4027+        The callable should return a string with the new contents. The
4028+        callable must be prepared to be called multiple times, and must
4029+        examine the input string to see if the change that it wants to make
4030+        is already present in the old version. If it does not need to make
4031+        any changes, it can either return None, or return its input string.
4032+
4033+        If the modifier raises an exception, it will be returned in the
4034+        errback.
4035+        """
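A minimal modifier callable of the kind modify() expects might look like the following sketch. The argument order (old_contents, servermap, first_time) is assumed here; note that the callable is idempotent, since modify() may invoke it more than once:

```python
def append_line_modifier(line):
    """Return a modifier callable that appends 'line' to the file,
    doing nothing if the line is already present (a sketch; the
    servermap and first_time arguments are accepted but unused)."""
    def modifier(old_contents, servermap, first_time):
        if line in old_contents.splitlines():
            # The change is already present: return None to signal
            # that no upload is needed.
            return None
        return old_contents + line + "\n"
    return modifier
```

The node would then be driven with something like `node.modify(append_line_modifier("new entry"))`.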
4036+
4037+
4038 # The hierarchy looks like this:
4039 #  IFilesystemNode
4040 #   IFileNode
4041hunk ./src/allmydata/interfaces.py 833
4042     def raise_error():
4043         """Raise any error associated with this node."""
4044 
4045+    # XXX: These may not be appropriate outside the context of an IReadable.
4046     def get_size():
4047         """Return the length (in bytes) of the data this node represents. For
4048         directory nodes, I return the size of the backing store. I return
4049hunk ./src/allmydata/interfaces.py 850
4050 class IFileNode(IFilesystemNode):
4051     """I am a node which represents a file: a sequence of bytes. I am not a
4052     container, like IDirectoryNode."""
4053+    def get_best_readable_version():
4054+        """Return a Deferred that fires with an IReadable for the 'best'
4055+        available version of the file. The IReadable provides only read
4056+        access, even if this filenode was derived from a write cap.
4057 
4058hunk ./src/allmydata/interfaces.py 855
4059-class IImmutableFileNode(IFileNode):
4060-    def read(consumer, offset=0, size=None):
4061-        """Download a portion (possibly all) of the file's contents, making
4062-        them available to the given IConsumer. Return a Deferred that fires
4063-        (with the consumer) when the consumer is unregistered (either because
4064-        the last byte has been given to it, or because the consumer threw an
4065-        exception during write(), possibly because it no longer wants to
4066-        receive data). The portion downloaded will start at 'offset' and
4067-        contain 'size' bytes (or the remainder of the file if size==None).
4068-
4069-        The consumer will be used in non-streaming mode: an IPullProducer
4070-        will be attached to it.
4071+        For an immutable file, there is only one version. For a mutable
4072+        file, the 'best' version is the recoverable version with the
4073+        highest sequence number. If no uncoordinated writes have occurred,
4074+        and if enough shares are available, then this will be the most
4075+        recent version that has been uploaded. If no version is recoverable,
4076+        the Deferred will errback with an UnrecoverableFileError.
4077+        """
4078 
4079hunk ./src/allmydata/interfaces.py 863
4080-        The consumer will not receive data right away: several network trips
4081-        must occur first. The order of events will be::
4082+    def download_best_version():
4083+        """Download the contents of the version that would be returned
4084+        by get_best_readable_version(). This is equivalent to calling
4085+        download_to_data() on the IReadable given by that method.
4086 
4087hunk ./src/allmydata/interfaces.py 868
4088-         consumer.registerProducer(p, streaming)
4089-          (if streaming == False)::
4090-           consumer does p.resumeProducing()
4091-            consumer.write(data)
4092-           consumer does p.resumeProducing()
4093-            consumer.write(data).. (repeat until all data is written)
4094-         consumer.unregisterProducer()
4095-         deferred.callback(consumer)
4096+        I return a Deferred that fires with a byte string when the file
4097+        has been fully downloaded. To support streaming download, use
4098+        the 'read' method of IReadable. If no version is recoverable,
4099+        the Deferred will errback with an UnrecoverableFileError.
4100+        """
4101 
4102hunk ./src/allmydata/interfaces.py 874
4103-        If a download error occurs, or an exception is raised by
4104-        consumer.registerProducer() or consumer.write(), I will call
4105-        consumer.unregisterProducer() and then deliver the exception via
4106-        deferred.errback(). To cancel the download, the consumer should call
4107-        p.stopProducing(), which will result in an exception being delivered
4108-        via deferred.errback().
4109+    def get_size_of_best_version():
4110+        """Find the size of the version that would be returned by
4111+        get_best_readable_version().
4112 
4113hunk ./src/allmydata/interfaces.py 878
4114-        See src/allmydata/util/consumer.py for an example of a simple
4115-        download-to-memory consumer.
4116+        I return a Deferred that fires with an integer. If no version
4117+        is recoverable, the Deferred will errback with an
4118+        UnrecoverableFileError.
4119         """
4120 
4121hunk ./src/allmydata/interfaces.py 883
4122+
4123+class IImmutableFileNode(IFileNode, IReadable):
4124+    """I am a node representing an immutable file. Immutable files have
4125+    only one version."""
4126+
4127+
4128 class IMutableFileNode(IFileNode):
4129     """I provide access to a 'mutable file', which retains its identity
4130     regardless of what contents are put in it.
4131hunk ./src/allmydata/interfaces.py 948
4132     only be retrieved and updated all-at-once, as a single big string. Future
4133     versions of our mutable files will remove this restriction.
4134     """
4135-
4136-    def download_best_version():
4137-        """Download the 'best' available version of the file, meaning one of
4138-        the recoverable versions with the highest sequence number. If no
4139+    def get_best_mutable_version():
4140+        """Return a Deferred that fires with an IMutableFileVersion for
4141+        the 'best' available version of the file. The best version is
4142+        the recoverable version with the highest sequence number. If no
4143         uncoordinated writes have occurred, and if enough shares are
4144hunk ./src/allmydata/interfaces.py 953
4145-        available, then this will be the most recent version that has been
4146-        uploaded.
4147+        available, then this will be the most recent version that has
4148+        been uploaded.
4149 
4150hunk ./src/allmydata/interfaces.py 956
4151-        I update an internal servermap with MODE_READ, determine which
4152-        version of the file is indicated by
4153-        servermap.best_recoverable_version(), and return a Deferred that
4154-        fires with its contents. If no version is recoverable, the Deferred
4155-        will errback with UnrecoverableFileError.
4156-        """
4157-
4158-    def get_size_of_best_version():
4159-        """Find the size of the version that would be downloaded with
4160-        download_best_version(), without actually downloading the whole file.
4161-
4162-        I return a Deferred that fires with an integer.
4163+        If no version is recoverable, the Deferred will errback with an
4164+        UnrecoverableFileError.
4165         """
4166 
4167     def overwrite(new_contents):
4168hunk ./src/allmydata/interfaces.py 996
4169         errback.
4170         """
4171 
4172-
4173     def get_servermap(mode):
4174         """Return a Deferred that fires with an IMutableFileServerMap
4175         instance, updated using the given mode.
4176hunk ./src/allmydata/interfaces.py 1049
4177         writer-visible data using this writekey.
4178         """
4179 
4180+    def get_version():
4181+        """Returns the mutable file protocol version."""
4182+
4183 class NotEnoughSharesError(Exception):
4184     """Download was unable to get enough shares"""
4185 
4186hunk ./src/allmydata/interfaces.py 1888
4187         """The upload is finished, and whatever filehandle was in use may be
4188         closed."""
4189 
4190+
4191+class IMutableUploadable(Interface):
4192+    """
4193+    I represent content that is due to be uploaded to a mutable filecap.
4194+    """
4195+    # This is somewhat simpler than the IUploadable interface above
4196+    # because mutable files do not need to be concerned with possibly
4197+    # generating a CHK, nor with per-file keys. It is a subset of the
4198+    # methods in IUploadable, though, so we could just as well implement
4199+    # the mutable uploadables as IUploadables that don't happen to use
4200+    # those methods (with the understanding that the unused methods will
4201+    # never be called on such objects)
4202+    def get_size():
4203+        """
4204+        Returns a Deferred that fires with the size of the content held
4205+        by the uploadable.
4206+        """
4207+
4208+    def read(length):
4209+        """
4210+        Returns a list of strings which, when concatenated, are the next
4211+        length bytes of the file, or fewer if there are fewer bytes
4212+        between the current location and the end of the file.
4213+        """
4214+
4215+    def close():
4216+        """
4217+        The process that used the Uploadable is finished using it, so
4218+        the uploadable may be closed.
4219+        """
4220+
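An in-memory object satisfying this interface might look like the sketch below (the real implementation is the MutableData class added to allmydata.mutable.publish by these patches; get_size() is shown returning the size directly for simplicity, although the interface describes a Deferred):

```python
class StringUploadable(object):
    """A minimal in-memory IMutableUploadable-style object: it knows
    its size, yields successive chunks from read(), and has a no-op
    close(). This is an illustrative sketch, not the real class."""
    def __init__(self, data):
        self._data = data
        self._offset = 0

    def get_size(self):
        return len(self._data)

    def read(self, length):
        # Return a list of strings totalling at most 'length' bytes,
        # or fewer near the end of the data.
        chunk = self._data[self._offset:self._offset + length]
        self._offset += len(chunk)
        return [chunk]

    def close(self):
        pass
```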
4221 class IUploadResults(Interface):
4222     """I am returned by upload() methods. I contain a number of public
4223     attributes which can be read to determine the results of the upload. Some
4224}
4225[nodemaker: teach nodemaker how to create MDMF mutable files
4226Kevan Carstensen <kevan@isnotajoke.com>**20110802014258
4227 Ignore-this: 2bf1fd4f8c1d1ad0e855c678347b76c2
4228] {
4229hunk ./src/allmydata/nodemaker.py 3
4230 import weakref
4231 from zope.interface import implements
4232-from allmydata.interfaces import INodeMaker
4233+from allmydata.util.assertutil import precondition
4234+from allmydata.interfaces import INodeMaker, SDMF_VERSION
4235 from allmydata.immutable.literal import LiteralFileNode
4236 from allmydata.immutable.filenode import ImmutableFileNode, CiphertextFileNode
4237 from allmydata.immutable.upload import Data
4238hunk ./src/allmydata/nodemaker.py 9
4239 from allmydata.mutable.filenode import MutableFileNode
4240+from allmydata.mutable.publish import MutableData
4241 from allmydata.dirnode import DirectoryNode, pack_children
4242 from allmydata.unknown import UnknownNode
4243 from allmydata import uri
4244hunk ./src/allmydata/nodemaker.py 92
4245             return self._create_dirnode(filenode)
4246         return None
4247 
4248-    def create_mutable_file(self, contents=None, keysize=None):
4249+    def create_mutable_file(self, contents=None, keysize=None,
4250+                            version=SDMF_VERSION):
4251         n = MutableFileNode(self.storage_broker, self.secret_holder,
4252                             self.default_encoding_parameters, self.history)
4253         d = self.key_generator.generate(keysize)
4254hunk ./src/allmydata/nodemaker.py 97
4255-        d.addCallback(n.create_with_keys, contents)
4256+        d.addCallback(n.create_with_keys, contents, version=version)
4257         d.addCallback(lambda res: n)
4258         return d
4259 
4260hunk ./src/allmydata/nodemaker.py 101
4261-    def create_new_mutable_directory(self, initial_children={}):
4262+    def create_new_mutable_directory(self, initial_children={},
4263+                                     version=SDMF_VERSION):
4264+        # initial_children must have metadata (i.e. {} instead of None)
4265+        for (name, (node, metadata)) in initial_children.iteritems():
4266+            precondition(isinstance(metadata, dict),
4267+                         "create_new_mutable_directory requires metadata to be a dict, not None", metadata)
4268+            node.raise_error()
4269         d = self.create_mutable_file(lambda n:
4270hunk ./src/allmydata/nodemaker.py 109
4271-                                     pack_children(initial_children, n.get_writekey()))
4272+                                     MutableData(pack_children(initial_children,
4273+                                                    n.get_writekey())),
4274+                                     version=version)
4275         d.addCallback(self._create_dirnode)
4276         return d
4277 
4278}
4279[mutable/filenode: Modify mutable filenodes for use with MDMF
4280Kevan Carstensen <kevan@isnotajoke.com>**20110802014501
4281 Ignore-this: 3c230bb0ebe60a94c667b0ee0c3b28e0
4282 
4283 In particular:
4284     - Break MutableFileNode and MutableFileVersion into distinct classes.
4285     - Implement the interface modifications made for MDMF.
4286     - Be aware of MDMF caps.
4287     - Learn how to create and work with MDMF files.
4288] {
4289hunk ./src/allmydata/mutable/filenode.py 7
4290 from zope.interface import implements
4291 from twisted.internet import defer, reactor
4292 from foolscap.api import eventually
4293-from allmydata.interfaces import IMutableFileNode, \
4294-     ICheckable, ICheckResults, NotEnoughSharesError
4295-from allmydata.util import hashutil, log
4296+from allmydata.interfaces import IMutableFileNode, ICheckable, ICheckResults, \
4297+     NotEnoughSharesError, MDMF_VERSION, SDMF_VERSION, IMutableUploadable, \
4298+     IMutableFileVersion, IWritable
4299+from allmydata.util import hashutil, log, consumer, deferredutil, mathutil
4300 from allmydata.util.assertutil import precondition
4301hunk ./src/allmydata/mutable/filenode.py 12
4302-from allmydata.uri import WriteableSSKFileURI, ReadonlySSKFileURI
4303+from allmydata.uri import WriteableSSKFileURI, ReadonlySSKFileURI, \
4304+                          WritableMDMFFileURI, ReadonlyMDMFFileURI
4305 from allmydata.monitor import Monitor
4306 from pycryptopp.cipher.aes import AES
4307 
4308hunk ./src/allmydata/mutable/filenode.py 17
4309-from allmydata.mutable.publish import Publish
4310-from allmydata.mutable.common import MODE_READ, MODE_WRITE, UnrecoverableFileError, \
4311+from allmydata.mutable.publish import Publish, MutableData,\
4312+                                      TransformingUploadable
4313+from allmydata.mutable.common import MODE_READ, MODE_WRITE, MODE_CHECK, UnrecoverableFileError, \
4314      ResponseCache, UncoordinatedWriteError
4315 from allmydata.mutable.servermap import ServerMap, ServermapUpdater
4316 from allmydata.mutable.retrieve import Retrieve
4317hunk ./src/allmydata/mutable/filenode.py 70
4318         self._sharemap = {} # known shares, shnum-to-[nodeids]
4319         self._cache = ResponseCache()
4320         self._most_recent_size = None
4321+        # filled in after __init__ if we're being created for the first time;
4322+        # filled in by the servermap updater before publishing, otherwise.
4323+        # set to this default value in case neither of those things happen,
4324+        # or in case the servermap can't find any shares to tell us what
4325+        # to publish as.
4326+        self._protocol_version = None
4327 
4328         # all users of this MutableFileNode go through the serializer. This
4329         # takes advantage of the fact that Deferreds discard the callbacks
4330hunk ./src/allmydata/mutable/filenode.py 83
4331         # forever without consuming more and more memory.
4332         self._serializer = defer.succeed(None)
4333 
4334+        # Starting with MDMF, we can get these from caps if they're
4335+        # there. Leave them alone for now; they'll be filled in by my
4336+        # init_from_cap method if necessary.
4337+        self._downloader_hints = {}
4338+
4339     def __repr__(self):
4340         if hasattr(self, '_uri'):
4341             return "<%s %x %s %s>" % (self.__class__.__name__, id(self), self.is_readonly() and 'RO' or 'RW', self._uri.abbrev())
4342hunk ./src/allmydata/mutable/filenode.py 99
4343         # verification key, nor things like 'k' or 'N'. If and when someone
4344         # wants to get our contents, we'll pull from shares and fill those
4345         # in.
4346-        assert isinstance(filecap, (ReadonlySSKFileURI, WriteableSSKFileURI))
4347+        if isinstance(filecap, (WritableMDMFFileURI, ReadonlyMDMFFileURI)):
4348+            self._protocol_version = MDMF_VERSION
4349+        elif isinstance(filecap, (ReadonlySSKFileURI, WriteableSSKFileURI)):
4350+            self._protocol_version = SDMF_VERSION
4351+
4352         self._uri = filecap
4353         self._writekey = None
4354hunk ./src/allmydata/mutable/filenode.py 106
4355-        if isinstance(filecap, WriteableSSKFileURI):
4356+
4357+        if not filecap.is_readonly() and filecap.is_mutable():
4358             self._writekey = self._uri.writekey
4359         self._readkey = self._uri.readkey
4360         self._storage_index = self._uri.storage_index
4361hunk ./src/allmydata/mutable/filenode.py 120
4362         # if possible, otherwise by the first peer that Publish talks to.
4363         self._privkey = None
4364         self._encprivkey = None
4365+
4366+        # Starting with MDMF caps, we allowed arbitrary extensions in
4367+        # caps. If we were initialized with a cap that had extensions,
4368+        # we want to remember them so we can tell MutableFileVersions
4369+        # about them.
4370+        extensions = self._uri.get_extension_params()
4371+        if extensions:
4372+            extensions = map(int, extensions)
4373+            suspected_k, suspected_segsize = extensions
4374+            self._downloader_hints['k'] = suspected_k
4375+            self._downloader_hints['segsize'] = suspected_segsize
4376+
4377         return self
4378 
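The extension-parameter handling above can be restated as a small pure function (a sketch: it assumes, as the hunk does, that the cap's extension params are exactly [k, segsize] as decimal strings):

```python
def decode_downloader_hints(extension_params):
    """Turn the extension parameters carried by an MDMF cap into the
    downloader-hints dict used by MutableFileVersion (illustrative
    helper; the real code does this inline in init_from_cap)."""
    hints = {}
    if extension_params:
        suspected_k, suspected_segsize = [int(e) for e in extension_params]
        hints['k'] = suspected_k
        hints['segsize'] = suspected_segsize
    return hints
```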
4379hunk ./src/allmydata/mutable/filenode.py 134
4380-    def create_with_keys(self, (pubkey, privkey), contents):
4381+    def create_with_keys(self, (pubkey, privkey), contents,
4382+                         version=SDMF_VERSION):
4383         """Call this to create a brand-new mutable file. It will create the
4384         shares, find homes for them, and upload the initial contents (created
4385         with the same rules as IClient.create_mutable_file() ). Returns a
4386hunk ./src/allmydata/mutable/filenode.py 148
4387         self._writekey = hashutil.ssk_writekey_hash(privkey_s)
4388         self._encprivkey = self._encrypt_privkey(self._writekey, privkey_s)
4389         self._fingerprint = hashutil.ssk_pubkey_fingerprint_hash(pubkey_s)
4390-        self._uri = WriteableSSKFileURI(self._writekey, self._fingerprint)
4391+        if version == MDMF_VERSION:
4392+            self._uri = WritableMDMFFileURI(self._writekey, self._fingerprint)
4393+            self._protocol_version = version
4394+        elif version == SDMF_VERSION:
4395+            self._uri = WriteableSSKFileURI(self._writekey, self._fingerprint)
4396+            self._protocol_version = version
4397         self._readkey = self._uri.readkey
4398         self._storage_index = self._uri.storage_index
4399         initial_contents = self._get_initial_contents(contents)
4400hunk ./src/allmydata/mutable/filenode.py 160
4401         return self._upload(initial_contents, None)
4402 
4403     def _get_initial_contents(self, contents):
4404+        if contents is None:
4405+            return MutableData("")
4406+
4407         if isinstance(contents, str):
4408hunk ./src/allmydata/mutable/filenode.py 164
4409+            return MutableData(contents)
4410+
4411+        if IMutableUploadable.providedBy(contents):
4412             return contents
4413hunk ./src/allmydata/mutable/filenode.py 168
4414-        if contents is None:
4415-            return ""
4416+
4417         assert callable(contents), "%s should be callable, not %s" % \
4418                (contents, type(contents))
4419         return contents(self)
4420hunk ./src/allmydata/mutable/filenode.py 238
4421 
4422     def get_size(self):
4423         return self._most_recent_size
4424+
4425     def get_current_size(self):
4426         d = self.get_size_of_best_version()
4427         d.addCallback(self._stash_size)
4428hunk ./src/allmydata/mutable/filenode.py 243
4429         return d
4430+
4431     def _stash_size(self, size):
4432         self._most_recent_size = size
4433         return size
4434hunk ./src/allmydata/mutable/filenode.py 302
4435             return cmp(self.__class__, them.__class__)
4436         return cmp(self._uri, them._uri)
4437 
4438-    def _do_serialized(self, cb, *args, **kwargs):
4439-        # note: to avoid deadlock, this callable is *not* allowed to invoke
4440-        # other serialized methods within this (or any other)
4441-        # MutableFileNode. The callable should be a bound method of this same
4442-        # MFN instance.
4443-        d = defer.Deferred()
4444-        self._serializer.addCallback(lambda ignore: cb(*args, **kwargs))
4445-        # we need to put off d.callback until this Deferred is finished being
4446-        # processed. Otherwise the caller's subsequent activities (like,
4447-        # doing other things with this node) can cause reentrancy problems in
4448-        # the Deferred code itself
4449-        self._serializer.addBoth(lambda res: eventually(d.callback, res))
4450-        # add a log.err just in case something really weird happens, because
4451-        # self._serializer stays around forever, therefore we won't see the
4452-        # usual Unhandled Error in Deferred that would give us a hint.
4453-        self._serializer.addErrback(log.err)
4454-        return d
4455 
4456     #################################
4457     # ICheckable
4458hunk ./src/allmydata/mutable/filenode.py 327
4459 
4460 
4461     #################################
4462-    # IMutableFileNode
4463+    # IFileNode
4464+
4465+    def get_best_readable_version(self):
4466+        """
4467+        I return a Deferred that fires with a MutableFileVersion
4468+        representing the best readable version of the file that I
4469+        represent
4470+        """
4471+        return self.get_readable_version()
4472+
4473+
4474+    def get_readable_version(self, servermap=None, version=None):
4475+        """
4476+        I return a Deferred that fires with a MutableFileVersion for my
4477+        version argument, if there is a recoverable file of that version
4478+        on the grid. If there is no recoverable version, I fire with an
4479+        UnrecoverableFileError.
4480+
4481+        If a servermap is provided, I look in there for the requested
4482+        version. If no servermap is provided, I create and update a new
4483+        one.
4484+
4485+        If no version is provided, then I return a MutableFileVersion
4486+        representing the best recoverable version of the file.
4487+        """
4488+        d = self._get_version_from_servermap(MODE_READ, servermap, version)
4489+        def _build_version((servermap, their_version)):
4490+            assert their_version in servermap.recoverable_versions()
4491+            assert their_version in servermap.make_versionmap()
4492+
4493+            mfv = MutableFileVersion(self,
4494+                                     servermap,
4495+                                     their_version,
4496+                                     self._storage_index,
4497+                                     self._storage_broker,
4498+                                     self._readkey,
4499+                                     history=self._history)
4500+            assert mfv.is_readonly()
4501+            mfv.set_downloader_hints(self._downloader_hints)
4502+            # our caller can use this to download the contents of the
4503+            # mutable file.
4504+            return mfv
4505+        return d.addCallback(_build_version)
4506+
4507+
4508+    def _get_version_from_servermap(self,
4509+                                    mode,
4510+                                    servermap=None,
4511+                                    version=None):
4512+        """
4513+        I return a Deferred that fires with (servermap, version).
4514+
4515+        This function performs validation and a servermap update. If it
4516+        returns (servermap, version), the caller can assume that:
4517+            - servermap was last updated in mode.
4518+            - version is recoverable, and corresponds to the servermap.
4519+
4520+        If version and servermap are provided to me, I will validate
4521+        that version exists in the servermap, and that the servermap was
4522+        updated correctly.
4523+
4524+        If version is not provided, but servermap is, I will validate
4525+        the servermap and return the best recoverable version that I can
4526+        find in the servermap.
4527+
4528+        If the version is provided but the servermap isn't, I will
4529+        obtain a servermap that has been updated in the correct mode and
4530+        validate that version is found and recoverable.
4531+
4532+        If neither servermap nor version are provided, I will obtain a
4533+        servermap updated in the correct mode, and return the best
4534+        recoverable version that I can find in there.
4535+        """
4536+        # XXX: wording ^^^^
4537+        if servermap and servermap.last_update_mode == mode:
4538+            d = defer.succeed(servermap)
4539+        else:
4540+            d = self._get_servermap(mode)
4541+
4542+        def _get_version(servermap, v):
4543+            if v and v not in servermap.recoverable_versions():
4544+                v = None
4545+            elif not v:
4546+                v = servermap.best_recoverable_version()
4547+            if not v:
4548+                raise UnrecoverableFileError("no recoverable versions")
4549+
4550+            return (servermap, v)
4551+        return d.addCallback(_get_version, version)
4552+
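The version-selection rule in _get_version_from_servermap's inner _get_version can be isolated as a pure function for illustration (a sketch: versions are modelled as plain integers and "best" as max(), whereas the real servermap uses richer version tuples and best_recoverable_version()):

```python
def choose_version(recoverable_versions, requested=None):
    """Sketch of the selection rule: a requested version must be
    recoverable (an unrecoverable request is not silently replaced);
    with no request, take the best recoverable version; with nothing
    usable, raise."""
    v = requested
    if v and v not in recoverable_versions:
        # Requested version exists but is not recoverable: fail below
        # rather than falling back to a different version.
        v = None
    elif not v:
        v = max(recoverable_versions) if recoverable_versions else None
    if not v:
        raise ValueError("no recoverable versions")
    return v
```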
4553 
4554     def download_best_version(self):
4555hunk ./src/allmydata/mutable/filenode.py 419
4556+        """
4557+        I return a Deferred that fires with the contents of the best
4558+        version of this mutable file.
4559+        """
4560         return self._do_serialized(self._download_best_version)
4561hunk ./src/allmydata/mutable/filenode.py 424
4562+
4563+
4564     def _download_best_version(self):
4565hunk ./src/allmydata/mutable/filenode.py 427
4566-        servermap = ServerMap()
4567-        d = self._try_once_to_download_best_version(servermap, MODE_READ)
4568-        def _maybe_retry(f):
4569-            f.trap(NotEnoughSharesError)
4570-            # the download is worth retrying once. Make sure to use the
4571-            # old servermap, since it is what remembers the bad shares,
4572-            # but use MODE_WRITE to make it look for even more shares.
4573-            # TODO: consider allowing this to retry multiple times.. this
4574-            # approach will let us tolerate about 8 bad shares, I think.
4575-            return self._try_once_to_download_best_version(servermap,
4576-                                                           MODE_WRITE)
4577+        """
4578+        I am the serialized sibling of download_best_version.
4579+        """
4580+        d = self.get_best_readable_version()
4581+        d.addCallback(self._record_size)
4582+        d.addCallback(lambda version: version.download_to_data())
4583+
4584+        # It is possible that the download will fail because there
4585+        # aren't enough shares to be had. If so, we will try again after
4586+        # updating the servermap in MODE_WRITE, which may find more
4587+        # shares than updating in MODE_READ, as we just did. We can do
4588+        # this by getting the best mutable version and downloading from
4589+        # that -- the best mutable version will be a MutableFileVersion
4590+        # with a servermap that was last updated in MODE_WRITE, as we
4591+        # want. If this fails, then we give up.
4592+        def _maybe_retry(failure):
4593+            failure.trap(NotEnoughSharesError)
4594+
4595+            d = self.get_best_mutable_version()
4596+            d.addCallback(self._record_size)
4597+            d.addCallback(lambda version: version.download_to_data())
4598+            return d
4599+
4600         d.addErrback(_maybe_retry)
4601         return d
4602hunk ./src/allmydata/mutable/filenode.py 452
4603-    def _try_once_to_download_best_version(self, servermap, mode):
4604-        d = self._update_servermap(servermap, mode)
4605-        d.addCallback(self._once_updated_download_best_version, servermap)
4606-        return d
4607-    def _once_updated_download_best_version(self, ignored, servermap):
4608-        goal = servermap.best_recoverable_version()
4609-        if not goal:
4610-            raise UnrecoverableFileError("no recoverable versions")
4611-        return self._try_once_to_download_version(servermap, goal)
4612+
4613+
4614+    def _record_size(self, mfv):
4615+        """
4616+        I record the size of a mutable file version.
4617+        """
4618+        self._most_recent_size = mfv.get_size()
4619+        return mfv
4620+
4621 
4622     def get_size_of_best_version(self):
4623hunk ./src/allmydata/mutable/filenode.py 463
4624-        d = self.get_servermap(MODE_READ)
4625-        def _got_servermap(smap):
4626-            ver = smap.best_recoverable_version()
4627-            if not ver:
4628-                raise UnrecoverableFileError("no recoverable version")
4629-            return smap.size_of_version(ver)
4630-        d.addCallback(_got_servermap)
4631-        return d
4632+        """
4633+        I return the size of the best version of this mutable file.
4634+
4635+        This is equivalent to calling get_size() on the result of
4636+        get_best_readable_version().
4637+        """
4638+        d = self.get_best_readable_version()
4639+        return d.addCallback(lambda mfv: mfv.get_size())
4640+
4641+
4642+    #################################
4643+    # IMutableFileNode
4644+
4645+    def get_best_mutable_version(self, servermap=None):
4646+        """
4647+        I return a Deferred that fires with a MutableFileVersion
4648+        representing the best readable version of the file that I
4649+        represent. I am like get_best_readable_version, except that I
4650+        will try to make a writable version if I can.
4651+        """
4652+        return self.get_mutable_version(servermap=servermap)
4653+
4654+
4655+    def get_mutable_version(self, servermap=None, version=None):
4656+        """
4657+        I return a version of this mutable file, as a Deferred that
4658+        fires with a MutableFileVersion.
4659+
4660+        If version is provided, the Deferred will fire with a
4661+        MutableFileVersion initialized with that version. Otherwise, it
4662+        will fire with the best version that I can recover.
4663+
4664+        If servermap is provided, I will use that to find versions
4665+        instead of performing my own servermap update.
4666+        """
4667+        if self.is_readonly():
4668+            return self.get_readable_version(servermap=servermap,
4669+                                             version=version)
4670+
4671+        # get_mutable_version => write intent, so we require that the
4672+        # servermap is updated in MODE_WRITE
4673+        d = self._get_version_from_servermap(MODE_WRITE, servermap, version)
4674+        def _build_version((servermap, smap_version)):
4675+            # these should have been set by the servermap update.
4676+            assert self._secret_holder
4677+            assert self._writekey
4678+
4679+            mfv = MutableFileVersion(self,
4680+                                     servermap,
4681+                                     smap_version,
4682+                                     self._storage_index,
4683+                                     self._storage_broker,
4684+                                     self._readkey,
4685+                                     self._writekey,
4686+                                     self._secret_holder,
4687+                                     history=self._history)
4688+            assert not mfv.is_readonly()
4689+            mfv.set_downloader_hints(self._downloader_hints)
4690+            return mfv
4691+
4692+        return d.addCallback(_build_version)
4693 
4694hunk ./src/allmydata/mutable/filenode.py 525
4695+
4696+    # XXX: I'm uncomfortable with the difference between upload and
4697+    #      overwrite, which, FWICT, is basically that you don't have to
4698+    #      do a servermap update before you overwrite. We split them up
4699+    #      that way anyway, so I guess there's no real difficulty in
4700+    #      offering both ways to callers, but it also makes the
4701+    #      public-facing API cluttery, and makes it hard to discern the
4702+    #      right way of doing things.
4703+
4704+    # In general, we leave it to callers to ensure that they aren't
4705+    # going to cause UncoordinatedWriteErrors when working with
4706+    # MutableFileVersions. We know that the next three operations
4707+    # (upload, overwrite, and modify) will all operate on the same
4708+    # version, so we allow only one of them to run at a time, and
4709+    # serialize them to enforce that; as the caller in this
4710+    # situation, that is our job.
4711     def overwrite(self, new_contents):
4712hunk ./src/allmydata/mutable/filenode.py 542
4713+        """
4714+        I overwrite the contents of the best recoverable version of this
4715+        mutable file with new_contents. This is equivalent to calling
4716+        overwrite on the result of get_best_mutable_version with
4717+        new_contents as an argument. I return a Deferred that eventually
4718+        fires with the results of my replacement process.
4719+        """
4720+        # TODO: Update downloader hints.
4721         return self._do_serialized(self._overwrite, new_contents)
4722hunk ./src/allmydata/mutable/filenode.py 551
4723+
4724+
4725     def _overwrite(self, new_contents):
4726hunk ./src/allmydata/mutable/filenode.py 554
4727+        """
4728+        I am the serialized sibling of overwrite.
4729+        """
4730+        d = self.get_best_mutable_version()
4731+        d.addCallback(lambda mfv: mfv.overwrite(new_contents))
4732+        d.addCallback(self._did_upload, new_contents.get_size())
4733+        return d
4734+
4735+
4736+    def upload(self, new_contents, servermap):
4737+        """
4738+        I overwrite the contents of the best recoverable version of this
4739+        mutable file with new_contents, using servermap instead of
4740+        creating/updating our own servermap. I return a Deferred that
4741+        fires with the results of my upload.
4742+        """
4743+        # TODO: Update downloader hints
4744+        return self._do_serialized(self._upload, new_contents, servermap)
4745+
4746+
4747+    def modify(self, modifier, backoffer=None):
4748+        """
4749+        I modify the contents of the best recoverable version of this
4750+        mutable file with the modifier. This is equivalent to calling
4751+        modify on the result of get_best_mutable_version. I return a
4752+        Deferred that eventually fires with an UploadResults instance
4753+        describing this process.
4754+        """
4755+        # TODO: Update downloader hints.
4756+        return self._do_serialized(self._modify, modifier, backoffer)
4757+
4758+
4759+    def _modify(self, modifier, backoffer):
4760+        """
4761+        I am the serialized sibling of modify.
4762+        """
4763+        d = self.get_best_mutable_version()
4764+        d.addCallback(lambda mfv: mfv.modify(modifier, backoffer))
4765+        return d
4766+
4767+
4768+    def download_version(self, servermap, version, fetch_privkey=False):
4769+        """
4770+        Download the specified version of this mutable file. I return a
4771+        Deferred that fires with the contents of the specified version
4772+        as a bytestring, or errbacks if the file is not recoverable.
4773+        """
4774+        d = self.get_readable_version(servermap, version)
4775+        return d.addCallback(lambda mfv: mfv.download_to_data(fetch_privkey))
4776+
4777+
4778+    def get_servermap(self, mode):
4779+        """
4780+        I return a servermap that has been updated in mode.
4781+
4782+        mode should be one of MODE_READ, MODE_WRITE, MODE_CHECK or
4783+        MODE_ANYTHING. See servermap.py for more on what these mean.
4784+        """
4785+        return self._do_serialized(self._get_servermap, mode)
4786+
4787+
4788+    def _get_servermap(self, mode):
4789+        """
4790+        I am a serialized twin to get_servermap.
4791+        """
4792         servermap = ServerMap()
4793hunk ./src/allmydata/mutable/filenode.py 620
4794-        d = self._update_servermap(servermap, mode=MODE_WRITE)
4795-        d.addCallback(lambda ignored: self._upload(new_contents, servermap))
4796+        d = self._update_servermap(servermap, mode)
4797+        # The servermap will tell us the most recent size of the
4798+        # file, so we may as well record it so that callers can
4799+        # learn our size without another query.
4800+        if not self._most_recent_size:
4801+            d.addCallback(self._get_size_from_servermap)
4802+        return d
4803+
4804+
4805+    def _get_size_from_servermap(self, servermap):
4806+        """
4807+        I extract the size of the best version of this file and record
4808+        it in self._most_recent_size. I return the servermap that I was
4809+        given.
4810+        """
4811+        if servermap.recoverable_versions():
4812+            v = servermap.best_recoverable_version()
4813+            size = v[4] # verinfo[4] == size
4814+            self._most_recent_size = size
4815+        return servermap
4816+
4817+
4818+    def _update_servermap(self, servermap, mode):
4819+        u = ServermapUpdater(self, self._storage_broker, Monitor(), servermap,
4820+                             mode)
4821+        if self._history:
4822+            self._history.notify_mapupdate(u.get_status())
4823+        return u.update()
4824+
4825+
4826+    #def set_version(self, version):
4827+        # I can be set in two ways:
4828+        #  1. When the node is created.
4829+        #  2. (for an existing share) when the Servermap is updated
4830+        #     before I am read.
4831+    #    assert version in (MDMF_VERSION, SDMF_VERSION)
4832+    #    self._protocol_version = version
4833+
4834+
4835+    def get_version(self):
4836+        return self._protocol_version
4837+
4838+
4839+    def _do_serialized(self, cb, *args, **kwargs):
4840+        # note: to avoid deadlock, this callable is *not* allowed to invoke
4841+        # other serialized methods within this (or any other)
4842+        # MutableFileNode. The callable should be a bound method of this same
4843+        # MFN instance.
4844+        d = defer.Deferred()
4845+        self._serializer.addCallback(lambda ignore: cb(*args, **kwargs))
4846+        # we need to put off d.callback until this Deferred is finished being
4847+        # processed. Otherwise the caller's subsequent activities (like,
4848+        # doing other things with this node) can cause reentrancy problems in
4849+        # the Deferred code itself
4850+        self._serializer.addBoth(lambda res: eventually(d.callback, res))
4851+        # add a log.err just in case something really weird happens, because
4852+        # self._serializer stays around forever, therefore we won't see the
4853+        # usual Unhandled Error in Deferred that would give us a hint.
4854+        self._serializer.addErrback(log.err)
4855+        return d
4856+
4857+
4858+    def _upload(self, new_contents, servermap):
4859+        """
4860+        A MutableFileNode still has to have some way of getting
4861+        published initially, which is what I am here for. After that,
4862+        all publishing, updating, modifying and so on happens through
4863+        MutableFileVersions.
4864+        """
4865+        assert self._pubkey, "update_servermap must be called before publish"
4866+
4867+        # Define IPublishInvoker with a set_downloader_hints method?
4868+        # Then have the publisher call that method when it's done publishing?
4869+        p = Publish(self, self._storage_broker, servermap)
4870+        if self._history:
4871+            self._history.notify_publish(p.get_status(),
4872+                                         new_contents.get_size())
4873+        d = p.publish(new_contents)
4874+        d.addCallback(self._did_upload, new_contents.get_size())
4875         return d
4876 
4877 
4878hunk ./src/allmydata/mutable/filenode.py 702
4879+    def set_downloader_hints(self, hints):
4880+        self._downloader_hints = hints
4881+        extensions = hints.values()
4882+        self._uri.set_extension_params(extensions)
4883+
4884+
4885+    def _did_upload(self, res, size):
4886+        self._most_recent_size = size
4887+        return res
4888+
4889+
4890+class MutableFileVersion:
4891+    """
4892+    I represent a specific version (most likely the best version) of a
4893+    mutable file.
4894+
4895+    Since I implement IReadable, instances which hold a
4896+    reference to an instance of me are guaranteed the ability (absent
4897+    connection difficulties or unrecoverable versions) to read the file
4898+    that I represent. Depending on whether I was initialized with a
4899+    write capability or not, I may also provide callers the ability to
4900+    overwrite or modify the contents of the mutable file that I
4901+    reference.
4902+    """
4903+    implements(IMutableFileVersion, IWritable)
4904+
4905+    def __init__(self,
4906+                 node,
4907+                 servermap,
4908+                 version,
4909+                 storage_index,
4910+                 storage_broker,
4911+                 readcap,
4912+                 writekey=None,
4913+                 write_secrets=None,
4914+                 history=None):
4915+
4916+        self._node = node
4917+        self._servermap = servermap
4918+        self._version = version
4919+        self._storage_index = storage_index
4920+        self._write_secrets = write_secrets
4921+        self._history = history
4922+        self._storage_broker = storage_broker
4923+
4924+        #assert isinstance(readcap, IURI)
4925+        self._readcap = readcap
4926+
4927+        self._writekey = writekey
4928+        self._serializer = defer.succeed(None)
4929+
4930+
4931+    def get_sequence_number(self):
4932+        """
4933+        Get the sequence number of the mutable version that I represent.
4934+        """
4935+        return self._version[0] # verinfo[0] == the sequence number
4936+
4937+
4938+    # TODO: Terminology?
4939+    def get_writekey(self):
4940+        """
4941+        I return a writekey or None if I don't have a writekey.
4942+        """
4943+        return self._writekey
4944+
4945+
4946+    def set_downloader_hints(self, hints):
4947+        """
4948+        I set the downloader hints.
4949+        """
4950+        assert isinstance(hints, dict)
4951+
4952+        self._downloader_hints = hints
4953+
4954+
4955+    def get_downloader_hints(self):
4956+        """
4957+        I return the downloader hints.
4958+        """
4959+        return self._downloader_hints
4960+
4961+
4962+    def overwrite(self, new_contents):
4963+        """
4964+        I overwrite the contents of this mutable file version with the
4965+        data in new_contents.
4966+        """
4967+        assert not self.is_readonly()
4968+
4969+        return self._do_serialized(self._overwrite, new_contents)
4970+
4971+
4972+    def _overwrite(self, new_contents):
4973+        assert IMutableUploadable.providedBy(new_contents)
4974+        assert self._servermap.last_update_mode == MODE_WRITE
4975+
4976+        return self._upload(new_contents)
4977+
4978+
4979     def modify(self, modifier, backoffer=None):
4980         """I use a modifier callback to apply a change to the mutable file.
4981         I implement the following pseudocode::
4982hunk ./src/allmydata/mutable/filenode.py 842
4983         backoffer should not invoke any methods on this MutableFileNode
4984         instance, and it needs to be highly conscious of deadlock issues.
4985         """
4986+        assert not self.is_readonly()
4987+
4988         return self._do_serialized(self._modify, modifier, backoffer)
4989hunk ./src/allmydata/mutable/filenode.py 845
4990+
4991+
4992     def _modify(self, modifier, backoffer):
4993hunk ./src/allmydata/mutable/filenode.py 848
4994-        servermap = ServerMap()
4995         if backoffer is None:
4996             backoffer = BackoffAgent().delay
4997hunk ./src/allmydata/mutable/filenode.py 850
4998-        return self._modify_and_retry(servermap, modifier, backoffer, True)
4999-    def _modify_and_retry(self, servermap, modifier, backoffer, first_time):
5000-        d = self._modify_once(servermap, modifier, first_time)
5001+        return self._modify_and_retry(modifier, backoffer, True)
5002+
5003+
5004+    def _modify_and_retry(self, modifier, backoffer, first_time):
5005+        """
5006+        I try to apply modifier to the contents of this version of the
5007+        mutable file. If I succeed, I return an UploadResults instance
5008+        describing my success. If I fail, I try again after waiting for
5009+        a little bit.
5010+        """
5011+        log.msg("doing modify")
5012+        if first_time:
5013+            d = self._update_servermap()
5014+        else:
5015+            # We ran into trouble; do MODE_CHECK so we're a little more
5016+            # careful on subsequent tries.
5017+            d = self._update_servermap(mode=MODE_CHECK)
5018+
5019+        d.addCallback(lambda ignored:
5020+            self._modify_once(modifier, first_time))
5021         def _retry(f):
5022             f.trap(UncoordinatedWriteError)
5023hunk ./src/allmydata/mutable/filenode.py 872
5024+            # Uh oh, it broke. We're allowed to trust the servermap for our
5025+            # first try, but after that we need to update it. It's
5026+            # possible that we've failed due to a race with another
5027+            # uploader, and if the race is to converge correctly, we
5028+            # need to know about that upload.
5029             d2 = defer.maybeDeferred(backoffer, self, f)
5030             d2.addCallback(lambda ignored:
5031hunk ./src/allmydata/mutable/filenode.py 879
5032-                           self._modify_and_retry(servermap, modifier,
5033+                           self._modify_and_retry(modifier,
5034                                                   backoffer, False))
5035             return d2
5036         d.addErrback(_retry)
5037hunk ./src/allmydata/mutable/filenode.py 884
5038         return d
5039-    def _modify_once(self, servermap, modifier, first_time):
5040-        d = self._update_servermap(servermap, MODE_WRITE)
5041-        d.addCallback(self._once_updated_download_best_version, servermap)
5042+
5043+
5044+    def _modify_once(self, modifier, first_time):
5045+        """
5046+        I attempt to apply a modifier to the contents of the mutable
5047+        file.
5048+        """
5049+        assert self._servermap.last_update_mode != MODE_READ
5050+
5051+        # download_to_data is serialized, so we have to call this to
5052+        # avoid deadlock.
5053+        d = self._try_to_download_data()
5054         def _apply(old_contents):
5055hunk ./src/allmydata/mutable/filenode.py 897
5056-            new_contents = modifier(old_contents, servermap, first_time)
5057+            new_contents = modifier(old_contents, self._servermap, first_time)
5058+            precondition((isinstance(new_contents, str) or
5059+                          new_contents is None),
5060+                         "Modifier function must return a string "
5061+                         "or None")
5062+
5063             if new_contents is None or new_contents == old_contents:
5064hunk ./src/allmydata/mutable/filenode.py 904
5065+                log.msg("no changes")
5066                 # no changes need to be made
5067                 if first_time:
5068                     return
5069hunk ./src/allmydata/mutable/filenode.py 912
5070                 # recovery when it observes UCWE, we need to do a second
5071                 # publish. See #551 for details. We'll basically loop until
5072                 # we managed an uncontested publish.
5073-                new_contents = old_contents
5074-            precondition(isinstance(new_contents, str),
5075-                         "Modifier function must return a string or None")
5076-            return self._upload(new_contents, servermap)
5077+                old_uploadable = MutableData(old_contents)
5078+                new_contents = old_uploadable
5079+            else:
5080+                new_contents = MutableData(new_contents)
5081+
5082+            return self._upload(new_contents)
5083         d.addCallback(_apply)
5084         return d
5085 
5086hunk ./src/allmydata/mutable/filenode.py 921
5087-    def get_servermap(self, mode):
5088-        return self._do_serialized(self._get_servermap, mode)
5089-    def _get_servermap(self, mode):
5090-        servermap = ServerMap()
5091-        return self._update_servermap(servermap, mode)
5092-    def _update_servermap(self, servermap, mode):
5093-        u = ServermapUpdater(self, self._storage_broker, Monitor(), servermap,
5094-                             mode)
5095-        if self._history:
5096-            self._history.notify_mapupdate(u.get_status())
5097-        return u.update()
5098 
5099hunk ./src/allmydata/mutable/filenode.py 922
5100-    def download_version(self, servermap, version, fetch_privkey=False):
5101-        return self._do_serialized(self._try_once_to_download_version,
5102-                                   servermap, version, fetch_privkey)
5103-    def _try_once_to_download_version(self, servermap, version,
5104-                                      fetch_privkey=False):
5105-        r = Retrieve(self, servermap, version, fetch_privkey)
5106+    def is_readonly(self):
5107+        """
5108+        I return True if this MutableFileVersion provides no write
5109+        access to the file that it encapsulates, and False if it
5110+        provides the ability to modify the file.
5111+        """
5112+        return self._writekey is None
5113+
5114+
5115+    def is_mutable(self):
5116+        """
5117+        I return True, since mutable files are always mutable by
5118+        somebody.
5119+        """
5120+        return True
5121+
5122+
5123+    def get_storage_index(self):
5124+        """
5125+        I return the storage index of the reference that I encapsulate.
5126+        """
5127+        return self._storage_index
5128+
5129+
5130+    def get_size(self):
5131+        """
5132+        I return the length, in bytes, of this readable object.
5133+        """
5134+        return self._servermap.size_of_version(self._version)
5135+
5136+
5137+    def download_to_data(self, fetch_privkey=False):
5138+        """
5139+        I return a Deferred that fires with the contents of this
5140+        readable object as a byte string.
5141+        """
5143+        c = consumer.MemoryConsumer()
5144+        d = self.read(c, fetch_privkey=fetch_privkey)
5145+        d.addCallback(lambda mc: "".join(mc.chunks))
5146+        return d
5147+
5148+
5149+    def _try_to_download_data(self):
5150+        """
5151+        I am an unserialized cousin of download_to_data; I am called
5152+        from the children of modify() to download the data associated
5153+        with this mutable version.
5154+        """
5155+        c = consumer.MemoryConsumer()
5156+        # modify will almost certainly write, so we need the privkey.
5157+        d = self._read(c, fetch_privkey=True)
5158+        d.addCallback(lambda mc: "".join(mc.chunks))
5159+        return d
5160+
5161+
5162+    def read(self, consumer, offset=0, size=None, fetch_privkey=False):
5163+        """
5164+        I read a portion (possibly all) of the mutable file that I
5165+        reference into consumer.
5166+        """
5167+        return self._do_serialized(self._read, consumer, offset, size,
5168+                                   fetch_privkey)
5169+
5170+
5171+    def _read(self, consumer, offset=0, size=None, fetch_privkey=False):
5172+        """
5173+        I am the serialized companion of read.
5174+        """
5175+        r = Retrieve(self._node, self._servermap, self._version, fetch_privkey)
5176         if self._history:
5177             self._history.notify_retrieve(r.get_status())
5178hunk ./src/allmydata/mutable/filenode.py 994
5179-        d = r.download()
5180-        d.addCallback(self._downloaded_version)
5181+        d = r.download(consumer, offset, size)
5182         return d
5183hunk ./src/allmydata/mutable/filenode.py 996
5184-    def _downloaded_version(self, data):
5185-        self._most_recent_size = len(data)
5186-        return data
5187 
5188hunk ./src/allmydata/mutable/filenode.py 997
5189-    def upload(self, new_contents, servermap):
5190-        return self._do_serialized(self._upload, new_contents, servermap)
5191-    def _upload(self, new_contents, servermap):
5192-        assert self._pubkey, "update_servermap must be called before publish"
5193-        p = Publish(self, self._storage_broker, servermap)
5194+
5195+    def _do_serialized(self, cb, *args, **kwargs):
5196+        # note: to avoid deadlock, this callable is *not* allowed to invoke
5197+        # other serialized methods within this (or any other)
5198+        # MutableFileNode. The callable should be a bound method of this same
5199+        # MFN instance.
5200+        d = defer.Deferred()
5201+        self._serializer.addCallback(lambda ignore: cb(*args, **kwargs))
5202+        # we need to put off d.callback until this Deferred is finished being
5203+        # processed. Otherwise the caller's subsequent activities (like,
5204+        # doing other things with this node) can cause reentrancy problems in
5205+        # the Deferred code itself
5206+        self._serializer.addBoth(lambda res: eventually(d.callback, res))
5207+        # add a log.err just in case something really weird happens, because
5208+        # self._serializer stays around forever, therefore we won't see the
5209+        # usual Unhandled Error in Deferred that would give us a hint.
5210+        self._serializer.addErrback(log.err)
5211+        return d
5212+
5213+
5214+    def _upload(self, new_contents):
5215+        #assert self._pubkey, "update_servermap must be called before publish"
5216+        p = Publish(self._node, self._storage_broker, self._servermap)
5217         if self._history:
5218hunk ./src/allmydata/mutable/filenode.py 1021
5219-            self._history.notify_publish(p.get_status(), len(new_contents))
5220+            self._history.notify_publish(p.get_status(),
5221+                                         new_contents.get_size())
5222         d = p.publish(new_contents)
5223hunk ./src/allmydata/mutable/filenode.py 1024
5224-        d.addCallback(self._did_upload, len(new_contents))
5225+        d.addCallback(self._did_upload, new_contents.get_size())
5226         return d
5227hunk ./src/allmydata/mutable/filenode.py 1026
5228+
5229+
5230     def _did_upload(self, res, size):
5231         self._most_recent_size = size
5232         return res
5233hunk ./src/allmydata/mutable/filenode.py 1031
5234+
5235+    def update(self, data, offset):
5236+        """
4237+        Do an update of this mutable file version by inserting data
4238+        at offset within the file. If offset is the current end of the
4239+        file, this is an append operation. I return a Deferred that
4240+        fires with the results of the update once it has completed.
5241+
5242+        In cases where update does not append any data, or where it does
5243+        not append so many blocks that the block count crosses a
5244+        power-of-two boundary, this operation will use roughly
5245+        O(data.get_size()) memory/bandwidth/CPU to perform the update.
5246+        Otherwise, it must download, re-encode, and upload the entire
5247+        file again, which will use O(filesize) resources.
5248+        """
5249+        return self._do_serialized(self._update, data, offset)
5250+
5251+
5252+    def _update(self, data, offset):
5253+        """
5254+        I update the mutable file version represented by this particular
5255+        IMutableVersion by inserting the data in data at the offset
5256+        offset. I return a Deferred that fires when this has been
5257+        completed.
5258+        """
5259+        new_size = data.get_size() + offset
5260+        old_size = self.get_size()
5261+        segment_size = self._version[3]
5262+        num_old_segments = mathutil.div_ceil(old_size,
5263+                                             segment_size)
5264+        num_new_segments = mathutil.div_ceil(new_size,
5265+                                             segment_size)
5266+        log.msg("got %d old segments, %d new segments" % \
5267+                        (num_old_segments, num_new_segments))
5268+
5269+        # We do a whole file re-encode if the file is an SDMF file.
5270+        if self._version[2]: # version[2] == SDMF salt, which MDMF lacks
5271+            log.msg("doing re-encode instead of in-place update")
5272+            return self._do_modify_update(data, offset)
5273+
5274+        # Otherwise, we can replace just the parts that are changing.
5275+        log.msg("updating in place")
5276+        d = self._do_update_update(data, offset)
5277+        d.addCallback(self._decode_and_decrypt_segments, data, offset)
5278+        d.addCallback(self._build_uploadable_and_finish, data, offset)
5279+        return d
5280+
5281+
5282+    def _do_modify_update(self, data, offset):
5283+        """
5284+        I perform a file update by modifying the contents of the file
5285+        after downloading it, then reuploading it. I am less efficient
5286+        than _do_update_update, but am necessary for certain updates.
5287+        """
5288+        def m(old, servermap, first_time):
5289+            start = offset
5290+            rest = offset + data.get_size()
5291+            new = old[:start]
5292+            new += "".join(data.read(data.get_size()))
5293+            new += old[rest:]
5294+            return new
5295+        return self._modify(m, None)
5296+
5297+
5298+    def _do_update_update(self, data, offset):
5299+        """
5300+        I start the Servermap update that gets us the data we need to
5301+        continue the update process. I return a Deferred that fires when
5302+        the servermap update is done.
5303+        """
5304+        assert IMutableUploadable.providedBy(data)
5305+        assert self.is_mutable()
5306+        # offset == self.get_size() is valid and means that we are
5307+        # appending data to the file.
5308+        assert offset <= self.get_size()
5309+
5310+        segsize = self._version[3]
5311+        # We'll need the segment that the data starts in, regardless of
5312+        # what we'll do later.
5313+        start_segment = offset // segsize
5314+
5315+        # We only need the end segment if the data we write does not
5316+        # extend beyond the current end-of-file.
5317+        end_segment = start_segment
5318+        if offset + data.get_size() < self.get_size():
5319+            end_data = offset + data.get_size()
5320+            end_segment = end_data // segsize
5321+
5322+        self._start_segment = start_segment
5323+        self._end_segment = end_segment
5324+
5325+        # Now ask for the servermap to be updated in MODE_WRITE with
5326+        # this update range.
5327+        return self._update_servermap(update_range=(start_segment,
5328+                                                    end_segment))
5329+
5330+
5331+    def _decode_and_decrypt_segments(self, ignored, data, offset):
5332+        """
5333+        After the servermap update, I take the encrypted and encoded
5334+        data that the servermap fetched while doing its update and
5335+        transform it into decoded-and-decrypted plaintext that can be
5336+        used by the new uploadable. I return a Deferred that fires with
5337+        the segments.
5338+        """
5339+        r = Retrieve(self._node, self._servermap, self._version)
5340+        # decode: takes in our blocks and salts from the servermap,
5341+        # returns a Deferred that fires with the corresponding plaintext
5342+        # segments. Does not download -- simply takes advantage of
5343+        # existing infrastructure within the Retrieve class to avoid
5344+        # duplicating code.
5345+        sm = self._servermap
5346+        # XXX: If the methods in the servermap don't work as
5347+        # abstractions, you should rewrite them instead of going around
5348+        # them.
5349+        update_data = sm.update_data
5350+        start_segments = {} # shnum -> start segment
5351+        end_segments = {} # shnum -> end segment
5352+        blockhashes = {} # shnum -> blockhash tree
5353+        for (shnum, data) in update_data.iteritems():
5354+            data = [d[1] for d in data if d[0] == self._version]
5355+
5356+            # Every entry in our list should now be the update data
5357+            # for share shnum under this particular version of the
5358+            # mutable file, so all of the entries should be identical.
5359+            datum = data[0]
5360+            assert filter(lambda x: x != datum, data) == []
5361+
5362+            blockhashes[shnum] = datum[0]
5363+            start_segments[shnum] = datum[1]
5364+            end_segments[shnum] = datum[2]
5365+
5366+        d1 = r.decode(start_segments, self._start_segment)
5367+        d2 = r.decode(end_segments, self._end_segment)
5368+        d3 = defer.succeed(blockhashes)
5369+        return deferredutil.gatherResults([d1, d2, d3])
5370+
5371+
5372+    def _build_uploadable_and_finish(self, segments_and_bht, data, offset):
5373+        """
5374+        After the process has the plaintext segments, I build the
5375+        TransformingUploadable that the publisher will eventually
5376+        re-upload to the grid. I then invoke the publisher with that
5377+        uploadable, and return a Deferred when the publish operation has
5378+        completed without issue.
5379+        """
5380+        u = TransformingUploadable(data, offset,
5381+                                   self._version[3],
5382+                                   segments_and_bht[0],
5383+                                   segments_and_bht[1])
5384+        p = Publish(self._node, self._storage_broker, self._servermap)
5385+        return p.update(u, offset, segments_and_bht[2], self._version)
5386+
5387+
5388+    def _update_servermap(self, mode=MODE_WRITE, update_range=None):
5389+        """
5390+        I update the servermap. I return a Deferred that fires when the
5391+        servermap update is done.
5392+        """
5393+        if update_range:
5394+            u = ServermapUpdater(self._node, self._storage_broker, Monitor(),
5395+                                 self._servermap,
5396+                                 mode=mode,
5397+                                 update_range=update_range)
5398+        else:
5399+            u = ServermapUpdater(self._node, self._storage_broker, Monitor(),
5400+                                 self._servermap,
5401+                                 mode=mode)
5402+        return u.update()
5403}
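The filtering loop near the top of this hunk deserves a closer look: for each share, it discards update-data entries recorded for other versions, checks that the survivors agree, and splits the agreed-upon datum into its three parts. A standalone sketch of that logic (hypothetical names, not Tahoe-LAFS code; `update_data` maps shnum to a list of `(version, (blockhashes, start_seg, end_seg))` tuples):

```python
def collect_update_data(update_data, version):
    """Filter per-share update data down to one version, as the
    Retrieve/update code above does, and split it into three maps."""
    blockhashes = {}     # shnum -> blockhash tree
    start_segments = {}  # shnum -> start segment
    end_segments = {}    # shnum -> end segment
    for shnum, entries in update_data.items():
        data = [d[1] for d in entries if d[0] == version]
        datum = data[0]
        # every surviving entry describes the same version, so they must agree
        assert all(x == datum for x in data)
        blockhashes[shnum] = datum[0]
        start_segments[shnum] = datum[1]
        end_segments[shnum] = datum[2]
    return blockhashes, start_segments, end_segments
```

The decoded start and end segments then bracket the caller's new data when the TransformingUploadable is built.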
5404[client: teach client how to create and work with MDMF files
5405Kevan Carstensen <kevan@isnotajoke.com>**20110802014811
5406 Ignore-this: d72fbc4c2ca63f00d9ab9dc2919098ff
5407] {
5408hunk ./src/allmydata/client.py 25
5409 from allmydata.util.time_format import parse_duration, parse_date
5410 from allmydata.stats import StatsProvider
5411 from allmydata.history import History
5412-from allmydata.interfaces import IStatsProducer, RIStubClient
5413+from allmydata.interfaces import IStatsProducer, RIStubClient, \
5414+                                 SDMF_VERSION, MDMF_VERSION
5415 from allmydata.nodemaker import NodeMaker
5416 
5417 
5418hunk ./src/allmydata/client.py 357
5419                                    self.terminator,
5420                                    self.get_encoding_parameters(),
5421                                    self._key_generator)
5422+        default = self.get_config("client", "mutable.format", default="sdmf")
5423+        if default == "mdmf":
5424+            self.mutable_file_default = MDMF_VERSION
5425+        else:
5426+            self.mutable_file_default = SDMF_VERSION
5427 
5428     def get_history(self):
5429         return self.history
5430hunk ./src/allmydata/client.py 493
5431         # may get an opaque node if there were any problems.
5432         return self.nodemaker.create_from_cap(write_uri, read_uri, deep_immutable=deep_immutable, name=name)
5433 
5434-    def create_dirnode(self, initial_children={}):
5435-        d = self.nodemaker.create_new_mutable_directory(initial_children)
5436+    def create_dirnode(self, initial_children={}, version=SDMF_VERSION):
5437+        d = self.nodemaker.create_new_mutable_directory(initial_children, version=version)
5438         return d
5439 
5440     def create_immutable_dirnode(self, children, convergence=None):
5441hunk ./src/allmydata/client.py 500
5442         return self.nodemaker.create_immutable_directory(children, convergence)
5443 
5444-    def create_mutable_file(self, contents=None, keysize=None):
5445-        return self.nodemaker.create_mutable_file(contents, keysize)
5446+    def create_mutable_file(self, contents=None, keysize=None, version=None):
5447+        if not version:
5448+            version = self.mutable_file_default
5449+        return self.nodemaker.create_mutable_file(contents, keysize,
5450+                                                  version=version)
5451 
5452     def upload(self, uploadable):
5453         uploader = self.getServiceNamed("uploader")
5454}
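The client change above reads a new `[client]mutable.format` option and uses it to pick the default mutable file format, falling back to SDMF for anything other than `"mdmf"`. A minimal standalone version of that mapping (the constants here are stand-ins; the real ones live in `allmydata.interfaces`):

```python
SDMF_VERSION = 0  # stand-in constants; see allmydata.interfaces
MDMF_VERSION = 1

def default_mutable_format(config_value):
    # mirrors the client.py logic: only the exact string "mdmf"
    # selects MDMF; anything else falls back to SDMF
    if config_value == "mdmf":
        return MDMF_VERSION
    return SDMF_VERSION
```

`create_mutable_file` then uses this default whenever the caller does not pass an explicit `version`.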
5455[nodemaker: teach nodemaker about MDMF caps
5456Kevan Carstensen <kevan@isnotajoke.com>**20110802014926
5457 Ignore-this: 430c73121b6883b99626cfd652fc65c4
5458] {
5459hunk ./src/allmydata/nodemaker.py 82
5460             return self._create_immutable(cap)
5461         if isinstance(cap, uri.CHKFileVerifierURI):
5462             return self._create_immutable_verifier(cap)
5463-        if isinstance(cap, (uri.ReadonlySSKFileURI, uri.WriteableSSKFileURI)):
5464+        if isinstance(cap, (uri.ReadonlySSKFileURI, uri.WriteableSSKFileURI,
5465+                            uri.WritableMDMFFileURI, uri.ReadonlyMDMFFileURI)):
5466             return self._create_mutable(cap)
5467         if isinstance(cap, (uri.DirectoryURI,
5468                             uri.ReadonlyDirectoryURI,
5469hunk ./src/allmydata/nodemaker.py 88
5470                             uri.ImmutableDirectoryURI,
5471-                            uri.LiteralDirectoryURI)):
5472+                            uri.LiteralDirectoryURI,
5473+                            uri.MDMFDirectoryURI,
5474+                            uri.ReadonlyMDMFDirectoryURI)):
5475             filenode = self._create_from_single_cap(cap.get_filenode_cap())
5476             return self._create_dirnode(filenode)
5477         return None
5478}
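The nodemaker change is small because cap handling is driven by `isinstance()` tuples: adding the MDMF cap classes to the tuples routes them through the same mutable-file and dirnode code paths as SDMF caps. A sketch of the pattern with stand-in cap classes (the real ones are in `allmydata.uri`):

```python
# stand-in cap classes, for illustration only
class ReadonlySSKFileURI: pass
class WriteableSSKFileURI: pass
class WritableMDMFFileURI: pass
class ReadonlyMDMFFileURI: pass
class CHKFileVerifierURI: pass

MUTABLE_CAPS = (ReadonlySSKFileURI, WriteableSSKFileURI,
                WritableMDMFFileURI, ReadonlyMDMFFileURI)

def classify_cap(cap):
    # adding a new cap class to MUTABLE_CAPS is the whole change
    if isinstance(cap, MUTABLE_CAPS):
        return "mutable"
    if isinstance(cap, CHKFileVerifierURI):
        return "immutable-verifier"
    return None
```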
5479[mutable: train checker and repairer to work with MDMF mutable files
5480Kevan Carstensen <kevan@isnotajoke.com>**20110802015140
5481 Ignore-this: 8b1928925bed63708b71ab0de8d4306f
5482] {
5483hunk ./src/allmydata/mutable/checker.py 2
5484 
5485-from twisted.internet import defer
5486-from twisted.python import failure
5487-from allmydata import hashtree
5488 from allmydata.uri import from_string
5489hunk ./src/allmydata/mutable/checker.py 3
5490-from allmydata.util import hashutil, base32, idlib, log
5491+from allmydata.util import base32, idlib, log
5492 from allmydata.check_results import CheckAndRepairResults, CheckResults
5493 
5494 from allmydata.mutable.common import MODE_CHECK, CorruptShareError
5495hunk ./src/allmydata/mutable/checker.py 8
5496 from allmydata.mutable.servermap import ServerMap, ServermapUpdater
5497-from allmydata.mutable.layout import unpack_share, SIGNED_PREFIX_LENGTH
5498+from allmydata.mutable.retrieve import Retrieve # for verifying
5499 
5500 class MutableChecker:
5501 
5502hunk ./src/allmydata/mutable/checker.py 25
5503 
5504     def check(self, verify=False, add_lease=False):
5505         servermap = ServerMap()
5506+        # Updating the servermap in MODE_CHECK will stand a good chance
5507+        # of finding all of the shares, and getting a good idea of
5508+        # recoverability, etc, without verifying.
5509         u = ServermapUpdater(self._node, self._storage_broker, self._monitor,
5510                              servermap, MODE_CHECK, add_lease=add_lease)
5511         if self._history:
5512hunk ./src/allmydata/mutable/checker.py 51
5513         if num_recoverable:
5514             self.best_version = servermap.best_recoverable_version()
5515 
5516+        # The file is unhealthy and needs to be repaired if:
5517+        # - There are unrecoverable versions.
5518         if servermap.unrecoverable_versions():
5519             self.need_repair = True
5520hunk ./src/allmydata/mutable/checker.py 55
5521+        # - There isn't a recoverable version.
5522         if num_recoverable != 1:
5523             self.need_repair = True
5524hunk ./src/allmydata/mutable/checker.py 58
5525+        # - The best recoverable version is missing some shares.
5526         if self.best_version:
5527             available_shares = servermap.shares_available()
5528             (num_distinct_shares, k, N) = available_shares[self.best_version]
5529hunk ./src/allmydata/mutable/checker.py 69
5530 
5531     def _verify_all_shares(self, servermap):
5532         # read every byte of each share
5533+        #
5534+        # This logic is going to be very nearly the same as the
5535+        # downloader. I bet we could pass the downloader a flag that
5536+        # makes it do this, and piggyback onto that instead of
5537+        # duplicating a bunch of code.
5538+        #
5539+        # Like:
5540+        #  r = Retrieve(blah, blah, blah, verify=True)
5541+        #  d = r.download()
5542+        #  (wait, wait, wait, d.callback)
5543+        # 
5544+        #  Then, when it has finished, we can check the servermap (which
5545+        #  we provided to Retrieve) to figure out which shares are bad,
5546+        #  since the Retrieve process will have updated the servermap as
5547+        #  it went along.
5548+        #
5549+        #  By passing the verify=True flag to the constructor, we are
5550+        #  telling the downloader a few things.
5551+        #
5552+        #  1. It needs to download all N shares, not just K shares.
5553+        #  2. It doesn't need to decrypt or decode the shares, only
5554+        #     verify them.
5555         if not self.best_version:
5556             return
5557hunk ./src/allmydata/mutable/checker.py 93
5558-        versionmap = servermap.make_versionmap()
5559-        shares = versionmap[self.best_version]
5560-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
5561-         offsets_tuple) = self.best_version
5562-        offsets = dict(offsets_tuple)
5563-        readv = [ (0, offsets["EOF"]) ]
5564-        dl = []
5565-        for (shnum, peerid, timestamp) in shares:
5566-            ss = servermap.connections[peerid]
5567-            d = self._do_read(ss, peerid, self._storage_index, [shnum], readv)
5568-            d.addCallback(self._got_answer, peerid, servermap)
5569-            dl.append(d)
5570-        return defer.DeferredList(dl, fireOnOneErrback=True, consumeErrors=True)
5571 
5572hunk ./src/allmydata/mutable/checker.py 94
5573-    def _do_read(self, ss, peerid, storage_index, shnums, readv):
5574-        # isolate the callRemote to a separate method, so tests can subclass
5575-        # Publish and override it
5576-        d = ss.callRemote("slot_readv", storage_index, shnums, readv)
5577+        r = Retrieve(self._node, servermap, self.best_version, verify=True)
5578+        d = r.download()
5579+        d.addCallback(self._process_bad_shares)
5580         return d
5581 
5582hunk ./src/allmydata/mutable/checker.py 99
5583-    def _got_answer(self, datavs, peerid, servermap):
5584-        for shnum,datav in datavs.items():
5585-            data = datav[0]
5586-            try:
5587-                self._got_results_one_share(shnum, peerid, data)
5588-            except CorruptShareError:
5589-                f = failure.Failure()
5590-                self.need_repair = True
5591-                self.bad_shares.append( (peerid, shnum, f) )
5592-                prefix = data[:SIGNED_PREFIX_LENGTH]
5593-                servermap.mark_bad_share(peerid, shnum, prefix)
5594-                ss = servermap.connections[peerid]
5595-                self.notify_server_corruption(ss, shnum, str(f.value))
5596-
5597-    def check_prefix(self, peerid, shnum, data):
5598-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
5599-         offsets_tuple) = self.best_version
5600-        got_prefix = data[:SIGNED_PREFIX_LENGTH]
5601-        if got_prefix != prefix:
5602-            raise CorruptShareError(peerid, shnum,
5603-                                    "prefix mismatch: share changed while we were reading it")
5604-
5605-    def _got_results_one_share(self, shnum, peerid, data):
5606-        self.check_prefix(peerid, shnum, data)
5607-
5608-        # the [seqnum:signature] pieces are validated by _compare_prefix,
5609-        # which checks their signature against the pubkey known to be
5610-        # associated with this file.
5611 
5612hunk ./src/allmydata/mutable/checker.py 100
5613-        (seqnum, root_hash, IV, k, N, segsize, datalen, pubkey, signature,
5614-         share_hash_chain, block_hash_tree, share_data,
5615-         enc_privkey) = unpack_share(data)
5616-
5617-        # validate [share_hash_chain,block_hash_tree,share_data]
5618-
5619-        leaves = [hashutil.block_hash(share_data)]
5620-        t = hashtree.HashTree(leaves)
5621-        if list(t) != block_hash_tree:
5622-            raise CorruptShareError(peerid, shnum, "block hash tree failure")
5623-        share_hash_leaf = t[0]
5624-        t2 = hashtree.IncompleteHashTree(N)
5625-        # root_hash was checked by the signature
5626-        t2.set_hashes({0: root_hash})
5627-        try:
5628-            t2.set_hashes(hashes=share_hash_chain,
5629-                          leaves={shnum: share_hash_leaf})
5630-        except (hashtree.BadHashError, hashtree.NotEnoughHashesError,
5631-                IndexError), e:
5632-            msg = "corrupt hashes: %s" % (e,)
5633-            raise CorruptShareError(peerid, shnum, msg)
5634-
5635-        # validate enc_privkey: only possible if we have a write-cap
5636-        if not self._node.is_readonly():
5637-            alleged_privkey_s = self._node._decrypt_privkey(enc_privkey)
5638-            alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s)
5639-            if alleged_writekey != self._node.get_writekey():
5640-                raise CorruptShareError(peerid, shnum, "invalid privkey")
5641+    def _process_bad_shares(self, bad_shares):
5642+        if bad_shares:
5643+            self.need_repair = True
5644+        self.bad_shares = bad_shares
5645 
5646hunk ./src/allmydata/mutable/checker.py 105
5647-    def notify_server_corruption(self, ss, shnum, reason):
5648-        ss.callRemoteOnly("advise_corrupt_share",
5649-                          "mutable", self._storage_index, shnum, reason)
5650 
5651     def _count_shares(self, smap, version):
5652         available_shares = smap.shares_available()
5653hunk ./src/allmydata/mutable/repairer.py 5
5654 from zope.interface import implements
5655 from twisted.internet import defer
5656 from allmydata.interfaces import IRepairResults, ICheckResults
5657+from allmydata.mutable.publish import MutableData
5658 
5659 class RepairResults:
5660     implements(IRepairResults)
5661hunk ./src/allmydata/mutable/repairer.py 108
5662             raise RepairRequiresWritecapError("Sorry, repair currently requires a writecap, to set the write-enabler properly.")
5663 
5664         d = self.node.download_version(smap, best_version, fetch_privkey=True)
5665+        d.addCallback(lambda data:
5666+            MutableData(data))
5667         d.addCallback(self.node.upload, smap)
5668         d.addCallback(self.get_results, smap)
5669         return d
5670}
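The checker rewrite above replaces the hand-rolled share verification with a `Retrieve(..., verify=True)` download: the downloader fetches and checks all N shares, and the checker only needs to inspect the bad-share list it produces. A standalone model of the remaining checker-side logic (hypothetical names; mirrors `_process_bad_shares` in the patch):

```python
class CheckerModel:
    """Minimal model of the post-rewrite MutableChecker state."""
    def __init__(self):
        self.need_repair = False
        self.bad_shares = []

    def process_bad_shares(self, bad_shares):
        # with verify=True, Retrieve has already done the per-share
        # hash and signature checks; any failures arrive here
        if bad_shares:
            self.need_repair = True
        self.bad_shares = bad_shares
        return self.bad_shares
```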
5671[test/common: Alter common test code to work with MDMF.
5672Kevan Carstensen <kevan@isnotajoke.com>**20110802015643
5673 Ignore-this: e564403182d0030439b168dd9f8726fa
5674 
5675 This mostly has to do with making the test code implement the new
5676 unified filenode interfaces.
5677] {
5678hunk ./src/allmydata/test/common.py 11
5679 from foolscap.api import flushEventualQueue, fireEventually
5680 from allmydata import uri, dirnode, client
5681 from allmydata.introducer.server import IntroducerNode
5682-from allmydata.interfaces import IMutableFileNode, IImmutableFileNode, \
5683-     FileTooLargeError, NotEnoughSharesError, ICheckable
5684+from allmydata.interfaces import IMutableFileNode, IImmutableFileNode,\
5685+                                 NotEnoughSharesError, ICheckable, \
5686+                                 IMutableUploadable, SDMF_VERSION, \
5687+                                 MDMF_VERSION
5688 from allmydata.check_results import CheckResults, CheckAndRepairResults, \
5689      DeepCheckResults, DeepCheckAndRepairResults
5690 from allmydata.mutable.common import CorruptShareError
5691hunk ./src/allmydata/test/common.py 19
5692 from allmydata.mutable.layout import unpack_header
5693+from allmydata.mutable.publish import MutableData
5694+from allmydata.storage.server import storage_index_to_dir
5695 from allmydata.storage.mutable import MutableShareFile
5696 from allmydata.util import hashutil, log, fileutil, pollmixin
5697 from allmydata.util.assertutil import precondition
5698hunk ./src/allmydata/test/common.py 152
5699         consumer.write(data[start:end])
5700         return consumer
5701 
5702+
5703+    def get_best_readable_version(self):
5704+        return defer.succeed(self)
5705+
5706+
5707+    def download_to_data(self):
5708+        return download_to_data(self)
5709+
5710+
5711+    download_best_version = download_to_data
5712+
5713+
5714+    def get_size_of_best_version(self):
5715+        return defer.succeed(self.get_size)
5716+
5717+
5718 def make_chk_file_cap(size):
5719     return uri.CHKFileURI(key=os.urandom(16),
5720                           uri_extension_hash=os.urandom(32),
5721hunk ./src/allmydata/test/common.py 192
5722     MUTABLE_SIZELIMIT = 10000
5723     all_contents = {}
5724     bad_shares = {}
5725+    file_types = {} # storage index => MDMF_VERSION or SDMF_VERSION
5726 
5727     def __init__(self, storage_broker, secret_holder,
5728                  default_encoding_parameters, history):
5729hunk ./src/allmydata/test/common.py 197
5730         self.init_from_cap(make_mutable_file_cap())
5731-    def create(self, contents, key_generator=None, keysize=None):
5732+        self._k = default_encoding_parameters['k']
5733+        self._segsize = default_encoding_parameters['max_segment_size']
5734+    def create(self, contents, key_generator=None, keysize=None,
5735+               version=SDMF_VERSION):
5736+        if version == MDMF_VERSION and \
5737+            isinstance(self.my_uri, (uri.ReadonlySSKFileURI,
5738+                                 uri.WriteableSSKFileURI)):
5739+            self.init_from_cap(make_mdmf_mutable_file_cap())
5740+        self.file_types[self.storage_index] = version
5741         initial_contents = self._get_initial_contents(contents)
5742hunk ./src/allmydata/test/common.py 207
5743-        if len(initial_contents) > self.MUTABLE_SIZELIMIT:
5744-            raise FileTooLargeError("SDMF is limited to one segment, and "
5745-                                    "%d > %d" % (len(initial_contents),
5746-                                                 self.MUTABLE_SIZELIMIT))
5747-        self.all_contents[self.storage_index] = initial_contents
5748+        data = initial_contents.read(initial_contents.get_size())
5749+        data = "".join(data)
5750+        self.all_contents[self.storage_index] = data
5751+        self.my_uri.set_extension_params([self._k, self._segsize])
5752         return defer.succeed(self)
5753     def _get_initial_contents(self, contents):
5754hunk ./src/allmydata/test/common.py 213
5755-        if isinstance(contents, str):
5756-            return contents
5757         if contents is None:
5758hunk ./src/allmydata/test/common.py 214
5759-            return ""
5760+            return MutableData("")
5761+
5762+        if IMutableUploadable.providedBy(contents):
5763+            return contents
5764+
5765         assert callable(contents), "%s should be callable, not %s" % \
5766                (contents, type(contents))
5767         return contents(self)
5768hunk ./src/allmydata/test/common.py 224
5769     def init_from_cap(self, filecap):
5770         assert isinstance(filecap, (uri.WriteableSSKFileURI,
5771-                                    uri.ReadonlySSKFileURI))
5772+                                    uri.ReadonlySSKFileURI,
5773+                                    uri.WritableMDMFFileURI,
5774+                                    uri.ReadonlyMDMFFileURI))
5775         self.my_uri = filecap
5776         self.storage_index = self.my_uri.get_storage_index()
5777hunk ./src/allmydata/test/common.py 229
5778+        if isinstance(filecap, (uri.WritableMDMFFileURI,
5779+                                uri.ReadonlyMDMFFileURI)):
5780+            self.file_types[self.storage_index] = MDMF_VERSION
5781+
5782+        else:
5783+            self.file_types[self.storage_index] = SDMF_VERSION
5784+
5785         return self
5786     def get_cap(self):
5787         return self.my_uri
5788hunk ./src/allmydata/test/common.py 253
5789         return self.my_uri.get_readonly().to_string()
5790     def get_verify_cap(self):
5791         return self.my_uri.get_verify_cap()
5792+    def get_repair_cap(self):
5793+        if self.my_uri.is_readonly():
5794+            return None
5795+        return self.my_uri
5796     def is_readonly(self):
5797         return self.my_uri.is_readonly()
5798     def is_mutable(self):
5799hunk ./src/allmydata/test/common.py 279
5800     def get_storage_index(self):
5801         return self.storage_index
5802 
5803+    def get_servermap(self, mode):
5804+        return defer.succeed(None)
5805+
5806+    def get_version(self):
5807+        assert self.storage_index in self.file_types
5808+        return self.file_types[self.storage_index]
5809+
5810     def check(self, monitor, verify=False, add_lease=False):
5811         r = CheckResults(self.my_uri, self.storage_index)
5812         is_bad = self.bad_shares.get(self.storage_index, None)
5813hunk ./src/allmydata/test/common.py 344
5814         return d
5815 
5816     def download_best_version(self):
5817+        return defer.succeed(self._download_best_version())
5818+
5819+
5820+    def _download_best_version(self, ignored=None):
5821         if isinstance(self.my_uri, uri.LiteralFileURI):
5822hunk ./src/allmydata/test/common.py 349
5823-            return defer.succeed(self.my_uri.data)
5824+            return self.my_uri.data
5825         if self.storage_index not in self.all_contents:
5826hunk ./src/allmydata/test/common.py 351
5827-            return defer.fail(NotEnoughSharesError(None, 0, 3))
5828-        return defer.succeed(self.all_contents[self.storage_index])
5829+            raise NotEnoughSharesError(None, 0, 3)
5830+        return self.all_contents[self.storage_index]
5831+
5832 
5833     def overwrite(self, new_contents):
5834hunk ./src/allmydata/test/common.py 356
5835-        if len(new_contents) > self.MUTABLE_SIZELIMIT:
5836-            raise FileTooLargeError("SDMF is limited to one segment, and "
5837-                                    "%d > %d" % (len(new_contents),
5838-                                                 self.MUTABLE_SIZELIMIT))
5839         assert not self.is_readonly()
5840hunk ./src/allmydata/test/common.py 357
5841-        self.all_contents[self.storage_index] = new_contents
5842+        new_data = new_contents.read(new_contents.get_size())
5843+        new_data = "".join(new_data)
5844+        self.all_contents[self.storage_index] = new_data
5845+        self.my_uri.set_extension_params([self._k, self._segsize])
5846         return defer.succeed(None)
5847     def modify(self, modifier):
5848         # this does not implement FileTooLargeError, but the real one does
5849hunk ./src/allmydata/test/common.py 368
5850     def _modify(self, modifier):
5851         assert not self.is_readonly()
5852         old_contents = self.all_contents[self.storage_index]
5853-        self.all_contents[self.storage_index] = modifier(old_contents, None, True)
5854+        new_data = modifier(old_contents, None, True)
5855+        self.all_contents[self.storage_index] = new_data
5856+        self.my_uri.set_extension_params([self._k, self._segsize])
5857         return None
5858 
5859hunk ./src/allmydata/test/common.py 373
5860+    # As actually implemented, MutableFilenode and MutableFileVersion
5861+    # are distinct. However, nothing in the webapi uses (yet) that
5862+    # distinction -- it just uses the unified download interface
5863+    # provided by get_best_readable_version and read. When we start
5864+    # doing cooler things like LDMF, we will want to revise this code to
5865+    # be less simplistic.
5866+    def get_best_readable_version(self):
5867+        return defer.succeed(self)
5868+
5869+
5870+    def get_best_mutable_version(self):
5871+        return defer.succeed(self)
5872+
5873+    # Ditto for this, which is an implementation of IWritable.
5874+    # XXX: Declare that the same is implemented.
5875+    def update(self, data, offset):
5876+        assert not self.is_readonly()
5877+        def modifier(old, servermap, first_time):
5878+            new = old[:offset] + "".join(data.read(data.get_size()))
5879+            new += old[len(new):]
5880+            return new
5881+        return self.modify(modifier)
5882+
5883+
5884+    def read(self, consumer, offset=0, size=None):
5885+        data = self._download_best_version()
5886+        if size:
5887+            data = data[offset:offset+size]
5888+        consumer.write(data)
5889+        return defer.succeed(consumer)
5890+
5891+
5892 def make_mutable_file_cap():
5893     return uri.WriteableSSKFileURI(writekey=os.urandom(16),
5894                                    fingerprint=os.urandom(32))
5895hunk ./src/allmydata/test/common.py 408
5896-def make_mutable_file_uri():
5897-    return make_mutable_file_cap().to_string()
5898+
5899+def make_mdmf_mutable_file_cap():
5900+    return uri.WritableMDMFFileURI(writekey=os.urandom(16),
5901+                                   fingerprint=os.urandom(32))
5902+
5903+def make_mutable_file_uri(mdmf=False):
5904+    if mdmf:
5905+        uri = make_mdmf_mutable_file_cap()
5906+    else:
5907+        uri = make_mutable_file_cap()
5908+
5909+    return uri.to_string()
5910 
5911 def make_verifier_uri():
5912     return uri.SSKVerifierURI(storage_index=os.urandom(16),
5913hunk ./src/allmydata/test/common.py 425
5914                               fingerprint=os.urandom(32)).to_string()
5915 
5916+def create_mutable_filenode(contents, mdmf=False):
5917+    # XXX: All of these arguments are kind of stupid.
5918+    if mdmf:
5919+        cap = make_mdmf_mutable_file_cap()
5920+    else:
5921+        cap = make_mutable_file_cap()
5922+
5923+    encoding_params = {}
5924+    encoding_params['k'] = 3
5925+    encoding_params['max_segment_size'] = 128*1024
5926+
5927+    filenode = FakeMutableFileNode(None, None, encoding_params, None)
5928+    filenode.init_from_cap(cap)
5929+    if mdmf:
5930+        filenode.create(MutableData(contents), version=MDMF_VERSION)
5931+    else:
5932+        filenode.create(MutableData(contents), version=SDMF_VERSION)
5933+    return filenode
5934+
5935+
5936 class FakeDirectoryNode(dirnode.DirectoryNode):
5937     """This offers IDirectoryNode, but uses a FakeMutableFileNode for the
5938     backing store, so it doesn't go to the grid. The child data is still
5939}
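The new `FakeMutableFileNode.update` modifier splices the written bytes into the old contents at `offset`, keeping any old tail the write did not cover, and growing the file if the write runs past the end. A standalone sketch of that splice:

```python
def splice(old, new_bytes, offset):
    """Overwrite old[offset:offset+len(new_bytes)] with new_bytes,
    extending the contents if the write runs past the old end."""
    new = old[:offset] + new_bytes
    new += old[len(new):]   # retain the old tail, if any remains
    return new
```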
5940[dirnode: teach dirnode to make MDMF directories
5941Kevan Carstensen <kevan@isnotajoke.com>**20110802020511
5942 Ignore-this: 143631400a6136467eb82455487df525
5943] {
5944hunk ./src/allmydata/dirnode.py 14
5945 from allmydata.interfaces import IFilesystemNode, IDirectoryNode, IFileNode, \
5946      IImmutableFileNode, IMutableFileNode, \
5947      ExistingChildError, NoSuchChildError, ICheckable, IDeepCheckable, \
5948-     MustBeDeepImmutableError, CapConstraintError, ChildOfWrongTypeError
5949+     MustBeDeepImmutableError, CapConstraintError, ChildOfWrongTypeError, \
5950+     SDMF_VERSION, MDMF_VERSION
5951 from allmydata.check_results import DeepCheckResults, \
5952      DeepCheckAndRepairResults
5953 from allmydata.monitor import Monitor
5954hunk ./src/allmydata/dirnode.py 617
5955         d.addCallback(lambda res: deleter.old_child)
5956         return d
5957 
5958+    # XXX: Too many arguments? Worthwhile to break into mutable/immutable?
5959     def create_subdirectory(self, namex, initial_children={}, overwrite=True,
5960hunk ./src/allmydata/dirnode.py 619
5961-                            mutable=True, metadata=None):
5962+                            mutable=True, mutable_version=None, metadata=None):
5963         name = normalize(namex)
5964         if self.is_readonly():
5965             return defer.fail(NotWriteableError())
5966hunk ./src/allmydata/dirnode.py 624
5967         if mutable:
5968-            d = self._nodemaker.create_new_mutable_directory(initial_children)
5969+            if mutable_version:
5970+                d = self._nodemaker.create_new_mutable_directory(initial_children,
5971+                                                                 version=mutable_version)
5972+            else:
5973+                d = self._nodemaker.create_new_mutable_directory(initial_children)
5974         else:
5975hunk ./src/allmydata/dirnode.py 630
5976+            # mutable version doesn't make sense for immutable directories.
5977+            assert mutable_version is None
5978             d = self._nodemaker.create_immutable_directory(initial_children)
5979         def _created(child):
5980             entries = {name: (child, metadata)}
5981hunk ./src/allmydata/test/test_dirnode.py 14
5982 from allmydata.interfaces import IImmutableFileNode, IMutableFileNode, \
5983      ExistingChildError, NoSuchChildError, MustNotBeUnknownRWError, \
5984      MustBeDeepImmutableError, MustBeReadonlyError, \
5985-     IDeepCheckResults, IDeepCheckAndRepairResults
5986+     IDeepCheckResults, IDeepCheckAndRepairResults, \
5987+     MDMF_VERSION, SDMF_VERSION
5988 from allmydata.mutable.filenode import MutableFileNode
5989 from allmydata.mutable.common import UncoordinatedWriteError
5990 from allmydata.util import hashutil, base32
5991hunk ./src/allmydata/test/test_dirnode.py 61
5992               testutil.ReallyEqualMixin, testutil.ShouldFailMixin, testutil.StallMixin, ErrorMixin):
5993     timeout = 480 # It occasionally takes longer than 240 seconds on Francois's arm box.
5994 
5995-    def test_basic(self):
5996-        self.basedir = "dirnode/Dirnode/test_basic"
5997-        self.set_up_grid()
5998+    def _do_create_test(self, mdmf=False):
5999         c = self.g.clients[0]
6000hunk ./src/allmydata/test/test_dirnode.py 63
6001-        d = c.create_dirnode()
6002-        def _done(res):
6003-            self.failUnless(isinstance(res, dirnode.DirectoryNode))
6004-            self.failUnless(res.is_mutable())
6005-            self.failIf(res.is_readonly())
6006-            self.failIf(res.is_unknown())
6007-            self.failIf(res.is_allowed_in_immutable_directory())
6008-            res.raise_error()
6009-            rep = str(res)
6010-            self.failUnless("RW-MUT" in rep)
6011-        d.addCallback(_done)
6012+
6013+        self.expected_manifest = []
6014+        self.expected_verifycaps = set()
6015+        self.expected_storage_indexes = set()
6016+
6017+        d = None
6018+        if mdmf:
6019+            d = c.create_dirnode(version=MDMF_VERSION)
6020+        else:
6021+            d = c.create_dirnode()
6022+        def _then(n):
6023+            # /
6024+            self.rootnode = n
6025+            backing_node = n._node
6026+            if mdmf:
6027+                self.failUnlessEqual(backing_node.get_version(),
6028+                                     MDMF_VERSION)
6029+            else:
6030+                self.failUnlessEqual(backing_node.get_version(),
6031+                                     SDMF_VERSION)
6032+            self.failUnless(n.is_mutable())
6033+            u = n.get_uri()
6034+            self.failUnless(u)
6035+            cap_formats = []
6036+            if mdmf:
6037+                cap_formats = ["URI:DIR2-MDMF:",
6038+                               "URI:DIR2-MDMF-RO:",
6039+                               "URI:DIR2-MDMF-Verifier:"]
6040+            else:
6041+                cap_formats = ["URI:DIR2:",
6042+                               "URI:DIR2-RO",
6043+                               "URI:DIR2-Verifier:"]
6044+            rw, ro, v = cap_formats
6045+            self.failUnless(u.startswith(rw), u)
6046+            u_ro = n.get_readonly_uri()
6047+            self.failUnless(u_ro.startswith(ro), u_ro)
6048+            u_v = n.get_verify_cap().to_string()
6049+            self.failUnless(u_v.startswith(v), u_v)
6050+            u_r = n.get_repair_cap().to_string()
6051+            self.failUnlessReallyEqual(u_r, u)
6052+            self.expected_manifest.append( ((), u) )
6053+            self.expected_verifycaps.add(u_v)
6054+            si = n.get_storage_index()
6055+            self.expected_storage_indexes.add(base32.b2a(si))
6056+            expected_si = n._uri.get_storage_index()
6057+            self.failUnlessReallyEqual(si, expected_si)
6058+
6059+            d = n.list()
6060+            d.addCallback(lambda res: self.failUnlessEqual(res, {}))
6061+            d.addCallback(lambda res: n.has_child(u"missing"))
6062+            d.addCallback(lambda res: self.failIf(res))
6063+
6064+            fake_file_uri = make_mutable_file_uri()
6065+            other_file_uri = make_mutable_file_uri()
6066+            m = c.nodemaker.create_from_cap(fake_file_uri)
6067+            ffu_v = m.get_verify_cap().to_string()
6068+            self.expected_manifest.append( ((u"child",) , m.get_uri()) )
6069+            self.expected_verifycaps.add(ffu_v)
6070+            self.expected_storage_indexes.add(base32.b2a(m.get_storage_index()))
6071+            d.addCallback(lambda res: n.set_uri(u"child",
6072+                                                fake_file_uri, fake_file_uri))
6073+            d.addCallback(lambda res:
6074+                          self.shouldFail(ExistingChildError, "set_uri-no",
6075+                                          "child 'child' already exists",
6076+                                          n.set_uri, u"child",
6077+                                          other_file_uri, other_file_uri,
6078+                                          overwrite=False))
6079+            # /
6080+            # /child = mutable
6081+
6082+            d.addCallback(lambda res: n.create_subdirectory(u"subdir"))
6083+
6084+            # /
6085+            # /child = mutable
6086+            # /subdir = directory
6087+            def _created(subdir):
6088+                self.failUnless(isinstance(subdir, dirnode.DirectoryNode))
6089+                self.subdir = subdir
6090+                new_v = subdir.get_verify_cap().to_string()
6091+                assert isinstance(new_v, str)
6092+                self.expected_manifest.append( ((u"subdir",), subdir.get_uri()) )
6093+                self.expected_verifycaps.add(new_v)
6094+                si = subdir.get_storage_index()
6095+                self.expected_storage_indexes.add(base32.b2a(si))
6096+            d.addCallback(_created)
6097+
6098+            d.addCallback(lambda res:
6099+                          self.shouldFail(ExistingChildError, "mkdir-no",
6100+                                          "child 'subdir' already exists",
6101+                                          n.create_subdirectory, u"subdir",
6102+                                          overwrite=False))
6103+
6104+            d.addCallback(lambda res: n.list())
6105+            d.addCallback(lambda children:
6106+                          self.failUnlessReallyEqual(set(children.keys()),
6107+                                                     set([u"child", u"subdir"])))
6108+
6109+            d.addCallback(lambda res: n.start_deep_stats().when_done())
6110+            def _check_deepstats(stats):
6111+                self.failUnless(isinstance(stats, dict))
6112+                expected = {"count-immutable-files": 0,
6113+                            "count-mutable-files": 1,
6114+                            "count-literal-files": 0,
6115+                            "count-files": 1,
6116+                            "count-directories": 2,
6117+                            "size-immutable-files": 0,
6118+                            "size-literal-files": 0,
6119+                            #"size-directories": 616, # varies
6120+                            #"largest-directory": 616,
6121+                            "largest-directory-children": 2,
6122+                            "largest-immutable-file": 0,
6123+                            }
6124+                for k,v in expected.iteritems():
6125+                    self.failUnlessReallyEqual(stats[k], v,
6126+                                               "stats[%s] was %s, not %s" %
6127+                                               (k, stats[k], v))
6128+                self.failUnless(stats["size-directories"] > 500,
6129+                                stats["size-directories"])
6130+                self.failUnless(stats["largest-directory"] > 500,
6131+                                stats["largest-directory"])
6132+                self.failUnlessReallyEqual(stats["size-files-histogram"], [])
6133+            d.addCallback(_check_deepstats)
6134+
6135+            d.addCallback(lambda res: n.build_manifest().when_done())
6136+            def _check_manifest(res):
6137+                manifest = res["manifest"]
6138+                self.failUnlessReallyEqual(sorted(manifest),
6139+                                           sorted(self.expected_manifest))
6140+                stats = res["stats"]
6141+                _check_deepstats(stats)
6142+                self.failUnlessReallyEqual(self.expected_verifycaps,
6143+                                           res["verifycaps"])
6144+                self.failUnlessReallyEqual(self.expected_storage_indexes,
6145+                                           res["storage-index"])
6146+            d.addCallback(_check_manifest)
6147+
6148+            def _add_subsubdir(res):
6149+                return self.subdir.create_subdirectory(u"subsubdir")
6150+            d.addCallback(_add_subsubdir)
6151+            # /
6152+            # /child = mutable
6153+            # /subdir = directory
6154+            # /subdir/subsubdir = directory
6155+            d.addCallback(lambda res: n.get_child_at_path(u"subdir/subsubdir"))
6156+            d.addCallback(lambda subsubdir:
6157+                          self.failUnless(isinstance(subsubdir,
6158+                                                     dirnode.DirectoryNode)))
6159+            d.addCallback(lambda res: n.get_child_at_path(u""))
6160+            d.addCallback(lambda res: self.failUnlessReallyEqual(res.get_uri(),
6161+                                                                 n.get_uri()))
6162+
6163+            d.addCallback(lambda res: n.get_metadata_for(u"child"))
6164+            d.addCallback(lambda metadata:
6165+                          self.failUnlessEqual(set(metadata.keys()),
6166+                                               set(["tahoe"])))
6167+
6168+            d.addCallback(lambda res:
6169+                          self.shouldFail(NoSuchChildError, "gcamap-no",
6170+                                          "nope",
6171+                                          n.get_child_and_metadata_at_path,
6172+                                          u"subdir/nope"))
6173+            d.addCallback(lambda res:
6174+                          n.get_child_and_metadata_at_path(u""))
6175+            def _check_child_and_metadata1(res):
6176+                child, metadata = res
6177+                self.failUnless(isinstance(child, dirnode.DirectoryNode))
6178+                # edge-metadata needs at least one path segment
6179+                self.failUnlessEqual(set(metadata.keys()), set([]))
6180+            d.addCallback(_check_child_and_metadata1)
6181+            d.addCallback(lambda res:
6182+                          n.get_child_and_metadata_at_path(u"child"))
6183+
6184+            def _check_child_and_metadata2(res):
6185+                child, metadata = res
6186+                self.failUnlessReallyEqual(child.get_uri(),
6187+                                           fake_file_uri)
6188+                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
6189+            d.addCallback(_check_child_and_metadata2)
6190+
6191+            d.addCallback(lambda res:
6192+                          n.get_child_and_metadata_at_path(u"subdir/subsubdir"))
6193+            def _check_child_and_metadata3(res):
6194+                child, metadata = res
6195+                self.failUnless(isinstance(child, dirnode.DirectoryNode))
6196+                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
6197+            d.addCallback(_check_child_and_metadata3)
6198+
6199+            # set_uri + metadata
6200+            # it should be possible to add a child without any metadata
6201+            d.addCallback(lambda res: n.set_uri(u"c2",
6202+                                                fake_file_uri, fake_file_uri,
6203+                                                {}))
6204+            d.addCallback(lambda res: n.get_metadata_for(u"c2"))
6205+            d.addCallback(lambda metadata:
6206+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6207+
6208+            # You can't override the link timestamps.
6209+            d.addCallback(lambda res: n.set_uri(u"c2",
6210+                                                fake_file_uri, fake_file_uri,
6211+                                                { 'tahoe': {'linkcrtime': "bogus"}}))
6212+            d.addCallback(lambda res: n.get_metadata_for(u"c2"))
6213+            def _has_good_linkcrtime(metadata):
6214+                self.failUnless(metadata.has_key('tahoe'))
6215+                self.failUnless(metadata['tahoe'].has_key('linkcrtime'))
6216+                self.failIfEqual(metadata['tahoe']['linkcrtime'], 'bogus')
6217+            d.addCallback(_has_good_linkcrtime)
6218+
6219+            # if we don't set any defaults, the child should get timestamps
6220+            d.addCallback(lambda res: n.set_uri(u"c3",
6221+                                                fake_file_uri, fake_file_uri))
6222+            d.addCallback(lambda res: n.get_metadata_for(u"c3"))
6223+            d.addCallback(lambda metadata:
6224+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6225+
6226+            # we can also add specific metadata at set_uri() time
6227+            d.addCallback(lambda res: n.set_uri(u"c4",
6228+                                                fake_file_uri, fake_file_uri,
6229+                                                {"key": "value"}))
6230+            d.addCallback(lambda res: n.get_metadata_for(u"c4"))
6231+            d.addCallback(lambda metadata:
6232+                              self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
6233+                                              (metadata['key'] == "value"), metadata))
6234+
6235+            d.addCallback(lambda res: n.delete(u"c2"))
6236+            d.addCallback(lambda res: n.delete(u"c3"))
6237+            d.addCallback(lambda res: n.delete(u"c4"))
6238+
6239+            # set_node + metadata
6240+            # it should be possible to add a child without any metadata except for timestamps
6241+            d.addCallback(lambda res: n.set_node(u"d2", n, {}))
6242+            d.addCallback(lambda res: c.create_dirnode())
6243+            d.addCallback(lambda n2:
6244+                          self.shouldFail(ExistingChildError, "set_node-no",
6245+                                          "child 'd2' already exists",
6246+                                          n.set_node, u"d2", n2,
6247+                                          overwrite=False))
6248+            d.addCallback(lambda res: n.get_metadata_for(u"d2"))
6249+            d.addCallback(lambda metadata:
6250+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6251+
6252+            # if we don't set any defaults, the child should get timestamps
6253+            d.addCallback(lambda res: n.set_node(u"d3", n))
6254+            d.addCallback(lambda res: n.get_metadata_for(u"d3"))
6255+            d.addCallback(lambda metadata:
6256+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6257+
6258+            # we can also add specific metadata at set_node() time
6259+            d.addCallback(lambda res: n.set_node(u"d4", n,
6260+                                                {"key": "value"}))
6261+            d.addCallback(lambda res: n.get_metadata_for(u"d4"))
6262+            d.addCallback(lambda metadata:
6263+                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
6264+                                          (metadata["key"] == "value"), metadata))
6265+
6266+            d.addCallback(lambda res: n.delete(u"d2"))
6267+            d.addCallback(lambda res: n.delete(u"d3"))
6268+            d.addCallback(lambda res: n.delete(u"d4"))
6269+
6270+            # metadata through set_children()
6271+            d.addCallback(lambda res:
6272+                          n.set_children({
6273+                              u"e1": (fake_file_uri, fake_file_uri),
6274+                              u"e2": (fake_file_uri, fake_file_uri, {}),
6275+                              u"e3": (fake_file_uri, fake_file_uri,
6276+                                      {"key": "value"}),
6277+                              }))
6278+            d.addCallback(lambda n2: self.failUnlessIdentical(n2, n))
6279+            d.addCallback(lambda res:
6280+                          self.shouldFail(ExistingChildError, "set_children-no",
6281+                                          "child 'e1' already exists",
6282+                                          n.set_children,
6283+                                          { u"e1": (other_file_uri,
6284+                                                    other_file_uri),
6285+                                            u"new": (other_file_uri,
6286+                                                     other_file_uri),
6287+                                            },
6288+                                          overwrite=False))
6289+            # and 'new' should not have been created
6290+            d.addCallback(lambda res: n.list())
6291+            d.addCallback(lambda children: self.failIf(u"new" in children))
6292+            d.addCallback(lambda res: n.get_metadata_for(u"e1"))
6293+            d.addCallback(lambda metadata:
6294+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6295+            d.addCallback(lambda res: n.get_metadata_for(u"e2"))
6296+            d.addCallback(lambda metadata:
6297+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6298+            d.addCallback(lambda res: n.get_metadata_for(u"e3"))
6299+            d.addCallback(lambda metadata:
6300+                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
6301+                                          (metadata["key"] == "value"), metadata))
6302+
6303+            d.addCallback(lambda res: n.delete(u"e1"))
6304+            d.addCallback(lambda res: n.delete(u"e2"))
6305+            d.addCallback(lambda res: n.delete(u"e3"))
6306+
6307+            # metadata through set_nodes()
6308+            d.addCallback(lambda res:
6309+                          n.set_nodes({ u"f1": (n, None),
6310+                                        u"f2": (n, {}),
6311+                                        u"f3": (n, {"key": "value"}),
6312+                                        }))
6313+            d.addCallback(lambda n2: self.failUnlessIdentical(n2, n))
6314+            d.addCallback(lambda res:
6315+                          self.shouldFail(ExistingChildError, "set_nodes-no",
6316+                                          "child 'f1' already exists",
6317+                                          n.set_nodes, { u"f1": (n, None),
6318+                                                         u"new": (n, None), },
6319+                                          overwrite=False))
6320+            # and 'new' should not have been created
6321+            d.addCallback(lambda res: n.list())
6322+            d.addCallback(lambda children: self.failIf(u"new" in children))
6323+            d.addCallback(lambda res: n.get_metadata_for(u"f1"))
6324+            d.addCallback(lambda metadata:
6325+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6326+            d.addCallback(lambda res: n.get_metadata_for(u"f2"))
6327+            d.addCallback(lambda metadata:
6328+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6329+            d.addCallback(lambda res: n.get_metadata_for(u"f3"))
6330+            d.addCallback(lambda metadata:
6331+                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
6332+                                          (metadata["key"] == "value"), metadata))
6333+
6334+            d.addCallback(lambda res: n.delete(u"f1"))
6335+            d.addCallback(lambda res: n.delete(u"f2"))
6336+            d.addCallback(lambda res: n.delete(u"f3"))
6337+
6338+
6339+            d.addCallback(lambda res:
6340+                          n.set_metadata_for(u"child",
6341+                                             {"tags": ["web2.0-compatible"], "tahoe": {"bad": "mojo"}}))
6342+            d.addCallback(lambda n1: n1.get_metadata_for(u"child"))
6343+            d.addCallback(lambda metadata:
6344+                          self.failUnless((set(metadata.keys()) == set(["tags", "tahoe"])) and
6345+                                          metadata["tags"] == ["web2.0-compatible"] and
6346+                                          "bad" not in metadata["tahoe"], metadata))
6347+
6348+            d.addCallback(lambda res:
6349+                          self.shouldFail(NoSuchChildError, "set_metadata_for-nosuch", "",
6350+                                          n.set_metadata_for, u"nosuch", {}))
6351+
6352+
6353+            def _start(res):
6354+                self._start_timestamp = time.time()
6355+            d.addCallback(_start)
6356+            # simplejson-1.7.1 (as shipped on Ubuntu 'gutsy') rounds all
6357+            # floats to hundredths (it uses str(num) instead of repr(num)).
6358+            # simplejson-1.7.3 does not have this bug. To prevent this bug
6359+            # from causing the test to fail, stall for more than a few
6360+            # hundredths of a second.
6361+            d.addCallback(self.stall, 0.1)
6362+            d.addCallback(lambda res: n.add_file(u"timestamps",
6363+                                                 upload.Data("stamp me", convergence="some convergence string")))
6364+            d.addCallback(self.stall, 0.1)
6365+            def _stop(res):
6366+                self._stop_timestamp = time.time()
6367+            d.addCallback(_stop)
6368+
6369+            d.addCallback(lambda res: n.get_metadata_for(u"timestamps"))
6370+            def _check_timestamp1(metadata):
6371+                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
6372+                tahoe_md = metadata["tahoe"]
6373+                self.failUnlessEqual(set(tahoe_md.keys()), set(["linkcrtime", "linkmotime"]))
6374+
6375+                self.failUnlessGreaterOrEqualThan(tahoe_md["linkcrtime"],
6376+                                                  self._start_timestamp)
6377+                self.failUnlessGreaterOrEqualThan(self._stop_timestamp,
6378+                                                  tahoe_md["linkcrtime"])
6379+                self.failUnlessGreaterOrEqualThan(tahoe_md["linkmotime"],
6380+                                                  self._start_timestamp)
6381+                self.failUnlessGreaterOrEqualThan(self._stop_timestamp,
6382+                                                  tahoe_md["linkmotime"])
6383+                # Our current timestamp rules say that replacing an existing
6384+                # child should preserve the 'linkcrtime' but update the
6385+                # 'linkmotime'
6386+                self._old_linkcrtime = tahoe_md["linkcrtime"]
6387+                self._old_linkmotime = tahoe_md["linkmotime"]
6388+            d.addCallback(_check_timestamp1)
6389+            d.addCallback(self.stall, 2.0) # accommodate low-res timestamps
6390+            d.addCallback(lambda res: n.set_node(u"timestamps", n))
6391+            d.addCallback(lambda res: n.get_metadata_for(u"timestamps"))
6392+            def _check_timestamp2(metadata):
6393+                self.failUnlessIn("tahoe", metadata)
6394+                tahoe_md = metadata["tahoe"]
6395+                self.failUnlessEqual(set(tahoe_md.keys()), set(["linkcrtime", "linkmotime"]))
6396+
6397+                self.failUnlessReallyEqual(tahoe_md["linkcrtime"], self._old_linkcrtime)
6398+                self.failUnlessGreaterThan(tahoe_md["linkmotime"], self._old_linkmotime)
6399+                return n.delete(u"timestamps")
6400+            d.addCallback(_check_timestamp2)
6401+
6402+            d.addCallback(lambda res: n.delete(u"subdir"))
6403+            d.addCallback(lambda old_child:
6404+                          self.failUnlessReallyEqual(old_child.get_uri(),
6405+                                                     self.subdir.get_uri()))
6406+
6407+            d.addCallback(lambda res: n.list())
6408+            d.addCallback(lambda children:
6409+                          self.failUnlessReallyEqual(set(children.keys()),
6410+                                                     set([u"child"])))
6411+
6412+            uploadable1 = upload.Data("some data", convergence="converge")
6413+            d.addCallback(lambda res: n.add_file(u"newfile", uploadable1))
6414+            d.addCallback(lambda newnode:
6415+                          self.failUnless(IImmutableFileNode.providedBy(newnode)))
6416+            uploadable2 = upload.Data("some data", convergence="stuff")
6417+            d.addCallback(lambda res:
6418+                          self.shouldFail(ExistingChildError, "add_file-no",
6419+                                          "child 'newfile' already exists",
6420+                                          n.add_file, u"newfile",
6421+                                          uploadable2,
6422+                                          overwrite=False))
6423+            d.addCallback(lambda res: n.list())
6424+            d.addCallback(lambda children:
6425+                          self.failUnlessReallyEqual(set(children.keys()),
6426+                                                     set([u"child", u"newfile"])))
6427+            d.addCallback(lambda res: n.get_metadata_for(u"newfile"))
6428+            d.addCallback(lambda metadata:
6429+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6430+
6431+            uploadable3 = upload.Data("some data", convergence="converge")
6432+            d.addCallback(lambda res: n.add_file(u"newfile-metadata",
6433+                                                 uploadable3,
6434+                                                 {"key": "value"}))
6435+            d.addCallback(lambda newnode:
6436+                          self.failUnless(IImmutableFileNode.providedBy(newnode)))
6437+            d.addCallback(lambda res: n.get_metadata_for(u"newfile-metadata"))
6438+            d.addCallback(lambda metadata:
6439+                              self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
6440+                                              (metadata['key'] == "value"), metadata))
6441+            d.addCallback(lambda res: n.delete(u"newfile-metadata"))
6442+
6443+            d.addCallback(lambda res: n.create_subdirectory(u"subdir2"))
6444+            def _created2(subdir2):
6445+                self.subdir2 = subdir2
6446+                # put something in the way, to make sure it gets overwritten
6447+                return subdir2.add_file(u"child", upload.Data("overwrite me",
6448+                                                              "converge"))
6449+            d.addCallback(_created2)
6450+
6451+            d.addCallback(lambda res:
6452+                          n.move_child_to(u"child", self.subdir2))
6453+            d.addCallback(lambda res: n.list())
6454+            d.addCallback(lambda children:
6455+                          self.failUnlessReallyEqual(set(children.keys()),
6456+                                                     set([u"newfile", u"subdir2"])))
6457+            d.addCallback(lambda res: self.subdir2.list())
6458+            d.addCallback(lambda children:
6459+                          self.failUnlessReallyEqual(set(children.keys()),
6460+                                                     set([u"child"])))
6461+            d.addCallback(lambda res: self.subdir2.get(u"child"))
6462+            d.addCallback(lambda child:
6463+                          self.failUnlessReallyEqual(child.get_uri(),
6464+                                                     fake_file_uri))
6465+
6466+            # move it back, using new_child_name=
6467+            d.addCallback(lambda res:
6468+                          self.subdir2.move_child_to(u"child", n, u"newchild"))
6469+            d.addCallback(lambda res: n.list())
6470+            d.addCallback(lambda children:
6471+                          self.failUnlessReallyEqual(set(children.keys()),
6472+                                                     set([u"newchild", u"newfile",
6473+                                                          u"subdir2"])))
6474+            d.addCallback(lambda res: self.subdir2.list())
6475+            d.addCallback(lambda children:
6476+                          self.failUnlessReallyEqual(set(children.keys()), set([])))
6477+
6478+            # now make sure that we honor overwrite=False
6479+            d.addCallback(lambda res:
6480+                          self.subdir2.set_uri(u"newchild",
6481+                                               other_file_uri, other_file_uri))
6482+
6483+            d.addCallback(lambda res:
6484+                          self.shouldFail(ExistingChildError, "move_child_to-no",
6485+                                          "child 'newchild' already exists",
6486+                                          n.move_child_to, u"newchild",
6487+                                          self.subdir2,
6488+                                          overwrite=False))
6489+            d.addCallback(lambda res: self.subdir2.get(u"newchild"))
6490+            d.addCallback(lambda child:
6491+                          self.failUnlessReallyEqual(child.get_uri(),
6492+                                                     other_file_uri))
6493+
6494+
6495+            # Setting the no-write field should diminish a mutable cap to read-only
6496+            # (for both files and directories).
6497+
6498+            d.addCallback(lambda ign: n.set_uri(u"mutable", other_file_uri, other_file_uri))
6499+            d.addCallback(lambda ign: n.get(u"mutable"))
6500+            d.addCallback(lambda mutable: self.failIf(mutable.is_readonly(), mutable))
6501+            d.addCallback(lambda ign: n.set_metadata_for(u"mutable", {"no-write": True}))
6502+            d.addCallback(lambda ign: n.get(u"mutable"))
6503+            d.addCallback(lambda mutable: self.failUnless(mutable.is_readonly(), mutable))
6504+            d.addCallback(lambda ign: n.set_metadata_for(u"mutable", {"no-write": True}))
6505+            d.addCallback(lambda ign: n.get(u"mutable"))
6506+            d.addCallback(lambda mutable: self.failUnless(mutable.is_readonly(), mutable))
6507+
6508+            d.addCallback(lambda ign: n.get(u"subdir2"))
6509+            d.addCallback(lambda subdir2: self.failIf(subdir2.is_readonly()))
6510+            d.addCallback(lambda ign: n.set_metadata_for(u"subdir2", {"no-write": True}))
6511+            d.addCallback(lambda ign: n.get(u"subdir2"))
6512+            d.addCallback(lambda subdir2: self.failUnless(subdir2.is_readonly(), subdir2))
6513+
6514+            d.addCallback(lambda ign: n.set_uri(u"mutable_ro", other_file_uri, other_file_uri,
6515+                                                metadata={"no-write": True}))
6516+            d.addCallback(lambda ign: n.get(u"mutable_ro"))
6517+            d.addCallback(lambda mutable_ro: self.failUnless(mutable_ro.is_readonly(), mutable_ro))
6518+
6519+            d.addCallback(lambda ign: n.create_subdirectory(u"subdir_ro", metadata={"no-write": True}))
6520+            d.addCallback(lambda ign: n.get(u"subdir_ro"))
6521+            d.addCallback(lambda subdir_ro: self.failUnless(subdir_ro.is_readonly(), subdir_ro))
6522+
6523+            return d
6524+
6525+        d.addCallback(_then)
6526+
6527+        d.addErrback(self.explain_error)
6528         return d
6529 
6530hunk ./src/allmydata/test/test_dirnode.py 581
6531-    def test_initial_children(self):
6532-        self.basedir = "dirnode/Dirnode/test_initial_children"
6533-        self.set_up_grid()
6534+
6535+    def _do_initial_children_test(self, mdmf=False):
6536         c = self.g.clients[0]
6537         nm = c.nodemaker
6538 
6539hunk ./src/allmydata/test/test_dirnode.py 597
6540                 u"empty_litdir": (nm.create_from_cap(empty_litdir_uri), {}),
6541                 u"tiny_litdir": (nm.create_from_cap(tiny_litdir_uri), {}),
6542                 }
6543-        d = c.create_dirnode(kids)
6544-       
6545+        d = None
6546+        if mdmf:
6547+            d = c.create_dirnode(kids, version=MDMF_VERSION)
6548+        else:
6549+            d = c.create_dirnode(kids)
6550         def _created(dn):
6551             self.failUnless(isinstance(dn, dirnode.DirectoryNode))
6552hunk ./src/allmydata/test/test_dirnode.py 604
6553+            backing_node = dn._node
6554+            if mdmf:
6555+                self.failUnlessEqual(backing_node.get_version(),
6556+                                     MDMF_VERSION)
6557+            else:
6558+                self.failUnlessEqual(backing_node.get_version(),
6559+                                     SDMF_VERSION)
6560             self.failUnless(dn.is_mutable())
6561             self.failIf(dn.is_readonly())
6562             self.failIf(dn.is_unknown())
6563hunk ./src/allmydata/test/test_dirnode.py 619
6564             rep = str(dn)
6565             self.failUnless("RW-MUT" in rep)
6566             return dn.list()
6567-        d.addCallback(_created)
6568-       
6569+
6570         def _check_kids(children):
6571             self.failUnlessReallyEqual(set(children.keys()),
6572                                        set([one_nfc, u"two", u"mut", u"fut", u"fro",
6573hunk ./src/allmydata/test/test_dirnode.py 623
6574-                                            u"fut-unic", u"fro-unic", u"empty_litdir", u"tiny_litdir"]))
6575+                                        u"fut-unic", u"fro-unic", u"empty_litdir", u"tiny_litdir"]))
6576             one_node, one_metadata = children[one_nfc]
6577             two_node, two_metadata = children[u"two"]
6578             mut_node, mut_metadata = children[u"mut"]
6579hunk ./src/allmydata/test/test_dirnode.py 683
6580             d2.addCallback(lambda children: children[u"short"][0].read(MemAccum()))
6581             d2.addCallback(lambda accum: self.failUnlessReallyEqual(accum.data, "The end."))
6582             return d2
6583-
6584+        d.addCallback(_created)
6585         d.addCallback(_check_kids)
6586 
6587         d.addCallback(lambda ign: nm.create_new_mutable_directory(kids))
6588hunk ./src/allmydata/test/test_dirnode.py 707
6589                                       bad_kids2))
6590         return d
6591 
6592+    def _do_basic_test(self, mdmf=False):
6593+        c = self.g.clients[0]
6594+        d = None
6595+        if mdmf:
6596+            d = c.create_dirnode(version=MDMF_VERSION)
6597+        else:
6598+            d = c.create_dirnode()
6599+        def _done(res):
6600+            self.failUnless(isinstance(res, dirnode.DirectoryNode))
6601+            self.failUnless(res.is_mutable())
6602+            self.failIf(res.is_readonly())
6603+            self.failIf(res.is_unknown())
6604+            self.failIf(res.is_allowed_in_immutable_directory())
6605+            res.raise_error()
6606+            rep = str(res)
6607+            self.failUnless("RW-MUT" in rep)
6608+        d.addCallback(_done)
6609+        return d
6610+
6611+    def test_basic(self):
6612+        self.basedir = "dirnode/Dirnode/test_basic"
6613+        self.set_up_grid()
6614+        return self._do_basic_test()
6615+
6616+    def test_basic_mdmf(self):
6617+        self.basedir = "dirnode/Dirnode/test_basic_mdmf"
6618+        self.set_up_grid()
6619+        return self._do_basic_test(mdmf=True)
6620+
6621+    def test_initial_children(self):
6622+        self.basedir = "dirnode/Dirnode/test_initial_children"
6623+        self.set_up_grid()
6624+        return self._do_initial_children_test()
6625+
6626     def test_immutable(self):
6627         self.basedir = "dirnode/Dirnode/test_immutable"
6628         self.set_up_grid()
6629hunk ./src/allmydata/test/test_dirnode.py 1025
6630         d.addCallback(_done)
6631         return d
6632 
6633-    def _test_deepcheck_create(self):
6634+    def _test_deepcheck_create(self, version=SDMF_VERSION):
6635         # create a small tree with a loop, and some non-directories
6636         #  root/
6637         #  root/subdir/
6638hunk ./src/allmydata/test/test_dirnode.py 1033
6639         #  root/subdir/link -> root
6640         #  root/rodir
6641         c = self.g.clients[0]
6642-        d = c.create_dirnode()
6643+        d = c.create_dirnode(version=version)
6644         def _created_root(rootnode):
6645             self._rootnode = rootnode
6646hunk ./src/allmydata/test/test_dirnode.py 1036
6647+            self.failUnlessEqual(rootnode._node.get_version(), version)
6648             return rootnode.create_subdirectory(u"subdir")
6649         d.addCallback(_created_root)
6650         def _created_subdir(subdir):
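The hunks above apply one refactoring over and over: an SDMF-only test grows a `version=SDMF_VERSION` keyword, asserts the backing node's version, and gains a thin MDMF wrapper. A minimal standalone sketch of that pattern (note: `FakeDirnode` and `create_dirnode` here are stand-ins, not the real Tahoe-LAFS API; only the `SDMF_VERSION`/`MDMF_VERSION` names mirror the patch):

```python
# Standalone sketch of the parameterize-by-version refactor used in
# these hunks. FakeDirnode/create_dirnode are hypothetical stand-ins
# for the client API; the constants mirror the real ones by name only.
SDMF_VERSION = 0
MDMF_VERSION = 1

class FakeDirnode:
    def __init__(self, version):
        self._version = version
    def get_version(self):
        return self._version

def create_dirnode(version=SDMF_VERSION):
    # In the patch, c.create_dirnode() grows this same keyword argument.
    return FakeDirnode(version)

def do_version_test(version=SDMF_VERSION):
    # Shared helper: create the node, then assert the backing node
    # really carries the requested version, as _test_deepcheck_create
    # does with rootnode._node.get_version().
    node = create_dirnode(version=version)
    assert node.get_version() == version
    return node

def test_sdmf():
    return do_version_test()

def test_mdmf():
    return do_version_test(version=MDMF_VERSION)
```

The payoff of this shape is that every existing SDMF test body is reused verbatim for MDMF; only the two-line wrappers differ.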
6651hunk ./src/allmydata/test/test_dirnode.py 1075
6652         d.addCallback(_check_results)
6653         return d
6654 
6655+    def test_deepcheck_mdmf(self):
6656+        self.basedir = "dirnode/Dirnode/test_deepcheck_mdmf"
6657+        self.set_up_grid()
6658+        d = self._test_deepcheck_create(MDMF_VERSION)
6659+        d.addCallback(lambda rootnode: rootnode.start_deep_check().when_done())
6660+        def _check_results(r):
6661+            self.failUnless(IDeepCheckResults.providedBy(r))
6662+            c = r.get_counters()
6663+            self.failUnlessReallyEqual(c,
6664+                                       {"count-objects-checked": 4,
6665+                                        "count-objects-healthy": 4,
6666+                                        "count-objects-unhealthy": 0,
6667+                                        "count-objects-unrecoverable": 0,
6668+                                        "count-corrupt-shares": 0,
6669+                                        })
6670+            self.failIf(r.get_corrupt_shares())
6671+            self.failUnlessReallyEqual(len(r.get_all_results()), 4)
6672+        d.addCallback(_check_results)
6673+        return d
6674+
6675     def test_deepcheck_and_repair(self):
6676         self.basedir = "dirnode/Dirnode/test_deepcheck_and_repair"
6677         self.set_up_grid()
6678hunk ./src/allmydata/test/test_dirnode.py 1124
6679         d.addCallback(_check_results)
6680         return d
6681 
6682+    def test_deepcheck_and_repair_mdmf(self):
6683+        self.basedir = "dirnode/Dirnode/test_deepcheck_and_repair_mdmf"
6684+        self.set_up_grid()
6685+        d = self._test_deepcheck_create(version=MDMF_VERSION)
6686+        d.addCallback(lambda rootnode:
6687+                      rootnode.start_deep_check_and_repair().when_done())
6688+        def _check_results(r):
6689+            self.failUnless(IDeepCheckAndRepairResults.providedBy(r))
6690+            c = r.get_counters()
6691+            self.failUnlessReallyEqual(c,
6692+                                       {"count-objects-checked": 4,
6693+                                        "count-objects-healthy-pre-repair": 4,
6694+                                        "count-objects-unhealthy-pre-repair": 0,
6695+                                        "count-objects-unrecoverable-pre-repair": 0,
6696+                                        "count-corrupt-shares-pre-repair": 0,
6697+                                        "count-objects-healthy-post-repair": 4,
6698+                                        "count-objects-unhealthy-post-repair": 0,
6699+                                        "count-objects-unrecoverable-post-repair": 0,
6700+                                        "count-corrupt-shares-post-repair": 0,
6701+                                        "count-repairs-attempted": 0,
6702+                                        "count-repairs-successful": 0,
6703+                                        "count-repairs-unsuccessful": 0,
6704+                                        })
6705+            self.failIf(r.get_corrupt_shares())
6706+            self.failIf(r.get_remaining_corrupt_shares())
6707+            self.failUnlessReallyEqual(len(r.get_all_results()), 4)
6708+        d.addCallback(_check_results)
6709+        return d
6710+
6711     def _mark_file_bad(self, rootnode):
6712         self.delete_shares_numbered(rootnode.get_uri(), [0])
6713         return rootnode
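The counter dictionaries asserted in `test_deepcheck_mdmf` and `test_deepcheck_problems_mdmf` above are per-object tallies over the 4-object tree (root, subdir, the loop link's target, rodir) built by `_test_deepcheck_create`. A hypothetical sketch of that aggregation (the real checker walks `ICheckResults` objects, not booleans; this just shows how 4 healthy objects, or 3 healthy plus 1 marked bad, produce the dicts asserted above):

```python
# Hypothetical tally of deep-check counters over per-object health
# booleans. The real deep-checker aggregates ICheckResults; this only
# illustrates where the asserted numbers come from.
def tally_deep_check(results):
    counters = {"count-objects-checked": 0,
                "count-objects-healthy": 0,
                "count-objects-unhealthy": 0,
                "count-objects-unrecoverable": 0,
                "count-corrupt-shares": 0}
    for healthy in results:
        counters["count-objects-checked"] += 1
        if healthy:
            counters["count-objects-healthy"] += 1
        else:
            counters["count-objects-unhealthy"] += 1
    return counters
```

With `[True] * 4` this reproduces the all-healthy dict from `test_deepcheck_mdmf`; with one `False` (one object's share deleted by `_mark_file_bad`) it reproduces the 3-healthy/1-unhealthy dict from `test_deepcheck_problems_mdmf`.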
6714hunk ./src/allmydata/test/test_dirnode.py 1176
6715         d.addCallback(_check_results)
6716         return d
6717 
6718-    def test_readonly(self):
6719-        self.basedir = "dirnode/Dirnode/test_readonly"
6720+    def test_deepcheck_problems_mdmf(self):
6721+        self.basedir = "dirnode/Dirnode/test_deepcheck_problems_mdmf"
6722         self.set_up_grid()
6723hunk ./src/allmydata/test/test_dirnode.py 1179
6724+        d = self._test_deepcheck_create(version=MDMF_VERSION)
6725+        d.addCallback(lambda rootnode: self._mark_file_bad(rootnode))
6726+        d.addCallback(lambda rootnode: rootnode.start_deep_check().when_done())
6727+        def _check_results(r):
6728+            c = r.get_counters()
6729+            self.failUnlessReallyEqual(c,
6730+                                       {"count-objects-checked": 4,
6731+                                        "count-objects-healthy": 3,
6732+                                        "count-objects-unhealthy": 1,
6733+                                        "count-objects-unrecoverable": 0,
6734+                                        "count-corrupt-shares": 0,
6735+                                        })
6736+            #self.failUnlessReallyEqual(len(r.get_problems()), 1) # TODO
6737+        d.addCallback(_check_results)
6738+        return d
6739+
6740+    def _do_readonly_test(self, version=SDMF_VERSION):
6741         c = self.g.clients[0]
6742         nm = c.nodemaker
6743         filecap = make_chk_file_uri(1234)
6744hunk ./src/allmydata/test/test_dirnode.py 1202
6745         filenode = nm.create_from_cap(filecap)
6746         uploadable = upload.Data("some data", convergence="some convergence string")
6747 
6748-        d = c.create_dirnode()
6749+        d = c.create_dirnode(version=version)
6750         def _created(rw_dn):
6751hunk ./src/allmydata/test/test_dirnode.py 1204
6752+            backing_node = rw_dn._node
6753+            self.failUnlessEqual(backing_node.get_version(), version)
6754             d2 = rw_dn.set_uri(u"child", filecap, filecap)
6755             d2.addCallback(lambda res: rw_dn)
6756             return d2
6757hunk ./src/allmydata/test/test_dirnode.py 1245
6758         d.addCallback(_listed)
6759         return d
6760 
6761+    def test_readonly(self):
6762+        self.basedir = "dirnode/Dirnode/test_readonly"
6763+        self.set_up_grid()
6764+        return self._do_readonly_test()
6765+
6766+    def test_readonly_mdmf(self):
6767+        self.basedir = "dirnode/Dirnode/test_readonly_mdmf"
6768+        self.set_up_grid()
6769+        return self._do_readonly_test(version=MDMF_VERSION)
6770+
6771     def failUnlessGreaterThan(self, a, b):
6772         self.failUnless(a > b, "%r should be > %r" % (a, b))
6773 
6774hunk ./src/allmydata/test/test_dirnode.py 1264
6775     def test_create(self):
6776         self.basedir = "dirnode/Dirnode/test_create"
6777         self.set_up_grid()
6778-        c = self.g.clients[0]
6779-
6780-        self.expected_manifest = []
6781-        self.expected_verifycaps = set()
6782-        self.expected_storage_indexes = set()
6783-
6784-        d = c.create_dirnode()
6785-        def _then(n):
6786-            # /
6787-            self.rootnode = n
6788-            self.failUnless(n.is_mutable())
6789-            u = n.get_uri()
6790-            self.failUnless(u)
6791-            self.failUnless(u.startswith("URI:DIR2:"), u)
6792-            u_ro = n.get_readonly_uri()
6793-            self.failUnless(u_ro.startswith("URI:DIR2-RO:"), u_ro)
6794-            u_v = n.get_verify_cap().to_string()
6795-            self.failUnless(u_v.startswith("URI:DIR2-Verifier:"), u_v)
6796-            u_r = n.get_repair_cap().to_string()
6797-            self.failUnlessReallyEqual(u_r, u)
6798-            self.expected_manifest.append( ((), u) )
6799-            self.expected_verifycaps.add(u_v)
6800-            si = n.get_storage_index()
6801-            self.expected_storage_indexes.add(base32.b2a(si))
6802-            expected_si = n._uri.get_storage_index()
6803-            self.failUnlessReallyEqual(si, expected_si)
6804-
6805-            d = n.list()
6806-            d.addCallback(lambda res: self.failUnlessEqual(res, {}))
6807-            d.addCallback(lambda res: n.has_child(u"missing"))
6808-            d.addCallback(lambda res: self.failIf(res))
6809-
6810-            fake_file_uri = make_mutable_file_uri()
6811-            other_file_uri = make_mutable_file_uri()
6812-            m = c.nodemaker.create_from_cap(fake_file_uri)
6813-            ffu_v = m.get_verify_cap().to_string()
6814-            self.expected_manifest.append( ((u"child",) , m.get_uri()) )
6815-            self.expected_verifycaps.add(ffu_v)
6816-            self.expected_storage_indexes.add(base32.b2a(m.get_storage_index()))
6817-            d.addCallback(lambda res: n.set_uri(u"child",
6818-                                                fake_file_uri, fake_file_uri))
6819-            d.addCallback(lambda res:
6820-                          self.shouldFail(ExistingChildError, "set_uri-no",
6821-                                          "child 'child' already exists",
6822-                                          n.set_uri, u"child",
6823-                                          other_file_uri, other_file_uri,
6824-                                          overwrite=False))
6825-            # /
6826-            # /child = mutable
6827-
6828-            d.addCallback(lambda res: n.create_subdirectory(u"subdir"))
6829-
6830-            # /
6831-            # /child = mutable
6832-            # /subdir = directory
6833-            def _created(subdir):
6834-                self.failUnless(isinstance(subdir, dirnode.DirectoryNode))
6835-                self.subdir = subdir
6836-                new_v = subdir.get_verify_cap().to_string()
6837-                assert isinstance(new_v, str)
6838-                self.expected_manifest.append( ((u"subdir",), subdir.get_uri()) )
6839-                self.expected_verifycaps.add(new_v)
6840-                si = subdir.get_storage_index()
6841-                self.expected_storage_indexes.add(base32.b2a(si))
6842-            d.addCallback(_created)
6843-
6844-            d.addCallback(lambda res:
6845-                          self.shouldFail(ExistingChildError, "mkdir-no",
6846-                                          "child 'subdir' already exists",
6847-                                          n.create_subdirectory, u"subdir",
6848-                                          overwrite=False))
6849-
6850-            d.addCallback(lambda res: n.list())
6851-            d.addCallback(lambda children:
6852-                          self.failUnlessReallyEqual(set(children.keys()),
6853-                                                     set([u"child", u"subdir"])))
6854-
6855-            d.addCallback(lambda res: n.start_deep_stats().when_done())
6856-            def _check_deepstats(stats):
6857-                self.failUnless(isinstance(stats, dict))
6858-                expected = {"count-immutable-files": 0,
6859-                            "count-mutable-files": 1,
6860-                            "count-literal-files": 0,
6861-                            "count-files": 1,
6862-                            "count-directories": 2,
6863-                            "size-immutable-files": 0,
6864-                            "size-literal-files": 0,
6865-                            #"size-directories": 616, # varies
6866-                            #"largest-directory": 616,
6867-                            "largest-directory-children": 2,
6868-                            "largest-immutable-file": 0,
6869-                            }
6870-                for k,v in expected.iteritems():
6871-                    self.failUnlessReallyEqual(stats[k], v,
6872-                                               "stats[%s] was %s, not %s" %
6873-                                               (k, stats[k], v))
6874-                self.failUnless(stats["size-directories"] > 500,
6875-                                stats["size-directories"])
6876-                self.failUnless(stats["largest-directory"] > 500,
6877-                                stats["largest-directory"])
6878-                self.failUnlessReallyEqual(stats["size-files-histogram"], [])
6879-            d.addCallback(_check_deepstats)
6880-
6881-            d.addCallback(lambda res: n.build_manifest().when_done())
6882-            def _check_manifest(res):
6883-                manifest = res["manifest"]
6884-                self.failUnlessReallyEqual(sorted(manifest),
6885-                                           sorted(self.expected_manifest))
6886-                stats = res["stats"]
6887-                _check_deepstats(stats)
6888-                self.failUnlessReallyEqual(self.expected_verifycaps,
6889-                                           res["verifycaps"])
6890-                self.failUnlessReallyEqual(self.expected_storage_indexes,
6891-                                           res["storage-index"])
6892-            d.addCallback(_check_manifest)
6893-
6894-            def _add_subsubdir(res):
6895-                return self.subdir.create_subdirectory(u"subsubdir")
6896-            d.addCallback(_add_subsubdir)
6897-            # /
6898-            # /child = mutable
6899-            # /subdir = directory
6900-            # /subdir/subsubdir = directory
6901-            d.addCallback(lambda res: n.get_child_at_path(u"subdir/subsubdir"))
6902-            d.addCallback(lambda subsubdir:
6903-                          self.failUnless(isinstance(subsubdir,
6904-                                                     dirnode.DirectoryNode)))
6905-            d.addCallback(lambda res: n.get_child_at_path(u""))
6906-            d.addCallback(lambda res: self.failUnlessReallyEqual(res.get_uri(),
6907-                                                                 n.get_uri()))
6908-
6909-            d.addCallback(lambda res: n.get_metadata_for(u"child"))
6910-            d.addCallback(lambda metadata:
6911-                          self.failUnlessEqual(set(metadata.keys()),
6912-                                               set(["tahoe"])))
6913-
6914-            d.addCallback(lambda res:
6915-                          self.shouldFail(NoSuchChildError, "gcamap-no",
6916-                                          "nope",
6917-                                          n.get_child_and_metadata_at_path,
6918-                                          u"subdir/nope"))
6919-            d.addCallback(lambda res:
6920-                          n.get_child_and_metadata_at_path(u""))
6921-            def _check_child_and_metadata1(res):
6922-                child, metadata = res
6923-                self.failUnless(isinstance(child, dirnode.DirectoryNode))
6924-                # edge-metadata needs at least one path segment
6925-                self.failUnlessEqual(set(metadata.keys()), set([]))
6926-            d.addCallback(_check_child_and_metadata1)
6927-            d.addCallback(lambda res:
6928-                          n.get_child_and_metadata_at_path(u"child"))
6929-
6930-            def _check_child_and_metadata2(res):
6931-                child, metadata = res
6932-                self.failUnlessReallyEqual(child.get_uri(),
6933-                                           fake_file_uri)
6934-                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
6935-            d.addCallback(_check_child_and_metadata2)
6936-
6937-            d.addCallback(lambda res:
6938-                          n.get_child_and_metadata_at_path(u"subdir/subsubdir"))
6939-            def _check_child_and_metadata3(res):
6940-                child, metadata = res
6941-                self.failUnless(isinstance(child, dirnode.DirectoryNode))
6942-                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
6943-            d.addCallback(_check_child_and_metadata3)
6944-
6945-            # set_uri + metadata
6946-            # it should be possible to add a child without any metadata
6947-            d.addCallback(lambda res: n.set_uri(u"c2",
6948-                                                fake_file_uri, fake_file_uri,
6949-                                                {}))
6950-            d.addCallback(lambda res: n.get_metadata_for(u"c2"))
6951-            d.addCallback(lambda metadata:
6952-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6953-
6954-            # You can't override the link timestamps.
6955-            d.addCallback(lambda res: n.set_uri(u"c2",
6956-                                                fake_file_uri, fake_file_uri,
6957-                                                { 'tahoe': {'linkcrtime': "bogus"}}))
6958-            d.addCallback(lambda res: n.get_metadata_for(u"c2"))
6959-            def _has_good_linkcrtime(metadata):
6960-                self.failUnless(metadata.has_key('tahoe'))
6961-                self.failUnless(metadata['tahoe'].has_key('linkcrtime'))
6962-                self.failIfEqual(metadata['tahoe']['linkcrtime'], 'bogus')
6963-            d.addCallback(_has_good_linkcrtime)
6964-
6965-            # if we don't set any defaults, the child should get timestamps
6966-            d.addCallback(lambda res: n.set_uri(u"c3",
6967-                                                fake_file_uri, fake_file_uri))
6968-            d.addCallback(lambda res: n.get_metadata_for(u"c3"))
6969-            d.addCallback(lambda metadata:
6970-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6971-
6972-            # we can also add specific metadata at set_uri() time
6973-            d.addCallback(lambda res: n.set_uri(u"c4",
6974-                                                fake_file_uri, fake_file_uri,
6975-                                                {"key": "value"}))
6976-            d.addCallback(lambda res: n.get_metadata_for(u"c4"))
6977-            d.addCallback(lambda metadata:
6978-                              self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
6979-                                              (metadata['key'] == "value"), metadata))
6980-
6981-            d.addCallback(lambda res: n.delete(u"c2"))
6982-            d.addCallback(lambda res: n.delete(u"c3"))
6983-            d.addCallback(lambda res: n.delete(u"c4"))
6984-
6985-            # set_node + metadata
6986-            # it should be possible to add a child without any metadata except for timestamps
6987-            d.addCallback(lambda res: n.set_node(u"d2", n, {}))
6988-            d.addCallback(lambda res: c.create_dirnode())
6989-            d.addCallback(lambda n2:
6990-                          self.shouldFail(ExistingChildError, "set_node-no",
6991-                                          "child 'd2' already exists",
6992-                                          n.set_node, u"d2", n2,
6993-                                          overwrite=False))
6994-            d.addCallback(lambda res: n.get_metadata_for(u"d2"))
6995-            d.addCallback(lambda metadata:
6996-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
6997-
6998-            # if we don't set any defaults, the child should get timestamps
6999-            d.addCallback(lambda res: n.set_node(u"d3", n))
7000-            d.addCallback(lambda res: n.get_metadata_for(u"d3"))
7001-            d.addCallback(lambda metadata:
7002-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
7003-
7004-            # we can also add specific metadata at set_node() time
7005-            d.addCallback(lambda res: n.set_node(u"d4", n,
7006-                                                {"key": "value"}))
7007-            d.addCallback(lambda res: n.get_metadata_for(u"d4"))
7008-            d.addCallback(lambda metadata:
7009-                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
7010-                                          (metadata["key"] == "value"), metadata))
7011-
7012-            d.addCallback(lambda res: n.delete(u"d2"))
7013-            d.addCallback(lambda res: n.delete(u"d3"))
7014-            d.addCallback(lambda res: n.delete(u"d4"))
7015-
7016-            # metadata through set_children()
7017-            d.addCallback(lambda res:
7018-                          n.set_children({
7019-                              u"e1": (fake_file_uri, fake_file_uri),
7020-                              u"e2": (fake_file_uri, fake_file_uri, {}),
7021-                              u"e3": (fake_file_uri, fake_file_uri,
7022-                                      {"key": "value"}),
7023-                              }))
7024-            d.addCallback(lambda n2: self.failUnlessIdentical(n2, n))
7025-            d.addCallback(lambda res:
7026-                          self.shouldFail(ExistingChildError, "set_children-no",
7027-                                          "child 'e1' already exists",
7028-                                          n.set_children,
7029-                                          { u"e1": (other_file_uri,
7030-                                                    other_file_uri),
7031-                                            u"new": (other_file_uri,
7032-                                                     other_file_uri),
7033-                                            },
7034-                                          overwrite=False))
7035-            # and 'new' should not have been created
7036-            d.addCallback(lambda res: n.list())
7037-            d.addCallback(lambda children: self.failIf(u"new" in children))
7038-            d.addCallback(lambda res: n.get_metadata_for(u"e1"))
7039-            d.addCallback(lambda metadata:
7040-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
7041-            d.addCallback(lambda res: n.get_metadata_for(u"e2"))
7042-            d.addCallback(lambda metadata:
7043-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
7044-            d.addCallback(lambda res: n.get_metadata_for(u"e3"))
7045-            d.addCallback(lambda metadata:
7046-                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
7047-                                          (metadata["key"] == "value"), metadata))
7048-
7049-            d.addCallback(lambda res: n.delete(u"e1"))
7050-            d.addCallback(lambda res: n.delete(u"e2"))
7051-            d.addCallback(lambda res: n.delete(u"e3"))
7052-
7053-            # metadata through set_nodes()
7054-            d.addCallback(lambda res:
7055-                          n.set_nodes({ u"f1": (n, None),
7056-                                        u"f2": (n, {}),
7057-                                        u"f3": (n, {"key": "value"}),
7058-                                        }))
7059-            d.addCallback(lambda n2: self.failUnlessIdentical(n2, n))
7060-            d.addCallback(lambda res:
7061-                          self.shouldFail(ExistingChildError, "set_nodes-no",
7062-                                          "child 'f1' already exists",
7063-                                          n.set_nodes, { u"f1": (n, None),
7064-                                                         u"new": (n, None), },
7065-                                          overwrite=False))
7066-            # and 'new' should not have been created
7067-            d.addCallback(lambda res: n.list())
7068-            d.addCallback(lambda children: self.failIf(u"new" in children))
7069-            d.addCallback(lambda res: n.get_metadata_for(u"f1"))
7070-            d.addCallback(lambda metadata:
7071-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
7072-            d.addCallback(lambda res: n.get_metadata_for(u"f2"))
7073-            d.addCallback(lambda metadata:
7074-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
7075-            d.addCallback(lambda res: n.get_metadata_for(u"f3"))
7076-            d.addCallback(lambda metadata:
7077-                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
7078-                                          (metadata["key"] == "value"), metadata))
7079-
7080-            d.addCallback(lambda res: n.delete(u"f1"))
7081-            d.addCallback(lambda res: n.delete(u"f2"))
7082-            d.addCallback(lambda res: n.delete(u"f3"))
7083-
7084-
7085-            d.addCallback(lambda res:
7086-                          n.set_metadata_for(u"child",
7087-                                             {"tags": ["web2.0-compatible"], "tahoe": {"bad": "mojo"}}))
7088-            d.addCallback(lambda n1: n1.get_metadata_for(u"child"))
7089-            d.addCallback(lambda metadata:
7090-                          self.failUnless((set(metadata.keys()) == set(["tags", "tahoe"])) and
7091-                                          metadata["tags"] == ["web2.0-compatible"] and
7092-                                          "bad" not in metadata["tahoe"], metadata))
7093-
7094-            d.addCallback(lambda res:
7095-                          self.shouldFail(NoSuchChildError, "set_metadata_for-nosuch", "",
7096-                                          n.set_metadata_for, u"nosuch", {}))
7097-
7098-
7099-            def _start(res):
7100-                self._start_timestamp = time.time()
7101-            d.addCallback(_start)
7102-            # simplejson-1.7.1 (as shipped on Ubuntu 'gutsy') rounds all
7103-            # floats to hundredeths (it uses str(num) instead of repr(num)).
7104-            # simplejson-1.7.3 does not have this bug. To prevent this bug
7105-            # from causing the test to fail, stall for more than a few
7106-            # hundrededths of a second.
7107-            d.addCallback(self.stall, 0.1)
7108-            d.addCallback(lambda res: n.add_file(u"timestamps",
7109-                                                 upload.Data("stamp me", convergence="some convergence string")))
7110-            d.addCallback(self.stall, 0.1)
7111-            def _stop(res):
7112-                self._stop_timestamp = time.time()
7113-            d.addCallback(_stop)
7114-
7115-            d.addCallback(lambda res: n.get_metadata_for(u"timestamps"))
7116-            def _check_timestamp1(metadata):
7117-                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
7118-                tahoe_md = metadata["tahoe"]
7119-                self.failUnlessEqual(set(tahoe_md.keys()), set(["linkcrtime", "linkmotime"]))
7120-
7121-                self.failUnlessGreaterOrEqualThan(tahoe_md["linkcrtime"],
7122-                                                  self._start_timestamp)
7123-                self.failUnlessGreaterOrEqualThan(self._stop_timestamp,
7124-                                                  tahoe_md["linkcrtime"])
7125-                self.failUnlessGreaterOrEqualThan(tahoe_md["linkmotime"],
7126-                                                  self._start_timestamp)
7127-                self.failUnlessGreaterOrEqualThan(self._stop_timestamp,
7128-                                                  tahoe_md["linkmotime"])
7129-                # Our current timestamp rules say that replacing an existing
7130-                # child should preserve the 'linkcrtime' but update the
7131-                # 'linkmotime'
7132-                self._old_linkcrtime = tahoe_md["linkcrtime"]
7133-                self._old_linkmotime = tahoe_md["linkmotime"]
7134-            d.addCallback(_check_timestamp1)
7135-            d.addCallback(self.stall, 2.0) # accomodate low-res timestamps
7136-            d.addCallback(lambda res: n.set_node(u"timestamps", n))
7137-            d.addCallback(lambda res: n.get_metadata_for(u"timestamps"))
7138-            def _check_timestamp2(metadata):
7139-                self.failUnlessIn("tahoe", metadata)
7140-                tahoe_md = metadata["tahoe"]
7141-                self.failUnlessEqual(set(tahoe_md.keys()), set(["linkcrtime", "linkmotime"]))
7142-
7143-                self.failUnlessReallyEqual(tahoe_md["linkcrtime"], self._old_linkcrtime)
7144-                self.failUnlessGreaterThan(tahoe_md["linkmotime"], self._old_linkmotime)
7145-                return n.delete(u"timestamps")
7146-            d.addCallback(_check_timestamp2)
7147-
7148-            d.addCallback(lambda res: n.delete(u"subdir"))
7149-            d.addCallback(lambda old_child:
7150-                          self.failUnlessReallyEqual(old_child.get_uri(),
7151-                                                     self.subdir.get_uri()))
7152-
7153-            d.addCallback(lambda res: n.list())
7154-            d.addCallback(lambda children:
7155-                          self.failUnlessReallyEqual(set(children.keys()),
7156-                                                     set([u"child"])))
7157-
7158-            uploadable1 = upload.Data("some data", convergence="converge")
7159-            d.addCallback(lambda res: n.add_file(u"newfile", uploadable1))
7160-            d.addCallback(lambda newnode:
7161-                          self.failUnless(IImmutableFileNode.providedBy(newnode)))
7162-            uploadable2 = upload.Data("some data", convergence="stuff")
7163-            d.addCallback(lambda res:
7164-                          self.shouldFail(ExistingChildError, "add_file-no",
7165-                                          "child 'newfile' already exists",
7166-                                          n.add_file, u"newfile",
7167-                                          uploadable2,
7168-                                          overwrite=False))
7169-            d.addCallback(lambda res: n.list())
7170-            d.addCallback(lambda children:
7171-                          self.failUnlessReallyEqual(set(children.keys()),
7172-                                                     set([u"child", u"newfile"])))
7173-            d.addCallback(lambda res: n.get_metadata_for(u"newfile"))
7174-            d.addCallback(lambda metadata:
7175-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
7176-
7177-            uploadable3 = upload.Data("some data", convergence="converge")
7178-            d.addCallback(lambda res: n.add_file(u"newfile-metadata",
7179-                                                 uploadable3,
7180-                                                 {"key": "value"}))
7181-            d.addCallback(lambda newnode:
7182-                          self.failUnless(IImmutableFileNode.providedBy(newnode)))
7183-            d.addCallback(lambda res: n.get_metadata_for(u"newfile-metadata"))
7184-            d.addCallback(lambda metadata:
7185-                              self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
7186-                                              (metadata['key'] == "value"), metadata))
7187-            d.addCallback(lambda res: n.delete(u"newfile-metadata"))
7188-
7189-            d.addCallback(lambda res: n.create_subdirectory(u"subdir2"))
7190-            def _created2(subdir2):
7191-                self.subdir2 = subdir2
7192-                # put something in the way, to make sure it gets overwritten
7193-                return subdir2.add_file(u"child", upload.Data("overwrite me",
7194-                                                              "converge"))
7195-            d.addCallback(_created2)
7196-
7197-            d.addCallback(lambda res:
7198-                          n.move_child_to(u"child", self.subdir2))
7199-            d.addCallback(lambda res: n.list())
7200-            d.addCallback(lambda children:
7201-                          self.failUnlessReallyEqual(set(children.keys()),
7202-                                                     set([u"newfile", u"subdir2"])))
7203-            d.addCallback(lambda res: self.subdir2.list())
7204-            d.addCallback(lambda children:
7205-                          self.failUnlessReallyEqual(set(children.keys()),
7206-                                                     set([u"child"])))
7207-            d.addCallback(lambda res: self.subdir2.get(u"child"))
7208-            d.addCallback(lambda child:
7209-                          self.failUnlessReallyEqual(child.get_uri(),
7210-                                                     fake_file_uri))
7211-
7212-            # move it back, using new_child_name=
7213-            d.addCallback(lambda res:
7214-                          self.subdir2.move_child_to(u"child", n, u"newchild"))
7215-            d.addCallback(lambda res: n.list())
7216-            d.addCallback(lambda children:
7217-                          self.failUnlessReallyEqual(set(children.keys()),
7218-                                                     set([u"newchild", u"newfile",
7219-                                                          u"subdir2"])))
7220-            d.addCallback(lambda res: self.subdir2.list())
7221-            d.addCallback(lambda children:
7222-                          self.failUnlessReallyEqual(set(children.keys()), set([])))
7223-
7224-            # now make sure that we honor overwrite=False
7225-            d.addCallback(lambda res:
7226-                          self.subdir2.set_uri(u"newchild",
7227-                                               other_file_uri, other_file_uri))
7228-
7229-            d.addCallback(lambda res:
7230-                          self.shouldFail(ExistingChildError, "move_child_to-no",
7231-                                          "child 'newchild' already exists",
7232-                                          n.move_child_to, u"newchild",
7233-                                          self.subdir2,
7234-                                          overwrite=False))
7235-            d.addCallback(lambda res: self.subdir2.get(u"newchild"))
7236-            d.addCallback(lambda child:
7237-                          self.failUnlessReallyEqual(child.get_uri(),
7238-                                                     other_file_uri))
7239-
7240-
7241-            # Setting the no-write field should diminish a mutable cap to read-only
7242-            # (for both files and directories).
7243-
7244-            d.addCallback(lambda ign: n.set_uri(u"mutable", other_file_uri, other_file_uri))
7245-            d.addCallback(lambda ign: n.get(u"mutable"))
7246-            d.addCallback(lambda mutable: self.failIf(mutable.is_readonly(), mutable))
7247-            d.addCallback(lambda ign: n.set_metadata_for(u"mutable", {"no-write": True}))
7248-            d.addCallback(lambda ign: n.get(u"mutable"))
7249-            d.addCallback(lambda mutable: self.failUnless(mutable.is_readonly(), mutable))
7250-            d.addCallback(lambda ign: n.set_metadata_for(u"mutable", {"no-write": True}))
7251-            d.addCallback(lambda ign: n.get(u"mutable"))
7252-            d.addCallback(lambda mutable: self.failUnless(mutable.is_readonly(), mutable))
7253-
7254-            d.addCallback(lambda ign: n.get(u"subdir2"))
7255-            d.addCallback(lambda subdir2: self.failIf(subdir2.is_readonly()))
7256-            d.addCallback(lambda ign: n.set_metadata_for(u"subdir2", {"no-write": True}))
7257-            d.addCallback(lambda ign: n.get(u"subdir2"))
7258-            d.addCallback(lambda subdir2: self.failUnless(subdir2.is_readonly(), subdir2))
7259-
7260-            d.addCallback(lambda ign: n.set_uri(u"mutable_ro", other_file_uri, other_file_uri,
7261-                                                metadata={"no-write": True}))
7262-            d.addCallback(lambda ign: n.get(u"mutable_ro"))
7263-            d.addCallback(lambda mutable_ro: self.failUnless(mutable_ro.is_readonly(), mutable_ro))
7264-
7265-            d.addCallback(lambda ign: n.create_subdirectory(u"subdir_ro", metadata={"no-write": True}))
7266-            d.addCallback(lambda ign: n.get(u"subdir_ro"))
7267-            d.addCallback(lambda subdir_ro: self.failUnless(subdir_ro.is_readonly(), subdir_ro))
7268-
7269-            return d
7270-
7271-        d.addCallback(_then)
7272-
7273-        d.addErrback(self.explain_error)
7274-        return d
7275+        return self._do_create_test()
7276 
7277     def test_update_metadata(self):
7278         (t1, t2, t3) = (626644800.0, 634745640.0, 892226160.0)
7279hunk ./src/allmydata/test/test_dirnode.py 1283
7280         self.failUnlessEqual(md4, {"bool": True, "number": 42,
7281                                    "tahoe":{"linkcrtime": t1, "linkmotime": t1}})
7282 
7283-    def test_create_subdirectory(self):
7284-        self.basedir = "dirnode/Dirnode/test_create_subdirectory"
7285-        self.set_up_grid()
7286+    def _do_create_subdirectory_test(self, version=SDMF_VERSION):
7287         c = self.g.clients[0]
7288         nm = c.nodemaker
7289 
7290hunk ./src/allmydata/test/test_dirnode.py 1287
7291-        d = c.create_dirnode()
7292+        d = c.create_dirnode(version=version)
7293         def _then(n):
7294             # /
7295             self.rootnode = n
7296hunk ./src/allmydata/test/test_dirnode.py 1297
7297             kids = {u"kid1": (nm.create_from_cap(fake_file_uri), {}),
7298                     u"kid2": (nm.create_from_cap(other_file_uri), md),
7299                     }
7300-            d = n.create_subdirectory(u"subdir", kids)
7301+            d = n.create_subdirectory(u"subdir", kids,
7302+                                      mutable_version=version)
7303             def _check(sub):
7304                 d = n.get_child_at_path(u"subdir")
7305                 d.addCallback(lambda sub2: self.failUnlessReallyEqual(sub2.get_uri(),
7306hunk ./src/allmydata/test/test_dirnode.py 1314
7307         d.addCallback(_then)
7308         return d
7309 
7310+    def test_create_subdirectory(self):
7311+        self.basedir = "dirnode/Dirnode/test_create_subdirectory"
7312+        self.set_up_grid()
7313+        return self._do_create_subdirectory_test()
7314+
7315+    def test_create_subdirectory_mdmf(self):
7316+        self.basedir = "dirnode/Dirnode/test_create_subdirectory_mdmf"
7317+        self.set_up_grid()
7318+        return self._do_create_subdirectory_test(version=MDMF_VERSION)
7319+
7320+    def test_create_mdmf(self):
7321+        self.basedir = "dirnode/Dirnode/test_create_mdmf"
7322+        self.set_up_grid()
7323+        return self._do_create_test(mdmf=True)
7324+
7325+    def test_mdmf_initial_children(self):
7326+        self.basedir = "dirnode/Dirnode/test_mdmf_initial_children"
7327+        self.set_up_grid()
7328+        return self._do_initial_children_test(mdmf=True)
7329+
7330 class MinimalFakeMutableFile:
7331     def get_writekey(self):
7332         return "writekey"
7333hunk ./src/allmydata/test/test_dirnode.py 1452
7334     implements(IMutableFileNode)
7335     counter = 0
7336     def __init__(self, initial_contents=""):
7337-        self.data = self._get_initial_contents(initial_contents)
7338+        data = self._get_initial_contents(initial_contents)
7339+        self.data = data.read(data.get_size())
7340+        self.data = "".join(self.data)
7341+
7342         counter = FakeMutableFile.counter
7343         FakeMutableFile.counter += 1
7344         writekey = hashutil.ssk_writekey_hash(str(counter))
7345hunk ./src/allmydata/test/test_dirnode.py 1502
7346         pass
7347 
7348     def modify(self, modifier):
7349-        self.data = modifier(self.data, None, True)
7350+        data = modifier(self.data, None, True)
7351+        self.data = data
7352         return defer.succeed(None)
7353 
7354 class FakeNodeMaker(NodeMaker):
7355hunk ./src/allmydata/test/test_dirnode.py 1507
7356-    def create_mutable_file(self, contents="", keysize=None):
7357+    def create_mutable_file(self, contents="", keysize=None, version=None):
7358         return defer.succeed(FakeMutableFile(contents))
7359 
7360 class FakeClient2(Client):
7361hunk ./src/allmydata/test/test_dirnode.py 1706
7362             self.failUnless(n.get_readonly_uri().startswith("imm."), i)
7363 
7364 
7365+
7366 class DeepStats(testutil.ReallyEqualMixin, unittest.TestCase):
7367     timeout = 240 # It takes longer than 120 seconds on Francois's arm box.
7368     def test_stats(self):
7369}
7370[immutable/literal.py: Implement interface changes in literal nodes.
7371Kevan Carstensen <kevan@isnotajoke.com>**20110802020814
7372 Ignore-this: 4371e71a50e65ce2607c4d67d3a32171
7373] {
7374hunk ./src/allmydata/immutable/literal.py 106
7375         d.addCallback(lambda lastSent: consumer)
7376         return d
7377 
7378+    # IReadable, IFileNode, IFilesystemNode
7379+    def get_best_readable_version(self):
7380+        return defer.succeed(self)
7381+
7382+
7383+    def download_best_version(self):
7384+        return defer.succeed(self.u.data)
7385+
7386+
7387+    download_to_data = download_best_version
7388+    get_size_of_best_version = get_current_size
7389+
7390hunk ./src/allmydata/test/test_filenode.py 98
7391         def _check_segment(res):
7392             self.failUnlessEqual(res, DATA[1:1+5])
7393         d.addCallback(_check_segment)
7394+        d.addCallback(lambda ignored: fn1.get_best_readable_version())
7395+        d.addCallback(lambda fn2: self.failUnlessEqual(fn1, fn2))
7396+        d.addCallback(lambda ignored:
7397+            fn1.get_size_of_best_version())
7398+        d.addCallback(lambda size:
7399+            self.failUnlessEqual(size, len(DATA)))
7400+        d.addCallback(lambda ignored:
7401+            fn1.download_to_data())
7402+        d.addCallback(lambda data:
7403+            self.failUnlessEqual(data, DATA))
7404+        d.addCallback(lambda ignored:
7405+            fn1.download_best_version())
7406+        d.addCallback(lambda data:
7407+            self.failUnlessEqual(data, DATA))
7408 
7409         return d
7410 
7411}
7412[immutable/filenode: implement unified filenode interface
7413Kevan Carstensen <kevan@isnotajoke.com>**20110802020905
7414 Ignore-this: d9a442fc285157f134f5d1b4607c6a48
7415] {
7416hunk ./src/allmydata/immutable/filenode.py 8
7417 now = time.time
7418 from zope.interface import implements
7419 from twisted.internet import defer
7420-from twisted.internet.interfaces import IConsumer
7421 
7422hunk ./src/allmydata/immutable/filenode.py 9
7423-from allmydata.interfaces import IImmutableFileNode, IUploadResults
7424 from allmydata import uri
7425hunk ./src/allmydata/immutable/filenode.py 10
7426+from twisted.internet.interfaces import IConsumer
7427+from twisted.protocols import basic
7428+from foolscap.api import eventually
7429+from allmydata.interfaces import IImmutableFileNode, ICheckable, \
7430+     IDownloadTarget, IUploadResults
7431+from allmydata.util import dictutil, log, base32, consumer
7432+from allmydata.immutable.checker import Checker
7433 from allmydata.check_results import CheckResults, CheckAndRepairResults
7434 from allmydata.util.dictutil import DictOfSets
7435 from pycryptopp.cipher.aes import AES
7436hunk ./src/allmydata/immutable/filenode.py 285
7437         return self._cnode.check_and_repair(monitor, verify, add_lease)
7438     def check(self, monitor, verify=False, add_lease=False):
7439         return self._cnode.check(monitor, verify, add_lease)
7440+
7441+    def get_best_readable_version(self):
7442+        """
7443+        Return an IReadable of the best version of this file. Since
7444+        immutable files can have only one version, we just return the
7445+        current filenode.
7446+        """
7447+        return defer.succeed(self)
7448+
7449+
7450+    def download_best_version(self):
7451+        """
7452+        Download the best version of this file, returning its contents
7453+        as a bytestring. Since there is only one version of an immutable
7454+        file, we download and return the contents of this file.
7455+        """
7456+        d = consumer.download_to_data(self)
7457+        return d
7458+
7459+    # for an immutable file, download_to_data (specified in IReadable)
7460+    # is the same as download_best_version (specified in IFileNode). For
7461+    # mutable files, the difference is more meaningful, since they can
7462+    # have multiple versions.
7463+    download_to_data = download_best_version
7464+
7465+
7466+    # get_size() (IReadable), get_current_size() (IFilesystemNode), and
7467+    # get_size_of_best_version(IFileNode) are all the same for immutable
7468+    # files.
7469+    get_size_of_best_version = get_current_size
7470hunk ./src/allmydata/test/test_immutable.py 290
7471         d.addCallback(_try_download)
7472         return d
7473 
7474+    def test_download_to_data(self):
7475+        d = self.n.download_to_data()
7476+        d.addCallback(lambda data:
7477+            self.failUnlessEqual(data, common.TEST_DATA))
7478+        return d
7479+
7480+
7481+    def test_download_best_version(self):
7482+        d = self.n.download_best_version()
7483+        d.addCallback(lambda data:
7484+            self.failUnlessEqual(data, common.TEST_DATA))
7485+        return d
7486+
7487+
7488+    def test_get_best_readable_version(self):
7489+        d = self.n.get_best_readable_version()
7490+        d.addCallback(lambda n2:
7491+            self.failUnlessEqual(n2, self.n))
7492+        return d
7493+
7494+    def test_get_size_of_best_version(self):
7495+        d = self.n.get_size_of_best_version()
7496+        d.addCallback(lambda size:
7497+            self.failUnlessEqual(size, len(common.TEST_DATA)))
7498+        return d
7499+
7500 
7501 # XXX extend these tests to show bad behavior of various kinds from servers:
7502 # raising exception from each remove_foo() method, for example
7503}
7504[test/test_mutable: tests for MDMF
7505Kevan Carstensen <kevan@isnotajoke.com>**20110802020924
7506 Ignore-this: 6b5269849b3f987aa6e266a57ee01041
7507 
7508 These tests get their own patch because they cut across many of the
7509 changes made to implement MDMF, which makes them difficult to split
7510 up into the other patches.
7511] {
7512hunk ./src/allmydata/test/test_mutable.py 2
7513 
7514-import struct
7515+import os, re, base64
7516 from cStringIO import StringIO
7517 from twisted.trial import unittest
7518 from twisted.internet import defer, reactor
7519hunk ./src/allmydata/test/test_mutable.py 6
7520+from twisted.internet.interfaces import IConsumer
7521+from zope.interface import implements
7522 from allmydata import uri, client
7523 from allmydata.nodemaker import NodeMaker
7524hunk ./src/allmydata/test/test_mutable.py 10
7525-from allmydata.util import base32
7526+from allmydata.util import base32, consumer, fileutil
7527 from allmydata.util.hashutil import tagged_hash, ssk_writekey_hash, \
7528      ssk_pubkey_fingerprint_hash
7529hunk ./src/allmydata/test/test_mutable.py 13
7530+from allmydata.util.deferredutil import gatherResults
7531 from allmydata.interfaces import IRepairResults, ICheckAndRepairResults, \
7532hunk ./src/allmydata/test/test_mutable.py 15
7533-     NotEnoughSharesError
7534+     NotEnoughSharesError, SDMF_VERSION, MDMF_VERSION
7535 from allmydata.monitor import Monitor
7536 from allmydata.test.common import ShouldFailMixin
7537 from allmydata.test.no_network import GridTestMixin
7538hunk ./src/allmydata/test/test_mutable.py 22
7539 from foolscap.api import eventually, fireEventually
7540 from foolscap.logging import log
7541 from allmydata.storage_client import StorageFarmBroker
7542+from allmydata.storage.common import storage_index_to_dir, si_b2a
7543 
7544 from allmydata.mutable.filenode import MutableFileNode, BackoffAgent
7545 from allmydata.mutable.common import ResponseCache, \
7546hunk ./src/allmydata/test/test_mutable.py 30
7547      NeedMoreDataError, UnrecoverableFileError, UncoordinatedWriteError, \
7548      NotEnoughServersError, CorruptShareError
7549 from allmydata.mutable.retrieve import Retrieve
7550-from allmydata.mutable.publish import Publish
7551+from allmydata.mutable.publish import Publish, MutableFileHandle, \
7552+                                      MutableData, \
7553+                                      DEFAULT_MAX_SEGMENT_SIZE
7554 from allmydata.mutable.servermap import ServerMap, ServermapUpdater
7555hunk ./src/allmydata/test/test_mutable.py 34
7556-from allmydata.mutable.layout import unpack_header, unpack_share
7557+from allmydata.mutable.layout import unpack_header, MDMFSlotReadProxy
7558 from allmydata.mutable.repairer import MustForceRepairError
7559 
7560 import allmydata.test.common_util as testutil
7561hunk ./src/allmydata/test/test_mutable.py 103
7562         self.storage = storage
7563         self.queries = 0
7564     def callRemote(self, methname, *args, **kwargs):
7565+        self.queries += 1
7566         def _call():
7567             meth = getattr(self, methname)
7568             return meth(*args, **kwargs)
7569hunk ./src/allmydata/test/test_mutable.py 110
7570         d = fireEventually()
7571         d.addCallback(lambda res: _call())
7572         return d
7573+
7574     def callRemoteOnly(self, methname, *args, **kwargs):
7575hunk ./src/allmydata/test/test_mutable.py 112
7576+        self.queries += 1
7577         d = self.callRemote(methname, *args, **kwargs)
7578         d.addBoth(lambda ignore: None)
7579         pass
7580hunk ./src/allmydata/test/test_mutable.py 160
7581             chr(ord(original[byte_offset]) ^ 0x01) +
7582             original[byte_offset+1:])
7583 
7584+def add_two(original, byte_offset):
7585+    # It isn't enough to simply flip a bit in the version number, since 1
7586+    # is also valid; XORing with 0x02 (adding two, for verbytes 0 and 1) makes both invalid.
7587+    return (original[:byte_offset] +
7588+            chr(ord(original[byte_offset]) ^ 0x02) +
7589+            original[byte_offset+1:])
7590+
7591 def corrupt(res, s, offset, shnums_to_corrupt=None, offset_offset=0):
7592     # if shnums_to_corrupt is None, corrupt all shares. Otherwise it is a
7593     # list of shnums to corrupt.
7594hunk ./src/allmydata/test/test_mutable.py 170
7595+    ds = []
7596     for peerid in s._peers:
7597         shares = s._peers[peerid]
7598         for shnum in shares:
7599hunk ./src/allmydata/test/test_mutable.py 178
7600                 and shnum not in shnums_to_corrupt):
7601                 continue
7602             data = shares[shnum]
7603-            (version,
7604-             seqnum,
7605-             root_hash,
7606-             IV,
7607-             k, N, segsize, datalen,
7608-             o) = unpack_header(data)
7609-            if isinstance(offset, tuple):
7610-                offset1, offset2 = offset
7611-            else:
7612-                offset1 = offset
7613-                offset2 = 0
7614-            if offset1 == "pubkey":
7615-                real_offset = 107
7616-            elif offset1 in o:
7617-                real_offset = o[offset1]
7618-            else:
7619-                real_offset = offset1
7620-            real_offset = int(real_offset) + offset2 + offset_offset
7621-            assert isinstance(real_offset, int), offset
7622-            shares[shnum] = flip_bit(data, real_offset)
7623-    return res
7624+            # We're feeding the reader all of the share data up front,
7625+            # so it won't need the rref or storage index (hence the
7626+            # None arguments). We use the reader here because it works
7627+            # for both MDMF and SDMF shares.
7628+            reader = MDMFSlotReadProxy(None, None, shnum, data)
7629+            # We need to get the offsets for the next part.
7630+            d = reader.get_verinfo()
7631+            def _do_corruption(verinfo, data, shnum):
7632+                (seqnum,
7633+                 root_hash,
7634+                 IV,
7635+                 segsize,
7636+                 datalen,
7637+                 k, n, prefix, o) = verinfo
7638+                if isinstance(offset, tuple):
7639+                    offset1, offset2 = offset
7640+                else:
7641+                    offset1 = offset
7642+                    offset2 = 0
7643+                if offset1 == "pubkey" and IV:
7644+                    real_offset = 107
7645+                elif offset1 in o:
7646+                    real_offset = o[offset1]
7647+                else:
7648+                    real_offset = offset1
7649+                real_offset = int(real_offset) + offset2 + offset_offset
7650+                assert isinstance(real_offset, int), offset
7651+                if offset1 == 0: # verbyte
7652+                    f = add_two
7653+                else:
7654+                    f = flip_bit
7655+                shares[shnum] = f(data, real_offset)
7656+            d.addCallback(_do_corruption, data, shnum)
7657+            ds.append(d)
7658+    dl = defer.DeferredList(ds)
7659+    dl.addCallback(lambda ignored: res)
7660+    return dl
7661 
7662 def make_storagebroker(s=None, num_peers=10):
7663     if not s:
7664hunk ./src/allmydata/test/test_mutable.py 257
7665             self.failUnlessEqual(len(shnums), 1)
7666         d.addCallback(_created)
7667         return d
7668+    test_create.timeout = 15
7669+
7670+
7671+    def test_create_mdmf(self):
7672+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7673+        def _created(n):
7674+            self.failUnless(isinstance(n, MutableFileNode))
7675+            self.failUnlessEqual(n.get_storage_index(), n._storage_index)
7676+            sb = self.nodemaker.storage_broker
7677+            peer0 = sorted(sb.get_all_serverids())[0]
7678+            shnums = self._storage._peers[peer0].keys()
7679+            self.failUnlessEqual(len(shnums), 1)
7680+        d.addCallback(_created)
7681+        return d
7682+
7683+    def test_single_share(self):
7684+        # Make sure that we tolerate publishing a single share.
7685+        self.nodemaker.default_encoding_parameters['k'] = 1
7686+        self.nodemaker.default_encoding_parameters['happy'] = 1
7687+        self.nodemaker.default_encoding_parameters['n'] = 1
7688+        d = defer.succeed(None)
7689+        for v in (SDMF_VERSION, MDMF_VERSION):
7690+            d.addCallback(lambda ignored:
7691+                self.nodemaker.create_mutable_file(version=v))
7692+            def _created(n):
7693+                self.failUnless(isinstance(n, MutableFileNode))
7694+                self._node = n
7695+                return n
7696+            d.addCallback(_created)
7697+            d.addCallback(lambda n:
7698+                n.overwrite(MutableData("Contents" * 50000)))
7699+            d.addCallback(lambda ignored:
7700+                self._node.download_best_version())
7701+            d.addCallback(lambda contents:
7702+                self.failUnlessEqual(contents, "Contents" * 50000))
7703+        return d
7704+
7705+    def test_max_shares(self):
7706+        self.nodemaker.default_encoding_parameters['n'] = 255
7707+        d = self.nodemaker.create_mutable_file(version=SDMF_VERSION)
7708+        def _created(n):
7709+            self.failUnless(isinstance(n, MutableFileNode))
7710+            self.failUnlessEqual(n.get_storage_index(), n._storage_index)
7711+            sb = self.nodemaker.storage_broker
7712+            num_shares = sum([len(self._storage._peers[x].keys()) for x \
7713+                              in sb.get_all_serverids()])
7714+            self.failUnlessEqual(num_shares, 255)
7715+            self._node = n
7716+            return n
7717+        d.addCallback(_created)
7718+        # Now we upload some contents
7719+        d.addCallback(lambda n:
7720+            n.overwrite(MutableData("contents" * 50000)))
7721+        # ...then download contents
7722+        d.addCallback(lambda ignored:
7723+            self._node.download_best_version())
7724+        # ...and check to make sure everything went okay.
7725+        d.addCallback(lambda contents:
7726+            self.failUnlessEqual("contents" * 50000, contents))
7727+        return d
7728+
7729+    def test_max_shares_mdmf(self):
7730+        # Test how files behave when there are 255 shares.
7731+        self.nodemaker.default_encoding_parameters['n'] = 255
7732+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7733+        def _created(n):
7734+            self.failUnless(isinstance(n, MutableFileNode))
7735+            self.failUnlessEqual(n.get_storage_index(), n._storage_index)
7736+            sb = self.nodemaker.storage_broker
7737+            num_shares = sum([len(self._storage._peers[x].keys()) for x \
7738+                              in sb.get_all_serverids()])
7739+            self.failUnlessEqual(num_shares, 255)
7740+            self._node = n
7741+            return n
7742+        d.addCallback(_created)
7743+        d.addCallback(lambda n:
7744+            n.overwrite(MutableData("contents" * 50000)))
7745+        d.addCallback(lambda ignored:
7746+            self._node.download_best_version())
7747+        d.addCallback(lambda contents:
7748+            self.failUnlessEqual(contents, "contents" * 50000))
7749+        return d
7750+
7751+    def test_mdmf_filenode_cap(self):
7752+        # Test that an MDMF filenode, once created, returns an MDMF URI.
7753+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7754+        def _created(n):
7755+            self.failUnless(isinstance(n, MutableFileNode))
7756+            cap = n.get_cap()
7757+            self.failUnless(isinstance(cap, uri.WritableMDMFFileURI))
7758+            rcap = n.get_readcap()
7759+            self.failUnless(isinstance(rcap, uri.ReadonlyMDMFFileURI))
7760+            vcap = n.get_verify_cap()
7761+            self.failUnless(isinstance(vcap, uri.MDMFVerifierURI))
7762+        d.addCallback(_created)
7763+        return d
7764+
7765+
7766+    def test_create_from_mdmf_writecap(self):
7767+        # Test that the nodemaker is capable of creating an MDMF
7768+        # filenode given an MDMF cap.
7769+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7770+        def _created(n):
7771+            self.failUnless(isinstance(n, MutableFileNode))
7772+            s = n.get_uri()
7773+            self.failUnless(s.startswith("URI:MDMF"))
7774+            n2 = self.nodemaker.create_from_cap(s)
7775+            self.failUnless(isinstance(n2, MutableFileNode))
7776+            self.failUnlessEqual(n.get_storage_index(), n2.get_storage_index())
7777+            self.failUnlessEqual(n.get_uri(), n2.get_uri())
7778+        d.addCallback(_created)
7779+        return d
7780+
7781+
7782+    def test_create_from_mdmf_writecap_with_extensions(self):
7783+        # Test that the nodemaker is capable of creating an MDMF
7784+        # filenode when given a writecap with extension parameters in
7785+        # them.
7786+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7787+        def _created(n):
7788+            self.failUnless(isinstance(n, MutableFileNode))
7789+            s = n.get_uri()
7790+            # The writecap should carry the extension parameters
7791+            # (k and segsize) appended to it.
7792+            self.failUnlessIn(":3:131073", s)
7793+            n2 = self.nodemaker.create_from_cap(s)
7794+
7795+            self.failUnlessEqual(n2.get_storage_index(), n.get_storage_index())
7796+            self.failUnlessEqual(n.get_writekey(), n2.get_writekey())
7797+            hints = n2._downloader_hints
7798+            self.failUnlessEqual(hints['k'], 3)
7799+            self.failUnlessEqual(hints['segsize'], 131073)
7800+        d.addCallback(_created)
7801+        return d
7802+
7803+
7804+    def test_create_from_mdmf_readcap(self):
7805+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7806+        def _created(n):
7807+            self.failUnless(isinstance(n, MutableFileNode))
7808+            s = n.get_readonly_uri()
7809+            n2 = self.nodemaker.create_from_cap(s)
7810+            self.failUnless(isinstance(n2, MutableFileNode))
7811+
7812+            # Check that it's a readonly node
7813+            self.failUnless(n2.is_readonly())
7814+        d.addCallback(_created)
7815+        return d
7816+
7817+
7818+    def test_create_from_mdmf_readcap_with_extensions(self):
7819+        # We should be able to create an MDMF filenode with the
7820+        # extension parameters without it breaking.
7821+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7822+        def _created(n):
7823+            self.failUnless(isinstance(n, MutableFileNode))
7824+            s = n.get_readonly_uri()
7825+            self.failUnlessIn(":3:131073", s)
7826+
7827+            n2 = self.nodemaker.create_from_cap(s)
7828+            self.failUnless(isinstance(n2, MutableFileNode))
7829+            self.failUnless(n2.is_readonly())
7830+            self.failUnlessEqual(n.get_storage_index(), n2.get_storage_index())
7831+            hints = n2._downloader_hints
7832+            self.failUnlessEqual(hints["k"], 3)
7833+            self.failUnlessEqual(hints["segsize"], 131073)
7834+        d.addCallback(_created)
7835+        return d
7836+
7837+
7838+    def test_internal_version_from_cap(self):
7839+        # MutableFileNodes and MutableFileVersions have an internal
7840+        # switch that tells them whether they're dealing with an SDMF or
7841+        # MDMF mutable file when they start doing stuff. We want to make
7842+        # sure that this is set appropriately given an MDMF cap.
7843+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7844+        def _created(n):
7845+            self.uri = n.get_uri()
7846+            self.failUnlessEqual(n._protocol_version, MDMF_VERSION)
7847+
7848+            n2 = self.nodemaker.create_from_cap(self.uri)
7849+            self.failUnlessEqual(n2._protocol_version, MDMF_VERSION)
7850+        d.addCallback(_created)
7851+        return d
7852+
7853 
7854     def test_serialize(self):
7855         n = MutableFileNode(None, None, {"k": 3, "n": 10}, None)
7856hunk ./src/allmydata/test/test_mutable.py 472
7857             d.addCallback(lambda smap: smap.dump(StringIO()))
7858             d.addCallback(lambda sio:
7859                           self.failUnless("3-of-10" in sio.getvalue()))
7860-            d.addCallback(lambda res: n.overwrite("contents 1"))
7861+            d.addCallback(lambda res: n.overwrite(MutableData("contents 1")))
7862             d.addCallback(lambda res: self.failUnlessIdentical(res, None))
7863             d.addCallback(lambda res: n.download_best_version())
7864             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1"))
7865hunk ./src/allmydata/test/test_mutable.py 479
7866             d.addCallback(lambda res: n.get_size_of_best_version())
7867             d.addCallback(lambda size:
7868                           self.failUnlessEqual(size, len("contents 1")))
7869-            d.addCallback(lambda res: n.overwrite("contents 2"))
7870+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
7871             d.addCallback(lambda res: n.download_best_version())
7872             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
7873             d.addCallback(lambda res: n.get_servermap(MODE_WRITE))
7874hunk ./src/allmydata/test/test_mutable.py 483
7875-            d.addCallback(lambda smap: n.upload("contents 3", smap))
7876+            d.addCallback(lambda smap: n.upload(MutableData("contents 3"), smap))
7877             d.addCallback(lambda res: n.download_best_version())
7878             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 3"))
7879             d.addCallback(lambda res: n.get_servermap(MODE_ANYTHING))
7880hunk ./src/allmydata/test/test_mutable.py 495
7881             # mapupdate-to-retrieve data caching (i.e. make the shares larger
7882             # than the default readsize, which is 2000 bytes). A 15kB file
7883             # will have 5kB shares.
7884-            d.addCallback(lambda res: n.overwrite("large size file" * 1000))
7885+            d.addCallback(lambda res: n.overwrite(MutableData("large size file" * 1000)))
7886             d.addCallback(lambda res: n.download_best_version())
7887             d.addCallback(lambda res:
7888                           self.failUnlessEqual(res, "large size file" * 1000))
7889hunk ./src/allmydata/test/test_mutable.py 503
7890         d.addCallback(_created)
7891         return d
7892 
7893+
7894+    def test_upload_and_download_mdmf(self):
7895+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7896+        def _created(n):
7897+            d = defer.succeed(None)
7898+            d.addCallback(lambda ignored:
7899+                n.get_servermap(MODE_READ))
7900+            def _then(servermap):
7901+                dumped = servermap.dump(StringIO())
7902+                self.failUnlessIn("3-of-10", dumped.getvalue())
7903+            d.addCallback(_then)
7904+            # Now overwrite the contents with some new contents. We want
7905+            # to make them big enough to force the file to be uploaded
7906+            # in more than one segment.
7907+            big_contents = "contents1" * 100000 # about 900 KiB
7908+            big_contents_uploadable = MutableData(big_contents)
7909+            d.addCallback(lambda ignored:
7910+                n.overwrite(big_contents_uploadable))
7911+            d.addCallback(lambda ignored:
7912+                n.download_best_version())
7913+            d.addCallback(lambda data:
7914+                self.failUnlessEqual(data, big_contents))
7915+            # Overwrite the contents again with some new contents. As
7916+            # before, they need to be big enough to force multiple
7917+            # segments, so that we make the downloader deal with
7918+            # multiple segments.
7919+            bigger_contents = "contents2" * 1000000 # about 9 MiB
7920+            bigger_contents_uploadable = MutableData(bigger_contents)
7921+            d.addCallback(lambda ignored:
7922+                n.overwrite(bigger_contents_uploadable))
7923+            d.addCallback(lambda ignored:
7924+                n.download_best_version())
7925+            d.addCallback(lambda data:
7926+                self.failUnlessEqual(data, bigger_contents))
7927+            return d
7928+        d.addCallback(_created)
7929+        return d
7930+
7931+
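The test above sizes its contents so the file must span more than one segment. A quick sketch of that arithmetic, assuming the 131073-byte default MDMF segment size that appears in the `segsize` cap hint asserted earlier in this patch (the helper name is ours, not Tahoe's):

```python
# Hedged sketch: ceiling division gives the number of segments a file of
# data_len bytes occupies. segsize=131073 mirrors the "segsize" hint
# checked earlier in this patch; it is an assumption here, not an API.
def segment_count(data_len, segsize=131073):
    return (data_len + segsize - 1) // segsize

# "contents1" * 100000 is 900000 bytes, i.e. several segments -- which is
# what forces the downloader onto its multi-segment code path.
assert segment_count(len("contents1") * 100000) > 1
```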
7932+    def test_retrieve_pause(self):
7933+        # We should make sure that the retriever is able to pause
7934+        # correctly.
7935+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7936+        def _created(node):
7937+            self.node = node
7938+
7939+            return node.overwrite(MutableData("contents1" * 100000))
7940+        d.addCallback(_created)
7941+        # Now we'll retrieve it into a pausing consumer.
7942+        d.addCallback(lambda ignored:
7943+            self.node.get_best_mutable_version())
7944+        def _got_version(version):
7945+            self.c = PausingConsumer()
7946+            return version.read(self.c)
7947+        d.addCallback(_got_version)
7948+        d.addCallback(lambda ignored:
7949+            self.failUnlessEqual(self.c.data, "contents1" * 100000))
7950+        return d
7951+    test_retrieve_pause.timeout = 25
7952+
7953+
7954+    def test_download_from_mdmf_cap(self):
7955+        # We should be able to download an MDMF file given its cap
7956+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
7957+        def _created(node):
7958+            self.uri = node.get_uri()
7959+
7960+            return node.overwrite(MutableData("contents1" * 100000))
7961+        def _then(ignored):
7962+            node = self.nodemaker.create_from_cap(self.uri)
7963+            return node.download_best_version()
7964+        def _downloaded(data):
7965+            self.failUnlessEqual(data, "contents1" * 100000)
7966+        d.addCallback(_created)
7967+        d.addCallback(_then)
7968+        d.addCallback(_downloaded)
7969+        return d
7970+
7971+
7972+    def test_create_and_download_from_bare_mdmf_cap(self):
7973+        # MDMF caps have extension parameters on them by default. We
7974+        # need to make sure that they work without extension parameters.
7975+        contents = MutableData("contents" * 100000)
7976+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION,
7977+                                               contents=contents)
7978+        def _created(node):
7979+            uri = node.get_uri()
7980+            self._created = node
7981+            self.failUnlessIn(":3:131073", uri)
7982+            # Now strip that off the end of the uri, then try creating
7983+            # and downloading the node again.
7984+            bare_uri = uri.replace(":3:131073", "")
7985+            assert ":3:131073" not in bare_uri
7986+
7987+            return self.nodemaker.create_from_cap(bare_uri)
7988+        d.addCallback(_created)
7989+        def _created_bare(node):
7990+            self.failUnlessEqual(node.get_writekey(),
7991+                                 self._created.get_writekey())
7992+            self.failUnlessEqual(node.get_readkey(),
7993+                                 self._created.get_readkey())
7994+            self.failUnlessEqual(node.get_storage_index(),
7995+                                 self._created.get_storage_index())
7996+            return node.download_best_version()
7997+        d.addCallback(_created_bare)
7998+        d.addCallback(lambda data:
7999+            self.failUnlessEqual(data, "contents" * 100000))
8000+        return d
8001+
8002+
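The bare-cap test strips the trailing `":3:131073"` extension hints off the URI with a plain string replace. A more general sketch of that stripping, with a hypothetical cap layout (`URI:MDMF:<writekey>:<fingerprint>` plus two numeric hint fields, matching the `":3:131073"` suffix the test checks for; this is not the real URI parser):

```python
# Illustrative sketch only: drop the trailing ":k:segsize" extension
# hints from an MDMF cap string, if present. The field layout is an
# assumption based on the ":3:131073" suffix seen in the test above.
def strip_extension_hints(cap):
    parts = cap.split(":")
    if len(parts) >= 2 and parts[-1].isdigit() and parts[-2].isdigit():
        return ":".join(parts[:-2])  # remove the two numeric hint fields
    return cap  # already a bare cap

assert strip_extension_hints("URI:MDMF:wk:fp:3:131073") == "URI:MDMF:wk:fp"
```

Stripping only numeric trailing fields keeps the writekey and fingerprint intact, which is why the bare node in the test still agrees on keys and storage index.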
8003+    def test_mdmf_write_count(self):
8004+        # Publishing an MDMF file should only cause one write for each
8005+        # share that is to be published. Otherwise, we introduce
8006+        # undesirable semantics that are a regression from SDMF.
8007+        upload = MutableData("MDMF" * 100000) # about 400 KiB
8008+        d = self.nodemaker.create_mutable_file(upload,
8009+                                               version=MDMF_VERSION)
8010+        def _check_server_write_counts(ignored):
8011+            sb = self.nodemaker.storage_broker
8012+            peers = sb.test_servers.values()
8013+            for peer in peers:
8014+                self.failUnlessEqual(peer.queries, 1)
8015+        d.addCallback(_check_server_write_counts)
8016+        return d
8017+
8018+
8019     def test_create_with_initial_contents(self):
8020hunk ./src/allmydata/test/test_mutable.py 630
8021-        d = self.nodemaker.create_mutable_file("contents 1")
8022+        upload1 = MutableData("contents 1")
8023+        d = self.nodemaker.create_mutable_file(upload1)
8024         def _created(n):
8025             d = n.download_best_version()
8026             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1"))
8027hunk ./src/allmydata/test/test_mutable.py 635
8028-            d.addCallback(lambda res: n.overwrite("contents 2"))
8029+            upload2 = MutableData("contents 2")
8030+            d.addCallback(lambda res: n.overwrite(upload2))
8031             d.addCallback(lambda res: n.download_best_version())
8032             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
8033             return d
8034hunk ./src/allmydata/test/test_mutable.py 642
8035         d.addCallback(_created)
8036         return d
8037+    test_create_with_initial_contents.timeout = 15
8038+
8039+
8040+    def test_create_mdmf_with_initial_contents(self):
8041+        initial_contents = "foobarbaz" * 131072 # about 1.1 MiB
8042+        initial_contents_uploadable = MutableData(initial_contents)
8043+        d = self.nodemaker.create_mutable_file(initial_contents_uploadable,
8044+                                               version=MDMF_VERSION)
8045+        def _created(n):
8046+            d = n.download_best_version()
8047+            d.addCallback(lambda data:
8048+                self.failUnlessEqual(data, initial_contents))
8049+            uploadable2 = MutableData(initial_contents + "foobarbaz")
8050+            d.addCallback(lambda ignored:
8051+                n.overwrite(uploadable2))
8052+            d.addCallback(lambda ignored:
8053+                n.download_best_version())
8054+            d.addCallback(lambda data:
8055+                self.failUnlessEqual(data, initial_contents +
8056+                                           "foobarbaz"))
8057+            return d
8058+        d.addCallback(_created)
8059+        return d
8060+    test_create_mdmf_with_initial_contents.timeout = 20
8061+
8062 
8063     def test_response_cache_memory_leak(self):
8064         d = self.nodemaker.create_mutable_file("contents")
8065hunk ./src/allmydata/test/test_mutable.py 693
8066             key = n.get_writekey()
8067             self.failUnless(isinstance(key, str), key)
8068             self.failUnlessEqual(len(key), 16) # AES key size
8069-            return data
8070+            return MutableData(data)
8071         d = self.nodemaker.create_mutable_file(_make_contents)
8072         def _created(n):
8073             return n.download_best_version()
8074hunk ./src/allmydata/test/test_mutable.py 701
8075         d.addCallback(lambda data2: self.failUnlessEqual(data2, data))
8076         return d
8077 
8078+
8079+    def test_create_mdmf_with_initial_contents_function(self):
8080+        data = "initial contents" * 100000
8081+        def _make_contents(n):
8082+            self.failUnless(isinstance(n, MutableFileNode))
8083+            key = n.get_writekey()
8084+            self.failUnless(isinstance(key, str), key)
8085+            self.failUnlessEqual(len(key), 16)
8086+            return MutableData(data)
8087+        d = self.nodemaker.create_mutable_file(_make_contents,
8088+                                               version=MDMF_VERSION)
8089+        d.addCallback(lambda n:
8090+            n.download_best_version())
8091+        d.addCallback(lambda data2:
8092+            self.failUnlessEqual(data2, data))
8093+        return d
8094+
8095+
8096     def test_create_with_too_large_contents(self):
8097         BIG = "a" * (self.OLD_MAX_SEGMENT_SIZE + 1)
8098hunk ./src/allmydata/test/test_mutable.py 721
8099-        d = self.nodemaker.create_mutable_file(BIG)
8100+        BIG_uploadable = MutableData(BIG)
8101+        d = self.nodemaker.create_mutable_file(BIG_uploadable)
8102         def _created(n):
8103hunk ./src/allmydata/test/test_mutable.py 724
8104-            d = n.overwrite(BIG)
8105+            other_BIG_uploadable = MutableData(BIG)
8106+            d = n.overwrite(other_BIG_uploadable)
8107             return d
8108         d.addCallback(_created)
8109         return d
8110hunk ./src/allmydata/test/test_mutable.py 739
8111 
8112     def test_modify(self):
8113         def _modifier(old_contents, servermap, first_time):
8114-            return old_contents + "line2"
8115+            new_contents = old_contents + "line2"
8116+            return new_contents
8117         def _non_modifier(old_contents, servermap, first_time):
8118             return old_contents
8119         def _none_modifier(old_contents, servermap, first_time):
8120hunk ./src/allmydata/test/test_mutable.py 748
8121         def _error_modifier(old_contents, servermap, first_time):
8122             raise ValueError("oops")
8123         def _toobig_modifier(old_contents, servermap, first_time):
8124-            return "b" * (self.OLD_MAX_SEGMENT_SIZE+1)
8125+            new_content = "b" * (self.OLD_MAX_SEGMENT_SIZE + 1)
8126+            return new_content
8127         calls = []
8128         def _ucw_error_modifier(old_contents, servermap, first_time):
8129             # simulate an UncoordinatedWriteError once
8130hunk ./src/allmydata/test/test_mutable.py 756
8131             calls.append(1)
8132             if len(calls) <= 1:
8133                 raise UncoordinatedWriteError("simulated")
8134-            return old_contents + "line3"
8135+            new_contents = old_contents + "line3"
8136+            return new_contents
8137         def _ucw_error_non_modifier(old_contents, servermap, first_time):
8138             # simulate an UncoordinatedWriteError once, and don't actually
8139             # modify the contents on subsequent invocations
8140hunk ./src/allmydata/test/test_mutable.py 766
8141                 raise UncoordinatedWriteError("simulated")
8142             return old_contents
8143 
8144-        d = self.nodemaker.create_mutable_file("line1")
8145+        initial_contents = "line1"
8146+        d = self.nodemaker.create_mutable_file(MutableData(initial_contents))
8147         def _created(n):
8148             d = n.modify(_modifier)
8149             d.addCallback(lambda res: n.download_best_version())
8150hunk ./src/allmydata/test/test_mutable.py 824
8151             return d
8152         d.addCallback(_created)
8153         return d
8154+    test_modify.timeout = 15
8155+
8156 
8157     def test_modify_backoffer(self):
8158         def _modifier(old_contents, servermap, first_time):
8159hunk ./src/allmydata/test/test_mutable.py 851
8160         giveuper._delay = 0.1
8161         giveuper.factor = 1
8162 
8163-        d = self.nodemaker.create_mutable_file("line1")
8164+        d = self.nodemaker.create_mutable_file(MutableData("line1"))
8165         def _created(n):
8166             d = n.modify(_modifier)
8167             d.addCallback(lambda res: n.download_best_version())
8168hunk ./src/allmydata/test/test_mutable.py 901
8169             d.addCallback(lambda smap: smap.dump(StringIO()))
8170             d.addCallback(lambda sio:
8171                           self.failUnless("3-of-10" in sio.getvalue()))
8172-            d.addCallback(lambda res: n.overwrite("contents 1"))
8173+            d.addCallback(lambda res: n.overwrite(MutableData("contents 1")))
8174             d.addCallback(lambda res: self.failUnlessIdentical(res, None))
8175             d.addCallback(lambda res: n.download_best_version())
8176             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1"))
8177hunk ./src/allmydata/test/test_mutable.py 905
8178-            d.addCallback(lambda res: n.overwrite("contents 2"))
8179+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
8180             d.addCallback(lambda res: n.download_best_version())
8181             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
8182             d.addCallback(lambda res: n.get_servermap(MODE_WRITE))
8183hunk ./src/allmydata/test/test_mutable.py 909
8184-            d.addCallback(lambda smap: n.upload("contents 3", smap))
8185+            d.addCallback(lambda smap: n.upload(MutableData("contents 3"), smap))
8186             d.addCallback(lambda res: n.download_best_version())
8187             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 3"))
8188             d.addCallback(lambda res: n.get_servermap(MODE_ANYTHING))
8189hunk ./src/allmydata/test/test_mutable.py 922
8190         return d
8191 
8192 
8193-class MakeShares(unittest.TestCase):
8194-    def test_encrypt(self):
8195-        nm = make_nodemaker()
8196-        CONTENTS = "some initial contents"
8197-        d = nm.create_mutable_file(CONTENTS)
8198-        def _created(fn):
8199-            p = Publish(fn, nm.storage_broker, None)
8200-            p.salt = "SALT" * 4
8201-            p.readkey = "\x00" * 16
8202-            p.newdata = CONTENTS
8203-            p.required_shares = 3
8204-            p.total_shares = 10
8205-            p.setup_encoding_parameters()
8206-            return p._encrypt_and_encode()
8207+    def test_size_after_servermap_update(self):
8208+        # A mutable file node should know how big it is after a
8209+        # servermap update, since the update tells us how large the
8210+        # best recoverable version of the file is.
8211+        d = self.nodemaker.create_mutable_file()
8212+        def _created(n):
8213+            self.n = n
8214+            return n.get_servermap(MODE_READ)
8215         d.addCallback(_created)
8216hunk ./src/allmydata/test/test_mutable.py 931
8217-        def _done(shares_and_shareids):
8218-            (shares, share_ids) = shares_and_shareids
8219-            self.failUnlessEqual(len(shares), 10)
8220-            for sh in shares:
8221-                self.failUnless(isinstance(sh, str))
8222-                self.failUnlessEqual(len(sh), 7)
8223-            self.failUnlessEqual(len(share_ids), 10)
8224-        d.addCallback(_done)
8225-        return d
8226-
8227-    def test_generate(self):
8228-        nm = make_nodemaker()
8229-        CONTENTS = "some initial contents"
8230-        d = nm.create_mutable_file(CONTENTS)
8231-        def _created(fn):
8232-            self._fn = fn
8233-            p = Publish(fn, nm.storage_broker, None)
8234-            self._p = p
8235-            p.newdata = CONTENTS
8236-            p.required_shares = 3
8237-            p.total_shares = 10
8238-            p.setup_encoding_parameters()
8239-            p._new_seqnum = 3
8240-            p.salt = "SALT" * 4
8241-            # make some fake shares
8242-            shares_and_ids = ( ["%07d" % i for i in range(10)], range(10) )
8243-            p._privkey = fn.get_privkey()
8244-            p._encprivkey = fn.get_encprivkey()
8245-            p._pubkey = fn.get_pubkey()
8246-            return p._generate_shares(shares_and_ids)
8247+        d.addCallback(lambda ignored:
8248+            self.failUnlessEqual(self.n.get_size(), 0))
8249+        d.addCallback(lambda ignored:
8250+            self.n.overwrite(MutableData("foobarbaz")))
8251+        d.addCallback(lambda ignored:
8252+            self.failUnlessEqual(self.n.get_size(), 9))
8253+        d.addCallback(lambda ignored:
8254+            self.nodemaker.create_mutable_file(MutableData("foobarbaz")))
8255         d.addCallback(_created)
8256hunk ./src/allmydata/test/test_mutable.py 940
8257-        def _generated(res):
8258-            p = self._p
8259-            final_shares = p.shares
8260-            root_hash = p.root_hash
8261-            self.failUnlessEqual(len(root_hash), 32)
8262-            self.failUnless(isinstance(final_shares, dict))
8263-            self.failUnlessEqual(len(final_shares), 10)
8264-            self.failUnlessEqual(sorted(final_shares.keys()), range(10))
8265-            for i,sh in final_shares.items():
8266-                self.failUnless(isinstance(sh, str))
8267-                # feed the share through the unpacker as a sanity-check
8268-                pieces = unpack_share(sh)
8269-                (u_seqnum, u_root_hash, IV, k, N, segsize, datalen,
8270-                 pubkey, signature, share_hash_chain, block_hash_tree,
8271-                 share_data, enc_privkey) = pieces
8272-                self.failUnlessEqual(u_seqnum, 3)
8273-                self.failUnlessEqual(u_root_hash, root_hash)
8274-                self.failUnlessEqual(k, 3)
8275-                self.failUnlessEqual(N, 10)
8276-                self.failUnlessEqual(segsize, 21)
8277-                self.failUnlessEqual(datalen, len(CONTENTS))
8278-                self.failUnlessEqual(pubkey, p._pubkey.serialize())
8279-                sig_material = struct.pack(">BQ32s16s BBQQ",
8280-                                           0, p._new_seqnum, root_hash, IV,
8281-                                           k, N, segsize, datalen)
8282-                self.failUnless(p._pubkey.verify(sig_material, signature))
8283-                #self.failUnlessEqual(signature, p._privkey.sign(sig_material))
8284-                self.failUnless(isinstance(share_hash_chain, dict))
8285-                self.failUnlessEqual(len(share_hash_chain), 4) # ln2(10)++
8286-                for shnum,share_hash in share_hash_chain.items():
8287-                    self.failUnless(isinstance(shnum, int))
8288-                    self.failUnless(isinstance(share_hash, str))
8289-                    self.failUnlessEqual(len(share_hash), 32)
8290-                self.failUnless(isinstance(block_hash_tree, list))
8291-                self.failUnlessEqual(len(block_hash_tree), 1) # very small tree
8292-                self.failUnlessEqual(IV, "SALT"*4)
8293-                self.failUnlessEqual(len(share_data), len("%07d" % 1))
8294-                self.failUnlessEqual(enc_privkey, self._fn.get_encprivkey())
8295-        d.addCallback(_generated)
8296+        d.addCallback(lambda ignored:
8297+            self.failUnlessEqual(self.n.get_size(), 9))
8298         return d
8299 
8300hunk ./src/allmydata/test/test_mutable.py 944
8301-    # TODO: when we publish to 20 peers, we should get one share per peer on 10
8302-    # when we publish to 3 peers, we should get either 3 or 4 shares per peer
8303-    # when we publish to zero peers, we should get a NotEnoughSharesError
8304 
8305 class PublishMixin:
8306     def publish_one(self):
8307hunk ./src/allmydata/test/test_mutable.py 950
8308         # publish a file and create shares, which can then be manipulated
8309         # later.
8310         self.CONTENTS = "New contents go here" * 1000
8311+        self.uploadable = MutableData(self.CONTENTS)
8312+        self._storage = FakeStorage()
8313+        self._nodemaker = make_nodemaker(self._storage)
8314+        self._storage_broker = self._nodemaker.storage_broker
8315+        d = self._nodemaker.create_mutable_file(self.uploadable)
8316+        def _created(node):
8317+            self._fn = node
8318+            self._fn2 = self._nodemaker.create_from_cap(node.get_uri())
8319+        d.addCallback(_created)
8320+        return d
8321+
8322+    def publish_mdmf(self):
8323+        # like publish_one, except that the result is guaranteed to be
8324+        # an MDMF file.
8325+        # self.CONTENTS should have more than one segment.
8326+        self.CONTENTS = "This is an MDMF file" * 100000
8327+        self.uploadable = MutableData(self.CONTENTS)
8328+        self._storage = FakeStorage()
8329+        self._nodemaker = make_nodemaker(self._storage)
8330+        self._storage_broker = self._nodemaker.storage_broker
8331+        d = self._nodemaker.create_mutable_file(self.uploadable, version=MDMF_VERSION)
8332+        def _created(node):
8333+            self._fn = node
8334+            self._fn2 = self._nodemaker.create_from_cap(node.get_uri())
8335+        d.addCallback(_created)
8336+        return d
8337+
8338+
8339+    def publish_sdmf(self):
8340+        # like publish_one, except that the result is guaranteed to be
8341+        # an SDMF file
8342+        self.CONTENTS = "This is an SDMF file" * 1000
8343+        self.uploadable = MutableData(self.CONTENTS)
8344         self._storage = FakeStorage()
8345         self._nodemaker = make_nodemaker(self._storage)
8346         self._storage_broker = self._nodemaker.storage_broker
8347hunk ./src/allmydata/test/test_mutable.py 986
8348-        d = self._nodemaker.create_mutable_file(self.CONTENTS)
8349+        d = self._nodemaker.create_mutable_file(self.uploadable, version=SDMF_VERSION)
8350         def _created(node):
8351             self._fn = node
8352             self._fn2 = self._nodemaker.create_from_cap(node.get_uri())
8353hunk ./src/allmydata/test/test_mutable.py 993
8354         d.addCallback(_created)
8355         return d
8356 
8357-    def publish_multiple(self):
8358+
8359+    def publish_multiple(self, version=0):
8360         self.CONTENTS = ["Contents 0",
8361                          "Contents 1",
8362                          "Contents 2",
8363hunk ./src/allmydata/test/test_mutable.py 1000
8364                          "Contents 3a",
8365                          "Contents 3b"]
8366+        self.uploadables = [MutableData(d) for d in self.CONTENTS]
8367         self._copied_shares = {}
8368         self._storage = FakeStorage()
8369         self._nodemaker = make_nodemaker(self._storage)
8370hunk ./src/allmydata/test/test_mutable.py 1004
8371-        d = self._nodemaker.create_mutable_file(self.CONTENTS[0]) # seqnum=1
8372+        d = self._nodemaker.create_mutable_file(self.uploadables[0], version=version) # seqnum=1
8373         def _created(node):
8374             self._fn = node
8375             # now create multiple versions of the same file, and accumulate
8376hunk ./src/allmydata/test/test_mutable.py 1011
8377             # their shares, so we can mix and match them later.
8378             d = defer.succeed(None)
8379             d.addCallback(self._copy_shares, 0)
8380-            d.addCallback(lambda res: node.overwrite(self.CONTENTS[1])) #s2
8381+            d.addCallback(lambda res: node.overwrite(self.uploadables[1])) #s2
8382             d.addCallback(self._copy_shares, 1)
8383hunk ./src/allmydata/test/test_mutable.py 1013
8384-            d.addCallback(lambda res: node.overwrite(self.CONTENTS[2])) #s3
8385+            d.addCallback(lambda res: node.overwrite(self.uploadables[2])) #s3
8386             d.addCallback(self._copy_shares, 2)
8387hunk ./src/allmydata/test/test_mutable.py 1015
8388-            d.addCallback(lambda res: node.overwrite(self.CONTENTS[3])) #s4a
8389+            d.addCallback(lambda res: node.overwrite(self.uploadables[3])) #s4a
8390             d.addCallback(self._copy_shares, 3)
8391             # now we replace all the shares with version s3, and upload a new
8392             # version to get s4b.
8393hunk ./src/allmydata/test/test_mutable.py 1021
8394             rollback = dict([(i,2) for i in range(10)])
8395             d.addCallback(lambda res: self._set_versions(rollback))
8396-            d.addCallback(lambda res: node.overwrite(self.CONTENTS[4])) #s4b
8397+            d.addCallback(lambda res: node.overwrite(self.uploadables[4])) #s4b
8398             d.addCallback(self._copy_shares, 4)
8399             # we leave the storage in state 4
8400             return d
8401hunk ./src/allmydata/test/test_mutable.py 1028
8402         d.addCallback(_created)
8403         return d
8404 
8405+
8406     def _copy_shares(self, ignored, index):
8407         shares = self._storage._peers
8408         # we need a deep copy
8409hunk ./src/allmydata/test/test_mutable.py 1051
8410                     index = versionmap[shnum]
8411                     shares[peerid][shnum] = oldshares[index][peerid][shnum]
8412 
8413+class PausingConsumer:
8414+    implements(IConsumer)
8415+    def __init__(self):
8416+        self.data = ""
8417+        self.already_paused = False
8418+
8419+    def registerProducer(self, producer, streaming):
8420+        self.producer = producer
8421+        self.producer.resumeProducing()
8422+
8423+    def unregisterProducer(self):
8424+        self.producer = None
8425+
8426+    def _unpause(self, ignored):
8427+        self.producer.resumeProducing()
8428+
8429+    def write(self, data):
8430+        self.data += data
8431+        if not self.already_paused:
8432+            self.producer.pauseProducing()
8433+            self.already_paused = True
8434+            reactor.callLater(15, self._unpause, None)
8435+
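PausingConsumer exercises the Twisted producer/consumer handshake: the consumer pauses the producer after its first write and resumes later via the reactor. A minimal synchronous sketch of the same handshake, with a hypothetical FakeProducer standing in for the retriever (no reactor, so the resume happens immediately):

```python
# Hedged sketch of the pause/resume handshake, without Twisted.
# FakeProducer is purely illustrative; it pushes chunks until paused.
class FakeProducer:
    def __init__(self, chunks, consumer):
        self.chunks = list(chunks)
        self.consumer = consumer
        self.paused = True
    def resumeProducing(self):
        self.paused = False
        while self.chunks and not self.paused:
            self.consumer.write(self.chunks.pop(0))
    def pauseProducing(self):
        self.paused = True

class OncePausingConsumer:
    def __init__(self):
        self.data = ""
        self.already_paused = False
    def registerProducer(self, producer, streaming):
        self.producer = producer
        producer.resumeProducing()
    def write(self, data):
        self.data += data
        if not self.already_paused:
            self.producer.pauseProducing()   # pause after the first chunk
            self.already_paused = True
            self.producer.resumeProducing()  # resume at once (no reactor here)

c = OncePausingConsumer()
p = FakeProducer(["contents1"] * 3, c)
c.registerProducer(p, streaming=True)
assert c.data == "contents1" * 3  # all chunks arrive despite the pause
```

The real test relies on `reactor.callLater` for the delayed resume; the point is only that a paused retrieve delivers all the data once resumed.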
8436 
8437 class Servermap(unittest.TestCase, PublishMixin):
8438     def setUp(self):
8439hunk ./src/allmydata/test/test_mutable.py 1079
8440         return self.publish_one()
8441 
8442-    def make_servermap(self, mode=MODE_CHECK, fn=None, sb=None):
8443+    def make_servermap(self, mode=MODE_CHECK, fn=None, sb=None,
8444+                       update_range=None):
8445         if fn is None:
8446             fn = self._fn
8447         if sb is None:
8448hunk ./src/allmydata/test/test_mutable.py 1086
8449             sb = self._storage_broker
8450         smu = ServermapUpdater(fn, sb, Monitor(),
8451-                               ServerMap(), mode)
8452+                               ServerMap(), mode, update_range=update_range)
8453         d = smu.update()
8454         return d
8455 
8456hunk ./src/allmydata/test/test_mutable.py 1152
8457         # create a new file, which is large enough to knock the privkey out
8458         # of the early part of the file
8459         LARGE = "These are Larger contents" * 200 # about 5KB
8460-        d.addCallback(lambda res: self._nodemaker.create_mutable_file(LARGE))
8461+        LARGE_uploadable = MutableData(LARGE)
8462+        d.addCallback(lambda res: self._nodemaker.create_mutable_file(LARGE_uploadable))
8463         def _created(large_fn):
8464             large_fn2 = self._nodemaker.create_from_cap(large_fn.get_uri())
8465             return self.make_servermap(MODE_WRITE, large_fn2)
8466hunk ./src/allmydata/test/test_mutable.py 1161
8467         d.addCallback(lambda sm: self.failUnlessOneRecoverable(sm, 10))
8468         return d
8469 
8470+
8471     def test_mark_bad(self):
8472         d = defer.succeed(None)
8473         ms = self.make_servermap
8474hunk ./src/allmydata/test/test_mutable.py 1207
8475         self._storage._peers = {} # delete all shares
8476         ms = self.make_servermap
8477         d = defer.succeed(None)
8478 
8480         d.addCallback(lambda res: ms(mode=MODE_CHECK))
8481         d.addCallback(lambda sm: self.failUnlessNoneRecoverable(sm))
8482 
8483hunk ./src/allmydata/test/test_mutable.py 1259
8484         return d
8485 
8486 
8487+    def test_servermapupdater_finds_mdmf_files(self):
8488+        # setUp already published an MDMF file for us. We just need to
8489+        # make sure that when we run the ServermapUpdater, the file is
8490+        # reported to have one recoverable version.
8491+        d = defer.succeed(None)
8492+        d.addCallback(lambda ignored:
8493+            self.publish_mdmf())
8494+        d.addCallback(lambda ignored:
8495+            self.make_servermap(mode=MODE_CHECK))
8496+        # Calling make_servermap also updates the servermap in the mode
8497+        # that we specify, so we just need to see what it says.
8498+        def _check_servermap(sm):
8499+            self.failUnlessEqual(len(sm.recoverable_versions()), 1)
8500+        d.addCallback(_check_servermap)
8501+        return d
8502+
8503+
8504+    def test_fetch_update(self):
8505+        d = defer.succeed(None)
8506+        d.addCallback(lambda ignored:
8507+            self.publish_mdmf())
8508+        d.addCallback(lambda ignored:
8509+            self.make_servermap(mode=MODE_WRITE, update_range=(1, 2)))
8510+        def _check_servermap(sm):
8511+            # 10 shares
8512+            self.failUnlessEqual(len(sm.update_data), 10)
8513+            # one version
8514+            for data in sm.update_data.itervalues():
8515+                self.failUnlessEqual(len(data), 1)
8516+        d.addCallback(_check_servermap)
8517+        return d
8518+
8519+
8520+    def test_servermapupdater_finds_sdmf_files(self):
8521+        d = defer.succeed(None)
8522+        d.addCallback(lambda ignored:
8523+            self.publish_sdmf())
8524+        d.addCallback(lambda ignored:
8525+            self.make_servermap(mode=MODE_CHECK))
8526+        d.addCallback(lambda servermap:
8527+            self.failUnlessEqual(len(servermap.recoverable_versions()), 1))
8528+        return d
8529+
8530 
8531 class Roundtrip(unittest.TestCase, testutil.ShouldFailMixin, PublishMixin):
8532     def setUp(self):
8533hunk ./src/allmydata/test/test_mutable.py 1342
8534         if version is None:
8535             version = servermap.best_recoverable_version()
8536         r = Retrieve(self._fn, servermap, version)
8537-        return r.download()
8538+        c = consumer.MemoryConsumer()
8539+        d = r.download(consumer=c)
8540+        d.addCallback(lambda mc: "".join(mc.chunks))
8541+        return d
8542+
8543 
8544     def test_basic(self):
8545         d = self.make_servermap()
8546hunk ./src/allmydata/test/test_mutable.py 1423
8547         return d
8548     test_no_servers_download.timeout = 15
8549 
8550+
8551     def _test_corrupt_all(self, offset, substring,
8552hunk ./src/allmydata/test/test_mutable.py 1425
8553-                          should_succeed=False, corrupt_early=True,
8554-                          failure_checker=None):
8555+                          should_succeed=False,
8556+                          corrupt_early=True,
8557+                          failure_checker=None,
8558+                          fetch_privkey=False):
8559         d = defer.succeed(None)
8560         if corrupt_early:
8561             d.addCallback(corrupt, self._storage, offset)
8562hunk ./src/allmydata/test/test_mutable.py 1445
8563                     self.failUnlessIn(substring, "".join(allproblems))
8564                 return servermap
8565             if should_succeed:
8566-                d1 = self._fn.download_version(servermap, ver)
8567+                d1 = self._fn.download_version(servermap, ver,
8568+                                               fetch_privkey)
8569                 d1.addCallback(lambda new_contents:
8570                                self.failUnlessEqual(new_contents, self.CONTENTS))
8571             else:
8572hunk ./src/allmydata/test/test_mutable.py 1453
8573                 d1 = self.shouldFail(NotEnoughSharesError,
8574                                      "_corrupt_all(offset=%s)" % (offset,),
8575                                      substring,
8576-                                     self._fn.download_version, servermap, ver)
8577+                                     self._fn.download_version, servermap,
8578+                                                                ver,
8579+                                                                fetch_privkey)
8580             if failure_checker:
8581                 d1.addCallback(failure_checker)
8582             d1.addCallback(lambda res: servermap)
8583hunk ./src/allmydata/test/test_mutable.py 1464
8584         return d
8585 
8586     def test_corrupt_all_verbyte(self):
8587-        # when the version byte is not 0, we hit an UnknownVersionError error
8588-        # in unpack_share().
8589+        # when the version byte is not 0 or 1, we hit an
8590+        # UnknownVersionError in unpack_share().
8591         d = self._test_corrupt_all(0, "UnknownVersionError")
8592         def _check_servermap(servermap):
8593             # and the dump should mention the problems
8594hunk ./src/allmydata/test/test_mutable.py 1471
8595             s = StringIO()
8596             dump = servermap.dump(s).getvalue()
8597-            self.failUnless("10 PROBLEMS" in dump, dump)
8598+            self.failUnless("30 PROBLEMS" in dump, dump)
8599         d.addCallback(_check_servermap)
8600         return d
8601 
8602hunk ./src/allmydata/test/test_mutable.py 1541
8603         return self._test_corrupt_all("enc_privkey", None, should_succeed=True)
8604 
8605 
8606+    def test_corrupt_all_encprivkey_late(self):
8607+        # this should work for the same reason as above, but we corrupt
8608+        # after the servermap update to exercise the error handling
8609+        # code.
8610+        # We need to remove the privkey from the node, or the retrieve
8611+        # process won't know to update it.
8612+        self._fn._privkey = None
8613+        return self._test_corrupt_all("enc_privkey",
8614+                                      None, # this shouldn't fail
8615+                                      should_succeed=True,
8616+                                      corrupt_early=False,
8617+                                      fetch_privkey=True)
8618+
8619+
8620     def test_corrupt_all_seqnum_late(self):
8621         # corrupting the seqnum between mapupdate and retrieve should result
8622         # in NotEnoughSharesError, since each share will look invalid
8623hunk ./src/allmydata/test/test_mutable.py 1561
8624         def _check(res):
8625             f = res[0]
8626             self.failUnless(f.check(NotEnoughSharesError))
8627-            self.failUnless("someone wrote to the data since we read the servermap" in str(f))
8628+            self.failUnless("uncoordinated write" in str(f))
8629         return self._test_corrupt_all(1, "ran out of peers",
8630                                       corrupt_early=False,
8631                                       failure_checker=_check)
8632hunk ./src/allmydata/test/test_mutable.py 1605
8633                             in str(servermap.problems[0]))
8634             ver = servermap.best_recoverable_version()
8635             r = Retrieve(self._fn, servermap, ver)
8636-            return r.download()
8637+            c = consumer.MemoryConsumer()
8638+            return r.download(c)
8639         d.addCallback(_do_retrieve)
8640hunk ./src/allmydata/test/test_mutable.py 1608
8641+        d.addCallback(lambda mc: "".join(mc.chunks))
8642         d.addCallback(lambda new_contents:
8643                       self.failUnlessEqual(new_contents, self.CONTENTS))
8644         return d
8645hunk ./src/allmydata/test/test_mutable.py 1613
8646 
8647-    def test_corrupt_some(self):
8648-        # corrupt the data of first five shares (so the servermap thinks
8649-        # they're good but retrieve marks them as bad), so that the
8650-        # MODE_READ set of 6 will be insufficient, forcing node.download to
8651-        # retry with more servers.
8652-        corrupt(None, self._storage, "share_data", range(5))
8653-        d = self.make_servermap()
8654+
8655+    def _test_corrupt_some(self, offset, mdmf=False):
8656+        if mdmf:
8657+            d = self.publish_mdmf()
8658+        else:
8659+            d = defer.succeed(None)
8660+        d.addCallback(lambda ignored:
8661+            corrupt(None, self._storage, offset, range(5)))
8662+        d.addCallback(lambda ignored:
8663+            self.make_servermap())
8664         def _do_retrieve(servermap):
8665             ver = servermap.best_recoverable_version()
8666             self.failUnless(ver)
8667hunk ./src/allmydata/test/test_mutable.py 1629
8668             return self._fn.download_best_version()
8669         d.addCallback(_do_retrieve)
8670         d.addCallback(lambda new_contents:
8671-                      self.failUnlessEqual(new_contents, self.CONTENTS))
8672+            self.failUnlessEqual(new_contents, self.CONTENTS))
8673         return d
8674 
8675hunk ./src/allmydata/test/test_mutable.py 1632
8676+
8677+    def test_corrupt_some(self):
8678+        # corrupt the data of first five shares (so the servermap thinks
8679+        # they're good but retrieve marks them as bad), so that the
8680+        # MODE_READ set of 6 will be insufficient, forcing node.download to
8681+        # retry with more servers.
8682+        return self._test_corrupt_some("share_data")
8683+
8684+
8685     def test_download_fails(self):
8686hunk ./src/allmydata/test/test_mutable.py 1642
8687-        corrupt(None, self._storage, "signature")
8688-        d = self.shouldFail(UnrecoverableFileError, "test_download_anyway",
8689+        d = corrupt(None, self._storage, "signature")
8690+        d.addCallback(lambda ignored:
8691+            self.shouldFail(UnrecoverableFileError, "test_download_anyway",
8692                             "no recoverable versions",
8693hunk ./src/allmydata/test/test_mutable.py 1646
8694-                            self._fn.download_best_version)
8695+                            self._fn.download_best_version))
8696+        return d
8697+
8698+
8699+
8700+    def test_corrupt_mdmf_block_hash_tree(self):
8701+        d = self.publish_mdmf()
8702+        d.addCallback(lambda ignored:
8703+            self._test_corrupt_all(("block_hash_tree", 12 * 32),
8704+                                   "block hash tree failure",
8705+                                   corrupt_early=True,
8706+                                   should_succeed=False))
8707         return d
8708 
8709 
8710hunk ./src/allmydata/test/test_mutable.py 1661
8711+    def test_corrupt_mdmf_block_hash_tree_late(self):
8712+        d = self.publish_mdmf()
8713+        d.addCallback(lambda ignored:
8714+            self._test_corrupt_all(("block_hash_tree", 12 * 32),
8715+                                   "block hash tree failure",
8716+                                   corrupt_early=False,
8717+                                   should_succeed=False))
8718+        return d
8719+
8720+
8721+    def test_corrupt_mdmf_share_data(self):
8722+        d = self.publish_mdmf()
8723+        d.addCallback(lambda ignored:
8724+            # TODO: Find out what the block size is and corrupt a
8725+            # specific block, rather than just guessing.
8726+            self._test_corrupt_all(("share_data", 12 * 40),
8727+                                    "block hash tree failure",
8728+                                    corrupt_early=True,
8729+                                    should_succeed=False))
8730+        return d
8731+
8732+
8733+    def test_corrupt_some_mdmf(self):
8734+        return self._test_corrupt_some(("share_data", 12 * 40),
8735+                                       mdmf=True)
8736+
8737+
8738 class CheckerMixin:
8739     def check_good(self, r, where):
8740         self.failUnless(r.is_healthy(), where)
8741hunk ./src/allmydata/test/test_mutable.py 1718
8742         d.addCallback(self.check_good, "test_check_good")
8743         return d
8744 
8745+    def test_check_mdmf_good(self):
8746+        d = self.publish_mdmf()
8747+        d.addCallback(lambda ignored:
8748+            self._fn.check(Monitor()))
8749+        d.addCallback(self.check_good, "test_check_mdmf_good")
8750+        return d
8751+
8752     def test_check_no_shares(self):
8753         for shares in self._storage._peers.values():
8754             shares.clear()
8755hunk ./src/allmydata/test/test_mutable.py 1732
8756         d.addCallback(self.check_bad, "test_check_no_shares")
8757         return d
8758 
8759+    def test_check_mdmf_no_shares(self):
8760+        d = self.publish_mdmf()
8761+        def _then(ignored):
8762+            for shares in self._storage._peers.values():
8763+                shares.clear()
8764+        d.addCallback(_then)
8765+        d.addCallback(lambda ignored:
8766+            self._fn.check(Monitor()))
8767+        d.addCallback(self.check_bad, "test_check_mdmf_no_shares")
8768+        return d
8769+
8770     def test_check_not_enough_shares(self):
8771         for shares in self._storage._peers.values():
8772             for shnum in shares.keys():
8773hunk ./src/allmydata/test/test_mutable.py 1752
8774         d.addCallback(self.check_bad, "test_check_not_enough_shares")
8775         return d
8776 
8777+    def test_check_mdmf_not_enough_shares(self):
8778+        d = self.publish_mdmf()
8779+        def _then(ignored):
8780+            for shares in self._storage._peers.values():
8781+                for shnum in shares.keys():
8782+                    if shnum > 0:
8783+                        del shares[shnum]
8784+        d.addCallback(_then)
8785+        d.addCallback(lambda ignored:
8786+            self._fn.check(Monitor()))
8787+        d.addCallback(self.check_bad, "test_check_mdmf_not_enough_shares")
8788+        return d
8789+
8790+
8791     def test_check_all_bad_sig(self):
8792hunk ./src/allmydata/test/test_mutable.py 1767
8793-        corrupt(None, self._storage, 1) # bad sig
8794-        d = self._fn.check(Monitor())
8795+        d = corrupt(None, self._storage, 1) # bad sig
8796+        d.addCallback(lambda ignored:
8797+            self._fn.check(Monitor()))
8798         d.addCallback(self.check_bad, "test_check_all_bad_sig")
8799         return d
8800 
8801hunk ./src/allmydata/test/test_mutable.py 1773
8802+    def test_check_mdmf_all_bad_sig(self):
8803+        d = self.publish_mdmf()
8804+        d.addCallback(lambda ignored:
8805+            corrupt(None, self._storage, 1))
8806+        d.addCallback(lambda ignored:
8807+            self._fn.check(Monitor()))
8808+        d.addCallback(self.check_bad, "test_check_mdmf_all_bad_sig")
8809+        return d
8810+
8811     def test_check_all_bad_blocks(self):
8812hunk ./src/allmydata/test/test_mutable.py 1783
8813-        corrupt(None, self._storage, "share_data", [9]) # bad blocks
8814+        d = corrupt(None, self._storage, "share_data", [9]) # bad blocks
8815         # the Checker won't notice this.. it doesn't look at actual data
8816hunk ./src/allmydata/test/test_mutable.py 1785
8817-        d = self._fn.check(Monitor())
8818+        d.addCallback(lambda ignored:
8819+            self._fn.check(Monitor()))
8820         d.addCallback(self.check_good, "test_check_all_bad_blocks")
8821         return d
8822 
8823hunk ./src/allmydata/test/test_mutable.py 1790
8824+
8825+    def test_check_mdmf_all_bad_blocks(self):
8826+        d = self.publish_mdmf()
8827+        d.addCallback(lambda ignored:
8828+            corrupt(None, self._storage, "share_data"))
8829+        d.addCallback(lambda ignored:
8830+            self._fn.check(Monitor()))
8831+        d.addCallback(self.check_good, "test_check_mdmf_all_bad_blocks")
8832+        return d
8833+
8834     def test_verify_good(self):
8835         d = self._fn.check(Monitor(), verify=True)
8836         d.addCallback(self.check_good, "test_verify_good")
8837hunk ./src/allmydata/test/test_mutable.py 1804
8838         return d
8839+    test_verify_good.timeout = 15
8840 
8841     def test_verify_all_bad_sig(self):
8842hunk ./src/allmydata/test/test_mutable.py 1807
8843-        corrupt(None, self._storage, 1) # bad sig
8844-        d = self._fn.check(Monitor(), verify=True)
8845+        d = corrupt(None, self._storage, 1) # bad sig
8846+        d.addCallback(lambda ignored:
8847+            self._fn.check(Monitor(), verify=True))
8848         d.addCallback(self.check_bad, "test_verify_all_bad_sig")
8849         return d
8850 
8851hunk ./src/allmydata/test/test_mutable.py 1814
8852     def test_verify_one_bad_sig(self):
8853-        corrupt(None, self._storage, 1, [9]) # bad sig
8854-        d = self._fn.check(Monitor(), verify=True)
8855+        d = corrupt(None, self._storage, 1, [9]) # bad sig
8856+        d.addCallback(lambda ignored:
8857+            self._fn.check(Monitor(), verify=True))
8858         d.addCallback(self.check_bad, "test_verify_one_bad_sig")
8859         return d
8860 
8861hunk ./src/allmydata/test/test_mutable.py 1821
8862     def test_verify_one_bad_block(self):
8863-        corrupt(None, self._storage, "share_data", [9]) # bad blocks
8864+        d = corrupt(None, self._storage, "share_data", [9]) # bad blocks
8865         # the Verifier *will* notice this, since it examines every byte
8866hunk ./src/allmydata/test/test_mutable.py 1823
8867-        d = self._fn.check(Monitor(), verify=True)
8868+        d.addCallback(lambda ignored:
8869+            self._fn.check(Monitor(), verify=True))
8870         d.addCallback(self.check_bad, "test_verify_one_bad_block")
8871         d.addCallback(self.check_expected_failure,
8872                       CorruptShareError, "block hash tree failure",
8873hunk ./src/allmydata/test/test_mutable.py 1832
8874         return d
8875 
8876     def test_verify_one_bad_sharehash(self):
8877-        corrupt(None, self._storage, "share_hash_chain", [9], 5)
8878-        d = self._fn.check(Monitor(), verify=True)
8879+        d = corrupt(None, self._storage, "share_hash_chain", [9], 5)
8880+        d.addCallback(lambda ignored:
8881+            self._fn.check(Monitor(), verify=True))
8882         d.addCallback(self.check_bad, "test_verify_one_bad_sharehash")
8883         d.addCallback(self.check_expected_failure,
8884                       CorruptShareError, "corrupt hashes",
8885hunk ./src/allmydata/test/test_mutable.py 1842
8886         return d
8887 
8888     def test_verify_one_bad_encprivkey(self):
8889-        corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey
8890-        d = self._fn.check(Monitor(), verify=True)
8891+        d = corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey
8892+        d.addCallback(lambda ignored:
8893+            self._fn.check(Monitor(), verify=True))
8894         d.addCallback(self.check_bad, "test_verify_one_bad_encprivkey")
8895         d.addCallback(self.check_expected_failure,
8896                       CorruptShareError, "invalid privkey",
8897hunk ./src/allmydata/test/test_mutable.py 1852
8898         return d
8899 
8900     def test_verify_one_bad_encprivkey_uncheckable(self):
8901-        corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey
8902+        d = corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey
8903         readonly_fn = self._fn.get_readonly()
8904         # a read-only node has no way to validate the privkey
8905hunk ./src/allmydata/test/test_mutable.py 1855
8906-        d = readonly_fn.check(Monitor(), verify=True)
8907+        d.addCallback(lambda ignored:
8908+            readonly_fn.check(Monitor(), verify=True))
8909         d.addCallback(self.check_good,
8910                       "test_verify_one_bad_encprivkey_uncheckable")
8911         return d
8912hunk ./src/allmydata/test/test_mutable.py 1861
8913 
8914+
8915+    def test_verify_mdmf_good(self):
8916+        d = self.publish_mdmf()
8917+        d.addCallback(lambda ignored:
8918+            self._fn.check(Monitor(), verify=True))
8919+        d.addCallback(self.check_good, "test_verify_mdmf_good")
8920+        return d
8921+
8922+
8923+    def test_verify_mdmf_one_bad_block(self):
8924+        d = self.publish_mdmf()
8925+        d.addCallback(lambda ignored:
8926+            corrupt(None, self._storage, "share_data", [1]))
8927+        d.addCallback(lambda ignored:
8928+            self._fn.check(Monitor(), verify=True))
8929+        # We should find one bad block here
8930+        d.addCallback(self.check_bad, "test_verify_mdmf_one_bad_block")
8931+        d.addCallback(self.check_expected_failure,
8932+                      CorruptShareError, "block hash tree failure",
8933+                      "test_verify_mdmf_one_bad_block")
8934+        return d
8935+
8936+
8937+    def test_verify_mdmf_bad_encprivkey(self):
8938+        d = self.publish_mdmf()
8939+        d.addCallback(lambda ignored:
8940+            corrupt(None, self._storage, "enc_privkey", [0]))
8941+        d.addCallback(lambda ignored:
8942+            self._fn.check(Monitor(), verify=True))
8943+        d.addCallback(self.check_bad, "test_verify_mdmf_bad_encprivkey")
8944+        d.addCallback(self.check_expected_failure,
8945+                      CorruptShareError, "privkey",
8946+                      "test_verify_mdmf_bad_encprivkey")
8947+        return d
8948+
8949+
8950+    def test_verify_mdmf_bad_sig(self):
8951+        d = self.publish_mdmf()
8952+        d.addCallback(lambda ignored:
8953+            corrupt(None, self._storage, 1, [1]))
8954+        d.addCallback(lambda ignored:
8955+            self._fn.check(Monitor(), verify=True))
8956+        d.addCallback(self.check_bad, "test_verify_mdmf_bad_sig")
8957+        return d
8958+
8959+
8960+    def test_verify_mdmf_bad_encprivkey_uncheckable(self):
8961+        d = self.publish_mdmf()
8962+        d.addCallback(lambda ignored:
8963+            corrupt(None, self._storage, "enc_privkey", [1]))
8964+        d.addCallback(lambda ignored:
8965+            self._fn.get_readonly())
8966+        d.addCallback(lambda fn:
8967+            fn.check(Monitor(), verify=True))
8968+        d.addCallback(self.check_good,
8969+                      "test_verify_mdmf_bad_encprivkey_uncheckable")
8970+        return d
8971+
8972+
8973 class Repair(unittest.TestCase, PublishMixin, ShouldFailMixin):
8974 
8975     def get_shares(self, s):
8976hunk ./src/allmydata/test/test_mutable.py 1985
8977         current_shares = self.old_shares[-1]
8978         self.failUnlessEqual(old_shares, current_shares)
8979 
8980+
8981     def test_unrepairable_0shares(self):
8982         d = self.publish_one()
8983         def _delete_all_shares(ign):
8984hunk ./src/allmydata/test/test_mutable.py 2000
8985         d.addCallback(_check)
8986         return d
8987 
8988+    def test_mdmf_unrepairable_0shares(self):
8989+        d = self.publish_mdmf()
8990+        def _delete_all_shares(ign):
8991+            shares = self._storage._peers
8992+            for peerid in shares:
8993+                shares[peerid] = {}
8994+        d.addCallback(_delete_all_shares)
8995+        d.addCallback(lambda ign: self._fn.check(Monitor()))
8996+        d.addCallback(lambda check_results: self._fn.repair(check_results))
8997+        d.addCallback(lambda crr: self.failIf(crr.get_successful()))
8998+        return d
8999+
9000+
9001     def test_unrepairable_1share(self):
9002         d = self.publish_one()
9003         def _delete_all_shares(ign):
9004hunk ./src/allmydata/test/test_mutable.py 2029
9005         d.addCallback(_check)
9006         return d
9007 
9008+    def test_mdmf_unrepairable_1share(self):
9009+        d = self.publish_mdmf()
9010+        def _delete_all_shares(ign):
9011+            shares = self._storage._peers
9012+            for peerid in shares:
9013+                for shnum in list(shares[peerid]):
9014+                    if shnum > 0:
9015+                        del shares[peerid][shnum]
9016+        d.addCallback(_delete_all_shares)
9017+        d.addCallback(lambda ign: self._fn.check(Monitor()))
9018+        d.addCallback(lambda check_results: self._fn.repair(check_results))
9019+        def _check(crr):
9020+            self.failUnlessEqual(crr.get_successful(), False)
9021+        d.addCallback(_check)
9022+        return d
9023+
9024+    def test_repairable_5shares(self):
9025+        d = self.publish_mdmf()
9026+        def _delete_all_shares(ign):
9027+            shares = self._storage._peers
9028+            for peerid in shares:
9029+                for shnum in list(shares[peerid]):
9030+                    if shnum > 4:
9031+                        del shares[peerid][shnum]
9032+        d.addCallback(_delete_all_shares)
9033+        d.addCallback(lambda ign: self._fn.check(Monitor()))
9034+        d.addCallback(lambda check_results: self._fn.repair(check_results))
9035+        def _check(crr):
9036+            self.failUnlessEqual(crr.get_successful(), True)
9037+        d.addCallback(_check)
9038+        return d
9039+
9040+    def test_mdmf_repairable_5shares(self):
9041+        d = self.publish_mdmf()
9042+        def _delete_some_shares(ign):
9043+            shares = self._storage._peers
9044+            for peerid in shares:
9045+                for shnum in list(shares[peerid]):
9046+                    if shnum > 5:
9047+                        del shares[peerid][shnum]
9048+        d.addCallback(_delete_some_shares)
9049+        d.addCallback(lambda ign: self._fn.check(Monitor()))
9050+        def _check(cr):
9051+            self.failIf(cr.is_healthy())
9052+            self.failUnless(cr.is_recoverable())
9053+            return cr
9054+        d.addCallback(_check)
9055+        d.addCallback(lambda check_results: self._fn.repair(check_results))
9056+        def _check1(crr):
9057+            self.failUnlessEqual(crr.get_successful(), True)
9058+        d.addCallback(_check1)
9059+        return d
9060+
9061+
9062     def test_merge(self):
9063         self.old_shares = []
9064         d = self.publish_multiple()
9065hunk ./src/allmydata/test/test_mutable.py 2197
9066 class MultipleEncodings(unittest.TestCase):
9067     def setUp(self):
9068         self.CONTENTS = "New contents go here"
9069+        self.uploadable = MutableData(self.CONTENTS)
9070         self._storage = FakeStorage()
9071         self._nodemaker = make_nodemaker(self._storage, num_peers=20)
9072         self._storage_broker = self._nodemaker.storage_broker
9073hunk ./src/allmydata/test/test_mutable.py 2201
9074-        d = self._nodemaker.create_mutable_file(self.CONTENTS)
9075+        d = self._nodemaker.create_mutable_file(self.uploadable)
9076         def _created(node):
9077             self._fn = node
9078         d.addCallback(_created)
9079hunk ./src/allmydata/test/test_mutable.py 2207
9080         return d
9081 
9082-    def _encode(self, k, n, data):
9083+    def _encode(self, k, n, data, version=SDMF_VERSION):
9084         # encode 'data' into a peerid->shares dict.
9085 
9086         fn = self._fn
9087hunk ./src/allmydata/test/test_mutable.py 2227
9088         s = self._storage
9089         s._peers = {} # clear existing storage
9090         p2 = Publish(fn2, self._storage_broker, None)
9091-        d = p2.publish(data)
9092+        uploadable = MutableData(data)
9093+        d = p2.publish(uploadable)
9094         def _published(res):
9095             shares = s._peers
9096             s._peers = {}
9097hunk ./src/allmydata/test/test_mutable.py 2495
9098         self.basedir = "mutable/Problems/test_publish_surprise"
9099         self.set_up_grid()
9100         nm = self.g.clients[0].nodemaker
9101-        d = nm.create_mutable_file("contents 1")
9102+        d = nm.create_mutable_file(MutableData("contents 1"))
9103         def _created(n):
9104             d = defer.succeed(None)
9105             d.addCallback(lambda res: n.get_servermap(MODE_WRITE))
9106hunk ./src/allmydata/test/test_mutable.py 2505
9107             d.addCallback(_got_smap1)
9108             # then modify the file, leaving the old map untouched
9109             d.addCallback(lambda res: log.msg("starting winning write"))
9110-            d.addCallback(lambda res: n.overwrite("contents 2"))
9111+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
9112             # now attempt to modify the file with the old servermap. This
9113             # will look just like an uncoordinated write, in which every
9114             # single share got updated between our mapupdate and our publish
9115hunk ./src/allmydata/test/test_mutable.py 2514
9116                           self.shouldFail(UncoordinatedWriteError,
9117                                           "test_publish_surprise", None,
9118                                           n.upload,
9119-                                          "contents 2a", self.old_map))
9120+                                          MutableData("contents 2a"), self.old_map))
9121             return d
9122         d.addCallback(_created)
9123         return d
9124hunk ./src/allmydata/test/test_mutable.py 2523
9125         self.basedir = "mutable/Problems/test_retrieve_surprise"
9126         self.set_up_grid()
9127         nm = self.g.clients[0].nodemaker
9128-        d = nm.create_mutable_file("contents 1")
9129+        d = nm.create_mutable_file(MutableData("contents 1"))
9130         def _created(n):
9131             d = defer.succeed(None)
9132             d.addCallback(lambda res: n.get_servermap(MODE_READ))
9133hunk ./src/allmydata/test/test_mutable.py 2533
9134             d.addCallback(_got_smap1)
9135             # then modify the file, leaving the old map untouched
9136             d.addCallback(lambda res: log.msg("starting winning write"))
9137-            d.addCallback(lambda res: n.overwrite("contents 2"))
9138+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
9139             # now attempt to retrieve the old version with the old servermap.
9140             # This will look like someone has changed the file since we
9141             # updated the servermap.
9142hunk ./src/allmydata/test/test_mutable.py 2542
9143             d.addCallback(lambda res:
9144                           self.shouldFail(NotEnoughSharesError,
9145                                           "test_retrieve_surprise",
9146-                                          "ran out of peers: have 0 shares (k=3)",
9147+                                          "ran out of peers: have 0 of 1",
9148                                           n.download_version,
9149                                           self.old_map,
9150                                           self.old_map.best_recoverable_version(),
9151hunk ./src/allmydata/test/test_mutable.py 2551
9152         d.addCallback(_created)
9153         return d
9154 
9155+
9156     def test_unexpected_shares(self):
9157         # upload the file, take a servermap, shut down one of the servers,
9158         # upload it again (causing shares to appear on a new server), then
9159hunk ./src/allmydata/test/test_mutable.py 2561
9160         self.basedir = "mutable/Problems/test_unexpected_shares"
9161         self.set_up_grid()
9162         nm = self.g.clients[0].nodemaker
9163-        d = nm.create_mutable_file("contents 1")
9164+        d = nm.create_mutable_file(MutableData("contents 1"))
9165         def _created(n):
9166             d = defer.succeed(None)
9167             d.addCallback(lambda res: n.get_servermap(MODE_WRITE))
9168hunk ./src/allmydata/test/test_mutable.py 2573
9169                 self.g.remove_server(peer0)
9170                 # then modify the file, leaving the old map untouched
9171                 log.msg("starting winning write")
9172-                return n.overwrite("contents 2")
9173+                return n.overwrite(MutableData("contents 2"))
9174             d.addCallback(_got_smap1)
9175             # now attempt to modify the file with the old servermap. This
9176             # will look just like an uncoordinated write, in which every
9177hunk ./src/allmydata/test/test_mutable.py 2583
9178                           self.shouldFail(UncoordinatedWriteError,
9179                                           "test_surprise", None,
9180                                           n.upload,
9181-                                          "contents 2a", self.old_map))
9182+                                          MutableData("contents 2a"), self.old_map))
9183             return d
9184         d.addCallback(_created)
9185         return d
9186hunk ./src/allmydata/test/test_mutable.py 2587
9187+    test_unexpected_shares.timeout = 15
9188 
9189     def test_bad_server(self):
9190         # Break one server, then create the file: the initial publish should
9191hunk ./src/allmydata/test/test_mutable.py 2621
9192         d.addCallback(_break_peer0)
9193         # now "create" the file, using the pre-established key, and let the
9194         # initial publish finally happen
9195-        d.addCallback(lambda res: nm.create_mutable_file("contents 1"))
9196+        d.addCallback(lambda res: nm.create_mutable_file(MutableData("contents 1")))
9197         # that ought to work
9198         def _got_node(n):
9199             d = n.download_best_version()
9200hunk ./src/allmydata/test/test_mutable.py 2630
9201             def _break_peer1(res):
9202                 self.g.break_server(self.server1.get_serverid())
9203             d.addCallback(_break_peer1)
9204-            d.addCallback(lambda res: n.overwrite("contents 2"))
9205+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
9206             # that ought to work too
9207             d.addCallback(lambda res: n.download_best_version())
9208             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
9209hunk ./src/allmydata/test/test_mutable.py 2662
9210         peerids = [s.get_serverid() for s in sb.get_connected_servers()]
9211         self.g.break_server(peerids[0])
9212 
9213-        d = nm.create_mutable_file("contents 1")
9214+        d = nm.create_mutable_file(MutableData("contents 1"))
9215         def _created(n):
9216             d = n.download_best_version()
9217             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1"))
9218hunk ./src/allmydata/test/test_mutable.py 2670
9219             def _break_second_server(res):
9220                 self.g.break_server(peerids[1])
9221             d.addCallback(_break_second_server)
9222-            d.addCallback(lambda res: n.overwrite("contents 2"))
9223+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
9224             # that ought to work too
9225             d.addCallback(lambda res: n.download_best_version())
9226             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
9227hunk ./src/allmydata/test/test_mutable.py 2688
9228 
9229         d = self.shouldFail(NotEnoughServersError,
9230                             "test_publish_all_servers_bad",
9231-                            "Ran out of non-bad servers",
9232-                            nm.create_mutable_file, "contents")
9233+                            "ran out of good servers",
9234+                            nm.create_mutable_file, MutableData("contents"))
9235         return d
9236 
9237     def test_publish_no_servers(self):
9238hunk ./src/allmydata/test/test_mutable.py 2701
9239         d = self.shouldFail(NotEnoughServersError,
9240                             "test_publish_no_servers",
9241                             "Ran out of non-bad servers",
9242-                            nm.create_mutable_file, "contents")
9243+                            nm.create_mutable_file, MutableData("contents"))
9244         return d
9245     test_publish_no_servers.timeout = 30
9246 
9247hunk ./src/allmydata/test/test_mutable.py 2719
9248         # we need some contents that are large enough to push the privkey out
9249         # of the early part of the file
9250         LARGE = "These are Larger contents" * 2000 # about 50KB
9251-        d = nm.create_mutable_file(LARGE)
9252+        LARGE_uploadable = MutableData(LARGE)
9253+        d = nm.create_mutable_file(LARGE_uploadable)
9254         def _created(n):
9255             self.uri = n.get_uri()
9256             self.n2 = nm.create_from_cap(self.uri)
9257hunk ./src/allmydata/test/test_mutable.py 2755
9258         self.basedir = "mutable/Problems/test_privkey_query_missing"
9259         self.set_up_grid(num_servers=20)
9260         nm = self.g.clients[0].nodemaker
9261-        LARGE = "These are Larger contents" * 2000 # about 50KB
9262+        LARGE = "These are Larger contents" * 2000 # about 50KiB
9263+        LARGE_uploadable = MutableData(LARGE)
9264         nm._node_cache = DevNullDictionary() # disable the nodecache
9265 
9266hunk ./src/allmydata/test/test_mutable.py 2759
9267-        d = nm.create_mutable_file(LARGE)
9268+        d = nm.create_mutable_file(LARGE_uploadable)
9269         def _created(n):
9270             self.uri = n.get_uri()
9271             self.n2 = nm.create_from_cap(self.uri)
9272hunk ./src/allmydata/test/test_mutable.py 2769
9273         d.addCallback(_created)
9274         d.addCallback(lambda res: self.n2.get_servermap(MODE_WRITE))
9275         return d
9276+
9277+
9278+    def test_block_and_hash_query_error(self):
9279+        # This tests for what happens when a query to a remote server
9280+        # fails in either the hash validation step or the block getting
9281+        # step (because of batching, this is the same actual query).
9282+        # We need to have the storage server persist up until the point
9283+        # that its prefix is validated, then suddenly die. This
9284+        # exercises some exception handling code in Retrieve.
9285+        self.basedir = "mutable/Problems/test_block_and_hash_query_error"
9286+        self.set_up_grid(num_servers=20)
9287+        nm = self.g.clients[0].nodemaker
9288+        CONTENTS = "contents" * 2000
9289+        CONTENTS_uploadable = MutableData(CONTENTS)
9290+        d = nm.create_mutable_file(CONTENTS_uploadable)
9291+        def _created(node):
9292+            self._node = node
9293+        d.addCallback(_created)
9294+        d.addCallback(lambda ignored:
9295+            self._node.get_servermap(MODE_READ))
9296+        def _then(servermap):
9297+            # we have our servermap. Now we set up the servers like the
9298+            # tests above -- the first one that gets a read call should
9299+            # start throwing errors, but only after returning its prefix
9300+            # for validation. Since we'll download without fetching the
9301+            # private key, the next query to the remote server will be
9302+            # for either a block and salt or for hashes, either of which
9303+            # will exercise the error handling code.
9304+            killer = FirstServerGetsKilled()
9305+            for s in nm.storage_broker.get_connected_servers():
9306+                s.get_rref().post_call_notifier = killer.notify
9307+            ver = servermap.best_recoverable_version()
9308+            assert ver
9309+            return self._node.download_version(servermap, ver)
9310+        d.addCallback(_then)
9311+        d.addCallback(lambda data:
9312+            self.failUnlessEqual(data, CONTENTS))
9313+        return d
9314+
9315+
9316+class FileHandle(unittest.TestCase):
9317+    def setUp(self):
9318+        self.test_data = "Test Data" * 50000
9319+        self.sio = StringIO(self.test_data)
9320+        self.uploadable = MutableFileHandle(self.sio)
9321+
9322+
9323+    def test_filehandle_read(self):
9324+        self.basedir = "mutable/FileHandle/test_filehandle_read"
9325+        chunk_size = 10
9326+        for i in xrange(0, len(self.test_data), chunk_size):
9327+            data = self.uploadable.read(chunk_size)
9328+            data = "".join(data)
9329+            start = i
9330+            end = i + chunk_size
9331+            self.failUnlessEqual(data, self.test_data[start:end])
9332+
9333+
9334+    def test_filehandle_get_size(self):
9335+        self.basedir = "mutable/FileHandle/test_filehandle_get_size"
9336+        actual_size = len(self.test_data)
9337+        size = self.uploadable.get_size()
9338+        self.failUnlessEqual(size, actual_size)
9339+
9340+
9341+    def test_filehandle_get_size_out_of_order(self):
9342+        # We should be able to call get_size whenever we want without
9343+        # disturbing the location of the seek pointer.
9344+        chunk_size = 100
9345+        data = self.uploadable.read(chunk_size)
9346+        self.failUnlessEqual("".join(data), self.test_data[:chunk_size])
9347+
9348+        # Now get the size.
9349+        size = self.uploadable.get_size()
9350+        self.failUnlessEqual(size, len(self.test_data))
9351+
9352+        # Now get more data. We should be right where we left off.
9353+        more_data = self.uploadable.read(chunk_size)
9354+        start = chunk_size
9355+        end = chunk_size * 2
9356+        self.failUnlessEqual("".join(more_data), self.test_data[start:end])
9357+
9358+
9359+    def test_filehandle_file(self):
9360+        # Make sure that the MutableFileHandle works on a file as well
9361+        # as a StringIO object, since in some cases it will be asked to
9362+        # deal with files.
9363+        self.basedir = self.mktemp()
9364+        # mktemp() only returns a path; create the directory ourselves.
9365+        os.mkdir(self.basedir)
9366+        f_path = os.path.join(self.basedir, "test_file")
9367+        f = open(f_path, "w")
9368+        f.write(self.test_data)
9369+        f.close()
9370+        f = open(f_path, "r")
9371+
9372+        uploadable = MutableFileHandle(f)
9373+
9374+        data = uploadable.read(len(self.test_data))
9375+        self.failUnlessEqual("".join(data), self.test_data)
9376+        size = uploadable.get_size()
9377+        self.failUnlessEqual(size, len(self.test_data))
9378+
9379+
9380+    def test_close(self):
9381+        # Make sure that the MutableFileHandle closes its handle when
9382+        # told to do so.
9383+        self.uploadable.close()
9384+        self.failUnless(self.sio.closed)
9385+
9386+
9387+class DataHandle(unittest.TestCase):
9388+    def setUp(self):
9389+        self.test_data = "Test Data" * 50000
9390+        self.uploadable = MutableData(self.test_data)
9391+
9392+
9393+    def test_datahandle_read(self):
9394+        chunk_size = 10
9395+        for i in xrange(0, len(self.test_data), chunk_size):
9396+            data = self.uploadable.read(chunk_size)
9397+            data = "".join(data)
9398+            start = i
9399+            end = i + chunk_size
9400+            self.failUnlessEqual(data, self.test_data[start:end])
9401+
9402+
9403+    def test_datahandle_get_size(self):
9404+        actual_size = len(self.test_data)
9405+        size = self.uploadable.get_size()
9406+        self.failUnlessEqual(size, actual_size)
9407+
9408+
9409+    def test_datahandle_get_size_out_of_order(self):
9410+        # We should be able to call get_size whenever we want without
9411+        # disturbing the location of the seek pointer.
9412+        chunk_size = 100
9413+        data = self.uploadable.read(chunk_size)
9414+        self.failUnlessEqual("".join(data), self.test_data[:chunk_size])
9415+
9416+        # Now get the size.
9417+        size = self.uploadable.get_size()
9418+        self.failUnlessEqual(size, len(self.test_data))
9419+
9420+        # Now get more data. We should be right where we left off.
9421+        more_data = self.uploadable.read(chunk_size)
9422+        start = chunk_size
9423+        end = chunk_size * 2
9424+        self.failUnlessEqual("".join(more_data), self.test_data[start:end])
9425+
9426+
9427+class Version(GridTestMixin, unittest.TestCase, testutil.ShouldFailMixin, \
9428+              PublishMixin):
9429+    def setUp(self):
9430+        GridTestMixin.setUp(self)
9431+        self.basedir = self.mktemp()
9432+        self.set_up_grid()
9433+        self.c = self.g.clients[0]
9434+        self.nm = self.c.nodemaker
9435+        self.data = "test data" * 100000 # about 900 KiB; MDMF
9436+        self.small_data = "test data" * 10 # about 90 B; SDMF
9437+        return self.do_upload()
9438+
9439+
9440+    def do_upload(self):
9441+        d1 = self.nm.create_mutable_file(MutableData(self.data),
9442+                                         version=MDMF_VERSION)
9443+        d2 = self.nm.create_mutable_file(MutableData(self.small_data))
9444+        dl = gatherResults([d1, d2])
9445+        def _then((n1, n2)):
9446+            assert isinstance(n1, MutableFileNode)
9447+            assert isinstance(n2, MutableFileNode)
9448+
9449+            self.mdmf_node = n1
9450+            self.sdmf_node = n2
9451+        dl.addCallback(_then)
9452+        return dl
9453+
9454+
9455+    def test_get_readonly_mutable_version(self):
9456+        # Attempting to get a mutable version of a mutable file from a
9457+        # filenode initialized with a readcap should return a readonly
9458+        # version of that same node.
9459+        ro = self.mdmf_node.get_readonly()
9460+        d = ro.get_best_mutable_version()
9461+        d.addCallback(lambda version:
9462+            self.failUnless(version.is_readonly()))
9463+        d.addCallback(lambda ignored:
9464+            self.sdmf_node.get_readonly())
9465+        d.addCallback(lambda version:
9466+            self.failUnless(version.is_readonly()))
9467+        return d
9468+
9469+
9470+    def test_get_sequence_number(self):
9471+        d = self.mdmf_node.get_best_readable_version()
9472+        d.addCallback(lambda bv:
9473+            self.failUnlessEqual(bv.get_sequence_number(), 1))
9474+        d.addCallback(lambda ignored:
9475+            self.sdmf_node.get_best_readable_version())
9476+        d.addCallback(lambda bv:
9477+            self.failUnlessEqual(bv.get_sequence_number(), 1))
9478+        # Now update. The sequence number in both cases should
9479+        # become 2.
9480+        def _do_update(ignored):
9481+            new_data = MutableData("foo bar baz" * 100000)
9482+            new_small_data = MutableData("foo bar baz" * 10)
9483+            d1 = self.mdmf_node.overwrite(new_data)
9484+            d2 = self.sdmf_node.overwrite(new_small_data)
9485+            dl = gatherResults([d1, d2])
9486+            return dl
9487+        d.addCallback(_do_update)
9488+        d.addCallback(lambda ignored:
9489+            self.mdmf_node.get_best_readable_version())
9490+        d.addCallback(lambda bv:
9491+            self.failUnlessEqual(bv.get_sequence_number(), 2))
9492+        d.addCallback(lambda ignored:
9493+            self.sdmf_node.get_best_readable_version())
9494+        d.addCallback(lambda bv:
9495+            self.failUnlessEqual(bv.get_sequence_number(), 2))
9496+        return d
9497+
9498+
9499+    def test_version_extension_api(self):
9500+        # We need to define an API by which an uploader can set the
9501+        # extension parameters, and by which a downloader can retrieve
9502+        # extensions.
9503+        d = self.mdmf_node.get_best_mutable_version()
9504+        def _got_version(version):
9505+            hints = version.get_downloader_hints()
9506+            # The hints should contain the k and segsize parameters.
9507+            self.failUnlessIn("k", hints)
9508+            self.failUnlessEqual(hints['k'], 3)
9509+            self.failUnlessIn('segsize', hints)
9510+            self.failUnlessEqual(hints['segsize'], 131073)
9511+        d.addCallback(_got_version)
9512+        return d
9513+
9514+
9515+    def test_extensions_from_cap(self):
9516+        # If we initialize a mutable file with a cap that has extension
9517+        # parameters in it and then grab the extension parameters using
9518+        # our API, we should see that they're set correctly.
9519+        mdmf_uri = self.mdmf_node.get_uri()
9520+        new_node = self.nm.create_from_cap(mdmf_uri)
9521+        d = new_node.get_best_mutable_version()
9522+        def _got_version(version):
9523+            hints = version.get_downloader_hints()
9524+            self.failUnlessIn("k", hints)
9525+            self.failUnlessEqual(hints["k"], 3)
9526+            self.failUnlessIn("segsize", hints)
9527+            self.failUnlessEqual(hints["segsize"], 131073)
9528+        d.addCallback(_got_version)
9529+        return d
9530+
9531+
9532+    def test_extensions_from_upload(self):
9533+        # If we create a new mutable file with some contents, we should
9534+        # get back an MDMF cap with the right hints in place.
9535+        contents = "foo bar baz" * 100000
9536+        d = self.nm.create_mutable_file(contents, version=MDMF_VERSION)
9537+        def _got_mutable_file(n):
9538+            rw_uri = n.get_uri()
9539+            expected_k = str(self.c.DEFAULT_ENCODING_PARAMETERS['k'])
9540+            self.failUnlessIn(expected_k, rw_uri)
9541+            # XXX: Get this more intelligently.
9542+            self.failUnlessIn("131073", rw_uri)
9543+
9544+            ro_uri = n.get_readonly_uri()
9545+            self.failUnlessIn(expected_k, ro_uri)
9546+            self.failUnlessIn("131073", ro_uri)
9547+        d.addCallback(_got_mutable_file)
9548+        return d
9549+
9550+
9551+    def test_cap_after_upload(self):
9552+        # If we create a new mutable file and upload things to it, and
9553+        # it's an MDMF file, we should get an MDMF cap back from that
9554+        # file and should be able to use that.
9555+        # That's essentially what self.mdmf_node is, so just check that.
9556+        mdmf_uri = self.mdmf_node.get_uri()
9557+        cap = uri.from_string(mdmf_uri)
9558+        self.failUnless(isinstance(cap, uri.WritableMDMFFileURI))
9559+        readonly_mdmf_uri = self.mdmf_node.get_readonly_uri()
9560+        cap = uri.from_string(readonly_mdmf_uri)
9561+        self.failUnless(isinstance(cap, uri.ReadonlyMDMFFileURI))
9562+
9563+
9564+    def test_get_writekey(self):
9565+        d = self.mdmf_node.get_best_mutable_version()
9566+        d.addCallback(lambda bv:
9567+            self.failUnlessEqual(bv.get_writekey(),
9568+                                 self.mdmf_node.get_writekey()))
9569+        d.addCallback(lambda ignored:
9570+            self.sdmf_node.get_best_mutable_version())
9571+        d.addCallback(lambda bv:
9572+            self.failUnlessEqual(bv.get_writekey(),
9573+                                 self.sdmf_node.get_writekey()))
9574+        return d
9575+
9576+
9577+    def test_get_storage_index(self):
9578+        d = self.mdmf_node.get_best_mutable_version()
9579+        d.addCallback(lambda bv:
9580+            self.failUnlessEqual(bv.get_storage_index(),
9581+                                 self.mdmf_node.get_storage_index()))
9582+        d.addCallback(lambda ignored:
9583+            self.sdmf_node.get_best_mutable_version())
9584+        d.addCallback(lambda bv:
9585+            self.failUnlessEqual(bv.get_storage_index(),
9586+                                 self.sdmf_node.get_storage_index()))
9587+        return d
9588+
9589+
9590+    def test_get_readonly_version(self):
9591+        d = self.mdmf_node.get_best_readable_version()
9592+        d.addCallback(lambda bv:
9593+            self.failUnless(bv.is_readonly()))
9594+        d.addCallback(lambda ignored:
9595+            self.sdmf_node.get_best_readable_version())
9596+        d.addCallback(lambda bv:
9597+            self.failUnless(bv.is_readonly()))
9598+        return d
9599+
9600+
9601+    def test_get_mutable_version(self):
9602+        d = self.mdmf_node.get_best_mutable_version()
9603+        d.addCallback(lambda bv:
9604+            self.failIf(bv.is_readonly()))
9605+        d.addCallback(lambda ignored:
9606+            self.sdmf_node.get_best_mutable_version())
9607+        d.addCallback(lambda bv:
9608+            self.failIf(bv.is_readonly()))
9609+        return d
9610+
9611+
9612+    def test_toplevel_overwrite(self):
9613+        new_data = MutableData("foo bar baz" * 100000)
9614+        new_small_data = MutableData("foo bar baz" * 10)
9615+        d = self.mdmf_node.overwrite(new_data)
9616+        d.addCallback(lambda ignored:
9617+            self.mdmf_node.download_best_version())
9618+        d.addCallback(lambda data:
9619+            self.failUnlessEqual(data, "foo bar baz" * 100000))
9620+        d.addCallback(lambda ignored:
9621+            self.sdmf_node.overwrite(new_small_data))
9622+        d.addCallback(lambda ignored:
9623+            self.sdmf_node.download_best_version())
9624+        d.addCallback(lambda data:
9625+            self.failUnlessEqual(data, "foo bar baz" * 10))
9626+        return d
9627+
9628+
9629+    def test_toplevel_modify(self):
9630+        def modifier(old_contents, servermap, first_time):
9631+            return old_contents + "modified"
9632+        d = self.mdmf_node.modify(modifier)
9633+        d.addCallback(lambda ignored:
9634+            self.mdmf_node.download_best_version())
9635+        d.addCallback(lambda data:
9636+            self.failUnlessIn("modified", data))
9637+        d.addCallback(lambda ignored:
9638+            self.sdmf_node.modify(modifier))
9639+        d.addCallback(lambda ignored:
9640+            self.sdmf_node.download_best_version())
9641+        d.addCallback(lambda data:
9642+            self.failUnlessIn("modified", data))
9643+        return d
9644+
9645+
9646+    def test_version_modify(self):
9647+        # TODO: When we can publish multiple versions, alter this test
9648+        # to modify a version other than the best usable version, then
9649+        # check that the best recoverable version is the one we modified.
9650+        def modifier(old_contents, servermap, first_time):
9651+            return old_contents + "modified"
9652+        d = self.mdmf_node.modify(modifier)
9653+        d.addCallback(lambda ignored:
9654+            self.mdmf_node.download_best_version())
9655+        d.addCallback(lambda data:
9656+            self.failUnlessIn("modified", data))
9657+        d.addCallback(lambda ignored:
9658+            self.sdmf_node.modify(modifier))
9659+        d.addCallback(lambda ignored:
9660+            self.sdmf_node.download_best_version())
9661+        d.addCallback(lambda data:
9662+            self.failUnlessIn("modified", data))
9663+        return d
9664+
9665+
9666+    def test_download_version(self):
9667+        d = self.publish_multiple()
9668+        # We want to have two recoverable versions on the grid.
9669+        d.addCallback(lambda res:
9670+                      self._set_versions({0:0,2:0,4:0,6:0,8:0,
9671+                                          1:1,3:1,5:1,7:1,9:1}))
9672+        # Now try to download each version. We should get the plaintext
9673+        # associated with that version.
9674+        d.addCallback(lambda ignored:
9675+            self._fn.get_servermap(mode=MODE_READ))
9676+        def _got_servermap(smap):
9677+            versions = smap.recoverable_versions()
9678+            assert len(versions) == 2
9679+
9680+            self.servermap = smap
9681+            self.version1, self.version2 = versions
9682+            assert self.version1 != self.version2
9683+
9684+            self.version1_seqnum = self.version1[0]
9685+            self.version2_seqnum = self.version2[0]
9686+            self.version1_index = self.version1_seqnum - 1
9687+            self.version2_index = self.version2_seqnum - 1
9688+
9689+        d.addCallback(_got_servermap)
9690+        d.addCallback(lambda ignored:
9691+            self._fn.download_version(self.servermap, self.version1))
9692+        d.addCallback(lambda results:
9693+            self.failUnlessEqual(self.CONTENTS[self.version1_index],
9694+                                 results))
9695+        d.addCallback(lambda ignored:
9696+            self._fn.download_version(self.servermap, self.version2))
9697+        d.addCallback(lambda results:
9698+            self.failUnlessEqual(self.CONTENTS[self.version2_index],
9699+                                 results))
9700+        return d
9701+
9702+
9703+    def test_download_nonexistent_version(self):
9704+        d = self.mdmf_node.get_servermap(mode=MODE_WRITE)
9705+        def _set_servermap(servermap):
9706+            self.servermap = servermap
9707+        d.addCallback(_set_servermap)
9708+        d.addCallback(lambda ignored:
9709+           self.shouldFail(UnrecoverableFileError, "nonexistent version",
9710+                           None,
9711+                           self.mdmf_node.download_version, self.servermap,
9712+                           "not a version"))
9713+        return d
9714+
9715+
9716+    def test_partial_read(self):
9717+        # read only a few bytes at a time, and see that the results are
9718+        # what we expect.
9719+        d = self.mdmf_node.get_best_readable_version()
9720+        def _read_data(version):
9721+            c = consumer.MemoryConsumer()
9722+            d2 = defer.succeed(None)
9723+            for i in xrange(0, len(self.data), 10000):
9724+                d2.addCallback(lambda ignored, i=i: version.read(c, i, 10000))
9725+            d2.addCallback(lambda ignored:
9726+                self.failUnlessEqual(self.data, "".join(c.chunks)))
9727+            return d2
9728+        d.addCallback(_read_data)
9729+        return d
9730+
9731+
9732+    def test_read(self):
9733+        d = self.mdmf_node.get_best_readable_version()
9734+        def _read_data(version):
9735+            c = consumer.MemoryConsumer()
9736+            d2 = defer.succeed(None)
9737+            d2.addCallback(lambda ignored: version.read(c))
9738+            d2.addCallback(lambda ignored:
9739+                self.failUnlessEqual("".join(c.chunks), self.data))
9740+            return d2
9741+        d.addCallback(_read_data)
9742+        return d
9743+
9744+
9745+    def test_download_best_version(self):
9746+        d = self.mdmf_node.download_best_version()
9747+        d.addCallback(lambda data:
9748+            self.failUnlessEqual(data, self.data))
9749+        d.addCallback(lambda ignored:
9750+            self.sdmf_node.download_best_version())
9751+        d.addCallback(lambda data:
9752+            self.failUnlessEqual(data, self.small_data))
9753+        return d
9754+
9755+
9756+class Update(GridTestMixin, unittest.TestCase, testutil.ShouldFailMixin):
9757+    def setUp(self):
9758+        GridTestMixin.setUp(self)
9759+        self.basedir = self.mktemp()
9760+        self.set_up_grid()
9761+        self.c = self.g.clients[0]
9762+        self.nm = self.c.nodemaker
9763+        self.data = "testdata " * 100000 # about 900 KiB; MDMF
9764+        self.small_data = "test data" * 10 # about 90 B; SDMF
9765+        return self.do_upload()
9766+
9767+
9768+    def do_upload(self):
9769+        d1 = self.nm.create_mutable_file(MutableData(self.data),
9770+                                         version=MDMF_VERSION)
9771+        d2 = self.nm.create_mutable_file(MutableData(self.small_data))
9772+        dl = gatherResults([d1, d2])
9773+        def _then((n1, n2)):
9774+            assert isinstance(n1, MutableFileNode)
9775+            assert isinstance(n2, MutableFileNode)
9776+
9777+            self.mdmf_node = n1
9778+            self.sdmf_node = n2
9779+        dl.addCallback(_then)
9780+        # Make SDMF and MDMF mutable file nodes that have 255 shares.
9781+        def _make_max_shares(ign):
9782+            self.nm.default_encoding_parameters['n'] = 255
9783+            self.nm.default_encoding_parameters['k'] = 127
9784+            d1 = self.nm.create_mutable_file(MutableData(self.data),
9785+                                             version=MDMF_VERSION)
9786+            d2 = \
9787+                self.nm.create_mutable_file(MutableData(self.small_data))
9788+            return gatherResults([d1, d2])
9789+        dl.addCallback(_make_max_shares)
9790+        def _stash((n1, n2)):
9791+            assert isinstance(n1, MutableFileNode)
9792+            assert isinstance(n2, MutableFileNode)
9793+
9794+            self.mdmf_max_shares_node = n1
9795+            self.sdmf_max_shares_node = n2
9796+        dl.addCallback(_stash)
9797+        return dl
9798+
9799+    def test_append(self):
9800+        # We should be able to append data to the end of a mutable
9801+        # file and get what we expect.
9802+        new_data = self.data + "appended"
9803+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
9804+            d = node.get_best_mutable_version()
9805+            d.addCallback(lambda mv:
9806+                mv.update(MutableData("appended"), len(self.data)))
9807+            d.addCallback(lambda ignored, node=node:
9808+                node.download_best_version())
9809+            d.addCallback(lambda results:
9810+                self.failUnlessEqual(results, new_data))
9811+        return d
9812+
9813+    def test_replace(self):
9814+        # We should be able to replace data in the middle of a mutable
9815+        # file and get what we expect back.
9816+        new_data = self.data[:100]
9817+        new_data += "appended"
9818+        new_data += self.data[108:]
9819+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
9820+            d = node.get_best_mutable_version()
9821+            d.addCallback(lambda mv:
9822+                mv.update(MutableData("appended"), 100))
9823+            d.addCallback(lambda ignored, node=node:
9824+                node.download_best_version())
9825+            d.addCallback(lambda results:
9826+                self.failUnlessEqual(results, new_data))
9827+        return d
9828+
9829+    def test_replace_beginning(self):
9830+        # We should be able to replace data at the beginning of the file
9831+        # without truncating the file
9832+        B = "beginning"
9833+        new_data = B + self.data[len(B):]
9834+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
9835+            d = node.get_best_mutable_version()
9836+            d.addCallback(lambda mv: mv.update(MutableData(B), 0))
9837+            d.addCallback(lambda ignored, node=node:
9838+                node.download_best_version())
9839+            d.addCallback(lambda results: self.failUnlessEqual(results, new_data))
9840+        return d
9841+
9842+    def test_replace_segstart1(self):
9843+        offset = 128*1024+1
9844+        new_data = "NNNN"
9845+        expected = self.data[:offset]+new_data+self.data[offset+4:]
9846+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
9847+            d = node.get_best_mutable_version()
9848+            d.addCallback(lambda mv:
9849+                mv.update(MutableData(new_data), offset))
9850+            # close over 'node'.
9851+            d.addCallback(lambda ignored, node=node:
9852+                node.download_best_version())
9853+            def _check(results):
9854+                if results != expected:
9855+                    print
9856+                    print "got: %s ... %s" % (results[:20], results[-20:])
9857+                    print "exp: %s ... %s" % (expected[:20], expected[-20:])
9858+                    self.fail("results != expected")
9859+            d.addCallback(_check)
9860+        return d
9861+
9862+    def _check_differences(self, got, expected):
9863+        # displaying arbitrary file corruption is tricky for a
9864+        # 1MB file of repeating data, so look for likely places
9865+        # with problems and display them separately
9866+        gotmods = [mo.span() for mo in re.finditer('([A-Z]+)', got)]
9867+        expmods = [mo.span() for mo in re.finditer('([A-Z]+)', expected)]
9868+        gotspans = ["%d:%d=%s" % (start,end,got[start:end])
9869+                    for (start,end) in gotmods]
9870+        expspans = ["%d:%d=%s" % (start,end,expected[start:end])
9871+                    for (start,end) in expmods]
9872+        #print "expecting: %s" % expspans
9873+
9874+        SEGSIZE = 128*1024
9875+        if got != expected:
9876+            print "differences:"
9877+            for segnum in range(len(expected)//SEGSIZE):
9878+                start = segnum * SEGSIZE
9879+                end = (segnum+1) * SEGSIZE
9880+                got_ends = "%s .. %s" % (got[start:start+20], got[end-20:end])
9881+                exp_ends = "%s .. %s" % (expected[start:start+20], expected[end-20:end])
9882+                if got_ends != exp_ends:
9883+                    print "expected[%d]: %s" % (start, exp_ends)
9884+                    print "got     [%d]: %s" % (start, got_ends)
9885+            if expspans != gotspans:
9886+                print "expected: %s" % expspans
9887+                print "got     : %s" % gotspans
9888+            open("EXPECTED","wb").write(expected)
9889+            open("GOT","wb").write(got)
9890+            print "wrote data to EXPECTED and GOT"
9891+            self.fail("didn't get expected data")
9892+
9893+
9894+    def test_replace_locations(self):
9895+        # exercise fencepost conditions
9896+        expected = self.data
9897+        SEGSIZE = 128*1024
9898+        suspects = range(SEGSIZE-3, SEGSIZE+1)+range(2*SEGSIZE-3, 2*SEGSIZE+1)
9899+        letters = iter("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
9900+        d = defer.succeed(None)
9901+        for offset in suspects:
9902+            new_data = letters.next()*2 # "AA", then "BB", etc
9903+            expected = expected[:offset]+new_data+expected[offset+2:]
9904+            d.addCallback(lambda ign:
9905+                          self.mdmf_node.get_best_mutable_version())
9906+            def _modify(mv, offset=offset, new_data=new_data):
9907+                # close over 'offset','new_data'
9908+                md = MutableData(new_data)
9909+                return mv.update(md, offset)
9910+            d.addCallback(_modify)
9911+            d.addCallback(lambda ignored:
9912+                          self.mdmf_node.download_best_version())
9913+            d.addCallback(self._check_differences, expected)
9914+        return d
9915+
9916+    def test_replace_locations_max_shares(self):
9917+        # exercise fencepost conditions
9918+        expected = self.data
9919+        SEGSIZE = 128*1024
9920+        suspects = range(SEGSIZE-3, SEGSIZE+1)+range(2*SEGSIZE-3, 2*SEGSIZE+1)
9921+        letters = iter("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
9922+        d = defer.succeed(None)
9923+        for offset in suspects:
9924+            new_data = letters.next()*2 # "AA", then "BB", etc
9925+            expected = expected[:offset]+new_data+expected[offset+2:]
9926+            d.addCallback(lambda ign:
9927+                          self.mdmf_max_shares_node.get_best_mutable_version())
9928+            def _modify(mv, offset=offset, new_data=new_data):
9929+                # close over 'offset','new_data'
9930+                md = MutableData(new_data)
9931+                return mv.update(md, offset)
9932+            d.addCallback(_modify)
9933+            d.addCallback(lambda ignored:
9934+                          self.mdmf_max_shares_node.download_best_version())
9935+            d.addCallback(self._check_differences, expected)
9936+        return d
9937+
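The suspect offsets in the two tests above are chosen so that a two-byte write falls just inside, straddling, or just past a 128 KiB segment boundary. A minimal sketch of which segments each such write touches (illustration only; touched_segments() is a hypothetical helper, not a Tahoe-LAFS API):

```python
# Illustration only, not part of the patch: map a two-byte write at each
# "suspect" offset onto the (zero-based) segments it overlaps, assuming
# the 128 KiB segment size used by the tests above.
SEGSIZE = 128 * 1024

def touched_segments(offset, length=2, segsize=SEGSIZE):
    # Hypothetical helper: indices of every segment the write overlaps.
    first = offset // segsize
    last = (offset + length - 1) // segsize
    return list(range(first, last + 1))

# SEGSIZE-3 and SEGSIZE-2 stay inside segment 0, SEGSIZE-1 straddles
# segments 0 and 1, and SEGSIZE starts cleanly in segment 1.
boundary_cases = [touched_segments(SEGSIZE + delta) for delta in (-3, -2, -1, 0)]
```

The straddling case (SEGSIZE-1) is the one most likely to expose fencepost bugs, since the write spans two segments.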
9938+    def test_replace_and_extend(self):
9939+        # We should be able to replace data in the middle of a mutable
9940+        # file and extend that mutable file and get what we expect.
9941+        new_data = self.data[:100]
9942+        new_data += "modified " * 100000
9943+        d = defer.succeed(None) # chain both nodes so failures in either are reported
9944+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
9945+            d.addCallback(lambda ign, node=node: node.get_best_mutable_version())
9946+            d.addCallback(lambda mv:
9947+                mv.update(MutableData("modified " * 100000), 100))
9948+            d.addCallback(lambda ignored, node=node: node.download_best_version())
9949+            d.addCallback(lambda results:
9950+                self.failUnlessEqual(results, new_data))
9951+        return d
9952+
9953+
9954+    def test_append_power_of_two(self):
9955+        # If we attempt to extend a mutable file so that its segment
9956+        # count crosses a power-of-two boundary, the update operation
9957+        # should know how to reencode the file.
9958+
9959+        # Note that the data populating self.mdmf_node is about 900 KiB
9960+        # long -- i.e. 7 segments at the default segment size. So we
9961+        # need to add 2 segments' worth of data to push it over a
9962+        # power-of-two boundary.
9963+        segment = "a" * DEFAULT_MAX_SEGMENT_SIZE
9964+        new_data = self.data + (segment * 2)
9965+        d = defer.succeed(None) # chain both nodes so failures in either are reported
9966+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
9967+            d.addCallback(lambda ign, node=node: node.get_best_mutable_version())
9968+            d.addCallback(lambda mv:
9969+                mv.update(MutableData(segment * 2), len(self.data)))
9970+            d.addCallback(lambda ignored, node=node: node.download_best_version())
9971+            d.addCallback(lambda results:
9972+                self.failUnlessEqual(results, new_data))
9973+        return d
9974+    test_append_power_of_two.timeout = 15
9975+
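The segment arithmetic behind the power-of-two comment in test_append_power_of_two can be spelled out. This is an illustration only (segment_count() is a hypothetical helper, not a Tahoe-LAFS API); it assumes the test data is 900000 bytes ("testdata " * 100000, roughly 900 KiB) and that DEFAULT_MAX_SEGMENT_SIZE is 128 KiB, as in Tahoe-LAFS at the time of this patch:

```python
# Illustration only, not part of the patch: why appending two full
# segments to the ~900 KiB test data crosses a power-of-two boundary.
import math

DEFAULT_MAX_SEGMENT_SIZE = 128 * 1024   # 131072 bytes, assumed default

def segment_count(length, segsize=DEFAULT_MAX_SEGMENT_SIZE):
    # Hypothetical helper: every file occupies at least one segment.
    return max(1, int(math.ceil(length / float(segsize))))

data_len = len("testdata ") * 100000    # 900000 bytes
before = segment_count(data_len)                                # 7 segments
after = segment_count(data_len + 2 * DEFAULT_MAX_SEGMENT_SIZE)  # 9 segments
# The count moves from 7 (at most 2**3 leaves) to 9 (more than 2**3),
# so the update cannot simply append in place: structures sized to the
# next power of two of the segment count have to be rebuilt.
```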
9976+
9977+    def test_update_sdmf(self):
9978+        # Running update on a single-segment file should still work.
9979+        new_data = self.small_data + "appended"
9980+        d = defer.succeed(None) # chain both nodes so failures in either are reported
9981+        for node in (self.sdmf_node, self.sdmf_max_shares_node):
9982+            d.addCallback(lambda ign, node=node: node.get_best_mutable_version())
9983+            d.addCallback(lambda mv:
9984+                mv.update(MutableData("appended"), len(self.small_data)))
9985+            d.addCallback(lambda ignored, node=node: node.download_best_version())
9986+            d.addCallback(lambda results:
9987+                self.failUnlessEqual(results, new_data))
9988+        return d
9989+
9990+    def test_replace_in_last_segment(self):
9991+        # The wrapper should know how to handle the tail segment
9992+        # appropriately.
9993+        replace_offset = len(self.data) - 100
9994+        new_data = self.data[:replace_offset] + "replaced"
9995+        rest_offset = replace_offset + len("replaced")
9996+        new_data += self.data[rest_offset:]
9997+        d = defer.succeed(None) # chain both nodes so failures in either are reported
9998+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
9999+            d.addCallback(lambda ign, node=node: node.get_best_mutable_version())
10000+            d.addCallback(lambda mv:
10001+                mv.update(MutableData("replaced"), replace_offset))
10002+            d.addCallback(lambda ignored, node=node: node.download_best_version())
10003+            d.addCallback(lambda results:
10004+                self.failUnlessEqual(results, new_data))
10005+        return d
10006+
10007+
10008+    def test_multiple_segment_replace(self):
10009+        replace_offset = 2 * DEFAULT_MAX_SEGMENT_SIZE
10010+        new_data = self.data[:replace_offset]
10011+        new_segment = "a" * DEFAULT_MAX_SEGMENT_SIZE
10012+        new_data += 2 * new_segment
10013+        new_data += "replaced"
10014+        rest_offset = len(new_data)
10015+        new_data += self.data[rest_offset:]
10016+        d = defer.succeed(None) # chain both nodes so failures in either are reported
10017+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
10018+            d.addCallback(lambda ign, node=node: node.get_best_mutable_version())
10019+            d.addCallback(lambda mv:
10020+                mv.update(MutableData((2 * new_segment) + "replaced"),
10021+                          replace_offset))
10022+            d.addCallback(lambda ignored, node=node: node.download_best_version())
10023+            d.addCallback(lambda results:
10024+                self.failUnlessEqual(results, new_data))
10025+        return d
10026+
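The sdmf_old_shares entries below are base64-encoded SDMF shares captured from an older client, used to check that the reworked code still reads the old format. One way to sanity-check such a blob (illustration only, not part of the patch) is to decode its prefix and look for the mutable-container magic string:

```python
# Illustration only, not part of the patch: decode the first 40 base64
# characters of share 0 below and check the SDMF container magic.
import base64

prefix_b64 = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlE"  # prefix of share 0
prefix = base64.b64decode(prefix_b64)
magic = prefix[:len(b"Tahoe mutable container v1\n")]
# The shares all begin with the same container header, which is why the
# base64 strings below share a long common prefix.
```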
10027+class Interoperability(GridTestMixin, unittest.TestCase, testutil.ShouldFailMixin):
10028+    sdmf_old_shares = {}
10029+    sdmf_old_shares[0] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcABOOLy8EETxh7h7/z9d62EiPu9CNpRrCOLxUhn+JUS+DuAAhgcAb/adrQFrhlrRNoRpvjDuxmFebA4F0qCyqWssm61AAQ/EX4eC/1+hGOQ/h4EiKUkqxdsfzdcPlDvd11SGWZ0VHsUclZChTzuBAU2zLTXm+cG8IFhO50ly6Ey/DB44NtMKVaVzO0nU8DE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10030+    sdmf_old_shares[1] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcABOOLy8EETxh7h7/z9d62EiPu9CNpRrCOLxUhn+JUS+DuAAhgcAb/adrQFrhlrRNoRpvjDuxmFebA4F0qCyqWssm61AAP7FHJWQoU87gQFNsy015vnBvCBYTudJcuhMvwweODbTD8Rfh4L/X6EY5D+HgSIpSSrF2x/N1w+UO93XVIZZnRUeePDXEwhqYDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10031+    sdmf_old_shares[2] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcABOOLy8EETxh7h7/z9d62EiPu9CNpRrCOLxUhn+JUS+DuAAd8jdiCodW233N1acXhZGnulDKR3hiNsMdEIsijRPemewASoSCFpVj4utEE+eVFM146xfgC6DX39GaQ2zT3YKsWX3GiLwKtGffwqV7IlZIcBEVqMfTXSTZsY+dZm1MxxCZH0Zd33VY0yggDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10032+    sdmf_old_shares[3] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcABOOLy8EETxh7h7/z9d62EiPu9CNpRrCOLxUhn+JUS+DuAAd8jdiCodW233N1acXhZGnulDKR3hiNsMdEIsijRPemewARoi8CrRn38KleyJWSHARFajH010k2bGPnWZtTMcQmR9GhIIWlWPi60QT55UUzXjrF+ALoNff0ZpDbNPdgqxZfcSNSplrHqtsDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10033+    sdmf_old_shares[4] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcAA6dlE140Fc7FgB77PeM5Phv+bypQEYtyfLQHxd+OxlG3AAoIM8M4XulprmLd4gGMobS2Bv9CmwB5LpK/ySHE1QWjdwAUMA7/aVz7Mb1em0eks+biC8ZuVUhuAEkTVOAF4YulIjE8JlfW0dS1XKk62u0586QxiN38NTsluUDx8EAPTL66yRsfb1f3rRIDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10034+    sdmf_old_shares[5] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcAA6dlE140Fc7FgB77PeM5Phv+bypQEYtyfLQHxd+OxlG3AAoIM8M4XulprmLd4gGMobS2Bv9CmwB5LpK/ySHE1QWjdwATPCZX1tHUtVypOtrtOfOkMYjd/DU7JblA8fBAD0y+uskwDv9pXPsxvV6bR6Sz5uILxm5VSG4ASRNU4AXhi6UiMUKZHBmcmEgDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10035+    sdmf_old_shares[6] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcAA6dlE140Fc7FgB77PeM5Phv+bypQEYtyfLQHxd+OxlG3AAlyHZU7RfTJjbHu1gjabWZsTu+7nAeRVG6/ZSd4iMQ1ZgAWDSFSPvKzcFzRcuRlVgKUf0HBce1MCF8SwpUbPPEyfVJty4xLZ7DvNU/Eh/R6BarsVAagVXdp+GtEu0+fok7nilT4LchmHo8DE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10036+    sdmf_old_shares[7] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcAA6dlE140Fc7FgB77PeM5Phv+bypQEYtyfLQHxd+OxlG3AAlyHZU7RfTJjbHu1gjabWZsTu+7nAeRVG6/ZSd4iMQ1ZgAVbcuMS2ew7zVPxIf0egWq7FQGoFV3afhrRLtPn6JO54oNIVI+8rNwXNFy5GVWApR/QcFx7UwIXxLClRs88TJ9UtLnNF4/mM0DE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10037+    sdmf_old_shares[8] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgABUSzNKiMx0E91q51/WH6ASL0fDEOLef9oxuyBX5F5cpoABojmWkDX3k3FKfgNHIeptE3lxB8HHzxDfSD250psyfNCAAwGsKbMxbmI2NpdTozZ3SICrySwgGkatA1gsDOJmOnTzgAYmqKY7A9vQChuYa17fYSyKerIb3682jxiIneQvCMWCK5WcuI4PMeIsUAj8yxdxHvV+a9vtSCEsDVvymrrooDKX1GK98t37yoDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10038+    sdmf_old_shares[9] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgABUSzNKiMx0E91q51/WH6ASL0fDEOLef9oxuyBX5F5cpoABojmWkDX3k3FKfgNHIeptE3lxB8HHzxDfSD250psyfNCAAwGsKbMxbmI2NpdTozZ3SICrySwgGkatA1gsDOJmOnTzgAXVnLiODzHiLFAI/MsXcR71fmvb7UghLA1b8pq66KAyl+aopjsD29AKG5hrXt9hLIp6shvfrzaPGIid5C8IxYIrjgBj1YohGgDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6k
MDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
10039+    sdmf_old_cap = "URI:SSK:gmjgofw6gan57gwpsow6gtrz3e:5adm6fayxmu3e4lkmfvt6lkkfix34ai2wop2ioqr4bgvvhiol3kq"
10040+    sdmf_old_contents = "This is a test file.\n"
10041+    def copy_sdmf_shares(self):
10042+        # We'll basically be short-circuiting the upload process.
10043+        servernums = self.g.servers_by_number.keys()
10044+        assert len(servernums) == 10
10045+
10046+        assignments = zip(self.sdmf_old_shares.keys(), servernums)
10047+        # Get the storage index.
10048+        cap = uri.from_string(self.sdmf_old_cap)
10049+        si = cap.get_storage_index()
10050+
10051+        # Now execute each assignment by writing the storage.
10052+        for (share, servernum) in assignments:
10053+            sharedata = base64.b64decode(self.sdmf_old_shares[share])
10054+            storedir = self.get_serverdir(servernum)
10055+            storage_path = os.path.join(storedir, "shares",
10056+                                        storage_index_to_dir(si))
10057+            fileutil.make_dirs(storage_path)
10058+            fileutil.write(os.path.join(storage_path, "%d" % share),
10059+                           sharedata)
10060+        # ...and verify that the shares are there.
10061+        shares = self.find_uri_shares(self.sdmf_old_cap)
10062+        assert len(shares) == 10
10063+
10064+    def test_new_downloader_can_read_old_shares(self):
10065+        self.basedir = "mutable/Interoperability/new_downloader_can_read_old_shares"
10066+        self.set_up_grid()
10067+        self.copy_sdmf_shares()
10068+        nm = self.g.clients[0].nodemaker
10069+        n = nm.create_from_cap(self.sdmf_old_cap)
10070+        d = n.download_best_version()
10071+        d.addCallback(self.failUnlessEqual, self.sdmf_old_contents)
10072+        return d
10073}
10074[mutable/layout: Define MDMF share format, write tools for working with MDMF share format
10075Kevan Carstensen <kevan@isnotajoke.com>**20110802021120
10076 Ignore-this: fa76ef4800939e19ba3cbc22a2eab4e
10077 
10078 The changes in layout.py are mostly concerned with the MDMF share
10079 format. In particular, we define read and write proxy objects used by
10080 retrieval, publishing, and other code to write and read the MDMF share
10081 format. We create equivalent proxies for SDMF shares so that callers
10082 can remain agnostic about the underlying share format.
10083] {
10084hunk ./src/allmydata/mutable/layout.py 2
10085 
10086-import struct
10087+import struct, math
10088 from allmydata.mutable.common import NeedMoreDataError, UnknownVersionError
10089hunk ./src/allmydata/mutable/layout.py 4
10090+from allmydata.interfaces import HASH_SIZE, SALT_SIZE, SDMF_VERSION, \
10091+                                 MDMF_VERSION, IMutableSlotWriter
10092+from allmydata.util import mathutil, observer
10093+from twisted.python import failure
10094+from twisted.internet import defer
10095+from zope.interface import implements
10096+
10097+
10098+# These strings describe the format of the packed structs they help process
10099+# Here's what they mean:
10100+#
10101+#  PREFIX:
10102+#    >: Big-endian byte order; the most significant byte is first (leftmost).
10103+#    B: The version information; an 8-bit version identifier, stored as
10104+#       an unsigned char. This is currently 0 (SDMF); our modifications
10105+#       will turn it into 1 (MDMF).
10106+#    Q: The sequence number; this is sort of like a revision history for
10107+#       mutable files; they start at 1 and increase as they are changed after
10108+#       being uploaded. Stored as an unsigned long long, which is 8 bytes in
10109+#       length.
10110+#  32s: The root hash of the share hash tree. We use sha-256d, so we use 32
10111+#       characters = 32 bytes to store the value.
10112+#  16s: The salt for the readkey. This is a 16-byte random value, stored as
10113+#       16 characters.
10114+#
10115+#  SIGNED_PREFIX additions, things that are covered by the signature:
10116+#    B: The "k" encoding parameter. We store this as an 8-bit character,
10117+#       which is convenient because our erasure coding scheme cannot
10118+#       encode if you ask for more than 255 pieces.
10119+#    B: The "N" encoding parameter. Stored as an 8-bit character for the
10120+#       same reasons as above.
10121+#    Q: The segment size of the uploaded file. This will essentially be the
10122+#       length of the file in SDMF. An unsigned long long, so we can store
10123+#       files of quite large size.
10124+#    Q: The data length of the uploaded file. Modulo padding, this will be
10125+#       the same as the segment size field. Like the segment size field, it
10126+#       is an unsigned long long and can be quite large.
10127+#
10128+#   HEADER additions:
10129+#     L: The offset of the signature of this. An unsigned long.
10130+#     L: The offset of the share hash chain. An unsigned long.
10131+#     L: The offset of the block hash tree. An unsigned long.
10132+#     L: The offset of the share data. An unsigned long.
10133+#     Q: The offset of the encrypted private key. An unsigned long long, to
10134+#        account for the possibility of a lot of share data.
10135+#     Q: The offset of the EOF. An unsigned long long, to account for the
10136+#        possibility of a lot of share data.
10137+#
10138+#  After all of these, we have the following:
10139+#    - The verification key: Occupies the space between the end of the header
10140+#      and the start of the signature (i.e. data[HEADER_LENGTH:o['signature']]).
10141+#    - The signature, which goes from the signature offset to the share hash
10142+#      chain offset.
10143+#    - The share hash chain, which goes from the share hash chain offset to
10144+#      the block hash tree offset.
10145+#    - The share data, which goes from the share data offset to the encrypted
10146+#      private key offset.
10147+#    - The encrypted private key, which goes from the encrypted private key
10148+#      offset until the end of the file.
10149+#  The block hash tree in this encoding has only one leaf (one segment), so
10150+#  the share data offset will be 32 bytes more than the block hash tree offset.
10151+#  Given this, we may need to check to see how many bytes a reasonably sized
10152+#  block hash tree will take up.
10153 
10154 PREFIX = ">BQ32s16s" # each version has a different prefix
10155 SIGNED_PREFIX = ">BQ32s16s BBQQ" # this is covered by the signature
10156hunk ./src/allmydata/mutable/layout.py 73
10157 SIGNED_PREFIX_LENGTH = struct.calcsize(SIGNED_PREFIX)
10158 HEADER = ">BQ32s16s BBQQ LLLLQQ" # includes offsets
10159 HEADER_LENGTH = struct.calcsize(HEADER)
10160+OFFSETS = ">LLLLQQ"
10161+OFFSETS_LENGTH = struct.calcsize(OFFSETS)
10162 
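The format strings above fully determine the SDMF header geometry. A standalone sketch (constants copied from the patch; the hash, salt, and encoding values are hypothetical) showing that a signed prefix round-trips through struct, and that the full header is exactly the signed prefix plus the six offsets:

```python
import struct

# Format strings as defined above (copied from the patch).
PREFIX = ">BQ32s16s"
SIGNED_PREFIX = ">BQ32s16s BBQQ"
HEADER = ">BQ32s16s BBQQ LLLLQQ"
OFFSETS = ">LLLLQQ"

# Pack a signed prefix for a hypothetical SDMF share, then unpack it.
seqnum, root_hash, IV = 3, b"\x11" * 32, b"\x22" * 16
k, N, segsize, datalen = 3, 10, 1026, 1024
prefix = struct.pack(SIGNED_PREFIX, 0, seqnum, root_hash, IV,
                     k, N, segsize, datalen)
assert struct.unpack(SIGNED_PREFIX, prefix) == \
       (0, seqnum, root_hash, IV, k, N, segsize, datalen)

# The header is the signed prefix followed by the six offsets.
assert struct.calcsize(HEADER) == \
       struct.calcsize(SIGNED_PREFIX) + struct.calcsize(OFFSETS)
```

(Spaces inside struct format strings are ignored, so `SIGNED_PREFIX` and the first part of `HEADER` describe identical layouts.)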
10163hunk ./src/allmydata/mutable/layout.py 76
10164+# These are still used for some tests.
10165 def unpack_header(data):
10166     o = {}
10167     (version,
10168hunk ./src/allmydata/mutable/layout.py 92
10169      o['EOF']) = struct.unpack(HEADER, data[:HEADER_LENGTH])
10170     return (version, seqnum, root_hash, IV, k, N, segsize, datalen, o)
10171 
10172-def unpack_prefix_and_signature(data):
10173-    assert len(data) >= HEADER_LENGTH, len(data)
10174-    prefix = data[:SIGNED_PREFIX_LENGTH]
10175-
10176-    (version,
10177-     seqnum,
10178-     root_hash,
10179-     IV,
10180-     k, N, segsize, datalen,
10181-     o) = unpack_header(data)
10182-
10183-    if version != 0:
10184-        raise UnknownVersionError("got mutable share version %d, but I only understand version 0" % version)
10185-
10186-    if len(data) < o['share_hash_chain']:
10187-        raise NeedMoreDataError(o['share_hash_chain'],
10188-                                o['enc_privkey'], o['EOF']-o['enc_privkey'])
10189-
10190-    pubkey_s = data[HEADER_LENGTH:o['signature']]
10191-    signature = data[o['signature']:o['share_hash_chain']]
10192-
10193-    return (seqnum, root_hash, IV, k, N, segsize, datalen,
10194-            pubkey_s, signature, prefix)
10195-
10196 def unpack_share(data):
10197     assert len(data) >= HEADER_LENGTH
10198     o = {}
10199hunk ./src/allmydata/mutable/layout.py 139
10200             pubkey, signature, share_hash_chain, block_hash_tree,
10201             share_data, enc_privkey)
10202 
10203-def unpack_share_data(verinfo, hash_and_data):
10204-    (seqnum, root_hash, IV, segsize, datalength, k, N, prefix, o_t) = verinfo
10205-
10206-    # hash_and_data starts with the share_hash_chain, so figure out what the
10207-    # offsets really are
10208-    o = dict(o_t)
10209-    o_share_hash_chain = 0
10210-    o_block_hash_tree = o['block_hash_tree'] - o['share_hash_chain']
10211-    o_share_data = o['share_data'] - o['share_hash_chain']
10212-    o_enc_privkey = o['enc_privkey'] - o['share_hash_chain']
10213-
10214-    share_hash_chain_s = hash_and_data[o_share_hash_chain:o_block_hash_tree]
10215-    share_hash_format = ">H32s"
10216-    hsize = struct.calcsize(share_hash_format)
10217-    assert len(share_hash_chain_s) % hsize == 0, len(share_hash_chain_s)
10218-    share_hash_chain = []
10219-    for i in range(0, len(share_hash_chain_s), hsize):
10220-        chunk = share_hash_chain_s[i:i+hsize]
10221-        (hid, h) = struct.unpack(share_hash_format, chunk)
10222-        share_hash_chain.append( (hid, h) )
10223-    share_hash_chain = dict(share_hash_chain)
10224-    block_hash_tree_s = hash_and_data[o_block_hash_tree:o_share_data]
10225-    assert len(block_hash_tree_s) % 32 == 0, len(block_hash_tree_s)
10226-    block_hash_tree = []
10227-    for i in range(0, len(block_hash_tree_s), 32):
10228-        block_hash_tree.append(block_hash_tree_s[i:i+32])
10229-
10230-    share_data = hash_and_data[o_share_data:o_enc_privkey]
10231-
10232-    return (share_hash_chain, block_hash_tree, share_data)
10233-
10234-
10235-def pack_checkstring(seqnum, root_hash, IV):
10236-    return struct.pack(PREFIX,
10237-                       0, # version,
10238-                       seqnum,
10239-                       root_hash,
10240-                       IV)
10241-
10242 def unpack_checkstring(checkstring):
10243     cs_len = struct.calcsize(PREFIX)
10244     version, seqnum, root_hash, IV = struct.unpack(PREFIX, checkstring[:cs_len])
10245hunk ./src/allmydata/mutable/layout.py 146
10246         raise UnknownVersionError("got mutable share version %d, but I only understand version 0" % version)
10247     return (seqnum, root_hash, IV)
10248 
10249-def pack_prefix(seqnum, root_hash, IV,
10250-                required_shares, total_shares,
10251-                segment_size, data_length):
10252-    prefix = struct.pack(SIGNED_PREFIX,
10253-                         0, # version,
10254-                         seqnum,
10255-                         root_hash,
10256-                         IV,
10257-
10258-                         required_shares,
10259-                         total_shares,
10260-                         segment_size,
10261-                         data_length,
10262-                         )
10263-    return prefix
10264 
10265 def pack_offsets(verification_key_length, signature_length,
10266                  share_hash_chain_length, block_hash_tree_length,
10267hunk ./src/allmydata/mutable/layout.py 192
10268                            encprivkey])
10269     return final_share
10270 
10271+def pack_prefix(seqnum, root_hash, IV,
10272+                required_shares, total_shares,
10273+                segment_size, data_length):
10274+    prefix = struct.pack(SIGNED_PREFIX,
10275+                         0, # version,
10276+                         seqnum,
10277+                         root_hash,
10278+                         IV,
10279+                         required_shares,
10280+                         total_shares,
10281+                         segment_size,
10282+                         data_length,
10283+                         )
10284+    return prefix
10285+
10286+
10287+class SDMFSlotWriteProxy:
10288+    """
10289+    I represent a remote write slot for an SDMF mutable file. I build a
10290+    share in memory, and then write it in one piece to the remote
10291+    server. This mimics how SDMF shares were built before MDMF (and the
10292+    new MDMF uploader), but provides that functionality in a way that
10293+    allows the MDMF uploader to be built without much special-casing for
10294+    file format, which makes the uploader code more readable.
10295+    """
10296+    implements(IMutableSlotWriter)
10297+    def __init__(self,
10298+                 shnum,
10299+                 rref, # a remote reference to a storage server
10300+                 storage_index,
10301+                 secrets, # (write_enabler, renew_secret, cancel_secret)
10302+                 seqnum, # the sequence number of the mutable file
10303+                 required_shares,
10304+                 total_shares,
10305+                 segment_size,
10306+                 data_length): # the length of the original file
10307+        self.shnum = shnum
10308+        self._rref = rref
10309+        self._storage_index = storage_index
10310+        self._secrets = secrets
10311+        self._seqnum = seqnum
10312+        self._required_shares = required_shares
10313+        self._total_shares = total_shares
10314+        self._segment_size = segment_size
10315+        self._data_length = data_length
10316+
10317+        # This is an SDMF file, so it should have only one segment, so,
10318+        # modulo padding of the data length, the segment size and the
10319+        # data length should be the same.
10320+        expected_segment_size = mathutil.next_multiple(data_length,
10321+                                                       self._required_shares)
10322+        assert expected_segment_size == segment_size
10323+
10324+        self._block_size = self._segment_size / self._required_shares
10325+
10326+        # This is meant to mimic how SDMF files were built before MDMF
10327+        # entered the picture: we generate each share in its entirety,
10328+        # then push it off to the storage server in one write. When
10329+        # callers call set_*, they are just populating this dict.
10330+        # finish_publishing will stitch these pieces together into a
10331+        # coherent share, and then write the coherent share to the
10332+        # storage server.
10333+        self._share_pieces = {}
10334+
10335+        # This tells the write logic what checkstring to use when
10336+        # writing remote shares.
10337+        self._testvs = []
10338+
10339+        self._readvs = [(0, struct.calcsize(PREFIX))]
10340+
10341+
10342+    def set_checkstring(self, checkstring_or_seqnum,
10343+                              root_hash=None,
10344+                              salt=None):
10345+        """
10346+        Set the checkstring that I will pass to the remote server when
10347+        writing.
10348+
10349+            @param checkstring_or_seqnum: A packed checkstring to use, or
10350+                   a sequence number. If root_hash and salt are also given,
10351+                   I will treat this as a sequence number and pack the
10352+                   checkstring from its constituents; otherwise I will treat
10353+                   it as an already-packed checkstring.
10354+
10355+        Note that implementations can differ in which semantics they wish
10356+        to support for set_checkstring -- they can, for example, build the
10357+        checkstring themselves from its constituents.
10356+        """
10357+        if root_hash and salt:
10358+            checkstring = struct.pack(PREFIX,
10359+                                      0,
10360+                                      checkstring_or_seqnum,
10361+                                      root_hash,
10362+                                      salt)
10363+        else:
10364+            checkstring = checkstring_or_seqnum
10365+        self._testvs = [(0, len(checkstring), "eq", checkstring)]
10366+
10367+
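The dual calling convention of set_checkstring can be sketched as a standalone function (an illustrative reduction, not the class method itself; the seqnum and byte values are hypothetical):

```python
import struct

PREFIX = ">BQ32s16s"  # version, seqnum, root hash, IV/salt

def make_checkstring(checkstring_or_seqnum, root_hash=None, salt=None):
    # Mirror set_checkstring's two forms: given (seqnum, root_hash,
    # salt), pack a fresh checkstring; given only a packed checkstring,
    # pass it through unchanged.
    if root_hash and salt:
        return struct.pack(PREFIX, 0, checkstring_or_seqnum,
                           root_hash, salt)
    return checkstring_or_seqnum

packed = make_checkstring(5, b"\x00" * 32, b"\x01" * 16)
assert make_checkstring(packed) == packed      # passthrough form
assert struct.unpack(PREFIX, packed)[1] == 5   # seqnum round-trips
```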
10368+    def get_checkstring(self):
10369+        """
10370+        Get the checkstring that I think currently exists on the remote
10371+        server.
10372+        """
10373+        if self._testvs:
10374+            return self._testvs[0][3]
10375+        return ""
10376+
10377+
10378+    def put_block(self, data, segnum, salt):
10379+        """
10380+        Add a block and salt to the share.
10381+        """
10382+        # SDMF files have only one segment
10383+        assert segnum == 0
10384+        assert len(data) == self._block_size
10385+        assert len(salt) == SALT_SIZE
10386+
10387+        self._share_pieces['sharedata'] = data
10388+        self._share_pieces['salt'] = salt
10389+
10390+        # TODO: Figure out something intelligent to return.
10391+        return defer.succeed(None)
10392+
10393+
10394+    def put_encprivkey(self, encprivkey):
10395+        """
10396+        Add the encrypted private key to the share.
10397+        """
10398+        self._share_pieces['encprivkey'] = encprivkey
10399+
10400+        return defer.succeed(None)
10401+
10402+
10403+    def put_blockhashes(self, blockhashes):
10404+        """
10405+        Add the block hash tree to the share.
10406+        """
10407+        assert isinstance(blockhashes, list)
10408+        for h in blockhashes:
10409+            assert len(h) == HASH_SIZE
10410+
10411+        # serialize the blockhashes, then set them.
10412+        blockhashes_s = "".join(blockhashes)
10413+        self._share_pieces['block_hash_tree'] = blockhashes_s
10414+
10415+        return defer.succeed(None)
10416+
10417+
10418+    def put_sharehashes(self, sharehashes):
10419+        """
10420+        Add the share hash chain to the share.
10421+        """
10422+        assert isinstance(sharehashes, dict)
10423+        for h in sharehashes.itervalues():
10424+            assert len(h) == HASH_SIZE
10425+
10426+        # serialize the sharehashes, then set them.
10427+        sharehashes_s = "".join([struct.pack(">H32s", i, sharehashes[i])
10428+                                 for i in sorted(sharehashes.keys())])
10429+        self._share_pieces['share_hash_chain'] = sharehashes_s
10430+
10431+        return defer.succeed(None)
10432+
10433+
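The `">H32s"` serialization used by put_sharehashes can be exercised on its own (hypothetical node numbers and hash values; the real hashes come from the share hash tree):

```python
import struct

HASH_SIZE = 32

# Each share hash chain entry is a 2-byte node number plus a 32-byte
# hash, packed in sorted node order.
sharehashes = {3: b"\xaa" * HASH_SIZE, 1: b"\xbb" * HASH_SIZE}
chain = b"".join(struct.pack(">H32s", i, sharehashes[i])
                 for i in sorted(sharehashes))

assert len(chain) == len(sharehashes) * (2 + HASH_SIZE)
# The first packed entry is the lowest-numbered node.
assert struct.unpack(">H32s", chain[:34]) == (1, b"\xbb" * HASH_SIZE)
```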
10434+    def put_root_hash(self, root_hash):
10435+        """
10436+        Add the root hash to the share.
10437+        """
10438+        assert len(root_hash) == HASH_SIZE
10439+
10440+        self._share_pieces['root_hash'] = root_hash
10441+
10442+        return defer.succeed(None)
10443+
10444+
10445+    def put_salt(self, salt):
10446+        """
10447+        Add a salt to an empty SDMF file.
10448+        """
10449+        assert len(salt) == SALT_SIZE
10450+
10451+        self._share_pieces['salt'] = salt
10452+        self._share_pieces['sharedata'] = ""
10453+
10454+
10455+    def get_signable(self):
10456+        """
10457+        Return the part of the share that needs to be signed.
10458+
10459+        SDMF writers need to sign the packed representation of the
10460+        first eight fields of the remote share, that is:
10461+            - version number (0)
10462+            - sequence number
10463+            - root of the share hash tree
10464+            - salt
10465+            - k
10466+            - n
10467+            - segsize
10468+            - datalen
10469+
10470+        This method is responsible for returning that to callers.
10471+        """
10472+        return struct.pack(SIGNED_PREFIX,
10473+                           0,
10474+                           self._seqnum,
10475+                           self._share_pieces['root_hash'],
10476+                           self._share_pieces['salt'],
10477+                           self._required_shares,
10478+                           self._total_shares,
10479+                           self._segment_size,
10480+                           self._data_length)
10481+
10482+
10483+    def put_signature(self, signature):
10484+        """
10485+        Add the signature to the share.
10486+        """
10487+        self._share_pieces['signature'] = signature
10488+
10489+        return defer.succeed(None)
10490+
10491+
10492+    def put_verification_key(self, verification_key):
10493+        """
10494+        Add the verification key to the share.
10495+        """
10496+        self._share_pieces['verification_key'] = verification_key
10497+
10498+        return defer.succeed(None)
10499+
10500+
10501+    def get_verinfo(self):
10502+        """
10503+        I return my verinfo tuple. This is used by the ServermapUpdater
10504+        to keep track of versions of mutable files.
10505+
10506+        The verinfo tuple for MDMF files contains:
10507+            - seqnum
10508+            - root hash
10509+            - a blank (nothing)
10510+            - segsize
10511+            - datalen
10512+            - k
10513+            - n
10514+            - prefix (the thing that you sign)
10515+            - a tuple of offsets
10516+
10517+        We include the nonce in MDMF to simplify processing of version
10518+        information tuples.
10519+
10520+        The verinfo tuple for SDMF files is the same, but contains a
10521+        16-byte IV instead of a hash of salts.
10522+        """
10523+        return (self._seqnum,
10524+                self._share_pieces['root_hash'],
10525+                self._share_pieces['salt'],
10526+                self._segment_size,
10527+                self._data_length,
10528+                self._required_shares,
10529+                self._total_shares,
10530+                self.get_signable(),
10531+                self._get_offsets_tuple())
10532+
10533+    def _get_offsets_dict(self):
10534+        post_offset = HEADER_LENGTH
10535+        offsets = {}
10536+
10537+        verification_key_length = len(self._share_pieces['verification_key'])
10538+        o1 = offsets['signature'] = post_offset + verification_key_length
10539+
10540+        signature_length = len(self._share_pieces['signature'])
10541+        o2 = offsets['share_hash_chain'] = o1 + signature_length
10542+
10543+        share_hash_chain_length = len(self._share_pieces['share_hash_chain'])
10544+        o3 = offsets['block_hash_tree'] = o2 + share_hash_chain_length
10545+
10546+        block_hash_tree_length = len(self._share_pieces['block_hash_tree'])
10547+        o4 = offsets['share_data'] = o3 + block_hash_tree_length
10548+
10549+        share_data_length = len(self._share_pieces['sharedata'])
10550+        o5 = offsets['enc_privkey'] = o4 + share_data_length
10551+
10552+        encprivkey_length = len(self._share_pieces['encprivkey'])
10553+        offsets['EOF'] = o5 + encprivkey_length
10554+        return offsets
10555+
10556+
10557+    def _get_offsets_tuple(self):
10558+        offsets = self._get_offsets_dict()
10559+        return tuple([(key, value) for key, value in offsets.items()])
10560+
10561+
10562+    def _pack_offsets(self):
10563+        offsets = self._get_offsets_dict()
10564+        return struct.pack(">LLLLQQ",
10565+                           offsets['signature'],
10566+                           offsets['share_hash_chain'],
10567+                           offsets['block_hash_tree'],
10568+                           offsets['share_data'],
10569+                           offsets['enc_privkey'],
10570+                           offsets['EOF'])
10571+
10572+
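The running-total computation in _get_offsets_dict can be illustrated with hypothetical piece lengths (the real lengths come from the actual keys, signature, and share data; only the arithmetic is shown here):

```python
HEADER_LENGTH = 107  # struct.calcsize(">BQ32s16s BBQQ LLLLQQ")

# Hypothetical lengths for each share piece, in on-disk order.
lengths = {"verification_key": 292, "signature": 260,
           "share_hash_chain": 4 * 34, "block_hash_tree": 32,
           "sharedata": 342, "encprivkey": 1220}

# Each named offset is the cumulative length of everything before it,
# so it marks where the *next* field begins; EOF is the total length.
offsets, cursor = {}, HEADER_LENGTH
for field, name in [("verification_key", "signature"),
                    ("signature", "share_hash_chain"),
                    ("share_hash_chain", "block_hash_tree"),
                    ("block_hash_tree", "share_data"),
                    ("sharedata", "enc_privkey"),
                    ("encprivkey", "EOF")]:
    cursor += lengths[field]
    offsets[name] = cursor

assert offsets["signature"] == HEADER_LENGTH + 292
assert offsets["EOF"] == HEADER_LENGTH + sum(lengths.values())
```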
10573+    def finish_publishing(self):
10574+        """
10575+        Do anything necessary to finish writing the share to a remote
10576+        server. I require that no further publishing needs to take place
10577+        after this method has been called.
10578+        """
10579+        for k in ["sharedata", "encprivkey", "signature", "verification_key",
10580+                  "share_hash_chain", "block_hash_tree"]:
10581+            assert k in self._share_pieces
10582+        # This is the only method that actually writes something to the
10583+        # remote server.
10584+        # First, we need to pack the share into data that we can write
10585+        # to the remote server in one write.
10586+        offsets = self._pack_offsets()
10587+        prefix = self.get_signable()
10588+        final_share = "".join([prefix,
10589+                               offsets,
10590+                               self._share_pieces['verification_key'],
10591+                               self._share_pieces['signature'],
10592+                               self._share_pieces['share_hash_chain'],
10593+                               self._share_pieces['block_hash_tree'],
10594+                               self._share_pieces['sharedata'],
10595+                               self._share_pieces['encprivkey']])
10596+
10597+        # Our only data vector is going to be writing the final share,
10598+        # in its entirety.
10599+        datavs = [(0, final_share)]
10600+
10601+        if not self._testvs:
10602+            # Our caller has not provided us with another checkstring
10603+            # yet, so we assume that we are writing a new share, and set
10604+            # a test vector that will allow a new share to be written.
10605+            self._testvs = []
10606+            self._testvs.append(tuple([0, 1, "eq", ""]))
10607+
10608+        tw_vectors = {}
10609+        tw_vectors[self.shnum] = (self._testvs, datavs, None)
10610+        return self._rref.callRemote("slot_testv_and_readv_and_writev",
10611+                                     self._storage_index,
10612+                                     self._secrets,
10613+                                     tw_vectors,
10614+                                     # TODO is it useful to read something?
10615+                                     self._readvs)
10616+
10617+
10618+MDMFHEADER = ">BQ32sBBQQ QQQQQQQQ"
10619+MDMFHEADERWITHOUTOFFSETS = ">BQ32sBBQQ"
10620+MDMFHEADERSIZE = struct.calcsize(MDMFHEADER)
10621+MDMFHEADERWITHOUTOFFSETSSIZE = struct.calcsize(MDMFHEADERWITHOUTOFFSETS)
10622+MDMFCHECKSTRING = ">BQ32s"
10623+MDMFSIGNABLEHEADER = ">BQ32sBBQQ"
10624+MDMFOFFSETS = ">QQQQQQQQ"
10625+MDMFOFFSETS_LENGTH = struct.calcsize(MDMFOFFSETS)
10626+
10627+PRIVATE_KEY_SIZE = 1220
10628+SIGNATURE_SIZE = 260
10629+VERIFICATION_KEY_SIZE = 292
10630+# We know we won't have more than 256 shares, so we will never need to
10631+# store more than ceil(lg 256) = 8 share hash chain nodes to validate.
10632+# Each node is a 2-byte share number plus a 32-byte hash.
10633+SHARE_HASH_CHAIN_SIZE = (2 + HASH_SIZE) * mathutil.log_ceil(256, 2)
10634+
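The fixed sizes used by this layout can be re-derived in isolation. A standalone sketch (Python 3; format strings copied from this patch, and the share-hash-chain bound reflects the intent stated in the comment above, i.e. at most 8 nodes of 34 bytes each):

```python
import math
import struct

# Format strings copied from this patch (">" = big-endian, no padding).
MDMFHEADER = ">BQ32sBBQQ QQQQQQQQ"       # signable part + offsets table
MDMFHEADERWITHOUTOFFSETS = ">BQ32sBBQQ"  # just the signable part
MDMFCHECKSTRING = ">BQ32s"               # version, seqnum, root hash
MDMFOFFSETS = ">QQQQQQQQ"                # the eight offset fields

HASH_SIZE = 32

# The signable part is 59 bytes, so the offsets table starts at byte 59
# and the full header is 59 + 64 = 123 bytes.
assert struct.calcsize(MDMFHEADERWITHOUTOFFSETS) == 59
assert struct.calcsize(MDMFOFFSETS) == 64
assert struct.calcsize(MDMFHEADER) == 123
assert struct.calcsize(MDMFCHECKSTRING) == 41

# With at most 256 shares, the chain holds at most ceil(log2(256)) = 8
# nodes of (2-byte share number + 32-byte hash) each.
SHARE_HASH_CHAIN_SIZE = (2 + HASH_SIZE) * math.ceil(math.log2(256))
assert SHARE_HASH_CHAIN_SIZE == 272
```

struct ignores whitespace between format codes, which is why the space inside MDMFHEADER is harmless.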
10635+class MDMFSlotWriteProxy:
10636+    implements(IMutableSlotWriter)
10637+
10638+    """
10639+    I represent a remote write slot for an MDMF mutable file.
10640+
10641+    I abstract away from my caller the details of block and salt
10642+    management, and the implementation of the on-disk format for MDMF
10643+    shares.
10644+    """
10645+    # Expected layout, MDMF:
10646+    # offset:     size:       name:
10647+    #-- signed part --
10648+    # 0           1           version number (01)
10649+    # 1           8           sequence number
10650+    # 9           32          share tree root hash
10651+    # 41          1           The "k" encoding parameter
10652+    # 42          1           The "N" encoding parameter
10653+    # 43          8           The segment size of the uploaded file
10654+    # 51          8           The data length of the original plaintext
10655+    #-- end signed part --
10656+    # 59          8           The offset of the encrypted private key
10657+    # 67          8           The offset of the share hash chain
10658+    # 75          8           The offset of the signature
10659+    # 83          8           The offset of the verification key
10660+    # 91          8           The offset of the end of the verification key
10661+    # 99          8           The offset of the share data
10662+    # 107         8           The offset of the block hash tree
10663+    # 115         8           The offset of EOF
10664+    #
10665+    # followed by the encrypted private key, signature, verification
10666+    # key, share hash chain, data, and block hash tree. We order the
10667+    # fields that way to make smart downloaders -- downloaders which
10668+    # preemptively read a big part of the share -- possible.
10669+    #
10670+    # The checkstring is the first three fields -- the version number,
10671+    # sequence number, and root hash. This is consistent
10672+    # in meaning to what we have with SDMF files, except now instead of
10673+    # using the literal salt, we use a value derived from all of the
10674+    # salts -- the share hash root.
10675+    #
10676+    # The salt is stored before the block for each segment. The block
10677+    # hash tree is computed over the combination of block and salt for
10678+    # each segment. In this way, we get integrity checking for both
10679+    # block and salt with the current block hash tree arrangement.
10680+    #
10681+    # The ordering of the offsets is different to reflect the dependencies
10682+    # that we'll run into with an MDMF file. The expected write flow is
10683+    # something like this:
10684+    #
10685+    #   0: Initialize with the sequence number, encoding parameters and
10686+    #      data length. From this, we can deduce the number of segments,
10687+    #      and where they should go. We can also figure out where the
10688+    #      encrypted private key should go, because we can figure out how
10689+    #      big the share data will be.
10690+    #
10691+    #   1: Encrypt, encode, and upload the file in chunks. Do something
10692+    #      like
10693+    #
10694+    #       put_block(data, segnum, salt)
10695+    #
10696+    #      to write a block and a salt to the disk. We can do both of
10697+    #      these operations now because we have enough of the offsets to
10698+    #      know where to put them.
10699+    #
10700+    #   2: Put the encrypted private key. Use:
10701+    #
10702+    #        put_encprivkey(encprivkey)
10703+    #
10704+    #      Now that we know the length of the private key, we can fill
10705+    #      in the offset for the block hash tree.
10706+    #
10707+    #   3: We're now in a position to upload the block hash tree for
10708+    #      a share. Put that using something like:
10709+    #       
10710+    #        put_blockhashes(block_hash_tree)
10711+    #
10712+    #      Note that block_hash_tree is a list of hashes -- we'll take
10713+    #      care of the details of serializing that appropriately. When
10714+    #      we get the block hash tree, we are also in a position to
10715+    #      calculate the offset for the share hash chain, and fill that
10716+    #      into the offsets table.
10717+    #
10718+    #   4: We're now in a position to upload the share hash chain for
10719+    #      a share. Do that with something like:
10720+    #     
10721+    #        put_sharehashes(share_hash_chain)
10722+    #
10723+    #      share_hash_chain should be a dictionary mapping shnums to
10724+    #      32-byte hashes -- the wrapper handles serialization.
10725+    #      We'll know where to put the signature at this point, also.
10726+    #      The root of this tree will be put explicitly in the next
10727+    #      step.
10728+    #
10729+    #   5: Before putting the signature, we must first put the
10730+    #      root_hash. Do this with:
10731+    #
10732+    #        put_root_hash(root_hash).
10733+    #     
10734+    #      In terms of knowing where to put this value, it was always
10735+    #      possible to place it, but it makes sense semantically to
10736+    #      place it after the share hash tree, so that's why you do it
10737+    #      in this order.
10738+    #
10739+    #   6: With the root hash put, we can now sign the header. Use:
10740+    #
10741+    #        get_signable()
10742+    #
10743+    #      to get the part of the header that you want to sign, and use:
10744+    #       
10745+    #        put_signature(signature)
10746+    #
10747+    #      to write your signature to the remote server.
10748+    #
10749+    #   7: Add the verification key, and finish. Do:
10750+    #
10751+    #        put_verification_key(key)
10752+    #
10753+    #      and
10754+    #
10755+    #        finish_publish()
10756+    #
10757+    # Checkstring management:
10758+    #
10759+    # To write to a mutable slot, we have to provide test vectors to ensure
10760+    # that we are writing to the same data that we think we are. These
10761+    # vectors allow us to detect uncoordinated writes; that is, writes
10762+    # where both we and some other shareholder are writing to the
10763+    # mutable slot, and to report those back to the parts of the program
10764+    # doing the writing.
10765+    #
10766+    # With SDMF, this was easy -- all of the share data was written in
10767+    # one go, so it was easy to detect uncoordinated writes, and we only
10768+    # had to do it once. With MDMF, not all of the file is written at
10769+    # once.
10770+    #
10771+    # If a share is new, we write out as much of the header as we can
10772+    # before writing out anything else. This gives other writers a
10773+    # canary that they can use to detect uncoordinated writes, and, if
10774+    # they do the same thing, gives us the same canary. We then update
10775+    # the share. We won't be able to write out two fields of the header
10776+    # -- the share tree hash and the salt hash -- until we finish
10777+    # writing out the share. We only require the writer to provide the
10778+    # initial checkstring, and keep track of what it should be after
10779+    # updates ourselves.
10780+    #
10781+    # If we haven't written anything yet, then on the first write (which
10782+    # will probably be a block + salt of a share), we'll also write out
10783+    # the header. On subsequent passes, we'll expect to see the header.
10784+    # This changes in two places:
10785+    #
10786+    #   - When we write out the salt hash
10787+    #   - When we write out the root of the share hash tree
10788+    #
10789+    # since these values will change the header. It is possible that we
10790+    # can just make those be written in one operation to minimize
10791+    # disruption.
10792+    def __init__(self,
10793+                 shnum,
10794+                 rref, # a remote reference to a storage server
10795+                 storage_index,
10796+                 secrets, # (write_enabler, renew_secret, cancel_secret)
10797+                 seqnum, # the sequence number of the mutable file
10798+                 required_shares,
10799+                 total_shares,
10800+                 segment_size,
10801+                 data_length): # the length of the original file
10802+        self.shnum = shnum
10803+        self._rref = rref
10804+        self._storage_index = storage_index
10805+        self._seqnum = seqnum
10806+        self._required_shares = required_shares
10807+        assert self.shnum >= 0 and self.shnum < total_shares
10808+        self._total_shares = total_shares
10809+        # We build up the offset table as we write things. It is the
10810+        # last thing we write to the remote server.
10811+        self._offsets = {}
10812+        self._testvs = []
10813+        # This is a list of write vectors that will be sent to our
10814+        # remote server once we are directed to write things there.
10815+        self._writevs = []
10816+        self._secrets = secrets
10817+        # The segment size needs to be a multiple of the k parameter --
10818+        # any padding should have been carried out by the publisher
10819+        # already.
10820+        assert segment_size % required_shares == 0
10821+        self._segment_size = segment_size
10822+        self._data_length = data_length
10823+
10824+        # These are set later -- we define them here so that we can
10825+        # check for their existence easily
10826+
10827+        # This is the root of the share hash tree -- the Merkle tree
10828+        # over the roots of the block hash trees computed for shares in
10829+        # this upload.
10830+        self._root_hash = None
10831+
10832+        # We haven't yet written anything to the remote bucket. By
10833+        # setting this, we tell the _write method as much. The write
10834+        # method will then know that it also needs to add a write vector
10835+        # for the checkstring (or what we have of it) to the first write
10836+        # request. We'll then record that value for future use.  If
10837+        # we're expecting something to be there already, we need to call
10838+        # set_checkstring before we write anything to tell the first
10839+        # write about that.
10840+        self._written = False
10841+
10842+        # When writing data to the storage servers, we get a read vector
10843+        # for free. We'll read the checkstring, which will help us
10844+        # figure out what's gone wrong if a write fails.
10845+        self._readv = [(0, struct.calcsize(MDMFCHECKSTRING))]
10846+
10847+        # We calculate the number of segments because it tells us
10848+        # where the salt for each segment ends and its block begins,
10849+        # and also because it provides a useful amount of bounds checking.
10850+        self._num_segments = mathutil.div_ceil(self._data_length,
10851+                                               self._segment_size)
10852+        self._block_size = self._segment_size / self._required_shares
10853+        # We also calculate the share size, to help us with block
10854+        # constraints later.
10855+        tail_size = self._data_length % self._segment_size
10856+        if not tail_size:
10857+            self._tail_block_size = self._block_size
10858+        else:
10859+            self._tail_block_size = mathutil.next_multiple(tail_size,
10860+                                                           self._required_shares)
10861+            self._tail_block_size /= self._required_shares
10862+
10863+        # We already know where the sharedata starts; right after the end
10864+        # of the header (which is defined as the signable part + the offsets)
10865+        # We can also calculate where the encrypted private key begins
10866+        # from what we now know.
10867+        self._actual_block_size = self._block_size + SALT_SIZE
10868+        data_size = self._actual_block_size * (self._num_segments - 1)
10869+        data_size += self._tail_block_size
10870+        data_size += SALT_SIZE
10871+        self._offsets['enc_privkey'] = MDMFHEADERSIZE
10872+
10873+        # We don't define offsets for these because we want them to be
10874+        # tightly packed -- this allows us to ignore the responsibility
10875+        # of padding individual values, and of removing that padding
10876+        # later. So nonconstant_start is where we start writing
10877+        # nonconstant data.
10878+        nonconstant_start = self._offsets['enc_privkey']
10879+        nonconstant_start += PRIVATE_KEY_SIZE
10880+        nonconstant_start += SIGNATURE_SIZE
10881+        nonconstant_start += VERIFICATION_KEY_SIZE
10882+        nonconstant_start += SHARE_HASH_CHAIN_SIZE
10883+
10884+        self._offsets['share_data'] = nonconstant_start
10885+
10886+        # Finally, we know how big the share data will be, so we can
10887+        # figure out where the block hash tree needs to go.
10888+        # XXX: But this will go away if Zooko wants to make it so that
10889+        # you don't need to know the size of the file before you start
10890+        # uploading it.
10891+        self._offsets['block_hash_tree'] = self._offsets['share_data'] + \
10892+                    data_size
10893+
10894+        # Done. We can now start writing.
10895+
10896+
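The segment arithmetic in __init__ can be checked in isolation. A minimal Python 3 sketch; div_ceil and next_multiple are stand-ins for allmydata.util.mathutil, and the 16-byte SALT_SIZE is assumed:

```python
SALT_SIZE = 16  # assumed: MDMF stores one 16-byte salt per segment

def div_ceil(n, d):
    # Stand-in for allmydata.util.mathutil.div_ceil.
    return (n + d - 1) // d

def next_multiple(n, k):
    # Stand-in for allmydata.util.mathutil.next_multiple.
    return div_ceil(n, k) * k

def share_data_size(data_length, segment_size, required_shares):
    """Bytes of (salt + block) data one share will hold, mirroring the
    arithmetic in MDMFSlotWriteProxy.__init__."""
    assert segment_size % required_shares == 0
    num_segments = div_ceil(data_length, segment_size)
    block_size = segment_size // required_shares
    tail_size = data_length % segment_size
    if not tail_size:
        tail_block_size = block_size
    else:
        tail_block_size = (next_multiple(tail_size, required_shares)
                           // required_shares)
    # Every non-tail segment contributes a salted block; the tail block
    # carries its own salt as well.
    return ((block_size + SALT_SIZE) * (num_segments - 1)
            + tail_block_size + SALT_SIZE)

# A 300 KiB file with 128 KiB segments and k=4 has three segments,
# the last one short: two full 32768-byte blocks and an 11264-byte tail,
# each preceded by a salt.
assert share_data_size(300 * 1024, 128 * 1024, 4) == 76848
```

When the file is an exact multiple of the segment size, the tail block is simply a full block.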
10897+    def set_checkstring(self,
10898+                        seqnum_or_checkstring,
10899+                        root_hash=None,
10900+                        salt=None):
10901+        """
10902+        Set the checkstring for the given shnum.
10903+
10904+        This can be invoked in one of two ways.
10905+
10906+        With one argument, I assume that you are giving me a literal
10907+        checkstring -- e.g., the output of get_checkstring. I will then
10908+        set that checkstring as it is. This form is used by unit tests.
10909+
10910+        With two arguments, I assume that you are giving me a sequence
10911+        number and root hash to make a checkstring from. In that case, I
10912+        will build a checkstring and set it for you. This form is used
10913+        by the publisher.
10914+
10915+        By default, I assume that I am writing new shares to the grid.
10916+        If you don't explicitly set your own checkstring, I will use
10917+        one that requires that the remote share not exist. You will want
10918+        to use this method if you are updating a share in-place;
10919+        otherwise, writes will fail.
10920+        """
10921+        # You're allowed to overwrite checkstrings with this method;
10922+        # I assume that users know what they are doing when they call
10923+        # it.
10924+        if root_hash:
10925+            checkstring = struct.pack(MDMFCHECKSTRING,
10926+                                      1,
10927+                                      seqnum_or_checkstring,
10928+                                      root_hash)
10929+        else:
10930+            checkstring = seqnum_or_checkstring
10931+
10932+        if checkstring == "":
10933+            # We special-case this, since len("") = 0, but we need
10934+            # length of 1 for the case of an empty share to work on the
10935+            # storage server, which is what a checkstring that is the
10936+            # empty string means.
10937+            self._testvs = []
10938+        else:
10939+            self._testvs = []
10940+            self._testvs.append((0, len(checkstring), "eq", checkstring))
10941+
10942+
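For illustration, a checkstring for sequence number 5 can be built and pulled apart like this (a standalone Python 3 sketch using the format from this patch; bytes literals because struct operates on bytes):

```python
import struct

MDMFCHECKSTRING = ">BQ32s"  # version byte, sequence number, root hash

def make_checkstring(seqnum, root_hash):
    # The version byte is 1 for MDMF shares.
    return struct.pack(MDMFCHECKSTRING, 1, seqnum, root_hash)

root = b"\x11" * 32
cs = make_checkstring(5, root)
version, seqnum, root_hash = struct.unpack(MDMFCHECKSTRING, cs)
assert len(cs) == 41
assert (version, seqnum, root_hash) == (1, 5, root)
```

A brand-new share instead uses the special test vector (0, 1, "eq", "") -- compare one byte at offset 0 against the empty string -- which succeeds only when nothing is stored in the slot yet.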
10943+    def __repr__(self):
10944+        return "MDMFSlotWriteProxy for share %d" % self.shnum
10945+
10946+
10947+    def get_checkstring(self):
10948+        """
10949+        I return a representation of what the checkstring for this
10950+        share on the server will look like.
10951+
10952+        I am mostly used for tests.
10953+        """
10954+        if self._root_hash:
10955+            roothash = self._root_hash
10956+        else:
10957+            roothash = "\x00" * 32
10958+        return struct.pack(MDMFCHECKSTRING,
10959+                           1,
10960+                           self._seqnum,
10961+                           roothash)
10962+
10963+
10964+    def put_block(self, data, segnum, salt):
10965+        """
10966+        I queue a write vector for the data, salt, and segment number
10967+        provided to me. I return None, as I do not actually cause
10968+        anything to be written yet.
10969+        """
10970+        if segnum >= self._num_segments:
10971+            raise LayoutInvalid("I won't overwrite the block hash tree")
10972+        if len(salt) != SALT_SIZE:
10973+            raise LayoutInvalid("I was given a salt of size %d, but "
10974+                                "I wanted a salt of size %d" % (len(salt), SALT_SIZE))
10975+        if segnum + 1 == self._num_segments:
10976+            if len(data) != self._tail_block_size:
10977+                raise LayoutInvalid("I was given the wrong size block to write")
10978+        elif len(data) != self._block_size:
10979+            raise LayoutInvalid("I was given the wrong size block to write")
10980+
10981+        # We write at the share data offset plus segnum * (block + salt).
10982+        offset = self._offsets['share_data'] + \
10983+            (self._actual_block_size * segnum)
10984+        data = salt + data
10985+
10986+        self._writevs.append(tuple([offset, data]))
10987+
10988+
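The offset computation in put_block can be illustrated numerically. A sketch with hypothetical parameters (4 KiB blocks, share data starting at byte 2167; the 16-byte salt size is assumed):

```python
SALT_SIZE = 16  # assumed: one 16-byte salt precedes each block

def block_write_offset(share_data_offset, block_size, segnum):
    """Where the salt+block pair for segment `segnum` lands in the share."""
    # The on-disk stride between consecutive segments is the "actual"
    # block size: the block plus its preceding salt.
    return share_data_offset + (block_size + SALT_SIZE) * segnum

assert block_write_offset(2167, 4096, 0) == 2167
assert block_write_offset(2167, 4096, 3) == 2167 + 3 * 4112
```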
10989+    def put_encprivkey(self, encprivkey):
10990+        """
10991+        I queue a write vector for the encrypted private key provided to
10992+        me.
10993+        """
10994+        assert self._offsets
10995+        assert self._offsets['enc_privkey']
10996+        # You shouldn't re-write the encprivkey after the share hash
10997+        # chain is written, since that could cause the private key to
10998+        # run into it. The method that writes the share hash chain
10999+        # records the signature offset before it writes anything, so
11000+        # the presence of 'signature' in the offsets table tells us
11001+        # whether or not the share hash chain has been written.
11002+        if "signature" in self._offsets:
11003+            raise LayoutInvalid("You can't put the encrypted private key "
11004+                                "after putting the share hash chain")
11005+
11006+        self._offsets['share_hash_chain'] = self._offsets['enc_privkey'] + \
11007+                len(encprivkey)
11008+
11009+        self._writevs.append(tuple([self._offsets['enc_privkey'], encprivkey]))
11010+
11011+
11012+    def put_blockhashes(self, blockhashes):
11013+        """
11014+        I queue a write vector to put the block hash tree in blockhashes
11015+        onto the remote server.
11016+
11017+        The encrypted private key must be queued before the block hash
11018+        tree, since we need to know how large it is to know where the
11019+        block hash tree should go. The block hash tree must be put
11020+        before the share hash chain, since its size determines the
11021+        offset of the share hash chain.
11022+        """
11023+        assert self._offsets
11024+        assert "block_hash_tree" in self._offsets
11025+
11026+        assert isinstance(blockhashes, list)
11027+
11028+        blockhashes_s = "".join(blockhashes)
11029+        self._offsets['EOF'] = self._offsets['block_hash_tree'] + len(blockhashes_s)
11030+
11031+        self._writevs.append(tuple([self._offsets['block_hash_tree'],
11032+                                  blockhashes_s]))
11033+
11034+
11035+    def put_sharehashes(self, sharehashes):
11036+        """
11037+        I queue a write vector to put the share hash chain in my
11038+        argument onto the remote server.
11039+
11040+        The block hash tree must be queued before the share hash chain,
11041+        since we need to know where the block hash tree ends before we
11042+        can know where the share hash chain starts. The share hash chain
11043+        must be put before the signature, since the length of the packed
11044+        share hash chain determines the offset of the signature. Also,
11045+        semantically, you must know what the root of the block hash tree
11046+        is before you can generate a valid signature.
11047+        """
11048+        assert isinstance(sharehashes, dict)
11049+        assert self._offsets
11050+        if "share_hash_chain" not in self._offsets:
11051+            raise LayoutInvalid("You must put the block hash tree before "
11052+                                "putting the share hash chain")
11053+
11054+        # The signature comes after the share hash chain. If the
11055+        # signature has already been written, we must not write another
11056+        # share hash chain. The signature writes the verification key
11057+        # offset when it gets sent to the remote server, so we look for
11058+        # that.
11059+        if "verification_key" in self._offsets:
11060+            raise LayoutInvalid("You must write the share hash chain "
11061+                                "before you write the signature")
11062+        sharehashes_s = "".join([struct.pack(">H32s", i, sharehashes[i])
11063+                                  for i in sorted(sharehashes.keys())])
11064+        self._offsets['signature'] = self._offsets['share_hash_chain'] + \
11065+            len(sharehashes_s)
11066+        self._writevs.append(tuple([self._offsets['share_hash_chain'],
11067+                            sharehashes_s]))
11068+
11069+
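The serialization that put_sharehashes performs can be sketched on its own (Python 3; the ">H32s" node format is taken from this patch):

```python
import struct

def serialize_sharehashes(sharehashes):
    """Pack a {shnum: 32-byte hash} dict the way put_sharehashes does:
    a 2-byte share number plus the hash, in ascending share-number order."""
    return b"".join(struct.pack(">H32s", i, sharehashes[i])
                    for i in sorted(sharehashes))

chain = {3: b"\xaa" * 32, 0: b"\xbb" * 32}
packed = serialize_sharehashes(chain)
assert len(packed) == 2 * 34          # two 34-byte nodes
assert packed[:2] == b"\x00\x00"      # share 0 sorts first
assert packed[34:36] == b"\x00\x03"   # then share 3
```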
11070+    def put_root_hash(self, roothash):
11071+        """
11072+        Put the root hash (the root of the share hash tree) in the
11073+        remote slot.
11074+        """
11075+        # It does not make sense to be able to put the root
11076+        # hash without first putting the share hashes, since you need
11077+        # the share hashes to generate the root hash.
11078+        #
11079+        # Signature is defined by the routine that places the share hash
11080+        # chain, so it's a good thing to look for in finding out whether
11081+        # or not the share hash chain exists on the remote server.
11082+        if len(roothash) != HASH_SIZE:
11083+            raise LayoutInvalid("hashes and salts must be exactly %d bytes"
11084+                                 % HASH_SIZE)
11085+        self._root_hash = roothash
11086+        # To write this value, we update the checkstring on the remote
11087+        # server, which includes it.
11088+        checkstring = self.get_checkstring()
11089+        self._writevs.append(tuple([0, checkstring]))
11090+        # This write, if successful, changes the checkstring, so we need
11091+        # to update our internal checkstring to be consistent with the
11092+        # one on the server.
11093+
11094+
11095+    def get_signable(self):
11096+        """
11097+        Get the first seven fields of the mutable file; the parts that
11098+        are signed.
11099+        """
11100+        if not self._root_hash:
11101+            raise LayoutInvalid("You need to set the root hash "
11102+                                "before getting something to "
11103+                                "sign")
11104+        return struct.pack(MDMFSIGNABLEHEADER,
11105+                           1,
11106+                           self._seqnum,
11107+                           self._root_hash,
11108+                           self._required_shares,
11109+                           self._total_shares,
11110+                           self._segment_size,
11111+                           self._data_length)
11112+
11113+
11114+    def put_signature(self, signature):
11115+        """
11116+        I queue a write vector for the signature of the MDMF share.
11117+
11118+        I require that the root hash and share hash chain have been put
11119+        to the grid before I will write the signature to the grid.
11120+        """
11121+        # It does not make sense to put a signature before the root
11122+        # hash and the share hash chain are in place (the signature
11123+        # would be incomplete), so we don't allow that.
11124+        if "signature" not in self._offsets:
11125+            raise LayoutInvalid("You must put the share hash chain "
11126+                                "before putting the signature")
11127+        if not self._root_hash:
11128+            raise LayoutInvalid("You must complete the signed prefix "
11129+                                "before computing a signature")
11130+        # If we put the signature after we put the verification key, we
11131+        # could end up running into the verification key, and will
11132+        # probably screw up the offsets as well. So we don't allow that.
11133+        if "verification_key_end" in self._offsets:
11134+            raise LayoutInvalid("You can't put the signature after the "
11135+                                "verification key")
11136+        # The method that writes the verification key records the
11137+        # verification_key_end offset before writing, so look for that.
11138+        self._offsets['verification_key'] = self._offsets['signature'] +\
11139+            len(signature)
11140+        self._writevs.append(tuple([self._offsets['signature'], signature]))
11141+
11142+
11143+    def put_verification_key(self, verification_key):
11144+        """
11145+        I queue a write vector for the verification key.
11146+
11147+        I require that the signature have been written to the storage
11148+        server before I allow the verification key to be written to the
11149+        remote server.
11150+        """
11151+        if "verification_key" not in self._offsets:
11152+            raise LayoutInvalid("You must put the signature before you "
11153+                                "can put the verification key")
11154+
11155+        self._offsets['verification_key_end'] = \
11156+            self._offsets['verification_key'] + len(verification_key)
11157+        assert self._offsets['verification_key_end'] <= self._offsets['share_data']
11158+        self._writevs.append(tuple([self._offsets['verification_key'],
11159+                            verification_key]))
11160+
11161+
11162+    def _get_offsets_tuple(self):
11163+        return tuple([(key, value) for key, value in self._offsets.items()])
11164+
11165+
11166+    def get_verinfo(self):
11167+        return (self._seqnum,
11168+                self._root_hash,
11169+                self._required_shares,
11170+                self._total_shares,
11171+                self._segment_size,
11172+                self._data_length,
11173+                self.get_signable(),
11174+                self._get_offsets_tuple())
11175+
11176+
11177+    def finish_publishing(self):
11178+        """
11179+        I add a write vector for the offsets table, and then cause all
11180+        of the write vectors that I've dealt with so far to be published
11181+        to the remote server, ending the write process.
11182+        """
11183+        if "verification_key_end" not in self._offsets:
11184+            raise LayoutInvalid("You must put the verification key before "
11185+                                "you can publish the offsets")
11186+        offsets_offset = struct.calcsize(MDMFHEADERWITHOUTOFFSETS)
11187+        offsets = struct.pack(MDMFOFFSETS,
11188+                              self._offsets['enc_privkey'],
11189+                              self._offsets['share_hash_chain'],
11190+                              self._offsets['signature'],
11191+                              self._offsets['verification_key'],
11192+                              self._offsets['verification_key_end'],
11193+                              self._offsets['share_data'],
11194+                              self._offsets['block_hash_tree'],
11195+                              self._offsets['EOF'])
11196+        self._writevs.append(tuple([offsets_offset, offsets]))
11197+        encoding_parameters_offset = struct.calcsize(MDMFCHECKSTRING)
11198+        params = struct.pack(">BBQQ",
11199+                             self._required_shares,
11200+                             self._total_shares,
11201+                             self._segment_size,
11202+                             self._data_length)
11203+        self._writevs.append(tuple([encoding_parameters_offset, params]))
11204+        return self._write(self._writevs)
11205+
11206+
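The offsets table written here can be round-tripped in isolation. A Python 3 sketch; the field order matches the struct.pack call above, and the numeric values are illustrative, derived assuming the fixed-size regions declared earlier (1220-byte private key, 272-byte share hash chain, 260-byte signature, 292-byte verification key):

```python
import struct

MDMFOFFSETS = ">QQQQQQQQ"
# On-wire field order, matching the pack call in finish_publishing:
FIELDS = ("enc_privkey", "share_hash_chain", "signature",
          "verification_key", "verification_key_end",
          "share_data", "block_hash_tree", "EOF")

def pack_offsets(offsets):
    return struct.pack(MDMFOFFSETS, *[offsets[f] for f in FIELDS])

def unpack_offsets(data):
    return dict(zip(FIELDS, struct.unpack(MDMFOFFSETS, data)))

offsets = {"enc_privkey": 123, "share_hash_chain": 1343,
           "signature": 1615, "verification_key": 1875,
           "verification_key_end": 2167, "share_data": 2167,
           "block_hash_tree": 10000, "EOF": 10272}
assert unpack_offsets(pack_offsets(offsets)) == offsets
```

Note that share_data can start immediately after the end of the verification key region because the earlier fields occupy fixed-size slots.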
11207+    def _write(self, datavs, on_failure=None, on_success=None):
11208+        """I write the data vectors in datavs to the remote slot."""
11209+        tw_vectors = {}
11210+        if not self._testvs:
11211+            self._testvs = []
11212+            self._testvs.append(tuple([0, 1, "eq", ""]))
11213+        if not self._written:
11214+            # Write a new checkstring to the share when we write it, so
11215+            # that we have something to check later.
11216+            new_checkstring = self.get_checkstring()
11217+            datavs.append((0, new_checkstring))
11218+            def _first_write():
11219+                self._written = True
11220+                self._testvs = [(0, len(new_checkstring), "eq", new_checkstring)]
11221+            on_success = _first_write
11222+        tw_vectors[self.shnum] = (self._testvs, datavs, None)
11223+        d = self._rref.callRemote("slot_testv_and_readv_and_writev",
11224+                                  self._storage_index,
11225+                                  self._secrets,
11226+                                  tw_vectors,
11227+                                  self._readv)
11228+        def _result(results):
11229+            if isinstance(results, failure.Failure) or not results[0]:
11230+                # Do nothing; the write was unsuccessful.
11231+                if on_failure: on_failure()
11232+            else:
11233+                if on_success: on_success()
11234+            return results
11235+        d.addCallback(_result)
11236+        return d
11237+
11238+
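_write relies on the server's test-then-write semantics. A toy in-memory stand-in (not Tahoe's real storage server, which also checks secrets and returns read vectors) shows how the "eq" test vector detects an uncoordinated write:

```python
def slot_testv_and_writev(slot, testvs, datavs):
    """Toy compare-and-swap: apply datavs only if every test vector
    matches the current slot contents."""
    for (offset, length, op, specimen) in testvs:
        assert op == "eq"  # the only operator this patch uses
        if bytes(slot[offset:offset + length]) != specimen:
            return False   # a test failed; nothing is written
    for (offset, data) in datavs:
        slot[offset:offset + len(data)] = data
    return True

slot = bytearray()
# A new share tests that the slot is empty: (0, 1, "eq", b"").
new_share_testv = [(0, 1, "eq", b"")]
assert slot_testv_and_writev(slot, new_share_testv, [(0, b"header")])
# A second writer using the same "new share" test vector now fails,
# which is how uncoordinated writes are detected.
assert not slot_testv_and_writev(slot, new_share_testv, [(0, b"XXXXXX")])
```

After the first write succeeds, _write switches its test vector to the new checkstring, so later writes by the same writer still pass while foreign writes fail.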
11239+class MDMFSlotReadProxy:
11240+    """
11241+    I read from a mutable slot filled with data written in the MDMF data
11242+    format (which is described above).
11243+
11244+    I can be initialized with some amount of data, which I will use (if
11245+    it is valid) to eliminate some of the need to fetch it from servers.
11246+    """
11247+    def __init__(self,
11248+                 rref,
11249+                 storage_index,
11250+                 shnum,
11251+                 data=""):
11252+        # Start the initialization process.
11253+        self._rref = rref
11254+        self._storage_index = storage_index
11255+        self.shnum = shnum
11256+
11257+        # Before doing anything, the reader is probably going to want to
11258+        # verify that the signature is correct. To do that, they'll need
11259+        # the verification key, and the signature. To get those, we'll
11260+        # need the offset table. So fetch the offset table on the
11261+        # assumption that that will be the first thing that a reader is
11262+        # going to do.
11263+
11264+        # The fact that these encoding parameters are None tells us
11265+        # that we haven't yet fetched them from the remote share, so we
11266+        # should. We could just not set them, but the checks will be
11267+        # easier to read if we don't have to use hasattr.
11268+        self._version_number = None
11269+        self._sequence_number = None
11270+        self._root_hash = None
11271+        # Filled in if we're dealing with an SDMF file. Unused
11272+        # otherwise.
11273+        self._salt = None
11274+        self._required_shares = None
11275+        self._total_shares = None
11276+        self._segment_size = None
11277+        self._data_length = None
11278+        self._offsets = None
11279+
11280+        # If the user has chosen to initialize us with some data, we'll
11281+        # try to satisfy subsequent data requests with that data before
11282+        # asking the storage server for it.
11283+        self._data = data
11284+        # The filenode's cache interface hands us None when there is
11285+        # no cached data, but the way we index the cached data
11286+        # requires a string, so convert None to "".
11287+        if self._data is None:
11288+            self._data = ""
11289+
11290+        self._queue_observers = observer.ObserverList()
11291+        self._queue_errbacks = observer.ObserverList()
11292+        self._readvs = []
11293+
11294+
11295+    def _maybe_fetch_offsets_and_header(self, force_remote=False):
11296+        """
11297+        I fetch the offset table and the header from the remote slot if
11298+        I don't already have them. If I do have them, I do nothing and
11299+        return an empty Deferred.
11300+        """
11301+        if self._offsets:
11302+            return defer.succeed(None)
11303+        # At this point, we may be either SDMF or MDMF. Fetching 123
11304+        # bytes is enough to get the header and offset table for both:
11305+        # MDMF needs all 123 bytes, while SDMF needs only the first
11306+        # 107, leaving us with 16 more bytes than we need. This is
11307+        # probably less expensive than the cost of a second roundtrip.
11308+        readvs = [(0, 123)]
11309+        d = self._read(readvs, force_remote)
11310+        d.addCallback(self._process_encoding_parameters)
11311+        d.addCallback(self._process_offsets)
11312+        return d
11313+
11314+
11315+    def _process_encoding_parameters(self, encoding_parameters):
11316+        assert self.shnum in encoding_parameters
11317+        encoding_parameters = encoding_parameters[self.shnum][0]
11318+        # The first byte is the version number. It will tell us what
11319+        # to do next.
11320+        (verno,) = struct.unpack(">B", encoding_parameters[:1])
11321+        if verno == MDMF_VERSION:
11322+            read_size = MDMFHEADERWITHOUTOFFSETSSIZE
11323+            (verno,
11324+             seqnum,
11325+             root_hash,
11326+             k,
11327+             n,
11328+             segsize,
11329+             datalen) = struct.unpack(MDMFHEADERWITHOUTOFFSETS,
11330+                                      encoding_parameters[:read_size])
11331+            if segsize == 0 and datalen == 0:
11332+                # Empty file, no segments.
11333+                self._num_segments = 0
11334+            else:
11335+                self._num_segments = mathutil.div_ceil(datalen, segsize)
11336+
11337+        elif verno == SDMF_VERSION:
11338+            read_size = SIGNED_PREFIX_LENGTH
11339+            (verno,
11340+             seqnum,
11341+             root_hash,
11342+             salt,
11343+             k,
11344+             n,
11345+             segsize,
11346+             datalen) = struct.unpack(">BQ32s16s BBQQ",
11347+                                encoding_parameters[:SIGNED_PREFIX_LENGTH])
11348+            self._salt = salt
11349+            if segsize == 0 and datalen == 0:
11350+                # empty file
11351+                self._num_segments = 0
11352+            else:
11353+                # non-empty SDMF files have one segment.
11354+                self._num_segments = 1
11355+        else:
11356+            raise UnknownVersionError("You asked me to read mutable file "
11357+                                      "version %d, but I only understand "
11358+                                      "%d and %d" % (verno, SDMF_VERSION,
11359+                                                     MDMF_VERSION))
11360+
11361+        self._version_number = verno
11362+        self._sequence_number = seqnum
11363+        self._root_hash = root_hash
11364+        self._required_shares = k
11365+        self._total_shares = n
11366+        self._segment_size = segsize
11367+        self._data_length = datalen
11368+
11369+        self._block_size = self._segment_size / self._required_shares
11370+        # We can upload empty files, and need to account for this fact
11371+        # so as to avoid zero-division and zero-modulo errors.
11372+        if datalen > 0:
11373+            tail_size = self._data_length % self._segment_size
11374+        else:
11375+            tail_size = 0
11376+        if not tail_size:
11377+            self._tail_block_size = self._block_size
11378+        else:
11379+            self._tail_block_size = mathutil.next_multiple(tail_size,
11380+                                                    self._required_shares)
11381+            self._tail_block_size /= self._required_shares
11382+
11383+        return encoding_parameters
11384+
11385+
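The block-size and tail-block-size arithmetic above can be checked in isolation. This is a minimal sketch with local stand-ins for `mathutil.div_ceil` and `mathutil.next_multiple` (names taken from the calls above; the implementations here are assumed equivalents):

```python
def div_ceil(n, d):
    # Smallest integer >= n / d.
    return (n + d - 1) // d

def next_multiple(n, k):
    # Smallest multiple of k that is >= n.
    return div_ceil(n, k) * k

def tail_block_size(segment_size, data_length, k):
    # Mirrors the tail-size arithmetic in _process_encoding_parameters:
    # the tail segment is padded up to a multiple of k before being
    # split into k blocks.
    block_size = segment_size // k
    tail_size = (data_length % segment_size) if data_length > 0 else 0
    if not tail_size:
        return block_size
    return next_multiple(tail_size, k) // k

# A 13-byte file in 6-byte segments with k=3: the 1-byte tail is
# padded to 3 bytes, giving 1-byte tail blocks.
assert tail_block_size(6, 13, 3) == 1
# An exact multiple of the segment size: the tail block is full-sized.
assert tail_block_size(6, 12, 3) == 2
```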
11386+    def _process_offsets(self, offsets):
11387+        if self._version_number == 0:
11388+            read_size = OFFSETS_LENGTH
11389+            read_offset = SIGNED_PREFIX_LENGTH
11390+            end = read_size + read_offset
11391+            (signature,
11392+             share_hash_chain,
11393+             block_hash_tree,
11394+             share_data,
11395+             enc_privkey,
11396+             EOF) = struct.unpack(">LLLLQQ",
11397+                                  offsets[read_offset:end])
11398+            self._offsets = {}
11399+            self._offsets['signature'] = signature
11400+            self._offsets['share_data'] = share_data
11401+            self._offsets['block_hash_tree'] = block_hash_tree
11402+            self._offsets['share_hash_chain'] = share_hash_chain
11403+            self._offsets['enc_privkey'] = enc_privkey
11404+            self._offsets['EOF'] = EOF
11405+
11406+        elif self._version_number == 1:
11407+            read_offset = MDMFHEADERWITHOUTOFFSETSSIZE
11408+            read_length = MDMFOFFSETS_LENGTH
11409+            end = read_offset + read_length
11410+            (encprivkey,
11411+             sharehashes,
11412+             signature,
11413+             verification_key,
11414+             verification_key_end,
11415+             sharedata,
11416+             blockhashes,
11417+             eof) = struct.unpack(MDMFOFFSETS,
11418+                                  offsets[read_offset:end])
11419+            self._offsets = {}
11420+            self._offsets['enc_privkey'] = encprivkey
11421+            self._offsets['block_hash_tree'] = blockhashes
11422+            self._offsets['share_hash_chain'] = sharehashes
11423+            self._offsets['signature'] = signature
11424+            self._offsets['verification_key'] = verification_key
11425+            self._offsets['verification_key_end'] = \
11426+                verification_key_end
11427+            self._offsets['EOF'] = eof
11428+            self._offsets['share_data'] = sharedata
11429+
11430+
11431+    def get_block_and_salt(self, segnum, queue=False):
11432+        """
11433+        I return (block, salt), where block is the block data and
11434+        salt is the salt used to encrypt that segment.
11435+        """
11436+        d = self._maybe_fetch_offsets_and_header()
11437+        def _then(ignored):
11438+            base_share_offset = self._offsets['share_data']
11439+
11440+            if segnum + 1 > self._num_segments:
11441+                raise LayoutInvalid("Not a valid segment number")
11442+
11443+            if self._version_number == 0:
11444+                share_offset = base_share_offset + self._block_size * segnum
11445+            else:
11446+                share_offset = base_share_offset + (self._block_size + \
11447+                                                    SALT_SIZE) * segnum
11448+            if segnum + 1 == self._num_segments:
11449+                data = self._tail_block_size
11450+            else:
11451+                data = self._block_size
11452+
11453+            if self._version_number == 1:
11454+                data += SALT_SIZE
11455+
11456+            readvs = [(share_offset, data)]
11457+            return readvs
11458+        d.addCallback(_then)
11459+        d.addCallback(lambda readvs:
11460+            self._read(readvs, queue=queue))
11461+        def _process_results(results):
11462+            assert self.shnum in results
11463+            if self._version_number == 0:
11464+                # We only read the share data, but we know the salt from
11465+                # when we fetched the header
11466+                data = results[self.shnum]
11467+                if not data:
11468+                    data = ""
11469+                else:
11470+                    assert len(data) == 1
11471+                    data = data[0]
11472+                salt = self._salt
11473+            else:
11474+                data = results[self.shnum]
11475+                if not data:
11476+                    salt = data = ""
11477+                else:
11478+                    salt_and_data = results[self.shnum][0]
11479+                    salt = salt_and_data[:SALT_SIZE]
11480+                    data = salt_and_data[SALT_SIZE:]
11481+            return data, salt
11482+        d.addCallback(_process_results)
11483+        return d
11484+
11485+
11486+    def get_blockhashes(self, needed=None, queue=False, force_remote=False):
11487+        """
11488+        I return the block hash tree
11489+
11490+        I take an optional argument, needed, which is a set of indices
11491+        corresponding to hashes that I should fetch. If this argument is
11492+        missing, I will fetch the entire block hash tree; otherwise, I
11493+        may attempt to fetch fewer hashes, based on what needed says
11494+        that I should do. Note that I may fetch as many hashes as I
11495+        want, so long as the set of hashes that I do fetch is a superset
11496+        of the ones that I am asked for, so callers should be prepared
11497+        to tolerate additional hashes.
11498+        """
11499+        # TODO: Return only the parts of the block hash tree necessary
11500+        # to validate the blocknum provided?
11501+        # This is a good idea, but it is hard to implement correctly. It
11502+        # is bad to fetch any one block hash more than once, so we
11503+        # probably just want to fetch the whole thing at once and then
11504+        # serve it.
11505+        if needed == set([]):
11506+            return defer.succeed([])
11507+        d = self._maybe_fetch_offsets_and_header()
11508+        def _then(ignored):
11509+            blockhashes_offset = self._offsets['block_hash_tree']
11510+            if self._version_number == 1:
11511+                blockhashes_length = self._offsets['EOF'] - blockhashes_offset
11512+            else:
11513+                blockhashes_length = self._offsets['share_data'] - blockhashes_offset
11514+            readvs = [(blockhashes_offset, blockhashes_length)]
11515+            return readvs
11516+        d.addCallback(_then)
11517+        d.addCallback(lambda readvs:
11518+            self._read(readvs, queue=queue, force_remote=force_remote))
11519+        def _build_block_hash_tree(results):
11520+            assert self.shnum in results
11521+
11522+            rawhashes = results[self.shnum][0]
11523+            results = [rawhashes[i:i+HASH_SIZE]
11524+                       for i in range(0, len(rawhashes), HASH_SIZE)]
11525+            return results
11526+        d.addCallback(_build_block_hash_tree)
11527+        return d
11528+
11529+
11530+    def get_sharehashes(self, needed=None, queue=False, force_remote=False):
11531+        """
11532+        I return the part of the share hash chain needed to validate
11533+        this share.
11534+
11535+        I take an optional argument, needed. Needed is a set of indices
11536+        that correspond to the hashes that I should fetch. If needed is
11537+        not present, I will fetch and return the entire share hash
11538+        chain. Otherwise, I may fetch and return any part of the share
11539+        hash chain that is a superset of the part that I am asked to
11540+        fetch. Callers should be prepared to deal with more hashes than
11541+        they've asked for.
11542+        """
11543+        if needed == set([]):
11544+            return defer.succeed([])
11545+        d = self._maybe_fetch_offsets_and_header()
11546+
11547+        def _make_readvs(ignored):
11548+            sharehashes_offset = self._offsets['share_hash_chain']
11549+            if self._version_number == 0:
11550+                sharehashes_length = self._offsets['block_hash_tree'] - sharehashes_offset
11551+            else:
11552+                sharehashes_length = self._offsets['signature'] - sharehashes_offset
11553+            readvs = [(sharehashes_offset, sharehashes_length)]
11554+            return readvs
11555+        d.addCallback(_make_readvs)
11556+        d.addCallback(lambda readvs:
11557+            self._read(readvs, queue=queue, force_remote=force_remote))
11558+        def _build_share_hash_chain(results):
11559+            assert self.shnum in results
11560+
11561+            sharehashes = results[self.shnum][0]
11562+            results = [sharehashes[i:i+(HASH_SIZE + 2)]
11563+                       for i in range(0, len(sharehashes), HASH_SIZE + 2)]
11564+            results = dict([struct.unpack(">H32s", data)
11565+                            for data in results])
11566+            return results
11567+        d.addCallback(_build_share_hash_chain)
11568+        return d
11569+
11570+
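For reference, the chunk-and-unpack step in `_build_share_hash_chain` above can be exercised standalone. `HASH_SIZE` is assumed to be 32 here, matching the `32s` in the `">H32s"` unpack format:

```python
import struct

HASH_SIZE = 32  # assumed, matching the 32s in the unpack format above

def parse_share_hash_chain(raw):
    # Split the packed chain into (2 + 32)-byte chunks and unpack each
    # one as a (share number, hash) pair, as _build_share_hash_chain
    # does with the bytes it reads from the remote share.
    chunk = 2 + HASH_SIZE
    pieces = [raw[i:i + chunk] for i in range(0, len(raw), chunk)]
    return dict(struct.unpack(">H32s", p) for p in pieces)

packed = struct.pack(">H32s", 0, b"a" * 32) + \
         struct.pack(">H32s", 3, b"b" * 32)
chain = parse_share_hash_chain(packed)
assert chain == {0: b"a" * 32, 3: b"b" * 32}
```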
11571+    def get_encprivkey(self, queue=False):
11572+        """
11573+        I return the encrypted private key.
11574+        """
11575+        d = self._maybe_fetch_offsets_and_header()
11576+
11577+        def _make_readvs(ignored):
11578+            privkey_offset = self._offsets['enc_privkey']
11579+            if self._version_number == 0:
11580+                privkey_length = self._offsets['EOF'] - privkey_offset
11581+            else:
11582+                privkey_length = self._offsets['share_hash_chain'] - privkey_offset
11583+            readvs = [(privkey_offset, privkey_length)]
11584+            return readvs
11585+        d.addCallback(_make_readvs)
11586+        d.addCallback(lambda readvs:
11587+            self._read(readvs, queue=queue))
11588+        def _process_results(results):
11589+            assert self.shnum in results
11590+            privkey = results[self.shnum][0]
11591+            return privkey
11592+        d.addCallback(_process_results)
11593+        return d
11594+
11595+
11596+    def get_signature(self, queue=False):
11597+        """
11598+        I return the signature of my share.
11599+        """
11600+        d = self._maybe_fetch_offsets_and_header()
11601+
11602+        def _make_readvs(ignored):
11603+            signature_offset = self._offsets['signature']
11604+            if self._version_number == 1:
11605+                signature_length = self._offsets['verification_key'] - signature_offset
11606+            else:
11607+                signature_length = self._offsets['share_hash_chain'] - signature_offset
11608+            readvs = [(signature_offset, signature_length)]
11609+            return readvs
11610+        d.addCallback(_make_readvs)
11611+        d.addCallback(lambda readvs:
11612+            self._read(readvs, queue=queue))
11613+        def _process_results(results):
11614+            assert self.shnum in results
11615+            signature = results[self.shnum][0]
11616+            return signature
11617+        d.addCallback(_process_results)
11618+        return d
11619+
11620+
11621+    def get_verification_key(self, queue=False):
11622+        """
11623+        I return the verification key.
11624+        """
11625+        d = self._maybe_fetch_offsets_and_header()
11626+
11627+        def _make_readvs(ignored):
11628+            if self._version_number == 1:
11629+                vk_offset = self._offsets['verification_key']
11630+                vk_length = self._offsets['verification_key_end'] - vk_offset
11631+            else:
11632+                vk_offset = struct.calcsize(">BQ32s16sBBQQLLLLQQ")
11633+                vk_length = self._offsets['signature'] - vk_offset
11634+            readvs = [(vk_offset, vk_length)]
11635+            return readvs
11636+        d.addCallback(_make_readvs)
11637+        d.addCallback(lambda readvs:
11638+            self._read(readvs, queue=queue))
11639+        def _process_results(results):
11640+            assert self.shnum in results
11641+            verification_key = results[self.shnum][0]
11642+            return verification_key
11643+        d.addCallback(_process_results)
11644+        return d
11645+
11646+
11647+    def get_encoding_parameters(self):
11648+        """
11649+        I return (k, n, segsize, datalen)
11650+        """
11651+        d = self._maybe_fetch_offsets_and_header()
11652+        d.addCallback(lambda ignored:
11653+            (self._required_shares,
11654+             self._total_shares,
11655+             self._segment_size,
11656+             self._data_length))
11657+        return d
11658+
11659+
11660+    def get_seqnum(self):
11661+        """
11662+        I return the sequence number for this share.
11663+        """
11664+        d = self._maybe_fetch_offsets_and_header()
11665+        d.addCallback(lambda ignored:
11666+            self._sequence_number)
11667+        return d
11668+
11669+
11670+    def get_root_hash(self):
11671+        """
11672+        I return the root of the block hash tree
11673+        """
11674+        d = self._maybe_fetch_offsets_and_header()
11675+        d.addCallback(lambda ignored: self._root_hash)
11676+        return d
11677+
11678+
11679+    def get_checkstring(self):
11680+        """
11681+        I return the packed representation of the following:
11682+
11683+            - version number
11684+            - sequence number
11685+            - root hash
11686+            - salt (SDMF only)
11687+
11688+        which my users use as a checkstring to detect other writers.
11689+        """
11690+        d = self._maybe_fetch_offsets_and_header()
11691+        def _build_checkstring(ignored):
11692+            if self._salt:
11693+                checkstring = struct.pack(PREFIX,
11694+                                          self._version_number,
11695+                                          self._sequence_number,
11696+                                          self._root_hash,
11697+                                          self._salt)
11698+            else:
11699+                checkstring = struct.pack(MDMFCHECKSTRING,
11700+                                          self._version_number,
11701+                                          self._sequence_number,
11702+                                          self._root_hash)
11703+
11704+            return checkstring
11705+        d.addCallback(_build_checkstring)
11706+        return d
11707+
11708+
11709+    def get_prefix(self, force_remote):
11710+        d = self._maybe_fetch_offsets_and_header(force_remote)
11711+        d.addCallback(lambda ignored:
11712+            self._build_prefix())
11713+        return d
11714+
11715+
11716+    def _build_prefix(self):
11717+        # The prefix is another name for the part of the remote share
11718+        # that gets signed. It consists of everything up to and
11719+        # including the datalength, packed by struct.
11720+        if self._version_number == SDMF_VERSION:
11721+            return struct.pack(SIGNED_PREFIX,
11722+                           self._version_number,
11723+                           self._sequence_number,
11724+                           self._root_hash,
11725+                           self._salt,
11726+                           self._required_shares,
11727+                           self._total_shares,
11728+                           self._segment_size,
11729+                           self._data_length)
11730+
11731+        else:
11732+            return struct.pack(MDMFSIGNABLEHEADER,
11733+                           self._version_number,
11734+                           self._sequence_number,
11735+                           self._root_hash,
11736+                           self._required_shares,
11737+                           self._total_shares,
11738+                           self._segment_size,
11739+                           self._data_length)
11740+
11741+
11742+    def _get_offsets_tuple(self):
11743+        # The offsets tuple is another component of the version
11744+        # information tuple. Despite the name, it is simply a copy of
11745+        # our offsets dictionary.
11746+        return self._offsets.copy()
11747+
11748+
11749+    def get_verinfo(self):
11750+        """
11751+        I return my verinfo tuple. This is used by the ServermapUpdater
11752+        to keep track of versions of mutable files.
11753+
11754+        The verinfo tuple for MDMF files contains:
11755+            - seqnum
11756+            - root hash
11757+            - a blank (nothing)
11758+            - segsize
11759+            - datalen
11760+            - k
11761+            - n
11762+            - prefix (the thing that you sign)
11763+            - a tuple of offsets
11764+
11765+        We include the blank slot in MDMF so that MDMF and SDMF
11766+        verinfo tuples have the same shape and can be processed
11767+        uniformly.
11768+
11769+        The verinfo tuple for SDMF files carries the 16-byte IV in
11770+        place of the blank.
11770+        """
11771+        d = self._maybe_fetch_offsets_and_header()
11772+        def _build_verinfo(ignored):
11773+            if self._version_number == SDMF_VERSION:
11774+                salt_to_use = self._salt
11775+            else:
11776+                salt_to_use = None
11777+            return (self._sequence_number,
11778+                    self._root_hash,
11779+                    salt_to_use,
11780+                    self._segment_size,
11781+                    self._data_length,
11782+                    self._required_shares,
11783+                    self._total_shares,
11784+                    self._build_prefix(),
11785+                    self._get_offsets_tuple())
11786+        d.addCallback(_build_verinfo)
11787+        return d
11788+
11789+
11790+    def flush(self):
11791+        """
11792+        I flush my queue of read vectors.
11793+        """
11794+        d = self._read(self._readvs)
11795+        def _then(results):
11796+            self._readvs = []
11797+            if isinstance(results, failure.Failure):
11798+                self._queue_errbacks.notify(results)
11799+            else:
11800+                self._queue_observers.notify(results)
11801+            self._queue_observers = observer.ObserverList()
11802+            self._queue_errbacks = observer.ObserverList()
11803+        d.addBoth(_then)
11804+
11805+
11806+    def _read(self, readvs, force_remote=False, queue=False):
11807+        unsatisfiable = filter(lambda x: x[0] + x[1] > len(self._data), readvs)
11808+        # TODO: It's entirely possible to tweak this so that it just
11809+        # fulfills the requests that it can, and not demand that all
11810+        # requests are satisfiable before running it.
11811+        if not unsatisfiable and not force_remote:
11812+            results = [self._data[offset:offset+length]
11813+                       for (offset, length) in readvs]
11814+            results = {self.shnum: results}
11815+            return defer.succeed(results)
11816+        else:
11817+            if queue:
11818+                start = len(self._readvs)
11819+                self._readvs += readvs
11820+                end = len(self._readvs)
11821+                def _get_results(results, start, end):
11822+                    if self.shnum not in results:
11823+                        return {self.shnum: [""]}
11824+                    return {self.shnum: results[self.shnum][start:end]}
11825+                d = defer.Deferred()
11826+                d.addCallback(_get_results, start, end)
11827+                self._queue_observers.subscribe(d.callback)
11828+                self._queue_errbacks.subscribe(d.errback)
11829+                return d
11830+            return self._rref.callRemote("slot_readv",
11831+                                         self._storage_index,
11832+                                         [self.shnum],
11833+                                         readvs)
11834+
11835+
11836+    def is_sdmf(self):
11837+        """I tell my caller whether my remote file is SDMF (in which
11838+        case I fire with True) or MDMF (False)."""
11839+        d = self._maybe_fetch_offsets_and_header()
11840+        d.addCallback(lambda ignored:
11841+            self._version_number == 0)
11842+        return d
11843+
11844+
11845+class LayoutInvalid(Exception):
11846+    """
11847+    This isn't a valid MDMF mutable file
11848+    """
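Tying the pieces together, the header layout that MDMFSlotReadProxy dispatches on can be round-tripped with plain `struct`. The format string below is copied from the unpack call in `_process_encoding_parameters`; the constant name mirrors the module-level definition, which is assumed here since it lies outside this hunk:

```python
import struct

# MDMF header up to (not including) the offset table: version byte,
# sequence number, root hash, k, N, segment size, data length.
MDMFHEADERWITHOUTOFFSETS = ">BQ32sBBQQ"

def parse_mdmf_header(data):
    # Unpack the signable header prefix from the start of a share.
    size = struct.calcsize(MDMFHEADERWITHOUTOFFSETS)
    (verno, seqnum, root_hash, k, n, segsize, datalen) = \
        struct.unpack(MDMFHEADERWITHOUTOFFSETS, data[:size])
    return dict(verno=verno, seqnum=seqnum, root_hash=root_hash,
                k=k, n=n, segsize=segsize, datalen=datalen)

# Round-trip a sample header with made-up (hypothetical) values.
header = struct.pack(MDMFHEADERWITHOUTOFFSETS,
                     1, 5, b"r" * 32, 3, 10, 131073, 1000000)
fields = parse_mdmf_header(header)
assert fields["verno"] == 1 and fields["k"] == 3 and fields["n"] == 10
# 59 bytes of header; the 64-byte offset table follows, giving the
# 123-byte initial read performed by _maybe_fetch_offsets_and_header.
assert struct.calcsize(MDMFHEADERWITHOUTOFFSETS) == 59
```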
11849hunk ./src/allmydata/test/test_storage.py 1
11850-import time, os.path, platform, stat, re, simplejson, struct
11851+import time, os.path, platform, stat, re, simplejson, struct, shutil
11852 
11853 import mock
11854 
11855hunk ./src/allmydata/test/test_storage.py 23
11856 from allmydata.storage.expirer import LeaseCheckingCrawler
11857 from allmydata.immutable.layout import WriteBucketProxy, WriteBucketProxy_v2, \
11858      ReadBucketProxy
11859+from allmydata.mutable.layout import MDMFSlotWriteProxy, MDMFSlotReadProxy, \
11860+                                     LayoutInvalid, MDMFSIGNABLEHEADER, \
11861+                                     SIGNED_PREFIX, MDMFHEADER, \
11862+                                     MDMFOFFSETS, SDMFSlotWriteProxy, \
11863+                                     PRIVATE_KEY_SIZE, \
11864+                                     SIGNATURE_SIZE, \
11865+                                     VERIFICATION_KEY_SIZE, \
11866+                                     SHARE_HASH_CHAIN_SIZE
11867 from allmydata.interfaces import BadWriteEnablerError
11868hunk ./src/allmydata/test/test_storage.py 32
11869-from allmydata.test.common import LoggingServiceParent
11870+from allmydata.test.common import LoggingServiceParent, ShouldFailMixin
11871 from allmydata.test.common_web import WebRenderingMixin
11872 from allmydata.test.no_network import NoNetworkServer
11873 from allmydata.web.storage import StorageStatus, remove_prefix
11874hunk ./src/allmydata/test/test_storage.py 111
11875 
11876 class RemoteBucket:
11877 
11878+    def __init__(self):
11879+        self.read_count = 0
11880+        self.write_count = 0
11881+
11882     def callRemote(self, methname, *args, **kwargs):
11883         def _call():
11884             meth = getattr(self.target, "remote_" + methname)
11885hunk ./src/allmydata/test/test_storage.py 119
11886             return meth(*args, **kwargs)
11887+
11888+        if methname == "slot_readv":
11889+            self.read_count += 1
11890+        if "writev" in methname:
11891+            self.write_count += 1
11892+
11893         return defer.maybeDeferred(_call)
11894 
11895hunk ./src/allmydata/test/test_storage.py 127
11896+
11897 class BucketProxy(unittest.TestCase):
11898     def make_bucket(self, name, size):
11899         basedir = os.path.join("storage", "BucketProxy", name)
11900hunk ./src/allmydata/test/test_storage.py 1310
11901         self.failUnless(os.path.exists(prefixdir), prefixdir)
11902         self.failIf(os.path.exists(bucketdir), bucketdir)
11903 
11904+
11905+class MDMFProxies(unittest.TestCase, ShouldFailMixin):
11906+    def setUp(self):
11907+        self.sparent = LoggingServiceParent()
11908+        self._lease_secret = itertools.count()
11909+        self.ss = self.create("MDMFProxies storage test server")
11910+        self.rref = RemoteBucket()
11911+        self.rref.target = self.ss
11912+        self.secrets = (self.write_enabler("we_secret"),
11913+                        self.renew_secret("renew_secret"),
11914+                        self.cancel_secret("cancel_secret"))
11915+        self.segment = "aaaaaa"
11916+        self.block = "aa"
11917+        self.salt = "a" * 16
11918+        self.block_hash = "a" * 32
11919+        self.block_hash_tree = [self.block_hash for i in xrange(6)]
11920+        self.share_hash = self.block_hash
11921+        self.share_hash_chain = dict([(i, self.share_hash) for i in xrange(6)])
11922+        self.signature = "foobarbaz"
11923+        self.verification_key = "vvvvvv"
11924+        self.encprivkey = "private"
11925+        self.root_hash = self.block_hash
11926+        self.salt_hash = self.root_hash
11927+        self.salt_hash_tree = [self.salt_hash for i in xrange(6)]
11928+        self.block_hash_tree_s = self.serialize_blockhashes(self.block_hash_tree)
11929+        self.share_hash_chain_s = self.serialize_sharehashes(self.share_hash_chain)
11930+        # blockhashes and salt hashes are serialized in the same way,
11931+        # except that we lop off the first element and store it in the
11932+        # header.
11933+        self.salt_hash_tree_s = self.serialize_blockhashes(self.salt_hash_tree[1:])
11934+
11935+
11936+    def tearDown(self):
11937+        self.sparent.stopService()
11938+        shutil.rmtree(self.workdir("MDMFProxies storage test server"))
11939+
11940+
11941+    def write_enabler(self, we_tag):
11942+        return hashutil.tagged_hash("we_blah", we_tag)
11943+
11944+
11945+    def renew_secret(self, tag):
11946+        return hashutil.tagged_hash("renew_blah", str(tag))
11947+
11948+
11949+    def cancel_secret(self, tag):
11950+        return hashutil.tagged_hash("cancel_blah", str(tag))
11951+
11952+
11953+    def workdir(self, name):
11954+        basedir = os.path.join("storage", "MutableServer", name)
11955+        return basedir
11956+
11957+
11958+    def create(self, name):
11959+        workdir = self.workdir(name)
11960+        ss = StorageServer(workdir, "\x00" * 20)
11961+        ss.setServiceParent(self.sparent)
11962+        return ss
11963+
11964+
11965+    def build_test_mdmf_share(self, tail_segment=False, empty=False):
11966+        # Start with the checkstring
11967+        data = struct.pack(">BQ32s",
11968+                           1,
11969+                           0,
11970+                           self.root_hash)
11971+        self.checkstring = data
11972+        # Next, the encoding parameters
11973+        if tail_segment:
11974+            data += struct.pack(">BBQQ",
11975+                                3,
11976+                                10,
11977+                                6,
11978+                                33)
11979+        elif empty:
11980+            data += struct.pack(">BBQQ",
11981+                                3,
11982+                                10,
11983+                                0,
11984+                                0)
11985+        else:
11986+            data += struct.pack(">BBQQ",
11987+                                3,
11988+                                10,
11989+                                6,
11990+                                36)
11991+        # Now we'll build the share data, then the offsets.
11992+        sharedata = ""
11993+        if not tail_segment and not empty:
11994+            for i in xrange(6):
11995+                sharedata += self.salt + self.block
11996+        elif tail_segment:
11997+            for i in xrange(5):
11998+                sharedata += self.salt + self.block
11999+            sharedata += self.salt + "a"
12000+
12001+        # The encrypted private key comes after the shares + salts
12002+        offset_size = struct.calcsize(MDMFOFFSETS)
12003+        encrypted_private_key_offset = len(data) + offset_size
12004+        # The share hash chain comes after the private key
12005+        sharehashes_offset = encrypted_private_key_offset + \
12006+            len(self.encprivkey)
12007+
12008+        # The signature comes after the share hash chain.
12009+        signature_offset = sharehashes_offset + len(self.share_hash_chain_s)
12010+
12011+        verification_key_offset = signature_offset + len(self.signature)
12012+        verification_key_end = verification_key_offset + \
12013+            len(self.verification_key)
12014+
12015+        share_data_offset = offset_size
12016+        share_data_offset += PRIVATE_KEY_SIZE
12017+        share_data_offset += SIGNATURE_SIZE
12018+        share_data_offset += VERIFICATION_KEY_SIZE
12019+        share_data_offset += SHARE_HASH_CHAIN_SIZE
12020+
12021+        blockhashes_offset = share_data_offset + len(sharedata)
12022+        eof_offset = blockhashes_offset + len(self.block_hash_tree_s)
12023+
12024+        data += struct.pack(MDMFOFFSETS,
12025+                            encrypted_private_key_offset,
12026+                            sharehashes_offset,
12027+                            signature_offset,
12028+                            verification_key_offset,
12029+                            verification_key_end,
12030+                            share_data_offset,
12031+                            blockhashes_offset,
12032+                            eof_offset)
12033+
12034+        self.offsets = {}
12035+        self.offsets['enc_privkey'] = encrypted_private_key_offset
12036+        self.offsets['block_hash_tree'] = blockhashes_offset
12037+        self.offsets['share_hash_chain'] = sharehashes_offset
12038+        self.offsets['signature'] = signature_offset
12039+        self.offsets['verification_key'] = verification_key_offset
12040+        self.offsets['share_data'] = share_data_offset
12041+        self.offsets['verification_key_end'] = verification_key_end
12042+        self.offsets['EOF'] = eof_offset
12043+
12044+        # the private key,
12045+        data += self.encprivkey
12046+        # the sharehashes
12047+        data += self.share_hash_chain_s
12048+        # the signature,
12049+        data += self.signature
12050+        # and the verification key
12051+        data += self.verification_key
12052+        # Then we'll add in gibberish until we get to the right point.
12053+        nulls = "".join([" " for i in xrange(len(data), share_data_offset)])
12054+        data += nulls
12055+
12056+        # Then the share data
12057+        data += sharedata
12058+        # the blockhashes
12059+        data += self.block_hash_tree_s
12060+        return data
12061+
12062+
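The offset arithmetic in `build_test_mdmf_share` above can be sanity-checked with the `struct` module alone. This is a minimal Python 3 sketch (the patch itself is Python 2); the two format strings are restated here as assumptions so the snippet is self-contained: the fixed MDMF header fields pack as `>BQ32sBBQQ`, and the offset table is the eight big-endian 64-bit integers packed via `MDMFOFFSETS` above.

```python
import struct

# Assumed format strings, restated so the sketch is self-contained:
# the fixed header fields (version, seqnum, root hash, k, N, segsize,
# datalen) and the eight-entry offset table packed above.
MDMF_FIXED_HEADER = ">BQ32sBBQQ"
MDMF_OFFSETS = ">QQQQQQQQ"

header_size = struct.calcsize(MDMF_FIXED_HEADER)   # 59 bytes
offsets_size = struct.calcsize(MDMF_OFFSETS)       # 64 bytes

# build_test_mdmf_share computes the private-key offset as the size of
# the header written so far plus the size of the offset table:
encrypted_private_key_offset = header_size + offsets_size
encprivkey_len = 7  # length of self.encprivkey in this test fixture
sharehashes_offset = encrypted_private_key_offset + encprivkey_len
```

With these sizes, the encrypted private key lands at byte 123, which is also where the header-layout checks later in this file expect the first offset field to point.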
12063+    def write_test_share_to_server(self,
12064+                                   storage_index,
12065+                                   tail_segment=False,
12066+                                   empty=False):
12067+        """
12068+        I write some test data to self.ss for the read tests to read.
12069+
12070+        If tail_segment=True, then I will write a share that has a
12071+        smaller tail segment than other segments.
12072+        """
12073+        write = self.ss.remote_slot_testv_and_readv_and_writev
12074+        data = self.build_test_mdmf_share(tail_segment, empty)
12075+        # Finally, we write the whole thing to the storage server in one
12076+        # pass.
12077+        testvs = [(0, 1, "eq", "")]
12078+        tws = {}
12079+        tws[0] = (testvs, [(0, data)], None)
12080+        readv = [(0, 1)]
12081+        results = write(storage_index, self.secrets, tws, readv)
12082+        self.failUnless(results[0])
12083+
12084+
12085+    def build_test_sdmf_share(self, empty=False):
12086+        if empty:
12087+            sharedata = ""
12088+        else:
12089+            sharedata = self.segment * 6
12090+        self.sharedata = sharedata
12091+        blocksize = len(sharedata) / 3
12092+        block = sharedata[:blocksize]
12093+        self.blockdata = block
12094+        prefix = struct.pack(">BQ32s16s BBQQ",
12095+                             0, # version,
12096+                             0,
12097+                             self.root_hash,
12098+                             self.salt,
12099+                             3,
12100+                             10,
12101+                             len(sharedata),
12102+                             len(sharedata),
12103+                            )
12104+        post_offset = struct.calcsize(">BQ32s16sBBQQLLLLQQ")
12105+        signature_offset = post_offset + len(self.verification_key)
12106+        sharehashes_offset = signature_offset + len(self.signature)
12107+        blockhashes_offset = sharehashes_offset + len(self.share_hash_chain_s)
12108+        sharedata_offset = blockhashes_offset + len(self.block_hash_tree_s)
12109+        encprivkey_offset = sharedata_offset + len(block)
12110+        eof_offset = encprivkey_offset + len(self.encprivkey)
12111+        offsets = struct.pack(">LLLLQQ",
12112+                              signature_offset,
12113+                              sharehashes_offset,
12114+                              blockhashes_offset,
12115+                              sharedata_offset,
12116+                              encprivkey_offset,
12117+                              eof_offset)
12118+        final_share = "".join([prefix,
12119+                           offsets,
12120+                           self.verification_key,
12121+                           self.signature,
12122+                           self.share_hash_chain_s,
12123+                           self.block_hash_tree_s,
12124+                           block,
12125+                           self.encprivkey])
12126+        self.offsets = {}
12127+        self.offsets['signature'] = signature_offset
12128+        self.offsets['share_hash_chain'] = sharehashes_offset
12129+        self.offsets['block_hash_tree'] = blockhashes_offset
12130+        self.offsets['share_data'] = sharedata_offset
12131+        self.offsets['enc_privkey'] = encprivkey_offset
12132+        self.offsets['EOF'] = eof_offset
12133+        return final_share
12134+
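For comparison, the SDMF (version 0) layout used by `build_test_sdmf_share` can be sized the same way. The two format strings below are assumptions restated from the packing calls above, so the sketch stands alone:

```python
import struct

# Signed SDMF prefix: version, seqnum, root hash, 16-byte salt, k, N,
# segment size, data length -- followed by an offset table of four
# 32-bit and two 64-bit offsets, as in build_test_sdmf_share.
SDMF_PREFIX = ">BQ32s16sBBQQ"
SDMF_OFFSETS = ">LLLLQQ"

post_offset = struct.calcsize(SDMF_PREFIX) + struct.calcsize(SDMF_OFFSETS)
# post_offset is where the verification key begins.
```

This reproduces the `post_offset = struct.calcsize(">BQ32s16sBBQQLLLLQQ")` computation above: 75 prefix bytes plus 32 offset-table bytes, so the verification key starts at byte 107.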
12135+
12136+    def write_sdmf_share_to_server(self,
12137+                                   storage_index,
12138+                                   empty=False):
12139+        # Some tests need SDMF shares to verify that we can still
12140+        # read them. This method writes one, which resembles but is not an MDMF share.
12141+        assert self.rref
12142+        write = self.ss.remote_slot_testv_and_readv_and_writev
12143+        share = self.build_test_sdmf_share(empty)
12144+        testvs = [(0, 1, "eq", "")]
12145+        tws = {}
12146+        tws[0] = (testvs, [(0, share)], None)
12147+        readv = []
12148+        results = write(storage_index, self.secrets, tws, readv)
12149+        self.failUnless(results[0])
12150+
12151+
12152+    def test_read(self):
12153+        self.write_test_share_to_server("si1")
12154+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12155+        # Check that every method equals what we expect it to.
12156+        d = defer.succeed(None)
12157+        def _check_block_and_salt((block, salt)):
12158+            self.failUnlessEqual(block, self.block)
12159+            self.failUnlessEqual(salt, self.salt)
12160+
12161+        for i in xrange(6):
12162+            d.addCallback(lambda ignored, i=i:
12163+                mr.get_block_and_salt(i))
12164+            d.addCallback(_check_block_and_salt)
12165+
12166+        d.addCallback(lambda ignored:
12167+            mr.get_encprivkey())
12168+        d.addCallback(lambda encprivkey:
12169+            self.failUnlessEqual(self.encprivkey, encprivkey))
12170+
12171+        d.addCallback(lambda ignored:
12172+            mr.get_blockhashes())
12173+        d.addCallback(lambda blockhashes:
12174+            self.failUnlessEqual(self.block_hash_tree, blockhashes))
12175+
12176+        d.addCallback(lambda ignored:
12177+            mr.get_sharehashes())
12178+        d.addCallback(lambda sharehashes:
12179+            self.failUnlessEqual(self.share_hash_chain, sharehashes))
12180+
12181+        d.addCallback(lambda ignored:
12182+            mr.get_signature())
12183+        d.addCallback(lambda signature:
12184+            self.failUnlessEqual(signature, self.signature))
12185+
12186+        d.addCallback(lambda ignored:
12187+            mr.get_verification_key())
12188+        d.addCallback(lambda verification_key:
12189+            self.failUnlessEqual(verification_key, self.verification_key))
12190+
12191+        d.addCallback(lambda ignored:
12192+            mr.get_seqnum())
12193+        d.addCallback(lambda seqnum:
12194+            self.failUnlessEqual(seqnum, 0))
12195+
12196+        d.addCallback(lambda ignored:
12197+            mr.get_root_hash())
12198+        d.addCallback(lambda root_hash:
12199+            self.failUnlessEqual(self.root_hash, root_hash))
12200+
12201+        d.addCallback(lambda ignored:
12202+            mr.get_seqnum())
12203+        d.addCallback(lambda seqnum:
12204+            self.failUnlessEqual(0, seqnum))
12205+
12206+        d.addCallback(lambda ignored:
12207+            mr.get_encoding_parameters())
12208+        def _check_encoding_parameters((k, n, segsize, datalen)):
12209+            self.failUnlessEqual(k, 3)
12210+            self.failUnlessEqual(n, 10)
12211+            self.failUnlessEqual(segsize, 6)
12212+            self.failUnlessEqual(datalen, 36)
12213+        d.addCallback(_check_encoding_parameters)
12214+
12215+        d.addCallback(lambda ignored:
12216+            mr.get_checkstring())
12217+        d.addCallback(lambda checkstring:
12218+            self.failUnlessEqual(checkstring, self.checkstring))
12219+        return d
12220+
12221+
12222+    def test_read_with_different_tail_segment_size(self):
12223+        self.write_test_share_to_server("si1", tail_segment=True)
12224+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12225+        d = mr.get_block_and_salt(5)
12226+        def _check_tail_segment(results):
12227+            block, salt = results
12228+            self.failUnlessEqual(len(block), 1)
12229+            self.failUnlessEqual(block, "a")
12230+        d.addCallback(_check_tail_segment)
12231+        return d
12232+
12233+
12234+    def test_get_block_with_invalid_segnum(self):
12235+        self.write_test_share_to_server("si1")
12236+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12237+        d = defer.succeed(None)
12238+        d.addCallback(lambda ignored:
12239+            self.shouldFail(LayoutInvalid, "test invalid segnum",
12240+                            None,
12241+                            mr.get_block_and_salt, 7))
12242+        return d
12243+
12244+
12245+    def test_get_encoding_parameters_first(self):
12246+        self.write_test_share_to_server("si1")
12247+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12248+        d = mr.get_encoding_parameters()
12249+        def _check_encoding_parameters((k, n, segment_size, datalen)):
12250+            self.failUnlessEqual(k, 3)
12251+            self.failUnlessEqual(n, 10)
12252+            self.failUnlessEqual(segment_size, 6)
12253+            self.failUnlessEqual(datalen, 36)
12254+        d.addCallback(_check_encoding_parameters)
12255+        return d
12256+
12257+
12258+    def test_get_seqnum_first(self):
12259+        self.write_test_share_to_server("si1")
12260+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12261+        d = mr.get_seqnum()
12262+        d.addCallback(lambda seqnum:
12263+            self.failUnlessEqual(seqnum, 0))
12264+        return d
12265+
12266+
12267+    def test_get_root_hash_first(self):
12268+        self.write_test_share_to_server("si1")
12269+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12270+        d = mr.get_root_hash()
12271+        d.addCallback(lambda root_hash:
12272+            self.failUnlessEqual(root_hash, self.root_hash))
12273+        return d
12274+
12275+
12276+    def test_get_checkstring_first(self):
12277+        self.write_test_share_to_server("si1")
12278+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12279+        d = mr.get_checkstring()
12280+        d.addCallback(lambda checkstring:
12281+            self.failUnlessEqual(checkstring, self.checkstring))
12282+        return d
12283+
12284+
12285+    def test_write_read_vectors(self):
12286+        # When we write, the storage server returns a read vector
12287+        # along with the result of the write. If a write fails because
12288+        # the test vectors did not match, this read vector can help us
12289+        # diagnose the problem. This test ensures that the read vector
12290+        # works appropriately.
12291+        mw = self._make_new_mw("si1", 0)
12292+
12293+        for i in xrange(6):
12294+            mw.put_block(self.block, i, self.salt)
12295+        mw.put_encprivkey(self.encprivkey)
12296+        mw.put_blockhashes(self.block_hash_tree)
12297+        mw.put_sharehashes(self.share_hash_chain)
12298+        mw.put_root_hash(self.root_hash)
12299+        mw.put_signature(self.signature)
12300+        mw.put_verification_key(self.verification_key)
12301+        d = mw.finish_publishing()
12302+        def _then(results):
12303+            self.failUnlessEqual(len(results), 2)
12304+            result, readv = results
12305+            self.failUnless(result)
12306+            self.failIf(readv)
12307+            self.old_checkstring = mw.get_checkstring()
12308+            mw.set_checkstring("")
12309+        d.addCallback(_then)
12310+        d.addCallback(lambda ignored:
12311+            mw.finish_publishing())
12312+        def _then_again(results):
12313+            self.failUnlessEqual(len(results), 2)
12314+            result, readvs = results
12315+            self.failIf(result)
12316+            self.failUnlessIn(0, readvs)
12317+            readv = readvs[0][0]
12318+            self.failUnlessEqual(readv, self.old_checkstring)
12319+        d.addCallback(_then_again)
12320+        # The checkstring remains the same for the rest of the process.
12321+        return d
12322+
12323+
12324+    def test_private_key_after_share_hash_chain(self):
12325+        mw = self._make_new_mw("si1", 0)
12326+        d = defer.succeed(None)
12327+        for i in xrange(6):
12328+            d.addCallback(lambda ignored, i=i:
12329+                mw.put_block(self.block, i, self.salt))
12330+        d.addCallback(lambda ignored:
12331+            mw.put_encprivkey(self.encprivkey))
12332+        d.addCallback(lambda ignored:
12333+            mw.put_sharehashes(self.share_hash_chain))
12334+
12335+        # Now try to put the private key again.
12336+        d.addCallback(lambda ignored:
12337+            self.shouldFail(LayoutInvalid, "test repeat private key",
12338+                            None,
12339+                            mw.put_encprivkey, self.encprivkey))
12340+        return d
12341+
12342+
12343+    def test_signature_after_verification_key(self):
12344+        mw = self._make_new_mw("si1", 0)
12345+        d = defer.succeed(None)
12346+        # Put everything up to and including the verification key.
12347+        for i in xrange(6):
12348+            d.addCallback(lambda ignored, i=i:
12349+                mw.put_block(self.block, i, self.salt))
12350+        d.addCallback(lambda ignored:
12351+            mw.put_encprivkey(self.encprivkey))
12352+        d.addCallback(lambda ignored:
12353+            mw.put_blockhashes(self.block_hash_tree))
12354+        d.addCallback(lambda ignored:
12355+            mw.put_sharehashes(self.share_hash_chain))
12356+        d.addCallback(lambda ignored:
12357+            mw.put_root_hash(self.root_hash))
12358+        d.addCallback(lambda ignored:
12359+            mw.put_signature(self.signature))
12360+        d.addCallback(lambda ignored:
12361+            mw.put_verification_key(self.verification_key))
12362+        # Now try to put the signature again. This should fail.
12363+        d.addCallback(lambda ignored:
12364+            self.shouldFail(LayoutInvalid, "signature after verification",
12365+                            None,
12366+                            mw.put_signature, self.signature))
12367+        return d
12368+
12369+
12370+    def test_uncoordinated_write(self):
12371+        # Make two mutable writers, both pointing to the same storage
12372+        # server, both at the same storage index, and try writing to the
12373+        # same share.
12374+        mw1 = self._make_new_mw("si1", 0)
12375+        mw2 = self._make_new_mw("si1", 0)
12376+
12377+        def _check_success(results):
12378+            result, readvs = results
12379+            self.failUnless(result)
12380+
12381+        def _check_failure(results):
12382+            result, readvs = results
12383+            self.failIf(result)
12384+
12385+        def _write_share(mw):
12386+            for i in xrange(6):
12387+                mw.put_block(self.block, i, self.salt)
12388+            mw.put_encprivkey(self.encprivkey)
12389+            mw.put_blockhashes(self.block_hash_tree)
12390+            mw.put_sharehashes(self.share_hash_chain)
12391+            mw.put_root_hash(self.root_hash)
12392+            mw.put_signature(self.signature)
12393+            mw.put_verification_key(self.verification_key)
12394+            return mw.finish_publishing()
12395+        d = _write_share(mw1)
12396+        d.addCallback(_check_success)
12397+        d.addCallback(lambda ignored:
12398+            _write_share(mw2))
12399+        d.addCallback(_check_failure)
12400+        return d
12401+
12402+
12403+    def test_invalid_salt_size(self):
12404+        # Salts need to be 16 bytes in size. Writes that attempt to
12405+        # write more or less than this should be rejected.
12406+        mw = self._make_new_mw("si1", 0)
12407+        invalid_salt = "a" * 17 # 17 bytes
12408+        another_invalid_salt = "b" * 15 # 15 bytes
12409+        d = defer.succeed(None)
12410+        d.addCallback(lambda ignored:
12411+            self.shouldFail(LayoutInvalid, "salt too big",
12412+                            None,
12413+                            mw.put_block, self.block, 0, invalid_salt))
12414+        d.addCallback(lambda ignored:
12415+            self.shouldFail(LayoutInvalid, "salt too small",
12416+                            None,
12417+                            mw.put_block, self.block, 0,
12418+                            another_invalid_salt))
12419+        return d
12420+
12421+
12422+    def test_write_test_vectors(self):
12423+        # If we give the write proxy a bogus test vector at
12424+        # any point during the process, it should fail to write when we
12425+        # tell it to write.
12426+        def _check_failure(results):
12427+            self.failUnlessEqual(len(results), 2)
12428+            res, readvs = results
12429+            self.failIf(res)
12430+
12431+        def _check_success(results):
12432+            self.failUnlessEqual(len(results), 2)
12433+            res, readvs = results
12434+            self.failUnless(res)
12435+
12436+        mw = self._make_new_mw("si1", 0)
12437+        mw.set_checkstring("this is a lie")
12438+        for i in xrange(6):
12439+            mw.put_block(self.block, i, self.salt)
12440+        mw.put_encprivkey(self.encprivkey)
12441+        mw.put_blockhashes(self.block_hash_tree)
12442+        mw.put_sharehashes(self.share_hash_chain)
12443+        mw.put_root_hash(self.root_hash)
12444+        mw.put_signature(self.signature)
12445+        mw.put_verification_key(self.verification_key)
12446+        d = mw.finish_publishing()
12447+        d.addCallback(_check_failure)
12448+        d.addCallback(lambda ignored:
12449+            mw.set_checkstring(""))
12450+        d.addCallback(lambda ignored:
12451+            mw.finish_publishing())
12452+        d.addCallback(_check_success)
12453+        return d
12454+
12455+
12456+    def serialize_blockhashes(self, blockhashes):
12457+        return "".join(blockhashes)
12458+
12459+
12460+    def serialize_sharehashes(self, sharehashes):
12461+        ret = "".join([struct.pack(">H32s", i, sharehashes[i])
12462+                        for i in sorted(sharehashes.keys())])
12463+        return ret
12464+
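The `serialize_sharehashes` helper above produces 34 bytes per chain entry: a 2-byte node number plus a 32-byte hash. A Python 3 sketch with a made-up six-entry chain shows why `test_write` later reads exactly `(32 + 2) * 6` bytes at the share hash chain offset:

```python
import struct

def serialize_sharehashes(sharehashes):
    # Mirror of the helper above: 2-byte node number + 32-byte hash,
    # sorted by node number (bytes instead of Python 2 str).
    return b"".join(struct.pack(">H32s", i, sharehashes[i])
                    for i in sorted(sharehashes))

# A hypothetical six-entry chain, like the test fixture's.
chain = {i: bytes([i]) * 32 for i in range(6)}
serialized = serialize_sharehashes(chain)
# Each entry is 2 + 32 = 34 bytes, so six entries serialize to 204 bytes.
```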
12465+
12466+    def test_write(self):
12467+        # This translates to a file with 6 6-byte segments, and with 2-byte
12468+        # blocks.
12469+        mw = self._make_new_mw("si1", 0)
12470+        # Test writing some blocks.
12471+        read = self.ss.remote_slot_readv
12472+        expected_private_key_offset = struct.calcsize(MDMFHEADER)
12473+        expected_sharedata_offset = struct.calcsize(MDMFHEADER) + \
12474+                                    PRIVATE_KEY_SIZE + \
12475+                                    SIGNATURE_SIZE + \
12476+                                    VERIFICATION_KEY_SIZE + \
12477+                                    SHARE_HASH_CHAIN_SIZE
12478+        written_block_size = 2 + len(self.salt)
12479+        written_block = self.block + self.salt
12480+        for i in xrange(6):
12481+            mw.put_block(self.block, i, self.salt)
12482+
12483+        mw.put_encprivkey(self.encprivkey)
12484+        mw.put_blockhashes(self.block_hash_tree)
12485+        mw.put_sharehashes(self.share_hash_chain)
12486+        mw.put_root_hash(self.root_hash)
12487+        mw.put_signature(self.signature)
12488+        mw.put_verification_key(self.verification_key)
12489+        d = mw.finish_publishing()
12490+        def _check_publish(results):
12491+            self.failUnlessEqual(len(results), 2)
12492+            result, ign = results
12493+            self.failUnless(result, "publish failed")
12494+            for i in xrange(6):
12495+                self.failUnlessEqual(read("si1", [0], [(expected_sharedata_offset + (i * written_block_size), written_block_size)]),
12496+                                {0: [written_block]})
12497+
12498+            self.failUnlessEqual(len(self.encprivkey), 7)
12499+            self.failUnlessEqual(read("si1", [0], [(expected_private_key_offset, 7)]),
12500+                                 {0: [self.encprivkey]})
12501+
12502+            expected_block_hash_offset = expected_sharedata_offset + \
12503+                        (6 * written_block_size)
12504+            self.failUnlessEqual(len(self.block_hash_tree_s), 32 * 6)
12505+            self.failUnlessEqual(read("si1", [0], [(expected_block_hash_offset, 32 * 6)]),
12506+                                 {0: [self.block_hash_tree_s]})
12507+
12508+            expected_share_hash_offset = expected_private_key_offset + len(self.encprivkey)
12509+            self.failUnlessEqual(read("si1", [0],[(expected_share_hash_offset, (32 + 2) * 6)]),
12510+                                 {0: [self.share_hash_chain_s]})
12511+
12512+            self.failUnlessEqual(read("si1", [0], [(9, 32)]),
12513+                                 {0: [self.root_hash]})
12514+            expected_signature_offset = expected_share_hash_offset + \
12515+                len(self.share_hash_chain_s)
12516+            self.failUnlessEqual(len(self.signature), 9)
12517+            self.failUnlessEqual(read("si1", [0], [(expected_signature_offset, 9)]),
12518+                                 {0: [self.signature]})
12519+
12520+            expected_verification_key_offset = expected_signature_offset + len(self.signature)
12521+            self.failUnlessEqual(len(self.verification_key), 6)
12522+            self.failUnlessEqual(read("si1", [0], [(expected_verification_key_offset, 6)]),
12523+                                 {0: [self.verification_key]})
12524+
12525+            signable = mw.get_signable()
12526+            verno, seq, roothash, k, n, segsize, datalen = \
12527+                                            struct.unpack(">BQ32sBBQQ",
12528+                                                          signable)
12529+            self.failUnlessEqual(verno, 1)
12530+            self.failUnlessEqual(seq, 0)
12531+            self.failUnlessEqual(roothash, self.root_hash)
12532+            self.failUnlessEqual(k, 3)
12533+            self.failUnlessEqual(n, 10)
12534+            self.failUnlessEqual(segsize, 6)
12535+            self.failUnlessEqual(datalen, 36)
12536+            expected_eof_offset = expected_block_hash_offset + \
12537+                len(self.block_hash_tree_s)
12538+
12539+            # Check the version number to make sure that it is correct.
12540+            expected_version_number = struct.pack(">B", 1)
12541+            self.failUnlessEqual(read("si1", [0], [(0, 1)]),
12542+                                 {0: [expected_version_number]})
12543+            # Check the sequence number to make sure that it is correct
12544+            expected_sequence_number = struct.pack(">Q", 0)
12545+            self.failUnlessEqual(read("si1", [0], [(1, 8)]),
12546+                                 {0: [expected_sequence_number]})
12547+            # Check that the encoding parameters (k, N, segment size,
12548+            # data length) are what they should be: 3, 10, 6, and 36.
12549+            expected_k = struct.pack(">B", 3)
12550+            self.failUnlessEqual(read("si1", [0], [(41, 1)]),
12551+                                 {0: [expected_k]})
12552+            expected_n = struct.pack(">B", 10)
12553+            self.failUnlessEqual(read("si1", [0], [(42, 1)]),
12554+                                 {0: [expected_n]})
12555+            expected_segment_size = struct.pack(">Q", 6)
12556+            self.failUnlessEqual(read("si1", [0], [(43, 8)]),
12557+                                 {0: [expected_segment_size]})
12558+            expected_data_length = struct.pack(">Q", 36)
12559+            self.failUnlessEqual(read("si1", [0], [(51, 8)]),
12560+                                 {0: [expected_data_length]})
12561+            expected_offset = struct.pack(">Q", expected_private_key_offset)
12562+            self.failUnlessEqual(read("si1", [0], [(59, 8)]),
12563+                                 {0: [expected_offset]})
12564+            expected_offset = struct.pack(">Q", expected_share_hash_offset)
12565+            self.failUnlessEqual(read("si1", [0], [(67, 8)]),
12566+                                 {0: [expected_offset]})
12567+            expected_offset = struct.pack(">Q", expected_signature_offset)
12568+            self.failUnlessEqual(read("si1", [0], [(75, 8)]),
12569+                                 {0: [expected_offset]})
12570+            expected_offset = struct.pack(">Q", expected_verification_key_offset)
12571+            self.failUnlessEqual(read("si1", [0], [(83, 8)]),
12572+                                 {0: [expected_offset]})
12573+            expected_offset = struct.pack(">Q", expected_verification_key_offset + len(self.verification_key))
12574+            self.failUnlessEqual(read("si1", [0], [(91, 8)]),
12575+                                 {0: [expected_offset]})
12576+            expected_offset = struct.pack(">Q", expected_sharedata_offset)
12577+            self.failUnlessEqual(read("si1", [0], [(99, 8)]),
12578+                                 {0: [expected_offset]})
12579+            expected_offset = struct.pack(">Q", expected_block_hash_offset)
12580+            self.failUnlessEqual(read("si1", [0], [(107, 8)]),
12581+                                 {0: [expected_offset]})
12582+            expected_offset = struct.pack(">Q", expected_eof_offset)
12583+            self.failUnlessEqual(read("si1", [0], [(115, 8)]),
12584+                                 {0: [expected_offset]})
12585+        d.addCallback(_check_publish)
12586+        return d
12587+
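The hard-coded byte offsets that `_check_publish` reads back (41, 42, 43, 51, 59, ...) follow from the sizes of the preceding header fields. A small sketch, assuming the `>BQ32sBBQQ` field layout used throughout these tests, reproduces them:

```python
import struct

# Walk the fixed MDMF header fields and record where each one starts.
fields = [("version", ">B"), ("seqnum", ">Q"), ("root_hash", ">32s"),
          ("k", ">B"), ("n", ">B"), ("segsize", ">Q"), ("datalen", ">Q")]
offsets = {}
pos = 0
for name, fmt in fields:
    offsets[name] = pos
    pos += struct.calcsize(fmt)
# After datalen, pos == 59: the first entry of the offset table,
# which is where _check_publish reads the private-key offset.
```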
12588+    def _make_new_mw(self, si, share, datalength=36):
12589+        # This is a file of size 36 bytes. Since it has a segment
12590+        # size of 6, we know that it has 6 byte segments, which will
12591+        # be split into blocks of 2 bytes because our FEC k
12592+        # parameter is 3.
12593+        mw = MDMFSlotWriteProxy(share, self.rref, si, self.secrets, 0, 3, 10,
12594+                                6, datalength)
12595+        return mw
12596+
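The comment in `_make_new_mw` (and the `datalength=33` case in `test_write_rejected_with_invalid_blocksize`) rest on simple segment arithmetic. This hypothetical helper sketches it: with k=3 and 6-byte segments, a 36-byte file has 2-byte blocks throughout, while a 33-byte file ends in a 3-byte tail segment whose blocks are 1 byte.

```python
import math

def block_sizes(datalength, segsize=6, k=3):
    # Number of segments, then the size of the final (possibly short)
    # segment; each segment is FEC-split into k blocks.
    num_segments = max(1, math.ceil(datalength / segsize))
    tail_segsize = datalength - (num_segments - 1) * segsize
    return segsize // k, math.ceil(tail_segsize / k)
```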
12597+
12598+    def test_write_rejected_with_too_many_blocks(self):
12599+        mw = self._make_new_mw("si0", 0)
12600+
12601+        # Try writing too many blocks. We should not be able to write
12602+        # more than 6 blocks into each share.
12604+        d = defer.succeed(None)
12605+        for i in xrange(6):
12606+            d.addCallback(lambda ignored, i=i:
12607+                mw.put_block(self.block, i, self.salt))
12608+        d.addCallback(lambda ignored:
12609+            self.shouldFail(LayoutInvalid, "too many blocks",
12610+                            None,
12611+                            mw.put_block, self.block, 7, self.salt))
12612+        return d
12613+
12614+
12615+    def test_write_rejected_with_invalid_salt(self):
12616+        # Try writing an invalid salt. Salts are 16 bytes -- any more or
12617+        # less should cause an error.
12618+        mw = self._make_new_mw("si1", 0)
12619+        bad_salt = "a" * 17 # 17 bytes
12620+        d = defer.succeed(None)
12621+        d.addCallback(lambda ignored:
12622+            self.shouldFail(LayoutInvalid, "test_invalid_salt",
12623+                            None, mw.put_block, self.block, 7, bad_salt))
12624+        return d
12625+
12626+
12627+    def test_write_rejected_with_invalid_root_hash(self):
12628+        # Try writing an invalid root hash. This should be SHA256d, and
12629+        # 32 bytes long as a result.
12630+        mw = self._make_new_mw("si2", 0)
12631+        # 17 bytes != 32 bytes
12632+        invalid_root_hash = "a" * 17
12633+        d = defer.succeed(None)
12634+        # Before this test can work, we need to put some blocks + salts,
12635+        # a block hash tree, and a share hash tree. Otherwise, we'll see
12636+        # failures that match what we are looking for, but are caused by
12637+        # the constraints imposed on operation ordering.
12638+        for i in xrange(6):
12639+            d.addCallback(lambda ignored, i=i:
12640+                mw.put_block(self.block, i, self.salt))
12641+        d.addCallback(lambda ignored:
12642+            mw.put_encprivkey(self.encprivkey))
12643+        d.addCallback(lambda ignored:
12644+            mw.put_blockhashes(self.block_hash_tree))
12645+        d.addCallback(lambda ignored:
12646+            mw.put_sharehashes(self.share_hash_chain))
12647+        d.addCallback(lambda ignored:
12648+            self.shouldFail(LayoutInvalid, "invalid root hash",
12649+                            None, mw.put_root_hash, invalid_root_hash))
12650+        return d
12651+
12652+
12653+    def test_write_rejected_with_invalid_blocksize(self):
12654+        # The blocksize implied by the writer that we get from
12655+        # _make_new_mw is 2 bytes -- any more or any less than this
12656+        # should cause a failure, unless it is the tail segment, in
12657+        # which case a shorter block is allowed.
12658+        invalid_block = "a"
12659+        mw = self._make_new_mw("si3", 0, 33) # implies a tail segment with
12660+                                             # one byte blocks
12661+        # 1 byte != 2 bytes
12662+        d = defer.succeed(None)
12663+        d.addCallback(lambda ignored, invalid_block=invalid_block:
12664+            self.shouldFail(LayoutInvalid, "test blocksize too small",
12665+                            None, mw.put_block, invalid_block, 0,
12666+                            self.salt))
12667+        invalid_block = invalid_block * 3
12668+        # 3 bytes != 2 bytes
12669+        d.addCallback(lambda ignored:
12670+            self.shouldFail(LayoutInvalid, "test blocksize too large",
12671+                            None,
12672+                            mw.put_block, invalid_block, 0, self.salt))
12673+        for i in xrange(5):
12674+            d.addCallback(lambda ignored, i=i:
12675+                mw.put_block(self.block, i, self.salt))
12676+        # Try to put an invalid tail segment
12677+        d.addCallback(lambda ignored:
12678+            self.shouldFail(LayoutInvalid, "test invalid tail segment",
12679+                            None,
12680+                            mw.put_block, self.block, 5, self.salt))
12681+        valid_block = "a"
12682+        d.addCallback(lambda ignored:
12683+            mw.put_block(valid_block, 5, self.salt))
12684+        return d
12685+
12686+
12687+    def test_write_enforces_order_constraints(self):
12688+        # The MDMFSlotWriteProxy must be interacted with in a
12689+        # specific order, namely:
12691+        # 0: __init__
12692+        # 1: write blocks and salts
12693+        # 2: Write the encrypted private key
12694+        # 3: Write the block hashes
12695+        # 4: Write the share hashes
12696+        # 5: Write the root hash and salt hash
12697+        # 6: Write the signature and verification key
12698+        # 7: Write the file.
12699+        #
12700+        # Some of these can be performed out-of-order, and some can't.
12701+        # The dependencies that I want to test here are:
12702+        #  - Private key before block hashes
12703+        #  - share hashes and block hashes before root hash
12704+        #  - root hash before signature
12705+        #  - signature before verification key
12706+        mw0 = self._make_new_mw("si0", 0)
12707+        # Write some shares
12708+        d = defer.succeed(None)
12709+        for i in xrange(6):
12710+            d.addCallback(lambda ignored, i=i:
12711+                mw0.put_block(self.block, i, self.salt))
12712+
12713+        # Try to write the share hash chain without writing the
12714+        # encrypted private key
12715+        d.addCallback(lambda ignored:
12716+            self.shouldFail(LayoutInvalid, "share hash chain before "
12717+                                           "private key",
12718+                            None,
12719+                            mw0.put_sharehashes, self.share_hash_chain))
12720+        # Write the private key.
12721+        d.addCallback(lambda ignored:
12722+            mw0.put_encprivkey(self.encprivkey))
12723+
12724+        # Now write the block hashes and try again
12725+        d.addCallback(lambda ignored:
12726+            mw0.put_blockhashes(self.block_hash_tree))
12727+
12728+        # We haven't yet put the root hash on the share, so we shouldn't
12729+        # be able to sign it.
12730+        d.addCallback(lambda ignored:
12731+            self.shouldFail(LayoutInvalid, "signature before root hash",
12732+                            None, mw0.put_signature, self.signature))
12733+
12734+        d.addCallback(lambda ignored:
12735+            self.failUnlessRaises(LayoutInvalid, mw0.get_signable))
12736+
12737+        # ...and, since that fails, we also shouldn't be able to put the
12738+        # verification key.
12739+        d.addCallback(lambda ignored:
12740+            self.shouldFail(LayoutInvalid, "key before signature",
12741+                            None, mw0.put_verification_key,
12742+                            self.verification_key))
12743+
12744+        # Now write the share hashes.
12745+        d.addCallback(lambda ignored:
12746+            mw0.put_sharehashes(self.share_hash_chain))
12747+        # We should be able to write the root hash now too
12748+        d.addCallback(lambda ignored:
12749+            mw0.put_root_hash(self.root_hash))
12750+
12751+        # We should still be unable to put the verification key
12752+        d.addCallback(lambda ignored:
12753+            self.shouldFail(LayoutInvalid, "key before signature",
12754+                            None, mw0.put_verification_key,
12755+                            self.verification_key))
12756+
12757+        d.addCallback(lambda ignored:
12758+            mw0.put_signature(self.signature))
12759+
12760+        # We shouldn't be able to write the offsets to the remote server
12761+        # until the offset table is finished; IOW, until we have written
12762+        # the verification key.
12763+        d.addCallback(lambda ignored:
12764+            self.shouldFail(LayoutInvalid, "offsets before verification key",
12765+                            None,
12766+                            mw0.finish_publishing))
12767+
12768+        d.addCallback(lambda ignored:
12769+            mw0.put_verification_key(self.verification_key))
12770+        return d
12771+
12772+
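Annotation, not part of the patch: the ordering constraints exercised by `test_write_enforces_order_constraints` reduce to a small dependency graph, where each field may be written only after its prerequisites. The sketch below is illustrative only — `OrderedWriter` and the prerequisite table are invented names; the real `MDMFSlotWriteProxy` enforces the same ordering via its offset bookkeeping.

```python
class LayoutInvalid(Exception):
    """Stand-in for the LayoutInvalid raised by the real write proxy."""

# Each field may be written only once its prerequisites are present.
PREREQUISITES = {
    "blocks": set(),
    "encprivkey": set(),
    "blockhashes": {"encprivkey"},
    "sharehashes": {"encprivkey", "blockhashes"},
    "root_hash": {"blockhashes", "sharehashes"},
    "signature": {"root_hash"},
    "verification_key": {"signature"},
    "finish": {"verification_key"},
}

class OrderedWriter(object):
    def __init__(self):
        self.written = set()

    def put(self, field):
        missing = PREREQUISITES[field] - self.written
        if missing:
            raise LayoutInvalid("%s written before %s" %
                                (field, ", ".join(sorted(missing))))
        self.written.add(field)

writer = OrderedWriter()
writer.put("blocks")
try:
    writer.put("sharehashes")   # share hashes before the private key
except LayoutInvalid:
    pass                        # rejected, as in the test above
for field in ("encprivkey", "blockhashes", "sharehashes", "root_hash",
              "signature", "verification_key", "finish"):
    writer.put(field)           # the canonical order succeeds
```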
12773+    def test_end_to_end(self):
12774+        mw = self._make_new_mw("si1", 0)
12775+        # Write a share using the mutable writer, and make sure that the
12776+        # reader knows how to read everything back to us.
12777+        d = defer.succeed(None)
12778+        for i in xrange(6):
12779+            d.addCallback(lambda ignored, i=i:
12780+                mw.put_block(self.block, i, self.salt))
12781+        d.addCallback(lambda ignored:
12782+            mw.put_encprivkey(self.encprivkey))
12783+        d.addCallback(lambda ignored:
12784+            mw.put_blockhashes(self.block_hash_tree))
12785+        d.addCallback(lambda ignored:
12786+            mw.put_sharehashes(self.share_hash_chain))
12787+        d.addCallback(lambda ignored:
12788+            mw.put_root_hash(self.root_hash))
12789+        d.addCallback(lambda ignored:
12790+            mw.put_signature(self.signature))
12791+        d.addCallback(lambda ignored:
12792+            mw.put_verification_key(self.verification_key))
12793+        d.addCallback(lambda ignored:
12794+            mw.finish_publishing())
12795+
12796+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12797+        def _check_block_and_salt((block, salt)):
12798+            self.failUnlessEqual(block, self.block)
12799+            self.failUnlessEqual(salt, self.salt)
12800+
12801+        for i in xrange(6):
12802+            d.addCallback(lambda ignored, i=i:
12803+                mr.get_block_and_salt(i))
12804+            d.addCallback(_check_block_and_salt)
12805+
12806+        d.addCallback(lambda ignored:
12807+            mr.get_encprivkey())
12808+        d.addCallback(lambda encprivkey:
12809+            self.failUnlessEqual(self.encprivkey, encprivkey))
12810+
12811+        d.addCallback(lambda ignored:
12812+            mr.get_blockhashes())
12813+        d.addCallback(lambda blockhashes:
12814+            self.failUnlessEqual(self.block_hash_tree, blockhashes))
12815+
12816+        d.addCallback(lambda ignored:
12817+            mr.get_sharehashes())
12818+        d.addCallback(lambda sharehashes:
12819+            self.failUnlessEqual(self.share_hash_chain, sharehashes))
12820+
12821+        d.addCallback(lambda ignored:
12822+            mr.get_signature())
12823+        d.addCallback(lambda signature:
12824+            self.failUnlessEqual(signature, self.signature))
12825+
12826+        d.addCallback(lambda ignored:
12827+            mr.get_verification_key())
12828+        d.addCallback(lambda verification_key:
12829+            self.failUnlessEqual(verification_key, self.verification_key))
12830+
12831+        d.addCallback(lambda ignored:
12832+            mr.get_seqnum())
12833+        d.addCallback(lambda seqnum:
12834+            self.failUnlessEqual(seqnum, 0))
12835+
12836+        d.addCallback(lambda ignored:
12837+            mr.get_root_hash())
12838+        d.addCallback(lambda root_hash:
12839+            self.failUnlessEqual(self.root_hash, root_hash))
12840+
12841+        d.addCallback(lambda ignored:
12842+            mr.get_encoding_parameters())
12843+        def _check_encoding_parameters((k, n, segsize, datalen)):
12844+            self.failUnlessEqual(k, 3)
12845+            self.failUnlessEqual(n, 10)
12846+            self.failUnlessEqual(segsize, 6)
12847+            self.failUnlessEqual(datalen, 36)
12848+        d.addCallback(_check_encoding_parameters)
12849+
12850+        d.addCallback(lambda ignored:
12851+            mr.get_checkstring())
12852+        d.addCallback(lambda checkstring:
12853+            self.failUnlessEqual(checkstring, mw.get_checkstring()))
12854+        return d
12855+
12856+
12857+    def test_is_sdmf(self):
12858+        # The MDMFSlotReadProxy should also know how to read SDMF files,
12859+        # since it will encounter them on the grid. Callers use the
12860+        # is_sdmf method to test this.
12861+        self.write_sdmf_share_to_server("si1")
12862+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12863+        d = mr.is_sdmf()
12864+        d.addCallback(lambda issdmf:
12865+            self.failUnless(issdmf))
12866+        return d
12867+
12868+
12869+    def test_reads_sdmf(self):
12870+        # The slot read proxy should, naturally, know how to tell us
12871+        # about data in the SDMF format
12872+        self.write_sdmf_share_to_server("si1")
12873+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12874+        d = defer.succeed(None)
12875+        d.addCallback(lambda ignored:
12876+            mr.is_sdmf())
12877+        d.addCallback(lambda issdmf:
12878+            self.failUnless(issdmf))
12879+
12880+        # What do we need to read?
12881+        #  - The sharedata
12882+        #  - The salt
12883+        d.addCallback(lambda ignored:
12884+            mr.get_block_and_salt(0))
12885+        def _check_block_and_salt(results):
12886+            block, salt = results
12887+            # Our original file is 36 bytes long, so each share is 12
12888+            # bytes in size and consists entirely of the letter 'a'.
12889+            # self.block contains two 'a's, so 6 * self.block is what
12890+            # we expect to read back.
12891+            self.failUnlessEqual(block, self.block * 6)
12892+            self.failUnlessEqual(salt, self.salt)
12893+        d.addCallback(_check_block_and_salt)
12894+
12895+        #  - The blockhashes
12896+        d.addCallback(lambda ignored:
12897+            mr.get_blockhashes())
12898+        d.addCallback(lambda blockhashes:
12899+            self.failUnlessEqual(self.block_hash_tree,
12900+                                 blockhashes,
12901+                                 blockhashes))
12902+        #  - The sharehashes
12903+        d.addCallback(lambda ignored:
12904+            mr.get_sharehashes())
12905+        d.addCallback(lambda sharehashes:
12906+            self.failUnlessEqual(self.share_hash_chain,
12907+                                 sharehashes))
12908+        #  - The keys
12909+        d.addCallback(lambda ignored:
12910+            mr.get_encprivkey())
12911+        d.addCallback(lambda encprivkey:
12912+            self.failUnlessEqual(encprivkey, self.encprivkey, encprivkey))
12913+        d.addCallback(lambda ignored:
12914+            mr.get_verification_key())
12915+        d.addCallback(lambda verification_key:
12916+            self.failUnlessEqual(verification_key,
12917+                                 self.verification_key,
12918+                                 verification_key))
12919+        #  - The signature
12920+        d.addCallback(lambda ignored:
12921+            mr.get_signature())
12922+        d.addCallback(lambda signature:
12923+            self.failUnlessEqual(signature, self.signature, signature))
12924+
12925+        #  - The sequence number
12926+        d.addCallback(lambda ignored:
12927+            mr.get_seqnum())
12928+        d.addCallback(lambda seqnum:
12929+            self.failUnlessEqual(seqnum, 0, seqnum))
12930+
12931+        #  - The root hash
12932+        d.addCallback(lambda ignored:
12933+            mr.get_root_hash())
12934+        d.addCallback(lambda root_hash:
12935+            self.failUnlessEqual(root_hash, self.root_hash, root_hash))
12936+        return d
12937+
12938+
12939+    def test_only_reads_one_segment_sdmf(self):
12940+        # SDMF shares have only one segment, so it doesn't make sense to
12941+        # read more segments than that. The reader should know this and
12942+        # complain if we try to do that.
12943+        self.write_sdmf_share_to_server("si1")
12944+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
12945+        d = defer.succeed(None)
12946+        d.addCallback(lambda ignored:
12947+            mr.is_sdmf())
12948+        d.addCallback(lambda issdmf:
12949+            self.failUnless(issdmf))
12950+        d.addCallback(lambda ignored:
12951+            self.shouldFail(LayoutInvalid, "test bad segment",
12952+                            None,
12953+                            mr.get_block_and_salt, 1))
12954+        return d
12955+
12956+
12957+    def test_read_with_prefetched_mdmf_data(self):
12958+        # The MDMFSlotReadProxy will prefill certain fields if you pass
12959+        # it data that you have already fetched. This is useful for
12960+        # cases like the Servermap, which prefetches ~2kb of data while
12961+        # finding out which shares are on the remote peer so that it
12962+        # doesn't waste round trips.
12963+        mdmf_data = self.build_test_mdmf_share()
12964+        self.write_test_share_to_server("si1")
12965+        def _make_mr(ignored, length):
12966+            mr = MDMFSlotReadProxy(self.rref, "si1", 0, mdmf_data[:length])
12967+            return mr
12968+
12969+        d = defer.succeed(None)
12970+        # This should be enough to fill in both the encoding parameters
12971+        # and the table of offsets, which will complete the version
12972+        # information tuple.
12973+        d.addCallback(_make_mr, 123)
12974+        d.addCallback(lambda mr:
12975+            mr.get_verinfo())
12976+        def _check_verinfo(verinfo):
12977+            self.failUnless(verinfo)
12978+            self.failUnlessEqual(len(verinfo), 9)
12979+            (seqnum,
12980+             root_hash,
12981+             salt_hash,
12982+             segsize,
12983+             datalen,
12984+             k,
12985+             n,
12986+             prefix,
12987+             offsets) = verinfo
12988+            self.failUnlessEqual(seqnum, 0)
12989+            self.failUnlessEqual(root_hash, self.root_hash)
12990+            self.failUnlessEqual(segsize, 6)
12991+            self.failUnlessEqual(datalen, 36)
12992+            self.failUnlessEqual(k, 3)
12993+            self.failUnlessEqual(n, 10)
12994+            expected_prefix = struct.pack(MDMFSIGNABLEHEADER,
12995+                                          1,
12996+                                          seqnum,
12997+                                          root_hash,
12998+                                          k,
12999+                                          n,
13000+                                          segsize,
13001+                                          datalen)
13002+            self.failUnlessEqual(expected_prefix, prefix)
13003+            self.failUnlessEqual(self.rref.read_count, 0)
13004+        d.addCallback(_check_verinfo)
13005+        # This is not enough data to read a block and its salt, so
13006+        # the wrapper should fetch them from the remote server.
13007+        d.addCallback(_make_mr, 123)
13008+        d.addCallback(lambda mr:
13009+            mr.get_block_and_salt(0))
13010+        def _check_block_and_salt((block, salt)):
13011+            self.failUnlessEqual(block, self.block)
13012+            self.failUnlessEqual(salt, self.salt)
13013+            self.failUnlessEqual(self.rref.read_count, 1)
13014+        # This should be enough prefetched data to read one block.
13015+        d.addCallback(_make_mr, 123 + PRIVATE_KEY_SIZE + SIGNATURE_SIZE + VERIFICATION_KEY_SIZE + SHARE_HASH_CHAIN_SIZE + 140)
13016+        d.addCallback(lambda mr:
13017+            mr.get_block_and_salt(0))
13018+        d.addCallback(_check_block_and_salt)
13019+        return d
13020+
13021+
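Annotation, not part of the patch: the prefetch behavior checked above can be sketched independently of the real proxy — a reader seeded with the first bytes of a share answers reads from that cache and only issues a remote read on a miss. `CountingServer` and `CachingReader` are hypothetical names; the actual `MDMFSlotReadProxy` is seeded with the ~2kb prefix the servermap already fetched.

```python
class CountingServer(object):
    """Toy 'remote' share: a bytes buffer that counts round trips."""
    def __init__(self, data):
        self.data = data
        self.read_count = 0

    def read(self, offset, length):
        self.read_count += 1
        return self.data[offset:offset + length]

class CachingReader(object):
    """Serve reads from a prefetched prefix; fall back to the server."""
    def __init__(self, server, prefetched=b""):
        self.server = server
        self.cache = prefetched

    def read(self, offset, length):
        if offset + length <= len(self.cache):
            return self.cache[offset:offset + length]  # no round trip
        return self.server.read(offset, length)

server = CountingServer(b"x" * 1000)
mr = CachingReader(server, prefetched=server.data[:123])
mr.read(0, 100)               # satisfied from the prefetched prefix
assert server.read_count == 0
mr.read(500, 10)              # past the cache: one remote read
assert server.read_count == 1
```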
13022+    def test_read_with_prefetched_sdmf_data(self):
13023+        sdmf_data = self.build_test_sdmf_share()
13024+        self.write_sdmf_share_to_server("si1")
13025+        def _make_mr(ignored, length):
13026+            mr = MDMFSlotReadProxy(self.rref, "si1", 0, sdmf_data[:length])
13027+            return mr
13028+
13029+        d = defer.succeed(None)
13030+        # This should be enough to get us the encoding parameters,
13031+        # offset table, and everything else we need to build a verinfo
13032+        # string.
13033+        d.addCallback(_make_mr, 123)
13034+        d.addCallback(lambda mr:
13035+            mr.get_verinfo())
13036+        def _check_verinfo(verinfo):
13037+            self.failUnless(verinfo)
13038+            self.failUnlessEqual(len(verinfo), 9)
13039+            (seqnum,
13040+             root_hash,
13041+             salt,
13042+             segsize,
13043+             datalen,
13044+             k,
13045+             n,
13046+             prefix,
13047+             offsets) = verinfo
13048+            self.failUnlessEqual(seqnum, 0)
13049+            self.failUnlessEqual(root_hash, self.root_hash)
13050+            self.failUnlessEqual(salt, self.salt)
13051+            self.failUnlessEqual(segsize, 36)
13052+            self.failUnlessEqual(datalen, 36)
13053+            self.failUnlessEqual(k, 3)
13054+            self.failUnlessEqual(n, 10)
13055+            expected_prefix = struct.pack(SIGNED_PREFIX,
13056+                                          0,
13057+                                          seqnum,
13058+                                          root_hash,
13059+                                          salt,
13060+                                          k,
13061+                                          n,
13062+                                          segsize,
13063+                                          datalen)
13064+            self.failUnlessEqual(expected_prefix, prefix)
13065+            self.failUnlessEqual(self.rref.read_count, 0)
13066+        d.addCallback(_check_verinfo)
13067+        # This shouldn't be enough to read any share data.
13068+        d.addCallback(_make_mr, 123)
13069+        d.addCallback(lambda mr:
13070+            mr.get_block_and_salt(0))
13071+        def _check_block_and_salt((block, salt)):
13072+            self.failUnlessEqual(block, self.block * 6)
13073+            self.failUnlessEqual(salt, self.salt)
13074+            # TODO: fix the read routine so that, when the cache can't
13075+            #       satisfy a read, it fetches only the uncached portion.
13076+            self.failUnlessEqual(self.rref.read_count, 2)
13077+
13078+        # This should be enough to read share data.
13079+        d.addCallback(_make_mr, self.offsets['share_data'])
13080+        d.addCallback(lambda mr:
13081+            mr.get_block_and_salt(0))
13082+        d.addCallback(_check_block_and_salt)
13083+        return d
13084+
13085+
13086+    def test_read_with_empty_mdmf_file(self):
13087+        # Some tests upload a file with no contents to test things
13088+        # unrelated to the actual handling of the content of the file.
13089+        # The reader should behave intelligently in these cases.
13090+        self.write_test_share_to_server("si1", empty=True)
13091+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
13092+        # We should be able to get the encoding parameters, and they
13093+        # should be correct.
13094+        d = defer.succeed(None)
13095+        d.addCallback(lambda ignored:
13096+            mr.get_encoding_parameters())
13097+        def _check_encoding_parameters(params):
13098+            self.failUnlessEqual(len(params), 4)
13099+            k, n, segsize, datalen = params
13100+            self.failUnlessEqual(k, 3)
13101+            self.failUnlessEqual(n, 10)
13102+            self.failUnlessEqual(segsize, 0)
13103+            self.failUnlessEqual(datalen, 0)
13104+        d.addCallback(_check_encoding_parameters)
13105+
13106+        # We should not be able to fetch a block, since there are no
13107+        # blocks to fetch
13108+        d.addCallback(lambda ignored:
13109+            self.shouldFail(LayoutInvalid, "get block on empty file",
13110+                            None,
13111+                            mr.get_block_and_salt, 0))
13112+        return d
13113+
13114+
13115+    def test_read_with_empty_sdmf_file(self):
13116+        self.write_sdmf_share_to_server("si1", empty=True)
13117+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
13118+        # We should be able to get the encoding parameters, and they
13119+        # should be correct
13120+        d = defer.succeed(None)
13121+        d.addCallback(lambda ignored:
13122+            mr.get_encoding_parameters())
13123+        def _check_encoding_parameters(params):
13124+            self.failUnlessEqual(len(params), 4)
13125+            k, n, segsize, datalen = params
13126+            self.failUnlessEqual(k, 3)
13127+            self.failUnlessEqual(n, 10)
13128+            self.failUnlessEqual(segsize, 0)
13129+            self.failUnlessEqual(datalen, 0)
13130+        d.addCallback(_check_encoding_parameters)
13131+
13132+        # It does not make sense to get a block in this format, so we
13133+        # should not be able to.
13134+        d.addCallback(lambda ignored:
13135+            self.shouldFail(LayoutInvalid, "get block on an empty file",
13136+                            None,
13137+                            mr.get_block_and_salt, 0))
13138+        return d
13139+
13140+
13141+    def test_verinfo_with_sdmf_file(self):
13142+        self.write_sdmf_share_to_server("si1")
13143+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
13144+        # We should be able to get the version information.
13145+        d = defer.succeed(None)
13146+        d.addCallback(lambda ignored:
13147+            mr.get_verinfo())
13148+        def _check_verinfo(verinfo):
13149+            self.failUnless(verinfo)
13150+            self.failUnlessEqual(len(verinfo), 9)
13151+            (seqnum,
13152+             root_hash,
13153+             salt,
13154+             segsize,
13155+             datalen,
13156+             k,
13157+             n,
13158+             prefix,
13159+             offsets) = verinfo
13160+            self.failUnlessEqual(seqnum, 0)
13161+            self.failUnlessEqual(root_hash, self.root_hash)
13162+            self.failUnlessEqual(salt, self.salt)
13163+            self.failUnlessEqual(segsize, 36)
13164+            self.failUnlessEqual(datalen, 36)
13165+            self.failUnlessEqual(k, 3)
13166+            self.failUnlessEqual(n, 10)
13167+            expected_prefix = struct.pack(">BQ32s16s BBQQ",
13168+                                          0,
13169+                                          seqnum,
13170+                                          root_hash,
13171+                                          salt,
13172+                                          k,
13173+                                          n,
13174+                                          segsize,
13175+                                          datalen)
13176+            self.failUnlessEqual(prefix, expected_prefix)
13177+            self.failUnlessEqual(offsets, self.offsets)
13178+        d.addCallback(_check_verinfo)
13179+        return d
13180+
13181+
13182+    def test_verinfo_with_mdmf_file(self):
13183+        self.write_test_share_to_server("si1")
13184+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
13185+        d = defer.succeed(None)
13186+        d.addCallback(lambda ignored:
13187+            mr.get_verinfo())
13188+        def _check_verinfo(verinfo):
13189+            self.failUnless(verinfo)
13190+            self.failUnlessEqual(len(verinfo), 9)
13191+            (seqnum,
13192+             root_hash,
13193+             IV,
13194+             segsize,
13195+             datalen,
13196+             k,
13197+             n,
13198+             prefix,
13199+             offsets) = verinfo
13200+            self.failUnlessEqual(seqnum, 0)
13201+            self.failUnlessEqual(root_hash, self.root_hash)
13202+            self.failIf(IV)
13203+            self.failUnlessEqual(segsize, 6)
13204+            self.failUnlessEqual(datalen, 36)
13205+            self.failUnlessEqual(k, 3)
13206+            self.failUnlessEqual(n, 10)
13207+            expected_prefix = struct.pack(">BQ32s BBQQ",
13208+                                          1,
13209+                                          seqnum,
13210+                                          root_hash,
13211+                                          k,
13212+                                          n,
13213+                                          segsize,
13214+                                          datalen)
13215+            self.failUnlessEqual(prefix, expected_prefix)
13216+            self.failUnlessEqual(offsets, self.offsets)
13217+        d.addCallback(_check_verinfo)
13218+        return d
13219+
13220+
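Annotation, not part of the patch: the prefix comparisons in the verinfo tests above are plain `struct` packing — a version byte, sequence number, 32-byte root hash, then k, N, segment size, and data length. A minimal round trip with the same `">BQ32s BBQQ"` format string (the values are borrowed from the test fixtures):

```python
import struct

# MDMF signed-prefix layout, as checked in test_verinfo_with_mdmf_file.
MDMF_PREFIX = ">BQ32s BBQQ"

root_hash = b"\x00" * 32
# version=1 (MDMF), seqnum=0, k=3, N=10, segsize=6, datalen=36
prefix = struct.pack(MDMF_PREFIX, 1, 0, root_hash, 3, 10, 6, 36)

version, seqnum, rh, k, n, segsize, datalen = struct.unpack(MDMF_PREFIX,
                                                            prefix)
assert (version, seqnum, k, n, segsize, datalen) == (1, 0, 3, 10, 6, 36)
assert rh == root_hash
# 1 + 8 + 32 + 1 + 1 + 8 + 8 = 59 bytes, no padding with ">"
assert len(prefix) == struct.calcsize(MDMF_PREFIX)
```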
13221+    def test_reader_queue(self):
13222+        self.write_test_share_to_server('si1')
13223+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
13224+        d1 = mr.get_block_and_salt(0, queue=True)
13225+        d2 = mr.get_blockhashes(queue=True)
13226+        d3 = mr.get_sharehashes(queue=True)
13227+        d4 = mr.get_signature(queue=True)
13228+        d5 = mr.get_verification_key(queue=True)
13229+        dl = defer.DeferredList([d1, d2, d3, d4, d5])
13230+        mr.flush()
13231+        def _print(results):
13232+            self.failUnlessEqual(len(results), 5)
13233+            # We have one read for version information and offsets, and
13234+            # one for everything else.
13235+            self.failUnlessEqual(self.rref.read_count, 2)
13236+            block, salt = results[0][1] # each DeferredList result is
13237+                                           # a (success, value) pair;
13238+                                           # [1] extracts the value.
13239+            self.failUnlessEqual(self.block, block)
13240+            self.failUnlessEqual(self.salt, salt)
13241+
13242+            blockhashes = results[1][1]
13243+            self.failUnlessEqual(self.block_hash_tree, blockhashes)
13244+
13245+            sharehashes = results[2][1]
13246+            self.failUnlessEqual(self.share_hash_chain, sharehashes)
13247+
13248+            signature = results[3][1]
13249+            self.failUnlessEqual(self.signature, signature)
13250+
13251+            verification_key = results[4][1]
13252+            self.failUnlessEqual(self.verification_key, verification_key)
13253+        dl.addCallback(_print)
13254+        return dl
13255+
13256+
13257+    def test_sdmf_writer(self):
13258+        # Go through the motions of writing an SDMF share to the storage
13259+        # server. Then read the storage server to see that the share got
13260+        # written in the way that we think it should have.
13261+
13262+        # We do this first so that the necessary instance variables get
13263+        # set the way we want them for the tests below.
13264+        data = self.build_test_sdmf_share()
13265+        sdmfr = SDMFSlotWriteProxy(0,
13266+                                   self.rref,
13267+                                   "si1",
13268+                                   self.secrets,
13269+                                   0, 3, 10, 36, 36)
13270+        # Put the block and salt.
13271+        sdmfr.put_block(self.blockdata, 0, self.salt)
13272+
13273+        # Put the encprivkey
13274+        sdmfr.put_encprivkey(self.encprivkey)
13275+
13276+        # Put the block and share hash chains
13277+        sdmfr.put_blockhashes(self.block_hash_tree)
13278+        sdmfr.put_sharehashes(self.share_hash_chain)
13279+        sdmfr.put_root_hash(self.root_hash)
13280+
13281+        # Put the signature
13282+        sdmfr.put_signature(self.signature)
13283+
13284+        # Put the verification key
13285+        sdmfr.put_verification_key(self.verification_key)
13286+
13287+        # Now check to make sure that nothing has been written yet.
13288+        self.failUnlessEqual(self.rref.write_count, 0)
13289+
13290+        # Now finish publishing
13291+        d = sdmfr.finish_publishing()
13292+        def _then(ignored):
13293+            self.failUnlessEqual(self.rref.write_count, 1)
13294+            read = self.ss.remote_slot_readv
13295+            self.failUnlessEqual(read("si1", [0], [(0, len(data))]),
13296+                                 {0: [data]})
13297+        d.addCallback(_then)
13298+        return d
13299+
13300+
13301+    def test_sdmf_writer_preexisting_share(self):
13302+        data = self.build_test_sdmf_share()
13303+        self.write_sdmf_share_to_server("si1")
13304+
13305+        # Now there is a share on the storage server. To successfully
13306+        # write, we need to set the checkstring correctly. When we
13307+        # don't, no write should occur.
13308+        sdmfw = SDMFSlotWriteProxy(0,
13309+                                   self.rref,
13310+                                   "si1",
13311+                                   self.secrets,
13312+                                   1, 3, 10, 36, 36)
13313+        sdmfw.put_block(self.blockdata, 0, self.salt)
13314+
13315+        # Put the encprivkey
13316+        sdmfw.put_encprivkey(self.encprivkey)
13317+
13318+        # Put the block and share hash chains
13319+        sdmfw.put_blockhashes(self.block_hash_tree)
13320+        sdmfw.put_sharehashes(self.share_hash_chain)
13321+
13322+        # Put the root hash
13323+        sdmfw.put_root_hash(self.root_hash)
13324+
13325+        # Put the signature
13326+        sdmfw.put_signature(self.signature)
13327+
13328+        # Put the verification key
13329+        sdmfw.put_verification_key(self.verification_key)
13330+
13331+        # We shouldn't have a checkstring yet
13332+        self.failUnlessEqual(sdmfw.get_checkstring(), "")
13333+
13334+        d = sdmfw.finish_publishing()
13335+        def _then(results):
13336+            self.failIf(results[0])
13337+            # this is the correct checkstring
13338+            self._expected_checkstring = results[1][0][0]
13339+            return self._expected_checkstring
13340+
13341+        d.addCallback(_then)
13342+        d.addCallback(sdmfw.set_checkstring)
13343+        d.addCallback(lambda ignored:
13344+            sdmfw.get_checkstring())
13345+        d.addCallback(lambda checkstring:
13346+            self.failUnlessEqual(checkstring, self._expected_checkstring))
13347+        d.addCallback(lambda ignored:
13348+            sdmfw.finish_publishing())
13349+        def _then_again(results):
13350+            self.failUnless(results[0])
13351+            read = self.ss.remote_slot_readv
13352+            self.failUnlessEqual(read("si1", [0], [(1, 8)]),
13353+                                 {0: [struct.pack(">Q", 1)]})
13354+            self.failUnlessEqual(read("si1", [0], [(9, len(data) - 9)]),
13355+                                 {0: [data[9:]]})
13356+        d.addCallback(_then_again)
13357+        return d
13358+
13359+
13360 class Stats(unittest.TestCase):
13361 
13362     def setUp(self):
13363}
13364[frontends/sftpd: Resolve incompatibilities between SFTP frontend and MDMF changes
13365Kevan Carstensen <kevan@isnotajoke.com>**20110802021207
13366 Ignore-this: 5e0f6e961048f71d4eed6d30210ffd2e
13367] {
13368hunk ./src/allmydata/frontends/sftpd.py 33
13369 from allmydata.interfaces import IFileNode, IDirectoryNode, ExistingChildError, \
13370      NoSuchChildError, ChildOfWrongTypeError
13371 from allmydata.mutable.common import NotWriteableError
13372+from allmydata.mutable.publish import MutableFileHandle
13373 from allmydata.immutable.upload import FileHandle
13374 from allmydata.dirnode import update_metadata
13375 from allmydata.util.fileutil import EncryptedTemporaryFile
13376hunk ./src/allmydata/frontends/sftpd.py 667
13377         else:
13378             assert IFileNode.providedBy(filenode), filenode
13379 
13380-            if filenode.is_mutable():
13381-                self.async.addCallback(lambda ign: filenode.download_best_version())
13382-                def _downloaded(data):
13383-                    self.consumer = OverwriteableFileConsumer(len(data), tempfile_maker)
13384-                    self.consumer.write(data)
13385-                    self.consumer.finish()
13386-                    return None
13387-                self.async.addCallback(_downloaded)
13388-            else:
13389-                download_size = filenode.get_size()
13390-                assert download_size is not None, "download_size is None"
13391+            self.async.addCallback(lambda ignored: filenode.get_best_readable_version())
13392+
13393+            def _read(version):
13394+                if noisy: self.log("_read", level=NOISY)
13395+                download_size = version.get_size()
13396+                assert download_size is not None
13397+
13398                 self.consumer = OverwriteableFileConsumer(download_size, tempfile_maker)
13399hunk ./src/allmydata/frontends/sftpd.py 675
13400-                def _read(ign):
13401-                    if noisy: self.log("_read immutable", level=NOISY)
13402-                    filenode.read(self.consumer, 0, None)
13403-                self.async.addCallback(_read)
13404+
13405+                version.read(self.consumer, 0, None)
13406+            self.async.addCallback(_read)
13407 
13408         eventually(self.async.callback, None)
13409 
13410hunk ./src/allmydata/frontends/sftpd.py 821
13411                     assert parent and childname, (parent, childname, self.metadata)
13412                     d2.addCallback(lambda ign: parent.set_metadata_for(childname, self.metadata))
13413 
13414-                d2.addCallback(lambda ign: self.consumer.get_current_size())
13415-                d2.addCallback(lambda size: self.consumer.read(0, size))
13416-                d2.addCallback(lambda new_contents: self.filenode.overwrite(new_contents))
13417+                d2.addCallback(lambda ign: self.filenode.overwrite(MutableFileHandle(self.consumer.get_file())))
13418             else:
13419                 def _add_file(ign):
13420                     self.log("_add_file childname=%r" % (childname,), level=OPERATIONAL)
13421hunk ./src/allmydata/test/test_sftp.py 32
13422 
13423 from allmydata.util.consumer import download_to_data
13424 from allmydata.immutable import upload
13425+from allmydata.mutable import publish
13426 from allmydata.test.no_network import GridTestMixin
13427 from allmydata.test.common import ShouldFailMixin
13428 from allmydata.test.common_util import ReallyEqualMixin
13429hunk ./src/allmydata/test/test_sftp.py 80
13430         return d
13431 
13432     def _set_up_tree(self):
13433-        d = self.client.create_mutable_file("mutable file contents")
13434+        u = publish.MutableData("mutable file contents")
13435+        d = self.client.create_mutable_file(u)
13436         d.addCallback(lambda node: self.root.set_node(u"mutable", node))
13437         def _created_mutable(n):
13438             self.mutable = n
13439}
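The sftpd change above replaces the separate mutable/immutable download branches with a single path through `get_best_readable_version()`. A minimal sketch of that unified flow, using hypothetical stand-ins for the version and consumer objects (the real code drives `OverwriteableFileConsumer` through a Twisted Deferred chain):

```python
class FakeVersion:
    """Hypothetical stand-in for the object returned by
    get_best_readable_version(); both mutable and immutable versions
    expose get_size() and read()."""
    def __init__(self, data):
        self._data = data

    def get_size(self):
        return len(self._data)

    def read(self, consumer, offset, size):
        # size=None means "read to the end", as in the patched sftpd code
        end = None if size is None else offset + size
        consumer.write(self._data[offset:end])

class CollectingConsumer:
    """Collects written chunks; stands in for OverwriteableFileConsumer."""
    def __init__(self):
        self.chunks = []

    def write(self, data):
        self.chunks.append(data)

def read_file(version):
    # The unified path: no mutable/immutable fork needed anymore.
    download_size = version.get_size()
    assert download_size is not None
    consumer = CollectingConsumer()
    version.read(consumer, 0, None)
    return b"".join(consumer.chunks)
```

This is only a model of the control flow, not the frontend's actual classes.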
13440[uri: add MDMF and MDMF directory caps, add extension hint support
13441Kevan Carstensen <kevan@isnotajoke.com>**20110802021233
13442 Ignore-this: 525f98d5dcb7a6afad601c27dba59e84
13443] {
13444hunk ./src/allmydata/test/test_cli.py 1238
13445         d.addCallback(_check)
13446         return d
13447 
13448+    def _create_directory_structure(self):
13449+        # Create a simple directory structure that we can use for MDMF,
13450+        # SDMF, and immutable testing.
13451+        assert self.g
13452+
13453+        client = self.g.clients[0]
13454+        # Create a dirnode
13455+        d = client.create_dirnode()
13456+        def _got_rootnode(n):
13457+            # Add a few nodes.
13458+            self._dircap = n.get_uri()
13459+            nm = n._nodemaker
13460+            # The uploaders may run at the same time, so we need two
13461+            # MutableData instances or they'll fight over offsets &c and
13462+            # break.
13463+            mutable_data = MutableData("data" * 100000)
13464+            mutable_data2 = MutableData("data" * 100000)
13465+            # Add both kinds of mutable node.
13466+            d1 = nm.create_mutable_file(mutable_data,
13467+                                        version=MDMF_VERSION)
13468+            d2 = nm.create_mutable_file(mutable_data2,
13469+                                        version=SDMF_VERSION)
13470+            # Add an immutable node. We do this through the directory,
13471+            # with add_file.
13472+            immutable_data = upload.Data("immutable data" * 100000,
13473+                                         convergence="")
13474+            d3 = n.add_file(u"immutable", immutable_data)
13475+            ds = [d1, d2, d3]
13476+            dl = defer.DeferredList(ds)
13477+            def _made_files((r1, r2, r3)):
13478+                self.failUnless(r1[0])
13479+                self.failUnless(r2[0])
13480+                self.failUnless(r3[0])
13481+
13482+                # r1, r2, and r3 contain nodes.
13483+                mdmf_node = r1[1]
13484+                sdmf_node = r2[1]
13485+                imm_node = r3[1]
13486+
13487+                self._mdmf_uri = mdmf_node.get_uri()
13488+                self._mdmf_readonly_uri = mdmf_node.get_readonly_uri()
13489+                self._sdmf_uri = sdmf_node.get_uri()

13490+                self._sdmf_readonly_uri = sdmf_node.get_readonly_uri()
13491+                self._imm_uri = imm_node.get_uri()
13492+
13493+                d1 = n.set_node(u"mdmf", mdmf_node)
13494+                d2 = n.set_node(u"sdmf", sdmf_node)
13495+                return defer.DeferredList([d1, d2])
13496+            # We can now list the directory by listing self._dircap.
13497+            dl.addCallback(_made_files)
13498+            return dl
13499+        d.addCallback(_got_rootnode)
13500+        return d
13501+
13502+    def test_list_mdmf(self):
13503+        # 'tahoe ls' should include MDMF files.
13504+        self.basedir = "cli/List/list_mdmf"
13505+        self.set_up_grid()
13506+        d = self._create_directory_structure()
13507+        d.addCallback(lambda ignored:
13508+            self.do_cli("ls", self._dircap))
13509+        def _got_ls((rc, out, err)):
13510+            self.failUnlessEqual(rc, 0)
13511+            self.failUnlessEqual(err, "")
13512+            self.failUnlessIn("immutable", out)
13513+            self.failUnlessIn("mdmf", out)
13514+            self.failUnlessIn("sdmf", out)
13515+        d.addCallback(_got_ls)
13516+        return d
13517+
13518+    def test_list_mdmf_json(self):
13519+        # 'tahoe ls --json' should include the caps of any MDMF files
13520+        # in its output.
13521+        self.basedir = "cli/List/list_mdmf_json"
13522+        self.set_up_grid()
13523+        d = self._create_directory_structure()
13524+        d.addCallback(lambda ignored:
13525+            self.do_cli("ls", "--json", self._dircap))
13526+        def _got_json((rc, out, err)):
13527+            self.failUnlessEqual(rc, 0)
13528+            self.failUnlessEqual(err, "")
13529+            self.failUnlessIn(self._mdmf_uri, out)
13530+            self.failUnlessIn(self._mdmf_readonly_uri, out)
13531+            self.failUnlessIn(self._sdmf_uri, out)
13532+            self.failUnlessIn(self._sdmf_readonly_uri, out)
13533+            self.failUnlessIn(self._imm_uri, out)
13534+            self.failUnlessIn('"mutable-type": "sdmf"', out)
13535+            self.failUnlessIn('"mutable-type": "mdmf"', out)
13536+        d.addCallback(_got_json)
13537+        return d
13538+
13539 
13540 class Mv(GridTestMixin, CLITestMixin, unittest.TestCase):
13541     def test_mv_behavior(self):
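The `test_list_mdmf_json` assertions above imply a JSON shape for mutable directory entries. A hypothetical sketch of what one entry might look like; only the `"mutable-type"` key is asserted by the test, the other field names are illustrative:

```python
import json

# Illustrative 'tahoe ls --json' directory entry for an MDMF file.
# Only "mutable-type" is taken from the test above; the caps are elided
# placeholders, not real cap strings.
entry = ["filenode", {
    "mutable": True,
    "mutable-type": "mdmf",       # or "sdmf" for SDMF files
    "rw_uri": "URI:MDMF:...",     # elided
    "ro_uri": "URI:MDMF-RO:...",  # elided
}]
out = json.dumps(entry)
```

The test then simply checks that substrings like `'"mutable-type": "mdmf"'` appear in the CLI output.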
13542hunk ./src/allmydata/test/test_uri.py 2
13543 
13544+import re
13545 from twisted.trial import unittest
13546 from allmydata import uri
13547 from allmydata.util import hashutil, base32
13548hunk ./src/allmydata/test/test_uri.py 259
13549         uri.CHKFileURI.init_from_string(fileURI)
13550 
13551 class Mutable(testutil.ReallyEqualMixin, unittest.TestCase):
13552-    def test_pack(self):
13553-        writekey = "\x01" * 16
13554-        fingerprint = "\x02" * 32
13555+    def setUp(self):
13556+        self.writekey = "\x01" * 16
13557+        self.fingerprint = "\x02" * 32
13558+        self.readkey = hashutil.ssk_readkey_hash(self.writekey)
13559+        self.storage_index = hashutil.ssk_storage_index_hash(self.readkey)
13560 
13561hunk ./src/allmydata/test/test_uri.py 265
13562-        u = uri.WriteableSSKFileURI(writekey, fingerprint)
13563-        self.failUnlessReallyEqual(u.writekey, writekey)
13564-        self.failUnlessReallyEqual(u.fingerprint, fingerprint)
13565+    def test_pack(self):
13566+        u = uri.WriteableSSKFileURI(self.writekey, self.fingerprint)
13567+        self.failUnlessReallyEqual(u.writekey, self.writekey)
13568+        self.failUnlessReallyEqual(u.fingerprint, self.fingerprint)
13569         self.failIf(u.is_readonly())
13570         self.failUnless(u.is_mutable())
13571         self.failUnless(IURI.providedBy(u))
13572hunk ./src/allmydata/test/test_uri.py 281
13573         self.failUnlessReallyEqual(u, u_h)
13574 
13575         u2 = uri.from_string(u.to_string())
13576-        self.failUnlessReallyEqual(u2.writekey, writekey)
13577-        self.failUnlessReallyEqual(u2.fingerprint, fingerprint)
13578+        self.failUnlessReallyEqual(u2.writekey, self.writekey)
13579+        self.failUnlessReallyEqual(u2.fingerprint, self.fingerprint)
13580         self.failIf(u2.is_readonly())
13581         self.failUnless(u2.is_mutable())
13582         self.failUnless(IURI.providedBy(u2))
13583hunk ./src/allmydata/test/test_uri.py 297
13584         self.failUnless(isinstance(u2imm, uri.UnknownURI), u2imm)
13585 
13586         u3 = u2.get_readonly()
13587-        readkey = hashutil.ssk_readkey_hash(writekey)
13588-        self.failUnlessReallyEqual(u3.fingerprint, fingerprint)
13589+        readkey = hashutil.ssk_readkey_hash(self.writekey)
13590+        self.failUnlessReallyEqual(u3.fingerprint, self.fingerprint)
13591         self.failUnlessReallyEqual(u3.readkey, readkey)
13592         self.failUnless(u3.is_readonly())
13593         self.failUnless(u3.is_mutable())
13594hunk ./src/allmydata/test/test_uri.py 317
13595         u3_h = uri.ReadonlySSKFileURI.init_from_human_encoding(he)
13596         self.failUnlessReallyEqual(u3, u3_h)
13597 
13598-        u4 = uri.ReadonlySSKFileURI(readkey, fingerprint)
13599-        self.failUnlessReallyEqual(u4.fingerprint, fingerprint)
13600+        u4 = uri.ReadonlySSKFileURI(readkey, self.fingerprint)
13601+        self.failUnlessReallyEqual(u4.fingerprint, self.fingerprint)
13602         self.failUnlessReallyEqual(u4.readkey, readkey)
13603         self.failUnless(u4.is_readonly())
13604         self.failUnless(u4.is_mutable())
13605hunk ./src/allmydata/test/test_uri.py 350
13606         self.failUnlessReallyEqual(u5, u5_h)
13607 
13608 
13609+    def test_writable_mdmf_cap(self):
13610+        u1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
13611+        cap = u1.to_string()
13612+        u = uri.WritableMDMFFileURI.init_from_string(cap)
13613+
13614+        self.failUnless(IMutableFileURI.providedBy(u))
13615+        self.failUnlessReallyEqual(u.fingerprint, self.fingerprint)
13616+        self.failUnlessReallyEqual(u.writekey, self.writekey)
13617+        self.failUnless(u.is_mutable())
13618+        self.failIf(u.is_readonly())
13619+        self.failUnlessEqual(cap, u.to_string())
13620+
13621+        # Now get a readonly cap from the writable cap, and test that it
13622+        # degrades gracefully.
13623+        ru = u.get_readonly()
13624+        self.failUnlessReallyEqual(self.readkey, ru.readkey)
13625+        self.failUnlessReallyEqual(self.fingerprint, ru.fingerprint)
13626+        self.failUnless(ru.is_mutable())
13627+        self.failUnless(ru.is_readonly())
13628+
13629+        # Now get a verifier cap.
13630+        vu = ru.get_verify_cap()
13631+        self.failUnlessReallyEqual(self.storage_index, vu.storage_index)
13632+        self.failUnlessReallyEqual(self.fingerprint, vu.fingerprint)
13633+        self.failUnless(IVerifierURI.providedBy(vu))
13634+
13635+    def test_readonly_mdmf_cap(self):
13636+        u1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
13637+        cap = u1.to_string()
13638+        u2 = uri.ReadonlyMDMFFileURI.init_from_string(cap)
13639+
13640+        self.failUnlessReallyEqual(u2.fingerprint, self.fingerprint)
13641+        self.failUnlessReallyEqual(u2.readkey, self.readkey)
13642+        self.failUnless(u2.is_readonly())
13643+        self.failUnless(u2.is_mutable())
13644+
13645+        vu = u2.get_verify_cap()
13646+        self.failUnlessEqual(vu.storage_index, self.storage_index)
13647+        self.failUnlessEqual(vu.fingerprint, self.fingerprint)
13648+
13649+    def test_create_writable_mdmf_cap_from_readcap(self):
13650+        # we shouldn't be able to create a writable MDMF cap given only a
13651+        # readcap.
13652+        u1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
13653+        cap = u1.to_string()
13654+        self.failUnlessRaises(uri.BadURIError,
13655+                              uri.WritableMDMFFileURI.init_from_string,
13656+                              cap)
13657+
13658+    def test_create_writable_mdmf_cap_from_verifycap(self):
13659+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
13660+        cap = u1.to_string()
13661+        self.failUnlessRaises(uri.BadURIError,
13662+                              uri.WritableMDMFFileURI.init_from_string,
13663+                              cap)
13664+
13665+    def test_create_readonly_mdmf_cap_from_verifycap(self):
13666+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
13667+        cap = u1.to_string()
13668+        self.failUnlessRaises(uri.BadURIError,
13669+                              uri.ReadonlyMDMFFileURI.init_from_string,
13670+                              cap)
13671+
13672+    def test_mdmf_verifier_cap(self):
13673+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
13674+        self.failUnless(u1.is_readonly())
13675+        self.failIf(u1.is_mutable())
13676+        self.failUnlessReallyEqual(self.storage_index, u1.storage_index)
13677+        self.failUnlessReallyEqual(self.fingerprint, u1.fingerprint)
13678+
13679+        cap = u1.to_string()
13680+        u2 = uri.MDMFVerifierURI.init_from_string(cap)
13681+        self.failUnless(u2.is_readonly())
13682+        self.failIf(u2.is_mutable())
13683+        self.failUnlessReallyEqual(self.storage_index, u2.storage_index)
13684+        self.failUnlessReallyEqual(self.fingerprint, u2.fingerprint)
13685+
13686+        u3 = u2.get_readonly()
13687+        self.failUnlessReallyEqual(u3, u2)
13688+
13689+        u4 = u2.get_verify_cap()
13690+        self.failUnlessReallyEqual(u4, u2)
13691+
13692+    def test_mdmf_cap_extra_information(self):
13693+        # MDMF caps can be arbitrarily extended after the fingerprint
13694+        # and key/storage index fields.
13695+        u1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
13696+        self.failUnlessEqual([], u1.get_extension_params())
13697+
13698+        cap = u1.to_string()
13699+        # Now let's append some fields. Say, 131073 (the segment size)
13700+        # and 3 (the "k" encoding parameter).
13701+        expected_extensions = []
13702+        for e in ('131073', '3'):
13703+            cap += (":%s" % e)
13704+            expected_extensions.append(e)
13705+
13706+            u2 = uri.WritableMDMFFileURI.init_from_string(cap)
13707+            self.failUnlessReallyEqual(self.writekey, u2.writekey)
13708+            self.failUnlessReallyEqual(self.fingerprint, u2.fingerprint)
13709+            self.failIf(u2.is_readonly())
13710+            self.failUnless(u2.is_mutable())
13711+
13712+            c2 = u2.to_string()
13713+            u2n = uri.WritableMDMFFileURI.init_from_string(c2)
13714+            self.failUnlessReallyEqual(u2, u2n)
13715+
13716+            # We should get the extra back when we ask for it.
13717+            self.failUnlessEqual(expected_extensions, u2.get_extension_params())
13718+
13719+            # These should be preserved through cap attenuation, too.
13720+            u3 = u2.get_readonly()
13721+            self.failUnlessReallyEqual(self.readkey, u3.readkey)
13722+            self.failUnlessReallyEqual(self.fingerprint, u3.fingerprint)
13723+            self.failUnless(u3.is_readonly())
13724+            self.failUnless(u3.is_mutable())
13725+            self.failUnlessEqual(expected_extensions, u3.get_extension_params())
13726+
13727+            c3 = u3.to_string()
13728+            u3n = uri.ReadonlyMDMFFileURI.init_from_string(c3)
13729+            self.failUnlessReallyEqual(u3, u3n)
13730+
13731+            u4 = u3.get_verify_cap()
13732+            self.failUnlessReallyEqual(self.storage_index, u4.storage_index)
13733+            self.failUnlessReallyEqual(self.fingerprint, u4.fingerprint)
13734+            self.failUnless(u4.is_readonly())
13735+            self.failIf(u4.is_mutable())
13736+
13737+            c4 = u4.to_string()
13738+            u4n = uri.MDMFVerifierURI.init_from_string(c4)
13739+            self.failUnlessReallyEqual(u4n, u4)
13740+
13741+            self.failUnlessEqual(expected_extensions, u4.get_extension_params())
13742+
13743+
13744+    def test_sdmf_cap_extra_information(self):
13745+        # For interface consistency, we define a method to get
13746+        # extensions for SDMF files as well. This method must always
13747+        # return no extensions, since SDMF files were not created with
13748+        # extensions and cannot be modified to include extensions
13749+        # without breaking older clients.
13750+        u1 = uri.WriteableSSKFileURI(self.writekey, self.fingerprint)
13751+        cap = u1.to_string()
13752+        u2 = uri.WriteableSSKFileURI.init_from_string(cap)
13753+        self.failUnlessEqual([], u2.get_extension_params())
13754+
13755+    def test_extension_character_range(self):
13756+        # As currently written, the extension fields may contain only
13757+        # numbers; anything else should be rejected.
13758+        writecap = uri.WritableMDMFFileURI(self.writekey, self.fingerprint).to_string()
13759+        readcap  = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint).to_string()
13760+        vcap     = uri.MDMFVerifierURI(self.storage_index, self.fingerprint).to_string()
13761+        self.failUnlessRaises(uri.BadURIError,
13762+                              uri.WritableMDMFFileURI.init_from_string,
13763+                              ("%s:invalid" % writecap))
13764+        self.failUnlessRaises(uri.BadURIError,
13765+                              uri.ReadonlyMDMFFileURI.init_from_string,
13766+                              ("%s:invalid" % readcap))
13767+        self.failUnlessRaises(uri.BadURIError,
13768+                              uri.MDMFVerifierURI.init_from_string,
13769+                              ("%s:invalid" % vcap))
13770+
13771+
13772+    def test_mdmf_valid_human_encoding(self):
13773+        # What's a human encoding? Well, it's of the form:
13774+        base = "https://127.0.0.1:3456/uri/"
13775+        # With a cap on the end. For each of the cap types, we need to
13776+        # test that a valid cap (with and without the traditional
13777+        # separators) is recognized and accepted by the classes.
13778+        w1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
13779+        w2 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint,
13780+                                     ['131073', '3'])
13781+        r1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
13782+        r2 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint,
13783+                                     ['131073', '3'])
13784+        v1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
13785+        v2 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint,
13786+                                 ['131073', '3'])
13787+
13788+        # These will yield six different caps.
13789+        for o in (w1, w2, r1, r2, v1, v2):
13790+            url = base + o.to_string()
13791+            o1 = o.__class__.init_from_human_encoding(url)
13792+            self.failUnlessReallyEqual(o1, o)
13793+
13794+            # Note that our cap will, by default, use : as a separator.
13795+            # But URLs copied from, e.g., the WUI will use %3A as a
13796+            # separator instead, so we need to make sure that the
13797+            # initialization routine handles that, too.
13798+            cap = o.to_string()
13799+            cap = re.sub(":", "%3A", cap)
13800+            url = base + cap
13801+            o2 = o.__class__.init_from_human_encoding(url)
13802+            self.failUnlessReallyEqual(o2, o)
13803+
13804+
13805+    def test_mdmf_human_encoding_invalid_base(self):
13806+        # This "human encoding" has an invalid base:
13807+        base = "https://127.0.0.1:3456/foo/bar/bazuri/"
13808+        # With a cap on the end. For each of the cap types, we need to
13809+        # test that a cap with an invalid base URL is rejected by the
13810+        # classes.
13811+        w1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
13812+        w2 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint,
13813+                                     ['131073', '3'])
13814+        r1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
13815+        r2 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint,
13816+                                     ['131073', '3'])
13817+        v1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
13818+        v2 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint,
13819+                                 ['131073', '3'])
13820+
13821+        # These will yield six different caps.
13822+        for o in (w1, w2, r1, r2, v1, v2):
13823+            url = base + o.to_string()
13824+            self.failUnlessRaises(uri.BadURIError,
13825+                                  o.__class__.init_from_human_encoding,
13826+                                  url)
13827+
13828+    def test_mdmf_human_encoding_invalid_cap(self):
13829+        base = "https://127.0.0.1:3456/uri/"
13830+        # With a cap on the end. For each of the cap types, we need to
13831+        # test that a corrupted or malformed cap is rejected by the
13832+        # classes.
13833+        w1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
13834+        w2 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint,
13835+                                     ['131073', '3'])
13836+        r1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
13837+        r2 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint,
13838+                                     ['131073', '3'])
13839+        v1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
13840+        v2 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint,
13841+                                 ['131073', '3'])
13842+
13843+        # These will yield six different caps.
13844+        for o in (w1, w2, r1, r2, v1, v2):
13845+            # not exhaustive, obviously...
13846+            url = base + o.to_string() + "foobarbaz"
13847+            url2 = base + "foobarbaz" + o.to_string()
13848+            url3 = base + o.to_string()[:25] + "foo" + o.to_string()[25:]
13849+            for u in (url, url2, url3):
13850+                self.failUnlessRaises(uri.BadURIError,
13851+                                      o.__class__.init_from_human_encoding,
13852+                                      u)
13853+
13854+    def test_mdmf_from_string(self):
13855+        # Make sure that the from_string utility function works with
13856+        # MDMF caps.
13857+        u1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
13858+        cap = u1.to_string()
13859+        self.failUnless(uri.is_uri(cap))
13860+        u2 = uri.from_string(cap)
13861+        self.failUnlessReallyEqual(u1, u2)
13862+        u3 = uri.from_string_mutable_filenode(cap)
13863+        self.failUnlessEqual(u3, u1)
13864+
13865+        # XXX: We should refactor the extension field into setUp
13866+        u1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint,
13867+                                     ['131073', '3'])
13868+        cap = u1.to_string()
13869+        self.failUnless(uri.is_uri(cap))
13870+        u2 = uri.from_string(cap)
13871+        self.failUnlessReallyEqual(u1, u2)
13872+        u3 = uri.from_string_mutable_filenode(cap)
13873+        self.failUnlessEqual(u3, u1)
13874+
13875+        u1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
13876+        cap = u1.to_string()
13877+        self.failUnless(uri.is_uri(cap))
13878+        u2 = uri.from_string(cap)
13879+        self.failUnlessReallyEqual(u1, u2)
13880+        u3 = uri.from_string_mutable_filenode(cap)
13881+        self.failUnlessEqual(u3, u1)
13882+
13883+        u1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint,
13884+                                     ['131073', '3'])
13885+        cap = u1.to_string()
13886+        self.failUnless(uri.is_uri(cap))
13887+        u2 = uri.from_string(cap)
13888+        self.failUnlessReallyEqual(u1, u2)
13889+        u3 = uri.from_string_mutable_filenode(cap)
13890+        self.failUnlessEqual(u3, u1)
13891+
13892+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
13893+        cap = u1.to_string()
13894+        self.failUnless(uri.is_uri(cap))
13895+        u2 = uri.from_string(cap)
13896+        self.failUnlessReallyEqual(u1, u2)
13897+        u3 = uri.from_string_verifier(cap)
13898+        self.failUnlessEqual(u3, u1)
13899+
13900+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint,
13901+                                 ['131073', '3'])
13902+        cap = u1.to_string()
13903+        self.failUnless(uri.is_uri(cap))
13904+        u2 = uri.from_string(cap)
13905+        self.failUnlessReallyEqual(u1, u2)
13906+        u3 = uri.from_string_verifier(cap)
13907+        self.failUnlessEqual(u3, u1)
13908+
13909+
13910 class Dirnode(testutil.ReallyEqualMixin, unittest.TestCase):
13911     def test_pack(self):
13912         writekey = "\x01" * 16
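The extension tests above append numeric fields after the base cap with `:` separators. A rough sketch of that layout; the real parsing lives in `allmydata.uri` and is regex-based, and `split_extensions` below is a hypothetical helper for illustration only:

```python
def split_extensions(cap):
    # An MDMF writecap with extensions looks roughly like:
    #   URI:MDMF:<writekey>:<fingerprint>:131073:3
    # where everything after the fourth field is an extension parameter
    # (here, the segment size and the "k" encoding parameter).
    parts = cap.split(":")
    base, extras = parts[:4], parts[4:]
    for e in extras:
        if not e.isdigit():
            # mirrors test_extension_character_range: only numbers allowed
            raise ValueError("invalid extension field: %r" % (e,))
    return ":".join(base), extras
```

SDMF caps, by contrast, never carry extensions, which is why `get_extension_params()` on an SDMF cap must always return an empty list.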
13913hunk ./src/allmydata/test/test_uri.py 794
13914         self.failUnlessReallyEqual(u1.get_verify_cap(), None)
13915         self.failUnlessReallyEqual(u1.get_storage_index(), None)
13916         self.failUnlessReallyEqual(u1.abbrev_si(), "<LIT>")
13917+
13918+    def test_mdmf(self):
13919+        writekey = "\x01" * 16
13920+        fingerprint = "\x02" * 32
13921+        uri1 = uri.WritableMDMFFileURI(writekey, fingerprint)
13922+        d1 = uri.MDMFDirectoryURI(uri1)
13923+        self.failIf(d1.is_readonly())
13924+        self.failUnless(d1.is_mutable())
13925+        self.failUnless(IURI.providedBy(d1))
13926+        self.failUnless(IDirnodeURI.providedBy(d1))
13927+        d1_uri = d1.to_string()
13928+
13929+        d2 = uri.from_string(d1_uri)
13930+        self.failUnlessIsInstance(d2, uri.MDMFDirectoryURI)
13931+        self.failIf(d2.is_readonly())
13932+        self.failUnless(d2.is_mutable())
13933+        self.failUnless(IURI.providedBy(d2))
13934+        self.failUnless(IDirnodeURI.providedBy(d2))
13935+
13936+        # It doesn't make sense to ask for a deep immutable URI for a
13937+        # mutable directory, and we should get back a result to that
13938+        # effect.
13939+        d3 = uri.from_string(d2.to_string(), deep_immutable=True)
13940+        self.failUnlessIsInstance(d3, uri.UnknownURI)
13941+
13942+    def test_mdmf_with_extensions(self):
13943+        writekey = "\x01" * 16
13944+        fingerprint = "\x02" * 32
13945+        uri1 = uri.WritableMDMFFileURI(writekey, fingerprint)
13946+        d1 = uri.MDMFDirectoryURI(uri1)
13947+        d1_uri = d1.to_string()
13948+        # Add some extensions, verify that the URI is interpreted
13949+        # correctly.
13950+        d1_uri += ":3:131073"
13951+        uri2 = uri.from_string(d1_uri)
13952+        self.failUnlessIsInstance(uri2, uri.MDMFDirectoryURI)
13953+        self.failUnless(IURI.providedBy(uri2))
13954+        self.failUnless(IDirnodeURI.providedBy(uri2))
13955+        self.failUnless(uri2.is_mutable())
13956+        self.failIf(uri2.is_readonly())
13957+
13958+        d2_uri = uri2.to_string()
13959+        self.failUnlessIn(":3:131073", d2_uri)
13960+
13961+        # Now attenuate, verify that the extensions persist
13962+        ro_uri = uri2.get_readonly()
13963+        self.failUnlessIsInstance(ro_uri, uri.ReadonlyMDMFDirectoryURI)
13964+        self.failUnless(ro_uri.is_mutable())
13965+        self.failUnless(ro_uri.is_readonly())
13966+        self.failUnless(IURI.providedBy(ro_uri))
13967+        self.failUnless(IDirnodeURI.providedBy(ro_uri))
13968+        ro_uri_str = ro_uri.to_string()
13969+        self.failUnlessIn(":3:131073", ro_uri_str)
13970+
13971+    def test_mdmf_attenuation(self):
13972+        writekey = "\x01" * 16
13973+        fingerprint = "\x02" * 32
13974+
13975+        uri1 = uri.WritableMDMFFileURI(writekey, fingerprint)
13976+        d1 = uri.MDMFDirectoryURI(uri1)
13977+        self.failUnless(d1.is_mutable())
13978+        self.failIf(d1.is_readonly())
13979+        self.failUnless(IURI.providedBy(d1))
13980+        self.failUnless(IDirnodeURI.providedBy(d1))
13981+
13982+        d1_uri = d1.to_string()
13983+        d1_uri_from_fn = uri.MDMFDirectoryURI(d1.get_filenode_cap()).to_string()
13984+        self.failUnlessEqual(d1_uri_from_fn, d1_uri)
13985+
13986+        uri2 = uri.from_string(d1_uri)
13987+        self.failUnlessIsInstance(uri2, uri.MDMFDirectoryURI)
13988+        self.failUnless(IURI.providedBy(uri2))
13989+        self.failUnless(IDirnodeURI.providedBy(uri2))
13990+        self.failUnless(uri2.is_mutable())
13991+        self.failIf(uri2.is_readonly())
13992+
13993+        ro = uri2.get_readonly()
13994+        self.failUnlessIsInstance(ro, uri.ReadonlyMDMFDirectoryURI)
13995+        self.failUnless(ro.is_mutable())
13996+        self.failUnless(ro.is_readonly())
13997+        self.failUnless(IURI.providedBy(ro))
13998+        self.failUnless(IDirnodeURI.providedBy(ro))
13999+
14000+        ro_uri = ro.to_string()
14001+        n = uri.from_string(ro_uri, deep_immutable=True)
14002+        self.failUnlessIsInstance(n, uri.UnknownURI)
14003+
14004+        fn_cap = ro.get_filenode_cap()
14005+        fn_ro_cap = fn_cap.get_readonly()
14006+        d3 = uri.ReadonlyMDMFDirectoryURI(fn_ro_cap)
14007+        self.failUnlessEqual(ro.to_string(), d3.to_string())
14008+        self.failUnless(ro.is_mutable())
14009+        self.failUnless(ro.is_readonly())
14010+
14011+    def test_mdmf_verifier(self):
14012+        # I'm not sure what I want to write here yet.
14013+        writekey = "\x01" * 16
14014+        fingerprint = "\x02" * 32
14015+        uri1 = uri.WritableMDMFFileURI(writekey, fingerprint)
14016+        d1 = uri.MDMFDirectoryURI(uri1)
14017+        v1 = d1.get_verify_cap()
14018+        self.failUnlessIsInstance(v1, uri.MDMFDirectoryURIVerifier)
14019+        self.failIf(v1.is_mutable())
14020+
14021+        d2 = uri.from_string(d1.to_string())
14022+        v2 = d2.get_verify_cap()
14023+        self.failUnlessIsInstance(v2, uri.MDMFDirectoryURIVerifier)
14024+        self.failIf(v2.is_mutable())
14025+        self.failUnlessEqual(v2.to_string(), v1.to_string())
14026+
14027+        # Now attenuate and make sure that works correctly.
14028+        r3 = d2.get_readonly()
14029+        v3 = r3.get_verify_cap()
14030+        self.failUnlessIsInstance(v3, uri.MDMFDirectoryURIVerifier)
14031+        self.failIf(v3.is_mutable())
14032+        self.failUnlessEqual(v3.to_string(), v1.to_string())
14033+        r4 = uri.from_string(r3.to_string())
14034+        v4 = r4.get_verify_cap()
14035+        self.failUnlessIsInstance(v4, uri.MDMFDirectoryURIVerifier)
14036+        self.failIf(v4.is_mutable())
14037+        self.failUnlessEqual(v4.to_string(), v3.to_string())
14038hunk ./src/allmydata/uri.py 31
14039 SEP='(?::|%3A)'
14040 NUMBER='([0-9]+)'
14041 NUMBER_IGNORE='(?:[0-9]+)'
14042+OPTIONAL_EXTENSION_FIELD = '(' + SEP + '[0-9' + SEP + ']+|)'
14043 
14044 # "human-encoded" URIs are allowed to come with a leading
14045 # 'http://127.0.0.1:(8123|3456)/uri/' that will be ignored.
14046hunk ./src/allmydata/uri.py 297
14047     def get_verify_cap(self):
14048         return SSKVerifierURI(self.storage_index, self.fingerprint)
14049 
14050+    def get_extension_params(self):
14051+        return []
14052+
14053+    def set_extension_params(self, params):
14054+        pass
14055 
14056 class ReadonlySSKFileURI(_BaseURI):
14057     implements(IURI, IMutableFileURI)
14058hunk ./src/allmydata/uri.py 357
14059     def get_verify_cap(self):
14060         return SSKVerifierURI(self.storage_index, self.fingerprint)
14061 
14062+    def get_extension_params(self):
14063+        return []
14064+
14065+    def set_extension_params(self, params):
14066+        pass
14067 
14068 class SSKVerifierURI(_BaseURI):
14069     implements(IVerifierURI)
14070hunk ./src/allmydata/uri.py 407
14071     def get_verify_cap(self):
14072         return self
14073 
14074+    def get_extension_params(self):
14075+        return []
14076+
14077+    def set_extension_params(self, params):
14078+        pass
14079+
14080+class WritableMDMFFileURI(_BaseURI):
14081+    implements(IURI, IMutableFileURI)
14082+
14083+    BASE_STRING='URI:MDMF:'
14084+    STRING_RE=re.compile('^'+BASE_STRING+BASE32STR_128bits+':'+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
14085+    HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'MDMF'+SEP+BASE32STR_128bits+SEP+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
14086+
14087+    def __init__(self, writekey, fingerprint, params=[]):
14088+        self.writekey = writekey
14089+        self.readkey = hashutil.ssk_readkey_hash(writekey)
14090+        self.storage_index = hashutil.ssk_storage_index_hash(self.readkey)
14091+        assert len(self.storage_index) == 16
14092+        self.fingerprint = fingerprint
14093+        self.extension = params
14094+
14095+    @classmethod
14096+    def init_from_human_encoding(cls, uri):
14097+        mo = cls.HUMAN_RE.search(uri)
14098+        if not mo:
14099+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
14100+        params = filter(lambda x: x != '', re.split(SEP, mo.group(3)))
14101+        return cls(base32.a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
14102+
14103+    @classmethod
14104+    def init_from_string(cls, uri):
14105+        mo = cls.STRING_RE.search(uri)
14106+        if not mo:
14107+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
14108+        params = mo.group(3)
14109+        params = filter(lambda x: x != '', params.split(":"))
14110+        return cls(base32.a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
14111+
14112+    def to_string(self):
14113+        assert isinstance(self.writekey, str)
14114+        assert isinstance(self.fingerprint, str)
14115+        ret = 'URI:MDMF:%s:%s' % (base32.b2a(self.writekey),
14116+                                  base32.b2a(self.fingerprint))
14117+        if self.extension:
14118+            ret += ":"
14119+            ret += ":".join(self.extension)
14120+
14121+        return ret
14122+
14123+    def __repr__(self):
14124+        return "<%s %s>" % (self.__class__.__name__, self.abbrev())
14125+
14126+    def abbrev(self):
14127+        return base32.b2a(self.writekey[:5])
14128+
14129+    def abbrev_si(self):
14130+        return base32.b2a(self.storage_index)[:5]
14131+
14132+    def is_readonly(self):
14133+        return False
14134+
14135+    def is_mutable(self):
14136+        return True
14137+
14138+    def get_readonly(self):
14139+        return ReadonlyMDMFFileURI(self.readkey, self.fingerprint, self.extension)
14140+
14141+    def get_verify_cap(self):
14142+        return MDMFVerifierURI(self.storage_index, self.fingerprint, self.extension)
14143+
14144+    def get_extension_params(self):
14145+        return self.extension
14146+
14147+    def set_extension_params(self, params):
14148+        params = map(str, params)
14149+        self.extension = params
14150+
14151+class ReadonlyMDMFFileURI(_BaseURI):
14152+    implements(IURI, IMutableFileURI)
14153+
14154+    BASE_STRING='URI:MDMF-RO:'
14155+    STRING_RE=re.compile('^'+BASE_STRING+BASE32STR_128bits+':'+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
14156+    HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'MDMF-RO'+SEP+BASE32STR_128bits+SEP+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
14157+
14158+    def __init__(self, readkey, fingerprint, params=[]):
14159+        self.readkey = readkey
14160+        self.storage_index = hashutil.ssk_storage_index_hash(self.readkey)
14161+        assert len(self.storage_index) == 16
14162+        self.fingerprint = fingerprint
14163+        self.extension = params
14164+
14165+    @classmethod
14166+    def init_from_human_encoding(cls, uri):
14167+        mo = cls.HUMAN_RE.search(uri)
14168+        if not mo:
14169+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
14170+        params = mo.group(3)
14171+        params = filter(lambda x: x != '', re.split(SEP, params))
14172+        return cls(base32.a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
14173+
14174+    @classmethod
14175+    def init_from_string(cls, uri):
14176+        mo = cls.STRING_RE.search(uri)
14177+        if not mo:
14178+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
14179+
14180+        params = mo.group(3)
14181+        params = filter(lambda x: x != '', params.split(":"))
14182+        return cls(base32.a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
14183+
14184+    def to_string(self):
14185+        assert isinstance(self.readkey, str)
14186+        assert isinstance(self.fingerprint, str)
14187+        ret = 'URI:MDMF-RO:%s:%s' % (base32.b2a(self.readkey),
14188+                                     base32.b2a(self.fingerprint))
14189+        if self.extension:
14190+            ret += ":"
14191+            ret += ":".join(self.extension)
14192+
14193+        return ret
14194+
14195+    def __repr__(self):
14196+        return "<%s %s>" % (self.__class__.__name__, self.abbrev())
14197+
14198+    def abbrev(self):
14199+        return base32.b2a(self.readkey[:5])
14200+
14201+    def abbrev_si(self):
14202+        return base32.b2a(self.storage_index)[:5]
14203+
14204+    def is_readonly(self):
14205+        return True
14206+
14207+    def is_mutable(self):
14208+        return True
14209+
14210+    def get_readonly(self):
14211+        return self
14212+
14213+    def get_verify_cap(self):
14214+        return MDMFVerifierURI(self.storage_index, self.fingerprint, self.extension)
14215+
14216+    def get_extension_params(self):
14217+        return self.extension
14218+
14219+    def set_extension_params(self, params):
14220+        params = map(str, params)
14221+        self.extension = params
14222+
14223+class MDMFVerifierURI(_BaseURI):
14224+    implements(IVerifierURI)
14225+
14226+    BASE_STRING='URI:MDMF-Verifier:'
14227+    STRING_RE=re.compile('^'+BASE_STRING+BASE32STR_128bits+':'+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
14228+    HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'MDMF-Verifier'+SEP+BASE32STR_128bits+SEP+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
14229+
14230+    def __init__(self, storage_index, fingerprint, params=[]):
14231+        assert len(storage_index) == 16
14232+        self.storage_index = storage_index
14233+        self.fingerprint = fingerprint
14234+        self.extension = params
14235+
14236+    @classmethod
14237+    def init_from_human_encoding(cls, uri):
14238+        mo = cls.HUMAN_RE.search(uri)
14239+        if not mo:
14240+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
14241+        params = mo.group(3)
14242+        params = filter(lambda x: x != '', re.split(SEP, params))
14243+        return cls(si_a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
14244+
14245+    @classmethod
14246+    def init_from_string(cls, uri):
14247+        mo = cls.STRING_RE.search(uri)
14248+        if not mo:
14249+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
14250+        params = mo.group(3)
14251+        params = filter(lambda x: x != '', params.split(":"))
14252+        return cls(si_a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
14253+
14254+    def to_string(self):
14255+        assert isinstance(self.storage_index, str)
14256+        assert isinstance(self.fingerprint, str)
14257+        ret = 'URI:MDMF-Verifier:%s:%s' % (si_b2a(self.storage_index),
14258+                                           base32.b2a(self.fingerprint))
14259+        if self.extension:
14260+            ret += ':'
14261+            ret += ":".join(self.extension)
14262+
14263+        return ret
14264+
14265+    def is_readonly(self):
14266+        return True
14267+
14268+    def is_mutable(self):
14269+        return False
14270+
14271+    def get_readonly(self):
14272+        return self
14273+
14274+    def get_verify_cap(self):
14275+        return self
14276+
14277+    def get_extension_params(self):
14278+        return self.extension
14279+
14280 class _DirectoryBaseURI(_BaseURI):
14281     implements(IURI, IDirnodeURI)
14282     def __init__(self, filenode_uri=None):
14283hunk ./src/allmydata/uri.py 750
14284         return None
14285 
14286 
14287+class MDMFDirectoryURI(_DirectoryBaseURI):
14288+    implements(IDirectoryURI)
14289+
14290+    BASE_STRING='URI:DIR2-MDMF:'
14291+    BASE_STRING_RE=re.compile('^'+BASE_STRING)
14292+    BASE_HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'DIR2-MDMF'+SEP)
14293+    INNER_URI_CLASS=WritableMDMFFileURI
14294+
14295+    def __init__(self, filenode_uri=None):
14296+        if filenode_uri:
14297+            assert not filenode_uri.is_readonly()
14298+        _DirectoryBaseURI.__init__(self, filenode_uri)
14299+
14300+    def is_readonly(self):
14301+        return False
14302+
14303+    def get_readonly(self):
14304+        return ReadonlyMDMFDirectoryURI(self._filenode_uri.get_readonly())
14305+
14306+    def get_verify_cap(self):
14307+        return MDMFDirectoryURIVerifier(self._filenode_uri.get_verify_cap())
14308+
14309+
14310+class ReadonlyMDMFDirectoryURI(_DirectoryBaseURI):
14311+    implements(IReadonlyDirectoryURI)
14312+
14313+    BASE_STRING='URI:DIR2-MDMF-RO:'
14314+    BASE_STRING_RE=re.compile('^'+BASE_STRING)
14315+    BASE_HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'DIR2-MDMF-RO'+SEP)
14316+    INNER_URI_CLASS=ReadonlyMDMFFileURI
14317+
14318+    def __init__(self, filenode_uri=None):
14319+        if filenode_uri:
14320+            assert filenode_uri.is_readonly()
14321+        _DirectoryBaseURI.__init__(self, filenode_uri)
14322+
14323+    def is_readonly(self):
14324+        return True
14325+
14326+    def get_readonly(self):
14327+        return self
14328+
14329+    def get_verify_cap(self):
14330+        return MDMFDirectoryURIVerifier(self._filenode_uri.get_verify_cap())
14331+
14332 def wrap_dirnode_cap(filecap):
14333     if isinstance(filecap, WriteableSSKFileURI):
14334         return DirectoryURI(filecap)
14335hunk ./src/allmydata/uri.py 804
14336         return ImmutableDirectoryURI(filecap)
14337     if isinstance(filecap, LiteralFileURI):
14338         return LiteralDirectoryURI(filecap)
14339+    if isinstance(filecap, WritableMDMFFileURI):
14340+        return MDMFDirectoryURI(filecap)
14341+    if isinstance(filecap, ReadonlyMDMFFileURI):
14342+        return ReadonlyMDMFDirectoryURI(filecap)
14343     assert False, "cannot interpret as a directory cap: %s" % filecap.__class__
14344 
14345hunk ./src/allmydata/uri.py 810
14346+class MDMFDirectoryURIVerifier(_DirectoryBaseURI):
14347+    implements(IVerifierURI)
14348+
14349+    BASE_STRING='URI:DIR2-MDMF-Verifier:'
14350+    BASE_STRING_RE=re.compile('^'+BASE_STRING)
14351+    BASE_HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'DIR2-MDMF-Verifier'+SEP)
14352+    INNER_URI_CLASS=MDMFVerifierURI
14353+
14354+    def __init__(self, filenode_uri=None):
14355+        if filenode_uri:
14356+            assert IVerifierURI.providedBy(filenode_uri)
14357+        self._filenode_uri = filenode_uri
14358+
14359+    def get_filenode_cap(self):
14360+        return self._filenode_uri
14361+
14362+    def is_mutable(self):
14363+        return False
14364 
14365 class DirectoryURIVerifier(_DirectoryBaseURI):
14366     implements(IVerifierURI)
14367hunk ./src/allmydata/uri.py 915
14368             kind = "URI:SSK-RO readcap to a mutable file"
14369         elif s.startswith('URI:SSK-Verifier:'):
14370             return SSKVerifierURI.init_from_string(s)
14371+        elif s.startswith('URI:MDMF:'):
14372+            return WritableMDMFFileURI.init_from_string(s)
14373+        elif s.startswith('URI:MDMF-RO:'):
14374+            return ReadonlyMDMFFileURI.init_from_string(s)
14375+        elif s.startswith('URI:MDMF-Verifier:'):
14376+            return MDMFVerifierURI.init_from_string(s)
14377         elif s.startswith('URI:DIR2:'):
14378             if can_be_writeable:
14379                 return DirectoryURI.init_from_string(s)
14380hunk ./src/allmydata/uri.py 935
14381             return ImmutableDirectoryURI.init_from_string(s)
14382         elif s.startswith('URI:DIR2-LIT:'):
14383             return LiteralDirectoryURI.init_from_string(s)
14384+        elif s.startswith('URI:DIR2-MDMF:'):
14385+            if can_be_writeable:
14386+                return MDMFDirectoryURI.init_from_string(s)
14387+            kind = "URI:DIR2-MDMF directory writecap"
14388+        elif s.startswith('URI:DIR2-MDMF-RO:'):
14389+            if can_be_mutable:
14390+                return ReadonlyMDMFDirectoryURI.init_from_string(s)
14391+            kind = "URI:DIR2-MDMF-RO readcap to a mutable directory"
14392         elif s.startswith('x-tahoe-future-test-writeable:') and not can_be_writeable:
14393             # For testing how future writeable caps would behave in read-only contexts.
14394             kind = "x-tahoe-future-test-writeable: testing cap"
14395}
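The uri.py hunks above define the MDMF cap grammar: a cap is `URI:MDMF:<base32 writekey>:<base32 fingerprint>`, optionally followed by colon-separated numeric extension parameters (the `OPTIONAL_EXTENSION_FIELD` group, e.g. `:3:131073`), which `init_from_string` splits off and passes to the constructor. A minimal standalone sketch of that parse — not Tahoe's actual implementation, with a simplified character class standing in for the base32 patterns:

```python
import re

# Hypothetical simplified pattern: group 3 captures the optional trailing
# extension field, mirroring OPTIONAL_EXTENSION_FIELD in uri.py.
CAP_RE = re.compile(r'^URI:MDMF:([a-z0-9]+):([a-z0-9]+)((?::[0-9]+)*)$')

def parse_mdmf_cap(cap):
    """Split an MDMF write cap into (writekey, fingerprint, params)."""
    mo = CAP_RE.match(cap)
    if not mo:
        raise ValueError("'%s' doesn't look like an MDMF cap" % cap)
    writekey, fingerprint, ext = mo.group(1), mo.group(2), mo.group(3)
    # Drop empty pieces from the extension field, just as the patch's
    # init_from_string() does with filter(lambda x: x != '', ...).
    params = [p for p in ext.split(":") if p != '']
    return writekey, fingerprint, params
```

A bare cap yields an empty parameter list, while a cap with extensions (the case exercised below by tests like test_GET_FILE_URI_mdmf_extensions) yields the trailing numbers as strings.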
14396[webapi changes for MDMF
14397Kevan Carstensen <kevan@isnotajoke.com>**20110802021311
14398 Ignore-this: cf8a873b621c654b23c394c78b1036b6
14399 
14400     - Learn how to create MDMF files and directories through the
14401       mutable-type argument.
14402     - Operate with the interface changes associated with MDMF and #993.
14403     - Learn how to do partial updates of mutable files.
14404] {
14405hunk ./src/allmydata/test/test_web.py 27
14406 from allmydata.util.netstring import split_netstring
14407 from allmydata.util.encodingutil import to_str
14408 from allmydata.test.common import FakeCHKFileNode, FakeMutableFileNode, \
14409-     create_chk_filenode, WebErrorMixin, ShouldFailMixin, make_mutable_file_uri
14410-from allmydata.interfaces import IMutableFileNode
14411+     create_chk_filenode, WebErrorMixin, ShouldFailMixin, \
14412+     make_mutable_file_uri, create_mutable_filenode
14413+from allmydata.interfaces import IMutableFileNode, SDMF_VERSION, MDMF_VERSION
14414 from allmydata.mutable import servermap, publish, retrieve
14415 import allmydata.test.common_util as testutil
14416 from allmydata.test.no_network import GridTestMixin
14417hunk ./src/allmydata/test/test_web.py 52
14418         return stats
14419 
14420 class FakeNodeMaker(NodeMaker):
14421+    encoding_params = {
14422+        'k': 3,
14423+        'n': 10,
14424+        'happy': 7,
14425+        'max_segment_size':128*1024 # 1024=KiB
14426+    }
14427     def _create_lit(self, cap):
14428         return FakeCHKFileNode(cap)
14429     def _create_immutable(self, cap):
14430hunk ./src/allmydata/test/test_web.py 63
14431         return FakeCHKFileNode(cap)
14432     def _create_mutable(self, cap):
14433-        return FakeMutableFileNode(None, None, None, None).init_from_cap(cap)
14434-    def create_mutable_file(self, contents="", keysize=None):
14435-        n = FakeMutableFileNode(None, None, None, None)
14436-        return n.create(contents)
14437+        return FakeMutableFileNode(None,
14438+                                   None,
14439+                                   self.encoding_params, None).init_from_cap(cap)
14440+    def create_mutable_file(self, contents="", keysize=None,
14441+                            version=SDMF_VERSION):
14442+        n = FakeMutableFileNode(None, None, self.encoding_params, None)
14443+        return n.create(contents, version=version)
14444 
14445 class FakeUploader(service.Service):
14446     name = "uploader"
14447hunk ./src/allmydata/test/test_web.py 177
14448         self.nodemaker = FakeNodeMaker(None, self._secret_holder, None,
14449                                        self.uploader, None,
14450                                        None, None)
14451+        self.mutable_file_default = SDMF_VERSION
14452 
14453     def startService(self):
14454         return service.MultiService.startService(self)
14455hunk ./src/allmydata/test/test_web.py 222
14456             foo.set_uri(u"bar.txt", self._bar_txt_uri, self._bar_txt_uri)
14457             self._bar_txt_verifycap = n.get_verify_cap().to_string()
14458 
14459+            # sdmf
14460+            # XXX: Do we ever use this?
14461+            self.BAZ_CONTENTS, n, self._baz_txt_uri, self._baz_txt_readonly_uri = self.makefile_mutable(0)
14462+
14463+            foo.set_uri(u"baz.txt", self._baz_txt_uri, self._baz_txt_readonly_uri)
14464+
14465+            # mdmf
14466+            self.QUUX_CONTENTS, n, self._quux_txt_uri, self._quux_txt_readonly_uri = self.makefile_mutable(0, mdmf=True)
14467+            assert self._quux_txt_uri.startswith("URI:MDMF")
14468+            foo.set_uri(u"quux.txt", self._quux_txt_uri, self._quux_txt_readonly_uri)
14469+
14470             foo.set_uri(u"empty", res[3][1].get_uri(),
14471                         res[3][1].get_readonly_uri())
14472             sub_uri = res[4][1].get_uri()
14473hunk ./src/allmydata/test/test_web.py 264
14474             # public/
14475             # public/foo/
14476             # public/foo/bar.txt
14477+            # public/foo/baz.txt
14478+            # public/foo/quux.txt
14479             # public/foo/blockingfile
14480             # public/foo/empty/
14481             # public/foo/sub/
14482hunk ./src/allmydata/test/test_web.py 286
14483         n = create_chk_filenode(contents)
14484         return contents, n, n.get_uri()
14485 
14486+    def makefile_mutable(self, number, mdmf=False):
14487+        contents = "contents of mutable file %s\n" % number
14488+        n = create_mutable_filenode(contents, mdmf)
14489+        return contents, n, n.get_uri(), n.get_readonly_uri()
14490+
14491     def tearDown(self):
14492         return self.s.stopService()
14493 
14494hunk ./src/allmydata/test/test_web.py 297
14495     def failUnlessIsBarDotTxt(self, res):
14496         self.failUnlessReallyEqual(res, self.BAR_CONTENTS, res)
14497 
14498+    def failUnlessIsQuuxDotTxt(self, res):
14499+        self.failUnlessReallyEqual(res, self.QUUX_CONTENTS, res)
14500+
14501+    def failUnlessIsBazDotTxt(self, res):
14502+        self.failUnlessReallyEqual(res, self.BAZ_CONTENTS, res)
14503+
14504     def failUnlessIsBarJSON(self, res):
14505         data = simplejson.loads(res)
14506         self.failUnless(isinstance(data, list))
14507hunk ./src/allmydata/test/test_web.py 314
14508         self.failUnlessReallyEqual(to_str(data[1]["verify_uri"]), self._bar_txt_verifycap)
14509         self.failUnlessReallyEqual(data[1]["size"], len(self.BAR_CONTENTS))
14510 
14511+    def failUnlessIsQuuxJSON(self, res, readonly=False):
14512+        data = simplejson.loads(res)
14513+        self.failUnless(isinstance(data, list))
14514+        self.failUnlessEqual(data[0], "filenode")
14515+        self.failUnless(isinstance(data[1], dict))
14516+        metadata = data[1]
14517+        return self.failUnlessIsQuuxDotTxtMetadata(metadata, readonly)
14518+
14519+    def failUnlessIsQuuxDotTxtMetadata(self, metadata, readonly):
14520+        self.failUnless(metadata['mutable'])
14521+        if readonly:
14522+            self.failIf("rw_uri" in metadata)
14523+        else:
14524+            self.failUnless("rw_uri" in metadata)
14525+            self.failUnlessEqual(metadata['rw_uri'], self._quux_txt_uri)
14526+        self.failUnless("ro_uri" in metadata)
14527+        self.failUnlessEqual(metadata['ro_uri'], self._quux_txt_readonly_uri)
14528+        self.failUnlessReallyEqual(metadata['size'], len(self.QUUX_CONTENTS))
14529+
14530     def failUnlessIsFooJSON(self, res):
14531         data = simplejson.loads(res)
14532         self.failUnless(isinstance(data, list))
14533hunk ./src/allmydata/test/test_web.py 346
14534 
14535         kidnames = sorted([unicode(n) for n in data[1]["children"]])
14536         self.failUnlessEqual(kidnames,
14537-                             [u"bar.txt", u"blockingfile", u"empty",
14538-                              u"n\u00fc.txt", u"sub"])
14539+                             [u"bar.txt", u"baz.txt", u"blockingfile",
14540+                              u"empty", u"n\u00fc.txt", u"quux.txt", u"sub"])
14541         kids = dict( [(unicode(name),value)
14542                       for (name,value)
14543                       in data[1]["children"].iteritems()] )
14544hunk ./src/allmydata/test/test_web.py 368
14545                                    self._bar_txt_metadata["tahoe"]["linkcrtime"])
14546         self.failUnlessReallyEqual(to_str(kids[u"n\u00fc.txt"][1]["ro_uri"]),
14547                                    self._bar_txt_uri)
14548+        self.failUnlessIn("quux.txt", kids)
14549+        self.failUnlessReallyEqual(kids[u"quux.txt"][1]["rw_uri"],
14550+                                   self._quux_txt_uri)
14551+        self.failUnlessReallyEqual(kids[u"quux.txt"][1]["ro_uri"],
14552+                                   self._quux_txt_readonly_uri)
14553 
14554     def GET(self, urlpath, followRedirect=False, return_response=False,
14555             **kwargs):
14556hunk ./src/allmydata/test/test_web.py 845
14557                              self.PUT, base + "/@@name=/blah.txt", "")
14558         return d
14559 
14560+
14561     def test_GET_DIRURL_named_bad(self):
14562         base = "/file/%s" % urllib.quote(self._foo_uri)
14563         d = self.shouldFail2(error.Error, "test_PUT_DIRURL_named_bad",
14564hunk ./src/allmydata/test/test_web.py 888
14565         d.addCallback(self.failUnlessIsBarDotTxt)
14566         return d
14567 
14568+    def test_GET_FILE_URI_mdmf(self):
14569+        base = "/uri/%s" % urllib.quote(self._quux_txt_uri)
14570+        d = self.GET(base)
14571+        d.addCallback(self.failUnlessIsQuuxDotTxt)
14572+        return d
14573+
14574+    def test_GET_FILE_URI_mdmf_extensions(self):
14575+        base = "/uri/%s" % urllib.quote("%s:3:131073" % self._quux_txt_uri)
14576+        d = self.GET(base)
14577+        d.addCallback(self.failUnlessIsQuuxDotTxt)
14578+        return d
14579+
14580+    def test_GET_FILE_URI_mdmf_bare_cap(self):
14581+        cap_elements = self._quux_txt_uri.split(":")
14582+        # 6 == expected number of colon-separated fields in a cap
14582+        # with two extension parameters.
14583+        self.failUnlessEqual(len(cap_elements), 6)
14584+
14585+        # Now lop off the extension parameters and stitch everything
14586+        # back together
14587+        quux_uri = ":".join(cap_elements[:len(cap_elements) - 2])
14588+
14589+        # Now GET that. We should get back quux.
14590+        base = "/uri/%s" % urllib.quote(quux_uri)
14591+        d = self.GET(base)
14592+        d.addCallback(self.failUnlessIsQuuxDotTxt)
14593+        return d
14594+
14595+    def test_GET_FILE_URI_mdmf_readonly(self):
14596+        base = "/uri/%s" % urllib.quote(self._quux_txt_readonly_uri)
14597+        d = self.GET(base)
14598+        d.addCallback(self.failUnlessIsQuuxDotTxt)
14599+        return d
14600+
14601     def test_GET_FILE_URI_badchild(self):
14602         base = "/uri/%s/boguschild" % urllib.quote(self._bar_txt_uri)
14603         errmsg = "Files have no children, certainly not named 'boguschild'"
14604hunk ./src/allmydata/test/test_web.py 937
14605                              self.PUT, base, "")
14606         return d
14607 
14608+    def test_PUT_FILE_URI_mdmf(self):
14609+        base = "/uri/%s" % urllib.quote(self._quux_txt_uri)
14610+        self._quux_new_contents = "new_contents"
14611+        d = self.GET(base)
14612+        d.addCallback(lambda res:
14613+            self.failUnlessIsQuuxDotTxt(res))
14614+        d.addCallback(lambda ignored:
14615+            self.PUT(base, self._quux_new_contents))
14616+        d.addCallback(lambda ignored:
14617+            self.GET(base))
14618+        d.addCallback(lambda res:
14619+            self.failUnlessReallyEqual(res, self._quux_new_contents))
14620+        return d
14621+
14622+    def test_PUT_FILE_URI_mdmf_extensions(self):
14623+        base = "/uri/%s" % urllib.quote("%s:3:131073" % self._quux_txt_uri)
14624+        self._quux_new_contents = "new_contents"
14625+        d = self.GET(base)
14626+        d.addCallback(lambda res: self.failUnlessIsQuuxDotTxt(res))
14627+        d.addCallback(lambda ignored: self.PUT(base, self._quux_new_contents))
14628+        d.addCallback(lambda ignored: self.GET(base))
14629+        d.addCallback(lambda res: self.failUnlessEqual(self._quux_new_contents,
14630+                                                       res))
14631+        return d
14632+
14633+    def test_PUT_FILE_URI_mdmf_bare_cap(self):
14634+        elements = self._quux_txt_uri.split(":")
14635+        self.failUnlessEqual(len(elements), 6)
14636+
14637+        quux_uri = ":".join(elements[:len(elements) - 2])
14638+        base = "/uri/%s" % urllib.quote(quux_uri)
14639+        self._quux_new_contents = "new_contents" * 50000
14640+
14641+        d = self.GET(base)
14642+        d.addCallback(self.failUnlessIsQuuxDotTxt)
14643+        d.addCallback(lambda ignored: self.PUT(base, self._quux_new_contents))
14644+        d.addCallback(lambda ignored: self.GET(base))
14645+        d.addCallback(lambda res:
14646+            self.failUnlessEqual(res, self._quux_new_contents))
14647+        return d
14648+
14649+    def test_PUT_FILE_URI_mdmf_readonly(self):
14650+        # We're not allowed to PUT things to a readonly cap.
14651+        base = "/uri/%s" % self._quux_txt_readonly_uri
14652+        d = self.GET(base)
14653+        d.addCallback(lambda res:
14654+            self.failUnlessIsQuuxDotTxt(res))
14655+        # This should be rejected with a 400; the server currently
14655+        # returns a 500 error, which is wrong.
14656+        d.addCallback(lambda ignored:
14657+            self.shouldFail2(error.Error, "test_PUT_FILE_URI_mdmf_readonly",
14658+                             "400 Bad Request", "read-only cap",
14659+                             self.PUT, base, "new data"))
14660+        return d
14661+
14662+    def test_PUT_FILE_URI_sdmf_readonly(self):
14663+        # We're not allowed to put things to a readonly cap.
14664+        base = "/uri/%s" % self._baz_txt_readonly_uri
14665+        d = self.GET(base)
14666+        d.addCallback(lambda res:
14667+            self.failUnlessIsBazDotTxt(res))
14668+        d.addCallback(lambda ignored:
14669+            self.shouldFail2(error.Error, "test_PUT_FILE_URI_sdmf_readonly",
14670+                             "400 Bad Request", "read-only cap",
14671+                             self.PUT, base, "new_data"))
14672+        return d
14673+
14674     # TODO: version of this with a Unicode filename
14675     def test_GET_FILEURL_save(self):
14676         d = self.GET(self.public_url + "/foo/bar.txt?filename=bar.txt&save=true",
14677hunk ./src/allmydata/test/test_web.py 1019
14678         d.addBoth(self.should404, "test_GET_FILEURL_missing")
14679         return d
14680 
14681+    def test_GET_FILEURL_info_mdmf(self):
14682+        d = self.GET("/uri/%s?t=info" % self._quux_txt_uri)
14683+        def _got(res):
14684+            self.failUnlessIn("mutable file (mdmf)", res)
14685+            self.failUnlessIn(self._quux_txt_uri, res)
14686+            self.failUnlessIn(self._quux_txt_readonly_uri, res)
14687+        d.addCallback(_got)
14688+        return d
14689+
14690+    def test_GET_FILEURL_info_mdmf_readonly(self):
14691+        d = self.GET("/uri/%s?t=info" % self._quux_txt_readonly_uri)
14692+        def _got(res):
14693+            self.failUnlessIn("mutable file (mdmf)", res)
14694+            self.failIfIn(self._quux_txt_uri, res)
14695+            self.failUnlessIn(self._quux_txt_readonly_uri, res)
14696+        d.addCallback(_got)
14697+        return d
14698+
14699+    def test_GET_FILEURL_info_sdmf(self):
14700+        d = self.GET("/uri/%s?t=info" % self._baz_txt_uri)
14701+        def _got(res):
14702+            self.failUnlessIn("mutable file (sdmf)", res)
14703+            self.failUnlessIn(self._baz_txt_uri, res)
14704+        d.addCallback(_got)
14705+        return d
14706+
14707+    def test_GET_FILEURL_info_mdmf_extensions(self):
14708+        d = self.GET("/uri/%s:3:131073?t=info" % self._quux_txt_uri)
14709+        def _got(res):
14710+            self.failUnlessIn("mutable file (mdmf)", res)
14711+            self.failUnlessIn(self._quux_txt_uri, res)
14712+            self.failUnlessIn(self._quux_txt_readonly_uri, res)
14713+        d.addCallback(_got)
14714+        return d
14715+
14716+    def test_GET_FILEURL_info_mdmf_bare_cap(self):
14717+        elements = self._quux_txt_uri.split(":")
14718+        self.failUnlessEqual(len(elements), 6)
14719+
14720+        quux_uri = ":".join(elements[:len(elements) - 2])
14721+        base = "/uri/%s?t=info" % urllib.quote(quux_uri)
14722+        d = self.GET(base)
14723+        def _got(res):
14724+            self.failUnlessIn("mutable file (mdmf)", res)
14725+            self.failUnlessIn(quux_uri, res)
14726+        d.addCallback(_got)
14727+        return d
14728+
14729     def test_PUT_overwrite_only_files(self):
14730         # create a directory, put a file in that directory.
14731         contents, n, filecap = self.makefile(8)
14732hunk ./src/allmydata/test/test_web.py 1108
14733                                                       self.NEWFILE_CONTENTS))
14734         return d
14735 
14736+    def test_PUT_NEWFILEURL_unlinked_mdmf(self):
14737+        # this should get us a few segments of an MDMF mutable file,
14738+        # which we can then test for.
14739+        contents = self.NEWFILE_CONTENTS * 300000
14740+        d = self.PUT("/uri?mutable=true&mutable-type=mdmf",
14741+                     contents)
14742+        def _got_filecap(filecap):
14743+            self.failUnless(filecap.startswith("URI:MDMF"))
14744+            return filecap
14745+        d.addCallback(_got_filecap)
14746+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
14747+        d.addCallback(lambda json: self.failUnlessIn("mdmf", json))
14748+        return d
14749+
14750+    def test_PUT_NEWFILEURL_unlinked_sdmf(self):
14751+        contents = self.NEWFILE_CONTENTS * 300000
14752+        d = self.PUT("/uri?mutable=true&mutable-type=sdmf",
14753+                     contents)
14754+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
14755+        d.addCallback(lambda json: self.failUnlessIn("sdmf", json))
14756+        return d
14757+
14758+    def test_PUT_NEWFILEURL_unlinked_bad_mutable_type(self):
14759+        contents = self.NEWFILE_CONTENTS * 300000
14760+        return self.shouldHTTPError("test bad mutable type",
14761+                                    400, "Bad Request", "Unknown type: foo",
14762+                                    self.PUT, "/uri?mutable=true&mutable-type=foo",
14763+                                    contents)
14764+
14765     def test_PUT_NEWFILEURL_range_bad(self):
14766         headers = {"content-range": "bytes 1-10/%d" % len(self.NEWFILE_CONTENTS)}
14767         target = self.public_url + "/foo/new.txt"
14768hunk ./src/allmydata/test/test_web.py 1169
14769         return d
14770 
14771     def test_PUT_NEWFILEURL_mutable_toobig(self):
14772-        d = self.shouldFail2(error.Error, "test_PUT_NEWFILEURL_mutable_toobig",
14773-                             "413 Request Entity Too Large",
14774-                             "SDMF is limited to one segment, and 10001 > 10000",
14775-                             self.PUT,
14776-                             self.public_url + "/foo/new.txt?mutable=true",
14777-                             "b" * (self.s.MUTABLE_SIZELIMIT+1))
14778+        # It is okay to upload large mutable files, so we should be able
14779+        # to do that.
14780+        d = self.PUT(self.public_url + "/foo/new.txt?mutable=true",
14781+                     "b" * (self.s.MUTABLE_SIZELIMIT + 1))
14782         return d
14783 
14784     def test_PUT_NEWFILEURL_replace(self):
14785hunk ./src/allmydata/test/test_web.py 1267
14786         d.addCallback(_check1)
14787         return d
14788 
14789+    def test_GET_FILEURL_json_mutable_type(self):
14790+        # The JSON should include mutable-type, which says whether the
14791+        # file is SDMF or MDMF
14792+        d = self.PUT("/uri?mutable=true&mutable-type=mdmf",
14793+                     self.NEWFILE_CONTENTS * 300000)
14794+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
14795+        def _got_json(json, version):
14796+            data = simplejson.loads(json)
14797+            assert "filenode" == data[0]
14798+            data = data[1]
14799+            assert isinstance(data, dict)
14800+
14801+            self.failUnlessIn("mutable-type", data)
14802+            self.failUnlessEqual(data['mutable-type'], version)
14803+
14804+        d.addCallback(_got_json, "mdmf")
14805+        # Now make an SDMF file and check that it is reported correctly.
14806+        d.addCallback(lambda ignored:
14807+            self.PUT("/uri?mutable=true&mutable-type=sdmf",
14808+                      self.NEWFILE_CONTENTS * 300000))
14809+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
14810+        d.addCallback(_got_json, "sdmf")
14811+        return d
14812+
14813+    def test_GET_FILEURL_json_mdmf_extensions(self):
14814+        # A GET invoked against a URL that includes an MDMF cap with
14815+        # extensions should fetch the same JSON information as a GET
14816+        # invoked against a bare cap.
14817+        self._quux_txt_uri = "%s:3:131073" % self._quux_txt_uri
14818+        self._quux_txt_readonly_uri = "%s:3:131073" % self._quux_txt_readonly_uri
14819+        d = self.GET("/uri/%s?t=json" % urllib.quote(self._quux_txt_uri))
14820+        d.addCallback(self.failUnlessIsQuuxJSON)
14821+        return d
14822+
14823+    def test_GET_FILEURL_json_mdmf_bare_cap(self):
14824+        elements = self._quux_txt_uri.split(":")
14825+        self.failUnlessEqual(len(elements), 6)
14826+
14827+        quux_uri = ":".join(elements[:len(elements) - 2])
14828+        # so failUnlessIsQuuxJSON will work.
14829+        self._quux_txt_uri = quux_uri
14830+
14831+        # we need to alter the readonly URI in the same way, again so
14832+        # failUnlessIsQuuxJSON will work
14833+        elements = self._quux_txt_readonly_uri.split(":")
14834+        self.failUnlessEqual(len(elements), 6)
14835+        quux_ro_uri = ":".join(elements[:len(elements) - 2])
14836+        self._quux_txt_readonly_uri = quux_ro_uri
14837+
14838+        base = "/uri/%s?t=json" % urllib.quote(quux_uri)
14839+        d = self.GET(base)
14840+        d.addCallback(self.failUnlessIsQuuxJSON)
14841+        return d
14842+
14843+    def test_GET_FILEURL_json_mdmf_bare_readonly_cap(self):
14844+        elements = self._quux_txt_readonly_uri.split(":")
14845+        self.failUnlessEqual(len(elements), 6)
14846+
14847+        quux_readonly_uri = ":".join(elements[:len(elements) - 2])
14848+        # so failUnlessIsQuuxJSON will work
14849+        self._quux_txt_readonly_uri = quux_readonly_uri
14850+        base = "/uri/%s?t=json" % quux_readonly_uri
14851+        d = self.GET(base)
14852+        # XXX: We may need a separate method that knows how to check
14853+        # readonly JSON, or else teach failUnlessIsQuuxJSON how to
14854+        # do that.
14855+        d.addCallback(self.failUnlessIsQuuxJSON, readonly=True)
14856+        return d
14857+
14858+    def test_GET_FILEURL_json_mdmf(self):
14859+        d = self.GET("/uri/%s?t=json" % urllib.quote(self._quux_txt_uri))
14860+        d.addCallback(self.failUnlessIsQuuxJSON)
14861+        return d
14862+
14863     def test_GET_FILEURL_json_missing(self):
14864         d = self.GET(self.public_url + "/foo/missing?json")
14865         d.addBoth(self.should404, "test_GET_FILEURL_json_missing")
14866hunk ./src/allmydata/test/test_web.py 1373
14867             self.failUnless(CSS_STYLE.search(res), res)
14868         d.addCallback(_check)
14869         return d
14870-   
14871+
14872     def test_GET_FILEURL_uri_missing(self):
14873         d = self.GET(self.public_url + "/foo/missing?t=uri")
14874         d.addBoth(self.should404, "test_GET_FILEURL_uri_missing")
14875hunk ./src/allmydata/test/test_web.py 1379
14876         return d
14877 
14878-    def test_GET_DIRECTORY_html_banner(self):
14879+    def test_GET_DIRECTORY_html(self):
14880         d = self.GET(self.public_url + "/foo", followRedirect=True)
14881         def _check(res):
14882             self.failUnlessIn('<div class="toolbar-item"><a href="../../..">Return to Welcome page</a></div>',res)
14883hunk ./src/allmydata/test/test_web.py 1383
14884+            # These are radio buttons that allow a user to toggle
14885+            # whether a particular mutable file is SDMF or MDMF.
14886+            self.failUnlessIn("mutable-type-mdmf", res)
14887+            self.failUnlessIn("mutable-type-sdmf", res)
14888+            # Similarly, these toggle whether a particular directory
14889+            # should be MDMF or SDMF.
14890+            self.failUnlessIn("mutable-directory-mdmf", res)
14891+            self.failUnlessIn("mutable-directory-sdmf", res)
14892+            self.failUnlessIn("quux", res)
14893         d.addCallback(_check)
14894         return d
14895 
14896hunk ./src/allmydata/test/test_web.py 1395
14897+    def test_GET_root_html(self):
14898+        # make sure that we have the option to upload an unlinked
14899+        # mutable file in SDMF and MDMF formats.
14900+        d = self.GET("/")
14901+        def _got_html(html):
14902+            # These are radio buttons that allow the user to toggle
14903+            # whether a particular mutable file is MDMF or SDMF.
14904+            self.failUnlessIn("mutable-type-mdmf", html)
14905+            self.failUnlessIn("mutable-type-sdmf", html)
14906+            # We should also have the ability to create a mutable directory.
14907+            self.failUnlessIn("mkdir", html)
14908+            # ...and we should have the ability to say whether that's an
14909+            # MDMF or SDMF directory
14910+            self.failUnlessIn("mutable-directory-mdmf", html)
14911+            self.failUnlessIn("mutable-directory-sdmf", html)
14912+        d.addCallback(_got_html)
14913+        return d
14914+
14915+    def test_mutable_type_defaults(self):
14916+        # The checked="checked" attribute of the inputs corresponding to
14917+        # the mutable-type parameter should change as expected with the
14918+        # value configured in tahoe.cfg.
14919+        #
14920+        # By default, the value configured with the client is
14921+        # SDMF_VERSION, so that should be checked.
14922+        assert self.s.mutable_file_default == SDMF_VERSION
14923+
14924+        d = self.GET("/")
14925+        def _got_html(html, value):
14926+            i = 'input checked="checked" type="radio" id="mutable-type-%s"'
14927+            self.failUnlessIn(i % value, html)
14928+        d.addCallback(_got_html, "sdmf")
14929+        d.addCallback(lambda ignored:
14930+            self.GET(self.public_url + "/foo", followRedirect=True))
14931+        d.addCallback(_got_html, "sdmf")
14932+        # Now switch the configuration value to MDMF. The MDMF radio
14933+        # buttons should now be checked on these pages.
14934+        def _swap_values(ignored):
14935+            self.s.mutable_file_default = MDMF_VERSION
14936+        d.addCallback(_swap_values)
14937+        d.addCallback(lambda ignored: self.GET("/"))
14938+        d.addCallback(_got_html, "mdmf")
14939+        d.addCallback(lambda ignored:
14940+            self.GET(self.public_url + "/foo", followRedirect=True))
14941+        d.addCallback(_got_html, "mdmf")
14942+        return d
14943+
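The default-checking test above looks for a `checked="checked"` attribute on the radio input matching the configured default. A sketch of how such an input might be rendered, where `mutable_type_radio` is a hypothetical helper and not the actual template code:

```python
def mutable_type_radio(value, default):
    # Render one mutable-type radio input, marking the value that
    # matches the node's configured default (e.g. "sdmf") as checked.
    checked = 'checked="checked" ' if value == default else ""
    return '<input %stype="radio" id="mutable-type-%s" />' % (checked, value)

# With an SDMF default, only the sdmf button carries the attribute:
sdmf_input = mutable_type_radio("sdmf", "sdmf")
mdmf_input = mutable_type_radio("mdmf", "sdmf")
```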
14944     def test_GET_DIRURL(self):
14945         # the addSlash means we get a redirect here
14946         # from /uri/$URI/foo/ , we need ../../../ to get back to the root
14947hunk ./src/allmydata/test/test_web.py 1535
14948         d.addCallback(self.failUnlessIsFooJSON)
14949         return d
14950 
14951+    def test_GET_DIRURL_json_mutable_type(self):
14952+        d = self.PUT(self.public_url + \
14953+                     "/foo/sdmf.txt?mutable=true&mutable-type=sdmf",
14954+                     self.NEWFILE_CONTENTS * 300000)
14955+        d.addCallback(lambda ignored:
14956+            self.PUT(self.public_url + \
14957+                     "/foo/mdmf.txt?mutable=true&mutable-type=mdmf",
14958+                     self.NEWFILE_CONTENTS * 300000))
14959+        # Now we have an MDMF and SDMF file in the directory. If we GET
14960+        # its JSON, we should see their encodings.
14961+        d.addCallback(lambda ignored:
14962+            self.GET(self.public_url + "/foo?t=json"))
14963+        def _got_json(json):
14964+            data = simplejson.loads(json)
14965+            assert data[0] == "dirnode"
14966+
14967+            data = data[1]
14968+            kids = data['children']
14969+
14970+            mdmf_data = kids['mdmf.txt'][1]
14971+            self.failUnlessIn("mutable-type", mdmf_data)
14972+            self.failUnlessEqual(mdmf_data['mutable-type'], "mdmf")
14973+
14974+            sdmf_data = kids['sdmf.txt'][1]
14975+            self.failUnlessIn("mutable-type", sdmf_data)
14976+            self.failUnlessEqual(sdmf_data['mutable-type'], "sdmf")
14977+        d.addCallback(_got_json)
14978+        return d
14979+
14980 
14981     def test_POST_DIRURL_manifest_no_ophandle(self):
14982         d = self.shouldFail2(error.Error,
14983hunk ./src/allmydata/test/test_web.py 1659
14984         d.addCallback(self.get_operation_results, "127", "json")
14985         def _got_json(stats):
14986             expected = {"count-immutable-files": 3,
14987-                        "count-mutable-files": 0,
14988+                        "count-mutable-files": 2,
14989                         "count-literal-files": 0,
14990hunk ./src/allmydata/test/test_web.py 1661
14991-                        "count-files": 3,
14992+                        "count-files": 5,
14993                         "count-directories": 3,
14994                         "size-immutable-files": 57,
14995                         "size-literal-files": 0,
14996hunk ./src/allmydata/test/test_web.py 1667
14997                         #"size-directories": 1912, # varies
14998                         #"largest-directory": 1590,
14999-                        "largest-directory-children": 5,
15000+                        "largest-directory-children": 7,
15001                         "largest-immutable-file": 19,
15002                         }
15003             for k,v in expected.iteritems():
15004hunk ./src/allmydata/test/test_web.py 1684
15005         def _check(res):
15006             self.failUnless(res.endswith("\n"))
15007             units = [simplejson.loads(t) for t in res[:-1].split("\n")]
15008-            self.failUnlessReallyEqual(len(units), 7)
15009+            self.failUnlessReallyEqual(len(units), 9)
15010             self.failUnlessEqual(units[-1]["type"], "stats")
15011             first = units[0]
15012             self.failUnlessEqual(first["path"], [])
15013hunk ./src/allmydata/test/test_web.py 1695
15014             self.failIfEqual(baz["storage-index"], None)
15015             self.failIfEqual(baz["verifycap"], None)
15016             self.failIfEqual(baz["repaircap"], None)
15017+            # XXX: Add quux and baz to this test.
15018             return
15019         d.addCallback(_check)
15020         return d
15021hunk ./src/allmydata/test/test_web.py 1722
15022         d.addCallback(self.failUnlessNodeKeysAre, [])
15023         return d
15024 
15025+    def test_PUT_NEWDIRURL_mdmf(self):
15026+        d = self.PUT(self.public_url + "/foo/newdir?t=mkdir&mutable-type=mdmf", "")
15027+        d.addCallback(lambda res:
15028+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
15029+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15030+        d.addCallback(lambda node:
15031+            self.failUnlessEqual(node._node.get_version(), MDMF_VERSION))
15032+        return d
15033+
15034+    def test_PUT_NEWDIRURL_sdmf(self):
15035+        d = self.PUT(self.public_url + "/foo/newdir?t=mkdir&mutable-type=sdmf",
15036+                     "")
15037+        d.addCallback(lambda res:
15038+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
15039+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15040+        d.addCallback(lambda node:
15041+            self.failUnlessEqual(node._node.get_version(), SDMF_VERSION))
15042+        return d
15043+
15044+    def test_PUT_NEWDIRURL_bad_mutable_type(self):
15045+        return self.shouldHTTPError("test bad mutable type",
15046+                             400, "Bad Request", "Unknown type: foo",
15047+                             self.PUT, self.public_url + \
15048+                             "/foo/newdir?t=mkdir&mutable-type=foo", "")
15049+
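The bad-mutable-type tests all expect an "Unknown type: foo" message with a 400 Bad Request. A sketch of the server-side validation they exercise, using stand-in constants rather than the real ones from `allmydata.interfaces`:

```python
SDMF_VERSION, MDMF_VERSION = 0, 1  # stand-ins for the real constants

def parse_mutable_type(arg):
    # Map the mutable-type query argument to a version constant; an
    # unrecognized value raises the "Unknown type" error that the web
    # layer turns into a 400 Bad Request.
    if arg == "sdmf":
        return SDMF_VERSION
    elif arg == "mdmf":
        return MDMF_VERSION
    raise ValueError("Unknown type: %s" % arg)
```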
15050     def test_POST_NEWDIRURL(self):
15051         d = self.POST2(self.public_url + "/foo/newdir?t=mkdir", "")
15052         d.addCallback(lambda res:
15053hunk ./src/allmydata/test/test_web.py 1755
15054         d.addCallback(self.failUnlessNodeKeysAre, [])
15055         return d
15056 
15057+    def test_POST_NEWDIRURL_mdmf(self):
15058+        d = self.POST2(self.public_url + "/foo/newdir?t=mkdir&mutable-type=mdmf", "")
15059+        d.addCallback(lambda res:
15060+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
15061+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15062+        d.addCallback(lambda node:
15063+            self.failUnlessEqual(node._node.get_version(), MDMF_VERSION))
15064+        return d
15065+
15066+    def test_POST_NEWDIRURL_sdmf(self):
15067+        d = self.POST2(self.public_url + "/foo/newdir?t=mkdir&mutable-type=sdmf", "")
15068+        d.addCallback(lambda res:
15069+            self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
15070+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15071+        d.addCallback(lambda node:
15072+            self.failUnlessEqual(node._node.get_version(), SDMF_VERSION))
15073+        return d
15074+
15075+    def test_POST_NEWDIRURL_bad_mutable_type(self):
15076+        return self.shouldHTTPError("test bad mutable type",
15077+                                    400, "Bad Request", "Unknown type: foo",
15078+                                    self.POST2, self.public_url + \
15079+                                    "/foo/newdir?t=mkdir&mutable-type=foo", "")
15080+
15081     def test_POST_NEWDIRURL_emptyname(self):
15082         # an empty pathname component (i.e. a double-slash) is disallowed
15083         d = self.shouldFail2(error.Error, "test_POST_NEWDIRURL_emptyname",
15084hunk ./src/allmydata/test/test_web.py 1787
15085                              self.POST, self.public_url + "//?t=mkdir")
15086         return d
15087 
15088-    def test_POST_NEWDIRURL_initial_children(self):
15089+    def _do_POST_NEWDIRURL_initial_children_test(self, version=None):
15090         (newkids, caps) = self._create_initial_children()
15091hunk ./src/allmydata/test/test_web.py 1789
15092-        d = self.POST2(self.public_url + "/foo/newdir?t=mkdir-with-children",
15093+        query = "/foo/newdir?t=mkdir-with-children"
15094+        if version == MDMF_VERSION:
15095+            query += "&mutable-type=mdmf"
15096+        elif version == SDMF_VERSION:
15097+            query += "&mutable-type=sdmf"
15098+        else:
15099+            version = SDMF_VERSION # for later
15100+        d = self.POST2(self.public_url + query,
15101                        simplejson.dumps(newkids))
15102         def _check(uri):
15103             n = self.s.create_node_from_uri(uri.strip())
15104hunk ./src/allmydata/test/test_web.py 1801
15105             d2 = self.failUnlessNodeKeysAre(n, newkids.keys())
15106+            self.failUnlessEqual(n._node.get_version(), version)
15107             d2.addCallback(lambda ign:
15108                            self.failUnlessROChildURIIs(n, u"child-imm",
15109                                                        caps['filecap1']))
15110hunk ./src/allmydata/test/test_web.py 1839
15111         d.addCallback(self.failUnlessROChildURIIs, u"child-imm", caps['filecap1'])
15112         return d
15113 
15114+    def test_POST_NEWDIRURL_initial_children(self):
15115+        return self._do_POST_NEWDIRURL_initial_children_test()
15116+
15117+    def test_POST_NEWDIRURL_initial_children_mdmf(self):
15118+        return self._do_POST_NEWDIRURL_initial_children_test(MDMF_VERSION)
15119+
15120+    def test_POST_NEWDIRURL_initial_children_sdmf(self):
15121+        return self._do_POST_NEWDIRURL_initial_children_test(SDMF_VERSION)
15122+
15123+    def test_POST_NEWDIRURL_initial_children_bad_mutable_type(self):
15124+        (newkids, caps) = self._create_initial_children()
15125+        return self.shouldHTTPError("test bad mutable type",
15126+                                    400, "Bad Request", "Unknown type: foo",
15127+                                    self.POST2, self.public_url + \
15128+                                    "/foo/newdir?t=mkdir-with-children&mutable-type=foo",
15129+                                    simplejson.dumps(newkids))
15130+
15131     def test_POST_NEWDIRURL_immutable(self):
15132         (newkids, caps) = self._create_immutable_children()
15133         d = self.POST2(self.public_url + "/foo/newdir?t=mkdir-immutable",
15134hunk ./src/allmydata/test/test_web.py 1956
15135         d.addCallback(self.failUnlessNodeKeysAre, [])
15136         return d
15137 
15138+    def test_PUT_NEWDIRURL_mkdirs_mdmf(self):
15139+        d = self.PUT(self.public_url + "/foo/subdir/newdir?t=mkdir&mutable-type=mdmf", "")
15140+        d.addCallback(lambda ignored:
15141+            self.failUnlessNodeHasChild(self._foo_node, u"subdir"))
15142+        d.addCallback(lambda ignored:
15143+            self.failIfNodeHasChild(self._foo_node, u"newdir"))
15144+        d.addCallback(lambda ignored:
15145+            self._foo_node.get_child_at_path(u"subdir"))
15146+        def _got_subdir(subdir):
15147+            # XXX: Is this what we want?
15148+            #self.failUnlessEqual(subdir._node.get_version(), MDMF_VERSION)
15149+            self.failUnlessNodeHasChild(subdir, u"newdir")
15150+            return subdir.get_child_at_path(u"newdir")
15151+        d.addCallback(_got_subdir)
15152+        d.addCallback(lambda newdir:
15153+            self.failUnlessEqual(newdir._node.get_version(), MDMF_VERSION))
15154+        return d
15155+
15156+    def test_PUT_NEWDIRURL_mkdirs_sdmf(self):
15157+        d = self.PUT(self.public_url + "/foo/subdir/newdir?t=mkdir&mutable-type=sdmf", "")
15158+        d.addCallback(lambda ignored:
15159+            self.failUnlessNodeHasChild(self._foo_node, u"subdir"))
15160+        d.addCallback(lambda ignored:
15161+            self.failIfNodeHasChild(self._foo_node, u"newdir"))
15162+        d.addCallback(lambda ignored:
15163+            self._foo_node.get_child_at_path(u"subdir"))
15164+        def _got_subdir(subdir):
15165+            # XXX: Is this what we want?
15166+            #self.failUnlessEqual(subdir._node.get_version(), MDMF_VERSION)
15167+            self.failUnlessNodeHasChild(subdir, u"newdir")
15168+            return subdir.get_child_at_path(u"newdir")
15169+        d.addCallback(_got_subdir)
15170+        d.addCallback(lambda newdir:
15171+            self.failUnlessEqual(newdir._node.get_version(), SDMF_VERSION))
15172+        return d
15173+
15174+    def test_PUT_NEWDIRURL_mkdirs_bad_mutable_type(self):
15175+        return self.shouldHTTPError("test bad mutable type",
15176+                                    400, "Bad Request", "Unknown type: foo",
15177+                                    self.PUT, self.public_url + \
15178+                                    "/foo/subdir/newdir?t=mkdir&mutable-type=foo",
15179+                                    "")
15180+
15181     def test_DELETE_DIRURL(self):
15182         d = self.DELETE(self.public_url + "/foo")
15183         d.addCallback(lambda res:
15184hunk ./src/allmydata/test/test_web.py 2236
15185         return d
15186 
15187     def test_POST_upload_no_link_mutable_toobig(self):
15188-        d = self.shouldFail2(error.Error,
15189-                             "test_POST_upload_no_link_mutable_toobig",
15190-                             "413 Request Entity Too Large",
15191-                             "SDMF is limited to one segment, and 10001 > 10000",
15192-                             self.POST,
15193-                             "/uri", t="upload", mutable="true",
15194-                             file=("new.txt",
15195-                                   "b" * (self.s.MUTABLE_SIZELIMIT+1)) )
15196+        # The SDMF size limit is no longer in place, so we should be
15197+        # able to upload mutable files that are as large as we want them
15198+        # to be.
15199+        d = self.POST("/uri", t="upload", mutable="true",
15200+                      file=("new.txt", "b" * (self.s.MUTABLE_SIZELIMIT + 1)))
15201+        return d
15202+
15203+
15204+    def test_POST_upload_mutable_type_unlinked(self):
15205+        d = self.POST("/uri?t=upload&mutable=true&mutable-type=sdmf",
15206+                      file=("sdmf.txt", self.NEWFILE_CONTENTS * 300000))
15207+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
15208+        def _got_json(json, version):
15209+            data = simplejson.loads(json)
15210+            data = data[1]
15211+
15212+            self.failUnlessIn("mutable-type", data)
15213+            self.failUnlessEqual(data['mutable-type'], version)
15214+        d.addCallback(_got_json, "sdmf")
15215+        d.addCallback(lambda ignored:
15216+            self.POST("/uri?t=upload&mutable=true&mutable-type=mdmf",
15217+                      file=('mdmf.txt', self.NEWFILE_CONTENTS * 300000)))
15218+        def _got_filecap(filecap):
15219+            self.failUnless(filecap.startswith("URI:MDMF"))
15220+            return filecap
15221+        d.addCallback(_got_filecap)
15222+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
15223+        d.addCallback(_got_json, "mdmf")
15224+        return d
15225+
15226+    def test_POST_upload_mutable_type_unlinked_bad_mutable_type(self):
15227+        return self.shouldHTTPError("test bad mutable type",
15228+                                    400, "Bad Request", "Unknown type: foo",
15229+                                    self.POST,
15230+                                    "/uri?t=upload&mutable=true&mutable-type=foo",
15231+                                    file=("foo.txt", self.NEWFILE_CONTENTS * 300000))
15232+
15233+    def test_POST_upload_mutable_type(self):
15234+        d = self.POST(self.public_url + \
15235+                      "/foo?t=upload&mutable=true&mutable-type=sdmf",
15236+                      file=("sdmf.txt", self.NEWFILE_CONTENTS * 300000))
15237+        fn = self._foo_node
15238+        def _got_cap(filecap, filename):
15239+            filenameu = unicode(filename)
15240+            self.failUnlessURIMatchesRWChild(filecap, fn, filenameu)
15241+            return self.GET(self.public_url + "/foo/%s?t=json" % filename)
15242+        def _got_mdmf_cap(filecap):
15243+            self.failUnless(filecap.startswith("URI:MDMF"))
15244+            return filecap
15245+        d.addCallback(_got_cap, "sdmf.txt")
15246+        def _got_json(json, version):
15247+            data = simplejson.loads(json)
15248+            data = data[1]
15249+
15250+            self.failUnlessIn("mutable-type", data)
15251+            self.failUnlessEqual(data['mutable-type'], version)
15252+        d.addCallback(_got_json, "sdmf")
15253+        d.addCallback(lambda ignored:
15254+            self.POST(self.public_url + \
15255+                      "/foo?t=upload&mutable=true&mutable-type=mdmf",
15256+                      file=("mdmf.txt", self.NEWFILE_CONTENTS * 300000)))
15257+        d.addCallback(_got_mdmf_cap)
15258+        d.addCallback(_got_cap, "mdmf.txt")
15259+        d.addCallback(_got_json, "mdmf")
15260         return d
15261 
15262hunk ./src/allmydata/test/test_web.py 2302
15263+    def test_POST_upload_bad_mutable_type(self):
15264+        return self.shouldHTTPError("test bad mutable type",
15265+                                    400, "Bad Request", "Unknown type: foo",
15266+                                    self.POST, self.public_url + \
15267+                                    "/foo?t=upload&mutable=true&mutable-type=foo",
15268+                                    file=("foo.txt", self.NEWFILE_CONTENTS * 300000))
15269+
15270     def test_POST_upload_mutable(self):
15271         # this creates a mutable file
15272         d = self.POST(self.public_url + "/foo", t="upload", mutable="true",
15273hunk ./src/allmydata/test/test_web.py 2433
15274             self.failUnlessReallyEqual(headers["content-type"], ["text/plain"])
15275         d.addCallback(_got_headers)
15276 
15277-        # make sure that size errors are displayed correctly for overwrite
15278-        d.addCallback(lambda res:
15279-                      self.shouldFail2(error.Error,
15280-                                       "test_POST_upload_mutable-toobig",
15281-                                       "413 Request Entity Too Large",
15282-                                       "SDMF is limited to one segment, and 10001 > 10000",
15283-                                       self.POST,
15284-                                       self.public_url + "/foo", t="upload",
15285-                                       mutable="true",
15286-                                       file=("new.txt",
15287-                                             "b" * (self.s.MUTABLE_SIZELIMIT+1)),
15288-                                       ))
15289-
15290+        # make sure that outdated size limits aren't enforced anymore.
15291+        d.addCallback(lambda ignored:
15292+            self.POST(self.public_url + "/foo", t="upload",
15293+                      mutable="true",
15294+                      file=("new.txt",
15295+                            "b" * (self.s.MUTABLE_SIZELIMIT+1))))
15296         d.addErrback(self.dump_error)
15297         return d
15298 
15299hunk ./src/allmydata/test/test_web.py 2443
15300     def test_POST_upload_mutable_toobig(self):
15301-        d = self.shouldFail2(error.Error,
15302-                             "test_POST_upload_mutable_toobig",
15303-                             "413 Request Entity Too Large",
15304-                             "SDMF is limited to one segment, and 10001 > 10000",
15305-                             self.POST,
15306-                             self.public_url + "/foo",
15307-                             t="upload", mutable="true",
15308-                             file=("new.txt",
15309-                                   "b" * (self.s.MUTABLE_SIZELIMIT+1)) )
15310+        # SDMF had a size limit that was removed a while ago. MDMF has
15311+        # never had a size limit. Test to make sure that we do not
15312+        # encounter errors when trying to upload large mutable files,
15313+        # since there should no longer be any size restrictions on
15314+        # mutable files.
15315+        d = self.POST(self.public_url + "/foo",
15316+                      t="upload", mutable="true",
15317+                      file=("new.txt", "b" * (self.s.MUTABLE_SIZELIMIT + 1)))
15318         return d
15319 
15320     def dump_error(self, f):
15321hunk ./src/allmydata/test/test_web.py 2538
15322         # make sure that nothing was added
15323         d.addCallback(lambda res:
15324                       self.failUnlessNodeKeysAre(self._foo_node,
15325-                                                 [u"bar.txt", u"blockingfile",
15326-                                                  u"empty", u"n\u00fc.txt",
15327+                                                 [u"bar.txt", u"baz.txt", u"blockingfile",
15328+                                                  u"empty", u"n\u00fc.txt", u"quux.txt",
15329                                                   u"sub"]))
15330         return d
15331 
15332hunk ./src/allmydata/test/test_web.py 2661
15333         d.addCallback(_check3)
15334         return d
15335 
15336+    def test_POST_FILEURL_mdmf_check(self):
15337+        quux_url = "/uri/%s" % urllib.quote(self._quux_txt_uri)
15338+        d = self.POST(quux_url, t="check")
15339+        def _check(res):
15340+            self.failUnlessIn("Healthy", res)
15341+        d.addCallback(_check)
15342+        quux_extension_url = "/uri/%s" % urllib.quote("%s:3:131073" % self._quux_txt_uri)
15343+        d.addCallback(lambda ignored:
15344+            self.POST(quux_extension_url, t="check"))
15345+        d.addCallback(_check)
15346+        return d
15347+
15348+    def test_POST_FILEURL_mdmf_check_and_repair(self):
15349+        quux_url = "/uri/%s" % urllib.quote(self._quux_txt_uri)
15350+        d = self.POST(quux_url, t="check", repair="true")
15351+        def _check(res):
15352+            self.failUnlessIn("Healthy", res)
15353+        d.addCallback(_check)
15354+        quux_extension_url = "/uri/%s" %\
15355+            urllib.quote("%s:3:131073" % self._quux_txt_uri)
15356+        d.addCallback(lambda ignored:
15357+            self.POST(quux_extension_url, t="check", repair="true"))
15358+        d.addCallback(_check)
15359+        return d
15360+
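The two checker tests above fetch the file both by its bare cap and by a cap with trailing extension fields (the `:3:131073` suffix). As an illustration only, a hypothetical helper for peeling integer suffixes off such a cap string; the meaning of the fields (k and segment size here) is an assumption, and this is not Tahoe's actual cap parser:

```python
def split_cap_extensions(capstr):
    # Hypothetical helper (not part of Tahoe): separate a cap string
    # from trailing integer extension fields of the form ":<int>:<int>".
    parts = capstr.split(":")
    exts = []
    # Peel digit-only fields off the end; the remaining prefix is the cap.
    while parts and parts[-1].isdigit():
        exts.insert(0, int(parts.pop()))
    return ":".join(parts), exts
```

With the URL used in the test, `"%s:3:131073" % cap`, this would yield the bare cap plus `[3, 131073]`.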
15361     def wait_for_operation(self, ignored, ophandle):
15362         url = "/operations/" + ophandle
15363         url += "?t=status&output=JSON"
15364hunk ./src/allmydata/test/test_web.py 2731
15365         d.addCallback(self.wait_for_operation, "123")
15366         def _check_json(data):
15367             self.failUnlessReallyEqual(data["finished"], True)
15368-            self.failUnlessReallyEqual(data["count-objects-checked"], 8)
15369-            self.failUnlessReallyEqual(data["count-objects-healthy"], 8)
15370+            self.failUnlessReallyEqual(data["count-objects-checked"], 10)
15371+            self.failUnlessReallyEqual(data["count-objects-healthy"], 10)
15372         d.addCallback(_check_json)
15373         d.addCallback(self.get_operation_results, "123", "html")
15374         def _check_html(res):
15375hunk ./src/allmydata/test/test_web.py 2736
15376-            self.failUnless("Objects Checked: <span>8</span>" in res)
15377-            self.failUnless("Objects Healthy: <span>8</span>" in res)
15378+            self.failUnless("Objects Checked: <span>10</span>" in res)
15379+            self.failUnless("Objects Healthy: <span>10</span>" in res)
15380         d.addCallback(_check_html)
15381 
15382         d.addCallback(lambda res:
15383hunk ./src/allmydata/test/test_web.py 2766
15384         d.addCallback(self.wait_for_operation, "124")
15385         def _check_json(data):
15386             self.failUnlessReallyEqual(data["finished"], True)
15387-            self.failUnlessReallyEqual(data["count-objects-checked"], 8)
15388-            self.failUnlessReallyEqual(data["count-objects-healthy-pre-repair"], 8)
15389+            self.failUnlessReallyEqual(data["count-objects-checked"], 10)
15390+            self.failUnlessReallyEqual(data["count-objects-healthy-pre-repair"], 10)
15391             self.failUnlessReallyEqual(data["count-objects-unhealthy-pre-repair"], 0)
15392             self.failUnlessReallyEqual(data["count-corrupt-shares-pre-repair"], 0)
15393             self.failUnlessReallyEqual(data["count-repairs-attempted"], 0)
15394hunk ./src/allmydata/test/test_web.py 2773
15395             self.failUnlessReallyEqual(data["count-repairs-successful"], 0)
15396             self.failUnlessReallyEqual(data["count-repairs-unsuccessful"], 0)
15397-            self.failUnlessReallyEqual(data["count-objects-healthy-post-repair"], 8)
15398+            self.failUnlessReallyEqual(data["count-objects-healthy-post-repair"], 10)
15399             self.failUnlessReallyEqual(data["count-objects-unhealthy-post-repair"], 0)
15400             self.failUnlessReallyEqual(data["count-corrupt-shares-post-repair"], 0)
15401         d.addCallback(_check_json)
15402hunk ./src/allmydata/test/test_web.py 2779
15403         d.addCallback(self.get_operation_results, "124", "html")
15404         def _check_html(res):
15405-            self.failUnless("Objects Checked: <span>8</span>" in res)
15406+            self.failUnless("Objects Checked: <span>10</span>" in res)
15407 
15408hunk ./src/allmydata/test/test_web.py 2781
15409-            self.failUnless("Objects Healthy (before repair): <span>8</span>" in res)
15410+            self.failUnless("Objects Healthy (before repair): <span>10</span>" in res)
15411             self.failUnless("Objects Unhealthy (before repair): <span>0</span>" in res)
15412             self.failUnless("Corrupt Shares (before repair): <span>0</span>" in res)
15413 
15414hunk ./src/allmydata/test/test_web.py 2789
15415             self.failUnless("Repairs Successful: <span>0</span>" in res)
15416             self.failUnless("Repairs Unsuccessful: <span>0</span>" in res)
15417 
15418-            self.failUnless("Objects Healthy (after repair): <span>8</span>" in res)
15419+            self.failUnless("Objects Healthy (after repair): <span>10</span>" in res)
15420             self.failUnless("Objects Unhealthy (after repair): <span>0</span>" in res)
15421             self.failUnless("Corrupt Shares (after repair): <span>0</span>" in res)
15422         d.addCallback(_check_html)
15423hunk ./src/allmydata/test/test_web.py 2808
15424         d.addCallback(self.failUnlessNodeKeysAre, [])
15425         return d
15426 
15427+    def test_POST_mkdir_mdmf(self):
15428+        d = self.POST(self.public_url + "/foo?t=mkdir&name=newdir&mutable-type=mdmf")
15429+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15430+        d.addCallback(lambda node:
15431+            self.failUnlessEqual(node._node.get_version(), MDMF_VERSION))
15432+        return d
15433+
15434+    def test_POST_mkdir_sdmf(self):
15435+        d = self.POST(self.public_url + "/foo?t=mkdir&name=newdir&mutable-type=sdmf")
15436+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15437+        d.addCallback(lambda node:
15438+            self.failUnlessEqual(node._node.get_version(), SDMF_VERSION))
15439+        return d
15440+
15441+    def test_POST_mkdir_bad_mutable_type(self):
15442+        return self.shouldHTTPError("test bad mutable type",
15443+                                    400, "Bad Request", "Unknown type: foo",
15444+                                    self.POST, self.public_url + \
15445+                                    "/foo?t=mkdir&name=newdir&mutable-type=foo")
15446+
15447     def test_POST_mkdir_initial_children(self):
15448         (newkids, caps) = self._create_initial_children()
15449         d = self.POST2(self.public_url +
15450hunk ./src/allmydata/test/test_web.py 2841
15451         d.addCallback(self.failUnlessROChildURIIs, u"child-imm", caps['filecap1'])
15452         return d
15453 
15454+    def test_POST_mkdir_initial_children_mdmf(self):
15455+        (newkids, caps) = self._create_initial_children()
15456+        d = self.POST2(self.public_url +
15457+                       "/foo?t=mkdir-with-children&name=newdir&mutable-type=mdmf",
15458+                       simplejson.dumps(newkids))
15459+        d.addCallback(lambda res:
15460+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
15461+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15462+        d.addCallback(lambda node:
15463+            self.failUnlessEqual(node._node.get_version(), MDMF_VERSION))
15464+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15465+        d.addCallback(self.failUnlessROChildURIIs, u"child-imm",
15466+                       caps['filecap1'])
15467+        return d
15468+
15469+    # XXX: Duplication.
15470+    def test_POST_mkdir_initial_children_sdmf(self):
15471+        (newkids, caps) = self._create_initial_children()
15472+        d = self.POST2(self.public_url +
15473+                       "/foo?t=mkdir-with-children&name=newdir&mutable-type=sdmf",
15474+                       simplejson.dumps(newkids))
15475+        d.addCallback(lambda res:
15476+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
15477+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15478+        d.addCallback(lambda node:
15479+            self.failUnlessEqual(node._node.get_version(), SDMF_VERSION))
15480+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
15481+        d.addCallback(self.failUnlessROChildURIIs, u"child-imm",
15482+                       caps['filecap1'])
15483+        return d
15484+
15485+    def test_POST_mkdir_initial_children_bad_mutable_type(self):
15486+        (newkids, caps) = self._create_initial_children()
15487+        return self.shouldHTTPError("test bad mutable type",
15488+                                    400, "Bad Request", "Unknown type: foo",
15489+                                    self.POST, self.public_url + \
15490+                                    "/foo?t=mkdir-with-children&name=newdir&mutable-type=foo",
15491+                                    simplejson.dumps(newkids))
15492+
15493     def test_POST_mkdir_immutable(self):
15494         (newkids, caps) = self._create_immutable_children()
15495         d = self.POST2(self.public_url +
15496hunk ./src/allmydata/test/test_web.py 2936
15497         d.addCallback(_after_mkdir)
15498         return d
15499 
15500+    def test_POST_mkdir_no_parentdir_noredirect_mdmf(self):
15501+        d = self.POST("/uri?t=mkdir&mutable-type=mdmf")
15502+        def _after_mkdir(res):
15503+            u = uri.from_string(res)
15504+            # Check that this is an MDMF writecap
15505+            self.failUnlessIsInstance(u, uri.MDMFDirectoryURI)
15506+        d.addCallback(_after_mkdir)
15507+        return d
15508+
15509+    def test_POST_mkdir_no_parentdir_noredirect_sdmf(self):
15510+        d = self.POST("/uri?t=mkdir&mutable-type=sdmf")
15511+        def _after_mkdir(res):
15512+            u = uri.from_string(res)
15513+            self.failUnlessIsInstance(u, uri.DirectoryURI)
15514+        d.addCallback(_after_mkdir)
15515+        return d
15516+
15517+    def test_POST_mkdir_no_parentdir_noredirect_bad_mutable_type(self):
15518+        return self.shouldHTTPError("test bad mutable type",
15519+                                    400, "Bad Request", "Unknown type: foo",
15520+                                    self.POST, self.public_url + \
15521+                                    "/uri?t=mkdir&mutable-type=foo")
15522+
15523     def test_POST_mkdir_no_parentdir_noredirect2(self):
15524         # make sure form-based arguments (as on the welcome page) still work
15525         d = self.POST("/uri", t="mkdir")
15526hunk ./src/allmydata/test/test_web.py 3001
15527         filecap3 = node3.get_readonly_uri()
15528         node4 = self.s.create_node_from_uri(make_mutable_file_uri())
15529         dircap = DirectoryNode(node4, None, None).get_uri()
15530+        mdmfcap = make_mutable_file_uri(mdmf=True)
15531         litdircap = "URI:DIR2-LIT:ge3dumj2mewdcotyfqydulbshj5x2lbm"
15532         emptydircap = "URI:DIR2-LIT:"
15533         newkids = {u"child-imm":        ["filenode", {"rw_uri": filecap1,
15534hunk ./src/allmydata/test/test_web.py 3018
15535                                                       "ro_uri": self._make_readonly(dircap)}],
15536                    u"dirchild-lit":     ["dirnode",  {"ro_uri": litdircap}],
15537                    u"dirchild-empty":   ["dirnode",  {"ro_uri": emptydircap}],
15538+                   u"child-mutable-mdmf": ["filenode", {"rw_uri": mdmfcap,
15539+                                                        "ro_uri": self._make_readonly(mdmfcap)}],
15540                    }
15541         return newkids, {'filecap1': filecap1,
15542                          'filecap2': filecap2,
15543hunk ./src/allmydata/test/test_web.py 3029
15544                          'unknown_immcap': unknown_immcap,
15545                          'dircap': dircap,
15546                          'litdircap': litdircap,
15547-                         'emptydircap': emptydircap}
15548+                         'emptydircap': emptydircap,
15549+                         'mdmfcap': mdmfcap}
15550 
15551     def _create_immutable_children(self):
15552         contents, n, filecap1 = self.makefile(12)
15553hunk ./src/allmydata/test/test_web.py 3571
15554                                                       contents))
15555         return d
15556 
15557+    def test_PUT_NEWFILEURL_mdmf(self):
15558+        new_contents = self.NEWFILE_CONTENTS * 300000
15559+        d = self.PUT(self.public_url + \
15560+                     "/foo/mdmf.txt?mutable=true&mutable-type=mdmf",
15561+                     new_contents)
15562+        d.addCallback(lambda ignored:
15563+            self.GET(self.public_url + "/foo/mdmf.txt?t=json"))
15564+        def _got_json(json):
15565+            data = simplejson.loads(json)
15566+            data = data[1]
15567+            self.failUnlessIn("mutable-type", data)
15568+            self.failUnlessEqual(data['mutable-type'], "mdmf")
15569+            self.failUnless(data['rw_uri'].startswith("URI:MDMF"))
15570+            self.failUnless(data['ro_uri'].startswith("URI:MDMF"))
15571+        d.addCallback(_got_json)
15572+        return d
15573+
15574+    def test_PUT_NEWFILEURL_sdmf(self):
15575+        new_contents = self.NEWFILE_CONTENTS * 300000
15576+        d = self.PUT(self.public_url + \
15577+                     "/foo/sdmf.txt?mutable=true&mutable-type=sdmf",
15578+                     new_contents)
15579+        d.addCallback(lambda ignored:
15580+            self.GET(self.public_url + "/foo/sdmf.txt?t=json"))
15581+        def _got_json(json):
15582+            data = simplejson.loads(json)
15583+            data = data[1]
15584+            self.failUnlessIn("mutable-type", data)
15585+            self.failUnlessEqual(data['mutable-type'], "sdmf")
15586+        d.addCallback(_got_json)
15587+        return d
15588+
15589+    def test_PUT_NEWFILEURL_bad_mutable_type(self):
15590+        new_contents = self.NEWFILE_CONTENTS * 300000
15591+        return self.shouldHTTPError("test bad mutable type",
15592+                                    400, "Bad Request", "Unknown type: foo",
15593+                                    self.PUT, self.public_url + \
15594+                                    "/foo/foo.txt?mutable=true&mutable-type=foo",
15595+                                    new_contents)
15596+
15597     def test_PUT_NEWFILEURL_uri_replace(self):
15598         contents, n, new_uri = self.makefile(8)
15599         d = self.PUT(self.public_url + "/foo/bar.txt?t=uri", new_uri)
15600hunk ./src/allmydata/test/test_web.py 3720
15601         d.addCallback(self.failUnlessIsEmptyJSON)
15602         return d
15603 
15604+    def test_PUT_mkdir_mdmf(self):
15605+        d = self.PUT("/uri?t=mkdir&mutable-type=mdmf", "")
15606+        def _got(res):
15607+            u = uri.from_string(res)
15608+            # Check that this is an MDMF writecap
15609+            self.failUnlessIsInstance(u, uri.MDMFDirectoryURI)
15610+        d.addCallback(_got)
15611+        return d
15612+
15613+    def test_PUT_mkdir_sdmf(self):
15614+        d = self.PUT("/uri?t=mkdir&mutable-type=sdmf", "")
15615+        def _got(res):
15616+            u = uri.from_string(res)
15617+            self.failUnlessIsInstance(u, uri.DirectoryURI)
15618+        d.addCallback(_got)
15619+        return d
15620+
15621+    def test_PUT_mkdir_bad_mutable_type(self):
15622+        return self.shouldHTTPError("bad mutable type",
15623+                                    400, "Bad Request", "Unknown type: foo",
15624+                                    self.PUT, "/uri?t=mkdir&mutable-type=foo",
15625+                                    "")
15626+
15627     def test_POST_check(self):
15628         d = self.POST(self.public_url + "/foo", t="check", name="bar.txt")
15629         def _done(res):
15630hunk ./src/allmydata/test/test_web.py 3755
15631         d.addCallback(_done)
15632         return d
15633 
15634+
15635+    def test_PUT_update_at_offset(self):
15636+        file_contents = "test file" * 100000 # about 900 KiB
15637+        d = self.PUT("/uri?mutable=true", file_contents)
15638+        def _then(filecap):
15639+            self.filecap = filecap
15640+            new_data = file_contents[:100]
15641+            new = "replaced and so on"
15642+            new_data += new
15643+            new_data += file_contents[len(new_data):]
15644+            assert len(new_data) == len(file_contents)
15645+            self.new_data = new_data
15646+        d.addCallback(_then)
15647+        d.addCallback(lambda ignored:
15648+            self.PUT("/uri/%s?replace=True&offset=100" % self.filecap,
15649+                     "replaced and so on"))
15650+        def _get_data(filecap):
15651+            n = self.s.create_node_from_uri(filecap)
15652+            return n.download_best_version()
15653+        d.addCallback(_get_data)
15654+        d.addCallback(lambda results:
15655+            self.failUnlessEqual(results, self.new_data))
15656+        # Now try appending things to the file
15657+        d.addCallback(lambda ignored:
15658+            self.PUT("/uri/%s?offset=%d" % (self.filecap, len(self.new_data)),
15659+                     "puppies" * 100))
15660+        d.addCallback(_get_data)
15661+        d.addCallback(lambda results:
15662+            self.failUnlessEqual(results, self.new_data + ("puppies" * 100)))
15663+        # and try replacing the beginning of the file
15664+        d.addCallback(lambda ignored:
15665+            self.PUT("/uri/%s?offset=0" % self.filecap, "begin"))
15666+        d.addCallback(_get_data)
15667+        d.addCallback(lambda results:
15668+            self.failUnlessEqual(results, "begin"+self.new_data[len("begin"):]+("puppies"*100)))
15669+        return d
15670+
15671+    def test_PUT_update_at_invalid_offset(self):
15672+        file_contents = "test file" * 100000 # about 900 KiB
15673+        d = self.PUT("/uri?mutable=true", file_contents)
15674+        def _then(filecap):
15675+            self.filecap = filecap
15676+        d.addCallback(_then)
15677+        # Negative offsets should cause an error.
15678+        d.addCallback(lambda ignored:
15679+            self.shouldHTTPError("test mutable invalid offset negative",
15680+                                 400, "Bad Request",
15681+                                 "Invalid offset",
15682+                                 self.PUT,
15683+                                 "/uri/%s?offset=-1" % self.filecap,
15684+                                 "foo"))
15685+        return d
15686+
15687+    def test_PUT_update_at_offset_immutable(self):
15688+        file_contents = "Test file" * 100000
15689+        d = self.PUT("/uri", file_contents)
15690+        def _then(filecap):
15691+            self.filecap = filecap
15692+        d.addCallback(_then)
15693+        d.addCallback(lambda ignored:
15694+            self.shouldHTTPError("test immutable update",
15695+                                 400, "Bad Request",
15696+                                 "immutable",
15697+                                 self.PUT,
15698+                                 "/uri/%s?offset=50" % self.filecap,
15699+                                 "foo"))
15700+        return d
15701+
15702+
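The offset-update tests above encode the expected splice semantics of `PUT /uri/<cap>?offset=N` on a mutable file: overwrite in place starting at the offset, extend the file when the write runs past the current end, and reject negative offsets. A minimal pure-Python model of those semantics (a sketch for illustration, not the server-side code):

```python
def apply_offset_update(old, new, offset):
    """Model of PUT ?offset=N on a mutable file: overwrite `old`
    starting at `offset` with `new`, growing the file if the write
    extends past the current end."""
    if offset < 0:
        raise ValueError("Invalid offset")
    # Writing at offset == len(old) is a plain append.
    return old[:offset] + new + old[offset + len(new):]
```

For example, writing at offset 100 replaces bytes 100..117 of a 900 KiB file in place, and writing at `len(old)` appends, matching what `test_PUT_update_at_offset` asserts.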
15703     def test_bad_method(self):
15704         url = self.webish_url + self.public_url + "/foo/bar.txt"
15705         d = self.shouldHTTPError("test_bad_method",
15706hunk ./src/allmydata/test/test_web.py 4093
15707         def _stash_mutable_uri(n, which):
15708             self.uris[which] = n.get_uri()
15709             assert isinstance(self.uris[which], str)
15710-        d.addCallback(lambda ign: c0.create_mutable_file(DATA+"3"))
15711+        d.addCallback(lambda ign:
15712+            c0.create_mutable_file(publish.MutableData(DATA+"3")))
15713         d.addCallback(_stash_mutable_uri, "corrupt")
15714         d.addCallback(lambda ign:
15715                       c0.upload(upload.Data("literal", convergence="")))
15716hunk ./src/allmydata/test/test_web.py 4240
15717         def _stash_mutable_uri(n, which):
15718             self.uris[which] = n.get_uri()
15719             assert isinstance(self.uris[which], str)
15720-        d.addCallback(lambda ign: c0.create_mutable_file(DATA+"3"))
15721+        d.addCallback(lambda ign:
15722+            c0.create_mutable_file(publish.MutableData(DATA+"3")))
15723         d.addCallback(_stash_mutable_uri, "corrupt")
15724 
15725         def _compute_fileurls(ignored):
15726hunk ./src/allmydata/test/test_web.py 4903
15727         def _stash_mutable_uri(n, which):
15728             self.uris[which] = n.get_uri()
15729             assert isinstance(self.uris[which], str)
15730-        d.addCallback(lambda ign: c0.create_mutable_file(DATA+"2"))
15731+        d.addCallback(lambda ign:
15732+            c0.create_mutable_file(publish.MutableData(DATA+"2")))
15733         d.addCallback(_stash_mutable_uri, "mutable")
15734 
15735         def _compute_fileurls(ignored):
15736hunk ./src/allmydata/test/test_web.py 5003
15737                                                         convergence="")))
15738         d.addCallback(_stash_uri, "small")
15739 
15740-        d.addCallback(lambda ign: c0.create_mutable_file("mutable"))
15741+        d.addCallback(lambda ign:
15742+            c0.create_mutable_file(publish.MutableData("mutable")))
15743         d.addCallback(lambda fn: self.rootnode.set_node(u"mutable", fn))
15744         d.addCallback(_stash_uri, "mutable")
15745 
15746hunk ./src/allmydata/web/common.py 12
15747 from allmydata.interfaces import ExistingChildError, NoSuchChildError, \
15748      FileTooLargeError, NotEnoughSharesError, NoSharesError, \
15749      EmptyPathnameComponentError, MustBeDeepImmutableError, \
15750-     MustBeReadonlyError, MustNotBeUnknownRWError
15751+     MustBeReadonlyError, MustNotBeUnknownRWError, SDMF_VERSION, MDMF_VERSION
15752 from allmydata.mutable.common import UnrecoverableFileError
15753 from allmydata.util import abbreviate
15754 from allmydata.util.encodingutil import to_str, quote_output
15755hunk ./src/allmydata/web/common.py 35
15756     else:
15757         return boolean_of_arg(replace)
15758 
15759+
15760+def parse_mutable_type_arg(arg):
15761+    if not arg:
15762+        return None # interpreted by the caller as "let the nodemaker decide"
15763+
15764+    arg = arg.lower()
15765+    if arg == "mdmf":
15766+        return MDMF_VERSION
15767+    elif arg == "sdmf":
15768+        return SDMF_VERSION
15769+
15770+    return "invalid"
15771+
15772+
15773+def parse_offset_arg(offset):
15774+    # XXX: This will raise a ValueError when invoked on something that
15775+    # is not an integer. Is that okay? Or do we want a better error
15776+    # message? Since this call will be used by programmers and their
15777+    # tools rather than by end users (through the WUI), raising
15778+    # ValueError is probably acceptable.
15779+    if offset is not None:
15780+        offset = int(offset)
15781+
15782+    return offset
15783+
15784+
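Taken out of the patch context, the two parsers above behave as follows; a minimal standalone sketch, with `MDMF_VERSION` and `SDMF_VERSION` stubbed as plain constants (in Tahoe they come from `allmydata.interfaces`):

```python
# Stand-ins for the interface constants (values assumed for illustration).
SDMF_VERSION, MDMF_VERSION = 0, 1

def parse_mutable_type_arg(arg):
    if not arg:
        return None  # caller interprets as "let the nodemaker decide"
    arg = arg.lower()
    if arg == "mdmf":
        return MDMF_VERSION
    elif arg == "sdmf":
        return SDMF_VERSION
    return "invalid"  # caller turns this into a 400 Bad Request

def parse_offset_arg(offset):
    # Raises ValueError for non-integer input, per the XXX note above.
    if offset is not None:
        offset = int(offset)
    return offset
```

Note that the `"invalid"` sentinel must be compared with `==`, not `is`: identity comparison against a string literal only happens to work under CPython's interning and is not guaranteed.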
15785 def get_root(ctx_or_req):
15786     req = IRequest(ctx_or_req)
15787     # the addSlash=True gives us one extra (empty) segment
15788hunk ./src/allmydata/web/directory.py 19
15789 from allmydata.uri import from_string_dirnode
15790 from allmydata.interfaces import IDirectoryNode, IFileNode, IFilesystemNode, \
15791      IImmutableFileNode, IMutableFileNode, ExistingChildError, \
15792-     NoSuchChildError, EmptyPathnameComponentError
15793+     NoSuchChildError, EmptyPathnameComponentError, SDMF_VERSION, MDMF_VERSION
15794 from allmydata.monitor import Monitor, OperationCancelledError
15795 from allmydata import dirnode
15796 from allmydata.web.common import text_plain, WebError, \
15797hunk ./src/allmydata/web/directory.py 26
15798      IOpHandleTable, NeedOperationHandleError, \
15799      boolean_of_arg, get_arg, get_root, parse_replace_arg, \
15800      should_create_intermediate_directories, \
15801-     getxmlfile, RenderMixin, humanize_failure, convert_children_json
15802+     getxmlfile, RenderMixin, humanize_failure, convert_children_json, \
15803+     parse_mutable_type_arg
15804 from allmydata.web.filenode import ReplaceMeMixin, \
15805      FileNodeHandler, PlaceHolderNodeHandler
15806 from allmydata.web.check_results import CheckResults, \
15807hunk ./src/allmydata/web/directory.py 112
15808                     mutable = True
15809                     if t == "mkdir-immutable":
15810                         mutable = False
15811+
15812+                    mt = None
15813+                    if mutable:
15814+                        arg = get_arg(req, "mutable-type", None)
15815+                        mt = parse_mutable_type_arg(arg)
15816+                        if mt == "invalid":
15817+                            raise WebError("Unknown type: %s" % arg,
15818+                                           http.BAD_REQUEST)
15819                     d = self.node.create_subdirectory(name, kids,
15820hunk ./src/allmydata/web/directory.py 121
15821-                                                      mutable=mutable)
15822+                                                      mutable=mutable,
15823+                                                      mutable_version=mt)
15824                     d.addCallback(make_handler_for,
15825                                   self.client, self.node, name)
15826                     return d
15827hunk ./src/allmydata/web/directory.py 163
15828         if not t:
15829             # render the directory as HTML, using the docFactory and Nevow's
15830             # whole templating thing.
15831-            return DirectoryAsHTML(self.node)
15832+            return DirectoryAsHTML(self.node,
15833+                                   self.client.mutable_file_default)
15834 
15835         if t == "json":
15836             return DirectoryJSONMetadata(ctx, self.node)
15837hunk ./src/allmydata/web/directory.py 253
15838         name = name.decode("utf-8")
15839         replace = boolean_of_arg(get_arg(req, "replace", "true"))
15840         kids = {}
15841-        d = self.node.create_subdirectory(name, kids, overwrite=replace)
15842+        arg = get_arg(req, "mutable-type", None)
15843+        mt = parse_mutable_type_arg(arg)
15844+        if mt is not None and mt != "invalid":
15845+            d = self.node.create_subdirectory(name, kids, overwrite=replace,
15846+                                          mutable_version=mt)
15847+        elif mt == "invalid":
15848+            raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
15849+        else:
15850+            d = self.node.create_subdirectory(name, kids, overwrite=replace)
15851         d.addCallback(lambda child: child.get_uri()) # TODO: urlencode
15852         return d
15853 
15854hunk ./src/allmydata/web/directory.py 277
15855         req.content.seek(0)
15856         kids_json = req.content.read()
15857         kids = convert_children_json(self.client.nodemaker, kids_json)
15858-        d = self.node.create_subdirectory(name, kids, overwrite=False)
15859+        arg = get_arg(req, "mutable-type", None)
15860+        mt = parse_mutable_type_arg(arg)
15861+        if mt is not None and mt != "invalid":
15862+            d = self.node.create_subdirectory(name, kids, overwrite=False,
15863+                                              mutable_version=mt)
15864+        elif mt == "invalid":
15865+            raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
15866+        else:
15867+            d = self.node.create_subdirectory(name, kids, overwrite=False)
15868         d.addCallback(lambda child: child.get_uri()) # TODO: urlencode
15869         return d
15870 
15871hunk ./src/allmydata/web/directory.py 582
15872     docFactory = getxmlfile("directory.xhtml")
15873     addSlash = True
15874 
15875-    def __init__(self, node):
15876+    def __init__(self, node, default_mutable_format):
15877         rend.Page.__init__(self)
15878         self.node = node
15879 
15880hunk ./src/allmydata/web/directory.py 586
15881+        assert default_mutable_format in (MDMF_VERSION, SDMF_VERSION)
15882+        self.default_mutable_format = default_mutable_format
15883+
15884     def beforeRender(self, ctx):
15885         # attempt to get the dirnode's children, stashing them (or the
15886         # failure that results) for later use
15887hunk ./src/allmydata/web/directory.py 786
15888 
15889         return ctx.tag
15890 
15891+    # XXX: Duplicated from root.py.
15892     def render_forms(self, ctx, data):
15893         forms = []
15894 
15895hunk ./src/allmydata/web/directory.py 795
15896         if self.dirnode_children is None:
15897             return T.div["No upload forms: directory is unreadable"]
15898 
15899+        mdmf_directory_input = T.input(type='radio', name='mutable-type',
15900+                                       id='mutable-directory-mdmf',
15901+                                       value='mdmf')
15902+        sdmf_directory_input = T.input(type='radio', name='mutable-type',
15903+                                       id='mutable-directory-sdmf',
15904+                                       value='sdmf', checked='checked')
15905         mkdir = T.form(action=".", method="post",
15906                        enctype="multipart/form-data")[
15907             T.fieldset[
15908hunk ./src/allmydata/web/directory.py 809
15909             T.legend(class_="freeform-form-label")["Create a new directory in this directory"],
15910             "New directory name: ",
15911             T.input(type="text", name="name"), " ",
15912+            T.label(for_='mutable-directory-sdmf')["SDMF"],
15913+            sdmf_directory_input,
15914+            T.label(for_='mutable-directory-mdmf')["MDMF"],
15915+            mdmf_directory_input,
15916             T.input(type="submit", value="Create"),
15917             ]]
15918         forms.append(T.div(class_="freeform-form")[mkdir])
15919hunk ./src/allmydata/web/directory.py 817
15920 
15921+        # Build input elements for mutable file type. We do this outside
15922+        # of the list so we can check the appropriate format, based on
15923+        # the default configured in the client (which reflects the
15924+        # default configured in tahoe.cfg)
15925+        if self.default_mutable_format == MDMF_VERSION:
15926+            mdmf_input = T.input(type='radio', name='mutable-type',
15927+                                 id='mutable-type-mdmf', value='mdmf',
15928+                                 checked='checked')
15929+        else:
15930+            mdmf_input = T.input(type='radio', name='mutable-type',
15931+                                 id='mutable-type-mdmf', value='mdmf')
15932+
15933+        if self.default_mutable_format == SDMF_VERSION:
15934+            sdmf_input = T.input(type='radio', name='mutable-type',
15935+                                 id='mutable-type-sdmf', value='sdmf',
15936+                                 checked="checked")
15937+        else:
15938+            sdmf_input = T.input(type='radio', name='mutable-type',
15939+                                 id='mutable-type-sdmf', value='sdmf')
15940+
15941         upload = T.form(action=".", method="post",
15942                         enctype="multipart/form-data")[
15943             T.fieldset[
15944hunk ./src/allmydata/web/directory.py 849
15945             T.input(type="submit", value="Upload"),
15946             " Mutable?:",
15947             T.input(type="checkbox", name="mutable"),
15948+            sdmf_input, T.label(for_="mutable-type-sdmf")["SDMF"],
15949+            mdmf_input,
15950+            T.label(for_="mutable-type-mdmf")["MDMF (experimental)"],
15951             ]]
15952         forms.append(T.div(class_="freeform-form")[upload])
15953 
15954hunk ./src/allmydata/web/directory.py 887
15955                 kiddata = ("filenode", {'size': childnode.get_size(),
15956                                         'mutable': childnode.is_mutable(),
15957                                         })
15958+                if childnode.is_mutable() and \
15959+                    childnode.get_version() is not None:
15960+                    mutable_type = childnode.get_version()
15961+                    assert mutable_type in (SDMF_VERSION, MDMF_VERSION)
15962+
15963+                    if mutable_type == MDMF_VERSION:
15964+                        mutable_type = "mdmf"
15965+                    else:
15966+                        mutable_type = "sdmf"
15967+                    kiddata[1]['mutable-type'] = mutable_type
15968+
15969             elif IDirectoryNode.providedBy(childnode):
15970                 kiddata = ("dirnode", {'mutable': childnode.is_mutable()})
15971             else:
15972hunk ./src/allmydata/web/filenode.py 9
15973 from nevow import url, rend
15974 from nevow.inevow import IRequest
15975 
15976-from allmydata.interfaces import ExistingChildError
15977+from allmydata.interfaces import ExistingChildError, SDMF_VERSION, MDMF_VERSION
15978 from allmydata.monitor import Monitor
15979 from allmydata.immutable.upload import FileHandle
15980hunk ./src/allmydata/web/filenode.py 12
15981+from allmydata.mutable.publish import MutableFileHandle
15982+from allmydata.mutable.common import MODE_READ
15983 from allmydata.util import log, base32
15984 
15985 from allmydata.web.common import text_plain, WebError, RenderMixin, \
15986hunk ./src/allmydata/web/filenode.py 18
15987      boolean_of_arg, get_arg, should_create_intermediate_directories, \
15988-     MyExceptionHandler, parse_replace_arg
15989+     MyExceptionHandler, parse_replace_arg, parse_offset_arg, \
15990+     parse_mutable_type_arg
15991 from allmydata.web.check_results import CheckResults, \
15992      CheckAndRepairResults, LiteralCheckResults
15993 from allmydata.web.info import MoreInfo
15994hunk ./src/allmydata/web/filenode.py 29
15995         # a new file is being uploaded in our place.
15996         mutable = boolean_of_arg(get_arg(req, "mutable", "false"))
15997         if mutable:
15998-            req.content.seek(0)
15999-            data = req.content.read()
16000-            d = client.create_mutable_file(data)
16001+            arg = get_arg(req, "mutable-type", None)
16002+            mutable_type = parse_mutable_type_arg(arg)
16003+            if mutable_type == "invalid":
16004+                raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
16005+
16006+            data = MutableFileHandle(req.content)
16007+            d = client.create_mutable_file(data, version=mutable_type)
16008             def _uploaded(newnode):
16009                 d2 = self.parentnode.set_node(self.name, newnode,
16010                                               overwrite=replace)
16011hunk ./src/allmydata/web/filenode.py 68
16012         d.addCallback(lambda res: childnode.get_uri())
16013         return d
16014 
16015-    def _read_data_from_formpost(self, req):
16016-        # SDMF: files are small, and we can only upload data, so we read
16017-        # the whole file into memory before uploading.
16018-        contents = req.fields["file"]
16019-        contents.file.seek(0)
16020-        data = contents.file.read()
16021-        return data
16022 
16023     def replace_me_with_a_formpost(self, req, client, replace):
16024         # create a new file, maybe mutable, maybe immutable
16025hunk ./src/allmydata/web/filenode.py 73
16026         mutable = boolean_of_arg(get_arg(req, "mutable", "false"))
16027 
16028+        # extract the file contents from the formpost, for either case
16029+        contents = req.fields["file"]
16030         if mutable:
16031hunk ./src/allmydata/web/filenode.py 76
16032-            data = self._read_data_from_formpost(req)
16033-            d = client.create_mutable_file(data)
16034+            arg = get_arg(req, "mutable-type", None)
16035+            mutable_type = parse_mutable_type_arg(arg)
16036+            if mutable_type == "invalid":
16037+                raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
16038+            uploadable = MutableFileHandle(contents.file)
16039+            d = client.create_mutable_file(uploadable, version=mutable_type)
16040             def _uploaded(newnode):
16041                 d2 = self.parentnode.set_node(self.name, newnode,
16042                                               overwrite=replace)
16043hunk ./src/allmydata/web/filenode.py 89
16044                 return d2
16045             d.addCallback(_uploaded)
16046             return d
16047-        # create an immutable file
16048-        contents = req.fields["file"]
16049+
16050         uploadable = FileHandle(contents.file, convergence=client.convergence)
16051         d = self.parentnode.add_file(self.name, uploadable, overwrite=replace)
16052         d.addCallback(lambda newnode: newnode.get_uri())
16053hunk ./src/allmydata/web/filenode.py 95
16054         return d
16055 
16056+
16057 class PlaceHolderNodeHandler(RenderMixin, rend.Page, ReplaceMeMixin):
16058     def __init__(self, client, parentnode, name):
16059         rend.Page.__init__(self)
16060hunk ./src/allmydata/web/filenode.py 178
16061             # properly. So we assume that at least the browser will agree
16062             # with itself, and echo back the same bytes that we were given.
16063             filename = get_arg(req, "filename", self.name) or "unknown"
16064-            if self.node.is_mutable():
16065-                # some day: d = self.node.get_best_version()
16066-                d = makeMutableDownloadable(self.node)
16067-            else:
16068-                d = defer.succeed(self.node)
16069+            d = self.node.get_best_readable_version()
16070             d.addCallback(lambda dn: FileDownloader(dn, filename))
16071             return d
16072         if t == "json":
16073hunk ./src/allmydata/web/filenode.py 182
16074-            if self.parentnode and self.name:
16075-                d = self.parentnode.get_metadata_for(self.name)
16076+            # We do this to make sure that fields like size and
16077+            # mutable-type (which depend on the file on the grid and not
16078+            # just on the cap) are filled in. The latter gets used in
16079+            # tests, in particular.
16080+            #
16081+            # TODO: Make it so that the servermap knows how to update in
16082+            # a mode specifically designed to fill in these fields, and
16083+            # then update it in that mode.
16084+            if self.node.is_mutable():
16085+                d = self.node.get_servermap(MODE_READ)
16086             else:
16087                 d = defer.succeed(None)
16088hunk ./src/allmydata/web/filenode.py 194
16089+            if self.parentnode and self.name:
16090+                d.addCallback(lambda ignored:
16091+                    self.parentnode.get_metadata_for(self.name))
16092+            else:
16093+                d.addCallback(lambda ignored: None)
16094             d.addCallback(lambda md: FileJSONMetadata(ctx, self.node, md))
16095             return d
16096         if t == "info":
16097hunk ./src/allmydata/web/filenode.py 215
16098         if t:
16099             raise WebError("GET file: bad t=%s" % t)
16100         filename = get_arg(req, "filename", self.name) or "unknown"
16101-        if self.node.is_mutable():
16102-            # some day: d = self.node.get_best_version()
16103-            d = makeMutableDownloadable(self.node)
16104-        else:
16105-            d = defer.succeed(self.node)
16106+        d = self.node.get_best_readable_version()
16107         d.addCallback(lambda dn: FileDownloader(dn, filename))
16108         return d
16109 
16110hunk ./src/allmydata/web/filenode.py 223
16111         req = IRequest(ctx)
16112         t = get_arg(req, "t", "").strip()
16113         replace = parse_replace_arg(get_arg(req, "replace", "true"))
16114+        offset = parse_offset_arg(get_arg(req, "offset", None))
16115 
16116         if not t:
16117hunk ./src/allmydata/web/filenode.py 226
16118-            if self.node.is_mutable():
16119-                return self.replace_my_contents(req)
16120             if not replace:
16121                 # this is the early trap: if someone else modifies the
16122                 # directory while we're uploading, the add_file(overwrite=)
16123hunk ./src/allmydata/web/filenode.py 231
16124                 # call in replace_me_with_a_child will do the late trap.
16125                 raise ExistingChildError()
16126-            assert self.parentnode and self.name
16127-            return self.replace_me_with_a_child(req, self.client, replace)
16128+
16129+            if self.node.is_mutable():
16130+                # Are we a readonly filenode? We shouldn't allow callers
16131+                # to try to replace us if we are.
16132+                if self.node.is_readonly():
16133+                    raise WebError("PUT to a mutable file: replace or update"
16134+                                   " requested with read-only cap")
16135+                if offset is None:
16136+                    return self.replace_my_contents(req)
16137+
16138+                if offset >= 0:
16139+                    return self.update_my_contents(req, offset)
16140+
16141+                raise WebError("PUT to a mutable file: Invalid offset")
16142+
16143+            else:
16144+                if offset is not None:
16145+                    raise WebError("PUT to a file: append operation invoked "
16146+                                   "on an immutable cap")
16147+
16148+                assert self.parentnode and self.name
16149+                return self.replace_me_with_a_child(req, self.client, replace)
16150+
16151         if t == "uri":
16152             if not replace:
16153                 raise ExistingChildError()
16154hunk ./src/allmydata/web/filenode.py 314
16155 
16156     def replace_my_contents(self, req):
16157         req.content.seek(0)
16158-        new_contents = req.content.read()
16159+        new_contents = MutableFileHandle(req.content)
16160         d = self.node.overwrite(new_contents)
16161         d.addCallback(lambda res: self.node.get_uri())
16162         return d
16163hunk ./src/allmydata/web/filenode.py 319
16164 
16165+
16166+    def update_my_contents(self, req, offset):
16167+        req.content.seek(0)
16168+        added_contents = MutableFileHandle(req.content)
16169+
16170+        d = self.node.get_best_mutable_version()
16171+        d.addCallback(lambda mv:
16172+            mv.update(added_contents, offset))
16173+        d.addCallback(lambda ignored:
16174+            self.node.get_uri())
16175+        return d
16176+
16177+
16178     def replace_my_contents_with_a_formpost(self, req):
16179         # we have a mutable file. Get the data from the formpost, and replace
16180         # the mutable file's contents with it.
16181hunk ./src/allmydata/web/filenode.py 335
16182-        new_contents = self._read_data_from_formpost(req)
16183+        new_contents = req.fields['file']
16184+        new_contents = MutableFileHandle(new_contents.file)
16185+
16186         d = self.node.overwrite(new_contents)
16187         d.addCallback(lambda res: self.node.get_uri())
16188         return d
16189hunk ./src/allmydata/web/filenode.py 342
16190 
16191-class MutableDownloadable:
16192-    #implements(IDownloadable)
16193-    def __init__(self, size, node):
16194-        self.size = size
16195-        self.node = node
16196-    def get_size(self):
16197-        return self.size
16198-    def is_mutable(self):
16199-        return True
16200-    def read(self, consumer, offset=0, size=None):
16201-        d = self.node.download_best_version()
16202-        d.addCallback(self._got_data, consumer, offset, size)
16203-        return d
16204-    def _got_data(self, contents, consumer, offset, size):
16205-        start = offset
16206-        if size is not None:
16207-            end = offset+size
16208-        else:
16209-            end = self.size
16210-        # SDMF: we can write the whole file in one big chunk
16211-        consumer.write(contents[start:end])
16212-        return consumer
16213-
16214-def makeMutableDownloadable(n):
16215-    d = defer.maybeDeferred(n.get_size_of_best_version)
16216-    d.addCallback(MutableDownloadable, n)
16217-    return d
16218 
16219 class FileDownloader(rend.Page):
16220     def __init__(self, filenode, filename):
16221hunk ./src/allmydata/web/filenode.py 516
16222     data[1]['mutable'] = filenode.is_mutable()
16223     if edge_metadata is not None:
16224         data[1]['metadata'] = edge_metadata
16225+
16226+    if filenode.is_mutable() and filenode.get_version() is not None:
16227+        mutable_type = filenode.get_version()
16228+        assert mutable_type in (MDMF_VERSION, SDMF_VERSION)
16229+        if mutable_type == MDMF_VERSION:
16230+            mutable_type = "mdmf"
16231+        else:
16232+            mutable_type = "sdmf"
16233+        data[1]['mutable-type'] = mutable_type
16234+
16235     return text_plain(simplejson.dumps(data, indent=1) + "\n", ctx)
16236 
16237 def FileURI(ctx, filenode):
16238hunk ./src/allmydata/web/info.py 8
16239 from nevow.inevow import IRequest
16240 
16241 from allmydata.util import base32
16242-from allmydata.interfaces import IDirectoryNode, IFileNode
16243+from allmydata.interfaces import IDirectoryNode, IFileNode, MDMF_VERSION, SDMF_VERSION
16244 from allmydata.web.common import getxmlfile
16245 from allmydata.mutable.common import UnrecoverableFileError # TODO: move
16246 
16247hunk ./src/allmydata/web/info.py 31
16248             si = node.get_storage_index()
16249             if si:
16250                 if node.is_mutable():
16251-                    return "mutable file"
16252+                    ret = "mutable file"
16253+                    if node.get_version() == MDMF_VERSION:
16254+                        ret += " (mdmf)"
16255+                    else:
16256+                        ret += " (sdmf)"
16257+                    return ret
16258                 return "immutable file"
16259             return "immutable LIT file"
16260         return "unknown"
16261hunk ./src/allmydata/web/root.py 15
16262 from allmydata import get_package_versions_string
16263 from allmydata import provisioning
16264 from allmydata.util import idlib, log
16265-from allmydata.interfaces import IFileNode
16266+from allmydata.interfaces import IFileNode, MDMF_VERSION, SDMF_VERSION
16267 from allmydata.web import filenode, directory, unlinked, status, operations
16268 from allmydata.web import reliability, storage
16269 from allmydata.web.common import abbreviate_size, getxmlfile, WebError, \
16270hunk ./src/allmydata/web/root.py 19
16271-     get_arg, RenderMixin, boolean_of_arg
16272+     get_arg, RenderMixin, boolean_of_arg, parse_mutable_type_arg
16273 
16274 
16275 class URIHandler(RenderMixin, rend.Page):
16276hunk ./src/allmydata/web/root.py 50
16277         if t == "":
16278             mutable = boolean_of_arg(get_arg(req, "mutable", "false").strip())
16279             if mutable:
16280-                return unlinked.PUTUnlinkedSSK(req, self.client)
16281+                arg = get_arg(req, "mutable-type", None)
16282+                version = parse_mutable_type_arg(arg)
16283+                if version == "invalid":
16284+                    errmsg = "Unknown type: %s" % arg
16285+                    raise WebError(errmsg, http.BAD_REQUEST)
16286+
16287+                return unlinked.PUTUnlinkedSSK(req, self.client, version)
16288             else:
16289                 return unlinked.PUTUnlinkedCHK(req, self.client)
16290         if t == "mkdir":
16291hunk ./src/allmydata/web/root.py 74
16292         if t in ("", "upload"):
16293             mutable = bool(get_arg(req, "mutable", "").strip())
16294             if mutable:
16295-                return unlinked.POSTUnlinkedSSK(req, self.client)
16296+                arg = get_arg(req, "mutable-type", None)
16297+                version = parse_mutable_type_arg(arg)
16298+                if version == "invalid":
16299+                    raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
16300+                return unlinked.POSTUnlinkedSSK(req, self.client, version)
16301             else:
16302                 return unlinked.POSTUnlinkedCHK(req, self.client)
16303         if t == "mkdir":
16304hunk ./src/allmydata/web/root.py 335
16305 
16306     def render_upload_form(self, ctx, data):
16307         # this is a form where users can upload unlinked files
16308+        #
16309+        # for mutable files, users can choose the format by selecting
16310+        # MDMF or SDMF from a radio button. They can also configure a
16311+        # default format in tahoe.cfg, which they rightly expect us to
16312+        # obey. we honor that choice by making sure that the format
16313+        # they've chosen (or configured) is the one pre-selected in
16314+        # the rendered radio buttons.
16315+        if self.client.mutable_file_default == MDMF_VERSION:
16316+            mdmf_input = T.input(type='radio', name='mutable-type',
16317+                                 value='mdmf', id='mutable-type-mdmf',
16318+                                 checked='checked')
16319+        else:
16320+            mdmf_input = T.input(type='radio', name='mutable-type',
16321+                                 value='mdmf', id='mutable-type-mdmf')
16322+
16323+        if self.client.mutable_file_default == SDMF_VERSION:
16324+            sdmf_input = T.input(type='radio', name='mutable-type',
16325+                                 value='sdmf', id='mutable-type-sdmf',
16326+                                 checked='checked')
16327+        else:
16328+            sdmf_input = T.input(type='radio', name='mutable-type',
16329+                                 value='sdmf', id='mutable-type-sdmf')
16330+
16331+
16332         form = T.form(action="uri", method="post",
16333                       enctype="multipart/form-data")[
16334             T.fieldset[
16335hunk ./src/allmydata/web/root.py 367
16336                   T.input(type="file", name="file", class_="freeform-input-file")],
16337             T.input(type="hidden", name="t", value="upload"),
16338             T.div[T.input(type="checkbox", name="mutable"), T.label(for_="mutable")["Create mutable file"],
16339+                  sdmf_input, T.label(for_="mutable-type-sdmf")["SDMF"],
16340+                  mdmf_input,
16341+                  T.label(for_='mutable-type-mdmf')['MDMF (experimental)'],
16342                   " ", T.input(type="submit", value="Upload!")],
16343             ]]
16344         return T.div[form]
16345hunk ./src/allmydata/web/root.py 376
16346 
16347     def render_mkdir_form(self, ctx, data):
16348         # this is a form where users can create new directories
16349+        mdmf_input = T.input(type='radio', name='mutable-type',
16350+                             value='mdmf', id='mutable-directory-mdmf')
16351+        sdmf_input = T.input(type='radio', name='mutable-type',
16352+                             value='sdmf', id='mutable-directory-sdmf',
16353+                             checked='checked')
16354         form = T.form(action="uri", method="post",
16355                       enctype="multipart/form-data")[
16356             T.fieldset[
16357hunk ./src/allmydata/web/root.py 385
16358             T.legend(class_="freeform-form-label")["Create a directory"],
16359+            T.label(for_='mutable-directory-sdmf')["SDMF"],
16360+            sdmf_input,
16361+            T.label(for_='mutable-directory-mdmf')["MDMF"],
16362+            mdmf_input,
16363             T.input(type="hidden", name="t", value="mkdir"),
16364             T.input(type="hidden", name="redirect_to_result", value="true"),
16365             T.input(type="submit", value="Create a directory"),
16366hunk ./src/allmydata/web/unlinked.py 7
16367 from twisted.internet import defer
16368 from nevow import rend, url, tags as T
16369 from allmydata.immutable.upload import FileHandle
16370+from allmydata.mutable.publish import MutableFileHandle
16371 from allmydata.web.common import getxmlfile, get_arg, boolean_of_arg, \
16372hunk ./src/allmydata/web/unlinked.py 9
16373-     convert_children_json, WebError
16374+     convert_children_json, WebError, parse_mutable_type_arg
16375 from allmydata.web import status
16376 
16377 def PUTUnlinkedCHK(req, client):
16378hunk ./src/allmydata/web/unlinked.py 20
16379     # that fires with the URI of the new file
16380     return d
16381 
16382-def PUTUnlinkedSSK(req, client):
16383+def PUTUnlinkedSSK(req, client, version):
16384     # SDMF: files are small, and we can only upload data
16385     req.content.seek(0)
16386hunk ./src/allmydata/web/unlinked.py 23
16387-    data = req.content.read()
16388-    d = client.create_mutable_file(data)
16389+    data = MutableFileHandle(req.content)
16390+    d = client.create_mutable_file(data, version=version)
16391     d.addCallback(lambda n: n.get_uri())
16392     return d
16393 
16394hunk ./src/allmydata/web/unlinked.py 30
16395 def PUTUnlinkedCreateDirectory(req, client):
16396     # "PUT /uri?t=mkdir", to create an unlinked directory.
16397-    d = client.create_dirnode()
16398+    arg = get_arg(req, "mutable-type", None)
16399+    mt = parse_mutable_type_arg(arg)
16400+    if mt is not None and mt != "invalid":
16401+        d = client.create_dirnode(version=mt)
16402+    elif mt == "invalid":
16403+        msg = "Unknown type: %s" % arg
16404+        raise WebError(msg, http.BAD_REQUEST)
16405+    else:
16406+        d = client.create_dirnode()
16407     d.addCallback(lambda dirnode: dirnode.get_uri())
16408     # XXX add redirect_to_result
16409     return d
16410hunk ./src/allmydata/web/unlinked.py 91
16411                       ["/uri/" + res.uri])
16412         return d
16413 
16414-def POSTUnlinkedSSK(req, client):
16415+def POSTUnlinkedSSK(req, client, version):
16416     # "POST /uri", to create an unlinked file.
16417     # SDMF: files are small, and we can only upload data
16418hunk ./src/allmydata/web/unlinked.py 94
16419-    contents = req.fields["file"]
16420-    contents.file.seek(0)
16421-    data = contents.file.read()
16422-    d = client.create_mutable_file(data)
16423+    contents = req.fields["file"].file
16424+    data = MutableFileHandle(contents)
16425+    d = client.create_mutable_file(data, version=version)
16426     d.addCallback(lambda n: n.get_uri())
16427     return d
16428 
16429hunk ./src/allmydata/web/unlinked.py 115
16430             raise WebError("t=mkdir does not accept children=, "
16431                            "try t=mkdir-with-children instead",
16432                            http.BAD_REQUEST)
16433-    d = client.create_dirnode()
16434+    arg = get_arg(req, "mutable-type", None)
16435+    mt = parse_mutable_type_arg(arg)
16436+    if mt is not None and mt != "invalid":
16437+        d = client.create_dirnode(version=mt)
16438+    elif mt == "invalid":
16439+        msg = "Unknown type: %s" % arg
16440+        raise WebError(msg, http.BAD_REQUEST)
16441+    else:
16442+        d = client.create_dirnode()
16443     redirect = get_arg(req, "redirect_to_result", "false")
16444     if boolean_of_arg(redirect):
16445         def _then_redir(res):
16446}
16447[test: fix assorted tests broken by MDMF changes
16448Kevan Carstensen <kevan@isnotajoke.com>**20110802021438
16449 Ignore-this: d6ca88ef20ce52de9c2527b893e25fa4
16450] {
16451hunk ./src/allmydata/test/test_checker.py 11
16452 from allmydata.test.no_network import GridTestMixin
16453 from allmydata.immutable.upload import Data
16454 from allmydata.test.common_web import WebRenderingMixin
16455+from allmydata.mutable.publish import MutableData
16456 
16457 class FakeClient:
16458     def get_storage_broker(self):
16459hunk ./src/allmydata/test/test_checker.py 291
16460         def _stash_immutable(ur):
16461             self.imm = c0.create_node_from_uri(ur.uri)
16462         d.addCallback(_stash_immutable)
16463-        d.addCallback(lambda ign: c0.create_mutable_file("contents"))
16464+        d.addCallback(lambda ign:
16465+            c0.create_mutable_file(MutableData("contents")))
16466         def _stash_mutable(node):
16467             self.mut = node
16468         d.addCallback(_stash_mutable)
16469hunk ./src/allmydata/test/test_cli.py 13
16470 from allmydata.util import fileutil, hashutil, base32
16471 from allmydata import uri
16472 from allmydata.immutable import upload
16473+from allmydata.interfaces import MDMF_VERSION, SDMF_VERSION
16474+from allmydata.mutable.publish import MutableData
16475 from allmydata.dirnode import normalize
16476 
16477 # Test that the scripts can be imported.
16478hunk ./src/allmydata/test/test_cli.py 2009
16479             self.do_cli("cp", replacement_file_path, "tahoe:test_file.txt"))
16480         def _check_error_message((rc, out, err)):
16481             self.failUnlessEqual(rc, 1)
16482-            self.failUnlessIn("need write capability to publish", err)
16483+            self.failUnlessIn("replace or update requested with read-only cap", err)
16484         d.addCallback(_check_error_message)
16485         # Make extra sure that that didn't work.
16486         d.addCallback(lambda ignored:
16487hunk ./src/allmydata/test/test_cli.py 2571
16488         self.set_up_grid()
16489         c0 = self.g.clients[0]
16490         DATA = "data" * 100
16491-        d = c0.create_mutable_file(DATA)
16492+        DATA_uploadable = MutableData(DATA)
16493+        d = c0.create_mutable_file(DATA_uploadable)
16494         def _stash_uri(n):
16495             self.uri = n.get_uri()
16496         d.addCallback(_stash_uri)
16497hunk ./src/allmydata/test/test_cli.py 2673
16498                                            upload.Data("literal",
16499                                                         convergence="")))
16500         d.addCallback(_stash_uri, "small")
16501-        d.addCallback(lambda ign: c0.create_mutable_file(DATA+"1"))
16502+        d.addCallback(lambda ign:
16503+            c0.create_mutable_file(MutableData(DATA+"1")))
16504         d.addCallback(lambda fn: self.rootnode.set_node(u"mutable", fn))
16505         d.addCallback(_stash_uri, "mutable")
16506 
16507hunk ./src/allmydata/test/test_deepcheck.py 9
16508 from twisted.internet import threads # CLI tests use deferToThread
16509 from allmydata.immutable import upload
16510 from allmydata.mutable.common import UnrecoverableFileError
16511+from allmydata.mutable.publish import MutableData
16512 from allmydata.util import idlib
16513 from allmydata.util import base32
16514 from allmydata.scripts import runner
16515hunk ./src/allmydata/test/test_deepcheck.py 38
16516         self.basedir = "deepcheck/MutableChecker/good"
16517         self.set_up_grid()
16518         CONTENTS = "a little bit of data"
16519-        d = self.g.clients[0].create_mutable_file(CONTENTS)
16520+        CONTENTS_uploadable = MutableData(CONTENTS)
16521+        d = self.g.clients[0].create_mutable_file(CONTENTS_uploadable)
16522         def _created(node):
16523             self.node = node
16524             self.fileurl = "uri/" + urllib.quote(node.get_uri())
16525hunk ./src/allmydata/test/test_deepcheck.py 61
16526         self.basedir = "deepcheck/MutableChecker/corrupt"
16527         self.set_up_grid()
16528         CONTENTS = "a little bit of data"
16529-        d = self.g.clients[0].create_mutable_file(CONTENTS)
16530+        CONTENTS_uploadable = MutableData(CONTENTS)
16531+        d = self.g.clients[0].create_mutable_file(CONTENTS_uploadable)
16532         def _stash_and_corrupt(node):
16533             self.node = node
16534             self.fileurl = "uri/" + urllib.quote(node.get_uri())
16535hunk ./src/allmydata/test/test_deepcheck.py 99
16536         self.basedir = "deepcheck/MutableChecker/delete_share"
16537         self.set_up_grid()
16538         CONTENTS = "a little bit of data"
16539-        d = self.g.clients[0].create_mutable_file(CONTENTS)
16540+        CONTENTS_uploadable = MutableData(CONTENTS)
16541+        d = self.g.clients[0].create_mutable_file(CONTENTS_uploadable)
16542         def _stash_and_delete(node):
16543             self.node = node
16544             self.fileurl = "uri/" + urllib.quote(node.get_uri())
16545hunk ./src/allmydata/test/test_deepcheck.py 223
16546             self.root = n
16547             self.root_uri = n.get_uri()
16548         d.addCallback(_created_root)
16549-        d.addCallback(lambda ign: c0.create_mutable_file("mutable file contents"))
16550+        d.addCallback(lambda ign:
16551+            c0.create_mutable_file(MutableData("mutable file contents")))
16552         d.addCallback(lambda n: self.root.set_node(u"mutable", n))
16553         def _created_mutable(n):
16554             self.mutable = n
16555hunk ./src/allmydata/test/test_deepcheck.py 965
16556     def create_mangled(self, ignored, name):
16557         nodetype, mangletype = name.split("-", 1)
16558         if nodetype == "mutable":
16559-            d = self.g.clients[0].create_mutable_file("mutable file contents")
16560+            mutable_uploadable = MutableData("mutable file contents")
16561+            d = self.g.clients[0].create_mutable_file(mutable_uploadable)
16562             d.addCallback(lambda n: self.root.set_node(unicode(name), n))
16563         elif nodetype == "large":
16564             large = upload.Data("Lots of data\n" * 1000 + name + "\n", None)
16565hunk ./src/allmydata/test/test_hung_server.py 10
16566 from allmydata.util.consumer import download_to_data
16567 from allmydata.immutable import upload
16568 from allmydata.mutable.common import UnrecoverableFileError
16569+from allmydata.mutable.publish import MutableData
16570 from allmydata.storage.common import storage_index_to_dir
16571 from allmydata.test.no_network import GridTestMixin
16572 from allmydata.test.common import ShouldFailMixin
16573hunk ./src/allmydata/test/test_hung_server.py 110
16574         self.servers = self.servers[5:] + self.servers[:5]
16575 
16576         if mutable:
16577-            d = nm.create_mutable_file(mutable_plaintext)
16578+            uploadable = MutableData(mutable_plaintext)
16579+            d = nm.create_mutable_file(uploadable)
16580             def _uploaded_mutable(node):
16581                 self.uri = node.get_uri()
16582                 self.shares = self.find_uri_shares(self.uri)
16583hunk ./src/allmydata/test/test_system.py 26
16584 from allmydata.monitor import Monitor
16585 from allmydata.mutable.common import NotWriteableError
16586 from allmydata.mutable import layout as mutable_layout
16587+from allmydata.mutable.publish import MutableData
16588 from foolscap.api import DeadReferenceError
16589 from twisted.python.failure import Failure
16590 from twisted.web.client import getPage
16591hunk ./src/allmydata/test/test_system.py 467
16592     def test_mutable(self):
16593         self.basedir = "system/SystemTest/test_mutable"
16594         DATA = "initial contents go here."  # 25 bytes % 3 != 0
16595+        DATA_uploadable = MutableData(DATA)
16596         NEWDATA = "new contents yay"
16597hunk ./src/allmydata/test/test_system.py 469
16598+        NEWDATA_uploadable = MutableData(NEWDATA)
16599         NEWERDATA = "this is getting old"
16600hunk ./src/allmydata/test/test_system.py 471
16601+        NEWERDATA_uploadable = MutableData(NEWERDATA)
16602 
16603         d = self.set_up_nodes(use_key_generator=True)
16604 
16605hunk ./src/allmydata/test/test_system.py 478
16606         def _create_mutable(res):
16607             c = self.clients[0]
16608             log.msg("starting create_mutable_file")
16609-            d1 = c.create_mutable_file(DATA)
16610+            d1 = c.create_mutable_file(DATA_uploadable)
16611             def _done(res):
16612                 log.msg("DONE: %s" % (res,))
16613                 self._mutable_node_1 = res
16614hunk ./src/allmydata/test/test_system.py 565
16615             self.failUnlessEqual(res, DATA)
16616             # replace the data
16617             log.msg("starting replace1")
16618-            d1 = newnode.overwrite(NEWDATA)
16619+            d1 = newnode.overwrite(NEWDATA_uploadable)
16620             d1.addCallback(lambda res: newnode.download_best_version())
16621             return d1
16622         d.addCallback(_check_download_3)
16623hunk ./src/allmydata/test/test_system.py 579
16624             newnode2 = self.clients[3].create_node_from_uri(uri)
16625             self._newnode3 = self.clients[3].create_node_from_uri(uri)
16626             log.msg("starting replace2")
16627-            d1 = newnode1.overwrite(NEWERDATA)
16628+            d1 = newnode1.overwrite(NEWERDATA_uploadable)
16629             d1.addCallback(lambda res: newnode2.download_best_version())
16630             return d1
16631         d.addCallback(_check_download_4)
16632hunk ./src/allmydata/test/test_system.py 649
16633         def _check_empty_file(res):
16634             # make sure we can create empty files, this usually screws up the
16635             # segsize math
16636-            d1 = self.clients[2].create_mutable_file("")
16637+            d1 = self.clients[2].create_mutable_file(MutableData(""))
16638             d1.addCallback(lambda newnode: newnode.download_best_version())
16639             d1.addCallback(lambda res: self.failUnlessEqual("", res))
16640             return d1
16641hunk ./src/allmydata/test/test_system.py 680
16642                                  self.key_generator_svc.key_generator.pool_size + size_delta)
16643 
16644         d.addCallback(check_kg_poolsize, 0)
16645-        d.addCallback(lambda junk: self.clients[3].create_mutable_file('hello, world'))
16646+        d.addCallback(lambda junk:
16647+            self.clients[3].create_mutable_file(MutableData('hello, world')))
16648         d.addCallback(check_kg_poolsize, -1)
16649         d.addCallback(lambda junk: self.clients[3].create_dirnode())
16650         d.addCallback(check_kg_poolsize, -2)
16651}
16652[cli: teach CLI how to create MDMF mutable files
16653Kevan Carstensen <kevan@isnotajoke.com>**20110802021613
16654 Ignore-this: 18d0ff98e75be231eed3c53319e76936
16655 
16656 Specifically, 'tahoe mkdir' and 'tahoe put' now take a --mutable-type
16657 argument.
16658] {
16659hunk ./src/allmydata/scripts/cli.py 53
16660 
16661 
16662 class MakeDirectoryOptions(VDriveOptions):
16663+    optParameters = [
16664+        ("mutable-type", None, False, "Create a mutable directory in the given format. Valid formats are 'sdmf' for SDMF and 'mdmf' for MDMF"),
16665+        ]
16666+
16667     def parseArgs(self, where=""):
16668         self.where = argv_to_unicode(where)
16669 
16670hunk ./src/allmydata/scripts/cli.py 60
16671+        if self['mutable-type'] and self['mutable-type'] not in ("sdmf", "mdmf"):
16672+            raise usage.UsageError("%s is an invalid format" % self['mutable-type'])
16673+
16674     def getSynopsis(self):
16675         return "Usage:  %s mkdir [options] [REMOTE_DIR]" % (self.command_name,)
16676 
16677hunk ./src/allmydata/scripts/cli.py 174
16678     optFlags = [
16679         ("mutable", "m", "Create a mutable file instead of an immutable one."),
16680         ]
16681+    optParameters = [
16682+        ("mutable-type", None, False, "Create a mutable file in the given format. Valid formats are 'sdmf' for SDMF and 'mdmf' for MDMF"),
16683+        ]
16684 
16685     def parseArgs(self, arg1=None, arg2=None):
16686         # see Examples below
16687hunk ./src/allmydata/scripts/cli.py 193
16688         if self.from_file == u"-":
16689             self.from_file = None
16690 
16691+        if self['mutable-type'] and self['mutable-type'] not in ("sdmf", "mdmf"):
16692+            raise usage.UsageError("%s is an invalid format" % self['mutable-type'])
16693+
16694+
16695     def getSynopsis(self):
16696         return "Usage:  %s put [options] LOCAL_FILE REMOTE_FILE" % (self.command_name,)
16697 
16698hunk ./src/allmydata/scripts/tahoe_mkdir.py 25
16699     if not where or not path:
16700         # create a new unlinked directory
16701         url = nodeurl + "uri?t=mkdir"
16702+        if options["mutable-type"]:
16703+            url += "&mutable-type=%s" % urllib.quote(options['mutable-type'])
16704         resp = do_http("POST", url)
16705         rc = check_http_error(resp, stderr)
16706         if rc:
16707hunk ./src/allmydata/scripts/tahoe_mkdir.py 42
16708     # path must be "/".join([s.encode("utf-8") for s in segments])
16709     url = nodeurl + "uri/%s/%s?t=mkdir" % (urllib.quote(rootcap),
16710                                            urllib.quote(path))
16711+    if options['mutable-type']:
16712+        url += "&mutable-type=%s" % urllib.quote(options['mutable-type'])
16713+
16714     resp = do_http("POST", url)
16715     check_http_error(resp, stderr)
16716     new_uri = resp.read().strip()
16717hunk ./src/allmydata/scripts/tahoe_put.py 21
16718     from_file = options.from_file
16719     to_file = options.to_file
16720     mutable = options['mutable']
16721+    mutable_type = False
16722+
16723+    if mutable:
16724+        mutable_type = options['mutable-type']
16725     if options['quiet']:
16726         verbosity = 0
16727     else:
16728hunk ./src/allmydata/scripts/tahoe_put.py 49
16729         #  DIRCAP:./subdir/foo : DIRCAP/subdir/foo
16730         #  MUTABLE-FILE-WRITECAP : filecap
16731 
16732-        # FIXME: this shouldn't rely on a particular prefix.
16733-        if to_file.startswith("URI:SSK:"):
16734+        # FIXME: don't hardcode cap format.
16735+        if to_file.startswith("URI:MDMF:") or to_file.startswith("URI:SSK:"):
16736             url = nodeurl + "uri/%s" % urllib.quote(to_file)
16737         else:
16738             try:
16739hunk ./src/allmydata/scripts/tahoe_put.py 71
16740         url = nodeurl + "uri"
16741     if mutable:
16742         url += "?mutable=true"
16743+    if mutable_type:
16744+        assert mutable
16745+        url += "&mutable-type=%s" % mutable_type
16746+
16747     if from_file:
16748         infileobj = open(os.path.expanduser(from_file), "rb")
16749     else:
16750hunk ./src/allmydata/test/test_cli.py 33
16751 from allmydata.test.common_util import StallMixin, ReallyEqualMixin
16752 from allmydata.test.no_network import GridTestMixin
16753 from twisted.internet import threads # CLI tests use deferToThread
16754+from twisted.internet import defer # List uses a DeferredList in one place.
16755 from twisted.python import usage
16756 
16757 from allmydata.util.assertutil import precondition
16758hunk ./src/allmydata/test/test_cli.py 1014
16759         d.addCallback(lambda (rc,out,err): self.failUnlessReallyEqual(out, DATA2))
16760         return d
16761 
16762+    def _check_mdmf_json(self, (rc, json, err)):
16763+        self.failUnlessEqual(rc, 0)
16764+        self.failUnlessEqual(err, "")
16765+        self.failUnlessIn('"mutable-type": "mdmf"', json)
16766+        # We also want a valid MDMF cap to be in the json.
16767+        self.failUnlessIn("URI:MDMF", json)
16768+        self.failUnlessIn("URI:MDMF-RO", json)
16769+        self.failUnlessIn("URI:MDMF-Verifier", json)
16770+
16771+    def _check_sdmf_json(self, (rc, json, err)):
16772+        self.failUnlessEqual(rc, 0)
16773+        self.failUnlessEqual(err, "")
16774+        self.failUnlessIn('"mutable-type": "sdmf"', json)
16775+        # We also want to see the appropriate SDMF caps.
16776+        self.failUnlessIn("URI:SSK", json)
16777+        self.failUnlessIn("URI:SSK-RO", json)
16778+        self.failUnlessIn("URI:SSK-Verifier", json)
16779+
16780+    def test_mutable_type(self):
16781+        self.basedir = "cli/Put/mutable_type"
16782+        self.set_up_grid()
16783+        data = "data" * 100000
16784+        fn1 = os.path.join(self.basedir, "data")
16785+        fileutil.write(fn1, data)
16786+        d = self.do_cli("create-alias", "tahoe")
16787+        d.addCallback(lambda ignored:
16788+            self.do_cli("put", "--mutable", "--mutable-type=mdmf",
16789+                        fn1, "tahoe:uploaded.txt"))
16790+        d.addCallback(lambda ignored:
16791+            self.do_cli("ls", "--json", "tahoe:uploaded.txt"))
16792+        d.addCallback(self._check_mdmf_json)
16793+        d.addCallback(lambda ignored:
16794+            self.do_cli("put", "--mutable", "--mutable-type=sdmf",
16795+                        fn1, "tahoe:uploaded2.txt"))
16796+        d.addCallback(lambda ignored:
16797+            self.do_cli("ls", "--json", "tahoe:uploaded2.txt"))
16798+        d.addCallback(self._check_sdmf_json)
16799+        return d
16800+
16801+    def test_mutable_type_unlinked(self):
16802+        self.basedir = "cli/Put/mutable_type_unlinked"
16803+        self.set_up_grid()
16804+        data = "data" * 100000
16805+        fn1 = os.path.join(self.basedir, "data")
16806+        fileutil.write(fn1, data)
16807+        d = self.do_cli("put", "--mutable", "--mutable-type=mdmf", fn1)
16808+        d.addCallback(lambda (rc, cap, err):
16809+            self.do_cli("ls", "--json", cap))
16810+        d.addCallback(self._check_mdmf_json)
16811+        d.addCallback(lambda ignored:
16812+            self.do_cli("put", "--mutable", "--mutable-type=sdmf", fn1))
16813+        d.addCallback(lambda (rc, cap, err):
16814+            self.do_cli("ls", "--json", cap))
16815+        d.addCallback(self._check_sdmf_json)
16816+        return d
16817+
16818+    def test_put_to_mdmf_cap(self):
16819+        self.basedir = "cli/Put/put_to_mdmf_cap"
16820+        self.set_up_grid()
16821+        data = "data" * 100000
16822+        fn1 = os.path.join(self.basedir, "data")
16823+        fileutil.write(fn1, data)
16824+        d = self.do_cli("put", "--mutable", "--mutable-type=mdmf", fn1)
16825+        def _got_cap((rc, out, err)):
16826+            self.failUnlessEqual(rc, 0)
16827+            self.cap = out
16828+        d.addCallback(_got_cap)
16829+        # Now try to write something to the cap using put.
16830+        data2 = "data2" * 100000
16831+        fn2 = os.path.join(self.basedir, "data2")
16832+        fileutil.write(fn2, data2)
16833+        d.addCallback(lambda ignored:
16834+            self.do_cli("put", fn2, self.cap))
16835+        def _got_put((rc, out, err)):
16836+            self.failUnlessEqual(rc, 0)
16837+            self.failUnlessIn(self.cap, out)
16838+        d.addCallback(_got_put)
16839+        # Now get the cap. We should see the data we just put there.
16840+        d.addCallback(lambda ignored:
16841+            self.do_cli("get", self.cap))
16842+        def _got_data((rc, out, err)):
16843+            self.failUnlessEqual(rc, 0)
16844+            self.failUnlessEqual(out, data2)
16845+        d.addCallback(_got_data)
16846+        # Now strip the extension information off of the cap and try
16847+        # to put something to it.
16848+        def _make_bare_cap(ignored):
16849+            cap = self.cap.split(":")
16850+            cap = ":".join(cap[:len(cap) - 2])
16851+            self.cap = cap
16852+        d.addCallback(_make_bare_cap)
16853+        data3 = "data3" * 100000
16854+        fn3 = os.path.join(self.basedir, "data3")
16855+        fileutil.write(fn3, data3)
16856+        d.addCallback(lambda ignored:
16857+            self.do_cli("put", fn3, self.cap))
16858+        d.addCallback(lambda ignored:
16859+            self.do_cli("get", self.cap))
16860+        def _got_data3((rc, out, err)):
16861+            self.failUnlessEqual(rc, 0)
16862+            self.failUnlessEqual(out, data3)
16863+        d.addCallback(_got_data3)
16864+        return d
16865+
16866+    def test_put_to_sdmf_cap(self):
16867+        self.basedir = "cli/Put/put_to_sdmf_cap"
16868+        self.set_up_grid()
16869+        data = "data" * 100000
16870+        fn1 = os.path.join(self.basedir, "data")
16871+        fileutil.write(fn1, data)
16872+        d = self.do_cli("put", "--mutable", "--mutable-type=sdmf", fn1)
16873+        def _got_cap((rc, out, err)):
16874+            self.failUnlessEqual(rc, 0)
16875+            self.cap = out
16876+        d.addCallback(_got_cap)
16877+        # Now try to write something to the cap using put.
16878+        data2 = "data2" * 100000
16879+        fn2 = os.path.join(self.basedir, "data2")
16880+        fileutil.write(fn2, data2)
16881+        d.addCallback(lambda ignored:
16882+            self.do_cli("put", fn2, self.cap))
16883+        def _got_put((rc, out, err)):
16884+            self.failUnlessEqual(rc, 0)
16885+            self.failUnlessIn(self.cap, out)
16886+        d.addCallback(_got_put)
16887+        # Now get the cap. We should see the data we just put there.
16888+        d.addCallback(lambda ignored:
16889+            self.do_cli("get", self.cap))
16890+        def _got_data((rc, out, err)):
16891+            self.failUnlessEqual(rc, 0)
16892+            self.failUnlessEqual(out, data2)
16893+        d.addCallback(_got_data)
16894+        return d
16895+
16896+    def test_mutable_type_invalid_format(self):
16897+        o = cli.PutOptions()
16898+        self.failUnlessRaises(usage.UsageError,
16899+                              o.parseOptions,
16900+                              ["--mutable", "--mutable-type=ldmf"])
16901+
16902     def test_put_with_nonexistent_alias(self):
16903         # when invoked with an alias that doesn't exist, 'tahoe put'
16904         # should output a useful error message, not a stack trace
16905hunk ./src/allmydata/test/test_cli.py 3147
16906 
16907         return d
16908 
16909+    def test_mkdir_mutable_type(self):
16910+        self.basedir = os.path.dirname(self.mktemp())
16911+        self.set_up_grid()
16912+        d = self.do_cli("create-alias", "tahoe")
16913+        d.addCallback(lambda ignored:
16914+            self.do_cli("mkdir", "--mutable-type=sdmf", "tahoe:foo"))
16915+        def _check((rc, out, err), st):
16916+            self.failUnlessReallyEqual(rc, 0)
16917+            self.failUnlessReallyEqual(err, "")
16918+            self.failUnlessIn(st, out)
16919+            return out
16920+        def _stash_dircap(cap):
16921+            self._dircap = cap
16922+            u = uri.from_string(cap)
16923+            fn_uri = u.get_filenode_cap()
16924+            self._filecap = fn_uri.to_string()
16925+        d.addCallback(_check, "URI:DIR2")
16926+        d.addCallback(_stash_dircap)
16927+        d.addCallback(lambda ignored:
16928+            self.do_cli("ls", "--json", "tahoe:foo"))
16929+        d.addCallback(_check, "URI:DIR2")
16930+        d.addCallback(lambda ignored:
16931+            self.do_cli("ls", "--json", self._filecap))
16932+        d.addCallback(_check, '"mutable-type": "sdmf"')
16933+        d.addCallback(lambda ignored:
16934+            self.do_cli("mkdir", "--mutable-type=mdmf", "tahoe:bar"))
16935+        d.addCallback(_check, "URI:DIR2-MDMF")
16936+        d.addCallback(_stash_dircap)
16937+        d.addCallback(lambda ignored:
16938+            self.do_cli("ls", "--json", "tahoe:bar"))
16939+        d.addCallback(_check, "URI:DIR2-MDMF")
16940+        d.addCallback(lambda ignored:
16941+            self.do_cli("ls", "--json", self._filecap))
16942+        d.addCallback(_check, '"mutable-type": "mdmf"')
16943+        return d
16944+
16945+    def test_mkdir_mutable_type_unlinked(self):
16946+        self.basedir = os.path.dirname(self.mktemp())
16947+        self.set_up_grid()
16948+        d = self.do_cli("mkdir", "--mutable-type=sdmf")
16949+        def _check((rc, out, err), st):
16950+            self.failUnlessReallyEqual(rc, 0)
16951+            self.failUnlessReallyEqual(err, "")
16952+            self.failUnlessIn(st, out)
16953+            return out
16954+        d.addCallback(_check, "URI:DIR2")
16955+        def _stash_dircap(cap):
16956+            self._dircap = cap
16957+            # Now we're going to feed the cap into uri.from_string...
16958+            u = uri.from_string(cap)
16959+            # ...grab the underlying filenode uri.
16960+            fn_uri = u.get_filenode_cap()
16961+            # ...and stash that.
16962+            self._filecap = fn_uri.to_string()
16963+        d.addCallback(_stash_dircap)
16964+        d.addCallback(lambda res: self.do_cli("ls", "--json",
16965+                                              self._filecap))
16966+        d.addCallback(_check, '"mutable-type": "sdmf"')
16967+        d.addCallback(lambda res: self.do_cli("mkdir", "--mutable-type=mdmf"))
16968+        d.addCallback(_check, "URI:DIR2-MDMF")
16969+        d.addCallback(_stash_dircap)
16970+        d.addCallback(lambda res: self.do_cli("ls", "--json",
16971+                                              self._filecap))
16972+        d.addCallback(_check, '"mutable-type": "mdmf"')
16973+        return d
16974+
16975+    def test_mkdir_bad_mutable_type(self):
16976+        o = cli.MakeDirectoryOptions()
16977+        self.failUnlessRaises(usage.UsageError,
16978+                              o.parseOptions,
16979+                              ["--mutable", "--mutable-type=ldmf"])
16980+
16981     def test_mkdir_unicode(self):
16982         self.basedir = os.path.dirname(self.mktemp())
16983         self.set_up_grid()
16984}
16985[docs: amend configuration, webapi documentation to talk about MDMF
16986Kevan Carstensen <kevan@isnotajoke.com>**20110802022056
16987 Ignore-this: 4cab9b7e4ab79cc1efdabe2d457f27a6
16988] {
16989hunk ./docs/configuration.rst 328
16990     (Mutable files use a different share placement algorithm that does not
16991     currently consider this parameter.)
16992 
16993+``mutable.format = sdmf or mdmf``
16994+
16995+    This value tells Tahoe-LAFS what the default mutable file format should
16996+    be. If ``mutable.format=sdmf``, then newly created mutable files will be
16997+    in the old SDMF format. This is desirable for clients that operate on
16998+    grids where some peers run older versions of Tahoe-LAFS, as these older
16999+    versions cannot read the new MDMF mutable file format. If
17000+    ``mutable.format`` is ``mdmf``, then newly created mutable files will use
17001+    the new MDMF format, which supports efficient in-place modification and
17002+    streaming downloads. You can overwrite this value using a special
17003+    streaming downloads. You can override this value using the
17004+    mutable-type query parameter in the webapi. If you do not specify a value here,
17005+
17006+    Note that this parameter only applies to mutable files. Mutable
17007+    directories, which are stored as mutable files, are not controlled by
17008+    this parameter and will always use SDMF. We may revisit this decision
17009+    in future versions of Tahoe-LAFS.
17010 
17011 Frontend Configuration
17012 ======================
17013hunk ./docs/frontends/webapi.rst 368
17014  To use the /uri/$FILECAP form, $FILECAP must be a write-cap for a mutable file.
17015 
17016  In the /uri/$DIRCAP/[SUBDIRS../]FILENAME form, if the target file is a
17017- writeable mutable file, that file's contents will be overwritten in-place. If
17018- it is a read-cap for a mutable file, an error will occur. If it is an
17019- immutable file, the old file will be discarded, and a new one will be put in
17020- its place.
17021+ writeable mutable file, that file's contents will be overwritten
17022+ in-place. If it is a read-cap for a mutable file, an error will occur.
17023+ If it is an immutable file, the old file will be discarded, and a new
17024+ one will be put in its place. If the target file is a writeable mutable
17025+ file, you may also specify an "offset" parameter -- a byte offset that
17026+ determines where in the mutable file the data from the HTTP request
17027+ body is placed. This operation is relatively efficient for MDMF mutable
17028+ files, and is relatively inefficient (but still supported) for SDMF
17029+ mutable files. If no offset parameter is specified, then the entire
17030+ file is replaced with the data from the HTTP request body. For an
17031+ immutable file, the "offset" parameter is not valid.
17032 
17033  When creating a new file, if "mutable=true" is in the query arguments, the
17034  operation will create a mutable file instead of an immutable one.
17035hunk ./docs/frontends/webapi.rst 399
17036 
17037  If "mutable=true" is in the query arguments, the operation will create a
17038  mutable file, and return its write-cap in the HTTP respose. The default is
17039- to create an immutable file, returning the read-cap as a response.
17040+ to create an immutable file, returning the read-cap as a response. If
17041+ you create a mutable file, you can also use the "mutable-type" query
17042+ parameter. If "mutable-type=sdmf", then the mutable file will be created
17043+ in the old SDMF mutable file format. This is desirable for files that
17044+ need to be read by old clients. If "mutable-type=mdmf", then the file
17045+ will be created in the new MDMF mutable file format. MDMF mutable files
17046+ can be downloaded and modified in place more efficiently,
17047+ but are not compatible with older versions of Tahoe-LAFS. If no
17048+ "mutable-type" argument is given, the file is created in whatever
17049+ format was configured in tahoe.cfg.
17050 
17051 
17052 Creating A New Directory
17053hunk ./docs/frontends/webapi.rst 1101
17054  If a "mutable=true" argument is provided, the operation will create a
17055  mutable file, and the response body will contain the write-cap instead of
17056  the upload results page. The default is to create an immutable file,
17057- returning the upload results page as a response.
17058+ returning the upload results page as a response. If you create a
17059+ mutable file, you may choose to specify the format of that mutable file
17060+ with the "mutable-type" parameter. If "mutable-type=mdmf", then the
17061+ file will be created as an MDMF mutable file. If "mutable-type=sdmf",
17062+ then the file will be created as an SDMF mutable file. If no value is
17063+ specified, the file will be created in whatever format is specified in
17064+ tahoe.cfg.
17065 
17066 
17067 ``POST /uri/$DIRCAP/[SUBDIRS../]?t=upload``
17068}
17069
17070Context:
17071
17072[remove nodeid from WriteBucketProxy classes and customers
17073warner@lothar.com**20110801224317
17074 Ignore-this: e55334bb0095de11711eeb3af827e8e8
17075 refs #1363
17076] 
17077[remove get_serverid() from ReadBucketProxy and customers, including Checker
17078warner@lothar.com**20110801224307
17079 Ignore-this: 837aba457bc853e4fd413ab1a94519cb
17080 and debug.py dump-share commands
17081 refs #1363
17082] 
17083[reject old-style (pre-Tahoe-LAFS-v1.3) configuration files
17084zooko@zooko.com**20110801232423
17085 Ignore-this: b58218fcc064cc75ad8f05ed0c38902b
17086 Check for the existence of any of them and if any are found raise exception which will abort the startup of the node.
17087 This is a backwards-incompatible change for anyone who is still using old-style configuration files.
17088 fixes #1385
17089] 
17090[whitespace-cleanup
17091zooko@zooko.com**20110725015546
17092 Ignore-this: 442970d0545183b97adc7bd66657876c
17093] 
17094[tests: use fileutil.write() instead of open() to ensure timely close even without CPython-style reference counting
17095zooko@zooko.com**20110331145427
17096 Ignore-this: 75aae4ab8e5fa0ad698f998aaa1888ce
17097 Some of these already had an explicit close() but I went ahead and replaced them with fileutil.write() as well for the sake of uniformity.
17098] 
17099[Address Kevan's comment in #776 about Options classes missed when adding 'self.command_name'. refs #776, #1359
17100david-sarah@jacaranda.org**20110801221317
17101 Ignore-this: 8881d42cf7e6a1d15468291b0cb8fab9
17102] 
17103[docs/frontends/webapi.rst: change some more instances of 'delete' or 'remove' to 'unlink', change some section titles, and use two blank lines between all sections. refs #776, #1104
17104david-sarah@jacaranda.org**20110801220919
17105 Ignore-this: 572327591137bb05c24c44812d4b163f
17106] 
17107[cleanup: implement rm as a synonym for unlink rather than vice-versa. refs #776
17108david-sarah@jacaranda.org**20110801220108
17109 Ignore-this: 598dcbed870f4f6bb9df62de9111b343
17110] 
17111[docs/webapi.rst: address Kevan's comments about use of 'delete' on ref #1104
17112david-sarah@jacaranda.org**20110801205356
17113 Ignore-this: 4fbf03864934753c951ddeff64392491
17114] 
17115[docs: some changes of 'delete' or 'rm' to 'unlink'. refs #1104
17116david-sarah@jacaranda.org**20110713002722
17117 Ignore-this: 304d2a330d5e6e77d5f1feed7814b21c
17118] 
17119[WUI: change the label of the button to unlink a file from 'del' to 'unlink'. Also change some internal names to 'unlink', and allow 't=unlink' as a synonym for 't=delete' in the web-API interface. Incidentally, improve a test to check for the rename button as well as the unlink button. fixes #1104
17120david-sarah@jacaranda.org**20110713001218
17121 Ignore-this: 3eef6b3f81b94a9c0020a38eb20aa069
17122] 
17123[src/allmydata/web/filenode.py: delete a stale comment that was made incorrect by changeset [3133].
17124david-sarah@jacaranda.org**20110801203009
17125 Ignore-this: b3912e95a874647027efdc97822dd10e
17126] 
17127[fix typo introduced during rebasing of 'remove get_serverid from
17128Brian Warner <warner@lothar.com>**20110801200341
17129 Ignore-this: 4235b0f585c0533892193941dbbd89a8
17130 DownloadStatus.add_dyhb_request and customers' patch, to fix test failure.
17131] 
17132[remove get_serverid from DownloadStatus.add_dyhb_request and customers
17133zooko@zooko.com**20110801185401
17134 Ignore-this: db188c18566d2d0ab39a80c9dc8f6be6
17135 This patch is a rebase of a patch originally written by Brian. I didn't change any of the intent of Brian's patch, just ported it to current trunk.
17136 refs #1363
17137] 
17138[remove get_serverid from DownloadStatus.add_block_request and customers
17139zooko@zooko.com**20110801185344
17140 Ignore-this: 8bfa8201d6147f69b0fbe31beea9c1e
17141 This is a rebase of a patch Brian originally wrote. I haven't changed the intent of that patch, just ported it to trunk.
17142 refs #1363
17143] 
17144[apply zooko's advice: storage_client get_known_servers() returns a frozenset, caller sorts
17145warner@lothar.com**20110801174452
17146 Ignore-this: 2aa13ea6cbed4e9084bd604bf8633692
17147 refs #1363
17148] 
17149[test_immutable.Test: rewrite to use NoNetworkGrid, now takes 2.7s not 97s
17150warner@lothar.com**20110801174444
17151 Ignore-this: 54f30b5d7461d2b3514e2a0172f3a98c
17152 remove now-unused ShareManglingMixin
17153 refs #1363
17154] 
17155[DownloadStatus.add_known_share wants to be used by Finder, web.status
17156warner@lothar.com**20110801174436
17157 Ignore-this: 1433bcd73099a579abe449f697f35f9
17158 refs #1363
17159] 
17160[replace IServer.name() with get_name(), and get_longname()
17161warner@lothar.com**20110801174428
17162 Ignore-this: e5a6f7f6687fd7732ddf41cfdd7c491b
17163 
17164 This patch was originally written by Brian, but was re-recorded by Zooko to use
17165 darcs replace instead of hunks for any file in which it would result in fewer
17166 total hunks.
17167 refs #1363
17168] 
17169[upload.py: apply David-Sarah's advice rename (un)contacted(2) trackers to first_pass/second_pass/next_pass
17170zooko@zooko.com**20110801174143
17171 Ignore-this: e36e1420bba0620a0107bd90032a5198
17172 This patch was written by Brian but was re-recorded by Zooko (with David-Sarah looking on) to use darcs replace instead of editing to rename the three variables to their new names.
17173 refs #1363
17174] 
17175[Coalesce multiple Share.loop() calls, make downloads faster. Closes #1268.
17176Brian Warner <warner@lothar.com>**20110801151834
17177 Ignore-this: 48530fce36c01c0ff708f61c2de7e67a
17178] 
17179[src/allmydata/_auto_deps.py: 'i686' is another way of spelling x86.
17180david-sarah@jacaranda.org**20110801034035
17181 Ignore-this: 6971e0621db2fba794d86395b4d51038
17182] 
17183[tahoe_rm.py: better error message when there is no path. refs #1292
17184david-sarah@jacaranda.org**20110122064212
17185 Ignore-this: ff3bb2c9f376250e5fd77eb009e09018
17186] 
17187[test_cli.py: Test for error message when 'tahoe rm' is invoked without a path. refs #1292
17188david-sarah@jacaranda.org**20110104105108
17189 Ignore-this: 29ec2f2e0251e446db96db002ad5dd7d
17190] 
17191[src/allmydata/__init__.py: suppress a spurious warning from 'bin/tahoe --version[-and-path]' about twisted-web and twisted-core packages.
17192david-sarah@jacaranda.org**20110801005209
17193 Ignore-this: 50e7cd53cca57b1870d9df0361c7c709
17194] 
17195[test_cli.py: use to_str on fields loaded using simplejson.loads in new tests. refs #1304
17196david-sarah@jacaranda.org**20110730032521
17197 Ignore-this: d1d6dfaefd1b4e733181bf127c79c00b
17198] 
17199[cli: make 'tahoe cp' overwrite mutable files in-place
17200Kevan Carstensen <kevan@isnotajoke.com>**20110729202039
17201 Ignore-this: b2ad21a19439722f05c49bfd35b01855
17202] 
17203[SFTP: write an error message to standard error for unrecognized shell commands. Change the existing message for shell sessions to be written to standard error, and refactor some duplicated code. Also change the lines of the error messages to end in CRLF, and take into account Kevan's review comments. fixes #1442, #1446
17204david-sarah@jacaranda.org**20110729233102
17205 Ignore-this: d2f2bb4664f25007d1602bf7333e2cdd
17206] 
17207[src/allmydata/scripts/cli.py: fix pyflakes warning.
17208david-sarah@jacaranda.org**20110728021402
17209 Ignore-this: 94050140ddb99865295973f49927c509
17210] 
17211[Fix the help synopses of CLI commands to include [options] in the right place. fixes #1359, fixes #636
17212david-sarah@jacaranda.org**20110724225440
17213 Ignore-this: 2a8e488a5f63dabfa9db9efd83768a5
17214] 
17215[encodingutil: argv and output encodings are always the same on all platforms. Lose the unnecessary generality of them being different. fixes #1120
17216david-sarah@jacaranda.org**20110629185356
17217 Ignore-this: 5ebacbe6903dfa83ffd3ff8436a97787
17218] 
17219[docs/man/tahoe.1: add man page. fixes #1420
17220david-sarah@jacaranda.org**20110724171728
17221 Ignore-this: fc7601ec7f25494288d6141d0ae0004c
17222] 
17223[Update the dependency on zope.interface to fix an incompatiblity between Nevow and zope.interface 3.6.4. fixes #1435
17224david-sarah@jacaranda.org**20110721234941
17225 Ignore-this: 2ff3fcfc030fca1a4d4c7f1fed0f2aa9
17226] 
17227[frontends/ftpd.py: remove the check for IWriteFile.close since we're now guaranteed to be using Twisted >= 10.1 which has it.
17228david-sarah@jacaranda.org**20110722000320
17229 Ignore-this: 55cd558b791526113db3f83c00ec328a
17230] 
17231[Update the dependency on Twisted to >= 10.1. This allows us to simplify some documentation: it's no longer necessary to install pywin32 on Windows, or apply a patch to Twisted in order to use the FTP frontend. fixes #1274, #1438. refs #1429
17232david-sarah@jacaranda.org**20110721233658
17233 Ignore-this: 81b41745477163c9b39c0b59db91cc62
17234] 
17235[misc/build_helpers/run_trial.py: undo change to block pywin32 (it didn't work because run_trial.py is no longer used). refs #1334
17236david-sarah@jacaranda.org**20110722035402
17237 Ignore-this: 5d03f544c4154f088e26c7107494bf39
17238] 
17239[misc/build_helpers/run_trial.py: ensure that pywin32 is not on the sys.path when running the test suite. Includes some temporary debugging printouts that will be removed. refs #1334
17240david-sarah@jacaranda.org**20110722024907
17241 Ignore-this: 5141a9f83a4085ed4ca21f0bbb20bb9c
17242] 
17243[docs/running.rst: use 'tahoe run ~/.tahoe' instead of 'tahoe run' (the default is the current directory, unlike 'tahoe start').
17244david-sarah@jacaranda.org**20110718005949
17245 Ignore-this: 81837fbce073e93d88a3e7ae3122458c
17246] 
17247[docs/running.rst: say to put the introducer.furl in tahoe.cfg.
17248david-sarah@jacaranda.org**20110717194315
17249 Ignore-this: 954cc4c08e413e8c62685d58ff3e11f3
17250] 
17251[README.txt: say that quickstart.rst is in the docs directory.
17252david-sarah@jacaranda.org**20110717192400
17253 Ignore-this: bc6d35a85c496b77dbef7570677ea42a
17254] 
17255[setup: remove the dependency on foolscap's "secure_connections" extra, add a dependency on pyOpenSSL
17256zooko@zooko.com**20110717114226
17257 Ignore-this: df222120d41447ce4102616921626c82
17258 fixes #1383
17259] 
17260[test_sftp.py cleanup: remove a redundant definition of failUnlessReallyEqual.
17261david-sarah@jacaranda.org**20110716181813
17262 Ignore-this: 50113380b368c573f07ac6fe2eb1e97f
17263] 
17264[docs: add missing link in NEWS.rst
17265zooko@zooko.com**20110712153307
17266 Ignore-this: be7b7eb81c03700b739daa1027d72b35
17267] 
17268[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
17269zooko@zooko.com**20110712153229
17270 Ignore-this: 723c4f9e2211027c79d711715d972c5
17271 Also remove a couple of vestigial references to figleaf, which is long gone.
17272 fixes #1409 (remove contrib/fuse)
17273] 
17274[add Protovis.js-based download-status timeline visualization
17275Brian Warner <warner@lothar.com>**20110629222606
17276 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
17277 
17278 provide status overlap info on the webapi t=json output, add decode/decrypt
17279 rate tooltips, add zoomin/zoomout buttons
17280] 
17281[add more download-status data, fix tests
17282Brian Warner <warner@lothar.com>**20110629222555
17283 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
17284] 
17285[prepare for viz: improve DownloadStatus events
17286Brian Warner <warner@lothar.com>**20110629222542
17287 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
17288 
17289 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
17290] 
17291[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
17292zooko@zooko.com**20110629185711
17293 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
17294] 
17295[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
17296david-sarah@jacaranda.org**20110130235809
17297 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
17298] 
17299[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
17300david-sarah@jacaranda.org**20110626054124
17301 Ignore-this: abb864427a1b91bd10d5132b4589fd90
17302] 
17303[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
17304david-sarah@jacaranda.org**20110623205528
17305 Ignore-this: c63e23146c39195de52fb17c7c49b2da
17306] 
17307[Rename test_package_initialization.py to (much shorter) test_import.py .
17308Brian Warner <warner@lothar.com>**20110611190234
17309 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
17310 
17311 The former name was making my 'ls' listings hard to read, by forcing them
17312 down to just two columns.
17313] 
17314[tests: fix tests to accommodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
17315zooko@zooko.com**20110611163741
17316 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
17317 Apparently neither of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), nor the one committer (me) actually ran the tests. This is presumably due to #20.
17318 fixes #1412
17319] 
17320[wui: right-align the size column in the WUI
17321zooko@zooko.com**20110611153758
17322 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
17323 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
17324 fixes #1412
17325] 
17326[docs: three minor fixes
17327zooko@zooko.com**20110610121656
17328 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
17329 CREDITS for arc for stats tweak
17330 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
17331 English usage tweak
17332] 
17333[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
17334david-sarah@jacaranda.org**20110609223719
17335 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
17336] 
17337[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
17338wilcoxjg@gmail.com**20110527120135
17339 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
17340 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
17341 NEWS.rst, stats.py: documentation of change to get_latencies
17342 stats.rst: now documents percentile modification in get_latencies
17343 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
17344 fixes #1392
17345] 
17346[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
17347david-sarah@jacaranda.org**20110517011214
17348 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
17349] 
17350[docs: convert NEWS to NEWS.rst and change all references to it.
17351david-sarah@jacaranda.org**20110517010255
17352 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
17353] 
17354[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
17355david-sarah@jacaranda.org**20110512140559
17356 Ignore-this: 784548fc5367fac5450df1c46890876d
17357] 
17358[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
17359david-sarah@jacaranda.org**20110130164923
17360 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
17361] 
17362[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
17363zooko@zooko.com**20110128142006
17364 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
17365 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
17366] 
17367[M-x whitespace-cleanup
17368zooko@zooko.com**20110510193653
17369 Ignore-this: dea02f831298c0f65ad096960e7df5c7
17370] 
17371[docs: fix typo in running.rst, thanks to arch_o_median
17372zooko@zooko.com**20110510193633
17373 Ignore-this: ca06de166a46abbc61140513918e79e8
17374] 
17375[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
17376david-sarah@jacaranda.org**20110204204902
17377 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
17378] 
17379[relnotes.txt: forseeable -> foreseeable. refs #1342
17380david-sarah@jacaranda.org**20110204204116
17381 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
17382] 
17383[replace remaining .html docs with .rst docs
17384zooko@zooko.com**20110510191650
17385 Ignore-this: d557d960a986d4ac8216d1677d236399
17386 Remove install.html (long since deprecated).
17387 Also replace some obsolete references to install.html with references to quickstart.rst.
17388 Fix some broken internal references within docs/historical/historical_known_issues.txt.
17389 Thanks to Ravi Pinjala and Patrick McDonald.
17390 refs #1227
17391] 
17392[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
17393zooko@zooko.com**20110428055232
17394 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
17395] 
17396[munin tahoe_files plugin: fix incorrect file count
17397francois@ctrlaltdel.ch**20110428055312
17398 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
17399 fixes #1391
17400] 
17401[corrected "k must never be smaller than N" to "k must never be greater than N"
17402secorp@allmydata.org**20110425010308
17403 Ignore-this: 233129505d6c70860087f22541805eac
17404] 
17405[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
17406david-sarah@jacaranda.org**20110411190738
17407 Ignore-this: 7847d26bc117c328c679f08a7baee519
17408] 
17409[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
17410david-sarah@jacaranda.org**20110410155844
17411 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
17412] 
17413[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
17414david-sarah@jacaranda.org**20110410155705
17415 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
17416] 
17417[remove unused variable detected by pyflakes
17418zooko@zooko.com**20110407172231
17419 Ignore-this: 7344652d5e0720af822070d91f03daf9
17420] 
17421[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
17422david-sarah@jacaranda.org**20110401202750
17423 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
17424] 
17425[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
17426Brian Warner <warner@lothar.com>**20110325232511
17427 Ignore-this: d5307faa6900f143193bfbe14e0f01a
17428] 
17429[control.py: remove all uses of s.get_serverid()
17430warner@lothar.com**20110227011203
17431 Ignore-this: f80a787953bd7fa3d40e828bde00e855
17432] 
17433[web: remove some uses of s.get_serverid(), not all
17434warner@lothar.com**20110227011159
17435 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
17436] 
17437[immutable/downloader/fetcher.py: remove all get_serverid() calls
17438warner@lothar.com**20110227011156
17439 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
17440] 
17441[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
17442warner@lothar.com**20110227011153
17443 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
17444 
17445 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
17446 _shares_from_server dict was being popped incorrectly (using shnum as the
17447 index instead of serverid). I'm still thinking through the consequences of
17448 this bug. It was probably benign and really hard to detect. I think it would
17449 cause us to incorrectly believe that we're pulling too many shares from a
17450 server, and thus prefer a different server rather than asking for a second
17451 share from the first server. The diversity code is intended to spread out the
17452 number of shares simultaneously being requested from each server, but with
17453 this bug, it might be spreading out the total number of shares requested at
17454 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
17455 segment, so the effect doesn't last very long).
17456] 
17457[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
17458warner@lothar.com**20110227011150
17459 Ignore-this: d8d56dd8e7b280792b40105e13664554
17460 
17461 test_download.py: create+check MyShare instances better, make sure they share
17462 Server objects, now that finder.py cares
17463] 
17464[immutable/downloader/finder.py: reduce use of get_serverid(), one left
17465warner@lothar.com**20110227011146
17466 Ignore-this: 5785be173b491ae8a78faf5142892020
17467] 
17468[immutable/offloaded.py: reduce use of get_serverid() a bit more
17469warner@lothar.com**20110227011142
17470 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
17471] 
17472[immutable/upload.py: reduce use of get_serverid()
17473warner@lothar.com**20110227011138
17474 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
17475] 
17476[immutable/checker.py: remove some uses of s.get_serverid(), not all
17477warner@lothar.com**20110227011134
17478 Ignore-this: e480a37efa9e94e8016d826c492f626e
17479] 
17480[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
17481warner@lothar.com**20110227011132
17482 Ignore-this: 6078279ddf42b179996a4b53bee8c421
17483 MockIServer stubs
17484] 
17485[upload.py: rearrange _make_trackers a bit, no behavior changes
17486warner@lothar.com**20110227011128
17487 Ignore-this: 296d4819e2af452b107177aef6ebb40f
17488] 
17489[happinessutil.py: finally rename merge_peers to merge_servers
17490warner@lothar.com**20110227011124
17491 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
17492] 
17493[test_upload.py: factor out FakeServerTracker
17494warner@lothar.com**20110227011120
17495 Ignore-this: 6c182cba90e908221099472cc159325b
17496] 
17497[test_upload.py: server-vs-tracker cleanup
17498warner@lothar.com**20110227011115
17499 Ignore-this: 2915133be1a3ba456e8603885437e03
17500] 
17501[happinessutil.py: server-vs-tracker cleanup
17502warner@lothar.com**20110227011111
17503 Ignore-this: b856c84033562d7d718cae7cb01085a9
17504] 
17505[upload.py: more tracker-vs-server cleanup
17506warner@lothar.com**20110227011107
17507 Ignore-this: bb75ed2afef55e47c085b35def2de315
17508] 
17509[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
17510warner@lothar.com**20110227011103
17511 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
17512] 
17513[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
17514warner@lothar.com**20110227011100
17515 Ignore-this: 7ea858755cbe5896ac212a925840fe68
17516 
17517 No behavioral changes, just updating variable/method names and log messages.
17518 The effects outside these three files should be minimal: some exception
17519 messages changed (to say "server" instead of "peer"), and some internal class
17520 names were changed. A few things still use "peer" to minimize external
17521 changes, like UploadResults.timings["peer_selection"] and
17522 happinessutil.merge_peers, which can be changed later.
17523] 
17524[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
17525warner@lothar.com**20110227011056
17526 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
17527] 
17528[test_client.py, upload.py: remove KiB/MiB/etc constants, and other dead code
17529warner@lothar.com**20110227011051
17530 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
17531] 
17532[test: increase timeout on a network test because Francois's ARM machine hit that timeout
17533zooko@zooko.com**20110317165909
17534 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
17535 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
17536] 
17537[docs/configuration.rst: add a "Frontend Configuration" section
17538Brian Warner <warner@lothar.com>**20110222014323
17539 Ignore-this: 657018aa501fe4f0efef9851628444ca
17540 
17541 this points to docs/frontends/*.rst, which were previously underlinked
17542] 
17543[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
17544"Brian Warner <warner@lothar.com>"**20110221061544
17545 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
17546] 
17547[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
17548david-sarah@jacaranda.org**20110221015817
17549 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
17550] 
17551[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
17552david-sarah@jacaranda.org**20110221020125
17553 Ignore-this: b0744ed58f161bf188e037bad077fc48
17554] 
17555[Refactor StorageFarmBroker handling of servers
17556Brian Warner <warner@lothar.com>**20110221015804
17557 Ignore-this: 842144ed92f5717699b8f580eab32a51
17558 
17559 Pass around IServer instance instead of (peerid, rref) tuple. Replace
17560 "descriptor" with "server". Other replacements:
17561 
17562  get_all_servers -> get_connected_servers/get_known_servers
17563  get_servers_for_index -> get_servers_for_psi (now returns IServers)
17564 
17565 This change still needs to be pushed further down: lots of code is now
17566 getting the IServer and then distributing (peerid, rref) internally.
17567 Instead, it ought to distribute the IServer internally and delay
17568 extracting a serverid or rref until the last moment.
17569 
17570 no_network.py was updated to retain parallelism.
17571] 
17572[TAG allmydata-tahoe-1.8.2
17573warner@lothar.com**20110131020101] 
17574Patch bundle hash:
1757542eb502d8ad7f778eec52e8ef931baf8d222ca63